Question: Prove $k+1>k$ where $k$ is an integer
Question ID: 2108169
Score: 9
Link: https://math.stackexchange.com/questions/2108169/prove-k1k-where-k-is-an-integer
<p>I'm being asked to prove that if $k∈ \Bbb Z$, then $k+1&gt;k$. Judging from our instructions, it appears (I am unsure) as though I cannot use induction to solve this. A hint says that the proof depends on $1∈\Bbb N$. I was thinking of approaching this by using: if $x, k∈\Bbb N$ then $x+k∈\Bbb N$. The problem is, we were not given $k∈\Bbb N$. Is this solvable without induction?</p> <p>We are given associativity, commutativity, and the identity elements for addition and multiplication, the additive inverse, the properties of $=$, and basic set theory and logic.</p> <p>Order for the integers is given by transitivity: let $m,n,p∈\Bbb Z$; if $m&lt;n$ and $n&lt;p$, then $m&lt;p$.</p> <p>Also assumed, and possibly relevant: if $x∈\Bbb Z$, then $x∈\Bbb N$ or $-x∈ \Bbb N$ or $x=0$.</p>
[ { "answer_id": 2108173, "score": 12, "text": "<p>As stated in the comments, the answer may depend to some extent on what set of axioms for the integers you are using; it also depends on the exact definition of $\"a&gt;b\"$. But under most axiomatizations, assuming that $\"a&gt;b\"$ is defined as $\"(a-b)\\in \\mathbb{N}\"$ (where $\\mathbb{N}$ is the set of positive integers), something like the following proof <strong>by contradiction</strong> would work:</p>\n\n<p>$[(k+1)\\ngtr k]~~~ \\equiv~~~~ [(k+1)-k \\notin \\mathbb{N}]~~~$ (definition of $a&gt;b$)</p>\n\n<p>$\\implies~~~ [(1+k)-k \\notin \\mathbb{N}]~~~$ (commutativity of addition)</p>\n\n<p>$\\implies~~~[1+(k-k) \\notin \\mathbb{N}]~~~$ (associativity of addition)</p>\n\n<p>$\\implies~~~[1\\notin \\mathbb{N}]$</p>\n\n<p>Note that $1\\in\\mathbb{N}$ is often not part of the axioms of the integers, and in general must itself be proved. The original question suggests it has \"already\" been proved, so here we can take it as a given. Otherwise, a rough outline of the proof that $1\\in\\mathbb{N}$ would be (under most axiomatizations of the integers, such as the <a href=\"https://www.math.ucdavis.edu/~osserman/classes/115A-F14/notes/A-axioms.pdf\" rel=\"nofollow noreferrer\">one here</a>, that I particularly like):</p>\n\n<ol>\n<li>Prove that for any integer $a$ we have $a\\cdot 0=0$. Proof <em>sketch</em>: $0=a\\cdot 0 - a\\cdot 0$ $= a\\cdot (0+0) - a\\cdot 0$ $=a\\cdot 0$.</li>\n<li>Prove that for any integer $a$ we have $-a=a\\cdot(-1)$. Proof <em>sketch</em>: using the above, $0 = a \\cdot (1+(-1)) = a+ a\\cdot(-1)$.</li>\n<li>Combine the above with the axiom of closure of $\\mathbb{N}$ under multiplication to obtain $-1\\notin\\mathbb{N}$.</li>\n<li>Exploit the trichotomy axiom (every integer $a$ must satisfy exactly one of the following three: either $a\\in\\mathbb{N}$, or its additive inverse $-a\\in\\mathbb{N}$, or $a=0$, i.e. $a$ is an additive identity) to deduce that $1\\neq 0$, since otherwise for any $a\\in\\mathbb{N}$ we'd have $a=a\\cdot 1= a\\cdot 0=0$.</li>\n<li>Exploit the trichotomy axiom again to conclude that since $1\\neq 0$ and $-1\\notin\\mathbb{N}$, then $1\\in\\mathbb{N}$.</li>\n</ol>\n" }, { "answer_id": 2108182, "score": 8, "text": "<p>If you assume that $1 &gt; 0$ is true, then the claim is trivial because, given $k \\in \\mathbb{Z}$,\n$$\n\\mbox{ the inequality } \\quad k + 1 &gt; k \\qquad \\mbox{ is equivalent to } \\qquad 1 &gt; 0\n$$\nIn general, if $x , y , z \\in \\mathbb{R}$,\n$$\n\\mbox{ the inequality } \\quad x &gt; y \\qquad \\mbox{ is equivalent to } \\qquad x + z &gt; y + z\n$$\nand you can replace \"$&gt;$\" by \"$\\geq$\", \"$&lt;$\", \"$\\leq$\" or \"$=$\".</p>\n" }, { "answer_id": 2108383, "score": 1, "text": "<p>You can find this in number theory texts. The Peano axioms for the natural numbers can be used to prove it.</p>\n\n<p>A set $N$ of objects is called a set of natural numbers if it satisfies Peano's axioms:</p>\n\n<ul>\n<li>Axiom 1: $1$ belongs to $N$.</li>\n<li>Axiom 2: to each element $n$ belonging to $N$, there corresponds a unique element $n'$ called the successor of $n$.</li>\n<li>Axiom 3: for each $n$ belonging to $N$, we have $n' \\ne 1$.</li>\n<li>Axiom 4: if $m,n$ belong to $N$, then $m' = n'$ implies that $m=n$ (equivalently, $m\\ne n$ implies that $m' \\ne n'$).</li>\n<li>Axiom 5: let $M$ be a set of elements of $N$, i.e. $M$ is a subset of $N$; then $M=N$ provided the following two conditions are satisfied:\n\n<ol>\n<li>$1$ belongs to $M$;</li>\n<li>if $k$ belongs to $M$ then $k'$ belongs to $M$, $k'$ being the successor of $k$.</li>\n</ol></li>\n</ul>\n" } ]
$a$ is an additive identity) to deduce that $1\\neq 0$ since otherwise for one $a\\in\\mathbb{N}$ we'd have $a=a\\cdot 1= a\\cdot 0=0$.</li>\n<li>Exploit the trichotomy axiom again to conclude that since $1\\neq 0$ and $-1\\notin\\mathbb{N}$, then $1\\in\\mathbb{N}$.</li>\n</ol>\n" }, { "answer_id": 2108182, "score": 8, "text": "<p>If you assume that $1 &gt; 0$ is true, so it's trivial because, given $k \\in \\mathbb{Z}$,\n$$\n\\mbox{ the inequation } \\quad k + 1 &gt; k \\qquad \\mbox{ is equivalent to } \\qquad 1 &gt; 0\n$$\nIn general, if $x , y , z \\in \\mathbb{R}$,\n$$\n\\mbox{ the inequation } \\quad x &gt; y \\qquad \\mbox{ is equivalent to } \\qquad x + z &gt; y + z\n$$\nand you can replace \"$&gt;$\" by \"$\\geq$\", \"$&lt;$\", \"$\\leq$\" or \"$=$\".</p>\n" }, { "answer_id": 2108383, "score": 1, "text": "<p>You should search for it in Theory of Numbers. There are axioms for proving it for Natural numbers. Peano's Axioms this can be used to prove it. I hope you solve it.</p>\n\n<p>A set of $N$ objects is called a set of natural numbers if it satifies Peano's Axioms:</p>\n\n<ul>\n<li>Axiom 1: $1$ belongs to $N$</li>\n<li>Axiom 2: to each element $n$ belonging to $N$, there corresponds a unique element $n'$ called sucessor of $n$.</li>\n<li>Axiom 3: For each n belonging to $N$, we have $n' \\ne 1$.</li>\n<li>Axiom 4: if $m,n$ belong to $N$ then $m' = n'$ implies that $m=n$, or $m\\ne n$ implies that $m' \\ne n'$.</li>\n<li>Axiom 5: et $M$ be a set of elements of $N$, i.e. $M$ is a subset of $N$ then $M=N$ provided the following two conditions are satisfied\n\n<ol>\n<li>$1$ belongs to $M$ </li>\n<li>If $k$ belongs to $M$ then $k'$ belongs to $M$, $k'$ being sucessor of $k$.</li>\n</ol></li>\n</ul>\n" }, { "answer_id": 2108173, "score": 12, "text": "<p>As stated in the comments, the answer may depend to some extent on what set of axioms for the integers you are using; it also depends on the exact definition of $\"a&gt;b\"$. 
But under most axiomatizations, assuming that $\"a&gt;b\"$ is defined as $\"(a-b)\\in \\mathbb{N}\"$ (where $\\mathbb{N}$ is the set of positive integers), something like the following proof <strong>by contradiction</strong> would work:</p>\n\n<p>$[(k+1)\\ngtr k]~~~ \\equiv~~~~ [(k+1)-k \\notin \\mathbb{N}]~~~$(definition of $a&gt;b$)</p>\n\n<p>$\\implies~~~ [(1+k)-k \\notin \\mathbb{N}]~~~$ (commutativity of addition)</p>\n\n<p>$\\implies~~~[1+(k-k) \\notin \\mathbb{N}]~~~$ (associativity of addition) </p>\n\n<p>$\\implies~~~[1\\notin \\mathbb{N}]$</p>\n\n<p>Note that $1\\in\\mathbb{N}$ is often not part of the axioms of the integers, and in general must itself be proved. The original question suggests it has \"already\" been proved, so here we can take it as a given. Otherwise, a rough outline of the proof that $1\\in\\mathbb{N}$ would be (under most axiomatizations of the integers, such as the <a href=\"https://www.math.ucdavis.edu/~osserman/classes/115A-F14/notes/A-axioms.pdf\" rel=\"nofollow noreferrer\">one here</a>, that I particularly like):</p>\n\n<ol>\n<li>Prove that for any integer $a$ we have $a\\cdot 0=0$. Proof <em>sketch</em>: $0=a\\cdot 0 - a\\cdot 0$ $= a\\cdot (0+0) - a\\cdot 0$ $=a\\cdot 0$.</li>\n<li>Prove that for any integer $a$ we have $-a=a\\cdot(-1)$. Proof <em>sketch</em>: using the above $0 = a \\cdot (1+(-1)) = a+ a\\cdot(-1)$.</li>\n<li>Combine the above with the axiom of closure of $\\mathbb{N}$ under multiplication to obtain $-1\\notin\\mathbb{N}$.</li>\n<li>Exploit the trichotomy axiom (every integer $a$ must satisfy exactly one of the following three: either $a\\in\\mathbb{N}$, or its additive inverse $-a\\in\\mathbb{N}$, or $a=0$ i.e. 
$a$ is an additive identity) to deduce that $1\\neq 0$ since otherwise for one $a\\in\\mathbb{N}$ we'd have $a=a\\cdot 1= a\\cdot 0=0$.</li>\n<li>Exploit the trichotomy axiom again to conclude that since $1\\neq 0$ and $-1\\notin\\mathbb{N}$, then $1\\in\\mathbb{N}$.</li>\n</ol>\n" }, { "answer_id": 2108182, "score": 8, "text": "<p>If you assume that $1 &gt; 0$ is true, so it's trivial because, given $k \\in \\mathbb{Z}$,\n$$\n\\mbox{ the inequation } \\quad k + 1 &gt; k \\qquad \\mbox{ is equivalent to } \\qquad 1 &gt; 0\n$$\nIn general, if $x , y , z \\in \\mathbb{R}$,\n$$\n\\mbox{ the inequation } \\quad x &gt; y \\qquad \\mbox{ is equivalent to } \\qquad x + z &gt; y + z\n$$\nand you can replace \"$&gt;$\" by \"$\\geq$\", \"$&lt;$\", \"$\\leq$\" or \"$=$\".</p>\n" }, { "answer_id": 2108383, "score": 1, "text": "<p>You should search for it in Theory of Numbers. There are axioms for proving it for Natural numbers. Peano's Axioms this can be used to prove it. I hope you solve it.</p>\n\n<p>A set of $N$ objects is called a set of natural numbers if it satifies Peano's Axioms:</p>\n\n<ul>\n<li>Axiom 1: $1$ belongs to $N$</li>\n<li>Axiom 2: to each element $n$ belonging to $N$, there corresponds a unique element $n'$ called sucessor of $n$.</li>\n<li>Axiom 3: For each n belonging to $N$, we have $n' \\ne 1$.</li>\n<li>Axiom 4: if $m,n$ belong to $N$ then $m' = n'$ implies that $m=n$, or $m\\ne n$ implies that $m' \\ne n'$.</li>\n<li>Axiom 5: et $M$ be a set of elements of $N$, i.e. $M$ is a subset of $N$ then $M=N$ provided the following two conditions are satisfied\n\n<ol>\n<li>$1$ belongs to $M$ </li>\n<li>If $k$ belongs to $M$ then $k'$ belongs to $M$, $k'$ being sucessor of $k$.</li>\n</ol></li>\n</ul>\n" }, { "answer_id": 2108173, "score": 12, "text": "<p>As stated in the comments, the answer may depend to some extent on what set of axioms for the integers you are using; it also depends on the exact definition of $\"a&gt;b\"$. 
But under most axiomatizations, assuming that $\"a&gt;b\"$ is defined as $\"(a-b)\\in \\mathbb{N}\"$ (where $\\mathbb{N}$ is the set of positive integers), something like the following proof <strong>by contradiction</strong> would work:</p>\n\n<p>$[(k+1)\\ngtr k]~~~ \\equiv~~~~ [(k+1)-k \\notin \\mathbb{N}]~~~$(definition of $a&gt;b$)</p>\n\n<p>$\\implies~~~ [(1+k)-k \\notin \\mathbb{N}]~~~$ (commutativity of addition)</p>\n\n<p>$\\implies~~~[1+(k-k) \\notin \\mathbb{N}]~~~$ (associativity of addition) </p>\n\n<p>$\\implies~~~[1\\notin \\mathbb{N}]$</p>\n\n<p>Note that $1\\in\\mathbb{N}$ is often not part of the axioms of the integers, and in general must itself be proved. The original question suggests it has \"already\" been proved, so here we can take it as a given. Otherwise, a rough outline of the proof that $1\\in\\mathbb{N}$ would be (under most axiomatizations of the integers, such as the <a href=\"https://www.math.ucdavis.edu/~osserman/classes/115A-F14/notes/A-axioms.pdf\" rel=\"nofollow noreferrer\">one here</a>, that I particularly like):</p>\n\n<ol>\n<li>Prove that for any integer $a$ we have $a\\cdot 0=0$. Proof <em>sketch</em>: $0=a\\cdot 0 - a\\cdot 0$ $= a\\cdot (0+0) - a\\cdot 0$ $=a\\cdot 0$.</li>\n<li>Prove that for any integer $a$ we have $-a=a\\cdot(-1)$. Proof <em>sketch</em>: using the above $0 = a \\cdot (1+(-1)) = a+ a\\cdot(-1)$.</li>\n<li>Combine the above with the axiom of closure of $\\mathbb{N}$ under multiplication to obtain $-1\\notin\\mathbb{N}$.</li>\n<li>Exploit the trichotomy axiom (every integer $a$ must satisfy exactly one of the following three: either $a\\in\\mathbb{N}$, or its additive inverse $-a\\in\\mathbb{N}$, or $a=0$ i.e. 
$a$ is an additive identity) to deduce that $1\\neq 0$ since otherwise for one $a\\in\\mathbb{N}$ we'd have $a=a\\cdot 1= a\\cdot 0=0$.</li>\n<li>Exploit the trichotomy axiom again to conclude that since $1\\neq 0$ and $-1\\notin\\mathbb{N}$, then $1\\in\\mathbb{N}$.</li>\n</ol>\n" }, { "answer_id": 2108182, "score": 8, "text": "<p>If you assume that $1 &gt; 0$ is true, so it's trivial because, given $k \\in \\mathbb{Z}$,\n$$\n\\mbox{ the inequation } \\quad k + 1 &gt; k \\qquad \\mbox{ is equivalent to } \\qquad 1 &gt; 0\n$$\nIn general, if $x , y , z \\in \\mathbb{R}$,\n$$\n\\mbox{ the inequation } \\quad x &gt; y \\qquad \\mbox{ is equivalent to } \\qquad x + z &gt; y + z\n$$\nand you can replace \"$&gt;$\" by \"$\\geq$\", \"$&lt;$\", \"$\\leq$\" or \"$=$\".</p>\n" }, { "answer_id": 2108383, "score": 1, "text": "<p>You should search for it in Theory of Numbers. There are axioms for proving it for Natural numbers. Peano's Axioms this can be used to prove it. I hope you solve it.</p>\n\n<p>A set of $N$ objects is called a set of natural numbers if it satifies Peano's Axioms:</p>\n\n<ul>\n<li>Axiom 1: $1$ belongs to $N$</li>\n<li>Axiom 2: to each element $n$ belonging to $N$, there corresponds a unique element $n'$ called sucessor of $n$.</li>\n<li>Axiom 3: For each n belonging to $N$, we have $n' \\ne 1$.</li>\n<li>Axiom 4: if $m,n$ belong to $N$ then $m' = n'$ implies that $m=n$, or $m\\ne n$ implies that $m' \\ne n'$.</li>\n<li>Axiom 5: et $M$ be a set of elements of $N$, i.e. $M$ is a subset of $N$ then $M=N$ provided the following two conditions are satisfied\n\n<ol>\n<li>$1$ belongs to $M$ </li>\n<li>If $k$ belongs to $M$ then $k'$ belongs to $M$, $k'$ being sucessor of $k$.</li>\n</ol></li>\n</ul>\n" }, { "answer_id": 2108173, "score": 12, "text": "<p>As stated in the comments, the answer may depend to some extent on what set of axioms for the integers you are using; it also depends on the exact definition of $\"a&gt;b\"$. 
But under most axiomatizations, assuming that $\"a&gt;b\"$ is defined as $\"(a-b)\\in \\mathbb{N}\"$ (where $\\mathbb{N}$ is the set of positive integers), something like the following proof <strong>by contradiction</strong> would work:</p>\n\n<p>$[(k+1)\\ngtr k]~~~ \\equiv~~~~ [(k+1)-k \\notin \\mathbb{N}]~~~$(definition of $a&gt;b$)</p>\n\n<p>$\\implies~~~ [(1+k)-k \\notin \\mathbb{N}]~~~$ (commutativity of addition)</p>\n\n<p>$\\implies~~~[1+(k-k) \\notin \\mathbb{N}]~~~$ (associativity of addition) </p>\n\n<p>$\\implies~~~[1\\notin \\mathbb{N}]$</p>\n\n<p>Note that $1\\in\\mathbb{N}$ is often not part of the axioms of the integers, and in general must itself be proved. The original question suggests it has \"already\" been proved, so here we can take it as a given. Otherwise, a rough outline of the proof that $1\\in\\mathbb{N}$ would be (under most axiomatizations of the integers, such as the <a href=\"https://www.math.ucdavis.edu/~osserman/classes/115A-F14/notes/A-axioms.pdf\" rel=\"nofollow noreferrer\">one here</a>, that I particularly like):</p>\n\n<ol>\n<li>Prove that for any integer $a$ we have $a\\cdot 0=0$. Proof <em>sketch</em>: $0=a\\cdot 0 - a\\cdot 0$ $= a\\cdot (0+0) - a\\cdot 0$ $=a\\cdot 0$.</li>\n<li>Prove that for any integer $a$ we have $-a=a\\cdot(-1)$. Proof <em>sketch</em>: using the above $0 = a \\cdot (1+(-1)) = a+ a\\cdot(-1)$.</li>\n<li>Combine the above with the axiom of closure of $\\mathbb{N}$ under multiplication to obtain $-1\\notin\\mathbb{N}$.</li>\n<li>Exploit the trichotomy axiom (every integer $a$ must satisfy exactly one of the following three: either $a\\in\\mathbb{N}$, or its additive inverse $-a\\in\\mathbb{N}$, or $a=0$ i.e. 
$a$ is an additive identity) to deduce that $1\\neq 0$ since otherwise for one $a\\in\\mathbb{N}$ we'd have $a=a\\cdot 1= a\\cdot 0=0$.</li>\n<li>Exploit the trichotomy axiom again to conclude that since $1\\neq 0$ and $-1\\notin\\mathbb{N}$, then $1\\in\\mathbb{N}$.</li>\n</ol>\n" }, { "answer_id": 2108182, "score": 8, "text": "<p>If you assume that $1 &gt; 0$ is true, so it's trivial because, given $k \\in \\mathbb{Z}$,\n$$\n\\mbox{ the inequation } \\quad k + 1 &gt; k \\qquad \\mbox{ is equivalent to } \\qquad 1 &gt; 0\n$$\nIn general, if $x , y , z \\in \\mathbb{R}$,\n$$\n\\mbox{ the inequation } \\quad x &gt; y \\qquad \\mbox{ is equivalent to } \\qquad x + z &gt; y + z\n$$\nand you can replace \"$&gt;$\" by \"$\\geq$\", \"$&lt;$\", \"$\\leq$\" or \"$=$\".</p>\n" }, { "answer_id": 2108383, "score": 1, "text": "<p>You should search for it in Theory of Numbers. There are axioms for proving it for Natural numbers. Peano's Axioms this can be used to prove it. I hope you solve it.</p>\n\n<p>A set of $N$ objects is called a set of natural numbers if it satifies Peano's Axioms:</p>\n\n<ul>\n<li>Axiom 1: $1$ belongs to $N$</li>\n<li>Axiom 2: to each element $n$ belonging to $N$, there corresponds a unique element $n'$ called sucessor of $n$.</li>\n<li>Axiom 3: For each n belonging to $N$, we have $n' \\ne 1$.</li>\n<li>Axiom 4: if $m,n$ belong to $N$ then $m' = n'$ implies that $m=n$, or $m\\ne n$ implies that $m' \\ne n'$.</li>\n<li>Axiom 5: et $M$ be a set of elements of $N$, i.e. $M$ is a subset of $N$ then $M=N$ provided the following two conditions are satisfied\n\n<ol>\n<li>$1$ belongs to $M$ </li>\n<li>If $k$ belongs to $M$ then $k'$ belongs to $M$, $k'$ being sucessor of $k$.</li>\n</ol></li>\n</ul>\n" }, { "answer_id": 2108173, "score": 12, "text": "<p>As stated in the comments, the answer may depend to some extent on what set of axioms for the integers you are using; it also depends on the exact definition of $\"a&gt;b\"$. 
But under most axiomatizations, assuming that $\"a&gt;b\"$ is defined as $\"(a-b)\\in \\mathbb{N}\"$ (where $\\mathbb{N}$ is the set of positive integers), something like the following proof <strong>by contradiction</strong> would work:</p>\n\n<p>$[(k+1)\\ngtr k]~~~ \\equiv~~~~ [(k+1)-k \\notin \\mathbb{N}]~~~$(definition of $a&gt;b$)</p>\n\n<p>$\\implies~~~ [(1+k)-k \\notin \\mathbb{N}]~~~$ (commutativity of addition)</p>\n\n<p>$\\implies~~~[1+(k-k) \\notin \\mathbb{N}]~~~$ (associativity of addition) </p>\n\n<p>$\\implies~~~[1\\notin \\mathbb{N}]$</p>\n\n<p>Note that $1\\in\\mathbb{N}$ is often not part of the axioms of the integers, and in general must itself be proved. The original question suggests it has \"already\" been proved, so here we can take it as a given. Otherwise, a rough outline of the proof that $1\\in\\mathbb{N}$ would be (under most axiomatizations of the integers, such as the <a href=\"https://www.math.ucdavis.edu/~osserman/classes/115A-F14/notes/A-axioms.pdf\" rel=\"nofollow noreferrer\">one here</a>, that I particularly like):</p>\n\n<ol>\n<li>Prove that for any integer $a$ we have $a\\cdot 0=0$. Proof <em>sketch</em>: $0=a\\cdot 0 - a\\cdot 0$ $= a\\cdot (0+0) - a\\cdot 0$ $=a\\cdot 0$.</li>\n<li>Prove that for any integer $a$ we have $-a=a\\cdot(-1)$. Proof <em>sketch</em>: using the above $0 = a \\cdot (1+(-1)) = a+ a\\cdot(-1)$.</li>\n<li>Combine the above with the axiom of closure of $\\mathbb{N}$ under multiplication to obtain $-1\\notin\\mathbb{N}$.</li>\n<li>Exploit the trichotomy axiom (every integer $a$ must satisfy exactly one of the following three: either $a\\in\\mathbb{N}$, or its additive inverse $-a\\in\\mathbb{N}$, or $a=0$ i.e. 
$a$ is an additive identity) to deduce that $1\\neq 0$ since otherwise for one $a\\in\\mathbb{N}$ we'd have $a=a\\cdot 1= a\\cdot 0=0$.</li>\n<li>Exploit the trichotomy axiom again to conclude that since $1\\neq 0$ and $-1\\notin\\mathbb{N}$, then $1\\in\\mathbb{N}$.</li>\n</ol>\n" }, { "answer_id": 2108182, "score": 8, "text": "<p>If you assume that $1 &gt; 0$ is true, so it's trivial because, given $k \\in \\mathbb{Z}$,\n$$\n\\mbox{ the inequation } \\quad k + 1 &gt; k \\qquad \\mbox{ is equivalent to } \\qquad 1 &gt; 0\n$$\nIn general, if $x , y , z \\in \\mathbb{R}$,\n$$\n\\mbox{ the inequation } \\quad x &gt; y \\qquad \\mbox{ is equivalent to } \\qquad x + z &gt; y + z\n$$\nand you can replace \"$&gt;$\" by \"$\\geq$\", \"$&lt;$\", \"$\\leq$\" or \"$=$\".</p>\n" }, { "answer_id": 2108383, "score": 1, "text": "<p>You should search for it in Theory of Numbers. There are axioms for proving it for Natural numbers. Peano's Axioms this can be used to prove it. I hope you solve it.</p>\n\n<p>A set of $N$ objects is called a set of natural numbers if it satifies Peano's Axioms:</p>\n\n<ul>\n<li>Axiom 1: $1$ belongs to $N$</li>\n<li>Axiom 2: to each element $n$ belonging to $N$, there corresponds a unique element $n'$ called sucessor of $n$.</li>\n<li>Axiom 3: For each n belonging to $N$, we have $n' \\ne 1$.</li>\n<li>Axiom 4: if $m,n$ belong to $N$ then $m' = n'$ implies that $m=n$, or $m\\ne n$ implies that $m' \\ne n'$.</li>\n<li>Axiom 5: et $M$ be a set of elements of $N$, i.e. $M$ is a subset of $N$ then $M=N$ provided the following two conditions are satisfied\n\n<ol>\n<li>$1$ belongs to $M$ </li>\n<li>If $k$ belongs to $M$ then $k'$ belongs to $M$, $k'$ being sucessor of $k$.</li>\n</ol></li>\n</ul>\n" }, { "answer_id": 2108173, "score": 12, "text": "<p>As stated in the comments, the answer may depend to some extent on what set of axioms for the integers you are using; it also depends on the exact definition of $\"a&gt;b\"$. 
But under most axiomatizations, assuming that $\"a&gt;b\"$ is defined as $\"(a-b)\\in \\mathbb{N}\"$ (where $\\mathbb{N}$ is the set of positive integers), something like the following proof <strong>by contradiction</strong> would work:</p>\n\n<p>$[(k+1)\\ngtr k]~~~ \\equiv~~~~ [(k+1)-k \\notin \\mathbb{N}]~~~$(definition of $a&gt;b$)</p>\n\n<p>$\\implies~~~ [(1+k)-k \\notin \\mathbb{N}]~~~$ (commutativity of addition)</p>\n\n<p>$\\implies~~~[1+(k-k) \\notin \\mathbb{N}]~~~$ (associativity of addition) </p>\n\n<p>$\\implies~~~[1\\notin \\mathbb{N}]$</p>\n\n<p>Note that $1\\in\\mathbb{N}$ is often not part of the axioms of the integers, and in general must itself be proved. The original question suggests it has \"already\" been proved, so here we can take it as a given. Otherwise, a rough outline of the proof that $1\\in\\mathbb{N}$ would be (under most axiomatizations of the integers, such as the <a href=\"https://www.math.ucdavis.edu/~osserman/classes/115A-F14/notes/A-axioms.pdf\" rel=\"nofollow noreferrer\">one here</a>, that I particularly like):</p>\n\n<ol>\n<li>Prove that for any integer $a$ we have $a\\cdot 0=0$. Proof <em>sketch</em>: $0=a\\cdot 0 - a\\cdot 0$ $= a\\cdot (0+0) - a\\cdot 0$ $=a\\cdot 0$.</li>\n<li>Prove that for any integer $a$ we have $-a=a\\cdot(-1)$. Proof <em>sketch</em>: using the above $0 = a \\cdot (1+(-1)) = a+ a\\cdot(-1)$.</li>\n<li>Combine the above with the axiom of closure of $\\mathbb{N}$ under multiplication to obtain $-1\\notin\\mathbb{N}$.</li>\n<li>Exploit the trichotomy axiom (every integer $a$ must satisfy exactly one of the following three: either $a\\in\\mathbb{N}$, or its additive inverse $-a\\in\\mathbb{N}$, or $a=0$ i.e. 
$a$ is an additive identity) to deduce that $1\\neq 0$ since otherwise for one $a\\in\\mathbb{N}$ we'd have $a=a\\cdot 1= a\\cdot 0=0$.</li>\n<li>Exploit the trichotomy axiom again to conclude that since $1\\neq 0$ and $-1\\notin\\mathbb{N}$, then $1\\in\\mathbb{N}$.</li>\n</ol>\n" }, { "answer_id": 2108182, "score": 8, "text": "<p>If you assume that $1 &gt; 0$ is true, so it's trivial because, given $k \\in \\mathbb{Z}$,\n$$\n\\mbox{ the inequation } \\quad k + 1 &gt; k \\qquad \\mbox{ is equivalent to } \\qquad 1 &gt; 0\n$$\nIn general, if $x , y , z \\in \\mathbb{R}$,\n$$\n\\mbox{ the inequation } \\quad x &gt; y \\qquad \\mbox{ is equivalent to } \\qquad x + z &gt; y + z\n$$\nand you can replace \"$&gt;$\" by \"$\\geq$\", \"$&lt;$\", \"$\\leq$\" or \"$=$\".</p>\n" }, { "answer_id": 2108383, "score": 1, "text": "<p>You should search for it in Theory of Numbers. There are axioms for proving it for Natural numbers. Peano's Axioms this can be used to prove it. I hope you solve it.</p>\n\n<p>A set of $N$ objects is called a set of natural numbers if it satifies Peano's Axioms:</p>\n\n<ul>\n<li>Axiom 1: $1$ belongs to $N$</li>\n<li>Axiom 2: to each element $n$ belonging to $N$, there corresponds a unique element $n'$ called sucessor of $n$.</li>\n<li>Axiom 3: For each n belonging to $N$, we have $n' \\ne 1$.</li>\n<li>Axiom 4: if $m,n$ belong to $N$ then $m' = n'$ implies that $m=n$, or $m\\ne n$ implies that $m' \\ne n'$.</li>\n<li>Axiom 5: et $M$ be a set of elements of $N$, i.e. $M$ is a subset of $N$ then $M=N$ provided the following two conditions are satisfied\n\n<ol>\n<li>$1$ belongs to $M$ </li>\n<li>If $k$ belongs to $M$ then $k'$ belongs to $M$, $k'$ being sucessor of $k$.</li>\n</ol></li>\n</ul>\n" }, { "answer_id": 2108173, "score": 12, "text": "<p>As stated in the comments, the answer may depend to some extent on what set of axioms for the integers you are using; it also depends on the exact definition of $\"a&gt;b\"$. 
But under most axiomatizations, assuming that $\"a&gt;b\"$ is defined as $\"(a-b)\\in \\mathbb{N}\"$ (where $\\mathbb{N}$ is the set of positive integers), something like the following proof <strong>by contradiction</strong> would work:</p>\n\n<p>$[(k+1)\\ngtr k]~~~ \\equiv~~~~ [(k+1)-k \\notin \\mathbb{N}]~~~$(definition of $a&gt;b$)</p>\n\n<p>$\\implies~~~ [(1+k)-k \\notin \\mathbb{N}]~~~$ (commutativity of addition)</p>\n\n<p>$\\implies~~~[1+(k-k) \\notin \\mathbb{N}]~~~$ (associativity of addition) </p>\n\n<p>$\\implies~~~[1\\notin \\mathbb{N}]$</p>\n\n<p>Note that $1\\in\\mathbb{N}$ is often not part of the axioms of the integers, and in general must itself be proved. The original question suggests it has \"already\" been proved, so here we can take it as a given. Otherwise, a rough outline of the proof that $1\\in\\mathbb{N}$ would be (under most axiomatizations of the integers, such as the <a href=\"https://www.math.ucdavis.edu/~osserman/classes/115A-F14/notes/A-axioms.pdf\" rel=\"nofollow noreferrer\">one here</a>, that I particularly like):</p>\n\n<ol>\n<li>Prove that for any integer $a$ we have $a\\cdot 0=0$. Proof <em>sketch</em>: $0=a\\cdot 0 - a\\cdot 0$ $= a\\cdot (0+0) - a\\cdot 0$ $=a\\cdot 0$.</li>\n<li>Prove that for any integer $a$ we have $-a=a\\cdot(-1)$. Proof <em>sketch</em>: using the above $0 = a \\cdot (1+(-1)) = a+ a\\cdot(-1)$.</li>\n<li>Combine the above with the axiom of closure of $\\mathbb{N}$ under multiplication to obtain $-1\\notin\\mathbb{N}$.</li>\n<li>Exploit the trichotomy axiom (every integer $a$ must satisfy exactly one of the following three: either $a\\in\\mathbb{N}$, or its additive inverse $-a\\in\\mathbb{N}$, or $a=0$ i.e. 
$a$ is an additive identity) to deduce that $1\\neq 0$ since otherwise for one $a\\in\\mathbb{N}$ we'd have $a=a\\cdot 1= a\\cdot 0=0$.</li>\n<li>Exploit the trichotomy axiom again to conclude that since $1\\neq 0$ and $-1\\notin\\mathbb{N}$, then $1\\in\\mathbb{N}$.</li>\n</ol>\n" }, { "answer_id": 2108182, "score": 8, "text": "<p>If you assume that $1 &gt; 0$ is true, so it's trivial because, given $k \\in \\mathbb{Z}$,\n$$\n\\mbox{ the inequation } \\quad k + 1 &gt; k \\qquad \\mbox{ is equivalent to } \\qquad 1 &gt; 0\n$$\nIn general, if $x , y , z \\in \\mathbb{R}$,\n$$\n\\mbox{ the inequation } \\quad x &gt; y \\qquad \\mbox{ is equivalent to } \\qquad x + z &gt; y + z\n$$\nand you can replace \"$&gt;$\" by \"$\\geq$\", \"$&lt;$\", \"$\\leq$\" or \"$=$\".</p>\n" }, { "answer_id": 2108383, "score": 1, "text": "<p>You should search for it in Theory of Numbers. There are axioms for proving it for Natural numbers. Peano's Axioms this can be used to prove it. I hope you solve it.</p>\n\n<p>A set of $N$ objects is called a set of natural numbers if it satifies Peano's Axioms:</p>\n\n<ul>\n<li>Axiom 1: $1$ belongs to $N$</li>\n<li>Axiom 2: to each element $n$ belonging to $N$, there corresponds a unique element $n'$ called sucessor of $n$.</li>\n<li>Axiom 3: For each n belonging to $N$, we have $n' \\ne 1$.</li>\n<li>Axiom 4: if $m,n$ belong to $N$ then $m' = n'$ implies that $m=n$, or $m\\ne n$ implies that $m' \\ne n'$.</li>\n<li>Axiom 5: et $M$ be a set of elements of $N$, i.e. $M$ is a subset of $N$ then $M=N$ provided the following two conditions are satisfied\n\n<ol>\n<li>$1$ belongs to $M$ </li>\n<li>If $k$ belongs to $M$ then $k'$ belongs to $M$, $k'$ being sucessor of $k$.</li>\n</ol></li>\n</ul>\n" }, { "answer_id": 2108173, "score": 12, "text": "<p>As stated in the comments, the answer may depend to some extent on what set of axioms for the integers you are using; it also depends on the exact definition of $\"a&gt;b\"$. 
But under most axiomatizations, assuming that $\"a&gt;b\"$ is defined as $\"(a-b)\\in \\mathbb{N}\"$ (where $\\mathbb{N}$ is the set of positive integers), something like the following proof <strong>by contradiction</strong> would work:</p>\n\n<p>$[(k+1)\\ngtr k]~~~ \\equiv~~~~ [(k+1)-k \\notin \\mathbb{N}]~~~$(definition of $a&gt;b$)</p>\n\n<p>$\\implies~~~ [(1+k)-k \\notin \\mathbb{N}]~~~$ (commutativity of addition)</p>\n\n<p>$\\implies~~~[1+(k-k) \\notin \\mathbb{N}]~~~$ (associativity of addition) </p>\n\n<p>$\\implies~~~[1\\notin \\mathbb{N}]$</p>\n\n<p>Note that $1\\in\\mathbb{N}$ is often not part of the axioms of the integers, and in general must itself be proved. The original question suggests it has \"already\" been proved, so here we can take it as a given. Otherwise, a rough outline of the proof that $1\\in\\mathbb{N}$ would be (under most axiomatizations of the integers, such as the <a href=\"https://www.math.ucdavis.edu/~osserman/classes/115A-F14/notes/A-axioms.pdf\" rel=\"nofollow noreferrer\">one here</a>, that I particularly like):</p>\n\n<ol>\n<li>Prove that for any integer $a$ we have $a\\cdot 0=0$. Proof <em>sketch</em>: $0=a\\cdot 0 - a\\cdot 0$ $= a\\cdot (0+0) - a\\cdot 0$ $=a\\cdot 0$.</li>\n<li>Prove that for any integer $a$ we have $-a=a\\cdot(-1)$. Proof <em>sketch</em>: using the above $0 = a \\cdot (1+(-1)) = a+ a\\cdot(-1)$.</li>\n<li>Combine the above with the axiom of closure of $\\mathbb{N}$ under multiplication to obtain $-1\\notin\\mathbb{N}$.</li>\n<li>Exploit the trichotomy axiom (every integer $a$ must satisfy exactly one of the following three: either $a\\in\\mathbb{N}$, or its additive inverse $-a\\in\\mathbb{N}$, or $a=0$ i.e. 
$a$ is an additive identity) to deduce that $1\\neq 0$ since otherwise for one $a\\in\\mathbb{N}$ we'd have $a=a\\cdot 1= a\\cdot 0=0$.</li>\n<li>Exploit the trichotomy axiom again to conclude that since $1\\neq 0$ and $-1\\notin\\mathbb{N}$, then $1\\in\\mathbb{N}$.</li>\n</ol>\n" }, { "answer_id": 2108182, "score": 8, "text": "<p>If you assume that $1 &gt; 0$ is true, so it's trivial because, given $k \\in \\mathbb{Z}$,\n$$\n\\mbox{ the inequation } \\quad k + 1 &gt; k \\qquad \\mbox{ is equivalent to } \\qquad 1 &gt; 0\n$$\nIn general, if $x , y , z \\in \\mathbb{R}$,\n$$\n\\mbox{ the inequation } \\quad x &gt; y \\qquad \\mbox{ is equivalent to } \\qquad x + z &gt; y + z\n$$\nand you can replace \"$&gt;$\" by \"$\\geq$\", \"$&lt;$\", \"$\\leq$\" or \"$=$\".</p>\n" }, { "answer_id": 2108383, "score": 1, "text": "<p>You should search for it in Theory of Numbers. There are axioms for proving it for Natural numbers. Peano's Axioms this can be used to prove it. I hope you solve it.</p>\n\n<p>A set of $N$ objects is called a set of natural numbers if it satifies Peano's Axioms:</p>\n\n<ul>\n<li>Axiom 1: $1$ belongs to $N$</li>\n<li>Axiom 2: to each element $n$ belonging to $N$, there corresponds a unique element $n'$ called sucessor of $n$.</li>\n<li>Axiom 3: For each n belonging to $N$, we have $n' \\ne 1$.</li>\n<li>Axiom 4: if $m,n$ belong to $N$ then $m' = n'$ implies that $m=n$, or $m\\ne n$ implies that $m' \\ne n'$.</li>\n<li>Axiom 5: et $M$ be a set of elements of $N$, i.e. $M$ is a subset of $N$ then $M=N$ provided the following two conditions are satisfied\n\n<ol>\n<li>$1$ belongs to $M$ </li>\n<li>If $k$ belongs to $M$ then $k'$ belongs to $M$, $k'$ being sucessor of $k$.</li>\n</ol></li>\n</ul>\n" } ]
Question: Prove $k+1&gt;k $ where $k$ is an integer Answer 1: <p>As stated in the comments, the answer may depend to some extent on what set of axioms for the integers you are using; it also depends on the exact definition of $"a&gt;b"$. But under most axiomatizations, assuming that $"a&gt;b"$ is defined as $"(a-b)\in \mathbb{N}"$ (where $\mathbb{N}$ is the set of positive integers), something like the following proof <strong>by contradiction</strong> would work:</p> <p>$[(k+1)\ngtr k]~~~ \equiv~~~~ [(k+1)-k \notin \mathbb{N}]~~~$(definition of $a&gt;b$)</p> <p>$\implies~~~ [(1+k)-k \notin \mathbb{N}]~~~$ (commutativity of addition)</p> <p>$\implies~~~[1+(k-k) \notin \mathbb{N}]~~~$ (associativity of addition) </p> <p>$\implies~~~[1\notin \mathbb{N}]$</p> <p>Note that $1\in\mathbb{N}$ is often not part of the axioms of the integers, and in general must itself be proved. The original question suggests it has "already" been proved, so here we can take it as a given. Otherwise, a rough outline of the proof that $1\in\mathbb{N}$ would be (under most axiomatizations of the integers, such as the <a href="https://www.math.ucdavis.edu/~osserman/classes/115A-F14/notes/A-axioms.pdf" rel="nofollow noreferrer">one here</a>, that I particularly like):</p> <ol> <li>Prove that for any integer $a$ we have $a\cdot 0=0$. Proof <em>sketch</em>: $0=a\cdot 0 - a\cdot 0$ $= a\cdot (0+0) - a\cdot 0$ $=a\cdot 0$.</li> <li>Prove that for any integer $a$ we have $-a=a\cdot(-1)$. Proof <em>sketch</em>: using the above $0 = a \cdot (1+(-1)) = a+ a\cdot(-1)$.</li> <li>Combine the above with the axiom of closure of $\mathbb{N}$ under multiplication to obtain $-1\notin\mathbb{N}$.</li> <li>Exploit the trichotomy axiom (every integer $a$ must satisfy exactly one of the following three: either $a\in\mathbb{N}$, or its additive inverse $-a\in\mathbb{N}$, or $a=0$ i.e. 
$a$ is an additive identity) to deduce that $1\neq 0$ since otherwise for one $a\in\mathbb{N}$ we'd have $a=a\cdot 1= a\cdot 0=0$.</li> <li>Exploit the trichotomy axiom again to conclude that since $1\neq 0$ and $-1\notin\mathbb{N}$, then $1\in\mathbb{N}$.</li> </ol> Answer 2: <p>If you assume that $1 &gt; 0$ is true, so it's trivial because, given $k \in \mathbb{Z}$, $$ \mbox{ the inequation } \quad k + 1 &gt; k \qquad \mbox{ is equivalent to } \qquad 1 &gt; 0 $$ In general, if $x , y , z \in \mathbb{R}$, $$ \mbox{ the inequation } \quad x &gt; y \qquad \mbox{ is equivalent to } \qquad x + z &gt; y + z $$ and you can replace "$&gt;$" by "$\geq$", "$&lt;$", "$\leq$" or "$=$".</p> Answer 3: <p>You should search for it in Theory of Numbers. There are axioms for proving it for Natural numbers. Peano's Axioms this can be used to prove it. I hope you solve it.</p> <p>A set of $N$ objects is called a set of natural numbers if it satifies Peano's Axioms:</p> <ul> <li>Axiom 1: $1$ belongs to $N$</li> <li>Axiom 2: to each element $n$ belonging to $N$, there corresponds a unique element $n'$ called sucessor of $n$.</li> <li>Axiom 3: For each n belonging to $N$, we have $n' \ne 1$.</li> <li>Axiom 4: if $m,n$ belong to $N$ then $m' = n'$ implies that $m=n$, or $m\ne n$ implies that $m' \ne n'$.</li> <li>Axiom 5: et $M$ be a set of elements of $N$, i.e. $M$ is a subset of $N$ then $M=N$ provided the following two conditions are satisfied <ol> <li>$1$ belongs to $M$ </li> <li>If $k$ belongs to $M$ then $k'$ belongs to $M$, $k'$ being sucessor of $k$.</li> </ol></li> </ul> Answer 4: <p>As stated in the comments, the answer may depend to some extent on what set of axioms for the integers you are using; it also depends on the exact definition of $"a&gt;b"$. 
But under most axiomatizations, assuming that $"a&gt;b"$ is defined as $"(a-b)\in \mathbb{N}"$ (where $\mathbb{N}$ is the set of positive integers), something like the following proof <strong>by contradiction</strong> would work:</p> <p>$[(k+1)\ngtr k]~~~ \equiv~~~~ [(k+1)-k \notin \mathbb{N}]~~~$(definition of $a&gt;b$)</p> <p>$\implies~~~ [(1+k)-k \notin \mathbb{N}]~~~$ (commutativity of addition)</p> <p>$\implies~~~[1+(k-k) \notin \mathbb{N}]~~~$ (associativity of addition) </p> <p>$\implies~~~[1\notin \mathbb{N}]$</p> <p>Note that $1\in\mathbb{N}$ is often not part of the axioms of the integers, and in general must itself be proved. The original question suggests it has "already" been proved, so here we can take it as a given. Otherwise, a rough outline of the proof that $1\in\mathbb{N}$ would be (under most axiomatizations of the integers, such as the <a href="https://www.math.ucdavis.edu/~osserman/classes/115A-F14/notes/A-axioms.pdf" rel="nofollow noreferrer">one here</a>, that I particularly like):</p> <ol> <li>Prove that for any integer $a$ we have $a\cdot 0=0$. Proof <em>sketch</em>: $0=a\cdot 0 - a\cdot 0$ $= a\cdot (0+0) - a\cdot 0$ $=a\cdot 0$.</li> <li>Prove that for any integer $a$ we have $-a=a\cdot(-1)$. Proof <em>sketch</em>: using the above $0 = a \cdot (1+(-1)) = a+ a\cdot(-1)$.</li> <li>Combine the above with the axiom of closure of $\mathbb{N}$ under multiplication to obtain $-1\notin\mathbb{N}$.</li> <li>Exploit the trichotomy axiom (every integer $a$ must satisfy exactly one of the following three: either $a\in\mathbb{N}$, or its additive inverse $-a\in\mathbb{N}$, or $a=0$ i.e. 
$a$ is an additive identity) to deduce that $1\neq 0$ since otherwise for one $a\in\mathbb{N}$ we'd have $a=a\cdot 1= a\cdot 0=0$.</li> <li>Exploit the trichotomy axiom again to conclude that since $1\neq 0$ and $-1\notin\mathbb{N}$, then $1\in\mathbb{N}$.</li> </ol> Answer 5: <p>If you assume that $1 &gt; 0$ is true, so it's trivial because, given $k \in \mathbb{Z}$, $$ \mbox{ the inequation } \quad k + 1 &gt; k \qquad \mbox{ is equivalent to } \qquad 1 &gt; 0 $$ In general, if $x , y , z \in \mathbb{R}$, $$ \mbox{ the inequation } \quad x &gt; y \qquad \mbox{ is equivalent to } \qquad x + z &gt; y + z $$ and you can replace "$&gt;$" by "$\geq$", "$&lt;$", "$\leq$" or "$=$".</p> Answer 6: <p>You should search for it in Theory of Numbers. There are axioms for proving it for Natural numbers. Peano's Axioms this can be used to prove it. I hope you solve it.</p> <p>A set of $N$ objects is called a set of natural numbers if it satifies Peano's Axioms:</p> <ul> <li>Axiom 1: $1$ belongs to $N$</li> <li>Axiom 2: to each element $n$ belonging to $N$, there corresponds a unique element $n'$ called sucessor of $n$.</li> <li>Axiom 3: For each n belonging to $N$, we have $n' \ne 1$.</li> <li>Axiom 4: if $m,n$ belong to $N$ then $m' = n'$ implies that $m=n$, or $m\ne n$ implies that $m' \ne n'$.</li> <li>Axiom 5: et $M$ be a set of elements of $N$, i.e. $M$ is a subset of $N$ then $M=N$ provided the following two conditions are satisfied <ol> <li>$1$ belongs to $M$ </li> <li>If $k$ belongs to $M$ then $k'$ belongs to $M$, $k'$ being sucessor of $k$.</li> </ol></li> </ul> Answer 7: <p>As stated in the comments, the answer may depend to some extent on what set of axioms for the integers you are using; it also depends on the exact definition of $"a&gt;b"$. 
But under most axiomatizations, assuming that $"a&gt;b"$ is defined as $"(a-b)\in \mathbb{N}"$ (where $\mathbb{N}$ is the set of positive integers), something like the following proof <strong>by contradiction</strong> would work:</p> <p>$[(k+1)\ngtr k]~~~ \equiv~~~~ [(k+1)-k \notin \mathbb{N}]~~~$(definition of $a&gt;b$)</p> <p>$\implies~~~ [(1+k)-k \notin \mathbb{N}]~~~$ (commutativity of addition)</p> <p>$\implies~~~[1+(k-k) \notin \mathbb{N}]~~~$ (associativity of addition) </p> <p>$\implies~~~[1\notin \mathbb{N}]$</p> <p>Note that $1\in\mathbb{N}$ is often not part of the axioms of the integers, and in general must itself be proved. The original question suggests it has "already" been proved, so here we can take it as a given. Otherwise, a rough outline of the proof that $1\in\mathbb{N}$ would be (under most axiomatizations of the integers, such as the <a href="https://www.math.ucdavis.edu/~osserman/classes/115A-F14/notes/A-axioms.pdf" rel="nofollow noreferrer">one here</a>, that I particularly like):</p> <ol> <li>Prove that for any integer $a$ we have $a\cdot 0=0$. Proof <em>sketch</em>: $0=a\cdot 0 - a\cdot 0$ $= a\cdot (0+0) - a\cdot 0$ $=a\cdot 0$.</li> <li>Prove that for any integer $a$ we have $-a=a\cdot(-1)$. Proof <em>sketch</em>: using the above $0 = a \cdot (1+(-1)) = a+ a\cdot(-1)$.</li> <li>Combine the above with the axiom of closure of $\mathbb{N}$ under multiplication to obtain $-1\notin\mathbb{N}$.</li> <li>Exploit the trichotomy axiom (every integer $a$ must satisfy exactly one of the following three: either $a\in\mathbb{N}$, or its additive inverse $-a\in\mathbb{N}$, or $a=0$ i.e. 
$a$ is an additive identity) to deduce that $1\neq 0$ since otherwise for one $a\in\mathbb{N}$ we'd have $a=a\cdot 1= a\cdot 0=0$.</li> <li>Exploit the trichotomy axiom again to conclude that since $1\neq 0$ and $-1\notin\mathbb{N}$, then $1\in\mathbb{N}$.</li> </ol> Answer 8: <p>If you assume that $1 &gt; 0$ is true, so it's trivial because, given $k \in \mathbb{Z}$, $$ \mbox{ the inequation } \quad k + 1 &gt; k \qquad \mbox{ is equivalent to } \qquad 1 &gt; 0 $$ In general, if $x , y , z \in \mathbb{R}$, $$ \mbox{ the inequation } \quad x &gt; y \qquad \mbox{ is equivalent to } \qquad x + z &gt; y + z $$ and you can replace "$&gt;$" by "$\geq$", "$&lt;$", "$\leq$" or "$=$".</p> Answer 9: <p>You should search for it in Theory of Numbers. There are axioms for proving it for Natural numbers. Peano's Axioms this can be used to prove it. I hope you solve it.</p> <p>A set of $N$ objects is called a set of natural numbers if it satifies Peano's Axioms:</p> <ul> <li>Axiom 1: $1$ belongs to $N$</li> <li>Axiom 2: to each element $n$ belonging to $N$, there corresponds a unique element $n'$ called sucessor of $n$.</li> <li>Axiom 3: For each n belonging to $N$, we have $n' \ne 1$.</li> <li>Axiom 4: if $m,n$ belong to $N$ then $m' = n'$ implies that $m=n$, or $m\ne n$ implies that $m' \ne n'$.</li> <li>Axiom 5: et $M$ be a set of elements of $N$, i.e. $M$ is a subset of $N$ then $M=N$ provided the following two conditions are satisfied <ol> <li>$1$ belongs to $M$ </li> <li>If $k$ belongs to $M$ then $k'$ belongs to $M$, $k'$ being sucessor of $k$.</li> </ol></li> </ul> Answer 10: <p>As stated in the comments, the answer may depend to some extent on what set of axioms for the integers you are using; it also depends on the exact definition of $"a&gt;b"$. 
But under most axiomatizations, assuming that $"a&gt;b"$ is defined as $"(a-b)\in \mathbb{N}"$ (where $\mathbb{N}$ is the set of positive integers), something like the following proof <strong>by contradiction</strong> would work:</p> <p>$[(k+1)\ngtr k]~~~ \equiv~~~~ [(k+1)-k \notin \mathbb{N}]~~~$(definition of $a&gt;b$)</p> <p>$\implies~~~ [(1+k)-k \notin \mathbb{N}]~~~$ (commutativity of addition)</p> <p>$\implies~~~[1+(k-k) \notin \mathbb{N}]~~~$ (associativity of addition) </p> <p>$\implies~~~[1\notin \mathbb{N}]$</p> <p>Note that $1\in\mathbb{N}$ is often not part of the axioms of the integers, and in general must itself be proved. The original question suggests it has "already" been proved, so here we can take it as a given. Otherwise, a rough outline of the proof that $1\in\mathbb{N}$ would be (under most axiomatizations of the integers, such as the <a href="https://www.math.ucdavis.edu/~osserman/classes/115A-F14/notes/A-axioms.pdf" rel="nofollow noreferrer">one here</a>, that I particularly like):</p> <ol> <li>Prove that for any integer $a$ we have $a\cdot 0=0$. Proof <em>sketch</em>: $0=a\cdot 0 - a\cdot 0$ $= a\cdot (0+0) - a\cdot 0$ $=a\cdot 0$.</li> <li>Prove that for any integer $a$ we have $-a=a\cdot(-1)$. Proof <em>sketch</em>: using the above $0 = a \cdot (1+(-1)) = a+ a\cdot(-1)$.</li> <li>Combine the above with the axiom of closure of $\mathbb{N}$ under multiplication to obtain $-1\notin\mathbb{N}$.</li> <li>Exploit the trichotomy axiom (every integer $a$ must satisfy exactly one of the following three: either $a\in\mathbb{N}$, or its additive inverse $-a\in\mathbb{N}$, or $a=0$ i.e. 
$a$ is an additive identity) to deduce that $1\neq 0$ since otherwise for one $a\in\mathbb{N}$ we'd have $a=a\cdot 1= a\cdot 0=0$.</li> <li>Exploit the trichotomy axiom again to conclude that since $1\neq 0$ and $-1\notin\mathbb{N}$, then $1\in\mathbb{N}$.</li> </ol> Answer 11: <p>If you assume that $1 &gt; 0$ is true, so it's trivial because, given $k \in \mathbb{Z}$, $$ \mbox{ the inequation } \quad k + 1 &gt; k \qquad \mbox{ is equivalent to } \qquad 1 &gt; 0 $$ In general, if $x , y , z \in \mathbb{R}$, $$ \mbox{ the inequation } \quad x &gt; y \qquad \mbox{ is equivalent to } \qquad x + z &gt; y + z $$ and you can replace "$&gt;$" by "$\geq$", "$&lt;$", "$\leq$" or "$=$".</p> Answer 12: <p>You should search for it in Theory of Numbers. There are axioms for proving it for Natural numbers. Peano's Axioms this can be used to prove it. I hope you solve it.</p> <p>A set of $N$ objects is called a set of natural numbers if it satifies Peano's Axioms:</p> <ul> <li>Axiom 1: $1$ belongs to $N$</li> <li>Axiom 2: to each element $n$ belonging to $N$, there corresponds a unique element $n'$ called sucessor of $n$.</li> <li>Axiom 3: For each n belonging to $N$, we have $n' \ne 1$.</li> <li>Axiom 4: if $m,n$ belong to $N$ then $m' = n'$ implies that $m=n$, or $m\ne n$ implies that $m' \ne n'$.</li> <li>Axiom 5: et $M$ be a set of elements of $N$, i.e. $M$ is a subset of $N$ then $M=N$ provided the following two conditions are satisfied <ol> <li>$1$ belongs to $M$ </li> <li>If $k$ belongs to $M$ then $k'$ belongs to $M$, $k'$ being sucessor of $k$.</li> </ol></li> </ul> Answer 13: <p>As stated in the comments, the answer may depend to some extent on what set of axioms for the integers you are using; it also depends on the exact definition of $"a&gt;b"$. 
But under most axiomatizations, assuming that $"a&gt;b"$ is defined as $"(a-b)\in \mathbb{N}"$ (where $\mathbb{N}$ is the set of positive integers), something like the following proof <strong>by contradiction</strong> would work:</p> <p>$[(k+1)\ngtr k]~~~ \equiv~~~~ [(k+1)-k \notin \mathbb{N}]~~~$(definition of $a&gt;b$)</p> <p>$\implies~~~ [(1+k)-k \notin \mathbb{N}]~~~$ (commutativity of addition)</p> <p>$\implies~~~[1+(k-k) \notin \mathbb{N}]~~~$ (associativity of addition) </p> <p>$\implies~~~[1\notin \mathbb{N}]$</p> <p>Note that $1\in\mathbb{N}$ is often not part of the axioms of the integers, and in general must itself be proved. The original question suggests it has "already" been proved, so here we can take it as a given. Otherwise, a rough outline of the proof that $1\in\mathbb{N}$ would be (under most axiomatizations of the integers, such as the <a href="https://www.math.ucdavis.edu/~osserman/classes/115A-F14/notes/A-axioms.pdf" rel="nofollow noreferrer">one here</a>, that I particularly like):</p> <ol> <li>Prove that for any integer $a$ we have $a\cdot 0=0$. Proof <em>sketch</em>: $0=a\cdot 0 - a\cdot 0$ $= a\cdot (0+0) - a\cdot 0$ $=a\cdot 0$.</li> <li>Prove that for any integer $a$ we have $-a=a\cdot(-1)$. Proof <em>sketch</em>: using the above $0 = a \cdot (1+(-1)) = a+ a\cdot(-1)$.</li> <li>Combine the above with the axiom of closure of $\mathbb{N}$ under multiplication to obtain $-1\notin\mathbb{N}$.</li> <li>Exploit the trichotomy axiom (every integer $a$ must satisfy exactly one of the following three: either $a\in\mathbb{N}$, or its additive inverse $-a\in\mathbb{N}$, or $a=0$ i.e. 
$a$ is an additive identity) to deduce that $1\neq 0$ since otherwise for one $a\in\mathbb{N}$ we'd have $a=a\cdot 1= a\cdot 0=0$.</li> <li>Exploit the trichotomy axiom again to conclude that since $1\neq 0$ and $-1\notin\mathbb{N}$, then $1\in\mathbb{N}$.</li> </ol> Answer 14: <p>If you assume that $1 &gt; 0$ is true, so it's trivial because, given $k \in \mathbb{Z}$, $$ \mbox{ the inequation } \quad k + 1 &gt; k \qquad \mbox{ is equivalent to } \qquad 1 &gt; 0 $$ In general, if $x , y , z \in \mathbb{R}$, $$ \mbox{ the inequation } \quad x &gt; y \qquad \mbox{ is equivalent to } \qquad x + z &gt; y + z $$ and you can replace "$&gt;$" by "$\geq$", "$&lt;$", "$\leq$" or "$=$".</p> Answer 15: <p>You should search for it in Theory of Numbers. There are axioms for proving it for Natural numbers. Peano's Axioms this can be used to prove it. I hope you solve it.</p> <p>A set of $N$ objects is called a set of natural numbers if it satifies Peano's Axioms:</p> <ul> <li>Axiom 1: $1$ belongs to $N$</li> <li>Axiom 2: to each element $n$ belonging to $N$, there corresponds a unique element $n'$ called sucessor of $n$.</li> <li>Axiom 3: For each n belonging to $N$, we have $n' \ne 1$.</li> <li>Axiom 4: if $m,n$ belong to $N$ then $m' = n'$ implies that $m=n$, or $m\ne n$ implies that $m' \ne n'$.</li> <li>Axiom 5: et $M$ be a set of elements of $N$, i.e. $M$ is a subset of $N$ then $M=N$ provided the following two conditions are satisfied <ol> <li>$1$ belongs to $M$ </li> <li>If $k$ belongs to $M$ then $k'$ belongs to $M$, $k'$ being sucessor of $k$.</li> </ol></li> </ul> Answer 16: <p>As stated in the comments, the answer may depend to some extent on what set of axioms for the integers you are using; it also depends on the exact definition of $"a&gt;b"$. 
But under most axiomatizations, assuming that $"a&gt;b"$ is defined as $"(a-b)\in \mathbb{N}"$ (where $\mathbb{N}$ is the set of positive integers), something like the following proof <strong>by contradiction</strong> would work:</p> <p>$[(k+1)\ngtr k]~~~ \equiv~~~~ [(k+1)-k \notin \mathbb{N}]~~~$(definition of $a&gt;b$)</p> <p>$\implies~~~ [(1+k)-k \notin \mathbb{N}]~~~$ (commutativity of addition)</p> <p>$\implies~~~[1+(k-k) \notin \mathbb{N}]~~~$ (associativity of addition) </p> <p>$\implies~~~[1\notin \mathbb{N}]$</p> <p>Note that $1\in\mathbb{N}$ is often not part of the axioms of the integers, and in general must itself be proved. The original question suggests it has "already" been proved, so here we can take it as a given. Otherwise, a rough outline of the proof that $1\in\mathbb{N}$ would be (under most axiomatizations of the integers, such as the <a href="https://www.math.ucdavis.edu/~osserman/classes/115A-F14/notes/A-axioms.pdf" rel="nofollow noreferrer">one here</a>, that I particularly like):</p> <ol> <li>Prove that for any integer $a$ we have $a\cdot 0=0$. Proof <em>sketch</em>: $0=a\cdot 0 - a\cdot 0$ $= a\cdot (0+0) - a\cdot 0$ $=a\cdot 0$.</li> <li>Prove that for any integer $a$ we have $-a=a\cdot(-1)$. Proof <em>sketch</em>: using the above $0 = a \cdot (1+(-1)) = a+ a\cdot(-1)$.</li> <li>Combine the above with the axiom of closure of $\mathbb{N}$ under multiplication to obtain $-1\notin\mathbb{N}$.</li> <li>Exploit the trichotomy axiom (every integer $a$ must satisfy exactly one of the following three: either $a\in\mathbb{N}$, or its additive inverse $-a\in\mathbb{N}$, or $a=0$ i.e. 
$a$ is an additive identity) to deduce that $1\neq 0$ since otherwise for one $a\in\mathbb{N}$ we'd have $a=a\cdot 1= a\cdot 0=0$.</li> <li>Exploit the trichotomy axiom again to conclude that since $1\neq 0$ and $-1\notin\mathbb{N}$, then $1\in\mathbb{N}$.</li> </ol> Answer 17: <p>If you assume that $1 &gt; 0$ is true, so it's trivial because, given $k \in \mathbb{Z}$, $$ \mbox{ the inequation } \quad k + 1 &gt; k \qquad \mbox{ is equivalent to } \qquad 1 &gt; 0 $$ In general, if $x , y , z \in \mathbb{R}$, $$ \mbox{ the inequation } \quad x &gt; y \qquad \mbox{ is equivalent to } \qquad x + z &gt; y + z $$ and you can replace "$&gt;$" by "$\geq$", "$&lt;$", "$\leq$" or "$=$".</p> Answer 18: <p>You should search for it in Theory of Numbers. There are axioms for proving it for Natural numbers. Peano's Axioms this can be used to prove it. I hope you solve it.</p> <p>A set of $N$ objects is called a set of natural numbers if it satifies Peano's Axioms:</p> <ul> <li>Axiom 1: $1$ belongs to $N$</li> <li>Axiom 2: to each element $n$ belonging to $N$, there corresponds a unique element $n'$ called sucessor of $n$.</li> <li>Axiom 3: For each n belonging to $N$, we have $n' \ne 1$.</li> <li>Axiom 4: if $m,n$ belong to $N$ then $m' = n'$ implies that $m=n$, or $m\ne n$ implies that $m' \ne n'$.</li> <li>Axiom 5: et $M$ be a set of elements of $N$, i.e. $M$ is a subset of $N$ then $M=N$ provided the following two conditions are satisfied <ol> <li>$1$ belongs to $M$ </li> <li>If $k$ belongs to $M$ then $k'$ belongs to $M$, $k'$ being sucessor of $k$.</li> </ol></li> </ul> Answer 19: <p>As stated in the comments, the answer may depend to some extent on what set of axioms for the integers you are using; it also depends on the exact definition of $"a&gt;b"$. 
But under most axiomatizations, assuming that $"a&gt;b"$ is defined as $"(a-b)\in \mathbb{N}"$ (where $\mathbb{N}$ is the set of positive integers), something like the following proof <strong>by contradiction</strong> would work:</p> <p>$[(k+1)\ngtr k]~~~ \equiv~~~~ [(k+1)-k \notin \mathbb{N}]~~~$(definition of $a&gt;b$)</p> <p>$\implies~~~ [(1+k)-k \notin \mathbb{N}]~~~$ (commutativity of addition)</p> <p>$\implies~~~[1+(k-k) \notin \mathbb{N}]~~~$ (associativity of addition) </p> <p>$\implies~~~[1\notin \mathbb{N}]$</p> <p>Note that $1\in\mathbb{N}$ is often not part of the axioms of the integers, and in general must itself be proved. The original question suggests it has "already" been proved, so here we can take it as a given. Otherwise, a rough outline of the proof that $1\in\mathbb{N}$ would be (under most axiomatizations of the integers, such as the <a href="https://www.math.ucdavis.edu/~osserman/classes/115A-F14/notes/A-axioms.pdf" rel="nofollow noreferrer">one here</a>, that I particularly like):</p> <ol> <li>Prove that for any integer $a$ we have $a\cdot 0=0$. Proof <em>sketch</em>: $0=a\cdot 0 - a\cdot 0$ $= a\cdot (0+0) - a\cdot 0$ $=a\cdot 0$.</li> <li>Prove that for any integer $a$ we have $-a=a\cdot(-1)$. Proof <em>sketch</em>: using the above $0 = a \cdot (1+(-1)) = a+ a\cdot(-1)$.</li> <li>Combine the above with the axiom of closure of $\mathbb{N}$ under multiplication to obtain $-1\notin\mathbb{N}$.</li> <li>Exploit the trichotomy axiom (every integer $a$ must satisfy exactly one of the following three: either $a\in\mathbb{N}$, or its additive inverse $-a\in\mathbb{N}$, or $a=0$ i.e. 
$a$ is an additive identity) to deduce that $1\neq 0$ since otherwise for one $a\in\mathbb{N}$ we'd have $a=a\cdot 1= a\cdot 0=0$.</li> <li>Exploit the trichotomy axiom again to conclude that since $1\neq 0$ and $-1\notin\mathbb{N}$, then $1\in\mathbb{N}$.</li> </ol> Answer 20: <p>If you assume that $1 &gt; 0$ is true, so it's trivial because, given $k \in \mathbb{Z}$, $$ \mbox{ the inequation } \quad k + 1 &gt; k \qquad \mbox{ is equivalent to } \qquad 1 &gt; 0 $$ In general, if $x , y , z \in \mathbb{R}$, $$ \mbox{ the inequation } \quad x &gt; y \qquad \mbox{ is equivalent to } \qquad x + z &gt; y + z $$ and you can replace "$&gt;$" by "$\geq$", "$&lt;$", "$\leq$" or "$=$".</p> Answer 21: <p>You should search for it in Theory of Numbers. There are axioms for proving it for Natural numbers. Peano's Axioms this can be used to prove it. I hope you solve it.</p> <p>A set of $N$ objects is called a set of natural numbers if it satifies Peano's Axioms:</p> <ul> <li>Axiom 1: $1$ belongs to $N$</li> <li>Axiom 2: to each element $n$ belonging to $N$, there corresponds a unique element $n'$ called sucessor of $n$.</li> <li>Axiom 3: For each n belonging to $N$, we have $n' \ne 1$.</li> <li>Axiom 4: if $m,n$ belong to $N$ then $m' = n'$ implies that $m=n$, or $m\ne n$ implies that $m' \ne n'$.</li> <li>Axiom 5: et $M$ be a set of elements of $N$, i.e. $M$ is a subset of $N$ then $M=N$ provided the following two conditions are satisfied <ol> <li>$1$ belongs to $M$ </li> <li>If $k$ belongs to $M$ then $k'$ belongs to $M$, $k'$ being sucessor of $k$.</li> </ol></li> </ul> Answer 22: <p>As stated in the comments, the answer may depend to some extent on what set of axioms for the integers you are using; it also depends on the exact definition of $"a&gt;b"$. 
But under most axiomatizations, assuming that $"a&gt;b"$ is defined as $"(a-b)\in \mathbb{N}"$ (where $\mathbb{N}$ is the set of positive integers), something like the following proof <strong>by contradiction</strong> would work:</p> <p>$[(k+1)\ngtr k]~~~ \equiv~~~~ [(k+1)-k \notin \mathbb{N}]~~~$(definition of $a&gt;b$)</p> <p>$\implies~~~ [(1+k)-k \notin \mathbb{N}]~~~$ (commutativity of addition)</p> <p>$\implies~~~[1+(k-k) \notin \mathbb{N}]~~~$ (associativity of addition) </p> <p>$\implies~~~[1\notin \mathbb{N}]$</p> <p>Note that $1\in\mathbb{N}$ is often not part of the axioms of the integers, and in general must itself be proved. The original question suggests it has "already" been proved, so here we can take it as a given. Otherwise, a rough outline of the proof that $1\in\mathbb{N}$ would be (under most axiomatizations of the integers, such as the <a href="https://www.math.ucdavis.edu/~osserman/classes/115A-F14/notes/A-axioms.pdf" rel="nofollow noreferrer">one here</a>, that I particularly like):</p> <ol> <li>Prove that for any integer $a$ we have $a\cdot 0=0$. Proof <em>sketch</em>: $0=a\cdot 0 - a\cdot 0$ $= a\cdot (0+0) - a\cdot 0$ $=a\cdot 0$.</li> <li>Prove that for any integer $a$ we have $-a=a\cdot(-1)$. Proof <em>sketch</em>: using the above $0 = a \cdot (1+(-1)) = a+ a\cdot(-1)$.</li> <li>Combine the above with the axiom of closure of $\mathbb{N}$ under multiplication to obtain $-1\notin\mathbb{N}$.</li> <li>Exploit the trichotomy axiom (every integer $a$ must satisfy exactly one of the following three: either $a\in\mathbb{N}$, or its additive inverse $-a\in\mathbb{N}$, or $a=0$ i.e. 
$a$ is an additive identity) to deduce that $1\neq 0$ since otherwise for one $a\in\mathbb{N}$ we'd have $a=a\cdot 1= a\cdot 0=0$.</li> <li>Exploit the trichotomy axiom again to conclude that since $1\neq 0$ and $-1\notin\mathbb{N}$, then $1\in\mathbb{N}$.</li> </ol> Answer 23: <p>If you assume that $1 &gt; 0$ is true, so it's trivial because, given $k \in \mathbb{Z}$, $$ \mbox{ the inequation } \quad k + 1 &gt; k \qquad \mbox{ is equivalent to } \qquad 1 &gt; 0 $$ In general, if $x , y , z \in \mathbb{R}$, $$ \mbox{ the inequation } \quad x &gt; y \qquad \mbox{ is equivalent to } \qquad x + z &gt; y + z $$ and you can replace "$&gt;$" by "$\geq$", "$&lt;$", "$\leq$" or "$=$".</p> Answer 24: <p>You should search for it in Theory of Numbers. There are axioms for proving it for Natural numbers. Peano's Axioms this can be used to prove it. I hope you solve it.</p> <p>A set of $N$ objects is called a set of natural numbers if it satifies Peano's Axioms:</p> <ul> <li>Axiom 1: $1$ belongs to $N$</li> <li>Axiom 2: to each element $n$ belonging to $N$, there corresponds a unique element $n'$ called sucessor of $n$.</li> <li>Axiom 3: For each n belonging to $N$, we have $n' \ne 1$.</li> <li>Axiom 4: if $m,n$ belong to $N$ then $m' = n'$ implies that $m=n$, or $m\ne n$ implies that $m' \ne n'$.</li> <li>Axiom 5: et $M$ be a set of elements of $N$, i.e. $M$ is a subset of $N$ then $M=N$ provided the following two conditions are satisfied <ol> <li>$1$ belongs to $M$ </li> <li>If $k$ belongs to $M$ then $k'$ belongs to $M$, $k'$ being sucessor of $k$.</li> </ol></li> </ul> Answer 25: <p>As stated in the comments, the answer may depend to some extent on what set of axioms for the integers you are using; it also depends on the exact definition of $"a&gt;b"$. 
But under most axiomatizations, assuming that $"a&gt;b"$ is defined as $"(a-b)\in \mathbb{N}"$ (where $\mathbb{N}$ is the set of positive integers), something like the following proof <strong>by contradiction</strong> would work:</p> <p>$[(k+1)\ngtr k]~~~ \equiv~~~~ [(k+1)-k \notin \mathbb{N}]~~~$(definition of $a&gt;b$)</p> <p>$\implies~~~ [(1+k)-k \notin \mathbb{N}]~~~$ (commutativity of addition)</p> <p>$\implies~~~[1+(k-k) \notin \mathbb{N}]~~~$ (associativity of addition) </p> <p>$\implies~~~[1\notin \mathbb{N}]$</p> <p>Note that $1\in\mathbb{N}$ is often not part of the axioms of the integers, and in general must itself be proved. The original question suggests it has "already" been proved, so here we can take it as a given. Otherwise, a rough outline of the proof that $1\in\mathbb{N}$ would be (under most axiomatizations of the integers, such as the <a href="https://www.math.ucdavis.edu/~osserman/classes/115A-F14/notes/A-axioms.pdf" rel="nofollow noreferrer">one here</a>, that I particularly like):</p> <ol> <li>Prove that for any integer $a$ we have $a\cdot 0=0$. Proof <em>sketch</em>: $0=a\cdot 0 - a\cdot 0$ $= a\cdot (0+0) - a\cdot 0$ $=a\cdot 0$.</li> <li>Prove that for any integer $a$ we have $-a=a\cdot(-1)$. Proof <em>sketch</em>: using the above $0 = a \cdot (1+(-1)) = a+ a\cdot(-1)$.</li> <li>Combine the above with the axiom of closure of $\mathbb{N}$ under multiplication to obtain $-1\notin\mathbb{N}$.</li> <li>Exploit the trichotomy axiom (every integer $a$ must satisfy exactly one of the following three: either $a\in\mathbb{N}$, or its additive inverse $-a\in\mathbb{N}$, or $a=0$ i.e. 
$a$ is an additive identity) to deduce that $1\neq 0$, since otherwise for any $a\in\mathbb{N}$ we'd have $a=a\cdot 1= a\cdot 0=0$.</li> <li>Exploit the trichotomy axiom again to conclude that since $1\neq 0$ and $-1\notin\mathbb{N}$, then $1\in\mathbb{N}$.</li> </ol> Answer 26: <p>If you assume that $1 &gt; 0$ is true, then the claim is immediate because, given $k \in \mathbb{Z}$, $$ \mbox{ the inequality } \quad k + 1 &gt; k \qquad \mbox{ is equivalent to } \qquad 1 &gt; 0 $$ In general, if $x , y , z \in \mathbb{R}$, $$ \mbox{ the inequality } \quad x &gt; y \qquad \mbox{ is equivalent to } \qquad x + z &gt; y + z $$ and you can replace "$&gt;$" by "$\geq$", "$&lt;$", "$\leq$" or "$=$".</p> Answer 27: <p>You should look for this in texts on the theory of numbers. There are axioms for proving it for the natural numbers: Peano's axioms can be used to prove it. I hope you solve it.</p> <p>A set $N$ of objects is called a set of natural numbers if it satisfies Peano's axioms:</p> <ul> <li>Axiom 1: $1$ belongs to $N$.</li> <li>Axiom 2: to each element $n$ belonging to $N$, there corresponds a unique element $n'$ called the successor of $n$.</li> <li>Axiom 3: for each $n$ belonging to $N$, we have $n' \ne 1$.</li> <li>Axiom 4: if $m,n$ belong to $N$, then $m' = n'$ implies that $m=n$; equivalently, $m\ne n$ implies that $m' \ne n'$.</li> <li>Axiom 5: let $M$ be a set of elements of $N$, i.e. $M$ is a subset of $N$; then $M=N$ provided the following two conditions are satisfied: <ol> <li>$1$ belongs to $M$; </li> <li>if $k$ belongs to $M$ then $k'$ belongs to $M$, $k'$ being the successor of $k$.</li> </ol></li> </ul>
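For readers using a proof assistant, the statement of this question can also be machine-checked. The following Lean 4 sketch is an illustration only: it assumes Mathlib is available and that `lt_add_one` is the relevant lemma, neither of which appears in the answers above.

```lean
import Mathlib  -- assumes a full Mathlib import is available

-- `k + 1 > k` for every integer `k`. `lt_add_one` is Mathlib's name for
-- `a < a + 1`; the ordered-ring instances on ℤ package the same kind of
-- order axioms (trichotomy, closure, 1 > 0) discussed in the answers.
example (k : ℤ) : k + 1 > k := lt_add_one k
```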
Prove that $EX=E(E(X|Y))$
2,108,697
9
https://math.stackexchange.com/questions/2108697/prove-that-ex-eexy
<blockquote> <p>Prove that $EX=E(E(X|Y))$</p> </blockquote> <p>I know that I should prove it from the definition of the conditional distribution and the conditional expected value, but I don't know how.</p> <p>I have also looked at the theorem of total expectation, which should be connected with the proof.</p> <p>If someone could tell me which properties of expectation I should use to prove it, I would be really glad.</p>
[ { "answer_id": 2110224, "score": 24, "text": "<p>\\begin{eqnarray*} \\mathbb{E}(\\mathbb{E}(X|Y)) &amp; = &amp; \\int_{-\\infty}^{\\infty} \\mathbb{E}(X|Y = y) f_Y(y) dy \\\\ &amp; = &amp; \\int_{-\\infty}^{\\infty} \\int_{-\\infty}^{\\infty}xf_{X|Y}(x|y) dx \\ f_Y(y) dy \\\\ &amp; = &amp; \\int_{-\\infty}^{\\infty} \\int_{-\\infty}^{\\infty}xf_{X|Y}(x|y) f_Y(y) dx dy \\\\ &amp; = &amp; \\int_{-\\infty}^{\\infty} \\int_{-\\infty}^{\\infty}xf_{X,Y}(x,y) dx dy \\\\ &amp; = &amp; \\int_{-\\infty}^{\\infty} \\int_{-\\infty}^{\\infty}xf_{X,Y}(x,y) dy dx \\\\ &amp; = &amp; \\int_{-\\infty}^{\\infty} x\\int_{-\\infty}^{\\infty}f_{X,Y}(x,y) dy dx \\\\ &amp; = &amp; \\int_{-\\infty}^{\\infty} xf_{X}(x) dx \\\\ &amp; = &amp; \\mathbb{E}(X) \\end{eqnarray*} </p>\n" }, { "answer_id": 2110189, "score": 7, "text": "<p>By definition $E(X\\mid Y)$ is random variable which satisfies:\n$$\nE(X \\cdot 1_G) = E\\bigl(E(X\\mid Y)\\cdot 1_G\\bigr), \\quad \\text{for all} \\ G \\in \\sigma(Y).\n$$\nSimply take $G=\\Omega$ such that $1_G=1$ and you get the desired result.</p>\n" } ]
Question: Prove that $EX=E(E(X|Y))$ Answer 1: <p>\begin{eqnarray*} \mathbb{E}(\mathbb{E}(X|Y)) &amp; = &amp; \int_{-\infty}^{\infty} \mathbb{E}(X|Y = y) f_Y(y) dy \\ &amp; = &amp; \int_{-\infty}^{\infty} \int_{-\infty}^{\infty}xf_{X|Y}(x|y) dx \ f_Y(y) dy \\ &amp; = &amp; \int_{-\infty}^{\infty} \int_{-\infty}^{\infty}xf_{X|Y}(x|y) f_Y(y) dx dy \\ &amp; = &amp; \int_{-\infty}^{\infty} \int_{-\infty}^{\infty}xf_{X,Y}(x,y) dx dy \\ &amp; = &amp; \int_{-\infty}^{\infty} \int_{-\infty}^{\infty}xf_{X,Y}(x,y) dy dx \\ &amp; = &amp; \int_{-\infty}^{\infty} x\int_{-\infty}^{\infty}f_{X,Y}(x,y) dy dx \\ &amp; = &amp; \int_{-\infty}^{\infty} xf_{X}(x) dx \\ &amp; = &amp; \mathbb{E}(X) \end{eqnarray*} </p> Answer 2: <p>By definition $E(X\mid Y)$ is random variable which satisfies: $$ E(X \cdot 1_G) = E\bigl(E(X\mid Y)\cdot 1_G\bigr), \quad \text{for all} \ G \in \sigma(Y). $$ Simply take $G=\Omega$ such that $1_G=1$ and you get the desired result.</p> Answer 3: <p>\begin{eqnarray*} \mathbb{E}(\mathbb{E}(X|Y)) &amp; = &amp; \int_{-\infty}^{\infty} \mathbb{E}(X|Y = y) f_Y(y) dy \\ &amp; = &amp; \int_{-\infty}^{\infty} \int_{-\infty}^{\infty}xf_{X|Y}(x|y) dx \ f_Y(y) dy \\ &amp; = &amp; \int_{-\infty}^{\infty} \int_{-\infty}^{\infty}xf_{X|Y}(x|y) f_Y(y) dx dy \\ &amp; = &amp; \int_{-\infty}^{\infty} \int_{-\infty}^{\infty}xf_{X,Y}(x,y) dx dy \\ &amp; = &amp; \int_{-\infty}^{\infty} \int_{-\infty}^{\infty}xf_{X,Y}(x,y) dy dx \\ &amp; = &amp; \int_{-\infty}^{\infty} x\int_{-\infty}^{\infty}f_{X,Y}(x,y) dy dx \\ &amp; = &amp; \int_{-\infty}^{\infty} xf_{X}(x) dx \\ &amp; = &amp; \mathbb{E}(X) \end{eqnarray*} </p> Answer 4: <p>By definition $E(X\mid Y)$ is random variable which satisfies: $$ E(X \cdot 1_G) = E\bigl(E(X\mid Y)\cdot 1_G\bigr), \quad \text{for all} \ G \in \sigma(Y). 
$$ Simply take $G=\Omega$ such that $1_G=1$ and you get the desired result.</p> Answer 5: <p>\begin{eqnarray*} \mathbb{E}(\mathbb{E}(X|Y)) &amp; = &amp; \int_{-\infty}^{\infty} \mathbb{E}(X|Y = y) f_Y(y) dy \\ &amp; = &amp; \int_{-\infty}^{\infty} \int_{-\infty}^{\infty}xf_{X|Y}(x|y) dx \ f_Y(y) dy \\ &amp; = &amp; \int_{-\infty}^{\infty} \int_{-\infty}^{\infty}xf_{X|Y}(x|y) f_Y(y) dx dy \\ &amp; = &amp; \int_{-\infty}^{\infty} \int_{-\infty}^{\infty}xf_{X,Y}(x,y) dx dy \\ &amp; = &amp; \int_{-\infty}^{\infty} \int_{-\infty}^{\infty}xf_{X,Y}(x,y) dy dx \\ &amp; = &amp; \int_{-\infty}^{\infty} x\int_{-\infty}^{\infty}f_{X,Y}(x,y) dy dx \\ &amp; = &amp; \int_{-\infty}^{\infty} xf_{X}(x) dx \\ &amp; = &amp; \mathbb{E}(X) \end{eqnarray*} </p> Answer 6: <p>By definition $E(X\mid Y)$ is random variable which satisfies: $$ E(X \cdot 1_G) = E\bigl(E(X\mid Y)\cdot 1_G\bigr), \quad \text{for all} \ G \in \sigma(Y). $$ Simply take $G=\Omega$ such that $1_G=1$ and you get the desired result.</p> Answer 7: <p>\begin{eqnarray*} \mathbb{E}(\mathbb{E}(X|Y)) &amp; = &amp; \int_{-\infty}^{\infty} \mathbb{E}(X|Y = y) f_Y(y) dy \\ &amp; = &amp; \int_{-\infty}^{\infty} \int_{-\infty}^{\infty}xf_{X|Y}(x|y) dx \ f_Y(y) dy \\ &amp; = &amp; \int_{-\infty}^{\infty} \int_{-\infty}^{\infty}xf_{X|Y}(x|y) f_Y(y) dx dy \\ &amp; = &amp; \int_{-\infty}^{\infty} \int_{-\infty}^{\infty}xf_{X,Y}(x,y) dx dy \\ &amp; = &amp; \int_{-\infty}^{\infty} \int_{-\infty}^{\infty}xf_{X,Y}(x,y) dy dx \\ &amp; = &amp; \int_{-\infty}^{\infty} x\int_{-\infty}^{\infty}f_{X,Y}(x,y) dy dx \\ &amp; = &amp; \int_{-\infty}^{\infty} xf_{X}(x) dx \\ &amp; = &amp; \mathbb{E}(X) \end{eqnarray*} </p> Answer 8: <p>By definition $E(X\mid Y)$ is random variable which satisfies: $$ E(X \cdot 1_G) = E\bigl(E(X\mid Y)\cdot 1_G\bigr), \quad \text{for all} \ G \in \sigma(Y). 
$$ Simply take $G=\Omega$ such that $1_G=1$ and you get the desired result.</p> Answer 9: <p>\begin{eqnarray*} \mathbb{E}(\mathbb{E}(X|Y)) &amp; = &amp; \int_{-\infty}^{\infty} \mathbb{E}(X|Y = y) f_Y(y) dy \\ &amp; = &amp; \int_{-\infty}^{\infty} \int_{-\infty}^{\infty}xf_{X|Y}(x|y) dx \ f_Y(y) dy \\ &amp; = &amp; \int_{-\infty}^{\infty} \int_{-\infty}^{\infty}xf_{X|Y}(x|y) f_Y(y) dx dy \\ &amp; = &amp; \int_{-\infty}^{\infty} \int_{-\infty}^{\infty}xf_{X,Y}(x,y) dx dy \\ &amp; = &amp; \int_{-\infty}^{\infty} \int_{-\infty}^{\infty}xf_{X,Y}(x,y) dy dx \\ &amp; = &amp; \int_{-\infty}^{\infty} x\int_{-\infty}^{\infty}f_{X,Y}(x,y) dy dx \\ &amp; = &amp; \int_{-\infty}^{\infty} xf_{X}(x) dx \\ &amp; = &amp; \mathbb{E}(X) \end{eqnarray*} </p> Answer 10: <p>By definition $E(X\mid Y)$ is random variable which satisfies: $$ E(X \cdot 1_G) = E\bigl(E(X\mid Y)\cdot 1_G\bigr), \quad \text{for all} \ G \in \sigma(Y). $$ Simply take $G=\Omega$ such that $1_G=1$ and you get the desired result.</p> Answer 11: <p>\begin{eqnarray*} \mathbb{E}(\mathbb{E}(X|Y)) &amp; = &amp; \int_{-\infty}^{\infty} \mathbb{E}(X|Y = y) f_Y(y) dy \\ &amp; = &amp; \int_{-\infty}^{\infty} \int_{-\infty}^{\infty}xf_{X|Y}(x|y) dx \ f_Y(y) dy \\ &amp; = &amp; \int_{-\infty}^{\infty} \int_{-\infty}^{\infty}xf_{X|Y}(x|y) f_Y(y) dx dy \\ &amp; = &amp; \int_{-\infty}^{\infty} \int_{-\infty}^{\infty}xf_{X,Y}(x,y) dx dy \\ &amp; = &amp; \int_{-\infty}^{\infty} \int_{-\infty}^{\infty}xf_{X,Y}(x,y) dy dx \\ &amp; = &amp; \int_{-\infty}^{\infty} x\int_{-\infty}^{\infty}f_{X,Y}(x,y) dy dx \\ &amp; = &amp; \int_{-\infty}^{\infty} xf_{X}(x) dx \\ &amp; = &amp; \mathbb{E}(X) \end{eqnarray*} </p> Answer 12: <p>By definition $E(X\mid Y)$ is random variable which satisfies: $$ E(X \cdot 1_G) = E\bigl(E(X\mid Y)\cdot 1_G\bigr), \quad \text{for all} \ G \in \sigma(Y). 
$$ Simply take $G=\Omega$ such that $1_G=1$ and you get the desired result.</p> Answer 13: <p>\begin{eqnarray*} \mathbb{E}(\mathbb{E}(X|Y)) &amp; = &amp; \int_{-\infty}^{\infty} \mathbb{E}(X|Y = y) f_Y(y) dy \\ &amp; = &amp; \int_{-\infty}^{\infty} \int_{-\infty}^{\infty}xf_{X|Y}(x|y) dx \ f_Y(y) dy \\ &amp; = &amp; \int_{-\infty}^{\infty} \int_{-\infty}^{\infty}xf_{X|Y}(x|y) f_Y(y) dx dy \\ &amp; = &amp; \int_{-\infty}^{\infty} \int_{-\infty}^{\infty}xf_{X,Y}(x,y) dx dy \\ &amp; = &amp; \int_{-\infty}^{\infty} \int_{-\infty}^{\infty}xf_{X,Y}(x,y) dy dx \\ &amp; = &amp; \int_{-\infty}^{\infty} x\int_{-\infty}^{\infty}f_{X,Y}(x,y) dy dx \\ &amp; = &amp; \int_{-\infty}^{\infty} xf_{X}(x) dx \\ &amp; = &amp; \mathbb{E}(X) \end{eqnarray*} </p> Answer 14: <p>By definition $E(X\mid Y)$ is random variable which satisfies: $$ E(X \cdot 1_G) = E\bigl(E(X\mid Y)\cdot 1_G\bigr), \quad \text{for all} \ G \in \sigma(Y). $$ Simply take $G=\Omega$ such that $1_G=1$ and you get the desired result.</p> Answer 15: <p>\begin{eqnarray*} \mathbb{E}(\mathbb{E}(X|Y)) &amp; = &amp; \int_{-\infty}^{\infty} \mathbb{E}(X|Y = y) f_Y(y) dy \\ &amp; = &amp; \int_{-\infty}^{\infty} \int_{-\infty}^{\infty}xf_{X|Y}(x|y) dx \ f_Y(y) dy \\ &amp; = &amp; \int_{-\infty}^{\infty} \int_{-\infty}^{\infty}xf_{X|Y}(x|y) f_Y(y) dx dy \\ &amp; = &amp; \int_{-\infty}^{\infty} \int_{-\infty}^{\infty}xf_{X,Y}(x,y) dx dy \\ &amp; = &amp; \int_{-\infty}^{\infty} \int_{-\infty}^{\infty}xf_{X,Y}(x,y) dy dx \\ &amp; = &amp; \int_{-\infty}^{\infty} x\int_{-\infty}^{\infty}f_{X,Y}(x,y) dy dx \\ &amp; = &amp; \int_{-\infty}^{\infty} xf_{X}(x) dx \\ &amp; = &amp; \mathbb{E}(X) \end{eqnarray*} </p> Answer 16: <p>By definition $E(X\mid Y)$ is random variable which satisfies: $$ E(X \cdot 1_G) = E\bigl(E(X\mid Y)\cdot 1_G\bigr), \quad \text{for all} \ G \in \sigma(Y). 
$$ Simply take $G=\Omega$ such that $1_G=1$ and you get the desired result.</p> Answer 17: <p>\begin{eqnarray*} \mathbb{E}(\mathbb{E}(X|Y)) &amp; = &amp; \int_{-\infty}^{\infty} \mathbb{E}(X|Y = y) f_Y(y) dy \\ &amp; = &amp; \int_{-\infty}^{\infty} \int_{-\infty}^{\infty}xf_{X|Y}(x|y) dx \ f_Y(y) dy \\ &amp; = &amp; \int_{-\infty}^{\infty} \int_{-\infty}^{\infty}xf_{X|Y}(x|y) f_Y(y) dx dy \\ &amp; = &amp; \int_{-\infty}^{\infty} \int_{-\infty}^{\infty}xf_{X,Y}(x,y) dx dy \\ &amp; = &amp; \int_{-\infty}^{\infty} \int_{-\infty}^{\infty}xf_{X,Y}(x,y) dy dx \\ &amp; = &amp; \int_{-\infty}^{\infty} x\int_{-\infty}^{\infty}f_{X,Y}(x,y) dy dx \\ &amp; = &amp; \int_{-\infty}^{\infty} xf_{X}(x) dx \\ &amp; = &amp; \mathbb{E}(X) \end{eqnarray*} </p> Answer 18: <p>By definition $E(X\mid Y)$ is random variable which satisfies: $$ E(X \cdot 1_G) = E\bigl(E(X\mid Y)\cdot 1_G\bigr), \quad \text{for all} \ G \in \sigma(Y). $$ Simply take $G=\Omega$ such that $1_G=1$ and you get the desired result.</p> Answer 19: <p>\begin{eqnarray*} \mathbb{E}(\mathbb{E}(X|Y)) &amp; = &amp; \int_{-\infty}^{\infty} \mathbb{E}(X|Y = y) f_Y(y) dy \\ &amp; = &amp; \int_{-\infty}^{\infty} \int_{-\infty}^{\infty}xf_{X|Y}(x|y) dx \ f_Y(y) dy \\ &amp; = &amp; \int_{-\infty}^{\infty} \int_{-\infty}^{\infty}xf_{X|Y}(x|y) f_Y(y) dx dy \\ &amp; = &amp; \int_{-\infty}^{\infty} \int_{-\infty}^{\infty}xf_{X,Y}(x,y) dx dy \\ &amp; = &amp; \int_{-\infty}^{\infty} \int_{-\infty}^{\infty}xf_{X,Y}(x,y) dy dx \\ &amp; = &amp; \int_{-\infty}^{\infty} x\int_{-\infty}^{\infty}f_{X,Y}(x,y) dy dx \\ &amp; = &amp; \int_{-\infty}^{\infty} xf_{X}(x) dx \\ &amp; = &amp; \mathbb{E}(X) \end{eqnarray*} </p> Answer 20: <p>By definition $E(X\mid Y)$ is random variable which satisfies: $$ E(X \cdot 1_G) = E\bigl(E(X\mid Y)\cdot 1_G\bigr), \quad \text{for all} \ G \in \sigma(Y). 
$$ Simply take $G=\Omega$ such that $1_G=1$ and you get the desired result.</p> Answer 21: <p>\begin{eqnarray*} \mathbb{E}(\mathbb{E}(X|Y)) &amp; = &amp; \int_{-\infty}^{\infty} \mathbb{E}(X|Y = y) f_Y(y) dy \\ &amp; = &amp; \int_{-\infty}^{\infty} \int_{-\infty}^{\infty}xf_{X|Y}(x|y) dx \ f_Y(y) dy \\ &amp; = &amp; \int_{-\infty}^{\infty} \int_{-\infty}^{\infty}xf_{X|Y}(x|y) f_Y(y) dx dy \\ &amp; = &amp; \int_{-\infty}^{\infty} \int_{-\infty}^{\infty}xf_{X,Y}(x,y) dx dy \\ &amp; = &amp; \int_{-\infty}^{\infty} \int_{-\infty}^{\infty}xf_{X,Y}(x,y) dy dx \\ &amp; = &amp; \int_{-\infty}^{\infty} x\int_{-\infty}^{\infty}f_{X,Y}(x,y) dy dx \\ &amp; = &amp; \int_{-\infty}^{\infty} xf_{X}(x) dx \\ &amp; = &amp; \mathbb{E}(X) \end{eqnarray*} </p> Answer 22: <p>By definition $E(X\mid Y)$ is random variable which satisfies: $$ E(X \cdot 1_G) = E\bigl(E(X\mid Y)\cdot 1_G\bigr), \quad \text{for all} \ G \in \sigma(Y). $$ Simply take $G=\Omega$ such that $1_G=1$ and you get the desired result.</p> Answer 23: <p>\begin{eqnarray*} \mathbb{E}(\mathbb{E}(X|Y)) &amp; = &amp; \int_{-\infty}^{\infty} \mathbb{E}(X|Y = y) f_Y(y) dy \\ &amp; = &amp; \int_{-\infty}^{\infty} \int_{-\infty}^{\infty}xf_{X|Y}(x|y) dx \ f_Y(y) dy \\ &amp; = &amp; \int_{-\infty}^{\infty} \int_{-\infty}^{\infty}xf_{X|Y}(x|y) f_Y(y) dx dy \\ &amp; = &amp; \int_{-\infty}^{\infty} \int_{-\infty}^{\infty}xf_{X,Y}(x,y) dx dy \\ &amp; = &amp; \int_{-\infty}^{\infty} \int_{-\infty}^{\infty}xf_{X,Y}(x,y) dy dx \\ &amp; = &amp; \int_{-\infty}^{\infty} x\int_{-\infty}^{\infty}f_{X,Y}(x,y) dy dx \\ &amp; = &amp; \int_{-\infty}^{\infty} xf_{X}(x) dx \\ &amp; = &amp; \mathbb{E}(X) \end{eqnarray*} </p> Answer 24: <p>By definition $E(X\mid Y)$ is random variable which satisfies: $$ E(X \cdot 1_G) = E\bigl(E(X\mid Y)\cdot 1_G\bigr), \quad \text{for all} \ G \in \sigma(Y). 
$$ Simply take $G=\Omega$ such that $1_G=1$ and you get the desired result.</p>
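The tower property $E(E(X\mid Y))=E(X)$ proved in the answers above can be sanity-checked on a toy sample; a minimal Python sketch (the discrete distribution of $X,Y$ is invented purely for illustration):

```python
import random
from collections import defaultdict

# Made-up toy distribution: Y uniform on {0, 1, 2}, X = Y + symmetric noise.
random.seed(0)
n = 100_000
pairs = [(y + random.choice([-1, 0, 1]), y)
         for y in (random.randrange(3) for _ in range(n))]

# E(X) estimated directly.
e_x = sum(x for x, _ in pairs) / n

# E(E(X|Y)): each conditional mean E(X | Y=y), weighted by the empirical P(Y=y).
groups = defaultdict(list)
for x, y in pairs:
    groups[y].append(x)
e_cond = sum((len(xs) / n) * (sum(xs) / len(xs)) for xs in groups.values())

# On a fixed sample the two are literally the same sum regrouped (this is the
# discrete analogue of swapping the order of integration in the first answer),
# so they agree to floating-point rounding.
print(abs(e_x - e_cond) < 1e-9)
```

The agreement here is exact rather than statistical: grouping by $Y$ and reweighting just reorganizes the overall average, which is the discrete counterpart of the Fubini step in the integral proof.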
Show that three complex numbers $z_1, z_2, z_3$ are collinear iff $\operatorname{Im}(\overline{z_1}z_2+\overline{z_2}z_3+\overline{z_3}z_1) = 0$
2,109,196
9
https://math.stackexchange.com/questions/2109196/show-that-three-complex-numbers-z-1-z-2-z-3-are-collinear-iff-operatorname
<p>I need to show that $\operatorname{Im}(\overline{z_1}z_2+\overline{z_2}z_3+\overline{z_3}z_1) = 0 \iff z_1,z_2,$ and $z_3$ are collinear.</p> <p>I know that $\operatorname{Im}(\overline{z_1}z_2+\overline{z_2}z_3+\overline{z_3}z_1) = 0$ implies that $\overline{z_1}z_2+\overline{z_2}z_3+\overline{z_3}z_1 \in \mathbb{R}$, but I am not sure how to argue in either direction. Please help. Thank you</p>
[ { "answer_id": 2109247, "score": 4, "text": "<p>We know that $z_1,z_2,z_3$ are collinear iff there exists some $t\\in\\mathbb{R}$ such that $$t(z_1-z_3)=z_1-z_2.$$\nThe problem becomes trivial if $z_1=z_3$ so we can suppose this is not the case, and then it is legal to write \n$$\nt=\\frac{z_1-z_2}{z_1-z_3}.\n$$\nMultiplying both sides of this equation by $|z_1-z_3|^2=(\\overline{z_1-z_3})(z_1-z_3)$ we get\n\\begin{align}\nt|z_1-z_3|^2=(\\overline{z_1-z_3})(z_1-z_2)=&amp;|z_1|^2-\\bar z_1z_2 -\\bar z_3z_1+\\bar z_3z_2\\\\\n=&amp;|z_1|^2-\\bar z_1z_2 -\\bar z_3z_1\\underbrace{-\\bar z_2z_3+\\bar z_2z_3}\n_{=0}+\\bar z_3z_2.\\\\\n\\end{align}\nExpressing this as\n$$\nt|z_1-z_3|^2-|z_1|^2-(\\overline{\\bar z_3z_2}+\\bar z_3z_2)=-(\\bar z_1z_2 +\\bar z_2z_3+\\bar z_3z_1)\n$$\nand noticing that the LHS is a real number leads to the desired conclusion.</p>\n" }, { "answer_id": 2109265, "score": 4, "text": "<p>The basic concept for this answer is the area of a triangle. The area of a triangle which is made on vectors $z$ and $w$ is $A=\\dfrac12{\\bf Im}(\\bar{z}w)$. 
For three points in the complex plane, form the triangle with sides $z_1-z_2$, $z_2-z_3$ and $z_3-z_1$; the points are collinear exactly when\n\\begin{eqnarray}\n\\dfrac12{\\bf Im}\\,\\overline{(z_1-z_2)}(z_2-z_3)&amp;=&amp;0\\\\\n{\\bf Im}\\left(\\overline{z_1}z_2+\\overline{z_2}z_3-\\overline{z_2}z_2-\\overline{z_1}z_3\\right)&amp;=&amp;0\\\\\n{\\bf Im}\\left(\\overline{z_1}z_2+\\overline{z_2}z_3+\\overline{z_3}z_1\\right)&amp;=&amp;0\n\\end{eqnarray}\nwhere the last step uses ${\\bf Im}\\,\\overline{z_2}z_2={\\bf Im}\\,|z_2|^2=0$ and ${\\bf Im}\\,\\overline{z_1}z_3=-{\\bf Im}\\,\\overline{z_3}z_1$.</p>\n\n<p>$z_1$, $z_2$ and $z_3$ are collinear if and only if the area of the triangle made on them is zero.</p>\n" }, { "answer_id": 2109430, "score": 2, "text": "<p>Here is another approach that relies on the idea that it's easy to detect if three complex numbers lie on a line through the origin, and we attempt to reduce the original problem to this simpler one. </p>\n\n<p>To this end, we might hope that translating our complex numbers (in this case, subtracting $z_1$ from each) doesn't affect whether the function $f(z_1, z_2, z_3) = \\overline{z_1}z_2 + \\overline{z_2}{z_3} + \\overline{z_3}{z_1}$ takes a real value or not. 
</p>\n\n<p>And indeed, we'll see that</p>\n\n<p>$$\\operatorname{Im}\\Big(f(z_1, z_2, z_3)\\Big) = \\operatorname{Im}\\Big(f(0, z_2 - z_1, z_3 - z_1)\\Big).$$</p>\n\n<p>Observe that</p>\n\n<p>\\begin{align*}\n\\operatorname{Im}\\Big(f(0, z_2 - z_1, z_3 - z_1)\\Big)\n&amp;= \\operatorname{Im}\\Big(\\overline{z_2 - z_1}(z_3 - z_1)\\Big) \\\\[7pt]\n&amp;= \\operatorname{Im}\\Big(\\overline{z_2}z_3 - \\overline{z_1}z_3 - \\overline{z_2}z_1 + \\overline{z_1}{z_1}\\Big) \\\\[7pt]\n&amp;= \\operatorname{Im}\\big(\\overline{z_2}z_3\\big) - \\operatorname{Im}\\big(\\overline{z_1}z_3\\big) - \\operatorname{Im}\\big(\\overline{z_2}z_1\\big) + \\underbrace{\\operatorname{Im}\\big(|z_1|^2\\big)}_{=0}\n\\end{align*}</p>\n\n<p>where $\\overline{z_1}{z_3} = \\overline{z_1\\overline{z_3}}$ so the two have opposite imaginary parts; that is, $\\operatorname{Im}\\big(z_1\\overline{z_3}\\big) = - \\operatorname{Im}(\\overline{z_1}z_3)$, and similarly $\\operatorname{Im}\\big(\\overline{z_2}z_1\\big) = -\\operatorname{Im}\\big(\\overline{z_1}z_2\\big)$. Now picking up where we left off,</p>\n\n<p>\\begin{align*}\n\\operatorname{Im}\\Big(f(0, z_2 - z_1, z_3 - z_1)\\Big)\n&amp;= \\operatorname{Im}\\big(\\overline{z_2}z_3\\big) - \\operatorname{Im}\\big(\\overline{z_1}z_3\\big) - \\operatorname{Im}\\big(\\overline{z_2}z_1\\big) \\\\[7pt]\n&amp;= \\operatorname{Im}\\big(\\overline{z_2}z_3\\big) + \\operatorname{Im}\\big(\\overline{z_3}z_1\\big) + \\operatorname{Im}\\big(\\overline{z_1}z_2\\big)\\\\\n&amp;= \\operatorname{Im}\\Big(f(z_1, z_2, z_3)\\Big)\n\\end{align*}</p>\n\n<p>Now we simply need to show that $z_1, z_2, z_3$ are collinear if and only if $f(0, z_2 - z_1, z_3 - z_1) \\in \\Bbb R$.</p>\n\n<p>Letting $w_2 = z_2 - z_1$ and $w_3 = z_3 - z_1$, this is equivalent to showing that $0, w_2, w_3$ are collinear if and only if $f(0, w_2, w_3) \\in \\Bbb R$. 
</p>\n\n<p>But since $f(0, w_2, w_3) = \\overline{w_2}w_3$, this is just the well-known fact that $\\overline{w_2}w_3 \\in \\Bbb R$ if and only if $w_3 = c w_2$ for some real scalar $c$.</p>\n" }, { "answer_id": 2109247, "score": 4, "text": "<p>We know that $z_1,z_2,z_3$ are collinear iff there exists some $t\\in\\mathbb{R}$ such that $$t(z_1-z_3)=z_1-z_2.$$\nThe problem becomes trivial if $z_1=z_3$ so we can suppose this is not the case, and then it is legal to write \n$$\nt=\\frac{z_1-z_2}{z_1-z_3}.\n$$\nMultiplying both sides of this equation by $|z_1-z_3|^2=(\\overline{z_1-z_3})(z_1-z_3)$ we get\n\\begin{align}\nt|z_1-z_3|^2=(\\overline{z_1-z_3})(z_1-z_2)=&amp;|z_1|^2-\\bar z_1z_2 -\\bar z_3z_1+\\bar z_3z_2\\\\\n=&amp;|z_1|^2-\\bar z_1z_2 -\\bar z_3z_1\\underbrace{-\\bar z_2z_3+\\bar z_2z_3}\n_{=0}+\\bar z_3z_2.\\\\\n\\end{align}\nExpressing this as\n$$\nt|z_1-z_3|^2-|z_1|^2-(\\overline{\\bar z_3z_2}+\\bar z_3z_2)=-(\\bar z_1z_2 +\\bar z_2z_3+\\bar z_3z_1)\n$$\nand noticing that the LHS is a real number leads to the desired conclusion.</p>\n" }, { "answer_id": 2109265, "score": 4, "text": "<p>The basic concept for this answer is the area of a triangle. The area of a triangle which is made on vectors $z$ and $w$ is $A=\\dfrac12{\\bf Im}(\\bar{z}w)$. 
for three points in complex plane, we make a triangle on them with sides $z_1-z_2$, $z_2-z_3$ and $z_3-z_1$, we have\n\\begin{eqnarray}\n\\dfrac12{\\bf Im}\\overline{(z_1-z_2)}(z_2-z_3)&amp;=&amp;0\\\\\n{\\bf Im}\\overline{z_1}z_2+\\overline{z_2}z_3-\\overline{z_2}z_2-\\overline{z_1}z_3&amp;=&amp;0\\\\\n{\\bf Im}\\overline{z_1}z_2+\\overline{z_2}z_3+{\\bf Im}\\overline{z_3}z_1&amp;=&amp;{\\bf Im}\\overline{z_2}z_2+\\overline{z_1}z_3+{\\bf Im}\\overline{z_3}z_1\\\\\n{\\bf Im}\\overline{z_1}z_2+\\overline{z_2}z_3+\\overline{z_3}z_1&amp;=&amp;0\n\\end{eqnarray}</p>\n\n<p>$z_1$, $z_2$ and $z_3$ are collinear, if and only if, the area of triangle made on them is zero.</p>\n" }, { "answer_id": 2109430, "score": 2, "text": "<p>Here is another approach that relies on the idea that its easy to detect if three complex numbers lie on a line through the origin, and we attempt to reduce the original problem to this simpler one. </p>\n\n<p>To this end, we might hope that translating our complex numbers (in this case, subtracting $z_1$ from each) doesn't affect whether the function $f(z_1, z_2, z_3) = \\overline{z_1}z_2 + \\overline{z_2}{z_3} + \\overline{z_3}{z_1}$ takes a real value or not. 
</p>\n\n<p>And indeed, we'll see that</p>\n\n<p>$$\\operatorname{Im}\\Big(f(z_1, z_2, z_3)\\Big) = \\operatorname{Im}\\Big(f(0, z_2 - z_1, z_3 - z_1)\\Big).$$</p>\n\n<p>Observe that</p>\n\n<p>\\begin{align*}\n\\operatorname{Im}\\Big(f(0, z_2 - z_1, z_3 - z_1)\\Big)\n&amp;= \\operatorname{Im}\\Big(\\overline{z_2 - z_1}(z_3 - z_1)\\Big) \\\\[7pt]\n&amp;= \\operatorname{Im}\\Big(\\overline{z_2}z_3 - \\overline{z_1}z_3 - \\overline{z_2}z_1 + \\overline{z_1}{z_1}\\Big) \\\\[7pt]\n&amp;= \\operatorname{Im}\\big(\\overline{z_2}z_3) - \\operatorname{Im}\\big(\\overline{z_1}z_3\\big) - \\operatorname{Im}\\big(\\overline{z_2}z_1\\big) + \\underbrace{\\operatorname{Im}\\big(|z_1|^2\\big)}_{=0}\n\\end{align*}</p>\n\n<p>where $\\overline{z_1}{z_3} = \\overline{z_1\\overline{z_3}}$ so the two have opposite imaginary parts; that is, $\\operatorname{Im}\\big(z_1\\overline{z_3}) = - \\operatorname{Im}(\\overline{z_1}z_3)$. Now picking up where we left off,</p>\n\n<p>\\begin{align*}\n\\operatorname{Im}\\Big(f(0, z_2 - z_1, z_3 - z_1)\\Big)\n&amp;= \\operatorname{Im}\\big(\\overline{z_2}z_3) - \\operatorname{Im}\\big(\\overline{z_1}z_3\\big) - \\operatorname{Im}\\big(\\overline{z_2}z_3\\big) \\\\[7pt]\n&amp;= \\operatorname{Im}\\big(\\overline{z_2}z_3) + \\operatorname{Im}\\big(\\overline{z_3}z_1\\big) + \\operatorname{Im}\\big(\\overline{z_1}z_2\\big)\\\\\n&amp;= \\operatorname{Im}\\Big(f(z_1, z_2, z_3)\\Big)\n\\end{align*}</p>\n\n<p>Now we simply need to show that $z_1, z_2, z_3$ are collinear if and only if $f(0, z_2 - z_1, z_3 - z_1) \\in \\Bbb R$.</p>\n\n<p>Letting $w_2 = z_2 - z_1$ and $w_3 = z_3 - z_1$, this is equivalent to showing that $0, w_1, w_2$ are collinear if and only if $f(0, w_2, w_3) \\in \\Bbb R$. 
</p>\n\n<p>But since $f(0, w_2, w_3) = \\overline{w_2}w_3$, this is just the well-known fact that $\\overline{w_2}w_3 \\in \\Bbb R$ if and only if $w_3 = c w_2$ for some real scalar $c$.</p>\n" }, { "answer_id": 2109247, "score": 4, "text": "<p>We know that $z_1,z_2,z_3$ are collinear iff there exists some $t\\in\\mathbb{R}$ such that $$t(z_1-z_3)=z_1-z_2.$$\nThe problem becomes trivial if $z_1=z_3$ so we can suppose this is not the case, and then it is legal to write \n$$\nt=\\frac{z_1-z_2}{z_1-z_3}.\n$$\nMultiplying both sides of this equation by $|z_1-z_3|^2=(\\overline{z_1-z_3})(z_1-z_3)$ we get\n\\begin{align}\nt|z_1-z_3|^2=(\\overline{z_1-z_3})(z_1-z_2)=&amp;|z_1|^2-\\bar z_1z_2 -\\bar z_3z_1+\\bar z_3z_2\\\\\n=&amp;|z_1|^2-\\bar z_1z_2 -\\bar z_3z_1\\underbrace{-\\bar z_2z_3+\\bar z_2z_3}\n_{=0}+\\bar z_3z_2.\\\\\n\\end{align}\nExpressing this as\n$$\nt|z_1-z_3|^2-|z_1|^2-(\\overline{\\bar z_3z_2}+\\bar z_3z_2)=-(\\bar z_1z_2 +\\bar z_2z_3+\\bar z_3z_1)\n$$\nand noticing that the LHS is a real number leads to the desired conclusion.</p>\n" }, { "answer_id": 2109265, "score": 4, "text": "<p>The basic concept for this answer is the area of a triangle. The area of a triangle which is made on vectors $z$ and $w$ is $A=\\dfrac12{\\bf Im}(\\bar{z}w)$. 
for three points in complex plane, we make a triangle on them with sides $z_1-z_2$, $z_2-z_3$ and $z_3-z_1$, we have\n\\begin{eqnarray}\n\\dfrac12{\\bf Im}\\overline{(z_1-z_2)}(z_2-z_3)&amp;=&amp;0\\\\\n{\\bf Im}\\overline{z_1}z_2+\\overline{z_2}z_3-\\overline{z_2}z_2-\\overline{z_1}z_3&amp;=&amp;0\\\\\n{\\bf Im}\\overline{z_1}z_2+\\overline{z_2}z_3+{\\bf Im}\\overline{z_3}z_1&amp;=&amp;{\\bf Im}\\overline{z_2}z_2+\\overline{z_1}z_3+{\\bf Im}\\overline{z_3}z_1\\\\\n{\\bf Im}\\overline{z_1}z_2+\\overline{z_2}z_3+\\overline{z_3}z_1&amp;=&amp;0\n\\end{eqnarray}</p>\n\n<p>$z_1$, $z_2$ and $z_3$ are collinear, if and only if, the area of triangle made on them is zero.</p>\n" }, { "answer_id": 2109430, "score": 2, "text": "<p>Here is another approach that relies on the idea that its easy to detect if three complex numbers lie on a line through the origin, and we attempt to reduce the original problem to this simpler one. </p>\n\n<p>To this end, we might hope that translating our complex numbers (in this case, subtracting $z_1$ from each) doesn't affect whether the function $f(z_1, z_2, z_3) = \\overline{z_1}z_2 + \\overline{z_2}{z_3} + \\overline{z_3}{z_1}$ takes a real value or not. 
</p>\n\n<p>And indeed, we'll see that</p>\n\n<p>$$\\operatorname{Im}\\Big(f(z_1, z_2, z_3)\\Big) = \\operatorname{Im}\\Big(f(0, z_2 - z_1, z_3 - z_1)\\Big).$$</p>\n\n<p>Observe that</p>\n\n<p>\\begin{align*}\n\\operatorname{Im}\\Big(f(0, z_2 - z_1, z_3 - z_1)\\Big)\n&amp;= \\operatorname{Im}\\Big(\\overline{z_2 - z_1}(z_3 - z_1)\\Big) \\\\[7pt]\n&amp;= \\operatorname{Im}\\Big(\\overline{z_2}z_3 - \\overline{z_1}z_3 - \\overline{z_2}z_1 + \\overline{z_1}{z_1}\\Big) \\\\[7pt]\n&amp;= \\operatorname{Im}\\big(\\overline{z_2}z_3) - \\operatorname{Im}\\big(\\overline{z_1}z_3\\big) - \\operatorname{Im}\\big(\\overline{z_2}z_1\\big) + \\underbrace{\\operatorname{Im}\\big(|z_1|^2\\big)}_{=0}\n\\end{align*}</p>\n\n<p>where $\\overline{z_1}{z_3} = \\overline{z_1\\overline{z_3}}$ so the two have opposite imaginary parts; that is, $\\operatorname{Im}\\big(z_1\\overline{z_3}) = - \\operatorname{Im}(\\overline{z_1}z_3)$. Now picking up where we left off,</p>\n\n<p>\\begin{align*}\n\\operatorname{Im}\\Big(f(0, z_2 - z_1, z_3 - z_1)\\Big)\n&amp;= \\operatorname{Im}\\big(\\overline{z_2}z_3) - \\operatorname{Im}\\big(\\overline{z_1}z_3\\big) - \\operatorname{Im}\\big(\\overline{z_2}z_3\\big) \\\\[7pt]\n&amp;= \\operatorname{Im}\\big(\\overline{z_2}z_3) + \\operatorname{Im}\\big(\\overline{z_3}z_1\\big) + \\operatorname{Im}\\big(\\overline{z_1}z_2\\big)\\\\\n&amp;= \\operatorname{Im}\\Big(f(z_1, z_2, z_3)\\Big)\n\\end{align*}</p>\n\n<p>Now we simply need to show that $z_1, z_2, z_3$ are collinear if and only if $f(0, z_2 - z_1, z_3 - z_1) \\in \\Bbb R$.</p>\n\n<p>Letting $w_2 = z_2 - z_1$ and $w_3 = z_3 - z_1$, this is equivalent to showing that $0, w_1, w_2$ are collinear if and only if $f(0, w_2, w_3) \\in \\Bbb R$. 
</p>\n\n<p>But since $f(0, w_2, w_3) = \\overline{w_2}w_3$, this is just the well-known fact that $\\overline{w_2}w_3 \\in \\Bbb R$ if and only if $w_3 = c w_2$ for some real scalar $c$.</p>\n" }, { "answer_id": 2109247, "score": 4, "text": "<p>We know that $z_1,z_2,z_3$ are collinear iff there exists some $t\\in\\mathbb{R}$ such that $$t(z_1-z_3)=z_1-z_2.$$\nThe problem becomes trivial if $z_1=z_3$ so we can suppose this is not the case, and then it is legal to write \n$$\nt=\\frac{z_1-z_2}{z_1-z_3}.\n$$\nMultiplying both sides of this equation by $|z_1-z_3|^2=(\\overline{z_1-z_3})(z_1-z_3)$ we get\n\\begin{align}\nt|z_1-z_3|^2=(\\overline{z_1-z_3})(z_1-z_2)=&amp;|z_1|^2-\\bar z_1z_2 -\\bar z_3z_1+\\bar z_3z_2\\\\\n=&amp;|z_1|^2-\\bar z_1z_2 -\\bar z_3z_1\\underbrace{-\\bar z_2z_3+\\bar z_2z_3}\n_{=0}+\\bar z_3z_2.\\\\\n\\end{align}\nExpressing this as\n$$\nt|z_1-z_3|^2-|z_1|^2-(\\overline{\\bar z_3z_2}+\\bar z_3z_2)=-(\\bar z_1z_2 +\\bar z_2z_3+\\bar z_3z_1)\n$$\nand noticing that the LHS is a real number leads to the desired conclusion.</p>\n" }, { "answer_id": 2109265, "score": 4, "text": "<p>The basic concept for this answer is the area of a triangle. The area of a triangle which is made on vectors $z$ and $w$ is $A=\\dfrac12{\\bf Im}(\\bar{z}w)$. 
for three points in complex plane, we make a triangle on them with sides $z_1-z_2$, $z_2-z_3$ and $z_3-z_1$, we have\n\\begin{eqnarray}\n\\dfrac12{\\bf Im}\\overline{(z_1-z_2)}(z_2-z_3)&amp;=&amp;0\\\\\n{\\bf Im}\\overline{z_1}z_2+\\overline{z_2}z_3-\\overline{z_2}z_2-\\overline{z_1}z_3&amp;=&amp;0\\\\\n{\\bf Im}\\overline{z_1}z_2+\\overline{z_2}z_3+{\\bf Im}\\overline{z_3}z_1&amp;=&amp;{\\bf Im}\\overline{z_2}z_2+\\overline{z_1}z_3+{\\bf Im}\\overline{z_3}z_1\\\\\n{\\bf Im}\\overline{z_1}z_2+\\overline{z_2}z_3+\\overline{z_3}z_1&amp;=&amp;0\n\\end{eqnarray}</p>\n\n<p>$z_1$, $z_2$ and $z_3$ are collinear, if and only if, the area of triangle made on them is zero.</p>\n" }, { "answer_id": 2109430, "score": 2, "text": "<p>Here is another approach that relies on the idea that its easy to detect if three complex numbers lie on a line through the origin, and we attempt to reduce the original problem to this simpler one. </p>\n\n<p>To this end, we might hope that translating our complex numbers (in this case, subtracting $z_1$ from each) doesn't affect whether the function $f(z_1, z_2, z_3) = \\overline{z_1}z_2 + \\overline{z_2}{z_3} + \\overline{z_3}{z_1}$ takes a real value or not. 
</p>\n\n<p>And indeed, we'll see that</p>\n\n<p>$$\\operatorname{Im}\\Big(f(z_1, z_2, z_3)\\Big) = \\operatorname{Im}\\Big(f(0, z_2 - z_1, z_3 - z_1)\\Big).$$</p>\n\n<p>Observe that</p>\n\n<p>\\begin{align*}\n\\operatorname{Im}\\Big(f(0, z_2 - z_1, z_3 - z_1)\\Big)\n&amp;= \\operatorname{Im}\\Big(\\overline{z_2 - z_1}(z_3 - z_1)\\Big) \\\\[7pt]\n&amp;= \\operatorname{Im}\\Big(\\overline{z_2}z_3 - \\overline{z_1}z_3 - \\overline{z_2}z_1 + \\overline{z_1}{z_1}\\Big) \\\\[7pt]\n&amp;= \\operatorname{Im}\\big(\\overline{z_2}z_3) - \\operatorname{Im}\\big(\\overline{z_1}z_3\\big) - \\operatorname{Im}\\big(\\overline{z_2}z_1\\big) + \\underbrace{\\operatorname{Im}\\big(|z_1|^2\\big)}_{=0}\n\\end{align*}</p>\n\n<p>where $\\overline{z_1}{z_3} = \\overline{z_1\\overline{z_3}}$ so the two have opposite imaginary parts; that is, $\\operatorname{Im}\\big(z_1\\overline{z_3}) = - \\operatorname{Im}(\\overline{z_1}z_3)$. Now picking up where we left off,</p>\n\n<p>\\begin{align*}\n\\operatorname{Im}\\Big(f(0, z_2 - z_1, z_3 - z_1)\\Big)\n&amp;= \\operatorname{Im}\\big(\\overline{z_2}z_3) - \\operatorname{Im}\\big(\\overline{z_1}z_3\\big) - \\operatorname{Im}\\big(\\overline{z_2}z_3\\big) \\\\[7pt]\n&amp;= \\operatorname{Im}\\big(\\overline{z_2}z_3) + \\operatorname{Im}\\big(\\overline{z_3}z_1\\big) + \\operatorname{Im}\\big(\\overline{z_1}z_2\\big)\\\\\n&amp;= \\operatorname{Im}\\Big(f(z_1, z_2, z_3)\\Big)\n\\end{align*}</p>\n\n<p>Now we simply need to show that $z_1, z_2, z_3$ are collinear if and only if $f(0, z_2 - z_1, z_3 - z_1) \\in \\Bbb R$.</p>\n\n<p>Letting $w_2 = z_2 - z_1$ and $w_3 = z_3 - z_1$, this is equivalent to showing that $0, w_1, w_2$ are collinear if and only if $f(0, w_2, w_3) \\in \\Bbb R$. 
</p>\n\n<p>But since $f(0, w_2, w_3) = \\overline{w_2}w_3$, this is just the well-known fact that $\\overline{w_2}w_3 \\in \\Bbb R$ if and only if $w_3 = c w_2$ for some real scalar $c$.</p>\n" }, { "answer_id": 2109247, "score": 4, "text": "<p>We know that $z_1,z_2,z_3$ are collinear iff there exists some $t\\in\\mathbb{R}$ such that $$t(z_1-z_3)=z_1-z_2.$$\nThe problem becomes trivial if $z_1=z_3$ so we can suppose this is not the case, and then it is legal to write \n$$\nt=\\frac{z_1-z_2}{z_1-z_3}.\n$$\nMultiplying both sides of this equation by $|z_1-z_3|^2=(\\overline{z_1-z_3})(z_1-z_3)$ we get\n\\begin{align}\nt|z_1-z_3|^2=(\\overline{z_1-z_3})(z_1-z_2)=&amp;|z_1|^2-\\bar z_1z_2 -\\bar z_3z_1+\\bar z_3z_2\\\\\n=&amp;|z_1|^2-\\bar z_1z_2 -\\bar z_3z_1\\underbrace{-\\bar z_2z_3+\\bar z_2z_3}\n_{=0}+\\bar z_3z_2.\\\\\n\\end{align}\nExpressing this as\n$$\nt|z_1-z_3|^2-|z_1|^2-(\\overline{\\bar z_3z_2}+\\bar z_3z_2)=-(\\bar z_1z_2 +\\bar z_2z_3+\\bar z_3z_1)\n$$\nand noticing that the LHS is a real number leads to the desired conclusion.</p>\n" }, { "answer_id": 2109265, "score": 4, "text": "<p>The basic concept for this answer is the area of a triangle. The area of a triangle which is made on vectors $z$ and $w$ is $A=\\dfrac12{\\bf Im}(\\bar{z}w)$. 
for three points in complex plane, we make a triangle on them with sides $z_1-z_2$, $z_2-z_3$ and $z_3-z_1$, we have\n\\begin{eqnarray}\n\\dfrac12{\\bf Im}\\overline{(z_1-z_2)}(z_2-z_3)&amp;=&amp;0\\\\\n{\\bf Im}\\overline{z_1}z_2+\\overline{z_2}z_3-\\overline{z_2}z_2-\\overline{z_1}z_3&amp;=&amp;0\\\\\n{\\bf Im}\\overline{z_1}z_2+\\overline{z_2}z_3+{\\bf Im}\\overline{z_3}z_1&amp;=&amp;{\\bf Im}\\overline{z_2}z_2+\\overline{z_1}z_3+{\\bf Im}\\overline{z_3}z_1\\\\\n{\\bf Im}\\overline{z_1}z_2+\\overline{z_2}z_3+\\overline{z_3}z_1&amp;=&amp;0\n\\end{eqnarray}</p>\n\n<p>$z_1$, $z_2$ and $z_3$ are collinear, if and only if, the area of triangle made on them is zero.</p>\n" }, { "answer_id": 2109430, "score": 2, "text": "<p>Here is another approach that relies on the idea that its easy to detect if three complex numbers lie on a line through the origin, and we attempt to reduce the original problem to this simpler one. </p>\n\n<p>To this end, we might hope that translating our complex numbers (in this case, subtracting $z_1$ from each) doesn't affect whether the function $f(z_1, z_2, z_3) = \\overline{z_1}z_2 + \\overline{z_2}{z_3} + \\overline{z_3}{z_1}$ takes a real value or not. 
</p>\n\n<p>And indeed, we'll see that</p>\n\n<p>$$\\operatorname{Im}\\Big(f(z_1, z_2, z_3)\\Big) = \\operatorname{Im}\\Big(f(0, z_2 - z_1, z_3 - z_1)\\Big).$$</p>\n\n<p>Observe that</p>\n\n<p>\\begin{align*}\n\\operatorname{Im}\\Big(f(0, z_2 - z_1, z_3 - z_1)\\Big)\n&amp;= \\operatorname{Im}\\Big(\\overline{z_2 - z_1}(z_3 - z_1)\\Big) \\\\[7pt]\n&amp;= \\operatorname{Im}\\Big(\\overline{z_2}z_3 - \\overline{z_1}z_3 - \\overline{z_2}z_1 + \\overline{z_1}{z_1}\\Big) \\\\[7pt]\n&amp;= \\operatorname{Im}\\big(\\overline{z_2}z_3) - \\operatorname{Im}\\big(\\overline{z_1}z_3\\big) - \\operatorname{Im}\\big(\\overline{z_2}z_1\\big) + \\underbrace{\\operatorname{Im}\\big(|z_1|^2\\big)}_{=0}\n\\end{align*}</p>\n\n<p>where $\\overline{z_1}{z_3} = \\overline{z_1\\overline{z_3}}$ so the two have opposite imaginary parts; that is, $\\operatorname{Im}\\big(z_1\\overline{z_3}) = - \\operatorname{Im}(\\overline{z_1}z_3)$. Now picking up where we left off,</p>\n\n<p>\\begin{align*}\n\\operatorname{Im}\\Big(f(0, z_2 - z_1, z_3 - z_1)\\Big)\n&amp;= \\operatorname{Im}\\big(\\overline{z_2}z_3) - \\operatorname{Im}\\big(\\overline{z_1}z_3\\big) - \\operatorname{Im}\\big(\\overline{z_2}z_3\\big) \\\\[7pt]\n&amp;= \\operatorname{Im}\\big(\\overline{z_2}z_3) + \\operatorname{Im}\\big(\\overline{z_3}z_1\\big) + \\operatorname{Im}\\big(\\overline{z_1}z_2\\big)\\\\\n&amp;= \\operatorname{Im}\\Big(f(z_1, z_2, z_3)\\Big)\n\\end{align*}</p>\n\n<p>Now we simply need to show that $z_1, z_2, z_3$ are collinear if and only if $f(0, z_2 - z_1, z_3 - z_1) \\in \\Bbb R$.</p>\n\n<p>Letting $w_2 = z_2 - z_1$ and $w_3 = z_3 - z_1$, this is equivalent to showing that $0, w_1, w_2$ are collinear if and only if $f(0, w_2, w_3) \\in \\Bbb R$. 
</p>\n\n<p>But since $f(0, w_2, w_3) = \\overline{w_2}w_3$, this is just the well-known fact that $\\overline{w_2}w_3 \\in \\Bbb R$ if and only if $w_3 = c w_2$ for some real scalar $c$.</p>\n" }, { "answer_id": 2109247, "score": 4, "text": "<p>We know that $z_1,z_2,z_3$ are collinear iff there exists some $t\\in\\mathbb{R}$ such that $$t(z_1-z_3)=z_1-z_2.$$\nThe problem becomes trivial if $z_1=z_3$ so we can suppose this is not the case, and then it is legal to write \n$$\nt=\\frac{z_1-z_2}{z_1-z_3}.\n$$\nMultiplying both sides of this equation by $|z_1-z_3|^2=(\\overline{z_1-z_3})(z_1-z_3)$ we get\n\\begin{align}\nt|z_1-z_3|^2=(\\overline{z_1-z_3})(z_1-z_2)=&amp;|z_1|^2-\\bar z_1z_2 -\\bar z_3z_1+\\bar z_3z_2\\\\\n=&amp;|z_1|^2-\\bar z_1z_2 -\\bar z_3z_1\\underbrace{-\\bar z_2z_3+\\bar z_2z_3}\n_{=0}+\\bar z_3z_2.\\\\\n\\end{align}\nExpressing this as\n$$\nt|z_1-z_3|^2-|z_1|^2-(\\overline{\\bar z_3z_2}+\\bar z_3z_2)=-(\\bar z_1z_2 +\\bar z_2z_3+\\bar z_3z_1)\n$$\nand noticing that the LHS is a real number leads to the desired conclusion.</p>\n" }, { "answer_id": 2109265, "score": 4, "text": "<p>The basic concept for this answer is the area of a triangle. The area of a triangle which is made on vectors $z$ and $w$ is $A=\\dfrac12{\\bf Im}(\\bar{z}w)$. 
for three points in complex plane, we make a triangle on them with sides $z_1-z_2$, $z_2-z_3$ and $z_3-z_1$, we have\n\\begin{eqnarray}\n\\dfrac12{\\bf Im}\\overline{(z_1-z_2)}(z_2-z_3)&amp;=&amp;0\\\\\n{\\bf Im}\\overline{z_1}z_2+\\overline{z_2}z_3-\\overline{z_2}z_2-\\overline{z_1}z_3&amp;=&amp;0\\\\\n{\\bf Im}\\overline{z_1}z_2+\\overline{z_2}z_3+{\\bf Im}\\overline{z_3}z_1&amp;=&amp;{\\bf Im}\\overline{z_2}z_2+\\overline{z_1}z_3+{\\bf Im}\\overline{z_3}z_1\\\\\n{\\bf Im}\\overline{z_1}z_2+\\overline{z_2}z_3+\\overline{z_3}z_1&amp;=&amp;0\n\\end{eqnarray}</p>\n\n<p>$z_1$, $z_2$ and $z_3$ are collinear, if and only if, the area of triangle made on them is zero.</p>\n" }, { "answer_id": 2109430, "score": 2, "text": "<p>Here is another approach that relies on the idea that its easy to detect if three complex numbers lie on a line through the origin, and we attempt to reduce the original problem to this simpler one. </p>\n\n<p>To this end, we might hope that translating our complex numbers (in this case, subtracting $z_1$ from each) doesn't affect whether the function $f(z_1, z_2, z_3) = \\overline{z_1}z_2 + \\overline{z_2}{z_3} + \\overline{z_3}{z_1}$ takes a real value or not. 
</p>\n\n<p>And indeed, we'll see that</p>\n\n<p>$$\\operatorname{Im}\\Big(f(z_1, z_2, z_3)\\Big) = \\operatorname{Im}\\Big(f(0, z_2 - z_1, z_3 - z_1)\\Big).$$</p>\n\n<p>Observe that</p>\n\n<p>\\begin{align*}\n\\operatorname{Im}\\Big(f(0, z_2 - z_1, z_3 - z_1)\\Big)\n&amp;= \\operatorname{Im}\\Big(\\overline{z_2 - z_1}(z_3 - z_1)\\Big) \\\\[7pt]\n&amp;= \\operatorname{Im}\\Big(\\overline{z_2}z_3 - \\overline{z_1}z_3 - \\overline{z_2}z_1 + \\overline{z_1}{z_1}\\Big) \\\\[7pt]\n&amp;= \\operatorname{Im}\\big(\\overline{z_2}z_3) - \\operatorname{Im}\\big(\\overline{z_1}z_3\\big) - \\operatorname{Im}\\big(\\overline{z_2}z_1\\big) + \\underbrace{\\operatorname{Im}\\big(|z_1|^2\\big)}_{=0}\n\\end{align*}</p>\n\n<p>where $\\overline{z_1}{z_3} = \\overline{z_1\\overline{z_3}}$ so the two have opposite imaginary parts; that is, $\\operatorname{Im}\\big(z_1\\overline{z_3}) = - \\operatorname{Im}(\\overline{z_1}z_3)$. Now picking up where we left off,</p>\n\n<p>\\begin{align*}\n\\operatorname{Im}\\Big(f(0, z_2 - z_1, z_3 - z_1)\\Big)\n&amp;= \\operatorname{Im}\\big(\\overline{z_2}z_3) - \\operatorname{Im}\\big(\\overline{z_1}z_3\\big) - \\operatorname{Im}\\big(\\overline{z_2}z_3\\big) \\\\[7pt]\n&amp;= \\operatorname{Im}\\big(\\overline{z_2}z_3) + \\operatorname{Im}\\big(\\overline{z_3}z_1\\big) + \\operatorname{Im}\\big(\\overline{z_1}z_2\\big)\\\\\n&amp;= \\operatorname{Im}\\Big(f(z_1, z_2, z_3)\\Big)\n\\end{align*}</p>\n\n<p>Now we simply need to show that $z_1, z_2, z_3$ are collinear if and only if $f(0, z_2 - z_1, z_3 - z_1) \\in \\Bbb R$.</p>\n\n<p>Letting $w_2 = z_2 - z_1$ and $w_3 = z_3 - z_1$, this is equivalent to showing that $0, w_1, w_2$ are collinear if and only if $f(0, w_2, w_3) \\in \\Bbb R$. 
</p>\n\n<p>But since $f(0, w_2, w_3) = \\overline{w_2}w_3$, this is just the well-known fact that $\\overline{w_2}w_3 \\in \\Bbb R$ if and only if $w_3 = c w_2$ for some real scalar $c$.</p>\n" }, { "answer_id": 2109247, "score": 4, "text": "<p>We know that $z_1,z_2,z_3$ are collinear iff there exists some $t\\in\\mathbb{R}$ such that $$t(z_1-z_3)=z_1-z_2.$$\nThe problem becomes trivial if $z_1=z_3$ so we can suppose this is not the case, and then it is legal to write \n$$\nt=\\frac{z_1-z_2}{z_1-z_3}.\n$$\nMultiplying both sides of this equation by $|z_1-z_3|^2=(\\overline{z_1-z_3})(z_1-z_3)$ we get\n\\begin{align}\nt|z_1-z_3|^2=(\\overline{z_1-z_3})(z_1-z_2)=&amp;|z_1|^2-\\bar z_1z_2 -\\bar z_3z_1+\\bar z_3z_2\\\\\n=&amp;|z_1|^2-\\bar z_1z_2 -\\bar z_3z_1\\underbrace{-\\bar z_2z_3+\\bar z_2z_3}\n_{=0}+\\bar z_3z_2.\\\\\n\\end{align}\nExpressing this as\n$$\nt|z_1-z_3|^2-|z_1|^2-(\\overline{\\bar z_3z_2}+\\bar z_3z_2)=-(\\bar z_1z_2 +\\bar z_2z_3+\\bar z_3z_1)\n$$\nand noticing that the LHS is a real number leads to the desired conclusion.</p>\n" }, { "answer_id": 2109265, "score": 4, "text": "<p>The basic concept for this answer is the area of a triangle. The area of a triangle which is made on vectors $z$ and $w$ is $A=\\dfrac12{\\bf Im}(\\bar{z}w)$. 
for three points in complex plane, we make a triangle on them with sides $z_1-z_2$, $z_2-z_3$ and $z_3-z_1$, we have\n\\begin{eqnarray}\n\\dfrac12{\\bf Im}\\overline{(z_1-z_2)}(z_2-z_3)&amp;=&amp;0\\\\\n{\\bf Im}\\overline{z_1}z_2+\\overline{z_2}z_3-\\overline{z_2}z_2-\\overline{z_1}z_3&amp;=&amp;0\\\\\n{\\bf Im}\\overline{z_1}z_2+\\overline{z_2}z_3+{\\bf Im}\\overline{z_3}z_1&amp;=&amp;{\\bf Im}\\overline{z_2}z_2+\\overline{z_1}z_3+{\\bf Im}\\overline{z_3}z_1\\\\\n{\\bf Im}\\overline{z_1}z_2+\\overline{z_2}z_3+\\overline{z_3}z_1&amp;=&amp;0\n\\end{eqnarray}</p>\n\n<p>$z_1$, $z_2$ and $z_3$ are collinear, if and only if, the area of triangle made on them is zero.</p>\n" }, { "answer_id": 2109430, "score": 2, "text": "<p>Here is another approach that relies on the idea that its easy to detect if three complex numbers lie on a line through the origin, and we attempt to reduce the original problem to this simpler one. </p>\n\n<p>To this end, we might hope that translating our complex numbers (in this case, subtracting $z_1$ from each) doesn't affect whether the function $f(z_1, z_2, z_3) = \\overline{z_1}z_2 + \\overline{z_2}{z_3} + \\overline{z_3}{z_1}$ takes a real value or not. 
</p>\n\n<p>And indeed, we'll see that</p>\n\n<p>$$\\operatorname{Im}\\Big(f(z_1, z_2, z_3)\\Big) = \\operatorname{Im}\\Big(f(0, z_2 - z_1, z_3 - z_1)\\Big).$$</p>\n\n<p>Observe that</p>\n\n<p>\\begin{align*}\n\\operatorname{Im}\\Big(f(0, z_2 - z_1, z_3 - z_1)\\Big)\n&amp;= \\operatorname{Im}\\Big(\\overline{z_2 - z_1}(z_3 - z_1)\\Big) \\\\[7pt]\n&amp;= \\operatorname{Im}\\Big(\\overline{z_2}z_3 - \\overline{z_1}z_3 - \\overline{z_2}z_1 + \\overline{z_1}{z_1}\\Big) \\\\[7pt]\n&amp;= \\operatorname{Im}\\big(\\overline{z_2}z_3) - \\operatorname{Im}\\big(\\overline{z_1}z_3\\big) - \\operatorname{Im}\\big(\\overline{z_2}z_1\\big) + \\underbrace{\\operatorname{Im}\\big(|z_1|^2\\big)}_{=0}\n\\end{align*}</p>\n\n<p>where $\\overline{z_1}{z_3} = \\overline{z_1\\overline{z_3}}$ so the two have opposite imaginary parts; that is, $\\operatorname{Im}\\big(z_1\\overline{z_3}) = - \\operatorname{Im}(\\overline{z_1}z_3)$. Now picking up where we left off,</p>\n\n<p>\\begin{align*}\n\\operatorname{Im}\\Big(f(0, z_2 - z_1, z_3 - z_1)\\Big)\n&amp;= \\operatorname{Im}\\big(\\overline{z_2}z_3) - \\operatorname{Im}\\big(\\overline{z_1}z_3\\big) - \\operatorname{Im}\\big(\\overline{z_2}z_3\\big) \\\\[7pt]\n&amp;= \\operatorname{Im}\\big(\\overline{z_2}z_3) + \\operatorname{Im}\\big(\\overline{z_3}z_1\\big) + \\operatorname{Im}\\big(\\overline{z_1}z_2\\big)\\\\\n&amp;= \\operatorname{Im}\\Big(f(z_1, z_2, z_3)\\Big)\n\\end{align*}</p>\n\n<p>Now we simply need to show that $z_1, z_2, z_3$ are collinear if and only if $f(0, z_2 - z_1, z_3 - z_1) \\in \\Bbb R$.</p>\n\n<p>Letting $w_2 = z_2 - z_1$ and $w_3 = z_3 - z_1$, this is equivalent to showing that $0, w_1, w_2$ are collinear if and only if $f(0, w_2, w_3) \\in \\Bbb R$. 
</p>\n\n<p>But since $f(0, w_2, w_3) = \\overline{w_2}w_3$, this is just the well-known fact that $\\overline{w_2}w_3 \\in \\Bbb R$ if and only if $w_3 = c w_2$ for some real scalar $c$.</p>\n" }, { "answer_id": 2109247, "score": 4, "text": "<p>We know that $z_1,z_2,z_3$ are collinear iff there exists some $t\\in\\mathbb{R}$ such that $$t(z_1-z_3)=z_1-z_2.$$\nThe problem becomes trivial if $z_1=z_3$ so we can suppose this is not the case, and then it is legal to write \n$$\nt=\\frac{z_1-z_2}{z_1-z_3}.\n$$\nMultiplying both sides of this equation by $|z_1-z_3|^2=(\\overline{z_1-z_3})(z_1-z_3)$ we get\n\\begin{align}\nt|z_1-z_3|^2=(\\overline{z_1-z_3})(z_1-z_2)=&amp;|z_1|^2-\\bar z_1z_2 -\\bar z_3z_1+\\bar z_3z_2\\\\\n=&amp;|z_1|^2-\\bar z_1z_2 -\\bar z_3z_1\\underbrace{-\\bar z_2z_3+\\bar z_2z_3}\n_{=0}+\\bar z_3z_2.\\\\\n\\end{align}\nExpressing this as\n$$\nt|z_1-z_3|^2-|z_1|^2-(\\overline{\\bar z_3z_2}+\\bar z_3z_2)=-(\\bar z_1z_2 +\\bar z_2z_3+\\bar z_3z_1)\n$$\nand noticing that the LHS is a real number leads to the desired conclusion.</p>\n" }, { "answer_id": 2109265, "score": 4, "text": "<p>The basic concept for this answer is the area of a triangle. The area of a triangle which is made on vectors $z$ and $w$ is $A=\\dfrac12{\\bf Im}(\\bar{z}w)$. 
For three points in the complex plane we form the triangle on them with sides $z_1-z_2$, $z_2-z_3$ and $z_3-z_1$; its area vanishes exactly when\n\begin{eqnarray}\n\dfrac12{\bf Im}\Big(\overline{(z_1-z_2)}(z_2-z_3)\Big)&amp;=&amp;0\\\n{\bf Im}\big(\overline{z_1}z_2+\overline{z_2}z_3-\overline{z_2}z_2-\overline{z_1}z_3\big)&amp;=&amp;0\\\n{\bf Im}\big(\overline{z_1}z_2+\overline{z_2}z_3\big)+{\bf Im}\big(\overline{z_3}z_1\big)&amp;=&amp;{\bf Im}\big(\overline{z_2}z_2+\overline{z_1}z_3\big)+{\bf Im}\big(\overline{z_3}z_1\big)\\\n{\bf Im}\big(\overline{z_1}z_2+\overline{z_2}z_3+\overline{z_3}z_1\big)&amp;=&amp;0\n\end{eqnarray}\nwhere the right-hand side of the third line vanishes because $\overline{z_2}z_2=|z_2|^2$ is real and $\overline{z_1}z_3+\overline{z_3}z_1=2\,{\bf Re}(\overline{z_1}z_3)$ is real.</p>\n\n<p>$z_1$, $z_2$ and $z_3$ are collinear if and only if the area of the triangle made on them is zero.</p>\n" }
</p>\n\n<p>And indeed, we'll see that</p>\n\n<p>$$\\operatorname{Im}\\Big(f(z_1, z_2, z_3)\\Big) = \\operatorname{Im}\\Big(f(0, z_2 - z_1, z_3 - z_1)\\Big).$$</p>\n\n<p>Observe that</p>\n\n<p>\\begin{align*}\n\\operatorname{Im}\\Big(f(0, z_2 - z_1, z_3 - z_1)\\Big)\n&amp;= \\operatorname{Im}\\Big(\\overline{z_2 - z_1}(z_3 - z_1)\\Big) \\\\[7pt]\n&amp;= \\operatorname{Im}\\Big(\\overline{z_2}z_3 - \\overline{z_1}z_3 - \\overline{z_2}z_1 + \\overline{z_1}{z_1}\\Big) \\\\[7pt]\n&amp;= \\operatorname{Im}\\big(\\overline{z_2}z_3) - \\operatorname{Im}\\big(\\overline{z_1}z_3\\big) - \\operatorname{Im}\\big(\\overline{z_2}z_1\\big) + \\underbrace{\\operatorname{Im}\\big(|z_1|^2\\big)}_{=0}\n\\end{align*}</p>\n\n<p>where $\\overline{z_1}{z_3} = \\overline{z_1\\overline{z_3}}$ so the two have opposite imaginary parts; that is, $\\operatorname{Im}\\big(z_1\\overline{z_3}) = - \\operatorname{Im}(\\overline{z_1}z_3)$. Now picking up where we left off,</p>\n\n<p>\\begin{align*}\n\\operatorname{Im}\\Big(f(0, z_2 - z_1, z_3 - z_1)\\Big)\n&amp;= \\operatorname{Im}\\big(\\overline{z_2}z_3) - \\operatorname{Im}\\big(\\overline{z_1}z_3\\big) - \\operatorname{Im}\\big(\\overline{z_2}z_3\\big) \\\\[7pt]\n&amp;= \\operatorname{Im}\\big(\\overline{z_2}z_3) + \\operatorname{Im}\\big(\\overline{z_3}z_1\\big) + \\operatorname{Im}\\big(\\overline{z_1}z_2\\big)\\\\\n&amp;= \\operatorname{Im}\\Big(f(z_1, z_2, z_3)\\Big)\n\\end{align*}</p>\n\n<p>Now we simply need to show that $z_1, z_2, z_3$ are collinear if and only if $f(0, z_2 - z_1, z_3 - z_1) \\in \\Bbb R$.</p>\n\n<p>Letting $w_2 = z_2 - z_1$ and $w_3 = z_3 - z_1$, this is equivalent to showing that $0, w_1, w_2$ are collinear if and only if $f(0, w_2, w_3) \\in \\Bbb R$. 
</p>\n\n<p>But since $f(0, w_2, w_3) = \\overline{w_2}w_3$, this is just the well-known fact that $\\overline{w_2}w_3 \\in \\Bbb R$ if and only if $w_3 = c w_2$ for some real scalar $c$.</p>\n" }, { "answer_id": 2109247, "score": 4, "text": "<p>We know that $z_1,z_2,z_3$ are collinear iff there exists some $t\\in\\mathbb{R}$ such that $$t(z_1-z_3)=z_1-z_2.$$\nThe problem becomes trivial if $z_1=z_3$ so we can suppose this is not the case, and then it is legal to write \n$$\nt=\\frac{z_1-z_2}{z_1-z_3}.\n$$\nMultiplying both sides of this equation by $|z_1-z_3|^2=(\\overline{z_1-z_3})(z_1-z_3)$ we get\n\\begin{align}\nt|z_1-z_3|^2=(\\overline{z_1-z_3})(z_1-z_2)=&amp;|z_1|^2-\\bar z_1z_2 -\\bar z_3z_1+\\bar z_3z_2\\\\\n=&amp;|z_1|^2-\\bar z_1z_2 -\\bar z_3z_1\\underbrace{-\\bar z_2z_3+\\bar z_2z_3}\n_{=0}+\\bar z_3z_2.\\\\\n\\end{align}\nExpressing this as\n$$\nt|z_1-z_3|^2-|z_1|^2-(\\overline{\\bar z_3z_2}+\\bar z_3z_2)=-(\\bar z_1z_2 +\\bar z_2z_3+\\bar z_3z_1)\n$$\nand noticing that the LHS is a real number leads to the desired conclusion.</p>\n" }, { "answer_id": 2109265, "score": 4, "text": "<p>The basic concept for this answer is the area of a triangle. The area of a triangle which is made on vectors $z$ and $w$ is $A=\\dfrac12{\\bf Im}(\\bar{z}w)$. 
for three points in complex plane, we make a triangle on them with sides $z_1-z_2$, $z_2-z_3$ and $z_3-z_1$, we have\n\\begin{eqnarray}\n\\dfrac12{\\bf Im}\\overline{(z_1-z_2)}(z_2-z_3)&amp;=&amp;0\\\\\n{\\bf Im}\\overline{z_1}z_2+\\overline{z_2}z_3-\\overline{z_2}z_2-\\overline{z_1}z_3&amp;=&amp;0\\\\\n{\\bf Im}\\overline{z_1}z_2+\\overline{z_2}z_3+{\\bf Im}\\overline{z_3}z_1&amp;=&amp;{\\bf Im}\\overline{z_2}z_2+\\overline{z_1}z_3+{\\bf Im}\\overline{z_3}z_1\\\\\n{\\bf Im}\\overline{z_1}z_2+\\overline{z_2}z_3+\\overline{z_3}z_1&amp;=&amp;0\n\\end{eqnarray}</p>\n\n<p>$z_1$, $z_2$ and $z_3$ are collinear, if and only if, the area of triangle made on them is zero.</p>\n" }, { "answer_id": 2109430, "score": 2, "text": "<p>Here is another approach that relies on the idea that its easy to detect if three complex numbers lie on a line through the origin, and we attempt to reduce the original problem to this simpler one. </p>\n\n<p>To this end, we might hope that translating our complex numbers (in this case, subtracting $z_1$ from each) doesn't affect whether the function $f(z_1, z_2, z_3) = \\overline{z_1}z_2 + \\overline{z_2}{z_3} + \\overline{z_3}{z_1}$ takes a real value or not. 
</p>\n\n<p>And indeed, we'll see that</p>\n\n<p>$$\\operatorname{Im}\\Big(f(z_1, z_2, z_3)\\Big) = \\operatorname{Im}\\Big(f(0, z_2 - z_1, z_3 - z_1)\\Big).$$</p>\n\n<p>Observe that</p>\n\n<p>\\begin{align*}\n\\operatorname{Im}\\Big(f(0, z_2 - z_1, z_3 - z_1)\\Big)\n&amp;= \\operatorname{Im}\\Big(\\overline{z_2 - z_1}(z_3 - z_1)\\Big) \\\\[7pt]\n&amp;= \\operatorname{Im}\\Big(\\overline{z_2}z_3 - \\overline{z_1}z_3 - \\overline{z_2}z_1 + \\overline{z_1}{z_1}\\Big) \\\\[7pt]\n&amp;= \\operatorname{Im}\\big(\\overline{z_2}z_3) - \\operatorname{Im}\\big(\\overline{z_1}z_3\\big) - \\operatorname{Im}\\big(\\overline{z_2}z_1\\big) + \\underbrace{\\operatorname{Im}\\big(|z_1|^2\\big)}_{=0}\n\\end{align*}</p>\n\n<p>where $\\overline{z_1}{z_3} = \\overline{z_1\\overline{z_3}}$ so the two have opposite imaginary parts; that is, $\\operatorname{Im}\\big(z_1\\overline{z_3}) = - \\operatorname{Im}(\\overline{z_1}z_3)$. Now picking up where we left off,</p>\n\n<p>\\begin{align*}\n\\operatorname{Im}\\Big(f(0, z_2 - z_1, z_3 - z_1)\\Big)\n&amp;= \\operatorname{Im}\\big(\\overline{z_2}z_3) - \\operatorname{Im}\\big(\\overline{z_1}z_3\\big) - \\operatorname{Im}\\big(\\overline{z_2}z_3\\big) \\\\[7pt]\n&amp;= \\operatorname{Im}\\big(\\overline{z_2}z_3) + \\operatorname{Im}\\big(\\overline{z_3}z_1\\big) + \\operatorname{Im}\\big(\\overline{z_1}z_2\\big)\\\\\n&amp;= \\operatorname{Im}\\Big(f(z_1, z_2, z_3)\\Big)\n\\end{align*}</p>\n\n<p>Now we simply need to show that $z_1, z_2, z_3$ are collinear if and only if $f(0, z_2 - z_1, z_3 - z_1) \\in \\Bbb R$.</p>\n\n<p>Letting $w_2 = z_2 - z_1$ and $w_3 = z_3 - z_1$, this is equivalent to showing that $0, w_1, w_2$ are collinear if and only if $f(0, w_2, w_3) \\in \\Bbb R$. 
</p>\n\n<p>But since $f(0, w_2, w_3) = \\overline{w_2}w_3$, this is just the well-known fact that $\\overline{w_2}w_3 \\in \\Bbb R$ if and only if $w_3 = c w_2$ for some real scalar $c$.</p>\n" }, { "answer_id": 2109247, "score": 4, "text": "<p>We know that $z_1,z_2,z_3$ are collinear iff there exists some $t\\in\\mathbb{R}$ such that $$t(z_1-z_3)=z_1-z_2.$$\nThe problem becomes trivial if $z_1=z_3$ so we can suppose this is not the case, and then it is legal to write \n$$\nt=\\frac{z_1-z_2}{z_1-z_3}.\n$$\nMultiplying both sides of this equation by $|z_1-z_3|^2=(\\overline{z_1-z_3})(z_1-z_3)$ we get\n\\begin{align}\nt|z_1-z_3|^2=(\\overline{z_1-z_3})(z_1-z_2)=&amp;|z_1|^2-\\bar z_1z_2 -\\bar z_3z_1+\\bar z_3z_2\\\\\n=&amp;|z_1|^2-\\bar z_1z_2 -\\bar z_3z_1\\underbrace{-\\bar z_2z_3+\\bar z_2z_3}\n_{=0}+\\bar z_3z_2.\\\\\n\\end{align}\nExpressing this as\n$$\nt|z_1-z_3|^2-|z_1|^2-(\\overline{\\bar z_3z_2}+\\bar z_3z_2)=-(\\bar z_1z_2 +\\bar z_2z_3+\\bar z_3z_1)\n$$\nand noticing that the LHS is a real number leads to the desired conclusion.</p>\n" }, { "answer_id": 2109265, "score": 4, "text": "<p>The basic concept for this answer is the area of a triangle. The area of a triangle which is made on vectors $z$ and $w$ is $A=\\dfrac12{\\bf Im}(\\bar{z}w)$. 
for three points in complex plane, we make a triangle on them with sides $z_1-z_2$, $z_2-z_3$ and $z_3-z_1$, we have\n\\begin{eqnarray}\n\\dfrac12{\\bf Im}\\overline{(z_1-z_2)}(z_2-z_3)&amp;=&amp;0\\\\\n{\\bf Im}\\overline{z_1}z_2+\\overline{z_2}z_3-\\overline{z_2}z_2-\\overline{z_1}z_3&amp;=&amp;0\\\\\n{\\bf Im}\\overline{z_1}z_2+\\overline{z_2}z_3+{\\bf Im}\\overline{z_3}z_1&amp;=&amp;{\\bf Im}\\overline{z_2}z_2+\\overline{z_1}z_3+{\\bf Im}\\overline{z_3}z_1\\\\\n{\\bf Im}\\overline{z_1}z_2+\\overline{z_2}z_3+\\overline{z_3}z_1&amp;=&amp;0\n\\end{eqnarray}</p>\n\n<p>$z_1$, $z_2$ and $z_3$ are collinear, if and only if, the area of triangle made on them is zero.</p>\n" }, { "answer_id": 2109430, "score": 2, "text": "<p>Here is another approach that relies on the idea that its easy to detect if three complex numbers lie on a line through the origin, and we attempt to reduce the original problem to this simpler one. </p>\n\n<p>To this end, we might hope that translating our complex numbers (in this case, subtracting $z_1$ from each) doesn't affect whether the function $f(z_1, z_2, z_3) = \\overline{z_1}z_2 + \\overline{z_2}{z_3} + \\overline{z_3}{z_1}$ takes a real value or not. 
</p>\n\n<p>And indeed, we'll see that</p>\n\n<p>$$\\operatorname{Im}\\Big(f(z_1, z_2, z_3)\\Big) = \\operatorname{Im}\\Big(f(0, z_2 - z_1, z_3 - z_1)\\Big).$$</p>\n\n<p>Observe that</p>\n\n<p>\\begin{align*}\n\\operatorname{Im}\\Big(f(0, z_2 - z_1, z_3 - z_1)\\Big)\n&amp;= \\operatorname{Im}\\Big(\\overline{z_2 - z_1}(z_3 - z_1)\\Big) \\\\[7pt]\n&amp;= \\operatorname{Im}\\Big(\\overline{z_2}z_3 - \\overline{z_1}z_3 - \\overline{z_2}z_1 + \\overline{z_1}{z_1}\\Big) \\\\[7pt]\n&amp;= \\operatorname{Im}\\big(\\overline{z_2}z_3) - \\operatorname{Im}\\big(\\overline{z_1}z_3\\big) - \\operatorname{Im}\\big(\\overline{z_2}z_1\\big) + \\underbrace{\\operatorname{Im}\\big(|z_1|^2\\big)}_{=0}\n\\end{align*}</p>\n\n<p>where $\\overline{z_1}{z_3} = \\overline{z_1\\overline{z_3}}$ so the two have opposite imaginary parts; that is, $\\operatorname{Im}\\big(z_1\\overline{z_3}) = - \\operatorname{Im}(\\overline{z_1}z_3)$. Now picking up where we left off,</p>\n\n<p>\\begin{align*}\n\\operatorname{Im}\\Big(f(0, z_2 - z_1, z_3 - z_1)\\Big)\n&amp;= \\operatorname{Im}\\big(\\overline{z_2}z_3) - \\operatorname{Im}\\big(\\overline{z_1}z_3\\big) - \\operatorname{Im}\\big(\\overline{z_2}z_3\\big) \\\\[7pt]\n&amp;= \\operatorname{Im}\\big(\\overline{z_2}z_3) + \\operatorname{Im}\\big(\\overline{z_3}z_1\\big) + \\operatorname{Im}\\big(\\overline{z_1}z_2\\big)\\\\\n&amp;= \\operatorname{Im}\\Big(f(z_1, z_2, z_3)\\Big)\n\\end{align*}</p>\n\n<p>Now we simply need to show that $z_1, z_2, z_3$ are collinear if and only if $f(0, z_2 - z_1, z_3 - z_1) \\in \\Bbb R$.</p>\n\n<p>Letting $w_2 = z_2 - z_1$ and $w_3 = z_3 - z_1$, this is equivalent to showing that $0, w_1, w_2$ are collinear if and only if $f(0, w_2, w_3) \\in \\Bbb R$. 
</p>\n\n<p>But since $f(0, w_2, w_3) = \\overline{w_2}w_3$, this is just the well-known fact that $\\overline{w_2}w_3 \\in \\Bbb R$ if and only if $w_3 = c w_2$ for some real scalar $c$.</p>\n" }, { "answer_id": 2109247, "score": 4, "text": "<p>We know that $z_1,z_2,z_3$ are collinear iff there exists some $t\\in\\mathbb{R}$ such that $$t(z_1-z_3)=z_1-z_2.$$\nThe problem becomes trivial if $z_1=z_3$ so we can suppose this is not the case, and then it is legal to write \n$$\nt=\\frac{z_1-z_2}{z_1-z_3}.\n$$\nMultiplying both sides of this equation by $|z_1-z_3|^2=(\\overline{z_1-z_3})(z_1-z_3)$ we get\n\\begin{align}\nt|z_1-z_3|^2=(\\overline{z_1-z_3})(z_1-z_2)=&amp;|z_1|^2-\\bar z_1z_2 -\\bar z_3z_1+\\bar z_3z_2\\\\\n=&amp;|z_1|^2-\\bar z_1z_2 -\\bar z_3z_1\\underbrace{-\\bar z_2z_3+\\bar z_2z_3}\n_{=0}+\\bar z_3z_2.\\\\\n\\end{align}\nExpressing this as\n$$\nt|z_1-z_3|^2-|z_1|^2-(\\overline{\\bar z_3z_2}+\\bar z_3z_2)=-(\\bar z_1z_2 +\\bar z_2z_3+\\bar z_3z_1)\n$$\nand noticing that the LHS is a real number leads to the desired conclusion.</p>\n" }, { "answer_id": 2109265, "score": 4, "text": "<p>The basic concept for this answer is the area of a triangle. The area of a triangle which is made on vectors $z$ and $w$ is $A=\\dfrac12{\\bf Im}(\\bar{z}w)$. 
for three points in complex plane, we make a triangle on them with sides $z_1-z_2$, $z_2-z_3$ and $z_3-z_1$, we have\n\\begin{eqnarray}\n\\dfrac12{\\bf Im}\\overline{(z_1-z_2)}(z_2-z_3)&amp;=&amp;0\\\\\n{\\bf Im}\\overline{z_1}z_2+\\overline{z_2}z_3-\\overline{z_2}z_2-\\overline{z_1}z_3&amp;=&amp;0\\\\\n{\\bf Im}\\overline{z_1}z_2+\\overline{z_2}z_3+{\\bf Im}\\overline{z_3}z_1&amp;=&amp;{\\bf Im}\\overline{z_2}z_2+\\overline{z_1}z_3+{\\bf Im}\\overline{z_3}z_1\\\\\n{\\bf Im}\\overline{z_1}z_2+\\overline{z_2}z_3+\\overline{z_3}z_1&amp;=&amp;0\n\\end{eqnarray}</p>\n\n<p>$z_1$, $z_2$ and $z_3$ are collinear, if and only if, the area of triangle made on them is zero.</p>\n" }, { "answer_id": 2109430, "score": 2, "text": "<p>Here is another approach that relies on the idea that its easy to detect if three complex numbers lie on a line through the origin, and we attempt to reduce the original problem to this simpler one. </p>\n\n<p>To this end, we might hope that translating our complex numbers (in this case, subtracting $z_1$ from each) doesn't affect whether the function $f(z_1, z_2, z_3) = \\overline{z_1}z_2 + \\overline{z_2}{z_3} + \\overline{z_3}{z_1}$ takes a real value or not. 
</p>\n\n<p>And indeed, we'll see that</p>\n\n<p>$$\\operatorname{Im}\\Big(f(z_1, z_2, z_3)\\Big) = \\operatorname{Im}\\Big(f(0, z_2 - z_1, z_3 - z_1)\\Big).$$</p>\n\n<p>Observe that</p>\n\n<p>\\begin{align*}\n\\operatorname{Im}\\Big(f(0, z_2 - z_1, z_3 - z_1)\\Big)\n&amp;= \\operatorname{Im}\\Big(\\overline{z_2 - z_1}(z_3 - z_1)\\Big) \\\\[7pt]\n&amp;= \\operatorname{Im}\\Big(\\overline{z_2}z_3 - \\overline{z_1}z_3 - \\overline{z_2}z_1 + \\overline{z_1}{z_1}\\Big) \\\\[7pt]\n&amp;= \\operatorname{Im}\\big(\\overline{z_2}z_3) - \\operatorname{Im}\\big(\\overline{z_1}z_3\\big) - \\operatorname{Im}\\big(\\overline{z_2}z_1\\big) + \\underbrace{\\operatorname{Im}\\big(|z_1|^2\\big)}_{=0}\n\\end{align*}</p>\n\n<p>where $\\overline{z_1}{z_3} = \\overline{z_1\\overline{z_3}}$ so the two have opposite imaginary parts; that is, $\\operatorname{Im}\\big(z_1\\overline{z_3}) = - \\operatorname{Im}(\\overline{z_1}z_3)$. Now picking up where we left off,</p>\n\n<p>\\begin{align*}\n\\operatorname{Im}\\Big(f(0, z_2 - z_1, z_3 - z_1)\\Big)\n&amp;= \\operatorname{Im}\\big(\\overline{z_2}z_3) - \\operatorname{Im}\\big(\\overline{z_1}z_3\\big) - \\operatorname{Im}\\big(\\overline{z_2}z_3\\big) \\\\[7pt]\n&amp;= \\operatorname{Im}\\big(\\overline{z_2}z_3) + \\operatorname{Im}\\big(\\overline{z_3}z_1\\big) + \\operatorname{Im}\\big(\\overline{z_1}z_2\\big)\\\\\n&amp;= \\operatorname{Im}\\Big(f(z_1, z_2, z_3)\\Big)\n\\end{align*}</p>\n\n<p>Now we simply need to show that $z_1, z_2, z_3$ are collinear if and only if $f(0, z_2 - z_1, z_3 - z_1) \\in \\Bbb R$.</p>\n\n<p>Letting $w_2 = z_2 - z_1$ and $w_3 = z_3 - z_1$, this is equivalent to showing that $0, w_1, w_2$ are collinear if and only if $f(0, w_2, w_3) \\in \\Bbb R$. 
</p>\n\n<p>But since $f(0, w_2, w_3) = \\overline{w_2}w_3$, this is just the well-known fact that $\\overline{w_2}w_3 \\in \\Bbb R$ if and only if $w_3 = c w_2$ for some real scalar $c$.</p>\n" }, { "answer_id": 2109247, "score": 4, "text": "<p>We know that $z_1,z_2,z_3$ are collinear iff there exists some $t\\in\\mathbb{R}$ such that $$t(z_1-z_3)=z_1-z_2.$$\nThe problem becomes trivial if $z_1=z_3$ so we can suppose this is not the case, and then it is legal to write \n$$\nt=\\frac{z_1-z_2}{z_1-z_3}.\n$$\nMultiplying both sides of this equation by $|z_1-z_3|^2=(\\overline{z_1-z_3})(z_1-z_3)$ we get\n\\begin{align}\nt|z_1-z_3|^2=(\\overline{z_1-z_3})(z_1-z_2)=&amp;|z_1|^2-\\bar z_1z_2 -\\bar z_3z_1+\\bar z_3z_2\\\\\n=&amp;|z_1|^2-\\bar z_1z_2 -\\bar z_3z_1\\underbrace{-\\bar z_2z_3+\\bar z_2z_3}\n_{=0}+\\bar z_3z_2.\\\\\n\\end{align}\nExpressing this as\n$$\nt|z_1-z_3|^2-|z_1|^2-(\\overline{\\bar z_3z_2}+\\bar z_3z_2)=-(\\bar z_1z_2 +\\bar z_2z_3+\\bar z_3z_1)\n$$\nand noticing that the LHS is a real number leads to the desired conclusion.</p>\n" }, { "answer_id": 2109265, "score": 4, "text": "<p>The basic concept for this answer is the area of a triangle. The area of a triangle which is made on vectors $z$ and $w$ is $A=\\dfrac12{\\bf Im}(\\bar{z}w)$. 
for three points in complex plane, we make a triangle on them with sides $z_1-z_2$, $z_2-z_3$ and $z_3-z_1$, we have\n\\begin{eqnarray}\n\\dfrac12{\\bf Im}\\overline{(z_1-z_2)}(z_2-z_3)&amp;=&amp;0\\\\\n{\\bf Im}\\overline{z_1}z_2+\\overline{z_2}z_3-\\overline{z_2}z_2-\\overline{z_1}z_3&amp;=&amp;0\\\\\n{\\bf Im}\\overline{z_1}z_2+\\overline{z_2}z_3+{\\bf Im}\\overline{z_3}z_1&amp;=&amp;{\\bf Im}\\overline{z_2}z_2+\\overline{z_1}z_3+{\\bf Im}\\overline{z_3}z_1\\\\\n{\\bf Im}\\overline{z_1}z_2+\\overline{z_2}z_3+\\overline{z_3}z_1&amp;=&amp;0\n\\end{eqnarray}</p>\n\n<p>$z_1$, $z_2$ and $z_3$ are collinear, if and only if, the area of triangle made on them is zero.</p>\n" }, { "answer_id": 2109430, "score": 2, "text": "<p>Here is another approach that relies on the idea that its easy to detect if three complex numbers lie on a line through the origin, and we attempt to reduce the original problem to this simpler one. </p>\n\n<p>To this end, we might hope that translating our complex numbers (in this case, subtracting $z_1$ from each) doesn't affect whether the function $f(z_1, z_2, z_3) = \\overline{z_1}z_2 + \\overline{z_2}{z_3} + \\overline{z_3}{z_1}$ takes a real value or not. 
</p>\n\n<p>And indeed, we'll see that</p>\n\n<p>$$\\operatorname{Im}\\Big(f(z_1, z_2, z_3)\\Big) = \\operatorname{Im}\\Big(f(0, z_2 - z_1, z_3 - z_1)\\Big).$$</p>\n\n<p>Observe that</p>\n\n<p>\\begin{align*}\n\\operatorname{Im}\\Big(f(0, z_2 - z_1, z_3 - z_1)\\Big)\n&amp;= \\operatorname{Im}\\Big(\\overline{z_2 - z_1}(z_3 - z_1)\\Big) \\\\[7pt]\n&amp;= \\operatorname{Im}\\Big(\\overline{z_2}z_3 - \\overline{z_1}z_3 - \\overline{z_2}z_1 + \\overline{z_1}{z_1}\\Big) \\\\[7pt]\n&amp;= \\operatorname{Im}\\big(\\overline{z_2}z_3) - \\operatorname{Im}\\big(\\overline{z_1}z_3\\big) - \\operatorname{Im}\\big(\\overline{z_2}z_1\\big) + \\underbrace{\\operatorname{Im}\\big(|z_1|^2\\big)}_{=0}\n\\end{align*}</p>\n\n<p>where $\\overline{z_1}{z_3} = \\overline{z_1\\overline{z_3}}$ so the two have opposite imaginary parts; that is, $\\operatorname{Im}\\big(z_1\\overline{z_3}) = - \\operatorname{Im}(\\overline{z_1}z_3)$. Now picking up where we left off,</p>\n\n<p>\\begin{align*}\n\\operatorname{Im}\\Big(f(0, z_2 - z_1, z_3 - z_1)\\Big)\n&amp;= \\operatorname{Im}\\big(\\overline{z_2}z_3) - \\operatorname{Im}\\big(\\overline{z_1}z_3\\big) - \\operatorname{Im}\\big(\\overline{z_2}z_3\\big) \\\\[7pt]\n&amp;= \\operatorname{Im}\\big(\\overline{z_2}z_3) + \\operatorname{Im}\\big(\\overline{z_3}z_1\\big) + \\operatorname{Im}\\big(\\overline{z_1}z_2\\big)\\\\\n&amp;= \\operatorname{Im}\\Big(f(z_1, z_2, z_3)\\Big)\n\\end{align*}</p>\n\n<p>Now we simply need to show that $z_1, z_2, z_3$ are collinear if and only if $f(0, z_2 - z_1, z_3 - z_1) \\in \\Bbb R$.</p>\n\n<p>Letting $w_2 = z_2 - z_1$ and $w_3 = z_3 - z_1$, this is equivalent to showing that $0, w_1, w_2$ are collinear if and only if $f(0, w_2, w_3) \\in \\Bbb R$. 
</p>\n\n<p>But since $f(0, w_2, w_3) = \\overline{w_2}w_3$, this is just the well-known fact that $\\overline{w_2}w_3 \\in \\Bbb R$ if and only if $w_3 = c w_2$ for some real scalar $c$.</p>\n" }, { "answer_id": 2109247, "score": 4, "text": "<p>We know that $z_1,z_2,z_3$ are collinear iff there exists some $t\\in\\mathbb{R}$ such that $$t(z_1-z_3)=z_1-z_2.$$\nThe problem becomes trivial if $z_1=z_3$ so we can suppose this is not the case, and then it is legal to write \n$$\nt=\\frac{z_1-z_2}{z_1-z_3}.\n$$\nMultiplying both sides of this equation by $|z_1-z_3|^2=(\\overline{z_1-z_3})(z_1-z_3)$ we get\n\\begin{align}\nt|z_1-z_3|^2=(\\overline{z_1-z_3})(z_1-z_2)=&amp;|z_1|^2-\\bar z_1z_2 -\\bar z_3z_1+\\bar z_3z_2\\\\\n=&amp;|z_1|^2-\\bar z_1z_2 -\\bar z_3z_1\\underbrace{-\\bar z_2z_3+\\bar z_2z_3}\n_{=0}+\\bar z_3z_2.\\\\\n\\end{align}\nExpressing this as\n$$\nt|z_1-z_3|^2-|z_1|^2-(\\overline{\\bar z_3z_2}+\\bar z_3z_2)=-(\\bar z_1z_2 +\\bar z_2z_3+\\bar z_3z_1)\n$$\nand noticing that the LHS is a real number leads to the desired conclusion.</p>\n" }, { "answer_id": 2109265, "score": 4, "text": "<p>The basic concept for this answer is the area of a triangle. The area of a triangle which is made on vectors $z$ and $w$ is $A=\\dfrac12{\\bf Im}(\\bar{z}w)$. 
for three points in complex plane, we make a triangle on them with sides $z_1-z_2$, $z_2-z_3$ and $z_3-z_1$, we have\n\\begin{eqnarray}\n\\dfrac12{\\bf Im}\\overline{(z_1-z_2)}(z_2-z_3)&amp;=&amp;0\\\\\n{\\bf Im}\\overline{z_1}z_2+\\overline{z_2}z_3-\\overline{z_2}z_2-\\overline{z_1}z_3&amp;=&amp;0\\\\\n{\\bf Im}\\overline{z_1}z_2+\\overline{z_2}z_3+{\\bf Im}\\overline{z_3}z_1&amp;=&amp;{\\bf Im}\\overline{z_2}z_2+\\overline{z_1}z_3+{\\bf Im}\\overline{z_3}z_1\\\\\n{\\bf Im}\\overline{z_1}z_2+\\overline{z_2}z_3+\\overline{z_3}z_1&amp;=&amp;0\n\\end{eqnarray}</p>\n\n<p>$z_1$, $z_2$ and $z_3$ are collinear, if and only if, the area of triangle made on them is zero.</p>\n" }, { "answer_id": 2109430, "score": 2, "text": "<p>Here is another approach that relies on the idea that its easy to detect if three complex numbers lie on a line through the origin, and we attempt to reduce the original problem to this simpler one. </p>\n\n<p>To this end, we might hope that translating our complex numbers (in this case, subtracting $z_1$ from each) doesn't affect whether the function $f(z_1, z_2, z_3) = \\overline{z_1}z_2 + \\overline{z_2}{z_3} + \\overline{z_3}{z_1}$ takes a real value or not. 
</p>\n\n<p>And indeed, we'll see that</p>\n\n<p>$$\\operatorname{Im}\\Big(f(z_1, z_2, z_3)\\Big) = \\operatorname{Im}\\Big(f(0, z_2 - z_1, z_3 - z_1)\\Big).$$</p>\n\n<p>Observe that</p>\n\n<p>\\begin{align*}\n\\operatorname{Im}\\Big(f(0, z_2 - z_1, z_3 - z_1)\\Big)\n&amp;= \\operatorname{Im}\\Big(\\overline{z_2 - z_1}(z_3 - z_1)\\Big) \\\\[7pt]\n&amp;= \\operatorname{Im}\\Big(\\overline{z_2}z_3 - \\overline{z_1}z_3 - \\overline{z_2}z_1 + \\overline{z_1}{z_1}\\Big) \\\\[7pt]\n&amp;= \\operatorname{Im}\\big(\\overline{z_2}z_3) - \\operatorname{Im}\\big(\\overline{z_1}z_3\\big) - \\operatorname{Im}\\big(\\overline{z_2}z_1\\big) + \\underbrace{\\operatorname{Im}\\big(|z_1|^2\\big)}_{=0}\n\\end{align*}</p>\n\n<p>where $\\overline{z_1}{z_3} = \\overline{z_1\\overline{z_3}}$ so the two have opposite imaginary parts; that is, $\\operatorname{Im}\\big(z_1\\overline{z_3}) = - \\operatorname{Im}(\\overline{z_1}z_3)$. Now picking up where we left off,</p>\n\n<p>\\begin{align*}\n\\operatorname{Im}\\Big(f(0, z_2 - z_1, z_3 - z_1)\\Big)\n&amp;= \\operatorname{Im}\\big(\\overline{z_2}z_3) - \\operatorname{Im}\\big(\\overline{z_1}z_3\\big) - \\operatorname{Im}\\big(\\overline{z_2}z_3\\big) \\\\[7pt]\n&amp;= \\operatorname{Im}\\big(\\overline{z_2}z_3) + \\operatorname{Im}\\big(\\overline{z_3}z_1\\big) + \\operatorname{Im}\\big(\\overline{z_1}z_2\\big)\\\\\n&amp;= \\operatorname{Im}\\Big(f(z_1, z_2, z_3)\\Big)\n\\end{align*}</p>\n\n<p>Now we simply need to show that $z_1, z_2, z_3$ are collinear if and only if $f(0, z_2 - z_1, z_3 - z_1) \\in \\Bbb R$.</p>\n\n<p>Letting $w_2 = z_2 - z_1$ and $w_3 = z_3 - z_1$, this is equivalent to showing that $0, w_1, w_2$ are collinear if and only if $f(0, w_2, w_3) \\in \\Bbb R$. 
</p>\n\n<p>But since $f(0, w_2, w_3) = \\overline{w_2}w_3$, this is just the well-known fact that $\\overline{w_2}w_3 \\in \\Bbb R$ if and only if $w_3 = c w_2$ for some real scalar $c$.</p>\n" }, { "answer_id": 2109247, "score": 4, "text": "<p>We know that $z_1,z_2,z_3$ are collinear iff there exists some $t\\in\\mathbb{R}$ such that $$t(z_1-z_3)=z_1-z_2.$$\nThe problem becomes trivial if $z_1=z_3$ so we can suppose this is not the case, and then it is legal to write \n$$\nt=\\frac{z_1-z_2}{z_1-z_3}.\n$$\nMultiplying both sides of this equation by $|z_1-z_3|^2=(\\overline{z_1-z_3})(z_1-z_3)$ we get\n\\begin{align}\nt|z_1-z_3|^2=(\\overline{z_1-z_3})(z_1-z_2)=&amp;|z_1|^2-\\bar z_1z_2 -\\bar z_3z_1+\\bar z_3z_2\\\\\n=&amp;|z_1|^2-\\bar z_1z_2 -\\bar z_3z_1\\underbrace{-\\bar z_2z_3+\\bar z_2z_3}\n_{=0}+\\bar z_3z_2.\\\\\n\\end{align}\nExpressing this as\n$$\nt|z_1-z_3|^2-|z_1|^2-(\\overline{\\bar z_3z_2}+\\bar z_3z_2)=-(\\bar z_1z_2 +\\bar z_2z_3+\\bar z_3z_1)\n$$\nand noticing that the LHS is a real number leads to the desired conclusion.</p>\n" }, { "answer_id": 2109265, "score": 4, "text": "<p>The basic concept for this answer is the area of a triangle. The area of a triangle which is made on vectors $z$ and $w$ is $A=\\dfrac12{\\bf Im}(\\bar{z}w)$. 
for three points in complex plane, we make a triangle on them with sides $z_1-z_2$, $z_2-z_3$ and $z_3-z_1$, we have\n\\begin{eqnarray}\n\\dfrac12{\\bf Im}\\overline{(z_1-z_2)}(z_2-z_3)&amp;=&amp;0\\\\\n{\\bf Im}\\overline{z_1}z_2+\\overline{z_2}z_3-\\overline{z_2}z_2-\\overline{z_1}z_3&amp;=&amp;0\\\\\n{\\bf Im}\\overline{z_1}z_2+\\overline{z_2}z_3+{\\bf Im}\\overline{z_3}z_1&amp;=&amp;{\\bf Im}\\overline{z_2}z_2+\\overline{z_1}z_3+{\\bf Im}\\overline{z_3}z_1\\\\\n{\\bf Im}\\overline{z_1}z_2+\\overline{z_2}z_3+\\overline{z_3}z_1&amp;=&amp;0\n\\end{eqnarray}</p>\n\n<p>$z_1$, $z_2$ and $z_3$ are collinear, if and only if, the area of triangle made on them is zero.</p>\n" }, { "answer_id": 2109430, "score": 2, "text": "<p>Here is another approach that relies on the idea that its easy to detect if three complex numbers lie on a line through the origin, and we attempt to reduce the original problem to this simpler one. </p>\n\n<p>To this end, we might hope that translating our complex numbers (in this case, subtracting $z_1$ from each) doesn't affect whether the function $f(z_1, z_2, z_3) = \\overline{z_1}z_2 + \\overline{z_2}{z_3} + \\overline{z_3}{z_1}$ takes a real value or not. 
</p>\n\n<p>And indeed, we'll see that</p>\n\n<p>$$\\operatorname{Im}\\Big(f(z_1, z_2, z_3)\\Big) = \\operatorname{Im}\\Big(f(0, z_2 - z_1, z_3 - z_1)\\Big).$$</p>\n\n<p>Observe that</p>\n\n<p>\\begin{align*}\n\\operatorname{Im}\\Big(f(0, z_2 - z_1, z_3 - z_1)\\Big)\n&amp;= \\operatorname{Im}\\Big(\\overline{z_2 - z_1}(z_3 - z_1)\\Big) \\\\[7pt]\n&amp;= \\operatorname{Im}\\Big(\\overline{z_2}z_3 - \\overline{z_1}z_3 - \\overline{z_2}z_1 + \\overline{z_1}{z_1}\\Big) \\\\[7pt]\n&amp;= \\operatorname{Im}\\big(\\overline{z_2}z_3) - \\operatorname{Im}\\big(\\overline{z_1}z_3\\big) - \\operatorname{Im}\\big(\\overline{z_2}z_1\\big) + \\underbrace{\\operatorname{Im}\\big(|z_1|^2\\big)}_{=0}\n\\end{align*}</p>\n\n<p>where $\\overline{z_1}{z_3} = \\overline{z_1\\overline{z_3}}$ so the two have opposite imaginary parts; that is, $\\operatorname{Im}\\big(z_1\\overline{z_3}) = - \\operatorname{Im}(\\overline{z_1}z_3)$. Now picking up where we left off,</p>\n\n<p>\\begin{align*}\n\\operatorname{Im}\\Big(f(0, z_2 - z_1, z_3 - z_1)\\Big)\n&amp;= \\operatorname{Im}\\big(\\overline{z_2}z_3) - \\operatorname{Im}\\big(\\overline{z_1}z_3\\big) - \\operatorname{Im}\\big(\\overline{z_2}z_3\\big) \\\\[7pt]\n&amp;= \\operatorname{Im}\\big(\\overline{z_2}z_3) + \\operatorname{Im}\\big(\\overline{z_3}z_1\\big) + \\operatorname{Im}\\big(\\overline{z_1}z_2\\big)\\\\\n&amp;= \\operatorname{Im}\\Big(f(z_1, z_2, z_3)\\Big)\n\\end{align*}</p>\n\n<p>Now we simply need to show that $z_1, z_2, z_3$ are collinear if and only if $f(0, z_2 - z_1, z_3 - z_1) \\in \\Bbb R$.</p>\n\n<p>Letting $w_2 = z_2 - z_1$ and $w_3 = z_3 - z_1$, this is equivalent to showing that $0, w_1, w_2$ are collinear if and only if $f(0, w_2, w_3) \\in \\Bbb R$. 
</p>\n\n<p>But since $f(0, w_2, w_3) = \\overline{w_2}w_3$, this is just the well-known fact that $\\overline{w_2}w_3 \\in \\Bbb R$ if and only if $w_3 = c w_2$ for some real scalar $c$.</p>\n" }, { "answer_id": 2109247, "score": 4, "text": "<p>We know that $z_1,z_2,z_3$ are collinear iff there exists some $t\\in\\mathbb{R}$ such that $$t(z_1-z_3)=z_1-z_2.$$\nThe problem becomes trivial if $z_1=z_3$ so we can suppose this is not the case, and then it is legal to write \n$$\nt=\\frac{z_1-z_2}{z_1-z_3}.\n$$\nMultiplying both sides of this equation by $|z_1-z_3|^2=(\\overline{z_1-z_3})(z_1-z_3)$ we get\n\\begin{align}\nt|z_1-z_3|^2=(\\overline{z_1-z_3})(z_1-z_2)=&amp;|z_1|^2-\\bar z_1z_2 -\\bar z_3z_1+\\bar z_3z_2\\\\\n=&amp;|z_1|^2-\\bar z_1z_2 -\\bar z_3z_1\\underbrace{-\\bar z_2z_3+\\bar z_2z_3}\n_{=0}+\\bar z_3z_2.\\\\\n\\end{align}\nExpressing this as\n$$\nt|z_1-z_3|^2-|z_1|^2-(\\overline{\\bar z_3z_2}+\\bar z_3z_2)=-(\\bar z_1z_2 +\\bar z_2z_3+\\bar z_3z_1)\n$$\nand noticing that the LHS is a real number leads to the desired conclusion.</p>\n" }, { "answer_id": 2109265, "score": 4, "text": "<p>The basic concept for this answer is the area of a triangle. The area of a triangle which is made on vectors $z$ and $w$ is $A=\\dfrac12{\\bf Im}(\\bar{z}w)$. 
for three points in complex plane, we make a triangle on them with sides $z_1-z_2$, $z_2-z_3$ and $z_3-z_1$, we have\n\\begin{eqnarray}\n\\dfrac12{\\bf Im}\\overline{(z_1-z_2)}(z_2-z_3)&amp;=&amp;0\\\\\n{\\bf Im}\\overline{z_1}z_2+\\overline{z_2}z_3-\\overline{z_2}z_2-\\overline{z_1}z_3&amp;=&amp;0\\\\\n{\\bf Im}\\overline{z_1}z_2+\\overline{z_2}z_3+{\\bf Im}\\overline{z_3}z_1&amp;=&amp;{\\bf Im}\\overline{z_2}z_2+\\overline{z_1}z_3+{\\bf Im}\\overline{z_3}z_1\\\\\n{\\bf Im}\\overline{z_1}z_2+\\overline{z_2}z_3+\\overline{z_3}z_1&amp;=&amp;0\n\\end{eqnarray}</p>\n\n<p>$z_1$, $z_2$ and $z_3$ are collinear, if and only if, the area of triangle made on them is zero.</p>\n" }, { "answer_id": 2109430, "score": 2, "text": "<p>Here is another approach that relies on the idea that its easy to detect if three complex numbers lie on a line through the origin, and we attempt to reduce the original problem to this simpler one. </p>\n\n<p>To this end, we might hope that translating our complex numbers (in this case, subtracting $z_1$ from each) doesn't affect whether the function $f(z_1, z_2, z_3) = \\overline{z_1}z_2 + \\overline{z_2}{z_3} + \\overline{z_3}{z_1}$ takes a real value or not. 
</p>\n\n<p>And indeed, we'll see that</p>\n\n<p>$$\\operatorname{Im}\\Big(f(z_1, z_2, z_3)\\Big) = \\operatorname{Im}\\Big(f(0, z_2 - z_1, z_3 - z_1)\\Big).$$</p>\n\n<p>Observe that</p>\n\n<p>\\begin{align*}\n\\operatorname{Im}\\Big(f(0, z_2 - z_1, z_3 - z_1)\\Big)\n&amp;= \\operatorname{Im}\\Big(\\overline{z_2 - z_1}(z_3 - z_1)\\Big) \\\\[7pt]\n&amp;= \\operatorname{Im}\\Big(\\overline{z_2}z_3 - \\overline{z_1}z_3 - \\overline{z_2}z_1 + \\overline{z_1}{z_1}\\Big) \\\\[7pt]\n&amp;= \\operatorname{Im}\\big(\\overline{z_2}z_3) - \\operatorname{Im}\\big(\\overline{z_1}z_3\\big) - \\operatorname{Im}\\big(\\overline{z_2}z_1\\big) + \\underbrace{\\operatorname{Im}\\big(|z_1|^2\\big)}_{=0}\n\\end{align*}</p>\n\n<p>where $\\overline{z_1}{z_3} = \\overline{z_1\\overline{z_3}}$ so the two have opposite imaginary parts; that is, $\\operatorname{Im}\\big(z_1\\overline{z_3}) = - \\operatorname{Im}(\\overline{z_1}z_3)$. Now picking up where we left off,</p>\n\n<p>\\begin{align*}\n\\operatorname{Im}\\Big(f(0, z_2 - z_1, z_3 - z_1)\\Big)\n&amp;= \\operatorname{Im}\\big(\\overline{z_2}z_3) - \\operatorname{Im}\\big(\\overline{z_1}z_3\\big) - \\operatorname{Im}\\big(\\overline{z_2}z_3\\big) \\\\[7pt]\n&amp;= \\operatorname{Im}\\big(\\overline{z_2}z_3) + \\operatorname{Im}\\big(\\overline{z_3}z_1\\big) + \\operatorname{Im}\\big(\\overline{z_1}z_2\\big)\\\\\n&amp;= \\operatorname{Im}\\Big(f(z_1, z_2, z_3)\\Big)\n\\end{align*}</p>\n\n<p>Now we simply need to show that $z_1, z_2, z_3$ are collinear if and only if $f(0, z_2 - z_1, z_3 - z_1) \\in \\Bbb R$.</p>\n\n<p>Letting $w_2 = z_2 - z_1$ and $w_3 = z_3 - z_1$, this is equivalent to showing that $0, w_1, w_2$ are collinear if and only if $f(0, w_2, w_3) \\in \\Bbb R$. 
</p>\n\n<p>But since $f(0, w_2, w_3) = \\overline{w_2}w_3$, this is just the well-known fact that $\\overline{w_2}w_3 \\in \\Bbb R$ if and only if $w_3 = c w_2$ for some real scalar $c$.</p>\n" }, { "answer_id": 2109247, "score": 4, "text": "<p>We know that $z_1,z_2,z_3$ are collinear iff there exists some $t\\in\\mathbb{R}$ such that $$t(z_1-z_3)=z_1-z_2.$$\nThe problem becomes trivial if $z_1=z_3$ so we can suppose this is not the case, and then it is legal to write \n$$\nt=\\frac{z_1-z_2}{z_1-z_3}.\n$$\nMultiplying both sides of this equation by $|z_1-z_3|^2=(\\overline{z_1-z_3})(z_1-z_3)$ we get\n\\begin{align}\nt|z_1-z_3|^2=(\\overline{z_1-z_3})(z_1-z_2)=&amp;|z_1|^2-\\bar z_1z_2 -\\bar z_3z_1+\\bar z_3z_2\\\\\n=&amp;|z_1|^2-\\bar z_1z_2 -\\bar z_3z_1\\underbrace{-\\bar z_2z_3+\\bar z_2z_3}\n_{=0}+\\bar z_3z_2.\\\\\n\\end{align}\nExpressing this as\n$$\nt|z_1-z_3|^2-|z_1|^2-(\\overline{\\bar z_3z_2}+\\bar z_3z_2)=-(\\bar z_1z_2 +\\bar z_2z_3+\\bar z_3z_1)\n$$\nand noticing that the LHS is a real number leads to the desired conclusion.</p>\n" }, { "answer_id": 2109265, "score": 4, "text": "<p>The basic concept for this answer is the area of a triangle. The area of a triangle which is made on vectors $z$ and $w$ is $A=\\dfrac12{\\bf Im}(\\bar{z}w)$. 
for three points in complex plane, we make a triangle on them with sides $z_1-z_2$, $z_2-z_3$ and $z_3-z_1$, we have\n\\begin{eqnarray}\n\\dfrac12{\\bf Im}\\overline{(z_1-z_2)}(z_2-z_3)&amp;=&amp;0\\\\\n{\\bf Im}\\overline{z_1}z_2+\\overline{z_2}z_3-\\overline{z_2}z_2-\\overline{z_1}z_3&amp;=&amp;0\\\\\n{\\bf Im}\\overline{z_1}z_2+\\overline{z_2}z_3+{\\bf Im}\\overline{z_3}z_1&amp;=&amp;{\\bf Im}\\overline{z_2}z_2+\\overline{z_1}z_3+{\\bf Im}\\overline{z_3}z_1\\\\\n{\\bf Im}\\overline{z_1}z_2+\\overline{z_2}z_3+\\overline{z_3}z_1&amp;=&amp;0\n\\end{eqnarray}</p>\n\n<p>$z_1$, $z_2$ and $z_3$ are collinear, if and only if, the area of triangle made on them is zero.</p>\n" } ]
Question: Show that three complex numbers $z_1, z_2, z_3$ are collinear iff $\operatorname{Im}(\overline{z_1}z_2+\overline{z_2}z_3+\overline{z_3}z_1) = 0$ Answer 1: <p>We know that $z_1,z_2,z_3$ are collinear iff there exists some $t\in\mathbb{R}$ such that $$t(z_1-z_3)=z_1-z_2.$$ The problem becomes trivial if $z_1=z_3$, so we can suppose this is not the case, and then it is legal to write $$ t=\frac{z_1-z_2}{z_1-z_3}. $$ Multiplying both sides of this equation by $|z_1-z_3|^2=(\overline{z_1-z_3})(z_1-z_3)$ we get \begin{align} t|z_1-z_3|^2=(\overline{z_1-z_3})(z_1-z_2)=&amp;|z_1|^2-\bar z_1z_2 -\bar z_3z_1+\bar z_3z_2\\ =&amp;|z_1|^2-\bar z_1z_2 -\bar z_3z_1\underbrace{-\bar z_2z_3+\bar z_2z_3} _{=0}+\bar z_3z_2.\\ \end{align} Expressing this as $$ t|z_1-z_3|^2-|z_1|^2-(\overline{\bar z_3z_2}+\bar z_3z_2)=-(\bar z_1z_2 +\bar z_2z_3+\bar z_3z_1) $$ and noticing that the LHS is a real number leads to the desired conclusion.</p> Answer 2: <p>The basic concept for this answer is the area of a triangle. The signed area of the triangle spanned by the vectors $z$ and $w$ is $A=\dfrac12{\bf Im}(\bar{z}w)$. For three collinear points in the complex plane, the triangle with sides $z_1-z_2$, $z_2-z_3$ and $z_3-z_1$ has zero area, so \begin{eqnarray} \dfrac12{\bf Im}\left(\overline{(z_1-z_2)}(z_2-z_3)\right)&amp;=&amp;0\\ {\bf Im}\left(\overline{z_1}z_2+\overline{z_2}z_3-\overline{z_2}z_2-\overline{z_1}z_3\right)&amp;=&amp;0\\ {\bf Im}\left(\overline{z_1}z_2+\overline{z_2}z_3\right)+{\bf Im}\left(\overline{z_3}z_1\right)&amp;=&amp;{\bf Im}\left(\overline{z_2}z_2+\overline{z_1}z_3\right)+{\bf Im}\left(\overline{z_3}z_1\right)\\ {\bf Im}\left(\overline{z_1}z_2+\overline{z_2}z_3+\overline{z_3}z_1\right)&amp;=&amp;0 \end{eqnarray}</p> <p>The right-hand side of the third line vanishes because $\overline{z_2}z_2=|z_2|^2$ is real and ${\bf Im}(\overline{z_1}z_3)=-{\bf Im}(\overline{z_3}z_1)$. Thus $z_1$, $z_2$ and $z_3$ are collinear if and only if the area of the triangle built on them is zero.</p> Answer 3: <p>Here is another approach that relies on the idea that it's easy to detect whether three complex numbers lie on a line through the origin, and we attempt to reduce the original problem to this simpler one. 
</p> <p>To this end, we might hope that translating our complex numbers (in this case, subtracting $z_1$ from each) doesn't affect whether the function $f(z_1, z_2, z_3) = \overline{z_1}z_2 + \overline{z_2}{z_3} + \overline{z_3}{z_1}$ takes a real value or not. </p> <p>And indeed, we'll see that</p> <p>$$\operatorname{Im}\Big(f(z_1, z_2, z_3)\Big) = \operatorname{Im}\Big(f(0, z_2 - z_1, z_3 - z_1)\Big).$$</p> <p>Observe that</p> <p>\begin{align*} \operatorname{Im}\Big(f(0, z_2 - z_1, z_3 - z_1)\Big) &amp;= \operatorname{Im}\Big(\overline{z_2 - z_1}(z_3 - z_1)\Big) \\[7pt] &amp;= \operatorname{Im}\Big(\overline{z_2}z_3 - \overline{z_1}z_3 - \overline{z_2}z_1 + \overline{z_1}{z_1}\Big) \\[7pt] &amp;= \operatorname{Im}\big(\overline{z_2}z_3\big) - \operatorname{Im}\big(\overline{z_1}z_3\big) - \operatorname{Im}\big(\overline{z_2}z_1\big) + \underbrace{\operatorname{Im}\big(|z_1|^2\big)}_{=0} \end{align*}</p> <p>where $\overline{z_1}{z_3} = \overline{z_1\overline{z_3}}$ so the two have opposite imaginary parts; that is, $\operatorname{Im}\big(z_1\overline{z_3}\big) = - \operatorname{Im}\big(\overline{z_1}z_3\big)$. Now picking up where we left off,</p> <p>\begin{align*} \operatorname{Im}\Big(f(0, z_2 - z_1, z_3 - z_1)\Big) &amp;= \operatorname{Im}\big(\overline{z_2}z_3\big) - \operatorname{Im}\big(\overline{z_1}z_3\big) - \operatorname{Im}\big(\overline{z_2}z_1\big) \\[7pt] &amp;= \operatorname{Im}\big(\overline{z_2}z_3\big) + \operatorname{Im}\big(\overline{z_3}z_1\big) + \operatorname{Im}\big(\overline{z_1}z_2\big)\\ &amp;= \operatorname{Im}\Big(f(z_1, z_2, z_3)\Big) \end{align*}</p> <p>Now we simply need to show that $z_1, z_2, z_3$ are collinear if and only if $f(0, z_2 - z_1, z_3 - z_1) \in \Bbb R$.</p> <p>Letting $w_2 = z_2 - z_1$ and $w_3 = z_3 - z_1$, this is equivalent to showing that $0, w_2, w_3$ are collinear if and only if $f(0, w_2, w_3) \in \Bbb R$. 
</p> <p>But since $f(0, w_2, w_3) = \overline{w_2}w_3$, this is just the well-known fact that $\overline{w_2}w_3 \in \Bbb R$ if and only if $w_3 = c w_2$ for some real scalar $c$.</p>
</p> <p>But since $f(0, w_2, w_3) = \overline{w_2}w_3$, this is just the well-known fact that $\overline{w_2}w_3 \in \Bbb R$ if and only if $w_3 = c w_2$ for some real scalar $c$.</p> Answer 19: <p>We know that $z_1,z_2,z_3$ are collinear iff there exists some $t\in\mathbb{R}$ such that $$t(z_1-z_3)=z_1-z_2.$$ The problem becomes trivial if $z_1=z_3$ so we can suppose this is not the case, and then it is legal to write $$ t=\frac{z_1-z_2}{z_1-z_3}. $$ Multiplying both sides of this equation by $|z_1-z_3|^2=(\overline{z_1-z_3})(z_1-z_3)$ we get \begin{align} t|z_1-z_3|^2=(\overline{z_1-z_3})(z_1-z_2)=&amp;|z_1|^2-\bar z_1z_2 -\bar z_3z_1+\bar z_3z_2\\ =&amp;|z_1|^2-\bar z_1z_2 -\bar z_3z_1\underbrace{-\bar z_2z_3+\bar z_2z_3} _{=0}+\bar z_3z_2.\\ \end{align} Expressing this as $$ t|z_1-z_3|^2-|z_1|^2-(\overline{\bar z_3z_2}+\bar z_3z_2)=-(\bar z_1z_2 +\bar z_2z_3+\bar z_3z_1) $$ and noticing that the LHS is a real number leads to the desired conclusion.</p> Answer 20: <p>The basic concept for this answer is the area of a triangle. The area of a triangle which is made on vectors $z$ and $w$ is $A=\dfrac12{\bf Im}(\bar{z}w)$. for three points in complex plane, we make a triangle on them with sides $z_1-z_2$, $z_2-z_3$ and $z_3-z_1$, we have \begin{eqnarray} \dfrac12{\bf Im}\overline{(z_1-z_2)}(z_2-z_3)&amp;=&amp;0\\ {\bf Im}\overline{z_1}z_2+\overline{z_2}z_3-\overline{z_2}z_2-\overline{z_1}z_3&amp;=&amp;0\\ {\bf Im}\overline{z_1}z_2+\overline{z_2}z_3+{\bf Im}\overline{z_3}z_1&amp;=&amp;{\bf Im}\overline{z_2}z_2+\overline{z_1}z_3+{\bf Im}\overline{z_3}z_1\\ {\bf Im}\overline{z_1}z_2+\overline{z_2}z_3+\overline{z_3}z_1&amp;=&amp;0 \end{eqnarray}</p> <p>$z_1$, $z_2$ and $z_3$ are collinear, if and only if, the area of triangle made on them is zero.</p> Answer 21: <p>Here is another approach that relies on the idea that its easy to detect if three complex numbers lie on a line through the origin, and we attempt to reduce the original problem to this simpler one. 
</p> <p>To this end, we might hope that translating our complex numbers (in this case, subtracting $z_1$ from each) doesn't affect whether the function $f(z_1, z_2, z_3) = \overline{z_1}z_2 + \overline{z_2}{z_3} + \overline{z_3}{z_1}$ takes a real value or not. </p> <p>And indeed, we'll see that</p> <p>$$\operatorname{Im}\Big(f(z_1, z_2, z_3)\Big) = \operatorname{Im}\Big(f(0, z_2 - z_1, z_3 - z_1)\Big).$$</p> <p>Observe that</p> <p>\begin{align*} \operatorname{Im}\Big(f(0, z_2 - z_1, z_3 - z_1)\Big) &amp;= \operatorname{Im}\Big(\overline{z_2 - z_1}(z_3 - z_1)\Big) \\[7pt] &amp;= \operatorname{Im}\Big(\overline{z_2}z_3 - \overline{z_1}z_3 - \overline{z_2}z_1 + \overline{z_1}{z_1}\Big) \\[7pt] &amp;= \operatorname{Im}\big(\overline{z_2}z_3) - \operatorname{Im}\big(\overline{z_1}z_3\big) - \operatorname{Im}\big(\overline{z_2}z_1\big) + \underbrace{\operatorname{Im}\big(|z_1|^2\big)}_{=0} \end{align*}</p> <p>where $\overline{z_1}{z_3} = \overline{z_1\overline{z_3}}$ so the two have opposite imaginary parts; that is, $\operatorname{Im}\big(z_1\overline{z_3}) = - \operatorname{Im}(\overline{z_1}z_3)$. Now picking up where we left off,</p> <p>\begin{align*} \operatorname{Im}\Big(f(0, z_2 - z_1, z_3 - z_1)\Big) &amp;= \operatorname{Im}\big(\overline{z_2}z_3) - \operatorname{Im}\big(\overline{z_1}z_3\big) - \operatorname{Im}\big(\overline{z_2}z_3\big) \\[7pt] &amp;= \operatorname{Im}\big(\overline{z_2}z_3) + \operatorname{Im}\big(\overline{z_3}z_1\big) + \operatorname{Im}\big(\overline{z_1}z_2\big)\\ &amp;= \operatorname{Im}\Big(f(z_1, z_2, z_3)\Big) \end{align*}</p> <p>Now we simply need to show that $z_1, z_2, z_3$ are collinear if and only if $f(0, z_2 - z_1, z_3 - z_1) \in \Bbb R$.</p> <p>Letting $w_2 = z_2 - z_1$ and $w_3 = z_3 - z_1$, this is equivalent to showing that $0, w_1, w_2$ are collinear if and only if $f(0, w_2, w_3) \in \Bbb R$. 
</p> <p>But since $f(0, w_2, w_3) = \overline{w_2}w_3$, this is just the well-known fact that $\overline{w_2}w_3 \in \Bbb R$ if and only if $w_3 = c w_2$ for some real scalar $c$.</p> Answer 22: <p>We know that $z_1,z_2,z_3$ are collinear iff there exists some $t\in\mathbb{R}$ such that $$t(z_1-z_3)=z_1-z_2.$$ The problem becomes trivial if $z_1=z_3$ so we can suppose this is not the case, and then it is legal to write $$ t=\frac{z_1-z_2}{z_1-z_3}. $$ Multiplying both sides of this equation by $|z_1-z_3|^2=(\overline{z_1-z_3})(z_1-z_3)$ we get \begin{align} t|z_1-z_3|^2=(\overline{z_1-z_3})(z_1-z_2)=&amp;|z_1|^2-\bar z_1z_2 -\bar z_3z_1+\bar z_3z_2\\ =&amp;|z_1|^2-\bar z_1z_2 -\bar z_3z_1\underbrace{-\bar z_2z_3+\bar z_2z_3} _{=0}+\bar z_3z_2.\\ \end{align} Expressing this as $$ t|z_1-z_3|^2-|z_1|^2-(\overline{\bar z_3z_2}+\bar z_3z_2)=-(\bar z_1z_2 +\bar z_2z_3+\bar z_3z_1) $$ and noticing that the LHS is a real number leads to the desired conclusion.</p> Answer 23: <p>The basic concept for this answer is the area of a triangle. The area of a triangle which is made on vectors $z$ and $w$ is $A=\dfrac12{\bf Im}(\bar{z}w)$. for three points in complex plane, we make a triangle on them with sides $z_1-z_2$, $z_2-z_3$ and $z_3-z_1$, we have \begin{eqnarray} \dfrac12{\bf Im}\overline{(z_1-z_2)}(z_2-z_3)&amp;=&amp;0\\ {\bf Im}\overline{z_1}z_2+\overline{z_2}z_3-\overline{z_2}z_2-\overline{z_1}z_3&amp;=&amp;0\\ {\bf Im}\overline{z_1}z_2+\overline{z_2}z_3+{\bf Im}\overline{z_3}z_1&amp;=&amp;{\bf Im}\overline{z_2}z_2+\overline{z_1}z_3+{\bf Im}\overline{z_3}z_1\\ {\bf Im}\overline{z_1}z_2+\overline{z_2}z_3+\overline{z_3}z_1&amp;=&amp;0 \end{eqnarray}</p> <p>$z_1$, $z_2$ and $z_3$ are collinear, if and only if, the area of triangle made on them is zero.</p> Answer 24: <p>Here is another approach that relies on the idea that its easy to detect if three complex numbers lie on a line through the origin, and we attempt to reduce the original problem to this simpler one. 
</p> <p>To this end, we might hope that translating our complex numbers (in this case, subtracting $z_1$ from each) doesn't affect whether the function $f(z_1, z_2, z_3) = \overline{z_1}z_2 + \overline{z_2}{z_3} + \overline{z_3}{z_1}$ takes a real value or not. </p> <p>And indeed, we'll see that</p> <p>$$\operatorname{Im}\Big(f(z_1, z_2, z_3)\Big) = \operatorname{Im}\Big(f(0, z_2 - z_1, z_3 - z_1)\Big).$$</p> <p>Observe that</p> <p>\begin{align*} \operatorname{Im}\Big(f(0, z_2 - z_1, z_3 - z_1)\Big) &amp;= \operatorname{Im}\Big(\overline{z_2 - z_1}(z_3 - z_1)\Big) \\[7pt] &amp;= \operatorname{Im}\Big(\overline{z_2}z_3 - \overline{z_1}z_3 - \overline{z_2}z_1 + \overline{z_1}{z_1}\Big) \\[7pt] &amp;= \operatorname{Im}\big(\overline{z_2}z_3) - \operatorname{Im}\big(\overline{z_1}z_3\big) - \operatorname{Im}\big(\overline{z_2}z_1\big) + \underbrace{\operatorname{Im}\big(|z_1|^2\big)}_{=0} \end{align*}</p> <p>where $\overline{z_1}{z_3} = \overline{z_1\overline{z_3}}$ so the two have opposite imaginary parts; that is, $\operatorname{Im}\big(z_1\overline{z_3}) = - \operatorname{Im}(\overline{z_1}z_3)$. Now picking up where we left off,</p> <p>\begin{align*} \operatorname{Im}\Big(f(0, z_2 - z_1, z_3 - z_1)\Big) &amp;= \operatorname{Im}\big(\overline{z_2}z_3) - \operatorname{Im}\big(\overline{z_1}z_3\big) - \operatorname{Im}\big(\overline{z_2}z_3\big) \\[7pt] &amp;= \operatorname{Im}\big(\overline{z_2}z_3) + \operatorname{Im}\big(\overline{z_3}z_1\big) + \operatorname{Im}\big(\overline{z_1}z_2\big)\\ &amp;= \operatorname{Im}\Big(f(z_1, z_2, z_3)\Big) \end{align*}</p> <p>Now we simply need to show that $z_1, z_2, z_3$ are collinear if and only if $f(0, z_2 - z_1, z_3 - z_1) \in \Bbb R$.</p> <p>Letting $w_2 = z_2 - z_1$ and $w_3 = z_3 - z_1$, this is equivalent to showing that $0, w_1, w_2$ are collinear if and only if $f(0, w_2, w_3) \in \Bbb R$. 
</p> <p>But since $f(0, w_2, w_3) = \overline{w_2}w_3$, this is just the well-known fact that $\overline{w_2}w_3 \in \Bbb R$ if and only if $w_3 = c w_2$ for some real scalar $c$.</p> Answer 25: <p>We know that $z_1,z_2,z_3$ are collinear iff there exists some $t\in\mathbb{R}$ such that $$t(z_1-z_3)=z_1-z_2.$$ The problem becomes trivial if $z_1=z_3$ so we can suppose this is not the case, and then it is legal to write $$ t=\frac{z_1-z_2}{z_1-z_3}. $$ Multiplying both sides of this equation by $|z_1-z_3|^2=(\overline{z_1-z_3})(z_1-z_3)$ we get \begin{align} t|z_1-z_3|^2=(\overline{z_1-z_3})(z_1-z_2)=&amp;|z_1|^2-\bar z_1z_2 -\bar z_3z_1+\bar z_3z_2\\ =&amp;|z_1|^2-\bar z_1z_2 -\bar z_3z_1\underbrace{-\bar z_2z_3+\bar z_2z_3} _{=0}+\bar z_3z_2.\\ \end{align} Expressing this as $$ t|z_1-z_3|^2-|z_1|^2-(\overline{\bar z_3z_2}+\bar z_3z_2)=-(\bar z_1z_2 +\bar z_2z_3+\bar z_3z_1) $$ and noticing that the LHS is a real number leads to the desired conclusion.</p> Answer 26: <p>The basic concept for this answer is the area of a triangle. The area of a triangle which is made on vectors $z$ and $w$ is $A=\dfrac12{\bf Im}(\bar{z}w)$. for three points in complex plane, we make a triangle on them with sides $z_1-z_2$, $z_2-z_3$ and $z_3-z_1$, we have \begin{eqnarray} \dfrac12{\bf Im}\overline{(z_1-z_2)}(z_2-z_3)&amp;=&amp;0\\ {\bf Im}\overline{z_1}z_2+\overline{z_2}z_3-\overline{z_2}z_2-\overline{z_1}z_3&amp;=&amp;0\\ {\bf Im}\overline{z_1}z_2+\overline{z_2}z_3+{\bf Im}\overline{z_3}z_1&amp;=&amp;{\bf Im}\overline{z_2}z_2+\overline{z_1}z_3+{\bf Im}\overline{z_3}z_1\\ {\bf Im}\overline{z_1}z_2+\overline{z_2}z_3+\overline{z_3}z_1&amp;=&amp;0 \end{eqnarray}</p> <p>$z_1$, $z_2$ and $z_3$ are collinear, if and only if, the area of triangle made on them is zero.</p> Answer 27: <p>Here is another approach that relies on the idea that its easy to detect if three complex numbers lie on a line through the origin, and we attempt to reduce the original problem to this simpler one. 
</p> <p>To this end, we might hope that translating our complex numbers (in this case, subtracting $z_1$ from each) doesn't affect whether the function $f(z_1, z_2, z_3) = \overline{z_1}z_2 + \overline{z_2}{z_3} + \overline{z_3}{z_1}$ takes a real value or not. </p> <p>And indeed, we'll see that</p> <p>$$\operatorname{Im}\Big(f(z_1, z_2, z_3)\Big) = \operatorname{Im}\Big(f(0, z_2 - z_1, z_3 - z_1)\Big).$$</p> <p>Observe that</p> <p>\begin{align*} \operatorname{Im}\Big(f(0, z_2 - z_1, z_3 - z_1)\Big) &amp;= \operatorname{Im}\Big(\overline{z_2 - z_1}(z_3 - z_1)\Big) \\[7pt] &amp;= \operatorname{Im}\Big(\overline{z_2}z_3 - \overline{z_1}z_3 - \overline{z_2}z_1 + \overline{z_1}{z_1}\Big) \\[7pt] &amp;= \operatorname{Im}\big(\overline{z_2}z_3) - \operatorname{Im}\big(\overline{z_1}z_3\big) - \operatorname{Im}\big(\overline{z_2}z_1\big) + \underbrace{\operatorname{Im}\big(|z_1|^2\big)}_{=0} \end{align*}</p> <p>where $\overline{z_1}{z_3} = \overline{z_1\overline{z_3}}$ so the two have opposite imaginary parts; that is, $\operatorname{Im}\big(z_1\overline{z_3}) = - \operatorname{Im}(\overline{z_1}z_3)$. Now picking up where we left off,</p> <p>\begin{align*} \operatorname{Im}\Big(f(0, z_2 - z_1, z_3 - z_1)\Big) &amp;= \operatorname{Im}\big(\overline{z_2}z_3) - \operatorname{Im}\big(\overline{z_1}z_3\big) - \operatorname{Im}\big(\overline{z_2}z_3\big) \\[7pt] &amp;= \operatorname{Im}\big(\overline{z_2}z_3) + \operatorname{Im}\big(\overline{z_3}z_1\big) + \operatorname{Im}\big(\overline{z_1}z_2\big)\\ &amp;= \operatorname{Im}\Big(f(z_1, z_2, z_3)\Big) \end{align*}</p> <p>Now we simply need to show that $z_1, z_2, z_3$ are collinear if and only if $f(0, z_2 - z_1, z_3 - z_1) \in \Bbb R$.</p> <p>Letting $w_2 = z_2 - z_1$ and $w_3 = z_3 - z_1$, this is equivalent to showing that $0, w_1, w_2$ are collinear if and only if $f(0, w_2, w_3) \in \Bbb R$. 
</p> <p>But since $f(0, w_2, w_3) = \overline{w_2}w_3$, this is just the well-known fact that $\overline{w_2}w_3 \in \Bbb R$ if and only if $w_3 = c w_2$ for some real scalar $c$.</p> Answer 28: <p>We know that $z_1,z_2,z_3$ are collinear iff there exists some $t\in\mathbb{R}$ such that $$t(z_1-z_3)=z_1-z_2.$$ The problem becomes trivial if $z_1=z_3$ so we can suppose this is not the case, and then it is legal to write $$ t=\frac{z_1-z_2}{z_1-z_3}. $$ Multiplying both sides of this equation by $|z_1-z_3|^2=(\overline{z_1-z_3})(z_1-z_3)$ we get \begin{align} t|z_1-z_3|^2=(\overline{z_1-z_3})(z_1-z_2)=&amp;|z_1|^2-\bar z_1z_2 -\bar z_3z_1+\bar z_3z_2\\ =&amp;|z_1|^2-\bar z_1z_2 -\bar z_3z_1\underbrace{-\bar z_2z_3+\bar z_2z_3} _{=0}+\bar z_3z_2.\\ \end{align} Expressing this as $$ t|z_1-z_3|^2-|z_1|^2-(\overline{\bar z_3z_2}+\bar z_3z_2)=-(\bar z_1z_2 +\bar z_2z_3+\bar z_3z_1) $$ and noticing that the LHS is a real number leads to the desired conclusion.</p> Answer 29: <p>The basic concept for this answer is the area of a triangle. The area of a triangle which is made on vectors $z$ and $w$ is $A=\dfrac12{\bf Im}(\bar{z}w)$. for three points in complex plane, we make a triangle on them with sides $z_1-z_2$, $z_2-z_3$ and $z_3-z_1$, we have \begin{eqnarray} \dfrac12{\bf Im}\overline{(z_1-z_2)}(z_2-z_3)&amp;=&amp;0\\ {\bf Im}\overline{z_1}z_2+\overline{z_2}z_3-\overline{z_2}z_2-\overline{z_1}z_3&amp;=&amp;0\\ {\bf Im}\overline{z_1}z_2+\overline{z_2}z_3+{\bf Im}\overline{z_3}z_1&amp;=&amp;{\bf Im}\overline{z_2}z_2+\overline{z_1}z_3+{\bf Im}\overline{z_3}z_1\\ {\bf Im}\overline{z_1}z_2+\overline{z_2}z_3+\overline{z_3}z_1&amp;=&amp;0 \end{eqnarray}</p> <p>$z_1$, $z_2$ and $z_3$ are collinear, if and only if, the area of triangle made on them is zero.</p> Answer 30: <p>Here is another approach that relies on the idea that its easy to detect if three complex numbers lie on a line through the origin, and we attempt to reduce the original problem to this simpler one. 
</p> <p>To this end, we might hope that translating our complex numbers (in this case, subtracting $z_1$ from each) doesn't affect whether the function $f(z_1, z_2, z_3) = \overline{z_1}z_2 + \overline{z_2}{z_3} + \overline{z_3}{z_1}$ takes a real value or not. </p> <p>And indeed, we'll see that</p> <p>$$\operatorname{Im}\Big(f(z_1, z_2, z_3)\Big) = \operatorname{Im}\Big(f(0, z_2 - z_1, z_3 - z_1)\Big).$$</p> <p>Observe that</p> <p>\begin{align*} \operatorname{Im}\Big(f(0, z_2 - z_1, z_3 - z_1)\Big) &amp;= \operatorname{Im}\Big(\overline{z_2 - z_1}(z_3 - z_1)\Big) \\[7pt] &amp;= \operatorname{Im}\Big(\overline{z_2}z_3 - \overline{z_1}z_3 - \overline{z_2}z_1 + \overline{z_1}{z_1}\Big) \\[7pt] &amp;= \operatorname{Im}\big(\overline{z_2}z_3) - \operatorname{Im}\big(\overline{z_1}z_3\big) - \operatorname{Im}\big(\overline{z_2}z_1\big) + \underbrace{\operatorname{Im}\big(|z_1|^2\big)}_{=0} \end{align*}</p> <p>where $\overline{z_1}{z_3} = \overline{z_1\overline{z_3}}$ so the two have opposite imaginary parts; that is, $\operatorname{Im}\big(z_1\overline{z_3}) = - \operatorname{Im}(\overline{z_1}z_3)$. Now picking up where we left off,</p> <p>\begin{align*} \operatorname{Im}\Big(f(0, z_2 - z_1, z_3 - z_1)\Big) &amp;= \operatorname{Im}\big(\overline{z_2}z_3) - \operatorname{Im}\big(\overline{z_1}z_3\big) - \operatorname{Im}\big(\overline{z_2}z_3\big) \\[7pt] &amp;= \operatorname{Im}\big(\overline{z_2}z_3) + \operatorname{Im}\big(\overline{z_3}z_1\big) + \operatorname{Im}\big(\overline{z_1}z_2\big)\\ &amp;= \operatorname{Im}\Big(f(z_1, z_2, z_3)\Big) \end{align*}</p> <p>Now we simply need to show that $z_1, z_2, z_3$ are collinear if and only if $f(0, z_2 - z_1, z_3 - z_1) \in \Bbb R$.</p> <p>Letting $w_2 = z_2 - z_1$ and $w_3 = z_3 - z_1$, this is equivalent to showing that $0, w_1, w_2$ are collinear if and only if $f(0, w_2, w_3) \in \Bbb R$. 
</p> <p>But since $f(0, w_2, w_3) = \overline{w_2}w_3$, this is just the well-known fact that $\overline{w_2}w_3 \in \Bbb R$ if and only if $w_3 = c w_2$ for some real scalar $c$.</p> Answer 31: <p>We know that $z_1,z_2,z_3$ are collinear iff there exists some $t\in\mathbb{R}$ such that $$t(z_1-z_3)=z_1-z_2.$$ The problem becomes trivial if $z_1=z_3$ so we can suppose this is not the case, and then it is legal to write $$ t=\frac{z_1-z_2}{z_1-z_3}. $$ Multiplying both sides of this equation by $|z_1-z_3|^2=(\overline{z_1-z_3})(z_1-z_3)$ we get \begin{align} t|z_1-z_3|^2=(\overline{z_1-z_3})(z_1-z_2)=&amp;|z_1|^2-\bar z_1z_2 -\bar z_3z_1+\bar z_3z_2\\ =&amp;|z_1|^2-\bar z_1z_2 -\bar z_3z_1\underbrace{-\bar z_2z_3+\bar z_2z_3} _{=0}+\bar z_3z_2.\\ \end{align} Expressing this as $$ t|z_1-z_3|^2-|z_1|^2-(\overline{\bar z_3z_2}+\bar z_3z_2)=-(\bar z_1z_2 +\bar z_2z_3+\bar z_3z_1) $$ and noticing that the LHS is a real number leads to the desired conclusion.</p> Answer 32: <p>The basic concept for this answer is the area of a triangle. The area of a triangle which is made on vectors $z$ and $w$ is $A=\dfrac12{\bf Im}(\bar{z}w)$. for three points in complex plane, we make a triangle on them with sides $z_1-z_2$, $z_2-z_3$ and $z_3-z_1$, we have \begin{eqnarray} \dfrac12{\bf Im}\overline{(z_1-z_2)}(z_2-z_3)&amp;=&amp;0\\ {\bf Im}\overline{z_1}z_2+\overline{z_2}z_3-\overline{z_2}z_2-\overline{z_1}z_3&amp;=&amp;0\\ {\bf Im}\overline{z_1}z_2+\overline{z_2}z_3+{\bf Im}\overline{z_3}z_1&amp;=&amp;{\bf Im}\overline{z_2}z_2+\overline{z_1}z_3+{\bf Im}\overline{z_3}z_1\\ {\bf Im}\overline{z_1}z_2+\overline{z_2}z_3+\overline{z_3}z_1&amp;=&amp;0 \end{eqnarray}</p> <p>$z_1$, $z_2$ and $z_3$ are collinear, if and only if, the area of triangle made on them is zero.</p> Answer 33: <p>Here is another approach that relies on the idea that its easy to detect if three complex numbers lie on a line through the origin, and we attempt to reduce the original problem to this simpler one. 
</p> <p>To this end, we might hope that translating our complex numbers (in this case, subtracting $z_1$ from each) doesn't affect whether the function $f(z_1, z_2, z_3) = \overline{z_1}z_2 + \overline{z_2}{z_3} + \overline{z_3}{z_1}$ takes a real value or not. </p> <p>And indeed, we'll see that</p> <p>$$\operatorname{Im}\Big(f(z_1, z_2, z_3)\Big) = \operatorname{Im}\Big(f(0, z_2 - z_1, z_3 - z_1)\Big).$$</p> <p>Observe that</p> <p>\begin{align*} \operatorname{Im}\Big(f(0, z_2 - z_1, z_3 - z_1)\Big) &amp;= \operatorname{Im}\Big(\overline{z_2 - z_1}(z_3 - z_1)\Big) \\[7pt] &amp;= \operatorname{Im}\Big(\overline{z_2}z_3 - \overline{z_1}z_3 - \overline{z_2}z_1 + \overline{z_1}{z_1}\Big) \\[7pt] &amp;= \operatorname{Im}\big(\overline{z_2}z_3) - \operatorname{Im}\big(\overline{z_1}z_3\big) - \operatorname{Im}\big(\overline{z_2}z_1\big) + \underbrace{\operatorname{Im}\big(|z_1|^2\big)}_{=0} \end{align*}</p> <p>where $\overline{z_1}{z_3} = \overline{z_1\overline{z_3}}$ so the two have opposite imaginary parts; that is, $\operatorname{Im}\big(z_1\overline{z_3}) = - \operatorname{Im}(\overline{z_1}z_3)$. Now picking up where we left off,</p> <p>\begin{align*} \operatorname{Im}\Big(f(0, z_2 - z_1, z_3 - z_1)\Big) &amp;= \operatorname{Im}\big(\overline{z_2}z_3) - \operatorname{Im}\big(\overline{z_1}z_3\big) - \operatorname{Im}\big(\overline{z_2}z_3\big) \\[7pt] &amp;= \operatorname{Im}\big(\overline{z_2}z_3) + \operatorname{Im}\big(\overline{z_3}z_1\big) + \operatorname{Im}\big(\overline{z_1}z_2\big)\\ &amp;= \operatorname{Im}\Big(f(z_1, z_2, z_3)\Big) \end{align*}</p> <p>Now we simply need to show that $z_1, z_2, z_3$ are collinear if and only if $f(0, z_2 - z_1, z_3 - z_1) \in \Bbb R$.</p> <p>Letting $w_2 = z_2 - z_1$ and $w_3 = z_3 - z_1$, this is equivalent to showing that $0, w_1, w_2$ are collinear if and only if $f(0, w_2, w_3) \in \Bbb R$. 
</p> <p>But since $f(0, w_2, w_3) = \overline{w_2}w_3$, this is just the well-known fact that $\overline{w_2}w_3 \in \Bbb R$ if and only if $w_3 = c w_2$ for some real scalar $c$.</p> Answer 34: <p>We know that $z_1,z_2,z_3$ are collinear iff there exists some $t\in\mathbb{R}$ such that $$t(z_1-z_3)=z_1-z_2.$$ The problem becomes trivial if $z_1=z_3$ so we can suppose this is not the case, and then it is legal to write $$ t=\frac{z_1-z_2}{z_1-z_3}. $$ Multiplying both sides of this equation by $|z_1-z_3|^2=(\overline{z_1-z_3})(z_1-z_3)$ we get \begin{align} t|z_1-z_3|^2=(\overline{z_1-z_3})(z_1-z_2)=&amp;|z_1|^2-\bar z_1z_2 -\bar z_3z_1+\bar z_3z_2\\ =&amp;|z_1|^2-\bar z_1z_2 -\bar z_3z_1\underbrace{-\bar z_2z_3+\bar z_2z_3} _{=0}+\bar z_3z_2.\\ \end{align} Expressing this as $$ t|z_1-z_3|^2-|z_1|^2-(\overline{\bar z_3z_2}+\bar z_3z_2)=-(\bar z_1z_2 +\bar z_2z_3+\bar z_3z_1) $$ and noticing that the LHS is a real number leads to the desired conclusion.</p> Answer 35: <p>The basic concept for this answer is the area of a triangle. The area of a triangle which is made on vectors $z$ and $w$ is $A=\dfrac12{\bf Im}(\bar{z}w)$. for three points in complex plane, we make a triangle on them with sides $z_1-z_2$, $z_2-z_3$ and $z_3-z_1$, we have \begin{eqnarray} \dfrac12{\bf Im}\overline{(z_1-z_2)}(z_2-z_3)&amp;=&amp;0\\ {\bf Im}\overline{z_1}z_2+\overline{z_2}z_3-\overline{z_2}z_2-\overline{z_1}z_3&amp;=&amp;0\\ {\bf Im}\overline{z_1}z_2+\overline{z_2}z_3+{\bf Im}\overline{z_3}z_1&amp;=&amp;{\bf Im}\overline{z_2}z_2+\overline{z_1}z_3+{\bf Im}\overline{z_3}z_1\\ {\bf Im}\overline{z_1}z_2+\overline{z_2}z_3+\overline{z_3}z_1&amp;=&amp;0 \end{eqnarray}</p> <p>$z_1$, $z_2$ and $z_3$ are collinear, if and only if, the area of triangle made on them is zero.</p> Answer 36: <p>Here is another approach that relies on the idea that its easy to detect if three complex numbers lie on a line through the origin, and we attempt to reduce the original problem to this simpler one. 
</p> <p>To this end, we might hope that translating our complex numbers (in this case, subtracting $z_1$ from each) doesn't affect whether the function $f(z_1, z_2, z_3) = \overline{z_1}z_2 + \overline{z_2}{z_3} + \overline{z_3}{z_1}$ takes a real value or not. </p> <p>And indeed, we'll see that</p> <p>$$\operatorname{Im}\Big(f(z_1, z_2, z_3)\Big) = \operatorname{Im}\Big(f(0, z_2 - z_1, z_3 - z_1)\Big).$$</p> <p>Observe that</p> <p>\begin{align*} \operatorname{Im}\Big(f(0, z_2 - z_1, z_3 - z_1)\Big) &amp;= \operatorname{Im}\Big(\overline{z_2 - z_1}(z_3 - z_1)\Big) \\[7pt] &amp;= \operatorname{Im}\Big(\overline{z_2}z_3 - \overline{z_1}z_3 - \overline{z_2}z_1 + \overline{z_1}{z_1}\Big) \\[7pt] &amp;= \operatorname{Im}\big(\overline{z_2}z_3) - \operatorname{Im}\big(\overline{z_1}z_3\big) - \operatorname{Im}\big(\overline{z_2}z_1\big) + \underbrace{\operatorname{Im}\big(|z_1|^2\big)}_{=0} \end{align*}</p> <p>where $\overline{z_1}{z_3} = \overline{z_1\overline{z_3}}$ so the two have opposite imaginary parts; that is, $\operatorname{Im}\big(z_1\overline{z_3}) = - \operatorname{Im}(\overline{z_1}z_3)$. Now picking up where we left off,</p> <p>\begin{align*} \operatorname{Im}\Big(f(0, z_2 - z_1, z_3 - z_1)\Big) &amp;= \operatorname{Im}\big(\overline{z_2}z_3) - \operatorname{Im}\big(\overline{z_1}z_3\big) - \operatorname{Im}\big(\overline{z_2}z_3\big) \\[7pt] &amp;= \operatorname{Im}\big(\overline{z_2}z_3) + \operatorname{Im}\big(\overline{z_3}z_1\big) + \operatorname{Im}\big(\overline{z_1}z_2\big)\\ &amp;= \operatorname{Im}\Big(f(z_1, z_2, z_3)\Big) \end{align*}</p> <p>Now we simply need to show that $z_1, z_2, z_3$ are collinear if and only if $f(0, z_2 - z_1, z_3 - z_1) \in \Bbb R$.</p> <p>Letting $w_2 = z_2 - z_1$ and $w_3 = z_3 - z_1$, this is equivalent to showing that $0, w_1, w_2$ are collinear if and only if $f(0, w_2, w_3) \in \Bbb R$. 
</p> <p>But since $f(0, w_2, w_3) = \overline{w_2}w_3$, this is just the well-known fact that $\overline{w_2}w_3 \in \Bbb R$ if and only if $w_3 = c w_2$ for some real scalar $c$.</p> Answer 37: <p>We know that $z_1,z_2,z_3$ are collinear iff there exists some $t\in\mathbb{R}$ such that $$t(z_1-z_3)=z_1-z_2.$$ The problem becomes trivial if $z_1=z_3$ so we can suppose this is not the case, and then it is legal to write $$ t=\frac{z_1-z_2}{z_1-z_3}. $$ Multiplying both sides of this equation by $|z_1-z_3|^2=(\overline{z_1-z_3})(z_1-z_3)$ we get \begin{align} t|z_1-z_3|^2=(\overline{z_1-z_3})(z_1-z_2)=&amp;|z_1|^2-\bar z_1z_2 -\bar z_3z_1+\bar z_3z_2\\ =&amp;|z_1|^2-\bar z_1z_2 -\bar z_3z_1\underbrace{-\bar z_2z_3+\bar z_2z_3} _{=0}+\bar z_3z_2.\\ \end{align} Expressing this as $$ t|z_1-z_3|^2-|z_1|^2-(\overline{\bar z_3z_2}+\bar z_3z_2)=-(\bar z_1z_2 +\bar z_2z_3+\bar z_3z_1) $$ and noticing that the LHS is a real number leads to the desired conclusion.</p> Answer 38: <p>The basic concept for this answer is the area of a triangle. The area of a triangle which is made on vectors $z$ and $w$ is $A=\dfrac12{\bf Im}(\bar{z}w)$. for three points in complex plane, we make a triangle on them with sides $z_1-z_2$, $z_2-z_3$ and $z_3-z_1$, we have \begin{eqnarray} \dfrac12{\bf Im}\overline{(z_1-z_2)}(z_2-z_3)&amp;=&amp;0\\ {\bf Im}\overline{z_1}z_2+\overline{z_2}z_3-\overline{z_2}z_2-\overline{z_1}z_3&amp;=&amp;0\\ {\bf Im}\overline{z_1}z_2+\overline{z_2}z_3+{\bf Im}\overline{z_3}z_1&amp;=&amp;{\bf Im}\overline{z_2}z_2+\overline{z_1}z_3+{\bf Im}\overline{z_3}z_1\\ {\bf Im}\overline{z_1}z_2+\overline{z_2}z_3+\overline{z_3}z_1&amp;=&amp;0 \end{eqnarray}</p> <p>$z_1$, $z_2$ and $z_3$ are collinear, if and only if, the area of triangle made on them is zero.</p> Answer 39: <p>Here is another approach that relies on the idea that its easy to detect if three complex numbers lie on a line through the origin, and we attempt to reduce the original problem to this simpler one. 
</p> <p>To this end, we might hope that translating our complex numbers (in this case, subtracting $z_1$ from each) doesn't affect whether the function $f(z_1, z_2, z_3) = \overline{z_1}z_2 + \overline{z_2}{z_3} + \overline{z_3}{z_1}$ takes a real value or not. </p> <p>And indeed, we'll see that</p> <p>$$\operatorname{Im}\Big(f(z_1, z_2, z_3)\Big) = \operatorname{Im}\Big(f(0, z_2 - z_1, z_3 - z_1)\Big).$$</p> <p>Observe that</p> <p>\begin{align*} \operatorname{Im}\Big(f(0, z_2 - z_1, z_3 - z_1)\Big) &amp;= \operatorname{Im}\Big(\overline{z_2 - z_1}(z_3 - z_1)\Big) \\[7pt] &amp;= \operatorname{Im}\Big(\overline{z_2}z_3 - \overline{z_1}z_3 - \overline{z_2}z_1 + \overline{z_1}{z_1}\Big) \\[7pt] &amp;= \operatorname{Im}\big(\overline{z_2}z_3) - \operatorname{Im}\big(\overline{z_1}z_3\big) - \operatorname{Im}\big(\overline{z_2}z_1\big) + \underbrace{\operatorname{Im}\big(|z_1|^2\big)}_{=0} \end{align*}</p> <p>where $\overline{z_1}{z_3} = \overline{z_1\overline{z_3}}$ so the two have opposite imaginary parts; that is, $\operatorname{Im}\big(z_1\overline{z_3}) = - \operatorname{Im}(\overline{z_1}z_3)$. Now picking up where we left off,</p> <p>\begin{align*} \operatorname{Im}\Big(f(0, z_2 - z_1, z_3 - z_1)\Big) &amp;= \operatorname{Im}\big(\overline{z_2}z_3) - \operatorname{Im}\big(\overline{z_1}z_3\big) - \operatorname{Im}\big(\overline{z_2}z_3\big) \\[7pt] &amp;= \operatorname{Im}\big(\overline{z_2}z_3) + \operatorname{Im}\big(\overline{z_3}z_1\big) + \operatorname{Im}\big(\overline{z_1}z_2\big)\\ &amp;= \operatorname{Im}\Big(f(z_1, z_2, z_3)\Big) \end{align*}</p> <p>Now we simply need to show that $z_1, z_2, z_3$ are collinear if and only if $f(0, z_2 - z_1, z_3 - z_1) \in \Bbb R$.</p> <p>Letting $w_2 = z_2 - z_1$ and $w_3 = z_3 - z_1$, this is equivalent to showing that $0, w_1, w_2$ are collinear if and only if $f(0, w_2, w_3) \in \Bbb R$. 
</p> <p>But since $f(0, w_2, w_3) = \overline{w_2}w_3$, this is just the well-known fact that $\overline{w_2}w_3 \in \Bbb R$ if and only if $w_3 = c w_2$ for some real scalar $c$.</p> Answer 40: <p>We know that $z_1,z_2,z_3$ are collinear iff there exists some $t\in\mathbb{R}$ such that $$t(z_1-z_3)=z_1-z_2.$$ The problem becomes trivial if $z_1=z_3$ so we can suppose this is not the case, and then it is legal to write $$ t=\frac{z_1-z_2}{z_1-z_3}. $$ Multiplying both sides of this equation by $|z_1-z_3|^2=(\overline{z_1-z_3})(z_1-z_3)$ we get \begin{align} t|z_1-z_3|^2=(\overline{z_1-z_3})(z_1-z_2)=&amp;|z_1|^2-\bar z_1z_2 -\bar z_3z_1+\bar z_3z_2\\ =&amp;|z_1|^2-\bar z_1z_2 -\bar z_3z_1\underbrace{-\bar z_2z_3+\bar z_2z_3} _{=0}+\bar z_3z_2.\\ \end{align} Expressing this as $$ t|z_1-z_3|^2-|z_1|^2-(\overline{\bar z_3z_2}+\bar z_3z_2)=-(\bar z_1z_2 +\bar z_2z_3+\bar z_3z_1) $$ and noticing that the LHS is a real number leads to the desired conclusion.</p> Answer 41: <p>The basic concept for this answer is the area of a triangle. The area of the triangle spanned by the vectors $z$ and $w$ is $A=\dfrac12{\bf Im}(\bar{z}w)$. For three points in the complex plane, we form the triangle with sides $z_1-z_2$, $z_2-z_3$ and $z_3-z_1$, and collinearity gives \begin{eqnarray} \dfrac12{\bf Im}\left(\overline{(z_1-z_2)}(z_2-z_3)\right)&amp;=&amp;0\\ {\bf Im}\left(\overline{z_1}z_2+\overline{z_2}z_3-\overline{z_2}z_2-\overline{z_1}z_3\right)&amp;=&amp;0\\ {\bf Im}\left(\overline{z_1}z_2+\overline{z_2}z_3\right)+{\bf Im}\left(\overline{z_3}z_1\right)&amp;=&amp;{\bf Im}\left(\overline{z_2}z_2\right)+{\bf Im}\left(\overline{z_1}z_3\right)+{\bf Im}\left(\overline{z_3}z_1\right)=0\\ {\bf Im}\left(\overline{z_1}z_2+\overline{z_2}z_3+\overline{z_3}z_1\right)&amp;=&amp;0 \end{eqnarray}</p> <p>$z_1$, $z_2$ and $z_3$ are collinear if and only if the area of the triangle made on them is zero.</p>
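The criterion the answers above establish is easy to sanity-check numerically. The snippet below is my own illustration (not part of any answer): for collinear points the imaginary part of $\overline{z_1}z_2+\overline{z_2}z_3+\overline{z_3}z_1$ vanishes, while a generic non-collinear triple gives a nonzero value.

```python
def f(z1, z2, z3):
    """Return conj(z1)*z2 + conj(z2)*z3 + conj(z3)*z1."""
    return z1.conjugate() * z2 + z2.conjugate() * z3 + z3.conjugate() * z1

# Three points on the line y = x + 1 -- collinear, so Im f = 0.
collinear = f(1 + 2j, 3 + 4j, 5 + 6j)

# Perturb the third point off the line -- Im f becomes nonzero.
bent = f(1 + 2j, 3 + 4j, 5 + 7j)

print(collinear.imag, bent.imag)  # 0.0 2.0
```

Since all the coordinates are small integers, the complex arithmetic here is exact, so the collinear case prints exactly zero.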
Compute the following integral: $I = \int_1^\infty \log^2 \left(1-\frac 1 x\right) \, dx$
2,109,532
9
https://math.stackexchange.com/questions/2109532/compute-the-following-integral-i-int-1-infty-log2-left1-frac-1-x-righ
<p>Compute $$I = \int_1^\infty \log^2 \left(1-\frac 1 x\right) \, dx$$ I made the substitution: $$t=\frac 1 x$$ It follows: $$I=\int_0^1 \frac{\log^2(1-t)}{t^2} \, dt$$ My next step would be to compute the derivative of the following integral with parameter $y$, with respect to $y$: $$F(y)=\int_0^1 \frac{\log^2(y-t)}{t^2} \, dt$$ Or something like this. I think it would be a nice solution to use this kind of approach. But I am getting stuck after computing the derivative.</p>
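Before hunting for a closed form, it can help to know the target value numerically. The sketch below (my own addition, not from the question) estimates the transformed integral $\int_0^1 \log^2(1-t)/t^2\,dt$ with a plain midpoint rule; the result agrees with the exact value $\pi^2/3 \approx 3.2899$.

```python
import math

# Midpoint-rule estimate of  I = \int_0^1 log^2(1 - t) / t^2 dt.
# The integrand tends to 1 as t -> 0 and has only an integrable
# log^2 singularity at t = 1, so a plain midpoint rule converges.
N = 500_000
h = 1.0 / N
quad = h * sum(
    math.log(1.0 - (k + 0.5) * h) ** 2 / ((k + 0.5) * h) ** 2
    for k in range(N)
)
print(quad, math.pi ** 2 / 3)  # both approximately 3.2899
```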
[ { "answer_id": 2109552, "score": 19, "text": "<p>Once you get\n$$ I = \\int_{0}^{1}\\frac{\\log^2(1-t)}{t^2}\\,dt=\\int_{0}^{1}\\frac{\\log^2(t)}{(1-t)^2}\\,dt \\tag{1}$$\nit is enough to recall that\n$$ \\frac{1}{(1-t)^2}=\\sum_{n\\geq 0}(n+1) t^n,\\qquad \\int_{0}^{1}t^n\\log^2(t)\\,dt = \\frac{2}{(n+1)^3}\\tag{2} $$\nto have:\n$$ I = \\sum_{n\\geq 0}\\frac{2}{(n+1)^2} = \\color{red}{\\frac{\\pi^2}{3}}.\\tag{3}$$</p>\n" }, { "answer_id": 2109601, "score": 10, "text": "<blockquote>\n <p><strong>I thought it might be instructive to present an approach that relies on integration by parts and recognition of the value of the integral, $\\int_0^1 \\frac{\\log(1-x)}{x}\\,dx=-\\text{Li}_2(1)=-\\frac{\\pi^2}{6}$. To that end, we proceed.</strong></p>\n</blockquote>\n\n<hr>\n\n<p>Let $I$ be given by </p>\n\n<p>$$I=\\int_1^\\infty \\log^2\\left(1-\\frac1x\\right)\\,dx=\\int_0^1\\frac{\\log^2(1-x)}{x^2}\\,dx$$</p>\n\n<p>Integrating by parts with $u=\\log^2(1-x)$ and $v=-1/x$ reveals</p>\n\n<p>$$\\begin{align}\nI&amp;=\\lim_{\\epsilon\\to 0}\\left(\\left.\\left(-\\frac{\\log^2(1-x)}{x}\\right)\\right|_{0}^{1-\\epsilon}-2\\int_0^{1-\\epsilon}\\frac{\\log(1-x)}{x(1-x)}\\,dx\\right)\\\\\\\\\n&amp;=\\lim_{\\epsilon\\to 0}\\left(\\frac{-\\log^2(\\epsilon)}{1-\\epsilon}-2\\int_0^{1-\\epsilon}\\frac{\\log(1-x)}{1-x}\\,dx -2\\int_0^1\\frac{\\log(1-x)}{x}\\,dx\\right)\\\\\\\\\n&amp;=\\lim_{\\epsilon\\to 0}\\left(-\\frac{\\log^2(\\epsilon)}{1-\\epsilon}+2\\int_0^{1-\\epsilon}\\frac12\\frac{d\\log^2(1-x)}{dx}\\,dx -2\\int_0^1\\frac{\\log(1-x)}{x}\\,dx\\right)\\\\\\\\\n&amp;=-2\\int_0^1\\frac{\\log(1-x)}{x}\\,dx\\\\\\\\\n&amp;=\\frac{\\pi^2}{3}\n\\end{align}$$</p>\n\n<p>as was to be shown!</p>\n" } ]
To that end, we proceed.</strong></p>\n</blockquote>\n\n<hr>\n\n<p>Let $I$ be given by </p>\n\n<p>$$I=\\int_1^\\infty \\log^2\\left(1-\\frac1x\\right)\\,dx=\\int_0^1\\frac{\\log^2(1-x)}{x^2}\\,dx$$</p>\n\n<p>Integrating by parts with $u=\\log^2(1-x)$ and $v=-1/x$ reveals</p>\n\n<p>$$\\begin{align}\nI&amp;=\\lim_{\\epsilon\\to 0}\\left(\\left.\\left(-\\frac{\\log^2(1-x)}{x}\\right)\\right|_{0}^{1-\\epsilon}-2\\int_0^{1-\\epsilon}\\frac{\\log(1-x)}{x(1-x)}\\,dx\\right)\\\\\\\\\n&amp;=\\lim_{\\epsilon\\to 0}\\left(\\frac{-\\log^2(\\epsilon)}{1-\\epsilon}-2\\int_0^{1-\\epsilon}\\frac{\\log(1-x)}{1-x} -2\\int_0^1\\frac{\\log(1-x)}{x}\\,dx\\right)\\\\\\\\\n&amp;=\\lim_{\\epsilon\\to 0}\\left(-\\frac{\\log^2(\\epsilon)}{1-\\epsilon}+2\\int_0^{1-\\epsilon}\\frac12\\frac{d\\log^2(1-x)}{dx}\\,dx -2\\int_0^1\\frac{\\log(1-x)}{x}\\,dx\\right)\\\\\\\\\n&amp;=-2\\int_0^1\\frac{\\log(1-x)}{x}\\,dx\\\\\\\\\n&amp;=\\frac{\\pi^2}{3}\n\\end{align}$$</p>\n\n<p>as was to be shown!</p>\n" }, { "answer_id": 2109552, "score": 19, "text": "<p>Once you get\n$$ I = \\int_{0}^{1}\\frac{\\log^2(1-t)}{t^2}\\,dt=\\int_{0}^{1}\\frac{\\log^2(t)}{(1-t)^2}\\,dt \\tag{1}$$\nit is enough to recall that\n$$ \\frac{1}{(1-t)^2}=\\sum_{n\\geq 0}(n+1) t^n,\\qquad \\int_{0}^{1}t^n\\log^2(t)\\,dt = \\frac{2}{(n+1)^3}\\tag{2} $$\nto have:\n$$ I = \\sum_{n\\geq 0}\\frac{2}{(n+1)^2} = \\color{red}{\\frac{\\pi^2}{3}}.\\tag{3}$$</p>\n" }, { "answer_id": 2109601, "score": 10, "text": "<blockquote>\n <p><strong>I thought it might be instructive to present an approach that relies on integration by parts and recognition of the value of the integral, $\\int_0^1 \\frac{\\log(1-x)}{x}\\,dx=\\text{Li}_2(1)=\\frac{\\pi^2}{6}$. 
To that end, we proceed.</strong></p>\n</blockquote>\n\n<hr>\n\n<p>Let $I$ be given by </p>\n\n<p>$$I=\\int_1^\\infty \\log^2\\left(1-\\frac1x\\right)\\,dx=\\int_0^1\\frac{\\log^2(1-x)}{x^2}\\,dx$$</p>\n\n<p>Integrating by parts with $u=\\log^2(1-x)$ and $v=-1/x$ reveals</p>\n\n<p>$$\\begin{align}\nI&amp;=\\lim_{\\epsilon\\to 0}\\left(\\left.\\left(-\\frac{\\log^2(1-x)}{x}\\right)\\right|_{0}^{1-\\epsilon}-2\\int_0^{1-\\epsilon}\\frac{\\log(1-x)}{x(1-x)}\\,dx\\right)\\\\\\\\\n&amp;=\\lim_{\\epsilon\\to 0}\\left(\\frac{-\\log^2(\\epsilon)}{1-\\epsilon}-2\\int_0^{1-\\epsilon}\\frac{\\log(1-x)}{1-x} -2\\int_0^1\\frac{\\log(1-x)}{x}\\,dx\\right)\\\\\\\\\n&amp;=\\lim_{\\epsilon\\to 0}\\left(-\\frac{\\log^2(\\epsilon)}{1-\\epsilon}+2\\int_0^{1-\\epsilon}\\frac12\\frac{d\\log^2(1-x)}{dx}\\,dx -2\\int_0^1\\frac{\\log(1-x)}{x}\\,dx\\right)\\\\\\\\\n&amp;=-2\\int_0^1\\frac{\\log(1-x)}{x}\\,dx\\\\\\\\\n&amp;=\\frac{\\pi^2}{3}\n\\end{align}$$</p>\n\n<p>as was to be shown!</p>\n" }, { "answer_id": 2109552, "score": 19, "text": "<p>Once you get\n$$ I = \\int_{0}^{1}\\frac{\\log^2(1-t)}{t^2}\\,dt=\\int_{0}^{1}\\frac{\\log^2(t)}{(1-t)^2}\\,dt \\tag{1}$$\nit is enough to recall that\n$$ \\frac{1}{(1-t)^2}=\\sum_{n\\geq 0}(n+1) t^n,\\qquad \\int_{0}^{1}t^n\\log^2(t)\\,dt = \\frac{2}{(n+1)^3}\\tag{2} $$\nto have:\n$$ I = \\sum_{n\\geq 0}\\frac{2}{(n+1)^2} = \\color{red}{\\frac{\\pi^2}{3}}.\\tag{3}$$</p>\n" }, { "answer_id": 2109601, "score": 10, "text": "<blockquote>\n <p><strong>I thought it might be instructive to present an approach that relies on integration by parts and recognition of the value of the integral, $\\int_0^1 \\frac{\\log(1-x)}{x}\\,dx=\\text{Li}_2(1)=\\frac{\\pi^2}{6}$. 
To that end, we proceed.</strong></p>\n</blockquote>\n\n<hr>\n\n<p>Let $I$ be given by </p>\n\n<p>$$I=\\int_1^\\infty \\log^2\\left(1-\\frac1x\\right)\\,dx=\\int_0^1\\frac{\\log^2(1-x)}{x^2}\\,dx$$</p>\n\n<p>Integrating by parts with $u=\\log^2(1-x)$ and $v=-1/x$ reveals</p>\n\n<p>$$\\begin{align}\nI&amp;=\\lim_{\\epsilon\\to 0}\\left(\\left.\\left(-\\frac{\\log^2(1-x)}{x}\\right)\\right|_{0}^{1-\\epsilon}-2\\int_0^{1-\\epsilon}\\frac{\\log(1-x)}{x(1-x)}\\,dx\\right)\\\\\\\\\n&amp;=\\lim_{\\epsilon\\to 0}\\left(\\frac{-\\log^2(\\epsilon)}{1-\\epsilon}-2\\int_0^{1-\\epsilon}\\frac{\\log(1-x)}{1-x} -2\\int_0^1\\frac{\\log(1-x)}{x}\\,dx\\right)\\\\\\\\\n&amp;=\\lim_{\\epsilon\\to 0}\\left(-\\frac{\\log^2(\\epsilon)}{1-\\epsilon}+2\\int_0^{1-\\epsilon}\\frac12\\frac{d\\log^2(1-x)}{dx}\\,dx -2\\int_0^1\\frac{\\log(1-x)}{x}\\,dx\\right)\\\\\\\\\n&amp;=-2\\int_0^1\\frac{\\log(1-x)}{x}\\,dx\\\\\\\\\n&amp;=\\frac{\\pi^2}{3}\n\\end{align}$$</p>\n\n<p>as was to be shown!</p>\n" }, { "answer_id": 2109552, "score": 19, "text": "<p>Once you get\n$$ I = \\int_{0}^{1}\\frac{\\log^2(1-t)}{t^2}\\,dt=\\int_{0}^{1}\\frac{\\log^2(t)}{(1-t)^2}\\,dt \\tag{1}$$\nit is enough to recall that\n$$ \\frac{1}{(1-t)^2}=\\sum_{n\\geq 0}(n+1) t^n,\\qquad \\int_{0}^{1}t^n\\log^2(t)\\,dt = \\frac{2}{(n+1)^3}\\tag{2} $$\nto have:\n$$ I = \\sum_{n\\geq 0}\\frac{2}{(n+1)^2} = \\color{red}{\\frac{\\pi^2}{3}}.\\tag{3}$$</p>\n" }, { "answer_id": 2109601, "score": 10, "text": "<blockquote>\n <p><strong>I thought it might be instructive to present an approach that relies on integration by parts and recognition of the value of the integral, $\\int_0^1 \\frac{\\log(1-x)}{x}\\,dx=\\text{Li}_2(1)=\\frac{\\pi^2}{6}$. 
To that end, we proceed.</strong></p>\n</blockquote>\n\n<hr>\n\n<p>Let $I$ be given by </p>\n\n<p>$$I=\\int_1^\\infty \\log^2\\left(1-\\frac1x\\right)\\,dx=\\int_0^1\\frac{\\log^2(1-x)}{x^2}\\,dx$$</p>\n\n<p>Integrating by parts with $u=\\log^2(1-x)$ and $v=-1/x$ reveals</p>\n\n<p>$$\\begin{align}\nI&amp;=\\lim_{\\epsilon\\to 0}\\left(\\left.\\left(-\\frac{\\log^2(1-x)}{x}\\right)\\right|_{0}^{1-\\epsilon}-2\\int_0^{1-\\epsilon}\\frac{\\log(1-x)}{x(1-x)}\\,dx\\right)\\\\\\\\\n&amp;=\\lim_{\\epsilon\\to 0}\\left(\\frac{-\\log^2(\\epsilon)}{1-\\epsilon}-2\\int_0^{1-\\epsilon}\\frac{\\log(1-x)}{1-x} -2\\int_0^1\\frac{\\log(1-x)}{x}\\,dx\\right)\\\\\\\\\n&amp;=\\lim_{\\epsilon\\to 0}\\left(-\\frac{\\log^2(\\epsilon)}{1-\\epsilon}+2\\int_0^{1-\\epsilon}\\frac12\\frac{d\\log^2(1-x)}{dx}\\,dx -2\\int_0^1\\frac{\\log(1-x)}{x}\\,dx\\right)\\\\\\\\\n&amp;=-2\\int_0^1\\frac{\\log(1-x)}{x}\\,dx\\\\\\\\\n&amp;=\\frac{\\pi^2}{3}\n\\end{align}$$</p>\n\n<p>as was to be shown!</p>\n" }, { "answer_id": 2109552, "score": 19, "text": "<p>Once you get\n$$ I = \\int_{0}^{1}\\frac{\\log^2(1-t)}{t^2}\\,dt=\\int_{0}^{1}\\frac{\\log^2(t)}{(1-t)^2}\\,dt \\tag{1}$$\nit is enough to recall that\n$$ \\frac{1}{(1-t)^2}=\\sum_{n\\geq 0}(n+1) t^n,\\qquad \\int_{0}^{1}t^n\\log^2(t)\\,dt = \\frac{2}{(n+1)^3}\\tag{2} $$\nto have:\n$$ I = \\sum_{n\\geq 0}\\frac{2}{(n+1)^2} = \\color{red}{\\frac{\\pi^2}{3}}.\\tag{3}$$</p>\n" }, { "answer_id": 2109601, "score": 10, "text": "<blockquote>\n <p><strong>I thought it might be instructive to present an approach that relies on integration by parts and recognition of the value of the integral, $\\int_0^1 \\frac{\\log(1-x)}{x}\\,dx=\\text{Li}_2(1)=\\frac{\\pi^2}{6}$. 
To that end, we proceed.</strong></p>\n</blockquote>\n\n<hr>\n\n<p>Let $I$ be given by </p>\n\n<p>$$I=\\int_1^\\infty \\log^2\\left(1-\\frac1x\\right)\\,dx=\\int_0^1\\frac{\\log^2(1-x)}{x^2}\\,dx$$</p>\n\n<p>Integrating by parts with $u=\\log^2(1-x)$ and $v=-1/x$ reveals</p>\n\n<p>$$\\begin{align}\nI&amp;=\\lim_{\\epsilon\\to 0}\\left(\\left.\\left(-\\frac{\\log^2(1-x)}{x}\\right)\\right|_{0}^{1-\\epsilon}-2\\int_0^{1-\\epsilon}\\frac{\\log(1-x)}{x(1-x)}\\,dx\\right)\\\\\\\\\n&amp;=\\lim_{\\epsilon\\to 0}\\left(\\frac{-\\log^2(\\epsilon)}{1-\\epsilon}-2\\int_0^{1-\\epsilon}\\frac{\\log(1-x)}{1-x} -2\\int_0^1\\frac{\\log(1-x)}{x}\\,dx\\right)\\\\\\\\\n&amp;=\\lim_{\\epsilon\\to 0}\\left(-\\frac{\\log^2(\\epsilon)}{1-\\epsilon}+2\\int_0^{1-\\epsilon}\\frac12\\frac{d\\log^2(1-x)}{dx}\\,dx -2\\int_0^1\\frac{\\log(1-x)}{x}\\,dx\\right)\\\\\\\\\n&amp;=-2\\int_0^1\\frac{\\log(1-x)}{x}\\,dx\\\\\\\\\n&amp;=\\frac{\\pi^2}{3}\n\\end{align}$$</p>\n\n<p>as was to be shown!</p>\n" }, { "answer_id": 2109552, "score": 19, "text": "<p>Once you get\n$$ I = \\int_{0}^{1}\\frac{\\log^2(1-t)}{t^2}\\,dt=\\int_{0}^{1}\\frac{\\log^2(t)}{(1-t)^2}\\,dt \\tag{1}$$\nit is enough to recall that\n$$ \\frac{1}{(1-t)^2}=\\sum_{n\\geq 0}(n+1) t^n,\\qquad \\int_{0}^{1}t^n\\log^2(t)\\,dt = \\frac{2}{(n+1)^3}\\tag{2} $$\nto have:\n$$ I = \\sum_{n\\geq 0}\\frac{2}{(n+1)^2} = \\color{red}{\\frac{\\pi^2}{3}}.\\tag{3}$$</p>\n" }, { "answer_id": 2109601, "score": 10, "text": "<blockquote>\n <p><strong>I thought it might be instructive to present an approach that relies on integration by parts and recognition of the value of the integral, $\\int_0^1 \\frac{\\log(1-x)}{x}\\,dx=\\text{Li}_2(1)=\\frac{\\pi^2}{6}$. 
To that end, we proceed.</strong></p>\n</blockquote>\n\n<hr>\n\n<p>Let $I$ be given by </p>\n\n<p>$$I=\\int_1^\\infty \\log^2\\left(1-\\frac1x\\right)\\,dx=\\int_0^1\\frac{\\log^2(1-x)}{x^2}\\,dx$$</p>\n\n<p>Integrating by parts with $u=\\log^2(1-x)$ and $v=-1/x$ reveals</p>\n\n<p>$$\\begin{align}\nI&amp;=\\lim_{\\epsilon\\to 0}\\left(\\left.\\left(-\\frac{\\log^2(1-x)}{x}\\right)\\right|_{0}^{1-\\epsilon}-2\\int_0^{1-\\epsilon}\\frac{\\log(1-x)}{x(1-x)}\\,dx\\right)\\\\\\\\\n&amp;=\\lim_{\\epsilon\\to 0}\\left(\\frac{-\\log^2(\\epsilon)}{1-\\epsilon}-2\\int_0^{1-\\epsilon}\\frac{\\log(1-x)}{1-x} -2\\int_0^1\\frac{\\log(1-x)}{x}\\,dx\\right)\\\\\\\\\n&amp;=\\lim_{\\epsilon\\to 0}\\left(-\\frac{\\log^2(\\epsilon)}{1-\\epsilon}+2\\int_0^{1-\\epsilon}\\frac12\\frac{d\\log^2(1-x)}{dx}\\,dx -2\\int_0^1\\frac{\\log(1-x)}{x}\\,dx\\right)\\\\\\\\\n&amp;=-2\\int_0^1\\frac{\\log(1-x)}{x}\\,dx\\\\\\\\\n&amp;=\\frac{\\pi^2}{3}\n\\end{align}$$</p>\n\n<p>as was to be shown!</p>\n" }, { "answer_id": 2109552, "score": 19, "text": "<p>Once you get\n$$ I = \\int_{0}^{1}\\frac{\\log^2(1-t)}{t^2}\\,dt=\\int_{0}^{1}\\frac{\\log^2(t)}{(1-t)^2}\\,dt \\tag{1}$$\nit is enough to recall that\n$$ \\frac{1}{(1-t)^2}=\\sum_{n\\geq 0}(n+1) t^n,\\qquad \\int_{0}^{1}t^n\\log^2(t)\\,dt = \\frac{2}{(n+1)^3}\\tag{2} $$\nto have:\n$$ I = \\sum_{n\\geq 0}\\frac{2}{(n+1)^2} = \\color{red}{\\frac{\\pi^2}{3}}.\\tag{3}$$</p>\n" }, { "answer_id": 2109601, "score": 10, "text": "<blockquote>\n <p><strong>I thought it might be instructive to present an approach that relies on integration by parts and recognition of the value of the integral, $\\int_0^1 \\frac{\\log(1-x)}{x}\\,dx=\\text{Li}_2(1)=\\frac{\\pi^2}{6}$. 
To that end, we proceed.</strong></p>\n</blockquote>\n\n<hr>\n\n<p>Let $I$ be given by </p>\n\n<p>$$I=\\int_1^\\infty \\log^2\\left(1-\\frac1x\\right)\\,dx=\\int_0^1\\frac{\\log^2(1-x)}{x^2}\\,dx$$</p>\n\n<p>Integrating by parts with $u=\\log^2(1-x)$ and $v=-1/x$ reveals</p>\n\n<p>$$\\begin{align}\nI&amp;=\\lim_{\\epsilon\\to 0}\\left(\\left.\\left(-\\frac{\\log^2(1-x)}{x}\\right)\\right|_{0}^{1-\\epsilon}-2\\int_0^{1-\\epsilon}\\frac{\\log(1-x)}{x(1-x)}\\,dx\\right)\\\\\\\\\n&amp;=\\lim_{\\epsilon\\to 0}\\left(\\frac{-\\log^2(\\epsilon)}{1-\\epsilon}-2\\int_0^{1-\\epsilon}\\frac{\\log(1-x)}{1-x} -2\\int_0^1\\frac{\\log(1-x)}{x}\\,dx\\right)\\\\\\\\\n&amp;=\\lim_{\\epsilon\\to 0}\\left(-\\frac{\\log^2(\\epsilon)}{1-\\epsilon}+2\\int_0^{1-\\epsilon}\\frac12\\frac{d\\log^2(1-x)}{dx}\\,dx -2\\int_0^1\\frac{\\log(1-x)}{x}\\,dx\\right)\\\\\\\\\n&amp;=-2\\int_0^1\\frac{\\log(1-x)}{x}\\,dx\\\\\\\\\n&amp;=\\frac{\\pi^2}{3}\n\\end{align}$$</p>\n\n<p>as was to be shown!</p>\n" }, { "answer_id": 2109552, "score": 19, "text": "<p>Once you get\n$$ I = \\int_{0}^{1}\\frac{\\log^2(1-t)}{t^2}\\,dt=\\int_{0}^{1}\\frac{\\log^2(t)}{(1-t)^2}\\,dt \\tag{1}$$\nit is enough to recall that\n$$ \\frac{1}{(1-t)^2}=\\sum_{n\\geq 0}(n+1) t^n,\\qquad \\int_{0}^{1}t^n\\log^2(t)\\,dt = \\frac{2}{(n+1)^3}\\tag{2} $$\nto have:\n$$ I = \\sum_{n\\geq 0}\\frac{2}{(n+1)^2} = \\color{red}{\\frac{\\pi^2}{3}}.\\tag{3}$$</p>\n" }, { "answer_id": 2109601, "score": 10, "text": "<blockquote>\n <p><strong>I thought it might be instructive to present an approach that relies on integration by parts and recognition of the value of the integral, $\\int_0^1 \\frac{\\log(1-x)}{x}\\,dx=\\text{Li}_2(1)=\\frac{\\pi^2}{6}$. 
To that end, we proceed.</strong></p>\n</blockquote>\n\n<hr>\n\n<p>Let $I$ be given by </p>\n\n<p>$$I=\\int_1^\\infty \\log^2\\left(1-\\frac1x\\right)\\,dx=\\int_0^1\\frac{\\log^2(1-x)}{x^2}\\,dx$$</p>\n\n<p>Integrating by parts with $u=\\log^2(1-x)$ and $v=-1/x$ reveals</p>\n\n<p>$$\\begin{align}\nI&amp;=\\lim_{\\epsilon\\to 0}\\left(\\left.\\left(-\\frac{\\log^2(1-x)}{x}\\right)\\right|_{0}^{1-\\epsilon}-2\\int_0^{1-\\epsilon}\\frac{\\log(1-x)}{x(1-x)}\\,dx\\right)\\\\\\\\\n&amp;=\\lim_{\\epsilon\\to 0}\\left(\\frac{-\\log^2(\\epsilon)}{1-\\epsilon}-2\\int_0^{1-\\epsilon}\\frac{\\log(1-x)}{1-x} -2\\int_0^1\\frac{\\log(1-x)}{x}\\,dx\\right)\\\\\\\\\n&amp;=\\lim_{\\epsilon\\to 0}\\left(-\\frac{\\log^2(\\epsilon)}{1-\\epsilon}+2\\int_0^{1-\\epsilon}\\frac12\\frac{d\\log^2(1-x)}{dx}\\,dx -2\\int_0^1\\frac{\\log(1-x)}{x}\\,dx\\right)\\\\\\\\\n&amp;=-2\\int_0^1\\frac{\\log(1-x)}{x}\\,dx\\\\\\\\\n&amp;=\\frac{\\pi^2}{3}\n\\end{align}$$</p>\n\n<p>as was to be shown!</p>\n" } ]
Question: Compute the following integral: $I = \int_1^\infty \log^2 \left(1-\frac 1 x\right) \, dx$ Answer 1: <p>Once you get $$ I = \int_{0}^{1}\frac{\log^2(1-t)}{t^2}\,dt=\int_{0}^{1}\frac{\log^2(t)}{(1-t)^2}\,dt \tag{1}$$ it is enough to recall that $$ \frac{1}{(1-t)^2}=\sum_{n\geq 0}(n+1) t^n,\qquad \int_{0}^{1}t^n\log^2(t)\,dt = \frac{2}{(n+1)^3}\tag{2} $$ to have: $$ I = \sum_{n\geq 0}\frac{2}{(n+1)^2} = \color{red}{\frac{\pi^2}{3}}.\tag{3}$$</p> Answer 2: <blockquote> <p><strong>I thought it might be instructive to present an approach that relies on integration by parts and recognition of the value of the integral, $\int_0^1 \frac{\log(1-x)}{x}\,dx=\text{Li}_2(1)=\frac{\pi^2}{6}$. To that end, we proceed.</strong></p> </blockquote> <hr> <p>Let $I$ be given by </p> <p>$$I=\int_1^\infty \log^2\left(1-\frac1x\right)\,dx=\int_0^1\frac{\log^2(1-x)}{x^2}\,dx$$</p> <p>Integrating by parts with $u=\log^2(1-x)$ and $v=-1/x$ reveals</p> <p>$$\begin{align} I&amp;=\lim_{\epsilon\to 0}\left(\left.\left(-\frac{\log^2(1-x)}{x}\right)\right|_{0}^{1-\epsilon}-2\int_0^{1-\epsilon}\frac{\log(1-x)}{x(1-x)}\,dx\right)\\\\ &amp;=\lim_{\epsilon\to 0}\left(\frac{-\log^2(\epsilon)}{1-\epsilon}-2\int_0^{1-\epsilon}\frac{\log(1-x)}{1-x} -2\int_0^1\frac{\log(1-x)}{x}\,dx\right)\\\\ &amp;=\lim_{\epsilon\to 0}\left(-\frac{\log^2(\epsilon)}{1-\epsilon}+2\int_0^{1-\epsilon}\frac12\frac{d\log^2(1-x)}{dx}\,dx -2\int_0^1\frac{\log(1-x)}{x}\,dx\right)\\\\ &amp;=-2\int_0^1\frac{\log(1-x)}{x}\,dx\\\\ &amp;=\frac{\pi^2}{3} \end{align}$$</p> <p>as was to be shown!</p> Answer 3: <p>Once you get $$ I = \int_{0}^{1}\frac{\log^2(1-t)}{t^2}\,dt=\int_{0}^{1}\frac{\log^2(t)}{(1-t)^2}\,dt \tag{1}$$ it is enough to recall that $$ \frac{1}{(1-t)^2}=\sum_{n\geq 0}(n+1) t^n,\qquad \int_{0}^{1}t^n\log^2(t)\,dt = \frac{2}{(n+1)^3}\tag{2} $$ to have: $$ I = \sum_{n\geq 0}\frac{2}{(n+1)^2} = \color{red}{\frac{\pi^2}{3}}.\tag{3}$$</p> Answer 4: <blockquote> <p><strong>I thought it might be instructive to present 
an approach that relies on integration by parts and recognition of the value of the integral, $\int_0^1 \frac{\log(1-x)}{x}\,dx=\text{Li}_2(1)=\frac{\pi^2}{6}$. To that end, we proceed.</strong></p>
</blockquote>
<hr>
<p>Let $I$ be given by </p>
<p>$$I=\int_1^\infty \log^2\left(1-\frac1x\right)\,dx=\int_0^1\frac{\log^2(1-x)}{x^2}\,dx$$</p>
<p>Integrating by parts with $u=\log^2(1-x)$ and $v=-1/x$ reveals</p>
<p>$$\begin{align}
I&amp;=\lim_{\epsilon\to 0}\left(\left.\left(-\frac{\log^2(1-x)}{x}\right)\right|_{0}^{1-\epsilon}-2\int_0^{1-\epsilon}\frac{\log(1-x)}{x(1-x)}\,dx\right)\\\\
&amp;=\lim_{\epsilon\to 0}\left(\frac{-\log^2(\epsilon)}{1-\epsilon}-2\int_0^{1-\epsilon}\frac{\log(1-x)}{1-x} -2\int_0^1\frac{\log(1-x)}{x}\,dx\right)\\\\
&amp;=\lim_{\epsilon\to 0}\left(-\frac{\log^2(\epsilon)}{1-\epsilon}+2\int_0^{1-\epsilon}\frac12\frac{d\log^2(1-x)}{dx}\,dx -2\int_0^1\frac{\log(1-x)}{x}\,dx\right)\\\\
&amp;=-2\int_0^1\frac{\log(1-x)}{x}\,dx\\\\
&amp;=\frac{\pi^2}{3}
\end{align}$$</p>
<p>as was to be shown!</p>
To that end, we proceed.</strong></p> </blockquote> <hr> <p>Let $I$ be given by </p> <p>$$I=\int_1^\infty \log^2\left(1-\frac1x\right)\,dx=\int_0^1\frac{\log^2(1-x)}{x^2}\,dx$$</p> <p>Integrating by parts with $u=\log^2(1-x)$ and $v=-1/x$ reveals</p> <p>$$\begin{align} I&amp;=\lim_{\epsilon\to 0}\left(\left.\left(-\frac{\log^2(1-x)}{x}\right)\right|_{0}^{1-\epsilon}-2\int_0^{1-\epsilon}\frac{\log(1-x)}{x(1-x)}\,dx\right)\\\\ &amp;=\lim_{\epsilon\to 0}\left(\frac{-\log^2(\epsilon)}{1-\epsilon}-2\int_0^{1-\epsilon}\frac{\log(1-x)}{1-x} -2\int_0^1\frac{\log(1-x)}{x}\,dx\right)\\\\ &amp;=\lim_{\epsilon\to 0}\left(-\frac{\log^2(\epsilon)}{1-\epsilon}+2\int_0^{1-\epsilon}\frac12\frac{d\log^2(1-x)}{dx}\,dx -2\int_0^1\frac{\log(1-x)}{x}\,dx\right)\\\\ &amp;=-2\int_0^1\frac{\log(1-x)}{x}\,dx\\\\ &amp;=\frac{\pi^2}{3} \end{align}$$</p> <p>as was to be shown!</p> Answer 29: <p>Once you get $$ I = \int_{0}^{1}\frac{\log^2(1-t)}{t^2}\,dt=\int_{0}^{1}\frac{\log^2(t)}{(1-t)^2}\,dt \tag{1}$$ it is enough to recall that $$ \frac{1}{(1-t)^2}=\sum_{n\geq 0}(n+1) t^n,\qquad \int_{0}^{1}t^n\log^2(t)\,dt = \frac{2}{(n+1)^3}\tag{2} $$ to have: $$ I = \sum_{n\geq 0}\frac{2}{(n+1)^2} = \color{red}{\frac{\pi^2}{3}}.\tag{3}$$</p> Answer 30: <blockquote> <p><strong>I thought it might be instructive to present an approach that relies on integration by parts and recognition of the value of the integral, $\int_0^1 \frac{\log(1-x)}{x}\,dx=\text{Li}_2(1)=\frac{\pi^2}{6}$. 
To that end, we proceed.</strong></p> </blockquote> <hr> <p>Let $I$ be given by </p> <p>$$I=\int_1^\infty \log^2\left(1-\frac1x\right)\,dx=\int_0^1\frac{\log^2(1-x)}{x^2}\,dx$$</p> <p>Integrating by parts with $u=\log^2(1-x)$ and $v=-1/x$ reveals</p> <p>$$\begin{align} I&amp;=\lim_{\epsilon\to 0}\left(\left.\left(-\frac{\log^2(1-x)}{x}\right)\right|_{0}^{1-\epsilon}-2\int_0^{1-\epsilon}\frac{\log(1-x)}{x(1-x)}\,dx\right)\\\\ &amp;=\lim_{\epsilon\to 0}\left(\frac{-\log^2(\epsilon)}{1-\epsilon}-2\int_0^{1-\epsilon}\frac{\log(1-x)}{1-x} -2\int_0^1\frac{\log(1-x)}{x}\,dx\right)\\\\ &amp;=\lim_{\epsilon\to 0}\left(-\frac{\log^2(\epsilon)}{1-\epsilon}+2\int_0^{1-\epsilon}\frac12\frac{d\log^2(1-x)}{dx}\,dx -2\int_0^1\frac{\log(1-x)}{x}\,dx\right)\\\\ &amp;=-2\int_0^1\frac{\log(1-x)}{x}\,dx\\\\ &amp;=\frac{\pi^2}{3} \end{align}$$</p> <p>as was to be shown!</p> Answer 31: <p>Once you get $$ I = \int_{0}^{1}\frac{\log^2(1-t)}{t^2}\,dt=\int_{0}^{1}\frac{\log^2(t)}{(1-t)^2}\,dt \tag{1}$$ it is enough to recall that $$ \frac{1}{(1-t)^2}=\sum_{n\geq 0}(n+1) t^n,\qquad \int_{0}^{1}t^n\log^2(t)\,dt = \frac{2}{(n+1)^3}\tag{2} $$ to have: $$ I = \sum_{n\geq 0}\frac{2}{(n+1)^2} = \color{red}{\frac{\pi^2}{3}}.\tag{3}$$</p> Answer 32: <blockquote> <p><strong>I thought it might be instructive to present an approach that relies on integration by parts and recognition of the value of the integral, $\int_0^1 \frac{\log(1-x)}{x}\,dx=\text{Li}_2(1)=\frac{\pi^2}{6}$. 
To that end, we proceed.</strong></p> </blockquote> <hr> <p>Let $I$ be given by </p> <p>$$I=\int_1^\infty \log^2\left(1-\frac1x\right)\,dx=\int_0^1\frac{\log^2(1-x)}{x^2}\,dx$$</p> <p>Integrating by parts with $u=\log^2(1-x)$ and $v=-1/x$ reveals</p> <p>$$\begin{align} I&amp;=\lim_{\epsilon\to 0}\left(\left.\left(-\frac{\log^2(1-x)}{x}\right)\right|_{0}^{1-\epsilon}-2\int_0^{1-\epsilon}\frac{\log(1-x)}{x(1-x)}\,dx\right)\\\\ &amp;=\lim_{\epsilon\to 0}\left(\frac{-\log^2(\epsilon)}{1-\epsilon}-2\int_0^{1-\epsilon}\frac{\log(1-x)}{1-x} -2\int_0^1\frac{\log(1-x)}{x}\,dx\right)\\\\ &amp;=\lim_{\epsilon\to 0}\left(-\frac{\log^2(\epsilon)}{1-\epsilon}+2\int_0^{1-\epsilon}\frac12\frac{d\log^2(1-x)}{dx}\,dx -2\int_0^1\frac{\log(1-x)}{x}\,dx\right)\\\\ &amp;=-2\int_0^1\frac{\log(1-x)}{x}\,dx\\\\ &amp;=\frac{\pi^2}{3} \end{align}$$</p> <p>as was to be shown!</p> Answer 33: <p>Once you get $$ I = \int_{0}^{1}\frac{\log^2(1-t)}{t^2}\,dt=\int_{0}^{1}\frac{\log^2(t)}{(1-t)^2}\,dt \tag{1}$$ it is enough to recall that $$ \frac{1}{(1-t)^2}=\sum_{n\geq 0}(n+1) t^n,\qquad \int_{0}^{1}t^n\log^2(t)\,dt = \frac{2}{(n+1)^3}\tag{2} $$ to have: $$ I = \sum_{n\geq 0}\frac{2}{(n+1)^2} = \color{red}{\frac{\pi^2}{3}}.\tag{3}$$</p> Answer 34: <blockquote> <p><strong>I thought it might be instructive to present an approach that relies on integration by parts and recognition of the value of the integral, $\int_0^1 \frac{\log(1-x)}{x}\,dx=\text{Li}_2(1)=\frac{\pi^2}{6}$. 
To that end, we proceed.</strong></p> </blockquote> <hr> <p>Let $I$ be given by </p> <p>$$I=\int_1^\infty \log^2\left(1-\frac1x\right)\,dx=\int_0^1\frac{\log^2(1-x)}{x^2}\,dx$$</p> <p>Integrating by parts with $u=\log^2(1-x)$ and $v=-1/x$ reveals</p> <p>$$\begin{align} I&amp;=\lim_{\epsilon\to 0}\left(\left.\left(-\frac{\log^2(1-x)}{x}\right)\right|_{0}^{1-\epsilon}-2\int_0^{1-\epsilon}\frac{\log(1-x)}{x(1-x)}\,dx\right)\\\\ &amp;=\lim_{\epsilon\to 0}\left(\frac{-\log^2(\epsilon)}{1-\epsilon}-2\int_0^{1-\epsilon}\frac{\log(1-x)}{1-x} -2\int_0^1\frac{\log(1-x)}{x}\,dx\right)\\\\ &amp;=\lim_{\epsilon\to 0}\left(-\frac{\log^2(\epsilon)}{1-\epsilon}+2\int_0^{1-\epsilon}\frac12\frac{d\log^2(1-x)}{dx}\,dx -2\int_0^1\frac{\log(1-x)}{x}\,dx\right)\\\\ &amp;=-2\int_0^1\frac{\log(1-x)}{x}\,dx\\\\ &amp;=\frac{\pi^2}{3} \end{align}$$</p> <p>as was to be shown!</p> Answer 35: <p>Once you get $$ I = \int_{0}^{1}\frac{\log^2(1-t)}{t^2}\,dt=\int_{0}^{1}\frac{\log^2(t)}{(1-t)^2}\,dt \tag{1}$$ it is enough to recall that $$ \frac{1}{(1-t)^2}=\sum_{n\geq 0}(n+1) t^n,\qquad \int_{0}^{1}t^n\log^2(t)\,dt = \frac{2}{(n+1)^3}\tag{2} $$ to have: $$ I = \sum_{n\geq 0}\frac{2}{(n+1)^2} = \color{red}{\frac{\pi^2}{3}}.\tag{3}$$</p> Answer 36: <blockquote> <p><strong>I thought it might be instructive to present an approach that relies on integration by parts and recognition of the value of the integral, $\int_0^1 \frac{\log(1-x)}{x}\,dx=\text{Li}_2(1)=\frac{\pi^2}{6}$. 
To that end, we proceed.</strong></p> </blockquote> <hr> <p>Let $I$ be given by </p> <p>$$I=\int_1^\infty \log^2\left(1-\frac1x\right)\,dx=\int_0^1\frac{\log^2(1-x)}{x^2}\,dx$$</p> <p>Integrating by parts with $u=\log^2(1-x)$ and $v=-1/x$ reveals</p> <p>$$\begin{align} I&amp;=\lim_{\epsilon\to 0}\left(\left.\left(-\frac{\log^2(1-x)}{x}\right)\right|_{0}^{1-\epsilon}-2\int_0^{1-\epsilon}\frac{\log(1-x)}{x(1-x)}\,dx\right)\\\\ &amp;=\lim_{\epsilon\to 0}\left(\frac{-\log^2(\epsilon)}{1-\epsilon}-2\int_0^{1-\epsilon}\frac{\log(1-x)}{1-x} -2\int_0^1\frac{\log(1-x)}{x}\,dx\right)\\\\ &amp;=\lim_{\epsilon\to 0}\left(-\frac{\log^2(\epsilon)}{1-\epsilon}+2\int_0^{1-\epsilon}\frac12\frac{d\log^2(1-x)}{dx}\,dx -2\int_0^1\frac{\log(1-x)}{x}\,dx\right)\\\\ &amp;=-2\int_0^1\frac{\log(1-x)}{x}\,dx\\\\ &amp;=\frac{\pi^2}{3} \end{align}$$</p> <p>as was to be shown!</p>
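As an editorial sanity check (not part of either answer above; the variable names are mine), the substitution $x=1-e^{-u}$ turns the integral into the smooth integrand $u^2e^{-u}/(1-e^{-u})^2$ on $(0,\infty)$, so a plain midpoint rule converges quickly, and the partial sums of the series $\sum_{n\geq 0} 2/(n+1)^2$ give an independent check. Both agree with $\pi^2/3$:

```python
import math

# After x = 1 - exp(-u), the integrand log^2(1-x)/x^2 dx becomes
# u^2 * exp(-u) / (1 - exp(-u))^2 du, smooth on (0, oo) with limit 1 as u -> 0.
def f(u):
    return u * u * math.exp(-u) / (1.0 - math.exp(-u)) ** 2

N, b = 200_000, 40.0          # the tail beyond u = 40 contributes ~1e-14
h = b / N
I_num = h * sum(f((k + 0.5) * h) for k in range(N))   # composite midpoint rule

# Partial sum of the series used in the second answer: sum 2/(n+1)^2
series = sum(2.0 / (n + 1) ** 2 for n in range(10 ** 6))

target = math.pi ** 2 / 3     # ~3.2898681...
```

Both `I_num` and `series` land within a few parts in $10^6$ of `target`, matching the closed form.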
Intuitively, why are there 4 classical Lie groups/algebras?
2,109,581
9
https://math.stackexchange.com/questions/2109581/intuitively-why-are-there-4-classical-lie-groups-algebras
<p>I would like to understand the big picture in mathematics. Lie groups and Lie algebras seem to play a central role in bridging analysis and algebra. I'm curious to understand, intuitively, why there are 4 classical Lie groups/algebras $A_{n},B_{n},C_{n},D_{n}$ and what are their various natures.</p> <ul> <li>$A_{n}$ special linear Lie algebra $\mathfrak {sl}_{n+1}$</li> <li>$B_{n}$ odd-dimensional special orthogonal Lie algebra $\mathfrak {so}_{2n+1}$</li> <li>$C_{n}$ symplectic Lie algebra $\mathfrak {sp}_{2n}$</li> <li>$D_{n}$ even-dimensional special orthogonal Lie algebra $\mathfrak {so}_{2n}$</li> </ul> <p>These writings raise my curiosity:</p> <ul> <li><em>Each of the four types</em> $W, S, H, K$ <em>of simple primitive Lie algebras</em> $(L, L_{0})$ <em>correspond to the four most important types of geometries of manifolds: all manifolds, oriented manifolds, symplectic and contact manifolds.</em> <a href="https://arxiv.org/pdf/math/9912235.pdf" rel="nofollow noreferrer">Victor Kac</a>, page 20.</li> <li><em>Whenever we pick a Dynkin diagram and a field we get a geometry</em>: $A_{n}$ projective, $B_{n}$ $D_{n}$ conformal, $C_{n}$ symplectic. <a href="http://math.ucr.edu/home/baez/week186.html" rel="nofollow noreferrer">John Baez, in his blog.</a></li> <li><a href="http://math.ucr.edu/home/baez/octonions/node13.html" rel="nofollow noreferrer">John Baez, in writing about the octonions,</a> combines the orthogonal Lie algebras/groups so as to have three families. 
<em>These arise naturally as symmetry groups of projective spaces over</em> $\mathbb{R}$, $\mathbb{C}$, <em>and</em> $\mathbb{H}$, <em>respectively.</em> @JakobH and @arctic-tern discuss that at <a href="https://math.stackexchange.com/questions/1764025/intuitive-explanation-for-the-connection-between-lie-groups-and-projective-space">this post</a>.</li> </ul> <p>I imagine that I could study the associated Weyl groups and the infinite families of polytopes (simplexes, cross-polytopes, hypercubes, demicubes) which they are symmetry groups for. I write my thoughts about that <a href="http://www.ms.lt/sodas/Book/DiscoveryInMathematics" rel="nofollow noreferrer">here</a>. I imagine that I should study the bilinear and sesquilinear forms as described in this <a href="https://en.wikipedia.org/wiki/Classical_group" rel="nofollow noreferrer">Wikipedia article on the classical groups</a>.</p> <p>Concretely, I've been looking at the Dynkin diagrams and learning how they relate to the Cartan matrices and the orthogonal bases used to describe the root systems. The Dynkin diagrams are, in each case, chains that differ only at one end. The links in the chain show that two fundamental roots are not orthogonal but differ by 120 degrees. I think the associated Cartan matrix can be thought of as describing the looseness in the root system, that is, the number of times that a root $\alpha$ can be added to $-\beta$ and stay within the root system. The Cartan matrix needs to describe an independent basis and thus needs to be nondegenerate with determinant nonzero, and in fact, we can write the Cartan matrix as $D\cdot S$ where $S$ is positive-definite. We can thereby show that if the Dynkin diagram grows in certain ways, for example, if it forms a cycle, then the determinant will become zero, the system will collapse. The determinant of the $n\times n$ Cartan matrix for a chain grows by the recursion formula $d_{n+1}=2d_{n}-d_{n-1}$, which is to say, $d_{n}=(d_{n-1}+d_{n+1})/2$. 
In the case of $A_{n}$ the initial conditions are $d_{1}=2$ and $d_{2}=3$ so it grows further $4,5,6...$. But in the case of $B_{n}$ and $C_{n}$ we have $d_{1}=2$ and $d_{2}=2$ and so the determinants stay constant $2,2,2...$ and likewise for $D_{n}$ we have $4,4,4...$ The point is that the one end of the chain constrains what is possible at the other end of the chain. The constraint is given by the fact that the wrong link at the end will make the whole system collapse. It seems like a system for propagating a signal or for describing entanglement. </p> <p>But what I'm trying to understand is how to explain the four possible options for the Dynkin diagrams and what they imply about the characteristics of the Lie algebras/groups. I am trying to explore the relationship between the Cartan matrices and the fundamental roots $\left \{ e_{i}-e_{i+1} | 1\leq i &lt; n\right \}$ with each diagram distinguished by an additional root as follows:</p> <ul> <li>$A_{n}: e_{n}-e_{n+1}$</li> <li>$B_{n}: e_{n}$</li> <li>$C_{n}: 2e_{n}$</li> <li>$D_{n}: e_{n-1}+e_{n}$</li> </ul> <p>Each of these new roots maintains an independent system of roots but the cases $B_{n}, C_{n}, D_{n}$ "economize" and thereby compromise the options for expanding the system.</p> <p>Finally, I'm noticing based on what I've read at <a href="https://en.wikipedia.org/wiki/Positive-definite_matrix" rel="nofollow noreferrer">Wikipedia about positive-definite matrices</a> that the formula $z^{\mathrm{T}}Sz=0$, with $D\cdot S$ equalling the Cartan matrix as discussed above, seems to work like a formula in logic. For example, for $A_{2}$ we get $z_{1}^{2}+(z_{1}-z_{2})^{2}+z_{2}^{2}=0$ forcing $z_{1}=0,z_{1}=z_{2},z_{2}=0$, which means that the vector $z=0$. Whereas for a cycle we could get $(z_{1}-z_{2})^{2}+(z_{2}-z_{3})^{2}+(z_{3}-z_{1})^{2}=0$ allowing a degenerate, nonzero solution $z_{1}=z_{2}=z_{3}$. 
Setting</p> <p>$S = \begin{pmatrix} k &amp; -\frac{k}{2}\\ -\frac{k}{2} &amp; 1 \end{pmatrix}$</p> <p>to handle the cases $A_{2} (k=1), B_{2} (k=2), G_{2} (k=3)$, we get the equation</p> <p>$(1-\frac{k}{4})z_{2}^2+k(z_{1}-\frac{z_{2}}{2})^2=0$</p> <p>which forces us to have solutions $z_{2}=0$ and $z_{1}=z_{2}/2$ and also shows that we will have degenerate solutions for $k\geq 4$.</p> <p>I have yet to find an intuitive explanation why there are 4 classical Lie algebras/groups. I would very much appreciate one. Otherwise, I suspect an answer might be found by working from the most concrete vantage point rather than the most abstract one. Thank you for helping me.</p>
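The determinant claims above are mechanical to verify: $\det A_n=n+1$, $\det B_n=\det C_n=2$, $\det D_n=4$, and the $2\times 2$ matrix with off-diagonal entries $-1$ and $-k$ has determinant $4-k$, which degenerates exactly at $k\geq 4$. A brief sketch (an editorial addition, not part of the original question; the helper names are mine), using exact integer arithmetic:

```python
def det(M):
    # Fraction-free Bareiss elimination: exact determinant of an integer matrix.
    M = [row[:] for row in M]
    n, sign, prev = len(M), 1, 1
    for k in range(n - 1):
        if M[k][k] == 0:
            for i in range(k + 1, n):
                if M[i][k] != 0:
                    M[k], M[i], sign = M[i], M[k], -sign
                    break
            else:
                return 0
        for i in range(k + 1, n):
            for j in range(k + 1, n):
                M[i][j] = (M[i][j] * M[k][k] - M[i][k] * M[k][j]) // prev
        prev = M[k][k]
    return sign * M[-1][-1]

def chain(n):
    # Cartan matrix of the A_n chain: 2 on the diagonal, -1 on the off-diagonals.
    M = [[0] * n for _ in range(n)]
    for i in range(n):
        M[i][i] = 2
        if i + 1 < n:
            M[i][i + 1] = M[i + 1][i] = -1
    return M

def cartan(family, n):
    M = chain(n)
    if family == 'B':        # short root e_n at the end: double bond
        M[n - 1][n - 2] = -2
    elif family == 'C':      # long root 2e_n: the transpose of B_n
        M[n - 2][n - 1] = -2
    elif family == 'D':      # e_{n-1}+e_n attaches one node earlier, forking the chain
        M[n - 1][n - 2] = M[n - 2][n - 1] = 0
        M[n - 1][n - 3] = M[n - 3][n - 1] = -1
    return M

dets = {f: [det(cartan(f, n)) for n in range(4, 9)] for f in 'ABCD'}
kdets = [det([[2, -1], [-k, 2]]) for k in (1, 2, 3, 4)]   # A_2, B_2, G_2, collapse
```

Here `dets['A']` comes out as `[5, 6, 7, 8, 9]`, the `B` and `C` lists are constant `2`, the `D` list is constant `4`, and `kdets` is `[3, 2, 1, 0]`, confirming the constant determinants and the collapse at $k=4$.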
[ { "answer_id": 2640677, "score": 3, "text": "<p>I will share some progress that I am making in answering my own question. It comes from studying <a href=\"http://math.mit.edu/classes/18.745/classnotes.html\" rel=\"nofollow noreferrer\">Victor Kac's lecture notes</a>, and in particular, <a href=\"http://math.mit.edu/classes/18.745/Notes/Lecture_15_Notes.pdf\" rel=\"nofollow noreferrer\">Lecture 15</a>, and thinking concretely about how the simple roots arise from pairs of the diagonal entries of the matrices of the Cartan subalgebra. Perhaps somebody will have further or better thoughts. Basically, I may have found the metaphysical or cognitive bottleneck (in the good sense) that I am looking for, but now I need to link that to meaningful constraints on Lie Algebras and Lie Groups.</p>\n\n<p>Here is the closest that I seem to get to the crux. Imagine a counting process $1 \\rightarrow 2 \\rightarrow 3 \\rightarrow \\dots n-1 \\rightarrow n$ which we encode algebraically as $ x_{1}-x_{2}, x_{2}-x_{3}, \\dots , x_{n-1}-x_{n} $. All of the classical Lie algebras share this set of simple roots. We can think of this as a signal propagating forward. The classical Lie algebras are distinguished by one additional simple root for which there are four allowed possibilities:</p>\n\n<p>$ \\begin{array}{1|2]3|4} A_{n} &amp; n \\rightarrow n+1 &amp; x_{n}-x_{n+1} &amp; = x_{n}-x_{n+1} \\\\ B_{n} &amp; n \\rightarrow 0 &amp; x_{n}-0 &amp; = x_{n} \\\\ C_{n} &amp; n \\rightarrow -n &amp; x_{n}- -x_{n} &amp; = 2x_{n} \\\\ D_{n} &amp; n \\rightarrow -(n-1) &amp; x_{n}- -x_{n-1} &amp; = x_{n-1}+x_{n} \\\\ \\end{array} $</p>\n\n<p>I interpret $A_{n}$ as the possibility that the sequence continues propagating forward. Otherwise the sequence is reflected backward. The sequence can take us first to an external mirror $0$ from which we would proceed to count backwards $-x_{n},-x_{n-1},\\dots,-x_{1}$. Or the sequence can take us from $x_{n}$ directly to its reflection $-x_{n}$. 
Or it can turn out that $x_{n}$ is itself a mirror and so the counting proceeds $x_{n-1} \\rightarrow x_{n} \\rightarrow -x_{n-1}$ where we may think of $x_{n}$ and $-x_{n}$ as intrinsically indistinguishable conjugates.</p>\n\n<p><a href=\"https://i.stack.imgur.com/qtb5y.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/qtb5y.png\" alt=\"enter image description here\"></a></p>\n\n<p>The point is that there is a duality between counting forwards and counting backwards. (I imagine it as an outside view, watching the count forwards, like a tube growing, and then an inside view, where we go inside the sequence, counting backwards.) So long as we are counting forwards, the duality is potential. Once we start counting backwards, then the duality is manifest. And there are three ways that it can manifest itself: with an external mirror $0 \\leftrightarrow -0$, without an explicit mirror, and with an internal mirror $x_{n} \\leftrightarrow -x_{n}$.</p>\n\n<p>This is a rather straightforward analysis of the options for the diagonal entries of the Cartan subalgebra in Victor Kac's Lecture 15. The adjoint representation is relating pairs of these entries. The one subtle point is that in the case of the orthogonal Lie algebras $B_{n}$ and $D_{n}$ the matrices in the algebras have zeros on their anti-diagonal with the consequence that the adjoint representation never pairs $x_{i}$ with $-x_{i}$. But $B_{n}$ gets around this because it is odd-dimensional and so has a diagonal element $x_{n+1}=-x_{n+1}=0$ and thus can pair $x_{n}$ and $0$ yielding the simple root $x_{n}-0$. For this reason $B_{n}$ is similar to the symplectic algebra $C_{n}$ which can pair $x_{n}$ and $-x_{n}$ directly yielding the simple root $x_{n}- -x_{n} = 2x_{n}$.</p>\n\n<p>This may all be very meaningful in that cognitively, mathematics seems to be all about duality and its subtleties. 
I think of logic as a perfect duality, apparent in de Morgan's laws, between what is and what is not, which in set theory is the duality of an empty set and a total set. Mathematics then seems to consist of subtle deviations from this perfect duality, for example, the difference in topology between arbitrary unions and finite intersections. I did a metaphysical study (<a href=\"http://www.ms.lt/sodas/Book/Duality\" rel=\"nofollow noreferrer\">here are my notes</a>) of the <a href=\"https://en.wikipedia.org/wiki/List_of_dualities\" rel=\"nofollow noreferrer\">mathematical dualities listed in Wikipedia</a> and organized them in the diagram below. Basically, some of the dualities are explicit in mathematical notation and others are implicit in the mind, as with complex conjugation.</p>\n\n<p><a href=\"https://i.stack.imgur.com/VcVZD.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/VcVZD.png\" alt=\"enter image description here\"></a></p>\n\n<p>The cognitive bottleneck which I describe above seems to be near the very heart of the connection between Lie groups and Lie algebras. Now I am trying to understand the implications by working them out for both the Lie groups and for the Lie algebras. I keep in mind that it may turn out that the Lie machinery is an accident of history and not the most profound way of describing what is going on at the deepest level. For example, it may or may not turn out that cognitively the exceptional Lie groups are spurious. I will list some connections that I am trying to make and understand or conclude otherwise.</p>\n\n<p>I need to understand the link between the simple roots and the conjugate transposes (real, complex, quaternion) and the inner products (real, complex, quaternion) which the Lie groups preserve and are thus defined by. Apparently, the most ordinary Lie group $A_{n}$ is defined by the complex inner product (rather than, say, the real inner product). 
This means that the complex inner product based on two dimensions is the most ordinary. I imagine it may be expressing the duality of the options of counting forwards and backwards. The symplectic group is defined by the quaternion inner product and this would make sense if we notice that it is establishing two sequences, thus making for four dimensions. The orthogonal groups are defined by the real inner product and they can be thought of as carrying a single dimension because they are counting only one side of the sequence, outside (when there is an external mirror?) or inside (when there is an internal mirror?). Taken together the options define 2 + 4 + 1 + 1 = 8 dimensions. And this brings to mind the Bott periodicity clock or the Clifford algebra clock. I also have an 8-cycle in <a href=\"http://www.ms.lt/sodas/Book/20171011DisembodyingMind\" rel=\"nofollow noreferrer\">my study of cognitive frameworks</a>.</p>\n\n<p>I would like to try to relate the Lie groups with four geometries: affine, projective, conformal and symplectic. And perhaps more basically, with four ways of thinking about a triangle:</p>\n\n<ul>\n<li>The concatenation of three paths.</li>\n<li>The intersections of three lines (on which we travel backwards and\nforwards).</li>\n<li>Three angles that divide the triangle.</li>\n<li>The oriented area of the triangle that we sweep out as we go around\nit.</li>\n</ul>\n\n<p><a href=\"https://i.stack.imgur.com/hLzoY.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/hLzoY.png\" alt=\"enter image description here\"></a></p>\n\n<p>I want to imagine that these get expressed as affine, projective, conformal and symplectic geometries, respectively. 
In <a href=\"http://www.ms.lt/derlius/20171001AndriusKulikauskas-AGeometryOfMoods.pdf\" rel=\"nofollow noreferrer\">my study of emotion and moods</a>, I have found the different ways of thinking about a triangle to be useful in grounding six transformations which take us from a less specified geometry to a more specified one. I need to learn what gets preserved and how ($A_{n}$ - volume, $C_{n}$ - oriented surface area?, $B_{n}$ and $D_{n}$ - distance. I am thinking that complex derivatives (that is, holomorphic functions where the first derivative is nonzero) preserve angles and so $A_{n}$ expresses conformal geometry. And I am imagining that of $B_{n}$ and $D_{n}$, perhaps the first has an explicit origin and is thus the basis for projective geometry, whereas the second does not have an explicit origin and is the basis for affine geometry. </p>\n\n<p>Finally, I am trying to think of the root systems as constructing infinite families of polytopes. Here it may be helpful to identify each eigenvalue with a vertex and each root with an edge of a polytope. We have the following kinds of building blocks:</p>\n\n<p><a href=\"https://i.stack.imgur.com/RiVUs.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/RiVUs.png\" alt=\"Building blocks of root systems\"></a></p>\n\n<p>$A_{n}$ can be identified with the simplexes. I think of the simplex as having a unique center - the -1 simplex - which keeps moving as the simplex grows. Similarly, the simplex has a unique volume. The binomial expansion then codes for all of the subsimplexes. </p>\n\n<p><a href=\"https://i.stack.imgur.com/oxDFU.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/oxDFU.png\" alt=\"enter image description here\"></a></p>\n\n<p>Now I think of the cross-polytopes as being generated by the center creating pairs of vertices, positive and negative, which are linked to all of the existing vertices but not to each other. This accords with the root system $C_{n}$. 
If we look at the triangle which counts the sub-polytopes, then we see that it has a unique center but the largest number counts the surface area, which is to say, there is no volume!</p>\n\n<p><a href=\"https://i.stack.imgur.com/InUnA.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/InUnA.png\" alt=\"enter image description here\"></a></p>\n\n<p>The duals of the cross-polytopes are the hypercubes, which can be thought of as generated by mirrors (hyperplanes). They have a volume but no center! I don't see how to relate $B_{n}$ with such a hypercube. Although perhaps the 0 is the total volume and it lets us construct our hypercube \"top down\" by dividing the whole volume rather than \"bottom up\" by assembling the vertices. </p>\n\n<p><a href=\"https://i.stack.imgur.com/qVZrG.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/qVZrG.png\" alt=\"enter image description here\"></a></p>\n\n<p>Finally, I expect there should be an infinite family of polytopes $D_{n}$ with no center and no volume. But I don't think it is the demihypercubes. I need to try to interpret the root system as a polytope.</p>\n\n<p>However, just to note the problematic nature of my own thinking, I had originally observed and supposed that the simplexes could be used to define a vector (leading from the center to a vertex) and affine geometry. The cross polytopes could define a line (going through the center in opposite directions to a pair of vertices) and projective geometry. The hypercubes could define angles and conformal geometry. And the final polytopes, which I imagined as \"coordinate systems\" (a simplex defined in \"one quadrant\"), could define oriented areas and symplectic geometry. 
But in learning more I am realizing, in my thinking above, that it should be, if anything, $A_{n}$ - conformal geometry, $B_{n}$ - projective geometry, $C_{n}$ - symplectic geometry and $D_{n}$ - affine geometry. </p>\n\n<p>I appreciate any thoughts, but especially help finding my way in learning about Lie groups and Lie algebras. I also look forward to alternative answers, intuitively, why are there four classical Lie groups / algebras?</p>\n" }, { "answer_id": 4038796, "score": 1, "text": "<p>You may find insight in Coxeter's work on the subject, which leans heavily on classical geometry. In <em>Regular Polytopes,</em> <span class=\"math-container\">$\\S$</span>11-5, Coxeter achieves the result in question by studying the fundamental region of reflection groups (I also recommend reviewing the historical context in <span class=\"math-container\">$\\S$</span>11-x). Intuition is subjective, but I find Coxeter's style as a classical geometer highly intuitive.</p>\n<p>In the reference above, Coxeter relies heavily on Cartan's paper <a href=\"https://link.springer.com/article/10.1007/BF02409989\" rel=\"nofollow noreferrer\">La géométrie des groupes simples</a>. This paper is extremely lucid, and has also been a significant source of intuition for me on this subject.</p>\n" }, { "answer_id": 2640677, "score": 3, "text": "<p>I will share some progress that I am making in answering my own question. It comes from studying <a href=\"http://math.mit.edu/classes/18.745/classnotes.html\" rel=\"nofollow noreferrer\">Victor Kac's lecture notes</a>, and in particular, <a href=\"http://math.mit.edu/classes/18.745/Notes/Lecture_15_Notes.pdf\" rel=\"nofollow noreferrer\">Lecture 15</a>, and thinking concretely about how the simple roots arise from pairs of the diagonal entries of the matrices of the Cartan subalgebra. Perhaps somebody will have further or better thoughts. 
Basically, I may have found the metaphysical or cognitive bottleneck (in the good sense) that I am looking for, but now I need to link that to meaningful constraints on Lie Algebras and Lie Groups.</p>\n\n<p>Here is the closest that I seem to get to the crux. Imagine a counting process $1 \\rightarrow 2 \\rightarrow 3 \\rightarrow \\dots n-1 \\rightarrow n$ which we encode algebraically as $ x_{1}-x_{2}, x_{2}-x_{3}, \\dots , x_{n-1}-x_{n} $. All of the classical Lie algebras share this set of simple roots. We can think of this as a signal propagating forward. The classical Lie algebras are distinguished by one additional simple root for which there are four allowed possibilities:</p>\n\n<p>$ \\begin{array}{1|2]3|4} A_{n} &amp; n \\rightarrow n+1 &amp; x_{n}-x_{n+1} &amp; = x_{n}-x_{n+1} \\\\ B_{n} &amp; n \\rightarrow 0 &amp; x_{n}-0 &amp; = x_{n} \\\\ C_{n} &amp; n \\rightarrow -n &amp; x_{n}- -x_{n} &amp; = 2x_{n} \\\\ D_{n} &amp; n \\rightarrow -(n-1) &amp; x_{n}- -x_{n-1} &amp; = x_{n-1}+x_{n} \\\\ \\end{array} $</p>\n\n<p>I interpret $A_{n}$ as the possibility that the sequence continues propagating forward. Otherwise the sequence is reflected backward. The sequence can take us first to an external mirror $0$ from which we would proceed to count backwards $-x_{n},-x_{n-1},\\dots,-x_{1}$. Or the sequence can take us from $x_{n}$ directly to its reflection $-x_{n}$. Or it can turn out that $x_{n}$ is itself a mirror and so the counting proceeds $x_{n-1} \\rightarrow x_{n} \\rightarrow -x_{n-1}$ where we may think of $x_{n}$ and $-x_{n}$ as intrinsically indistinguishable conjugates.</p>\n\n<p><a href=\"https://i.stack.imgur.com/qtb5y.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/qtb5y.png\" alt=\"enter image description here\"></a></p>\n\n<p>The point is that there is a duality between counting forwards and counting backwards. 
(I imagine it as an outside view, watching the count forwards, like a tube growing, and then an inside view, where we go inside the sequence, counting backwards.) So long as we are counting forwards, the duality is potential. Once we start counting backwards, then the duality is manifest. And there are three ways that it can manifest itself: with an external mirror $0 \\leftrightarrow -0$, without an explicit mirror, and with an internal mirror $x_{n} \\leftrightarrow -x_{n}$.</p>\n\n<p>This is a rather straightforward analysis of the options for the diagonal entries of the Cartan subalgebra in Victor Kac's Lecture 15. The adjoint representation is relating pairs of these entries. The one subtle point is that in the case of the orthogonal Lie algebras $B_{n}$ and $D_{n}$ the matrices in the algebras have zeros on their anti-diagonal with the consequence that the adjoint representation never pairs $x_{i}$ with $-x_{i}$. But $B_{n}$ gets around this because it is odd-dimensional and so has a diagonal element $x_{n+1}=-x_{n+1}=0$ and thus can pair $x_{n}$ and $0$ yielding the simple root $x_{n}-0$. For this reason $B_{n}$ is similar to the symplectic algebra $C_{n}$ which can pair $x_{n}$ and $-x_{n}$ directly yielding the simple root $x_{n}- -x_{n} = 2x_{n}$.</p>\n\n<p>This may all be very meaningful in that cognitively, mathematics seems to be all about duality and its subtleties. I think of logic as a perfect duality, apparent in de Morgan's laws, between what is and what is not, which in set theory is the duality of an empty set and a total set. Mathematics then seems to consist of subtle deviations from this perfect duality, for example, the difference in topology between arbitrary unions and finite intersections. 
I did a metaphysical study (<a href=\"http://www.ms.lt/sodas/Book/Duality\" rel=\"nofollow noreferrer\">here are my notes</a>) of the <a href=\"https://en.wikipedia.org/wiki/List_of_dualities\" rel=\"nofollow noreferrer\">mathematical dualities listed in Wikipedia</a> and organized them in the diagram below. Basically, some of the dualities are explicit in mathematical notation and others are implicit in the mind, as with complex conjugation.</p>\n\n<p><a href=\"https://i.stack.imgur.com/VcVZD.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/VcVZD.png\" alt=\"enter image description here\"></a></p>\n\n<p>The cognitive bottleneck which I describe above seems to be near the very heart of the connection between Lie groups and Lie algebras. Now I am trying to understand the implications by working them out for both the Lie groups and for the Lie algebras. I keep in mind that it may turn out that the Lie machinery is an accident of history and not the most profound way of describing what is going on at the deepest level. For example, it may or may not turn out that cognitively the exceptional Lie groups are spurious. I will list some connections that I am trying to make and understand or conclude otherwise.</p>\n\n<p>I need to understand the link between the simple roots and the conjugate transposes (real, complex, quaternion) and the inner products (real, complex, quaternion) which the Lie groups preserve and are thus defined by. Apparently, the most ordinary Lie group $A_{n}$ is defined by the complex inner product (rather than, say, the real inner product). This means that the complex inner product based on two dimensions is the most ordinary. I imagine it may be expressing the duality of the options of counting forwards and backwards. The symplectic group is defined by the quaternion inner product and this would make sense if we notice that it is establishing two sequences, thus making for four dimensions. 
The orthogonal groups are defined by the real inner product and they can be thought of as carrying a single dimension because they are counting only one side of the sequence, outside (when there is an external mirror?) or inside (when there is an internal mirror?). Taken together the options define 2 + 4 + 1 + 1 = 8 dimensions. And this brings to mind the Bott periodicity clock or the Clifford algebra clock. I also have an 8-cycle in <a href=\"http://www.ms.lt/sodas/Book/20171011DisembodyingMind\" rel=\"nofollow noreferrer\">my study of cognitive frameworks</a>.</p>\n\n<p>I would like to try to relate the Lie groups with four geometries: affine, projective, conformal and symplectic. And perhaps more basically, with four ways of thinking about a triangle:</p>\n\n<ul>\n<li>The concatenation of three paths.</li>\n<li>The intersections of three lines (on which we travel backwards and\nforwards).</li>\n<li>Three angles that divide the triangle.</li>\n<li>The oriented area of the triangle that we sweep out as we go around\nit.</li>\n</ul>\n\n<p><a href=\"https://i.stack.imgur.com/hLzoY.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/hLzoY.png\" alt=\"enter image description here\"></a></p>\n\n<p>I want to imagine that these get expressed as affine, projective, conformal and symplectic geometries, respectively. In <a href=\"http://www.ms.lt/derlius/20171001AndriusKulikauskas-AGeometryOfMoods.pdf\" rel=\"nofollow noreferrer\">my study of emotion and moods</a>, I have found the different ways of thinking about a triangle to be useful in grounding six transformations which take us from a less specified geometry to a more specified one. I need to learn what gets preserved and how ($A_{n}$ - volume, $C_{n}$ - oriented surface area?, $B_{n}$ and $D_{n}$ - distance). I am thinking that complex derivatives (that is, holomorphic functions where the first derivative is nonzero) preserve angles and so $A_{n}$ expresses conformal geometry. 
And I am imagining that of $B_{n}$ and $D_{n}$, perhaps the first has an explicit origin and is thus the basis for projective geometry, whereas the second does not have an explicit origin and is the basis for affine geometry. </p>\n\n<p>Finally, I am trying to think of the root systems as constructing infinite families of polytopes. Here it may be helpful to identify each eigenvalue with a vertex and each root with an edge of a polytope. We have the following kinds of building blocks:</p>\n\n<p><a href=\"https://i.stack.imgur.com/RiVUs.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/RiVUs.png\" alt=\"Building blocks of root systems\"></a></p>\n\n<p>$A_{n}$ can be identified with the simplexes. I think of the simplex as having a unique center - the -1 simplex - which keeps moving as the simplex grows. Similarly, the simplex has a unique volume. The binomial expansion then codes for all of the subsimplexes. </p>\n\n<p><a href=\"https://i.stack.imgur.com/oxDFU.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/oxDFU.png\" alt=\"enter image description here\"></a></p>\n\n<p>Now I think of the cross-polytopes as being generated by the center creating pairs of vertices, positive and negative, which are linked to all of the existing vertices but not to each other. This accords with the root system $C_{n}$. If we look at the triangle which counts the sub-polytopes, then we see that it has a unique center but the largest number counts the surface area, which is to say, there is no volume!</p>\n\n<p><a href=\"https://i.stack.imgur.com/InUnA.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/InUnA.png\" alt=\"enter image description here\"></a></p>\n\n<p>The duals of the cross-polytopes are the hypercubes, which can be thought of as generated by mirrors (hyperplanes). They have a volume but no center! I don't see how to relate $B_{n}$ with such a hypercube. 
Although perhaps the 0 is the total volume and it lets us construct our hypercube \"top down\" by dividing the whole volume rather than \"bottom up\" by assembling the vertices. </p>\n\n<p><a href=\"https://i.stack.imgur.com/qVZrG.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/qVZrG.png\" alt=\"enter image description here\"></a></p>\n\n<p>Finally, I expect there should be an infinite family of polytopes $D_{n}$ with no center and no volume. But I don't think it is the demihypercubes. I need to try to interpret the root system as a polytope.</p>\n\n<p>However, just to note the problematic nature of my own thinking, I had originally observed and supposed that the simplexes could be used to define a vector (leading from the center to a vertex) and affine geometry. The cross-polytopes could define a line (going through the center in opposite directions to a pair of vertices) and projective geometry. The hypercubes could define angles and conformal geometry. And the final polytopes, which I imagined as \"coordinate systems\" (a simplex defined in \"one quadrant\"), could define oriented areas and symplectic geometry. This would have related $A_{n}$ - affine geometry, $B_{n}$ - projective geometry, $C_{n}$ - conformal geometry and $D_{n}$ - symplectic geometry. But in learning more I am realizing, in my thinking above, that it should be, if anything, $A_{n}$ - conformal geometry, $B_{n}$ - projective geometry, $C_{n}$ - symplectic geometry and $D_{n}$ - affine geometry. </p>\n\n<p>I appreciate any thoughts, but especially help finding my way in learning about Lie groups and Lie algebras. I also look forward to alternative answers, intuitively, why are there four classical Lie groups / algebras?</p>\n" }, { "answer_id": 4038796, "score": 1, "text": "<p>You may find insight in Coxeter's work on the subject, which leans heavily on classical geometry. 
In <em>Regular Polytopes,</em> <span class=\"math-container\">$\\S$</span>11-5, Coxeter achieves the result in question by studying the fundamental region of reflection groups (I also recommend reviewing the historical context in <span class=\"math-container\">$\\S$</span>11-x). Intuition is subjective, but I find Coxeter's style as a classical geometer highly intuitive.</p>\n<p>In the reference above, Coxeter relies heavily on Cartan's paper <a href=\"https://link.springer.com/article/10.1007/BF02409989\" rel=\"nofollow noreferrer\">La géométrie des groupes simples</a>. This paper is extremely lucid, and has also been a significant source of intuition for me on this subject.</p>\n" }, { "answer_id": 2640677, "score": 3, "text": "<p>I will share some progress that I am making in answering my own question. It comes from studying <a href=\"http://math.mit.edu/classes/18.745/classnotes.html\" rel=\"nofollow noreferrer\">Victor Kac's lecture notes</a>, and in particular, <a href=\"http://math.mit.edu/classes/18.745/Notes/Lecture_15_Notes.pdf\" rel=\"nofollow noreferrer\">Lecture 15</a>, and thinking concretely about how the simple roots arise from pairs of the diagonal entries of the matrices of the Cartan subalgebra. Perhaps somebody will have further or better thoughts. Basically, I may have found the metaphysical or cognitive bottleneck (in the good sense) that I am looking for, but now I need to link that to meaningful constraints on Lie Algebras and Lie Groups.</p>\n\n<p>Here is the closest that I seem to get to the crux. Imagine a counting process $1 \\rightarrow 2 \\rightarrow 3 \\rightarrow \\dots n-1 \\rightarrow n$ which we encode algebraically as $ x_{1}-x_{2}, x_{2}-x_{3}, \\dots , x_{n-1}-x_{n} $. All of the classical Lie algebras share this set of simple roots. We can think of this as a signal propagating forward. 
The classical Lie algebras are distinguished by one additional simple root for which there are four allowed possibilities:</p>\n\n<p>$ \\begin{array}{1|2]3|4} A_{n} &amp; n \\rightarrow n+1 &amp; x_{n}-x_{n+1} &amp; = x_{n}-x_{n+1} \\\\ B_{n} &amp; n \\rightarrow 0 &amp; x_{n}-0 &amp; = x_{n} \\\\ C_{n} &amp; n \\rightarrow -n &amp; x_{n}- -x_{n} &amp; = 2x_{n} \\\\ D_{n} &amp; n \\rightarrow -(n-1) &amp; x_{n}- -x_{n-1} &amp; = x_{n-1}+x_{n} \\\\ \\end{array} $</p>\n\n<p>I interpret $A_{n}$ as the possibility that the sequence continues propagating forward. Otherwise the sequence is reflected backward. The sequence can take us first to an external mirror $0$ from which we would proceed to count backwards $-x_{n},-x_{n-1},\\dots,-x_{1}$. Or the sequence can take us from $x_{n}$ directly to its reflection $-x_{n}$. Or it can turn out that $x_{n}$ is itself a mirror and so the counting proceeds $x_{n-1} \\rightarrow x_{n} \\rightarrow -x_{n-1}$ where we may think of $x_{n}$ and $-x_{n}$ as intrinsically indistinguishable conjugates.</p>\n\n<p><a href=\"https://i.stack.imgur.com/qtb5y.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/qtb5y.png\" alt=\"enter image description here\"></a></p>\n\n<p>The point is that there is a duality between counting forwards and counting backwards. (I imagine it as an outside view, watching the count forwards, like a tube growing, and then an inside view, where we go inside the sequence, counting backwards.) So long as we are counting forwards, the duality is potential. Once we start counting backwards, then the duality is manifest. And there are three ways that it can manifest itself: with an external mirror $0 \\leftrightarrow -0$, without an explicit mirror, and with an internal mirror $x_{n} \\leftrightarrow -x_{n}$.</p>\n\n<p>This is a rather straightforward analysis of the options for the diagonal entries of the Cartan subalgebra in Victor Kac's Lecture 15. 
The adjoint representation is relating pairs of these entries. The one subtle point is that in the case of the orthogonal Lie algebras $B_{n}$ and $D_{n}$ the matrices in the algebras have zeros on their anti-diagonal with the consequence that the adjoint representation never pairs $x_{i}$ with $-x_{i}$. But $B_{n}$ gets around this because it is odd-dimensional and so has a diagonal element $x_{n+1}=-x_{n+1}=0$ and thus can pair $x_{n}$ and $0$ yielding the simple root $x_{n}-0$. For this reason $B_{n}$ is similar to the symplectic algebra $C_{n}$ which can pair $x_{n}$ and $-x_{n}$ directly yielding the simple root $x_{n}- -x_{n} = 2x_{n}$.</p>\n\n<p>This may all be very meaningful in that cognitively, mathematics seems to be all about duality and its subtleties. I think of logic as a perfect duality, apparent in de Morgan's laws, between what is and what is not, which in set theory is the duality of an empty set and a total set. Mathematics then seems to consist of subtle deviations from this perfect duality, for example, the difference in topology between arbitrary unions and finite intersections. I did a metaphysical study (<a href=\"http://www.ms.lt/sodas/Book/Duality\" rel=\"nofollow noreferrer\">here are my notes</a>) of the <a href=\"https://en.wikipedia.org/wiki/List_of_dualities\" rel=\"nofollow noreferrer\">mathematical dualities listed in Wikipedia</a> and organized them in the diagram below. Basically, some of the dualities are explicit in mathematical notation and others are implicit in the mind, as with complex conjugation.</p>\n\n<p><a href=\"https://i.stack.imgur.com/VcVZD.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/VcVZD.png\" alt=\"enter image description here\"></a></p>\n\n<p>The cognitive bottleneck which I describe above seems to be near the very heart of the connection betwen Lie groups and Lie algebras. 
Now I am trying to understand the implications by working them out for both the Lie groups and for the Lie algebras. I keep in mind that it may turn out that the Lie machinery is an accident of history and not the most profound way of describing what is going on at the deepest level. For example, it may or may not turn out that cognitively the exceptional Lie groups are spurious. I will list some connections that I am trying to make and understand or conclude otherwise.</p>\n\n<p>I need to understand the link between the simple roots and the conjugate transposes (real, complex, quaternion) and the inner products (real, complex, quaternion) which the Lie groups preserve and are thus defined by. Apparently, the most ordinary Lie group $A_{n}$ is defined by the complex inner product (rather than, say, the real inner product). This means that the complex inner product based on two dimensions is the most ordinary. I imagine it may be expressing the duality of the options of counting forwards and backwards. The symplectic group is defined by the quaternion inner product and this would make sense if we notice that it is establishing two sequences, thus making for four dimensions. The orthogonal groups are defined by the real inner product and they can be thought of as carrying a single dimension because they are counting only one side of the sequence, outside (when there is an external mirror?) or inside (when there is an internal mirror?). Taken together the options define 2 + 4 + 1 + 1 = 8 dimensions. And this brings to mind the Bott periodicity clock or the Clifford algebra clock. I also have an 8-cycle in <a href=\"http://www.ms.lt/sodas/Book/20171011DisembodyingMind\" rel=\"nofollow noreferrer\">my study of cognitive frameworks</a>.</p>\n\n<p>I would like to try to relate the Lie groups with four geometries: affine, projective, conformal and symplectic. 
And perhaps more basically, with four ways of thinking about a triangle:</p>\n\n<ul>\n<li>The concatenation of three paths.</li>\n<li>The intersections of three lines (on which we travel backwards and\nforwards).</li>\n<li>Three angles that divide the triangle.</li>\n<li>The oriented area of the triangle that we sweep out as we go around\nit.</li>\n</ul>\n\n<p><a href=\"https://i.stack.imgur.com/hLzoY.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/hLzoY.png\" alt=\"enter image description here\"></a></p>\n\n<p>I want to imagine that these get expressed as affine, projective, conformal and symplectic geometries, respectively. In <a href=\"http://www.ms.lt/derlius/20171001AndriusKulikauskas-AGeometryOfMoods.pdf\" rel=\"nofollow noreferrer\">my study of emotion and moods</a>, I have found the different ways of thinking about a triangle to be useful in grounding six transformations which take us from a less specified geometry to a more specified one. I need to learn what gets preserved and how ($A_{n}$ - volume, $C_{n}$ - oriented surface area?, $B_{n}$ and $D_{n}$ - distance. I am thinking that complex derivatives (that is, holomorphic functions where the first derivative is nonzero) preserve angles and so $A_{n}$ expresses conformal geometry. And I am imagining that of $B_{n}$ and $D_{n}$, perhaps the first has an explicit origin and is thus the basis for projective geometry, whereas the second does not have an explicit origin and is the basis for affine geometry. </p>\n\n<p>Finally, I am trying to think of the root systems as constructing infinite families of polytopes. Here it may be helpful to identify each eigenvalue with a vertex and each root with an edge of a polytope. 
We have the following kinds of building blocks:</p>\n\n<p><a href=\"https://i.stack.imgur.com/RiVUs.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/RiVUs.png\" alt=\"Building blocks of root systems\"></a></p>\n\n<p>$A_{n}$ can be identified with the simplexes. I think of the simplex as having a unique center - the -1 simplex - which keeps moving as the simplex grows. Similarly, the simplex has a unique volume. The binomial expansion then codes for all of the subsimplexes. </p>\n\n<p><a href=\"https://i.stack.imgur.com/oxDFU.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/oxDFU.png\" alt=\"enter image description here\"></a></p>\n\n<p>Now I think of the cross-polytopes as being generated by the center creating pairs of vertices, positive and negative, which are linked to all of the existing vertices but not to each other. This accords with the root system $C_{n}$. If we look at the triangle which counts the sub-polytopes, then we see that it has a unique center but the largest number counts the surface area, which is to say, there is no volume!</p>\n\n<p><a href=\"https://i.stack.imgur.com/InUnA.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/InUnA.png\" alt=\"enter image description here\"></a></p>\n\n<p>The duals of the cross-polytopes are the hypercubes, which can be thought of as generated by mirrors (hyperplanes). They have a volume but no center! I don't see how to relate $B_{n}$ with such a hypercube. Although perhaps the 0 is the total volume and it lets us construct our hypercube \"top down\" by dividing the whole volume rather than \"bottom up\" by assembling the vertices. </p>\n\n<p><a href=\"https://i.stack.imgur.com/qVZrG.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/qVZrG.png\" alt=\"enter image description here\"></a></p>\n\n<p>Finally, I expect there should be an infinite family of polytopes $D_{n}$ with no center and no volume. 
But I don't think it is the demihypercubes. I need to try to interpret the root system as a polytope.</p>\n\n<p>However, just to note the problematic nature of my own thinking, I had originally observed and supposed that the simplexes could be used to define a vector (leading from the center to a vertex) and affine geometry. The cross polytopes could define a line (going through the center in opposite directions to a pair of vertices) and projective geometry. The hypercubes could define angles and conformal geometry. And the final polytopes, which I imagined as \"coordinate systems\" (a symplex defined in \"one quadrant\"), could define oriented areas and symplectic geometry. This would have related $A_{n}$ - affine geometry, $B_{n}$ - projective geometry, $C_{n}$ - conformal geometry and $D_{n}$ - symplectic geometry. But in learning more I am realizing, in my thinking above, that it should be, if anything, $A_{n}$ - conformal geometry, $B_{n}$ - projective geometry, $C_{n}$ - symplectic geometry and $D_{n}$ - affine geometry. </p>\n\n<p>I appreciate any thoughts, but especially help finding my way in learning about Lie groups and Lie algebras. I also look forward to alternative answers, intuitively, why are there four classical Lie groups / algebras?</p>\n" }, { "answer_id": 4038796, "score": 1, "text": "<p>You may find insight in Coxeter's work on the subject, which leans heavily on classical geometry. In <em>Regular Polytopes,</em> <span class=\"math-container\">$\\S$</span>11-5, Coxeter achieves the result in question by studying the fundamental region of reflection groups (I also recommend reviewing the historical context in <span class=\"math-container\">$\\S$</span>11-x). 
Intuition is subjective, but I find Coxeter's style as a classical geometer highly intuitive.</p>\n<p>In the reference above, Coxeter relies heavily on Cartan's paper <a href=\"https://link.springer.com/article/10.1007/BF02409989\" rel=\"nofollow noreferrer\">La géométrie des groupes simples</a>. This paper is extremely lucid, and has also been a significant source of intuition for me on this subject.</p>\n" }, { "answer_id": 2640677, "score": 3, "text": "<p>I will share some progress that I am making in answering my own question. It comes from studying <a href=\"http://math.mit.edu/classes/18.745/classnotes.html\" rel=\"nofollow noreferrer\">Victor Kac's lecture notes</a>, and in particular, <a href=\"http://math.mit.edu/classes/18.745/Notes/Lecture_15_Notes.pdf\" rel=\"nofollow noreferrer\">Lecture 15</a>, and thinking concretely about how the simple roots arise from pairs of the diagonal entries of the matrices of the Cartan subalgebra. Perhaps somebody will have further or better thoughts. Basically, I may have found the metaphysical or cognitive bottleneck (in the good sense) that I am looking for, but now I need to link that to meaningful constraints on Lie Algebras and Lie Groups.</p>\n\n<p>Here is the closest that I seem to get to the crux. Imagine a counting process $1 \\rightarrow 2 \\rightarrow 3 \\rightarrow \\dots n-1 \\rightarrow n$ which we encode algebraically as $ x_{1}-x_{2}, x_{2}-x_{3}, \\dots , x_{n-1}-x_{n} $. All of the classical Lie algebras share this set of simple roots. We can think of this as a signal propagating forward. 
The classical Lie algebras are distinguished by one additional simple root for which there are four allowed possibilities:</p>\n\n<p>$ \\begin{array}{1|2]3|4} A_{n} &amp; n \\rightarrow n+1 &amp; x_{n}-x_{n+1} &amp; = x_{n}-x_{n+1} \\\\ B_{n} &amp; n \\rightarrow 0 &amp; x_{n}-0 &amp; = x_{n} \\\\ C_{n} &amp; n \\rightarrow -n &amp; x_{n}- -x_{n} &amp; = 2x_{n} \\\\ D_{n} &amp; n \\rightarrow -(n-1) &amp; x_{n}- -x_{n-1} &amp; = x_{n-1}+x_{n} \\\\ \\end{array} $</p>\n\n<p>I interpret $A_{n}$ as the possibility that the sequence continues propagating forward. Otherwise the sequence is reflected backward. The sequence can take us first to an external mirror $0$ from which we would proceed to count backwards $-x_{n},-x_{n-1},\\dots,-x_{1}$. Or the sequence can take us from $x_{n}$ directly to its reflection $-x_{n}$. Or it can turn out that $x_{n}$ is itself a mirror and so the counting proceeds $x_{n-1} \\rightarrow x_{n} \\rightarrow -x_{n-1}$ where we may think of $x_{n}$ and $-x_{n}$ as intrinsically indistinguishable conjugates.</p>\n\n<p><a href=\"https://i.stack.imgur.com/qtb5y.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/qtb5y.png\" alt=\"enter image description here\"></a></p>\n\n<p>The point is that there is a duality between counting forwards and counting backwards. (I imagine it as an outside view, watching the count forwards, like a tube growing, and then an inside view, where we go inside the sequence, counting backwards.) So long as we are counting forwards, the duality is potential. Once we start counting backwards, then the duality is manifest. And there are three ways that it can manifest itself: with an external mirror $0 \\leftrightarrow -0$, without an explicit mirror, and with an internal mirror $x_{n} \\leftrightarrow -x_{n}$.</p>\n\n<p>This is a rather straightforward analysis of the options for the diagonal entries of the Cartan subalgebra in Victor Kac's Lecture 15. 
The adjoint representation is relating pairs of these entries. The one subtle point is that in the case of the orthogonal Lie algebras $B_{n}$ and $D_{n}$ the matrices in the algebras have zeros on their anti-diagonal with the consequence that the adjoint representation never pairs $x_{i}$ with $-x_{i}$. But $B_{n}$ gets around this because it is odd-dimensional and so has a diagonal element $x_{n+1}=-x_{n+1}=0$ and thus can pair $x_{n}$ and $0$ yielding the simple root $x_{n}-0$. For this reason $B_{n}$ is similar to the symplectic algebra $C_{n}$ which can pair $x_{n}$ and $-x_{n}$ directly yielding the simple root $x_{n}- -x_{n} = 2x_{n}$.</p>\n\n<p>This may all be very meaningful in that cognitively, mathematics seems to be all about duality and its subtleties. I think of logic as a perfect duality, apparent in de Morgan's laws, between what is and what is not, which in set theory is the duality of an empty set and a total set. Mathematics then seems to consist of subtle deviations from this perfect duality, for example, the difference in topology between arbitrary unions and finite intersections. I did a metaphysical study (<a href=\"http://www.ms.lt/sodas/Book/Duality\" rel=\"nofollow noreferrer\">here are my notes</a>) of the <a href=\"https://en.wikipedia.org/wiki/List_of_dualities\" rel=\"nofollow noreferrer\">mathematical dualities listed in Wikipedia</a> and organized them in the diagram below. Basically, some of the dualities are explicit in mathematical notation and others are implicit in the mind, as with complex conjugation.</p>\n\n<p><a href=\"https://i.stack.imgur.com/VcVZD.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/VcVZD.png\" alt=\"enter image description here\"></a></p>\n\n<p>The cognitive bottleneck which I describe above seems to be near the very heart of the connection betwen Lie groups and Lie algebras. 
Now I am trying to understand the implications by working them out for both the Lie groups and for the Lie algebras. I keep in mind that it may turn out that the Lie machinery is an accident of history and not the most profound way of describing what is going on at the deepest level. For example, it may or may not turn out that cognitively the exceptional Lie groups are spurious. I will list some connections that I am trying to make and understand or conclude otherwise.</p>\n\n<p>I need to understand the link between the simple roots and the conjugate transposes (real, complex, quaternion) and the inner products (real, complex, quaternion) which the Lie groups preserve and are thus defined by. Apparently, the most ordinary Lie group $A_{n}$ is defined by the complex inner product (rather than, say, the real inner product). This means that the complex inner product based on two dimensions is the most ordinary. I imagine it may be expressing the duality of the options of counting forwards and backwards. The symplectic group is defined by the quaternion inner product and this would make sense if we notice that it is establishing two sequences, thus making for four dimensions. The orthogonal groups are defined by the real inner product and they can be thought of as carrying a single dimension because they are counting only one side of the sequence, outside (when there is an external mirror?) or inside (when there is an internal mirror?). Taken together the options define 2 + 4 + 1 + 1 = 8 dimensions. And this brings to mind the Bott periodicity clock or the Clifford algebra clock. I also have an 8-cycle in <a href=\"http://www.ms.lt/sodas/Book/20171011DisembodyingMind\" rel=\"nofollow noreferrer\">my study of cognitive frameworks</a>.</p>\n\n<p>I would like to try to relate the Lie groups with four geometries: affine, projective, conformal and symplectic. 
And perhaps more basically, with four ways of thinking about a triangle:</p>\n\n<ul>\n<li>The concatenation of three paths.</li>\n<li>The intersections of three lines (on which we travel backwards and\nforwards).</li>\n<li>Three angles that divide the triangle.</li>\n<li>The oriented area of the triangle that we sweep out as we go around\nit.</li>\n</ul>\n\n<p><a href=\"https://i.stack.imgur.com/hLzoY.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/hLzoY.png\" alt=\"enter image description here\"></a></p>\n\n<p>I want to imagine that these get expressed as affine, projective, conformal and symplectic geometries, respectively. In <a href=\"http://www.ms.lt/derlius/20171001AndriusKulikauskas-AGeometryOfMoods.pdf\" rel=\"nofollow noreferrer\">my study of emotion and moods</a>, I have found the different ways of thinking about a triangle to be useful in grounding six transformations which take us from a less specified geometry to a more specified one. I need to learn what gets preserved and how ($A_{n}$ - volume, $C_{n}$ - oriented surface area?, $B_{n}$ and $D_{n}$ - distance. I am thinking that complex derivatives (that is, holomorphic functions where the first derivative is nonzero) preserve angles and so $A_{n}$ expresses conformal geometry. And I am imagining that of $B_{n}$ and $D_{n}$, perhaps the first has an explicit origin and is thus the basis for projective geometry, whereas the second does not have an explicit origin and is the basis for affine geometry. </p>\n\n<p>Finally, I am trying to think of the root systems as constructing infinite families of polytopes. Here it may be helpful to identify each eigenvalue with a vertex and each root with an edge of a polytope. 
We have the following kinds of building blocks:</p>\n\n<p><a href=\"https://i.stack.imgur.com/RiVUs.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/RiVUs.png\" alt=\"Building blocks of root systems\"></a></p>\n\n<p>$A_{n}$ can be identified with the simplexes. I think of the simplex as having a unique center - the -1 simplex - which keeps moving as the simplex grows. Similarly, the simplex has a unique volume. The binomial expansion then codes for all of the subsimplexes. </p>\n\n<p><a href=\"https://i.stack.imgur.com/oxDFU.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/oxDFU.png\" alt=\"enter image description here\"></a></p>\n\n<p>Now I think of the cross-polytopes as being generated by the center creating pairs of vertices, positive and negative, which are linked to all of the existing vertices but not to each other. This accords with the root system $C_{n}$. If we look at the triangle which counts the sub-polytopes, then we see that it has a unique center but the largest number counts the surface area, which is to say, there is no volume!</p>\n\n<p><a href=\"https://i.stack.imgur.com/InUnA.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/InUnA.png\" alt=\"enter image description here\"></a></p>\n\n<p>The duals of the cross-polytopes are the hypercubes, which can be thought of as generated by mirrors (hyperplanes). They have a volume but no center! I don't see how to relate $B_{n}$ with such a hypercube. Although perhaps the 0 is the total volume and it lets us construct our hypercube \"top down\" by dividing the whole volume rather than \"bottom up\" by assembling the vertices. </p>\n\n<p><a href=\"https://i.stack.imgur.com/qVZrG.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/qVZrG.png\" alt=\"enter image description here\"></a></p>\n\n<p>Finally, I expect there should be an infinite family of polytopes $D_{n}$ with no center and no volume. 
But I don't think it is the demihypercubes. I need to try to interpret the root system as a polytope.</p>\n\n<p>However, just to note the problematic nature of my own thinking, I had originally observed and supposed that the simplexes could be used to define a vector (leading from the center to a vertex) and affine geometry. The cross polytopes could define a line (going through the center in opposite directions to a pair of vertices) and projective geometry. The hypercubes could define angles and conformal geometry. And the final polytopes, which I imagined as \"coordinate systems\" (a symplex defined in \"one quadrant\"), could define oriented areas and symplectic geometry. This would have related $A_{n}$ - affine geometry, $B_{n}$ - projective geometry, $C_{n}$ - conformal geometry and $D_{n}$ - symplectic geometry. But in learning more I am realizing, in my thinking above, that it should be, if anything, $A_{n}$ - conformal geometry, $B_{n}$ - projective geometry, $C_{n}$ - symplectic geometry and $D_{n}$ - affine geometry. </p>\n\n<p>I appreciate any thoughts, but especially help finding my way in learning about Lie groups and Lie algebras. I also look forward to alternative answers, intuitively, why are there four classical Lie groups / algebras?</p>\n" }, { "answer_id": 4038796, "score": 1, "text": "<p>You may find insight in Coxeter's work on the subject, which leans heavily on classical geometry. In <em>Regular Polytopes,</em> <span class=\"math-container\">$\\S$</span>11-5, Coxeter achieves the result in question by studying the fundamental region of reflection groups (I also recommend reviewing the historical context in <span class=\"math-container\">$\\S$</span>11-x). 
Intuition is subjective, but I find Coxeter's style as a classical geometer highly intuitive.</p>\n<p>In the reference above, Coxeter relies heavily on Cartan's paper <a href=\"https://link.springer.com/article/10.1007/BF02409989\" rel=\"nofollow noreferrer\">La géométrie des groupes simples</a>. This paper is extremely lucid, and has also been a significant source of intuition for me on this subject.</p>\n" }, { "answer_id": 2640677, "score": 3, "text": "<p>I will share some progress that I am making in answering my own question. It comes from studying <a href=\"http://math.mit.edu/classes/18.745/classnotes.html\" rel=\"nofollow noreferrer\">Victor Kac's lecture notes</a>, and in particular, <a href=\"http://math.mit.edu/classes/18.745/Notes/Lecture_15_Notes.pdf\" rel=\"nofollow noreferrer\">Lecture 15</a>, and thinking concretely about how the simple roots arise from pairs of the diagonal entries of the matrices of the Cartan subalgebra. Perhaps somebody will have further or better thoughts. Basically, I may have found the metaphysical or cognitive bottleneck (in the good sense) that I am looking for, but now I need to link that to meaningful constraints on Lie Algebras and Lie Groups.</p>\n\n<p>Here is the closest that I seem to get to the crux. Imagine a counting process $1 \\rightarrow 2 \\rightarrow 3 \\rightarrow \\dots n-1 \\rightarrow n$ which we encode algebraically as $ x_{1}-x_{2}, x_{2}-x_{3}, \\dots , x_{n-1}-x_{n} $. All of the classical Lie algebras share this set of simple roots. We can think of this as a signal propagating forward. 
The classical Lie algebras are distinguished by one additional simple root for which there are four allowed possibilities:</p>\n\n<p>$ \\begin{array}{l|l|l|l} A_{n} &amp; n \\rightarrow n+1 &amp; x_{n}-x_{n+1} &amp; = x_{n}-x_{n+1} \\\\ B_{n} &amp; n \\rightarrow 0 &amp; x_{n}-0 &amp; = x_{n} \\\\ C_{n} &amp; n \\rightarrow -n &amp; x_{n}- -x_{n} &amp; = 2x_{n} \\\\ D_{n} &amp; n \\rightarrow -(n-1) &amp; x_{n}- -x_{n-1} &amp; = x_{n-1}+x_{n} \\\\ \\end{array} $</p>\n\n<p>I interpret $A_{n}$ as the possibility that the sequence continues propagating forward. Otherwise the sequence is reflected backward. The sequence can take us first to an external mirror $0$ from which we would proceed to count backwards $-x_{n},-x_{n-1},\\dots,-x_{1}$. Or the sequence can take us from $x_{n}$ directly to its reflection $-x_{n}$. Or it can turn out that $x_{n}$ is itself a mirror and so the counting proceeds $x_{n-1} \\rightarrow x_{n} \\rightarrow -x_{n-1}$ where we may think of $x_{n}$ and $-x_{n}$ as intrinsically indistinguishable conjugates.</p>\n\n<p><a href=\"https://i.stack.imgur.com/qtb5y.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/qtb5y.png\" alt=\"enter image description here\"></a></p>\n\n<p>The point is that there is a duality between counting forwards and counting backwards. (I imagine it as an outside view, watching the count forwards, like a tube growing, and then an inside view, where we go inside the sequence, counting backwards.) So long as we are counting forwards, the duality is potential. Once we start counting backwards, then the duality is manifest. And there are three ways that it can manifest itself: with an external mirror $0 \\leftrightarrow -0$, without an explicit mirror, and with an internal mirror $x_{n} \\leftrightarrow -x_{n}$.</p>\n\n<p>This is a rather straightforward analysis of the options for the diagonal entries of the Cartan subalgebra in Victor Kac's Lecture 15. 
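As a concrete illustration (my own sketch, not from Kac's notes; the helper names are made up): the table of four additional simple roots can be written out as integer vectors in the standard coordinates, with the shared chain $x_{i}-x_{i+1}$ plus one family-specific last root.

```python
# Standard simple roots of the classical families A_n, B_n, C_n, D_n
# as coordinate vectors. e(i) is the i-th standard basis vector.

def e(i, dim):
    v = [0] * dim
    v[i] = 1
    return v

def sub(u, v):
    return [a - b for a, b in zip(u, v)]

def simple_roots(family, n):
    """Shared chain x_i - x_{i+1}, plus one family-specific last root."""
    dim = n + 1 if family == "A" else n   # A_n lives in n+1 coordinates
    chain = [sub(e(i, dim), e(i + 1, dim)) for i in range(n - 1)]
    if family == "A":
        last = sub(e(n - 1, dim), e(n, dim))       # x_n - x_{n+1}
    elif family == "B":
        last = e(n - 1, dim)                        # x_n - 0
    elif family == "C":
        last = [2 * a for a in e(n - 1, dim)]       # x_n - (-x_n) = 2 x_n
    elif family == "D":
        last = [a + b for a, b in zip(e(n - 2, dim), e(n - 1, dim))]  # x_{n-1} + x_n
    return chain + [last]

for fam in "ABCD":
    print(fam, simple_roots(fam, 3))
```

Only the last root differs between the four families, which is exactly the point of the table: the forward-propagating chain is common, and the four endings encode the four ways the sequence can continue or reflect.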
The adjoint representation is relating pairs of these entries. The one subtle point is that in the case of the orthogonal Lie algebras $B_{n}$ and $D_{n}$ the matrices in the algebras have zeros on their anti-diagonal with the consequence that the adjoint representation never pairs $x_{i}$ with $-x_{i}$. But $B_{n}$ gets around this because it is odd-dimensional and so has a diagonal element $x_{n+1}=-x_{n+1}=0$ and thus can pair $x_{n}$ and $0$ yielding the simple root $x_{n}-0$. For this reason $B_{n}$ is similar to the symplectic algebra $C_{n}$ which can pair $x_{n}$ and $-x_{n}$ directly yielding the simple root $x_{n}- -x_{n} = 2x_{n}$.</p>\n\n<p>This may all be very meaningful in that cognitively, mathematics seems to be all about duality and its subtleties. I think of logic as a perfect duality, apparent in de Morgan's laws, between what is and what is not, which in set theory is the duality of an empty set and a total set. Mathematics then seems to consist of subtle deviations from this perfect duality, for example, the difference in topology between arbitrary unions and finite intersections. I did a metaphysical study (<a href=\"http://www.ms.lt/sodas/Book/Duality\" rel=\"nofollow noreferrer\">here are my notes</a>) of the <a href=\"https://en.wikipedia.org/wiki/List_of_dualities\" rel=\"nofollow noreferrer\">mathematical dualities listed in Wikipedia</a> and organized them in the diagram below. Basically, some of the dualities are explicit in mathematical notation and others are implicit in the mind, as with complex conjugation.</p>\n\n<p><a href=\"https://i.stack.imgur.com/VcVZD.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/VcVZD.png\" alt=\"enter image description here\"></a></p>\n\n<p>The cognitive bottleneck which I describe above seems to be near the very heart of the connection between Lie groups and Lie algebras. 
Now I am trying to understand the implications by working them out for both the Lie groups and for the Lie algebras. I keep in mind that it may turn out that the Lie machinery is an accident of history and not the most profound way of describing what is going on at the deepest level. For example, it may or may not turn out that cognitively the exceptional Lie groups are spurious. I will list some connections that I am trying to make and understand or conclude otherwise.</p>\n\n<p>I need to understand the link between the simple roots and the conjugate transposes (real, complex, quaternion) and the inner products (real, complex, quaternion) which the Lie groups preserve and are thus defined by. Apparently, the most ordinary Lie group $A_{n}$ is defined by the complex inner product (rather than, say, the real inner product). This means that the complex inner product based on two dimensions is the most ordinary. I imagine it may be expressing the duality of the options of counting forwards and backwards. The symplectic group is defined by the quaternion inner product and this would make sense if we notice that it is establishing two sequences, thus making for four dimensions. The orthogonal groups are defined by the real inner product and they can be thought of as carrying a single dimension because they are counting only one side of the sequence, outside (when there is an external mirror?) or inside (when there is an internal mirror?). Taken together the options define 2 + 4 + 1 + 1 = 8 dimensions. And this brings to mind the Bott periodicity clock or the Clifford algebra clock. I also have an 8-cycle in <a href=\"http://www.ms.lt/sodas/Book/20171011DisembodyingMind\" rel=\"nofollow noreferrer\">my study of cognitive frameworks</a>.</p>\n\n<p>I would like to try to relate the Lie groups with four geometries: affine, projective, conformal and symplectic. 
And perhaps more basically, with four ways of thinking about a triangle:</p>\n\n<ul>\n<li>The concatenation of three paths.</li>\n<li>The intersections of three lines (on which we travel backwards and\nforwards).</li>\n<li>Three angles that divide the triangle.</li>\n<li>The oriented area of the triangle that we sweep out as we go around\nit.</li>\n</ul>\n\n<p><a href=\"https://i.stack.imgur.com/hLzoY.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/hLzoY.png\" alt=\"enter image description here\"></a></p>\n\n<p>I want to imagine that these get expressed as affine, projective, conformal and symplectic geometries, respectively. In <a href=\"http://www.ms.lt/derlius/20171001AndriusKulikauskas-AGeometryOfMoods.pdf\" rel=\"nofollow noreferrer\">my study of emotion and moods</a>, I have found the different ways of thinking about a triangle to be useful in grounding six transformations which take us from a less specified geometry to a more specified one. I need to learn what gets preserved and how ($A_{n}$ - volume, $C_{n}$ - oriented surface area?, $B_{n}$ and $D_{n}$ - distance). I am thinking that complex derivatives (that is, holomorphic functions where the first derivative is nonzero) preserve angles and so $A_{n}$ expresses conformal geometry. And I am imagining that of $B_{n}$ and $D_{n}$, perhaps the first has an explicit origin and is thus the basis for projective geometry, whereas the second does not have an explicit origin and is the basis for affine geometry. </p>\n\n<p>Finally, I am trying to think of the root systems as constructing infinite families of polytopes. Here it may be helpful to identify each eigenvalue with a vertex and each root with an edge of a polytope. 
We have the following kinds of building blocks:</p>\n\n<p><a href=\"https://i.stack.imgur.com/RiVUs.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/RiVUs.png\" alt=\"Building blocks of root systems\"></a></p>\n\n<p>$A_{n}$ can be identified with the simplexes. I think of the simplex as having a unique center - the -1 simplex - which keeps moving as the simplex grows. Similarly, the simplex has a unique volume. The binomial expansion then codes for all of the subsimplexes. </p>\n\n<p><a href=\"https://i.stack.imgur.com/oxDFU.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/oxDFU.png\" alt=\"enter image description here\"></a></p>\n\n<p>Now I think of the cross-polytopes as being generated by the center creating pairs of vertices, positive and negative, which are linked to all of the existing vertices but not to each other. This accords with the root system $C_{n}$. If we look at the triangle which counts the sub-polytopes, then we see that it has a unique center but the largest number counts the surface area, which is to say, there is no volume!</p>\n\n<p><a href=\"https://i.stack.imgur.com/InUnA.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/InUnA.png\" alt=\"enter image description here\"></a></p>\n\n<p>The duals of the cross-polytopes are the hypercubes, which can be thought of as generated by mirrors (hyperplanes). They have a volume but no center! I don't see how to relate $B_{n}$ with such a hypercube. Although perhaps the 0 is the total volume and it lets us construct our hypercube \"top down\" by dividing the whole volume rather than \"bottom up\" by assembling the vertices. </p>\n\n<p><a href=\"https://i.stack.imgur.com/qVZrG.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/qVZrG.png\" alt=\"enter image description here\"></a></p>\n\n<p>Finally, I expect there should be an infinite family of polytopes $D_{n}$ with no center and no volume. 
But I don't think it is the demihypercubes. I need to try to interpret the root system as a polytope.</p>\n\n<p>However, just to note the problematic nature of my own thinking, I had originally observed and supposed that the simplexes could be used to define a vector (leading from the center to a vertex) and affine geometry. The cross polytopes could define a line (going through the center in opposite directions to a pair of vertices) and projective geometry. The hypercubes could define angles and conformal geometry. And the final polytopes, which I imagined as \"coordinate systems\" (a simplex defined in \"one quadrant\"), could define oriented areas and symplectic geometry. This would have related $A_{n}$ - affine geometry, $B_{n}$ - projective geometry, $C_{n}$ - conformal geometry and $D_{n}$ - symplectic geometry. But in learning more I am realizing, in my thinking above, that it should be, if anything, $A_{n}$ - conformal geometry, $B_{n}$ - projective geometry, $C_{n}$ - symplectic geometry and $D_{n}$ - affine geometry. </p>\n\n<p>I appreciate any thoughts, but especially help finding my way in learning about Lie groups and Lie algebras. I also look forward to alternative answers, intuitively, why are there four classical Lie groups / algebras?</p>\n" }, { "answer_id": 4038796, "score": 1, "text": "<p>You may find insight in Coxeter's work on the subject, which leans heavily on classical geometry. In <em>Regular Polytopes,</em> <span class=\"math-container\">$\\S$</span>11-5, Coxeter achieves the result in question by studying the fundamental region of reflection groups (I also recommend reviewing the historical context in <span class=\"math-container\">$\\S$</span>11-x). 
Intuition is subjective, but I find Coxeter's style as a classical geometer highly intuitive.</p>\n<p>In the reference above, Coxeter relies heavily on Cartan's paper <a href=\"https://link.springer.com/article/10.1007/BF02409989\" rel=\"nofollow noreferrer\">La géométrie des groupes simples</a>. This paper is extremely lucid, and has also been a significant source of intuition for me on this subject.</p>\n" }
Now I am trying to understand the implications by working them out for both the Lie groups and for the Lie algebras. I keep in mind that it may turn out that the Lie machinery is an accident of history and not the most profound way of describing what is going on at the deepest level. For example, it may or may not turn out that cognitively the exceptional Lie groups are spurious. I will list some connections that I am trying to make and understand or conclude otherwise.</p>\n\n<p>I need to understand the link between the simple roots and the conjugate transposes (real, complex, quaternion) and the inner products (real, complex, quaternion) which the Lie groups preserve and are thus defined by. Apparently, the most ordinary Lie group $A_{n}$ is defined by the complex inner product (rather than, say, the real inner product). This means that the complex inner product based on two dimensions is the most ordinary. I imagine it may be expressing the duality of the options of counting forwards and backwards. The symplectic group is defined by the quaternion inner product and this would make sense if we notice that it is establishing two sequences, thus making for four dimensions. The orthogonal groups are defined by the real inner product and they can be thought of as carrying a single dimension because they are counting only one side of the sequence, outside (when there is an external mirror?) or inside (when there is an internal mirror?). Taken together the options define 2 + 4 + 1 + 1 = 8 dimensions. And this brings to mind the Bott periodicity clock or the Clifford algebra clock. I also have an 8-cycle in <a href=\"http://www.ms.lt/sodas/Book/20171011DisembodyingMind\" rel=\"nofollow noreferrer\">my study of cognitive frameworks</a>.</p>\n\n<p>I would like to try to relate the Lie groups with four geometries: affine, projective, conformal and symplectic. 
And perhaps more basically, with four ways of thinking about a triangle:</p>\n\n<ul>\n<li>The concatenation of three paths.</li>\n<li>The intersections of three lines (on which we travel backwards and\nforwards).</li>\n<li>Three angles that divide the triangle.</li>\n<li>The oriented area of the triangle that we sweep out as we go around\nit.</li>\n</ul>\n\n<p><a href=\"https://i.stack.imgur.com/hLzoY.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/hLzoY.png\" alt=\"enter image description here\"></a></p>\n\n<p>I want to imagine that these get expressed as affine, projective, conformal and symplectic geometries, respectively. In <a href=\"http://www.ms.lt/derlius/20171001AndriusKulikauskas-AGeometryOfMoods.pdf\" rel=\"nofollow noreferrer\">my study of emotion and moods</a>, I have found the different ways of thinking about a triangle to be useful in grounding six transformations which take us from a less specified geometry to a more specified one. I need to learn what gets preserved and how ($A_{n}$ - volume, $C_{n}$ - oriented surface area?, $B_{n}$ and $D_{n}$ - distance. I am thinking that complex derivatives (that is, holomorphic functions where the first derivative is nonzero) preserve angles and so $A_{n}$ expresses conformal geometry. And I am imagining that of $B_{n}$ and $D_{n}$, perhaps the first has an explicit origin and is thus the basis for projective geometry, whereas the second does not have an explicit origin and is the basis for affine geometry. </p>\n\n<p>Finally, I am trying to think of the root systems as constructing infinite families of polytopes. Here it may be helpful to identify each eigenvalue with a vertex and each root with an edge of a polytope. 
We have the following kinds of building blocks:</p>\n\n<p><a href=\"https://i.stack.imgur.com/RiVUs.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/RiVUs.png\" alt=\"Building blocks of root systems\"></a></p>\n\n<p>$A_{n}$ can be identified with the simplexes. I think of the simplex as having a unique center - the -1 simplex - which keeps moving as the simplex grows. Similarly, the simplex has a unique volume. The binomial expansion then codes for all of the subsimplexes. </p>\n\n<p><a href=\"https://i.stack.imgur.com/oxDFU.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/oxDFU.png\" alt=\"enter image description here\"></a></p>\n\n<p>Now I think of the cross-polytopes as being generated by the center creating pairs of vertices, positive and negative, which are linked to all of the existing vertices but not to each other. This accords with the root system $C_{n}$. If we look at the triangle which counts the sub-polytopes, then we see that it has a unique center but the largest number counts the surface area, which is to say, there is no volume!</p>\n\n<p><a href=\"https://i.stack.imgur.com/InUnA.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/InUnA.png\" alt=\"enter image description here\"></a></p>\n\n<p>The duals of the cross-polytopes are the hypercubes, which can be thought of as generated by mirrors (hyperplanes). They have a volume but no center! I don't see how to relate $B_{n}$ with such a hypercube. Although perhaps the 0 is the total volume and it lets us construct our hypercube \"top down\" by dividing the whole volume rather than \"bottom up\" by assembling the vertices. </p>\n\n<p><a href=\"https://i.stack.imgur.com/qVZrG.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/qVZrG.png\" alt=\"enter image description here\"></a></p>\n\n<p>Finally, I expect there should be an infinite family of polytopes $D_{n}$ with no center and no volume. 
But I don't think it is the demihypercubes. I need to try to interpret the root system as a polytope.</p>\n\n<p>However, just to note the problematic nature of my own thinking, I had originally observed and supposed that the simplexes could be used to define a vector (leading from the center to a vertex) and affine geometry. The cross polytopes could define a line (going through the center in opposite directions to a pair of vertices) and projective geometry. The hypercubes could define angles and conformal geometry. And the final polytopes, which I imagined as \"coordinate systems\" (a symplex defined in \"one quadrant\"), could define oriented areas and symplectic geometry. This would have related $A_{n}$ - affine geometry, $B_{n}$ - projective geometry, $C_{n}$ - conformal geometry and $D_{n}$ - symplectic geometry. But in learning more I am realizing, in my thinking above, that it should be, if anything, $A_{n}$ - conformal geometry, $B_{n}$ - projective geometry, $C_{n}$ - symplectic geometry and $D_{n}$ - affine geometry. </p>\n\n<p>I appreciate any thoughts, but especially help finding my way in learning about Lie groups and Lie algebras. I also look forward to alternative answers, intuitively, why are there four classical Lie groups / algebras?</p>\n" }, { "answer_id": 4038796, "score": 1, "text": "<p>You may find insight in Coxeter's work on the subject, which leans heavily on classical geometry. In <em>Regular Polytopes,</em> <span class=\"math-container\">$\\S$</span>11-5, Coxeter achieves the result in question by studying the fundamental region of reflection groups (I also recommend reviewing the historical context in <span class=\"math-container\">$\\S$</span>11-x). 
Intuition is subjective, but I find Coxeter's style as a classical geometer highly intuitive.</p>\n<p>In the reference above, Coxeter relies heavily on Cartan's paper <a href=\"https://link.springer.com/article/10.1007/BF02409989\" rel=\"nofollow noreferrer\">La géométrie des groupes simples</a>. This paper is extremely lucid, and has also been a significant source of intuition for me on this subject.</p>\n" }, { "answer_id": 2640677, "score": 3, "text": "<p>I will share some progress that I am making in answering my own question. It comes from studying <a href=\"http://math.mit.edu/classes/18.745/classnotes.html\" rel=\"nofollow noreferrer\">Victor Kac's lecture notes</a>, and in particular, <a href=\"http://math.mit.edu/classes/18.745/Notes/Lecture_15_Notes.pdf\" rel=\"nofollow noreferrer\">Lecture 15</a>, and thinking concretely about how the simple roots arise from pairs of the diagonal entries of the matrices of the Cartan subalgebra. Perhaps somebody will have further or better thoughts. Basically, I may have found the metaphysical or cognitive bottleneck (in the good sense) that I am looking for, but now I need to link that to meaningful constraints on Lie Algebras and Lie Groups.</p>\n\n<p>Here is the closest that I seem to get to the crux. Imagine a counting process $1 \\rightarrow 2 \\rightarrow 3 \\rightarrow \\dots n-1 \\rightarrow n$ which we encode algebraically as $ x_{1}-x_{2}, x_{2}-x_{3}, \\dots , x_{n-1}-x_{n} $. All of the classical Lie algebras share this set of simple roots. We can think of this as a signal propagating forward. 
The classical Lie algebras are distinguished by one additional simple root for which there are four allowed possibilities:</p>\n\n<p>$ \\begin{array}{1|2]3|4} A_{n} &amp; n \\rightarrow n+1 &amp; x_{n}-x_{n+1} &amp; = x_{n}-x_{n+1} \\\\ B_{n} &amp; n \\rightarrow 0 &amp; x_{n}-0 &amp; = x_{n} \\\\ C_{n} &amp; n \\rightarrow -n &amp; x_{n}- -x_{n} &amp; = 2x_{n} \\\\ D_{n} &amp; n \\rightarrow -(n-1) &amp; x_{n}- -x_{n-1} &amp; = x_{n-1}+x_{n} \\\\ \\end{array} $</p>\n\n<p>I interpret $A_{n}$ as the possibility that the sequence continues propagating forward. Otherwise the sequence is reflected backward. The sequence can take us first to an external mirror $0$ from which we would proceed to count backwards $-x_{n},-x_{n-1},\\dots,-x_{1}$. Or the sequence can take us from $x_{n}$ directly to its reflection $-x_{n}$. Or it can turn out that $x_{n}$ is itself a mirror and so the counting proceeds $x_{n-1} \\rightarrow x_{n} \\rightarrow -x_{n-1}$ where we may think of $x_{n}$ and $-x_{n}$ as intrinsically indistinguishable conjugates.</p>\n\n<p><a href=\"https://i.stack.imgur.com/qtb5y.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/qtb5y.png\" alt=\"enter image description here\"></a></p>\n\n<p>The point is that there is a duality between counting forwards and counting backwards. (I imagine it as an outside view, watching the count forwards, like a tube growing, and then an inside view, where we go inside the sequence, counting backwards.) So long as we are counting forwards, the duality is potential. Once we start counting backwards, then the duality is manifest. And there are three ways that it can manifest itself: with an external mirror $0 \\leftrightarrow -0$, without an explicit mirror, and with an internal mirror $x_{n} \\leftrightarrow -x_{n}$.</p>\n\n<p>This is a rather straightforward analysis of the options for the diagonal entries of the Cartan subalgebra in Victor Kac's Lecture 15. 
The adjoint representation is relating pairs of these entries. The one subtle point is that in the case of the orthogonal Lie algebras $B_{n}$ and $D_{n}$ the matrices in the algebras have zeros on their anti-diagonal with the consequence that the adjoint representation never pairs $x_{i}$ with $-x_{i}$. But $B_{n}$ gets around this because it is odd-dimensional and so has a diagonal element $x_{n+1}=-x_{n+1}=0$ and thus can pair $x_{n}$ and $0$ yielding the simple root $x_{n}-0$. For this reason $B_{n}$ is similar to the symplectic algebra $C_{n}$ which can pair $x_{n}$ and $-x_{n}$ directly yielding the simple root $x_{n}- -x_{n} = 2x_{n}$.</p>\n\n<p>This may all be very meaningful in that cognitively, mathematics seems to be all about duality and its subtleties. I think of logic as a perfect duality, apparent in de Morgan's laws, between what is and what is not, which in set theory is the duality of an empty set and a total set. Mathematics then seems to consist of subtle deviations from this perfect duality, for example, the difference in topology between arbitrary unions and finite intersections. I did a metaphysical study (<a href=\"http://www.ms.lt/sodas/Book/Duality\" rel=\"nofollow noreferrer\">here are my notes</a>) of the <a href=\"https://en.wikipedia.org/wiki/List_of_dualities\" rel=\"nofollow noreferrer\">mathematical dualities listed in Wikipedia</a> and organized them in the diagram below. Basically, some of the dualities are explicit in mathematical notation and others are implicit in the mind, as with complex conjugation.</p>\n\n<p><a href=\"https://i.stack.imgur.com/VcVZD.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/VcVZD.png\" alt=\"enter image description here\"></a></p>\n\n<p>The cognitive bottleneck which I describe above seems to be near the very heart of the connection betwen Lie groups and Lie algebras. 
Now I am trying to understand the implications by working them out for both the Lie groups and for the Lie algebras. I keep in mind that it may turn out that the Lie machinery is an accident of history and not the most profound way of describing what is going on at the deepest level. For example, it may or may not turn out that cognitively the exceptional Lie groups are spurious. I will list some connections that I am trying to make and understand or conclude otherwise.</p>\n\n<p>I need to understand the link between the simple roots and the conjugate transposes (real, complex, quaternion) and the inner products (real, complex, quaternion) which the Lie groups preserve and are thus defined by. Apparently, the most ordinary Lie group $A_{n}$ is defined by the complex inner product (rather than, say, the real inner product). This means that the complex inner product based on two dimensions is the most ordinary. I imagine it may be expressing the duality of the options of counting forwards and backwards. The symplectic group is defined by the quaternion inner product and this would make sense if we notice that it is establishing two sequences, thus making for four dimensions. The orthogonal groups are defined by the real inner product and they can be thought of as carrying a single dimension because they are counting only one side of the sequence, outside (when there is an external mirror?) or inside (when there is an internal mirror?). Taken together the options define 2 + 4 + 1 + 1 = 8 dimensions. And this brings to mind the Bott periodicity clock or the Clifford algebra clock. I also have an 8-cycle in <a href=\"http://www.ms.lt/sodas/Book/20171011DisembodyingMind\" rel=\"nofollow noreferrer\">my study of cognitive frameworks</a>.</p>\n\n<p>I would like to try to relate the Lie groups with four geometries: affine, projective, conformal and symplectic. 
And perhaps more basically, with four ways of thinking about a triangle:</p>\n\n<ul>\n<li>The concatenation of three paths.</li>\n<li>The intersections of three lines (on which we travel backwards and\nforwards).</li>\n<li>Three angles that divide the triangle.</li>\n<li>The oriented area of the triangle that we sweep out as we go around\nit.</li>\n</ul>\n\n<p><a href=\"https://i.stack.imgur.com/hLzoY.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/hLzoY.png\" alt=\"enter image description here\"></a></p>\n\n<p>I want to imagine that these get expressed as affine, projective, conformal and symplectic geometries, respectively. In <a href=\"http://www.ms.lt/derlius/20171001AndriusKulikauskas-AGeometryOfMoods.pdf\" rel=\"nofollow noreferrer\">my study of emotion and moods</a>, I have found the different ways of thinking about a triangle to be useful in grounding six transformations which take us from a less specified geometry to a more specified one. I need to learn what gets preserved and how ($A_{n}$ - volume, $C_{n}$ - oriented surface area?, $B_{n}$ and $D_{n}$ - distance. I am thinking that complex derivatives (that is, holomorphic functions where the first derivative is nonzero) preserve angles and so $A_{n}$ expresses conformal geometry. And I am imagining that of $B_{n}$ and $D_{n}$, perhaps the first has an explicit origin and is thus the basis for projective geometry, whereas the second does not have an explicit origin and is the basis for affine geometry. </p>\n\n<p>Finally, I am trying to think of the root systems as constructing infinite families of polytopes. Here it may be helpful to identify each eigenvalue with a vertex and each root with an edge of a polytope. 
We have the following kinds of building blocks:</p>\n\n<p><a href=\"https://i.stack.imgur.com/RiVUs.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/RiVUs.png\" alt=\"Building blocks of root systems\"></a></p>\n\n<p>$A_{n}$ can be identified with the simplexes. I think of the simplex as having a unique center - the -1 simplex - which keeps moving as the simplex grows. Similarly, the simplex has a unique volume. The binomial expansion then codes for all of the subsimplexes. </p>\n\n<p><a href=\"https://i.stack.imgur.com/oxDFU.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/oxDFU.png\" alt=\"enter image description here\"></a></p>\n\n<p>Now I think of the cross-polytopes as being generated by the center creating pairs of vertices, positive and negative, which are linked to all of the existing vertices but not to each other. This accords with the root system $C_{n}$. If we look at the triangle which counts the sub-polytopes, then we see that it has a unique center but the largest number counts the surface area, which is to say, there is no volume!</p>\n\n<p><a href=\"https://i.stack.imgur.com/InUnA.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/InUnA.png\" alt=\"enter image description here\"></a></p>\n\n<p>The duals of the cross-polytopes are the hypercubes, which can be thought of as generated by mirrors (hyperplanes). They have a volume but no center! I don't see how to relate $B_{n}$ with such a hypercube. Although perhaps the 0 is the total volume and it lets us construct our hypercube \"top down\" by dividing the whole volume rather than \"bottom up\" by assembling the vertices. </p>\n\n<p><a href=\"https://i.stack.imgur.com/qVZrG.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/qVZrG.png\" alt=\"enter image description here\"></a></p>\n\n<p>Finally, I expect there should be an infinite family of polytopes $D_{n}$ with no center and no volume. 
But I don't think it is the demihypercubes. I need to try to interpret the root system as a polytope.</p>\n\n<p>However, just to note the problematic nature of my own thinking, I had originally observed and supposed that the simplexes could be used to define a vector (leading from the center to a vertex) and affine geometry. The cross polytopes could define a line (going through the center in opposite directions to a pair of vertices) and projective geometry. The hypercubes could define angles and conformal geometry. And the final polytopes, which I imagined as \"coordinate systems\" (a symplex defined in \"one quadrant\"), could define oriented areas and symplectic geometry. This would have related $A_{n}$ - affine geometry, $B_{n}$ - projective geometry, $C_{n}$ - conformal geometry and $D_{n}$ - symplectic geometry. But in learning more I am realizing, in my thinking above, that it should be, if anything, $A_{n}$ - conformal geometry, $B_{n}$ - projective geometry, $C_{n}$ - symplectic geometry and $D_{n}$ - affine geometry. </p>\n\n<p>I appreciate any thoughts, but especially help finding my way in learning about Lie groups and Lie algebras. I also look forward to alternative answers, intuitively, why are there four classical Lie groups / algebras?</p>\n" }, { "answer_id": 4038796, "score": 1, "text": "<p>You may find insight in Coxeter's work on the subject, which leans heavily on classical geometry. In <em>Regular Polytopes,</em> <span class=\"math-container\">$\\S$</span>11-5, Coxeter achieves the result in question by studying the fundamental region of reflection groups (I also recommend reviewing the historical context in <span class=\"math-container\">$\\S$</span>11-x). 
Intuition is subjective, but I find Coxeter's style as a classical geometer highly intuitive.</p>\n<p>In the reference above, Coxeter relies heavily on Cartan's paper <a href=\"https://link.springer.com/article/10.1007/BF02409989\" rel=\"nofollow noreferrer\">La géométrie des groupes simples</a>. This paper is extremely lucid, and has also been a significant source of intuition for me on this subject.</p>\n" }, { "answer_id": 2640677, "score": 3, "text": "<p>I will share some progress that I am making in answering my own question. It comes from studying <a href=\"http://math.mit.edu/classes/18.745/classnotes.html\" rel=\"nofollow noreferrer\">Victor Kac's lecture notes</a>, and in particular, <a href=\"http://math.mit.edu/classes/18.745/Notes/Lecture_15_Notes.pdf\" rel=\"nofollow noreferrer\">Lecture 15</a>, and thinking concretely about how the simple roots arise from pairs of the diagonal entries of the matrices of the Cartan subalgebra. Perhaps somebody will have further or better thoughts. Basically, I may have found the metaphysical or cognitive bottleneck (in the good sense) that I am looking for, but now I need to link that to meaningful constraints on Lie Algebras and Lie Groups.</p>\n\n<p>Here is the closest that I seem to get to the crux. Imagine a counting process $1 \\rightarrow 2 \\rightarrow 3 \\rightarrow \\dots n-1 \\rightarrow n$ which we encode algebraically as $ x_{1}-x_{2}, x_{2}-x_{3}, \\dots , x_{n-1}-x_{n} $. All of the classical Lie algebras share this set of simple roots. We can think of this as a signal propagating forward. 
The classical Lie algebras are distinguished by one additional simple root for which there are four allowed possibilities:</p>\n\n<p>$ \\begin{array}{1|2]3|4} A_{n} &amp; n \\rightarrow n+1 &amp; x_{n}-x_{n+1} &amp; = x_{n}-x_{n+1} \\\\ B_{n} &amp; n \\rightarrow 0 &amp; x_{n}-0 &amp; = x_{n} \\\\ C_{n} &amp; n \\rightarrow -n &amp; x_{n}- -x_{n} &amp; = 2x_{n} \\\\ D_{n} &amp; n \\rightarrow -(n-1) &amp; x_{n}- -x_{n-1} &amp; = x_{n-1}+x_{n} \\\\ \\end{array} $</p>\n\n<p>I interpret $A_{n}$ as the possibility that the sequence continues propagating forward. Otherwise the sequence is reflected backward. The sequence can take us first to an external mirror $0$ from which we would proceed to count backwards $-x_{n},-x_{n-1},\\dots,-x_{1}$. Or the sequence can take us from $x_{n}$ directly to its reflection $-x_{n}$. Or it can turn out that $x_{n}$ is itself a mirror and so the counting proceeds $x_{n-1} \\rightarrow x_{n} \\rightarrow -x_{n-1}$ where we may think of $x_{n}$ and $-x_{n}$ as intrinsically indistinguishable conjugates.</p>\n\n<p><a href=\"https://i.stack.imgur.com/qtb5y.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/qtb5y.png\" alt=\"enter image description here\"></a></p>\n\n<p>The point is that there is a duality between counting forwards and counting backwards. (I imagine it as an outside view, watching the count forwards, like a tube growing, and then an inside view, where we go inside the sequence, counting backwards.) So long as we are counting forwards, the duality is potential. Once we start counting backwards, then the duality is manifest. And there are three ways that it can manifest itself: with an external mirror $0 \\leftrightarrow -0$, without an explicit mirror, and with an internal mirror $x_{n} \\leftrightarrow -x_{n}$.</p>\n\n<p>This is a rather straightforward analysis of the options for the diagonal entries of the Cartan subalgebra in Victor Kac's Lecture 15. 
The adjoint representation is relating pairs of these entries. The one subtle point is that in the case of the orthogonal Lie algebras $B_{n}$ and $D_{n}$ the matrices in the algebras have zeros on their anti-diagonal with the consequence that the adjoint representation never pairs $x_{i}$ with $-x_{i}$. But $B_{n}$ gets around this because it is odd-dimensional and so has a diagonal element $x_{n+1}=-x_{n+1}=0$ and thus can pair $x_{n}$ and $0$ yielding the simple root $x_{n}-0$. For this reason $B_{n}$ is similar to the symplectic algebra $C_{n}$ which can pair $x_{n}$ and $-x_{n}$ directly yielding the simple root $x_{n}- -x_{n} = 2x_{n}$.</p>\n\n<p>This may all be very meaningful in that cognitively, mathematics seems to be all about duality and its subtleties. I think of logic as a perfect duality, apparent in de Morgan's laws, between what is and what is not, which in set theory is the duality of an empty set and a total set. Mathematics then seems to consist of subtle deviations from this perfect duality, for example, the difference in topology between arbitrary unions and finite intersections. I did a metaphysical study (<a href=\"http://www.ms.lt/sodas/Book/Duality\" rel=\"nofollow noreferrer\">here are my notes</a>) of the <a href=\"https://en.wikipedia.org/wiki/List_of_dualities\" rel=\"nofollow noreferrer\">mathematical dualities listed in Wikipedia</a> and organized them in the diagram below. Basically, some of the dualities are explicit in mathematical notation and others are implicit in the mind, as with complex conjugation.</p>\n\n<p><a href=\"https://i.stack.imgur.com/VcVZD.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/VcVZD.png\" alt=\"enter image description here\"></a></p>\n\n<p>The cognitive bottleneck which I describe above seems to be near the very heart of the connection betwen Lie groups and Lie algebras. 
Now I am trying to understand the implications by working them out for both the Lie groups and for the Lie algebras. I keep in mind that it may turn out that the Lie machinery is an accident of history and not the most profound way of describing what is going on at the deepest level. For example, it may or may not turn out that cognitively the exceptional Lie groups are spurious. I will list some connections that I am trying to make and understand or conclude otherwise.</p>\n\n<p>I need to understand the link between the simple roots and the conjugate transposes (real, complex, quaternion) and the inner products (real, complex, quaternion) which the Lie groups preserve and are thus defined by. Apparently, the most ordinary Lie group $A_{n}$ is defined by the complex inner product (rather than, say, the real inner product). This means that the complex inner product based on two dimensions is the most ordinary. I imagine it may be expressing the duality of the options of counting forwards and backwards. The symplectic group is defined by the quaternion inner product and this would make sense if we notice that it is establishing two sequences, thus making for four dimensions. The orthogonal groups are defined by the real inner product and they can be thought of as carrying a single dimension because they are counting only one side of the sequence, outside (when there is an external mirror?) or inside (when there is an internal mirror?). Taken together the options define 2 + 4 + 1 + 1 = 8 dimensions. And this brings to mind the Bott periodicity clock or the Clifford algebra clock. I also have an 8-cycle in <a href=\"http://www.ms.lt/sodas/Book/20171011DisembodyingMind\" rel=\"nofollow noreferrer\">my study of cognitive frameworks</a>.</p>\n\n<p>I would like to try to relate the Lie groups with four geometries: affine, projective, conformal and symplectic. 
And perhaps more basically, with four ways of thinking about a triangle:</p>\n\n<ul>\n<li>The concatenation of three paths.</li>\n<li>The intersections of three lines (on which we travel backwards and\nforwards).</li>\n<li>Three angles that divide the triangle.</li>\n<li>The oriented area of the triangle that we sweep out as we go around\nit.</li>\n</ul>\n\n<p><a href=\"https://i.stack.imgur.com/hLzoY.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/hLzoY.png\" alt=\"enter image description here\"></a></p>\n\n<p>I want to imagine that these get expressed as affine, projective, conformal and symplectic geometries, respectively. In <a href=\"http://www.ms.lt/derlius/20171001AndriusKulikauskas-AGeometryOfMoods.pdf\" rel=\"nofollow noreferrer\">my study of emotion and moods</a>, I have found the different ways of thinking about a triangle to be useful in grounding six transformations which take us from a less specified geometry to a more specified one. I need to learn what gets preserved and how ($A_{n}$ - volume, $C_{n}$ - oriented surface area?, $B_{n}$ and $D_{n}$ - distance. I am thinking that complex derivatives (that is, holomorphic functions where the first derivative is nonzero) preserve angles and so $A_{n}$ expresses conformal geometry. And I am imagining that of $B_{n}$ and $D_{n}$, perhaps the first has an explicit origin and is thus the basis for projective geometry, whereas the second does not have an explicit origin and is the basis for affine geometry. </p>\n\n<p>Finally, I am trying to think of the root systems as constructing infinite families of polytopes. Here it may be helpful to identify each eigenvalue with a vertex and each root with an edge of a polytope. 
We have the following kinds of building blocks:</p>\n\n<p><a href=\"https://i.stack.imgur.com/RiVUs.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/RiVUs.png\" alt=\"Building blocks of root systems\"></a></p>\n\n<p>$A_{n}$ can be identified with the simplexes. I think of the simplex as having a unique center - the -1 simplex - which keeps moving as the simplex grows. Similarly, the simplex has a unique volume. The binomial expansion then codes for all of the subsimplexes. </p>\n\n<p><a href=\"https://i.stack.imgur.com/oxDFU.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/oxDFU.png\" alt=\"enter image description here\"></a></p>\n\n<p>Now I think of the cross-polytopes as being generated by the center creating pairs of vertices, positive and negative, which are linked to all of the existing vertices but not to each other. This accords with the root system $C_{n}$. If we look at the triangle which counts the sub-polytopes, then we see that it has a unique center but the largest number counts the surface area, which is to say, there is no volume!</p>\n\n<p><a href=\"https://i.stack.imgur.com/InUnA.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/InUnA.png\" alt=\"enter image description here\"></a></p>\n\n<p>The duals of the cross-polytopes are the hypercubes, which can be thought of as generated by mirrors (hyperplanes). They have a volume but no center! I don't see how to relate $B_{n}$ with such a hypercube. Although perhaps the 0 is the total volume and it lets us construct our hypercube \"top down\" by dividing the whole volume rather than \"bottom up\" by assembling the vertices. </p>\n\n<p><a href=\"https://i.stack.imgur.com/qVZrG.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/qVZrG.png\" alt=\"enter image description here\"></a></p>\n\n<p>Finally, I expect there should be an infinite family of polytopes $D_{n}$ with no center and no volume. 
But I don't think they are the demihypercubes. I need to try to interpret the root system as a polytope.</p>\n\n<p>However, just to note the problematic nature of my own thinking, I had originally observed and supposed that the simplexes could be used to define a vector (leading from the center to a vertex) and affine geometry. The cross-polytopes could define a line (going through the center in opposite directions to a pair of vertices) and projective geometry. The hypercubes could define angles and conformal geometry. And the final polytopes, which I imagined as \"coordinate systems\" (a simplex defined in \"one quadrant\"), could define oriented areas and symplectic geometry. This would have related $A_{n}$ - affine geometry, $B_{n}$ - projective geometry, $C_{n}$ - conformal geometry and $D_{n}$ - symplectic geometry. But in learning more I am realizing, in my thinking above, that it should be, if anything, $A_{n}$ - conformal geometry, $B_{n}$ - projective geometry, $C_{n}$ - symplectic geometry and $D_{n}$ - affine geometry. </p>\n\n<p>I appreciate any thoughts, but especially help finding my way in learning about Lie groups and Lie algebras. I also look forward to alternative answers: intuitively, why are there four classical Lie groups / algebras?</p>\n" }, { "answer_id": 4038796, "score": 1, "text": "<p>You may find insight in Coxeter's work on the subject, which leans heavily on classical geometry. In <em>Regular Polytopes,</em> <span class=\"math-container\">$\\S$</span>11-5, Coxeter achieves the result in question by studying the fundamental region of reflection groups (I also recommend reviewing the historical context in <span class=\"math-container\">$\\S$</span>11-x). 
Intuition is subjective, but I find Coxeter's style as a classical geometer highly intuitive.</p>\n<p>In the reference above, Coxeter relies heavily on Cartan's paper <a href=\"https://link.springer.com/article/10.1007/BF02409989\" rel=\"nofollow noreferrer\">La géométrie des groupes simples</a>. This paper is extremely lucid, and has also been a significant source of intuition for me on this subject.</p>\n" }, { "answer_id": 2640677, "score": 3, "text": "<p>I will share some progress that I am making in answering my own question. It comes from studying <a href=\"http://math.mit.edu/classes/18.745/classnotes.html\" rel=\"nofollow noreferrer\">Victor Kac's lecture notes</a>, and in particular, <a href=\"http://math.mit.edu/classes/18.745/Notes/Lecture_15_Notes.pdf\" rel=\"nofollow noreferrer\">Lecture 15</a>, and thinking concretely about how the simple roots arise from pairs of the diagonal entries of the matrices of the Cartan subalgebra. Perhaps somebody will have further or better thoughts. Basically, I may have found the metaphysical or cognitive bottleneck (in the good sense) that I am looking for, but now I need to link that to meaningful constraints on Lie Algebras and Lie Groups.</p>\n\n<p>Here is the closest that I seem to get to the crux. Imagine a counting process $1 \\rightarrow 2 \\rightarrow 3 \\rightarrow \\dots n-1 \\rightarrow n$ which we encode algebraically as $ x_{1}-x_{2}, x_{2}-x_{3}, \\dots , x_{n-1}-x_{n} $. All of the classical Lie algebras share this set of simple roots. We can think of this as a signal propagating forward. 
The classical Lie algebras are distinguished by one additional simple root for which there are four allowed possibilities:</p>\n\n<p>$ \\begin{array}{1|2]3|4} A_{n} &amp; n \\rightarrow n+1 &amp; x_{n}-x_{n+1} &amp; = x_{n}-x_{n+1} \\\\ B_{n} &amp; n \\rightarrow 0 &amp; x_{n}-0 &amp; = x_{n} \\\\ C_{n} &amp; n \\rightarrow -n &amp; x_{n}- -x_{n} &amp; = 2x_{n} \\\\ D_{n} &amp; n \\rightarrow -(n-1) &amp; x_{n}- -x_{n-1} &amp; = x_{n-1}+x_{n} \\\\ \\end{array} $</p>\n\n<p>I interpret $A_{n}$ as the possibility that the sequence continues propagating forward. Otherwise the sequence is reflected backward. The sequence can take us first to an external mirror $0$ from which we would proceed to count backwards $-x_{n},-x_{n-1},\\dots,-x_{1}$. Or the sequence can take us from $x_{n}$ directly to its reflection $-x_{n}$. Or it can turn out that $x_{n}$ is itself a mirror and so the counting proceeds $x_{n-1} \\rightarrow x_{n} \\rightarrow -x_{n-1}$ where we may think of $x_{n}$ and $-x_{n}$ as intrinsically indistinguishable conjugates.</p>\n\n<p><a href=\"https://i.stack.imgur.com/qtb5y.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/qtb5y.png\" alt=\"enter image description here\"></a></p>\n\n<p>The point is that there is a duality between counting forwards and counting backwards. (I imagine it as an outside view, watching the count forwards, like a tube growing, and then an inside view, where we go inside the sequence, counting backwards.) So long as we are counting forwards, the duality is potential. Once we start counting backwards, then the duality is manifest. And there are three ways that it can manifest itself: with an external mirror $0 \\leftrightarrow -0$, without an explicit mirror, and with an internal mirror $x_{n} \\leftrightarrow -x_{n}$.</p>\n\n<p>This is a rather straightforward analysis of the options for the diagonal entries of the Cartan subalgebra in Victor Kac's Lecture 15. 
The adjoint representation is relating pairs of these entries. The one subtle point is that in the case of the orthogonal Lie algebras $B_{n}$ and $D_{n}$ the matrices in the algebras have zeros on their anti-diagonal with the consequence that the adjoint representation never pairs $x_{i}$ with $-x_{i}$. But $B_{n}$ gets around this because it is odd-dimensional and so has a diagonal element $x_{n+1}=-x_{n+1}=0$ and thus can pair $x_{n}$ and $0$ yielding the simple root $x_{n}-0$. For this reason $B_{n}$ is similar to the symplectic algebra $C_{n}$ which can pair $x_{n}$ and $-x_{n}$ directly yielding the simple root $x_{n}- -x_{n} = 2x_{n}$.</p>\n\n<p>This may all be very meaningful in that cognitively, mathematics seems to be all about duality and its subtleties. I think of logic as a perfect duality, apparent in de Morgan's laws, between what is and what is not, which in set theory is the duality of an empty set and a total set. Mathematics then seems to consist of subtle deviations from this perfect duality, for example, the difference in topology between arbitrary unions and finite intersections. I did a metaphysical study (<a href=\"http://www.ms.lt/sodas/Book/Duality\" rel=\"nofollow noreferrer\">here are my notes</a>) of the <a href=\"https://en.wikipedia.org/wiki/List_of_dualities\" rel=\"nofollow noreferrer\">mathematical dualities listed in Wikipedia</a> and organized them in the diagram below. Basically, some of the dualities are explicit in mathematical notation and others are implicit in the mind, as with complex conjugation.</p>\n\n<p><a href=\"https://i.stack.imgur.com/VcVZD.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/VcVZD.png\" alt=\"enter image description here\"></a></p>\n\n<p>The cognitive bottleneck which I describe above seems to be near the very heart of the connection betwen Lie groups and Lie algebras. 
Now I am trying to understand the implications by working them out for both the Lie groups and for the Lie algebras. I keep in mind that it may turn out that the Lie machinery is an accident of history and not the most profound way of describing what is going on at the deepest level. For example, it may or may not turn out that cognitively the exceptional Lie groups are spurious. I will list some connections that I am trying to make and understand or conclude otherwise.</p>\n\n<p>I need to understand the link between the simple roots and the conjugate transposes (real, complex, quaternion) and the inner products (real, complex, quaternion) which the Lie groups preserve and are thus defined by. Apparently, the most ordinary Lie group $A_{n}$ is defined by the complex inner product (rather than, say, the real inner product). This means that the complex inner product based on two dimensions is the most ordinary. I imagine it may be expressing the duality of the options of counting forwards and backwards. The symplectic group is defined by the quaternion inner product and this would make sense if we notice that it is establishing two sequences, thus making for four dimensions. The orthogonal groups are defined by the real inner product and they can be thought of as carrying a single dimension because they are counting only one side of the sequence, outside (when there is an external mirror?) or inside (when there is an internal mirror?). Taken together the options define 2 + 4 + 1 + 1 = 8 dimensions. And this brings to mind the Bott periodicity clock or the Clifford algebra clock. I also have an 8-cycle in <a href=\"http://www.ms.lt/sodas/Book/20171011DisembodyingMind\" rel=\"nofollow noreferrer\">my study of cognitive frameworks</a>.</p>\n\n<p>I would like to try to relate the Lie groups with four geometries: affine, projective, conformal and symplectic. 
And perhaps more basically, with four ways of thinking about a triangle:</p>\n\n<ul>\n<li>The concatenation of three paths.</li>\n<li>The intersections of three lines (on which we travel backwards and\nforwards).</li>\n<li>Three angles that divide the triangle.</li>\n<li>The oriented area of the triangle that we sweep out as we go around\nit.</li>\n</ul>\n\n<p><a href=\"https://i.stack.imgur.com/hLzoY.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/hLzoY.png\" alt=\"enter image description here\"></a></p>\n\n<p>I want to imagine that these get expressed as affine, projective, conformal and symplectic geometries, respectively. In <a href=\"http://www.ms.lt/derlius/20171001AndriusKulikauskas-AGeometryOfMoods.pdf\" rel=\"nofollow noreferrer\">my study of emotion and moods</a>, I have found the different ways of thinking about a triangle to be useful in grounding six transformations which take us from a less specified geometry to a more specified one. I need to learn what gets preserved and how ($A_{n}$ - volume, $C_{n}$ - oriented surface area?, $B_{n}$ and $D_{n}$ - distance. I am thinking that complex derivatives (that is, holomorphic functions where the first derivative is nonzero) preserve angles and so $A_{n}$ expresses conformal geometry. And I am imagining that of $B_{n}$ and $D_{n}$, perhaps the first has an explicit origin and is thus the basis for projective geometry, whereas the second does not have an explicit origin and is the basis for affine geometry. </p>\n\n<p>Finally, I am trying to think of the root systems as constructing infinite families of polytopes. Here it may be helpful to identify each eigenvalue with a vertex and each root with an edge of a polytope. 
We have the following kinds of building blocks:</p>\n\n<p><a href=\"https://i.stack.imgur.com/RiVUs.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/RiVUs.png\" alt=\"Building blocks of root systems\"></a></p>\n\n<p>$A_{n}$ can be identified with the simplexes. I think of the simplex as having a unique center - the -1 simplex - which keeps moving as the simplex grows. Similarly, the simplex has a unique volume. The binomial expansion then codes for all of the subsimplexes. </p>\n\n<p><a href=\"https://i.stack.imgur.com/oxDFU.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/oxDFU.png\" alt=\"enter image description here\"></a></p>\n\n<p>Now I think of the cross-polytopes as being generated by the center creating pairs of vertices, positive and negative, which are linked to all of the existing vertices but not to each other. This accords with the root system $C_{n}$. If we look at the triangle which counts the sub-polytopes, then we see that it has a unique center but the largest number counts the surface area, which is to say, there is no volume!</p>\n\n<p><a href=\"https://i.stack.imgur.com/InUnA.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/InUnA.png\" alt=\"enter image description here\"></a></p>\n\n<p>The duals of the cross-polytopes are the hypercubes, which can be thought of as generated by mirrors (hyperplanes). They have a volume but no center! I don't see how to relate $B_{n}$ with such a hypercube. Although perhaps the 0 is the total volume and it lets us construct our hypercube \"top down\" by dividing the whole volume rather than \"bottom up\" by assembling the vertices. </p>\n\n<p><a href=\"https://i.stack.imgur.com/qVZrG.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/qVZrG.png\" alt=\"enter image description here\"></a></p>\n\n<p>Finally, I expect there should be an infinite family of polytopes $D_{n}$ with no center and no volume. 
But I don't think it is the demihypercubes. I need to try to interpret the root system as a polytope.</p>\n\n<p>However, just to note the problematic nature of my own thinking, I had originally observed and supposed that the simplexes could be used to define a vector (leading from the center to a vertex) and affine geometry. The cross polytopes could define a line (going through the center in opposite directions to a pair of vertices) and projective geometry. The hypercubes could define angles and conformal geometry. And the final polytopes, which I imagined as \"coordinate systems\" (a symplex defined in \"one quadrant\"), could define oriented areas and symplectic geometry. This would have related $A_{n}$ - affine geometry, $B_{n}$ - projective geometry, $C_{n}$ - conformal geometry and $D_{n}$ - symplectic geometry. But in learning more I am realizing, in my thinking above, that it should be, if anything, $A_{n}$ - conformal geometry, $B_{n}$ - projective geometry, $C_{n}$ - symplectic geometry and $D_{n}$ - affine geometry. </p>\n\n<p>I appreciate any thoughts, but especially help finding my way in learning about Lie groups and Lie algebras. I also look forward to alternative answers, intuitively, why are there four classical Lie groups / algebras?</p>\n" }, { "answer_id": 4038796, "score": 1, "text": "<p>You may find insight in Coxeter's work on the subject, which leans heavily on classical geometry. In <em>Regular Polytopes,</em> <span class=\"math-container\">$\\S$</span>11-5, Coxeter achieves the result in question by studying the fundamental region of reflection groups (I also recommend reviewing the historical context in <span class=\"math-container\">$\\S$</span>11-x). 
Intuition is subjective, but I find Coxeter's style as a classical geometer highly intuitive.</p>\n<p>In the reference above, Coxeter relies heavily on Cartan's paper <a href=\"https://link.springer.com/article/10.1007/BF02409989\" rel=\"nofollow noreferrer\">La géométrie des groupes simples</a>. This paper is extremely lucid, and has also been a significant source of intuition for me on this subject.</p>\n" }, { "answer_id": 2640677, "score": 3, "text": "<p>I will share some progress that I am making in answering my own question. It comes from studying <a href=\"http://math.mit.edu/classes/18.745/classnotes.html\" rel=\"nofollow noreferrer\">Victor Kac's lecture notes</a>, and in particular, <a href=\"http://math.mit.edu/classes/18.745/Notes/Lecture_15_Notes.pdf\" rel=\"nofollow noreferrer\">Lecture 15</a>, and thinking concretely about how the simple roots arise from pairs of the diagonal entries of the matrices of the Cartan subalgebra. Perhaps somebody will have further or better thoughts. Basically, I may have found the metaphysical or cognitive bottleneck (in the good sense) that I am looking for, but now I need to link that to meaningful constraints on Lie Algebras and Lie Groups.</p>\n\n<p>Here is the closest that I seem to get to the crux. Imagine a counting process $1 \\rightarrow 2 \\rightarrow 3 \\rightarrow \\dots n-1 \\rightarrow n$ which we encode algebraically as $ x_{1}-x_{2}, x_{2}-x_{3}, \\dots , x_{n-1}-x_{n} $. All of the classical Lie algebras share this set of simple roots. We can think of this as a signal propagating forward. 
The classical Lie algebras are distinguished by one additional simple root for which there are four allowed possibilities:</p>\n\n<p>$ \\begin{array}{1|2]3|4} A_{n} &amp; n \\rightarrow n+1 &amp; x_{n}-x_{n+1} &amp; = x_{n}-x_{n+1} \\\\ B_{n} &amp; n \\rightarrow 0 &amp; x_{n}-0 &amp; = x_{n} \\\\ C_{n} &amp; n \\rightarrow -n &amp; x_{n}- -x_{n} &amp; = 2x_{n} \\\\ D_{n} &amp; n \\rightarrow -(n-1) &amp; x_{n}- -x_{n-1} &amp; = x_{n-1}+x_{n} \\\\ \\end{array} $</p>\n\n<p>I interpret $A_{n}$ as the possibility that the sequence continues propagating forward. Otherwise the sequence is reflected backward. The sequence can take us first to an external mirror $0$ from which we would proceed to count backwards $-x_{n},-x_{n-1},\\dots,-x_{1}$. Or the sequence can take us from $x_{n}$ directly to its reflection $-x_{n}$. Or it can turn out that $x_{n}$ is itself a mirror and so the counting proceeds $x_{n-1} \\rightarrow x_{n} \\rightarrow -x_{n-1}$ where we may think of $x_{n}$ and $-x_{n}$ as intrinsically indistinguishable conjugates.</p>\n\n<p><a href=\"https://i.stack.imgur.com/qtb5y.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/qtb5y.png\" alt=\"enter image description here\"></a></p>\n\n<p>The point is that there is a duality between counting forwards and counting backwards. (I imagine it as an outside view, watching the count forwards, like a tube growing, and then an inside view, where we go inside the sequence, counting backwards.) So long as we are counting forwards, the duality is potential. Once we start counting backwards, then the duality is manifest. And there are three ways that it can manifest itself: with an external mirror $0 \\leftrightarrow -0$, without an explicit mirror, and with an internal mirror $x_{n} \\leftrightarrow -x_{n}$.</p>\n\n<p>This is a rather straightforward analysis of the options for the diagonal entries of the Cartan subalgebra in Victor Kac's Lecture 15. 
The adjoint representation is relating pairs of these entries. The one subtle point is that in the case of the orthogonal Lie algebras $B_{n}$ and $D_{n}$ the matrices in the algebras have zeros on their anti-diagonal with the consequence that the adjoint representation never pairs $x_{i}$ with $-x_{i}$. But $B_{n}$ gets around this because it is odd-dimensional and so has a diagonal element $x_{n+1}=-x_{n+1}=0$ and thus can pair $x_{n}$ and $0$ yielding the simple root $x_{n}-0$. For this reason $B_{n}$ is similar to the symplectic algebra $C_{n}$ which can pair $x_{n}$ and $-x_{n}$ directly yielding the simple root $x_{n}- -x_{n} = 2x_{n}$.</p>\n\n<p>This may all be very meaningful in that cognitively, mathematics seems to be all about duality and its subtleties. I think of logic as a perfect duality, apparent in de Morgan's laws, between what is and what is not, which in set theory is the duality of an empty set and a total set. Mathematics then seems to consist of subtle deviations from this perfect duality, for example, the difference in topology between arbitrary unions and finite intersections. I did a metaphysical study (<a href=\"http://www.ms.lt/sodas/Book/Duality\" rel=\"nofollow noreferrer\">here are my notes</a>) of the <a href=\"https://en.wikipedia.org/wiki/List_of_dualities\" rel=\"nofollow noreferrer\">mathematical dualities listed in Wikipedia</a> and organized them in the diagram below. Basically, some of the dualities are explicit in mathematical notation and others are implicit in the mind, as with complex conjugation.</p>\n\n<p><a href=\"https://i.stack.imgur.com/VcVZD.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/VcVZD.png\" alt=\"enter image description here\"></a></p>\n\n<p>The cognitive bottleneck which I describe above seems to be near the very heart of the connection betwen Lie groups and Lie algebras. 
Now I am trying to understand the implications by working them out for both the Lie groups and for the Lie algebras. I keep in mind that it may turn out that the Lie machinery is an accident of history and not the most profound way of describing what is going on at the deepest level. For example, it may or may not turn out that cognitively the exceptional Lie groups are spurious. I will list some connections that I am trying to make and understand or conclude otherwise.</p>\n\n<p>I need to understand the link between the simple roots and the conjugate transposes (real, complex, quaternion) and the inner products (real, complex, quaternion) which the Lie groups preserve and are thus defined by. Apparently, the most ordinary Lie group $A_{n}$ is defined by the complex inner product (rather than, say, the real inner product). This means that the complex inner product based on two dimensions is the most ordinary. I imagine it may be expressing the duality of the options of counting forwards and backwards. The symplectic group is defined by the quaternion inner product and this would make sense if we notice that it is establishing two sequences, thus making for four dimensions. The orthogonal groups are defined by the real inner product and they can be thought of as carrying a single dimension because they are counting only one side of the sequence, outside (when there is an external mirror?) or inside (when there is an internal mirror?). Taken together the options define 2 + 4 + 1 + 1 = 8 dimensions. And this brings to mind the Bott periodicity clock or the Clifford algebra clock. I also have an 8-cycle in <a href=\"http://www.ms.lt/sodas/Book/20171011DisembodyingMind\" rel=\"nofollow noreferrer\">my study of cognitive frameworks</a>.</p>\n\n<p>I would like to try to relate the Lie groups with four geometries: affine, projective, conformal and symplectic. 
And perhaps more basically, with four ways of thinking about a triangle:</p>\n\n<ul>\n<li>The concatenation of three paths.</li>\n<li>The intersections of three lines (on which we travel backwards and\nforwards).</li>\n<li>Three angles that divide the triangle.</li>\n<li>The oriented area of the triangle that we sweep out as we go around\nit.</li>\n</ul>\n\n<p><a href=\"https://i.stack.imgur.com/hLzoY.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/hLzoY.png\" alt=\"enter image description here\"></a></p>\n\n<p>I want to imagine that these get expressed as affine, projective, conformal and symplectic geometries, respectively. In <a href=\"http://www.ms.lt/derlius/20171001AndriusKulikauskas-AGeometryOfMoods.pdf\" rel=\"nofollow noreferrer\">my study of emotion and moods</a>, I have found the different ways of thinking about a triangle to be useful in grounding six transformations which take us from a less specified geometry to a more specified one. I need to learn what gets preserved and how ($A_{n}$ - volume, $C_{n}$ - oriented surface area?, $B_{n}$ and $D_{n}$ - distance. I am thinking that complex derivatives (that is, holomorphic functions where the first derivative is nonzero) preserve angles and so $A_{n}$ expresses conformal geometry. And I am imagining that of $B_{n}$ and $D_{n}$, perhaps the first has an explicit origin and is thus the basis for projective geometry, whereas the second does not have an explicit origin and is the basis for affine geometry. </p>\n\n<p>Finally, I am trying to think of the root systems as constructing infinite families of polytopes. Here it may be helpful to identify each eigenvalue with a vertex and each root with an edge of a polytope. 
We have the following kinds of building blocks:</p>\n\n<p><a href=\"https://i.stack.imgur.com/RiVUs.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/RiVUs.png\" alt=\"Building blocks of root systems\"></a></p>\n\n<p>$A_{n}$ can be identified with the simplexes. I think of the simplex as having a unique center - the -1 simplex - which keeps moving as the simplex grows. Similarly, the simplex has a unique volume. The binomial expansion then codes for all of the subsimplexes. </p>\n\n<p><a href=\"https://i.stack.imgur.com/oxDFU.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/oxDFU.png\" alt=\"enter image description here\"></a></p>\n\n<p>Now I think of the cross-polytopes as being generated by the center creating pairs of vertices, positive and negative, which are linked to all of the existing vertices but not to each other. This accords with the root system $C_{n}$. If we look at the triangle which counts the sub-polytopes, then we see that it has a unique center but the largest number counts the surface area, which is to say, there is no volume!</p>\n\n<p><a href=\"https://i.stack.imgur.com/InUnA.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/InUnA.png\" alt=\"enter image description here\"></a></p>\n\n<p>The duals of the cross-polytopes are the hypercubes, which can be thought of as generated by mirrors (hyperplanes). They have a volume but no center! I don't see how to relate $B_{n}$ with such a hypercube. Although perhaps the 0 is the total volume and it lets us construct our hypercube \"top down\" by dividing the whole volume rather than \"bottom up\" by assembling the vertices. </p>\n\n<p><a href=\"https://i.stack.imgur.com/qVZrG.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/qVZrG.png\" alt=\"enter image description here\"></a></p>\n\n<p>Finally, I expect there should be an infinite family of polytopes $D_{n}$ with no center and no volume. 
But I don't think it is the demihypercubes. I need to try to interpret the root system as a polytope.</p>\n\n<p>However, just to note the problematic nature of my own thinking, I had originally observed and supposed that the simplexes could be used to define a vector (leading from the center to a vertex) and affine geometry. The cross polytopes could define a line (going through the center in opposite directions to a pair of vertices) and projective geometry. The hypercubes could define angles and conformal geometry. And the final polytopes, which I imagined as \"coordinate systems\" (a symplex defined in \"one quadrant\"), could define oriented areas and symplectic geometry. This would have related $A_{n}$ - affine geometry, $B_{n}$ - projective geometry, $C_{n}$ - conformal geometry and $D_{n}$ - symplectic geometry. But in learning more I am realizing, in my thinking above, that it should be, if anything, $A_{n}$ - conformal geometry, $B_{n}$ - projective geometry, $C_{n}$ - symplectic geometry and $D_{n}$ - affine geometry. </p>\n\n<p>I appreciate any thoughts, but especially help finding my way in learning about Lie groups and Lie algebras. I also look forward to alternative answers, intuitively, why are there four classical Lie groups / algebras?</p>\n" }, { "answer_id": 4038796, "score": 1, "text": "<p>You may find insight in Coxeter's work on the subject, which leans heavily on classical geometry. In <em>Regular Polytopes,</em> <span class=\"math-container\">$\\S$</span>11-5, Coxeter achieves the result in question by studying the fundamental region of reflection groups (I also recommend reviewing the historical context in <span class=\"math-container\">$\\S$</span>11-x). 
Intuition is subjective, but I find Coxeter's style as a classical geometer highly intuitive.</p>\n<p>In the reference above, Coxeter relies heavily on Cartan's paper <a href=\"https://link.springer.com/article/10.1007/BF02409989\" rel=\"nofollow noreferrer\">La géométrie des groupes simples</a>. This paper is extremely lucid, and has also been a significant source of intuition for me on this subject.</p>\n" }, { "answer_id": 2640677, "score": 3, "text": "<p>I will share some progress that I am making in answering my own question. It comes from studying <a href=\"http://math.mit.edu/classes/18.745/classnotes.html\" rel=\"nofollow noreferrer\">Victor Kac's lecture notes</a>, and in particular, <a href=\"http://math.mit.edu/classes/18.745/Notes/Lecture_15_Notes.pdf\" rel=\"nofollow noreferrer\">Lecture 15</a>, and thinking concretely about how the simple roots arise from pairs of the diagonal entries of the matrices of the Cartan subalgebra. Perhaps somebody will have further or better thoughts. Basically, I may have found the metaphysical or cognitive bottleneck (in the good sense) that I am looking for, but now I need to link that to meaningful constraints on Lie Algebras and Lie Groups.</p>\n\n<p>Here is the closest that I seem to get to the crux. Imagine a counting process $1 \\rightarrow 2 \\rightarrow 3 \\rightarrow \\dots n-1 \\rightarrow n$ which we encode algebraically as $ x_{1}-x_{2}, x_{2}-x_{3}, \\dots , x_{n-1}-x_{n} $. All of the classical Lie algebras share this set of simple roots. We can think of this as a signal propagating forward. 
The classical Lie algebras are distinguished by one additional simple root for which there are four allowed possibilities:</p>\n\n<p>$ \\begin{array}{1|2]3|4} A_{n} &amp; n \\rightarrow n+1 &amp; x_{n}-x_{n+1} &amp; = x_{n}-x_{n+1} \\\\ B_{n} &amp; n \\rightarrow 0 &amp; x_{n}-0 &amp; = x_{n} \\\\ C_{n} &amp; n \\rightarrow -n &amp; x_{n}- -x_{n} &amp; = 2x_{n} \\\\ D_{n} &amp; n \\rightarrow -(n-1) &amp; x_{n}- -x_{n-1} &amp; = x_{n-1}+x_{n} \\\\ \\end{array} $</p>\n\n<p>I interpret $A_{n}$ as the possibility that the sequence continues propagating forward. Otherwise the sequence is reflected backward. The sequence can take us first to an external mirror $0$ from which we would proceed to count backwards $-x_{n},-x_{n-1},\\dots,-x_{1}$. Or the sequence can take us from $x_{n}$ directly to its reflection $-x_{n}$. Or it can turn out that $x_{n}$ is itself a mirror and so the counting proceeds $x_{n-1} \\rightarrow x_{n} \\rightarrow -x_{n-1}$ where we may think of $x_{n}$ and $-x_{n}$ as intrinsically indistinguishable conjugates.</p>\n\n<p><a href=\"https://i.stack.imgur.com/qtb5y.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/qtb5y.png\" alt=\"enter image description here\"></a></p>\n\n<p>The point is that there is a duality between counting forwards and counting backwards. (I imagine it as an outside view, watching the count forwards, like a tube growing, and then an inside view, where we go inside the sequence, counting backwards.) So long as we are counting forwards, the duality is potential. Once we start counting backwards, then the duality is manifest. And there are three ways that it can manifest itself: with an external mirror $0 \\leftrightarrow -0$, without an explicit mirror, and with an internal mirror $x_{n} \\leftrightarrow -x_{n}$.</p>\n\n<p>This is a rather straightforward analysis of the options for the diagonal entries of the Cartan subalgebra in Victor Kac's Lecture 15. 
The adjoint representation is relating pairs of these entries. The one subtle point is that in the case of the orthogonal Lie algebras $B_{n}$ and $D_{n}$ the matrices in the algebras have zeros on their anti-diagonal with the consequence that the adjoint representation never pairs $x_{i}$ with $-x_{i}$. But $B_{n}$ gets around this because it is odd-dimensional and so has a diagonal element $x_{n+1}=-x_{n+1}=0$ and thus can pair $x_{n}$ and $0$ yielding the simple root $x_{n}-0$. For this reason $B_{n}$ is similar to the symplectic algebra $C_{n}$ which can pair $x_{n}$ and $-x_{n}$ directly yielding the simple root $x_{n}- -x_{n} = 2x_{n}$.</p>\n\n<p>This may all be very meaningful in that cognitively, mathematics seems to be all about duality and its subtleties. I think of logic as a perfect duality, apparent in de Morgan's laws, between what is and what is not, which in set theory is the duality of an empty set and a total set. Mathematics then seems to consist of subtle deviations from this perfect duality, for example, the difference in topology between arbitrary unions and finite intersections. I did a metaphysical study (<a href=\"http://www.ms.lt/sodas/Book/Duality\" rel=\"nofollow noreferrer\">here are my notes</a>) of the <a href=\"https://en.wikipedia.org/wiki/List_of_dualities\" rel=\"nofollow noreferrer\">mathematical dualities listed in Wikipedia</a> and organized them in the diagram below. Basically, some of the dualities are explicit in mathematical notation and others are implicit in the mind, as with complex conjugation.</p>\n\n<p><a href=\"https://i.stack.imgur.com/VcVZD.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/VcVZD.png\" alt=\"enter image description here\"></a></p>\n\n<p>The cognitive bottleneck which I describe above seems to be near the very heart of the connection between Lie groups and Lie algebras. 
Now I am trying to understand the implications by working them out for both the Lie groups and for the Lie algebras. I keep in mind that it may turn out that the Lie machinery is an accident of history and not the most profound way of describing what is going on at the deepest level. For example, it may or may not turn out that cognitively the exceptional Lie groups are spurious. I will list some connections that I am trying to make and understand or conclude otherwise.</p>\n\n<p>I need to understand the link between the simple roots and the conjugate transposes (real, complex, quaternion) and the inner products (real, complex, quaternion) which the Lie groups preserve and are thus defined by. Apparently, the most ordinary Lie group $A_{n}$ is defined by the complex inner product (rather than, say, the real inner product). This means that the complex inner product based on two dimensions is the most ordinary. I imagine it may be expressing the duality of the options of counting forwards and backwards. The symplectic group is defined by the quaternion inner product and this would make sense if we notice that it is establishing two sequences, thus making for four dimensions. The orthogonal groups are defined by the real inner product and they can be thought of as carrying a single dimension because they are counting only one side of the sequence, outside (when there is an external mirror?) or inside (when there is an internal mirror?). Taken together the options define 2 + 4 + 1 + 1 = 8 dimensions. And this brings to mind the Bott periodicity clock or the Clifford algebra clock. I also have an 8-cycle in <a href=\"http://www.ms.lt/sodas/Book/20171011DisembodyingMind\" rel=\"nofollow noreferrer\">my study of cognitive frameworks</a>.</p>\n\n<p>I would like to try to relate the Lie groups with four geometries: affine, projective, conformal and symplectic. 
And perhaps more basically, with four ways of thinking about a triangle:</p>\n\n<ul>\n<li>The concatenation of three paths.</li>\n<li>The intersections of three lines (on which we travel backwards and\nforwards).</li>\n<li>Three angles that divide the triangle.</li>\n<li>The oriented area of the triangle that we sweep out as we go around\nit.</li>\n</ul>\n\n<p><a href=\"https://i.stack.imgur.com/hLzoY.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/hLzoY.png\" alt=\"enter image description here\"></a></p>\n\n<p>I want to imagine that these get expressed as affine, projective, conformal and symplectic geometries, respectively. In <a href=\"http://www.ms.lt/derlius/20171001AndriusKulikauskas-AGeometryOfMoods.pdf\" rel=\"nofollow noreferrer\">my study of emotion and moods</a>, I have found the different ways of thinking about a triangle to be useful in grounding six transformations which take us from a less specified geometry to a more specified one. I need to learn what gets preserved and how ($A_{n}$ - volume, $C_{n}$ - oriented surface area?, $B_{n}$ and $D_{n}$ - distance). I am thinking that complex derivatives (that is, holomorphic functions where the first derivative is nonzero) preserve angles and so $A_{n}$ expresses conformal geometry. And I am imagining that of $B_{n}$ and $D_{n}$, perhaps the first has an explicit origin and is thus the basis for projective geometry, whereas the second does not have an explicit origin and is the basis for affine geometry. </p>\n\n<p>Finally, I am trying to think of the root systems as constructing infinite families of polytopes. Here it may be helpful to identify each eigenvalue with a vertex and each root with an edge of a polytope. 
We have the following kinds of building blocks:</p>\n\n<p><a href=\"https://i.stack.imgur.com/RiVUs.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/RiVUs.png\" alt=\"Building blocks of root systems\"></a></p>\n\n<p>$A_{n}$ can be identified with the simplexes. I think of the simplex as having a unique center - the -1 simplex - which keeps moving as the simplex grows. Similarly, the simplex has a unique volume. The binomial expansion then codes for all of the subsimplexes. </p>\n\n<p><a href=\"https://i.stack.imgur.com/oxDFU.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/oxDFU.png\" alt=\"enter image description here\"></a></p>\n\n<p>Now I think of the cross-polytopes as being generated by the center creating pairs of vertices, positive and negative, which are linked to all of the existing vertices but not to each other. This accords with the root system $C_{n}$. If we look at the triangle which counts the sub-polytopes, then we see that it has a unique center but the largest number counts the surface area, which is to say, there is no volume!</p>\n\n<p><a href=\"https://i.stack.imgur.com/InUnA.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/InUnA.png\" alt=\"enter image description here\"></a></p>\n\n<p>The duals of the cross-polytopes are the hypercubes, which can be thought of as generated by mirrors (hyperplanes). They have a volume but no center! I don't see how to relate $B_{n}$ with such a hypercube. Although perhaps the 0 is the total volume and it lets us construct our hypercube \"top down\" by dividing the whole volume rather than \"bottom up\" by assembling the vertices. </p>\n\n<p><a href=\"https://i.stack.imgur.com/qVZrG.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/qVZrG.png\" alt=\"enter image description here\"></a></p>\n\n<p>Finally, I expect there should be an infinite family of polytopes $D_{n}$ with no center and no volume. 
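The eigenvalue-as-vertex, root-as-edge picture sketched above can be checked by counting. The root-system cardinalities are standard; the pairing with polytope edge counts below is my own hedged bookkeeping, not a claim from the answer:

```python
# Compare the number of roots of each classical family with edge counts of
# candidate polytopes.  A_n roots (x_i - x_j, i != j) come in directed pairs,
# so they number twice the edges of the n-simplex.  D_n roots (+-x_i +- x_j,
# i < j) match the edges of the n-dimensional cross-polytope exactly, and
# B_n / C_n each add 2n further roots (+-x_i, resp. +-2x_i) on top of D_n's.
from math import comb

def num_roots(family, n):
    if family == "A":
        return n * (n + 1)         # x_i - x_j over n+1 coordinates
    if family in ("B", "C"):
        return 2 * n * n           # +-x_i +- x_j plus the 2n extra roots
    return 2 * n * (n - 1)         # D: +-x_i +- x_j only

def simplex_edges(n):              # n+1 vertices, every pair joined
    return comb(n + 1, 2)

def cross_polytope_edges(n):       # 2n vertices, all pairs except antipodes
    return comb(2 * n, 2) - n

for n in range(2, 6):
    assert num_roots("A", n) == 2 * simplex_edges(n)
    assert num_roots("D", n) == cross_polytope_edges(n)
    assert num_roots("B", n) == cross_polytope_edges(n) + 2 * n
    assert num_roots("C", n) == num_roots("B", n)
```

This at least makes quantitative the idea that $B_{n}$ and $C_{n}$ extend the $D_{n}$ picture by the $2n$ "mirror pair" roots, while $A_{n}$ stands apart as the simplex count.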
But I don't think it is the demihypercubes. I need to try to interpret the root system as a polytope.</p>\n\n<p>However, just to note the problematic nature of my own thinking, I had originally observed and supposed that the simplexes could be used to define a vector (leading from the center to a vertex) and affine geometry. The cross polytopes could define a line (going through the center in opposite directions to a pair of vertices) and projective geometry. The hypercubes could define angles and conformal geometry. And the final polytopes, which I imagined as \"coordinate systems\" (a simplex defined in \"one quadrant\"), could define oriented areas and symplectic geometry. This would have related $A_{n}$ - affine geometry, $B_{n}$ - projective geometry, $C_{n}$ - conformal geometry and $D_{n}$ - symplectic geometry. But in learning more I am realizing, in my thinking above, that it should be, if anything, $A_{n}$ - conformal geometry, $B_{n}$ - projective geometry, $C_{n}$ - symplectic geometry and $D_{n}$ - affine geometry. </p>\n\n<p>I appreciate any thoughts, but especially help finding my way in learning about Lie groups and Lie algebras. I also look forward to alternative answers, intuitively, why are there four classical Lie groups / algebras?</p>\n" }, { "answer_id": 4038796, "score": 1, "text": "<p>You may find insight in Coxeter's work on the subject, which leans heavily on classical geometry. In <em>Regular Polytopes,</em> <span class=\"math-container\">$\\S$</span>11-5, Coxeter achieves the result in question by studying the fundamental region of reflection groups (I also recommend reviewing the historical context in <span class=\"math-container\">$\\S$</span>11-x). 
Intuition is subjective, but I find Coxeter's style as a classical geometer highly intuitive.</p>\n<p>In the reference above, Coxeter relies heavily on Cartan's paper <a href=\"https://link.springer.com/article/10.1007/BF02409989\" rel=\"nofollow noreferrer\">La géométrie des groupes simples</a>. This paper is extremely lucid, and has also been a significant source of intuition for me on this subject.</p>\n" }, { "answer_id": 2640677, "score": 3, "text": "<p>I will share some progress that I am making in answering my own question. It comes from studying <a href=\"http://math.mit.edu/classes/18.745/classnotes.html\" rel=\"nofollow noreferrer\">Victor Kac's lecture notes</a>, and in particular, <a href=\"http://math.mit.edu/classes/18.745/Notes/Lecture_15_Notes.pdf\" rel=\"nofollow noreferrer\">Lecture 15</a>, and thinking concretely about how the simple roots arise from pairs of the diagonal entries of the matrices of the Cartan subalgebra. Perhaps somebody will have further or better thoughts. Basically, I may have found the metaphysical or cognitive bottleneck (in the good sense) that I am looking for, but now I need to link that to meaningful constraints on Lie Algebras and Lie Groups.</p>\n\n<p>Here is the closest that I seem to get to the crux. Imagine a counting process $1 \\rightarrow 2 \\rightarrow 3 \\rightarrow \\dots n-1 \\rightarrow n$ which we encode algebraically as $ x_{1}-x_{2}, x_{2}-x_{3}, \\dots , x_{n-1}-x_{n} $. All of the classical Lie algebras share this set of simple roots. We can think of this as a signal propagating forward. 
The classical Lie algebras are distinguished by one additional simple root for which there are four allowed possibilities:</p>\n\n<p>$ \\begin{array}{1|2]3|4} A_{n} &amp; n \\rightarrow n+1 &amp; x_{n}-x_{n+1} &amp; = x_{n}-x_{n+1} \\\\ B_{n} &amp; n \\rightarrow 0 &amp; x_{n}-0 &amp; = x_{n} \\\\ C_{n} &amp; n \\rightarrow -n &amp; x_{n}- -x_{n} &amp; = 2x_{n} \\\\ D_{n} &amp; n \\rightarrow -(n-1) &amp; x_{n}- -x_{n-1} &amp; = x_{n-1}+x_{n} \\\\ \\end{array} $</p>\n\n<p>I interpret $A_{n}$ as the possibility that the sequence continues propagating forward. Otherwise the sequence is reflected backward. The sequence can take us first to an external mirror $0$ from which we would proceed to count backwards $-x_{n},-x_{n-1},\\dots,-x_{1}$. Or the sequence can take us from $x_{n}$ directly to its reflection $-x_{n}$. Or it can turn out that $x_{n}$ is itself a mirror and so the counting proceeds $x_{n-1} \\rightarrow x_{n} \\rightarrow -x_{n-1}$ where we may think of $x_{n}$ and $-x_{n}$ as intrinsically indistinguishable conjugates.</p>\n\n<p><a href=\"https://i.stack.imgur.com/qtb5y.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/qtb5y.png\" alt=\"enter image description here\"></a></p>\n\n<p>The point is that there is a duality between counting forwards and counting backwards. (I imagine it as an outside view, watching the count forwards, like a tube growing, and then an inside view, where we go inside the sequence, counting backwards.) So long as we are counting forwards, the duality is potential. Once we start counting backwards, then the duality is manifest. And there are three ways that it can manifest itself: with an external mirror $0 \\leftrightarrow -0$, without an explicit mirror, and with an internal mirror $x_{n} \\leftrightarrow -x_{n}$.</p>\n\n<p>This is a rather straightforward analysis of the options for the diagonal entries of the Cartan subalgebra in Victor Kac's Lecture 15. 
The adjoint representation is relating pairs of these entries. The one subtle point is that in the case of the orthogonal Lie algebras $B_{n}$ and $D_{n}$ the matrices in the algebras have zeros on their anti-diagonal with the consequence that the adjoint representation never pairs $x_{i}$ with $-x_{i}$. But $B_{n}$ gets around this because it is odd-dimensional and so has a diagonal element $x_{n+1}=-x_{n+1}=0$ and thus can pair $x_{n}$ and $0$ yielding the simple root $x_{n}-0$. For this reason $B_{n}$ is similar to the symplectic algebra $C_{n}$ which can pair $x_{n}$ and $-x_{n}$ directly yielding the simple root $x_{n}- -x_{n} = 2x_{n}$.</p>\n\n<p>This may all be very meaningful in that cognitively, mathematics seems to be all about duality and its subtleties. I think of logic as a perfect duality, apparent in de Morgan's laws, between what is and what is not, which in set theory is the duality of an empty set and a total set. Mathematics then seems to consist of subtle deviations from this perfect duality, for example, the difference in topology between arbitrary unions and finite intersections. I did a metaphysical study (<a href=\"http://www.ms.lt/sodas/Book/Duality\" rel=\"nofollow noreferrer\">here are my notes</a>) of the <a href=\"https://en.wikipedia.org/wiki/List_of_dualities\" rel=\"nofollow noreferrer\">mathematical dualities listed in Wikipedia</a> and organized them in the diagram below. Basically, some of the dualities are explicit in mathematical notation and others are implicit in the mind, as with complex conjugation.</p>\n\n<p><a href=\"https://i.stack.imgur.com/VcVZD.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/VcVZD.png\" alt=\"enter image description here\"></a></p>\n\n<p>The cognitive bottleneck which I describe above seems to be near the very heart of the connection betwen Lie groups and Lie algebras. 
Now I am trying to understand the implications by working them out for both the Lie groups and for the Lie algebras. I keep in mind that it may turn out that the Lie machinery is an accident of history and not the most profound way of describing what is going on at the deepest level. For example, it may or may not turn out that cognitively the exceptional Lie groups are spurious. I will list some connections that I am trying to make and understand or conclude otherwise.</p>\n\n<p>I need to understand the link between the simple roots and the conjugate transposes (real, complex, quaternion) and the inner products (real, complex, quaternion) which the Lie groups preserve and are thus defined by. Apparently, the most ordinary Lie group $A_{n}$ is defined by the complex inner product (rather than, say, the real inner product). This means that the complex inner product based on two dimensions is the most ordinary. I imagine it may be expressing the duality of the options of counting forwards and backwards. The symplectic group is defined by the quaternion inner product and this would make sense if we notice that it is establishing two sequences, thus making for four dimensions. The orthogonal groups are defined by the real inner product and they can be thought of as carrying a single dimension because they are counting only one side of the sequence, outside (when there is an external mirror?) or inside (when there is an internal mirror?). Taken together the options define 2 + 4 + 1 + 1 = 8 dimensions. And this brings to mind the Bott periodicity clock or the Clifford algebra clock. I also have an 8-cycle in <a href=\"http://www.ms.lt/sodas/Book/20171011DisembodyingMind\" rel=\"nofollow noreferrer\">my study of cognitive frameworks</a>.</p>\n\n<p>I would like to try to relate the Lie groups with four geometries: affine, projective, conformal and symplectic. 
And perhaps more basically, with four ways of thinking about a triangle:</p>\n\n<ul>\n<li>The concatenation of three paths.</li>\n<li>The intersections of three lines (on which we travel backwards and\nforwards).</li>\n<li>Three angles that divide the triangle.</li>\n<li>The oriented area of the triangle that we sweep out as we go around\nit.</li>\n</ul>\n\n<p><a href=\"https://i.stack.imgur.com/hLzoY.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/hLzoY.png\" alt=\"enter image description here\"></a></p>\n\n<p>I want to imagine that these get expressed as affine, projective, conformal and symplectic geometries, respectively. In <a href=\"http://www.ms.lt/derlius/20171001AndriusKulikauskas-AGeometryOfMoods.pdf\" rel=\"nofollow noreferrer\">my study of emotion and moods</a>, I have found the different ways of thinking about a triangle to be useful in grounding six transformations which take us from a less specified geometry to a more specified one. I need to learn what gets preserved and how ($A_{n}$ - volume, $C_{n}$ - oriented surface area?, $B_{n}$ and $D_{n}$ - distance. I am thinking that complex derivatives (that is, holomorphic functions where the first derivative is nonzero) preserve angles and so $A_{n}$ expresses conformal geometry. And I am imagining that of $B_{n}$ and $D_{n}$, perhaps the first has an explicit origin and is thus the basis for projective geometry, whereas the second does not have an explicit origin and is the basis for affine geometry. </p>\n\n<p>Finally, I am trying to think of the root systems as constructing infinite families of polytopes. Here it may be helpful to identify each eigenvalue with a vertex and each root with an edge of a polytope. 
We have the following kinds of building blocks:</p>\n\n<p><a href=\"https://i.stack.imgur.com/RiVUs.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/RiVUs.png\" alt=\"Building blocks of root systems\"></a></p>\n\n<p>$A_{n}$ can be identified with the simplexes. I think of the simplex as having a unique center - the -1 simplex - which keeps moving as the simplex grows. Similarly, the simplex has a unique volume. The binomial expansion then codes for all of the subsimplexes. </p>\n\n<p><a href=\"https://i.stack.imgur.com/oxDFU.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/oxDFU.png\" alt=\"enter image description here\"></a></p>\n\n<p>Now I think of the cross-polytopes as being generated by the center creating pairs of vertices, positive and negative, which are linked to all of the existing vertices but not to each other. This accords with the root system $C_{n}$. If we look at the triangle which counts the sub-polytopes, then we see that it has a unique center but the largest number counts the surface area, which is to say, there is no volume!</p>\n\n<p><a href=\"https://i.stack.imgur.com/InUnA.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/InUnA.png\" alt=\"enter image description here\"></a></p>\n\n<p>The duals of the cross-polytopes are the hypercubes, which can be thought of as generated by mirrors (hyperplanes). They have a volume but no center! I don't see how to relate $B_{n}$ with such a hypercube. Although perhaps the 0 is the total volume and it lets us construct our hypercube \"top down\" by dividing the whole volume rather than \"bottom up\" by assembling the vertices. </p>\n\n<p><a href=\"https://i.stack.imgur.com/qVZrG.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/qVZrG.png\" alt=\"enter image description here\"></a></p>\n\n<p>Finally, I expect there should be an infinite family of polytopes $D_{n}$ with no center and no volume. 
But I don't think it is the demihypercubes. I need to try to interpret the root system as a polytope.</p>\n\n<p>However, just to note the problematic nature of my own thinking, I had originally observed and supposed that the simplexes could be used to define a vector (leading from the center to a vertex) and affine geometry. The cross polytopes could define a line (going through the center in opposite directions to a pair of vertices) and projective geometry. The hypercubes could define angles and conformal geometry. And the final polytopes, which I imagined as \"coordinate systems\" (a symplex defined in \"one quadrant\"), could define oriented areas and symplectic geometry. This would have related $A_{n}$ - affine geometry, $B_{n}$ - projective geometry, $C_{n}$ - conformal geometry and $D_{n}$ - symplectic geometry. But in learning more I am realizing, in my thinking above, that it should be, if anything, $A_{n}$ - conformal geometry, $B_{n}$ - projective geometry, $C_{n}$ - symplectic geometry and $D_{n}$ - affine geometry. </p>\n\n<p>I appreciate any thoughts, but especially help finding my way in learning about Lie groups and Lie algebras. I also look forward to alternative answers, intuitively, why are there four classical Lie groups / algebras?</p>\n" }, { "answer_id": 4038796, "score": 1, "text": "<p>You may find insight in Coxeter's work on the subject, which leans heavily on classical geometry. In <em>Regular Polytopes,</em> <span class=\"math-container\">$\\S$</span>11-5, Coxeter achieves the result in question by studying the fundamental region of reflection groups (I also recommend reviewing the historical context in <span class=\"math-container\">$\\S$</span>11-x). 
Intuition is subjective, but I find Coxeter's style as a classical geometer highly intuitive.</p>\n<p>In the reference above, Coxeter relies heavily on Cartan's paper <a href=\"https://link.springer.com/article/10.1007/BF02409989\" rel=\"nofollow noreferrer\">La géométrie des groupes simples</a>. This paper is extremely lucid, and has also been a significant source of intuition for me on this subject.</p>\n" }, { "answer_id": 2640677, "score": 3, "text": "<p>I will share some progress that I am making in answering my own question. It comes from studying <a href=\"http://math.mit.edu/classes/18.745/classnotes.html\" rel=\"nofollow noreferrer\">Victor Kac's lecture notes</a>, and in particular, <a href=\"http://math.mit.edu/classes/18.745/Notes/Lecture_15_Notes.pdf\" rel=\"nofollow noreferrer\">Lecture 15</a>, and thinking concretely about how the simple roots arise from pairs of the diagonal entries of the matrices of the Cartan subalgebra. Perhaps somebody will have further or better thoughts. Basically, I may have found the metaphysical or cognitive bottleneck (in the good sense) that I am looking for, but now I need to link that to meaningful constraints on Lie Algebras and Lie Groups.</p>\n\n<p>Here is the closest that I seem to get to the crux. Imagine a counting process $1 \\rightarrow 2 \\rightarrow 3 \\rightarrow \\dots n-1 \\rightarrow n$ which we encode algebraically as $ x_{1}-x_{2}, x_{2}-x_{3}, \\dots , x_{n-1}-x_{n} $. All of the classical Lie algebras share this set of simple roots. We can think of this as a signal propagating forward. 
The classical Lie algebras are distinguished by one additional simple root for which there are four allowed possibilities:</p>\n\n<p>$ \\begin{array}{1|2]3|4} A_{n} &amp; n \\rightarrow n+1 &amp; x_{n}-x_{n+1} &amp; = x_{n}-x_{n+1} \\\\ B_{n} &amp; n \\rightarrow 0 &amp; x_{n}-0 &amp; = x_{n} \\\\ C_{n} &amp; n \\rightarrow -n &amp; x_{n}- -x_{n} &amp; = 2x_{n} \\\\ D_{n} &amp; n \\rightarrow -(n-1) &amp; x_{n}- -x_{n-1} &amp; = x_{n-1}+x_{n} \\\\ \\end{array} $</p>\n\n<p>I interpret $A_{n}$ as the possibility that the sequence continues propagating forward. Otherwise the sequence is reflected backward. The sequence can take us first to an external mirror $0$ from which we would proceed to count backwards $-x_{n},-x_{n-1},\\dots,-x_{1}$. Or the sequence can take us from $x_{n}$ directly to its reflection $-x_{n}$. Or it can turn out that $x_{n}$ is itself a mirror and so the counting proceeds $x_{n-1} \\rightarrow x_{n} \\rightarrow -x_{n-1}$ where we may think of $x_{n}$ and $-x_{n}$ as intrinsically indistinguishable conjugates.</p>\n\n<p><a href=\"https://i.stack.imgur.com/qtb5y.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/qtb5y.png\" alt=\"enter image description here\"></a></p>\n\n<p>The point is that there is a duality between counting forwards and counting backwards. (I imagine it as an outside view, watching the count forwards, like a tube growing, and then an inside view, where we go inside the sequence, counting backwards.) So long as we are counting forwards, the duality is potential. Once we start counting backwards, then the duality is manifest. And there are three ways that it can manifest itself: with an external mirror $0 \\leftrightarrow -0$, without an explicit mirror, and with an internal mirror $x_{n} \\leftrightarrow -x_{n}$.</p>\n\n<p>This is a rather straightforward analysis of the options for the diagonal entries of the Cartan subalgebra in Victor Kac's Lecture 15. 
The adjoint representation is relating pairs of these entries. The one subtle point is that in the case of the orthogonal Lie algebras $B_{n}$ and $D_{n}$ the matrices in the algebras have zeros on their anti-diagonal with the consequence that the adjoint representation never pairs $x_{i}$ with $-x_{i}$. But $B_{n}$ gets around this because it is odd-dimensional and so has a diagonal element $x_{n+1}=-x_{n+1}=0$ and thus can pair $x_{n}$ and $0$ yielding the simple root $x_{n}-0$. For this reason $B_{n}$ is similar to the symplectic algebra $C_{n}$ which can pair $x_{n}$ and $-x_{n}$ directly yielding the simple root $x_{n}- -x_{n} = 2x_{n}$.</p>\n\n<p>This may all be very meaningful in that cognitively, mathematics seems to be all about duality and its subtleties. I think of logic as a perfect duality, apparent in de Morgan's laws, between what is and what is not, which in set theory is the duality of an empty set and a total set. Mathematics then seems to consist of subtle deviations from this perfect duality, for example, the difference in topology between arbitrary unions and finite intersections. I did a metaphysical study (<a href=\"http://www.ms.lt/sodas/Book/Duality\" rel=\"nofollow noreferrer\">here are my notes</a>) of the <a href=\"https://en.wikipedia.org/wiki/List_of_dualities\" rel=\"nofollow noreferrer\">mathematical dualities listed in Wikipedia</a> and organized them in the diagram below. Basically, some of the dualities are explicit in mathematical notation and others are implicit in the mind, as with complex conjugation.</p>\n\n<p><a href=\"https://i.stack.imgur.com/VcVZD.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/VcVZD.png\" alt=\"enter image description here\"></a></p>\n\n<p>The cognitive bottleneck which I describe above seems to be near the very heart of the connection betwen Lie groups and Lie algebras. 
Now I am trying to understand the implications by working them out for both the Lie groups and for the Lie algebras. I keep in mind that it may turn out that the Lie machinery is an accident of history and not the most profound way of describing what is going on at the deepest level. For example, it may or may not turn out that cognitively the exceptional Lie groups are spurious. I will list some connections that I am trying to make and understand or conclude otherwise.</p>\n\n<p>I need to understand the link between the simple roots and the conjugate transposes (real, complex, quaternion) and the inner products (real, complex, quaternion) which the Lie groups preserve and are thus defined by. Apparently, the most ordinary Lie group $A_{n}$ is defined by the complex inner product (rather than, say, the real inner product). This means that the complex inner product based on two dimensions is the most ordinary. I imagine it may be expressing the duality of the options of counting forwards and backwards. The symplectic group is defined by the quaternion inner product and this would make sense if we notice that it is establishing two sequences, thus making for four dimensions. The orthogonal groups are defined by the real inner product and they can be thought of as carrying a single dimension because they are counting only one side of the sequence, outside (when there is an external mirror?) or inside (when there is an internal mirror?). Taken together the options define 2 + 4 + 1 + 1 = 8 dimensions. And this brings to mind the Bott periodicity clock or the Clifford algebra clock. I also have an 8-cycle in <a href=\"http://www.ms.lt/sodas/Book/20171011DisembodyingMind\" rel=\"nofollow noreferrer\">my study of cognitive frameworks</a>.</p>\n\n<p>I would like to try to relate the Lie groups with four geometries: affine, projective, conformal and symplectic. 
And perhaps more basically, with four ways of thinking about a triangle:</p>\n\n<ul>\n<li>The concatenation of three paths.</li>\n<li>The intersections of three lines (on which we travel backwards and\nforwards).</li>\n<li>Three angles that divide the triangle.</li>\n<li>The oriented area of the triangle that we sweep out as we go around\nit.</li>\n</ul>\n\n<p><a href=\"https://i.stack.imgur.com/hLzoY.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/hLzoY.png\" alt=\"enter image description here\"></a></p>\n\n<p>I want to imagine that these get expressed as affine, projective, conformal and symplectic geometries, respectively. In <a href=\"http://www.ms.lt/derlius/20171001AndriusKulikauskas-AGeometryOfMoods.pdf\" rel=\"nofollow noreferrer\">my study of emotion and moods</a>, I have found the different ways of thinking about a triangle to be useful in grounding six transformations which take us from a less specified geometry to a more specified one. I need to learn what gets preserved and how ($A_{n}$ - volume, $C_{n}$ - oriented surface area?, $B_{n}$ and $D_{n}$ - distance. I am thinking that complex derivatives (that is, holomorphic functions where the first derivative is nonzero) preserve angles and so $A_{n}$ expresses conformal geometry. And I am imagining that of $B_{n}$ and $D_{n}$, perhaps the first has an explicit origin and is thus the basis for projective geometry, whereas the second does not have an explicit origin and is the basis for affine geometry. </p>\n\n<p>Finally, I am trying to think of the root systems as constructing infinite families of polytopes. Here it may be helpful to identify each eigenvalue with a vertex and each root with an edge of a polytope. 
We have the following kinds of building blocks:</p>\n\n<p><a href=\"https://i.stack.imgur.com/RiVUs.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/RiVUs.png\" alt=\"Building blocks of root systems\"></a></p>\n\n<p>$A_{n}$ can be identified with the simplexes. I think of the simplex as having a unique center - the -1 simplex - which keeps moving as the simplex grows. Similarly, the simplex has a unique volume. The binomial expansion then codes for all of the subsimplexes. </p>\n\n<p><a href=\"https://i.stack.imgur.com/oxDFU.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/oxDFU.png\" alt=\"enter image description here\"></a></p>\n\n<p>Now I think of the cross-polytopes as being generated by the center creating pairs of vertices, positive and negative, which are linked to all of the existing vertices but not to each other. This accords with the root system $C_{n}$. If we look at the triangle which counts the sub-polytopes, then we see that it has a unique center but the largest number counts the surface area, which is to say, there is no volume!</p>\n\n<p><a href=\"https://i.stack.imgur.com/InUnA.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/InUnA.png\" alt=\"enter image description here\"></a></p>\n\n<p>The duals of the cross-polytopes are the hypercubes, which can be thought of as generated by mirrors (hyperplanes). They have a volume but no center! I don't see how to relate $B_{n}$ with such a hypercube. Although perhaps the 0 is the total volume and it lets us construct our hypercube \"top down\" by dividing the whole volume rather than \"bottom up\" by assembling the vertices. </p>\n\n<p><a href=\"https://i.stack.imgur.com/qVZrG.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/qVZrG.png\" alt=\"enter image description here\"></a></p>\n\n<p>Finally, I expect there should be an infinite family of polytopes $D_{n}$ with no center and no volume. 
But I don't think it is the demihypercubes. I need to try to interpret the root system as a polytope.</p>\n\n<p>However, just to note the problematic nature of my own thinking, I had originally observed and supposed that the simplexes could be used to define a vector (leading from the center to a vertex) and affine geometry. The cross polytopes could define a line (going through the center in opposite directions to a pair of vertices) and projective geometry. The hypercubes could define angles and conformal geometry. And the final polytopes, which I imagined as \"coordinate systems\" (a simplex defined in \"one quadrant\"), could define oriented areas and symplectic geometry. This would have related $A_{n}$ - affine geometry, $B_{n}$ - projective geometry, $C_{n}$ - conformal geometry and $D_{n}$ - symplectic geometry. But in learning more I am realizing, in my thinking above, that it should be, if anything, $A_{n}$ - conformal geometry, $B_{n}$ - projective geometry, $C_{n}$ - symplectic geometry and $D_{n}$ - affine geometry. </p>\n\n<p>I appreciate any thoughts, but especially help finding my way in learning about Lie groups and Lie algebras. I also look forward to alternative answers, intuitively, why are there four classical Lie groups / algebras?</p>\n" }, { "answer_id": 4038796, "score": 1, "text": "<p>You may find insight in Coxeter's work on the subject, which leans heavily on classical geometry. In <em>Regular Polytopes,</em> <span class=\"math-container\">$\\S$</span>11-5, Coxeter achieves the result in question by studying the fundamental region of reflection groups (I also recommend reviewing the historical context in <span class=\"math-container\">$\\S$</span>11-x). 
Intuition is subjective, but I find Coxeter's style as a classical geometer highly intuitive.</p>\n<p>In the reference above, Coxeter relies heavily on Cartan's paper <a href=\"https://link.springer.com/article/10.1007/BF02409989\" rel=\"nofollow noreferrer\">La géométrie des groupes simples</a>. This paper is extremely lucid, and has also been a significant source of intuition for me on this subject.</p>\n" }
Question: Intuitively, why are there 4 classical Lie groups/algebras? Answer 1: <p>I will share some progress that I am making in answering my own question. It comes from studying <a href="http://math.mit.edu/classes/18.745/classnotes.html" rel="nofollow noreferrer">Victor Kac's lecture notes</a>, and in particular, <a href="http://math.mit.edu/classes/18.745/Notes/Lecture_15_Notes.pdf" rel="nofollow noreferrer">Lecture 15</a>, and thinking concretely about how the simple roots arise from pairs of the diagonal entries of the matrices of the Cartan subalgebra. Perhaps somebody will have further or better thoughts. Basically, I may have found the metaphysical or cognitive bottleneck (in the good sense) that I am looking for, but now I need to link that to meaningful constraints on Lie Algebras and Lie Groups.</p> <p>Here is the closest that I seem to get to the crux. Imagine a counting process $1 \rightarrow 2 \rightarrow 3 \rightarrow \dots n-1 \rightarrow n$ which we encode algebraically as $ x_{1}-x_{2}, x_{2}-x_{3}, \dots , x_{n-1}-x_{n} $. All of the classical Lie algebras share this set of simple roots. We can think of this as a signal propagating forward. The classical Lie algebras are distinguished by one additional simple root for which there are four allowed possibilities:</p> <p>$ \begin{array}{l|l|l|l} A_{n} &amp; n \rightarrow n+1 &amp; x_{n}-x_{n+1} &amp; = x_{n}-x_{n+1} \\ B_{n} &amp; n \rightarrow 0 &amp; x_{n}-0 &amp; = x_{n} \\ C_{n} &amp; n \rightarrow -n &amp; x_{n}- -x_{n} &amp; = 2x_{n} \\ D_{n} &amp; n \rightarrow -(n-1) &amp; x_{n}- -x_{n-1} &amp; = x_{n-1}+x_{n} \\ \end{array} $</p> <p>I interpret $A_{n}$ as the possibility that the sequence continues propagating forward. Otherwise the sequence is reflected backward. The sequence can take us first to an external mirror $0$ from which we would proceed to count backwards $-x_{n},-x_{n-1},\dots,-x_{1}$. Or the sequence can take us from $x_{n}$ directly to its reflection $-x_{n}$. 
Or it can turn out that $x_{n}$ is itself a mirror and so the counting proceeds $x_{n-1} \rightarrow x_{n} \rightarrow -x_{n-1}$ where we may think of $x_{n}$ and $-x_{n}$ as intrinsically indistinguishable conjugates.</p> <p><a href="https://i.stack.imgur.com/qtb5y.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/qtb5y.png" alt="enter image description here"></a></p> <p>The point is that there is a duality between counting forwards and counting backwards. (I imagine it as an outside view, watching the count forwards, like a tube growing, and then an inside view, where we go inside the sequence, counting backwards.) So long as we are counting forwards, the duality is potential. Once we start counting backwards, then the duality is manifest. And there are three ways that it can manifest itself: with an external mirror $0 \leftrightarrow -0$, without an explicit mirror, and with an internal mirror $x_{n} \leftrightarrow -x_{n}$.</p> <p>This is a rather straightforward analysis of the options for the diagonal entries of the Cartan subalgebra in Victor Kac's Lecture 15. The adjoint representation is relating pairs of these entries. The one subtle point is that in the case of the orthogonal Lie algebras $B_{n}$ and $D_{n}$ the matrices in the algebras have zeros on their anti-diagonal with the consequence that the adjoint representation never pairs $x_{i}$ with $-x_{i}$. But $B_{n}$ gets around this because it is odd-dimensional and so has a diagonal element $x_{n+1}=-x_{n+1}=0$ and thus can pair $x_{n}$ and $0$ yielding the simple root $x_{n}-0$. For this reason $B_{n}$ is similar to the symplectic algebra $C_{n}$ which can pair $x_{n}$ and $-x_{n}$ directly yielding the simple root $x_{n}- -x_{n} = 2x_{n}$.</p> <p>This may all be very meaningful in that cognitively, mathematics seems to be all about duality and its subtleties. 
I think of logic as a perfect duality, apparent in de Morgan's laws, between what is and what is not, which in set theory is the duality of an empty set and a total set. Mathematics then seems to consist of subtle deviations from this perfect duality, for example, the difference in topology between arbitrary unions and finite intersections. I did a metaphysical study (<a href="http://www.ms.lt/sodas/Book/Duality" rel="nofollow noreferrer">here are my notes</a>) of the <a href="https://en.wikipedia.org/wiki/List_of_dualities" rel="nofollow noreferrer">mathematical dualities listed in Wikipedia</a> and organized them in the diagram below. Basically, some of the dualities are explicit in mathematical notation and others are implicit in the mind, as with complex conjugation.</p> <p><a href="https://i.stack.imgur.com/VcVZD.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/VcVZD.png" alt="enter image description here"></a></p> <p>The cognitive bottleneck which I describe above seems to be near the very heart of the connection between Lie groups and Lie algebras. Now I am trying to understand the implications by working them out for both the Lie groups and for the Lie algebras. I keep in mind that it may turn out that the Lie machinery is an accident of history and not the most profound way of describing what is going on at the deepest level. For example, it may or may not turn out that cognitively the exceptional Lie groups are spurious. I will list some connections that I am trying to make and understand or conclude otherwise.</p> <p>I need to understand the link between the simple roots and the conjugate transposes (real, complex, quaternion) and the inner products (real, complex, quaternion) which the Lie groups preserve and are thus defined by. Apparently, the most ordinary Lie group $A_{n}$ is defined by the complex inner product (rather than, say, the real inner product). 
This means that the complex inner product based on two dimensions is the most ordinary. I imagine it may be expressing the duality of the options of counting forwards and backwards. The symplectic group is defined by the quaternion inner product and this would make sense if we notice that it is establishing two sequences, thus making for four dimensions. The orthogonal groups are defined by the real inner product and they can be thought of as carrying a single dimension because they are counting only one side of the sequence, outside (when there is an external mirror?) or inside (when there is an internal mirror?). Taken together the options define 2 + 4 + 1 + 1 = 8 dimensions. And this brings to mind the Bott periodicity clock or the Clifford algebra clock. I also have an 8-cycle in <a href="http://www.ms.lt/sodas/Book/20171011DisembodyingMind" rel="nofollow noreferrer">my study of cognitive frameworks</a>.</p> <p>I would like to try to relate the Lie groups with four geometries: affine, projective, conformal and symplectic. And perhaps more basically, with four ways of thinking about a triangle:</p> <ul> <li>The concatenation of three paths.</li> <li>The intersections of three lines (on which we travel backwards and forwards).</li> <li>Three angles that divide the triangle.</li> <li>The oriented area of the triangle that we sweep out as we go around it.</li> </ul> <p><a href="https://i.stack.imgur.com/hLzoY.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/hLzoY.png" alt="enter image description here"></a></p> <p>I want to imagine that these get expressed as affine, projective, conformal and symplectic geometries, respectively. In <a href="http://www.ms.lt/derlius/20171001AndriusKulikauskas-AGeometryOfMoods.pdf" rel="nofollow noreferrer">my study of emotion and moods</a>, I have found the different ways of thinking about a triangle to be useful in grounding six transformations which take us from a less specified geometry to a more specified one. 
I need to learn what gets preserved and how ($A_{n}$ - volume, $C_{n}$ - oriented surface area?, $B_{n}$ and $D_{n}$ - distance). I am thinking that complex derivatives (that is, holomorphic functions where the first derivative is nonzero) preserve angles and so $A_{n}$ expresses conformal geometry. And I am imagining that of $B_{n}$ and $D_{n}$, perhaps the first has an explicit origin and is thus the basis for projective geometry, whereas the second does not have an explicit origin and is the basis for affine geometry. </p> <p>Finally, I am trying to think of the root systems as constructing infinite families of polytopes. Here it may be helpful to identify each eigenvalue with a vertex and each root with an edge of a polytope. We have the following kinds of building blocks:</p> <p><a href="https://i.stack.imgur.com/RiVUs.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/RiVUs.png" alt="Building blocks of root systems"></a></p> <p>$A_{n}$ can be identified with the simplexes. I think of the simplex as having a unique center - the -1 simplex - which keeps moving as the simplex grows. Similarly, the simplex has a unique volume. The binomial expansion then codes for all of the subsimplexes. </p> <p><a href="https://i.stack.imgur.com/oxDFU.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/oxDFU.png" alt="enter image description here"></a></p> <p>Now I think of the cross-polytopes as being generated by the center creating pairs of vertices, positive and negative, which are linked to all of the existing vertices but not to each other. This accords with the root system $C_{n}$. 
If we look at the triangle which counts the sub-polytopes, then we see that it has a unique center but the largest number counts the surface area, which is to say, there is no volume!</p> <p><a href="https://i.stack.imgur.com/InUnA.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/InUnA.png" alt="enter image description here"></a></p> <p>The duals of the cross-polytopes are the hypercubes, which can be thought of as generated by mirrors (hyperplanes). They have a volume but no center! I don't see how to relate $B_{n}$ with such a hypercube. Although perhaps the 0 is the total volume and it lets us construct our hypercube "top down" by dividing the whole volume rather than "bottom up" by assembling the vertices. </p> <p><a href="https://i.stack.imgur.com/qVZrG.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/qVZrG.png" alt="enter image description here"></a></p> <p>Finally, I expect there should be an infinite family of polytopes $D_{n}$ with no center and no volume. But I don't think it is the demihypercubes. I need to try to interpret the root system as a polytope.</p> <p>However, just to note the problematic nature of my own thinking, I had originally observed and supposed that the simplexes could be used to define a vector (leading from the center to a vertex) and affine geometry. The cross polytopes could define a line (going through the center in opposite directions to a pair of vertices) and projective geometry. The hypercubes could define angles and conformal geometry. And the final polytopes, which I imagined as "coordinate systems" (a simplex defined in "one quadrant"), could define oriented areas and symplectic geometry. This would have related $A_{n}$ - affine geometry, $B_{n}$ - projective geometry, $C_{n}$ - conformal geometry and $D_{n}$ - symplectic geometry. 
But in learning more I am realizing, in my thinking above, that it should be, if anything, $A_{n}$ - conformal geometry, $B_{n}$ - projective geometry, $C_{n}$ - symplectic geometry and $D_{n}$ - affine geometry. </p> <p>I appreciate any thoughts, but especially help finding my way in learning about Lie groups and Lie algebras. I also look forward to alternative answers, intuitively, why are there four classical Lie groups / algebras?</p> Answer 2: <p>You may find insight in Coxeter's work on the subject, which leans heavily on classical geometry. In <em>Regular Polytopes,</em> <span class="math-container">$\S$</span>11-5, Coxeter achieves the result in question by studying the fundamental region of reflection groups (I also recommend reviewing the historical context in <span class="math-container">$\S$</span>11-x). Intuition is subjective, but I find Coxeter's style as a classical geometer highly intuitive.</p> <p>In the reference above, Coxeter relies heavily on Cartan's paper <a href="https://link.springer.com/article/10.1007/BF02409989" rel="nofollow noreferrer">La géométrie des groupes simples</a>. This paper is extremely lucid, and has also been a significant source of intuition for me on this subject.</p> Answer 3: <p>I will share some progress that I am making in answering my own question. It comes from studying <a href="http://math.mit.edu/classes/18.745/classnotes.html" rel="nofollow noreferrer">Victor Kac's lecture notes</a>, and in particular, <a href="http://math.mit.edu/classes/18.745/Notes/Lecture_15_Notes.pdf" rel="nofollow noreferrer">Lecture 15</a>, and thinking concretely about how the simple roots arise from pairs of the diagonal entries of the matrices of the Cartan subalgebra. Perhaps somebody will have further or better thoughts. 
Basically, I may have found the metaphysical or cognitive bottleneck (in the good sense) that I am looking for, but now I need to link that to meaningful constraints on Lie Algebras and Lie Groups.</p> <p>Here is the closest that I seem to get to the crux. Imagine a counting process $1 \rightarrow 2 \rightarrow 3 \rightarrow \dots n-1 \rightarrow n$ which we encode algebraically as $ x_{1}-x_{2}, x_{2}-x_{3}, \dots , x_{n-1}-x_{n} $. All of the classical Lie algebras share this set of simple roots. We can think of this as a signal propagating forward. The classical Lie algebras are distinguished by one additional simple root for which there are four allowed possibilities:</p> <p>$ \begin{array}{1|2]3|4} A_{n} &amp; n \rightarrow n+1 &amp; x_{n}-x_{n+1} &amp; = x_{n}-x_{n+1} \\ B_{n} &amp; n \rightarrow 0 &amp; x_{n}-0 &amp; = x_{n} \\ C_{n} &amp; n \rightarrow -n &amp; x_{n}- -x_{n} &amp; = 2x_{n} \\ D_{n} &amp; n \rightarrow -(n-1) &amp; x_{n}- -x_{n-1} &amp; = x_{n-1}+x_{n} \\ \end{array} $</p> <p>I interpret $A_{n}$ as the possibility that the sequence continues propagating forward. Otherwise the sequence is reflected backward. The sequence can take us first to an external mirror $0$ from which we would proceed to count backwards $-x_{n},-x_{n-1},\dots,-x_{1}$. Or the sequence can take us from $x_{n}$ directly to its reflection $-x_{n}$. Or it can turn out that $x_{n}$ is itself a mirror and so the counting proceeds $x_{n-1} \rightarrow x_{n} \rightarrow -x_{n-1}$ where we may think of $x_{n}$ and $-x_{n}$ as intrinsically indistinguishable conjugates.</p> <p><a href="https://i.stack.imgur.com/qtb5y.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/qtb5y.png" alt="enter image description here"></a></p> <p>The point is that there is a duality between counting forwards and counting backwards. 
(I imagine it as an outside view, watching the count forwards, like a tube growing, and then an inside view, where we go inside the sequence, counting backwards.) So long as we are counting forwards, the duality is potential. Once we start counting backwards, then the duality is manifest. And there are three ways that it can manifest itself: with an external mirror $0 \leftrightarrow -0$, without an explicit mirror, and with an internal mirror $x_{n} \leftrightarrow -x_{n}$.</p> <p>This is a rather straightforward analysis of the options for the diagonal entries of the Cartan subalgebra in Victor Kac's Lecture 15. The adjoint representation is relating pairs of these entries. The one subtle point is that in the case of the orthogonal Lie algebras $B_{n}$ and $D_{n}$ the matrices in the algebras have zeros on their anti-diagonal with the consequence that the adjoint representation never pairs $x_{i}$ with $-x_{i}$. But $B_{n}$ gets around this because it is odd-dimensional and so has a diagonal element $x_{n+1}=-x_{n+1}=0$ and thus can pair $x_{n}$ and $0$ yielding the simple root $x_{n}-0$. For this reason $B_{n}$ is similar to the symplectic algebra $C_{n}$ which can pair $x_{n}$ and $-x_{n}$ directly yielding the simple root $x_{n}- -x_{n} = 2x_{n}$.</p> <p>This may all be very meaningful in that cognitively, mathematics seems to be all about duality and its subtleties. I think of logic as a perfect duality, apparent in de Morgan's laws, between what is and what is not, which in set theory is the duality of an empty set and a total set. Mathematics then seems to consist of subtle deviations from this perfect duality, for example, the difference in topology between arbitrary unions and finite intersections. 
I did a metaphysical study (<a href="http://www.ms.lt/sodas/Book/Duality" rel="nofollow noreferrer">here are my notes</a>) of the <a href="https://en.wikipedia.org/wiki/List_of_dualities" rel="nofollow noreferrer">mathematical dualities listed in Wikipedia</a> and organized them in the diagram below. Basically, some of the dualities are explicit in mathematical notation and others are implicit in the mind, as with complex conjugation.</p> <p><a href="https://i.stack.imgur.com/VcVZD.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/VcVZD.png" alt="enter image description here"></a></p> <p>The cognitive bottleneck which I describe above seems to be near the very heart of the connection betwen Lie groups and Lie algebras. Now I am trying to understand the implications by working them out for both the Lie groups and for the Lie algebras. I keep in mind that it may turn out that the Lie machinery is an accident of history and not the most profound way of describing what is going on at the deepest level. For example, it may or may not turn out that cognitively the exceptional Lie groups are spurious. I will list some connections that I am trying to make and understand or conclude otherwise.</p> <p>I need to understand the link between the simple roots and the conjugate transposes (real, complex, quaternion) and the inner products (real, complex, quaternion) which the Lie groups preserve and are thus defined by. Apparently, the most ordinary Lie group $A_{n}$ is defined by the complex inner product (rather than, say, the real inner product). This means that the complex inner product based on two dimensions is the most ordinary. I imagine it may be expressing the duality of the options of counting forwards and backwards. The symplectic group is defined by the quaternion inner product and this would make sense if we notice that it is establishing two sequences, thus making for four dimensions. 
The orthogonal groups are defined by the real inner product and they can be thought of as carrying a single dimension because they are counting only one side of the sequence, outside (when there is an external mirror?) or inside (when there is an internal mirror?). Taken together the options define 2 + 4 + 1 + 1 = 8 dimensions. And this brings to mind the Bott periodicity clock or the Clifford algebra clock. I also have an 8-cycle in <a href="http://www.ms.lt/sodas/Book/20171011DisembodyingMind" rel="nofollow noreferrer">my study of cognitive frameworks</a>.</p> <p>I would like to try to relate the Lie groups with four geometries: affine, projective, conformal and symplectic. And perhaps more basically, with four ways of thinking about a triangle:</p> <ul> <li>The concatenation of three paths.</li> <li>The intersections of three lines (on which we travel backwards and forwards).</li> <li>Three angles that divide the triangle.</li> <li>The oriented area of the triangle that we sweep out as we go around it.</li> </ul> <p><a href="https://i.stack.imgur.com/hLzoY.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/hLzoY.png" alt="enter image description here"></a></p> <p>I want to imagine that these get expressed as affine, projective, conformal and symplectic geometries, respectively. In <a href="http://www.ms.lt/derlius/20171001AndriusKulikauskas-AGeometryOfMoods.pdf" rel="nofollow noreferrer">my study of emotion and moods</a>, I have found the different ways of thinking about a triangle to be useful in grounding six transformations which take us from a less specified geometry to a more specified one. I need to learn what gets preserved and how ($A_{n}$ - volume, $C_{n}$ - oriented surface area?, $B_{n}$ and $D_{n}$ - distance). I am thinking that complex derivatives (that is, holomorphic functions where the first derivative is nonzero) preserve angles and so $A_{n}$ expresses conformal geometry.
And I am imagining that of $B_{n}$ and $D_{n}$, perhaps the first has an explicit origin and is thus the basis for projective geometry, whereas the second does not have an explicit origin and is the basis for affine geometry. </p> <p>Finally, I am trying to think of the root systems as constructing infinite families of polytopes. Here it may be helpful to identify each eigenvalue with a vertex and each root with an edge of a polytope. We have the following kinds of building blocks:</p> <p><a href="https://i.stack.imgur.com/RiVUs.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/RiVUs.png" alt="Building blocks of root systems"></a></p> <p>$A_{n}$ can be identified with the simplexes. I think of the simplex as having a unique center - the -1 simplex - which keeps moving as the simplex grows. Similarly, the simplex has a unique volume. The binomial expansion then codes for all of the subsimplexes. </p> <p><a href="https://i.stack.imgur.com/oxDFU.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/oxDFU.png" alt="enter image description here"></a></p> <p>Now I think of the cross-polytopes as being generated by the center creating pairs of vertices, positive and negative, which are linked to all of the existing vertices but not to each other. This accords with the root system $C_{n}$. If we look at the triangle which counts the sub-polytopes, then we see that it has a unique center but the largest number counts the surface area, which is to say, there is no volume!</p> <p><a href="https://i.stack.imgur.com/InUnA.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/InUnA.png" alt="enter image description here"></a></p> <p>The duals of the cross-polytopes are the hypercubes, which can be thought of as generated by mirrors (hyperplanes). They have a volume but no center! I don't see how to relate $B_{n}$ with such a hypercube. 
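</p>

<p>As a quick check on the face counts just described, here is an illustrative sketch (the closed-form f-vectors below are standard facts, not taken from this answer): the simplex's counts are binomial coefficients, the cross-polytope's largest entry counts its facets (its "surface area", with no "volume" entry in the triangle), and each family satisfies Euler's relation on its boundary sphere.</p>

```python
from math import comb

def f_vector(kind, n):
    """Counts of k-dimensional faces, k = 0 .. n-1, of an n-dimensional
    polytope from each classical family (standard closed forms)."""
    if kind == "simplex":      # binomial expansion codes the subsimplexes
        return [comb(n + 1, k + 1) for k in range(n)]
    if kind == "cross":        # +-vertex pairs, never joined to each other
        return [2 ** (k + 1) * comb(n, k + 1) for k in range(n)]
    if kind == "cube":         # generated by n pairs of mirror hyperplanes
        return [2 ** (n - k) * comb(n, k) for k in range(n)]

for kind in ("simplex", "cross", "cube"):
    for n in range(1, 8):
        f = f_vector(kind, n)
        # Euler's relation for the boundary (n-1)-sphere
        assert sum((-1) ** k * c for k, c in enumerate(f)) == 1 + (-1) ** (n - 1)
```

<p>For the octahedron (the $n=3$ cross-polytope) this gives $6$ vertices, $12$ edges and $8$ facets: the largest count is the surface, with no volume entry, as observed above.</p> <p>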
Although perhaps the 0 is the total volume and it lets us construct our hypercube "top down" by dividing the whole volume rather than "bottom up" by assembling the vertices. </p> <p><a href="https://i.stack.imgur.com/qVZrG.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/qVZrG.png" alt="enter image description here"></a></p> <p>Finally, I expect there should be an infinite family of polytopes $D_{n}$ with no center and no volume. But I don't think it is the demihypercubes. I need to try to interpret the root system as a polytope.</p> <p>However, just to note the problematic nature of my own thinking, I had originally observed and supposed that the simplexes could be used to define a vector (leading from the center to a vertex) and affine geometry. The cross polytopes could define a line (going through the center in opposite directions to a pair of vertices) and projective geometry. The hypercubes could define angles and conformal geometry. And the final polytopes, which I imagined as "coordinate systems" (a simplex defined in "one quadrant"), could define oriented areas and symplectic geometry. This would have related $A_{n}$ - affine geometry, $B_{n}$ - projective geometry, $C_{n}$ - conformal geometry and $D_{n}$ - symplectic geometry. But in learning more I am realizing, in my thinking above, that it should be, if anything, $A_{n}$ - conformal geometry, $B_{n}$ - projective geometry, $C_{n}$ - symplectic geometry and $D_{n}$ - affine geometry. </p> <p>I appreciate any thoughts, but especially help finding my way in learning about Lie groups and Lie algebras. I also look forward to alternative answers, intuitively, why are there four classical Lie groups / algebras?</p> Answer 4: <p>You may find insight in Coxeter's work on the subject, which leans heavily on classical geometry.
In <em>Regular Polytopes,</em> <span class="math-container">$\S$</span>11-5, Coxeter achieves the result in question by studying the fundamental region of reflection groups (I also recommend reviewing the historical context in <span class="math-container">$\S$</span>11-x). Intuition is subjective, but I find Coxeter's style as a classical geometer highly intuitive.</p> <p>In the reference above, Coxeter relies heavily on Cartan's paper <a href="https://link.springer.com/article/10.1007/BF02409989" rel="nofollow noreferrer">La géométrie des groupes simples</a>. This paper is extremely lucid, and has also been a significant source of intuition for me on this subject.</p>
The orthogonal groups are defined by the real inner product and they can be thought of as carrying a single dimension because they are counting only one side of the sequence, outside (when there is an external mirror?) or inside (when there is an internal mirror?). Taken together the options define 2 + 4 + 1 + 1 = 8 dimensions. And this brings to mind the Bott periodicity clock or the Clifford algebra clock. I also have an 8-cycle in <a href="http://www.ms.lt/sodas/Book/20171011DisembodyingMind" rel="nofollow noreferrer">my study of cognitive frameworks</a>.</p> <p>I would like to try to relate the Lie groups with four geometries: affine, projective, conformal and symplectic. And perhaps more basically, with four ways of thinking about a triangle:</p> <ul> <li>The concatenation of three paths.</li> <li>The intersections of three lines (on which we travel backwards and forwards).</li> <li>Three angles that divide the triangle.</li> <li>The oriented area of the triangle that we sweep out as we go around it.</li> </ul> <p><a href="https://i.stack.imgur.com/hLzoY.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/hLzoY.png" alt="enter image description here"></a></p> <p>I want to imagine that these get expressed as affine, projective, conformal and symplectic geometries, respectively. In <a href="http://www.ms.lt/derlius/20171001AndriusKulikauskas-AGeometryOfMoods.pdf" rel="nofollow noreferrer">my study of emotion and moods</a>, I have found the different ways of thinking about a triangle to be useful in grounding six transformations which take us from a less specified geometry to a more specified one. I need to learn what gets preserved and how ($A_{n}$ - volume, $C_{n}$ - oriented surface area?, $B_{n}$ and $D_{n}$ - distance. I am thinking that complex derivatives (that is, holomorphic functions where the first derivative is nonzero) preserve angles and so $A_{n}$ expresses conformal geometry. 
And I am imagining that of $B_{n}$ and $D_{n}$, perhaps the first has an explicit origin and is thus the basis for projective geometry, whereas the second does not have an explicit origin and is the basis for affine geometry. </p> <p>Finally, I am trying to think of the root systems as constructing infinite families of polytopes. Here it may be helpful to identify each eigenvalue with a vertex and each root with an edge of a polytope. We have the following kinds of building blocks:</p> <p><a href="https://i.stack.imgur.com/RiVUs.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/RiVUs.png" alt="Building blocks of root systems"></a></p> <p>$A_{n}$ can be identified with the simplexes. I think of the simplex as having a unique center - the -1 simplex - which keeps moving as the simplex grows. Similarly, the simplex has a unique volume. The binomial expansion then codes for all of the subsimplexes. </p> <p><a href="https://i.stack.imgur.com/oxDFU.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/oxDFU.png" alt="enter image description here"></a></p> <p>Now I think of the cross-polytopes as being generated by the center creating pairs of vertices, positive and negative, which are linked to all of the existing vertices but not to each other. This accords with the root system $C_{n}$. If we look at the triangle which counts the sub-polytopes, then we see that it has a unique center but the largest number counts the surface area, which is to say, there is no volume!</p> <p><a href="https://i.stack.imgur.com/InUnA.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/InUnA.png" alt="enter image description here"></a></p> <p>The duals of the cross-polytopes are the hypercubes, which can be thought of as generated by mirrors (hyperplanes). They have a volume but no center! I don't see how to relate $B_{n}$ with such a hypercube. 
Although perhaps the 0 is the total volume and it lets us construct our hypercube "top down" by dividing the whole volume rather than "bottom up" by assembling the vertices. </p> <p><a href="https://i.stack.imgur.com/qVZrG.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/qVZrG.png" alt="enter image description here"></a></p> <p>Finally, I expect there should be an infinite family of polytopes $D_{n}$ with no center and no volume. But I don't think it is the demihypercubes. I need to try to interpret the root system as a polytope.</p> <p>However, just to note the problematic nature of my own thinking, I had originally observed and supposed that the simplexes could be used to define a vector (leading from the center to a vertex) and affine geometry. The cross polytopes could define a line (going through the center in opposite directions to a pair of vertices) and projective geometry. The hypercubes could define angles and conformal geometry. And the final polytopes, which I imagined as "coordinate systems" (a symplex defined in "one quadrant"), could define oriented areas and symplectic geometry. This would have related $A_{n}$ - affine geometry, $B_{n}$ - projective geometry, $C_{n}$ - conformal geometry and $D_{n}$ - symplectic geometry. But in learning more I am realizing, in my thinking above, that it should be, if anything, $A_{n}$ - conformal geometry, $B_{n}$ - projective geometry, $C_{n}$ - symplectic geometry and $D_{n}$ - affine geometry. </p> <p>I appreciate any thoughts, but especially help finding my way in learning about Lie groups and Lie algebras. I also look forward to alternative answers, intuitively, why are there four classical Lie groups / algebras?</p> Answer 10: <p>You may find insight in Coxeter's work on the subject, which leans heavily on classical geometry. 
In <em>Regular Polytopes,</em> <span class="math-container">$\S$</span>11-5, Coxeter achieves the result in question by studying the fundamental region of reflection groups (I also recommend reviewing the historical context in <span class="math-container">$\S$</span>11-x). Intuition is subjective, but I find Coxeter's style as a classical geometer highly intuitive.</p> <p>In the reference above, Coxeter relies heavily on Cartan's paper <a href="https://link.springer.com/article/10.1007/BF02409989" rel="nofollow noreferrer">La géométrie des groupes simples</a>. This paper is extremely lucid, and has also been a significant source of intuition for me on this subject.</p> Answer 11: <p>I will share some progress that I am making in answering my own question. It comes from studying <a href="http://math.mit.edu/classes/18.745/classnotes.html" rel="nofollow noreferrer">Victor Kac's lecture notes</a>, and in particular, <a href="http://math.mit.edu/classes/18.745/Notes/Lecture_15_Notes.pdf" rel="nofollow noreferrer">Lecture 15</a>, and thinking concretely about how the simple roots arise from pairs of the diagonal entries of the matrices of the Cartan subalgebra. Perhaps somebody will have further or better thoughts. Basically, I may have found the metaphysical or cognitive bottleneck (in the good sense) that I am looking for, but now I need to link that to meaningful constraints on Lie Algebras and Lie Groups.</p> <p>Here is the closest that I seem to get to the crux. Imagine a counting process $1 \rightarrow 2 \rightarrow 3 \rightarrow \dots n-1 \rightarrow n$ which we encode algebraically as $ x_{1}-x_{2}, x_{2}-x_{3}, \dots , x_{n-1}-x_{n} $. All of the classical Lie algebras share this set of simple roots. We can think of this as a signal propagating forward. 
The classical Lie algebras are distinguished by one additional simple root for which there are four allowed possibilities:</p> <p>$ \begin{array}{1|2]3|4} A_{n} &amp; n \rightarrow n+1 &amp; x_{n}-x_{n+1} &amp; = x_{n}-x_{n+1} \\ B_{n} &amp; n \rightarrow 0 &amp; x_{n}-0 &amp; = x_{n} \\ C_{n} &amp; n \rightarrow -n &amp; x_{n}- -x_{n} &amp; = 2x_{n} \\ D_{n} &amp; n \rightarrow -(n-1) &amp; x_{n}- -x_{n-1} &amp; = x_{n-1}+x_{n} \\ \end{array} $</p> <p>I interpret $A_{n}$ as the possibility that the sequence continues propagating forward. Otherwise the sequence is reflected backward. The sequence can take us first to an external mirror $0$ from which we would proceed to count backwards $-x_{n},-x_{n-1},\dots,-x_{1}$. Or the sequence can take us from $x_{n}$ directly to its reflection $-x_{n}$. Or it can turn out that $x_{n}$ is itself a mirror and so the counting proceeds $x_{n-1} \rightarrow x_{n} \rightarrow -x_{n-1}$ where we may think of $x_{n}$ and $-x_{n}$ as intrinsically indistinguishable conjugates.</p> <p><a href="https://i.stack.imgur.com/qtb5y.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/qtb5y.png" alt="enter image description here"></a></p> <p>The point is that there is a duality between counting forwards and counting backwards. (I imagine it as an outside view, watching the count forwards, like a tube growing, and then an inside view, where we go inside the sequence, counting backwards.) So long as we are counting forwards, the duality is potential. Once we start counting backwards, then the duality is manifest. And there are three ways that it can manifest itself: with an external mirror $0 \leftrightarrow -0$, without an explicit mirror, and with an internal mirror $x_{n} \leftrightarrow -x_{n}$.</p> <p>This is a rather straightforward analysis of the options for the diagonal entries of the Cartan subalgebra in Victor Kac's Lecture 15. The adjoint representation is relating pairs of these entries. 
The one subtle point is that in the case of the orthogonal Lie algebras $B_{n}$ and $D_{n}$ the matrices in the algebras have zeros on their anti-diagonal with the consequence that the adjoint representation never pairs $x_{i}$ with $-x_{i}$. But $B_{n}$ gets around this because it is odd-dimensional and so has a diagonal element $x_{n+1}=-x_{n+1}=0$ and thus can pair $x_{n}$ and $0$ yielding the simple root $x_{n}-0$. For this reason $B_{n}$ is similar to the symplectic algebra $C_{n}$ which can pair $x_{n}$ and $-x_{n}$ directly yielding the simple root $x_{n}- -x_{n} = 2x_{n}$.</p> <p>This may all be very meaningful in that cognitively, mathematics seems to be all about duality and its subtleties. I think of logic as a perfect duality, apparent in de Morgan's laws, between what is and what is not, which in set theory is the duality of an empty set and a total set. Mathematics then seems to consist of subtle deviations from this perfect duality, for example, the difference in topology between arbitrary unions and finite intersections. I did a metaphysical study (<a href="http://www.ms.lt/sodas/Book/Duality" rel="nofollow noreferrer">here are my notes</a>) of the <a href="https://en.wikipedia.org/wiki/List_of_dualities" rel="nofollow noreferrer">mathematical dualities listed in Wikipedia</a> and organized them in the diagram below. Basically, some of the dualities are explicit in mathematical notation and others are implicit in the mind, as with complex conjugation.</p> <p><a href="https://i.stack.imgur.com/VcVZD.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/VcVZD.png" alt="enter image description here"></a></p> <p>The cognitive bottleneck which I describe above seems to be near the very heart of the connection betwen Lie groups and Lie algebras. Now I am trying to understand the implications by working them out for both the Lie groups and for the Lie algebras. 
I keep in mind that it may turn out that the Lie machinery is an accident of history and not the most profound way of describing what is going on at the deepest level. For example, it may or may not turn out that cognitively the exceptional Lie groups are spurious. I will list some connections that I am trying to make and understand or conclude otherwise.</p> <p>I need to understand the link between the simple roots and the conjugate transposes (real, complex, quaternion) and the inner products (real, complex, quaternion) which the Lie groups preserve and are thus defined by. Apparently, the most ordinary Lie group $A_{n}$ is defined by the complex inner product (rather than, say, the real inner product). This means that the complex inner product based on two dimensions is the most ordinary. I imagine it may be expressing the duality of the options of counting forwards and backwards. The symplectic group is defined by the quaternion inner product and this would make sense if we notice that it is establishing two sequences, thus making for four dimensions. The orthogonal groups are defined by the real inner product and they can be thought of as carrying a single dimension because they are counting only one side of the sequence, outside (when there is an external mirror?) or inside (when there is an internal mirror?). Taken together the options define 2 + 4 + 1 + 1 = 8 dimensions. And this brings to mind the Bott periodicity clock or the Clifford algebra clock. I also have an 8-cycle in <a href="http://www.ms.lt/sodas/Book/20171011DisembodyingMind" rel="nofollow noreferrer">my study of cognitive frameworks</a>.</p> <p>I would like to try to relate the Lie groups with four geometries: affine, projective, conformal and symplectic. 
And perhaps more basically, with four ways of thinking about a triangle:</p> <ul> <li>The concatenation of three paths.</li> <li>The intersections of three lines (on which we travel backwards and forwards).</li> <li>Three angles that divide the triangle.</li> <li>The oriented area of the triangle that we sweep out as we go around it.</li> </ul> <p><a href="https://i.stack.imgur.com/hLzoY.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/hLzoY.png" alt="enter image description here"></a></p> <p>I want to imagine that these get expressed as affine, projective, conformal and symplectic geometries, respectively. In <a href="http://www.ms.lt/derlius/20171001AndriusKulikauskas-AGeometryOfMoods.pdf" rel="nofollow noreferrer">my study of emotion and moods</a>, I have found the different ways of thinking about a triangle to be useful in grounding six transformations which take us from a less specified geometry to a more specified one. I need to learn what gets preserved and how ($A_{n}$ - volume, $C_{n}$ - oriented surface area?, $B_{n}$ and $D_{n}$ - distance. I am thinking that complex derivatives (that is, holomorphic functions where the first derivative is nonzero) preserve angles and so $A_{n}$ expresses conformal geometry. And I am imagining that of $B_{n}$ and $D_{n}$, perhaps the first has an explicit origin and is thus the basis for projective geometry, whereas the second does not have an explicit origin and is the basis for affine geometry. </p> <p>Finally, I am trying to think of the root systems as constructing infinite families of polytopes. Here it may be helpful to identify each eigenvalue with a vertex and each root with an edge of a polytope. We have the following kinds of building blocks:</p> <p><a href="https://i.stack.imgur.com/RiVUs.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/RiVUs.png" alt="Building blocks of root systems"></a></p> <p>$A_{n}$ can be identified with the simplexes. 
I think of the simplex as having a unique center - the -1 simplex - which keeps moving as the simplex grows. Similarly, the simplex has a unique volume. The binomial expansion then codes for all of the subsimplexes. </p> <p><a href="https://i.stack.imgur.com/oxDFU.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/oxDFU.png" alt="enter image description here"></a></p> <p>Now I think of the cross-polytopes as being generated by the center creating pairs of vertices, positive and negative, which are linked to all of the existing vertices but not to each other. This accords with the root system $C_{n}$. If we look at the triangle which counts the sub-polytopes, then we see that it has a unique center but the largest number counts the surface area, which is to say, there is no volume!</p> <p><a href="https://i.stack.imgur.com/InUnA.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/InUnA.png" alt="enter image description here"></a></p> <p>The duals of the cross-polytopes are the hypercubes, which can be thought of as generated by mirrors (hyperplanes). They have a volume but no center! I don't see how to relate $B_{n}$ with such a hypercube. Although perhaps the 0 is the total volume and it lets us construct our hypercube "top down" by dividing the whole volume rather than "bottom up" by assembling the vertices. </p> <p><a href="https://i.stack.imgur.com/qVZrG.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/qVZrG.png" alt="enter image description here"></a></p> <p>Finally, I expect there should be an infinite family of polytopes $D_{n}$ with no center and no volume. But I don't think it is the demihypercubes. I need to try to interpret the root system as a polytope.</p> <p>However, just to note the problematic nature of my own thinking, I had originally observed and supposed that the simplexes could be used to define a vector (leading from the center to a vertex) and affine geometry. 
The cross polytopes could define a line (going through the center in opposite directions to a pair of vertices) and projective geometry. The hypercubes could define angles and conformal geometry. And the final polytopes, which I imagined as "coordinate systems" (a symplex defined in "one quadrant"), could define oriented areas and symplectic geometry. This would have related $A_{n}$ - affine geometry, $B_{n}$ - projective geometry, $C_{n}$ - conformal geometry and $D_{n}$ - symplectic geometry. But in learning more I am realizing, in my thinking above, that it should be, if anything, $A_{n}$ - conformal geometry, $B_{n}$ - projective geometry, $C_{n}$ - symplectic geometry and $D_{n}$ - affine geometry. </p> <p>I appreciate any thoughts, but especially help finding my way in learning about Lie groups and Lie algebras. I also look forward to alternative answers, intuitively, why are there four classical Lie groups / algebras?</p> Answer 12: <p>You may find insight in Coxeter's work on the subject, which leans heavily on classical geometry. In <em>Regular Polytopes,</em> <span class="math-container">$\S$</span>11-5, Coxeter achieves the result in question by studying the fundamental region of reflection groups (I also recommend reviewing the historical context in <span class="math-container">$\S$</span>11-x). Intuition is subjective, but I find Coxeter's style as a classical geometer highly intuitive.</p> <p>In the reference above, Coxeter relies heavily on Cartan's paper <a href="https://link.springer.com/article/10.1007/BF02409989" rel="nofollow noreferrer">La géométrie des groupes simples</a>. This paper is extremely lucid, and has also been a significant source of intuition for me on this subject.</p> Answer 13: <p>I will share some progress that I am making in answering my own question. 
It comes from studying <a href="http://math.mit.edu/classes/18.745/classnotes.html" rel="nofollow noreferrer">Victor Kac's lecture notes</a>, and in particular, <a href="http://math.mit.edu/classes/18.745/Notes/Lecture_15_Notes.pdf" rel="nofollow noreferrer">Lecture 15</a>, and thinking concretely about how the simple roots arise from pairs of the diagonal entries of the matrices of the Cartan subalgebra. Perhaps somebody will have further or better thoughts. Basically, I may have found the metaphysical or cognitive bottleneck (in the good sense) that I am looking for, but now I need to link that to meaningful constraints on Lie Algebras and Lie Groups.</p> <p>Here is the closest that I seem to get to the crux. Imagine a counting process $1 \rightarrow 2 \rightarrow 3 \rightarrow \dots n-1 \rightarrow n$ which we encode algebraically as $ x_{1}-x_{2}, x_{2}-x_{3}, \dots , x_{n-1}-x_{n} $. All of the classical Lie algebras share this set of simple roots. We can think of this as a signal propagating forward. The classical Lie algebras are distinguished by one additional simple root for which there are four allowed possibilities:</p> <p>$ \begin{array}{1|2]3|4} A_{n} &amp; n \rightarrow n+1 &amp; x_{n}-x_{n+1} &amp; = x_{n}-x_{n+1} \\ B_{n} &amp; n \rightarrow 0 &amp; x_{n}-0 &amp; = x_{n} \\ C_{n} &amp; n \rightarrow -n &amp; x_{n}- -x_{n} &amp; = 2x_{n} \\ D_{n} &amp; n \rightarrow -(n-1) &amp; x_{n}- -x_{n-1} &amp; = x_{n-1}+x_{n} \\ \end{array} $</p> <p>I interpret $A_{n}$ as the possibility that the sequence continues propagating forward. Otherwise the sequence is reflected backward. The sequence can take us first to an external mirror $0$ from which we would proceed to count backwards $-x_{n},-x_{n-1},\dots,-x_{1}$. Or the sequence can take us from $x_{n}$ directly to its reflection $-x_{n}$. 
Or it can turn out that $x_{n}$ is itself a mirror and so the counting proceeds $x_{n-1} \rightarrow x_{n} \rightarrow -x_{n-1}$ where we may think of $x_{n}$ and $-x_{n}$ as intrinsically indistinguishable conjugates.</p> <p><a href="https://i.stack.imgur.com/qtb5y.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/qtb5y.png" alt="enter image description here"></a></p> <p>The point is that there is a duality between counting forwards and counting backwards. (I imagine it as an outside view, watching the count forwards, like a tube growing, and then an inside view, where we go inside the sequence, counting backwards.) So long as we are counting forwards, the duality is potential. Once we start counting backwards, then the duality is manifest. And there are three ways that it can manifest itself: with an external mirror $0 \leftrightarrow -0$, without an explicit mirror, and with an internal mirror $x_{n} \leftrightarrow -x_{n}$.</p> <p>This is a rather straightforward analysis of the options for the diagonal entries of the Cartan subalgebra in Victor Kac's Lecture 15. The adjoint representation is relating pairs of these entries. The one subtle point is that in the case of the orthogonal Lie algebras $B_{n}$ and $D_{n}$ the matrices in the algebras have zeros on their anti-diagonal with the consequence that the adjoint representation never pairs $x_{i}$ with $-x_{i}$. But $B_{n}$ gets around this because it is odd-dimensional and so has a diagonal element $x_{n+1}=-x_{n+1}=0$ and thus can pair $x_{n}$ and $0$ yielding the simple root $x_{n}-0$. For this reason $B_{n}$ is similar to the symplectic algebra $C_{n}$ which can pair $x_{n}$ and $-x_{n}$ directly yielding the simple root $x_{n}- -x_{n} = 2x_{n}$.</p> <p>This may all be very meaningful in that cognitively, mathematics seems to be all about duality and its subtleties. 
I think of logic as a perfect duality, apparent in de Morgan's laws, between what is and what is not, which in set theory is the duality of an empty set and a total set. Mathematics then seems to consist of subtle deviations from this perfect duality, for example, the difference in topology between arbitrary unions and finite intersections. I did a metaphysical study (<a href="http://www.ms.lt/sodas/Book/Duality" rel="nofollow noreferrer">here are my notes</a>) of the <a href="https://en.wikipedia.org/wiki/List_of_dualities" rel="nofollow noreferrer">mathematical dualities listed in Wikipedia</a> and organized them in the diagram below. Basically, some of the dualities are explicit in mathematical notation and others are implicit in the mind, as with complex conjugation.</p> <p><a href="https://i.stack.imgur.com/VcVZD.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/VcVZD.png" alt="enter image description here"></a></p> <p>The cognitive bottleneck which I describe above seems to be near the very heart of the connection betwen Lie groups and Lie algebras. Now I am trying to understand the implications by working them out for both the Lie groups and for the Lie algebras. I keep in mind that it may turn out that the Lie machinery is an accident of history and not the most profound way of describing what is going on at the deepest level. For example, it may or may not turn out that cognitively the exceptional Lie groups are spurious. I will list some connections that I am trying to make and understand or conclude otherwise.</p> <p>I need to understand the link between the simple roots and the conjugate transposes (real, complex, quaternion) and the inner products (real, complex, quaternion) which the Lie groups preserve and are thus defined by. Apparently, the most ordinary Lie group $A_{n}$ is defined by the complex inner product (rather than, say, the real inner product). 
This means that the complex inner product based on two dimensions is the most ordinary. I imagine it may be expressing the duality of the options of counting forwards and backwards. The symplectic group is defined by the quaternion inner product and this would make sense if we notice that it is establishing two sequences, thus making for four dimensions. The orthogonal groups are defined by the real inner product and they can be thought of as carrying a single dimension because they are counting only one side of the sequence, outside (when there is an external mirror?) or inside (when there is an internal mirror?). Taken together the options define 2 + 4 + 1 + 1 = 8 dimensions. And this brings to mind the Bott periodicity clock or the Clifford algebra clock. I also have an 8-cycle in <a href="http://www.ms.lt/sodas/Book/20171011DisembodyingMind" rel="nofollow noreferrer">my study of cognitive frameworks</a>.</p> <p>I would like to try to relate the Lie groups with four geometries: affine, projective, conformal and symplectic. And perhaps more basically, with four ways of thinking about a triangle:</p> <ul> <li>The concatenation of three paths.</li> <li>The intersections of three lines (on which we travel backwards and forwards).</li> <li>Three angles that divide the triangle.</li> <li>The oriented area of the triangle that we sweep out as we go around it.</li> </ul> <p><a href="https://i.stack.imgur.com/hLzoY.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/hLzoY.png" alt="enter image description here"></a></p> <p>I want to imagine that these get expressed as affine, projective, conformal and symplectic geometries, respectively. In <a href="http://www.ms.lt/derlius/20171001AndriusKulikauskas-AGeometryOfMoods.pdf" rel="nofollow noreferrer">my study of emotion and moods</a>, I have found the different ways of thinking about a triangle to be useful in grounding six transformations which take us from a less specified geometry to a more specified one. 
I need to learn what gets preserved and how ($A_{n}$ - volume, $C_{n}$ - oriented surface area?, $B_{n}$ and $D_{n}$ - distance. I am thinking that complex derivatives (that is, holomorphic functions where the first derivative is nonzero) preserve angles and so $A_{n}$ expresses conformal geometry. And I am imagining that of $B_{n}$ and $D_{n}$, perhaps the first has an explicit origin and is thus the basis for projective geometry, whereas the second does not have an explicit origin and is the basis for affine geometry. </p> <p>Finally, I am trying to think of the root systems as constructing infinite families of polytopes. Here it may be helpful to identify each eigenvalue with a vertex and each root with an edge of a polytope. We have the following kinds of building blocks:</p> <p><a href="https://i.stack.imgur.com/RiVUs.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/RiVUs.png" alt="Building blocks of root systems"></a></p> <p>$A_{n}$ can be identified with the simplexes. I think of the simplex as having a unique center - the -1 simplex - which keeps moving as the simplex grows. Similarly, the simplex has a unique volume. The binomial expansion then codes for all of the subsimplexes. </p> <p><a href="https://i.stack.imgur.com/oxDFU.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/oxDFU.png" alt="enter image description here"></a></p> <p>Now I think of the cross-polytopes as being generated by the center creating pairs of vertices, positive and negative, which are linked to all of the existing vertices but not to each other. This accords with the root system $C_{n}$. 
If we look at the triangle which counts the sub-polytopes, then we see that it has a unique center but the largest number counts the surface area, which is to say, there is no volume!</p> <p><a href="https://i.stack.imgur.com/InUnA.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/InUnA.png" alt="enter image description here"></a></p> <p>The duals of the cross-polytopes are the hypercubes, which can be thought of as generated by mirrors (hyperplanes). They have a volume but no center! I don't see how to relate $B_{n}$ with such a hypercube. Although perhaps the 0 is the total volume and it lets us construct our hypercube "top down" by dividing the whole volume rather than "bottom up" by assembling the vertices. </p> <p><a href="https://i.stack.imgur.com/qVZrG.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/qVZrG.png" alt="enter image description here"></a></p> <p>Finally, I expect there should be an infinite family of polytopes $D_{n}$ with no center and no volume. But I don't think it is the demihypercubes. I need to try to interpret the root system as a polytope.</p> <p>However, just to note the problematic nature of my own thinking, I had originally observed and supposed that the simplexes could be used to define a vector (leading from the center to a vertex) and affine geometry. The cross polytopes could define a line (going through the center in opposite directions to a pair of vertices) and projective geometry. The hypercubes could define angles and conformal geometry. And the final polytopes, which I imagined as "coordinate systems" (a symplex defined in "one quadrant"), could define oriented areas and symplectic geometry. This would have related $A_{n}$ - affine geometry, $B_{n}$ - projective geometry, $C_{n}$ - conformal geometry and $D_{n}$ - symplectic geometry. 
But in learning more I am realizing, in my thinking above, that it should be, if anything, $A_{n}$ - conformal geometry, $B_{n}$ - projective geometry, $C_{n}$ - symplectic geometry and $D_{n}$ - affine geometry. </p> <p>I appreciate any thoughts, but especially help finding my way in learning about Lie groups and Lie algebras. I also look forward to alternative answers, intuitively, why are there four classical Lie groups / algebras?</p> Answer 14: <p>You may find insight in Coxeter's work on the subject, which leans heavily on classical geometry. In <em>Regular Polytopes,</em> <span class="math-container">$\S$</span>11-5, Coxeter achieves the result in question by studying the fundamental region of reflection groups (I also recommend reviewing the historical context in <span class="math-container">$\S$</span>11-x). Intuition is subjective, but I find Coxeter's style as a classical geometer highly intuitive.</p> <p>In the reference above, Coxeter relies heavily on Cartan's paper <a href="https://link.springer.com/article/10.1007/BF02409989" rel="nofollow noreferrer">La géométrie des groupes simples</a>. This paper is extremely lucid, and has also been a significant source of intuition for me on this subject.</p>
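To make the question's roots-as-edges picture concrete, here is a small sketch (entirely my own illustration, using the standard coordinate descriptions of the classical root systems, not code from either answer) that generates the roots of $A_{n-1}$, $B_{n}$, $C_{n}$, $D_{n}$ as vectors and counts them:

```python
from itertools import combinations, permutations

def roots(kind, n):
    """Classical root systems in the standard e_i coordinates, as tuples."""
    def e(i, c=1):
        v = [0] * n
        v[i] = c
        return v
    rts = []
    if kind == "A":  # A_{n-1} inside R^n: e_i - e_j, i != j
        for i, j in permutations(range(n), 2):
            rts.append(tuple(a - b for a, b in zip(e(i), e(j))))
        return rts
    for i, j in combinations(range(n), 2):   # ±e_i ± e_j: shared by B, C, D
        for si in (1, -1):
            for sj in (1, -1):
                rts.append(tuple(a + b for a, b in zip(e(i, si), e(j, sj))))
    if kind == "B":                          # extra short roots ±e_i
        rts += [tuple(e(i, s)) for i in range(n) for s in (1, -1)]
    if kind == "C":                          # extra long roots ±2e_i
        rts += [tuple(e(i, 2 * s)) for i in range(n) for s in (1, -1)]
    return rts

# For n = 3: A gives 6 = n(n-1), B and C give 18 = 2n^2, D gives 12 = 2n(n-1)
for kind in "ABCD":
    print(kind, len(roots(kind, 3)))
```

Note how the four families differ only in what is appended to the common $\pm e_{i} \pm e_{j}$ core, mirroring the "one additional simple root" analysis in the self-answer; the $2n$ long roots $\pm 2e_{i}$ of $C_{n}$ point, up to scale, at the vertices of the cross-polytope.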
Basically, I may have found the metaphysical or cognitive bottleneck (in the good sense) that I am looking for, but now I need to link that to meaningful constraints on Lie Algebras and Lie Groups.</p> <p>Here is the closest that I seem to get to the crux. Imagine a counting process $1 \rightarrow 2 \rightarrow 3 \rightarrow \dots n-1 \rightarrow n$ which we encode algebraically as $ x_{1}-x_{2}, x_{2}-x_{3}, \dots , x_{n-1}-x_{n} $. All of the classical Lie algebras share this set of simple roots. We can think of this as a signal propagating forward. The classical Lie algebras are distinguished by one additional simple root for which there are four allowed possibilities:</p> <p>$ \begin{array}{1|2]3|4} A_{n} &amp; n \rightarrow n+1 &amp; x_{n}-x_{n+1} &amp; = x_{n}-x_{n+1} \\ B_{n} &amp; n \rightarrow 0 &amp; x_{n}-0 &amp; = x_{n} \\ C_{n} &amp; n \rightarrow -n &amp; x_{n}- -x_{n} &amp; = 2x_{n} \\ D_{n} &amp; n \rightarrow -(n-1) &amp; x_{n}- -x_{n-1} &amp; = x_{n-1}+x_{n} \\ \end{array} $</p> <p>I interpret $A_{n}$ as the possibility that the sequence continues propagating forward. Otherwise the sequence is reflected backward. The sequence can take us first to an external mirror $0$ from which we would proceed to count backwards $-x_{n},-x_{n-1},\dots,-x_{1}$. Or the sequence can take us from $x_{n}$ directly to its reflection $-x_{n}$. Or it can turn out that $x_{n}$ is itself a mirror and so the counting proceeds $x_{n-1} \rightarrow x_{n} \rightarrow -x_{n-1}$ where we may think of $x_{n}$ and $-x_{n}$ as intrinsically indistinguishable conjugates.</p> <p><a href="https://i.stack.imgur.com/qtb5y.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/qtb5y.png" alt="enter image description here"></a></p> <p>The point is that there is a duality between counting forwards and counting backwards. 
(I imagine it as an outside view, watching the count forwards, like a tube growing, and then an inside view, where we go inside the sequence, counting backwards.) So long as we are counting forwards, the duality is potential. Once we start counting backwards, then the duality is manifest. And there are three ways that it can manifest itself: with an external mirror $0 \leftrightarrow -0$, without an explicit mirror, and with an internal mirror $x_{n} \leftrightarrow -x_{n}$.</p> <p>This is a rather straightforward analysis of the options for the diagonal entries of the Cartan subalgebra in Victor Kac's Lecture 15. The adjoint representation is relating pairs of these entries. The one subtle point is that in the case of the orthogonal Lie algebras $B_{n}$ and $D_{n}$ the matrices in the algebras have zeros on their anti-diagonal with the consequence that the adjoint representation never pairs $x_{i}$ with $-x_{i}$. But $B_{n}$ gets around this because it is odd-dimensional and so has a diagonal element $x_{n+1}=-x_{n+1}=0$ and thus can pair $x_{n}$ and $0$ yielding the simple root $x_{n}-0$. For this reason $B_{n}$ is similar to the symplectic algebra $C_{n}$ which can pair $x_{n}$ and $-x_{n}$ directly yielding the simple root $x_{n}- -x_{n} = 2x_{n}$.</p> <p>This may all be very meaningful in that cognitively, mathematics seems to be all about duality and its subtleties. I think of logic as a perfect duality, apparent in de Morgan's laws, between what is and what is not, which in set theory is the duality of an empty set and a total set. Mathematics then seems to consist of subtle deviations from this perfect duality, for example, the difference in topology between arbitrary unions and finite intersections. 
I did a metaphysical study (<a href="http://www.ms.lt/sodas/Book/Duality" rel="nofollow noreferrer">here are my notes</a>) of the <a href="https://en.wikipedia.org/wiki/List_of_dualities" rel="nofollow noreferrer">mathematical dualities listed in Wikipedia</a> and organized them in the diagram below. Basically, some of the dualities are explicit in mathematical notation and others are implicit in the mind, as with complex conjugation.</p> <p><a href="https://i.stack.imgur.com/VcVZD.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/VcVZD.png" alt="enter image description here"></a></p> <p>The cognitive bottleneck which I describe above seems to be near the very heart of the connection betwen Lie groups and Lie algebras. Now I am trying to understand the implications by working them out for both the Lie groups and for the Lie algebras. I keep in mind that it may turn out that the Lie machinery is an accident of history and not the most profound way of describing what is going on at the deepest level. For example, it may or may not turn out that cognitively the exceptional Lie groups are spurious. I will list some connections that I am trying to make and understand or conclude otherwise.</p> <p>I need to understand the link between the simple roots and the conjugate transposes (real, complex, quaternion) and the inner products (real, complex, quaternion) which the Lie groups preserve and are thus defined by. Apparently, the most ordinary Lie group $A_{n}$ is defined by the complex inner product (rather than, say, the real inner product). This means that the complex inner product based on two dimensions is the most ordinary. I imagine it may be expressing the duality of the options of counting forwards and backwards. The symplectic group is defined by the quaternion inner product and this would make sense if we notice that it is establishing two sequences, thus making for four dimensions. 
The orthogonal groups are defined by the real inner product and they can be thought of as carrying a single dimension because they are counting only one side of the sequence, outside (when there is an external mirror?) or inside (when there is an internal mirror?). Taken together the options define 2 + 4 + 1 + 1 = 8 dimensions. And this brings to mind the Bott periodicity clock or the Clifford algebra clock. I also have an 8-cycle in <a href="http://www.ms.lt/sodas/Book/20171011DisembodyingMind" rel="nofollow noreferrer">my study of cognitive frameworks</a>.</p> <p>I would like to try to relate the Lie groups with four geometries: affine, projective, conformal and symplectic. And perhaps more basically, with four ways of thinking about a triangle:</p> <ul> <li>The concatenation of three paths.</li> <li>The intersections of three lines (on which we travel backwards and forwards).</li> <li>Three angles that divide the triangle.</li> <li>The oriented area of the triangle that we sweep out as we go around it.</li> </ul> <p><a href="https://i.stack.imgur.com/hLzoY.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/hLzoY.png" alt="enter image description here"></a></p> <p>I want to imagine that these get expressed as affine, projective, conformal and symplectic geometries, respectively. In <a href="http://www.ms.lt/derlius/20171001AndriusKulikauskas-AGeometryOfMoods.pdf" rel="nofollow noreferrer">my study of emotion and moods</a>, I have found the different ways of thinking about a triangle to be useful in grounding six transformations which take us from a less specified geometry to a more specified one. I need to learn what gets preserved and how ($A_{n}$ - volume, $C_{n}$ - oriented surface area?, $B_{n}$ and $D_{n}$ - distance. I am thinking that complex derivatives (that is, holomorphic functions where the first derivative is nonzero) preserve angles and so $A_{n}$ expresses conformal geometry. 
And I am imagining that of $B_{n}$ and $D_{n}$, perhaps the first has an explicit origin and is thus the basis for projective geometry, whereas the second does not have an explicit origin and is the basis for affine geometry. </p> <p>Finally, I am trying to think of the root systems as constructing infinite families of polytopes. Here it may be helpful to identify each eigenvalue with a vertex and each root with an edge of a polytope. We have the following kinds of building blocks:</p> <p><a href="https://i.stack.imgur.com/RiVUs.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/RiVUs.png" alt="Building blocks of root systems"></a></p> <p>$A_{n}$ can be identified with the simplexes. I think of the simplex as having a unique center - the -1 simplex - which keeps moving as the simplex grows. Similarly, the simplex has a unique volume. The binomial expansion then codes for all of the subsimplexes. </p> <p><a href="https://i.stack.imgur.com/oxDFU.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/oxDFU.png" alt="enter image description here"></a></p> <p>Now I think of the cross-polytopes as being generated by the center creating pairs of vertices, positive and negative, which are linked to all of the existing vertices but not to each other. This accords with the root system $C_{n}$. If we look at the triangle which counts the sub-polytopes, then we see that it has a unique center but the largest number counts the surface area, which is to say, there is no volume!</p> <p><a href="https://i.stack.imgur.com/InUnA.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/InUnA.png" alt="enter image description here"></a></p> <p>The duals of the cross-polytopes are the hypercubes, which can be thought of as generated by mirrors (hyperplanes). They have a volume but no center! I don't see how to relate $B_{n}$ with such a hypercube. 
Although perhaps the 0 is the total volume and it lets us construct our hypercube "top down" by dividing the whole volume rather than "bottom up" by assembling the vertices. </p> <p><a href="https://i.stack.imgur.com/qVZrG.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/qVZrG.png" alt="enter image description here"></a></p> <p>Finally, I expect there should be an infinite family of polytopes $D_{n}$ with no center and no volume. But I don't think it is the demihypercubes. I need to try to interpret the root system as a polytope.</p> <p>However, just to note the problematic nature of my own thinking, I had originally observed and supposed that the simplexes could be used to define a vector (leading from the center to a vertex) and affine geometry. The cross polytopes could define a line (going through the center in opposite directions to a pair of vertices) and projective geometry. The hypercubes could define angles and conformal geometry. And the final polytopes, which I imagined as "coordinate systems" (a symplex defined in "one quadrant"), could define oriented areas and symplectic geometry. This would have related $A_{n}$ - affine geometry, $B_{n}$ - projective geometry, $C_{n}$ - conformal geometry and $D_{n}$ - symplectic geometry. But in learning more I am realizing, in my thinking above, that it should be, if anything, $A_{n}$ - conformal geometry, $B_{n}$ - projective geometry, $C_{n}$ - symplectic geometry and $D_{n}$ - affine geometry. </p> <p>I appreciate any thoughts, but especially help finding my way in learning about Lie groups and Lie algebras. I also look forward to alternative answers, intuitively, why are there four classical Lie groups / algebras?</p> Answer 16: <p>You may find insight in Coxeter's work on the subject, which leans heavily on classical geometry. 
In <em>Regular Polytopes,</em> <span class="math-container">$\S$</span>11-5, Coxeter achieves the result in question by studying the fundamental region of reflection groups (I also recommend reviewing the historical context in <span class="math-container">$\S$</span>11-x). Intuition is subjective, but I find Coxeter's style as a classical geometer highly intuitive.</p> <p>In the reference above, Coxeter relies heavily on Cartan's paper <a href="https://link.springer.com/article/10.1007/BF02409989" rel="nofollow noreferrer">La géométrie des groupes simples</a>. This paper is extremely lucid, and has also been a significant source of intuition for me on this subject.</p> Answer 17: <p>I will share some progress that I am making in answering my own question. It comes from studying <a href="http://math.mit.edu/classes/18.745/classnotes.html" rel="nofollow noreferrer">Victor Kac's lecture notes</a>, and in particular, <a href="http://math.mit.edu/classes/18.745/Notes/Lecture_15_Notes.pdf" rel="nofollow noreferrer">Lecture 15</a>, and thinking concretely about how the simple roots arise from pairs of the diagonal entries of the matrices of the Cartan subalgebra. Perhaps somebody will have further or better thoughts. Basically, I may have found the metaphysical or cognitive bottleneck (in the good sense) that I am looking for, but now I need to link that to meaningful constraints on Lie Algebras and Lie Groups.</p> <p>Here is the closest that I seem to get to the crux. Imagine a counting process $1 \rightarrow 2 \rightarrow 3 \rightarrow \dots n-1 \rightarrow n$ which we encode algebraically as $ x_{1}-x_{2}, x_{2}-x_{3}, \dots , x_{n-1}-x_{n} $. All of the classical Lie algebras share this set of simple roots. We can think of this as a signal propagating forward. 
The classical Lie algebras are distinguished by one additional simple root for which there are four allowed possibilities:</p> <p>$ \begin{array}{1|2]3|4} A_{n} &amp; n \rightarrow n+1 &amp; x_{n}-x_{n+1} &amp; = x_{n}-x_{n+1} \\ B_{n} &amp; n \rightarrow 0 &amp; x_{n}-0 &amp; = x_{n} \\ C_{n} &amp; n \rightarrow -n &amp; x_{n}- -x_{n} &amp; = 2x_{n} \\ D_{n} &amp; n \rightarrow -(n-1) &amp; x_{n}- -x_{n-1} &amp; = x_{n-1}+x_{n} \\ \end{array} $</p> <p>I interpret $A_{n}$ as the possibility that the sequence continues propagating forward. Otherwise the sequence is reflected backward. The sequence can take us first to an external mirror $0$ from which we would proceed to count backwards $-x_{n},-x_{n-1},\dots,-x_{1}$. Or the sequence can take us from $x_{n}$ directly to its reflection $-x_{n}$. Or it can turn out that $x_{n}$ is itself a mirror and so the counting proceeds $x_{n-1} \rightarrow x_{n} \rightarrow -x_{n-1}$ where we may think of $x_{n}$ and $-x_{n}$ as intrinsically indistinguishable conjugates.</p> <p><a href="https://i.stack.imgur.com/qtb5y.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/qtb5y.png" alt="enter image description here"></a></p> <p>The point is that there is a duality between counting forwards and counting backwards. (I imagine it as an outside view, watching the count forwards, like a tube growing, and then an inside view, where we go inside the sequence, counting backwards.) So long as we are counting forwards, the duality is potential. Once we start counting backwards, then the duality is manifest. And there are three ways that it can manifest itself: with an external mirror $0 \leftrightarrow -0$, without an explicit mirror, and with an internal mirror $x_{n} \leftrightarrow -x_{n}$.</p> <p>This is a rather straightforward analysis of the options for the diagonal entries of the Cartan subalgebra in Victor Kac's Lecture 15. The adjoint representation is relating pairs of these entries. 
The one subtle point is that in the case of the orthogonal Lie algebras $B_{n}$ and $D_{n}$ the matrices in the algebras have zeros on their anti-diagonal with the consequence that the adjoint representation never pairs $x_{i}$ with $-x_{i}$. But $B_{n}$ gets around this because it is odd-dimensional and so has a diagonal element $x_{n+1}=-x_{n+1}=0$ and thus can pair $x_{n}$ and $0$ yielding the simple root $x_{n}-0$. For this reason $B_{n}$ is similar to the symplectic algebra $C_{n}$ which can pair $x_{n}$ and $-x_{n}$ directly yielding the simple root $x_{n}- -x_{n} = 2x_{n}$.</p> <p>This may all be very meaningful in that cognitively, mathematics seems to be all about duality and its subtleties. I think of logic as a perfect duality, apparent in de Morgan's laws, between what is and what is not, which in set theory is the duality of an empty set and a total set. Mathematics then seems to consist of subtle deviations from this perfect duality, for example, the difference in topology between arbitrary unions and finite intersections. I did a metaphysical study (<a href="http://www.ms.lt/sodas/Book/Duality" rel="nofollow noreferrer">here are my notes</a>) of the <a href="https://en.wikipedia.org/wiki/List_of_dualities" rel="nofollow noreferrer">mathematical dualities listed in Wikipedia</a> and organized them in the diagram below. Basically, some of the dualities are explicit in mathematical notation and others are implicit in the mind, as with complex conjugation.</p> <p><a href="https://i.stack.imgur.com/VcVZD.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/VcVZD.png" alt="enter image description here"></a></p> <p>The cognitive bottleneck which I describe above seems to be near the very heart of the connection betwen Lie groups and Lie algebras. Now I am trying to understand the implications by working them out for both the Lie groups and for the Lie algebras. 
I keep in mind that it may turn out that the Lie machinery is an accident of history and not the most profound way of describing what is going on at the deepest level. For example, it may or may not turn out that cognitively the exceptional Lie groups are spurious. I will list some connections that I am trying to make and understand or conclude otherwise.</p> <p>I need to understand the link between the simple roots and the conjugate transposes (real, complex, quaternion) and the inner products (real, complex, quaternion) which the Lie groups preserve and are thus defined by. Apparently, the most ordinary Lie group $A_{n}$ is defined by the complex inner product (rather than, say, the real inner product). This means that the complex inner product based on two dimensions is the most ordinary. I imagine it may be expressing the duality of the options of counting forwards and backwards. The symplectic group is defined by the quaternion inner product and this would make sense if we notice that it is establishing two sequences, thus making for four dimensions. The orthogonal groups are defined by the real inner product and they can be thought of as carrying a single dimension because they are counting only one side of the sequence, outside (when there is an external mirror?) or inside (when there is an internal mirror?). Taken together the options define 2 + 4 + 1 + 1 = 8 dimensions. And this brings to mind the Bott periodicity clock or the Clifford algebra clock. I also have an 8-cycle in <a href="http://www.ms.lt/sodas/Book/20171011DisembodyingMind" rel="nofollow noreferrer">my study of cognitive frameworks</a>.</p> <p>I would like to try to relate the Lie groups with four geometries: affine, projective, conformal and symplectic. 
And perhaps more basically, with four ways of thinking about a triangle:</p> <ul> <li>The concatenation of three paths.</li> <li>The intersections of three lines (on which we travel backwards and forwards).</li> <li>Three angles that divide the triangle.</li> <li>The oriented area of the triangle that we sweep out as we go around it.</li> </ul> <p><a href="https://i.stack.imgur.com/hLzoY.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/hLzoY.png" alt="enter image description here"></a></p> <p>I want to imagine that these get expressed as affine, projective, conformal and symplectic geometries, respectively. In <a href="http://www.ms.lt/derlius/20171001AndriusKulikauskas-AGeometryOfMoods.pdf" rel="nofollow noreferrer">my study of emotion and moods</a>, I have found the different ways of thinking about a triangle to be useful in grounding six transformations which take us from a less specified geometry to a more specified one. I need to learn what gets preserved and how ($A_{n}$ - volume, $C_{n}$ - oriented surface area?, $B_{n}$ and $D_{n}$ - distance. I am thinking that complex derivatives (that is, holomorphic functions where the first derivative is nonzero) preserve angles and so $A_{n}$ expresses conformal geometry. And I am imagining that of $B_{n}$ and $D_{n}$, perhaps the first has an explicit origin and is thus the basis for projective geometry, whereas the second does not have an explicit origin and is the basis for affine geometry. </p> <p>Finally, I am trying to think of the root systems as constructing infinite families of polytopes. Here it may be helpful to identify each eigenvalue with a vertex and each root with an edge of a polytope. We have the following kinds of building blocks:</p> <p><a href="https://i.stack.imgur.com/RiVUs.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/RiVUs.png" alt="Building blocks of root systems"></a></p> <p>$A_{n}$ can be identified with the simplexes. 
I think of the simplex as having a unique center - the -1 simplex - which keeps moving as the simplex grows. Similarly, the simplex has a unique volume. The binomial expansion then codes for all of the subsimplexes. </p> <p><a href="https://i.stack.imgur.com/oxDFU.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/oxDFU.png" alt="enter image description here"></a></p> <p>Now I think of the cross-polytopes as being generated by the center creating pairs of vertices, positive and negative, which are linked to all of the existing vertices but not to each other. This accords with the root system $C_{n}$. If we look at the triangle which counts the sub-polytopes, then we see that it has a unique center but the largest number counts the surface area, which is to say, there is no volume!</p> <p><a href="https://i.stack.imgur.com/InUnA.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/InUnA.png" alt="enter image description here"></a></p> <p>The duals of the cross-polytopes are the hypercubes, which can be thought of as generated by mirrors (hyperplanes). They have a volume but no center! I don't see how to relate $B_{n}$ with such a hypercube. Although perhaps the 0 is the total volume and it lets us construct our hypercube "top down" by dividing the whole volume rather than "bottom up" by assembling the vertices. </p> <p><a href="https://i.stack.imgur.com/qVZrG.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/qVZrG.png" alt="enter image description here"></a></p> <p>Finally, I expect there should be an infinite family of polytopes $D_{n}$ with no center and no volume. But I don't think it is the demihypercubes. I need to try to interpret the root system as a polytope.</p> <p>However, just to note the problematic nature of my own thinking, I had originally observed and supposed that the simplexes could be used to define a vector (leading from the center to a vertex) and affine geometry. 
The cross polytopes could define a line (going through the center in opposite directions to a pair of vertices) and projective geometry. The hypercubes could define angles and conformal geometry. And the final polytopes, which I imagined as "coordinate systems" (a symplex defined in "one quadrant"), could define oriented areas and symplectic geometry. This would have related $A_{n}$ - affine geometry, $B_{n}$ - projective geometry, $C_{n}$ - conformal geometry and $D_{n}$ - symplectic geometry. But in learning more I am realizing, in my thinking above, that it should be, if anything, $A_{n}$ - conformal geometry, $B_{n}$ - projective geometry, $C_{n}$ - symplectic geometry and $D_{n}$ - affine geometry. </p> <p>I appreciate any thoughts, but especially help finding my way in learning about Lie groups and Lie algebras. I also look forward to alternative answers, intuitively, why are there four classical Lie groups / algebras?</p> Answer 18: <p>You may find insight in Coxeter's work on the subject, which leans heavily on classical geometry. In <em>Regular Polytopes,</em> <span class="math-container">$\S$</span>11-5, Coxeter achieves the result in question by studying the fundamental region of reflection groups (I also recommend reviewing the historical context in <span class="math-container">$\S$</span>11-x). Intuition is subjective, but I find Coxeter's style as a classical geometer highly intuitive.</p> <p>In the reference above, Coxeter relies heavily on Cartan's paper <a href="https://link.springer.com/article/10.1007/BF02409989" rel="nofollow noreferrer">La géométrie des groupes simples</a>. This paper is extremely lucid, and has also been a significant source of intuition for me on this subject.</p> Answer 19: <p>I will share some progress that I am making in answering my own question. 
It comes from studying <a href="http://math.mit.edu/classes/18.745/classnotes.html" rel="nofollow noreferrer">Victor Kac's lecture notes</a>, and in particular, <a href="http://math.mit.edu/classes/18.745/Notes/Lecture_15_Notes.pdf" rel="nofollow noreferrer">Lecture 15</a>, and thinking concretely about how the simple roots arise from pairs of the diagonal entries of the matrices of the Cartan subalgebra. Perhaps somebody will have further or better thoughts. Basically, I may have found the metaphysical or cognitive bottleneck (in the good sense) that I am looking for, but now I need to link that to meaningful constraints on Lie Algebras and Lie Groups.</p> <p>Here is the closest that I seem to get to the crux. Imagine a counting process $1 \rightarrow 2 \rightarrow 3 \rightarrow \dots n-1 \rightarrow n$ which we encode algebraically as $ x_{1}-x_{2}, x_{2}-x_{3}, \dots , x_{n-1}-x_{n} $. All of the classical Lie algebras share this set of simple roots. We can think of this as a signal propagating forward. The classical Lie algebras are distinguished by one additional simple root for which there are four allowed possibilities:</p> <p>$ \begin{array}{1|2]3|4} A_{n} &amp; n \rightarrow n+1 &amp; x_{n}-x_{n+1} &amp; = x_{n}-x_{n+1} \\ B_{n} &amp; n \rightarrow 0 &amp; x_{n}-0 &amp; = x_{n} \\ C_{n} &amp; n \rightarrow -n &amp; x_{n}- -x_{n} &amp; = 2x_{n} \\ D_{n} &amp; n \rightarrow -(n-1) &amp; x_{n}- -x_{n-1} &amp; = x_{n-1}+x_{n} \\ \end{array} $</p> <p>I interpret $A_{n}$ as the possibility that the sequence continues propagating forward. Otherwise the sequence is reflected backward. The sequence can take us first to an external mirror $0$ from which we would proceed to count backwards $-x_{n},-x_{n-1},\dots,-x_{1}$. Or the sequence can take us from $x_{n}$ directly to its reflection $-x_{n}$. 
Or it can turn out that $x_{n}$ is itself a mirror and so the counting proceeds $x_{n-1} \rightarrow x_{n} \rightarrow -x_{n-1}$ where we may think of $x_{n}$ and $-x_{n}$ as intrinsically indistinguishable conjugates.</p> <p><a href="https://i.stack.imgur.com/qtb5y.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/qtb5y.png" alt="enter image description here"></a></p> <p>The point is that there is a duality between counting forwards and counting backwards. (I imagine it as an outside view, watching the count forwards, like a tube growing, and then an inside view, where we go inside the sequence, counting backwards.) So long as we are counting forwards, the duality is potential. Once we start counting backwards, then the duality is manifest. And there are three ways that it can manifest itself: with an external mirror $0 \leftrightarrow -0$, without an explicit mirror, and with an internal mirror $x_{n} \leftrightarrow -x_{n}$.</p> <p>This is a rather straightforward analysis of the options for the diagonal entries of the Cartan subalgebra in Victor Kac's Lecture 15. The adjoint representation is relating pairs of these entries. The one subtle point is that in the case of the orthogonal Lie algebras $B_{n}$ and $D_{n}$ the matrices in the algebras have zeros on their anti-diagonal with the consequence that the adjoint representation never pairs $x_{i}$ with $-x_{i}$. But $B_{n}$ gets around this because it is odd-dimensional and so has a diagonal element $x_{n+1}=-x_{n+1}=0$ and thus can pair $x_{n}$ and $0$ yielding the simple root $x_{n}-0$. For this reason $B_{n}$ is similar to the symplectic algebra $C_{n}$ which can pair $x_{n}$ and $-x_{n}$ directly yielding the simple root $x_{n}- -x_{n} = 2x_{n}$.</p> <p>This may all be very meaningful in that cognitively, mathematics seems to be all about duality and its subtleties. 
I think of logic as a perfect duality, apparent in de Morgan's laws, between what is and what is not, which in set theory is the duality of an empty set and a total set. Mathematics then seems to consist of subtle deviations from this perfect duality, for example, the difference in topology between arbitrary unions and finite intersections. I did a metaphysical study (<a href="http://www.ms.lt/sodas/Book/Duality" rel="nofollow noreferrer">here are my notes</a>) of the <a href="https://en.wikipedia.org/wiki/List_of_dualities" rel="nofollow noreferrer">mathematical dualities listed in Wikipedia</a> and organized them in the diagram below. Basically, some of the dualities are explicit in mathematical notation and others are implicit in the mind, as with complex conjugation.</p> <p><a href="https://i.stack.imgur.com/VcVZD.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/VcVZD.png" alt="enter image description here"></a></p> <p>The cognitive bottleneck which I describe above seems to be near the very heart of the connection betwen Lie groups and Lie algebras. Now I am trying to understand the implications by working them out for both the Lie groups and for the Lie algebras. I keep in mind that it may turn out that the Lie machinery is an accident of history and not the most profound way of describing what is going on at the deepest level. For example, it may or may not turn out that cognitively the exceptional Lie groups are spurious. I will list some connections that I am trying to make and understand or conclude otherwise.</p> <p>I need to understand the link between the simple roots and the conjugate transposes (real, complex, quaternion) and the inner products (real, complex, quaternion) which the Lie groups preserve and are thus defined by. Apparently, the most ordinary Lie group $A_{n}$ is defined by the complex inner product (rather than, say, the real inner product). 
This means that the complex inner product based on two dimensions is the most ordinary. I imagine it may be expressing the duality of the options of counting forwards and backwards. The symplectic group is defined by the quaternion inner product and this would make sense if we notice that it is establishing two sequences, thus making for four dimensions. The orthogonal groups are defined by the real inner product and they can be thought of as carrying a single dimension because they are counting only one side of the sequence, outside (when there is an external mirror?) or inside (when there is an internal mirror?). Taken together the options define 2 + 4 + 1 + 1 = 8 dimensions. And this brings to mind the Bott periodicity clock or the Clifford algebra clock. I also have an 8-cycle in <a href="http://www.ms.lt/sodas/Book/20171011DisembodyingMind" rel="nofollow noreferrer">my study of cognitive frameworks</a>.</p> <p>I would like to try to relate the Lie groups with four geometries: affine, projective, conformal and symplectic. And perhaps more basically, with four ways of thinking about a triangle:</p> <ul> <li>The concatenation of three paths.</li> <li>The intersections of three lines (on which we travel backwards and forwards).</li> <li>Three angles that divide the triangle.</li> <li>The oriented area of the triangle that we sweep out as we go around it.</li> </ul> <p><a href="https://i.stack.imgur.com/hLzoY.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/hLzoY.png" alt="enter image description here"></a></p> <p>I want to imagine that these get expressed as affine, projective, conformal and symplectic geometries, respectively. In <a href="http://www.ms.lt/derlius/20171001AndriusKulikauskas-AGeometryOfMoods.pdf" rel="nofollow noreferrer">my study of emotion and moods</a>, I have found the different ways of thinking about a triangle to be useful in grounding six transformations which take us from a less specified geometry to a more specified one. 
I need to learn what gets preserved and how ($A_{n}$ - volume, $C_{n}$ - oriented surface area?, $B_{n}$ and $D_{n}$ - distance). I am thinking that complex derivatives (that is, holomorphic functions where the first derivative is nonzero) preserve angles and so $A_{n}$ expresses conformal geometry. And I am imagining that of $B_{n}$ and $D_{n}$, perhaps the first has an explicit origin and is thus the basis for projective geometry, whereas the second does not have an explicit origin and is the basis for affine geometry. </p> <p>Finally, I am trying to think of the root systems as constructing infinite families of polytopes. Here it may be helpful to identify each eigenvalue with a vertex and each root with an edge of a polytope. We have the following kinds of building blocks:</p> <p><a href="https://i.stack.imgur.com/RiVUs.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/RiVUs.png" alt="Building blocks of root systems"></a></p> <p>$A_{n}$ can be identified with the simplexes. I think of the simplex as having a unique center - the -1 simplex - which keeps moving as the simplex grows. Similarly, the simplex has a unique volume. The binomial expansion then codes for all of the subsimplexes. </p> <p><a href="https://i.stack.imgur.com/oxDFU.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/oxDFU.png" alt="enter image description here"></a></p> <p>Now I think of the cross-polytopes as being generated by the center creating pairs of vertices, positive and negative, which are linked to all of the existing vertices but not to each other. This accords with the root system $C_{n}$. 
If we look at the triangle which counts the sub-polytopes, then we see that it has a unique center but the largest number counts the surface area, which is to say, there is no volume!</p> <p><a href="https://i.stack.imgur.com/InUnA.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/InUnA.png" alt="enter image description here"></a></p> <p>The duals of the cross-polytopes are the hypercubes, which can be thought of as generated by mirrors (hyperplanes). They have a volume but no center! I don't see how to relate $B_{n}$ with such a hypercube. Although perhaps the 0 is the total volume and it lets us construct our hypercube "top down" by dividing the whole volume rather than "bottom up" by assembling the vertices. </p> <p><a href="https://i.stack.imgur.com/qVZrG.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/qVZrG.png" alt="enter image description here"></a></p> <p>Finally, I expect there should be an infinite family of polytopes $D_{n}$ with no center and no volume. But I don't think it is the demihypercubes. I need to try to interpret the root system as a polytope.</p> <p>However, just to note the problematic nature of my own thinking, I had originally observed and supposed that the simplexes could be used to define a vector (leading from the center to a vertex) and affine geometry. The cross polytopes could define a line (going through the center in opposite directions to a pair of vertices) and projective geometry. The hypercubes could define angles and conformal geometry. And the final polytopes, which I imagined as "coordinate systems" (a simplex defined in "one quadrant"), could define oriented areas and symplectic geometry. This would have related $A_{n}$ - affine geometry, $B_{n}$ - projective geometry, $C_{n}$ - conformal geometry and $D_{n}$ - symplectic geometry. 
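The face-count "triangles" described above can be checked numerically. A small sketch using the standard face-count formulas, with the empty face playing the role of the "center" and the top-dimensional body playing the role of the "volume":

```python
from math import comb

def simplex_faces(n):
    # k-faces of the n-simplex for k = -1 (empty face, the "center")
    # through k = n (the body, the "volume"): C(n+1, k+1)
    return [comb(n + 1, k + 1) for k in range(-1, n + 1)]

def cross_polytope_faces(n):
    # empty face ("center") plus k-faces for k = 0 .. n-1;
    # no body term, i.e. "no volume": 2^(k+1) * C(n, k+1)
    return [1] + [2 ** (k + 1) * comb(n, k + 1) for k in range(n)]

def hypercube_faces(n):
    # k-faces for k = 0 .. n, ending with the body ("volume");
    # no empty-face term, i.e. "no center": 2^(n-k) * C(n, k)
    return [2 ** (n - k) * comb(n, k) for k in range(n + 1)]

print(simplex_faces(3))         # [1, 4, 6, 4, 1]  tetrahedron
print(cross_polytope_faces(3))  # [1, 6, 12, 8]    octahedron
print(hypercube_faces(3))       # [8, 12, 6, 1]    cube
```

The cross-polytope row begins with the empty-face 1 but has no body term, matching "a unique center but no volume"; the hypercube row is the exact reverse, "a volume but no center". Both rows sum to $3^{n}$, just as the simplex row sums to $2^{n+1}$.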
But in learning more I am realizing, in my thinking above, that it should be, if anything, $A_{n}$ - conformal geometry, $B_{n}$ - projective geometry, $C_{n}$ - symplectic geometry and $D_{n}$ - affine geometry. </p> <p>I appreciate any thoughts, but especially help finding my way in learning about Lie groups and Lie algebras. I also look forward to alternative answers, intuitively, why are there four classical Lie groups / algebras?</p> Answer 20: <p>You may find insight in Coxeter's work on the subject, which leans heavily on classical geometry. In <em>Regular Polytopes,</em> <span class="math-container">$\S$</span>11-5, Coxeter achieves the result in question by studying the fundamental region of reflection groups (I also recommend reviewing the historical context in <span class="math-container">$\S$</span>11-x). Intuition is subjective, but I find Coxeter's style as a classical geometer highly intuitive.</p> <p>In the reference above, Coxeter relies heavily on Cartan's paper <a href="https://link.springer.com/article/10.1007/BF02409989" rel="nofollow noreferrer">La géométrie des groupes simples</a>. This paper is extremely lucid, and has also been a significant source of intuition for me on this subject.</p> Answer 21: <p>I will share some progress that I am making in answering my own question. It comes from studying <a href="http://math.mit.edu/classes/18.745/classnotes.html" rel="nofollow noreferrer">Victor Kac's lecture notes</a>, and in particular, <a href="http://math.mit.edu/classes/18.745/Notes/Lecture_15_Notes.pdf" rel="nofollow noreferrer">Lecture 15</a>, and thinking concretely about how the simple roots arise from pairs of the diagonal entries of the matrices of the Cartan subalgebra. Perhaps somebody will have further or better thoughts. 
Basically, I may have found the metaphysical or cognitive bottleneck (in the good sense) that I am looking for, but now I need to link that to meaningful constraints on Lie Algebras and Lie Groups.</p> <p>Here is the closest that I seem to get to the crux. Imagine a counting process $1 \rightarrow 2 \rightarrow 3 \rightarrow \dots n-1 \rightarrow n$ which we encode algebraically as $ x_{1}-x_{2}, x_{2}-x_{3}, \dots , x_{n-1}-x_{n} $. All of the classical Lie algebras share this set of simple roots. We can think of this as a signal propagating forward. The classical Lie algebras are distinguished by one additional simple root for which there are four allowed possibilities:</p> <p>$ \begin{array}{1|2]3|4} A_{n} &amp; n \rightarrow n+1 &amp; x_{n}-x_{n+1} &amp; = x_{n}-x_{n+1} \\ B_{n} &amp; n \rightarrow 0 &amp; x_{n}-0 &amp; = x_{n} \\ C_{n} &amp; n \rightarrow -n &amp; x_{n}- -x_{n} &amp; = 2x_{n} \\ D_{n} &amp; n \rightarrow -(n-1) &amp; x_{n}- -x_{n-1} &amp; = x_{n-1}+x_{n} \\ \end{array} $</p> <p>I interpret $A_{n}$ as the possibility that the sequence continues propagating forward. Otherwise the sequence is reflected backward. The sequence can take us first to an external mirror $0$ from which we would proceed to count backwards $-x_{n},-x_{n-1},\dots,-x_{1}$. Or the sequence can take us from $x_{n}$ directly to its reflection $-x_{n}$. Or it can turn out that $x_{n}$ is itself a mirror and so the counting proceeds $x_{n-1} \rightarrow x_{n} \rightarrow -x_{n-1}$ where we may think of $x_{n}$ and $-x_{n}$ as intrinsically indistinguishable conjugates.</p> <p><a href="https://i.stack.imgur.com/qtb5y.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/qtb5y.png" alt="enter image description here"></a></p> <p>The point is that there is a duality between counting forwards and counting backwards. 
(I imagine it as an outside view, watching the count forwards, like a tube growing, and then an inside view, where we go inside the sequence, counting backwards.) So long as we are counting forwards, the duality is potential. Once we start counting backwards, then the duality is manifest. And there are three ways that it can manifest itself: with an external mirror $0 \leftrightarrow -0$, without an explicit mirror, and with an internal mirror $x_{n} \leftrightarrow -x_{n}$.</p> <p>This is a rather straightforward analysis of the options for the diagonal entries of the Cartan subalgebra in Victor Kac's Lecture 15. The adjoint representation is relating pairs of these entries. The one subtle point is that in the case of the orthogonal Lie algebras $B_{n}$ and $D_{n}$ the matrices in the algebras have zeros on their anti-diagonal with the consequence that the adjoint representation never pairs $x_{i}$ with $-x_{i}$. But $B_{n}$ gets around this because it is odd-dimensional and so has a diagonal element $x_{n+1}=-x_{n+1}=0$ and thus can pair $x_{n}$ and $0$ yielding the simple root $x_{n}-0$. For this reason $B_{n}$ is similar to the symplectic algebra $C_{n}$ which can pair $x_{n}$ and $-x_{n}$ directly yielding the simple root $x_{n}- -x_{n} = 2x_{n}$.</p> <p>This may all be very meaningful in that cognitively, mathematics seems to be all about duality and its subtleties. I think of logic as a perfect duality, apparent in de Morgan's laws, between what is and what is not, which in set theory is the duality of an empty set and a total set. Mathematics then seems to consist of subtle deviations from this perfect duality, for example, the difference in topology between arbitrary unions and finite intersections. 
I did a metaphysical study (<a href="http://www.ms.lt/sodas/Book/Duality" rel="nofollow noreferrer">here are my notes</a>) of the <a href="https://en.wikipedia.org/wiki/List_of_dualities" rel="nofollow noreferrer">mathematical dualities listed in Wikipedia</a> and organized them in the diagram below. Basically, some of the dualities are explicit in mathematical notation and others are implicit in the mind, as with complex conjugation.</p> <p><a href="https://i.stack.imgur.com/VcVZD.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/VcVZD.png" alt="enter image description here"></a></p> <p>The cognitive bottleneck which I describe above seems to be near the very heart of the connection betwen Lie groups and Lie algebras. Now I am trying to understand the implications by working them out for both the Lie groups and for the Lie algebras. I keep in mind that it may turn out that the Lie machinery is an accident of history and not the most profound way of describing what is going on at the deepest level. For example, it may or may not turn out that cognitively the exceptional Lie groups are spurious. I will list some connections that I am trying to make and understand or conclude otherwise.</p> <p>I need to understand the link between the simple roots and the conjugate transposes (real, complex, quaternion) and the inner products (real, complex, quaternion) which the Lie groups preserve and are thus defined by. Apparently, the most ordinary Lie group $A_{n}$ is defined by the complex inner product (rather than, say, the real inner product). This means that the complex inner product based on two dimensions is the most ordinary. I imagine it may be expressing the duality of the options of counting forwards and backwards. The symplectic group is defined by the quaternion inner product and this would make sense if we notice that it is establishing two sequences, thus making for four dimensions. 
The orthogonal groups are defined by the real inner product and they can be thought of as carrying a single dimension because they are counting only one side of the sequence, outside (when there is an external mirror?) or inside (when there is an internal mirror?). Taken together the options define 2 + 4 + 1 + 1 = 8 dimensions. And this brings to mind the Bott periodicity clock or the Clifford algebra clock. I also have an 8-cycle in <a href="http://www.ms.lt/sodas/Book/20171011DisembodyingMind" rel="nofollow noreferrer">my study of cognitive frameworks</a>.</p> <p>I would like to try to relate the Lie groups with four geometries: affine, projective, conformal and symplectic. And perhaps more basically, with four ways of thinking about a triangle:</p> <ul> <li>The concatenation of three paths.</li> <li>The intersections of three lines (on which we travel backwards and forwards).</li> <li>Three angles that divide the triangle.</li> <li>The oriented area of the triangle that we sweep out as we go around it.</li> </ul> <p><a href="https://i.stack.imgur.com/hLzoY.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/hLzoY.png" alt="enter image description here"></a></p> <p>I want to imagine that these get expressed as affine, projective, conformal and symplectic geometries, respectively. In <a href="http://www.ms.lt/derlius/20171001AndriusKulikauskas-AGeometryOfMoods.pdf" rel="nofollow noreferrer">my study of emotion and moods</a>, I have found the different ways of thinking about a triangle to be useful in grounding six transformations which take us from a less specified geometry to a more specified one. I need to learn what gets preserved and how ($A_{n}$ - volume, $C_{n}$ - oriented surface area?, $B_{n}$ and $D_{n}$ - distance. I am thinking that complex derivatives (that is, holomorphic functions where the first derivative is nonzero) preserve angles and so $A_{n}$ expresses conformal geometry. 
And I am imagining that of $B_{n}$ and $D_{n}$, perhaps the first has an explicit origin and is thus the basis for projective geometry, whereas the second does not have an explicit origin and is the basis for affine geometry. </p> <p>Finally, I am trying to think of the root systems as constructing infinite families of polytopes. Here it may be helpful to identify each eigenvalue with a vertex and each root with an edge of a polytope. We have the following kinds of building blocks:</p> <p><a href="https://i.stack.imgur.com/RiVUs.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/RiVUs.png" alt="Building blocks of root systems"></a></p> <p>$A_{n}$ can be identified with the simplexes. I think of the simplex as having a unique center - the -1 simplex - which keeps moving as the simplex grows. Similarly, the simplex has a unique volume. The binomial expansion then codes for all of the subsimplexes. </p> <p><a href="https://i.stack.imgur.com/oxDFU.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/oxDFU.png" alt="enter image description here"></a></p> <p>Now I think of the cross-polytopes as being generated by the center creating pairs of vertices, positive and negative, which are linked to all of the existing vertices but not to each other. This accords with the root system $C_{n}$. If we look at the triangle which counts the sub-polytopes, then we see that it has a unique center but the largest number counts the surface area, which is to say, there is no volume!</p> <p><a href="https://i.stack.imgur.com/InUnA.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/InUnA.png" alt="enter image description here"></a></p> <p>The duals of the cross-polytopes are the hypercubes, which can be thought of as generated by mirrors (hyperplanes). They have a volume but no center! I don't see how to relate $B_{n}$ with such a hypercube. 
Although perhaps the 0 is the total volume and it lets us construct our hypercube "top down" by dividing the whole volume rather than "bottom up" by assembling the vertices. </p> <p><a href="https://i.stack.imgur.com/qVZrG.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/qVZrG.png" alt="enter image description here"></a></p> <p>Finally, I expect there should be an infinite family of polytopes $D_{n}$ with no center and no volume. But I don't think it is the demihypercubes. I need to try to interpret the root system as a polytope.</p> <p>However, just to note the problematic nature of my own thinking, I had originally observed and supposed that the simplexes could be used to define a vector (leading from the center to a vertex) and affine geometry. The cross polytopes could define a line (going through the center in opposite directions to a pair of vertices) and projective geometry. The hypercubes could define angles and conformal geometry. And the final polytopes, which I imagined as "coordinate systems" (a symplex defined in "one quadrant"), could define oriented areas and symplectic geometry. This would have related $A_{n}$ - affine geometry, $B_{n}$ - projective geometry, $C_{n}$ - conformal geometry and $D_{n}$ - symplectic geometry. But in learning more I am realizing, in my thinking above, that it should be, if anything, $A_{n}$ - conformal geometry, $B_{n}$ - projective geometry, $C_{n}$ - symplectic geometry and $D_{n}$ - affine geometry. </p> <p>I appreciate any thoughts, but especially help finding my way in learning about Lie groups and Lie algebras. I also look forward to alternative answers, intuitively, why are there four classical Lie groups / algebras?</p> Answer 22: <p>You may find insight in Coxeter's work on the subject, which leans heavily on classical geometry. 
In <em>Regular Polytopes,</em> <span class="math-container">$\S$</span>11-5, Coxeter achieves the result in question by studying the fundamental region of reflection groups (I also recommend reviewing the historical context in <span class="math-container">$\S$</span>11-x). Intuition is subjective, but I find Coxeter's style as a classical geometer highly intuitive.</p> <p>In the reference above, Coxeter relies heavily on Cartan's paper <a href="https://link.springer.com/article/10.1007/BF02409989" rel="nofollow noreferrer">La géométrie des groupes simples</a>. This paper is extremely lucid, and has also been a significant source of intuition for me on this subject.</p> Answer 23: <p>I will share some progress that I am making in answering my own question. It comes from studying <a href="http://math.mit.edu/classes/18.745/classnotes.html" rel="nofollow noreferrer">Victor Kac's lecture notes</a>, and in particular, <a href="http://math.mit.edu/classes/18.745/Notes/Lecture_15_Notes.pdf" rel="nofollow noreferrer">Lecture 15</a>, and thinking concretely about how the simple roots arise from pairs of the diagonal entries of the matrices of the Cartan subalgebra. Perhaps somebody will have further or better thoughts. Basically, I may have found the metaphysical or cognitive bottleneck (in the good sense) that I am looking for, but now I need to link that to meaningful constraints on Lie Algebras and Lie Groups.</p> <p>Here is the closest that I seem to get to the crux. Imagine a counting process $1 \rightarrow 2 \rightarrow 3 \rightarrow \dots n-1 \rightarrow n$ which we encode algebraically as $ x_{1}-x_{2}, x_{2}-x_{3}, \dots , x_{n-1}-x_{n} $. All of the classical Lie algebras share this set of simple roots. We can think of this as a signal propagating forward. 
The classical Lie algebras are distinguished by one additional simple root for which there are four allowed possibilities:</p> <p>$ \begin{array}{1|2]3|4} A_{n} &amp; n \rightarrow n+1 &amp; x_{n}-x_{n+1} &amp; = x_{n}-x_{n+1} \\ B_{n} &amp; n \rightarrow 0 &amp; x_{n}-0 &amp; = x_{n} \\ C_{n} &amp; n \rightarrow -n &amp; x_{n}- -x_{n} &amp; = 2x_{n} \\ D_{n} &amp; n \rightarrow -(n-1) &amp; x_{n}- -x_{n-1} &amp; = x_{n-1}+x_{n} \\ \end{array} $</p> <p>I interpret $A_{n}$ as the possibility that the sequence continues propagating forward. Otherwise the sequence is reflected backward. The sequence can take us first to an external mirror $0$ from which we would proceed to count backwards $-x_{n},-x_{n-1},\dots,-x_{1}$. Or the sequence can take us from $x_{n}$ directly to its reflection $-x_{n}$. Or it can turn out that $x_{n}$ is itself a mirror and so the counting proceeds $x_{n-1} \rightarrow x_{n} \rightarrow -x_{n-1}$ where we may think of $x_{n}$ and $-x_{n}$ as intrinsically indistinguishable conjugates.</p> <p><a href="https://i.stack.imgur.com/qtb5y.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/qtb5y.png" alt="enter image description here"></a></p> <p>The point is that there is a duality between counting forwards and counting backwards. (I imagine it as an outside view, watching the count forwards, like a tube growing, and then an inside view, where we go inside the sequence, counting backwards.) So long as we are counting forwards, the duality is potential. Once we start counting backwards, then the duality is manifest. And there are three ways that it can manifest itself: with an external mirror $0 \leftrightarrow -0$, without an explicit mirror, and with an internal mirror $x_{n} \leftrightarrow -x_{n}$.</p> <p>This is a rather straightforward analysis of the options for the diagonal entries of the Cartan subalgebra in Victor Kac's Lecture 15. The adjoint representation is relating pairs of these entries. 
The one subtle point is that in the case of the orthogonal Lie algebras $B_{n}$ and $D_{n}$ the matrices in the algebras have zeros on their anti-diagonal with the consequence that the adjoint representation never pairs $x_{i}$ with $-x_{i}$. But $B_{n}$ gets around this because it is odd-dimensional and so has a diagonal element $x_{n+1}=-x_{n+1}=0$ and thus can pair $x_{n}$ and $0$ yielding the simple root $x_{n}-0$. For this reason $B_{n}$ is similar to the symplectic algebra $C_{n}$ which can pair $x_{n}$ and $-x_{n}$ directly yielding the simple root $x_{n}- -x_{n} = 2x_{n}$.</p> <p>This may all be very meaningful in that cognitively, mathematics seems to be all about duality and its subtleties. I think of logic as a perfect duality, apparent in de Morgan's laws, between what is and what is not, which in set theory is the duality of an empty set and a total set. Mathematics then seems to consist of subtle deviations from this perfect duality, for example, the difference in topology between arbitrary unions and finite intersections. I did a metaphysical study (<a href="http://www.ms.lt/sodas/Book/Duality" rel="nofollow noreferrer">here are my notes</a>) of the <a href="https://en.wikipedia.org/wiki/List_of_dualities" rel="nofollow noreferrer">mathematical dualities listed in Wikipedia</a> and organized them in the diagram below. Basically, some of the dualities are explicit in mathematical notation and others are implicit in the mind, as with complex conjugation.</p> <p><a href="https://i.stack.imgur.com/VcVZD.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/VcVZD.png" alt="enter image description here"></a></p> <p>The cognitive bottleneck which I describe above seems to be near the very heart of the connection betwen Lie groups and Lie algebras. Now I am trying to understand the implications by working them out for both the Lie groups and for the Lie algebras. 
I keep in mind that it may turn out that the Lie machinery is an accident of history and not the most profound way of describing what is going on at the deepest level. For example, it may or may not turn out that cognitively the exceptional Lie groups are spurious. I will list some connections that I am trying to make and understand or conclude otherwise.</p> <p>I need to understand the link between the simple roots and the conjugate transposes (real, complex, quaternion) and the inner products (real, complex, quaternion) which the Lie groups preserve and are thus defined by. Apparently, the most ordinary Lie group $A_{n}$ is defined by the complex inner product (rather than, say, the real inner product). This means that the complex inner product based on two dimensions is the most ordinary. I imagine it may be expressing the duality of the options of counting forwards and backwards. The symplectic group is defined by the quaternion inner product and this would make sense if we notice that it is establishing two sequences, thus making for four dimensions. The orthogonal groups are defined by the real inner product and they can be thought of as carrying a single dimension because they are counting only one side of the sequence, outside (when there is an external mirror?) or inside (when there is an internal mirror?). Taken together the options define 2 + 4 + 1 + 1 = 8 dimensions. And this brings to mind the Bott periodicity clock or the Clifford algebra clock. I also have an 8-cycle in <a href="http://www.ms.lt/sodas/Book/20171011DisembodyingMind" rel="nofollow noreferrer">my study of cognitive frameworks</a>.</p> <p>I would like to try to relate the Lie groups with four geometries: affine, projective, conformal and symplectic. 
And perhaps more basically, with four ways of thinking about a triangle:</p> <ul> <li>The concatenation of three paths.</li> <li>The intersections of three lines (on which we travel backwards and forwards).</li> <li>Three angles that divide the triangle.</li> <li>The oriented area of the triangle that we sweep out as we go around it.</li> </ul> <p><a href="https://i.stack.imgur.com/hLzoY.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/hLzoY.png" alt="enter image description here"></a></p> <p>I want to imagine that these get expressed as affine, projective, conformal and symplectic geometries, respectively. In <a href="http://www.ms.lt/derlius/20171001AndriusKulikauskas-AGeometryOfMoods.pdf" rel="nofollow noreferrer">my study of emotion and moods</a>, I have found the different ways of thinking about a triangle to be useful in grounding six transformations which take us from a less specified geometry to a more specified one. I need to learn what gets preserved and how ($A_{n}$ - volume, $C_{n}$ - oriented surface area?, $B_{n}$ and $D_{n}$ - distance. I am thinking that complex derivatives (that is, holomorphic functions where the first derivative is nonzero) preserve angles and so $A_{n}$ expresses conformal geometry. And I am imagining that of $B_{n}$ and $D_{n}$, perhaps the first has an explicit origin and is thus the basis for projective geometry, whereas the second does not have an explicit origin and is the basis for affine geometry. </p> <p>Finally, I am trying to think of the root systems as constructing infinite families of polytopes. Here it may be helpful to identify each eigenvalue with a vertex and each root with an edge of a polytope. We have the following kinds of building blocks:</p> <p><a href="https://i.stack.imgur.com/RiVUs.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/RiVUs.png" alt="Building blocks of root systems"></a></p> <p>$A_{n}$ can be identified with the simplexes. 
I think of the simplex as having a unique center - the -1 simplex - which keeps moving as the simplex grows. Similarly, the simplex has a unique volume. The binomial expansion then codes for all of the subsimplexes. </p> <p><a href="https://i.stack.imgur.com/oxDFU.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/oxDFU.png" alt="enter image description here"></a></p> <p>Now I think of the cross-polytopes as being generated by the center creating pairs of vertices, positive and negative, which are linked to all of the existing vertices but not to each other. This accords with the root system $C_{n}$. If we look at the triangle which counts the sub-polytopes, then we see that it has a unique center but the largest number counts the surface area, which is to say, there is no volume!</p> <p><a href="https://i.stack.imgur.com/InUnA.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/InUnA.png" alt="enter image description here"></a></p> <p>The duals of the cross-polytopes are the hypercubes, which can be thought of as generated by mirrors (hyperplanes). They have a volume but no center! I don't see how to relate $B_{n}$ with such a hypercube. Although perhaps the 0 is the total volume and it lets us construct our hypercube "top down" by dividing the whole volume rather than "bottom up" by assembling the vertices. </p> <p><a href="https://i.stack.imgur.com/qVZrG.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/qVZrG.png" alt="enter image description here"></a></p> <p>Finally, I expect there should be an infinite family of polytopes $D_{n}$ with no center and no volume. But I don't think it is the demihypercubes. I need to try to interpret the root system as a polytope.</p> <p>However, just to note the problematic nature of my own thinking, I had originally observed and supposed that the simplexes could be used to define a vector (leading from the center to a vertex) and affine geometry. 
The cross polytopes could define a line (going through the center in opposite directions to a pair of vertices) and projective geometry. The hypercubes could define angles and conformal geometry. And the final polytopes, which I imagined as "coordinate systems" (a symplex defined in "one quadrant"), could define oriented areas and symplectic geometry. This would have related $A_{n}$ - affine geometry, $B_{n}$ - projective geometry, $C_{n}$ - conformal geometry and $D_{n}$ - symplectic geometry. But in learning more I am realizing, in my thinking above, that it should be, if anything, $A_{n}$ - conformal geometry, $B_{n}$ - projective geometry, $C_{n}$ - symplectic geometry and $D_{n}$ - affine geometry. </p> <p>I appreciate any thoughts, but especially help finding my way in learning about Lie groups and Lie algebras. I also look forward to alternative answers, intuitively, why are there four classical Lie groups / algebras?</p> Answer 24: <p>You may find insight in Coxeter's work on the subject, which leans heavily on classical geometry. In <em>Regular Polytopes,</em> <span class="math-container">$\S$</span>11-5, Coxeter achieves the result in question by studying the fundamental region of reflection groups (I also recommend reviewing the historical context in <span class="math-container">$\S$</span>11-x). Intuition is subjective, but I find Coxeter's style as a classical geometer highly intuitive.</p> <p>In the reference above, Coxeter relies heavily on Cartan's paper <a href="https://link.springer.com/article/10.1007/BF02409989" rel="nofollow noreferrer">La géométrie des groupes simples</a>. This paper is extremely lucid, and has also been a significant source of intuition for me on this subject.</p> Answer 25: <p>I will share some progress that I am making in answering my own question. 
It comes from studying <a href="http://math.mit.edu/classes/18.745/classnotes.html" rel="nofollow noreferrer">Victor Kac's lecture notes</a>, and in particular, <a href="http://math.mit.edu/classes/18.745/Notes/Lecture_15_Notes.pdf" rel="nofollow noreferrer">Lecture 15</a>, and thinking concretely about how the simple roots arise from pairs of the diagonal entries of the matrices of the Cartan subalgebra. Perhaps somebody will have further or better thoughts. Basically, I may have found the metaphysical or cognitive bottleneck (in the good sense) that I am looking for, but now I need to link that to meaningful constraints on Lie Algebras and Lie Groups.</p> <p>Here is the closest that I seem to get to the crux. Imagine a counting process $1 \rightarrow 2 \rightarrow 3 \rightarrow \dots n-1 \rightarrow n$ which we encode algebraically as $ x_{1}-x_{2}, x_{2}-x_{3}, \dots , x_{n-1}-x_{n} $. All of the classical Lie algebras share this set of simple roots. We can think of this as a signal propagating forward. The classical Lie algebras are distinguished by one additional simple root for which there are four allowed possibilities:</p> <p>$ \begin{array}{1|2]3|4} A_{n} &amp; n \rightarrow n+1 &amp; x_{n}-x_{n+1} &amp; = x_{n}-x_{n+1} \\ B_{n} &amp; n \rightarrow 0 &amp; x_{n}-0 &amp; = x_{n} \\ C_{n} &amp; n \rightarrow -n &amp; x_{n}- -x_{n} &amp; = 2x_{n} \\ D_{n} &amp; n \rightarrow -(n-1) &amp; x_{n}- -x_{n-1} &amp; = x_{n-1}+x_{n} \\ \end{array} $</p> <p>I interpret $A_{n}$ as the possibility that the sequence continues propagating forward. Otherwise the sequence is reflected backward. The sequence can take us first to an external mirror $0$ from which we would proceed to count backwards $-x_{n},-x_{n-1},\dots,-x_{1}$. Or the sequence can take us from $x_{n}$ directly to its reflection $-x_{n}$. 
Or it can turn out that $x_{n}$ is itself a mirror and so the counting proceeds $x_{n-1} \rightarrow x_{n} \rightarrow -x_{n-1}$ where we may think of $x_{n}$ and $-x_{n}$ as intrinsically indistinguishable conjugates.</p> <p><a href="https://i.stack.imgur.com/qtb5y.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/qtb5y.png" alt="enter image description here"></a></p> <p>The point is that there is a duality between counting forwards and counting backwards. (I imagine it as an outside view, watching the count forwards, like a tube growing, and then an inside view, where we go inside the sequence, counting backwards.) So long as we are counting forwards, the duality is potential. Once we start counting backwards, then the duality is manifest. And there are three ways that it can manifest itself: with an external mirror $0 \leftrightarrow -0$, without an explicit mirror, and with an internal mirror $x_{n} \leftrightarrow -x_{n}$.</p> <p>This is a rather straightforward analysis of the options for the diagonal entries of the Cartan subalgebra in Victor Kac's Lecture 15. The adjoint representation is relating pairs of these entries. The one subtle point is that in the case of the orthogonal Lie algebras $B_{n}$ and $D_{n}$ the matrices in the algebras have zeros on their anti-diagonal with the consequence that the adjoint representation never pairs $x_{i}$ with $-x_{i}$. But $B_{n}$ gets around this because it is odd-dimensional and so has a diagonal element $x_{n+1}=-x_{n+1}=0$ and thus can pair $x_{n}$ and $0$ yielding the simple root $x_{n}-0$. For this reason $B_{n}$ is similar to the symplectic algebra $C_{n}$ which can pair $x_{n}$ and $-x_{n}$ directly yielding the simple root $x_{n}- -x_{n} = 2x_{n}$.</p> <p>This may all be very meaningful in that cognitively, mathematics seems to be all about duality and its subtleties. 
I think of logic as a perfect duality, apparent in de Morgan's laws, between what is and what is not, which in set theory is the duality of an empty set and a total set. Mathematics then seems to consist of subtle deviations from this perfect duality, for example, the difference in topology between arbitrary unions and finite intersections. I did a metaphysical study (<a href="http://www.ms.lt/sodas/Book/Duality" rel="nofollow noreferrer">here are my notes</a>) of the <a href="https://en.wikipedia.org/wiki/List_of_dualities" rel="nofollow noreferrer">mathematical dualities listed in Wikipedia</a> and organized them in the diagram below. Basically, some of the dualities are explicit in mathematical notation and others are implicit in the mind, as with complex conjugation.</p> <p><a href="https://i.stack.imgur.com/VcVZD.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/VcVZD.png" alt="enter image description here"></a></p> <p>The cognitive bottleneck which I describe above seems to be near the very heart of the connection between Lie groups and Lie algebras. Now I am trying to understand the implications by working them out for both the Lie groups and for the Lie algebras. I keep in mind that it may turn out that the Lie machinery is an accident of history and not the most profound way of describing what is going on at the deepest level. For example, it may or may not turn out that cognitively the exceptional Lie groups are spurious. I will list some connections that I am trying to make and understand or conclude otherwise.</p> <p>I need to understand the link between the simple roots and the conjugate transposes (real, complex, quaternion) and the inner products (real, complex, quaternion) which the Lie groups preserve and are thus defined by. Apparently, the most ordinary Lie group $A_{n}$ is defined by the complex inner product (rather than, say, the real inner product).
This means that the complex inner product based on two dimensions is the most ordinary. I imagine it may be expressing the duality of the options of counting forwards and backwards. The symplectic group is defined by the quaternion inner product and this would make sense if we notice that it is establishing two sequences, thus making for four dimensions. The orthogonal groups are defined by the real inner product and they can be thought of as carrying a single dimension because they are counting only one side of the sequence, outside (when there is an external mirror?) or inside (when there is an internal mirror?). Taken together the options define 2 + 4 + 1 + 1 = 8 dimensions. And this brings to mind the Bott periodicity clock or the Clifford algebra clock. I also have an 8-cycle in <a href="http://www.ms.lt/sodas/Book/20171011DisembodyingMind" rel="nofollow noreferrer">my study of cognitive frameworks</a>.</p> <p>I would like to try to relate the Lie groups with four geometries: affine, projective, conformal and symplectic. And perhaps more basically, with four ways of thinking about a triangle:</p> <ul> <li>The concatenation of three paths.</li> <li>The intersections of three lines (on which we travel backwards and forwards).</li> <li>Three angles that divide the triangle.</li> <li>The oriented area of the triangle that we sweep out as we go around it.</li> </ul> <p><a href="https://i.stack.imgur.com/hLzoY.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/hLzoY.png" alt="enter image description here"></a></p> <p>I want to imagine that these get expressed as affine, projective, conformal and symplectic geometries, respectively. In <a href="http://www.ms.lt/derlius/20171001AndriusKulikauskas-AGeometryOfMoods.pdf" rel="nofollow noreferrer">my study of emotion and moods</a>, I have found the different ways of thinking about a triangle to be useful in grounding six transformations which take us from a less specified geometry to a more specified one. 
I need to learn what gets preserved and how ($A_{n}$ - volume, $C_{n}$ - oriented surface area?, $B_{n}$ and $D_{n}$ - distance). I am thinking that complex derivatives (that is, holomorphic functions where the first derivative is nonzero) preserve angles and so $A_{n}$ expresses conformal geometry. And I am imagining that of $B_{n}$ and $D_{n}$, perhaps the first has an explicit origin and is thus the basis for projective geometry, whereas the second does not have an explicit origin and is the basis for affine geometry. </p> <p>Finally, I am trying to think of the root systems as constructing infinite families of polytopes. Here it may be helpful to identify each eigenvalue with a vertex and each root with an edge of a polytope. We have the following kinds of building blocks:</p> <p><a href="https://i.stack.imgur.com/RiVUs.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/RiVUs.png" alt="Building blocks of root systems"></a></p> <p>$A_{n}$ can be identified with the simplexes. I think of the simplex as having a unique center - the -1 simplex - which keeps moving as the simplex grows. Similarly, the simplex has a unique volume. The binomial expansion then codes for all of the subsimplexes. </p> <p><a href="https://i.stack.imgur.com/oxDFU.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/oxDFU.png" alt="enter image description here"></a></p> <p>Now I think of the cross-polytopes as being generated by the center creating pairs of vertices, positive and negative, which are linked to all of the existing vertices but not to each other. This accords with the root system $C_{n}$.
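<p>The sub-polytope counts for the simplexes and cross-polytopes just described can be checked with a small numerical sketch (the formulas are the standard face-count identities, not something from this post): the $k$-faces of the $n$-simplex are counted by binomial coefficients, while the cross-polytope's counting triangle carries an extra power of two.</p>

```python
from math import comb

def simplex_faces(n):
    """Number of k-faces of the n-simplex, k = 0..n (vertices up to
    the full body): C(n+1, k+1), a row of Pascal's triangle."""
    return [comb(n + 1, k + 1) for k in range(n + 1)]

def cross_polytope_faces(n):
    """Number of k-faces of the n-dimensional cross-polytope,
    k = 0..n-1: 2^(k+1) * C(n, k+1).  The list stops at the facets;
    the boundary counts have no single top cell."""
    return [2 ** (k + 1) * comb(n, k + 1) for k in range(n)]

print(simplex_faces(3))         # tetrahedron: 4 vertices, 6 edges, 4 faces, 1 body
print(cross_polytope_faces(3))  # octahedron: 6 vertices, 12 edges, 8 triangles
```

<p>For the octahedron this gives 6 vertices, 12 edges and 8 triangles: the largest entry counts the facets, the "surface area" in the language above.</p>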
If we look at the triangle which counts the sub-polytopes, then we see that it has a unique center but the largest number counts the surface area, which is to say, there is no volume!</p> <p><a href="https://i.stack.imgur.com/InUnA.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/InUnA.png" alt="enter image description here"></a></p> <p>The duals of the cross-polytopes are the hypercubes, which can be thought of as generated by mirrors (hyperplanes). They have a volume but no center! I don't see how to relate $B_{n}$ with such a hypercube. Although perhaps the 0 is the total volume and it lets us construct our hypercube "top down" by dividing the whole volume rather than "bottom up" by assembling the vertices. </p> <p><a href="https://i.stack.imgur.com/qVZrG.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/qVZrG.png" alt="enter image description here"></a></p> <p>Finally, I expect there should be an infinite family of polytopes $D_{n}$ with no center and no volume. But I don't think it is the demihypercubes. I need to try to interpret the root system as a polytope.</p> <p>However, just to note the problematic nature of my own thinking, I had originally observed and supposed that the simplexes could be used to define a vector (leading from the center to a vertex) and affine geometry. The cross polytopes could define a line (going through the center in opposite directions to a pair of vertices) and projective geometry. The hypercubes could define angles and conformal geometry. And the final polytopes, which I imagined as "coordinate systems" (a simplex defined in "one quadrant"), could define oriented areas and symplectic geometry. This would have related $A_{n}$ - affine geometry, $B_{n}$ - projective geometry, $C_{n}$ - conformal geometry and $D_{n}$ - symplectic geometry.
But in learning more I am realizing, in my thinking above, that it should be, if anything, $A_{n}$ - conformal geometry, $B_{n}$ - projective geometry, $C_{n}$ - symplectic geometry and $D_{n}$ - affine geometry. </p> <p>I appreciate any thoughts, but especially help finding my way in learning about Lie groups and Lie algebras. I also look forward to alternative answers, intuitively, why are there four classical Lie groups / algebras?</p> Answer 26: <p>You may find insight in Coxeter's work on the subject, which leans heavily on classical geometry. In <em>Regular Polytopes,</em> <span class="math-container">$\S$</span>11-5, Coxeter achieves the result in question by studying the fundamental region of reflection groups (I also recommend reviewing the historical context in <span class="math-container">$\S$</span>11-x). Intuition is subjective, but I find Coxeter's style as a classical geometer highly intuitive.</p> <p>In the reference above, Coxeter relies heavily on Cartan's paper <a href="https://link.springer.com/article/10.1007/BF02409989" rel="nofollow noreferrer">La géométrie des groupes simples</a>. This paper is extremely lucid, and has also been a significant source of intuition for me on this subject.</p> Answer 27: <p>I will share some progress that I am making in answering my own question. It comes from studying <a href="http://math.mit.edu/classes/18.745/classnotes.html" rel="nofollow noreferrer">Victor Kac's lecture notes</a>, and in particular, <a href="http://math.mit.edu/classes/18.745/Notes/Lecture_15_Notes.pdf" rel="nofollow noreferrer">Lecture 15</a>, and thinking concretely about how the simple roots arise from pairs of the diagonal entries of the matrices of the Cartan subalgebra. Perhaps somebody will have further or better thoughts. 
Basically, I may have found the metaphysical or cognitive bottleneck (in the good sense) that I am looking for, but now I need to link that to meaningful constraints on Lie Algebras and Lie Groups.</p> <p>Here is the closest that I seem to get to the crux. Imagine a counting process $1 \rightarrow 2 \rightarrow 3 \rightarrow \dots n-1 \rightarrow n$ which we encode algebraically as $ x_{1}-x_{2}, x_{2}-x_{3}, \dots , x_{n-1}-x_{n} $. All of the classical Lie algebras share this set of simple roots. We can think of this as a signal propagating forward. The classical Lie algebras are distinguished by one additional simple root for which there are four allowed possibilities:</p> <p>$ \begin{array}{1|2]3|4} A_{n} &amp; n \rightarrow n+1 &amp; x_{n}-x_{n+1} &amp; = x_{n}-x_{n+1} \\ B_{n} &amp; n \rightarrow 0 &amp; x_{n}-0 &amp; = x_{n} \\ C_{n} &amp; n \rightarrow -n &amp; x_{n}- -x_{n} &amp; = 2x_{n} \\ D_{n} &amp; n \rightarrow -(n-1) &amp; x_{n}- -x_{n-1} &amp; = x_{n-1}+x_{n} \\ \end{array} $</p> <p>I interpret $A_{n}$ as the possibility that the sequence continues propagating forward. Otherwise the sequence is reflected backward. The sequence can take us first to an external mirror $0$ from which we would proceed to count backwards $-x_{n},-x_{n-1},\dots,-x_{1}$. Or the sequence can take us from $x_{n}$ directly to its reflection $-x_{n}$. Or it can turn out that $x_{n}$ is itself a mirror and so the counting proceeds $x_{n-1} \rightarrow x_{n} \rightarrow -x_{n-1}$ where we may think of $x_{n}$ and $-x_{n}$ as intrinsically indistinguishable conjugates.</p> <p><a href="https://i.stack.imgur.com/qtb5y.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/qtb5y.png" alt="enter image description here"></a></p> <p>The point is that there is a duality between counting forwards and counting backwards. 
(I imagine it as an outside view, watching the count forwards, like a tube growing, and then an inside view, where we go inside the sequence, counting backwards.) So long as we are counting forwards, the duality is potential. Once we start counting backwards, then the duality is manifest. And there are three ways that it can manifest itself: with an external mirror $0 \leftrightarrow -0$, without an explicit mirror, and with an internal mirror $x_{n} \leftrightarrow -x_{n}$.</p> <p>This is a rather straightforward analysis of the options for the diagonal entries of the Cartan subalgebra in Victor Kac's Lecture 15. The adjoint representation is relating pairs of these entries. The one subtle point is that in the case of the orthogonal Lie algebras $B_{n}$ and $D_{n}$ the matrices in the algebras have zeros on their anti-diagonal with the consequence that the adjoint representation never pairs $x_{i}$ with $-x_{i}$. But $B_{n}$ gets around this because it is odd-dimensional and so has a diagonal element $x_{n+1}=-x_{n+1}=0$ and thus can pair $x_{n}$ and $0$ yielding the simple root $x_{n}-0$. For this reason $B_{n}$ is similar to the symplectic algebra $C_{n}$ which can pair $x_{n}$ and $-x_{n}$ directly yielding the simple root $x_{n}- -x_{n} = 2x_{n}$.</p> <p>This may all be very meaningful in that cognitively, mathematics seems to be all about duality and its subtleties. I think of logic as a perfect duality, apparent in de Morgan's laws, between what is and what is not, which in set theory is the duality of an empty set and a total set. Mathematics then seems to consist of subtle deviations from this perfect duality, for example, the difference in topology between arbitrary unions and finite intersections. 
I did a metaphysical study (<a href="http://www.ms.lt/sodas/Book/Duality" rel="nofollow noreferrer">here are my notes</a>) of the <a href="https://en.wikipedia.org/wiki/List_of_dualities" rel="nofollow noreferrer">mathematical dualities listed in Wikipedia</a> and organized them in the diagram below. Basically, some of the dualities are explicit in mathematical notation and others are implicit in the mind, as with complex conjugation.</p> <p><a href="https://i.stack.imgur.com/VcVZD.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/VcVZD.png" alt="enter image description here"></a></p> <p>The cognitive bottleneck which I describe above seems to be near the very heart of the connection betwen Lie groups and Lie algebras. Now I am trying to understand the implications by working them out for both the Lie groups and for the Lie algebras. I keep in mind that it may turn out that the Lie machinery is an accident of history and not the most profound way of describing what is going on at the deepest level. For example, it may or may not turn out that cognitively the exceptional Lie groups are spurious. I will list some connections that I am trying to make and understand or conclude otherwise.</p> <p>I need to understand the link between the simple roots and the conjugate transposes (real, complex, quaternion) and the inner products (real, complex, quaternion) which the Lie groups preserve and are thus defined by. Apparently, the most ordinary Lie group $A_{n}$ is defined by the complex inner product (rather than, say, the real inner product). This means that the complex inner product based on two dimensions is the most ordinary. I imagine it may be expressing the duality of the options of counting forwards and backwards. The symplectic group is defined by the quaternion inner product and this would make sense if we notice that it is establishing two sequences, thus making for four dimensions. 
The orthogonal groups are defined by the real inner product and they can be thought of as carrying a single dimension because they are counting only one side of the sequence, outside (when there is an external mirror?) or inside (when there is an internal mirror?). Taken together the options define 2 + 4 + 1 + 1 = 8 dimensions. And this brings to mind the Bott periodicity clock or the Clifford algebra clock. I also have an 8-cycle in <a href="http://www.ms.lt/sodas/Book/20171011DisembodyingMind" rel="nofollow noreferrer">my study of cognitive frameworks</a>.</p> <p>I would like to try to relate the Lie groups with four geometries: affine, projective, conformal and symplectic. And perhaps more basically, with four ways of thinking about a triangle:</p> <ul> <li>The concatenation of three paths.</li> <li>The intersections of three lines (on which we travel backwards and forwards).</li> <li>Three angles that divide the triangle.</li> <li>The oriented area of the triangle that we sweep out as we go around it.</li> </ul> <p><a href="https://i.stack.imgur.com/hLzoY.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/hLzoY.png" alt="enter image description here"></a></p> <p>I want to imagine that these get expressed as affine, projective, conformal and symplectic geometries, respectively. In <a href="http://www.ms.lt/derlius/20171001AndriusKulikauskas-AGeometryOfMoods.pdf" rel="nofollow noreferrer">my study of emotion and moods</a>, I have found the different ways of thinking about a triangle to be useful in grounding six transformations which take us from a less specified geometry to a more specified one. I need to learn what gets preserved and how ($A_{n}$ - volume, $C_{n}$ - oriented surface area?, $B_{n}$ and $D_{n}$ - distance. I am thinking that complex derivatives (that is, holomorphic functions where the first derivative is nonzero) preserve angles and so $A_{n}$ expresses conformal geometry. 
And I am imagining that of $B_{n}$ and $D_{n}$, perhaps the first has an explicit origin and is thus the basis for projective geometry, whereas the second does not have an explicit origin and is the basis for affine geometry. </p> <p>Finally, I am trying to think of the root systems as constructing infinite families of polytopes. Here it may be helpful to identify each eigenvalue with a vertex and each root with an edge of a polytope. We have the following kinds of building blocks:</p> <p><a href="https://i.stack.imgur.com/RiVUs.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/RiVUs.png" alt="Building blocks of root systems"></a></p> <p>$A_{n}$ can be identified with the simplexes. I think of the simplex as having a unique center - the -1 simplex - which keeps moving as the simplex grows. Similarly, the simplex has a unique volume. The binomial expansion then codes for all of the subsimplexes. </p> <p><a href="https://i.stack.imgur.com/oxDFU.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/oxDFU.png" alt="enter image description here"></a></p> <p>Now I think of the cross-polytopes as being generated by the center creating pairs of vertices, positive and negative, which are linked to all of the existing vertices but not to each other. This accords with the root system $C_{n}$. If we look at the triangle which counts the sub-polytopes, then we see that it has a unique center but the largest number counts the surface area, which is to say, there is no volume!</p> <p><a href="https://i.stack.imgur.com/InUnA.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/InUnA.png" alt="enter image description here"></a></p> <p>The duals of the cross-polytopes are the hypercubes, which can be thought of as generated by mirrors (hyperplanes). They have a volume but no center! I don't see how to relate $B_{n}$ with such a hypercube. 
Although perhaps the 0 is the total volume and it lets us construct our hypercube "top down" by dividing the whole volume rather than "bottom up" by assembling the vertices. </p> <p><a href="https://i.stack.imgur.com/qVZrG.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/qVZrG.png" alt="enter image description here"></a></p> <p>Finally, I expect there should be an infinite family of polytopes $D_{n}$ with no center and no volume. But I don't think it is the demihypercubes. I need to try to interpret the root system as a polytope.</p> <p>However, just to note the problematic nature of my own thinking, I had originally observed and supposed that the simplexes could be used to define a vector (leading from the center to a vertex) and affine geometry. The cross polytopes could define a line (going through the center in opposite directions to a pair of vertices) and projective geometry. The hypercubes could define angles and conformal geometry. And the final polytopes, which I imagined as "coordinate systems" (a symplex defined in "one quadrant"), could define oriented areas and symplectic geometry. This would have related $A_{n}$ - affine geometry, $B_{n}$ - projective geometry, $C_{n}$ - conformal geometry and $D_{n}$ - symplectic geometry. But in learning more I am realizing, in my thinking above, that it should be, if anything, $A_{n}$ - conformal geometry, $B_{n}$ - projective geometry, $C_{n}$ - symplectic geometry and $D_{n}$ - affine geometry. </p> <p>I appreciate any thoughts, but especially help finding my way in learning about Lie groups and Lie algebras. I also look forward to alternative answers, intuitively, why are there four classical Lie groups / algebras?</p> Answer 28: <p>You may find insight in Coxeter's work on the subject, which leans heavily on classical geometry. 
In <em>Regular Polytopes,</em> <span class="math-container">$\S$</span>11-5, Coxeter achieves the result in question by studying the fundamental region of reflection groups (I also recommend reviewing the historical context in <span class="math-container">$\S$</span>11-x). Intuition is subjective, but I find Coxeter's style as a classical geometer highly intuitive.</p> <p>In the reference above, Coxeter relies heavily on Cartan's paper <a href="https://link.springer.com/article/10.1007/BF02409989" rel="nofollow noreferrer">La géométrie des groupes simples</a>. This paper is extremely lucid, and has also been a significant source of intuition for me on this subject.</p> Answer 29: <p>I will share some progress that I am making in answering my own question. It comes from studying <a href="http://math.mit.edu/classes/18.745/classnotes.html" rel="nofollow noreferrer">Victor Kac's lecture notes</a>, and in particular, <a href="http://math.mit.edu/classes/18.745/Notes/Lecture_15_Notes.pdf" rel="nofollow noreferrer">Lecture 15</a>, and thinking concretely about how the simple roots arise from pairs of the diagonal entries of the matrices of the Cartan subalgebra. Perhaps somebody will have further or better thoughts. Basically, I may have found the metaphysical or cognitive bottleneck (in the good sense) that I am looking for, but now I need to link that to meaningful constraints on Lie Algebras and Lie Groups.</p> <p>Here is the closest that I seem to get to the crux. Imagine a counting process $1 \rightarrow 2 \rightarrow 3 \rightarrow \dots n-1 \rightarrow n$ which we encode algebraically as $ x_{1}-x_{2}, x_{2}-x_{3}, \dots , x_{n-1}-x_{n} $. All of the classical Lie algebras share this set of simple roots. We can think of this as a signal propagating forward. 
The classical Lie algebras are distinguished by one additional simple root for which there are four allowed possibilities:</p> <p>$ \begin{array}{1|2]3|4} A_{n} &amp; n \rightarrow n+1 &amp; x_{n}-x_{n+1} &amp; = x_{n}-x_{n+1} \\ B_{n} &amp; n \rightarrow 0 &amp; x_{n}-0 &amp; = x_{n} \\ C_{n} &amp; n \rightarrow -n &amp; x_{n}- -x_{n} &amp; = 2x_{n} \\ D_{n} &amp; n \rightarrow -(n-1) &amp; x_{n}- -x_{n-1} &amp; = x_{n-1}+x_{n} \\ \end{array} $</p> <p>I interpret $A_{n}$ as the possibility that the sequence continues propagating forward. Otherwise the sequence is reflected backward. The sequence can take us first to an external mirror $0$ from which we would proceed to count backwards $-x_{n},-x_{n-1},\dots,-x_{1}$. Or the sequence can take us from $x_{n}$ directly to its reflection $-x_{n}$. Or it can turn out that $x_{n}$ is itself a mirror and so the counting proceeds $x_{n-1} \rightarrow x_{n} \rightarrow -x_{n-1}$ where we may think of $x_{n}$ and $-x_{n}$ as intrinsically indistinguishable conjugates.</p> <p><a href="https://i.stack.imgur.com/qtb5y.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/qtb5y.png" alt="enter image description here"></a></p> <p>The point is that there is a duality between counting forwards and counting backwards. (I imagine it as an outside view, watching the count forwards, like a tube growing, and then an inside view, where we go inside the sequence, counting backwards.) So long as we are counting forwards, the duality is potential. Once we start counting backwards, then the duality is manifest. And there are three ways that it can manifest itself: with an external mirror $0 \leftrightarrow -0$, without an explicit mirror, and with an internal mirror $x_{n} \leftrightarrow -x_{n}$.</p> <p>This is a rather straightforward analysis of the options for the diagonal entries of the Cartan subalgebra in Victor Kac's Lecture 15. The adjoint representation is relating pairs of these entries. 
The one subtle point is that in the case of the orthogonal Lie algebras $B_{n}$ and $D_{n}$ the matrices in the algebras have zeros on their anti-diagonal with the consequence that the adjoint representation never pairs $x_{i}$ with $-x_{i}$. But $B_{n}$ gets around this because it is odd-dimensional and so has a diagonal element $x_{n+1}=-x_{n+1}=0$ and thus can pair $x_{n}$ and $0$ yielding the simple root $x_{n}-0$. For this reason $B_{n}$ is similar to the symplectic algebra $C_{n}$ which can pair $x_{n}$ and $-x_{n}$ directly yielding the simple root $x_{n}- -x_{n} = 2x_{n}$.</p> <p>This may all be very meaningful in that cognitively, mathematics seems to be all about duality and its subtleties. I think of logic as a perfect duality, apparent in de Morgan's laws, between what is and what is not, which in set theory is the duality of an empty set and a total set. Mathematics then seems to consist of subtle deviations from this perfect duality, for example, the difference in topology between arbitrary unions and finite intersections. I did a metaphysical study (<a href="http://www.ms.lt/sodas/Book/Duality" rel="nofollow noreferrer">here are my notes</a>) of the <a href="https://en.wikipedia.org/wiki/List_of_dualities" rel="nofollow noreferrer">mathematical dualities listed in Wikipedia</a> and organized them in the diagram below. Basically, some of the dualities are explicit in mathematical notation and others are implicit in the mind, as with complex conjugation.</p> <p><a href="https://i.stack.imgur.com/VcVZD.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/VcVZD.png" alt="enter image description here"></a></p> <p>The cognitive bottleneck which I describe above seems to be near the very heart of the connection betwen Lie groups and Lie algebras. Now I am trying to understand the implications by working them out for both the Lie groups and for the Lie algebras. 
I keep in mind that it may turn out that the Lie machinery is an accident of history and not the most profound way of describing what is going on at the deepest level. For example, it may or may not turn out that cognitively the exceptional Lie groups are spurious. I will list some connections that I am trying to make and understand or conclude otherwise.</p> <p>I need to understand the link between the simple roots and the conjugate transposes (real, complex, quaternion) and the inner products (real, complex, quaternion) which the Lie groups preserve and are thus defined by. Apparently, the most ordinary Lie group $A_{n}$ is defined by the complex inner product (rather than, say, the real inner product). This means that the complex inner product based on two dimensions is the most ordinary. I imagine it may be expressing the duality of the options of counting forwards and backwards. The symplectic group is defined by the quaternion inner product and this would make sense if we notice that it is establishing two sequences, thus making for four dimensions. The orthogonal groups are defined by the real inner product and they can be thought of as carrying a single dimension because they are counting only one side of the sequence, outside (when there is an external mirror?) or inside (when there is an internal mirror?). Taken together the options define 2 + 4 + 1 + 1 = 8 dimensions. And this brings to mind the Bott periodicity clock or the Clifford algebra clock. I also have an 8-cycle in <a href="http://www.ms.lt/sodas/Book/20171011DisembodyingMind" rel="nofollow noreferrer">my study of cognitive frameworks</a>.</p> <p>I would like to try to relate the Lie groups with four geometries: affine, projective, conformal and symplectic. 
And perhaps more basically, with four ways of thinking about a triangle:</p> <ul> <li>The concatenation of three paths.</li> <li>The intersections of three lines (on which we travel backwards and forwards).</li> <li>Three angles that divide the triangle.</li> <li>The oriented area of the triangle that we sweep out as we go around it.</li> </ul> <p><a href="https://i.stack.imgur.com/hLzoY.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/hLzoY.png" alt="enter image description here"></a></p> <p>I want to imagine that these get expressed as affine, projective, conformal and symplectic geometries, respectively. In <a href="http://www.ms.lt/derlius/20171001AndriusKulikauskas-AGeometryOfMoods.pdf" rel="nofollow noreferrer">my study of emotion and moods</a>, I have found the different ways of thinking about a triangle to be useful in grounding six transformations which take us from a less specified geometry to a more specified one. I need to learn what gets preserved and how ($A_{n}$ - volume, $C_{n}$ - oriented surface area?, $B_{n}$ and $D_{n}$ - distance. I am thinking that complex derivatives (that is, holomorphic functions where the first derivative is nonzero) preserve angles and so $A_{n}$ expresses conformal geometry. And I am imagining that of $B_{n}$ and $D_{n}$, perhaps the first has an explicit origin and is thus the basis for projective geometry, whereas the second does not have an explicit origin and is the basis for affine geometry. </p> <p>Finally, I am trying to think of the root systems as constructing infinite families of polytopes. Here it may be helpful to identify each eigenvalue with a vertex and each root with an edge of a polytope. We have the following kinds of building blocks:</p> <p><a href="https://i.stack.imgur.com/RiVUs.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/RiVUs.png" alt="Building blocks of root systems"></a></p> <p>$A_{n}$ can be identified with the simplexes. 
I think of the simplex as having a unique center - the -1 simplex - which keeps moving as the simplex grows. Similarly, the simplex has a unique volume. The binomial expansion then codes for all of the subsimplexes. </p> <p><a href="https://i.stack.imgur.com/oxDFU.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/oxDFU.png" alt="enter image description here"></a></p> <p>Now I think of the cross-polytopes as being generated by the center creating pairs of vertices, positive and negative, which are linked to all of the existing vertices but not to each other. This accords with the root system $C_{n}$. If we look at the triangle which counts the sub-polytopes, then we see that it has a unique center but the largest number counts the surface area, which is to say, there is no volume!</p> <p><a href="https://i.stack.imgur.com/InUnA.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/InUnA.png" alt="enter image description here"></a></p> <p>The duals of the cross-polytopes are the hypercubes, which can be thought of as generated by mirrors (hyperplanes). They have a volume but no center! I don't see how to relate $B_{n}$ with such a hypercube. Although perhaps the 0 is the total volume and it lets us construct our hypercube "top down" by dividing the whole volume rather than "bottom up" by assembling the vertices. </p> <p><a href="https://i.stack.imgur.com/qVZrG.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/qVZrG.png" alt="enter image description here"></a></p> <p>Finally, I expect there should be an infinite family of polytopes $D_{n}$ with no center and no volume. But I don't think it is the demihypercubes. I need to try to interpret the root system as a polytope.</p> <p>However, just to note the problematic nature of my own thinking, I had originally observed and supposed that the simplexes could be used to define a vector (leading from the center to a vertex) and affine geometry. 
The cross polytopes could define a line (going through the center in opposite directions to a pair of vertices) and projective geometry. The hypercubes could define angles and conformal geometry. And the final polytopes, which I imagined as "coordinate systems" (a symplex defined in "one quadrant"), could define oriented areas and symplectic geometry. This would have related $A_{n}$ - affine geometry, $B_{n}$ - projective geometry, $C_{n}$ - conformal geometry and $D_{n}$ - symplectic geometry. But in learning more I am realizing, in my thinking above, that it should be, if anything, $A_{n}$ - conformal geometry, $B_{n}$ - projective geometry, $C_{n}$ - symplectic geometry and $D_{n}$ - affine geometry. </p> <p>I appreciate any thoughts, but especially help finding my way in learning about Lie groups and Lie algebras. I also look forward to alternative answers, intuitively, why are there four classical Lie groups / algebras?</p> Answer 30: <p>You may find insight in Coxeter's work on the subject, which leans heavily on classical geometry. In <em>Regular Polytopes,</em> <span class="math-container">$\S$</span>11-5, Coxeter achieves the result in question by studying the fundamental region of reflection groups (I also recommend reviewing the historical context in <span class="math-container">$\S$</span>11-x). Intuition is subjective, but I find Coxeter's style as a classical geometer highly intuitive.</p> <p>In the reference above, Coxeter relies heavily on Cartan's paper <a href="https://link.springer.com/article/10.1007/BF02409989" rel="nofollow noreferrer">La géométrie des groupes simples</a>. This paper is extremely lucid, and has also been a significant source of intuition for me on this subject.</p> Answer 31: <p>I will share some progress that I am making in answering my own question. 
It comes from studying <a href="http://math.mit.edu/classes/18.745/classnotes.html" rel="nofollow noreferrer">Victor Kac's lecture notes</a>, and in particular, <a href="http://math.mit.edu/classes/18.745/Notes/Lecture_15_Notes.pdf" rel="nofollow noreferrer">Lecture 15</a>, and thinking concretely about how the simple roots arise from pairs of the diagonal entries of the matrices of the Cartan subalgebra. Perhaps somebody will have further or better thoughts. Basically, I may have found the metaphysical or cognitive bottleneck (in the good sense) that I am looking for, but now I need to link that to meaningful constraints on Lie algebras and Lie groups.</p> <p>Here is the closest that I seem to get to the crux. Imagine a counting process $1 \rightarrow 2 \rightarrow 3 \rightarrow \dots n-1 \rightarrow n$ which we encode algebraically as $ x_{1}-x_{2}, x_{2}-x_{3}, \dots , x_{n-1}-x_{n} $. All of the classical Lie algebras share this set of simple roots. We can think of this as a signal propagating forward. The classical Lie algebras are distinguished by one additional simple root for which there are four allowed possibilities:</p> <p>$ \begin{array}{l|l|l|l} A_{n} &amp; n \rightarrow n+1 &amp; x_{n}-x_{n+1} &amp; = x_{n}-x_{n+1} \\ B_{n} &amp; n \rightarrow 0 &amp; x_{n}-0 &amp; = x_{n} \\ C_{n} &amp; n \rightarrow -n &amp; x_{n}-(-x_{n}) &amp; = 2x_{n} \\ D_{n} &amp; n \rightarrow -(n-1) &amp; x_{n}-(-x_{n-1}) &amp; = x_{n-1}+x_{n} \\ \end{array} $</p> <p>I interpret $A_{n}$ as the possibility that the sequence continues propagating forward. Otherwise the sequence is reflected backward. The sequence can take us first to an external mirror $0$ from which we would proceed to count backwards $-x_{n},-x_{n-1},\dots,-x_{1}$. Or the sequence can take us from $x_{n}$ directly to its reflection $-x_{n}$.
Or it can turn out that $x_{n}$ is itself a mirror and so the counting proceeds $x_{n-1} \rightarrow x_{n} \rightarrow -x_{n-1}$ where we may think of $x_{n}$ and $-x_{n}$ as intrinsically indistinguishable conjugates.</p> <p><a href="https://i.stack.imgur.com/qtb5y.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/qtb5y.png" alt="enter image description here"></a></p> <p>The point is that there is a duality between counting forwards and counting backwards. (I imagine it as an outside view, watching the count forwards, like a tube growing, and then an inside view, where we go inside the sequence, counting backwards.) So long as we are counting forwards, the duality is potential. Once we start counting backwards, then the duality is manifest. And there are three ways that it can manifest itself: with an external mirror $0 \leftrightarrow -0$, without an explicit mirror, and with an internal mirror $x_{n} \leftrightarrow -x_{n}$.</p> <p>This is a rather straightforward analysis of the options for the diagonal entries of the Cartan subalgebra in Victor Kac's Lecture 15. The adjoint representation is relating pairs of these entries. The one subtle point is that in the case of the orthogonal Lie algebras $B_{n}$ and $D_{n}$ the matrices in the algebras have zeros on their anti-diagonal with the consequence that the adjoint representation never pairs $x_{i}$ with $-x_{i}$. But $B_{n}$ gets around this because it is odd-dimensional and so has a diagonal element $x_{n+1}=-x_{n+1}=0$ and thus can pair $x_{n}$ and $0$ yielding the simple root $x_{n}-0$. For this reason $B_{n}$ is similar to the symplectic algebra $C_{n}$ which can pair $x_{n}$ and $-x_{n}$ directly yielding the simple root $x_{n}-(-x_{n}) = 2x_{n}$.</p> <p>This may all be very meaningful in that cognitively, mathematics seems to be all about duality and its subtleties.
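As a concrete check on the simple-root table above (this is my own numerical sketch, not anything from the lecture notes): writing the roots as vectors in $\mathbb{R}^{n}$ and computing the Cartan matrix with the convention $a_{ij} = 2(\alpha_i,\alpha_j)/(\alpha_j,\alpha_j)$ recovers the standard matrices for $B_{n}$, $C_{n}$ and $D_{n}$, including the asymmetric $-2$ entry produced by the short root $x_{n}$ (for $B$) and the doubled root $2x_{n}$ (for $C$), and the forked last node of $D$.

```python
import numpy as np

def cartan_matrix(simple_roots):
    """a_ij = 2 (alpha_i, alpha_j) / (alpha_j, alpha_j) for the given roots."""
    r = np.array(simple_roots, dtype=float)
    g = r @ r.T                                  # Gram matrix of inner products
    return (2 * g / np.diag(g)).round().astype(int)

def e(i, n):
    """Standard basis vector e_i of R^n (0-indexed)."""
    v = np.zeros(n)
    v[i] = 1.0
    return v

def chain(n):
    """The shared chain of roots x_1 - x_2, ..., x_{n-1} - x_n."""
    return [e(i, n) - e(i + 1, n) for i in range(n - 1)]

B3 = cartan_matrix(chain(3) + [e(2, 3)])             # extra root x_n
C3 = cartan_matrix(chain(3) + [2 * e(2, 3)])         # extra root 2 x_n
D4 = cartan_matrix(chain(4) + [e(2, 4) + e(3, 4)])   # extra root x_{n-1} + x_n

print(B3)
print(C3)
print(D4)
```

The output is the familiar Cartan matrices: $B_{3}$ with $a_{23}=-2$, $C_{3}$ with $a_{32}=-2$, and $D_{4}$ with its last root attached to the second node rather than the third.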
I think of logic as a perfect duality, apparent in de Morgan's laws, between what is and what is not, which in set theory is the duality of an empty set and a total set. Mathematics then seems to consist of subtle deviations from this perfect duality, for example, the difference in topology between arbitrary unions and finite intersections. I did a metaphysical study (<a href="http://www.ms.lt/sodas/Book/Duality" rel="nofollow noreferrer">here are my notes</a>) of the <a href="https://en.wikipedia.org/wiki/List_of_dualities" rel="nofollow noreferrer">mathematical dualities listed in Wikipedia</a> and organized them in the diagram below. Basically, some of the dualities are explicit in mathematical notation and others are implicit in the mind, as with complex conjugation.</p> <p><a href="https://i.stack.imgur.com/VcVZD.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/VcVZD.png" alt="enter image description here"></a></p> <p>The cognitive bottleneck which I describe above seems to be near the very heart of the connection between Lie groups and Lie algebras. Now I am trying to understand the implications by working them out for both the Lie groups and for the Lie algebras. I keep in mind that it may turn out that the Lie machinery is an accident of history and not the most profound way of describing what is going on at the deepest level. For example, it may or may not turn out that cognitively the exceptional Lie groups are spurious. I will list some connections that I am trying to make and understand or conclude otherwise.</p> <p>I need to understand the link between the simple roots and the conjugate transposes (real, complex, quaternion) and the inner products (real, complex, quaternion) which the Lie groups preserve and are thus defined by. Apparently, the most ordinary Lie group $A_{n}$ is defined by the complex inner product (rather than, say, the real inner product).
This means that the complex inner product based on two dimensions is the most ordinary. I imagine it may be expressing the duality of the options of counting forwards and backwards. The symplectic group is defined by the quaternion inner product and this would make sense if we notice that it is establishing two sequences, thus making for four dimensions. The orthogonal groups are defined by the real inner product and they can be thought of as carrying a single dimension because they are counting only one side of the sequence, outside (when there is an external mirror?) or inside (when there is an internal mirror?). Taken together the options define 2 + 4 + 1 + 1 = 8 dimensions. And this brings to mind the Bott periodicity clock or the Clifford algebra clock. I also have an 8-cycle in <a href="http://www.ms.lt/sodas/Book/20171011DisembodyingMind" rel="nofollow noreferrer">my study of cognitive frameworks</a>.</p> <p>I would like to try to relate the Lie groups with four geometries: affine, projective, conformal and symplectic. And perhaps more basically, with four ways of thinking about a triangle:</p> <ul> <li>The concatenation of three paths.</li> <li>The intersections of three lines (on which we travel backwards and forwards).</li> <li>Three angles that divide the triangle.</li> <li>The oriented area of the triangle that we sweep out as we go around it.</li> </ul> <p><a href="https://i.stack.imgur.com/hLzoY.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/hLzoY.png" alt="enter image description here"></a></p> <p>I want to imagine that these get expressed as affine, projective, conformal and symplectic geometries, respectively. In <a href="http://www.ms.lt/derlius/20171001AndriusKulikauskas-AGeometryOfMoods.pdf" rel="nofollow noreferrer">my study of emotion and moods</a>, I have found the different ways of thinking about a triangle to be useful in grounding six transformations which take us from a less specified geometry to a more specified one. 
I need to learn what gets preserved and how ($A_{n}$ - volume, $C_{n}$ - oriented surface area?, $B_{n}$ and $D_{n}$ - distance). I am thinking that complex derivatives (that is, holomorphic functions where the first derivative is nonzero) preserve angles and so $A_{n}$ expresses conformal geometry. And I am imagining that of $B_{n}$ and $D_{n}$, perhaps the first has an explicit origin and is thus the basis for projective geometry, whereas the second does not have an explicit origin and is the basis for affine geometry. </p> <p>Finally, I am trying to think of the root systems as constructing infinite families of polytopes. Here it may be helpful to identify each eigenvalue with a vertex and each root with an edge of a polytope. We have the following kinds of building blocks:</p> <p><a href="https://i.stack.imgur.com/RiVUs.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/RiVUs.png" alt="Building blocks of root systems"></a></p> <p>$A_{n}$ can be identified with the simplexes. I think of the simplex as having a unique center - the -1 simplex - which keeps moving as the simplex grows. Similarly, the simplex has a unique volume. The binomial expansion then codes for all of the subsimplexes. </p> <p><a href="https://i.stack.imgur.com/oxDFU.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/oxDFU.png" alt="enter image description here"></a></p> <p>Now I think of the cross-polytopes as being generated by the center creating pairs of vertices, positive and negative, which are linked to all of the existing vertices but not to each other. This accords with the root system $C_{n}$.
If we look at the triangle which counts the sub-polytopes, then we see that it has a unique center but the largest number counts the surface area, which is to say, there is no volume!</p> <p><a href="https://i.stack.imgur.com/InUnA.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/InUnA.png" alt="enter image description here"></a></p> <p>The duals of the cross-polytopes are the hypercubes, which can be thought of as generated by mirrors (hyperplanes). They have a volume but no center! I don't see how to relate $B_{n}$ with such a hypercube, although perhaps the 0 is the total volume and it lets us construct our hypercube "top down" by dividing the whole volume rather than "bottom up" by assembling the vertices. </p> <p><a href="https://i.stack.imgur.com/qVZrG.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/qVZrG.png" alt="enter image description here"></a></p> <p>Finally, I expect there should be an infinite family of polytopes $D_{n}$ with no center and no volume. But I don't think they are the demihypercubes. I need to try to interpret the root system as a polytope.</p> <p>However, just to note the problematic nature of my own thinking, I had originally observed and supposed that the simplexes could be used to define a vector (leading from the center to a vertex) and affine geometry. The cross polytopes could define a line (going through the center in opposite directions to a pair of vertices) and projective geometry. The hypercubes could define angles and conformal geometry. And the final polytopes, which I imagined as "coordinate systems" (a simplex defined in "one quadrant"), could define oriented areas and symplectic geometry. This would have related $A_{n}$ - affine geometry, $B_{n}$ - projective geometry, $C_{n}$ - conformal geometry and $D_{n}$ - symplectic geometry.
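As an aside on the three counting triangles pictured above, the pattern can be stated with generating polynomials (this formalization is my own, for illustration): the face counts of the $n$-simplex are the coefficients of $(1+x)^{n+1}$, which has both a constant term (the empty face, the "center") and a top term (the whole simplex, the "volume"); the cross-polytope counts come from $(1+2x)^{n}$, which keeps the constant term but tops out at the facets - a center but no volume; and the hypercube counts come from $(2+x)^{n}$, which has the top cell but no constant term - a volume but no center.

```python
from math import comb

def simplex_faces(n):
    # (1 + x)^(n+1): coefficient of x^k counts the (k-1)-faces of the
    # n-simplex. k = 0 is the empty face ("center"), k = n+1 the whole
    # simplex ("volume") -- the full binomial row appears.
    return [comb(n + 1, k) for k in range(n + 2)]

def cross_polytope_faces(n):
    # (1 + 2x)^n: coefficient of x^k counts the (k-1)-faces of the
    # n-cross-polytope. The constant term is the empty face, but the
    # top term 2^n only counts the facets: a center but no volume.
    return [comb(n, k) * 2**k for k in range(n + 1)]

def hypercube_faces(n):
    # (2 + x)^n: coefficient of x^k counts the k-faces of the n-cube.
    # The top term is the solid cube itself, but there is no empty-face
    # term: a volume but no center.
    return [comb(n, k) * 2**(n - k) for k in range(n + 1)]

print(simplex_faces(3))         # tetrahedron: [1, 4, 6, 4, 1]
print(cross_polytope_faces(3))  # octahedron:  [1, 6, 12, 8]
print(hypercube_faces(3))       # cube:        [8, 12, 6, 1]
```

For the tetrahedron, octahedron and cube these reproduce the familiar vertex, edge and face counts, with the "center" and "volume" entries appearing exactly as described.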
But in learning more I am realizing, in my thinking above, that it should be, if anything, $A_{n}$ - conformal geometry, $B_{n}$ - projective geometry, $C_{n}$ - symplectic geometry and $D_{n}$ - affine geometry. </p> <p>I appreciate any thoughts, but especially help finding my way in learning about Lie groups and Lie algebras. I also look forward to alternative answers, intuitively, why are there four classical Lie groups / algebras?</p> Answer 32: <p>You may find insight in Coxeter's work on the subject, which leans heavily on classical geometry. In <em>Regular Polytopes,</em> <span class="math-container">$\S$</span>11-5, Coxeter achieves the result in question by studying the fundamental region of reflection groups (I also recommend reviewing the historical context in <span class="math-container">$\S$</span>11-x). Intuition is subjective, but I find Coxeter's style as a classical geometer highly intuitive.</p> <p>In the reference above, Coxeter relies heavily on Cartan's paper <a href="https://link.springer.com/article/10.1007/BF02409989" rel="nofollow noreferrer">La géométrie des groupes simples</a>. This paper is extremely lucid, and has also been a significant source of intuition for me on this subject.</p> Answer 33: <p>I will share some progress that I am making in answering my own question. It comes from studying <a href="http://math.mit.edu/classes/18.745/classnotes.html" rel="nofollow noreferrer">Victor Kac's lecture notes</a>, and in particular, <a href="http://math.mit.edu/classes/18.745/Notes/Lecture_15_Notes.pdf" rel="nofollow noreferrer">Lecture 15</a>, and thinking concretely about how the simple roots arise from pairs of the diagonal entries of the matrices of the Cartan subalgebra. Perhaps somebody will have further or better thoughts. 
Basically, I may have found the metaphysical or cognitive bottleneck (in the good sense) that I am looking for, but now I need to link that to meaningful constraints on Lie Algebras and Lie Groups.</p> <p>Here is the closest that I seem to get to the crux. Imagine a counting process $1 \rightarrow 2 \rightarrow 3 \rightarrow \dots n-1 \rightarrow n$ which we encode algebraically as $ x_{1}-x_{2}, x_{2}-x_{3}, \dots , x_{n-1}-x_{n} $. All of the classical Lie algebras share this set of simple roots. We can think of this as a signal propagating forward. The classical Lie algebras are distinguished by one additional simple root for which there are four allowed possibilities:</p> <p>$ \begin{array}{1|2]3|4} A_{n} &amp; n \rightarrow n+1 &amp; x_{n}-x_{n+1} &amp; = x_{n}-x_{n+1} \\ B_{n} &amp; n \rightarrow 0 &amp; x_{n}-0 &amp; = x_{n} \\ C_{n} &amp; n \rightarrow -n &amp; x_{n}- -x_{n} &amp; = 2x_{n} \\ D_{n} &amp; n \rightarrow -(n-1) &amp; x_{n}- -x_{n-1} &amp; = x_{n-1}+x_{n} \\ \end{array} $</p> <p>I interpret $A_{n}$ as the possibility that the sequence continues propagating forward. Otherwise the sequence is reflected backward. The sequence can take us first to an external mirror $0$ from which we would proceed to count backwards $-x_{n},-x_{n-1},\dots,-x_{1}$. Or the sequence can take us from $x_{n}$ directly to its reflection $-x_{n}$. Or it can turn out that $x_{n}$ is itself a mirror and so the counting proceeds $x_{n-1} \rightarrow x_{n} \rightarrow -x_{n-1}$ where we may think of $x_{n}$ and $-x_{n}$ as intrinsically indistinguishable conjugates.</p> <p><a href="https://i.stack.imgur.com/qtb5y.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/qtb5y.png" alt="enter image description here"></a></p> <p>The point is that there is a duality between counting forwards and counting backwards. 
(I imagine it as an outside view, watching the count forwards, like a tube growing, and then an inside view, where we go inside the sequence, counting backwards.) So long as we are counting forwards, the duality is potential. Once we start counting backwards, then the duality is manifest. And there are three ways that it can manifest itself: with an external mirror $0 \leftrightarrow -0$, without an explicit mirror, and with an internal mirror $x_{n} \leftrightarrow -x_{n}$.</p> <p>This is a rather straightforward analysis of the options for the diagonal entries of the Cartan subalgebra in Victor Kac's Lecture 15. The adjoint representation is relating pairs of these entries. The one subtle point is that in the case of the orthogonal Lie algebras $B_{n}$ and $D_{n}$ the matrices in the algebras have zeros on their anti-diagonal with the consequence that the adjoint representation never pairs $x_{i}$ with $-x_{i}$. But $B_{n}$ gets around this because it is odd-dimensional and so has a diagonal element $x_{n+1}=-x_{n+1}=0$ and thus can pair $x_{n}$ and $0$ yielding the simple root $x_{n}-0$. For this reason $B_{n}$ is similar to the symplectic algebra $C_{n}$ which can pair $x_{n}$ and $-x_{n}$ directly yielding the simple root $x_{n}- -x_{n} = 2x_{n}$.</p> <p>This may all be very meaningful in that cognitively, mathematics seems to be all about duality and its subtleties. I think of logic as a perfect duality, apparent in de Morgan's laws, between what is and what is not, which in set theory is the duality of an empty set and a total set. Mathematics then seems to consist of subtle deviations from this perfect duality, for example, the difference in topology between arbitrary unions and finite intersections. 
I did a metaphysical study (<a href="http://www.ms.lt/sodas/Book/Duality" rel="nofollow noreferrer">here are my notes</a>) of the <a href="https://en.wikipedia.org/wiki/List_of_dualities" rel="nofollow noreferrer">mathematical dualities listed in Wikipedia</a> and organized them in the diagram below. Basically, some of the dualities are explicit in mathematical notation and others are implicit in the mind, as with complex conjugation.</p> <p><a href="https://i.stack.imgur.com/VcVZD.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/VcVZD.png" alt="enter image description here"></a></p> <p>The cognitive bottleneck which I describe above seems to be near the very heart of the connection betwen Lie groups and Lie algebras. Now I am trying to understand the implications by working them out for both the Lie groups and for the Lie algebras. I keep in mind that it may turn out that the Lie machinery is an accident of history and not the most profound way of describing what is going on at the deepest level. For example, it may or may not turn out that cognitively the exceptional Lie groups are spurious. I will list some connections that I am trying to make and understand or conclude otherwise.</p> <p>I need to understand the link between the simple roots and the conjugate transposes (real, complex, quaternion) and the inner products (real, complex, quaternion) which the Lie groups preserve and are thus defined by. Apparently, the most ordinary Lie group $A_{n}$ is defined by the complex inner product (rather than, say, the real inner product). This means that the complex inner product based on two dimensions is the most ordinary. I imagine it may be expressing the duality of the options of counting forwards and backwards. The symplectic group is defined by the quaternion inner product and this would make sense if we notice that it is establishing two sequences, thus making for four dimensions. 
The orthogonal groups are defined by the real inner product and they can be thought of as carrying a single dimension because they are counting only one side of the sequence, outside (when there is an external mirror?) or inside (when there is an internal mirror?). Taken together the options define 2 + 4 + 1 + 1 = 8 dimensions. And this brings to mind the Bott periodicity clock or the Clifford algebra clock. I also have an 8-cycle in <a href="http://www.ms.lt/sodas/Book/20171011DisembodyingMind" rel="nofollow noreferrer">my study of cognitive frameworks</a>.</p> <p>I would like to try to relate the Lie groups with four geometries: affine, projective, conformal and symplectic. And perhaps more basically, with four ways of thinking about a triangle:</p> <ul> <li>The concatenation of three paths.</li> <li>The intersections of three lines (on which we travel backwards and forwards).</li> <li>Three angles that divide the triangle.</li> <li>The oriented area of the triangle that we sweep out as we go around it.</li> </ul> <p><a href="https://i.stack.imgur.com/hLzoY.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/hLzoY.png" alt="enter image description here"></a></p> <p>I want to imagine that these get expressed as affine, projective, conformal and symplectic geometries, respectively. In <a href="http://www.ms.lt/derlius/20171001AndriusKulikauskas-AGeometryOfMoods.pdf" rel="nofollow noreferrer">my study of emotion and moods</a>, I have found the different ways of thinking about a triangle to be useful in grounding six transformations which take us from a less specified geometry to a more specified one. I need to learn what gets preserved and how ($A_{n}$ - volume, $C_{n}$ - oriented surface area?, $B_{n}$ and $D_{n}$ - distance. I am thinking that complex derivatives (that is, holomorphic functions where the first derivative is nonzero) preserve angles and so $A_{n}$ expresses conformal geometry. 
And I am imagining that of $B_{n}$ and $D_{n}$, perhaps the first has an explicit origin and is thus the basis for projective geometry, whereas the second does not have an explicit origin and is the basis for affine geometry. </p> <p>Finally, I am trying to think of the root systems as constructing infinite families of polytopes. Here it may be helpful to identify each eigenvalue with a vertex and each root with an edge of a polytope. We have the following kinds of building blocks:</p> <p><a href="https://i.stack.imgur.com/RiVUs.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/RiVUs.png" alt="Building blocks of root systems"></a></p> <p>$A_{n}$ can be identified with the simplexes. I think of the simplex as having a unique center - the -1 simplex - which keeps moving as the simplex grows. Similarly, the simplex has a unique volume. The binomial expansion then codes for all of the subsimplexes. </p> <p><a href="https://i.stack.imgur.com/oxDFU.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/oxDFU.png" alt="enter image description here"></a></p> <p>Now I think of the cross-polytopes as being generated by the center creating pairs of vertices, positive and negative, which are linked to all of the existing vertices but not to each other. This accords with the root system $C_{n}$. If we look at the triangle which counts the sub-polytopes, then we see that it has a unique center but the largest number counts the surface area, which is to say, there is no volume!</p> <p><a href="https://i.stack.imgur.com/InUnA.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/InUnA.png" alt="enter image description here"></a></p> <p>The duals of the cross-polytopes are the hypercubes, which can be thought of as generated by mirrors (hyperplanes). They have a volume but no center! I don't see how to relate $B_{n}$ with such a hypercube. 
Although perhaps the 0 is the total volume and it lets us construct our hypercube "top down" by dividing the whole volume rather than "bottom up" by assembling the vertices. </p> <p><a href="https://i.stack.imgur.com/qVZrG.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/qVZrG.png" alt="enter image description here"></a></p> <p>Finally, I expect there should be an infinite family of polytopes $D_{n}$ with no center and no volume. But I don't think it is the demihypercubes. I need to try to interpret the root system as a polytope.</p> <p>However, just to note the problematic nature of my own thinking, I had originally observed and supposed that the simplexes could be used to define a vector (leading from the center to a vertex) and affine geometry. The cross polytopes could define a line (going through the center in opposite directions to a pair of vertices) and projective geometry. The hypercubes could define angles and conformal geometry. And the final polytopes, which I imagined as "coordinate systems" (a symplex defined in "one quadrant"), could define oriented areas and symplectic geometry. This would have related $A_{n}$ - affine geometry, $B_{n}$ - projective geometry, $C_{n}$ - conformal geometry and $D_{n}$ - symplectic geometry. But in learning more I am realizing, in my thinking above, that it should be, if anything, $A_{n}$ - conformal geometry, $B_{n}$ - projective geometry, $C_{n}$ - symplectic geometry and $D_{n}$ - affine geometry. </p> <p>I appreciate any thoughts, but especially help finding my way in learning about Lie groups and Lie algebras. I also look forward to alternative answers, intuitively, why are there four classical Lie groups / algebras?</p> Answer 34: <p>You may find insight in Coxeter's work on the subject, which leans heavily on classical geometry. 
In <em>Regular Polytopes,</em> <span class="math-container">$\S$</span>11-5, Coxeter achieves the result in question by studying the fundamental region of reflection groups (I also recommend reviewing the historical context in <span class="math-container">$\S$</span>11-x). Intuition is subjective, but I find Coxeter's style as a classical geometer highly intuitive.</p> <p>In the reference above, Coxeter relies heavily on Cartan's paper <a href="https://link.springer.com/article/10.1007/BF02409989" rel="nofollow noreferrer">La géométrie des groupes simples</a>. This paper is extremely lucid, and has also been a significant source of intuition for me on this subject.</p> Answer 35: <p>I will share some progress that I am making in answering my own question. It comes from studying <a href="http://math.mit.edu/classes/18.745/classnotes.html" rel="nofollow noreferrer">Victor Kac's lecture notes</a>, and in particular, <a href="http://math.mit.edu/classes/18.745/Notes/Lecture_15_Notes.pdf" rel="nofollow noreferrer">Lecture 15</a>, and thinking concretely about how the simple roots arise from pairs of the diagonal entries of the matrices of the Cartan subalgebra. Perhaps somebody will have further or better thoughts. Basically, I may have found the metaphysical or cognitive bottleneck (in the good sense) that I am looking for, but now I need to link that to meaningful constraints on Lie Algebras and Lie Groups.</p> <p>Here is the closest that I seem to get to the crux. Imagine a counting process $1 \rightarrow 2 \rightarrow 3 \rightarrow \dots n-1 \rightarrow n$ which we encode algebraically as $ x_{1}-x_{2}, x_{2}-x_{3}, \dots , x_{n-1}-x_{n} $. All of the classical Lie algebras share this set of simple roots. We can think of this as a signal propagating forward. 
The classical Lie algebras are distinguished by one additional simple root for which there are four allowed possibilities:</p> <p>$ \begin{array}{1|2]3|4} A_{n} &amp; n \rightarrow n+1 &amp; x_{n}-x_{n+1} &amp; = x_{n}-x_{n+1} \\ B_{n} &amp; n \rightarrow 0 &amp; x_{n}-0 &amp; = x_{n} \\ C_{n} &amp; n \rightarrow -n &amp; x_{n}- -x_{n} &amp; = 2x_{n} \\ D_{n} &amp; n \rightarrow -(n-1) &amp; x_{n}- -x_{n-1} &amp; = x_{n-1}+x_{n} \\ \end{array} $</p> <p>I interpret $A_{n}$ as the possibility that the sequence continues propagating forward. Otherwise the sequence is reflected backward. The sequence can take us first to an external mirror $0$ from which we would proceed to count backwards $-x_{n},-x_{n-1},\dots,-x_{1}$. Or the sequence can take us from $x_{n}$ directly to its reflection $-x_{n}$. Or it can turn out that $x_{n}$ is itself a mirror and so the counting proceeds $x_{n-1} \rightarrow x_{n} \rightarrow -x_{n-1}$ where we may think of $x_{n}$ and $-x_{n}$ as intrinsically indistinguishable conjugates.</p> <p><a href="https://i.stack.imgur.com/qtb5y.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/qtb5y.png" alt="enter image description here"></a></p> <p>The point is that there is a duality between counting forwards and counting backwards. (I imagine it as an outside view, watching the count forwards, like a tube growing, and then an inside view, where we go inside the sequence, counting backwards.) So long as we are counting forwards, the duality is potential. Once we start counting backwards, then the duality is manifest. And there are three ways that it can manifest itself: with an external mirror $0 \leftrightarrow -0$, without an explicit mirror, and with an internal mirror $x_{n} \leftrightarrow -x_{n}$.</p> <p>This is a rather straightforward analysis of the options for the diagonal entries of the Cartan subalgebra in Victor Kac's Lecture 15. The adjoint representation is relating pairs of these entries. 
The one subtle point is that in the case of the orthogonal Lie algebras $B_{n}$ and $D_{n}$ the matrices in the algebras have zeros on their anti-diagonal with the consequence that the adjoint representation never pairs $x_{i}$ with $-x_{i}$. But $B_{n}$ gets around this because it is odd-dimensional and so has a diagonal element $x_{n+1}=-x_{n+1}=0$ and thus can pair $x_{n}$ and $0$ yielding the simple root $x_{n}-0$. For this reason $B_{n}$ is similar to the symplectic algebra $C_{n}$ which can pair $x_{n}$ and $-x_{n}$ directly yielding the simple root $x_{n}- -x_{n} = 2x_{n}$.</p> <p>This may all be very meaningful in that cognitively, mathematics seems to be all about duality and its subtleties. I think of logic as a perfect duality, apparent in de Morgan's laws, between what is and what is not, which in set theory is the duality of an empty set and a total set. Mathematics then seems to consist of subtle deviations from this perfect duality, for example, the difference in topology between arbitrary unions and finite intersections. I did a metaphysical study (<a href="http://www.ms.lt/sodas/Book/Duality" rel="nofollow noreferrer">here are my notes</a>) of the <a href="https://en.wikipedia.org/wiki/List_of_dualities" rel="nofollow noreferrer">mathematical dualities listed in Wikipedia</a> and organized them in the diagram below. Basically, some of the dualities are explicit in mathematical notation and others are implicit in the mind, as with complex conjugation.</p> <p><a href="https://i.stack.imgur.com/VcVZD.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/VcVZD.png" alt="enter image description here"></a></p> <p>The cognitive bottleneck which I describe above seems to be near the very heart of the connection betwen Lie groups and Lie algebras. Now I am trying to understand the implications by working them out for both the Lie groups and for the Lie algebras. 
I keep in mind that it may turn out that the Lie machinery is an accident of history and not the most profound way of describing what is going on at the deepest level. For example, it may or may not turn out that cognitively the exceptional Lie groups are spurious. I will list some connections that I am trying to make and understand or conclude otherwise.</p> <p>I need to understand the link between the simple roots and the conjugate transposes (real, complex, quaternion) and the inner products (real, complex, quaternion) which the Lie groups preserve and are thus defined by. Apparently, the most ordinary Lie group $A_{n}$ is defined by the complex inner product (rather than, say, the real inner product). This means that the complex inner product based on two dimensions is the most ordinary. I imagine it may be expressing the duality of the options of counting forwards and backwards. The symplectic group is defined by the quaternion inner product and this would make sense if we notice that it is establishing two sequences, thus making for four dimensions. The orthogonal groups are defined by the real inner product and they can be thought of as carrying a single dimension because they are counting only one side of the sequence, outside (when there is an external mirror?) or inside (when there is an internal mirror?). Taken together the options define 2 + 4 + 1 + 1 = 8 dimensions. And this brings to mind the Bott periodicity clock or the Clifford algebra clock. I also have an 8-cycle in <a href="http://www.ms.lt/sodas/Book/20171011DisembodyingMind" rel="nofollow noreferrer">my study of cognitive frameworks</a>.</p> <p>I would like to try to relate the Lie groups with four geometries: affine, projective, conformal and symplectic. 
And perhaps more basically, with four ways of thinking about a triangle:</p> <ul> <li>The concatenation of three paths.</li> <li>The intersections of three lines (on which we travel backwards and forwards).</li> <li>Three angles that divide the triangle.</li> <li>The oriented area of the triangle that we sweep out as we go around it.</li> </ul> <p><a href="https://i.stack.imgur.com/hLzoY.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/hLzoY.png" alt="enter image description here"></a></p> <p>I want to imagine that these get expressed as affine, projective, conformal and symplectic geometries, respectively. In <a href="http://www.ms.lt/derlius/20171001AndriusKulikauskas-AGeometryOfMoods.pdf" rel="nofollow noreferrer">my study of emotion and moods</a>, I have found the different ways of thinking about a triangle to be useful in grounding six transformations which take us from a less specified geometry to a more specified one. I need to learn what gets preserved and how ($A_{n}$ - volume, $C_{n}$ - oriented surface area?, $B_{n}$ and $D_{n}$ - distance. I am thinking that complex derivatives (that is, holomorphic functions where the first derivative is nonzero) preserve angles and so $A_{n}$ expresses conformal geometry. And I am imagining that of $B_{n}$ and $D_{n}$, perhaps the first has an explicit origin and is thus the basis for projective geometry, whereas the second does not have an explicit origin and is the basis for affine geometry. </p> <p>Finally, I am trying to think of the root systems as constructing infinite families of polytopes. Here it may be helpful to identify each eigenvalue with a vertex and each root with an edge of a polytope. We have the following kinds of building blocks:</p> <p><a href="https://i.stack.imgur.com/RiVUs.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/RiVUs.png" alt="Building blocks of root systems"></a></p> <p>$A_{n}$ can be identified with the simplexes. 
I think of the simplex as having a unique center - the -1 simplex - which keeps moving as the simplex grows. Similarly, the simplex has a unique volume. The binomial expansion then codes for all of the subsimplexes. </p> <p><a href="https://i.stack.imgur.com/oxDFU.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/oxDFU.png" alt="enter image description here"></a></p> <p>Now I think of the cross-polytopes as being generated by the center creating pairs of vertices, positive and negative, which are linked to all of the existing vertices but not to each other. This accords with the root system $C_{n}$. If we look at the triangle which counts the sub-polytopes, then we see that it has a unique center but the largest number counts the surface area, which is to say, there is no volume!</p> <p><a href="https://i.stack.imgur.com/InUnA.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/InUnA.png" alt="enter image description here"></a></p> <p>The duals of the cross-polytopes are the hypercubes, which can be thought of as generated by mirrors (hyperplanes). They have a volume but no center! I don't see how to relate $B_{n}$ with such a hypercube. Although perhaps the 0 is the total volume and it lets us construct our hypercube "top down" by dividing the whole volume rather than "bottom up" by assembling the vertices. </p> <p><a href="https://i.stack.imgur.com/qVZrG.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/qVZrG.png" alt="enter image description here"></a></p> <p>Finally, I expect there should be an infinite family of polytopes $D_{n}$ with no center and no volume. But I don't think it is the demihypercubes. I need to try to interpret the root system as a polytope.</p> <p>However, just to note the problematic nature of my own thinking, I had originally observed and supposed that the simplexes could be used to define a vector (leading from the center to a vertex) and affine geometry. 
The cross polytopes could define a line (going through the center in opposite directions to a pair of vertices) and projective geometry. The hypercubes could define angles and conformal geometry. And the final polytopes, which I imagined as "coordinate systems" (a simplex defined in "one quadrant"), could define oriented areas and symplectic geometry. This would have related $A_{n}$ - affine geometry, $B_{n}$ - projective geometry, $C_{n}$ - conformal geometry and $D_{n}$ - symplectic geometry. But in learning more I am realizing, in my thinking above, that it should be, if anything, $A_{n}$ - conformal geometry, $B_{n}$ - projective geometry, $C_{n}$ - symplectic geometry and $D_{n}$ - affine geometry. </p> <p>I appreciate any thoughts, but especially help finding my way in learning about Lie groups and Lie algebras. I also look forward to alternative answers, intuitively, why are there four classical Lie groups / algebras?</p> Answer 36: <p>You may find insight in Coxeter's work on the subject, which leans heavily on classical geometry. In <em>Regular Polytopes,</em> <span class="math-container">$\S$</span>11-5, Coxeter achieves the result in question by studying the fundamental region of reflection groups (I also recommend reviewing the historical context in <span class="math-container">$\S$</span>11-x). Intuition is subjective, but I find Coxeter's style as a classical geometer highly intuitive.</p>
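The sub-polytope counts alluded to in the question have standard closed forms; as a quick combinatorial check (the formulas below are textbook face-count identities, not taken from the question text), a short script tabulates the $k$-dimensional faces of the three classical families:

```python
# Face counts for the three infinite polytope families (standard identities):
#   n-simplex:        C(n+1, k+1) faces of dimension k (includes the top cell)
#   n-cross-polytope: 2**(k+1) * C(n, k+1) proper faces of dimension k
#                     (no top-cell entry -- echoing the "no volume" remark)
#   n-cube:           2**(n-k) * C(n, k) faces of dimension k
from math import comb

def simplex_faces(n):
    return [comb(n + 1, k + 1) for k in range(n + 1)]

def cross_polytope_faces(n):
    return [2 ** (k + 1) * comb(n, k + 1) for k in range(n)]

def cube_faces(n):
    return [2 ** (n - k) * comb(n, k) for k in range(n + 1)]

print(simplex_faces(3))         # [4, 6, 4, 1]  tetrahedron
print(cross_polytope_faces(3))  # [6, 12, 8]    octahedron
print(cube_faces(3))            # [8, 12, 6, 1] cube
```

For $n = 3$ these recover the familiar vertex/edge/face counts of the tetrahedron, octahedron, and cube.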
How can I solve $u_{xt} + uu_{xx} + \frac{1}{2}u_x^2 = 0$ with the method of characteristics.
2,109,856
9
https://math.stackexchange.com/questions/2109856/how-can-i-solve-u-xt-uu-xx-frac12u-x2-0-with-the-method-of-cha
<p>I am trying to solve the following PDE: $u_{xt} + uu_{xx} = -\frac{1}{2}u_x^2$, with initial condition: $u(x,0) = u_0(x) \in C^{\infty}$ using the method of characteristics.</p> <p>I am a beginner with the method of characteristics and PDE in general. Here is what I have so far.</p> <p>Define $\gamma(x,t)$ as the characteristic curves.</p> <p>$\frac{\partial}{\partial t} u_x(\gamma(x,t),t) = u_{xt} + u_{xx}\gamma_t(x,t) = - \frac{1}{2}u_x^2$</p> <p>Set $u_t = u_x$</p> <p>$\Rightarrow \frac{\partial}{\partial t} u_x(\gamma(x,t),t)= (u_t)_x + u_{xx}\gamma_t(x,t)$</p> <p>$ = u_{xx} + u_{xx}\gamma_t = - \frac{1}{2}u_x^2$</p> <p>From this I get $\gamma_t = -\frac{1}{2}\frac{u_x^2}{u_{xx}} - 1$</p> <p>However, I am not sure this is the right approach and do not fully understand how to use the method of characteristics when the solution $u(x,t)$ is constant on the characteristic curves.</p> <p>Any help is much appreciated.</p> <p>Edit: I made some progress by using $v=u_x$ and getting $\frac{dv}{dt} = \frac{-1}{2} v^2$ and $\frac{\partial x}{\partial{t}} = 1$. Then separating the first ODE, I get $\frac{2}{v} = t + c$. However, I am not sure if my solution after integrating with respect to $x$ and using the initial condition is correct. I end up with $u(x,t) = \frac{2}{t+c}x + c_1$, $u(x,0) = \frac{2}{c}x + c_1$.</p>
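As a quick numerical spot-check of the separated characteristic ODE in the question's edit (my own sketch; the constant $c$ and the evaluation point below are arbitrary choices, not from the post), $v(t) = 2/(t + c)$ does satisfy $\frac{dv}{dt} = \frac{-1}{2}v^2$:

```python
# Spot-check (not part of the original post): v(t) = 2 / (t + c) solves the
# characteristic ODE dv/dt = -(1/2) * v**2 from the question's edit.
# The constant c and the evaluation point t0 are arbitrary sample values.
c = 1.5

def v(t):
    return 2.0 / (t + c)

t0, h = 0.4, 1e-5
dv_dt = (v(t0 + h) - v(t0 - h)) / (2 * h)   # central finite difference
print(abs(dv_dt + v(t0) ** 2 / 2) < 1e-8)   # True
```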
[ { "answer_id": 2250737, "score": 2, "text": "<p><strong>Modifying the problem.</strong> Rather than consider the PDE $u_{xt}+u\\ u_{xx}+\\frac{1}{2}u_{x}^2=0$ with initial condition $u(x,0)=u_0(x)$ as asked above, I will consider the following variant.\n$$\n\\text{Solve }u_{xy}+u\\ u_{xx} + u_x^2=0\\text{ subject to }u(x,0)=f(x).\\qquad(\\star)\n$$\nThere are three differences between this question and that which was asked originally.</p>\n\n<ol>\n<li>The coefficient of $u_x^2$ has changed from $\\frac{1}{2}$ to $1$.</li>\n<li>The variable $t$ has been renamed to $y$.</li>\n<li>The initial function $u_0(x)$ has been renamed to $f(x)$.</li>\n</ol>\n\n<p>Only (1) represents a significant modification of the problem. It makes the solution more tractable and enables it to be found using an elementary application of the method of characteristics. For these reasons, it is conceivable that this was the intended question.</p>\n\n<p>Note: I will not delve into regularity of the solutions in this answer.</p>\n\n<p><strong>Reduction to a first order quasilinear PDE.</strong> Write the equation as\n$$\n\\frac{\\partial}{\\partial x}\\left(u_y+u\\ u_x\\right)=0.\n$$\nThus $(\\star)$ is equivalent to\n$$\nu_y+u\\ u_x=g(y),\\qquad u(x,0)=f(x),\\qquad (\\star\\star)\n$$\nwhere $g(y)$ is an arbitrary function of $y$ (with sufficient regularity).</p>\n\n<p><strong>Method of characteristics.</strong>\nPerhaps the simplest formulation of the method of characteristics is for quasilinear first order PDEs. These are PDEs of the form\n$$a(x,y,u)u_x+b(x,y,u)u_y=c(x,y,u).$$\nTo solve this equation, one regards the solution as a surface $z=u(x,y)$ in $xyz$-space. 
Let $s$ parametrize the initial curve $\\bigl(s,0,f(s)\\bigr)$ and let $t$ be a second parameter, which can be thought of as the distance flowed along a characteristic curve emanating from $\\bigl(s,0,f(s)\\bigr)$.</p>\n\n<p>The characteristic equations are then\n$$\n\\frac{dx}{dt}=a(x,y,z),\\quad \\frac{dy}{dt}=b(x,y,z),\\quad \\frac{dz}{dt}=c(x,y,z).\n$$\nReturning to our equation $(\\star\\star)$, this reduces to $a(x,y,u)=u$ and $b(x,y,u)=1$ and $c(x,y,u)=g(y)$.\nThus \n$$\n\\frac{dx}{dt}=z,\\quad \\frac{dy}{dt}=1,\\quad \\frac{dz}{dt}=g(y)\n$$\nwith initial conditions $x(0)=s$ and $y(0)=0$ and $z(0)=f(s)$.</p>\n\n<p>The solution to this system is\n$$\nx=s+zt,\\quad y=t,\\quad z=f(s)+h(t),\n$$\nwhere $h(t)$ is the antiderivative of $g(t)$ satisfying $h(0)=0$. Since $g$ was arbitrary, so is $h$ given $h(0)=0$.</p>\n\n<p><strong>The solution.</strong> Now we eliminate all occurrences of $t$ by replacing them with $y$, then eliminate $s$ by writing $s=x-zy$. Finally, replace $z$ with $u$ to obtain the implicit equation\n$$\n\\boxed{u=f(x-uy)+h(y)},\n$$\nwhere $h(y)$ is any sufficiently regular function satisfying $h(0)=0$. 
This is an implicit equation for the general solution of $(\\star)$.</p>\n\n<p><strong>TL;DR.</strong> Change the $\\frac{1}{2}$ in the original question to $1$ to obtain a PDE solvable by the method of characteristics.</p>\n" }, { "answer_id": 2110466, "score": -3, "text": "<p>Not so sure:</p>\n\n<p>$u_{xt} + uu_{xx} + \\frac{1}{2}u_x^2 = 0$</p>\n\n<p>$2u_{xt} + 2uu_{xx} + u_x^2 = 0$</p>\n\n<p>Let's define</p>\n\n<p>$u = x t$</p>\n\n<p>Then:</p>\n\n<p>$u_{x} = x_{x} t$</p>\n\n<p>$u_{t} = x t_{t}$</p>\n\n<p>$u_{xt} = x_{x} t_{t}$</p>\n\n<p>$u_{xx} = x_{xx} t$</p>\n\n<p>Now:</p>\n\n<p>$2 u_{xt} + 2u u_{xx} + u_x^2 = 0$</p>\n\n<p>$2 x_{x} t_{t} + 2 x t x_{xx} t + x_{x}^2 t^2 = 0$</p>\n\n<p>We divide by $x_{x} t^2$:</p>\n\n<p>$2 t_{t} / t^2 + 2 x x_{xx} / x_{x} + x_{x} = 0$</p>\n\n<p>Please, verify.</p>\n" } ]
This is an implicit equation for the general solution of $(\\star)$.</p>\n\n<p><strong>TL;DR.</strong> Change the $\\frac{1}{2}$ in the original question to $1$ to obtain a PDE solvable by the method of characteristics.</p>\n" }, { "answer_id": 2110466, "score": -3, "text": "<p>Not so sure:</p>\n\n<p>$u_{xt} + uu_{xx} + \\frac{1}{2}u_x^2 = 0$</p>\n\n<p>$2u_{xt} + 2uu_{xx} + u_x^2 = 0$</p>\n\n<p>Let's define</p>\n\n<p>$u = x t$</p>\n\n<p>Then:</p>\n\n<p>$u_{x} = x_{x} t$</p>\n\n<p>$u_{t} = x t_{t}$</p>\n\n<p>$u_{xt} = x_{x} t_{t}$</p>\n\n<p>$u_{xx} = x_{xx} t$</p>\n\n<p>Now:</p>\n\n<p>$2 u_{xt} + 2u u_{xx} + u_x^2 = 0$</p>\n\n<p>$2 x_{x} t_{t} + 2 x t x_{xx} t + x_{x}^2 t^2 = 0$</p>\n\n<p>We divide by $x_{x} t^2$:</p>\n\n<p>$2 t_{t} / t^2 + 2 x x_{xx} / x_{x} + x_{x} = 0$</p>\n\n<p>Please, verify.</p>\n" }, { "answer_id": 2250737, "score": 2, "text": "<p><strong>Modifying the problem.</strong> Rather than consider the PDE $u_{xt}+u\\ u_{xx}+\\frac{1}{2}u_{x}^2=0$ with initial condition $u(x,0)=u_0(x)$ as asked above, I will consider the following variant.\n$$\n\\text{Solve }u_{xy}+u\\ u_{xx} + u_x^2=0\\text{ subject to }u(x,0)=f(x).\\qquad(\\star)\n$$\nThere are three differences between this question and that which was asked originally.</p>\n\n<ol>\n<li>The coefficient of $u_x^2$ has changed from $\\frac{1}{2}$ to $1$.</li>\n<li>The variable $t$ has been renamed to $y$.</li>\n<li>The initial function $u_0(x)$ has been renamed to $f(x)$.</li>\n</ol>\n\n<p>Only (1) represents a significant modification of the problem. It makes the solution more tractable and enables it to be found using an elementary application of the method of characteristics. 
For these reasons, it is conceivable that this was the intended question.</p>\n\n<p>Note: I will not delve into regularity of the solutions in this answer.</p>\n\n<p><strong>Reduction to a first order quasilinear PDE.</strong> Write the equation as\n$$\n\\frac{\\partial}{\\partial x}\\left(u_y+u\\ u_x\\right)=0.\n$$\nThus $(\\star)$ is equivalent to\n$$\nu_y+u\\ u_x=g(y),\\qquad u(x,0)=f(x),\\qquad (\\star\\star)\n$$\nwhere $g(y)$ is an arbitrary function of $y$ (with sufficient regularity).</p>\n\n<p><strong>Method of characteristics.</strong>\nPerhaps the simplest formulation of the method of characteristics is for quasilinear first order PDEs. These are PDEs of the form\n$$a(x,y,u)u_x+b(x,y,u)u_y=c(x,y,u).$$\nTo solve this equation, one regards the solution as a surface $z=u(x,y)$ in $xyz$-space. Let $s$ parametrize the initial curve $\\bigl(s,0,f(s)\\bigr)$ and let $t$ be a second parameter, which can be thought of as the distance flowed along a characteristic curve emanating from $\\bigl(s,0,f(s)\\bigr)$.</p>\n\n<p>The characteristic equations are then\n$$\n\\frac{dx}{dt}=a(x,y,z),\\quad \\frac{dy}{dt}=b(x,y,z),\\quad \\frac{dz}{dt}=c(x,y,z).\n$$\nReturning to our equation $(\\star\\star)$, this reduces to $a(x,y,u)=u$ and $b(x,y,u)=1$ and $c(x,y,u)=g(y)$.\nThus \n$$\n\\frac{dx}{dt}=z,\\quad \\frac{dy}{dt}=1,\\quad \\frac{dz}{dt}=g(y)\n$$\nwith initial conditions $x(0)=s$ and $y(0)=0$ and $z(0)=f(s)$.</p>\n\n<p>The solution to this system is\n$$\nx=s+zt,\\quad y=t,\\quad z=f(s)+h(t),\n$$\nwhere $h(t)$ is the antiderivative of $g(t)$ satisfying $h(0)=0$. Since $g$ was arbitrary, so is $h$ given $h(0)=0$.</p>\n\n<p><strong>The solution.</strong> Now we eliminate all occurrences of $t$ by replacing them with $y$, then eliminate $s$ by writing $s=x-zy$. Finally, replace $z$ with $u$ to obtain the implicit equation\n$$\n\\boxed{u=f(x-uy)+h(y)},\n$$\nwhere $h(y)$ is any sufficiently regular function satisfying $h(0)=0$. 
This is an implicit equation for the general solution of $(\\star)$.</p>\n\n<p><strong>TL;DR.</strong> Change the $\\frac{1}{2}$ in the original question to $1$ to obtain a PDE solvable by the method of characteristics.</p>\n" }, { "answer_id": 2110466, "score": -3, "text": "<p>Not so sure:</p>\n\n<p>$u_{xt} + uu_{xx} + \\frac{1}{2}u_x^2 = 0$</p>\n\n<p>$2u_{xt} + 2uu_{xx} + u_x^2 = 0$</p>\n\n<p>Let's define</p>\n\n<p>$u = x t$</p>\n\n<p>Then:</p>\n\n<p>$u_{x} = x_{x} t$</p>\n\n<p>$u_{t} = x t_{t}$</p>\n\n<p>$u_{xt} = x_{x} t_{t}$</p>\n\n<p>$u_{xx} = x_{xx} t$</p>\n\n<p>Now:</p>\n\n<p>$2 u_{xt} + 2u u_{xx} + u_x^2 = 0$</p>\n\n<p>$2 x_{x} t_{t} + 2 x t x_{xx} t + x_{x}^2 t^2 = 0$</p>\n\n<p>We divide by $x_{x} t^2$:</p>\n\n<p>$2 t_{t} / t^2 + 2 x x_{xx} / x_{x} + x_{x} = 0$</p>\n\n<p>Please, verify.</p>\n" }, { "answer_id": 2250737, "score": 2, "text": "<p><strong>Modifying the problem.</strong> Rather than consider the PDE $u_{xt}+u\\ u_{xx}+\\frac{1}{2}u_{x}^2=0$ with initial condition $u(x,0)=u_0(x)$ as asked above, I will consider the following variant.\n$$\n\\text{Solve }u_{xy}+u\\ u_{xx} + u_x^2=0\\text{ subject to }u(x,0)=f(x).\\qquad(\\star)\n$$\nThere are three differences between this question and that which was asked originally.</p>\n\n<ol>\n<li>The coefficient of $u_x^2$ has changed from $\\frac{1}{2}$ to $1$.</li>\n<li>The variable $t$ has been renamed to $y$.</li>\n<li>The initial function $u_0(x)$ has been renamed to $f(x)$.</li>\n</ol>\n\n<p>Only (1) represents a significant modification of the problem. It makes the solution more tractable and enables it to be found using an elementary application of the method of characteristics. 
For these reasons, it is conceivable that this was the intended question.</p>\n\n<p>Note: I will not delve into regularity of the solutions in this answer.</p>\n\n<p><strong>Reduction to a first order quasilinear PDE.</strong> Write the equation as\n$$\n\\frac{\\partial}{\\partial x}\\left(u_y+u\\ u_x\\right)=0.\n$$\nThus $(\\star)$ is equivalent to\n$$\nu_y+u\\ u_x=g(y),\\qquad u(x,0)=f(x),\\qquad (\\star\\star)\n$$\nwhere $g(y)$ is an arbitrary function of $y$ (with sufficient regularity).</p>\n\n<p><strong>Method of characteristics.</strong>\nPerhaps the simplest formulation of the method of characteristics is for quasilinear first order PDEs. These are PDEs of the form\n$$a(x,y,u)u_x+b(x,y,u)u_y=c(x,y,u).$$\nTo solve this equation, one regards the solution as a surface $z=u(x,y)$ in $xyz$-space. Let $s$ parametrize the initial curve $\\bigl(s,0,f(s)\\bigr)$ and let $t$ be a second parameter, which can be thought of as the distance flowed along a characteristic curve emanating from $\\bigl(s,0,f(s)\\bigr)$.</p>\n\n<p>The characteristic equations are then\n$$\n\\frac{dx}{dt}=a(x,y,z),\\quad \\frac{dy}{dt}=b(x,y,z),\\quad \\frac{dz}{dt}=c(x,y,z).\n$$\nReturning to our equation $(\\star\\star)$, this reduces to $a(x,y,u)=u$ and $b(x,y,u)=1$ and $c(x,y,u)=g(y)$.\nThus \n$$\n\\frac{dx}{dt}=z,\\quad \\frac{dy}{dt}=1,\\quad \\frac{dz}{dt}=g(y)\n$$\nwith initial conditions $x(0)=s$ and $y(0)=0$ and $z(0)=f(s)$.</p>\n\n<p>The solution to this system is\n$$\nx=s+zt,\\quad y=t,\\quad z=f(s)+h(t),\n$$\nwhere $h(t)$ is the antiderivative of $g(t)$ satisfying $h(0)=0$. Since $g$ was arbitrary, so is $h$ given $h(0)=0$.</p>\n\n<p><strong>The solution.</strong> Now we eliminate all occurrences of $t$ by replacing them with $y$, then eliminate $s$ by writing $s=x-zy$. Finally, replace $z$ with $u$ to obtain the implicit equation\n$$\n\\boxed{u=f(x-uy)+h(y)},\n$$\nwhere $h(y)$ is any sufficiently regular function satisfying $h(0)=0$. 
This is an implicit equation for the general solution of $(\\star)$.</p>\n\n<p><strong>TL;DR.</strong> Change the $\\frac{1}{2}$ in the original question to $1$ to obtain a PDE solvable by the method of characteristics.</p>\n" }, { "answer_id": 2110466, "score": -3, "text": "<p>Not so sure:</p>\n\n<p>$u_{xt} + uu_{xx} + \\frac{1}{2}u_x^2 = 0$</p>\n\n<p>$2u_{xt} + 2uu_{xx} + u_x^2 = 0$</p>\n\n<p>Let's define</p>\n\n<p>$u = x t$</p>\n\n<p>Then:</p>\n\n<p>$u_{x} = x_{x} t$</p>\n\n<p>$u_{t} = x t_{t}$</p>\n\n<p>$u_{xt} = x_{x} t_{t}$</p>\n\n<p>$u_{xx} = x_{xx} t$</p>\n\n<p>Now:</p>\n\n<p>$2 u_{xt} + 2u u_{xx} + u_x^2 = 0$</p>\n\n<p>$2 x_{x} t_{t} + 2 x t x_{xx} t + x_{x}^2 t^2 = 0$</p>\n\n<p>We divide by $x_{x} t^2$:</p>\n\n<p>$2 t_{t} / t^2 + 2 x x_{xx} / x_{x} + x_{x} = 0$</p>\n\n<p>Please, verify.</p>\n" }, { "answer_id": 2250737, "score": 2, "text": "<p><strong>Modifying the problem.</strong> Rather than consider the PDE $u_{xt}+u\\ u_{xx}+\\frac{1}{2}u_{x}^2=0$ with initial condition $u(x,0)=u_0(x)$ as asked above, I will consider the following variant.\n$$\n\\text{Solve }u_{xy}+u\\ u_{xx} + u_x^2=0\\text{ subject to }u(x,0)=f(x).\\qquad(\\star)\n$$\nThere are three differences between this question and that which was asked originally.</p>\n\n<ol>\n<li>The coefficient of $u_x^2$ has changed from $\\frac{1}{2}$ to $1$.</li>\n<li>The variable $t$ has been renamed to $y$.</li>\n<li>The initial function $u_0(x)$ has been renamed to $f(x)$.</li>\n</ol>\n\n<p>Only (1) represents a significant modification of the problem. It makes the solution more tractable and enables it to be found using an elementary application of the method of characteristics. 
For these reasons, it is conceivable that this was the intended question.</p>\n\n<p>Note: I will not delve into regularity of the solutions in this answer.</p>\n\n<p><strong>Reduction to a first order quasilinear PDE.</strong> Write the equation as\n$$\n\\frac{\\partial}{\\partial x}\\left(u_y+u\\ u_x\\right)=0.\n$$\nThus $(\\star)$ is equivalent to\n$$\nu_y+u\\ u_x=g(y),\\qquad u(x,0)=f(x),\\qquad (\\star\\star)\n$$\nwhere $g(y)$ is an arbitrary function of $y$ (with sufficient regularity).</p>\n\n<p><strong>Method of characteristics.</strong>\nPerhaps the simplest formulation of the method of characteristics is for quasilinear first order PDEs. These are PDEs of the form\n$$a(x,y,u)u_x+b(x,y,u)u_y=c(x,y,u).$$\nTo solve this equation, one regards the solution as a surface $z=u(x,y)$ in $xyz$-space. Let $s$ parametrize the initial curve $\\bigl(s,0,f(s)\\bigr)$ and let $t$ be a second parameter, which can be thought of as the distance flowed along a characteristic curve emanating from $\\bigl(s,0,f(s)\\bigr)$.</p>\n\n<p>The characteristic equations are then\n$$\n\\frac{dx}{dt}=a(x,y,z),\\quad \\frac{dy}{dt}=b(x,y,z),\\quad \\frac{dz}{dt}=c(x,y,z).\n$$\nReturning to our equation $(\\star\\star)$, this reduces to $a(x,y,u)=u$ and $b(x,y,u)=1$ and $c(x,y,u)=g(y)$.\nThus \n$$\n\\frac{dx}{dt}=z,\\quad \\frac{dy}{dt}=1,\\quad \\frac{dz}{dt}=g(y)\n$$\nwith initial conditions $x(0)=s$ and $y(0)=0$ and $z(0)=f(s)$.</p>\n\n<p>The solution to this system is\n$$\nx=s+zt,\\quad y=t,\\quad z=f(s)+h(t),\n$$\nwhere $h(t)$ is the antiderivative of $g(t)$ satisfying $h(0)=0$. Since $g$ was arbitrary, so is $h$ given $h(0)=0$.</p>\n\n<p><strong>The solution.</strong> Now we eliminate all occurrences of $t$ by replacing them with $y$, then eliminate $s$ by writing $s=x-zy$. Finally, replace $z$ with $u$ to obtain the implicit equation\n$$\n\\boxed{u=f(x-uy)+h(y)},\n$$\nwhere $h(y)$ is any sufficiently regular function satisfying $h(0)=0$. 
This is an implicit equation for the general solution of $(\\star)$.</p>\n\n<p><strong>TL;DR.</strong> Change the $\\frac{1}{2}$ in the original question to $1$ to obtain a PDE solvable by the method of characteristics.</p>\n" }, { "answer_id": 2110466, "score": -3, "text": "<p>Not so sure:</p>\n\n<p>$u_{xt} + uu_{xx} + \\frac{1}{2}u_x^2 = 0$</p>\n\n<p>$2u_{xt} + 2uu_{xx} + u_x^2 = 0$</p>\n\n<p>Let's define</p>\n\n<p>$u = x t$</p>\n\n<p>Then:</p>\n\n<p>$u_{x} = x_{x} t$</p>\n\n<p>$u_{t} = x t_{t}$</p>\n\n<p>$u_{xt} = x_{x} t_{t}$</p>\n\n<p>$u_{xx} = x_{xx} t$</p>\n\n<p>Now:</p>\n\n<p>$2 u_{xt} + 2u u_{xx} + u_x^2 = 0$</p>\n\n<p>$2 x_{x} t_{t} + 2 x t x_{xx} t + x_{x}^2 t^2 = 0$</p>\n\n<p>We divide by $x_{x} t^2$:</p>\n\n<p>$2 t_{t} / t^2 + 2 x x_{xx} / x_{x} + x_{x} = 0$</p>\n\n<p>Please, verify.</p>\n" }, { "answer_id": 2250737, "score": 2, "text": "<p><strong>Modifying the problem.</strong> Rather than consider the PDE $u_{xt}+u\\ u_{xx}+\\frac{1}{2}u_{x}^2=0$ with initial condition $u(x,0)=u_0(x)$ as asked above, I will consider the following variant.\n$$\n\\text{Solve }u_{xy}+u\\ u_{xx} + u_x^2=0\\text{ subject to }u(x,0)=f(x).\\qquad(\\star)\n$$\nThere are three differences between this question and that which was asked originally.</p>\n\n<ol>\n<li>The coefficient of $u_x^2$ has changed from $\\frac{1}{2}$ to $1$.</li>\n<li>The variable $t$ has been renamed to $y$.</li>\n<li>The initial function $u_0(x)$ has been renamed to $f(x)$.</li>\n</ol>\n\n<p>Only (1) represents a significant modification of the problem. It makes the solution more tractable and enables it to be found using an elementary application of the method of characteristics. 
For these reasons, it is conceivable that this was the intended question.</p>\n\n<p>Note: I will not delve into regularity of the solutions in this answer.</p>\n\n<p><strong>Reduction to a first order quasilinear PDE.</strong> Write the equation as\n$$\n\\frac{\\partial}{\\partial x}\\left(u_y+u\\ u_x\\right)=0.\n$$\nThus $(\\star)$ is equivalent to\n$$\nu_y+u\\ u_x=g(y),\\qquad u(x,0)=f(x),\\qquad (\\star\\star)\n$$\nwhere $g(y)$ is an arbitrary function of $y$ (with sufficient regularity).</p>\n\n<p><strong>Method of characteristics.</strong>\nPerhaps the simplest formulation of the method of characteristics is for quasilinear first order PDEs. These are PDEs of the form\n$$a(x,y,u)u_x+b(x,y,u)u_y=c(x,y,u).$$\nTo solve this equation, one regards the solution as a surface $z=u(x,y)$ in $xyz$-space. Let $s$ parametrize the initial curve $\\bigl(s,0,f(s)\\bigr)$ and let $t$ be a second parameter, which can be thought of as the distance flowed along a characteristic curve emanating from $\\bigl(s,0,f(s)\\bigr)$.</p>\n\n<p>The characteristic equations are then\n$$\n\\frac{dx}{dt}=a(x,y,z),\\quad \\frac{dy}{dt}=b(x,y,z),\\quad \\frac{dz}{dt}=c(x,y,z).\n$$\nReturning to our equation $(\\star\\star)$, this reduces to $a(x,y,u)=u$ and $b(x,y,u)=1$ and $c(x,y,u)=g(y)$.\nThus \n$$\n\\frac{dx}{dt}=z,\\quad \\frac{dy}{dt}=1,\\quad \\frac{dz}{dt}=g(y)\n$$\nwith initial conditions $x(0)=s$ and $y(0)=0$ and $z(0)=f(s)$.</p>\n\n<p>The solution to this system is\n$$\nx=s+zt,\\quad y=t,\\quad z=f(s)+h(t),\n$$\nwhere $h(t)$ is the antiderivative of $g(t)$ satisfying $h(0)=0$. Since $g$ was arbitrary, so is $h$ given $h(0)=0$.</p>\n\n<p><strong>The solution.</strong> Now we eliminate all occurrences of $t$ by replacing them with $y$, then eliminate $s$ by writing $s=x-zy$. Finally, replace $z$ with $u$ to obtain the implicit equation\n$$\n\\boxed{u=f(x-uy)+h(y)},\n$$\nwhere $h(y)$ is any sufficiently regular function satisfying $h(0)=0$. 
This is an implicit equation for the general solution of $(\\star)$.</p>\n\n<p><strong>TL;DR.</strong> Change the $\\frac{1}{2}$ in the original question to $1$ to obtain a PDE solvable by the method of characteristics.</p>\n" }, { "answer_id": 2110466, "score": -3, "text": "<p>Not so sure:</p>\n\n<p>$u_{xt} + uu_{xx} + \\frac{1}{2}u_x^2 = 0$</p>\n\n<p>$2u_{xt} + 2uu_{xx} + u_x^2 = 0$</p>\n\n<p>Let's define</p>\n\n<p>$u = x t$</p>\n\n<p>Then:</p>\n\n<p>$u_{x} = x_{x} t$</p>\n\n<p>$u_{t} = x t_{t}$</p>\n\n<p>$u_{xt} = x_{x} t_{t}$</p>\n\n<p>$u_{xx} = x_{xx} t$</p>\n\n<p>Now:</p>\n\n<p>$2 u_{xt} + 2u u_{xx} + u_x^2 = 0$</p>\n\n<p>$2 x_{x} t_{t} + 2 x t x_{xx} t + x_{x}^2 t^2 = 0$</p>\n\n<p>We divide by $x_{x} t^2$:</p>\n\n<p>$2 t_{t} / t^2 + 2 x x_{xx} / x_{x} + x_{x} = 0$</p>\n\n<p>Please, verify.</p>\n" }, { "answer_id": 2250737, "score": 2, "text": "<p><strong>Modifying the problem.</strong> Rather than consider the PDE $u_{xt}+u\\ u_{xx}+\\frac{1}{2}u_{x}^2=0$ with initial condition $u(x,0)=u_0(x)$ as asked above, I will consider the following variant.\n$$\n\\text{Solve }u_{xy}+u\\ u_{xx} + u_x^2=0\\text{ subject to }u(x,0)=f(x).\\qquad(\\star)\n$$\nThere are three differences between this question and that which was asked originally.</p>\n\n<ol>\n<li>The coefficient of $u_x^2$ has changed from $\\frac{1}{2}$ to $1$.</li>\n<li>The variable $t$ has been renamed to $y$.</li>\n<li>The initial function $u_0(x)$ has been renamed to $f(x)$.</li>\n</ol>\n\n<p>Only (1) represents a significant modification of the problem. It makes the solution more tractable and enables it to be found using an elementary application of the method of characteristics. 
For these reasons, it is conceivable that this was the intended question.</p>\n\n<p>Note: I will not delve into regularity of the solutions in this answer.</p>\n\n<p><strong>Reduction to a first order quasilinear PDE.</strong> Write the equation as\n$$\n\\frac{\\partial}{\\partial x}\\left(u_y+u\\ u_x\\right)=0.\n$$\nThus $(\\star)$ is equivalent to\n$$\nu_y+u\\ u_x=g(y),\\qquad u(x,0)=f(x),\\qquad (\\star\\star)\n$$\nwhere $g(y)$ is an arbitrary function of $y$ (with sufficient regularity).</p>\n\n<p><strong>Method of characteristics.</strong>\nPerhaps the simplest formulation of the method of characteristics is for quasilinear first order PDEs. These are PDEs of the form\n$$a(x,y,u)u_x+b(x,y,u)u_y=c(x,y,u).$$\nTo solve this equation, one regards the solution as a surface $z=u(x,y)$ in $xyz$-space. Let $s$ parametrize the initial curve $\\bigl(s,0,f(s)\\bigr)$ and let $t$ be a second parameter, which can be thought of as the distance flowed along a characteristic curve emanating from $\\bigl(s,0,f(s)\\bigr)$.</p>\n\n<p>The characteristic equations are then\n$$\n\\frac{dx}{dt}=a(x,y,z),\\quad \\frac{dy}{dt}=b(x,y,z),\\quad \\frac{dz}{dt}=c(x,y,z).\n$$\nReturning to our equation $(\\star\\star)$, this reduces to $a(x,y,u)=u$ and $b(x,y,u)=1$ and $c(x,y,u)=g(y)$.\nThus \n$$\n\\frac{dx}{dt}=z,\\quad \\frac{dy}{dt}=1,\\quad \\frac{dz}{dt}=g(y)\n$$\nwith initial conditions $x(0)=s$ and $y(0)=0$ and $z(0)=f(s)$.</p>\n\n<p>The solution to this system is\n$$\nx=s+zt,\\quad y=t,\\quad z=f(s)+h(t),\n$$\nwhere $h(t)$ is the antiderivative of $g(t)$ satisfying $h(0)=0$. Since $g$ was arbitrary, so is $h$ given $h(0)=0$.</p>\n\n<p><strong>The solution.</strong> Now we eliminate all occurrences of $t$ by replacing them with $y$, then eliminate $s$ by writing $s=x-zy$. Finally, replace $z$ with $u$ to obtain the implicit equation\n$$\n\\boxed{u=f(x-uy)+h(y)},\n$$\nwhere $h(y)$ is any sufficiently regular function satisfying $h(0)=0$. 
This is an implicit equation for the general solution of $(\\star)$.</p>\n\n<p><strong>TL;DR.</strong> Change the $\\frac{1}{2}$ in the original question to $1$ to obtain a PDE solvable by the method of characteristics.</p>\n" }, { "answer_id": 2110466, "score": -3, "text": "<p>Not so sure:</p>\n\n<p>$u_{xt} + uu_{xx} + \\frac{1}{2}u_x^2 = 0$</p>\n\n<p>$2u_{xt} + 2uu_{xx} + u_x^2 = 0$</p>\n\n<p>Let's define</p>\n\n<p>$u = x t$</p>\n\n<p>Then:</p>\n\n<p>$u_{x} = x_{x} t$</p>\n\n<p>$u_{t} = x t_{t}$</p>\n\n<p>$u_{xt} = x_{x} t_{t}$</p>\n\n<p>$u_{xx} = x_{xx} t$</p>\n\n<p>Now:</p>\n\n<p>$2 u_{xt} + 2u u_{xx} + u_x^2 = 0$</p>\n\n<p>$2 x_{x} t_{t} + 2 x t x_{xx} t + x_{x}^2 t^2 = 0$</p>\n\n<p>We divide by $x_{x} t^2$:</p>\n\n<p>$2 t_{t} / t^2 + 2 x x_{xx} / x_{x} + x_{x} = 0$</p>\n\n<p>Please, verify.</p>\n" }, { "answer_id": 2250737, "score": 2, "text": "<p><strong>Modifying the problem.</strong> Rather than consider the PDE $u_{xt}+u\\ u_{xx}+\\frac{1}{2}u_{x}^2=0$ with initial condition $u(x,0)=u_0(x)$ as asked above, I will consider the following variant.\n$$\n\\text{Solve }u_{xy}+u\\ u_{xx} + u_x^2=0\\text{ subject to }u(x,0)=f(x).\\qquad(\\star)\n$$\nThere are three differences between this question and that which was asked originally.</p>\n\n<ol>\n<li>The coefficient of $u_x^2$ has changed from $\\frac{1}{2}$ to $1$.</li>\n<li>The variable $t$ has been renamed to $y$.</li>\n<li>The initial function $u_0(x)$ has been renamed to $f(x)$.</li>\n</ol>\n\n<p>Only (1) represents a significant modification of the problem. It makes the solution more tractable and enables it to be found using an elementary application of the method of characteristics. 
For these reasons, it is conceivable that this was the intended question.</p>\n\n<p>Note: I will not delve into regularity of the solutions in this answer.</p>\n\n<p><strong>Reduction to a first order quasilinear PDE.</strong> Write the equation as\n$$\n\\frac{\\partial}{\\partial x}\\left(u_y+u\\ u_x\\right)=0.\n$$\nThus $(\\star)$ is equivalent to\n$$\nu_y+u\\ u_x=g(y),\\qquad u(x,0)=f(x),\\qquad (\\star\\star)\n$$\nwhere $g(y)$ is an arbitrary function of $y$ (with sufficient regularity).</p>\n\n<p><strong>Method of characteristics.</strong>\nPerhaps the simplest formulation of the method of characteristics is for quasilinear first order PDEs. These are PDEs of the form\n$$a(x,y,u)u_x+b(x,y,u)u_y=c(x,y,u).$$\nTo solve this equation, one regards the solution as a surface $z=u(x,y)$ in $xyz$-space. Let $s$ parametrize the initial curve $\\bigl(s,0,f(s)\\bigr)$ and let $t$ be a second parameter, which can be thought of as the distance flowed along a characteristic curve emanating from $\\bigl(s,0,f(s)\\bigr)$.</p>\n\n<p>The characteristic equations are then\n$$\n\\frac{dx}{dt}=a(x,y,z),\\quad \\frac{dy}{dt}=b(x,y,z),\\quad \\frac{dz}{dt}=c(x,y,z).\n$$\nReturning to our equation $(\\star\\star)$, this reduces to $a(x,y,u)=u$ and $b(x,y,u)=1$ and $c(x,y,u)=g(y)$.\nThus \n$$\n\\frac{dx}{dt}=z,\\quad \\frac{dy}{dt}=1,\\quad \\frac{dz}{dt}=g(y)\n$$\nwith initial conditions $x(0)=s$ and $y(0)=0$ and $z(0)=f(s)$.</p>\n\n<p>The solution to this system is\n$$\nx=s+zt,\\quad y=t,\\quad z=f(s)+h(t),\n$$\nwhere $h(t)$ is the antiderivative of $g(t)$ satisfying $h(0)=0$. Since $g$ was arbitrary, so is $h$ given $h(0)=0$.</p>\n\n<p><strong>The solution.</strong> Now we eliminate all occurrences of $t$ by replacing them with $y$, then eliminate $s$ by writing $s=x-zy$. Finally, replace $z$ with $u$ to obtain the implicit equation\n$$\n\\boxed{u=f(x-uy)+h(y)},\n$$\nwhere $h(y)$ is any sufficiently regular function satisfying $h(0)=0$. 
This is an implicit equation for the general solution of $(\\star)$.</p>\n\n<p><strong>TL;DR.</strong> Change the $\\frac{1}{2}$ in the original question to $1$ to obtain a PDE solvable by the method of characteristics.</p>\n" }, { "answer_id": 2110466, "score": -3, "text": "<p>Not so sure:</p>\n\n<p>$u_{xt} + uu_{xx} + \\frac{1}{2}u_x^2 = 0$</p>\n\n<p>$2u_{xt} + 2uu_{xx} + u_x^2 = 0$</p>\n\n<p>Let's define</p>\n\n<p>$u = x t$</p>\n\n<p>Then:</p>\n\n<p>$u_{x} = x_{x} t$</p>\n\n<p>$u_{t} = x t_{t}$</p>\n\n<p>$u_{xt} = x_{x} t_{t}$</p>\n\n<p>$u_{xx} = x_{xx} t$</p>\n\n<p>Now:</p>\n\n<p>$2 u_{xt} + 2u u_{xx} + u_x^2 = 0$</p>\n\n<p>$2 x_{x} t_{t} + 2 x t x_{xx} t + x_{x}^2 t^2 = 0$</p>\n\n<p>We divide by $x_{x} t^2$:</p>\n\n<p>$2 t_{t} / t^2 + 2 x x_{xx} / x_{x} + x_{x} = 0$</p>\n\n<p>Please, verify.</p>\n" }, { "answer_id": 2250737, "score": 2, "text": "<p><strong>Modifying the problem.</strong> Rather than consider the PDE $u_{xt}+u\\ u_{xx}+\\frac{1}{2}u_{x}^2=0$ with initial condition $u(x,0)=u_0(x)$ as asked above, I will consider the following variant.\n$$\n\\text{Solve }u_{xy}+u\\ u_{xx} + u_x^2=0\\text{ subject to }u(x,0)=f(x).\\qquad(\\star)\n$$\nThere are three differences between this question and that which was asked originally.</p>\n\n<ol>\n<li>The coefficient of $u_x^2$ has changed from $\\frac{1}{2}$ to $1$.</li>\n<li>The variable $t$ has been renamed to $y$.</li>\n<li>The initial function $u_0(x)$ has been renamed to $f(x)$.</li>\n</ol>\n\n<p>Only (1) represents a significant modification of the problem. It makes the solution more tractable and enables it to be found using an elementary application of the method of characteristics. 
For these reasons, it is conceivable that this was the intended question.</p>\n\n<p>Note: I will not delve into regularity of the solutions in this answer.</p>\n\n<p><strong>Reduction to a first order quasilinear PDE.</strong> Write the equation as\n$$\n\\frac{\\partial}{\\partial x}\\left(u_y+u\\ u_x\\right)=0.\n$$\nThus $(\\star)$ is equivalent to\n$$\nu_y+u\\ u_x=g(y),\\qquad u(x,0)=f(x),\\qquad (\\star\\star)\n$$\nwhere $g(y)$ is an arbitrary function of $y$ (with sufficient regularity).</p>\n\n<p><strong>Method of characteristics.</strong>\nPerhaps the simplest formulation of the method of characteristics is for quasilinear first order PDEs. These are PDEs of the form\n$$a(x,y,u)u_x+b(x,y,u)u_y=c(x,y,u).$$\nTo solve this equation, one regards the solution as a surface $z=u(x,y)$ in $xyz$-space. Let $s$ parametrize the initial curve $\\bigl(s,0,f(s)\\bigr)$ and let $t$ be a second parameter, which can be thought of as the distance flowed along a characteristic curve emanating from $\\bigl(s,0,f(s)\\bigr)$.</p>\n\n<p>The characteristic equations are then\n$$\n\\frac{dx}{dt}=a(x,y,z),\\quad \\frac{dy}{dt}=b(x,y,z),\\quad \\frac{dz}{dt}=c(x,y,z).\n$$\nReturning to our equation $(\\star\\star)$, this reduces to $a(x,y,u)=u$ and $b(x,y,u)=1$ and $c(x,y,u)=g(y)$.\nThus \n$$\n\\frac{dx}{dt}=z,\\quad \\frac{dy}{dt}=1,\\quad \\frac{dz}{dt}=g(y)\n$$\nwith initial conditions $x(0)=s$ and $y(0)=0$ and $z(0)=f(s)$.</p>\n\n<p>The solution to this system is\n$$\nx=s+zt,\\quad y=t,\\quad z=f(s)+h(t),\n$$\nwhere $h(t)$ is the antiderivative of $g(t)$ satisfying $h(0)=0$. Since $g$ was arbitrary, so is $h$ given $h(0)=0$.</p>\n\n<p><strong>The solution.</strong> Now we eliminate all occurrences of $t$ by replacing them with $y$, then eliminate $s$ by writing $s=x-zy$. Finally, replace $z$ with $u$ to obtain the implicit equation\n$$\n\\boxed{u=f(x-uy)+h(y)},\n$$\nwhere $h(y)$ is any sufficiently regular function satisfying $h(0)=0$. 
This is an implicit equation for the general solution of $(\\star)$.</p>\n\n<p><strong>TL;DR.</strong> Change the $\\frac{1}{2}$ in the original question to $1$ to obtain a PDE solvable by the method of characteristics.</p>\n" }, { "answer_id": 2110466, "score": -3, "text": "<p>Not so sure:</p>\n\n<p>$u_{xt} + uu_{xx} + \\frac{1}{2}u_x^2 = 0$</p>\n\n<p>$2u_{xt} + 2uu_{xx} + u_x^2 = 0$</p>\n\n<p>Let's define</p>\n\n<p>$u = x t$</p>\n\n<p>Then:</p>\n\n<p>$u_{x} = x_{x} t$</p>\n\n<p>$u_{t} = x t_{t}$</p>\n\n<p>$u_{xt} = x_{x} t_{t}$</p>\n\n<p>$u_{xx} = x_{xx} t$</p>\n\n<p>Now:</p>\n\n<p>$2 u_{xt} + 2u u_{xx} + u_x^2 = 0$</p>\n\n<p>$2 x_{x} t_{t} + 2 x t x_{xx} t + x_{x}^2 t^2 = 0$</p>\n\n<p>We divide by $x_{x} t^2$:</p>\n\n<p>$2 t_{t} / t^2 + 2 x x_{xx} / x_{x} + x_{x} = 0$</p>\n\n<p>Please, verify.</p>\n" } ]
Question: How can I solve $u_{xt} + uu_{xx} + \frac{1}{2}u_x^2 = 0$ with the method of characteristics. Answer 1: <p><strong>Modifying the problem.</strong> Rather than consider the PDE $u_{xt}+u\ u_{xx}+\frac{1}{2}u_{x}^2=0$ with initial condition $u(x,0)=u_0(x)$ as asked above, I will consider the following variant. $$ \text{Solve }u_{xy}+u\ u_{xx} + u_x^2=0\text{ subject to }u(x,0)=f(x).\qquad(\star) $$ There are three differences between this question and that which was asked originally.</p> <ol> <li>The coefficient of $u_x^2$ has changed from $\frac{1}{2}$ to $1$.</li> <li>The variable $t$ has been renamed to $y$.</li> <li>The initial function $u_0(x)$ has been renamed to $f(x)$.</li> </ol> <p>Only (1) represents a significant modification of the problem. It makes the solution more tractable and enables it to be found using an elementary application of the method of characteristics. For these reasons, it is conceivable that this was the intended question.</p> <p>Note: I will not delve into regularity of the solutions in this answer.</p> <p><strong>Reduction to a first order quasilinear PDE.</strong> Write the equation as $$ \frac{\partial}{\partial x}\left(u_y+u\ u_x\right)=0. $$ Thus $(\star)$ is equivalent to $$ u_y+u\ u_x=g(y),\qquad u(x,0)=f(x),\qquad (\star\star) $$ where $g(y)$ is an arbitrary function of $y$ (with sufficient regularity).</p> <p><strong>Method of characteristics.</strong> Perhaps the simplest formulation of the method of characteristics is for quasilinear first order PDEs. These are PDEs of the form $$a(x,y,u)u_x+b(x,y,u)u_y=c(x,y,u).$$ To solve this equation, one regards the solution as a surface $z=u(x,y)$ in $xyz$-space. 
Let $s$ parametrize the initial curve $\bigl(s,0,f(s)\bigr)$ and let $t$ be a second parameter, which can be thought of as the distance flowed along a characteristic curve emanating from $\bigl(s,0,f(s)\bigr)$.</p> <p>The characteristic equations are then $$ \frac{dx}{dt}=a(x,y,z),\quad \frac{dy}{dt}=b(x,y,z),\quad \frac{dz}{dt}=c(x,y,z). $$ Returning to our equation $(\star\star)$, this reduces to $a(x,y,u)=u$ and $b(x,y,u)=1$ and $c(x,y,u)=g(y)$. Thus $$ \frac{dx}{dt}=z,\quad \frac{dy}{dt}=1,\quad \frac{dz}{dt}=g(y) $$ with initial conditions $x(0)=s$ and $y(0)=0$ and $z(0)=f(s)$.</p> <p>The solution to this system is $$ x=s+f(s)\,t+H(t),\quad y=t,\quad z=f(s)+h(t), $$ where $h(t)$ is the antiderivative of $g(t)$ satisfying $h(0)=0$, and $H(t)$ is the antiderivative of $h(t)$ satisfying $H(0)=0$. (Note that $dx/dt=z$ must be integrated with $z=f(s)+h(t)$ varying along the characteristic, which is why $H(t)$ appears rather than $t\,h(t)$.) Since $g$ was arbitrary, so is $h$ given $h(0)=0$.</p> <p><strong>The solution.</strong> Now we eliminate all occurrences of $t$ by replacing them with $y$. The third equation gives $f(s)=z-h(y)$, so the first becomes $x=s+(z-h(y))\,y+H(y)$, i.e. $s=x-(z-h(y))\,y-H(y)$. Finally, replace $z$ with $u$ to obtain the implicit equation $$ \boxed{u=f\bigl(x-(u-h(y))\,y-H(y)\bigr)+h(y)}, $$ where $h(y)$ is any sufficiently regular function satisfying $h(0)=0$ and $H$ is its antiderivative with $H(0)=0$. A direct check by implicit differentiation gives $u_x=f'/(1+yf')$ and $(1+yf')\,u_y=h'+f'(yh'-u)$, so $u_y+u\,u_x=h'(y)=g(y)$, as required. For $g\equiv0$ this reduces to the familiar Burgers relation $u=f(x-uy)$. 
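Before moving on, the simplest instance of the boxed relation can be test-driven numerically. A minimal sketch, assuming $g\equiv0$ (so the relation reduces to $u=f(x-uy)$) and a hypothetical sample profile $f$ not taken from the answer:

```python
import math

# Sanity check of the g ≡ 0 special case, where the implicit solution
# reduces to the classical Burgers relation u = f(x - u*y).
# The profile f below is a hypothetical example, chosen so |f'| stays
# well below 1 and the fixed-point map is a contraction for small y.

def f(x):
    return 0.5 * math.exp(-x * x)

def u_implicit(x, y, iters=100):
    """Solve u = f(x - u*y) by fixed-point iteration.

    The map u -> f(x - u*y) contracts for small |y|, i.e. before
    characteristics cross, so the iteration converges.
    """
    u = f(x)
    for _ in range(iters):
        u = f(x - u * y)
    return u

# Finite-difference check that u_y + u*u_x vanishes at a sample point.
x0, y0, step = 0.3, 0.2, 1e-5
u = u_implicit(x0, y0)
u_x = (u_implicit(x0 + step, y0) - u_implicit(x0 - step, y0)) / (2 * step)
u_y = (u_implicit(x0, y0 + step) - u_implicit(x0, y0 - step)) / (2 * step)
residual = u_y + u * u_x
```

The same fixed-point approach works for any smooth $f$ with $|y\,f'|&lt;1$; once characteristics cross, the iteration (and the classical solution itself) breaks down.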
This is an implicit equation for the general solution of $(\star)$.</p> <p><strong>TL;DR.</strong> Change the $\frac{1}{2}$ in the original question to $1$ to obtain a PDE solvable by the method of characteristics.</p> Answer 2: <p>Not so sure:</p> <p>$u_{xt} + uu_{xx} + \frac{1}{2}u_x^2 = 0$</p> <p>$2u_{xt} + 2uu_{xx} + u_x^2 = 0$</p> <p>Let's define</p> <p>$u = x t$</p> <p>Then:</p> <p>$u_{x} = x_{x} t$</p> <p>$u_{t} = x t_{t}$</p> <p>$u_{xt} = x_{x} t_{t}$</p> <p>$u_{xx} = x_{xx} t$</p> <p>Now:</p> <p>$2 u_{xt} + 2u u_{xx} + u_x^2 = 0$</p> <p>$2 x_{x} t_{t} + 2 x t x_{xx} t + x_{x}^2 t^2 = 0$</p> <p>We divide by $x_{x} t^2$:</p> <p>$2 t_{t} / t^2 + 2 x x_{xx} / x_{x} + x_{x} = 0$</p> <p>Please, verify.</p> Answer 3: <p><strong>Modifying the problem.</strong> Rather than consider the PDE $u_{xt}+u\ u_{xx}+\frac{1}{2}u_{x}^2=0$ with initial condition $u(x,0)=u_0(x)$ as asked above, I will consider the following variant. $$ \text{Solve }u_{xy}+u\ u_{xx} + u_x^2=0\text{ subject to }u(x,0)=f(x).\qquad(\star) $$ There are three differences between this question and that which was asked originally.</p> <ol> <li>The coefficient of $u_x^2$ has changed from $\frac{1}{2}$ to $1$.</li> <li>The variable $t$ has been renamed to $y$.</li> <li>The initial function $u_0(x)$ has been renamed to $f(x)$.</li> </ol> <p>Only (1) represents a significant modification of the problem. It makes the solution more tractable and enables it to be found using an elementary application of the method of characteristics. For these reasons, it is conceivable that this was the intended question.</p> <p>Note: I will not delve into regularity of the solutions in this answer.</p> <p><strong>Reduction to a first order quasilinear PDE.</strong> Write the equation as $$ \frac{\partial}{\partial x}\left(u_y+u\ u_x\right)=0. 
$$ Thus $(\star)$ is equivalent to $$ u_y+u\ u_x=g(y),\qquad u(x,0)=f(x),\qquad (\star\star) $$ where $g(y)$ is an arbitrary function of $y$ (with sufficient regularity).</p> <p><strong>Method of characteristics.</strong> Perhaps the simplest formulation of the method of characteristics is for quasilinear first order PDEs. These are PDEs of the form $$a(x,y,u)u_x+b(x,y,u)u_y=c(x,y,u).$$ To solve this equation, one regards the solution as a surface $z=u(x,y)$ in $xyz$-space. Let $s$ parametrize the initial curve $\bigl(s,0,f(s)\bigr)$ and let $t$ be a second parameter, which can be thought of as the distance flowed along a characteristic curve emanating from $\bigl(s,0,f(s)\bigr)$.</p> <p>The characteristic equations are then $$ \frac{dx}{dt}=a(x,y,z),\quad \frac{dy}{dt}=b(x,y,z),\quad \frac{dz}{dt}=c(x,y,z). $$ Returning to our equation $(\star\star)$, this reduces to $a(x,y,u)=u$ and $b(x,y,u)=1$ and $c(x,y,u)=g(y)$. Thus $$ \frac{dx}{dt}=z,\quad \frac{dy}{dt}=1,\quad \frac{dz}{dt}=g(y) $$ with initial conditions $x(0)=s$ and $y(0)=0$ and $z(0)=f(s)$.</p> <p>The solution to this system is $$ x=s+zt,\quad y=t,\quad z=f(s)+h(t), $$ where $h(t)$ is the antiderivative of $g(t)$ satisfying $h(0)=0$. Since $g$ was arbitrary, so is $h$ given $h(0)=0$.</p> <p><strong>The solution.</strong> Now we eliminate all occurrences of $t$ by replacing them with $y$, then eliminate $s$ by writing $s=x-zy$. Finally, replace $z$ with $u$ to obtain the implicit equation $$ \boxed{u=f(x-uy)+h(y)}, $$ where $h(y)$ is any sufficiently regular function satisfying $h(0)=0$. 
This is an implicit equation for the general solution of $(\star)$.</p> <p><strong>TL;DR.</strong> Change the $\frac{1}{2}$ in the original question to $1$ to obtain a PDE solvable by the method of characteristics.</p> Answer 4: <p>Not so sure:</p> <p>$u_{xt} + uu_{xx} + \frac{1}{2}u_x^2 = 0$</p> <p>$2u_{xt} + 2uu_{xx} + u_x^2 = 0$</p> <p>Let's define</p> <p>$u = x t$</p> <p>Then:</p> <p>$u_{x} = x_{x} t$</p> <p>$u_{t} = x t_{t}$</p> <p>$u_{xt} = x_{x} t_{t}$</p> <p>$u_{xx} = x_{xx} t$</p> <p>Now:</p> <p>$2 u_{xt} + 2u u_{xx} + u_x^2 = 0$</p> <p>$2 x_{x} t_{t} + 2 x t x_{xx} t + x_{x}^2 t^2 = 0$</p> <p>We divide by $x_{x} t^2$:</p> <p>$2 t_{t} / t^2 + 2 x x_{xx} / x_{x} + x_{x} = 0$</p> <p>Please, verify.</p> Answer 5: <p><strong>Modifying the problem.</strong> Rather than consider the PDE $u_{xt}+u\ u_{xx}+\frac{1}{2}u_{x}^2=0$ with initial condition $u(x,0)=u_0(x)$ as asked above, I will consider the following variant. $$ \text{Solve }u_{xy}+u\ u_{xx} + u_x^2=0\text{ subject to }u(x,0)=f(x).\qquad(\star) $$ There are three differences between this question and that which was asked originally.</p> <ol> <li>The coefficient of $u_x^2$ has changed from $\frac{1}{2}$ to $1$.</li> <li>The variable $t$ has been renamed to $y$.</li> <li>The initial function $u_0(x)$ has been renamed to $f(x)$.</li> </ol> <p>Only (1) represents a significant modification of the problem. It makes the solution more tractable and enables it to be found using an elementary application of the method of characteristics. For these reasons, it is conceivable that this was the intended question.</p> <p>Note: I will not delve into regularity of the solutions in this answer.</p> <p><strong>Reduction to a first order quasilinear PDE.</strong> Write the equation as $$ \frac{\partial}{\partial x}\left(u_y+u\ u_x\right)=0. 
$$ Thus $(\star)$ is equivalent to $$ u_y+u\ u_x=g(y),\qquad u(x,0)=f(x),\qquad (\star\star) $$ where $g(y)$ is an arbitrary function of $y$ (with sufficient regularity).</p> <p><strong>Method of characteristics.</strong> Perhaps the simplest formulation of the method of characteristics is for quasilinear first order PDEs. These are PDEs of the form $$a(x,y,u)u_x+b(x,y,u)u_y=c(x,y,u).$$ To solve this equation, one regards the solution as a surface $z=u(x,y)$ in $xyz$-space. Let $s$ parametrize the initial curve $\bigl(s,0,f(s)\bigr)$ and let $t$ be a second parameter, which can be thought of as the distance flowed along a characteristic curve emanating from $\bigl(s,0,f(s)\bigr)$.</p> <p>The characteristic equations are then $$ \frac{dx}{dt}=a(x,y,z),\quad \frac{dy}{dt}=b(x,y,z),\quad \frac{dz}{dt}=c(x,y,z). $$ Returning to our equation $(\star\star)$, this reduces to $a(x,y,u)=u$ and $b(x,y,u)=1$ and $c(x,y,u)=g(y)$. Thus $$ \frac{dx}{dt}=z,\quad \frac{dy}{dt}=1,\quad \frac{dz}{dt}=g(y) $$ with initial conditions $x(0)=s$ and $y(0)=0$ and $z(0)=f(s)$.</p> <p>The solution to this system is $$ x=s+zt,\quad y=t,\quad z=f(s)+h(t), $$ where $h(t)$ is the antiderivative of $g(t)$ satisfying $h(0)=0$. Since $g$ was arbitrary, so is $h$ given $h(0)=0$.</p> <p><strong>The solution.</strong> Now we eliminate all occurrences of $t$ by replacing them with $y$, then eliminate $s$ by writing $s=x-zy$. Finally, replace $z$ with $u$ to obtain the implicit equation $$ \boxed{u=f(x-uy)+h(y)}, $$ where $h(y)$ is any sufficiently regular function satisfying $h(0)=0$. 
This is an implicit equation for the general solution of $(\star)$.</p> <p><strong>TL;DR.</strong> Change the $\frac{1}{2}$ in the original question to $1$ to obtain a PDE solvable by the method of characteristics.</p> Answer 6: <p>Not so sure:</p> <p>$u_{xt} + uu_{xx} + \frac{1}{2}u_x^2 = 0$</p> <p>$2u_{xt} + 2uu_{xx} + u_x^2 = 0$</p> <p>Let's define</p> <p>$u = x t$</p> <p>Then:</p> <p>$u_{x} = x_{x} t$</p> <p>$u_{t} = x t_{t}$</p> <p>$u_{xt} = x_{x} t_{t}$</p> <p>$u_{xx} = x_{xx} t$</p> <p>Now:</p> <p>$2 u_{xt} + 2u u_{xx} + u_x^2 = 0$</p> <p>$2 x_{x} t_{t} + 2 x t x_{xx} t + x_{x}^2 t^2 = 0$</p> <p>We divide by $x_{x} t^2$:</p> <p>$2 t_{t} / t^2 + 2 x x_{xx} / x_{x} + x_{x} = 0$</p> <p>Please, verify.</p> Answer 7: <p><strong>Modifying the problem.</strong> Rather than consider the PDE $u_{xt}+u\ u_{xx}+\frac{1}{2}u_{x}^2=0$ with initial condition $u(x,0)=u_0(x)$ as asked above, I will consider the following variant. $$ \text{Solve }u_{xy}+u\ u_{xx} + u_x^2=0\text{ subject to }u(x,0)=f(x).\qquad(\star) $$ There are three differences between this question and that which was asked originally.</p> <ol> <li>The coefficient of $u_x^2$ has changed from $\frac{1}{2}$ to $1$.</li> <li>The variable $t$ has been renamed to $y$.</li> <li>The initial function $u_0(x)$ has been renamed to $f(x)$.</li> </ol> <p>Only (1) represents a significant modification of the problem. It makes the solution more tractable and enables it to be found using an elementary application of the method of characteristics. For these reasons, it is conceivable that this was the intended question.</p> <p>Note: I will not delve into regularity of the solutions in this answer.</p> <p><strong>Reduction to a first order quasilinear PDE.</strong> Write the equation as $$ \frac{\partial}{\partial x}\left(u_y+u\ u_x\right)=0. 
$$ Thus $(\star)$ is equivalent to $$ u_y+u\ u_x=g(y),\qquad u(x,0)=f(x),\qquad (\star\star) $$ where $g(y)$ is an arbitrary function of $y$ (with sufficient regularity).</p> <p><strong>Method of characteristics.</strong> Perhaps the simplest formulation of the method of characteristics is for quasilinear first order PDEs. These are PDEs of the form $$a(x,y,u)u_x+b(x,y,u)u_y=c(x,y,u).$$ To solve this equation, one regards the solution as a surface $z=u(x,y)$ in $xyz$-space. Let $s$ parametrize the initial curve $\bigl(s,0,f(s)\bigr)$ and let $t$ be a second parameter, which can be thought of as the distance flowed along a characteristic curve emanating from $\bigl(s,0,f(s)\bigr)$.</p> <p>The characteristic equations are then $$ \frac{dx}{dt}=a(x,y,z),\quad \frac{dy}{dt}=b(x,y,z),\quad \frac{dz}{dt}=c(x,y,z). $$ Returning to our equation $(\star\star)$, this reduces to $a(x,y,u)=u$ and $b(x,y,u)=1$ and $c(x,y,u)=g(y)$. Thus $$ \frac{dx}{dt}=z,\quad \frac{dy}{dt}=1,\quad \frac{dz}{dt}=g(y) $$ with initial conditions $x(0)=s$ and $y(0)=0$ and $z(0)=f(s)$.</p> <p>The solution to this system is $$ x=s+zt,\quad y=t,\quad z=f(s)+h(t), $$ where $h(t)$ is the antiderivative of $g(t)$ satisfying $h(0)=0$. Since $g$ was arbitrary, so is $h$ given $h(0)=0$.</p> <p><strong>The solution.</strong> Now we eliminate all occurrences of $t$ by replacing them with $y$, then eliminate $s$ by writing $s=x-zy$. Finally, replace $z$ with $u$ to obtain the implicit equation $$ \boxed{u=f(x-uy)+h(y)}, $$ where $h(y)$ is any sufficiently regular function satisfying $h(0)=0$. 
This is an implicit equation for the general solution of $(\star)$.</p> <p><strong>TL;DR.</strong> Change the $\frac{1}{2}$ in the original question to $1$ to obtain a PDE solvable by the method of characteristics.</p> Answer 8: <p>Not so sure:</p> <p>$u_{xt} + uu_{xx} + \frac{1}{2}u_x^2 = 0$</p> <p>$2u_{xt} + 2uu_{xx} + u_x^2 = 0$</p> <p>Let's define</p> <p>$u = x t$</p> <p>Then:</p> <p>$u_{x} = x_{x} t$</p> <p>$u_{t} = x t_{t}$</p> <p>$u_{xt} = x_{x} t_{t}$</p> <p>$u_{xx} = x_{xx} t$</p> <p>Now:</p> <p>$2 u_{xt} + 2u u_{xx} + u_x^2 = 0$</p> <p>$2 x_{x} t_{t} + 2 x t x_{xx} t + x_{x}^2 t^2 = 0$</p> <p>We divide by $x_{x} t^2$:</p> <p>$2 t_{t} / t^2 + 2 x x_{xx} / x_{x} + x_{x} = 0$</p> <p>Please, verify.</p> Answer 9: <p><strong>Modifying the problem.</strong> Rather than consider the PDE $u_{xt}+u\ u_{xx}+\frac{1}{2}u_{x}^2=0$ with initial condition $u(x,0)=u_0(x)$ as asked above, I will consider the following variant. $$ \text{Solve }u_{xy}+u\ u_{xx} + u_x^2=0\text{ subject to }u(x,0)=f(x).\qquad(\star) $$ There are three differences between this question and that which was asked originally.</p> <ol> <li>The coefficient of $u_x^2$ has changed from $\frac{1}{2}$ to $1$.</li> <li>The variable $t$ has been renamed to $y$.</li> <li>The initial function $u_0(x)$ has been renamed to $f(x)$.</li> </ol> <p>Only (1) represents a significant modification of the problem. It makes the solution more tractable and enables it to be found using an elementary application of the method of characteristics. For these reasons, it is conceivable that this was the intended question.</p> <p>Note: I will not delve into regularity of the solutions in this answer.</p> <p><strong>Reduction to a first order quasilinear PDE.</strong> Write the equation as $$ \frac{\partial}{\partial x}\left(u_y+u\ u_x\right)=0. 
$$ Thus $(\star)$ is equivalent to $$ u_y+u\ u_x=g(y),\qquad u(x,0)=f(x),\qquad (\star\star) $$ where $g(y)$ is an arbitrary function of $y$ (with sufficient regularity).</p> <p><strong>Method of characteristics.</strong> Perhaps the simplest formulation of the method of characteristics is for quasilinear first order PDEs. These are PDEs of the form $$a(x,y,u)u_x+b(x,y,u)u_y=c(x,y,u).$$ To solve this equation, one regards the solution as a surface $z=u(x,y)$ in $xyz$-space. Let $s$ parametrize the initial curve $\bigl(s,0,f(s)\bigr)$ and let $t$ be a second parameter, which can be thought of as the distance flowed along a characteristic curve emanating from $\bigl(s,0,f(s)\bigr)$.</p> <p>The characteristic equations are then $$ \frac{dx}{dt}=a(x,y,z),\quad \frac{dy}{dt}=b(x,y,z),\quad \frac{dz}{dt}=c(x,y,z). $$ Returning to our equation $(\star\star)$, this reduces to $a(x,y,u)=u$ and $b(x,y,u)=1$ and $c(x,y,u)=g(y)$. Thus $$ \frac{dx}{dt}=z,\quad \frac{dy}{dt}=1,\quad \frac{dz}{dt}=g(y) $$ with initial conditions $x(0)=s$ and $y(0)=0$ and $z(0)=f(s)$.</p> <p>The solution to this system is $$ x=s+zt,\quad y=t,\quad z=f(s)+h(t), $$ where $h(t)$ is the antiderivative of $g(t)$ satisfying $h(0)=0$. Since $g$ was arbitrary, so is $h$ given $h(0)=0$.</p> <p><strong>The solution.</strong> Now we eliminate all occurrences of $t$ by replacing them with $y$, then eliminate $s$ by writing $s=x-zy$. Finally, replace $z$ with $u$ to obtain the implicit equation $$ \boxed{u=f(x-uy)+h(y)}, $$ where $h(y)$ is any sufficiently regular function satisfying $h(0)=0$. 
This is an implicit equation for the general solution of $(\star)$.</p> <p><strong>TL;DR.</strong> Change the $\frac{1}{2}$ in the original question to $1$ to obtain a PDE solvable by the method of characteristics.</p> Answer 10: <p>Not so sure:</p> <p>$u_{xt} + uu_{xx} + \frac{1}{2}u_x^2 = 0$</p> <p>$2u_{xt} + 2uu_{xx} + u_x^2 = 0$</p> <p>Let's define</p> <p>$u = x t$</p> <p>Then:</p> <p>$u_{x} = x_{x} t$</p> <p>$u_{t} = x t_{t}$</p> <p>$u_{xt} = x_{x} t_{t}$</p> <p>$u_{xx} = x_{xx} t$</p> <p>Now:</p> <p>$2 u_{xt} + 2u u_{xx} + u_x^2 = 0$</p> <p>$2 x_{x} t_{t} + 2 x t x_{xx} t + x_{x}^2 t^2 = 0$</p> <p>We divide by $x_{x} t^2$:</p> <p>$2 t_{t} / t^2 + 2 x x_{xx} / x_{x} + x_{x} = 0$</p> <p>Please, verify.</p> Answer 11: <p><strong>Modifying the problem.</strong> Rather than consider the PDE $u_{xt}+u\ u_{xx}+\frac{1}{2}u_{x}^2=0$ with initial condition $u(x,0)=u_0(x)$ as asked above, I will consider the following variant. $$ \text{Solve }u_{xy}+u\ u_{xx} + u_x^2=0\text{ subject to }u(x,0)=f(x).\qquad(\star) $$ There are three differences between this question and that which was asked originally.</p> <ol> <li>The coefficient of $u_x^2$ has changed from $\frac{1}{2}$ to $1$.</li> <li>The variable $t$ has been renamed to $y$.</li> <li>The initial function $u_0(x)$ has been renamed to $f(x)$.</li> </ol> <p>Only (1) represents a significant modification of the problem. It makes the solution more tractable and enables it to be found using an elementary application of the method of characteristics. For these reasons, it is conceivable that this was the intended question.</p> <p>Note: I will not delve into regularity of the solutions in this answer.</p> <p><strong>Reduction to a first order quasilinear PDE.</strong> Write the equation as $$ \frac{\partial}{\partial x}\left(u_y+u\ u_x\right)=0. 
$$ Thus $(\star)$ is equivalent to $$ u_y+u\ u_x=g(y),\qquad u(x,0)=f(x),\qquad (\star\star) $$ where $g(y)$ is an arbitrary function of $y$ (with sufficient regularity).</p> <p><strong>Method of characteristics.</strong> Perhaps the simplest formulation of the method of characteristics is for quasilinear first order PDEs. These are PDEs of the form $$a(x,y,u)u_x+b(x,y,u)u_y=c(x,y,u).$$ To solve this equation, one regards the solution as a surface $z=u(x,y)$ in $xyz$-space. Let $s$ parametrize the initial curve $\bigl(s,0,f(s)\bigr)$ and let $t$ be a second parameter, which can be thought of as the distance flowed along a characteristic curve emanating from $\bigl(s,0,f(s)\bigr)$.</p> <p>The characteristic equations are then $$ \frac{dx}{dt}=a(x,y,z),\quad \frac{dy}{dt}=b(x,y,z),\quad \frac{dz}{dt}=c(x,y,z). $$ Returning to our equation $(\star\star)$, this reduces to $a(x,y,u)=u$ and $b(x,y,u)=1$ and $c(x,y,u)=g(y)$. Thus $$ \frac{dx}{dt}=z,\quad \frac{dy}{dt}=1,\quad \frac{dz}{dt}=g(y) $$ with initial conditions $x(0)=s$ and $y(0)=0$ and $z(0)=f(s)$.</p> <p>The solution to this system is $$ x=s+zt,\quad y=t,\quad z=f(s)+h(t), $$ where $h(t)$ is the antiderivative of $g(t)$ satisfying $h(0)=0$. Since $g$ was arbitrary, so is $h$ given $h(0)=0$.</p> <p><strong>The solution.</strong> Now we eliminate all occurrences of $t$ by replacing them with $y$, then eliminate $s$ by writing $s=x-zy$. Finally, replace $z$ with $u$ to obtain the implicit equation $$ \boxed{u=f(x-uy)+h(y)}, $$ where $h(y)$ is any sufficiently regular function satisfying $h(0)=0$. 
This is an implicit equation for the general solution of $(\star)$.</p> <p><strong>TL;DR.</strong> Change the $\frac{1}{2}$ in the original question to $1$ to obtain a PDE solvable by the method of characteristics.</p> Answer 12: <p>Not so sure:</p> <p>$u_{xt} + uu_{xx} + \frac{1}{2}u_x^2 = 0$</p> <p>$2u_{xt} + 2uu_{xx} + u_x^2 = 0$</p> <p>Let's define</p> <p>$u = x t$</p> <p>Then:</p> <p>$u_{x} = x_{x} t$</p> <p>$u_{t} = x t_{t}$</p> <p>$u_{xt} = x_{x} t_{t}$</p> <p>$u_{xx} = x_{xx} t$</p> <p>Now:</p> <p>$2 u_{xt} + 2u u_{xx} + u_x^2 = 0$</p> <p>$2 x_{x} t_{t} + 2 x t x_{xx} t + x_{x}^2 t^2 = 0$</p> <p>We divide by $x_{x} t^2$:</p> <p>$2 t_{t} / t^2 + 2 x x_{xx} / x_{x} + x_{x} = 0$</p> <p>Please, verify.</p> Answer 13: <p><strong>Modifying the problem.</strong> Rather than consider the PDE $u_{xt}+u\ u_{xx}+\frac{1}{2}u_{x}^2=0$ with initial condition $u(x,0)=u_0(x)$ as asked above, I will consider the following variant. $$ \text{Solve }u_{xy}+u\ u_{xx} + u_x^2=0\text{ subject to }u(x,0)=f(x).\qquad(\star) $$ There are three differences between this question and that which was asked originally.</p> <ol> <li>The coefficient of $u_x^2$ has changed from $\frac{1}{2}$ to $1$.</li> <li>The variable $t$ has been renamed to $y$.</li> <li>The initial function $u_0(x)$ has been renamed to $f(x)$.</li> </ol> <p>Only (1) represents a significant modification of the problem. It makes the solution more tractable and enables it to be found using an elementary application of the method of characteristics. For these reasons, it is conceivable that this was the intended question.</p> <p>Note: I will not delve into regularity of the solutions in this answer.</p> <p><strong>Reduction to a first order quasilinear PDE.</strong> Write the equation as $$ \frac{\partial}{\partial x}\left(u_y+u\ u_x\right)=0. 
$$ Thus $(\star)$ is equivalent to $$ u_y+u\ u_x=g(y),\qquad u(x,0)=f(x),\qquad (\star\star) $$ where $g(y)$ is an arbitrary function of $y$ (with sufficient regularity).</p> <p><strong>Method of characteristics.</strong> Perhaps the simplest formulation of the method of characteristics is for quasilinear first order PDEs. These are PDEs of the form $$a(x,y,u)u_x+b(x,y,u)u_y=c(x,y,u).$$ To solve this equation, one regards the solution as a surface $z=u(x,y)$ in $xyz$-space. Let $s$ parametrize the initial curve $\bigl(s,0,f(s)\bigr)$ and let $t$ be a second parameter, which can be thought of as the distance flowed along a characteristic curve emanating from $\bigl(s,0,f(s)\bigr)$.</p> <p>The characteristic equations are then $$ \frac{dx}{dt}=a(x,y,z),\quad \frac{dy}{dt}=b(x,y,z),\quad \frac{dz}{dt}=c(x,y,z). $$ Returning to our equation $(\star\star)$, this reduces to $a(x,y,u)=u$ and $b(x,y,u)=1$ and $c(x,y,u)=g(y)$. Thus $$ \frac{dx}{dt}=z,\quad \frac{dy}{dt}=1,\quad \frac{dz}{dt}=g(y) $$ with initial conditions $x(0)=s$ and $y(0)=0$ and $z(0)=f(s)$.</p> <p>The solution to this system is $$ x=s+zt,\quad y=t,\quad z=f(s)+h(t), $$ where $h(t)$ is the antiderivative of $g(t)$ satisfying $h(0)=0$. Since $g$ was arbitrary, so is $h$ given $h(0)=0$.</p> <p><strong>The solution.</strong> Now we eliminate all occurrences of $t$ by replacing them with $y$, then eliminate $s$ by writing $s=x-zy$. Finally, replace $z$ with $u$ to obtain the implicit equation $$ \boxed{u=f(x-uy)+h(y)}, $$ where $h(y)$ is any sufficiently regular function satisfying $h(0)=0$. 
This is an implicit equation for the general solution of $(\star)$.</p> <p><strong>TL;DR.</strong> Change the $\frac{1}{2}$ in the original question to $1$ to obtain a PDE solvable by the method of characteristics.</p> Answer 14: <p>Not so sure:</p> <p>$u_{xt} + uu_{xx} + \frac{1}{2}u_x^2 = 0$</p> <p>$2u_{xt} + 2uu_{xx} + u_x^2 = 0$</p> <p>Let's define</p> <p>$u = x t$</p> <p>Then:</p> <p>$u_{x} = x_{x} t$</p> <p>$u_{t} = x t_{t}$</p> <p>$u_{xt} = x_{x} t_{t}$</p> <p>$u_{xx} = x_{xx} t$</p> <p>Now:</p> <p>$2 u_{xt} + 2u u_{xx} + u_x^2 = 0$</p> <p>$2 x_{x} t_{t} + 2 x t x_{xx} t + x_{x}^2 t^2 = 0$</p> <p>We divide by $x_{x} t^2$:</p> <p>$2 t_{t} / t^2 + 2 x x_{xx} / x_{x} + x_{x} = 0$</p> <p>Please, verify.</p> Answer 15: <p><strong>Modifying the problem.</strong> Rather than consider the PDE $u_{xt}+u\ u_{xx}+\frac{1}{2}u_{x}^2=0$ with initial condition $u(x,0)=u_0(x)$ as asked above, I will consider the following variant. $$ \text{Solve }u_{xy}+u\ u_{xx} + u_x^2=0\text{ subject to }u(x,0)=f(x).\qquad(\star) $$ There are three differences between this question and that which was asked originally.</p> <ol> <li>The coefficient of $u_x^2$ has changed from $\frac{1}{2}$ to $1$.</li> <li>The variable $t$ has been renamed to $y$.</li> <li>The initial function $u_0(x)$ has been renamed to $f(x)$.</li> </ol> <p>Only (1) represents a significant modification of the problem. It makes the solution more tractable and enables it to be found using an elementary application of the method of characteristics. For these reasons, it is conceivable that this was the intended question.</p> <p>Note: I will not delve into regularity of the solutions in this answer.</p> <p><strong>Reduction to a first order quasilinear PDE.</strong> Write the equation as $$ \frac{\partial}{\partial x}\left(u_y+u\ u_x\right)=0. 
$$ Thus $(\star)$ is equivalent to $$ u_y+u\ u_x=g(y),\qquad u(x,0)=f(x),\qquad (\star\star) $$ where $g(y)$ is an arbitrary function of $y$ (with sufficient regularity).</p> <p><strong>Method of characteristics.</strong> Perhaps the simplest formulation of the method of characteristics is for quasilinear first order PDEs. These are PDEs of the form $$a(x,y,u)u_x+b(x,y,u)u_y=c(x,y,u).$$ To solve this equation, one regards the solution as a surface $z=u(x,y)$ in $xyz$-space. Let $s$ parametrize the initial curve $\bigl(s,0,f(s)\bigr)$ and let $t$ be a second parameter, which can be thought of as the distance flowed along a characteristic curve emanating from $\bigl(s,0,f(s)\bigr)$.</p> <p>The characteristic equations are then $$ \frac{dx}{dt}=a(x,y,z),\quad \frac{dy}{dt}=b(x,y,z),\quad \frac{dz}{dt}=c(x,y,z). $$ Returning to our equation $(\star\star)$, this reduces to $a(x,y,u)=u$ and $b(x,y,u)=1$ and $c(x,y,u)=g(y)$. Thus $$ \frac{dx}{dt}=z,\quad \frac{dy}{dt}=1,\quad \frac{dz}{dt}=g(y) $$ with initial conditions $x(0)=s$ and $y(0)=0$ and $z(0)=f(s)$.</p> <p>The solution to this system is $$ x=s+zt,\quad y=t,\quad z=f(s)+h(t), $$ where $h(t)$ is the antiderivative of $g(t)$ satisfying $h(0)=0$. Since $g$ was arbitrary, so is $h$ given $h(0)=0$.</p> <p><strong>The solution.</strong> Now we eliminate all occurrences of $t$ by replacing them with $y$, then eliminate $s$ by writing $s=x-zy$. Finally, replace $z$ with $u$ to obtain the implicit equation $$ \boxed{u=f(x-uy)+h(y)}, $$ where $h(y)$ is any sufficiently regular function satisfying $h(0)=0$. 
This is an implicit equation for the general solution of $(\star)$.</p> <p><strong>TL;DR.</strong> Change the $\frac{1}{2}$ in the original question to $1$ to obtain a PDE solvable by the method of characteristics.</p> Answer 16: <p>Not so sure:</p> <p>$u_{xt} + uu_{xx} + \frac{1}{2}u_x^2 = 0$</p> <p>$2u_{xt} + 2uu_{xx} + u_x^2 = 0$</p> <p>Let's define</p> <p>$u = x t$</p> <p>Then:</p> <p>$u_{x} = x_{x} t$</p> <p>$u_{t} = x t_{t}$</p> <p>$u_{xt} = x_{x} t_{t}$</p> <p>$u_{xx} = x_{xx} t$</p> <p>Now:</p> <p>$2 u_{xt} + 2u u_{xx} + u_x^2 = 0$</p> <p>$2 x_{x} t_{t} + 2 x t x_{xx} t + x_{x}^2 t^2 = 0$</p> <p>We divide by $x_{x} t^2$:</p> <p>$2 t_{t} / t^2 + 2 x x_{xx} / x_{x} + x_{x} = 0$</p> <p>Please, verify.</p> Answer 17: <p><strong>Modifying the problem.</strong> Rather than consider the PDE $u_{xt}+u\ u_{xx}+\frac{1}{2}u_{x}^2=0$ with initial condition $u(x,0)=u_0(x)$ as asked above, I will consider the following variant. $$ \text{Solve }u_{xy}+u\ u_{xx} + u_x^2=0\text{ subject to }u(x,0)=f(x).\qquad(\star) $$ There are three differences between this question and that which was asked originally.</p> <ol> <li>The coefficient of $u_x^2$ has changed from $\frac{1}{2}$ to $1$.</li> <li>The variable $t$ has been renamed to $y$.</li> <li>The initial function $u_0(x)$ has been renamed to $f(x)$.</li> </ol> <p>Only (1) represents a significant modification of the problem. It makes the solution more tractable and enables it to be found using an elementary application of the method of characteristics. For these reasons, it is conceivable that this was the intended question.</p> <p>Note: I will not delve into regularity of the solutions in this answer.</p> <p><strong>Reduction to a first order quasilinear PDE.</strong> Write the equation as $$ \frac{\partial}{\partial x}\left(u_y+u\ u_x\right)=0. 
$$ Thus $(\star)$ is equivalent to $$ u_y+u\ u_x=g(y),\qquad u(x,0)=f(x),\qquad (\star\star) $$ where $g(y)$ is an arbitrary function of $y$ (with sufficient regularity).</p> <p><strong>Method of characteristics.</strong> Perhaps the simplest formulation of the method of characteristics is for quasilinear first order PDEs. These are PDEs of the form $$a(x,y,u)u_x+b(x,y,u)u_y=c(x,y,u).$$ To solve this equation, one regards the solution as a surface $z=u(x,y)$ in $xyz$-space. Let $s$ parametrize the initial curve $\bigl(s,0,f(s)\bigr)$ and let $t$ be a second parameter, which can be thought of as the distance flowed along a characteristic curve emanating from $\bigl(s,0,f(s)\bigr)$.</p> <p>The characteristic equations are then $$ \frac{dx}{dt}=a(x,y,z),\quad \frac{dy}{dt}=b(x,y,z),\quad \frac{dz}{dt}=c(x,y,z). $$ Returning to our equation $(\star\star)$, this reduces to $a(x,y,u)=u$ and $b(x,y,u)=1$ and $c(x,y,u)=g(y)$. Thus $$ \frac{dx}{dt}=z,\quad \frac{dy}{dt}=1,\quad \frac{dz}{dt}=g(y) $$ with initial conditions $x(0)=s$ and $y(0)=0$ and $z(0)=f(s)$.</p> <p>The solution to this system is $$ x=s+zt,\quad y=t,\quad z=f(s)+h(t), $$ where $h(t)$ is the antiderivative of $g(t)$ satisfying $h(0)=0$. Since $g$ was arbitrary, so is $h$ given $h(0)=0$.</p> <p><strong>The solution.</strong> Now we eliminate all occurrences of $t$ by replacing them with $y$, then eliminate $s$ by writing $s=x-zy$. Finally, replace $z$ with $u$ to obtain the implicit equation $$ \boxed{u=f(x-uy)+h(y)}, $$ where $h(y)$ is any sufficiently regular function satisfying $h(0)=0$. 
This is an implicit equation for the general solution of $(\star)$.</p> <p><strong>TL;DR.</strong> Change the $\frac{1}{2}$ in the original question to $1$ to obtain a PDE solvable by the method of characteristics.</p> Answer 18: <p>Not so sure:</p> <p>$u_{xt} + uu_{xx} + \frac{1}{2}u_x^2 = 0$</p> <p>$2u_{xt} + 2uu_{xx} + u_x^2 = 0$</p> <p>Let's define</p> <p>$u = x t$</p> <p>Then:</p> <p>$u_{x} = x_{x} t$</p> <p>$u_{t} = x t_{t}$</p> <p>$u_{xt} = x_{x} t_{t}$</p> <p>$u_{xx} = x_{xx} t$</p> <p>Now:</p> <p>$2 u_{xt} + 2u u_{xx} + u_x^2 = 0$</p> <p>$2 x_{x} t_{t} + 2 x t x_{xx} t + x_{x}^2 t^2 = 0$</p> <p>We divide by $x_{x} t^2$:</p> <p>$2 t_{t} / t^2 + 2 x x_{xx} / x_{x} + x_{x} = 0$</p> <p>Please, verify.</p>
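The boxed implicit solution $u=f(x-uy)+h(y)$ from the method-of-characteristics answer above can be spot-checked numerically. Below, the concrete choices $f(s)=s$ and $h(y)=y$ are my own illustrative picks (they satisfy $h(0)=0$ and are not from the answer); with them the implicit relation solves in closed form as $u=(x+y)/(1+y)$, and a finite-difference sketch confirms that this $u$ satisfies the modified PDE $u_{xy}+u\,u_{xx}+u_x^2=0$ with $u(x,0)=x$:

```python
# Sanity check of the implicit solution u = f(x - u*y) + h(y) for the
# modified PDE u_xy + u*u_xx + u_x^2 = 0 with u(x, 0) = f(x).
# Illustrative choices (not from the answer): f(s) = s, h(y) = y, so h(0) = 0.
# The implicit relation u = (x - u*y) + y then solves explicitly to
# u = (x + y) / (1 + y).

def u(x, y):
    return (x + y) / (1.0 + y)

def pde_residual(x, y, h=1e-3):
    """u_xy + u*u_xx + u_x^2 evaluated via central finite differences."""
    ux = (u(x + h, y) - u(x - h, y)) / (2 * h)
    uxx = (u(x + h, y) - 2 * u(x, y) + u(x - h, y)) / h**2
    uxy = (u(x + h, y + h) - u(x + h, y - h)
           - u(x - h, y + h) + u(x - h, y - h)) / (4 * h**2)
    return uxy + u(x, y) * uxx + ux**2

for x0, y0 in [(0.3, 0.2), (1.5, 0.7), (-2.0, 1.1)]:
    # The PDE holds up to discretization error ...
    assert abs(pde_residual(x0, y0)) < 1e-4
    # ... the implicit relation u = f(x - u*y) + h(y) holds ...
    w = u(x0, y0)
    assert abs(w - ((x0 - w * y0) + y0)) < 1e-12
    # ... and the initial condition u(x, 0) = f(x) = x holds.
    assert abs(u(x0, 0.0) - x0) < 1e-12

print("implicit solution verified for f(s)=s, h(y)=y")
```

Other choices of $f$ and $h$ can be substituted the same way, as long as the implicit relation can be solved (or root-found) for $u$ at the sample points.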
Showing $A-I$ is invertible, when $A$ is a skew-symmetric matrix
2,110,877
9
https://math.stackexchange.com/questions/2110877/showing-a-i-is-invertible-when-a-is-a-skew-symmetric-matrix
<p>Let $A\in M_{n\times n}(\mathbb{R})$ be a skew-symmetric matrix. Show that $A-I$ is invertible and that $(A-I)^{-1}(A+I)$ is an orthogonal matrix.</p> <p>$|A-I|=|(A-I)^T|=|-(A+I)|=(-1)^n|A+I|$</p> <p>I have no clue how to solve this problem; I appreciate any help.</p>
[ { "answer_id": 2110898, "score": 16, "text": "<p>If <span class=\"math-container\">$A - I$</span> is not invertible, we can find a non-zero vector <span class=\"math-container\">$x \\in \\ker(A - I)$</span> which will satisfy <span class=\"math-container\">$Ax = x$</span>. Calculating, we have</p>\n<p><span class=\"math-container\">$$ \\left&lt; x, x \\right&gt;_{\\mathbb{R}^n} = \\left&lt; Ax, x \\right&gt;_{\\mathbb{R}^n} = (Ax)^T x = x^T A^T x = -x^T A x = -x^T x = - \\left&lt; x, x \\right&gt;_{\\mathbb{R}^n}, $$</span></p>\n<p>a contradiction. To see that <span class=\"math-container\">$B := (A - I)^{-1}(A + I)$</span> is orthogonal, we can calculate</p>\n<p><span class=\"math-container\">$$ B B^T = (A - I)^{-1}(A + I) \\left( (A - I)^{-1}(A + I) \\right)^T = (A - I)^{-1} (A + I) (A^T + I)(A^T - I)^{-1} \\\\\n= (A - I)^{-1}(A + I)(-A + I)(-A - I)^{-1} = (-1)(I - A)^{-1}(I - A)(I + A)(-1)(A + I)^{-1}\n\\\\\n= (-1)(-1)I = I.$$</span></p>\n" }, { "answer_id": 2110899, "score": 5, "text": "<p>If $(A-I)x=0$ then $Ax=x$. Note that $x^TA^T=x^T$, hence $-x^TA=x^T$. After multiplying both sides by $x$ you get $$-x^T(Ax)=x^Tx\\implies 2x^Tx=0\\implies ||x||^2=0 \\implies x=0 $$</p>\n\n<p>This means $A-I$ is invertible.</p>\n" }, { "answer_id": 2110903, "score": 3, "text": "<p>Can $1$ be an eigenvalue of a skew-symmetric matrix?</p>\n\n<p>If $Av=v$, then \n$$\nv^Tv=v^TAv=(A^Tv)^Tv=(-Av)^Tv=-v^Tv\n$$\nwhich implies $v=0$.</p>\n\n<p>Similarly, $-1$ is not an eigenvalue of $A$.</p>\n\n<p>Next\n$$\n\\bigl((A-I)^{-1}(A+I)\\bigr)^T=(A^T+I)(A^T-I)^{-1}=(A-I)(A+I)^{-1}\n$$\nand the product\n$$\n\\bigl((A-I)^{-1}(A+I)\\bigr)\\bigl((A-I)^{-1}(A+I)\\bigr)^T\n$$\nis the identity if and only if\n$$\n(A-I)^{-1}(A+I)(A-I)(A+I)^{-1}=I\n$$</p>\n\n<p>Can you finish?</p>\n" }, { "answer_id": 2110991, "score": 3, "text": "<p>Here is a slightly different approach:</p>\n\n<p>Note that $A$ has a basis of orthonormal eigenvectors (in the same manner as\nHermitian matrices). 
Suppose $Av= \\lambda v$ for a unit vector $v$, then $v^*Av = \\lambda = -v^* A^* v = - \\overline{\\lambda} $, hence all eigenvalues are\npurely imaginary. In particular, they are not equal to one, hence\n$A-I$ is invertible.</p>\n\n<p>To show that $(A-I)^{-1}(A+I)$ is orthogonal, it is sufficient to show\nthat $\\|(A-I)^{-1}(A+I) v \\| = 1$ for all unit eigenvectors $v$. Since\n$(A-I)^{-1}(A+I) v = {\\lambda +1 \\over \\lambda -1 }v$ and\n$|{\\lambda +1 \\over \\lambda -1 }| = 1$, we have the desired result.</p>\n" } ]
After multiplying both sides by $x$ you get $$-x^T(Ax)=x^Tx\\implies 2x^Tx=0\\implies ||x||^2=0 \\implies x=0 $$</p>\n\n<p>This means $A-I$ is invertible.</p>\n" }, { "answer_id": 2110903, "score": 3, "text": "<p>Can $1$ be an eigenvalue of a skew-symmetric matrix?</p>\n\n<p>If $Av=v$, then \n$$\nv^Tv=v^TAv=(A^Tv)^Tv=(-Av)^Tv=-v^Tv\n$$\nwhich implies $v=0$.</p>\n\n<p>Similarly, $-1$ is not an eigenvalue of $A$.</p>\n\n<p>Next\n$$\n\\bigl((A-I)^{-1}(A+I)\\bigr)^T=(A^T+I)(A^T-I)^{-1}=(A-I)(A+I)^{-1}\n$$\nand the product\n$$\n\\bigl((A-I)^{-1}(A+I)\\bigr)\\bigl((A-I)^{-1}(A+I)\\bigr)^T\n$$\nis the identity if and only if\n$$\n(A-I)^{-1}(A+I)(A-I)(A+I)^{-1}=I\n$$</p>\n\n<p>Can you finish?</p>\n" }, { "answer_id": 2110991, "score": 3, "text": "<p>Here is a slightly different approach:</p>\n\n<p>Note that $A$ has a basis of orthonormal eigenvectors (in the same manner as\nHermitian matrices). Suppose $Av= \\lambda v$ for a unit vector $v$, then $v^*Av = \\lambda = -v^* A^* v = - \\overline{\\lambda} $, hence all eigenvalues are\npurely imaginary. In particular, they are not equal to one, hence\n$A-I$ is invertible.</p>\n\n<p>To show that $(A-I)^{-1}(A+I)$ is orthogonal, it is sufficient to show\nthat $\\|(A-I)^{-1}(A+I) v \\| = 1$ for all unit eigenvectors $v$. Since\n$(A-I)^{-1}(A+I) v = {\\lambda +1 \\over \\lambda -1 }v$ and\n$|{\\lambda +1 \\over \\lambda -1 }| = 1$, we have the desired result.</p>\n" }, { "answer_id": 2110898, "score": 16, "text": "<p>If <span class=\"math-container\">$A - I$</span> is not invertible, we can find a non-zero vector <span class=\"math-container\">$x \\in \\ker(A - I)$</span> which will satisfy <span class=\"math-container\">$Ax = x$</span>. Calculating, we have</p>\n<p><span class=\"math-container\">$$ \\left&lt; x, x \\right&gt;_{\\mathbb{R}^n} = \\left&lt; Ax, x \\right&gt;_{\\mathbb{R}^n} = (Ax)^T x = x^T A^T x = -x^T A x = -x^T x = - \\left&lt; x, x \\right&gt;_{\\mathbb{R}^n}, $$</span></p>\n<p>a contradiction. 
To see that <span class=\"math-container\">$B := (A - I)^{-1}(A + I)$</span> is orthogonal, we can calculate</p>\n<p><span class=\"math-container\">$$ B B^T = (A - I)^{-1}(A + I) \\left( (A - I)^{-1}(A + I) \\right)^T = (A - I)^{-1} (A + I) (A^T + I)(A^T - I)^{-1} \\\\\n= (A - I)^{-1}(A + I)(-A + I)(-A - I)^{-1} = (-1)(I - A)^{-1}(I - A)(I + A)(-1)(A + I)^{-1}\n\\\\\n= (-1)(-1)I = I.$$</span></p>\n" }, { "answer_id": 2110899, "score": 5, "text": "<p>If $(A-I)x=0$ then $Ax=x$. Note that $x^TA^T=x^T$, hence $-x^TA=x^T$. After multiplying both sides by $x$ you get $$-x^T(Ax)=x^Tx\\implies 2x^Tx=0\\implies ||x||^2=0 \\implies x=0 $$</p>\n\n<p>This means $A-I$ is invertible.</p>\n" }, { "answer_id": 2110903, "score": 3, "text": "<p>Can $1$ be an eigenvalue of a skew-symmetric matrix?</p>\n\n<p>If $Av=v$, then \n$$\nv^Tv=v^TAv=(A^Tv)^Tv=(-Av)^Tv=-v^Tv\n$$\nwhich implies $v=0$.</p>\n\n<p>Similarly, $-1$ is not an eigenvalue of $A$.</p>\n\n<p>Next\n$$\n\\bigl((A-I)^{-1}(A+I)\\bigr)^T=(A^T+I)(A^T-I)^{-1}=(A-I)(A+I)^{-1}\n$$\nand the product\n$$\n\\bigl((A-I)^{-1}(A+I)\\bigr)\\bigl((A-I)^{-1}(A+I)\\bigr)^T\n$$\nis the identity if and only if\n$$\n(A-I)^{-1}(A+I)(A-I)(A+I)^{-1}=I\n$$</p>\n\n<p>Can you finish?</p>\n" }, { "answer_id": 2110991, "score": 3, "text": "<p>Here is a slightly different approach:</p>\n\n<p>Note that $A$ has a basis of orthonormal eigenvectors (in the same manner as\nHermitian matrices). Suppose $Av= \\lambda v$ for a unit vector $v$, then $v^*Av = \\lambda = -v^* A^* v = - \\overline{\\lambda} $, hence all eigenvalues are\npurely imaginary. In particular, they are not equal to one, hence\n$A-I$ is invertible.</p>\n\n<p>To show that $(A-I)^{-1}(A+I)$ is orthogonal, it is sufficient to show\nthat $\\|(A-I)^{-1}(A+I) v \\| = 1$ for all unit eigenvectors $v$. 
Since\n$(A-I)^{-1}(A+I) v = {\\lambda +1 \\over \\lambda -1 }v$ and\n$|{\\lambda +1 \\over \\lambda -1 }| = 1$, we have the desired result.</p>\n" }, { "answer_id": 2110898, "score": 16, "text": "<p>If <span class=\"math-container\">$A - I$</span> is not invertible, we can find a non-zero vector <span class=\"math-container\">$x \\in \\ker(A - I)$</span> which will satisfy <span class=\"math-container\">$Ax = x$</span>. Calculating, we have</p>\n<p><span class=\"math-container\">$$ \\left&lt; x, x \\right&gt;_{\\mathbb{R}^n} = \\left&lt; Ax, x \\right&gt;_{\\mathbb{R}^n} = (Ax)^T x = x^T A^T x = -x^T A x = -x^T x = - \\left&lt; x, x \\right&gt;_{\\mathbb{R}^n}, $$</span></p>\n<p>a contradiction. To see that <span class=\"math-container\">$B := (A - I)^{-1}(A + I)$</span> is orthogonal, we can calculate</p>\n<p><span class=\"math-container\">$$ B B^T = (A - I)^{-1}(A + I) \\left( (A - I)^{-1}(A + I) \\right)^T = (A - I)^{-1} (A + I) (A^T + I)(A^T - I)^{-1} \\\\\n= (A - I)^{-1}(A + I)(-A + I)(-A - I)^{-1} = (-1)(I - A)^{-1}(I - A)(I + A)(-1)(A + I)^{-1}\n\\\\\n= (-1)(-1)I = I.$$</span></p>\n" }, { "answer_id": 2110899, "score": 5, "text": "<p>If $(A-I)x=0$ then $Ax=x$. Note that $x^TA^T=x^T$, hence $-x^TA=x^T$. 
After multiplying both sides by $x$ you get $$-x^T(Ax)=x^Tx\\implies 2x^Tx=0\\implies ||x||^2=0 \\implies x=0 $$</p>\n\n<p>This means $A-I$ is invertible.</p>\n" }, { "answer_id": 2110903, "score": 3, "text": "<p>Can $1$ be an eigenvalue of a skew-symmetric matrix?</p>\n\n<p>If $Av=v$, then \n$$\nv^Tv=v^TAv=(A^Tv)^Tv=(-Av)^Tv=-v^Tv\n$$\nwhich implies $v=0$.</p>\n\n<p>Similarly, $-1$ is not an eigenvalue of $A$.</p>\n\n<p>Next\n$$\n\\bigl((A-I)^{-1}(A+I)\\bigr)^T=(A^T+I)(A^T-I)^{-1}=(A-I)(A+I)^{-1}\n$$\nand the product\n$$\n\\bigl((A-I)^{-1}(A+I)\\bigr)\\bigl((A-I)^{-1}(A+I)\\bigr)^T\n$$\nis the identity if and only if\n$$\n(A-I)^{-1}(A+I)(A-I)(A+I)^{-1}=I\n$$</p>\n\n<p>Can you finish?</p>\n" }, { "answer_id": 2110991, "score": 3, "text": "<p>Here is a slightly different approach:</p>\n\n<p>Note that $A$ has a basis of orthonormal eigenvectors (in the same manner as\nHermitian matrices). Suppose $Av= \\lambda v$ for a unit vector $v$, then $v^*Av = \\lambda = -v^* A^* v = - \\overline{\\lambda} $, hence all eigenvalues are\npurely imaginary. In particular, they are not equal to one, hence\n$A-I$ is invertible.</p>\n\n<p>To show that $(A-I)^{-1}(A+I)$ is orthogonal, it is sufficient to show\nthat $\\|(A-I)^{-1}(A+I) v \\| = 1$ for all unit eigenvectors $v$. Since\n$(A-I)^{-1}(A+I) v = {\\lambda +1 \\over \\lambda -1 }v$ and\n$|{\\lambda +1 \\over \\lambda -1 }| = 1$, we have the desired result.</p>\n" }, { "answer_id": 2110898, "score": 16, "text": "<p>If <span class=\"math-container\">$A - I$</span> is not invertible, we can find a non-zero vector <span class=\"math-container\">$x \\in \\ker(A - I)$</span> which will satisfy <span class=\"math-container\">$Ax = x$</span>. Calculating, we have</p>\n<p><span class=\"math-container\">$$ \\left&lt; x, x \\right&gt;_{\\mathbb{R}^n} = \\left&lt; Ax, x \\right&gt;_{\\mathbb{R}^n} = (Ax)^T x = x^T A^T x = -x^T A x = -x^T x = - \\left&lt; x, x \\right&gt;_{\\mathbb{R}^n}, $$</span></p>\n<p>a contradiction. 
To see that <span class=\"math-container\">$B := (A - I)^{-1}(A + I)$</span> is orthogonal, we can calculate</p>\n<p><span class=\"math-container\">$$ B B^T = (A - I)^{-1}(A + I) \\left( (A - I)^{-1}(A + I) \\right)^T = (A - I)^{-1} (A + I) (A^T + I)(A^T - I)^{-1} \\\\\n= (A - I)^{-1}(A + I)(-A + I)(-A - I)^{-1} = (-1)(I - A)^{-1}(I - A)(I + A)(-1)(A + I)^{-1}\n\\\\\n= (-1)(-1)I = I.$$</span></p>\n" }, { "answer_id": 2110899, "score": 5, "text": "<p>If $(A-I)x=0$ then $Ax=x$. Note that $x^TA^T=x^T$, hence $-x^TA=x^T$. After multiplying both sides by $x$ you get $$-x^T(Ax)=x^Tx\\implies 2x^Tx=0\\implies ||x||^2=0 \\implies x=0 $$</p>\n\n<p>This means $A-I$ is invertible.</p>\n" }, { "answer_id": 2110903, "score": 3, "text": "<p>Can $1$ be an eigenvalue of a skew-symmetric matrix?</p>\n\n<p>If $Av=v$, then \n$$\nv^Tv=v^TAv=(A^Tv)^Tv=(-Av)^Tv=-v^Tv\n$$\nwhich implies $v=0$.</p>\n\n<p>Similarly, $-1$ is not an eigenvalue of $A$.</p>\n\n<p>Next\n$$\n\\bigl((A-I)^{-1}(A+I)\\bigr)^T=(A^T+I)(A^T-I)^{-1}=(A-I)(A+I)^{-1}\n$$\nand the product\n$$\n\\bigl((A-I)^{-1}(A+I)\\bigr)\\bigl((A-I)^{-1}(A+I)\\bigr)^T\n$$\nis the identity if and only if\n$$\n(A-I)^{-1}(A+I)(A-I)(A+I)^{-1}=I\n$$</p>\n\n<p>Can you finish?</p>\n" }, { "answer_id": 2110991, "score": 3, "text": "<p>Here is a slightly different approach:</p>\n\n<p>Note that $A$ has a basis of orthonormal eigenvectors (in the same manner as\nHermitian matrices). Suppose $Av= \\lambda v$ for a unit vector $v$, then $v^*Av = \\lambda = -v^* A^* v = - \\overline{\\lambda} $, hence all eigenvalues are\npurely imaginary. In particular, they are not equal to one, hence\n$A-I$ is invertible.</p>\n\n<p>To show that $(A-I)^{-1}(A+I)$ is orthogonal, it is sufficient to show\nthat $\\|(A-I)^{-1}(A+I) v \\| = 1$ for all unit eigenvectors $v$. 
Since\n$(A-I)^{-1}(A+I) v = {\\lambda +1 \\over \\lambda -1 }v$ and\n$|{\\lambda +1 \\over \\lambda -1 }| = 1$, we have the desired result.</p>\n" }, { "answer_id": 2110898, "score": 16, "text": "<p>If <span class=\"math-container\">$A - I$</span> is not invertible, we can find a non-zero vector <span class=\"math-container\">$x \\in \\ker(A - I)$</span> which will satisfy <span class=\"math-container\">$Ax = x$</span>. Calculating, we have</p>\n<p><span class=\"math-container\">$$ \\left&lt; x, x \\right&gt;_{\\mathbb{R}^n} = \\left&lt; Ax, x \\right&gt;_{\\mathbb{R}^n} = (Ax)^T x = x^T A^T x = -x^T A x = -x^T x = - \\left&lt; x, x \\right&gt;_{\\mathbb{R}^n}, $$</span></p>\n<p>a contradiction. To see that <span class=\"math-container\">$B := (A - I)^{-1}(A + I)$</span> is orthogonal, we can calculate</p>\n<p><span class=\"math-container\">$$ B B^T = (A - I)^{-1}(A + I) \\left( (A - I)^{-1}(A + I) \\right)^T = (A - I)^{-1} (A + I) (A^T + I)(A^T - I)^{-1} \\\\\n= (A - I)^{-1}(A + I)(-A + I)(-A - I)^{-1} = (-1)(I - A)^{-1}(I - A)(I + A)(-1)(A + I)^{-1}\n\\\\\n= (-1)(-1)I = I.$$</span></p>\n" }, { "answer_id": 2110899, "score": 5, "text": "<p>If $(A-I)x=0$ then $Ax=x$. Note that $x^TA^T=x^T$, hence $-x^TA=x^T$. 
After multiplying both sides by $x$ you get $$-x^T(Ax)=x^Tx\\implies 2x^Tx=0\\implies ||x||^2=0 \\implies x=0 $$</p>\n\n<p>This means $A-I$ is invertible.</p>\n" }, { "answer_id": 2110903, "score": 3, "text": "<p>Can $1$ be an eigenvalue of a skew-symmetric matrix?</p>\n\n<p>If $Av=v$, then \n$$\nv^Tv=v^TAv=(A^Tv)^Tv=(-Av)^Tv=-v^Tv\n$$\nwhich implies $v=0$.</p>\n\n<p>Similarly, $-1$ is not an eigenvalue of $A$.</p>\n\n<p>Next\n$$\n\\bigl((A-I)^{-1}(A+I)\\bigr)^T=(A^T+I)(A^T-I)^{-1}=(A-I)(A+I)^{-1}\n$$\nand the product\n$$\n\\bigl((A-I)^{-1}(A+I)\\bigr)\\bigl((A-I)^{-1}(A+I)\\bigr)^T\n$$\nis the identity if and only if\n$$\n(A-I)^{-1}(A+I)(A-I)(A+I)^{-1}=I\n$$</p>\n\n<p>Can you finish?</p>\n" }, { "answer_id": 2110991, "score": 3, "text": "<p>Here is a slightly different approach:</p>\n\n<p>Note that $A$ has a basis of orthonormal eigenvectors (in the same manner as\nHermitian matrices). Suppose $Av= \\lambda v$ for a unit vector $v$, then $v^*Av = \\lambda = -v^* A^* v = - \\overline{\\lambda} $, hence all eigenvalues are\npurely imaginary. In particular, they are not equal to one, hence\n$A-I$ is invertible.</p>\n\n<p>To show that $(A-I)^{-1}(A+I)$ is orthogonal, it is sufficient to show\nthat $\\|(A-I)^{-1}(A+I) v \\| = 1$ for all unit eigenvectors $v$. Since\n$(A-I)^{-1}(A+I) v = {\\lambda +1 \\over \\lambda -1 }v$ and\n$|{\\lambda +1 \\over \\lambda -1 }| = 1$, we have the desired result.</p>\n" }, { "answer_id": 2110898, "score": 16, "text": "<p>If <span class=\"math-container\">$A - I$</span> is not invertible, we can find a non-zero vector <span class=\"math-container\">$x \\in \\ker(A - I)$</span> which will satisfy <span class=\"math-container\">$Ax = x$</span>. Calculating, we have</p>\n<p><span class=\"math-container\">$$ \\left&lt; x, x \\right&gt;_{\\mathbb{R}^n} = \\left&lt; Ax, x \\right&gt;_{\\mathbb{R}^n} = (Ax)^T x = x^T A^T x = -x^T A x = -x^T x = - \\left&lt; x, x \\right&gt;_{\\mathbb{R}^n}, $$</span></p>\n<p>a contradiction. 
To see that <span class=\"math-container\">$B := (A - I)^{-1}(A + I)$</span> is orthogonal, we can calculate</p>\n<p><span class=\"math-container\">$$ B B^T = (A - I)^{-1}(A + I) \\left( (A - I)^{-1}(A + I) \\right)^T = (A - I)^{-1} (A + I) (A^T + I)(A^T - I)^{-1} \\\\\n= (A - I)^{-1}(A + I)(-A + I)(-A - I)^{-1} = (-1)(I - A)^{-1}(I - A)(I + A)(-1)(A + I)^{-1}\n\\\\\n= (-1)(-1)I = I.$$</span></p>\n" }, { "answer_id": 2110899, "score": 5, "text": "<p>If $(A-I)x=0$ then $Ax=x$. Note that $x^TA^T=x^T$, hence $-x^TA=x^T$. After multiplying both sides by $x$ you get $$-x^T(Ax)=x^Tx\\implies 2x^Tx=0\\implies ||x||^2=0 \\implies x=0 $$</p>\n\n<p>This means $A-I$ is invertible.</p>\n" }, { "answer_id": 2110903, "score": 3, "text": "<p>Can $1$ be an eigenvalue of a skew-symmetric matrix?</p>\n\n<p>If $Av=v$, then \n$$\nv^Tv=v^TAv=(A^Tv)^Tv=(-Av)^Tv=-v^Tv\n$$\nwhich implies $v=0$.</p>\n\n<p>Similarly, $-1$ is not an eigenvalue of $A$.</p>\n\n<p>Next\n$$\n\\bigl((A-I)^{-1}(A+I)\\bigr)^T=(A^T+I)(A^T-I)^{-1}=(A-I)(A+I)^{-1}\n$$\nand the product\n$$\n\\bigl((A-I)^{-1}(A+I)\\bigr)\\bigl((A-I)^{-1}(A+I)\\bigr)^T\n$$\nis the identity if and only if\n$$\n(A-I)^{-1}(A+I)(A-I)(A+I)^{-1}=I\n$$</p>\n\n<p>Can you finish?</p>\n" }, { "answer_id": 2110991, "score": 3, "text": "<p>Here is a slightly different approach:</p>\n\n<p>Note that $A$ has a basis of orthonormal eigenvectors (in the same manner as\nHermitian matrices). Suppose $Av= \\lambda v$ for a unit vector $v$, then $v^*Av = \\lambda = -v^* A^* v = - \\overline{\\lambda} $, hence all eigenvalues are\npurely imaginary. In particular, they are not equal to one, hence\n$A-I$ is invertible.</p>\n\n<p>To show that $(A-I)^{-1}(A+I)$ is orthogonal, it is sufficient to show\nthat $\\|(A-I)^{-1}(A+I) v \\| = 1$ for all unit eigenvectors $v$. 
Since\n$(A-I)^{-1}(A+I) v = {\\lambda +1 \\over \\lambda -1 }v$ and\n$|{\\lambda +1 \\over \\lambda -1 }| = 1$, we have the desired result.</p>\n" }, { "answer_id": 2110898, "score": 16, "text": "<p>If <span class=\"math-container\">$A - I$</span> is not invertible, we can find a non-zero vector <span class=\"math-container\">$x \\in \\ker(A - I)$</span> which will satisfy <span class=\"math-container\">$Ax = x$</span>. Calculating, we have</p>\n<p><span class=\"math-container\">$$ \\left&lt; x, x \\right&gt;_{\\mathbb{R}^n} = \\left&lt; Ax, x \\right&gt;_{\\mathbb{R}^n} = (Ax)^T x = x^T A^T x = -x^T A x = -x^T x = - \\left&lt; x, x \\right&gt;_{\\mathbb{R}^n}, $$</span></p>\n<p>a contradiction. To see that <span class=\"math-container\">$B := (A - I)^{-1}(A + I)$</span> is orthogonal, we can calculate</p>\n<p><span class=\"math-container\">$$ B B^T = (A - I)^{-1}(A + I) \\left( (A - I)^{-1}(A + I) \\right)^T = (A - I)^{-1} (A + I) (A^T + I)(A^T - I)^{-1} \\\\\n= (A - I)^{-1}(A + I)(-A + I)(-A - I)^{-1} = (-1)(I - A)^{-1}(I - A)(I + A)(-1)(A + I)^{-1}\n\\\\\n= (-1)(-1)I = I.$$</span></p>\n" }, { "answer_id": 2110899, "score": 5, "text": "<p>If $(A-I)x=0$ then $Ax=x$. Note that $x^TA^T=x^T$, hence $-x^TA=x^T$. 
After multiplying both sides by $x$ you get $$-x^T(Ax)=x^Tx\\implies 2x^Tx=0\\implies ||x||^2=0 \\implies x=0 $$</p>\n\n<p>This means $A-I$ is invertible.</p>\n" }, { "answer_id": 2110903, "score": 3, "text": "<p>Can $1$ be an eigenvalue of a skew-symmetric matrix?</p>\n\n<p>If $Av=v$, then \n$$\nv^Tv=v^TAv=(A^Tv)^Tv=(-Av)^Tv=-v^Tv\n$$\nwhich implies $v=0$.</p>\n\n<p>Similarly, $-1$ is not an eigenvalue of $A$.</p>\n\n<p>Next\n$$\n\\bigl((A-I)^{-1}(A+I)\\bigr)^T=(A^T+I)(A^T-I)^{-1}=(A-I)(A+I)^{-1}\n$$\nand the product\n$$\n\\bigl((A-I)^{-1}(A+I)\\bigr)\\bigl((A-I)^{-1}(A+I)\\bigr)^T\n$$\nis the identity if and only if\n$$\n(A-I)^{-1}(A+I)(A-I)(A+I)^{-1}=I\n$$</p>\n\n<p>Can you finish?</p>\n" }, { "answer_id": 2110991, "score": 3, "text": "<p>Here is a slightly different approach:</p>\n\n<p>Note that $A$ has a basis of orthonormal eigenvectors (in the same manner as\nHermitian matrices). Suppose $Av= \\lambda v$ for a unit vector $v$, then $v^*Av = \\lambda = -v^* A^* v = - \\overline{\\lambda} $, hence all eigenvalues are\npurely imaginary. In particular, they are not equal to one, hence\n$A-I$ is invertible.</p>\n\n<p>To show that $(A-I)^{-1}(A+I)$ is orthogonal, it is sufficient to show\nthat $\\|(A-I)^{-1}(A+I) v \\| = 1$ for all unit eigenvectors $v$. Since\n$(A-I)^{-1}(A+I) v = {\\lambda +1 \\over \\lambda -1 }v$ and\n$|{\\lambda +1 \\over \\lambda -1 }| = 1$, we have the desired result.</p>\n" }, { "answer_id": 2110898, "score": 16, "text": "<p>If <span class=\"math-container\">$A - I$</span> is not invertible, we can find a non-zero vector <span class=\"math-container\">$x \\in \\ker(A - I)$</span> which will satisfy <span class=\"math-container\">$Ax = x$</span>. Calculating, we have</p>\n<p><span class=\"math-container\">$$ \\left&lt; x, x \\right&gt;_{\\mathbb{R}^n} = \\left&lt; Ax, x \\right&gt;_{\\mathbb{R}^n} = (Ax)^T x = x^T A^T x = -x^T A x = -x^T x = - \\left&lt; x, x \\right&gt;_{\\mathbb{R}^n}, $$</span></p>\n<p>a contradiction. 
To see that <span class=\"math-container\">$B := (A - I)^{-1}(A + I)$</span> is orthogonal, we can calculate</p>\n<p><span class=\"math-container\">$$ B B^T = (A - I)^{-1}(A + I) \\left( (A - I)^{-1}(A + I) \\right)^T = (A - I)^{-1} (A + I) (A^T + I)(A^T - I)^{-1} \\\\\n= (A - I)^{-1}(A + I)(-A + I)(-A - I)^{-1} = (-1)(I - A)^{-1}(I - A)(I + A)(-1)(A + I)^{-1}\n\\\\\n= (-1)(-1)I = I.$$</span></p>\n" }, { "answer_id": 2110899, "score": 5, "text": "<p>If $(A-I)x=0$ then $Ax=x$. Note that $x^TA^T=x^T$, hence $-x^TA=x^T$. After multiplying both sides by $x$ you get $$-x^T(Ax)=x^Tx\\implies 2x^Tx=0\\implies ||x||^2=0 \\implies x=0 $$</p>\n\n<p>This means $A-I$ is invertible.</p>\n" }, { "answer_id": 2110903, "score": 3, "text": "<p>Can $1$ be an eigenvalue of a skew-symmetric matrix?</p>\n\n<p>If $Av=v$, then \n$$\nv^Tv=v^TAv=(A^Tv)^Tv=(-Av)^Tv=-v^Tv\n$$\nwhich implies $v=0$.</p>\n\n<p>Similarly, $-1$ is not an eigenvalue of $A$.</p>\n\n<p>Next\n$$\n\\bigl((A-I)^{-1}(A+I)\\bigr)^T=(A^T+I)(A^T-I)^{-1}=(A-I)(A+I)^{-1}\n$$\nand the product\n$$\n\\bigl((A-I)^{-1}(A+I)\\bigr)\\bigl((A-I)^{-1}(A+I)\\bigr)^T\n$$\nis the identity if and only if\n$$\n(A-I)^{-1}(A+I)(A-I)(A+I)^{-1}=I\n$$</p>\n\n<p>Can you finish?</p>\n" }, { "answer_id": 2110991, "score": 3, "text": "<p>Here is a slightly different approach:</p>\n\n<p>Note that $A$ has a basis of orthonormal eigenvectors (in the same manner as\nHermitian matrices). Suppose $Av= \\lambda v$ for a unit vector $v$, then $v^*Av = \\lambda = -v^* A^* v = - \\overline{\\lambda} $, hence all eigenvalues are\npurely imaginary. In particular, they are not equal to one, hence\n$A-I$ is invertible.</p>\n\n<p>To show that $(A-I)^{-1}(A+I)$ is orthogonal, it is sufficient to show\nthat $\\|(A-I)^{-1}(A+I) v \\| = 1$ for all unit eigenvectors $v$. 
Since\n$(A-I)^{-1}(A+I) v = {\\lambda +1 \\over \\lambda -1 }v$ and\n$|{\\lambda +1 \\over \\lambda -1 }| = 1$, we have the desired result.</p>\n" }, { "answer_id": 2110898, "score": 16, "text": "<p>If <span class=\"math-container\">$A - I$</span> is not invertible, we can find a non-zero vector <span class=\"math-container\">$x \\in \\ker(A - I)$</span> which will satisfy <span class=\"math-container\">$Ax = x$</span>. Calculating, we have</p>\n<p><span class=\"math-container\">$$ \\left&lt; x, x \\right&gt;_{\\mathbb{R}^n} = \\left&lt; Ax, x \\right&gt;_{\\mathbb{R}^n} = (Ax)^T x = x^T A^T x = -x^T A x = -x^T x = - \\left&lt; x, x \\right&gt;_{\\mathbb{R}^n}, $$</span></p>\n<p>a contradiction. To see that <span class=\"math-container\">$B := (A - I)^{-1}(A + I)$</span> is orthogonal, we can calculate</p>\n<p><span class=\"math-container\">$$ B B^T = (A - I)^{-1}(A + I) \\left( (A - I)^{-1}(A + I) \\right)^T = (A - I)^{-1} (A + I) (A^T + I)(A^T - I)^{-1} \\\\\n= (A - I)^{-1}(A + I)(-A + I)(-A - I)^{-1} = (-1)(I - A)^{-1}(I - A)(I + A)(-1)(A + I)^{-1}\n\\\\\n= (-1)(-1)I = I.$$</span></p>\n" }, { "answer_id": 2110899, "score": 5, "text": "<p>If $(A-I)x=0$ then $Ax=x$. Note that $x^TA^T=x^T$, hence $-x^TA=x^T$. 
After multiplying both sides by $x$ you get $$-x^T(Ax)=x^Tx\\implies 2x^Tx=0\\implies ||x||^2=0 \\implies x=0 $$</p>\n\n<p>This means $A-I$ is invertible.</p>\n" }, { "answer_id": 2110903, "score": 3, "text": "<p>Can $1$ be an eigenvalue of a skew-symmetric matrix?</p>\n\n<p>If $Av=v$, then \n$$\nv^Tv=v^TAv=(A^Tv)^Tv=(-Av)^Tv=-v^Tv\n$$\nwhich implies $v=0$.</p>\n\n<p>Similarly, $-1$ is not an eigenvalue of $A$.</p>\n\n<p>Next\n$$\n\\bigl((A-I)^{-1}(A+I)\\bigr)^T=(A^T+I)(A^T-I)^{-1}=(A-I)(A+I)^{-1}\n$$\nand the product\n$$\n\\bigl((A-I)^{-1}(A+I)\\bigr)\\bigl((A-I)^{-1}(A+I)\\bigr)^T\n$$\nis the identity if and only if\n$$\n(A-I)^{-1}(A+I)(A-I)(A+I)^{-1}=I\n$$</p>\n\n<p>Can you finish?</p>\n" }, { "answer_id": 2110991, "score": 3, "text": "<p>Here is a slightly different approach:</p>\n\n<p>Note that $A$ has a basis of orthonormal eigenvectors (in the same manner as\nHermitian matrices). Suppose $Av= \\lambda v$ for a unit vector $v$, then $v^*Av = \\lambda = -v^* A^* v = - \\overline{\\lambda} $, hence all eigenvalues are\npurely imaginary. In particular, they are not equal to one, hence\n$A-I$ is invertible.</p>\n\n<p>To show that $(A-I)^{-1}(A+I)$ is orthogonal, it is sufficient to show\nthat $\\|(A-I)^{-1}(A+I) v \\| = 1$ for all unit eigenvectors $v$. Since\n$(A-I)^{-1}(A+I) v = {\\lambda +1 \\over \\lambda -1 }v$ and\n$|{\\lambda +1 \\over \\lambda -1 }| = 1$, we have the desired result.</p>\n" }, { "answer_id": 2110898, "score": 16, "text": "<p>If <span class=\"math-container\">$A - I$</span> is not invertible, we can find a non-zero vector <span class=\"math-container\">$x \\in \\ker(A - I)$</span> which will satisfy <span class=\"math-container\">$Ax = x$</span>. Calculating, we have</p>\n<p><span class=\"math-container\">$$ \\left&lt; x, x \\right&gt;_{\\mathbb{R}^n} = \\left&lt; Ax, x \\right&gt;_{\\mathbb{R}^n} = (Ax)^T x = x^T A^T x = -x^T A x = -x^T x = - \\left&lt; x, x \\right&gt;_{\\mathbb{R}^n}, $$</span></p>\n<p>a contradiction. 
To see that <span class=\"math-container\">$B := (A - I)^{-1}(A + I)$</span> is orthogonal, we can calculate</p>\n<p><span class=\"math-container\">$$ B B^T = (A - I)^{-1}(A + I) \\left( (A - I)^{-1}(A + I) \\right)^T = (A - I)^{-1} (A + I) (A^T + I)(A^T - I)^{-1} \\\\\n= (A - I)^{-1}(A + I)(-A + I)(-A - I)^{-1} = (-1)(I - A)^{-1}(I - A)(I + A)(-1)(A + I)^{-1}\n\\\\\n= (-1)(-1)I = I.$$</span></p>\n" }, { "answer_id": 2110899, "score": 5, "text": "<p>If $(A-I)x=0$ then $Ax=x$. Note that $x^TA^T=x^T$, hence $-x^TA=x^T$. After multiplying both sides by $x$ you get $$-x^T(Ax)=x^Tx\\implies 2x^Tx=0\\implies ||x||^2=0 \\implies x=0 $$</p>\n\n<p>This means $A-I$ is invertible.</p>\n" }, { "answer_id": 2110903, "score": 3, "text": "<p>Can $1$ be an eigenvalue of a skew-symmetric matrix?</p>\n\n<p>If $Av=v$, then \n$$\nv^Tv=v^TAv=(A^Tv)^Tv=(-Av)^Tv=-v^Tv\n$$\nwhich implies $v=0$.</p>\n\n<p>Similarly, $-1$ is not an eigenvalue of $A$.</p>\n\n<p>Next\n$$\n\\bigl((A-I)^{-1}(A+I)\\bigr)^T=(A^T+I)(A^T-I)^{-1}=(A-I)(A+I)^{-1}\n$$\nand the product\n$$\n\\bigl((A-I)^{-1}(A+I)\\bigr)\\bigl((A-I)^{-1}(A+I)\\bigr)^T\n$$\nis the identity if and only if\n$$\n(A-I)^{-1}(A+I)(A-I)(A+I)^{-1}=I\n$$</p>\n\n<p>Can you finish?</p>\n" }, { "answer_id": 2110991, "score": 3, "text": "<p>Here is a slightly different approach:</p>\n\n<p>Note that $A$ has a basis of orthonormal eigenvectors (in the same manner as\nHermitian matrices). Suppose $Av= \\lambda v$ for a unit vector $v$, then $v^*Av = \\lambda = -v^* A^* v = - \\overline{\\lambda} $, hence all eigenvalues are\npurely imaginary. In particular, they are not equal to one, hence\n$A-I$ is invertible.</p>\n\n<p>To show that $(A-I)^{-1}(A+I)$ is orthogonal, it is sufficient to show\nthat $\\|(A-I)^{-1}(A+I) v \\| = 1$ for all unit eigenvectors $v$. 
Since\n$(A-I)^{-1}(A+I) v = {\\lambda +1 \\over \\lambda -1 }v$ and\n$|{\\lambda +1 \\over \\lambda -1 }| = 1$, we have the desired result.</p>\n" }, { "answer_id": 2110898, "score": 16, "text": "<p>If <span class=\"math-container\">$A - I$</span> is not invertible, we can find a non-zero vector <span class=\"math-container\">$x \\in \\ker(A - I)$</span> which will satisfy <span class=\"math-container\">$Ax = x$</span>. Calculating, we have</p>\n<p><span class=\"math-container\">$$ \\left&lt; x, x \\right&gt;_{\\mathbb{R}^n} = \\left&lt; Ax, x \\right&gt;_{\\mathbb{R}^n} = (Ax)^T x = x^T A^T x = -x^T A x = -x^T x = - \\left&lt; x, x \\right&gt;_{\\mathbb{R}^n}, $$</span></p>\n<p>a contradiction. To see that <span class=\"math-container\">$B := (A - I)^{-1}(A + I)$</span> is orthogonal, we can calculate</p>\n<p><span class=\"math-container\">$$ B B^T = (A - I)^{-1}(A + I) \\left( (A - I)^{-1}(A + I) \\right)^T = (A - I)^{-1} (A + I) (A^T + I)(A^T - I)^{-1} \\\\\n= (A - I)^{-1}(A + I)(-A + I)(-A - I)^{-1} = (-1)(I - A)^{-1}(I - A)(I + A)(-1)(A + I)^{-1}\n\\\\\n= (-1)(-1)I = I.$$</span></p>\n" }, { "answer_id": 2110899, "score": 5, "text": "<p>If $(A-I)x=0$ then $Ax=x$. Note that $x^TA^T=x^T$, hence $-x^TA=x^T$. 
After multiplying both sides by $x$ you get $$-x^T(Ax)=x^Tx\\implies 2x^Tx=0\\implies ||x||^2=0 \\implies x=0 $$</p>\n\n<p>This means $A-I$ is invertible.</p>\n" }, { "answer_id": 2110903, "score": 3, "text": "<p>Can $1$ be an eigenvalue of a skew-symmetric matrix?</p>\n\n<p>If $Av=v$, then \n$$\nv^Tv=v^TAv=(A^Tv)^Tv=(-Av)^Tv=-v^Tv\n$$\nwhich implies $v=0$.</p>\n\n<p>Similarly, $-1$ is not an eigenvalue of $A$.</p>\n\n<p>Next\n$$\n\\bigl((A-I)^{-1}(A+I)\\bigr)^T=(A^T+I)(A^T-I)^{-1}=(A-I)(A+I)^{-1}\n$$\nand the product\n$$\n\\bigl((A-I)^{-1}(A+I)\\bigr)\\bigl((A-I)^{-1}(A+I)\\bigr)^T\n$$\nis the identity if and only if\n$$\n(A-I)^{-1}(A+I)(A-I)(A+I)^{-1}=I\n$$</p>\n\n<p>Can you finish?</p>\n" }, { "answer_id": 2110991, "score": 3, "text": "<p>Here is a slightly different approach:</p>\n\n<p>Note that $A$ has a basis of orthonormal eigenvectors (in the same manner as\nHermitian matrices). Suppose $Av= \\lambda v$ for a unit vector $v$, then $v^*Av = \\lambda = -v^* A^* v = - \\overline{\\lambda} $, hence all eigenvalues are\npurely imaginary. In particular, they are not equal to one, hence\n$A-I$ is invertible.</p>\n\n<p>To show that $(A-I)^{-1}(A+I)$ is orthogonal, it is sufficient to show\nthat $\\|(A-I)^{-1}(A+I) v \\| = 1$ for all unit eigenvectors $v$. Since\n$(A-I)^{-1}(A+I) v = {\\lambda +1 \\over \\lambda -1 }v$ and\n$|{\\lambda +1 \\over \\lambda -1 }| = 1$, we have the desired result.</p>\n" }, { "answer_id": 2110898, "score": 16, "text": "<p>If <span class=\"math-container\">$A - I$</span> is not invertible, we can find a non-zero vector <span class=\"math-container\">$x \\in \\ker(A - I)$</span> which will satisfy <span class=\"math-container\">$Ax = x$</span>. Calculating, we have</p>\n<p><span class=\"math-container\">$$ \\left&lt; x, x \\right&gt;_{\\mathbb{R}^n} = \\left&lt; Ax, x \\right&gt;_{\\mathbb{R}^n} = (Ax)^T x = x^T A^T x = -x^T A x = -x^T x = - \\left&lt; x, x \\right&gt;_{\\mathbb{R}^n}, $$</span></p>\n<p>a contradiction. 
To see that <span class=\"math-container\">$B := (A - I)^{-1}(A + I)$</span> is orthogonal, we can calculate</p>\n<p><span class=\"math-container\">$$ B B^T = (A - I)^{-1}(A + I) \\left( (A - I)^{-1}(A + I) \\right)^T = (A - I)^{-1} (A + I) (A^T + I)(A^T - I)^{-1} \\\\\n= (A - I)^{-1}(A + I)(-A + I)(-A - I)^{-1} = (-1)(I - A)^{-1}(I - A)(I + A)(-1)(A + I)^{-1}\n\\\\\n= (-1)(-1)I = I.$$</span></p>\n" },
{ "answer_id": 2110899, "score": 5, "text": "<p>If $(A-I)x=0$ then $Ax=x$. Note that $x^TA^T=x^T$, hence $-x^TA=x^T$. After multiplying both sides by $x$ you get $$-x^T(Ax)=x^Tx\\implies 2x^Tx=0\\implies ||x||^2=0 \\implies x=0 $$</p>\n\n<p>This means $A-I$ is invertible.</p>\n" },
{ "answer_id": 2110903, "score": 3, "text": "<p>Can $1$ be an eigenvalue of a skew-symmetric matrix?</p>\n\n<p>If $Av=v$, then \n$$\nv^Tv=v^TAv=(A^Tv)^Tv=(-Av)^Tv=-v^Tv\n$$\nwhich implies $v=0$.</p>\n\n<p>Similarly, $-1$ is not an eigenvalue of $A$.</p>\n\n<p>Next\n$$\n\\bigl((A-I)^{-1}(A+I)\\bigr)^T=(A^T+I)(A^T-I)^{-1}=(A-I)(A+I)^{-1}\n$$\nand the product\n$$\n\\bigl((A-I)^{-1}(A+I)\\bigr)\\bigl((A-I)^{-1}(A+I)\\bigr)^T\n$$\nis the identity if and only if\n$$\n(A-I)^{-1}(A+I)(A-I)(A+I)^{-1}=I\n$$</p>\n\n<p>Can you finish?</p>\n" },
{ "answer_id": 2110991, "score": 3, "text": "<p>Here is a slightly different approach:</p>\n\n<p>Note that $A$ has a basis of orthonormal eigenvectors (in the same manner as\nHermitian matrices). Suppose $Av= \\lambda v$ for a unit vector $v$, then $v^*Av = \\lambda = -v^* A^* v = - \\overline{\\lambda} $, hence all eigenvalues are\npurely imaginary. In particular, they are not equal to one, hence\n$A-I$ is invertible.</p>\n\n<p>To show that $(A-I)^{-1}(A+I)$ is orthogonal, it is sufficient to show\nthat $\\|(A-I)^{-1}(A+I) v \\| = 1$ for all unit eigenvectors $v$. Since\n$(A-I)^{-1}(A+I) v = {\\lambda +1 \\over \\lambda -1 }v$ and\n$|{\\lambda +1 \\over \\lambda -1 }| = 1$, we have the desired result.</p>\n" } ]
Question: Showing $A-I$ is invertible, when $A$ is a skew-symmetric matrix Answer 1: <p>If <span class="math-container">$A - I$</span> is not invertible, we can find a non-zero vector <span class="math-container">$x \in \ker(A - I)$</span> which will satisfy <span class="math-container">$Ax = x$</span>. Calculating, we have</p> <p><span class="math-container">$$ \left&lt; x, x \right&gt;_{\mathbb{R}^n} = \left&lt; Ax, x \right&gt;_{\mathbb{R}^n} = (Ax)^T x = x^T A^T x = -x^T A x = -x^T x = - \left&lt; x, x \right&gt;_{\mathbb{R}^n}, $$</span></p> <p>a contradiction. To see that <span class="math-container">$B := (A - I)^{-1}(A + I)$</span> is orthogonal, we can calculate</p> <p><span class="math-container">$$ B B^T = (A - I)^{-1}(A + I) \left( (A - I)^{-1}(A + I) \right)^T = (A - I)^{-1} (A + I) (A^T + I)(A^T - I)^{-1} \\ = (A - I)^{-1}(A + I)(-A + I)(-A - I)^{-1} = (-1)(I - A)^{-1}(I - A)(I + A)(-1)(A + I)^{-1} \\ = (-1)(-1)I = I.$$</span></p> Answer 2: <p>If $(A-I)x=0$ then $Ax=x$. Note that $x^TA^T=x^T$, hence $-x^TA=x^T$. After multiplying both sides by $x$ you get $$-x^T(Ax)=x^Tx\implies 2x^Tx=0\implies ||x||^2=0 \implies x=0 $$</p> <p>This means $A-I$ is invertible.</p> Answer 3: <p>Can $1$ be an eigenvalue of a skew-symmetric matrix?</p> <p>If $Av=v$, then $$ v^Tv=v^TAv=(A^Tv)^Tv=(-Av)^Tv=-v^Tv $$ which implies $v=0$.</p> <p>Similarly, $-1$ is not an eigenvalue of $A$.</p> <p>Next $$ \bigl((A-I)^{-1}(A+I)\bigr)^T=(A^T+I)(A^T-I)^{-1}=(A-I)(A+I)^{-1} $$ and the product $$ \bigl((A-I)^{-1}(A+I)\bigr)\bigl((A-I)^{-1}(A+I)\bigr)^T $$ is the identity if and only if $$ (A-I)^{-1}(A+I)(A-I)(A+I)^{-1}=I $$</p> <p>Can you finish?</p> Answer 4: <p>Here is a slightly different approach:</p> <p>Note that $A$ has a basis of orthonormal eigenvectors (in the same manner as Hermitian matrices). Suppose $Av= \lambda v$ for a unit vector $v$, then $v^*Av = \lambda = -v^* A^* v = - \overline{\lambda} $, hence all eigenvalues are purely imaginary. 
In particular, they are not equal to one, hence $A-I$ is invertible.</p> <p>To show that $(A-I)^{-1}(A+I)$ is orthogonal, it is sufficient to show that $\|(A-I)^{-1}(A+I) v \| = 1$ for all unit eigenvectors $v$. Since $(A-I)^{-1}(A+I) v = {\lambda +1 \over \lambda -1 }v$ and $|{\lambda +1 \over \lambda -1 }| = 1$, we have the desired result.</p> Answer 5: <p>If <span class="math-container">$A - I$</span> is not invertible, we can find a non-zero vector <span class="math-container">$x \in \ker(A - I)$</span> which will satisfy <span class="math-container">$Ax = x$</span>. Calculating, we have</p> <p><span class="math-container">$$ \left&lt; x, x \right&gt;_{\mathbb{R}^n} = \left&lt; Ax, x \right&gt;_{\mathbb{R}^n} = (Ax)^T x = x^T A^T x = -x^T A x = -x^T x = - \left&lt; x, x \right&gt;_{\mathbb{R}^n}, $$</span></p> <p>a contradiction. To see that <span class="math-container">$B := (A - I)^{-1}(A + I)$</span> is orthogonal, we can calculate</p> <p><span class="math-container">$$ B B^T = (A - I)^{-1}(A + I) \left( (A - I)^{-1}(A + I) \right)^T = (A - I)^{-1} (A + I) (A^T + I)(A^T - I)^{-1} \\ = (A - I)^{-1}(A + I)(-A + I)(-A - I)^{-1} = (-1)(I - A)^{-1}(I - A)(I + A)(-1)(A + I)^{-1} \\ = (-1)(-1)I = I.$$</span></p> Answer 6: <p>If $(A-I)x=0$ then $Ax=x$. Note that $x^TA^T=x^T$, hence $-x^TA=x^T$. 
After multiplying both sides by $x$ you get $$-x^T(Ax)=x^Tx\implies 2x^Tx=0\implies ||x||^2=0 \implies x=0 $$</p> <p>This means $A-I$ is invertible.</p> Answer 7: <p>Can $1$ be an eigenvalue of a skew-symmetric matrix?</p> <p>If $Av=v$, then $$ v^Tv=v^TAv=(A^Tv)^Tv=(-Av)^Tv=-v^Tv $$ which implies $v=0$.</p> <p>Similarly, $-1$ is not an eigenvalue of $A$.</p> <p>Next $$ \bigl((A-I)^{-1}(A+I)\bigr)^T=(A^T+I)(A^T-I)^{-1}=(A-I)(A+I)^{-1} $$ and the product $$ \bigl((A-I)^{-1}(A+I)\bigr)\bigl((A-I)^{-1}(A+I)\bigr)^T $$ is the identity if and only if $$ (A-I)^{-1}(A+I)(A-I)(A+I)^{-1}=I $$</p> <p>Can you finish?</p> Answer 8: <p>Here is a slightly different approach:</p> <p>Note that $A$ has a basis of orthonormal eigenvectors (in the same manner as Hermitian matrices). Suppose $Av= \lambda v$ for a unit vector $v$, then $v^*Av = \lambda = -v^* A^* v = - \overline{\lambda} $, hence all eigenvalues are purely imaginary. In particular, they are not equal to one, hence $A-I$ is invertible.</p> <p>To show that $(A-I)^{-1}(A+I)$ is orthogonal, it is sufficient to show that $\|(A-I)^{-1}(A+I) v \| = 1$ for all unit eigenvectors $v$. Since $(A-I)^{-1}(A+I) v = {\lambda +1 \over \lambda -1 }v$ and $|{\lambda +1 \over \lambda -1 }| = 1$, we have the desired result.</p> Answer 9: <p>If <span class="math-container">$A - I$</span> is not invertible, we can find a non-zero vector <span class="math-container">$x \in \ker(A - I)$</span> which will satisfy <span class="math-container">$Ax = x$</span>. Calculating, we have</p> <p><span class="math-container">$$ \left&lt; x, x \right&gt;_{\mathbb{R}^n} = \left&lt; Ax, x \right&gt;_{\mathbb{R}^n} = (Ax)^T x = x^T A^T x = -x^T A x = -x^T x = - \left&lt; x, x \right&gt;_{\mathbb{R}^n}, $$</span></p> <p>a contradiction. 
To see that <span class="math-container">$B := (A - I)^{-1}(A + I)$</span> is orthogonal, we can calculate</p> <p><span class="math-container">$$ B B^T = (A - I)^{-1}(A + I) \left( (A - I)^{-1}(A + I) \right)^T = (A - I)^{-1} (A + I) (A^T + I)(A^T - I)^{-1} \\ = (A - I)^{-1}(A + I)(-A + I)(-A - I)^{-1} = (-1)(I - A)^{-1}(I - A)(I + A)(-1)(A + I)^{-1} \\ = (-1)(-1)I = I.$$</span></p> Answer 10: <p>If $(A-I)x=0$ then $Ax=x$. Note that $x^TA^T=x^T$, hence $-x^TA=x^T$. After multiplying both sides by $x$ you get $$-x^T(Ax)=x^Tx\implies 2x^Tx=0\implies ||x||^2=0 \implies x=0 $$</p> <p>This means $A-I$ is invertible.</p> Answer 11: <p>Can $1$ be an eigenvalue of a skew-symmetric matrix?</p> <p>If $Av=v$, then $$ v^Tv=v^TAv=(A^Tv)^Tv=(-Av)^Tv=-v^Tv $$ which implies $v=0$.</p> <p>Similarly, $-1$ is not an eigenvalue of $A$.</p> <p>Next $$ \bigl((A-I)^{-1}(A+I)\bigr)^T=(A^T+I)(A^T-I)^{-1}=(A-I)(A+I)^{-1} $$ and the product $$ \bigl((A-I)^{-1}(A+I)\bigr)\bigl((A-I)^{-1}(A+I)\bigr)^T $$ is the identity if and only if $$ (A-I)^{-1}(A+I)(A-I)(A+I)^{-1}=I $$</p> <p>Can you finish?</p> Answer 12: <p>Here is a slightly different approach:</p> <p>Note that $A$ has a basis of orthonormal eigenvectors (in the same manner as Hermitian matrices). Suppose $Av= \lambda v$ for a unit vector $v$, then $v^*Av = \lambda = -v^* A^* v = - \overline{\lambda} $, hence all eigenvalues are purely imaginary. In particular, they are not equal to one, hence $A-I$ is invertible.</p> <p>To show that $(A-I)^{-1}(A+I)$ is orthogonal, it is sufficient to show that $\|(A-I)^{-1}(A+I) v \| = 1$ for all unit eigenvectors $v$. Since $(A-I)^{-1}(A+I) v = {\lambda +1 \over \lambda -1 }v$ and $|{\lambda +1 \over \lambda -1 }| = 1$, we have the desired result.</p> Answer 13: <p>If <span class="math-container">$A - I$</span> is not invertible, we can find a non-zero vector <span class="math-container">$x \in \ker(A - I)$</span> which will satisfy <span class="math-container">$Ax = x$</span>. 
Calculating, we have</p> <p><span class="math-container">$$ \left&lt; x, x \right&gt;_{\mathbb{R}^n} = \left&lt; Ax, x \right&gt;_{\mathbb{R}^n} = (Ax)^T x = x^T A^T x = -x^T A x = -x^T x = - \left&lt; x, x \right&gt;_{\mathbb{R}^n}, $$</span></p> <p>a contradiction. To see that <span class="math-container">$B := (A - I)^{-1}(A + I)$</span> is orthogonal, we can calculate</p> <p><span class="math-container">$$ B B^T = (A - I)^{-1}(A + I) \left( (A - I)^{-1}(A + I) \right)^T = (A - I)^{-1} (A + I) (A^T + I)(A^T - I)^{-1} \\ = (A - I)^{-1}(A + I)(-A + I)(-A - I)^{-1} = (-1)(I - A)^{-1}(I - A)(I + A)(-1)(A + I)^{-1} \\ = (-1)(-1)I = I.$$</span></p> Answer 14: <p>If $(A-I)x=0$ then $Ax=x$. Note that $x^TA^T=x^T$, hence $-x^TA=x^T$. After multiplying both sides by $x$ you get $$-x^T(Ax)=x^Tx\implies 2x^Tx=0\implies ||x||^2=0 \implies x=0 $$</p> <p>This means $A-I$ is invertible.</p> Answer 15: <p>Can $1$ be an eigenvalue of a skew-symmetric matrix?</p> <p>If $Av=v$, then $$ v^Tv=v^TAv=(A^Tv)^Tv=(-Av)^Tv=-v^Tv $$ which implies $v=0$.</p> <p>Similarly, $-1$ is not an eigenvalue of $A$.</p> <p>Next $$ \bigl((A-I)^{-1}(A+I)\bigr)^T=(A^T+I)(A^T-I)^{-1}=(A-I)(A+I)^{-1} $$ and the product $$ \bigl((A-I)^{-1}(A+I)\bigr)\bigl((A-I)^{-1}(A+I)\bigr)^T $$ is the identity if and only if $$ (A-I)^{-1}(A+I)(A-I)(A+I)^{-1}=I $$</p> <p>Can you finish?</p> Answer 16: <p>Here is a slightly different approach:</p> <p>Note that $A$ has a basis of orthonormal eigenvectors (in the same manner as Hermitian matrices). Suppose $Av= \lambda v$ for a unit vector $v$, then $v^*Av = \lambda = -v^* A^* v = - \overline{\lambda} $, hence all eigenvalues are purely imaginary. In particular, they are not equal to one, hence $A-I$ is invertible.</p> <p>To show that $(A-I)^{-1}(A+I)$ is orthogonal, it is sufficient to show that $\|(A-I)^{-1}(A+I) v \| = 1$ for all unit eigenvectors $v$. 
Since $(A-I)^{-1}(A+I) v = {\lambda +1 \over \lambda -1 }v$ and $|{\lambda +1 \over \lambda -1 }| = 1$, we have the desired result.</p> Answer 17: <p>If <span class="math-container">$A - I$</span> is not invertible, we can find a non-zero vector <span class="math-container">$x \in \ker(A - I)$</span> which will satisfy <span class="math-container">$Ax = x$</span>. Calculating, we have</p> <p><span class="math-container">$$ \left&lt; x, x \right&gt;_{\mathbb{R}^n} = \left&lt; Ax, x \right&gt;_{\mathbb{R}^n} = (Ax)^T x = x^T A^T x = -x^T A x = -x^T x = - \left&lt; x, x \right&gt;_{\mathbb{R}^n}, $$</span></p> <p>a contradiction. To see that <span class="math-container">$B := (A - I)^{-1}(A + I)$</span> is orthogonal, we can calculate</p> <p><span class="math-container">$$ B B^T = (A - I)^{-1}(A + I) \left( (A - I)^{-1}(A + I) \right)^T = (A - I)^{-1} (A + I) (A^T + I)(A^T - I)^{-1} \\ = (A - I)^{-1}(A + I)(-A + I)(-A - I)^{-1} = (-1)(I - A)^{-1}(I - A)(I + A)(-1)(A + I)^{-1} \\ = (-1)(-1)I = I.$$</span></p> Answer 18: <p>If $(A-I)x=0$ then $Ax=x$. Note that $x^TA^T=x^T$, hence $-x^TA=x^T$. After multiplying both sides by $x$ you get $$-x^T(Ax)=x^Tx\implies 2x^Tx=0\implies ||x||^2=0 \implies x=0 $$</p> <p>This means $A-I$ is invertible.</p> Answer 19: <p>Can $1$ be an eigenvalue of a skew-symmetric matrix?</p> <p>If $Av=v$, then $$ v^Tv=v^TAv=(A^Tv)^Tv=(-Av)^Tv=-v^Tv $$ which implies $v=0$.</p> <p>Similarly, $-1$ is not an eigenvalue of $A$.</p> <p>Next $$ \bigl((A-I)^{-1}(A+I)\bigr)^T=(A^T+I)(A^T-I)^{-1}=(A-I)(A+I)^{-1} $$ and the product $$ \bigl((A-I)^{-1}(A+I)\bigr)\bigl((A-I)^{-1}(A+I)\bigr)^T $$ is the identity if and only if $$ (A-I)^{-1}(A+I)(A-I)(A+I)^{-1}=I $$</p> <p>Can you finish?</p> Answer 20: <p>Here is a slightly different approach:</p> <p>Note that $A$ has a basis of orthonormal eigenvectors (in the same manner as Hermitian matrices). 
Suppose $Av= \lambda v$ for a unit vector $v$, then $v^*Av = \lambda = -v^* A^* v = - \overline{\lambda} $, hence all eigenvalues are purely imaginary. In particular, they are not equal to one, hence $A-I$ is invertible.</p> <p>To show that $(A-I)^{-1}(A+I)$ is orthogonal, it is sufficient to show that $\|(A-I)^{-1}(A+I) v \| = 1$ for all unit eigenvectors $v$. Since $(A-I)^{-1}(A+I) v = {\lambda +1 \over \lambda -1 }v$ and $|{\lambda +1 \over \lambda -1 }| = 1$, we have the desired result.</p> Answer 21: <p>If <span class="math-container">$A - I$</span> is not invertible, we can find a non-zero vector <span class="math-container">$x \in \ker(A - I)$</span> which will satisfy <span class="math-container">$Ax = x$</span>. Calculating, we have</p> <p><span class="math-container">$$ \left&lt; x, x \right&gt;_{\mathbb{R}^n} = \left&lt; Ax, x \right&gt;_{\mathbb{R}^n} = (Ax)^T x = x^T A^T x = -x^T A x = -x^T x = - \left&lt; x, x \right&gt;_{\mathbb{R}^n}, $$</span></p> <p>a contradiction. To see that <span class="math-container">$B := (A - I)^{-1}(A + I)$</span> is orthogonal, we can calculate</p> <p><span class="math-container">$$ B B^T = (A - I)^{-1}(A + I) \left( (A - I)^{-1}(A + I) \right)^T = (A - I)^{-1} (A + I) (A^T + I)(A^T - I)^{-1} \\ = (A - I)^{-1}(A + I)(-A + I)(-A - I)^{-1} = (-1)(I - A)^{-1}(I - A)(I + A)(-1)(A + I)^{-1} \\ = (-1)(-1)I = I.$$</span></p> Answer 22: <p>If $(A-I)x=0$ then $Ax=x$. Note that $x^TA^T=x^T$, hence $-x^TA=x^T$. 
After multiplying both sides by $x$ you get $$-x^T(Ax)=x^Tx\implies 2x^Tx=0\implies ||x||^2=0 \implies x=0 $$</p> <p>This means $A-I$ is invertible.</p> Answer 23: <p>Can $1$ be an eigenvalue of a skew-symmetric matrix?</p> <p>If $Av=v$, then $$ v^Tv=v^TAv=(A^Tv)^Tv=(-Av)^Tv=-v^Tv $$ which implies $v=0$.</p> <p>Similarly, $-1$ is not an eigenvalue of $A$.</p> <p>Next $$ \bigl((A-I)^{-1}(A+I)\bigr)^T=(A^T+I)(A^T-I)^{-1}=(A-I)(A+I)^{-1} $$ and the product $$ \bigl((A-I)^{-1}(A+I)\bigr)\bigl((A-I)^{-1}(A+I)\bigr)^T $$ is the identity if and only if $$ (A-I)^{-1}(A+I)(A-I)(A+I)^{-1}=I $$</p> <p>Can you finish?</p> Answer 24: <p>Here is a slightly different approach:</p> <p>Note that $A$ has a basis of orthonormal eigenvectors (in the same manner as Hermitian matrices). Suppose $Av= \lambda v$ for a unit vector $v$, then $v^*Av = \lambda = -v^* A^* v = - \overline{\lambda} $, hence all eigenvalues are purely imaginary. In particular, they are not equal to one, hence $A-I$ is invertible.</p> <p>To show that $(A-I)^{-1}(A+I)$ is orthogonal, it is sufficient to show that $\|(A-I)^{-1}(A+I) v \| = 1$ for all unit eigenvectors $v$. Since $(A-I)^{-1}(A+I) v = {\lambda +1 \over \lambda -1 }v$ and $|{\lambda +1 \over \lambda -1 }| = 1$, we have the desired result.</p> Answer 25: <p>If <span class="math-container">$A - I$</span> is not invertible, we can find a non-zero vector <span class="math-container">$x \in \ker(A - I)$</span> which will satisfy <span class="math-container">$Ax = x$</span>. Calculating, we have</p> <p><span class="math-container">$$ \left&lt; x, x \right&gt;_{\mathbb{R}^n} = \left&lt; Ax, x \right&gt;_{\mathbb{R}^n} = (Ax)^T x = x^T A^T x = -x^T A x = -x^T x = - \left&lt; x, x \right&gt;_{\mathbb{R}^n}, $$</span></p> <p>a contradiction. 
To see that <span class="math-container">$B := (A - I)^{-1}(A + I)$</span> is orthogonal, we can calculate</p> <p><span class="math-container">$$ B B^T = (A - I)^{-1}(A + I) \left( (A - I)^{-1}(A + I) \right)^T = (A - I)^{-1} (A + I) (A^T + I)(A^T - I)^{-1} \\ = (A - I)^{-1}(A + I)(-A + I)(-A - I)^{-1} = (-1)(I - A)^{-1}(I - A)(I + A)(-1)(A + I)^{-1} \\ = (-1)(-1)I = I.$$</span></p> Answer 26: <p>If $(A-I)x=0$ then $Ax=x$. Note that $x^TA^T=x^T$, hence $-x^TA=x^T$. After multiplying both sides by $x$ you get $$-x^T(Ax)=x^Tx\implies 2x^Tx=0\implies ||x||^2=0 \implies x=0 $$</p> <p>This means $A-I$ is invertible.</p> Answer 27: <p>Can $1$ be an eigenvalue of a skew-symmetric matrix?</p> <p>If $Av=v$, then $$ v^Tv=v^TAv=(A^Tv)^Tv=(-Av)^Tv=-v^Tv $$ which implies $v=0$.</p> <p>Similarly, $-1$ is not an eigenvalue of $A$.</p> <p>Next $$ \bigl((A-I)^{-1}(A+I)\bigr)^T=(A^T+I)(A^T-I)^{-1}=(A-I)(A+I)^{-1} $$ and the product $$ \bigl((A-I)^{-1}(A+I)\bigr)\bigl((A-I)^{-1}(A+I)\bigr)^T $$ is the identity if and only if $$ (A-I)^{-1}(A+I)(A-I)(A+I)^{-1}=I $$</p> <p>Can you finish?</p> Answer 28: <p>Here is a slightly different approach:</p> <p>Note that $A$ has a basis of orthonormal eigenvectors (in the same manner as Hermitian matrices). Suppose $Av= \lambda v$ for a unit vector $v$, then $v^*Av = \lambda = -v^* A^* v = - \overline{\lambda} $, hence all eigenvalues are purely imaginary. In particular, they are not equal to one, hence $A-I$ is invertible.</p> <p>To show that $(A-I)^{-1}(A+I)$ is orthogonal, it is sufficient to show that $\|(A-I)^{-1}(A+I) v \| = 1$ for all unit eigenvectors $v$. Since $(A-I)^{-1}(A+I) v = {\lambda +1 \over \lambda -1 }v$ and $|{\lambda +1 \over \lambda -1 }| = 1$, we have the desired result.</p> Answer 29: <p>If <span class="math-container">$A - I$</span> is not invertible, we can find a non-zero vector <span class="math-container">$x \in \ker(A - I)$</span> which will satisfy <span class="math-container">$Ax = x$</span>. 
Calculating, we have</p> <p><span class="math-container">$$ \left&lt; x, x \right&gt;_{\mathbb{R}^n} = \left&lt; Ax, x \right&gt;_{\mathbb{R}^n} = (Ax)^T x = x^T A^T x = -x^T A x = -x^T x = - \left&lt; x, x \right&gt;_{\mathbb{R}^n}, $$</span></p> <p>a contradiction. To see that <span class="math-container">$B := (A - I)^{-1}(A + I)$</span> is orthogonal, we can calculate</p> <p><span class="math-container">$$ B B^T = (A - I)^{-1}(A + I) \left( (A - I)^{-1}(A + I) \right)^T = (A - I)^{-1} (A + I) (A^T + I)(A^T - I)^{-1} \\ = (A - I)^{-1}(A + I)(-A + I)(-A - I)^{-1} = (-1)(I - A)^{-1}(I - A)(I + A)(-1)(A + I)^{-1} \\ = (-1)(-1)I = I.$$</span></p> Answer 30: <p>If $(A-I)x=0$ then $Ax=x$. Note that $x^TA^T=x^T$, hence $-x^TA=x^T$. After multiplying both sides by $x$ you get $$-x^T(Ax)=x^Tx\implies 2x^Tx=0\implies ||x||^2=0 \implies x=0 $$</p> <p>This means $A-I$ is invertible.</p> Answer 31: <p>Can $1$ be an eigenvalue of a skew-symmetric matrix?</p> <p>If $Av=v$, then $$ v^Tv=v^TAv=(A^Tv)^Tv=(-Av)^Tv=-v^Tv $$ which implies $v=0$.</p> <p>Similarly, $-1$ is not an eigenvalue of $A$.</p> <p>Next $$ \bigl((A-I)^{-1}(A+I)\bigr)^T=(A^T+I)(A^T-I)^{-1}=(A-I)(A+I)^{-1} $$ and the product $$ \bigl((A-I)^{-1}(A+I)\bigr)\bigl((A-I)^{-1}(A+I)\bigr)^T $$ is the identity if and only if $$ (A-I)^{-1}(A+I)(A-I)(A+I)^{-1}=I $$</p> <p>Can you finish?</p> Answer 32: <p>Here is a slightly different approach:</p> <p>Note that $A$ has a basis of orthonormal eigenvectors (in the same manner as Hermitian matrices). Suppose $Av= \lambda v$ for a unit vector $v$, then $v^*Av = \lambda = -v^* A^* v = - \overline{\lambda} $, hence all eigenvalues are purely imaginary. In particular, they are not equal to one, hence $A-I$ is invertible.</p> <p>To show that $(A-I)^{-1}(A+I)$ is orthogonal, it is sufficient to show that $\|(A-I)^{-1}(A+I) v \| = 1$ for all unit eigenvectors $v$. 
Since $(A-I)^{-1}(A+I) v = {\lambda +1 \over \lambda -1 }v$ and $|{\lambda +1 \over \lambda -1 }| = 1$, we have the desired result.</p> Answer 33: <p>If <span class="math-container">$A - I$</span> is not invertible, we can find a non-zero vector <span class="math-container">$x \in \ker(A - I)$</span> which will satisfy <span class="math-container">$Ax = x$</span>. Calculating, we have</p> <p><span class="math-container">$$ \left&lt; x, x \right&gt;_{\mathbb{R}^n} = \left&lt; Ax, x \right&gt;_{\mathbb{R}^n} = (Ax)^T x = x^T A^T x = -x^T A x = -x^T x = - \left&lt; x, x \right&gt;_{\mathbb{R}^n}, $$</span></p> <p>a contradiction. To see that <span class="math-container">$B := (A - I)^{-1}(A + I)$</span> is orthogonal, we can calculate</p> <p><span class="math-container">$$ B B^T = (A - I)^{-1}(A + I) \left( (A - I)^{-1}(A + I) \right)^T = (A - I)^{-1} (A + I) (A^T + I)(A^T - I)^{-1} \\ = (A - I)^{-1}(A + I)(-A + I)(-A - I)^{-1} = (-1)(I - A)^{-1}(I - A)(I + A)(-1)(A + I)^{-1} \\ = (-1)(-1)I = I.$$</span></p> Answer 34: <p>If $(A-I)x=0$ then $Ax=x$. Note that $x^TA^T=x^T$, hence $-x^TA=x^T$. After multiplying both sides by $x$ you get $$-x^T(Ax)=x^Tx\implies 2x^Tx=0\implies ||x||^2=0 \implies x=0 $$</p> <p>This means $A-I$ is invertible.</p> Answer 35: <p>Can $1$ be an eigenvalue of a skew-symmetric matrix?</p> <p>If $Av=v$, then $$ v^Tv=v^TAv=(A^Tv)^Tv=(-Av)^Tv=-v^Tv $$ which implies $v=0$.</p> <p>Similarly, $-1$ is not an eigenvalue of $A$.</p> <p>Next $$ \bigl((A-I)^{-1}(A+I)\bigr)^T=(A^T+I)(A^T-I)^{-1}=(A-I)(A+I)^{-1} $$ and the product $$ \bigl((A-I)^{-1}(A+I)\bigr)\bigl((A-I)^{-1}(A+I)\bigr)^T $$ is the identity if and only if $$ (A-I)^{-1}(A+I)(A-I)(A+I)^{-1}=I $$</p> <p>Can you finish?</p> Answer 36: <p>Here is a slightly different approach:</p> <p>Note that $A$ has a basis of orthonormal eigenvectors (in the same manner as Hermitian matrices). 
Suppose $Av= \lambda v$ for a unit vector $v$, then $v^*Av = \lambda = -v^* A^* v = - \overline{\lambda} $, hence all eigenvalues are purely imaginary. In particular, they are not equal to one, hence $A-I$ is invertible.</p> <p>To show that $(A-I)^{-1}(A+I)$ is orthogonal, it is sufficient to show that $\|(A-I)^{-1}(A+I) v \| = 1$ for all unit eigenvectors $v$. Since $(A-I)^{-1}(A+I) v = {\lambda +1 \over \lambda -1 }v$ and $|{\lambda +1 \over \lambda -1 }| = 1$, we have the desired result.</p> Answer 37: <p>If <span class="math-container">$A - I$</span> is not invertible, we can find a non-zero vector <span class="math-container">$x \in \ker(A - I)$</span> which will satisfy <span class="math-container">$Ax = x$</span>. Calculating, we have</p> <p><span class="math-container">$$ \left&lt; x, x \right&gt;_{\mathbb{R}^n} = \left&lt; Ax, x \right&gt;_{\mathbb{R}^n} = (Ax)^T x = x^T A^T x = -x^T A x = -x^T x = - \left&lt; x, x \right&gt;_{\mathbb{R}^n}, $$</span></p> <p>a contradiction. To see that <span class="math-container">$B := (A - I)^{-1}(A + I)$</span> is orthogonal, we can calculate</p> <p><span class="math-container">$$ B B^T = (A - I)^{-1}(A + I) \left( (A - I)^{-1}(A + I) \right)^T = (A - I)^{-1} (A + I) (A^T + I)(A^T - I)^{-1} \\ = (A - I)^{-1}(A + I)(-A + I)(-A - I)^{-1} = (-1)(I - A)^{-1}(I - A)(I + A)(-1)(A + I)^{-1} \\ = (-1)(-1)I = I.$$</span></p> Answer 38: <p>If $(A-I)x=0$ then $Ax=x$. Note that $x^TA^T=x^T$, hence $-x^TA=x^T$. 
After multiplying both sides by $x$ you get $$-x^T(Ax)=x^Tx\implies 2x^Tx=0\implies ||x||^2=0 \implies x=0 $$</p> <p>This means $A-I$ is invertible.</p> Answer 39: <p>Can $1$ be an eigenvalue of a skew-symmetric matrix?</p> <p>If $Av=v$, then $$ v^Tv=v^TAv=(A^Tv)^Tv=(-Av)^Tv=-v^Tv $$ which implies $v=0$.</p> <p>Similarly, $-1$ is not an eigenvalue of $A$.</p> <p>Next $$ \bigl((A-I)^{-1}(A+I)\bigr)^T=(A^T+I)(A^T-I)^{-1}=(A-I)(A+I)^{-1} $$ and the product $$ \bigl((A-I)^{-1}(A+I)\bigr)\bigl((A-I)^{-1}(A+I)\bigr)^T $$ is the identity if and only if $$ (A-I)^{-1}(A+I)(A-I)(A+I)^{-1}=I $$</p> <p>Can you finish?</p> Answer 40: <p>Here is a slightly different approach:</p> <p>Note that $A$ has a basis of orthonormal eigenvectors (in the same manner as Hermitian matrices). Suppose $Av= \lambda v$ for a unit vector $v$, then $v^*Av = \lambda = -v^* A^* v = - \overline{\lambda} $, hence all eigenvalues are purely imaginary. In particular, they are not equal to one, hence $A-I$ is invertible.</p> <p>To show that $(A-I)^{-1}(A+I)$ is orthogonal, it is sufficient to show that $\|(A-I)^{-1}(A+I) v \| = 1$ for all unit eigenvectors $v$. Since $(A-I)^{-1}(A+I) v = {\lambda +1 \over \lambda -1 }v$ and $|{\lambda +1 \over \lambda -1 }| = 1$, we have the desired result.</p> Answer 41: <p>If <span class="math-container">$A - I$</span> is not invertible, we can find a non-zero vector <span class="math-container">$x \in \ker(A - I)$</span> which will satisfy <span class="math-container">$Ax = x$</span>. Calculating, we have</p> <p><span class="math-container">$$ \left&lt; x, x \right&gt;_{\mathbb{R}^n} = \left&lt; Ax, x \right&gt;_{\mathbb{R}^n} = (Ax)^T x = x^T A^T x = -x^T A x = -x^T x = - \left&lt; x, x \right&gt;_{\mathbb{R}^n}, $$</span></p> <p>a contradiction. 
To see that <span class="math-container">$B := (A - I)^{-1}(A + I)$</span> is orthogonal, we can calculate</p> <p><span class="math-container">$$ B B^T = (A - I)^{-1}(A + I) \left( (A - I)^{-1}(A + I) \right)^T = (A - I)^{-1} (A + I) (A^T + I)(A^T - I)^{-1} \\ = (A - I)^{-1}(A + I)(-A + I)(-A - I)^{-1} = (-1)(I - A)^{-1}(I - A)(I + A)(-1)(A + I)^{-1} \\ = (-1)(-1)I = I.$$</span></p> Answer 42: <p>If $(A-I)x=0$ then $Ax=x$. Note that $x^TA^T=x^T$, hence $-x^TA=x^T$. After multiplying both sides by $x$ you get $$-x^T(Ax)=x^Tx\implies 2x^Tx=0\implies ||x||^2=0 \implies x=0 $$</p> <p>This means $A-I$ is invertible.</p> Answer 43: <p>Can $1$ be an eigenvalue of a skew-symmetric matrix?</p> <p>If $Av=v$, then $$ v^Tv=v^TAv=(A^Tv)^Tv=(-Av)^Tv=-v^Tv $$ which implies $v=0$.</p> <p>Similarly, $-1$ is not an eigenvalue of $A$.</p> <p>Next $$ \bigl((A-I)^{-1}(A+I)\bigr)^T=(A^T+I)(A^T-I)^{-1}=(A-I)(A+I)^{-1} $$ and the product $$ \bigl((A-I)^{-1}(A+I)\bigr)\bigl((A-I)^{-1}(A+I)\bigr)^T $$ is the identity if and only if $$ (A-I)^{-1}(A+I)(A-I)(A+I)^{-1}=I $$</p> <p>Can you finish?</p> Answer 44: <p>Here is a slightly different approach:</p> <p>Note that $A$ has a basis of orthonormal eigenvectors (in the same manner as Hermitian matrices). Suppose $Av= \lambda v$ for a unit vector $v$, then $v^*Av = \lambda = -v^* A^* v = - \overline{\lambda} $, hence all eigenvalues are purely imaginary. In particular, they are not equal to one, hence $A-I$ is invertible.</p> <p>To show that $(A-I)^{-1}(A+I)$ is orthogonal, it is sufficient to show that $\|(A-I)^{-1}(A+I) v \| = 1$ for all unit eigenvectors $v$. Since $(A-I)^{-1}(A+I) v = {\lambda +1 \over \lambda -1 }v$ and $|{\lambda +1 \over \lambda -1 }| = 1$, we have the desired result.</p> Answer 45: <p>If <span class="math-container">$A - I$</span> is not invertible, we can find a non-zero vector <span class="math-container">$x \in \ker(A - I)$</span> which will satisfy <span class="math-container">$Ax = x$</span>. 
Calculating, we have</p> <p><span class="math-container">$$ \left&lt; x, x \right&gt;_{\mathbb{R}^n} = \left&lt; Ax, x \right&gt;_{\mathbb{R}^n} = (Ax)^T x = x^T A^T x = -x^T A x = -x^T x = - \left&lt; x, x \right&gt;_{\mathbb{R}^n}, $$</span></p> <p>a contradiction. To see that <span class="math-container">$B := (A - I)^{-1}(A + I)$</span> is orthogonal, we can calculate</p> <p><span class="math-container">$$ B B^T = (A - I)^{-1}(A + I) \left( (A - I)^{-1}(A + I) \right)^T = (A - I)^{-1} (A + I) (A^T + I)(A^T - I)^{-1} \\ = (A - I)^{-1}(A + I)(-A + I)(-A - I)^{-1} = (-1)(I - A)^{-1}(I - A)(I + A)(-1)(A + I)^{-1} \\ = (-1)(-1)I = I.$$</span></p> Answer 46: <p>If $(A-I)x=0$ then $Ax=x$. Note that $x^TA^T=x^T$, hence $-x^TA=x^T$. After multiplying both sides by $x$ you get $$-x^T(Ax)=x^Tx\implies 2x^Tx=0\implies ||x||^2=0 \implies x=0 $$</p> <p>This means $A-I$ is invertible.</p> Answer 47: <p>Can $1$ be an eigenvalue of a skew-symmetric matrix?</p> <p>If $Av=v$, then $$ v^Tv=v^TAv=(A^Tv)^Tv=(-Av)^Tv=-v^Tv $$ which implies $v=0$.</p> <p>Similarly, $-1$ is not an eigenvalue of $A$.</p> <p>Next $$ \bigl((A-I)^{-1}(A+I)\bigr)^T=(A^T+I)(A^T-I)^{-1}=(A-I)(A+I)^{-1} $$ and the product $$ \bigl((A-I)^{-1}(A+I)\bigr)\bigl((A-I)^{-1}(A+I)\bigr)^T $$ is the identity if and only if $$ (A-I)^{-1}(A+I)(A-I)(A+I)^{-1}=I $$</p> <p>Can you finish?</p> Answer 48: <p>Here is a slightly different approach:</p> <p>Note that $A$ has a basis of orthonormal eigenvectors (in the same manner as Hermitian matrices). Suppose $Av= \lambda v$ for a unit vector $v$, then $v^*Av = \lambda = -v^* A^* v = - \overline{\lambda} $, hence all eigenvalues are purely imaginary. In particular, they are not equal to one, hence $A-I$ is invertible.</p> <p>To show that $(A-I)^{-1}(A+I)$ is orthogonal, it is sufficient to show that $\|(A-I)^{-1}(A+I) v \| = 1$ for all unit eigenvectors $v$. 
Since $(A-I)^{-1}(A+I) v = {\lambda +1 \over \lambda -1 }v$ and $|{\lambda +1 \over \lambda -1 }| = 1$, we have the desired result.</p>
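Both facts in the answers above — that $A - I$ is invertible for a real skew-symmetric $A$, and that the Cayley transform $(A-I)^{-1}(A+I)$ is orthogonal — are easy to sanity-check numerically. Here is a minimal NumPy sketch (an illustration of the claims, not taken from any of the answers; the matrix size and random seed are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)

# Build a random real skew-symmetric matrix A (so A^T = -A).
M = rng.standard_normal((5, 5))
A = M - M.T

I = np.eye(5)

# A - I is invertible: a real skew-symmetric matrix has purely
# imaginary eigenvalues, so 1 is never an eigenvalue of A.
assert abs(np.linalg.det(A - I)) > 1e-9

# The Cayley transform B = (A - I)^{-1}(A + I) is orthogonal.
B = np.linalg.solve(A - I, A + I)
assert np.allclose(B @ B.T, I)
assert np.allclose(B.T @ B, I)
```

Using `np.linalg.solve` instead of forming the inverse explicitly is the usual numerically stable way to compute $(A-I)^{-1}(A+I)$.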
Ask about beautiful properties of $e$
2,110,919
9
https://math.stackexchange.com/questions/2110919/ask-about-beautiful-properties-of-e
<p>One of my students asked me about "some beautiful properties (or relations) of $e$". I listed the following: \begin{align} &amp; e \equiv \lim_{x \to \infty} \left(1+\frac{1}{x} \right)^x\\[10pt] &amp; e = \sum_{k=0}^\infty \frac{1}{k!}\\[10pt] &amp; \frac{d}{dx} (e^x) = e^x\\[10pt] &amp; e^{ix} = \cos x + i \sin x \quad \text{(Euler)}\\[10pt] &amp; e^{i \pi} + 1 = 0 \end{align} After this, he asked me for more relations or properties, and I said I would think about it and answer later ...</p> <p>Now I would like help adding more relations, properties, or visual things (like proofs without words).</p> <p>Please help me add something more. Thanks in advance.</p> <p>***The class was math. 1. engineering</p>
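The limit, series, and Euler-identity relations listed in the question are easy to verify numerically. A minimal Python sketch (illustrative only; the cutoff values are arbitrary choices):

```python
import cmath
import math

# e as the limit (1 + 1/x)^x for large x
approx_limit = (1 + 1 / 1_000_000) ** 1_000_000

# e as the series sum of 1/k!
approx_series = sum(1 / math.factorial(k) for k in range(20))

assert abs(approx_limit - math.e) < 1e-5
assert abs(approx_series - math.e) < 1e-12

# Euler's identity: e^{i*pi} + 1 = 0, up to floating-point error
assert abs(cmath.exp(1j * math.pi) + 1) < 1e-12
```

Note that the series converges far faster than the limit: twenty terms of $\sum 1/k!$ already agree with $e$ to machine precision, while $x = 10^6$ in the limit gives only about six correct digits.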
[ { "answer_id": 2110928, "score": 14, "text": "<p>One of my favourites is the following property of $e$.</p>\n\n<p>Write down a random number between $0$ and $1$. Write down another one and add it to the previous one. If the total exceeds $1$, stop. Otherwise keep adding such random numbers until the total exceeds $1$ and stop.</p>\n\n<p>The expected number of such random numbers is $e$.</p>\n" }, { "answer_id": 2110949, "score": 4, "text": "<p>My favourite is about derangements.</p>\n\n<p>$n$ people write their name on different envelopes and put the envelopes in a box.\nIf they pick a random envelope back from the box, the probability that no one picks an envelope with its name is $\\approx\\frac{1}{e}$.</p>\n\n<hr>\n\n<p>A number-theoretic one: if $\\{p_k\\}_{k\\geq 1}$ is the sequence of primes,\n$$ \\prod_{k=1}^{n} p_k = e^{n+o(n)}.$$</p>\n" }, { "answer_id": 2114103, "score": 4, "text": "<p>Here are some more relations which might be pleasing.</p>\n\n<blockquote>\n <p>From section 1.3 of <em><a href=\"http://rads.stackoverflow.com/amzn/click/0521818052\" rel=\"nofollow noreferrer\">Mathematical Constants</a></em> by S.R. 
Finch:</p>\n \n <ul>\n <li><p>A Wallis-like infinite product is\n \\begin{align*}\ne=\\frac{2}{1}\\cdot\\left(\\frac{4}{3}\\right)^{\\frac{1}{2}}\n\\cdot\\left(\\frac{6\\cdot 8}{5\\cdot 7}\\right)^{\\frac{1}{4}}\n\\cdot\\left(\\frac{10\\cdot 12\\cdot 14\\cdot 16}{9\\cdot 11\\cdot 13\\cdot 15}\\right)^{\\frac{1}{8}}\\cdots\n\\end{align*}</p></li>\n <li><p>From Stirling's formula we derive\n \\begin{align*}\ne=\\lim_{n\\rightarrow \\infty}\\frac{n}{(n!)^{\\frac{1}{n}}}\n\\end{align*}</p></li>\n <li><p>Another continued fraction representation is\n \\begin{align*}\ne&amp;=2+\\frac{\\left.2\\right|}{\\left|2\\right.}\n+\\frac{\\left.3\\right|}{\\left|3\\right.}\n+\\frac{\\left.4\\right|}{\\left|4\\right.}\n+\\frac{\\left.5\\right|}{\\left|5\\right.}+\\cdots\\\\\n&amp;=2+\\frac{2}{2+\\frac{3}{3+\\frac{4}{4+\\frac{5}{5+\\cdots}}}}\n\\end{align*}</p></li>\n </ul>\n</blockquote>\n\n<p>$$ $$</p>\n\n<blockquote>\n <p>In section <em>Intriguing Results</em> in <em><a href=\"http://rads.stackoverflow.com/amzn/click/0883857456\" rel=\"nofollow noreferrer\">Real Infinite Series</a></em> by D.D. Bonar and M.J. Khoury we find</p>\n \n <ul>\n <li>Gem 89 (American Math Monthly 42:2 pp. 
111-112)</li>\n </ul>\n \n <p>\\begin{align*}\ne=\\frac{1}{5}\\left(\\frac{1^2}{0!}+\\frac{2^2}{1!}+\\frac{3^2}{2!}+\\frac{4^2}{3!}+\\cdots\\right)\n\\end{align*}</p>\n</blockquote>\n" }, { "answer_id": 2110942, "score": 3, "text": "<p>Here is the (simple) continued fraction for $e-1.$ The pattern continues forever, 1,1,2,1,1,4,1,1,6,1,1,8,1,1,10,1,1,12,...</p>\n\n<p>To get $e$ itself add one; this is the same as changing the first $1$ to a $2$.</p>\n\n<p>$$ \n\\begin{array}{ccccccccccccccccccccccc}\n &amp; &amp; 1 &amp; &amp; 1 &amp; &amp; 2 &amp; &amp; 1 &amp; &amp; 1 &amp; &amp; 4 &amp; &amp; 1 &amp; &amp; 1 &amp; &amp; 6 &amp; \\\\\n\\frac{0}{1} &amp; \\frac{1}{0} &amp; &amp; \\frac{1}{1} &amp; &amp; \\frac{2}{1} &amp; &amp; \\frac{5}{3} &amp; &amp; \\frac{7}{4} &amp; &amp; \\frac{12}{7} &amp; &amp; \\frac{55}{32} &amp; &amp; \\frac{67}{39} &amp; &amp; \\frac{122}{71} &amp; &amp; \\frac{799}{465} \\\\\n\\end{array}\n$$</p>\n" }, { "answer_id": 2110962, "score": 3, "text": "<p>Let's say that a number is cut into equal parts and those parts are multiplied together; for example, take the number $20$ and divide it by $4$ ($20/4=5$). As a result we have:</p>\n\n<p>$5\\times5\\times5\\times5=625$</p>\n\n<p>The question here is to find the number $n\\in \\mathrm{N}$ that maximizes the product</p>\n\n<p>$r = \\prod_{i=1}^{n}m$</p>\n\n<p>where</p>\n\n<p>$m=(\\mathrm{your\\:choice\\:of\\:number})/n$</p>\n\n<p>We can show that $r$ is maximized when $m\\approx e$. In the case of choosing $20$ we have</p>\n\n<p>$20/7\\approx e$</p>\n\n<p>so </p>\n\n<p>$r=(2.8)^7$</p>\n" }, { "answer_id": 2110928, "score": 14, "text": "<p>One of my favourites is the following property of $e$.</p>\n\n<p>Write down a random number between $0$ and $1$. Write down another one and add it to the previous one. If the total exceeds $1$, stop.
Otherwise keep adding such random numbers until the total exceeds $1$ and stop.</p>\n\n<p>The expected number of such random numbers is $e$.</p>\n" }, { "answer_id": 2110949, "score": 4, "text": "<p>My favourite is about derangements.</p>\n\n<p>$n$ people write their name on different envelopes and put the envelopes in a box.\nIf they pick a random envelope back from the box, the probability that no one picks an envelope with its name is $\\approx\\frac{1}{e}$.</p>\n\n<hr>\n\n<p>A number-theoretic one: if $\\{p_k\\}_{k\\geq 1}$ is the sequence of primes,\n$$ \\prod_{k=1}^{n} p_k = e^{n+o(n)}.$$</p>\n" }, { "answer_id": 2114103, "score": 4, "text": "<p>Here are some more relations which might be pleasing.</p>\n\n<blockquote>\n <p>From section 1.3 of <em><a href=\"http://rads.stackoverflow.com/amzn/click/0521818052\" rel=\"nofollow noreferrer\">Mathematical Constants</a></em> by S.R. Finch:</p>\n \n <ul>\n <li><p>A Wallis-like infinite product is\n \\begin{align*}\ne=\\frac{2}{1}\\cdot\\left(\\frac{4}{3}\\right)^{\\frac{1}{2}}\n\\cdot\\left(\\frac{6\\cdot 8}{5\\cdot 7}\\right)^{\\frac{1}{4}}\n\\cdot\\left(\\frac{10\\cdot 12\\cdot 14\\cdot 16}{9\\cdot 11\\cdot 13\\cdot 15}\\right)^{\\frac{1}{8}}\\cdots\n\\end{align*}</p></li>\n <li><p>From Stirling's formula we derive\n \\begin{align*}\ne=\\lim_{n\\rightarrow \\infty}\\frac{n}{(n!)^{\\frac{1}{n}}}\n\\end{align*}</p></li>\n <li><p>Another continued fraction representation is\n \\begin{align*}\ne&amp;=2+\\frac{\\left.2\\right|}{\\left|2\\right.}\n+\\frac{\\left.3\\right|}{\\left|3\\right.}\n+\\frac{\\left.4\\right|}{\\left|4\\right.}\n+\\frac{\\left.5\\right|}{\\left|5\\right.}+\\cdots\\\\\n&amp;=2+\\frac{2}{2+\\frac{3}{3+\\frac{4}{4+\\frac{5}{5+\\cdots}}}}\n\\end{align*}</p></li>\n </ul>\n</blockquote>\n\n<p>$$ $$</p>\n\n<blockquote>\n <p>In section <em>Intriguing Results</em> in <em><a href=\"http://rads.stackoverflow.com/amzn/click/0883857456\" rel=\"nofollow noreferrer\">Real Infinite Series</a></em> by D.D. 
Bonar and M.J. Khoury we find</p>\n \n <ul>\n <li>Gem 89 (American Math Monthly 42:2 pp. 111-112)</li>\n </ul>\n \n <p>\\begin{align*}\ne=\\frac{1}{5}\\left(\\frac{1^2}{0!}+\\frac{2^2}{1!}+\\frac{3^2}{2!}+\\frac{4^2}{3!}+\\cdots\\right)\n\\end{align*}</p>\n</blockquote>\n" }, { "answer_id": 2110942, "score": 3, "text": "<p>Here is the (simple) continued fraction for $e-1.$ The pattern continues forever, 1,1,2,1,1,4,1,1,6,1,1,8,1,1,10,1,1,12,...</p>\n\n<p>To get $e$ itself add one, this is the same as changing the first $1$ to a $2$</p>\n\n<p>$$ \n\\begin{array}{ccccccccccccccccccccccc}\n &amp; &amp; 1 &amp; &amp; 1 &amp; &amp; 2 &amp; &amp; 1 &amp; &amp; 1 &amp; &amp; 4 &amp; &amp; 1 &amp; &amp; 1 &amp; &amp; 6 &amp; \\\\\n\\frac{0}{1} &amp; \\frac{1}{0} &amp; &amp; \\frac{1}{1} &amp; &amp; \\frac{2}{1} &amp; &amp; \\frac{5}{3} &amp; &amp; \\frac{7}{4} &amp; &amp; \\frac{12}{7} &amp; &amp; \\frac{55}{32} &amp; &amp; \\frac{67}{39} &amp; &amp; \\frac{122}{71} &amp; &amp; \\frac{799}{465} \\\\\n\\end{array}\n$$</p>\n" }, { "answer_id": 2110962, "score": 3, "text": "<p>Let's say that a number is cut into equal parts and then multiply those parts together, for example take number $20$ and divide it by $4$ ($20/4=5$), as result we have:</p>\n\n<p>$5\\times5\\times5\\times5=625$</p>\n\n<p>The question here is to find the number $n\\in \\mathrm{N}$ such that is maximizes </p>\n\n<p>$r = \\arg \\max_{n,m} \\prod_{i=1}^{n}m$</p>\n\n<p>where</p>\n\n<p>$m=(\\mathrm{your\\:choice\\:of\\:number})/n$</p>\n\n<p>We can show that the number $r$ is maximized when $m\\approx e$. In the case of choosing $20$ we have</p>\n\n<p>$20/7\\approx e$</p>\n\n<p>so </p>\n\n<p>$r=(2.8)^7$</p>\n" }, { "answer_id": 2110928, "score": 14, "text": "<p>One of my favourites is the following property of $e$.</p>\n\n<p>Write down a random number between $0$ and $1$. Write down another one and add it to the previous one. If the total exceeds $1$, stop. 
Otherwise keep adding such random numbers until the total exceeds $1$ and stop.</p>\n\n<p>The expected number of such random numbers is $e$.</p>\n" }, { "answer_id": 2110949, "score": 4, "text": "<p>My favourite is about derangements.</p>\n\n<p>$n$ people write their name on different envelopes and put the envelopes in a box.\nIf they pick a random envelope back from the box, the probability that no one picks an envelope with its name is $\\approx\\frac{1}{e}$.</p>\n\n<hr>\n\n<p>A number-theoretic one: if $\\{p_k\\}_{k\\geq 1}$ is the sequence of primes,\n$$ \\prod_{k=1}^{n} p_k = e^{n+o(n)}.$$</p>\n" }, { "answer_id": 2114103, "score": 4, "text": "<p>Here are some more relations which might be pleasing.</p>\n\n<blockquote>\n <p>From section 1.3 of <em><a href=\"http://rads.stackoverflow.com/amzn/click/0521818052\" rel=\"nofollow noreferrer\">Mathematical Constants</a></em> by S.R. Finch:</p>\n \n <ul>\n <li><p>A Wallis-like infinite product is\n \\begin{align*}\ne=\\frac{2}{1}\\cdot\\left(\\frac{4}{3}\\right)^{\\frac{1}{2}}\n\\cdot\\left(\\frac{6\\cdot 8}{5\\cdot 7}\\right)^{\\frac{1}{4}}\n\\cdot\\left(\\frac{10\\cdot 12\\cdot 14\\cdot 16}{9\\cdot 11\\cdot 13\\cdot 15}\\right)^{\\frac{1}{8}}\\cdots\n\\end{align*}</p></li>\n <li><p>From Stirling's formula we derive\n \\begin{align*}\ne=\\lim_{n\\rightarrow \\infty}\\frac{n}{(n!)^{\\frac{1}{n}}}\n\\end{align*}</p></li>\n <li><p>Another continued fraction representation is\n \\begin{align*}\ne&amp;=2+\\frac{\\left.2\\right|}{\\left|2\\right.}\n+\\frac{\\left.3\\right|}{\\left|3\\right.}\n+\\frac{\\left.4\\right|}{\\left|4\\right.}\n+\\frac{\\left.5\\right|}{\\left|5\\right.}+\\cdots\\\\\n&amp;=2+\\frac{2}{2+\\frac{3}{3+\\frac{4}{4+\\frac{5}{5+\\cdots}}}}\n\\end{align*}</p></li>\n </ul>\n</blockquote>\n\n<p>$$ $$</p>\n\n<blockquote>\n <p>In section <em>Intriguing Results</em> in <em><a href=\"http://rads.stackoverflow.com/amzn/click/0883857456\" rel=\"nofollow noreferrer\">Real Infinite Series</a></em> by D.D. 
Bonar and M.J. Khoury we find</p>\n \n <ul>\n <li>Gem 89 (American Math Monthly 42:2 pp. 111-112)</li>\n </ul>\n \n <p>\\begin{align*}\ne=\\frac{1}{5}\\left(\\frac{1^2}{0!}+\\frac{2^2}{1!}+\\frac{3^2}{2!}+\\frac{4^2}{3!}+\\cdots\\right)\n\\end{align*}</p>\n</blockquote>\n" }, { "answer_id": 2110942, "score": 3, "text": "<p>Here is the (simple) continued fraction for $e-1.$ The pattern continues forever, 1,1,2,1,1,4,1,1,6,1,1,8,1,1,10,1,1,12,...</p>\n\n<p>To get $e$ itself add one, this is the same as changing the first $1$ to a $2$</p>\n\n<p>$$ \n\\begin{array}{ccccccccccccccccccccccc}\n &amp; &amp; 1 &amp; &amp; 1 &amp; &amp; 2 &amp; &amp; 1 &amp; &amp; 1 &amp; &amp; 4 &amp; &amp; 1 &amp; &amp; 1 &amp; &amp; 6 &amp; \\\\\n\\frac{0}{1} &amp; \\frac{1}{0} &amp; &amp; \\frac{1}{1} &amp; &amp; \\frac{2}{1} &amp; &amp; \\frac{5}{3} &amp; &amp; \\frac{7}{4} &amp; &amp; \\frac{12}{7} &amp; &amp; \\frac{55}{32} &amp; &amp; \\frac{67}{39} &amp; &amp; \\frac{122}{71} &amp; &amp; \\frac{799}{465} \\\\\n\\end{array}\n$$</p>\n" }, { "answer_id": 2110962, "score": 3, "text": "<p>Let's say that a number is cut into equal parts and then multiply those parts together, for example take number $20$ and divide it by $4$ ($20/4=5$), as result we have:</p>\n\n<p>$5\\times5\\times5\\times5=625$</p>\n\n<p>The question here is to find the number $n\\in \\mathrm{N}$ such that is maximizes </p>\n\n<p>$r = \\arg \\max_{n,m} \\prod_{i=1}^{n}m$</p>\n\n<p>where</p>\n\n<p>$m=(\\mathrm{your\\:choice\\:of\\:number})/n$</p>\n\n<p>We can show that the number $r$ is maximized when $m\\approx e$. In the case of choosing $20$ we have</p>\n\n<p>$20/7\\approx e$</p>\n\n<p>so </p>\n\n<p>$r=(2.8)^7$</p>\n" }, { "answer_id": 2110928, "score": 14, "text": "<p>One of my favourites is the following property of $e$.</p>\n\n<p>Write down a random number between $0$ and $1$. Write down another one and add it to the previous one. If the total exceeds $1$, stop. 
Otherwise keep adding such random numbers until the total exceeds $1$ and stop.</p>\n\n<p>The expected number of such random numbers is $e$.</p>\n" }, { "answer_id": 2110949, "score": 4, "text": "<p>My favourite is about derangements.</p>\n\n<p>$n$ people write their name on different envelopes and put the envelopes in a box.\nIf they pick a random envelope back from the box, the probability that no one picks an envelope with its name is $\\approx\\frac{1}{e}$.</p>\n\n<hr>\n\n<p>A number-theoretic one: if $\\{p_k\\}_{k\\geq 1}$ is the sequence of primes,\n$$ \\prod_{k=1}^{n} p_k = e^{n+o(n)}.$$</p>\n" }, { "answer_id": 2114103, "score": 4, "text": "<p>Here are some more relations which might be pleasing.</p>\n\n<blockquote>\n <p>From section 1.3 of <em><a href=\"http://rads.stackoverflow.com/amzn/click/0521818052\" rel=\"nofollow noreferrer\">Mathematical Constants</a></em> by S.R. Finch:</p>\n \n <ul>\n <li><p>A Wallis-like infinite product is\n \\begin{align*}\ne=\\frac{2}{1}\\cdot\\left(\\frac{4}{3}\\right)^{\\frac{1}{2}}\n\\cdot\\left(\\frac{6\\cdot 8}{5\\cdot 7}\\right)^{\\frac{1}{4}}\n\\cdot\\left(\\frac{10\\cdot 12\\cdot 14\\cdot 16}{9\\cdot 11\\cdot 13\\cdot 15}\\right)^{\\frac{1}{8}}\\cdots\n\\end{align*}</p></li>\n <li><p>From Stirling's formula we derive\n \\begin{align*}\ne=\\lim_{n\\rightarrow \\infty}\\frac{n}{(n!)^{\\frac{1}{n}}}\n\\end{align*}</p></li>\n <li><p>Another continued fraction representation is\n \\begin{align*}\ne&amp;=2+\\frac{\\left.2\\right|}{\\left|2\\right.}\n+\\frac{\\left.3\\right|}{\\left|3\\right.}\n+\\frac{\\left.4\\right|}{\\left|4\\right.}\n+\\frac{\\left.5\\right|}{\\left|5\\right.}+\\cdots\\\\\n&amp;=2+\\frac{2}{2+\\frac{3}{3+\\frac{4}{4+\\frac{5}{5+\\cdots}}}}\n\\end{align*}</p></li>\n </ul>\n</blockquote>\n\n<p>$$ $$</p>\n\n<blockquote>\n <p>In section <em>Intriguing Results</em> in <em><a href=\"http://rads.stackoverflow.com/amzn/click/0883857456\" rel=\"nofollow noreferrer\">Real Infinite Series</a></em> by D.D. 
Bonar and M.J. Khoury we find</p>\n \n <ul>\n <li>Gem 89 (American Math Monthly 42:2 pp. 111-112)</li>\n </ul>\n \n <p>\\begin{align*}\ne=\\frac{1}{5}\\left(\\frac{1^2}{0!}+\\frac{2^2}{1!}+\\frac{3^2}{2!}+\\frac{4^2}{3!}+\\cdots\\right)\n\\end{align*}</p>\n</blockquote>\n" }, { "answer_id": 2110942, "score": 3, "text": "<p>Here is the (simple) continued fraction for $e-1.$ The pattern continues forever, 1,1,2,1,1,4,1,1,6,1,1,8,1,1,10,1,1,12,...</p>\n\n<p>To get $e$ itself add one, this is the same as changing the first $1$ to a $2$</p>\n\n<p>$$ \n\\begin{array}{ccccccccccccccccccccccc}\n &amp; &amp; 1 &amp; &amp; 1 &amp; &amp; 2 &amp; &amp; 1 &amp; &amp; 1 &amp; &amp; 4 &amp; &amp; 1 &amp; &amp; 1 &amp; &amp; 6 &amp; \\\\\n\\frac{0}{1} &amp; \\frac{1}{0} &amp; &amp; \\frac{1}{1} &amp; &amp; \\frac{2}{1} &amp; &amp; \\frac{5}{3} &amp; &amp; \\frac{7}{4} &amp; &amp; \\frac{12}{7} &amp; &amp; \\frac{55}{32} &amp; &amp; \\frac{67}{39} &amp; &amp; \\frac{122}{71} &amp; &amp; \\frac{799}{465} \\\\\n\\end{array}\n$$</p>\n" }, { "answer_id": 2110962, "score": 3, "text": "<p>Let's say that a number is cut into equal parts and then multiply those parts together, for example take number $20$ and divide it by $4$ ($20/4=5$), as result we have:</p>\n\n<p>$5\\times5\\times5\\times5=625$</p>\n\n<p>The question here is to find the number $n\\in \\mathrm{N}$ such that is maximizes </p>\n\n<p>$r = \\arg \\max_{n,m} \\prod_{i=1}^{n}m$</p>\n\n<p>where</p>\n\n<p>$m=(\\mathrm{your\\:choice\\:of\\:number})/n$</p>\n\n<p>We can show that the number $r$ is maximized when $m\\approx e$. In the case of choosing $20$ we have</p>\n\n<p>$20/7\\approx e$</p>\n\n<p>so </p>\n\n<p>$r=(2.8)^7$</p>\n" }, { "answer_id": 2110928, "score": 14, "text": "<p>One of my favourites is the following property of $e$.</p>\n\n<p>Write down a random number between $0$ and $1$. Write down another one and add it to the previous one. If the total exceeds $1$, stop. 
Otherwise keep adding such random numbers until the total exceeds $1$ and stop.</p>\n\n<p>The expected number of such random numbers is $e$.</p>\n" }, { "answer_id": 2110949, "score": 4, "text": "<p>My favourite is about derangements.</p>\n\n<p>$n$ people write their name on different envelopes and put the envelopes in a box.\nIf they pick a random envelope back from the box, the probability that no one picks an envelope with its name is $\\approx\\frac{1}{e}$.</p>\n\n<hr>\n\n<p>A number-theoretic one: if $\\{p_k\\}_{k\\geq 1}$ is the sequence of primes,\n$$ \\prod_{k=1}^{n} p_k = e^{n+o(n)}.$$</p>\n" }, { "answer_id": 2114103, "score": 4, "text": "<p>Here are some more relations which might be pleasing.</p>\n\n<blockquote>\n <p>From section 1.3 of <em><a href=\"http://rads.stackoverflow.com/amzn/click/0521818052\" rel=\"nofollow noreferrer\">Mathematical Constants</a></em> by S.R. Finch:</p>\n \n <ul>\n <li><p>A Wallis-like infinite product is\n \\begin{align*}\ne=\\frac{2}{1}\\cdot\\left(\\frac{4}{3}\\right)^{\\frac{1}{2}}\n\\cdot\\left(\\frac{6\\cdot 8}{5\\cdot 7}\\right)^{\\frac{1}{4}}\n\\cdot\\left(\\frac{10\\cdot 12\\cdot 14\\cdot 16}{9\\cdot 11\\cdot 13\\cdot 15}\\right)^{\\frac{1}{8}}\\cdots\n\\end{align*}</p></li>\n <li><p>From Stirling's formula we derive\n \\begin{align*}\ne=\\lim_{n\\rightarrow \\infty}\\frac{n}{(n!)^{\\frac{1}{n}}}\n\\end{align*}</p></li>\n <li><p>Another continued fraction representation is\n \\begin{align*}\ne&amp;=2+\\frac{\\left.2\\right|}{\\left|2\\right.}\n+\\frac{\\left.3\\right|}{\\left|3\\right.}\n+\\frac{\\left.4\\right|}{\\left|4\\right.}\n+\\frac{\\left.5\\right|}{\\left|5\\right.}+\\cdots\\\\\n&amp;=2+\\frac{2}{2+\\frac{3}{3+\\frac{4}{4+\\frac{5}{5+\\cdots}}}}\n\\end{align*}</p></li>\n </ul>\n</blockquote>\n\n<p>$$ $$</p>\n\n<blockquote>\n <p>In section <em>Intriguing Results</em> in <em><a href=\"http://rads.stackoverflow.com/amzn/click/0883857456\" rel=\"nofollow noreferrer\">Real Infinite Series</a></em> by D.D. 
Bonar and M.J. Khoury we find</p>\n \n <ul>\n <li>Gem 89 (American Math Monthly 42:2 pp. 111-112)</li>\n </ul>\n \n <p>\\begin{align*}\ne=\\frac{1}{5}\\left(\\frac{1^2}{0!}+\\frac{2^2}{1!}+\\frac{3^2}{2!}+\\frac{4^2}{3!}+\\cdots\\right)\n\\end{align*}</p>\n</blockquote>\n" }, { "answer_id": 2110942, "score": 3, "text": "<p>Here is the (simple) continued fraction for $e-1.$ The pattern continues forever, 1,1,2,1,1,4,1,1,6,1,1,8,1,1,10,1,1,12,...</p>\n\n<p>To get $e$ itself add one, this is the same as changing the first $1$ to a $2$</p>\n\n<p>$$ \n\\begin{array}{ccccccccccccccccccccccc}\n &amp; &amp; 1 &amp; &amp; 1 &amp; &amp; 2 &amp; &amp; 1 &amp; &amp; 1 &amp; &amp; 4 &amp; &amp; 1 &amp; &amp; 1 &amp; &amp; 6 &amp; \\\\\n\\frac{0}{1} &amp; \\frac{1}{0} &amp; &amp; \\frac{1}{1} &amp; &amp; \\frac{2}{1} &amp; &amp; \\frac{5}{3} &amp; &amp; \\frac{7}{4} &amp; &amp; \\frac{12}{7} &amp; &amp; \\frac{55}{32} &amp; &amp; \\frac{67}{39} &amp; &amp; \\frac{122}{71} &amp; &amp; \\frac{799}{465} \\\\\n\\end{array}\n$$</p>\n" }, { "answer_id": 2110962, "score": 3, "text": "<p>Let's say that a number is cut into equal parts and then multiply those parts together, for example take number $20$ and divide it by $4$ ($20/4=5$), as result we have:</p>\n\n<p>$5\\times5\\times5\\times5=625$</p>\n\n<p>The question here is to find the number $n\\in \\mathrm{N}$ such that is maximizes </p>\n\n<p>$r = \\arg \\max_{n,m} \\prod_{i=1}^{n}m$</p>\n\n<p>where</p>\n\n<p>$m=(\\mathrm{your\\:choice\\:of\\:number})/n$</p>\n\n<p>We can show that the number $r$ is maximized when $m\\approx e$. In the case of choosing $20$ we have</p>\n\n<p>$20/7\\approx e$</p>\n\n<p>so </p>\n\n<p>$r=(2.8)^7$</p>\n" }, { "answer_id": 2110928, "score": 14, "text": "<p>One of my favourites is the following property of $e$.</p>\n\n<p>Write down a random number between $0$ and $1$. Write down another one and add it to the previous one. If the total exceeds $1$, stop. 
Otherwise keep adding such random numbers until the total exceeds $1$ and stop.</p>\n\n<p>The expected number of such random numbers is $e$.</p>\n" }, { "answer_id": 2110949, "score": 4, "text": "<p>My favourite is about derangements.</p>\n\n<p>$n$ people write their name on different envelopes and put the envelopes in a box.\nIf they pick a random envelope back from the box, the probability that no one picks an envelope with its name is $\\approx\\frac{1}{e}$.</p>\n\n<hr>\n\n<p>A number-theoretic one: if $\\{p_k\\}_{k\\geq 1}$ is the sequence of primes,\n$$ \\prod_{k=1}^{n} p_k = e^{n+o(n)}.$$</p>\n" }, { "answer_id": 2114103, "score": 4, "text": "<p>Here are some more relations which might be pleasing.</p>\n\n<blockquote>\n <p>From section 1.3 of <em><a href=\"http://rads.stackoverflow.com/amzn/click/0521818052\" rel=\"nofollow noreferrer\">Mathematical Constants</a></em> by S.R. Finch:</p>\n \n <ul>\n <li><p>A Wallis-like infinite product is\n \\begin{align*}\ne=\\frac{2}{1}\\cdot\\left(\\frac{4}{3}\\right)^{\\frac{1}{2}}\n\\cdot\\left(\\frac{6\\cdot 8}{5\\cdot 7}\\right)^{\\frac{1}{4}}\n\\cdot\\left(\\frac{10\\cdot 12\\cdot 14\\cdot 16}{9\\cdot 11\\cdot 13\\cdot 15}\\right)^{\\frac{1}{8}}\\cdots\n\\end{align*}</p></li>\n <li><p>From Stirling's formula we derive\n \\begin{align*}\ne=\\lim_{n\\rightarrow \\infty}\\frac{n}{(n!)^{\\frac{1}{n}}}\n\\end{align*}</p></li>\n <li><p>Another continued fraction representation is\n \\begin{align*}\ne&amp;=2+\\frac{\\left.2\\right|}{\\left|2\\right.}\n+\\frac{\\left.3\\right|}{\\left|3\\right.}\n+\\frac{\\left.4\\right|}{\\left|4\\right.}\n+\\frac{\\left.5\\right|}{\\left|5\\right.}+\\cdots\\\\\n&amp;=2+\\frac{2}{2+\\frac{3}{3+\\frac{4}{4+\\frac{5}{5+\\cdots}}}}\n\\end{align*}</p></li>\n </ul>\n</blockquote>\n\n<p>$$ $$</p>\n\n<blockquote>\n <p>In section <em>Intriguing Results</em> in <em><a href=\"http://rads.stackoverflow.com/amzn/click/0883857456\" rel=\"nofollow noreferrer\">Real Infinite Series</a></em> by D.D. 
Bonar and M.J. Khoury we find</p>\n \n <ul>\n <li>Gem 89 (American Math Monthly 42:2 pp. 111-112)</li>\n </ul>\n \n <p>\\begin{align*}\ne=\\frac{1}{5}\\left(\\frac{1^2}{0!}+\\frac{2^2}{1!}+\\frac{3^2}{2!}+\\frac{4^2}{3!}+\\cdots\\right)\n\\end{align*}</p>\n</blockquote>\n" }, { "answer_id": 2110942, "score": 3, "text": "<p>Here is the (simple) continued fraction for $e-1.$ The pattern continues forever, 1,1,2,1,1,4,1,1,6,1,1,8,1,1,10,1,1,12,...</p>\n\n<p>To get $e$ itself add one, this is the same as changing the first $1$ to a $2$</p>\n\n<p>$$ \n\\begin{array}{ccccccccccccccccccccccc}\n &amp; &amp; 1 &amp; &amp; 1 &amp; &amp; 2 &amp; &amp; 1 &amp; &amp; 1 &amp; &amp; 4 &amp; &amp; 1 &amp; &amp; 1 &amp; &amp; 6 &amp; \\\\\n\\frac{0}{1} &amp; \\frac{1}{0} &amp; &amp; \\frac{1}{1} &amp; &amp; \\frac{2}{1} &amp; &amp; \\frac{5}{3} &amp; &amp; \\frac{7}{4} &amp; &amp; \\frac{12}{7} &amp; &amp; \\frac{55}{32} &amp; &amp; \\frac{67}{39} &amp; &amp; \\frac{122}{71} &amp; &amp; \\frac{799}{465} \\\\\n\\end{array}\n$$</p>\n" }, { "answer_id": 2110962, "score": 3, "text": "<p>Let's say that a number is cut into equal parts and then multiply those parts together, for example take number $20$ and divide it by $4$ ($20/4=5$), as result we have:</p>\n\n<p>$5\\times5\\times5\\times5=625$</p>\n\n<p>The question here is to find the number $n\\in \\mathrm{N}$ such that is maximizes </p>\n\n<p>$r = \\arg \\max_{n,m} \\prod_{i=1}^{n}m$</p>\n\n<p>where</p>\n\n<p>$m=(\\mathrm{your\\:choice\\:of\\:number})/n$</p>\n\n<p>We can show that the number $r$ is maximized when $m\\approx e$. In the case of choosing $20$ we have</p>\n\n<p>$20/7\\approx e$</p>\n\n<p>so </p>\n\n<p>$r=(2.8)^7$</p>\n" }, { "answer_id": 2110928, "score": 14, "text": "<p>One of my favourites is the following property of $e$.</p>\n\n<p>Write down a random number between $0$ and $1$. Write down another one and add it to the previous one. If the total exceeds $1$, stop. 
Otherwise keep adding such random numbers until the total exceeds $1$ and stop.</p>\n\n<p>The expected number of such random numbers is $e$.</p>\n" }, { "answer_id": 2110949, "score": 4, "text": "<p>My favourite is about derangements.</p>\n\n<p>$n$ people write their name on different envelopes and put the envelopes in a box.\nIf they pick a random envelope back from the box, the probability that no one picks an envelope with its name is $\\approx\\frac{1}{e}$.</p>\n\n<hr>\n\n<p>A number-theoretic one: if $\\{p_k\\}_{k\\geq 1}$ is the sequence of primes,\n$$ \\prod_{k=1}^{n} p_k = e^{n+o(n)}.$$</p>\n" }, { "answer_id": 2114103, "score": 4, "text": "<p>Here are some more relations which might be pleasing.</p>\n\n<blockquote>\n <p>From section 1.3 of <em><a href=\"http://rads.stackoverflow.com/amzn/click/0521818052\" rel=\"nofollow noreferrer\">Mathematical Constants</a></em> by S.R. Finch:</p>\n \n <ul>\n <li><p>A Wallis-like infinite product is\n \\begin{align*}\ne=\\frac{2}{1}\\cdot\\left(\\frac{4}{3}\\right)^{\\frac{1}{2}}\n\\cdot\\left(\\frac{6\\cdot 8}{5\\cdot 7}\\right)^{\\frac{1}{4}}\n\\cdot\\left(\\frac{10\\cdot 12\\cdot 14\\cdot 16}{9\\cdot 11\\cdot 13\\cdot 15}\\right)^{\\frac{1}{8}}\\cdots\n\\end{align*}</p></li>\n <li><p>From Stirling's formula we derive\n \\begin{align*}\ne=\\lim_{n\\rightarrow \\infty}\\frac{n}{(n!)^{\\frac{1}{n}}}\n\\end{align*}</p></li>\n <li><p>Another continued fraction representation is\n \\begin{align*}\ne&amp;=2+\\frac{\\left.2\\right|}{\\left|2\\right.}\n+\\frac{\\left.3\\right|}{\\left|3\\right.}\n+\\frac{\\left.4\\right|}{\\left|4\\right.}\n+\\frac{\\left.5\\right|}{\\left|5\\right.}+\\cdots\\\\\n&amp;=2+\\frac{2}{2+\\frac{3}{3+\\frac{4}{4+\\frac{5}{5+\\cdots}}}}\n\\end{align*}</p></li>\n </ul>\n</blockquote>\n\n<p>$$ $$</p>\n\n<blockquote>\n <p>In section <em>Intriguing Results</em> in <em><a href=\"http://rads.stackoverflow.com/amzn/click/0883857456\" rel=\"nofollow noreferrer\">Real Infinite Series</a></em> by D.D. 
Bonar and M.J. Khoury we find</p>\n \n <ul>\n <li>Gem 89 (American Math Monthly 42:2 pp. 111-112)</li>\n </ul>\n \n <p>\\begin{align*}\ne=\\frac{1}{5}\\left(\\frac{1^2}{0!}+\\frac{2^2}{1!}+\\frac{3^2}{2!}+\\frac{4^2}{3!}+\\cdots\\right)\n\\end{align*}</p>\n</blockquote>\n" }, { "answer_id": 2110942, "score": 3, "text": "<p>Here is the (simple) continued fraction for $e-1.$ The pattern continues forever, 1,1,2,1,1,4,1,1,6,1,1,8,1,1,10,1,1,12,...</p>\n\n<p>To get $e$ itself add one, this is the same as changing the first $1$ to a $2$</p>\n\n<p>$$ \n\\begin{array}{ccccccccccccccccccccccc}\n &amp; &amp; 1 &amp; &amp; 1 &amp; &amp; 2 &amp; &amp; 1 &amp; &amp; 1 &amp; &amp; 4 &amp; &amp; 1 &amp; &amp; 1 &amp; &amp; 6 &amp; \\\\\n\\frac{0}{1} &amp; \\frac{1}{0} &amp; &amp; \\frac{1}{1} &amp; &amp; \\frac{2}{1} &amp; &amp; \\frac{5}{3} &amp; &amp; \\frac{7}{4} &amp; &amp; \\frac{12}{7} &amp; &amp; \\frac{55}{32} &amp; &amp; \\frac{67}{39} &amp; &amp; \\frac{122}{71} &amp; &amp; \\frac{799}{465} \\\\\n\\end{array}\n$$</p>\n" }, { "answer_id": 2110962, "score": 3, "text": "<p>Let's say that a number is cut into equal parts and then multiply those parts together, for example take number $20$ and divide it by $4$ ($20/4=5$), as result we have:</p>\n\n<p>$5\\times5\\times5\\times5=625$</p>\n\n<p>The question here is to find the number $n\\in \\mathrm{N}$ such that is maximizes </p>\n\n<p>$r = \\arg \\max_{n,m} \\prod_{i=1}^{n}m$</p>\n\n<p>where</p>\n\n<p>$m=(\\mathrm{your\\:choice\\:of\\:number})/n$</p>\n\n<p>We can show that the number $r$ is maximized when $m\\approx e$. In the case of choosing $20$ we have</p>\n\n<p>$20/7\\approx e$</p>\n\n<p>so </p>\n\n<p>$r=(2.8)^7$</p>\n" }, { "answer_id": 2110928, "score": 14, "text": "<p>One of my favourites is the following property of $e$.</p>\n\n<p>Write down a random number between $0$ and $1$. Write down another one and add it to the previous one. If the total exceeds $1$, stop. 
Otherwise keep adding such random numbers until the total exceeds $1$ and stop.</p>\n\n<p>The expected number of such random numbers is $e$.</p>\n" }, { "answer_id": 2110949, "score": 4, "text": "<p>My favourite is about derangements.</p>\n\n<p>$n$ people write their name on different envelopes and put the envelopes in a box.\nIf they pick a random envelope back from the box, the probability that no one picks an envelope with its name is $\\approx\\frac{1}{e}$.</p>\n\n<hr>\n\n<p>A number-theoretic one: if $\\{p_k\\}_{k\\geq 1}$ is the sequence of primes,\n$$ \\prod_{k=1}^{n} p_k = e^{n+o(n)}.$$</p>\n" }, { "answer_id": 2114103, "score": 4, "text": "<p>Here are some more relations which might be pleasing.</p>\n\n<blockquote>\n <p>From section 1.3 of <em><a href=\"http://rads.stackoverflow.com/amzn/click/0521818052\" rel=\"nofollow noreferrer\">Mathematical Constants</a></em> by S.R. Finch:</p>\n \n <ul>\n <li><p>A Wallis-like infinite product is\n \\begin{align*}\ne=\\frac{2}{1}\\cdot\\left(\\frac{4}{3}\\right)^{\\frac{1}{2}}\n\\cdot\\left(\\frac{6\\cdot 8}{5\\cdot 7}\\right)^{\\frac{1}{4}}\n\\cdot\\left(\\frac{10\\cdot 12\\cdot 14\\cdot 16}{9\\cdot 11\\cdot 13\\cdot 15}\\right)^{\\frac{1}{8}}\\cdots\n\\end{align*}</p></li>\n <li><p>From Stirling's formula we derive\n \\begin{align*}\ne=\\lim_{n\\rightarrow \\infty}\\frac{n}{(n!)^{\\frac{1}{n}}}\n\\end{align*}</p></li>\n <li><p>Another continued fraction representation is\n \\begin{align*}\ne&amp;=2+\\frac{\\left.2\\right|}{\\left|2\\right.}\n+\\frac{\\left.3\\right|}{\\left|3\\right.}\n+\\frac{\\left.4\\right|}{\\left|4\\right.}\n+\\frac{\\left.5\\right|}{\\left|5\\right.}+\\cdots\\\\\n&amp;=2+\\frac{2}{2+\\frac{3}{3+\\frac{4}{4+\\frac{5}{5+\\cdots}}}}\n\\end{align*}</p></li>\n </ul>\n</blockquote>\n\n<p>$$ $$</p>\n\n<blockquote>\n <p>In section <em>Intriguing Results</em> in <em><a href=\"http://rads.stackoverflow.com/amzn/click/0883857456\" rel=\"nofollow noreferrer\">Real Infinite Series</a></em> by D.D. 
Bonar and M.J. Khoury we find</p>\n \n <ul>\n <li>Gem 89 (American Math Monthly 42:2 pp. 111-112)</li>\n </ul>\n \n <p>\\begin{align*}\ne=\\frac{1}{5}\\left(\\frac{1^2}{0!}+\\frac{2^2}{1!}+\\frac{3^2}{2!}+\\frac{4^2}{3!}+\\cdots\\right)\n\\end{align*}</p>\n</blockquote>\n" }, { "answer_id": 2110942, "score": 3, "text": "<p>Here is the (simple) continued fraction for $e-1.$ The pattern continues forever, 1,1,2,1,1,4,1,1,6,1,1,8,1,1,10,1,1,12,...</p>\n\n<p>To get $e$ itself add one, this is the same as changing the first $1$ to a $2$</p>\n\n<p>$$ \n\\begin{array}{ccccccccccccccccccccccc}\n &amp; &amp; 1 &amp; &amp; 1 &amp; &amp; 2 &amp; &amp; 1 &amp; &amp; 1 &amp; &amp; 4 &amp; &amp; 1 &amp; &amp; 1 &amp; &amp; 6 &amp; \\\\\n\\frac{0}{1} &amp; \\frac{1}{0} &amp; &amp; \\frac{1}{1} &amp; &amp; \\frac{2}{1} &amp; &amp; \\frac{5}{3} &amp; &amp; \\frac{7}{4} &amp; &amp; \\frac{12}{7} &amp; &amp; \\frac{55}{32} &amp; &amp; \\frac{67}{39} &amp; &amp; \\frac{122}{71} &amp; &amp; \\frac{799}{465} \\\\\n\\end{array}\n$$</p>\n" }, { "answer_id": 2110962, "score": 3, "text": "<p>Let's say that a number is cut into equal parts and then multiply those parts together, for example take number $20$ and divide it by $4$ ($20/4=5$), as result we have:</p>\n\n<p>$5\\times5\\times5\\times5=625$</p>\n\n<p>The question here is to find the number $n\\in \\mathrm{N}$ such that is maximizes </p>\n\n<p>$r = \\arg \\max_{n,m} \\prod_{i=1}^{n}m$</p>\n\n<p>where</p>\n\n<p>$m=(\\mathrm{your\\:choice\\:of\\:number})/n$</p>\n\n<p>We can show that the number $r$ is maximized when $m\\approx e$. In the case of choosing $20$ we have</p>\n\n<p>$20/7\\approx e$</p>\n\n<p>so </p>\n\n<p>$r=(2.8)^7$</p>\n" }, { "answer_id": 2110928, "score": 14, "text": "<p>One of my favourites is the following property of $e$.</p>\n\n<p>Write down a random number between $0$ and $1$. Write down another one and add it to the previous one. If the total exceeds $1$, stop. 
Otherwise keep adding such random numbers until the total exceeds $1$ and stop.</p>\n\n<p>The expected number of such random numbers is $e$.</p>\n" }, { "answer_id": 2110949, "score": 4, "text": "<p>My favourite is about derangements.</p>\n\n<p>$n$ people write their name on different envelopes and put the envelopes in a box.\nIf they pick a random envelope back from the box, the probability that no one picks an envelope with its name is $\\approx\\frac{1}{e}$.</p>\n\n<hr>\n\n<p>A number-theoretic one: if $\\{p_k\\}_{k\\geq 1}$ is the sequence of primes,\n$$ \\prod_{k=1}^{n} p_k = e^{n+o(n)}.$$</p>\n" }, { "answer_id": 2114103, "score": 4, "text": "<p>Here are some more relations which might be pleasing.</p>\n\n<blockquote>\n <p>From section 1.3 of <em><a href=\"http://rads.stackoverflow.com/amzn/click/0521818052\" rel=\"nofollow noreferrer\">Mathematical Constants</a></em> by S.R. Finch:</p>\n \n <ul>\n <li><p>A Wallis-like infinite product is\n \\begin{align*}\ne=\\frac{2}{1}\\cdot\\left(\\frac{4}{3}\\right)^{\\frac{1}{2}}\n\\cdot\\left(\\frac{6\\cdot 8}{5\\cdot 7}\\right)^{\\frac{1}{4}}\n\\cdot\\left(\\frac{10\\cdot 12\\cdot 14\\cdot 16}{9\\cdot 11\\cdot 13\\cdot 15}\\right)^{\\frac{1}{8}}\\cdots\n\\end{align*}</p></li>\n <li><p>From Stirling's formula we derive\n \\begin{align*}\ne=\\lim_{n\\rightarrow \\infty}\\frac{n}{(n!)^{\\frac{1}{n}}}\n\\end{align*}</p></li>\n <li><p>Another continued fraction representation is\n \\begin{align*}\ne&amp;=2+\\frac{\\left.2\\right|}{\\left|2\\right.}\n+\\frac{\\left.3\\right|}{\\left|3\\right.}\n+\\frac{\\left.4\\right|}{\\left|4\\right.}\n+\\frac{\\left.5\\right|}{\\left|5\\right.}+\\cdots\\\\\n&amp;=2+\\frac{2}{2+\\frac{3}{3+\\frac{4}{4+\\frac{5}{5+\\cdots}}}}\n\\end{align*}</p></li>\n </ul>\n</blockquote>\n\n<p>$$ $$</p>\n\n<blockquote>\n <p>In section <em>Intriguing Results</em> in <em><a href=\"http://rads.stackoverflow.com/amzn/click/0883857456\" rel=\"nofollow noreferrer\">Real Infinite Series</a></em> by D.D. 
Bonar and M.J. Khoury we find</p>\n \n <ul>\n <li>Gem 89 (American Math Monthly 42:2 pp. 111-112)</li>\n </ul>\n \n <p>\\begin{align*}\ne=\\frac{1}{5}\\left(\\frac{1^2}{0!}+\\frac{2^2}{1!}+\\frac{3^2}{2!}+\\frac{4^2}{3!}+\\cdots\\right)\n\\end{align*}</p>\n</blockquote>\n" }, { "answer_id": 2110942, "score": 3, "text": "<p>Here is the (simple) continued fraction for $e-1.$ The pattern continues forever, 1,1,2,1,1,4,1,1,6,1,1,8,1,1,10,1,1,12,...</p>\n\n<p>To get $e$ itself add one, this is the same as changing the first $1$ to a $2$</p>\n\n<p>$$ \n\\begin{array}{ccccccccccccccccccccccc}\n &amp; &amp; 1 &amp; &amp; 1 &amp; &amp; 2 &amp; &amp; 1 &amp; &amp; 1 &amp; &amp; 4 &amp; &amp; 1 &amp; &amp; 1 &amp; &amp; 6 &amp; \\\\\n\\frac{0}{1} &amp; \\frac{1}{0} &amp; &amp; \\frac{1}{1} &amp; &amp; \\frac{2}{1} &amp; &amp; \\frac{5}{3} &amp; &amp; \\frac{7}{4} &amp; &amp; \\frac{12}{7} &amp; &amp; \\frac{55}{32} &amp; &amp; \\frac{67}{39} &amp; &amp; \\frac{122}{71} &amp; &amp; \\frac{799}{465} \\\\\n\\end{array}\n$$</p>\n" }, { "answer_id": 2110962, "score": 3, "text": "<p>Let's say that a number is cut into equal parts and then multiply those parts together, for example take number $20$ and divide it by $4$ ($20/4=5$), as result we have:</p>\n\n<p>$5\\times5\\times5\\times5=625$</p>\n\n<p>The question here is to find the number $n\\in \\mathrm{N}$ such that is maximizes </p>\n\n<p>$r = \\arg \\max_{n,m} \\prod_{i=1}^{n}m$</p>\n\n<p>where</p>\n\n<p>$m=(\\mathrm{your\\:choice\\:of\\:number})/n$</p>\n\n<p>We can show that the number $r$ is maximized when $m\\approx e$. In the case of choosing $20$ we have</p>\n\n<p>$20/7\\approx e$</p>\n\n<p>so </p>\n\n<p>$r=(2.8)^7$</p>\n" } ]
Question: Ask about beautiful properties of $e$ Answer 1: <p>One of my favourites is the following property of $e$.</p> <p>Write down a random number between $0$ and $1$. Write down another one and add it to the previous one. If the total exceeds $1$, stop. Otherwise keep adding such random numbers until the total exceeds $1$ and stop.</p> <p>The expected number of such random numbers is $e$.</p> Answer 2: <p>My favourite is about derangements.</p> <p>$n$ people write their name on different envelopes and put the envelopes in a box. If they pick a random envelope back from the box, the probability that no one picks an envelope with its name is $\approx\frac{1}{e}$.</p> <hr> <p>A number-theoretic one: if $\{p_k\}_{k\geq 1}$ is the sequence of primes, $$ \prod_{k=1}^{n} p_k = e^{n+o(n)}.$$</p> Answer 3: <p>Here are some more relations which might be pleasing.</p> <blockquote> <p>From section 1.3 of <em><a href="http://rads.stackoverflow.com/amzn/click/0521818052" rel="nofollow noreferrer">Mathematical Constants</a></em> by S.R. 
Finch:</p> <ul> <li><p>A Wallis-like infinite product is \begin{align*} e=\frac{2}{1}\cdot\left(\frac{4}{3}\right)^{\frac{1}{2}} \cdot\left(\frac{6\cdot 8}{5\cdot 7}\right)^{\frac{1}{4}} \cdot\left(\frac{10\cdot 12\cdot 14\cdot 16}{9\cdot 11\cdot 13\cdot 15}\right)^{\frac{1}{8}}\cdots \end{align*}</p></li> <li><p>From Stirling's formula we derive \begin{align*} e=\lim_{n\rightarrow \infty}\frac{n}{(n!)^{\frac{1}{n}}} \end{align*}</p></li> <li><p>Another continued fraction representation is \begin{align*} e&amp;=2+\frac{\left.2\right|}{\left|2\right.} +\frac{\left.3\right|}{\left|3\right.} +\frac{\left.4\right|}{\left|4\right.} +\frac{\left.5\right|}{\left|5\right.}+\cdots\\ &amp;=2+\frac{2}{2+\frac{3}{3+\frac{4}{4+\frac{5}{5+\cdots}}}} \end{align*}</p></li> </ul> </blockquote> <p>$$ $$</p> <blockquote> <p>In section <em>Intriguing Results</em> in <em><a href="http://rads.stackoverflow.com/amzn/click/0883857456" rel="nofollow noreferrer">Real Infinite Series</a></em> by D.D. Bonar and M.J. Khoury we find</p> <ul> <li>Gem 89 (American Math Monthly 42:2 pp. 
111-112)</li> </ul> <p>\begin{align*} e=\frac{1}{5}\left(\frac{1^2}{0!}+\frac{2^2}{1!}+\frac{3^2}{2!}+\frac{4^2}{3!}+\cdots\right) \end{align*}</p> </blockquote> Answer 4: <p>Here is the (simple) continued fraction for $e-1.$ The pattern continues forever, 1,1,2,1,1,4,1,1,6,1,1,8,1,1,10,1,1,12,...</p> <p>To get $e$ itself add one, this is the same as changing the first $1$ to a $2$</p> <p>$$ \begin{array}{ccccccccccccccccccccccc} &amp; &amp; 1 &amp; &amp; 1 &amp; &amp; 2 &amp; &amp; 1 &amp; &amp; 1 &amp; &amp; 4 &amp; &amp; 1 &amp; &amp; 1 &amp; &amp; 6 &amp; \\ \frac{0}{1} &amp; \frac{1}{0} &amp; &amp; \frac{1}{1} &amp; &amp; \frac{2}{1} &amp; &amp; \frac{5}{3} &amp; &amp; \frac{7}{4} &amp; &amp; \frac{12}{7} &amp; &amp; \frac{55}{32} &amp; &amp; \frac{67}{39} &amp; &amp; \frac{122}{71} &amp; &amp; \frac{799}{465} \\ \end{array} $$</p> Answer 5: <p>Let's say that a number is cut into equal parts and then multiply those parts together, for example take number $20$ and divide it by $4$ ($20/4=5$), as result we have:</p> <p>$5\times5\times5\times5=625$</p> <p>The question here is to find the number $n\in \mathrm{N}$ such that is maximizes </p> <p>$r = \arg \max_{n,m} \prod_{i=1}^{n}m$</p> <p>where</p> <p>$m=(\mathrm{your\:choice\:of\:number})/n$</p> <p>We can show that the number $r$ is maximized when $m\approx e$. In the case of choosing $20$ we have</p> <p>$20/7\approx e$</p> <p>so </p> <p>$r=(2.8)^7$</p>
If they pick a random envelope back from the box, the probability that no one picks an envelope with its name is $\approx\frac{1}{e}$.</p> <hr> <p>A number-theoretic one: if $\{p_k\}_{k\geq 1}$ is the sequence of primes, $$ \prod_{k=1}^{n} p_k = e^{n+o(n)}.$$</p> Answer 38: <p>Here are some more relations which might be pleasing.</p> <blockquote> <p>From section 1.3 of <em><a href="http://rads.stackoverflow.com/amzn/click/0521818052" rel="nofollow noreferrer">Mathematical Constants</a></em> by S.R. Finch:</p> <ul> <li><p>A Wallis-like infinite product is \begin{align*} e=\frac{2}{1}\cdot\left(\frac{4}{3}\right)^{\frac{1}{2}} \cdot\left(\frac{6\cdot 8}{5\cdot 7}\right)^{\frac{1}{4}} \cdot\left(\frac{10\cdot 12\cdot 14\cdot 16}{9\cdot 11\cdot 13\cdot 15}\right)^{\frac{1}{8}}\cdots \end{align*}</p></li> <li><p>From Stirling's formula we derive \begin{align*} e=\lim_{n\rightarrow \infty}\frac{n}{(n!)^{\frac{1}{n}}} \end{align*}</p></li> <li><p>Another continued fraction representation is \begin{align*} e&amp;=2+\frac{\left.2\right|}{\left|2\right.} +\frac{\left.3\right|}{\left|3\right.} +\frac{\left.4\right|}{\left|4\right.} +\frac{\left.5\right|}{\left|5\right.}+\cdots\\ &amp;=2+\frac{2}{2+\frac{3}{3+\frac{4}{4+\frac{5}{5+\cdots}}}} \end{align*}</p></li> </ul> </blockquote> <p>$$ $$</p> <blockquote> <p>In section <em>Intriguing Results</em> in <em><a href="http://rads.stackoverflow.com/amzn/click/0883857456" rel="nofollow noreferrer">Real Infinite Series</a></em> by D.D. Bonar and M.J. Khoury we find</p> <ul> <li>Gem 89 (American Math Monthly 42:2 pp. 
111-112)</li> </ul> <p>\begin{align*} e=\frac{1}{5}\left(\frac{1^2}{0!}+\frac{2^2}{1!}+\frac{3^2}{2!}+\frac{4^2}{3!}+\cdots\right) \end{align*}</p> </blockquote> Answer 39: <p>Here is the (simple) continued fraction for $e-1.$ The pattern continues forever, 1,1,2,1,1,4,1,1,6,1,1,8,1,1,10,1,1,12,...</p> <p>To get $e$ itself add one, this is the same as changing the first $1$ to a $2$</p> <p>$$ \begin{array}{ccccccccccccccccccccccc} &amp; &amp; 1 &amp; &amp; 1 &amp; &amp; 2 &amp; &amp; 1 &amp; &amp; 1 &amp; &amp; 4 &amp; &amp; 1 &amp; &amp; 1 &amp; &amp; 6 &amp; \\ \frac{0}{1} &amp; \frac{1}{0} &amp; &amp; \frac{1}{1} &amp; &amp; \frac{2}{1} &amp; &amp; \frac{5}{3} &amp; &amp; \frac{7}{4} &amp; &amp; \frac{12}{7} &amp; &amp; \frac{55}{32} &amp; &amp; \frac{67}{39} &amp; &amp; \frac{122}{71} &amp; &amp; \frac{799}{465} \\ \end{array} $$</p> Answer 40: <p>Let's say that a number is cut into equal parts and then multiply those parts together, for example take number $20$ and divide it by $4$ ($20/4=5$), as result we have:</p> <p>$5\times5\times5\times5=625$</p> <p>The question here is to find the number $n\in \mathrm{N}$ such that is maximizes </p> <p>$r = \arg \max_{n,m} \prod_{i=1}^{n}m$</p> <p>where</p> <p>$m=(\mathrm{your\:choice\:of\:number})/n$</p> <p>We can show that the number $r$ is maximized when $m\approx e$. In the case of choosing $20$ we have</p> <p>$20/7\approx e$</p> <p>so </p> <p>$r=(2.8)^7$</p> Answer 41: <p>One of my favourites is the following property of $e$.</p> <p>Write down a random number between $0$ and $1$. Write down another one and add it to the previous one. If the total exceeds $1$, stop. Otherwise keep adding such random numbers until the total exceeds $1$ and stop.</p> <p>The expected number of such random numbers is $e$.</p> Answer 42: <p>My favourite is about derangements.</p> <p>$n$ people write their name on different envelopes and put the envelopes in a box. 
If they pick a random envelope back from the box, the probability that no one picks an envelope with its name is $\approx\frac{1}{e}$.</p> <hr> <p>A number-theoretic one: if $\{p_k\}_{k\geq 1}$ is the sequence of primes, $$ \prod_{k=1}^{n} p_k = e^{n+o(n)}.$$</p> Answer 43: <p>Here are some more relations which might be pleasing.</p> <blockquote> <p>From section 1.3 of <em><a href="http://rads.stackoverflow.com/amzn/click/0521818052" rel="nofollow noreferrer">Mathematical Constants</a></em> by S.R. Finch:</p> <ul> <li><p>A Wallis-like infinite product is \begin{align*} e=\frac{2}{1}\cdot\left(\frac{4}{3}\right)^{\frac{1}{2}} \cdot\left(\frac{6\cdot 8}{5\cdot 7}\right)^{\frac{1}{4}} \cdot\left(\frac{10\cdot 12\cdot 14\cdot 16}{9\cdot 11\cdot 13\cdot 15}\right)^{\frac{1}{8}}\cdots \end{align*}</p></li> <li><p>From Stirling's formula we derive \begin{align*} e=\lim_{n\rightarrow \infty}\frac{n}{(n!)^{\frac{1}{n}}} \end{align*}</p></li> <li><p>Another continued fraction representation is \begin{align*} e&amp;=2+\frac{\left.2\right|}{\left|2\right.} +\frac{\left.3\right|}{\left|3\right.} +\frac{\left.4\right|}{\left|4\right.} +\frac{\left.5\right|}{\left|5\right.}+\cdots\\ &amp;=2+\frac{2}{2+\frac{3}{3+\frac{4}{4+\frac{5}{5+\cdots}}}} \end{align*}</p></li> </ul> </blockquote> <p>$$ $$</p> <blockquote> <p>In section <em>Intriguing Results</em> in <em><a href="http://rads.stackoverflow.com/amzn/click/0883857456" rel="nofollow noreferrer">Real Infinite Series</a></em> by D.D. Bonar and M.J. Khoury we find</p> <ul> <li>Gem 89 (American Math Monthly 42:2 pp. 
111-112)</li> </ul> <p>\begin{align*} e=\frac{1}{5}\left(\frac{1^2}{0!}+\frac{2^2}{1!}+\frac{3^2}{2!}+\frac{4^2}{3!}+\cdots\right) \end{align*}</p> </blockquote> Answer 44: <p>Here is the (simple) continued fraction for $e-1.$ The pattern continues forever, 1,1,2,1,1,4,1,1,6,1,1,8,1,1,10,1,1,12,...</p> <p>To get $e$ itself add one, this is the same as changing the first $1$ to a $2$</p> <p>$$ \begin{array}{ccccccccccccccccccccccc} &amp; &amp; 1 &amp; &amp; 1 &amp; &amp; 2 &amp; &amp; 1 &amp; &amp; 1 &amp; &amp; 4 &amp; &amp; 1 &amp; &amp; 1 &amp; &amp; 6 &amp; \\ \frac{0}{1} &amp; \frac{1}{0} &amp; &amp; \frac{1}{1} &amp; &amp; \frac{2}{1} &amp; &amp; \frac{5}{3} &amp; &amp; \frac{7}{4} &amp; &amp; \frac{12}{7} &amp; &amp; \frac{55}{32} &amp; &amp; \frac{67}{39} &amp; &amp; \frac{122}{71} &amp; &amp; \frac{799}{465} \\ \end{array} $$</p> Answer 45: <p>Let's say that a number is cut into equal parts and then multiply those parts together, for example take number $20$ and divide it by $4$ ($20/4=5$), as result we have:</p> <p>$5\times5\times5\times5=625$</p> <p>The question here is to find the number $n\in \mathrm{N}$ such that is maximizes </p> <p>$r = \arg \max_{n,m} \prod_{i=1}^{n}m$</p> <p>where</p> <p>$m=(\mathrm{your\:choice\:of\:number})/n$</p> <p>We can show that the number $r$ is maximized when $m\approx e$. In the case of choosing $20$ we have</p> <p>$20/7\approx e$</p> <p>so </p> <p>$r=(2.8)^7$</p> Answer 46: <p>One of my favourites is the following property of $e$.</p> <p>Write down a random number between $0$ and $1$. Write down another one and add it to the previous one. If the total exceeds $1$, stop. Otherwise keep adding such random numbers until the total exceeds $1$ and stop.</p> <p>The expected number of such random numbers is $e$.</p> Answer 47: <p>My favourite is about derangements.</p> <p>$n$ people write their name on different envelopes and put the envelopes in a box. 
If they pick a random envelope back from the box, the probability that no one picks an envelope with its name is $\approx\frac{1}{e}$.</p> <hr> <p>A number-theoretic one: if $\{p_k\}_{k\geq 1}$ is the sequence of primes, $$ \prod_{k=1}^{n} p_k = e^{n+o(n)}.$$</p> Answer 48: <p>Here are some more relations which might be pleasing.</p> <blockquote> <p>From section 1.3 of <em><a href="http://rads.stackoverflow.com/amzn/click/0521818052" rel="nofollow noreferrer">Mathematical Constants</a></em> by S.R. Finch:</p> <ul> <li><p>A Wallis-like infinite product is \begin{align*} e=\frac{2}{1}\cdot\left(\frac{4}{3}\right)^{\frac{1}{2}} \cdot\left(\frac{6\cdot 8}{5\cdot 7}\right)^{\frac{1}{4}} \cdot\left(\frac{10\cdot 12\cdot 14\cdot 16}{9\cdot 11\cdot 13\cdot 15}\right)^{\frac{1}{8}}\cdots \end{align*}</p></li> <li><p>From Stirling's formula we derive \begin{align*} e=\lim_{n\rightarrow \infty}\frac{n}{(n!)^{\frac{1}{n}}} \end{align*}</p></li> <li><p>Another continued fraction representation is \begin{align*} e&amp;=2+\frac{\left.2\right|}{\left|2\right.} +\frac{\left.3\right|}{\left|3\right.} +\frac{\left.4\right|}{\left|4\right.} +\frac{\left.5\right|}{\left|5\right.}+\cdots\\ &amp;=2+\frac{2}{2+\frac{3}{3+\frac{4}{4+\frac{5}{5+\cdots}}}} \end{align*}</p></li> </ul> </blockquote> <p>$$ $$</p> <blockquote> <p>In section <em>Intriguing Results</em> in <em><a href="http://rads.stackoverflow.com/amzn/click/0883857456" rel="nofollow noreferrer">Real Infinite Series</a></em> by D.D. Bonar and M.J. Khoury we find</p> <ul> <li>Gem 89 (American Math Monthly 42:2 pp. 
111-112)</li> </ul> <p>\begin{align*} e=\frac{1}{5}\left(\frac{1^2}{0!}+\frac{2^2}{1!}+\frac{3^2}{2!}+\frac{4^2}{3!}+\cdots\right) \end{align*}</p> </blockquote> Answer 49: <p>Here is the (simple) continued fraction for $e-1.$ The pattern continues forever, 1,1,2,1,1,4,1,1,6,1,1,8,1,1,10,1,1,12,...</p> <p>To get $e$ itself add one, this is the same as changing the first $1$ to a $2$</p> <p>$$ \begin{array}{ccccccccccccccccccccccc} &amp; &amp; 1 &amp; &amp; 1 &amp; &amp; 2 &amp; &amp; 1 &amp; &amp; 1 &amp; &amp; 4 &amp; &amp; 1 &amp; &amp; 1 &amp; &amp; 6 &amp; \\ \frac{0}{1} &amp; \frac{1}{0} &amp; &amp; \frac{1}{1} &amp; &amp; \frac{2}{1} &amp; &amp; \frac{5}{3} &amp; &amp; \frac{7}{4} &amp; &amp; \frac{12}{7} &amp; &amp; \frac{55}{32} &amp; &amp; \frac{67}{39} &amp; &amp; \frac{122}{71} &amp; &amp; \frac{799}{465} \\ \end{array} $$</p> Answer 50: <p>Let's say that a number is cut into equal parts and then multiply those parts together, for example take number $20$ and divide it by $4$ ($20/4=5$), as result we have:</p> <p>$5\times5\times5\times5=625$</p> <p>The question here is to find the number $n\in \mathrm{N}$ such that is maximizes </p> <p>$r = \arg \max_{n,m} \prod_{i=1}^{n}m$</p> <p>where</p> <p>$m=(\mathrm{your\:choice\:of\:number})/n$</p> <p>We can show that the number $r$ is maximized when $m\approx e$. In the case of choosing $20$ we have</p> <p>$20/7\approx e$</p> <p>so </p> <p>$r=(2.8)^7$</p> Answer 51: <p>One of my favourites is the following property of $e$.</p> <p>Write down a random number between $0$ and $1$. Write down another one and add it to the previous one. If the total exceeds $1$, stop. Otherwise keep adding such random numbers until the total exceeds $1$ and stop.</p> <p>The expected number of such random numbers is $e$.</p> Answer 52: <p>My favourite is about derangements.</p> <p>$n$ people write their name on different envelopes and put the envelopes in a box. 
If they pick a random envelope back from the box, the probability that no one picks an envelope with its name is $\approx\frac{1}{e}$.</p> <hr> <p>A number-theoretic one: if $\{p_k\}_{k\geq 1}$ is the sequence of primes, $$ \prod_{k=1}^{n} p_k = e^{n+o(n)}.$$</p> Answer 53: <p>Here are some more relations which might be pleasing.</p> <blockquote> <p>From section 1.3 of <em><a href="http://rads.stackoverflow.com/amzn/click/0521818052" rel="nofollow noreferrer">Mathematical Constants</a></em> by S.R. Finch:</p> <ul> <li><p>A Wallis-like infinite product is \begin{align*} e=\frac{2}{1}\cdot\left(\frac{4}{3}\right)^{\frac{1}{2}} \cdot\left(\frac{6\cdot 8}{5\cdot 7}\right)^{\frac{1}{4}} \cdot\left(\frac{10\cdot 12\cdot 14\cdot 16}{9\cdot 11\cdot 13\cdot 15}\right)^{\frac{1}{8}}\cdots \end{align*}</p></li> <li><p>From Stirling's formula we derive \begin{align*} e=\lim_{n\rightarrow \infty}\frac{n}{(n!)^{\frac{1}{n}}} \end{align*}</p></li> <li><p>Another continued fraction representation is \begin{align*} e&amp;=2+\frac{\left.2\right|}{\left|2\right.} +\frac{\left.3\right|}{\left|3\right.} +\frac{\left.4\right|}{\left|4\right.} +\frac{\left.5\right|}{\left|5\right.}+\cdots\\ &amp;=2+\frac{2}{2+\frac{3}{3+\frac{4}{4+\frac{5}{5+\cdots}}}} \end{align*}</p></li> </ul> </blockquote> <p>$$ $$</p> <blockquote> <p>In section <em>Intriguing Results</em> in <em><a href="http://rads.stackoverflow.com/amzn/click/0883857456" rel="nofollow noreferrer">Real Infinite Series</a></em> by D.D. Bonar and M.J. Khoury we find</p> <ul> <li>Gem 89 (American Math Monthly 42:2 pp. 
111-112)</li> </ul> <p>\begin{align*} e=\frac{1}{5}\left(\frac{1^2}{0!}+\frac{2^2}{1!}+\frac{3^2}{2!}+\frac{4^2}{3!}+\cdots\right) \end{align*}</p> </blockquote> Answer 54: <p>Here is the (simple) continued fraction for $e-1.$ The pattern continues forever, 1,1,2,1,1,4,1,1,6,1,1,8,1,1,10,1,1,12,...</p> <p>To get $e$ itself add one, this is the same as changing the first $1$ to a $2$</p> <p>$$ \begin{array}{ccccccccccccccccccccccc} &amp; &amp; 1 &amp; &amp; 1 &amp; &amp; 2 &amp; &amp; 1 &amp; &amp; 1 &amp; &amp; 4 &amp; &amp; 1 &amp; &amp; 1 &amp; &amp; 6 &amp; \\ \frac{0}{1} &amp; \frac{1}{0} &amp; &amp; \frac{1}{1} &amp; &amp; \frac{2}{1} &amp; &amp; \frac{5}{3} &amp; &amp; \frac{7}{4} &amp; &amp; \frac{12}{7} &amp; &amp; \frac{55}{32} &amp; &amp; \frac{67}{39} &amp; &amp; \frac{122}{71} &amp; &amp; \frac{799}{465} \\ \end{array} $$</p> Answer 55: <p>Let's say that a number is cut into equal parts and then multiply those parts together, for example take number $20$ and divide it by $4$ ($20/4=5$), as result we have:</p> <p>$5\times5\times5\times5=625$</p> <p>The question here is to find the number $n\in \mathrm{N}$ such that is maximizes </p> <p>$r = \arg \max_{n,m} \prod_{i=1}^{n}m$</p> <p>where</p> <p>$m=(\mathrm{your\:choice\:of\:number})/n$</p> <p>We can show that the number $r$ is maximized when $m\approx e$. In the case of choosing $20$ we have</p> <p>$20/7\approx e$</p> <p>so </p> <p>$r=(2.8)^7$</p> Answer 56: <p>One of my favourites is the following property of $e$.</p> <p>Write down a random number between $0$ and $1$. Write down another one and add it to the previous one. If the total exceeds $1$, stop. Otherwise keep adding such random numbers until the total exceeds $1$ and stop.</p> <p>The expected number of such random numbers is $e$.</p> Answer 57: <p>My favourite is about derangements.</p> <p>$n$ people write their name on different envelopes and put the envelopes in a box. 
If they pick a random envelope back from the box, the probability that no one picks an envelope with its name is $\approx\frac{1}{e}$.</p> <hr> <p>A number-theoretic one: if $\{p_k\}_{k\geq 1}$ is the sequence of primes, $$ \prod_{k=1}^{n} p_k = e^{n+o(n)}.$$</p> Answer 58: <p>Here are some more relations which might be pleasing.</p> <blockquote> <p>From section 1.3 of <em><a href="http://rads.stackoverflow.com/amzn/click/0521818052" rel="nofollow noreferrer">Mathematical Constants</a></em> by S.R. Finch:</p> <ul> <li><p>A Wallis-like infinite product is \begin{align*} e=\frac{2}{1}\cdot\left(\frac{4}{3}\right)^{\frac{1}{2}} \cdot\left(\frac{6\cdot 8}{5\cdot 7}\right)^{\frac{1}{4}} \cdot\left(\frac{10\cdot 12\cdot 14\cdot 16}{9\cdot 11\cdot 13\cdot 15}\right)^{\frac{1}{8}}\cdots \end{align*}</p></li> <li><p>From Stirling's formula we derive \begin{align*} e=\lim_{n\rightarrow \infty}\frac{n}{(n!)^{\frac{1}{n}}} \end{align*}</p></li> <li><p>Another continued fraction representation is \begin{align*} e&amp;=2+\frac{\left.2\right|}{\left|2\right.} +\frac{\left.3\right|}{\left|3\right.} +\frac{\left.4\right|}{\left|4\right.} +\frac{\left.5\right|}{\left|5\right.}+\cdots\\ &amp;=2+\frac{2}{2+\frac{3}{3+\frac{4}{4+\frac{5}{5+\cdots}}}} \end{align*}</p></li> </ul> </blockquote> <p>$$ $$</p> <blockquote> <p>In section <em>Intriguing Results</em> in <em><a href="http://rads.stackoverflow.com/amzn/click/0883857456" rel="nofollow noreferrer">Real Infinite Series</a></em> by D.D. Bonar and M.J. Khoury we find</p> <ul> <li>Gem 89 (American Math Monthly 42:2 pp. 
111-112)</li> </ul> <p>\begin{align*} e=\frac{1}{5}\left(\frac{1^2}{0!}+\frac{2^2}{1!}+\frac{3^2}{2!}+\frac{4^2}{3!}+\cdots\right) \end{align*}</p> </blockquote> Answer 59: <p>Here is the (simple) continued fraction for $e-1.$ The pattern continues forever, 1,1,2,1,1,4,1,1,6,1,1,8,1,1,10,1,1,12,...</p> <p>To get $e$ itself add one, this is the same as changing the first $1$ to a $2$</p> <p>$$ \begin{array}{ccccccccccccccccccccccc} &amp; &amp; 1 &amp; &amp; 1 &amp; &amp; 2 &amp; &amp; 1 &amp; &amp; 1 &amp; &amp; 4 &amp; &amp; 1 &amp; &amp; 1 &amp; &amp; 6 &amp; \\ \frac{0}{1} &amp; \frac{1}{0} &amp; &amp; \frac{1}{1} &amp; &amp; \frac{2}{1} &amp; &amp; \frac{5}{3} &amp; &amp; \frac{7}{4} &amp; &amp; \frac{12}{7} &amp; &amp; \frac{55}{32} &amp; &amp; \frac{67}{39} &amp; &amp; \frac{122}{71} &amp; &amp; \frac{799}{465} \\ \end{array} $$</p> Answer 60: <p>Let's say that a number is cut into equal parts and then multiply those parts together, for example take number $20$ and divide it by $4$ ($20/4=5$), as result we have:</p> <p>$5\times5\times5\times5=625$</p> <p>The question here is to find the number $n\in \mathrm{N}$ such that is maximizes </p> <p>$r = \arg \max_{n,m} \prod_{i=1}^{n}m$</p> <p>where</p> <p>$m=(\mathrm{your\:choice\:of\:number})/n$</p> <p>We can show that the number $r$ is maximized when $m\approx e$. In the case of choosing $20$ we have</p> <p>$20/7\approx e$</p> <p>so </p> <p>$r=(2.8)^7$</p> Answer 61: <p>One of my favourites is the following property of $e$.</p> <p>Write down a random number between $0$ and $1$. Write down another one and add it to the previous one. If the total exceeds $1$, stop. Otherwise keep adding such random numbers until the total exceeds $1$ and stop.</p> <p>The expected number of such random numbers is $e$.</p> Answer 62: <p>My favourite is about derangements.</p> <p>$n$ people write their name on different envelopes and put the envelopes in a box. 
If they pick a random envelope back from the box, the probability that no one picks an envelope with its name is $\approx\frac{1}{e}$.</p> <hr> <p>A number-theoretic one: if $\{p_k\}_{k\geq 1}$ is the sequence of primes, $$ \prod_{k=1}^{n} p_k = e^{n+o(n)}.$$</p> Answer 63: <p>Here are some more relations which might be pleasing.</p> <blockquote> <p>From section 1.3 of <em><a href="http://rads.stackoverflow.com/amzn/click/0521818052" rel="nofollow noreferrer">Mathematical Constants</a></em> by S.R. Finch:</p> <ul> <li><p>A Wallis-like infinite product is \begin{align*} e=\frac{2}{1}\cdot\left(\frac{4}{3}\right)^{\frac{1}{2}} \cdot\left(\frac{6\cdot 8}{5\cdot 7}\right)^{\frac{1}{4}} \cdot\left(\frac{10\cdot 12\cdot 14\cdot 16}{9\cdot 11\cdot 13\cdot 15}\right)^{\frac{1}{8}}\cdots \end{align*}</p></li> <li><p>From Stirling's formula we derive \begin{align*} e=\lim_{n\rightarrow \infty}\frac{n}{(n!)^{\frac{1}{n}}} \end{align*}</p></li> <li><p>Another continued fraction representation is \begin{align*} e&amp;=2+\frac{\left.2\right|}{\left|2\right.} +\frac{\left.3\right|}{\left|3\right.} +\frac{\left.4\right|}{\left|4\right.} +\frac{\left.5\right|}{\left|5\right.}+\cdots\\ &amp;=2+\frac{2}{2+\frac{3}{3+\frac{4}{4+\frac{5}{5+\cdots}}}} \end{align*}</p></li> </ul> </blockquote> <p>$$ $$</p> <blockquote> <p>In section <em>Intriguing Results</em> in <em><a href="http://rads.stackoverflow.com/amzn/click/0883857456" rel="nofollow noreferrer">Real Infinite Series</a></em> by D.D. Bonar and M.J. Khoury we find</p> <ul> <li>Gem 89 (American Math Monthly 42:2 pp. 
111-112)</li> </ul> <p>\begin{align*} e=\frac{1}{5}\left(\frac{1^2}{0!}+\frac{2^2}{1!}+\frac{3^2}{2!}+\frac{4^2}{3!}+\cdots\right) \end{align*}</p> </blockquote> Answer 64: <p>Here is the (simple) continued fraction for $e-1.$ The pattern continues forever, 1,1,2,1,1,4,1,1,6,1,1,8,1,1,10,1,1,12,...</p> <p>To get $e$ itself add one, this is the same as changing the first $1$ to a $2$</p> <p>$$ \begin{array}{ccccccccccccccccccccccc} &amp; &amp; 1 &amp; &amp; 1 &amp; &amp; 2 &amp; &amp; 1 &amp; &amp; 1 &amp; &amp; 4 &amp; &amp; 1 &amp; &amp; 1 &amp; &amp; 6 &amp; \\ \frac{0}{1} &amp; \frac{1}{0} &amp; &amp; \frac{1}{1} &amp; &amp; \frac{2}{1} &amp; &amp; \frac{5}{3} &amp; &amp; \frac{7}{4} &amp; &amp; \frac{12}{7} &amp; &amp; \frac{55}{32} &amp; &amp; \frac{67}{39} &amp; &amp; \frac{122}{71} &amp; &amp; \frac{799}{465} \\ \end{array} $$</p> Answer 65: <p>Let's say that a number is cut into equal parts and then multiply those parts together, for example take number $20$ and divide it by $4$ ($20/4=5$), as result we have:</p> <p>$5\times5\times5\times5=625$</p> <p>The question here is to find the number $n\in \mathrm{N}$ such that is maximizes </p> <p>$r = \arg \max_{n,m} \prod_{i=1}^{n}m$</p> <p>where</p> <p>$m=(\mathrm{your\:choice\:of\:number})/n$</p> <p>We can show that the number $r$ is maximized when $m\approx e$. In the case of choosing $20$ we have</p> <p>$20/7\approx e$</p> <p>so </p> <p>$r=(2.8)^7$</p> Answer 66: <p>One of my favourites is the following property of $e$.</p> <p>Write down a random number between $0$ and $1$. Write down another one and add it to the previous one. If the total exceeds $1$, stop. Otherwise keep adding such random numbers until the total exceeds $1$ and stop.</p> <p>The expected number of such random numbers is $e$.</p> Answer 67: <p>My favourite is about derangements.</p> <p>$n$ people write their name on different envelopes and put the envelopes in a box. 
If they pick a random envelope back from the box, the probability that no one picks an envelope with its name is $\approx\frac{1}{e}$.</p> <hr> <p>A number-theoretic one: if $\{p_k\}_{k\geq 1}$ is the sequence of primes, $$ \prod_{k=1}^{n} p_k = e^{n+o(n)}.$$</p> Answer 68: <p>Here are some more relations which might be pleasing.</p> <blockquote> <p>From section 1.3 of <em><a href="http://rads.stackoverflow.com/amzn/click/0521818052" rel="nofollow noreferrer">Mathematical Constants</a></em> by S.R. Finch:</p> <ul> <li><p>A Wallis-like infinite product is \begin{align*} e=\frac{2}{1}\cdot\left(\frac{4}{3}\right)^{\frac{1}{2}} \cdot\left(\frac{6\cdot 8}{5\cdot 7}\right)^{\frac{1}{4}} \cdot\left(\frac{10\cdot 12\cdot 14\cdot 16}{9\cdot 11\cdot 13\cdot 15}\right)^{\frac{1}{8}}\cdots \end{align*}</p></li> <li><p>From Stirling's formula we derive \begin{align*} e=\lim_{n\rightarrow \infty}\frac{n}{(n!)^{\frac{1}{n}}} \end{align*}</p></li> <li><p>Another continued fraction representation is \begin{align*} e&amp;=2+\frac{\left.2\right|}{\left|2\right.} +\frac{\left.3\right|}{\left|3\right.} +\frac{\left.4\right|}{\left|4\right.} +\frac{\left.5\right|}{\left|5\right.}+\cdots\\ &amp;=2+\frac{2}{2+\frac{3}{3+\frac{4}{4+\frac{5}{5+\cdots}}}} \end{align*}</p></li> </ul> </blockquote> <p>$$ $$</p> <blockquote> <p>In section <em>Intriguing Results</em> in <em><a href="http://rads.stackoverflow.com/amzn/click/0883857456" rel="nofollow noreferrer">Real Infinite Series</a></em> by D.D. Bonar and M.J. Khoury we find</p> <ul> <li>Gem 89 (American Math Monthly 42:2 pp. 
111-112)</li> </ul> <p>\begin{align*} e=\frac{1}{5}\left(\frac{1^2}{0!}+\frac{2^2}{1!}+\frac{3^2}{2!}+\frac{4^2}{3!}+\cdots\right) \end{align*}</p> </blockquote> Answer 69: <p>Here is the (simple) continued fraction for $e-1.$ The pattern continues forever, 1,1,2,1,1,4,1,1,6,1,1,8,1,1,10,1,1,12,...</p> <p>To get $e$ itself add one, this is the same as changing the first $1$ to a $2$</p> <p>$$ \begin{array}{ccccccccccccccccccccccc} &amp; &amp; 1 &amp; &amp; 1 &amp; &amp; 2 &amp; &amp; 1 &amp; &amp; 1 &amp; &amp; 4 &amp; &amp; 1 &amp; &amp; 1 &amp; &amp; 6 &amp; \\ \frac{0}{1} &amp; \frac{1}{0} &amp; &amp; \frac{1}{1} &amp; &amp; \frac{2}{1} &amp; &amp; \frac{5}{3} &amp; &amp; \frac{7}{4} &amp; &amp; \frac{12}{7} &amp; &amp; \frac{55}{32} &amp; &amp; \frac{67}{39} &amp; &amp; \frac{122}{71} &amp; &amp; \frac{799}{465} \\ \end{array} $$</p> Answer 70: <p>Let's say that a number is cut into equal parts and then multiply those parts together, for example take number $20$ and divide it by $4$ ($20/4=5$), as result we have:</p> <p>$5\times5\times5\times5=625$</p> <p>The question here is to find the number $n\in \mathrm{N}$ such that is maximizes </p> <p>$r = \arg \max_{n,m} \prod_{i=1}^{n}m$</p> <p>where</p> <p>$m=(\mathrm{your\:choice\:of\:number})/n$</p> <p>We can show that the number $r$ is maximized when $m\approx e$. In the case of choosing $20$ we have</p> <p>$20/7\approx e$</p> <p>so </p> <p>$r=(2.8)^7$</p> Answer 71: <p>One of my favourites is the following property of $e$.</p> <p>Write down a random number between $0$ and $1$. Write down another one and add it to the previous one. If the total exceeds $1$, stop. Otherwise keep adding such random numbers until the total exceeds $1$ and stop.</p> <p>The expected number of such random numbers is $e$.</p> Answer 72: <p>My favourite is about derangements.</p> <p>$n$ people write their name on different envelopes and put the envelopes in a box. 
If they pick a random envelope back from the box, the probability that no one picks an envelope with its name is $\approx\frac{1}{e}$.</p> <hr> <p>A number-theoretic one: if $\{p_k\}_{k\geq 1}$ is the sequence of primes, $$ \prod_{k=1}^{n} p_k = e^{n+o(n)}.$$</p> Answer 73: <p>Here are some more relations which might be pleasing.</p> <blockquote> <p>From section 1.3 of <em><a href="http://rads.stackoverflow.com/amzn/click/0521818052" rel="nofollow noreferrer">Mathematical Constants</a></em> by S.R. Finch:</p> <ul> <li><p>A Wallis-like infinite product is \begin{align*} e=\frac{2}{1}\cdot\left(\frac{4}{3}\right)^{\frac{1}{2}} \cdot\left(\frac{6\cdot 8}{5\cdot 7}\right)^{\frac{1}{4}} \cdot\left(\frac{10\cdot 12\cdot 14\cdot 16}{9\cdot 11\cdot 13\cdot 15}\right)^{\frac{1}{8}}\cdots \end{align*}</p></li> <li><p>From Stirling's formula we derive \begin{align*} e=\lim_{n\rightarrow \infty}\frac{n}{(n!)^{\frac{1}{n}}} \end{align*}</p></li> <li><p>Another continued fraction representation is \begin{align*} e&amp;=2+\frac{\left.2\right|}{\left|2\right.} +\frac{\left.3\right|}{\left|3\right.} +\frac{\left.4\right|}{\left|4\right.} +\frac{\left.5\right|}{\left|5\right.}+\cdots\\ &amp;=2+\frac{2}{2+\frac{3}{3+\frac{4}{4+\frac{5}{5+\cdots}}}} \end{align*}</p></li> </ul> </blockquote> <p>$$ $$</p> <blockquote> <p>In section <em>Intriguing Results</em> in <em><a href="http://rads.stackoverflow.com/amzn/click/0883857456" rel="nofollow noreferrer">Real Infinite Series</a></em> by D.D. Bonar and M.J. Khoury we find</p> <ul> <li>Gem 89 (American Math Monthly 42:2 pp. 
111-112)</li> </ul> <p>\begin{align*} e=\frac{1}{5}\left(\frac{1^2}{0!}+\frac{2^2}{1!}+\frac{3^2}{2!}+\frac{4^2}{3!}+\cdots\right) \end{align*}</p> </blockquote> Answer 74: <p>Here is the (simple) continued fraction for $e-1.$ The pattern continues forever, 1,1,2,1,1,4,1,1,6,1,1,8,1,1,10,1,1,12,...</p> <p>To get $e$ itself add one, this is the same as changing the first $1$ to a $2$</p> <p>$$ \begin{array}{ccccccccccccccccccccccc} &amp; &amp; 1 &amp; &amp; 1 &amp; &amp; 2 &amp; &amp; 1 &amp; &amp; 1 &amp; &amp; 4 &amp; &amp; 1 &amp; &amp; 1 &amp; &amp; 6 &amp; \\ \frac{0}{1} &amp; \frac{1}{0} &amp; &amp; \frac{1}{1} &amp; &amp; \frac{2}{1} &amp; &amp; \frac{5}{3} &amp; &amp; \frac{7}{4} &amp; &amp; \frac{12}{7} &amp; &amp; \frac{55}{32} &amp; &amp; \frac{67}{39} &amp; &amp; \frac{122}{71} &amp; &amp; \frac{799}{465} \\ \end{array} $$</p> Answer 75: <p>Let's say that a number is cut into equal parts and then multiply those parts together, for example take number $20$ and divide it by $4$ ($20/4=5$), as result we have:</p> <p>$5\times5\times5\times5=625$</p> <p>The question here is to find the number $n\in \mathrm{N}$ such that is maximizes </p> <p>$r = \arg \max_{n,m} \prod_{i=1}^{n}m$</p> <p>where</p> <p>$m=(\mathrm{your\:choice\:of\:number})/n$</p> <p>We can show that the number $r$ is maximized when $m\approx e$. In the case of choosing $20$ we have</p> <p>$20/7\approx e$</p> <p>so </p> <p>$r=(2.8)^7$</p> Answer 76: <p>One of my favourites is the following property of $e$.</p> <p>Write down a random number between $0$ and $1$. Write down another one and add it to the previous one. If the total exceeds $1$, stop. Otherwise keep adding such random numbers until the total exceeds $1$ and stop.</p> <p>The expected number of such random numbers is $e$.</p> Answer 77: <p>My favourite is about derangements.</p> <p>$n$ people write their name on different envelopes and put the envelopes in a box. 
If they pick a random envelope back from the box, the probability that no one picks an envelope with its name is $\approx\frac{1}{e}$.</p> <hr> <p>A number-theoretic one: if $\{p_k\}_{k\geq 1}$ is the sequence of primes, $$ \prod_{k=1}^{n} p_k = e^{n+o(n)}.$$</p> Answer 78: <p>Here are some more relations which might be pleasing.</p> <blockquote> <p>From section 1.3 of <em><a href="http://rads.stackoverflow.com/amzn/click/0521818052" rel="nofollow noreferrer">Mathematical Constants</a></em> by S.R. Finch:</p> <ul> <li><p>A Wallis-like infinite product is \begin{align*} e=\frac{2}{1}\cdot\left(\frac{4}{3}\right)^{\frac{1}{2}} \cdot\left(\frac{6\cdot 8}{5\cdot 7}\right)^{\frac{1}{4}} \cdot\left(\frac{10\cdot 12\cdot 14\cdot 16}{9\cdot 11\cdot 13\cdot 15}\right)^{\frac{1}{8}}\cdots \end{align*}</p></li> <li><p>From Stirling's formula we derive \begin{align*} e=\lim_{n\rightarrow \infty}\frac{n}{(n!)^{\frac{1}{n}}} \end{align*}</p></li> <li><p>Another continued fraction representation is \begin{align*} e&amp;=2+\frac{\left.2\right|}{\left|2\right.} +\frac{\left.3\right|}{\left|3\right.} +\frac{\left.4\right|}{\left|4\right.} +\frac{\left.5\right|}{\left|5\right.}+\cdots\\ &amp;=2+\frac{2}{2+\frac{3}{3+\frac{4}{4+\frac{5}{5+\cdots}}}} \end{align*}</p></li> </ul> </blockquote> <p>$$ $$</p> <blockquote> <p>In section <em>Intriguing Results</em> in <em><a href="http://rads.stackoverflow.com/amzn/click/0883857456" rel="nofollow noreferrer">Real Infinite Series</a></em> by D.D. Bonar and M.J. Khoury we find</p> <ul> <li>Gem 89 (American Math Monthly 42:2 pp. 
111-112)</li> </ul> <p>\begin{align*} e=\frac{1}{5}\left(\frac{1^2}{0!}+\frac{2^2}{1!}+\frac{3^2}{2!}+\frac{4^2}{3!}+\cdots\right) \end{align*}</p> </blockquote>
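Two of the characterizations of $e$ quoted above (the expected number of uniform draws needed for a running sum to exceed $1$, and splitting a number into equal parts so that their product is maximal) are easy to check numerically. A minimal Python sketch, using only the standard library; the function names are illustrative, not taken from any of the answers:

```python
import math
import random

def mean_draws_until_sum_exceeds_one(trials, seed=0):
    # Estimate E[N], where N is the number of U(0,1) draws needed
    # for the running sum to exceed 1; the claim above is E[N] = e.
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        s, n = 0.0, 0
        while s <= 1.0:
            s += rng.random()
            n += 1
        total += n
    return total / trials

def best_split(x, max_parts=20):
    # Return the n maximizing (x/n)**n; the maximizing part size x/n is near e.
    return max(range(1, max_parts + 1), key=lambda n: (x / n) ** n)

print(mean_draws_until_sum_exceeds_one(200_000))  # close to e = 2.71828...
print(best_split(20), 20 / best_split(20))        # n = 7, part size 20/7 = 2.857...
```

For $x=20$ this reproduces the $20/7 \approx e$ choice made in the answer above.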
Prove compact subsets of metric spaces are closed
2,111,662
9
https://math.stackexchange.com/questions/2111662/prove-compact-subsets-of-metric-spaces-are-closed
<blockquote> <p>Prove compact subsets of metric spaces are closed</p> </blockquote> <hr> <p>Note, this question is more about analyzing an incorrect proof of mine than supplying a correct proof. </p> <hr> <p><em>My Attempted Proof</em></p> <p>Suppose $X$ is a metric space. Let $A \subset X$ be a compact subset of $X$ and let $\{V_{\alpha}\}$ be an open cover of $A$. Then there are finitely many indices $\alpha_{i}$ such that $A \subset V_{\alpha_{1}} \cup \ ... \ \cup V_{\alpha_{n}}$.</p> <p>Now let $x$ be a limit point of $A$. Assume $x \not\in A$. If $x \not\in A$ put $\delta = \inf \ \{\ d(x, y) \ | \ y \in A\}$. Take $\epsilon = \frac{\delta}{2}$, then $B_d(x, \epsilon) \cap A = \emptyset$, so that a neighbourhood of $x$ does not intersect $A$, asserting that $x$ cannot be a limit point of $A$; hence $x \in A$, so that $A$ is closed. $\square$.</p> <hr> <p>Now there must be something critically wrong in my proof, as I don't even use the condition that $A$ is compact anywhere in the contradiction that I establish. The above proof would assert that <em>every subset of a metric space is closed</em>.</p> <p>I think my error must be in the following argument: $\delta = \inf \ \{\ d(x, y) \ | \ y \in A\}$. For if we take $X = \mathbb{R}$ and $A = (0, 1) \subset \mathbb{R}$, then $\delta = 0$ if $x = 1$ or $x = 0$.</p> <p>Am I correct in analyzing this aspect of my proof?</p>
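The asker's self-diagnosis can be checked numerically: for $X = \mathbb{R}$ and $A = (0,1)$, the point $x = 0$ is a limit point of $A$ outside $A$, yet $\delta = \inf\{d(x,y) \mid y \in A\} = 0$, so $\epsilon = \delta/2 = 0$ and $B_d(x,\epsilon)$ contains no points at all. A small Python sketch over finite samples of $A$ (the sampling grid is only an illustration):

```python
def inf_distance(x, points):
    # Approximate inf{ |x - y| : y in points } over a finite sample of A.
    return min(abs(x - y) for y in points)

# Sample A = (0, 1) ever more finely; the infimum of d(0, y) shrinks
# toward 0, so the delta in the attempted proof is 0, not positive.
for k in (10, 1000, 100_000):
    sample = [i / k for i in range(1, k)]  # the points i/k inside (0, 1)
    print(k, inf_distance(0.0, sample))    # prints 1/k each time
```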
[ { "answer_id": 2112007, "score": 19, "text": "<p>Suppose <span class=\"math-container\">$A$</span> is compact. Then let <span class=\"math-container\">$x$</span> be in <span class=\"math-container\">$X \\setminus A$</span>. We must find a neighbourhood of <span class=\"math-container\">$x$</span> that is disjoint from <span class=\"math-container\">$A$</span> to show closedness of <span class=\"math-container\">$A$</span>.</p>\n\n<p>For every <span class=\"math-container\">$a \\in A$</span> let <span class=\"math-container\">$U_a = B(a, \\frac{d(a,x)}{2})$</span> and <span class=\"math-container\">$V_a = B(x, \\frac{d(a,x)}{2})$</span>. Then <span class=\"math-container\">$U_a$</span> and <span class=\"math-container\">$V_a$</span> are disjoint open neighbourhoods of <span class=\"math-container\">$a$</span> and <span class=\"math-container\">$x$</span> respectively (disjointness follows from the triangle inequality, check this).</p>\n\n<p>Then the <span class=\"math-container\">$U_a, a \\in A$</span> together form a cover of <span class=\"math-container\">$A$</span>, and now we use compactness of <span class=\"math-container\">$A$</span> to get <em>finitely many</em> neighbourhoods <span class=\"math-container\">$U_{a_1}, \\ldots, U_{a_n}$</span> that also cover <span class=\"math-container\">$A$</span>. But then <span class=\"math-container\">$V_{a_1} \\cap \\ldots \\cap V_{a_n}$</span> is an open neighbourhood of <span class=\"math-container\">$x$</span> that misses <span class=\"math-container\">$A$</span> (since a point in <span class=\"math-container\">$A$</span> cannot be in all of <span class=\"math-container\">$V_{a_1}, \\ldots, V_{a_n}$</span> because it belongs to some <span class=\"math-container\">$U_{a_i}$</span>, which is disjoint from <span class=\"math-container\">$V_{a_i}$</span>).</p>\n\n<p>So <span class=\"math-container\">$A$</span> is closed.</p>\n\n<p>An alternative with sequences: suppose <span class=\"math-container\">$x \\in \\overline{A}$</span>. 
Then being a point in the closure we find a sequence in <span class=\"math-container\">$A$</span>, <span class=\"math-container\">$(a_n)$</span> that converges to <span class=\"math-container\">$x$</span>. The sequence <span class=\"math-container\">$(a_n)$</span> from <span class=\"math-container\">$A$</span> has a convergent subsequence in <span class=\"math-container\">$A$</span> by compactness of <span class=\"math-container\">$A$</span>, so there is some <span class=\"math-container\">$a \\in A$</span> and some subsequence <span class=\"math-container\">$a_{n_k} \\rightarrow a$</span>. But also <span class=\"math-container\">$a_{n_k} \\rightarrow x$</span>, and as limits of sequences in metric spaces are unique: <span class=\"math-container\">$a =x$</span>, but then <span class=\"math-container\">$x \\in A$</span> as required: <span class=\"math-container\">$\\overline{A} = A$</span>. </p>\n" }, { "answer_id": 2111710, "score": 2, "text": "<p>In a metric space we have:</p>\n\n<p>$A$ is compact iff $A$ is sequentially compact.</p>\n\n<p>Use this property to show that $A$ is closed.</p>\n\n<p>To your proof: if for $x \\in X \\setminus A$ we define $ \\delta(x) = \\inf \\ \\{\\ d(x, y) \\ | \\ y \\in A\\}$, then it holds that</p>\n\n<p>$(*)$ $\\quad$ $A$ is closed iff $\\delta(x)&gt;0$ for all $x \\in X \\setminus A$ !</p>\n\n<p>Try a proof of $(*)$.</p>\n" }, { "answer_id": 2982602, "score": 0, "text": "<p>The fact that you're taking an open ball around x of radius epsilon and that ball doesn't intersect with A is fundamentally incorrect if x is a limit point of A. Thus, your assumption that delta, and effectively epsilon, is greater than 0 is wrong. </p>\n" } ]
We must find a neighbourhood of <span class=\"math-container\">$x$</span> that is disjoint from <span class=\"math-container\">$A$</span> to show closedness of <span class=\"math-container\">$A$</span>.</p>\n\n<p>For every <span class=\"math-container\">$a \\in A$</span> let <span class=\"math-container\">$U_a = B(a, \\frac{d(a,x)}{2})$</span> and <span class=\"math-container\">$V_a =B(x, \\frac{d(a,x)}{2})$</span>. Then <span class=\"math-container\">$U_a$</span> and <span class=\"math-container\">$V_a$</span> are disjoint open neighbourhoods of <span class=\"math-container\">$a$</span> and <span class=\"math-container\">$x$</span> respectively (disjoint follows from the triangle inequality, check this).</p>\n\n<p>Then the <span class=\"math-container\">$U_a ,a \\in A$</span> together form a cover of <span class=\"math-container\">$A$</span>, and now we use compactness of <span class=\"math-container\">$A$</span> to get <em>finitely many</em> neighborhoods <span class=\"math-container\">$U_{a_1}, \\ldots, U_{a_n}$</span>, that also cover <span class=\"math-container\">$A$</span>. But then <span class=\"math-container\">$V_{a_1} \\cap \\ldots V_{a_n}$</span> is an open neighbourhood of <span class=\"math-container\">$x$</span> that misses <span class=\"math-container\">$A$</span> (since a point in <span class=\"math-container\">$A$</span> cannot be in all of <span class=\"math-container\">$V_{a_1}, \\ldots, V_{a_n}$</span> because it belongs to some <span class=\"math-container\">$U_{a_i}$</span> which is disjoint with <span class=\"math-container\">$V_{a_i}$</span>.).</p>\n\n<p>So <span class=\"math-container\">$A$</span> is closed.</p>\n\n<p>An alternative with sequences: suppose <span class=\"math-container\">$x \\in \\overline{A}$</span>. Then being a point in the closure we find a sequence in <span class=\"math-container\">$A$</span>, <span class=\"math-container\">$(a_n)$</span> that converges to <span class=\"math-container\">$x$</span>. 
The sequence <span class=\"math-container\">$(a_n)$</span> from <span class=\"math-container\">$A$</span> has a convergent subsequence in <span class=\"math-container\">$A$</span> by compactness of <span class=\"math-container\">$A$</span>, so there is some <span class=\"math-container\">$a \\in A$</span> and some subsequence <span class=\"math-container\">$a_{n_k} \\rightarrow a$</span>. But also <span class=\"math-container\">$a_{n_k} \\rightarrow x$</span>, and as limits of sequences in metric spaces are unique: <span class=\"math-container\">$a =x$</span>, but then <span class=\"math-container\">$x \\in A$</span> as required: <span class=\"math-container\">$\\overline{A} = A$</span>. </p>\n" }, { "answer_id": 2111710, "score": 2, "text": "<p>In a metric space we have:</p>\n\n<p>$A$ is compact iff $A$ is sequentially compact.</p>\n\n<p>Use this property to show that $A$ is closed.</p>\n\n<p>To your proof: if for $x \\in X \\setminus A$ we define $ \\delta(x) = \\inf \\ \\{\\ d(x, y) \\ | \\ y \\in A\\}$, then it holds that</p>\n\n<p>$(*)$ $\\quad$ $A$ is closed iff $\\delta(x)&gt;0$ for all $x \\in X \\setminus A$ !</p>\n\n<p>Try a proof of $(*)$.</p>\n" }, { "answer_id": 2982602, "score": 0, "text": "<p>The fact that you're taking an open ball around x of radius epsilon and that ball doesn't intersect with A is fundamentally incorrect if x is a limit point of A. Thus, your assumption that delta, and effectively epsilon, is greater than 0 is wrong. </p>\n" }, { "answer_id": 2112007, "score": 19, "text": "<p>Suppose <span class=\"math-container\">$A$</span> is compact. Then let <span class=\"math-container\">$x$</span> be in <span class=\"math-container\">$X \\setminus A$</span>. 
We must find a neighbourhood of <span class=\"math-container\">$x$</span> that is disjoint from <span class=\"math-container\">$A$</span> to show closedness of <span class=\"math-container\">$A$</span>.</p>\n\n<p>For every <span class=\"math-container\">$a \\in A$</span> let <span class=\"math-container\">$U_a = B(a, \\frac{d(a,x)}{2})$</span> and <span class=\"math-container\">$V_a =B(x, \\frac{d(a,x)}{2})$</span>. Then <span class=\"math-container\">$U_a$</span> and <span class=\"math-container\">$V_a$</span> are disjoint open neighbourhoods of <span class=\"math-container\">$a$</span> and <span class=\"math-container\">$x$</span> respectively (disjoint follows from the triangle inequality, check this).</p>\n\n<p>Then the <span class=\"math-container\">$U_a ,a \\in A$</span> together form a cover of <span class=\"math-container\">$A$</span>, and now we use compactness of <span class=\"math-container\">$A$</span> to get <em>finitely many</em> neighborhoods <span class=\"math-container\">$U_{a_1}, \\ldots, U_{a_n}$</span>, that also cover <span class=\"math-container\">$A$</span>. But then <span class=\"math-container\">$V_{a_1} \\cap \\ldots V_{a_n}$</span> is an open neighbourhood of <span class=\"math-container\">$x$</span> that misses <span class=\"math-container\">$A$</span> (since a point in <span class=\"math-container\">$A$</span> cannot be in all of <span class=\"math-container\">$V_{a_1}, \\ldots, V_{a_n}$</span> because it belongs to some <span class=\"math-container\">$U_{a_i}$</span> which is disjoint with <span class=\"math-container\">$V_{a_i}$</span>.).</p>\n\n<p>So <span class=\"math-container\">$A$</span> is closed.</p>\n\n<p>An alternative with sequences: suppose <span class=\"math-container\">$x \\in \\overline{A}$</span>. Then being a point in the closure we find a sequence in <span class=\"math-container\">$A$</span>, <span class=\"math-container\">$(a_n)$</span> that converges to <span class=\"math-container\">$x$</span>. 
The sequence <span class=\"math-container\">$(a_n)$</span> from <span class=\"math-container\">$A$</span> has a convergent subsequence in <span class=\"math-container\">$A$</span> by compactness of <span class=\"math-container\">$A$</span>, so there is some <span class=\"math-container\">$a \\in A$</span> and some subsequence <span class=\"math-container\">$a_{n_k} \\rightarrow a$</span>. But also <span class=\"math-container\">$a_{n_k} \\rightarrow x$</span>, and as limits of sequences in metric spaces are unique: <span class=\"math-container\">$a =x$</span>, but then <span class=\"math-container\">$x \\in A$</span> as required: <span class=\"math-container\">$\\overline{A} = A$</span>. </p>\n" }, { "answer_id": 2111710, "score": 2, "text": "<p>In a metric space we have:</p>\n\n<p>$A$ is compact iff $A$ is sequentially compact.</p>\n\n<p>Use this property to show that $A$ is closed.</p>\n\n<p>To your proof: if for $x \\in X \\setminus A$ we define $ \\delta(x) = \\inf \\ \\{\\ d(x, y) \\ | \\ y \\in A\\}$, then it holds that</p>\n\n<p>$(*)$ $\\quad$ $A$ is closed iff $\\delta(x)&gt;0$ for all $x \\in X \\setminus A$ !</p>\n\n<p>Try a proof of $(*)$.</p>\n" }, { "answer_id": 2982602, "score": 0, "text": "<p>The fact that you're taking an open ball around x of radius epsilon and that ball doesn't intersect with A is fundamentally incorrect if x is a limit point of A. Thus, your assumption that delta, and effectively epsilon, is greater than 0 is wrong. </p>\n" }, { "answer_id": 2112007, "score": 19, "text": "<p>Suppose <span class=\"math-container\">$A$</span> is compact. Then let <span class=\"math-container\">$x$</span> be in <span class=\"math-container\">$X \\setminus A$</span>. 
We must find a neighbourhood of <span class=\"math-container\">$x$</span> that is disjoint from <span class=\"math-container\">$A$</span> to show closedness of <span class=\"math-container\">$A$</span>.</p>\n\n<p>For every <span class=\"math-container\">$a \\in A$</span> let <span class=\"math-container\">$U_a = B(a, \\frac{d(a,x)}{2})$</span> and <span class=\"math-container\">$V_a =B(x, \\frac{d(a,x)}{2})$</span>. Then <span class=\"math-container\">$U_a$</span> and <span class=\"math-container\">$V_a$</span> are disjoint open neighbourhoods of <span class=\"math-container\">$a$</span> and <span class=\"math-container\">$x$</span> respectively (disjoint follows from the triangle inequality, check this).</p>\n\n<p>Then the <span class=\"math-container\">$U_a ,a \\in A$</span> together form a cover of <span class=\"math-container\">$A$</span>, and now we use compactness of <span class=\"math-container\">$A$</span> to get <em>finitely many</em> neighborhoods <span class=\"math-container\">$U_{a_1}, \\ldots, U_{a_n}$</span>, that also cover <span class=\"math-container\">$A$</span>. But then <span class=\"math-container\">$V_{a_1} \\cap \\ldots V_{a_n}$</span> is an open neighbourhood of <span class=\"math-container\">$x$</span> that misses <span class=\"math-container\">$A$</span> (since a point in <span class=\"math-container\">$A$</span> cannot be in all of <span class=\"math-container\">$V_{a_1}, \\ldots, V_{a_n}$</span> because it belongs to some <span class=\"math-container\">$U_{a_i}$</span> which is disjoint with <span class=\"math-container\">$V_{a_i}$</span>.).</p>\n\n<p>So <span class=\"math-container\">$A$</span> is closed.</p>\n\n<p>An alternative with sequences: suppose <span class=\"math-container\">$x \\in \\overline{A}$</span>. Then being a point in the closure we find a sequence in <span class=\"math-container\">$A$</span>, <span class=\"math-container\">$(a_n)$</span> that converges to <span class=\"math-container\">$x$</span>. 
The sequence <span class=\"math-container\">$(a_n)$</span> from <span class=\"math-container\">$A$</span> has a convergent subsequence in <span class=\"math-container\">$A$</span> by compactness of <span class=\"math-container\">$A$</span>, so there is some <span class=\"math-container\">$a \\in A$</span> and some subsequence <span class=\"math-container\">$a_{n_k} \\rightarrow a$</span>. But also <span class=\"math-container\">$a_{n_k} \\rightarrow x$</span>, and as limits of sequences in metric spaces are unique: <span class=\"math-container\">$a =x$</span>, but then <span class=\"math-container\">$x \\in A$</span> as required: <span class=\"math-container\">$\\overline{A} = A$</span>. </p>\n" }, { "answer_id": 2111710, "score": 2, "text": "<p>In a metric space we have:</p>\n\n<p>$A$ is compact iff $A$ is sequentially compact.</p>\n\n<p>Use this property to show that $A$ is closed.</p>\n\n<p>To your proof: if for $x \\in X \\setminus A$ we define $ \\delta(x) = \\inf \\ \\{\\ d(x, y) \\ | \\ y \\in A\\}$, then it holds that</p>\n\n<p>$(*)$ $\\quad$ $A$ is closed iff $\\delta(x)&gt;0$ for all $x \\in X \\setminus A$ !</p>\n\n<p>Try a proof of $(*)$.</p>\n" }, { "answer_id": 2982602, "score": 0, "text": "<p>The fact that you're taking an open ball around x of radius epsilon and that ball doesn't intersect with A is fundamentally incorrect if x is a limit point of A. Thus, your assumption that delta, and effectively epsilon, is greater than 0 is wrong. </p>\n" }, { "answer_id": 2112007, "score": 19, "text": "<p>Suppose <span class=\"math-container\">$A$</span> is compact. Then let <span class=\"math-container\">$x$</span> be in <span class=\"math-container\">$X \\setminus A$</span>. 
We must find a neighbourhood of <span class=\"math-container\">$x$</span> that is disjoint from <span class=\"math-container\">$A$</span> to show closedness of <span class=\"math-container\">$A$</span>.</p>\n\n<p>For every <span class=\"math-container\">$a \\in A$</span> let <span class=\"math-container\">$U_a = B(a, \\frac{d(a,x)}{2})$</span> and <span class=\"math-container\">$V_a =B(x, \\frac{d(a,x)}{2})$</span>. Then <span class=\"math-container\">$U_a$</span> and <span class=\"math-container\">$V_a$</span> are disjoint open neighbourhoods of <span class=\"math-container\">$a$</span> and <span class=\"math-container\">$x$</span> respectively (disjoint follows from the triangle inequality, check this).</p>\n\n<p>Then the <span class=\"math-container\">$U_a ,a \\in A$</span> together form a cover of <span class=\"math-container\">$A$</span>, and now we use compactness of <span class=\"math-container\">$A$</span> to get <em>finitely many</em> neighborhoods <span class=\"math-container\">$U_{a_1}, \\ldots, U_{a_n}$</span>, that also cover <span class=\"math-container\">$A$</span>. But then <span class=\"math-container\">$V_{a_1} \\cap \\ldots V_{a_n}$</span> is an open neighbourhood of <span class=\"math-container\">$x$</span> that misses <span class=\"math-container\">$A$</span> (since a point in <span class=\"math-container\">$A$</span> cannot be in all of <span class=\"math-container\">$V_{a_1}, \\ldots, V_{a_n}$</span> because it belongs to some <span class=\"math-container\">$U_{a_i}$</span> which is disjoint with <span class=\"math-container\">$V_{a_i}$</span>.).</p>\n\n<p>So <span class=\"math-container\">$A$</span> is closed.</p>\n\n<p>An alternative with sequences: suppose <span class=\"math-container\">$x \\in \\overline{A}$</span>. Then being a point in the closure we find a sequence in <span class=\"math-container\">$A$</span>, <span class=\"math-container\">$(a_n)$</span> that converges to <span class=\"math-container\">$x$</span>. 
The sequence <span class=\"math-container\">$(a_n)$</span> from <span class=\"math-container\">$A$</span> has a convergent subsequence in <span class=\"math-container\">$A$</span> by compactness of <span class=\"math-container\">$A$</span>, so there is some <span class=\"math-container\">$a \\in A$</span> and some subsequence <span class=\"math-container\">$a_{n_k} \\rightarrow a$</span>. But also <span class=\"math-container\">$a_{n_k} \\rightarrow x$</span>, and as limits of sequences in metric spaces are unique: <span class=\"math-container\">$a =x$</span>, but then <span class=\"math-container\">$x \\in A$</span> as required: <span class=\"math-container\">$\\overline{A} = A$</span>. </p>\n" }, { "answer_id": 2111710, "score": 2, "text": "<p>In a metric space we have:</p>\n\n<p>$A$ is compact iff $A$ is sequentially compact.</p>\n\n<p>Use this property to show that $A$ is closed.</p>\n\n<p>To your proof: if for $x \\in X \\setminus A$ we define $ \\delta(x) = \\inf \\ \\{\\ d(x, y) \\ | \\ y \\in A\\}$, then it holds that</p>\n\n<p>$(*)$ $\\quad$ $A$ is closed iff $\\delta(x)&gt;0$ for all $x \\in X \\setminus A$ !</p>\n\n<p>Try a proof of $(*)$.</p>\n" }, { "answer_id": 2982602, "score": 0, "text": "<p>The fact that you're taking an open ball around x of radius epsilon and that ball doesn't intersect with A is fundamentally incorrect if x is a limit point of A. Thus, your assumption that delta, and effectively epsilon, is greater than 0 is wrong. </p>\n" }, { "answer_id": 2112007, "score": 19, "text": "<p>Suppose <span class=\"math-container\">$A$</span> is compact. Then let <span class=\"math-container\">$x$</span> be in <span class=\"math-container\">$X \\setminus A$</span>. 
We must find a neighbourhood of <span class=\"math-container\">$x$</span> that is disjoint from <span class=\"math-container\">$A$</span> to show closedness of <span class=\"math-container\">$A$</span>.</p>\n\n<p>For every <span class=\"math-container\">$a \\in A$</span> let <span class=\"math-container\">$U_a = B(a, \\frac{d(a,x)}{2})$</span> and <span class=\"math-container\">$V_a =B(x, \\frac{d(a,x)}{2})$</span>. Then <span class=\"math-container\">$U_a$</span> and <span class=\"math-container\">$V_a$</span> are disjoint open neighbourhoods of <span class=\"math-container\">$a$</span> and <span class=\"math-container\">$x$</span> respectively (disjoint follows from the triangle inequality, check this).</p>\n\n<p>Then the <span class=\"math-container\">$U_a ,a \\in A$</span> together form a cover of <span class=\"math-container\">$A$</span>, and now we use compactness of <span class=\"math-container\">$A$</span> to get <em>finitely many</em> neighborhoods <span class=\"math-container\">$U_{a_1}, \\ldots, U_{a_n}$</span>, that also cover <span class=\"math-container\">$A$</span>. But then <span class=\"math-container\">$V_{a_1} \\cap \\ldots V_{a_n}$</span> is an open neighbourhood of <span class=\"math-container\">$x$</span> that misses <span class=\"math-container\">$A$</span> (since a point in <span class=\"math-container\">$A$</span> cannot be in all of <span class=\"math-container\">$V_{a_1}, \\ldots, V_{a_n}$</span> because it belongs to some <span class=\"math-container\">$U_{a_i}$</span> which is disjoint with <span class=\"math-container\">$V_{a_i}$</span>.).</p>\n\n<p>So <span class=\"math-container\">$A$</span> is closed.</p>\n\n<p>An alternative with sequences: suppose <span class=\"math-container\">$x \\in \\overline{A}$</span>. Then being a point in the closure we find a sequence in <span class=\"math-container\">$A$</span>, <span class=\"math-container\">$(a_n)$</span> that converges to <span class=\"math-container\">$x$</span>. 
The sequence <span class=\"math-container\">$(a_n)$</span> from <span class=\"math-container\">$A$</span> has a convergent subsequence in <span class=\"math-container\">$A$</span> by compactness of <span class=\"math-container\">$A$</span>, so there is some <span class=\"math-container\">$a \\in A$</span> and some subsequence <span class=\"math-container\">$a_{n_k} \\rightarrow a$</span>. But also <span class=\"math-container\">$a_{n_k} \\rightarrow x$</span>, and as limits of sequences in metric spaces are unique: <span class=\"math-container\">$a =x$</span>, but then <span class=\"math-container\">$x \\in A$</span> as required: <span class=\"math-container\">$\\overline{A} = A$</span>. </p>\n" }, { "answer_id": 2111710, "score": 2, "text": "<p>In a metric space we have:</p>\n\n<p>$A$ is compact iff $A$ is sequentially compact.</p>\n\n<p>Use this property to show that $A$ is closed.</p>\n\n<p>To your proof: if for $x \\in X \\setminus A$ we define $ \\delta(x) = \\inf \\ \\{\\ d(x, y) \\ | \\ y \\in A\\}$, then it holds that</p>\n\n<p>$(*)$ $\\quad$ $A$ is closed iff $\\delta(x)&gt;0$ for all $x \\in X \\setminus A$ !</p>\n\n<p>Try a proof of $(*)$.</p>\n" }, { "answer_id": 2982602, "score": 0, "text": "<p>The fact that you're taking an open ball around x of radius epsilon and that ball doesn't intersect with A is fundamentally incorrect if x is a limit point of A. Thus, your assumption that delta, and effectively epsilon, is greater than 0 is wrong. </p>\n" }, { "answer_id": 2112007, "score": 19, "text": "<p>Suppose <span class=\"math-container\">$A$</span> is compact. Then let <span class=\"math-container\">$x$</span> be in <span class=\"math-container\">$X \\setminus A$</span>. 
We must find a neighbourhood of <span class=\"math-container\">$x$</span> that is disjoint from <span class=\"math-container\">$A$</span> to show closedness of <span class=\"math-container\">$A$</span>.</p>\n\n<p>For every <span class=\"math-container\">$a \\in A$</span> let <span class=\"math-container\">$U_a = B(a, \\frac{d(a,x)}{2})$</span> and <span class=\"math-container\">$V_a =B(x, \\frac{d(a,x)}{2})$</span>. Then <span class=\"math-container\">$U_a$</span> and <span class=\"math-container\">$V_a$</span> are disjoint open neighbourhoods of <span class=\"math-container\">$a$</span> and <span class=\"math-container\">$x$</span> respectively (disjoint follows from the triangle inequality, check this).</p>\n\n<p>Then the <span class=\"math-container\">$U_a ,a \\in A$</span> together form a cover of <span class=\"math-container\">$A$</span>, and now we use compactness of <span class=\"math-container\">$A$</span> to get <em>finitely many</em> neighborhoods <span class=\"math-container\">$U_{a_1}, \\ldots, U_{a_n}$</span>, that also cover <span class=\"math-container\">$A$</span>. But then <span class=\"math-container\">$V_{a_1} \\cap \\ldots V_{a_n}$</span> is an open neighbourhood of <span class=\"math-container\">$x$</span> that misses <span class=\"math-container\">$A$</span> (since a point in <span class=\"math-container\">$A$</span> cannot be in all of <span class=\"math-container\">$V_{a_1}, \\ldots, V_{a_n}$</span> because it belongs to some <span class=\"math-container\">$U_{a_i}$</span> which is disjoint with <span class=\"math-container\">$V_{a_i}$</span>.).</p>\n\n<p>So <span class=\"math-container\">$A$</span> is closed.</p>\n\n<p>An alternative with sequences: suppose <span class=\"math-container\">$x \\in \\overline{A}$</span>. Then being a point in the closure we find a sequence in <span class=\"math-container\">$A$</span>, <span class=\"math-container\">$(a_n)$</span> that converges to <span class=\"math-container\">$x$</span>. 
The sequence <span class=\"math-container\">$(a_n)$</span> from <span class=\"math-container\">$A$</span> has a convergent subsequence in <span class=\"math-container\">$A$</span> by compactness of <span class=\"math-container\">$A$</span>, so there is some <span class=\"math-container\">$a \\in A$</span> and some subsequence <span class=\"math-container\">$a_{n_k} \\rightarrow a$</span>. But also <span class=\"math-container\">$a_{n_k} \\rightarrow x$</span>, and as limits of sequences in metric spaces are unique: <span class=\"math-container\">$a =x$</span>, but then <span class=\"math-container\">$x \\in A$</span> as required: <span class=\"math-container\">$\\overline{A} = A$</span>. </p>\n" }, { "answer_id": 2111710, "score": 2, "text": "<p>In a metric space we have:</p>\n\n<p>$A$ is compact iff $A$ is sequentially compact.</p>\n\n<p>Use this property to show that $A$ is closed.</p>\n\n<p>To your proof: if for $x \\in X \\setminus A$ we define $ \\delta(x) = \\inf \\ \\{\\ d(x, y) \\ | \\ y \\in A\\}$, then it holds that</p>\n\n<p>$(*)$ $\\quad$ $A$ is closed iff $\\delta(x)&gt;0$ for all $x \\in X \\setminus A$ !</p>\n\n<p>Try a proof of $(*)$.</p>\n" }, { "answer_id": 2982602, "score": 0, "text": "<p>The fact that you're taking an open ball around x of radius epsilon and that ball doesn't intersect with A is fundamentally incorrect if x is a limit point of A. Thus, your assumption that delta, and effectively epsilon, is greater than 0 is wrong. </p>\n" }, { "answer_id": 2112007, "score": 19, "text": "<p>Suppose <span class=\"math-container\">$A$</span> is compact. Then let <span class=\"math-container\">$x$</span> be in <span class=\"math-container\">$X \\setminus A$</span>. 
We must find a neighbourhood of <span class=\"math-container\">$x$</span> that is disjoint from <span class=\"math-container\">$A$</span> to show closedness of <span class=\"math-container\">$A$</span>.</p>\n\n<p>For every <span class=\"math-container\">$a \\in A$</span> let <span class=\"math-container\">$U_a = B(a, \\frac{d(a,x)}{2})$</span> and <span class=\"math-container\">$V_a =B(x, \\frac{d(a,x)}{2})$</span>. Then <span class=\"math-container\">$U_a$</span> and <span class=\"math-container\">$V_a$</span> are disjoint open neighbourhoods of <span class=\"math-container\">$a$</span> and <span class=\"math-container\">$x$</span> respectively (disjoint follows from the triangle inequality, check this).</p>\n\n<p>Then the <span class=\"math-container\">$U_a ,a \\in A$</span> together form a cover of <span class=\"math-container\">$A$</span>, and now we use compactness of <span class=\"math-container\">$A$</span> to get <em>finitely many</em> neighborhoods <span class=\"math-container\">$U_{a_1}, \\ldots, U_{a_n}$</span>, that also cover <span class=\"math-container\">$A$</span>. But then <span class=\"math-container\">$V_{a_1} \\cap \\ldots V_{a_n}$</span> is an open neighbourhood of <span class=\"math-container\">$x$</span> that misses <span class=\"math-container\">$A$</span> (since a point in <span class=\"math-container\">$A$</span> cannot be in all of <span class=\"math-container\">$V_{a_1}, \\ldots, V_{a_n}$</span> because it belongs to some <span class=\"math-container\">$U_{a_i}$</span> which is disjoint with <span class=\"math-container\">$V_{a_i}$</span>.).</p>\n\n<p>So <span class=\"math-container\">$A$</span> is closed.</p>\n\n<p>An alternative with sequences: suppose <span class=\"math-container\">$x \\in \\overline{A}$</span>. Then being a point in the closure we find a sequence in <span class=\"math-container\">$A$</span>, <span class=\"math-container\">$(a_n)$</span> that converges to <span class=\"math-container\">$x$</span>. 
The sequence <span class=\"math-container\">$(a_n)$</span> from <span class=\"math-container\">$A$</span> has a convergent subsequence in <span class=\"math-container\">$A$</span> by compactness of <span class=\"math-container\">$A$</span>, so there is some <span class=\"math-container\">$a \\in A$</span> and some subsequence <span class=\"math-container\">$a_{n_k} \\rightarrow a$</span>. But also <span class=\"math-container\">$a_{n_k} \\rightarrow x$</span>, and as limits of sequences in metric spaces are unique: <span class=\"math-container\">$a =x$</span>, but then <span class=\"math-container\">$x \\in A$</span> as required: <span class=\"math-container\">$\\overline{A} = A$</span>. </p>\n" }, { "answer_id": 2111710, "score": 2, "text": "<p>In a metric space we have:</p>\n\n<p>$A$ is compact iff $A$ is sequentially compact.</p>\n\n<p>Use this property to show that $A$ is closed.</p>\n\n<p>To your proof: if for $x \\in X \\setminus A$ we define $ \\delta(x) = \\inf \\ \\{\\ d(x, y) \\ | \\ y \\in A\\}$, then it holds that</p>\n\n<p>$(*)$ $\\quad$ $A$ is closed iff $\\delta(x)&gt;0$ for all $x \\in X \\setminus A$ !</p>\n\n<p>Try a proof of $(*)$.</p>\n" }, { "answer_id": 2982602, "score": 0, "text": "<p>The fact that you're taking an open ball around x of radius epsilon and that ball doesn't intersect with A is fundamentally incorrect if x is a limit point of A. Thus, your assumption that delta, and effectively epsilon, is greater than 0 is wrong. </p>\n" }, { "answer_id": 2112007, "score": 19, "text": "<p>Suppose <span class=\"math-container\">$A$</span> is compact. Then let <span class=\"math-container\">$x$</span> be in <span class=\"math-container\">$X \\setminus A$</span>. 
We must find a neighbourhood of <span class=\"math-container\">$x$</span> that is disjoint from <span class=\"math-container\">$A$</span> to show closedness of <span class=\"math-container\">$A$</span>.</p>\n\n<p>For every <span class=\"math-container\">$a \\in A$</span> let <span class=\"math-container\">$U_a = B(a, \\frac{d(a,x)}{2})$</span> and <span class=\"math-container\">$V_a =B(x, \\frac{d(a,x)}{2})$</span>. Then <span class=\"math-container\">$U_a$</span> and <span class=\"math-container\">$V_a$</span> are disjoint open neighbourhoods of <span class=\"math-container\">$a$</span> and <span class=\"math-container\">$x$</span> respectively (disjoint follows from the triangle inequality, check this).</p>\n\n<p>Then the <span class=\"math-container\">$U_a ,a \\in A$</span> together form a cover of <span class=\"math-container\">$A$</span>, and now we use compactness of <span class=\"math-container\">$A$</span> to get <em>finitely many</em> neighborhoods <span class=\"math-container\">$U_{a_1}, \\ldots, U_{a_n}$</span>, that also cover <span class=\"math-container\">$A$</span>. But then <span class=\"math-container\">$V_{a_1} \\cap \\ldots V_{a_n}$</span> is an open neighbourhood of <span class=\"math-container\">$x$</span> that misses <span class=\"math-container\">$A$</span> (since a point in <span class=\"math-container\">$A$</span> cannot be in all of <span class=\"math-container\">$V_{a_1}, \\ldots, V_{a_n}$</span> because it belongs to some <span class=\"math-container\">$U_{a_i}$</span> which is disjoint with <span class=\"math-container\">$V_{a_i}$</span>.).</p>\n\n<p>So <span class=\"math-container\">$A$</span> is closed.</p>\n\n<p>An alternative with sequences: suppose <span class=\"math-container\">$x \\in \\overline{A}$</span>. Then being a point in the closure we find a sequence in <span class=\"math-container\">$A$</span>, <span class=\"math-container\">$(a_n)$</span> that converges to <span class=\"math-container\">$x$</span>. 
The sequence <span class=\"math-container\">$(a_n)$</span> from <span class=\"math-container\">$A$</span> has a convergent subsequence in <span class=\"math-container\">$A$</span> by compactness of <span class=\"math-container\">$A$</span>, so there is some <span class=\"math-container\">$a \in A$</span> and some subsequence <span class=\"math-container\">$a_{n_k} \rightarrow a$</span>. But also <span class=\"math-container\">$a_{n_k} \rightarrow x$</span>, and as limits of sequences in metric spaces are unique: <span class=\"math-container\">$a =x$</span>, but then <span class=\"math-container\">$x \in A$</span> as required: <span class=\"math-container\">$\overline{A} = A$</span>. </p>\n" }, { "answer_id": 2111710, "score": 2, "text": "<p>In a metric space we have:</p>\n\n<p>$A$ is compact iff $A$ is sequentially compact.</p>\n\n<p>Use this property to show that $A$ is closed.</p>\n\n<p>To your proof: if for $x \in X \setminus A$ we define $ \delta(x) = \inf \ \{\ d(x, y) \ | \ y \in A\}$, then it holds that</p>\n\n<p>$(*)$ $\quad$ $A$ is closed iff $\delta(x)&gt;0$ for all $x \in X \setminus A$ !</p>\n\n<p>Try a proof of $(*)$.</p>\n" }, { "answer_id": 2982602, "score": 0, "text": "<p>The fact that you're taking an open ball around x of radius epsilon and that ball doesn't intersect with A is fundamentally incorrect if x is a limit point of A. Thus, your assumption that delta, and effectively epsilon, is greater than 0 is wrong. </p>\n" } ]
Question: Prove compact subsets of metric spaces are closed Answer 1: <p>Suppose <span class="math-container">$A$</span> is compact. Then let <span class="math-container">$x$</span> be in <span class="math-container">$X \setminus A$</span>. We must find a neighbourhood of <span class="math-container">$x$</span> that is disjoint from <span class="math-container">$A$</span> to show closedness of <span class="math-container">$A$</span>.</p> <p>For every <span class="math-container">$a \in A$</span> let <span class="math-container">$U_a = B(a, \frac{d(a,x)}{2})$</span> and <span class="math-container">$V_a = B(x, \frac{d(a,x)}{2})$</span>. Then <span class="math-container">$U_a$</span> and <span class="math-container">$V_a$</span> are disjoint open neighbourhoods of <span class="math-container">$a$</span> and <span class="math-container">$x$</span> respectively (disjointness follows from the triangle inequality, check this).</p> <p>Then the <span class="math-container">$U_a, a \in A$</span> together form a cover of <span class="math-container">$A$</span>, and now we use compactness of <span class="math-container">$A$</span> to get <em>finitely many</em> neighbourhoods <span class="math-container">$U_{a_1}, \ldots, U_{a_n}$</span> that also cover <span class="math-container">$A$</span>. But then <span class="math-container">$V_{a_1} \cap \ldots \cap V_{a_n}$</span> is an open neighbourhood of <span class="math-container">$x$</span> that misses <span class="math-container">$A$</span> (since a point in <span class="math-container">$A$</span> cannot be in all of <span class="math-container">$V_{a_1}, \ldots, V_{a_n}$</span> because it belongs to some <span class="math-container">$U_{a_i}$</span>, which is disjoint from <span class="math-container">$V_{a_i}$</span>).</p> <p>So <span class="math-container">$A$</span> is closed.</p> <p>An alternative with sequences: suppose <span class="math-container">$x \in \overline{A}$</span>. 
Then being a point in the closure we find a sequence in <span class="math-container">$A$</span>, <span class="math-container">$(a_n)$</span> that converges to <span class="math-container">$x$</span>. The sequence <span class="math-container">$(a_n)$</span> from <span class="math-container">$A$</span> has a convergent subsequence in <span class="math-container">$A$</span> by compactness of <span class="math-container">$A$</span>, so there is some <span class="math-container">$a \in A$</span> and some subsequence <span class="math-container">$a_{n_k} \rightarrow a$</span>. But also <span class="math-container">$a_{n_k} \rightarrow x$</span>, and as limits of sequences in metric spaces are unique: <span class="math-container">$a =x$</span>, but then <span class="math-container">$x \in A$</span> as required: <span class="math-container">$\overline{A} = A$</span>. </p> Answer 2: <p>In a metric space we have:</p> <p>$A$ is compact iff $A$ is sequentially compact.</p> <p>Use this property to show that $A$ is closed.</p> <p>To your proof: if for $x \in X \setminus A$ we define $ \delta(x) = \inf \ \{\ d(x, y) \ | \ y \in A\}$, then it holds that</p> <p>$(*)$ $\quad$ $A$ is closed iff $\delta(x)&gt;0$ for all $x \in X \setminus A$ !</p> <p>Try a proof of $(*)$.</p> Answer 3: <p>The fact that you're taking an open ball around x of radius epsilon and that ball doesn't intersect with A is fundamentally incorrect if x is a limit point of A. Thus, your assumption that delta, and effectively epsilon, is greater than 0 is wrong. </p>
But also <span class="math-container">$a_{n_k} \rightarrow x$</span>, and as limits of sequences in metric spaces are unique: <span class="math-container">$a =x$</span>, but then <span class="math-container">$x \in A$</span> as required: <span class="math-container">$\overline{A} = A$</span>. </p> Answer 29: <p>In a metric space we have:</p> <p>$A$ is compact iff $A$ is sequentially compact.</p> <p>Use this property to show that $A$ is closed.</p> <p>To your proof: if for $x \in X \setminus A$ we define $ \delta(x) = \inf \ \{\ d(x, y) \ | \ y \in A\}$, then it holds that</p> <p>$(*)$ $\quad$ $A$ is closed iff $\delta(x)&gt;0$ for all $x \in X \setminus A$ !</p> <p>Try a proof of $(*)$.</p> Answer 30: <p>The fact that you're taking an open ball around x of radius epsilon and that ball doesn't intersect with A is fundamentally incorrect if x is a limit point of A. Thus, your assumption that delta, and effectively epsilon, is greater than 0 is wrong. </p> Answer 31: <p>Suppose <span class="math-container">$A$</span> is compact. Then let <span class="math-container">$x$</span> be in <span class="math-container">$X \setminus A$</span>. We must find a neighbourhood of <span class="math-container">$x$</span> that is disjoint from <span class="math-container">$A$</span> to show closedness of <span class="math-container">$A$</span>.</p> <p>For every <span class="math-container">$a \in A$</span> let <span class="math-container">$U_a = B(a, \frac{d(a,x)}{2})$</span> and <span class="math-container">$V_a =B(x, \frac{d(a,x)}{2})$</span>. 
Then <span class="math-container">$U_a$</span> and <span class="math-container">$V_a$</span> are disjoint open neighbourhoods of <span class="math-container">$a$</span> and <span class="math-container">$x$</span> respectively (disjoint follows from the triangle inequality, check this).</p> <p>Then the <span class="math-container">$U_a ,a \in A$</span> together form a cover of <span class="math-container">$A$</span>, and now we use compactness of <span class="math-container">$A$</span> to get <em>finitely many</em> neighborhoods <span class="math-container">$U_{a_1}, \ldots, U_{a_n}$</span>, that also cover <span class="math-container">$A$</span>. But then <span class="math-container">$V_{a_1} \cap \ldots V_{a_n}$</span> is an open neighbourhood of <span class="math-container">$x$</span> that misses <span class="math-container">$A$</span> (since a point in <span class="math-container">$A$</span> cannot be in all of <span class="math-container">$V_{a_1}, \ldots, V_{a_n}$</span> because it belongs to some <span class="math-container">$U_{a_i}$</span> which is disjoint with <span class="math-container">$V_{a_i}$</span>.).</p> <p>So <span class="math-container">$A$</span> is closed.</p> <p>An alternative with sequences: suppose <span class="math-container">$x \in \overline{A}$</span>. Then being a point in the closure we find a sequence in <span class="math-container">$A$</span>, <span class="math-container">$(a_n)$</span> that converges to <span class="math-container">$x$</span>. The sequence <span class="math-container">$(a_n)$</span> from <span class="math-container">$A$</span> has a convergent subsequence in <span class="math-container">$A$</span> by compactness of <span class="math-container">$A$</span>, so there is some <span class="math-container">$a \in A$</span> and some subsequence <span class="math-container">$a_{n_k} \rightarrow a$</span>. 
But also <span class="math-container">$a_{n_k} \rightarrow x$</span>, and as limits of sequences in metric spaces are unique: <span class="math-container">$a =x$</span>, but then <span class="math-container">$x \in A$</span> as required: <span class="math-container">$\overline{A} = A$</span>. </p> Answer 32: <p>In a metric space we have:</p> <p>$A$ is compact iff $A$ is sequentially compact.</p> <p>Use this property to show that $A$ is closed.</p> <p>To your proof: if for $x \in X \setminus A$ we define $ \delta(x) = \inf \ \{\ d(x, y) \ | \ y \in A\}$, then it holds that</p> <p>$(*)$ $\quad$ $A$ is closed iff $\delta(x)&gt;0$ for all $x \in X \setminus A$ !</p> <p>Try a proof of $(*)$.</p> Answer 33: <p>The fact that you're taking an open ball around x of radius epsilon and that ball doesn't intersect with A is fundamentally incorrect if x is a limit point of A. Thus, your assumption that delta, and effectively epsilon, is greater than 0 is wrong. </p> Answer 34: <p>Suppose <span class="math-container">$A$</span> is compact. Then let <span class="math-container">$x$</span> be in <span class="math-container">$X \setminus A$</span>. We must find a neighbourhood of <span class="math-container">$x$</span> that is disjoint from <span class="math-container">$A$</span> to show closedness of <span class="math-container">$A$</span>.</p> <p>For every <span class="math-container">$a \in A$</span> let <span class="math-container">$U_a = B(a, \frac{d(a,x)}{2})$</span> and <span class="math-container">$V_a =B(x, \frac{d(a,x)}{2})$</span>. 
Then <span class="math-container">$U_a$</span> and <span class="math-container">$V_a$</span> are disjoint open neighbourhoods of <span class="math-container">$a$</span> and <span class="math-container">$x$</span> respectively (disjoint follows from the triangle inequality, check this).</p> <p>Then the <span class="math-container">$U_a ,a \in A$</span> together form a cover of <span class="math-container">$A$</span>, and now we use compactness of <span class="math-container">$A$</span> to get <em>finitely many</em> neighborhoods <span class="math-container">$U_{a_1}, \ldots, U_{a_n}$</span>, that also cover <span class="math-container">$A$</span>. But then <span class="math-container">$V_{a_1} \cap \ldots V_{a_n}$</span> is an open neighbourhood of <span class="math-container">$x$</span> that misses <span class="math-container">$A$</span> (since a point in <span class="math-container">$A$</span> cannot be in all of <span class="math-container">$V_{a_1}, \ldots, V_{a_n}$</span> because it belongs to some <span class="math-container">$U_{a_i}$</span> which is disjoint with <span class="math-container">$V_{a_i}$</span>.).</p> <p>So <span class="math-container">$A$</span> is closed.</p> <p>An alternative with sequences: suppose <span class="math-container">$x \in \overline{A}$</span>. Then being a point in the closure we find a sequence in <span class="math-container">$A$</span>, <span class="math-container">$(a_n)$</span> that converges to <span class="math-container">$x$</span>. The sequence <span class="math-container">$(a_n)$</span> from <span class="math-container">$A$</span> has a convergent subsequence in <span class="math-container">$A$</span> by compactness of <span class="math-container">$A$</span>, so there is some <span class="math-container">$a \in A$</span> and some subsequence <span class="math-container">$a_{n_k} \rightarrow a$</span>. 
But also <span class="math-container">$a_{n_k} \rightarrow x$</span>, and as limits of sequences in metric spaces are unique: <span class="math-container">$a =x$</span>, but then <span class="math-container">$x \in A$</span> as required: <span class="math-container">$\overline{A} = A$</span>. </p> Answer 35: <p>In a metric space we have:</p> <p>$A$ is compact iff $A$ is sequentially compact.</p> <p>Use this property to show that $A$ is closed.</p> <p>To your proof: if for $x \in X \setminus A$ we define $ \delta(x) = \inf \ \{\ d(x, y) \ | \ y \in A\}$, then it holds that</p> <p>$(*)$ $\quad$ $A$ is closed iff $\delta(x)&gt;0$ for all $x \in X \setminus A$ !</p> <p>Try a proof of $(*)$.</p> Answer 36: <p>The fact that you're taking an open ball around x of radius epsilon and that ball doesn't intersect with A is fundamentally incorrect if x is a limit point of A. Thus, your assumption that delta, and effectively epsilon, is greater than 0 is wrong. </p> Answer 37: <p>Suppose <span class="math-container">$A$</span> is compact. Then let <span class="math-container">$x$</span> be in <span class="math-container">$X \setminus A$</span>. We must find a neighbourhood of <span class="math-container">$x$</span> that is disjoint from <span class="math-container">$A$</span> to show closedness of <span class="math-container">$A$</span>.</p> <p>For every <span class="math-container">$a \in A$</span> let <span class="math-container">$U_a = B(a, \frac{d(a,x)}{2})$</span> and <span class="math-container">$V_a =B(x, \frac{d(a,x)}{2})$</span>. 
Then <span class="math-container">$U_a$</span> and <span class="math-container">$V_a$</span> are disjoint open neighbourhoods of <span class="math-container">$a$</span> and <span class="math-container">$x$</span> respectively (disjoint follows from the triangle inequality, check this).</p> <p>Then the <span class="math-container">$U_a ,a \in A$</span> together form a cover of <span class="math-container">$A$</span>, and now we use compactness of <span class="math-container">$A$</span> to get <em>finitely many</em> neighborhoods <span class="math-container">$U_{a_1}, \ldots, U_{a_n}$</span>, that also cover <span class="math-container">$A$</span>. But then <span class="math-container">$V_{a_1} \cap \ldots V_{a_n}$</span> is an open neighbourhood of <span class="math-container">$x$</span> that misses <span class="math-container">$A$</span> (since a point in <span class="math-container">$A$</span> cannot be in all of <span class="math-container">$V_{a_1}, \ldots, V_{a_n}$</span> because it belongs to some <span class="math-container">$U_{a_i}$</span> which is disjoint with <span class="math-container">$V_{a_i}$</span>.).</p> <p>So <span class="math-container">$A$</span> is closed.</p> <p>An alternative with sequences: suppose <span class="math-container">$x \in \overline{A}$</span>. Then being a point in the closure we find a sequence in <span class="math-container">$A$</span>, <span class="math-container">$(a_n)$</span> that converges to <span class="math-container">$x$</span>. The sequence <span class="math-container">$(a_n)$</span> from <span class="math-container">$A$</span> has a convergent subsequence in <span class="math-container">$A$</span> by compactness of <span class="math-container">$A$</span>, so there is some <span class="math-container">$a \in A$</span> and some subsequence <span class="math-container">$a_{n_k} \rightarrow a$</span>. 
But also <span class="math-container">$a_{n_k} \rightarrow x$</span>, and as limits of sequences in metric spaces are unique: <span class="math-container">$a =x$</span>, but then <span class="math-container">$x \in A$</span> as required: <span class="math-container">$\overline{A} = A$</span>. </p> Answer 38: <p>In a metric space we have:</p> <p>$A$ is compact iff $A$ is sequentially compact.</p> <p>Use this property to show that $A$ is closed.</p> <p>To your proof: if for $x \in X \setminus A$ we define $ \delta(x) = \inf \ \{\ d(x, y) \ | \ y \in A\}$, then it holds that</p> <p>$(*)$ $\quad$ $A$ is closed iff $\delta(x)&gt;0$ for all $x \in X \setminus A$ !</p> <p>Try a proof of $(*)$.</p> Answer 39: <p>The fact that you're taking an open ball around x of radius epsilon and that ball doesn't intersect with A is fundamentally incorrect if x is a limit point of A. Thus, your assumption that delta, and effectively epsilon, is greater than 0 is wrong. </p> Answer 40: <p>Suppose <span class="math-container">$A$</span> is compact. Then let <span class="math-container">$x$</span> be in <span class="math-container">$X \setminus A$</span>. We must find a neighbourhood of <span class="math-container">$x$</span> that is disjoint from <span class="math-container">$A$</span> to show closedness of <span class="math-container">$A$</span>.</p> <p>For every <span class="math-container">$a \in A$</span> let <span class="math-container">$U_a = B(a, \frac{d(a,x)}{2})$</span> and <span class="math-container">$V_a =B(x, \frac{d(a,x)}{2})$</span>. 
Then <span class="math-container">$U_a$</span> and <span class="math-container">$V_a$</span> are disjoint open neighbourhoods of <span class="math-container">$a$</span> and <span class="math-container">$x$</span> respectively (disjoint follows from the triangle inequality, check this).</p> <p>Then the <span class="math-container">$U_a ,a \in A$</span> together form a cover of <span class="math-container">$A$</span>, and now we use compactness of <span class="math-container">$A$</span> to get <em>finitely many</em> neighborhoods <span class="math-container">$U_{a_1}, \ldots, U_{a_n}$</span>, that also cover <span class="math-container">$A$</span>. But then <span class="math-container">$V_{a_1} \cap \ldots V_{a_n}$</span> is an open neighbourhood of <span class="math-container">$x$</span> that misses <span class="math-container">$A$</span> (since a point in <span class="math-container">$A$</span> cannot be in all of <span class="math-container">$V_{a_1}, \ldots, V_{a_n}$</span> because it belongs to some <span class="math-container">$U_{a_i}$</span> which is disjoint with <span class="math-container">$V_{a_i}$</span>.).</p> <p>So <span class="math-container">$A$</span> is closed.</p> <p>An alternative with sequences: suppose <span class="math-container">$x \in \overline{A}$</span>. Then being a point in the closure we find a sequence in <span class="math-container">$A$</span>, <span class="math-container">$(a_n)$</span> that converges to <span class="math-container">$x$</span>. The sequence <span class="math-container">$(a_n)$</span> from <span class="math-container">$A$</span> has a convergent subsequence in <span class="math-container">$A$</span> by compactness of <span class="math-container">$A$</span>, so there is some <span class="math-container">$a \in A$</span> and some subsequence <span class="math-container">$a_{n_k} \rightarrow a$</span>. 
But also <span class="math-container">$a_{n_k} \rightarrow x$</span>, and as limits of sequences in metric spaces are unique: <span class="math-container">$a =x$</span>, but then <span class="math-container">$x \in A$</span> as required: <span class="math-container">$\overline{A} = A$</span>. </p> Answer 41: <p>In a metric space we have:</p> <p>$A$ is compact iff $A$ is sequentially compact.</p> <p>Use this property to show that $A$ is closed.</p> <p>To your proof: if for $x \in X \setminus A$ we define $ \delta(x) = \inf \ \{\ d(x, y) \ | \ y \in A\}$, then it holds that</p> <p>$(*)$ $\quad$ $A$ is closed iff $\delta(x)&gt;0$ for all $x \in X \setminus A$ !</p> <p>Try a proof of $(*)$.</p> Answer 42: <p>The fact that you're taking an open ball around x of radius epsilon and that ball doesn't intersect with A is fundamentally incorrect if x is a limit point of A. Thus, your assumption that delta, and effectively epsilon, is greater than 0 is wrong. </p> Answer 43: <p>Suppose <span class="math-container">$A$</span> is compact. Then let <span class="math-container">$x$</span> be in <span class="math-container">$X \setminus A$</span>. We must find a neighbourhood of <span class="math-container">$x$</span> that is disjoint from <span class="math-container">$A$</span> to show closedness of <span class="math-container">$A$</span>.</p> <p>For every <span class="math-container">$a \in A$</span> let <span class="math-container">$U_a = B(a, \frac{d(a,x)}{2})$</span> and <span class="math-container">$V_a =B(x, \frac{d(a,x)}{2})$</span>. 
Then <span class="math-container">$U_a$</span> and <span class="math-container">$V_a$</span> are disjoint open neighbourhoods of <span class="math-container">$a$</span> and <span class="math-container">$x$</span> respectively (disjoint follows from the triangle inequality, check this).</p> <p>Then the <span class="math-container">$U_a ,a \in A$</span> together form a cover of <span class="math-container">$A$</span>, and now we use compactness of <span class="math-container">$A$</span> to get <em>finitely many</em> neighborhoods <span class="math-container">$U_{a_1}, \ldots, U_{a_n}$</span>, that also cover <span class="math-container">$A$</span>. But then <span class="math-container">$V_{a_1} \cap \ldots V_{a_n}$</span> is an open neighbourhood of <span class="math-container">$x$</span> that misses <span class="math-container">$A$</span> (since a point in <span class="math-container">$A$</span> cannot be in all of <span class="math-container">$V_{a_1}, \ldots, V_{a_n}$</span> because it belongs to some <span class="math-container">$U_{a_i}$</span> which is disjoint with <span class="math-container">$V_{a_i}$</span>.).</p> <p>So <span class="math-container">$A$</span> is closed.</p> <p>An alternative with sequences: suppose <span class="math-container">$x \in \overline{A}$</span>. Then being a point in the closure we find a sequence in <span class="math-container">$A$</span>, <span class="math-container">$(a_n)$</span> that converges to <span class="math-container">$x$</span>. The sequence <span class="math-container">$(a_n)$</span> from <span class="math-container">$A$</span> has a convergent subsequence in <span class="math-container">$A$</span> by compactness of <span class="math-container">$A$</span>, so there is some <span class="math-container">$a \in A$</span> and some subsequence <span class="math-container">$a_{n_k} \rightarrow a$</span>. 
But also <span class="math-container">$a_{n_k} \rightarrow x$</span>, and as limits of sequences in metric spaces are unique: <span class="math-container">$a =x$</span>, but then <span class="math-container">$x \in A$</span> as required: <span class="math-container">$\overline{A} = A$</span>. </p> Answer 44: <p>In a metric space we have:</p> <p>$A$ is compact iff $A$ is sequentially compact.</p> <p>Use this property to show that $A$ is closed.</p> <p>To your proof: if for $x \in X \setminus A$ we define $ \delta(x) = \inf \ \{\ d(x, y) \ | \ y \in A\}$, then it holds that</p> <p>$(*)$ $\quad$ $A$ is closed iff $\delta(x)&gt;0$ for all $x \in X \setminus A$ !</p> <p>Try a proof of $(*)$.</p> Answer 45: <p>The fact that you're taking an open ball around x of radius epsilon and that ball doesn't intersect with A is fundamentally incorrect if x is a limit point of A. Thus, your assumption that delta, and effectively epsilon, is greater than 0 is wrong. </p> Answer 46: <p>Suppose <span class="math-container">$A$</span> is compact. Then let <span class="math-container">$x$</span> be in <span class="math-container">$X \setminus A$</span>. We must find a neighbourhood of <span class="math-container">$x$</span> that is disjoint from <span class="math-container">$A$</span> to show closedness of <span class="math-container">$A$</span>.</p> <p>For every <span class="math-container">$a \in A$</span> let <span class="math-container">$U_a = B(a, \frac{d(a,x)}{2})$</span> and <span class="math-container">$V_a =B(x, \frac{d(a,x)}{2})$</span>. 
Then <span class="math-container">$U_a$</span> and <span class="math-container">$V_a$</span> are disjoint open neighbourhoods of <span class="math-container">$a$</span> and <span class="math-container">$x$</span> respectively (disjoint follows from the triangle inequality, check this).</p> <p>Then the <span class="math-container">$U_a ,a \in A$</span> together form a cover of <span class="math-container">$A$</span>, and now we use compactness of <span class="math-container">$A$</span> to get <em>finitely many</em> neighborhoods <span class="math-container">$U_{a_1}, \ldots, U_{a_n}$</span>, that also cover <span class="math-container">$A$</span>. But then <span class="math-container">$V_{a_1} \cap \ldots V_{a_n}$</span> is an open neighbourhood of <span class="math-container">$x$</span> that misses <span class="math-container">$A$</span> (since a point in <span class="math-container">$A$</span> cannot be in all of <span class="math-container">$V_{a_1}, \ldots, V_{a_n}$</span> because it belongs to some <span class="math-container">$U_{a_i}$</span> which is disjoint with <span class="math-container">$V_{a_i}$</span>.).</p> <p>So <span class="math-container">$A$</span> is closed.</p> <p>An alternative with sequences: suppose <span class="math-container">$x \in \overline{A}$</span>. Then being a point in the closure we find a sequence in <span class="math-container">$A$</span>, <span class="math-container">$(a_n)$</span> that converges to <span class="math-container">$x$</span>. The sequence <span class="math-container">$(a_n)$</span> from <span class="math-container">$A$</span> has a convergent subsequence in <span class="math-container">$A$</span> by compactness of <span class="math-container">$A$</span>, so there is some <span class="math-container">$a \in A$</span> and some subsequence <span class="math-container">$a_{n_k} \rightarrow a$</span>. 
But also <span class="math-container">$a_{n_k} \rightarrow x$</span>, and as limits of sequences in metric spaces are unique: <span class="math-container">$a =x$</span>, but then <span class="math-container">$x \in A$</span> as required: <span class="math-container">$\overline{A} = A$</span>. </p> Answer 47: <p>In a metric space we have:</p> <p>$A$ is compact iff $A$ is sequentially compact.</p> <p>Use this property to show that $A$ is closed.</p> <p>To your proof: if for $x \in X \setminus A$ we define $ \delta(x) = \inf \ \{\ d(x, y) \ | \ y \in A\}$, then it holds that</p> <p>$(*)$ $\quad$ $A$ is closed iff $\delta(x)&gt;0$ for all $x \in X \setminus A$ !</p> <p>Try a proof of $(*)$.</p> Answer 48: <p>The fact that you're taking an open ball around x of radius epsilon and that ball doesn't intersect with A is fundamentally incorrect if x is a limit point of A. Thus, your assumption that delta, and effectively epsilon, is greater than 0 is wrong. </p> Answer 49: <p>Suppose <span class="math-container">$A$</span> is compact. Then let <span class="math-container">$x$</span> be in <span class="math-container">$X \setminus A$</span>. We must find a neighbourhood of <span class="math-container">$x$</span> that is disjoint from <span class="math-container">$A$</span> to show closedness of <span class="math-container">$A$</span>.</p> <p>For every <span class="math-container">$a \in A$</span> let <span class="math-container">$U_a = B(a, \frac{d(a,x)}{2})$</span> and <span class="math-container">$V_a =B(x, \frac{d(a,x)}{2})$</span>. 
Then <span class="math-container">$U_a$</span> and <span class="math-container">$V_a$</span> are disjoint open neighbourhoods of <span class="math-container">$a$</span> and <span class="math-container">$x$</span> respectively (disjoint follows from the triangle inequality, check this).</p> <p>Then the <span class="math-container">$U_a ,a \in A$</span> together form a cover of <span class="math-container">$A$</span>, and now we use compactness of <span class="math-container">$A$</span> to get <em>finitely many</em> neighborhoods <span class="math-container">$U_{a_1}, \ldots, U_{a_n}$</span>, that also cover <span class="math-container">$A$</span>. But then <span class="math-container">$V_{a_1} \cap \ldots V_{a_n}$</span> is an open neighbourhood of <span class="math-container">$x$</span> that misses <span class="math-container">$A$</span> (since a point in <span class="math-container">$A$</span> cannot be in all of <span class="math-container">$V_{a_1}, \ldots, V_{a_n}$</span> because it belongs to some <span class="math-container">$U_{a_i}$</span> which is disjoint with <span class="math-container">$V_{a_i}$</span>.).</p> <p>So <span class="math-container">$A$</span> is closed.</p> <p>An alternative with sequences: suppose <span class="math-container">$x \in \overline{A}$</span>. Then being a point in the closure we find a sequence in <span class="math-container">$A$</span>, <span class="math-container">$(a_n)$</span> that converges to <span class="math-container">$x$</span>. The sequence <span class="math-container">$(a_n)$</span> from <span class="math-container">$A$</span> has a convergent subsequence in <span class="math-container">$A$</span> by compactness of <span class="math-container">$A$</span>, so there is some <span class="math-container">$a \in A$</span> and some subsequence <span class="math-container">$a_{n_k} \rightarrow a$</span>. 
But also <span class="math-container">$a_{n_k} \rightarrow x$</span>, and as limits of sequences in metric spaces are unique: <span class="math-container">$a =x$</span>, but then <span class="math-container">$x \in A$</span> as required: <span class="math-container">$\overline{A} = A$</span>. </p> Answer 50: <p>In a metric space we have:</p> <p>$A$ is compact iff $A$ is sequentially compact.</p> <p>Use this property to show that $A$ is closed.</p> <p>To your proof: if for $x \in X \setminus A$ we define $ \delta(x) = \inf \ \{\ d(x, y) \ | \ y \in A\}$, then it holds that</p> <p>$(*)$ $\quad$ $A$ is closed iff $\delta(x)&gt;0$ for all $x \in X \setminus A$ !</p> <p>Try a proof of $(*)$.</p> Answer 51: <p>The fact that you're taking an open ball around x of radius epsilon and that ball doesn't intersect with A is fundamentally incorrect if x is a limit point of A. Thus, your assumption that delta, and effectively epsilon, is greater than 0 is wrong. </p>
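In symbols, the sequential-compactness argument above can be condensed as follows (my own summary, assuming <span class="math-container">$(X,d)$</span> is a metric space and <span class="math-container">$A \subseteq X$</span> is sequentially compact):

```latex
% Sequentially compact subsets of a metric space are closed (condensed summary).
% Assumptions: (X,d) a metric space, A \subseteq X sequentially compact.
\begin{align*}
x \in \overline{A}
  &\implies \exists\,(a_n) \text{ in } A \text{ with } a_n \to x
      && \text{($x$ is a closure point)}\\
  &\implies \exists\, a_{n_k} \to a \text{ for some } a \in A
      && \text{(sequential compactness)}\\
  &\implies a = x,\ \text{so } x \in A
      && \text{(uniqueness of limits in metric spaces)}
\end{align*}
```

Hence <span class="math-container">$\overline{A} \subseteq A$</span>, i.e. <span class="math-container">$A$</span> is closed.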
When does intersection commute with tensor product
2,112,594
9
https://math.stackexchange.com/questions/2112594/when-does-intersection-commute-with-tensor-product
<p>Given two submodules $U,V \subseteq M$ over a (commutative) ring $R$, and a flat $R$-module $A$, I can interpret $U \otimes_R A$ and $V \otimes_R A$ as submodules of $M \otimes_R A$. Is it necessarily true that $$(U \cap V) \otimes_R A \cong (U \otimes_R A) \cap (V \otimes_R A) ?$$</p> <p>I think it should be true in many cases, with intuition coming from $\mathbb{Z}$-modules and $A = \mathbb{Q}$, but I'm unsure about what happens in general.</p>
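As a concrete sanity check in the motivating case from the question (the specific submodules <span class="math-container">$U = 2\mathbb{Z}$</span>, <span class="math-container">$V = 3\mathbb{Z}$</span> are my own illustrative choice, not from the question): take <span class="math-container">$R = M = \mathbb{Z}$</span> and <span class="math-container">$A = \mathbb{Q}$</span>, which is flat over <span class="math-container">$\mathbb{Z}$</span>. Then

```latex
% Worked instance of (U \cap V) \otimes_R A = (U \otimes_R A) \cap (V \otimes_R A).
% Illustrative choices (assumed): R = M = Z, U = 2Z, V = 3Z, A = Q (flat over Z).
(U \cap V) \otimes_{\mathbb{Z}} \mathbb{Q}
  = 6\mathbb{Z} \otimes_{\mathbb{Z}} \mathbb{Q}
  \cong \mathbb{Q},
\qquad
(U \otimes_{\mathbb{Z}} \mathbb{Q}) \cap (V \otimes_{\mathbb{Z}} \mathbb{Q})
  = \mathbb{Q} \cap \mathbb{Q}
  = \mathbb{Q},
```

both computed inside <span class="math-container">$M \otimes_{\mathbb{Z}} \mathbb{Q} = \mathbb{Q}$</span>, so the two sides agree in this case, consistent with the affirmative answers for finite intersections.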
[ { "answer_id": 2299480, "score": 7, "text": "<p>(My original answer contained a significant error, which is what the comments below refer to.)</p>\n\n<p>Yes, it is true. However, it is not true if the collection of modules is infinite. User26857 gives an example in the comments.</p>\n\n<p><strong>Lemma.</strong> Let <span class=\"math-container\">$M_i$</span>, <span class=\"math-container\">$i \\in I$</span>, be a <em>finite</em> collection of modules. Then the natural map <span class=\"math-container\">$$\\phi : (\\prod M_i) \\otimes N \\to \\prod ( M_i \\otimes N)$$</span> is an isomorphism. (The natural map is the one induced by the universal property of <span class=\"math-container\">$\\prod (M_i \\otimes N)$</span>, and <span class=\"math-container\">$\\_ \\otimes N$</span> applied to the maps <span class=\"math-container\">$\\prod M_i \\to M_i$</span>.)</p>\n\n<p><em>Proof.</em> This is because finite products are finite coproducts, and tensor products commute with coproducts.</p>\n\n<p>If <span class=\"math-container\">$I$</span> is infinite, then this map is neither injective nor surjective in general.</p>\n\n<p>We can use user26857's example to produce an example when <span class=\"math-container\">$\\phi$</span> is not injective: <span class=\"math-container\">$(\\prod_{n \\geq 0} K[x]/x^n) \\otimes K(x) \\to \\prod_{n \\geq 0} (K[x]/x^n \\otimes K(x)) = 0$</span>. 
The LHS is not <span class=\"math-container\">$0$</span> because the map <span class=\"math-container\">$K[x] \\to \\prod_{n \\geq 0} K[x]/x^n$</span> is injective, so because <span class=\"math-container\">$K(x)$</span> is flat, <span class=\"math-container\">$K(x)$</span> embeds as a submodule of <span class=\"math-container\">$(\\prod_{n \\geq 0} K[x]/x^n) \\otimes K(x)$</span>.</p>\n\n<p>An example when <span class=\"math-container\">$\\phi$</span> is not surjective is <span class=\"math-container\">$(\\prod_{\\mathbb{N}} \\mathbb{Z}) \\otimes \\mathbb{Q} \\to \\prod_{\\mathbb{N}} \\mathbb{Q}$</span>. The element <span class=\"math-container\">$(1,1/2,1/4,1/8, \\ldots)$</span> is not in the image.</p>\n\n<p><strong>Proposition.</strong> Let <span class=\"math-container\">$I$</span> be finite again. Suppose <span class=\"math-container\">$M_i$</span> are submodules of <span class=\"math-container\">$M$</span>. We identify each <span class=\"math-container\">$M_i \\otimes N$</span> with a submodule of <span class=\"math-container\">$M \\otimes N$</span>, using that <span class=\"math-container\">$N$</span> is flat. We also identify <span class=\"math-container\">$(\\bigcap_{i \\in I} M_i) \\otimes N$</span> with a submodule of <span class=\"math-container\">$M \\otimes N$</span> in the same way. 
Then <span class=\"math-container\">$\\bigcap_{i \\in I} (M_i \\otimes N) = (\\bigcap_{i \\in I} M_i) \\otimes N$</span>, as submodules of <span class=\"math-container\">$M \\otimes N$</span>.</p>\n\n<p><em>Proof.</em> </p>\n\n<p>Consider the exact sequence:</p>\n\n<p><span class=\"math-container\">$$0 \\to \\bigcap_{i \\in I} M_i \\to M \\to \\prod_{i \\in I} M / M_i.$$</span> </p>\n\n<p>Using flatness of <span class=\"math-container\">$N$</span>, we get another exact sequence:</p>\n\n<p><span class=\"math-container\">$$0 \\to (\\bigcap_{i \\in I} M_i) \\otimes N \\to M \\otimes N \\to (\\prod_{i \\in I} M / M_i) \\otimes N\\qquad (**)$$</span></p>\n\n<p>We study the map <span class=\"math-container\">$f: M \\otimes N \\to (\\prod_{i \\in I} M / M_i) \\otimes N$</span> by composing it with <span class=\"math-container\">$\\phi: (\\prod_{i \\in I} M / M_i) \\otimes N \\to \\prod_{i \\in I} ((M / M_i) \\otimes N)$</span> from the lemma above.</p>\n\n<p>Claims: </p>\n\n<p>1) The composition <span class=\"math-container\">$\\phi \\circ f$</span> is the natural projection map. Hence the kernel of <span class=\"math-container\">$\\phi \\circ f$</span> is <span class=\"math-container\">$\\bigcap_{i \\in I} (M_i \\otimes N)$</span>. (Lazy justification -- all maps are induced canonically...)</p>\n\n<p>2) <span class=\"math-container\">$\\ker(\\phi \\circ f) = \\ker(f)$</span>. This is because <span class=\"math-container\">$\\phi$</span> is injective, as we saw in the lemma.</p>\n\n<p>Now it follows from exactness of the sequence <span class=\"math-container\">$(**)$</span> above that, as submodules, <span class=\"math-container\">$(\\bigcap_{i \\in I} M_i) \\otimes N$</span> and <span class=\"math-container\">$\\ker f$</span> agree. This proves the claim.</p>\n" }, { "answer_id": 2449611, "score": 6, "text": "<p>Finite intersections are pullbacks, so in particular are finite limits, and a module is flat iff tensoring with it commutes with all finite limits (exercise). 
</p>\n" }, { "answer_id": 2299480, "score": 7, "text": "<p>(My original answer contained a significant error, which is what the comments below refer to.)</p>\n\n<p>Yes, it is true. However, it is not true if the collection of modules is infinite. User26857 gives an example in the comments.</p>\n\n<p><strong>Lemma.</strong> Let <span class=\"math-container\">$M_i$</span>, <span class=\"math-container\">$i \\in I$</span>, be a <em>finite</em> collection of modules. Then the natural map <span class=\"math-container\">$$\\phi : (\\prod M_i) \\otimes N \\to \\prod ( M_i \\otimes N)$$</span> is an isomorphism. (The natural map is the one induced by the universal property of <span class=\"math-container\">$\\prod (M_i \\otimes N)$</span>, and <span class=\"math-container\">$\\_ \\otimes N$</span> applied to the maps <span class=\"math-container\">$\\prod M_i \\to M_i$</span>.)</p>\n\n<p><em>Proof.</em> This is because finite products are finite coproducts, and tensor products commute with coproducts.</p>\n\n<p>If <span class=\"math-container\">$I$</span> is infinite, then this map is neither injective nor surjective in general.</p>\n\n<p>We can use user26857's example to produce an example when <span class=\"math-container\">$\\phi$</span> is not injective: <span class=\"math-container\">$(\\prod_{n \\geq 0} K[x]/x^n) \\otimes K(x) \\to \\prod_{n \\geq 0} (K[x]/x^n \\otimes K(x)) = 0$</span>. 
The LHS is not <span class=\"math-container\">$0$</span> because the map <span class=\"math-container\">$K[x] \\to \\prod_{n \\geq 0} K[x]/x^n$</span> is injective, so because <span class=\"math-container\">$K(x)$</span> is flat, <span class=\"math-container\">$K(x)$</span> embeds as a submodule of <span class=\"math-container\">$(\\prod_{n \\geq 0} K[x]/x^n) \\otimes K(x)$</span>.</p>\n\n<p>An example when <span class=\"math-container\">$\\phi$</span> is not surjective is <span class=\"math-container\">$(\\prod_{\\mathbb{N}} \\mathbb{Z}) \\otimes \\mathbb{Q} \\to \\prod_{\\mathbb{N}} \\mathbb{Q}$</span>. The element <span class=\"math-container\">$(1,1/2,1/4,1/8, \\ldots)$</span> is not in the image.</p>\n\n<p><strong>Proposition.</strong> Lei <span class=\"math-container\">$I$</span> be finite again. Suppose <span class=\"math-container\">$M_i$</span> are submodules of <span class=\"math-container\">$M$</span>. We identify each <span class=\"math-container\">$M_i \\otimes N$</span> with a submodule of <span class=\"math-container\">$M \\otimes N$</span>, using that <span class=\"math-container\">$N$</span> is flat. We also identify <span class=\"math-container\">$(\\bigcap_{i \\in I} M_i) \\otimes N$</span> with a submodule of <span class=\"math-container\">$N$</span> in the same way. 
Then <span class=\"math-container\">$\\bigcap_{i \\in I} (M_i \\otimes N) = (\\bigcap_{i \\in I} M_i) \\otimes N$</span>, as submodules of <span class=\"math-container\">$M$</span>.</p>\n\n<p><em>Proof.</em> </p>\n\n<p>Consider the exact sequence:</p>\n\n<p><span class=\"math-container\">$$0 \\to \\bigcap_{i \\in I} M_i \\to M \\to \\prod_{i \\in I} M / M_i.$$</span> </p>\n\n<p>Using flatness of <span class=\"math-container\">$N$</span>, we get another exact sequence:</p>\n\n<p><span class=\"math-container\">$$0 \\to (\\bigcap_{i \\in I} M_i) \\otimes N \\to M \\otimes N \\to (\\prod_{i \\in I} M / M_i) \\otimes N\\qquad (**)$$</span></p>\n\n<p>We study the map <span class=\"math-container\">$f: M \\otimes N \\to (\\prod_{i \\in I} M / M_i) \\otimes N$</span> by composing it with <span class=\"math-container\">$\\phi: (\\prod_{i \\in I} M / M_i) \\otimes N \\to \\prod_{i \\in I} ((M / M_i) \\otimes N)$</span> from the lemma above.</p>\n\n<p>Claims: </p>\n\n<p>1) The composition <span class=\"math-container\">$\\phi \\circ f$</span> is the natural projection map. Hence the kernel of <span class=\"math-container\">$\\phi \\circ f$</span> is <span class=\"math-container\">$\\bigcap_{i \\in I} (M_i \\otimes N)$</span>. (Lazy justification -- all maps are induced canonically...)</p>\n\n<p>2) <span class=\"math-container\">$\\ker(\\phi \\circ f) = \\ker(f)$</span>. This is because <span class=\"math-container\">$\\phi$</span> is injective, as we saw in the lemma.</p>\n\n<p>Now it follows from exactness of the sequence <span class=\"math-container\">$(**)$</span> above that, as submodules, <span class=\"math-container\">$(\\bigcap_{i \\in I} M_i) \\otimes N$</span> and <span class=\"math-container\">$\\ker f$</span> agree. This proves the claim.</p>\n" }, { "answer_id": 2449611, "score": 6, "text": "<p>Finite intersections are pullbacks, so in particular are finite limits, and a module is flat iff tensoring with it commutes with all finite limits (exercise). 
</p>\n" }, { "answer_id": 2299480, "score": 7, "text": "<p>(My original answer contained a significant error, which is what the comments below refer to.)</p>\n\n<p>Yes, it is true. However, it is not true if the collection of modules is infinite. User26857 gives an example in the comments.</p>\n\n<p><strong>Lemma.</strong> Let <span class=\"math-container\">$M_i$</span> , <span class=\"math-container\">$i \\in I$</span>, be a <em>finite</em> collection of modules. Then the natural map <span class=\"math-container\">$$\\phi : (\\prod M_i) \\otimes N \\to \\prod ( M_i \\otimes N)$$</span> is an isomorphism. (The natural map is the one induced by the universal property of <span class=\"math-container\">$\\prod (M_i \\otimes N)$</span>, and <span class=\"math-container\">$\\_ \\otimes N$</span> applied to the maps <span class=\"math-container\">$\\prod M_i \\to M_i$</span>.)</p>\n\n<p><em>Proof.</em> This is because finite products are finite coproducts, and tensor products commute with coproducts.</p>\n\n<p>If <span class=\"math-container\">$I$</span> is infinite, then this map is neither injective nor surjective in general.</p>\n\n<p>We can use user26857s example to produce an example when <span class=\"math-container\">$\\phi$</span> is not injective: <span class=\"math-container\">$(\\prod_{n \\geq 0} K[x]/x^n) \\otimes K(x) \\to \\prod_{n \\geq 0} (K[x]/x^n \\otimes K(x)) = 0$</span>. 
The LHS is not <span class=\"math-container\">$0$</span> because the map <span class=\"math-container\">$K[x] \\to \\prod_{n \\geq 0} K[x]/x^n$</span> is injective, so because <span class=\"math-container\">$K(x)$</span> is flat, <span class=\"math-container\">$K(x)$</span> embeds as a submodule of <span class=\"math-container\">$(\\prod_{n \\geq 0} K[x]/x^n) \\otimes K(x)$</span>.</p>\n\n<p>An example when <span class=\"math-container\">$\\phi$</span> is not surjective is <span class=\"math-container\">$(\\prod_{\\mathbb{N}} \\mathbb{Z}) \\otimes \\mathbb{Q} \\to \\prod_{\\mathbb{N}} \\mathbb{Q}$</span>. The element <span class=\"math-container\">$(1,1/2,1/4,1/8, \\ldots)$</span> is not in the image.</p>\n\n<p><strong>Proposition.</strong> Lei <span class=\"math-container\">$I$</span> be finite again. Suppose <span class=\"math-container\">$M_i$</span> are submodules of <span class=\"math-container\">$M$</span>. We identify each <span class=\"math-container\">$M_i \\otimes N$</span> with a submodule of <span class=\"math-container\">$M \\otimes N$</span>, using that <span class=\"math-container\">$N$</span> is flat. We also identify <span class=\"math-container\">$(\\bigcap_{i \\in I} M_i) \\otimes N$</span> with a submodule of <span class=\"math-container\">$N$</span> in the same way. 
Then <span class=\"math-container\">$\\bigcap_{i \\in I} (M_i \\otimes N) = (\\bigcap_{i \\in I} M_i) \\otimes N$</span>, as submodules of <span class=\"math-container\">$M$</span>.</p>\n\n<p><em>Proof.</em> </p>\n\n<p>Consider the exact sequence:</p>\n\n<p><span class=\"math-container\">$$0 \\to \\bigcap_{i \\in I} M_i \\to M \\to \\prod_{i \\in I} M / M_i.$$</span> </p>\n\n<p>Using flatness of <span class=\"math-container\">$N$</span>, we get another exact sequence:</p>\n\n<p><span class=\"math-container\">$$0 \\to (\\bigcap_{i \\in I} M_i) \\otimes N \\to M \\otimes N \\to (\\prod_{i \\in I} M / M_i) \\otimes N\\qquad (**)$$</span></p>\n\n<p>We study the map <span class=\"math-container\">$f: M \\otimes N \\to (\\prod_{i \\in I} M / M_i) \\otimes N$</span> by composing it with <span class=\"math-container\">$\\phi: (\\prod_{i \\in I} M / M_i) \\otimes N \\to \\prod_{i \\in I} ((M / M_i) \\otimes N)$</span> from the lemma above.</p>\n\n<p>Claims: </p>\n\n<p>1) The composition <span class=\"math-container\">$\\phi \\circ f$</span> is the natural projection map. Hence the kernel of <span class=\"math-container\">$\\phi \\circ f$</span> is <span class=\"math-container\">$\\bigcap_{i \\in I} (M_i \\otimes N)$</span>. (Lazy justification -- all maps are induced canonically...)</p>\n\n<p>2) <span class=\"math-container\">$\\ker(\\phi \\circ f) = \\ker(f)$</span>. This is because <span class=\"math-container\">$\\phi$</span> is injective, as we saw in the lemma.</p>\n\n<p>Now it follows from exactness of the sequence <span class=\"math-container\">$(**)$</span> above that, as submodules, <span class=\"math-container\">$(\\bigcap_{i \\in I} M_i) \\otimes N$</span> and <span class=\"math-container\">$\\ker f$</span> agree. This proves the claim.</p>\n" }, { "answer_id": 2449611, "score": 6, "text": "<p>Finite intersections are pullbacks, so in particular are finite limits, and a module is flat iff tensoring with it commutes with all finite limits (exercise). 
</p>\n" }, { "answer_id": 2299480, "score": 7, "text": "<p>(My original answer contained a significant error, which is what the comments below refer to.)</p>\n\n<p>Yes, it is true. However, it is not true if the collection of modules is infinite. User26857 gives an example in the comments.</p>\n\n<p><strong>Lemma.</strong> Let <span class=\"math-container\">$M_i$</span> , <span class=\"math-container\">$i \\in I$</span>, be a <em>finite</em> collection of modules. Then the natural map <span class=\"math-container\">$$\\phi : (\\prod M_i) \\otimes N \\to \\prod ( M_i \\otimes N)$$</span> is an isomorphism. (The natural map is the one induced by the universal property of <span class=\"math-container\">$\\prod (M_i \\otimes N)$</span>, and <span class=\"math-container\">$\\_ \\otimes N$</span> applied to the maps <span class=\"math-container\">$\\prod M_i \\to M_i$</span>.)</p>\n\n<p><em>Proof.</em> This is because finite products are finite coproducts, and tensor products commute with coproducts.</p>\n\n<p>If <span class=\"math-container\">$I$</span> is infinite, then this map is neither injective nor surjective in general.</p>\n\n<p>We can use user26857s example to produce an example when <span class=\"math-container\">$\\phi$</span> is not injective: <span class=\"math-container\">$(\\prod_{n \\geq 0} K[x]/x^n) \\otimes K(x) \\to \\prod_{n \\geq 0} (K[x]/x^n \\otimes K(x)) = 0$</span>. 
The LHS is not <span class=\"math-container\">$0$</span> because the map <span class=\"math-container\">$K[x] \\to \\prod_{n \\geq 0} K[x]/x^n$</span> is injective, so because <span class=\"math-container\">$K(x)$</span> is flat, <span class=\"math-container\">$K(x)$</span> embeds as a submodule of <span class=\"math-container\">$(\\prod_{n \\geq 0} K[x]/x^n) \\otimes K(x)$</span>.</p>\n\n<p>An example when <span class=\"math-container\">$\\phi$</span> is not surjective is <span class=\"math-container\">$(\\prod_{\\mathbb{N}} \\mathbb{Z}) \\otimes \\mathbb{Q} \\to \\prod_{\\mathbb{N}} \\mathbb{Q}$</span>. The element <span class=\"math-container\">$(1,1/2,1/4,1/8, \\ldots)$</span> is not in the image.</p>\n\n<p><strong>Proposition.</strong> Lei <span class=\"math-container\">$I$</span> be finite again. Suppose <span class=\"math-container\">$M_i$</span> are submodules of <span class=\"math-container\">$M$</span>. We identify each <span class=\"math-container\">$M_i \\otimes N$</span> with a submodule of <span class=\"math-container\">$M \\otimes N$</span>, using that <span class=\"math-container\">$N$</span> is flat. We also identify <span class=\"math-container\">$(\\bigcap_{i \\in I} M_i) \\otimes N$</span> with a submodule of <span class=\"math-container\">$N$</span> in the same way. 
Then <span class=\"math-container\">$\\bigcap_{i \\in I} (M_i \\otimes N) = (\\bigcap_{i \\in I} M_i) \\otimes N$</span>, as submodules of <span class=\"math-container\">$M$</span>.</p>\n\n<p><em>Proof.</em> </p>\n\n<p>Consider the exact sequence:</p>\n\n<p><span class=\"math-container\">$$0 \\to \\bigcap_{i \\in I} M_i \\to M \\to \\prod_{i \\in I} M / M_i.$$</span> </p>\n\n<p>Using flatness of <span class=\"math-container\">$N$</span>, we get another exact sequence:</p>\n\n<p><span class=\"math-container\">$$0 \\to (\\bigcap_{i \\in I} M_i) \\otimes N \\to M \\otimes N \\to (\\prod_{i \\in I} M / M_i) \\otimes N\\qquad (**)$$</span></p>\n\n<p>We study the map <span class=\"math-container\">$f: M \\otimes N \\to (\\prod_{i \\in I} M / M_i) \\otimes N$</span> by composing it with <span class=\"math-container\">$\\phi: (\\prod_{i \\in I} M / M_i) \\otimes N \\to \\prod_{i \\in I} ((M / M_i) \\otimes N)$</span> from the lemma above.</p>\n\n<p>Claims: </p>\n\n<p>1) The composition <span class=\"math-container\">$\\phi \\circ f$</span> is the natural projection map. Hence the kernel of <span class=\"math-container\">$\\phi \\circ f$</span> is <span class=\"math-container\">$\\bigcap_{i \\in I} (M_i \\otimes N)$</span>. (Lazy justification -- all maps are induced canonically...)</p>\n\n<p>2) <span class=\"math-container\">$\\ker(\\phi \\circ f) = \\ker(f)$</span>. This is because <span class=\"math-container\">$\\phi$</span> is injective, as we saw in the lemma.</p>\n\n<p>Now it follows from exactness of the sequence <span class=\"math-container\">$(**)$</span> above that, as submodules, <span class=\"math-container\">$(\\bigcap_{i \\in I} M_i) \\otimes N$</span> and <span class=\"math-container\">$\\ker f$</span> agree. This proves the claim.</p>\n" }, { "answer_id": 2449611, "score": 6, "text": "<p>Finite intersections are pullbacks, so in particular are finite limits, and a module is flat iff tensoring with it commutes with all finite limits (exercise). 
</p>\n" }, { "answer_id": 2299480, "score": 7, "text": "<p>(My original answer contained a significant error, which is what the comments below refer to.)</p>\n\n<p>Yes, it is true. However, it is not true if the collection of modules is infinite. User26857 gives an example in the comments.</p>\n\n<p><strong>Lemma.</strong> Let <span class=\"math-container\">$M_i$</span> , <span class=\"math-container\">$i \\in I$</span>, be a <em>finite</em> collection of modules. Then the natural map <span class=\"math-container\">$$\\phi : (\\prod M_i) \\otimes N \\to \\prod ( M_i \\otimes N)$$</span> is an isomorphism. (The natural map is the one induced by the universal property of <span class=\"math-container\">$\\prod (M_i \\otimes N)$</span>, and <span class=\"math-container\">$\\_ \\otimes N$</span> applied to the maps <span class=\"math-container\">$\\prod M_i \\to M_i$</span>.)</p>\n\n<p><em>Proof.</em> This is because finite products are finite coproducts, and tensor products commute with coproducts.</p>\n\n<p>If <span class=\"math-container\">$I$</span> is infinite, then this map is neither injective nor surjective in general.</p>\n\n<p>We can use user26857s example to produce an example when <span class=\"math-container\">$\\phi$</span> is not injective: <span class=\"math-container\">$(\\prod_{n \\geq 0} K[x]/x^n) \\otimes K(x) \\to \\prod_{n \\geq 0} (K[x]/x^n \\otimes K(x)) = 0$</span>. 
The LHS is not <span class=\"math-container\">$0$</span> because the map <span class=\"math-container\">$K[x] \\to \\prod_{n \\geq 0} K[x]/x^n$</span> is injective, so because <span class=\"math-container\">$K(x)$</span> is flat, <span class=\"math-container\">$K(x)$</span> embeds as a submodule of <span class=\"math-container\">$(\\prod_{n \\geq 0} K[x]/x^n) \\otimes K(x)$</span>.</p>\n\n<p>An example when <span class=\"math-container\">$\\phi$</span> is not surjective is <span class=\"math-container\">$(\\prod_{\\mathbb{N}} \\mathbb{Z}) \\otimes \\mathbb{Q} \\to \\prod_{\\mathbb{N}} \\mathbb{Q}$</span>. The element <span class=\"math-container\">$(1,1/2,1/4,1/8, \\ldots)$</span> is not in the image.</p>\n\n<p><strong>Proposition.</strong> Lei <span class=\"math-container\">$I$</span> be finite again. Suppose <span class=\"math-container\">$M_i$</span> are submodules of <span class=\"math-container\">$M$</span>. We identify each <span class=\"math-container\">$M_i \\otimes N$</span> with a submodule of <span class=\"math-container\">$M \\otimes N$</span>, using that <span class=\"math-container\">$N$</span> is flat. We also identify <span class=\"math-container\">$(\\bigcap_{i \\in I} M_i) \\otimes N$</span> with a submodule of <span class=\"math-container\">$N$</span> in the same way. 
Then <span class=\"math-container\">$\\bigcap_{i \\in I} (M_i \\otimes N) = (\\bigcap_{i \\in I} M_i) \\otimes N$</span>, as submodules of <span class=\"math-container\">$M$</span>.</p>\n\n<p><em>Proof.</em> </p>\n\n<p>Consider the exact sequence:</p>\n\n<p><span class=\"math-container\">$$0 \\to \\bigcap_{i \\in I} M_i \\to M \\to \\prod_{i \\in I} M / M_i.$$</span> </p>\n\n<p>Using flatness of <span class=\"math-container\">$N$</span>, we get another exact sequence:</p>\n\n<p><span class=\"math-container\">$$0 \\to (\\bigcap_{i \\in I} M_i) \\otimes N \\to M \\otimes N \\to (\\prod_{i \\in I} M / M_i) \\otimes N\\qquad (**)$$</span></p>\n\n<p>We study the map <span class=\"math-container\">$f: M \\otimes N \\to (\\prod_{i \\in I} M / M_i) \\otimes N$</span> by composing it with <span class=\"math-container\">$\\phi: (\\prod_{i \\in I} M / M_i) \\otimes N \\to \\prod_{i \\in I} ((M / M_i) \\otimes N)$</span> from the lemma above.</p>\n\n<p>Claims: </p>\n\n<p>1) The composition <span class=\"math-container\">$\\phi \\circ f$</span> is the natural projection map. Hence the kernel of <span class=\"math-container\">$\\phi \\circ f$</span> is <span class=\"math-container\">$\\bigcap_{i \\in I} (M_i \\otimes N)$</span>. (Lazy justification -- all maps are induced canonically...)</p>\n\n<p>2) <span class=\"math-container\">$\\ker(\\phi \\circ f) = \\ker(f)$</span>. This is because <span class=\"math-container\">$\\phi$</span> is injective, as we saw in the lemma.</p>\n\n<p>Now it follows from exactness of the sequence <span class=\"math-container\">$(**)$</span> above that, as submodules, <span class=\"math-container\">$(\\bigcap_{i \\in I} M_i) \\otimes N$</span> and <span class=\"math-container\">$\\ker f$</span> agree. This proves the claim.</p>\n" }, { "answer_id": 2449611, "score": 6, "text": "<p>Finite intersections are pullbacks, so in particular are finite limits, and a module is flat iff tensoring with it commutes with all finite limits (exercise). 
</p>\n" }, { "answer_id": 2299480, "score": 7, "text": "<p>(My original answer contained a significant error, which is what the comments below refer to.)</p>\n\n<p>Yes, it is true. However, it is not true if the collection of modules is infinite. User26857 gives an example in the comments.</p>\n\n<p><strong>Lemma.</strong> Let <span class=\"math-container\">$M_i$</span> , <span class=\"math-container\">$i \\in I$</span>, be a <em>finite</em> collection of modules. Then the natural map <span class=\"math-container\">$$\\phi : (\\prod M_i) \\otimes N \\to \\prod ( M_i \\otimes N)$$</span> is an isomorphism. (The natural map is the one induced by the universal property of <span class=\"math-container\">$\\prod (M_i \\otimes N)$</span>, and <span class=\"math-container\">$\\_ \\otimes N$</span> applied to the maps <span class=\"math-container\">$\\prod M_i \\to M_i$</span>.)</p>\n\n<p><em>Proof.</em> This is because finite products are finite coproducts, and tensor products commute with coproducts.</p>\n\n<p>If <span class=\"math-container\">$I$</span> is infinite, then this map is neither injective nor surjective in general.</p>\n\n<p>We can use user26857s example to produce an example when <span class=\"math-container\">$\\phi$</span> is not injective: <span class=\"math-container\">$(\\prod_{n \\geq 0} K[x]/x^n) \\otimes K(x) \\to \\prod_{n \\geq 0} (K[x]/x^n \\otimes K(x)) = 0$</span>. 
The LHS is not <span class=\"math-container\">$0$</span> because the map <span class=\"math-container\">$K[x] \\to \\prod_{n \\geq 0} K[x]/x^n$</span> is injective, so because <span class=\"math-container\">$K(x)$</span> is flat, <span class=\"math-container\">$K(x)$</span> embeds as a submodule of <span class=\"math-container\">$(\\prod_{n \\geq 0} K[x]/x^n) \\otimes K(x)$</span>.</p>\n\n<p>An example when <span class=\"math-container\">$\\phi$</span> is not surjective is <span class=\"math-container\">$(\\prod_{\\mathbb{N}} \\mathbb{Z}) \\otimes \\mathbb{Q} \\to \\prod_{\\mathbb{N}} \\mathbb{Q}$</span>. The element <span class=\"math-container\">$(1,1/2,1/4,1/8, \\ldots)$</span> is not in the image.</p>\n\n<p><strong>Proposition.</strong> Lei <span class=\"math-container\">$I$</span> be finite again. Suppose <span class=\"math-container\">$M_i$</span> are submodules of <span class=\"math-container\">$M$</span>. We identify each <span class=\"math-container\">$M_i \\otimes N$</span> with a submodule of <span class=\"math-container\">$M \\otimes N$</span>, using that <span class=\"math-container\">$N$</span> is flat. We also identify <span class=\"math-container\">$(\\bigcap_{i \\in I} M_i) \\otimes N$</span> with a submodule of <span class=\"math-container\">$N$</span> in the same way. 
Then <span class=\"math-container\">$\\bigcap_{i \\in I} (M_i \\otimes N) = (\\bigcap_{i \\in I} M_i) \\otimes N$</span>, as submodules of <span class=\"math-container\">$M$</span>.</p>\n\n<p><em>Proof.</em> </p>\n\n<p>Consider the exact sequence:</p>\n\n<p><span class=\"math-container\">$$0 \\to \\bigcap_{i \\in I} M_i \\to M \\to \\prod_{i \\in I} M / M_i.$$</span> </p>\n\n<p>Using flatness of <span class=\"math-container\">$N$</span>, we get another exact sequence:</p>\n\n<p><span class=\"math-container\">$$0 \\to (\\bigcap_{i \\in I} M_i) \\otimes N \\to M \\otimes N \\to (\\prod_{i \\in I} M / M_i) \\otimes N\\qquad (**)$$</span></p>\n\n<p>We study the map <span class=\"math-container\">$f: M \\otimes N \\to (\\prod_{i \\in I} M / M_i) \\otimes N$</span> by composing it with <span class=\"math-container\">$\\phi: (\\prod_{i \\in I} M / M_i) \\otimes N \\to \\prod_{i \\in I} ((M / M_i) \\otimes N)$</span> from the lemma above.</p>\n\n<p>Claims: </p>\n\n<p>1) The composition <span class=\"math-container\">$\\phi \\circ f$</span> is the natural projection map. Hence the kernel of <span class=\"math-container\">$\\phi \\circ f$</span> is <span class=\"math-container\">$\\bigcap_{i \\in I} (M_i \\otimes N)$</span>. (Lazy justification -- all maps are induced canonically...)</p>\n\n<p>2) <span class=\"math-container\">$\\ker(\\phi \\circ f) = \\ker(f)$</span>. This is because <span class=\"math-container\">$\\phi$</span> is injective, as we saw in the lemma.</p>\n\n<p>Now it follows from exactness of the sequence <span class=\"math-container\">$(**)$</span> above that, as submodules, <span class=\"math-container\">$(\\bigcap_{i \\in I} M_i) \\otimes N$</span> and <span class=\"math-container\">$\\ker f$</span> agree. This proves the claim.</p>\n" }, { "answer_id": 2449611, "score": 6, "text": "<p>Finite intersections are pullbacks, so in particular are finite limits, and a module is flat iff tensoring with it commutes with all finite limits (exercise). 
</p>\n" }, { "answer_id": 2299480, "score": 7, "text": "<p>(My original answer contained a significant error, which is what the comments below refer to.)</p>\n\n<p>Yes, it is true. However, it is not true if the collection of modules is infinite. User26857 gives an example in the comments.</p>\n\n<p><strong>Lemma.</strong> Let <span class=\"math-container\">$M_i$</span> , <span class=\"math-container\">$i \\in I$</span>, be a <em>finite</em> collection of modules. Then the natural map <span class=\"math-container\">$$\\phi : (\\prod M_i) \\otimes N \\to \\prod ( M_i \\otimes N)$$</span> is an isomorphism. (The natural map is the one induced by the universal property of <span class=\"math-container\">$\\prod (M_i \\otimes N)$</span>, and <span class=\"math-container\">$\\_ \\otimes N$</span> applied to the maps <span class=\"math-container\">$\\prod M_i \\to M_i$</span>.)</p>\n\n<p><em>Proof.</em> This is because finite products are finite coproducts, and tensor products commute with coproducts.</p>\n\n<p>If <span class=\"math-container\">$I$</span> is infinite, then this map is neither injective nor surjective in general.</p>\n\n<p>We can use user26857s example to produce an example when <span class=\"math-container\">$\\phi$</span> is not injective: <span class=\"math-container\">$(\\prod_{n \\geq 0} K[x]/x^n) \\otimes K(x) \\to \\prod_{n \\geq 0} (K[x]/x^n \\otimes K(x)) = 0$</span>. 
The LHS is not <span class=\"math-container\">$0$</span> because the map <span class=\"math-container\">$K[x] \\to \\prod_{n \\geq 0} K[x]/x^n$</span> is injective, so because <span class=\"math-container\">$K(x)$</span> is flat, <span class=\"math-container\">$K(x)$</span> embeds as a submodule of <span class=\"math-container\">$(\\prod_{n \\geq 0} K[x]/x^n) \\otimes K(x)$</span>.</p>\n\n<p>An example when <span class=\"math-container\">$\\phi$</span> is not surjective is <span class=\"math-container\">$(\\prod_{\\mathbb{N}} \\mathbb{Z}) \\otimes \\mathbb{Q} \\to \\prod_{\\mathbb{N}} \\mathbb{Q}$</span>. The element <span class=\"math-container\">$(1,1/2,1/4,1/8, \\ldots)$</span> is not in the image.</p>\n\n<p><strong>Proposition.</strong> Lei <span class=\"math-container\">$I$</span> be finite again. Suppose <span class=\"math-container\">$M_i$</span> are submodules of <span class=\"math-container\">$M$</span>. We identify each <span class=\"math-container\">$M_i \\otimes N$</span> with a submodule of <span class=\"math-container\">$M \\otimes N$</span>, using that <span class=\"math-container\">$N$</span> is flat. We also identify <span class=\"math-container\">$(\\bigcap_{i \\in I} M_i) \\otimes N$</span> with a submodule of <span class=\"math-container\">$N$</span> in the same way. 
Then <span class=\"math-container\">$\\bigcap_{i \\in I} (M_i \\otimes N) = (\\bigcap_{i \\in I} M_i) \\otimes N$</span>, as submodules of <span class=\"math-container\">$M$</span>.</p>\n\n<p><em>Proof.</em> </p>\n\n<p>Consider the exact sequence:</p>\n\n<p><span class=\"math-container\">$$0 \\to \\bigcap_{i \\in I} M_i \\to M \\to \\prod_{i \\in I} M / M_i.$$</span> </p>\n\n<p>Using flatness of <span class=\"math-container\">$N$</span>, we get another exact sequence:</p>\n\n<p><span class=\"math-container\">$$0 \\to (\\bigcap_{i \\in I} M_i) \\otimes N \\to M \\otimes N \\to (\\prod_{i \\in I} M / M_i) \\otimes N\\qquad (**)$$</span></p>\n\n<p>We study the map <span class=\"math-container\">$f: M \\otimes N \\to (\\prod_{i \\in I} M / M_i) \\otimes N$</span> by composing it with <span class=\"math-container\">$\\phi: (\\prod_{i \\in I} M / M_i) \\otimes N \\to \\prod_{i \\in I} ((M / M_i) \\otimes N)$</span> from the lemma above.</p>\n\n<p>Claims: </p>\n\n<p>1) The composition <span class=\"math-container\">$\\phi \\circ f$</span> is the natural projection map. Hence the kernel of <span class=\"math-container\">$\\phi \\circ f$</span> is <span class=\"math-container\">$\\bigcap_{i \\in I} (M_i \\otimes N)$</span>. (Lazy justification -- all maps are induced canonically...)</p>\n\n<p>2) <span class=\"math-container\">$\\ker(\\phi \\circ f) = \\ker(f)$</span>. This is because <span class=\"math-container\">$\\phi$</span> is injective, as we saw in the lemma.</p>\n\n<p>Now it follows from exactness of the sequence <span class=\"math-container\">$(**)$</span> above that, as submodules, <span class=\"math-container\">$(\\bigcap_{i \\in I} M_i) \\otimes N$</span> and <span class=\"math-container\">$\\ker f$</span> agree. This proves the claim.</p>\n" }, { "answer_id": 2449611, "score": 6, "text": "<p>Finite intersections are pullbacks, so in particular are finite limits, and a module is flat iff tensoring with it commutes with all finite limits (exercise). 
</p>\n" }, { "answer_id": 2299480, "score": 7, "text": "<p>(My original answer contained a significant error, which is what the comments below refer to.)</p>\n\n<p>Yes, it is true. However, it is not true if the collection of modules is infinite. User26857 gives an example in the comments.</p>\n\n<p><strong>Lemma.</strong> Let <span class=\"math-container\">$M_i$</span> , <span class=\"math-container\">$i \\in I$</span>, be a <em>finite</em> collection of modules. Then the natural map <span class=\"math-container\">$$\\phi : (\\prod M_i) \\otimes N \\to \\prod ( M_i \\otimes N)$$</span> is an isomorphism. (The natural map is the one induced by the universal property of <span class=\"math-container\">$\\prod (M_i \\otimes N)$</span>, and <span class=\"math-container\">$\\_ \\otimes N$</span> applied to the maps <span class=\"math-container\">$\\prod M_i \\to M_i$</span>.)</p>\n\n<p><em>Proof.</em> This is because finite products are finite coproducts, and tensor products commute with coproducts.</p>\n\n<p>If <span class=\"math-container\">$I$</span> is infinite, then this map is neither injective nor surjective in general.</p>\n\n<p>We can use user26857s example to produce an example when <span class=\"math-container\">$\\phi$</span> is not injective: <span class=\"math-container\">$(\\prod_{n \\geq 0} K[x]/x^n) \\otimes K(x) \\to \\prod_{n \\geq 0} (K[x]/x^n \\otimes K(x)) = 0$</span>. 
The LHS is not <span class=\"math-container\">$0$</span> because the map <span class=\"math-container\">$K[x] \\to \\prod_{n \\geq 0} K[x]/x^n$</span> is injective, so because <span class=\"math-container\">$K(x)$</span> is flat, <span class=\"math-container\">$K(x)$</span> embeds as a submodule of <span class=\"math-container\">$(\\prod_{n \\geq 0} K[x]/x^n) \\otimes K(x)$</span>.</p>\n\n<p>An example when <span class=\"math-container\">$\\phi$</span> is not surjective is <span class=\"math-container\">$(\\prod_{\\mathbb{N}} \\mathbb{Z}) \\otimes \\mathbb{Q} \\to \\prod_{\\mathbb{N}} \\mathbb{Q}$</span>. The element <span class=\"math-container\">$(1,1/2,1/4,1/8, \\ldots)$</span> is not in the image.</p>\n\n<p><strong>Proposition.</strong> Lei <span class=\"math-container\">$I$</span> be finite again. Suppose <span class=\"math-container\">$M_i$</span> are submodules of <span class=\"math-container\">$M$</span>. We identify each <span class=\"math-container\">$M_i \\otimes N$</span> with a submodule of <span class=\"math-container\">$M \\otimes N$</span>, using that <span class=\"math-container\">$N$</span> is flat. We also identify <span class=\"math-container\">$(\\bigcap_{i \\in I} M_i) \\otimes N$</span> with a submodule of <span class=\"math-container\">$N$</span> in the same way. 
Then <span class=\"math-container\">$\\bigcap_{i \\in I} (M_i \\otimes N) = (\\bigcap_{i \\in I} M_i) \\otimes N$</span>, as submodules of <span class=\"math-container\">$M$</span>.</p>\n\n<p><em>Proof.</em> </p>\n\n<p>Consider the exact sequence:</p>\n\n<p><span class=\"math-container\">$$0 \\to \\bigcap_{i \\in I} M_i \\to M \\to \\prod_{i \\in I} M / M_i.$$</span> </p>\n\n<p>Using flatness of <span class=\"math-container\">$N$</span>, we get another exact sequence:</p>\n\n<p><span class=\"math-container\">$$0 \\to (\\bigcap_{i \\in I} M_i) \\otimes N \\to M \\otimes N \\to (\\prod_{i \\in I} M / M_i) \\otimes N\\qquad (**)$$</span></p>\n\n<p>We study the map <span class=\"math-container\">$f: M \\otimes N \\to (\\prod_{i \\in I} M / M_i) \\otimes N$</span> by composing it with <span class=\"math-container\">$\\phi: (\\prod_{i \\in I} M / M_i) \\otimes N \\to \\prod_{i \\in I} ((M / M_i) \\otimes N)$</span> from the lemma above.</p>\n\n<p>Claims: </p>\n\n<p>1) The composition <span class=\"math-container\">$\\phi \\circ f$</span> is the natural projection map. Hence the kernel of <span class=\"math-container\">$\\phi \\circ f$</span> is <span class=\"math-container\">$\\bigcap_{i \\in I} (M_i \\otimes N)$</span>. (Lazy justification -- all maps are induced canonically...)</p>\n\n<p>2) <span class=\"math-container\">$\\ker(\\phi \\circ f) = \\ker(f)$</span>. This is because <span class=\"math-container\">$\\phi$</span> is injective, as we saw in the lemma.</p>\n\n<p>Now it follows from exactness of the sequence <span class=\"math-container\">$(**)$</span> above that, as submodules, <span class=\"math-container\">$(\\bigcap_{i \\in I} M_i) \\otimes N$</span> and <span class=\"math-container\">$\\ker f$</span> agree. This proves the claim.</p>\n" }, { "answer_id": 2449611, "score": 6, "text": "<p>Finite intersections are pullbacks, so in particular are finite limits, and a module is flat iff tensoring with it commutes with all finite limits (exercise). 
</p>\n" } ]
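Editorial note: the injectivity counterexample above uses, without proof, that each factor $K[x]/x^n \otimes_{K[x]} K(x)$ vanishes. A short worked derivation (a sketch; the tensor product is taken over $K[x]$):

```latex
% x is a unit of K(x) but acts nilpotently on K[x]/x^n, so every
% simple tensor in K[x]/x^n \otimes_{K[x]} K(x) collapses:
a \otimes q
  = a \otimes x^{n}\,(x^{-n}q)
  = (x^{n}a) \otimes x^{-n}q
  = 0 \otimes x^{-n}q
  = 0 .
```

Since simple tensors span, $K[x]/x^n \otimes_{K[x]} K(x) = 0$ for every $n$, which is why the right-hand side $\prod_{n \geq 0} (K[x]/x^n \otimes K(x))$ is $0$.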
Question: When does intersection commute with tensor product Answer 1: <p>(My original answer contained a significant error, which is what the comments below refer to.)</p> <p>Yes, it is true. However, it is not true if the collection of modules is infinite. User26857 gives an example in the comments.</p> <p><strong>Lemma.</strong> Let <span class="math-container">$M_i$</span>, <span class="math-container">$i \in I$</span>, be a <em>finite</em> collection of modules. Then the natural map <span class="math-container">$$\phi : (\prod M_i) \otimes N \to \prod ( M_i \otimes N)$$</span> is an isomorphism. (The natural map is the one induced by the universal property of <span class="math-container">$\prod (M_i \otimes N)$</span>, and <span class="math-container">$\_ \otimes N$</span> applied to the maps <span class="math-container">$\prod M_i \to M_i$</span>.)</p> <p><em>Proof.</em> This is because finite products are finite coproducts, and tensor products commute with coproducts.</p> <p>If <span class="math-container">$I$</span> is infinite, then this map is neither injective nor surjective in general.</p> <p>We can use user26857's example to produce an example when <span class="math-container">$\phi$</span> is not injective: <span class="math-container">$(\prod_{n \geq 0} K[x]/x^n) \otimes K(x) \to \prod_{n \geq 0} (K[x]/x^n \otimes K(x)) = 0$</span>. 
The element <span class="math-container">$(1,1/2,1/4,1/8, \ldots)$</span> is not in the image.</p> <p><strong>Proposition.</strong> Let <span class="math-container">$I$</span> be finite again. Suppose <span class="math-container">$M_i$</span> are submodules of <span class="math-container">$M$</span>. We identify each <span class="math-container">$M_i \otimes N$</span> with a submodule of <span class="math-container">$M \otimes N$</span>, using that <span class="math-container">$N$</span> is flat. We also identify <span class="math-container">$(\bigcap_{i \in I} M_i) \otimes N$</span> with a submodule of <span class="math-container">$M \otimes N$</span> in the same way. Then <span class="math-container">$\bigcap_{i \in I} (M_i \otimes N) = (\bigcap_{i \in I} M_i) \otimes N$</span>, as submodules of <span class="math-container">$M \otimes N$</span>.</p> <p><em>Proof.</em> </p> <p>Consider the exact sequence:</p> <p><span class="math-container">$$0 \to \bigcap_{i \in I} M_i \to M \to \prod_{i \in I} M / M_i.$$</span> </p> <p>Using flatness of <span class="math-container">$N$</span>, we get another exact sequence:</p> <p><span class="math-container">$$0 \to (\bigcap_{i \in I} M_i) \otimes N \to M \otimes N \to (\prod_{i \in I} M / M_i) \otimes N\qquad (**)$$</span></p> <p>We study the map <span class="math-container">$f: M \otimes N \to (\prod_{i \in I} M / M_i) \otimes N$</span> by composing it with <span class="math-container">$\phi: (\prod_{i \in I} M / M_i) \otimes N \to \prod_{i \in I} ((M / M_i) \otimes N)$</span> from the lemma above.</p> <p>Claims: </p> <p>1) The composition <span class="math-container">$\phi \circ f$</span> is the natural projection map. Hence the kernel of <span class="math-container">$\phi \circ f$</span> is <span class="math-container">$\bigcap_{i \in I} (M_i \otimes N)$</span>. (Lazy justification -- all maps are induced canonically...)</p> <p>2) <span class="math-container">$\ker(\phi \circ f) = \ker(f)$</span>. 
This is because <span class="math-container">$\phi$</span> is injective, as we saw in the lemma.</p> <p>Now it follows from exactness of the sequence <span class="math-container">$(**)$</span> above that, as submodules, <span class="math-container">$(\bigcap_{i \in I} M_i) \otimes N$</span> and <span class="math-container">$\ker f$</span> agree. This proves the claim.</p> Answer 2: <p>Finite intersections are pullbacks, so in particular are finite limits, and a module is flat iff tensoring with it commutes with all finite limits (exercise). </p> Answer 3: <p>(My original answer contained a significant error, which is what the comments below refer to.)</p> <p>Yes, it is true. However, it is not true if the collection of modules is infinite. User26857 gives an example in the comments.</p> <p><strong>Lemma.</strong> Let <span class="math-container">$M_i$</span> , <span class="math-container">$i \in I$</span>, be a <em>finite</em> collection of modules. Then the natural map <span class="math-container">$$\phi : (\prod M_i) \otimes N \to \prod ( M_i \otimes N)$$</span> is an isomorphism. (The natural map is the one induced by the universal property of <span class="math-container">$\prod (M_i \otimes N)$</span>, and <span class="math-container">$\_ \otimes N$</span> applied to the maps <span class="math-container">$\prod M_i \to M_i$</span>.)</p> <p><em>Proof.</em> This is because finite products are finite coproducts, and tensor products commute with coproducts.</p> <p>If <span class="math-container">$I$</span> is infinite, then this map is neither injective nor surjective in general.</p> <p>We can use user26857s example to produce an example when <span class="math-container">$\phi$</span> is not injective: <span class="math-container">$(\prod_{n \geq 0} K[x]/x^n) \otimes K(x) \to \prod_{n \geq 0} (K[x]/x^n \otimes K(x)) = 0$</span>. 
The LHS is not <span class="math-container">$0$</span> because the map <span class="math-container">$K[x] \to \prod_{n \geq 0} K[x]/x^n$</span> is injective, so because <span class="math-container">$K(x)$</span> is flat, <span class="math-container">$K(x)$</span> embeds as a submodule of <span class="math-container">$(\prod_{n \geq 0} K[x]/x^n) \otimes K(x)$</span>.</p> <p>An example when <span class="math-container">$\phi$</span> is not surjective is <span class="math-container">$(\prod_{\mathbb{N}} \mathbb{Z}) \otimes \mathbb{Q} \to \prod_{\mathbb{N}} \mathbb{Q}$</span>. The element <span class="math-container">$(1,1/2,1/4,1/8, \ldots)$</span> is not in the image.</p> <p><strong>Proposition.</strong> Lei <span class="math-container">$I$</span> be finite again. Suppose <span class="math-container">$M_i$</span> are submodules of <span class="math-container">$M$</span>. We identify each <span class="math-container">$M_i \otimes N$</span> with a submodule of <span class="math-container">$M \otimes N$</span>, using that <span class="math-container">$N$</span> is flat. We also identify <span class="math-container">$(\bigcap_{i \in I} M_i) \otimes N$</span> with a submodule of <span class="math-container">$N$</span> in the same way. 
Then <span class="math-container">$\bigcap_{i \in I} (M_i \otimes N) = (\bigcap_{i \in I} M_i) \otimes N$</span>, as submodules of <span class="math-container">$M$</span>.</p> <p><em>Proof.</em> </p> <p>Consider the exact sequence:</p> <p><span class="math-container">$$0 \to \bigcap_{i \in I} M_i \to M \to \prod_{i \in I} M / M_i.$$</span> </p> <p>Using flatness of <span class="math-container">$N$</span>, we get another exact sequence:</p> <p><span class="math-container">$$0 \to (\bigcap_{i \in I} M_i) \otimes N \to M \otimes N \to (\prod_{i \in I} M / M_i) \otimes N\qquad (**)$$</span></p> <p>We study the map <span class="math-container">$f: M \otimes N \to (\prod_{i \in I} M / M_i) \otimes N$</span> by composing it with <span class="math-container">$\phi: (\prod_{i \in I} M / M_i) \otimes N \to \prod_{i \in I} ((M / M_i) \otimes N)$</span> from the lemma above.</p> <p>Claims: </p> <p>1) The composition <span class="math-container">$\phi \circ f$</span> is the natural projection map. Hence the kernel of <span class="math-container">$\phi \circ f$</span> is <span class="math-container">$\bigcap_{i \in I} (M_i \otimes N)$</span>. (Lazy justification -- all maps are induced canonically...)</p> <p>2) <span class="math-container">$\ker(\phi \circ f) = \ker(f)$</span>. This is because <span class="math-container">$\phi$</span> is injective, as we saw in the lemma.</p> <p>Now it follows from exactness of the sequence <span class="math-container">$(**)$</span> above that, as submodules, <span class="math-container">$(\bigcap_{i \in I} M_i) \otimes N$</span> and <span class="math-container">$\ker f$</span> agree. This proves the claim.</p> Answer 4: <p>Finite intersections are pullbacks, so in particular are finite limits, and a module is flat iff tensoring with it commutes with all finite limits (exercise). </p> Answer 5: <p>(My original answer contained a significant error, which is what the comments below refer to.)</p> <p>Yes, it is true. 
However, it is not true if the collection of modules is infinite. User26857 gives an example in the comments.</p> <p><strong>Lemma.</strong> Let <span class="math-container">$M_i$</span> , <span class="math-container">$i \in I$</span>, be a <em>finite</em> collection of modules. Then the natural map <span class="math-container">$$\phi : (\prod M_i) \otimes N \to \prod ( M_i \otimes N)$$</span> is an isomorphism. (The natural map is the one induced by the universal property of <span class="math-container">$\prod (M_i \otimes N)$</span>, and <span class="math-container">$\_ \otimes N$</span> applied to the maps <span class="math-container">$\prod M_i \to M_i$</span>.)</p> <p><em>Proof.</em> This is because finite products are finite coproducts, and tensor products commute with coproducts.</p> <p>If <span class="math-container">$I$</span> is infinite, then this map is neither injective nor surjective in general.</p> <p>We can use user26857s example to produce an example when <span class="math-container">$\phi$</span> is not injective: <span class="math-container">$(\prod_{n \geq 0} K[x]/x^n) \otimes K(x) \to \prod_{n \geq 0} (K[x]/x^n \otimes K(x)) = 0$</span>. The LHS is not <span class="math-container">$0$</span> because the map <span class="math-container">$K[x] \to \prod_{n \geq 0} K[x]/x^n$</span> is injective, so because <span class="math-container">$K(x)$</span> is flat, <span class="math-container">$K(x)$</span> embeds as a submodule of <span class="math-container">$(\prod_{n \geq 0} K[x]/x^n) \otimes K(x)$</span>.</p> <p>An example when <span class="math-container">$\phi$</span> is not surjective is <span class="math-container">$(\prod_{\mathbb{N}} \mathbb{Z}) \otimes \mathbb{Q} \to \prod_{\mathbb{N}} \mathbb{Q}$</span>. The element <span class="math-container">$(1,1/2,1/4,1/8, \ldots)$</span> is not in the image.</p> <p><strong>Proposition.</strong> Lei <span class="math-container">$I$</span> be finite again. 
Suppose <span class="math-container">$M_i$</span> are submodules of <span class="math-container">$M$</span>. We identify each <span class="math-container">$M_i \otimes N$</span> with a submodule of <span class="math-container">$M \otimes N$</span>, using that <span class="math-container">$N$</span> is flat. We also identify <span class="math-container">$(\bigcap_{i \in I} M_i) \otimes N$</span> with a submodule of <span class="math-container">$N$</span> in the same way. Then <span class="math-container">$\bigcap_{i \in I} (M_i \otimes N) = (\bigcap_{i \in I} M_i) \otimes N$</span>, as submodules of <span class="math-container">$M$</span>.</p> <p><em>Proof.</em> </p> <p>Consider the exact sequence:</p> <p><span class="math-container">$$0 \to \bigcap_{i \in I} M_i \to M \to \prod_{i \in I} M / M_i.$$</span> </p> <p>Using flatness of <span class="math-container">$N$</span>, we get another exact sequence:</p> <p><span class="math-container">$$0 \to (\bigcap_{i \in I} M_i) \otimes N \to M \otimes N \to (\prod_{i \in I} M / M_i) \otimes N\qquad (**)$$</span></p> <p>We study the map <span class="math-container">$f: M \otimes N \to (\prod_{i \in I} M / M_i) \otimes N$</span> by composing it with <span class="math-container">$\phi: (\prod_{i \in I} M / M_i) \otimes N \to \prod_{i \in I} ((M / M_i) \otimes N)$</span> from the lemma above.</p> <p>Claims: </p> <p>1) The composition <span class="math-container">$\phi \circ f$</span> is the natural projection map. Hence the kernel of <span class="math-container">$\phi \circ f$</span> is <span class="math-container">$\bigcap_{i \in I} (M_i \otimes N)$</span>. (Lazy justification -- all maps are induced canonically...)</p> <p>2) <span class="math-container">$\ker(\phi \circ f) = \ker(f)$</span>. 
This is because <span class="math-container">$\phi$</span> is injective, as we saw in the lemma.</p> <p>Now it follows from exactness of the sequence <span class="math-container">$(**)$</span> above that, as submodules, <span class="math-container">$(\bigcap_{i \in I} M_i) \otimes N$</span> and <span class="math-container">$\ker f$</span> agree. This proves the claim.</p> Answer 6: <p>Finite intersections are pullbacks, so in particular are finite limits, and a module is flat iff tensoring with it commutes with all finite limits (exercise). </p> Answer 7: <p>(My original answer contained a significant error, which is what the comments below refer to.)</p> <p>Yes, it is true. However, it is not true if the collection of modules is infinite. User26857 gives an example in the comments.</p> <p><strong>Lemma.</strong> Let <span class="math-container">$M_i$</span> , <span class="math-container">$i \in I$</span>, be a <em>finite</em> collection of modules. Then the natural map <span class="math-container">$$\phi : (\prod M_i) \otimes N \to \prod ( M_i \otimes N)$$</span> is an isomorphism. (The natural map is the one induced by the universal property of <span class="math-container">$\prod (M_i \otimes N)$</span>, and <span class="math-container">$\_ \otimes N$</span> applied to the maps <span class="math-container">$\prod M_i \to M_i$</span>.)</p> <p><em>Proof.</em> This is because finite products are finite coproducts, and tensor products commute with coproducts.</p> <p>If <span class="math-container">$I$</span> is infinite, then this map is neither injective nor surjective in general.</p> <p>We can use user26857s example to produce an example when <span class="math-container">$\phi$</span> is not injective: <span class="math-container">$(\prod_{n \geq 0} K[x]/x^n) \otimes K(x) \to \prod_{n \geq 0} (K[x]/x^n \otimes K(x)) = 0$</span>. 
The LHS is not <span class="math-container">$0$</span> because the map <span class="math-container">$K[x] \to \prod_{n \geq 0} K[x]/x^n$</span> is injective, so because <span class="math-container">$K(x)$</span> is flat, <span class="math-container">$K(x)$</span> embeds as a submodule of <span class="math-container">$(\prod_{n \geq 0} K[x]/x^n) \otimes K(x)$</span>.</p> <p>An example when <span class="math-container">$\phi$</span> is not surjective is <span class="math-container">$(\prod_{\mathbb{N}} \mathbb{Z}) \otimes \mathbb{Q} \to \prod_{\mathbb{N}} \mathbb{Q}$</span>. The element <span class="math-container">$(1,1/2,1/4,1/8, \ldots)$</span> is not in the image.</p> <p><strong>Proposition.</strong> Lei <span class="math-container">$I$</span> be finite again. Suppose <span class="math-container">$M_i$</span> are submodules of <span class="math-container">$M$</span>. We identify each <span class="math-container">$M_i \otimes N$</span> with a submodule of <span class="math-container">$M \otimes N$</span>, using that <span class="math-container">$N$</span> is flat. We also identify <span class="math-container">$(\bigcap_{i \in I} M_i) \otimes N$</span> with a submodule of <span class="math-container">$N$</span> in the same way. 
Then <span class="math-container">$\bigcap_{i \in I} (M_i \otimes N) = (\bigcap_{i \in I} M_i) \otimes N$</span>, as submodules of <span class="math-container">$M$</span>.</p> <p><em>Proof.</em> </p> <p>Consider the exact sequence:</p> <p><span class="math-container">$$0 \to \bigcap_{i \in I} M_i \to M \to \prod_{i \in I} M / M_i.$$</span> </p> <p>Using flatness of <span class="math-container">$N$</span>, we get another exact sequence:</p> <p><span class="math-container">$$0 \to (\bigcap_{i \in I} M_i) \otimes N \to M \otimes N \to (\prod_{i \in I} M / M_i) \otimes N\qquad (**)$$</span></p> <p>We study the map <span class="math-container">$f: M \otimes N \to (\prod_{i \in I} M / M_i) \otimes N$</span> by composing it with <span class="math-container">$\phi: (\prod_{i \in I} M / M_i) \otimes N \to \prod_{i \in I} ((M / M_i) \otimes N)$</span> from the lemma above.</p> <p>Claims: </p> <p>1) The composition <span class="math-container">$\phi \circ f$</span> is the natural projection map. Hence the kernel of <span class="math-container">$\phi \circ f$</span> is <span class="math-container">$\bigcap_{i \in I} (M_i \otimes N)$</span>. (Lazy justification -- all maps are induced canonically...)</p> <p>2) <span class="math-container">$\ker(\phi \circ f) = \ker(f)$</span>. This is because <span class="math-container">$\phi$</span> is injective, as we saw in the lemma.</p> <p>Now it follows from exactness of the sequence <span class="math-container">$(**)$</span> above that, as submodules, <span class="math-container">$(\bigcap_{i \in I} M_i) \otimes N$</span> and <span class="math-container">$\ker f$</span> agree. This proves the claim.</p> Answer 8: <p>Finite intersections are pullbacks, so in particular are finite limits, and a module is flat iff tensoring with it commutes with all finite limits (exercise). </p> Answer 9: <p>(My original answer contained a significant error, which is what the comments below refer to.)</p> <p>Yes, it is true. 
However, it is not true if the collection of modules is infinite. User26857 gives an example in the comments.</p> <p><strong>Lemma.</strong> Let <span class="math-container">$M_i$</span> , <span class="math-container">$i \in I$</span>, be a <em>finite</em> collection of modules. Then the natural map <span class="math-container">$$\phi : (\prod M_i) \otimes N \to \prod ( M_i \otimes N)$$</span> is an isomorphism. (The natural map is the one induced by the universal property of <span class="math-container">$\prod (M_i \otimes N)$</span>, and <span class="math-container">$\_ \otimes N$</span> applied to the maps <span class="math-container">$\prod M_i \to M_i$</span>.)</p> <p><em>Proof.</em> This is because finite products are finite coproducts, and tensor products commute with coproducts.</p> <p>If <span class="math-container">$I$</span> is infinite, then this map is neither injective nor surjective in general.</p> <p>We can use user26857s example to produce an example when <span class="math-container">$\phi$</span> is not injective: <span class="math-container">$(\prod_{n \geq 0} K[x]/x^n) \otimes K(x) \to \prod_{n \geq 0} (K[x]/x^n \otimes K(x)) = 0$</span>. The LHS is not <span class="math-container">$0$</span> because the map <span class="math-container">$K[x] \to \prod_{n \geq 0} K[x]/x^n$</span> is injective, so because <span class="math-container">$K(x)$</span> is flat, <span class="math-container">$K(x)$</span> embeds as a submodule of <span class="math-container">$(\prod_{n \geq 0} K[x]/x^n) \otimes K(x)$</span>.</p> <p>An example when <span class="math-container">$\phi$</span> is not surjective is <span class="math-container">$(\prod_{\mathbb{N}} \mathbb{Z}) \otimes \mathbb{Q} \to \prod_{\mathbb{N}} \mathbb{Q}$</span>. The element <span class="math-container">$(1,1/2,1/4,1/8, \ldots)$</span> is not in the image.</p> <p><strong>Proposition.</strong> Lei <span class="math-container">$I$</span> be finite again. 
Suppose <span class="math-container">$M_i$</span> are submodules of <span class="math-container">$M$</span>. We identify each <span class="math-container">$M_i \otimes N$</span> with a submodule of <span class="math-container">$M \otimes N$</span>, using that <span class="math-container">$N$</span> is flat. We also identify <span class="math-container">$(\bigcap_{i \in I} M_i) \otimes N$</span> with a submodule of <span class="math-container">$N$</span> in the same way. Then <span class="math-container">$\bigcap_{i \in I} (M_i \otimes N) = (\bigcap_{i \in I} M_i) \otimes N$</span>, as submodules of <span class="math-container">$M$</span>.</p> <p><em>Proof.</em> </p> <p>Consider the exact sequence:</p> <p><span class="math-container">$$0 \to \bigcap_{i \in I} M_i \to M \to \prod_{i \in I} M / M_i.$$</span> </p> <p>Using flatness of <span class="math-container">$N$</span>, we get another exact sequence:</p> <p><span class="math-container">$$0 \to (\bigcap_{i \in I} M_i) \otimes N \to M \otimes N \to (\prod_{i \in I} M / M_i) \otimes N\qquad (**)$$</span></p> <p>We study the map <span class="math-container">$f: M \otimes N \to (\prod_{i \in I} M / M_i) \otimes N$</span> by composing it with <span class="math-container">$\phi: (\prod_{i \in I} M / M_i) \otimes N \to \prod_{i \in I} ((M / M_i) \otimes N)$</span> from the lemma above.</p> <p>Claims: </p> <p>1) The composition <span class="math-container">$\phi \circ f$</span> is the natural projection map. Hence the kernel of <span class="math-container">$\phi \circ f$</span> is <span class="math-container">$\bigcap_{i \in I} (M_i \otimes N)$</span>. (Lazy justification -- all maps are induced canonically...)</p> <p>2) <span class="math-container">$\ker(\phi \circ f) = \ker(f)$</span>. 
This is because <span class="math-container">$\phi$</span> is injective, as we saw in the lemma.</p> <p>Now it follows from exactness of the sequence <span class="math-container">$(**)$</span> above that, as submodules, <span class="math-container">$(\bigcap_{i \in I} M_i) \otimes N$</span> and <span class="math-container">$\ker f$</span> agree. This proves the claim.</p> Answer 10: <p>Finite intersections are pullbacks, so in particular are finite limits, and a module is flat iff tensoring with it commutes with all finite limits (exercise). </p> Answer 11: <p>(My original answer contained a significant error, which is what the comments below refer to.)</p> <p>Yes, it is true. However, it is not true if the collection of modules is infinite. User26857 gives an example in the comments.</p> <p><strong>Lemma.</strong> Let <span class="math-container">$M_i$</span> , <span class="math-container">$i \in I$</span>, be a <em>finite</em> collection of modules. Then the natural map <span class="math-container">$$\phi : (\prod M_i) \otimes N \to \prod ( M_i \otimes N)$$</span> is an isomorphism. (The natural map is the one induced by the universal property of <span class="math-container">$\prod (M_i \otimes N)$</span>, and <span class="math-container">$\_ \otimes N$</span> applied to the maps <span class="math-container">$\prod M_i \to M_i$</span>.)</p> <p><em>Proof.</em> This is because finite products are finite coproducts, and tensor products commute with coproducts.</p> <p>If <span class="math-container">$I$</span> is infinite, then this map is neither injective nor surjective in general.</p> <p>We can use user26857s example to produce an example when <span class="math-container">$\phi$</span> is not injective: <span class="math-container">$(\prod_{n \geq 0} K[x]/x^n) \otimes K(x) \to \prod_{n \geq 0} (K[x]/x^n \otimes K(x)) = 0$</span>. 
The LHS is not <span class="math-container">$0$</span> because the map <span class="math-container">$K[x] \to \prod_{n \geq 0} K[x]/x^n$</span> is injective; since <span class="math-container">$K(x)$</span> is flat, it follows that <span class="math-container">$K(x)$</span> embeds as a submodule of <span class="math-container">$(\prod_{n \geq 0} K[x]/x^n) \otimes K(x)$</span>.</p> <p>An example when <span class="math-container">$\phi$</span> is not surjective is <span class="math-container">$(\prod_{\mathbb{N}} \mathbb{Z}) \otimes \mathbb{Q} \to \prod_{\mathbb{N}} \mathbb{Q}$</span>. The element <span class="math-container">$(1,1/2,1/4,1/8, \ldots)$</span> is not in the image: every element of the LHS can be written as <span class="math-container">$x \otimes \frac{1}{n}$</span> for some <span class="math-container">$x \in \prod_{\mathbb{N}} \mathbb{Z}$</span> and <span class="math-container">$n \geq 1$</span>, so its image has entries with bounded denominators.</p> <p><strong>Proposition.</strong> Let <span class="math-container">$I$</span> be finite again. Suppose <span class="math-container">$M_i$</span> are submodules of <span class="math-container">$M$</span>. We identify each <span class="math-container">$M_i \otimes N$</span> with a submodule of <span class="math-container">$M \otimes N$</span>, using that <span class="math-container">$N$</span> is flat. We also identify <span class="math-container">$(\bigcap_{i \in I} M_i) \otimes N$</span> with a submodule of <span class="math-container">$M \otimes N$</span> in the same way. 
Then <span class="math-container">$\bigcap_{i \in I} (M_i \otimes N) = (\bigcap_{i \in I} M_i) \otimes N$</span>, as submodules of <span class="math-container">$M$</span>.</p> <p><em>Proof.</em> </p> <p>Consider the exact sequence:</p> <p><span class="math-container">$$0 \to \bigcap_{i \in I} M_i \to M \to \prod_{i \in I} M / M_i.$$</span> </p> <p>Using flatness of <span class="math-container">$N$</span>, we get another exact sequence:</p> <p><span class="math-container">$$0 \to (\bigcap_{i \in I} M_i) \otimes N \to M \otimes N \to (\prod_{i \in I} M / M_i) \otimes N\qquad (**)$$</span></p> <p>We study the map <span class="math-container">$f: M \otimes N \to (\prod_{i \in I} M / M_i) \otimes N$</span> by composing it with <span class="math-container">$\phi: (\prod_{i \in I} M / M_i) \otimes N \to \prod_{i \in I} ((M / M_i) \otimes N)$</span> from the lemma above.</p> <p>Claims: </p> <p>1) The composition <span class="math-container">$\phi \circ f$</span> is the natural projection map. Hence the kernel of <span class="math-container">$\phi \circ f$</span> is <span class="math-container">$\bigcap_{i \in I} (M_i \otimes N)$</span>. (Lazy justification -- all maps are induced canonically...)</p> <p>2) <span class="math-container">$\ker(\phi \circ f) = \ker(f)$</span>. This is because <span class="math-container">$\phi$</span> is injective, as we saw in the lemma.</p> <p>Now it follows from exactness of the sequence <span class="math-container">$(**)$</span> above that, as submodules, <span class="math-container">$(\bigcap_{i \in I} M_i) \otimes N$</span> and <span class="math-container">$\ker f$</span> agree. This proves the claim.</p> Answer 12: <p>Finite intersections are pullbacks, so in particular are finite limits, and a module is flat iff tensoring with it commutes with all finite limits (exercise). </p> Answer 13: <p>(My original answer contained a significant error, which is what the comments below refer to.)</p> <p>Yes, it is true. 
However, it is not true if the collection of modules is infinite. User26857 gives an example in the comments.</p> <p><strong>Lemma.</strong> Let <span class="math-container">$M_i$</span> , <span class="math-container">$i \in I$</span>, be a <em>finite</em> collection of modules. Then the natural map <span class="math-container">$$\phi : (\prod M_i) \otimes N \to \prod ( M_i \otimes N)$$</span> is an isomorphism. (The natural map is the one induced by the universal property of <span class="math-container">$\prod (M_i \otimes N)$</span>, and <span class="math-container">$\_ \otimes N$</span> applied to the maps <span class="math-container">$\prod M_i \to M_i$</span>.)</p> <p><em>Proof.</em> This is because finite products are finite coproducts, and tensor products commute with coproducts.</p> <p>If <span class="math-container">$I$</span> is infinite, then this map is neither injective nor surjective in general.</p> <p>We can use user26857s example to produce an example when <span class="math-container">$\phi$</span> is not injective: <span class="math-container">$(\prod_{n \geq 0} K[x]/x^n) \otimes K(x) \to \prod_{n \geq 0} (K[x]/x^n \otimes K(x)) = 0$</span>. The LHS is not <span class="math-container">$0$</span> because the map <span class="math-container">$K[x] \to \prod_{n \geq 0} K[x]/x^n$</span> is injective, so because <span class="math-container">$K(x)$</span> is flat, <span class="math-container">$K(x)$</span> embeds as a submodule of <span class="math-container">$(\prod_{n \geq 0} K[x]/x^n) \otimes K(x)$</span>.</p> <p>An example when <span class="math-container">$\phi$</span> is not surjective is <span class="math-container">$(\prod_{\mathbb{N}} \mathbb{Z}) \otimes \mathbb{Q} \to \prod_{\mathbb{N}} \mathbb{Q}$</span>. The element <span class="math-container">$(1,1/2,1/4,1/8, \ldots)$</span> is not in the image.</p> <p><strong>Proposition.</strong> Lei <span class="math-container">$I$</span> be finite again. 
Suppose <span class="math-container">$M_i$</span> are submodules of <span class="math-container">$M$</span>. We identify each <span class="math-container">$M_i \otimes N$</span> with a submodule of <span class="math-container">$M \otimes N$</span>, using that <span class="math-container">$N$</span> is flat. We also identify <span class="math-container">$(\bigcap_{i \in I} M_i) \otimes N$</span> with a submodule of <span class="math-container">$N$</span> in the same way. Then <span class="math-container">$\bigcap_{i \in I} (M_i \otimes N) = (\bigcap_{i \in I} M_i) \otimes N$</span>, as submodules of <span class="math-container">$M$</span>.</p> <p><em>Proof.</em> </p> <p>Consider the exact sequence:</p> <p><span class="math-container">$$0 \to \bigcap_{i \in I} M_i \to M \to \prod_{i \in I} M / M_i.$$</span> </p> <p>Using flatness of <span class="math-container">$N$</span>, we get another exact sequence:</p> <p><span class="math-container">$$0 \to (\bigcap_{i \in I} M_i) \otimes N \to M \otimes N \to (\prod_{i \in I} M / M_i) \otimes N\qquad (**)$$</span></p> <p>We study the map <span class="math-container">$f: M \otimes N \to (\prod_{i \in I} M / M_i) \otimes N$</span> by composing it with <span class="math-container">$\phi: (\prod_{i \in I} M / M_i) \otimes N \to \prod_{i \in I} ((M / M_i) \otimes N)$</span> from the lemma above.</p> <p>Claims: </p> <p>1) The composition <span class="math-container">$\phi \circ f$</span> is the natural projection map. Hence the kernel of <span class="math-container">$\phi \circ f$</span> is <span class="math-container">$\bigcap_{i \in I} (M_i \otimes N)$</span>. (Lazy justification -- all maps are induced canonically...)</p> <p>2) <span class="math-container">$\ker(\phi \circ f) = \ker(f)$</span>. 
This is because <span class="math-container">$\phi$</span> is injective, as we saw in the lemma.</p> <p>Now it follows from exactness of the sequence <span class="math-container">$(**)$</span> above that, as submodules, <span class="math-container">$(\bigcap_{i \in I} M_i) \otimes N$</span> and <span class="math-container">$\ker f$</span> agree. This proves the claim.</p> Answer 14: <p>Finite intersections are pullbacks, so in particular are finite limits, and a module is flat iff tensoring with it commutes with all finite limits (exercise). </p> Answer 15: <p>(My original answer contained a significant error, which is what the comments below refer to.)</p> <p>Yes, it is true. However, it is not true if the collection of modules is infinite. User26857 gives an example in the comments.</p> <p><strong>Lemma.</strong> Let <span class="math-container">$M_i$</span> , <span class="math-container">$i \in I$</span>, be a <em>finite</em> collection of modules. Then the natural map <span class="math-container">$$\phi : (\prod M_i) \otimes N \to \prod ( M_i \otimes N)$$</span> is an isomorphism. (The natural map is the one induced by the universal property of <span class="math-container">$\prod (M_i \otimes N)$</span>, and <span class="math-container">$\_ \otimes N$</span> applied to the maps <span class="math-container">$\prod M_i \to M_i$</span>.)</p> <p><em>Proof.</em> This is because finite products are finite coproducts, and tensor products commute with coproducts.</p> <p>If <span class="math-container">$I$</span> is infinite, then this map is neither injective nor surjective in general.</p> <p>We can use user26857s example to produce an example when <span class="math-container">$\phi$</span> is not injective: <span class="math-container">$(\prod_{n \geq 0} K[x]/x^n) \otimes K(x) \to \prod_{n \geq 0} (K[x]/x^n \otimes K(x)) = 0$</span>. 
The LHS is not <span class="math-container">$0$</span> because the map <span class="math-container">$K[x] \to \prod_{n \geq 0} K[x]/x^n$</span> is injective, so because <span class="math-container">$K(x)$</span> is flat, <span class="math-container">$K(x)$</span> embeds as a submodule of <span class="math-container">$(\prod_{n \geq 0} K[x]/x^n) \otimes K(x)$</span>.</p> <p>An example when <span class="math-container">$\phi$</span> is not surjective is <span class="math-container">$(\prod_{\mathbb{N}} \mathbb{Z}) \otimes \mathbb{Q} \to \prod_{\mathbb{N}} \mathbb{Q}$</span>. The element <span class="math-container">$(1,1/2,1/4,1/8, \ldots)$</span> is not in the image.</p> <p><strong>Proposition.</strong> Lei <span class="math-container">$I$</span> be finite again. Suppose <span class="math-container">$M_i$</span> are submodules of <span class="math-container">$M$</span>. We identify each <span class="math-container">$M_i \otimes N$</span> with a submodule of <span class="math-container">$M \otimes N$</span>, using that <span class="math-container">$N$</span> is flat. We also identify <span class="math-container">$(\bigcap_{i \in I} M_i) \otimes N$</span> with a submodule of <span class="math-container">$N$</span> in the same way. 
Then <span class="math-container">$\bigcap_{i \in I} (M_i \otimes N) = (\bigcap_{i \in I} M_i) \otimes N$</span>, as submodules of <span class="math-container">$M$</span>.</p> <p><em>Proof.</em> </p> <p>Consider the exact sequence:</p> <p><span class="math-container">$$0 \to \bigcap_{i \in I} M_i \to M \to \prod_{i \in I} M / M_i.$$</span> </p> <p>Using flatness of <span class="math-container">$N$</span>, we get another exact sequence:</p> <p><span class="math-container">$$0 \to (\bigcap_{i \in I} M_i) \otimes N \to M \otimes N \to (\prod_{i \in I} M / M_i) \otimes N\qquad (**)$$</span></p> <p>We study the map <span class="math-container">$f: M \otimes N \to (\prod_{i \in I} M / M_i) \otimes N$</span> by composing it with <span class="math-container">$\phi: (\prod_{i \in I} M / M_i) \otimes N \to \prod_{i \in I} ((M / M_i) \otimes N)$</span> from the lemma above.</p> <p>Claims: </p> <p>1) The composition <span class="math-container">$\phi \circ f$</span> is the natural projection map. Hence the kernel of <span class="math-container">$\phi \circ f$</span> is <span class="math-container">$\bigcap_{i \in I} (M_i \otimes N)$</span>. (Lazy justification -- all maps are induced canonically...)</p> <p>2) <span class="math-container">$\ker(\phi \circ f) = \ker(f)$</span>. This is because <span class="math-container">$\phi$</span> is injective, as we saw in the lemma.</p> <p>Now it follows from exactness of the sequence <span class="math-container">$(**)$</span> above that, as submodules, <span class="math-container">$(\bigcap_{i \in I} M_i) \otimes N$</span> and <span class="math-container">$\ker f$</span> agree. This proves the claim.</p> Answer 16: <p>Finite intersections are pullbacks, so in particular are finite limits, and a module is flat iff tensoring with it commutes with all finite limits (exercise). </p> Answer 17: <p>(My original answer contained a significant error, which is what the comments below refer to.)</p> <p>Yes, it is true. 
However, it is not true if the collection of modules is infinite. User26857 gives an example in the comments.</p> <p><strong>Lemma.</strong> Let <span class="math-container">$M_i$</span> , <span class="math-container">$i \in I$</span>, be a <em>finite</em> collection of modules. Then the natural map <span class="math-container">$$\phi : (\prod M_i) \otimes N \to \prod ( M_i \otimes N)$$</span> is an isomorphism. (The natural map is the one induced by the universal property of <span class="math-container">$\prod (M_i \otimes N)$</span>, and <span class="math-container">$\_ \otimes N$</span> applied to the maps <span class="math-container">$\prod M_i \to M_i$</span>.)</p> <p><em>Proof.</em> This is because finite products are finite coproducts, and tensor products commute with coproducts.</p> <p>If <span class="math-container">$I$</span> is infinite, then this map is neither injective nor surjective in general.</p> <p>We can use user26857s example to produce an example when <span class="math-container">$\phi$</span> is not injective: <span class="math-container">$(\prod_{n \geq 0} K[x]/x^n) \otimes K(x) \to \prod_{n \geq 0} (K[x]/x^n \otimes K(x)) = 0$</span>. The LHS is not <span class="math-container">$0$</span> because the map <span class="math-container">$K[x] \to \prod_{n \geq 0} K[x]/x^n$</span> is injective, so because <span class="math-container">$K(x)$</span> is flat, <span class="math-container">$K(x)$</span> embeds as a submodule of <span class="math-container">$(\prod_{n \geq 0} K[x]/x^n) \otimes K(x)$</span>.</p> <p>An example when <span class="math-container">$\phi$</span> is not surjective is <span class="math-container">$(\prod_{\mathbb{N}} \mathbb{Z}) \otimes \mathbb{Q} \to \prod_{\mathbb{N}} \mathbb{Q}$</span>. The element <span class="math-container">$(1,1/2,1/4,1/8, \ldots)$</span> is not in the image.</p> <p><strong>Proposition.</strong> Lei <span class="math-container">$I$</span> be finite again. 
Suppose <span class="math-container">$M_i$</span> are submodules of <span class="math-container">$M$</span>. We identify each <span class="math-container">$M_i \otimes N$</span> with a submodule of <span class="math-container">$M \otimes N$</span>, using that <span class="math-container">$N$</span> is flat. We also identify <span class="math-container">$(\bigcap_{i \in I} M_i) \otimes N$</span> with a submodule of <span class="math-container">$N$</span> in the same way. Then <span class="math-container">$\bigcap_{i \in I} (M_i \otimes N) = (\bigcap_{i \in I} M_i) \otimes N$</span>, as submodules of <span class="math-container">$M$</span>.</p> <p><em>Proof.</em> </p> <p>Consider the exact sequence:</p> <p><span class="math-container">$$0 \to \bigcap_{i \in I} M_i \to M \to \prod_{i \in I} M / M_i.$$</span> </p> <p>Using flatness of <span class="math-container">$N$</span>, we get another exact sequence:</p> <p><span class="math-container">$$0 \to (\bigcap_{i \in I} M_i) \otimes N \to M \otimes N \to (\prod_{i \in I} M / M_i) \otimes N\qquad (**)$$</span></p> <p>We study the map <span class="math-container">$f: M \otimes N \to (\prod_{i \in I} M / M_i) \otimes N$</span> by composing it with <span class="math-container">$\phi: (\prod_{i \in I} M / M_i) \otimes N \to \prod_{i \in I} ((M / M_i) \otimes N)$</span> from the lemma above.</p> <p>Claims: </p> <p>1) The composition <span class="math-container">$\phi \circ f$</span> is the natural projection map. Hence the kernel of <span class="math-container">$\phi \circ f$</span> is <span class="math-container">$\bigcap_{i \in I} (M_i \otimes N)$</span>. (Lazy justification -- all maps are induced canonically...)</p> <p>2) <span class="math-container">$\ker(\phi \circ f) = \ker(f)$</span>. 
This is because <span class="math-container">$\phi$</span> is injective, as we saw in the lemma.</p> <p>Now it follows from exactness of the sequence <span class="math-container">$(**)$</span> above that, as submodules, <span class="math-container">$(\bigcap_{i \in I} M_i) \otimes N$</span> and <span class="math-container">$\ker f$</span> agree. This proves the claim.</p> Answer 18: <p>Finite intersections are pullbacks, so in particular are finite limits, and a module is flat iff tensoring with it commutes with all finite limits (exercise). </p> Answer 19: <p>(My original answer contained a significant error, which is what the comments below refer to.)</p> <p>Yes, it is true. However, it is not true if the collection of modules is infinite. User26857 gives an example in the comments.</p> <p><strong>Lemma.</strong> Let <span class="math-container">$M_i$</span> , <span class="math-container">$i \in I$</span>, be a <em>finite</em> collection of modules. Then the natural map <span class="math-container">$$\phi : (\prod M_i) \otimes N \to \prod ( M_i \otimes N)$$</span> is an isomorphism. (The natural map is the one induced by the universal property of <span class="math-container">$\prod (M_i \otimes N)$</span>, and <span class="math-container">$\_ \otimes N$</span> applied to the maps <span class="math-container">$\prod M_i \to M_i$</span>.)</p> <p><em>Proof.</em> This is because finite products are finite coproducts, and tensor products commute with coproducts.</p> <p>If <span class="math-container">$I$</span> is infinite, then this map is neither injective nor surjective in general.</p> <p>We can use user26857s example to produce an example when <span class="math-container">$\phi$</span> is not injective: <span class="math-container">$(\prod_{n \geq 0} K[x]/x^n) \otimes K(x) \to \prod_{n \geq 0} (K[x]/x^n \otimes K(x)) = 0$</span>. 
The LHS is not <span class="math-container">$0$</span> because the map <span class="math-container">$K[x] \to \prod_{n \geq 0} K[x]/x^n$</span> is injective, so because <span class="math-container">$K(x)$</span> is flat, <span class="math-container">$K(x)$</span> embeds as a submodule of <span class="math-container">$(\prod_{n \geq 0} K[x]/x^n) \otimes K(x)$</span>.</p> <p>An example when <span class="math-container">$\phi$</span> is not surjective is <span class="math-container">$(\prod_{\mathbb{N}} \mathbb{Z}) \otimes \mathbb{Q} \to \prod_{\mathbb{N}} \mathbb{Q}$</span>. The element <span class="math-container">$(1,1/2,1/4,1/8, \ldots)$</span> is not in the image.</p> <p><strong>Proposition.</strong> Lei <span class="math-container">$I$</span> be finite again. Suppose <span class="math-container">$M_i$</span> are submodules of <span class="math-container">$M$</span>. We identify each <span class="math-container">$M_i \otimes N$</span> with a submodule of <span class="math-container">$M \otimes N$</span>, using that <span class="math-container">$N$</span> is flat. We also identify <span class="math-container">$(\bigcap_{i \in I} M_i) \otimes N$</span> with a submodule of <span class="math-container">$N$</span> in the same way. 
Then <span class="math-container">$\bigcap_{i \in I} (M_i \otimes N) = (\bigcap_{i \in I} M_i) \otimes N$</span>, as submodules of <span class="math-container">$M$</span>.</p> <p><em>Proof.</em> </p> <p>Consider the exact sequence:</p> <p><span class="math-container">$$0 \to \bigcap_{i \in I} M_i \to M \to \prod_{i \in I} M / M_i.$$</span> </p> <p>Using flatness of <span class="math-container">$N$</span>, we get another exact sequence:</p> <p><span class="math-container">$$0 \to (\bigcap_{i \in I} M_i) \otimes N \to M \otimes N \to (\prod_{i \in I} M / M_i) \otimes N\qquad (**)$$</span></p> <p>We study the map <span class="math-container">$f: M \otimes N \to (\prod_{i \in I} M / M_i) \otimes N$</span> by composing it with <span class="math-container">$\phi: (\prod_{i \in I} M / M_i) \otimes N \to \prod_{i \in I} ((M / M_i) \otimes N)$</span> from the lemma above.</p> <p>Claims: </p> <p>1) The composition <span class="math-container">$\phi \circ f$</span> is the natural projection map. Hence the kernel of <span class="math-container">$\phi \circ f$</span> is <span class="math-container">$\bigcap_{i \in I} (M_i \otimes N)$</span>. (Lazy justification -- all maps are induced canonically...)</p> <p>2) <span class="math-container">$\ker(\phi \circ f) = \ker(f)$</span>. This is because <span class="math-container">$\phi$</span> is injective, as we saw in the lemma.</p> <p>Now it follows from exactness of the sequence <span class="math-container">$(**)$</span> above that, as submodules, <span class="math-container">$(\bigcap_{i \in I} M_i) \otimes N$</span> and <span class="math-container">$\ker f$</span> agree. This proves the claim.</p> Answer 20: <p>Finite intersections are pullbacks, so in particular are finite limits, and a module is flat iff tensoring with it commutes with all finite limits (exercise). </p> Answer 21: <p>(My original answer contained a significant error, which is what the comments below refer to.)</p> <p>Yes, it is true. 
However, it is not true if the collection of modules is infinite. User26857 gives an example in the comments.</p> <p><strong>Lemma.</strong> Let <span class="math-container">$M_i$</span> , <span class="math-container">$i \in I$</span>, be a <em>finite</em> collection of modules. Then the natural map <span class="math-container">$$\phi : (\prod M_i) \otimes N \to \prod ( M_i \otimes N)$$</span> is an isomorphism. (The natural map is the one induced by the universal property of <span class="math-container">$\prod (M_i \otimes N)$</span>, and <span class="math-container">$\_ \otimes N$</span> applied to the maps <span class="math-container">$\prod M_i \to M_i$</span>.)</p> <p><em>Proof.</em> This is because finite products are finite coproducts, and tensor products commute with coproducts.</p> <p>If <span class="math-container">$I$</span> is infinite, then this map is neither injective nor surjective in general.</p> <p>We can use user26857s example to produce an example when <span class="math-container">$\phi$</span> is not injective: <span class="math-container">$(\prod_{n \geq 0} K[x]/x^n) \otimes K(x) \to \prod_{n \geq 0} (K[x]/x^n \otimes K(x)) = 0$</span>. The LHS is not <span class="math-container">$0$</span> because the map <span class="math-container">$K[x] \to \prod_{n \geq 0} K[x]/x^n$</span> is injective, so because <span class="math-container">$K(x)$</span> is flat, <span class="math-container">$K(x)$</span> embeds as a submodule of <span class="math-container">$(\prod_{n \geq 0} K[x]/x^n) \otimes K(x)$</span>.</p> <p>An example when <span class="math-container">$\phi$</span> is not surjective is <span class="math-container">$(\prod_{\mathbb{N}} \mathbb{Z}) \otimes \mathbb{Q} \to \prod_{\mathbb{N}} \mathbb{Q}$</span>. The element <span class="math-container">$(1,1/2,1/4,1/8, \ldots)$</span> is not in the image.</p> <p><strong>Proposition.</strong> Lei <span class="math-container">$I$</span> be finite again. 
Suppose <span class="math-container">$M_i$</span> are submodules of <span class="math-container">$M$</span>. We identify each <span class="math-container">$M_i \otimes N$</span> with a submodule of <span class="math-container">$M \otimes N$</span>, using that <span class="math-container">$N$</span> is flat. We also identify <span class="math-container">$(\bigcap_{i \in I} M_i) \otimes N$</span> with a submodule of <span class="math-container">$N$</span> in the same way. Then <span class="math-container">$\bigcap_{i \in I} (M_i \otimes N) = (\bigcap_{i \in I} M_i) \otimes N$</span>, as submodules of <span class="math-container">$M$</span>.</p> <p><em>Proof.</em> </p> <p>Consider the exact sequence:</p> <p><span class="math-container">$$0 \to \bigcap_{i \in I} M_i \to M \to \prod_{i \in I} M / M_i.$$</span> </p> <p>Using flatness of <span class="math-container">$N$</span>, we get another exact sequence:</p> <p><span class="math-container">$$0 \to (\bigcap_{i \in I} M_i) \otimes N \to M \otimes N \to (\prod_{i \in I} M / M_i) \otimes N\qquad (**)$$</span></p> <p>We study the map <span class="math-container">$f: M \otimes N \to (\prod_{i \in I} M / M_i) \otimes N$</span> by composing it with <span class="math-container">$\phi: (\prod_{i \in I} M / M_i) \otimes N \to \prod_{i \in I} ((M / M_i) \otimes N)$</span> from the lemma above.</p> <p>Claims: </p> <p>1) The composition <span class="math-container">$\phi \circ f$</span> is the natural projection map. Hence the kernel of <span class="math-container">$\phi \circ f$</span> is <span class="math-container">$\bigcap_{i \in I} (M_i \otimes N)$</span>. (Lazy justification -- all maps are induced canonically...)</p> <p>2) <span class="math-container">$\ker(\phi \circ f) = \ker(f)$</span>. 
This is because <span class="math-container">$\phi$</span> is injective, as we saw in the lemma.</p> <p>Now it follows from exactness of the sequence <span class="math-container">$(**)$</span> above that, as submodules, <span class="math-container">$(\bigcap_{i \in I} M_i) \otimes N$</span> and <span class="math-container">$\ker f$</span> agree. This proves the claim.</p> Answer 22: <p>Finite intersections are pullbacks, so in particular are finite limits, and a module is flat iff tensoring with it commutes with all finite limits (exercise). </p> Answer 23: <p>(My original answer contained a significant error, which is what the comments below refer to.)</p> <p>Yes, it is true. However, it is not true if the collection of modules is infinite. User26857 gives an example in the comments.</p> <p><strong>Lemma.</strong> Let <span class="math-container">$M_i$</span> , <span class="math-container">$i \in I$</span>, be a <em>finite</em> collection of modules. Then the natural map <span class="math-container">$$\phi : (\prod M_i) \otimes N \to \prod ( M_i \otimes N)$$</span> is an isomorphism. (The natural map is the one induced by the universal property of <span class="math-container">$\prod (M_i \otimes N)$</span>, and <span class="math-container">$\_ \otimes N$</span> applied to the maps <span class="math-container">$\prod M_i \to M_i$</span>.)</p> <p><em>Proof.</em> This is because finite products are finite coproducts, and tensor products commute with coproducts.</p> <p>If <span class="math-container">$I$</span> is infinite, then this map is neither injective nor surjective in general.</p> <p>We can use user26857s example to produce an example when <span class="math-container">$\phi$</span> is not injective: <span class="math-container">$(\prod_{n \geq 0} K[x]/x^n) \otimes K(x) \to \prod_{n \geq 0} (K[x]/x^n \otimes K(x)) = 0$</span>. 
The LHS is not <span class="math-container">$0$</span> because the map <span class="math-container">$K[x] \to \prod_{n \geq 0} K[x]/x^n$</span> is injective, so because <span class="math-container">$K(x)$</span> is flat, <span class="math-container">$K(x)$</span> embeds as a submodule of <span class="math-container">$(\prod_{n \geq 0} K[x]/x^n) \otimes K(x)$</span>.</p> <p>An example when <span class="math-container">$\phi$</span> is not surjective is <span class="math-container">$(\prod_{\mathbb{N}} \mathbb{Z}) \otimes \mathbb{Q} \to \prod_{\mathbb{N}} \mathbb{Q}$</span>. The element <span class="math-container">$(1,1/2,1/4,1/8, \ldots)$</span> is not in the image.</p> <p><strong>Proposition.</strong> Lei <span class="math-container">$I$</span> be finite again. Suppose <span class="math-container">$M_i$</span> are submodules of <span class="math-container">$M$</span>. We identify each <span class="math-container">$M_i \otimes N$</span> with a submodule of <span class="math-container">$M \otimes N$</span>, using that <span class="math-container">$N$</span> is flat. We also identify <span class="math-container">$(\bigcap_{i \in I} M_i) \otimes N$</span> with a submodule of <span class="math-container">$N$</span> in the same way. 
Then <span class="math-container">$\bigcap_{i \in I} (M_i \otimes N) = (\bigcap_{i \in I} M_i) \otimes N$</span>, as submodules of <span class="math-container">$M$</span>.</p> <p><em>Proof.</em> </p> <p>Consider the exact sequence:</p> <p><span class="math-container">$$0 \to \bigcap_{i \in I} M_i \to M \to \prod_{i \in I} M / M_i.$$</span> </p> <p>Using flatness of <span class="math-container">$N$</span>, we get another exact sequence:</p> <p><span class="math-container">$$0 \to (\bigcap_{i \in I} M_i) \otimes N \to M \otimes N \to (\prod_{i \in I} M / M_i) \otimes N\qquad (**)$$</span></p> <p>We study the map <span class="math-container">$f: M \otimes N \to (\prod_{i \in I} M / M_i) \otimes N$</span> by composing it with <span class="math-container">$\phi: (\prod_{i \in I} M / M_i) \otimes N \to \prod_{i \in I} ((M / M_i) \otimes N)$</span> from the lemma above.</p> <p>Claims: </p> <p>1) The composition <span class="math-container">$\phi \circ f$</span> is the natural projection map. Hence the kernel of <span class="math-container">$\phi \circ f$</span> is <span class="math-container">$\bigcap_{i \in I} (M_i \otimes N)$</span>. (Lazy justification -- all maps are induced canonically...)</p> <p>2) <span class="math-container">$\ker(\phi \circ f) = \ker(f)$</span>. This is because <span class="math-container">$\phi$</span> is injective, as we saw in the lemma.</p> <p>Now it follows from exactness of the sequence <span class="math-container">$(**)$</span> above that, as submodules, <span class="math-container">$(\bigcap_{i \in I} M_i) \otimes N$</span> and <span class="math-container">$\ker f$</span> agree. This proves the claim.</p> Answer 24: <p>Finite intersections are pullbacks, so in particular are finite limits, and a module is flat iff tensoring with it commutes with all finite limits (exercise). </p> Answer 25: <p>(My original answer contained a significant error, which is what the comments below refer to.)</p> <p>Yes, it is true. 
However, it is not true if the collection of modules is infinite. User26857 gives an example in the comments.</p> <p><strong>Lemma.</strong> Let <span class="math-container">$M_i$</span> , <span class="math-container">$i \in I$</span>, be a <em>finite</em> collection of modules. Then the natural map <span class="math-container">$$\phi : (\prod M_i) \otimes N \to \prod ( M_i \otimes N)$$</span> is an isomorphism. (The natural map is the one induced by the universal property of <span class="math-container">$\prod (M_i \otimes N)$</span>, and <span class="math-container">$\_ \otimes N$</span> applied to the maps <span class="math-container">$\prod M_i \to M_i$</span>.)</p> <p><em>Proof.</em> This is because finite products are finite coproducts, and tensor products commute with coproducts.</p> <p>If <span class="math-container">$I$</span> is infinite, then this map is neither injective nor surjective in general.</p> <p>We can use user26857s example to produce an example when <span class="math-container">$\phi$</span> is not injective: <span class="math-container">$(\prod_{n \geq 0} K[x]/x^n) \otimes K(x) \to \prod_{n \geq 0} (K[x]/x^n \otimes K(x)) = 0$</span>. The LHS is not <span class="math-container">$0$</span> because the map <span class="math-container">$K[x] \to \prod_{n \geq 0} K[x]/x^n$</span> is injective, so because <span class="math-container">$K(x)$</span> is flat, <span class="math-container">$K(x)$</span> embeds as a submodule of <span class="math-container">$(\prod_{n \geq 0} K[x]/x^n) \otimes K(x)$</span>.</p> <p>An example when <span class="math-container">$\phi$</span> is not surjective is <span class="math-container">$(\prod_{\mathbb{N}} \mathbb{Z}) \otimes \mathbb{Q} \to \prod_{\mathbb{N}} \mathbb{Q}$</span>. The element <span class="math-container">$(1,1/2,1/4,1/8, \ldots)$</span> is not in the image.</p> <p><strong>Proposition.</strong> Lei <span class="math-container">$I$</span> be finite again. 
Suppose <span class="math-container">$M_i$</span> are submodules of <span class="math-container">$M$</span>. We identify each <span class="math-container">$M_i \otimes N$</span> with a submodule of <span class="math-container">$M \otimes N$</span>, using that <span class="math-container">$N$</span> is flat. We also identify <span class="math-container">$(\bigcap_{i \in I} M_i) \otimes N$</span> with a submodule of <span class="math-container">$N$</span> in the same way. Then <span class="math-container">$\bigcap_{i \in I} (M_i \otimes N) = (\bigcap_{i \in I} M_i) \otimes N$</span>, as submodules of <span class="math-container">$M$</span>.</p> <p><em>Proof.</em> </p> <p>Consider the exact sequence:</p> <p><span class="math-container">$$0 \to \bigcap_{i \in I} M_i \to M \to \prod_{i \in I} M / M_i.$$</span> </p> <p>Using flatness of <span class="math-container">$N$</span>, we get another exact sequence:</p> <p><span class="math-container">$$0 \to (\bigcap_{i \in I} M_i) \otimes N \to M \otimes N \to (\prod_{i \in I} M / M_i) \otimes N\qquad (**)$$</span></p> <p>We study the map <span class="math-container">$f: M \otimes N \to (\prod_{i \in I} M / M_i) \otimes N$</span> by composing it with <span class="math-container">$\phi: (\prod_{i \in I} M / M_i) \otimes N \to \prod_{i \in I} ((M / M_i) \otimes N)$</span> from the lemma above.</p> <p>Claims: </p> <p>1) The composition <span class="math-container">$\phi \circ f$</span> is the natural projection map. Hence the kernel of <span class="math-container">$\phi \circ f$</span> is <span class="math-container">$\bigcap_{i \in I} (M_i \otimes N)$</span>. (Lazy justification -- all maps are induced canonically...)</p> <p>2) <span class="math-container">$\ker(\phi \circ f) = \ker(f)$</span>. 
This is because <span class="math-container">$\phi$</span> is injective, as we saw in the lemma.</p> <p>Now it follows from exactness of the sequence <span class="math-container">$(**)$</span> above that, as submodules, <span class="math-container">$(\bigcap_{i \in I} M_i) \otimes N$</span> and <span class="math-container">$\ker f$</span> agree. This proves the claim.</p> Answer 26: <p>Finite intersections are pullbacks, so in particular are finite limits, and a module is flat iff tensoring with it commutes with all finite limits (exercise). </p>
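The non-surjectivity example above can be illustrated numerically. The following minimal Python sketch (an illustration added here, not part of the original posts) uses the fact that any candidate image element is a finite sum $\sum_k a^{(k)} \otimes r_k$ with integer sequences $a^{(k)}$ and rationals $r_k$, so its coordinates in $\prod_{\mathbb{N}} \mathbb{Q}$ share a common denominator bound; the denominators of $(1, 1/2, 1/4, \ldots)$ grow past any such bound.

```python
from fractions import Fraction
from math import lcm
import random

# A finite tensor sum  sum_k a_k (x) r_k  (a_k integer sequences, r_k rational)
# maps to the sequence  n |-> sum_k a_k[n] * r_k  in prod_N Q.
def image_coords(int_seqs, rationals, n_coords):
    return [sum(Fraction(a[n]) * r for a, r in zip(int_seqs, rationals))
            for n in range(n_coords)]

random.seed(0)
N = 12
seqs = [[random.randrange(-9, 10) for _ in range(N)] for _ in range(3)]
rats = [Fraction(1, 3), Fraction(5, 4), Fraction(-2, 7)]
coords = image_coords(seqs, rats, N)

# Every coordinate denominator divides the lcm of the r_k denominators ...
bound = lcm(*(r.denominator for r in rats))
assert all(bound % c.denominator == 0 for c in coords)

# ... but the denominators of (1, 1/2, 1/4, ...) are unbounded, so that
# sequence cannot come from any finite tensor sum: the map is not surjective.
target = [Fraction(1, 2) ** n for n in range(N)]
assert target[N - 1].denominator == 2 ** (N - 1) > bound
```

This only demonstrates the denominator argument for one choice of finite sum; the actual proof is the finite-sum observation itself.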
Analyzing $\biggl\lfloor{\frac{x}{5}}\biggr\rfloor=\biggl\lfloor{\frac{x}{7}}\biggr\rfloor$
2,113,526
9
https://math.stackexchange.com/questions/2113526/analyzing-biggl-lfloor-fracx5-bigg-rfloor-bigg-lfloor-fracx7-bigg
<blockquote> <p>How many non-negative integral values of $x$ satisfy the equation $$\biggl\lfloor{\dfrac{x}{5}}\biggr\rfloor=\biggl\lfloor{\dfrac{x}{7}}\biggr\rfloor\,?$$</p> </blockquote> <p>My try:</p> <p>Writing a few numbers and putting them in the required equation:</p> <p>$\underbrace{0,1,2,3,4}_{\implies 0=0},\underbrace{5,6}_{\displaystyle\times},\underbrace{7,8,9}_{\implies 1=1},\underbrace{10,11,12,13}_{\displaystyle\times},\underbrace{14}_{\implies 2=2},\underbrace{15,16,17,18,19}_{\text{LHS gives 3 but RHS gives 2}},\underbrace{20,21,22,23,24}_{\text{LHS gives 4 but RHS gives 2 or 3}}$.</p> <p>The point I wish to make here is that after $14$ we reach a multiple of $5$ before the next multiple of $7$, hence $\forall x \geq 15$, $\biggl\lfloor{\dfrac{x}{5}}\biggr\rfloor&gt;\biggl\lfloor{\dfrac{x}{7}}\biggr\rfloor$.</p> <p>Hence the only solutions are $\{0,1,2,3,4,7,8,9,14\}$. Does this make sense?</p> <p>I need to know other solutions that don't rely on such case-by-case analysis, i.e., ones that use some properties of the floor function to frame and solve the problem.</p>
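The hand enumeration above can be double-checked by brute force (a sketch added for illustration, not part of the original thread). If $\lfloor x/5\rfloor=\lfloor x/7\rfloor$ then $x/5 - x/7 < 1$, i.e. $x < 35/2$, so it suffices to test $x = 0,\dots,17$:

```python
# If floor(x/5) == floor(x/7) then x/5 - x/7 < 1, i.e. x < 35/2,
# so every candidate lies in 0..17.
solutions = [x for x in range(18) if x // 5 == x // 7]
print(solutions)  # -> [0, 1, 2, 3, 4, 7, 8, 9, 14]
```

This matches the set $\{0,1,2,3,4,7,8,9,14\}$ obtained by hand.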
[ { "answer_id": 2113557, "score": 7, "text": "<p>Rewrite the given condition as $$5n\\le x&lt;5n+5,\\quad 7n\\le x&lt;7n+7$$ where $n$ is an integer. Since $x\\ge0$, $n\\ge0$ as well, so the above is equivalent to $7n\\le x&lt;5n+5$. The only nonnegative integers $n$ satisfying $7n&lt;5n+5$ are $0$, $1$, and $2$. And so just a little cleanup is needed.</p>\n" }, { "answer_id": 2113564, "score": 2, "text": "<p>By the division rule,</p>\n\n<p>$$x=5q+r=7q+s$$ with $0\\le r&lt;5,0\\le s&lt;7$.</p>\n\n<p>From this we draw\n$$2q=r-s\\in[-6,4],$$</p>\n\n<p>thus the only possibilities are with $q=0,1,2$.</p>\n\n<ul>\n<li><p>$q=0$: $r=s$, possible for $x\\in[0,4]\\cap[0,6]=[0,4]$;</p></li>\n<li><p>$q=1$: $5+r=7+s$, possible for $x\\in[5,9]\\cap[7,13]=[7,9]$;</p></li>\n<li><p>$q=2$: $10+r=14+s$, possible for $x\\in[10,14]\\cap[14,20]=[14]$.</p></li>\n</ul>\n" }, { "answer_id": 2113556, "score": 1, "text": "<p>For a solution the LHS and RHS are a common nonnegative integer; let's call it $c$:\n$$c = \\left \\lfloor \\frac{x}{5} \\right \\rfloor = \\left \\lfloor \\frac{x}{7} \\right \\rfloor$$</p>\n\n<p>Then, there are integers $0 \\leq a &lt; 5, 0 \\leq b &lt; 7$, s.t.:\n$$x = 5c + a = 7c + b$$\nThus:\n$$a - b = 2c$$</p>\n\n<p>With the restrictions on $a,b$, this is enough to find all solutions.</p>\n" }, { "answer_id": 2113710, "score": 0, "text": "<p><strong>Analysis:</strong></p>\n\n<p>Let us take the general form where $p&gt;q$:</p>\n\n<p>$$\n\\left\\lfloor{\\dfrac{x}{q}}\\right\\rfloor=\\left\\lfloor{\\dfrac{x}{p}}\\right\\rfloor\n$$</p>\n\n<p>A solution of the above equation can only exist when $\\dfrac{x}{q}-\\dfrac{x}{p}&lt;1\\Rightarrow x&lt;\\dfrac{p\\cdot q}{p-q}$</p>\n\n<p>So there is no need to check integers greater than $\\left\\lfloor\\dfrac{p\\cdot q}{p-q}\\right\\rfloor$</p>\n\n<p>Also $\\left\\lfloor{\\dfrac{x}{q}}\\right\\rfloor=\\left\\lfloor{\\dfrac{x}{p}}\\right\\rfloor=k$ implies\n$$\n\\begin{align}\n&amp;x\\in [k\\cdot p,(k+1)p) \\cap\\ [k\\cdot q,(k+1)\\cdot 
q)\\\\\n\\Rightarrow\\ \\ &amp;x\\in [k\\cdot p,(k+1)\\cdot q)\n\\end{align}\n$$</p>\n\n<p>The integers which satisfy the equation are in $$[0p,1q)\\ \\cup\\ [1p,2q)\\ \\cup\\ [2p,3q) \\cup\\ ...\\ \\cup\\ [(n-1)p,nq)$$ \nwhere $nq&lt;\\left\\lfloor\\dfrac{p\\cdot q}{p-q}\\right\\rfloor$ .</p>\n" }, { "answer_id": 4781426, "score": 0, "text": "<p><span class=\"math-container\">$$\\left[\\frac{{x}}{\\mathrm{5}}\\right]=\\left[\\frac{{x}}{\\mathrm{7}}\\right]={n}=\\left\\{\\mathrm{0},\\mathrm{1},\\mathrm{2}\\right\\} \\\\ $$</span>\n<span class=\"math-container\">$${x}=\\left[\\mathrm{5}{n};\\mathrm{5}{n}+\\mathrm{5}\\right)\\cap\\left[\\mathrm{7}{n};\\mathrm{7}{n}+\\mathrm{7}\\right) \\\\ $$</span>\n<span class=\"math-container\">$${x}=\\left[\\mathrm{7}{n};\\mathrm{5}{n}+\\mathrm{5}\\right) \\\\ $$</span>\n<span class=\"math-container\">$${x}=\\left\\{\\left[\\mathrm{0};\\mathrm{5}\\right),\\left[\\mathrm{7};\\mathrm{10}\\right),\\left[\\mathrm{14};\\mathrm{15}\\right)\\right\\} \\\\ $$</span>\n<span class=\"math-container\">$${x}=\\left\\{\\mathrm{0},\\mathrm{1},\\mathrm{2},\\mathrm{3},\\mathrm{4},\\mathrm{7},\\mathrm{8},\\mathrm{9},\\mathrm{14}\\right\\} \\\\ $$</span></p>\n" }, { "answer_id": 2113557, "score": 7, "text": "<p>Rewrite the given condition as $$5n\\le x&lt;5n+5,\\quad 7n\\le x&lt;7n+7$$ where $n$ is an integer. Since $x\\ge0$, $n\\ge0$ as well, so the above is equivalent to $7n\\le x&lt;5n+5$. The only nonnegative integers $n$ satisfying $7n&lt;5n+5$ are $0$, $1$, and $2$. 
And so just a little cleanup is needed.</p>\n" }, { "answer_id": 
4781426, "score": 0, "text": "<p><span class=\"math-container\">$$\\left[\\frac{{x}}{\\mathrm{5}}\\right]=\\left[\\frac{{x}}{\\mathrm{7}}\\right]={n}=\\left\\{\\mathrm{0},\\mathrm{1},\\mathrm{2}\\right\\} \\\\ $$</span>\n<span class=\"math-container\">$${x}=\\left[\\mathrm{5}{n};\\mathrm{5}{n}+\\mathrm{5}\\right)\\cap\\left[\\mathrm{7}{n};\\mathrm{7}{n}+\\mathrm{7}\\right) \\\\ $$</span>\n<span class=\"math-container\">$${x}=\\left[\\mathrm{7}{n};\\mathrm{5}{n}+\\mathrm{5}\\right) \\\\ $$</span>\n<span class=\"math-container\">$${x}=\\left\\{\\left[\\mathrm{0};\\mathrm{5}\\right),\\left[\\mathrm{7};\\mathrm{10}\\right),\\left[\\mathrm{14};\\mathrm{15}\\right)\\right\\} \\\\ $$</span>\n<span class=\"math-container\">$${x}=\\left\\{\\mathrm{0},\\mathrm{1},\\mathrm{2},\\mathrm{3},\\mathrm{4},\\mathrm{7},\\mathrm{8},\\mathrm{9},\\mathrm{14}\\right\\} \\\\ $$</span></p>\n" }, { "answer_id": 2113557, "score": 7, "text": "<p>Rewrite the given condition as $$5n\\le x&lt;5n+5,\\quad 7n\\le x&lt;7n+7$$ where $n$ is an integer. Since $x\\ge0$, $n\\ge0$ as well, so the above is equivalent to $7n\\le x&lt;5n+5$. The only nonnegative integers $n$ satisfying $7n&lt;5n+5$ are $0$, $1$, and $2$. 
And so just a little cleanup is needed.</p>\n" }, { "answer_id": 2113564, "score": 2, "text": "<p>By the division rule,</p>\n\n<p>$$x=5q+r=7q+s$$ with $0\\le r&lt;5,0\\le s&lt;7$.</p>\n\n<p>From this we draw\n$$2q=r-s\\in[-6,4],$$</p>\n\n<p>thus the only possibilities are with $q=0,1,2$.</p>\n\n<ul>\n<li><p>$q=0$: $r=s$, possible for $x\\in[0,4]\\cap[0,6]=[0,4]$;</p></li>\n<li><p>$q=1$: $5+r=7+s$, possible for $x\\in[5,9]\\cap[7,13]=[7,9]$;</p></li>\n<li><p>$q=2$: $10+r=14+s$, possible for $x\\in[10,14]\\cap[14,20]=[14]$.</p></li>\n</ul>\n" }, { "answer_id": 2113556, "score": 1, "text": "<p>For a solution the LHS and RHS are a positive integer, let's call this $c$:\n$$c = \\left \\lfloor \\frac{x}{5} \\right \\rfloor = \\left \\lfloor \\frac{x}{7} \\right \\rfloor$$</p>\n\n<p>Then, there are integers $0 \\leq a &lt; 5, 0 \\leq b &lt; 7$, s.t.:\n$$x = 5c + a = 7c + b$$\nThus:\n$$a - b = 2c$$</p>\n\n<p>With the restrictions to $a,b$, this is enough to find all solutions.</p>\n" }, { "answer_id": 2113710, "score": 0, "text": "<p><strong>Analysis:</strong></p>\n\n<p>Let us take the general form where $p&gt;q$ :</p>\n\n<p>$$\n\\left\\lfloor{\\dfrac{x}{q}}\\right\\rfloor=\\left\\lfloor{\\dfrac{x}{p}}\\right\\rfloor\n$$</p>\n\n<p>Solution for above equation exists when $\\dfrac{x}{q}-\\dfrac{x}{p}&lt;1\\Rightarrow x&lt;\\dfrac{p\\cdot q}{p-q}$</p>\n\n<p>So there is no need to check for integers greater than$\\left\\lfloor\\dfrac{p\\cdot q}{p-q}\\right\\rfloor$</p>\n\n<p>Also $\\left\\lfloor{\\dfrac{x}{q}}\\right\\rfloor=\\left\\lfloor{\\dfrac{x}{p}}\\right\\rfloor=k$ implies\n$$\n\\begin{align}\n&amp;x\\in [k\\cdot p,(k+1)p) \\cap\\ [k\\cdot q,(k+1)\\cdot q)\\\\\n\\Rightarrow\\ \\ &amp;x\\in [k\\cdot p,(k+1)\\cdot q)\n\\end{align}\n$$</p>\n\n<p>The integers which satisfy the equation are in $$[0p,1q)\\ \\cup\\ [1p,2q)\\ \\cup\\ [2p,3q) \\cup\\ ...\\ \\cup\\ [(n-1)p,nq)$$ \nwhere $nq&lt;\\left\\lfloor\\dfrac{p\\cdot q}{p-q}\\right\\rfloor$ .</p>\n" }, { "answer_id": 
4781426, "score": 0, "text": "<p><span class=\"math-container\">$$\\left[\\frac{{x}}{\\mathrm{5}}\\right]=\\left[\\frac{{x}}{\\mathrm{7}}\\right]={n}=\\left\\{\\mathrm{0},\\mathrm{1},\\mathrm{2}\\right\\} \\\\ $$</span>\n<span class=\"math-container\">$${x}=\\left[\\mathrm{5}{n};\\mathrm{5}{n}+\\mathrm{5}\\right)\\cap\\left[\\mathrm{7}{n};\\mathrm{7}{n}+\\mathrm{7}\\right) \\\\ $$</span>\n<span class=\"math-container\">$${x}=\\left[\\mathrm{7}{n};\\mathrm{5}{n}+\\mathrm{5}\\right) \\\\ $$</span>\n<span class=\"math-container\">$${x}=\\left\\{\\left[\\mathrm{0};\\mathrm{5}\\right),\\left[\\mathrm{7};\\mathrm{10}\\right),\\left[\\mathrm{14};\\mathrm{15}\\right)\\right\\} \\\\ $$</span>\n<span class=\"math-container\">$${x}=\\left\\{\\mathrm{0},\\mathrm{1},\\mathrm{2},\\mathrm{3},\\mathrm{4},\\mathrm{7},\\mathrm{8},\\mathrm{9},\\mathrm{14}\\right\\} \\\\ $$</span></p>\n" }, { "answer_id": 2113557, "score": 7, "text": "<p>Rewrite the given condition as $$5n\\le x&lt;5n+5,\\quad 7n\\le x&lt;7n+7$$ where $n$ is an integer. Since $x\\ge0$, $n\\ge0$ as well, so the above is equivalent to $7n\\le x&lt;5n+5$. The only nonnegative integers $n$ satisfying $7n&lt;5n+5$ are $0$, $1$, and $2$. 
And so just a little cleanup is needed.</p>\n" }, { "answer_id": 2113564, "score": 2, "text": "<p>By the division rule,</p>\n\n<p>$$x=5q+r=7q+s$$ with $0\\le r&lt;5,0\\le s&lt;7$.</p>\n\n<p>From this we draw\n$$2q=r-s\\in[-6,4],$$</p>\n\n<p>thus the only possibilities are with $q=0,1,2$.</p>\n\n<ul>\n<li><p>$q=0$: $r=s$, possible for $x\\in[0,4]\\cap[0,6]=[0,4]$;</p></li>\n<li><p>$q=1$: $5+r=7+s$, possible for $x\\in[5,9]\\cap[7,13]=[7,9]$;</p></li>\n<li><p>$q=2$: $10+r=14+s$, possible for $x\\in[10,14]\\cap[14,20]=[14]$.</p></li>\n</ul>\n" }, { "answer_id": 2113556, "score": 1, "text": "<p>For a solution the LHS and RHS are a positive integer, let's call this $c$:\n$$c = \\left \\lfloor \\frac{x}{5} \\right \\rfloor = \\left \\lfloor \\frac{x}{7} \\right \\rfloor$$</p>\n\n<p>Then, there are integers $0 \\leq a &lt; 5, 0 \\leq b &lt; 7$, s.t.:\n$$x = 5c + a = 7c + b$$\nThus:\n$$a - b = 2c$$</p>\n\n<p>With the restrictions to $a,b$, this is enough to find all solutions.</p>\n" }, { "answer_id": 2113710, "score": 0, "text": "<p><strong>Analysis:</strong></p>\n\n<p>Let us take the general form where $p&gt;q$ :</p>\n\n<p>$$\n\\left\\lfloor{\\dfrac{x}{q}}\\right\\rfloor=\\left\\lfloor{\\dfrac{x}{p}}\\right\\rfloor\n$$</p>\n\n<p>Solution for above equation exists when $\\dfrac{x}{q}-\\dfrac{x}{p}&lt;1\\Rightarrow x&lt;\\dfrac{p\\cdot q}{p-q}$</p>\n\n<p>So there is no need to check for integers greater than$\\left\\lfloor\\dfrac{p\\cdot q}{p-q}\\right\\rfloor$</p>\n\n<p>Also $\\left\\lfloor{\\dfrac{x}{q}}\\right\\rfloor=\\left\\lfloor{\\dfrac{x}{p}}\\right\\rfloor=k$ implies\n$$\n\\begin{align}\n&amp;x\\in [k\\cdot p,(k+1)p) \\cap\\ [k\\cdot q,(k+1)\\cdot q)\\\\\n\\Rightarrow\\ \\ &amp;x\\in [k\\cdot p,(k+1)\\cdot q)\n\\end{align}\n$$</p>\n\n<p>The integers which satisfy the equation are in $$[0p,1q)\\ \\cup\\ [1p,2q)\\ \\cup\\ [2p,3q) \\cup\\ ...\\ \\cup\\ [(n-1)p,nq)$$ \nwhere $nq&lt;\\left\\lfloor\\dfrac{p\\cdot q}{p-q}\\right\\rfloor$ .</p>\n" }, { "answer_id": 
4781426, "score": 0, "text": "<p><span class=\"math-container\">$$\\left[\\frac{{x}}{\\mathrm{5}}\\right]=\\left[\\frac{{x}}{\\mathrm{7}}\\right]={n}=\\left\\{\\mathrm{0},\\mathrm{1},\\mathrm{2}\\right\\} \\\\ $$</span>\n<span class=\"math-container\">$${x}=\\left[\\mathrm{5}{n};\\mathrm{5}{n}+\\mathrm{5}\\right)\\cap\\left[\\mathrm{7}{n};\\mathrm{7}{n}+\\mathrm{7}\\right) \\\\ $$</span>\n<span class=\"math-container\">$${x}=\\left[\\mathrm{7}{n};\\mathrm{5}{n}+\\mathrm{5}\\right) \\\\ $$</span>\n<span class=\"math-container\">$${x}=\\left\\{\\left[\\mathrm{0};\\mathrm{5}\\right),\\left[\\mathrm{7};\\mathrm{10}\\right),\\left[\\mathrm{14};\\mathrm{15}\\right)\\right\\} \\\\ $$</span>\n<span class=\"math-container\">$${x}=\\left\\{\\mathrm{0},\\mathrm{1},\\mathrm{2},\\mathrm{3},\\mathrm{4},\\mathrm{7},\\mathrm{8},\\mathrm{9},\\mathrm{14}\\right\\} \\\\ $$</span></p>\n" }, { "answer_id": 2113557, "score": 7, "text": "<p>Rewrite the given condition as $$5n\\le x&lt;5n+5,\\quad 7n\\le x&lt;7n+7$$ where $n$ is an integer. Since $x\\ge0$, $n\\ge0$ as well, so the above is equivalent to $7n\\le x&lt;5n+5$. The only nonnegative integers $n$ satisfying $7n&lt;5n+5$ are $0$, $1$, and $2$. 
And so just a little cleanup is needed.</p>\n" }, { "answer_id": 2113564, "score": 2, "text": "<p>By the division rule,</p>\n\n<p>$$x=5q+r=7q+s$$ with $0\\le r&lt;5,0\\le s&lt;7$.</p>\n\n<p>From this we draw\n$$2q=r-s\\in[-6,4],$$</p>\n\n<p>thus the only possibilities are with $q=0,1,2$.</p>\n\n<ul>\n<li><p>$q=0$: $r=s$, possible for $x\\in[0,4]\\cap[0,6]=[0,4]$;</p></li>\n<li><p>$q=1$: $5+r=7+s$, possible for $x\\in[5,9]\\cap[7,13]=[7,9]$;</p></li>\n<li><p>$q=2$: $10+r=14+s$, possible for $x\\in[10,14]\\cap[14,20]=[14]$.</p></li>\n</ul>\n" }, { "answer_id": 2113556, "score": 1, "text": "<p>For a solution the LHS and RHS are a positive integer, let's call this $c$:\n$$c = \\left \\lfloor \\frac{x}{5} \\right \\rfloor = \\left \\lfloor \\frac{x}{7} \\right \\rfloor$$</p>\n\n<p>Then, there are integers $0 \\leq a &lt; 5, 0 \\leq b &lt; 7$, s.t.:\n$$x = 5c + a = 7c + b$$\nThus:\n$$a - b = 2c$$</p>\n\n<p>With the restrictions to $a,b$, this is enough to find all solutions.</p>\n" }, { "answer_id": 2113710, "score": 0, "text": "<p><strong>Analysis:</strong></p>\n\n<p>Let us take the general form where $p&gt;q$ :</p>\n\n<p>$$\n\\left\\lfloor{\\dfrac{x}{q}}\\right\\rfloor=\\left\\lfloor{\\dfrac{x}{p}}\\right\\rfloor\n$$</p>\n\n<p>Solution for above equation exists when $\\dfrac{x}{q}-\\dfrac{x}{p}&lt;1\\Rightarrow x&lt;\\dfrac{p\\cdot q}{p-q}$</p>\n\n<p>So there is no need to check for integers greater than$\\left\\lfloor\\dfrac{p\\cdot q}{p-q}\\right\\rfloor$</p>\n\n<p>Also $\\left\\lfloor{\\dfrac{x}{q}}\\right\\rfloor=\\left\\lfloor{\\dfrac{x}{p}}\\right\\rfloor=k$ implies\n$$\n\\begin{align}\n&amp;x\\in [k\\cdot p,(k+1)p) \\cap\\ [k\\cdot q,(k+1)\\cdot q)\\\\\n\\Rightarrow\\ \\ &amp;x\\in [k\\cdot p,(k+1)\\cdot q)\n\\end{align}\n$$</p>\n\n<p>The integers which satisfy the equation are in $$[0p,1q)\\ \\cup\\ [1p,2q)\\ \\cup\\ [2p,3q) \\cup\\ ...\\ \\cup\\ [(n-1)p,nq)$$ \nwhere $nq&lt;\\left\\lfloor\\dfrac{p\\cdot q}{p-q}\\right\\rfloor$ .</p>\n" }, { "answer_id": 
4781426, "score": 0, "text": "<p><span class=\"math-container\">$$\\left[\\frac{{x}}{\\mathrm{5}}\\right]=\\left[\\frac{{x}}{\\mathrm{7}}\\right]={n}=\\left\\{\\mathrm{0},\\mathrm{1},\\mathrm{2}\\right\\} \\\\ $$</span>\n<span class=\"math-container\">$${x}=\\left[\\mathrm{5}{n};\\mathrm{5}{n}+\\mathrm{5}\\right)\\cap\\left[\\mathrm{7}{n};\\mathrm{7}{n}+\\mathrm{7}\\right) \\\\ $$</span>\n<span class=\"math-container\">$${x}=\\left[\\mathrm{7}{n};\\mathrm{5}{n}+\\mathrm{5}\\right) \\\\ $$</span>\n<span class=\"math-container\">$${x}=\\left\\{\\left[\\mathrm{0};\\mathrm{5}\\right),\\left[\\mathrm{7};\\mathrm{10}\\right),\\left[\\mathrm{14};\\mathrm{15}\\right)\\right\\} \\\\ $$</span>\n<span class=\"math-container\">$${x}=\\left\\{\\mathrm{0},\\mathrm{1},\\mathrm{2},\\mathrm{3},\\mathrm{4},\\mathrm{7},\\mathrm{8},\\mathrm{9},\\mathrm{14}\\right\\} \\\\ $$</span></p>\n" }, { "answer_id": 2113557, "score": 7, "text": "<p>Rewrite the given condition as $$5n\\le x&lt;5n+5,\\quad 7n\\le x&lt;7n+7$$ where $n$ is an integer. Since $x\\ge0$, $n\\ge0$ as well, so the above is equivalent to $7n\\le x&lt;5n+5$. The only nonnegative integers $n$ satisfying $7n&lt;5n+5$ are $0$, $1$, and $2$. 
And so just a little cleanup is needed.</p>\n" }, { "answer_id": 2113564, "score": 2, "text": "<p>By the division rule,</p>\n\n<p>$$x=5q+r=7q+s$$ with $0\\le r&lt;5,0\\le s&lt;7$.</p>\n\n<p>From this we draw\n$$2q=r-s\\in[-6,4],$$</p>\n\n<p>thus the only possibilities are with $q=0,1,2$.</p>\n\n<ul>\n<li><p>$q=0$: $r=s$, possible for $x\\in[0,4]\\cap[0,6]=[0,4]$;</p></li>\n<li><p>$q=1$: $5+r=7+s$, possible for $x\\in[5,9]\\cap[7,13]=[7,9]$;</p></li>\n<li><p>$q=2$: $10+r=14+s$, possible for $x\\in[10,14]\\cap[14,20]=[14]$.</p></li>\n</ul>\n" }, { "answer_id": 2113556, "score": 1, "text": "<p>For a solution the LHS and RHS are a positive integer, let's call this $c$:\n$$c = \\left \\lfloor \\frac{x}{5} \\right \\rfloor = \\left \\lfloor \\frac{x}{7} \\right \\rfloor$$</p>\n\n<p>Then, there are integers $0 \\leq a &lt; 5, 0 \\leq b &lt; 7$, s.t.:\n$$x = 5c + a = 7c + b$$\nThus:\n$$a - b = 2c$$</p>\n\n<p>With the restrictions to $a,b$, this is enough to find all solutions.</p>\n" }, { "answer_id": 2113710, "score": 0, "text": "<p><strong>Analysis:</strong></p>\n\n<p>Let us take the general form where $p&gt;q$ :</p>\n\n<p>$$\n\\left\\lfloor{\\dfrac{x}{q}}\\right\\rfloor=\\left\\lfloor{\\dfrac{x}{p}}\\right\\rfloor\n$$</p>\n\n<p>Solution for above equation exists when $\\dfrac{x}{q}-\\dfrac{x}{p}&lt;1\\Rightarrow x&lt;\\dfrac{p\\cdot q}{p-q}$</p>\n\n<p>So there is no need to check for integers greater than$\\left\\lfloor\\dfrac{p\\cdot q}{p-q}\\right\\rfloor$</p>\n\n<p>Also $\\left\\lfloor{\\dfrac{x}{q}}\\right\\rfloor=\\left\\lfloor{\\dfrac{x}{p}}\\right\\rfloor=k$ implies\n$$\n\\begin{align}\n&amp;x\\in [k\\cdot p,(k+1)p) \\cap\\ [k\\cdot q,(k+1)\\cdot q)\\\\\n\\Rightarrow\\ \\ &amp;x\\in [k\\cdot p,(k+1)\\cdot q)\n\\end{align}\n$$</p>\n\n<p>The integers which satisfy the equation are in $$[0p,1q)\\ \\cup\\ [1p,2q)\\ \\cup\\ [2p,3q) \\cup\\ ...\\ \\cup\\ [(n-1)p,nq)$$ \nwhere $nq&lt;\\left\\lfloor\\dfrac{p\\cdot q}{p-q}\\right\\rfloor$ .</p>\n" }, { "answer_id": 
4781426, "score": 0, "text": "<p><span class=\"math-container\">$$\\left[\\frac{{x}}{\\mathrm{5}}\\right]=\\left[\\frac{{x}}{\\mathrm{7}}\\right]={n}=\\left\\{\\mathrm{0},\\mathrm{1},\\mathrm{2}\\right\\} \\\\ $$</span>\n<span class=\"math-container\">$${x}=\\left[\\mathrm{5}{n};\\mathrm{5}{n}+\\mathrm{5}\\right)\\cap\\left[\\mathrm{7}{n};\\mathrm{7}{n}+\\mathrm{7}\\right) \\\\ $$</span>\n<span class=\"math-container\">$${x}=\\left[\\mathrm{7}{n};\\mathrm{5}{n}+\\mathrm{5}\\right) \\\\ $$</span>\n<span class=\"math-container\">$${x}=\\left\\{\\left[\\mathrm{0};\\mathrm{5}\\right),\\left[\\mathrm{7};\\mathrm{10}\\right),\\left[\\mathrm{14};\\mathrm{15}\\right)\\right\\} \\\\ $$</span>\n<span class=\"math-container\">$${x}=\\left\\{\\mathrm{0},\\mathrm{1},\\mathrm{2},\\mathrm{3},\\mathrm{4},\\mathrm{7},\\mathrm{8},\\mathrm{9},\\mathrm{14}\\right\\} \\\\ $$</span></p>\n" }, { "answer_id": 2113557, "score": 7, "text": "<p>Rewrite the given condition as $$5n\\le x&lt;5n+5,\\quad 7n\\le x&lt;7n+7$$ where $n$ is an integer. Since $x\\ge0$, $n\\ge0$ as well, so the above is equivalent to $7n\\le x&lt;5n+5$. The only nonnegative integers $n$ satisfying $7n&lt;5n+5$ are $0$, $1$, and $2$. 
And so just a little cleanup is needed.</p>\n" }, { "answer_id": 2113564, "score": 2, "text": "<p>By the division rule,</p>\n\n<p>$$x=5q+r=7q+s$$ with $0\\le r&lt;5,0\\le s&lt;7$.</p>\n\n<p>From this we draw\n$$2q=r-s\\in[-6,4],$$</p>\n\n<p>thus the only possibilities are with $q=0,1,2$.</p>\n\n<ul>\n<li><p>$q=0$: $r=s$, possible for $x\\in[0,4]\\cap[0,6]=[0,4]$;</p></li>\n<li><p>$q=1$: $5+r=7+s$, possible for $x\\in[5,9]\\cap[7,13]=[7,9]$;</p></li>\n<li><p>$q=2$: $10+r=14+s$, possible for $x\\in[10,14]\\cap[14,20]=[14]$.</p></li>\n</ul>\n" }, { "answer_id": 2113556, "score": 1, "text": "<p>For a solution the LHS and RHS are a positive integer, let's call this $c$:\n$$c = \\left \\lfloor \\frac{x}{5} \\right \\rfloor = \\left \\lfloor \\frac{x}{7} \\right \\rfloor$$</p>\n\n<p>Then, there are integers $0 \\leq a &lt; 5, 0 \\leq b &lt; 7$, s.t.:\n$$x = 5c + a = 7c + b$$\nThus:\n$$a - b = 2c$$</p>\n\n<p>With the restrictions to $a,b$, this is enough to find all solutions.</p>\n" }, { "answer_id": 2113710, "score": 0, "text": "<p><strong>Analysis:</strong></p>\n\n<p>Let us take the general form where $p&gt;q$ :</p>\n\n<p>$$\n\\left\\lfloor{\\dfrac{x}{q}}\\right\\rfloor=\\left\\lfloor{\\dfrac{x}{p}}\\right\\rfloor\n$$</p>\n\n<p>Solution for above equation exists when $\\dfrac{x}{q}-\\dfrac{x}{p}&lt;1\\Rightarrow x&lt;\\dfrac{p\\cdot q}{p-q}$</p>\n\n<p>So there is no need to check for integers greater than$\\left\\lfloor\\dfrac{p\\cdot q}{p-q}\\right\\rfloor$</p>\n\n<p>Also $\\left\\lfloor{\\dfrac{x}{q}}\\right\\rfloor=\\left\\lfloor{\\dfrac{x}{p}}\\right\\rfloor=k$ implies\n$$\n\\begin{align}\n&amp;x\\in [k\\cdot p,(k+1)p) \\cap\\ [k\\cdot q,(k+1)\\cdot q)\\\\\n\\Rightarrow\\ \\ &amp;x\\in [k\\cdot p,(k+1)\\cdot q)\n\\end{align}\n$$</p>\n\n<p>The integers which satisfy the equation are in $$[0p,1q)\\ \\cup\\ [1p,2q)\\ \\cup\\ [2p,3q) \\cup\\ ...\\ \\cup\\ [(n-1)p,nq)$$ \nwhere $nq&lt;\\left\\lfloor\\dfrac{p\\cdot q}{p-q}\\right\\rfloor$ .</p>\n" }, { "answer_id": 
4781426, "score": 0, "text": "<p><span class=\"math-container\">$$\\left[\\frac{{x}}{\\mathrm{5}}\\right]=\\left[\\frac{{x}}{\\mathrm{7}}\\right]={n}=\\left\\{\\mathrm{0},\\mathrm{1},\\mathrm{2}\\right\\} \\\\ $$</span>\n<span class=\"math-container\">$${x}=\\left[\\mathrm{5}{n};\\mathrm{5}{n}+\\mathrm{5}\\right)\\cap\\left[\\mathrm{7}{n};\\mathrm{7}{n}+\\mathrm{7}\\right) \\\\ $$</span>\n<span class=\"math-container\">$${x}=\\left[\\mathrm{7}{n};\\mathrm{5}{n}+\\mathrm{5}\\right) \\\\ $$</span>\n<span class=\"math-container\">$${x}=\\left\\{\\left[\\mathrm{0};\\mathrm{5}\\right),\\left[\\mathrm{7};\\mathrm{10}\\right),\\left[\\mathrm{14};\\mathrm{15}\\right)\\right\\} \\\\ $$</span>\n<span class=\"math-container\">$${x}=\\left\\{\\mathrm{0},\\mathrm{1},\\mathrm{2},\\mathrm{3},\\mathrm{4},\\mathrm{7},\\mathrm{8},\\mathrm{9},\\mathrm{14}\\right\\} \\\\ $$</span></p>\n" }, { "answer_id": 2113557, "score": 7, "text": "<p>Rewrite the given condition as $$5n\\le x&lt;5n+5,\\quad 7n\\le x&lt;7n+7$$ where $n$ is an integer. Since $x\\ge0$, $n\\ge0$ as well, so the above is equivalent to $7n\\le x&lt;5n+5$. The only nonnegative integers $n$ satisfying $7n&lt;5n+5$ are $0$, $1$, and $2$. 
And so just a little cleanup is needed.</p>\n" }, { "answer_id": 2113564, "score": 2, "text": "<p>By the division rule,</p>\n\n<p>$$x=5q+r=7q+s$$ with $0\\le r&lt;5,0\\le s&lt;7$.</p>\n\n<p>From this we draw\n$$2q=r-s\\in[-6,4],$$</p>\n\n<p>thus the only possibilities are with $q=0,1,2$.</p>\n\n<ul>\n<li><p>$q=0$: $r=s$, possible for $x\\in[0,4]\\cap[0,6]=[0,4]$;</p></li>\n<li><p>$q=1$: $5+r=7+s$, possible for $x\\in[5,9]\\cap[7,13]=[7,9]$;</p></li>\n<li><p>$q=2$: $10+r=14+s$, possible for $x\\in[10,14]\\cap[14,20]=[14]$.</p></li>\n</ul>\n" }, { "answer_id": 2113556, "score": 1, "text": "<p>For a solution the LHS and RHS are a positive integer, let's call this $c$:\n$$c = \\left \\lfloor \\frac{x}{5} \\right \\rfloor = \\left \\lfloor \\frac{x}{7} \\right \\rfloor$$</p>\n\n<p>Then, there are integers $0 \\leq a &lt; 5, 0 \\leq b &lt; 7$, s.t.:\n$$x = 5c + a = 7c + b$$\nThus:\n$$a - b = 2c$$</p>\n\n<p>With the restrictions to $a,b$, this is enough to find all solutions.</p>\n" }, { "answer_id": 2113710, "score": 0, "text": "<p><strong>Analysis:</strong></p>\n\n<p>Let us take the general form where $p&gt;q$ :</p>\n\n<p>$$\n\\left\\lfloor{\\dfrac{x}{q}}\\right\\rfloor=\\left\\lfloor{\\dfrac{x}{p}}\\right\\rfloor\n$$</p>\n\n<p>Solution for above equation exists when $\\dfrac{x}{q}-\\dfrac{x}{p}&lt;1\\Rightarrow x&lt;\\dfrac{p\\cdot q}{p-q}$</p>\n\n<p>So there is no need to check for integers greater than$\\left\\lfloor\\dfrac{p\\cdot q}{p-q}\\right\\rfloor$</p>\n\n<p>Also $\\left\\lfloor{\\dfrac{x}{q}}\\right\\rfloor=\\left\\lfloor{\\dfrac{x}{p}}\\right\\rfloor=k$ implies\n$$\n\\begin{align}\n&amp;x\\in [k\\cdot p,(k+1)p) \\cap\\ [k\\cdot q,(k+1)\\cdot q)\\\\\n\\Rightarrow\\ \\ &amp;x\\in [k\\cdot p,(k+1)\\cdot q)\n\\end{align}\n$$</p>\n\n<p>The integers which satisfy the equation are in $$[0p,1q)\\ \\cup\\ [1p,2q)\\ \\cup\\ [2p,3q) \\cup\\ ...\\ \\cup\\ [(n-1)p,nq)$$ \nwhere $nq&lt;\\left\\lfloor\\dfrac{p\\cdot q}{p-q}\\right\\rfloor$ .</p>\n" }, { "answer_id": 
4781426, "score": 0, "text": "<p><span class=\"math-container\">$$\\left[\\frac{{x}}{\\mathrm{5}}\\right]=\\left[\\frac{{x}}{\\mathrm{7}}\\right]={n}=\\left\\{\\mathrm{0},\\mathrm{1},\\mathrm{2}\\right\\} \\\\ $$</span>\n<span class=\"math-container\">$${x}=\\left[\\mathrm{5}{n};\\mathrm{5}{n}+\\mathrm{5}\\right)\\cap\\left[\\mathrm{7}{n};\\mathrm{7}{n}+\\mathrm{7}\\right) \\\\ $$</span>\n<span class=\"math-container\">$${x}=\\left[\\mathrm{7}{n};\\mathrm{5}{n}+\\mathrm{5}\\right) \\\\ $$</span>\n<span class=\"math-container\">$${x}=\\left\\{\\left[\\mathrm{0};\\mathrm{5}\\right),\\left[\\mathrm{7};\\mathrm{10}\\right),\\left[\\mathrm{14};\\mathrm{15}\\right)\\right\\} \\\\ $$</span>\n<span class=\"math-container\">$${x}=\\left\\{\\mathrm{0},\\mathrm{1},\\mathrm{2},\\mathrm{3},\\mathrm{4},\\mathrm{7},\\mathrm{8},\\mathrm{9},\\mathrm{14}\\right\\} \\\\ $$</span></p>\n" }, { "answer_id": 2113557, "score": 7, "text": "<p>Rewrite the given condition as $$5n\\le x&lt;5n+5,\\quad 7n\\le x&lt;7n+7$$ where $n$ is an integer. Since $x\\ge0$, $n\\ge0$ as well, so the above is equivalent to $7n\\le x&lt;5n+5$. The only nonnegative integers $n$ satisfying $7n&lt;5n+5$ are $0$, $1$, and $2$. 
And so just a little cleanup is needed.</p>\n" }, { "answer_id": 2113564, "score": 2, "text": "<p>By the division rule,</p>\n\n<p>$$x=5q+r=7q+s$$ with $0\\le r&lt;5,0\\le s&lt;7$.</p>\n\n<p>From this we draw\n$$2q=r-s\\in[-6,4],$$</p>\n\n<p>thus the only possibilities are with $q=0,1,2$.</p>\n\n<ul>\n<li><p>$q=0$: $r=s$, possible for $x\\in[0,4]\\cap[0,6]=[0,4]$;</p></li>\n<li><p>$q=1$: $5+r=7+s$, possible for $x\\in[5,9]\\cap[7,13]=[7,9]$;</p></li>\n<li><p>$q=2$: $10+r=14+s$, possible for $x\\in[10,14]\\cap[14,20]=[14]$.</p></li>\n</ul>\n" }, { "answer_id": 2113556, "score": 1, "text": "<p>For a solution the LHS and RHS are a positive integer, let's call this $c$:\n$$c = \\left \\lfloor \\frac{x}{5} \\right \\rfloor = \\left \\lfloor \\frac{x}{7} \\right \\rfloor$$</p>\n\n<p>Then, there are integers $0 \\leq a &lt; 5, 0 \\leq b &lt; 7$, s.t.:\n$$x = 5c + a = 7c + b$$\nThus:\n$$a - b = 2c$$</p>\n\n<p>With the restrictions to $a,b$, this is enough to find all solutions.</p>\n" }, { "answer_id": 2113710, "score": 0, "text": "<p><strong>Analysis:</strong></p>\n\n<p>Let us take the general form where $p&gt;q$ :</p>\n\n<p>$$\n\\left\\lfloor{\\dfrac{x}{q}}\\right\\rfloor=\\left\\lfloor{\\dfrac{x}{p}}\\right\\rfloor\n$$</p>\n\n<p>Solution for above equation exists when $\\dfrac{x}{q}-\\dfrac{x}{p}&lt;1\\Rightarrow x&lt;\\dfrac{p\\cdot q}{p-q}$</p>\n\n<p>So there is no need to check for integers greater than$\\left\\lfloor\\dfrac{p\\cdot q}{p-q}\\right\\rfloor$</p>\n\n<p>Also $\\left\\lfloor{\\dfrac{x}{q}}\\right\\rfloor=\\left\\lfloor{\\dfrac{x}{p}}\\right\\rfloor=k$ implies\n$$\n\\begin{align}\n&amp;x\\in [k\\cdot p,(k+1)p) \\cap\\ [k\\cdot q,(k+1)\\cdot q)\\\\\n\\Rightarrow\\ \\ &amp;x\\in [k\\cdot p,(k+1)\\cdot q)\n\\end{align}\n$$</p>\n\n<p>The integers which satisfy the equation are in $$[0p,1q)\\ \\cup\\ [1p,2q)\\ \\cup\\ [2p,3q) \\cup\\ ...\\ \\cup\\ [(n-1)p,nq)$$ \nwhere $nq&lt;\\left\\lfloor\\dfrac{p\\cdot q}{p-q}\\right\\rfloor$ .</p>\n" }, { "answer_id": 
4781426, "score": 0, "text": "<p><span class=\"math-container\">$$\\left[\\frac{{x}}{\\mathrm{5}}\\right]=\\left[\\frac{{x}}{\\mathrm{7}}\\right]={n}=\\left\\{\\mathrm{0},\\mathrm{1},\\mathrm{2}\\right\\} \\\\ $$</span>\n<span class=\"math-container\">$${x}=\\left[\\mathrm{5}{n};\\mathrm{5}{n}+\\mathrm{5}\\right)\\cap\\left[\\mathrm{7}{n};\\mathrm{7}{n}+\\mathrm{7}\\right) \\\\ $$</span>\n<span class=\"math-container\">$${x}=\\left[\\mathrm{7}{n};\\mathrm{5}{n}+\\mathrm{5}\\right) \\\\ $$</span>\n<span class=\"math-container\">$${x}=\\left\\{\\left[\\mathrm{0};\\mathrm{5}\\right),\\left[\\mathrm{7};\\mathrm{10}\\right),\\left[\\mathrm{14};\\mathrm{15}\\right)\\right\\} \\\\ $$</span>\n<span class=\"math-container\">$${x}=\\left\\{\\mathrm{0},\\mathrm{1},\\mathrm{2},\\mathrm{3},\\mathrm{4},\\mathrm{7},\\mathrm{8},\\mathrm{9},\\mathrm{14}\\right\\} \\\\ $$</span></p>\n" }, { "answer_id": 2113557, "score": 7, "text": "<p>Rewrite the given condition as $$5n\\le x&lt;5n+5,\\quad 7n\\le x&lt;7n+7$$ where $n$ is an integer. Since $x\\ge0$, $n\\ge0$ as well, so the above is equivalent to $7n\\le x&lt;5n+5$. The only nonnegative integers $n$ satisfying $7n&lt;5n+5$ are $0$, $1$, and $2$. 
And so just a little cleanup is needed.</p>\n" }, { "answer_id": 2113564, "score": 2, "text": "<p>By the division rule,</p>\n\n<p>$$x=5q+r=7q+s$$ with $0\\le r&lt;5,0\\le s&lt;7$.</p>\n\n<p>From this we draw\n$$2q=r-s\\in[-6,4],$$</p>\n\n<p>thus the only possibilities are with $q=0,1,2$.</p>\n\n<ul>\n<li><p>$q=0$: $r=s$, possible for $x\\in[0,4]\\cap[0,6]=[0,4]$;</p></li>\n<li><p>$q=1$: $5+r=7+s$, possible for $x\\in[5,9]\\cap[7,13]=[7,9]$;</p></li>\n<li><p>$q=2$: $10+r=14+s$, possible for $x\\in[10,14]\\cap[14,20]=[14]$.</p></li>\n</ul>\n" }, { "answer_id": 2113556, "score": 1, "text": "<p>For a solution the LHS and RHS are a positive integer, let's call this $c$:\n$$c = \\left \\lfloor \\frac{x}{5} \\right \\rfloor = \\left \\lfloor \\frac{x}{7} \\right \\rfloor$$</p>\n\n<p>Then, there are integers $0 \\leq a &lt; 5, 0 \\leq b &lt; 7$, s.t.:\n$$x = 5c + a = 7c + b$$\nThus:\n$$a - b = 2c$$</p>\n\n<p>With the restrictions to $a,b$, this is enough to find all solutions.</p>\n" }, { "answer_id": 2113710, "score": 0, "text": "<p><strong>Analysis:</strong></p>\n\n<p>Let us take the general form where $p&gt;q$ :</p>\n\n<p>$$\n\\left\\lfloor{\\dfrac{x}{q}}\\right\\rfloor=\\left\\lfloor{\\dfrac{x}{p}}\\right\\rfloor\n$$</p>\n\n<p>Solution for above equation exists when $\\dfrac{x}{q}-\\dfrac{x}{p}&lt;1\\Rightarrow x&lt;\\dfrac{p\\cdot q}{p-q}$</p>\n\n<p>So there is no need to check for integers greater than$\\left\\lfloor\\dfrac{p\\cdot q}{p-q}\\right\\rfloor$</p>\n\n<p>Also $\\left\\lfloor{\\dfrac{x}{q}}\\right\\rfloor=\\left\\lfloor{\\dfrac{x}{p}}\\right\\rfloor=k$ implies\n$$\n\\begin{align}\n&amp;x\\in [k\\cdot p,(k+1)p) \\cap\\ [k\\cdot q,(k+1)\\cdot q)\\\\\n\\Rightarrow\\ \\ &amp;x\\in [k\\cdot p,(k+1)\\cdot q)\n\\end{align}\n$$</p>\n\n<p>The integers which satisfy the equation are in $$[0p,1q)\\ \\cup\\ [1p,2q)\\ \\cup\\ [2p,3q) \\cup\\ ...\\ \\cup\\ [(n-1)p,nq)$$ \nwhere $nq&lt;\\left\\lfloor\\dfrac{p\\cdot q}{p-q}\\right\\rfloor$ .</p>\n" }, { "answer_id": 
4781426, "score": 0, "text": "<p><span class=\"math-container\">$$\\left[\\frac{{x}}{\\mathrm{5}}\\right]=\\left[\\frac{{x}}{\\mathrm{7}}\\right]={n}=\\left\\{\\mathrm{0},\\mathrm{1},\\mathrm{2}\\right\\} \\\\ $$</span>\n<span class=\"math-container\">$${x}=\\left[\\mathrm{5}{n};\\mathrm{5}{n}+\\mathrm{5}\\right)\\cap\\left[\\mathrm{7}{n};\\mathrm{7}{n}+\\mathrm{7}\\right) \\\\ $$</span>\n<span class=\"math-container\">$${x}=\\left[\\mathrm{7}{n};\\mathrm{5}{n}+\\mathrm{5}\\right) \\\\ $$</span>\n<span class=\"math-container\">$${x}=\\left\\{\\left[\\mathrm{0};\\mathrm{5}\\right),\\left[\\mathrm{7};\\mathrm{10}\\right),\\left[\\mathrm{14};\\mathrm{15}\\right)\\right\\} \\\\ $$</span>\n<span class=\"math-container\">$${x}=\\left\\{\\mathrm{0},\\mathrm{1},\\mathrm{2},\\mathrm{3},\\mathrm{4},\\mathrm{7},\\mathrm{8},\\mathrm{9},\\mathrm{14}\\right\\} \\\\ $$</span></p>\n" }, { "answer_id": 2113557, "score": 7, "text": "<p>Rewrite the given condition as $$5n\\le x&lt;5n+5,\\quad 7n\\le x&lt;7n+7$$ where $n$ is an integer. Since $x\\ge0$, $n\\ge0$ as well, so the above is equivalent to $7n\\le x&lt;5n+5$. The only nonnegative integers $n$ satisfying $7n&lt;5n+5$ are $0$, $1$, and $2$. 
And so just a little cleanup is needed.</p>\n" }, { "answer_id": 2113564, "score": 2, "text": "<p>By the division rule,</p>\n\n<p>$$x=5q+r=7q+s$$ with $0\\le r&lt;5,0\\le s&lt;7$.</p>\n\n<p>From this we draw\n$$2q=r-s\\in[-6,4],$$</p>\n\n<p>thus the only possibilities are with $q=0,1,2$.</p>\n\n<ul>\n<li><p>$q=0$: $r=s$, possible for $x\\in[0,4]\\cap[0,6]=[0,4]$;</p></li>\n<li><p>$q=1$: $5+r=7+s$, possible for $x\\in[5,9]\\cap[7,13]=[7,9]$;</p></li>\n<li><p>$q=2$: $10+r=14+s$, possible for $x\\in[10,14]\\cap[14,20]=[14]$.</p></li>\n</ul>\n" }, { "answer_id": 2113556, "score": 1, "text": "<p>For a solution the LHS and RHS are a positive integer, let's call this $c$:\n$$c = \\left \\lfloor \\frac{x}{5} \\right \\rfloor = \\left \\lfloor \\frac{x}{7} \\right \\rfloor$$</p>\n\n<p>Then, there are integers $0 \\leq a &lt; 5, 0 \\leq b &lt; 7$, s.t.:\n$$x = 5c + a = 7c + b$$\nThus:\n$$a - b = 2c$$</p>\n\n<p>With the restrictions to $a,b$, this is enough to find all solutions.</p>\n" }, { "answer_id": 2113710, "score": 0, "text": "<p><strong>Analysis:</strong></p>\n\n<p>Let us take the general form where $p&gt;q$ :</p>\n\n<p>$$\n\\left\\lfloor{\\dfrac{x}{q}}\\right\\rfloor=\\left\\lfloor{\\dfrac{x}{p}}\\right\\rfloor\n$$</p>\n\n<p>Solution for above equation exists when $\\dfrac{x}{q}-\\dfrac{x}{p}&lt;1\\Rightarrow x&lt;\\dfrac{p\\cdot q}{p-q}$</p>\n\n<p>So there is no need to check for integers greater than$\\left\\lfloor\\dfrac{p\\cdot q}{p-q}\\right\\rfloor$</p>\n\n<p>Also $\\left\\lfloor{\\dfrac{x}{q}}\\right\\rfloor=\\left\\lfloor{\\dfrac{x}{p}}\\right\\rfloor=k$ implies\n$$\n\\begin{align}\n&amp;x\\in [k\\cdot p,(k+1)p) \\cap\\ [k\\cdot q,(k+1)\\cdot q)\\\\\n\\Rightarrow\\ \\ &amp;x\\in [k\\cdot p,(k+1)\\cdot q)\n\\end{align}\n$$</p>\n\n<p>The integers which satisfy the equation are in $$[0p,1q)\\ \\cup\\ [1p,2q)\\ \\cup\\ [2p,3q) \\cup\\ ...\\ \\cup\\ [(n-1)p,nq)$$ \nwhere $nq&lt;\\left\\lfloor\\dfrac{p\\cdot q}{p-q}\\right\\rfloor$ .</p>\n" }, { "answer_id": 
4781426, "score": 0, "text": "<p><span class=\"math-container\">$$\\left[\\frac{{x}}{\\mathrm{5}}\\right]=\\left[\\frac{{x}}{\\mathrm{7}}\\right]={n}=\\left\\{\\mathrm{0},\\mathrm{1},\\mathrm{2}\\right\\} \\\\ $$</span>\n<span class=\"math-container\">$${x}=\\left[\\mathrm{5}{n};\\mathrm{5}{n}+\\mathrm{5}\\right)\\cap\\left[\\mathrm{7}{n};\\mathrm{7}{n}+\\mathrm{7}\\right) \\\\ $$</span>\n<span class=\"math-container\">$${x}=\\left[\\mathrm{7}{n};\\mathrm{5}{n}+\\mathrm{5}\\right) \\\\ $$</span>\n<span class=\"math-container\">$${x}=\\left\\{\\left[\\mathrm{0};\\mathrm{5}\\right),\\left[\\mathrm{7};\\mathrm{10}\\right),\\left[\\mathrm{14};\\mathrm{15}\\right)\\right\\} \\\\ $$</span>\n<span class=\"math-container\">$${x}=\\left\\{\\mathrm{0},\\mathrm{1},\\mathrm{2},\\mathrm{3},\\mathrm{4},\\mathrm{7},\\mathrm{8},\\mathrm{9},\\mathrm{14}\\right\\} \\\\ $$</span></p>\n" }, { "answer_id": 2113557, "score": 7, "text": "<p>Rewrite the given condition as $$5n\\le x&lt;5n+5,\\quad 7n\\le x&lt;7n+7$$ where $n$ is an integer. Since $x\\ge0$, $n\\ge0$ as well, so the above is equivalent to $7n\\le x&lt;5n+5$. The only nonnegative integers $n$ satisfying $7n&lt;5n+5$ are $0$, $1$, and $2$. 
And so just a little cleanup is needed.</p>\n" }, { "answer_id": 2113564, "score": 2, "text": "<p>By the division rule,</p>\n\n<p>$$x=5q+r=7q+s$$ with $0\le r&lt;5,0\le s&lt;7$.</p>\n\n<p>From this we draw\n$$2q=r-s\in[-6,4],$$</p>\n\n<p>thus the only possibilities are with $q=0,1,2$.</p>\n\n<ul>\n<li><p>$q=0$: $r=s$, possible for $x\in[0,4]\cap[0,6]=[0,4]$;</p></li>\n<li><p>$q=1$: $5+r=7+s$, possible for $x\in[5,9]\cap[7,13]=[7,9]$;</p></li>\n<li><p>$q=2$: $10+r=14+s$, possible for $x\in[10,14]\cap[14,20]=[14]$.</p></li>\n</ul>\n" }, { "answer_id": 2113556, "score": 1, "text": "<p>For a solution the LHS and RHS are a positive integer, let's call this $c$:\n$$c = \left \lfloor \frac{x}{5} \right \rfloor = \left \lfloor \frac{x}{7} \right \rfloor$$</p>\n\n<p>Then, there are integers $0 \leq a &lt; 5, 0 \leq b &lt; 7$, s.t.:\n$$x = 5c + a = 7c + b$$\nThus:\n$$a - b = 2c$$</p>\n\n<p>With the restrictions to $a,b$, this is enough to find all solutions.</p>\n" }, { "answer_id": 2113710, "score": 0, "text": "<p><strong>Analysis:</strong></p>\n\n<p>Let us take the general form where $p&gt;q$ :</p>\n\n<p>$$\n\left\lfloor{\dfrac{x}{q}}\right\rfloor=\left\lfloor{\dfrac{x}{p}}\right\rfloor\n$$</p>\n\n<p>Solution for above equation exists when $\dfrac{x}{q}-\dfrac{x}{p}&lt;1\Rightarrow x&lt;\dfrac{p\cdot q}{p-q}$</p>\n\n<p>So there is no need to check for integers greater than$\left\lfloor\dfrac{p\cdot q}{p-q}\right\rfloor$</p>\n\n<p>Also $\left\lfloor{\dfrac{x}{q}}\right\rfloor=\left\lfloor{\dfrac{x}{p}}\right\rfloor=k$ implies\n$$\n\begin{align}\n&amp;x\in [k\cdot p,(k+1)p) \cap\ [k\cdot q,(k+1)\cdot q)\\\n\Rightarrow\ \ &amp;x\in [k\cdot p,(k+1)\cdot q)\n\end{align}\n$$</p>\n\n<p>The integers which satisfy the equation are in $$[0p,1q)\ \cup\ [1p,2q)\ \cup\ [2p,3q) \cup\ ...\ \cup\ [(n-1)p,nq)$$ \nwhere $nq&lt;\left\lfloor\dfrac{p\cdot q}{p-q}\right\rfloor$ .</p>\n" } ]
Question: Analyzing $\bigg\lfloor{\frac{x}{5}}\bigg\rfloor=\bigg\lfloor{\frac{x}{7}}\bigg\rfloor$ Answer 1: <p>Rewrite the given condition as $$5n\le x&lt;5n+5,\quad 7n\le x&lt;7n+7$$ where $n$ is an integer. Since $x\ge0$, $n\ge0$ as well, so the above is equivalent to $7n\le x&lt;5n+5$. The only nonnegative integers $n$ satisfying $7n&lt;5n+5$ are $0$, $1$, and $2$. And so just a little cleanup is needed.</p> Answer 2: <p>By the division algorithm,</p> <p>$$x=5q+r=7q+s$$ with $0\le r&lt;5,0\le s&lt;7$.</p> <p>From this we draw $$2q=r-s\in[-6,4],$$</p> <p>thus the only possibilities are with $q=0,1,2$.</p> <ul> <li><p>$q=0$: $r=s$, possible for $x\in[0,4]\cap[0,6]=[0,4]$;</p></li> <li><p>$q=1$: $5+r=7+s$, possible for $x\in[5,9]\cap[7,13]=[7,9]$;</p></li> <li><p>$q=2$: $10+r=14+s$, possible for $x\in[10,14]\cap[14,20]=[14]$.</p></li> </ul> Answer 3: <p>For a solution, the LHS and RHS equal the same nonnegative integer; let's call it $c$: $$c = \left \lfloor \frac{x}{5} \right \rfloor = \left \lfloor \frac{x}{7} \right \rfloor$$</p> <p>Then, there are integers $0 \leq a &lt; 5, 0 \leq b &lt; 7$, s.t.: $$x = 5c + a = 7c + b$$ Thus: $$a - b = 2c$$</p> <p>With the restrictions on $a,b$, this is enough to find all solutions.</p> Answer 4: <p><strong>Analysis:</strong></p> <p>Let us take the general form where $p&gt;q$:</p> <p>$$ \left\lfloor{\dfrac{x}{q}}\right\rfloor=\left\lfloor{\dfrac{x}{p}}\right\rfloor $$</p> <p>A solution to the above equation exists only when $\dfrac{x}{q}-\dfrac{x}{p}&lt;1\Rightarrow x&lt;\dfrac{p\cdot q}{p-q}$</p> <p>So there is no need to check integers greater than $\left\lfloor\dfrac{p\cdot q}{p-q}\right\rfloor$</p> <p>Also $\left\lfloor{\dfrac{x}{q}}\right\rfloor=\left\lfloor{\dfrac{x}{p}}\right\rfloor=k$ implies $$ \begin{align} &amp;x\in [k\cdot p,(k+1)p) \cap\ [k\cdot q,(k+1)\cdot q)\\ \Rightarrow\ \ &amp;x\in [k\cdot p,(k+1)\cdot q) \end{align} $$</p> <p>The integers which satisfy the equation are in $$[0p,1q)\ \cup\ [1p,2q)\ \cup\ [2p,3q) \cup\ ...\ \cup\ 
[(n-1)p,nq)$$ where $nq&lt;\left\lfloor\dfrac{p\cdot q}{p-q}\right\rfloor$ .</p> Answer 5: <p><span class="math-container">$$\left[\frac{{x}}{\mathrm{5}}\right]=\left[\frac{{x}}{\mathrm{7}}\right]={n}=\left\{\mathrm{0},\mathrm{1},\mathrm{2}\right\} \\ $$</span> <span class="math-container">$${x}=\left[\mathrm{5}{n};\mathrm{5}{n}+\mathrm{5}\right)\cap\left[\mathrm{7}{n};\mathrm{7}{n}+\mathrm{7}\right) \\ $$</span> <span class="math-container">$${x}=\left[\mathrm{7}{n};\mathrm{5}{n}+\mathrm{5}\right) \\ $$</span> <span class="math-container">$${x}=\left\{\left[\mathrm{0};\mathrm{5}\right),\left[\mathrm{7};\mathrm{10}\right),\left[\mathrm{14};\mathrm{15}\right)\right\} \\ $$</span> <span class="math-container">$${x}=\left\{\mathrm{0},\mathrm{1},\mathrm{2},\mathrm{3},\mathrm{4},\mathrm{7},\mathrm{8},\mathrm{9},\mathrm{14}\right\} \\ $$</span></p> Answer 6: <p>Rewrite the given condition as $$5n\le x&lt;5n+5,\quad 7n\le x&lt;7n+7$$ where $n$ is an integer. Since $x\ge0$, $n\ge0$ as well, so the above is equivalent to $7n\le x&lt;5n+5$. The only nonnegative integers $n$ satisfying $7n&lt;5n+5$ are $0$, $1$, and $2$. 
And so just a little cleanup is needed.</p> Answer 7: <p>By the division rule,</p> <p>$$x=5q+r=7q+s$$ with $0\le r&lt;5,0\le s&lt;7$.</p> <p>From this we draw $$2q=r-s\in[-6,4],$$</p> <p>thus the only possibilities are with $q=0,1,2$.</p> <ul> <li><p>$q=0$: $r=s$, possible for $x\in[0,4]\cap[0,6]=[0,4]$;</p></li> <li><p>$q=1$: $5+r=7+s$, possible for $x\in[5,9]\cap[7,13]=[7,9]$;</p></li> <li><p>$q=2$: $10+r=14+s$, possible for $x\in[10,14]\cap[14,20]=[14]$.</p></li> </ul> Answer 8: <p>For a solution the LHS and RHS are a positive integer, let's call this $c$: $$c = \left \lfloor \frac{x}{5} \right \rfloor = \left \lfloor \frac{x}{7} \right \rfloor$$</p> <p>Then, there are integers $0 \leq a &lt; 5, 0 \leq b &lt; 7$, s.t.: $$x = 5c + a = 7c + b$$ Thus: $$a - b = 2c$$</p> <p>With the restrictions to $a,b$, this is enough to find all solutions.</p> Answer 9: <p><strong>Analysis:</strong></p> <p>Let us take the general form where $p&gt;q$ :</p> <p>$$ \left\lfloor{\dfrac{x}{q}}\right\rfloor=\left\lfloor{\dfrac{x}{p}}\right\rfloor $$</p> <p>Solution for above equation exists when $\dfrac{x}{q}-\dfrac{x}{p}&lt;1\Rightarrow x&lt;\dfrac{p\cdot q}{p-q}$</p> <p>So there is no need to check for integers greater than$\left\lfloor\dfrac{p\cdot q}{p-q}\right\rfloor$</p> <p>Also $\left\lfloor{\dfrac{x}{q}}\right\rfloor=\left\lfloor{\dfrac{x}{p}}\right\rfloor=k$ implies $$ \begin{align} &amp;x\in [k\cdot p,(k+1)p) \cap\ [k\cdot q,(k+1)\cdot q)\\ \Rightarrow\ \ &amp;x\in [k\cdot p,(k+1)\cdot q) \end{align} $$</p> <p>The integers which satisfy the equation are in $$[0p,1q)\ \cup\ [1p,2q)\ \cup\ [2p,3q) \cup\ ...\ \cup\ [(n-1)p,nq)$$ where $nq&lt;\left\lfloor\dfrac{p\cdot q}{p-q}\right\rfloor$ .</p> Answer 10: <p><span class="math-container">$$\left[\frac{{x}}{\mathrm{5}}\right]=\left[\frac{{x}}{\mathrm{7}}\right]={n}=\left\{\mathrm{0},\mathrm{1},\mathrm{2}\right\} \\ $$</span> <span 
class="math-container">$${x}=\left[\mathrm{5}{n};\mathrm{5}{n}+\mathrm{5}\right)\cap\left[\mathrm{7}{n};\mathrm{7}{n}+\mathrm{7}\right) \\ $$</span> <span class="math-container">$${x}=\left[\mathrm{7}{n};\mathrm{5}{n}+\mathrm{5}\right) \\ $$</span> <span class="math-container">$${x}=\left\{\left[\mathrm{0};\mathrm{5}\right),\left[\mathrm{7};\mathrm{10}\right),\left[\mathrm{14};\mathrm{15}\right)\right\} \\ $$</span> <span class="math-container">$${x}=\left\{\mathrm{0},\mathrm{1},\mathrm{2},\mathrm{3},\mathrm{4},\mathrm{7},\mathrm{8},\mathrm{9},\mathrm{14}\right\} \\ $$</span></p> Answer 11: <p>Rewrite the given condition as $$5n\le x&lt;5n+5,\quad 7n\le x&lt;7n+7$$ where $n$ is an integer. Since $x\ge0$, $n\ge0$ as well, so the above is equivalent to $7n\le x&lt;5n+5$. The only nonnegative integers $n$ satisfying $7n&lt;5n+5$ are $0$, $1$, and $2$. And so just a little cleanup is needed.</p> Answer 12: <p>By the division rule,</p> <p>$$x=5q+r=7q+s$$ with $0\le r&lt;5,0\le s&lt;7$.</p> <p>From this we draw $$2q=r-s\in[-6,4],$$</p> <p>thus the only possibilities are with $q=0,1,2$.</p> <ul> <li><p>$q=0$: $r=s$, possible for $x\in[0,4]\cap[0,6]=[0,4]$;</p></li> <li><p>$q=1$: $5+r=7+s$, possible for $x\in[5,9]\cap[7,13]=[7,9]$;</p></li> <li><p>$q=2$: $10+r=14+s$, possible for $x\in[10,14]\cap[14,20]=[14]$.</p></li> </ul> Answer 13: <p>For a solution the LHS and RHS are a positive integer, let's call this $c$: $$c = \left \lfloor \frac{x}{5} \right \rfloor = \left \lfloor \frac{x}{7} \right \rfloor$$</p> <p>Then, there are integers $0 \leq a &lt; 5, 0 \leq b &lt; 7$, s.t.: $$x = 5c + a = 7c + b$$ Thus: $$a - b = 2c$$</p> <p>With the restrictions to $a,b$, this is enough to find all solutions.</p> Answer 14: <p><strong>Analysis:</strong></p> <p>Let us take the general form where $p&gt;q$ :</p> <p>$$ \left\lfloor{\dfrac{x}{q}}\right\rfloor=\left\lfloor{\dfrac{x}{p}}\right\rfloor $$</p> <p>Solution for above equation exists when $\dfrac{x}{q}-\dfrac{x}{p}&lt;1\Rightarrow 
x&lt;\dfrac{p\cdot q}{p-q}$</p> <p>So there is no need to check for integers greater than$\left\lfloor\dfrac{p\cdot q}{p-q}\right\rfloor$</p> <p>Also $\left\lfloor{\dfrac{x}{q}}\right\rfloor=\left\lfloor{\dfrac{x}{p}}\right\rfloor=k$ implies $$ \begin{align} &amp;x\in [k\cdot p,(k+1)p) \cap\ [k\cdot q,(k+1)\cdot q)\\ \Rightarrow\ \ &amp;x\in [k\cdot p,(k+1)\cdot q) \end{align} $$</p> <p>The integers which satisfy the equation are in $$[0p,1q)\ \cup\ [1p,2q)\ \cup\ [2p,3q) \cup\ ...\ \cup\ [(n-1)p,nq)$$ where $nq&lt;\left\lfloor\dfrac{p\cdot q}{p-q}\right\rfloor$ .</p> Answer 15: <p><span class="math-container">$$\left[\frac{{x}}{\mathrm{5}}\right]=\left[\frac{{x}}{\mathrm{7}}\right]={n}=\left\{\mathrm{0},\mathrm{1},\mathrm{2}\right\} \\ $$</span> <span class="math-container">$${x}=\left[\mathrm{5}{n};\mathrm{5}{n}+\mathrm{5}\right)\cap\left[\mathrm{7}{n};\mathrm{7}{n}+\mathrm{7}\right) \\ $$</span> <span class="math-container">$${x}=\left[\mathrm{7}{n};\mathrm{5}{n}+\mathrm{5}\right) \\ $$</span> <span class="math-container">$${x}=\left\{\left[\mathrm{0};\mathrm{5}\right),\left[\mathrm{7};\mathrm{10}\right),\left[\mathrm{14};\mathrm{15}\right)\right\} \\ $$</span> <span class="math-container">$${x}=\left\{\mathrm{0},\mathrm{1},\mathrm{2},\mathrm{3},\mathrm{4},\mathrm{7},\mathrm{8},\mathrm{9},\mathrm{14}\right\} \\ $$</span></p> Answer 16: <p>Rewrite the given condition as $$5n\le x&lt;5n+5,\quad 7n\le x&lt;7n+7$$ where $n$ is an integer. Since $x\ge0$, $n\ge0$ as well, so the above is equivalent to $7n\le x&lt;5n+5$. The only nonnegative integers $n$ satisfying $7n&lt;5n+5$ are $0$, $1$, and $2$. 
And so just a little cleanup is needed.</p> Answer 17: <p>By the division rule,</p> <p>$$x=5q+r=7q+s$$ with $0\le r&lt;5,0\le s&lt;7$.</p> <p>From this we draw $$2q=r-s\in[-6,4],$$</p> <p>thus the only possibilities are with $q=0,1,2$.</p> <ul> <li><p>$q=0$: $r=s$, possible for $x\in[0,4]\cap[0,6]=[0,4]$;</p></li> <li><p>$q=1$: $5+r=7+s$, possible for $x\in[5,9]\cap[7,13]=[7,9]$;</p></li> <li><p>$q=2$: $10+r=14+s$, possible for $x\in[10,14]\cap[14,20]=[14]$.</p></li> </ul> Answer 18: <p>For a solution the LHS and RHS are a positive integer, let's call this $c$: $$c = \left \lfloor \frac{x}{5} \right \rfloor = \left \lfloor \frac{x}{7} \right \rfloor$$</p> <p>Then, there are integers $0 \leq a &lt; 5, 0 \leq b &lt; 7$, s.t.: $$x = 5c + a = 7c + b$$ Thus: $$a - b = 2c$$</p> <p>With the restrictions to $a,b$, this is enough to find all solutions.</p> Answer 19: <p><strong>Analysis:</strong></p> <p>Let us take the general form where $p&gt;q$ :</p> <p>$$ \left\lfloor{\dfrac{x}{q}}\right\rfloor=\left\lfloor{\dfrac{x}{p}}\right\rfloor $$</p> <p>Solution for above equation exists when $\dfrac{x}{q}-\dfrac{x}{p}&lt;1\Rightarrow x&lt;\dfrac{p\cdot q}{p-q}$</p> <p>So there is no need to check for integers greater than$\left\lfloor\dfrac{p\cdot q}{p-q}\right\rfloor$</p> <p>Also $\left\lfloor{\dfrac{x}{q}}\right\rfloor=\left\lfloor{\dfrac{x}{p}}\right\rfloor=k$ implies $$ \begin{align} &amp;x\in [k\cdot p,(k+1)p) \cap\ [k\cdot q,(k+1)\cdot q)\\ \Rightarrow\ \ &amp;x\in [k\cdot p,(k+1)\cdot q) \end{align} $$</p> <p>The integers which satisfy the equation are in $$[0p,1q)\ \cup\ [1p,2q)\ \cup\ [2p,3q) \cup\ ...\ \cup\ [(n-1)p,nq)$$ where $nq&lt;\left\lfloor\dfrac{p\cdot q}{p-q}\right\rfloor$ .</p> Answer 20: <p><span class="math-container">$$\left[\frac{{x}}{\mathrm{5}}\right]=\left[\frac{{x}}{\mathrm{7}}\right]={n}=\left\{\mathrm{0},\mathrm{1},\mathrm{2}\right\} \\ $$</span> <span 
class="math-container">$${x}=\left[\mathrm{5}{n};\mathrm{5}{n}+\mathrm{5}\right)\cap\left[\mathrm{7}{n};\mathrm{7}{n}+\mathrm{7}\right) \\ $$</span> <span class="math-container">$${x}=\left[\mathrm{7}{n};\mathrm{5}{n}+\mathrm{5}\right) \\ $$</span> <span class="math-container">$${x}=\left\{\left[\mathrm{0};\mathrm{5}\right),\left[\mathrm{7};\mathrm{10}\right),\left[\mathrm{14};\mathrm{15}\right)\right\} \\ $$</span> <span class="math-container">$${x}=\left\{\mathrm{0},\mathrm{1},\mathrm{2},\mathrm{3},\mathrm{4},\mathrm{7},\mathrm{8},\mathrm{9},\mathrm{14}\right\} \\ $$</span></p> Answer 21: <p>Rewrite the given condition as $$5n\le x&lt;5n+5,\quad 7n\le x&lt;7n+7$$ where $n$ is an integer. Since $x\ge0$, $n\ge0$ as well, so the above is equivalent to $7n\le x&lt;5n+5$. The only nonnegative integers $n$ satisfying $7n&lt;5n+5$ are $0$, $1$, and $2$. And so just a little cleanup is needed.</p> Answer 22: <p>By the division rule,</p> <p>$$x=5q+r=7q+s$$ with $0\le r&lt;5,0\le s&lt;7$.</p> <p>From this we draw $$2q=r-s\in[-6,4],$$</p> <p>thus the only possibilities are with $q=0,1,2$.</p> <ul> <li><p>$q=0$: $r=s$, possible for $x\in[0,4]\cap[0,6]=[0,4]$;</p></li> <li><p>$q=1$: $5+r=7+s$, possible for $x\in[5,9]\cap[7,13]=[7,9]$;</p></li> <li><p>$q=2$: $10+r=14+s$, possible for $x\in[10,14]\cap[14,20]=[14]$.</p></li> </ul> Answer 23: <p>For a solution the LHS and RHS are a positive integer, let's call this $c$: $$c = \left \lfloor \frac{x}{5} \right \rfloor = \left \lfloor \frac{x}{7} \right \rfloor$$</p> <p>Then, there are integers $0 \leq a &lt; 5, 0 \leq b &lt; 7$, s.t.: $$x = 5c + a = 7c + b$$ Thus: $$a - b = 2c$$</p> <p>With the restrictions to $a,b$, this is enough to find all solutions.</p> Answer 24: <p><strong>Analysis:</strong></p> <p>Let us take the general form where $p&gt;q$ :</p> <p>$$ \left\lfloor{\dfrac{x}{q}}\right\rfloor=\left\lfloor{\dfrac{x}{p}}\right\rfloor $$</p> <p>Solution for above equation exists when $\dfrac{x}{q}-\dfrac{x}{p}&lt;1\Rightarrow 
x&lt;\dfrac{p\cdot q}{p-q}$</p> <p>So there is no need to check for integers greater than$\left\lfloor\dfrac{p\cdot q}{p-q}\right\rfloor$</p> <p>Also $\left\lfloor{\dfrac{x}{q}}\right\rfloor=\left\lfloor{\dfrac{x}{p}}\right\rfloor=k$ implies $$ \begin{align} &amp;x\in [k\cdot p,(k+1)p) \cap\ [k\cdot q,(k+1)\cdot q)\\ \Rightarrow\ \ &amp;x\in [k\cdot p,(k+1)\cdot q) \end{align} $$</p> <p>The integers which satisfy the equation are in $$[0p,1q)\ \cup\ [1p,2q)\ \cup\ [2p,3q) \cup\ ...\ \cup\ [(n-1)p,nq)$$ where $nq&lt;\left\lfloor\dfrac{p\cdot q}{p-q}\right\rfloor$ .</p> Answer 25: <p><span class="math-container">$$\left[\frac{{x}}{\mathrm{5}}\right]=\left[\frac{{x}}{\mathrm{7}}\right]={n}=\left\{\mathrm{0},\mathrm{1},\mathrm{2}\right\} \\ $$</span> <span class="math-container">$${x}=\left[\mathrm{5}{n};\mathrm{5}{n}+\mathrm{5}\right)\cap\left[\mathrm{7}{n};\mathrm{7}{n}+\mathrm{7}\right) \\ $$</span> <span class="math-container">$${x}=\left[\mathrm{7}{n};\mathrm{5}{n}+\mathrm{5}\right) \\ $$</span> <span class="math-container">$${x}=\left\{\left[\mathrm{0};\mathrm{5}\right),\left[\mathrm{7};\mathrm{10}\right),\left[\mathrm{14};\mathrm{15}\right)\right\} \\ $$</span> <span class="math-container">$${x}=\left\{\mathrm{0},\mathrm{1},\mathrm{2},\mathrm{3},\mathrm{4},\mathrm{7},\mathrm{8},\mathrm{9},\mathrm{14}\right\} \\ $$</span></p> Answer 26: <p>Rewrite the given condition as $$5n\le x&lt;5n+5,\quad 7n\le x&lt;7n+7$$ where $n$ is an integer. Since $x\ge0$, $n\ge0$ as well, so the above is equivalent to $7n\le x&lt;5n+5$. The only nonnegative integers $n$ satisfying $7n&lt;5n+5$ are $0$, $1$, and $2$. 
And so just a little cleanup is needed.</p> Answer 27: <p>By the division rule,</p> <p>$$x=5q+r=7q+s$$ with $0\le r&lt;5,0\le s&lt;7$.</p> <p>From this we draw $$2q=r-s\in[-6,4],$$</p> <p>thus the only possibilities are with $q=0,1,2$.</p> <ul> <li><p>$q=0$: $r=s$, possible for $x\in[0,4]\cap[0,6]=[0,4]$;</p></li> <li><p>$q=1$: $5+r=7+s$, possible for $x\in[5,9]\cap[7,13]=[7,9]$;</p></li> <li><p>$q=2$: $10+r=14+s$, possible for $x\in[10,14]\cap[14,20]=[14]$.</p></li> </ul> Answer 28: <p>For a solution the LHS and RHS are a positive integer, let's call this $c$: $$c = \left \lfloor \frac{x}{5} \right \rfloor = \left \lfloor \frac{x}{7} \right \rfloor$$</p> <p>Then, there are integers $0 \leq a &lt; 5, 0 \leq b &lt; 7$, s.t.: $$x = 5c + a = 7c + b$$ Thus: $$a - b = 2c$$</p> <p>With the restrictions to $a,b$, this is enough to find all solutions.</p> Answer 29: <p><strong>Analysis:</strong></p> <p>Let us take the general form where $p&gt;q$ :</p> <p>$$ \left\lfloor{\dfrac{x}{q}}\right\rfloor=\left\lfloor{\dfrac{x}{p}}\right\rfloor $$</p> <p>Solution for above equation exists when $\dfrac{x}{q}-\dfrac{x}{p}&lt;1\Rightarrow x&lt;\dfrac{p\cdot q}{p-q}$</p> <p>So there is no need to check for integers greater than$\left\lfloor\dfrac{p\cdot q}{p-q}\right\rfloor$</p> <p>Also $\left\lfloor{\dfrac{x}{q}}\right\rfloor=\left\lfloor{\dfrac{x}{p}}\right\rfloor=k$ implies $$ \begin{align} &amp;x\in [k\cdot p,(k+1)p) \cap\ [k\cdot q,(k+1)\cdot q)\\ \Rightarrow\ \ &amp;x\in [k\cdot p,(k+1)\cdot q) \end{align} $$</p> <p>The integers which satisfy the equation are in $$[0p,1q)\ \cup\ [1p,2q)\ \cup\ [2p,3q) \cup\ ...\ \cup\ [(n-1)p,nq)$$ where $nq&lt;\left\lfloor\dfrac{p\cdot q}{p-q}\right\rfloor$ .</p> Answer 30: <p><span class="math-container">$$\left[\frac{{x}}{\mathrm{5}}\right]=\left[\frac{{x}}{\mathrm{7}}\right]={n}=\left\{\mathrm{0},\mathrm{1},\mathrm{2}\right\} \\ $$</span> <span 
class="math-container">$${x}=\left[\mathrm{5}{n};\mathrm{5}{n}+\mathrm{5}\right)\cap\left[\mathrm{7}{n};\mathrm{7}{n}+\mathrm{7}\right) \\ $$</span> <span class="math-container">$${x}=\left[\mathrm{7}{n};\mathrm{5}{n}+\mathrm{5}\right) \\ $$</span> <span class="math-container">$${x}=\left\{\left[\mathrm{0};\mathrm{5}\right),\left[\mathrm{7};\mathrm{10}\right),\left[\mathrm{14};\mathrm{15}\right)\right\} \\ $$</span> <span class="math-container">$${x}=\left\{\mathrm{0},\mathrm{1},\mathrm{2},\mathrm{3},\mathrm{4},\mathrm{7},\mathrm{8},\mathrm{9},\mathrm{14}\right\} \\ $$</span></p> Answer 31: <p>Rewrite the given condition as $$5n\le x&lt;5n+5,\quad 7n\le x&lt;7n+7$$ where $n$ is an integer. Since $x\ge0$, $n\ge0$ as well, so the above is equivalent to $7n\le x&lt;5n+5$. The only nonnegative integers $n$ satisfying $7n&lt;5n+5$ are $0$, $1$, and $2$. And so just a little cleanup is needed.</p> Answer 32: <p>By the division rule,</p> <p>$$x=5q+r=7q+s$$ with $0\le r&lt;5,0\le s&lt;7$.</p> <p>From this we draw $$2q=r-s\in[-6,4],$$</p> <p>thus the only possibilities are with $q=0,1,2$.</p> <ul> <li><p>$q=0$: $r=s$, possible for $x\in[0,4]\cap[0,6]=[0,4]$;</p></li> <li><p>$q=1$: $5+r=7+s$, possible for $x\in[5,9]\cap[7,13]=[7,9]$;</p></li> <li><p>$q=2$: $10+r=14+s$, possible for $x\in[10,14]\cap[14,20]=[14]$.</p></li> </ul> Answer 33: <p>For a solution the LHS and RHS are a positive integer, let's call this $c$: $$c = \left \lfloor \frac{x}{5} \right \rfloor = \left \lfloor \frac{x}{7} \right \rfloor$$</p> <p>Then, there are integers $0 \leq a &lt; 5, 0 \leq b &lt; 7$, s.t.: $$x = 5c + a = 7c + b$$ Thus: $$a - b = 2c$$</p> <p>With the restrictions to $a,b$, this is enough to find all solutions.</p> Answer 34: <p><strong>Analysis:</strong></p> <p>Let us take the general form where $p&gt;q$ :</p> <p>$$ \left\lfloor{\dfrac{x}{q}}\right\rfloor=\left\lfloor{\dfrac{x}{p}}\right\rfloor $$</p> <p>Solution for above equation exists when $\dfrac{x}{q}-\dfrac{x}{p}&lt;1\Rightarrow 
x&lt;\dfrac{p\cdot q}{p-q}$</p> <p>So there is no need to check for integers greater than$\left\lfloor\dfrac{p\cdot q}{p-q}\right\rfloor$</p> <p>Also $\left\lfloor{\dfrac{x}{q}}\right\rfloor=\left\lfloor{\dfrac{x}{p}}\right\rfloor=k$ implies $$ \begin{align} &amp;x\in [k\cdot p,(k+1)p) \cap\ [k\cdot q,(k+1)\cdot q)\\ \Rightarrow\ \ &amp;x\in [k\cdot p,(k+1)\cdot q) \end{align} $$</p> <p>The integers which satisfy the equation are in $$[0p,1q)\ \cup\ [1p,2q)\ \cup\ [2p,3q) \cup\ ...\ \cup\ [(n-1)p,nq)$$ where $nq&lt;\left\lfloor\dfrac{p\cdot q}{p-q}\right\rfloor$ .</p> Answer 35: <p><span class="math-container">$$\left[\frac{{x}}{\mathrm{5}}\right]=\left[\frac{{x}}{\mathrm{7}}\right]={n}=\left\{\mathrm{0},\mathrm{1},\mathrm{2}\right\} \\ $$</span> <span class="math-container">$${x}=\left[\mathrm{5}{n};\mathrm{5}{n}+\mathrm{5}\right)\cap\left[\mathrm{7}{n};\mathrm{7}{n}+\mathrm{7}\right) \\ $$</span> <span class="math-container">$${x}=\left[\mathrm{7}{n};\mathrm{5}{n}+\mathrm{5}\right) \\ $$</span> <span class="math-container">$${x}=\left\{\left[\mathrm{0};\mathrm{5}\right),\left[\mathrm{7};\mathrm{10}\right),\left[\mathrm{14};\mathrm{15}\right)\right\} \\ $$</span> <span class="math-container">$${x}=\left\{\mathrm{0},\mathrm{1},\mathrm{2},\mathrm{3},\mathrm{4},\mathrm{7},\mathrm{8},\mathrm{9},\mathrm{14}\right\} \\ $$</span></p> Answer 36: <p>Rewrite the given condition as $$5n\le x&lt;5n+5,\quad 7n\le x&lt;7n+7$$ where $n$ is an integer. Since $x\ge0$, $n\ge0$ as well, so the above is equivalent to $7n\le x&lt;5n+5$. The only nonnegative integers $n$ satisfying $7n&lt;5n+5$ are $0$, $1$, and $2$. 
And so just a little cleanup is needed.</p> Answer 37: <p>By the division rule,</p> <p>$$x=5q+r=7q+s$$ with $0\le r&lt;5,0\le s&lt;7$.</p> <p>From this we draw $$2q=r-s\in[-6,4],$$</p> <p>thus the only possibilities are with $q=0,1,2$.</p> <ul> <li><p>$q=0$: $r=s$, possible for $x\in[0,4]\cap[0,6]=[0,4]$;</p></li> <li><p>$q=1$: $5+r=7+s$, possible for $x\in[5,9]\cap[7,13]=[7,9]$;</p></li> <li><p>$q=2$: $10+r=14+s$, possible for $x\in[10,14]\cap[14,20]=[14]$.</p></li> </ul> Answer 38: <p>For a solution the LHS and RHS are a positive integer, let's call this $c$: $$c = \left \lfloor \frac{x}{5} \right \rfloor = \left \lfloor \frac{x}{7} \right \rfloor$$</p> <p>Then, there are integers $0 \leq a &lt; 5, 0 \leq b &lt; 7$, s.t.: $$x = 5c + a = 7c + b$$ Thus: $$a - b = 2c$$</p> <p>With the restrictions to $a,b$, this is enough to find all solutions.</p> Answer 39: <p><strong>Analysis:</strong></p> <p>Let us take the general form where $p&gt;q$ :</p> <p>$$ \left\lfloor{\dfrac{x}{q}}\right\rfloor=\left\lfloor{\dfrac{x}{p}}\right\rfloor $$</p> <p>Solution for above equation exists when $\dfrac{x}{q}-\dfrac{x}{p}&lt;1\Rightarrow x&lt;\dfrac{p\cdot q}{p-q}$</p> <p>So there is no need to check for integers greater than$\left\lfloor\dfrac{p\cdot q}{p-q}\right\rfloor$</p> <p>Also $\left\lfloor{\dfrac{x}{q}}\right\rfloor=\left\lfloor{\dfrac{x}{p}}\right\rfloor=k$ implies $$ \begin{align} &amp;x\in [k\cdot p,(k+1)p) \cap\ [k\cdot q,(k+1)\cdot q)\\ \Rightarrow\ \ &amp;x\in [k\cdot p,(k+1)\cdot q) \end{align} $$</p> <p>The integers which satisfy the equation are in $$[0p,1q)\ \cup\ [1p,2q)\ \cup\ [2p,3q) \cup\ ...\ \cup\ [(n-1)p,nq)$$ where $nq&lt;\left\lfloor\dfrac{p\cdot q}{p-q}\right\rfloor$ .</p> Answer 40: <p><span class="math-container">$$\left[\frac{{x}}{\mathrm{5}}\right]=\left[\frac{{x}}{\mathrm{7}}\right]={n}=\left\{\mathrm{0},\mathrm{1},\mathrm{2}\right\} \\ $$</span> <span 
class="math-container">$${x}=\left[\mathrm{5}{n};\mathrm{5}{n}+\mathrm{5}\right)\cap\left[\mathrm{7}{n};\mathrm{7}{n}+\mathrm{7}\right) \\ $$</span> <span class="math-container">$${x}=\left[\mathrm{7}{n};\mathrm{5}{n}+\mathrm{5}\right) \\ $$</span> <span class="math-container">$${x}=\left\{\left[\mathrm{0};\mathrm{5}\right),\left[\mathrm{7};\mathrm{10}\right),\left[\mathrm{14};\mathrm{15}\right)\right\} \\ $$</span> <span class="math-container">$${x}=\left\{\mathrm{0},\mathrm{1},\mathrm{2},\mathrm{3},\mathrm{4},\mathrm{7},\mathrm{8},\mathrm{9},\mathrm{14}\right\} \\ $$</span></p> Answer 41: <p>Rewrite the given condition as $$5n\le x&lt;5n+5,\quad 7n\le x&lt;7n+7$$ where $n$ is an integer. Since $x\ge0$, $n\ge0$ as well, so the above is equivalent to $7n\le x&lt;5n+5$. The only nonnegative integers $n$ satisfying $7n&lt;5n+5$ are $0$, $1$, and $2$. And so just a little cleanup is needed.</p> Answer 42: <p>By the division rule,</p> <p>$$x=5q+r=7q+s$$ with $0\le r&lt;5,0\le s&lt;7$.</p> <p>From this we draw $$2q=r-s\in[-6,4],$$</p> <p>thus the only possibilities are with $q=0,1,2$.</p> <ul> <li><p>$q=0$: $r=s$, possible for $x\in[0,4]\cap[0,6]=[0,4]$;</p></li> <li><p>$q=1$: $5+r=7+s$, possible for $x\in[5,9]\cap[7,13]=[7,9]$;</p></li> <li><p>$q=2$: $10+r=14+s$, possible for $x\in[10,14]\cap[14,20]=[14]$.</p></li> </ul> Answer 43: <p>For a solution the LHS and RHS are a positive integer, let's call this $c$: $$c = \left \lfloor \frac{x}{5} \right \rfloor = \left \lfloor \frac{x}{7} \right \rfloor$$</p> <p>Then, there are integers $0 \leq a &lt; 5, 0 \leq b &lt; 7$, s.t.: $$x = 5c + a = 7c + b$$ Thus: $$a - b = 2c$$</p> <p>With the restrictions to $a,b$, this is enough to find all solutions.</p> Answer 44: <p><strong>Analysis:</strong></p> <p>Let us take the general form where $p&gt;q$ :</p> <p>$$ \left\lfloor{\dfrac{x}{q}}\right\rfloor=\left\lfloor{\dfrac{x}{p}}\right\rfloor $$</p> <p>Solution for above equation exists when $\dfrac{x}{q}-\dfrac{x}{p}&lt;1\Rightarrow 
x&lt;\dfrac{p\cdot q}{p-q}$</p> <p>So there is no need to check for integers greater than$\left\lfloor\dfrac{p\cdot q}{p-q}\right\rfloor$</p> <p>Also $\left\lfloor{\dfrac{x}{q}}\right\rfloor=\left\lfloor{\dfrac{x}{p}}\right\rfloor=k$ implies $$ \begin{align} &amp;x\in [k\cdot p,(k+1)p) \cap\ [k\cdot q,(k+1)\cdot q)\\ \Rightarrow\ \ &amp;x\in [k\cdot p,(k+1)\cdot q) \end{align} $$</p> <p>The integers which satisfy the equation are in $$[0p,1q)\ \cup\ [1p,2q)\ \cup\ [2p,3q) \cup\ ...\ \cup\ [(n-1)p,nq)$$ where $nq&lt;\left\lfloor\dfrac{p\cdot q}{p-q}\right\rfloor$ .</p> Answer 45: <p><span class="math-container">$$\left[\frac{{x}}{\mathrm{5}}\right]=\left[\frac{{x}}{\mathrm{7}}\right]={n}=\left\{\mathrm{0},\mathrm{1},\mathrm{2}\right\} \\ $$</span> <span class="math-container">$${x}=\left[\mathrm{5}{n};\mathrm{5}{n}+\mathrm{5}\right)\cap\left[\mathrm{7}{n};\mathrm{7}{n}+\mathrm{7}\right) \\ $$</span> <span class="math-container">$${x}=\left[\mathrm{7}{n};\mathrm{5}{n}+\mathrm{5}\right) \\ $$</span> <span class="math-container">$${x}=\left\{\left[\mathrm{0};\mathrm{5}\right),\left[\mathrm{7};\mathrm{10}\right),\left[\mathrm{14};\mathrm{15}\right)\right\} \\ $$</span> <span class="math-container">$${x}=\left\{\mathrm{0},\mathrm{1},\mathrm{2},\mathrm{3},\mathrm{4},\mathrm{7},\mathrm{8},\mathrm{9},\mathrm{14}\right\} \\ $$</span></p>
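The solution set derived in the answers above can be verified by brute force: any solution of $\lfloor x/5\rfloor=\lfloor x/7\rfloor$ satisfies $x/5-x/7<1$, i.e. $x<35/2$, so checking the nonnegative integers up to $17$ suffices. A quick sketch:

```python
# Brute-force check of floor(x/5) == floor(x/7) for nonnegative integers.
# Any solution satisfies x/5 - x/7 < 1, i.e. x < 35/2, so x <= 17 suffices.
solutions = [x for x in range(18) if x // 5 == x // 7]
print(solutions)  # [0, 1, 2, 3, 4, 7, 8, 9, 14]
```

This matches the set $\{0,1,2,3,4,7,8,9,14\}$ obtained analytically above.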
About the roots of cubic polynomial
2,114,064
9
https://math.stackexchange.com/questions/2114064/about-the-roots-of-cubic-polynomial
<p>Let $\alpha, \beta, \gamma$ be the complex roots of the polynomial $P_3(x)=ax^3+bx^2+cx+d$. </p> <p>Is there any known formula for calculating $\alpha^2 \beta+\beta^2 \gamma+ \gamma^2\alpha \; , \; \alpha \beta^2+\beta \gamma^2+\gamma\alpha^2$ (in terms of $a,b,c,d$)?</p> <p>If no, can someone obtain it?</p>
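Although $u=\alpha^2\beta+\beta^2\gamma+\gamma^2\alpha$ and $v=\alpha\beta^2+\beta\gamma^2+\gamma\alpha^2$ are not individually symmetric in the roots, their sum and product are, and the answers below express them through the elementary symmetric polynomials $e_1,e_2,e_3$. A numeric sanity check of those identities, using the sample cubic $(x-1)(x-2)(x-3)$ with roots $1,2,3$ (the variable names are illustrative, not from the original post):

```python
# Roots of the monic cubic (x-1)(x-2)(x-3); e1, e2, e3 are the
# elementary symmetric polynomials (Vieta: e1 = -b/a, e2 = c/a, e3 = -d/a).
alpha, beta, gamma = 1, 2, 3
e1 = alpha + beta + gamma
e2 = alpha*beta + beta*gamma + gamma*alpha
e3 = alpha*beta*gamma

u = alpha**2*beta + beta**2*gamma + gamma**2*alpha  # alpha^2 beta + beta^2 gamma + gamma^2 alpha
v = alpha*beta**2 + beta*gamma**2 + gamma*alpha**2  # alpha beta^2 + beta gamma^2 + gamma alpha^2

# The two symmetric combinations derived in the answers below:
assert u + v == e1*e2 - 3*e3
assert u * v == e1**3*e3 - 6*e1*e2*e3 + e2**3 + 9*e3**2
print(u, v)  # 23 25
```

Here $u$ and $v$ are the two roots of $t^2-(e_1e_2-3e_3)t+(e_1^3e_3-6e_1e_2e_3+e_2^3+9e_3^2)=0$, which is as far as $a,b,c,d$ alone can determine them.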
[ { "answer_id": 2114140, "score": 23, "text": "<p>Let $u=\\alpha^2\\beta+\\beta^2\\gamma+\\gamma^2\\alpha$ and $v=\\alpha\\beta^2+\\beta\\gamma^2+\\gamma\\alpha^2$. We have the following relations between the elementary symmetric polynomials $e_1$, $e_2$ and $e_3$ of $\\alpha$, $\\beta$ and $\\gamma$ and the coefficients of $P_3$:\n$$\ne_1 = \\alpha+\\beta+\\gamma = -\\frac{b}{a} \\\\\ne_2 = \\alpha\\beta+\\beta\\gamma+\\gamma\\alpha = +\\frac{c}{a} \\\\\ne_3 = \\alpha\\beta\\gamma = -\\frac{d}{a}\n$$\nWe find\n\\begin{eqnarray*}\np &amp; := u+v = &amp; e_1e_2-3e_3 \\\\\nq &amp; := uv = &amp; e_1^3e_3-6e_1e_2e_3+e_2^3+9e_3^2\n\\end{eqnarray*}\nFrom this, you can obtain $u$ and $v$ solving $t^2-pt+q=0$.</p>\n" }, { "answer_id": 2114117, "score": 12, "text": "<p>Let </p>\n\n<p>$$\\cases{E(\\alpha,\\beta,\\gamma):=\\alpha^2 \\beta+\\beta^2 \\gamma+ \\gamma^2\\alpha\\\\F(\\alpha,\\beta,\\gamma):=\n\\alpha \\beta^2+\\beta \\gamma^2+\\gamma\\alpha^2}$$</p>\n\n<p>Your problem, strictly speaking, has no solution (but we will see hereafter that we can somewhat \"bypass\" this issue):</p>\n\n<p>Here is a counterexample: Consider $P_3(x)=x(x^2-1)$ with roots $\\{0,1,-1\\}$.</p>\n\n<p>Expression $E$ can take two different values depending on the order in which the roots are taken\n:\n$$E(\\alpha,\\beta,\\gamma)=\\begin{cases}-1 \\ &amp; \\text{if} \\ \\ \\alpha=1,\\beta=-1,\\gamma=0\\\\ \\ \\ \\ 1 \\ &amp; \\text{if} \\ \\ \\alpha=-1,\\beta=1,\\gamma=0\\end{cases}.$$</p>\n\n<p>As said in the other answers, the fundamental reason is that neither $E$ nor $F$ is symmetrical in $\\alpha, \\beta, \\gamma$, thus not invariant for certain permutations of the roots. </p>\n\n<p>Of course, if you take $E+F$, you get a symmetrical polynomial in $\\alpha, \\beta, \\gamma.$ </p>\n\n<p>But a slightly deeper analysis will be rewarding. 
In fact, <strong>expressions $E$ and $F$ are conjugate</strong> in a sense that will become clear, once we have seen the two following examples:</p>\n\n<ul>\n<li><p>taking $P_3(x)=x^3+x^2-x$, value $E(\\alpha,\\beta,\\gamma)=\\Phi\\approx1.618$ is taken 3 times, and value $1-\\Phi\\approx-0.618$ is taken as well 3 times ($\\Phi$ is the golden ratio).</p></li>\n<li><p>taking $P_3(x)=x^3+x^2+x$, (with some complex roots), value $-\\frac12+i\\frac{\\sqrt{3}}{2}$ is taken 3 times, and conjugate value (with the meaning of \"conjugation in $\\mathbb{C}$\"): $-\\frac12-i\\frac{\\sqrt{3}}{2}$ is taken 3 times.</p></li>\n</ul>\n\n<p>Let us prove this \"conjugation\" by inspecting the set $S_3$ of $3!=6$ permutations of the roots $\\alpha,\\beta,\\gamma$:</p>\n\n<p>$$\\binom{\\alpha \\ \\beta \\ \\gamma}{\\alpha \\ \\beta \\ \\gamma}: \\ \\ E(\\alpha,\\beta,\\gamma)=E(\\alpha,\\beta,\\gamma)$$</p>\n\n<p>$$\\binom{\\alpha \\ \\beta \\ \\gamma}{\\beta \\ \\gamma \\ \\alpha}: \\ \\ E(\\beta,\\gamma,\\alpha)=E(\\alpha,\\beta,\\gamma)$$</p>\n\n<p>$$\\binom{\\alpha \\ \\beta \\ \\gamma}{\\gamma \\ \\alpha \\ \\beta}: \\ \\ E(\\gamma,\\alpha,\\beta)=E(\\alpha,\\beta,\\gamma)$$</p>\n\n<p>$$\\binom{\\alpha \\ \\beta \\ \\gamma}{\\beta \\ \\alpha \\ \\gamma}: \\ \\ E(\\beta,\\alpha,\\gamma)=F(\\alpha,\\beta,\\gamma)$$</p>\n\n<p>$$\\binom{\\alpha \\ \\beta \\ \\gamma}{\\beta \\ \\gamma \\ \\alpha}: \\ \\ E(\\beta,\\gamma ,\\alpha)=F(\\alpha,\\beta,\\gamma)$$</p>\n\n<p>$$\\binom{\\alpha \\ \\beta \\ \\gamma}{\\gamma\\ \\beta \\ \\alpha}: \\ \\ E(\\alpha,\\beta,\\gamma)=F(\\alpha,\\beta,\\gamma)$$</p>\n\n<p>We have thus proved that, once a permutation is done on the roots, the value taken by $E$ is either unchanged (3 cases) or is the value of $F$ (the 3 other cases). 
As said by @CCorn in his answer, it is the mathematical object $\\{E,F\\}$ which is invariant under the action of $S_3$.</p>\n\n<p>Remark: The first 3 cases correspond to a circular permutation: it is evident that $E$ has been built so that it is invariant by this type of permutation.</p>\n\n<p><strong>Addendum:</strong> </p>\n\n<p>This is a little window on the origin of the so-called \"Galois theory\".</p>\n\n<p>Have a look at the very pedagogical presentation <a href=\"http://www-math.ucdenver.edu/~wcherowi/courses/m4010/polynom3.pdf\" rel=\"nofollow noreferrer\">here</a> where it is explained how the ideas of Lagrange for solving the quartic equation where based on polynomials (called resolvents) that, under permutation, could take only a very small number of values. See also (<a href=\"https://en.wikipedia.org/wiki/Quartic_function\" rel=\"nofollow noreferrer\">https://en.wikipedia.org/wiki/Quartic_function</a>). Another reference (<a href=\"https://math.stackexchange.com/q/49141\">Galois theory: splitting field of cubic as a vector space</a>). </p>\n" }, { "answer_id": 2114111, "score": 9, "text": "<p>No. $a,b,c,d$ are <a href=\"https://en.wikipedia.org/wiki/Symmetric_polynomial\">symmetric polynomials</a> in $\\alpha,\\beta,\\gamma$ and thus invariant under every permutation of $(\\alpha,\\beta,\\gamma)$.\nThe expressions you want are invariant under some permutations, but not all, and therefore cannot be expressed in terms of only $a,b,c,d$.</p>\n\n<p>Their sum is symmetric however, as @DougM's answer indicates.</p>\n\n<p><strong>Update:</strong> Their product too, and this results in a quadratic equation for the two expressions, as @ReinhardMeier's answer shows. 
Thus, the individual values of your desired expressions (call them $u$ and $v$) are not determined uniquely from $a,b,c,d$, but the set $\\{u,v\\}$ is.</p>\n" }, { "answer_id": 2114092, "score": 6, "text": "<p>$\\frac {b}{a} = -\\alpha - \\beta - \\gamma\\\\\n\\frac {c}{a} = \\alpha \\beta + \\beta\\gamma + \\gamma\\alpha\\\\\n\\frac {d}{a} = -\\alpha \\beta\\gamma\\\\\n-\\frac {b}{a}\\frac {c}{a} + 3\\frac {d}{a} = (\\alpha^2 \\beta + \\beta^2\\gamma + \\gamma^2\\alpha)+(\\alpha \\beta^2 + \\beta\\gamma^2 + \\gamma\\alpha^2)$</p>\n\n<p>I don't know if I can do much more that that.</p>\n" }, { "answer_id": 2114108, "score": 6, "text": "<p>There can be no such formula. The coefficients $a$, $b$, $c$ and $d$ are <a href=\"https://en.wikipedia.org/wiki/Symmetric_polynomial\">symmetric functions</a> of the roots $\\alpha$, $\\beta$ and $\\gamma$. Your polynomials $\\alpha^2 \\beta+\\beta^2 \\gamma+ \\gamma^2\\alpha$ and $\\alpha \\beta^2+\\beta \\gamma^2+\\gamma\\alpha^2$ are not symmetric functions, so they cannot be expressed as functions of the coefficients (since any such function would be symmetric).</p>\n" }, { "answer_id": 2114140, "score": 23, "text": "<p>Let $u=\\alpha^2\\beta+\\beta^2\\gamma+\\gamma^2\\alpha$ and $v=\\alpha\\beta^2+\\beta\\gamma^2+\\gamma\\alpha^2$. 
We have the following relations between the elementary symmetric polynomials $e_1$, $e_2$ and $e_3$ of $\\alpha$, $\\beta$ and $\\gamma$ and the coefficients of $P_3$:\n$$\ne_1 = \\alpha+\\beta+\\gamma = -\\frac{b}{a} \\\\\ne_2 = \\alpha\\beta+\\beta\\gamma+\\gamma\\alpha = +\\frac{c}{a} \\\\\ne_3 = \\alpha\\beta\\gamma = -\\frac{d}{a}\n$$\nWe find\n\\begin{eqnarray*}\np &amp; := u+v = &amp; e_1e_2-3e_3 \\\\\nq &amp; := uv = &amp; e_1^3e_3-6e_1e_2e_3+e_2^3+9e_3^2\n\\end{eqnarray*}\nFrom this, you can obtain $u$ and $v$ solving $t^2-pt+q=0$.</p>\n" }, { "answer_id": 2114117, "score": 12, "text": "<p>Let </p>\n\n<p>$$\\cases{E(\\alpha,\\beta,\\gamma):=\\alpha^2 \\beta+\\beta^2 \\gamma+ \\gamma^2\\alpha\\\\F(\\alpha,\\beta,\\gamma):=\n\\alpha \\beta^2+\\beta \\gamma^2+\\gamma\\alpha^2}$$</p>\n\n<p>Your problem, strictly speaking, has no solution (but we will see hereafter that we can somewhat \"bypass\" this issue):</p>\n\n<p>Here is a counterexample: Consider $P_3(x)=x(x^2-1)$ with roots $\\{0,1,-1\\}$.</p>\n\n<p>Expression $E$ can take two different values depending on the order in which the roots are taken\n:\n$$E(\\alpha,\\beta,\\gamma)=\\begin{cases}-1 \\ &amp; \\text{if} \\ \\ \\alpha=1,\\beta=-1,\\gamma=0\\\\ \\ \\ \\ 1 \\ &amp; \\text{if} \\ \\ \\alpha=-1,\\beta=1,\\gamma=0\\end{cases}.$$</p>\n\n<p>As said in the other answers, the fundamental reason is that neither $E$ nor $F$ is symmetrical in $\\alpha, \\beta, \\gamma$, thus not invariant for certain permutations of the roots. </p>\n\n<p>Of course, if you take $E+F$, you get a symmetrical polynomial in $\\alpha, \\beta, \\gamma.$ </p>\n\n<p>But a slightly deeper analysis will be rewarding. 
In fact, <strong>expressions $E$ and $F$ are conjugate</strong> in a sense that will become clear, once we have seen the two following examples:</p>\n\n<ul>\n<li><p>taking $P_3(x)=x^3+x^2-x$, value $E(\\alpha,\\beta,\\gamma)=\\Phi\\approx1.618$ is taken 3 times, and value $1-\\Phi\\approx-0.618$ is taken as well 3 times ($\\Phi$ is the golden ratio).</p></li>\n<li><p>taking $P_3(x)=x^3+x^2+x$, (with some complex roots), value $-\\frac12+i\\frac{\\sqrt{3}}{2}$ is taken 3 times, and conjugate value (with the meaning of \"conjugation in $\\mathbb{C}$\"): $-\\frac12-i\\frac{\\sqrt{3}}{2}$ is taken 3 times.</p></li>\n</ul>\n\n<p>Let us prove this \"conjugation\" by inspecting the set $S_3$ of $3!=6$ permutations of the roots $\\alpha,\\beta,\\gamma$:</p>\n\n<p>$$\\binom{\\alpha \\ \\beta \\ \\gamma}{\\alpha \\ \\beta \\ \\gamma}: \\ \\ E(\\alpha,\\beta,\\gamma)=E(\\alpha,\\beta,\\gamma)$$</p>\n\n<p>$$\\binom{\\alpha \\ \\beta \\ \\gamma}{\\beta \\ \\gamma \\ \\alpha}: \\ \\ E(\\beta,\\gamma,\\alpha)=E(\\alpha,\\beta,\\gamma)$$</p>\n\n<p>$$\\binom{\\alpha \\ \\beta \\ \\gamma}{\\gamma \\ \\alpha \\ \\beta}: \\ \\ E(\\gamma,\\alpha,\\beta)=E(\\alpha,\\beta,\\gamma)$$</p>\n\n<p>$$\\binom{\\alpha \\ \\beta \\ \\gamma}{\\beta \\ \\alpha \\ \\gamma}: \\ \\ E(\\beta,\\alpha,\\gamma)=F(\\alpha,\\beta,\\gamma)$$</p>\n\n<p>$$\\binom{\\alpha \\ \\beta \\ \\gamma}{\\beta \\ \\gamma \\ \\alpha}: \\ \\ E(\\beta,\\gamma ,\\alpha)=F(\\alpha,\\beta,\\gamma)$$</p>\n\n<p>$$\\binom{\\alpha \\ \\beta \\ \\gamma}{\\gamma\\ \\beta \\ \\alpha}: \\ \\ E(\\alpha,\\beta,\\gamma)=F(\\alpha,\\beta,\\gamma)$$</p>\n\n<p>We have thus proved that, once a permutation is done on the roots, the value taken by $E$ is either unchanged (3 cases) or is the value of $F$ (the 3 other cases). 
As said by @CCorn in his answer, it is the mathematical object $\\{E,F\\}$ which is invariant under the action of $S_3$.</p>\n\n<p>Remark: The first 3 cases correspond to a circular permutation: it is evident that $E$ has been built so that it is invariant under this type of permutation.</p>\n\n<p><strong>Addendum:</strong> </p>\n\n<p>This is a little window on the origin of the so-called \"Galois theory\".</p>\n\n<p>Have a look at the very pedagogical presentation <a href=\"http://www-math.ucdenver.edu/~wcherowi/courses/m4010/polynom3.pdf\" rel=\"nofollow noreferrer\">here</a> where it is explained how the ideas of Lagrange for solving the quartic equation were based on polynomials (called resolvents) that, under permutation, could take only a very small number of values. See also (<a href=\"https://en.wikipedia.org/wiki/Quartic_function\" rel=\"nofollow noreferrer\">https://en.wikipedia.org/wiki/Quartic_function</a>). Another reference (<a href=\"https://math.stackexchange.com/q/49141\">Galois theory: splitting field of cubic as a vector space</a>). </p>\n" }, { "answer_id": 2114111, "score": 9, "text": "<p>No. $a,b,c,d$ are <a href=\"https://en.wikipedia.org/wiki/Symmetric_polynomial\">symmetric polynomials</a> in $\\alpha,\\beta,\\gamma$ and thus invariant under every permutation of $(\\alpha,\\beta,\\gamma)$.\nThe expressions you want are invariant under some permutations, but not all, and therefore cannot be expressed in terms of only $a,b,c,d$.</p>\n\n<p>Their sum is symmetric, however, as @DougM's answer indicates.</p>\n\n<p><strong>Update:</strong> Their product too, and this results in a quadratic equation for the two expressions, as @ReinhardMeier's answer shows. 
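The claims above can be checked numerically. The following sketch (not part of the original answers; the helper name `check` is mine) verifies with stdlib Python that $u+v$ and $uv$ equal the stated combinations of the elementary symmetric polynomials, and that over all six root orderings the expression $E$ never takes a value outside $\{u,v\}$:

```python
# Sketch (assumed, not from the original answers): numeric check that
# u + v = e1*e2 - 3*e3 and u*v = e1^3*e3 - 6*e1*e2*e3 + e2^3 + 9*e3^2,
# so u and v are the two roots of t^2 - p*t + q = 0.
import itertools
import math

def check(alpha, beta, gamma):
    # elementary symmetric polynomials of the roots
    e1 = alpha + beta + gamma
    e2 = alpha * beta + beta * gamma + gamma * alpha
    e3 = alpha * beta * gamma
    u = alpha**2 * beta + beta**2 * gamma + gamma**2 * alpha
    v = alpha * beta**2 + beta * gamma**2 + gamma * alpha**2
    assert math.isclose(u + v, e1 * e2 - 3 * e3, abs_tol=1e-9)
    assert math.isclose(u * v,
                        e1**3 * e3 - 6 * e1 * e2 * e3 + e2**3 + 9 * e3**2,
                        abs_tol=1e-9)
    # under any permutation of the roots, E takes no value outside {u, v}
    for r in itertools.permutations((alpha, beta, gamma)):
        e = r[0]**2 * r[1] + r[1]**2 * r[2] + r[2]**2 * r[0]
        assert math.isclose(e, u, abs_tol=1e-9) or math.isclose(e, v, abs_tol=1e-9)

check(0.0, 1.0, -1.0)  # roots of x*(x**2 - 1), the counterexample above
check(2.0, 3.0, 5.0)
```

With roots $2,3,5$ one gets $u=107$, $v=113$: their sum $220$ and product $12091$ match $e_1e_2-3e_3$ and $e_1^3e_3-6e_1e_2e_3+e_2^3+9e_3^2$ for $e_1=10$, $e_2=31$, $e_3=30$.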
Thus, the individual values of your desired expressions (call them $u$ and $v$) are not determined uniquely from $a,b,c,d$, but the set $\\{u,v\\}$ is.</p>\n" }, { "answer_id": 2114092, "score": 6, "text": "<p>$\\frac {b}{a} = -\\alpha - \\beta - \\gamma\\\\\n\\frac {c}{a} = \\alpha \\beta + \\beta\\gamma + \\gamma\\alpha\\\\\n\\frac {d}{a} = -\\alpha \\beta\\gamma\\\\\n-\\frac {b}{a}\\frac {c}{a} + 3\\frac {d}{a} = (\\alpha^2 \\beta + \\beta^2\\gamma + \\gamma^2\\alpha)+(\\alpha \\beta^2 + \\beta\\gamma^2 + \\gamma\\alpha^2)$</p>\n\n<p>I don't know if I can do much more than that.</p>\n" }, { "answer_id": 2114108, "score": 6, "text": "<p>There can be no such formula. The coefficients $a$, $b$, $c$ and $d$ are <a href=\"https://en.wikipedia.org/wiki/Symmetric_polynomial\">symmetric functions</a> of the roots $\\alpha$, $\\beta$ and $\\gamma$. Your polynomials $\\alpha^2 \\beta+\\beta^2 \\gamma+ \\gamma^2\\alpha$ and $\\alpha \\beta^2+\\beta \\gamma^2+\\gamma\\alpha^2$ are not symmetric functions, so they cannot be expressed as functions of the coefficients (since any such function would be symmetric).</p>\n" }, { "answer_id": 2114140, "score": 23, "text": "<p>Let $u=\\alpha^2\\beta+\\beta^2\\gamma+\\gamma^2\\alpha$ and $v=\\alpha\\beta^2+\\beta\\gamma^2+\\gamma\\alpha^2$. 
We have the following relations between the elementary symmetric polynomials $e_1$, $e_2$ and $e_3$ of $\\alpha$, $\\beta$ and $\\gamma$ and the coefficients of $P_3$:\n$$\ne_1 = \\alpha+\\beta+\\gamma = -\\frac{b}{a} \\\\\ne_2 = \\alpha\\beta+\\beta\\gamma+\\gamma\\alpha = +\\frac{c}{a} \\\\\ne_3 = \\alpha\\beta\\gamma = -\\frac{d}{a}\n$$\nWe find\n\\begin{eqnarray*}\np &amp; := u+v = &amp; e_1e_2-3e_3 \\\\\nq &amp; := uv = &amp; e_1^3e_3-6e_1e_2e_3+e_2^3+9e_3^2\n\\end{eqnarray*}\nFrom this, you can obtain $u$ and $v$ solving $t^2-pt+q=0$.</p>\n" }, { "answer_id": 2114117, "score": 12, "text": "<p>Let </p>\n\n<p>$$\\cases{E(\\alpha,\\beta,\\gamma):=\\alpha^2 \\beta+\\beta^2 \\gamma+ \\gamma^2\\alpha\\\\F(\\alpha,\\beta,\\gamma):=\n\\alpha \\beta^2+\\beta \\gamma^2+\\gamma\\alpha^2}$$</p>\n\n<p>Your problem, strictly speaking, has no solution (but we will see hereafter that we can somewhat \"bypass\" this issue):</p>\n\n<p>Here is a counterexample: Consider $P_3(x)=x(x^2-1)$ with roots $\\{0,1,-1\\}$.</p>\n\n<p>Expression $E$ can take two different values depending on the order in which the roots are taken\n:\n$$E(\\alpha,\\beta,\\gamma)=\\begin{cases}-1 \\ &amp; \\text{if} \\ \\ \\alpha=1,\\beta=-1,\\gamma=0\\\\ \\ \\ \\ 1 \\ &amp; \\text{if} \\ \\ \\alpha=-1,\\beta=1,\\gamma=0\\end{cases}.$$</p>\n\n<p>As said in the other answers, the fundamental reason is that neither $E$ nor $F$ is symmetrical in $\\alpha, \\beta, \\gamma$, thus not invariant for certain permutations of the roots. </p>\n\n<p>Of course, if you take $E+F$, you get a symmetrical polynomial in $\\alpha, \\beta, \\gamma.$ </p>\n\n<p>But a slightly deeper analysis will be rewarding. 
In fact, <strong>expressions $E$ and $F$ are conjugate</strong> in a sense that will become clear, once we have seen the two following examples:</p>\n\n<ul>\n<li><p>taking $P_3(x)=x^3+x^2-x$, value $E(\\alpha,\\beta,\\gamma)=\\Phi\\approx1.618$ is taken 3 times, and value $1-\\Phi\\approx-0.618$ is taken as well 3 times ($\\Phi$ is the golden ratio).</p></li>\n<li><p>taking $P_3(x)=x^3+x^2+x$, (with some complex roots), value $-\\frac12+i\\frac{\\sqrt{3}}{2}$ is taken 3 times, and conjugate value (with the meaning of \"conjugation in $\\mathbb{C}$\"): $-\\frac12-i\\frac{\\sqrt{3}}{2}$ is taken 3 times.</p></li>\n</ul>\n\n<p>Let us prove this \"conjugation\" by inspecting the set $S_3$ of $3!=6$ permutations of the roots $\\alpha,\\beta,\\gamma$:</p>\n\n<p>$$\\binom{\\alpha \\ \\beta \\ \\gamma}{\\alpha \\ \\beta \\ \\gamma}: \\ \\ E(\\alpha,\\beta,\\gamma)=E(\\alpha,\\beta,\\gamma)$$</p>\n\n<p>$$\\binom{\\alpha \\ \\beta \\ \\gamma}{\\beta \\ \\gamma \\ \\alpha}: \\ \\ E(\\beta,\\gamma,\\alpha)=E(\\alpha,\\beta,\\gamma)$$</p>\n\n<p>$$\\binom{\\alpha \\ \\beta \\ \\gamma}{\\gamma \\ \\alpha \\ \\beta}: \\ \\ E(\\gamma,\\alpha,\\beta)=E(\\alpha,\\beta,\\gamma)$$</p>\n\n<p>$$\\binom{\\alpha \\ \\beta \\ \\gamma}{\\beta \\ \\alpha \\ \\gamma}: \\ \\ E(\\beta,\\alpha,\\gamma)=F(\\alpha,\\beta,\\gamma)$$</p>\n\n<p>$$\\binom{\\alpha \\ \\beta \\ \\gamma}{\\beta \\ \\gamma \\ \\alpha}: \\ \\ E(\\beta,\\gamma ,\\alpha)=F(\\alpha,\\beta,\\gamma)$$</p>\n\n<p>$$\\binom{\\alpha \\ \\beta \\ \\gamma}{\\gamma\\ \\beta \\ \\alpha}: \\ \\ E(\\alpha,\\beta,\\gamma)=F(\\alpha,\\beta,\\gamma)$$</p>\n\n<p>We have thus proved that, once a permutation is done on the roots, the value taken by $E$ is either unchanged (3 cases) or is the value of $F$ (the 3 other cases). 
As said by @CCorn in his answer, it is the mathematical object $\\{E,F\\}$ which is invariant under the action of $S_3$.</p>\n\n<p>Remark: The first 3 cases correspond to a circular permutation: it is evident that $E$ has been built so that it is invariant by this type of permutation.</p>\n\n<p><strong>Addendum:</strong> </p>\n\n<p>This is a little window on the origin of the so-called \"Galois theory\".</p>\n\n<p>Have a look at the very pedagogical presentation <a href=\"http://www-math.ucdenver.edu/~wcherowi/courses/m4010/polynom3.pdf\" rel=\"nofollow noreferrer\">here</a> where it is explained how the ideas of Lagrange for solving the quartic equation where based on polynomials (called resolvents) that, under permutation, could take only a very small number of values. See also (<a href=\"https://en.wikipedia.org/wiki/Quartic_function\" rel=\"nofollow noreferrer\">https://en.wikipedia.org/wiki/Quartic_function</a>). Another reference (<a href=\"https://math.stackexchange.com/q/49141\">Galois theory: splitting field of cubic as a vector space</a>). </p>\n" }, { "answer_id": 2114111, "score": 9, "text": "<p>No. $a,b,c,d$ are <a href=\"https://en.wikipedia.org/wiki/Symmetric_polynomial\">symmetric polynomials</a> in $\\alpha,\\beta,\\gamma$ and thus invariant under every permutation of $(\\alpha,\\beta,\\gamma)$.\nThe expressions you want are invariant under some permutations, but not all, and therefore cannot be expressed in terms of only $a,b,c,d$.</p>\n\n<p>Their sum is symmetric however, as @DougM's answer indicates.</p>\n\n<p><strong>Update:</strong> Their product too, and this results in a quadratic equation for the two expressions, as @ReinhardMeier's answer shows. 
Thus, the individual values of your desired expressions (call them $u$ and $v$) are not determined uniquely from $a,b,c,d$, but the set $\\{u,v\\}$ is.</p>\n" }, { "answer_id": 2114092, "score": 6, "text": "<p>$\\frac {b}{a} = -\\alpha - \\beta - \\gamma\\\\\n\\frac {c}{a} = \\alpha \\beta + \\beta\\gamma + \\gamma\\alpha\\\\\n\\frac {d}{a} = -\\alpha \\beta\\gamma\\\\\n-\\frac {b}{a}\\frac {c}{a} + 3\\frac {d}{a} = (\\alpha^2 \\beta + \\beta^2\\gamma + \\gamma^2\\alpha)+(\\alpha \\beta^2 + \\beta\\gamma^2 + \\gamma\\alpha^2)$</p>\n\n<p>I don't know if I can do much more that that.</p>\n" }, { "answer_id": 2114108, "score": 6, "text": "<p>There can be no such formula. The coefficients $a$, $b$, $c$ and $d$ are <a href=\"https://en.wikipedia.org/wiki/Symmetric_polynomial\">symmetric functions</a> of the roots $\\alpha$, $\\beta$ and $\\gamma$. Your polynomials $\\alpha^2 \\beta+\\beta^2 \\gamma+ \\gamma^2\\alpha$ and $\\alpha \\beta^2+\\beta \\gamma^2+\\gamma\\alpha^2$ are not symmetric functions, so they cannot be expressed as functions of the coefficients (since any such function would be symmetric).</p>\n" }, { "answer_id": 2114140, "score": 23, "text": "<p>Let $u=\\alpha^2\\beta+\\beta^2\\gamma+\\gamma^2\\alpha$ and $v=\\alpha\\beta^2+\\beta\\gamma^2+\\gamma\\alpha^2$. 
We have the following relations between the elementary symmetric polynomials $e_1$, $e_2$ and $e_3$ of $\\alpha$, $\\beta$ and $\\gamma$ and the coefficients of $P_3$:\n$$\ne_1 = \\alpha+\\beta+\\gamma = -\\frac{b}{a} \\\\\ne_2 = \\alpha\\beta+\\beta\\gamma+\\gamma\\alpha = +\\frac{c}{a} \\\\\ne_3 = \\alpha\\beta\\gamma = -\\frac{d}{a}\n$$\nWe find\n\\begin{eqnarray*}\np &amp; := u+v = &amp; e_1e_2-3e_3 \\\\\nq &amp; := uv = &amp; e_1^3e_3-6e_1e_2e_3+e_2^3+9e_3^2\n\\end{eqnarray*}\nFrom this, you can obtain $u$ and $v$ solving $t^2-pt+q=0$.</p>\n" }, { "answer_id": 2114117, "score": 12, "text": "<p>Let </p>\n\n<p>$$\\cases{E(\\alpha,\\beta,\\gamma):=\\alpha^2 \\beta+\\beta^2 \\gamma+ \\gamma^2\\alpha\\\\F(\\alpha,\\beta,\\gamma):=\n\\alpha \\beta^2+\\beta \\gamma^2+\\gamma\\alpha^2}$$</p>\n\n<p>Your problem, strictly speaking, has no solution (but we will see hereafter that we can somewhat \"bypass\" this issue):</p>\n\n<p>Here is a counterexample: Consider $P_3(x)=x(x^2-1)$ with roots $\\{0,1,-1\\}$.</p>\n\n<p>Expression $E$ can take two different values depending on the order in which the roots are taken\n:\n$$E(\\alpha,\\beta,\\gamma)=\\begin{cases}-1 \\ &amp; \\text{if} \\ \\ \\alpha=1,\\beta=-1,\\gamma=0\\\\ \\ \\ \\ 1 \\ &amp; \\text{if} \\ \\ \\alpha=-1,\\beta=1,\\gamma=0\\end{cases}.$$</p>\n\n<p>As said in the other answers, the fundamental reason is that neither $E$ nor $F$ is symmetrical in $\\alpha, \\beta, \\gamma$, thus not invariant for certain permutations of the roots. </p>\n\n<p>Of course, if you take $E+F$, you get a symmetrical polynomial in $\\alpha, \\beta, \\gamma.$ </p>\n\n<p>But a slightly deeper analysis will be rewarding. 
In fact, <strong>expressions $E$ and $F$ are conjugate</strong> in a sense that will become clear, once we have seen the two following examples:</p>\n\n<ul>\n<li><p>taking $P_3(x)=x^3+x^2-x$, value $E(\\alpha,\\beta,\\gamma)=\\Phi\\approx1.618$ is taken 3 times, and value $1-\\Phi\\approx-0.618$ is taken as well 3 times ($\\Phi$ is the golden ratio).</p></li>\n<li><p>taking $P_3(x)=x^3+x^2+x$, (with some complex roots), value $-\\frac12+i\\frac{\\sqrt{3}}{2}$ is taken 3 times, and conjugate value (with the meaning of \"conjugation in $\\mathbb{C}$\"): $-\\frac12-i\\frac{\\sqrt{3}}{2}$ is taken 3 times.</p></li>\n</ul>\n\n<p>Let us prove this \"conjugation\" by inspecting the set $S_3$ of $3!=6$ permutations of the roots $\\alpha,\\beta,\\gamma$:</p>\n\n<p>$$\\binom{\\alpha \\ \\beta \\ \\gamma}{\\alpha \\ \\beta \\ \\gamma}: \\ \\ E(\\alpha,\\beta,\\gamma)=E(\\alpha,\\beta,\\gamma)$$</p>\n\n<p>$$\\binom{\\alpha \\ \\beta \\ \\gamma}{\\beta \\ \\gamma \\ \\alpha}: \\ \\ E(\\beta,\\gamma,\\alpha)=E(\\alpha,\\beta,\\gamma)$$</p>\n\n<p>$$\\binom{\\alpha \\ \\beta \\ \\gamma}{\\gamma \\ \\alpha \\ \\beta}: \\ \\ E(\\gamma,\\alpha,\\beta)=E(\\alpha,\\beta,\\gamma)$$</p>\n\n<p>$$\\binom{\\alpha \\ \\beta \\ \\gamma}{\\beta \\ \\alpha \\ \\gamma}: \\ \\ E(\\beta,\\alpha,\\gamma)=F(\\alpha,\\beta,\\gamma)$$</p>\n\n<p>$$\\binom{\\alpha \\ \\beta \\ \\gamma}{\\beta \\ \\gamma \\ \\alpha}: \\ \\ E(\\beta,\\gamma ,\\alpha)=F(\\alpha,\\beta,\\gamma)$$</p>\n\n<p>$$\\binom{\\alpha \\ \\beta \\ \\gamma}{\\gamma\\ \\beta \\ \\alpha}: \\ \\ E(\\alpha,\\beta,\\gamma)=F(\\alpha,\\beta,\\gamma)$$</p>\n\n<p>We have thus proved that, once a permutation is done on the roots, the value taken by $E$ is either unchanged (3 cases) or is the value of $F$ (the 3 other cases). 
As said by @CCorn in his answer, it is the mathematical object $\\{E,F\\}$ which is invariant under the action of $S_3$.</p>\n\n<p>Remark: The first 3 cases correspond to a circular permutation: it is evident that $E$ has been built so that it is invariant by this type of permutation.</p>\n\n<p><strong>Addendum:</strong> </p>\n\n<p>This is a little window on the origin of the so-called \"Galois theory\".</p>\n\n<p>Have a look at the very pedagogical presentation <a href=\"http://www-math.ucdenver.edu/~wcherowi/courses/m4010/polynom3.pdf\" rel=\"nofollow noreferrer\">here</a> where it is explained how the ideas of Lagrange for solving the quartic equation where based on polynomials (called resolvents) that, under permutation, could take only a very small number of values. See also (<a href=\"https://en.wikipedia.org/wiki/Quartic_function\" rel=\"nofollow noreferrer\">https://en.wikipedia.org/wiki/Quartic_function</a>). Another reference (<a href=\"https://math.stackexchange.com/q/49141\">Galois theory: splitting field of cubic as a vector space</a>). </p>\n" }, { "answer_id": 2114111, "score": 9, "text": "<p>No. $a,b,c,d$ are <a href=\"https://en.wikipedia.org/wiki/Symmetric_polynomial\">symmetric polynomials</a> in $\\alpha,\\beta,\\gamma$ and thus invariant under every permutation of $(\\alpha,\\beta,\\gamma)$.\nThe expressions you want are invariant under some permutations, but not all, and therefore cannot be expressed in terms of only $a,b,c,d$.</p>\n\n<p>Their sum is symmetric however, as @DougM's answer indicates.</p>\n\n<p><strong>Update:</strong> Their product too, and this results in a quadratic equation for the two expressions, as @ReinhardMeier's answer shows. 
Thus, the individual values of your desired expressions (call them $u$ and $v$) are not determined uniquely from $a,b,c,d$, but the set $\\{u,v\\}$ is.</p>\n" }, { "answer_id": 2114092, "score": 6, "text": "<p>$\\frac {b}{a} = -\\alpha - \\beta - \\gamma\\\\\n\\frac {c}{a} = \\alpha \\beta + \\beta\\gamma + \\gamma\\alpha\\\\\n\\frac {d}{a} = -\\alpha \\beta\\gamma\\\\\n-\\frac {b}{a}\\frac {c}{a} + 3\\frac {d}{a} = (\\alpha^2 \\beta + \\beta^2\\gamma + \\gamma^2\\alpha)+(\\alpha \\beta^2 + \\beta\\gamma^2 + \\gamma\\alpha^2)$</p>\n\n<p>I don't know if I can do much more that that.</p>\n" }, { "answer_id": 2114108, "score": 6, "text": "<p>There can be no such formula. The coefficients $a$, $b$, $c$ and $d$ are <a href=\"https://en.wikipedia.org/wiki/Symmetric_polynomial\">symmetric functions</a> of the roots $\\alpha$, $\\beta$ and $\\gamma$. Your polynomials $\\alpha^2 \\beta+\\beta^2 \\gamma+ \\gamma^2\\alpha$ and $\\alpha \\beta^2+\\beta \\gamma^2+\\gamma\\alpha^2$ are not symmetric functions, so they cannot be expressed as functions of the coefficients (since any such function would be symmetric).</p>\n" }, { "answer_id": 2114140, "score": 23, "text": "<p>Let $u=\\alpha^2\\beta+\\beta^2\\gamma+\\gamma^2\\alpha$ and $v=\\alpha\\beta^2+\\beta\\gamma^2+\\gamma\\alpha^2$. 
We have the following relations between the elementary symmetric polynomials $e_1$, $e_2$ and $e_3$ of $\\alpha$, $\\beta$ and $\\gamma$ and the coefficients of $P_3$:\n$$\ne_1 = \\alpha+\\beta+\\gamma = -\\frac{b}{a} \\\\\ne_2 = \\alpha\\beta+\\beta\\gamma+\\gamma\\alpha = +\\frac{c}{a} \\\\\ne_3 = \\alpha\\beta\\gamma = -\\frac{d}{a}\n$$\nWe find\n\\begin{eqnarray*}\np &amp; := u+v = &amp; e_1e_2-3e_3 \\\\\nq &amp; := uv = &amp; e_1^3e_3-6e_1e_2e_3+e_2^3+9e_3^2\n\\end{eqnarray*}\nFrom this, you can obtain $u$ and $v$ solving $t^2-pt+q=0$.</p>\n" }, { "answer_id": 2114117, "score": 12, "text": "<p>Let </p>\n\n<p>$$\\cases{E(\\alpha,\\beta,\\gamma):=\\alpha^2 \\beta+\\beta^2 \\gamma+ \\gamma^2\\alpha\\\\F(\\alpha,\\beta,\\gamma):=\n\\alpha \\beta^2+\\beta \\gamma^2+\\gamma\\alpha^2}$$</p>\n\n<p>Your problem, strictly speaking, has no solution (but we will see hereafter that we can somewhat \"bypass\" this issue):</p>\n\n<p>Here is a counterexample: Consider $P_3(x)=x(x^2-1)$ with roots $\\{0,1,-1\\}$.</p>\n\n<p>Expression $E$ can take two different values depending on the order in which the roots are taken\n:\n$$E(\\alpha,\\beta,\\gamma)=\\begin{cases}-1 \\ &amp; \\text{if} \\ \\ \\alpha=1,\\beta=-1,\\gamma=0\\\\ \\ \\ \\ 1 \\ &amp; \\text{if} \\ \\ \\alpha=-1,\\beta=1,\\gamma=0\\end{cases}.$$</p>\n\n<p>As said in the other answers, the fundamental reason is that neither $E$ nor $F$ is symmetrical in $\\alpha, \\beta, \\gamma$, thus not invariant for certain permutations of the roots. </p>\n\n<p>Of course, if you take $E+F$, you get a symmetrical polynomial in $\\alpha, \\beta, \\gamma.$ </p>\n\n<p>But a slightly deeper analysis will be rewarding. 
In fact, <strong>expressions $E$ and $F$ are conjugate</strong> in a sense that will become clear, once we have seen the two following examples:</p>\n\n<ul>\n<li><p>taking $P_3(x)=x^3+x^2-x$, value $E(\\alpha,\\beta,\\gamma)=\\Phi\\approx1.618$ is taken 3 times, and value $1-\\Phi\\approx-0.618$ is taken as well 3 times ($\\Phi$ is the golden ratio).</p></li>\n<li><p>taking $P_3(x)=x^3+x^2+x$, (with some complex roots), value $-\\frac12+i\\frac{\\sqrt{3}}{2}$ is taken 3 times, and conjugate value (with the meaning of \"conjugation in $\\mathbb{C}$\"): $-\\frac12-i\\frac{\\sqrt{3}}{2}$ is taken 3 times.</p></li>\n</ul>\n\n<p>Let us prove this \"conjugation\" by inspecting the set $S_3$ of $3!=6$ permutations of the roots $\\alpha,\\beta,\\gamma$:</p>\n\n<p>$$\\binom{\\alpha \\ \\beta \\ \\gamma}{\\alpha \\ \\beta \\ \\gamma}: \\ \\ E(\\alpha,\\beta,\\gamma)=E(\\alpha,\\beta,\\gamma)$$</p>\n\n<p>$$\\binom{\\alpha \\ \\beta \\ \\gamma}{\\beta \\ \\gamma \\ \\alpha}: \\ \\ E(\\beta,\\gamma,\\alpha)=E(\\alpha,\\beta,\\gamma)$$</p>\n\n<p>$$\\binom{\\alpha \\ \\beta \\ \\gamma}{\\gamma \\ \\alpha \\ \\beta}: \\ \\ E(\\gamma,\\alpha,\\beta)=E(\\alpha,\\beta,\\gamma)$$</p>\n\n<p>$$\\binom{\\alpha \\ \\beta \\ \\gamma}{\\beta \\ \\alpha \\ \\gamma}: \\ \\ E(\\beta,\\alpha,\\gamma)=F(\\alpha,\\beta,\\gamma)$$</p>\n\n<p>$$\\binom{\\alpha \\ \\beta \\ \\gamma}{\\beta \\ \\gamma \\ \\alpha}: \\ \\ E(\\beta,\\gamma ,\\alpha)=F(\\alpha,\\beta,\\gamma)$$</p>\n\n<p>$$\\binom{\\alpha \\ \\beta \\ \\gamma}{\\gamma\\ \\beta \\ \\alpha}: \\ \\ E(\\alpha,\\beta,\\gamma)=F(\\alpha,\\beta,\\gamma)$$</p>\n\n<p>We have thus proved that, once a permutation is done on the roots, the value taken by $E$ is either unchanged (3 cases) or is the value of $F$ (the 3 other cases). 
As said by @CCorn in his answer, it is the mathematical object $\\{E,F\\}$ which is invariant under the action of $S_3$.</p>\n\n<p>Remark: The first 3 cases correspond to a circular permutation: it is evident that $E$ has been built so that it is invariant by this type of permutation.</p>\n\n<p><strong>Addendum:</strong> </p>\n\n<p>This is a little window on the origin of the so-called \"Galois theory\".</p>\n\n<p>Have a look at the very pedagogical presentation <a href=\"http://www-math.ucdenver.edu/~wcherowi/courses/m4010/polynom3.pdf\" rel=\"nofollow noreferrer\">here</a> where it is explained how the ideas of Lagrange for solving the quartic equation where based on polynomials (called resolvents) that, under permutation, could take only a very small number of values. See also (<a href=\"https://en.wikipedia.org/wiki/Quartic_function\" rel=\"nofollow noreferrer\">https://en.wikipedia.org/wiki/Quartic_function</a>). Another reference (<a href=\"https://math.stackexchange.com/q/49141\">Galois theory: splitting field of cubic as a vector space</a>). </p>\n" }, { "answer_id": 2114111, "score": 9, "text": "<p>No. $a,b,c,d$ are <a href=\"https://en.wikipedia.org/wiki/Symmetric_polynomial\">symmetric polynomials</a> in $\\alpha,\\beta,\\gamma$ and thus invariant under every permutation of $(\\alpha,\\beta,\\gamma)$.\nThe expressions you want are invariant under some permutations, but not all, and therefore cannot be expressed in terms of only $a,b,c,d$.</p>\n\n<p>Their sum is symmetric however, as @DougM's answer indicates.</p>\n\n<p><strong>Update:</strong> Their product too, and this results in a quadratic equation for the two expressions, as @ReinhardMeier's answer shows. 
Thus, the individual values of your desired expressions (call them $u$ and $v$) are not determined uniquely from $a,b,c,d$, but the set $\\{u,v\\}$ is.</p>\n" }, { "answer_id": 2114092, "score": 6, "text": "<p>$\\frac {b}{a} = -\\alpha - \\beta - \\gamma\\\\\n\\frac {c}{a} = \\alpha \\beta + \\beta\\gamma + \\gamma\\alpha\\\\\n\\frac {d}{a} = -\\alpha \\beta\\gamma\\\\\n-\\frac {b}{a}\\frac {c}{a} + 3\\frac {d}{a} = (\\alpha^2 \\beta + \\beta^2\\gamma + \\gamma^2\\alpha)+(\\alpha \\beta^2 + \\beta\\gamma^2 + \\gamma\\alpha^2)$</p>\n\n<p>I don't know if I can do much more that that.</p>\n" }, { "answer_id": 2114108, "score": 6, "text": "<p>There can be no such formula. The coefficients $a$, $b$, $c$ and $d$ are <a href=\"https://en.wikipedia.org/wiki/Symmetric_polynomial\">symmetric functions</a> of the roots $\\alpha$, $\\beta$ and $\\gamma$. Your polynomials $\\alpha^2 \\beta+\\beta^2 \\gamma+ \\gamma^2\\alpha$ and $\\alpha \\beta^2+\\beta \\gamma^2+\\gamma\\alpha^2$ are not symmetric functions, so they cannot be expressed as functions of the coefficients (since any such function would be symmetric).</p>\n" }, { "answer_id": 2114140, "score": 23, "text": "<p>Let $u=\\alpha^2\\beta+\\beta^2\\gamma+\\gamma^2\\alpha$ and $v=\\alpha\\beta^2+\\beta\\gamma^2+\\gamma\\alpha^2$. 
We have the following relations between the elementary symmetric polynomials $e_1$, $e_2$ and $e_3$ of $\\alpha$, $\\beta$ and $\\gamma$ and the coefficients of $P_3$:\n$$\ne_1 = \\alpha+\\beta+\\gamma = -\\frac{b}{a} \\\\\ne_2 = \\alpha\\beta+\\beta\\gamma+\\gamma\\alpha = +\\frac{c}{a} \\\\\ne_3 = \\alpha\\beta\\gamma = -\\frac{d}{a}\n$$\nWe find\n\\begin{eqnarray*}\np &amp; := u+v = &amp; e_1e_2-3e_3 \\\\\nq &amp; := uv = &amp; e_1^3e_3-6e_1e_2e_3+e_2^3+9e_3^2\n\\end{eqnarray*}\nFrom this, you can obtain $u$ and $v$ solving $t^2-pt+q=0$.</p>\n" }, { "answer_id": 2114117, "score": 12, "text": "<p>Let </p>\n\n<p>$$\\cases{E(\\alpha,\\beta,\\gamma):=\\alpha^2 \\beta+\\beta^2 \\gamma+ \\gamma^2\\alpha\\\\F(\\alpha,\\beta,\\gamma):=\n\\alpha \\beta^2+\\beta \\gamma^2+\\gamma\\alpha^2}$$</p>\n\n<p>Your problem, strictly speaking, has no solution (but we will see hereafter that we can somewhat \"bypass\" this issue):</p>\n\n<p>Here is a counterexample: Consider $P_3(x)=x(x^2-1)$ with roots $\\{0,1,-1\\}$.</p>\n\n<p>Expression $E$ can take two different values depending on the order in which the roots are taken\n:\n$$E(\\alpha,\\beta,\\gamma)=\\begin{cases}-1 \\ &amp; \\text{if} \\ \\ \\alpha=1,\\beta=-1,\\gamma=0\\\\ \\ \\ \\ 1 \\ &amp; \\text{if} \\ \\ \\alpha=-1,\\beta=1,\\gamma=0\\end{cases}.$$</p>\n\n<p>As said in the other answers, the fundamental reason is that neither $E$ nor $F$ is symmetrical in $\\alpha, \\beta, \\gamma$, thus not invariant for certain permutations of the roots. </p>\n\n<p>Of course, if you take $E+F$, you get a symmetrical polynomial in $\\alpha, \\beta, \\gamma.$ </p>\n\n<p>But a slightly deeper analysis will be rewarding. 
In fact, <strong>expressions $E$ and $F$ are conjugate</strong> in a sense that will become clear, once we have seen the two following examples:</p>\n\n<ul>\n<li><p>taking $P_3(x)=x^3+x^2-x$, value $E(\\alpha,\\beta,\\gamma)=\\Phi\\approx1.618$ is taken 3 times, and value $1-\\Phi\\approx-0.618$ is taken as well 3 times ($\\Phi$ is the golden ratio).</p></li>\n<li><p>taking $P_3(x)=x^3+x^2+x$, (with some complex roots), value $-\\frac12+i\\frac{\\sqrt{3}}{2}$ is taken 3 times, and conjugate value (with the meaning of \"conjugation in $\\mathbb{C}$\"): $-\\frac12-i\\frac{\\sqrt{3}}{2}$ is taken 3 times.</p></li>\n</ul>\n\n<p>Let us prove this \"conjugation\" by inspecting the set $S_3$ of $3!=6$ permutations of the roots $\\alpha,\\beta,\\gamma$:</p>\n\n<p>$$\\binom{\\alpha \\ \\beta \\ \\gamma}{\\alpha \\ \\beta \\ \\gamma}: \\ \\ E(\\alpha,\\beta,\\gamma)=E(\\alpha,\\beta,\\gamma)$$</p>\n\n<p>$$\\binom{\\alpha \\ \\beta \\ \\gamma}{\\beta \\ \\gamma \\ \\alpha}: \\ \\ E(\\beta,\\gamma,\\alpha)=E(\\alpha,\\beta,\\gamma)$$</p>\n\n<p>$$\\binom{\\alpha \\ \\beta \\ \\gamma}{\\gamma \\ \\alpha \\ \\beta}: \\ \\ E(\\gamma,\\alpha,\\beta)=E(\\alpha,\\beta,\\gamma)$$</p>\n\n<p>$$\\binom{\\alpha \\ \\beta \\ \\gamma}{\\beta \\ \\alpha \\ \\gamma}: \\ \\ E(\\beta,\\alpha,\\gamma)=F(\\alpha,\\beta,\\gamma)$$</p>\n\n<p>$$\\binom{\\alpha \\ \\beta \\ \\gamma}{\\beta \\ \\gamma \\ \\alpha}: \\ \\ E(\\beta,\\gamma ,\\alpha)=F(\\alpha,\\beta,\\gamma)$$</p>\n\n<p>$$\\binom{\\alpha \\ \\beta \\ \\gamma}{\\gamma\\ \\beta \\ \\alpha}: \\ \\ E(\\alpha,\\beta,\\gamma)=F(\\alpha,\\beta,\\gamma)$$</p>\n\n<p>We have thus proved that, once a permutation is done on the roots, the value taken by $E$ is either unchanged (3 cases) or is the value of $F$ (the 3 other cases). 
As said by @CCorn in his answer, it is the mathematical object $\\{E,F\\}$ which is invariant under the action of $S_3$.</p>\n\n<p>Remark: The first 3 cases correspond to a circular permutation: it is evident that $E$ has been built so that it is invariant by this type of permutation.</p>\n\n<p><strong>Addendum:</strong> </p>\n\n<p>This is a little window on the origin of the so-called \"Galois theory\".</p>\n\n<p>Have a look at the very pedagogical presentation <a href=\"http://www-math.ucdenver.edu/~wcherowi/courses/m4010/polynom3.pdf\" rel=\"nofollow noreferrer\">here</a> where it is explained how the ideas of Lagrange for solving the quartic equation where based on polynomials (called resolvents) that, under permutation, could take only a very small number of values. See also (<a href=\"https://en.wikipedia.org/wiki/Quartic_function\" rel=\"nofollow noreferrer\">https://en.wikipedia.org/wiki/Quartic_function</a>). Another reference (<a href=\"https://math.stackexchange.com/q/49141\">Galois theory: splitting field of cubic as a vector space</a>). </p>\n" }, { "answer_id": 2114111, "score": 9, "text": "<p>No. $a,b,c,d$ are <a href=\"https://en.wikipedia.org/wiki/Symmetric_polynomial\">symmetric polynomials</a> in $\\alpha,\\beta,\\gamma$ and thus invariant under every permutation of $(\\alpha,\\beta,\\gamma)$.\nThe expressions you want are invariant under some permutations, but not all, and therefore cannot be expressed in terms of only $a,b,c,d$.</p>\n\n<p>Their sum is symmetric however, as @DougM's answer indicates.</p>\n\n<p><strong>Update:</strong> Their product too, and this results in a quadratic equation for the two expressions, as @ReinhardMeier's answer shows. 
Thus, the individual values of your desired expressions (call them $u$ and $v$) are not determined uniquely from $a,b,c,d$, but the set $\\{u,v\\}$ is.</p>\n" }, { "answer_id": 2114092, "score": 6, "text": "<p>$\\frac {b}{a} = -\\alpha - \\beta - \\gamma\\\\\n\\frac {c}{a} = \\alpha \\beta + \\beta\\gamma + \\gamma\\alpha\\\\\n\\frac {d}{a} = -\\alpha \\beta\\gamma\\\\\n-\\frac {b}{a}\\frac {c}{a} + 3\\frac {d}{a} = (\\alpha^2 \\beta + \\beta^2\\gamma + \\gamma^2\\alpha)+(\\alpha \\beta^2 + \\beta\\gamma^2 + \\gamma\\alpha^2)$</p>\n\n<p>I don't know if I can do much more that that.</p>\n" }, { "answer_id": 2114108, "score": 6, "text": "<p>There can be no such formula. The coefficients $a$, $b$, $c$ and $d$ are <a href=\"https://en.wikipedia.org/wiki/Symmetric_polynomial\">symmetric functions</a> of the roots $\\alpha$, $\\beta$ and $\\gamma$. Your polynomials $\\alpha^2 \\beta+\\beta^2 \\gamma+ \\gamma^2\\alpha$ and $\\alpha \\beta^2+\\beta \\gamma^2+\\gamma\\alpha^2$ are not symmetric functions, so they cannot be expressed as functions of the coefficients (since any such function would be symmetric).</p>\n" }, { "answer_id": 2114140, "score": 23, "text": "<p>Let $u=\\alpha^2\\beta+\\beta^2\\gamma+\\gamma^2\\alpha$ and $v=\\alpha\\beta^2+\\beta\\gamma^2+\\gamma\\alpha^2$. 
We have the following relations between the elementary symmetric polynomials $e_1$, $e_2$ and $e_3$ of $\\alpha$, $\\beta$ and $\\gamma$ and the coefficients of $P_3$:\n$$\ne_1 = \\alpha+\\beta+\\gamma = -\\frac{b}{a} \\\\\ne_2 = \\alpha\\beta+\\beta\\gamma+\\gamma\\alpha = +\\frac{c}{a} \\\\\ne_3 = \\alpha\\beta\\gamma = -\\frac{d}{a}\n$$\nWe find\n\\begin{eqnarray*}\np &amp; := u+v = &amp; e_1e_2-3e_3 \\\\\nq &amp; := uv = &amp; e_1^3e_3-6e_1e_2e_3+e_2^3+9e_3^2\n\\end{eqnarray*}\nFrom this, you can obtain $u$ and $v$ solving $t^2-pt+q=0$.</p>\n" }, { "answer_id": 2114117, "score": 12, "text": "<p>Let </p>\n\n<p>$$\\cases{E(\\alpha,\\beta,\\gamma):=\\alpha^2 \\beta+\\beta^2 \\gamma+ \\gamma^2\\alpha\\\\F(\\alpha,\\beta,\\gamma):=\n\\alpha \\beta^2+\\beta \\gamma^2+\\gamma\\alpha^2}$$</p>\n\n<p>Your problem, strictly speaking, has no solution (but we will see hereafter that we can somewhat \"bypass\" this issue):</p>\n\n<p>Here is a counterexample: Consider $P_3(x)=x(x^2-1)$ with roots $\\{0,1,-1\\}$.</p>\n\n<p>Expression $E$ can take two different values depending on the order in which the roots are taken\n:\n$$E(\\alpha,\\beta,\\gamma)=\\begin{cases}-1 \\ &amp; \\text{if} \\ \\ \\alpha=1,\\beta=-1,\\gamma=0\\\\ \\ \\ \\ 1 \\ &amp; \\text{if} \\ \\ \\alpha=-1,\\beta=1,\\gamma=0\\end{cases}.$$</p>\n\n<p>As said in the other answers, the fundamental reason is that neither $E$ nor $F$ is symmetrical in $\\alpha, \\beta, \\gamma$, thus not invariant for certain permutations of the roots. </p>\n\n<p>Of course, if you take $E+F$, you get a symmetrical polynomial in $\\alpha, \\beta, \\gamma.$ </p>\n\n<p>But a slightly deeper analysis will be rewarding. 
In fact, <strong>expressions $E$ and $F$ are conjugate</strong> in a sense that will become clear, once we have seen the two following examples:</p>\n\n<ul>\n<li><p>taking $P_3(x)=x^3+x^2-x$, value $E(\\alpha,\\beta,\\gamma)=\\Phi\\approx1.618$ is taken 3 times, and value $1-\\Phi\\approx-0.618$ is taken as well 3 times ($\\Phi$ is the golden ratio).</p></li>\n<li><p>taking $P_3(x)=x^3+x^2+x$, (with some complex roots), value $-\\frac12+i\\frac{\\sqrt{3}}{2}$ is taken 3 times, and conjugate value (with the meaning of \"conjugation in $\\mathbb{C}$\"): $-\\frac12-i\\frac{\\sqrt{3}}{2}$ is taken 3 times.</p></li>\n</ul>\n\n<p>Let us prove this \"conjugation\" by inspecting the set $S_3$ of $3!=6$ permutations of the roots $\\alpha,\\beta,\\gamma$:</p>\n\n<p>$$\\binom{\\alpha \\ \\beta \\ \\gamma}{\\alpha \\ \\beta \\ \\gamma}: \\ \\ E(\\alpha,\\beta,\\gamma)=E(\\alpha,\\beta,\\gamma)$$</p>\n\n<p>$$\\binom{\\alpha \\ \\beta \\ \\gamma}{\\beta \\ \\gamma \\ \\alpha}: \\ \\ E(\\beta,\\gamma,\\alpha)=E(\\alpha,\\beta,\\gamma)$$</p>\n\n<p>$$\\binom{\\alpha \\ \\beta \\ \\gamma}{\\gamma \\ \\alpha \\ \\beta}: \\ \\ E(\\gamma,\\alpha,\\beta)=E(\\alpha,\\beta,\\gamma)$$</p>\n\n<p>$$\\binom{\\alpha \\ \\beta \\ \\gamma}{\\beta \\ \\alpha \\ \\gamma}: \\ \\ E(\\beta,\\alpha,\\gamma)=F(\\alpha,\\beta,\\gamma)$$</p>\n\n<p>$$\\binom{\\alpha \\ \\beta \\ \\gamma}{\\beta \\ \\gamma \\ \\alpha}: \\ \\ E(\\beta,\\gamma ,\\alpha)=F(\\alpha,\\beta,\\gamma)$$</p>\n\n<p>$$\\binom{\\alpha \\ \\beta \\ \\gamma}{\\gamma\\ \\beta \\ \\alpha}: \\ \\ E(\\alpha,\\beta,\\gamma)=F(\\alpha,\\beta,\\gamma)$$</p>\n\n<p>We have thus proved that, once a permutation is done on the roots, the value taken by $E$ is either unchanged (3 cases) or is the value of $F$ (the 3 other cases). 
As said by @CCorn in his answer, it is the mathematical object $\\{E,F\\}$ which is invariant under the action of $S_3$.</p>\n\n<p>Remark: The first 3 cases correspond to a circular permutation: it is evident that $E$ has been built so that it is invariant under this type of permutation.</p>\n\n<p><strong>Addendum:</strong> </p>\n\n<p>This is a little window on the origin of the so-called \"Galois theory\".</p>\n\n<p>Have a look at the very pedagogical presentation <a href=\"http://www-math.ucdenver.edu/~wcherowi/courses/m4010/polynom3.pdf\" rel=\"nofollow noreferrer\">here</a> where it is explained how the ideas of Lagrange for solving the quartic equation were based on polynomials (called resolvents) that, under permutation, could take only a very small number of values. See also (<a href=\"https://en.wikipedia.org/wiki/Quartic_function\" rel=\"nofollow noreferrer\">https://en.wikipedia.org/wiki/Quartic_function</a>). Another reference (<a href=\"https://math.stackexchange.com/q/49141\">Galois theory: splitting field of cubic as a vector space</a>). </p>\n" }, { "answer_id": 2114111, "score": 9, "text": "<p>No. $a,b,c,d$ are <a href=\"https://en.wikipedia.org/wiki/Symmetric_polynomial\">symmetric polynomials</a> in $\\alpha,\\beta,\\gamma$ and thus invariant under every permutation of $(\\alpha,\\beta,\\gamma)$.\nThe expressions you want are invariant under some permutations, but not all, and therefore cannot be expressed in terms of only $a,b,c,d$.</p>\n\n<p>Their sum is symmetric, however, as @DougM's answer indicates.</p>\n\n<p><strong>Update:</strong> Their product too, and this results in a quadratic equation for the two expressions, as @ReinhardMeier's answer shows. 
Thus, the individual values of your desired expressions (call them $u$ and $v$) are not determined uniquely from $a,b,c,d$, but the set $\\{u,v\\}$ is.</p>\n" }, { "answer_id": 2114092, "score": 6, "text": "<p>$\\frac {b}{a} = -\\alpha - \\beta - \\gamma\\\\\n\\frac {c}{a} = \\alpha \\beta + \\beta\\gamma + \\gamma\\alpha\\\\\n\\frac {d}{a} = -\\alpha \\beta\\gamma\\\\\n-\\frac {b}{a}\\frac {c}{a} + 3\\frac {d}{a} = (\\alpha^2 \\beta + \\beta^2\\gamma + \\gamma^2\\alpha)+(\\alpha \\beta^2 + \\beta\\gamma^2 + \\gamma\\alpha^2)$</p>\n\n<p>I don't know if I can do much more than that.</p>\n" }, { "answer_id": 2114108, "score": 6, "text": "<p>There can be no such formula. The coefficients $a$, $b$, $c$ and $d$ are <a href=\"https://en.wikipedia.org/wiki/Symmetric_polynomial\">symmetric functions</a> of the roots $\\alpha$, $\\beta$ and $\\gamma$. Your polynomials $\\alpha^2 \\beta+\\beta^2 \\gamma+ \\gamma^2\\alpha$ and $\\alpha \\beta^2+\\beta \\gamma^2+\\gamma\\alpha^2$ are not symmetric functions, so they cannot be expressed as functions of the coefficients (since any such function would be symmetric).</p>\n" }, { "answer_id": 2114140, "score": 23, "text": "<p>Let $u=\\alpha^2\\beta+\\beta^2\\gamma+\\gamma^2\\alpha$ and $v=\\alpha\\beta^2+\\beta\\gamma^2+\\gamma\\alpha^2$. 
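The $p=u+v$ and $q=uv$ relations derived in the top answer can be sanity-checked numerically. A minimal sketch in plain Python (the root values are arbitrary, chosen only for illustration):

```python
# Check u+v = e1*e2 - 3*e3 and u*v = e1^3*e3 - 6*e1*e2*e3 + e2^3 + 9*e3^2
# for arbitrary roots alpha, beta, gamma (illustrative values).
alpha, beta, gamma = 2.0, -3.0, 5.0

# Elementary symmetric polynomials of the roots (by Vieta: -b/a, c/a, -d/a).
e1 = alpha + beta + gamma
e2 = alpha*beta + beta*gamma + gamma*alpha
e3 = alpha*beta*gamma

# The two "conjugate" expressions.
u = alpha**2*beta + beta**2*gamma + gamma**2*alpha
v = alpha*beta**2 + beta*gamma**2 + gamma*alpha**2

p = e1*e2 - 3*e3                                # claimed to equal u + v
q = e1**3*e3 - 6*e1*e2*e3 + e2**3 + 9*e3**2     # claimed to equal u * v

assert u + v == p   # 46.0
assert u * v == q   # -3071.0
```

Both assertions pass, so $u$ and $v$ are the two roots of $t^2-pt+q=0$, recoverable from the coefficients only up to the unavoidable swap.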
We have the following relations between the elementary symmetric polynomials $e_1$, $e_2$ and $e_3$ of $\\alpha$, $\\beta$ and $\\gamma$ and the coefficients of $P_3$:\n$$\ne_1 = \\alpha+\\beta+\\gamma = -\\frac{b}{a} \\\\\ne_2 = \\alpha\\beta+\\beta\\gamma+\\gamma\\alpha = +\\frac{c}{a} \\\\\ne_3 = \\alpha\\beta\\gamma = -\\frac{d}{a}\n$$\nWe find\n\\begin{eqnarray*}\np &amp; := u+v = &amp; e_1e_2-3e_3 \\\\\nq &amp; := uv = &amp; e_1^3e_3-6e_1e_2e_3+e_2^3+9e_3^2\n\\end{eqnarray*}\nFrom this, you can obtain $u$ and $v$ solving $t^2-pt+q=0$.</p>\n" }, { "answer_id": 2114117, "score": 12, "text": "<p>Let </p>\n\n<p>$$\\cases{E(\\alpha,\\beta,\\gamma):=\\alpha^2 \\beta+\\beta^2 \\gamma+ \\gamma^2\\alpha\\\\F(\\alpha,\\beta,\\gamma):=\n\\alpha \\beta^2+\\beta \\gamma^2+\\gamma\\alpha^2}$$</p>\n\n<p>Your problem, strictly speaking, has no solution (but we will see hereafter that we can somewhat \"bypass\" this issue):</p>\n\n<p>Here is a counterexample: Consider $P_3(x)=x(x^2-1)$ with roots $\\{0,1,-1\\}$.</p>\n\n<p>Expression $E$ can take two different values depending on the order in which the roots are taken\n:\n$$E(\\alpha,\\beta,\\gamma)=\\begin{cases}-1 \\ &amp; \\text{if} \\ \\ \\alpha=1,\\beta=-1,\\gamma=0\\\\ \\ \\ \\ 1 \\ &amp; \\text{if} \\ \\ \\alpha=-1,\\beta=1,\\gamma=0\\end{cases}.$$</p>\n\n<p>As said in the other answers, the fundamental reason is that neither $E$ nor $F$ is symmetrical in $\\alpha, \\beta, \\gamma$, thus not invariant for certain permutations of the roots. </p>\n\n<p>Of course, if you take $E+F$, you get a symmetrical polynomial in $\\alpha, \\beta, \\gamma.$ </p>\n\n<p>But a slightly deeper analysis will be rewarding. 
In fact, <strong>expressions $E$ and $F$ are conjugate</strong> in a sense that will become clear, once we have seen the two following examples:</p>\n\n<ul>\n<li><p>taking $P_3(x)=x^3+x^2-x$, value $E(\\alpha,\\beta,\\gamma)=\\Phi\\approx1.618$ is taken 3 times, and value $1-\\Phi\\approx-0.618$ is taken as well 3 times ($\\Phi$ is the golden ratio).</p></li>\n<li><p>taking $P_3(x)=x^3+x^2+x$, (with some complex roots), value $-\\frac12+i\\frac{\\sqrt{3}}{2}$ is taken 3 times, and conjugate value (with the meaning of \"conjugation in $\\mathbb{C}$\"): $-\\frac12-i\\frac{\\sqrt{3}}{2}$ is taken 3 times.</p></li>\n</ul>\n\n<p>Let us prove this \"conjugation\" by inspecting the set $S_3$ of $3!=6$ permutations of the roots $\\alpha,\\beta,\\gamma$:</p>\n\n<p>$$\\binom{\\alpha \\ \\beta \\ \\gamma}{\\alpha \\ \\beta \\ \\gamma}: \\ \\ E(\\alpha,\\beta,\\gamma)=E(\\alpha,\\beta,\\gamma)$$</p>\n\n<p>$$\\binom{\\alpha \\ \\beta \\ \\gamma}{\\beta \\ \\gamma \\ \\alpha}: \\ \\ E(\\beta,\\gamma,\\alpha)=E(\\alpha,\\beta,\\gamma)$$</p>\n\n<p>$$\\binom{\\alpha \\ \\beta \\ \\gamma}{\\gamma \\ \\alpha \\ \\beta}: \\ \\ E(\\gamma,\\alpha,\\beta)=E(\\alpha,\\beta,\\gamma)$$</p>\n\n<p>$$\\binom{\\alpha \\ \\beta \\ \\gamma}{\\beta \\ \\alpha \\ \\gamma}: \\ \\ E(\\beta,\\alpha,\\gamma)=F(\\alpha,\\beta,\\gamma)$$</p>\n\n<p>$$\\binom{\\alpha \\ \\beta \\ \\gamma}{\\beta \\ \\gamma \\ \\alpha}: \\ \\ E(\\beta,\\gamma ,\\alpha)=F(\\alpha,\\beta,\\gamma)$$</p>\n\n<p>$$\\binom{\\alpha \\ \\beta \\ \\gamma}{\\gamma\\ \\beta \\ \\alpha}: \\ \\ E(\\alpha,\\beta,\\gamma)=F(\\alpha,\\beta,\\gamma)$$</p>\n\n<p>We have thus proved that, once a permutation is done on the roots, the value taken by $E$ is either unchanged (3 cases) or is the value of $F$ (the 3 other cases). 
As said by @CCorn in his answer, it is the mathematical object $\\{E,F\\}$ which is invariant under the action of $S_3$.</p>\n\n<p>Remark: The first 3 cases correspond to a circular permutation: it is evident that $E$ has been built so that it is invariant by this type of permutation.</p>\n\n<p><strong>Addendum:</strong> </p>\n\n<p>This is a little window on the origin of the so-called \"Galois theory\".</p>\n\n<p>Have a look at the very pedagogical presentation <a href=\"http://www-math.ucdenver.edu/~wcherowi/courses/m4010/polynom3.pdf\" rel=\"nofollow noreferrer\">here</a> where it is explained how the ideas of Lagrange for solving the quartic equation where based on polynomials (called resolvents) that, under permutation, could take only a very small number of values. See also (<a href=\"https://en.wikipedia.org/wiki/Quartic_function\" rel=\"nofollow noreferrer\">https://en.wikipedia.org/wiki/Quartic_function</a>). Another reference (<a href=\"https://math.stackexchange.com/q/49141\">Galois theory: splitting field of cubic as a vector space</a>). </p>\n" }, { "answer_id": 2114111, "score": 9, "text": "<p>No. $a,b,c,d$ are <a href=\"https://en.wikipedia.org/wiki/Symmetric_polynomial\">symmetric polynomials</a> in $\\alpha,\\beta,\\gamma$ and thus invariant under every permutation of $(\\alpha,\\beta,\\gamma)$.\nThe expressions you want are invariant under some permutations, but not all, and therefore cannot be expressed in terms of only $a,b,c,d$.</p>\n\n<p>Their sum is symmetric however, as @DougM's answer indicates.</p>\n\n<p><strong>Update:</strong> Their product too, and this results in a quadratic equation for the two expressions, as @ReinhardMeier's answer shows. 
Thus, the individual values of your desired expressions (call them $u$ and $v$) are not determined uniquely from $a,b,c,d$, but the set $\\{u,v\\}$ is.</p>\n" }, { "answer_id": 2114092, "score": 6, "text": "<p>$\\frac {b}{a} = -\\alpha - \\beta - \\gamma\\\\\n\\frac {c}{a} = \\alpha \\beta + \\beta\\gamma + \\gamma\\alpha\\\\\n\\frac {d}{a} = -\\alpha \\beta\\gamma\\\\\n-\\frac {b}{a}\\frac {c}{a} + 3\\frac {d}{a} = (\\alpha^2 \\beta + \\beta^2\\gamma + \\gamma^2\\alpha)+(\\alpha \\beta^2 + \\beta\\gamma^2 + \\gamma\\alpha^2)$</p>\n\n<p>I don't know if I can do much more that that.</p>\n" }, { "answer_id": 2114108, "score": 6, "text": "<p>There can be no such formula. The coefficients $a$, $b$, $c$ and $d$ are <a href=\"https://en.wikipedia.org/wiki/Symmetric_polynomial\">symmetric functions</a> of the roots $\\alpha$, $\\beta$ and $\\gamma$. Your polynomials $\\alpha^2 \\beta+\\beta^2 \\gamma+ \\gamma^2\\alpha$ and $\\alpha \\beta^2+\\beta \\gamma^2+\\gamma\\alpha^2$ are not symmetric functions, so they cannot be expressed as functions of the coefficients (since any such function would be symmetric).</p>\n" }, { "answer_id": 2114140, "score": 23, "text": "<p>Let $u=\\alpha^2\\beta+\\beta^2\\gamma+\\gamma^2\\alpha$ and $v=\\alpha\\beta^2+\\beta\\gamma^2+\\gamma\\alpha^2$. 
We have the following relations between the elementary symmetric polynomials $e_1$, $e_2$ and $e_3$ of $\\alpha$, $\\beta$ and $\\gamma$ and the coefficients of $P_3$:\n$$\ne_1 = \\alpha+\\beta+\\gamma = -\\frac{b}{a} \\\\\ne_2 = \\alpha\\beta+\\beta\\gamma+\\gamma\\alpha = +\\frac{c}{a} \\\\\ne_3 = \\alpha\\beta\\gamma = -\\frac{d}{a}\n$$\nWe find\n\\begin{eqnarray*}\np &amp; := u+v = &amp; e_1e_2-3e_3 \\\\\nq &amp; := uv = &amp; e_1^3e_3-6e_1e_2e_3+e_2^3+9e_3^2\n\\end{eqnarray*}\nFrom this, you can obtain $u$ and $v$ solving $t^2-pt+q=0$.</p>\n" }, { "answer_id": 2114117, "score": 12, "text": "<p>Let </p>\n\n<p>$$\\cases{E(\\alpha,\\beta,\\gamma):=\\alpha^2 \\beta+\\beta^2 \\gamma+ \\gamma^2\\alpha\\\\F(\\alpha,\\beta,\\gamma):=\n\\alpha \\beta^2+\\beta \\gamma^2+\\gamma\\alpha^2}$$</p>\n\n<p>Your problem, strictly speaking, has no solution (but we will see hereafter that we can somewhat \"bypass\" this issue):</p>\n\n<p>Here is a counterexample: Consider $P_3(x)=x(x^2-1)$ with roots $\\{0,1,-1\\}$.</p>\n\n<p>Expression $E$ can take two different values depending on the order in which the roots are taken\n:\n$$E(\\alpha,\\beta,\\gamma)=\\begin{cases}-1 \\ &amp; \\text{if} \\ \\ \\alpha=1,\\beta=-1,\\gamma=0\\\\ \\ \\ \\ 1 \\ &amp; \\text{if} \\ \\ \\alpha=-1,\\beta=1,\\gamma=0\\end{cases}.$$</p>\n\n<p>As said in the other answers, the fundamental reason is that neither $E$ nor $F$ is symmetrical in $\\alpha, \\beta, \\gamma$, thus not invariant for certain permutations of the roots. </p>\n\n<p>Of course, if you take $E+F$, you get a symmetrical polynomial in $\\alpha, \\beta, \\gamma.$ </p>\n\n<p>But a slightly deeper analysis will be rewarding. 
In fact, <strong>expressions $E$ and $F$ are conjugate</strong> in a sense that will become clear, once we have seen the two following examples:</p>\n\n<ul>\n<li><p>taking $P_3(x)=x^3+x^2-x$, value $E(\\alpha,\\beta,\\gamma)=\\Phi\\approx1.618$ is taken 3 times, and value $1-\\Phi\\approx-0.618$ is taken as well 3 times ($\\Phi$ is the golden ratio).</p></li>\n<li><p>taking $P_3(x)=x^3+x^2+x$, (with some complex roots), value $-\\frac12+i\\frac{\\sqrt{3}}{2}$ is taken 3 times, and conjugate value (with the meaning of \"conjugation in $\\mathbb{C}$\"): $-\\frac12-i\\frac{\\sqrt{3}}{2}$ is taken 3 times.</p></li>\n</ul>\n\n<p>Let us prove this \"conjugation\" by inspecting the set $S_3$ of $3!=6$ permutations of the roots $\\alpha,\\beta,\\gamma$:</p>\n\n<p>$$\\binom{\\alpha \\ \\beta \\ \\gamma}{\\alpha \\ \\beta \\ \\gamma}: \\ \\ E(\\alpha,\\beta,\\gamma)=E(\\alpha,\\beta,\\gamma)$$</p>\n\n<p>$$\\binom{\\alpha \\ \\beta \\ \\gamma}{\\beta \\ \\gamma \\ \\alpha}: \\ \\ E(\\beta,\\gamma,\\alpha)=E(\\alpha,\\beta,\\gamma)$$</p>\n\n<p>$$\\binom{\\alpha \\ \\beta \\ \\gamma}{\\gamma \\ \\alpha \\ \\beta}: \\ \\ E(\\gamma,\\alpha,\\beta)=E(\\alpha,\\beta,\\gamma)$$</p>\n\n<p>$$\\binom{\\alpha \\ \\beta \\ \\gamma}{\\beta \\ \\alpha \\ \\gamma}: \\ \\ E(\\beta,\\alpha,\\gamma)=F(\\alpha,\\beta,\\gamma)$$</p>\n\n<p>$$\\binom{\\alpha \\ \\beta \\ \\gamma}{\\beta \\ \\gamma \\ \\alpha}: \\ \\ E(\\beta,\\gamma ,\\alpha)=F(\\alpha,\\beta,\\gamma)$$</p>\n\n<p>$$\\binom{\\alpha \\ \\beta \\ \\gamma}{\\gamma\\ \\beta \\ \\alpha}: \\ \\ E(\\alpha,\\beta,\\gamma)=F(\\alpha,\\beta,\\gamma)$$</p>\n\n<p>We have thus proved that, once a permutation is done on the roots, the value taken by $E$ is either unchanged (3 cases) or is the value of $F$ (the 3 other cases). 
As said by @CCorn in his answer, it is the mathematical object $\\{E,F\\}$ which is invariant under the action of $S_3$.</p>\n\n<p>Remark: The first 3 cases correspond to a circular permutation: it is evident that $E$ has been built so that it is invariant by this type of permutation.</p>\n\n<p><strong>Addendum:</strong> </p>\n\n<p>This is a little window on the origin of the so-called \"Galois theory\".</p>\n\n<p>Have a look at the very pedagogical presentation <a href=\"http://www-math.ucdenver.edu/~wcherowi/courses/m4010/polynom3.pdf\" rel=\"nofollow noreferrer\">here</a> where it is explained how the ideas of Lagrange for solving the quartic equation where based on polynomials (called resolvents) that, under permutation, could take only a very small number of values. See also (<a href=\"https://en.wikipedia.org/wiki/Quartic_function\" rel=\"nofollow noreferrer\">https://en.wikipedia.org/wiki/Quartic_function</a>). Another reference (<a href=\"https://math.stackexchange.com/q/49141\">Galois theory: splitting field of cubic as a vector space</a>). </p>\n" }, { "answer_id": 2114111, "score": 9, "text": "<p>No. $a,b,c,d$ are <a href=\"https://en.wikipedia.org/wiki/Symmetric_polynomial\">symmetric polynomials</a> in $\\alpha,\\beta,\\gamma$ and thus invariant under every permutation of $(\\alpha,\\beta,\\gamma)$.\nThe expressions you want are invariant under some permutations, but not all, and therefore cannot be expressed in terms of only $a,b,c,d$.</p>\n\n<p>Their sum is symmetric however, as @DougM's answer indicates.</p>\n\n<p><strong>Update:</strong> Their product too, and this results in a quadratic equation for the two expressions, as @ReinhardMeier's answer shows. 
Thus, the individual values of your desired expressions (call them $u$ and $v$) are not determined uniquely from $a,b,c,d$, but the set $\\{u,v\\}$ is.</p>\n" }, { "answer_id": 2114092, "score": 6, "text": "<p>$\\frac {b}{a} = -\\alpha - \\beta - \\gamma\\\\\n\\frac {c}{a} = \\alpha \\beta + \\beta\\gamma + \\gamma\\alpha\\\\\n\\frac {d}{a} = -\\alpha \\beta\\gamma\\\\\n-\\frac {b}{a}\\frac {c}{a} + 3\\frac {d}{a} = (\\alpha^2 \\beta + \\beta^2\\gamma + \\gamma^2\\alpha)+(\\alpha \\beta^2 + \\beta\\gamma^2 + \\gamma\\alpha^2)$</p>\n\n<p>I don't know if I can do much more that that.</p>\n" }, { "answer_id": 2114108, "score": 6, "text": "<p>There can be no such formula. The coefficients $a$, $b$, $c$ and $d$ are <a href=\"https://en.wikipedia.org/wiki/Symmetric_polynomial\">symmetric functions</a> of the roots $\\alpha$, $\\beta$ and $\\gamma$. Your polynomials $\\alpha^2 \\beta+\\beta^2 \\gamma+ \\gamma^2\\alpha$ and $\\alpha \\beta^2+\\beta \\gamma^2+\\gamma\\alpha^2$ are not symmetric functions, so they cannot be expressed as functions of the coefficients (since any such function would be symmetric).</p>\n" }, { "answer_id": 2114140, "score": 23, "text": "<p>Let $u=\\alpha^2\\beta+\\beta^2\\gamma+\\gamma^2\\alpha$ and $v=\\alpha\\beta^2+\\beta\\gamma^2+\\gamma\\alpha^2$. 
We have the following relations between the elementary symmetric polynomials $e_1$, $e_2$ and $e_3$ of $\\alpha$, $\\beta$ and $\\gamma$ and the coefficients of $P_3$:\n$$\ne_1 = \\alpha+\\beta+\\gamma = -\\frac{b}{a} \\\\\ne_2 = \\alpha\\beta+\\beta\\gamma+\\gamma\\alpha = +\\frac{c}{a} \\\\\ne_3 = \\alpha\\beta\\gamma = -\\frac{d}{a}\n$$\nWe find\n\\begin{eqnarray*}\np &amp; := u+v = &amp; e_1e_2-3e_3 \\\\\nq &amp; := uv = &amp; e_1^3e_3-6e_1e_2e_3+e_2^3+9e_3^2\n\\end{eqnarray*}\nFrom this, you can obtain $u$ and $v$ solving $t^2-pt+q=0$.</p>\n" }, { "answer_id": 2114117, "score": 12, "text": "<p>Let </p>\n\n<p>$$\\cases{E(\\alpha,\\beta,\\gamma):=\\alpha^2 \\beta+\\beta^2 \\gamma+ \\gamma^2\\alpha\\\\F(\\alpha,\\beta,\\gamma):=\n\\alpha \\beta^2+\\beta \\gamma^2+\\gamma\\alpha^2}$$</p>\n\n<p>Your problem, strictly speaking, has no solution (but we will see hereafter that we can somewhat \"bypass\" this issue):</p>\n\n<p>Here is a counterexample: Consider $P_3(x)=x(x^2-1)$ with roots $\\{0,1,-1\\}$.</p>\n\n<p>Expression $E$ can take two different values depending on the order in which the roots are taken\n:\n$$E(\\alpha,\\beta,\\gamma)=\\begin{cases}-1 \\ &amp; \\text{if} \\ \\ \\alpha=1,\\beta=-1,\\gamma=0\\\\ \\ \\ \\ 1 \\ &amp; \\text{if} \\ \\ \\alpha=-1,\\beta=1,\\gamma=0\\end{cases}.$$</p>\n\n<p>As said in the other answers, the fundamental reason is that neither $E$ nor $F$ is symmetrical in $\\alpha, \\beta, \\gamma$, thus not invariant for certain permutations of the roots. </p>\n\n<p>Of course, if you take $E+F$, you get a symmetrical polynomial in $\\alpha, \\beta, \\gamma.$ </p>\n\n<p>But a slightly deeper analysis will be rewarding. 
In fact, <strong>expressions $E$ and $F$ are conjugate</strong> in a sense that will become clear, once we have seen the two following examples:</p>\n\n<ul>\n<li><p>taking $P_3(x)=x^3+x^2-x$, value $E(\\alpha,\\beta,\\gamma)=\\Phi\\approx1.618$ is taken 3 times, and value $1-\\Phi\\approx-0.618$ is taken as well 3 times ($\\Phi$ is the golden ratio).</p></li>\n<li><p>taking $P_3(x)=x^3+x^2+x$, (with some complex roots), value $-\\frac12+i\\frac{\\sqrt{3}}{2}$ is taken 3 times, and conjugate value (with the meaning of \"conjugation in $\\mathbb{C}$\"): $-\\frac12-i\\frac{\\sqrt{3}}{2}$ is taken 3 times.</p></li>\n</ul>\n\n<p>Let us prove this \"conjugation\" by inspecting the set $S_3$ of $3!=6$ permutations of the roots $\\alpha,\\beta,\\gamma$:</p>\n\n<p>$$\\binom{\\alpha \\ \\beta \\ \\gamma}{\\alpha \\ \\beta \\ \\gamma}: \\ \\ E(\\alpha,\\beta,\\gamma)=E(\\alpha,\\beta,\\gamma)$$</p>\n\n<p>$$\\binom{\\alpha \\ \\beta \\ \\gamma}{\\beta \\ \\gamma \\ \\alpha}: \\ \\ E(\\beta,\\gamma,\\alpha)=E(\\alpha,\\beta,\\gamma)$$</p>\n\n<p>$$\\binom{\\alpha \\ \\beta \\ \\gamma}{\\gamma \\ \\alpha \\ \\beta}: \\ \\ E(\\gamma,\\alpha,\\beta)=E(\\alpha,\\beta,\\gamma)$$</p>\n\n<p>$$\\binom{\\alpha \\ \\beta \\ \\gamma}{\\beta \\ \\alpha \\ \\gamma}: \\ \\ E(\\beta,\\alpha,\\gamma)=F(\\alpha,\\beta,\\gamma)$$</p>\n\n<p>$$\\binom{\\alpha \\ \\beta \\ \\gamma}{\\beta \\ \\gamma \\ \\alpha}: \\ \\ E(\\beta,\\gamma ,\\alpha)=F(\\alpha,\\beta,\\gamma)$$</p>\n\n<p>$$\\binom{\\alpha \\ \\beta \\ \\gamma}{\\gamma\\ \\beta \\ \\alpha}: \\ \\ E(\\alpha,\\beta,\\gamma)=F(\\alpha,\\beta,\\gamma)$$</p>\n\n<p>We have thus proved that, once a permutation is done on the roots, the value taken by $E$ is either unchanged (3 cases) or is the value of $F$ (the 3 other cases). 
As said by @CCorn in his answer, it is the mathematical object $\\{E,F\\}$ which is invariant under the action of $S_3$.</p>\n\n<p>Remark: The first 3 cases correspond to a circular permutation: it is evident that $E$ has been built so that it is invariant by this type of permutation.</p>\n\n<p><strong>Addendum:</strong> </p>\n\n<p>This is a little window on the origin of the so-called \"Galois theory\".</p>\n\n<p>Have a look at the very pedagogical presentation <a href=\"http://www-math.ucdenver.edu/~wcherowi/courses/m4010/polynom3.pdf\" rel=\"nofollow noreferrer\">here</a> where it is explained how the ideas of Lagrange for solving the quartic equation where based on polynomials (called resolvents) that, under permutation, could take only a very small number of values. See also (<a href=\"https://en.wikipedia.org/wiki/Quartic_function\" rel=\"nofollow noreferrer\">https://en.wikipedia.org/wiki/Quartic_function</a>). Another reference (<a href=\"https://math.stackexchange.com/q/49141\">Galois theory: splitting field of cubic as a vector space</a>). </p>\n" }, { "answer_id": 2114111, "score": 9, "text": "<p>No. $a,b,c,d$ are <a href=\"https://en.wikipedia.org/wiki/Symmetric_polynomial\">symmetric polynomials</a> in $\\alpha,\\beta,\\gamma$ and thus invariant under every permutation of $(\\alpha,\\beta,\\gamma)$.\nThe expressions you want are invariant under some permutations, but not all, and therefore cannot be expressed in terms of only $a,b,c,d$.</p>\n\n<p>Their sum is symmetric however, as @DougM's answer indicates.</p>\n\n<p><strong>Update:</strong> Their product too, and this results in a quadratic equation for the two expressions, as @ReinhardMeier's answer shows. 
Thus, the individual values of your desired expressions (call them $u$ and $v$) are not determined uniquely from $a,b,c,d$, but the set $\\{u,v\\}$ is.</p>\n" }, { "answer_id": 2114092, "score": 6, "text": "<p>$\\frac {b}{a} = -\\alpha - \\beta - \\gamma\\\\\n\\frac {c}{a} = \\alpha \\beta + \\beta\\gamma + \\gamma\\alpha\\\\\n\\frac {d}{a} = -\\alpha \\beta\\gamma\\\\\n-\\frac {b}{a}\\frac {c}{a} + 3\\frac {d}{a} = (\\alpha^2 \\beta + \\beta^2\\gamma + \\gamma^2\\alpha)+(\\alpha \\beta^2 + \\beta\\gamma^2 + \\gamma\\alpha^2)$</p>\n\n<p>I don't know if I can do much more that that.</p>\n" }, { "answer_id": 2114108, "score": 6, "text": "<p>There can be no such formula. The coefficients $a$, $b$, $c$ and $d$ are <a href=\"https://en.wikipedia.org/wiki/Symmetric_polynomial\">symmetric functions</a> of the roots $\\alpha$, $\\beta$ and $\\gamma$. Your polynomials $\\alpha^2 \\beta+\\beta^2 \\gamma+ \\gamma^2\\alpha$ and $\\alpha \\beta^2+\\beta \\gamma^2+\\gamma\\alpha^2$ are not symmetric functions, so they cannot be expressed as functions of the coefficients (since any such function would be symmetric).</p>\n" }, { "answer_id": 2114140, "score": 23, "text": "<p>Let $u=\\alpha^2\\beta+\\beta^2\\gamma+\\gamma^2\\alpha$ and $v=\\alpha\\beta^2+\\beta\\gamma^2+\\gamma\\alpha^2$. 
We have the following relations between the elementary symmetric polynomials $e_1$, $e_2$ and $e_3$ of $\\alpha$, $\\beta$ and $\\gamma$ and the coefficients of $P_3$:\n$$\ne_1 = \\alpha+\\beta+\\gamma = -\\frac{b}{a} \\\\\ne_2 = \\alpha\\beta+\\beta\\gamma+\\gamma\\alpha = +\\frac{c}{a} \\\\\ne_3 = \\alpha\\beta\\gamma = -\\frac{d}{a}\n$$\nWe find\n\\begin{eqnarray*}\np &amp; := u+v = &amp; e_1e_2-3e_3 \\\\\nq &amp; := uv = &amp; e_1^3e_3-6e_1e_2e_3+e_2^3+9e_3^2\n\\end{eqnarray*}\nFrom this, you can obtain $u$ and $v$ solving $t^2-pt+q=0$.</p>\n" }, { "answer_id": 2114117, "score": 12, "text": "<p>Let </p>\n\n<p>$$\\cases{E(\\alpha,\\beta,\\gamma):=\\alpha^2 \\beta+\\beta^2 \\gamma+ \\gamma^2\\alpha\\\\F(\\alpha,\\beta,\\gamma):=\n\\alpha \\beta^2+\\beta \\gamma^2+\\gamma\\alpha^2}$$</p>\n\n<p>Your problem, strictly speaking, has no solution (but we will see hereafter that we can somewhat \"bypass\" this issue):</p>\n\n<p>Here is a counterexample: Consider $P_3(x)=x(x^2-1)$ with roots $\\{0,1,-1\\}$.</p>\n\n<p>Expression $E$ can take two different values depending on the order in which the roots are taken\n:\n$$E(\\alpha,\\beta,\\gamma)=\\begin{cases}-1 \\ &amp; \\text{if} \\ \\ \\alpha=1,\\beta=-1,\\gamma=0\\\\ \\ \\ \\ 1 \\ &amp; \\text{if} \\ \\ \\alpha=-1,\\beta=1,\\gamma=0\\end{cases}.$$</p>\n\n<p>As said in the other answers, the fundamental reason is that neither $E$ nor $F$ is symmetrical in $\\alpha, \\beta, \\gamma$, thus not invariant for certain permutations of the roots. </p>\n\n<p>Of course, if you take $E+F$, you get a symmetrical polynomial in $\\alpha, \\beta, \\gamma.$ </p>\n\n<p>But a slightly deeper analysis will be rewarding. 
In fact, <strong>expressions $E$ and $F$ are conjugate</strong> in a sense that will become clear, once we have seen the two following examples:</p>\n\n<ul>\n<li><p>taking $P_3(x)=x^3+x^2-x$, value $E(\\alpha,\\beta,\\gamma)=\\Phi\\approx1.618$ is taken 3 times, and value $1-\\Phi\\approx-0.618$ is taken as well 3 times ($\\Phi$ is the golden ratio).</p></li>\n<li><p>taking $P_3(x)=x^3+x^2+x$, (with some complex roots), value $-\\frac12+i\\frac{\\sqrt{3}}{2}$ is taken 3 times, and conjugate value (with the meaning of \"conjugation in $\\mathbb{C}$\"): $-\\frac12-i\\frac{\\sqrt{3}}{2}$ is taken 3 times.</p></li>\n</ul>\n\n<p>Let us prove this \"conjugation\" by inspecting the set $S_3$ of $3!=6$ permutations of the roots $\\alpha,\\beta,\\gamma$:</p>\n\n<p>$$\\binom{\\alpha \\ \\beta \\ \\gamma}{\\alpha \\ \\beta \\ \\gamma}: \\ \\ E(\\alpha,\\beta,\\gamma)=E(\\alpha,\\beta,\\gamma)$$</p>\n\n<p>$$\\binom{\\alpha \\ \\beta \\ \\gamma}{\\beta \\ \\gamma \\ \\alpha}: \\ \\ E(\\beta,\\gamma,\\alpha)=E(\\alpha,\\beta,\\gamma)$$</p>\n\n<p>$$\\binom{\\alpha \\ \\beta \\ \\gamma}{\\gamma \\ \\alpha \\ \\beta}: \\ \\ E(\\gamma,\\alpha,\\beta)=E(\\alpha,\\beta,\\gamma)$$</p>\n\n<p>$$\\binom{\\alpha \\ \\beta \\ \\gamma}{\\beta \\ \\alpha \\ \\gamma}: \\ \\ E(\\beta,\\alpha,\\gamma)=F(\\alpha,\\beta,\\gamma)$$</p>\n\n<p>$$\\binom{\\alpha \\ \\beta \\ \\gamma}{\\beta \\ \\gamma \\ \\alpha}: \\ \\ E(\\beta,\\gamma ,\\alpha)=F(\\alpha,\\beta,\\gamma)$$</p>\n\n<p>$$\\binom{\\alpha \\ \\beta \\ \\gamma}{\\gamma\\ \\beta \\ \\alpha}: \\ \\ E(\\alpha,\\beta,\\gamma)=F(\\alpha,\\beta,\\gamma)$$</p>\n\n<p>We have thus proved that, once a permutation is done on the roots, the value taken by $E$ is either unchanged (3 cases) or is the value of $F$ (the 3 other cases). 
As said by @CCorn in his answer, it is the mathematical object $\\{E,F\\}$ which is invariant under the action of $S_3$.</p>\n\n<p>Remark: The first 3 cases correspond to a circular permutation: it is evident that $E$ has been built so that it is invariant by this type of permutation.</p>\n\n<p><strong>Addendum:</strong> </p>\n\n<p>This is a little window on the origin of the so-called \"Galois theory\".</p>\n\n<p>Have a look at the very pedagogical presentation <a href=\"http://www-math.ucdenver.edu/~wcherowi/courses/m4010/polynom3.pdf\" rel=\"nofollow noreferrer\">here</a> where it is explained how the ideas of Lagrange for solving the quartic equation where based on polynomials (called resolvents) that, under permutation, could take only a very small number of values. See also (<a href=\"https://en.wikipedia.org/wiki/Quartic_function\" rel=\"nofollow noreferrer\">https://en.wikipedia.org/wiki/Quartic_function</a>). Another reference (<a href=\"https://math.stackexchange.com/q/49141\">Galois theory: splitting field of cubic as a vector space</a>). </p>\n" }, { "answer_id": 2114111, "score": 9, "text": "<p>No. $a,b,c,d$ are <a href=\"https://en.wikipedia.org/wiki/Symmetric_polynomial\">symmetric polynomials</a> in $\\alpha,\\beta,\\gamma$ and thus invariant under every permutation of $(\\alpha,\\beta,\\gamma)$.\nThe expressions you want are invariant under some permutations, but not all, and therefore cannot be expressed in terms of only $a,b,c,d$.</p>\n\n<p>Their sum is symmetric however, as @DougM's answer indicates.</p>\n\n<p><strong>Update:</strong> Their product too, and this results in a quadratic equation for the two expressions, as @ReinhardMeier's answer shows. 
Thus, the individual values of your desired expressions (call them $u$ and $v$) are not determined uniquely from $a,b,c,d$, but the set $\\{u,v\\}$ is.</p>\n" }, { "answer_id": 2114092, "score": 6, "text": "<p>$\\frac {b}{a} = -\\alpha - \\beta - \\gamma\\\\\n\\frac {c}{a} = \\alpha \\beta + \\beta\\gamma + \\gamma\\alpha\\\\\n\\frac {d}{a} = -\\alpha \\beta\\gamma\\\\\n-\\frac {b}{a}\\frac {c}{a} + 3\\frac {d}{a} = (\\alpha^2 \\beta + \\beta^2\\gamma + \\gamma^2\\alpha)+(\\alpha \\beta^2 + \\beta\\gamma^2 + \\gamma\\alpha^2)$</p>\n\n<p>I don't know if I can do much more that that.</p>\n" }, { "answer_id": 2114108, "score": 6, "text": "<p>There can be no such formula. The coefficients $a$, $b$, $c$ and $d$ are <a href=\"https://en.wikipedia.org/wiki/Symmetric_polynomial\">symmetric functions</a> of the roots $\\alpha$, $\\beta$ and $\\gamma$. Your polynomials $\\alpha^2 \\beta+\\beta^2 \\gamma+ \\gamma^2\\alpha$ and $\\alpha \\beta^2+\\beta \\gamma^2+\\gamma\\alpha^2$ are not symmetric functions, so they cannot be expressed as functions of the coefficients (since any such function would be symmetric).</p>\n" }, { "answer_id": 2114140, "score": 23, "text": "<p>Let $u=\\alpha^2\\beta+\\beta^2\\gamma+\\gamma^2\\alpha$ and $v=\\alpha\\beta^2+\\beta\\gamma^2+\\gamma\\alpha^2$. 
We have the following relations between the elementary symmetric polynomials $e_1$, $e_2$ and $e_3$ of $\\alpha$, $\\beta$ and $\\gamma$ and the coefficients of $P_3$:\n$$\ne_1 = \\alpha+\\beta+\\gamma = -\\frac{b}{a} \\\\\ne_2 = \\alpha\\beta+\\beta\\gamma+\\gamma\\alpha = +\\frac{c}{a} \\\\\ne_3 = \\alpha\\beta\\gamma = -\\frac{d}{a}\n$$\nWe find\n\\begin{eqnarray*}\np &amp; := u+v = &amp; e_1e_2-3e_3 \\\\\nq &amp; := uv = &amp; e_1^3e_3-6e_1e_2e_3+e_2^3+9e_3^2\n\\end{eqnarray*}\nFrom this, you can obtain $u$ and $v$ solving $t^2-pt+q=0$.</p>\n" }, { "answer_id": 2114117, "score": 12, "text": "<p>Let </p>\n\n<p>$$\\cases{E(\\alpha,\\beta,\\gamma):=\\alpha^2 \\beta+\\beta^2 \\gamma+ \\gamma^2\\alpha\\\\F(\\alpha,\\beta,\\gamma):=\n\\alpha \\beta^2+\\beta \\gamma^2+\\gamma\\alpha^2}$$</p>\n\n<p>Your problem, strictly speaking, has no solution (but we will see hereafter that we can somewhat \"bypass\" this issue):</p>\n\n<p>Here is a counterexample: Consider $P_3(x)=x(x^2-1)$ with roots $\\{0,1,-1\\}$.</p>\n\n<p>Expression $E$ can take two different values depending on the order in which the roots are taken\n:\n$$E(\\alpha,\\beta,\\gamma)=\\begin{cases}-1 \\ &amp; \\text{if} \\ \\ \\alpha=1,\\beta=-1,\\gamma=0\\\\ \\ \\ \\ 1 \\ &amp; \\text{if} \\ \\ \\alpha=-1,\\beta=1,\\gamma=0\\end{cases}.$$</p>\n\n<p>As said in the other answers, the fundamental reason is that neither $E$ nor $F$ is symmetrical in $\\alpha, \\beta, \\gamma$, thus not invariant for certain permutations of the roots. </p>\n\n<p>Of course, if you take $E+F$, you get a symmetrical polynomial in $\\alpha, \\beta, \\gamma.$ </p>\n\n<p>But a slightly deeper analysis will be rewarding. 
In fact, <strong>expressions $E$ and $F$ are conjugate</strong> in a sense that will become clear, once we have seen the two following examples:</p>\n\n<ul>\n<li><p>taking $P_3(x)=x^3+x^2-x$, value $E(\\alpha,\\beta,\\gamma)=\\Phi\\approx1.618$ is taken 3 times, and value $1-\\Phi\\approx-0.618$ is taken as well 3 times ($\\Phi$ is the golden ratio).</p></li>\n<li><p>taking $P_3(x)=x^3+x^2+x$, (with some complex roots), value $-\\frac12+i\\frac{\\sqrt{3}}{2}$ is taken 3 times, and conjugate value (with the meaning of \"conjugation in $\\mathbb{C}$\"): $-\\frac12-i\\frac{\\sqrt{3}}{2}$ is taken 3 times.</p></li>\n</ul>\n\n<p>Let us prove this \"conjugation\" by inspecting the set $S_3$ of $3!=6$ permutations of the roots $\\alpha,\\beta,\\gamma$:</p>\n\n<p>$$\\binom{\\alpha \\ \\beta \\ \\gamma}{\\alpha \\ \\beta \\ \\gamma}: \\ \\ E(\\alpha,\\beta,\\gamma)=E(\\alpha,\\beta,\\gamma)$$</p>\n\n<p>$$\\binom{\\alpha \\ \\beta \\ \\gamma}{\\beta \\ \\gamma \\ \\alpha}: \\ \\ E(\\beta,\\gamma,\\alpha)=E(\\alpha,\\beta,\\gamma)$$</p>\n\n<p>$$\\binom{\\alpha \\ \\beta \\ \\gamma}{\\gamma \\ \\alpha \\ \\beta}: \\ \\ E(\\gamma,\\alpha,\\beta)=E(\\alpha,\\beta,\\gamma)$$</p>\n\n<p>$$\\binom{\\alpha \\ \\beta \\ \\gamma}{\\beta \\ \\alpha \\ \\gamma}: \\ \\ E(\\beta,\\alpha,\\gamma)=F(\\alpha,\\beta,\\gamma)$$</p>\n\n<p>$$\\binom{\\alpha \\ \\beta \\ \\gamma}{\\beta \\ \\gamma \\ \\alpha}: \\ \\ E(\\beta,\\gamma ,\\alpha)=F(\\alpha,\\beta,\\gamma)$$</p>\n\n<p>$$\\binom{\\alpha \\ \\beta \\ \\gamma}{\\gamma\\ \\beta \\ \\alpha}: \\ \\ E(\\alpha,\\beta,\\gamma)=F(\\alpha,\\beta,\\gamma)$$</p>\n\n<p>We have thus proved that, once a permutation is done on the roots, the value taken by $E$ is either unchanged (3 cases) or is the value of $F$ (the 3 other cases). 
As said by @CCorn in his answer, it is the mathematical object $\\{E,F\\}$ which is invariant under the action of $S_3$.</p>\n\n<p>Remark: The first 3 cases correspond to a circular permutation: it is evident that $E$ has been built so that it is invariant under this type of permutation.</p>\n\n<p><strong>Addendum:</strong> </p>\n\n<p>This is a little window on the origin of the so-called \"Galois theory\".</p>\n\n<p>Have a look at the very pedagogical presentation <a href=\"http://www-math.ucdenver.edu/~wcherowi/courses/m4010/polynom3.pdf\" rel=\"nofollow noreferrer\">here</a>, where it is explained how Lagrange's ideas for solving the quartic equation were based on polynomials (called resolvents) that, under permutation, could take only a very small number of values. See also <a href=\"https://en.wikipedia.org/wiki/Quartic_function\" rel=\"nofollow noreferrer\">https://en.wikipedia.org/wiki/Quartic_function</a>. Another reference: <a href=\"https://math.stackexchange.com/q/49141\">Galois theory: splitting field of cubic as a vector space</a>.</p>\n" }, { "answer_id": 2114111, "score": 9, "text": "<p>No. $a,b,c,d$ are <a href=\"https://en.wikipedia.org/wiki/Symmetric_polynomial\">symmetric polynomials</a> in $\\alpha,\\beta,\\gamma$ and thus invariant under every permutation of $(\\alpha,\\beta,\\gamma)$.\nThe expressions you want are invariant under some permutations, but not all, and therefore cannot be expressed in terms of only $a,b,c,d$.</p>\n\n<p>Their sum is symmetric, however, as @DougM's answer indicates.</p>\n\n<p><strong>Update:</strong> Their product is too, and this results in a quadratic equation for the two expressions, as @ReinhardMeier's answer shows. 
Thus, the individual values of your desired expressions (call them $u$ and $v$) are not determined uniquely from $a,b,c,d$, but the set $\\{u,v\\}$ is.</p>\n" }, { "answer_id": 2114092, "score": 6, "text": "<p>$\\frac {b}{a} = -\\alpha - \\beta - \\gamma\\\\\n\\frac {c}{a} = \\alpha \\beta + \\beta\\gamma + \\gamma\\alpha\\\\\n\\frac {d}{a} = -\\alpha \\beta\\gamma\\\\\n-\\frac {b}{a}\\frac {c}{a} + 3\\frac {d}{a} = (\\alpha^2 \\beta + \\beta^2\\gamma + \\gamma^2\\alpha)+(\\alpha \\beta^2 + \\beta\\gamma^2 + \\gamma\\alpha^2)$</p>\n\n<p>I don't know if I can do much more that that.</p>\n" }, { "answer_id": 2114108, "score": 6, "text": "<p>There can be no such formula. The coefficients $a$, $b$, $c$ and $d$ are <a href=\"https://en.wikipedia.org/wiki/Symmetric_polynomial\">symmetric functions</a> of the roots $\\alpha$, $\\beta$ and $\\gamma$. Your polynomials $\\alpha^2 \\beta+\\beta^2 \\gamma+ \\gamma^2\\alpha$ and $\\alpha \\beta^2+\\beta \\gamma^2+\\gamma\\alpha^2$ are not symmetric functions, so they cannot be expressed as functions of the coefficients (since any such function would be symmetric).</p>\n" }
]
Question: About the roots of cubic polynomial

Answer 1: <p>Let $u=\alpha^2\beta+\beta^2\gamma+\gamma^2\alpha$ and $v=\alpha\beta^2+\beta\gamma^2+\gamma\alpha^2$. We have the following relations between the elementary symmetric polynomials $e_1$, $e_2$ and $e_3$ of $\alpha$, $\beta$ and $\gamma$ and the coefficients of $P_3$:
$$
e_1 = \alpha+\beta+\gamma = -\frac{b}{a} \\
e_2 = \alpha\beta+\beta\gamma+\gamma\alpha = +\frac{c}{a} \\
e_3 = \alpha\beta\gamma = -\frac{d}{a}
$$
We find
\begin{eqnarray*}
p &amp; := u+v = &amp; e_1e_2-3e_3 \\
q &amp; := uv = &amp; e_1^3e_3-6e_1e_2e_3+e_2^3+9e_3^2
\end{eqnarray*}
From this, you can obtain $u$ and $v$ solving $t^2-pt+q=0$.</p>

Answer 2: <p>Let </p>

<p>$$\cases{E(\alpha,\beta,\gamma):=\alpha^2 \beta+\beta^2 \gamma+ \gamma^2\alpha\\F(\alpha,\beta,\gamma):=
\alpha \beta^2+\beta \gamma^2+\gamma\alpha^2}$$</p>

<p>Your problem, strictly speaking, has no solution (but we will see hereafter that we can somewhat "bypass" this issue):</p>

<p>Here is a counterexample: Consider $P_3(x)=x(x^2-1)$ with roots $\{0,1,-1\}$.</p>

<p>Expression $E$ can take two different values depending on the order in which the roots are taken:
$$E(\alpha,\beta,\gamma)=\begin{cases}-1 \ &amp; \text{if} \ \ \alpha=1,\beta=-1,\gamma=0\\ \ \ \ 1 \ &amp; \text{if} \ \ \alpha=-1,\beta=1,\gamma=0\end{cases}.$$</p>

<p>As said in the other answers, the fundamental reason is that neither $E$ nor $F$ is symmetrical in $\alpha, \beta, \gamma$, thus not invariant for certain permutations of the roots. </p>

<p>Of course, if you take $E+F$, you get a symmetrical polynomial in $\alpha, \beta, \gamma.$ </p>

<p>But a slightly deeper analysis will be rewarding.
In fact, <strong>expressions $E$ and $F$ are conjugate</strong> in a sense that will become clear, once we have seen the two following examples:</p>

<ul>
<li><p>taking $P_3(x)=x^3+x^2-x$, value $E(\alpha,\beta,\gamma)=\Phi\approx1.618$ is taken 3 times, and value $1-\Phi\approx-0.618$ is taken as well 3 times ($\Phi$ is the golden ratio).</p></li>
<li><p>taking $P_3(x)=x^3+x^2+x$, (with some complex roots), value $-\frac12+i\frac{\sqrt{3}}{2}$ is taken 3 times, and conjugate value (with the meaning of "conjugation in $\mathbb{C}$"): $-\frac12-i\frac{\sqrt{3}}{2}$ is taken 3 times.</p></li>
</ul>

<p>Let us prove this "conjugation" by inspecting the set $S_3$ of $3!=6$ permutations of the roots $\alpha,\beta,\gamma$:</p>

<p>$$\binom{\alpha \ \beta \ \gamma}{\alpha \ \beta \ \gamma}: \ \ E(\alpha,\beta,\gamma)=E(\alpha,\beta,\gamma)$$</p>

<p>$$\binom{\alpha \ \beta \ \gamma}{\beta \ \gamma \ \alpha}: \ \ E(\beta,\gamma,\alpha)=E(\alpha,\beta,\gamma)$$</p>

<p>$$\binom{\alpha \ \beta \ \gamma}{\gamma \ \alpha \ \beta}: \ \ E(\gamma,\alpha,\beta)=E(\alpha,\beta,\gamma)$$</p>

<p>$$\binom{\alpha \ \beta \ \gamma}{\beta \ \alpha \ \gamma}: \ \ E(\beta,\alpha,\gamma)=F(\alpha,\beta,\gamma)$$</p>

<p>$$\binom{\alpha \ \beta \ \gamma}{\alpha \ \gamma \ \beta}: \ \ E(\alpha,\gamma,\beta)=F(\alpha,\beta,\gamma)$$</p>

<p>$$\binom{\alpha \ \beta \ \gamma}{\gamma\ \beta \ \alpha}: \ \ E(\gamma,\beta,\alpha)=F(\alpha,\beta,\gamma)$$</p>

<p>We have thus proved that, once a permutation is applied to the roots, the value taken by $E$ is either unchanged (3 cases: the cyclic permutations) or is the value of $F$ (the 3 other cases: the transpositions).
As said by @CCorn in his answer, it is the mathematical object $\{E,F\}$ which is invariant under the action of $S_3$.</p>

<p>Remark: The first 3 cases correspond to a circular permutation: it is evident that $E$ has been built so that it is invariant under this type of permutation.</p>

<p><strong>Addendum:</strong> </p>

<p>This is a little window on the origin of the so-called "Galois theory".</p>

<p>Have a look at the very pedagogical presentation <a href="http://www-math.ucdenver.edu/~wcherowi/courses/m4010/polynom3.pdf" rel="nofollow noreferrer">here</a> where it is explained how the ideas of Lagrange for solving the quartic equation were based on polynomials (called resolvents) that, under permutation, could take only a very small number of values. See also (<a href="https://en.wikipedia.org/wiki/Quartic_function" rel="nofollow noreferrer">https://en.wikipedia.org/wiki/Quartic_function</a>). Another reference (<a href="https://math.stackexchange.com/q/49141">Galois theory: splitting field of cubic as a vector space</a>). </p>

Answer 3: <p>No. $a,b,c,d$ are <a href="https://en.wikipedia.org/wiki/Symmetric_polynomial">symmetric polynomials</a> in $\alpha,\beta,\gamma$ and thus invariant under every permutation of $(\alpha,\beta,\gamma)$.
The expressions you want are invariant under some permutations, but not all, and therefore cannot be expressed in terms of only $a,b,c,d$.</p>

<p>Their sum is symmetric however, as @DougM's answer indicates.</p>

<p><strong>Update:</strong> Their product too, and this results in a quadratic equation for the two expressions, as @ReinhardMeier's answer shows.
Thus, the individual values of your desired expressions (call them $u$ and $v$) are not determined uniquely from $a,b,c,d$, but the set $\{u,v\}$ is.</p>

Answer 4: <p>$\frac {b}{a} = -\alpha - \beta - \gamma\\
\frac {c}{a} = \alpha \beta + \beta\gamma + \gamma\alpha\\
\frac {d}{a} = -\alpha \beta\gamma\\
-\frac {b}{a}\frac {c}{a} + 3\frac {d}{a} = (\alpha^2 \beta + \beta^2\gamma + \gamma^2\alpha)+(\alpha \beta^2 + \beta\gamma^2 + \gamma\alpha^2)$</p>

<p>I don't know if I can do much more than that.</p>

Answer 5: <p>There can be no such formula. The coefficients $a$, $b$, $c$ and $d$ are <a href="https://en.wikipedia.org/wiki/Symmetric_polynomial">symmetric functions</a> of the roots $\alpha$, $\beta$ and $\gamma$. Your polynomials $\alpha^2 \beta+\beta^2 \gamma+ \gamma^2\alpha$ and $\alpha \beta^2+\beta \gamma^2+\gamma\alpha^2$ are not symmetric functions, so they cannot be expressed as functions of the coefficients (since any such function would be symmetric).</p>
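Editorial note: the closed forms quoted in Answer 1 are easy to sanity-check numerically. The following sketch (added here for illustration, not part of the original answers; the sample roots $1,2,3$ are an arbitrary choice) verifies $p=e_1e_2-3e_3$ and $q=e_1^3e_3-6e_1e_2e_3+e_2^3+9e_3^2$, and that $u,v$ are the roots of $t^2-pt+q=0$:

```python
# Numeric sanity check of Answer 1's formulas -- illustration only,
# with the hypothetical sample roots alpha, beta, gamma = 1, 2, 3.
alpha, beta, gamma = 1, 2, 3

# Elementary symmetric polynomials of the roots
e1 = alpha + beta + gamma                      # 6
e2 = alpha*beta + beta*gamma + gamma*alpha     # 11
e3 = alpha*beta*gamma                          # 6

# The two non-symmetric expressions, computed directly from the roots
u = alpha**2*beta + beta**2*gamma + gamma**2*alpha   # 2 + 12 + 9 = 23
v = alpha*beta**2 + beta*gamma**2 + gamma*alpha**2   # 4 + 18 + 3 = 25

# Answer 1's formulas for the symmetric combinations p = u+v and q = uv
p = e1*e2 - 3*e3
q = e1**3*e3 - 6*e1*e2*e3 + e2**3 + 9*e3**2

assert p == u + v            # 48
assert q == u * v            # 575
# u and v are indeed the two roots of t^2 - p*t + q = 0
assert u**2 - p*u + q == 0 and v**2 - p*v + q == 0
print(p, q)  # 48 575
```

The same check works for any root triple, since $p$ and $q$ are symmetric in the roots.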
We have the following relations between the elementary symmetric polynomials $e_1$, $e_2$ and $e_3$ of $\alpha$, $\beta$ and $\gamma$ and the coefficients of $P_3$: $$ e_1 = \alpha+\beta+\gamma = -\frac{b}{a} \\ e_2 = \alpha\beta+\beta\gamma+\gamma\alpha = +\frac{c}{a} \\ e_3 = \alpha\beta\gamma = -\frac{d}{a} $$ We find \begin{eqnarray*} p &amp; := u+v = &amp; e_1e_2-3e_3 \\ q &amp; := uv = &amp; e_1^3e_3-6e_1e_2e_3+e_2^3+9e_3^2 \end{eqnarray*} From this, you can obtain $u$ and $v$ solving $t^2-pt+q=0$.</p> Answer 7: <p>Let </p> <p>$$\cases{E(\alpha,\beta,\gamma):=\alpha^2 \beta+\beta^2 \gamma+ \gamma^2\alpha\\F(\alpha,\beta,\gamma):= \alpha \beta^2+\beta \gamma^2+\gamma\alpha^2}$$</p> <p>Your problem, strictly speaking, has no solution (but we will see hereafter that we can somewhat "bypass" this issue):</p> <p>Here is a counterexample: Consider $P_3(x)=x(x^2-1)$ with roots $\{0,1,-1\}$.</p> <p>Expression $E$ can take two different values depending on the order in which the roots are taken : $$E(\alpha,\beta,\gamma)=\begin{cases}-1 \ &amp; \text{if} \ \ \alpha=1,\beta=-1,\gamma=0\\ \ \ \ 1 \ &amp; \text{if} \ \ \alpha=-1,\beta=1,\gamma=0\end{cases}.$$</p> <p>As said in the other answers, the fundamental reason is that neither $E$ nor $F$ is symmetrical in $\alpha, \beta, \gamma$, thus not invariant for certain permutations of the roots. </p> <p>Of course, if you take $E+F$, you get a symmetrical polynomial in $\alpha, \beta, \gamma.$ </p> <p>But a slightly deeper analysis will be rewarding. 
In fact, <strong>expressions $E$ and $F$ are conjugate</strong> in a sense that will become clear, once we have seen the two following examples:</p> <ul> <li><p>taking $P_3(x)=x^3+x^2-x$, value $E(\alpha,\beta,\gamma)=\Phi\approx1.618$ is taken 3 times, and value $1-\Phi\approx-0.618$ is taken as well 3 times ($\Phi$ is the golden ratio).</p></li> <li><p>taking $P_3(x)=x^3+x^2+x$, (with some complex roots), value $-\frac12+i\frac{\sqrt{3}}{2}$ is taken 3 times, and conjugate value (with the meaning of "conjugation in $\mathbb{C}$"): $-\frac12-i\frac{\sqrt{3}}{2}$ is taken 3 times.</p></li> </ul> <p>Let us prove this "conjugation" by inspecting the set $S_3$ of $3!=6$ permutations of the roots $\alpha,\beta,\gamma$:</p> <p>$$\binom{\alpha \ \beta \ \gamma}{\alpha \ \beta \ \gamma}: \ \ E(\alpha,\beta,\gamma)=E(\alpha,\beta,\gamma)$$</p> <p>$$\binom{\alpha \ \beta \ \gamma}{\beta \ \gamma \ \alpha}: \ \ E(\beta,\gamma,\alpha)=E(\alpha,\beta,\gamma)$$</p> <p>$$\binom{\alpha \ \beta \ \gamma}{\gamma \ \alpha \ \beta}: \ \ E(\gamma,\alpha,\beta)=E(\alpha,\beta,\gamma)$$</p> <p>$$\binom{\alpha \ \beta \ \gamma}{\beta \ \alpha \ \gamma}: \ \ E(\beta,\alpha,\gamma)=F(\alpha,\beta,\gamma)$$</p> <p>$$\binom{\alpha \ \beta \ \gamma}{\beta \ \gamma \ \alpha}: \ \ E(\beta,\gamma ,\alpha)=F(\alpha,\beta,\gamma)$$</p> <p>$$\binom{\alpha \ \beta \ \gamma}{\gamma\ \beta \ \alpha}: \ \ E(\alpha,\beta,\gamma)=F(\alpha,\beta,\gamma)$$</p> <p>We have thus proved that, once a permutation is done on the roots, the value taken by $E$ is either unchanged (3 cases) or is the value of $F$ (the 3 other cases). 
As said by @CCorn in his answer, it is the mathematical object $\{E,F\}$ which is invariant under the action of $S_3$.</p> <p>Remark: The first 3 cases correspond to a circular permutation: it is evident that $E$ has been built so that it is invariant by this type of permutation.</p> <p><strong>Addendum:</strong> </p> <p>This is a little window on the origin of the so-called "Galois theory".</p> <p>Have a look at the very pedagogical presentation <a href="http://www-math.ucdenver.edu/~wcherowi/courses/m4010/polynom3.pdf" rel="nofollow noreferrer">here</a> where it is explained how the ideas of Lagrange for solving the quartic equation where based on polynomials (called resolvents) that, under permutation, could take only a very small number of values. See also (<a href="https://en.wikipedia.org/wiki/Quartic_function" rel="nofollow noreferrer">https://en.wikipedia.org/wiki/Quartic_function</a>). Another reference (<a href="https://math.stackexchange.com/q/49141">Galois theory: splitting field of cubic as a vector space</a>). </p> Answer 8: <p>No. $a,b,c,d$ are <a href="https://en.wikipedia.org/wiki/Symmetric_polynomial">symmetric polynomials</a> in $\alpha,\beta,\gamma$ and thus invariant under every permutation of $(\alpha,\beta,\gamma)$. The expressions you want are invariant under some permutations, but not all, and therefore cannot be expressed in terms of only $a,b,c,d$.</p> <p>Their sum is symmetric however, as @DougM's answer indicates.</p> <p><strong>Update:</strong> Their product too, and this results in a quadratic equation for the two expressions, as @ReinhardMeier's answer shows. 
Thus, the individual values of your desired expressions (call them $u$ and $v$) are not determined uniquely from $a,b,c,d$, but the set $\{u,v\}$ is.</p> Answer 9: <p>$\frac {b}{a} = -\alpha - \beta - \gamma\\ \frac {c}{a} = \alpha \beta + \beta\gamma + \gamma\alpha\\ \frac {d}{a} = -\alpha \beta\gamma\\ -\frac {b}{a}\frac {c}{a} + 3\frac {d}{a} = (\alpha^2 \beta + \beta^2\gamma + \gamma^2\alpha)+(\alpha \beta^2 + \beta\gamma^2 + \gamma\alpha^2)$</p> <p>I don't know if I can do much more that that.</p> Answer 10: <p>There can be no such formula. The coefficients $a$, $b$, $c$ and $d$ are <a href="https://en.wikipedia.org/wiki/Symmetric_polynomial">symmetric functions</a> of the roots $\alpha$, $\beta$ and $\gamma$. Your polynomials $\alpha^2 \beta+\beta^2 \gamma+ \gamma^2\alpha$ and $\alpha \beta^2+\beta \gamma^2+\gamma\alpha^2$ are not symmetric functions, so they cannot be expressed as functions of the coefficients (since any such function would be symmetric).</p> Answer 11: <p>Let $u=\alpha^2\beta+\beta^2\gamma+\gamma^2\alpha$ and $v=\alpha\beta^2+\beta\gamma^2+\gamma\alpha^2$. 
We have the following relations between the elementary symmetric polynomials $e_1$, $e_2$ and $e_3$ of $\alpha$, $\beta$ and $\gamma$ and the coefficients of $P_3$: $$ e_1 = \alpha+\beta+\gamma = -\frac{b}{a} \\ e_2 = \alpha\beta+\beta\gamma+\gamma\alpha = +\frac{c}{a} \\ e_3 = \alpha\beta\gamma = -\frac{d}{a} $$ We find \begin{eqnarray*} p &amp; := u+v = &amp; e_1e_2-3e_3 \\ q &amp; := uv = &amp; e_1^3e_3-6e_1e_2e_3+e_2^3+9e_3^2 \end{eqnarray*} From this, you can obtain $u$ and $v$ solving $t^2-pt+q=0$.</p> Answer 12: <p>Let </p> <p>$$\cases{E(\alpha,\beta,\gamma):=\alpha^2 \beta+\beta^2 \gamma+ \gamma^2\alpha\\F(\alpha,\beta,\gamma):= \alpha \beta^2+\beta \gamma^2+\gamma\alpha^2}$$</p> <p>Your problem, strictly speaking, has no solution (but we will see hereafter that we can somewhat "bypass" this issue):</p> <p>Here is a counterexample: Consider $P_3(x)=x(x^2-1)$ with roots $\{0,1,-1\}$.</p> <p>Expression $E$ can take two different values depending on the order in which the roots are taken : $$E(\alpha,\beta,\gamma)=\begin{cases}-1 \ &amp; \text{if} \ \ \alpha=1,\beta=-1,\gamma=0\\ \ \ \ 1 \ &amp; \text{if} \ \ \alpha=-1,\beta=1,\gamma=0\end{cases}.$$</p> <p>As said in the other answers, the fundamental reason is that neither $E$ nor $F$ is symmetrical in $\alpha, \beta, \gamma$, thus not invariant for certain permutations of the roots. </p> <p>Of course, if you take $E+F$, you get a symmetrical polynomial in $\alpha, \beta, \gamma.$ </p> <p>But a slightly deeper analysis will be rewarding. 
In fact, <strong>expressions $E$ and $F$ are conjugate</strong> in a sense that will become clear, once we have seen the two following examples:</p> <ul> <li><p>taking $P_3(x)=x^3+x^2-x$, value $E(\alpha,\beta,\gamma)=\Phi\approx1.618$ is taken 3 times, and value $1-\Phi\approx-0.618$ is taken as well 3 times ($\Phi$ is the golden ratio).</p></li> <li><p>taking $P_3(x)=x^3+x^2+x$, (with some complex roots), value $-\frac12+i\frac{\sqrt{3}}{2}$ is taken 3 times, and conjugate value (with the meaning of "conjugation in $\mathbb{C}$"): $-\frac12-i\frac{\sqrt{3}}{2}$ is taken 3 times.</p></li> </ul> <p>Let us prove this "conjugation" by inspecting the set $S_3$ of $3!=6$ permutations of the roots $\alpha,\beta,\gamma$:</p> <p>$$\binom{\alpha \ \beta \ \gamma}{\alpha \ \beta \ \gamma}: \ \ E(\alpha,\beta,\gamma)=E(\alpha,\beta,\gamma)$$</p> <p>$$\binom{\alpha \ \beta \ \gamma}{\beta \ \gamma \ \alpha}: \ \ E(\beta,\gamma,\alpha)=E(\alpha,\beta,\gamma)$$</p> <p>$$\binom{\alpha \ \beta \ \gamma}{\gamma \ \alpha \ \beta}: \ \ E(\gamma,\alpha,\beta)=E(\alpha,\beta,\gamma)$$</p> <p>$$\binom{\alpha \ \beta \ \gamma}{\beta \ \alpha \ \gamma}: \ \ E(\beta,\alpha,\gamma)=F(\alpha,\beta,\gamma)$$</p> <p>$$\binom{\alpha \ \beta \ \gamma}{\beta \ \gamma \ \alpha}: \ \ E(\beta,\gamma ,\alpha)=F(\alpha,\beta,\gamma)$$</p> <p>$$\binom{\alpha \ \beta \ \gamma}{\gamma\ \beta \ \alpha}: \ \ E(\alpha,\beta,\gamma)=F(\alpha,\beta,\gamma)$$</p> <p>We have thus proved that, once a permutation is done on the roots, the value taken by $E$ is either unchanged (3 cases) or is the value of $F$ (the 3 other cases). 
As said by @CCorn in his answer, it is the mathematical object $\{E,F\}$ which is invariant under the action of $S_3$.</p> <p>Remark: The first 3 cases correspond to a circular permutation: it is evident that $E$ has been built so that it is invariant by this type of permutation.</p> <p><strong>Addendum:</strong> </p> <p>This is a little window on the origin of the so-called "Galois theory".</p> <p>Have a look at the very pedagogical presentation <a href="http://www-math.ucdenver.edu/~wcherowi/courses/m4010/polynom3.pdf" rel="nofollow noreferrer">here</a> where it is explained how the ideas of Lagrange for solving the quartic equation where based on polynomials (called resolvents) that, under permutation, could take only a very small number of values. See also (<a href="https://en.wikipedia.org/wiki/Quartic_function" rel="nofollow noreferrer">https://en.wikipedia.org/wiki/Quartic_function</a>). Another reference (<a href="https://math.stackexchange.com/q/49141">Galois theory: splitting field of cubic as a vector space</a>). </p> Answer 13: <p>No. $a,b,c,d$ are <a href="https://en.wikipedia.org/wiki/Symmetric_polynomial">symmetric polynomials</a> in $\alpha,\beta,\gamma$ and thus invariant under every permutation of $(\alpha,\beta,\gamma)$. The expressions you want are invariant under some permutations, but not all, and therefore cannot be expressed in terms of only $a,b,c,d$.</p> <p>Their sum is symmetric however, as @DougM's answer indicates.</p> <p><strong>Update:</strong> Their product too, and this results in a quadratic equation for the two expressions, as @ReinhardMeier's answer shows. 
Thus, the individual values of your desired expressions (call them $u$ and $v$) are not determined uniquely from $a,b,c,d$, but the set $\{u,v\}$ is.</p> Answer 14: <p>$\frac {b}{a} = -\alpha - \beta - \gamma\\ \frac {c}{a} = \alpha \beta + \beta\gamma + \gamma\alpha\\ \frac {d}{a} = -\alpha \beta\gamma\\ -\frac {b}{a}\frac {c}{a} + 3\frac {d}{a} = (\alpha^2 \beta + \beta^2\gamma + \gamma^2\alpha)+(\alpha \beta^2 + \beta\gamma^2 + \gamma\alpha^2)$</p> <p>I don't know if I can do much more that that.</p> Answer 15: <p>There can be no such formula. The coefficients $a$, $b$, $c$ and $d$ are <a href="https://en.wikipedia.org/wiki/Symmetric_polynomial">symmetric functions</a> of the roots $\alpha$, $\beta$ and $\gamma$. Your polynomials $\alpha^2 \beta+\beta^2 \gamma+ \gamma^2\alpha$ and $\alpha \beta^2+\beta \gamma^2+\gamma\alpha^2$ are not symmetric functions, so they cannot be expressed as functions of the coefficients (since any such function would be symmetric).</p> Answer 16: <p>Let $u=\alpha^2\beta+\beta^2\gamma+\gamma^2\alpha$ and $v=\alpha\beta^2+\beta\gamma^2+\gamma\alpha^2$. 
We have the following relations between the elementary symmetric polynomials $e_1$, $e_2$ and $e_3$ of $\alpha$, $\beta$ and $\gamma$ and the coefficients of $P_3$: $$ e_1 = \alpha+\beta+\gamma = -\frac{b}{a} \\ e_2 = \alpha\beta+\beta\gamma+\gamma\alpha = +\frac{c}{a} \\ e_3 = \alpha\beta\gamma = -\frac{d}{a} $$ We find \begin{eqnarray*} p &amp; := u+v = &amp; e_1e_2-3e_3 \\ q &amp; := uv = &amp; e_1^3e_3-6e_1e_2e_3+e_2^3+9e_3^2 \end{eqnarray*} From this, you can obtain $u$ and $v$ solving $t^2-pt+q=0$.</p> Answer 17: <p>Let </p> <p>$$\cases{E(\alpha,\beta,\gamma):=\alpha^2 \beta+\beta^2 \gamma+ \gamma^2\alpha\\F(\alpha,\beta,\gamma):= \alpha \beta^2+\beta \gamma^2+\gamma\alpha^2}$$</p> <p>Your problem, strictly speaking, has no solution (but we will see hereafter that we can somewhat "bypass" this issue):</p> <p>Here is a counterexample: Consider $P_3(x)=x(x^2-1)$ with roots $\{0,1,-1\}$.</p> <p>Expression $E$ can take two different values depending on the order in which the roots are taken : $$E(\alpha,\beta,\gamma)=\begin{cases}-1 \ &amp; \text{if} \ \ \alpha=1,\beta=-1,\gamma=0\\ \ \ \ 1 \ &amp; \text{if} \ \ \alpha=-1,\beta=1,\gamma=0\end{cases}.$$</p> <p>As said in the other answers, the fundamental reason is that neither $E$ nor $F$ is symmetrical in $\alpha, \beta, \gamma$, thus not invariant for certain permutations of the roots. </p> <p>Of course, if you take $E+F$, you get a symmetrical polynomial in $\alpha, \beta, \gamma.$ </p> <p>But a slightly deeper analysis will be rewarding. 
In fact, <strong>expressions $E$ and $F$ are conjugate</strong> in a sense that will become clear, once we have seen the two following examples:</p> <ul> <li><p>taking $P_3(x)=x^3+x^2-x$, value $E(\alpha,\beta,\gamma)=\Phi\approx1.618$ is taken 3 times, and value $1-\Phi\approx-0.618$ is taken as well 3 times ($\Phi$ is the golden ratio).</p></li> <li><p>taking $P_3(x)=x^3+x^2+x$, (with some complex roots), value $-\frac12+i\frac{\sqrt{3}}{2}$ is taken 3 times, and conjugate value (with the meaning of "conjugation in $\mathbb{C}$"): $-\frac12-i\frac{\sqrt{3}}{2}$ is taken 3 times.</p></li> </ul> <p>Let us prove this "conjugation" by inspecting the set $S_3$ of $3!=6$ permutations of the roots $\alpha,\beta,\gamma$:</p> <p>$$\binom{\alpha \ \beta \ \gamma}{\alpha \ \beta \ \gamma}: \ \ E(\alpha,\beta,\gamma)=E(\alpha,\beta,\gamma)$$</p> <p>$$\binom{\alpha \ \beta \ \gamma}{\beta \ \gamma \ \alpha}: \ \ E(\beta,\gamma,\alpha)=E(\alpha,\beta,\gamma)$$</p> <p>$$\binom{\alpha \ \beta \ \gamma}{\gamma \ \alpha \ \beta}: \ \ E(\gamma,\alpha,\beta)=E(\alpha,\beta,\gamma)$$</p> <p>$$\binom{\alpha \ \beta \ \gamma}{\beta \ \alpha \ \gamma}: \ \ E(\beta,\alpha,\gamma)=F(\alpha,\beta,\gamma)$$</p> <p>$$\binom{\alpha \ \beta \ \gamma}{\beta \ \gamma \ \alpha}: \ \ E(\beta,\gamma ,\alpha)=F(\alpha,\beta,\gamma)$$</p> <p>$$\binom{\alpha \ \beta \ \gamma}{\gamma\ \beta \ \alpha}: \ \ E(\alpha,\beta,\gamma)=F(\alpha,\beta,\gamma)$$</p> <p>We have thus proved that, once a permutation is done on the roots, the value taken by $E$ is either unchanged (3 cases) or is the value of $F$ (the 3 other cases). 
As said by @CCorn in his answer, it is the mathematical object $\{E,F\}$ which is invariant under the action of $S_3$.</p> <p>Remark: The first 3 cases correspond to a circular permutation: it is evident that $E$ has been built so that it is invariant by this type of permutation.</p> <p><strong>Addendum:</strong> </p> <p>This is a little window on the origin of the so-called "Galois theory".</p> <p>Have a look at the very pedagogical presentation <a href="http://www-math.ucdenver.edu/~wcherowi/courses/m4010/polynom3.pdf" rel="nofollow noreferrer">here</a> where it is explained how the ideas of Lagrange for solving the quartic equation where based on polynomials (called resolvents) that, under permutation, could take only a very small number of values. See also (<a href="https://en.wikipedia.org/wiki/Quartic_function" rel="nofollow noreferrer">https://en.wikipedia.org/wiki/Quartic_function</a>). Another reference (<a href="https://math.stackexchange.com/q/49141">Galois theory: splitting field of cubic as a vector space</a>). </p> Answer 18: <p>No. $a,b,c,d$ are <a href="https://en.wikipedia.org/wiki/Symmetric_polynomial">symmetric polynomials</a> in $\alpha,\beta,\gamma$ and thus invariant under every permutation of $(\alpha,\beta,\gamma)$. The expressions you want are invariant under some permutations, but not all, and therefore cannot be expressed in terms of only $a,b,c,d$.</p> <p>Their sum is symmetric however, as @DougM's answer indicates.</p> <p><strong>Update:</strong> Their product too, and this results in a quadratic equation for the two expressions, as @ReinhardMeier's answer shows. 
Thus, the individual values of your desired expressions (call them $u$ and $v$) are not determined uniquely from $a,b,c,d$, but the set $\{u,v\}$ is.</p> Answer 19: <p>$\frac {b}{a} = -\alpha - \beta - \gamma\\ \frac {c}{a} = \alpha \beta + \beta\gamma + \gamma\alpha\\ \frac {d}{a} = -\alpha \beta\gamma\\ -\frac {b}{a}\frac {c}{a} + 3\frac {d}{a} = (\alpha^2 \beta + \beta^2\gamma + \gamma^2\alpha)+(\alpha \beta^2 + \beta\gamma^2 + \gamma\alpha^2)$</p> <p>I don't know if I can do much more that that.</p> Answer 20: <p>There can be no such formula. The coefficients $a$, $b$, $c$ and $d$ are <a href="https://en.wikipedia.org/wiki/Symmetric_polynomial">symmetric functions</a> of the roots $\alpha$, $\beta$ and $\gamma$. Your polynomials $\alpha^2 \beta+\beta^2 \gamma+ \gamma^2\alpha$ and $\alpha \beta^2+\beta \gamma^2+\gamma\alpha^2$ are not symmetric functions, so they cannot be expressed as functions of the coefficients (since any such function would be symmetric).</p> Answer 21: <p>Let $u=\alpha^2\beta+\beta^2\gamma+\gamma^2\alpha$ and $v=\alpha\beta^2+\beta\gamma^2+\gamma\alpha^2$. 
We have the following relations between the elementary symmetric polynomials $e_1$, $e_2$ and $e_3$ of $\alpha$, $\beta$ and $\gamma$ and the coefficients of $P_3$: $$ e_1 = \alpha+\beta+\gamma = -\frac{b}{a} \\ e_2 = \alpha\beta+\beta\gamma+\gamma\alpha = +\frac{c}{a} \\ e_3 = \alpha\beta\gamma = -\frac{d}{a} $$ We find \begin{eqnarray*} p &amp; := u+v = &amp; e_1e_2-3e_3 \\ q &amp; := uv = &amp; e_1^3e_3-6e_1e_2e_3+e_2^3+9e_3^2 \end{eqnarray*} From this, you can obtain $u$ and $v$ solving $t^2-pt+q=0$.</p> Answer 22: <p>Let </p> <p>$$\cases{E(\alpha,\beta,\gamma):=\alpha^2 \beta+\beta^2 \gamma+ \gamma^2\alpha\\F(\alpha,\beta,\gamma):= \alpha \beta^2+\beta \gamma^2+\gamma\alpha^2}$$</p> <p>Your problem, strictly speaking, has no solution (but we will see hereafter that we can somewhat "bypass" this issue):</p> <p>Here is a counterexample: Consider $P_3(x)=x(x^2-1)$ with roots $\{0,1,-1\}$.</p> <p>Expression $E$ can take two different values depending on the order in which the roots are taken : $$E(\alpha,\beta,\gamma)=\begin{cases}-1 \ &amp; \text{if} \ \ \alpha=1,\beta=-1,\gamma=0\\ \ \ \ 1 \ &amp; \text{if} \ \ \alpha=-1,\beta=1,\gamma=0\end{cases}.$$</p> <p>As said in the other answers, the fundamental reason is that neither $E$ nor $F$ is symmetrical in $\alpha, \beta, \gamma$, thus not invariant for certain permutations of the roots. </p> <p>Of course, if you take $E+F$, you get a symmetrical polynomial in $\alpha, \beta, \gamma.$ </p> <p>But a slightly deeper analysis will be rewarding. 
In fact, <strong>expressions $E$ and $F$ are conjugate</strong> in a sense that will become clear, once we have seen the two following examples:</p> <ul> <li><p>taking $P_3(x)=x^3+x^2-x$, value $E(\alpha,\beta,\gamma)=\Phi\approx1.618$ is taken 3 times, and value $1-\Phi\approx-0.618$ is taken as well 3 times ($\Phi$ is the golden ratio).</p></li> <li><p>taking $P_3(x)=x^3+x^2+x$, (with some complex roots), value $-\frac12+i\frac{\sqrt{3}}{2}$ is taken 3 times, and conjugate value (with the meaning of "conjugation in $\mathbb{C}$"): $-\frac12-i\frac{\sqrt{3}}{2}$ is taken 3 times.</p></li> </ul> <p>Let us prove this "conjugation" by inspecting the set $S_3$ of $3!=6$ permutations of the roots $\alpha,\beta,\gamma$:</p> <p>$$\binom{\alpha \ \beta \ \gamma}{\alpha \ \beta \ \gamma}: \ \ E(\alpha,\beta,\gamma)=E(\alpha,\beta,\gamma)$$</p> <p>$$\binom{\alpha \ \beta \ \gamma}{\beta \ \gamma \ \alpha}: \ \ E(\beta,\gamma,\alpha)=E(\alpha,\beta,\gamma)$$</p> <p>$$\binom{\alpha \ \beta \ \gamma}{\gamma \ \alpha \ \beta}: \ \ E(\gamma,\alpha,\beta)=E(\alpha,\beta,\gamma)$$</p> <p>$$\binom{\alpha \ \beta \ \gamma}{\beta \ \alpha \ \gamma}: \ \ E(\beta,\alpha,\gamma)=F(\alpha,\beta,\gamma)$$</p> <p>$$\binom{\alpha \ \beta \ \gamma}{\beta \ \gamma \ \alpha}: \ \ E(\beta,\gamma ,\alpha)=F(\alpha,\beta,\gamma)$$</p> <p>$$\binom{\alpha \ \beta \ \gamma}{\gamma\ \beta \ \alpha}: \ \ E(\alpha,\beta,\gamma)=F(\alpha,\beta,\gamma)$$</p> <p>We have thus proved that, once a permutation is done on the roots, the value taken by $E$ is either unchanged (3 cases) or is the value of $F$ (the 3 other cases). 
As said by @CCorn in his answer, it is the mathematical object $\{E,F\}$ which is invariant under the action of $S_3$.</p> <p>Remark: The first 3 cases correspond to a circular permutation: it is evident that $E$ has been built so that it is invariant by this type of permutation.</p> <p><strong>Addendum:</strong> </p> <p>This is a little window on the origin of the so-called "Galois theory".</p> <p>Have a look at the very pedagogical presentation <a href="http://www-math.ucdenver.edu/~wcherowi/courses/m4010/polynom3.pdf" rel="nofollow noreferrer">here</a> where it is explained how the ideas of Lagrange for solving the quartic equation where based on polynomials (called resolvents) that, under permutation, could take only a very small number of values. See also (<a href="https://en.wikipedia.org/wiki/Quartic_function" rel="nofollow noreferrer">https://en.wikipedia.org/wiki/Quartic_function</a>). Another reference (<a href="https://math.stackexchange.com/q/49141">Galois theory: splitting field of cubic as a vector space</a>). </p> Answer 23: <p>No. $a,b,c,d$ are <a href="https://en.wikipedia.org/wiki/Symmetric_polynomial">symmetric polynomials</a> in $\alpha,\beta,\gamma$ and thus invariant under every permutation of $(\alpha,\beta,\gamma)$. The expressions you want are invariant under some permutations, but not all, and therefore cannot be expressed in terms of only $a,b,c,d$.</p> <p>Their sum is symmetric however, as @DougM's answer indicates.</p> <p><strong>Update:</strong> Their product too, and this results in a quadratic equation for the two expressions, as @ReinhardMeier's answer shows. 
Thus, the individual values of your desired expressions (call them $u$ and $v$) are not determined uniquely from $a,b,c,d$, but the set $\{u,v\}$ is.</p> Answer 24: <p>$\frac {b}{a} = -\alpha - \beta - \gamma\\ \frac {c}{a} = \alpha \beta + \beta\gamma + \gamma\alpha\\ \frac {d}{a} = -\alpha \beta\gamma\\ -\frac {b}{a}\frac {c}{a} + 3\frac {d}{a} = (\alpha^2 \beta + \beta^2\gamma + \gamma^2\alpha)+(\alpha \beta^2 + \beta\gamma^2 + \gamma\alpha^2)$</p> <p>I don't know if I can do much more that that.</p> Answer 25: <p>There can be no such formula. The coefficients $a$, $b$, $c$ and $d$ are <a href="https://en.wikipedia.org/wiki/Symmetric_polynomial">symmetric functions</a> of the roots $\alpha$, $\beta$ and $\gamma$. Your polynomials $\alpha^2 \beta+\beta^2 \gamma+ \gamma^2\alpha$ and $\alpha \beta^2+\beta \gamma^2+\gamma\alpha^2$ are not symmetric functions, so they cannot be expressed as functions of the coefficients (since any such function would be symmetric).</p> Answer 26: <p>Let $u=\alpha^2\beta+\beta^2\gamma+\gamma^2\alpha$ and $v=\alpha\beta^2+\beta\gamma^2+\gamma\alpha^2$. 
We have the following relations between the elementary symmetric polynomials $e_1$, $e_2$ and $e_3$ of $\alpha$, $\beta$ and $\gamma$ and the coefficients of $P_3$: $$ e_1 = \alpha+\beta+\gamma = -\frac{b}{a} \\ e_2 = \alpha\beta+\beta\gamma+\gamma\alpha = +\frac{c}{a} \\ e_3 = \alpha\beta\gamma = -\frac{d}{a} $$ We find \begin{eqnarray*} p &amp; := u+v = &amp; e_1e_2-3e_3 \\ q &amp; := uv = &amp; e_1^3e_3-6e_1e_2e_3+e_2^3+9e_3^2 \end{eqnarray*} From this, you can obtain $u$ and $v$ solving $t^2-pt+q=0$.</p> Answer 27: <p>Let </p> <p>$$\cases{E(\alpha,\beta,\gamma):=\alpha^2 \beta+\beta^2 \gamma+ \gamma^2\alpha\\F(\alpha,\beta,\gamma):= \alpha \beta^2+\beta \gamma^2+\gamma\alpha^2}$$</p> <p>Your problem, strictly speaking, has no solution (but we will see hereafter that we can somewhat "bypass" this issue):</p> <p>Here is a counterexample: Consider $P_3(x)=x(x^2-1)$ with roots $\{0,1,-1\}$.</p> <p>Expression $E$ can take two different values depending on the order in which the roots are taken : $$E(\alpha,\beta,\gamma)=\begin{cases}-1 \ &amp; \text{if} \ \ \alpha=1,\beta=-1,\gamma=0\\ \ \ \ 1 \ &amp; \text{if} \ \ \alpha=-1,\beta=1,\gamma=0\end{cases}.$$</p> <p>As said in the other answers, the fundamental reason is that neither $E$ nor $F$ is symmetrical in $\alpha, \beta, \gamma$, thus not invariant for certain permutations of the roots. </p> <p>Of course, if you take $E+F$, you get a symmetrical polynomial in $\alpha, \beta, \gamma.$ </p> <p>But a slightly deeper analysis will be rewarding. 
In fact, <strong>expressions $E$ and $F$ are conjugate</strong> in a sense that will become clear, once we have seen the two following examples:</p> <ul> <li><p>taking $P_3(x)=x^3+x^2-x$, value $E(\alpha,\beta,\gamma)=\Phi\approx1.618$ is taken 3 times, and value $1-\Phi\approx-0.618$ is also taken 3 times ($\Phi$ is the golden ratio).</p></li> <li><p>taking $P_3(x)=x^3+x^2+x$ (with some complex roots), value $-\frac12+i\frac{\sqrt{3}}{2}$ is taken 3 times, and the conjugate value (with the meaning of "conjugation in $\mathbb{C}$"): $-\frac12-i\frac{\sqrt{3}}{2}$ is taken 3 times.</p></li> </ul> <p>Let us prove this "conjugation" by inspecting the set $S_3$ of $3!=6$ permutations of the roots $\alpha,\beta,\gamma$:</p> <p>$$\binom{\alpha \ \beta \ \gamma}{\alpha \ \beta \ \gamma}: \ \ E(\alpha,\beta,\gamma)=E(\alpha,\beta,\gamma)$$</p> <p>$$\binom{\alpha \ \beta \ \gamma}{\beta \ \gamma \ \alpha}: \ \ E(\beta,\gamma,\alpha)=E(\alpha,\beta,\gamma)$$</p> <p>$$\binom{\alpha \ \beta \ \gamma}{\gamma \ \alpha \ \beta}: \ \ E(\gamma,\alpha,\beta)=E(\alpha,\beta,\gamma)$$</p> <p>$$\binom{\alpha \ \beta \ \gamma}{\beta \ \alpha \ \gamma}: \ \ E(\beta,\alpha,\gamma)=F(\alpha,\beta,\gamma)$$</p> <p>$$\binom{\alpha \ \beta \ \gamma}{\alpha \ \gamma \ \beta}: \ \ E(\alpha,\gamma,\beta)=F(\alpha,\beta,\gamma)$$</p> <p>$$\binom{\alpha \ \beta \ \gamma}{\gamma\ \beta \ \alpha}: \ \ E(\gamma,\beta,\alpha)=F(\alpha,\beta,\gamma)$$</p> <p>We have thus proved that, once a permutation is done on the roots, the value taken by $E$ is either unchanged (3 cases) or is the value of $F$ (the 3 other cases).
As said by @CCorn in his answer, it is the mathematical object $\{E,F\}$ which is invariant under the action of $S_3$.</p> <p>Remark: The first 3 cases correspond to a circular permutation: it is evident that $E$ has been built so that it is invariant by this type of permutation.</p> <p><strong>Addendum:</strong> </p> <p>This is a little window on the origin of the so-called "Galois theory".</p> <p>Have a look at the very pedagogical presentation <a href="http://www-math.ucdenver.edu/~wcherowi/courses/m4010/polynom3.pdf" rel="nofollow noreferrer">here</a> where it is explained how the ideas of Lagrange for solving the quartic equation where based on polynomials (called resolvents) that, under permutation, could take only a very small number of values. See also (<a href="https://en.wikipedia.org/wiki/Quartic_function" rel="nofollow noreferrer">https://en.wikipedia.org/wiki/Quartic_function</a>). Another reference (<a href="https://math.stackexchange.com/q/49141">Galois theory: splitting field of cubic as a vector space</a>). </p> Answer 28: <p>No. $a,b,c,d$ are <a href="https://en.wikipedia.org/wiki/Symmetric_polynomial">symmetric polynomials</a> in $\alpha,\beta,\gamma$ and thus invariant under every permutation of $(\alpha,\beta,\gamma)$. The expressions you want are invariant under some permutations, but not all, and therefore cannot be expressed in terms of only $a,b,c,d$.</p> <p>Their sum is symmetric however, as @DougM's answer indicates.</p> <p><strong>Update:</strong> Their product too, and this results in a quadratic equation for the two expressions, as @ReinhardMeier's answer shows. 
Thus, the individual values of your desired expressions (call them $u$ and $v$) are not determined uniquely from $a,b,c,d$, but the set $\{u,v\}$ is.</p> Answer 29: <p>$\frac {b}{a} = -\alpha - \beta - \gamma\\ \frac {c}{a} = \alpha \beta + \beta\gamma + \gamma\alpha\\ \frac {d}{a} = -\alpha \beta\gamma\\ -\frac {b}{a}\frac {c}{a} + 3\frac {d}{a} = (\alpha^2 \beta + \beta^2\gamma + \gamma^2\alpha)+(\alpha \beta^2 + \beta\gamma^2 + \gamma\alpha^2)$</p> <p>I don't know if I can do much more that that.</p> Answer 30: <p>There can be no such formula. The coefficients $a$, $b$, $c$ and $d$ are <a href="https://en.wikipedia.org/wiki/Symmetric_polynomial">symmetric functions</a> of the roots $\alpha$, $\beta$ and $\gamma$. Your polynomials $\alpha^2 \beta+\beta^2 \gamma+ \gamma^2\alpha$ and $\alpha \beta^2+\beta \gamma^2+\gamma\alpha^2$ are not symmetric functions, so they cannot be expressed as functions of the coefficients (since any such function would be symmetric).</p> Answer 31: <p>Let $u=\alpha^2\beta+\beta^2\gamma+\gamma^2\alpha$ and $v=\alpha\beta^2+\beta\gamma^2+\gamma\alpha^2$. 
We have the following relations between the elementary symmetric polynomials $e_1$, $e_2$ and $e_3$ of $\alpha$, $\beta$ and $\gamma$ and the coefficients of $P_3$: $$ e_1 = \alpha+\beta+\gamma = -\frac{b}{a} \\ e_2 = \alpha\beta+\beta\gamma+\gamma\alpha = +\frac{c}{a} \\ e_3 = \alpha\beta\gamma = -\frac{d}{a} $$ We find \begin{eqnarray*} p &amp; := u+v = &amp; e_1e_2-3e_3 \\ q &amp; := uv = &amp; e_1^3e_3-6e_1e_2e_3+e_2^3+9e_3^2 \end{eqnarray*} From this, you can obtain $u$ and $v$ solving $t^2-pt+q=0$.</p> Answer 32: <p>Let </p> <p>$$\cases{E(\alpha,\beta,\gamma):=\alpha^2 \beta+\beta^2 \gamma+ \gamma^2\alpha\\F(\alpha,\beta,\gamma):= \alpha \beta^2+\beta \gamma^2+\gamma\alpha^2}$$</p> <p>Your problem, strictly speaking, has no solution (but we will see hereafter that we can somewhat "bypass" this issue):</p> <p>Here is a counterexample: Consider $P_3(x)=x(x^2-1)$ with roots $\{0,1,-1\}$.</p> <p>Expression $E$ can take two different values depending on the order in which the roots are taken : $$E(\alpha,\beta,\gamma)=\begin{cases}-1 \ &amp; \text{if} \ \ \alpha=1,\beta=-1,\gamma=0\\ \ \ \ 1 \ &amp; \text{if} \ \ \alpha=-1,\beta=1,\gamma=0\end{cases}.$$</p> <p>As said in the other answers, the fundamental reason is that neither $E$ nor $F$ is symmetrical in $\alpha, \beta, \gamma$, thus not invariant for certain permutations of the roots. </p> <p>Of course, if you take $E+F$, you get a symmetrical polynomial in $\alpha, \beta, \gamma.$ </p> <p>But a slightly deeper analysis will be rewarding. 
In fact, <strong>expressions $E$ and $F$ are conjugate</strong> in a sense that will become clear, once we have seen the two following examples:</p> <ul> <li><p>taking $P_3(x)=x^3+x^2-x$, value $E(\alpha,\beta,\gamma)=\Phi\approx1.618$ is taken 3 times, and value $1-\Phi\approx-0.618$ is taken as well 3 times ($\Phi$ is the golden ratio).</p></li> <li><p>taking $P_3(x)=x^3+x^2+x$, (with some complex roots), value $-\frac12+i\frac{\sqrt{3}}{2}$ is taken 3 times, and conjugate value (with the meaning of "conjugation in $\mathbb{C}$"): $-\frac12-i\frac{\sqrt{3}}{2}$ is taken 3 times.</p></li> </ul> <p>Let us prove this "conjugation" by inspecting the set $S_3$ of $3!=6$ permutations of the roots $\alpha,\beta,\gamma$:</p> <p>$$\binom{\alpha \ \beta \ \gamma}{\alpha \ \beta \ \gamma}: \ \ E(\alpha,\beta,\gamma)=E(\alpha,\beta,\gamma)$$</p> <p>$$\binom{\alpha \ \beta \ \gamma}{\beta \ \gamma \ \alpha}: \ \ E(\beta,\gamma,\alpha)=E(\alpha,\beta,\gamma)$$</p> <p>$$\binom{\alpha \ \beta \ \gamma}{\gamma \ \alpha \ \beta}: \ \ E(\gamma,\alpha,\beta)=E(\alpha,\beta,\gamma)$$</p> <p>$$\binom{\alpha \ \beta \ \gamma}{\beta \ \alpha \ \gamma}: \ \ E(\beta,\alpha,\gamma)=F(\alpha,\beta,\gamma)$$</p> <p>$$\binom{\alpha \ \beta \ \gamma}{\beta \ \gamma \ \alpha}: \ \ E(\beta,\gamma ,\alpha)=F(\alpha,\beta,\gamma)$$</p> <p>$$\binom{\alpha \ \beta \ \gamma}{\gamma\ \beta \ \alpha}: \ \ E(\alpha,\beta,\gamma)=F(\alpha,\beta,\gamma)$$</p> <p>We have thus proved that, once a permutation is done on the roots, the value taken by $E$ is either unchanged (3 cases) or is the value of $F$ (the 3 other cases). 
As said by @CCorn in his answer, it is the mathematical object $\{E,F\}$ which is invariant under the action of $S_3$.</p> <p>Remark: The first 3 cases correspond to a circular permutation: it is evident that $E$ has been built so that it is invariant by this type of permutation.</p> <p><strong>Addendum:</strong> </p> <p>This is a little window on the origin of the so-called "Galois theory".</p> <p>Have a look at the very pedagogical presentation <a href="http://www-math.ucdenver.edu/~wcherowi/courses/m4010/polynom3.pdf" rel="nofollow noreferrer">here</a> where it is explained how the ideas of Lagrange for solving the quartic equation where based on polynomials (called resolvents) that, under permutation, could take only a very small number of values. See also (<a href="https://en.wikipedia.org/wiki/Quartic_function" rel="nofollow noreferrer">https://en.wikipedia.org/wiki/Quartic_function</a>). Another reference (<a href="https://math.stackexchange.com/q/49141">Galois theory: splitting field of cubic as a vector space</a>). </p> Answer 33: <p>No. $a,b,c,d$ are <a href="https://en.wikipedia.org/wiki/Symmetric_polynomial">symmetric polynomials</a> in $\alpha,\beta,\gamma$ and thus invariant under every permutation of $(\alpha,\beta,\gamma)$. The expressions you want are invariant under some permutations, but not all, and therefore cannot be expressed in terms of only $a,b,c,d$.</p> <p>Their sum is symmetric however, as @DougM's answer indicates.</p> <p><strong>Update:</strong> Their product too, and this results in a quadratic equation for the two expressions, as @ReinhardMeier's answer shows. 
Thus, the individual values of your desired expressions (call them $u$ and $v$) are not determined uniquely from $a,b,c,d$, but the set $\{u,v\}$ is.</p> Answer 34: <p>$\frac {b}{a} = -\alpha - \beta - \gamma\\ \frac {c}{a} = \alpha \beta + \beta\gamma + \gamma\alpha\\ \frac {d}{a} = -\alpha \beta\gamma\\ -\frac {b}{a}\frac {c}{a} + 3\frac {d}{a} = (\alpha^2 \beta + \beta^2\gamma + \gamma^2\alpha)+(\alpha \beta^2 + \beta\gamma^2 + \gamma\alpha^2)$</p> <p>I don't know if I can do much more that that.</p> Answer 35: <p>There can be no such formula. The coefficients $a$, $b$, $c$ and $d$ are <a href="https://en.wikipedia.org/wiki/Symmetric_polynomial">symmetric functions</a> of the roots $\alpha$, $\beta$ and $\gamma$. Your polynomials $\alpha^2 \beta+\beta^2 \gamma+ \gamma^2\alpha$ and $\alpha \beta^2+\beta \gamma^2+\gamma\alpha^2$ are not symmetric functions, so they cannot be expressed as functions of the coefficients (since any such function would be symmetric).</p> Answer 36: <p>Let $u=\alpha^2\beta+\beta^2\gamma+\gamma^2\alpha$ and $v=\alpha\beta^2+\beta\gamma^2+\gamma\alpha^2$. 
We have the following relations between the elementary symmetric polynomials $e_1$, $e_2$ and $e_3$ of $\alpha$, $\beta$ and $\gamma$ and the coefficients of $P_3$: $$ e_1 = \alpha+\beta+\gamma = -\frac{b}{a} \\ e_2 = \alpha\beta+\beta\gamma+\gamma\alpha = +\frac{c}{a} \\ e_3 = \alpha\beta\gamma = -\frac{d}{a} $$ We find \begin{eqnarray*} p &amp; := u+v = &amp; e_1e_2-3e_3 \\ q &amp; := uv = &amp; e_1^3e_3-6e_1e_2e_3+e_2^3+9e_3^2 \end{eqnarray*} From this, you can obtain $u$ and $v$ solving $t^2-pt+q=0$.</p> Answer 37: <p>Let </p> <p>$$\cases{E(\alpha,\beta,\gamma):=\alpha^2 \beta+\beta^2 \gamma+ \gamma^2\alpha\\F(\alpha,\beta,\gamma):= \alpha \beta^2+\beta \gamma^2+\gamma\alpha^2}$$</p> <p>Your problem, strictly speaking, has no solution (but we will see hereafter that we can somewhat "bypass" this issue):</p> <p>Here is a counterexample: Consider $P_3(x)=x(x^2-1)$ with roots $\{0,1,-1\}$.</p> <p>Expression $E$ can take two different values depending on the order in which the roots are taken : $$E(\alpha,\beta,\gamma)=\begin{cases}-1 \ &amp; \text{if} \ \ \alpha=1,\beta=-1,\gamma=0\\ \ \ \ 1 \ &amp; \text{if} \ \ \alpha=-1,\beta=1,\gamma=0\end{cases}.$$</p> <p>As said in the other answers, the fundamental reason is that neither $E$ nor $F$ is symmetrical in $\alpha, \beta, \gamma$, thus not invariant for certain permutations of the roots. </p> <p>Of course, if you take $E+F$, you get a symmetrical polynomial in $\alpha, \beta, \gamma.$ </p> <p>But a slightly deeper analysis will be rewarding. 
In fact, <strong>expressions $E$ and $F$ are conjugate</strong> in a sense that will become clear, once we have seen the two following examples:</p> <ul> <li><p>taking $P_3(x)=x^3+x^2-x$, value $E(\alpha,\beta,\gamma)=\Phi\approx1.618$ is taken 3 times, and value $1-\Phi\approx-0.618$ is taken as well 3 times ($\Phi$ is the golden ratio).</p></li> <li><p>taking $P_3(x)=x^3+x^2+x$, (with some complex roots), value $-\frac12+i\frac{\sqrt{3}}{2}$ is taken 3 times, and conjugate value (with the meaning of "conjugation in $\mathbb{C}$"): $-\frac12-i\frac{\sqrt{3}}{2}$ is taken 3 times.</p></li> </ul> <p>Let us prove this "conjugation" by inspecting the set $S_3$ of $3!=6$ permutations of the roots $\alpha,\beta,\gamma$:</p> <p>$$\binom{\alpha \ \beta \ \gamma}{\alpha \ \beta \ \gamma}: \ \ E(\alpha,\beta,\gamma)=E(\alpha,\beta,\gamma)$$</p> <p>$$\binom{\alpha \ \beta \ \gamma}{\beta \ \gamma \ \alpha}: \ \ E(\beta,\gamma,\alpha)=E(\alpha,\beta,\gamma)$$</p> <p>$$\binom{\alpha \ \beta \ \gamma}{\gamma \ \alpha \ \beta}: \ \ E(\gamma,\alpha,\beta)=E(\alpha,\beta,\gamma)$$</p> <p>$$\binom{\alpha \ \beta \ \gamma}{\beta \ \alpha \ \gamma}: \ \ E(\beta,\alpha,\gamma)=F(\alpha,\beta,\gamma)$$</p> <p>$$\binom{\alpha \ \beta \ \gamma}{\beta \ \gamma \ \alpha}: \ \ E(\beta,\gamma ,\alpha)=F(\alpha,\beta,\gamma)$$</p> <p>$$\binom{\alpha \ \beta \ \gamma}{\gamma\ \beta \ \alpha}: \ \ E(\alpha,\beta,\gamma)=F(\alpha,\beta,\gamma)$$</p> <p>We have thus proved that, once a permutation is done on the roots, the value taken by $E$ is either unchanged (3 cases) or is the value of $F$ (the 3 other cases). 
As said by @CCorn in his answer, it is the mathematical object $\{E,F\}$ which is invariant under the action of $S_3$.</p> <p>Remark: The first 3 cases correspond to a circular permutation: it is evident that $E$ has been built so that it is invariant by this type of permutation.</p> <p><strong>Addendum:</strong> </p> <p>This is a little window on the origin of the so-called "Galois theory".</p> <p>Have a look at the very pedagogical presentation <a href="http://www-math.ucdenver.edu/~wcherowi/courses/m4010/polynom3.pdf" rel="nofollow noreferrer">here</a> where it is explained how the ideas of Lagrange for solving the quartic equation where based on polynomials (called resolvents) that, under permutation, could take only a very small number of values. See also (<a href="https://en.wikipedia.org/wiki/Quartic_function" rel="nofollow noreferrer">https://en.wikipedia.org/wiki/Quartic_function</a>). Another reference (<a href="https://math.stackexchange.com/q/49141">Galois theory: splitting field of cubic as a vector space</a>). </p> Answer 38: <p>No. $a,b,c,d$ are <a href="https://en.wikipedia.org/wiki/Symmetric_polynomial">symmetric polynomials</a> in $\alpha,\beta,\gamma$ and thus invariant under every permutation of $(\alpha,\beta,\gamma)$. The expressions you want are invariant under some permutations, but not all, and therefore cannot be expressed in terms of only $a,b,c,d$.</p> <p>Their sum is symmetric however, as @DougM's answer indicates.</p> <p><strong>Update:</strong> Their product too, and this results in a quadratic equation for the two expressions, as @ReinhardMeier's answer shows. 
Thus, the individual values of your desired expressions (call them $u$ and $v$) are not determined uniquely from $a,b,c,d$, but the set $\{u,v\}$ is.</p> Answer 39: <p>$\frac {b}{a} = -\alpha - \beta - \gamma\\ \frac {c}{a} = \alpha \beta + \beta\gamma + \gamma\alpha\\ \frac {d}{a} = -\alpha \beta\gamma\\ -\frac {b}{a}\frac {c}{a} + 3\frac {d}{a} = (\alpha^2 \beta + \beta^2\gamma + \gamma^2\alpha)+(\alpha \beta^2 + \beta\gamma^2 + \gamma\alpha^2)$</p> <p>I don't know if I can do much more that that.</p> Answer 40: <p>There can be no such formula. The coefficients $a$, $b$, $c$ and $d$ are <a href="https://en.wikipedia.org/wiki/Symmetric_polynomial">symmetric functions</a> of the roots $\alpha$, $\beta$ and $\gamma$. Your polynomials $\alpha^2 \beta+\beta^2 \gamma+ \gamma^2\alpha$ and $\alpha \beta^2+\beta \gamma^2+\gamma\alpha^2$ are not symmetric functions, so they cannot be expressed as functions of the coefficients (since any such function would be symmetric).</p> Answer 41: <p>Let $u=\alpha^2\beta+\beta^2\gamma+\gamma^2\alpha$ and $v=\alpha\beta^2+\beta\gamma^2+\gamma\alpha^2$. 
We have the following relations between the elementary symmetric polynomials $e_1$, $e_2$ and $e_3$ of $\alpha$, $\beta$ and $\gamma$ and the coefficients of $P_3$: $$ e_1 = \alpha+\beta+\gamma = -\frac{b}{a} \\ e_2 = \alpha\beta+\beta\gamma+\gamma\alpha = +\frac{c}{a} \\ e_3 = \alpha\beta\gamma = -\frac{d}{a} $$ We find \begin{eqnarray*} p &amp; := u+v = &amp; e_1e_2-3e_3 \\ q &amp; := uv = &amp; e_1^3e_3-6e_1e_2e_3+e_2^3+9e_3^2 \end{eqnarray*} From this, you can obtain $u$ and $v$ solving $t^2-pt+q=0$.</p> Answer 42: <p>Let </p> <p>$$\cases{E(\alpha,\beta,\gamma):=\alpha^2 \beta+\beta^2 \gamma+ \gamma^2\alpha\\F(\alpha,\beta,\gamma):= \alpha \beta^2+\beta \gamma^2+\gamma\alpha^2}$$</p> <p>Your problem, strictly speaking, has no solution (but we will see hereafter that we can somewhat "bypass" this issue):</p> <p>Here is a counterexample: Consider $P_3(x)=x(x^2-1)$ with roots $\{0,1,-1\}$.</p> <p>Expression $E$ can take two different values depending on the order in which the roots are taken : $$E(\alpha,\beta,\gamma)=\begin{cases}-1 \ &amp; \text{if} \ \ \alpha=1,\beta=-1,\gamma=0\\ \ \ \ 1 \ &amp; \text{if} \ \ \alpha=-1,\beta=1,\gamma=0\end{cases}.$$</p> <p>As said in the other answers, the fundamental reason is that neither $E$ nor $F$ is symmetrical in $\alpha, \beta, \gamma$, thus not invariant for certain permutations of the roots. </p> <p>Of course, if you take $E+F$, you get a symmetrical polynomial in $\alpha, \beta, \gamma.$ </p> <p>But a slightly deeper analysis will be rewarding. 
In fact, <strong>expressions $E$ and $F$ are conjugate</strong> in a sense that will become clear, once we have seen the two following examples:</p> <ul> <li><p>taking $P_3(x)=x^3+x^2-x$, value $E(\alpha,\beta,\gamma)=\Phi\approx1.618$ is taken 3 times, and value $1-\Phi\approx-0.618$ is taken as well 3 times ($\Phi$ is the golden ratio).</p></li> <li><p>taking $P_3(x)=x^3+x^2+x$, (with some complex roots), value $-\frac12+i\frac{\sqrt{3}}{2}$ is taken 3 times, and conjugate value (with the meaning of "conjugation in $\mathbb{C}$"): $-\frac12-i\frac{\sqrt{3}}{2}$ is taken 3 times.</p></li> </ul> <p>Let us prove this "conjugation" by inspecting the set $S_3$ of $3!=6$ permutations of the roots $\alpha,\beta,\gamma$:</p> <p>$$\binom{\alpha \ \beta \ \gamma}{\alpha \ \beta \ \gamma}: \ \ E(\alpha,\beta,\gamma)=E(\alpha,\beta,\gamma)$$</p> <p>$$\binom{\alpha \ \beta \ \gamma}{\beta \ \gamma \ \alpha}: \ \ E(\beta,\gamma,\alpha)=E(\alpha,\beta,\gamma)$$</p> <p>$$\binom{\alpha \ \beta \ \gamma}{\gamma \ \alpha \ \beta}: \ \ E(\gamma,\alpha,\beta)=E(\alpha,\beta,\gamma)$$</p> <p>$$\binom{\alpha \ \beta \ \gamma}{\beta \ \alpha \ \gamma}: \ \ E(\beta,\alpha,\gamma)=F(\alpha,\beta,\gamma)$$</p> <p>$$\binom{\alpha \ \beta \ \gamma}{\beta \ \gamma \ \alpha}: \ \ E(\beta,\gamma ,\alpha)=F(\alpha,\beta,\gamma)$$</p> <p>$$\binom{\alpha \ \beta \ \gamma}{\gamma\ \beta \ \alpha}: \ \ E(\alpha,\beta,\gamma)=F(\alpha,\beta,\gamma)$$</p> <p>We have thus proved that, once a permutation is done on the roots, the value taken by $E$ is either unchanged (3 cases) or is the value of $F$ (the 3 other cases). 
As said by @CCorn in his answer, it is the mathematical object $\{E,F\}$ which is invariant under the action of $S_3$.</p> <p>Remark: The first 3 cases correspond to a circular permutation: it is evident that $E$ has been built so that it is invariant by this type of permutation.</p> <p><strong>Addendum:</strong> </p> <p>This is a little window on the origin of the so-called "Galois theory".</p> <p>Have a look at the very pedagogical presentation <a href="http://www-math.ucdenver.edu/~wcherowi/courses/m4010/polynom3.pdf" rel="nofollow noreferrer">here</a> where it is explained how the ideas of Lagrange for solving the quartic equation where based on polynomials (called resolvents) that, under permutation, could take only a very small number of values. See also (<a href="https://en.wikipedia.org/wiki/Quartic_function" rel="nofollow noreferrer">https://en.wikipedia.org/wiki/Quartic_function</a>). Another reference (<a href="https://math.stackexchange.com/q/49141">Galois theory: splitting field of cubic as a vector space</a>). </p> Answer 43: <p>No. $a,b,c,d$ are <a href="https://en.wikipedia.org/wiki/Symmetric_polynomial">symmetric polynomials</a> in $\alpha,\beta,\gamma$ and thus invariant under every permutation of $(\alpha,\beta,\gamma)$. The expressions you want are invariant under some permutations, but not all, and therefore cannot be expressed in terms of only $a,b,c,d$.</p> <p>Their sum is symmetric however, as @DougM's answer indicates.</p> <p><strong>Update:</strong> Their product too, and this results in a quadratic equation for the two expressions, as @ReinhardMeier's answer shows. 
Thus, the individual values of your desired expressions (call them $u$ and $v$) are not determined uniquely from $a,b,c,d$, but the set $\{u,v\}$ is.</p> Answer 44: <p>$\frac {b}{a} = -\alpha - \beta - \gamma\\ \frac {c}{a} = \alpha \beta + \beta\gamma + \gamma\alpha\\ \frac {d}{a} = -\alpha \beta\gamma\\ -\frac {b}{a}\frac {c}{a} + 3\frac {d}{a} = (\alpha^2 \beta + \beta^2\gamma + \gamma^2\alpha)+(\alpha \beta^2 + \beta\gamma^2 + \gamma\alpha^2)$</p> <p>I don't know if I can do much more that that.</p> Answer 45: <p>There can be no such formula. The coefficients $a$, $b$, $c$ and $d$ are <a href="https://en.wikipedia.org/wiki/Symmetric_polynomial">symmetric functions</a> of the roots $\alpha$, $\beta$ and $\gamma$. Your polynomials $\alpha^2 \beta+\beta^2 \gamma+ \gamma^2\alpha$ and $\alpha \beta^2+\beta \gamma^2+\gamma\alpha^2$ are not symmetric functions, so they cannot be expressed as functions of the coefficients (since any such function would be symmetric).</p> Answer 46: <p>Let $u=\alpha^2\beta+\beta^2\gamma+\gamma^2\alpha$ and $v=\alpha\beta^2+\beta\gamma^2+\gamma\alpha^2$. 
We have the following relations between the elementary symmetric polynomials $e_1$, $e_2$ and $e_3$ of $\alpha$, $\beta$ and $\gamma$ and the coefficients of $P_3$: $$ e_1 = \alpha+\beta+\gamma = -\frac{b}{a} \\ e_2 = \alpha\beta+\beta\gamma+\gamma\alpha = +\frac{c}{a} \\ e_3 = \alpha\beta\gamma = -\frac{d}{a} $$ We find \begin{eqnarray*} p &amp; := u+v = &amp; e_1e_2-3e_3 \\ q &amp; := uv = &amp; e_1^3e_3-6e_1e_2e_3+e_2^3+9e_3^2 \end{eqnarray*} From this, you can obtain $u$ and $v$ solving $t^2-pt+q=0$.</p> Answer 47: <p>Let </p> <p>$$\cases{E(\alpha,\beta,\gamma):=\alpha^2 \beta+\beta^2 \gamma+ \gamma^2\alpha\\F(\alpha,\beta,\gamma):= \alpha \beta^2+\beta \gamma^2+\gamma\alpha^2}$$</p> <p>Your problem, strictly speaking, has no solution (but we will see hereafter that we can somewhat "bypass" this issue):</p> <p>Here is a counterexample: Consider $P_3(x)=x(x^2-1)$ with roots $\{0,1,-1\}$.</p> <p>Expression $E$ can take two different values depending on the order in which the roots are taken : $$E(\alpha,\beta,\gamma)=\begin{cases}-1 \ &amp; \text{if} \ \ \alpha=1,\beta=-1,\gamma=0\\ \ \ \ 1 \ &amp; \text{if} \ \ \alpha=-1,\beta=1,\gamma=0\end{cases}.$$</p> <p>As said in the other answers, the fundamental reason is that neither $E$ nor $F$ is symmetrical in $\alpha, \beta, \gamma$, thus not invariant for certain permutations of the roots. </p> <p>Of course, if you take $E+F$, you get a symmetrical polynomial in $\alpha, \beta, \gamma.$ </p> <p>But a slightly deeper analysis will be rewarding. 
In fact, <strong>expressions $E$ and $F$ are conjugate</strong> in a sense that will become clear, once we have seen the two following examples:</p> <ul> <li><p>taking $P_3(x)=x^3+x^2-x$, value $E(\alpha,\beta,\gamma)=\Phi\approx1.618$ is taken 3 times, and value $1-\Phi\approx-0.618$ is taken as well 3 times ($\Phi$ is the golden ratio).</p></li> <li><p>taking $P_3(x)=x^3+x^2+x$, (with some complex roots), value $-\frac12+i\frac{\sqrt{3}}{2}$ is taken 3 times, and conjugate value (with the meaning of "conjugation in $\mathbb{C}$"): $-\frac12-i\frac{\sqrt{3}}{2}$ is taken 3 times.</p></li> </ul> <p>Let us prove this "conjugation" by inspecting the set $S_3$ of $3!=6$ permutations of the roots $\alpha,\beta,\gamma$:</p> <p>$$\binom{\alpha \ \beta \ \gamma}{\alpha \ \beta \ \gamma}: \ \ E(\alpha,\beta,\gamma)=E(\alpha,\beta,\gamma)$$</p> <p>$$\binom{\alpha \ \beta \ \gamma}{\beta \ \gamma \ \alpha}: \ \ E(\beta,\gamma,\alpha)=E(\alpha,\beta,\gamma)$$</p> <p>$$\binom{\alpha \ \beta \ \gamma}{\gamma \ \alpha \ \beta}: \ \ E(\gamma,\alpha,\beta)=E(\alpha,\beta,\gamma)$$</p> <p>$$\binom{\alpha \ \beta \ \gamma}{\beta \ \alpha \ \gamma}: \ \ E(\beta,\alpha,\gamma)=F(\alpha,\beta,\gamma)$$</p> <p>$$\binom{\alpha \ \beta \ \gamma}{\beta \ \gamma \ \alpha}: \ \ E(\beta,\gamma ,\alpha)=F(\alpha,\beta,\gamma)$$</p> <p>$$\binom{\alpha \ \beta \ \gamma}{\gamma\ \beta \ \alpha}: \ \ E(\alpha,\beta,\gamma)=F(\alpha,\beta,\gamma)$$</p> <p>We have thus proved that, once a permutation is done on the roots, the value taken by $E$ is either unchanged (3 cases) or is the value of $F$ (the 3 other cases). 
As said by @CCorn in his answer, it is the mathematical object $\{E,F\}$ which is invariant under the action of $S_3$.</p> <p>Remark: The first 3 cases correspond to a circular permutation: it is evident that $E$ has been built so that it is invariant by this type of permutation.</p> <p><strong>Addendum:</strong> </p> <p>This is a little window on the origin of the so-called "Galois theory".</p> <p>Have a look at the very pedagogical presentation <a href="http://www-math.ucdenver.edu/~wcherowi/courses/m4010/polynom3.pdf" rel="nofollow noreferrer">here</a> where it is explained how the ideas of Lagrange for solving the quartic equation where based on polynomials (called resolvents) that, under permutation, could take only a very small number of values. See also (<a href="https://en.wikipedia.org/wiki/Quartic_function" rel="nofollow noreferrer">https://en.wikipedia.org/wiki/Quartic_function</a>). Another reference (<a href="https://math.stackexchange.com/q/49141">Galois theory: splitting field of cubic as a vector space</a>). </p> Answer 48: <p>No. $a,b,c,d$ are <a href="https://en.wikipedia.org/wiki/Symmetric_polynomial">symmetric polynomials</a> in $\alpha,\beta,\gamma$ and thus invariant under every permutation of $(\alpha,\beta,\gamma)$. The expressions you want are invariant under some permutations, but not all, and therefore cannot be expressed in terms of only $a,b,c,d$.</p> <p>Their sum is symmetric however, as @DougM's answer indicates.</p> <p><strong>Update:</strong> Their product too, and this results in a quadratic equation for the two expressions, as @ReinhardMeier's answer shows. 
Thus, the individual values of your desired expressions (call them $u$ and $v$) are not determined uniquely from $a,b,c,d$, but the set $\{u,v\}$ is.</p> Answer 49: <p>$\frac {b}{a} = -\alpha - \beta - \gamma\\ \frac {c}{a} = \alpha \beta + \beta\gamma + \gamma\alpha\\ \frac {d}{a} = -\alpha \beta\gamma\\ -\frac {b}{a}\frac {c}{a} + 3\frac {d}{a} = (\alpha^2 \beta + \beta^2\gamma + \gamma^2\alpha)+(\alpha \beta^2 + \beta\gamma^2 + \gamma\alpha^2)$</p> <p>I don't know if I can do much more that that.</p> Answer 50: <p>There can be no such formula. The coefficients $a$, $b$, $c$ and $d$ are <a href="https://en.wikipedia.org/wiki/Symmetric_polynomial">symmetric functions</a> of the roots $\alpha$, $\beta$ and $\gamma$. Your polynomials $\alpha^2 \beta+\beta^2 \gamma+ \gamma^2\alpha$ and $\alpha \beta^2+\beta \gamma^2+\gamma\alpha^2$ are not symmetric functions, so they cannot be expressed as functions of the coefficients (since any such function would be symmetric).</p> Answer 51: <p>Let $u=\alpha^2\beta+\beta^2\gamma+\gamma^2\alpha$ and $v=\alpha\beta^2+\beta\gamma^2+\gamma\alpha^2$. 
We have the following relations between the elementary symmetric polynomials $e_1$, $e_2$ and $e_3$ of $\alpha$, $\beta$ and $\gamma$ and the coefficients of $P_3$: $$ e_1 = \alpha+\beta+\gamma = -\frac{b}{a} \\ e_2 = \alpha\beta+\beta\gamma+\gamma\alpha = +\frac{c}{a} \\ e_3 = \alpha\beta\gamma = -\frac{d}{a} $$ We find \begin{eqnarray*} p &amp; := u+v = &amp; e_1e_2-3e_3 \\ q &amp; := uv = &amp; e_1^3e_3-6e_1e_2e_3+e_2^3+9e_3^2 \end{eqnarray*} From this, you can obtain $u$ and $v$ solving $t^2-pt+q=0$.</p> Answer 52: <p>Let </p> <p>$$\cases{E(\alpha,\beta,\gamma):=\alpha^2 \beta+\beta^2 \gamma+ \gamma^2\alpha\\F(\alpha,\beta,\gamma):= \alpha \beta^2+\beta \gamma^2+\gamma\alpha^2}$$</p> <p>Your problem, strictly speaking, has no solution (but we will see hereafter that we can somewhat "bypass" this issue):</p> <p>Here is a counterexample: Consider $P_3(x)=x(x^2-1)$ with roots $\{0,1,-1\}$.</p> <p>Expression $E$ can take two different values depending on the order in which the roots are taken : $$E(\alpha,\beta,\gamma)=\begin{cases}-1 \ &amp; \text{if} \ \ \alpha=1,\beta=-1,\gamma=0\\ \ \ \ 1 \ &amp; \text{if} \ \ \alpha=-1,\beta=1,\gamma=0\end{cases}.$$</p> <p>As said in the other answers, the fundamental reason is that neither $E$ nor $F$ is symmetrical in $\alpha, \beta, \gamma$, thus not invariant for certain permutations of the roots. </p> <p>Of course, if you take $E+F$, you get a symmetrical polynomial in $\alpha, \beta, \gamma.$ </p> <p>But a slightly deeper analysis will be rewarding. 
In fact, <strong>expressions $E$ and $F$ are conjugate</strong> in a sense that will become clear, once we have seen the two following examples:</p> <ul> <li><p>taking $P_3(x)=x^3+x^2-x$, value $E(\alpha,\beta,\gamma)=\Phi\approx1.618$ is taken 3 times, and value $1-\Phi\approx-0.618$ is taken as well 3 times ($\Phi$ is the golden ratio).</p></li> <li><p>taking $P_3(x)=x^3+x^2+x$, (with some complex roots), value $-\frac12+i\frac{\sqrt{3}}{2}$ is taken 3 times, and conjugate value (with the meaning of "conjugation in $\mathbb{C}$"): $-\frac12-i\frac{\sqrt{3}}{2}$ is taken 3 times.</p></li> </ul> <p>Let us prove this "conjugation" by inspecting the set $S_3$ of $3!=6$ permutations of the roots $\alpha,\beta,\gamma$:</p> <p>$$\binom{\alpha \ \beta \ \gamma}{\alpha \ \beta \ \gamma}: \ \ E(\alpha,\beta,\gamma)=E(\alpha,\beta,\gamma)$$</p> <p>$$\binom{\alpha \ \beta \ \gamma}{\beta \ \gamma \ \alpha}: \ \ E(\beta,\gamma,\alpha)=E(\alpha,\beta,\gamma)$$</p> <p>$$\binom{\alpha \ \beta \ \gamma}{\gamma \ \alpha \ \beta}: \ \ E(\gamma,\alpha,\beta)=E(\alpha,\beta,\gamma)$$</p> <p>$$\binom{\alpha \ \beta \ \gamma}{\beta \ \alpha \ \gamma}: \ \ E(\beta,\alpha,\gamma)=F(\alpha,\beta,\gamma)$$</p> <p>$$\binom{\alpha \ \beta \ \gamma}{\beta \ \gamma \ \alpha}: \ \ E(\beta,\gamma ,\alpha)=F(\alpha,\beta,\gamma)$$</p> <p>$$\binom{\alpha \ \beta \ \gamma}{\gamma\ \beta \ \alpha}: \ \ E(\alpha,\beta,\gamma)=F(\alpha,\beta,\gamma)$$</p> <p>We have thus proved that, once a permutation is done on the roots, the value taken by $E$ is either unchanged (3 cases) or is the value of $F$ (the 3 other cases). 
As said by @CCorn in his answer, it is the mathematical object $\{E,F\}$ which is invariant under the action of $S_3$.</p> <p>Remark: The first 3 cases correspond to a circular permutation: it is evident that $E$ has been built so that it is invariant under this type of permutation.</p> <p><strong>Addendum:</strong> </p> <p>This is a little window on the origin of the so-called "Galois theory".</p> <p>Have a look at the very pedagogical presentation <a href="http://www-math.ucdenver.edu/~wcherowi/courses/m4010/polynom3.pdf" rel="nofollow noreferrer">here</a>, where it is explained how the ideas of Lagrange for solving the quartic equation were based on polynomials (called resolvents) that, under permutation, could take only a very small number of values. See also <a href="https://en.wikipedia.org/wiki/Quartic_function" rel="nofollow noreferrer">https://en.wikipedia.org/wiki/Quartic_function</a>. Another reference: <a href="https://math.stackexchange.com/q/49141">Galois theory: splitting field of cubic as a vector space</a>.</p> Answer 53: <p>No. $a,b,c,d$ are <a href="https://en.wikipedia.org/wiki/Symmetric_polynomial">symmetric polynomials</a> in $\alpha,\beta,\gamma$ and thus invariant under every permutation of $(\alpha,\beta,\gamma)$. The expressions you want are invariant under some permutations, but not all, and therefore cannot be expressed in terms of only $a,b,c,d$.</p> <p>Their sum is symmetric, however, as @DougM's answer indicates.</p> <p><strong>Update:</strong> Their product too, and this results in a quadratic equation for the two expressions, as @ReinhardMeier's answer shows. 
Thus, the individual values of your desired expressions (call them $u$ and $v$) are not determined uniquely from $a,b,c,d$, but the set $\{u,v\}$ is.</p> Answer 54: <p>$\frac {b}{a} = -\alpha - \beta - \gamma\\ \frac {c}{a} = \alpha \beta + \beta\gamma + \gamma\alpha\\ \frac {d}{a} = -\alpha \beta\gamma\\ -\frac {b}{a}\frac {c}{a} + 3\frac {d}{a} = (\alpha^2 \beta + \beta^2\gamma + \gamma^2\alpha)+(\alpha \beta^2 + \beta\gamma^2 + \gamma\alpha^2)$</p> <p>I don't know if I can do much more that that.</p> Answer 55: <p>There can be no such formula. The coefficients $a$, $b$, $c$ and $d$ are <a href="https://en.wikipedia.org/wiki/Symmetric_polynomial">symmetric functions</a> of the roots $\alpha$, $\beta$ and $\gamma$. Your polynomials $\alpha^2 \beta+\beta^2 \gamma+ \gamma^2\alpha$ and $\alpha \beta^2+\beta \gamma^2+\gamma\alpha^2$ are not symmetric functions, so they cannot be expressed as functions of the coefficients (since any such function would be symmetric).</p> Answer 56: <p>Let $u=\alpha^2\beta+\beta^2\gamma+\gamma^2\alpha$ and $v=\alpha\beta^2+\beta\gamma^2+\gamma\alpha^2$. 
We have the following relations between the elementary symmetric polynomials $e_1$, $e_2$ and $e_3$ of $\alpha$, $\beta$ and $\gamma$ and the coefficients of $P_3$: $$ e_1 = \alpha+\beta+\gamma = -\frac{b}{a} \\ e_2 = \alpha\beta+\beta\gamma+\gamma\alpha = +\frac{c}{a} \\ e_3 = \alpha\beta\gamma = -\frac{d}{a} $$ We find \begin{eqnarray*} p &amp; := u+v = &amp; e_1e_2-3e_3 \\ q &amp; := uv = &amp; e_1^3e_3-6e_1e_2e_3+e_2^3+9e_3^2 \end{eqnarray*} From this, you can obtain $u$ and $v$ solving $t^2-pt+q=0$.</p> Answer 57: <p>Let </p> <p>$$\cases{E(\alpha,\beta,\gamma):=\alpha^2 \beta+\beta^2 \gamma+ \gamma^2\alpha\\F(\alpha,\beta,\gamma):= \alpha \beta^2+\beta \gamma^2+\gamma\alpha^2}$$</p> <p>Your problem, strictly speaking, has no solution (but we will see hereafter that we can somewhat "bypass" this issue):</p> <p>Here is a counterexample: Consider $P_3(x)=x(x^2-1)$ with roots $\{0,1,-1\}$.</p> <p>Expression $E$ can take two different values depending on the order in which the roots are taken : $$E(\alpha,\beta,\gamma)=\begin{cases}-1 \ &amp; \text{if} \ \ \alpha=1,\beta=-1,\gamma=0\\ \ \ \ 1 \ &amp; \text{if} \ \ \alpha=-1,\beta=1,\gamma=0\end{cases}.$$</p> <p>As said in the other answers, the fundamental reason is that neither $E$ nor $F$ is symmetrical in $\alpha, \beta, \gamma$, thus not invariant for certain permutations of the roots. </p> <p>Of course, if you take $E+F$, you get a symmetrical polynomial in $\alpha, \beta, \gamma.$ </p> <p>But a slightly deeper analysis will be rewarding. 
In fact, <strong>expressions $E$ and $F$ are conjugate</strong> in a sense that will become clear, once we have seen the two following examples:</p> <ul> <li><p>taking $P_3(x)=x^3+x^2-x$, value $E(\alpha,\beta,\gamma)=\Phi\approx1.618$ is taken 3 times, and value $1-\Phi\approx-0.618$ is taken as well 3 times ($\Phi$ is the golden ratio).</p></li> <li><p>taking $P_3(x)=x^3+x^2+x$, (with some complex roots), value $-\frac12+i\frac{\sqrt{3}}{2}$ is taken 3 times, and conjugate value (with the meaning of "conjugation in $\mathbb{C}$"): $-\frac12-i\frac{\sqrt{3}}{2}$ is taken 3 times.</p></li> </ul> <p>Let us prove this "conjugation" by inspecting the set $S_3$ of $3!=6$ permutations of the roots $\alpha,\beta,\gamma$:</p> <p>$$\binom{\alpha \ \beta \ \gamma}{\alpha \ \beta \ \gamma}: \ \ E(\alpha,\beta,\gamma)=E(\alpha,\beta,\gamma)$$</p> <p>$$\binom{\alpha \ \beta \ \gamma}{\beta \ \gamma \ \alpha}: \ \ E(\beta,\gamma,\alpha)=E(\alpha,\beta,\gamma)$$</p> <p>$$\binom{\alpha \ \beta \ \gamma}{\gamma \ \alpha \ \beta}: \ \ E(\gamma,\alpha,\beta)=E(\alpha,\beta,\gamma)$$</p> <p>$$\binom{\alpha \ \beta \ \gamma}{\beta \ \alpha \ \gamma}: \ \ E(\beta,\alpha,\gamma)=F(\alpha,\beta,\gamma)$$</p> <p>$$\binom{\alpha \ \beta \ \gamma}{\beta \ \gamma \ \alpha}: \ \ E(\beta,\gamma ,\alpha)=F(\alpha,\beta,\gamma)$$</p> <p>$$\binom{\alpha \ \beta \ \gamma}{\gamma\ \beta \ \alpha}: \ \ E(\alpha,\beta,\gamma)=F(\alpha,\beta,\gamma)$$</p> <p>We have thus proved that, once a permutation is done on the roots, the value taken by $E$ is either unchanged (3 cases) or is the value of $F$ (the 3 other cases). 
As said by @CCorn in his answer, it is the mathematical object $\{E,F\}$ which is invariant under the action of $S_3$.</p> <p>Remark: The first 3 cases correspond to a circular permutation: it is evident that $E$ has been built so that it is invariant by this type of permutation.</p> <p><strong>Addendum:</strong> </p> <p>This is a little window on the origin of the so-called "Galois theory".</p> <p>Have a look at the very pedagogical presentation <a href="http://www-math.ucdenver.edu/~wcherowi/courses/m4010/polynom3.pdf" rel="nofollow noreferrer">here</a> where it is explained how the ideas of Lagrange for solving the quartic equation where based on polynomials (called resolvents) that, under permutation, could take only a very small number of values. See also (<a href="https://en.wikipedia.org/wiki/Quartic_function" rel="nofollow noreferrer">https://en.wikipedia.org/wiki/Quartic_function</a>). Another reference (<a href="https://math.stackexchange.com/q/49141">Galois theory: splitting field of cubic as a vector space</a>). </p> Answer 58: <p>No. $a,b,c,d$ are <a href="https://en.wikipedia.org/wiki/Symmetric_polynomial">symmetric polynomials</a> in $\alpha,\beta,\gamma$ and thus invariant under every permutation of $(\alpha,\beta,\gamma)$. The expressions you want are invariant under some permutations, but not all, and therefore cannot be expressed in terms of only $a,b,c,d$.</p> <p>Their sum is symmetric however, as @DougM's answer indicates.</p> <p><strong>Update:</strong> Their product too, and this results in a quadratic equation for the two expressions, as @ReinhardMeier's answer shows. 
Thus, the individual values of your desired expressions (call them $u$ and $v$) are not determined uniquely from $a,b,c,d$, but the set $\{u,v\}$ is.</p> Answer 59: <p>$\frac {b}{a} = -\alpha - \beta - \gamma\\ \frac {c}{a} = \alpha \beta + \beta\gamma + \gamma\alpha\\ \frac {d}{a} = -\alpha \beta\gamma\\ -\frac {b}{a}\frac {c}{a} + 3\frac {d}{a} = (\alpha^2 \beta + \beta^2\gamma + \gamma^2\alpha)+(\alpha \beta^2 + \beta\gamma^2 + \gamma\alpha^2)$</p> <p>I don't know if I can do much more that that.</p> Answer 60: <p>There can be no such formula. The coefficients $a$, $b$, $c$ and $d$ are <a href="https://en.wikipedia.org/wiki/Symmetric_polynomial">symmetric functions</a> of the roots $\alpha$, $\beta$ and $\gamma$. Your polynomials $\alpha^2 \beta+\beta^2 \gamma+ \gamma^2\alpha$ and $\alpha \beta^2+\beta \gamma^2+\gamma\alpha^2$ are not symmetric functions, so they cannot be expressed as functions of the coefficients (since any such function would be symmetric).</p> Answer 61: <p>Let $u=\alpha^2\beta+\beta^2\gamma+\gamma^2\alpha$ and $v=\alpha\beta^2+\beta\gamma^2+\gamma\alpha^2$. 
We have the following relations between the elementary symmetric polynomials $e_1$, $e_2$ and $e_3$ of $\alpha$, $\beta$ and $\gamma$ and the coefficients of $P_3$: $$ e_1 = \alpha+\beta+\gamma = -\frac{b}{a} \\ e_2 = \alpha\beta+\beta\gamma+\gamma\alpha = +\frac{c}{a} \\ e_3 = \alpha\beta\gamma = -\frac{d}{a} $$ We find \begin{eqnarray*} p &amp; := u+v = &amp; e_1e_2-3e_3 \\ q &amp; := uv = &amp; e_1^3e_3-6e_1e_2e_3+e_2^3+9e_3^2 \end{eqnarray*} From this, you can obtain $u$ and $v$ solving $t^2-pt+q=0$.</p> Answer 62: <p>Let </p> <p>$$\cases{E(\alpha,\beta,\gamma):=\alpha^2 \beta+\beta^2 \gamma+ \gamma^2\alpha\\F(\alpha,\beta,\gamma):= \alpha \beta^2+\beta \gamma^2+\gamma\alpha^2}$$</p> <p>Your problem, strictly speaking, has no solution (but we will see hereafter that we can somewhat "bypass" this issue):</p> <p>Here is a counterexample: Consider $P_3(x)=x(x^2-1)$ with roots $\{0,1,-1\}$.</p> <p>Expression $E$ can take two different values depending on the order in which the roots are taken : $$E(\alpha,\beta,\gamma)=\begin{cases}-1 \ &amp; \text{if} \ \ \alpha=1,\beta=-1,\gamma=0\\ \ \ \ 1 \ &amp; \text{if} \ \ \alpha=-1,\beta=1,\gamma=0\end{cases}.$$</p> <p>As said in the other answers, the fundamental reason is that neither $E$ nor $F$ is symmetrical in $\alpha, \beta, \gamma$, thus not invariant for certain permutations of the roots. </p> <p>Of course, if you take $E+F$, you get a symmetrical polynomial in $\alpha, \beta, \gamma.$ </p> <p>But a slightly deeper analysis will be rewarding. 
In fact, <strong>expressions $E$ and $F$ are conjugate</strong> in a sense that will become clear, once we have seen the two following examples:</p> <ul> <li><p>taking $P_3(x)=x^3+x^2-x$, value $E(\alpha,\beta,\gamma)=\Phi\approx1.618$ is taken 3 times, and value $1-\Phi\approx-0.618$ is taken as well 3 times ($\Phi$ is the golden ratio).</p></li> <li><p>taking $P_3(x)=x^3+x^2+x$, (with some complex roots), value $-\frac12+i\frac{\sqrt{3}}{2}$ is taken 3 times, and conjugate value (with the meaning of "conjugation in $\mathbb{C}$"): $-\frac12-i\frac{\sqrt{3}}{2}$ is taken 3 times.</p></li> </ul> <p>Let us prove this "conjugation" by inspecting the set $S_3$ of $3!=6$ permutations of the roots $\alpha,\beta,\gamma$:</p> <p>$$\binom{\alpha \ \beta \ \gamma}{\alpha \ \beta \ \gamma}: \ \ E(\alpha,\beta,\gamma)=E(\alpha,\beta,\gamma)$$</p> <p>$$\binom{\alpha \ \beta \ \gamma}{\beta \ \gamma \ \alpha}: \ \ E(\beta,\gamma,\alpha)=E(\alpha,\beta,\gamma)$$</p> <p>$$\binom{\alpha \ \beta \ \gamma}{\gamma \ \alpha \ \beta}: \ \ E(\gamma,\alpha,\beta)=E(\alpha,\beta,\gamma)$$</p> <p>$$\binom{\alpha \ \beta \ \gamma}{\beta \ \alpha \ \gamma}: \ \ E(\beta,\alpha,\gamma)=F(\alpha,\beta,\gamma)$$</p> <p>$$\binom{\alpha \ \beta \ \gamma}{\beta \ \gamma \ \alpha}: \ \ E(\beta,\gamma ,\alpha)=F(\alpha,\beta,\gamma)$$</p> <p>$$\binom{\alpha \ \beta \ \gamma}{\gamma\ \beta \ \alpha}: \ \ E(\alpha,\beta,\gamma)=F(\alpha,\beta,\gamma)$$</p> <p>We have thus proved that, once a permutation is done on the roots, the value taken by $E$ is either unchanged (3 cases) or is the value of $F$ (the 3 other cases). 
As said by @CCorn in his answer, it is the mathematical object $\{E,F\}$ which is invariant under the action of $S_3$.</p> <p>Remark: The first 3 cases correspond to a circular permutation: it is evident that $E$ has been built so that it is invariant by this type of permutation.</p> <p><strong>Addendum:</strong> </p> <p>This is a little window on the origin of the so-called "Galois theory".</p> <p>Have a look at the very pedagogical presentation <a href="http://www-math.ucdenver.edu/~wcherowi/courses/m4010/polynom3.pdf" rel="nofollow noreferrer">here</a> where it is explained how the ideas of Lagrange for solving the quartic equation where based on polynomials (called resolvents) that, under permutation, could take only a very small number of values. See also (<a href="https://en.wikipedia.org/wiki/Quartic_function" rel="nofollow noreferrer">https://en.wikipedia.org/wiki/Quartic_function</a>). Another reference (<a href="https://math.stackexchange.com/q/49141">Galois theory: splitting field of cubic as a vector space</a>). </p> Answer 63: <p>No. $a,b,c,d$ are <a href="https://en.wikipedia.org/wiki/Symmetric_polynomial">symmetric polynomials</a> in $\alpha,\beta,\gamma$ and thus invariant under every permutation of $(\alpha,\beta,\gamma)$. The expressions you want are invariant under some permutations, but not all, and therefore cannot be expressed in terms of only $a,b,c,d$.</p> <p>Their sum is symmetric however, as @DougM's answer indicates.</p> <p><strong>Update:</strong> Their product too, and this results in a quadratic equation for the two expressions, as @ReinhardMeier's answer shows. 
Thus, the individual values of your desired expressions (call them $u$ and $v$) are not determined uniquely from $a,b,c,d$, but the set $\{u,v\}$ is.</p> Answer 64: <p>$\frac {b}{a} = -\alpha - \beta - \gamma\\ \frac {c}{a} = \alpha \beta + \beta\gamma + \gamma\alpha\\ \frac {d}{a} = -\alpha \beta\gamma\\ -\frac {b}{a}\frac {c}{a} + 3\frac {d}{a} = (\alpha^2 \beta + \beta^2\gamma + \gamma^2\alpha)+(\alpha \beta^2 + \beta\gamma^2 + \gamma\alpha^2)$</p> <p>I don't know if I can do much more that that.</p> Answer 65: <p>There can be no such formula. The coefficients $a$, $b$, $c$ and $d$ are <a href="https://en.wikipedia.org/wiki/Symmetric_polynomial">symmetric functions</a> of the roots $\alpha$, $\beta$ and $\gamma$. Your polynomials $\alpha^2 \beta+\beta^2 \gamma+ \gamma^2\alpha$ and $\alpha \beta^2+\beta \gamma^2+\gamma\alpha^2$ are not symmetric functions, so they cannot be expressed as functions of the coefficients (since any such function would be symmetric).</p> Answer 66: <p>Let $u=\alpha^2\beta+\beta^2\gamma+\gamma^2\alpha$ and $v=\alpha\beta^2+\beta\gamma^2+\gamma\alpha^2$. 
We have the following relations between the elementary symmetric polynomials $e_1$, $e_2$ and $e_3$ of $\alpha$, $\beta$ and $\gamma$ and the coefficients of $P_3$: $$ e_1 = \alpha+\beta+\gamma = -\frac{b}{a} \\ e_2 = \alpha\beta+\beta\gamma+\gamma\alpha = +\frac{c}{a} \\ e_3 = \alpha\beta\gamma = -\frac{d}{a} $$ We find \begin{eqnarray*} p &amp; := u+v = &amp; e_1e_2-3e_3 \\ q &amp; := uv = &amp; e_1^3e_3-6e_1e_2e_3+e_2^3+9e_3^2 \end{eqnarray*} From this, you can obtain $u$ and $v$ solving $t^2-pt+q=0$.</p> Answer 67: <p>Let </p> <p>$$\cases{E(\alpha,\beta,\gamma):=\alpha^2 \beta+\beta^2 \gamma+ \gamma^2\alpha\\F(\alpha,\beta,\gamma):= \alpha \beta^2+\beta \gamma^2+\gamma\alpha^2}$$</p> <p>Your problem, strictly speaking, has no solution (but we will see hereafter that we can somewhat "bypass" this issue):</p> <p>Here is a counterexample: Consider $P_3(x)=x(x^2-1)$ with roots $\{0,1,-1\}$.</p> <p>Expression $E$ can take two different values depending on the order in which the roots are taken : $$E(\alpha,\beta,\gamma)=\begin{cases}-1 \ &amp; \text{if} \ \ \alpha=1,\beta=-1,\gamma=0\\ \ \ \ 1 \ &amp; \text{if} \ \ \alpha=-1,\beta=1,\gamma=0\end{cases}.$$</p> <p>As said in the other answers, the fundamental reason is that neither $E$ nor $F$ is symmetrical in $\alpha, \beta, \gamma$, thus not invariant for certain permutations of the roots. </p> <p>Of course, if you take $E+F$, you get a symmetrical polynomial in $\alpha, \beta, \gamma.$ </p> <p>But a slightly deeper analysis will be rewarding. 
In fact, <strong>expressions $E$ and $F$ are conjugate</strong> in a sense that will become clear, once we have seen the two following examples:</p> <ul> <li><p>taking $P_3(x)=x^3+x^2-x$, value $E(\alpha,\beta,\gamma)=\Phi\approx1.618$ is taken 3 times, and value $1-\Phi\approx-0.618$ is taken as well 3 times ($\Phi$ is the golden ratio).</p></li> <li><p>taking $P_3(x)=x^3+x^2+x$, (with some complex roots), value $-\frac12+i\frac{\sqrt{3}}{2}$ is taken 3 times, and conjugate value (with the meaning of "conjugation in $\mathbb{C}$"): $-\frac12-i\frac{\sqrt{3}}{2}$ is taken 3 times.</p></li> </ul> <p>Let us prove this "conjugation" by inspecting the set $S_3$ of $3!=6$ permutations of the roots $\alpha,\beta,\gamma$:</p> <p>$$\binom{\alpha \ \beta \ \gamma}{\alpha \ \beta \ \gamma}: \ \ E(\alpha,\beta,\gamma)=E(\alpha,\beta,\gamma)$$</p> <p>$$\binom{\alpha \ \beta \ \gamma}{\beta \ \gamma \ \alpha}: \ \ E(\beta,\gamma,\alpha)=E(\alpha,\beta,\gamma)$$</p> <p>$$\binom{\alpha \ \beta \ \gamma}{\gamma \ \alpha \ \beta}: \ \ E(\gamma,\alpha,\beta)=E(\alpha,\beta,\gamma)$$</p> <p>$$\binom{\alpha \ \beta \ \gamma}{\beta \ \alpha \ \gamma}: \ \ E(\beta,\alpha,\gamma)=F(\alpha,\beta,\gamma)$$</p> <p>$$\binom{\alpha \ \beta \ \gamma}{\beta \ \gamma \ \alpha}: \ \ E(\beta,\gamma ,\alpha)=F(\alpha,\beta,\gamma)$$</p> <p>$$\binom{\alpha \ \beta \ \gamma}{\gamma\ \beta \ \alpha}: \ \ E(\alpha,\beta,\gamma)=F(\alpha,\beta,\gamma)$$</p> <p>We have thus proved that, once a permutation is done on the roots, the value taken by $E$ is either unchanged (3 cases) or is the value of $F$ (the 3 other cases). 
As said by @CCorn in his answer, it is the mathematical object $\{E,F\}$ which is invariant under the action of $S_3$.</p> <p>Remark: The first 3 cases correspond to a circular permutation: it is evident that $E$ has been built so that it is invariant by this type of permutation.</p> <p><strong>Addendum:</strong> </p> <p>This is a little window on the origin of the so-called "Galois theory".</p> <p>Have a look at the very pedagogical presentation <a href="http://www-math.ucdenver.edu/~wcherowi/courses/m4010/polynom3.pdf" rel="nofollow noreferrer">here</a> where it is explained how the ideas of Lagrange for solving the quartic equation where based on polynomials (called resolvents) that, under permutation, could take only a very small number of values. See also (<a href="https://en.wikipedia.org/wiki/Quartic_function" rel="nofollow noreferrer">https://en.wikipedia.org/wiki/Quartic_function</a>). Another reference (<a href="https://math.stackexchange.com/q/49141">Galois theory: splitting field of cubic as a vector space</a>). </p> Answer 68: <p>No. $a,b,c,d$ are <a href="https://en.wikipedia.org/wiki/Symmetric_polynomial">symmetric polynomials</a> in $\alpha,\beta,\gamma$ and thus invariant under every permutation of $(\alpha,\beta,\gamma)$. The expressions you want are invariant under some permutations, but not all, and therefore cannot be expressed in terms of only $a,b,c,d$.</p> <p>Their sum is symmetric however, as @DougM's answer indicates.</p> <p><strong>Update:</strong> Their product too, and this results in a quadratic equation for the two expressions, as @ReinhardMeier's answer shows. 
Thus, the individual values of your desired expressions (call them $u$ and $v$) are not determined uniquely from $a,b,c,d$, but the set $\{u,v\}$ is.</p> Answer 69: <p>$\frac {b}{a} = -\alpha - \beta - \gamma\\ \frac {c}{a} = \alpha \beta + \beta\gamma + \gamma\alpha\\ \frac {d}{a} = -\alpha \beta\gamma\\ -\frac {b}{a}\frac {c}{a} + 3\frac {d}{a} = (\alpha^2 \beta + \beta^2\gamma + \gamma^2\alpha)+(\alpha \beta^2 + \beta\gamma^2 + \gamma\alpha^2)$</p> <p>I don't know if I can do much more that that.</p> Answer 70: <p>There can be no such formula. The coefficients $a$, $b$, $c$ and $d$ are <a href="https://en.wikipedia.org/wiki/Symmetric_polynomial">symmetric functions</a> of the roots $\alpha$, $\beta$ and $\gamma$. Your polynomials $\alpha^2 \beta+\beta^2 \gamma+ \gamma^2\alpha$ and $\alpha \beta^2+\beta \gamma^2+\gamma\alpha^2$ are not symmetric functions, so they cannot be expressed as functions of the coefficients (since any such function would be symmetric).</p> Answer 71: <p>Let $u=\alpha^2\beta+\beta^2\gamma+\gamma^2\alpha$ and $v=\alpha\beta^2+\beta\gamma^2+\gamma\alpha^2$. 
We have the following relations between the elementary symmetric polynomials $e_1$, $e_2$ and $e_3$ of $\alpha$, $\beta$ and $\gamma$ and the coefficients of $P_3$: $$ e_1 = \alpha+\beta+\gamma = -\frac{b}{a} \\ e_2 = \alpha\beta+\beta\gamma+\gamma\alpha = +\frac{c}{a} \\ e_3 = \alpha\beta\gamma = -\frac{d}{a} $$ We find \begin{eqnarray*} p &amp; := u+v = &amp; e_1e_2-3e_3 \\ q &amp; := uv = &amp; e_1^3e_3-6e_1e_2e_3+e_2^3+9e_3^2 \end{eqnarray*} From this, you can obtain $u$ and $v$ solving $t^2-pt+q=0$.</p> Answer 72: <p>Let </p> <p>$$\cases{E(\alpha,\beta,\gamma):=\alpha^2 \beta+\beta^2 \gamma+ \gamma^2\alpha\\F(\alpha,\beta,\gamma):= \alpha \beta^2+\beta \gamma^2+\gamma\alpha^2}$$</p> <p>Your problem, strictly speaking, has no solution (but we will see hereafter that we can somewhat "bypass" this issue):</p> <p>Here is a counterexample: Consider $P_3(x)=x(x^2-1)$ with roots $\{0,1,-1\}$.</p> <p>Expression $E$ can take two different values depending on the order in which the roots are taken : $$E(\alpha,\beta,\gamma)=\begin{cases}-1 \ &amp; \text{if} \ \ \alpha=1,\beta=-1,\gamma=0\\ \ \ \ 1 \ &amp; \text{if} \ \ \alpha=-1,\beta=1,\gamma=0\end{cases}.$$</p> <p>As said in the other answers, the fundamental reason is that neither $E$ nor $F$ is symmetrical in $\alpha, \beta, \gamma$, thus not invariant for certain permutations of the roots. </p> <p>Of course, if you take $E+F$, you get a symmetrical polynomial in $\alpha, \beta, \gamma.$ </p> <p>But a slightly deeper analysis will be rewarding. 
In fact, <strong>expressions $E$ and $F$ are conjugate</strong> in a sense that will become clear, once we have seen the two following examples:</p> <ul> <li><p>taking $P_3(x)=x^3+x^2-x$, value $E(\alpha,\beta,\gamma)=\Phi\approx1.618$ is taken 3 times, and value $1-\Phi\approx-0.618$ is taken as well 3 times ($\Phi$ is the golden ratio).</p></li> <li><p>taking $P_3(x)=x^3+x^2+x$, (with some complex roots), value $-\frac12+i\frac{\sqrt{3}}{2}$ is taken 3 times, and conjugate value (with the meaning of "conjugation in $\mathbb{C}$"): $-\frac12-i\frac{\sqrt{3}}{2}$ is taken 3 times.</p></li> </ul> <p>Let us prove this "conjugation" by inspecting the set $S_3$ of $3!=6$ permutations of the roots $\alpha,\beta,\gamma$:</p> <p>$$\binom{\alpha \ \beta \ \gamma}{\alpha \ \beta \ \gamma}: \ \ E(\alpha,\beta,\gamma)=E(\alpha,\beta,\gamma)$$</p> <p>$$\binom{\alpha \ \beta \ \gamma}{\beta \ \gamma \ \alpha}: \ \ E(\beta,\gamma,\alpha)=E(\alpha,\beta,\gamma)$$</p> <p>$$\binom{\alpha \ \beta \ \gamma}{\gamma \ \alpha \ \beta}: \ \ E(\gamma,\alpha,\beta)=E(\alpha,\beta,\gamma)$$</p> <p>$$\binom{\alpha \ \beta \ \gamma}{\beta \ \alpha \ \gamma}: \ \ E(\beta,\alpha,\gamma)=F(\alpha,\beta,\gamma)$$</p> <p>$$\binom{\alpha \ \beta \ \gamma}{\beta \ \gamma \ \alpha}: \ \ E(\beta,\gamma ,\alpha)=F(\alpha,\beta,\gamma)$$</p> <p>$$\binom{\alpha \ \beta \ \gamma}{\gamma\ \beta \ \alpha}: \ \ E(\alpha,\beta,\gamma)=F(\alpha,\beta,\gamma)$$</p> <p>We have thus proved that, once a permutation is done on the roots, the value taken by $E$ is either unchanged (3 cases) or is the value of $F$ (the 3 other cases). 
As said by @CCorn in his answer, it is the mathematical object $\{E,F\}$ which is invariant under the action of $S_3$.</p> <p>Remark: The first 3 cases correspond to a circular permutation: it is evident that $E$ has been built so that it is invariant by this type of permutation.</p> <p><strong>Addendum:</strong> </p> <p>This is a little window on the origin of the so-called "Galois theory".</p> <p>Have a look at the very pedagogical presentation <a href="http://www-math.ucdenver.edu/~wcherowi/courses/m4010/polynom3.pdf" rel="nofollow noreferrer">here</a> where it is explained how the ideas of Lagrange for solving the quartic equation where based on polynomials (called resolvents) that, under permutation, could take only a very small number of values. See also (<a href="https://en.wikipedia.org/wiki/Quartic_function" rel="nofollow noreferrer">https://en.wikipedia.org/wiki/Quartic_function</a>). Another reference (<a href="https://math.stackexchange.com/q/49141">Galois theory: splitting field of cubic as a vector space</a>). </p> Answer 73: <p>No. $a,b,c,d$ are <a href="https://en.wikipedia.org/wiki/Symmetric_polynomial">symmetric polynomials</a> in $\alpha,\beta,\gamma$ and thus invariant under every permutation of $(\alpha,\beta,\gamma)$. The expressions you want are invariant under some permutations, but not all, and therefore cannot be expressed in terms of only $a,b,c,d$.</p> <p>Their sum is symmetric however, as @DougM's answer indicates.</p> <p><strong>Update:</strong> Their product too, and this results in a quadratic equation for the two expressions, as @ReinhardMeier's answer shows. 
Thus, the individual values of your desired expressions (call them $u$ and $v$) are not determined uniquely from $a,b,c,d$, but the set $\{u,v\}$ is.</p> Answer 74: <p>$\frac {b}{a} = -\alpha - \beta - \gamma\\ \frac {c}{a} = \alpha \beta + \beta\gamma + \gamma\alpha\\ \frac {d}{a} = -\alpha \beta\gamma\\ -\frac {b}{a}\frac {c}{a} + 3\frac {d}{a} = (\alpha^2 \beta + \beta^2\gamma + \gamma^2\alpha)+(\alpha \beta^2 + \beta\gamma^2 + \gamma\alpha^2)$</p> <p>I don't know if I can do much more that that.</p> Answer 75: <p>There can be no such formula. The coefficients $a$, $b$, $c$ and $d$ are <a href="https://en.wikipedia.org/wiki/Symmetric_polynomial">symmetric functions</a> of the roots $\alpha$, $\beta$ and $\gamma$. Your polynomials $\alpha^2 \beta+\beta^2 \gamma+ \gamma^2\alpha$ and $\alpha \beta^2+\beta \gamma^2+\gamma\alpha^2$ are not symmetric functions, so they cannot be expressed as functions of the coefficients (since any such function would be symmetric).</p> Answer 76: <p>Let $u=\alpha^2\beta+\beta^2\gamma+\gamma^2\alpha$ and $v=\alpha\beta^2+\beta\gamma^2+\gamma\alpha^2$. 
We have the following relations between the elementary symmetric polynomials $e_1$, $e_2$ and $e_3$ of $\alpha$, $\beta$ and $\gamma$ and the coefficients of $P_3$: $$ e_1 = \alpha+\beta+\gamma = -\frac{b}{a} \\ e_2 = \alpha\beta+\beta\gamma+\gamma\alpha = +\frac{c}{a} \\ e_3 = \alpha\beta\gamma = -\frac{d}{a} $$ We find \begin{eqnarray*} p &amp; := u+v = &amp; e_1e_2-3e_3 \\ q &amp; := uv = &amp; e_1^3e_3-6e_1e_2e_3+e_2^3+9e_3^2 \end{eqnarray*} From this, you can obtain $u$ and $v$ solving $t^2-pt+q=0$.</p> Answer 77: <p>Let </p> <p>$$\cases{E(\alpha,\beta,\gamma):=\alpha^2 \beta+\beta^2 \gamma+ \gamma^2\alpha\\F(\alpha,\beta,\gamma):= \alpha \beta^2+\beta \gamma^2+\gamma\alpha^2}$$</p> <p>Your problem, strictly speaking, has no solution (but we will see hereafter that we can somewhat "bypass" this issue):</p> <p>Here is a counterexample: Consider $P_3(x)=x(x^2-1)$ with roots $\{0,1,-1\}$.</p> <p>Expression $E$ can take two different values depending on the order in which the roots are taken : $$E(\alpha,\beta,\gamma)=\begin{cases}-1 \ &amp; \text{if} \ \ \alpha=1,\beta=-1,\gamma=0\\ \ \ \ 1 \ &amp; \text{if} \ \ \alpha=-1,\beta=1,\gamma=0\end{cases}.$$</p> <p>As said in the other answers, the fundamental reason is that neither $E$ nor $F$ is symmetrical in $\alpha, \beta, \gamma$, thus not invariant for certain permutations of the roots. </p> <p>Of course, if you take $E+F$, you get a symmetrical polynomial in $\alpha, \beta, \gamma.$ </p> <p>But a slightly deeper analysis will be rewarding. 
In fact, <strong>expressions $E$ and $F$ are conjugate</strong> in a sense that will become clear, once we have seen the two following examples:</p> <ul> <li><p>taking $P_3(x)=x^3+x^2-x$, value $E(\alpha,\beta,\gamma)=\Phi\approx1.618$ is taken 3 times, and value $1-\Phi\approx-0.618$ is taken as well 3 times ($\Phi$ is the golden ratio).</p></li> <li><p>taking $P_3(x)=x^3+x^2+x$, (with some complex roots), value $-\frac12+i\frac{\sqrt{3}}{2}$ is taken 3 times, and conjugate value (with the meaning of "conjugation in $\mathbb{C}$"): $-\frac12-i\frac{\sqrt{3}}{2}$ is taken 3 times.</p></li> </ul> <p>Let us prove this "conjugation" by inspecting the set $S_3$ of $3!=6$ permutations of the roots $\alpha,\beta,\gamma$:</p> <p>$$\binom{\alpha \ \beta \ \gamma}{\alpha \ \beta \ \gamma}: \ \ E(\alpha,\beta,\gamma)=E(\alpha,\beta,\gamma)$$</p> <p>$$\binom{\alpha \ \beta \ \gamma}{\beta \ \gamma \ \alpha}: \ \ E(\beta,\gamma,\alpha)=E(\alpha,\beta,\gamma)$$</p> <p>$$\binom{\alpha \ \beta \ \gamma}{\gamma \ \alpha \ \beta}: \ \ E(\gamma,\alpha,\beta)=E(\alpha,\beta,\gamma)$$</p> <p>$$\binom{\alpha \ \beta \ \gamma}{\beta \ \alpha \ \gamma}: \ \ E(\beta,\alpha,\gamma)=F(\alpha,\beta,\gamma)$$</p> <p>$$\binom{\alpha \ \beta \ \gamma}{\beta \ \gamma \ \alpha}: \ \ E(\beta,\gamma ,\alpha)=F(\alpha,\beta,\gamma)$$</p> <p>$$\binom{\alpha \ \beta \ \gamma}{\gamma\ \beta \ \alpha}: \ \ E(\alpha,\beta,\gamma)=F(\alpha,\beta,\gamma)$$</p> <p>We have thus proved that, once a permutation is done on the roots, the value taken by $E$ is either unchanged (3 cases) or is the value of $F$ (the 3 other cases). 
As said by @CCorn in his answer, it is the mathematical object $\{E,F\}$ which is invariant under the action of $S_3$.</p> <p>Remark: The first 3 cases correspond to a circular permutation: it is evident that $E$ has been built so that it is invariant by this type of permutation.</p> <p><strong>Addendum:</strong> </p> <p>This is a little window on the origin of the so-called "Galois theory".</p> <p>Have a look at the very pedagogical presentation <a href="http://www-math.ucdenver.edu/~wcherowi/courses/m4010/polynom3.pdf" rel="nofollow noreferrer">here</a> where it is explained how the ideas of Lagrange for solving the quartic equation where based on polynomials (called resolvents) that, under permutation, could take only a very small number of values. See also (<a href="https://en.wikipedia.org/wiki/Quartic_function" rel="nofollow noreferrer">https://en.wikipedia.org/wiki/Quartic_function</a>). Another reference (<a href="https://math.stackexchange.com/q/49141">Galois theory: splitting field of cubic as a vector space</a>). </p> Answer 78: <p>No. $a,b,c,d$ are <a href="https://en.wikipedia.org/wiki/Symmetric_polynomial">symmetric polynomials</a> in $\alpha,\beta,\gamma$ and thus invariant under every permutation of $(\alpha,\beta,\gamma)$. The expressions you want are invariant under some permutations, but not all, and therefore cannot be expressed in terms of only $a,b,c,d$.</p> <p>Their sum is symmetric however, as @DougM's answer indicates.</p> <p><strong>Update:</strong> Their product too, and this results in a quadratic equation for the two expressions, as @ReinhardMeier's answer shows. 
Thus, the individual values of your desired expressions (call them $u$ and $v$) are not determined uniquely from $a,b,c,d$, but the set $\{u,v\}$ is.</p> Answer 79: <p>$\frac {b}{a} = -\alpha - \beta - \gamma\\ \frac {c}{a} = \alpha \beta + \beta\gamma + \gamma\alpha\\ \frac {d}{a} = -\alpha \beta\gamma\\ -\frac {b}{a}\frac {c}{a} + 3\frac {d}{a} = (\alpha^2 \beta + \beta^2\gamma + \gamma^2\alpha)+(\alpha \beta^2 + \beta\gamma^2 + \gamma\alpha^2)$</p> <p>I don't know if I can do much more that that.</p> Answer 80: <p>There can be no such formula. The coefficients $a$, $b$, $c$ and $d$ are <a href="https://en.wikipedia.org/wiki/Symmetric_polynomial">symmetric functions</a> of the roots $\alpha$, $\beta$ and $\gamma$. Your polynomials $\alpha^2 \beta+\beta^2 \gamma+ \gamma^2\alpha$ and $\alpha \beta^2+\beta \gamma^2+\gamma\alpha^2$ are not symmetric functions, so they cannot be expressed as functions of the coefficients (since any such function would be symmetric).</p> Answer 81: <p>Let $u=\alpha^2\beta+\beta^2\gamma+\gamma^2\alpha$ and $v=\alpha\beta^2+\beta\gamma^2+\gamma\alpha^2$. 
We have the following relations between the elementary symmetric polynomials $e_1$, $e_2$ and $e_3$ of $\alpha$, $\beta$ and $\gamma$ and the coefficients of $P_3$: $$ e_1 = \alpha+\beta+\gamma = -\frac{b}{a} \\ e_2 = \alpha\beta+\beta\gamma+\gamma\alpha = +\frac{c}{a} \\ e_3 = \alpha\beta\gamma = -\frac{d}{a} $$ We find \begin{eqnarray*} p &amp; := u+v = &amp; e_1e_2-3e_3 \\ q &amp; := uv = &amp; e_1^3e_3-6e_1e_2e_3+e_2^3+9e_3^2 \end{eqnarray*} From this, you can obtain $u$ and $v$ solving $t^2-pt+q=0$.</p> Answer 82: <p>Let </p> <p>$$\cases{E(\alpha,\beta,\gamma):=\alpha^2 \beta+\beta^2 \gamma+ \gamma^2\alpha\\F(\alpha,\beta,\gamma):= \alpha \beta^2+\beta \gamma^2+\gamma\alpha^2}$$</p> <p>Your problem, strictly speaking, has no solution (but we will see hereafter that we can somewhat "bypass" this issue):</p> <p>Here is a counterexample: Consider $P_3(x)=x(x^2-1)$ with roots $\{0,1,-1\}$.</p> <p>Expression $E$ can take two different values depending on the order in which the roots are taken : $$E(\alpha,\beta,\gamma)=\begin{cases}-1 \ &amp; \text{if} \ \ \alpha=1,\beta=-1,\gamma=0\\ \ \ \ 1 \ &amp; \text{if} \ \ \alpha=-1,\beta=1,\gamma=0\end{cases}.$$</p> <p>As said in the other answers, the fundamental reason is that neither $E$ nor $F$ is symmetrical in $\alpha, \beta, \gamma$, thus not invariant for certain permutations of the roots. </p> <p>Of course, if you take $E+F$, you get a symmetrical polynomial in $\alpha, \beta, \gamma.$ </p> <p>But a slightly deeper analysis will be rewarding. 
In fact, <strong>expressions $E$ and $F$ are conjugate</strong> in a sense that will become clear, once we have seen the two following examples:</p> <ul> <li><p>taking $P_3(x)=x^3+x^2-x$, value $E(\alpha,\beta,\gamma)=\Phi\approx1.618$ is taken 3 times, and value $1-\Phi\approx-0.618$ is taken as well 3 times ($\Phi$ is the golden ratio).</p></li> <li><p>taking $P_3(x)=x^3+x^2+x$, (with some complex roots), value $-\frac12+i\frac{\sqrt{3}}{2}$ is taken 3 times, and conjugate value (with the meaning of "conjugation in $\mathbb{C}$"): $-\frac12-i\frac{\sqrt{3}}{2}$ is taken 3 times.</p></li> </ul> <p>Let us prove this "conjugation" by inspecting the set $S_3$ of $3!=6$ permutations of the roots $\alpha,\beta,\gamma$:</p> <p>$$\binom{\alpha \ \beta \ \gamma}{\alpha \ \beta \ \gamma}: \ \ E(\alpha,\beta,\gamma)=E(\alpha,\beta,\gamma)$$</p> <p>$$\binom{\alpha \ \beta \ \gamma}{\beta \ \gamma \ \alpha}: \ \ E(\beta,\gamma,\alpha)=E(\alpha,\beta,\gamma)$$</p> <p>$$\binom{\alpha \ \beta \ \gamma}{\gamma \ \alpha \ \beta}: \ \ E(\gamma,\alpha,\beta)=E(\alpha,\beta,\gamma)$$</p> <p>$$\binom{\alpha \ \beta \ \gamma}{\beta \ \alpha \ \gamma}: \ \ E(\beta,\alpha,\gamma)=F(\alpha,\beta,\gamma)$$</p> <p>$$\binom{\alpha \ \beta \ \gamma}{\alpha \ \gamma \ \beta}: \ \ E(\alpha,\gamma,\beta)=F(\alpha,\beta,\gamma)$$</p> <p>$$\binom{\alpha \ \beta \ \gamma}{\gamma\ \beta \ \alpha}: \ \ E(\gamma,\beta,\alpha)=F(\alpha,\beta,\gamma)$$</p> <p>We have thus proved that, once a permutation is done on the roots, the value taken by $E$ is either unchanged (the 3 cyclic cases) or is the value of $F$ (the 3 transposition cases).
As said by @CCorn in his answer, it is the mathematical object $\{E,F\}$ which is invariant under the action of $S_3$.</p> <p>Remark: The first 3 cases correspond to a circular permutation: it is evident that $E$ has been built so that it is invariant by this type of permutation.</p> <p><strong>Addendum:</strong> </p> <p>This is a little window on the origin of the so-called "Galois theory".</p> <p>Have a look at the very pedagogical presentation <a href="http://www-math.ucdenver.edu/~wcherowi/courses/m4010/polynom3.pdf" rel="nofollow noreferrer">here</a> where it is explained how the ideas of Lagrange for solving the quartic equation were based on polynomials (called resolvents) that, under permutation, could take only a very small number of values. See also (<a href="https://en.wikipedia.org/wiki/Quartic_function" rel="nofollow noreferrer">https://en.wikipedia.org/wiki/Quartic_function</a>). Another reference (<a href="https://math.stackexchange.com/q/49141">Galois theory: splitting field of cubic as a vector space</a>). </p> Answer 83: <p>No. $a,b,c,d$ are <a href="https://en.wikipedia.org/wiki/Symmetric_polynomial">symmetric polynomials</a> in $\alpha,\beta,\gamma$ and thus invariant under every permutation of $(\alpha,\beta,\gamma)$. The expressions you want are invariant under some permutations, but not all, and therefore cannot be expressed in terms of only $a,b,c,d$.</p> <p>Their sum is symmetric however, as @DougM's answer indicates.</p> <p><strong>Update:</strong> Their product too, and this results in a quadratic equation for the two expressions, as @ReinhardMeier's answer shows.
Thus, the individual values of your desired expressions (call them $u$ and $v$) are not determined uniquely from $a,b,c,d$, but the set $\{u,v\}$ is.</p>
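The relations derived above ($u+v=e_1e_2-3e_3$ and $uv=e_1^3e_3-6e_1e_2e_3+e_2^3+9e_3^2$) can be sanity-checked numerically. The sketch below is illustrative only and not taken from any of the answers; the helper name `uv_from_coeffs` is made up. It recovers the unordered pair $\{u,v\}$ directly from the coefficients by solving $t^2-pt+q=0$, without ever ordering the roots:

```python
import cmath

def uv_from_coeffs(a, b, c, d):
    # Vieta: elementary symmetric polynomials of the roots of a*x^3 + b*x^2 + c*x + d
    e1, e2, e3 = -b / a, c / a, -d / a
    p = e1 * e2 - 3 * e3                                    # u + v
    q = e1**3 * e3 - 6 * e1 * e2 * e3 + e2**3 + 9 * e3**2   # u * v
    r = cmath.sqrt(p * p - 4 * q)
    # the unordered pair {u, v}; real numbers when the discriminant is >= 0
    return tuple(t.real if abs(t.imag) < 1e-12 else t
                 for t in ((p - r) / 2, (p + r) / 2))

# roots 1, 2, 3 (x^3 - 6x^2 + 11x - 6): u = 23, v = 25
print(uv_from_coeffs(1, -6, 11, -6))   # -> (23.0, 25.0)
# the counterexample x^3 - x with roots {0, 1, -1}: the pair is {-1, 1}
print(uv_from_coeffs(1, 0, -1, 0))     # -> (-1.0, 1.0)
```

The second call reproduces the $\pm1$ values of the counterexample above: the formula cannot tell $u$ from $v$, but it does pin down the set.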
A known closed form for Borchardt mean (generalization of AGM) - why doesn&#39;t it work?
2,117,142
9
https://math.stackexchange.com/questions/2117142/a-known-closed-form-for-borchardt-mean-generalization-of-agm-why-doesnt-it
<p>There is a curious four parameter iteration introduced by Borchardt:</p> <p>$$a_{n+1}=\frac{a_n+b_n+c_n+d_n}{4} \\ b_{n+1}=\frac{\sqrt{a_n b_n}+\sqrt{c_n d_n}}{2} \\ c_{n+1}=\frac{\sqrt{a_n c_n}+\sqrt{b_n d_n}}{2} \\ d_{n+1}=\frac{\sqrt{a_n d_n}+\sqrt{b_n c_n}}{2} $$</p> <p>Apparently, the limit of this iteration, denoted hereafter $B(a_0,b_0,c_0,d_0)$ has a closed form in terms of a certain double integral. However, this 'closed form' doesn't check out when I try implementing it in Mathematica.</p> <p>For a complete description see <a href="https://www.math.washington.edu/~morrow/papers/nate-thesis.pdf" rel="noreferrer">this paper</a>, section 2.5. For more general information see <a href="http://link.springer.com/article/10.1007/s11139-011-9353-7" rel="noreferrer">this paper</a>.</p> <p>Here is the description of the closed form from the first link above. First, define:</p> <p>$$A = a_0 + b_0 + c_0 + d_0 \\ B = a_0 + b_0 - c_0 - d_0 \\ C = a_0 - b_0 + c_0 - d_0 \\ D = a_0 - b_0 - c_0 + d_0$$</p> <p>(Note the problem - some of the numbers above can be negative).</p> <p>Then define:</p> <p>$$B_1=\frac{\sqrt{A B}+\sqrt{C D}}{2} \\ B_2=\frac{\sqrt{A B}-\sqrt{C D}}{2} \\ C_1=\frac{\sqrt{A C}+\sqrt{B D}}{2} \\ C_2=\frac{\sqrt{A C}-\sqrt{B D}}{2} \\ D_1=\frac{\sqrt{A D}+\sqrt{B C}}{2} \\ D_2=\frac{\sqrt{A D}-\sqrt{B C}}{2} $$</p> <p>(Note the problem - some of the numbers above can be complex).</p> <p>Then define:</p> <p>$$\Delta=\sqrt[4]{ABCDB_1C_1D_1B_2C_2D_2}$$</p> <p>And finally, the main parameters:</p> <p>$$\alpha_0=\frac{A C B_1}{\Delta} \\ \alpha_1=\frac{C C_1 D_1}{\Delta} \\ \alpha_2=\frac{A C_2 D_1}{\Delta} \\ \alpha_3=\frac{B_1 C_1 C_2}{\Delta}$$</p> <p>And define a function:</p> <p>$$R(x):=x(x-\alpha_0)(x-\alpha_1)(x-\alpha_2)(x-\alpha_3)$$</p> <p>According to the paper, the limit of the iteration above is equal to:</p> <p>$$\frac{\pi^2}{B(a_0,b_0,c_0,d_0)}=\int _0^{\alpha_3}\int _{\alpha_2}^{\alpha_1}\frac{x-y}{\sqrt{R(x) R(y)}}dxdy \tag{1}$$</p> <p>I 
tried to implement this 'closed form' in Mathematica. However, for most initial conditions, even simple ones, Mathematica has trouble computing the integral and gives complex values anyway.</p> <blockquote> <p>How can I tell if $(1)$ is correct? Is there a typo somewhere in my formulas? Or maybe there is some typo in the linked paper?</p> </blockquote> <p>I want to know if $(1)$ is correct, and if not what is the correct form. I don't need the full proof. I tried to find Borchardt's original paper, but I couldn't. And it's not in English anyway.</p>
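As a baseline for testing the claimed closed form, the four-term iteration itself is straightforward to implement. Below is a minimal sketch (the function name and stopping tolerance are my own choices, not from the papers):

```python
import math

def borchardt(a, b, c, d, tol=1e-15):
    """Iterate Borchardt's four-term mean until the four values coincide."""
    while max(a, b, c, d) - min(a, b, c, d) > tol * max(a, b, c, d):
        a, b, c, d = ((a + b + c + d) / 4,
                      (math.sqrt(a * b) + math.sqrt(c * d)) / 2,
                      (math.sqrt(a * c) + math.sqrt(b * d)) / 2,
                      (math.sqrt(a * d) + math.sqrt(b * c)) / 2)
    return a

# all four arguments equal gives the common value back
print(borchardt(2.0, 2.0, 2.0, 2.0))  # -> 2.0

# with b = c = d the update collapses to the two-term iteration
# (a, b) <- ((a + 3b)/4, (sqrt(a*b) + b)/2) studied by Borwein & Borwein
print(round(borchardt(4.0, 1.0, 1.0, 1.0), 6))
```

The limit always lies between the smallest and largest starting value, which gives a cheap plausibility check against any candidate evaluation of the double integral in $(1)$.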
[ { "answer_id": 2655876, "score": 2, "text": "<p>Ok, my answer got a little bit out of hand, but I hope it helps to figure out what the problem is.\nIn my opinion the problems are mainly due to typos that are carried over.\nMy answer is based on two articles.\nOne is the original article of Borchardt, where the link was provided in Ewan's answer.</p>\n\n<p>[1] Borchardt, K. W. (1876): Hr. Borchardt las über das arithmetisch-geometrische Mittel aus vier Elementen. Monatsberichte der Königlich Preußischen Akademie der Wissenschaften zu Berlin (Berlin: Verl. d. Kgl. Akad. d. Wiss., 1877), Berlin. (<a href=\"http://bibliothek.bbaw.de/bibliothek-digital/digitalequellen/schriften/anzeige/index_html?band=09-mon/1876&amp;seite:int=646\" rel=\"nofollow noreferrer\">http://bibliothek.bbaw.de/bibliothek-digital/digitalequellen/schriften/anzeige/index_html?band=09-mon/1876&amp;seite:int=646</a>)</p>\n\n<p>[2] J. M. Borwein and P. B. Borwein. On the Mean Iteration $(a,b) \\leftarrow \\left(\\frac{a+3b}{4},\\frac{\\sqrt{ab}+b}{2}\\right)$. Mathematics of Computation, 53(187):311–326, 1989. (<a href=\"https://pdfs.semanticscholar.org/5ead/200192d495859702eb47bc54babb20ab75ae.pdf\" rel=\"nofollow noreferrer\">https://pdfs.semanticscholar.org/5ead/200192d495859702eb47bc54babb20ab75ae.pdf</a>)</p>\n\n<p>I didn't double check the mathematics (i.e. 
the reasoning and proofs) in [1] and [2], but I compared the relevant formulas and implemented them numerically.\nThis finally led to a solution that I think is numerically satisfying (but see below).</p>\n\n<p>Basically, [2] uses the same definitions as in the question, so I'll start from [1] and highlight the differences.</p>\n\n<p>The first important difference are the conditions on $a,b,c,e$.\nIn [1] (on top of page 612) the <em>real</em> numbers $a,b,c,e$ have to be such that (i) $a &gt; b &gt; c &gt; e &gt; 0$ and (ii) $a - b - c + e &gt; 0$.\nI will call this condition $(C)$.\nOn the other hand in [2] page 324 we have $a &gt; b &gt; c &gt; e &gt; 0$ such that $ae - bc &gt; 0$, and I'll call this condition $(C^*)$.</p>\n\n<p>The conditions $(C)$ and $(C^*)$ are clearly there to prevent problems with negative square roots etc., which should already solve some of the problems.\nHowever, it seems (numerically) that $(C^*)$ is the one that should finally be used (see below).</p>\n\n<p>From the original article [1], page 612, I take $a_0 = a, b_0 = b, c_0 = c, e_0 = e$ and\n\\begin{align}\na_{n+1} &amp;= (a_n+b_n+c_n+e_n)/4,\\\\\nb_{n+1} &amp;= (\\sqrt{a_nb_n}+\\sqrt{c_ne_n})/2,\\\\\nc_{n+1} &amp;= (\\sqrt{a_nc_n}+\\sqrt{b_ne_n})/2,\\\\\ne_{n+1} &amp;= (\\sqrt{a_ne_n}+\\sqrt{b_nc_n})/2,\n\\end{align}\nand I set $g = \\lim_{n \\to \\infty} a_n$ (the limits are all the same, so I pick wlog the one for $a_n$).</p>\n\n<p>From [1], page 617, I also take\n\\begin{align}\nA &amp;= a+b+c+e,\\\\\nB &amp;= a+b-c-e,\\\\\nC &amp;= a-b+c-e,\\\\\nE &amp;= a-b-c+e,\n\\end{align}\nand\n\\begin{align}\nB_1 &amp;= (\\sqrt{AB}+\\sqrt{CE})/2,\\\\\nB_2 &amp;= (\\sqrt{AB}-\\sqrt{CE})/2,\\\\\nC_1 &amp;= (\\sqrt{AC}+\\sqrt{BE})/2,\\\\\nC_2 &amp;= (\\sqrt{AC}-\\sqrt{BE})/2,\\\\\nE_1 &amp;= (\\sqrt{AE}+\\sqrt{BC})/2,\\\\\nE_2 &amp;= (\\sqrt{AE}-\\sqrt{BC})/2,\n\\end{align}\nwhere it is important to notice that [1] defines $E_1 = (\\sqrt{AE}-\\sqrt{BC})/2 = E_2$ which I believe is a typo 
and hence I switch the $-$ for a $+$.\nSo far we are in agreement with the definitions in the question and in [2], but considering the values of $\\alpha_i$, $i=0,1,2,3$ there are differences.\nFrom [1], page 618, I take\n\\begin{align}\n\\Delta = (abce B_1 C_1 E_1 B_2 C_2 E_2)^{1/4},\n\\end{align}\nwhich is different from the question or the definition in [2], see page 325, in that I use the starting values $a,b,c,e$ and not the derived values $A,B,C,E$ (as stated in [1]).\nThe values of $\\alpha_i$ also differ between [1] and [2].\nFrom [1], page 618, I take\n\\begin{align}\n\\alpha_0 &amp;= \\frac{acB_1}{\\Delta},\\\\\n\\alpha_1 &amp;= \\frac{c C_1 E_1}{\\Delta},\\\\\n\\alpha_2 &amp;= \\frac{a C_2 E_1}{\\Delta},\\\\\n\\alpha_3 &amp;= \\frac{B_1 C_1 C_2}{\\Delta},\n\\end{align}\nwhere again the starting values are used, and not $A,B,C,E$ as in the question or in [2], page 325.</p>\n\n<p>All sources however agree on the definition of $R(x)$ as\n\\begin{align}\nR(x) = x (x-\\alpha_0)(x-\\alpha_1)(x-\\alpha_2)(x-\\alpha_3).\n\\end{align}</p>\n\n<p>Now [1], page 618, states that \n\\begin{align}\n4 \\frac{\\pi^2}{g} = \\int_{0}^{\\alpha_3}\\int_{\\alpha_2}^{\\alpha_1} \\frac{x-y}{\\sqrt{R(x)R(y)}} dx dy,\n\\end{align}\nwhere it is important to notice the factor $4$ that is missing in [2] and in the question.</p>\n\n<p>My numerical experiments show that neither [1] nor [2] is correct.\nIn fact I think the following statement should hold:</p>\n\n<p><strong>Theorem:</strong>\nFor $a &gt; b &gt; c &gt; e &gt; 0$ such that $ae - bc &gt; 0$ we have\n\\begin{align}\n\\frac{\\pi^2}{g} = \\int_{0}^{\\alpha_3}\\int_{\\alpha_2}^{\\alpha_1} \\frac{x-y}{\\sqrt{R(x)R(y)}} dx dy,\n\\end{align}\nwhere $g = \\lim_{n \\to \\infty} a_n$ and $\\alpha_i$, $i=0,1,2,3$ is defined as in [1], i.e., as above.</p>\n\n<p>It is important to notice that the statement</p>\n\n<ol>\n<li>uses condition $(C^*)$ from [2], page 324,</li>\n<li>uses the definition from [1] regarding $\\Delta$ and
$\\alpha_i$, $i=0,1,2,3$,</li>\n<li>and does not include the factor $4$ which is present in [1] but not in [2].</li>\n</ol>\n\n<p>In summary, it's a wild mix of [1] and [2]...</p>\n\n<p>I got to this basically following [1] and then using $(C^*)$ instead of $(C)$ and realizing that the value was $4$ times too high.\nYou can double check this with your favorite numerical software tool, I have added my R implementation below.\nFinally, I have to say that the values don't match perfectly.\nThe numerical integration of the rational function seems to be difficult, but the numbers are always (for random starting values obeying $(C^*)$) close (enough) together in my opinion.\nAs long as I enforce $(C^*)$ I also don't experience any trouble with negative square roots or the like (but this is possible if $(C)$ is used).</p>\n\n<hr>\n\n<h2>R Implementation</h2>\n\n<pre><code>#non-random starting values that work with Borwein &amp; Borwein page 324 Theorem 3 condition\na &lt;- 7\nb &lt;- 4\nc &lt;- 3\ne &lt;- 2\n\n#get some random starting values that are OK\ngoodStart &lt;- FALSE \nwhile (goodStart==FALSE){\n\n X &lt;- sort(rexp(4,rate=20))\n\n #Borwein &amp; Borwein page 324 Theorem 3 condition\n if ( X[4]*X[1] - X[2]*X[3] &gt; 0){\n a &lt;- X[4]\n b &lt;- X[3]\n c &lt;- X[2]\n e &lt;- X[1]\n goodStart = TRUE\n }\n\n}\n\n#Borwein &amp; Borwein page 324 Theorem 3 condition\nif ( a*e - b*c &lt;= 0){stop(\"Borwein &amp; Borwein coefficient condition not ok\")}\n\n#Borchardt condition - seems to be wrong\n#if ( a - b - c + e &lt;= 0){stop(\"coefficient condition not ok\")}\n\na_i &lt;- a\nb_i &lt;- b\nc_i &lt;- c\ne_i &lt;- e\n\nfor (i in 1:1000) {\n\n tmp_a &lt;- (a_i+b_i+c_i+e_i)/4\n tmp_b &lt;- (sqrt(a_i*b_i) + sqrt(c_i*e_i))/2\n tmp_c &lt;- (sqrt(a_i*c_i) + sqrt(b_i*e_i))/2\n tmp_e &lt;- (sqrt(a_i*e_i) + sqrt(b_i*c_i))/2\n\n a_i &lt;- tmp_a\n b_i &lt;- tmp_b\n c_i &lt;- tmp_c\n e_i &lt;- tmp_e\n\n}\n\n#print(max(a_i,b_i,c_i,e_i)-min(a_i,b_i,c_i,e_i))\n\n#just pick any of 
the four\ng &lt;- a_i\n\n#compute closed form solution\nI1 &lt;- pi^2/g\nprint(\"sequence\")\nprint(I1)\n\n#page 617 bottom\nA &lt;- a + b + c + e\nB &lt;- a + b - c - e\nC &lt;- a - b + c - e\nE &lt;- a - b - c + e\n\nB1 &lt;- (sqrt(A*B)+sqrt(C*E))/2\nB2 &lt;- (sqrt(A*B)-sqrt(C*E))/2\nC1 &lt;- (sqrt(A*C)+sqrt(B*E))/2\nC2 &lt;- (sqrt(A*C)-sqrt(B*E))/2\nE1 &lt;- (sqrt(A*E)+sqrt(B*C))/2 #this seems to be wrong in the original\nE2 &lt;- (sqrt(A*E)-sqrt(B*C))/2\n\nDelta &lt;- (a*b*c*e*B1*C1*E1*B2*C2*E2)^(1/4)\nalpha0 &lt;- a*c*B1/Delta\nalpha1 &lt;- c*C1*E1/Delta\nalpha2 &lt;- a*C2*E1/Delta\nalpha3 &lt;- B1*C1*C2/Delta\n\nintegrand &lt;- function(x,y,param){\n\n alpha0 &lt;- param$alpha0\n alpha1 &lt;- param$alpha1\n alpha2 &lt;- param$alpha2\n alpha3 &lt;- param$alpha3\n\n Rx &lt;- x*(x-alpha0)*(x-alpha1)*(x-alpha2)*(x-alpha3)\n Ry &lt;- y*(y-alpha0)*(y-alpha1)*(y-alpha2)*(y-alpha3)\n\n z &lt;- (x-y) / sqrt(Rx*Ry)\n\n return(z)\n\n}\n\np &lt;- list(alpha0=alpha0,alpha1=alpha1,alpha2=alpha2,alpha3=alpha3)\n\nlibrary(statmod)\n\ngauss &lt;- gauss.quad(500,kind=\"legendre\")\nw &lt;- gauss$weights\nx &lt;- gauss$nodes\n\nupper_x &lt;- alpha1\nlower_x &lt;- alpha2\nupper_y &lt;- alpha3\nlower_y &lt;- 0\n\nI &lt;- 0\nfor (i in 1:length(x)) {\n for (j in 1:length(x)) {\n\n xx &lt;- (upper_x-lower_x)*x[i]/2 + (upper_x+lower_x)/2\n yy &lt;- (upper_y-lower_y)*x[j]/2 + (upper_y+lower_y)/2\n ww &lt;- w[i]*w[j]\n\n I &lt;- I + ww*integrand(xx,yy,p)\n\n }\n\n}\n\nI &lt;- (upper_x-lower_x)*(upper_y-lower_y)*I/4\n\nprint(\"numerical integral:\")\nprint(I)\n</code></pre>\n" }, { "answer_id": 2654571, "score": 1, "text": "<p>Here are just a few remarks.</p>\n\n<p>Borchardt's original article (referred to as Berl. Monatsber.
, pages 611–621) is fully available (in German) at <a href=\"http://bibliothek.bbaw.de/bibliothek-digital/digitalequellen/schriften/anzeige/index_html?band=09-mon/1876&amp;seite:int=647\" rel=\"nofollow noreferrer\">http://bibliothek.bbaw.de/bibliothek-digital/digitalequellen/schriften/anzeige/index_html?band=09-mon/1876&amp;seite:int=647</a>.</p>\n\n<p>However, this article itself uses work already done in a longer older article (where most of the hard computations are done), referred to as Mem. Acad. Berlin, 1878, pp. 33-96, and also fully available online at <a href=\"http://bibliothek.bbaw.de/bbaw/bibliothek-digital/digitalequellen/schriften/anzeige/index_html?band=07-abh/1878&amp;seite:int=93\" rel=\"nofollow noreferrer\">http://bibliothek.bbaw.de/bbaw/bibliothek-digital/digitalequellen/schriften/anzeige/index_html?band=07-abh/1878&amp;seite:int=93</a>.</p>\n\n<p>Your square-root of negative number problem can be solved very simply by starting from $1$ (or any nonzero index) instead of $0$.
Because of Borchardt's identity \n$$\na_{n+1}+\\varepsilon b_{n+1}+ \\varepsilon' c_{n+1}+\\varepsilon \\varepsilon'd_{n+1}=\\frac{(\\sqrt{a_n}+\\varepsilon \\sqrt{b_n}+ \\varepsilon'\\sqrt{c_n}+\\varepsilon \\varepsilon'\\sqrt{d_n})^2}{4} \\tag{1}\n$$</p>\n\n<p>for any two signs $\\varepsilon,\\varepsilon'$ we have for $n\\geq 1$,\n$$a_{n+1}+\\varepsilon b_{n+1}+ \\varepsilon ' c_{n+1}+\\varepsilon \\varepsilon 'd_{n+1} \\geq 0 \\tag{2}$$ </p>\n\n<p>So if you use any $a_1,b_1,c_1,d_1$ instead of $a_0,b_0,c_0,d_0$ to define the $B,C,\\alpha$ constants, you will get only real and positive numbers.</p>\n" }, { "answer_id": 2655876, "score": 2, "text": "<p>Ok, my answer got a little bit out of hand, but I hope it helps to figure out what the problem is.\nIn my opinion the problems are mainly due to typos that are carried over.\nMy answer is based on two articles.\nOne is the original article of Borchardt, where the link was provided in Ewan's answer.</p>\n\n<p>[1] Borchardt, K. W. (1876): Hr. Borchardt las über das arithmetisch-geometrische Mittel aus vier Elementen. Monatsberichte der Königlich Preußischen Akademie der Wissenschaften zu Berlin (Berlin: Verl. d. Kgl. Akad. d. Wiss., 1877), Berlin. (<a href=\"http://bibliothek.bbaw.de/bibliothek-digital/digitalequellen/schriften/anzeige/index_html?band=09-mon/1876&amp;seite:int=646\" rel=\"nofollow noreferrer\">http://bibliothek.bbaw.de/bibliothek-digital/digitalequellen/schriften/anzeige/index_html?band=09-mon/1876&amp;seite:int=646</a>)</p>\n\n<p>[2] J. M. Borwein and P. B. Borwein. On the Mean Iteration $(a,b) \\leftarrow \\left(\\frac{a+3b}{4},\\frac{\\sqrt{ab}+b}{2}\\right)$. Mathematics of Computation, 53(187):311–326, 1989. (<a href=\"https://pdfs.semanticscholar.org/5ead/200192d495859702eb47bc54babb20ab75ae.pdf\" rel=\"nofollow noreferrer\">https://pdfs.semanticscholar.org/5ead/200192d495859702eb47bc54babb20ab75ae.pdf</a>)</p>\n\n<p>I didn't double check the mathematics (i.e. 
the reasoning and proofs) in [1] and [2], but I compared the relevant formulas and implemented them numerically.\nThis finally led to a solution that I think is numerically satisfying (but see below).</p>\n\n<p>Basically, [2] uses the same definitions as in the question, so I'll start from [1] and highlight the differences.</p>\n\n<p>The first important difference are the conditions on $a,b,c,e$.\nIn [1] (on top of page 612) the <em>real</em> numbers $a,b,c,e$ have to be such that (i) $a &gt; b &gt; c &gt; e &gt; 0$ and (ii) $a - b - c + e &gt; 0$.\nI will call this condition $(C)$.\nOn the other hand in [2] page 324 we have $a &gt; b &gt; c &gt; e &gt; 0$ such that $ae - bc &gt; 0$, and I'll call this condition $(C^*)$.</p>\n\n<p>The conditions $(C)$ and $(C^*)$ are clearly there to prevent problems with negative square roots etc., which should already solve some of the problems.\nHowever, it seems (numerically) that $(C^*)$ is the one that should finally be used (see below).</p>\n\n<p>From the original article [1], page 612, I take $a_0 = a, b_0 = b, c_0 = c, e_0 = e$ and\n\\begin{align}\na_{n+1} &amp;= (a_n+b_n+c_n+e_n)/4,\\\\\nb_{n+1} &amp;= (\\sqrt{a_nb_n}+\\sqrt{c_ne_n})/2,\\\\\nc_{n+1} &amp;= (\\sqrt{a_nc_n}+\\sqrt{b_ne_n})/2,\\\\\ne_{n+1} &amp;= (\\sqrt{a_ne_n}+\\sqrt{b_nc_n})/2,\n\\end{align}\nand I set $g = \\lim_{n \\to \\infty} a_n$ (the limits are all the same, so I pick wlog the one for $a_n$).</p>\n\n<p>From [1], page 617, I also take\n\\begin{align}\nA &amp;= a+b+c+e,\\\\\nB &amp;= a+b-c-e,\\\\\nC &amp;= a-b+c-e,\\\\\nE &amp;= a-b-c+e,\n\\end{align}\nand\n\\begin{align}\nB_1 &amp;= (\\sqrt{AB}+\\sqrt{CE})/2,\\\\\nB_2 &amp;= (\\sqrt{AB}-\\sqrt{CE})/2,\\\\\nC_1 &amp;= (\\sqrt{AC}+\\sqrt{BE})/2,\\\\\nC_2 &amp;= (\\sqrt{AC}-\\sqrt{BE})/2,\\\\\nE_1 &amp;= (\\sqrt{AE}+\\sqrt{BC})/2,\\\\\nE_2 &amp;= (\\sqrt{AE}-\\sqrt{BC})/2,\n\\end{align}\nwhere it is important to notice that [1] defines $E_1 = (\\sqrt{AE}-\\sqrt{BC})/2 = E_2$ which I believe is a typo 
and hence I switch the $-$ for a $+$.\nSo far we are in agreement with the definitions in the question and in [2], but considering the values of $\\alpha_i$, $i=0,1,2,3$ there are differences.\nFrom [1], page 618, I take\n\\begin{align}\n\\Delta = (abce B_1 C_1 E_1 B_2 C_2 E_2)^{1/4},\n\\end{align}\nwhich is different from the question or the definition in [2], see page 325, in that I use the starting values $a,b,c,e$ and not the derived values $A,B,C,E$ (as stated in [1]).\nThe values of $\\alpha_i$ also differ between [1] and [2].\nFrom [1], page 618, I take\n\\begin{align}\n\\alpha_0 &amp;= \\frac{acB_1}{\\Delta},\\\\\n\\alpha_1 &amp;= \\frac{c C_1 E_1}{\\Delta},\\\\\n\\alpha_2 &amp;= \\frac{a C_2 E_1}{\\Delta},\\\\\n\\alpha_3 &amp;= \\frac{B_1 C_1 C_2}{\\Delta},\n\\end{align}\nwhere again the starting values are used, and not $A,B,C,E$ as in the question or in [3], page 325.</p>\n\n<p>All sources however agree on the definition of $R(x)$ as\n\\begin{align}\nR(x) = x (x-\\alpha_0)(x-\\alpha_1)(x-\\alpha_2)(x-\\alpha_3).\n\\end{align}</p>\n\n<p>Now [1], page 618, states that \n\\begin{align}\n4 \\frac{\\pi^2}{g} = \\int_{0}^{\\alpha_3}\\int_{\\alpha_2}^{\\alpha_1} \\frac{x-y}{\\sqrt{R(x)R(y)}} dx dy,\n\\end{align}\nwhere it is important to notice the factor $4$ that is missing in [2] and in the question.</p>\n\n<p>My numerical experiments show that neither [1] nor [2] is correct.\nIn fact I think the following statement should hold:</p>\n\n<p><strong>Theorem:</strong>\nFor $a &gt; b &gt; c &gt; e &gt; 0$ such that $ae - bc &gt; 0$ we have\n\\begin{align}\n\\frac{\\pi^2}{g} = \\int_{0}^{\\alpha_3}\\int_{\\alpha_2}^{\\alpha_1} \\frac{x-y}{\\sqrt{R(x)R(y)}} dx dy,\n\\end{align}\nwhere $g = \\lim_{n \\to \\infty} a_n$ and $\\alpha_i$, $i=0,1,2,3$ is defined as in [1], i.e., as above.</p>\n\n<p>It is important to notice that the statement</p>\n\n<ol>\n<li>uses condition $(C^*)$ from [2], page 324,</li>\n<li>uses the definition from [1] regarding $\\Delta$ and 
$\\alpha_i$, $i=0,1,2,3$,</li>\n<li>and does not include the factor $4$ which is present in [1] but not in [2].</li>\n</ol>\n\n<p>In summary, it's a wild mix of [1] and [2]...</p>\n\n<p>I got to this basically following [1] and then using $(C^*)$ instead of $(C)$ and realizing that the value was $4$ times too high.\nYou can double check this with your favorite numerical software tool, I have added my R implementation below.\nFinally, I have to say that the values don't match perfectly.\nThe numerical integration of the rational function seems to be difficult, but the numbers are always (for random starting values obeying $(C^*)$) close (enough) together in my opinion.\nAs long as I enforce $(C^*)$ I also don't experience any trouble with negative square roots or the like (but this is possible if $(C)$ is used).</p>\n\n<hr>\n\n<h2>R Implementation</h2>\n\n<pre><code>#non-random starting values that work with Borwein &amp; Borwein page 324 Theorem 3 condition\na &lt;- 7\nb &lt;- 4\nc &lt;- 3\ne &lt;- 2\n\n#get some random starting values that are OK\ngoodStart &lt;- FALSE \nwhile (goodStart==FALSE){\n\n X &lt;- sort(rexp(4,rate=20))\n\n #Borwein &amp; Borwein page 324 Theorem 3 condition\n if ( X[4]*X[1] - X[2]*X[3] &gt; 0){\n a &lt;- X[4]\n b &lt;- X[3]\n c &lt;- X[2]\n e &lt;- X[1]\n goodStart = TRUE\n }\n\n}\n\n#Borwein &amp; Borwein page 324 Theorem 3 condition\nif ( a*e - b*c &lt;= 0){stop(\"Borwein &amp; Borwein coefficient condition not ok\")}\n\n#Borchardt condition - seems to be wrong\n#if ( a - b - c + e &lt;= 0){stop(\"coefficient condition not ok\")}\n\na_i &lt;- a\nb_i &lt;- b\nc_i &lt;- c\ne_i &lt;- e\n\nfor (i in 1:1000) {\n\n tmp_a &lt;- (a_i+b_i+c_i+e_i)/4\n tmp_b &lt;- (sqrt(a_i*b_i) + sqrt(c_i*e_i))/2\n tmp_c &lt;- (sqrt(a_i*c_i) + sqrt(b_i*e_i))/2\n tmp_e &lt;- (sqrt(a_i*e_i) + sqrt(b_i*c_i))/2\n\n a_i &lt;- tmp_a\n b_i &lt;- tmp_b\n c_i &lt;- tmp_c\n e_i &lt;- tmp_e\n\n}\n\n#print(max(a_i,b_i,c_i,e_i)-min(a_i,b_i,c_i,e_i))\n\n#just pick any of 
the four\ng &lt;- a_i\n\n#compute close form solution\nI1 &lt;- pi^2/g\nprint(\"sequence\")\nprint(I1)\n\n#page 617 bottom\nA &lt;- a + b + c + e\nB &lt;- a + b - c - e\nC &lt;- a - b + c - e\nE &lt;- a - b - c + e\n\nB1 &lt;- (sqrt(A*B)+sqrt(C*E))/2\nB2 &lt;- (sqrt(A*B)-sqrt(C*E))/2\nC1 &lt;- (sqrt(A*C)+sqrt(B*E))/2\nC2 &lt;- (sqrt(A*C)-sqrt(B*E))/2\nE1 &lt;- (sqrt(A*E)+sqrt(B*C))/2 #this seems to be wrong in the original\nE2 &lt;- (sqrt(A*E)-sqrt(B*C))/2\n\nDelta &lt;- (a*b*c*e*B1*C1*E1*B2*C2*E2)^(1/4)\nalpha0 &lt;- a*c*B1/Delta\nalpha1 &lt;- c*C1*E1/Delta\nalpha2 &lt;- a*C2*E1/Delta\nalpha3 &lt;- B1*C1*C2/Delta\n\nintegrand &lt;- function(x,y,param){\n\n alpha0 &lt;- param$alpha0\n alpha1 &lt;- param$alpha1\n alpha2 &lt;- param$alpha2\n alpha3 &lt;- param$alpha3\n\n Rx &lt;- x*(x-alpha0)*(x-alpha1)*(x-alpha2)*(x-alpha3)\n Ry &lt;- y*(y-alpha0)*(y-alpha1)*(y-alpha2)*(y-alpha3)\n\n z &lt;- (x-y) / sqrt(Rx*Ry)\n\n return(z)\n\n}\n\np &lt;- list(alpha0=alpha0,alpha1=alpha1,alpha2=alpha2,alpha3=alpha3)\n\nlibrary(statmod)\n\ngauss &lt;- gauss.quad(500,kind=\"legendre\")\nw &lt;- gauss$weights\nx &lt;- gauss$nodes\n\nupper_x &lt;- alpha1\nlower_x &lt;- alpha2\nupper_y &lt;- alpha3\nlower_y &lt;- 0\n\nI &lt;- 0\nfor (i in 1:length(x)) {\n for (j in 1:length(x)) {\n\n xx &lt;- (upper_x-lower_x)*x[i]/2 + (upper_x+lower_x)/2\n yy &lt;- (upper_y-lower_y)*x[j]/2 + (upper_y+lower_y)/2\n ww &lt;- w[i]*w[j]\n\n I &lt;- I + ww*integrand(xx,yy,p)\n\n }\n\n}\n\nI &lt;- (upper_x-lower_x)*(upper_y-lower_y)*I/4\n\nprint(\"numerical integral:\")\nprint(I)\n</code></pre>\n" }, { "answer_id": 2654571, "score": 1, "text": "<p>Here are just a few remarks.</p>\n\n<p>Borchart's original article (referred to as Berl. Monatsber. 
, pages 611–621) is fully available (in German) at <a href=\"http://bibliothek.bbaw.de/bibliothek-digital/digitalequellen/schriften/anzeige/index_html?band=09-mon/1876&amp;seite:int=647\" rel=\"nofollow noreferrer\">http://bibliothek.bbaw.de/bibliothek-digital/digitalequellen/schriften/anzeige/index_html?band=09-mon/1876&amp;seite:int=647</a>.</p>\n\n<p>However this article itself uses work already done in a longer older article (where most of the hard compatations are done), (referred to as (Mem. Acad. Berlin, 1878, pp. 33-96), and also fully available online at <a href=\"http://bibliothek.bbaw.de/bbaw/bibliothek-digital/digitalequellen/schriften/anzeige/index_html?band=07-abh/1878&amp;seite:int=93\" rel=\"nofollow noreferrer\">http://bibliothek.bbaw.de/bbaw/bibliothek-digital/digitalequellen/schriften/anzeige/index_html?band=07-abh/1878&amp;seite:int=93</a>.</p>\n\n<p>Your square-root of negative number problem can be solved very simply by starting from $1$ (or any nonzero index) instead of $0$. 
Because of Borchardt's identity \n$$\na_{n+1}+\\varepsilon b_{n+1}+ \\varepsilon' c_{n+1}+\\varepsilon \\varepsilon'd_{n+1}=\\frac{(\\sqrt{a_n}+\\varepsilon \\sqrt{b_n}+ \\varepsilon'\\sqrt{c_n}+\\varepsilon \\varepsilon'\\sqrt{d_n})^2}{4} \\tag{1}\n$$</p>\n\n<p>for any two signs $\\varepsilon,\\varepsilon'$ we have for $n\\geq 1$,\n$$a_{n+1}+\\varepsilon b_{n+1}+ \\varepsilon ' c_{n+1}+\\varepsilon \\varepsilon 'd_{n+1} \\geq 0 \\tag{2}$$ </p>\n\n<p>So if you use any $a_1,b_1,c_1,d_1$ instead of $a_0,b_0,c_0,d_0$ to define the $B,C,\\alpha$ constants, you will get only real and positive numbers.</p>\n" }, { "answer_id": 2655876, "score": 2, "text": "<p>Ok, my answer got a little bit out of hand, but I hope it helps to figure out what the problem is.\nIn my opinion the problems are mainly due to typos that are carried over.\nMy answer is based on two articles.\nOne is the original article of Borchardt, where the link was provided in Ewan's answer.</p>\n\n<p>[1] Borchardt, K. W. (1876): Hr. Borchardt las über das arithmetisch-geometrische Mittel aus vier Elementen. Monatsberichte der Königlich Preußischen Akademie der Wissenschaften zu Berlin (Berlin: Verl. d. Kgl. Akad. d. Wiss., 1877), Berlin. (<a href=\"http://bibliothek.bbaw.de/bibliothek-digital/digitalequellen/schriften/anzeige/index_html?band=09-mon/1876&amp;seite:int=646\" rel=\"nofollow noreferrer\">http://bibliothek.bbaw.de/bibliothek-digital/digitalequellen/schriften/anzeige/index_html?band=09-mon/1876&amp;seite:int=646</a>)</p>\n\n<p>[2] J. M. Borwein and P. B. Borwein. On the Mean Iteration $(a,b) \\leftarrow \\left(\\frac{a+3b}{4},\\frac{\\sqrt{ab}+b}{2}\\right)$. Mathematics of Computation, 53(187):311–326, 1989. (<a href=\"https://pdfs.semanticscholar.org/5ead/200192d495859702eb47bc54babb20ab75ae.pdf\" rel=\"nofollow noreferrer\">https://pdfs.semanticscholar.org/5ead/200192d495859702eb47bc54babb20ab75ae.pdf</a>)</p>\n\n<p>I didn't double check the mathematics (i.e. 
the reasoning and proofs) in [1] and [2], but I compared the relevant formulas and implemented them numerically.\nThis finally led to a solution that I think is numerically satisfying (but see below).</p>\n\n<p>Basically, [2] uses the same definitions as in the question, so I'll start from [1] and highlight the differences.</p>\n\n<p>The first important difference are the conditions on $a,b,c,e$.\nIn [1] (on top of page 612) the <em>real</em> numbers $a,b,c,e$ have to be such that (i) $a &gt; b &gt; c &gt; e &gt; 0$ and (ii) $a - b - c + e &gt; 0$.\nI will call this condition $(C)$.\nOn the other hand in [2] page 324 we have $a &gt; b &gt; c &gt; e &gt; 0$ such that $ae - bc &gt; 0$, and I'll call this condition $(C^*)$.</p>\n\n<p>The conditions $(C)$ and $(C^*)$ are clearly there to prevent problems with negative square roots etc., which should already solve some of the problems.\nHowever, it seems (numerically) that $(C^*)$ is the one that should finally be used (see below).</p>\n\n<p>From the original article [1], page 612, I take $a_0 = a, b_0 = b, c_0 = c, e_0 = e$ and\n\\begin{align}\na_{n+1} &amp;= (a_n+b_n+c_n+e_n)/4,\\\\\nb_{n+1} &amp;= (\\sqrt{a_nb_n}+\\sqrt{c_ne_n})/2,\\\\\nc_{n+1} &amp;= (\\sqrt{a_nc_n}+\\sqrt{b_ne_n})/2,\\\\\ne_{n+1} &amp;= (\\sqrt{a_ne_n}+\\sqrt{b_nc_n})/2,\n\\end{align}\nand I set $g = \\lim_{n \\to \\infty} a_n$ (the limits are all the same, so I pick wlog the one for $a_n$).</p>\n\n<p>From [1], page 617, I also take\n\\begin{align}\nA &amp;= a+b+c+e,\\\\\nB &amp;= a+b-c-e,\\\\\nC &amp;= a-b+c-e,\\\\\nE &amp;= a-b-c+e,\n\\end{align}\nand\n\\begin{align}\nB_1 &amp;= (\\sqrt{AB}+\\sqrt{CE})/2,\\\\\nB_2 &amp;= (\\sqrt{AB}-\\sqrt{CE})/2,\\\\\nC_1 &amp;= (\\sqrt{AC}+\\sqrt{BE})/2,\\\\\nC_2 &amp;= (\\sqrt{AC}-\\sqrt{BE})/2,\\\\\nE_1 &amp;= (\\sqrt{AE}+\\sqrt{BC})/2,\\\\\nE_2 &amp;= (\\sqrt{AE}-\\sqrt{BC})/2,\n\\end{align}\nwhere it is important to notice that [1] defines $E_1 = (\\sqrt{AE}-\\sqrt{BC})/2 = E_2$ which I believe is a typo 
and hence I switch the $-$ for a $+$.\nSo far we are in agreement with the definitions in the question and in [2], but considering the values of $\\alpha_i$, $i=0,1,2,3$ there are differences.\nFrom [1], page 618, I take\n\\begin{align}\n\\Delta = (abce B_1 C_1 E_1 B_2 C_2 E_2)^{1/4},\n\\end{align}\nwhich is different from the question or the definition in [2], see page 325, in that I use the starting values $a,b,c,e$ and not the derived values $A,B,C,E$ (as stated in [1]).\nThe values of $\\alpha_i$ also differ between [1] and [2].\nFrom [1], page 618, I take\n\\begin{align}\n\\alpha_0 &amp;= \\frac{acB_1}{\\Delta},\\\\\n\\alpha_1 &amp;= \\frac{c C_1 E_1}{\\Delta},\\\\\n\\alpha_2 &amp;= \\frac{a C_2 E_1}{\\Delta},\\\\\n\\alpha_3 &amp;= \\frac{B_1 C_1 C_2}{\\Delta},\n\\end{align}\nwhere again the starting values are used, and not $A,B,C,E$ as in the question or in [3], page 325.</p>\n\n<p>All sources however agree on the definition of $R(x)$ as\n\\begin{align}\nR(x) = x (x-\\alpha_0)(x-\\alpha_1)(x-\\alpha_2)(x-\\alpha_3).\n\\end{align}</p>\n\n<p>Now [1], page 618, states that \n\\begin{align}\n4 \\frac{\\pi^2}{g} = \\int_{0}^{\\alpha_3}\\int_{\\alpha_2}^{\\alpha_1} \\frac{x-y}{\\sqrt{R(x)R(y)}} dx dy,\n\\end{align}\nwhere it is important to notice the factor $4$ that is missing in [2] and in the question.</p>\n\n<p>My numerical experiments show that neither [1] nor [2] is correct.\nIn fact I think the following statement should hold:</p>\n\n<p><strong>Theorem:</strong>\nFor $a &gt; b &gt; c &gt; e &gt; 0$ such that $ae - bc &gt; 0$ we have\n\\begin{align}\n\\frac{\\pi^2}{g} = \\int_{0}^{\\alpha_3}\\int_{\\alpha_2}^{\\alpha_1} \\frac{x-y}{\\sqrt{R(x)R(y)}} dx dy,\n\\end{align}\nwhere $g = \\lim_{n \\to \\infty} a_n$ and $\\alpha_i$, $i=0,1,2,3$ is defined as in [1], i.e., as above.</p>\n\n<p>It is important to notice that the statement</p>\n\n<ol>\n<li>uses condition $(C^*)$ from [2], page 324,</li>\n<li>uses the definition from [1] regarding $\\Delta$ and 
$\\alpha_i$, $i=0,1,2,3$,</li>\n<li>and does not include the factor $4$ which is present in [1] but not in [2].</li>\n</ol>\n\n<p>In summary, it's a wild mix of [1] and [2]...</p>\n\n<p>I got to this basically following [1] and then using $(C^*)$ instead of $(C)$ and realizing that the value was $4$ times too high.\nYou can double check this with your favorite numerical software tool, I have added my R implementation below.\nFinally, I have to say that the values don't match perfectly.\nThe numerical integration of the rational function seems to be difficult, but the numbers are always (for random starting values obeying $(C^*)$) close (enough) together in my opinion.\nAs long as I enforce $(C^*)$ I also don't experience any trouble with negative square roots or the like (but this is possible if $(C)$ is used).</p>\n\n<hr>\n\n<h2>R Implementation</h2>\n\n<pre><code>#non-random starting values that work with Borwein &amp; Borwein page 324 Theorem 3 condition\na &lt;- 7\nb &lt;- 4\nc &lt;- 3\ne &lt;- 2\n\n#get some random starting values that are OK\ngoodStart &lt;- FALSE \nwhile (goodStart==FALSE){\n\n X &lt;- sort(rexp(4,rate=20))\n\n #Borwein &amp; Borwein page 324 Theorem 3 condition\n if ( X[4]*X[1] - X[2]*X[3] &gt; 0){\n a &lt;- X[4]\n b &lt;- X[3]\n c &lt;- X[2]\n e &lt;- X[1]\n goodStart = TRUE\n }\n\n}\n\n#Borwein &amp; Borwein page 324 Theorem 3 condition\nif ( a*e - b*c &lt;= 0){stop(\"Borwein &amp; Borwein coefficient condition not ok\")}\n\n#Borchardt condition - seems to be wrong\n#if ( a - b - c + e &lt;= 0){stop(\"coefficient condition not ok\")}\n\na_i &lt;- a\nb_i &lt;- b\nc_i &lt;- c\ne_i &lt;- e\n\nfor (i in 1:1000) {\n\n tmp_a &lt;- (a_i+b_i+c_i+e_i)/4\n tmp_b &lt;- (sqrt(a_i*b_i) + sqrt(c_i*e_i))/2\n tmp_c &lt;- (sqrt(a_i*c_i) + sqrt(b_i*e_i))/2\n tmp_e &lt;- (sqrt(a_i*e_i) + sqrt(b_i*c_i))/2\n\n a_i &lt;- tmp_a\n b_i &lt;- tmp_b\n c_i &lt;- tmp_c\n e_i &lt;- tmp_e\n\n}\n\n#print(max(a_i,b_i,c_i,e_i)-min(a_i,b_i,c_i,e_i))\n\n#just pick any of 
the four\ng &lt;- a_i\n\n#compute closed-form solution\nI1 &lt;- pi^2/g\nprint(\"sequence\")\nprint(I1)\n\n#page 617 bottom\nA &lt;- a + b + c + e\nB &lt;- a + b - c - e\nC &lt;- a - b + c - e\nE &lt;- a - b - c + e\n\nB1 &lt;- (sqrt(A*B)+sqrt(C*E))/2\nB2 &lt;- (sqrt(A*B)-sqrt(C*E))/2\nC1 &lt;- (sqrt(A*C)+sqrt(B*E))/2\nC2 &lt;- (sqrt(A*C)-sqrt(B*E))/2\nE1 &lt;- (sqrt(A*E)+sqrt(B*C))/2 #this seems to be wrong in the original\nE2 &lt;- (sqrt(A*E)-sqrt(B*C))/2\n\nDelta &lt;- (a*b*c*e*B1*C1*E1*B2*C2*E2)^(1/4)\nalpha0 &lt;- a*c*B1/Delta\nalpha1 &lt;- c*C1*E1/Delta\nalpha2 &lt;- a*C2*E1/Delta\nalpha3 &lt;- B1*C1*C2/Delta\n\nintegrand &lt;- function(x,y,param){\n\n alpha0 &lt;- param$alpha0\n alpha1 &lt;- param$alpha1\n alpha2 &lt;- param$alpha2\n alpha3 &lt;- param$alpha3\n\n Rx &lt;- x*(x-alpha0)*(x-alpha1)*(x-alpha2)*(x-alpha3)\n Ry &lt;- y*(y-alpha0)*(y-alpha1)*(y-alpha2)*(y-alpha3)\n\n z &lt;- (x-y) / sqrt(Rx*Ry)\n\n return(z)\n\n}\n\np &lt;- list(alpha0=alpha0,alpha1=alpha1,alpha2=alpha2,alpha3=alpha3)\n\nlibrary(statmod)\n\ngauss &lt;- gauss.quad(500,kind=\"legendre\")\nw &lt;- gauss$weights\nx &lt;- gauss$nodes\n\nupper_x &lt;- alpha1\nlower_x &lt;- alpha2\nupper_y &lt;- alpha3\nlower_y &lt;- 0\n\nI &lt;- 0\nfor (i in 1:length(x)) {\n for (j in 1:length(x)) {\n\n xx &lt;- (upper_x-lower_x)*x[i]/2 + (upper_x+lower_x)/2\n yy &lt;- (upper_y-lower_y)*x[j]/2 + (upper_y+lower_y)/2\n ww &lt;- w[i]*w[j]\n\n I &lt;- I + ww*integrand(xx,yy,p)\n\n }\n\n}\n\nI &lt;- (upper_x-lower_x)*(upper_y-lower_y)*I/4\n\nprint(\"numerical integral:\")\nprint(I)\n</code></pre>\n" }, { "answer_id": 2654571, "score": 1, "text": "<p>Here are just a few remarks.</p>\n\n<p>Borchardt's original article (referred to as Berl. Monatsber. 
, pages 611–621) is fully available (in German) at <a href=\"http://bibliothek.bbaw.de/bibliothek-digital/digitalequellen/schriften/anzeige/index_html?band=09-mon/1876&amp;seite:int=647\" rel=\"nofollow noreferrer\">http://bibliothek.bbaw.de/bibliothek-digital/digitalequellen/schriften/anzeige/index_html?band=09-mon/1876&amp;seite:int=647</a>.</p>\n\n<p>However, this article itself builds on work already done in a longer, older article (where most of the hard computations are done), referred to as Mem. Acad. Berlin, 1878, pp. 33–96, and also fully available online at <a href=\"http://bibliothek.bbaw.de/bbaw/bibliothek-digital/digitalequellen/schriften/anzeige/index_html?band=07-abh/1878&amp;seite:int=93\" rel=\"nofollow noreferrer\">http://bibliothek.bbaw.de/bbaw/bibliothek-digital/digitalequellen/schriften/anzeige/index_html?band=07-abh/1878&amp;seite:int=93</a>.</p>\n\n<p>Your square-root of negative number problem can be solved very simply by starting from $1$ (or any nonzero index) instead of $0$. 
Because of Borchardt's identity \n$$\na_{n+1}+\\varepsilon b_{n+1}+ \\varepsilon' c_{n+1}+\\varepsilon \\varepsilon'd_{n+1}=\\frac{(\\sqrt{a_n}+\\varepsilon \\sqrt{b_n}+ \\varepsilon'\\sqrt{c_n}+\\varepsilon \\varepsilon'\\sqrt{d_n})^2}{4} \\tag{1}\n$$</p>\n\n<p>for any two signs $\\varepsilon,\\varepsilon'$ we have for $n\\geq 1$,\n$$a_{n+1}+\\varepsilon b_{n+1}+ \\varepsilon ' c_{n+1}+\\varepsilon \\varepsilon 'd_{n+1} \\geq 0 \\tag{2}$$ </p>\n\n<p>So if you use any $a_1,b_1,c_1,d_1$ instead of $a_0,b_0,c_0,d_0$ to define the $B,C,\\alpha$ constants, you will get only real and positive numbers.</p>\n" }, { "answer_id": 2655876, "score": 2, "text": "<p>Ok, my answer got a little bit out of hand, but I hope it helps to figure out what the problem is.\nIn my opinion the problems are mainly due to typos that are carried over.\nMy answer is based on two articles.\nOne is the original article of Borchardt, where the link was provided in Ewan's answer.</p>\n\n<p>[1] Borchardt, K. W. (1876): Hr. Borchardt las über das arithmetisch-geometrische Mittel aus vier Elementen. Monatsberichte der Königlich Preußischen Akademie der Wissenschaften zu Berlin (Berlin: Verl. d. Kgl. Akad. d. Wiss., 1877), Berlin. (<a href=\"http://bibliothek.bbaw.de/bibliothek-digital/digitalequellen/schriften/anzeige/index_html?band=09-mon/1876&amp;seite:int=646\" rel=\"nofollow noreferrer\">http://bibliothek.bbaw.de/bibliothek-digital/digitalequellen/schriften/anzeige/index_html?band=09-mon/1876&amp;seite:int=646</a>)</p>\n\n<p>[2] J. M. Borwein and P. B. Borwein. On the Mean Iteration $(a,b) \\leftarrow \\left(\\frac{a+3b}{4},\\frac{\\sqrt{ab}+b}{2}\\right)$. Mathematics of Computation, 53(187):311–326, 1989. (<a href=\"https://pdfs.semanticscholar.org/5ead/200192d495859702eb47bc54babb20ab75ae.pdf\" rel=\"nofollow noreferrer\">https://pdfs.semanticscholar.org/5ead/200192d495859702eb47bc54babb20ab75ae.pdf</a>)</p>\n\n<p>I didn't double check the mathematics (i.e. 
the reasoning and proofs) in [1] and [2], but I compared the relevant formulas and implemented them numerically.\nThis finally led to a solution that I think is numerically satisfying (but see below).</p>\n\n<p>Basically, [2] uses the same definitions as in the question, so I'll start from [1] and highlight the differences.</p>\n\n<p>The first important difference are the conditions on $a,b,c,e$.\nIn [1] (on top of page 612) the <em>real</em> numbers $a,b,c,e$ have to be such that (i) $a &gt; b &gt; c &gt; e &gt; 0$ and (ii) $a - b - c + e &gt; 0$.\nI will call this condition $(C)$.\nOn the other hand in [2] page 324 we have $a &gt; b &gt; c &gt; e &gt; 0$ such that $ae - bc &gt; 0$, and I'll call this condition $(C^*)$.</p>\n\n<p>The conditions $(C)$ and $(C^*)$ are clearly there to prevent problems with negative square roots etc., which should already solve some of the problems.\nHowever, it seems (numerically) that $(C^*)$ is the one that should finally be used (see below).</p>\n\n<p>From the original article [1], page 612, I take $a_0 = a, b_0 = b, c_0 = c, e_0 = e$ and\n\\begin{align}\na_{n+1} &amp;= (a_n+b_n+c_n+e_n)/4,\\\\\nb_{n+1} &amp;= (\\sqrt{a_nb_n}+\\sqrt{c_ne_n})/2,\\\\\nc_{n+1} &amp;= (\\sqrt{a_nc_n}+\\sqrt{b_ne_n})/2,\\\\\ne_{n+1} &amp;= (\\sqrt{a_ne_n}+\\sqrt{b_nc_n})/2,\n\\end{align}\nand I set $g = \\lim_{n \\to \\infty} a_n$ (the limits are all the same, so I pick wlog the one for $a_n$).</p>\n\n<p>From [1], page 617, I also take\n\\begin{align}\nA &amp;= a+b+c+e,\\\\\nB &amp;= a+b-c-e,\\\\\nC &amp;= a-b+c-e,\\\\\nE &amp;= a-b-c+e,\n\\end{align}\nand\n\\begin{align}\nB_1 &amp;= (\\sqrt{AB}+\\sqrt{CE})/2,\\\\\nB_2 &amp;= (\\sqrt{AB}-\\sqrt{CE})/2,\\\\\nC_1 &amp;= (\\sqrt{AC}+\\sqrt{BE})/2,\\\\\nC_2 &amp;= (\\sqrt{AC}-\\sqrt{BE})/2,\\\\\nE_1 &amp;= (\\sqrt{AE}+\\sqrt{BC})/2,\\\\\nE_2 &amp;= (\\sqrt{AE}-\\sqrt{BC})/2,\n\\end{align}\nwhere it is important to notice that [1] defines $E_1 = (\\sqrt{AE}-\\sqrt{BC})/2 = E_2$ which I believe is a typo 
and hence I switch the $-$ for a $+$.\nSo far we are in agreement with the definitions in the question and in [2], but considering the values of $\\alpha_i$, $i=0,1,2,3$ there are differences.\nFrom [1], page 618, I take\n\\begin{align}\n\\Delta = (abce B_1 C_1 E_1 B_2 C_2 E_2)^{1/4},\n\\end{align}\nwhich is different from the question or the definition in [2], see page 325, in that I use the starting values $a,b,c,e$ and not the derived values $A,B,C,E$ (as stated in [1]).\nThe values of $\\alpha_i$ also differ between [1] and [2].\nFrom [1], page 618, I take\n\\begin{align}\n\\alpha_0 &amp;= \\frac{acB_1}{\\Delta},\\\\\n\\alpha_1 &amp;= \\frac{c C_1 E_1}{\\Delta},\\\\\n\\alpha_2 &amp;= \\frac{a C_2 E_1}{\\Delta},\\\\\n\\alpha_3 &amp;= \\frac{B_1 C_1 C_2}{\\Delta},\n\\end{align}\nwhere again the starting values are used, and not $A,B,C,E$ as in the question or in [3], page 325.</p>\n\n<p>All sources however agree on the definition of $R(x)$ as\n\\begin{align}\nR(x) = x (x-\\alpha_0)(x-\\alpha_1)(x-\\alpha_2)(x-\\alpha_3).\n\\end{align}</p>\n\n<p>Now [1], page 618, states that \n\\begin{align}\n4 \\frac{\\pi^2}{g} = \\int_{0}^{\\alpha_3}\\int_{\\alpha_2}^{\\alpha_1} \\frac{x-y}{\\sqrt{R(x)R(y)}} dx dy,\n\\end{align}\nwhere it is important to notice the factor $4$ that is missing in [2] and in the question.</p>\n\n<p>My numerical experiments show that neither [1] nor [2] is correct.\nIn fact I think the following statement should hold:</p>\n\n<p><strong>Theorem:</strong>\nFor $a &gt; b &gt; c &gt; e &gt; 0$ such that $ae - bc &gt; 0$ we have\n\\begin{align}\n\\frac{\\pi^2}{g} = \\int_{0}^{\\alpha_3}\\int_{\\alpha_2}^{\\alpha_1} \\frac{x-y}{\\sqrt{R(x)R(y)}} dx dy,\n\\end{align}\nwhere $g = \\lim_{n \\to \\infty} a_n$ and $\\alpha_i$, $i=0,1,2,3$ is defined as in [1], i.e., as above.</p>\n\n<p>It is important to notice that the statement</p>\n\n<ol>\n<li>uses condition $(C^*)$ from [2], page 324,</li>\n<li>uses the definition from [1] regarding $\\Delta$ and 
$\\alpha_i$, $i=0,1,2,3$,</li>\n<li>and does not include the factor $4$ which is present in [1] but not in [2].</li>\n</ol>\n\n<p>In summary, it's a wild mix of [1] and [2]...</p>\n\n<p>I got to this basically following [1] and then using $(C^*)$ instead of $(C)$ and realizing that the value was $4$ times too high.\nYou can double check this with your favorite numerical software tool, I have added my R implementation below.\nFinally, I have to say that the values don't match perfectly.\nThe numerical integration of the rational function seems to be difficult, but the numbers are always (for random starting values obeying $(C^*)$) close (enough) together in my opinion.\nAs long as I enforce $(C^*)$ I also don't experience any trouble with negative square roots or the like (but this is possible if $(C)$ is used).</p>\n\n<hr>\n\n<h2>R Implementation</h2>\n\n<pre><code>#non-random starting values that work with Borwein &amp; Borwein page 324 Theorem 3 condition\na &lt;- 7\nb &lt;- 4\nc &lt;- 3\ne &lt;- 2\n\n#get some random starting values that are OK\ngoodStart &lt;- FALSE \nwhile (goodStart==FALSE){\n\n X &lt;- sort(rexp(4,rate=20))\n\n #Borwein &amp; Borwein page 324 Theorem 3 condition\n if ( X[4]*X[1] - X[2]*X[3] &gt; 0){\n a &lt;- X[4]\n b &lt;- X[3]\n c &lt;- X[2]\n e &lt;- X[1]\n goodStart = TRUE\n }\n\n}\n\n#Borwein &amp; Borwein page 324 Theorem 3 condition\nif ( a*e - b*c &lt;= 0){stop(\"Borwein &amp; Borwein coefficient condition not ok\")}\n\n#Borchardt condition - seems to be wrong\n#if ( a - b - c + e &lt;= 0){stop(\"coefficient condition not ok\")}\n\na_i &lt;- a\nb_i &lt;- b\nc_i &lt;- c\ne_i &lt;- e\n\nfor (i in 1:1000) {\n\n tmp_a &lt;- (a_i+b_i+c_i+e_i)/4\n tmp_b &lt;- (sqrt(a_i*b_i) + sqrt(c_i*e_i))/2\n tmp_c &lt;- (sqrt(a_i*c_i) + sqrt(b_i*e_i))/2\n tmp_e &lt;- (sqrt(a_i*e_i) + sqrt(b_i*c_i))/2\n\n a_i &lt;- tmp_a\n b_i &lt;- tmp_b\n c_i &lt;- tmp_c\n e_i &lt;- tmp_e\n\n}\n\n#print(max(a_i,b_i,c_i,e_i)-min(a_i,b_i,c_i,e_i))\n\n#just pick any of 
the four\ng &lt;- a_i\n\n#compute close form solution\nI1 &lt;- pi^2/g\nprint(\"sequence\")\nprint(I1)\n\n#page 617 bottom\nA &lt;- a + b + c + e\nB &lt;- a + b - c - e\nC &lt;- a - b + c - e\nE &lt;- a - b - c + e\n\nB1 &lt;- (sqrt(A*B)+sqrt(C*E))/2\nB2 &lt;- (sqrt(A*B)-sqrt(C*E))/2\nC1 &lt;- (sqrt(A*C)+sqrt(B*E))/2\nC2 &lt;- (sqrt(A*C)-sqrt(B*E))/2\nE1 &lt;- (sqrt(A*E)+sqrt(B*C))/2 #this seems to be wrong in the original\nE2 &lt;- (sqrt(A*E)-sqrt(B*C))/2\n\nDelta &lt;- (a*b*c*e*B1*C1*E1*B2*C2*E2)^(1/4)\nalpha0 &lt;- a*c*B1/Delta\nalpha1 &lt;- c*C1*E1/Delta\nalpha2 &lt;- a*C2*E1/Delta\nalpha3 &lt;- B1*C1*C2/Delta\n\nintegrand &lt;- function(x,y,param){\n\n alpha0 &lt;- param$alpha0\n alpha1 &lt;- param$alpha1\n alpha2 &lt;- param$alpha2\n alpha3 &lt;- param$alpha3\n\n Rx &lt;- x*(x-alpha0)*(x-alpha1)*(x-alpha2)*(x-alpha3)\n Ry &lt;- y*(y-alpha0)*(y-alpha1)*(y-alpha2)*(y-alpha3)\n\n z &lt;- (x-y) / sqrt(Rx*Ry)\n\n return(z)\n\n}\n\np &lt;- list(alpha0=alpha0,alpha1=alpha1,alpha2=alpha2,alpha3=alpha3)\n\nlibrary(statmod)\n\ngauss &lt;- gauss.quad(500,kind=\"legendre\")\nw &lt;- gauss$weights\nx &lt;- gauss$nodes\n\nupper_x &lt;- alpha1\nlower_x &lt;- alpha2\nupper_y &lt;- alpha3\nlower_y &lt;- 0\n\nI &lt;- 0\nfor (i in 1:length(x)) {\n for (j in 1:length(x)) {\n\n xx &lt;- (upper_x-lower_x)*x[i]/2 + (upper_x+lower_x)/2\n yy &lt;- (upper_y-lower_y)*x[j]/2 + (upper_y+lower_y)/2\n ww &lt;- w[i]*w[j]\n\n I &lt;- I + ww*integrand(xx,yy,p)\n\n }\n\n}\n\nI &lt;- (upper_x-lower_x)*(upper_y-lower_y)*I/4\n\nprint(\"numerical integral:\")\nprint(I)\n</code></pre>\n" }, { "answer_id": 2654571, "score": 1, "text": "<p>Here are just a few remarks.</p>\n\n<p>Borchart's original article (referred to as Berl. Monatsber. 
, pages 611–621) is fully available (in German) at <a href=\"http://bibliothek.bbaw.de/bibliothek-digital/digitalequellen/schriften/anzeige/index_html?band=09-mon/1876&amp;seite:int=647\" rel=\"nofollow noreferrer\">http://bibliothek.bbaw.de/bibliothek-digital/digitalequellen/schriften/anzeige/index_html?band=09-mon/1876&amp;seite:int=647</a>.</p>\n\n<p>However this article itself uses work already done in a longer older article (where most of the hard compatations are done), (referred to as (Mem. Acad. Berlin, 1878, pp. 33-96), and also fully available online at <a href=\"http://bibliothek.bbaw.de/bbaw/bibliothek-digital/digitalequellen/schriften/anzeige/index_html?band=07-abh/1878&amp;seite:int=93\" rel=\"nofollow noreferrer\">http://bibliothek.bbaw.de/bbaw/bibliothek-digital/digitalequellen/schriften/anzeige/index_html?band=07-abh/1878&amp;seite:int=93</a>.</p>\n\n<p>Your square-root of negative number problem can be solved very simply by starting from $1$ (or any nonzero index) instead of $0$. 
Because of Borchardt's identity \n$$\na_{n+1}+\\varepsilon b_{n+1}+ \\varepsilon' c_{n+1}+\\varepsilon \\varepsilon'd_{n+1}=\\frac{(\\sqrt{a_n}+\\varepsilon \\sqrt{b_n}+ \\varepsilon'\\sqrt{c_n}+\\varepsilon \\varepsilon'\\sqrt{d_n})^2}{4} \\tag{1}\n$$</p>\n\n<p>for any two signs $\\varepsilon,\\varepsilon'$ we have for $n\\geq 1$,\n$$a_{n+1}+\\varepsilon b_{n+1}+ \\varepsilon ' c_{n+1}+\\varepsilon \\varepsilon 'd_{n+1} \\geq 0 \\tag{2}$$ </p>\n\n<p>So if you use any $a_1,b_1,c_1,d_1$ instead of $a_0,b_0,c_0,d_0$ to define the $B,C,\\alpha$ constants, you will get only real and positive numbers.</p>\n" }, { "answer_id": 2655876, "score": 2, "text": "<p>Ok, my answer got a little bit out of hand, but I hope it helps to figure out what the problem is.\nIn my opinion the problems are mainly due to typos that are carried over.\nMy answer is based on two articles.\nOne is the original article of Borchardt, where the link was provided in Ewan's answer.</p>\n\n<p>[1] Borchardt, K. W. (1876): Hr. Borchardt las über das arithmetisch-geometrische Mittel aus vier Elementen. Monatsberichte der Königlich Preußischen Akademie der Wissenschaften zu Berlin (Berlin: Verl. d. Kgl. Akad. d. Wiss., 1877), Berlin. (<a href=\"http://bibliothek.bbaw.de/bibliothek-digital/digitalequellen/schriften/anzeige/index_html?band=09-mon/1876&amp;seite:int=646\" rel=\"nofollow noreferrer\">http://bibliothek.bbaw.de/bibliothek-digital/digitalequellen/schriften/anzeige/index_html?band=09-mon/1876&amp;seite:int=646</a>)</p>\n\n<p>[2] J. M. Borwein and P. B. Borwein. On the Mean Iteration $(a,b) \\leftarrow \\left(\\frac{a+3b}{4},\\frac{\\sqrt{ab}+b}{2}\\right)$. Mathematics of Computation, 53(187):311–326, 1989. (<a href=\"https://pdfs.semanticscholar.org/5ead/200192d495859702eb47bc54babb20ab75ae.pdf\" rel=\"nofollow noreferrer\">https://pdfs.semanticscholar.org/5ead/200192d495859702eb47bc54babb20ab75ae.pdf</a>)</p>\n\n<p>I didn't double check the mathematics (i.e. 
the reasoning and proofs) in [1] and [2], but I compared the relevant formulas and implemented them numerically.\nThis finally led to a solution that I think is numerically satisfying (but see below).</p>\n\n<p>Basically, [2] uses the same definitions as in the question, so I'll start from [1] and highlight the differences.</p>\n\n<p>The first important difference are the conditions on $a,b,c,e$.\nIn [1] (on top of page 612) the <em>real</em> numbers $a,b,c,e$ have to be such that (i) $a &gt; b &gt; c &gt; e &gt; 0$ and (ii) $a - b - c + e &gt; 0$.\nI will call this condition $(C)$.\nOn the other hand in [2] page 324 we have $a &gt; b &gt; c &gt; e &gt; 0$ such that $ae - bc &gt; 0$, and I'll call this condition $(C^*)$.</p>\n\n<p>The conditions $(C)$ and $(C^*)$ are clearly there to prevent problems with negative square roots etc., which should already solve some of the problems.\nHowever, it seems (numerically) that $(C^*)$ is the one that should finally be used (see below).</p>\n\n<p>From the original article [1], page 612, I take $a_0 = a, b_0 = b, c_0 = c, e_0 = e$ and\n\\begin{align}\na_{n+1} &amp;= (a_n+b_n+c_n+e_n)/4,\\\\\nb_{n+1} &amp;= (\\sqrt{a_nb_n}+\\sqrt{c_ne_n})/2,\\\\\nc_{n+1} &amp;= (\\sqrt{a_nc_n}+\\sqrt{b_ne_n})/2,\\\\\ne_{n+1} &amp;= (\\sqrt{a_ne_n}+\\sqrt{b_nc_n})/2,\n\\end{align}\nand I set $g = \\lim_{n \\to \\infty} a_n$ (the limits are all the same, so I pick wlog the one for $a_n$).</p>\n\n<p>From [1], page 617, I also take\n\\begin{align}\nA &amp;= a+b+c+e,\\\\\nB &amp;= a+b-c-e,\\\\\nC &amp;= a-b+c-e,\\\\\nE &amp;= a-b-c+e,\n\\end{align}\nand\n\\begin{align}\nB_1 &amp;= (\\sqrt{AB}+\\sqrt{CE})/2,\\\\\nB_2 &amp;= (\\sqrt{AB}-\\sqrt{CE})/2,\\\\\nC_1 &amp;= (\\sqrt{AC}+\\sqrt{BE})/2,\\\\\nC_2 &amp;= (\\sqrt{AC}-\\sqrt{BE})/2,\\\\\nE_1 &amp;= (\\sqrt{AE}+\\sqrt{BC})/2,\\\\\nE_2 &amp;= (\\sqrt{AE}-\\sqrt{BC})/2,\n\\end{align}\nwhere it is important to notice that [1] defines $E_1 = (\\sqrt{AE}-\\sqrt{BC})/2 = E_2$ which I believe is a typo 
and hence I switch the $-$ for a $+$.\nSo far we are in agreement with the definitions in the question and in [2], but considering the values of $\\alpha_i$, $i=0,1,2,3$ there are differences.\nFrom [1], page 618, I take\n\\begin{align}\n\\Delta = (abce B_1 C_1 E_1 B_2 C_2 E_2)^{1/4},\n\\end{align}\nwhich is different from the question or the definition in [2], see page 325, in that I use the starting values $a,b,c,e$ and not the derived values $A,B,C,E$ (as stated in [1]).\nThe values of $\\alpha_i$ also differ between [1] and [2].\nFrom [1], page 618, I take\n\\begin{align}\n\\alpha_0 &amp;= \\frac{acB_1}{\\Delta},\\\\\n\\alpha_1 &amp;= \\frac{c C_1 E_1}{\\Delta},\\\\\n\\alpha_2 &amp;= \\frac{a C_2 E_1}{\\Delta},\\\\\n\\alpha_3 &amp;= \\frac{B_1 C_1 C_2}{\\Delta},\n\\end{align}\nwhere again the starting values are used, and not $A,B,C,E$ as in the question or in [3], page 325.</p>\n\n<p>All sources however agree on the definition of $R(x)$ as\n\\begin{align}\nR(x) = x (x-\\alpha_0)(x-\\alpha_1)(x-\\alpha_2)(x-\\alpha_3).\n\\end{align}</p>\n\n<p>Now [1], page 618, states that \n\\begin{align}\n4 \\frac{\\pi^2}{g} = \\int_{0}^{\\alpha_3}\\int_{\\alpha_2}^{\\alpha_1} \\frac{x-y}{\\sqrt{R(x)R(y)}} dx dy,\n\\end{align}\nwhere it is important to notice the factor $4$ that is missing in [2] and in the question.</p>\n\n<p>My numerical experiments show that neither [1] nor [2] is correct.\nIn fact I think the following statement should hold:</p>\n\n<p><strong>Theorem:</strong>\nFor $a &gt; b &gt; c &gt; e &gt; 0$ such that $ae - bc &gt; 0$ we have\n\\begin{align}\n\\frac{\\pi^2}{g} = \\int_{0}^{\\alpha_3}\\int_{\\alpha_2}^{\\alpha_1} \\frac{x-y}{\\sqrt{R(x)R(y)}} dx dy,\n\\end{align}\nwhere $g = \\lim_{n \\to \\infty} a_n$ and $\\alpha_i$, $i=0,1,2,3$ is defined as in [1], i.e., as above.</p>\n\n<p>It is important to notice that the statement</p>\n\n<ol>\n<li>uses condition $(C^*)$ from [2], page 324,</li>\n<li>uses the definition from [1] regarding $\\Delta$ and 
$\\alpha_i$, $i=0,1,2,3$,</li>\n<li>and does not include the factor $4$ which is present in [1] but not in [2].</li>\n</ol>\n\n<p>In summary, it's a wild mix of [1] and [2]...</p>\n\n<p>I got to this basically following [1] and then using $(C^*)$ instead of $(C)$ and realizing that the value was $4$ times too high.\nYou can double check this with your favorite numerical software tool, I have added my R implementation below.\nFinally, I have to say that the values don't match perfectly.\nThe numerical integration of the rational function seems to be difficult, but the numbers are always (for random starting values obeying $(C^*)$) close (enough) together in my opinion.\nAs long as I enforce $(C^*)$ I also don't experience any trouble with negative square roots or the like (but this is possible if $(C)$ is used).</p>\n\n<hr>\n\n<h2>R Implementation</h2>\n\n<pre><code>#non-random starting values that work with Borwein &amp; Borwein page 324 Theorem 3 condition\na &lt;- 7\nb &lt;- 4\nc &lt;- 3\ne &lt;- 2\n\n#get some random starting values that are OK\ngoodStart &lt;- FALSE \nwhile (goodStart==FALSE){\n\n X &lt;- sort(rexp(4,rate=20))\n\n #Borwein &amp; Borwein page 324 Theorem 3 condition\n if ( X[4]*X[1] - X[2]*X[3] &gt; 0){\n a &lt;- X[4]\n b &lt;- X[3]\n c &lt;- X[2]\n e &lt;- X[1]\n goodStart = TRUE\n }\n\n}\n\n#Borwein &amp; Borwein page 324 Theorem 3 condition\nif ( a*e - b*c &lt;= 0){stop(\"Borwein &amp; Borwein coefficient condition not ok\")}\n\n#Borchardt condition - seems to be wrong\n#if ( a - b - c + e &lt;= 0){stop(\"coefficient condition not ok\")}\n\na_i &lt;- a\nb_i &lt;- b\nc_i &lt;- c\ne_i &lt;- e\n\nfor (i in 1:1000) {\n\n tmp_a &lt;- (a_i+b_i+c_i+e_i)/4\n tmp_b &lt;- (sqrt(a_i*b_i) + sqrt(c_i*e_i))/2\n tmp_c &lt;- (sqrt(a_i*c_i) + sqrt(b_i*e_i))/2\n tmp_e &lt;- (sqrt(a_i*e_i) + sqrt(b_i*c_i))/2\n\n a_i &lt;- tmp_a\n b_i &lt;- tmp_b\n c_i &lt;- tmp_c\n e_i &lt;- tmp_e\n\n}\n\n#print(max(a_i,b_i,c_i,e_i)-min(a_i,b_i,c_i,e_i))\n\n#just pick any of 
the four\ng &lt;- a_i\n\n#compute close form solution\nI1 &lt;- pi^2/g\nprint(\"sequence\")\nprint(I1)\n\n#page 617 bottom\nA &lt;- a + b + c + e\nB &lt;- a + b - c - e\nC &lt;- a - b + c - e\nE &lt;- a - b - c + e\n\nB1 &lt;- (sqrt(A*B)+sqrt(C*E))/2\nB2 &lt;- (sqrt(A*B)-sqrt(C*E))/2\nC1 &lt;- (sqrt(A*C)+sqrt(B*E))/2\nC2 &lt;- (sqrt(A*C)-sqrt(B*E))/2\nE1 &lt;- (sqrt(A*E)+sqrt(B*C))/2 #this seems to be wrong in the original\nE2 &lt;- (sqrt(A*E)-sqrt(B*C))/2\n\nDelta &lt;- (a*b*c*e*B1*C1*E1*B2*C2*E2)^(1/4)\nalpha0 &lt;- a*c*B1/Delta\nalpha1 &lt;- c*C1*E1/Delta\nalpha2 &lt;- a*C2*E1/Delta\nalpha3 &lt;- B1*C1*C2/Delta\n\nintegrand &lt;- function(x,y,param){\n\n alpha0 &lt;- param$alpha0\n alpha1 &lt;- param$alpha1\n alpha2 &lt;- param$alpha2\n alpha3 &lt;- param$alpha3\n\n Rx &lt;- x*(x-alpha0)*(x-alpha1)*(x-alpha2)*(x-alpha3)\n Ry &lt;- y*(y-alpha0)*(y-alpha1)*(y-alpha2)*(y-alpha3)\n\n z &lt;- (x-y) / sqrt(Rx*Ry)\n\n return(z)\n\n}\n\np &lt;- list(alpha0=alpha0,alpha1=alpha1,alpha2=alpha2,alpha3=alpha3)\n\nlibrary(statmod)\n\ngauss &lt;- gauss.quad(500,kind=\"legendre\")\nw &lt;- gauss$weights\nx &lt;- gauss$nodes\n\nupper_x &lt;- alpha1\nlower_x &lt;- alpha2\nupper_y &lt;- alpha3\nlower_y &lt;- 0\n\nI &lt;- 0\nfor (i in 1:length(x)) {\n for (j in 1:length(x)) {\n\n xx &lt;- (upper_x-lower_x)*x[i]/2 + (upper_x+lower_x)/2\n yy &lt;- (upper_y-lower_y)*x[j]/2 + (upper_y+lower_y)/2\n ww &lt;- w[i]*w[j]\n\n I &lt;- I + ww*integrand(xx,yy,p)\n\n }\n\n}\n\nI &lt;- (upper_x-lower_x)*(upper_y-lower_y)*I/4\n\nprint(\"numerical integral:\")\nprint(I)\n</code></pre>\n" }, { "answer_id": 2654571, "score": 1, "text": "<p>Here are just a few remarks.</p>\n\n<p>Borchart's original article (referred to as Berl. Monatsber. 
, pages 611–621) is fully available (in German) at <a href=\"http://bibliothek.bbaw.de/bibliothek-digital/digitalequellen/schriften/anzeige/index_html?band=09-mon/1876&amp;seite:int=647\" rel=\"nofollow noreferrer\">http://bibliothek.bbaw.de/bibliothek-digital/digitalequellen/schriften/anzeige/index_html?band=09-mon/1876&amp;seite:int=647</a>.</p>\n\n<p>However this article itself uses work already done in a longer older article (where most of the hard compatations are done), (referred to as (Mem. Acad. Berlin, 1878, pp. 33-96), and also fully available online at <a href=\"http://bibliothek.bbaw.de/bbaw/bibliothek-digital/digitalequellen/schriften/anzeige/index_html?band=07-abh/1878&amp;seite:int=93\" rel=\"nofollow noreferrer\">http://bibliothek.bbaw.de/bbaw/bibliothek-digital/digitalequellen/schriften/anzeige/index_html?band=07-abh/1878&amp;seite:int=93</a>.</p>\n\n<p>Your square-root of negative number problem can be solved very simply by starting from $1$ (or any nonzero index) instead of $0$. 
Because of Borchardt's identity \n$$\na_{n+1}+\\varepsilon b_{n+1}+ \\varepsilon' c_{n+1}+\\varepsilon \\varepsilon'd_{n+1}=\\frac{(\\sqrt{a_n}+\\varepsilon \\sqrt{b_n}+ \\varepsilon'\\sqrt{c_n}+\\varepsilon \\varepsilon'\\sqrt{d_n})^2}{4} \\tag{1}\n$$</p>\n\n<p>for any two signs $\\varepsilon,\\varepsilon'$ we have for $n\\geq 1$,\n$$a_{n+1}+\\varepsilon b_{n+1}+ \\varepsilon ' c_{n+1}+\\varepsilon \\varepsilon 'd_{n+1} \\geq 0 \\tag{2}$$ </p>\n\n<p>So if you use any $a_1,b_1,c_1,d_1$ instead of $a_0,b_0,c_0,d_0$ to define the $B,C,\\alpha$ constants, you will get only real and positive numbers.</p>\n" }, { "answer_id": 2655876, "score": 2, "text": "<p>Ok, my answer got a little bit out of hand, but I hope it helps to figure out what the problem is.\nIn my opinion the problems are mainly due to typos that are carried over.\nMy answer is based on two articles.\nOne is the original article of Borchardt, where the link was provided in Ewan's answer.</p>\n\n<p>[1] Borchardt, K. W. (1876): Hr. Borchardt las über das arithmetisch-geometrische Mittel aus vier Elementen. Monatsberichte der Königlich Preußischen Akademie der Wissenschaften zu Berlin (Berlin: Verl. d. Kgl. Akad. d. Wiss., 1877), Berlin. (<a href=\"http://bibliothek.bbaw.de/bibliothek-digital/digitalequellen/schriften/anzeige/index_html?band=09-mon/1876&amp;seite:int=646\" rel=\"nofollow noreferrer\">http://bibliothek.bbaw.de/bibliothek-digital/digitalequellen/schriften/anzeige/index_html?band=09-mon/1876&amp;seite:int=646</a>)</p>\n\n<p>[2] J. M. Borwein and P. B. Borwein. On the Mean Iteration $(a,b) \\leftarrow \\left(\\frac{a+3b}{4},\\frac{\\sqrt{ab}+b}{2}\\right)$. Mathematics of Computation, 53(187):311–326, 1989. (<a href=\"https://pdfs.semanticscholar.org/5ead/200192d495859702eb47bc54babb20ab75ae.pdf\" rel=\"nofollow noreferrer\">https://pdfs.semanticscholar.org/5ead/200192d495859702eb47bc54babb20ab75ae.pdf</a>)</p>\n\n<p>I didn't double check the mathematics (i.e. 
the reasoning and proofs) in [1] and [2], but I compared the relevant formulas and implemented them numerically.\nThis finally led to a solution that I think is numerically satisfying (but see below).</p>\n\n<p>Basically, [2] uses the same definitions as in the question, so I'll start from [1] and highlight the differences.</p>\n\n<p>The first important difference are the conditions on $a,b,c,e$.\nIn [1] (on top of page 612) the <em>real</em> numbers $a,b,c,e$ have to be such that (i) $a &gt; b &gt; c &gt; e &gt; 0$ and (ii) $a - b - c + e &gt; 0$.\nI will call this condition $(C)$.\nOn the other hand in [2] page 324 we have $a &gt; b &gt; c &gt; e &gt; 0$ such that $ae - bc &gt; 0$, and I'll call this condition $(C^*)$.</p>\n\n<p>The conditions $(C)$ and $(C^*)$ are clearly there to prevent problems with negative square roots etc., which should already solve some of the problems.\nHowever, it seems (numerically) that $(C^*)$ is the one that should finally be used (see below).</p>\n\n<p>From the original article [1], page 612, I take $a_0 = a, b_0 = b, c_0 = c, e_0 = e$ and\n\\begin{align}\na_{n+1} &amp;= (a_n+b_n+c_n+e_n)/4,\\\\\nb_{n+1} &amp;= (\\sqrt{a_nb_n}+\\sqrt{c_ne_n})/2,\\\\\nc_{n+1} &amp;= (\\sqrt{a_nc_n}+\\sqrt{b_ne_n})/2,\\\\\ne_{n+1} &amp;= (\\sqrt{a_ne_n}+\\sqrt{b_nc_n})/2,\n\\end{align}\nand I set $g = \\lim_{n \\to \\infty} a_n$ (the limits are all the same, so I pick wlog the one for $a_n$).</p>\n\n<p>From [1], page 617, I also take\n\\begin{align}\nA &amp;= a+b+c+e,\\\\\nB &amp;= a+b-c-e,\\\\\nC &amp;= a-b+c-e,\\\\\nE &amp;= a-b-c+e,\n\\end{align}\nand\n\\begin{align}\nB_1 &amp;= (\\sqrt{AB}+\\sqrt{CE})/2,\\\\\nB_2 &amp;= (\\sqrt{AB}-\\sqrt{CE})/2,\\\\\nC_1 &amp;= (\\sqrt{AC}+\\sqrt{BE})/2,\\\\\nC_2 &amp;= (\\sqrt{AC}-\\sqrt{BE})/2,\\\\\nE_1 &amp;= (\\sqrt{AE}+\\sqrt{BC})/2,\\\\\nE_2 &amp;= (\\sqrt{AE}-\\sqrt{BC})/2,\n\\end{align}\nwhere it is important to notice that [1] defines $E_1 = (\\sqrt{AE}-\\sqrt{BC})/2 = E_2$ which I believe is a typo 
and hence I switch the $-$ for a $+$.\nSo far we are in agreement with the definitions in the question and in [2], but considering the values of $\\alpha_i$, $i=0,1,2,3$ there are differences.\nFrom [1], page 618, I take\n\\begin{align}\n\\Delta = (abce B_1 C_1 E_1 B_2 C_2 E_2)^{1/4},\n\\end{align}\nwhich is different from the question or the definition in [2], see page 325, in that I use the starting values $a,b,c,e$ and not the derived values $A,B,C,E$ (as stated in [1]).\nThe values of $\\alpha_i$ also differ between [1] and [2].\nFrom [1], page 618, I take\n\\begin{align}\n\\alpha_0 &amp;= \\frac{acB_1}{\\Delta},\\\\\n\\alpha_1 &amp;= \\frac{c C_1 E_1}{\\Delta},\\\\\n\\alpha_2 &amp;= \\frac{a C_2 E_1}{\\Delta},\\\\\n\\alpha_3 &amp;= \\frac{B_1 C_1 C_2}{\\Delta},\n\\end{align}\nwhere again the starting values are used, and not $A,B,C,E$ as in the question or in [3], page 325.</p>\n\n<p>All sources however agree on the definition of $R(x)$ as\n\\begin{align}\nR(x) = x (x-\\alpha_0)(x-\\alpha_1)(x-\\alpha_2)(x-\\alpha_3).\n\\end{align}</p>\n\n<p>Now [1], page 618, states that \n\\begin{align}\n4 \\frac{\\pi^2}{g} = \\int_{0}^{\\alpha_3}\\int_{\\alpha_2}^{\\alpha_1} \\frac{x-y}{\\sqrt{R(x)R(y)}} dx dy,\n\\end{align}\nwhere it is important to notice the factor $4$ that is missing in [2] and in the question.</p>\n\n<p>My numerical experiments show that neither [1] nor [2] is correct.\nIn fact I think the following statement should hold:</p>\n\n<p><strong>Theorem:</strong>\nFor $a &gt; b &gt; c &gt; e &gt; 0$ such that $ae - bc &gt; 0$ we have\n\\begin{align}\n\\frac{\\pi^2}{g} = \\int_{0}^{\\alpha_3}\\int_{\\alpha_2}^{\\alpha_1} \\frac{x-y}{\\sqrt{R(x)R(y)}} dx dy,\n\\end{align}\nwhere $g = \\lim_{n \\to \\infty} a_n$ and $\\alpha_i$, $i=0,1,2,3$ is defined as in [1], i.e., as above.</p>\n\n<p>It is important to notice that the statement</p>\n\n<ol>\n<li>uses condition $(C^*)$ from [2], page 324,</li>\n<li>uses the definition from [1] regarding $\\Delta$ and 
$\\alpha_i$, $i=0,1,2,3$,</li>\n<li>and does not include the factor $4$ which is present in [1] but not in [2].</li>\n</ol>\n\n<p>In summary, it's a wild mix of [1] and [2]...</p>\n\n<p>I got to this basically following [1] and then using $(C^*)$ instead of $(C)$ and realizing that the value was $4$ times too high.\nYou can double check this with your favorite numerical software tool, I have added my R implementation below.\nFinally, I have to say that the values don't match perfectly.\nThe numerical integration of the rational function seems to be difficult, but the numbers are always (for random starting values obeying $(C^*)$) close (enough) together in my opinion.\nAs long as I enforce $(C^*)$ I also don't experience any trouble with negative square roots or the like (but this is possible if $(C)$ is used).</p>\n\n<hr>\n\n<h2>R Implementation</h2>\n\n<pre><code>#non-random starting values that work with Borwein &amp; Borwein page 324 Theorem 3 condition\na &lt;- 7\nb &lt;- 4\nc &lt;- 3\ne &lt;- 2\n\n#get some random starting values that are OK\ngoodStart &lt;- FALSE \nwhile (goodStart==FALSE){\n\n X &lt;- sort(rexp(4,rate=20))\n\n #Borwein &amp; Borwein page 324 Theorem 3 condition\n if ( X[4]*X[1] - X[2]*X[3] &gt; 0){\n a &lt;- X[4]\n b &lt;- X[3]\n c &lt;- X[2]\n e &lt;- X[1]\n goodStart = TRUE\n }\n\n}\n\n#Borwein &amp; Borwein page 324 Theorem 3 condition\nif ( a*e - b*c &lt;= 0){stop(\"Borwein &amp; Borwein coefficient condition not ok\")}\n\n#Borchardt condition - seems to be wrong\n#if ( a - b - c + e &lt;= 0){stop(\"coefficient condition not ok\")}\n\na_i &lt;- a\nb_i &lt;- b\nc_i &lt;- c\ne_i &lt;- e\n\nfor (i in 1:1000) {\n\n tmp_a &lt;- (a_i+b_i+c_i+e_i)/4\n tmp_b &lt;- (sqrt(a_i*b_i) + sqrt(c_i*e_i))/2\n tmp_c &lt;- (sqrt(a_i*c_i) + sqrt(b_i*e_i))/2\n tmp_e &lt;- (sqrt(a_i*e_i) + sqrt(b_i*c_i))/2\n\n a_i &lt;- tmp_a\n b_i &lt;- tmp_b\n c_i &lt;- tmp_c\n e_i &lt;- tmp_e\n\n}\n\n#print(max(a_i,b_i,c_i,e_i)-min(a_i,b_i,c_i,e_i))\n\n#just pick any of 
the four\ng &lt;- a_i\n\n#compute closed-form solution\nI1 &lt;- pi^2/g\nprint(\"sequence\")\nprint(I1)\n\n#page 617 bottom\nA &lt;- a + b + c + e\nB &lt;- a + b - c - e\nC &lt;- a - b + c - e\nE &lt;- a - b - c + e\n\nB1 &lt;- (sqrt(A*B)+sqrt(C*E))/2\nB2 &lt;- (sqrt(A*B)-sqrt(C*E))/2\nC1 &lt;- (sqrt(A*C)+sqrt(B*E))/2\nC2 &lt;- (sqrt(A*C)-sqrt(B*E))/2\nE1 &lt;- (sqrt(A*E)+sqrt(B*C))/2 #this seems to be wrong in the original\nE2 &lt;- (sqrt(A*E)-sqrt(B*C))/2\n\nDelta &lt;- (a*b*c*e*B1*C1*E1*B2*C2*E2)^(1/4)\nalpha0 &lt;- a*c*B1/Delta\nalpha1 &lt;- c*C1*E1/Delta\nalpha2 &lt;- a*C2*E1/Delta\nalpha3 &lt;- B1*C1*C2/Delta\n\nintegrand &lt;- function(x,y,param){\n\n  alpha0 &lt;- param$alpha0\n  alpha1 &lt;- param$alpha1\n  alpha2 &lt;- param$alpha2\n  alpha3 &lt;- param$alpha3\n\n  Rx &lt;- x*(x-alpha0)*(x-alpha1)*(x-alpha2)*(x-alpha3)\n  Ry &lt;- y*(y-alpha0)*(y-alpha1)*(y-alpha2)*(y-alpha3)\n\n  z &lt;- (x-y) / sqrt(Rx*Ry)\n\n  return(z)\n\n}\n\np &lt;- list(alpha0=alpha0,alpha1=alpha1,alpha2=alpha2,alpha3=alpha3)\n\nlibrary(statmod)\n\ngauss &lt;- gauss.quad(500,kind=\"legendre\")\nw &lt;- gauss$weights\nx &lt;- gauss$nodes\n\nupper_x &lt;- alpha1\nlower_x &lt;- alpha2\nupper_y &lt;- alpha3\nlower_y &lt;- 0\n\nI &lt;- 0\nfor (i in 1:length(x)) {\n  for (j in 1:length(x)) {\n\n    xx &lt;- (upper_x-lower_x)*x[i]/2 + (upper_x+lower_x)/2\n    yy &lt;- (upper_y-lower_y)*x[j]/2 + (upper_y+lower_y)/2\n    ww &lt;- w[i]*w[j]\n\n    I &lt;- I + ww*integrand(xx,yy,p)\n\n  }\n\n}\n\nI &lt;- (upper_x-lower_x)*(upper_y-lower_y)*I/4\n\nprint(\"numerical integral:\")\nprint(I)\n</code></pre>\n" }, { "answer_id": 2654571, "score": 1, "text": "<p>Here are just a few remarks.</p>\n\n<p>Borchardt's original article (referred to as Berl. Monatsber.
, pages 611–621) is fully available (in German) at <a href=\"http://bibliothek.bbaw.de/bibliothek-digital/digitalequellen/schriften/anzeige/index_html?band=09-mon/1876&amp;seite:int=647\" rel=\"nofollow noreferrer\">http://bibliothek.bbaw.de/bibliothek-digital/digitalequellen/schriften/anzeige/index_html?band=09-mon/1876&amp;seite:int=647</a>.</p>\n\n<p>However, this article itself uses work already done in a longer, older article (where most of the hard computations are done), referred to as Mem. Acad. Berlin, 1878, pp. 33–96, and also fully available online at <a href=\"http://bibliothek.bbaw.de/bbaw/bibliothek-digital/digitalequellen/schriften/anzeige/index_html?band=07-abh/1878&amp;seite:int=93\" rel=\"nofollow noreferrer\">http://bibliothek.bbaw.de/bbaw/bibliothek-digital/digitalequellen/schriften/anzeige/index_html?band=07-abh/1878&amp;seite:int=93</a>.</p>\n\n<p>Your problem with square roots of negative numbers can be solved very simply by starting from $1$ (or any nonzero index) instead of $0$.
Because of Borchardt's identity \n$$\na_{n+1}+\\varepsilon b_{n+1}+ \\varepsilon' c_{n+1}+\\varepsilon \\varepsilon'd_{n+1}=\\frac{(\\sqrt{a_n}+\\varepsilon \\sqrt{b_n}+ \\varepsilon'\\sqrt{c_n}+\\varepsilon \\varepsilon'\\sqrt{d_n})^2}{4} \\tag{1}\n$$</p>\n\n<p>for any two signs $\\varepsilon,\\varepsilon'$ we have for $n\\geq 1$,\n$$a_{n+1}+\\varepsilon b_{n+1}+ \\varepsilon ' c_{n+1}+\\varepsilon \\varepsilon 'd_{n+1} \\geq 0 \\tag{2}$$ </p>\n\n<p>So if you use any $a_1,b_1,c_1,d_1$ instead of $a_0,b_0,c_0,d_0$ to define the $B,C,\\alpha$ constants, you will get only real and positive numbers.</p>\n" }, { "answer_id": 2655876, "score": 2, "text": "<p>Ok, my answer got a little bit out of hand, but I hope it helps to figure out what the problem is.\nIn my opinion the problems are mainly due to typos that are carried over.\nMy answer is based on two articles.\nOne is the original article of Borchardt, where the link was provided in Ewan's answer.</p>\n\n<p>[1] Borchardt, K. W. (1876): Hr. Borchardt las über das arithmetisch-geometrische Mittel aus vier Elementen. Monatsberichte der Königlich Preußischen Akademie der Wissenschaften zu Berlin (Berlin: Verl. d. Kgl. Akad. d. Wiss., 1877), Berlin. (<a href=\"http://bibliothek.bbaw.de/bibliothek-digital/digitalequellen/schriften/anzeige/index_html?band=09-mon/1876&amp;seite:int=646\" rel=\"nofollow noreferrer\">http://bibliothek.bbaw.de/bibliothek-digital/digitalequellen/schriften/anzeige/index_html?band=09-mon/1876&amp;seite:int=646</a>)</p>\n\n<p>[2] J. M. Borwein and P. B. Borwein. On the Mean Iteration $(a,b) \\leftarrow \\left(\\frac{a+3b}{4},\\frac{\\sqrt{ab}+b}{2}\\right)$. Mathematics of Computation, 53(187):311–326, 1989. (<a href=\"https://pdfs.semanticscholar.org/5ead/200192d495859702eb47bc54babb20ab75ae.pdf\" rel=\"nofollow noreferrer\">https://pdfs.semanticscholar.org/5ead/200192d495859702eb47bc54babb20ab75ae.pdf</a>)</p>\n\n<p>I didn't double check the mathematics (i.e. 
the reasoning and proofs) in [1] and [2], but I compared the relevant formulas and implemented them numerically.\nThis finally led to a solution that I think is numerically satisfying (but see below).</p>\n\n<p>Basically, [2] uses the same definitions as in the question, so I'll start from [1] and highlight the differences.</p>\n\n<p>The first important difference are the conditions on $a,b,c,e$.\nIn [1] (on top of page 612) the <em>real</em> numbers $a,b,c,e$ have to be such that (i) $a &gt; b &gt; c &gt; e &gt; 0$ and (ii) $a - b - c + e &gt; 0$.\nI will call this condition $(C)$.\nOn the other hand in [2] page 324 we have $a &gt; b &gt; c &gt; e &gt; 0$ such that $ae - bc &gt; 0$, and I'll call this condition $(C^*)$.</p>\n\n<p>The conditions $(C)$ and $(C^*)$ are clearly there to prevent problems with negative square roots etc., which should already solve some of the problems.\nHowever, it seems (numerically) that $(C^*)$ is the one that should finally be used (see below).</p>\n\n<p>From the original article [1], page 612, I take $a_0 = a, b_0 = b, c_0 = c, e_0 = e$ and\n\\begin{align}\na_{n+1} &amp;= (a_n+b_n+c_n+e_n)/4,\\\\\nb_{n+1} &amp;= (\\sqrt{a_nb_n}+\\sqrt{c_ne_n})/2,\\\\\nc_{n+1} &amp;= (\\sqrt{a_nc_n}+\\sqrt{b_ne_n})/2,\\\\\ne_{n+1} &amp;= (\\sqrt{a_ne_n}+\\sqrt{b_nc_n})/2,\n\\end{align}\nand I set $g = \\lim_{n \\to \\infty} a_n$ (the limits are all the same, so I pick wlog the one for $a_n$).</p>\n\n<p>From [1], page 617, I also take\n\\begin{align}\nA &amp;= a+b+c+e,\\\\\nB &amp;= a+b-c-e,\\\\\nC &amp;= a-b+c-e,\\\\\nE &amp;= a-b-c+e,\n\\end{align}\nand\n\\begin{align}\nB_1 &amp;= (\\sqrt{AB}+\\sqrt{CE})/2,\\\\\nB_2 &amp;= (\\sqrt{AB}-\\sqrt{CE})/2,\\\\\nC_1 &amp;= (\\sqrt{AC}+\\sqrt{BE})/2,\\\\\nC_2 &amp;= (\\sqrt{AC}-\\sqrt{BE})/2,\\\\\nE_1 &amp;= (\\sqrt{AE}+\\sqrt{BC})/2,\\\\\nE_2 &amp;= (\\sqrt{AE}-\\sqrt{BC})/2,\n\\end{align}\nwhere it is important to notice that [1] defines $E_1 = (\\sqrt{AE}-\\sqrt{BC})/2 = E_2$ which I believe is a typo 
and hence I switch the $-$ for a $+$.\nSo far we are in agreement with the definitions in the question and in [2], but considering the values of $\\alpha_i$, $i=0,1,2,3$ there are differences.\nFrom [1], page 618, I take\n\\begin{align}\n\\Delta = (abce B_1 C_1 E_1 B_2 C_2 E_2)^{1/4},\n\\end{align}\nwhich is different from the question or the definition in [2], see page 325, in that I use the starting values $a,b,c,e$ and not the derived values $A,B,C,E$ (as stated in [1]).\nThe values of $\\alpha_i$ also differ between [1] and [2].\nFrom [1], page 618, I take\n\\begin{align}\n\\alpha_0 &amp;= \\frac{acB_1}{\\Delta},\\\\\n\\alpha_1 &amp;= \\frac{c C_1 E_1}{\\Delta},\\\\\n\\alpha_2 &amp;= \\frac{a C_2 E_1}{\\Delta},\\\\\n\\alpha_3 &amp;= \\frac{B_1 C_1 C_2}{\\Delta},\n\\end{align}\nwhere again the starting values are used, and not $A,B,C,E$ as in the question or in [3], page 325.</p>\n\n<p>All sources however agree on the definition of $R(x)$ as\n\\begin{align}\nR(x) = x (x-\\alpha_0)(x-\\alpha_1)(x-\\alpha_2)(x-\\alpha_3).\n\\end{align}</p>\n\n<p>Now [1], page 618, states that \n\\begin{align}\n4 \\frac{\\pi^2}{g} = \\int_{0}^{\\alpha_3}\\int_{\\alpha_2}^{\\alpha_1} \\frac{x-y}{\\sqrt{R(x)R(y)}} dx dy,\n\\end{align}\nwhere it is important to notice the factor $4$ that is missing in [2] and in the question.</p>\n\n<p>My numerical experiments show that neither [1] nor [2] is correct.\nIn fact I think the following statement should hold:</p>\n\n<p><strong>Theorem:</strong>\nFor $a &gt; b &gt; c &gt; e &gt; 0$ such that $ae - bc &gt; 0$ we have\n\\begin{align}\n\\frac{\\pi^2}{g} = \\int_{0}^{\\alpha_3}\\int_{\\alpha_2}^{\\alpha_1} \\frac{x-y}{\\sqrt{R(x)R(y)}} dx dy,\n\\end{align}\nwhere $g = \\lim_{n \\to \\infty} a_n$ and $\\alpha_i$, $i=0,1,2,3$ is defined as in [1], i.e., as above.</p>\n\n<p>It is important to notice that the statement</p>\n\n<ol>\n<li>uses condition $(C^*)$ from [2], page 324,</li>\n<li>uses the definition from [1] regarding $\\Delta$ and 
$\\alpha_i$, $i=0,1,2,3$,</li>\n<li>and does not include the factor $4$ which is present in [1] but not in [2].</li>\n</ol>\n\n<p>In summary, it's a wild mix of [1] and [2]...</p>\n\n<p>I got to this basically following [1] and then using $(C^*)$ instead of $(C)$ and realizing that the value was $4$ times too high.\nYou can double check this with your favorite numerical software tool, I have added my R implementation below.\nFinally, I have to say that the values don't match perfectly.\nThe numerical integration of the rational function seems to be difficult, but the numbers are always (for random starting values obeying $(C^*)$) close (enough) together in my opinion.\nAs long as I enforce $(C^*)$ I also don't experience any trouble with negative square roots or the like (but this is possible if $(C)$ is used).</p>\n\n<hr>\n\n<h2>R Implementation</h2>\n\n<pre><code>#non-random starting values that work with Borwein &amp; Borwein page 324 Theorem 3 condition\na &lt;- 7\nb &lt;- 4\nc &lt;- 3\ne &lt;- 2\n\n#get some random starting values that are OK\ngoodStart &lt;- FALSE \nwhile (goodStart==FALSE){\n\n X &lt;- sort(rexp(4,rate=20))\n\n #Borwein &amp; Borwein page 324 Theorem 3 condition\n if ( X[4]*X[1] - X[2]*X[3] &gt; 0){\n a &lt;- X[4]\n b &lt;- X[3]\n c &lt;- X[2]\n e &lt;- X[1]\n goodStart = TRUE\n }\n\n}\n\n#Borwein &amp; Borwein page 324 Theorem 3 condition\nif ( a*e - b*c &lt;= 0){stop(\"Borwein &amp; Borwein coefficient condition not ok\")}\n\n#Borchardt condition - seems to be wrong\n#if ( a - b - c + e &lt;= 0){stop(\"coefficient condition not ok\")}\n\na_i &lt;- a\nb_i &lt;- b\nc_i &lt;- c\ne_i &lt;- e\n\nfor (i in 1:1000) {\n\n tmp_a &lt;- (a_i+b_i+c_i+e_i)/4\n tmp_b &lt;- (sqrt(a_i*b_i) + sqrt(c_i*e_i))/2\n tmp_c &lt;- (sqrt(a_i*c_i) + sqrt(b_i*e_i))/2\n tmp_e &lt;- (sqrt(a_i*e_i) + sqrt(b_i*c_i))/2\n\n a_i &lt;- tmp_a\n b_i &lt;- tmp_b\n c_i &lt;- tmp_c\n e_i &lt;- tmp_e\n\n}\n\n#print(max(a_i,b_i,c_i,e_i)-min(a_i,b_i,c_i,e_i))\n\n#just pick any of 
the four\ng &lt;- a_i\n\n#compute close form solution\nI1 &lt;- pi^2/g\nprint(\"sequence\")\nprint(I1)\n\n#page 617 bottom\nA &lt;- a + b + c + e\nB &lt;- a + b - c - e\nC &lt;- a - b + c - e\nE &lt;- a - b - c + e\n\nB1 &lt;- (sqrt(A*B)+sqrt(C*E))/2\nB2 &lt;- (sqrt(A*B)-sqrt(C*E))/2\nC1 &lt;- (sqrt(A*C)+sqrt(B*E))/2\nC2 &lt;- (sqrt(A*C)-sqrt(B*E))/2\nE1 &lt;- (sqrt(A*E)+sqrt(B*C))/2 #this seems to be wrong in the original\nE2 &lt;- (sqrt(A*E)-sqrt(B*C))/2\n\nDelta &lt;- (a*b*c*e*B1*C1*E1*B2*C2*E2)^(1/4)\nalpha0 &lt;- a*c*B1/Delta\nalpha1 &lt;- c*C1*E1/Delta\nalpha2 &lt;- a*C2*E1/Delta\nalpha3 &lt;- B1*C1*C2/Delta\n\nintegrand &lt;- function(x,y,param){\n\n alpha0 &lt;- param$alpha0\n alpha1 &lt;- param$alpha1\n alpha2 &lt;- param$alpha2\n alpha3 &lt;- param$alpha3\n\n Rx &lt;- x*(x-alpha0)*(x-alpha1)*(x-alpha2)*(x-alpha3)\n Ry &lt;- y*(y-alpha0)*(y-alpha1)*(y-alpha2)*(y-alpha3)\n\n z &lt;- (x-y) / sqrt(Rx*Ry)\n\n return(z)\n\n}\n\np &lt;- list(alpha0=alpha0,alpha1=alpha1,alpha2=alpha2,alpha3=alpha3)\n\nlibrary(statmod)\n\ngauss &lt;- gauss.quad(500,kind=\"legendre\")\nw &lt;- gauss$weights\nx &lt;- gauss$nodes\n\nupper_x &lt;- alpha1\nlower_x &lt;- alpha2\nupper_y &lt;- alpha3\nlower_y &lt;- 0\n\nI &lt;- 0\nfor (i in 1:length(x)) {\n for (j in 1:length(x)) {\n\n xx &lt;- (upper_x-lower_x)*x[i]/2 + (upper_x+lower_x)/2\n yy &lt;- (upper_y-lower_y)*x[j]/2 + (upper_y+lower_y)/2\n ww &lt;- w[i]*w[j]\n\n I &lt;- I + ww*integrand(xx,yy,p)\n\n }\n\n}\n\nI &lt;- (upper_x-lower_x)*(upper_y-lower_y)*I/4\n\nprint(\"numerical integral:\")\nprint(I)\n</code></pre>\n" }, { "answer_id": 2654571, "score": 1, "text": "<p>Here are just a few remarks.</p>\n\n<p>Borchart's original article (referred to as Berl. Monatsber. 
, pages 611–621) is fully available (in German) at <a href=\"http://bibliothek.bbaw.de/bibliothek-digital/digitalequellen/schriften/anzeige/index_html?band=09-mon/1876&amp;seite:int=647\" rel=\"nofollow noreferrer\">http://bibliothek.bbaw.de/bibliothek-digital/digitalequellen/schriften/anzeige/index_html?band=09-mon/1876&amp;seite:int=647</a>.</p>\n\n<p>However this article itself uses work already done in a longer older article (where most of the hard compatations are done), (referred to as (Mem. Acad. Berlin, 1878, pp. 33-96), and also fully available online at <a href=\"http://bibliothek.bbaw.de/bbaw/bibliothek-digital/digitalequellen/schriften/anzeige/index_html?band=07-abh/1878&amp;seite:int=93\" rel=\"nofollow noreferrer\">http://bibliothek.bbaw.de/bbaw/bibliothek-digital/digitalequellen/schriften/anzeige/index_html?band=07-abh/1878&amp;seite:int=93</a>.</p>\n\n<p>Your square-root of negative number problem can be solved very simply by starting from $1$ (or any nonzero index) instead of $0$. 
Because of Borchardt's identity \n$$\na_{n+1}+\\varepsilon b_{n+1}+ \\varepsilon' c_{n+1}+\\varepsilon \\varepsilon'd_{n+1}=\\frac{(\\sqrt{a_n}+\\varepsilon \\sqrt{b_n}+ \\varepsilon'\\sqrt{c_n}+\\varepsilon \\varepsilon'\\sqrt{d_n})^2}{4} \\tag{1}\n$$</p>\n\n<p>for any two signs $\\varepsilon,\\varepsilon'$ we have for $n\\geq 1$,\n$$a_{n+1}+\\varepsilon b_{n+1}+ \\varepsilon ' c_{n+1}+\\varepsilon \\varepsilon 'd_{n+1} \\geq 0 \\tag{2}$$ </p>\n\n<p>So if you use any $a_1,b_1,c_1,d_1$ instead of $a_0,b_0,c_0,d_0$ to define the $B,C,\\alpha$ constants, you will get only real and positive numbers.</p>\n" }, { "answer_id": 2655876, "score": 2, "text": "<p>Ok, my answer got a little bit out of hand, but I hope it helps to figure out what the problem is.\nIn my opinion the problems are mainly due to typos that are carried over.\nMy answer is based on two articles.\nOne is the original article of Borchardt, where the link was provided in Ewan's answer.</p>\n\n<p>[1] Borchardt, K. W. (1876): Hr. Borchardt las über das arithmetisch-geometrische Mittel aus vier Elementen. Monatsberichte der Königlich Preußischen Akademie der Wissenschaften zu Berlin (Berlin: Verl. d. Kgl. Akad. d. Wiss., 1877), Berlin. (<a href=\"http://bibliothek.bbaw.de/bibliothek-digital/digitalequellen/schriften/anzeige/index_html?band=09-mon/1876&amp;seite:int=646\" rel=\"nofollow noreferrer\">http://bibliothek.bbaw.de/bibliothek-digital/digitalequellen/schriften/anzeige/index_html?band=09-mon/1876&amp;seite:int=646</a>)</p>\n\n<p>[2] J. M. Borwein and P. B. Borwein. On the Mean Iteration $(a,b) \\leftarrow \\left(\\frac{a+3b}{4},\\frac{\\sqrt{ab}+b}{2}\\right)$. Mathematics of Computation, 53(187):311–326, 1989. (<a href=\"https://pdfs.semanticscholar.org/5ead/200192d495859702eb47bc54babb20ab75ae.pdf\" rel=\"nofollow noreferrer\">https://pdfs.semanticscholar.org/5ead/200192d495859702eb47bc54babb20ab75ae.pdf</a>)</p>\n\n<p>I didn't double check the mathematics (i.e. 
the reasoning and proofs) in [1] and [2], but I compared the relevant formulas and implemented them numerically.\nThis finally led to a solution that I think is numerically satisfying (but see below).</p>\n\n<p>Basically, [2] uses the same definitions as in the question, so I'll start from [1] and highlight the differences.</p>\n\n<p>The first important difference are the conditions on $a,b,c,e$.\nIn [1] (on top of page 612) the <em>real</em> numbers $a,b,c,e$ have to be such that (i) $a &gt; b &gt; c &gt; e &gt; 0$ and (ii) $a - b - c + e &gt; 0$.\nI will call this condition $(C)$.\nOn the other hand in [2] page 324 we have $a &gt; b &gt; c &gt; e &gt; 0$ such that $ae - bc &gt; 0$, and I'll call this condition $(C^*)$.</p>\n\n<p>The conditions $(C)$ and $(C^*)$ are clearly there to prevent problems with negative square roots etc., which should already solve some of the problems.\nHowever, it seems (numerically) that $(C^*)$ is the one that should finally be used (see below).</p>\n\n<p>From the original article [1], page 612, I take $a_0 = a, b_0 = b, c_0 = c, e_0 = e$ and\n\\begin{align}\na_{n+1} &amp;= (a_n+b_n+c_n+e_n)/4,\\\\\nb_{n+1} &amp;= (\\sqrt{a_nb_n}+\\sqrt{c_ne_n})/2,\\\\\nc_{n+1} &amp;= (\\sqrt{a_nc_n}+\\sqrt{b_ne_n})/2,\\\\\ne_{n+1} &amp;= (\\sqrt{a_ne_n}+\\sqrt{b_nc_n})/2,\n\\end{align}\nand I set $g = \\lim_{n \\to \\infty} a_n$ (the limits are all the same, so I pick wlog the one for $a_n$).</p>\n\n<p>From [1], page 617, I also take\n\\begin{align}\nA &amp;= a+b+c+e,\\\\\nB &amp;= a+b-c-e,\\\\\nC &amp;= a-b+c-e,\\\\\nE &amp;= a-b-c+e,\n\\end{align}\nand\n\\begin{align}\nB_1 &amp;= (\\sqrt{AB}+\\sqrt{CE})/2,\\\\\nB_2 &amp;= (\\sqrt{AB}-\\sqrt{CE})/2,\\\\\nC_1 &amp;= (\\sqrt{AC}+\\sqrt{BE})/2,\\\\\nC_2 &amp;= (\\sqrt{AC}-\\sqrt{BE})/2,\\\\\nE_1 &amp;= (\\sqrt{AE}+\\sqrt{BC})/2,\\\\\nE_2 &amp;= (\\sqrt{AE}-\\sqrt{BC})/2,\n\\end{align}\nwhere it is important to notice that [1] defines $E_1 = (\\sqrt{AE}-\\sqrt{BC})/2 = E_2$ which I believe is a typo 
and hence I switch the $-$ for a $+$.\nSo far we are in agreement with the definitions in the question and in [2], but considering the values of $\\alpha_i$, $i=0,1,2,3$ there are differences.\nFrom [1], page 618, I take\n\\begin{align}\n\\Delta = (abce B_1 C_1 E_1 B_2 C_2 E_2)^{1/4},\n\\end{align}\nwhich is different from the question or the definition in [2], see page 325, in that I use the starting values $a,b,c,e$ and not the derived values $A,B,C,E$ (as stated in [1]).\nThe values of $\\alpha_i$ also differ between [1] and [2].\nFrom [1], page 618, I take\n\\begin{align}\n\\alpha_0 &amp;= \\frac{acB_1}{\\Delta},\\\\\n\\alpha_1 &amp;= \\frac{c C_1 E_1}{\\Delta},\\\\\n\\alpha_2 &amp;= \\frac{a C_2 E_1}{\\Delta},\\\\\n\\alpha_3 &amp;= \\frac{B_1 C_1 C_2}{\\Delta},\n\\end{align}\nwhere again the starting values are used, and not $A,B,C,E$ as in the question or in [3], page 325.</p>\n\n<p>All sources however agree on the definition of $R(x)$ as\n\\begin{align}\nR(x) = x (x-\\alpha_0)(x-\\alpha_1)(x-\\alpha_2)(x-\\alpha_3).\n\\end{align}</p>\n\n<p>Now [1], page 618, states that \n\\begin{align}\n4 \\frac{\\pi^2}{g} = \\int_{0}^{\\alpha_3}\\int_{\\alpha_2}^{\\alpha_1} \\frac{x-y}{\\sqrt{R(x)R(y)}} dx dy,\n\\end{align}\nwhere it is important to notice the factor $4$ that is missing in [2] and in the question.</p>\n\n<p>My numerical experiments show that neither [1] nor [2] is correct.\nIn fact I think the following statement should hold:</p>\n\n<p><strong>Theorem:</strong>\nFor $a &gt; b &gt; c &gt; e &gt; 0$ such that $ae - bc &gt; 0$ we have\n\\begin{align}\n\\frac{\\pi^2}{g} = \\int_{0}^{\\alpha_3}\\int_{\\alpha_2}^{\\alpha_1} \\frac{x-y}{\\sqrt{R(x)R(y)}} dx dy,\n\\end{align}\nwhere $g = \\lim_{n \\to \\infty} a_n$ and $\\alpha_i$, $i=0,1,2,3$ is defined as in [1], i.e., as above.</p>\n\n<p>It is important to notice that the statement</p>\n\n<ol>\n<li>uses condition $(C^*)$ from [2], page 324,</li>\n<li>uses the definition from [1] regarding $\\Delta$ and 
$\\alpha_i$, $i=0,1,2,3$,</li>\n<li>and does not include the factor $4$ which is present in [1] but not in [2].</li>\n</ol>\n\n<p>In summary, it's a wild mix of [1] and [2]...</p>\n\n<p>I got to this basically following [1] and then using $(C^*)$ instead of $(C)$ and realizing that the value was $4$ times too high.\nYou can double check this with your favorite numerical software tool, I have added my R implementation below.\nFinally, I have to say that the values don't match perfectly.\nThe numerical integration of the rational function seems to be difficult, but the numbers are always (for random starting values obeying $(C^*)$) close (enough) together in my opinion.\nAs long as I enforce $(C^*)$ I also don't experience any trouble with negative square roots or the like (but this is possible if $(C)$ is used).</p>\n\n<hr>\n\n<h2>R Implementation</h2>\n\n<pre><code>#non-random starting values that work with Borwein &amp; Borwein page 324 Theorem 3 condition\na &lt;- 7\nb &lt;- 4\nc &lt;- 3\ne &lt;- 2\n\n#get some random starting values that are OK\ngoodStart &lt;- FALSE \nwhile (goodStart==FALSE){\n\n X &lt;- sort(rexp(4,rate=20))\n\n #Borwein &amp; Borwein page 324 Theorem 3 condition\n if ( X[4]*X[1] - X[2]*X[3] &gt; 0){\n a &lt;- X[4]\n b &lt;- X[3]\n c &lt;- X[2]\n e &lt;- X[1]\n goodStart = TRUE\n }\n\n}\n\n#Borwein &amp; Borwein page 324 Theorem 3 condition\nif ( a*e - b*c &lt;= 0){stop(\"Borwein &amp; Borwein coefficient condition not ok\")}\n\n#Borchardt condition - seems to be wrong\n#if ( a - b - c + e &lt;= 0){stop(\"coefficient condition not ok\")}\n\na_i &lt;- a\nb_i &lt;- b\nc_i &lt;- c\ne_i &lt;- e\n\nfor (i in 1:1000) {\n\n tmp_a &lt;- (a_i+b_i+c_i+e_i)/4\n tmp_b &lt;- (sqrt(a_i*b_i) + sqrt(c_i*e_i))/2\n tmp_c &lt;- (sqrt(a_i*c_i) + sqrt(b_i*e_i))/2\n tmp_e &lt;- (sqrt(a_i*e_i) + sqrt(b_i*c_i))/2\n\n a_i &lt;- tmp_a\n b_i &lt;- tmp_b\n c_i &lt;- tmp_c\n e_i &lt;- tmp_e\n\n}\n\n#print(max(a_i,b_i,c_i,e_i)-min(a_i,b_i,c_i,e_i))\n\n#just pick any of 
the four\ng &lt;- a_i\n\n#compute close form solution\nI1 &lt;- pi^2/g\nprint(\"sequence\")\nprint(I1)\n\n#page 617 bottom\nA &lt;- a + b + c + e\nB &lt;- a + b - c - e\nC &lt;- a - b + c - e\nE &lt;- a - b - c + e\n\nB1 &lt;- (sqrt(A*B)+sqrt(C*E))/2\nB2 &lt;- (sqrt(A*B)-sqrt(C*E))/2\nC1 &lt;- (sqrt(A*C)+sqrt(B*E))/2\nC2 &lt;- (sqrt(A*C)-sqrt(B*E))/2\nE1 &lt;- (sqrt(A*E)+sqrt(B*C))/2 #this seems to be wrong in the original\nE2 &lt;- (sqrt(A*E)-sqrt(B*C))/2\n\nDelta &lt;- (a*b*c*e*B1*C1*E1*B2*C2*E2)^(1/4)\nalpha0 &lt;- a*c*B1/Delta\nalpha1 &lt;- c*C1*E1/Delta\nalpha2 &lt;- a*C2*E1/Delta\nalpha3 &lt;- B1*C1*C2/Delta\n\nintegrand &lt;- function(x,y,param){\n\n alpha0 &lt;- param$alpha0\n alpha1 &lt;- param$alpha1\n alpha2 &lt;- param$alpha2\n alpha3 &lt;- param$alpha3\n\n Rx &lt;- x*(x-alpha0)*(x-alpha1)*(x-alpha2)*(x-alpha3)\n Ry &lt;- y*(y-alpha0)*(y-alpha1)*(y-alpha2)*(y-alpha3)\n\n z &lt;- (x-y) / sqrt(Rx*Ry)\n\n return(z)\n\n}\n\np &lt;- list(alpha0=alpha0,alpha1=alpha1,alpha2=alpha2,alpha3=alpha3)\n\nlibrary(statmod)\n\ngauss &lt;- gauss.quad(500,kind=\"legendre\")\nw &lt;- gauss$weights\nx &lt;- gauss$nodes\n\nupper_x &lt;- alpha1\nlower_x &lt;- alpha2\nupper_y &lt;- alpha3\nlower_y &lt;- 0\n\nI &lt;- 0\nfor (i in 1:length(x)) {\n for (j in 1:length(x)) {\n\n xx &lt;- (upper_x-lower_x)*x[i]/2 + (upper_x+lower_x)/2\n yy &lt;- (upper_y-lower_y)*x[j]/2 + (upper_y+lower_y)/2\n ww &lt;- w[i]*w[j]\n\n I &lt;- I + ww*integrand(xx,yy,p)\n\n }\n\n}\n\nI &lt;- (upper_x-lower_x)*(upper_y-lower_y)*I/4\n\nprint(\"numerical integral:\")\nprint(I)\n</code></pre>\n" }, { "answer_id": 2654571, "score": 1, "text": "<p>Here are just a few remarks.</p>\n\n<p>Borchart's original article (referred to as Berl. Monatsber. 
, pages 611–621) is fully available (in German) at <a href=\"http://bibliothek.bbaw.de/bibliothek-digital/digitalequellen/schriften/anzeige/index_html?band=09-mon/1876&amp;seite:int=647\" rel=\"nofollow noreferrer\">http://bibliothek.bbaw.de/bibliothek-digital/digitalequellen/schriften/anzeige/index_html?band=09-mon/1876&amp;seite:int=647</a>.</p>\n\n<p>However this article itself uses work already done in a longer older article (where most of the hard compatations are done), (referred to as (Mem. Acad. Berlin, 1878, pp. 33-96), and also fully available online at <a href=\"http://bibliothek.bbaw.de/bbaw/bibliothek-digital/digitalequellen/schriften/anzeige/index_html?band=07-abh/1878&amp;seite:int=93\" rel=\"nofollow noreferrer\">http://bibliothek.bbaw.de/bbaw/bibliothek-digital/digitalequellen/schriften/anzeige/index_html?band=07-abh/1878&amp;seite:int=93</a>.</p>\n\n<p>Your square-root of negative number problem can be solved very simply by starting from $1$ (or any nonzero index) instead of $0$. 
Because of Borchardt's identity \n$$\na_{n+1}+\\varepsilon b_{n+1}+ \\varepsilon' c_{n+1}+\\varepsilon \\varepsilon'd_{n+1}=\\frac{(\\sqrt{a_n}+\\varepsilon \\sqrt{b_n}+ \\varepsilon'\\sqrt{c_n}+\\varepsilon \\varepsilon'\\sqrt{d_n})^2}{4} \\tag{1}\n$$</p>\n\n<p>for any two signs $\\varepsilon,\\varepsilon'$ we have for $n\\geq 1$,\n$$a_{n+1}+\\varepsilon b_{n+1}+ \\varepsilon ' c_{n+1}+\\varepsilon \\varepsilon 'd_{n+1} \\geq 0 \\tag{2}$$ </p>\n\n<p>So if you use any $a_1,b_1,c_1,d_1$ instead of $a_0,b_0,c_0,d_0$ to define the $B,C,\\alpha$ constants, you will get only real and positive numbers.</p>\n" }, { "answer_id": 2655876, "score": 2, "text": "<p>Ok, my answer got a little bit out of hand, but I hope it helps to figure out what the problem is.\nIn my opinion the problems are mainly due to typos that are carried over.\nMy answer is based on two articles.\nOne is the original article of Borchardt, where the link was provided in Ewan's answer.</p>\n\n<p>[1] Borchardt, K. W. (1876): Hr. Borchardt las über das arithmetisch-geometrische Mittel aus vier Elementen. Monatsberichte der Königlich Preußischen Akademie der Wissenschaften zu Berlin (Berlin: Verl. d. Kgl. Akad. d. Wiss., 1877), Berlin. (<a href=\"http://bibliothek.bbaw.de/bibliothek-digital/digitalequellen/schriften/anzeige/index_html?band=09-mon/1876&amp;seite:int=646\" rel=\"nofollow noreferrer\">http://bibliothek.bbaw.de/bibliothek-digital/digitalequellen/schriften/anzeige/index_html?band=09-mon/1876&amp;seite:int=646</a>)</p>\n\n<p>[2] J. M. Borwein and P. B. Borwein. On the Mean Iteration $(a,b) \\leftarrow \\left(\\frac{a+3b}{4},\\frac{\\sqrt{ab}+b}{2}\\right)$. Mathematics of Computation, 53(187):311–326, 1989. (<a href=\"https://pdfs.semanticscholar.org/5ead/200192d495859702eb47bc54babb20ab75ae.pdf\" rel=\"nofollow noreferrer\">https://pdfs.semanticscholar.org/5ead/200192d495859702eb47bc54babb20ab75ae.pdf</a>)</p>\n\n<p>I didn't double check the mathematics (i.e. 
the reasoning and proofs) in [1] and [2], but I compared the relevant formulas and implemented them numerically.\nThis finally led to a solution that I think is numerically satisfying (but see below).</p>\n\n<p>Basically, [2] uses the same definitions as in the question, so I'll start from [1] and highlight the differences.</p>\n\n<p>The first important difference are the conditions on $a,b,c,e$.\nIn [1] (on top of page 612) the <em>real</em> numbers $a,b,c,e$ have to be such that (i) $a &gt; b &gt; c &gt; e &gt; 0$ and (ii) $a - b - c + e &gt; 0$.\nI will call this condition $(C)$.\nOn the other hand in [2] page 324 we have $a &gt; b &gt; c &gt; e &gt; 0$ such that $ae - bc &gt; 0$, and I'll call this condition $(C^*)$.</p>\n\n<p>The conditions $(C)$ and $(C^*)$ are clearly there to prevent problems with negative square roots etc., which should already solve some of the problems.\nHowever, it seems (numerically) that $(C^*)$ is the one that should finally be used (see below).</p>\n\n<p>From the original article [1], page 612, I take $a_0 = a, b_0 = b, c_0 = c, e_0 = e$ and\n\\begin{align}\na_{n+1} &amp;= (a_n+b_n+c_n+e_n)/4,\\\\\nb_{n+1} &amp;= (\\sqrt{a_nb_n}+\\sqrt{c_ne_n})/2,\\\\\nc_{n+1} &amp;= (\\sqrt{a_nc_n}+\\sqrt{b_ne_n})/2,\\\\\ne_{n+1} &amp;= (\\sqrt{a_ne_n}+\\sqrt{b_nc_n})/2,\n\\end{align}\nand I set $g = \\lim_{n \\to \\infty} a_n$ (the limits are all the same, so I pick wlog the one for $a_n$).</p>\n\n<p>From [1], page 617, I also take\n\\begin{align}\nA &amp;= a+b+c+e,\\\\\nB &amp;= a+b-c-e,\\\\\nC &amp;= a-b+c-e,\\\\\nE &amp;= a-b-c+e,\n\\end{align}\nand\n\\begin{align}\nB_1 &amp;= (\\sqrt{AB}+\\sqrt{CE})/2,\\\\\nB_2 &amp;= (\\sqrt{AB}-\\sqrt{CE})/2,\\\\\nC_1 &amp;= (\\sqrt{AC}+\\sqrt{BE})/2,\\\\\nC_2 &amp;= (\\sqrt{AC}-\\sqrt{BE})/2,\\\\\nE_1 &amp;= (\\sqrt{AE}+\\sqrt{BC})/2,\\\\\nE_2 &amp;= (\\sqrt{AE}-\\sqrt{BC})/2,\n\\end{align}\nwhere it is important to notice that [1] defines $E_1 = (\\sqrt{AE}-\\sqrt{BC})/2 = E_2$ which I believe is a typo 
and hence I switch the $-$ for a $+$.\nSo far we are in agreement with the definitions in the question and in [2], but considering the values of $\\alpha_i$, $i=0,1,2,3$ there are differences.\nFrom [1], page 618, I take\n\\begin{align}\n\\Delta = (abce B_1 C_1 E_1 B_2 C_2 E_2)^{1/4},\n\\end{align}\nwhich is different from the question or the definition in [2], see page 325, in that I use the starting values $a,b,c,e$ and not the derived values $A,B,C,E$ (as stated in [1]).\nThe values of $\\alpha_i$ also differ between [1] and [2].\nFrom [1], page 618, I take\n\\begin{align}\n\\alpha_0 &amp;= \\frac{acB_1}{\\Delta},\\\\\n\\alpha_1 &amp;= \\frac{c C_1 E_1}{\\Delta},\\\\\n\\alpha_2 &amp;= \\frac{a C_2 E_1}{\\Delta},\\\\\n\\alpha_3 &amp;= \\frac{B_1 C_1 C_2}{\\Delta},\n\\end{align}\nwhere again the starting values are used, and not $A,B,C,E$ as in the question or in [3], page 325.</p>\n\n<p>All sources however agree on the definition of $R(x)$ as\n\\begin{align}\nR(x) = x (x-\\alpha_0)(x-\\alpha_1)(x-\\alpha_2)(x-\\alpha_3).\n\\end{align}</p>\n\n<p>Now [1], page 618, states that \n\\begin{align}\n4 \\frac{\\pi^2}{g} = \\int_{0}^{\\alpha_3}\\int_{\\alpha_2}^{\\alpha_1} \\frac{x-y}{\\sqrt{R(x)R(y)}} dx dy,\n\\end{align}\nwhere it is important to notice the factor $4$ that is missing in [2] and in the question.</p>\n\n<p>My numerical experiments show that neither [1] nor [2] is correct.\nIn fact I think the following statement should hold:</p>\n\n<p><strong>Theorem:</strong>\nFor $a &gt; b &gt; c &gt; e &gt; 0$ such that $ae - bc &gt; 0$ we have\n\\begin{align}\n\\frac{\\pi^2}{g} = \\int_{0}^{\\alpha_3}\\int_{\\alpha_2}^{\\alpha_1} \\frac{x-y}{\\sqrt{R(x)R(y)}} dx dy,\n\\end{align}\nwhere $g = \\lim_{n \\to \\infty} a_n$ and $\\alpha_i$, $i=0,1,2,3$ is defined as in [1], i.e., as above.</p>\n\n<p>It is important to notice that the statement</p>\n\n<ol>\n<li>uses condition $(C^*)$ from [2], page 324,</li>\n<li>uses the definition from [1] regarding $\\Delta$ and 
$\\alpha_i$, $i=0,1,2,3$,</li>\n<li>and does not include the factor $4$ which is present in [1] but not in [2].</li>\n</ol>\n\n<p>In summary, it's a wild mix of [1] and [2]...</p>\n\n<p>I got to this basically following [1] and then using $(C^*)$ instead of $(C)$ and realizing that the value was $4$ times too high.\nYou can double check this with your favorite numerical software tool, I have added my R implementation below.\nFinally, I have to say that the values don't match perfectly.\nThe numerical integration of the rational function seems to be difficult, but the numbers are always (for random starting values obeying $(C^*)$) close (enough) together in my opinion.\nAs long as I enforce $(C^*)$ I also don't experience any trouble with negative square roots or the like (but this is possible if $(C)$ is used).</p>\n\n<hr>\n\n<h2>R Implementation</h2>\n\n<pre><code>#non-random starting values that work with Borwein &amp; Borwein page 324 Theorem 3 condition\na &lt;- 7\nb &lt;- 4\nc &lt;- 3\ne &lt;- 2\n\n#get some random starting values that are OK\ngoodStart &lt;- FALSE \nwhile (goodStart==FALSE){\n\n X &lt;- sort(rexp(4,rate=20))\n\n #Borwein &amp; Borwein page 324 Theorem 3 condition\n if ( X[4]*X[1] - X[2]*X[3] &gt; 0){\n a &lt;- X[4]\n b &lt;- X[3]\n c &lt;- X[2]\n e &lt;- X[1]\n goodStart = TRUE\n }\n\n}\n\n#Borwein &amp; Borwein page 324 Theorem 3 condition\nif ( a*e - b*c &lt;= 0){stop(\"Borwein &amp; Borwein coefficient condition not ok\")}\n\n#Borchardt condition - seems to be wrong\n#if ( a - b - c + e &lt;= 0){stop(\"coefficient condition not ok\")}\n\na_i &lt;- a\nb_i &lt;- b\nc_i &lt;- c\ne_i &lt;- e\n\nfor (i in 1:1000) {\n\n tmp_a &lt;- (a_i+b_i+c_i+e_i)/4\n tmp_b &lt;- (sqrt(a_i*b_i) + sqrt(c_i*e_i))/2\n tmp_c &lt;- (sqrt(a_i*c_i) + sqrt(b_i*e_i))/2\n tmp_e &lt;- (sqrt(a_i*e_i) + sqrt(b_i*c_i))/2\n\n a_i &lt;- tmp_a\n b_i &lt;- tmp_b\n c_i &lt;- tmp_c\n e_i &lt;- tmp_e\n\n}\n\n#print(max(a_i,b_i,c_i,e_i)-min(a_i,b_i,c_i,e_i))\n\n#just pick any of 
the four\ng &lt;- a_i\n\n#compute closed form solution\nI1 &lt;- pi^2/g\nprint(\"sequence\")\nprint(I1)\n\n#page 617 bottom\nA &lt;- a + b + c + e\nB &lt;- a + b - c - e\nC &lt;- a - b + c - e\nE &lt;- a - b - c + e\n\nB1 &lt;- (sqrt(A*B)+sqrt(C*E))/2\nB2 &lt;- (sqrt(A*B)-sqrt(C*E))/2\nC1 &lt;- (sqrt(A*C)+sqrt(B*E))/2\nC2 &lt;- (sqrt(A*C)-sqrt(B*E))/2\nE1 &lt;- (sqrt(A*E)+sqrt(B*C))/2 #this seems to be wrong in the original\nE2 &lt;- (sqrt(A*E)-sqrt(B*C))/2\n\nDelta &lt;- (a*b*c*e*B1*C1*E1*B2*C2*E2)^(1/4)\nalpha0 &lt;- a*c*B1/Delta\nalpha1 &lt;- c*C1*E1/Delta\nalpha2 &lt;- a*C2*E1/Delta\nalpha3 &lt;- B1*C1*C2/Delta\n\nintegrand &lt;- function(x,y,param){\n\n alpha0 &lt;- param$alpha0\n alpha1 &lt;- param$alpha1\n alpha2 &lt;- param$alpha2\n alpha3 &lt;- param$alpha3\n\n Rx &lt;- x*(x-alpha0)*(x-alpha1)*(x-alpha2)*(x-alpha3)\n Ry &lt;- y*(y-alpha0)*(y-alpha1)*(y-alpha2)*(y-alpha3)\n\n z &lt;- (x-y) / sqrt(Rx*Ry)\n\n return(z)\n\n}\n\np &lt;- list(alpha0=alpha0,alpha1=alpha1,alpha2=alpha2,alpha3=alpha3)\n\nlibrary(statmod)\n\ngauss &lt;- gauss.quad(500,kind=\"legendre\")\nw &lt;- gauss$weights\nx &lt;- gauss$nodes\n\nupper_x &lt;- alpha1\nlower_x &lt;- alpha2\nupper_y &lt;- alpha3\nlower_y &lt;- 0\n\nI &lt;- 0\nfor (i in 1:length(x)) {\n for (j in 1:length(x)) {\n\n xx &lt;- (upper_x-lower_x)*x[i]/2 + (upper_x+lower_x)/2\n yy &lt;- (upper_y-lower_y)*x[j]/2 + (upper_y+lower_y)/2\n ww &lt;- w[i]*w[j]\n\n I &lt;- I + ww*integrand(xx,yy,p)\n\n }\n\n}\n\nI &lt;- (upper_x-lower_x)*(upper_y-lower_y)*I/4\n\nprint(\"numerical integral:\")\nprint(I)\n</code></pre>\n" }, { "answer_id": 2654571, "score": 1, "text": "<p>Here are just a few remarks.</p>\n\n<p>Borchardt's original article (referred to as Berl. Monatsber. 
, pages 611–621) is fully available (in German) at <a href=\"http://bibliothek.bbaw.de/bibliothek-digital/digitalequellen/schriften/anzeige/index_html?band=09-mon/1876&amp;seite:int=647\" rel=\"nofollow noreferrer\">http://bibliothek.bbaw.de/bibliothek-digital/digitalequellen/schriften/anzeige/index_html?band=09-mon/1876&amp;seite:int=647</a>.</p>\n\n<p>However, this article itself builds on work already done in a longer, older article (where most of the hard computations are done), referred to as Mem. Acad. Berlin, 1878, pp. 33–96, and also fully available online at <a href=\"http://bibliothek.bbaw.de/bbaw/bibliothek-digital/digitalequellen/schriften/anzeige/index_html?band=07-abh/1878&amp;seite:int=93\" rel=\"nofollow noreferrer\">http://bibliothek.bbaw.de/bbaw/bibliothek-digital/digitalequellen/schriften/anzeige/index_html?band=07-abh/1878&amp;seite:int=93</a>.</p>\n\n<p>Your square-root of negative number problem can be solved very simply by starting from $1$ (or any nonzero index) instead of $0$. 
Because of Borchardt's identity \n$$\na_{n+1}+\\varepsilon b_{n+1}+ \\varepsilon' c_{n+1}+\\varepsilon \\varepsilon'd_{n+1}=\\frac{(\\sqrt{a_n}+\\varepsilon \\sqrt{b_n}+ \\varepsilon'\\sqrt{c_n}+\\varepsilon \\varepsilon'\\sqrt{d_n})^2}{4} \\tag{1}\n$$</p>\n\n<p>for any two signs $\\varepsilon,\\varepsilon'$ we have for $n\\geq 1$,\n$$a_{n+1}+\\varepsilon b_{n+1}+ \\varepsilon ' c_{n+1}+\\varepsilon \\varepsilon 'd_{n+1} \\geq 0 \\tag{2}$$ </p>\n\n<p>So if you use any $a_1,b_1,c_1,d_1$ instead of $a_0,b_0,c_0,d_0$ to define the $B,C,\\alpha$ constants, you will get only real and positive numbers.</p>\n" }, { "answer_id": 2654571, "score": 1, "text": "<p>Here are just a few remarks.</p>\n\n<p>Borchardt's original article (referred to as Berl. Monatsber. 
, pages 611–621) is fully available (in German) at <a href=\"http://bibliothek.bbaw.de/bibliothek-digital/digitalequellen/schriften/anzeige/index_html?band=09-mon/1876&amp;seite:int=647\" rel=\"nofollow noreferrer\">http://bibliothek.bbaw.de/bibliothek-digital/digitalequellen/schriften/anzeige/index_html?band=09-mon/1876&amp;seite:int=647</a>.</p>\n\n<p>However this article itself uses work already done in a longer older article (where most of the hard compatations are done), (referred to as (Mem. Acad. Berlin, 1878, pp. 33-96), and also fully available online at <a href=\"http://bibliothek.bbaw.de/bbaw/bibliothek-digital/digitalequellen/schriften/anzeige/index_html?band=07-abh/1878&amp;seite:int=93\" rel=\"nofollow noreferrer\">http://bibliothek.bbaw.de/bbaw/bibliothek-digital/digitalequellen/schriften/anzeige/index_html?band=07-abh/1878&amp;seite:int=93</a>.</p>\n\n<p>Your square-root of negative number problem can be solved very simply by starting from $1$ (or any nonzero index) instead of $0$. 
Because of Borchardt's identity \n$$\na_{n+1}+\\varepsilon b_{n+1}+ \\varepsilon' c_{n+1}+\\varepsilon \\varepsilon'd_{n+1}=\\frac{(\\sqrt{a_n}+\\varepsilon \\sqrt{b_n}+ \\varepsilon'\\sqrt{c_n}+\\varepsilon \\varepsilon'\\sqrt{d_n})^2}{4} \\tag{1}\n$$</p>\n\n<p>for any two signs $\\varepsilon,\\varepsilon'$ we have for $n\\geq 1$,\n$$a_{n+1}+\\varepsilon b_{n+1}+ \\varepsilon ' c_{n+1}+\\varepsilon \\varepsilon 'd_{n+1} \\geq 0 \\tag{2}$$ </p>\n\n<p>So if you use any $a_1,b_1,c_1,d_1$ instead of $a_0,b_0,c_0,d_0$ to define the $B,C,\\alpha$ constants, you will get only real and positive numbers.</p>\n" }, { "answer_id": 2655876, "score": 2, "text": "<p>Ok, my answer got a little bit out of hand, but I hope it helps to figure out what the problem is.\nIn my opinion the problems are mainly due to typos that are carried over.\nMy answer is based on two articles.\nOne is the original article of Borchardt, where the link was provided in Ewan's answer.</p>\n\n<p>[1] Borchardt, K. W. (1876): Hr. Borchardt las über das arithmetisch-geometrische Mittel aus vier Elementen. Monatsberichte der Königlich Preußischen Akademie der Wissenschaften zu Berlin (Berlin: Verl. d. Kgl. Akad. d. Wiss., 1877), Berlin. (<a href=\"http://bibliothek.bbaw.de/bibliothek-digital/digitalequellen/schriften/anzeige/index_html?band=09-mon/1876&amp;seite:int=646\" rel=\"nofollow noreferrer\">http://bibliothek.bbaw.de/bibliothek-digital/digitalequellen/schriften/anzeige/index_html?band=09-mon/1876&amp;seite:int=646</a>)</p>\n\n<p>[2] J. M. Borwein and P. B. Borwein. On the Mean Iteration $(a,b) \\leftarrow \\left(\\frac{a+3b}{4},\\frac{\\sqrt{ab}+b}{2}\\right)$. Mathematics of Computation, 53(187):311–326, 1989. (<a href=\"https://pdfs.semanticscholar.org/5ead/200192d495859702eb47bc54babb20ab75ae.pdf\" rel=\"nofollow noreferrer\">https://pdfs.semanticscholar.org/5ead/200192d495859702eb47bc54babb20ab75ae.pdf</a>)</p>\n\n<p>I didn't double check the mathematics (i.e. 
the reasoning and proofs) in [1] and [2], but I compared the relevant formulas and implemented them numerically.\nThis finally led to a solution that I think is numerically satisfying (but see below).</p>\n\n<p>Basically, [2] uses the same definitions as in the question, so I'll start from [1] and highlight the differences.</p>\n\n<p>The first important difference are the conditions on $a,b,c,e$.\nIn [1] (on top of page 612) the <em>real</em> numbers $a,b,c,e$ have to be such that (i) $a &gt; b &gt; c &gt; e &gt; 0$ and (ii) $a - b - c + e &gt; 0$.\nI will call this condition $(C)$.\nOn the other hand in [2] page 324 we have $a &gt; b &gt; c &gt; e &gt; 0$ such that $ae - bc &gt; 0$, and I'll call this condition $(C^*)$.</p>\n\n<p>The conditions $(C)$ and $(C^*)$ are clearly there to prevent problems with negative square roots etc., which should already solve some of the problems.\nHowever, it seems (numerically) that $(C^*)$ is the one that should finally be used (see below).</p>\n\n<p>From the original article [1], page 612, I take $a_0 = a, b_0 = b, c_0 = c, e_0 = e$ and\n\\begin{align}\na_{n+1} &amp;= (a_n+b_n+c_n+e_n)/4,\\\\\nb_{n+1} &amp;= (\\sqrt{a_nb_n}+\\sqrt{c_ne_n})/2,\\\\\nc_{n+1} &amp;= (\\sqrt{a_nc_n}+\\sqrt{b_ne_n})/2,\\\\\ne_{n+1} &amp;= (\\sqrt{a_ne_n}+\\sqrt{b_nc_n})/2,\n\\end{align}\nand I set $g = \\lim_{n \\to \\infty} a_n$ (the limits are all the same, so I pick wlog the one for $a_n$).</p>\n\n<p>From [1], page 617, I also take\n\\begin{align}\nA &amp;= a+b+c+e,\\\\\nB &amp;= a+b-c-e,\\\\\nC &amp;= a-b+c-e,\\\\\nE &amp;= a-b-c+e,\n\\end{align}\nand\n\\begin{align}\nB_1 &amp;= (\\sqrt{AB}+\\sqrt{CE})/2,\\\\\nB_2 &amp;= (\\sqrt{AB}-\\sqrt{CE})/2,\\\\\nC_1 &amp;= (\\sqrt{AC}+\\sqrt{BE})/2,\\\\\nC_2 &amp;= (\\sqrt{AC}-\\sqrt{BE})/2,\\\\\nE_1 &amp;= (\\sqrt{AE}+\\sqrt{BC})/2,\\\\\nE_2 &amp;= (\\sqrt{AE}-\\sqrt{BC})/2,\n\\end{align}\nwhere it is important to notice that [1] defines $E_1 = (\\sqrt{AE}-\\sqrt{BC})/2 = E_2$ which I believe is a typo 
and hence I switch the $-$ for a $+$.\nSo far we are in agreement with the definitions in the question and in [2], but considering the values of $\\alpha_i$, $i=0,1,2,3$ there are differences.\nFrom [1], page 618, I take\n\\begin{align}\n\\Delta = (abce B_1 C_1 E_1 B_2 C_2 E_2)^{1/4},\n\\end{align}\nwhich is different from the question or the definition in [2], see page 325, in that I use the starting values $a,b,c,e$ and not the derived values $A,B,C,E$ (as stated in [1]).\nThe values of $\\alpha_i$ also differ between [1] and [2].\nFrom [1], page 618, I take\n\\begin{align}\n\\alpha_0 &amp;= \\frac{acB_1}{\\Delta},\\\\\n\\alpha_1 &amp;= \\frac{c C_1 E_1}{\\Delta},\\\\\n\\alpha_2 &amp;= \\frac{a C_2 E_1}{\\Delta},\\\\\n\\alpha_3 &amp;= \\frac{B_1 C_1 C_2}{\\Delta},\n\\end{align}\nwhere again the starting values are used, and not $A,B,C,E$ as in the question or in [3], page 325.</p>\n\n<p>All sources however agree on the definition of $R(x)$ as\n\\begin{align}\nR(x) = x (x-\\alpha_0)(x-\\alpha_1)(x-\\alpha_2)(x-\\alpha_3).\n\\end{align}</p>\n\n<p>Now [1], page 618, states that \n\\begin{align}\n4 \\frac{\\pi^2}{g} = \\int_{0}^{\\alpha_3}\\int_{\\alpha_2}^{\\alpha_1} \\frac{x-y}{\\sqrt{R(x)R(y)}} dx dy,\n\\end{align}\nwhere it is important to notice the factor $4$ that is missing in [2] and in the question.</p>\n\n<p>My numerical experiments show that neither [1] nor [2] is correct.\nIn fact I think the following statement should hold:</p>\n\n<p><strong>Theorem:</strong>\nFor $a &gt; b &gt; c &gt; e &gt; 0$ such that $ae - bc &gt; 0$ we have\n\\begin{align}\n\\frac{\\pi^2}{g} = \\int_{0}^{\\alpha_3}\\int_{\\alpha_2}^{\\alpha_1} \\frac{x-y}{\\sqrt{R(x)R(y)}} dx dy,\n\\end{align}\nwhere $g = \\lim_{n \\to \\infty} a_n$ and $\\alpha_i$, $i=0,1,2,3$ is defined as in [1], i.e., as above.</p>\n\n<p>It is important to notice that the statement</p>\n\n<ol>\n<li>uses condition $(C^*)$ from [2], page 324,</li>\n<li>uses the definition from [1] regarding $\\Delta$ and 
$\\alpha_i$, $i=0,1,2,3$,</li>\n<li>and does not include the factor $4$ which is present in [1] but not in [2].</li>\n</ol>\n\n<p>In summary, it's a wild mix of [1] and [2]...</p>\n\n<p>I got to this basically following [1] and then using $(C^*)$ instead of $(C)$ and realizing that the value was $4$ times too high.\nYou can double check this with your favorite numerical software tool, I have added my R implementation below.\nFinally, I have to say that the values don't match perfectly.\nThe numerical integration of the rational function seems to be difficult, but the numbers are always (for random starting values obeying $(C^*)$) close (enough) together in my opinion.\nAs long as I enforce $(C^*)$ I also don't experience any trouble with negative square roots or the like (but this is possible if $(C)$ is used).</p>\n\n<hr>\n\n<h2>R Implementation</h2>\n\n<pre><code>#non-random starting values that work with Borwein &amp; Borwein page 324 Theorem 3 condition\na &lt;- 7\nb &lt;- 4\nc &lt;- 3\ne &lt;- 2\n\n#get some random starting values that are OK\ngoodStart &lt;- FALSE \nwhile (goodStart==FALSE){\n\n X &lt;- sort(rexp(4,rate=20))\n\n #Borwein &amp; Borwein page 324 Theorem 3 condition\n if ( X[4]*X[1] - X[2]*X[3] &gt; 0){\n a &lt;- X[4]\n b &lt;- X[3]\n c &lt;- X[2]\n e &lt;- X[1]\n goodStart = TRUE\n }\n\n}\n\n#Borwein &amp; Borwein page 324 Theorem 3 condition\nif ( a*e - b*c &lt;= 0){stop(\"Borwein &amp; Borwein coefficient condition not ok\")}\n\n#Borchardt condition - seems to be wrong\n#if ( a - b - c + e &lt;= 0){stop(\"coefficient condition not ok\")}\n\na_i &lt;- a\nb_i &lt;- b\nc_i &lt;- c\ne_i &lt;- e\n\nfor (i in 1:1000) {\n\n tmp_a &lt;- (a_i+b_i+c_i+e_i)/4\n tmp_b &lt;- (sqrt(a_i*b_i) + sqrt(c_i*e_i))/2\n tmp_c &lt;- (sqrt(a_i*c_i) + sqrt(b_i*e_i))/2\n tmp_e &lt;- (sqrt(a_i*e_i) + sqrt(b_i*c_i))/2\n\n a_i &lt;- tmp_a\n b_i &lt;- tmp_b\n c_i &lt;- tmp_c\n e_i &lt;- tmp_e\n\n}\n\n#print(max(a_i,b_i,c_i,e_i)-min(a_i,b_i,c_i,e_i))\n\n#just pick any of 
the four\ng &lt;- a_i\n\n#compute close form solution\nI1 &lt;- pi^2/g\nprint(\"sequence\")\nprint(I1)\n\n#page 617 bottom\nA &lt;- a + b + c + e\nB &lt;- a + b - c - e\nC &lt;- a - b + c - e\nE &lt;- a - b - c + e\n\nB1 &lt;- (sqrt(A*B)+sqrt(C*E))/2\nB2 &lt;- (sqrt(A*B)-sqrt(C*E))/2\nC1 &lt;- (sqrt(A*C)+sqrt(B*E))/2\nC2 &lt;- (sqrt(A*C)-sqrt(B*E))/2\nE1 &lt;- (sqrt(A*E)+sqrt(B*C))/2 #this seems to be wrong in the original\nE2 &lt;- (sqrt(A*E)-sqrt(B*C))/2\n\nDelta &lt;- (a*b*c*e*B1*C1*E1*B2*C2*E2)^(1/4)\nalpha0 &lt;- a*c*B1/Delta\nalpha1 &lt;- c*C1*E1/Delta\nalpha2 &lt;- a*C2*E1/Delta\nalpha3 &lt;- B1*C1*C2/Delta\n\nintegrand &lt;- function(x,y,param){\n\n alpha0 &lt;- param$alpha0\n alpha1 &lt;- param$alpha1\n alpha2 &lt;- param$alpha2\n alpha3 &lt;- param$alpha3\n\n Rx &lt;- x*(x-alpha0)*(x-alpha1)*(x-alpha2)*(x-alpha3)\n Ry &lt;- y*(y-alpha0)*(y-alpha1)*(y-alpha2)*(y-alpha3)\n\n z &lt;- (x-y) / sqrt(Rx*Ry)\n\n return(z)\n\n}\n\np &lt;- list(alpha0=alpha0,alpha1=alpha1,alpha2=alpha2,alpha3=alpha3)\n\nlibrary(statmod)\n\ngauss &lt;- gauss.quad(500,kind=\"legendre\")\nw &lt;- gauss$weights\nx &lt;- gauss$nodes\n\nupper_x &lt;- alpha1\nlower_x &lt;- alpha2\nupper_y &lt;- alpha3\nlower_y &lt;- 0\n\nI &lt;- 0\nfor (i in 1:length(x)) {\n for (j in 1:length(x)) {\n\n xx &lt;- (upper_x-lower_x)*x[i]/2 + (upper_x+lower_x)/2\n yy &lt;- (upper_y-lower_y)*x[j]/2 + (upper_y+lower_y)/2\n ww &lt;- w[i]*w[j]\n\n I &lt;- I + ww*integrand(xx,yy,p)\n\n }\n\n}\n\nI &lt;- (upper_x-lower_x)*(upper_y-lower_y)*I/4\n\nprint(\"numerical integral:\")\nprint(I)\n</code></pre>\n" }, { "answer_id": 2654571, "score": 1, "text": "<p>Here are just a few remarks.</p>\n\n<p>Borchart's original article (referred to as Berl. Monatsber. 
, pages 611–621) is fully available (in German) at <a href=\"http://bibliothek.bbaw.de/bibliothek-digital/digitalequellen/schriften/anzeige/index_html?band=09-mon/1876&amp;seite:int=647\" rel=\"nofollow noreferrer\">http://bibliothek.bbaw.de/bibliothek-digital/digitalequellen/schriften/anzeige/index_html?band=09-mon/1876&amp;seite:int=647</a>.</p>\n\n<p>However this article itself uses work already done in a longer older article (where most of the hard compatations are done), (referred to as (Mem. Acad. Berlin, 1878, pp. 33-96), and also fully available online at <a href=\"http://bibliothek.bbaw.de/bbaw/bibliothek-digital/digitalequellen/schriften/anzeige/index_html?band=07-abh/1878&amp;seite:int=93\" rel=\"nofollow noreferrer\">http://bibliothek.bbaw.de/bbaw/bibliothek-digital/digitalequellen/schriften/anzeige/index_html?band=07-abh/1878&amp;seite:int=93</a>.</p>\n\n<p>Your square-root of negative number problem can be solved very simply by starting from $1$ (or any nonzero index) instead of $0$. 
Because of Borchardt's identity \n$$\na_{n+1}+\\varepsilon b_{n+1}+ \\varepsilon' c_{n+1}+\\varepsilon \\varepsilon'd_{n+1}=\\frac{(\\sqrt{a_n}+\\varepsilon \\sqrt{b_n}+ \\varepsilon'\\sqrt{c_n}+\\varepsilon \\varepsilon'\\sqrt{d_n})^2}{4} \\tag{1}\n$$</p>\n\n<p>for any two signs $\\varepsilon,\\varepsilon'$ we have for $n\\geq 1$,\n$$a_{n+1}+\\varepsilon b_{n+1}+ \\varepsilon ' c_{n+1}+\\varepsilon \\varepsilon 'd_{n+1} \\geq 0 \\tag{2}$$ </p>\n\n<p>So if you use any $a_1,b_1,c_1,d_1$ instead of $a_0,b_0,c_0,d_0$ to define the $B,C,\\alpha$ constants, you will get only real and positive numbers.</p>\n" }, { "answer_id": 2655876, "score": 2, "text": "<p>Ok, my answer got a little bit out of hand, but I hope it helps to figure out what the problem is.\nIn my opinion the problems are mainly due to typos that are carried over.\nMy answer is based on two articles.\nOne is the original article of Borchardt, where the link was provided in Ewan's answer.</p>\n\n<p>[1] Borchardt, K. W. (1876): Hr. Borchardt las über das arithmetisch-geometrische Mittel aus vier Elementen. Monatsberichte der Königlich Preußischen Akademie der Wissenschaften zu Berlin (Berlin: Verl. d. Kgl. Akad. d. Wiss., 1877), Berlin. (<a href=\"http://bibliothek.bbaw.de/bibliothek-digital/digitalequellen/schriften/anzeige/index_html?band=09-mon/1876&amp;seite:int=646\" rel=\"nofollow noreferrer\">http://bibliothek.bbaw.de/bibliothek-digital/digitalequellen/schriften/anzeige/index_html?band=09-mon/1876&amp;seite:int=646</a>)</p>\n\n<p>[2] J. M. Borwein and P. B. Borwein. On the Mean Iteration $(a,b) \\leftarrow \\left(\\frac{a+3b}{4},\\frac{\\sqrt{ab}+b}{2}\\right)$. Mathematics of Computation, 53(187):311–326, 1989. (<a href=\"https://pdfs.semanticscholar.org/5ead/200192d495859702eb47bc54babb20ab75ae.pdf\" rel=\"nofollow noreferrer\">https://pdfs.semanticscholar.org/5ead/200192d495859702eb47bc54babb20ab75ae.pdf</a>)</p>\n\n<p>I didn't double check the mathematics (i.e. 
the reasoning and proofs) in [1] and [2], but I compared the relevant formulas and implemented them numerically.\nThis finally led to a solution that I think is numerically satisfying (but see below).</p>\n\n<p>Basically, [2] uses the same definitions as in the question, so I'll start from [1] and highlight the differences.</p>\n\n<p>The first important difference are the conditions on $a,b,c,e$.\nIn [1] (on top of page 612) the <em>real</em> numbers $a,b,c,e$ have to be such that (i) $a &gt; b &gt; c &gt; e &gt; 0$ and (ii) $a - b - c + e &gt; 0$.\nI will call this condition $(C)$.\nOn the other hand in [2] page 324 we have $a &gt; b &gt; c &gt; e &gt; 0$ such that $ae - bc &gt; 0$, and I'll call this condition $(C^*)$.</p>\n\n<p>The conditions $(C)$ and $(C^*)$ are clearly there to prevent problems with negative square roots etc., which should already solve some of the problems.\nHowever, it seems (numerically) that $(C^*)$ is the one that should finally be used (see below).</p>\n\n<p>From the original article [1], page 612, I take $a_0 = a, b_0 = b, c_0 = c, e_0 = e$ and\n\\begin{align}\na_{n+1} &amp;= (a_n+b_n+c_n+e_n)/4,\\\\\nb_{n+1} &amp;= (\\sqrt{a_nb_n}+\\sqrt{c_ne_n})/2,\\\\\nc_{n+1} &amp;= (\\sqrt{a_nc_n}+\\sqrt{b_ne_n})/2,\\\\\ne_{n+1} &amp;= (\\sqrt{a_ne_n}+\\sqrt{b_nc_n})/2,\n\\end{align}\nand I set $g = \\lim_{n \\to \\infty} a_n$ (the limits are all the same, so I pick wlog the one for $a_n$).</p>\n\n<p>From [1], page 617, I also take\n\\begin{align}\nA &amp;= a+b+c+e,\\\\\nB &amp;= a+b-c-e,\\\\\nC &amp;= a-b+c-e,\\\\\nE &amp;= a-b-c+e,\n\\end{align}\nand\n\\begin{align}\nB_1 &amp;= (\\sqrt{AB}+\\sqrt{CE})/2,\\\\\nB_2 &amp;= (\\sqrt{AB}-\\sqrt{CE})/2,\\\\\nC_1 &amp;= (\\sqrt{AC}+\\sqrt{BE})/2,\\\\\nC_2 &amp;= (\\sqrt{AC}-\\sqrt{BE})/2,\\\\\nE_1 &amp;= (\\sqrt{AE}+\\sqrt{BC})/2,\\\\\nE_2 &amp;= (\\sqrt{AE}-\\sqrt{BC})/2,\n\\end{align}\nwhere it is important to notice that [1] defines $E_1 = (\\sqrt{AE}-\\sqrt{BC})/2 = E_2$ which I believe is a typo 
and hence I switch the $-$ for a $+$.\nSo far we are in agreement with the definitions in the question and in [2], but considering the values of $\\alpha_i$, $i=0,1,2,3$ there are differences.\nFrom [1], page 618, I take\n\\begin{align}\n\\Delta = (abce B_1 C_1 E_1 B_2 C_2 E_2)^{1/4},\n\\end{align}\nwhich is different from the question or the definition in [2], see page 325, in that I use the starting values $a,b,c,e$ and not the derived values $A,B,C,E$ (as stated in [1]).\nThe values of $\\alpha_i$ also differ between [1] and [2].\nFrom [1], page 618, I take\n\\begin{align}\n\\alpha_0 &amp;= \\frac{acB_1}{\\Delta},\\\\\n\\alpha_1 &amp;= \\frac{c C_1 E_1}{\\Delta},\\\\\n\\alpha_2 &amp;= \\frac{a C_2 E_1}{\\Delta},\\\\\n\\alpha_3 &amp;= \\frac{B_1 C_1 C_2}{\\Delta},\n\\end{align}\nwhere again the starting values are used, and not $A,B,C,E$ as in the question or in [3], page 325.</p>\n\n<p>All sources however agree on the definition of $R(x)$ as\n\\begin{align}\nR(x) = x (x-\\alpha_0)(x-\\alpha_1)(x-\\alpha_2)(x-\\alpha_3).\n\\end{align}</p>\n\n<p>Now [1], page 618, states that \n\\begin{align}\n4 \\frac{\\pi^2}{g} = \\int_{0}^{\\alpha_3}\\int_{\\alpha_2}^{\\alpha_1} \\frac{x-y}{\\sqrt{R(x)R(y)}} dx dy,\n\\end{align}\nwhere it is important to notice the factor $4$ that is missing in [2] and in the question.</p>\n\n<p>My numerical experiments show that neither [1] nor [2] is correct.\nIn fact I think the following statement should hold:</p>\n\n<p><strong>Theorem:</strong>\nFor $a &gt; b &gt; c &gt; e &gt; 0$ such that $ae - bc &gt; 0$ we have\n\\begin{align}\n\\frac{\\pi^2}{g} = \\int_{0}^{\\alpha_3}\\int_{\\alpha_2}^{\\alpha_1} \\frac{x-y}{\\sqrt{R(x)R(y)}} dx dy,\n\\end{align}\nwhere $g = \\lim_{n \\to \\infty} a_n$ and $\\alpha_i$, $i=0,1,2,3$ is defined as in [1], i.e., as above.</p>\n\n<p>It is important to notice that the statement</p>\n\n<ol>\n<li>uses condition $(C^*)$ from [2], page 324,</li>\n<li>uses the definition from [1] regarding $\\Delta$ and 
$\\alpha_i$, $i=0,1,2,3$,</li>\n<li>and does not include the factor $4$ which is present in [1] but not in [2].</li>\n</ol>\n\n<p>In summary, it's a wild mix of [1] and [2]...</p>\n\n<p>I got to this basically following [1] and then using $(C^*)$ instead of $(C)$ and realizing that the value was $4$ times too high.\nYou can double check this with your favorite numerical software tool, I have added my R implementation below.\nFinally, I have to say that the values don't match perfectly.\nThe numerical integration of the rational function seems to be difficult, but the numbers are always (for random starting values obeying $(C^*)$) close (enough) together in my opinion.\nAs long as I enforce $(C^*)$ I also don't experience any trouble with negative square roots or the like (but this is possible if $(C)$ is used).</p>\n\n<hr>\n\n<h2>R Implementation</h2>\n\n<pre><code>#non-random starting values that work with Borwein &amp; Borwein page 324 Theorem 3 condition\na &lt;- 7\nb &lt;- 4\nc &lt;- 3\ne &lt;- 2\n\n#get some random starting values that are OK\ngoodStart &lt;- FALSE \nwhile (goodStart==FALSE){\n\n X &lt;- sort(rexp(4,rate=20))\n\n #Borwein &amp; Borwein page 324 Theorem 3 condition\n if ( X[4]*X[1] - X[2]*X[3] &gt; 0){\n a &lt;- X[4]\n b &lt;- X[3]\n c &lt;- X[2]\n e &lt;- X[1]\n goodStart = TRUE\n }\n\n}\n\n#Borwein &amp; Borwein page 324 Theorem 3 condition\nif ( a*e - b*c &lt;= 0){stop(\"Borwein &amp; Borwein coefficient condition not ok\")}\n\n#Borchardt condition - seems to be wrong\n#if ( a - b - c + e &lt;= 0){stop(\"coefficient condition not ok\")}\n\na_i &lt;- a\nb_i &lt;- b\nc_i &lt;- c\ne_i &lt;- e\n\nfor (i in 1:1000) {\n\n tmp_a &lt;- (a_i+b_i+c_i+e_i)/4\n tmp_b &lt;- (sqrt(a_i*b_i) + sqrt(c_i*e_i))/2\n tmp_c &lt;- (sqrt(a_i*c_i) + sqrt(b_i*e_i))/2\n tmp_e &lt;- (sqrt(a_i*e_i) + sqrt(b_i*c_i))/2\n\n a_i &lt;- tmp_a\n b_i &lt;- tmp_b\n c_i &lt;- tmp_c\n e_i &lt;- tmp_e\n\n}\n\n#print(max(a_i,b_i,c_i,e_i)-min(a_i,b_i,c_i,e_i))\n\n#just pick any of 
the four\ng &lt;- a_i\n\n#compute close form solution\nI1 &lt;- pi^2/g\nprint(\"sequence\")\nprint(I1)\n\n#page 617 bottom\nA &lt;- a + b + c + e\nB &lt;- a + b - c - e\nC &lt;- a - b + c - e\nE &lt;- a - b - c + e\n\nB1 &lt;- (sqrt(A*B)+sqrt(C*E))/2\nB2 &lt;- (sqrt(A*B)-sqrt(C*E))/2\nC1 &lt;- (sqrt(A*C)+sqrt(B*E))/2\nC2 &lt;- (sqrt(A*C)-sqrt(B*E))/2\nE1 &lt;- (sqrt(A*E)+sqrt(B*C))/2 #this seems to be wrong in the original\nE2 &lt;- (sqrt(A*E)-sqrt(B*C))/2\n\nDelta &lt;- (a*b*c*e*B1*C1*E1*B2*C2*E2)^(1/4)\nalpha0 &lt;- a*c*B1/Delta\nalpha1 &lt;- c*C1*E1/Delta\nalpha2 &lt;- a*C2*E1/Delta\nalpha3 &lt;- B1*C1*C2/Delta\n\nintegrand &lt;- function(x,y,param){\n\n alpha0 &lt;- param$alpha0\n alpha1 &lt;- param$alpha1\n alpha2 &lt;- param$alpha2\n alpha3 &lt;- param$alpha3\n\n Rx &lt;- x*(x-alpha0)*(x-alpha1)*(x-alpha2)*(x-alpha3)\n Ry &lt;- y*(y-alpha0)*(y-alpha1)*(y-alpha2)*(y-alpha3)\n\n z &lt;- (x-y) / sqrt(Rx*Ry)\n\n return(z)\n\n}\n\np &lt;- list(alpha0=alpha0,alpha1=alpha1,alpha2=alpha2,alpha3=alpha3)\n\nlibrary(statmod)\n\ngauss &lt;- gauss.quad(500,kind=\"legendre\")\nw &lt;- gauss$weights\nx &lt;- gauss$nodes\n\nupper_x &lt;- alpha1\nlower_x &lt;- alpha2\nupper_y &lt;- alpha3\nlower_y &lt;- 0\n\nI &lt;- 0\nfor (i in 1:length(x)) {\n for (j in 1:length(x)) {\n\n xx &lt;- (upper_x-lower_x)*x[i]/2 + (upper_x+lower_x)/2\n yy &lt;- (upper_y-lower_y)*x[j]/2 + (upper_y+lower_y)/2\n ww &lt;- w[i]*w[j]\n\n I &lt;- I + ww*integrand(xx,yy,p)\n\n }\n\n}\n\nI &lt;- (upper_x-lower_x)*(upper_y-lower_y)*I/4\n\nprint(\"numerical integral:\")\nprint(I)\n</code></pre>\n" }, { "answer_id": 2654571, "score": 1, "text": "<p>Here are just a few remarks.</p>\n\n<p>Borchart's original article (referred to as Berl. Monatsber. 
, pages 611–621) is fully available (in German) at <a href=\"http://bibliothek.bbaw.de/bibliothek-digital/digitalequellen/schriften/anzeige/index_html?band=09-mon/1876&amp;seite:int=647\" rel=\"nofollow noreferrer\">http://bibliothek.bbaw.de/bibliothek-digital/digitalequellen/schriften/anzeige/index_html?band=09-mon/1876&amp;seite:int=647</a>.</p>\n\n<p>However this article itself uses work already done in a longer older article (where most of the hard compatations are done), (referred to as (Mem. Acad. Berlin, 1878, pp. 33-96), and also fully available online at <a href=\"http://bibliothek.bbaw.de/bbaw/bibliothek-digital/digitalequellen/schriften/anzeige/index_html?band=07-abh/1878&amp;seite:int=93\" rel=\"nofollow noreferrer\">http://bibliothek.bbaw.de/bbaw/bibliothek-digital/digitalequellen/schriften/anzeige/index_html?band=07-abh/1878&amp;seite:int=93</a>.</p>\n\n<p>Your square-root of negative number problem can be solved very simply by starting from $1$ (or any nonzero index) instead of $0$. 
Because of Borchardt's identity \n$$\na_{n+1}+\\varepsilon b_{n+1}+ \\varepsilon' c_{n+1}+\\varepsilon \\varepsilon'd_{n+1}=\\frac{(\\sqrt{a_n}+\\varepsilon \\sqrt{b_n}+ \\varepsilon'\\sqrt{c_n}+\\varepsilon \\varepsilon'\\sqrt{d_n})^2}{4} \\tag{1}\n$$</p>\n\n<p>for any two signs $\\varepsilon,\\varepsilon'$ we have for $n\\geq 1$,\n$$a_{n+1}+\\varepsilon b_{n+1}+ \\varepsilon ' c_{n+1}+\\varepsilon \\varepsilon 'd_{n+1} \\geq 0 \\tag{2}$$ </p>\n\n<p>So if you use any $a_1,b_1,c_1,d_1$ instead of $a_0,b_0,c_0,d_0$ to define the $B,C,\\alpha$ constants, you will get only real and positive numbers.</p>\n" }, { "answer_id": 2655876, "score": 2, "text": "<p>Ok, my answer got a little bit out of hand, but I hope it helps to figure out what the problem is.\nIn my opinion the problems are mainly due to typos that are carried over.\nMy answer is based on two articles.\nOne is the original article of Borchardt, where the link was provided in Ewan's answer.</p>\n\n<p>[1] Borchardt, K. W. (1876): Hr. Borchardt las über das arithmetisch-geometrische Mittel aus vier Elementen. Monatsberichte der Königlich Preußischen Akademie der Wissenschaften zu Berlin (Berlin: Verl. d. Kgl. Akad. d. Wiss., 1877), Berlin. (<a href=\"http://bibliothek.bbaw.de/bibliothek-digital/digitalequellen/schriften/anzeige/index_html?band=09-mon/1876&amp;seite:int=646\" rel=\"nofollow noreferrer\">http://bibliothek.bbaw.de/bibliothek-digital/digitalequellen/schriften/anzeige/index_html?band=09-mon/1876&amp;seite:int=646</a>)</p>\n\n<p>[2] J. M. Borwein and P. B. Borwein. On the Mean Iteration $(a,b) \\leftarrow \\left(\\frac{a+3b}{4},\\frac{\\sqrt{ab}+b}{2}\\right)$. Mathematics of Computation, 53(187):311–326, 1989. (<a href=\"https://pdfs.semanticscholar.org/5ead/200192d495859702eb47bc54babb20ab75ae.pdf\" rel=\"nofollow noreferrer\">https://pdfs.semanticscholar.org/5ead/200192d495859702eb47bc54babb20ab75ae.pdf</a>)</p>\n\n<p>I didn't double check the mathematics (i.e. 
the reasoning and proofs) in [1] and [2], but I compared the relevant formulas and implemented them numerically.\nThis finally led to a solution that I think is numerically satisfying (but see below).</p>\n\n<p>Basically, [2] uses the same definitions as in the question, so I'll start from [1] and highlight the differences.</p>\n\n<p>The first important difference are the conditions on $a,b,c,e$.\nIn [1] (on top of page 612) the <em>real</em> numbers $a,b,c,e$ have to be such that (i) $a &gt; b &gt; c &gt; e &gt; 0$ and (ii) $a - b - c + e &gt; 0$.\nI will call this condition $(C)$.\nOn the other hand in [2] page 324 we have $a &gt; b &gt; c &gt; e &gt; 0$ such that $ae - bc &gt; 0$, and I'll call this condition $(C^*)$.</p>\n\n<p>The conditions $(C)$ and $(C^*)$ are clearly there to prevent problems with negative square roots etc., which should already solve some of the problems.\nHowever, it seems (numerically) that $(C^*)$ is the one that should finally be used (see below).</p>\n\n<p>From the original article [1], page 612, I take $a_0 = a, b_0 = b, c_0 = c, e_0 = e$ and\n\\begin{align}\na_{n+1} &amp;= (a_n+b_n+c_n+e_n)/4,\\\\\nb_{n+1} &amp;= (\\sqrt{a_nb_n}+\\sqrt{c_ne_n})/2,\\\\\nc_{n+1} &amp;= (\\sqrt{a_nc_n}+\\sqrt{b_ne_n})/2,\\\\\ne_{n+1} &amp;= (\\sqrt{a_ne_n}+\\sqrt{b_nc_n})/2,\n\\end{align}\nand I set $g = \\lim_{n \\to \\infty} a_n$ (the limits are all the same, so I pick wlog the one for $a_n$).</p>\n\n<p>From [1], page 617, I also take\n\\begin{align}\nA &amp;= a+b+c+e,\\\\\nB &amp;= a+b-c-e,\\\\\nC &amp;= a-b+c-e,\\\\\nE &amp;= a-b-c+e,\n\\end{align}\nand\n\\begin{align}\nB_1 &amp;= (\\sqrt{AB}+\\sqrt{CE})/2,\\\\\nB_2 &amp;= (\\sqrt{AB}-\\sqrt{CE})/2,\\\\\nC_1 &amp;= (\\sqrt{AC}+\\sqrt{BE})/2,\\\\\nC_2 &amp;= (\\sqrt{AC}-\\sqrt{BE})/2,\\\\\nE_1 &amp;= (\\sqrt{AE}+\\sqrt{BC})/2,\\\\\nE_2 &amp;= (\\sqrt{AE}-\\sqrt{BC})/2,\n\\end{align}\nwhere it is important to notice that [1] defines $E_1 = (\\sqrt{AE}-\\sqrt{BC})/2 = E_2$ which I believe is a typo 
and hence I switch the $-$ for a $+$.\nSo far we are in agreement with the definitions in the question and in [2], but considering the values of $\\alpha_i$, $i=0,1,2,3$ there are differences.\nFrom [1], page 618, I take\n\\begin{align}\n\\Delta = (abce B_1 C_1 E_1 B_2 C_2 E_2)^{1/4},\n\\end{align}\nwhich is different from the question or the definition in [2], see page 325, in that I use the starting values $a,b,c,e$ and not the derived values $A,B,C,E$ (as stated in [1]).\nThe values of $\\alpha_i$ also differ between [1] and [2].\nFrom [1], page 618, I take\n\\begin{align}\n\\alpha_0 &amp;= \\frac{acB_1}{\\Delta},\\\\\n\\alpha_1 &amp;= \\frac{c C_1 E_1}{\\Delta},\\\\\n\\alpha_2 &amp;= \\frac{a C_2 E_1}{\\Delta},\\\\\n\\alpha_3 &amp;= \\frac{B_1 C_1 C_2}{\\Delta},\n\\end{align}\nwhere again the starting values are used, and not $A,B,C,E$ as in the question or in [3], page 325.</p>\n\n<p>All sources however agree on the definition of $R(x)$ as\n\\begin{align}\nR(x) = x (x-\\alpha_0)(x-\\alpha_1)(x-\\alpha_2)(x-\\alpha_3).\n\\end{align}</p>\n\n<p>Now [1], page 618, states that \n\\begin{align}\n4 \\frac{\\pi^2}{g} = \\int_{0}^{\\alpha_3}\\int_{\\alpha_2}^{\\alpha_1} \\frac{x-y}{\\sqrt{R(x)R(y)}} dx dy,\n\\end{align}\nwhere it is important to notice the factor $4$ that is missing in [2] and in the question.</p>\n\n<p>My numerical experiments show that neither [1] nor [2] is correct.\nIn fact I think the following statement should hold:</p>\n\n<p><strong>Theorem:</strong>\nFor $a &gt; b &gt; c &gt; e &gt; 0$ such that $ae - bc &gt; 0$ we have\n\\begin{align}\n\\frac{\\pi^2}{g} = \\int_{0}^{\\alpha_3}\\int_{\\alpha_2}^{\\alpha_1} \\frac{x-y}{\\sqrt{R(x)R(y)}} dx dy,\n\\end{align}\nwhere $g = \\lim_{n \\to \\infty} a_n$ and $\\alpha_i$, $i=0,1,2,3$ is defined as in [1], i.e., as above.</p>\n\n<p>It is important to notice that the statement</p>\n\n<ol>\n<li>uses condition $(C^*)$ from [2], page 324,</li>\n<li>uses the definition from [1] regarding $\\Delta$ and 
$\\alpha_i$, $i=0,1,2,3$,</li>\n<li>and does not include the factor $4$ which is present in [1] but not in [2].</li>\n</ol>\n\n<p>In summary, it's a wild mix of [1] and [2]...</p>\n\n<p>I got to this basically following [1] and then using $(C^*)$ instead of $(C)$ and realizing that the value was $4$ times too high.\nYou can double check this with your favorite numerical software tool, I have added my R implementation below.\nFinally, I have to say that the values don't match perfectly.\nThe numerical integration of the rational function seems to be difficult, but the numbers are always (for random starting values obeying $(C^*)$) close (enough) together in my opinion.\nAs long as I enforce $(C^*)$ I also don't experience any trouble with negative square roots or the like (but this is possible if $(C)$ is used).</p>\n\n<hr>\n\n<h2>R Implementation</h2>\n\n<pre><code>#non-random starting values that work with Borwein &amp; Borwein page 324 Theorem 3 condition\na &lt;- 7\nb &lt;- 4\nc &lt;- 3\ne &lt;- 2\n\n#get some random starting values that are OK\ngoodStart &lt;- FALSE \nwhile (goodStart==FALSE){\n\n X &lt;- sort(rexp(4,rate=20))\n\n #Borwein &amp; Borwein page 324 Theorem 3 condition\n if ( X[4]*X[1] - X[2]*X[3] &gt; 0){\n a &lt;- X[4]\n b &lt;- X[3]\n c &lt;- X[2]\n e &lt;- X[1]\n goodStart = TRUE\n }\n\n}\n\n#Borwein &amp; Borwein page 324 Theorem 3 condition\nif ( a*e - b*c &lt;= 0){stop(\"Borwein &amp; Borwein coefficient condition not ok\")}\n\n#Borchardt condition - seems to be wrong\n#if ( a - b - c + e &lt;= 0){stop(\"coefficient condition not ok\")}\n\na_i &lt;- a\nb_i &lt;- b\nc_i &lt;- c\ne_i &lt;- e\n\nfor (i in 1:1000) {\n\n tmp_a &lt;- (a_i+b_i+c_i+e_i)/4\n tmp_b &lt;- (sqrt(a_i*b_i) + sqrt(c_i*e_i))/2\n tmp_c &lt;- (sqrt(a_i*c_i) + sqrt(b_i*e_i))/2\n tmp_e &lt;- (sqrt(a_i*e_i) + sqrt(b_i*c_i))/2\n\n a_i &lt;- tmp_a\n b_i &lt;- tmp_b\n c_i &lt;- tmp_c\n e_i &lt;- tmp_e\n\n}\n\n#print(max(a_i,b_i,c_i,e_i)-min(a_i,b_i,c_i,e_i))\n\n#just pick any of 
the four\ng &lt;- a_i\n\n#compute closed-form solution\nI1 &lt;- pi^2/g\nprint(\"sequence\")\nprint(I1)\n\n#page 617 bottom\nA &lt;- a + b + c + e\nB &lt;- a + b - c - e\nC &lt;- a - b + c - e\nE &lt;- a - b - c + e\n\nB1 &lt;- (sqrt(A*B)+sqrt(C*E))/2\nB2 &lt;- (sqrt(A*B)-sqrt(C*E))/2\nC1 &lt;- (sqrt(A*C)+sqrt(B*E))/2\nC2 &lt;- (sqrt(A*C)-sqrt(B*E))/2\nE1 &lt;- (sqrt(A*E)+sqrt(B*C))/2 #this seems to be wrong in the original\nE2 &lt;- (sqrt(A*E)-sqrt(B*C))/2\n\nDelta &lt;- (a*b*c*e*B1*C1*E1*B2*C2*E2)^(1/4)\nalpha0 &lt;- a*c*B1/Delta\nalpha1 &lt;- c*C1*E1/Delta\nalpha2 &lt;- a*C2*E1/Delta\nalpha3 &lt;- B1*C1*C2/Delta\n\nintegrand &lt;- function(x,y,param){\n\n  alpha0 &lt;- param$alpha0\n  alpha1 &lt;- param$alpha1\n  alpha2 &lt;- param$alpha2\n  alpha3 &lt;- param$alpha3\n\n  Rx &lt;- x*(x-alpha0)*(x-alpha1)*(x-alpha2)*(x-alpha3)\n  Ry &lt;- y*(y-alpha0)*(y-alpha1)*(y-alpha2)*(y-alpha3)\n\n  z &lt;- (x-y) / sqrt(Rx*Ry)\n\n  return(z)\n\n}\n\np &lt;- list(alpha0=alpha0,alpha1=alpha1,alpha2=alpha2,alpha3=alpha3)\n\nlibrary(statmod)\n\ngauss &lt;- gauss.quad(500,kind=\"legendre\")\nw &lt;- gauss$weights\nx &lt;- gauss$nodes\n\nupper_x &lt;- alpha1\nlower_x &lt;- alpha2\nupper_y &lt;- alpha3\nlower_y &lt;- 0\n\nI &lt;- 0\nfor (i in 1:length(x)) {\n  for (j in 1:length(x)) {\n\n    xx &lt;- (upper_x-lower_x)*x[i]/2 + (upper_x+lower_x)/2\n    yy &lt;- (upper_y-lower_y)*x[j]/2 + (upper_y+lower_y)/2\n    ww &lt;- w[i]*w[j]\n\n    I &lt;- I + ww*integrand(xx,yy,p)\n\n  }\n\n}\n\nI &lt;- (upper_x-lower_x)*(upper_y-lower_y)*I/4\n\nprint(\"numerical integral:\")\nprint(I)\n</code></pre>\n" }, { "answer_id": 2654571, "score": 1, "text": "<p>Here are just a few remarks.</p>\n\n<p>Borchardt's original article (referred to as Berl. Monatsber. 
, pages 611–621) is fully available (in German) at <a href=\"http://bibliothek.bbaw.de/bibliothek-digital/digitalequellen/schriften/anzeige/index_html?band=09-mon/1876&amp;seite:int=647\" rel=\"nofollow noreferrer\">http://bibliothek.bbaw.de/bibliothek-digital/digitalequellen/schriften/anzeige/index_html?band=09-mon/1876&amp;seite:int=647</a>.</p>\n\n<p>However, this article itself builds on a longer, earlier article (where most of the hard computations are done), referred to as Mem. Acad. Berlin, 1878, pp. 33–96, and also fully available online at <a href=\"http://bibliothek.bbaw.de/bbaw/bibliothek-digital/digitalequellen/schriften/anzeige/index_html?band=07-abh/1878&amp;seite:int=93\" rel=\"nofollow noreferrer\">http://bibliothek.bbaw.de/bbaw/bibliothek-digital/digitalequellen/schriften/anzeige/index_html?band=07-abh/1878&amp;seite:int=93</a>.</p>\n\n<p>Your square-root of negative number problem can be solved very simply by starting from $1$ (or any nonzero index) instead of $0$. 
Because of Borchardt's identity \n$$\na_{n+1}+\\varepsilon b_{n+1}+ \\varepsilon' c_{n+1}+\\varepsilon \\varepsilon'd_{n+1}=\\frac{(\\sqrt{a_n}+\\varepsilon \\sqrt{b_n}+ \\varepsilon'\\sqrt{c_n}+\\varepsilon \\varepsilon'\\sqrt{d_n})^2}{4} \\tag{1}\n$$</p>\n\n<p>for any two signs $\\varepsilon,\\varepsilon'$ we have for $n\\geq 1$,\n$$a_{n+1}+\\varepsilon b_{n+1}+ \\varepsilon ' c_{n+1}+\\varepsilon \\varepsilon 'd_{n+1} \\geq 0 \\tag{2}$$ </p>\n\n<p>So if you use any $a_1,b_1,c_1,d_1$ instead of $a_0,b_0,c_0,d_0$ to define the $B,C,\\alpha$ constants, you will get only real and positive numbers.</p>\n" } ]
Question: A known closed form for Borchardt mean (generalization of AGM) - why doesn&#39;t it work? Answer 1: <p>Ok, my answer got a little bit out of hand, but I hope it helps to figure out what the problem is. In my opinion the problems are mainly due to typos that are carried over. My answer is based on two articles. One is the original article of Borchardt, where the link was provided in Ewan's answer.</p> <p>[1] Borchardt, K. W. (1876): Hr. Borchardt las über das arithmetisch-geometrische Mittel aus vier Elementen. Monatsberichte der Königlich Preußischen Akademie der Wissenschaften zu Berlin (Berlin: Verl. d. Kgl. Akad. d. Wiss., 1877), Berlin. (<a href="http://bibliothek.bbaw.de/bibliothek-digital/digitalequellen/schriften/anzeige/index_html?band=09-mon/1876&amp;seite:int=646" rel="nofollow noreferrer">http://bibliothek.bbaw.de/bibliothek-digital/digitalequellen/schriften/anzeige/index_html?band=09-mon/1876&amp;seite:int=646</a>)</p> <p>[2] J. M. Borwein and P. B. Borwein. On the Mean Iteration $(a,b) \leftarrow \left(\frac{a+3b}{4},\frac{\sqrt{ab}+b}{2}\right)$. Mathematics of Computation, 53(187):311–326, 1989. (<a href="https://pdfs.semanticscholar.org/5ead/200192d495859702eb47bc54babb20ab75ae.pdf" rel="nofollow noreferrer">https://pdfs.semanticscholar.org/5ead/200192d495859702eb47bc54babb20ab75ae.pdf</a>)</p> <p>I didn't double check the mathematics (i.e. the reasoning and proofs) in [1] and [2], but I compared the relevant formulas and implemented them numerically. This finally led to a solution that I think is numerically satisfying (but see below).</p> <p>Basically, [2] uses the same definitions as in the question, so I'll start from [1] and highlight the differences.</p> <p>The first important difference are the conditions on $a,b,c,e$. In [1] (on top of page 612) the <em>real</em> numbers $a,b,c,e$ have to be such that (i) $a &gt; b &gt; c &gt; e &gt; 0$ and (ii) $a - b - c + e &gt; 0$. I will call this condition $(C)$. 
On the other hand in [2] page 324 we have $a &gt; b &gt; c &gt; e &gt; 0$ such that $ae - bc &gt; 0$, and I'll call this condition $(C^*)$.</p> <p>The conditions $(C)$ and $(C^*)$ are clearly there to prevent problems with negative square roots etc., which should already solve some of the problems. However, it seems (numerically) that $(C^*)$ is the one that should finally be used (see below).</p> <p>From the original article [1], page 612, I take $a_0 = a, b_0 = b, c_0 = c, e_0 = e$ and \begin{align} a_{n+1} &amp;= (a_n+b_n+c_n+e_n)/4,\\ b_{n+1} &amp;= (\sqrt{a_nb_n}+\sqrt{c_ne_n})/2,\\ c_{n+1} &amp;= (\sqrt{a_nc_n}+\sqrt{b_ne_n})/2,\\ e_{n+1} &amp;= (\sqrt{a_ne_n}+\sqrt{b_nc_n})/2, \end{align} and I set $g = \lim_{n \to \infty} a_n$ (the limits are all the same, so I pick wlog the one for $a_n$).</p> <p>From [1], page 617, I also take \begin{align} A &amp;= a+b+c+e,\\ B &amp;= a+b-c-e,\\ C &amp;= a-b+c-e,\\ E &amp;= a-b-c+e, \end{align} and \begin{align} B_1 &amp;= (\sqrt{AB}+\sqrt{CE})/2,\\ B_2 &amp;= (\sqrt{AB}-\sqrt{CE})/2,\\ C_1 &amp;= (\sqrt{AC}+\sqrt{BE})/2,\\ C_2 &amp;= (\sqrt{AC}-\sqrt{BE})/2,\\ E_1 &amp;= (\sqrt{AE}+\sqrt{BC})/2,\\ E_2 &amp;= (\sqrt{AE}-\sqrt{BC})/2, \end{align} where it is important to notice that [1] defines $E_1 = (\sqrt{AE}-\sqrt{BC})/2 = E_2$ which I believe is a typo and hence I switch the $-$ for a $+$. So far we are in agreement with the definitions in the question and in [2], but considering the values of $\alpha_i$, $i=0,1,2,3$ there are differences. From [1], page 618, I take \begin{align} \Delta = (abce B_1 C_1 E_1 B_2 C_2 E_2)^{1/4}, \end{align} which is different from the question or the definition in [2], see page 325, in that I use the starting values $a,b,c,e$ and not the derived values $A,B,C,E$ (as stated in [1]). The values of $\alpha_i$ also differ between [1] and [2]. 
From [1], page 618, I take \begin{align} \alpha_0 &amp;= \frac{acB_1}{\Delta},\\ \alpha_1 &amp;= \frac{c C_1 E_1}{\Delta},\\ \alpha_2 &amp;= \frac{a C_2 E_1}{\Delta},\\ \alpha_3 &amp;= \frac{B_1 C_1 C_2}{\Delta}, \end{align} where again the starting values are used, and not $A,B,C,E$ as in the question or in [2], page 325.</p> <p>All sources, however, agree on the definition of $R(x)$ as \begin{align} R(x) = x (x-\alpha_0)(x-\alpha_1)(x-\alpha_2)(x-\alpha_3). \end{align}</p> <p>Now [1], page 618, states that \begin{align} 4 \frac{\pi^2}{g} = \int_{0}^{\alpha_3}\int_{\alpha_2}^{\alpha_1} \frac{x-y}{\sqrt{R(x)R(y)}} dx dy, \end{align} where it is important to notice the factor $4$ that is missing in [2] and in the question.</p> <p>My numerical experiments show that neither [1] nor [2] is correct. In fact I think the following statement should hold:</p> <p><strong>Theorem:</strong> For $a &gt; b &gt; c &gt; e &gt; 0$ such that $ae - bc &gt; 0$ we have \begin{align} \frac{\pi^2}{g} = \int_{0}^{\alpha_3}\int_{\alpha_2}^{\alpha_1} \frac{x-y}{\sqrt{R(x)R(y)}} dx dy, \end{align} where $g = \lim_{n \to \infty} a_n$ and $\alpha_i$, $i=0,1,2,3$ are defined as in [1], i.e., as above.</p> <p>It is important to notice that the statement</p> <ol> <li>uses condition $(C^*)$ from [2], page 324,</li> <li>uses the definition from [1] regarding $\Delta$ and $\alpha_i$, $i=0,1,2,3$,</li> <li>and does not include the factor $4$ which is present in [1] but not in [2].</li> </ol> <p>In summary, it's a wild mix of [1] and [2]...</p> <p>I got to this basically following [1] and then using $(C^*)$ instead of $(C)$ and realizing that the value was $4$ times too high. You can double check this with your favorite numerical software tool, I have added my R implementation below. Finally, I have to say that the values don't match perfectly. 
The numerical integration of the rational function seems to be difficult, but the numbers are always (for random starting values obeying $(C^*)$) close (enough) together in my opinion. As long as I enforce $(C^*)$ I also don't experience any trouble with negative square roots or the like (but this is possible if $(C)$ is used).</p> <hr> <h2>R Implementation</h2> <pre><code>#non-random starting values that work with Borwein &amp; Borwein page 324 Theorem 3 condition
a &lt;- 7
b &lt;- 4
c &lt;- 3
e &lt;- 2

#get some random starting values that are OK
goodStart &lt;- FALSE
while (goodStart==FALSE){
  X &lt;- sort(rexp(4,rate=20))
  #Borwein &amp; Borwein page 324 Theorem 3 condition
  if ( X[4]*X[1] - X[2]*X[3] &gt; 0){
    a &lt;- X[4]
    b &lt;- X[3]
    c &lt;- X[2]
    e &lt;- X[1]
    goodStart = TRUE
  }
}

#Borwein &amp; Borwein page 324 Theorem 3 condition
if ( a*e - b*c &lt;= 0){stop("Borwein &amp; Borwein coefficient condition not ok")}

#Borchardt condition - seems to be wrong
#if ( a - b - c + e &lt;= 0){stop("coefficient condition not ok")}

a_i &lt;- a
b_i &lt;- b
c_i &lt;- c
e_i &lt;- e
for (i in 1:1000) {
  tmp_a &lt;- (a_i+b_i+c_i+e_i)/4
  tmp_b &lt;- (sqrt(a_i*b_i) + sqrt(c_i*e_i))/2
  tmp_c &lt;- (sqrt(a_i*c_i) + sqrt(b_i*e_i))/2
  tmp_e &lt;- (sqrt(a_i*e_i) + sqrt(b_i*c_i))/2
  a_i &lt;- tmp_a
  b_i &lt;- tmp_b
  c_i &lt;- tmp_c
  e_i &lt;- tmp_e
}
#print(max(a_i,b_i,c_i,e_i)-min(a_i,b_i,c_i,e_i))

#just pick any of the four
g &lt;- a_i

#compute closed form solution
I1 &lt;- pi^2/g
print("sequence")
print(I1)

#page 617 bottom
A &lt;- a + b + c + e
B &lt;- a + b - c - e
C &lt;- a - b + c - e
E &lt;- a - b - c + e

B1 &lt;- (sqrt(A*B)+sqrt(C*E))/2
B2 &lt;- (sqrt(A*B)-sqrt(C*E))/2
C1 &lt;- (sqrt(A*C)+sqrt(B*E))/2
C2 &lt;- (sqrt(A*C)-sqrt(B*E))/2
E1 &lt;- (sqrt(A*E)+sqrt(B*C))/2 #this seems to be wrong in the original
E2 &lt;- (sqrt(A*E)-sqrt(B*C))/2

Delta &lt;- (a*b*c*e*B1*C1*E1*B2*C2*E2)^(1/4)

alpha0 &lt;- a*c*B1/Delta
alpha1 &lt;- c*C1*E1/Delta
alpha2 &lt;- a*C2*E1/Delta
alpha3 &lt;- B1*C1*C2/Delta

integrand &lt;- function(x,y,param){
  alpha0 &lt;- param$alpha0
  alpha1 &lt;- param$alpha1
  alpha2 &lt;- param$alpha2
  alpha3 &lt;- param$alpha3
  Rx &lt;- x*(x-alpha0)*(x-alpha1)*(x-alpha2)*(x-alpha3)
  Ry &lt;- y*(y-alpha0)*(y-alpha1)*(y-alpha2)*(y-alpha3)
  z &lt;- (x-y) / sqrt(Rx*Ry)
  return(z)
}

p &lt;- list(alpha0=alpha0,alpha1=alpha1,alpha2=alpha2,alpha3=alpha3)

library(statmod)
gauss &lt;- gauss.quad(500,kind="legendre")
w &lt;- gauss$weights
x &lt;- gauss$nodes

upper_x &lt;- alpha1
lower_x &lt;- alpha2
upper_y &lt;- alpha3
lower_y &lt;- 0

#tensor-product Gauss-Legendre quadrature over [lower_x,upper_x] x [lower_y,upper_y]
I &lt;- 0
for (i in 1:length(x)) {
  for (j in 1:length(x)) {
    xx &lt;- (upper_x-lower_x)*x[i]/2 + (upper_x+lower_x)/2
    yy &lt;- (upper_y-lower_y)*x[j]/2 + (upper_y+lower_y)/2
    ww &lt;- w[i]*w[j]
    I &lt;- I + ww*integrand(xx,yy,p)
  }
}
I &lt;- (upper_x-lower_x)*(upper_y-lower_y)*I/4

print("numerical integral:")
print(I)
</code></pre> Answer 2: <p>Here are just a few remarks.</p> <p>Borchardt's original article (referred to as Berl. Monatsber., pages 611–621) is fully available (in German) at <a href="http://bibliothek.bbaw.de/bibliothek-digital/digitalequellen/schriften/anzeige/index_html?band=09-mon/1876&amp;seite:int=647" rel="nofollow noreferrer">http://bibliothek.bbaw.de/bibliothek-digital/digitalequellen/schriften/anzeige/index_html?band=09-mon/1876&amp;seite:int=647</a>.</p> <p>However this article itself uses work already done in a longer older article (where most of the hard computations are done), referred to as Mem. Acad. Berlin, 1878, pp. 33-96, and also fully available online at <a href="http://bibliothek.bbaw.de/bbaw/bibliothek-digital/digitalequellen/schriften/anzeige/index_html?band=07-abh/1878&amp;seite:int=93" rel="nofollow noreferrer">http://bibliothek.bbaw.de/bbaw/bibliothek-digital/digitalequellen/schriften/anzeige/index_html?band=07-abh/1878&amp;seite:int=93</a>.</p> <p>Your square-root of negative number problem can be solved very simply by starting from $1$ (or any nonzero index) instead of $0$.
Because of Borchardt's identity $$ a_{n+1}+\varepsilon b_{n+1}+ \varepsilon' c_{n+1}+\varepsilon \varepsilon'd_{n+1}=\frac{(\sqrt{a_n}+\varepsilon \sqrt{b_n}+ \varepsilon'\sqrt{c_n}+\varepsilon \varepsilon'\sqrt{d_n})^2}{4} \tag{1} $$</p> <p>for any two signs $\varepsilon,\varepsilon'$ we have for $n\geq 1$, $$a_{n+1}+\varepsilon b_{n+1}+ \varepsilon ' c_{n+1}+\varepsilon \varepsilon 'd_{n+1} \geq 0 \tag{2}$$ </p> <p>So if you use any $a_1,b_1,c_1,d_1$ instead of $a_0,b_0,c_0,d_0$ to define the $B,C,\alpha$ constants, you will get only real and positive numbers.</p> Answer 3: <p>Ok, my answer got a little bit out of hand, but I hope it helps to figure out what the problem is. In my opinion the problems are mainly due to typos that are carried over. My answer is based on two articles. One is the original article of Borchardt, where the link was provided in Ewan's answer.</p> <p>[1] Borchardt, K. W. (1876): Hr. Borchardt las über das arithmetisch-geometrische Mittel aus vier Elementen. Monatsberichte der Königlich Preußischen Akademie der Wissenschaften zu Berlin (Berlin: Verl. d. Kgl. Akad. d. Wiss., 1877), Berlin. (<a href="http://bibliothek.bbaw.de/bibliothek-digital/digitalequellen/schriften/anzeige/index_html?band=09-mon/1876&amp;seite:int=646" rel="nofollow noreferrer">http://bibliothek.bbaw.de/bibliothek-digital/digitalequellen/schriften/anzeige/index_html?band=09-mon/1876&amp;seite:int=646</a>)</p> <p>[2] J. M. Borwein and P. B. Borwein. On the Mean Iteration $(a,b) \leftarrow \left(\frac{a+3b}{4},\frac{\sqrt{ab}+b}{2}\right)$. Mathematics of Computation, 53(187):311–326, 1989. (<a href="https://pdfs.semanticscholar.org/5ead/200192d495859702eb47bc54babb20ab75ae.pdf" rel="nofollow noreferrer">https://pdfs.semanticscholar.org/5ead/200192d495859702eb47bc54babb20ab75ae.pdf</a>)</p> <p>I didn't double check the mathematics (i.e. the reasoning and proofs) in [1] and [2], but I compared the relevant formulas and implemented them numerically. 
This finally led to a solution that I think is numerically satisfying (but see below).</p> <p>Basically, [2] uses the same definitions as in the question, so I'll start from [1] and highlight the differences.</p> <p>The first important difference are the conditions on $a,b,c,e$. In [1] (on top of page 612) the <em>real</em> numbers $a,b,c,e$ have to be such that (i) $a &gt; b &gt; c &gt; e &gt; 0$ and (ii) $a - b - c + e &gt; 0$. I will call this condition $(C)$. On the other hand in [2] page 324 we have $a &gt; b &gt; c &gt; e &gt; 0$ such that $ae - bc &gt; 0$, and I'll call this condition $(C^*)$.</p> <p>The conditions $(C)$ and $(C^*)$ are clearly there to prevent problems with negative square roots etc., which should already solve some of the problems. However, it seems (numerically) that $(C^*)$ is the one that should finally be used (see below).</p> <p>From the original article [1], page 612, I take $a_0 = a, b_0 = b, c_0 = c, e_0 = e$ and \begin{align} a_{n+1} &amp;= (a_n+b_n+c_n+e_n)/4,\\ b_{n+1} &amp;= (\sqrt{a_nb_n}+\sqrt{c_ne_n})/2,\\ c_{n+1} &amp;= (\sqrt{a_nc_n}+\sqrt{b_ne_n})/2,\\ e_{n+1} &amp;= (\sqrt{a_ne_n}+\sqrt{b_nc_n})/2, \end{align} and I set $g = \lim_{n \to \infty} a_n$ (the limits are all the same, so I pick wlog the one for $a_n$).</p> <p>From [1], page 617, I also take \begin{align} A &amp;= a+b+c+e,\\ B &amp;= a+b-c-e,\\ C &amp;= a-b+c-e,\\ E &amp;= a-b-c+e, \end{align} and \begin{align} B_1 &amp;= (\sqrt{AB}+\sqrt{CE})/2,\\ B_2 &amp;= (\sqrt{AB}-\sqrt{CE})/2,\\ C_1 &amp;= (\sqrt{AC}+\sqrt{BE})/2,\\ C_2 &amp;= (\sqrt{AC}-\sqrt{BE})/2,\\ E_1 &amp;= (\sqrt{AE}+\sqrt{BC})/2,\\ E_2 &amp;= (\sqrt{AE}-\sqrt{BC})/2, \end{align} where it is important to notice that [1] defines $E_1 = (\sqrt{AE}-\sqrt{BC})/2 = E_2$ which I believe is a typo and hence I switch the $-$ for a $+$. So far we are in agreement with the definitions in the question and in [2], but considering the values of $\alpha_i$, $i=0,1,2,3$ there are differences. 
From [1], page 618, I take \begin{align} \Delta = (abce B_1 C_1 E_1 B_2 C_2 E_2)^{1/4}, \end{align} which is different from the question or the definition in [2], see page 325, in that I use the starting values $a,b,c,e$ and not the derived values $A,B,C,E$ (as stated in [1]). The values of $\alpha_i$ also differ between [1] and [2]. From [1], page 618, I take \begin{align} \alpha_0 &amp;= \frac{acB_1}{\Delta},\\ \alpha_1 &amp;= \frac{c C_1 E_1}{\Delta},\\ \alpha_2 &amp;= \frac{a C_2 E_1}{\Delta},\\ \alpha_3 &amp;= \frac{B_1 C_1 C_2}{\Delta}, \end{align} where again the starting values are used, and not $A,B,C,E$ as in the question or in [3], page 325.</p> <p>All sources however agree on the definition of $R(x)$ as \begin{align} R(x) = x (x-\alpha_0)(x-\alpha_1)(x-\alpha_2)(x-\alpha_3). \end{align}</p> <p>Now [1], page 618, states that \begin{align} 4 \frac{\pi^2}{g} = \int_{0}^{\alpha_3}\int_{\alpha_2}^{\alpha_1} \frac{x-y}{\sqrt{R(x)R(y)}} dx dy, \end{align} where it is important to notice the factor $4$ that is missing in [2] and in the question.</p> <p>My numerical experiments show that neither [1] nor [2] is correct. 
In fact I think the following statement should hold:</p> <p><strong>Theorem:</strong> For $a &gt; b &gt; c &gt; e &gt; 0$ such that $ae - bc &gt; 0$ we have \begin{align} \frac{\pi^2}{g} = \int_{0}^{\alpha_3}\int_{\alpha_2}^{\alpha_1} \frac{x-y}{\sqrt{R(x)R(y)}} dx dy, \end{align} where $g = \lim_{n \to \infty} a_n$ and $\alpha_i$, $i=0,1,2,3$ is defined as in [1], i.e., as above.</p> <p>It is important to notice that the statement</p> <ol> <li>uses condition $(C^*)$ from [2], page 324,</li> <li>uses the definition from [1] regarding $\Delta$ and $\alpha_i$, $i=0,1,2,3$,</li> <li>and does not include the factor $4$ which is present in [1] but not in [2].</li> </ol> <p>In summary, it's a wild mix of [1] and [2]...</p> <p>I got to this basically following [1] and then using $(C^*)$ instead of $(C)$ and realizing that the value was $4$ times too high. You can double check this with your favorite numerical software tool, I have added my R implementation below. Finally, I have to say that the values don't match perfectly. The numerical integration of the rational function seems to be difficult, but the numbers are always (for random starting values obeying $(C^*)$) close (enough) together in my opinion. 
As long as I enforce $(C^*)$ I also don't experience any trouble with negative square roots or the like (but this is possible if $(C)$ is used).</p> <hr> <h2>R Implementation</h2> <pre><code>#non-random starting values that work with Borwein &amp; Borwein page 324 Theorem 3 condition a &lt;- 7 b &lt;- 4 c &lt;- 3 e &lt;- 2 #get some random starting values that are OK goodStart &lt;- FALSE while (goodStart==FALSE){ X &lt;- sort(rexp(4,rate=20)) #Borwein &amp; Borwein page 324 Theorem 3 condition if ( X[4]*X[1] - X[2]*X[3] &gt; 0){ a &lt;- X[4] b &lt;- X[3] c &lt;- X[2] e &lt;- X[1] goodStart = TRUE } } #Borwein &amp; Borwein page 324 Theorem 3 condition if ( a*e - b*c &lt;= 0){stop("Borwein &amp; Borwein coefficient condition not ok")} #Borchardt condition - seems to be wrong #if ( a - b - c + e &lt;= 0){stop("coefficient condition not ok")} a_i &lt;- a b_i &lt;- b c_i &lt;- c e_i &lt;- e for (i in 1:1000) { tmp_a &lt;- (a_i+b_i+c_i+e_i)/4 tmp_b &lt;- (sqrt(a_i*b_i) + sqrt(c_i*e_i))/2 tmp_c &lt;- (sqrt(a_i*c_i) + sqrt(b_i*e_i))/2 tmp_e &lt;- (sqrt(a_i*e_i) + sqrt(b_i*c_i))/2 a_i &lt;- tmp_a b_i &lt;- tmp_b c_i &lt;- tmp_c e_i &lt;- tmp_e } #print(max(a_i,b_i,c_i,e_i)-min(a_i,b_i,c_i,e_i)) #just pick any of the four g &lt;- a_i #compute close form solution I1 &lt;- pi^2/g print("sequence") print(I1) #page 617 bottom A &lt;- a + b + c + e B &lt;- a + b - c - e C &lt;- a - b + c - e E &lt;- a - b - c + e B1 &lt;- (sqrt(A*B)+sqrt(C*E))/2 B2 &lt;- (sqrt(A*B)-sqrt(C*E))/2 C1 &lt;- (sqrt(A*C)+sqrt(B*E))/2 C2 &lt;- (sqrt(A*C)-sqrt(B*E))/2 E1 &lt;- (sqrt(A*E)+sqrt(B*C))/2 #this seems to be wrong in the original E2 &lt;- (sqrt(A*E)-sqrt(B*C))/2 Delta &lt;- (a*b*c*e*B1*C1*E1*B2*C2*E2)^(1/4) alpha0 &lt;- a*c*B1/Delta alpha1 &lt;- c*C1*E1/Delta alpha2 &lt;- a*C2*E1/Delta alpha3 &lt;- B1*C1*C2/Delta integrand &lt;- function(x,y,param){ alpha0 &lt;- param$alpha0 alpha1 &lt;- param$alpha1 alpha2 &lt;- param$alpha2 alpha3 &lt;- param$alpha3 Rx &lt;- 
x*(x-alpha0)*(x-alpha1)*(x-alpha2)*(x-alpha3) Ry &lt;- y*(y-alpha0)*(y-alpha1)*(y-alpha2)*(y-alpha3) z &lt;- (x-y) / sqrt(Rx*Ry) return(z) } p &lt;- list(alpha0=alpha0,alpha1=alpha1,alpha2=alpha2,alpha3=alpha3) library(statmod) gauss &lt;- gauss.quad(500,kind="legendre") w &lt;- gauss$weights x &lt;- gauss$nodes upper_x &lt;- alpha1 lower_x &lt;- alpha2 upper_y &lt;- alpha3 lower_y &lt;- 0 I &lt;- 0 for (i in 1:length(x)) { for (j in 1:length(x)) { xx &lt;- (upper_x-lower_x)*x[i]/2 + (upper_x+lower_x)/2 yy &lt;- (upper_y-lower_y)*x[j]/2 + (upper_y+lower_y)/2 ww &lt;- w[i]*w[j] I &lt;- I + ww*integrand(xx,yy,p) } } I &lt;- (upper_x-lower_x)*(upper_y-lower_y)*I/4 print("numerical integral:") print(I) </code></pre> Answer 4: <p>Here are just a few remarks.</p> <p>Borchart's original article (referred to as Berl. Monatsber. , pages 611–621) is fully available (in German) at <a href="http://bibliothek.bbaw.de/bibliothek-digital/digitalequellen/schriften/anzeige/index_html?band=09-mon/1876&amp;seite:int=647" rel="nofollow noreferrer">http://bibliothek.bbaw.de/bibliothek-digital/digitalequellen/schriften/anzeige/index_html?band=09-mon/1876&amp;seite:int=647</a>.</p> <p>However this article itself uses work already done in a longer older article (where most of the hard compatations are done), (referred to as (Mem. Acad. Berlin, 1878, pp. 33-96), and also fully available online at <a href="http://bibliothek.bbaw.de/bbaw/bibliothek-digital/digitalequellen/schriften/anzeige/index_html?band=07-abh/1878&amp;seite:int=93" rel="nofollow noreferrer">http://bibliothek.bbaw.de/bbaw/bibliothek-digital/digitalequellen/schriften/anzeige/index_html?band=07-abh/1878&amp;seite:int=93</a>.</p> <p>Your square-root of negative number problem can be solved very simply by starting from $1$ (or any nonzero index) instead of $0$. 
Because of Borchardt's identity $$ a_{n+1}+\varepsilon b_{n+1}+ \varepsilon' c_{n+1}+\varepsilon \varepsilon'd_{n+1}=\frac{(\sqrt{a_n}+\varepsilon \sqrt{b_n}+ \varepsilon'\sqrt{c_n}+\varepsilon \varepsilon'\sqrt{d_n})^2}{4} \tag{1} $$</p> <p>for any two signs $\varepsilon,\varepsilon'$ we have for $n\geq 1$, $$a_{n+1}+\varepsilon b_{n+1}+ \varepsilon ' c_{n+1}+\varepsilon \varepsilon 'd_{n+1} \geq 0 \tag{2}$$ </p> <p>So if you use any $a_1,b_1,c_1,d_1$ instead of $a_0,b_0,c_0,d_0$ to define the $B,C,\alpha$ constants, you will get only real and positive numbers.</p> Answer 5: <p>Ok, my answer got a little bit out of hand, but I hope it helps to figure out what the problem is. In my opinion the problems are mainly due to typos that are carried over. My answer is based on two articles. One is the original article of Borchardt, where the link was provided in Ewan's answer.</p> <p>[1] Borchardt, K. W. (1876): Hr. Borchardt las über das arithmetisch-geometrische Mittel aus vier Elementen. Monatsberichte der Königlich Preußischen Akademie der Wissenschaften zu Berlin (Berlin: Verl. d. Kgl. Akad. d. Wiss., 1877), Berlin. (<a href="http://bibliothek.bbaw.de/bibliothek-digital/digitalequellen/schriften/anzeige/index_html?band=09-mon/1876&amp;seite:int=646" rel="nofollow noreferrer">http://bibliothek.bbaw.de/bibliothek-digital/digitalequellen/schriften/anzeige/index_html?band=09-mon/1876&amp;seite:int=646</a>)</p> <p>[2] J. M. Borwein and P. B. Borwein. On the Mean Iteration $(a,b) \leftarrow \left(\frac{a+3b}{4},\frac{\sqrt{ab}+b}{2}\right)$. Mathematics of Computation, 53(187):311–326, 1989. (<a href="https://pdfs.semanticscholar.org/5ead/200192d495859702eb47bc54babb20ab75ae.pdf" rel="nofollow noreferrer">https://pdfs.semanticscholar.org/5ead/200192d495859702eb47bc54babb20ab75ae.pdf</a>)</p> <p>I didn't double check the mathematics (i.e. the reasoning and proofs) in [1] and [2], but I compared the relevant formulas and implemented them numerically. 
This finally led to a solution that I think is numerically satisfying (but see below).</p> <p>Basically, [2] uses the same definitions as in the question, so I'll start from [1] and highlight the differences.</p> <p>The first important difference are the conditions on $a,b,c,e$. In [1] (on top of page 612) the <em>real</em> numbers $a,b,c,e$ have to be such that (i) $a &gt; b &gt; c &gt; e &gt; 0$ and (ii) $a - b - c + e &gt; 0$. I will call this condition $(C)$. On the other hand in [2] page 324 we have $a &gt; b &gt; c &gt; e &gt; 0$ such that $ae - bc &gt; 0$, and I'll call this condition $(C^*)$.</p> <p>The conditions $(C)$ and $(C^*)$ are clearly there to prevent problems with negative square roots etc., which should already solve some of the problems. However, it seems (numerically) that $(C^*)$ is the one that should finally be used (see below).</p> <p>From the original article [1], page 612, I take $a_0 = a, b_0 = b, c_0 = c, e_0 = e$ and \begin{align} a_{n+1} &amp;= (a_n+b_n+c_n+e_n)/4,\\ b_{n+1} &amp;= (\sqrt{a_nb_n}+\sqrt{c_ne_n})/2,\\ c_{n+1} &amp;= (\sqrt{a_nc_n}+\sqrt{b_ne_n})/2,\\ e_{n+1} &amp;= (\sqrt{a_ne_n}+\sqrt{b_nc_n})/2, \end{align} and I set $g = \lim_{n \to \infty} a_n$ (the limits are all the same, so I pick wlog the one for $a_n$).</p> <p>From [1], page 617, I also take \begin{align} A &amp;= a+b+c+e,\\ B &amp;= a+b-c-e,\\ C &amp;= a-b+c-e,\\ E &amp;= a-b-c+e, \end{align} and \begin{align} B_1 &amp;= (\sqrt{AB}+\sqrt{CE})/2,\\ B_2 &amp;= (\sqrt{AB}-\sqrt{CE})/2,\\ C_1 &amp;= (\sqrt{AC}+\sqrt{BE})/2,\\ C_2 &amp;= (\sqrt{AC}-\sqrt{BE})/2,\\ E_1 &amp;= (\sqrt{AE}+\sqrt{BC})/2,\\ E_2 &amp;= (\sqrt{AE}-\sqrt{BC})/2, \end{align} where it is important to notice that [1] defines $E_1 = (\sqrt{AE}-\sqrt{BC})/2 = E_2$ which I believe is a typo and hence I switch the $-$ for a $+$. So far we are in agreement with the definitions in the question and in [2], but considering the values of $\alpha_i$, $i=0,1,2,3$ there are differences. 
From [1], page 618, I take \begin{align} \Delta = (abce B_1 C_1 E_1 B_2 C_2 E_2)^{1/4}, \end{align} which is different from the question or the definition in [2], see page 325, in that I use the starting values $a,b,c,e$ and not the derived values $A,B,C,E$ (as stated in [1]). The values of $\alpha_i$ also differ between [1] and [2]. From [1], page 618, I take \begin{align} \alpha_0 &amp;= \frac{acB_1}{\Delta},\\ \alpha_1 &amp;= \frac{c C_1 E_1}{\Delta},\\ \alpha_2 &amp;= \frac{a C_2 E_1}{\Delta},\\ \alpha_3 &amp;= \frac{B_1 C_1 C_2}{\Delta}, \end{align} where again the starting values are used, and not $A,B,C,E$ as in the question or in [3], page 325.</p> <p>All sources however agree on the definition of $R(x)$ as \begin{align} R(x) = x (x-\alpha_0)(x-\alpha_1)(x-\alpha_2)(x-\alpha_3). \end{align}</p> <p>Now [1], page 618, states that \begin{align} 4 \frac{\pi^2}{g} = \int_{0}^{\alpha_3}\int_{\alpha_2}^{\alpha_1} \frac{x-y}{\sqrt{R(x)R(y)}} dx dy, \end{align} where it is important to notice the factor $4$ that is missing in [2] and in the question.</p> <p>My numerical experiments show that neither [1] nor [2] is correct. 
In fact I think the following statement should hold:</p> <p><strong>Theorem:</strong> For $a &gt; b &gt; c &gt; e &gt; 0$ such that $ae - bc &gt; 0$ we have \begin{align} \frac{\pi^2}{g} = \int_{0}^{\alpha_3}\int_{\alpha_2}^{\alpha_1} \frac{x-y}{\sqrt{R(x)R(y)}} dx dy, \end{align} where $g = \lim_{n \to \infty} a_n$ and $\alpha_i$, $i=0,1,2,3$ is defined as in [1], i.e., as above.</p> <p>It is important to notice that the statement</p> <ol> <li>uses condition $(C^*)$ from [2], page 324,</li> <li>uses the definition from [1] regarding $\Delta$ and $\alpha_i$, $i=0,1,2,3$,</li> <li>and does not include the factor $4$ which is present in [1] but not in [2].</li> </ol> <p>In summary, it's a wild mix of [1] and [2]...</p> <p>I got to this basically following [1] and then using $(C^*)$ instead of $(C)$ and realizing that the value was $4$ times too high. You can double check this with your favorite numerical software tool, I have added my R implementation below. Finally, I have to say that the values don't match perfectly. The numerical integration of the rational function seems to be difficult, but the numbers are always (for random starting values obeying $(C^*)$) close (enough) together in my opinion. 
As long as I enforce $(C^*)$ I also don't experience any trouble with negative square roots or the like (but this is possible if $(C)$ is used).</p> <hr> <h2>R Implementation</h2> <pre><code>#non-random starting values that work with Borwein &amp; Borwein page 324 Theorem 3 condition a &lt;- 7 b &lt;- 4 c &lt;- 3 e &lt;- 2 #get some random starting values that are OK goodStart &lt;- FALSE while (goodStart==FALSE){ X &lt;- sort(rexp(4,rate=20)) #Borwein &amp; Borwein page 324 Theorem 3 condition if ( X[4]*X[1] - X[2]*X[3] &gt; 0){ a &lt;- X[4] b &lt;- X[3] c &lt;- X[2] e &lt;- X[1] goodStart = TRUE } } #Borwein &amp; Borwein page 324 Theorem 3 condition if ( a*e - b*c &lt;= 0){stop("Borwein &amp; Borwein coefficient condition not ok")} #Borchardt condition - seems to be wrong #if ( a - b - c + e &lt;= 0){stop("coefficient condition not ok")} a_i &lt;- a b_i &lt;- b c_i &lt;- c e_i &lt;- e for (i in 1:1000) { tmp_a &lt;- (a_i+b_i+c_i+e_i)/4 tmp_b &lt;- (sqrt(a_i*b_i) + sqrt(c_i*e_i))/2 tmp_c &lt;- (sqrt(a_i*c_i) + sqrt(b_i*e_i))/2 tmp_e &lt;- (sqrt(a_i*e_i) + sqrt(b_i*c_i))/2 a_i &lt;- tmp_a b_i &lt;- tmp_b c_i &lt;- tmp_c e_i &lt;- tmp_e } #print(max(a_i,b_i,c_i,e_i)-min(a_i,b_i,c_i,e_i)) #just pick any of the four g &lt;- a_i #compute close form solution I1 &lt;- pi^2/g print("sequence") print(I1) #page 617 bottom A &lt;- a + b + c + e B &lt;- a + b - c - e C &lt;- a - b + c - e E &lt;- a - b - c + e B1 &lt;- (sqrt(A*B)+sqrt(C*E))/2 B2 &lt;- (sqrt(A*B)-sqrt(C*E))/2 C1 &lt;- (sqrt(A*C)+sqrt(B*E))/2 C2 &lt;- (sqrt(A*C)-sqrt(B*E))/2 E1 &lt;- (sqrt(A*E)+sqrt(B*C))/2 #this seems to be wrong in the original E2 &lt;- (sqrt(A*E)-sqrt(B*C))/2 Delta &lt;- (a*b*c*e*B1*C1*E1*B2*C2*E2)^(1/4) alpha0 &lt;- a*c*B1/Delta alpha1 &lt;- c*C1*E1/Delta alpha2 &lt;- a*C2*E1/Delta alpha3 &lt;- B1*C1*C2/Delta integrand &lt;- function(x,y,param){ alpha0 &lt;- param$alpha0 alpha1 &lt;- param$alpha1 alpha2 &lt;- param$alpha2 alpha3 &lt;- param$alpha3 Rx &lt;- 
x*(x-alpha0)*(x-alpha1)*(x-alpha2)*(x-alpha3) Ry &lt;- y*(y-alpha0)*(y-alpha1)*(y-alpha2)*(y-alpha3) z &lt;- (x-y) / sqrt(Rx*Ry) return(z) } p &lt;- list(alpha0=alpha0,alpha1=alpha1,alpha2=alpha2,alpha3=alpha3) library(statmod) gauss &lt;- gauss.quad(500,kind="legendre") w &lt;- gauss$weights x &lt;- gauss$nodes upper_x &lt;- alpha1 lower_x &lt;- alpha2 upper_y &lt;- alpha3 lower_y &lt;- 0 I &lt;- 0 for (i in 1:length(x)) { for (j in 1:length(x)) { xx &lt;- (upper_x-lower_x)*x[i]/2 + (upper_x+lower_x)/2 yy &lt;- (upper_y-lower_y)*x[j]/2 + (upper_y+lower_y)/2 ww &lt;- w[i]*w[j] I &lt;- I + ww*integrand(xx,yy,p) } } I &lt;- (upper_x-lower_x)*(upper_y-lower_y)*I/4 print("numerical integral:") print(I) </code></pre> Answer 6: <p>Here are just a few remarks.</p> <p>Borchart's original article (referred to as Berl. Monatsber. , pages 611–621) is fully available (in German) at <a href="http://bibliothek.bbaw.de/bibliothek-digital/digitalequellen/schriften/anzeige/index_html?band=09-mon/1876&amp;seite:int=647" rel="nofollow noreferrer">http://bibliothek.bbaw.de/bibliothek-digital/digitalequellen/schriften/anzeige/index_html?band=09-mon/1876&amp;seite:int=647</a>.</p> <p>However this article itself uses work already done in a longer older article (where most of the hard compatations are done), (referred to as (Mem. Acad. Berlin, 1878, pp. 33-96), and also fully available online at <a href="http://bibliothek.bbaw.de/bbaw/bibliothek-digital/digitalequellen/schriften/anzeige/index_html?band=07-abh/1878&amp;seite:int=93" rel="nofollow noreferrer">http://bibliothek.bbaw.de/bbaw/bibliothek-digital/digitalequellen/schriften/anzeige/index_html?band=07-abh/1878&amp;seite:int=93</a>.</p> <p>Your square-root of negative number problem can be solved very simply by starting from $1$ (or any nonzero index) instead of $0$. 
Because of Borchardt's identity $$ a_{n+1}+\varepsilon b_{n+1}+ \varepsilon' c_{n+1}+\varepsilon \varepsilon'd_{n+1}=\frac{(\sqrt{a_n}+\varepsilon \sqrt{b_n}+ \varepsilon'\sqrt{c_n}+\varepsilon \varepsilon'\sqrt{d_n})^2}{4} \tag{1} $$</p> <p>for any two signs $\varepsilon,\varepsilon'$ we have for $n\geq 1$, $$a_{n+1}+\varepsilon b_{n+1}+ \varepsilon ' c_{n+1}+\varepsilon \varepsilon 'd_{n+1} \geq 0 \tag{2}$$ </p> <p>So if you use any $a_1,b_1,c_1,d_1$ instead of $a_0,b_0,c_0,d_0$ to define the $B,C,\alpha$ constants, you will get only real and positive numbers.</p> Answer 7: <p>Ok, my answer got a little bit out of hand, but I hope it helps to figure out what the problem is. In my opinion the problems are mainly due to typos that are carried over. My answer is based on two articles. One is the original article of Borchardt, where the link was provided in Ewan's answer.</p> <p>[1] Borchardt, K. W. (1876): Hr. Borchardt las über das arithmetisch-geometrische Mittel aus vier Elementen. Monatsberichte der Königlich Preußischen Akademie der Wissenschaften zu Berlin (Berlin: Verl. d. Kgl. Akad. d. Wiss., 1877), Berlin. (<a href="http://bibliothek.bbaw.de/bibliothek-digital/digitalequellen/schriften/anzeige/index_html?band=09-mon/1876&amp;seite:int=646" rel="nofollow noreferrer">http://bibliothek.bbaw.de/bibliothek-digital/digitalequellen/schriften/anzeige/index_html?band=09-mon/1876&amp;seite:int=646</a>)</p> <p>[2] J. M. Borwein and P. B. Borwein. On the Mean Iteration $(a,b) \leftarrow \left(\frac{a+3b}{4},\frac{\sqrt{ab}+b}{2}\right)$. Mathematics of Computation, 53(187):311–326, 1989. (<a href="https://pdfs.semanticscholar.org/5ead/200192d495859702eb47bc54babb20ab75ae.pdf" rel="nofollow noreferrer">https://pdfs.semanticscholar.org/5ead/200192d495859702eb47bc54babb20ab75ae.pdf</a>)</p> <p>I didn't double check the mathematics (i.e. the reasoning and proofs) in [1] and [2], but I compared the relevant formulas and implemented them numerically. 
This finally led to a solution that I think is numerically satisfying (but see below).</p> <p>Basically, [2] uses the same definitions as in the question, so I'll start from [1] and highlight the differences.</p> <p>The first important difference are the conditions on $a,b,c,e$. In [1] (on top of page 612) the <em>real</em> numbers $a,b,c,e$ have to be such that (i) $a &gt; b &gt; c &gt; e &gt; 0$ and (ii) $a - b - c + e &gt; 0$. I will call this condition $(C)$. On the other hand in [2] page 324 we have $a &gt; b &gt; c &gt; e &gt; 0$ such that $ae - bc &gt; 0$, and I'll call this condition $(C^*)$.</p> <p>The conditions $(C)$ and $(C^*)$ are clearly there to prevent problems with negative square roots etc., which should already solve some of the problems. However, it seems (numerically) that $(C^*)$ is the one that should finally be used (see below).</p> <p>From the original article [1], page 612, I take $a_0 = a, b_0 = b, c_0 = c, e_0 = e$ and \begin{align} a_{n+1} &amp;= (a_n+b_n+c_n+e_n)/4,\\ b_{n+1} &amp;= (\sqrt{a_nb_n}+\sqrt{c_ne_n})/2,\\ c_{n+1} &amp;= (\sqrt{a_nc_n}+\sqrt{b_ne_n})/2,\\ e_{n+1} &amp;= (\sqrt{a_ne_n}+\sqrt{b_nc_n})/2, \end{align} and I set $g = \lim_{n \to \infty} a_n$ (the limits are all the same, so I pick wlog the one for $a_n$).</p> <p>From [1], page 617, I also take \begin{align} A &amp;= a+b+c+e,\\ B &amp;= a+b-c-e,\\ C &amp;= a-b+c-e,\\ E &amp;= a-b-c+e, \end{align} and \begin{align} B_1 &amp;= (\sqrt{AB}+\sqrt{CE})/2,\\ B_2 &amp;= (\sqrt{AB}-\sqrt{CE})/2,\\ C_1 &amp;= (\sqrt{AC}+\sqrt{BE})/2,\\ C_2 &amp;= (\sqrt{AC}-\sqrt{BE})/2,\\ E_1 &amp;= (\sqrt{AE}+\sqrt{BC})/2,\\ E_2 &amp;= (\sqrt{AE}-\sqrt{BC})/2, \end{align} where it is important to notice that [1] defines $E_1 = (\sqrt{AE}-\sqrt{BC})/2 = E_2$ which I believe is a typo and hence I switch the $-$ for a $+$. So far we are in agreement with the definitions in the question and in [2], but considering the values of $\alpha_i$, $i=0,1,2,3$ there are differences. 
From [1], page 618, I take \begin{align} \Delta = (abce B_1 C_1 E_1 B_2 C_2 E_2)^{1/4}, \end{align} which is different from the question or the definition in [2], see page 325, in that I use the starting values $a,b,c,e$ and not the derived values $A,B,C,E$ (as stated in [1]). The values of $\alpha_i$ also differ between [1] and [2]. From [1], page 618, I take \begin{align} \alpha_0 &amp;= \frac{acB_1}{\Delta},\\ \alpha_1 &amp;= \frac{c C_1 E_1}{\Delta},\\ \alpha_2 &amp;= \frac{a C_2 E_1}{\Delta},\\ \alpha_3 &amp;= \frac{B_1 C_1 C_2}{\Delta}, \end{align} where again the starting values are used, and not $A,B,C,E$ as in the question or in [3], page 325.</p> <p>All sources however agree on the definition of $R(x)$ as \begin{align} R(x) = x (x-\alpha_0)(x-\alpha_1)(x-\alpha_2)(x-\alpha_3). \end{align}</p> <p>Now [1], page 618, states that \begin{align} 4 \frac{\pi^2}{g} = \int_{0}^{\alpha_3}\int_{\alpha_2}^{\alpha_1} \frac{x-y}{\sqrt{R(x)R(y)}} dx dy, \end{align} where it is important to notice the factor $4$ that is missing in [2] and in the question.</p> <p>My numerical experiments show that neither [1] nor [2] is correct. 
In fact I think the following statement should hold:</p> <p><strong>Theorem:</strong> For $a &gt; b &gt; c &gt; e &gt; 0$ such that $ae - bc &gt; 0$ we have \begin{align} \frac{\pi^2}{g} = \int_{0}^{\alpha_3}\int_{\alpha_2}^{\alpha_1} \frac{x-y}{\sqrt{R(x)R(y)}} dx dy, \end{align} where $g = \lim_{n \to \infty} a_n$ and $\alpha_i$, $i=0,1,2,3$ is defined as in [1], i.e., as above.</p> <p>It is important to notice that the statement</p> <ol> <li>uses condition $(C^*)$ from [2], page 324,</li> <li>uses the definition from [1] regarding $\Delta$ and $\alpha_i$, $i=0,1,2,3$,</li> <li>and does not include the factor $4$ which is present in [1] but not in [2].</li> </ol> <p>In summary, it's a wild mix of [1] and [2]...</p> <p>I got to this basically following [1] and then using $(C^*)$ instead of $(C)$ and realizing that the value was $4$ times too high. You can double check this with your favorite numerical software tool, I have added my R implementation below. Finally, I have to say that the values don't match perfectly. The numerical integration of the rational function seems to be difficult, but the numbers are always (for random starting values obeying $(C^*)$) close (enough) together in my opinion. 
As long as I enforce $(C^*)$ I also don't experience any trouble with negative square roots or the like (but this is possible if $(C)$ is used).</p> <hr> <h2>R Implementation</h2> <pre><code>#non-random starting values that work with Borwein &amp; Borwein page 324 Theorem 3 condition
a &lt;- 7
b &lt;- 4
c &lt;- 3
e &lt;- 2

#get some random starting values that are OK
goodStart &lt;- FALSE
while (goodStart==FALSE){
  X &lt;- sort(rexp(4,rate=20))
  #Borwein &amp; Borwein page 324 Theorem 3 condition
  if ( X[4]*X[1] - X[2]*X[3] &gt; 0){
    a &lt;- X[4]
    b &lt;- X[3]
    c &lt;- X[2]
    e &lt;- X[1]
    goodStart &lt;- TRUE
  }
}

#Borwein &amp; Borwein page 324 Theorem 3 condition
if ( a*e - b*c &lt;= 0){stop("Borwein &amp; Borwein coefficient condition not ok")}
#Borchardt condition - seems to be wrong
#if ( a - b - c + e &lt;= 0){stop("coefficient condition not ok")}

a_i &lt;- a
b_i &lt;- b
c_i &lt;- c
e_i &lt;- e
for (i in 1:1000) {
  tmp_a &lt;- (a_i+b_i+c_i+e_i)/4
  tmp_b &lt;- (sqrt(a_i*b_i) + sqrt(c_i*e_i))/2
  tmp_c &lt;- (sqrt(a_i*c_i) + sqrt(b_i*e_i))/2
  tmp_e &lt;- (sqrt(a_i*e_i) + sqrt(b_i*c_i))/2
  a_i &lt;- tmp_a
  b_i &lt;- tmp_b
  c_i &lt;- tmp_c
  e_i &lt;- tmp_e
}
#print(max(a_i,b_i,c_i,e_i)-min(a_i,b_i,c_i,e_i))

#just pick any of the four
g &lt;- a_i

#compute closed-form solution
I1 &lt;- pi^2/g
print("sequence")
print(I1)

#page 617 bottom
A &lt;- a + b + c + e
B &lt;- a + b - c - e
C &lt;- a - b + c - e
E &lt;- a - b - c + e

B1 &lt;- (sqrt(A*B)+sqrt(C*E))/2
B2 &lt;- (sqrt(A*B)-sqrt(C*E))/2
C1 &lt;- (sqrt(A*C)+sqrt(B*E))/2
C2 &lt;- (sqrt(A*C)-sqrt(B*E))/2
E1 &lt;- (sqrt(A*E)+sqrt(B*C))/2  #this seems to be wrong in the original
E2 &lt;- (sqrt(A*E)-sqrt(B*C))/2

Delta &lt;- (a*b*c*e*B1*C1*E1*B2*C2*E2)^(1/4)

alpha0 &lt;- a*c*B1/Delta
alpha1 &lt;- c*C1*E1/Delta
alpha2 &lt;- a*C2*E1/Delta
alpha3 &lt;- B1*C1*C2/Delta

integrand &lt;- function(x,y,param){
  alpha0 &lt;- param$alpha0
  alpha1 &lt;- param$alpha1
  alpha2 &lt;- param$alpha2
  alpha3 &lt;- param$alpha3
  Rx &lt;- x*(x-alpha0)*(x-alpha1)*(x-alpha2)*(x-alpha3)
  Ry &lt;- y*(y-alpha0)*(y-alpha1)*(y-alpha2)*(y-alpha3)
  z &lt;- (x-y) / sqrt(Rx*Ry)
  return(z)
}
p &lt;- list(alpha0=alpha0,alpha1=alpha1,alpha2=alpha2,alpha3=alpha3)

library(statmod)
gauss &lt;- gauss.quad(500,kind="legendre")
w &lt;- gauss$weights
x &lt;- gauss$nodes

upper_x &lt;- alpha1
lower_x &lt;- alpha2
upper_y &lt;- alpha3
lower_y &lt;- 0

I &lt;- 0
for (i in 1:length(x)) {
  for (j in 1:length(x)) {
    xx &lt;- (upper_x-lower_x)*x[i]/2 + (upper_x+lower_x)/2
    yy &lt;- (upper_y-lower_y)*x[j]/2 + (upper_y+lower_y)/2
    ww &lt;- w[i]*w[j]
    I &lt;- I + ww*integrand(xx,yy,p)
  }
}
I &lt;- (upper_x-lower_x)*(upper_y-lower_y)*I/4
print("numerical integral:")
print(I)
</code></pre> Answer 8: <p>Here are just a few remarks.</p> <p>Borchardt's original article (referred to as Berl. Monatsber., pages 611–621) is fully available (in German) at <a href="http://bibliothek.bbaw.de/bibliothek-digital/digitalequellen/schriften/anzeige/index_html?band=09-mon/1876&amp;seite:int=647" rel="nofollow noreferrer">http://bibliothek.bbaw.de/bibliothek-digital/digitalequellen/schriften/anzeige/index_html?band=09-mon/1876&amp;seite:int=647</a>.</p> <p>However this article itself uses work already done in a longer, older article (where most of the hard computations are done), referred to as Mem. Acad. Berlin, 1878, pp. 33–96, and also fully available online at <a href="http://bibliothek.bbaw.de/bbaw/bibliothek-digital/digitalequellen/schriften/anzeige/index_html?band=07-abh/1878&amp;seite:int=93" rel="nofollow noreferrer">http://bibliothek.bbaw.de/bbaw/bibliothek-digital/digitalequellen/schriften/anzeige/index_html?band=07-abh/1878&amp;seite:int=93</a>.</p> <p>Your square-root-of-negative-number problem can be solved very simply by starting from $1$ (or any nonzero index) instead of $0$. 
Because of Borchardt's identity $$ a_{n+1}+\varepsilon b_{n+1}+ \varepsilon' c_{n+1}+\varepsilon \varepsilon'd_{n+1}=\frac{(\sqrt{a_n}+\varepsilon \sqrt{b_n}+ \varepsilon'\sqrt{c_n}+\varepsilon \varepsilon'\sqrt{d_n})^2}{4} \tag{1} $$</p> <p>for any two signs $\varepsilon,\varepsilon'$ we have for $n\geq 1$, $$ a_{n+1}+\varepsilon b_{n+1}+ \varepsilon' c_{n+1}+\varepsilon \varepsilon' d_{n+1} \geq 0 \tag{2} $$ </p> <p>So if you use any $a_1,b_1,c_1,d_1$ instead of $a_0,b_0,c_0,d_0$ to define the $B,C,\alpha$ constants, you will get only real and positive numbers.</p>
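<p>Borchardt's identity (1), and the nonnegativity (2) that follows from it, can be checked numerically. The following is a small Python sketch of that check (my own verification, not taken from [1] or [2]): it runs one step of the four-element mean iteration and confirms that, for every choice of signs, the combination $a_1+\varepsilon b_1+\varepsilon' c_1+\varepsilon\varepsilon' d_1$ equals the square on the right-hand side of (1), and hence is nonnegative.</p>

```python
import math
import random

def borchardt_step(a, b, c, d):
    # One step of Borchardt's four-element mean iteration
    return ((a + b + c + d) / 4,
            (math.sqrt(a*b) + math.sqrt(c*d)) / 2,
            (math.sqrt(a*c) + math.sqrt(b*d)) / 2,
            (math.sqrt(a*d) + math.sqrt(b*c)) / 2)

random.seed(0)
for _ in range(100):
    # arbitrary positive starting values, no ordering condition needed here
    a, b, c, d = (random.uniform(0.1, 10) for _ in range(4))
    a1, b1, c1, d1 = borchardt_step(a, b, c, d)
    for e1 in (+1, -1):
        for e2 in (+1, -1):
            lhs = a1 + e1*b1 + e2*c1 + e1*e2*d1
            rhs = (math.sqrt(a) + e1*math.sqrt(b) + e2*math.sqrt(c)
                   + e1*e2*math.sqrt(d))**2 / 4
            assert abs(lhs - rhs) < 1e-9   # identity (1)
            assert lhs > -1e-12            # nonnegativity (2)
print("identity (1) and bound (2) hold for all tested sign choices")
```

<p>This is why restarting the $B,C,\alpha$ computation from the index-$1$ iterates avoids negative square roots: after one step the quantities $A,B,C,E$ built from $a_1,b_1,c_1,d_1$ are squares (up to the factor $1/4$), hence nonnegative.</p>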
In fact I think the following statement should hold:</p> <p><strong>Theorem:</strong> For $a &gt; b &gt; c &gt; e &gt; 0$ such that $ae - bc &gt; 0$ we have \begin{align} \frac{\pi^2}{g} = \int_{0}^{\alpha_3}\int_{\alpha_2}^{\alpha_1} \frac{x-y}{\sqrt{R(x)R(y)}} dx dy, \end{align} where $g = \lim_{n \to \infty} a_n$ and $\alpha_i$, $i=0,1,2,3$ is defined as in [1], i.e., as above.</p> <p>It is important to notice that the statement</p> <ol> <li>uses condition $(C^*)$ from [2], page 324,</li> <li>uses the definition from [1] regarding $\Delta$ and $\alpha_i$, $i=0,1,2,3$,</li> <li>and does not include the factor $4$ which is present in [1] but not in [2].</li> </ol> <p>In summary, it's a wild mix of [1] and [2]...</p> <p>I got to this basically following [1] and then using $(C^*)$ instead of $(C)$ and realizing that the value was $4$ times too high. You can double check this with your favorite numerical software tool, I have added my R implementation below. Finally, I have to say that the values don't match perfectly. The numerical integration of the rational function seems to be difficult, but the numbers are always (for random starting values obeying $(C^*)$) close (enough) together in my opinion. 
As long as I enforce $(C^*)$ I also don't experience any trouble with negative square roots or the like (but this is possible if $(C)$ is used).</p> <hr> <h2>R Implementation</h2> <pre><code>#non-random starting values that work with Borwein &amp; Borwein page 324 Theorem 3 condition a &lt;- 7 b &lt;- 4 c &lt;- 3 e &lt;- 2 #get some random starting values that are OK goodStart &lt;- FALSE while (goodStart==FALSE){ X &lt;- sort(rexp(4,rate=20)) #Borwein &amp; Borwein page 324 Theorem 3 condition if ( X[4]*X[1] - X[2]*X[3] &gt; 0){ a &lt;- X[4] b &lt;- X[3] c &lt;- X[2] e &lt;- X[1] goodStart = TRUE } } #Borwein &amp; Borwein page 324 Theorem 3 condition if ( a*e - b*c &lt;= 0){stop("Borwein &amp; Borwein coefficient condition not ok")} #Borchardt condition - seems to be wrong #if ( a - b - c + e &lt;= 0){stop("coefficient condition not ok")} a_i &lt;- a b_i &lt;- b c_i &lt;- c e_i &lt;- e for (i in 1:1000) { tmp_a &lt;- (a_i+b_i+c_i+e_i)/4 tmp_b &lt;- (sqrt(a_i*b_i) + sqrt(c_i*e_i))/2 tmp_c &lt;- (sqrt(a_i*c_i) + sqrt(b_i*e_i))/2 tmp_e &lt;- (sqrt(a_i*e_i) + sqrt(b_i*c_i))/2 a_i &lt;- tmp_a b_i &lt;- tmp_b c_i &lt;- tmp_c e_i &lt;- tmp_e } #print(max(a_i,b_i,c_i,e_i)-min(a_i,b_i,c_i,e_i)) #just pick any of the four g &lt;- a_i #compute close form solution I1 &lt;- pi^2/g print("sequence") print(I1) #page 617 bottom A &lt;- a + b + c + e B &lt;- a + b - c - e C &lt;- a - b + c - e E &lt;- a - b - c + e B1 &lt;- (sqrt(A*B)+sqrt(C*E))/2 B2 &lt;- (sqrt(A*B)-sqrt(C*E))/2 C1 &lt;- (sqrt(A*C)+sqrt(B*E))/2 C2 &lt;- (sqrt(A*C)-sqrt(B*E))/2 E1 &lt;- (sqrt(A*E)+sqrt(B*C))/2 #this seems to be wrong in the original E2 &lt;- (sqrt(A*E)-sqrt(B*C))/2 Delta &lt;- (a*b*c*e*B1*C1*E1*B2*C2*E2)^(1/4) alpha0 &lt;- a*c*B1/Delta alpha1 &lt;- c*C1*E1/Delta alpha2 &lt;- a*C2*E1/Delta alpha3 &lt;- B1*C1*C2/Delta integrand &lt;- function(x,y,param){ alpha0 &lt;- param$alpha0 alpha1 &lt;- param$alpha1 alpha2 &lt;- param$alpha2 alpha3 &lt;- param$alpha3 Rx &lt;- 
x*(x-alpha0)*(x-alpha1)*(x-alpha2)*(x-alpha3) Ry &lt;- y*(y-alpha0)*(y-alpha1)*(y-alpha2)*(y-alpha3) z &lt;- (x-y) / sqrt(Rx*Ry) return(z) } p &lt;- list(alpha0=alpha0,alpha1=alpha1,alpha2=alpha2,alpha3=alpha3) library(statmod) gauss &lt;- gauss.quad(500,kind="legendre") w &lt;- gauss$weights x &lt;- gauss$nodes upper_x &lt;- alpha1 lower_x &lt;- alpha2 upper_y &lt;- alpha3 lower_y &lt;- 0 I &lt;- 0 for (i in 1:length(x)) { for (j in 1:length(x)) { xx &lt;- (upper_x-lower_x)*x[i]/2 + (upper_x+lower_x)/2 yy &lt;- (upper_y-lower_y)*x[j]/2 + (upper_y+lower_y)/2 ww &lt;- w[i]*w[j] I &lt;- I + ww*integrand(xx,yy,p) } } I &lt;- (upper_x-lower_x)*(upper_y-lower_y)*I/4 print("numerical integral:") print(I) </code></pre> Answer 14: <p>Here are just a few remarks.</p> <p>Borchart's original article (referred to as Berl. Monatsber. , pages 611–621) is fully available (in German) at <a href="http://bibliothek.bbaw.de/bibliothek-digital/digitalequellen/schriften/anzeige/index_html?band=09-mon/1876&amp;seite:int=647" rel="nofollow noreferrer">http://bibliothek.bbaw.de/bibliothek-digital/digitalequellen/schriften/anzeige/index_html?band=09-mon/1876&amp;seite:int=647</a>.</p> <p>However this article itself uses work already done in a longer older article (where most of the hard compatations are done), (referred to as (Mem. Acad. Berlin, 1878, pp. 33-96), and also fully available online at <a href="http://bibliothek.bbaw.de/bbaw/bibliothek-digital/digitalequellen/schriften/anzeige/index_html?band=07-abh/1878&amp;seite:int=93" rel="nofollow noreferrer">http://bibliothek.bbaw.de/bbaw/bibliothek-digital/digitalequellen/schriften/anzeige/index_html?band=07-abh/1878&amp;seite:int=93</a>.</p> <p>Your square-root of negative number problem can be solved very simply by starting from $1$ (or any nonzero index) instead of $0$. 
Because of Borchardt's identity $$ a_{n+1}+\varepsilon b_{n+1}+ \varepsilon' c_{n+1}+\varepsilon \varepsilon'd_{n+1}=\frac{(\sqrt{a_n}+\varepsilon \sqrt{b_n}+ \varepsilon'\sqrt{c_n}+\varepsilon \varepsilon'\sqrt{d_n})^2}{4} \tag{1} $$</p> <p>for any two signs $\varepsilon,\varepsilon'$ we have for $n\geq 1$, $$a_{n+1}+\varepsilon b_{n+1}+ \varepsilon ' c_{n+1}+\varepsilon \varepsilon 'd_{n+1} \geq 0 \tag{2}$$ </p> <p>So if you use any $a_1,b_1,c_1,d_1$ instead of $a_0,b_0,c_0,d_0$ to define the $B,C,\alpha$ constants, you will get only real and positive numbers.</p> Answer 15: <p>Ok, my answer got a little bit out of hand, but I hope it helps to figure out what the problem is. In my opinion the problems are mainly due to typos that are carried over. My answer is based on two articles. One is the original article of Borchardt, where the link was provided in Ewan's answer.</p> <p>[1] Borchardt, K. W. (1876): Hr. Borchardt las über das arithmetisch-geometrische Mittel aus vier Elementen. Monatsberichte der Königlich Preußischen Akademie der Wissenschaften zu Berlin (Berlin: Verl. d. Kgl. Akad. d. Wiss., 1877), Berlin. (<a href="http://bibliothek.bbaw.de/bibliothek-digital/digitalequellen/schriften/anzeige/index_html?band=09-mon/1876&amp;seite:int=646" rel="nofollow noreferrer">http://bibliothek.bbaw.de/bibliothek-digital/digitalequellen/schriften/anzeige/index_html?band=09-mon/1876&amp;seite:int=646</a>)</p> <p>[2] J. M. Borwein and P. B. Borwein. On the Mean Iteration $(a,b) \leftarrow \left(\frac{a+3b}{4},\frac{\sqrt{ab}+b}{2}\right)$. Mathematics of Computation, 53(187):311–326, 1989. (<a href="https://pdfs.semanticscholar.org/5ead/200192d495859702eb47bc54babb20ab75ae.pdf" rel="nofollow noreferrer">https://pdfs.semanticscholar.org/5ead/200192d495859702eb47bc54babb20ab75ae.pdf</a>)</p> <p>I didn't double check the mathematics (i.e. the reasoning and proofs) in [1] and [2], but I compared the relevant formulas and implemented them numerically. 
This finally led to a solution that I think is numerically satisfying (but see below).</p> <p>Basically, [2] uses the same definitions as in the question, so I'll start from [1] and highlight the differences.</p> <p>The first important difference are the conditions on $a,b,c,e$. In [1] (on top of page 612) the <em>real</em> numbers $a,b,c,e$ have to be such that (i) $a &gt; b &gt; c &gt; e &gt; 0$ and (ii) $a - b - c + e &gt; 0$. I will call this condition $(C)$. On the other hand in [2] page 324 we have $a &gt; b &gt; c &gt; e &gt; 0$ such that $ae - bc &gt; 0$, and I'll call this condition $(C^*)$.</p> <p>The conditions $(C)$ and $(C^*)$ are clearly there to prevent problems with negative square roots etc., which should already solve some of the problems. However, it seems (numerically) that $(C^*)$ is the one that should finally be used (see below).</p> <p>From the original article [1], page 612, I take $a_0 = a, b_0 = b, c_0 = c, e_0 = e$ and \begin{align} a_{n+1} &amp;= (a_n+b_n+c_n+e_n)/4,\\ b_{n+1} &amp;= (\sqrt{a_nb_n}+\sqrt{c_ne_n})/2,\\ c_{n+1} &amp;= (\sqrt{a_nc_n}+\sqrt{b_ne_n})/2,\\ e_{n+1} &amp;= (\sqrt{a_ne_n}+\sqrt{b_nc_n})/2, \end{align} and I set $g = \lim_{n \to \infty} a_n$ (the limits are all the same, so I pick wlog the one for $a_n$).</p> <p>From [1], page 617, I also take \begin{align} A &amp;= a+b+c+e,\\ B &amp;= a+b-c-e,\\ C &amp;= a-b+c-e,\\ E &amp;= a-b-c+e, \end{align} and \begin{align} B_1 &amp;= (\sqrt{AB}+\sqrt{CE})/2,\\ B_2 &amp;= (\sqrt{AB}-\sqrt{CE})/2,\\ C_1 &amp;= (\sqrt{AC}+\sqrt{BE})/2,\\ C_2 &amp;= (\sqrt{AC}-\sqrt{BE})/2,\\ E_1 &amp;= (\sqrt{AE}+\sqrt{BC})/2,\\ E_2 &amp;= (\sqrt{AE}-\sqrt{BC})/2, \end{align} where it is important to notice that [1] defines $E_1 = (\sqrt{AE}-\sqrt{BC})/2 = E_2$ which I believe is a typo and hence I switch the $-$ for a $+$. So far we are in agreement with the definitions in the question and in [2], but considering the values of $\alpha_i$, $i=0,1,2,3$ there are differences. 
From [1], page 618, I take \begin{align} \Delta = (abce B_1 C_1 E_1 B_2 C_2 E_2)^{1/4}, \end{align} which is different from the question or the definition in [2], see page 325, in that I use the starting values $a,b,c,e$ and not the derived values $A,B,C,E$ (as stated in [1]). The values of $\alpha_i$ also differ between [1] and [2]. From [1], page 618, I take \begin{align} \alpha_0 &amp;= \frac{acB_1}{\Delta},\\ \alpha_1 &amp;= \frac{c C_1 E_1}{\Delta},\\ \alpha_2 &amp;= \frac{a C_2 E_1}{\Delta},\\ \alpha_3 &amp;= \frac{B_1 C_1 C_2}{\Delta}, \end{align} where again the starting values are used, and not $A,B,C,E$ as in the question or in [3], page 325.</p> <p>All sources however agree on the definition of $R(x)$ as \begin{align} R(x) = x (x-\alpha_0)(x-\alpha_1)(x-\alpha_2)(x-\alpha_3). \end{align}</p> <p>Now [1], page 618, states that \begin{align} 4 \frac{\pi^2}{g} = \int_{0}^{\alpha_3}\int_{\alpha_2}^{\alpha_1} \frac{x-y}{\sqrt{R(x)R(y)}} dx dy, \end{align} where it is important to notice the factor $4$ that is missing in [2] and in the question.</p> <p>My numerical experiments show that neither [1] nor [2] is correct. 
In fact I think the following statement should hold:</p> <p><strong>Theorem:</strong> For $a &gt; b &gt; c &gt; e &gt; 0$ such that $ae - bc &gt; 0$ we have \begin{align} \frac{\pi^2}{g} = \int_{0}^{\alpha_3}\int_{\alpha_2}^{\alpha_1} \frac{x-y}{\sqrt{R(x)R(y)}} dx dy, \end{align} where $g = \lim_{n \to \infty} a_n$ and $\alpha_i$, $i=0,1,2,3$ is defined as in [1], i.e., as above.</p> <p>It is important to notice that the statement</p> <ol> <li>uses condition $(C^*)$ from [2], page 324,</li> <li>uses the definition from [1] regarding $\Delta$ and $\alpha_i$, $i=0,1,2,3$,</li> <li>and does not include the factor $4$ which is present in [1] but not in [2].</li> </ol> <p>In summary, it's a wild mix of [1] and [2]...</p> <p>I got to this basically following [1] and then using $(C^*)$ instead of $(C)$ and realizing that the value was $4$ times too high. You can double check this with your favorite numerical software tool, I have added my R implementation below. Finally, I have to say that the values don't match perfectly. The numerical integration of the rational function seems to be difficult, but the numbers are always (for random starting values obeying $(C^*)$) close (enough) together in my opinion. 
As long as I enforce $(C^*)$ I also don't experience any trouble with negative square roots or the like (but this is possible if $(C)$ is used).</p> <hr> <h2>R Implementation</h2> <pre><code>#non-random starting values that work with Borwein &amp; Borwein page 324 Theorem 3 condition a &lt;- 7 b &lt;- 4 c &lt;- 3 e &lt;- 2 #get some random starting values that are OK goodStart &lt;- FALSE while (goodStart==FALSE){ X &lt;- sort(rexp(4,rate=20)) #Borwein &amp; Borwein page 324 Theorem 3 condition if ( X[4]*X[1] - X[2]*X[3] &gt; 0){ a &lt;- X[4] b &lt;- X[3] c &lt;- X[2] e &lt;- X[1] goodStart = TRUE } } #Borwein &amp; Borwein page 324 Theorem 3 condition if ( a*e - b*c &lt;= 0){stop("Borwein &amp; Borwein coefficient condition not ok")} #Borchardt condition - seems to be wrong #if ( a - b - c + e &lt;= 0){stop("coefficient condition not ok")} a_i &lt;- a b_i &lt;- b c_i &lt;- c e_i &lt;- e for (i in 1:1000) { tmp_a &lt;- (a_i+b_i+c_i+e_i)/4 tmp_b &lt;- (sqrt(a_i*b_i) + sqrt(c_i*e_i))/2 tmp_c &lt;- (sqrt(a_i*c_i) + sqrt(b_i*e_i))/2 tmp_e &lt;- (sqrt(a_i*e_i) + sqrt(b_i*c_i))/2 a_i &lt;- tmp_a b_i &lt;- tmp_b c_i &lt;- tmp_c e_i &lt;- tmp_e } #print(max(a_i,b_i,c_i,e_i)-min(a_i,b_i,c_i,e_i)) #just pick any of the four g &lt;- a_i #compute close form solution I1 &lt;- pi^2/g print("sequence") print(I1) #page 617 bottom A &lt;- a + b + c + e B &lt;- a + b - c - e C &lt;- a - b + c - e E &lt;- a - b - c + e B1 &lt;- (sqrt(A*B)+sqrt(C*E))/2 B2 &lt;- (sqrt(A*B)-sqrt(C*E))/2 C1 &lt;- (sqrt(A*C)+sqrt(B*E))/2 C2 &lt;- (sqrt(A*C)-sqrt(B*E))/2 E1 &lt;- (sqrt(A*E)+sqrt(B*C))/2 #this seems to be wrong in the original E2 &lt;- (sqrt(A*E)-sqrt(B*C))/2 Delta &lt;- (a*b*c*e*B1*C1*E1*B2*C2*E2)^(1/4) alpha0 &lt;- a*c*B1/Delta alpha1 &lt;- c*C1*E1/Delta alpha2 &lt;- a*C2*E1/Delta alpha3 &lt;- B1*C1*C2/Delta integrand &lt;- function(x,y,param){ alpha0 &lt;- param$alpha0 alpha1 &lt;- param$alpha1 alpha2 &lt;- param$alpha2 alpha3 &lt;- param$alpha3 Rx &lt;- 
x*(x-alpha0)*(x-alpha1)*(x-alpha2)*(x-alpha3) Ry &lt;- y*(y-alpha0)*(y-alpha1)*(y-alpha2)*(y-alpha3) z &lt;- (x-y) / sqrt(Rx*Ry) return(z) } p &lt;- list(alpha0=alpha0,alpha1=alpha1,alpha2=alpha2,alpha3=alpha3) library(statmod) gauss &lt;- gauss.quad(500,kind="legendre") w &lt;- gauss$weights x &lt;- gauss$nodes upper_x &lt;- alpha1 lower_x &lt;- alpha2 upper_y &lt;- alpha3 lower_y &lt;- 0 I &lt;- 0 for (i in 1:length(x)) { for (j in 1:length(x)) { xx &lt;- (upper_x-lower_x)*x[i]/2 + (upper_x+lower_x)/2 yy &lt;- (upper_y-lower_y)*x[j]/2 + (upper_y+lower_y)/2 ww &lt;- w[i]*w[j] I &lt;- I + ww*integrand(xx,yy,p) } } I &lt;- (upper_x-lower_x)*(upper_y-lower_y)*I/4 print("numerical integral:") print(I) </code></pre> Answer 16: <p>Here are just a few remarks.</p> <p>Borchart's original article (referred to as Berl. Monatsber. , pages 611–621) is fully available (in German) at <a href="http://bibliothek.bbaw.de/bibliothek-digital/digitalequellen/schriften/anzeige/index_html?band=09-mon/1876&amp;seite:int=647" rel="nofollow noreferrer">http://bibliothek.bbaw.de/bibliothek-digital/digitalequellen/schriften/anzeige/index_html?band=09-mon/1876&amp;seite:int=647</a>.</p> <p>However this article itself uses work already done in a longer older article (where most of the hard compatations are done), (referred to as (Mem. Acad. Berlin, 1878, pp. 33-96), and also fully available online at <a href="http://bibliothek.bbaw.de/bbaw/bibliothek-digital/digitalequellen/schriften/anzeige/index_html?band=07-abh/1878&amp;seite:int=93" rel="nofollow noreferrer">http://bibliothek.bbaw.de/bbaw/bibliothek-digital/digitalequellen/schriften/anzeige/index_html?band=07-abh/1878&amp;seite:int=93</a>.</p> <p>Your square-root of negative number problem can be solved very simply by starting from $1$ (or any nonzero index) instead of $0$. 
Because of Borchardt's identity $$ a_{n+1}+\varepsilon b_{n+1}+ \varepsilon' c_{n+1}+\varepsilon \varepsilon'd_{n+1}=\frac{(\sqrt{a_n}+\varepsilon \sqrt{b_n}+ \varepsilon'\sqrt{c_n}+\varepsilon \varepsilon'\sqrt{d_n})^2}{4} \tag{1} $$</p> <p>for any two signs $\varepsilon,\varepsilon'$ we have for $n\geq 1$, $$a_{n+1}+\varepsilon b_{n+1}+ \varepsilon ' c_{n+1}+\varepsilon \varepsilon 'd_{n+1} \geq 0 \tag{2}$$ </p> <p>So if you use any $a_1,b_1,c_1,d_1$ instead of $a_0,b_0,c_0,d_0$ to define the $B,C,\alpha$ constants, you will get only real and positive numbers.</p> Answer 17: <p>Ok, my answer got a little bit out of hand, but I hope it helps to figure out what the problem is. In my opinion the problems are mainly due to typos that are carried over. My answer is based on two articles. One is the original article of Borchardt, where the link was provided in Ewan's answer.</p> <p>[1] Borchardt, K. W. (1876): Hr. Borchardt las über das arithmetisch-geometrische Mittel aus vier Elementen. Monatsberichte der Königlich Preußischen Akademie der Wissenschaften zu Berlin (Berlin: Verl. d. Kgl. Akad. d. Wiss., 1877), Berlin. (<a href="http://bibliothek.bbaw.de/bibliothek-digital/digitalequellen/schriften/anzeige/index_html?band=09-mon/1876&amp;seite:int=646" rel="nofollow noreferrer">http://bibliothek.bbaw.de/bibliothek-digital/digitalequellen/schriften/anzeige/index_html?band=09-mon/1876&amp;seite:int=646</a>)</p> <p>[2] J. M. Borwein and P. B. Borwein. On the Mean Iteration $(a,b) \leftarrow \left(\frac{a+3b}{4},\frac{\sqrt{ab}+b}{2}\right)$. Mathematics of Computation, 53(187):311–326, 1989. (<a href="https://pdfs.semanticscholar.org/5ead/200192d495859702eb47bc54babb20ab75ae.pdf" rel="nofollow noreferrer">https://pdfs.semanticscholar.org/5ead/200192d495859702eb47bc54babb20ab75ae.pdf</a>)</p> <p>I didn't double check the mathematics (i.e. the reasoning and proofs) in [1] and [2], but I compared the relevant formulas and implemented them numerically. 
This finally led to a solution that I think is numerically satisfying (but see below).</p> <p>Basically, [2] uses the same definitions as in the question, so I'll start from [1] and highlight the differences.</p> <p>The first important difference are the conditions on $a,b,c,e$. In [1] (on top of page 612) the <em>real</em> numbers $a,b,c,e$ have to be such that (i) $a &gt; b &gt; c &gt; e &gt; 0$ and (ii) $a - b - c + e &gt; 0$. I will call this condition $(C)$. On the other hand in [2] page 324 we have $a &gt; b &gt; c &gt; e &gt; 0$ such that $ae - bc &gt; 0$, and I'll call this condition $(C^*)$.</p> <p>The conditions $(C)$ and $(C^*)$ are clearly there to prevent problems with negative square roots etc., which should already solve some of the problems. However, it seems (numerically) that $(C^*)$ is the one that should finally be used (see below).</p> <p>From the original article [1], page 612, I take $a_0 = a, b_0 = b, c_0 = c, e_0 = e$ and \begin{align} a_{n+1} &amp;= (a_n+b_n+c_n+e_n)/4,\\ b_{n+1} &amp;= (\sqrt{a_nb_n}+\sqrt{c_ne_n})/2,\\ c_{n+1} &amp;= (\sqrt{a_nc_n}+\sqrt{b_ne_n})/2,\\ e_{n+1} &amp;= (\sqrt{a_ne_n}+\sqrt{b_nc_n})/2, \end{align} and I set $g = \lim_{n \to \infty} a_n$ (the limits are all the same, so I pick wlog the one for $a_n$).</p> <p>From [1], page 617, I also take \begin{align} A &amp;= a+b+c+e,\\ B &amp;= a+b-c-e,\\ C &amp;= a-b+c-e,\\ E &amp;= a-b-c+e, \end{align} and \begin{align} B_1 &amp;= (\sqrt{AB}+\sqrt{CE})/2,\\ B_2 &amp;= (\sqrt{AB}-\sqrt{CE})/2,\\ C_1 &amp;= (\sqrt{AC}+\sqrt{BE})/2,\\ C_2 &amp;= (\sqrt{AC}-\sqrt{BE})/2,\\ E_1 &amp;= (\sqrt{AE}+\sqrt{BC})/2,\\ E_2 &amp;= (\sqrt{AE}-\sqrt{BC})/2, \end{align} where it is important to notice that [1] defines $E_1 = (\sqrt{AE}-\sqrt{BC})/2 = E_2$ which I believe is a typo and hence I switch the $-$ for a $+$. So far we are in agreement with the definitions in the question and in [2], but considering the values of $\alpha_i$, $i=0,1,2,3$ there are differences. 
From [1], page 618, I take \begin{align} \Delta = (abce B_1 C_1 E_1 B_2 C_2 E_2)^{1/4}, \end{align} which is different from the question or the definition in [2], see page 325, in that I use the starting values $a,b,c,e$ and not the derived values $A,B,C,E$ (as stated in [1]). The values of $\alpha_i$ also differ between [1] and [2]. From [1], page 618, I take \begin{align} \alpha_0 &amp;= \frac{acB_1}{\Delta},\\ \alpha_1 &amp;= \frac{c C_1 E_1}{\Delta},\\ \alpha_2 &amp;= \frac{a C_2 E_1}{\Delta},\\ \alpha_3 &amp;= \frac{B_1 C_1 C_2}{\Delta}, \end{align} where again the starting values are used, and not $A,B,C,E$ as in the question or in [3], page 325.</p> <p>All sources however agree on the definition of $R(x)$ as \begin{align} R(x) = x (x-\alpha_0)(x-\alpha_1)(x-\alpha_2)(x-\alpha_3). \end{align}</p> <p>Now [1], page 618, states that \begin{align} 4 \frac{\pi^2}{g} = \int_{0}^{\alpha_3}\int_{\alpha_2}^{\alpha_1} \frac{x-y}{\sqrt{R(x)R(y)}} dx dy, \end{align} where it is important to notice the factor $4$ that is missing in [2] and in the question.</p> <p>My numerical experiments show that neither [1] nor [2] is correct. 
In fact I think the following statement should hold:</p> <p><strong>Theorem:</strong> For $a &gt; b &gt; c &gt; e &gt; 0$ such that $ae - bc &gt; 0$ we have \begin{align} \frac{\pi^2}{g} = \int_{0}^{\alpha_3}\int_{\alpha_2}^{\alpha_1} \frac{x-y}{\sqrt{R(x)R(y)}} dx dy, \end{align} where $g = \lim_{n \to \infty} a_n$ and $\alpha_i$, $i=0,1,2,3$ is defined as in [1], i.e., as above.</p> <p>It is important to notice that the statement</p> <ol> <li>uses condition $(C^*)$ from [2], page 324,</li> <li>uses the definition from [1] regarding $\Delta$ and $\alpha_i$, $i=0,1,2,3$,</li> <li>and does not include the factor $4$ which is present in [1] but not in [2].</li> </ol> <p>In summary, it's a wild mix of [1] and [2]...</p> <p>I got to this basically following [1] and then using $(C^*)$ instead of $(C)$ and realizing that the value was $4$ times too high. You can double check this with your favorite numerical software tool, I have added my R implementation below. Finally, I have to say that the values don't match perfectly. The numerical integration of the rational function seems to be difficult, but the numbers are always (for random starting values obeying $(C^*)$) close (enough) together in my opinion. 
As long as I enforce $(C^*)$ I also don't experience any trouble with negative square roots or the like (but this is possible if $(C)$ is used).</p> <hr> <h2>R Implementation</h2> <pre><code>#non-random starting values that work with Borwein &amp; Borwein page 324 Theorem 3 condition a &lt;- 7 b &lt;- 4 c &lt;- 3 e &lt;- 2 #get some random starting values that are OK goodStart &lt;- FALSE while (goodStart==FALSE){ X &lt;- sort(rexp(4,rate=20)) #Borwein &amp; Borwein page 324 Theorem 3 condition if ( X[4]*X[1] - X[2]*X[3] &gt; 0){ a &lt;- X[4] b &lt;- X[3] c &lt;- X[2] e &lt;- X[1] goodStart = TRUE } } #Borwein &amp; Borwein page 324 Theorem 3 condition if ( a*e - b*c &lt;= 0){stop("Borwein &amp; Borwein coefficient condition not ok")} #Borchardt condition - seems to be wrong #if ( a - b - c + e &lt;= 0){stop("coefficient condition not ok")} a_i &lt;- a b_i &lt;- b c_i &lt;- c e_i &lt;- e for (i in 1:1000) { tmp_a &lt;- (a_i+b_i+c_i+e_i)/4 tmp_b &lt;- (sqrt(a_i*b_i) + sqrt(c_i*e_i))/2 tmp_c &lt;- (sqrt(a_i*c_i) + sqrt(b_i*e_i))/2 tmp_e &lt;- (sqrt(a_i*e_i) + sqrt(b_i*c_i))/2 a_i &lt;- tmp_a b_i &lt;- tmp_b c_i &lt;- tmp_c e_i &lt;- tmp_e } #print(max(a_i,b_i,c_i,e_i)-min(a_i,b_i,c_i,e_i)) #just pick any of the four g &lt;- a_i #compute close form solution I1 &lt;- pi^2/g print("sequence") print(I1) #page 617 bottom A &lt;- a + b + c + e B &lt;- a + b - c - e C &lt;- a - b + c - e E &lt;- a - b - c + e B1 &lt;- (sqrt(A*B)+sqrt(C*E))/2 B2 &lt;- (sqrt(A*B)-sqrt(C*E))/2 C1 &lt;- (sqrt(A*C)+sqrt(B*E))/2 C2 &lt;- (sqrt(A*C)-sqrt(B*E))/2 E1 &lt;- (sqrt(A*E)+sqrt(B*C))/2 #this seems to be wrong in the original E2 &lt;- (sqrt(A*E)-sqrt(B*C))/2 Delta &lt;- (a*b*c*e*B1*C1*E1*B2*C2*E2)^(1/4) alpha0 &lt;- a*c*B1/Delta alpha1 &lt;- c*C1*E1/Delta alpha2 &lt;- a*C2*E1/Delta alpha3 &lt;- B1*C1*C2/Delta integrand &lt;- function(x,y,param){ alpha0 &lt;- param$alpha0 alpha1 &lt;- param$alpha1 alpha2 &lt;- param$alpha2 alpha3 &lt;- param$alpha3 Rx &lt;- 
x*(x-alpha0)*(x-alpha1)*(x-alpha2)*(x-alpha3) Ry &lt;- y*(y-alpha0)*(y-alpha1)*(y-alpha2)*(y-alpha3) z &lt;- (x-y) / sqrt(Rx*Ry) return(z) } p &lt;- list(alpha0=alpha0,alpha1=alpha1,alpha2=alpha2,alpha3=alpha3) library(statmod) gauss &lt;- gauss.quad(500,kind="legendre") w &lt;- gauss$weights x &lt;- gauss$nodes upper_x &lt;- alpha1 lower_x &lt;- alpha2 upper_y &lt;- alpha3 lower_y &lt;- 0 I &lt;- 0 for (i in 1:length(x)) { for (j in 1:length(x)) { xx &lt;- (upper_x-lower_x)*x[i]/2 + (upper_x+lower_x)/2 yy &lt;- (upper_y-lower_y)*x[j]/2 + (upper_y+lower_y)/2 ww &lt;- w[i]*w[j] I &lt;- I + ww*integrand(xx,yy,p) } } I &lt;- (upper_x-lower_x)*(upper_y-lower_y)*I/4 print("numerical integral:") print(I) </code></pre> Answer 18: <p>Here are just a few remarks.</p> <p>Borchart's original article (referred to as Berl. Monatsber. , pages 611–621) is fully available (in German) at <a href="http://bibliothek.bbaw.de/bibliothek-digital/digitalequellen/schriften/anzeige/index_html?band=09-mon/1876&amp;seite:int=647" rel="nofollow noreferrer">http://bibliothek.bbaw.de/bibliothek-digital/digitalequellen/schriften/anzeige/index_html?band=09-mon/1876&amp;seite:int=647</a>.</p> <p>However this article itself uses work already done in a longer older article (where most of the hard compatations are done), (referred to as (Mem. Acad. Berlin, 1878, pp. 33-96), and also fully available online at <a href="http://bibliothek.bbaw.de/bbaw/bibliothek-digital/digitalequellen/schriften/anzeige/index_html?band=07-abh/1878&amp;seite:int=93" rel="nofollow noreferrer">http://bibliothek.bbaw.de/bbaw/bibliothek-digital/digitalequellen/schriften/anzeige/index_html?band=07-abh/1878&amp;seite:int=93</a>.</p> <p>Your square-root of negative number problem can be solved very simply by starting from $1$ (or any nonzero index) instead of $0$. 
Because of Borchardt's identity $$ a_{n+1}+\varepsilon b_{n+1}+ \varepsilon' c_{n+1}+\varepsilon \varepsilon'd_{n+1}=\frac{(\sqrt{a_n}+\varepsilon \sqrt{b_n}+ \varepsilon'\sqrt{c_n}+\varepsilon \varepsilon'\sqrt{d_n})^2}{4} \tag{1} $$</p> <p>for any two signs $\varepsilon,\varepsilon'$ we have for $n\geq 1$, $$a_{n+1}+\varepsilon b_{n+1}+ \varepsilon ' c_{n+1}+\varepsilon \varepsilon 'd_{n+1} \geq 0 \tag{2}$$ </p> <p>So if you use any $a_1,b_1,c_1,d_1$ instead of $a_0,b_0,c_0,d_0$ to define the $B,C,\alpha$ constants, you will get only real and positive numbers.</p> Answer 19: <p>Ok, my answer got a little bit out of hand, but I hope it helps to figure out what the problem is. In my opinion the problems are mainly due to typos that are carried over. My answer is based on two articles. One is the original article of Borchardt, where the link was provided in Ewan's answer.</p> <p>[1] Borchardt, K. W. (1876): Hr. Borchardt las über das arithmetisch-geometrische Mittel aus vier Elementen. Monatsberichte der Königlich Preußischen Akademie der Wissenschaften zu Berlin (Berlin: Verl. d. Kgl. Akad. d. Wiss., 1877), Berlin. (<a href="http://bibliothek.bbaw.de/bibliothek-digital/digitalequellen/schriften/anzeige/index_html?band=09-mon/1876&amp;seite:int=646" rel="nofollow noreferrer">http://bibliothek.bbaw.de/bibliothek-digital/digitalequellen/schriften/anzeige/index_html?band=09-mon/1876&amp;seite:int=646</a>)</p> <p>[2] J. M. Borwein and P. B. Borwein. On the Mean Iteration $(a,b) \leftarrow \left(\frac{a+3b}{4},\frac{\sqrt{ab}+b}{2}\right)$. Mathematics of Computation, 53(187):311–326, 1989. (<a href="https://pdfs.semanticscholar.org/5ead/200192d495859702eb47bc54babb20ab75ae.pdf" rel="nofollow noreferrer">https://pdfs.semanticscholar.org/5ead/200192d495859702eb47bc54babb20ab75ae.pdf</a>)</p> <p>I didn't double check the mathematics (i.e. the reasoning and proofs) in [1] and [2], but I compared the relevant formulas and implemented them numerically. 
This finally led to a solution that I think is numerically satisfying (but see below).</p> <p>Basically, [2] uses the same definitions as in the question, so I'll start from [1] and highlight the differences.</p> <p>The first important difference are the conditions on $a,b,c,e$. In [1] (on top of page 612) the <em>real</em> numbers $a,b,c,e$ have to be such that (i) $a &gt; b &gt; c &gt; e &gt; 0$ and (ii) $a - b - c + e &gt; 0$. I will call this condition $(C)$. On the other hand in [2] page 324 we have $a &gt; b &gt; c &gt; e &gt; 0$ such that $ae - bc &gt; 0$, and I'll call this condition $(C^*)$.</p> <p>The conditions $(C)$ and $(C^*)$ are clearly there to prevent problems with negative square roots etc., which should already solve some of the problems. However, it seems (numerically) that $(C^*)$ is the one that should finally be used (see below).</p> <p>From the original article [1], page 612, I take $a_0 = a, b_0 = b, c_0 = c, e_0 = e$ and \begin{align} a_{n+1} &amp;= (a_n+b_n+c_n+e_n)/4,\\ b_{n+1} &amp;= (\sqrt{a_nb_n}+\sqrt{c_ne_n})/2,\\ c_{n+1} &amp;= (\sqrt{a_nc_n}+\sqrt{b_ne_n})/2,\\ e_{n+1} &amp;= (\sqrt{a_ne_n}+\sqrt{b_nc_n})/2, \end{align} and I set $g = \lim_{n \to \infty} a_n$ (the limits are all the same, so I pick wlog the one for $a_n$).</p> <p>From [1], page 617, I also take \begin{align} A &amp;= a+b+c+e,\\ B &amp;= a+b-c-e,\\ C &amp;= a-b+c-e,\\ E &amp;= a-b-c+e, \end{align} and \begin{align} B_1 &amp;= (\sqrt{AB}+\sqrt{CE})/2,\\ B_2 &amp;= (\sqrt{AB}-\sqrt{CE})/2,\\ C_1 &amp;= (\sqrt{AC}+\sqrt{BE})/2,\\ C_2 &amp;= (\sqrt{AC}-\sqrt{BE})/2,\\ E_1 &amp;= (\sqrt{AE}+\sqrt{BC})/2,\\ E_2 &amp;= (\sqrt{AE}-\sqrt{BC})/2, \end{align} where it is important to notice that [1] defines $E_1 = (\sqrt{AE}-\sqrt{BC})/2 = E_2$ which I believe is a typo and hence I switch the $-$ for a $+$. So far we are in agreement with the definitions in the question and in [2], but considering the values of $\alpha_i$, $i=0,1,2,3$ there are differences. 
From [1], page 618, I take \begin{align} \Delta = (abce B_1 C_1 E_1 B_2 C_2 E_2)^{1/4}, \end{align} which is different from the question or the definition in [2], see page 325, in that I use the starting values $a,b,c,e$ and not the derived values $A,B,C,E$ (as stated in [1]). The values of $\alpha_i$ also differ between [1] and [2]. From [1], page 618, I take \begin{align} \alpha_0 &amp;= \frac{acB_1}{\Delta},\\ \alpha_1 &amp;= \frac{c C_1 E_1}{\Delta},\\ \alpha_2 &amp;= \frac{a C_2 E_1}{\Delta},\\ \alpha_3 &amp;= \frac{B_1 C_1 C_2}{\Delta}, \end{align} where again the starting values are used, and not $A,B,C,E$ as in the question or in [3], page 325.</p> <p>All sources however agree on the definition of $R(x)$ as \begin{align} R(x) = x (x-\alpha_0)(x-\alpha_1)(x-\alpha_2)(x-\alpha_3). \end{align}</p> <p>Now [1], page 618, states that \begin{align} 4 \frac{\pi^2}{g} = \int_{0}^{\alpha_3}\int_{\alpha_2}^{\alpha_1} \frac{x-y}{\sqrt{R(x)R(y)}} dx dy, \end{align} where it is important to notice the factor $4$ that is missing in [2] and in the question.</p> <p>My numerical experiments show that neither [1] nor [2] is correct. 
In fact I think the following statement should hold:</p> <p><strong>Theorem:</strong> For $a &gt; b &gt; c &gt; e &gt; 0$ such that $ae - bc &gt; 0$ we have \begin{align} \frac{\pi^2}{g} = \int_{0}^{\alpha_3}\int_{\alpha_2}^{\alpha_1} \frac{x-y}{\sqrt{R(x)R(y)}} dx dy, \end{align} where $g = \lim_{n \to \infty} a_n$ and $\alpha_i$, $i=0,1,2,3$ is defined as in [1], i.e., as above.</p> <p>It is important to notice that the statement</p> <ol> <li>uses condition $(C^*)$ from [2], page 324,</li> <li>uses the definition from [1] regarding $\Delta$ and $\alpha_i$, $i=0,1,2,3$,</li> <li>and does not include the factor $4$ which is present in [1] but not in [2].</li> </ol> <p>In summary, it's a wild mix of [1] and [2]...</p> <p>I got to this basically following [1] and then using $(C^*)$ instead of $(C)$ and realizing that the value was $4$ times too high. You can double check this with your favorite numerical software tool, I have added my R implementation below. Finally, I have to say that the values don't match perfectly. The numerical integration of the rational function seems to be difficult, but the numbers are always (for random starting values obeying $(C^*)$) close (enough) together in my opinion. 
As long as I enforce $(C^*)$ I also don't experience any trouble with negative square roots or the like (but this is possible if $(C)$ is used).</p> <hr> <h2>R Implementation</h2> <pre><code>#non-random starting values that work with Borwein &amp; Borwein page 324 Theorem 3 condition a &lt;- 7 b &lt;- 4 c &lt;- 3 e &lt;- 2 #get some random starting values that are OK goodStart &lt;- FALSE while (goodStart==FALSE){ X &lt;- sort(rexp(4,rate=20)) #Borwein &amp; Borwein page 324 Theorem 3 condition if ( X[4]*X[1] - X[2]*X[3] &gt; 0){ a &lt;- X[4] b &lt;- X[3] c &lt;- X[2] e &lt;- X[1] goodStart = TRUE } } #Borwein &amp; Borwein page 324 Theorem 3 condition if ( a*e - b*c &lt;= 0){stop("Borwein &amp; Borwein coefficient condition not ok")} #Borchardt condition - seems to be wrong #if ( a - b - c + e &lt;= 0){stop("coefficient condition not ok")} a_i &lt;- a b_i &lt;- b c_i &lt;- c e_i &lt;- e for (i in 1:1000) { tmp_a &lt;- (a_i+b_i+c_i+e_i)/4 tmp_b &lt;- (sqrt(a_i*b_i) + sqrt(c_i*e_i))/2 tmp_c &lt;- (sqrt(a_i*c_i) + sqrt(b_i*e_i))/2 tmp_e &lt;- (sqrt(a_i*e_i) + sqrt(b_i*c_i))/2 a_i &lt;- tmp_a b_i &lt;- tmp_b c_i &lt;- tmp_c e_i &lt;- tmp_e } #print(max(a_i,b_i,c_i,e_i)-min(a_i,b_i,c_i,e_i)) #just pick any of the four g &lt;- a_i #compute close form solution I1 &lt;- pi^2/g print("sequence") print(I1) #page 617 bottom A &lt;- a + b + c + e B &lt;- a + b - c - e C &lt;- a - b + c - e E &lt;- a - b - c + e B1 &lt;- (sqrt(A*B)+sqrt(C*E))/2 B2 &lt;- (sqrt(A*B)-sqrt(C*E))/2 C1 &lt;- (sqrt(A*C)+sqrt(B*E))/2 C2 &lt;- (sqrt(A*C)-sqrt(B*E))/2 E1 &lt;- (sqrt(A*E)+sqrt(B*C))/2 #this seems to be wrong in the original E2 &lt;- (sqrt(A*E)-sqrt(B*C))/2 Delta &lt;- (a*b*c*e*B1*C1*E1*B2*C2*E2)^(1/4) alpha0 &lt;- a*c*B1/Delta alpha1 &lt;- c*C1*E1/Delta alpha2 &lt;- a*C2*E1/Delta alpha3 &lt;- B1*C1*C2/Delta integrand &lt;- function(x,y,param){ alpha0 &lt;- param$alpha0 alpha1 &lt;- param$alpha1 alpha2 &lt;- param$alpha2 alpha3 &lt;- param$alpha3 Rx &lt;- 
x*(x-alpha0)*(x-alpha1)*(x-alpha2)*(x-alpha3) Ry &lt;- y*(y-alpha0)*(y-alpha1)*(y-alpha2)*(y-alpha3) z &lt;- (x-y) / sqrt(Rx*Ry) return(z) } p &lt;- list(alpha0=alpha0,alpha1=alpha1,alpha2=alpha2,alpha3=alpha3) library(statmod) gauss &lt;- gauss.quad(500,kind="legendre") w &lt;- gauss$weights x &lt;- gauss$nodes upper_x &lt;- alpha1 lower_x &lt;- alpha2 upper_y &lt;- alpha3 lower_y &lt;- 0 I &lt;- 0 for (i in 1:length(x)) { for (j in 1:length(x)) { xx &lt;- (upper_x-lower_x)*x[i]/2 + (upper_x+lower_x)/2 yy &lt;- (upper_y-lower_y)*x[j]/2 + (upper_y+lower_y)/2 ww &lt;- w[i]*w[j] I &lt;- I + ww*integrand(xx,yy,p) } } I &lt;- (upper_x-lower_x)*(upper_y-lower_y)*I/4 print("numerical integral:") print(I) </code></pre> Answer 20: <p>Here are just a few remarks.</p> <p>Borchart's original article (referred to as Berl. Monatsber. , pages 611–621) is fully available (in German) at <a href="http://bibliothek.bbaw.de/bibliothek-digital/digitalequellen/schriften/anzeige/index_html?band=09-mon/1876&amp;seite:int=647" rel="nofollow noreferrer">http://bibliothek.bbaw.de/bibliothek-digital/digitalequellen/schriften/anzeige/index_html?band=09-mon/1876&amp;seite:int=647</a>.</p> <p>However this article itself uses work already done in a longer older article (where most of the hard compatations are done), (referred to as (Mem. Acad. Berlin, 1878, pp. 33-96), and also fully available online at <a href="http://bibliothek.bbaw.de/bbaw/bibliothek-digital/digitalequellen/schriften/anzeige/index_html?band=07-abh/1878&amp;seite:int=93" rel="nofollow noreferrer">http://bibliothek.bbaw.de/bbaw/bibliothek-digital/digitalequellen/schriften/anzeige/index_html?band=07-abh/1878&amp;seite:int=93</a>.</p> <p>Your square-root of negative number problem can be solved very simply by starting from $1$ (or any nonzero index) instead of $0$. 
Because of Borchardt's identity $$ a_{n+1}+\varepsilon b_{n+1}+ \varepsilon' c_{n+1}+\varepsilon \varepsilon'd_{n+1}=\frac{(\sqrt{a_n}+\varepsilon \sqrt{b_n}+ \varepsilon'\sqrt{c_n}+\varepsilon \varepsilon'\sqrt{d_n})^2}{4} \tag{1} $$</p> <p>for any two signs $\varepsilon,\varepsilon'$ we have for $n\geq 1$, $$a_{n+1}+\varepsilon b_{n+1}+ \varepsilon ' c_{n+1}+\varepsilon \varepsilon 'd_{n+1} \geq 0 \tag{2}$$ </p> <p>So if you use any $a_1,b_1,c_1,d_1$ instead of $a_0,b_0,c_0,d_0$ to define the $B,C,\alpha$ constants, you will get only real and positive numbers.</p> Answer 21: <p>Ok, my answer got a little bit out of hand, but I hope it helps to figure out what the problem is. In my opinion the problems are mainly due to typos that are carried over. My answer is based on two articles. One is the original article of Borchardt, where the link was provided in Ewan's answer.</p> <p>[1] Borchardt, K. W. (1876): Hr. Borchardt las über das arithmetisch-geometrische Mittel aus vier Elementen. Monatsberichte der Königlich Preußischen Akademie der Wissenschaften zu Berlin (Berlin: Verl. d. Kgl. Akad. d. Wiss., 1877), Berlin. (<a href="http://bibliothek.bbaw.de/bibliothek-digital/digitalequellen/schriften/anzeige/index_html?band=09-mon/1876&amp;seite:int=646" rel="nofollow noreferrer">http://bibliothek.bbaw.de/bibliothek-digital/digitalequellen/schriften/anzeige/index_html?band=09-mon/1876&amp;seite:int=646</a>)</p> <p>[2] J. M. Borwein and P. B. Borwein. On the Mean Iteration $(a,b) \leftarrow \left(\frac{a+3b}{4},\frac{\sqrt{ab}+b}{2}\right)$. Mathematics of Computation, 53(187):311–326, 1989. (<a href="https://pdfs.semanticscholar.org/5ead/200192d495859702eb47bc54babb20ab75ae.pdf" rel="nofollow noreferrer">https://pdfs.semanticscholar.org/5ead/200192d495859702eb47bc54babb20ab75ae.pdf</a>)</p> <p>I didn't double check the mathematics (i.e. the reasoning and proofs) in [1] and [2], but I compared the relevant formulas and implemented them numerically. 
This finally led to a solution that I think is numerically satisfying (but see below).</p>

<p>Basically, [2] uses the same definitions as in the question, so I'll start from [1] and highlight the differences.</p>

<p>The first important difference is the conditions on $a,b,c,e$. In [1] (on top of page 612) the <em>real</em> numbers $a,b,c,e$ have to be such that (i) $a &gt; b &gt; c &gt; e &gt; 0$ and (ii) $a - b - c + e &gt; 0$. I will call this condition $(C)$. On the other hand, in [2], page 324, we have $a &gt; b &gt; c &gt; e &gt; 0$ such that $ae - bc &gt; 0$, and I'll call this condition $(C^*)$.</p>

<p>The conditions $(C)$ and $(C^*)$ are clearly there to prevent problems with negative square roots etc., which should already solve some of the problems. However, it seems (numerically) that $(C^*)$ is the one that should finally be used (see below).</p>

<p>From the original article [1], page 612, I take $a_0 = a, b_0 = b, c_0 = c, e_0 = e$ and \begin{align} a_{n+1} &amp;= (a_n+b_n+c_n+e_n)/4,\\ b_{n+1} &amp;= (\sqrt{a_nb_n}+\sqrt{c_ne_n})/2,\\ c_{n+1} &amp;= (\sqrt{a_nc_n}+\sqrt{b_ne_n})/2,\\ e_{n+1} &amp;= (\sqrt{a_ne_n}+\sqrt{b_nc_n})/2, \end{align} and I set $g = \lim_{n \to \infty} a_n$ (the limits are all the same, so I pick wlog the one for $a_n$).</p>

<p>From [1], page 617, I also take \begin{align} A &amp;= a+b+c+e,\\ B &amp;= a+b-c-e,\\ C &amp;= a-b+c-e,\\ E &amp;= a-b-c+e, \end{align} and \begin{align} B_1 &amp;= (\sqrt{AB}+\sqrt{CE})/2,\\ B_2 &amp;= (\sqrt{AB}-\sqrt{CE})/2,\\ C_1 &amp;= (\sqrt{AC}+\sqrt{BE})/2,\\ C_2 &amp;= (\sqrt{AC}-\sqrt{BE})/2,\\ E_1 &amp;= (\sqrt{AE}+\sqrt{BC})/2,\\ E_2 &amp;= (\sqrt{AE}-\sqrt{BC})/2, \end{align} where it is important to notice that [1] defines $E_1 = (\sqrt{AE}-\sqrt{BC})/2 = E_2$, which I believe is a typo, and hence I switch the $-$ for a $+$. So far we are in agreement with the definitions in the question and in [2], but considering the values of $\alpha_i$, $i=0,1,2,3$ there are differences.
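The four-term recurrence above is easy to experiment with. Here is a minimal Python sketch (Python rather than R, purely for illustration) using the same fixed starting values $a=7, b=4, c=3, e=2$ as in the R implementation further down; it shows the four sequences collapsing onto the common limit $g$:

```python
# Minimal sketch of the four-term Borchardt iteration from the text.
# Starting values a=7, b=4, c=3, e=2 as in the R code below.
from math import sqrt

a, b, c, e = 7.0, 4.0, 3.0, 2.0
for _ in range(60):  # convergence is very fast in double precision
    a, b, c, e = ((a + b + c + e) / 4,
                  (sqrt(a * b) + sqrt(c * e)) / 2,
                  (sqrt(a * c) + sqrt(b * e)) / 2,
                  (sqrt(a * e) + sqrt(b * c)) / 2)

g = a  # all four sequences share the same limit
print("g =", g, " spread =", max(a, b, c, e) - min(a, b, c, e))
```

Note that for any positive starting tuple the iterates themselves stay positive (only square roots of products of positive numbers occur), so this part of the computation never produces complex numbers; the delicate sign conditions only enter later, through the constants $B_2, C_2, E_2$ and $\Delta$.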
From [1], page 618, I take \begin{align} \Delta = (abce B_1 C_1 E_1 B_2 C_2 E_2)^{1/4}, \end{align} which differs from the question and from the definition in [2], see page 325, in that I use the starting values $a,b,c,e$ and not the derived values $A,B,C,E$ (as stated in [1]). The values of $\alpha_i$ also differ between [1] and [2]. From [1], page 618, I take \begin{align} \alpha_0 &amp;= \frac{acB_1}{\Delta},\\ \alpha_1 &amp;= \frac{c C_1 E_1}{\Delta},\\ \alpha_2 &amp;= \frac{a C_2 E_1}{\Delta},\\ \alpha_3 &amp;= \frac{B_1 C_1 C_2}{\Delta}, \end{align} where again the starting values are used, and not $A,B,C,E$ as in the question or in [2], page 325.</p>

<p>All sources, however, agree on the definition of $R(x)$ as \begin{align} R(x) = x (x-\alpha_0)(x-\alpha_1)(x-\alpha_2)(x-\alpha_3). \end{align}</p>

<p>Now [1], page 618, states that \begin{align} 4 \frac{\pi^2}{g} = \int_{0}^{\alpha_3}\int_{\alpha_2}^{\alpha_1} \frac{x-y}{\sqrt{R(x)R(y)}} dx dy, \end{align} where it is important to notice the factor $4$ that is missing in [2] and in the question.</p>

<p>My numerical experiments show that neither [1] nor [2] is correct.
In fact I think the following statement should hold:</p>

<p><strong>Theorem:</strong> For $a &gt; b &gt; c &gt; e &gt; 0$ such that $ae - bc &gt; 0$ we have \begin{align} \frac{\pi^2}{g} = \int_{0}^{\alpha_3}\int_{\alpha_2}^{\alpha_1} \frac{x-y}{\sqrt{R(x)R(y)}} dx dy, \end{align} where $g = \lim_{n \to \infty} a_n$ and $\alpha_i$, $i=0,1,2,3$ are defined as in [1], i.e., as above.</p>

<p>It is important to notice that the statement</p> <ol> <li>uses condition $(C^*)$ from [2], page 324,</li> <li>uses the definition from [1] regarding $\Delta$ and $\alpha_i$, $i=0,1,2,3$,</li> <li>and does not include the factor $4$ which is present in [1] but not in [2].</li> </ol> <p>In summary, it's a wild mix of [1] and [2]...</p>

<p>I got to this basically following [1] and then using $(C^*)$ instead of $(C)$ and realizing that the value was $4$ times too high. You can double-check this with your favorite numerical software tool; I have added my R implementation below. Finally, I have to say that the values don't match perfectly. The numerical integration seems to be difficult (the integrand has inverse-square-root singularities at the endpoints of both integration intervals), but the numbers are always (for random starting values obeying $(C^*)$) close (enough) together in my opinion.
As long as I enforce $(C^*)$ I also don't experience any trouble with negative square roots or the like (but this is possible if $(C)$ is used).</p>

<hr>

<h2>R Implementation</h2>

<pre><code>#non-random starting values that work with Borwein &amp; Borwein page 324 Theorem 3 condition
a &lt;- 7
b &lt;- 4
c &lt;- 3
e &lt;- 2

#get some random starting values that are OK
goodStart &lt;- FALSE
while (goodStart==FALSE){
  X &lt;- sort(rexp(4,rate=20))
  #Borwein &amp; Borwein page 324 Theorem 3 condition
  if ( X[4]*X[1] - X[2]*X[3] &gt; 0){
    a &lt;- X[4]
    b &lt;- X[3]
    c &lt;- X[2]
    e &lt;- X[1]
    goodStart = TRUE
  }
}

#Borwein &amp; Borwein page 324 Theorem 3 condition
if ( a*e - b*c &lt;= 0){stop("Borwein &amp; Borwein coefficient condition not ok")}
#Borchardt condition - seems to be wrong
#if ( a - b - c + e &lt;= 0){stop("coefficient condition not ok")}

a_i &lt;- a
b_i &lt;- b
c_i &lt;- c
e_i &lt;- e
for (i in 1:1000) {
  tmp_a &lt;- (a_i+b_i+c_i+e_i)/4
  tmp_b &lt;- (sqrt(a_i*b_i) + sqrt(c_i*e_i))/2
  tmp_c &lt;- (sqrt(a_i*c_i) + sqrt(b_i*e_i))/2
  tmp_e &lt;- (sqrt(a_i*e_i) + sqrt(b_i*c_i))/2
  a_i &lt;- tmp_a
  b_i &lt;- tmp_b
  c_i &lt;- tmp_c
  e_i &lt;- tmp_e
}
#print(max(a_i,b_i,c_i,e_i)-min(a_i,b_i,c_i,e_i))

#just pick any of the four
g &lt;- a_i

#compute closed form solution
I1 &lt;- pi^2/g
print("sequence")
print(I1)

#page 617 bottom
A &lt;- a + b + c + e
B &lt;- a + b - c - e
C &lt;- a - b + c - e
E &lt;- a - b - c + e

B1 &lt;- (sqrt(A*B)+sqrt(C*E))/2
B2 &lt;- (sqrt(A*B)-sqrt(C*E))/2
C1 &lt;- (sqrt(A*C)+sqrt(B*E))/2
C2 &lt;- (sqrt(A*C)-sqrt(B*E))/2
E1 &lt;- (sqrt(A*E)+sqrt(B*C))/2 #this seems to be wrong in the original
E2 &lt;- (sqrt(A*E)-sqrt(B*C))/2

Delta &lt;- (a*b*c*e*B1*C1*E1*B2*C2*E2)^(1/4)

alpha0 &lt;- a*c*B1/Delta
alpha1 &lt;- c*C1*E1/Delta
alpha2 &lt;- a*C2*E1/Delta
alpha3 &lt;- B1*C1*C2/Delta

integrand &lt;- function(x,y,param){
  alpha0 &lt;- param$alpha0
  alpha1 &lt;- param$alpha1
  alpha2 &lt;- param$alpha2
  alpha3 &lt;- param$alpha3
  Rx &lt;-
x*(x-alpha0)*(x-alpha1)*(x-alpha2)*(x-alpha3) Ry &lt;- y*(y-alpha0)*(y-alpha1)*(y-alpha2)*(y-alpha3) z &lt;- (x-y) / sqrt(Rx*Ry) return(z) } p &lt;- list(alpha0=alpha0,alpha1=alpha1,alpha2=alpha2,alpha3=alpha3) library(statmod) gauss &lt;- gauss.quad(500,kind="legendre") w &lt;- gauss$weights x &lt;- gauss$nodes upper_x &lt;- alpha1 lower_x &lt;- alpha2 upper_y &lt;- alpha3 lower_y &lt;- 0 I &lt;- 0 for (i in 1:length(x)) { for (j in 1:length(x)) { xx &lt;- (upper_x-lower_x)*x[i]/2 + (upper_x+lower_x)/2 yy &lt;- (upper_y-lower_y)*x[j]/2 + (upper_y+lower_y)/2 ww &lt;- w[i]*w[j] I &lt;- I + ww*integrand(xx,yy,p) } } I &lt;- (upper_x-lower_x)*(upper_y-lower_y)*I/4 print("numerical integral:") print(I) </code></pre> Answer 22: <p>Here are just a few remarks.</p> <p>Borchart's original article (referred to as Berl. Monatsber. , pages 611–621) is fully available (in German) at <a href="http://bibliothek.bbaw.de/bibliothek-digital/digitalequellen/schriften/anzeige/index_html?band=09-mon/1876&amp;seite:int=647" rel="nofollow noreferrer">http://bibliothek.bbaw.de/bibliothek-digital/digitalequellen/schriften/anzeige/index_html?band=09-mon/1876&amp;seite:int=647</a>.</p> <p>However this article itself uses work already done in a longer older article (where most of the hard compatations are done), (referred to as (Mem. Acad. Berlin, 1878, pp. 33-96), and also fully available online at <a href="http://bibliothek.bbaw.de/bbaw/bibliothek-digital/digitalequellen/schriften/anzeige/index_html?band=07-abh/1878&amp;seite:int=93" rel="nofollow noreferrer">http://bibliothek.bbaw.de/bbaw/bibliothek-digital/digitalequellen/schriften/anzeige/index_html?band=07-abh/1878&amp;seite:int=93</a>.</p> <p>Your square-root of negative number problem can be solved very simply by starting from $1$ (or any nonzero index) instead of $0$. 
Because of Borchardt's identity $$ a_{n+1}+\varepsilon b_{n+1}+ \varepsilon' c_{n+1}+\varepsilon \varepsilon'd_{n+1}=\frac{(\sqrt{a_n}+\varepsilon \sqrt{b_n}+ \varepsilon'\sqrt{c_n}+\varepsilon \varepsilon'\sqrt{d_n})^2}{4} \tag{1} $$</p> <p>for any two signs $\varepsilon,\varepsilon'$ we have for $n\geq 1$, $$a_{n+1}+\varepsilon b_{n+1}+ \varepsilon ' c_{n+1}+\varepsilon \varepsilon 'd_{n+1} \geq 0 \tag{2}$$ </p> <p>So if you use any $a_1,b_1,c_1,d_1$ instead of $a_0,b_0,c_0,d_0$ to define the $B,C,\alpha$ constants, you will get only real and positive numbers.</p> Answer 23: <p>Ok, my answer got a little bit out of hand, but I hope it helps to figure out what the problem is. In my opinion the problems are mainly due to typos that are carried over. My answer is based on two articles. One is the original article of Borchardt, where the link was provided in Ewan's answer.</p> <p>[1] Borchardt, K. W. (1876): Hr. Borchardt las über das arithmetisch-geometrische Mittel aus vier Elementen. Monatsberichte der Königlich Preußischen Akademie der Wissenschaften zu Berlin (Berlin: Verl. d. Kgl. Akad. d. Wiss., 1877), Berlin. (<a href="http://bibliothek.bbaw.de/bibliothek-digital/digitalequellen/schriften/anzeige/index_html?band=09-mon/1876&amp;seite:int=646" rel="nofollow noreferrer">http://bibliothek.bbaw.de/bibliothek-digital/digitalequellen/schriften/anzeige/index_html?band=09-mon/1876&amp;seite:int=646</a>)</p> <p>[2] J. M. Borwein and P. B. Borwein. On the Mean Iteration $(a,b) \leftarrow \left(\frac{a+3b}{4},\frac{\sqrt{ab}+b}{2}\right)$. Mathematics of Computation, 53(187):311–326, 1989. (<a href="https://pdfs.semanticscholar.org/5ead/200192d495859702eb47bc54babb20ab75ae.pdf" rel="nofollow noreferrer">https://pdfs.semanticscholar.org/5ead/200192d495859702eb47bc54babb20ab75ae.pdf</a>)</p> <p>I didn't double check the mathematics (i.e. the reasoning and proofs) in [1] and [2], but I compared the relevant formulas and implemented them numerically. 
This finally led to a solution that I think is numerically satisfying (but see below).</p> <p>Basically, [2] uses the same definitions as in the question, so I'll start from [1] and highlight the differences.</p> <p>The first important difference are the conditions on $a,b,c,e$. In [1] (on top of page 612) the <em>real</em> numbers $a,b,c,e$ have to be such that (i) $a &gt; b &gt; c &gt; e &gt; 0$ and (ii) $a - b - c + e &gt; 0$. I will call this condition $(C)$. On the other hand in [2] page 324 we have $a &gt; b &gt; c &gt; e &gt; 0$ such that $ae - bc &gt; 0$, and I'll call this condition $(C^*)$.</p> <p>The conditions $(C)$ and $(C^*)$ are clearly there to prevent problems with negative square roots etc., which should already solve some of the problems. However, it seems (numerically) that $(C^*)$ is the one that should finally be used (see below).</p> <p>From the original article [1], page 612, I take $a_0 = a, b_0 = b, c_0 = c, e_0 = e$ and \begin{align} a_{n+1} &amp;= (a_n+b_n+c_n+e_n)/4,\\ b_{n+1} &amp;= (\sqrt{a_nb_n}+\sqrt{c_ne_n})/2,\\ c_{n+1} &amp;= (\sqrt{a_nc_n}+\sqrt{b_ne_n})/2,\\ e_{n+1} &amp;= (\sqrt{a_ne_n}+\sqrt{b_nc_n})/2, \end{align} and I set $g = \lim_{n \to \infty} a_n$ (the limits are all the same, so I pick wlog the one for $a_n$).</p> <p>From [1], page 617, I also take \begin{align} A &amp;= a+b+c+e,\\ B &amp;= a+b-c-e,\\ C &amp;= a-b+c-e,\\ E &amp;= a-b-c+e, \end{align} and \begin{align} B_1 &amp;= (\sqrt{AB}+\sqrt{CE})/2,\\ B_2 &amp;= (\sqrt{AB}-\sqrt{CE})/2,\\ C_1 &amp;= (\sqrt{AC}+\sqrt{BE})/2,\\ C_2 &amp;= (\sqrt{AC}-\sqrt{BE})/2,\\ E_1 &amp;= (\sqrt{AE}+\sqrt{BC})/2,\\ E_2 &amp;= (\sqrt{AE}-\sqrt{BC})/2, \end{align} where it is important to notice that [1] defines $E_1 = (\sqrt{AE}-\sqrt{BC})/2 = E_2$ which I believe is a typo and hence I switch the $-$ for a $+$. So far we are in agreement with the definitions in the question and in [2], but considering the values of $\alpha_i$, $i=0,1,2,3$ there are differences. 
From [1], page 618, I take \begin{align} \Delta = (abce B_1 C_1 E_1 B_2 C_2 E_2)^{1/4}, \end{align} which is different from the question or the definition in [2], see page 325, in that I use the starting values $a,b,c,e$ and not the derived values $A,B,C,E$ (as stated in [1]). The values of $\alpha_i$ also differ between [1] and [2]. From [1], page 618, I take \begin{align} \alpha_0 &amp;= \frac{acB_1}{\Delta},\\ \alpha_1 &amp;= \frac{c C_1 E_1}{\Delta},\\ \alpha_2 &amp;= \frac{a C_2 E_1}{\Delta},\\ \alpha_3 &amp;= \frac{B_1 C_1 C_2}{\Delta}, \end{align} where again the starting values are used, and not $A,B,C,E$ as in the question or in [3], page 325.</p> <p>All sources however agree on the definition of $R(x)$ as \begin{align} R(x) = x (x-\alpha_0)(x-\alpha_1)(x-\alpha_2)(x-\alpha_3). \end{align}</p> <p>Now [1], page 618, states that \begin{align} 4 \frac{\pi^2}{g} = \int_{0}^{\alpha_3}\int_{\alpha_2}^{\alpha_1} \frac{x-y}{\sqrt{R(x)R(y)}} dx dy, \end{align} where it is important to notice the factor $4$ that is missing in [2] and in the question.</p> <p>My numerical experiments show that neither [1] nor [2] is correct. 
In fact I think the following statement should hold:</p> <p><strong>Theorem:</strong> For $a &gt; b &gt; c &gt; e &gt; 0$ such that $ae - bc &gt; 0$ we have \begin{align} \frac{\pi^2}{g} = \int_{0}^{\alpha_3}\int_{\alpha_2}^{\alpha_1} \frac{x-y}{\sqrt{R(x)R(y)}} dx dy, \end{align} where $g = \lim_{n \to \infty} a_n$ and $\alpha_i$, $i=0,1,2,3$ is defined as in [1], i.e., as above.</p> <p>It is important to notice that the statement</p> <ol> <li>uses condition $(C^*)$ from [2], page 324,</li> <li>uses the definition from [1] regarding $\Delta$ and $\alpha_i$, $i=0,1,2,3$,</li> <li>and does not include the factor $4$ which is present in [1] but not in [2].</li> </ol> <p>In summary, it's a wild mix of [1] and [2]...</p> <p>I got to this basically following [1] and then using $(C^*)$ instead of $(C)$ and realizing that the value was $4$ times too high. You can double check this with your favorite numerical software tool, I have added my R implementation below. Finally, I have to say that the values don't match perfectly. The numerical integration of the rational function seems to be difficult, but the numbers are always (for random starting values obeying $(C^*)$) close (enough) together in my opinion. 
As long as I enforce $(C^*)$ I also don't experience any trouble with negative square roots or the like (but this is possible if $(C)$ is used).</p> <hr> <h2>R Implementation</h2> <pre><code>#non-random starting values that work with Borwein &amp; Borwein page 324 Theorem 3 condition a &lt;- 7 b &lt;- 4 c &lt;- 3 e &lt;- 2 #get some random starting values that are OK goodStart &lt;- FALSE while (goodStart==FALSE){ X &lt;- sort(rexp(4,rate=20)) #Borwein &amp; Borwein page 324 Theorem 3 condition if ( X[4]*X[1] - X[2]*X[3] &gt; 0){ a &lt;- X[4] b &lt;- X[3] c &lt;- X[2] e &lt;- X[1] goodStart = TRUE } } #Borwein &amp; Borwein page 324 Theorem 3 condition if ( a*e - b*c &lt;= 0){stop("Borwein &amp; Borwein coefficient condition not ok")} #Borchardt condition - seems to be wrong #if ( a - b - c + e &lt;= 0){stop("coefficient condition not ok")} a_i &lt;- a b_i &lt;- b c_i &lt;- c e_i &lt;- e for (i in 1:1000) { tmp_a &lt;- (a_i+b_i+c_i+e_i)/4 tmp_b &lt;- (sqrt(a_i*b_i) + sqrt(c_i*e_i))/2 tmp_c &lt;- (sqrt(a_i*c_i) + sqrt(b_i*e_i))/2 tmp_e &lt;- (sqrt(a_i*e_i) + sqrt(b_i*c_i))/2 a_i &lt;- tmp_a b_i &lt;- tmp_b c_i &lt;- tmp_c e_i &lt;- tmp_e } #print(max(a_i,b_i,c_i,e_i)-min(a_i,b_i,c_i,e_i)) #just pick any of the four g &lt;- a_i #compute close form solution I1 &lt;- pi^2/g print("sequence") print(I1) #page 617 bottom A &lt;- a + b + c + e B &lt;- a + b - c - e C &lt;- a - b + c - e E &lt;- a - b - c + e B1 &lt;- (sqrt(A*B)+sqrt(C*E))/2 B2 &lt;- (sqrt(A*B)-sqrt(C*E))/2 C1 &lt;- (sqrt(A*C)+sqrt(B*E))/2 C2 &lt;- (sqrt(A*C)-sqrt(B*E))/2 E1 &lt;- (sqrt(A*E)+sqrt(B*C))/2 #this seems to be wrong in the original E2 &lt;- (sqrt(A*E)-sqrt(B*C))/2 Delta &lt;- (a*b*c*e*B1*C1*E1*B2*C2*E2)^(1/4) alpha0 &lt;- a*c*B1/Delta alpha1 &lt;- c*C1*E1/Delta alpha2 &lt;- a*C2*E1/Delta alpha3 &lt;- B1*C1*C2/Delta integrand &lt;- function(x,y,param){ alpha0 &lt;- param$alpha0 alpha1 &lt;- param$alpha1 alpha2 &lt;- param$alpha2 alpha3 &lt;- param$alpha3 Rx &lt;- 
x*(x-alpha0)*(x-alpha1)*(x-alpha2)*(x-alpha3) Ry &lt;- y*(y-alpha0)*(y-alpha1)*(y-alpha2)*(y-alpha3) z &lt;- (x-y) / sqrt(Rx*Ry) return(z) } p &lt;- list(alpha0=alpha0,alpha1=alpha1,alpha2=alpha2,alpha3=alpha3) library(statmod) gauss &lt;- gauss.quad(500,kind="legendre") w &lt;- gauss$weights x &lt;- gauss$nodes upper_x &lt;- alpha1 lower_x &lt;- alpha2 upper_y &lt;- alpha3 lower_y &lt;- 0 I &lt;- 0 for (i in 1:length(x)) { for (j in 1:length(x)) { xx &lt;- (upper_x-lower_x)*x[i]/2 + (upper_x+lower_x)/2 yy &lt;- (upper_y-lower_y)*x[j]/2 + (upper_y+lower_y)/2 ww &lt;- w[i]*w[j] I &lt;- I + ww*integrand(xx,yy,p) } } I &lt;- (upper_x-lower_x)*(upper_y-lower_y)*I/4 print("numerical integral:") print(I) </code></pre> Answer 24: <p>Here are just a few remarks.</p> <p>Borchart's original article (referred to as Berl. Monatsber. , pages 611–621) is fully available (in German) at <a href="http://bibliothek.bbaw.de/bibliothek-digital/digitalequellen/schriften/anzeige/index_html?band=09-mon/1876&amp;seite:int=647" rel="nofollow noreferrer">http://bibliothek.bbaw.de/bibliothek-digital/digitalequellen/schriften/anzeige/index_html?band=09-mon/1876&amp;seite:int=647</a>.</p> <p>However this article itself uses work already done in a longer older article (where most of the hard compatations are done), (referred to as (Mem. Acad. Berlin, 1878, pp. 33-96), and also fully available online at <a href="http://bibliothek.bbaw.de/bbaw/bibliothek-digital/digitalequellen/schriften/anzeige/index_html?band=07-abh/1878&amp;seite:int=93" rel="nofollow noreferrer">http://bibliothek.bbaw.de/bbaw/bibliothek-digital/digitalequellen/schriften/anzeige/index_html?band=07-abh/1878&amp;seite:int=93</a>.</p> <p>Your square-root of negative number problem can be solved very simply by starting from $1$ (or any nonzero index) instead of $0$. 
Because of Borchardt's identity $$ a_{n+1}+\varepsilon b_{n+1}+ \varepsilon' c_{n+1}+\varepsilon \varepsilon'd_{n+1}=\frac{(\sqrt{a_n}+\varepsilon \sqrt{b_n}+ \varepsilon'\sqrt{c_n}+\varepsilon \varepsilon'\sqrt{d_n})^2}{4} \tag{1} $$</p> <p>for any two signs $\varepsilon,\varepsilon'$ we have for $n\geq 1$, $$a_{n+1}+\varepsilon b_{n+1}+ \varepsilon ' c_{n+1}+\varepsilon \varepsilon 'd_{n+1} \geq 0 \tag{2}$$ </p> <p>So if you use any $a_1,b_1,c_1,d_1$ instead of $a_0,b_0,c_0,d_0$ to define the $B,C,\alpha$ constants, you will get only real and positive numbers.</p> Answer 25: <p>Ok, my answer got a little bit out of hand, but I hope it helps to figure out what the problem is. In my opinion the problems are mainly due to typos that are carried over. My answer is based on two articles. One is the original article of Borchardt, where the link was provided in Ewan's answer.</p> <p>[1] Borchardt, K. W. (1876): Hr. Borchardt las über das arithmetisch-geometrische Mittel aus vier Elementen. Monatsberichte der Königlich Preußischen Akademie der Wissenschaften zu Berlin (Berlin: Verl. d. Kgl. Akad. d. Wiss., 1877), Berlin. (<a href="http://bibliothek.bbaw.de/bibliothek-digital/digitalequellen/schriften/anzeige/index_html?band=09-mon/1876&amp;seite:int=646" rel="nofollow noreferrer">http://bibliothek.bbaw.de/bibliothek-digital/digitalequellen/schriften/anzeige/index_html?band=09-mon/1876&amp;seite:int=646</a>)</p> <p>[2] J. M. Borwein and P. B. Borwein. On the Mean Iteration $(a,b) \leftarrow \left(\frac{a+3b}{4},\frac{\sqrt{ab}+b}{2}\right)$. Mathematics of Computation, 53(187):311–326, 1989. (<a href="https://pdfs.semanticscholar.org/5ead/200192d495859702eb47bc54babb20ab75ae.pdf" rel="nofollow noreferrer">https://pdfs.semanticscholar.org/5ead/200192d495859702eb47bc54babb20ab75ae.pdf</a>)</p> <p>I didn't double check the mathematics (i.e. the reasoning and proofs) in [1] and [2], but I compared the relevant formulas and implemented them numerically. 
This finally led to a solution that I think is numerically satisfying (but see below).</p> <p>Basically, [2] uses the same definitions as in the question, so I'll start from [1] and highlight the differences.</p> <p>The first important difference are the conditions on $a,b,c,e$. In [1] (on top of page 612) the <em>real</em> numbers $a,b,c,e$ have to be such that (i) $a &gt; b &gt; c &gt; e &gt; 0$ and (ii) $a - b - c + e &gt; 0$. I will call this condition $(C)$. On the other hand in [2] page 324 we have $a &gt; b &gt; c &gt; e &gt; 0$ such that $ae - bc &gt; 0$, and I'll call this condition $(C^*)$.</p> <p>The conditions $(C)$ and $(C^*)$ are clearly there to prevent problems with negative square roots etc., which should already solve some of the problems. However, it seems (numerically) that $(C^*)$ is the one that should finally be used (see below).</p> <p>From the original article [1], page 612, I take $a_0 = a, b_0 = b, c_0 = c, e_0 = e$ and \begin{align} a_{n+1} &amp;= (a_n+b_n+c_n+e_n)/4,\\ b_{n+1} &amp;= (\sqrt{a_nb_n}+\sqrt{c_ne_n})/2,\\ c_{n+1} &amp;= (\sqrt{a_nc_n}+\sqrt{b_ne_n})/2,\\ e_{n+1} &amp;= (\sqrt{a_ne_n}+\sqrt{b_nc_n})/2, \end{align} and I set $g = \lim_{n \to \infty} a_n$ (the limits are all the same, so I pick wlog the one for $a_n$).</p> <p>From [1], page 617, I also take \begin{align} A &amp;= a+b+c+e,\\ B &amp;= a+b-c-e,\\ C &amp;= a-b+c-e,\\ E &amp;= a-b-c+e, \end{align} and \begin{align} B_1 &amp;= (\sqrt{AB}+\sqrt{CE})/2,\\ B_2 &amp;= (\sqrt{AB}-\sqrt{CE})/2,\\ C_1 &amp;= (\sqrt{AC}+\sqrt{BE})/2,\\ C_2 &amp;= (\sqrt{AC}-\sqrt{BE})/2,\\ E_1 &amp;= (\sqrt{AE}+\sqrt{BC})/2,\\ E_2 &amp;= (\sqrt{AE}-\sqrt{BC})/2, \end{align} where it is important to notice that [1] defines $E_1 = (\sqrt{AE}-\sqrt{BC})/2 = E_2$ which I believe is a typo and hence I switch the $-$ for a $+$. So far we are in agreement with the definitions in the question and in [2], but considering the values of $\alpha_i$, $i=0,1,2,3$ there are differences. 
From [1], page 618, I take \begin{align} \Delta = (abce B_1 C_1 E_1 B_2 C_2 E_2)^{1/4}, \end{align} which is different from the question or the definition in [2], see page 325, in that I use the starting values $a,b,c,e$ and not the derived values $A,B,C,E$ (as stated in [1]). The values of $\alpha_i$ also differ between [1] and [2]. From [1], page 618, I take \begin{align} \alpha_0 &amp;= \frac{acB_1}{\Delta},\\ \alpha_1 &amp;= \frac{c C_1 E_1}{\Delta},\\ \alpha_2 &amp;= \frac{a C_2 E_1}{\Delta},\\ \alpha_3 &amp;= \frac{B_1 C_1 C_2}{\Delta}, \end{align} where again the starting values are used, and not $A,B,C,E$ as in the question or in [3], page 325.</p> <p>All sources however agree on the definition of $R(x)$ as \begin{align} R(x) = x (x-\alpha_0)(x-\alpha_1)(x-\alpha_2)(x-\alpha_3). \end{align}</p> <p>Now [1], page 618, states that \begin{align} 4 \frac{\pi^2}{g} = \int_{0}^{\alpha_3}\int_{\alpha_2}^{\alpha_1} \frac{x-y}{\sqrt{R(x)R(y)}} dx dy, \end{align} where it is important to notice the factor $4$ that is missing in [2] and in the question.</p> <p>My numerical experiments show that neither [1] nor [2] is correct. 
In fact I think the following statement should hold:</p> <p><strong>Theorem:</strong> For $a &gt; b &gt; c &gt; e &gt; 0$ such that $ae - bc &gt; 0$ we have \begin{align} \frac{\pi^2}{g} = \int_{0}^{\alpha_3}\int_{\alpha_2}^{\alpha_1} \frac{x-y}{\sqrt{R(x)R(y)}} dx dy, \end{align} where $g = \lim_{n \to \infty} a_n$ and $\alpha_i$, $i=0,1,2,3$ is defined as in [1], i.e., as above.</p> <p>It is important to notice that the statement</p> <ol> <li>uses condition $(C^*)$ from [2], page 324,</li> <li>uses the definition from [1] regarding $\Delta$ and $\alpha_i$, $i=0,1,2,3$,</li> <li>and does not include the factor $4$ which is present in [1] but not in [2].</li> </ol> <p>In summary, it's a wild mix of [1] and [2]...</p> <p>I got to this basically following [1] and then using $(C^*)$ instead of $(C)$ and realizing that the value was $4$ times too high. You can double check this with your favorite numerical software tool, I have added my R implementation below. Finally, I have to say that the values don't match perfectly. The numerical integration of the rational function seems to be difficult, but the numbers are always (for random starting values obeying $(C^*)$) close (enough) together in my opinion. 
As long as I enforce $(C^*)$ I also don't experience any trouble with negative square roots or the like (but this is possible if $(C)$ is used).</p> <hr> <h2>R Implementation</h2> <pre><code>#non-random starting values that work with Borwein &amp; Borwein page 324 Theorem 3 condition a &lt;- 7 b &lt;- 4 c &lt;- 3 e &lt;- 2 #get some random starting values that are OK goodStart &lt;- FALSE while (goodStart==FALSE){ X &lt;- sort(rexp(4,rate=20)) #Borwein &amp; Borwein page 324 Theorem 3 condition if ( X[4]*X[1] - X[2]*X[3] &gt; 0){ a &lt;- X[4] b &lt;- X[3] c &lt;- X[2] e &lt;- X[1] goodStart = TRUE } } #Borwein &amp; Borwein page 324 Theorem 3 condition if ( a*e - b*c &lt;= 0){stop("Borwein &amp; Borwein coefficient condition not ok")} #Borchardt condition - seems to be wrong #if ( a - b - c + e &lt;= 0){stop("coefficient condition not ok")} a_i &lt;- a b_i &lt;- b c_i &lt;- c e_i &lt;- e for (i in 1:1000) { tmp_a &lt;- (a_i+b_i+c_i+e_i)/4 tmp_b &lt;- (sqrt(a_i*b_i) + sqrt(c_i*e_i))/2 tmp_c &lt;- (sqrt(a_i*c_i) + sqrt(b_i*e_i))/2 tmp_e &lt;- (sqrt(a_i*e_i) + sqrt(b_i*c_i))/2 a_i &lt;- tmp_a b_i &lt;- tmp_b c_i &lt;- tmp_c e_i &lt;- tmp_e } #print(max(a_i,b_i,c_i,e_i)-min(a_i,b_i,c_i,e_i)) #just pick any of the four g &lt;- a_i #compute close form solution I1 &lt;- pi^2/g print("sequence") print(I1) #page 617 bottom A &lt;- a + b + c + e B &lt;- a + b - c - e C &lt;- a - b + c - e E &lt;- a - b - c + e B1 &lt;- (sqrt(A*B)+sqrt(C*E))/2 B2 &lt;- (sqrt(A*B)-sqrt(C*E))/2 C1 &lt;- (sqrt(A*C)+sqrt(B*E))/2 C2 &lt;- (sqrt(A*C)-sqrt(B*E))/2 E1 &lt;- (sqrt(A*E)+sqrt(B*C))/2 #this seems to be wrong in the original E2 &lt;- (sqrt(A*E)-sqrt(B*C))/2 Delta &lt;- (a*b*c*e*B1*C1*E1*B2*C2*E2)^(1/4) alpha0 &lt;- a*c*B1/Delta alpha1 &lt;- c*C1*E1/Delta alpha2 &lt;- a*C2*E1/Delta alpha3 &lt;- B1*C1*C2/Delta integrand &lt;- function(x,y,param){ alpha0 &lt;- param$alpha0 alpha1 &lt;- param$alpha1 alpha2 &lt;- param$alpha2 alpha3 &lt;- param$alpha3 Rx &lt;- 
x*(x-alpha0)*(x-alpha1)*(x-alpha2)*(x-alpha3) Ry &lt;- y*(y-alpha0)*(y-alpha1)*(y-alpha2)*(y-alpha3) z &lt;- (x-y) / sqrt(Rx*Ry) return(z) } p &lt;- list(alpha0=alpha0,alpha1=alpha1,alpha2=alpha2,alpha3=alpha3) library(statmod) gauss &lt;- gauss.quad(500,kind="legendre") w &lt;- gauss$weights x &lt;- gauss$nodes upper_x &lt;- alpha1 lower_x &lt;- alpha2 upper_y &lt;- alpha3 lower_y &lt;- 0 I &lt;- 0 for (i in 1:length(x)) { for (j in 1:length(x)) { xx &lt;- (upper_x-lower_x)*x[i]/2 + (upper_x+lower_x)/2 yy &lt;- (upper_y-lower_y)*x[j]/2 + (upper_y+lower_y)/2 ww &lt;- w[i]*w[j] I &lt;- I + ww*integrand(xx,yy,p) } } I &lt;- (upper_x-lower_x)*(upper_y-lower_y)*I/4 print("numerical integral:") print(I) </code></pre> Answer 26: <p>Here are just a few remarks.</p> <p>Borchart's original article (referred to as Berl. Monatsber. , pages 611–621) is fully available (in German) at <a href="http://bibliothek.bbaw.de/bibliothek-digital/digitalequellen/schriften/anzeige/index_html?band=09-mon/1876&amp;seite:int=647" rel="nofollow noreferrer">http://bibliothek.bbaw.de/bibliothek-digital/digitalequellen/schriften/anzeige/index_html?band=09-mon/1876&amp;seite:int=647</a>.</p> <p>However this article itself uses work already done in a longer older article (where most of the hard compatations are done), (referred to as (Mem. Acad. Berlin, 1878, pp. 33-96), and also fully available online at <a href="http://bibliothek.bbaw.de/bbaw/bibliothek-digital/digitalequellen/schriften/anzeige/index_html?band=07-abh/1878&amp;seite:int=93" rel="nofollow noreferrer">http://bibliothek.bbaw.de/bbaw/bibliothek-digital/digitalequellen/schriften/anzeige/index_html?band=07-abh/1878&amp;seite:int=93</a>.</p> <p>Your square-root of negative number problem can be solved very simply by starting from $1$ (or any nonzero index) instead of $0$. 
Because of Borchardt's identity $$ a_{n+1}+\varepsilon b_{n+1}+ \varepsilon' c_{n+1}+\varepsilon \varepsilon'd_{n+1}=\frac{(\sqrt{a_n}+\varepsilon \sqrt{b_n}+ \varepsilon'\sqrt{c_n}+\varepsilon \varepsilon'\sqrt{d_n})^2}{4} \tag{1} $$</p> <p>for any two signs $\varepsilon,\varepsilon'$ we have for $n\geq 1$, $$a_{n+1}+\varepsilon b_{n+1}+ \varepsilon ' c_{n+1}+\varepsilon \varepsilon 'd_{n+1} \geq 0 \tag{2}$$ </p> <p>So if you use any $a_1,b_1,c_1,d_1$ instead of $a_0,b_0,c_0,d_0$ to define the $B,C,\alpha$ constants, you will get only real and positive numbers.</p> Answer 27: <p>Ok, my answer got a little bit out of hand, but I hope it helps to figure out what the problem is. In my opinion the problems are mainly due to typos that are carried over. My answer is based on two articles. One is the original article of Borchardt, where the link was provided in Ewan's answer.</p> <p>[1] Borchardt, K. W. (1876): Hr. Borchardt las über das arithmetisch-geometrische Mittel aus vier Elementen. Monatsberichte der Königlich Preußischen Akademie der Wissenschaften zu Berlin (Berlin: Verl. d. Kgl. Akad. d. Wiss., 1877), Berlin. (<a href="http://bibliothek.bbaw.de/bibliothek-digital/digitalequellen/schriften/anzeige/index_html?band=09-mon/1876&amp;seite:int=646" rel="nofollow noreferrer">http://bibliothek.bbaw.de/bibliothek-digital/digitalequellen/schriften/anzeige/index_html?band=09-mon/1876&amp;seite:int=646</a>)</p> <p>[2] J. M. Borwein and P. B. Borwein. On the Mean Iteration $(a,b) \leftarrow \left(\frac{a+3b}{4},\frac{\sqrt{ab}+b}{2}\right)$. Mathematics of Computation, 53(187):311–326, 1989. (<a href="https://pdfs.semanticscholar.org/5ead/200192d495859702eb47bc54babb20ab75ae.pdf" rel="nofollow noreferrer">https://pdfs.semanticscholar.org/5ead/200192d495859702eb47bc54babb20ab75ae.pdf</a>)</p> <p>I didn't double check the mathematics (i.e. the reasoning and proofs) in [1] and [2], but I compared the relevant formulas and implemented them numerically. 
This finally led to a solution that I think is numerically satisfying (but see below).</p> <p>Basically, [2] uses the same definitions as in the question, so I'll start from [1] and highlight the differences.</p> <p>The first important difference are the conditions on $a,b,c,e$. In [1] (on top of page 612) the <em>real</em> numbers $a,b,c,e$ have to be such that (i) $a &gt; b &gt; c &gt; e &gt; 0$ and (ii) $a - b - c + e &gt; 0$. I will call this condition $(C)$. On the other hand in [2] page 324 we have $a &gt; b &gt; c &gt; e &gt; 0$ such that $ae - bc &gt; 0$, and I'll call this condition $(C^*)$.</p> <p>The conditions $(C)$ and $(C^*)$ are clearly there to prevent problems with negative square roots etc., which should already solve some of the problems. However, it seems (numerically) that $(C^*)$ is the one that should finally be used (see below).</p> <p>From the original article [1], page 612, I take $a_0 = a, b_0 = b, c_0 = c, e_0 = e$ and \begin{align} a_{n+1} &amp;= (a_n+b_n+c_n+e_n)/4,\\ b_{n+1} &amp;= (\sqrt{a_nb_n}+\sqrt{c_ne_n})/2,\\ c_{n+1} &amp;= (\sqrt{a_nc_n}+\sqrt{b_ne_n})/2,\\ e_{n+1} &amp;= (\sqrt{a_ne_n}+\sqrt{b_nc_n})/2, \end{align} and I set $g = \lim_{n \to \infty} a_n$ (the limits are all the same, so I pick wlog the one for $a_n$).</p> <p>From [1], page 617, I also take \begin{align} A &amp;= a+b+c+e,\\ B &amp;= a+b-c-e,\\ C &amp;= a-b+c-e,\\ E &amp;= a-b-c+e, \end{align} and \begin{align} B_1 &amp;= (\sqrt{AB}+\sqrt{CE})/2,\\ B_2 &amp;= (\sqrt{AB}-\sqrt{CE})/2,\\ C_1 &amp;= (\sqrt{AC}+\sqrt{BE})/2,\\ C_2 &amp;= (\sqrt{AC}-\sqrt{BE})/2,\\ E_1 &amp;= (\sqrt{AE}+\sqrt{BC})/2,\\ E_2 &amp;= (\sqrt{AE}-\sqrt{BC})/2, \end{align} where it is important to notice that [1] defines $E_1 = (\sqrt{AE}-\sqrt{BC})/2 = E_2$ which I believe is a typo and hence I switch the $-$ for a $+$. So far we are in agreement with the definitions in the question and in [2], but considering the values of $\alpha_i$, $i=0,1,2,3$ there are differences. 
From [1], page 618, I take \begin{align} \Delta = (abce B_1 C_1 E_1 B_2 C_2 E_2)^{1/4}, \end{align} which is different from the question or the definition in [2], see page 325, in that I use the starting values $a,b,c,e$ and not the derived values $A,B,C,E$ (as stated in [1]). The values of $\alpha_i$ also differ between [1] and [2]. From [1], page 618, I take \begin{align} \alpha_0 &amp;= \frac{acB_1}{\Delta},\\ \alpha_1 &amp;= \frac{c C_1 E_1}{\Delta},\\ \alpha_2 &amp;= \frac{a C_2 E_1}{\Delta},\\ \alpha_3 &amp;= \frac{B_1 C_1 C_2}{\Delta}, \end{align} where again the starting values are used, and not $A,B,C,E$ as in the question or in [2], page 325.</p> <p>All sources, however, agree on the definition of $R(x)$ as \begin{align} R(x) = x (x-\alpha_0)(x-\alpha_1)(x-\alpha_2)(x-\alpha_3). \end{align}</p> <p>Now [1], page 618, states that \begin{align} 4 \frac{\pi^2}{g} = \int_{0}^{\alpha_3}\int_{\alpha_2}^{\alpha_1} \frac{x-y}{\sqrt{R(x)R(y)}} dx dy, \end{align} where it is important to notice the factor $4$ that is missing in [2] and in the question.</p> <p>My numerical experiments show that neither [1] nor [2] is correct.
In fact I think the following statement should hold:</p> <p><strong>Theorem:</strong> For $a &gt; b &gt; c &gt; e &gt; 0$ such that $ae - bc &gt; 0$ we have \begin{align} \frac{\pi^2}{g} = \int_{0}^{\alpha_3}\int_{\alpha_2}^{\alpha_1} \frac{x-y}{\sqrt{R(x)R(y)}} dx dy, \end{align} where $g = \lim_{n \to \infty} a_n$ and the $\alpha_i$, $i=0,1,2,3$, are defined as in [1], i.e., as above.</p> <p>It is important to notice that the statement</p> <ol> <li>uses condition $(C^*)$ from [2], page 324,</li> <li>uses the definition from [1] regarding $\Delta$ and $\alpha_i$, $i=0,1,2,3$,</li> <li>and does not include the factor $4$ which is present in [1] but not in [2].</li> </ol> <p>In summary, it's a wild mix of [1] and [2]...</p> <p>I got to this basically by following [1], then using $(C^*)$ instead of $(C)$, and realizing that the value was $4$ times too high. You can double-check this with your favorite numerical software tool; I have added my R implementation below. Finally, I have to say that the values don't match perfectly. The numerical integration of the rational function seems to be difficult, but the numbers are always (for random starting values obeying $(C^*)$) close (enough) together in my opinion.
As long as I enforce $(C^*)$ I also don't experience any trouble with negative square roots or the like (but this is possible if $(C)$ is used).</p> <hr> <h2>R Implementation</h2> <pre><code>#non-random starting values that work with Borwein &amp; Borwein page 324 Theorem 3 condition
a &lt;- 7
b &lt;- 4
c &lt;- 3
e &lt;- 2

#get some random starting values that are OK
goodStart &lt;- FALSE
while (goodStart==FALSE){
  X &lt;- sort(rexp(4,rate=20))
  #Borwein &amp; Borwein page 324 Theorem 3 condition
  if ( X[4]*X[1] - X[2]*X[3] &gt; 0){
    a &lt;- X[4]
    b &lt;- X[3]
    c &lt;- X[2]
    e &lt;- X[1]
    goodStart = TRUE
  }
}

#Borwein &amp; Borwein page 324 Theorem 3 condition
if ( a*e - b*c &lt;= 0){stop("Borwein &amp; Borwein coefficient condition not ok")}
#Borchardt condition - seems to be wrong
#if ( a - b - c + e &lt;= 0){stop("coefficient condition not ok")}

#the four-element AGM iteration
a_i &lt;- a
b_i &lt;- b
c_i &lt;- c
e_i &lt;- e
for (i in 1:1000) {
  tmp_a &lt;- (a_i+b_i+c_i+e_i)/4
  tmp_b &lt;- (sqrt(a_i*b_i) + sqrt(c_i*e_i))/2
  tmp_c &lt;- (sqrt(a_i*c_i) + sqrt(b_i*e_i))/2
  tmp_e &lt;- (sqrt(a_i*e_i) + sqrt(b_i*c_i))/2
  a_i &lt;- tmp_a
  b_i &lt;- tmp_b
  c_i &lt;- tmp_c
  e_i &lt;- tmp_e
}
#print(max(a_i,b_i,c_i,e_i)-min(a_i,b_i,c_i,e_i))

#just pick any of the four
g &lt;- a_i

#compute closed-form solution
I1 &lt;- pi^2/g
print("sequence")
print(I1)

#page 617 bottom
A &lt;- a + b + c + e
B &lt;- a + b - c - e
C &lt;- a - b + c - e
E &lt;- a - b - c + e

B1 &lt;- (sqrt(A*B)+sqrt(C*E))/2
B2 &lt;- (sqrt(A*B)-sqrt(C*E))/2
C1 &lt;- (sqrt(A*C)+sqrt(B*E))/2
C2 &lt;- (sqrt(A*C)-sqrt(B*E))/2
E1 &lt;- (sqrt(A*E)+sqrt(B*C))/2 #this seems to be wrong in the original
E2 &lt;- (sqrt(A*E)-sqrt(B*C))/2

Delta &lt;- (a*b*c*e*B1*C1*E1*B2*C2*E2)^(1/4)

alpha0 &lt;- a*c*B1/Delta
alpha1 &lt;- c*C1*E1/Delta
alpha2 &lt;- a*C2*E1/Delta
alpha3 &lt;- B1*C1*C2/Delta

integrand &lt;- function(x,y,param){
  alpha0 &lt;- param$alpha0
  alpha1 &lt;- param$alpha1
  alpha2 &lt;- param$alpha2
  alpha3 &lt;- param$alpha3
  Rx &lt;- x*(x-alpha0)*(x-alpha1)*(x-alpha2)*(x-alpha3)
  Ry &lt;- y*(y-alpha0)*(y-alpha1)*(y-alpha2)*(y-alpha3)
  z &lt;- (x-y) / sqrt(Rx*Ry)
  return(z)
}

p &lt;- list(alpha0=alpha0,alpha1=alpha1,alpha2=alpha2,alpha3=alpha3)

#Gauss-Legendre quadrature on the rectangle [alpha2,alpha1] x [0,alpha3]
library(statmod)
gauss &lt;- gauss.quad(500,kind="legendre")
w &lt;- gauss$weights
x &lt;- gauss$nodes

upper_x &lt;- alpha1
lower_x &lt;- alpha2
upper_y &lt;- alpha3
lower_y &lt;- 0

I &lt;- 0
for (i in 1:length(x)) {
  for (j in 1:length(x)) {
    xx &lt;- (upper_x-lower_x)*x[i]/2 + (upper_x+lower_x)/2
    yy &lt;- (upper_y-lower_y)*x[j]/2 + (upper_y+lower_y)/2
    ww &lt;- w[i]*w[j]
    I &lt;- I + ww*integrand(xx,yy,p)
  }
}
I &lt;- (upper_x-lower_x)*(upper_y-lower_y)*I/4

print("numerical integral:")
print(I)
</code></pre> Answer 28: <p>Here are just a few remarks.</p> <p>Borchardt's original article (referred to as Berl. Monatsber., pages 611–621) is fully available (in German) at <a href="http://bibliothek.bbaw.de/bibliothek-digital/digitalequellen/schriften/anzeige/index_html?band=09-mon/1876&amp;seite:int=647" rel="nofollow noreferrer">http://bibliothek.bbaw.de/bibliothek-digital/digitalequellen/schriften/anzeige/index_html?band=09-mon/1876&amp;seite:int=647</a>.</p> <p>However, this article itself uses work already done in a longer, older article (where most of the hard computations are done), referred to as Mem. Acad. Berlin, 1878, pp. 33–96, and also fully available online at <a href="http://bibliothek.bbaw.de/bbaw/bibliothek-digital/digitalequellen/schriften/anzeige/index_html?band=07-abh/1878&amp;seite:int=93" rel="nofollow noreferrer">http://bibliothek.bbaw.de/bbaw/bibliothek-digital/digitalequellen/schriften/anzeige/index_html?band=07-abh/1878&amp;seite:int=93</a>.</p> <p>Your square-root-of-negative-number problem can be solved very simply by starting from $1$ (or any nonzero index) instead of $0$.
Because of Borchardt's identity $$ a_{n+1}+\varepsilon b_{n+1}+ \varepsilon' c_{n+1}+\varepsilon \varepsilon'd_{n+1}=\frac{(\sqrt{a_n}+\varepsilon \sqrt{b_n}+ \varepsilon'\sqrt{c_n}+\varepsilon \varepsilon'\sqrt{d_n})^2}{4} \tag{1} $$</p> <p>for any two signs $\varepsilon,\varepsilon'$ we have for $n\geq 1$, $$a_{n+1}+\varepsilon b_{n+1}+ \varepsilon ' c_{n+1}+\varepsilon \varepsilon 'd_{n+1} \geq 0 \tag{2}$$ </p> <p>So if you use any $a_1,b_1,c_1,d_1$ instead of $a_0,b_0,c_0,d_0$ to define the $B,C,\alpha$ constants, you will get only real and positive numbers.</p> Answer 29: <p>Ok, my answer got a little bit out of hand, but I hope it helps to figure out what the problem is. In my opinion the problems are mainly due to typos that are carried over. My answer is based on two articles. One is the original article of Borchardt, where the link was provided in Ewan's answer.</p> <p>[1] Borchardt, K. W. (1876): Hr. Borchardt las über das arithmetisch-geometrische Mittel aus vier Elementen. Monatsberichte der Königlich Preußischen Akademie der Wissenschaften zu Berlin (Berlin: Verl. d. Kgl. Akad. d. Wiss., 1877), Berlin. (<a href="http://bibliothek.bbaw.de/bibliothek-digital/digitalequellen/schriften/anzeige/index_html?band=09-mon/1876&amp;seite:int=646" rel="nofollow noreferrer">http://bibliothek.bbaw.de/bibliothek-digital/digitalequellen/schriften/anzeige/index_html?band=09-mon/1876&amp;seite:int=646</a>)</p> <p>[2] J. M. Borwein and P. B. Borwein. On the Mean Iteration $(a,b) \leftarrow \left(\frac{a+3b}{4},\frac{\sqrt{ab}+b}{2}\right)$. Mathematics of Computation, 53(187):311–326, 1989. (<a href="https://pdfs.semanticscholar.org/5ead/200192d495859702eb47bc54babb20ab75ae.pdf" rel="nofollow noreferrer">https://pdfs.semanticscholar.org/5ead/200192d495859702eb47bc54babb20ab75ae.pdf</a>)</p> <p>I didn't double check the mathematics (i.e. the reasoning and proofs) in [1] and [2], but I compared the relevant formulas and implemented them numerically. 
This finally led to a solution that I think is numerically satisfying (but see below).</p> <p>Basically, [2] uses the same definitions as in the question, so I'll start from [1] and highlight the differences.</p> <p>The first important difference are the conditions on $a,b,c,e$. In [1] (on top of page 612) the <em>real</em> numbers $a,b,c,e$ have to be such that (i) $a &gt; b &gt; c &gt; e &gt; 0$ and (ii) $a - b - c + e &gt; 0$. I will call this condition $(C)$. On the other hand in [2] page 324 we have $a &gt; b &gt; c &gt; e &gt; 0$ such that $ae - bc &gt; 0$, and I'll call this condition $(C^*)$.</p> <p>The conditions $(C)$ and $(C^*)$ are clearly there to prevent problems with negative square roots etc., which should already solve some of the problems. However, it seems (numerically) that $(C^*)$ is the one that should finally be used (see below).</p> <p>From the original article [1], page 612, I take $a_0 = a, b_0 = b, c_0 = c, e_0 = e$ and \begin{align} a_{n+1} &amp;= (a_n+b_n+c_n+e_n)/4,\\ b_{n+1} &amp;= (\sqrt{a_nb_n}+\sqrt{c_ne_n})/2,\\ c_{n+1} &amp;= (\sqrt{a_nc_n}+\sqrt{b_ne_n})/2,\\ e_{n+1} &amp;= (\sqrt{a_ne_n}+\sqrt{b_nc_n})/2, \end{align} and I set $g = \lim_{n \to \infty} a_n$ (the limits are all the same, so I pick wlog the one for $a_n$).</p> <p>From [1], page 617, I also take \begin{align} A &amp;= a+b+c+e,\\ B &amp;= a+b-c-e,\\ C &amp;= a-b+c-e,\\ E &amp;= a-b-c+e, \end{align} and \begin{align} B_1 &amp;= (\sqrt{AB}+\sqrt{CE})/2,\\ B_2 &amp;= (\sqrt{AB}-\sqrt{CE})/2,\\ C_1 &amp;= (\sqrt{AC}+\sqrt{BE})/2,\\ C_2 &amp;= (\sqrt{AC}-\sqrt{BE})/2,\\ E_1 &amp;= (\sqrt{AE}+\sqrt{BC})/2,\\ E_2 &amp;= (\sqrt{AE}-\sqrt{BC})/2, \end{align} where it is important to notice that [1] defines $E_1 = (\sqrt{AE}-\sqrt{BC})/2 = E_2$ which I believe is a typo and hence I switch the $-$ for a $+$. So far we are in agreement with the definitions in the question and in [2], but considering the values of $\alpha_i$, $i=0,1,2,3$ there are differences. 
From [1], page 618, I take \begin{align} \Delta = (abce B_1 C_1 E_1 B_2 C_2 E_2)^{1/4}, \end{align} which is different from the question or the definition in [2], see page 325, in that I use the starting values $a,b,c,e$ and not the derived values $A,B,C,E$ (as stated in [1]). The values of $\alpha_i$ also differ between [1] and [2]. From [1], page 618, I take \begin{align} \alpha_0 &amp;= \frac{acB_1}{\Delta},\\ \alpha_1 &amp;= \frac{c C_1 E_1}{\Delta},\\ \alpha_2 &amp;= \frac{a C_2 E_1}{\Delta},\\ \alpha_3 &amp;= \frac{B_1 C_1 C_2}{\Delta}, \end{align} where again the starting values are used, and not $A,B,C,E$ as in the question or in [3], page 325.</p> <p>All sources however agree on the definition of $R(x)$ as \begin{align} R(x) = x (x-\alpha_0)(x-\alpha_1)(x-\alpha_2)(x-\alpha_3). \end{align}</p> <p>Now [1], page 618, states that \begin{align} 4 \frac{\pi^2}{g} = \int_{0}^{\alpha_3}\int_{\alpha_2}^{\alpha_1} \frac{x-y}{\sqrt{R(x)R(y)}} dx dy, \end{align} where it is important to notice the factor $4$ that is missing in [2] and in the question.</p> <p>My numerical experiments show that neither [1] nor [2] is correct. 
In fact I think the following statement should hold:</p> <p><strong>Theorem:</strong> For $a &gt; b &gt; c &gt; e &gt; 0$ such that $ae - bc &gt; 0$ we have \begin{align} \frac{\pi^2}{g} = \int_{0}^{\alpha_3}\int_{\alpha_2}^{\alpha_1} \frac{x-y}{\sqrt{R(x)R(y)}} dx dy, \end{align} where $g = \lim_{n \to \infty} a_n$ and $\alpha_i$, $i=0,1,2,3$ is defined as in [1], i.e., as above.</p> <p>It is important to notice that the statement</p> <ol> <li>uses condition $(C^*)$ from [2], page 324,</li> <li>uses the definition from [1] regarding $\Delta$ and $\alpha_i$, $i=0,1,2,3$,</li> <li>and does not include the factor $4$ which is present in [1] but not in [2].</li> </ol> <p>In summary, it's a wild mix of [1] and [2]...</p> <p>I got to this basically following [1] and then using $(C^*)$ instead of $(C)$ and realizing that the value was $4$ times too high. You can double check this with your favorite numerical software tool, I have added my R implementation below. Finally, I have to say that the values don't match perfectly. The numerical integration of the rational function seems to be difficult, but the numbers are always (for random starting values obeying $(C^*)$) close (enough) together in my opinion. 
As long as I enforce $(C^*)$ I also don't experience any trouble with negative square roots or the like (but this is possible if $(C)$ is used).</p> <hr> <h2>R Implementation</h2> <pre><code>#non-random starting values that work with Borwein &amp; Borwein page 324 Theorem 3 condition a &lt;- 7 b &lt;- 4 c &lt;- 3 e &lt;- 2 #get some random starting values that are OK goodStart &lt;- FALSE while (goodStart==FALSE){ X &lt;- sort(rexp(4,rate=20)) #Borwein &amp; Borwein page 324 Theorem 3 condition if ( X[4]*X[1] - X[2]*X[3] &gt; 0){ a &lt;- X[4] b &lt;- X[3] c &lt;- X[2] e &lt;- X[1] goodStart = TRUE } } #Borwein &amp; Borwein page 324 Theorem 3 condition if ( a*e - b*c &lt;= 0){stop("Borwein &amp; Borwein coefficient condition not ok")} #Borchardt condition - seems to be wrong #if ( a - b - c + e &lt;= 0){stop("coefficient condition not ok")} a_i &lt;- a b_i &lt;- b c_i &lt;- c e_i &lt;- e for (i in 1:1000) { tmp_a &lt;- (a_i+b_i+c_i+e_i)/4 tmp_b &lt;- (sqrt(a_i*b_i) + sqrt(c_i*e_i))/2 tmp_c &lt;- (sqrt(a_i*c_i) + sqrt(b_i*e_i))/2 tmp_e &lt;- (sqrt(a_i*e_i) + sqrt(b_i*c_i))/2 a_i &lt;- tmp_a b_i &lt;- tmp_b c_i &lt;- tmp_c e_i &lt;- tmp_e } #print(max(a_i,b_i,c_i,e_i)-min(a_i,b_i,c_i,e_i)) #just pick any of the four g &lt;- a_i #compute close form solution I1 &lt;- pi^2/g print("sequence") print(I1) #page 617 bottom A &lt;- a + b + c + e B &lt;- a + b - c - e C &lt;- a - b + c - e E &lt;- a - b - c + e B1 &lt;- (sqrt(A*B)+sqrt(C*E))/2 B2 &lt;- (sqrt(A*B)-sqrt(C*E))/2 C1 &lt;- (sqrt(A*C)+sqrt(B*E))/2 C2 &lt;- (sqrt(A*C)-sqrt(B*E))/2 E1 &lt;- (sqrt(A*E)+sqrt(B*C))/2 #this seems to be wrong in the original E2 &lt;- (sqrt(A*E)-sqrt(B*C))/2 Delta &lt;- (a*b*c*e*B1*C1*E1*B2*C2*E2)^(1/4) alpha0 &lt;- a*c*B1/Delta alpha1 &lt;- c*C1*E1/Delta alpha2 &lt;- a*C2*E1/Delta alpha3 &lt;- B1*C1*C2/Delta integrand &lt;- function(x,y,param){ alpha0 &lt;- param$alpha0 alpha1 &lt;- param$alpha1 alpha2 &lt;- param$alpha2 alpha3 &lt;- param$alpha3 Rx &lt;- 
x*(x-alpha0)*(x-alpha1)*(x-alpha2)*(x-alpha3) Ry &lt;- y*(y-alpha0)*(y-alpha1)*(y-alpha2)*(y-alpha3) z &lt;- (x-y) / sqrt(Rx*Ry) return(z) } p &lt;- list(alpha0=alpha0,alpha1=alpha1,alpha2=alpha2,alpha3=alpha3) library(statmod) gauss &lt;- gauss.quad(500,kind="legendre") w &lt;- gauss$weights x &lt;- gauss$nodes upper_x &lt;- alpha1 lower_x &lt;- alpha2 upper_y &lt;- alpha3 lower_y &lt;- 0 I &lt;- 0 for (i in 1:length(x)) { for (j in 1:length(x)) { xx &lt;- (upper_x-lower_x)*x[i]/2 + (upper_x+lower_x)/2 yy &lt;- (upper_y-lower_y)*x[j]/2 + (upper_y+lower_y)/2 ww &lt;- w[i]*w[j] I &lt;- I + ww*integrand(xx,yy,p) } } I &lt;- (upper_x-lower_x)*(upper_y-lower_y)*I/4 print("numerical integral:") print(I) </code></pre> Answer 30: <p>Here are just a few remarks.</p> <p>Borchart's original article (referred to as Berl. Monatsber. , pages 611–621) is fully available (in German) at <a href="http://bibliothek.bbaw.de/bibliothek-digital/digitalequellen/schriften/anzeige/index_html?band=09-mon/1876&amp;seite:int=647" rel="nofollow noreferrer">http://bibliothek.bbaw.de/bibliothek-digital/digitalequellen/schriften/anzeige/index_html?band=09-mon/1876&amp;seite:int=647</a>.</p> <p>However this article itself uses work already done in a longer older article (where most of the hard compatations are done), (referred to as (Mem. Acad. Berlin, 1878, pp. 33-96), and also fully available online at <a href="http://bibliothek.bbaw.de/bbaw/bibliothek-digital/digitalequellen/schriften/anzeige/index_html?band=07-abh/1878&amp;seite:int=93" rel="nofollow noreferrer">http://bibliothek.bbaw.de/bbaw/bibliothek-digital/digitalequellen/schriften/anzeige/index_html?band=07-abh/1878&amp;seite:int=93</a>.</p> <p>Your square-root of negative number problem can be solved very simply by starting from $1$ (or any nonzero index) instead of $0$. 
Because of Borchardt's identity $$ a_{n+1}+\varepsilon b_{n+1}+ \varepsilon' c_{n+1}+\varepsilon \varepsilon'd_{n+1}=\frac{(\sqrt{a_n}+\varepsilon \sqrt{b_n}+ \varepsilon'\sqrt{c_n}+\varepsilon \varepsilon'\sqrt{d_n})^2}{4} \tag{1} $$</p> <p>for any two signs $\varepsilon,\varepsilon'$ we have for $n\geq 1$, $$a_{n+1}+\varepsilon b_{n+1}+ \varepsilon ' c_{n+1}+\varepsilon \varepsilon 'd_{n+1} \geq 0 \tag{2}$$ </p> <p>So if you use any $a_1,b_1,c_1,d_1$ instead of $a_0,b_0,c_0,d_0$ to define the $B,C,\alpha$ constants, you will get only real and positive numbers.</p> Answer 31: <p>Ok, my answer got a little bit out of hand, but I hope it helps to figure out what the problem is. In my opinion the problems are mainly due to typos that are carried over. My answer is based on two articles. One is the original article of Borchardt, where the link was provided in Ewan's answer.</p> <p>[1] Borchardt, K. W. (1876): Hr. Borchardt las über das arithmetisch-geometrische Mittel aus vier Elementen. Monatsberichte der Königlich Preußischen Akademie der Wissenschaften zu Berlin (Berlin: Verl. d. Kgl. Akad. d. Wiss., 1877), Berlin. (<a href="http://bibliothek.bbaw.de/bibliothek-digital/digitalequellen/schriften/anzeige/index_html?band=09-mon/1876&amp;seite:int=646" rel="nofollow noreferrer">http://bibliothek.bbaw.de/bibliothek-digital/digitalequellen/schriften/anzeige/index_html?band=09-mon/1876&amp;seite:int=646</a>)</p> <p>[2] J. M. Borwein and P. B. Borwein. On the Mean Iteration $(a,b) \leftarrow \left(\frac{a+3b}{4},\frac{\sqrt{ab}+b}{2}\right)$. Mathematics of Computation, 53(187):311–326, 1989. (<a href="https://pdfs.semanticscholar.org/5ead/200192d495859702eb47bc54babb20ab75ae.pdf" rel="nofollow noreferrer">https://pdfs.semanticscholar.org/5ead/200192d495859702eb47bc54babb20ab75ae.pdf</a>)</p> <p>I didn't double check the mathematics (i.e. the reasoning and proofs) in [1] and [2], but I compared the relevant formulas and implemented them numerically. 
This finally led to a solution that I think is numerically satisfying (but see below).</p> <p>Basically, [2] uses the same definitions as in the question, so I'll start from [1] and highlight the differences.</p> <p>The first important difference are the conditions on $a,b,c,e$. In [1] (on top of page 612) the <em>real</em> numbers $a,b,c,e$ have to be such that (i) $a &gt; b &gt; c &gt; e &gt; 0$ and (ii) $a - b - c + e &gt; 0$. I will call this condition $(C)$. On the other hand in [2] page 324 we have $a &gt; b &gt; c &gt; e &gt; 0$ such that $ae - bc &gt; 0$, and I'll call this condition $(C^*)$.</p> <p>The conditions $(C)$ and $(C^*)$ are clearly there to prevent problems with negative square roots etc., which should already solve some of the problems. However, it seems (numerically) that $(C^*)$ is the one that should finally be used (see below).</p> <p>From the original article [1], page 612, I take $a_0 = a, b_0 = b, c_0 = c, e_0 = e$ and \begin{align} a_{n+1} &amp;= (a_n+b_n+c_n+e_n)/4,\\ b_{n+1} &amp;= (\sqrt{a_nb_n}+\sqrt{c_ne_n})/2,\\ c_{n+1} &amp;= (\sqrt{a_nc_n}+\sqrt{b_ne_n})/2,\\ e_{n+1} &amp;= (\sqrt{a_ne_n}+\sqrt{b_nc_n})/2, \end{align} and I set $g = \lim_{n \to \infty} a_n$ (the limits are all the same, so I pick wlog the one for $a_n$).</p> <p>From [1], page 617, I also take \begin{align} A &amp;= a+b+c+e,\\ B &amp;= a+b-c-e,\\ C &amp;= a-b+c-e,\\ E &amp;= a-b-c+e, \end{align} and \begin{align} B_1 &amp;= (\sqrt{AB}+\sqrt{CE})/2,\\ B_2 &amp;= (\sqrt{AB}-\sqrt{CE})/2,\\ C_1 &amp;= (\sqrt{AC}+\sqrt{BE})/2,\\ C_2 &amp;= (\sqrt{AC}-\sqrt{BE})/2,\\ E_1 &amp;= (\sqrt{AE}+\sqrt{BC})/2,\\ E_2 &amp;= (\sqrt{AE}-\sqrt{BC})/2, \end{align} where it is important to notice that [1] defines $E_1 = (\sqrt{AE}-\sqrt{BC})/2 = E_2$ which I believe is a typo and hence I switch the $-$ for a $+$. So far we are in agreement with the definitions in the question and in [2], but considering the values of $\alpha_i$, $i=0,1,2,3$ there are differences. 
From [1], page 618, I take \begin{align} \Delta = (abce B_1 C_1 E_1 B_2 C_2 E_2)^{1/4}, \end{align} which is different from the question or the definition in [2], see page 325, in that I use the starting values $a,b,c,e$ and not the derived values $A,B,C,E$ (as stated in [1]). The values of $\alpha_i$ also differ between [1] and [2]. From [1], page 618, I take \begin{align} \alpha_0 &amp;= \frac{acB_1}{\Delta},\\ \alpha_1 &amp;= \frac{c C_1 E_1}{\Delta},\\ \alpha_2 &amp;= \frac{a C_2 E_1}{\Delta},\\ \alpha_3 &amp;= \frac{B_1 C_1 C_2}{\Delta}, \end{align} where again the starting values are used, and not $A,B,C,E$ as in the question or in [3], page 325.</p> <p>All sources however agree on the definition of $R(x)$ as \begin{align} R(x) = x (x-\alpha_0)(x-\alpha_1)(x-\alpha_2)(x-\alpha_3). \end{align}</p> <p>Now [1], page 618, states that \begin{align} 4 \frac{\pi^2}{g} = \int_{0}^{\alpha_3}\int_{\alpha_2}^{\alpha_1} \frac{x-y}{\sqrt{R(x)R(y)}} dx dy, \end{align} where it is important to notice the factor $4$ that is missing in [2] and in the question.</p> <p>My numerical experiments show that neither [1] nor [2] is correct. 
In fact I think the following statement should hold:</p> <p><strong>Theorem:</strong> For $a &gt; b &gt; c &gt; e &gt; 0$ such that $ae - bc &gt; 0$ we have \begin{align} \frac{\pi^2}{g} = \int_{0}^{\alpha_3}\int_{\alpha_2}^{\alpha_1} \frac{x-y}{\sqrt{R(x)R(y)}} dx dy, \end{align} where $g = \lim_{n \to \infty} a_n$ and $\alpha_i$, $i=0,1,2,3$ is defined as in [1], i.e., as above.</p> <p>It is important to notice that the statement</p> <ol> <li>uses condition $(C^*)$ from [2], page 324,</li> <li>uses the definition from [1] regarding $\Delta$ and $\alpha_i$, $i=0,1,2,3$,</li> <li>and does not include the factor $4$ which is present in [1] but not in [2].</li> </ol> <p>In summary, it's a wild mix of [1] and [2]...</p> <p>I got to this basically following [1] and then using $(C^*)$ instead of $(C)$ and realizing that the value was $4$ times too high. You can double check this with your favorite numerical software tool, I have added my R implementation below. Finally, I have to say that the values don't match perfectly. The numerical integration of the rational function seems to be difficult, but the numbers are always (for random starting values obeying $(C^*)$) close (enough) together in my opinion. 
As long as I enforce $(C^*)$ I also don't experience any trouble with negative square roots or the like (but this is possible if $(C)$ is used).</p> <hr> <h2>R Implementation</h2> <pre><code>#non-random starting values that work with Borwein &amp; Borwein page 324 Theorem 3 condition a &lt;- 7 b &lt;- 4 c &lt;- 3 e &lt;- 2 #get some random starting values that are OK goodStart &lt;- FALSE while (goodStart==FALSE){ X &lt;- sort(rexp(4,rate=20)) #Borwein &amp; Borwein page 324 Theorem 3 condition if ( X[4]*X[1] - X[2]*X[3] &gt; 0){ a &lt;- X[4] b &lt;- X[3] c &lt;- X[2] e &lt;- X[1] goodStart = TRUE } } #Borwein &amp; Borwein page 324 Theorem 3 condition if ( a*e - b*c &lt;= 0){stop("Borwein &amp; Borwein coefficient condition not ok")} #Borchardt condition - seems to be wrong #if ( a - b - c + e &lt;= 0){stop("coefficient condition not ok")} a_i &lt;- a b_i &lt;- b c_i &lt;- c e_i &lt;- e for (i in 1:1000) { tmp_a &lt;- (a_i+b_i+c_i+e_i)/4 tmp_b &lt;- (sqrt(a_i*b_i) + sqrt(c_i*e_i))/2 tmp_c &lt;- (sqrt(a_i*c_i) + sqrt(b_i*e_i))/2 tmp_e &lt;- (sqrt(a_i*e_i) + sqrt(b_i*c_i))/2 a_i &lt;- tmp_a b_i &lt;- tmp_b c_i &lt;- tmp_c e_i &lt;- tmp_e } #print(max(a_i,b_i,c_i,e_i)-min(a_i,b_i,c_i,e_i)) #just pick any of the four g &lt;- a_i #compute close form solution I1 &lt;- pi^2/g print("sequence") print(I1) #page 617 bottom A &lt;- a + b + c + e B &lt;- a + b - c - e C &lt;- a - b + c - e E &lt;- a - b - c + e B1 &lt;- (sqrt(A*B)+sqrt(C*E))/2 B2 &lt;- (sqrt(A*B)-sqrt(C*E))/2 C1 &lt;- (sqrt(A*C)+sqrt(B*E))/2 C2 &lt;- (sqrt(A*C)-sqrt(B*E))/2 E1 &lt;- (sqrt(A*E)+sqrt(B*C))/2 #this seems to be wrong in the original E2 &lt;- (sqrt(A*E)-sqrt(B*C))/2 Delta &lt;- (a*b*c*e*B1*C1*E1*B2*C2*E2)^(1/4) alpha0 &lt;- a*c*B1/Delta alpha1 &lt;- c*C1*E1/Delta alpha2 &lt;- a*C2*E1/Delta alpha3 &lt;- B1*C1*C2/Delta integrand &lt;- function(x,y,param){ alpha0 &lt;- param$alpha0 alpha1 &lt;- param$alpha1 alpha2 &lt;- param$alpha2 alpha3 &lt;- param$alpha3 Rx &lt;- 
x*(x-alpha0)*(x-alpha1)*(x-alpha2)*(x-alpha3) Ry &lt;- y*(y-alpha0)*(y-alpha1)*(y-alpha2)*(y-alpha3) z &lt;- (x-y) / sqrt(Rx*Ry) return(z) } p &lt;- list(alpha0=alpha0,alpha1=alpha1,alpha2=alpha2,alpha3=alpha3) library(statmod) gauss &lt;- gauss.quad(500,kind="legendre") w &lt;- gauss$weights x &lt;- gauss$nodes upper_x &lt;- alpha1 lower_x &lt;- alpha2 upper_y &lt;- alpha3 lower_y &lt;- 0 I &lt;- 0 for (i in 1:length(x)) { for (j in 1:length(x)) { xx &lt;- (upper_x-lower_x)*x[i]/2 + (upper_x+lower_x)/2 yy &lt;- (upper_y-lower_y)*x[j]/2 + (upper_y+lower_y)/2 ww &lt;- w[i]*w[j] I &lt;- I + ww*integrand(xx,yy,p) } } I &lt;- (upper_x-lower_x)*(upper_y-lower_y)*I/4 print("numerical integral:") print(I) </code></pre> Answer 32: <p>Here are just a few remarks.</p> <p>Borchart's original article (referred to as Berl. Monatsber. , pages 611–621) is fully available (in German) at <a href="http://bibliothek.bbaw.de/bibliothek-digital/digitalequellen/schriften/anzeige/index_html?band=09-mon/1876&amp;seite:int=647" rel="nofollow noreferrer">http://bibliothek.bbaw.de/bibliothek-digital/digitalequellen/schriften/anzeige/index_html?band=09-mon/1876&amp;seite:int=647</a>.</p> <p>However this article itself uses work already done in a longer older article (where most of the hard compatations are done), (referred to as (Mem. Acad. Berlin, 1878, pp. 33-96), and also fully available online at <a href="http://bibliothek.bbaw.de/bbaw/bibliothek-digital/digitalequellen/schriften/anzeige/index_html?band=07-abh/1878&amp;seite:int=93" rel="nofollow noreferrer">http://bibliothek.bbaw.de/bbaw/bibliothek-digital/digitalequellen/schriften/anzeige/index_html?band=07-abh/1878&amp;seite:int=93</a>.</p> <p>Your square-root of negative number problem can be solved very simply by starting from $1$ (or any nonzero index) instead of $0$. 
Because of Borchardt's identity $$ a_{n+1}+\varepsilon b_{n+1}+ \varepsilon' c_{n+1}+\varepsilon \varepsilon'd_{n+1}=\frac{(\sqrt{a_n}+\varepsilon \sqrt{b_n}+ \varepsilon'\sqrt{c_n}+\varepsilon \varepsilon'\sqrt{d_n})^2}{4} \tag{1} $$</p> <p>for any two signs $\varepsilon,\varepsilon'$ we have for $n\geq 1$, $$a_{n+1}+\varepsilon b_{n+1}+ \varepsilon ' c_{n+1}+\varepsilon \varepsilon 'd_{n+1} \geq 0 \tag{2}$$ </p> <p>So if you use any $a_1,b_1,c_1,d_1$ instead of $a_0,b_0,c_0,d_0$ to define the $B,C,\alpha$ constants, you will get only real and positive numbers.</p> Answer 33: <p>Ok, my answer got a little bit out of hand, but I hope it helps to figure out what the problem is. In my opinion the problems are mainly due to typos that are carried over. My answer is based on two articles. One is the original article of Borchardt, where the link was provided in Ewan's answer.</p> <p>[1] Borchardt, K. W. (1876): Hr. Borchardt las über das arithmetisch-geometrische Mittel aus vier Elementen. Monatsberichte der Königlich Preußischen Akademie der Wissenschaften zu Berlin (Berlin: Verl. d. Kgl. Akad. d. Wiss., 1877), Berlin. (<a href="http://bibliothek.bbaw.de/bibliothek-digital/digitalequellen/schriften/anzeige/index_html?band=09-mon/1876&amp;seite:int=646" rel="nofollow noreferrer">http://bibliothek.bbaw.de/bibliothek-digital/digitalequellen/schriften/anzeige/index_html?band=09-mon/1876&amp;seite:int=646</a>)</p> <p>[2] J. M. Borwein and P. B. Borwein. On the Mean Iteration $(a,b) \leftarrow \left(\frac{a+3b}{4},\frac{\sqrt{ab}+b}{2}\right)$. Mathematics of Computation, 53(187):311–326, 1989. (<a href="https://pdfs.semanticscholar.org/5ead/200192d495859702eb47bc54babb20ab75ae.pdf" rel="nofollow noreferrer">https://pdfs.semanticscholar.org/5ead/200192d495859702eb47bc54babb20ab75ae.pdf</a>)</p> <p>I didn't double check the mathematics (i.e. the reasoning and proofs) in [1] and [2], but I compared the relevant formulas and implemented them numerically. 
This finally led to a solution that I think is numerically satisfying (but see below).</p> <p>Basically, [2] uses the same definitions as in the question, so I'll start from [1] and highlight the differences.</p> <p>The first important difference are the conditions on $a,b,c,e$. In [1] (on top of page 612) the <em>real</em> numbers $a,b,c,e$ have to be such that (i) $a &gt; b &gt; c &gt; e &gt; 0$ and (ii) $a - b - c + e &gt; 0$. I will call this condition $(C)$. On the other hand in [2] page 324 we have $a &gt; b &gt; c &gt; e &gt; 0$ such that $ae - bc &gt; 0$, and I'll call this condition $(C^*)$.</p> <p>The conditions $(C)$ and $(C^*)$ are clearly there to prevent problems with negative square roots etc., which should already solve some of the problems. However, it seems (numerically) that $(C^*)$ is the one that should finally be used (see below).</p> <p>From the original article [1], page 612, I take $a_0 = a, b_0 = b, c_0 = c, e_0 = e$ and \begin{align} a_{n+1} &amp;= (a_n+b_n+c_n+e_n)/4,\\ b_{n+1} &amp;= (\sqrt{a_nb_n}+\sqrt{c_ne_n})/2,\\ c_{n+1} &amp;= (\sqrt{a_nc_n}+\sqrt{b_ne_n})/2,\\ e_{n+1} &amp;= (\sqrt{a_ne_n}+\sqrt{b_nc_n})/2, \end{align} and I set $g = \lim_{n \to \infty} a_n$ (the limits are all the same, so I pick wlog the one for $a_n$).</p> <p>From [1], page 617, I also take \begin{align} A &amp;= a+b+c+e,\\ B &amp;= a+b-c-e,\\ C &amp;= a-b+c-e,\\ E &amp;= a-b-c+e, \end{align} and \begin{align} B_1 &amp;= (\sqrt{AB}+\sqrt{CE})/2,\\ B_2 &amp;= (\sqrt{AB}-\sqrt{CE})/2,\\ C_1 &amp;= (\sqrt{AC}+\sqrt{BE})/2,\\ C_2 &amp;= (\sqrt{AC}-\sqrt{BE})/2,\\ E_1 &amp;= (\sqrt{AE}+\sqrt{BC})/2,\\ E_2 &amp;= (\sqrt{AE}-\sqrt{BC})/2, \end{align} where it is important to notice that [1] defines $E_1 = (\sqrt{AE}-\sqrt{BC})/2 = E_2$ which I believe is a typo and hence I switch the $-$ for a $+$. So far we are in agreement with the definitions in the question and in [2], but considering the values of $\alpha_i$, $i=0,1,2,3$ there are differences. 
From [1], page 618, I take \begin{align} \Delta = (abce B_1 C_1 E_1 B_2 C_2 E_2)^{1/4}, \end{align} which is different from the question or the definition in [2], see page 325, in that I use the starting values $a,b,c,e$ and not the derived values $A,B,C,E$ (as stated in [1]). The values of $\alpha_i$ also differ between [1] and [2]. From [1], page 618, I take \begin{align} \alpha_0 &amp;= \frac{acB_1}{\Delta},\\ \alpha_1 &amp;= \frac{c C_1 E_1}{\Delta},\\ \alpha_2 &amp;= \frac{a C_2 E_1}{\Delta},\\ \alpha_3 &amp;= \frac{B_1 C_1 C_2}{\Delta}, \end{align} where again the starting values are used, and not $A,B,C,E$ as in the question or in [3], page 325.</p> <p>All sources however agree on the definition of $R(x)$ as \begin{align} R(x) = x (x-\alpha_0)(x-\alpha_1)(x-\alpha_2)(x-\alpha_3). \end{align}</p> <p>Now [1], page 618, states that \begin{align} 4 \frac{\pi^2}{g} = \int_{0}^{\alpha_3}\int_{\alpha_2}^{\alpha_1} \frac{x-y}{\sqrt{R(x)R(y)}} dx dy, \end{align} where it is important to notice the factor $4$ that is missing in [2] and in the question.</p> <p>My numerical experiments show that neither [1] nor [2] is correct. 
In fact I think the following statement should hold:</p> <p><strong>Theorem:</strong> For $a &gt; b &gt; c &gt; e &gt; 0$ such that $ae - bc &gt; 0$ we have \begin{align} \frac{\pi^2}{g} = \int_{0}^{\alpha_3}\int_{\alpha_2}^{\alpha_1} \frac{x-y}{\sqrt{R(x)R(y)}} dx dy, \end{align} where $g = \lim_{n \to \infty} a_n$ and $\alpha_i$, $i=0,1,2,3$ is defined as in [1], i.e., as above.</p> <p>It is important to notice that the statement</p> <ol> <li>uses condition $(C^*)$ from [2], page 324,</li> <li>uses the definition from [1] regarding $\Delta$ and $\alpha_i$, $i=0,1,2,3$,</li> <li>and does not include the factor $4$ which is present in [1] but not in [2].</li> </ol> <p>In summary, it's a wild mix of [1] and [2]...</p> <p>I got to this basically following [1] and then using $(C^*)$ instead of $(C)$ and realizing that the value was $4$ times too high. You can double check this with your favorite numerical software tool, I have added my R implementation below. Finally, I have to say that the values don't match perfectly. The numerical integration of the rational function seems to be difficult, but the numbers are always (for random starting values obeying $(C^*)$) close (enough) together in my opinion. 
As long as I enforce $(C^*)$ I also don't experience any trouble with negative square roots or the like (but this is possible if $(C)$ is used).</p> <hr> <h2>R Implementation</h2> <pre><code>#non-random starting values that work with Borwein &amp; Borwein page 324 Theorem 3 condition a &lt;- 7 b &lt;- 4 c &lt;- 3 e &lt;- 2 #get some random starting values that are OK goodStart &lt;- FALSE while (goodStart==FALSE){ X &lt;- sort(rexp(4,rate=20)) #Borwein &amp; Borwein page 324 Theorem 3 condition if ( X[4]*X[1] - X[2]*X[3] &gt; 0){ a &lt;- X[4] b &lt;- X[3] c &lt;- X[2] e &lt;- X[1] goodStart = TRUE } } #Borwein &amp; Borwein page 324 Theorem 3 condition if ( a*e - b*c &lt;= 0){stop("Borwein &amp; Borwein coefficient condition not ok")} #Borchardt condition - seems to be wrong #if ( a - b - c + e &lt;= 0){stop("coefficient condition not ok")} a_i &lt;- a b_i &lt;- b c_i &lt;- c e_i &lt;- e for (i in 1:1000) { tmp_a &lt;- (a_i+b_i+c_i+e_i)/4 tmp_b &lt;- (sqrt(a_i*b_i) + sqrt(c_i*e_i))/2 tmp_c &lt;- (sqrt(a_i*c_i) + sqrt(b_i*e_i))/2 tmp_e &lt;- (sqrt(a_i*e_i) + sqrt(b_i*c_i))/2 a_i &lt;- tmp_a b_i &lt;- tmp_b c_i &lt;- tmp_c e_i &lt;- tmp_e } #print(max(a_i,b_i,c_i,e_i)-min(a_i,b_i,c_i,e_i)) #just pick any of the four g &lt;- a_i #compute close form solution I1 &lt;- pi^2/g print("sequence") print(I1) #page 617 bottom A &lt;- a + b + c + e B &lt;- a + b - c - e C &lt;- a - b + c - e E &lt;- a - b - c + e B1 &lt;- (sqrt(A*B)+sqrt(C*E))/2 B2 &lt;- (sqrt(A*B)-sqrt(C*E))/2 C1 &lt;- (sqrt(A*C)+sqrt(B*E))/2 C2 &lt;- (sqrt(A*C)-sqrt(B*E))/2 E1 &lt;- (sqrt(A*E)+sqrt(B*C))/2 #this seems to be wrong in the original E2 &lt;- (sqrt(A*E)-sqrt(B*C))/2 Delta &lt;- (a*b*c*e*B1*C1*E1*B2*C2*E2)^(1/4) alpha0 &lt;- a*c*B1/Delta alpha1 &lt;- c*C1*E1/Delta alpha2 &lt;- a*C2*E1/Delta alpha3 &lt;- B1*C1*C2/Delta integrand &lt;- function(x,y,param){ alpha0 &lt;- param$alpha0 alpha1 &lt;- param$alpha1 alpha2 &lt;- param$alpha2 alpha3 &lt;- param$alpha3 Rx &lt;- 
x*(x-alpha0)*(x-alpha1)*(x-alpha2)*(x-alpha3) Ry &lt;- y*(y-alpha0)*(y-alpha1)*(y-alpha2)*(y-alpha3) z &lt;- (x-y) / sqrt(Rx*Ry) return(z) } p &lt;- list(alpha0=alpha0,alpha1=alpha1,alpha2=alpha2,alpha3=alpha3) library(statmod) gauss &lt;- gauss.quad(500,kind="legendre") w &lt;- gauss$weights x &lt;- gauss$nodes upper_x &lt;- alpha1 lower_x &lt;- alpha2 upper_y &lt;- alpha3 lower_y &lt;- 0 I &lt;- 0 for (i in 1:length(x)) { for (j in 1:length(x)) { xx &lt;- (upper_x-lower_x)*x[i]/2 + (upper_x+lower_x)/2 yy &lt;- (upper_y-lower_y)*x[j]/2 + (upper_y+lower_y)/2 ww &lt;- w[i]*w[j] I &lt;- I + ww*integrand(xx,yy,p) } } I &lt;- (upper_x-lower_x)*(upper_y-lower_y)*I/4 print("numerical integral:") print(I) </code></pre> Answer 34: <p>Here are just a few remarks.</p> <p>Borchardt's original article (referred to as Berl. Monatsber., pages 611–621) is fully available (in German) at <a href="http://bibliothek.bbaw.de/bibliothek-digital/digitalequellen/schriften/anzeige/index_html?band=09-mon/1876&amp;seite:int=647" rel="nofollow noreferrer">http://bibliothek.bbaw.de/bibliothek-digital/digitalequellen/schriften/anzeige/index_html?band=09-mon/1876&amp;seite:int=647</a>.</p> <p>However, this article itself uses work already done in a longer, older article where most of the hard computations are done (referred to as Mem. Acad. Berlin, 1878, pp. 33–96), also fully available online at <a href="http://bibliothek.bbaw.de/bbaw/bibliothek-digital/digitalequellen/schriften/anzeige/index_html?band=07-abh/1878&amp;seite:int=93" rel="nofollow noreferrer">http://bibliothek.bbaw.de/bbaw/bibliothek-digital/digitalequellen/schriften/anzeige/index_html?band=07-abh/1878&amp;seite:int=93</a>.</p> <p>Your square-root of a negative number problem can be solved very simply by starting from $1$ (or any nonzero index) instead of $0$. 
Because of Borchardt's identity $$ a_{n+1}+\varepsilon b_{n+1}+ \varepsilon' c_{n+1}+\varepsilon \varepsilon'd_{n+1}=\frac{(\sqrt{a_n}+\varepsilon \sqrt{b_n}+ \varepsilon'\sqrt{c_n}+\varepsilon \varepsilon'\sqrt{d_n})^2}{4} \tag{1} $$</p> <p>for any two signs $\varepsilon,\varepsilon'$ we have for $n\geq 1$, $$a_{n+1}+\varepsilon b_{n+1}+ \varepsilon ' c_{n+1}+\varepsilon \varepsilon 'd_{n+1} \geq 0 \tag{2}$$ </p> <p>So if you use any $a_1,b_1,c_1,d_1$ instead of $a_0,b_0,c_0,d_0$ to define the $B,C,\alpha$ constants, you will get only real and positive numbers.</p>
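As a sanity check on inequality (2): one step of the four-term mean iteration from arbitrary positive starting values already produces values satisfying all four sign combinations, in agreement with the square identity (1). A minimal sketch, written in Python for brevity (the implementation elsewhere on this page is in R, but the iteration step is the same):

```python
import itertools
import math
import random

def borchardt_step(a, b, c, d):
    """One step of the four-term arithmetic-geometric mean iteration."""
    return ((a + b + c + d) / 4,
            (math.sqrt(a * b) + math.sqrt(c * d)) / 2,
            (math.sqrt(a * c) + math.sqrt(b * d)) / 2,
            (math.sqrt(a * d) + math.sqrt(b * c)) / 2)

random.seed(0)
a0, b0, c0, d0 = (random.uniform(0.1, 10.0) for _ in range(4))
a1, b1, c1, d1 = borchardt_step(a0, b0, c0, d0)

# Identity (1): a1 + e*b1 + e'*c1 + e*e'*d1 equals the square of
# sqrt(a0) + e*sqrt(b0) + e'*sqrt(c0) + e*e'*sqrt(d0), divided by 4,
# hence is >= 0 for every choice of signs e, e' -- inequality (2).
for e, ep in itertools.product((1, -1), repeat=2):
    lhs = a1 + e * b1 + ep * c1 + e * ep * d1
    rhs = (math.sqrt(a0) + e * math.sqrt(b0)
           + ep * math.sqrt(c0) + e * ep * math.sqrt(d0)) ** 2 / 4
    assert lhs >= -1e-12 and abs(lhs - rhs) < 1e-9
```

So after one step, all the square roots taken in subsequent steps are of non-negative quantities, which is the point of the answer.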
Convergence of harmonic series multiplied by partial sums of harmonic series
2,079,099
8
https://math.stackexchange.com/questions/2079099/convergence-of-harmonic-series-multiplied-by-partial-sums-of-harmonic-series
<p>How can I show whether the following series is convergent or divergent? $$\sum_{n=1}^\infty \frac1n \left ( 1 + \frac12 + \frac13 + \cdots + \frac1n \right) = \sum_{n = 1}^\infty\frac{1}{n}\sum_{k = 1}^n\frac{1}{k}$$ If it were a sequence, I could easily have used Cauchy's first theorem.</p>
[ { "answer_id": 2079115, "score": 23, "text": "<p>Comparison:\n$$\n\\sum_{n=1}^\\infty \\frac1n \\left( 1 + \\frac12 + \\frac13 + \\cdots + \\frac1n \\right) \\ge \\sum_{n=1}^\\infty \\frac 1 n = \\infty.\n$$</p>\n" }, { "answer_id": 2079109, "score": 6, "text": "<p>The series diverges. You have that $$1+1/2+\\cdots \\geq 1$$ and thus you only have to look at $$\\sum_{n=1}^\\infty n^{-1}$$ which is a lower bound to your series. The latter series is the harmonic series that is well-known to diverge.</p>\n" }, { "answer_id": 2079153, "score": 6, "text": "<p>Admitting that you can use harmonic number, let us consider $$S_p= \\sum_{n = 1}^p\\frac{1}{n}\\sum_{k = 1}^n\\frac{1}{k}=\\sum_{n = 1}^p\\frac{H_n}{n}=\\frac{\\left(H_p\\right){}^2}{2}-\\frac{H_p^{(2)}}{2}-\\psi ^{(1)}(p+1)+\\frac{\\pi ^2}{6}$$ Considering the asymptotics $$H_p=\\gamma +\\log \\left(p\\right)+\\frac{1}{2 p}-\\frac{1}{12\n p^2}+O\\left(\\frac{1}{p^4}\\right)$$ $$H_p^{(2)}=\\frac{\\pi ^2}{6}-\\frac{1}{p}+\\frac{1}{2 p^2}-\\frac{1}{6\n p^3}+O\\left(\\frac{1}{p^4}\\right)$$ $$\\psi ^{(1)}(p+1)=\\frac{1}{p}-\\frac{1}{2 p^2}+\\frac{1}{6 p^3}+O\\left(\\frac{1}{p^4}\\right)$$ you should end with $$S_p=\\frac{\\pi ^2}{12}+\\frac{\\gamma ^2}{2}+\\gamma \\log\n \\left(p\\right)+\\frac{1}{2} \\log ^2\\left(p\\right)+\\frac{\\log \\left({p}\\right)+\\gamma -1}{2 p}+O\\left(\\frac{1}{p^2}\\right)$$ which shows the result.</p>\n\n<p>Computing exactly for $p=10$, we have $$S_{10}=\\frac{32160403}{6350400}\\approx 5.06431$$ while the above expression gives $ \\approx 5.06308$.</p>\n" }, { "answer_id": 2079105, "score": 2, "text": "<p>EDIT: I answered the question as it was originally stated, but I think you intended to ask a different question from the one you actually asked.</p>\n\n<p>This is the mean of $S_n = \\{1, \\frac{1}{2}, \\dots, \\frac{1}{n}\\}$. 
That's clearly bounded by $0$ and $1$, and also is clearly decreasing because to move from $S_n$ to $S_{n+1}$ we don't remove anything from $S_n$ but we add a smaller thing (namely $\\frac{1}{n+1}$).</p>\n" } ]
gives $ \\approx 5.06308$.</p>\n" }, { "answer_id": 2079105, "score": 2, "text": "<p>EDIT: I answered the question as it was originally stated, but I think you intended to ask a different question from the one you actually asked.</p>\n\n<p>This is the mean of $S_n = \\{1, \\frac{1}{2}, \\dots, \\frac{1}{n}\\}$. That's clearly bounded by $0$ and $1$, and also is clearly decreasing because to move from $S_n$ to $S_{n+1}$ we don't remove anything from $S_n$ but we add a smaller thing (namely $\\frac{1}{n+1}$).</p>\n" }, { "answer_id": 2079115, "score": 23, "text": "<p>Comparison:\n$$\n\\sum_{n=1}^\\infty \\frac1n \\left( 1 + \\frac12 + \\frac13 + \\cdots + \\frac1n \\right) \\ge \\sum_{n=1}^\\infty \\frac 1 n = \\infty.\n$$</p>\n" }, { "answer_id": 2079109, "score": 6, "text": "<p>The series diverges. You have that $$1+1/2+\\cdots \\geq 1$$ and thus you only have to look at $$\\sum_{n=1}^\\infty n^{-1}$$ which is a lower bound to your series. The latter series is the harmonic series that is well-known to diverge.</p>\n" }, { "answer_id": 2079153, "score": 6, "text": "<p>Admitting that you can use harmonic number, let us consider $$S_p= \\sum_{n = 1}^p\\frac{1}{n}\\sum_{k = 1}^n\\frac{1}{k}=\\sum_{n = 1}^p\\frac{H_n}{n}=\\frac{\\left(H_p\\right){}^2}{2}-\\frac{H_p^{(2)}}{2}-\\psi ^{(1)}(p+1)+\\frac{\\pi ^2}{6}$$ Considering the asymptotics $$H_p=\\gamma +\\log \\left(p\\right)+\\frac{1}{2 p}-\\frac{1}{12\n p^2}+O\\left(\\frac{1}{p^4}\\right)$$ $$H_p^{(2)}=\\frac{\\pi ^2}{6}-\\frac{1}{p}+\\frac{1}{2 p^2}-\\frac{1}{6\n p^3}+O\\left(\\frac{1}{p^4}\\right)$$ $$\\psi ^{(1)}(p+1)=\\frac{1}{p}-\\frac{1}{2 p^2}+\\frac{1}{6 p^3}+O\\left(\\frac{1}{p^4}\\right)$$ you should end with $$S_p=\\frac{\\pi ^2}{12}+\\frac{\\gamma ^2}{2}+\\gamma \\log\n \\left(p\\right)+\\frac{1}{2} \\log ^2\\left(p\\right)+\\frac{\\log \\left({p}\\right)+\\gamma -1}{2 p}+O\\left(\\frac{1}{p^2}\\right)$$ which shows the result.</p>\n\n<p>Computing exactly for $p=10$, we have 
$$S_{10}=\\frac{32160403}{6350400}\\approx 5.06431$$ while the above expression gives $ \\approx 5.06308$.</p>\n" }, { "answer_id": 2079105, "score": 2, "text": "<p>EDIT: I answered the question as it was originally stated, but I think you intended to ask a different question from the one you actually asked.</p>\n\n<p>This is the mean of $S_n = \\{1, \\frac{1}{2}, \\dots, \\frac{1}{n}\\}$. That's clearly bounded by $0$ and $1$, and also is clearly decreasing because to move from $S_n$ to $S_{n+1}$ we don't remove anything from $S_n$ but we add a smaller thing (namely $\\frac{1}{n+1}$).</p>\n" } ]
Question: Convergence of harmonic series multiplied by partial sums of harmonic series Answer 1: <p>Comparison: $$ \sum_{n=1}^\infty \frac1n \left( 1 + \frac12 + \frac13 + \cdots + \frac1n \right) \ge \sum_{n=1}^\infty \frac 1 n = \infty. $$</p> Answer 2: <p>The series diverges. You have that $$1+1/2+\cdots \geq 1$$ and thus you only have to look at $$\sum_{n=1}^\infty n^{-1}$$ which is a lower bound to your series. The latter series is the harmonic series that is well-known to diverge.</p> Answer 3: <p>Admitting that you can use harmonic number, let us consider $$S_p= \sum_{n = 1}^p\frac{1}{n}\sum_{k = 1}^n\frac{1}{k}=\sum_{n = 1}^p\frac{H_n}{n}=\frac{\left(H_p\right){}^2}{2}-\frac{H_p^{(2)}}{2}-\psi ^{(1)}(p+1)+\frac{\pi ^2}{6}$$ Considering the asymptotics $$H_p=\gamma +\log \left(p\right)+\frac{1}{2 p}-\frac{1}{12 p^2}+O\left(\frac{1}{p^4}\right)$$ $$H_p^{(2)}=\frac{\pi ^2}{6}-\frac{1}{p}+\frac{1}{2 p^2}-\frac{1}{6 p^3}+O\left(\frac{1}{p^4}\right)$$ $$\psi ^{(1)}(p+1)=\frac{1}{p}-\frac{1}{2 p^2}+\frac{1}{6 p^3}+O\left(\frac{1}{p^4}\right)$$ you should end with $$S_p=\frac{\pi ^2}{12}+\frac{\gamma ^2}{2}+\gamma \log \left(p\right)+\frac{1}{2} \log ^2\left(p\right)+\frac{\log \left({p}\right)+\gamma -1}{2 p}+O\left(\frac{1}{p^2}\right)$$ which shows the result.</p> <p>Computing exactly for $p=10$, we have $$S_{10}=\frac{32160403}{6350400}\approx 5.06431$$ while the above expression gives $ \approx 5.06308$.</p> Answer 4: <p>EDIT: I answered the question as it was originally stated, but I think you intended to ask a different question from the one you actually asked.</p> <p>This is the mean of $S_n = \{1, \frac{1}{2}, \dots, \frac{1}{n}\}$. 
That's clearly bounded by $0$ and $1$, and also is clearly decreasing because to move from $S_n$ to $S_{n+1}$ we don't remove anything from $S_n$ but we add a smaller thing (namely $\frac{1}{n+1}$).</p> Answer 5: <p>Comparison: $$ \sum_{n=1}^\infty \frac1n \left( 1 + \frac12 + \frac13 + \cdots + \frac1n \right) \ge \sum_{n=1}^\infty \frac 1 n = \infty. $$</p> Answer 6: <p>The series diverges. You have that $$1+1/2+\cdots \geq 1$$ and thus you only have to look at $$\sum_{n=1}^\infty n^{-1}$$ which is a lower bound to your series. The latter series is the harmonic series that is well-known to diverge.</p> Answer 7: <p>Admitting that you can use harmonic number, let us consider $$S_p= \sum_{n = 1}^p\frac{1}{n}\sum_{k = 1}^n\frac{1}{k}=\sum_{n = 1}^p\frac{H_n}{n}=\frac{\left(H_p\right){}^2}{2}-\frac{H_p^{(2)}}{2}-\psi ^{(1)}(p+1)+\frac{\pi ^2}{6}$$ Considering the asymptotics $$H_p=\gamma +\log \left(p\right)+\frac{1}{2 p}-\frac{1}{12 p^2}+O\left(\frac{1}{p^4}\right)$$ $$H_p^{(2)}=\frac{\pi ^2}{6}-\frac{1}{p}+\frac{1}{2 p^2}-\frac{1}{6 p^3}+O\left(\frac{1}{p^4}\right)$$ $$\psi ^{(1)}(p+1)=\frac{1}{p}-\frac{1}{2 p^2}+\frac{1}{6 p^3}+O\left(\frac{1}{p^4}\right)$$ you should end with $$S_p=\frac{\pi ^2}{12}+\frac{\gamma ^2}{2}+\gamma \log \left(p\right)+\frac{1}{2} \log ^2\left(p\right)+\frac{\log \left({p}\right)+\gamma -1}{2 p}+O\left(\frac{1}{p^2}\right)$$ which shows the result.</p> <p>Computing exactly for $p=10$, we have $$S_{10}=\frac{32160403}{6350400}\approx 5.06431$$ while the above expression gives $ \approx 5.06308$.</p> Answer 8: <p>EDIT: I answered the question as it was originally stated, but I think you intended to ask a different question from the one you actually asked.</p> <p>This is the mean of $S_n = \{1, \frac{1}{2}, \dots, \frac{1}{n}\}$. 
That's clearly bounded by $0$ and $1$, and also is clearly decreasing because to move from $S_n$ to $S_{n+1}$ we don't remove anything from $S_n$ but we add a smaller thing (namely $\frac{1}{n+1}$).</p> Answer 9: <p>Comparison: $$ \sum_{n=1}^\infty \frac1n \left( 1 + \frac12 + \frac13 + \cdots + \frac1n \right) \ge \sum_{n=1}^\infty \frac 1 n = \infty. $$</p> Answer 10: <p>The series diverges. You have that $$1+1/2+\cdots \geq 1$$ and thus you only have to look at $$\sum_{n=1}^\infty n^{-1}$$ which is a lower bound to your series. The latter series is the harmonic series that is well-known to diverge.</p> Answer 11: <p>Admitting that you can use harmonic number, let us consider $$S_p= \sum_{n = 1}^p\frac{1}{n}\sum_{k = 1}^n\frac{1}{k}=\sum_{n = 1}^p\frac{H_n}{n}=\frac{\left(H_p\right){}^2}{2}-\frac{H_p^{(2)}}{2}-\psi ^{(1)}(p+1)+\frac{\pi ^2}{6}$$ Considering the asymptotics $$H_p=\gamma +\log \left(p\right)+\frac{1}{2 p}-\frac{1}{12 p^2}+O\left(\frac{1}{p^4}\right)$$ $$H_p^{(2)}=\frac{\pi ^2}{6}-\frac{1}{p}+\frac{1}{2 p^2}-\frac{1}{6 p^3}+O\left(\frac{1}{p^4}\right)$$ $$\psi ^{(1)}(p+1)=\frac{1}{p}-\frac{1}{2 p^2}+\frac{1}{6 p^3}+O\left(\frac{1}{p^4}\right)$$ you should end with $$S_p=\frac{\pi ^2}{12}+\frac{\gamma ^2}{2}+\gamma \log \left(p\right)+\frac{1}{2} \log ^2\left(p\right)+\frac{\log \left({p}\right)+\gamma -1}{2 p}+O\left(\frac{1}{p^2}\right)$$ which shows the result.</p> <p>Computing exactly for $p=10$, we have $$S_{10}=\frac{32160403}{6350400}\approx 5.06431$$ while the above expression gives $ \approx 5.06308$.</p> Answer 12: <p>EDIT: I answered the question as it was originally stated, but I think you intended to ask a different question from the one you actually asked.</p> <p>This is the mean of $S_n = \{1, \frac{1}{2}, \dots, \frac{1}{n}\}$. 
That's clearly bounded by $0$ and $1$, and also is clearly decreasing because to move from $S_n$ to $S_{n+1}$ we don't remove anything from $S_n$ but we add a smaller thing (namely $\frac{1}{n+1}$).</p> Answer 13: <p>Comparison: $$ \sum_{n=1}^\infty \frac1n \left( 1 + \frac12 + \frac13 + \cdots + \frac1n \right) \ge \sum_{n=1}^\infty \frac 1 n = \infty. $$</p> Answer 14: <p>The series diverges. You have that $$1+1/2+\cdots \geq 1$$ and thus you only have to look at $$\sum_{n=1}^\infty n^{-1}$$ which is a lower bound to your series. The latter series is the harmonic series that is well-known to diverge.</p> Answer 15: <p>Admitting that you can use harmonic number, let us consider $$S_p= \sum_{n = 1}^p\frac{1}{n}\sum_{k = 1}^n\frac{1}{k}=\sum_{n = 1}^p\frac{H_n}{n}=\frac{\left(H_p\right){}^2}{2}-\frac{H_p^{(2)}}{2}-\psi ^{(1)}(p+1)+\frac{\pi ^2}{6}$$ Considering the asymptotics $$H_p=\gamma +\log \left(p\right)+\frac{1}{2 p}-\frac{1}{12 p^2}+O\left(\frac{1}{p^4}\right)$$ $$H_p^{(2)}=\frac{\pi ^2}{6}-\frac{1}{p}+\frac{1}{2 p^2}-\frac{1}{6 p^3}+O\left(\frac{1}{p^4}\right)$$ $$\psi ^{(1)}(p+1)=\frac{1}{p}-\frac{1}{2 p^2}+\frac{1}{6 p^3}+O\left(\frac{1}{p^4}\right)$$ you should end with $$S_p=\frac{\pi ^2}{12}+\frac{\gamma ^2}{2}+\gamma \log \left(p\right)+\frac{1}{2} \log ^2\left(p\right)+\frac{\log \left({p}\right)+\gamma -1}{2 p}+O\left(\frac{1}{p^2}\right)$$ which shows the result.</p> <p>Computing exactly for $p=10$, we have $$S_{10}=\frac{32160403}{6350400}\approx 5.06431$$ while the above expression gives $ \approx 5.06308$.</p> Answer 16: <p>EDIT: I answered the question as it was originally stated, but I think you intended to ask a different question from the one you actually asked.</p> <p>This is the mean of $S_n = \{1, \frac{1}{2}, \dots, \frac{1}{n}\}$. 
That's clearly bounded by $0$ and $1$, and also is clearly decreasing because to move from $S_n$ to $S_{n+1}$ we don't remove anything from $S_n$ but we add a smaller thing (namely $\frac{1}{n+1}$).</p> Answer 17: <p>Comparison: $$ \sum_{n=1}^\infty \frac1n \left( 1 + \frac12 + \frac13 + \cdots + \frac1n \right) \ge \sum_{n=1}^\infty \frac 1 n = \infty. $$</p> Answer 18: <p>The series diverges. You have that $$1+1/2+\cdots \geq 1$$ and thus you only have to look at $$\sum_{n=1}^\infty n^{-1}$$ which is a lower bound to your series. The latter series is the harmonic series that is well-known to diverge.</p> Answer 19: <p>Admitting that you can use harmonic number, let us consider $$S_p= \sum_{n = 1}^p\frac{1}{n}\sum_{k = 1}^n\frac{1}{k}=\sum_{n = 1}^p\frac{H_n}{n}=\frac{\left(H_p\right){}^2}{2}-\frac{H_p^{(2)}}{2}-\psi ^{(1)}(p+1)+\frac{\pi ^2}{6}$$ Considering the asymptotics $$H_p=\gamma +\log \left(p\right)+\frac{1}{2 p}-\frac{1}{12 p^2}+O\left(\frac{1}{p^4}\right)$$ $$H_p^{(2)}=\frac{\pi ^2}{6}-\frac{1}{p}+\frac{1}{2 p^2}-\frac{1}{6 p^3}+O\left(\frac{1}{p^4}\right)$$ $$\psi ^{(1)}(p+1)=\frac{1}{p}-\frac{1}{2 p^2}+\frac{1}{6 p^3}+O\left(\frac{1}{p^4}\right)$$ you should end with $$S_p=\frac{\pi ^2}{12}+\frac{\gamma ^2}{2}+\gamma \log \left(p\right)+\frac{1}{2} \log ^2\left(p\right)+\frac{\log \left({p}\right)+\gamma -1}{2 p}+O\left(\frac{1}{p^2}\right)$$ which shows the result.</p> <p>Computing exactly for $p=10$, we have $$S_{10}=\frac{32160403}{6350400}\approx 5.06431$$ while the above expression gives $ \approx 5.06308$.</p> Answer 20: <p>EDIT: I answered the question as it was originally stated, but I think you intended to ask a different question from the one you actually asked.</p> <p>This is the mean of $S_n = \{1, \frac{1}{2}, \dots, \frac{1}{n}\}$. 
That's clearly bounded by $0$ and $1$, and also is clearly decreasing because to move from $S_n$ to $S_{n+1}$ we don't remove anything from $S_n$ but we add a smaller thing (namely $\frac{1}{n+1}$).</p> Answer 21: <p>Comparison: $$ \sum_{n=1}^\infty \frac1n \left( 1 + \frac12 + \frac13 + \cdots + \frac1n \right) \ge \sum_{n=1}^\infty \frac 1 n = \infty. $$</p> Answer 22: <p>The series diverges. You have that $$1+1/2+\cdots \geq 1$$ and thus you only have to look at $$\sum_{n=1}^\infty n^{-1}$$ which is a lower bound to your series. The latter series is the harmonic series that is well-known to diverge.</p> Answer 23: <p>Admitting that you can use harmonic number, let us consider $$S_p= \sum_{n = 1}^p\frac{1}{n}\sum_{k = 1}^n\frac{1}{k}=\sum_{n = 1}^p\frac{H_n}{n}=\frac{\left(H_p\right){}^2}{2}-\frac{H_p^{(2)}}{2}-\psi ^{(1)}(p+1)+\frac{\pi ^2}{6}$$ Considering the asymptotics $$H_p=\gamma +\log \left(p\right)+\frac{1}{2 p}-\frac{1}{12 p^2}+O\left(\frac{1}{p^4}\right)$$ $$H_p^{(2)}=\frac{\pi ^2}{6}-\frac{1}{p}+\frac{1}{2 p^2}-\frac{1}{6 p^3}+O\left(\frac{1}{p^4}\right)$$ $$\psi ^{(1)}(p+1)=\frac{1}{p}-\frac{1}{2 p^2}+\frac{1}{6 p^3}+O\left(\frac{1}{p^4}\right)$$ you should end with $$S_p=\frac{\pi ^2}{12}+\frac{\gamma ^2}{2}+\gamma \log \left(p\right)+\frac{1}{2} \log ^2\left(p\right)+\frac{\log \left({p}\right)+\gamma -1}{2 p}+O\left(\frac{1}{p^2}\right)$$ which shows the result.</p> <p>Computing exactly for $p=10$, we have $$S_{10}=\frac{32160403}{6350400}\approx 5.06431$$ while the above expression gives $ \approx 5.06308$.</p> Answer 24: <p>EDIT: I answered the question as it was originally stated, but I think you intended to ask a different question from the one you actually asked.</p> <p>This is the mean of $S_n = \{1, \frac{1}{2}, \dots, \frac{1}{n}\}$. 
That's clearly bounded by $0$ and $1$, and also is clearly decreasing because to move from $S_n$ to $S_{n+1}$ we don't remove anything from $S_n$ but we add a smaller thing (namely $\frac{1}{n+1}$).</p> Answer 25: <p>Comparison: $$ \sum_{n=1}^\infty \frac1n \left( 1 + \frac12 + \frac13 + \cdots + \frac1n \right) \ge \sum_{n=1}^\infty \frac 1 n = \infty. $$</p> Answer 26: <p>The series diverges. You have that $$1+1/2+\cdots \geq 1$$ and thus you only have to look at $$\sum_{n=1}^\infty n^{-1}$$ which is a lower bound to your series. The latter series is the harmonic series that is well-known to diverge.</p> Answer 27: <p>Admitting that you can use harmonic number, let us consider $$S_p= \sum_{n = 1}^p\frac{1}{n}\sum_{k = 1}^n\frac{1}{k}=\sum_{n = 1}^p\frac{H_n}{n}=\frac{\left(H_p\right){}^2}{2}-\frac{H_p^{(2)}}{2}-\psi ^{(1)}(p+1)+\frac{\pi ^2}{6}$$ Considering the asymptotics $$H_p=\gamma +\log \left(p\right)+\frac{1}{2 p}-\frac{1}{12 p^2}+O\left(\frac{1}{p^4}\right)$$ $$H_p^{(2)}=\frac{\pi ^2}{6}-\frac{1}{p}+\frac{1}{2 p^2}-\frac{1}{6 p^3}+O\left(\frac{1}{p^4}\right)$$ $$\psi ^{(1)}(p+1)=\frac{1}{p}-\frac{1}{2 p^2}+\frac{1}{6 p^3}+O\left(\frac{1}{p^4}\right)$$ you should end with $$S_p=\frac{\pi ^2}{12}+\frac{\gamma ^2}{2}+\gamma \log \left(p\right)+\frac{1}{2} \log ^2\left(p\right)+\frac{\log \left({p}\right)+\gamma -1}{2 p}+O\left(\frac{1}{p^2}\right)$$ which shows the result.</p> <p>Computing exactly for $p=10$, we have $$S_{10}=\frac{32160403}{6350400}\approx 5.06431$$ while the above expression gives $ \approx 5.06308$.</p> Answer 28: <p>EDIT: I answered the question as it was originally stated, but I think you intended to ask a different question from the one you actually asked.</p> <p>This is the mean of $S_n = \{1, \frac{1}{2}, \dots, \frac{1}{n}\}$. 
That's clearly bounded by $0$ and $1$, and also is clearly decreasing because to move from $S_n$ to $S_{n+1}$ we don't remove anything from $S_n$ but we add a smaller thing (namely $\frac{1}{n+1}$).</p> Answer 29: <p>Comparison: $$ \sum_{n=1}^\infty \frac1n \left( 1 + \frac12 + \frac13 + \cdots + \frac1n \right) \ge \sum_{n=1}^\infty \frac 1 n = \infty. $$</p> Answer 30: <p>The series diverges. You have that $$1+1/2+\cdots \geq 1$$ and thus you only have to look at $$\sum_{n=1}^\infty n^{-1}$$ which is a lower bound to your series. The latter series is the harmonic series that is well-known to diverge.</p> Answer 31: <p>Admitting that you can use harmonic number, let us consider $$S_p= \sum_{n = 1}^p\frac{1}{n}\sum_{k = 1}^n\frac{1}{k}=\sum_{n = 1}^p\frac{H_n}{n}=\frac{\left(H_p\right){}^2}{2}-\frac{H_p^{(2)}}{2}-\psi ^{(1)}(p+1)+\frac{\pi ^2}{6}$$ Considering the asymptotics $$H_p=\gamma +\log \left(p\right)+\frac{1}{2 p}-\frac{1}{12 p^2}+O\left(\frac{1}{p^4}\right)$$ $$H_p^{(2)}=\frac{\pi ^2}{6}-\frac{1}{p}+\frac{1}{2 p^2}-\frac{1}{6 p^3}+O\left(\frac{1}{p^4}\right)$$ $$\psi ^{(1)}(p+1)=\frac{1}{p}-\frac{1}{2 p^2}+\frac{1}{6 p^3}+O\left(\frac{1}{p^4}\right)$$ you should end with $$S_p=\frac{\pi ^2}{12}+\frac{\gamma ^2}{2}+\gamma \log \left(p\right)+\frac{1}{2} \log ^2\left(p\right)+\frac{\log \left({p}\right)+\gamma -1}{2 p}+O\left(\frac{1}{p^2}\right)$$ which shows the result.</p> <p>Computing exactly for $p=10$, we have $$S_{10}=\frac{32160403}{6350400}\approx 5.06431$$ while the above expression gives $ \approx 5.06308$.</p> Answer 32: <p>EDIT: I answered the question as it was originally stated, but I think you intended to ask a different question from the one you actually asked.</p> <p>This is the mean of $S_n = \{1, \frac{1}{2}, \dots, \frac{1}{n}\}$. 
That's clearly bounded by $0$ and $1$, and also is clearly decreasing because to move from $S_n$ to $S_{n+1}$ we don't remove anything from $S_n$ but we add a smaller thing (namely $\frac{1}{n+1}$).</p> Answer 33: <p>Comparison: $$ \sum_{n=1}^\infty \frac1n \left( 1 + \frac12 + \frac13 + \cdots + \frac1n \right) \ge \sum_{n=1}^\infty \frac 1 n = \infty. $$</p> Answer 34: <p>The series diverges. You have that $$1+1/2+\cdots \geq 1$$ and thus you only have to look at $$\sum_{n=1}^\infty n^{-1}$$ which is a lower bound to your series. The latter series is the harmonic series that is well-known to diverge.</p> Answer 35: <p>Admitting that you can use harmonic number, let us consider $$S_p= \sum_{n = 1}^p\frac{1}{n}\sum_{k = 1}^n\frac{1}{k}=\sum_{n = 1}^p\frac{H_n}{n}=\frac{\left(H_p\right){}^2}{2}-\frac{H_p^{(2)}}{2}-\psi ^{(1)}(p+1)+\frac{\pi ^2}{6}$$ Considering the asymptotics $$H_p=\gamma +\log \left(p\right)+\frac{1}{2 p}-\frac{1}{12 p^2}+O\left(\frac{1}{p^4}\right)$$ $$H_p^{(2)}=\frac{\pi ^2}{6}-\frac{1}{p}+\frac{1}{2 p^2}-\frac{1}{6 p^3}+O\left(\frac{1}{p^4}\right)$$ $$\psi ^{(1)}(p+1)=\frac{1}{p}-\frac{1}{2 p^2}+\frac{1}{6 p^3}+O\left(\frac{1}{p^4}\right)$$ you should end with $$S_p=\frac{\pi ^2}{12}+\frac{\gamma ^2}{2}+\gamma \log \left(p\right)+\frac{1}{2} \log ^2\left(p\right)+\frac{\log \left({p}\right)+\gamma -1}{2 p}+O\left(\frac{1}{p^2}\right)$$ which shows the result.</p> <p>Computing exactly for $p=10$, we have $$S_{10}=\frac{32160403}{6350400}\approx 5.06431$$ while the above expression gives $ \approx 5.06308$.</p> Answer 36: <p>EDIT: I answered the question as it was originally stated, but I think you intended to ask a different question from the one you actually asked.</p> <p>This is the mean of $S_n = \{1, \frac{1}{2}, \dots, \frac{1}{n}\}$. 
That's clearly bounded by $0$ and $1$, and also is clearly decreasing because to move from $S_n$ to $S_{n+1}$ we don't remove anything from $S_n$ but we add a smaller thing (namely $\frac{1}{n+1}$).</p> Answer 37: <p>Comparison: $$ \sum_{n=1}^\infty \frac1n \left( 1 + \frac12 + \frac13 + \cdots + \frac1n \right) \ge \sum_{n=1}^\infty \frac 1 n = \infty. $$</p> Answer 38: <p>The series diverges. You have that $$1+1/2+\cdots \geq 1$$ and thus you only have to look at $$\sum_{n=1}^\infty n^{-1}$$ which is a lower bound to your series. The latter series is the harmonic series that is well-known to diverge.</p> Answer 39: <p>Admitting that you can use harmonic number, let us consider $$S_p= \sum_{n = 1}^p\frac{1}{n}\sum_{k = 1}^n\frac{1}{k}=\sum_{n = 1}^p\frac{H_n}{n}=\frac{\left(H_p\right){}^2}{2}-\frac{H_p^{(2)}}{2}-\psi ^{(1)}(p+1)+\frac{\pi ^2}{6}$$ Considering the asymptotics $$H_p=\gamma +\log \left(p\right)+\frac{1}{2 p}-\frac{1}{12 p^2}+O\left(\frac{1}{p^4}\right)$$ $$H_p^{(2)}=\frac{\pi ^2}{6}-\frac{1}{p}+\frac{1}{2 p^2}-\frac{1}{6 p^3}+O\left(\frac{1}{p^4}\right)$$ $$\psi ^{(1)}(p+1)=\frac{1}{p}-\frac{1}{2 p^2}+\frac{1}{6 p^3}+O\left(\frac{1}{p^4}\right)$$ you should end with $$S_p=\frac{\pi ^2}{12}+\frac{\gamma ^2}{2}+\gamma \log \left(p\right)+\frac{1}{2} \log ^2\left(p\right)+\frac{\log \left({p}\right)+\gamma -1}{2 p}+O\left(\frac{1}{p^2}\right)$$ which shows the result.</p> <p>Computing exactly for $p=10$, we have $$S_{10}=\frac{32160403}{6350400}\approx 5.06431$$ while the above expression gives $ \approx 5.06308$.</p> Answer 40: <p>EDIT: I answered the question as it was originally stated, but I think you intended to ask a different question from the one you actually asked.</p> <p>This is the mean of $S_n = \{1, \frac{1}{2}, \dots, \frac{1}{n}\}$. 
That's clearly bounded by $0$ and $1$, and also is clearly decreasing because to move from $S_n$ to $S_{n+1}$ we don't remove anything from $S_n$ but we add a smaller thing (namely $\frac{1}{n+1}$).</p> Answer 41: <p>Comparison: $$ \sum_{n=1}^\infty \frac1n \left( 1 + \frac12 + \frac13 + \cdots + \frac1n \right) \ge \sum_{n=1}^\infty \frac 1 n = \infty. $$</p> Answer 42: <p>The series diverges. You have that $$1+1/2+\cdots \geq 1$$ and thus you only have to look at $$\sum_{n=1}^\infty n^{-1}$$ which is a lower bound to your series. The latter series is the harmonic series that is well-known to diverge.</p> Answer 43: <p>Admitting that you can use harmonic number, let us consider $$S_p= \sum_{n = 1}^p\frac{1}{n}\sum_{k = 1}^n\frac{1}{k}=\sum_{n = 1}^p\frac{H_n}{n}=\frac{\left(H_p\right){}^2}{2}-\frac{H_p^{(2)}}{2}-\psi ^{(1)}(p+1)+\frac{\pi ^2}{6}$$ Considering the asymptotics $$H_p=\gamma +\log \left(p\right)+\frac{1}{2 p}-\frac{1}{12 p^2}+O\left(\frac{1}{p^4}\right)$$ $$H_p^{(2)}=\frac{\pi ^2}{6}-\frac{1}{p}+\frac{1}{2 p^2}-\frac{1}{6 p^3}+O\left(\frac{1}{p^4}\right)$$ $$\psi ^{(1)}(p+1)=\frac{1}{p}-\frac{1}{2 p^2}+\frac{1}{6 p^3}+O\left(\frac{1}{p^4}\right)$$ you should end with $$S_p=\frac{\pi ^2}{12}+\frac{\gamma ^2}{2}+\gamma \log \left(p\right)+\frac{1}{2} \log ^2\left(p\right)+\frac{\log \left({p}\right)+\gamma -1}{2 p}+O\left(\frac{1}{p^2}\right)$$ which shows the result.</p> <p>Computing exactly for $p=10$, we have $$S_{10}=\frac{32160403}{6350400}\approx 5.06431$$ while the above expression gives $ \approx 5.06308$.</p> Answer 44: <p>EDIT: I answered the question as it was originally stated, but I think you intended to ask a different question from the one you actually asked.</p> <p>This is the mean of $S_n = \{1, \frac{1}{2}, \dots, \frac{1}{n}\}$. 
That's clearly bounded by $0$ and $1$, and also is clearly decreasing because to move from $S_n$ to $S_{n+1}$ we don't remove anything from $S_n$ but we add a smaller thing (namely $\frac{1}{n+1}$).</p> Answer 45: <p>Comparison: $$ \sum_{n=1}^\infty \frac1n \left( 1 + \frac12 + \frac13 + \cdots + \frac1n \right) \ge \sum_{n=1}^\infty \frac 1 n = \infty. $$</p> Answer 46: <p>The series diverges. You have that $$1+1/2+\cdots \geq 1$$ and thus you only have to look at $$\sum_{n=1}^\infty n^{-1}$$ which is a lower bound to your series. The latter series is the harmonic series that is well-known to diverge.</p> Answer 47: <p>Admitting that you can use harmonic number, let us consider $$S_p= \sum_{n = 1}^p\frac{1}{n}\sum_{k = 1}^n\frac{1}{k}=\sum_{n = 1}^p\frac{H_n}{n}=\frac{\left(H_p\right){}^2}{2}-\frac{H_p^{(2)}}{2}-\psi ^{(1)}(p+1)+\frac{\pi ^2}{6}$$ Considering the asymptotics $$H_p=\gamma +\log \left(p\right)+\frac{1}{2 p}-\frac{1}{12 p^2}+O\left(\frac{1}{p^4}\right)$$ $$H_p^{(2)}=\frac{\pi ^2}{6}-\frac{1}{p}+\frac{1}{2 p^2}-\frac{1}{6 p^3}+O\left(\frac{1}{p^4}\right)$$ $$\psi ^{(1)}(p+1)=\frac{1}{p}-\frac{1}{2 p^2}+\frac{1}{6 p^3}+O\left(\frac{1}{p^4}\right)$$ you should end with $$S_p=\frac{\pi ^2}{12}+\frac{\gamma ^2}{2}+\gamma \log \left(p\right)+\frac{1}{2} \log ^2\left(p\right)+\frac{\log \left({p}\right)+\gamma -1}{2 p}+O\left(\frac{1}{p^2}\right)$$ which shows the result.</p> <p>Computing exactly for $p=10$, we have $$S_{10}=\frac{32160403}{6350400}\approx 5.06431$$ while the above expression gives $ \approx 5.06308$.</p> Answer 48: <p>EDIT: I answered the question as it was originally stated, but I think you intended to ask a different question from the one you actually asked.</p> <p>This is the mean of $S_n = \{1, \frac{1}{2}, \dots, \frac{1}{n}\}$. 
That's clearly bounded by $0$ and $1$, and also is clearly decreasing because to move from $S_n$ to $S_{n+1}$ we don't remove anything from $S_n$ but we add a smaller thing (namely $\frac{1}{n+1}$).</p> Answer 49: <p>Comparison: $$ \sum_{n=1}^\infty \frac1n \left( 1 + \frac12 + \frac13 + \cdots + \frac1n \right) \ge \sum_{n=1}^\infty \frac 1 n = \infty. $$</p> Answer 50: <p>The series diverges. You have that $$1+1/2+\cdots \geq 1$$ and thus you only have to look at $$\sum_{n=1}^\infty n^{-1}$$ which is a lower bound to your series. The latter series is the harmonic series that is well-known to diverge.</p> Answer 51: <p>Admitting that you can use harmonic number, let us consider $$S_p= \sum_{n = 1}^p\frac{1}{n}\sum_{k = 1}^n\frac{1}{k}=\sum_{n = 1}^p\frac{H_n}{n}=\frac{\left(H_p\right){}^2}{2}-\frac{H_p^{(2)}}{2}-\psi ^{(1)}(p+1)+\frac{\pi ^2}{6}$$ Considering the asymptotics $$H_p=\gamma +\log \left(p\right)+\frac{1}{2 p}-\frac{1}{12 p^2}+O\left(\frac{1}{p^4}\right)$$ $$H_p^{(2)}=\frac{\pi ^2}{6}-\frac{1}{p}+\frac{1}{2 p^2}-\frac{1}{6 p^3}+O\left(\frac{1}{p^4}\right)$$ $$\psi ^{(1)}(p+1)=\frac{1}{p}-\frac{1}{2 p^2}+\frac{1}{6 p^3}+O\left(\frac{1}{p^4}\right)$$ you should end with $$S_p=\frac{\pi ^2}{12}+\frac{\gamma ^2}{2}+\gamma \log \left(p\right)+\frac{1}{2} \log ^2\left(p\right)+\frac{\log \left({p}\right)+\gamma -1}{2 p}+O\left(\frac{1}{p^2}\right)$$ which shows the result.</p> <p>Computing exactly for $p=10$, we have $$S_{10}=\frac{32160403}{6350400}\approx 5.06431$$ while the above expression gives $ \approx 5.06308$.</p> Answer 52: <p>EDIT: I answered the question as it was originally stated, but I think you intended to ask a different question from the one you actually asked.</p> <p>This is the mean of $S_n = \{1, \frac{1}{2}, \dots, \frac{1}{n}\}$. 
That's clearly bounded by $0$ and $1$, and also is clearly decreasing because to move from $S_n$ to $S_{n+1}$ we don't remove anything from $S_n$ but we add a smaller thing (namely $\frac{1}{n+1}$).</p> Answer 53: <p>Comparison: $$ \sum_{n=1}^\infty \frac1n \left( 1 + \frac12 + \frac13 + \cdots + \frac1n \right) \ge \sum_{n=1}^\infty \frac 1 n = \infty. $$</p> Answer 54: <p>The series diverges. You have that $$1+1/2+\cdots \geq 1$$ and thus you only have to look at $$\sum_{n=1}^\infty n^{-1}$$ which is a lower bound to your series. The latter series is the harmonic series that is well-known to diverge.</p> Answer 55: <p>Admitting that you can use harmonic number, let us consider $$S_p= \sum_{n = 1}^p\frac{1}{n}\sum_{k = 1}^n\frac{1}{k}=\sum_{n = 1}^p\frac{H_n}{n}=\frac{\left(H_p\right){}^2}{2}-\frac{H_p^{(2)}}{2}-\psi ^{(1)}(p+1)+\frac{\pi ^2}{6}$$ Considering the asymptotics $$H_p=\gamma +\log \left(p\right)+\frac{1}{2 p}-\frac{1}{12 p^2}+O\left(\frac{1}{p^4}\right)$$ $$H_p^{(2)}=\frac{\pi ^2}{6}-\frac{1}{p}+\frac{1}{2 p^2}-\frac{1}{6 p^3}+O\left(\frac{1}{p^4}\right)$$ $$\psi ^{(1)}(p+1)=\frac{1}{p}-\frac{1}{2 p^2}+\frac{1}{6 p^3}+O\left(\frac{1}{p^4}\right)$$ you should end with $$S_p=\frac{\pi ^2}{12}+\frac{\gamma ^2}{2}+\gamma \log \left(p\right)+\frac{1}{2} \log ^2\left(p\right)+\frac{\log \left({p}\right)+\gamma -1}{2 p}+O\left(\frac{1}{p^2}\right)$$ which shows the result.</p> <p>Computing exactly for $p=10$, we have $$S_{10}=\frac{32160403}{6350400}\approx 5.06431$$ while the above expression gives $ \approx 5.06308$.</p> Answer 56: <p>EDIT: I answered the question as it was originally stated, but I think you intended to ask a different question from the one you actually asked.</p> <p>This is the mean of $S_n = \{1, \frac{1}{2}, \dots, \frac{1}{n}\}$. 
That's clearly bounded by $0$ and $1$, and also is clearly decreasing because to move from $S_n$ to $S_{n+1}$ we don't remove anything from $S_n$ but we add a smaller thing (namely $\frac{1}{n+1}$).</p>
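The divergence and the asymptotic expansion quoted in the answers above are easy to sanity-check numerically. The sketch below (function names are mine, not taken from any answer) accumulates the partial sums $S_p=\sum_{n\le p}H_n/n$ directly and compares them with the quoted expansion $S_p\approx\frac{\pi^2}{12}+\frac{\gamma^2}{2}+\gamma\log p+\frac12\log^2 p+\frac{\log p+\gamma-1}{2p}$.

```python
import math

def partial_sum(p):
    """S_p = sum_{n=1}^p H_n / n, with H_n the n-th harmonic number."""
    H, S = 0.0, 0.0
    for n in range(1, p + 1):
        H += 1.0 / n   # running harmonic number H_n
        S += H / n
    return S

def asymptotic(p):
    """Leading terms of the expansion quoted in the answer above."""
    g = 0.5772156649015329  # Euler-Mascheroni constant
    L = math.log(p)
    return (math.pi ** 2 / 12 + g * g / 2 + g * L + L * L / 2
            + (L + g - 1) / (2 * p))
```

For $p=10$ this reproduces the two values given above: the exact partial sum $\approx 5.06431$ and the asymptotic value $\approx 5.06308$, and the gap between the two shrinks as $p$ grows, consistent with the $O(1/p^2)$ error term.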
Integrate $\int x e^{x} \sin x dx$
2,080,664
8
https://math.stackexchange.com/questions/2080664/integrate-int-x-ex-sin-x-dx
<blockquote> <p>Evaluate:</p> <p><span class="math-container">$$\int x e^{x} \sin x dx$$</span></p> </blockquote> <p>Have you ever come across such an integral? I have no idea how to start with the calculation.</p>
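One standard line of attack, which several of the answers below take, is to write $\sin x=\Im\,e^{ix}$ and integrate $xe^{(1+i)x}$ by parts instead. That identity is easy to check numerically with Python's complex arithmetic; this is a sketch with helper names of my own choosing, compared against a plain Simpson quadrature of the real integrand.

```python
import cmath
import math

def antideriv_im(x):
    # Im of the complex antiderivative e^{(1+i)x} * (x/(1+i) - 1/(1+i)^2),
    # which one integration by parts gives for the integrand x*e^{(1+i)x}.
    z = 1 + 1j
    return (cmath.exp(z * x) * (x / z - 1 / z ** 2)).imag

def integrand(x):
    return x * math.exp(x) * math.sin(x)

def simpson(f, a, b, n=200):
    # composite Simpson rule; n must be even
    h = (b - a) / n
    odd = sum(f(a + (2 * k - 1) * h) for k in range(1, n // 2 + 1))
    even = sum(f(a + 2 * k * h) for k in range(1, n // 2))
    return (f(a) + f(b) + 4 * odd + 2 * even) * h / 3
```

Comparing `antideriv_im(1) - antideriv_im(0)` with the quadrature of the real integrand over $[0,1]$ shows the two agree to quadrature accuracy, which is exactly the content of taking imaginary parts.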
[ { "answer_id": 2080678, "score": 10, "text": "<p>If one may recall that $\\sin(x)=\\Im e^{ix}$, then</p>\n\n<p>$$\\int x\\sin(x)e^x\\ dx=\\Im\\int xe^{(1+i)x}\\ dx$$</p>\n\n<p>With a quick integration by parts, we have</p>\n\n<p>$$=\\Im\\left(\\frac1{1+i}xe^{(1+i)x}-\\frac1{1+i}\\int e^{(1+i)x}\\ dx\\right)\\\\=\\Im\\left(\\frac1{1+i}xe^{(1+i)x}-\\frac1{(1+i)^2}e^{(1+i)x}+C\\right)\\\\=\\frac12\\left(x\\sin(x)e^x-x\\cos(x)e^x+\\cos(x)e^x\\right)$$</p>\n" }, { "answer_id": 2080675, "score": 9, "text": "<p>Solution without complex numbers:</p>\n\n<p>Let $I=\\int e^x x\\sin xdx$ Integrating by parts:</p>\n\n<p>$$I=e^x x\\sin x-\\int e^x x\\cos xdx-\\int e^x\\sin xdx$$</p>\n\n<p>Then one more time by parts:</p>\n\n<p>$$\\int e^x x\\cos xdx=e^x x\\cos x + I-\\int e^x\\cos xdx$$</p>\n\n<p>So:</p>\n\n<p>$$2I=e^x x\\sin x-e^x x\\cos x+\\int e^x(\\cos x-\\sin x)dx$$</p>\n\n<p>Now (by parts again or by direct observation):</p>\n\n<p>$$\\int e^x(\\cos x-\\sin x)dx=e^x \\cos x$$</p>\n\n<p>So:</p>\n\n<p>$$I=\\frac{e^x x\\sin x-e^x x\\cos x+e^x \\cos x}{2}$$</p>\n" }, { "answer_id": 2080679, "score": 4, "text": "<p><strong>HINTS:</strong> </p>\n\n<p>Use Euler's Formula to write $$x\\sin(x)e^x=\\text{Im}(xe^{(1+i)x})$$Integrate $\\int xe^{(1+i)x}\\,dx$ by parts with $u=x$ and $v=\\frac{e^{(1+i)x}}{1+i}$ and finish by taking the imaginary part.</p>\n" }, { "answer_id": 2080926, "score": 3, "text": "<p>This solution doesn't use integration by parts. We start with</p>\n\n<p>$$\\int\\exp(x) dx = \\exp(x)$$</p>\n\n<p>Substituting $x = \\lambda t$ yields:</p>\n\n<p>$$\\int\\exp(\\lambda t) dt = \\frac{\\exp(\\lambda t)}{\\lambda}$$</p>\n\n<p>Substitute $\\lambda = 1+\\epsilon + i$ and expand both sides to first order in $\\epsilon$. 
Equating the coefficient of $\\epsilon$ of both sides yields:</p>\n\n<p>$$\\begin{split}\\int t\\exp(t)\\exp(i t) dt &amp;= \\exp(t)\\exp(it)\\left[\\frac{1-i}{2}t + \\frac{i}{2}\\right]\\\\\n&amp; = \\exp(t)\\left[\\frac{\\exp(i(t-\\frac{\\pi}{4}))}{\\sqrt{2}}t + \\frac{\\exp(i(t+\\frac{\\pi}{2}))}{2}\\right]\n\\end{split}\n$$</p>\n\n<p>Finally, take the imaginary part of both sides:</p>\n\n<p>$$\\int t\\exp(t)\\sin(t) dt = \\exp(t)\\left[\\frac{\\sin\\left(t-\\frac{\\pi}{4}\\right)}{\\sqrt{2}}t + \\frac{\\cos(t)}{2}\\right]$$</p>\n" }, { "answer_id": 3069117, "score": 3, "text": "<p>This can also be solved another way (a little longer but correct nevertheless and could be more basic and readable for people who just started learning calculus):</p>\n\n<p>We will use the results of the following (easy - just substitute):</p>\n\n<p><span class=\"math-container\">$$\n∫ e^x \\sin (x) dx = \\frac{e^x \\sin (x) - e^x \\cos (x)}{2} + C\n$$</span></p>\n\n<p><span class=\"math-container\">$$\n∫ e^{\\ln (x)+x}dx = ∫ xe^xdx = xe^x-e^x + C\n$$</span>\nand</p>\n\n<p><span class=\"math-container\">$$\n∫ e^x \\cos (x) dx = \\frac{e^x \\sin (x) + e^x \\cos (x)}{2} + C\n$$</span></p>\n\n<p>Now, back to the main integral. The main problem is that the formula contains not the usual two but three factors (<span class=\"math-container\">$x$</span>, <span class=\"math-container\">$e^x$</span> and <span class=\"math-container\">$\\sin (x)$</span>). 
However, using basic logarithm and <span class=\"math-container\">$\\exp $</span> properties we can transform the expression into one with just two factors:</p>\n\n<p><span class=\"math-container\">$$\nx e^x = e^{\\ln (x)} e^x = e^{\\ln (x) + x}\n$$</span></p>\n\n<p>this leaves us with</p>\n\n<p><span class=\"math-container\">\\begin{equation*}\nI =\n\\int xe^x \\sin (x) dx =\n\\int e^{\\ln (x) + x} \\sin (x) dx =\n\\end{equation*}</span></p>\n\n<p>integrate by parts with substituting:</p>\n\n<p><span class=\"math-container\">$$\n\\begin{equation*}\n\\left[\n \\begin{alignedat}{2}\n u &amp;= \\sin (x) \\quad &amp; du &amp;=\\cos (x) \\\\\n dv &amp;= e^{\\ln (x)+x} \\quad &amp; v &amp;= xe^x-e^x\n \\end{alignedat}\\,\n\\right] \n\\end{equation*}\n$$</span></p>\n\n<p><span class=\"math-container\">$$\n= e^xx\\sin (x)-e^x\\sin (x)-∫ xe^x\\cos (x) -e^x\\cos (x)dx \n$$</span></p>\n\n<p><span class=\"math-container\">$$\n= e^xx\\sin (x)-e^x\\sin (x)-∫ xe^x\\cos (x)dx + ∫ e^x\\cos (x)dx\n$$</span></p>\n\n<p>Now, another three factor integral appears, which can also be integrated by parts using the previous trick.</p>\n\n<p><span class=\"math-container\">$$\n∫ xe^x\\cos (x)dx = \n$$</span>\n<span class=\"math-container\">$$\n\\begin{equation*}\n= ∫ e^{\\ln (x)+x}\\cos (x)dx = \n\\left[\n \\begin{alignedat}{2}\n u &amp;= \\cos (x) \\quad &amp; du &amp;=-\\sin (x) \\\\\n dv &amp;= e^{\\ln (x)+x} \\quad &amp; v &amp;= xe^x-e^x\n \\end{alignedat}\\,\n\\right] =\n\\end{equation*}\n$$</span>\n<span class=\"math-container\">$$\n= xe^x\\cos (x)-e^x\\cos (x)+ ∫ x e^x \\sin (x) dx - ∫ e^x \\sin (x) dx =\n$$</span>\n<span class=\"math-container\">$$\n= xe^x\\cos (x)-e^x\\cos (x)+ I - ∫ e^x \\sin (x) dx \n$$</span></p>\n\n<p>Back to the previous equation, plug in the result above flipping the signs accordingly:</p>\n\n<p><span class=\"math-container\">$$\nI = e^xx\\sin (x)-e^x\\sin (x) + ∫ e^x\\cos (x)dx - xe^x\\cos (x)+e^x\\cos (x)- I + ∫ e^x \\sin (x) dx\n$$</span>\nGreat! 
We obtained the wanted integral with negative sign. Let's move it to the other side of equation:\n<span class=\"math-container\">$$\n2I = e^xx\\sin (x)-e^x\\sin (x) + ∫ e^x\\cos (x)dx - xe^x\\cos (x)+e^x\\cos (x) + ∫ e^x \\sin (x) dx\n$$</span>\nand plug in the results of simpler integrals:\n<span class=\"math-container\">$$\n2I = e^xx\\sin (x)-e^x\\sin (x) + \\frac{e^x \\sin (x) + e^x \\cos (x)}{2} - xe^x\\cos (x)+e^x\\cos (x) + \\frac{e^x \\sin (x) - e^x \\cos (x)}{2} + C\n$$</span>\nnow just simplify the right side and divide by 2 to obtain the final result:\n<span class=\"math-container\">$$\nI = \\frac {xe^x\\sin (x) - xe^x\\cos (x) +e^x\\cos (x)}{2} + C\n$$</span></p>\n" }, { "answer_id": 2080949, "score": 1, "text": "<p><strong>By indeterminate coefficients</strong>:</p>\n\n<p>A term like $xe^x\\sin x$ can be generated by the derivative of itself (due to $e^x$), which will also generate $xe^x\\cos x$ and $e^x\\sin x$.</p>\n\n<p>Then we are tempted to try</p>\n\n<p>$$f(x)=e^x(x(A\\sin x+B\\cos x)+(C\\sin x+D\\cos x)),$$</p>\n\n<p>and</p>\n\n<p>$$f'(x)=e^x(x(A\\sin x+B\\cos x)+(C\\sin x+D\\cos x)+(A\\sin x+B\\cos x)+x(A\\cos x-B\\sin x)+(C\\cos x-D\\sin x)).$$</p>\n\n<p>We identify,</p>\n\n<p>$$A-B=1,\\\\A+B=0,\\\\C+A-D=0,\\\\D+B+C=0$$</p>\n\n<p>and obtain</p>\n\n<p>$$\\frac12xe^x(\\sin x-\\cos x)+\\frac12\\cos x.$$</p>\n" }, { "answer_id": 2080684, "score": 0, "text": "<p>First integrate $\\sin(x) \\exp(x),$ by parts, with the exponential as $d v.$ (you will need to do the integration by parts twice), then use the same method for the original integral.</p>\n" }, { "answer_id": 2080678, "score": 10, "text": "<p>If one may recall that $\\sin(x)=\\Im e^{ix}$, then</p>\n\n<p>$$\\int x\\sin(x)e^x\\ dx=\\Im\\int xe^{(1+i)x}\\ dx$$</p>\n\n<p>With a quick integration by parts, we have</p>\n\n<p>$$=\\Im\\left(\\frac1{1+i}xe^{(1+i)x}-\\frac1{1+i}\\int e^{(1+i)x}\\ 
dx\\right)\\\\=\\Im\\left(\\frac1{1+i}xe^{(1+i)x}-\\frac1{(1+i)^2}e^{(1+i)x}+C\\right)\\\\=\\frac12\\left(x\\sin(x)e^x-x\\cos(x)e^x+\\cos(x)e^x\\right)$$</p>\n" }, { "answer_id": 2080675, "score": 9, "text": "<p>Solution without complex numbers:</p>\n\n<p>Let $I=\\int e^x x\\sin xdx$ Integrating by parts:</p>\n\n<p>$$I=e^x x\\sin x-\\int e^x x\\cos xdx-\\int e^x\\sin xdx$$</p>\n\n<p>Then one more time by parts:</p>\n\n<p>$$\\int e^x x\\cos xdx=e^x x\\cos x + I-\\int e^x\\cos xdx$$</p>\n\n<p>So:</p>\n\n<p>$$2I=e^x x\\sin x-e^x x\\cos x+\\int e^x(\\cos x-\\sin x)dx$$</p>\n\n<p>Now (by parts again or by direct observation):</p>\n\n<p>$$\\int e^x(\\cos x-\\sin x)dx=e^x \\cos x$$</p>\n\n<p>So:</p>\n\n<p>$$I=\\frac{e^x x\\sin x-e^x x\\cos x+e^x \\cos x}{2}$$</p>\n" }, { "answer_id": 2080679, "score": 4, "text": "<p><strong>HINTS:</strong> </p>\n\n<p>Use Euler's Formula to write $$x\\sin(x)e^x=\\text{Im}(xe^{(1+i)x})$$Integrate $\\int xe^{(1+i)x}\\,dx$ by parts with $u=x$ and $v=\\frac{e^{(1+i)x}}{1+i}$ and finish by taking the imaginary part.</p>\n" }, { "answer_id": 2080926, "score": 3, "text": "<p>This solution doesn't use integration by parts. We start with</p>\n\n<p>$$\\int\\exp(x) dx = \\exp(x)$$</p>\n\n<p>Substituting $x = \\lambda t$ yields:</p>\n\n<p>$$\\int\\exp(\\lambda t) dt = \\frac{\\exp(\\lambda t)}{\\lambda}$$</p>\n\n<p>Substitute $\\lambda = 1+\\epsilon + i$ and expand both sides to first order in $\\epsilon$. 
Equating the coefficient of $\\epsilon$ of both sides yields:</p>\n\n<p>$$\\begin{split}\\int t\\exp(t)\\exp(i t) dt &amp;= \\exp(t)\\exp(it)\\left[\\frac{1-i}{2}t + \\frac{i}{2}\\right]\\\\\n&amp; = \\exp(t)\\left[\\frac{\\exp(i(t-\\frac{\\pi}{4}))}{\\sqrt{2}}t + \\frac{\\exp(i(t+\\frac{\\pi}{2}))}{2}\\right]\n\\end{split}\n$$</p>\n\n<p>Finally, take the imaginary part of both sides:</p>\n\n<p>$$\\int t\\exp(t)\\sin(t) dt = \\exp(t)\\left[\\frac{\\sin\\left(t-\\frac{\\pi}{4}\\right)}{\\sqrt{2}}t + \\frac{\\cos(t)}{2}\\right]$$</p>\n" }, { "answer_id": 3069117, "score": 3, "text": "<p>This can also be solved another way (a little longer but correct nevertheless and could be more basic and readable for people who just started learning calculus):</p>\n\n<p>We will use the results of following(easy - just substitute):</p>\n\n<p><span class=\"math-container\">$$\n∫ e^x \\sin (x) dx = \\frac{e^x \\sin (x) - e^x \\cos (x)}{2} + C\n$$</span></p>\n\n<p><span class=\"math-container\">$$\n∫ e^{\\ln (x)+x}dx = ∫ xe^xdx = xe^x-e^x + C\n$$</span>\nand</p>\n\n<p><span class=\"math-container\">$$\n∫ e^x \\cos (x) dx = \\frac{e^x \\sin (x) + e^x \\cos (x)}{2} + C\n$$</span></p>\n\n<p>Now, back to the main integral. The main problem is that the formula contains not the usual two but three factors(<span class=\"math-container\">$x$</span>, <span class=\"math-container\">$e^x$</span> and <span class=\"math-container\">$\\sin (x)$</span>). 
However, using basic logarithm and <span class=\"math-container\">$\\exp $</span> properties we can transform the \\exp ression into one with just two factors:</p>\n\n<p><span class=\"math-container\">$$\nx e^x = e^{\\ln (x)} e^x = e^{\\ln (x) + x}\n$$</span></p>\n\n<p>this leaves us with</p>\n\n<p><span class=\"math-container\">\\begin{equation*}\nI =\n\\int xe^xsinx dx =\n\\int e^{ln(x) + x} \\sin (x) dx =\n\\end{equation*}</span></p>\n\n<p>integrate by parts with substituting:</p>\n\n<p><span class=\"math-container\">$$\n\\begin{equation*}\n\\left[\n \\begin{alignedat}{2}\n u &amp;= \\sin (x) \\quad &amp; du &amp;=\\cos (x) \\\\\n dv &amp;= e^{\\ln (x)+x} \\quad &amp; v &amp;= xe^x-e^x\n \\end{alignedat}\\,\n\\right] \n\\end{equation*}\n$$</span></p>\n\n<p><span class=\"math-container\">$$\n= e^xx\\sin (x)-e^x\\sin (x)-∫ xe^x\\cos (x) -e^x\\cos (x)dx \n$$</span></p>\n\n<p><span class=\"math-container\">$$\n= e^xx\\sin (x)-e^x\\sin (x)-∫ xe^x\\cos (x)dx + ∫ e^x\\cos (x)dx\n$$</span></p>\n\n<p>Now, another three factor integral appears, which can also be integrated by parts using the previous trick.</p>\n\n<p><span class=\"math-container\">$$\n∫ xe^x\\cos (x)dx = \n$$</span>\n<span class=\"math-container\">$$\n\\begin{equation*}\n= ∫ e^{\\ln (x)+x}\\cos (x)dx = \n\\left[\n \\begin{alignedat}{2}\n u &amp;= \\cos (x) \\quad &amp; du &amp;=-\\sin (x) \\\\\n dv &amp;= e^{\\ln (x)+x} \\quad &amp; v &amp;= xe^x-e^x\n \\end{alignedat}\\,\n\\right] =\n\\end{equation*}\n$$</span>\n<span class=\"math-container\">$$\n= xe^x\\cos (x)-e^x\\cos (x)+ ∫ x e^x \\sin (x) dx - ∫ e^x \\sin (x) dx =\n$$</span>\n<span class=\"math-container\">$$\n= xe^x\\cos (x)-e^x\\cos (x)+ I - ∫ e^x \\sin (x) dx \n$$</span></p>\n\n<p>Back to the previous equation, plug in the result above flipping the signs accordingly:</p>\n\n<p><span class=\"math-container\">$$\nI = e^xx\\sin (x)-e^x\\sin (x) + ∫ e^x\\cos (x)dx - xe^x\\cos (x)+e^x\\cos (x)- I + ∫ e^x \\sin (x) dx\n$$</span>\nGreat! 
We obtained the wanted integral with negative sign. Let's move it to the other side of equation:\n<span class=\"math-container\">$$\n2I = e^xx\\sin (x)-e^x\\sin (x) + ∫ e^x\\cos (x)dx - xe^x\\cos (x)+e^x\\cos (x) + ∫ e^x \\sin (x) dx\n$$</span>\nand plug in the results of simpler integrals:\n<span class=\"math-container\">$$\n2I = e^xx\\sin (x)-e^x\\sin (x) + \\frac{e^x \\sin (x) + e^x \\cos (x)}{2} - xe^x\\cos (x)+e^x\\cos (x) + \\frac{e^x \\sin (x) - e^x \\cos (x)}{2} + C\n$$</span>\nnow just simplify the right side and divide by 2 to obtain the final result:\n<span class=\"math-container\">$$\nI = \\frac {xe^x\\sin (x) - xe^x\\cos (x) +e^x\\cos (x)}{2} + C\n$$</span></p>\n" }, { "answer_id": 2080949, "score": 1, "text": "<p><strong>By indeterminate coefficients</strong>:</p>\n\n<p>A term like $xe^x\\sin x$ can be generated by the derivative of itself (due to $e^x$), which will also generate $xe^x\\cos x$ and $e^x\\sin x$.</p>\n\n<p>Then we are tempted to try</p>\n\n<p>$$f(x)=e^x(x(A\\sin x+B\\cos x)+(C\\sin x+D\\cos x)),$$</p>\n\n<p>and</p>\n\n<p>$$f'(x)=e^x(x(A\\sin x+B\\cos x)+(C\\sin x+D\\cos x)+(A\\sin x+B\\cos x)+x(A\\cos x-B\\sin x)+(C\\cos x-D\\sin x)).$$</p>\n\n<p>We identify,</p>\n\n<p>$$A-B=1,\\\\A+B=0,\\\\C+A-D=0,\\\\D+B+C=0$$</p>\n\n<p>and obtain</p>\n\n<p>$$\\frac12xe^x(\\sin x-\\cos x)+\\frac12\\cos x.$$</p>\n" }, { "answer_id": 2080684, "score": 0, "text": "<p>First integrate $\\sin(x) \\exp(x),$ by parts, with the exponential as $d v.$ (you will need to do the integration by parts twice), then use the same method for the original integral.</p>\n" }, { "answer_id": 2080678, "score": 10, "text": "<p>If one may recall that $\\sin(x)=\\Im e^{ix}$, then</p>\n\n<p>$$\\int x\\sin(x)e^x\\ dx=\\Im\\int xe^{(1+i)x}\\ dx$$</p>\n\n<p>With a quick integration by parts, we have</p>\n\n<p>$$=\\Im\\left(\\frac1{1+i}xe^{(1+i)x}-\\frac1{1+i}\\int e^{(1+i)x}\\ 
dx\\right)\\\\=\\Im\\left(\\frac1{1+i}xe^{(1+i)x}-\\frac1{(1+i)^2}e^{(1+i)x}+C\\right)\\\\=\\frac12\\left(x\\sin(x)e^x-x\\cos(x)e^x+\\cos(x)e^x\\right)$$</p>\n" }, { "answer_id": 2080675, "score": 9, "text": "<p>Solution without complex numbers:</p>\n\n<p>Let $I=\\int e^x x\\sin xdx$ Integrating by parts:</p>\n\n<p>$$I=e^x x\\sin x-\\int e^x x\\cos xdx-\\int e^x\\sin xdx$$</p>\n\n<p>Then one more time by parts:</p>\n\n<p>$$\\int e^x x\\cos xdx=e^x x\\cos x + I-\\int e^x\\cos xdx$$</p>\n\n<p>So:</p>\n\n<p>$$2I=e^x x\\sin x-e^x x\\cos x+\\int e^x(\\cos x-\\sin x)dx$$</p>\n\n<p>Now (by parts again or by direct observation):</p>\n\n<p>$$\\int e^x(\\cos x-\\sin x)dx=e^x \\cos x$$</p>\n\n<p>So:</p>\n\n<p>$$I=\\frac{e^x x\\sin x-e^x x\\cos x+e^x \\cos x}{2}$$</p>\n" }, { "answer_id": 2080679, "score": 4, "text": "<p><strong>HINTS:</strong> </p>\n\n<p>Use Euler's Formula to write $$x\\sin(x)e^x=\\text{Im}(xe^{(1+i)x})$$Integrate $\\int xe^{(1+i)x}\\,dx$ by parts with $u=x$ and $v=\\frac{e^{(1+i)x}}{1+i}$ and finish by taking the imaginary part.</p>\n" }, { "answer_id": 2080926, "score": 3, "text": "<p>This solution doesn't use integration by parts. We start with</p>\n\n<p>$$\\int\\exp(x) dx = \\exp(x)$$</p>\n\n<p>Substituting $x = \\lambda t$ yields:</p>\n\n<p>$$\\int\\exp(\\lambda t) dt = \\frac{\\exp(\\lambda t)}{\\lambda}$$</p>\n\n<p>Substitute $\\lambda = 1+\\epsilon + i$ and expand both sides to first order in $\\epsilon$. 
Equating the coefficient of $\\epsilon$ of both sides yields:</p>\n\n<p>$$\\begin{split}\\int t\\exp(t)\\exp(i t) dt &amp;= \\exp(t)\\exp(it)\\left[\\frac{1-i}{2}t + \\frac{i}{2}\\right]\\\\\n&amp; = \\exp(t)\\left[\\frac{\\exp(i(t-\\frac{\\pi}{4}))}{\\sqrt{2}}t + \\frac{\\exp(i(t+\\frac{\\pi}{2}))}{2}\\right]\n\\end{split}\n$$</p>\n\n<p>Finally, take the imaginary part of both sides:</p>\n\n<p>$$\\int t\\exp(t)\\sin(t) dt = \\exp(t)\\left[\\frac{\\sin\\left(t-\\frac{\\pi}{4}\\right)}{\\sqrt{2}}t + \\frac{\\cos(t)}{2}\\right]$$</p>\n" }, { "answer_id": 3069117, "score": 3, "text": "<p>This can also be solved another way (a little longer but correct nevertheless and could be more basic and readable for people who just started learning calculus):</p>\n\n<p>We will use the results of following(easy - just substitute):</p>\n\n<p><span class=\"math-container\">$$\n∫ e^x \\sin (x) dx = \\frac{e^x \\sin (x) - e^x \\cos (x)}{2} + C\n$$</span></p>\n\n<p><span class=\"math-container\">$$\n∫ e^{\\ln (x)+x}dx = ∫ xe^xdx = xe^x-e^x + C\n$$</span>\nand</p>\n\n<p><span class=\"math-container\">$$\n∫ e^x \\cos (x) dx = \\frac{e^x \\sin (x) + e^x \\cos (x)}{2} + C\n$$</span></p>\n\n<p>Now, back to the main integral. The main problem is that the formula contains not the usual two but three factors(<span class=\"math-container\">$x$</span>, <span class=\"math-container\">$e^x$</span> and <span class=\"math-container\">$\\sin (x)$</span>). 
However, using basic logarithm and <span class=\"math-container\">$\\exp $</span> properties we can transform the \\exp ression into one with just two factors:</p>\n\n<p><span class=\"math-container\">$$\nx e^x = e^{\\ln (x)} e^x = e^{\\ln (x) + x}\n$$</span></p>\n\n<p>this leaves us with</p>\n\n<p><span class=\"math-container\">\\begin{equation*}\nI =\n\\int xe^xsinx dx =\n\\int e^{ln(x) + x} \\sin (x) dx =\n\\end{equation*}</span></p>\n\n<p>integrate by parts with substituting:</p>\n\n<p><span class=\"math-container\">$$\n\\begin{equation*}\n\\left[\n \\begin{alignedat}{2}\n u &amp;= \\sin (x) \\quad &amp; du &amp;=\\cos (x) \\\\\n dv &amp;= e^{\\ln (x)+x} \\quad &amp; v &amp;= xe^x-e^x\n \\end{alignedat}\\,\n\\right] \n\\end{equation*}\n$$</span></p>\n\n<p><span class=\"math-container\">$$\n= e^xx\\sin (x)-e^x\\sin (x)-∫ xe^x\\cos (x) -e^x\\cos (x)dx \n$$</span></p>\n\n<p><span class=\"math-container\">$$\n= e^xx\\sin (x)-e^x\\sin (x)-∫ xe^x\\cos (x)dx + ∫ e^x\\cos (x)dx\n$$</span></p>\n\n<p>Now, another three factor integral appears, which can also be integrated by parts using the previous trick.</p>\n\n<p><span class=\"math-container\">$$\n∫ xe^x\\cos (x)dx = \n$$</span>\n<span class=\"math-container\">$$\n\\begin{equation*}\n= ∫ e^{\\ln (x)+x}\\cos (x)dx = \n\\left[\n \\begin{alignedat}{2}\n u &amp;= \\cos (x) \\quad &amp; du &amp;=-\\sin (x) \\\\\n dv &amp;= e^{\\ln (x)+x} \\quad &amp; v &amp;= xe^x-e^x\n \\end{alignedat}\\,\n\\right] =\n\\end{equation*}\n$$</span>\n<span class=\"math-container\">$$\n= xe^x\\cos (x)-e^x\\cos (x)+ ∫ x e^x \\sin (x) dx - ∫ e^x \\sin (x) dx =\n$$</span>\n<span class=\"math-container\">$$\n= xe^x\\cos (x)-e^x\\cos (x)+ I - ∫ e^x \\sin (x) dx \n$$</span></p>\n\n<p>Back to the previous equation, plug in the result above flipping the signs accordingly:</p>\n\n<p><span class=\"math-container\">$$\nI = e^xx\\sin (x)-e^x\\sin (x) + ∫ e^x\\cos (x)dx - xe^x\\cos (x)+e^x\\cos (x)- I + ∫ e^x \\sin (x) dx\n$$</span>\nGreat! 
We obtained the wanted integral with negative sign. Let's move it to the other side of equation:\n<span class=\"math-container\">$$\n2I = e^xx\\sin (x)-e^x\\sin (x) + ∫ e^x\\cos (x)dx - xe^x\\cos (x)+e^x\\cos (x) + ∫ e^x \\sin (x) dx\n$$</span>\nand plug in the results of simpler integrals:\n<span class=\"math-container\">$$\n2I = e^xx\\sin (x)-e^x\\sin (x) + \\frac{e^x \\sin (x) + e^x \\cos (x)}{2} - xe^x\\cos (x)+e^x\\cos (x) + \\frac{e^x \\sin (x) - e^x \\cos (x)}{2} + C\n$$</span>\nnow just simplify the right side and divide by 2 to obtain the final result:\n<span class=\"math-container\">$$\nI = \\frac {xe^x\\sin (x) - xe^x\\cos (x) +e^x\\cos (x)}{2} + C\n$$</span></p>\n" }, { "answer_id": 2080949, "score": 1, "text": "<p><strong>By indeterminate coefficients</strong>:</p>\n\n<p>A term like $xe^x\\sin x$ can be generated by the derivative of itself (due to $e^x$), which will also generate $xe^x\\cos x$ and $e^x\\sin x$.</p>\n\n<p>Then we are tempted to try</p>\n\n<p>$$f(x)=e^x(x(A\\sin x+B\\cos x)+(C\\sin x+D\\cos x)),$$</p>\n\n<p>and</p>\n\n<p>$$f'(x)=e^x(x(A\\sin x+B\\cos x)+(C\\sin x+D\\cos x)+(A\\sin x+B\\cos x)+x(A\\cos x-B\\sin x)+(C\\cos x-D\\sin x)).$$</p>\n\n<p>We identify,</p>\n\n<p>$$A-B=1,\\\\A+B=0,\\\\C+A-D=0,\\\\D+B+C=0$$</p>\n\n<p>and obtain</p>\n\n<p>$$\\frac12xe^x(\\sin x-\\cos x)+\\frac12\\cos x.$$</p>\n" }, { "answer_id": 2080684, "score": 0, "text": "<p>First integrate $\\sin(x) \\exp(x),$ by parts, with the exponential as $d v.$ (you will need to do the integration by parts twice), then use the same method for the original integral.</p>\n" }, { "answer_id": 2080678, "score": 10, "text": "<p>If one may recall that $\\sin(x)=\\Im e^{ix}$, then</p>\n\n<p>$$\\int x\\sin(x)e^x\\ dx=\\Im\\int xe^{(1+i)x}\\ dx$$</p>\n\n<p>With a quick integration by parts, we have</p>\n\n<p>$$=\\Im\\left(\\frac1{1+i}xe^{(1+i)x}-\\frac1{1+i}\\int e^{(1+i)x}\\ 
dx\\right)\\\\=\\Im\\left(\\frac1{1+i}xe^{(1+i)x}-\\frac1{(1+i)^2}e^{(1+i)x}+C\\right)\\\\=\\frac12\\left(x\\sin(x)e^x-x\\cos(x)e^x+\\cos(x)e^x\\right)$$</p>\n" }, { "answer_id": 2080675, "score": 9, "text": "<p>Solution without complex numbers:</p>\n\n<p>Let $I=\\int e^x x\\sin xdx$ Integrating by parts:</p>\n\n<p>$$I=e^x x\\sin x-\\int e^x x\\cos xdx-\\int e^x\\sin xdx$$</p>\n\n<p>Then one more time by parts:</p>\n\n<p>$$\\int e^x x\\cos xdx=e^x x\\cos x + I-\\int e^x\\cos xdx$$</p>\n\n<p>So:</p>\n\n<p>$$2I=e^x x\\sin x-e^x x\\cos x+\\int e^x(\\cos x-\\sin x)dx$$</p>\n\n<p>Now (by parts again or by direct observation):</p>\n\n<p>$$\\int e^x(\\cos x-\\sin x)dx=e^x \\cos x$$</p>\n\n<p>So:</p>\n\n<p>$$I=\\frac{e^x x\\sin x-e^x x\\cos x+e^x \\cos x}{2}$$</p>\n" }, { "answer_id": 2080679, "score": 4, "text": "<p><strong>HINTS:</strong> </p>\n\n<p>Use Euler's Formula to write $$x\\sin(x)e^x=\\text{Im}(xe^{(1+i)x})$$Integrate $\\int xe^{(1+i)x}\\,dx$ by parts with $u=x$ and $v=\\frac{e^{(1+i)x}}{1+i}$ and finish by taking the imaginary part.</p>\n" }, { "answer_id": 2080926, "score": 3, "text": "<p>This solution doesn't use integration by parts. We start with</p>\n\n<p>$$\\int\\exp(x) dx = \\exp(x)$$</p>\n\n<p>Substituting $x = \\lambda t$ yields:</p>\n\n<p>$$\\int\\exp(\\lambda t) dt = \\frac{\\exp(\\lambda t)}{\\lambda}$$</p>\n\n<p>Substitute $\\lambda = 1+\\epsilon + i$ and expand both sides to first order in $\\epsilon$. 
Equating the coefficient of $\\epsilon$ of both sides yields:</p>\n\n<p>$$\\begin{split}\\int t\\exp(t)\\exp(i t) dt &amp;= \\exp(t)\\exp(it)\\left[\\frac{1-i}{2}t + \\frac{i}{2}\\right]\\\\\n&amp; = \\exp(t)\\left[\\frac{\\exp(i(t-\\frac{\\pi}{4}))}{\\sqrt{2}}t + \\frac{\\exp(i(t+\\frac{\\pi}{2}))}{2}\\right]\n\\end{split}\n$$</p>\n\n<p>Finally, take the imaginary part of both sides:</p>\n\n<p>$$\\int t\\exp(t)\\sin(t) dt = \\exp(t)\\left[\\frac{\\sin\\left(t-\\frac{\\pi}{4}\\right)}{\\sqrt{2}}t + \\frac{\\cos(t)}{2}\\right]$$</p>\n" }, { "answer_id": 3069117, "score": 3, "text": "<p>This can also be solved another way (a little longer but correct nevertheless and could be more basic and readable for people who just started learning calculus):</p>\n\n<p>We will use the results of following(easy - just substitute):</p>\n\n<p><span class=\"math-container\">$$\n∫ e^x \\sin (x) dx = \\frac{e^x \\sin (x) - e^x \\cos (x)}{2} + C\n$$</span></p>\n\n<p><span class=\"math-container\">$$\n∫ e^{\\ln (x)+x}dx = ∫ xe^xdx = xe^x-e^x + C\n$$</span>\nand</p>\n\n<p><span class=\"math-container\">$$\n∫ e^x \\cos (x) dx = \\frac{e^x \\sin (x) + e^x \\cos (x)}{2} + C\n$$</span></p>\n\n<p>Now, back to the main integral. The main problem is that the formula contains not the usual two but three factors(<span class=\"math-container\">$x$</span>, <span class=\"math-container\">$e^x$</span> and <span class=\"math-container\">$\\sin (x)$</span>). 
However, using basic logarithm and <span class=\"math-container\">$\\exp $</span> properties we can transform the \\exp ression into one with just two factors:</p>\n\n<p><span class=\"math-container\">$$\nx e^x = e^{\\ln (x)} e^x = e^{\\ln (x) + x}\n$$</span></p>\n\n<p>this leaves us with</p>\n\n<p><span class=\"math-container\">\\begin{equation*}\nI =\n\\int xe^xsinx dx =\n\\int e^{ln(x) + x} \\sin (x) dx =\n\\end{equation*}</span></p>\n\n<p>integrate by parts with substituting:</p>\n\n<p><span class=\"math-container\">$$\n\\begin{equation*}\n\\left[\n \\begin{alignedat}{2}\n u &amp;= \\sin (x) \\quad &amp; du &amp;=\\cos (x) \\\\\n dv &amp;= e^{\\ln (x)+x} \\quad &amp; v &amp;= xe^x-e^x\n \\end{alignedat}\\,\n\\right] \n\\end{equation*}\n$$</span></p>\n\n<p><span class=\"math-container\">$$\n= e^xx\\sin (x)-e^x\\sin (x)-∫ xe^x\\cos (x) -e^x\\cos (x)dx \n$$</span></p>\n\n<p><span class=\"math-container\">$$\n= e^xx\\sin (x)-e^x\\sin (x)-∫ xe^x\\cos (x)dx + ∫ e^x\\cos (x)dx\n$$</span></p>\n\n<p>Now, another three factor integral appears, which can also be integrated by parts using the previous trick.</p>\n\n<p><span class=\"math-container\">$$\n∫ xe^x\\cos (x)dx = \n$$</span>\n<span class=\"math-container\">$$\n\\begin{equation*}\n= ∫ e^{\\ln (x)+x}\\cos (x)dx = \n\\left[\n \\begin{alignedat}{2}\n u &amp;= \\cos (x) \\quad &amp; du &amp;=-\\sin (x) \\\\\n dv &amp;= e^{\\ln (x)+x} \\quad &amp; v &amp;= xe^x-e^x\n \\end{alignedat}\\,\n\\right] =\n\\end{equation*}\n$$</span>\n<span class=\"math-container\">$$\n= xe^x\\cos (x)-e^x\\cos (x)+ ∫ x e^x \\sin (x) dx - ∫ e^x \\sin (x) dx =\n$$</span>\n<span class=\"math-container\">$$\n= xe^x\\cos (x)-e^x\\cos (x)+ I - ∫ e^x \\sin (x) dx \n$$</span></p>\n\n<p>Back to the previous equation, plug in the result above flipping the signs accordingly:</p>\n\n<p><span class=\"math-container\">$$\nI = e^xx\\sin (x)-e^x\\sin (x) + ∫ e^x\\cos (x)dx - xe^x\\cos (x)+e^x\\cos (x)- I + ∫ e^x \\sin (x) dx\n$$</span>\nGreat! 
We obtained the wanted integral with negative sign. Let's move it to the other side of equation:\n<span class=\"math-container\">$$\n2I = e^xx\\sin (x)-e^x\\sin (x) + ∫ e^x\\cos (x)dx - xe^x\\cos (x)+e^x\\cos (x) + ∫ e^x \\sin (x) dx\n$$</span>\nand plug in the results of simpler integrals:\n<span class=\"math-container\">$$\n2I = e^xx\\sin (x)-e^x\\sin (x) + \\frac{e^x \\sin (x) + e^x \\cos (x)}{2} - xe^x\\cos (x)+e^x\\cos (x) + \\frac{e^x \\sin (x) - e^x \\cos (x)}{2} + C\n$$</span>\nnow just simplify the right side and divide by 2 to obtain the final result:\n<span class=\"math-container\">$$\nI = \\frac {xe^x\\sin (x) - xe^x\\cos (x) +e^x\\cos (x)}{2} + C\n$$</span></p>\n" }, { "answer_id": 2080949, "score": 1, "text": "<p><strong>By indeterminate coefficients</strong>:</p>\n\n<p>A term like $xe^x\\sin x$ can be generated by the derivative of itself (due to $e^x$), which will also generate $xe^x\\cos x$ and $e^x\\sin x$.</p>\n\n<p>Then we are tempted to try</p>\n\n<p>$$f(x)=e^x(x(A\\sin x+B\\cos x)+(C\\sin x+D\\cos x)),$$</p>\n\n<p>and</p>\n\n<p>$$f'(x)=e^x(x(A\\sin x+B\\cos x)+(C\\sin x+D\\cos x)+(A\\sin x+B\\cos x)+x(A\\cos x-B\\sin x)+(C\\cos x-D\\sin x)).$$</p>\n\n<p>We identify,</p>\n\n<p>$$A-B=1,\\\\A+B=0,\\\\C+A-D=0,\\\\D+B+C=0$$</p>\n\n<p>and obtain</p>\n\n<p>$$\\frac12xe^x(\\sin x-\\cos x)+\\frac12\\cos x.$$</p>\n" }, { "answer_id": 2080684, "score": 0, "text": "<p>First integrate $\\sin(x) \\exp(x),$ by parts, with the exponential as $d v.$ (you will need to do the integration by parts twice), then use the same method for the original integral.</p>\n" }, { "answer_id": 2080678, "score": 10, "text": "<p>If one may recall that $\\sin(x)=\\Im e^{ix}$, then</p>\n\n<p>$$\\int x\\sin(x)e^x\\ dx=\\Im\\int xe^{(1+i)x}\\ dx$$</p>\n\n<p>With a quick integration by parts, we have</p>\n\n<p>$$=\\Im\\left(\\frac1{1+i}xe^{(1+i)x}-\\frac1{1+i}\\int e^{(1+i)x}\\ 
dx\\right)\\\\=\\Im\\left(\\frac1{1+i}xe^{(1+i)x}-\\frac1{(1+i)^2}e^{(1+i)x}+C\\right)\\\\=\\frac12\\left(x\\sin(x)e^x-x\\cos(x)e^x+\\cos(x)e^x\\right)$$</p>\n" }, { "answer_id": 2080675, "score": 9, "text": "<p>Solution without complex numbers:</p>\n\n<p>Let $I=\\int e^x x\\sin xdx$ Integrating by parts:</p>\n\n<p>$$I=e^x x\\sin x-\\int e^x x\\cos xdx-\\int e^x\\sin xdx$$</p>\n\n<p>Then one more time by parts:</p>\n\n<p>$$\\int e^x x\\cos xdx=e^x x\\cos x + I-\\int e^x\\cos xdx$$</p>\n\n<p>So:</p>\n\n<p>$$2I=e^x x\\sin x-e^x x\\cos x+\\int e^x(\\cos x-\\sin x)dx$$</p>\n\n<p>Now (by parts again or by direct observation):</p>\n\n<p>$$\\int e^x(\\cos x-\\sin x)dx=e^x \\cos x$$</p>\n\n<p>So:</p>\n\n<p>$$I=\\frac{e^x x\\sin x-e^x x\\cos x+e^x \\cos x}{2}$$</p>\n" }, { "answer_id": 2080679, "score": 4, "text": "<p><strong>HINTS:</strong> </p>\n\n<p>Use Euler's Formula to write $$x\\sin(x)e^x=\\text{Im}(xe^{(1+i)x})$$Integrate $\\int xe^{(1+i)x}\\,dx$ by parts with $u=x$ and $v=\\frac{e^{(1+i)x}}{1+i}$ and finish by taking the imaginary part.</p>\n" }, { "answer_id": 2080926, "score": 3, "text": "<p>This solution doesn't use integration by parts. We start with</p>\n\n<p>$$\\int\\exp(x) dx = \\exp(x)$$</p>\n\n<p>Substituting $x = \\lambda t$ yields:</p>\n\n<p>$$\\int\\exp(\\lambda t) dt = \\frac{\\exp(\\lambda t)}{\\lambda}$$</p>\n\n<p>Substitute $\\lambda = 1+\\epsilon + i$ and expand both sides to first order in $\\epsilon$. 
Equating the coefficient of $\\epsilon$ of both sides yields:</p>\n\n<p>$$\\begin{split}\\int t\\exp(t)\\exp(i t) dt &amp;= \\exp(t)\\exp(it)\\left[\\frac{1-i}{2}t + \\frac{i}{2}\\right]\\\\\n&amp; = \\exp(t)\\left[\\frac{\\exp(i(t-\\frac{\\pi}{4}))}{\\sqrt{2}}t + \\frac{\\exp(i(t+\\frac{\\pi}{2}))}{2}\\right]\n\\end{split}\n$$</p>\n\n<p>Finally, take the imaginary part of both sides:</p>\n\n<p>$$\\int t\\exp(t)\\sin(t) dt = \\exp(t)\\left[\\frac{\\sin\\left(t-\\frac{\\pi}{4}\\right)}{\\sqrt{2}}t + \\frac{\\cos(t)}{2}\\right]$$</p>\n" }, { "answer_id": 3069117, "score": 3, "text": "<p>This can also be solved another way (a little longer, but correct nevertheless, and perhaps more basic and readable for people who have just started learning calculus):</p>\n\n<p>We will use the following results (easy: just substitute):</p>\n\n<p><span class=\"math-container\">$$\n∫ e^x \\sin (x) dx = \\frac{e^x \\sin (x) - e^x \\cos (x)}{2} + C\n$$</span></p>\n\n<p><span class=\"math-container\">$$\n∫ e^{\\ln (x)+x}dx = ∫ xe^xdx = xe^x-e^x + C\n$$</span>\nand</p>\n\n<p><span class=\"math-container\">$$\n∫ e^x \\cos (x) dx = \\frac{e^x \\sin (x) + e^x \\cos (x)}{2} + C\n$$</span></p>\n\n<p>Now, back to the main integral. The main problem is that the formula contains not the usual two but three factors (<span class=\"math-container\">$x$</span>, <span class=\"math-container\">$e^x$</span> and <span class=\"math-container\">$\\sin (x)$</span>). 
However, using basic logarithm and <span class=\"math-container\">$\\exp$</span> properties we can transform the expression into one with just two factors:</p>\n\n<p><span class=\"math-container\">$$\nx e^x = e^{\\ln (x)} e^x = e^{\\ln (x) + x}\n$$</span></p>\n\n<p>this leaves us with</p>\n\n<p><span class=\"math-container\">\\begin{equation*}\nI =\n\\int xe^x\\sin x \\,dx =\n\\int e^{\\ln(x) + x} \\sin (x) \\,dx =\n\\end{equation*}</span></p>\n\n<p>integrate by parts, substituting:</p>\n\n<p><span class=\"math-container\">$$\n\\begin{equation*}\n\\left[\n  \\begin{alignedat}{2}\n    u &amp;= \\sin (x) \\quad &amp; du &amp;=\\cos (x)\\,dx \\\\\n    dv &amp;= e^{\\ln (x)+x}\\,dx \\quad &amp; v &amp;= xe^x-e^x\n  \\end{alignedat}\\,\n\\right] \n\\end{equation*}\n$$</span></p>\n\n<p><span class=\"math-container\">$$\n= e^xx\\sin (x)-e^x\\sin (x)-∫ \\left( xe^x\\cos (x) -e^x\\cos (x)\\right) dx \n$$</span></p>\n\n<p><span class=\"math-container\">$$\n= e^xx\\sin (x)-e^x\\sin (x)-∫ xe^x\\cos (x)dx + ∫ e^x\\cos (x)dx\n$$</span></p>\n\n<p>Now, another three-factor integral appears, which can also be integrated by parts using the previous trick.</p>\n\n<p><span class=\"math-container\">$$\n∫ xe^x\\cos (x)dx = \n$$</span>\n<span class=\"math-container\">$$\n\\begin{equation*}\n= ∫ e^{\\ln (x)+x}\\cos (x)dx = \n\\left[\n  \\begin{alignedat}{2}\n    u &amp;= \\cos (x) \\quad &amp; du &amp;=-\\sin (x)\\,dx \\\\\n    dv &amp;= e^{\\ln (x)+x}\\,dx \\quad &amp; v &amp;= xe^x-e^x\n  \\end{alignedat}\\,\n\\right] =\n\\end{equation*}\n$$</span>\n<span class=\"math-container\">$$\n= xe^x\\cos (x)-e^x\\cos (x)+ ∫ x e^x \\sin (x) dx - ∫ e^x \\sin (x) dx =\n$$</span>\n<span class=\"math-container\">$$\n= xe^x\\cos (x)-e^x\\cos (x)+ I - ∫ e^x \\sin (x) dx \n$$</span></p>\n\n<p>Back to the previous equation, plug in the result above, flipping the signs accordingly:</p>\n\n<p><span class=\"math-container\">$$\nI = e^xx\\sin (x)-e^x\\sin (x) + ∫ e^x\\cos (x)dx - xe^x\\cos (x)+e^x\\cos (x)- I + ∫ e^x \\sin (x) dx\n$$</span>\nGreat! 
We obtained the wanted integral with a negative sign. Let's move it to the other side of the equation:\n<span class=\"math-container\">$$\n2I = e^xx\\sin (x)-e^x\\sin (x) + ∫ e^x\\cos (x)dx - xe^x\\cos (x)+e^x\\cos (x) + ∫ e^x \\sin (x) dx\n$$</span>\nand plug in the results of the simpler integrals:\n<span class=\"math-container\">$$\n2I = e^xx\\sin (x)-e^x\\sin (x) + \\frac{e^x \\sin (x) + e^x \\cos (x)}{2} - xe^x\\cos (x)+e^x\\cos (x) + \\frac{e^x \\sin (x) - e^x \\cos (x)}{2} + C\n$$</span>\nnow just simplify the right side and divide by 2 to obtain the final result:\n<span class=\"math-container\">$$\nI = \\frac {xe^x\\sin (x) - xe^x\\cos (x) +e^x\\cos (x)}{2} + C\n$$</span></p>\n" }, { "answer_id": 2080949, "score": 1, "text": "<p><strong>By indeterminate coefficients</strong>:</p>\n\n<p>A term like $xe^x\\sin x$ can be generated by the derivative of itself (due to $e^x$), which will also generate $xe^x\\cos x$ and $e^x\\sin x$.</p>\n\n<p>Then we are tempted to try</p>\n\n<p>$$f(x)=e^x(x(A\\sin x+B\\cos x)+(C\\sin x+D\\cos x)),$$</p>\n\n<p>and</p>\n\n<p>$$f'(x)=e^x(x(A\\sin x+B\\cos x)+(C\\sin x+D\\cos x)+(A\\sin x+B\\cos x)+x(A\\cos x-B\\sin x)+(C\\cos x-D\\sin x)).$$</p>\n\n<p>We identify</p>\n\n<p>$$A-B=1,\\\\A+B=0,\\\\C+A-D=0,\\\\D+B+C=0,$$</p>\n\n<p>solve to get $A=\\tfrac12$, $B=-\\tfrac12$, $C=0$, $D=\\tfrac12$, and obtain</p>\n\n<p>$$\\frac12xe^x(\\sin x-\\cos x)+\\frac12e^x\\cos x.$$</p>\n" }, { "answer_id": 2080684, "score": 0, "text": "<p>First integrate $\\sin(x)\\exp(x)$ by parts, with the exponential as $dv$ (you will need to do the integration by parts twice); then use the same method for the original integral.</p>\n" }, { "answer_id": 2080678, "score": 10, "text": "<p>If one may recall that $\\sin(x)=\\Im e^{ix}$, then</p>\n\n<p>$$\\int x\\sin(x)e^x\\ dx=\\Im\\int xe^{(1+i)x}\\ dx$$</p>\n\n<p>With a quick integration by parts, we have</p>\n\n<p>$$=\\Im\\left(\\frac1{1+i}xe^{(1+i)x}-\\frac1{1+i}\\int e^{(1+i)x}\\ 
dx\\right)\\\\=\\Im\\left(\\frac1{1+i}xe^{(1+i)x}-\\frac1{(1+i)^2}e^{(1+i)x}+C\\right)\\\\=\\frac12\\left(x\\sin(x)e^x-x\\cos(x)e^x+\\cos(x)e^x\\right)$$</p>\n" }, { "answer_id": 2080675, "score": 9, "text": "<p>Solution without complex numbers:</p>\n\n<p>Let $I=\\int e^x x\\sin x\\,dx$. Integrating by parts:</p>\n\n<p>$$I=e^x x\\sin x-\\int e^x x\\cos xdx-\\int e^x\\sin xdx$$</p>\n\n<p>Then one more time by parts:</p>\n\n<p>$$\\int e^x x\\cos xdx=e^x x\\cos x + I-\\int e^x\\cos xdx$$</p>\n\n<p>So:</p>\n\n<p>$$2I=e^x x\\sin x-e^x x\\cos x+\\int e^x(\\cos x-\\sin x)dx$$</p>\n\n<p>Now (by parts again or by direct observation):</p>\n\n<p>$$\\int e^x(\\cos x-\\sin x)dx=e^x \\cos x$$</p>\n\n<p>So:</p>\n\n<p>$$I=\\frac{e^x x\\sin x-e^x x\\cos x+e^x \\cos x}{2}+C$$</p>\n" }, { "answer_id": 2080679, "score": 4, "text": "<p><strong>HINTS:</strong> </p>\n\n<p>Use Euler's Formula to write $$x\\sin(x)e^x=\\text{Im}(xe^{(1+i)x})$$Integrate $\\int xe^{(1+i)x}\\,dx$ by parts with $u=x$ and $v=\\frac{e^{(1+i)x}}{1+i}$ and finish by taking the imaginary part.</p>\n" }, { "answer_id": 2080926, "score": 3, "text": "<p>This solution doesn't use integration by parts. We start with</p>\n\n<p>$$\\int\\exp(x) dx = \\exp(x)$$</p>\n\n<p>Substituting $x = \\lambda t$ yields:</p>\n\n<p>$$\\int\\exp(\\lambda t) dt = \\frac{\\exp(\\lambda t)}{\\lambda}$$</p>\n\n<p>Substitute $\\lambda = 1+\\epsilon + i$ and expand both sides to first order in $\\epsilon$. 
Equating the coefficient of $\\epsilon$ on both sides yields:</p>\n\n<p>$$\\begin{split}\\int t\\exp(t)\\exp(i t) dt &amp;= \\exp(t)\\exp(it)\\left[\\frac{1-i}{2}t + \\frac{i}{2}\\right]\\\\\n&amp; = \\exp(t)\\left[\\frac{\\exp(i(t-\\frac{\\pi}{4}))}{\\sqrt{2}}t + \\frac{\\exp(i(t+\\frac{\\pi}{2}))}{2}\\right]\n\\end{split}\n$$</p>\n\n<p>Finally, take the imaginary part of both sides:</p>\n\n<p>$$\\int t\\exp(t)\\sin(t) dt = \\exp(t)\\left[\\frac{\\sin\\left(t-\\frac{\\pi}{4}\\right)}{\\sqrt{2}}t + \\frac{\\cos(t)}{2}\\right]$$</p>\n" }, { "answer_id": 3069117, "score": 3, "text": "<p>This can also be solved another way (a little longer but correct nevertheless, and possibly more basic and readable for people who have just started learning calculus):</p>\n\n<p>We will use the results of the following (easy to verify by differentiating the right-hand sides):</p>\n\n<p><span class=\"math-container\">$$\n∫ e^x \\sin (x) dx = \\frac{e^x \\sin (x) - e^x \\cos (x)}{2} + C\n$$</span></p>\n\n<p><span class=\"math-container\">$$\n∫ e^{\\ln (x)+x}dx = ∫ xe^xdx = xe^x-e^x + C\n$$</span>\nand</p>\n\n<p><span class=\"math-container\">$$\n∫ e^x \\cos (x) dx = \\frac{e^x \\sin (x) + e^x \\cos (x)}{2} + C\n$$</span></p>\n\n<p>Now, back to the main integral. The main problem is that the formula contains not the usual two but three factors (<span class=\"math-container\">$x$</span>, <span class=\"math-container\">$e^x$</span> and <span class=\"math-container\">$\\sin (x)$</span>). 
However, using basic logarithm and <span class=\"math-container\">$\\exp $</span> properties we can transform the \\exp ression into one with just two factors:</p>\n\n<p><span class=\"math-container\">$$\nx e^x = e^{\\ln (x)} e^x = e^{\\ln (x) + x}\n$$</span></p>\n\n<p>this leaves us with</p>\n\n<p><span class=\"math-container\">\\begin{equation*}\nI =\n\\int xe^xsinx dx =\n\\int e^{ln(x) + x} \\sin (x) dx =\n\\end{equation*}</span></p>\n\n<p>integrate by parts with substituting:</p>\n\n<p><span class=\"math-container\">$$\n\\begin{equation*}\n\\left[\n \\begin{alignedat}{2}\n u &amp;= \\sin (x) \\quad &amp; du &amp;=\\cos (x) \\\\\n dv &amp;= e^{\\ln (x)+x} \\quad &amp; v &amp;= xe^x-e^x\n \\end{alignedat}\\,\n\\right] \n\\end{equation*}\n$$</span></p>\n\n<p><span class=\"math-container\">$$\n= e^xx\\sin (x)-e^x\\sin (x)-∫ xe^x\\cos (x) -e^x\\cos (x)dx \n$$</span></p>\n\n<p><span class=\"math-container\">$$\n= e^xx\\sin (x)-e^x\\sin (x)-∫ xe^x\\cos (x)dx + ∫ e^x\\cos (x)dx\n$$</span></p>\n\n<p>Now, another three factor integral appears, which can also be integrated by parts using the previous trick.</p>\n\n<p><span class=\"math-container\">$$\n∫ xe^x\\cos (x)dx = \n$$</span>\n<span class=\"math-container\">$$\n\\begin{equation*}\n= ∫ e^{\\ln (x)+x}\\cos (x)dx = \n\\left[\n \\begin{alignedat}{2}\n u &amp;= \\cos (x) \\quad &amp; du &amp;=-\\sin (x) \\\\\n dv &amp;= e^{\\ln (x)+x} \\quad &amp; v &amp;= xe^x-e^x\n \\end{alignedat}\\,\n\\right] =\n\\end{equation*}\n$$</span>\n<span class=\"math-container\">$$\n= xe^x\\cos (x)-e^x\\cos (x)+ ∫ x e^x \\sin (x) dx - ∫ e^x \\sin (x) dx =\n$$</span>\n<span class=\"math-container\">$$\n= xe^x\\cos (x)-e^x\\cos (x)+ I - ∫ e^x \\sin (x) dx \n$$</span></p>\n\n<p>Back to the previous equation, plug in the result above flipping the signs accordingly:</p>\n\n<p><span class=\"math-container\">$$\nI = e^xx\\sin (x)-e^x\\sin (x) + ∫ e^x\\cos (x)dx - xe^x\\cos (x)+e^x\\cos (x)- I + ∫ e^x \\sin (x) dx\n$$</span>\nGreat! 
We obtained the wanted integral with negative sign. Let's move it to the other side of equation:\n<span class=\"math-container\">$$\n2I = e^xx\\sin (x)-e^x\\sin (x) + ∫ e^x\\cos (x)dx - xe^x\\cos (x)+e^x\\cos (x) + ∫ e^x \\sin (x) dx\n$$</span>\nand plug in the results of simpler integrals:\n<span class=\"math-container\">$$\n2I = e^xx\\sin (x)-e^x\\sin (x) + \\frac{e^x \\sin (x) + e^x \\cos (x)}{2} - xe^x\\cos (x)+e^x\\cos (x) + \\frac{e^x \\sin (x) - e^x \\cos (x)}{2} + C\n$$</span>\nnow just simplify the right side and divide by 2 to obtain the final result:\n<span class=\"math-container\">$$\nI = \\frac {xe^x\\sin (x) - xe^x\\cos (x) +e^x\\cos (x)}{2} + C\n$$</span></p>\n" }, { "answer_id": 2080949, "score": 1, "text": "<p><strong>By indeterminate coefficients</strong>:</p>\n\n<p>A term like $xe^x\\sin x$ can be generated by the derivative of itself (due to $e^x$), which will also generate $xe^x\\cos x$ and $e^x\\sin x$.</p>\n\n<p>Then we are tempted to try</p>\n\n<p>$$f(x)=e^x(x(A\\sin x+B\\cos x)+(C\\sin x+D\\cos x)),$$</p>\n\n<p>and</p>\n\n<p>$$f'(x)=e^x(x(A\\sin x+B\\cos x)+(C\\sin x+D\\cos x)+(A\\sin x+B\\cos x)+x(A\\cos x-B\\sin x)+(C\\cos x-D\\sin x)).$$</p>\n\n<p>We identify,</p>\n\n<p>$$A-B=1,\\\\A+B=0,\\\\C+A-D=0,\\\\D+B+C=0$$</p>\n\n<p>and obtain</p>\n\n<p>$$\\frac12xe^x(\\sin x-\\cos x)+\\frac12\\cos x.$$</p>\n" }, { "answer_id": 2080684, "score": 0, "text": "<p>First integrate $\\sin(x) \\exp(x),$ by parts, with the exponential as $d v.$ (you will need to do the integration by parts twice), then use the same method for the original integral.</p>\n" }, { "answer_id": 2080678, "score": 10, "text": "<p>If one may recall that $\\sin(x)=\\Im e^{ix}$, then</p>\n\n<p>$$\\int x\\sin(x)e^x\\ dx=\\Im\\int xe^{(1+i)x}\\ dx$$</p>\n\n<p>With a quick integration by parts, we have</p>\n\n<p>$$=\\Im\\left(\\frac1{1+i}xe^{(1+i)x}-\\frac1{1+i}\\int e^{(1+i)x}\\ 
dx\\right)\\\\=\\Im\\left(\\frac1{1+i}xe^{(1+i)x}-\\frac1{(1+i)^2}e^{(1+i)x}+C\\right)\\\\=\\frac12\\left(x\\sin(x)e^x-x\\cos(x)e^x+\\cos(x)e^x\\right)$$</p>\n" }, { "answer_id": 2080675, "score": 9, "text": "<p>Solution without complex numbers:</p>\n\n<p>Let $I=\\int e^x x\\sin xdx$ Integrating by parts:</p>\n\n<p>$$I=e^x x\\sin x-\\int e^x x\\cos xdx-\\int e^x\\sin xdx$$</p>\n\n<p>Then one more time by parts:</p>\n\n<p>$$\\int e^x x\\cos xdx=e^x x\\cos x + I-\\int e^x\\cos xdx$$</p>\n\n<p>So:</p>\n\n<p>$$2I=e^x x\\sin x-e^x x\\cos x+\\int e^x(\\cos x-\\sin x)dx$$</p>\n\n<p>Now (by parts again or by direct observation):</p>\n\n<p>$$\\int e^x(\\cos x-\\sin x)dx=e^x \\cos x$$</p>\n\n<p>So:</p>\n\n<p>$$I=\\frac{e^x x\\sin x-e^x x\\cos x+e^x \\cos x}{2}$$</p>\n" }, { "answer_id": 2080679, "score": 4, "text": "<p><strong>HINTS:</strong> </p>\n\n<p>Use Euler's Formula to write $$x\\sin(x)e^x=\\text{Im}(xe^{(1+i)x})$$Integrate $\\int xe^{(1+i)x}\\,dx$ by parts with $u=x$ and $v=\\frac{e^{(1+i)x}}{1+i}$ and finish by taking the imaginary part.</p>\n" }, { "answer_id": 2080926, "score": 3, "text": "<p>This solution doesn't use integration by parts. We start with</p>\n\n<p>$$\\int\\exp(x) dx = \\exp(x)$$</p>\n\n<p>Substituting $x = \\lambda t$ yields:</p>\n\n<p>$$\\int\\exp(\\lambda t) dt = \\frac{\\exp(\\lambda t)}{\\lambda}$$</p>\n\n<p>Substitute $\\lambda = 1+\\epsilon + i$ and expand both sides to first order in $\\epsilon$. 
Equating the coefficient of $\\epsilon$ of both sides yields:</p>\n\n<p>$$\\begin{split}\\int t\\exp(t)\\exp(i t) dt &amp;= \\exp(t)\\exp(it)\\left[\\frac{1-i}{2}t + \\frac{i}{2}\\right]\\\\\n&amp; = \\exp(t)\\left[\\frac{\\exp(i(t-\\frac{\\pi}{4}))}{\\sqrt{2}}t + \\frac{\\exp(i(t+\\frac{\\pi}{2}))}{2}\\right]\n\\end{split}\n$$</p>\n\n<p>Finally, take the imaginary part of both sides:</p>\n\n<p>$$\\int t\\exp(t)\\sin(t) dt = \\exp(t)\\left[\\frac{\\sin\\left(t-\\frac{\\pi}{4}\\right)}{\\sqrt{2}}t + \\frac{\\cos(t)}{2}\\right]$$</p>\n" }, { "answer_id": 3069117, "score": 3, "text": "<p>This can also be solved another way (a little longer but correct nevertheless and could be more basic and readable for people who just started learning calculus):</p>\n\n<p>We will use the results of following(easy - just substitute):</p>\n\n<p><span class=\"math-container\">$$\n∫ e^x \\sin (x) dx = \\frac{e^x \\sin (x) - e^x \\cos (x)}{2} + C\n$$</span></p>\n\n<p><span class=\"math-container\">$$\n∫ e^{\\ln (x)+x}dx = ∫ xe^xdx = xe^x-e^x + C\n$$</span>\nand</p>\n\n<p><span class=\"math-container\">$$\n∫ e^x \\cos (x) dx = \\frac{e^x \\sin (x) + e^x \\cos (x)}{2} + C\n$$</span></p>\n\n<p>Now, back to the main integral. The main problem is that the formula contains not the usual two but three factors(<span class=\"math-container\">$x$</span>, <span class=\"math-container\">$e^x$</span> and <span class=\"math-container\">$\\sin (x)$</span>). 
However, using basic logarithm and <span class=\"math-container\">$\\exp $</span> properties we can transform the \\exp ression into one with just two factors:</p>\n\n<p><span class=\"math-container\">$$\nx e^x = e^{\\ln (x)} e^x = e^{\\ln (x) + x}\n$$</span></p>\n\n<p>this leaves us with</p>\n\n<p><span class=\"math-container\">\\begin{equation*}\nI =\n\\int xe^xsinx dx =\n\\int e^{ln(x) + x} \\sin (x) dx =\n\\end{equation*}</span></p>\n\n<p>integrate by parts with substituting:</p>\n\n<p><span class=\"math-container\">$$\n\\begin{equation*}\n\\left[\n \\begin{alignedat}{2}\n u &amp;= \\sin (x) \\quad &amp; du &amp;=\\cos (x) \\\\\n dv &amp;= e^{\\ln (x)+x} \\quad &amp; v &amp;= xe^x-e^x\n \\end{alignedat}\\,\n\\right] \n\\end{equation*}\n$$</span></p>\n\n<p><span class=\"math-container\">$$\n= e^xx\\sin (x)-e^x\\sin (x)-∫ xe^x\\cos (x) -e^x\\cos (x)dx \n$$</span></p>\n\n<p><span class=\"math-container\">$$\n= e^xx\\sin (x)-e^x\\sin (x)-∫ xe^x\\cos (x)dx + ∫ e^x\\cos (x)dx\n$$</span></p>\n\n<p>Now, another three factor integral appears, which can also be integrated by parts using the previous trick.</p>\n\n<p><span class=\"math-container\">$$\n∫ xe^x\\cos (x)dx = \n$$</span>\n<span class=\"math-container\">$$\n\\begin{equation*}\n= ∫ e^{\\ln (x)+x}\\cos (x)dx = \n\\left[\n \\begin{alignedat}{2}\n u &amp;= \\cos (x) \\quad &amp; du &amp;=-\\sin (x) \\\\\n dv &amp;= e^{\\ln (x)+x} \\quad &amp; v &amp;= xe^x-e^x\n \\end{alignedat}\\,\n\\right] =\n\\end{equation*}\n$$</span>\n<span class=\"math-container\">$$\n= xe^x\\cos (x)-e^x\\cos (x)+ ∫ x e^x \\sin (x) dx - ∫ e^x \\sin (x) dx =\n$$</span>\n<span class=\"math-container\">$$\n= xe^x\\cos (x)-e^x\\cos (x)+ I - ∫ e^x \\sin (x) dx \n$$</span></p>\n\n<p>Back to the previous equation, plug in the result above flipping the signs accordingly:</p>\n\n<p><span class=\"math-container\">$$\nI = e^xx\\sin (x)-e^x\\sin (x) + ∫ e^x\\cos (x)dx - xe^x\\cos (x)+e^x\\cos (x)- I + ∫ e^x \\sin (x) dx\n$$</span>\nGreat! 
We obtained the wanted integral with negative sign. Let's move it to the other side of equation:\n<span class=\"math-container\">$$\n2I = e^xx\\sin (x)-e^x\\sin (x) + ∫ e^x\\cos (x)dx - xe^x\\cos (x)+e^x\\cos (x) + ∫ e^x \\sin (x) dx\n$$</span>\nand plug in the results of simpler integrals:\n<span class=\"math-container\">$$\n2I = e^xx\\sin (x)-e^x\\sin (x) + \\frac{e^x \\sin (x) + e^x \\cos (x)}{2} - xe^x\\cos (x)+e^x\\cos (x) + \\frac{e^x \\sin (x) - e^x \\cos (x)}{2} + C\n$$</span>\nnow just simplify the right side and divide by 2 to obtain the final result:\n<span class=\"math-container\">$$\nI = \\frac {xe^x\\sin (x) - xe^x\\cos (x) +e^x\\cos (x)}{2} + C\n$$</span></p>\n" }, { "answer_id": 2080949, "score": 1, "text": "<p><strong>By indeterminate coefficients</strong>:</p>\n\n<p>A term like $xe^x\\sin x$ can be generated by the derivative of itself (due to $e^x$), which will also generate $xe^x\\cos x$ and $e^x\\sin x$.</p>\n\n<p>Then we are tempted to try</p>\n\n<p>$$f(x)=e^x(x(A\\sin x+B\\cos x)+(C\\sin x+D\\cos x)),$$</p>\n\n<p>and</p>\n\n<p>$$f'(x)=e^x(x(A\\sin x+B\\cos x)+(C\\sin x+D\\cos x)+(A\\sin x+B\\cos x)+x(A\\cos x-B\\sin x)+(C\\cos x-D\\sin x)).$$</p>\n\n<p>We identify,</p>\n\n<p>$$A-B=1,\\\\A+B=0,\\\\C+A-D=0,\\\\D+B+C=0$$</p>\n\n<p>and obtain</p>\n\n<p>$$\\frac12xe^x(\\sin x-\\cos x)+\\frac12\\cos x.$$</p>\n" }, { "answer_id": 2080684, "score": 0, "text": "<p>First integrate $\\sin(x) \\exp(x),$ by parts, with the exponential as $d v.$ (you will need to do the integration by parts twice), then use the same method for the original integral.</p>\n" }, { "answer_id": 2080678, "score": 10, "text": "<p>If one may recall that $\\sin(x)=\\Im e^{ix}$, then</p>\n\n<p>$$\\int x\\sin(x)e^x\\ dx=\\Im\\int xe^{(1+i)x}\\ dx$$</p>\n\n<p>With a quick integration by parts, we have</p>\n\n<p>$$=\\Im\\left(\\frac1{1+i}xe^{(1+i)x}-\\frac1{1+i}\\int e^{(1+i)x}\\ 
dx\\right)\\\\=\\Im\\left(\\frac1{1+i}xe^{(1+i)x}-\\frac1{(1+i)^2}e^{(1+i)x}+C\\right)\\\\=\\frac12\\left(x\\sin(x)e^x-x\\cos(x)e^x+\\cos(x)e^x\\right)$$</p>\n" }, { "answer_id": 2080675, "score": 9, "text": "<p>Solution without complex numbers:</p>\n\n<p>Let $I=\\int e^x x\\sin xdx$ Integrating by parts:</p>\n\n<p>$$I=e^x x\\sin x-\\int e^x x\\cos xdx-\\int e^x\\sin xdx$$</p>\n\n<p>Then one more time by parts:</p>\n\n<p>$$\\int e^x x\\cos xdx=e^x x\\cos x + I-\\int e^x\\cos xdx$$</p>\n\n<p>So:</p>\n\n<p>$$2I=e^x x\\sin x-e^x x\\cos x+\\int e^x(\\cos x-\\sin x)dx$$</p>\n\n<p>Now (by parts again or by direct observation):</p>\n\n<p>$$\\int e^x(\\cos x-\\sin x)dx=e^x \\cos x$$</p>\n\n<p>So:</p>\n\n<p>$$I=\\frac{e^x x\\sin x-e^x x\\cos x+e^x \\cos x}{2}$$</p>\n" }, { "answer_id": 2080679, "score": 4, "text": "<p><strong>HINTS:</strong> </p>\n\n<p>Use Euler's Formula to write $$x\\sin(x)e^x=\\text{Im}(xe^{(1+i)x})$$Integrate $\\int xe^{(1+i)x}\\,dx$ by parts with $u=x$ and $v=\\frac{e^{(1+i)x}}{1+i}$ and finish by taking the imaginary part.</p>\n" }, { "answer_id": 2080926, "score": 3, "text": "<p>This solution doesn't use integration by parts. We start with</p>\n\n<p>$$\\int\\exp(x) dx = \\exp(x)$$</p>\n\n<p>Substituting $x = \\lambda t$ yields:</p>\n\n<p>$$\\int\\exp(\\lambda t) dt = \\frac{\\exp(\\lambda t)}{\\lambda}$$</p>\n\n<p>Substitute $\\lambda = 1+\\epsilon + i$ and expand both sides to first order in $\\epsilon$. 
Equating the coefficient of $\\epsilon$ of both sides yields:</p>\n\n<p>$$\\begin{split}\\int t\\exp(t)\\exp(i t) dt &amp;= \\exp(t)\\exp(it)\\left[\\frac{1-i}{2}t + \\frac{i}{2}\\right]\\\\\n&amp; = \\exp(t)\\left[\\frac{\\exp(i(t-\\frac{\\pi}{4}))}{\\sqrt{2}}t + \\frac{\\exp(i(t+\\frac{\\pi}{2}))}{2}\\right]\n\\end{split}\n$$</p>\n\n<p>Finally, take the imaginary part of both sides:</p>\n\n<p>$$\\int t\\exp(t)\\sin(t) dt = \\exp(t)\\left[\\frac{\\sin\\left(t-\\frac{\\pi}{4}\\right)}{\\sqrt{2}}t + \\frac{\\cos(t)}{2}\\right]$$</p>\n" }, { "answer_id": 3069117, "score": 3, "text": "<p>This can also be solved another way (a little longer but correct nevertheless and could be more basic and readable for people who just started learning calculus):</p>\n\n<p>We will use the results of following(easy - just substitute):</p>\n\n<p><span class=\"math-container\">$$\n∫ e^x \\sin (x) dx = \\frac{e^x \\sin (x) - e^x \\cos (x)}{2} + C\n$$</span></p>\n\n<p><span class=\"math-container\">$$\n∫ e^{\\ln (x)+x}dx = ∫ xe^xdx = xe^x-e^x + C\n$$</span>\nand</p>\n\n<p><span class=\"math-container\">$$\n∫ e^x \\cos (x) dx = \\frac{e^x \\sin (x) + e^x \\cos (x)}{2} + C\n$$</span></p>\n\n<p>Now, back to the main integral. The main problem is that the formula contains not the usual two but three factors(<span class=\"math-container\">$x$</span>, <span class=\"math-container\">$e^x$</span> and <span class=\"math-container\">$\\sin (x)$</span>). 
However, using basic logarithm and <span class=\"math-container\">$\\exp $</span> properties we can transform the \\exp ression into one with just two factors:</p>\n\n<p><span class=\"math-container\">$$\nx e^x = e^{\\ln (x)} e^x = e^{\\ln (x) + x}\n$$</span></p>\n\n<p>this leaves us with</p>\n\n<p><span class=\"math-container\">\\begin{equation*}\nI =\n\\int xe^xsinx dx =\n\\int e^{ln(x) + x} \\sin (x) dx =\n\\end{equation*}</span></p>\n\n<p>integrate by parts with substituting:</p>\n\n<p><span class=\"math-container\">$$\n\\begin{equation*}\n\\left[\n \\begin{alignedat}{2}\n u &amp;= \\sin (x) \\quad &amp; du &amp;=\\cos (x) \\\\\n dv &amp;= e^{\\ln (x)+x} \\quad &amp; v &amp;= xe^x-e^x\n \\end{alignedat}\\,\n\\right] \n\\end{equation*}\n$$</span></p>\n\n<p><span class=\"math-container\">$$\n= e^xx\\sin (x)-e^x\\sin (x)-∫ xe^x\\cos (x) -e^x\\cos (x)dx \n$$</span></p>\n\n<p><span class=\"math-container\">$$\n= e^xx\\sin (x)-e^x\\sin (x)-∫ xe^x\\cos (x)dx + ∫ e^x\\cos (x)dx\n$$</span></p>\n\n<p>Now, another three factor integral appears, which can also be integrated by parts using the previous trick.</p>\n\n<p><span class=\"math-container\">$$\n∫ xe^x\\cos (x)dx = \n$$</span>\n<span class=\"math-container\">$$\n\\begin{equation*}\n= ∫ e^{\\ln (x)+x}\\cos (x)dx = \n\\left[\n \\begin{alignedat}{2}\n u &amp;= \\cos (x) \\quad &amp; du &amp;=-\\sin (x) \\\\\n dv &amp;= e^{\\ln (x)+x} \\quad &amp; v &amp;= xe^x-e^x\n \\end{alignedat}\\,\n\\right] =\n\\end{equation*}\n$$</span>\n<span class=\"math-container\">$$\n= xe^x\\cos (x)-e^x\\cos (x)+ ∫ x e^x \\sin (x) dx - ∫ e^x \\sin (x) dx =\n$$</span>\n<span class=\"math-container\">$$\n= xe^x\\cos (x)-e^x\\cos (x)+ I - ∫ e^x \\sin (x) dx \n$$</span></p>\n\n<p>Back to the previous equation, plug in the result above flipping the signs accordingly:</p>\n\n<p><span class=\"math-container\">$$\nI = e^xx\\sin (x)-e^x\\sin (x) + ∫ e^x\\cos (x)dx - xe^x\\cos (x)+e^x\\cos (x)- I + ∫ e^x \\sin (x) dx\n$$</span>\nGreat! 
We obtained the wanted integral with negative sign. Let's move it to the other side of equation:\n<span class=\"math-container\">$$\n2I = e^xx\\sin (x)-e^x\\sin (x) + ∫ e^x\\cos (x)dx - xe^x\\cos (x)+e^x\\cos (x) + ∫ e^x \\sin (x) dx\n$$</span>\nand plug in the results of simpler integrals:\n<span class=\"math-container\">$$\n2I = e^xx\\sin (x)-e^x\\sin (x) + \\frac{e^x \\sin (x) + e^x \\cos (x)}{2} - xe^x\\cos (x)+e^x\\cos (x) + \\frac{e^x \\sin (x) - e^x \\cos (x)}{2} + C\n$$</span>\nnow just simplify the right side and divide by 2 to obtain the final result:\n<span class=\"math-container\">$$\nI = \\frac {xe^x\\sin (x) - xe^x\\cos (x) +e^x\\cos (x)}{2} + C\n$$</span></p>\n" }, { "answer_id": 2080949, "score": 1, "text": "<p><strong>By indeterminate coefficients</strong>:</p>\n\n<p>A term like $xe^x\\sin x$ can be generated by the derivative of itself (due to $e^x$), which will also generate $xe^x\\cos x$ and $e^x\\sin x$.</p>\n\n<p>Then we are tempted to try</p>\n\n<p>$$f(x)=e^x(x(A\\sin x+B\\cos x)+(C\\sin x+D\\cos x)),$$</p>\n\n<p>and</p>\n\n<p>$$f'(x)=e^x(x(A\\sin x+B\\cos x)+(C\\sin x+D\\cos x)+(A\\sin x+B\\cos x)+x(A\\cos x-B\\sin x)+(C\\cos x-D\\sin x)).$$</p>\n\n<p>We identify,</p>\n\n<p>$$A-B=1,\\\\A+B=0,\\\\C+A-D=0,\\\\D+B+C=0$$</p>\n\n<p>and obtain</p>\n\n<p>$$\\frac12xe^x(\\sin x-\\cos x)+\\frac12\\cos x.$$</p>\n" }, { "answer_id": 2080684, "score": 0, "text": "<p>First integrate $\\sin(x) \\exp(x),$ by parts, with the exponential as $d v.$ (you will need to do the integration by parts twice), then use the same method for the original integral.</p>\n" }, { "answer_id": 2080678, "score": 10, "text": "<p>If one may recall that $\\sin(x)=\\Im e^{ix}$, then</p>\n\n<p>$$\\int x\\sin(x)e^x\\ dx=\\Im\\int xe^{(1+i)x}\\ dx$$</p>\n\n<p>With a quick integration by parts, we have</p>\n\n<p>$$=\\Im\\left(\\frac1{1+i}xe^{(1+i)x}-\\frac1{1+i}\\int e^{(1+i)x}\\ 
dx\\right)\\\\=\\Im\\left(\\frac1{1+i}xe^{(1+i)x}-\\frac1{(1+i)^2}e^{(1+i)x}+C\\right)\\\\=\\frac12\\left(x\\sin(x)e^x-x\\cos(x)e^x+\\cos(x)e^x\\right)$$</p>\n" }, { "answer_id": 2080675, "score": 9, "text": "<p>Solution without complex numbers:</p>\n\n<p>Let $I=\\int e^x x\\sin xdx$ Integrating by parts:</p>\n\n<p>$$I=e^x x\\sin x-\\int e^x x\\cos xdx-\\int e^x\\sin xdx$$</p>\n\n<p>Then one more time by parts:</p>\n\n<p>$$\\int e^x x\\cos xdx=e^x x\\cos x + I-\\int e^x\\cos xdx$$</p>\n\n<p>So:</p>\n\n<p>$$2I=e^x x\\sin x-e^x x\\cos x+\\int e^x(\\cos x-\\sin x)dx$$</p>\n\n<p>Now (by parts again or by direct observation):</p>\n\n<p>$$\\int e^x(\\cos x-\\sin x)dx=e^x \\cos x$$</p>\n\n<p>So:</p>\n\n<p>$$I=\\frac{e^x x\\sin x-e^x x\\cos x+e^x \\cos x}{2}$$</p>\n" }, { "answer_id": 2080679, "score": 4, "text": "<p><strong>HINTS:</strong> </p>\n\n<p>Use Euler's Formula to write $$x\\sin(x)e^x=\\text{Im}(xe^{(1+i)x})$$Integrate $\\int xe^{(1+i)x}\\,dx$ by parts with $u=x$ and $v=\\frac{e^{(1+i)x}}{1+i}$ and finish by taking the imaginary part.</p>\n" }, { "answer_id": 2080926, "score": 3, "text": "<p>This solution doesn't use integration by parts. We start with</p>\n\n<p>$$\\int\\exp(x) dx = \\exp(x)$$</p>\n\n<p>Substituting $x = \\lambda t$ yields:</p>\n\n<p>$$\\int\\exp(\\lambda t) dt = \\frac{\\exp(\\lambda t)}{\\lambda}$$</p>\n\n<p>Substitute $\\lambda = 1+\\epsilon + i$ and expand both sides to first order in $\\epsilon$. 
Equating the coefficient of $\\epsilon$ of both sides yields:</p>\n\n<p>$$\\begin{split}\\int t\\exp(t)\\exp(i t) dt &amp;= \\exp(t)\\exp(it)\\left[\\frac{1-i}{2}t + \\frac{i}{2}\\right]\\\\\n&amp; = \\exp(t)\\left[\\frac{\\exp(i(t-\\frac{\\pi}{4}))}{\\sqrt{2}}t + \\frac{\\exp(i(t+\\frac{\\pi}{2}))}{2}\\right]\n\\end{split}\n$$</p>\n\n<p>Finally, take the imaginary part of both sides:</p>\n\n<p>$$\\int t\\exp(t)\\sin(t) dt = \\exp(t)\\left[\\frac{\\sin\\left(t-\\frac{\\pi}{4}\\right)}{\\sqrt{2}}t + \\frac{\\cos(t)}{2}\\right]$$</p>\n" }, { "answer_id": 3069117, "score": 3, "text": "<p>This can also be solved another way (a little longer but correct nevertheless and could be more basic and readable for people who just started learning calculus):</p>\n\n<p>We will use the results of following(easy - just substitute):</p>\n\n<p><span class=\"math-container\">$$\n∫ e^x \\sin (x) dx = \\frac{e^x \\sin (x) - e^x \\cos (x)}{2} + C\n$$</span></p>\n\n<p><span class=\"math-container\">$$\n∫ e^{\\ln (x)+x}dx = ∫ xe^xdx = xe^x-e^x + C\n$$</span>\nand</p>\n\n<p><span class=\"math-container\">$$\n∫ e^x \\cos (x) dx = \\frac{e^x \\sin (x) + e^x \\cos (x)}{2} + C\n$$</span></p>\n\n<p>Now, back to the main integral. The main problem is that the formula contains not the usual two but three factors(<span class=\"math-container\">$x$</span>, <span class=\"math-container\">$e^x$</span> and <span class=\"math-container\">$\\sin (x)$</span>). 
However, using basic logarithm and <span class=\"math-container\">$\\exp $</span> properties we can transform the \\exp ression into one with just two factors:</p>\n\n<p><span class=\"math-container\">$$\nx e^x = e^{\\ln (x)} e^x = e^{\\ln (x) + x}\n$$</span></p>\n\n<p>this leaves us with</p>\n\n<p><span class=\"math-container\">\\begin{equation*}\nI =\n\\int xe^xsinx dx =\n\\int e^{ln(x) + x} \\sin (x) dx =\n\\end{equation*}</span></p>\n\n<p>integrate by parts with substituting:</p>\n\n<p><span class=\"math-container\">$$\n\\begin{equation*}\n\\left[\n \\begin{alignedat}{2}\n u &amp;= \\sin (x) \\quad &amp; du &amp;=\\cos (x) \\\\\n dv &amp;= e^{\\ln (x)+x} \\quad &amp; v &amp;= xe^x-e^x\n \\end{alignedat}\\,\n\\right] \n\\end{equation*}\n$$</span></p>\n\n<p><span class=\"math-container\">$$\n= e^xx\\sin (x)-e^x\\sin (x)-∫ xe^x\\cos (x) -e^x\\cos (x)dx \n$$</span></p>\n\n<p><span class=\"math-container\">$$\n= e^xx\\sin (x)-e^x\\sin (x)-∫ xe^x\\cos (x)dx + ∫ e^x\\cos (x)dx\n$$</span></p>\n\n<p>Now, another three factor integral appears, which can also be integrated by parts using the previous trick.</p>\n\n<p><span class=\"math-container\">$$\n∫ xe^x\\cos (x)dx = \n$$</span>\n<span class=\"math-container\">$$\n\\begin{equation*}\n= ∫ e^{\\ln (x)+x}\\cos (x)dx = \n\\left[\n \\begin{alignedat}{2}\n u &amp;= \\cos (x) \\quad &amp; du &amp;=-\\sin (x) \\\\\n dv &amp;= e^{\\ln (x)+x} \\quad &amp; v &amp;= xe^x-e^x\n \\end{alignedat}\\,\n\\right] =\n\\end{equation*}\n$$</span>\n<span class=\"math-container\">$$\n= xe^x\\cos (x)-e^x\\cos (x)+ ∫ x e^x \\sin (x) dx - ∫ e^x \\sin (x) dx =\n$$</span>\n<span class=\"math-container\">$$\n= xe^x\\cos (x)-e^x\\cos (x)+ I - ∫ e^x \\sin (x) dx \n$$</span></p>\n\n<p>Back to the previous equation, plug in the result above flipping the signs accordingly:</p>\n\n<p><span class=\"math-container\">$$\nI = e^xx\\sin (x)-e^x\\sin (x) + ∫ e^x\\cos (x)dx - xe^x\\cos (x)+e^x\\cos (x)- I + ∫ e^x \\sin (x) dx\n$$</span>\nGreat! 
We obtained the wanted integral with negative sign. Let's move it to the other side of equation:\n<span class=\"math-container\">$$\n2I = e^xx\\sin (x)-e^x\\sin (x) + ∫ e^x\\cos (x)dx - xe^x\\cos (x)+e^x\\cos (x) + ∫ e^x \\sin (x) dx\n$$</span>\nand plug in the results of simpler integrals:\n<span class=\"math-container\">$$\n2I = e^xx\\sin (x)-e^x\\sin (x) + \\frac{e^x \\sin (x) + e^x \\cos (x)}{2} - xe^x\\cos (x)+e^x\\cos (x) + \\frac{e^x \\sin (x) - e^x \\cos (x)}{2} + C\n$$</span>\nnow just simplify the right side and divide by 2 to obtain the final result:\n<span class=\"math-container\">$$\nI = \\frac {xe^x\\sin (x) - xe^x\\cos (x) +e^x\\cos (x)}{2} + C\n$$</span></p>\n" }, { "answer_id": 2080949, "score": 1, "text": "<p><strong>By indeterminate coefficients</strong>:</p>\n\n<p>A term like $xe^x\\sin x$ can be generated by the derivative of itself (due to $e^x$), which will also generate $xe^x\\cos x$ and $e^x\\sin x$.</p>\n\n<p>Then we are tempted to try</p>\n\n<p>$$f(x)=e^x(x(A\\sin x+B\\cos x)+(C\\sin x+D\\cos x)),$$</p>\n\n<p>and</p>\n\n<p>$$f'(x)=e^x(x(A\\sin x+B\\cos x)+(C\\sin x+D\\cos x)+(A\\sin x+B\\cos x)+x(A\\cos x-B\\sin x)+(C\\cos x-D\\sin x)).$$</p>\n\n<p>We identify,</p>\n\n<p>$$A-B=1,\\\\A+B=0,\\\\C+A-D=0,\\\\D+B+C=0$$</p>\n\n<p>and obtain</p>\n\n<p>$$\\frac12xe^x(\\sin x-\\cos x)+\\frac12\\cos x.$$</p>\n" }, { "answer_id": 2080684, "score": 0, "text": "<p>First integrate $\\sin(x) \\exp(x),$ by parts, with the exponential as $d v.$ (you will need to do the integration by parts twice), then use the same method for the original integral.</p>\n" }, { "answer_id": 2080678, "score": 10, "text": "<p>If one may recall that $\\sin(x)=\\Im e^{ix}$, then</p>\n\n<p>$$\\int x\\sin(x)e^x\\ dx=\\Im\\int xe^{(1+i)x}\\ dx$$</p>\n\n<p>With a quick integration by parts, we have</p>\n\n<p>$$=\\Im\\left(\\frac1{1+i}xe^{(1+i)x}-\\frac1{1+i}\\int e^{(1+i)x}\\ 
dx\\right)\\\\=\\Im\\left(\\frac1{1+i}xe^{(1+i)x}-\\frac1{(1+i)^2}e^{(1+i)x}+C\\right)\\\\=\\frac12\\left(x\\sin(x)e^x-x\\cos(x)e^x+\\cos(x)e^x\\right)$$</p>\n" }, { "answer_id": 2080675, "score": 9, "text": "<p>Solution without complex numbers:</p>\n\n<p>Let $I=\\int e^x x\\sin xdx$ Integrating by parts:</p>\n\n<p>$$I=e^x x\\sin x-\\int e^x x\\cos xdx-\\int e^x\\sin xdx$$</p>\n\n<p>Then one more time by parts:</p>\n\n<p>$$\\int e^x x\\cos xdx=e^x x\\cos x + I-\\int e^x\\cos xdx$$</p>\n\n<p>So:</p>\n\n<p>$$2I=e^x x\\sin x-e^x x\\cos x+\\int e^x(\\cos x-\\sin x)dx$$</p>\n\n<p>Now (by parts again or by direct observation):</p>\n\n<p>$$\\int e^x(\\cos x-\\sin x)dx=e^x \\cos x$$</p>\n\n<p>So:</p>\n\n<p>$$I=\\frac{e^x x\\sin x-e^x x\\cos x+e^x \\cos x}{2}$$</p>\n" }, { "answer_id": 2080679, "score": 4, "text": "<p><strong>HINTS:</strong> </p>\n\n<p>Use Euler's Formula to write $$x\\sin(x)e^x=\\text{Im}(xe^{(1+i)x})$$Integrate $\\int xe^{(1+i)x}\\,dx$ by parts with $u=x$ and $v=\\frac{e^{(1+i)x}}{1+i}$ and finish by taking the imaginary part.</p>\n" }, { "answer_id": 2080926, "score": 3, "text": "<p>This solution doesn't use integration by parts. We start with</p>\n\n<p>$$\\int\\exp(x) dx = \\exp(x)$$</p>\n\n<p>Substituting $x = \\lambda t$ yields:</p>\n\n<p>$$\\int\\exp(\\lambda t) dt = \\frac{\\exp(\\lambda t)}{\\lambda}$$</p>\n\n<p>Substitute $\\lambda = 1+\\epsilon + i$ and expand both sides to first order in $\\epsilon$. 
Equating the coefficient of $\\epsilon$ of both sides yields:</p>\n\n<p>$$\\begin{split}\\int t\\exp(t)\\exp(i t) dt &amp;= \\exp(t)\\exp(it)\\left[\\frac{1-i}{2}t + \\frac{i}{2}\\right]\\\\\n&amp; = \\exp(t)\\left[\\frac{\\exp(i(t-\\frac{\\pi}{4}))}{\\sqrt{2}}t + \\frac{\\exp(i(t+\\frac{\\pi}{2}))}{2}\\right]\n\\end{split}\n$$</p>\n\n<p>Finally, take the imaginary part of both sides:</p>\n\n<p>$$\\int t\\exp(t)\\sin(t) dt = \\exp(t)\\left[\\frac{\\sin\\left(t-\\frac{\\pi}{4}\\right)}{\\sqrt{2}}t + \\frac{\\cos(t)}{2}\\right]$$</p>\n" }, { "answer_id": 3069117, "score": 3, "text": "<p>This can also be solved another way (a little longer but correct nevertheless and could be more basic and readable for people who just started learning calculus):</p>\n\n<p>We will use the results of following(easy - just substitute):</p>\n\n<p><span class=\"math-container\">$$\n∫ e^x \\sin (x) dx = \\frac{e^x \\sin (x) - e^x \\cos (x)}{2} + C\n$$</span></p>\n\n<p><span class=\"math-container\">$$\n∫ e^{\\ln (x)+x}dx = ∫ xe^xdx = xe^x-e^x + C\n$$</span>\nand</p>\n\n<p><span class=\"math-container\">$$\n∫ e^x \\cos (x) dx = \\frac{e^x \\sin (x) + e^x \\cos (x)}{2} + C\n$$</span></p>\n\n<p>Now, back to the main integral. The main problem is that the formula contains not the usual two but three factors(<span class=\"math-container\">$x$</span>, <span class=\"math-container\">$e^x$</span> and <span class=\"math-container\">$\\sin (x)$</span>). 
However, using basic logarithm and <span class=\"math-container\">$\\exp $</span> properties we can transform the \\exp ression into one with just two factors:</p>\n\n<p><span class=\"math-container\">$$\nx e^x = e^{\\ln (x)} e^x = e^{\\ln (x) + x}\n$$</span></p>\n\n<p>this leaves us with</p>\n\n<p><span class=\"math-container\">\\begin{equation*}\nI =\n\\int xe^xsinx dx =\n\\int e^{ln(x) + x} \\sin (x) dx =\n\\end{equation*}</span></p>\n\n<p>integrate by parts with substituting:</p>\n\n<p><span class=\"math-container\">$$\n\\begin{equation*}\n\\left[\n \\begin{alignedat}{2}\n u &amp;= \\sin (x) \\quad &amp; du &amp;=\\cos (x) \\\\\n dv &amp;= e^{\\ln (x)+x} \\quad &amp; v &amp;= xe^x-e^x\n \\end{alignedat}\\,\n\\right] \n\\end{equation*}\n$$</span></p>\n\n<p><span class=\"math-container\">$$\n= e^xx\\sin (x)-e^x\\sin (x)-∫ xe^x\\cos (x) -e^x\\cos (x)dx \n$$</span></p>\n\n<p><span class=\"math-container\">$$\n= e^xx\\sin (x)-e^x\\sin (x)-∫ xe^x\\cos (x)dx + ∫ e^x\\cos (x)dx\n$$</span></p>\n\n<p>Now, another three factor integral appears, which can also be integrated by parts using the previous trick.</p>\n\n<p><span class=\"math-container\">$$\n∫ xe^x\\cos (x)dx = \n$$</span>\n<span class=\"math-container\">$$\n\\begin{equation*}\n= ∫ e^{\\ln (x)+x}\\cos (x)dx = \n\\left[\n \\begin{alignedat}{2}\n u &amp;= \\cos (x) \\quad &amp; du &amp;=-\\sin (x) \\\\\n dv &amp;= e^{\\ln (x)+x} \\quad &amp; v &amp;= xe^x-e^x\n \\end{alignedat}\\,\n\\right] =\n\\end{equation*}\n$$</span>\n<span class=\"math-container\">$$\n= xe^x\\cos (x)-e^x\\cos (x)+ ∫ x e^x \\sin (x) dx - ∫ e^x \\sin (x) dx =\n$$</span>\n<span class=\"math-container\">$$\n= xe^x\\cos (x)-e^x\\cos (x)+ I - ∫ e^x \\sin (x) dx \n$$</span></p>\n\n<p>Back to the previous equation, plug in the result above flipping the signs accordingly:</p>\n\n<p><span class=\"math-container\">$$\nI = e^xx\\sin (x)-e^x\\sin (x) + ∫ e^x\\cos (x)dx - xe^x\\cos (x)+e^x\\cos (x)- I + ∫ e^x \\sin (x) dx\n$$</span>\nGreat! 
We obtained the wanted integral with negative sign. Let's move it to the other side of equation:\n<span class=\"math-container\">$$\n2I = e^xx\\sin (x)-e^x\\sin (x) + ∫ e^x\\cos (x)dx - xe^x\\cos (x)+e^x\\cos (x) + ∫ e^x \\sin (x) dx\n$$</span>\nand plug in the results of simpler integrals:\n<span class=\"math-container\">$$\n2I = e^xx\\sin (x)-e^x\\sin (x) + \\frac{e^x \\sin (x) + e^x \\cos (x)}{2} - xe^x\\cos (x)+e^x\\cos (x) + \\frac{e^x \\sin (x) - e^x \\cos (x)}{2} + C\n$$</span>\nnow just simplify the right side and divide by 2 to obtain the final result:\n<span class=\"math-container\">$$\nI = \\frac {xe^x\\sin (x) - xe^x\\cos (x) +e^x\\cos (x)}{2} + C\n$$</span></p>\n" }, { "answer_id": 2080949, "score": 1, "text": "<p><strong>By indeterminate coefficients</strong>:</p>\n\n<p>A term like $xe^x\\sin x$ can be generated by the derivative of itself (due to $e^x$), which will also generate $xe^x\\cos x$ and $e^x\\sin x$.</p>\n\n<p>Then we are tempted to try</p>\n\n<p>$$f(x)=e^x(x(A\\sin x+B\\cos x)+(C\\sin x+D\\cos x)),$$</p>\n\n<p>and</p>\n\n<p>$$f'(x)=e^x(x(A\\sin x+B\\cos x)+(C\\sin x+D\\cos x)+(A\\sin x+B\\cos x)+x(A\\cos x-B\\sin x)+(C\\cos x-D\\sin x)).$$</p>\n\n<p>We identify,</p>\n\n<p>$$A-B=1,\\\\A+B=0,\\\\C+A-D=0,\\\\D+B+C=0$$</p>\n\n<p>and obtain</p>\n\n<p>$$\\frac12xe^x(\\sin x-\\cos x)+\\frac12\\cos x.$$</p>\n" }, { "answer_id": 2080684, "score": 0, "text": "<p>First integrate $\\sin(x) \\exp(x),$ by parts, with the exponential as $d v.$ (you will need to do the integration by parts twice), then use the same method for the original integral.</p>\n" }, { "answer_id": 2080678, "score": 10, "text": "<p>If one may recall that $\\sin(x)=\\Im e^{ix}$, then</p>\n\n<p>$$\\int x\\sin(x)e^x\\ dx=\\Im\\int xe^{(1+i)x}\\ dx$$</p>\n\n<p>With a quick integration by parts, we have</p>\n\n<p>$$=\\Im\\left(\\frac1{1+i}xe^{(1+i)x}-\\frac1{1+i}\\int e^{(1+i)x}\\ 
dx\\right)\\\\=\\Im\\left(\\frac1{1+i}xe^{(1+i)x}-\\frac1{(1+i)^2}e^{(1+i)x}+C\\right)\\\\=\\frac12\\left(x\\sin(x)e^x-x\\cos(x)e^x+\\cos(x)e^x\\right)$$</p>\n" }, { "answer_id": 2080675, "score": 9, "text": "<p>Solution without complex numbers:</p>\n\n<p>Let $I=\\int e^x x\\sin xdx$ Integrating by parts:</p>\n\n<p>$$I=e^x x\\sin x-\\int e^x x\\cos xdx-\\int e^x\\sin xdx$$</p>\n\n<p>Then one more time by parts:</p>\n\n<p>$$\\int e^x x\\cos xdx=e^x x\\cos x + I-\\int e^x\\cos xdx$$</p>\n\n<p>So:</p>\n\n<p>$$2I=e^x x\\sin x-e^x x\\cos x+\\int e^x(\\cos x-\\sin x)dx$$</p>\n\n<p>Now (by parts again or by direct observation):</p>\n\n<p>$$\\int e^x(\\cos x-\\sin x)dx=e^x \\cos x$$</p>\n\n<p>So:</p>\n\n<p>$$I=\\frac{e^x x\\sin x-e^x x\\cos x+e^x \\cos x}{2}$$</p>\n" }, { "answer_id": 2080679, "score": 4, "text": "<p><strong>HINTS:</strong> </p>\n\n<p>Use Euler's Formula to write $$x\\sin(x)e^x=\\text{Im}(xe^{(1+i)x})$$Integrate $\\int xe^{(1+i)x}\\,dx$ by parts with $u=x$ and $v=\\frac{e^{(1+i)x}}{1+i}$ and finish by taking the imaginary part.</p>\n" }, { "answer_id": 2080926, "score": 3, "text": "<p>This solution doesn't use integration by parts. We start with</p>\n\n<p>$$\\int\\exp(x) dx = \\exp(x)$$</p>\n\n<p>Substituting $x = \\lambda t$ yields:</p>\n\n<p>$$\\int\\exp(\\lambda t) dt = \\frac{\\exp(\\lambda t)}{\\lambda}$$</p>\n\n<p>Substitute $\\lambda = 1+\\epsilon + i$ and expand both sides to first order in $\\epsilon$. 
Equating the coefficient of $\\epsilon$ of both sides yields:</p>\n\n<p>$$\\begin{split}\\int t\\exp(t)\\exp(i t) dt &amp;= \\exp(t)\\exp(it)\\left[\\frac{1-i}{2}t + \\frac{i}{2}\\right]\\\\\n&amp; = \\exp(t)\\left[\\frac{\\exp(i(t-\\frac{\\pi}{4}))}{\\sqrt{2}}t + \\frac{\\exp(i(t+\\frac{\\pi}{2}))}{2}\\right]\n\\end{split}\n$$</p>\n\n<p>Finally, take the imaginary part of both sides:</p>\n\n<p>$$\\int t\\exp(t)\\sin(t) dt = \\exp(t)\\left[\\frac{\\sin\\left(t-\\frac{\\pi}{4}\\right)}{\\sqrt{2}}t + \\frac{\\cos(t)}{2}\\right]$$</p>\n" } ]
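All of the answers above arrive at the same antiderivative, $I = \frac{xe^x\sin x - xe^x\cos x + e^x\cos x}{2} + C$. As an editorial sanity check (a minimal sketch, not taken from any of the answers), the closed form can be verified numerically by differentiating it with a central difference and comparing against the integrand $x e^x \sin x$:

```python
import math

def F(x):
    # Antiderivative common to the answers: (x e^x sin x - x e^x cos x + e^x cos x) / 2
    return 0.5 * math.exp(x) * (x * math.sin(x) - x * math.cos(x) + math.cos(x))

def integrand(x):
    # The original integrand x e^x sin x
    return x * math.exp(x) * math.sin(x)

# F'(x) should reproduce the integrand; check with a central difference.
h = 1e-6
for x in [-2.0, -0.5, 0.3, 1.7, 3.0]:
    dF = (F(x + h) - F(x - h)) / (2 * h)
    assert abs(dF - integrand(x)) < 1e-4, (x, dF, integrand(x))
print("F'(x) matches x e^x sin(x) at all sample points")
```

The constant $C$ drops out under differentiation, so it is omitted here.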
Question: Integrate $\int x e^{x} \sin x dx$ Answer 1: <p>If one may recall that $\sin(x)=\Im e^{ix}$, then</p> <p>$$\int x\sin(x)e^x\ dx=\Im\int xe^{(1+i)x}\ dx$$</p> <p>With a quick integration by parts, we have</p> <p>$$=\Im\left(\frac1{1+i}xe^{(1+i)x}-\frac1{1+i}\int e^{(1+i)x}\ dx\right)\\=\Im\left(\frac1{1+i}xe^{(1+i)x}-\frac1{(1+i)^2}e^{(1+i)x}+C\right)\\=\frac12\left(x\sin(x)e^x-x\cos(x)e^x+\cos(x)e^x\right)$$</p> Answer 2: <p>Solution without complex numbers:</p> <p>Let $I=\int e^x x\sin xdx$ Integrating by parts:</p> <p>$$I=e^x x\sin x-\int e^x x\cos xdx-\int e^x\sin xdx$$</p> <p>Then one more time by parts:</p> <p>$$\int e^x x\cos xdx=e^x x\cos x + I-\int e^x\cos xdx$$</p> <p>So:</p> <p>$$2I=e^x x\sin x-e^x x\cos x+\int e^x(\cos x-\sin x)dx$$</p> <p>Now (by parts again or by direct observation):</p> <p>$$\int e^x(\cos x-\sin x)dx=e^x \cos x$$</p> <p>So:</p> <p>$$I=\frac{e^x x\sin x-e^x x\cos x+e^x \cos x}{2}$$</p> Answer 3: <p><strong>HINTS:</strong> </p> <p>Use Euler's Formula to write $$x\sin(x)e^x=\text{Im}(xe^{(1+i)x})$$Integrate $\int xe^{(1+i)x}\,dx$ by parts with $u=x$ and $v=\frac{e^{(1+i)x}}{1+i}$ and finish by taking the imaginary part.</p> Answer 4: <p>This solution doesn't use integration by parts. We start with</p> <p>$$\int\exp(x) dx = \exp(x)$$</p> <p>Substituting $x = \lambda t$ yields:</p> <p>$$\int\exp(\lambda t) dt = \frac{\exp(\lambda t)}{\lambda}$$</p> <p>Substitute $\lambda = 1+\epsilon + i$ and expand both sides to first order in $\epsilon$. 
Equating the coefficient of $\epsilon$ of both sides yields:</p> <p>$$\begin{split}\int t\exp(t)\exp(i t) dt &amp;= \exp(t)\exp(it)\left[\frac{1-i}{2}t + \frac{i}{2}\right]\\ &amp; = \exp(t)\left[\frac{\exp(i(t-\frac{\pi}{4}))}{\sqrt{2}}t + \frac{\exp(i(t+\frac{\pi}{2}))}{2}\right] \end{split} $$</p> <p>Finally, take the imaginary part of both sides:</p> <p>$$\int t\exp(t)\sin(t) dt = \exp(t)\left[\frac{\sin\left(t-\frac{\pi}{4}\right)}{\sqrt{2}}t + \frac{\cos(t)}{2}\right]$$</p> Answer 5: <p>This can also be solved another way (a little longer, but correct nevertheless, and perhaps more basic and readable for people who have just started learning calculus):</p> <p>We will use the results of the following (easy: just substitute):</p> <p><span class="math-container">$$ ∫ e^x \sin (x) dx = \frac{e^x \sin (x) - e^x \cos (x)}{2} + C $$</span></p> <p><span class="math-container">$$ ∫ e^{\ln (x)+x}dx = ∫ xe^xdx = xe^x-e^x + C $$</span> and</p> <p><span class="math-container">$$ ∫ e^x \cos (x) dx = \frac{e^x \sin (x) + e^x \cos (x)}{2} + C $$</span></p> <p>Now, back to the main integral. The main problem is that the formula contains not the usual two but three factors (<span class="math-container">$x$</span>, <span class="math-container">$e^x$</span> and <span class="math-container">$\sin (x)$</span>). 
However, using basic logarithm and <span class="math-container">$\exp$</span> properties we can transform the expression into one with just two factors:</p> <p><span class="math-container">$$ x e^x = e^{\ln (x)} e^x = e^{\ln (x) + x} $$</span></p> <p>this leaves us with</p> <p><span class="math-container">\begin{equation*} I = \int xe^x\sin x\, dx = \int e^{\ln (x) + x} \sin (x) dx = \end{equation*}</span></p> <p>integrate by parts, substituting:</p> <p><span class="math-container">$$ \begin{equation*} \left[ \begin{alignedat}{2} u &amp;= \sin (x) \quad &amp; du &amp;=\cos (x)\, dx \\ dv &amp;= e^{\ln (x)+x} \quad &amp; v &amp;= xe^x-e^x \end{alignedat}\, \right] \end{equation*} $$</span></p> <p><span class="math-container">$$ = e^xx\sin (x)-e^x\sin (x)-∫ xe^x\cos (x) -e^x\cos (x)dx $$</span></p> <p><span class="math-container">$$ = e^xx\sin (x)-e^x\sin (x)-∫ xe^x\cos (x)dx + ∫ e^x\cos (x)dx $$</span></p> <p>Now, another three-factor integral appears, which can also be integrated by parts using the previous trick.</p> <p><span class="math-container">$$ ∫ xe^x\cos (x)dx = $$</span> <span class="math-container">$$ \begin{equation*} = ∫ e^{\ln (x)+x}\cos (x)dx = \left[ \begin{alignedat}{2} u &amp;= \cos (x) \quad &amp; du &amp;=-\sin (x)\, dx \\ dv &amp;= e^{\ln (x)+x} \quad &amp; v &amp;= xe^x-e^x \end{alignedat}\, \right] = \end{equation*} $$</span> <span class="math-container">$$ = xe^x\cos (x)-e^x\cos (x)+ ∫ x e^x \sin (x) dx - ∫ e^x \sin (x) dx = $$</span> <span class="math-container">$$ = xe^x\cos (x)-e^x\cos (x)+ I - ∫ e^x \sin (x) dx $$</span></p> <p>Back to the previous equation, plug in the result above flipping the signs accordingly:</p> <p><span class="math-container">$$ I = e^xx\sin (x)-e^x\sin (x) + ∫ e^x\cos (x)dx - xe^x\cos (x)+e^x\cos (x)- I + ∫ e^x \sin (x) dx $$</span> Great! We obtained the wanted integral with a negative sign. 
Let's move it to the other side of equation: <span class="math-container">$$ 2I = e^xx\sin (x)-e^x\sin (x) + ∫ e^x\cos (x)dx - xe^x\cos (x)+e^x\cos (x) + ∫ e^x \sin (x) dx $$</span> and plug in the results of simpler integrals: <span class="math-container">$$ 2I = e^xx\sin (x)-e^x\sin (x) + \frac{e^x \sin (x) + e^x \cos (x)}{2} - xe^x\cos (x)+e^x\cos (x) + \frac{e^x \sin (x) - e^x \cos (x)}{2} + C $$</span> now just simplify the right side and divide by 2 to obtain the final result: <span class="math-container">$$ I = \frac {xe^x\sin (x) - xe^x\cos (x) +e^x\cos (x)}{2} + C $$</span></p> Answer 6: <p><strong>By indeterminate coefficients</strong>:</p> <p>A term like $xe^x\sin x$ can be generated by the derivative of itself (due to $e^x$), which will also generate $xe^x\cos x$ and $e^x\sin x$.</p> <p>Then we are tempted to try</p> <p>$$f(x)=e^x(x(A\sin x+B\cos x)+(C\sin x+D\cos x)),$$</p> <p>and</p> <p>$$f'(x)=e^x(x(A\sin x+B\cos x)+(C\sin x+D\cos x)+(A\sin x+B\cos x)+x(A\cos x-B\sin x)+(C\cos x-D\sin x)).$$</p> <p>We identify,</p> <p>$$A-B=1,\\A+B=0,\\C+A-D=0,\\D+B+C=0$$</p> <p>and obtain</p> <p>$$\frac12xe^x(\sin x-\cos x)+\frac12\cos x.$$</p> Answer 7: <p>First integrate $\sin(x) \exp(x),$ by parts, with the exponential as $d v.$ (you will need to do the integration by parts twice), then use the same method for the original integral.</p> Answer 8: <p>If one may recall that $\sin(x)=\Im e^{ix}$, then</p> <p>$$\int x\sin(x)e^x\ dx=\Im\int xe^{(1+i)x}\ dx$$</p> <p>With a quick integration by parts, we have</p> <p>$$=\Im\left(\frac1{1+i}xe^{(1+i)x}-\frac1{1+i}\int e^{(1+i)x}\ dx\right)\\=\Im\left(\frac1{1+i}xe^{(1+i)x}-\frac1{(1+i)^2}e^{(1+i)x}+C\right)\\=\frac12\left(x\sin(x)e^x-x\cos(x)e^x+\cos(x)e^x\right)$$</p> Answer 9: <p>Solution without complex numbers:</p> <p>Let $I=\int e^x x\sin xdx$ Integrating by parts:</p> <p>$$I=e^x x\sin x-\int e^x x\cos xdx-\int e^x\sin xdx$$</p> <p>Then one more time by parts:</p> <p>$$\int e^x x\cos xdx=e^x x\cos x + I-\int e^x\cos 
xdx$$</p> <p>So:</p> <p>$$2I=e^x x\sin x-e^x x\cos x+\int e^x(\cos x-\sin x)dx$$</p> <p>Now (by parts again or by direct observation):</p> <p>$$\int e^x(\cos x-\sin x)dx=e^x \cos x$$</p> <p>So:</p> <p>$$I=\frac{e^x x\sin x-e^x x\cos x+e^x \cos x}{2}$$</p> Answer 10: <p><strong>HINTS:</strong> </p> <p>Use Euler's Formula to write $$x\sin(x)e^x=\text{Im}(xe^{(1+i)x})$$Integrate $\int xe^{(1+i)x}\,dx$ by parts with $u=x$ and $v=\frac{e^{(1+i)x}}{1+i}$ and finish by taking the imaginary part.</p> Answer 11: <p>This solution doesn't use integration by parts. We start with</p> <p>$$\int\exp(x) dx = \exp(x)$$</p> <p>Substituting $x = \lambda t$ yields:</p> <p>$$\int\exp(\lambda t) dt = \frac{\exp(\lambda t)}{\lambda}$$</p> <p>Substitute $\lambda = 1+\epsilon + i$ and expand both sides to first order in $\epsilon$. Equating the coefficient of $\epsilon$ of both sides yields:</p> <p>$$\begin{split}\int t\exp(t)\exp(i t) dt &amp;= \exp(t)\exp(it)\left[\frac{1-i}{2}t + \frac{i}{2}\right]\\ &amp; = \exp(t)\left[\frac{\exp(i(t-\frac{\pi}{4}))}{\sqrt{2}}t + \frac{\exp(i(t+\frac{\pi}{2}))}{2}\right] \end{split} $$</p> <p>Finally, take the imaginary part of both sides:</p> <p>$$\int t\exp(t)\sin(t) dt = \exp(t)\left[\frac{\sin\left(t-\frac{\pi}{4}\right)}{\sqrt{2}}t + \frac{\cos(t)}{2}\right]$$</p> Answer 12: <p>This can also be solved another way (a little longer but correct nevertheless and could be more basic and readable for people who just started learning calculus):</p> <p>We will use the results of following(easy - just substitute):</p> <p><span class="math-container">$$ ∫ e^x \sin (x) dx = \frac{e^x \sin (x) - e^x \cos (x)}{2} + C $$</span></p> <p><span class="math-container">$$ ∫ e^{\ln (x)+x}dx = ∫ xe^xdx = xe^x-e^x + C $$</span> and</p> <p><span class="math-container">$$ ∫ e^x \cos (x) dx = \frac{e^x \sin (x) + e^x \cos (x)}{2} + C $$</span></p> <p>Now, back to the main integral. 
The main problem is that the formula contains not the usual two but three factors(<span class="math-container">$x$</span>, <span class="math-container">$e^x$</span> and <span class="math-container">$\sin (x)$</span>). However, using basic logarithm and <span class="math-container">$\exp $</span> properties we can transform the \exp ression into one with just two factors:</p> <p><span class="math-container">$$ x e^x = e^{\ln (x)} e^x = e^{\ln (x) + x} $$</span></p> <p>this leaves us with</p> <p><span class="math-container">\begin{equation*} I = \int xe^xsinx dx = \int e^{ln(x) + x} \sin (x) dx = \end{equation*}</span></p> <p>integrate by parts with substituting:</p> <p><span class="math-container">$$ \begin{equation*} \left[ \begin{alignedat}{2} u &amp;= \sin (x) \quad &amp; du &amp;=\cos (x) \\ dv &amp;= e^{\ln (x)+x} \quad &amp; v &amp;= xe^x-e^x \end{alignedat}\, \right] \end{equation*} $$</span></p> <p><span class="math-container">$$ = e^xx\sin (x)-e^x\sin (x)-∫ xe^x\cos (x) -e^x\cos (x)dx $$</span></p> <p><span class="math-container">$$ = e^xx\sin (x)-e^x\sin (x)-∫ xe^x\cos (x)dx + ∫ e^x\cos (x)dx $$</span></p> <p>Now, another three factor integral appears, which can also be integrated by parts using the previous trick.</p> <p><span class="math-container">$$ ∫ xe^x\cos (x)dx = $$</span> <span class="math-container">$$ \begin{equation*} = ∫ e^{\ln (x)+x}\cos (x)dx = \left[ \begin{alignedat}{2} u &amp;= \cos (x) \quad &amp; du &amp;=-\sin (x) \\ dv &amp;= e^{\ln (x)+x} \quad &amp; v &amp;= xe^x-e^x \end{alignedat}\, \right] = \end{equation*} $$</span> <span class="math-container">$$ = xe^x\cos (x)-e^x\cos (x)+ ∫ x e^x \sin (x) dx - ∫ e^x \sin (x) dx = $$</span> <span class="math-container">$$ = xe^x\cos (x)-e^x\cos (x)+ I - ∫ e^x \sin (x) dx $$</span></p> <p>Back to the previous equation, plug in the result above flipping the signs accordingly:</p> <p><span class="math-container">$$ I = e^xx\sin (x)-e^x\sin (x) + ∫ e^x\cos (x)dx - xe^x\cos (x)+e^x\cos (x)- I + ∫ 
e^x \sin (x) dx $$</span> Great! We obtained the wanted integral with negative sign. Let's move it to the other side of equation: <span class="math-container">$$ 2I = e^xx\sin (x)-e^x\sin (x) + ∫ e^x\cos (x)dx - xe^x\cos (x)+e^x\cos (x) + ∫ e^x \sin (x) dx $$</span> and plug in the results of simpler integrals: <span class="math-container">$$ 2I = e^xx\sin (x)-e^x\sin (x) + \frac{e^x \sin (x) + e^x \cos (x)}{2} - xe^x\cos (x)+e^x\cos (x) + \frac{e^x \sin (x) - e^x \cos (x)}{2} + C $$</span> now just simplify the right side and divide by 2 to obtain the final result: <span class="math-container">$$ I = \frac {xe^x\sin (x) - xe^x\cos (x) +e^x\cos (x)}{2} + C $$</span></p> Answer 13: <p><strong>By indeterminate coefficients</strong>:</p> <p>A term like $xe^x\sin x$ can be generated by the derivative of itself (due to $e^x$), which will also generate $xe^x\cos x$ and $e^x\sin x$.</p> <p>Then we are tempted to try</p> <p>$$f(x)=e^x(x(A\sin x+B\cos x)+(C\sin x+D\cos x)),$$</p> <p>and</p> <p>$$f'(x)=e^x(x(A\sin x+B\cos x)+(C\sin x+D\cos x)+(A\sin x+B\cos x)+x(A\cos x-B\sin x)+(C\cos x-D\sin x)).$$</p> <p>We identify,</p> <p>$$A-B=1,\\A+B=0,\\C+A-D=0,\\D+B+C=0$$</p> <p>and obtain</p> <p>$$\frac12xe^x(\sin x-\cos x)+\frac12\cos x.$$</p> Answer 14: <p>First integrate $\sin(x) \exp(x),$ by parts, with the exponential as $d v.$ (you will need to do the integration by parts twice), then use the same method for the original integral.</p> Answer 15: <p>If one may recall that $\sin(x)=\Im e^{ix}$, then</p> <p>$$\int x\sin(x)e^x\ dx=\Im\int xe^{(1+i)x}\ dx$$</p> <p>With a quick integration by parts, we have</p> <p>$$=\Im\left(\frac1{1+i}xe^{(1+i)x}-\frac1{1+i}\int e^{(1+i)x}\ dx\right)\\=\Im\left(\frac1{1+i}xe^{(1+i)x}-\frac1{(1+i)^2}e^{(1+i)x}+C\right)\\=\frac12\left(x\sin(x)e^x-x\cos(x)e^x+\cos(x)e^x\right)$$</p> Answer 16: <p>Solution without complex numbers:</p> <p>Let $I=\int e^x x\sin xdx$ Integrating by parts:</p> <p>$$I=e^x x\sin x-\int e^x x\cos xdx-\int e^x\sin xdx$$</p> 
<p>Then one more time by parts:</p> <p>$$\int e^x x\cos x\,dx=e^x x\cos x + I-\int e^x\cos x\,dx$$</p> <p>So:</p> <p>$$2I=e^x x\sin x-e^x x\cos x+\int e^x(\cos x-\sin x)dx$$</p> <p>Now (by parts again or by direct observation):</p> <p>$$\int e^x(\cos x-\sin x)dx=e^x \cos x$$</p> <p>So:</p> <p>$$I=\frac{e^x x\sin x-e^x x\cos x+e^x \cos x}{2}$$</p> Answer 17: <p><strong>HINTS:</strong> </p> <p>Use Euler's Formula to write $$x\sin(x)e^x=\text{Im}(xe^{(1+i)x})$$Integrate $\int xe^{(1+i)x}\,dx$ by parts with $u=x$ and $v=\frac{e^{(1+i)x}}{1+i}$ and finish by taking the imaginary part.</p> Answer 18: <p>This solution doesn't use integration by parts. We start with</p> <p>$$\int\exp(x) dx = \exp(x)$$</p> <p>Substituting $x = \lambda t$ yields:</p> <p>$$\int\exp(\lambda t) dt = \frac{\exp(\lambda t)}{\lambda}$$</p> <p>Substitute $\lambda = 1+\epsilon + i$ and expand both sides to first order in $\epsilon$. Equating the coefficients of $\epsilon$ on both sides yields:</p> <p>$$\begin{split}\int t\exp(t)\exp(i t) dt &amp;= \exp(t)\exp(it)\left[\frac{1-i}{2}t + \frac{i}{2}\right]\\ &amp; = \exp(t)\left[\frac{\exp(i(t-\frac{\pi}{4}))}{\sqrt{2}}t + \frac{\exp(i(t+\frac{\pi}{2}))}{2}\right] \end{split} $$</p> <p>Finally, take the imaginary part of both sides:</p> <p>$$\int t\exp(t)\sin(t) dt = \exp(t)\left[\frac{\sin\left(t-\frac{\pi}{4}\right)}{\sqrt{2}}t + \frac{\cos(t)}{2}\right]$$</p> Answer 19: <p>This can also be solved another way (a little longer, but correct nevertheless, and perhaps more basic and readable for people who have just started learning calculus):</p> <p>We will use the following results (each easy to verify by differentiating the right-hand side):</p> <p><span class="math-container">$$ ∫ e^x \sin (x) dx = \frac{e^x \sin (x) - e^x \cos (x)}{2} + C $$</span></p> <p><span class="math-container">$$ ∫ e^{\ln (x)+x}dx = ∫ xe^xdx = xe^x-e^x + C $$</span> and</p> <p><span class="math-container">$$ ∫ e^x \cos (x) dx = \frac{e^x \sin (x) + e^x \cos (x)}{2} + C $$</span></p> <p>Now, back to the main integral. 
The main problem is that the formula contains not the usual two but three factors (<span class="math-container">$x$</span>, <span class="math-container">$e^x$</span> and <span class="math-container">$\sin (x)$</span>). However, using basic logarithm and <span class="math-container">$\exp $</span> properties we can transform the expression into one with just two factors (assuming <span class="math-container">$x&gt;0$</span>; the final antiderivative can then be checked for all <span class="math-container">$x$</span> by differentiation):</p> <p><span class="math-container">$$ x e^x = e^{\ln (x)} e^x = e^{\ln (x) + x} $$</span></p> <p>This leaves us with</p> <p><span class="math-container">\begin{equation*} I = \int xe^x\sin (x)\, dx = \int e^{\ln (x) + x} \sin (x)\, dx = \end{equation*}</span></p> <p>Integrate by parts, substituting:</p> <p><span class="math-container">$$ \left[ \begin{alignedat}{2} u &amp;= \sin (x) \quad &amp; du &amp;=\cos (x)\,dx \\ dv &amp;= e^{\ln (x)+x}\,dx \quad &amp; v &amp;= xe^x-e^x \end{alignedat}\, \right] $$</span></p> <p><span class="math-container">$$ = e^xx\sin (x)-e^x\sin (x)-∫ \left( xe^x\cos (x) -e^x\cos (x) \right) dx $$</span></p> <p><span class="math-container">$$ = e^xx\sin (x)-e^x\sin (x)-∫ xe^x\cos (x)dx + ∫ e^x\cos (x)dx $$</span></p> <p>Now another three-factor integral appears, which can also be integrated by parts using the previous trick.</p> <p><span class="math-container">$$ ∫ xe^x\cos (x)dx = $$</span> <span class="math-container">$$ = ∫ e^{\ln (x)+x}\cos (x)dx = \left[ \begin{alignedat}{2} u &amp;= \cos (x) \quad &amp; du &amp;=-\sin (x)\,dx \\ dv &amp;= e^{\ln (x)+x}\,dx \quad &amp; v &amp;= xe^x-e^x \end{alignedat}\, \right] = $$</span> <span class="math-container">$$ = xe^x\cos (x)-e^x\cos (x)+ ∫ x e^x \sin (x) dx - ∫ e^x \sin (x) dx = $$</span> <span class="math-container">$$ = xe^x\cos (x)-e^x\cos (x)+ I - ∫ e^x \sin (x) dx $$</span></p> <p>Back to the previous equation, plug in the result above, flipping the signs accordingly:</p> <p><span class="math-container">$$ I = e^xx\sin (x)-e^x\sin (x) + ∫ e^x\cos (x)dx - xe^x\cos (x)+e^x\cos (x)- I + ∫ 
e^x \sin (x) dx $$</span> Great! We obtained the wanted integral with negative sign. Let's move it to the other side of equation: <span class="math-container">$$ 2I = e^xx\sin (x)-e^x\sin (x) + ∫ e^x\cos (x)dx - xe^x\cos (x)+e^x\cos (x) + ∫ e^x \sin (x) dx $$</span> and plug in the results of simpler integrals: <span class="math-container">$$ 2I = e^xx\sin (x)-e^x\sin (x) + \frac{e^x \sin (x) + e^x \cos (x)}{2} - xe^x\cos (x)+e^x\cos (x) + \frac{e^x \sin (x) - e^x \cos (x)}{2} + C $$</span> now just simplify the right side and divide by 2 to obtain the final result: <span class="math-container">$$ I = \frac {xe^x\sin (x) - xe^x\cos (x) +e^x\cos (x)}{2} + C $$</span></p> Answer 20: <p><strong>By indeterminate coefficients</strong>:</p> <p>A term like $xe^x\sin x$ can be generated by the derivative of itself (due to $e^x$), which will also generate $xe^x\cos x$ and $e^x\sin x$.</p> <p>Then we are tempted to try</p> <p>$$f(x)=e^x(x(A\sin x+B\cos x)+(C\sin x+D\cos x)),$$</p> <p>and</p> <p>$$f'(x)=e^x(x(A\sin x+B\cos x)+(C\sin x+D\cos x)+(A\sin x+B\cos x)+x(A\cos x-B\sin x)+(C\cos x-D\sin x)).$$</p> <p>We identify,</p> <p>$$A-B=1,\\A+B=0,\\C+A-D=0,\\D+B+C=0$$</p> <p>and obtain</p> <p>$$\frac12xe^x(\sin x-\cos x)+\frac12\cos x.$$</p> Answer 21: <p>First integrate $\sin(x) \exp(x),$ by parts, with the exponential as $d v.$ (you will need to do the integration by parts twice), then use the same method for the original integral.</p> Answer 22: <p>If one may recall that $\sin(x)=\Im e^{ix}$, then</p> <p>$$\int x\sin(x)e^x\ dx=\Im\int xe^{(1+i)x}\ dx$$</p> <p>With a quick integration by parts, we have</p> <p>$$=\Im\left(\frac1{1+i}xe^{(1+i)x}-\frac1{1+i}\int e^{(1+i)x}\ dx\right)\\=\Im\left(\frac1{1+i}xe^{(1+i)x}-\frac1{(1+i)^2}e^{(1+i)x}+C\right)\\=\frac12\left(x\sin(x)e^x-x\cos(x)e^x+\cos(x)e^x\right)$$</p> Answer 23: <p>Solution without complex numbers:</p> <p>Let $I=\int e^x x\sin xdx$ Integrating by parts:</p> <p>$$I=e^x x\sin x-\int e^x x\cos xdx-\int e^x\sin xdx$$</p> 
<p>Then one more time by parts:</p> <p>$$\int e^x x\cos xdx=e^x x\cos x + I-\int e^x\cos xdx$$</p> <p>So:</p> <p>$$2I=e^x x\sin x-e^x x\cos x+\int e^x(\cos x-\sin x)dx$$</p> <p>Now (by parts again or by direct observation):</p> <p>$$\int e^x(\cos x-\sin x)dx=e^x \cos x$$</p> <p>So:</p> <p>$$I=\frac{e^x x\sin x-e^x x\cos x+e^x \cos x}{2}$$</p> Answer 24: <p><strong>HINTS:</strong> </p> <p>Use Euler's Formula to write $$x\sin(x)e^x=\text{Im}(xe^{(1+i)x})$$Integrate $\int xe^{(1+i)x}\,dx$ by parts with $u=x$ and $v=\frac{e^{(1+i)x}}{1+i}$ and finish by taking the imaginary part.</p> Answer 25: <p>This solution doesn't use integration by parts. We start with</p> <p>$$\int\exp(x) dx = \exp(x)$$</p> <p>Substituting $x = \lambda t$ yields:</p> <p>$$\int\exp(\lambda t) dt = \frac{\exp(\lambda t)}{\lambda}$$</p> <p>Substitute $\lambda = 1+\epsilon + i$ and expand both sides to first order in $\epsilon$. Equating the coefficient of $\epsilon$ of both sides yields:</p> <p>$$\begin{split}\int t\exp(t)\exp(i t) dt &amp;= \exp(t)\exp(it)\left[\frac{1-i}{2}t + \frac{i}{2}\right]\\ &amp; = \exp(t)\left[\frac{\exp(i(t-\frac{\pi}{4}))}{\sqrt{2}}t + \frac{\exp(i(t+\frac{\pi}{2}))}{2}\right] \end{split} $$</p> <p>Finally, take the imaginary part of both sides:</p> <p>$$\int t\exp(t)\sin(t) dt = \exp(t)\left[\frac{\sin\left(t-\frac{\pi}{4}\right)}{\sqrt{2}}t + \frac{\cos(t)}{2}\right]$$</p> Answer 26: <p>This can also be solved another way (a little longer but correct nevertheless and could be more basic and readable for people who just started learning calculus):</p> <p>We will use the results of following(easy - just substitute):</p> <p><span class="math-container">$$ ∫ e^x \sin (x) dx = \frac{e^x \sin (x) - e^x \cos (x)}{2} + C $$</span></p> <p><span class="math-container">$$ ∫ e^{\ln (x)+x}dx = ∫ xe^xdx = xe^x-e^x + C $$</span> and</p> <p><span class="math-container">$$ ∫ e^x \cos (x) dx = \frac{e^x \sin (x) + e^x \cos (x)}{2} + C $$</span></p> <p>Now, back to the main integral. 
The main problem is that the formula contains not the usual two but three factors(<span class="math-container">$x$</span>, <span class="math-container">$e^x$</span> and <span class="math-container">$\sin (x)$</span>). However, using basic logarithm and <span class="math-container">$\exp $</span> properties we can transform the \exp ression into one with just two factors:</p> <p><span class="math-container">$$ x e^x = e^{\ln (x)} e^x = e^{\ln (x) + x} $$</span></p> <p>this leaves us with</p> <p><span class="math-container">\begin{equation*} I = \int xe^xsinx dx = \int e^{ln(x) + x} \sin (x) dx = \end{equation*}</span></p> <p>integrate by parts with substituting:</p> <p><span class="math-container">$$ \begin{equation*} \left[ \begin{alignedat}{2} u &amp;= \sin (x) \quad &amp; du &amp;=\cos (x) \\ dv &amp;= e^{\ln (x)+x} \quad &amp; v &amp;= xe^x-e^x \end{alignedat}\, \right] \end{equation*} $$</span></p> <p><span class="math-container">$$ = e^xx\sin (x)-e^x\sin (x)-∫ xe^x\cos (x) -e^x\cos (x)dx $$</span></p> <p><span class="math-container">$$ = e^xx\sin (x)-e^x\sin (x)-∫ xe^x\cos (x)dx + ∫ e^x\cos (x)dx $$</span></p> <p>Now, another three factor integral appears, which can also be integrated by parts using the previous trick.</p> <p><span class="math-container">$$ ∫ xe^x\cos (x)dx = $$</span> <span class="math-container">$$ \begin{equation*} = ∫ e^{\ln (x)+x}\cos (x)dx = \left[ \begin{alignedat}{2} u &amp;= \cos (x) \quad &amp; du &amp;=-\sin (x) \\ dv &amp;= e^{\ln (x)+x} \quad &amp; v &amp;= xe^x-e^x \end{alignedat}\, \right] = \end{equation*} $$</span> <span class="math-container">$$ = xe^x\cos (x)-e^x\cos (x)+ ∫ x e^x \sin (x) dx - ∫ e^x \sin (x) dx = $$</span> <span class="math-container">$$ = xe^x\cos (x)-e^x\cos (x)+ I - ∫ e^x \sin (x) dx $$</span></p> <p>Back to the previous equation, plug in the result above flipping the signs accordingly:</p> <p><span class="math-container">$$ I = e^xx\sin (x)-e^x\sin (x) + ∫ e^x\cos (x)dx - xe^x\cos (x)+e^x\cos (x)- I + ∫ 
e^x \sin (x) dx $$</span> Great! We obtained the wanted integral with negative sign. Let's move it to the other side of equation: <span class="math-container">$$ 2I = e^xx\sin (x)-e^x\sin (x) + ∫ e^x\cos (x)dx - xe^x\cos (x)+e^x\cos (x) + ∫ e^x \sin (x) dx $$</span> and plug in the results of simpler integrals: <span class="math-container">$$ 2I = e^xx\sin (x)-e^x\sin (x) + \frac{e^x \sin (x) + e^x \cos (x)}{2} - xe^x\cos (x)+e^x\cos (x) + \frac{e^x \sin (x) - e^x \cos (x)}{2} + C $$</span> now just simplify the right side and divide by 2 to obtain the final result: <span class="math-container">$$ I = \frac {xe^x\sin (x) - xe^x\cos (x) +e^x\cos (x)}{2} + C $$</span></p> Answer 27: <p><strong>By indeterminate coefficients</strong>:</p> <p>A term like $xe^x\sin x$ can be generated by the derivative of itself (due to $e^x$), which will also generate $xe^x\cos x$ and $e^x\sin x$.</p> <p>Then we are tempted to try</p> <p>$$f(x)=e^x(x(A\sin x+B\cos x)+(C\sin x+D\cos x)),$$</p> <p>and</p> <p>$$f'(x)=e^x(x(A\sin x+B\cos x)+(C\sin x+D\cos x)+(A\sin x+B\cos x)+x(A\cos x-B\sin x)+(C\cos x-D\sin x)).$$</p> <p>We identify,</p> <p>$$A-B=1,\\A+B=0,\\C+A-D=0,\\D+B+C=0$$</p> <p>and obtain</p> <p>$$\frac12xe^x(\sin x-\cos x)+\frac12\cos x.$$</p> Answer 28: <p>First integrate $\sin(x) \exp(x),$ by parts, with the exponential as $d v.$ (you will need to do the integration by parts twice), then use the same method for the original integral.</p> Answer 29: <p>If one may recall that $\sin(x)=\Im e^{ix}$, then</p> <p>$$\int x\sin(x)e^x\ dx=\Im\int xe^{(1+i)x}\ dx$$</p> <p>With a quick integration by parts, we have</p> <p>$$=\Im\left(\frac1{1+i}xe^{(1+i)x}-\frac1{1+i}\int e^{(1+i)x}\ dx\right)\\=\Im\left(\frac1{1+i}xe^{(1+i)x}-\frac1{(1+i)^2}e^{(1+i)x}+C\right)\\=\frac12\left(x\sin(x)e^x-x\cos(x)e^x+\cos(x)e^x\right)$$</p> Answer 30: <p>Solution without complex numbers:</p> <p>Let $I=\int e^x x\sin xdx$ Integrating by parts:</p> <p>$$I=e^x x\sin x-\int e^x x\cos xdx-\int e^x\sin xdx$$</p> 
<p>Then one more time by parts:</p> <p>$$\int e^x x\cos xdx=e^x x\cos x + I-\int e^x\cos xdx$$</p> <p>So:</p> <p>$$2I=e^x x\sin x-e^x x\cos x+\int e^x(\cos x-\sin x)dx$$</p> <p>Now (by parts again or by direct observation):</p> <p>$$\int e^x(\cos x-\sin x)dx=e^x \cos x$$</p> <p>So:</p> <p>$$I=\frac{e^x x\sin x-e^x x\cos x+e^x \cos x}{2}$$</p> Answer 31: <p><strong>HINTS:</strong> </p> <p>Use Euler's Formula to write $$x\sin(x)e^x=\text{Im}(xe^{(1+i)x})$$Integrate $\int xe^{(1+i)x}\,dx$ by parts with $u=x$ and $v=\frac{e^{(1+i)x}}{1+i}$ and finish by taking the imaginary part.</p> Answer 32: <p>This solution doesn't use integration by parts. We start with</p> <p>$$\int\exp(x) dx = \exp(x)$$</p> <p>Substituting $x = \lambda t$ yields:</p> <p>$$\int\exp(\lambda t) dt = \frac{\exp(\lambda t)}{\lambda}$$</p> <p>Substitute $\lambda = 1+\epsilon + i$ and expand both sides to first order in $\epsilon$. Equating the coefficient of $\epsilon$ of both sides yields:</p> <p>$$\begin{split}\int t\exp(t)\exp(i t) dt &amp;= \exp(t)\exp(it)\left[\frac{1-i}{2}t + \frac{i}{2}\right]\\ &amp; = \exp(t)\left[\frac{\exp(i(t-\frac{\pi}{4}))}{\sqrt{2}}t + \frac{\exp(i(t+\frac{\pi}{2}))}{2}\right] \end{split} $$</p> <p>Finally, take the imaginary part of both sides:</p> <p>$$\int t\exp(t)\sin(t) dt = \exp(t)\left[\frac{\sin\left(t-\frac{\pi}{4}\right)}{\sqrt{2}}t + \frac{\cos(t)}{2}\right]$$</p> Answer 33: <p>This can also be solved another way (a little longer but correct nevertheless and could be more basic and readable for people who just started learning calculus):</p> <p>We will use the results of following(easy - just substitute):</p> <p><span class="math-container">$$ ∫ e^x \sin (x) dx = \frac{e^x \sin (x) - e^x \cos (x)}{2} + C $$</span></p> <p><span class="math-container">$$ ∫ e^{\ln (x)+x}dx = ∫ xe^xdx = xe^x-e^x + C $$</span> and</p> <p><span class="math-container">$$ ∫ e^x \cos (x) dx = \frac{e^x \sin (x) + e^x \cos (x)}{2} + C $$</span></p> <p>Now, back to the main integral. 
The main problem is that the formula contains not the usual two but three factors(<span class="math-container">$x$</span>, <span class="math-container">$e^x$</span> and <span class="math-container">$\sin (x)$</span>). However, using basic logarithm and <span class="math-container">$\exp $</span> properties we can transform the \exp ression into one with just two factors:</p> <p><span class="math-container">$$ x e^x = e^{\ln (x)} e^x = e^{\ln (x) + x} $$</span></p> <p>this leaves us with</p> <p><span class="math-container">\begin{equation*} I = \int xe^xsinx dx = \int e^{ln(x) + x} \sin (x) dx = \end{equation*}</span></p> <p>integrate by parts with substituting:</p> <p><span class="math-container">$$ \begin{equation*} \left[ \begin{alignedat}{2} u &amp;= \sin (x) \quad &amp; du &amp;=\cos (x) \\ dv &amp;= e^{\ln (x)+x} \quad &amp; v &amp;= xe^x-e^x \end{alignedat}\, \right] \end{equation*} $$</span></p> <p><span class="math-container">$$ = e^xx\sin (x)-e^x\sin (x)-∫ xe^x\cos (x) -e^x\cos (x)dx $$</span></p> <p><span class="math-container">$$ = e^xx\sin (x)-e^x\sin (x)-∫ xe^x\cos (x)dx + ∫ e^x\cos (x)dx $$</span></p> <p>Now, another three factor integral appears, which can also be integrated by parts using the previous trick.</p> <p><span class="math-container">$$ ∫ xe^x\cos (x)dx = $$</span> <span class="math-container">$$ \begin{equation*} = ∫ e^{\ln (x)+x}\cos (x)dx = \left[ \begin{alignedat}{2} u &amp;= \cos (x) \quad &amp; du &amp;=-\sin (x) \\ dv &amp;= e^{\ln (x)+x} \quad &amp; v &amp;= xe^x-e^x \end{alignedat}\, \right] = \end{equation*} $$</span> <span class="math-container">$$ = xe^x\cos (x)-e^x\cos (x)+ ∫ x e^x \sin (x) dx - ∫ e^x \sin (x) dx = $$</span> <span class="math-container">$$ = xe^x\cos (x)-e^x\cos (x)+ I - ∫ e^x \sin (x) dx $$</span></p> <p>Back to the previous equation, plug in the result above flipping the signs accordingly:</p> <p><span class="math-container">$$ I = e^xx\sin (x)-e^x\sin (x) + ∫ e^x\cos (x)dx - xe^x\cos (x)+e^x\cos (x)- I + ∫ 
e^x \sin (x) dx $$</span> Great! We obtained the wanted integral with negative sign. Let's move it to the other side of equation: <span class="math-container">$$ 2I = e^xx\sin (x)-e^x\sin (x) + ∫ e^x\cos (x)dx - xe^x\cos (x)+e^x\cos (x) + ∫ e^x \sin (x) dx $$</span> and plug in the results of simpler integrals: <span class="math-container">$$ 2I = e^xx\sin (x)-e^x\sin (x) + \frac{e^x \sin (x) + e^x \cos (x)}{2} - xe^x\cos (x)+e^x\cos (x) + \frac{e^x \sin (x) - e^x \cos (x)}{2} + C $$</span> now just simplify the right side and divide by 2 to obtain the final result: <span class="math-container">$$ I = \frac {xe^x\sin (x) - xe^x\cos (x) +e^x\cos (x)}{2} + C $$</span></p> Answer 34: <p><strong>By indeterminate coefficients</strong>:</p> <p>A term like $xe^x\sin x$ can be generated by the derivative of itself (due to $e^x$), which will also generate $xe^x\cos x$ and $e^x\sin x$.</p> <p>Then we are tempted to try</p> <p>$$f(x)=e^x(x(A\sin x+B\cos x)+(C\sin x+D\cos x)),$$</p> <p>and</p> <p>$$f'(x)=e^x(x(A\sin x+B\cos x)+(C\sin x+D\cos x)+(A\sin x+B\cos x)+x(A\cos x-B\sin x)+(C\cos x-D\sin x)).$$</p> <p>We identify,</p> <p>$$A-B=1,\\A+B=0,\\C+A-D=0,\\D+B+C=0$$</p> <p>and obtain</p> <p>$$\frac12xe^x(\sin x-\cos x)+\frac12\cos x.$$</p> Answer 35: <p>First integrate $\sin(x) \exp(x),$ by parts, with the exponential as $d v.$ (you will need to do the integration by parts twice), then use the same method for the original integral.</p> Answer 36: <p>If one may recall that $\sin(x)=\Im e^{ix}$, then</p> <p>$$\int x\sin(x)e^x\ dx=\Im\int xe^{(1+i)x}\ dx$$</p> <p>With a quick integration by parts, we have</p> <p>$$=\Im\left(\frac1{1+i}xe^{(1+i)x}-\frac1{1+i}\int e^{(1+i)x}\ dx\right)\\=\Im\left(\frac1{1+i}xe^{(1+i)x}-\frac1{(1+i)^2}e^{(1+i)x}+C\right)\\=\frac12\left(x\sin(x)e^x-x\cos(x)e^x+\cos(x)e^x\right)$$</p> Answer 37: <p>Solution without complex numbers:</p> <p>Let $I=\int e^x x\sin xdx$ Integrating by parts:</p> <p>$$I=e^x x\sin x-\int e^x x\cos xdx-\int e^x\sin xdx$$</p> 
<p>Then one more time by parts:</p> <p>$$\int e^x x\cos xdx=e^x x\cos x + I-\int e^x\cos xdx$$</p> <p>So:</p> <p>$$2I=e^x x\sin x-e^x x\cos x+\int e^x(\cos x-\sin x)dx$$</p> <p>Now (by parts again or by direct observation):</p> <p>$$\int e^x(\cos x-\sin x)dx=e^x \cos x$$</p> <p>So:</p> <p>$$I=\frac{e^x x\sin x-e^x x\cos x+e^x \cos x}{2}$$</p> Answer 38: <p><strong>HINTS:</strong> </p> <p>Use Euler's Formula to write $$x\sin(x)e^x=\text{Im}(xe^{(1+i)x})$$Integrate $\int xe^{(1+i)x}\,dx$ by parts with $u=x$ and $v=\frac{e^{(1+i)x}}{1+i}$ and finish by taking the imaginary part.</p> Answer 39: <p>This solution doesn't use integration by parts. We start with</p> <p>$$\int\exp(x) dx = \exp(x)$$</p> <p>Substituting $x = \lambda t$ yields:</p> <p>$$\int\exp(\lambda t) dt = \frac{\exp(\lambda t)}{\lambda}$$</p> <p>Substitute $\lambda = 1+\epsilon + i$ and expand both sides to first order in $\epsilon$. Equating the coefficient of $\epsilon$ of both sides yields:</p> <p>$$\begin{split}\int t\exp(t)\exp(i t) dt &amp;= \exp(t)\exp(it)\left[\frac{1-i}{2}t + \frac{i}{2}\right]\\ &amp; = \exp(t)\left[\frac{\exp(i(t-\frac{\pi}{4}))}{\sqrt{2}}t + \frac{\exp(i(t+\frac{\pi}{2}))}{2}\right] \end{split} $$</p> <p>Finally, take the imaginary part of both sides:</p> <p>$$\int t\exp(t)\sin(t) dt = \exp(t)\left[\frac{\sin\left(t-\frac{\pi}{4}\right)}{\sqrt{2}}t + \frac{\cos(t)}{2}\right]$$</p> Answer 40: <p>This can also be solved another way (a little longer but correct nevertheless and could be more basic and readable for people who just started learning calculus):</p> <p>We will use the results of following(easy - just substitute):</p> <p><span class="math-container">$$ ∫ e^x \sin (x) dx = \frac{e^x \sin (x) - e^x \cos (x)}{2} + C $$</span></p> <p><span class="math-container">$$ ∫ e^{\ln (x)+x}dx = ∫ xe^xdx = xe^x-e^x + C $$</span> and</p> <p><span class="math-container">$$ ∫ e^x \cos (x) dx = \frac{e^x \sin (x) + e^x \cos (x)}{2} + C $$</span></p> <p>Now, back to the main integral. 
The main problem is that the formula contains not the usual two but three factors(<span class="math-container">$x$</span>, <span class="math-container">$e^x$</span> and <span class="math-container">$\sin (x)$</span>). However, using basic logarithm and <span class="math-container">$\exp $</span> properties we can transform the \exp ression into one with just two factors:</p> <p><span class="math-container">$$ x e^x = e^{\ln (x)} e^x = e^{\ln (x) + x} $$</span></p> <p>this leaves us with</p> <p><span class="math-container">\begin{equation*} I = \int xe^xsinx dx = \int e^{ln(x) + x} \sin (x) dx = \end{equation*}</span></p> <p>integrate by parts with substituting:</p> <p><span class="math-container">$$ \begin{equation*} \left[ \begin{alignedat}{2} u &amp;= \sin (x) \quad &amp; du &amp;=\cos (x) \\ dv &amp;= e^{\ln (x)+x} \quad &amp; v &amp;= xe^x-e^x \end{alignedat}\, \right] \end{equation*} $$</span></p> <p><span class="math-container">$$ = e^xx\sin (x)-e^x\sin (x)-∫ xe^x\cos (x) -e^x\cos (x)dx $$</span></p> <p><span class="math-container">$$ = e^xx\sin (x)-e^x\sin (x)-∫ xe^x\cos (x)dx + ∫ e^x\cos (x)dx $$</span></p> <p>Now, another three factor integral appears, which can also be integrated by parts using the previous trick.</p> <p><span class="math-container">$$ ∫ xe^x\cos (x)dx = $$</span> <span class="math-container">$$ \begin{equation*} = ∫ e^{\ln (x)+x}\cos (x)dx = \left[ \begin{alignedat}{2} u &amp;= \cos (x) \quad &amp; du &amp;=-\sin (x) \\ dv &amp;= e^{\ln (x)+x} \quad &amp; v &amp;= xe^x-e^x \end{alignedat}\, \right] = \end{equation*} $$</span> <span class="math-container">$$ = xe^x\cos (x)-e^x\cos (x)+ ∫ x e^x \sin (x) dx - ∫ e^x \sin (x) dx = $$</span> <span class="math-container">$$ = xe^x\cos (x)-e^x\cos (x)+ I - ∫ e^x \sin (x) dx $$</span></p> <p>Back to the previous equation, plug in the result above flipping the signs accordingly:</p> <p><span class="math-container">$$ I = e^xx\sin (x)-e^x\sin (x) + ∫ e^x\cos (x)dx - xe^x\cos (x)+e^x\cos (x)- I + ∫ 
e^x \sin (x) dx $$</span> Great! We obtained the wanted integral with negative sign. Let's move it to the other side of equation: <span class="math-container">$$ 2I = e^xx\sin (x)-e^x\sin (x) + ∫ e^x\cos (x)dx - xe^x\cos (x)+e^x\cos (x) + ∫ e^x \sin (x) dx $$</span> and plug in the results of simpler integrals: <span class="math-container">$$ 2I = e^xx\sin (x)-e^x\sin (x) + \frac{e^x \sin (x) + e^x \cos (x)}{2} - xe^x\cos (x)+e^x\cos (x) + \frac{e^x \sin (x) - e^x \cos (x)}{2} + C $$</span> now just simplify the right side and divide by 2 to obtain the final result: <span class="math-container">$$ I = \frac {xe^x\sin (x) - xe^x\cos (x) +e^x\cos (x)}{2} + C $$</span></p> Answer 41: <p><strong>By indeterminate coefficients</strong>:</p> <p>A term like $xe^x\sin x$ can be generated by the derivative of itself (due to $e^x$), which will also generate $xe^x\cos x$ and $e^x\sin x$.</p> <p>Then we are tempted to try</p> <p>$$f(x)=e^x(x(A\sin x+B\cos x)+(C\sin x+D\cos x)),$$</p> <p>and</p> <p>$$f'(x)=e^x(x(A\sin x+B\cos x)+(C\sin x+D\cos x)+(A\sin x+B\cos x)+x(A\cos x-B\sin x)+(C\cos x-D\sin x)).$$</p> <p>We identify,</p> <p>$$A-B=1,\\A+B=0,\\C+A-D=0,\\D+B+C=0$$</p> <p>and obtain</p> <p>$$\frac12xe^x(\sin x-\cos x)+\frac12\cos x.$$</p> Answer 42: <p>First integrate $\sin(x) \exp(x),$ by parts, with the exponential as $d v.$ (you will need to do the integration by parts twice), then use the same method for the original integral.</p> Answer 43: <p>If one may recall that $\sin(x)=\Im e^{ix}$, then</p> <p>$$\int x\sin(x)e^x\ dx=\Im\int xe^{(1+i)x}\ dx$$</p> <p>With a quick integration by parts, we have</p> <p>$$=\Im\left(\frac1{1+i}xe^{(1+i)x}-\frac1{1+i}\int e^{(1+i)x}\ dx\right)\\=\Im\left(\frac1{1+i}xe^{(1+i)x}-\frac1{(1+i)^2}e^{(1+i)x}+C\right)\\=\frac12\left(x\sin(x)e^x-x\cos(x)e^x+\cos(x)e^x\right)$$</p> Answer 44: <p>Solution without complex numbers:</p> <p>Let $I=\int e^x x\sin xdx$ Integrating by parts:</p> <p>$$I=e^x x\sin x-\int e^x x\cos xdx-\int e^x\sin xdx$$</p> 
<p>Then one more time by parts:</p> <p>$$\int e^x x\cos xdx=e^x x\cos x + I-\int e^x\cos xdx$$</p> <p>So:</p> <p>$$2I=e^x x\sin x-e^x x\cos x+\int e^x(\cos x-\sin x)dx$$</p> <p>Now (by parts again or by direct observation):</p> <p>$$\int e^x(\cos x-\sin x)dx=e^x \cos x$$</p> <p>So:</p> <p>$$I=\frac{e^x x\sin x-e^x x\cos x+e^x \cos x}{2}$$</p> Answer 45: <p><strong>HINTS:</strong> </p> <p>Use Euler's Formula to write $$x\sin(x)e^x=\text{Im}(xe^{(1+i)x})$$Integrate $\int xe^{(1+i)x}\,dx$ by parts with $u=x$ and $v=\frac{e^{(1+i)x}}{1+i}$ and finish by taking the imaginary part.</p> Answer 46: <p>This solution doesn't use integration by parts. We start with</p> <p>$$\int\exp(x) dx = \exp(x)$$</p> <p>Substituting $x = \lambda t$ yields:</p> <p>$$\int\exp(\lambda t) dt = \frac{\exp(\lambda t)}{\lambda}$$</p> <p>Substitute $\lambda = 1+\epsilon + i$ and expand both sides to first order in $\epsilon$. Equating the coefficient of $\epsilon$ of both sides yields:</p> <p>$$\begin{split}\int t\exp(t)\exp(i t) dt &amp;= \exp(t)\exp(it)\left[\frac{1-i}{2}t + \frac{i}{2}\right]\\ &amp; = \exp(t)\left[\frac{\exp(i(t-\frac{\pi}{4}))}{\sqrt{2}}t + \frac{\exp(i(t+\frac{\pi}{2}))}{2}\right] \end{split} $$</p> <p>Finally, take the imaginary part of both sides:</p> <p>$$\int t\exp(t)\sin(t) dt = \exp(t)\left[\frac{\sin\left(t-\frac{\pi}{4}\right)}{\sqrt{2}}t + \frac{\cos(t)}{2}\right]$$</p> Answer 47: <p>This can also be solved another way (a little longer but correct nevertheless and could be more basic and readable for people who just started learning calculus):</p> <p>We will use the results of following(easy - just substitute):</p> <p><span class="math-container">$$ ∫ e^x \sin (x) dx = \frac{e^x \sin (x) - e^x \cos (x)}{2} + C $$</span></p> <p><span class="math-container">$$ ∫ e^{\ln (x)+x}dx = ∫ xe^xdx = xe^x-e^x + C $$</span> and</p> <p><span class="math-container">$$ ∫ e^x \cos (x) dx = \frac{e^x \sin (x) + e^x \cos (x)}{2} + C $$</span></p> <p>Now, back to the main integral. 
The main problem is that the formula contains not the usual two but three factors(<span class="math-container">$x$</span>, <span class="math-container">$e^x$</span> and <span class="math-container">$\sin (x)$</span>). However, using basic logarithm and <span class="math-container">$\exp $</span> properties we can transform the \exp ression into one with just two factors:</p> <p><span class="math-container">$$ x e^x = e^{\ln (x)} e^x = e^{\ln (x) + x} $$</span></p> <p>this leaves us with</p> <p><span class="math-container">\begin{equation*} I = \int xe^xsinx dx = \int e^{ln(x) + x} \sin (x) dx = \end{equation*}</span></p> <p>integrate by parts with substituting:</p> <p><span class="math-container">$$ \begin{equation*} \left[ \begin{alignedat}{2} u &amp;= \sin (x) \quad &amp; du &amp;=\cos (x) \\ dv &amp;= e^{\ln (x)+x} \quad &amp; v &amp;= xe^x-e^x \end{alignedat}\, \right] \end{equation*} $$</span></p> <p><span class="math-container">$$ = e^xx\sin (x)-e^x\sin (x)-∫ xe^x\cos (x) -e^x\cos (x)dx $$</span></p> <p><span class="math-container">$$ = e^xx\sin (x)-e^x\sin (x)-∫ xe^x\cos (x)dx + ∫ e^x\cos (x)dx $$</span></p> <p>Now, another three factor integral appears, which can also be integrated by parts using the previous trick.</p> <p><span class="math-container">$$ ∫ xe^x\cos (x)dx = $$</span> <span class="math-container">$$ \begin{equation*} = ∫ e^{\ln (x)+x}\cos (x)dx = \left[ \begin{alignedat}{2} u &amp;= \cos (x) \quad &amp; du &amp;=-\sin (x) \\ dv &amp;= e^{\ln (x)+x} \quad &amp; v &amp;= xe^x-e^x \end{alignedat}\, \right] = \end{equation*} $$</span> <span class="math-container">$$ = xe^x\cos (x)-e^x\cos (x)+ ∫ x e^x \sin (x) dx - ∫ e^x \sin (x) dx = $$</span> <span class="math-container">$$ = xe^x\cos (x)-e^x\cos (x)+ I - ∫ e^x \sin (x) dx $$</span></p> <p>Back to the previous equation, plug in the result above flipping the signs accordingly:</p> <p><span class="math-container">$$ I = e^xx\sin (x)-e^x\sin (x) + ∫ e^x\cos (x)dx - xe^x\cos (x)+e^x\cos (x)- I + ∫ 
e^x \sin (x) dx $$</span> Great! We obtained the wanted integral with negative sign. Let's move it to the other side of equation: <span class="math-container">$$ 2I = e^xx\sin (x)-e^x\sin (x) + ∫ e^x\cos (x)dx - xe^x\cos (x)+e^x\cos (x) + ∫ e^x \sin (x) dx $$</span> and plug in the results of simpler integrals: <span class="math-container">$$ 2I = e^xx\sin (x)-e^x\sin (x) + \frac{e^x \sin (x) + e^x \cos (x)}{2} - xe^x\cos (x)+e^x\cos (x) + \frac{e^x \sin (x) - e^x \cos (x)}{2} + C $$</span> now just simplify the right side and divide by 2 to obtain the final result: <span class="math-container">$$ I = \frac {xe^x\sin (x) - xe^x\cos (x) +e^x\cos (x)}{2} + C $$</span></p> Answer 48: <p><strong>By indeterminate coefficients</strong>:</p> <p>A term like $xe^x\sin x$ can be generated by the derivative of itself (due to $e^x$), which will also generate $xe^x\cos x$ and $e^x\sin x$.</p> <p>Then we are tempted to try</p> <p>$$f(x)=e^x(x(A\sin x+B\cos x)+(C\sin x+D\cos x)),$$</p> <p>and</p> <p>$$f'(x)=e^x(x(A\sin x+B\cos x)+(C\sin x+D\cos x)+(A\sin x+B\cos x)+x(A\cos x-B\sin x)+(C\cos x-D\sin x)).$$</p> <p>We identify,</p> <p>$$A-B=1,\\A+B=0,\\C+A-D=0,\\D+B+C=0$$</p> <p>and obtain</p> <p>$$\frac12xe^x(\sin x-\cos x)+\frac12\cos x.$$</p> Answer 49: <p>First integrate $\sin(x) \exp(x),$ by parts, with the exponential as $d v.$ (you will need to do the integration by parts twice), then use the same method for the original integral.</p> Answer 50: <p>If one may recall that $\sin(x)=\Im e^{ix}$, then</p> <p>$$\int x\sin(x)e^x\ dx=\Im\int xe^{(1+i)x}\ dx$$</p> <p>With a quick integration by parts, we have</p> <p>$$=\Im\left(\frac1{1+i}xe^{(1+i)x}-\frac1{1+i}\int e^{(1+i)x}\ dx\right)\\=\Im\left(\frac1{1+i}xe^{(1+i)x}-\frac1{(1+i)^2}e^{(1+i)x}+C\right)\\=\frac12\left(x\sin(x)e^x-x\cos(x)e^x+\cos(x)e^x\right)$$</p> Answer 51: <p>Solution without complex numbers:</p> <p>Let $I=\int e^x x\sin xdx$ Integrating by parts:</p> <p>$$I=e^x x\sin x-\int e^x x\cos xdx-\int e^x\sin xdx$$</p> 
<p>Then one more time by parts:</p> <p>$$\int e^x x\cos xdx=e^x x\cos x + I-\int e^x\cos xdx$$</p> <p>So:</p> <p>$$2I=e^x x\sin x-e^x x\cos x+\int e^x(\cos x-\sin x)dx$$</p> <p>Now (by parts again or by direct observation):</p> <p>$$\int e^x(\cos x-\sin x)dx=e^x \cos x$$</p> <p>So:</p> <p>$$I=\frac{e^x x\sin x-e^x x\cos x+e^x \cos x}{2}$$</p> Answer 52: <p><strong>HINTS:</strong> </p> <p>Use Euler's Formula to write $$x\sin(x)e^x=\text{Im}(xe^{(1+i)x})$$Integrate $\int xe^{(1+i)x}\,dx$ by parts with $u=x$ and $v=\frac{e^{(1+i)x}}{1+i}$ and finish by taking the imaginary part.</p> Answer 53: <p>This solution doesn't use integration by parts. We start with</p> <p>$$\int\exp(x) dx = \exp(x)$$</p> <p>Substituting $x = \lambda t$ yields:</p> <p>$$\int\exp(\lambda t) dt = \frac{\exp(\lambda t)}{\lambda}$$</p> <p>Substitute $\lambda = 1+\epsilon + i$ and expand both sides to first order in $\epsilon$. Equating the coefficient of $\epsilon$ of both sides yields:</p> <p>$$\begin{split}\int t\exp(t)\exp(i t) dt &amp;= \exp(t)\exp(it)\left[\frac{1-i}{2}t + \frac{i}{2}\right]\\ &amp; = \exp(t)\left[\frac{\exp(i(t-\frac{\pi}{4}))}{\sqrt{2}}t + \frac{\exp(i(t+\frac{\pi}{2}))}{2}\right] \end{split} $$</p> <p>Finally, take the imaginary part of both sides:</p> <p>$$\int t\exp(t)\sin(t) dt = \exp(t)\left[\frac{\sin\left(t-\frac{\pi}{4}\right)}{\sqrt{2}}t + \frac{\cos(t)}{2}\right]$$</p> Answer 54: <p>This can also be solved another way (a little longer but correct nevertheless and could be more basic and readable for people who just started learning calculus):</p> <p>We will use the results of following(easy - just substitute):</p> <p><span class="math-container">$$ ∫ e^x \sin (x) dx = \frac{e^x \sin (x) - e^x \cos (x)}{2} + C $$</span></p> <p><span class="math-container">$$ ∫ e^{\ln (x)+x}dx = ∫ xe^xdx = xe^x-e^x + C $$</span> and</p> <p><span class="math-container">$$ ∫ e^x \cos (x) dx = \frac{e^x \sin (x) + e^x \cos (x)}{2} + C $$</span></p> <p>Now, back to the main integral. 
The main problem is that the formula contains not the usual two but three factors(<span class="math-container">$x$</span>, <span class="math-container">$e^x$</span> and <span class="math-container">$\sin (x)$</span>). However, using basic logarithm and <span class="math-container">$\exp $</span> properties we can transform the \exp ression into one with just two factors:</p> <p><span class="math-container">$$ x e^x = e^{\ln (x)} e^x = e^{\ln (x) + x} $$</span></p> <p>this leaves us with</p> <p><span class="math-container">\begin{equation*} I = \int xe^xsinx dx = \int e^{ln(x) + x} \sin (x) dx = \end{equation*}</span></p> <p>integrate by parts with substituting:</p> <p><span class="math-container">$$ \begin{equation*} \left[ \begin{alignedat}{2} u &amp;= \sin (x) \quad &amp; du &amp;=\cos (x) \\ dv &amp;= e^{\ln (x)+x} \quad &amp; v &amp;= xe^x-e^x \end{alignedat}\, \right] \end{equation*} $$</span></p> <p><span class="math-container">$$ = e^xx\sin (x)-e^x\sin (x)-∫ xe^x\cos (x) -e^x\cos (x)dx $$</span></p> <p><span class="math-container">$$ = e^xx\sin (x)-e^x\sin (x)-∫ xe^x\cos (x)dx + ∫ e^x\cos (x)dx $$</span></p> <p>Now, another three factor integral appears, which can also be integrated by parts using the previous trick.</p> <p><span class="math-container">$$ ∫ xe^x\cos (x)dx = $$</span> <span class="math-container">$$ \begin{equation*} = ∫ e^{\ln (x)+x}\cos (x)dx = \left[ \begin{alignedat}{2} u &amp;= \cos (x) \quad &amp; du &amp;=-\sin (x) \\ dv &amp;= e^{\ln (x)+x} \quad &amp; v &amp;= xe^x-e^x \end{alignedat}\, \right] = \end{equation*} $$</span> <span class="math-container">$$ = xe^x\cos (x)-e^x\cos (x)+ ∫ x e^x \sin (x) dx - ∫ e^x \sin (x) dx = $$</span> <span class="math-container">$$ = xe^x\cos (x)-e^x\cos (x)+ I - ∫ e^x \sin (x) dx $$</span></p> <p>Back to the previous equation, plug in the result above flipping the signs accordingly:</p> <p><span class="math-container">$$ I = e^xx\sin (x)-e^x\sin (x) + ∫ e^x\cos (x)dx - xe^x\cos (x)+e^x\cos (x)- I + ∫ 
e^x \sin (x)\, dx $$</span> Great! We obtained the wanted integral with a negative sign. Let's move it to the other side of the equation: <span class="math-container">$$ 2I = xe^x\sin (x)-e^x\sin (x) + ∫ e^x\cos (x)\,dx - xe^x\cos (x)+e^x\cos (x) + ∫ e^x \sin (x)\, dx $$</span> and plug in the results of the simpler integrals: <span class="math-container">$$ 2I = xe^x\sin (x)-e^x\sin (x) + \frac{e^x \sin (x) + e^x \cos (x)}{2} - xe^x\cos (x)+e^x\cos (x) + \frac{e^x \sin (x) - e^x \cos (x)}{2} + C $$</span> Now just simplify the right side and divide by 2 to obtain the final result: <span class="math-container">$$ I = \frac {xe^x\sin (x) - xe^x\cos (x) +e^x\cos (x)}{2} + C $$</span></p> Answer 55: <p><strong>By indeterminate coefficients</strong>:</p> <p>A term like $xe^x\sin x$ can be generated by the derivative of itself (due to $e^x$), which will also generate $xe^x\cos x$ and $e^x\sin x$.</p> <p>Then we are tempted to try</p> <p>$$f(x)=e^x(x(A\sin x+B\cos x)+(C\sin x+D\cos x)),$$</p> <p>and</p> <p>$$f'(x)=e^x(x(A\sin x+B\cos x)+(C\sin x+D\cos x)+(A\sin x+B\cos x)+x(A\cos x-B\sin x)+(C\cos x-D\sin x)).$$</p> <p>We identify</p> <p>$$A-B=1,\\A+B=0,\\C+A-D=0,\\D+B+C=0$$</p> <p>and obtain</p> <p>$$\frac12xe^x(\sin x-\cos x)+\frac12e^x\cos x.$$</p> Answer 56: <p>First integrate $\sin(x)\exp(x)$ by parts, with the exponential as $dv$ (you will need to integrate by parts twice); then use the same method for the original integral.</p> Answer 57: <p>If one may recall that $\sin(x)=\Im e^{ix}$, then</p> <p>$$\int x\sin(x)e^x\ dx=\Im\int xe^{(1+i)x}\ dx$$</p> <p>With a quick integration by parts, we have</p> <p>$$=\Im\left(\frac1{1+i}xe^{(1+i)x}-\frac1{1+i}\int e^{(1+i)x}\ dx\right)\\=\Im\left(\frac1{1+i}xe^{(1+i)x}-\frac1{(1+i)^2}e^{(1+i)x}+C\right)\\=\frac12\left(x\sin(x)e^x-x\cos(x)e^x+\cos(x)e^x\right)$$</p> Answer 58: <p>Solution without complex numbers:</p> <p>Let $I=\int e^x x\sin x\,dx$. Integrating by parts:</p> <p>$$I=e^x x\sin x-\int e^x x\cos x\,dx-\int e^x\sin x\,dx$$</p>
<p>Then one more time by parts:</p> <p>$$\int e^x x\cos x\,dx=e^x x\cos x + I-\int e^x\cos x\,dx$$</p> <p>So:</p> <p>$$2I=e^x x\sin x-e^x x\cos x+\int e^x(\cos x-\sin x)\,dx$$</p> <p>Now (by parts again, or by direct observation):</p> <p>$$\int e^x(\cos x-\sin x)\,dx=e^x \cos x$$</p> <p>So:</p> <p>$$I=\frac{e^x x\sin x-e^x x\cos x+e^x \cos x}{2}$$</p> Answer 59: <p><strong>HINTS:</strong></p> <p>Use Euler's formula to write $$x\sin(x)e^x=\text{Im}(xe^{(1+i)x})$$ Integrate $\int xe^{(1+i)x}\,dx$ by parts with $u=x$ and $v=\frac{e^{(1+i)x}}{1+i}$, and finish by taking the imaginary part.</p> Answer 60: <p>This solution doesn't use integration by parts. We start with</p> <p>$$\int\exp(x) dx = \exp(x)$$</p> <p>Substituting $x = \lambda t$ yields:</p> <p>$$\int\exp(\lambda t) dt = \frac{\exp(\lambda t)}{\lambda}$$</p> <p>Substitute $\lambda = 1+\epsilon + i$ and expand both sides to first order in $\epsilon$. Equating the coefficients of $\epsilon$ on both sides yields:</p> <p>$$\begin{split}\int t\exp(t)\exp(i t) dt &amp;= \exp(t)\exp(it)\left[\frac{1-i}{2}t + \frac{i}{2}\right]\\ &amp; = \exp(t)\left[\frac{\exp(i(t-\frac{\pi}{4}))}{\sqrt{2}}t + \frac{\exp(i(t+\frac{\pi}{2}))}{2}\right] \end{split} $$</p> <p>Finally, take the imaginary part of both sides:</p> <p>$$\int t\exp(t)\sin(t) dt = \exp(t)\left[\frac{\sin\left(t-\frac{\pi}{4}\right)}{\sqrt{2}}t + \frac{\cos(t)}{2}\right]$$</p>
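All of the approaches above should produce antiderivatives that agree up to an additive constant. As a quick numerical sanity check (a minimal Python sketch using only the standard library; the function names and tolerances are my own), one can differentiate the closed form by central differences and compare it with the integrand, and also confirm that the $\sin(t-\pi/4)$ form from the complex-exponential answers matches the $\sin x - \cos x$ form:

```python
import math

# Closed form obtained above: I(x) = (x e^x sin x - x e^x cos x + e^x cos x) / 2
def antiderivative(x):
    return (x * math.exp(x) * math.sin(x)
            - x * math.exp(x) * math.cos(x)
            + math.exp(x) * math.cos(x)) / 2

# The original integrand
def integrand(x):
    return x * math.exp(x) * math.sin(x)

# Equivalent form from the Euler-formula / epsilon-expansion answers
def antiderivative_alt(t):
    return math.exp(t) * (t * math.sin(t - math.pi / 4) / math.sqrt(2)
                          + math.cos(t) / 2)

h = 1e-6
for x in (-1.0, 0.0, 0.5, 1.0, 2.0):
    # the central-difference derivative of I should match the integrand
    slope = (antiderivative(x + h) - antiderivative(x - h)) / (2 * h)
    assert abs(slope - integrand(x)) < 1e-5
    # the two closed forms coincide (here even the constants agree)
    assert abs(antiderivative(x) - antiderivative_alt(x)) < 1e-9
```

The identity $\sin(t-\pi/4)/\sqrt2=(\sin t-\cos t)/2$ is why the two forms coincide exactly rather than merely up to a constant.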
The main problem is that the formula contains not the usual two but three factors(<span class="math-container">$x$</span>, <span class="math-container">$e^x$</span> and <span class="math-container">$\sin (x)$</span>). However, using basic logarithm and <span class="math-container">$\exp $</span> properties we can transform the \exp ression into one with just two factors:</p> <p><span class="math-container">$$ x e^x = e^{\ln (x)} e^x = e^{\ln (x) + x} $$</span></p> <p>this leaves us with</p> <p><span class="math-container">\begin{equation*} I = \int xe^xsinx dx = \int e^{ln(x) + x} \sin (x) dx = \end{equation*}</span></p> <p>integrate by parts with substituting:</p> <p><span class="math-container">$$ \begin{equation*} \left[ \begin{alignedat}{2} u &amp;= \sin (x) \quad &amp; du &amp;=\cos (x) \\ dv &amp;= e^{\ln (x)+x} \quad &amp; v &amp;= xe^x-e^x \end{alignedat}\, \right] \end{equation*} $$</span></p> <p><span class="math-container">$$ = e^xx\sin (x)-e^x\sin (x)-∫ xe^x\cos (x) -e^x\cos (x)dx $$</span></p> <p><span class="math-container">$$ = e^xx\sin (x)-e^x\sin (x)-∫ xe^x\cos (x)dx + ∫ e^x\cos (x)dx $$</span></p> <p>Now, another three factor integral appears, which can also be integrated by parts using the previous trick.</p> <p><span class="math-container">$$ ∫ xe^x\cos (x)dx = $$</span> <span class="math-container">$$ \begin{equation*} = ∫ e^{\ln (x)+x}\cos (x)dx = \left[ \begin{alignedat}{2} u &amp;= \cos (x) \quad &amp; du &amp;=-\sin (x) \\ dv &amp;= e^{\ln (x)+x} \quad &amp; v &amp;= xe^x-e^x \end{alignedat}\, \right] = \end{equation*} $$</span> <span class="math-container">$$ = xe^x\cos (x)-e^x\cos (x)+ ∫ x e^x \sin (x) dx - ∫ e^x \sin (x) dx = $$</span> <span class="math-container">$$ = xe^x\cos (x)-e^x\cos (x)+ I - ∫ e^x \sin (x) dx $$</span></p> <p>Back to the previous equation, plug in the result above flipping the signs accordingly:</p> <p><span class="math-container">$$ I = e^xx\sin (x)-e^x\sin (x) + ∫ e^x\cos (x)dx - xe^x\cos (x)+e^x\cos (x)- I + ∫ 
e^x \sin (x) dx $$</span> Great! We obtained the wanted integral with negative sign. Let's move it to the other side of equation: <span class="math-container">$$ 2I = e^xx\sin (x)-e^x\sin (x) + ∫ e^x\cos (x)dx - xe^x\cos (x)+e^x\cos (x) + ∫ e^x \sin (x) dx $$</span> and plug in the results of simpler integrals: <span class="math-container">$$ 2I = e^xx\sin (x)-e^x\sin (x) + \frac{e^x \sin (x) + e^x \cos (x)}{2} - xe^x\cos (x)+e^x\cos (x) + \frac{e^x \sin (x) - e^x \cos (x)}{2} + C $$</span> now just simplify the right side and divide by 2 to obtain the final result: <span class="math-container">$$ I = \frac {xe^x\sin (x) - xe^x\cos (x) +e^x\cos (x)}{2} + C $$</span></p> Answer 62: <p><strong>By indeterminate coefficients</strong>:</p> <p>A term like $xe^x\sin x$ can be generated by the derivative of itself (due to $e^x$), which will also generate $xe^x\cos x$ and $e^x\sin x$.</p> <p>Then we are tempted to try</p> <p>$$f(x)=e^x(x(A\sin x+B\cos x)+(C\sin x+D\cos x)),$$</p> <p>and</p> <p>$$f'(x)=e^x(x(A\sin x+B\cos x)+(C\sin x+D\cos x)+(A\sin x+B\cos x)+x(A\cos x-B\sin x)+(C\cos x-D\sin x)).$$</p> <p>We identify,</p> <p>$$A-B=1,\\A+B=0,\\C+A-D=0,\\D+B+C=0$$</p> <p>and obtain</p> <p>$$\frac12xe^x(\sin x-\cos x)+\frac12\cos x.$$</p> Answer 63: <p>First integrate $\sin(x) \exp(x),$ by parts, with the exponential as $d v.$ (you will need to do the integration by parts twice), then use the same method for the original integral.</p> Answer 64: <p>If one may recall that $\sin(x)=\Im e^{ix}$, then</p> <p>$$\int x\sin(x)e^x\ dx=\Im\int xe^{(1+i)x}\ dx$$</p> <p>With a quick integration by parts, we have</p> <p>$$=\Im\left(\frac1{1+i}xe^{(1+i)x}-\frac1{1+i}\int e^{(1+i)x}\ dx\right)\\=\Im\left(\frac1{1+i}xe^{(1+i)x}-\frac1{(1+i)^2}e^{(1+i)x}+C\right)\\=\frac12\left(x\sin(x)e^x-x\cos(x)e^x+\cos(x)e^x\right)$$</p> Answer 65: <p>Solution without complex numbers:</p> <p>Let $I=\int e^x x\sin xdx$ Integrating by parts:</p> <p>$$I=e^x x\sin x-\int e^x x\cos xdx-\int e^x\sin xdx$$</p> 
<p>Then one more time by parts:</p> <p>$$\int e^x x\cos xdx=e^x x\cos x + I-\int e^x\cos xdx$$</p> <p>So:</p> <p>$$2I=e^x x\sin x-e^x x\cos x+\int e^x(\cos x-\sin x)dx$$</p> <p>Now (by parts again or by direct observation):</p> <p>$$\int e^x(\cos x-\sin x)dx=e^x \cos x$$</p> <p>So:</p> <p>$$I=\frac{e^x x\sin x-e^x x\cos x+e^x \cos x}{2}$$</p> Answer 66: <p><strong>HINTS:</strong> </p> <p>Use Euler's Formula to write $$x\sin(x)e^x=\text{Im}(xe^{(1+i)x})$$Integrate $\int xe^{(1+i)x}\,dx$ by parts with $u=x$ and $v=\frac{e^{(1+i)x}}{1+i}$ and finish by taking the imaginary part.</p> Answer 67: <p>This solution doesn't use integration by parts. We start with</p> <p>$$\int\exp(x) dx = \exp(x)$$</p> <p>Substituting $x = \lambda t$ yields:</p> <p>$$\int\exp(\lambda t) dt = \frac{\exp(\lambda t)}{\lambda}$$</p> <p>Substitute $\lambda = 1+\epsilon + i$ and expand both sides to first order in $\epsilon$. Equating the coefficient of $\epsilon$ of both sides yields:</p> <p>$$\begin{split}\int t\exp(t)\exp(i t) dt &amp;= \exp(t)\exp(it)\left[\frac{1-i}{2}t + \frac{i}{2}\right]\\ &amp; = \exp(t)\left[\frac{\exp(i(t-\frac{\pi}{4}))}{\sqrt{2}}t + \frac{\exp(i(t+\frac{\pi}{2}))}{2}\right] \end{split} $$</p> <p>Finally, take the imaginary part of both sides:</p> <p>$$\int t\exp(t)\sin(t) dt = \exp(t)\left[\frac{\sin\left(t-\frac{\pi}{4}\right)}{\sqrt{2}}t + \frac{\cos(t)}{2}\right]$$</p> Answer 68: <p>This can also be solved another way (a little longer but correct nevertheless and could be more basic and readable for people who just started learning calculus):</p> <p>We will use the results of following(easy - just substitute):</p> <p><span class="math-container">$$ ∫ e^x \sin (x) dx = \frac{e^x \sin (x) - e^x \cos (x)}{2} + C $$</span></p> <p><span class="math-container">$$ ∫ e^{\ln (x)+x}dx = ∫ xe^xdx = xe^x-e^x + C $$</span> and</p> <p><span class="math-container">$$ ∫ e^x \cos (x) dx = \frac{e^x \sin (x) + e^x \cos (x)}{2} + C $$</span></p> <p>Now, back to the main integral. 
The main problem is that the formula contains not the usual two but three factors(<span class="math-container">$x$</span>, <span class="math-container">$e^x$</span> and <span class="math-container">$\sin (x)$</span>). However, using basic logarithm and <span class="math-container">$\exp $</span> properties we can transform the \exp ression into one with just two factors:</p> <p><span class="math-container">$$ x e^x = e^{\ln (x)} e^x = e^{\ln (x) + x} $$</span></p> <p>this leaves us with</p> <p><span class="math-container">\begin{equation*} I = \int xe^xsinx dx = \int e^{ln(x) + x} \sin (x) dx = \end{equation*}</span></p> <p>integrate by parts with substituting:</p> <p><span class="math-container">$$ \begin{equation*} \left[ \begin{alignedat}{2} u &amp;= \sin (x) \quad &amp; du &amp;=\cos (x) \\ dv &amp;= e^{\ln (x)+x} \quad &amp; v &amp;= xe^x-e^x \end{alignedat}\, \right] \end{equation*} $$</span></p> <p><span class="math-container">$$ = e^xx\sin (x)-e^x\sin (x)-∫ xe^x\cos (x) -e^x\cos (x)dx $$</span></p> <p><span class="math-container">$$ = e^xx\sin (x)-e^x\sin (x)-∫ xe^x\cos (x)dx + ∫ e^x\cos (x)dx $$</span></p> <p>Now, another three factor integral appears, which can also be integrated by parts using the previous trick.</p> <p><span class="math-container">$$ ∫ xe^x\cos (x)dx = $$</span> <span class="math-container">$$ \begin{equation*} = ∫ e^{\ln (x)+x}\cos (x)dx = \left[ \begin{alignedat}{2} u &amp;= \cos (x) \quad &amp; du &amp;=-\sin (x) \\ dv &amp;= e^{\ln (x)+x} \quad &amp; v &amp;= xe^x-e^x \end{alignedat}\, \right] = \end{equation*} $$</span> <span class="math-container">$$ = xe^x\cos (x)-e^x\cos (x)+ ∫ x e^x \sin (x) dx - ∫ e^x \sin (x) dx = $$</span> <span class="math-container">$$ = xe^x\cos (x)-e^x\cos (x)+ I - ∫ e^x \sin (x) dx $$</span></p> <p>Back to the previous equation, plug in the result above flipping the signs accordingly:</p> <p><span class="math-container">$$ I = e^xx\sin (x)-e^x\sin (x) + ∫ e^x\cos (x)dx - xe^x\cos (x)+e^x\cos (x)- I + ∫ 
e^x \sin (x) dx $$</span> Great! We obtained the wanted integral with negative sign. Let's move it to the other side of equation: <span class="math-container">$$ 2I = e^xx\sin (x)-e^x\sin (x) + ∫ e^x\cos (x)dx - xe^x\cos (x)+e^x\cos (x) + ∫ e^x \sin (x) dx $$</span> and plug in the results of simpler integrals: <span class="math-container">$$ 2I = e^xx\sin (x)-e^x\sin (x) + \frac{e^x \sin (x) + e^x \cos (x)}{2} - xe^x\cos (x)+e^x\cos (x) + \frac{e^x \sin (x) - e^x \cos (x)}{2} + C $$</span> now just simplify the right side and divide by 2 to obtain the final result: <span class="math-container">$$ I = \frac {xe^x\sin (x) - xe^x\cos (x) +e^x\cos (x)}{2} + C $$</span></p> Answer 69: <p><strong>By indeterminate coefficients</strong>:</p> <p>A term like $xe^x\sin x$ can be generated by the derivative of itself (due to $e^x$), which will also generate $xe^x\cos x$ and $e^x\sin x$.</p> <p>Then we are tempted to try</p> <p>$$f(x)=e^x(x(A\sin x+B\cos x)+(C\sin x+D\cos x)),$$</p> <p>and</p> <p>$$f'(x)=e^x(x(A\sin x+B\cos x)+(C\sin x+D\cos x)+(A\sin x+B\cos x)+x(A\cos x-B\sin x)+(C\cos x-D\sin x)).$$</p> <p>We identify,</p> <p>$$A-B=1,\\A+B=0,\\C+A-D=0,\\D+B+C=0$$</p> <p>and obtain</p> <p>$$\frac12xe^x(\sin x-\cos x)+\frac12\cos x.$$</p> Answer 70: <p>First integrate $\sin(x) \exp(x),$ by parts, with the exponential as $d v.$ (you will need to do the integration by parts twice), then use the same method for the original integral.</p> Answer 71: <p>If one may recall that $\sin(x)=\Im e^{ix}$, then</p> <p>$$\int x\sin(x)e^x\ dx=\Im\int xe^{(1+i)x}\ dx$$</p> <p>With a quick integration by parts, we have</p> <p>$$=\Im\left(\frac1{1+i}xe^{(1+i)x}-\frac1{1+i}\int e^{(1+i)x}\ dx\right)\\=\Im\left(\frac1{1+i}xe^{(1+i)x}-\frac1{(1+i)^2}e^{(1+i)x}+C\right)\\=\frac12\left(x\sin(x)e^x-x\cos(x)e^x+\cos(x)e^x\right)$$</p> Answer 72: <p>Solution without complex numbers:</p> <p>Let $I=\int e^x x\sin xdx$ Integrating by parts:</p> <p>$$I=e^x x\sin x-\int e^x x\cos xdx-\int e^x\sin xdx$$</p> 
<p>Then one more time by parts:</p> <p>$$\int e^x x\cos xdx=e^x x\cos x + I-\int e^x\cos xdx$$</p> <p>So:</p> <p>$$2I=e^x x\sin x-e^x x\cos x+\int e^x(\cos x-\sin x)dx$$</p> <p>Now (by parts again or by direct observation):</p> <p>$$\int e^x(\cos x-\sin x)dx=e^x \cos x$$</p> <p>So:</p> <p>$$I=\frac{e^x x\sin x-e^x x\cos x+e^x \cos x}{2}$$</p> Answer 73: <p><strong>HINTS:</strong> </p> <p>Use Euler's Formula to write $$x\sin(x)e^x=\text{Im}(xe^{(1+i)x})$$Integrate $\int xe^{(1+i)x}\,dx$ by parts with $u=x$ and $v=\frac{e^{(1+i)x}}{1+i}$ and finish by taking the imaginary part.</p> Answer 74: <p>This solution doesn't use integration by parts. We start with</p> <p>$$\int\exp(x) dx = \exp(x)$$</p> <p>Substituting $x = \lambda t$ yields:</p> <p>$$\int\exp(\lambda t) dt = \frac{\exp(\lambda t)}{\lambda}$$</p> <p>Substitute $\lambda = 1+\epsilon + i$ and expand both sides to first order in $\epsilon$. Equating the coefficient of $\epsilon$ of both sides yields:</p> <p>$$\begin{split}\int t\exp(t)\exp(i t) dt &amp;= \exp(t)\exp(it)\left[\frac{1-i}{2}t + \frac{i}{2}\right]\\ &amp; = \exp(t)\left[\frac{\exp(i(t-\frac{\pi}{4}))}{\sqrt{2}}t + \frac{\exp(i(t+\frac{\pi}{2}))}{2}\right] \end{split} $$</p> <p>Finally, take the imaginary part of both sides:</p> <p>$$\int t\exp(t)\sin(t) dt = \exp(t)\left[\frac{\sin\left(t-\frac{\pi}{4}\right)}{\sqrt{2}}t + \frac{\cos(t)}{2}\right]$$</p> Answer 75: <p>This can also be solved another way (a little longer but correct nevertheless and could be more basic and readable for people who just started learning calculus):</p> <p>We will use the results of following(easy - just substitute):</p> <p><span class="math-container">$$ ∫ e^x \sin (x) dx = \frac{e^x \sin (x) - e^x \cos (x)}{2} + C $$</span></p> <p><span class="math-container">$$ ∫ e^{\ln (x)+x}dx = ∫ xe^xdx = xe^x-e^x + C $$</span> and</p> <p><span class="math-container">$$ ∫ e^x \cos (x) dx = \frac{e^x \sin (x) + e^x \cos (x)}{2} + C $$</span></p> <p>Now, back to the main integral. 
The main problem is that the formula contains not the usual two but three factors(<span class="math-container">$x$</span>, <span class="math-container">$e^x$</span> and <span class="math-container">$\sin (x)$</span>). However, using basic logarithm and <span class="math-container">$\exp $</span> properties we can transform the \exp ression into one with just two factors:</p> <p><span class="math-container">$$ x e^x = e^{\ln (x)} e^x = e^{\ln (x) + x} $$</span></p> <p>this leaves us with</p> <p><span class="math-container">\begin{equation*} I = \int xe^xsinx dx = \int e^{ln(x) + x} \sin (x) dx = \end{equation*}</span></p> <p>integrate by parts with substituting:</p> <p><span class="math-container">$$ \begin{equation*} \left[ \begin{alignedat}{2} u &amp;= \sin (x) \quad &amp; du &amp;=\cos (x) \\ dv &amp;= e^{\ln (x)+x} \quad &amp; v &amp;= xe^x-e^x \end{alignedat}\, \right] \end{equation*} $$</span></p> <p><span class="math-container">$$ = e^xx\sin (x)-e^x\sin (x)-∫ xe^x\cos (x) -e^x\cos (x)dx $$</span></p> <p><span class="math-container">$$ = e^xx\sin (x)-e^x\sin (x)-∫ xe^x\cos (x)dx + ∫ e^x\cos (x)dx $$</span></p> <p>Now, another three factor integral appears, which can also be integrated by parts using the previous trick.</p> <p><span class="math-container">$$ ∫ xe^x\cos (x)dx = $$</span> <span class="math-container">$$ \begin{equation*} = ∫ e^{\ln (x)+x}\cos (x)dx = \left[ \begin{alignedat}{2} u &amp;= \cos (x) \quad &amp; du &amp;=-\sin (x) \\ dv &amp;= e^{\ln (x)+x} \quad &amp; v &amp;= xe^x-e^x \end{alignedat}\, \right] = \end{equation*} $$</span> <span class="math-container">$$ = xe^x\cos (x)-e^x\cos (x)+ ∫ x e^x \sin (x) dx - ∫ e^x \sin (x) dx = $$</span> <span class="math-container">$$ = xe^x\cos (x)-e^x\cos (x)+ I - ∫ e^x \sin (x) dx $$</span></p> <p>Back to the previous equation, plug in the result above flipping the signs accordingly:</p> <p><span class="math-container">$$ I = e^xx\sin (x)-e^x\sin (x) + ∫ e^x\cos (x)dx - xe^x\cos (x)+e^x\cos (x)- I + ∫ 
e^x \sin (x) dx $$</span> Great! We obtained the wanted integral with negative sign. Let's move it to the other side of equation: <span class="math-container">$$ 2I = e^xx\sin (x)-e^x\sin (x) + ∫ e^x\cos (x)dx - xe^x\cos (x)+e^x\cos (x) + ∫ e^x \sin (x) dx $$</span> and plug in the results of simpler integrals: <span class="math-container">$$ 2I = e^xx\sin (x)-e^x\sin (x) + \frac{e^x \sin (x) + e^x \cos (x)}{2} - xe^x\cos (x)+e^x\cos (x) + \frac{e^x \sin (x) - e^x \cos (x)}{2} + C $$</span> now just simplify the right side and divide by 2 to obtain the final result: <span class="math-container">$$ I = \frac {xe^x\sin (x) - xe^x\cos (x) +e^x\cos (x)}{2} + C $$</span></p> Answer 76: <p><strong>By indeterminate coefficients</strong>:</p> <p>A term like $xe^x\sin x$ can be generated by the derivative of itself (due to $e^x$), which will also generate $xe^x\cos x$ and $e^x\sin x$.</p> <p>Then we are tempted to try</p> <p>$$f(x)=e^x(x(A\sin x+B\cos x)+(C\sin x+D\cos x)),$$</p> <p>and</p> <p>$$f'(x)=e^x(x(A\sin x+B\cos x)+(C\sin x+D\cos x)+(A\sin x+B\cos x)+x(A\cos x-B\sin x)+(C\cos x-D\sin x)).$$</p> <p>We identify,</p> <p>$$A-B=1,\\A+B=0,\\C+A-D=0,\\D+B+C=0$$</p> <p>and obtain</p> <p>$$\frac12xe^x(\sin x-\cos x)+\frac12\cos x.$$</p> Answer 77: <p>First integrate $\sin(x) \exp(x),$ by parts, with the exponential as $d v.$ (you will need to do the integration by parts twice), then use the same method for the original integral.</p> Answer 78: <p>If one may recall that $\sin(x)=\Im e^{ix}$, then</p> <p>$$\int x\sin(x)e^x\ dx=\Im\int xe^{(1+i)x}\ dx$$</p> <p>With a quick integration by parts, we have</p> <p>$$=\Im\left(\frac1{1+i}xe^{(1+i)x}-\frac1{1+i}\int e^{(1+i)x}\ dx\right)\\=\Im\left(\frac1{1+i}xe^{(1+i)x}-\frac1{(1+i)^2}e^{(1+i)x}+C\right)\\=\frac12\left(x\sin(x)e^x-x\cos(x)e^x+\cos(x)e^x\right)$$</p> Answer 79: <p>Solution without complex numbers:</p> <p>Let $I=\int e^x x\sin xdx$ Integrating by parts:</p> <p>$$I=e^x x\sin x-\int e^x x\cos xdx-\int e^x\sin xdx$$</p> 
<p>Then one more time by parts:</p> <p>$$\int e^x x\cos xdx=e^x x\cos x + I-\int e^x\cos xdx$$</p> <p>So:</p> <p>$$2I=e^x x\sin x-e^x x\cos x+\int e^x(\cos x-\sin x)dx$$</p> <p>Now (by parts again or by direct observation):</p> <p>$$\int e^x(\cos x-\sin x)dx=e^x \cos x$$</p> <p>So:</p> <p>$$I=\frac{e^x x\sin x-e^x x\cos x+e^x \cos x}{2}$$</p> Answer 80: <p><strong>HINTS:</strong> </p> <p>Use Euler's Formula to write $$x\sin(x)e^x=\text{Im}(xe^{(1+i)x})$$Integrate $\int xe^{(1+i)x}\,dx$ by parts with $u=x$ and $v=\frac{e^{(1+i)x}}{1+i}$ and finish by taking the imaginary part.</p> Answer 81: <p>This solution doesn't use integration by parts. We start with</p> <p>$$\int\exp(x) dx = \exp(x)$$</p> <p>Substituting $x = \lambda t$ yields:</p> <p>$$\int\exp(\lambda t) dt = \frac{\exp(\lambda t)}{\lambda}$$</p> <p>Substitute $\lambda = 1+\epsilon + i$ and expand both sides to first order in $\epsilon$. Equating the coefficient of $\epsilon$ of both sides yields:</p> <p>$$\begin{split}\int t\exp(t)\exp(i t) dt &amp;= \exp(t)\exp(it)\left[\frac{1-i}{2}t + \frac{i}{2}\right]\\ &amp; = \exp(t)\left[\frac{\exp(i(t-\frac{\pi}{4}))}{\sqrt{2}}t + \frac{\exp(i(t+\frac{\pi}{2}))}{2}\right] \end{split} $$</p> <p>Finally, take the imaginary part of both sides:</p> <p>$$\int t\exp(t)\sin(t) dt = \exp(t)\left[\frac{\sin\left(t-\frac{\pi}{4}\right)}{\sqrt{2}}t + \frac{\cos(t)}{2}\right]$$</p> Answer 82: <p>This can also be solved another way (a little longer but correct nevertheless and could be more basic and readable for people who just started learning calculus):</p> <p>We will use the results of following(easy - just substitute):</p> <p><span class="math-container">$$ ∫ e^x \sin (x) dx = \frac{e^x \sin (x) - e^x \cos (x)}{2} + C $$</span></p> <p><span class="math-container">$$ ∫ e^{\ln (x)+x}dx = ∫ xe^xdx = xe^x-e^x + C $$</span> and</p> <p><span class="math-container">$$ ∫ e^x \cos (x) dx = \frac{e^x \sin (x) + e^x \cos (x)}{2} + C $$</span></p> <p>Now, back to the main integral. 
The main problem is that the formula contains not the usual two but three factors(<span class="math-container">$x$</span>, <span class="math-container">$e^x$</span> and <span class="math-container">$\sin (x)$</span>). However, using basic logarithm and <span class="math-container">$\exp $</span> properties we can transform the \exp ression into one with just two factors:</p> <p><span class="math-container">$$ x e^x = e^{\ln (x)} e^x = e^{\ln (x) + x} $$</span></p> <p>this leaves us with</p> <p><span class="math-container">\begin{equation*} I = \int xe^xsinx dx = \int e^{ln(x) + x} \sin (x) dx = \end{equation*}</span></p> <p>integrate by parts with substituting:</p> <p><span class="math-container">$$ \begin{equation*} \left[ \begin{alignedat}{2} u &amp;= \sin (x) \quad &amp; du &amp;=\cos (x) \\ dv &amp;= e^{\ln (x)+x} \quad &amp; v &amp;= xe^x-e^x \end{alignedat}\, \right] \end{equation*} $$</span></p> <p><span class="math-container">$$ = e^xx\sin (x)-e^x\sin (x)-∫ xe^x\cos (x) -e^x\cos (x)dx $$</span></p> <p><span class="math-container">$$ = e^xx\sin (x)-e^x\sin (x)-∫ xe^x\cos (x)dx + ∫ e^x\cos (x)dx $$</span></p> <p>Now, another three factor integral appears, which can also be integrated by parts using the previous trick.</p> <p><span class="math-container">$$ ∫ xe^x\cos (x)dx = $$</span> <span class="math-container">$$ \begin{equation*} = ∫ e^{\ln (x)+x}\cos (x)dx = \left[ \begin{alignedat}{2} u &amp;= \cos (x) \quad &amp; du &amp;=-\sin (x) \\ dv &amp;= e^{\ln (x)+x} \quad &amp; v &amp;= xe^x-e^x \end{alignedat}\, \right] = \end{equation*} $$</span> <span class="math-container">$$ = xe^x\cos (x)-e^x\cos (x)+ ∫ x e^x \sin (x) dx - ∫ e^x \sin (x) dx = $$</span> <span class="math-container">$$ = xe^x\cos (x)-e^x\cos (x)+ I - ∫ e^x \sin (x) dx $$</span></p> <p>Back to the previous equation, plug in the result above flipping the signs accordingly:</p> <p><span class="math-container">$$ I = e^xx\sin (x)-e^x\sin (x) + ∫ e^x\cos (x)dx - xe^x\cos (x)+e^x\cos (x)- I + ∫ 
e^x \sin (x) dx $$</span> Great! We obtained the wanted integral with negative sign. Let's move it to the other side of equation: <span class="math-container">$$ 2I = e^xx\sin (x)-e^x\sin (x) + ∫ e^x\cos (x)dx - xe^x\cos (x)+e^x\cos (x) + ∫ e^x \sin (x) dx $$</span> and plug in the results of simpler integrals: <span class="math-container">$$ 2I = e^xx\sin (x)-e^x\sin (x) + \frac{e^x \sin (x) + e^x \cos (x)}{2} - xe^x\cos (x)+e^x\cos (x) + \frac{e^x \sin (x) - e^x \cos (x)}{2} + C $$</span> now just simplify the right side and divide by 2 to obtain the final result: <span class="math-container">$$ I = \frac {xe^x\sin (x) - xe^x\cos (x) +e^x\cos (x)}{2} + C $$</span></p> Answer 83: <p><strong>By indeterminate coefficients</strong>:</p> <p>A term like $xe^x\sin x$ can be generated by the derivative of itself (due to $e^x$), which will also generate $xe^x\cos x$ and $e^x\sin x$.</p> <p>Then we are tempted to try</p> <p>$$f(x)=e^x(x(A\sin x+B\cos x)+(C\sin x+D\cos x)),$$</p> <p>and</p> <p>$$f'(x)=e^x(x(A\sin x+B\cos x)+(C\sin x+D\cos x)+(A\sin x+B\cos x)+x(A\cos x-B\sin x)+(C\cos x-D\sin x)).$$</p> <p>We identify,</p> <p>$$A-B=1,\\A+B=0,\\C+A-D=0,\\D+B+C=0$$</p> <p>and obtain</p> <p>$$\frac12xe^x(\sin x-\cos x)+\frac12\cos x.$$</p> Answer 84: <p>First integrate $\sin(x) \exp(x),$ by parts, with the exponential as $d v.$ (you will need to do the integration by parts twice), then use the same method for the original integral.</p> Answer 85: <p>If one may recall that $\sin(x)=\Im e^{ix}$, then</p> <p>$$\int x\sin(x)e^x\ dx=\Im\int xe^{(1+i)x}\ dx$$</p> <p>With a quick integration by parts, we have</p> <p>$$=\Im\left(\frac1{1+i}xe^{(1+i)x}-\frac1{1+i}\int e^{(1+i)x}\ dx\right)\\=\Im\left(\frac1{1+i}xe^{(1+i)x}-\frac1{(1+i)^2}e^{(1+i)x}+C\right)\\=\frac12\left(x\sin(x)e^x-x\cos(x)e^x+\cos(x)e^x\right)$$</p> Answer 86: <p>Solution without complex numbers:</p> <p>Let $I=\int e^x x\sin xdx$ Integrating by parts:</p> <p>$$I=e^x x\sin x-\int e^x x\cos xdx-\int e^x\sin xdx$$</p> 
<p>Then one more time by parts:</p> <p>$$\int e^x x\cos xdx=e^x x\cos x + I-\int e^x\cos xdx$$</p> <p>So:</p> <p>$$2I=e^x x\sin x-e^x x\cos x+\int e^x(\cos x-\sin x)dx$$</p> <p>Now (by parts again or by direct observation):</p> <p>$$\int e^x(\cos x-\sin x)dx=e^x \cos x$$</p> <p>So:</p> <p>$$I=\frac{e^x x\sin x-e^x x\cos x+e^x \cos x}{2}$$</p> Answer 87: <p><strong>HINTS:</strong> </p> <p>Use Euler's Formula to write $$x\sin(x)e^x=\text{Im}(xe^{(1+i)x})$$Integrate $\int xe^{(1+i)x}\,dx$ by parts with $u=x$ and $v=\frac{e^{(1+i)x}}{1+i}$ and finish by taking the imaginary part.</p> Answer 88: <p>This solution doesn't use integration by parts. We start with</p> <p>$$\int\exp(x) dx = \exp(x)$$</p> <p>Substituting $x = \lambda t$ yields:</p> <p>$$\int\exp(\lambda t) dt = \frac{\exp(\lambda t)}{\lambda}$$</p> <p>Substitute $\lambda = 1+\epsilon + i$ and expand both sides to first order in $\epsilon$. Equating the coefficient of $\epsilon$ of both sides yields:</p> <p>$$\begin{split}\int t\exp(t)\exp(i t) dt &amp;= \exp(t)\exp(it)\left[\frac{1-i}{2}t + \frac{i}{2}\right]\\ &amp; = \exp(t)\left[\frac{\exp(i(t-\frac{\pi}{4}))}{\sqrt{2}}t + \frac{\exp(i(t+\frac{\pi}{2}))}{2}\right] \end{split} $$</p> <p>Finally, take the imaginary part of both sides:</p> <p>$$\int t\exp(t)\sin(t) dt = \exp(t)\left[\frac{\sin\left(t-\frac{\pi}{4}\right)}{\sqrt{2}}t + \frac{\cos(t)}{2}\right]$$</p> Answer 89: <p>This can also be solved another way (a little longer but correct nevertheless and could be more basic and readable for people who just started learning calculus):</p> <p>We will use the results of following(easy - just substitute):</p> <p><span class="math-container">$$ ∫ e^x \sin (x) dx = \frac{e^x \sin (x) - e^x \cos (x)}{2} + C $$</span></p> <p><span class="math-container">$$ ∫ e^{\ln (x)+x}dx = ∫ xe^xdx = xe^x-e^x + C $$</span> and</p> <p><span class="math-container">$$ ∫ e^x \cos (x) dx = \frac{e^x \sin (x) + e^x \cos (x)}{2} + C $$</span></p> <p>Now, back to the main integral. 
The main problem is that the formula contains not the usual two but three factors(<span class="math-container">$x$</span>, <span class="math-container">$e^x$</span> and <span class="math-container">$\sin (x)$</span>). However, using basic logarithm and <span class="math-container">$\exp $</span> properties we can transform the \exp ression into one with just two factors:</p> <p><span class="math-container">$$ x e^x = e^{\ln (x)} e^x = e^{\ln (x) + x} $$</span></p> <p>this leaves us with</p> <p><span class="math-container">\begin{equation*} I = \int xe^xsinx dx = \int e^{ln(x) + x} \sin (x) dx = \end{equation*}</span></p> <p>integrate by parts with substituting:</p> <p><span class="math-container">$$ \begin{equation*} \left[ \begin{alignedat}{2} u &amp;= \sin (x) \quad &amp; du &amp;=\cos (x) \\ dv &amp;= e^{\ln (x)+x} \quad &amp; v &amp;= xe^x-e^x \end{alignedat}\, \right] \end{equation*} $$</span></p> <p><span class="math-container">$$ = e^xx\sin (x)-e^x\sin (x)-∫ xe^x\cos (x) -e^x\cos (x)dx $$</span></p> <p><span class="math-container">$$ = e^xx\sin (x)-e^x\sin (x)-∫ xe^x\cos (x)dx + ∫ e^x\cos (x)dx $$</span></p> <p>Now, another three factor integral appears, which can also be integrated by parts using the previous trick.</p> <p><span class="math-container">$$ ∫ xe^x\cos (x)dx = $$</span> <span class="math-container">$$ \begin{equation*} = ∫ e^{\ln (x)+x}\cos (x)dx = \left[ \begin{alignedat}{2} u &amp;= \cos (x) \quad &amp; du &amp;=-\sin (x) \\ dv &amp;= e^{\ln (x)+x} \quad &amp; v &amp;= xe^x-e^x \end{alignedat}\, \right] = \end{equation*} $$</span> <span class="math-container">$$ = xe^x\cos (x)-e^x\cos (x)+ ∫ x e^x \sin (x) dx - ∫ e^x \sin (x) dx = $$</span> <span class="math-container">$$ = xe^x\cos (x)-e^x\cos (x)+ I - ∫ e^x \sin (x) dx $$</span></p> <p>Back to the previous equation, plug in the result above flipping the signs accordingly:</p> <p><span class="math-container">$$ I = e^xx\sin (x)-e^x\sin (x) + ∫ e^x\cos (x)dx - xe^x\cos (x)+e^x\cos (x)- I + ∫ 
e^x \sin (x) dx $$</span> Great! We obtained the wanted integral with negative sign. Let's move it to the other side of equation: <span class="math-container">$$ 2I = e^xx\sin (x)-e^x\sin (x) + ∫ e^x\cos (x)dx - xe^x\cos (x)+e^x\cos (x) + ∫ e^x \sin (x) dx $$</span> and plug in the results of simpler integrals: <span class="math-container">$$ 2I = e^xx\sin (x)-e^x\sin (x) + \frac{e^x \sin (x) + e^x \cos (x)}{2} - xe^x\cos (x)+e^x\cos (x) + \frac{e^x \sin (x) - e^x \cos (x)}{2} + C $$</span> now just simplify the right side and divide by 2 to obtain the final result: <span class="math-container">$$ I = \frac {xe^x\sin (x) - xe^x\cos (x) +e^x\cos (x)}{2} + C $$</span></p> Answer 90: <p><strong>By indeterminate coefficients</strong>:</p> <p>A term like $xe^x\sin x$ can be generated by the derivative of itself (due to $e^x$), which will also generate $xe^x\cos x$ and $e^x\sin x$.</p> <p>Then we are tempted to try</p> <p>$$f(x)=e^x(x(A\sin x+B\cos x)+(C\sin x+D\cos x)),$$</p> <p>and</p> <p>$$f'(x)=e^x(x(A\sin x+B\cos x)+(C\sin x+D\cos x)+(A\sin x+B\cos x)+x(A\cos x-B\sin x)+(C\cos x-D\sin x)).$$</p> <p>We identify,</p> <p>$$A-B=1,\\A+B=0,\\C+A-D=0,\\D+B+C=0$$</p> <p>and obtain</p> <p>$$\frac12xe^x(\sin x-\cos x)+\frac12\cos x.$$</p> Answer 91: <p>First integrate $\sin(x) \exp(x),$ by parts, with the exponential as $d v.$ (you will need to do the integration by parts twice), then use the same method for the original integral.</p> Answer 92: <p>If one may recall that $\sin(x)=\Im e^{ix}$, then</p> <p>$$\int x\sin(x)e^x\ dx=\Im\int xe^{(1+i)x}\ dx$$</p> <p>With a quick integration by parts, we have</p> <p>$$=\Im\left(\frac1{1+i}xe^{(1+i)x}-\frac1{1+i}\int e^{(1+i)x}\ dx\right)\\=\Im\left(\frac1{1+i}xe^{(1+i)x}-\frac1{(1+i)^2}e^{(1+i)x}+C\right)\\=\frac12\left(x\sin(x)e^x-x\cos(x)e^x+\cos(x)e^x\right)$$</p> Answer 93: <p>Solution without complex numbers:</p> <p>Let $I=\int e^x x\sin xdx$ Integrating by parts:</p> <p>$$I=e^x x\sin x-\int e^x x\cos xdx-\int e^x\sin xdx$$</p> 
<p>Then one more time by parts:</p> <p>$$\int e^x x\cos xdx=e^x x\cos x + I-\int e^x\cos xdx$$</p> <p>So:</p> <p>$$2I=e^x x\sin x-e^x x\cos x+\int e^x(\cos x-\sin x)dx$$</p> <p>Now (by parts again or by direct observation):</p> <p>$$\int e^x(\cos x-\sin x)dx=e^x \cos x$$</p> <p>So:</p> <p>$$I=\frac{e^x x\sin x-e^x x\cos x+e^x \cos x}{2}$$</p> Answer 94: <p><strong>HINTS:</strong> </p> <p>Use Euler's Formula to write $$x\sin(x)e^x=\text{Im}(xe^{(1+i)x})$$Integrate $\int xe^{(1+i)x}\,dx$ by parts with $u=x$ and $v=\frac{e^{(1+i)x}}{1+i}$ and finish by taking the imaginary part.</p> Answer 95: <p>This solution doesn't use integration by parts. We start with</p> <p>$$\int\exp(x) dx = \exp(x)$$</p> <p>Substituting $x = \lambda t$ yields:</p> <p>$$\int\exp(\lambda t) dt = \frac{\exp(\lambda t)}{\lambda}$$</p> <p>Substitute $\lambda = 1+\epsilon + i$ and expand both sides to first order in $\epsilon$. Equating the coefficient of $\epsilon$ of both sides yields:</p> <p>$$\begin{split}\int t\exp(t)\exp(i t) dt &amp;= \exp(t)\exp(it)\left[\frac{1-i}{2}t + \frac{i}{2}\right]\\ &amp; = \exp(t)\left[\frac{\exp(i(t-\frac{\pi}{4}))}{\sqrt{2}}t + \frac{\exp(i(t+\frac{\pi}{2}))}{2}\right] \end{split} $$</p> <p>Finally, take the imaginary part of both sides:</p> <p>$$\int t\exp(t)\sin(t) dt = \exp(t)\left[\frac{\sin\left(t-\frac{\pi}{4}\right)}{\sqrt{2}}t + \frac{\cos(t)}{2}\right]$$</p> Answer 96: <p>This can also be solved another way (a little longer but correct nevertheless and could be more basic and readable for people who just started learning calculus):</p> <p>We will use the results of following(easy - just substitute):</p> <p><span class="math-container">$$ ∫ e^x \sin (x) dx = \frac{e^x \sin (x) - e^x \cos (x)}{2} + C $$</span></p> <p><span class="math-container">$$ ∫ e^{\ln (x)+x}dx = ∫ xe^xdx = xe^x-e^x + C $$</span> and</p> <p><span class="math-container">$$ ∫ e^x \cos (x) dx = \frac{e^x \sin (x) + e^x \cos (x)}{2} + C $$</span></p> <p>Now, back to the main integral. 
The main problem is that the formula contains not the usual two but three factors (<span class="math-container">$x$</span>, <span class="math-container">$e^x$</span> and <span class="math-container">$\sin (x)$</span>). However, using basic logarithm and <span class="math-container">$\exp $</span> properties we can transform the expression into one with just two factors:</p> <p><span class="math-container">$$ x e^x = e^{\ln (x)} e^x = e^{\ln (x) + x} $$</span></p> <p>This leaves us with</p> <p><span class="math-container">\begin{equation*} I = \int x e^x \sin (x)\, dx = \int e^{\ln(x) + x} \sin (x)\, dx = \end{equation*}</span></p> <p>Integrate by parts, substituting:</p> <p><span class="math-container">$$ \begin{equation*} \left[ \begin{alignedat}{2} u &amp;= \sin (x) \quad &amp; du &amp;=\cos (x)\,dx \\ dv &amp;= e^{\ln (x)+x}\,dx \quad &amp; v &amp;= xe^x-e^x \end{alignedat}\, \right] \end{equation*} $$</span></p> <p><span class="math-container">$$ = e^xx\sin (x)-e^x\sin (x)-∫ (xe^x\cos (x) -e^x\cos (x))dx $$</span></p> <p><span class="math-container">$$ = e^xx\sin (x)-e^x\sin (x)-∫ xe^x\cos (x)dx + ∫ e^x\cos (x)dx $$</span></p> <p>Now, another three-factor integral appears, which can also be integrated by parts using the previous trick.</p> <p><span class="math-container">$$ ∫ xe^x\cos (x)dx = $$</span> <span class="math-container">$$ \begin{equation*} = ∫ e^{\ln (x)+x}\cos (x)dx = \left[ \begin{alignedat}{2} u &amp;= \cos (x) \quad &amp; du &amp;=-\sin (x)\,dx \\ dv &amp;= e^{\ln (x)+x}\,dx \quad &amp; v &amp;= xe^x-e^x \end{alignedat}\, \right] = \end{equation*} $$</span> <span class="math-container">$$ = xe^x\cos (x)-e^x\cos (x)+ ∫ x e^x \sin (x) dx - ∫ e^x \sin (x) dx = $$</span> <span class="math-container">$$ = xe^x\cos (x)-e^x\cos (x)+ I - ∫ e^x \sin (x) dx $$</span></p> <p>Back to the previous equation, plug in the result above flipping the signs accordingly:</p> <p><span class="math-container">$$ I = e^xx\sin (x)-e^x\sin (x) + ∫ e^x\cos (x)dx - xe^x\cos 
e^x \sin (x) dx $$</span> Great! We obtained the wanted integral with negative sign. Let's move it to the other side of equation: <span class="math-container">$$ 2I = e^xx\sin (x)-e^x\sin (x) + ∫ e^x\cos (x)dx - xe^x\cos (x)+e^x\cos (x) + ∫ e^x \sin (x) dx $$</span> and plug in the results of simpler integrals: <span class="math-container">$$ 2I = e^xx\sin (x)-e^x\sin (x) + \frac{e^x \sin (x) + e^x \cos (x)}{2} - xe^x\cos (x)+e^x\cos (x) + \frac{e^x \sin (x) - e^x \cos (x)}{2} + C $$</span> now just simplify the right side and divide by 2 to obtain the final result: <span class="math-container">$$ I = \frac {xe^x\sin (x) - xe^x\cos (x) +e^x\cos (x)}{2} + C $$</span></p>
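All of the derivations above agree on the antiderivative $I=\frac{xe^x\sin (x) - xe^x\cos (x) + e^x\cos (x)}{2} + C$. As a quick sanity check (a sketch using only the Python standard library; the sample points are arbitrary), one can differentiate this expression numerically and compare against the integrand $x e^x \sin x$:

```python
import math

def F(x):
    # Antiderivative reached by all of the answers above:
    # F(x) = (x e^x sin x - x e^x cos x + e^x cos x) / 2
    return (x*math.exp(x)*math.sin(x) - x*math.exp(x)*math.cos(x)
            + math.exp(x)*math.cos(x)) / 2

def f(x):
    # Original integrand x e^x sin x
    return x * math.exp(x) * math.sin(x)

# F'(x) should equal f(x); approximate F' with a central difference.
h = 1e-6
for x in (0.0, 0.7, 1.5, -2.3):
    deriv = (F(x + h) - F(x - h)) / (2 * h)
    assert abs(deriv - f(x)) < 1e-4, (x, deriv, f(x))
```

The central difference has $O(h^2)$ truncation error, so a tolerance of $10^{-4}$ is comfortably loose at these points.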
The sum of digits in a 2-digit number
2,082,817
8
https://math.stackexchange.com/questions/2082817/the-sum-of-digits-in-a-2-digit-number
<blockquote> <p>The sum of digits in a two digit number formed by the two digits from $1$ to $9$ is $8$. If $9$ is added to the number then both the digits become equal. Find the number. </p> </blockquote> <p>My attempt:</p> <p>Let the two digit number be $10x+y$ where, $x$ is a digit at tens place and $y$ is the digit at unit's place. According to question:</p> <p>$$x+y=8$$</p> <p>I could not figure out the other condition. Please help. Thanks in advance.</p>
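The search space here is tiny, so the claim can also be verified by brute force (a quick Python sketch; variable names are illustrative): enumerate all candidates with digits from $1$ to $9$ and digit sum $8$, and keep those whose digits become equal after adding $9$.

```python
# Brute-force check: two digits, each from 1 to 9, digit sum 8,
# and adding 9 to the number makes both digits equal.
solutions = []
for tens in range(1, 10):
    for units in range(1, 10):
        if tens + units != 8:
            continue
        n = 10 * tens + units
        m = n + 9
        if m // 10 == m % 10:  # both digits of n + 9 are equal
            solutions.append(n)

print(solutions)  # → [35]
```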
[ { "answer_id": 2082823, "score": 14, "text": "<p>First note that $y \\ne 0$ since otherwise we would have $x + y = x + 0 = 8$, and so $10x + y = 80$, but $80$ doesn't satisfy the second condition.</p>\n\n<p>Therefore we must have $1 \\le y \\le 9$. This means that when we add $9$ to $10x + y$, the tens digit must increase by $1$ and the ones digit decreases by $1$. So then $10x + y + 9 = 10(x+1) + (y-1)$. Since the digits are equal, we have $x+1 = y-1$. Now you just have a system of two equations in two variables:</p>\n\n<p>\\begin{align*}\n x+y &amp;= 8\\\\\n x+1 &amp;= y-1\n\\end{align*}</p>\n" }, { "answer_id": 2082837, "score": 11, "text": "<p>Let the number be of the form $10a+b$; then, according to the question:</p>\n\n<p>$a+b=8$ and the digits of $10a+b+9$ are equal.</p>\n\n<p>Notice that adding $9$ to the given number will increase its tens digit by $1$ and decrease its unit digit by $1$. So, </p>\n\n<p>$a+1=b-1\\implies a+2=b$.</p>\n\n<p>Hence, we have $a+2+a=8\\implies a=3\\implies b=5 \\implies 10a+b=35$</p>\n" }, { "answer_id": 2082834, "score": 5, "text": "<p>The number is $35$: we have $$x+y=8,$$ and if $9$ is added to a number whose unit digit is not $0$, the ones digit decreases by $1$ while the tens digit increases by $1$; hence, after adding $9$, $$x+1=y-1$$ $$x-y=-2.$$ Solving both equations gives $$x=3$$ and $$y=5.$$</p>\n\n<p><strong>Alternatively:</strong> the two-digit numbers whose digits sum to $8$ are $$17,26,35,44,53,62,71,80$$ and among these only $35$ has equal digits after adding $9$.</p>\n" }, { "answer_id": 2083099, "score": 1, "text": "<p>Let the number be $10x + y$. \nAdding 9 to the number will make both the digits equal. Let that digit be $m$. 
Then,</p>\n\n<p>$$10x+y+9=11m$$\nSince $x+y=8$,\n$$9x=11m-17$$</p>\n\n<p>Since $x$ is an integer, \n$$(11m-17)\\%9=0$$\n$$\\Rightarrow 2m\\%9=8$$\n$$\\Rightarrow m\\in\\{4, 8.5, 13, \\ldots \\}$$</p>\n\n<p>But since $m$ is a non-negative integer less than 9,\n$$m=4$$</p>\n\n<p>And therefore, the original number $=44-9=35$.</p>\n" }, { "answer_id": 3906896, "score": 1, "text": "<p>Solving by observation and a little mathematics:</p>\n<p>let the number be\n<span class=\"math-container\">$$\\overline{xy}:x,y\\in [1,9] \\text{ and } x,y \\in \\text{Integers}$$</span></p>\n<p>Given, <span class=\"math-container\">$x+y=8$</span> and <span class=\"math-container\">$\\overline{xy} +9=\\overline{zz}$</span></p>\n<p>We know that <span class=\"math-container\">$\\overline{zz} \\in \\{11,22,33,44,55,66,77,88,99 \\} $</span></p>\n<p>So, <span class=\"math-container\">$(\\overline{zz}-9=\\overline{xy})\\in \\{11-9,22-9,33-9,44-9,55-9,66-9,77-9,88-9,99-9 \\} $</span></p>\n<p>or, <span class=\"math-container\">$(\\overline{zz}-9=\\overline{xy})\\in \\{13,24,35,46,57,68,79,90 \\} $</span></p>\n<p>We can see our answer: <span class=\"math-container\">$35$</span> is the only candidate whose digits sum to <span class=\"math-container\">$8$</span>.</p>\n" }, { "answer_id": 3906982, "score": 0, "text": "<p>Let the <span class=\"math-container\">$2$</span>-digit number be <span class=\"math-container\">$\\overline{ab} = 10a + b$</span>.</p>\n<blockquote>\n<p>If <span class=\"math-container\">$9$</span> is added to the number then both the digits become equal.</p>\n</blockquote>\n<p>This means that <span class=\"math-container\">$a = b$</span>, so the number plus <span class=\"math-container\">$9$</span> is <span class=\"math-container\">$10a + a = 11a$</span>, which is always <span class=\"math-container\">$0 \\text{ mod } {11}$</span> (divisible by <span class=\"math-container\">$11$</span>).</p>\n<p>Therefore <span class=\"math-container\">$\\overline{ab} + 9 \\equiv 0 \\text{ mod } {11} \\Rightarrow \\overline{ab} \\equiv 2 \\text{ mod } {11}$</span>.</p>\n<p>But <span class=\"math-container\">$a + b \\equiv 8 
\\text{ mod } {11}$</span>. Thus <span class=\"math-container\">$(10a+b)-(a+b) \\equiv 9a \\equiv -2a \\equiv 2-8 \\text{ mod } {11}$</span>, so <span class=\"math-container\">$a \\equiv 3 \\text{ mod } {11}$</span>.</p>\n<p>As <span class=\"math-container\">$a+b = 8$</span>, hence <span class=\"math-container\">$\\overline{ab} = \\boxed{35}$</span>.</p>\n" }, { "answer_id": 2082823, "score": 14, "text": "<p>First note that $y \\ne 0$ since otherwise we would have $x + y = x + 0 = 8$, and so $10x + y = 80$, but $80$ doesn't satisfy the second condition.</p>\n\n<p>Therefore we must have $1 \\le y \\le 9$. This means that when we add $9$ to $10x + y$, the tens digit must increase by $1$ and the ones digit decreases by $1$. So then $10x + y + 9 = 10(x+1) + (y-1)$. Since the digits are equal, we have $x+1 = y-1$. 
Now you just have a system of two equations in two variables:</p>\n\n<p>\\begin{align*}\n x+y &amp;= 8\\\\\n x+1 &amp;= y-1\n\\end{align*}</p>\n" }, { "answer_id": 3906982, "score": 0, "text": "<p>Let the 
<span class=\"math-container\">$2$</span>-digit number be <span class=\"math-container\">$\\overline{ab} = 10a + b$</span>.</p>\n<blockquote>\n<p>If <span class=\"math-container\">$9$</span> is added to the number then both the digits become equal.</p>\n</blockquote>\n<p>This means that <span class=\"math-container\">$a = b$</span>, so the number plus <span class=\"math-container\">$9$</span> is <span class=\"math-container\">$10a + a = 11a$</span>, which is always <span class=\"math-container\">$0 \\text{ mod } {11}$</span> (divisible by <span class=\"math-container\">$11$</span>).</p>\n<p>Therefore <span class=\"math-container\">$\\overline{ab} + 9 \\equiv 0 \\text{ mod } {11} \\Rightarrow \\overline{ab} \\equiv 2 \\text{ mod } {11}$</span>.</p>\n<p>But <span class=\"math-container\">$a + b \\equiv 8 \\text{ mod } {11}$</span>. Thus <span class=\"math-container\">$(10a+b)-(a+b) \\equiv 9a \\equiv -2a \\equiv 2-8 \\text{ mod } {11}$</span>, so <span class=\"math-container\">$a \\equiv 3 \\text{ mod } {11}$</span>.</p>\n<p>As <span class=\"math-container\">$a+b = 8$</span>, hence <span class=\"math-container\">$\\overline{ab} = \\boxed{35}$</span>.</p>\n" }, { "answer_id": 2082823, "score": 14, "text": "<p>First note that $y \\ne 0$ since otherwise we would have $x + y = x + 0 = 8$, and so $10x + y = 80$, but $80$ doesn't satisfy the second condition.</p>\n\n<p>Therefore we must have $1 \\le y \\le 9$. This means that when we add $9$ to $10x + y$, the tens digit must increase by $1$ and the ones digit decreases by $1$. So then $10x + y + 9 = 10(x+1) + (y-1)$. Since the digits are equal, we have $x+1 = y-1$. 
Now you just have a system of two equations in two variables:</p>\n\n<p>\\begin{align*}\n x+y &amp;= 8\\\\\n x+1 &amp;= y-1\n\\end{align*}</p>\n" }, { "answer_id": 2082837, "score": 11, "text": "<p>Let, number is of the form $10a+b$, then according to question:</p>\n\n<p>$a+b=8$ and digits of $10a+b+9$ are equal.</p>\n\n<p>Notice that adding $9$ to the give number will increase its tens digit by $1$ and decrease its unit digit by $1$. So, </p>\n\n<p>$a+1=b-1\\implies a+2=b$.</p>\n\n<p>Hence, we have $a+2+a=8\\implies a=3\\implies b=5 \\implies 10a+b=35$</p>\n" }, { "answer_id": 2082834, "score": 5, "text": "<p>The number is $35$, since $$x+y=8$$ and if $9$ is added to any number the ones digit must decrease by $1$ and the tens digit must increase by $1$ if and only if the unit digit is not $0$ hence by adding $9$ $$x+1=y-1$$ $$x-y=-2.$$ By solving both the equations we have $$x=3$$ and $$y=5$$ $$**OR**$$ The numbers whose sum of digits is equal to $8$ the numbers are $$17,26,35,44,53,62,71,80$$ and in these only $35$ is the number whose digits become equal on adding $9$</p>\n" }, { "answer_id": 2083099, "score": 1, "text": "<p>Let the number be $10x + y$. \nAdding 9 to the number will make both the digits equal. Let that digit be $m$. 
Then,</p>\n\n<p>$$10x+y+9=11m$$\nSince $x+y=8$,\n$$9x=11m-17$$</p>\n\n<p>Since x is an integer, \n$$(11m-17)\\%9=0$$\n$$\\Rightarrow 2m\\%9=8$$\n$$\\Rightarrow m=\\{4,8.5,13 \\ldots \\}$$</p>\n\n<p>But since $m$ is a non negative integer less than 9,\n$$m=4$$</p>\n\n<p>And therefore, original number $=44-9=35$.</p>\n" }, { "answer_id": 3906896, "score": 1, "text": "<p>Solving by observation and little mathematics;</p>\n<p>let the number be\n<span class=\"math-container\">$$\\overline{xy}:x,y\\in [1,9] \\text{ and } x,y \\in \\text{Integers}$$</span></p>\n<p>Given, <span class=\"math-container\">$x+y=8$</span> and <span class=\"math-container\">$\\overline{xy} +9=\\overline{zz}$</span></p>\n<p>We know that, <span class=\"math-container\">$\\overline{zz} \\in \\{11,22,33,44,55,66,77,88,99 \\} $</span></p>\n<p>So, <span class=\"math-container\">$(\\overline{zz}-9=\\overline{xy})\\in \\{11-9,22-9,33-9,44-9,55-9,66-9,77-9,88-9,99-9 \\} $</span></p>\n<p>or, <span class=\"math-container\">$(\\overline{zz}-9=\\overline{xy})\\in \\{13,24,35,55-9,66-9,77-9,88-9,99-9 \\} $</span></p>\n<p>We can see our answer.</p>\n" }, { "answer_id": 3906982, "score": 0, "text": "<p>Let the <span class=\"math-container\">$2$</span>-digit number be <span class=\"math-container\">$\\overline{ab} = 10a + b$</span>.</p>\n<blockquote>\n<p>If <span class=\"math-container\">$9$</span> is added to the number then both the digits become equal.</p>\n</blockquote>\n<p>This means that <span class=\"math-container\">$a = b$</span>, so the number plus <span class=\"math-container\">$9$</span> is <span class=\"math-container\">$10a + a = 11a$</span>, which is always <span class=\"math-container\">$0 \\text{ mod } {11}$</span> (divisible by <span class=\"math-container\">$11$</span>).</p>\n<p>Therefore <span class=\"math-container\">$\\overline{ab} + 9 \\equiv 0 \\text{ mod } {11} \\Rightarrow \\overline{ab} \\equiv 2 \\text{ mod } {11}$</span>.</p>\n<p>But <span class=\"math-container\">$a + b \\equiv 8 
\\text{ mod } {11}$</span>. Thus <span class=\"math-container\">$(10a+b)-(a+b) \\equiv 9a \\equiv -2a \\equiv 2-8 \\text{ mod } {11}$</span>, so <span class=\"math-container\">$a \\equiv 3 \\text{ mod } {11}$</span>.</p>\n<p>As <span class=\"math-container\">$a+b = 8$</span>, hence <span class=\"math-container\">$\\overline{ab} = \\boxed{35}$</span>.</p>\n" }, { "answer_id": 2082823, "score": 14, "text": "<p>First note that $y \\ne 0$ since otherwise we would have $x + y = x + 0 = 8$, and so $10x + y = 80$, but $80$ doesn't satisfy the second condition.</p>\n\n<p>Therefore we must have $1 \\le y \\le 9$. This means that when we add $9$ to $10x + y$, the tens digit must increase by $1$ and the ones digit decreases by $1$. So then $10x + y + 9 = 10(x+1) + (y-1)$. Since the digits are equal, we have $x+1 = y-1$. Now you just have a system of two equations in two variables:</p>\n\n<p>\\begin{align*}\n x+y &amp;= 8\\\\\n x+1 &amp;= y-1\n\\end{align*}</p>\n" }, { "answer_id": 2082837, "score": 11, "text": "<p>Let, number is of the form $10a+b$, then according to question:</p>\n\n<p>$a+b=8$ and digits of $10a+b+9$ are equal.</p>\n\n<p>Notice that adding $9$ to the give number will increase its tens digit by $1$ and decrease its unit digit by $1$. So, </p>\n\n<p>$a+1=b-1\\implies a+2=b$.</p>\n\n<p>Hence, we have $a+2+a=8\\implies a=3\\implies b=5 \\implies 10a+b=35$</p>\n" }, { "answer_id": 2082834, "score": 5, "text": "<p>The number is $35$, since $$x+y=8$$ and if $9$ is added to any number the ones digit must decrease by $1$ and the tens digit must increase by $1$ if and only if the unit digit is not $0$ hence by adding $9$ $$x+1=y-1$$ $$x-y=-2.$$ By solving both the equations we have $$x=3$$ and $$y=5$$ $$**OR**$$ The numbers whose sum of digits is equal to $8$ the numbers are $$17,26,35,44,53,62,71,80$$ and in these only $35$ is the number whose digits become equal on adding $9$</p>\n" }, { "answer_id": 2083099, "score": 1, "text": "<p>Let the number be $10x + y$. 
\nAdding 9 to the number will make both the digits equal. Let that digit be $m$. Then,</p>\n\n<p>$$10x+y+9=11m$$\nSince $x+y=8$,\n$$9x=11m-17$$</p>\n\n<p>Since x is an integer, \n$$(11m-17)\\%9=0$$\n$$\\Rightarrow 2m\\%9=8$$\n$$\\Rightarrow m=\\{4,8.5,13 \\ldots \\}$$</p>\n\n<p>But since $m$ is a non negative integer less than 9,\n$$m=4$$</p>\n\n<p>And therefore, original number $=44-9=35$.</p>\n" }, { "answer_id": 3906896, "score": 1, "text": "<p>Solving by observation and little mathematics;</p>\n<p>let the number be\n<span class=\"math-container\">$$\\overline{xy}:x,y\\in [1,9] \\text{ and } x,y \\in \\text{Integers}$$</span></p>\n<p>Given, <span class=\"math-container\">$x+y=8$</span> and <span class=\"math-container\">$\\overline{xy} +9=\\overline{zz}$</span></p>\n<p>We know that, <span class=\"math-container\">$\\overline{zz} \\in \\{11,22,33,44,55,66,77,88,99 \\} $</span></p>\n<p>So, <span class=\"math-container\">$(\\overline{zz}-9=\\overline{xy})\\in \\{11-9,22-9,33-9,44-9,55-9,66-9,77-9,88-9,99-9 \\} $</span></p>\n<p>or, <span class=\"math-container\">$(\\overline{zz}-9=\\overline{xy})\\in \\{13,24,35,55-9,66-9,77-9,88-9,99-9 \\} $</span></p>\n<p>We can see our answer.</p>\n" }, { "answer_id": 3906982, "score": 0, "text": "<p>Let the <span class=\"math-container\">$2$</span>-digit number be <span class=\"math-container\">$\\overline{ab} = 10a + b$</span>.</p>\n<blockquote>\n<p>If <span class=\"math-container\">$9$</span> is added to the number then both the digits become equal.</p>\n</blockquote>\n<p>This means that <span class=\"math-container\">$a = b$</span>, so the number plus <span class=\"math-container\">$9$</span> is <span class=\"math-container\">$10a + a = 11a$</span>, which is always <span class=\"math-container\">$0 \\text{ mod } {11}$</span> (divisible by <span class=\"math-container\">$11$</span>).</p>\n<p>Therefore <span class=\"math-container\">$\\overline{ab} + 9 \\equiv 0 \\text{ mod } {11} \\Rightarrow \\overline{ab} \\equiv 2 \\text{ 
mod } {11}$</span>.</p>\n<p>But <span class=\"math-container\">$a + b \\equiv 8 \\text{ mod } {11}$</span>. Thus <span class=\"math-container\">$(10a+b)-(a+b) \\equiv 9a \\equiv -2a \\equiv 2-8 \\text{ mod } {11}$</span>, so <span class=\"math-container\">$a \\equiv 3 \\text{ mod } {11}$</span>.</p>\n<p>As <span class=\"math-container\">$a+b = 8$</span>, hence <span class=\"math-container\">$\\overline{ab} = \\boxed{35}$</span>.</p>\n" }, { "answer_id": 2082823, "score": 14, "text": "<p>First note that $y \\ne 0$ since otherwise we would have $x + y = x + 0 = 8$, and so $10x + y = 80$, but $80$ doesn't satisfy the second condition.</p>\n\n<p>Therefore we must have $1 \\le y \\le 9$. This means that when we add $9$ to $10x + y$, the tens digit must increase by $1$ and the ones digit decreases by $1$. So then $10x + y + 9 = 10(x+1) + (y-1)$. Since the digits are equal, we have $x+1 = y-1$. Now you just have a system of two equations in two variables:</p>\n\n<p>\\begin{align*}\n x+y &amp;= 8\\\\\n x+1 &amp;= y-1\n\\end{align*}</p>\n" }, { "answer_id": 2082837, "score": 11, "text": "<p>Let, number is of the form $10a+b$, then according to question:</p>\n\n<p>$a+b=8$ and digits of $10a+b+9$ are equal.</p>\n\n<p>Notice that adding $9$ to the give number will increase its tens digit by $1$ and decrease its unit digit by $1$. 
So, </p>\n\n<p>$a+1=b-1\\implies a+2=b$.</p>\n\n<p>Hence, we have $a+2+a=8\\implies a=3\\implies b=5 \\implies 10a+b=35$</p>\n" }, { "answer_id": 2082834, "score": 5, "text": "<p>The number is $35$, since $$x+y=8$$ and if $9$ is added to any number the ones digit must decrease by $1$ and the tens digit must increase by $1$ if and only if the unit digit is not $0$ hence by adding $9$ $$x+1=y-1$$ $$x-y=-2.$$ By solving both the equations we have $$x=3$$ and $$y=5$$ $$**OR**$$ The numbers whose sum of digits is equal to $8$ the numbers are $$17,26,35,44,53,62,71,80$$ and in these only $35$ is the number whose digits become equal on adding $9$</p>\n" }, { "answer_id": 2083099, "score": 1, "text": "<p>Let the number be $10x + y$. \nAdding 9 to the number will make both the digits equal. Let that digit be $m$. Then,</p>\n\n<p>$$10x+y+9=11m$$\nSince $x+y=8$,\n$$9x=11m-17$$</p>\n\n<p>Since x is an integer, \n$$(11m-17)\\%9=0$$\n$$\\Rightarrow 2m\\%9=8$$\n$$\\Rightarrow m=\\{4,8.5,13 \\ldots \\}$$</p>\n\n<p>But since $m$ is a non negative integer less than 9,\n$$m=4$$</p>\n\n<p>And therefore, original number $=44-9=35$.</p>\n" }, { "answer_id": 3906896, "score": 1, "text": "<p>Solving by observation and little mathematics;</p>\n<p>let the number be\n<span class=\"math-container\">$$\\overline{xy}:x,y\\in [1,9] \\text{ and } x,y \\in \\text{Integers}$$</span></p>\n<p>Given, <span class=\"math-container\">$x+y=8$</span> and <span class=\"math-container\">$\\overline{xy} +9=\\overline{zz}$</span></p>\n<p>We know that, <span class=\"math-container\">$\\overline{zz} \\in \\{11,22,33,44,55,66,77,88,99 \\} $</span></p>\n<p>So, <span class=\"math-container\">$(\\overline{zz}-9=\\overline{xy})\\in \\{11-9,22-9,33-9,44-9,55-9,66-9,77-9,88-9,99-9 \\} $</span></p>\n<p>or, <span class=\"math-container\">$(\\overline{zz}-9=\\overline{xy})\\in \\{13,24,35,55-9,66-9,77-9,88-9,99-9 \\} $</span></p>\n<p>We can see our answer.</p>\n" }, { "answer_id": 3906982, "score": 0, "text": "<p>Let the 
<span class=\"math-container\">$2$</span>-digit number be <span class=\"math-container\">$\\overline{ab} = 10a + b$</span>.</p>\n<blockquote>\n<p>If <span class=\"math-container\">$9$</span> is added to the number then both the digits become equal.</p>\n</blockquote>\n<p>This means that <span class=\"math-container\">$a = b$</span>, so the number plus <span class=\"math-container\">$9$</span> is <span class=\"math-container\">$10a + a = 11a$</span>, which is always <span class=\"math-container\">$0 \\text{ mod } {11}$</span> (divisible by <span class=\"math-container\">$11$</span>).</p>\n<p>Therefore <span class=\"math-container\">$\\overline{ab} + 9 \\equiv 0 \\text{ mod } {11} \\Rightarrow \\overline{ab} \\equiv 2 \\text{ mod } {11}$</span>.</p>\n<p>But <span class=\"math-container\">$a + b \\equiv 8 \\text{ mod } {11}$</span>. Thus <span class=\"math-container\">$(10a+b)-(a+b) \\equiv 9a \\equiv -2a \\equiv 2-8 \\text{ mod } {11}$</span>, so <span class=\"math-container\">$a \\equiv 3 \\text{ mod } {11}$</span>.</p>\n<p>As <span class=\"math-container\">$a+b = 8$</span>, hence <span class=\"math-container\">$\\overline{ab} = \\boxed{35}$</span>.</p>\n" }, { "answer_id": 2082823, "score": 14, "text": "<p>First note that $y \\ne 0$ since otherwise we would have $x + y = x + 0 = 8$, and so $10x + y = 80$, but $80$ doesn't satisfy the second condition.</p>\n\n<p>Therefore we must have $1 \\le y \\le 9$. This means that when we add $9$ to $10x + y$, the tens digit must increase by $1$ and the ones digit decreases by $1$. So then $10x + y + 9 = 10(x+1) + (y-1)$. Since the digits are equal, we have $x+1 = y-1$. 
Now you just have a system of two equations in two variables:</p>\n\n<p>\\begin{align*}\n x+y &amp;= 8\\\\\n x+1 &amp;= y-1\n\\end{align*}</p>\n" }, { "answer_id": 2082837, "score": 11, "text": "<p>Let, number is of the form $10a+b$, then according to question:</p>\n\n<p>$a+b=8$ and digits of $10a+b+9$ are equal.</p>\n\n<p>Notice that adding $9$ to the give number will increase its tens digit by $1$ and decrease its unit digit by $1$. So, </p>\n\n<p>$a+1=b-1\\implies a+2=b$.</p>\n\n<p>Hence, we have $a+2+a=8\\implies a=3\\implies b=5 \\implies 10a+b=35$</p>\n" }, { "answer_id": 2082834, "score": 5, "text": "<p>The number is $35$, since $$x+y=8$$ and if $9$ is added to any number the ones digit must decrease by $1$ and the tens digit must increase by $1$ if and only if the unit digit is not $0$ hence by adding $9$ $$x+1=y-1$$ $$x-y=-2.$$ By solving both the equations we have $$x=3$$ and $$y=5$$ $$**OR**$$ The numbers whose sum of digits is equal to $8$ the numbers are $$17,26,35,44,53,62,71,80$$ and in these only $35$ is the number whose digits become equal on adding $9$</p>\n" }, { "answer_id": 2083099, "score": 1, "text": "<p>Let the number be $10x + y$. \nAdding 9 to the number will make both the digits equal. Let that digit be $m$. 
Then,</p>\n\n<p>$$10x+y+9=11m$$\nSince $x+y=8$,\n$$9x=11m-17$$</p>\n\n<p>Since x is an integer, \n$$(11m-17)\\%9=0$$\n$$\\Rightarrow 2m\\%9=8$$\n$$\\Rightarrow m=\\{4,8.5,13 \\ldots \\}$$</p>\n\n<p>But since $m$ is a non negative integer less than 9,\n$$m=4$$</p>\n\n<p>And therefore, original number $=44-9=35$.</p>\n" }, { "answer_id": 3906896, "score": 1, "text": "<p>Solving by observation and little mathematics;</p>\n<p>let the number be\n<span class=\"math-container\">$$\\overline{xy}:x,y\\in [1,9] \\text{ and } x,y \\in \\text{Integers}$$</span></p>\n<p>Given, <span class=\"math-container\">$x+y=8$</span> and <span class=\"math-container\">$\\overline{xy} +9=\\overline{zz}$</span></p>\n<p>We know that, <span class=\"math-container\">$\\overline{zz} \\in \\{11,22,33,44,55,66,77,88,99 \\} $</span></p>\n<p>So, <span class=\"math-container\">$(\\overline{zz}-9=\\overline{xy})\\in \\{11-9,22-9,33-9,44-9,55-9,66-9,77-9,88-9,99-9 \\} $</span></p>\n<p>or, <span class=\"math-container\">$(\\overline{zz}-9=\\overline{xy})\\in \\{13,24,35,55-9,66-9,77-9,88-9,99-9 \\} $</span></p>\n<p>We can see our answer.</p>\n" }, { "answer_id": 3906982, "score": 0, "text": "<p>Let the <span class=\"math-container\">$2$</span>-digit number be <span class=\"math-container\">$\\overline{ab} = 10a + b$</span>.</p>\n<blockquote>\n<p>If <span class=\"math-container\">$9$</span> is added to the number then both the digits become equal.</p>\n</blockquote>\n<p>This means that <span class=\"math-container\">$a = b$</span>, so the number plus <span class=\"math-container\">$9$</span> is <span class=\"math-container\">$10a + a = 11a$</span>, which is always <span class=\"math-container\">$0 \\text{ mod } {11}$</span> (divisible by <span class=\"math-container\">$11$</span>).</p>\n<p>Therefore <span class=\"math-container\">$\\overline{ab} + 9 \\equiv 0 \\text{ mod } {11} \\Rightarrow \\overline{ab} \\equiv 2 \\text{ mod } {11}$</span>.</p>\n<p>But <span class=\"math-container\">$a + b \\equiv 8 
\\text{ mod } {11}$</span>. Thus <span class=\"math-container\">$(10a+b)-(a+b) \\equiv 9a \\equiv -2a \\equiv 2-8 \\text{ mod } {11}$</span>, so <span class=\"math-container\">$a \\equiv 3 \\text{ mod } {11}$</span>.</p>\n<p>As <span class=\"math-container\">$a+b = 8$</span>, hence <span class=\"math-container\">$\\overline{ab} = \\boxed{35}$</span>.</p>\n" }, { "answer_id": 2082823, "score": 14, "text": "<p>First note that $y \\ne 0$ since otherwise we would have $x + y = x + 0 = 8$, and so $10x + y = 80$, but $80$ doesn't satisfy the second condition.</p>\n\n<p>Therefore we must have $1 \\le y \\le 9$. This means that when we add $9$ to $10x + y$, the tens digit must increase by $1$ and the ones digit decreases by $1$. So then $10x + y + 9 = 10(x+1) + (y-1)$. Since the digits are equal, we have $x+1 = y-1$. Now you just have a system of two equations in two variables:</p>\n\n<p>\\begin{align*}\n x+y &amp;= 8\\\\\n x+1 &amp;= y-1\n\\end{align*}</p>\n" }, { "answer_id": 2082837, "score": 11, "text": "<p>Let, number is of the form $10a+b$, then according to question:</p>\n\n<p>$a+b=8$ and digits of $10a+b+9$ are equal.</p>\n\n<p>Notice that adding $9$ to the give number will increase its tens digit by $1$ and decrease its unit digit by $1$. So, </p>\n\n<p>$a+1=b-1\\implies a+2=b$.</p>\n\n<p>Hence, we have $a+2+a=8\\implies a=3\\implies b=5 \\implies 10a+b=35$</p>\n" }, { "answer_id": 2082834, "score": 5, "text": "<p>The number is $35$, since $$x+y=8$$ and if $9$ is added to any number the ones digit must decrease by $1$ and the tens digit must increase by $1$ if and only if the unit digit is not $0$ hence by adding $9$ $$x+1=y-1$$ $$x-y=-2.$$ By solving both the equations we have $$x=3$$ and $$y=5$$ $$**OR**$$ The numbers whose sum of digits is equal to $8$ the numbers are $$17,26,35,44,53,62,71,80$$ and in these only $35$ is the number whose digits become equal on adding $9$</p>\n" }, { "answer_id": 2083099, "score": 1, "text": "<p>Let the number be $10x + y$. 
\nAdding 9 to the number will make both the digits equal. Let that digit be $m$. Then,</p>\n\n<p>$$10x+y+9=11m$$\nSince $x+y=8$,\n$$9x=11m-17$$</p>\n\n<p>Since x is an integer, \n$$(11m-17)\\%9=0$$\n$$\\Rightarrow 2m\\%9=8$$\n$$\\Rightarrow m=\\{4,8.5,13 \\ldots \\}$$</p>\n\n<p>But since $m$ is a non negative integer less than 9,\n$$m=4$$</p>\n\n<p>And therefore, original number $=44-9=35$.</p>\n" }, { "answer_id": 3906896, "score": 1, "text": "<p>Solving by observation and little mathematics;</p>\n<p>let the number be\n<span class=\"math-container\">$$\\overline{xy}:x,y\\in [1,9] \\text{ and } x,y \\in \\text{Integers}$$</span></p>\n<p>Given, <span class=\"math-container\">$x+y=8$</span> and <span class=\"math-container\">$\\overline{xy} +9=\\overline{zz}$</span></p>\n<p>We know that, <span class=\"math-container\">$\\overline{zz} \\in \\{11,22,33,44,55,66,77,88,99 \\} $</span></p>\n<p>So, <span class=\"math-container\">$(\\overline{zz}-9=\\overline{xy})\\in \\{11-9,22-9,33-9,44-9,55-9,66-9,77-9,88-9,99-9 \\} $</span></p>\n<p>or, <span class=\"math-container\">$(\\overline{zz}-9=\\overline{xy})\\in \\{13,24,35,55-9,66-9,77-9,88-9,99-9 \\} $</span></p>\n<p>We can see our answer.</p>\n" }, { "answer_id": 3906982, "score": 0, "text": "<p>Let the <span class=\"math-container\">$2$</span>-digit number be <span class=\"math-container\">$\\overline{ab} = 10a + b$</span>.</p>\n<blockquote>\n<p>If <span class=\"math-container\">$9$</span> is added to the number then both the digits become equal.</p>\n</blockquote>\n<p>This means that <span class=\"math-container\">$a = b$</span>, so the number plus <span class=\"math-container\">$9$</span> is <span class=\"math-container\">$10a + a = 11a$</span>, which is always <span class=\"math-container\">$0 \\text{ mod } {11}$</span> (divisible by <span class=\"math-container\">$11$</span>).</p>\n<p>Therefore <span class=\"math-container\">$\\overline{ab} + 9 \\equiv 0 \\text{ mod } {11} \\Rightarrow \\overline{ab} \\equiv 2 \\text{ 
mod } {11}$</span>.</p>\n<p>But <span class=\"math-container\">$a + b \\equiv 8 \\text{ mod } {11}$</span>. Thus <span class=\"math-container\">$(10a+b)-(a+b) \\equiv 9a \\equiv -2a \\equiv 2-8 \\text{ mod } {11}$</span>, so <span class=\"math-container\">$a \\equiv 3 \\text{ mod } {11}$</span>.</p>\n<p>As <span class=\"math-container\">$a+b = 8$</span>, hence <span class=\"math-container\">$\\overline{ab} = \\boxed{35}$</span>.</p>\n" } ]
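None of the answers in the thread include code; as a quick sanity check of their shared conclusion, the following short brute-force search (a sketch added here, not part of the original thread) confirms that $35$ is the only two-digit number whose digits sum to $8$ and whose digits become equal after adding $9$:

```python
# Brute-force check of the puzzle: find every two-digit number whose
# digits sum to 8 and whose digits become equal once 9 is added.
# The answers above all derive 35; this verifies it is unique.
def solve():
    matches = []
    for n in range(10, 100):
        tens, ones = divmod(n, 10)
        if tens + ones != 8:
            continue
        t2, o2 = divmod(n + 9, 10)
        if t2 == o2:  # both digits of n + 9 are equal
            matches.append(n)
    return matches

print(solve())  # [35]
```

Running it prints `[35]`, matching every derivation in the thread.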
Question: The sum of digits in a 2-digit number Answer 1: <p>First note that $y \ne 0$ since otherwise we would have $x + y = x + 0 = 8$, and so $10x + y = 80$, but $80$ doesn't satisfy the second condition.</p>

<p>Therefore we must have $1 \le y \le 9$. This means that when we add $9$ to $10x + y$, the tens digit increases by $1$ and the ones digit decreases by $1$. So then $10x + y + 9 = 10(x+1) + (y-1)$. Since the digits are equal, we have $x+1 = y-1$. Now you just have a system of two equations in two variables:</p>

<p>\begin{align*}
 x+y &amp;= 8\\
 x+1 &amp;= y-1
\end{align*}</p> Answer 2: <p>Let the number be of the form $10a+b$; then, according to the question:</p>

<p>$a+b=8$, and the digits of $10a+b+9$ are equal.</p>

<p>Notice that adding $9$ to the given number will increase its tens digit by $1$ and decrease its units digit by $1$. So,</p>

<p>$a+1=b-1\implies a+2=b$.</p>

<p>Hence, we have $a+2+a=8\implies a=3\implies b=5 \implies 10a+b=35$.</p> Answer 3: <p>The number is $35$: we have $$x+y=8,$$ and if $9$ is added to any number whose units digit is not $0$, the ones digit decreases by $1$ and the tens digit increases by $1$; hence, after adding $9$, $$x+1=y-1$$ $$x-y=-2.$$ Solving both equations gives $$x=3$$ and $$y=5.$$ <strong>OR</strong>: the two-digit numbers whose digit sum equals $8$ are $$17,26,35,44,53,62,71,80,$$ and of these only $35$ has digits that become equal on adding $9$.</p> Answer 4: <p>Let the number be $10x + y$. Adding 9 to the number will make both the digits equal. Let that digit be $m$. 
Then,</p>

<p>$$10x+y+9=11m$$
Since $x+y=8$,
$$9x=11m-17$$</p>

<p>Since $x$ is an integer, 
$$(11m-17)\%9=0$$
$$\Rightarrow 2m\%9=8$$
$$\Rightarrow m\in\{4,8.5,13, \ldots \}$$</p>

<p>But since $m$ is a non-negative integer less than 9,
$$m=4$$</p>

<p>And therefore, the original number $=44-9=35$.</p> Answer 5: <p>Solving by observation and a little mathematics:</p>
<p>Let the number be
<span class="math-container">$$\overline{xy}:x,y\in [1,9] \text{ and } x,y \in \text{Integers}$$</span></p>
<p>Given, <span class="math-container">$x+y=8$</span> and <span class="math-container">$\overline{xy} +9=\overline{zz}$</span></p>
<p>We know that, <span class="math-container">$\overline{zz} \in \{11,22,33,44,55,66,77,88,99 \} $</span></p>
<p>So, <span class="math-container">$(\overline{zz}-9=\overline{xy})\in \{11-9,22-9,33-9,44-9,55-9,66-9,77-9,88-9,99-9 \} $</span></p>
<p>or, <span class="math-container">$(\overline{zz}-9=\overline{xy})\in \{2,13,24,35,46,57,68,79,90 \} $</span></p>
<p>Only <span class="math-container">$35$</span> in this list has digit sum <span class="math-container">$8$</span>, so we can see our answer.</p> Answer 6: <p>Let the <span class="math-container">$2$</span>-digit number be <span class="math-container">$\overline{ab} = 10a + b$</span>.</p>
<blockquote>
<p>If <span class="math-container">$9$</span> is added to the number then both the digits become equal.</p>
</blockquote>
<p>This means that the digits of the new number are equal, so the number plus <span class="math-container">$9$</span> is of the form <span class="math-container">$10c + c = 11c$</span>, which is always <span class="math-container">$0 \text{ mod } {11}$</span> (divisible by <span class="math-container">$11$</span>).</p>
<p>Therefore <span class="math-container">$\overline{ab} + 9 \equiv 0 \text{ mod } {11} \Rightarrow \overline{ab} \equiv 2 \text{ mod } {11}$</span>.</p>
<p>But <span class="math-container">$a + b \equiv 8 \text{ mod } {11}$</span>. 
Thus <span class="math-container">$(10a+b)-(a+b) \equiv 9a \equiv -2a \equiv 2-8 \text{ mod } {11}$</span>, so <span class="math-container">$a \equiv 3 \text{ mod } {11}$</span>.</p> <p>As <span class="math-container">$a+b = 8$</span>, hence <span class="math-container">$\overline{ab} = \boxed{35}$</span>.</p> Answer 7: <p>First note that $y \ne 0$ since otherwise we would have $x + y = x + 0 = 8$, and so $10x + y = 80$, but $80$ doesn't satisfy the second condition.</p> <p>Therefore we must have $1 \le y \le 9$. This means that when we add $9$ to $10x + y$, the tens digit must increase by $1$ and the ones digit decreases by $1$. So then $10x + y + 9 = 10(x+1) + (y-1)$. Since the digits are equal, we have $x+1 = y-1$. Now you just have a system of two equations in two variables:</p> <p>\begin{align*} x+y &amp;= 8\\ x+1 &amp;= y-1 \end{align*}</p> Answer 8: <p>Let, number is of the form $10a+b$, then according to question:</p> <p>$a+b=8$ and digits of $10a+b+9$ are equal.</p> <p>Notice that adding $9$ to the give number will increase its tens digit by $1$ and decrease its unit digit by $1$. So, </p> <p>$a+1=b-1\implies a+2=b$.</p> <p>Hence, we have $a+2+a=8\implies a=3\implies b=5 \implies 10a+b=35$</p> Answer 9: <p>The number is $35$, since $$x+y=8$$ and if $9$ is added to any number the ones digit must decrease by $1$ and the tens digit must increase by $1$ if and only if the unit digit is not $0$ hence by adding $9$ $$x+1=y-1$$ $$x-y=-2.$$ By solving both the equations we have $$x=3$$ and $$y=5$$ $$**OR**$$ The numbers whose sum of digits is equal to $8$ the numbers are $$17,26,35,44,53,62,71,80$$ and in these only $35$ is the number whose digits become equal on adding $9$</p> Answer 10: <p>Let the number be $10x + y$. Adding 9 to the number will make both the digits equal. Let that digit be $m$. 
Then,</p> <p>$$10x+y+9=11m$$ Since $x+y=8$, $$9x=11m-17$$</p> <p>Since x is an integer, $$(11m-17)\%9=0$$ $$\Rightarrow 2m\%9=8$$ $$\Rightarrow m=\{4,8.5,13 \ldots \}$$</p> <p>But since $m$ is a non negative integer less than 9, $$m=4$$</p> <p>And therefore, original number $=44-9=35$.</p> Answer 11: <p>Solving by observation and little mathematics;</p> <p>let the number be <span class="math-container">$$\overline{xy}:x,y\in [1,9] \text{ and } x,y \in \text{Integers}$$</span></p> <p>Given, <span class="math-container">$x+y=8$</span> and <span class="math-container">$\overline{xy} +9=\overline{zz}$</span></p> <p>We know that, <span class="math-container">$\overline{zz} \in \{11,22,33,44,55,66,77,88,99 \} $</span></p> <p>So, <span class="math-container">$(\overline{zz}-9=\overline{xy})\in \{11-9,22-9,33-9,44-9,55-9,66-9,77-9,88-9,99-9 \} $</span></p> <p>or, <span class="math-container">$(\overline{zz}-9=\overline{xy})\in \{13,24,35,55-9,66-9,77-9,88-9,99-9 \} $</span></p> <p>We can see our answer.</p> Answer 12: <p>Let the <span class="math-container">$2$</span>-digit number be <span class="math-container">$\overline{ab} = 10a + b$</span>.</p> <blockquote> <p>If <span class="math-container">$9$</span> is added to the number then both the digits become equal.</p> </blockquote> <p>This means that <span class="math-container">$a = b$</span>, so the number plus <span class="math-container">$9$</span> is <span class="math-container">$10a + a = 11a$</span>, which is always <span class="math-container">$0 \text{ mod } {11}$</span> (divisible by <span class="math-container">$11$</span>).</p> <p>Therefore <span class="math-container">$\overline{ab} + 9 \equiv 0 \text{ mod } {11} \Rightarrow \overline{ab} \equiv 2 \text{ mod } {11}$</span>.</p> <p>But <span class="math-container">$a + b \equiv 8 \text{ mod } {11}$</span>. 
Thus <span class="math-container">$(10a+b)-(a+b) \equiv 9a \equiv -2a \equiv 2-8 \text{ mod } {11}$</span>, so <span class="math-container">$a \equiv 3 \text{ mod } {11}$</span>.</p> <p>As <span class="math-container">$a+b = 8$</span>, hence <span class="math-container">$\overline{ab} = \boxed{35}$</span>.</p> Answer 13: <p>First note that $y \ne 0$ since otherwise we would have $x + y = x + 0 = 8$, and so $10x + y = 80$, but $80$ doesn't satisfy the second condition.</p> <p>Therefore we must have $1 \le y \le 9$. This means that when we add $9$ to $10x + y$, the tens digit must increase by $1$ and the ones digit decreases by $1$. So then $10x + y + 9 = 10(x+1) + (y-1)$. Since the digits are equal, we have $x+1 = y-1$. Now you just have a system of two equations in two variables:</p> <p>\begin{align*} x+y &amp;= 8\\ x+1 &amp;= y-1 \end{align*}</p> Answer 14: <p>Let, number is of the form $10a+b$, then according to question:</p> <p>$a+b=8$ and digits of $10a+b+9$ are equal.</p> <p>Notice that adding $9$ to the give number will increase its tens digit by $1$ and decrease its unit digit by $1$. So, </p> <p>$a+1=b-1\implies a+2=b$.</p> <p>Hence, we have $a+2+a=8\implies a=3\implies b=5 \implies 10a+b=35$</p> Answer 15: <p>The number is $35$, since $$x+y=8$$ and if $9$ is added to any number the ones digit must decrease by $1$ and the tens digit must increase by $1$ if and only if the unit digit is not $0$ hence by adding $9$ $$x+1=y-1$$ $$x-y=-2.$$ By solving both the equations we have $$x=3$$ and $$y=5$$ $$**OR**$$ The numbers whose sum of digits is equal to $8$ the numbers are $$17,26,35,44,53,62,71,80$$ and in these only $35$ is the number whose digits become equal on adding $9$</p> Answer 16: <p>Let the number be $10x + y$. Adding 9 to the number will make both the digits equal. Let that digit be $m$. 
Thus <span class="math-container">$(10a+b)-(a+b) \equiv 9a \equiv -2a \equiv 2-8 \text{ mod } {11}$</span>, so <span class="math-container">$a \equiv 3 \text{ mod } {11}$</span>.</p> <p>As <span class="math-container">$a+b = 8$</span>, hence <span class="math-container">$\overline{ab} = \boxed{35}$</span>.</p> Answer 73: <p>First note that $y \ne 0$ since otherwise we would have $x + y = x + 0 = 8$, and so $10x + y = 80$, but $80$ doesn't satisfy the second condition.</p> <p>Therefore we must have $1 \le y \le 9$. This means that when we add $9$ to $10x + y$, the tens digit must increase by $1$ and the ones digit decreases by $1$. So then $10x + y + 9 = 10(x+1) + (y-1)$. Since the digits are equal, we have $x+1 = y-1$. Now you just have a system of two equations in two variables:</p> <p>\begin{align*} x+y &amp;= 8\\ x+1 &amp;= y-1 \end{align*}</p> Answer 74: <p>Let, number is of the form $10a+b$, then according to question:</p> <p>$a+b=8$ and digits of $10a+b+9$ are equal.</p> <p>Notice that adding $9$ to the give number will increase its tens digit by $1$ and decrease its unit digit by $1$. So, </p> <p>$a+1=b-1\implies a+2=b$.</p> <p>Hence, we have $a+2+a=8\implies a=3\implies b=5 \implies 10a+b=35$</p> Answer 75: <p>The number is $35$, since $$x+y=8$$ and if $9$ is added to any number the ones digit must decrease by $1$ and the tens digit must increase by $1$ if and only if the unit digit is not $0$ hence by adding $9$ $$x+1=y-1$$ $$x-y=-2.$$ By solving both the equations we have $$x=3$$ and $$y=5$$ $$**OR**$$ The numbers whose sum of digits is equal to $8$ the numbers are $$17,26,35,44,53,62,71,80$$ and in these only $35$ is the number whose digits become equal on adding $9$</p> Answer 76: <p>Let the number be $10x + y$. Adding 9 to the number will make both the digits equal. Let that digit be $m$. 
Then,</p> <p>$$10x+y+9=11m$$ Since $x+y=8$, $$9x=11m-17$$</p> <p>Since x is an integer, $$(11m-17)\%9=0$$ $$\Rightarrow 2m\%9=8$$ $$\Rightarrow m=\{4,8.5,13 \ldots \}$$</p> <p>But since $m$ is a non negative integer less than 9, $$m=4$$</p> <p>And therefore, original number $=44-9=35$.</p> Answer 77: <p>Solving by observation and little mathematics;</p> <p>let the number be <span class="math-container">$$\overline{xy}:x,y\in [1,9] \text{ and } x,y \in \text{Integers}$$</span></p> <p>Given, <span class="math-container">$x+y=8$</span> and <span class="math-container">$\overline{xy} +9=\overline{zz}$</span></p> <p>We know that, <span class="math-container">$\overline{zz} \in \{11,22,33,44,55,66,77,88,99 \} $</span></p> <p>So, <span class="math-container">$(\overline{zz}-9=\overline{xy})\in \{11-9,22-9,33-9,44-9,55-9,66-9,77-9,88-9,99-9 \} $</span></p> <p>or, <span class="math-container">$(\overline{zz}-9=\overline{xy})\in \{13,24,35,55-9,66-9,77-9,88-9,99-9 \} $</span></p> <p>We can see our answer.</p> Answer 78: <p>Let the <span class="math-container">$2$</span>-digit number be <span class="math-container">$\overline{ab} = 10a + b$</span>.</p> <blockquote> <p>If <span class="math-container">$9$</span> is added to the number then both the digits become equal.</p> </blockquote> <p>This means that <span class="math-container">$a = b$</span>, so the number plus <span class="math-container">$9$</span> is <span class="math-container">$10a + a = 11a$</span>, which is always <span class="math-container">$0 \text{ mod } {11}$</span> (divisible by <span class="math-container">$11$</span>).</p> <p>Therefore <span class="math-container">$\overline{ab} + 9 \equiv 0 \text{ mod } {11} \Rightarrow \overline{ab} \equiv 2 \text{ mod } {11}$</span>.</p> <p>But <span class="math-container">$a + b \equiv 8 \text{ mod } {11}$</span>. 
Thus <span class="math-container">$(10a+b)-(a+b) \equiv 9a \equiv -2a \equiv 2-8 \text{ mod } {11}$</span>, so <span class="math-container">$a \equiv 3 \text{ mod } {11}$</span>.</p> <p>As <span class="math-container">$a+b = 8$</span>, hence <span class="math-container">$\overline{ab} = \boxed{35}$</span>.</p> Answer 79: <p>First note that $y \ne 0$ since otherwise we would have $x + y = x + 0 = 8$, and so $10x + y = 80$, but $80$ doesn't satisfy the second condition.</p> <p>Therefore we must have $1 \le y \le 9$. This means that when we add $9$ to $10x + y$, the tens digit must increase by $1$ and the ones digit decreases by $1$. So then $10x + y + 9 = 10(x+1) + (y-1)$. Since the digits are equal, we have $x+1 = y-1$. Now you just have a system of two equations in two variables:</p> <p>\begin{align*} x+y &amp;= 8\\ x+1 &amp;= y-1 \end{align*}</p> Answer 80: <p>Let, number is of the form $10a+b$, then according to question:</p> <p>$a+b=8$ and digits of $10a+b+9$ are equal.</p> <p>Notice that adding $9$ to the give number will increase its tens digit by $1$ and decrease its unit digit by $1$. So, </p> <p>$a+1=b-1\implies a+2=b$.</p> <p>Hence, we have $a+2+a=8\implies a=3\implies b=5 \implies 10a+b=35$</p> Answer 81: <p>The number is $35$, since $$x+y=8$$ and if $9$ is added to any number the ones digit must decrease by $1$ and the tens digit must increase by $1$ if and only if the unit digit is not $0$ hence by adding $9$ $$x+1=y-1$$ $$x-y=-2.$$ By solving both the equations we have $$x=3$$ and $$y=5$$ $$**OR**$$ The numbers whose sum of digits is equal to $8$ the numbers are $$17,26,35,44,53,62,71,80$$ and in these only $35$ is the number whose digits become equal on adding $9$</p> Answer 82: <p>Let the number be $10x + y$. Adding 9 to the number will make both the digits equal. Let that digit be $m$. 
Then,</p> <p>$$10x+y+9=11m$$ Since $x+y=8$, $$9x=11m-17$$</p> <p>Since x is an integer, $$(11m-17)\%9=0$$ $$\Rightarrow 2m\%9=8$$ $$\Rightarrow m=\{4,8.5,13 \ldots \}$$</p> <p>But since $m$ is a non negative integer less than 9, $$m=4$$</p> <p>And therefore, original number $=44-9=35$.</p> Answer 83: <p>Solving by observation and little mathematics;</p> <p>let the number be <span class="math-container">$$\overline{xy}:x,y\in [1,9] \text{ and } x,y \in \text{Integers}$$</span></p> <p>Given, <span class="math-container">$x+y=8$</span> and <span class="math-container">$\overline{xy} +9=\overline{zz}$</span></p> <p>We know that, <span class="math-container">$\overline{zz} \in \{11,22,33,44,55,66,77,88,99 \} $</span></p> <p>So, <span class="math-container">$(\overline{zz}-9=\overline{xy})\in \{11-9,22-9,33-9,44-9,55-9,66-9,77-9,88-9,99-9 \} $</span></p> <p>or, <span class="math-container">$(\overline{zz}-9=\overline{xy})\in \{13,24,35,55-9,66-9,77-9,88-9,99-9 \} $</span></p> <p>We can see our answer.</p> Answer 84: <p>Let the <span class="math-container">$2$</span>-digit number be <span class="math-container">$\overline{ab} = 10a + b$</span>.</p> <blockquote> <p>If <span class="math-container">$9$</span> is added to the number then both the digits become equal.</p> </blockquote> <p>This means that <span class="math-container">$a = b$</span>, so the number plus <span class="math-container">$9$</span> is <span class="math-container">$10a + a = 11a$</span>, which is always <span class="math-container">$0 \text{ mod } {11}$</span> (divisible by <span class="math-container">$11$</span>).</p> <p>Therefore <span class="math-container">$\overline{ab} + 9 \equiv 0 \text{ mod } {11} \Rightarrow \overline{ab} \equiv 2 \text{ mod } {11}$</span>.</p> <p>But <span class="math-container">$a + b \equiv 8 \text{ mod } {11}$</span>. 
Thus <span class="math-container">$(10a+b)-(a+b) \equiv 9a \equiv -2a \equiv 2-8 \text{ mod } {11}$</span>, so <span class="math-container">$a \equiv 3 \text{ mod } {11}$</span>.</p> <p>As <span class="math-container">$a+b = 8$</span>, hence <span class="math-container">$\overline{ab} = \boxed{35}$</span>.</p> Answer 85: <p>First note that $y \ne 0$ since otherwise we would have $x + y = x + 0 = 8$, and so $10x + y = 80$, but $80$ doesn't satisfy the second condition.</p> <p>Therefore we must have $1 \le y \le 9$. This means that when we add $9$ to $10x + y$, the tens digit must increase by $1$ and the ones digit decreases by $1$. So then $10x + y + 9 = 10(x+1) + (y-1)$. Since the digits are equal, we have $x+1 = y-1$. Now you just have a system of two equations in two variables:</p> <p>\begin{align*} x+y &amp;= 8\\ x+1 &amp;= y-1 \end{align*}</p> Answer 86: <p>Let, number is of the form $10a+b$, then according to question:</p> <p>$a+b=8$ and digits of $10a+b+9$ are equal.</p> <p>Notice that adding $9$ to the give number will increase its tens digit by $1$ and decrease its unit digit by $1$. So, </p> <p>$a+1=b-1\implies a+2=b$.</p> <p>Hence, we have $a+2+a=8\implies a=3\implies b=5 \implies 10a+b=35$</p> Answer 87: <p>The number is $35$, since $$x+y=8$$ and if $9$ is added to any number the ones digit must decrease by $1$ and the tens digit must increase by $1$ if and only if the unit digit is not $0$ hence by adding $9$ $$x+1=y-1$$ $$x-y=-2.$$ By solving both the equations we have $$x=3$$ and $$y=5$$ $$**OR**$$ The numbers whose sum of digits is equal to $8$ the numbers are $$17,26,35,44,53,62,71,80$$ and in these only $35$ is the number whose digits become equal on adding $9$</p> Answer 88: <p>Let the number be $10x + y$. Adding 9 to the number will make both the digits equal. Let that digit be $m$. 
Then,</p> <p>$$10x+y+9=11m$$ Since $x+y=8$, $$9x=11m-17$$</p> <p>Since x is an integer, $$(11m-17)\%9=0$$ $$\Rightarrow 2m\%9=8$$ $$\Rightarrow m=\{4,8.5,13 \ldots \}$$</p> <p>But since $m$ is a non negative integer less than 9, $$m=4$$</p> <p>And therefore, original number $=44-9=35$.</p> Answer 89: <p>Solving by observation and little mathematics;</p> <p>let the number be <span class="math-container">$$\overline{xy}:x,y\in [1,9] \text{ and } x,y \in \text{Integers}$$</span></p> <p>Given, <span class="math-container">$x+y=8$</span> and <span class="math-container">$\overline{xy} +9=\overline{zz}$</span></p> <p>We know that, <span class="math-container">$\overline{zz} \in \{11,22,33,44,55,66,77,88,99 \} $</span></p> <p>So, <span class="math-container">$(\overline{zz}-9=\overline{xy})\in \{11-9,22-9,33-9,44-9,55-9,66-9,77-9,88-9,99-9 \} $</span></p> <p>or, <span class="math-container">$(\overline{zz}-9=\overline{xy})\in \{13,24,35,55-9,66-9,77-9,88-9,99-9 \} $</span></p> <p>We can see our answer.</p> Answer 90: <p>Let the <span class="math-container">$2$</span>-digit number be <span class="math-container">$\overline{ab} = 10a + b$</span>.</p> <blockquote> <p>If <span class="math-container">$9$</span> is added to the number then both the digits become equal.</p> </blockquote> <p>This means that <span class="math-container">$a = b$</span>, so the number plus <span class="math-container">$9$</span> is <span class="math-container">$10a + a = 11a$</span>, which is always <span class="math-container">$0 \text{ mod } {11}$</span> (divisible by <span class="math-container">$11$</span>).</p> <p>Therefore <span class="math-container">$\overline{ab} + 9 \equiv 0 \text{ mod } {11} \Rightarrow \overline{ab} \equiv 2 \text{ mod } {11}$</span>.</p> <p>But <span class="math-container">$a + b \equiv 8 \text{ mod } {11}$</span>. 
Thus <span class="math-container">$(10a+b)-(a+b) \equiv 9a \equiv -2a \equiv 2-8 \text{ mod } {11}$</span>, so <span class="math-container">$a \equiv 3 \text{ mod } {11}$</span>.</p> <p>As <span class="math-container">$a+b = 8$</span>, hence <span class="math-container">$\overline{ab} = \boxed{35}$</span>.</p> Answer 91: <p>First note that $y \ne 0$ since otherwise we would have $x + y = x + 0 = 8$, and so $10x + y = 80$, but $80$ doesn't satisfy the second condition.</p> <p>Therefore we must have $1 \le y \le 9$. This means that when we add $9$ to $10x + y$, the tens digit must increase by $1$ and the ones digit decreases by $1$. So then $10x + y + 9 = 10(x+1) + (y-1)$. Since the digits are equal, we have $x+1 = y-1$. Now you just have a system of two equations in two variables:</p> <p>\begin{align*} x+y &amp;= 8\\ x+1 &amp;= y-1 \end{align*}</p> Answer 92: <p>Let, number is of the form $10a+b$, then according to question:</p> <p>$a+b=8$ and digits of $10a+b+9$ are equal.</p> <p>Notice that adding $9$ to the give number will increase its tens digit by $1$ and decrease its unit digit by $1$. So, </p> <p>$a+1=b-1\implies a+2=b$.</p> <p>Hence, we have $a+2+a=8\implies a=3\implies b=5 \implies 10a+b=35$</p> Answer 93: <p>The number is $35$, since $$x+y=8$$ and if $9$ is added to any number the ones digit must decrease by $1$ and the tens digit must increase by $1$ if and only if the unit digit is not $0$ hence by adding $9$ $$x+1=y-1$$ $$x-y=-2.$$ By solving both the equations we have $$x=3$$ and $$y=5$$ $$**OR**$$ The numbers whose sum of digits is equal to $8$ the numbers are $$17,26,35,44,53,62,71,80$$ and in these only $35$ is the number whose digits become equal on adding $9$</p> Answer 94: <p>Let the number be $10x + y$. Adding 9 to the number will make both the digits equal. Let that digit be $m$. 
Then,</p> <p>$$10x+y+9=11m$$ Since $x+y=8$, $$9x=11m-17$$</p> <p>Since x is an integer, $$(11m-17)\%9=0$$ $$\Rightarrow 2m\%9=8$$ $$\Rightarrow m=\{4,8.5,13 \ldots \}$$</p> <p>But since $m$ is a non negative integer less than 9, $$m=4$$</p> <p>And therefore, original number $=44-9=35$.</p> Answer 95: <p>Solving by observation and little mathematics;</p> <p>let the number be <span class="math-container">$$\overline{xy}:x,y\in [1,9] \text{ and } x,y \in \text{Integers}$$</span></p> <p>Given, <span class="math-container">$x+y=8$</span> and <span class="math-container">$\overline{xy} +9=\overline{zz}$</span></p> <p>We know that, <span class="math-container">$\overline{zz} \in \{11,22,33,44,55,66,77,88,99 \} $</span></p> <p>So, <span class="math-container">$(\overline{zz}-9=\overline{xy})\in \{11-9,22-9,33-9,44-9,55-9,66-9,77-9,88-9,99-9 \} $</span></p> <p>or, <span class="math-container">$(\overline{zz}-9=\overline{xy})\in \{13,24,35,55-9,66-9,77-9,88-9,99-9 \} $</span></p> <p>We can see our answer.</p> Answer 96: <p>Let the <span class="math-container">$2$</span>-digit number be <span class="math-container">$\overline{ab} = 10a + b$</span>.</p> <blockquote> <p>If <span class="math-container">$9$</span> is added to the number then both the digits become equal.</p> </blockquote> <p>This means that <span class="math-container">$a = b$</span>, so the number plus <span class="math-container">$9$</span> is <span class="math-container">$10a + a = 11a$</span>, which is always <span class="math-container">$0 \text{ mod } {11}$</span> (divisible by <span class="math-container">$11$</span>).</p> <p>Therefore <span class="math-container">$\overline{ab} + 9 \equiv 0 \text{ mod } {11} \Rightarrow \overline{ab} \equiv 2 \text{ mod } {11}$</span>.</p> <p>But <span class="math-container">$a + b \equiv 8 \text{ mod } {11}$</span>. 
Thus <span class="math-container">$(10a+b)-(a+b) \equiv 9a \equiv -2a \equiv 2-8 \text{ mod } {11}$</span>, so <span class="math-container">$a \equiv 3 \text{ mod } {11}$</span>.</p> <p>As <span class="math-container">$a+b = 8$</span>, hence <span class="math-container">$\overline{ab} = \boxed{35}$</span>.</p> Answer 97: <p>First note that $y \ne 0$ since otherwise we would have $x + y = x + 0 = 8$, and so $10x + y = 80$, but $80$ doesn't satisfy the second condition.</p> <p>Therefore we must have $1 \le y \le 9$. This means that when we add $9$ to $10x + y$, the tens digit must increase by $1$ and the ones digit decreases by $1$. So then $10x + y + 9 = 10(x+1) + (y-1)$. Since the digits are equal, we have $x+1 = y-1$. Now you just have a system of two equations in two variables:</p> <p>\begin{align*} x+y &amp;= 8\\ x+1 &amp;= y-1 \end{align*}</p> Answer 98: <p>Let, number is of the form $10a+b$, then according to question:</p> <p>$a+b=8$ and digits of $10a+b+9$ are equal.</p> <p>Notice that adding $9$ to the give number will increase its tens digit by $1$ and decrease its unit digit by $1$. So, </p> <p>$a+1=b-1\implies a+2=b$.</p> <p>Hence, we have $a+2+a=8\implies a=3\implies b=5 \implies 10a+b=35$</p> Answer 99: <p>The number is $35$, since $$x+y=8$$ and if $9$ is added to any number the ones digit must decrease by $1$ and the tens digit must increase by $1$ if and only if the unit digit is not $0$ hence by adding $9$ $$x+1=y-1$$ $$x-y=-2.$$ By solving both the equations we have $$x=3$$ and $$y=5$$ $$**OR**$$ The numbers whose sum of digits is equal to $8$ the numbers are $$17,26,35,44,53,62,71,80$$ and in these only $35$ is the number whose digits become equal on adding $9$</p> Answer 100: <p>Let the number be $10x + y$. Adding 9 to the number will make both the digits equal. Let that digit be $m$. 
Then,</p> <p>$$10x+y+9=11m$$ Since $x+y=8$, $$9x=11m-17$$</p> <p>Since x is an integer, $$(11m-17)\%9=0$$ $$\Rightarrow 2m\%9=8$$ $$\Rightarrow m=\{4,8.5,13 \ldots \}$$</p> <p>But since $m$ is a non negative integer less than 9, $$m=4$$</p> <p>And therefore, original number $=44-9=35$.</p> Answer 101: <p>Solving by observation and little mathematics;</p> <p>let the number be <span class="math-container">$$\overline{xy}:x,y\in [1,9] \text{ and } x,y \in \text{Integers}$$</span></p> <p>Given, <span class="math-container">$x+y=8$</span> and <span class="math-container">$\overline{xy} +9=\overline{zz}$</span></p> <p>We know that, <span class="math-container">$\overline{zz} \in \{11,22,33,44,55,66,77,88,99 \} $</span></p> <p>So, <span class="math-container">$(\overline{zz}-9=\overline{xy})\in \{11-9,22-9,33-9,44-9,55-9,66-9,77-9,88-9,99-9 \} $</span></p> <p>or, <span class="math-container">$(\overline{zz}-9=\overline{xy})\in \{13,24,35,55-9,66-9,77-9,88-9,99-9 \} $</span></p> <p>We can see our answer.</p> Answer 102: <p>Let the <span class="math-container">$2$</span>-digit number be <span class="math-container">$\overline{ab} = 10a + b$</span>.</p> <blockquote> <p>If <span class="math-container">$9$</span> is added to the number then both the digits become equal.</p> </blockquote> <p>This means that <span class="math-container">$a = b$</span>, so the number plus <span class="math-container">$9$</span> is <span class="math-container">$10a + a = 11a$</span>, which is always <span class="math-container">$0 \text{ mod } {11}$</span> (divisible by <span class="math-container">$11$</span>).</p> <p>Therefore <span class="math-container">$\overline{ab} + 9 \equiv 0 \text{ mod } {11} \Rightarrow \overline{ab} \equiv 2 \text{ mod } {11}$</span>.</p> <p>But <span class="math-container">$a + b \equiv 8 \text{ mod } {11}$</span>. 
Thus <span class="math-container">$(10a+b)-(a+b) \equiv 9a \equiv -2a \equiv 2-8 \text{ mod } {11}$</span>, so <span class="math-container">$a \equiv 3 \text{ mod } {11}$</span>.</p> <p>As <span class="math-container">$a+b = 8$</span>, hence <span class="math-container">$\overline{ab} = \boxed{35}$</span>.</p>
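All of the answers above reduce to the same small search, so as a quick sanity check (an editorial sketch, not part of any answer) a brute-force scan over the two-digit numbers confirms that $35$ is the unique solution:

```python
# Brute-force check: two-digit numbers whose digits sum to 8 and whose
# digits become equal after adding 9.
solutions = []
for n in range(10, 100):
    tens, ones = divmod(n, 10)
    if tens + ones == 8:
        t2, o2 = divmod(n + 9, 10)  # n + 9 is still two-digit here, since n <= 80
        if t2 == o2:
            solutions.append(n)
print(solutions)  # [35]
```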
finding the center of a circle (elementary geometry)
2,083,673
8
https://math.stackexchange.com/questions/2083673/finding-the-center-of-a-circle-elementary-geometry
<p>I am considering the problem of finding the center of a given circle C of radius r. I currently know 3 elementary ways (ruler+compass construction) of doing this:</p> <ul> <li>choose 2 points on the circle, draw the perpendicular bisector of the chord, and draw another such bisector involving a third point on the circle;</li> <li>choose 2 points on the circle, draw the perpendicular bisector to get a diameter, then find its midpoint;</li> <li>choose 2 points on the circle, draw a chord, then a perpendicular line, and get a diameter (Thales' theorem).</li> </ul> <p>I was looking for other elementary methods (not involving trigonometry/analytic geometry). Then I heard something quite confusing for me: with two points on the circle C, call them A and B, "close enough", draw the circle C1 of center A and radius AB, and the circle C2 of center B and radius AB; they meet at D ('inner' with respect to the circle C) and E ('outer' with respect to the circle C). Call F the other intersection of the circles C and C1, then draw (FD), which intersects the circle C at G; it seems that the circle of center G and radius r contains the three points B, D and the center of the circle C.</p> <p>I have no idea whether it is possible to prove this property using only elementary tools. Any hint please? Comments are welcome (and please excuse my not-so-good english :/ )</p>
[ { "answer_id": 2085523, "score": 11, "text": "<p>Based on the described construction and using the OP's notations, we have the following figure:</p>\n<p><a href=\"https://i.stack.imgur.com/rsaYo.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/rsaYo.png\" alt=\"![enter image description here\" /></a></p>\n<p>It is enough to demonstrate that <span class=\"math-container\">$DG=r$</span>, the radius of the circle.</p>\n<p>By the construction, the triangle <span class=\"math-container\">$ABD$</span> is equilateral and then <span class=\"math-container\">$AB=BD=AF$</span>. <span class=\"math-container\">$BDG$</span> is an isosceles triangle because the triangles <span class=\"math-container\">$ABG$</span> and <span class=\"math-container\">$ADG$</span> are congruent: <span class=\"math-container\">$\\angle AGD = \\angle BGA$</span> and <span class=\"math-container\">$AB=AD=AF$</span>.</p>\n<p>Now, connecting <span class=\"math-container\">$A$</span> and <span class=\"math-container\">$F$</span> with the center <span class=\"math-container\">$O$</span> we get the green triangle <span class=\"math-container\">$AFO$</span>. To show that <span class=\"math-container\">$DG=FO=r$</span>, it is enough to prove that the triangles <span class=\"math-container\">$BDG$</span> and <span class=\"math-container\">$AFO$</span> are congruent.</p>\n<p>We have already seen that the bases of these triangles are equal. What remains is to show that <span class=\"math-container\">$\\angle AOF = \\angle BGD$</span>.</p>\n<p>This is easy. Just look at the figure and you will see that <span class=\"math-container\">$\\angle BGD =2\\times\\angle AGD = \\angle AOF$</span>. 
(See the Wiki article called <a href=\"https://en.wikipedia.org/wiki/Inscribed_angle\" rel=\"nofollow noreferrer\">Inscribed angle</a>.)</p>\n<hr />\n<p>Finally, this is how to use this construction to find the center of the circle:</p>\n<p><a href=\"https://i.stack.imgur.com/YL4O1.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/YL4O1.png\" alt=\"enter image description here\" /></a></p>\n<ol>\n<li>Construct <span class=\"math-container\">$E$</span>, <span class=\"math-container\">$F$</span>, and <span class=\"math-container\">$D$</span> and then <span class=\"math-container\">$G$</span> as described.</li>\n<li>The straight line <span class=\"math-container\">$ED$</span> and the circle centered at <span class=\"math-container\">$G$</span> through <span class=\"math-container\">$D$</span> will meet at <span class=\"math-container\">$O$</span>.</li>\n</ol>\n<hr />\n<p>Note</p>\n<p>This construction is not that bad compared to the most frequently used one, in which you construct two perpendicular bisectors. With the present construction it is enough to draw the line through <span class=\"math-container\">$F$</span> and <span class=\"math-container\">$D$</span>, which already exist after constructing the first perpendicular bisector.</p>\n" }, { "answer_id": 2952354, "score": 0, "text": "<p>Filling in on the main answer. It takes an argument to show that the triangle BGD is isosceles. We can't apply any usual triangle congruence theorem that I can see.</p>\n\n<p>Here is one approach: take <span class=\"math-container\">$A, B$</span>, and <span class=\"math-container\">$D$</span> as above. Construct a line through <span class=\"math-container\">$A$</span> which bisects <span class=\"math-container\">$BD$</span>. Let the point where it crosses the circle be <span class=\"math-container\">$G'$</span>. 
Note the construction forces angle <span class=\"math-container\">$AG'B$</span> to equal angle <span class=\"math-container\">$AG'D$</span>. Draw a line thru <span class=\"math-container\">$G'$</span> and <span class=\"math-container\">$D$</span> and call the point where this intersects the circle <span class=\"math-container\">$F'$</span>. The arc <span class=\"math-container\">$AF'$</span> equals the arc <span class=\"math-container\">$BA$</span> because angle <span class=\"math-container\">$AG'B$</span> equals angle <span class=\"math-container\">$AG'F'$</span>. But this forces <span class=\"math-container\">$F'=F$</span>. But then <span class=\"math-container\">$G'$</span> is the point where the circle meets the line through <span class=\"math-container\">$F$</span> and <span class=\"math-container\">$D$</span>. So <span class=\"math-container\">$G'=G$</span>.</p>\n\n<p>Perhaps there is a simpler way.</p>\n" } ]
<span class=\"math-container\">$\\angle AGD = \\angle BGA$</span> and <span class=\"math-container\">$AB=AD=AF$</span>.</p>\n<p>Now, connecting <span class=\"math-container\">$A$</span> and <span class=\"math-container\">$F$</span> with the center <span class=\"math-container\">$O$</span> we get the green triangle <span class=\"math-container\">$AFO$</span>. To show that <span class=\"math-container\">$DG=FO=r$</span>, it is enough to prove that the triangles <span class=\"math-container\">$BDG$</span> and <span class=\"math-container\">$AFO$</span> are congruent.</p>\n<p>We have already seen that the bases of these triangles equal. What remains is to show that <span class=\"math-container\">$\\angle AOF = \\angle BGD$</span>.</p>\n<p>This is easy. Just look at the figure and you will see that the <span class=\"math-container\">$\\angle BDG =2\\times\\angle AGD = \\angle AOF$</span>. (See the Wiki article called <a href=\"https://en.wikipedia.org/wiki/Inscribed_angle\" rel=\"nofollow noreferrer\">Inscribed angle</a>.)</p>\n<hr />\n<p>Finally, this is how to use the construction at stake to find the center of the circle:</p>\n<p><a href=\"https://i.stack.imgur.com/YL4O1.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/YL4O1.png\" alt=\"enter image description here\" /></a></p>\n<ol>\n<li>Construct <span class=\"math-container\">$E$</span>, <span class=\"math-container\">$F$</span>, and <span class=\"math-container\">$D$</span> and then <span class=\"math-container\">$G$</span> as described.</li>\n<li>The straight <span class=\"math-container\">$ED$</span> and the circle centered at <span class=\"math-container\">$G$</span> through <span class=\"math-container\">$D$</span> will meet in <span class=\"math-container\">$O$</span>.</li>\n</ol>\n<hr />\n<p>Note</p>\n<p>This construction is not that bad compared to the most frequently used one in the case of which you construct two bisecting perpendiculars. 
In the case of this present construction it is enough to draw the line through <span class=\"math-container\">$F$</span> and <span class=\"math-container\">$D$</span> which already exist after constructing the first bisecting perpendicular.</p>\n" }, { "answer_id": 2952354, "score": 0, "text": "<p>Filling in on the main answer. It takes an argument to show that the triangle BGD is isosceles. We can't apply any usual triangle congruence theorem that I can see.</p>\n\n<p>Here is one approach: take <span class=\"math-container\">$A, B$</span>, and <span class=\"math-container\">$D$</span> as above. Construct a line through <span class=\"math-container\">$A$</span> which bisects <span class=\"math-container\">$BD$</span>. Let the point where it crosses the circle be <span class=\"math-container\">$G'$</span>. Note the construction forces angle <span class=\"math-container\">$AG'B$</span> to equal angle <span class=\"math-container\">$AG'D$</span>. Draw a line thru <span class=\"math-container\">$G'$</span> and <span class=\"math-container\">$D$</span> and call the point where this intersects the circle <span class=\"math-container\">$F'$</span>. The arc <span class=\"math-container\">$AF'$</span> equals the arc <span class=\"math-container\">$BA$</span> because angle <span class=\"math-container\">$AG'B$</span> equals angle <span class=\"math-container\">$AG'F'$</span>. But this forces <span class=\"math-container\">$F'=F$</span>. But then <span class=\"math-container\">$G'$</span> is the point where the circle meets the line through <span class=\"math-container\">$F$</span> and <span class=\"math-container\">$D$</span>. 
So <span class=\"math-container\">$G'=G$</span>.</p>\n\n<p>Perhaps there is a simpler way.</p>\n" }, { "answer_id": 2085523, "score": 11, "text": "<p>Based on the described construction and using the OP's notations, we have the following figure:</p>\n<p><a href=\"https://i.stack.imgur.com/rsaYo.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/rsaYo.png\" alt=\"![enter image description here\" /></a></p>\n<p>It is enough to demonstrate that <span class=\"math-container\">$DG=r$</span>, the radius of the circle.</p>\n<p>By the construction, the triangle <span class=\"math-container\">$ABD$</span> is equilateral and then <span class=\"math-container\">$AB=BD=AF$</span>. <span class=\"math-container\">$BDG$</span> is an isosceles triangle because the triangles <span class=\"math-container\">$ABG$</span> and <span class=\"math-container\">$ADG$</span> are congruent. <span class=\"math-container\">$\\angle AGD = \\angle BGA$</span> and <span class=\"math-container\">$AB=AD=AF$</span>.</p>\n<p>Now, connecting <span class=\"math-container\">$A$</span> and <span class=\"math-container\">$F$</span> with the center <span class=\"math-container\">$O$</span> we get the green triangle <span class=\"math-container\">$AFO$</span>. To show that <span class=\"math-container\">$DG=FO=r$</span>, it is enough to prove that the triangles <span class=\"math-container\">$BDG$</span> and <span class=\"math-container\">$AFO$</span> are congruent.</p>\n<p>We have already seen that the bases of these triangles equal. What remains is to show that <span class=\"math-container\">$\\angle AOF = \\angle BGD$</span>.</p>\n<p>This is easy. Just look at the figure and you will see that the <span class=\"math-container\">$\\angle BDG =2\\times\\angle AGD = \\angle AOF$</span>. 
(See the Wiki article called <a href=\"https://en.wikipedia.org/wiki/Inscribed_angle\" rel=\"nofollow noreferrer\">Inscribed angle</a>.)</p>\n<hr />\n<p>Finally, this is how to use the construction at stake to find the center of the circle:</p>\n<p><a href=\"https://i.stack.imgur.com/YL4O1.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/YL4O1.png\" alt=\"enter image description here\" /></a></p>\n<ol>\n<li>Construct <span class=\"math-container\">$E$</span>, <span class=\"math-container\">$F$</span>, and <span class=\"math-container\">$D$</span> and then <span class=\"math-container\">$G$</span> as described.</li>\n<li>The straight <span class=\"math-container\">$ED$</span> and the circle centered at <span class=\"math-container\">$G$</span> through <span class=\"math-container\">$D$</span> will meet in <span class=\"math-container\">$O$</span>.</li>\n</ol>\n<hr />\n<p>Note</p>\n<p>This construction is not that bad compared to the most frequently used one in the case of which you construct two bisecting perpendiculars. In the case of this present construction it is enough to draw the line through <span class=\"math-container\">$F$</span> and <span class=\"math-container\">$D$</span> which already exist after constructing the first bisecting perpendicular.</p>\n" }, { "answer_id": 2952354, "score": 0, "text": "<p>Filling in on the main answer. It takes an argument to show that the triangle BGD is isosceles. We can't apply any usual triangle congruence theorem that I can see.</p>\n\n<p>Here is one approach: take <span class=\"math-container\">$A, B$</span>, and <span class=\"math-container\">$D$</span> as above. Construct a line through <span class=\"math-container\">$A$</span> which bisects <span class=\"math-container\">$BD$</span>. Let the point where it crosses the circle be <span class=\"math-container\">$G'$</span>. 
Note the construction forces angle <span class=\"math-container\">$AG'B$</span> to equal angle <span class=\"math-container\">$AG'D$</span>. Draw a line thru <span class=\"math-container\">$G'$</span> and <span class=\"math-container\">$D$</span> and call the point where this intersects the circle <span class=\"math-container\">$F'$</span>. The arc <span class=\"math-container\">$AF'$</span> equals the arc <span class=\"math-container\">$BA$</span> because angle <span class=\"math-container\">$AG'B$</span> equals angle <span class=\"math-container\">$AG'F'$</span>. But this forces <span class=\"math-container\">$F'=F$</span>. But then <span class=\"math-container\">$G'$</span> is the point where the circle meets the line through <span class=\"math-container\">$F$</span> and <span class=\"math-container\">$D$</span>. So <span class=\"math-container\">$G'=G$</span>.</p>\n\n<p>Perhaps there is a simpler way.</p>\n" }, { "answer_id": 2085523, "score": 11, "text": "<p>Based on the described construction and using the OP's notations, we have the following figure:</p>\n<p><a href=\"https://i.stack.imgur.com/rsaYo.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/rsaYo.png\" alt=\"![enter image description here\" /></a></p>\n<p>It is enough to demonstrate that <span class=\"math-container\">$DG=r$</span>, the radius of the circle.</p>\n<p>By the construction, the triangle <span class=\"math-container\">$ABD$</span> is equilateral and then <span class=\"math-container\">$AB=BD=AF$</span>. <span class=\"math-container\">$BDG$</span> is an isosceles triangle because the triangles <span class=\"math-container\">$ABG$</span> and <span class=\"math-container\">$ADG$</span> are congruent. 
<span class=\"math-container\">$\\angle AGD = \\angle BGA$</span> and <span class=\"math-container\">$AB=AD=AF$</span>.</p>\n<p>Now, connecting <span class=\"math-container\">$A$</span> and <span class=\"math-container\">$F$</span> with the center <span class=\"math-container\">$O$</span> we get the green triangle <span class=\"math-container\">$AFO$</span>. To show that <span class=\"math-container\">$DG=FO=r$</span>, it is enough to prove that the triangles <span class=\"math-container\">$BDG$</span> and <span class=\"math-container\">$AFO$</span> are congruent.</p>\n<p>We have already seen that the bases of these triangles equal. What remains is to show that <span class=\"math-container\">$\\angle AOF = \\angle BGD$</span>.</p>\n<p>This is easy. Just look at the figure and you will see that the <span class=\"math-container\">$\\angle BDG =2\\times\\angle AGD = \\angle AOF$</span>. (See the Wiki article called <a href=\"https://en.wikipedia.org/wiki/Inscribed_angle\" rel=\"nofollow noreferrer\">Inscribed angle</a>.)</p>\n<hr />\n<p>Finally, this is how to use the construction at stake to find the center of the circle:</p>\n<p><a href=\"https://i.stack.imgur.com/YL4O1.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/YL4O1.png\" alt=\"enter image description here\" /></a></p>\n<ol>\n<li>Construct <span class=\"math-container\">$E$</span>, <span class=\"math-container\">$F$</span>, and <span class=\"math-container\">$D$</span> and then <span class=\"math-container\">$G$</span> as described.</li>\n<li>The straight <span class=\"math-container\">$ED$</span> and the circle centered at <span class=\"math-container\">$G$</span> through <span class=\"math-container\">$D$</span> will meet in <span class=\"math-container\">$O$</span>.</li>\n</ol>\n<hr />\n<p>Note</p>\n<p>This construction is not that bad compared to the most frequently used one in the case of which you construct two bisecting perpendiculars. 
In the case of this present construction it is enough to draw the line through <span class=\"math-container\">$F$</span> and <span class=\"math-container\">$D$</span> which already exist after constructing the first bisecting perpendicular.</p>\n" }, { "answer_id": 2952354, "score": 0, "text": "<p>Filling in on the main answer. It takes an argument to show that the triangle BGD is isosceles. We can't apply any usual triangle congruence theorem that I can see.</p>\n\n<p>Here is one approach: take <span class=\"math-container\">$A, B$</span>, and <span class=\"math-container\">$D$</span> as above. Construct a line through <span class=\"math-container\">$A$</span> which bisects <span class=\"math-container\">$BD$</span>. Let the point where it crosses the circle be <span class=\"math-container\">$G'$</span>. Note the construction forces angle <span class=\"math-container\">$AG'B$</span> to equal angle <span class=\"math-container\">$AG'D$</span>. Draw a line thru <span class=\"math-container\">$G'$</span> and <span class=\"math-container\">$D$</span> and call the point where this intersects the circle <span class=\"math-container\">$F'$</span>. The arc <span class=\"math-container\">$AF'$</span> equals the arc <span class=\"math-container\">$BA$</span> because angle <span class=\"math-container\">$AG'B$</span> equals angle <span class=\"math-container\">$AG'F'$</span>. But this forces <span class=\"math-container\">$F'=F$</span>. But then <span class=\"math-container\">$G'$</span> is the point where the circle meets the line through <span class=\"math-container\">$F$</span> and <span class=\"math-container\">$D$</span>. 
So <span class=\"math-container\">$G'=G$</span>.</p>\n\n<p>Perhaps there is a simpler way.</p>\n" }, { "answer_id": 2085523, "score": 11, "text": "<p>Based on the described construction and using the OP's notations, we have the following figure:</p>\n<p><a href=\"https://i.stack.imgur.com/rsaYo.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/rsaYo.png\" alt=\"![enter image description here\" /></a></p>\n<p>It is enough to demonstrate that <span class=\"math-container\">$DG=r$</span>, the radius of the circle.</p>\n<p>By the construction, the triangle <span class=\"math-container\">$ABD$</span> is equilateral and then <span class=\"math-container\">$AB=BD=AF$</span>. <span class=\"math-container\">$BDG$</span> is an isosceles triangle because the triangles <span class=\"math-container\">$ABG$</span> and <span class=\"math-container\">$ADG$</span> are congruent. <span class=\"math-container\">$\\angle AGD = \\angle BGA$</span> and <span class=\"math-container\">$AB=AD=AF$</span>.</p>\n<p>Now, connecting <span class=\"math-container\">$A$</span> and <span class=\"math-container\">$F$</span> with the center <span class=\"math-container\">$O$</span> we get the green triangle <span class=\"math-container\">$AFO$</span>. To show that <span class=\"math-container\">$DG=FO=r$</span>, it is enough to prove that the triangles <span class=\"math-container\">$BDG$</span> and <span class=\"math-container\">$AFO$</span> are congruent.</p>\n<p>We have already seen that the bases of these triangles equal. What remains is to show that <span class=\"math-container\">$\\angle AOF = \\angle BGD$</span>.</p>\n<p>This is easy. Just look at the figure and you will see that the <span class=\"math-container\">$\\angle BDG =2\\times\\angle AGD = \\angle AOF$</span>. 
(See the Wiki article called <a href=\"https://en.wikipedia.org/wiki/Inscribed_angle\" rel=\"nofollow noreferrer\">Inscribed angle</a>.)</p>\n<hr />\n<p>Finally, this is how to use the construction at stake to find the center of the circle:</p>\n<p><a href=\"https://i.stack.imgur.com/YL4O1.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/YL4O1.png\" alt=\"enter image description here\" /></a></p>\n<ol>\n<li>Construct <span class=\"math-container\">$E$</span>, <span class=\"math-container\">$F$</span>, and <span class=\"math-container\">$D$</span> and then <span class=\"math-container\">$G$</span> as described.</li>\n<li>The straight <span class=\"math-container\">$ED$</span> and the circle centered at <span class=\"math-container\">$G$</span> through <span class=\"math-container\">$D$</span> will meet in <span class=\"math-container\">$O$</span>.</li>\n</ol>\n<hr />\n<p>Note</p>\n<p>This construction is not that bad compared to the most frequently used one in the case of which you construct two bisecting perpendiculars. In the case of this present construction it is enough to draw the line through <span class=\"math-container\">$F$</span> and <span class=\"math-container\">$D$</span> which already exist after constructing the first bisecting perpendicular.</p>\n" }, { "answer_id": 2952354, "score": 0, "text": "<p>Filling in on the main answer. It takes an argument to show that the triangle BGD is isosceles. We can't apply any usual triangle congruence theorem that I can see.</p>\n\n<p>Here is one approach: take <span class=\"math-container\">$A, B$</span>, and <span class=\"math-container\">$D$</span> as above. Construct a line through <span class=\"math-container\">$A$</span> which bisects <span class=\"math-container\">$BD$</span>. Let the point where it crosses the circle be <span class=\"math-container\">$G'$</span>. 
Note the construction forces angle <span class=\"math-container\">$AG'B$</span> to equal angle <span class=\"math-container\">$AG'D$</span>. Draw a line thru <span class=\"math-container\">$G'$</span> and <span class=\"math-container\">$D$</span> and call the point where this intersects the circle <span class=\"math-container\">$F'$</span>. The arc <span class=\"math-container\">$AF'$</span> equals the arc <span class=\"math-container\">$BA$</span> because angle <span class=\"math-container\">$AG'B$</span> equals angle <span class=\"math-container\">$AG'F'$</span>. But this forces <span class=\"math-container\">$F'=F$</span>. But then <span class=\"math-container\">$G'$</span> is the point where the circle meets the line through <span class=\"math-container\">$F$</span> and <span class=\"math-container\">$D$</span>. So <span class=\"math-container\">$G'=G$</span>.</p>\n\n<p>Perhaps there is a simpler way.</p>\n" }, { "answer_id": 2085523, "score": 11, "text": "<p>Based on the described construction and using the OP's notations, we have the following figure:</p>\n<p><a href=\"https://i.stack.imgur.com/rsaYo.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/rsaYo.png\" alt=\"![enter image description here\" /></a></p>\n<p>It is enough to demonstrate that <span class=\"math-container\">$DG=r$</span>, the radius of the circle.</p>\n<p>By the construction, the triangle <span class=\"math-container\">$ABD$</span> is equilateral and then <span class=\"math-container\">$AB=BD=AF$</span>. <span class=\"math-container\">$BDG$</span> is an isosceles triangle because the triangles <span class=\"math-container\">$ABG$</span> and <span class=\"math-container\">$ADG$</span> are congruent. 
<span class=\"math-container\">$\\angle AGD = \\angle BGA$</span> and <span class=\"math-container\">$AB=AD=AF$</span>.</p>\n<p>Now, connecting <span class=\"math-container\">$A$</span> and <span class=\"math-container\">$F$</span> with the center <span class=\"math-container\">$O$</span> we get the green triangle <span class=\"math-container\">$AFO$</span>. To show that <span class=\"math-container\">$DG=FO=r$</span>, it is enough to prove that the triangles <span class=\"math-container\">$BDG$</span> and <span class=\"math-container\">$AFO$</span> are congruent.</p>\n<p>We have already seen that the bases of these triangles equal. What remains is to show that <span class=\"math-container\">$\\angle AOF = \\angle BGD$</span>.</p>\n<p>This is easy. Just look at the figure and you will see that the <span class=\"math-container\">$\\angle BDG =2\\times\\angle AGD = \\angle AOF$</span>. (See the Wiki article called <a href=\"https://en.wikipedia.org/wiki/Inscribed_angle\" rel=\"nofollow noreferrer\">Inscribed angle</a>.)</p>\n<hr />\n<p>Finally, this is how to use the construction at stake to find the center of the circle:</p>\n<p><a href=\"https://i.stack.imgur.com/YL4O1.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/YL4O1.png\" alt=\"enter image description here\" /></a></p>\n<ol>\n<li>Construct <span class=\"math-container\">$E$</span>, <span class=\"math-container\">$F$</span>, and <span class=\"math-container\">$D$</span> and then <span class=\"math-container\">$G$</span> as described.</li>\n<li>The straight <span class=\"math-container\">$ED$</span> and the circle centered at <span class=\"math-container\">$G$</span> through <span class=\"math-container\">$D$</span> will meet in <span class=\"math-container\">$O$</span>.</li>\n</ol>\n<hr />\n<p>Note</p>\n<p>This construction is not that bad compared to the most frequently used one in the case of which you construct two bisecting perpendiculars. 
In the case of this present construction it is enough to draw the line through <span class=\"math-container\">$F$</span> and <span class=\"math-container\">$D$</span> which already exist after constructing the first bisecting perpendicular.</p>\n" }, { "answer_id": 2952354, "score": 0, "text": "<p>Filling in on the main answer. It takes an argument to show that the triangle BGD is isosceles. We can't apply any usual triangle congruence theorem that I can see.</p>\n\n<p>Here is one approach: take <span class=\"math-container\">$A, B$</span>, and <span class=\"math-container\">$D$</span> as above. Construct a line through <span class=\"math-container\">$A$</span> which bisects <span class=\"math-container\">$BD$</span>. Let the point where it crosses the circle be <span class=\"math-container\">$G'$</span>. Note the construction forces angle <span class=\"math-container\">$AG'B$</span> to equal angle <span class=\"math-container\">$AG'D$</span>. Draw a line thru <span class=\"math-container\">$G'$</span> and <span class=\"math-container\">$D$</span> and call the point where this intersects the circle <span class=\"math-container\">$F'$</span>. The arc <span class=\"math-container\">$AF'$</span> equals the arc <span class=\"math-container\">$BA$</span> because angle <span class=\"math-container\">$AG'B$</span> equals angle <span class=\"math-container\">$AG'F'$</span>. But this forces <span class=\"math-container\">$F'=F$</span>. But then <span class=\"math-container\">$G'$</span> is the point where the circle meets the line through <span class=\"math-container\">$F$</span> and <span class=\"math-container\">$D$</span>. 
So <span class=\"math-container\">$G'=G$</span>.</p>\n\n<p>Perhaps there is a simpler way.</p>\n" }, { "answer_id": 2085523, "score": 11, "text": "<p>Based on the described construction and using the OP's notations, we have the following figure:</p>\n<p><a href=\"https://i.stack.imgur.com/rsaYo.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/rsaYo.png\" alt=\"![enter image description here\" /></a></p>\n<p>It is enough to demonstrate that <span class=\"math-container\">$DG=r$</span>, the radius of the circle.</p>\n<p>By the construction, the triangle <span class=\"math-container\">$ABD$</span> is equilateral and then <span class=\"math-container\">$AB=BD=AF$</span>. <span class=\"math-container\">$BDG$</span> is an isosceles triangle because the triangles <span class=\"math-container\">$ABG$</span> and <span class=\"math-container\">$ADG$</span> are congruent. <span class=\"math-container\">$\\angle AGD = \\angle BGA$</span> and <span class=\"math-container\">$AB=AD=AF$</span>.</p>\n<p>Now, connecting <span class=\"math-container\">$A$</span> and <span class=\"math-container\">$F$</span> with the center <span class=\"math-container\">$O$</span> we get the green triangle <span class=\"math-container\">$AFO$</span>. To show that <span class=\"math-container\">$DG=FO=r$</span>, it is enough to prove that the triangles <span class=\"math-container\">$BDG$</span> and <span class=\"math-container\">$AFO$</span> are congruent.</p>\n<p>We have already seen that the bases of these triangles equal. What remains is to show that <span class=\"math-container\">$\\angle AOF = \\angle BGD$</span>.</p>\n<p>This is easy. Just look at the figure and you will see that the <span class=\"math-container\">$\\angle BDG =2\\times\\angle AGD = \\angle AOF$</span>. 
(See the Wiki article called <a href=\"https://en.wikipedia.org/wiki/Inscribed_angle\" rel=\"nofollow noreferrer\">Inscribed angle</a>.)</p>\n<hr />\n<p>Finally, this is how to use the construction at stake to find the center of the circle:</p>\n<p><a href=\"https://i.stack.imgur.com/YL4O1.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/YL4O1.png\" alt=\"enter image description here\" /></a></p>\n<ol>\n<li>Construct <span class=\"math-container\">$E$</span>, <span class=\"math-container\">$F$</span>, and <span class=\"math-container\">$D$</span> and then <span class=\"math-container\">$G$</span> as described.</li>\n<li>The straight <span class=\"math-container\">$ED$</span> and the circle centered at <span class=\"math-container\">$G$</span> through <span class=\"math-container\">$D$</span> will meet in <span class=\"math-container\">$O$</span>.</li>\n</ol>\n<hr />\n<p>Note</p>\n<p>This construction is not that bad compared to the most frequently used one in the case of which you construct two bisecting perpendiculars. In the case of this present construction it is enough to draw the line through <span class=\"math-container\">$F$</span> and <span class=\"math-container\">$D$</span> which already exist after constructing the first bisecting perpendicular.</p>\n" }, { "answer_id": 2952354, "score": 0, "text": "<p>Filling in on the main answer. It takes an argument to show that the triangle BGD is isosceles. We can't apply any usual triangle congruence theorem that I can see.</p>\n\n<p>Here is one approach: take <span class=\"math-container\">$A, B$</span>, and <span class=\"math-container\">$D$</span> as above. Construct a line through <span class=\"math-container\">$A$</span> which bisects <span class=\"math-container\">$BD$</span>. Let the point where it crosses the circle be <span class=\"math-container\">$G'$</span>. 
Note the construction forces angle <span class=\"math-container\">$AG'B$</span> to equal angle <span class=\"math-container\">$AG'D$</span>. Draw a line thru <span class=\"math-container\">$G'$</span> and <span class=\"math-container\">$D$</span> and call the point where this intersects the circle <span class=\"math-container\">$F'$</span>. The arc <span class=\"math-container\">$AF'$</span> equals the arc <span class=\"math-container\">$BA$</span> because angle <span class=\"math-container\">$AG'B$</span> equals angle <span class=\"math-container\">$AG'F'$</span>. But this forces <span class=\"math-container\">$F'=F$</span>. But then <span class=\"math-container\">$G'$</span> is the point where the circle meets the line through <span class=\"math-container\">$F$</span> and <span class=\"math-container\">$D$</span>. So <span class=\"math-container\">$G'=G$</span>.</p>\n\n<p>Perhaps there is a simpler way.</p>\n" }, { "answer_id": 2085523, "score": 11, "text": "<p>Based on the described construction and using the OP's notations, we have the following figure:</p>\n<p><a href=\"https://i.stack.imgur.com/rsaYo.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/rsaYo.png\" alt=\"![enter image description here\" /></a></p>\n<p>It is enough to demonstrate that <span class=\"math-container\">$DG=r$</span>, the radius of the circle.</p>\n<p>By the construction, the triangle <span class=\"math-container\">$ABD$</span> is equilateral and then <span class=\"math-container\">$AB=BD=AF$</span>. <span class=\"math-container\">$BDG$</span> is an isosceles triangle because the triangles <span class=\"math-container\">$ABG$</span> and <span class=\"math-container\">$ADG$</span> are congruent. 
<span class=\"math-container\">$\\angle AGD = \\angle BGA$</span> and <span class=\"math-container\">$AB=AD=AF$</span>.</p>\n<p>Now, connecting <span class=\"math-container\">$A$</span> and <span class=\"math-container\">$F$</span> with the center <span class=\"math-container\">$O$</span> we get the green triangle <span class=\"math-container\">$AFO$</span>. To show that <span class=\"math-container\">$DG=FO=r$</span>, it is enough to prove that the triangles <span class=\"math-container\">$BDG$</span> and <span class=\"math-container\">$AFO$</span> are congruent.</p>\n<p>We have already seen that the bases of these triangles equal. What remains is to show that <span class=\"math-container\">$\\angle AOF = \\angle BGD$</span>.</p>\n<p>This is easy. Just look at the figure and you will see that the <span class=\"math-container\">$\\angle BDG =2\\times\\angle AGD = \\angle AOF$</span>. (See the Wiki article called <a href=\"https://en.wikipedia.org/wiki/Inscribed_angle\" rel=\"nofollow noreferrer\">Inscribed angle</a>.)</p>\n<hr />\n<p>Finally, this is how to use the construction at stake to find the center of the circle:</p>\n<p><a href=\"https://i.stack.imgur.com/YL4O1.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/YL4O1.png\" alt=\"enter image description here\" /></a></p>\n<ol>\n<li>Construct <span class=\"math-container\">$E$</span>, <span class=\"math-container\">$F$</span>, and <span class=\"math-container\">$D$</span> and then <span class=\"math-container\">$G$</span> as described.</li>\n<li>The straight <span class=\"math-container\">$ED$</span> and the circle centered at <span class=\"math-container\">$G$</span> through <span class=\"math-container\">$D$</span> will meet in <span class=\"math-container\">$O$</span>.</li>\n</ol>\n<hr />\n<p>Note</p>\n<p>This construction is not that bad compared to the most frequently used one in the case of which you construct two bisecting perpendiculars. 
In the case of this present construction it is enough to draw the line through <span class=\"math-container\">$F$</span> and <span class=\"math-container\">$D$</span> which already exist after constructing the first bisecting perpendicular.</p>\n" }, { "answer_id": 2952354, "score": 0, "text": "<p>Filling in on the main answer. It takes an argument to show that the triangle BGD is isosceles. We can't apply any usual triangle congruence theorem that I can see.</p>\n\n<p>Here is one approach: take <span class=\"math-container\">$A, B$</span>, and <span class=\"math-container\">$D$</span> as above. Construct a line through <span class=\"math-container\">$A$</span> which bisects <span class=\"math-container\">$BD$</span>. Let the point where it crosses the circle be <span class=\"math-container\">$G'$</span>. Note the construction forces angle <span class=\"math-container\">$AG'B$</span> to equal angle <span class=\"math-container\">$AG'D$</span>. Draw a line thru <span class=\"math-container\">$G'$</span> and <span class=\"math-container\">$D$</span> and call the point where this intersects the circle <span class=\"math-container\">$F'$</span>. The arc <span class=\"math-container\">$AF'$</span> equals the arc <span class=\"math-container\">$BA$</span> because angle <span class=\"math-container\">$AG'B$</span> equals angle <span class=\"math-container\">$AG'F'$</span>. But this forces <span class=\"math-container\">$F'=F$</span>. But then <span class=\"math-container\">$G'$</span> is the point where the circle meets the line through <span class=\"math-container\">$F$</span> and <span class=\"math-container\">$D$</span>. 
So <span class=\"math-container\">$G'=G$</span>.</p>\n\n<p>Perhaps there is a simpler way.</p>\n" }, { "answer_id": 2085523, "score": 11, "text": "<p>Based on the described construction and using the OP's notations, we have the following figure:</p>\n<p><a href=\"https://i.stack.imgur.com/rsaYo.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/rsaYo.png\" alt=\"![enter image description here\" /></a></p>\n<p>It is enough to demonstrate that <span class=\"math-container\">$DG=r$</span>, the radius of the circle.</p>\n<p>By the construction, the triangle <span class=\"math-container\">$ABD$</span> is equilateral and then <span class=\"math-container\">$AB=BD=AF$</span>. <span class=\"math-container\">$BDG$</span> is an isosceles triangle because the triangles <span class=\"math-container\">$ABG$</span> and <span class=\"math-container\">$ADG$</span> are congruent. <span class=\"math-container\">$\\angle AGD = \\angle BGA$</span> and <span class=\"math-container\">$AB=AD=AF$</span>.</p>\n<p>Now, connecting <span class=\"math-container\">$A$</span> and <span class=\"math-container\">$F$</span> with the center <span class=\"math-container\">$O$</span> we get the green triangle <span class=\"math-container\">$AFO$</span>. To show that <span class=\"math-container\">$DG=FO=r$</span>, it is enough to prove that the triangles <span class=\"math-container\">$BDG$</span> and <span class=\"math-container\">$AFO$</span> are congruent.</p>\n<p>We have already seen that the bases of these triangles equal. What remains is to show that <span class=\"math-container\">$\\angle AOF = \\angle BGD$</span>.</p>\n<p>This is easy. Just look at the figure and you will see that the <span class=\"math-container\">$\\angle BDG =2\\times\\angle AGD = \\angle AOF$</span>. 
(See the Wiki article called <a href=\"https://en.wikipedia.org/wiki/Inscribed_angle\" rel=\"nofollow noreferrer\">Inscribed angle</a>.)</p>\n<hr />\n<p>Finally, this is how to use the construction at stake to find the center of the circle:</p>\n<p><a href=\"https://i.stack.imgur.com/YL4O1.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/YL4O1.png\" alt=\"enter image description here\" /></a></p>\n<ol>\n<li>Construct <span class=\"math-container\">$E$</span>, <span class=\"math-container\">$F$</span>, and <span class=\"math-container\">$D$</span> and then <span class=\"math-container\">$G$</span> as described.</li>\n<li>The straight <span class=\"math-container\">$ED$</span> and the circle centered at <span class=\"math-container\">$G$</span> through <span class=\"math-container\">$D$</span> will meet in <span class=\"math-container\">$O$</span>.</li>\n</ol>\n<hr />\n<p>Note</p>\n<p>This construction is not that bad compared to the most frequently used one in the case of which you construct two bisecting perpendiculars. In the case of this present construction it is enough to draw the line through <span class=\"math-container\">$F$</span> and <span class=\"math-container\">$D$</span> which already exist after constructing the first bisecting perpendicular.</p>\n" }, { "answer_id": 2952354, "score": 0, "text": "<p>Filling in on the main answer. It takes an argument to show that the triangle BGD is isosceles. We can't apply any usual triangle congruence theorem that I can see.</p>\n\n<p>Here is one approach: take <span class=\"math-container\">$A, B$</span>, and <span class=\"math-container\">$D$</span> as above. Construct a line through <span class=\"math-container\">$A$</span> which bisects <span class=\"math-container\">$BD$</span>. Let the point where it crosses the circle be <span class=\"math-container\">$G'$</span>. 
Note the construction forces angle <span class=\"math-container\">$AG'B$</span> to equal angle <span class=\"math-container\">$AG'D$</span>. Draw a line thru <span class=\"math-container\">$G'$</span> and <span class=\"math-container\">$D$</span> and call the point where this intersects the circle <span class=\"math-container\">$F'$</span>. The arc <span class=\"math-container\">$AF'$</span> equals the arc <span class=\"math-container\">$BA$</span> because angle <span class=\"math-container\">$AG'B$</span> equals angle <span class=\"math-container\">$AG'F'$</span>. But this forces <span class=\"math-container\">$F'=F$</span>. But then <span class=\"math-container\">$G'$</span> is the point where the circle meets the line through <span class=\"math-container\">$F$</span> and <span class=\"math-container\">$D$</span>. So <span class=\"math-container\">$G'=G$</span>.</p>\n\n<p>Perhaps there is a simpler way.</p>\n" }, { "answer_id": 2085523, "score": 11, "text": "<p>Based on the described construction and using the OP's notations, we have the following figure:</p>\n<p><a href=\"https://i.stack.imgur.com/rsaYo.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/rsaYo.png\" alt=\"![enter image description here\" /></a></p>\n<p>It is enough to demonstrate that <span class=\"math-container\">$DG=r$</span>, the radius of the circle.</p>\n<p>By the construction, the triangle <span class=\"math-container\">$ABD$</span> is equilateral and then <span class=\"math-container\">$AB=BD=AF$</span>. <span class=\"math-container\">$BDG$</span> is an isosceles triangle because the triangles <span class=\"math-container\">$ABG$</span> and <span class=\"math-container\">$ADG$</span> are congruent. 
<span class=\"math-container\">$\\angle AGD = \\angle BGA$</span> and <span class=\"math-container\">$AB=AD=AF$</span>.</p>\n<p>Now, connecting <span class=\"math-container\">$A$</span> and <span class=\"math-container\">$F$</span> with the center <span class=\"math-container\">$O$</span> we get the green triangle <span class=\"math-container\">$AFO$</span>. To show that <span class=\"math-container\">$DG=FO=r$</span>, it is enough to prove that the triangles <span class=\"math-container\">$BDG$</span> and <span class=\"math-container\">$AFO$</span> are congruent.</p>\n<p>We have already seen that the bases of these triangles equal. What remains is to show that <span class=\"math-container\">$\\angle AOF = \\angle BGD$</span>.</p>\n<p>This is easy. Just look at the figure and you will see that the <span class=\"math-container\">$\\angle BDG =2\\times\\angle AGD = \\angle AOF$</span>. (See the Wiki article called <a href=\"https://en.wikipedia.org/wiki/Inscribed_angle\" rel=\"nofollow noreferrer\">Inscribed angle</a>.)</p>\n<hr />\n<p>Finally, this is how to use the construction at stake to find the center of the circle:</p>\n<p><a href=\"https://i.stack.imgur.com/YL4O1.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/YL4O1.png\" alt=\"enter image description here\" /></a></p>\n<ol>\n<li>Construct <span class=\"math-container\">$E$</span>, <span class=\"math-container\">$F$</span>, and <span class=\"math-container\">$D$</span> and then <span class=\"math-container\">$G$</span> as described.</li>\n<li>The straight <span class=\"math-container\">$ED$</span> and the circle centered at <span class=\"math-container\">$G$</span> through <span class=\"math-container\">$D$</span> will meet in <span class=\"math-container\">$O$</span>.</li>\n</ol>\n<hr />\n<p>Note</p>\n<p>This construction is not that bad compared to the most frequently used one in the case of which you construct two bisecting perpendiculars. 
In the case of this present construction it is enough to draw the line through <span class=\"math-container\">$F$</span> and <span class=\"math-container\">$D$</span> which already exist after constructing the first bisecting perpendicular.</p>\n" }, { "answer_id": 2952354, "score": 0, "text": "<p>Filling in on the main answer. It takes an argument to show that the triangle BGD is isosceles. We can't apply any usual triangle congruence theorem that I can see.</p>\n\n<p>Here is one approach: take <span class=\"math-container\">$A, B$</span>, and <span class=\"math-container\">$D$</span> as above. Construct a line through <span class=\"math-container\">$A$</span> which bisects <span class=\"math-container\">$BD$</span>. Let the point where it crosses the circle be <span class=\"math-container\">$G'$</span>. Note the construction forces angle <span class=\"math-container\">$AG'B$</span> to equal angle <span class=\"math-container\">$AG'D$</span>. Draw a line thru <span class=\"math-container\">$G'$</span> and <span class=\"math-container\">$D$</span> and call the point where this intersects the circle <span class=\"math-container\">$F'$</span>. The arc <span class=\"math-container\">$AF'$</span> equals the arc <span class=\"math-container\">$BA$</span> because angle <span class=\"math-container\">$AG'B$</span> equals angle <span class=\"math-container\">$AG'F'$</span>. But this forces <span class=\"math-container\">$F'=F$</span>. But then <span class=\"math-container\">$G'$</span> is the point where the circle meets the line through <span class=\"math-container\">$F$</span> and <span class=\"math-container\">$D$</span>. 
So <span class=\"math-container\">$G'=G$</span>.</p>\n\n<p>Perhaps there is a simpler way.</p>\n" }, { "answer_id": 2085523, "score": 11, "text": "<p>Based on the described construction and using the OP's notations, we have the following figure:</p>\n<p><a href=\"https://i.stack.imgur.com/rsaYo.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/rsaYo.png\" alt=\"![enter image description here\" /></a></p>\n<p>It is enough to demonstrate that <span class=\"math-container\">$DG=r$</span>, the radius of the circle.</p>\n<p>By the construction, the triangle <span class=\"math-container\">$ABD$</span> is equilateral and then <span class=\"math-container\">$AB=BD=AF$</span>. <span class=\"math-container\">$BDG$</span> is an isosceles triangle because the triangles <span class=\"math-container\">$ABG$</span> and <span class=\"math-container\">$ADG$</span> are congruent. <span class=\"math-container\">$\\angle AGD = \\angle BGA$</span> and <span class=\"math-container\">$AB=AD=AF$</span>.</p>\n<p>Now, connecting <span class=\"math-container\">$A$</span> and <span class=\"math-container\">$F$</span> with the center <span class=\"math-container\">$O$</span> we get the green triangle <span class=\"math-container\">$AFO$</span>. To show that <span class=\"math-container\">$DG=FO=r$</span>, it is enough to prove that the triangles <span class=\"math-container\">$BDG$</span> and <span class=\"math-container\">$AFO$</span> are congruent.</p>\n<p>We have already seen that the bases of these triangles equal. What remains is to show that <span class=\"math-container\">$\\angle AOF = \\angle BGD$</span>.</p>\n<p>This is easy. Just look at the figure and you will see that the <span class=\"math-container\">$\\angle BDG =2\\times\\angle AGD = \\angle AOF$</span>. 
(See the Wiki article called <a href=\"https://en.wikipedia.org/wiki/Inscribed_angle\" rel=\"nofollow noreferrer\">Inscribed angle</a>.)</p>\n<hr />\n<p>Finally, this is how to use the construction at stake to find the center of the circle:</p>\n<p><a href=\"https://i.stack.imgur.com/YL4O1.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/YL4O1.png\" alt=\"enter image description here\" /></a></p>\n<ol>\n<li>Construct <span class=\"math-container\">$E$</span>, <span class=\"math-container\">$F$</span>, and <span class=\"math-container\">$D$</span> and then <span class=\"math-container\">$G$</span> as described.</li>\n<li>The straight <span class=\"math-container\">$ED$</span> and the circle centered at <span class=\"math-container\">$G$</span> through <span class=\"math-container\">$D$</span> will meet in <span class=\"math-container\">$O$</span>.</li>\n</ol>\n<hr />\n<p>Note</p>\n<p>This construction is not that bad compared to the most frequently used one in the case of which you construct two bisecting perpendiculars. In the case of this present construction it is enough to draw the line through <span class=\"math-container\">$F$</span> and <span class=\"math-container\">$D$</span> which already exist after constructing the first bisecting perpendicular.</p>\n" }, { "answer_id": 2952354, "score": 0, "text": "<p>Filling in on the main answer. It takes an argument to show that the triangle BGD is isosceles. We can't apply any usual triangle congruence theorem that I can see.</p>\n\n<p>Here is one approach: take <span class=\"math-container\">$A, B$</span>, and <span class=\"math-container\">$D$</span> as above. Construct a line through <span class=\"math-container\">$A$</span> which bisects <span class=\"math-container\">$BD$</span>. Let the point where it crosses the circle be <span class=\"math-container\">$G'$</span>. 
Note the construction forces angle <span class=\"math-container\">$AG'B$</span> to equal angle <span class=\"math-container\">$AG'D$</span>. Draw a line thru <span class=\"math-container\">$G'$</span> and <span class=\"math-container\">$D$</span> and call the point where this intersects the circle <span class=\"math-container\">$F'$</span>. The arc <span class=\"math-container\">$AF'$</span> equals the arc <span class=\"math-container\">$BA$</span> because angle <span class=\"math-container\">$AG'B$</span> equals angle <span class=\"math-container\">$AG'F'$</span>. But this forces <span class=\"math-container\">$F'=F$</span>. But then <span class=\"math-container\">$G'$</span> is the point where the circle meets the line through <span class=\"math-container\">$F$</span> and <span class=\"math-container\">$D$</span>. So <span class=\"math-container\">$G'=G$</span>.</p>\n\n<p>Perhaps there is a simpler way.</p>\n" }, { "answer_id": 2085523, "score": 11, "text": "<p>Based on the described construction and using the OP's notations, we have the following figure:</p>\n<p><a href=\"https://i.stack.imgur.com/rsaYo.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/rsaYo.png\" alt=\"![enter image description here\" /></a></p>\n<p>It is enough to demonstrate that <span class=\"math-container\">$DG=r$</span>, the radius of the circle.</p>\n<p>By the construction, the triangle <span class=\"math-container\">$ABD$</span> is equilateral and then <span class=\"math-container\">$AB=BD=AF$</span>. <span class=\"math-container\">$BDG$</span> is an isosceles triangle because the triangles <span class=\"math-container\">$ABG$</span> and <span class=\"math-container\">$ADG$</span> are congruent. 
<span class=\"math-container\">$\\angle AGD = \\angle BGA$</span> and <span class=\"math-container\">$AB=AD=AF$</span>.</p>\n<p>Now, connecting <span class=\"math-container\">$A$</span> and <span class=\"math-container\">$F$</span> with the center <span class=\"math-container\">$O$</span> we get the green triangle <span class=\"math-container\">$AFO$</span>. To show that <span class=\"math-container\">$DG=FO=r$</span>, it is enough to prove that the triangles <span class=\"math-container\">$BDG$</span> and <span class=\"math-container\">$AFO$</span> are congruent.</p>\n<p>We have already seen that the bases of these triangles equal. What remains is to show that <span class=\"math-container\">$\\angle AOF = \\angle BGD$</span>.</p>\n<p>This is easy. Just look at the figure and you will see that the <span class=\"math-container\">$\\angle BDG =2\\times\\angle AGD = \\angle AOF$</span>. (See the Wiki article called <a href=\"https://en.wikipedia.org/wiki/Inscribed_angle\" rel=\"nofollow noreferrer\">Inscribed angle</a>.)</p>\n<hr />\n<p>Finally, this is how to use the construction at stake to find the center of the circle:</p>\n<p><a href=\"https://i.stack.imgur.com/YL4O1.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/YL4O1.png\" alt=\"enter image description here\" /></a></p>\n<ol>\n<li>Construct <span class=\"math-container\">$E$</span>, <span class=\"math-container\">$F$</span>, and <span class=\"math-container\">$D$</span> and then <span class=\"math-container\">$G$</span> as described.</li>\n<li>The straight <span class=\"math-container\">$ED$</span> and the circle centered at <span class=\"math-container\">$G$</span> through <span class=\"math-container\">$D$</span> will meet in <span class=\"math-container\">$O$</span>.</li>\n</ol>\n<hr />\n<p>Note</p>\n<p>This construction is not that bad compared to the most frequently used one in the case of which you construct two bisecting perpendiculars. 
In the case of this present construction it is enough to draw the line through <span class=\"math-container\">$F$</span> and <span class=\"math-container\">$D$</span> which already exist after constructing the first bisecting perpendicular.</p>\n" }, { "answer_id": 2952354, "score": 0, "text": "<p>Filling in on the main answer. It takes an argument to show that the triangle BGD is isosceles. We can't apply any usual triangle congruence theorem that I can see.</p>\n\n<p>Here is one approach: take <span class=\"math-container\">$A, B$</span>, and <span class=\"math-container\">$D$</span> as above. Construct a line through <span class=\"math-container\">$A$</span> which bisects <span class=\"math-container\">$BD$</span>. Let the point where it crosses the circle be <span class=\"math-container\">$G'$</span>. Note the construction forces angle <span class=\"math-container\">$AG'B$</span> to equal angle <span class=\"math-container\">$AG'D$</span>. Draw a line thru <span class=\"math-container\">$G'$</span> and <span class=\"math-container\">$D$</span> and call the point where this intersects the circle <span class=\"math-container\">$F'$</span>. The arc <span class=\"math-container\">$AF'$</span> equals the arc <span class=\"math-container\">$BA$</span> because angle <span class=\"math-container\">$AG'B$</span> equals angle <span class=\"math-container\">$AG'F'$</span>. But this forces <span class=\"math-container\">$F'=F$</span>. But then <span class=\"math-container\">$G'$</span> is the point where the circle meets the line through <span class=\"math-container\">$F$</span> and <span class=\"math-container\">$D$</span>. 
So <span class=\"math-container\">$G'=G$</span>.</p>\n\n<p>Perhaps there is a simpler way.</p>\n" }, { "answer_id": 2085523, "score": 11, "text": "<p>Based on the described construction and using the OP's notations, we have the following figure:</p>\n<p><a href=\"https://i.stack.imgur.com/rsaYo.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/rsaYo.png\" alt=\"![enter image description here\" /></a></p>\n<p>It is enough to demonstrate that <span class=\"math-container\">$DG=r$</span>, the radius of the circle.</p>\n<p>By the construction, the triangle <span class=\"math-container\">$ABD$</span> is equilateral and then <span class=\"math-container\">$AB=BD=AF$</span>. <span class=\"math-container\">$BDG$</span> is an isosceles triangle because the triangles <span class=\"math-container\">$ABG$</span> and <span class=\"math-container\">$ADG$</span> are congruent. <span class=\"math-container\">$\\angle AGD = \\angle BGA$</span> and <span class=\"math-container\">$AB=AD=AF$</span>.</p>\n<p>Now, connecting <span class=\"math-container\">$A$</span> and <span class=\"math-container\">$F$</span> with the center <span class=\"math-container\">$O$</span> we get the green triangle <span class=\"math-container\">$AFO$</span>. To show that <span class=\"math-container\">$DG=FO=r$</span>, it is enough to prove that the triangles <span class=\"math-container\">$BDG$</span> and <span class=\"math-container\">$AFO$</span> are congruent.</p>\n<p>We have already seen that the bases of these triangles equal. What remains is to show that <span class=\"math-container\">$\\angle AOF = \\angle BGD$</span>.</p>\n<p>This is easy. Just look at the figure and you will see that the <span class=\"math-container\">$\\angle BDG =2\\times\\angle AGD = \\angle AOF$</span>. 
(See the Wiki article called <a href=\"https://en.wikipedia.org/wiki/Inscribed_angle\" rel=\"nofollow noreferrer\">Inscribed angle</a>.)</p>\n<hr />\n<p>Finally, this is how to use the construction at stake to find the center of the circle:</p>\n<p><a href=\"https://i.stack.imgur.com/YL4O1.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/YL4O1.png\" alt=\"enter image description here\" /></a></p>\n<ol>\n<li>Construct <span class=\"math-container\">$E$</span>, <span class=\"math-container\">$F$</span>, and <span class=\"math-container\">$D$</span> and then <span class=\"math-container\">$G$</span> as described.</li>\n<li>The straight <span class=\"math-container\">$ED$</span> and the circle centered at <span class=\"math-container\">$G$</span> through <span class=\"math-container\">$D$</span> will meet in <span class=\"math-container\">$O$</span>.</li>\n</ol>\n<hr />\n<p>Note</p>\n<p>This construction is not that bad compared to the most frequently used one in the case of which you construct two bisecting perpendiculars. In the case of this present construction it is enough to draw the line through <span class=\"math-container\">$F$</span> and <span class=\"math-container\">$D$</span> which already exist after constructing the first bisecting perpendicular.</p>\n" }, { "answer_id": 2952354, "score": 0, "text": "<p>Filling in on the main answer. It takes an argument to show that the triangle BGD is isosceles. We can't apply any usual triangle congruence theorem that I can see.</p>\n\n<p>Here is one approach: take <span class=\"math-container\">$A, B$</span>, and <span class=\"math-container\">$D$</span> as above. Construct a line through <span class=\"math-container\">$A$</span> which bisects <span class=\"math-container\">$BD$</span>. Let the point where it crosses the circle be <span class=\"math-container\">$G'$</span>. 
Note the construction forces angle <span class=\"math-container\">$AG'B$</span> to equal angle <span class=\"math-container\">$AG'D$</span>. Draw a line thru <span class=\"math-container\">$G'$</span> and <span class=\"math-container\">$D$</span> and call the point where this intersects the circle <span class=\"math-container\">$F'$</span>. The arc <span class=\"math-container\">$AF'$</span> equals the arc <span class=\"math-container\">$BA$</span> because angle <span class=\"math-container\">$AG'B$</span> equals angle <span class=\"math-container\">$AG'F'$</span>. But this forces <span class=\"math-container\">$F'=F$</span>. But then <span class=\"math-container\">$G'$</span> is the point where the circle meets the line through <span class=\"math-container\">$F$</span> and <span class=\"math-container\">$D$</span>. So <span class=\"math-container\">$G'=G$</span>.</p>\n\n<p>Perhaps there is a simpler way.</p>\n" }, { "answer_id": 2085523, "score": 11, "text": "<p>Based on the described construction and using the OP's notations, we have the following figure:</p>\n<p><a href=\"https://i.stack.imgur.com/rsaYo.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/rsaYo.png\" alt=\"![enter image description here\" /></a></p>\n<p>It is enough to demonstrate that <span class=\"math-container\">$DG=r$</span>, the radius of the circle.</p>\n<p>By the construction, the triangle <span class=\"math-container\">$ABD$</span> is equilateral and then <span class=\"math-container\">$AB=BD=AF$</span>. <span class=\"math-container\">$BDG$</span> is an isosceles triangle because the triangles <span class=\"math-container\">$ABG$</span> and <span class=\"math-container\">$ADG$</span> are congruent. 
<span class=\"math-container\">$\\angle AGD = \\angle BGA$</span> and <span class=\"math-container\">$AB=AD=AF$</span>.</p>\n<p>Now, connecting <span class=\"math-container\">$A$</span> and <span class=\"math-container\">$F$</span> with the center <span class=\"math-container\">$O$</span> we get the green triangle <span class=\"math-container\">$AFO$</span>. To show that <span class=\"math-container\">$DG=FO=r$</span>, it is enough to prove that the triangles <span class=\"math-container\">$BDG$</span> and <span class=\"math-container\">$AFO$</span> are congruent.</p>\n<p>We have already seen that the bases of these triangles equal. What remains is to show that <span class=\"math-container\">$\\angle AOF = \\angle BGD$</span>.</p>\n<p>This is easy. Just look at the figure and you will see that the <span class=\"math-container\">$\\angle BDG =2\\times\\angle AGD = \\angle AOF$</span>. (See the Wiki article called <a href=\"https://en.wikipedia.org/wiki/Inscribed_angle\" rel=\"nofollow noreferrer\">Inscribed angle</a>.)</p>\n<hr />\n<p>Finally, this is how to use the construction at stake to find the center of the circle:</p>\n<p><a href=\"https://i.stack.imgur.com/YL4O1.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/YL4O1.png\" alt=\"enter image description here\" /></a></p>\n<ol>\n<li>Construct <span class=\"math-container\">$E$</span>, <span class=\"math-container\">$F$</span>, and <span class=\"math-container\">$D$</span> and then <span class=\"math-container\">$G$</span> as described.</li>\n<li>The straight <span class=\"math-container\">$ED$</span> and the circle centered at <span class=\"math-container\">$G$</span> through <span class=\"math-container\">$D$</span> will meet in <span class=\"math-container\">$O$</span>.</li>\n</ol>\n<hr />\n<p>Note</p>\n<p>This construction is not that bad compared to the most frequently used one in the case of which you construct two bisecting perpendiculars. 
In the case of this present construction it is enough to draw the line through <span class=\"math-container\">$F$</span> and <span class=\"math-container\">$D$</span> which already exist after constructing the first bisecting perpendicular.</p>\n" }, { "answer_id": 2952354, "score": 0, "text": "<p>Filling in on the main answer. It takes an argument to show that the triangle BGD is isosceles. We can't apply any usual triangle congruence theorem that I can see.</p>\n\n<p>Here is one approach: take <span class=\"math-container\">$A, B$</span>, and <span class=\"math-container\">$D$</span> as above. Construct a line through <span class=\"math-container\">$A$</span> which bisects <span class=\"math-container\">$BD$</span>. Let the point where it crosses the circle be <span class=\"math-container\">$G'$</span>. Note the construction forces angle <span class=\"math-container\">$AG'B$</span> to equal angle <span class=\"math-container\">$AG'D$</span>. Draw a line thru <span class=\"math-container\">$G'$</span> and <span class=\"math-container\">$D$</span> and call the point where this intersects the circle <span class=\"math-container\">$F'$</span>. The arc <span class=\"math-container\">$AF'$</span> equals the arc <span class=\"math-container\">$BA$</span> because angle <span class=\"math-container\">$AG'B$</span> equals angle <span class=\"math-container\">$AG'F'$</span>. But this forces <span class=\"math-container\">$F'=F$</span>. But then <span class=\"math-container\">$G'$</span> is the point where the circle meets the line through <span class=\"math-container\">$F$</span> and <span class=\"math-container\">$D$</span>. 
So <span class=\"math-container\">$G'=G$</span>.</p>\n\n<p>Perhaps there is a simpler way.</p>\n" }, { "answer_id": 2085523, "score": 11, "text": "<p>Based on the described construction and using the OP's notations, we have the following figure:</p>\n<p><a href=\"https://i.stack.imgur.com/rsaYo.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/rsaYo.png\" alt=\"![enter image description here\" /></a></p>\n<p>It is enough to demonstrate that <span class=\"math-container\">$DG=r$</span>, the radius of the circle.</p>\n<p>By the construction, the triangle <span class=\"math-container\">$ABD$</span> is equilateral and then <span class=\"math-container\">$AB=BD=AF$</span>. <span class=\"math-container\">$BDG$</span> is an isosceles triangle because the triangles <span class=\"math-container\">$ABG$</span> and <span class=\"math-container\">$ADG$</span> are congruent. <span class=\"math-container\">$\\angle AGD = \\angle BGA$</span> and <span class=\"math-container\">$AB=AD=AF$</span>.</p>\n<p>Now, connecting <span class=\"math-container\">$A$</span> and <span class=\"math-container\">$F$</span> with the center <span class=\"math-container\">$O$</span> we get the green triangle <span class=\"math-container\">$AFO$</span>. To show that <span class=\"math-container\">$DG=FO=r$</span>, it is enough to prove that the triangles <span class=\"math-container\">$BDG$</span> and <span class=\"math-container\">$AFO$</span> are congruent.</p>\n<p>We have already seen that the bases of these triangles equal. What remains is to show that <span class=\"math-container\">$\\angle AOF = \\angle BGD$</span>.</p>\n<p>This is easy. Just look at the figure and you will see that the <span class=\"math-container\">$\\angle BDG =2\\times\\angle AGD = \\angle AOF$</span>. 
(See the Wiki article called <a href=\"https://en.wikipedia.org/wiki/Inscribed_angle\" rel=\"nofollow noreferrer\">Inscribed angle</a>.)</p>\n<hr />\n<p>Finally, this is how to use the construction at stake to find the center of the circle:</p>\n<p><a href=\"https://i.stack.imgur.com/YL4O1.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/YL4O1.png\" alt=\"enter image description here\" /></a></p>\n<ol>\n<li>Construct <span class=\"math-container\">$E$</span>, <span class=\"math-container\">$F$</span>, and <span class=\"math-container\">$D$</span> and then <span class=\"math-container\">$G$</span> as described.</li>\n<li>The straight <span class=\"math-container\">$ED$</span> and the circle centered at <span class=\"math-container\">$G$</span> through <span class=\"math-container\">$D$</span> will meet in <span class=\"math-container\">$O$</span>.</li>\n</ol>\n<hr />\n<p>Note</p>\n<p>This construction is not that bad compared to the most frequently used one in the case of which you construct two bisecting perpendiculars. In the case of this present construction it is enough to draw the line through <span class=\"math-container\">$F$</span> and <span class=\"math-container\">$D$</span> which already exist after constructing the first bisecting perpendicular.</p>\n" }, { "answer_id": 2952354, "score": 0, "text": "<p>Filling in on the main answer. It takes an argument to show that the triangle BGD is isosceles. We can't apply any usual triangle congruence theorem that I can see.</p>\n\n<p>Here is one approach: take <span class=\"math-container\">$A, B$</span>, and <span class=\"math-container\">$D$</span> as above. Construct a line through <span class=\"math-container\">$A$</span> which bisects <span class=\"math-container\">$BD$</span>. Let the point where it crosses the circle be <span class=\"math-container\">$G'$</span>. 
Note the construction forces angle <span class=\"math-container\">$AG'B$</span> to equal angle <span class=\"math-container\">$AG'D$</span>. Draw a line through <span class=\"math-container\">$G'$</span> and <span class=\"math-container\">$D$</span> and call the point where this intersects the circle <span class=\"math-container\">$F'$</span>. The arc <span class=\"math-container\">$AF'$</span> equals the arc <span class=\"math-container\">$BA$</span> because angle <span class=\"math-container\">$AG'B$</span> equals angle <span class=\"math-container\">$AG'F'$</span>. But this forces <span class=\"math-container\">$F'=F$</span>, and then <span class=\"math-container\">$G'$</span> is the point where the circle meets the line through <span class=\"math-container\">$F$</span> and <span class=\"math-container\">$D$</span>. So <span class=\"math-container\">$G'=G$</span>.</p>\n\n<p>Perhaps there is a simpler way.</p>\n" } ]
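The first answer's claim that $DG$ equals the radius can be sanity-checked numerically. The sketch below is only an illustration under assumed coordinates (unit circle at the origin; $A$ and $B$ symmetric about the x-axis with central half-angle phi; $D$ the apex of the equilateral triangle $ABD$ on the side away from the center; $F$ on the circle with chord $AF=AB$ on the opposite side of $A$ from $B$, i.e. at central angle $3\varphi$, which encodes the equal arcs $AF$ and $BA$). The helper name `dg_length` is made up for this check, not part of the original posts:

```python
import math

def dg_length(phi, r=1.0):
    # A on the circle at central angle phi; B = (A[0], -A[1]) is implicit.
    A = (r * math.cos(phi), r * math.sin(phi))
    ab = 2 * r * math.sin(phi)                 # chord length AB = BD = AF
    # D: apex of the equilateral triangle ABD, on the side away from the center.
    D = (A[0] + (math.sqrt(3) / 2) * ab, 0.0)
    # F: on the circle with AF = AB, on the far side of A from B.
    F = (r * math.cos(3 * phi), r * math.sin(3 * phi))
    # G: second intersection of line FD with the circle.  Substituting
    # P = F + t*(D - F) into |P|^2 = r^2 and using |F| = r gives
    # t * (|D - F|^2 * t + 2 * F.(D - F)) = 0, where t = 0 is F itself.
    dx, dy = D[0] - F[0], D[1] - F[1]
    t = -2 * (F[0] * dx + F[1] * dy) / (dx * dx + dy * dy)
    G = (F[0] + t * dx, F[1] + t * dy)
    return math.hypot(D[0] - G[0], D[1] - G[1])

# DG should equal the radius for any valid half-angle.
for deg in (15, 20, 30, 45, 55):
    assert abs(dg_length(math.radians(deg)) - 1.0) < 1e-9
```

The non-zero root $t$ picks out $G$ as the second intersection of line $FD$ with the circle, so the returned length is exactly the segment $DG$ from the proof.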
Question: finding the center of a circle (elementary geometry) Answer 1: <p>Based on the described construction and using the OP's notations, we have the following figure:</p> <p><a href="https://i.stack.imgur.com/rsaYo.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/rsaYo.png" alt="enter image description here" /></a></p> <p>It is enough to demonstrate that <span class="math-container">$DG=r$</span>, the radius of the circle.</p> <p>By the construction, the triangle <span class="math-container">$ABD$</span> is equilateral, and <span class="math-container">$AB=BD=AF$</span>. <span class="math-container">$BDG$</span> is an isosceles triangle because the triangles <span class="math-container">$ABG$</span> and <span class="math-container">$ADG$</span> are congruent: <span class="math-container">$\angle AGD = \angle BGA$</span> and <span class="math-container">$AB=AD=AF$</span>.</p> <p>Now, connecting <span class="math-container">$A$</span> and <span class="math-container">$F$</span> with the center <span class="math-container">$O$</span> we get the green triangle <span class="math-container">$AFO$</span>. To show that <span class="math-container">$DG=FO=r$</span>, it is enough to prove that the triangles <span class="math-container">$BDG$</span> and <span class="math-container">$AFO$</span> are congruent.</p> <p>We have already seen that the bases of these triangles are equal. What remains is to show that <span class="math-container">$\angle AOF = \angle BGD$</span>.</p> <p>This is easy. Just look at the figure and you will see that <span class="math-container">$\angle BGD =2\times\angle AGD = \angle AOF$</span>. 
(See the Wiki article called <a href="https://en.wikipedia.org/wiki/Inscribed_angle" rel="nofollow noreferrer">Inscribed angle</a>.)</p> <hr /> <p>Finally, this is how to use the construction at stake to find the center of the circle:</p> <p><a href="https://i.stack.imgur.com/YL4O1.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/YL4O1.png" alt="enter image description here" /></a></p> <ol> <li>Construct <span class="math-container">$E$</span>, <span class="math-container">$F$</span>, and <span class="math-container">$D$</span> and then <span class="math-container">$G$</span> as described.</li> <li>The straight line <span class="math-container">$ED$</span> and the circle centered at <span class="math-container">$G$</span> through <span class="math-container">$D$</span> will meet at <span class="math-container">$O$</span>.</li> </ol> <hr /> <p>Note</p> <p>This construction is not that bad compared to the most frequently used one, in which you construct two perpendicular bisectors. In the present construction it is enough to draw the line through <span class="math-container">$F$</span> and <span class="math-container">$D$</span>, which already exists after constructing the first perpendicular bisector.</p> Answer 2: <p>Filling in on the main answer. It takes an argument to show that the triangle BGD is isosceles. We can't apply any usual triangle congruence theorem that I can see.</p> <p>Here is one approach: take <span class="math-container">$A, B$</span>, and <span class="math-container">$D$</span> as above. Construct a line through <span class="math-container">$A$</span> which bisects <span class="math-container">$BD$</span>. Let the point where it crosses the circle be <span class="math-container">$G'$</span>. Note the construction forces angle <span class="math-container">$AG'B$</span> to equal angle <span class="math-container">$AG'D$</span>. 
Draw a line through <span class="math-container">$G'$</span> and <span class="math-container">$D$</span> and call the point where this intersects the circle <span class="math-container">$F'$</span>. The arc <span class="math-container">$AF'$</span> equals the arc <span class="math-container">$BA$</span> because angle <span class="math-container">$AG'B$</span> equals angle <span class="math-container">$AG'F'$</span>. But this forces <span class="math-container">$F'=F$</span>. But then <span class="math-container">$G'$</span> is the point where the circle meets the line through <span class="math-container">$F$</span> and <span class="math-container">$D$</span>. So <span class="math-container">$G'=G$</span>.</p> <p>Perhaps there is a simpler way.</p>
(See the Wiki article called <a href="https://en.wikipedia.org/wiki/Inscribed_angle" rel="nofollow noreferrer">Inscribed angle</a>.)</p> <hr /> <p>Finally, this is how to use the construction at stake to find the center of the circle:</p> <p><a href="https://i.stack.imgur.com/YL4O1.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/YL4O1.png" alt="enter image description here" /></a></p> <ol> <li>Construct <span class="math-container">$E$</span>, <span class="math-container">$F$</span>, and <span class="math-container">$D$</span> and then <span class="math-container">$G$</span> as described.</li> <li>The straight <span class="math-container">$ED$</span> and the circle centered at <span class="math-container">$G$</span> through <span class="math-container">$D$</span> will meet in <span class="math-container">$O$</span>.</li> </ol> <hr /> <p>Note</p> <p>This construction is not that bad compared to the most frequently used one in the case of which you construct two bisecting perpendiculars. In the case of this present construction it is enough to draw the line through <span class="math-container">$F$</span> and <span class="math-container">$D$</span> which already exist after constructing the first bisecting perpendicular.</p> Answer 18: <p>Filling in on the main answer. It takes an argument to show that the triangle BGD is isosceles. We can't apply any usual triangle congruence theorem that I can see.</p> <p>Here is one approach: take <span class="math-container">$A, B$</span>, and <span class="math-container">$D$</span> as above. Construct a line through <span class="math-container">$A$</span> which bisects <span class="math-container">$BD$</span>. Let the point where it crosses the circle be <span class="math-container">$G'$</span>. Note the construction forces angle <span class="math-container">$AG'B$</span> to equal angle <span class="math-container">$AG'D$</span>. 
Draw a line thru <span class="math-container">$G'$</span> and <span class="math-container">$D$</span> and call the point where this intersects the circle <span class="math-container">$F'$</span>. The arc <span class="math-container">$AF'$</span> equals the arc <span class="math-container">$BA$</span> because angle <span class="math-container">$AG'B$</span> equals angle <span class="math-container">$AG'F'$</span>. But this forces <span class="math-container">$F'=F$</span>. But then <span class="math-container">$G'$</span> is the point where the circle meets the line through <span class="math-container">$F$</span> and <span class="math-container">$D$</span>. So <span class="math-container">$G'=G$</span>.</p> <p>Perhaps there is a simpler way.</p> Answer 19: <p>Based on the described construction and using the OP's notations, we have the following figure:</p> <p><a href="https://i.stack.imgur.com/rsaYo.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/rsaYo.png" alt="![enter image description here" /></a></p> <p>It is enough to demonstrate that <span class="math-container">$DG=r$</span>, the radius of the circle.</p> <p>By the construction, the triangle <span class="math-container">$ABD$</span> is equilateral and then <span class="math-container">$AB=BD=AF$</span>. <span class="math-container">$BDG$</span> is an isosceles triangle because the triangles <span class="math-container">$ABG$</span> and <span class="math-container">$ADG$</span> are congruent. <span class="math-container">$\angle AGD = \angle BGA$</span> and <span class="math-container">$AB=AD=AF$</span>.</p> <p>Now, connecting <span class="math-container">$A$</span> and <span class="math-container">$F$</span> with the center <span class="math-container">$O$</span> we get the green triangle <span class="math-container">$AFO$</span>. 
To show that <span class="math-container">$DG=FO=r$</span>, it is enough to prove that the triangles <span class="math-container">$BDG$</span> and <span class="math-container">$AFO$</span> are congruent.</p> <p>We have already seen that the bases of these triangles equal. What remains is to show that <span class="math-container">$\angle AOF = \angle BGD$</span>.</p> <p>This is easy. Just look at the figure and you will see that the <span class="math-container">$\angle BDG =2\times\angle AGD = \angle AOF$</span>. (See the Wiki article called <a href="https://en.wikipedia.org/wiki/Inscribed_angle" rel="nofollow noreferrer">Inscribed angle</a>.)</p> <hr /> <p>Finally, this is how to use the construction at stake to find the center of the circle:</p> <p><a href="https://i.stack.imgur.com/YL4O1.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/YL4O1.png" alt="enter image description here" /></a></p> <ol> <li>Construct <span class="math-container">$E$</span>, <span class="math-container">$F$</span>, and <span class="math-container">$D$</span> and then <span class="math-container">$G$</span> as described.</li> <li>The straight <span class="math-container">$ED$</span> and the circle centered at <span class="math-container">$G$</span> through <span class="math-container">$D$</span> will meet in <span class="math-container">$O$</span>.</li> </ol> <hr /> <p>Note</p> <p>This construction is not that bad compared to the most frequently used one in the case of which you construct two bisecting perpendiculars. In the case of this present construction it is enough to draw the line through <span class="math-container">$F$</span> and <span class="math-container">$D$</span> which already exist after constructing the first bisecting perpendicular.</p> Answer 20: <p>Filling in on the main answer. It takes an argument to show that the triangle BGD is isosceles. 
We can't apply any usual triangle congruence theorem that I can see.</p> <p>Here is one approach: take <span class="math-container">$A, B$</span>, and <span class="math-container">$D$</span> as above. Construct a line through <span class="math-container">$A$</span> which bisects <span class="math-container">$BD$</span>. Let the point where it crosses the circle be <span class="math-container">$G'$</span>. Note the construction forces angle <span class="math-container">$AG'B$</span> to equal angle <span class="math-container">$AG'D$</span>. Draw a line thru <span class="math-container">$G'$</span> and <span class="math-container">$D$</span> and call the point where this intersects the circle <span class="math-container">$F'$</span>. The arc <span class="math-container">$AF'$</span> equals the arc <span class="math-container">$BA$</span> because angle <span class="math-container">$AG'B$</span> equals angle <span class="math-container">$AG'F'$</span>. But this forces <span class="math-container">$F'=F$</span>. But then <span class="math-container">$G'$</span> is the point where the circle meets the line through <span class="math-container">$F$</span> and <span class="math-container">$D$</span>. So <span class="math-container">$G'=G$</span>.</p> <p>Perhaps there is a simpler way.</p> Answer 21: <p>Based on the described construction and using the OP's notations, we have the following figure:</p> <p><a href="https://i.stack.imgur.com/rsaYo.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/rsaYo.png" alt="![enter image description here" /></a></p> <p>It is enough to demonstrate that <span class="math-container">$DG=r$</span>, the radius of the circle.</p> <p>By the construction, the triangle <span class="math-container">$ABD$</span> is equilateral and then <span class="math-container">$AB=BD=AF$</span>. 
<span class="math-container">$BDG$</span> is an isosceles triangle because the triangles <span class="math-container">$ABG$</span> and <span class="math-container">$ADG$</span> are congruent. <span class="math-container">$\angle AGD = \angle BGA$</span> and <span class="math-container">$AB=AD=AF$</span>.</p> <p>Now, connecting <span class="math-container">$A$</span> and <span class="math-container">$F$</span> with the center <span class="math-container">$O$</span> we get the green triangle <span class="math-container">$AFO$</span>. To show that <span class="math-container">$DG=FO=r$</span>, it is enough to prove that the triangles <span class="math-container">$BDG$</span> and <span class="math-container">$AFO$</span> are congruent.</p> <p>We have already seen that the bases of these triangles equal. What remains is to show that <span class="math-container">$\angle AOF = \angle BGD$</span>.</p> <p>This is easy. Just look at the figure and you will see that the <span class="math-container">$\angle BDG =2\times\angle AGD = \angle AOF$</span>. 
(See the Wiki article called <a href="https://en.wikipedia.org/wiki/Inscribed_angle" rel="nofollow noreferrer">Inscribed angle</a>.)</p> <hr /> <p>Finally, this is how to use the construction at stake to find the center of the circle:</p> <p><a href="https://i.stack.imgur.com/YL4O1.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/YL4O1.png" alt="enter image description here" /></a></p> <ol> <li>Construct <span class="math-container">$E$</span>, <span class="math-container">$F$</span>, and <span class="math-container">$D$</span> and then <span class="math-container">$G$</span> as described.</li> <li>The straight <span class="math-container">$ED$</span> and the circle centered at <span class="math-container">$G$</span> through <span class="math-container">$D$</span> will meet in <span class="math-container">$O$</span>.</li> </ol> <hr /> <p>Note</p> <p>This construction is not that bad compared to the most frequently used one in the case of which you construct two bisecting perpendiculars. In the case of this present construction it is enough to draw the line through <span class="math-container">$F$</span> and <span class="math-container">$D$</span> which already exist after constructing the first bisecting perpendicular.</p> Answer 22: <p>Filling in on the main answer. It takes an argument to show that the triangle BGD is isosceles. We can't apply any usual triangle congruence theorem that I can see.</p> <p>Here is one approach: take <span class="math-container">$A, B$</span>, and <span class="math-container">$D$</span> as above. Construct a line through <span class="math-container">$A$</span> which bisects <span class="math-container">$BD$</span>. Let the point where it crosses the circle be <span class="math-container">$G'$</span>. Note the construction forces angle <span class="math-container">$AG'B$</span> to equal angle <span class="math-container">$AG'D$</span>. 
Draw a line thru <span class="math-container">$G'$</span> and <span class="math-container">$D$</span> and call the point where this intersects the circle <span class="math-container">$F'$</span>. The arc <span class="math-container">$AF'$</span> equals the arc <span class="math-container">$BA$</span> because angle <span class="math-container">$AG'B$</span> equals angle <span class="math-container">$AG'F'$</span>. But this forces <span class="math-container">$F'=F$</span>. But then <span class="math-container">$G'$</span> is the point where the circle meets the line through <span class="math-container">$F$</span> and <span class="math-container">$D$</span>. So <span class="math-container">$G'=G$</span>.</p> <p>Perhaps there is a simpler way.</p> Answer 23: <p>Based on the described construction and using the OP's notations, we have the following figure:</p> <p><a href="https://i.stack.imgur.com/rsaYo.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/rsaYo.png" alt="![enter image description here" /></a></p> <p>It is enough to demonstrate that <span class="math-container">$DG=r$</span>, the radius of the circle.</p> <p>By the construction, the triangle <span class="math-container">$ABD$</span> is equilateral and then <span class="math-container">$AB=BD=AF$</span>. <span class="math-container">$BDG$</span> is an isosceles triangle because the triangles <span class="math-container">$ABG$</span> and <span class="math-container">$ADG$</span> are congruent. <span class="math-container">$\angle AGD = \angle BGA$</span> and <span class="math-container">$AB=AD=AF$</span>.</p> <p>Now, connecting <span class="math-container">$A$</span> and <span class="math-container">$F$</span> with the center <span class="math-container">$O$</span> we get the green triangle <span class="math-container">$AFO$</span>. 
To show that <span class="math-container">$DG=FO=r$</span>, it is enough to prove that the triangles <span class="math-container">$BDG$</span> and <span class="math-container">$AFO$</span> are congruent.</p> <p>We have already seen that the bases of these triangles equal. What remains is to show that <span class="math-container">$\angle AOF = \angle BGD$</span>.</p> <p>This is easy. Just look at the figure and you will see that the <span class="math-container">$\angle BDG =2\times\angle AGD = \angle AOF$</span>. (See the Wiki article called <a href="https://en.wikipedia.org/wiki/Inscribed_angle" rel="nofollow noreferrer">Inscribed angle</a>.)</p> <hr /> <p>Finally, this is how to use the construction at stake to find the center of the circle:</p> <p><a href="https://i.stack.imgur.com/YL4O1.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/YL4O1.png" alt="enter image description here" /></a></p> <ol> <li>Construct <span class="math-container">$E$</span>, <span class="math-container">$F$</span>, and <span class="math-container">$D$</span> and then <span class="math-container">$G$</span> as described.</li> <li>The straight <span class="math-container">$ED$</span> and the circle centered at <span class="math-container">$G$</span> through <span class="math-container">$D$</span> will meet in <span class="math-container">$O$</span>.</li> </ol> <hr /> <p>Note</p> <p>This construction is not that bad compared to the most frequently used one in the case of which you construct two bisecting perpendiculars. In the case of this present construction it is enough to draw the line through <span class="math-container">$F$</span> and <span class="math-container">$D$</span> which already exist after constructing the first bisecting perpendicular.</p> Answer 24: <p>Filling in on the main answer. It takes an argument to show that the triangle BGD is isosceles. 
We can't apply any usual triangle congruence theorem that I can see.</p> <p>Here is one approach: take <span class="math-container">$A, B$</span>, and <span class="math-container">$D$</span> as above. Construct a line through <span class="math-container">$A$</span> which bisects <span class="math-container">$BD$</span>. Let the point where it crosses the circle be <span class="math-container">$G'$</span>. Note the construction forces angle <span class="math-container">$AG'B$</span> to equal angle <span class="math-container">$AG'D$</span>. Draw a line thru <span class="math-container">$G'$</span> and <span class="math-container">$D$</span> and call the point where this intersects the circle <span class="math-container">$F'$</span>. The arc <span class="math-container">$AF'$</span> equals the arc <span class="math-container">$BA$</span> because angle <span class="math-container">$AG'B$</span> equals angle <span class="math-container">$AG'F'$</span>. But this forces <span class="math-container">$F'=F$</span>. But then <span class="math-container">$G'$</span> is the point where the circle meets the line through <span class="math-container">$F$</span> and <span class="math-container">$D$</span>. So <span class="math-container">$G'=G$</span>.</p> <p>Perhaps there is a simpler way.</p> Answer 25: <p>Based on the described construction and using the OP's notations, we have the following figure:</p> <p><a href="https://i.stack.imgur.com/rsaYo.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/rsaYo.png" alt="![enter image description here" /></a></p> <p>It is enough to demonstrate that <span class="math-container">$DG=r$</span>, the radius of the circle.</p> <p>By the construction, the triangle <span class="math-container">$ABD$</span> is equilateral and then <span class="math-container">$AB=BD=AF$</span>. 
<span class="math-container">$BDG$</span> is an isosceles triangle because the triangles <span class="math-container">$ABG$</span> and <span class="math-container">$ADG$</span> are congruent. <span class="math-container">$\angle AGD = \angle BGA$</span> and <span class="math-container">$AB=AD=AF$</span>.</p> <p>Now, connecting <span class="math-container">$A$</span> and <span class="math-container">$F$</span> with the center <span class="math-container">$O$</span> we get the green triangle <span class="math-container">$AFO$</span>. To show that <span class="math-container">$DG=FO=r$</span>, it is enough to prove that the triangles <span class="math-container">$BDG$</span> and <span class="math-container">$AFO$</span> are congruent.</p> <p>We have already seen that the bases of these triangles equal. What remains is to show that <span class="math-container">$\angle AOF = \angle BGD$</span>.</p> <p>This is easy. Just look at the figure and you will see that the <span class="math-container">$\angle BDG =2\times\angle AGD = \angle AOF$</span>. 
(See the Wiki article called <a href="https://en.wikipedia.org/wiki/Inscribed_angle" rel="nofollow noreferrer">Inscribed angle</a>.)</p> <hr /> <p>Finally, this is how to use the construction at stake to find the center of the circle:</p> <p><a href="https://i.stack.imgur.com/YL4O1.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/YL4O1.png" alt="enter image description here" /></a></p> <ol> <li>Construct <span class="math-container">$E$</span>, <span class="math-container">$F$</span>, and <span class="math-container">$D$</span> and then <span class="math-container">$G$</span> as described.</li> <li>The straight <span class="math-container">$ED$</span> and the circle centered at <span class="math-container">$G$</span> through <span class="math-container">$D$</span> will meet in <span class="math-container">$O$</span>.</li> </ol> <hr /> <p>Note</p> <p>This construction is not that bad compared to the most frequently used one in the case of which you construct two bisecting perpendiculars. In the case of this present construction it is enough to draw the line through <span class="math-container">$F$</span> and <span class="math-container">$D$</span> which already exist after constructing the first bisecting perpendicular.</p> Answer 26: <p>Filling in on the main answer. It takes an argument to show that the triangle BGD is isosceles. We can't apply any usual triangle congruence theorem that I can see.</p> <p>Here is one approach: take <span class="math-container">$A, B$</span>, and <span class="math-container">$D$</span> as above. Construct a line through <span class="math-container">$A$</span> which bisects <span class="math-container">$BD$</span>. Let the point where it crosses the circle be <span class="math-container">$G'$</span>. Note the construction forces angle <span class="math-container">$AG'B$</span> to equal angle <span class="math-container">$AG'D$</span>. 
Draw a line thru <span class="math-container">$G'$</span> and <span class="math-container">$D$</span> and call the point where this intersects the circle <span class="math-container">$F'$</span>. The arc <span class="math-container">$AF'$</span> equals the arc <span class="math-container">$BA$</span> because angle <span class="math-container">$AG'B$</span> equals angle <span class="math-container">$AG'F'$</span>. But this forces <span class="math-container">$F'=F$</span>. But then <span class="math-container">$G'$</span> is the point where the circle meets the line through <span class="math-container">$F$</span> and <span class="math-container">$D$</span>. So <span class="math-container">$G'=G$</span>.</p> <p>Perhaps there is a simpler way.</p> Answer 27: <p>Based on the described construction and using the OP's notations, we have the following figure:</p> <p><a href="https://i.stack.imgur.com/rsaYo.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/rsaYo.png" alt="![enter image description here" /></a></p> <p>It is enough to demonstrate that <span class="math-container">$DG=r$</span>, the radius of the circle.</p> <p>By the construction, the triangle <span class="math-container">$ABD$</span> is equilateral and then <span class="math-container">$AB=BD=AF$</span>. <span class="math-container">$BDG$</span> is an isosceles triangle because the triangles <span class="math-container">$ABG$</span> and <span class="math-container">$ADG$</span> are congruent. <span class="math-container">$\angle AGD = \angle BGA$</span> and <span class="math-container">$AB=AD=AF$</span>.</p> <p>Now, connecting <span class="math-container">$A$</span> and <span class="math-container">$F$</span> with the center <span class="math-container">$O$</span> we get the green triangle <span class="math-container">$AFO$</span>. 
To show that <span class="math-container">$DG=FO=r$</span>, it is enough to prove that the triangles <span class="math-container">$BDG$</span> and <span class="math-container">$AFO$</span> are congruent.</p> <p>We have already seen that the bases of these triangles equal. What remains is to show that <span class="math-container">$\angle AOF = \angle BGD$</span>.</p> <p>This is easy. Just look at the figure and you will see that the <span class="math-container">$\angle BDG =2\times\angle AGD = \angle AOF$</span>. (See the Wiki article called <a href="https://en.wikipedia.org/wiki/Inscribed_angle" rel="nofollow noreferrer">Inscribed angle</a>.)</p> <hr /> <p>Finally, this is how to use the construction at stake to find the center of the circle:</p> <p><a href="https://i.stack.imgur.com/YL4O1.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/YL4O1.png" alt="enter image description here" /></a></p> <ol> <li>Construct <span class="math-container">$E$</span>, <span class="math-container">$F$</span>, and <span class="math-container">$D$</span> and then <span class="math-container">$G$</span> as described.</li> <li>The straight <span class="math-container">$ED$</span> and the circle centered at <span class="math-container">$G$</span> through <span class="math-container">$D$</span> will meet in <span class="math-container">$O$</span>.</li> </ol> <hr /> <p>Note</p> <p>This construction is not that bad compared to the most frequently used one in the case of which you construct two bisecting perpendiculars. In the case of this present construction it is enough to draw the line through <span class="math-container">$F$</span> and <span class="math-container">$D$</span> which already exist after constructing the first bisecting perpendicular.</p> Answer 28: <p>Filling in on the main answer. It takes an argument to show that the triangle BGD is isosceles. 
We can't apply any usual triangle congruence theorem that I can see.</p> <p>Here is one approach: take <span class="math-container">$A, B$</span>, and <span class="math-container">$D$</span> as above. Construct a line through <span class="math-container">$A$</span> which bisects <span class="math-container">$BD$</span>. Let the point where it crosses the circle be <span class="math-container">$G'$</span>. Note the construction forces angle <span class="math-container">$AG'B$</span> to equal angle <span class="math-container">$AG'D$</span>. Draw a line thru <span class="math-container">$G'$</span> and <span class="math-container">$D$</span> and call the point where this intersects the circle <span class="math-container">$F'$</span>. The arc <span class="math-container">$AF'$</span> equals the arc <span class="math-container">$BA$</span> because angle <span class="math-container">$AG'B$</span> equals angle <span class="math-container">$AG'F'$</span>. But this forces <span class="math-container">$F'=F$</span>. But then <span class="math-container">$G'$</span> is the point where the circle meets the line through <span class="math-container">$F$</span> and <span class="math-container">$D$</span>. So <span class="math-container">$G'=G$</span>.</p> <p>Perhaps there is a simpler way.</p> Answer 29: <p>Based on the described construction and using the OP's notations, we have the following figure:</p> <p><a href="https://i.stack.imgur.com/rsaYo.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/rsaYo.png" alt="![enter image description here" /></a></p> <p>It is enough to demonstrate that <span class="math-container">$DG=r$</span>, the radius of the circle.</p> <p>By the construction, the triangle <span class="math-container">$ABD$</span> is equilateral and then <span class="math-container">$AB=BD=AF$</span>. 
<span class="math-container">$BDG$</span> is an isosceles triangle because the triangles <span class="math-container">$ABG$</span> and <span class="math-container">$ADG$</span> are congruent. <span class="math-container">$\angle AGD = \angle BGA$</span> and <span class="math-container">$AB=AD=AF$</span>.</p> <p>Now, connecting <span class="math-container">$A$</span> and <span class="math-container">$F$</span> with the center <span class="math-container">$O$</span> we get the green triangle <span class="math-container">$AFO$</span>. To show that <span class="math-container">$DG=FO=r$</span>, it is enough to prove that the triangles <span class="math-container">$BDG$</span> and <span class="math-container">$AFO$</span> are congruent.</p> <p>We have already seen that the bases of these triangles equal. What remains is to show that <span class="math-container">$\angle AOF = \angle BGD$</span>.</p> <p>This is easy. Just look at the figure and you will see that the <span class="math-container">$\angle BDG =2\times\angle AGD = \angle AOF$</span>. 
(See the Wiki article called <a href="https://en.wikipedia.org/wiki/Inscribed_angle" rel="nofollow noreferrer">Inscribed angle</a>.)</p> <hr /> <p>Finally, this is how to use the construction at stake to find the center of the circle:</p> <p><a href="https://i.stack.imgur.com/YL4O1.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/YL4O1.png" alt="enter image description here" /></a></p> <ol> <li>Construct <span class="math-container">$E$</span>, <span class="math-container">$F$</span>, and <span class="math-container">$D$</span> and then <span class="math-container">$G$</span> as described.</li> <li>The straight <span class="math-container">$ED$</span> and the circle centered at <span class="math-container">$G$</span> through <span class="math-container">$D$</span> will meet in <span class="math-container">$O$</span>.</li> </ol> <hr /> <p>Note</p> <p>This construction is not that bad compared to the most frequently used one in the case of which you construct two bisecting perpendiculars. In the case of this present construction it is enough to draw the line through <span class="math-container">$F$</span> and <span class="math-container">$D$</span> which already exist after constructing the first bisecting perpendicular.</p> Answer 30: <p>Filling in on the main answer. It takes an argument to show that the triangle BGD is isosceles. We can't apply any usual triangle congruence theorem that I can see.</p> <p>Here is one approach: take <span class="math-container">$A, B$</span>, and <span class="math-container">$D$</span> as above. Construct a line through <span class="math-container">$A$</span> which bisects <span class="math-container">$BD$</span>. Let the point where it crosses the circle be <span class="math-container">$G'$</span>. Note the construction forces angle <span class="math-container">$AG'B$</span> to equal angle <span class="math-container">$AG'D$</span>. 
Draw a line thru <span class="math-container">$G'$</span> and <span class="math-container">$D$</span> and call the point where this intersects the circle <span class="math-container">$F'$</span>. The arc <span class="math-container">$AF'$</span> equals the arc <span class="math-container">$BA$</span> because angle <span class="math-container">$AG'B$</span> equals angle <span class="math-container">$AG'F'$</span>. But this forces <span class="math-container">$F'=F$</span>. But then <span class="math-container">$G'$</span> is the point where the circle meets the line through <span class="math-container">$F$</span> and <span class="math-container">$D$</span>. So <span class="math-container">$G'=G$</span>.</p> <p>Perhaps there is a simpler way.</p> Answer 31: <p>Based on the described construction and using the OP's notations, we have the following figure:</p> <p><a href="https://i.stack.imgur.com/rsaYo.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/rsaYo.png" alt="![enter image description here" /></a></p> <p>It is enough to demonstrate that <span class="math-container">$DG=r$</span>, the radius of the circle.</p> <p>By the construction, the triangle <span class="math-container">$ABD$</span> is equilateral and then <span class="math-container">$AB=BD=AF$</span>. <span class="math-container">$BDG$</span> is an isosceles triangle because the triangles <span class="math-container">$ABG$</span> and <span class="math-container">$ADG$</span> are congruent. <span class="math-container">$\angle AGD = \angle BGA$</span> and <span class="math-container">$AB=AD=AF$</span>.</p> <p>Now, connecting <span class="math-container">$A$</span> and <span class="math-container">$F$</span> with the center <span class="math-container">$O$</span> we get the green triangle <span class="math-container">$AFO$</span>. 
To show that <span class="math-container">$DG=FO=r$</span>, it is enough to prove that the triangles <span class="math-container">$BDG$</span> and <span class="math-container">$AFO$</span> are congruent.</p> <p>We have already seen that the bases of these triangles are equal. What remains is to show that <span class="math-container">$\angle AOF = \angle BGD$</span>.</p> <p>This is easy. Just look at the figure and you will see that <span class="math-container">$\angle BDG =2\times\angle AGD = \angle AOF$</span>. (See the Wiki article called <a href="https://en.wikipedia.org/wiki/Inscribed_angle" rel="nofollow noreferrer">Inscribed angle</a>.)</p> <hr /> <p>Finally, this is how to use the construction at stake to find the center of the circle:</p> <p><a href="https://i.stack.imgur.com/YL4O1.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/YL4O1.png" alt="enter image description here" /></a></p> <ol> <li>Construct <span class="math-container">$E$</span>, <span class="math-container">$F$</span>, and <span class="math-container">$D$</span> and then <span class="math-container">$G$</span> as described.</li> <li>The straight <span class="math-container">$ED$</span> and the circle centered at <span class="math-container">$G$</span> through <span class="math-container">$D$</span> will meet in <span class="math-container">$O$</span>.</li> </ol> <hr /> <p>Note</p> <p>This construction is not that bad compared to the most frequently used one in the case of which you construct two bisecting perpendiculars. In the case of this present construction it is enough to draw the line through <span class="math-container">$F$</span> and <span class="math-container">$D$</span> which already exist after constructing the first bisecting perpendicular.</p>
Why is $\int_{0}^{1}x^x(1-x)^{2-x}\sin(x\pi)dx=\int_{0}^{1}x^{1+x}(1-x)^{1-x}\sin(x\pi)dx?$
2,084,529
8
https://math.stackexchange.com/questions/2084529/why-is-int-01xx1-x2-x-sinx-pidx-int-01x1x1-x1-x-si
<p>I don't know why these two integrals yield the same result:</p> <p>$$\int_{0}^{1}x^x(1-x)^{2-x}\sin(x\pi)dx=\int_{0}^{1}x^{1+x}(1-x)^{1-x}\sin(x\pi)dx$$</p> <p>Any hints, clues, or ideas on how to go about dealing with these integrals (showing that they are the same and determining the closed form)? I know they are similar to the sophomore's dream integral. I think one could possibly use integration by parts, but because three functions are involved, applying it by parts would be very long.</p> <p>I tried the substitution $u=1-x$ in the first integral, which gives</p> <p>$$-\int_{0}^{1}u^{1+u}(1-u)^{1-u}\sin((u+1)\pi)du$$</p> <p>Q: show that they are the same and evaluate the closed form.</p> <p>Edited: since $\sin((u+1)\pi)=-\sin(u\pi)$, this equals</p> <p>$$\int_{0}^{1}u^{1+u}(1-u)^{1-u}\sin(u\pi)du$$</p> <hr> <p>Let us try applying integration by parts with</p> <p>$u=x^x(1-x)^{2-x}$</p> <p>$du=x^x(1-x)^{2-x}\left(\ln{x\over 1-x}-{1\over 1-x}\right)dx$</p> <p>$v=-{\cos{(x\pi)}\over \pi}$</p> <p>$$=\left[-{1\over \pi}x^x(1-x)^{2-x}\cos{(x\pi)}\right]_0^1+{1\over \pi}\int_{0}^{1}x^x(1-x)^{2-x}\cos{(x\pi)}\left(\ln{x\over 1-x}-{1\over 1-x}\right)dx$$</p> <p>This looks more complicated than before, so it is not a good way of tackling this problem.</p>
[ { "answer_id": 2096712, "score": 11, "text": "<p>The equality can be shown by computing the integral of the difference, and observing a punctual symmetry around $(1/2,0)$.</p>\n\n<p>\\begin{align}\\int_{0}^{1}x^x(1-x)^{2-x}&amp;\\sin(x\\pi)dx-\\int_{0}^{1}x^{1+x}(1-x)^{1-x}\\sin(x\\pi)dx \\\\&amp;= \\int_{0}^{1}x^x((1-x)^{2-x}-x(1-x)^{1-x})\\sin(x\\pi)dx\\\\\n&amp;=\n\\int_{0}^{1}x^x(1-x)^{1-x}(1-2x)\\sin(x\\pi)dx\\end{align}</p>\n\n<p>Let's denote the integrand by $f$:\n$$f(x)=x^x(1-x)^{1-x}(1-2x)\\sin(x\\pi).$$</p>\n\n<p>The function $f$ appears to have a punctual symmetry:\n\\begin{align}f(1-x)&amp;=(1-x)^{1-x}(1-(1-x))^{1-(1-x)}(1-2(1-x))\\sin((1-x)\\pi)\\\\ &amp;=-x^x (1-x)^{1-x}\\sin(\\pi x) &amp; \\\\ &amp;= -f(x)\\end{align} so the point $(1/2,0)$ is a point of symmetry of the graph of $f$, hence the integral over $[0,1]$ (where $f$ is defined) is $0$.</p>\n\n<hr>\n\n<p>The punctual symmetry can be guess from the graph of $f$ represented below:</p>\n\n<p><a href=\"https://i.stack.imgur.com/bJYVO.png\"><img src=\"https://i.stack.imgur.com/bJYVO.png\" alt=\"enter image description here\"></a></p>\n\n<p>Note that the equality also holds for integral boundaries in $[0,1]$ and centered around $1/2$.</p>\n" }, { "answer_id": 2098103, "score": 7, "text": "<p>In the second integral, make the substitution $v=1-x$, so that $x=1-v$ and $dx = -dv$. 
We get</p>\n\n<p>$$\\int_0^1 x^{1+x}(1-x)^{1-x}\\sin(x\\pi) \\; dx = \\int_1^0 (1-v)^{1+(1-v)} v^v \\sin((1-v)\\pi) \\; (-dv).$$</p>\n\n<p>The minus sign on $dv$ reverses the order of integration, and we have the identity $\\sin(\\pi -A) = \\sin A$, so the above is</p>\n\n<p>$$\\int_0^1 (1-v)^{2-v}v^v \\sin(v\\pi) \\; dv,$$</p>\n\n<p>but since $v$ is a dummy variable, we can change it to $x$ and we have the first integral.</p>\n" } ]
We get</p>\n\n<p>$$\\int_0^1 x^{1+x}(1-x)^{1-x}\\sin(x\\pi) \\; dx = \\int_1^0 (1-v)^{1+(1-v)} v^v \\sin((1-v)\\pi) \\; (-dv).$$</p>\n\n<p>The minus sign on $dv$ reverses the order of integration, and we have the identity $\\sin(\\pi -A) = \\sin A$, so the above is</p>\n\n<p>$$\\int_0^1 (1-v)^{2-v}v^v \\sin(v\\pi) \\; dv,$$</p>\n\n<p>but since $v$ is a dummy variable, we can change it to $x$ and we have the first integral.</p>\n" }, { "answer_id": 2096712, "score": 11, "text": "<p>The equality can be shown by computing the integral of the difference, and observing a punctual symmetry around $(1/2,0)$.</p>\n\n<p>\\begin{align}\\int_{0}^{1}x^x(1-x)^{2-x}&amp;\\sin(x\\pi)dx-\\int_{0}^{1}x^{1+x}(1-x)^{1-x}\\sin(x\\pi)dx \\\\&amp;= \\int_{0}^{1}x^x((1-x)^{2-x}-x(1-x)^{1-x})\\sin(x\\pi)dx\\\\\n&amp;=\n\\int_{0}^{1}x^x(1-x)^{1-x}(1-2x)\\sin(x\\pi)dx\\end{align}</p>\n\n<p>Let's denote the integrand by $f$:\n$$f(x)=x^x(1-x)^{1-x}(1-2x)\\sin(x\\pi).$$</p>\n\n<p>The function $f$ appears to have a punctual symmetry:\n\\begin{align}f(1-x)&amp;=(1-x)^{1-x}(1-(1-x))^{1-(1-x)}(1-2(1-x))\\sin((1-x)\\pi)\\\\ &amp;=-x^x (1-x)^{1-x}\\sin(\\pi x) &amp; \\\\ &amp;= -f(x)\\end{align} so the point $(1/2,0)$ is a point of symmetry of the graph of $f$, hence the integral over $[0,1]$ (where $f$ is defined) is $0$.</p>\n\n<hr>\n\n<p>The punctual symmetry can be guess from the graph of $f$ represented below:</p>\n\n<p><a href=\"https://i.stack.imgur.com/bJYVO.png\"><img src=\"https://i.stack.imgur.com/bJYVO.png\" alt=\"enter image description here\"></a></p>\n\n<p>Note that the equality also holds for integral boundaries in $[0,1]$ and centered around $1/2$.</p>\n" }, { "answer_id": 2098103, "score": 7, "text": "<p>In the second integral, make the substitution $v=1-x$, so that $x=1-v$ and $dx = -dv$. 
We get</p>\n\n<p>$$\\int_0^1 x^{1+x}(1-x)^{1-x}\\sin(x\\pi) \\; dx = \\int_1^0 (1-v)^{1+(1-v)} v^v \\sin((1-v)\\pi) \\; (-dv).$$</p>\n\n<p>The minus sign on $dv$ reverses the order of integration, and we have the identity $\\sin(\\pi -A) = \\sin A$, so the above is</p>\n\n<p>$$\\int_0^1 (1-v)^{2-v}v^v \\sin(v\\pi) \\; dv,$$</p>\n\n<p>but since $v$ is a dummy variable, we can change it to $x$ and we have the first integral.</p>\n" } ]
Question: Why is $\int_{0}^{1}x^x(1-x)^{2-x}\sin(x\pi)dx=\int_{0}^{1}x^{1+x}(1-x)^{1-x}\sin(x\pi)dx?$ Answer 1: <p>The equality can be shown by computing the integral of the difference, and observing a punctual symmetry around $(1/2,0)$.</p> <p>\begin{align}\int_{0}^{1}x^x(1-x)^{2-x}&amp;\sin(x\pi)dx-\int_{0}^{1}x^{1+x}(1-x)^{1-x}\sin(x\pi)dx \\&amp;= \int_{0}^{1}x^x((1-x)^{2-x}-x(1-x)^{1-x})\sin(x\pi)dx\\ &amp;= \int_{0}^{1}x^x(1-x)^{1-x}(1-2x)\sin(x\pi)dx\end{align}</p> <p>Let's denote the integrand by $f$: $$f(x)=x^x(1-x)^{1-x}(1-2x)\sin(x\pi).$$</p> <p>The function $f$ appears to have a punctual symmetry: \begin{align}f(1-x)&amp;=(1-x)^{1-x}(1-(1-x))^{1-(1-x)}(1-2(1-x))\sin((1-x)\pi)\\ &amp;=-x^x (1-x)^{1-x}\sin(\pi x) &amp; \\ &amp;= -f(x)\end{align} so the point $(1/2,0)$ is a point of symmetry of the graph of $f$, hence the integral over $[0,1]$ (where $f$ is defined) is $0$.</p> <hr> <p>The punctual symmetry can be guess from the graph of $f$ represented below:</p> <p><a href="https://i.stack.imgur.com/bJYVO.png"><img src="https://i.stack.imgur.com/bJYVO.png" alt="enter image description here"></a></p> <p>Note that the equality also holds for integral boundaries in $[0,1]$ and centered around $1/2$.</p> Answer 2: <p>In the second integral, make the substitution $v=1-x$, so that $x=1-v$ and $dx = -dv$. 
We get</p> <p>$$\int_0^1 x^{1+x}(1-x)^{1-x}\sin(x\pi) \; dx = \int_1^0 (1-v)^{1+(1-v)} v^v \sin((1-v)\pi) \; (-dv).$$</p> <p>The minus sign on $dv$ reverses the order of integration, and we have the identity $\sin(\pi -A) = \sin A$, so the above is</p> <p>$$\int_0^1 (1-v)^{2-v}v^v \sin(v\pi) \; dv,$$</p> <p>but since $v$ is a dummy variable, we can change it to $x$ and we have the first integral.</p> Answer 3: <p>The equality can be shown by computing the integral of the difference, and observing a punctual symmetry around $(1/2,0)$.</p> <p>\begin{align}\int_{0}^{1}x^x(1-x)^{2-x}&amp;\sin(x\pi)dx-\int_{0}^{1}x^{1+x}(1-x)^{1-x}\sin(x\pi)dx \\&amp;= \int_{0}^{1}x^x((1-x)^{2-x}-x(1-x)^{1-x})\sin(x\pi)dx\\ &amp;= \int_{0}^{1}x^x(1-x)^{1-x}(1-2x)\sin(x\pi)dx\end{align}</p> <p>Let's denote the integrand by $f$: $$f(x)=x^x(1-x)^{1-x}(1-2x)\sin(x\pi).$$</p> <p>The function $f$ appears to have a punctual symmetry: \begin{align}f(1-x)&amp;=(1-x)^{1-x}(1-(1-x))^{1-(1-x)}(1-2(1-x))\sin((1-x)\pi)\\ &amp;=-x^x (1-x)^{1-x}\sin(\pi x) &amp; \\ &amp;= -f(x)\end{align} so the point $(1/2,0)$ is a point of symmetry of the graph of $f$, hence the integral over $[0,1]$ (where $f$ is defined) is $0$.</p> <hr> <p>The punctual symmetry can be guess from the graph of $f$ represented below:</p> <p><a href="https://i.stack.imgur.com/bJYVO.png"><img src="https://i.stack.imgur.com/bJYVO.png" alt="enter image description here"></a></p> <p>Note that the equality also holds for integral boundaries in $[0,1]$ and centered around $1/2$.</p> Answer 4: <p>In the second integral, make the substitution $v=1-x$, so that $x=1-v$ and $dx = -dv$. 
We get</p> <p>$$\int_0^1 x^{1+x}(1-x)^{1-x}\sin(x\pi) \; dx = \int_1^0 (1-v)^{1+(1-v)} v^v \sin((1-v)\pi) \; (-dv).$$</p> <p>The minus sign on $dv$ reverses the order of integration, and we have the identity $\sin(\pi -A) = \sin A$, so the above is</p> <p>$$\int_0^1 (1-v)^{2-v}v^v \sin(v\pi) \; dv,$$</p> <p>but since $v$ is a dummy variable, we can change it to $x$ and we have the first integral.</p> Answer 5: <p>The equality can be shown by computing the integral of the difference, and observing a punctual symmetry around $(1/2,0)$.</p> <p>\begin{align}\int_{0}^{1}x^x(1-x)^{2-x}&amp;\sin(x\pi)dx-\int_{0}^{1}x^{1+x}(1-x)^{1-x}\sin(x\pi)dx \\&amp;= \int_{0}^{1}x^x((1-x)^{2-x}-x(1-x)^{1-x})\sin(x\pi)dx\\ &amp;= \int_{0}^{1}x^x(1-x)^{1-x}(1-2x)\sin(x\pi)dx\end{align}</p> <p>Let's denote the integrand by $f$: $$f(x)=x^x(1-x)^{1-x}(1-2x)\sin(x\pi).$$</p> <p>The function $f$ appears to have a punctual symmetry: \begin{align}f(1-x)&amp;=(1-x)^{1-x}(1-(1-x))^{1-(1-x)}(1-2(1-x))\sin((1-x)\pi)\\ &amp;=-x^x (1-x)^{1-x}\sin(\pi x) &amp; \\ &amp;= -f(x)\end{align} so the point $(1/2,0)$ is a point of symmetry of the graph of $f$, hence the integral over $[0,1]$ (where $f$ is defined) is $0$.</p> <hr> <p>The punctual symmetry can be guess from the graph of $f$ represented below:</p> <p><a href="https://i.stack.imgur.com/bJYVO.png"><img src="https://i.stack.imgur.com/bJYVO.png" alt="enter image description here"></a></p> <p>Note that the equality also holds for integral boundaries in $[0,1]$ and centered around $1/2$.</p> Answer 6: <p>In the second integral, make the substitution $v=1-x$, so that $x=1-v$ and $dx = -dv$. 
We get</p> <p>$$\int_0^1 x^{1+x}(1-x)^{1-x}\sin(x\pi) \; dx = \int_1^0 (1-v)^{1+(1-v)} v^v \sin((1-v)\pi) \; (-dv).$$</p> <p>The minus sign on $dv$ reverses the order of integration, and we have the identity $\sin(\pi -A) = \sin A$, so the above is</p> <p>$$\int_0^1 (1-v)^{2-v}v^v \sin(v\pi) \; dv,$$</p> <p>but since $v$ is a dummy variable, we can change it to $x$ and we have the first integral.</p> Answer 7: <p>The equality can be shown by computing the integral of the difference, and observing a punctual symmetry around $(1/2,0)$.</p> <p>\begin{align}\int_{0}^{1}x^x(1-x)^{2-x}&amp;\sin(x\pi)dx-\int_{0}^{1}x^{1+x}(1-x)^{1-x}\sin(x\pi)dx \\&amp;= \int_{0}^{1}x^x((1-x)^{2-x}-x(1-x)^{1-x})\sin(x\pi)dx\\ &amp;= \int_{0}^{1}x^x(1-x)^{1-x}(1-2x)\sin(x\pi)dx\end{align}</p> <p>Let's denote the integrand by $f$: $$f(x)=x^x(1-x)^{1-x}(1-2x)\sin(x\pi).$$</p> <p>The function $f$ appears to have a punctual symmetry: \begin{align}f(1-x)&amp;=(1-x)^{1-x}(1-(1-x))^{1-(1-x)}(1-2(1-x))\sin((1-x)\pi)\\ &amp;=-x^x (1-x)^{1-x}\sin(\pi x) &amp; \\ &amp;= -f(x)\end{align} so the point $(1/2,0)$ is a point of symmetry of the graph of $f$, hence the integral over $[0,1]$ (where $f$ is defined) is $0$.</p> <hr> <p>The punctual symmetry can be guess from the graph of $f$ represented below:</p> <p><a href="https://i.stack.imgur.com/bJYVO.png"><img src="https://i.stack.imgur.com/bJYVO.png" alt="enter image description here"></a></p> <p>Note that the equality also holds for integral boundaries in $[0,1]$ and centered around $1/2$.</p> Answer 8: <p>In the second integral, make the substitution $v=1-x$, so that $x=1-v$ and $dx = -dv$. 
We get</p> <p>$$\int_0^1 x^{1+x}(1-x)^{1-x}\sin(x\pi) \; dx = \int_1^0 (1-v)^{1+(1-v)} v^v \sin((1-v)\pi) \; (-dv).$$</p> <p>The minus sign on $dv$ reverses the order of integration, and we have the identity $\sin(\pi -A) = \sin A$, so the above is</p> <p>$$\int_0^1 (1-v)^{2-v}v^v \sin(v\pi) \; dv,$$</p> <p>but since $v$ is a dummy variable, we can change it to $x$ and we have the first integral.</p> Answer 9: <p>The equality can be shown by computing the integral of the difference, and observing a punctual symmetry around $(1/2,0)$.</p> <p>\begin{align}\int_{0}^{1}x^x(1-x)^{2-x}&amp;\sin(x\pi)dx-\int_{0}^{1}x^{1+x}(1-x)^{1-x}\sin(x\pi)dx \\&amp;= \int_{0}^{1}x^x((1-x)^{2-x}-x(1-x)^{1-x})\sin(x\pi)dx\\ &amp;= \int_{0}^{1}x^x(1-x)^{1-x}(1-2x)\sin(x\pi)dx\end{align}</p> <p>Let's denote the integrand by $f$: $$f(x)=x^x(1-x)^{1-x}(1-2x)\sin(x\pi).$$</p> <p>The function $f$ appears to have a punctual symmetry: \begin{align}f(1-x)&amp;=(1-x)^{1-x}(1-(1-x))^{1-(1-x)}(1-2(1-x))\sin((1-x)\pi)\\ &amp;=-x^x (1-x)^{1-x}\sin(\pi x) &amp; \\ &amp;= -f(x)\end{align} so the point $(1/2,0)$ is a point of symmetry of the graph of $f$, hence the integral over $[0,1]$ (where $f$ is defined) is $0$.</p> <hr> <p>The punctual symmetry can be guess from the graph of $f$ represented below:</p> <p><a href="https://i.stack.imgur.com/bJYVO.png"><img src="https://i.stack.imgur.com/bJYVO.png" alt="enter image description here"></a></p> <p>Note that the equality also holds for integral boundaries in $[0,1]$ and centered around $1/2$.</p> Answer 10: <p>In the second integral, make the substitution $v=1-x$, so that $x=1-v$ and $dx = -dv$. 
We get</p> <p>$$\int_0^1 x^{1+x}(1-x)^{1-x}\sin(x\pi) \; dx = \int_1^0 (1-v)^{1+(1-v)} v^v \sin((1-v)\pi) \; (-dv).$$</p> <p>The minus sign on $dv$ reverses the order of integration, and we have the identity $\sin(\pi -A) = \sin A$, so the above is</p> <p>$$\int_0^1 (1-v)^{2-v}v^v \sin(v\pi) \; dv,$$</p> <p>but since $v$ is a dummy variable, we can change it to $x$ and we have the first integral.</p> Answer 11: <p>The equality can be shown by computing the integral of the difference, and observing a punctual symmetry around $(1/2,0)$.</p> <p>\begin{align}\int_{0}^{1}x^x(1-x)^{2-x}&amp;\sin(x\pi)dx-\int_{0}^{1}x^{1+x}(1-x)^{1-x}\sin(x\pi)dx \\&amp;= \int_{0}^{1}x^x((1-x)^{2-x}-x(1-x)^{1-x})\sin(x\pi)dx\\ &amp;= \int_{0}^{1}x^x(1-x)^{1-x}(1-2x)\sin(x\pi)dx\end{align}</p> <p>Let's denote the integrand by $f$: $$f(x)=x^x(1-x)^{1-x}(1-2x)\sin(x\pi).$$</p> <p>The function $f$ appears to have a punctual symmetry: \begin{align}f(1-x)&amp;=(1-x)^{1-x}(1-(1-x))^{1-(1-x)}(1-2(1-x))\sin((1-x)\pi)\\ &amp;=-x^x (1-x)^{1-x}\sin(\pi x) &amp; \\ &amp;= -f(x)\end{align} so the point $(1/2,0)$ is a point of symmetry of the graph of $f$, hence the integral over $[0,1]$ (where $f$ is defined) is $0$.</p> <hr> <p>The punctual symmetry can be guess from the graph of $f$ represented below:</p> <p><a href="https://i.stack.imgur.com/bJYVO.png"><img src="https://i.stack.imgur.com/bJYVO.png" alt="enter image description here"></a></p> <p>Note that the equality also holds for integral boundaries in $[0,1]$ and centered around $1/2$.</p> Answer 12: <p>In the second integral, make the substitution $v=1-x$, so that $x=1-v$ and $dx = -dv$. 
We get</p> <p>$$\int_0^1 x^{1+x}(1-x)^{1-x}\sin(x\pi) \; dx = \int_1^0 (1-v)^{1+(1-v)} v^v \sin((1-v)\pi) \; (-dv).$$</p> <p>The minus sign on $dv$ reverses the order of integration, and we have the identity $\sin(\pi -A) = \sin A$, so the above is</p> <p>$$\int_0^1 (1-v)^{2-v}v^v \sin(v\pi) \; dv,$$</p> <p>but since $v$ is a dummy variable, we can change it to $x$ and we have the first integral.</p> Answer 13: <p>The equality can be shown by computing the integral of the difference, and observing a punctual symmetry around $(1/2,0)$.</p> <p>\begin{align}\int_{0}^{1}x^x(1-x)^{2-x}&amp;\sin(x\pi)dx-\int_{0}^{1}x^{1+x}(1-x)^{1-x}\sin(x\pi)dx \\&amp;= \int_{0}^{1}x^x((1-x)^{2-x}-x(1-x)^{1-x})\sin(x\pi)dx\\ &amp;= \int_{0}^{1}x^x(1-x)^{1-x}(1-2x)\sin(x\pi)dx\end{align}</p> <p>Let's denote the integrand by $f$: $$f(x)=x^x(1-x)^{1-x}(1-2x)\sin(x\pi).$$</p> <p>The function $f$ appears to have a punctual symmetry: \begin{align}f(1-x)&amp;=(1-x)^{1-x}(1-(1-x))^{1-(1-x)}(1-2(1-x))\sin((1-x)\pi)\\ &amp;=-x^x (1-x)^{1-x}\sin(\pi x) &amp; \\ &amp;= -f(x)\end{align} so the point $(1/2,0)$ is a point of symmetry of the graph of $f$, hence the integral over $[0,1]$ (where $f$ is defined) is $0$.</p> <hr> <p>The punctual symmetry can be guess from the graph of $f$ represented below:</p> <p><a href="https://i.stack.imgur.com/bJYVO.png"><img src="https://i.stack.imgur.com/bJYVO.png" alt="enter image description here"></a></p> <p>Note that the equality also holds for integral boundaries in $[0,1]$ and centered around $1/2$.</p> Answer 14: <p>In the second integral, make the substitution $v=1-x$, so that $x=1-v$ and $dx = -dv$. 
We get</p> <p>$$\int_0^1 x^{1+x}(1-x)^{1-x}\sin(x\pi) \; dx = \int_1^0 (1-v)^{1+(1-v)} v^v \sin((1-v)\pi) \; (-dv).$$</p> <p>The minus sign on $dv$ reverses the order of integration, and we have the identity $\sin(\pi -A) = \sin A$, so the above is</p> <p>$$\int_0^1 (1-v)^{2-v}v^v \sin(v\pi) \; dv,$$</p> <p>but since $v$ is a dummy variable, we can change it to $x$ and we have the first integral.</p> Answer 15: <p>The equality can be shown by computing the integral of the difference, and observing a punctual symmetry around $(1/2,0)$.</p> <p>\begin{align}\int_{0}^{1}x^x(1-x)^{2-x}&amp;\sin(x\pi)dx-\int_{0}^{1}x^{1+x}(1-x)^{1-x}\sin(x\pi)dx \\&amp;= \int_{0}^{1}x^x((1-x)^{2-x}-x(1-x)^{1-x})\sin(x\pi)dx\\ &amp;= \int_{0}^{1}x^x(1-x)^{1-x}(1-2x)\sin(x\pi)dx\end{align}</p> <p>Let's denote the integrand by $f$: $$f(x)=x^x(1-x)^{1-x}(1-2x)\sin(x\pi).$$</p> <p>The function $f$ appears to have a punctual symmetry: \begin{align}f(1-x)&amp;=(1-x)^{1-x}(1-(1-x))^{1-(1-x)}(1-2(1-x))\sin((1-x)\pi)\\ &amp;=-x^x (1-x)^{1-x}\sin(\pi x) &amp; \\ &amp;= -f(x)\end{align} so the point $(1/2,0)$ is a point of symmetry of the graph of $f$, hence the integral over $[0,1]$ (where $f$ is defined) is $0$.</p> <hr> <p>The punctual symmetry can be guess from the graph of $f$ represented below:</p> <p><a href="https://i.stack.imgur.com/bJYVO.png"><img src="https://i.stack.imgur.com/bJYVO.png" alt="enter image description here"></a></p> <p>Note that the equality also holds for integral boundaries in $[0,1]$ and centered around $1/2$.</p> Answer 16: <p>In the second integral, make the substitution $v=1-x$, so that $x=1-v$ and $dx = -dv$. 
We get</p> <p>$$\int_0^1 x^{1+x}(1-x)^{1-x}\sin(x\pi) \; dx = \int_1^0 (1-v)^{1+(1-v)} v^v \sin((1-v)\pi) \; (-dv).$$</p> <p>The minus sign on $dv$ reverses the order of integration, and we have the identity $\sin(\pi -A) = \sin A$, so the above is</p> <p>$$\int_0^1 (1-v)^{2-v}v^v \sin(v\pi) \; dv,$$</p> <p>but since $v$ is a dummy variable, we can change it to $x$ and we have the first integral.</p> Answer 17: <p>The equality can be shown by computing the integral of the difference, and observing a punctual symmetry around $(1/2,0)$.</p> <p>\begin{align}\int_{0}^{1}x^x(1-x)^{2-x}&amp;\sin(x\pi)dx-\int_{0}^{1}x^{1+x}(1-x)^{1-x}\sin(x\pi)dx \\&amp;= \int_{0}^{1}x^x((1-x)^{2-x}-x(1-x)^{1-x})\sin(x\pi)dx\\ &amp;= \int_{0}^{1}x^x(1-x)^{1-x}(1-2x)\sin(x\pi)dx\end{align}</p> <p>Let's denote the integrand by $f$: $$f(x)=x^x(1-x)^{1-x}(1-2x)\sin(x\pi).$$</p> <p>The function $f$ appears to have a punctual symmetry: \begin{align}f(1-x)&amp;=(1-x)^{1-x}(1-(1-x))^{1-(1-x)}(1-2(1-x))\sin((1-x)\pi)\\ &amp;=-x^x (1-x)^{1-x}\sin(\pi x) &amp; \\ &amp;= -f(x)\end{align} so the point $(1/2,0)$ is a point of symmetry of the graph of $f$, hence the integral over $[0,1]$ (where $f$ is defined) is $0$.</p> <hr> <p>The punctual symmetry can be guess from the graph of $f$ represented below:</p> <p><a href="https://i.stack.imgur.com/bJYVO.png"><img src="https://i.stack.imgur.com/bJYVO.png" alt="enter image description here"></a></p> <p>Note that the equality also holds for integral boundaries in $[0,1]$ and centered around $1/2$.</p> Answer 18: <p>In the second integral, make the substitution $v=1-x$, so that $x=1-v$ and $dx = -dv$. 
We get</p> <p>$$\int_0^1 x^{1+x}(1-x)^{1-x}\sin(x\pi) \; dx = \int_1^0 (1-v)^{1+(1-v)} v^v \sin((1-v)\pi) \; (-dv).$$</p> <p>The minus sign on $dv$ reverses the order of integration, and we have the identity $\sin(\pi -A) = \sin A$, so the above is</p> <p>$$\int_0^1 (1-v)^{2-v}v^v \sin(v\pi) \; dv,$$</p> <p>but since $v$ is a dummy variable, we can change it to $x$ and we have the first integral.</p> Answer 19: <p>The equality can be shown by computing the integral of the difference, and observing a punctual symmetry around $(1/2,0)$.</p> <p>\begin{align}\int_{0}^{1}x^x(1-x)^{2-x}&amp;\sin(x\pi)dx-\int_{0}^{1}x^{1+x}(1-x)^{1-x}\sin(x\pi)dx \\&amp;= \int_{0}^{1}x^x((1-x)^{2-x}-x(1-x)^{1-x})\sin(x\pi)dx\\ &amp;= \int_{0}^{1}x^x(1-x)^{1-x}(1-2x)\sin(x\pi)dx\end{align}</p> <p>Let's denote the integrand by $f$: $$f(x)=x^x(1-x)^{1-x}(1-2x)\sin(x\pi).$$</p> <p>The function $f$ appears to have a punctual symmetry: \begin{align}f(1-x)&amp;=(1-x)^{1-x}(1-(1-x))^{1-(1-x)}(1-2(1-x))\sin((1-x)\pi)\\ &amp;=-x^x (1-x)^{1-x}\sin(\pi x) &amp; \\ &amp;= -f(x)\end{align} so the point $(1/2,0)$ is a point of symmetry of the graph of $f$, hence the integral over $[0,1]$ (where $f$ is defined) is $0$.</p> <hr> <p>The punctual symmetry can be guess from the graph of $f$ represented below:</p> <p><a href="https://i.stack.imgur.com/bJYVO.png"><img src="https://i.stack.imgur.com/bJYVO.png" alt="enter image description here"></a></p> <p>Note that the equality also holds for integral boundaries in $[0,1]$ and centered around $1/2$.</p> Answer 20: <p>In the second integral, make the substitution $v=1-x$, so that $x=1-v$ and $dx = -dv$. 
We get</p> <p>$$\int_0^1 x^{1+x}(1-x)^{1-x}\sin(x\pi) \; dx = \int_1^0 (1-v)^{1+(1-v)} v^v \sin((1-v)\pi) \; (-dv).$$</p> <p>The minus sign on $dv$ reverses the order of integration, and we have the identity $\sin(\pi -A) = \sin A$, so the above is</p> <p>$$\int_0^1 (1-v)^{2-v}v^v \sin(v\pi) \; dv,$$</p> <p>but since $v$ is a dummy variable, we can change it to $x$ and we have the first integral.</p> Answer 21: <p>The equality can be shown by computing the integral of the difference, and observing a punctual symmetry around $(1/2,0)$.</p> <p>\begin{align}\int_{0}^{1}x^x(1-x)^{2-x}&amp;\sin(x\pi)dx-\int_{0}^{1}x^{1+x}(1-x)^{1-x}\sin(x\pi)dx \\&amp;= \int_{0}^{1}x^x((1-x)^{2-x}-x(1-x)^{1-x})\sin(x\pi)dx\\ &amp;= \int_{0}^{1}x^x(1-x)^{1-x}(1-2x)\sin(x\pi)dx\end{align}</p> <p>Let's denote the integrand by $f$: $$f(x)=x^x(1-x)^{1-x}(1-2x)\sin(x\pi).$$</p> <p>The function $f$ appears to have a punctual symmetry: \begin{align}f(1-x)&amp;=(1-x)^{1-x}(1-(1-x))^{1-(1-x)}(1-2(1-x))\sin((1-x)\pi)\\ &amp;=-x^x (1-x)^{1-x}\sin(\pi x) &amp; \\ &amp;= -f(x)\end{align} so the point $(1/2,0)$ is a point of symmetry of the graph of $f$, hence the integral over $[0,1]$ (where $f$ is defined) is $0$.</p> <hr> <p>The punctual symmetry can be guess from the graph of $f$ represented below:</p> <p><a href="https://i.stack.imgur.com/bJYVO.png"><img src="https://i.stack.imgur.com/bJYVO.png" alt="enter image description here"></a></p> <p>Note that the equality also holds for integral boundaries in $[0,1]$ and centered around $1/2$.</p> Answer 22: <p>In the second integral, make the substitution $v=1-x$, so that $x=1-v$ and $dx = -dv$. 
We get</p> <p>$$\int_0^1 x^{1+x}(1-x)^{1-x}\sin(x\pi) \; dx = \int_1^0 (1-v)^{1+(1-v)} v^v \sin((1-v)\pi) \; (-dv).$$</p> <p>The minus sign on $dv$ reverses the order of integration, and we have the identity $\sin(\pi -A) = \sin A$, so the above is</p> <p>$$\int_0^1 (1-v)^{2-v}v^v \sin(v\pi) \; dv,$$</p> <p>but since $v$ is a dummy variable, we can change it to $x$ and we have the first integral.</p> Answer 23: <p>The equality can be shown by computing the integral of the difference, and observing a punctual symmetry around $(1/2,0)$.</p> <p>\begin{align}\int_{0}^{1}x^x(1-x)^{2-x}&amp;\sin(x\pi)dx-\int_{0}^{1}x^{1+x}(1-x)^{1-x}\sin(x\pi)dx \\&amp;= \int_{0}^{1}x^x((1-x)^{2-x}-x(1-x)^{1-x})\sin(x\pi)dx\\ &amp;= \int_{0}^{1}x^x(1-x)^{1-x}(1-2x)\sin(x\pi)dx\end{align}</p> <p>Let's denote the integrand by $f$: $$f(x)=x^x(1-x)^{1-x}(1-2x)\sin(x\pi).$$</p> <p>The function $f$ appears to have a punctual symmetry: \begin{align}f(1-x)&amp;=(1-x)^{1-x}(1-(1-x))^{1-(1-x)}(1-2(1-x))\sin((1-x)\pi)\\ &amp;=-x^x (1-x)^{1-x}\sin(\pi x) &amp; \\ &amp;= -f(x)\end{align} so the point $(1/2,0)$ is a point of symmetry of the graph of $f$, hence the integral over $[0,1]$ (where $f$ is defined) is $0$.</p> <hr> <p>The punctual symmetry can be guess from the graph of $f$ represented below:</p> <p><a href="https://i.stack.imgur.com/bJYVO.png"><img src="https://i.stack.imgur.com/bJYVO.png" alt="enter image description here"></a></p> <p>Note that the equality also holds for integral boundaries in $[0,1]$ and centered around $1/2$.</p> Answer 24: <p>In the second integral, make the substitution $v=1-x$, so that $x=1-v$ and $dx = -dv$. 
We get</p> <p>$$\int_0^1 x^{1+x}(1-x)^{1-x}\sin(x\pi) \; dx = \int_1^0 (1-v)^{1+(1-v)} v^v \sin((1-v)\pi) \; (-dv).$$</p> <p>The minus sign on $dv$ reverses the order of integration, and we have the identity $\sin(\pi -A) = \sin A$, so the above is</p> <p>$$\int_0^1 (1-v)^{2-v}v^v \sin(v\pi) \; dv,$$</p> <p>but since $v$ is a dummy variable, we can change it to $x$ and we have the first integral.</p> Answer 25: <p>The equality can be shown by computing the integral of the difference, and observing a punctual symmetry around $(1/2,0)$.</p> <p>\begin{align}\int_{0}^{1}x^x(1-x)^{2-x}&amp;\sin(x\pi)dx-\int_{0}^{1}x^{1+x}(1-x)^{1-x}\sin(x\pi)dx \\&amp;= \int_{0}^{1}x^x((1-x)^{2-x}-x(1-x)^{1-x})\sin(x\pi)dx\\ &amp;= \int_{0}^{1}x^x(1-x)^{1-x}(1-2x)\sin(x\pi)dx\end{align}</p> <p>Let's denote the integrand by $f$: $$f(x)=x^x(1-x)^{1-x}(1-2x)\sin(x\pi).$$</p> <p>The function $f$ appears to have a punctual symmetry: \begin{align}f(1-x)&amp;=(1-x)^{1-x}(1-(1-x))^{1-(1-x)}(1-2(1-x))\sin((1-x)\pi)\\ &amp;=-x^x (1-x)^{1-x}\sin(\pi x) &amp; \\ &amp;= -f(x)\end{align} so the point $(1/2,0)$ is a point of symmetry of the graph of $f$, hence the integral over $[0,1]$ (where $f$ is defined) is $0$.</p> <hr> <p>The punctual symmetry can be guess from the graph of $f$ represented below:</p> <p><a href="https://i.stack.imgur.com/bJYVO.png"><img src="https://i.stack.imgur.com/bJYVO.png" alt="enter image description here"></a></p> <p>Note that the equality also holds for integral boundaries in $[0,1]$ and centered around $1/2$.</p> Answer 26: <p>In the second integral, make the substitution $v=1-x$, so that $x=1-v$ and $dx = -dv$. 
We get</p> <p>$$\int_0^1 x^{1+x}(1-x)^{1-x}\sin(x\pi) \; dx = \int_1^0 (1-v)^{1+(1-v)} v^v \sin((1-v)\pi) \; (-dv).$$</p> <p>The minus sign on $dv$ reverses the order of integration, and we have the identity $\sin(\pi -A) = \sin A$, so the above is</p> <p>$$\int_0^1 (1-v)^{2-v}v^v \sin(v\pi) \; dv,$$</p> <p>but since $v$ is a dummy variable, we can change it to $x$ and we have the first integral.</p> Answer 27: <p>The equality can be shown by computing the integral of the difference, and observing a punctual symmetry around $(1/2,0)$.</p> <p>\begin{align}\int_{0}^{1}x^x(1-x)^{2-x}&amp;\sin(x\pi)dx-\int_{0}^{1}x^{1+x}(1-x)^{1-x}\sin(x\pi)dx \\&amp;= \int_{0}^{1}x^x((1-x)^{2-x}-x(1-x)^{1-x})\sin(x\pi)dx\\ &amp;= \int_{0}^{1}x^x(1-x)^{1-x}(1-2x)\sin(x\pi)dx\end{align}</p> <p>Let's denote the integrand by $f$: $$f(x)=x^x(1-x)^{1-x}(1-2x)\sin(x\pi).$$</p> <p>The function $f$ appears to have a punctual symmetry: \begin{align}f(1-x)&amp;=(1-x)^{1-x}(1-(1-x))^{1-(1-x)}(1-2(1-x))\sin((1-x)\pi)\\ &amp;=-x^x (1-x)^{1-x}\sin(\pi x) &amp; \\ &amp;= -f(x)\end{align} so the point $(1/2,0)$ is a point of symmetry of the graph of $f$, hence the integral over $[0,1]$ (where $f$ is defined) is $0$.</p> <hr> <p>The punctual symmetry can be guess from the graph of $f$ represented below:</p> <p><a href="https://i.stack.imgur.com/bJYVO.png"><img src="https://i.stack.imgur.com/bJYVO.png" alt="enter image description here"></a></p> <p>Note that the equality also holds for integral boundaries in $[0,1]$ and centered around $1/2$.</p> Answer 28: <p>In the second integral, make the substitution $v=1-x$, so that $x=1-v$ and $dx = -dv$. 
We get</p> <p>$$\int_0^1 x^{1+x}(1-x)^{1-x}\sin(x\pi) \; dx = \int_1^0 (1-v)^{1+(1-v)} v^v \sin((1-v)\pi) \; (-dv).$$</p> <p>The minus sign on $dv$ reverses the order of integration, and we have the identity $\sin(\pi -A) = \sin A$, so the above is</p> <p>$$\int_0^1 (1-v)^{2-v}v^v \sin(v\pi) \; dv,$$</p> <p>but since $v$ is a dummy variable, we can change it to $x$ and we have the first integral.</p> Answer 29: <p>The equality can be shown by computing the integral of the difference, and observing a punctual symmetry around $(1/2,0)$.</p> <p>\begin{align}\int_{0}^{1}x^x(1-x)^{2-x}&amp;\sin(x\pi)dx-\int_{0}^{1}x^{1+x}(1-x)^{1-x}\sin(x\pi)dx \\&amp;= \int_{0}^{1}x^x((1-x)^{2-x}-x(1-x)^{1-x})\sin(x\pi)dx\\ &amp;= \int_{0}^{1}x^x(1-x)^{1-x}(1-2x)\sin(x\pi)dx\end{align}</p> <p>Let's denote the integrand by $f$: $$f(x)=x^x(1-x)^{1-x}(1-2x)\sin(x\pi).$$</p> <p>The function $f$ appears to have a punctual symmetry: \begin{align}f(1-x)&amp;=(1-x)^{1-x}(1-(1-x))^{1-(1-x)}(1-2(1-x))\sin((1-x)\pi)\\ &amp;=-x^x (1-x)^{1-x}\sin(\pi x) &amp; \\ &amp;= -f(x)\end{align} so the point $(1/2,0)$ is a point of symmetry of the graph of $f$, hence the integral over $[0,1]$ (where $f$ is defined) is $0$.</p> <hr> <p>The punctual symmetry can be guess from the graph of $f$ represented below:</p> <p><a href="https://i.stack.imgur.com/bJYVO.png"><img src="https://i.stack.imgur.com/bJYVO.png" alt="enter image description here"></a></p> <p>Note that the equality also holds for integral boundaries in $[0,1]$ and centered around $1/2$.</p> Answer 30: <p>In the second integral, make the substitution $v=1-x$, so that $x=1-v$ and $dx = -dv$. 
We get</p> <p>$$\int_0^1 x^{1+x}(1-x)^{1-x}\sin(x\pi) \; dx = \int_1^0 (1-v)^{1+(1-v)} v^v \sin((1-v)\pi) \; (-dv).$$</p> <p>The minus sign on $dv$ reverses the order of integration, and we have the identity $\sin(\pi -A) = \sin A$, so the above is</p> <p>$$\int_0^1 (1-v)^{2-v}v^v \sin(v\pi) \; dv,$$</p> <p>but since $v$ is a dummy variable, we can change it to $x$ and we have the first integral.</p> Answer 31: <p>The equality can be shown by computing the integral of the difference, and observing a punctual symmetry around $(1/2,0)$.</p> <p>\begin{align}\int_{0}^{1}x^x(1-x)^{2-x}&amp;\sin(x\pi)dx-\int_{0}^{1}x^{1+x}(1-x)^{1-x}\sin(x\pi)dx \\&amp;= \int_{0}^{1}x^x((1-x)^{2-x}-x(1-x)^{1-x})\sin(x\pi)dx\\ &amp;= \int_{0}^{1}x^x(1-x)^{1-x}(1-2x)\sin(x\pi)dx\end{align}</p> <p>Let's denote the integrand by $f$: $$f(x)=x^x(1-x)^{1-x}(1-2x)\sin(x\pi).$$</p> <p>The function $f$ appears to have a punctual symmetry: \begin{align}f(1-x)&amp;=(1-x)^{1-x}(1-(1-x))^{1-(1-x)}(1-2(1-x))\sin((1-x)\pi)\\ &amp;=-x^x (1-x)^{1-x}\sin(\pi x) &amp; \\ &amp;= -f(x)\end{align} so the point $(1/2,0)$ is a point of symmetry of the graph of $f$, hence the integral over $[0,1]$ (where $f$ is defined) is $0$.</p> <hr> <p>The punctual symmetry can be guess from the graph of $f$ represented below:</p> <p><a href="https://i.stack.imgur.com/bJYVO.png"><img src="https://i.stack.imgur.com/bJYVO.png" alt="enter image description here"></a></p> <p>Note that the equality also holds for integral boundaries in $[0,1]$ and centered around $1/2$.</p> Answer 32: <p>In the second integral, make the substitution $v=1-x$, so that $x=1-v$ and $dx = -dv$. 
We get</p> <p>$$\int_0^1 x^{1+x}(1-x)^{1-x}\sin(x\pi) \; dx = \int_1^0 (1-v)^{1+(1-v)} v^v \sin((1-v)\pi) \; (-dv).$$</p> <p>The minus sign on $dv$ reverses the order of integration, and we have the identity $\sin(\pi -A) = \sin A$, so the above is</p> <p>$$\int_0^1 (1-v)^{2-v}v^v \sin(v\pi) \; dv,$$</p> <p>but since $v$ is a dummy variable, we can change it to $x$ and we have the first integral.</p>
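The substitution argument above is easy to check numerically. The sketch below (illustrative names and a plain midpoint rule, not from the original post) approximates both integrals and compares them:

```python
import math

def midpoint(f, a, b, n=20000):
    # Midpoint-rule approximation of the integral of f over [a, b];
    # midpoints avoid the endpoints, where x**x and (1-x)**(1-x) need care.
    h = (b - a) / n
    return sum(f(a + (k + 0.5) * h) for k in range(n)) * h

f1 = lambda x: x**x * (1 - x)**(2 - x) * math.sin(math.pi * x)
f2 = lambda x: x**(1 + x) * (1 - x)**(1 - x) * math.sin(math.pi * x)

i1 = midpoint(f1, 0.0, 1.0)
i2 = midpoint(f2, 0.0, 1.0)
print(i1, i2)  # the two values agree, as the substitution v = 1 - x predicts
```

Because $f_2(x) = f_1(1-x)$ and the midpoint nodes are symmetric about $1/2$, the two approximations agree essentially to rounding error, mirroring the exact identity.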
A Viéte-like infinite product for $\sin (\pi/7)$
2,084,773
8
https://math.stackexchange.com/questions/2084773/a-vi%c3%a9te-like-infinite-product-for-sin-pi-7
<p>In the article "The unruly $\sin (\pi/7)$ of Samuel Moreno, the following infinite product is given: <a href="https://i.stack.imgur.com/6JOVz.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/6JOVz.png" alt="enter image description here"></a></p> <p>Also, the same article shows that $\sin (\pi/7)$ is equal to the infinite nested radical</p> <p>$ \sin (\pi/7) = \frac{1}{2}\sqrt{2 - \sqrt{2 - \sqrt{2 - \sqrt{2 + \sqrt{2 - \sqrt{2 - \sqrt{2 + \sqrt{2 - \sqrt{2 - \sqrt{2+...}}}}}}}}}}$ </p> <p>The question is: are these two expressions fundamentally different for $\sin(\pi/7)$, or is it possible to rewritte the infinite nested radical into that infinite product by simple algebraic manipulations?</p>
[ { "answer_id": 2089595, "score": 2, "text": "<p>Let:\n$$ \n\\begin{align} \n\\LARGE{\\rho}\\normalsize &amp;= \\left(\\small \\sqrt{2} \\,\\sqrt{2-\\sqrt{2}} \\,\\sqrt{2+\\sqrt{2-\\sqrt{2}}} \\normalsize\\right) \\times \\\\[2mm] \n&amp; \\space \\left(\\small \\sqrt{2\\color{red}{-}\\sqrt{2+\\sqrt{2-\\sqrt{2}}}} \\,\\sqrt{2\\color{red}{-}\\sqrt{2-\\sqrt{2+\\sqrt{2-\\sqrt{2}}}}} \\,\\color{red}{\\sqrt{2+ \\color{black}{\\sqrt{2-\\sqrt{2-\\sqrt{2+\\sqrt{2-\\sqrt{2}}}}}}}} \\normalsize\\right) \\times \\cdots \\\\[4mm] \n\\LARGE{\\sigma}\\normalsize &amp;= \\color{red}{\\sqrt{2- \\color{black}{\\sqrt{2-\\sqrt{2-\\sqrt{2+\\sqrt{2-\\sqrt{2-\\sqrt{2+\\sqrt{\\cdots}}}}}}}}}} \\space\\space\\small\\{\\text{Same as last item of}\\,\\rho\\,\\text{with minus sign}\\} \n\\end{align} \n$$\nFundamentally, the two expressions are the same. <br/>\nAnd to transpose between them, we use the property:\n$$ \n\\boxed{ \\quad \\color{red}{\\Large{\\rho \\cdot \\sigma = 2}} \\quad } \\\\[4mm] \n\\frac{1}{\\rho} = \\frac{1}{\\rho} \\color{red}{\\cdot \\frac{\\sigma}{\\sigma}} = \\frac{\\sigma}{\\rho\\cdot\\sigma} = \\frac{\\sigma}{2} = \\quad\\sin\\left(\\frac{\\pi}{7}\\right)\\quad = \\frac{\\sigma}{2} = \\frac{\\sigma}{2} \\color{red}{\\cdot \\frac{\\rho}{\\rho}} = \\frac{\\rho\\cdot\\sigma/2}{\\rho} = \\frac{1}{\\rho}\n$$\n <br/>\nIn order to trace the simplifying process, let us consider the definition of both $\\left(\\large{\\rho}\\right)$ and $\\left(\\large{\\sigma}\\right)$ upto $n$, <br/>\nWhere $\\left(\\large{\\rho}\\right)$ definition is consist of <strong>three square-roots at a time</strong>, as illustrated above:\n$$ \n\\begin{align} \n\\LARGE{\\rho}\\normalsize &amp;= \\left(\\large{\\rho}_{\\normalsize 0}\\,\\large{\\rho}_{\\normalsize 1}\\,\\large{\\rho}_{\\normalsize 2}\\right) \\left(\\large{\\rho}_{\\normalsize 3}\\,\\large{\\rho}_{\\normalsize 4}\\,\\large{\\rho}_{\\normalsize 5}\\right) \\cdots \\left(\\large{\\rho}_{\\normalsize 
3n-3}\\,\\large{\\rho}_{\\normalsize 3n-2}\\,\\large{\\rho}_{\\normalsize 3n-1}\\right) \\left(\\large{\\rho}_{\\normalsize 3n}\\,\\large{\\rho}_{\\normalsize 3n+1}\\,\\large{\\rho}_{\\normalsize 3n+2}\\right) \\\\[2mm] \n&amp;= \\left(\\large{\\rho}_{\\normalsize 0}\\,\\large{\\rho}_{\\normalsize 1}\\,\\large{\\rho}_{\\normalsize 2}\\right) \\left(\\large{\\rho}_{\\normalsize 3}\\,\\large{\\rho}_{\\normalsize 4}\\,\\large{\\rho}_{\\normalsize 5}\\right) \\cdots \\left(\\large{\\rho}_{\\normalsize 3n-3}\\,\\large{\\rho}_{\\normalsize 3n-2}\\,\\large{\\rho}_{\\normalsize 3n-1}\\right) \\left(\\sqrt{2\\color{red}{-}\\rho_{\\small 3n-1}}\\,\\sqrt{2\\color{red}{-}\\rho_{\\small 3n}}\\,\\sqrt{2\\color{red}{+}\\rho_{\\small 3n+1}}\\right)\n\\end{align} \n$$\nAnd $\\left(\\large{\\sigma}\\right)$ equals <strong>same as last item of</strong> $\\large{\\rho}\\,$ <strong>with minus sign</strong>, as mentioned above:\n$$ \\LARGE{\\sigma}\\normalsize = \\sqrt{2\\color{red}{-}\\rho_{\\small 3n+1}} \\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad $$\nSo:\n$$ \n\\begin{align} \n&amp; \\left(\\large{\\rho}_{\\normalsize 3n}\\,\\large{\\rho}_{\\normalsize 3n+1}\\,\\large{\\rho}_{\\normalsize 3n+2}\\right) \\,\\sqrt{2-\\rho_{\\small 3n+1}} \\\\[2mm] \n&amp; \\qquad = \\sqrt{2-\\rho_{\\small 3n-1}}\\,\\sqrt{2-\\rho_{\\small 3n}}\\,\\sqrt{2+\\rho_{\\small 3n+1}}\\,\\sqrt{2-\\rho_{\\small 3n+1}} = \\sqrt{2-\\rho_{\\small 3n-1}}\\,\\sqrt{2-\\rho_{\\small 3n}}\\,\\sqrt{4-\\rho_{\\small 3n+1}^2} \\\\[2mm] \n&amp; \\qquad = \\sqrt{2-\\rho_{\\small 3n-1}}\\,\\sqrt{2-\\rho_{\\small 3n}}\\,\\sqrt{4-\\left(2-\\rho_{\\small 3n}\\right)} = \\sqrt{2-\\rho_{\\small 3n-1}}\\,\\sqrt{4-\\rho_{\\small 3n}^2} \\\\[2mm] \n&amp; \\qquad = \\sqrt{2-\\rho_{\\small 3n-1}}\\,\\sqrt{4-\\left(2-\\rho_{\\small 3n-1}\\right)} = \\sqrt{4-\\rho_{\\small 3n-1}^2} = \\sqrt{4-\\left(2+\\rho_{\\small 3n-2}\\right)} = \\color{red}{\\sqrt{2-\\rho_{\\small 
3n-2}}}\n\\end{align} \n$$\nThus:\n$$ \n\\begin{align} \n\\color{red}{\\Large{\\rho \\cdot \\sigma}}\\normalsize &amp;= \\left[\\left(\\large{\\rho}_{\\normalsize 0}\\,\\large{\\rho}_{\\normalsize 1}\\,\\large{\\rho}_{\\normalsize 2}\\right) \\left(\\large{\\rho}_{\\normalsize 3}\\,\\large{\\rho}_{\\normalsize 4}\\,\\large{\\rho}_{\\normalsize 5}\\right) \\cdots \\left(\\large{\\rho}_{\\normalsize 3n-3}\\,\\large{\\rho}_{\\normalsize 3n-2}\\,\\large{\\rho}_{\\normalsize 3n-1}\\right) \\left(\\large{\\rho}_{\\normalsize 3n}\\,\\large{\\rho}_{\\normalsize 3n+1}\\,\\large{\\rho}_{\\normalsize 3n+2}\\right)\\right]\\,\\sqrt{2-\\large{\\rho}_{\\normalsize 3n+1}} \\\\[2mm] \n&amp;= \\left[\\left(\\large{\\rho}_{\\normalsize 0}\\,\\large{\\rho}_{\\normalsize 1}\\,\\large{\\rho}_{\\normalsize 2}\\right) \\left(\\large{\\rho}_{\\normalsize 3}\\,\\large{\\rho}_{\\normalsize 4}\\,\\large{\\rho}_{\\normalsize 5}\\right) \\cdots \\left(\\large{\\rho}_{\\normalsize 3n-3}\\,\\large{\\rho}_{\\normalsize 3n-2}\\,\\large{\\rho}_{\\normalsize 3n-1}\\right)\\right]\\,\\sqrt{2-\\large{\\rho}_{\\normalsize 3n-2}} \\\\[2mm] \n&amp;= \\left[\\left(\\large{\\rho}_{\\normalsize 0}\\,\\large{\\rho}_{\\normalsize 1}\\,\\large{\\rho}_{\\normalsize 2}\\right) \\left(\\large{\\rho}_{\\normalsize 3}\\,\\large{\\rho}_{\\normalsize 4}\\,\\large{\\rho}_{\\normalsize 5}\\right) \\cdots \\right]\\,\\sqrt{2-\\large{\\rho}_{\\normalsize 3n-5}} \\\\[2mm] \n&amp;= \\quad \\cdots \\quad \\cdots \\quad \\cdots \\\\[2mm] \n&amp;= \\left[\\left(\\large{\\rho}_{\\normalsize 0}\\,\\large{\\rho}_{\\normalsize 1}\\,\\large{\\rho}_{\\normalsize 2}\\right)\\right]\\,\\sqrt{2-\\large{\\rho}_{\\normalsize 1}} = \\large{\\rho}_{\\normalsize 0}\\normalsize\\,\\sqrt{4-\\rho_{\\small 0}^2} \\normalsize = \\sqrt{2}\\,\\sqrt{4-2} = \\color{red}{2 \\quad \\forall\\space n \\ge 1}\n\\end{align} \n$$\nHence:\n$$ \\lim_{n\\rightarrow\\infty}\\frac{1}{\\rho} = \\lim_{n\\rightarrow\\infty}\\frac{\\sigma}{2} = 
\\sin\\left(\\frac{\\pi}{7}\\right) \\quad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad $$</p>\n\n<hr>\n\n<p>In order use this technique in general, for a period of length $\\color{red}{m}$, we have: \n$$ \n\\begin{align} \n\\color{red}{{\\large\\rho}_{-1}=0},\\,\\, {\\large\\rho}_{0} &amp;= \\sqrt{2\\pm{\\rho}_{-1}},\\,\\, {\\large\\rho}_{1}=\\sqrt{2\\pm{\\rho}_{0}},\\,\\, {\\large\\rho}_{2}=\\sqrt{2\\pm{\\rho}_{1}},\\,\\, \\cdots ,\\,\\, {\\large\\rho}_{m-1}=\\sqrt{2\\pm{\\rho}_{m-2}},\\,\\, \\cdots \\\\[2mm] \n{\\Large\\rho} &amp;= \\prod_{k=0}^{n}\\left( {\\large\\rho}_{mk}\\, {\\large\\rho}_{mk+1}\\, \\cdots\\, {\\large\\rho}_{mk+m-2}\\, {\\large\\rho}_{mk+m-1} \\right) \\qquad \\\\[2mm] \n&amp;= \\prod_{k=0}^{n}\\left( \\sqrt{2\\pm{\\large\\rho}_{mk-1}}\\, \\sqrt{2\\pm{\\large\\rho}_{mk}}\\, \\cdots\\, \\sqrt{2\\pm{\\large\\rho}_{mk+m-3}}\\, \\sqrt{2\\color{red}{\\pm}{\\large\\rho}_{mk+m-2}} \\right) \\\\[2mm] \n{\\Large\\sigma} &amp;= \\sqrt{2\\color{red}{\\mp}{\\large\\rho}_{mn+m-2}}\n\\end{align} \n$$ \nHence, $\\,\\left({\\large\\sigma}\\right)\\,$ <strong>must start with a different sign from the actual period!</strong> <br/> \nNevertheless, if $\\,{\\large\\sigma}\\,$ is not comply with this condition, we still have options to write the product form of related/equivalent function by applying simple manipulations to the original $\\,{\\large\\sigma}\\,$. 
<br/> <br/> \nConsider the following example $\\left\\{\\,\\text{period}\\,\\left(+,+,-,-,-\\right)\\,\\right\\}$: \n$$ \n\\begin{align} \n\\cos\\left(\\frac{\\pi}{11}\\right) &amp;= \\frac12\\,\\sqrt{2\\color{red}{+}\\sqrt{2+\\sqrt{2-\\sqrt{2-\\sqrt{2-\\sqrt{2\\color{blue}{+}\\sqrt{\\cdots}}}}}}} \\\\[2mm] \n\\implies \\qquad &amp;\\space \\underbrace{\\color{red}{+}{\\large/}+,-,-,-,} \\underbrace{\\color{blue}{+}{\\large/}+,-,-,-,} \\underbrace{\\color{blue}{+}{\\large/}+,-,-,-,} \\color{blue}{+}{\\large/}\\cdots \n\\end{align} \n$$ \nAnd as it appears, after isolating the first sign, the pattern is not obeying the condition. <br/> \nWe have two options to convert it into applicable format:\n$$ \n\\color{blue}{\\underline{\\left({\\bf1}\\right) - \\,\\text{Convert to}\\, \\sin(\\pi/11)}} \\space\\quad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad \\\\[2mm] \n\\begin{align} \n\\sin\\left(\\frac{\\pi}{11}\\right) &amp;= \\sqrt{1-\\cos^2\\left(\\frac{\\pi}{11}\\right)} = \\frac12\\,\\sqrt{4-\\left[2\\cos\\left(\\frac{\\pi}{11}\\right)\\right]^2} \\Rightarrow \\\\[2mm] \n\\sin\\left(\\frac{\\pi}{11}\\right) &amp;= \\frac12\\,\\sqrt{2\\color{red}{-}\\sqrt{2+\\sqrt{2-\\sqrt{2-\\sqrt{2-\\sqrt{2\\color{blue}{+}\\sqrt{\\cdots}}}}}}} \\\\[2mm] \n\\implies \\qquad &amp;\\space \\underbrace{\\color{red}{-}{\\large/}+,-,-,-,} \\underbrace{\\color{blue}{+}{\\large/}+,-,-,-,} \\underbrace{\\color{blue}{+}{\\large/}+,-,-,-,} \\color{blue}{+}{\\large/}\\cdots \\\\[2mm] \n\\end{align} \\\\[2mm] \n{\\Large\\rho} = \\prod_{k=0}^{n}\\left( \\sqrt{2-{\\large\\rho}_{5k-1}}\\, \\sqrt{2-{\\large\\rho}_{5k\\space\\space\\space\\space}}\\, \\sqrt{2-{\\large\\rho}_{5k+1}}\\, \\sqrt{2+{\\large\\rho}_{5k+2}}\\, \\sqrt{2\\color{red}{+}{\\large\\rho}_{5k+3}} \\right) \\\\[2mm] \n{\\Large\\sigma} = \\sqrt{2\\color{red}{-}{\\large\\rho}_{5n+3}}\\,,\\quad {\\Large\\rho}\\cdot{\\Large\\sigma}=2\\,,\\quad 
\\sin\\left(\\frac{\\pi}{11}\\right)=\\frac{{\\large\\sigma}}{2}=\\frac{1}{{\\large\\rho}}\\,,\\quad \\cos\\left(\\frac{\\pi}{11}\\right)=\\sqrt{1-\\left(1/{\\large\\rho}\\right)^2} \\\\[4mm] \n$$ \n$$ \n\\color{blue}{\\underline{\\left({\\bf2}\\right) - \\,\\text{Convert to}\\, \\cos(\\pi/22)}} \\space\\quad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad \\\\[2mm] \n\\begin{align} \n\\cos\\left(\\frac{\\pi}{22}\\right) &amp;= \\sqrt{\\frac12+\\frac12\\,\\cos\\left(\\frac{\\pi}{11}\\right)} = \\frac12\\,\\sqrt{2+2\\,\\cos\\left(\\frac{\\pi}{11}\\right)} \\Rightarrow \\\\[2mm] \n\\cos\\left(\\frac{\\pi}{22}\\right) &amp;= \\frac12\\,\\sqrt{2\\color{red}{+}\\sqrt{2+\\sqrt{2+\\sqrt{2-\\sqrt{2-\\sqrt{2\\color{blue}{-}\\sqrt{\\cdots}}}}}}} \\\\[2mm] \n\\implies \\qquad &amp;\\space \\underbrace{\\color{red}{+}{\\large/}+,+,-,-,} \\underbrace{\\color{blue}{-}{\\large/}+,+,-,-,} \\underbrace{\\color{blue}{-}{\\large/}+,+,-,-,} \\color{blue}{-}{\\large/}\\cdots \\\\[2mm] \n\\end{align} \\\\[2mm] \n{\\Large\\rho} = \\prod_{k=0}^{n}\\left( \\sqrt{2-{\\large\\rho}_{5k-1}}\\, \\sqrt{2-{\\large\\rho}_{5k\\space\\space\\space\\space}}\\, \\sqrt{2+{\\large\\rho}_{5k+1}}\\, \\sqrt{2+{\\large\\rho}_{5k+2}}\\, \\sqrt{2\\color{red}{-}{\\large\\rho}_{5k+3}} \\right) \\\\[2mm] \n{\\Large\\sigma} = \\sqrt{2\\color{red}{+}{\\large\\rho}_{5n+3}}\\,,\\quad {\\Large\\rho}\\cdot{\\Large\\sigma}=2\\,,\\quad \\cos\\left(\\frac{\\pi}{22}\\right)=\\frac{{\\large\\sigma}}{2}=\\frac{1}{{\\large\\rho}}\\,,\\quad \\cos\\left(\\frac{\\pi}{11}\\right)=\\sqrt{\\frac12+\\frac12\\,\\left(1/{\\large\\rho}\\right)} \\\\[2mm] \n$$ \nAnd here are Mathematica codes for these two options: <br/> </p>\n\n<pre>\n<code>\nn = 4;\nr = 1; e = 0; For[k = 0, n >= k, k++, {a = Sqrt[2 - e]; b = Sqrt[2 - a]; c = Sqrt[2 - b]; d = Sqrt[2 + c]; e = Sqrt[2 + d]; r = r (a b c d e)}];\ns = 1; x = Sqrt[2 + Sqrt[2 - Sqrt[2 - Sqrt[2]]]]; For[k = 0, (n - 1) >= k, k++, {x = Sqrt[2 + Sqrt[2 
- Sqrt[2 - Sqrt[2 - Sqrt[2 + x]]]]]}]; s = Sqrt[2 - x];\nPrint[N[r s], \" \", N[1/r], \" \", N[s/2], \" \", N[Sin[Pi/11]]]\n\nn = 4;\nr = 1; e = 0; For[k = 0, n >= k, k++, {a = Sqrt[2 - e]; b = Sqrt[2 - a]; c = Sqrt[2 + b]; d = Sqrt[2 + c]; e = Sqrt[2 - d]; r = r (a b c d e)}];\ns = 1; x = Sqrt[2 + Sqrt[2 + Sqrt[2 - Sqrt[2]]]]; For[k = 0, (n - 1) >= k, k++, {x = Sqrt[2 + Sqrt[2 + Sqrt[2 - Sqrt[2 - Sqrt[2 - x]]]]]}]; s = Sqrt[2 + x];\nPrint[N[r s], \" \", N[1/r], \" \", N[s/2], \" \", N[Cos[Pi/22]]]\n</code>\n</pre>\n" }, { "answer_id": 4005275, "score": 1, "text": "<p>With the help of Half angle cosine formula it is possible to express many sine and cosine angles as finite or infinite nested square roots of 2 <a href=\"https://math.stackexchange.com/questions/3841604/solving-cyclic-infinite-nested-square-roots-of-2-as-cosine-functions\">here</a>.</p>\n<p>In this perspective <span class=\"math-container\">$2\\sin\\frac{\\pi}{7}$</span> can be solved as follows</p>\n<p><span class=\"math-container\">$2\\sin\\frac{\\pi}{7}$</span> = <span class=\"math-container\">$\\sqrt{2-2\\cos\\frac{2\\pi}{7}}$</span> = <span class=\"math-container\">$\\sqrt{2-\\sqrt{2+2\\cos\\frac{4\\pi}{7}}}$</span> = <span class=\"math-container\">$\\sqrt{2-\\sqrt{2-2\\cos\\frac{3\\pi}{7}}}$</span> = <span class=\"math-container\">$\\sqrt{2-\\sqrt{2-\\sqrt{2+2\\cos\\frac{6\\pi}{7}}}}$</span> = <span class=\"math-container\">$\\sqrt{2-\\sqrt{2-\\sqrt{2-2\\cos\\frac{\\pi}{7}}}}$</span> = <span class=\"math-container\">$\\sqrt{2-\\sqrt{2-\\sqrt{2-\\sqrt{2+2\\cos\\frac{2\\pi}{7}}}}}$</span></p>\n<p>One can observe repetition of <span class=\"math-container\">$2\\cos\\frac{2\\pi}{7}$</span> subsequently and <span class=\"math-container\">$\\therefore$</span> <span class=\"math-container\">$2\\sin\\frac{\\pi}{7}$</span> can be represented simply as</p>\n<blockquote>\n<p><strong><span 
class=\"math-container\">$$\\sqrt{2-\\sqrt{2-\\sqrt{2-\\sqrt{2+\\sqrt{2-\\sqrt{2-\\sqrt{2+...2\\cos\\frac{2\\pi}{7}}}}}}}}$$</span> with repeating pattern <span class=\"math-container\">$--+$</span> infinitely and first <span class=\"math-container\">$-$</span> as single nonrepeating sign</strong></p>\n</blockquote>\n<p>But getting infinite product has to be worked out</p>\n" }, { "answer_id": 2089595, "score": 2, "text": "<p>Let:\n$$ \n\\begin{align} \n\\LARGE{\\rho}\\normalsize &amp;= \\left(\\small \\sqrt{2} \\,\\sqrt{2-\\sqrt{2}} \\,\\sqrt{2+\\sqrt{2-\\sqrt{2}}} \\normalsize\\right) \\times \\\\[2mm] \n&amp; \\space \\left(\\small \\sqrt{2\\color{red}{-}\\sqrt{2+\\sqrt{2-\\sqrt{2}}}} \\,\\sqrt{2\\color{red}{-}\\sqrt{2-\\sqrt{2+\\sqrt{2-\\sqrt{2}}}}} \\,\\color{red}{\\sqrt{2+ \\color{black}{\\sqrt{2-\\sqrt{2-\\sqrt{2+\\sqrt{2-\\sqrt{2}}}}}}}} \\normalsize\\right) \\times \\cdots \\\\[4mm] \n\\LARGE{\\sigma}\\normalsize &amp;= \\color{red}{\\sqrt{2- \\color{black}{\\sqrt{2-\\sqrt{2-\\sqrt{2+\\sqrt{2-\\sqrt{2-\\sqrt{2+\\sqrt{\\cdots}}}}}}}}}} \\space\\space\\small\\{\\text{Same as last item of}\\,\\rho\\,\\text{with minus sign}\\} \n\\end{align} \n$$\nFundamentally, the two expressions are the same. 
<br/>\nAnd to transpose between them, we use the property:\n$$ \n\\boxed{ \\quad \\color{red}{\\Large{\\rho \\cdot \\sigma = 2}} \\quad } \\\\[4mm] \n\\frac{1}{\\rho} = \\frac{1}{\\rho} \\color{red}{\\cdot \\frac{\\sigma}{\\sigma}} = \\frac{\\sigma}{\\rho\\cdot\\sigma} = \\frac{\\sigma}{2} = \\quad\\sin\\left(\\frac{\\pi}{7}\\right)\\quad = \\frac{\\sigma}{2} = \\frac{\\sigma}{2} \\color{red}{\\cdot \\frac{\\rho}{\\rho}} = \\frac{\\rho\\cdot\\sigma/2}{\\rho} = \\frac{1}{\\rho}\n$$\n <br/>\nIn order to trace the simplifying process, let us consider the definition of both $\\left(\\large{\\rho}\\right)$ and $\\left(\\large{\\sigma}\\right)$ upto $n$, <br/>\nWhere $\\left(\\large{\\rho}\\right)$ definition is consist of <strong>three square-roots at a time</strong>, as illustrated above:\n$$ \n\\begin{align} \n\\LARGE{\\rho}\\normalsize &amp;= \\left(\\large{\\rho}_{\\normalsize 0}\\,\\large{\\rho}_{\\normalsize 1}\\,\\large{\\rho}_{\\normalsize 2}\\right) \\left(\\large{\\rho}_{\\normalsize 3}\\,\\large{\\rho}_{\\normalsize 4}\\,\\large{\\rho}_{\\normalsize 5}\\right) \\cdots \\left(\\large{\\rho}_{\\normalsize 3n-3}\\,\\large{\\rho}_{\\normalsize 3n-2}\\,\\large{\\rho}_{\\normalsize 3n-1}\\right) \\left(\\large{\\rho}_{\\normalsize 3n}\\,\\large{\\rho}_{\\normalsize 3n+1}\\,\\large{\\rho}_{\\normalsize 3n+2}\\right) \\\\[2mm] \n&amp;= \\left(\\large{\\rho}_{\\normalsize 0}\\,\\large{\\rho}_{\\normalsize 1}\\,\\large{\\rho}_{\\normalsize 2}\\right) \\left(\\large{\\rho}_{\\normalsize 3}\\,\\large{\\rho}_{\\normalsize 4}\\,\\large{\\rho}_{\\normalsize 5}\\right) \\cdots \\left(\\large{\\rho}_{\\normalsize 3n-3}\\,\\large{\\rho}_{\\normalsize 3n-2}\\,\\large{\\rho}_{\\normalsize 3n-1}\\right) \\left(\\sqrt{2\\color{red}{-}\\rho_{\\small 3n-1}}\\,\\sqrt{2\\color{red}{-}\\rho_{\\small 3n}}\\,\\sqrt{2\\color{red}{+}\\rho_{\\small 3n+1}}\\right)\n\\end{align} \n$$\nAnd $\\left(\\large{\\sigma}\\right)$ equals <strong>same as last item of</strong> $\\large{\\rho}\\,$ 
<strong>with minus sign</strong>, as mentioned above:\n$$ \\LARGE{\\sigma}\\normalsize = \\sqrt{2\\color{red}{-}\\rho_{\\small 3n+1}} \\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad $$\nSo:\n$$ \n\\begin{align} \n&amp; \\left(\\large{\\rho}_{\\normalsize 3n}\\,\\large{\\rho}_{\\normalsize 3n+1}\\,\\large{\\rho}_{\\normalsize 3n+2}\\right) \\,\\sqrt{2-\\rho_{\\small 3n+1}} \\\\[2mm] \n&amp; \\qquad = \\sqrt{2-\\rho_{\\small 3n-1}}\\,\\sqrt{2-\\rho_{\\small 3n}}\\,\\sqrt{2+\\rho_{\\small 3n+1}}\\,\\sqrt{2-\\rho_{\\small 3n+1}} = \\sqrt{2-\\rho_{\\small 3n-1}}\\,\\sqrt{2-\\rho_{\\small 3n}}\\,\\sqrt{4-\\rho_{\\small 3n+1}^2} \\\\[2mm] \n&amp; \\qquad = \\sqrt{2-\\rho_{\\small 3n-1}}\\,\\sqrt{2-\\rho_{\\small 3n}}\\,\\sqrt{4-\\left(2-\\rho_{\\small 3n}\\right)} = \\sqrt{2-\\rho_{\\small 3n-1}}\\,\\sqrt{4-\\rho_{\\small 3n}^2} \\\\[2mm] \n&amp; \\qquad = \\sqrt{2-\\rho_{\\small 3n-1}}\\,\\sqrt{4-\\left(2-\\rho_{\\small 3n-1}\\right)} = \\sqrt{4-\\rho_{\\small 3n-1}^2} = \\sqrt{4-\\left(2+\\rho_{\\small 3n-2}\\right)} = \\color{red}{\\sqrt{2-\\rho_{\\small 3n-2}}}\n\\end{align} \n$$\nThus:\n$$ \n\\begin{align} \n\\color{red}{\\Large{\\rho \\cdot \\sigma}}\\normalsize &amp;= \\left[\\left(\\large{\\rho}_{\\normalsize 0}\\,\\large{\\rho}_{\\normalsize 1}\\,\\large{\\rho}_{\\normalsize 2}\\right) \\left(\\large{\\rho}_{\\normalsize 3}\\,\\large{\\rho}_{\\normalsize 4}\\,\\large{\\rho}_{\\normalsize 5}\\right) \\cdots \\left(\\large{\\rho}_{\\normalsize 3n-3}\\,\\large{\\rho}_{\\normalsize 3n-2}\\,\\large{\\rho}_{\\normalsize 3n-1}\\right) \\left(\\large{\\rho}_{\\normalsize 3n}\\,\\large{\\rho}_{\\normalsize 3n+1}\\,\\large{\\rho}_{\\normalsize 3n+2}\\right)\\right]\\,\\sqrt{2-\\large{\\rho}_{\\normalsize 3n+1}} \\\\[2mm] \n&amp;= \\left[\\left(\\large{\\rho}_{\\normalsize 0}\\,\\large{\\rho}_{\\normalsize 1}\\,\\large{\\rho}_{\\normalsize 2}\\right) \\left(\\large{\\rho}_{\\normalsize 
3}\\,\\large{\\rho}_{\\normalsize 4}\\,\\large{\\rho}_{\\normalsize 5}\\right) \\cdots \\left(\\large{\\rho}_{\\normalsize 3n-3}\\,\\large{\\rho}_{\\normalsize 3n-2}\\,\\large{\\rho}_{\\normalsize 3n-1}\\right)\\right]\\,\\sqrt{2-\\large{\\rho}_{\\normalsize 3n-2}} \\\\[2mm] \n&amp;= \\left[\\left(\\large{\\rho}_{\\normalsize 0}\\,\\large{\\rho}_{\\normalsize 1}\\,\\large{\\rho}_{\\normalsize 2}\\right) \\left(\\large{\\rho}_{\\normalsize 3}\\,\\large{\\rho}_{\\normalsize 4}\\,\\large{\\rho}_{\\normalsize 5}\\right) \\cdots \\right]\\,\\sqrt{2-\\large{\\rho}_{\\normalsize 3n-5}} \\\\[2mm] \n&amp;= \\quad \\cdots \\quad \\cdots \\quad \\cdots \\\\[2mm] \n&amp;= \\left[\\left(\\large{\\rho}_{\\normalsize 0}\\,\\large{\\rho}_{\\normalsize 1}\\,\\large{\\rho}_{\\normalsize 2}\\right)\\right]\\,\\sqrt{2-\\large{\\rho}_{\\normalsize 1}} = \\large{\\rho}_{\\normalsize 0}\\normalsize\\,\\sqrt{4-\\rho_{\\small 0}^2} \\normalsize = \\sqrt{2}\\,\\sqrt{4-2} = \\color{red}{2 \\quad \\forall\\space n \\ge 1}\n\\end{align} \n$$\nHence:\n$$ \\lim_{n\\rightarrow\\infty}\\frac{1}{\\rho} = \\lim_{n\\rightarrow\\infty}\\frac{\\sigma}{2} = \\sin\\left(\\frac{\\pi}{7}\\right) \\quad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad $$</p>\n\n<hr>\n\n<p>In order use this technique in general, for a period of length $\\color{red}{m}$, we have: \n$$ \n\\begin{align} \n\\color{red}{{\\large\\rho}_{-1}=0},\\,\\, {\\large\\rho}_{0} &amp;= \\sqrt{2\\pm{\\rho}_{-1}},\\,\\, {\\large\\rho}_{1}=\\sqrt{2\\pm{\\rho}_{0}},\\,\\, {\\large\\rho}_{2}=\\sqrt{2\\pm{\\rho}_{1}},\\,\\, \\cdots ,\\,\\, {\\large\\rho}_{m-1}=\\sqrt{2\\pm{\\rho}_{m-2}},\\,\\, \\cdots \\\\[2mm] \n{\\Large\\rho} &amp;= \\prod_{k=0}^{n}\\left( {\\large\\rho}_{mk}\\, {\\large\\rho}_{mk+1}\\, \\cdots\\, {\\large\\rho}_{mk+m-2}\\, {\\large\\rho}_{mk+m-1} \\right) \\qquad \\\\[2mm] \n&amp;= \\prod_{k=0}^{n}\\left( \\sqrt{2\\pm{\\large\\rho}_{mk-1}}\\, \\sqrt{2\\pm{\\large\\rho}_{mk}}\\, \\cdots\\, 
\\sqrt{2\\pm{\\large\\rho}_{mk+m-3}}\\, \\sqrt{2\\color{red}{\\pm}{\\large\\rho}_{mk+m-2}} \\right) \\\\[2mm] \n{\\Large\\sigma} &amp;= \\sqrt{2\\color{red}{\\mp}{\\large\\rho}_{mn+m-2}}\n\\end{align} \n$$ \nHence, $\\,\\left({\\large\\sigma}\\right)\\,$ <strong>must start with a different sign from the actual period!</strong> <br/> \nNevertheless, if $\\,{\\large\\sigma}\\,$ is not comply with this condition, we still have options to write the product form of related/equivalent function by applying simple manipulations to the original $\\,{\\large\\sigma}\\,$. <br/> <br/> \nConsider the following example $\\left\\{\\,\\text{period}\\,\\left(+,+,-,-,-\\right)\\,\\right\\}$: \n$$ \n\\begin{align} \n\\cos\\left(\\frac{\\pi}{11}\\right) &amp;= \\frac12\\,\\sqrt{2\\color{red}{+}\\sqrt{2+\\sqrt{2-\\sqrt{2-\\sqrt{2-\\sqrt{2\\color{blue}{+}\\sqrt{\\cdots}}}}}}} \\\\[2mm] \n\\implies \\qquad &amp;\\space \\underbrace{\\color{red}{+}{\\large/}+,-,-,-,} \\underbrace{\\color{blue}{+}{\\large/}+,-,-,-,} \\underbrace{\\color{blue}{+}{\\large/}+,-,-,-,} \\color{blue}{+}{\\large/}\\cdots \n\\end{align} \n$$ \nAnd as it appears, after isolating the first sign, the pattern is not obeying the condition. 
<br/> \nWe have two options to convert it into applicable format:\n$$ \n\\color{blue}{\\underline{\\left({\\bf1}\\right) - \\,\\text{Convert to}\\, \\sin(\\pi/11)}} \\space\\quad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad \\\\[2mm] \n\\begin{align} \n\\sin\\left(\\frac{\\pi}{11}\\right) &amp;= \\sqrt{1-\\cos^2\\left(\\frac{\\pi}{11}\\right)} = \\frac12\\,\\sqrt{4-\\left[2\\cos\\left(\\frac{\\pi}{11}\\right)\\right]^2} \\Rightarrow \\\\[2mm] \n\\sin\\left(\\frac{\\pi}{11}\\right) &amp;= \\frac12\\,\\sqrt{2\\color{red}{-}\\sqrt{2+\\sqrt{2-\\sqrt{2-\\sqrt{2-\\sqrt{2\\color{blue}{+}\\sqrt{\\cdots}}}}}}} \\\\[2mm] \n\\implies \\qquad &amp;\\space \\underbrace{\\color{red}{-}{\\large/}+,-,-,-,} \\underbrace{\\color{blue}{+}{\\large/}+,-,-,-,} \\underbrace{\\color{blue}{+}{\\large/}+,-,-,-,} \\color{blue}{+}{\\large/}\\cdots \\\\[2mm] \n\\end{align} \\\\[2mm] \n{\\Large\\rho} = \\prod_{k=0}^{n}\\left( \\sqrt{2-{\\large\\rho}_{5k-1}}\\, \\sqrt{2-{\\large\\rho}_{5k\\space\\space\\space\\space}}\\, \\sqrt{2-{\\large\\rho}_{5k+1}}\\, \\sqrt{2+{\\large\\rho}_{5k+2}}\\, \\sqrt{2\\color{red}{+}{\\large\\rho}_{5k+3}} \\right) \\\\[2mm] \n{\\Large\\sigma} = \\sqrt{2\\color{red}{-}{\\large\\rho}_{5n+3}}\\,,\\quad {\\Large\\rho}\\cdot{\\Large\\sigma}=2\\,,\\quad \\sin\\left(\\frac{\\pi}{11}\\right)=\\frac{{\\large\\sigma}}{2}=\\frac{1}{{\\large\\rho}}\\,,\\quad \\cos\\left(\\frac{\\pi}{11}\\right)=\\sqrt{1-\\left(1/{\\large\\rho}\\right)^2} \\\\[4mm] \n$$ \n$$ \n\\color{blue}{\\underline{\\left({\\bf2}\\right) - \\,\\text{Convert to}\\, \\cos(\\pi/22)}} \\space\\quad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad \\\\[2mm] \n\\begin{align} \n\\cos\\left(\\frac{\\pi}{22}\\right) &amp;= \\sqrt{\\frac12+\\frac12\\,\\cos\\left(\\frac{\\pi}{11}\\right)} = \\frac12\\,\\sqrt{2+2\\,\\cos\\left(\\frac{\\pi}{11}\\right)} \\Rightarrow \\\\[2mm] \n\\cos\\left(\\frac{\\pi}{22}\\right) &amp;= 
\\frac12\\,\\sqrt{2\\color{red}{+}\\sqrt{2+\\sqrt{2+\\sqrt{2-\\sqrt{2-\\sqrt{2\\color{blue}{-}\\sqrt{\\cdots}}}}}}} \\\\[2mm] \n\\implies \\qquad &amp;\\space \\underbrace{\\color{red}{+}{\\large/}+,+,-,-,} \\underbrace{\\color{blue}{-}{\\large/}+,+,-,-,} \\underbrace{\\color{blue}{-}{\\large/}+,+,-,-,} \\color{blue}{-}{\\large/}\\cdots \\\\[2mm] \n\\end{align} \\\\[2mm] \n{\\Large\\rho} = \\prod_{k=0}^{n}\\left( \\sqrt{2-{\\large\\rho}_{5k-1}}\\, \\sqrt{2-{\\large\\rho}_{5k\\space\\space\\space\\space}}\\, \\sqrt{2+{\\large\\rho}_{5k+1}}\\, \\sqrt{2+{\\large\\rho}_{5k+2}}\\, \\sqrt{2\\color{red}{-}{\\large\\rho}_{5k+3}} \\right) \\\\[2mm] \n{\\Large\\sigma} = \\sqrt{2\\color{red}{+}{\\large\\rho}_{5n+3}}\\,,\\quad {\\Large\\rho}\\cdot{\\Large\\sigma}=2\\,,\\quad \\cos\\left(\\frac{\\pi}{22}\\right)=\\frac{{\\large\\sigma}}{2}=\\frac{1}{{\\large\\rho}}\\,,\\quad \\cos\\left(\\frac{\\pi}{11}\\right)=\\sqrt{\\frac12+\\frac12\\,\\left(1/{\\large\\rho}\\right)} \\\\[2mm] \n$$ \nAnd here are Mathematica codes for these two options: <br/> </p>\n\n<pre>\n<code>\nn = 4;\nr = 1; e = 0; For[k = 0, n >= k, k++, {a = Sqrt[2 - e]; b = Sqrt[2 - a]; c = Sqrt[2 - b]; d = Sqrt[2 + c]; e = Sqrt[2 + d]; r = r (a b c d e)}];\ns = 1; x = Sqrt[2 + Sqrt[2 - Sqrt[2 - Sqrt[2]]]]; For[k = 0, (n - 1) >= k, k++, {x = Sqrt[2 + Sqrt[2 - Sqrt[2 - Sqrt[2 - Sqrt[2 + x]]]]]}]; s = Sqrt[2 - x];\nPrint[N[r s], \" \", N[1/r], \" \", N[s/2], \" \", N[Sin[Pi/11]]]\n\nn = 4;\nr = 1; e = 0; For[k = 0, n >= k, k++, {a = Sqrt[2 - e]; b = Sqrt[2 - a]; c = Sqrt[2 + b]; d = Sqrt[2 + c]; e = Sqrt[2 - d]; r = r (a b c d e)}];\ns = 1; x = Sqrt[2 + Sqrt[2 + Sqrt[2 - Sqrt[2]]]]; For[k = 0, (n - 1) >= k, k++, {x = Sqrt[2 + Sqrt[2 + Sqrt[2 - Sqrt[2 - Sqrt[2 - x]]]]]}]; s = Sqrt[2 + x];\nPrint[N[r s], \" \", N[1/r], \" \", N[s/2], \" \", N[Cos[Pi/22]]]\n</code>\n</pre>\n" }, { "answer_id": 4005275, "score": 1, "text": "<p>With the help of Half angle cosine formula it is possible to express many sine and cosine angles 
as finite or infinite nested square roots of 2 <a href=\"https://math.stackexchange.com/questions/3841604/solving-cyclic-infinite-nested-square-roots-of-2-as-cosine-functions\">here</a>.</p>\n<p>In this perspective <span class=\"math-container\">$2\\sin\\frac{\\pi}{7}$</span> can be solved as follows</p>\n<p><span class=\"math-container\">$2\\sin\\frac{\\pi}{7}$</span> = <span class=\"math-container\">$\\sqrt{2-2\\cos\\frac{2\\pi}{7}}$</span> = <span class=\"math-container\">$\\sqrt{2-\\sqrt{2+2\\cos\\frac{4\\pi}{7}}}$</span> = <span class=\"math-container\">$\\sqrt{2-\\sqrt{2-2\\cos\\frac{3\\pi}{7}}}$</span> = <span class=\"math-container\">$\\sqrt{2-\\sqrt{2-\\sqrt{2+2\\cos\\frac{6\\pi}{7}}}}$</span> = <span class=\"math-container\">$\\sqrt{2-\\sqrt{2-\\sqrt{2-2\\cos\\frac{\\pi}{7}}}}$</span> = <span class=\"math-container\">$\\sqrt{2-\\sqrt{2-\\sqrt{2-\\sqrt{2+2\\cos\\frac{2\\pi}{7}}}}}$</span></p>\n<p>One can observe repetition of <span class=\"math-container\">$2\\cos\\frac{2\\pi}{7}$</span> subsequently and <span class=\"math-container\">$\\therefore$</span> <span class=\"math-container\">$2\\sin\\frac{\\pi}{7}$</span> can be represented simply as</p>\n<blockquote>\n<p><strong><span class=\"math-container\">$$\\sqrt{2-\\sqrt{2-\\sqrt{2-\\sqrt{2+\\sqrt{2-\\sqrt{2-\\sqrt{2+...2\\cos\\frac{2\\pi}{7}}}}}}}}$$</span> with repeating pattern <span class=\"math-container\">$--+$</span> infinitely and first <span class=\"math-container\">$-$</span> as single nonrepeating sign</strong></p>\n</blockquote>\n<p>But getting infinite product has to be worked out</p>\n" }, { "answer_id": 2089595, "score": 2, "text": "<p>Let:\n$$ \n\\begin{align} \n\\LARGE{\\rho}\\normalsize &amp;= \\left(\\small \\sqrt{2} \\,\\sqrt{2-\\sqrt{2}} \\,\\sqrt{2+\\sqrt{2-\\sqrt{2}}} \\normalsize\\right) \\times \\\\[2mm] \n&amp; \\space \\left(\\small \\sqrt{2\\color{red}{-}\\sqrt{2+\\sqrt{2-\\sqrt{2}}}} \\,\\sqrt{2\\color{red}{-}\\sqrt{2-\\sqrt{2+\\sqrt{2-\\sqrt{2}}}}} \\,\\color{red}{\\sqrt{2+ 
\\color{black}{\\sqrt{2-\\sqrt{2-\\sqrt{2+\\sqrt{2-\\sqrt{2}}}}}}}} \\normalsize\\right) \\times \\cdots \\\\[4mm] \n\\LARGE{\\sigma}\\normalsize &amp;= \\color{red}{\\sqrt{2- \\color{black}{\\sqrt{2-\\sqrt{2-\\sqrt{2+\\sqrt{2-\\sqrt{2-\\sqrt{2+\\sqrt{\\cdots}}}}}}}}}} \\space\\space\\small\\{\\text{Same as last item of}\\,\\rho\\,\\text{with minus sign}\\} \n\\end{align} \n$$\nFundamentally, the two expressions are the same. <br/>\nAnd to transpose between them, we use the property:\n$$ \n\\boxed{ \\quad \\color{red}{\\Large{\\rho \\cdot \\sigma = 2}} \\quad } \\\\[4mm] \n\\frac{1}{\\rho} = \\frac{1}{\\rho} \\color{red}{\\cdot \\frac{\\sigma}{\\sigma}} = \\frac{\\sigma}{\\rho\\cdot\\sigma} = \\frac{\\sigma}{2} = \\quad\\sin\\left(\\frac{\\pi}{7}\\right)\\quad = \\frac{\\sigma}{2} = \\frac{\\sigma}{2} \\color{red}{\\cdot \\frac{\\rho}{\\rho}} = \\frac{\\rho\\cdot\\sigma/2}{\\rho} = \\frac{1}{\\rho}\n$$\n <br/>\nIn order to trace the simplifying process, let us consider the definition of both $\\left(\\large{\\rho}\\right)$ and $\\left(\\large{\\sigma}\\right)$ upto $n$, <br/>\nWhere $\\left(\\large{\\rho}\\right)$ definition is consist of <strong>three square-roots at a time</strong>, as illustrated above:\n$$ \n\\begin{align} \n\\LARGE{\\rho}\\normalsize &amp;= \\left(\\large{\\rho}_{\\normalsize 0}\\,\\large{\\rho}_{\\normalsize 1}\\,\\large{\\rho}_{\\normalsize 2}\\right) \\left(\\large{\\rho}_{\\normalsize 3}\\,\\large{\\rho}_{\\normalsize 4}\\,\\large{\\rho}_{\\normalsize 5}\\right) \\cdots \\left(\\large{\\rho}_{\\normalsize 3n-3}\\,\\large{\\rho}_{\\normalsize 3n-2}\\,\\large{\\rho}_{\\normalsize 3n-1}\\right) \\left(\\large{\\rho}_{\\normalsize 3n}\\,\\large{\\rho}_{\\normalsize 3n+1}\\,\\large{\\rho}_{\\normalsize 3n+2}\\right) \\\\[2mm] \n&amp;= \\left(\\large{\\rho}_{\\normalsize 0}\\,\\large{\\rho}_{\\normalsize 1}\\,\\large{\\rho}_{\\normalsize 2}\\right) \\left(\\large{\\rho}_{\\normalsize 3}\\,\\large{\\rho}_{\\normalsize 
4}\\,\\large{\\rho}_{\\normalsize 5}\\right) \\cdots \\left(\\large{\\rho}_{\\normalsize 3n-3}\\,\\large{\\rho}_{\\normalsize 3n-2}\\,\\large{\\rho}_{\\normalsize 3n-1}\\right) \\left(\\sqrt{2\\color{red}{-}\\rho_{\\small 3n-1}}\\,\\sqrt{2\\color{red}{-}\\rho_{\\small 3n}}\\,\\sqrt{2\\color{red}{+}\\rho_{\\small 3n+1}}\\right)\n\\end{align} \n$$\nAnd $\\left(\\large{\\sigma}\\right)$ equals <strong>same as last item of</strong> $\\large{\\rho}\\,$ <strong>with minus sign</strong>, as mentioned above:\n$$ \\LARGE{\\sigma}\\normalsize = \\sqrt{2\\color{red}{-}\\rho_{\\small 3n+1}} \\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad $$\nSo:\n$$ \n\\begin{align} \n&amp; \\left(\\large{\\rho}_{\\normalsize 3n}\\,\\large{\\rho}_{\\normalsize 3n+1}\\,\\large{\\rho}_{\\normalsize 3n+2}\\right) \\,\\sqrt{2-\\rho_{\\small 3n+1}} \\\\[2mm] \n&amp; \\qquad = \\sqrt{2-\\rho_{\\small 3n-1}}\\,\\sqrt{2-\\rho_{\\small 3n}}\\,\\sqrt{2+\\rho_{\\small 3n+1}}\\,\\sqrt{2-\\rho_{\\small 3n+1}} = \\sqrt{2-\\rho_{\\small 3n-1}}\\,\\sqrt{2-\\rho_{\\small 3n}}\\,\\sqrt{4-\\rho_{\\small 3n+1}^2} \\\\[2mm] \n&amp; \\qquad = \\sqrt{2-\\rho_{\\small 3n-1}}\\,\\sqrt{2-\\rho_{\\small 3n}}\\,\\sqrt{4-\\left(2-\\rho_{\\small 3n}\\right)} = \\sqrt{2-\\rho_{\\small 3n-1}}\\,\\sqrt{4-\\rho_{\\small 3n}^2} \\\\[2mm] \n&amp; \\qquad = \\sqrt{2-\\rho_{\\small 3n-1}}\\,\\sqrt{4-\\left(2-\\rho_{\\small 3n-1}\\right)} = \\sqrt{4-\\rho_{\\small 3n-1}^2} = \\sqrt{4-\\left(2+\\rho_{\\small 3n-2}\\right)} = \\color{red}{\\sqrt{2-\\rho_{\\small 3n-2}}}\n\\end{align} \n$$\nThus:\n$$ \n\\begin{align} \n\\color{red}{\\Large{\\rho \\cdot \\sigma}}\\normalsize &amp;= \\left[\\left(\\large{\\rho}_{\\normalsize 0}\\,\\large{\\rho}_{\\normalsize 1}\\,\\large{\\rho}_{\\normalsize 2}\\right) \\left(\\large{\\rho}_{\\normalsize 3}\\,\\large{\\rho}_{\\normalsize 4}\\,\\large{\\rho}_{\\normalsize 5}\\right) \\cdots \\left(\\large{\\rho}_{\\normalsize 
3n-3}\\,\\large{\\rho}_{\\normalsize 3n-2}\\,\\large{\\rho}_{\\normalsize 3n-1}\\right) \\left(\\large{\\rho}_{\\normalsize 3n}\\,\\large{\\rho}_{\\normalsize 3n+1}\\,\\large{\\rho}_{\\normalsize 3n+2}\\right)\\right]\\,\\sqrt{2-\\large{\\rho}_{\\normalsize 3n+1}} \\\\[2mm] \n&amp;= \\left[\\left(\\large{\\rho}_{\\normalsize 0}\\,\\large{\\rho}_{\\normalsize 1}\\,\\large{\\rho}_{\\normalsize 2}\\right) \\left(\\large{\\rho}_{\\normalsize 3}\\,\\large{\\rho}_{\\normalsize 4}\\,\\large{\\rho}_{\\normalsize 5}\\right) \\cdots \\left(\\large{\\rho}_{\\normalsize 3n-3}\\,\\large{\\rho}_{\\normalsize 3n-2}\\,\\large{\\rho}_{\\normalsize 3n-1}\\right)\\right]\\,\\sqrt{2-\\large{\\rho}_{\\normalsize 3n-2}} \\\\[2mm] \n&amp;= \\left[\\left(\\large{\\rho}_{\\normalsize 0}\\,\\large{\\rho}_{\\normalsize 1}\\,\\large{\\rho}_{\\normalsize 2}\\right) \\left(\\large{\\rho}_{\\normalsize 3}\\,\\large{\\rho}_{\\normalsize 4}\\,\\large{\\rho}_{\\normalsize 5}\\right) \\cdots \\right]\\,\\sqrt{2-\\large{\\rho}_{\\normalsize 3n-5}} \\\\[2mm] \n&amp;= \\quad \\cdots \\quad \\cdots \\quad \\cdots \\\\[2mm] \n&amp;= \\left[\\left(\\large{\\rho}_{\\normalsize 0}\\,\\large{\\rho}_{\\normalsize 1}\\,\\large{\\rho}_{\\normalsize 2}\\right)\\right]\\,\\sqrt{2-\\large{\\rho}_{\\normalsize 1}} = \\large{\\rho}_{\\normalsize 0}\\normalsize\\,\\sqrt{4-\\rho_{\\small 0}^2} \\normalsize = \\sqrt{2}\\,\\sqrt{4-2} = \\color{red}{2 \\quad \\forall\\space n \\ge 1}\n\\end{align} \n$$\nHence:\n$$ \\lim_{n\\rightarrow\\infty}\\frac{1}{\\rho} = \\lim_{n\\rightarrow\\infty}\\frac{\\sigma}{2} = \\sin\\left(\\frac{\\pi}{7}\\right) \\quad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad $$</p>\n\n<hr>\n\n<p>In order to use this technique in general, for a period of length $\\color{red}{m}$, we have: \n$$ \n\\begin{align} \n\\color{red}{{\\large\\rho}_{-1}=0},\\,\\, {\\large\\rho}_{0} &amp;= \\sqrt{2\\pm{\\rho}_{-1}},\\,\\, {\\large\\rho}_{1}=\\sqrt{2\\pm{\\rho}_{0}},\\,\\, 
{\\large\\rho}_{2}=\\sqrt{2\\pm{\\rho}_{1}},\\,\\, \\cdots ,\\,\\, {\\large\\rho}_{m-1}=\\sqrt{2\\pm{\\rho}_{m-2}},\\,\\, \\cdots \\\\[2mm] \n{\\Large\\rho} &amp;= \\prod_{k=0}^{n}\\left( {\\large\\rho}_{mk}\\, {\\large\\rho}_{mk+1}\\, \\cdots\\, {\\large\\rho}_{mk+m-2}\\, {\\large\\rho}_{mk+m-1} \\right) \\qquad \\\\[2mm] \n&amp;= \\prod_{k=0}^{n}\\left( \\sqrt{2\\pm{\\large\\rho}_{mk-1}}\\, \\sqrt{2\\pm{\\large\\rho}_{mk}}\\, \\cdots\\, \\sqrt{2\\pm{\\large\\rho}_{mk+m-3}}\\, \\sqrt{2\\color{red}{\\pm}{\\large\\rho}_{mk+m-2}} \\right) \\\\[2mm] \n{\\Large\\sigma} &amp;= \\sqrt{2\\color{red}{\\mp}{\\large\\rho}_{mn+m-2}}\n\\end{align} \n$$ \nHence, $\\,\\left({\\large\\sigma}\\right)\\,$ <strong>must start with a different sign from the actual period!</strong> <br/> \nNevertheless, if $\\,{\\large\\sigma}\\,$ does not comply with this condition, we still have options to write the product form of a related/equivalent function by applying simple manipulations to the original $\\,{\\large\\sigma}\\,$. <br/> <br/> \nConsider the following example $\\left\\{\\,\\text{period}\\,\\left(+,+,-,-,-\\right)\\,\\right\\}$: \n$$ \n\\begin{align} \n\\cos\\left(\\frac{\\pi}{11}\\right) &amp;= \\frac12\\,\\sqrt{2\\color{red}{+}\\sqrt{2+\\sqrt{2-\\sqrt{2-\\sqrt{2-\\sqrt{2\\color{blue}{+}\\sqrt{\\cdots}}}}}}} \\\\[2mm] \n\\implies \\qquad &amp;\\space \\underbrace{\\color{red}{+}{\\large/}+,-,-,-,} \\underbrace{\\color{blue}{+}{\\large/}+,-,-,-,} \\underbrace{\\color{blue}{+}{\\large/}+,-,-,-,} \\color{blue}{+}{\\large/}\\cdots \n\\end{align} \n$$ \nAnd as it appears, after isolating the first sign, the pattern does not obey the condition. 
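The sign pattern just displayed for cos(pi/11) can be checked numerically. The short Python sketch below (not part of the original answer) evaluates the truncated nested radical from the innermost square root outward, with the signs cycling with period (+, +, -, -, -) starting from the outermost level:

```python
import math

# Sanity check of the nested radical for cos(pi/11) shown above:
# sqrt(2 + sqrt(2 + sqrt(2 - sqrt(2 - sqrt(2 - ...))))) with the
# sign period (+, +, -, -, -) repeating from the outermost root inward.
signs = [+1, +1, -1, -1, -1]

def nested(depth):
    """Evaluate the radical truncated at `depth` levels, innermost first."""
    x = 0.0
    for k in reversed(range(depth)):
        x = math.sqrt(2 + signs[k % 5] * x)
    return x

# Half of the nested radical converges to cos(pi/11); the truncation
# error shrinks by roughly a factor of 32 per five-level period.
assert abs(nested(40) / 2 - math.cos(math.pi / 11)) < 1e-9
```

The inside-out loop mirrors how the Mathematica code given further below builds its truncated radical, using a seed of 0 for the innermost term.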
<br/> \nWe have two options to convert it into applicable format:\n$$ \n\\color{blue}{\\underline{\\left({\\bf1}\\right) - \\,\\text{Convert to}\\, \\sin(\\pi/11)}} \\space\\quad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad \\\\[2mm] \n\\begin{align} \n\\sin\\left(\\frac{\\pi}{11}\\right) &amp;= \\sqrt{1-\\cos^2\\left(\\frac{\\pi}{11}\\right)} = \\frac12\\,\\sqrt{4-\\left[2\\cos\\left(\\frac{\\pi}{11}\\right)\\right]^2} \\Rightarrow \\\\[2mm] \n\\sin\\left(\\frac{\\pi}{11}\\right) &amp;= \\frac12\\,\\sqrt{2\\color{red}{-}\\sqrt{2+\\sqrt{2-\\sqrt{2-\\sqrt{2-\\sqrt{2\\color{blue}{+}\\sqrt{\\cdots}}}}}}} \\\\[2mm] \n\\implies \\qquad &amp;\\space \\underbrace{\\color{red}{-}{\\large/}+,-,-,-,} \\underbrace{\\color{blue}{+}{\\large/}+,-,-,-,} \\underbrace{\\color{blue}{+}{\\large/}+,-,-,-,} \\color{blue}{+}{\\large/}\\cdots \\\\[2mm] \n\\end{align} \\\\[2mm] \n{\\Large\\rho} = \\prod_{k=0}^{n}\\left( \\sqrt{2-{\\large\\rho}_{5k-1}}\\, \\sqrt{2-{\\large\\rho}_{5k\\space\\space\\space\\space}}\\, \\sqrt{2-{\\large\\rho}_{5k+1}}\\, \\sqrt{2+{\\large\\rho}_{5k+2}}\\, \\sqrt{2\\color{red}{+}{\\large\\rho}_{5k+3}} \\right) \\\\[2mm] \n{\\Large\\sigma} = \\sqrt{2\\color{red}{-}{\\large\\rho}_{5n+3}}\\,,\\quad {\\Large\\rho}\\cdot{\\Large\\sigma}=2\\,,\\quad \\sin\\left(\\frac{\\pi}{11}\\right)=\\frac{{\\large\\sigma}}{2}=\\frac{1}{{\\large\\rho}}\\,,\\quad \\cos\\left(\\frac{\\pi}{11}\\right)=\\sqrt{1-\\left(1/{\\large\\rho}\\right)^2} \\\\[4mm] \n$$ \n$$ \n\\color{blue}{\\underline{\\left({\\bf2}\\right) - \\,\\text{Convert to}\\, \\cos(\\pi/22)}} \\space\\quad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad \\\\[2mm] \n\\begin{align} \n\\cos\\left(\\frac{\\pi}{22}\\right) &amp;= \\sqrt{\\frac12+\\frac12\\,\\cos\\left(\\frac{\\pi}{11}\\right)} = \\frac12\\,\\sqrt{2+2\\,\\cos\\left(\\frac{\\pi}{11}\\right)} \\Rightarrow \\\\[2mm] \n\\cos\\left(\\frac{\\pi}{22}\\right) &amp;= 
\\frac12\\,\\sqrt{2\\color{red}{+}\\sqrt{2+\\sqrt{2+\\sqrt{2-\\sqrt{2-\\sqrt{2\\color{blue}{-}\\sqrt{\\cdots}}}}}}} \\\\[2mm] \n\\implies \\qquad &amp;\\space \\underbrace{\\color{red}{+}{\\large/}+,+,-,-,} \\underbrace{\\color{blue}{-}{\\large/}+,+,-,-,} \\underbrace{\\color{blue}{-}{\\large/}+,+,-,-,} \\color{blue}{-}{\\large/}\\cdots \\\\[2mm] \n\\end{align} \\\\[2mm] \n{\\Large\\rho} = \\prod_{k=0}^{n}\\left( \\sqrt{2-{\\large\\rho}_{5k-1}}\\, \\sqrt{2-{\\large\\rho}_{5k\\space\\space\\space\\space}}\\, \\sqrt{2+{\\large\\rho}_{5k+1}}\\, \\sqrt{2+{\\large\\rho}_{5k+2}}\\, \\sqrt{2\\color{red}{-}{\\large\\rho}_{5k+3}} \\right) \\\\[2mm] \n{\\Large\\sigma} = \\sqrt{2\\color{red}{+}{\\large\\rho}_{5n+3}}\\,,\\quad {\\Large\\rho}\\cdot{\\Large\\sigma}=2\\,,\\quad \\cos\\left(\\frac{\\pi}{22}\\right)=\\frac{{\\large\\sigma}}{2}=\\frac{1}{{\\large\\rho}}\\,,\\quad \\cos\\left(\\frac{\\pi}{11}\\right)=\\sqrt{\\frac12+\\frac12\\,\\left(1/{\\large\\rho}\\right)} \\\\[2mm] \n$$ \nAnd here are Mathematica codes for these two options: <br/> </p>\n\n<pre>\n<code>\nn = 4;\nr = 1; e = 0; For[k = 0, n >= k, k++, {a = Sqrt[2 - e]; b = Sqrt[2 - a]; c = Sqrt[2 - b]; d = Sqrt[2 + c]; e = Sqrt[2 + d]; r = r (a b c d e)}];\ns = 1; x = Sqrt[2 + Sqrt[2 - Sqrt[2 - Sqrt[2]]]]; For[k = 0, (n - 1) >= k, k++, {x = Sqrt[2 + Sqrt[2 - Sqrt[2 - Sqrt[2 - Sqrt[2 + x]]]]]}]; s = Sqrt[2 - x];\nPrint[N[r s], \" \", N[1/r], \" \", N[s/2], \" \", N[Sin[Pi/11]]]\n\nn = 4;\nr = 1; e = 0; For[k = 0, n >= k, k++, {a = Sqrt[2 - e]; b = Sqrt[2 - a]; c = Sqrt[2 + b]; d = Sqrt[2 + c]; e = Sqrt[2 - d]; r = r (a b c d e)}];\ns = 1; x = Sqrt[2 + Sqrt[2 + Sqrt[2 - Sqrt[2]]]]; For[k = 0, (n - 1) >= k, k++, {x = Sqrt[2 + Sqrt[2 + Sqrt[2 - Sqrt[2 - Sqrt[2 - x]]]]]}]; s = Sqrt[2 + x];\nPrint[N[r s], \" \", N[1/r], \" \", N[s/2], \" \", N[Cos[Pi/22]]]\n</code>\n</pre>\n" }, { "answer_id": 4005275, "score": 1, "text": "<p>With the help of Half angle cosine formula it is possible to express many sine and cosine angles 
as finite or infinite nested square roots of 2 <a href=\"https://math.stackexchange.com/questions/3841604/solving-cyclic-infinite-nested-square-roots-of-2-as-cosine-functions\">here</a>.</p>\n<p>In this perspective <span class=\"math-container\">$2\\sin\\frac{\\pi}{7}$</span> can be solved as follows</p>\n<p><span class=\"math-container\">$2\\sin\\frac{\\pi}{7}$</span> = <span class=\"math-container\">$\\sqrt{2-2\\cos\\frac{2\\pi}{7}}$</span> = <span class=\"math-container\">$\\sqrt{2-\\sqrt{2+2\\cos\\frac{4\\pi}{7}}}$</span> = <span class=\"math-container\">$\\sqrt{2-\\sqrt{2-2\\cos\\frac{3\\pi}{7}}}$</span> = <span class=\"math-container\">$\\sqrt{2-\\sqrt{2-\\sqrt{2+2\\cos\\frac{6\\pi}{7}}}}$</span> = <span class=\"math-container\">$\\sqrt{2-\\sqrt{2-\\sqrt{2-2\\cos\\frac{\\pi}{7}}}}$</span> = <span class=\"math-container\">$\\sqrt{2-\\sqrt{2-\\sqrt{2-\\sqrt{2+2\\cos\\frac{2\\pi}{7}}}}}$</span></p>\n<p>One can observe repetition of <span class=\"math-container\">$2\\cos\\frac{2\\pi}{7}$</span> subsequently and <span class=\"math-container\">$\\therefore$</span> <span class=\"math-container\">$2\\sin\\frac{\\pi}{7}$</span> can be represented simply as</p>\n<blockquote>\n<p><strong><span class=\"math-container\">$$\\sqrt{2-\\sqrt{2-\\sqrt{2-\\sqrt{2+\\sqrt{2-\\sqrt{2-\\sqrt{2+...2\\cos\\frac{2\\pi}{7}}}}}}}}$$</span> with repeating pattern <span class=\"math-container\">$--+$</span> infinitely and first <span class=\"math-container\">$-$</span> as single nonrepeating sign</strong></p>\n</blockquote>\n<p>But getting infinite product has to be worked out</p>\n" }, { "answer_id": 2089595, "score": 2, "text": "<p>Let:\n$$ \n\\begin{align} \n\\LARGE{\\rho}\\normalsize &amp;= \\left(\\small \\sqrt{2} \\,\\sqrt{2-\\sqrt{2}} \\,\\sqrt{2+\\sqrt{2-\\sqrt{2}}} \\normalsize\\right) \\times \\\\[2mm] \n&amp; \\space \\left(\\small \\sqrt{2\\color{red}{-}\\sqrt{2+\\sqrt{2-\\sqrt{2}}}} \\,\\sqrt{2\\color{red}{-}\\sqrt{2-\\sqrt{2+\\sqrt{2-\\sqrt{2}}}}} \\,\\color{red}{\\sqrt{2+ 
\\color{black}{\\sqrt{2-\\sqrt{2-\\sqrt{2+\\sqrt{2-\\sqrt{2}}}}}}}} \\normalsize\\right) \\times \\cdots \\\\[4mm] \n\\LARGE{\\sigma}\\normalsize &amp;= \\color{red}{\\sqrt{2- \\color{black}{\\sqrt{2-\\sqrt{2-\\sqrt{2+\\sqrt{2-\\sqrt{2-\\sqrt{2+\\sqrt{\\cdots}}}}}}}}}} \\space\\space\\small\\{\\text{Same as last item of}\\,\\rho\\,\\text{with minus sign}\\} \n\\end{align} \n$$\nFundamentally, the two expressions are the same. <br/>\nAnd to transpose between them, we use the property:\n$$ \n\\boxed{ \\quad \\color{red}{\\Large{\\rho \\cdot \\sigma = 2}} \\quad } \\\\[4mm] \n\\frac{1}{\\rho} = \\frac{1}{\\rho} \\color{red}{\\cdot \\frac{\\sigma}{\\sigma}} = \\frac{\\sigma}{\\rho\\cdot\\sigma} = \\frac{\\sigma}{2} = \\quad\\sin\\left(\\frac{\\pi}{7}\\right)\\quad = \\frac{\\sigma}{2} = \\frac{\\sigma}{2} \\color{red}{\\cdot \\frac{\\rho}{\\rho}} = \\frac{\\rho\\cdot\\sigma/2}{\\rho} = \\frac{1}{\\rho}\n$$\n <br/>\nIn order to trace the simplifying process, let us consider the definition of both $\\left(\\large{\\rho}\\right)$ and $\\left(\\large{\\sigma}\\right)$ upto $n$, <br/>\nWhere $\\left(\\large{\\rho}\\right)$ definition is consist of <strong>three square-roots at a time</strong>, as illustrated above:\n$$ \n\\begin{align} \n\\LARGE{\\rho}\\normalsize &amp;= \\left(\\large{\\rho}_{\\normalsize 0}\\,\\large{\\rho}_{\\normalsize 1}\\,\\large{\\rho}_{\\normalsize 2}\\right) \\left(\\large{\\rho}_{\\normalsize 3}\\,\\large{\\rho}_{\\normalsize 4}\\,\\large{\\rho}_{\\normalsize 5}\\right) \\cdots \\left(\\large{\\rho}_{\\normalsize 3n-3}\\,\\large{\\rho}_{\\normalsize 3n-2}\\,\\large{\\rho}_{\\normalsize 3n-1}\\right) \\left(\\large{\\rho}_{\\normalsize 3n}\\,\\large{\\rho}_{\\normalsize 3n+1}\\,\\large{\\rho}_{\\normalsize 3n+2}\\right) \\\\[2mm] \n&amp;= \\left(\\large{\\rho}_{\\normalsize 0}\\,\\large{\\rho}_{\\normalsize 1}\\,\\large{\\rho}_{\\normalsize 2}\\right) \\left(\\large{\\rho}_{\\normalsize 3}\\,\\large{\\rho}_{\\normalsize 
4}\\,\\large{\\rho}_{\\normalsize 5}\\right) \\cdots \\left(\\large{\\rho}_{\\normalsize 3n-3}\\,\\large{\\rho}_{\\normalsize 3n-2}\\,\\large{\\rho}_{\\normalsize 3n-1}\\right) \\left(\\sqrt{2\\color{red}{-}\\rho_{\\small 3n-1}}\\,\\sqrt{2\\color{red}{-}\\rho_{\\small 3n}}\\,\\sqrt{2\\color{red}{+}\\rho_{\\small 3n+1}}\\right)\n\\end{align} \n$$\nAnd $\\left(\\large{\\sigma}\\right)$ equals <strong>same as last item of</strong> $\\large{\\rho}\\,$ <strong>with minus sign</strong>, as mentioned above:\n$$ \\LARGE{\\sigma}\\normalsize = \\sqrt{2\\color{red}{-}\\rho_{\\small 3n+1}} \\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad $$\nSo:\n$$ \n\\begin{align} \n&amp; \\left(\\large{\\rho}_{\\normalsize 3n}\\,\\large{\\rho}_{\\normalsize 3n+1}\\,\\large{\\rho}_{\\normalsize 3n+2}\\right) \\,\\sqrt{2-\\rho_{\\small 3n+1}} \\\\[2mm] \n&amp; \\qquad = \\sqrt{2-\\rho_{\\small 3n-1}}\\,\\sqrt{2-\\rho_{\\small 3n}}\\,\\sqrt{2+\\rho_{\\small 3n+1}}\\,\\sqrt{2-\\rho_{\\small 3n+1}} = \\sqrt{2-\\rho_{\\small 3n-1}}\\,\\sqrt{2-\\rho_{\\small 3n}}\\,\\sqrt{4-\\rho_{\\small 3n+1}^2} \\\\[2mm] \n&amp; \\qquad = \\sqrt{2-\\rho_{\\small 3n-1}}\\,\\sqrt{2-\\rho_{\\small 3n}}\\,\\sqrt{4-\\left(2-\\rho_{\\small 3n}\\right)} = \\sqrt{2-\\rho_{\\small 3n-1}}\\,\\sqrt{4-\\rho_{\\small 3n}^2} \\\\[2mm] \n&amp; \\qquad = \\sqrt{2-\\rho_{\\small 3n-1}}\\,\\sqrt{4-\\left(2-\\rho_{\\small 3n-1}\\right)} = \\sqrt{4-\\rho_{\\small 3n-1}^2} = \\sqrt{4-\\left(2+\\rho_{\\small 3n-2}\\right)} = \\color{red}{\\sqrt{2-\\rho_{\\small 3n-2}}}\n\\end{align} \n$$\nThus:\n$$ \n\\begin{align} \n\\color{red}{\\Large{\\rho \\cdot \\sigma}}\\normalsize &amp;= \\left[\\left(\\large{\\rho}_{\\normalsize 0}\\,\\large{\\rho}_{\\normalsize 1}\\,\\large{\\rho}_{\\normalsize 2}\\right) \\left(\\large{\\rho}_{\\normalsize 3}\\,\\large{\\rho}_{\\normalsize 4}\\,\\large{\\rho}_{\\normalsize 5}\\right) \\cdots \\left(\\large{\\rho}_{\\normalsize 
3n-3}\\,\\large{\\rho}_{\\normalsize 3n-2}\\,\\large{\\rho}_{\\normalsize 3n-1}\\right) \\left(\\large{\\rho}_{\\normalsize 3n}\\,\\large{\\rho}_{\\normalsize 3n+1}\\,\\large{\\rho}_{\\normalsize 3n+2}\\right)\\right]\\,\\sqrt{2-\\large{\\rho}_{\\normalsize 3n+1}} \\\\[2mm] \n&amp;= \\left[\\left(\\large{\\rho}_{\\normalsize 0}\\,\\large{\\rho}_{\\normalsize 1}\\,\\large{\\rho}_{\\normalsize 2}\\right) \\left(\\large{\\rho}_{\\normalsize 3}\\,\\large{\\rho}_{\\normalsize 4}\\,\\large{\\rho}_{\\normalsize 5}\\right) \\cdots \\left(\\large{\\rho}_{\\normalsize 3n-3}\\,\\large{\\rho}_{\\normalsize 3n-2}\\,\\large{\\rho}_{\\normalsize 3n-1}\\right)\\right]\\,\\sqrt{2-\\large{\\rho}_{\\normalsize 3n-2}} \\\\[2mm] \n&amp;= \\left[\\left(\\large{\\rho}_{\\normalsize 0}\\,\\large{\\rho}_{\\normalsize 1}\\,\\large{\\rho}_{\\normalsize 2}\\right) \\left(\\large{\\rho}_{\\normalsize 3}\\,\\large{\\rho}_{\\normalsize 4}\\,\\large{\\rho}_{\\normalsize 5}\\right) \\cdots \\right]\\,\\sqrt{2-\\large{\\rho}_{\\normalsize 3n-5}} \\\\[2mm] \n&amp;= \\quad \\cdots \\quad \\cdots \\quad \\cdots \\\\[2mm] \n&amp;= \\left[\\left(\\large{\\rho}_{\\normalsize 0}\\,\\large{\\rho}_{\\normalsize 1}\\,\\large{\\rho}_{\\normalsize 2}\\right)\\right]\\,\\sqrt{2-\\large{\\rho}_{\\normalsize 1}} = \\large{\\rho}_{\\normalsize 0}\\normalsize\\,\\sqrt{4-\\rho_{\\small 0}^2} \\normalsize = \\sqrt{2}\\,\\sqrt{4-2} = \\color{red}{2 \\quad \\forall\\space n \\ge 1}\n\\end{align} \n$$\nHence:\n$$ \\lim_{n\\rightarrow\\infty}\\frac{1}{\\rho} = \\lim_{n\\rightarrow\\infty}\\frac{\\sigma}{2} = \\sin\\left(\\frac{\\pi}{7}\\right) \\quad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad $$</p>\n\n<hr>\n\n<p>In order use this technique in general, for a period of length $\\color{red}{m}$, we have: \n$$ \n\\begin{align} \n\\color{red}{{\\large\\rho}_{-1}=0},\\,\\, {\\large\\rho}_{0} &amp;= \\sqrt{2\\pm{\\rho}_{-1}},\\,\\, {\\large\\rho}_{1}=\\sqrt{2\\pm{\\rho}_{0}},\\,\\, 
{\\large\\rho}_{2}=\\sqrt{2\\pm{\\rho}_{1}},\\,\\, \\cdots ,\\,\\, {\\large\\rho}_{m-1}=\\sqrt{2\\pm{\\rho}_{m-2}},\\,\\, \\cdots \\\\[2mm] \n{\\Large\\rho} &amp;= \\prod_{k=0}^{n}\\left( {\\large\\rho}_{mk}\\, {\\large\\rho}_{mk+1}\\, \\cdots\\, {\\large\\rho}_{mk+m-2}\\, {\\large\\rho}_{mk+m-1} \\right) \\qquad \\\\[2mm] \n&amp;= \\prod_{k=0}^{n}\\left( \\sqrt{2\\pm{\\large\\rho}_{mk-1}}\\, \\sqrt{2\\pm{\\large\\rho}_{mk}}\\, \\cdots\\, \\sqrt{2\\pm{\\large\\rho}_{mk+m-3}}\\, \\sqrt{2\\color{red}{\\pm}{\\large\\rho}_{mk+m-2}} \\right) \\\\[2mm] \n{\\Large\\sigma} &amp;= \\sqrt{2\\color{red}{\\mp}{\\large\\rho}_{mn+m-2}}\n\\end{align} \n$$ \nHence, $\\,\\left({\\large\\sigma}\\right)\\,$ <strong>must start with a different sign from the actual period!</strong> <br/> \nNevertheless, if $\\,{\\large\\sigma}\\,$ is not comply with this condition, we still have options to write the product form of related/equivalent function by applying simple manipulations to the original $\\,{\\large\\sigma}\\,$. <br/> <br/> \nConsider the following example $\\left\\{\\,\\text{period}\\,\\left(+,+,-,-,-\\right)\\,\\right\\}$: \n$$ \n\\begin{align} \n\\cos\\left(\\frac{\\pi}{11}\\right) &amp;= \\frac12\\,\\sqrt{2\\color{red}{+}\\sqrt{2+\\sqrt{2-\\sqrt{2-\\sqrt{2-\\sqrt{2\\color{blue}{+}\\sqrt{\\cdots}}}}}}} \\\\[2mm] \n\\implies \\qquad &amp;\\space \\underbrace{\\color{red}{+}{\\large/}+,-,-,-,} \\underbrace{\\color{blue}{+}{\\large/}+,-,-,-,} \\underbrace{\\color{blue}{+}{\\large/}+,-,-,-,} \\color{blue}{+}{\\large/}\\cdots \n\\end{align} \n$$ \nAnd as it appears, after isolating the first sign, the pattern is not obeying the condition. 
<br/> \nWe have two options to convert it into applicable format:\n$$ \n\\color{blue}{\\underline{\\left({\\bf1}\\right) - \\,\\text{Convert to}\\, \\sin(\\pi/11)}} \\space\\quad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad \\\\[2mm] \n\\begin{align} \n\\sin\\left(\\frac{\\pi}{11}\\right) &amp;= \\sqrt{1-\\cos^2\\left(\\frac{\\pi}{11}\\right)} = \\frac12\\,\\sqrt{4-\\left[2\\cos\\left(\\frac{\\pi}{11}\\right)\\right]^2} \\Rightarrow \\\\[2mm] \n\\sin\\left(\\frac{\\pi}{11}\\right) &amp;= \\frac12\\,\\sqrt{2\\color{red}{-}\\sqrt{2+\\sqrt{2-\\sqrt{2-\\sqrt{2-\\sqrt{2\\color{blue}{+}\\sqrt{\\cdots}}}}}}} \\\\[2mm] \n\\implies \\qquad &amp;\\space \\underbrace{\\color{red}{-}{\\large/}+,-,-,-,} \\underbrace{\\color{blue}{+}{\\large/}+,-,-,-,} \\underbrace{\\color{blue}{+}{\\large/}+,-,-,-,} \\color{blue}{+}{\\large/}\\cdots \\\\[2mm] \n\\end{align} \\\\[2mm] \n{\\Large\\rho} = \\prod_{k=0}^{n}\\left( \\sqrt{2-{\\large\\rho}_{5k-1}}\\, \\sqrt{2-{\\large\\rho}_{5k\\space\\space\\space\\space}}\\, \\sqrt{2-{\\large\\rho}_{5k+1}}\\, \\sqrt{2+{\\large\\rho}_{5k+2}}\\, \\sqrt{2\\color{red}{+}{\\large\\rho}_{5k+3}} \\right) \\\\[2mm] \n{\\Large\\sigma} = \\sqrt{2\\color{red}{-}{\\large\\rho}_{5n+3}}\\,,\\quad {\\Large\\rho}\\cdot{\\Large\\sigma}=2\\,,\\quad \\sin\\left(\\frac{\\pi}{11}\\right)=\\frac{{\\large\\sigma}}{2}=\\frac{1}{{\\large\\rho}}\\,,\\quad \\cos\\left(\\frac{\\pi}{11}\\right)=\\sqrt{1-\\left(1/{\\large\\rho}\\right)^2} \\\\[4mm] \n$$ \n$$ \n\\color{blue}{\\underline{\\left({\\bf2}\\right) - \\,\\text{Convert to}\\, \\cos(\\pi/22)}} \\space\\quad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad \\\\[2mm] \n\\begin{align} \n\\cos\\left(\\frac{\\pi}{22}\\right) &amp;= \\sqrt{\\frac12+\\frac12\\,\\cos\\left(\\frac{\\pi}{11}\\right)} = \\frac12\\,\\sqrt{2+2\\,\\cos\\left(\\frac{\\pi}{11}\\right)} \\Rightarrow \\\\[2mm] \n\\cos\\left(\\frac{\\pi}{22}\\right) &amp;= 
\\frac12\\,\\sqrt{2\\color{red}{+}\\sqrt{2+\\sqrt{2+\\sqrt{2-\\sqrt{2-\\sqrt{2\\color{blue}{-}\\sqrt{\\cdots}}}}}}} \\\\[2mm] \n\\implies \\qquad &amp;\\space \\underbrace{\\color{red}{+}{\\large/}+,+,-,-,} \\underbrace{\\color{blue}{-}{\\large/}+,+,-,-,} \\underbrace{\\color{blue}{-}{\\large/}+,+,-,-,} \\color{blue}{-}{\\large/}\\cdots \\\\[2mm] \n\\end{align} \\\\[2mm] \n{\\Large\\rho} = \\prod_{k=0}^{n}\\left( \\sqrt{2-{\\large\\rho}_{5k-1}}\\, \\sqrt{2-{\\large\\rho}_{5k\\space\\space\\space\\space}}\\, \\sqrt{2+{\\large\\rho}_{5k+1}}\\, \\sqrt{2+{\\large\\rho}_{5k+2}}\\, \\sqrt{2\\color{red}{-}{\\large\\rho}_{5k+3}} \\right) \\\\[2mm] \n{\\Large\\sigma} = \\sqrt{2\\color{red}{+}{\\large\\rho}_{5n+3}}\\,,\\quad {\\Large\\rho}\\cdot{\\Large\\sigma}=2\\,,\\quad \\cos\\left(\\frac{\\pi}{22}\\right)=\\frac{{\\large\\sigma}}{2}=\\frac{1}{{\\large\\rho}}\\,,\\quad \\cos\\left(\\frac{\\pi}{11}\\right)=\\sqrt{\\frac12+\\frac12\\,\\left(1/{\\large\\rho}\\right)} \\\\[2mm] \n$$ \nAnd here are Mathematica codes for these two options: <br/> </p>\n\n<pre>\n<code>\nn = 4;\nr = 1; e = 0; For[k = 0, n >= k, k++, {a = Sqrt[2 - e]; b = Sqrt[2 - a]; c = Sqrt[2 - b]; d = Sqrt[2 + c]; e = Sqrt[2 + d]; r = r (a b c d e)}];\ns = 1; x = Sqrt[2 + Sqrt[2 - Sqrt[2 - Sqrt[2]]]]; For[k = 0, (n - 1) >= k, k++, {x = Sqrt[2 + Sqrt[2 - Sqrt[2 - Sqrt[2 - Sqrt[2 + x]]]]]}]; s = Sqrt[2 - x];\nPrint[N[r s], \" \", N[1/r], \" \", N[s/2], \" \", N[Sin[Pi/11]]]\n\nn = 4;\nr = 1; e = 0; For[k = 0, n >= k, k++, {a = Sqrt[2 - e]; b = Sqrt[2 - a]; c = Sqrt[2 + b]; d = Sqrt[2 + c]; e = Sqrt[2 - d]; r = r (a b c d e)}];\ns = 1; x = Sqrt[2 + Sqrt[2 + Sqrt[2 - Sqrt[2]]]]; For[k = 0, (n - 1) >= k, k++, {x = Sqrt[2 + Sqrt[2 + Sqrt[2 - Sqrt[2 - Sqrt[2 - x]]]]]}]; s = Sqrt[2 + x];\nPrint[N[r s], \" \", N[1/r], \" \", N[s/2], \" \", N[Cos[Pi/22]]]\n</code>\n</pre>\n" }, { "answer_id": 4005275, "score": 1, "text": "<p>With the help of Half angle cosine formula it is possible to express many sine and cosine angles 
as finite or infinite nested square roots of 2 <a href=\"https://math.stackexchange.com/questions/3841604/solving-cyclic-infinite-nested-square-roots-of-2-as-cosine-functions\">here</a>.</p>\n<p>In this perspective <span class=\"math-container\">$2\\sin\\frac{\\pi}{7}$</span> can be solved as follows</p>\n<p><span class=\"math-container\">$2\\sin\\frac{\\pi}{7}$</span> = <span class=\"math-container\">$\\sqrt{2-2\\cos\\frac{2\\pi}{7}}$</span> = <span class=\"math-container\">$\\sqrt{2-\\sqrt{2+2\\cos\\frac{4\\pi}{7}}}$</span> = <span class=\"math-container\">$\\sqrt{2-\\sqrt{2-2\\cos\\frac{3\\pi}{7}}}$</span> = <span class=\"math-container\">$\\sqrt{2-\\sqrt{2-\\sqrt{2+2\\cos\\frac{6\\pi}{7}}}}$</span> = <span class=\"math-container\">$\\sqrt{2-\\sqrt{2-\\sqrt{2-2\\cos\\frac{\\pi}{7}}}}$</span> = <span class=\"math-container\">$\\sqrt{2-\\sqrt{2-\\sqrt{2-\\sqrt{2+2\\cos\\frac{2\\pi}{7}}}}}$</span></p>\n<p>One can observe repetition of <span class=\"math-container\">$2\\cos\\frac{2\\pi}{7}$</span> subsequently and <span class=\"math-container\">$\\therefore$</span> <span class=\"math-container\">$2\\sin\\frac{\\pi}{7}$</span> can be represented simply as</p>\n<blockquote>\n<p><strong><span class=\"math-container\">$$\\sqrt{2-\\sqrt{2-\\sqrt{2-\\sqrt{2+\\sqrt{2-\\sqrt{2-\\sqrt{2+...2\\cos\\frac{2\\pi}{7}}}}}}}}$$</span> with repeating pattern <span class=\"math-container\">$--+$</span> infinitely and first <span class=\"math-container\">$-$</span> as single nonrepeating sign</strong></p>\n</blockquote>\n<p>But getting infinite product has to be worked out</p>\n" }, { "answer_id": 2089595, "score": 2, "text": "<p>Let:\n$$ \n\\begin{align} \n\\LARGE{\\rho}\\normalsize &amp;= \\left(\\small \\sqrt{2} \\,\\sqrt{2-\\sqrt{2}} \\,\\sqrt{2+\\sqrt{2-\\sqrt{2}}} \\normalsize\\right) \\times \\\\[2mm] \n&amp; \\space \\left(\\small \\sqrt{2\\color{red}{-}\\sqrt{2+\\sqrt{2-\\sqrt{2}}}} \\,\\sqrt{2\\color{red}{-}\\sqrt{2-\\sqrt{2+\\sqrt{2-\\sqrt{2}}}}} \\,\\color{red}{\\sqrt{2+ 
\\color{black}{\\sqrt{2-\\sqrt{2-\\sqrt{2+\\sqrt{2-\\sqrt{2}}}}}}}} \\normalsize\\right) \\times \\cdots \\\\[4mm] \n\\LARGE{\\sigma}\\normalsize &amp;= \\color{red}{\\sqrt{2- \\color{black}{\\sqrt{2-\\sqrt{2-\\sqrt{2+\\sqrt{2-\\sqrt{2-\\sqrt{2+\\sqrt{\\cdots}}}}}}}}}} \\space\\space\\small\\{\\text{Same as last item of}\\,\\rho\\,\\text{with minus sign}\\} \n\\end{align} \n$$\nFundamentally, the two expressions are the same. <br/>\nAnd to transpose between them, we use the property:\n$$ \n\\boxed{ \\quad \\color{red}{\\Large{\\rho \\cdot \\sigma = 2}} \\quad } \\\\[4mm] \n\\frac{1}{\\rho} = \\frac{1}{\\rho} \\color{red}{\\cdot \\frac{\\sigma}{\\sigma}} = \\frac{\\sigma}{\\rho\\cdot\\sigma} = \\frac{\\sigma}{2} = \\quad\\sin\\left(\\frac{\\pi}{7}\\right)\\quad = \\frac{\\sigma}{2} = \\frac{\\sigma}{2} \\color{red}{\\cdot \\frac{\\rho}{\\rho}} = \\frac{\\rho\\cdot\\sigma/2}{\\rho} = \\frac{1}{\\rho}\n$$\n <br/>\nIn order to trace the simplifying process, let us consider the definition of both $\\left(\\large{\\rho}\\right)$ and $\\left(\\large{\\sigma}\\right)$ upto $n$, <br/>\nWhere $\\left(\\large{\\rho}\\right)$ definition is consist of <strong>three square-roots at a time</strong>, as illustrated above:\n$$ \n\\begin{align} \n\\LARGE{\\rho}\\normalsize &amp;= \\left(\\large{\\rho}_{\\normalsize 0}\\,\\large{\\rho}_{\\normalsize 1}\\,\\large{\\rho}_{\\normalsize 2}\\right) \\left(\\large{\\rho}_{\\normalsize 3}\\,\\large{\\rho}_{\\normalsize 4}\\,\\large{\\rho}_{\\normalsize 5}\\right) \\cdots \\left(\\large{\\rho}_{\\normalsize 3n-3}\\,\\large{\\rho}_{\\normalsize 3n-2}\\,\\large{\\rho}_{\\normalsize 3n-1}\\right) \\left(\\large{\\rho}_{\\normalsize 3n}\\,\\large{\\rho}_{\\normalsize 3n+1}\\,\\large{\\rho}_{\\normalsize 3n+2}\\right) \\\\[2mm] \n&amp;= \\left(\\large{\\rho}_{\\normalsize 0}\\,\\large{\\rho}_{\\normalsize 1}\\,\\large{\\rho}_{\\normalsize 2}\\right) \\left(\\large{\\rho}_{\\normalsize 3}\\,\\large{\\rho}_{\\normalsize 
4}\\,\\large{\\rho}_{\\normalsize 5}\\right) \\cdots \\left(\\large{\\rho}_{\\normalsize 3n-3}\\,\\large{\\rho}_{\\normalsize 3n-2}\\,\\large{\\rho}_{\\normalsize 3n-1}\\right) \\left(\\sqrt{2\\color{red}{-}\\rho_{\\small 3n-1}}\\,\\sqrt{2\\color{red}{-}\\rho_{\\small 3n}}\\,\\sqrt{2\\color{red}{+}\\rho_{\\small 3n+1}}\\right)\n\\end{align} \n$$\nAnd $\\left(\\large{\\sigma}\\right)$ equals <strong>same as last item of</strong> $\\large{\\rho}\\,$ <strong>with minus sign</strong>, as mentioned above:\n$$ \\LARGE{\\sigma}\\normalsize = \\sqrt{2\\color{red}{-}\\rho_{\\small 3n+1}} \\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad $$\nSo:\n$$ \n\\begin{align} \n&amp; \\left(\\large{\\rho}_{\\normalsize 3n}\\,\\large{\\rho}_{\\normalsize 3n+1}\\,\\large{\\rho}_{\\normalsize 3n+2}\\right) \\,\\sqrt{2-\\rho_{\\small 3n+1}} \\\\[2mm] \n&amp; \\qquad = \\sqrt{2-\\rho_{\\small 3n-1}}\\,\\sqrt{2-\\rho_{\\small 3n}}\\,\\sqrt{2+\\rho_{\\small 3n+1}}\\,\\sqrt{2-\\rho_{\\small 3n+1}} = \\sqrt{2-\\rho_{\\small 3n-1}}\\,\\sqrt{2-\\rho_{\\small 3n}}\\,\\sqrt{4-\\rho_{\\small 3n+1}^2} \\\\[2mm] \n&amp; \\qquad = \\sqrt{2-\\rho_{\\small 3n-1}}\\,\\sqrt{2-\\rho_{\\small 3n}}\\,\\sqrt{4-\\left(2-\\rho_{\\small 3n}\\right)} = \\sqrt{2-\\rho_{\\small 3n-1}}\\,\\sqrt{4-\\rho_{\\small 3n}^2} \\\\[2mm] \n&amp; \\qquad = \\sqrt{2-\\rho_{\\small 3n-1}}\\,\\sqrt{4-\\left(2-\\rho_{\\small 3n-1}\\right)} = \\sqrt{4-\\rho_{\\small 3n-1}^2} = \\sqrt{4-\\left(2+\\rho_{\\small 3n-2}\\right)} = \\color{red}{\\sqrt{2-\\rho_{\\small 3n-2}}}\n\\end{align} \n$$\nThus:\n$$ \n\\begin{align} \n\\color{red}{\\Large{\\rho \\cdot \\sigma}}\\normalsize &amp;= \\left[\\left(\\large{\\rho}_{\\normalsize 0}\\,\\large{\\rho}_{\\normalsize 1}\\,\\large{\\rho}_{\\normalsize 2}\\right) \\left(\\large{\\rho}_{\\normalsize 3}\\,\\large{\\rho}_{\\normalsize 4}\\,\\large{\\rho}_{\\normalsize 5}\\right) \\cdots \\left(\\large{\\rho}_{\\normalsize 
3n-3}\\,\\large{\\rho}_{\\normalsize 3n-2}\\,\\large{\\rho}_{\\normalsize 3n-1}\\right) \\left(\\large{\\rho}_{\\normalsize 3n}\\,\\large{\\rho}_{\\normalsize 3n+1}\\,\\large{\\rho}_{\\normalsize 3n+2}\\right)\\right]\\,\\sqrt{2-\\large{\\rho}_{\\normalsize 3n+1}} \\\\[2mm] \n&amp;= \\left[\\left(\\large{\\rho}_{\\normalsize 0}\\,\\large{\\rho}_{\\normalsize 1}\\,\\large{\\rho}_{\\normalsize 2}\\right) \\left(\\large{\\rho}_{\\normalsize 3}\\,\\large{\\rho}_{\\normalsize 4}\\,\\large{\\rho}_{\\normalsize 5}\\right) \\cdots \\left(\\large{\\rho}_{\\normalsize 3n-3}\\,\\large{\\rho}_{\\normalsize 3n-2}\\,\\large{\\rho}_{\\normalsize 3n-1}\\right)\\right]\\,\\sqrt{2-\\large{\\rho}_{\\normalsize 3n-2}} \\\\[2mm] \n&amp;= \\left[\\left(\\large{\\rho}_{\\normalsize 0}\\,\\large{\\rho}_{\\normalsize 1}\\,\\large{\\rho}_{\\normalsize 2}\\right) \\left(\\large{\\rho}_{\\normalsize 3}\\,\\large{\\rho}_{\\normalsize 4}\\,\\large{\\rho}_{\\normalsize 5}\\right) \\cdots \\right]\\,\\sqrt{2-\\large{\\rho}_{\\normalsize 3n-5}} \\\\[2mm] \n&amp;= \\quad \\cdots \\quad \\cdots \\quad \\cdots \\\\[2mm] \n&amp;= \\left[\\left(\\large{\\rho}_{\\normalsize 0}\\,\\large{\\rho}_{\\normalsize 1}\\,\\large{\\rho}_{\\normalsize 2}\\right)\\right]\\,\\sqrt{2-\\large{\\rho}_{\\normalsize 1}} = \\large{\\rho}_{\\normalsize 0}\\normalsize\\,\\sqrt{4-\\rho_{\\small 0}^2} \\normalsize = \\sqrt{2}\\,\\sqrt{4-2} = \\color{red}{2 \\quad \\forall\\space n \\ge 1}\n\\end{align} \n$$\nHence:\n$$ \\lim_{n\\rightarrow\\infty}\\frac{1}{\\rho} = \\lim_{n\\rightarrow\\infty}\\frac{\\sigma}{2} = \\sin\\left(\\frac{\\pi}{7}\\right) \\quad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad $$</p>\n\n<hr>\n\n<p>In order use this technique in general, for a period of length $\\color{red}{m}$, we have: \n$$ \n\\begin{align} \n\\color{red}{{\\large\\rho}_{-1}=0},\\,\\, {\\large\\rho}_{0} &amp;= \\sqrt{2\\pm{\\rho}_{-1}},\\,\\, {\\large\\rho}_{1}=\\sqrt{2\\pm{\\rho}_{0}},\\,\\, 
{\\large\\rho}_{2}=\\sqrt{2\\pm{\\rho}_{1}},\\,\\, \\cdots ,\\,\\, {\\large\\rho}_{m-1}=\\sqrt{2\\pm{\\rho}_{m-2}},\\,\\, \\cdots \\\\[2mm] \n{\\Large\\rho} &amp;= \\prod_{k=0}^{n}\\left( {\\large\\rho}_{mk}\\, {\\large\\rho}_{mk+1}\\, \\cdots\\, {\\large\\rho}_{mk+m-2}\\, {\\large\\rho}_{mk+m-1} \\right) \\qquad \\\\[2mm] \n&amp;= \\prod_{k=0}^{n}\\left( \\sqrt{2\\pm{\\large\\rho}_{mk-1}}\\, \\sqrt{2\\pm{\\large\\rho}_{mk}}\\, \\cdots\\, \\sqrt{2\\pm{\\large\\rho}_{mk+m-3}}\\, \\sqrt{2\\color{red}{\\pm}{\\large\\rho}_{mk+m-2}} \\right) \\\\[2mm] \n{\\Large\\sigma} &amp;= \\sqrt{2\\color{red}{\\mp}{\\large\\rho}_{mn+m-2}}\n\\end{align} \n$$ \nHence, $\\,\\left({\\large\\sigma}\\right)\\,$ <strong>must start with a different sign from the actual period!</strong> <br/> \nNevertheless, if $\\,{\\large\\sigma}\\,$ is not comply with this condition, we still have options to write the product form of related/equivalent function by applying simple manipulations to the original $\\,{\\large\\sigma}\\,$. <br/> <br/> \nConsider the following example $\\left\\{\\,\\text{period}\\,\\left(+,+,-,-,-\\right)\\,\\right\\}$: \n$$ \n\\begin{align} \n\\cos\\left(\\frac{\\pi}{11}\\right) &amp;= \\frac12\\,\\sqrt{2\\color{red}{+}\\sqrt{2+\\sqrt{2-\\sqrt{2-\\sqrt{2-\\sqrt{2\\color{blue}{+}\\sqrt{\\cdots}}}}}}} \\\\[2mm] \n\\implies \\qquad &amp;\\space \\underbrace{\\color{red}{+}{\\large/}+,-,-,-,} \\underbrace{\\color{blue}{+}{\\large/}+,-,-,-,} \\underbrace{\\color{blue}{+}{\\large/}+,-,-,-,} \\color{blue}{+}{\\large/}\\cdots \n\\end{align} \n$$ \nAnd as it appears, after isolating the first sign, the pattern is not obeying the condition. 
<br/> \nWe have two options to convert it into applicable format:\n$$ \n\\color{blue}{\\underline{\\left({\\bf1}\\right) - \\,\\text{Convert to}\\, \\sin(\\pi/11)}} \\space\\quad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad \\\\[2mm] \n\\begin{align} \n\\sin\\left(\\frac{\\pi}{11}\\right) &amp;= \\sqrt{1-\\cos^2\\left(\\frac{\\pi}{11}\\right)} = \\frac12\\,\\sqrt{4-\\left[2\\cos\\left(\\frac{\\pi}{11}\\right)\\right]^2} \\Rightarrow \\\\[2mm] \n\\sin\\left(\\frac{\\pi}{11}\\right) &amp;= \\frac12\\,\\sqrt{2\\color{red}{-}\\sqrt{2+\\sqrt{2-\\sqrt{2-\\sqrt{2-\\sqrt{2\\color{blue}{+}\\sqrt{\\cdots}}}}}}} \\\\[2mm] \n\\implies \\qquad &amp;\\space \\underbrace{\\color{red}{-}{\\large/}+,-,-,-,} \\underbrace{\\color{blue}{+}{\\large/}+,-,-,-,} \\underbrace{\\color{blue}{+}{\\large/}+,-,-,-,} \\color{blue}{+}{\\large/}\\cdots \\\\[2mm] \n\\end{align} \\\\[2mm] \n{\\Large\\rho} = \\prod_{k=0}^{n}\\left( \\sqrt{2-{\\large\\rho}_{5k-1}}\\, \\sqrt{2-{\\large\\rho}_{5k\\space\\space\\space\\space}}\\, \\sqrt{2-{\\large\\rho}_{5k+1}}\\, \\sqrt{2+{\\large\\rho}_{5k+2}}\\, \\sqrt{2\\color{red}{+}{\\large\\rho}_{5k+3}} \\right) \\\\[2mm] \n{\\Large\\sigma} = \\sqrt{2\\color{red}{-}{\\large\\rho}_{5n+3}}\\,,\\quad {\\Large\\rho}\\cdot{\\Large\\sigma}=2\\,,\\quad \\sin\\left(\\frac{\\pi}{11}\\right)=\\frac{{\\large\\sigma}}{2}=\\frac{1}{{\\large\\rho}}\\,,\\quad \\cos\\left(\\frac{\\pi}{11}\\right)=\\sqrt{1-\\left(1/{\\large\\rho}\\right)^2} \\\\[4mm] \n$$ \n$$ \n\\color{blue}{\\underline{\\left({\\bf2}\\right) - \\,\\text{Convert to}\\, \\cos(\\pi/22)}} \\space\\quad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad \\\\[2mm] \n\\begin{align} \n\\cos\\left(\\frac{\\pi}{22}\\right) &amp;= \\sqrt{\\frac12+\\frac12\\,\\cos\\left(\\frac{\\pi}{11}\\right)} = \\frac12\\,\\sqrt{2+2\\,\\cos\\left(\\frac{\\pi}{11}\\right)} \\Rightarrow \\\\[2mm] \n\\cos\\left(\\frac{\\pi}{22}\\right) &amp;= 
\\frac12\\,\\sqrt{2\\color{red}{+}\\sqrt{2+\\sqrt{2+\\sqrt{2-\\sqrt{2-\\sqrt{2\\color{blue}{-}\\sqrt{\\cdots}}}}}}} \\\\[2mm] \n\\implies \\qquad &amp;\\space \\underbrace{\\color{red}{+}{\\large/}+,+,-,-,} \\underbrace{\\color{blue}{-}{\\large/}+,+,-,-,} \\underbrace{\\color{blue}{-}{\\large/}+,+,-,-,} \\color{blue}{-}{\\large/}\\cdots \\\\[2mm] \n\\end{align} \\\\[2mm] \n{\\Large\\rho} = \\prod_{k=0}^{n}\\left( \\sqrt{2-{\\large\\rho}_{5k-1}}\\, \\sqrt{2-{\\large\\rho}_{5k\\space\\space\\space\\space}}\\, \\sqrt{2+{\\large\\rho}_{5k+1}}\\, \\sqrt{2+{\\large\\rho}_{5k+2}}\\, \\sqrt{2\\color{red}{-}{\\large\\rho}_{5k+3}} \\right) \\\\[2mm] \n{\\Large\\sigma} = \\sqrt{2\\color{red}{+}{\\large\\rho}_{5n+3}}\\,,\\quad {\\Large\\rho}\\cdot{\\Large\\sigma}=2\\,,\\quad \\cos\\left(\\frac{\\pi}{22}\\right)=\\frac{{\\large\\sigma}}{2}=\\frac{1}{{\\large\\rho}}\\,,\\quad \\cos\\left(\\frac{\\pi}{11}\\right)=\\sqrt{\\frac12+\\frac12\\,\\left(1/{\\large\\rho}\\right)} \\\\[2mm] \n$$ \nAnd here are the Mathematica codes for these two options: <br/> </p>\n\n<pre>\n<code>\nn = 4;\nr = 1; e = 0; For[k = 0, n >= k, k++, {a = Sqrt[2 - e]; b = Sqrt[2 - a]; c = Sqrt[2 - b]; d = Sqrt[2 + c]; e = Sqrt[2 + d]; r = r (a b c d e)}];\ns = 1; x = Sqrt[2 + Sqrt[2 - Sqrt[2 - Sqrt[2]]]]; For[k = 0, (n - 1) >= k, k++, {x = Sqrt[2 + Sqrt[2 - Sqrt[2 - Sqrt[2 - Sqrt[2 + x]]]]]}]; s = Sqrt[2 - x];\nPrint[N[r s], \" \", N[1/r], \" \", N[s/2], \" \", N[Sin[Pi/11]]]\n\nn = 4;\nr = 1; e = 0; For[k = 0, n >= k, k++, {a = Sqrt[2 - e]; b = Sqrt[2 - a]; c = Sqrt[2 + b]; d = Sqrt[2 + c]; e = Sqrt[2 - d]; r = r (a b c d e)}];\ns = 1; x = Sqrt[2 + Sqrt[2 + Sqrt[2 - Sqrt[2]]]]; For[k = 0, (n - 1) >= k, k++, {x = Sqrt[2 + Sqrt[2 + Sqrt[2 - Sqrt[2 - Sqrt[2 - x]]]]]}]; s = Sqrt[2 + x];\nPrint[N[r s], \" \", N[1/r], \" \", N[s/2], \" \", N[Cos[Pi/22]]]\n</code>\n</pre>\n" }, { "answer_id": 4005275, "score": 1, "text": "<p>With the help of the half-angle cosine formula it is possible to express many sine and cosine angles 
as finite or infinite nested square roots of 2 <a href=\"https://math.stackexchange.com/questions/3841604/solving-cyclic-infinite-nested-square-roots-of-2-as-cosine-functions\">here</a>.</p>\n<p>In this perspective <span class=\"math-container\">$2\\sin\\frac{\\pi}{7}$</span> can be solved as follows</p>\n<p><span class=\"math-container\">$2\\sin\\frac{\\pi}{7}$</span> = <span class=\"math-container\">$\\sqrt{2-2\\cos\\frac{2\\pi}{7}}$</span> = <span class=\"math-container\">$\\sqrt{2-\\sqrt{2+2\\cos\\frac{4\\pi}{7}}}$</span> = <span class=\"math-container\">$\\sqrt{2-\\sqrt{2-2\\cos\\frac{3\\pi}{7}}}$</span> = <span class=\"math-container\">$\\sqrt{2-\\sqrt{2-\\sqrt{2+2\\cos\\frac{6\\pi}{7}}}}$</span> = <span class=\"math-container\">$\\sqrt{2-\\sqrt{2-\\sqrt{2-2\\cos\\frac{\\pi}{7}}}}$</span> = <span class=\"math-container\">$\\sqrt{2-\\sqrt{2-\\sqrt{2-\\sqrt{2+2\\cos\\frac{2\\pi}{7}}}}}$</span></p>\n<p>One can observe the repetition of <span class=\"math-container\">$2\\cos\\frac{2\\pi}{7}$</span> subsequently, and <span class=\"math-container\">$\\therefore$</span> <span class=\"math-container\">$2\\sin\\frac{\\pi}{7}$</span> can be represented simply as</p>\n<blockquote>\n<p><strong><span class=\"math-container\">$$\\sqrt{2-\\sqrt{2-\\sqrt{2-\\sqrt{2+\\sqrt{2-\\sqrt{2-\\sqrt{2+...2\\cos\\frac{2\\pi}{7}}}}}}}}$$</span> with the pattern <span class=\"math-container\">$--+$</span> repeating infinitely and the first <span class=\"math-container\">$-$</span> as a single non-repeating sign</strong></p>\n</blockquote>\n<p>But getting the infinite product has to be worked out</p>\n" }, { "answer_id": 2089595, "score": 2, "text": "<p>Let:\n$$ \n\\begin{align} \n\\LARGE{\\rho}\\normalsize &amp;= \\left(\\small \\sqrt{2} \\,\\sqrt{2-\\sqrt{2}} \\,\\sqrt{2+\\sqrt{2-\\sqrt{2}}} \\normalsize\\right) \\times \\\\[2mm] \n&amp; \\space \\left(\\small \\sqrt{2\\color{red}{-}\\sqrt{2+\\sqrt{2-\\sqrt{2}}}} \\,\\sqrt{2\\color{red}{-}\\sqrt{2-\\sqrt{2+\\sqrt{2-\\sqrt{2}}}}} \\,\\color{red}{\\sqrt{2+ 
\\color{black}{\\sqrt{2-\\sqrt{2-\\sqrt{2+\\sqrt{2-\\sqrt{2}}}}}}}} \\normalsize\\right) \\times \\cdots \\\\[4mm] \n\\LARGE{\\sigma}\\normalsize &amp;= \\color{red}{\\sqrt{2- \\color{black}{\\sqrt{2-\\sqrt{2-\\sqrt{2+\\sqrt{2-\\sqrt{2-\\sqrt{2+\\sqrt{\\cdots}}}}}}}}}} \\space\\space\\small\\{\\text{Same as last item of}\\,\\rho\\,\\text{with minus sign}\\} \n\\end{align} \n$$\nFundamentally, the two expressions are the same. <br/>\nAnd to transpose between them, we use the property:\n$$ \n\\boxed{ \\quad \\color{red}{\\Large{\\rho \\cdot \\sigma = 2}} \\quad } \\\\[4mm] \n\\frac{1}{\\rho} = \\frac{1}{\\rho} \\color{red}{\\cdot \\frac{\\sigma}{\\sigma}} = \\frac{\\sigma}{\\rho\\cdot\\sigma} = \\frac{\\sigma}{2} = \\quad\\sin\\left(\\frac{\\pi}{7}\\right)\\quad = \\frac{\\sigma}{2} = \\frac{\\sigma}{2} \\color{red}{\\cdot \\frac{\\rho}{\\rho}} = \\frac{\\rho\\cdot\\sigma/2}{\\rho} = \\frac{1}{\\rho}\n$$\n <br/>\nIn order to trace the simplifying process, let us consider the definition of both $\\left(\\large{\\rho}\\right)$ and $\\left(\\large{\\sigma}\\right)$ upto $n$, <br/>\nWhere $\\left(\\large{\\rho}\\right)$ definition is consist of <strong>three square-roots at a time</strong>, as illustrated above:\n$$ \n\\begin{align} \n\\LARGE{\\rho}\\normalsize &amp;= \\left(\\large{\\rho}_{\\normalsize 0}\\,\\large{\\rho}_{\\normalsize 1}\\,\\large{\\rho}_{\\normalsize 2}\\right) \\left(\\large{\\rho}_{\\normalsize 3}\\,\\large{\\rho}_{\\normalsize 4}\\,\\large{\\rho}_{\\normalsize 5}\\right) \\cdots \\left(\\large{\\rho}_{\\normalsize 3n-3}\\,\\large{\\rho}_{\\normalsize 3n-2}\\,\\large{\\rho}_{\\normalsize 3n-1}\\right) \\left(\\large{\\rho}_{\\normalsize 3n}\\,\\large{\\rho}_{\\normalsize 3n+1}\\,\\large{\\rho}_{\\normalsize 3n+2}\\right) \\\\[2mm] \n&amp;= \\left(\\large{\\rho}_{\\normalsize 0}\\,\\large{\\rho}_{\\normalsize 1}\\,\\large{\\rho}_{\\normalsize 2}\\right) \\left(\\large{\\rho}_{\\normalsize 3}\\,\\large{\\rho}_{\\normalsize 
4}\\,\\large{\\rho}_{\\normalsize 5}\\right) \\cdots \\left(\\large{\\rho}_{\\normalsize 3n-3}\\,\\large{\\rho}_{\\normalsize 3n-2}\\,\\large{\\rho}_{\\normalsize 3n-1}\\right) \\left(\\sqrt{2\\color{red}{-}\\rho_{\\small 3n-1}}\\,\\sqrt{2\\color{red}{-}\\rho_{\\small 3n}}\\,\\sqrt{2\\color{red}{+}\\rho_{\\small 3n+1}}\\right)\n\\end{align} \n$$\nAnd $\\left(\\large{\\sigma}\\right)$ is <strong>the same as the last item of</strong> $\\large{\\rho}\\,$ <strong>with a minus sign</strong>, as mentioned above:\n$$ \\LARGE{\\sigma}\\normalsize = \\sqrt{2\\color{red}{-}\\rho_{\\small 3n+1}} \\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad $$\nSo:\n$$ \n\\begin{align} \n&amp; \\left(\\large{\\rho}_{\\normalsize 3n}\\,\\large{\\rho}_{\\normalsize 3n+1}\\,\\large{\\rho}_{\\normalsize 3n+2}\\right) \\,\\sqrt{2-\\rho_{\\small 3n+1}} \\\\[2mm] \n&amp; \\qquad = \\sqrt{2-\\rho_{\\small 3n-1}}\\,\\sqrt{2-\\rho_{\\small 3n}}\\,\\sqrt{2+\\rho_{\\small 3n+1}}\\,\\sqrt{2-\\rho_{\\small 3n+1}} = \\sqrt{2-\\rho_{\\small 3n-1}}\\,\\sqrt{2-\\rho_{\\small 3n}}\\,\\sqrt{4-\\rho_{\\small 3n+1}^2} \\\\[2mm] \n&amp; \\qquad = \\sqrt{2-\\rho_{\\small 3n-1}}\\,\\sqrt{2-\\rho_{\\small 3n}}\\,\\sqrt{4-\\left(2-\\rho_{\\small 3n}\\right)} = \\sqrt{2-\\rho_{\\small 3n-1}}\\,\\sqrt{4-\\rho_{\\small 3n}^2} \\\\[2mm] \n&amp; \\qquad = \\sqrt{2-\\rho_{\\small 3n-1}}\\,\\sqrt{4-\\left(2-\\rho_{\\small 3n-1}\\right)} = \\sqrt{4-\\rho_{\\small 3n-1}^2} = \\sqrt{4-\\left(2+\\rho_{\\small 3n-2}\\right)} = \\color{red}{\\sqrt{2-\\rho_{\\small 3n-2}}}\n\\end{align} \n$$\nThus:\n$$ \n\\begin{align} \n\\color{red}{\\Large{\\rho \\cdot \\sigma}}\\normalsize &amp;= \\left[\\left(\\large{\\rho}_{\\normalsize 0}\\,\\large{\\rho}_{\\normalsize 1}\\,\\large{\\rho}_{\\normalsize 2}\\right) \\left(\\large{\\rho}_{\\normalsize 3}\\,\\large{\\rho}_{\\normalsize 4}\\,\\large{\\rho}_{\\normalsize 5}\\right) \\cdots \\left(\\large{\\rho}_{\\normalsize 
3n-3}\\,\\large{\\rho}_{\\normalsize 3n-2}\\,\\large{\\rho}_{\\normalsize 3n-1}\\right) \\left(\\large{\\rho}_{\\normalsize 3n}\\,\\large{\\rho}_{\\normalsize 3n+1}\\,\\large{\\rho}_{\\normalsize 3n+2}\\right)\\right]\\,\\sqrt{2-\\large{\\rho}_{\\normalsize 3n+1}} \\\\[2mm] \n&amp;= \\left[\\left(\\large{\\rho}_{\\normalsize 0}\\,\\large{\\rho}_{\\normalsize 1}\\,\\large{\\rho}_{\\normalsize 2}\\right) \\left(\\large{\\rho}_{\\normalsize 3}\\,\\large{\\rho}_{\\normalsize 4}\\,\\large{\\rho}_{\\normalsize 5}\\right) \\cdots \\left(\\large{\\rho}_{\\normalsize 3n-3}\\,\\large{\\rho}_{\\normalsize 3n-2}\\,\\large{\\rho}_{\\normalsize 3n-1}\\right)\\right]\\,\\sqrt{2-\\large{\\rho}_{\\normalsize 3n-2}} \\\\[2mm] \n&amp;= \\left[\\left(\\large{\\rho}_{\\normalsize 0}\\,\\large{\\rho}_{\\normalsize 1}\\,\\large{\\rho}_{\\normalsize 2}\\right) \\left(\\large{\\rho}_{\\normalsize 3}\\,\\large{\\rho}_{\\normalsize 4}\\,\\large{\\rho}_{\\normalsize 5}\\right) \\cdots \\right]\\,\\sqrt{2-\\large{\\rho}_{\\normalsize 3n-5}} \\\\[2mm] \n&amp;= \\quad \\cdots \\quad \\cdots \\quad \\cdots \\\\[2mm] \n&amp;= \\left[\\left(\\large{\\rho}_{\\normalsize 0}\\,\\large{\\rho}_{\\normalsize 1}\\,\\large{\\rho}_{\\normalsize 2}\\right)\\right]\\,\\sqrt{2-\\large{\\rho}_{\\normalsize 1}} = \\large{\\rho}_{\\normalsize 0}\\normalsize\\,\\sqrt{4-\\rho_{\\small 0}^2} \\normalsize = \\sqrt{2}\\,\\sqrt{4-2} = \\color{red}{2 \\quad \\forall\\space n \\ge 1}\n\\end{align} \n$$\nHence:\n$$ \\lim_{n\\rightarrow\\infty}\\frac{1}{\\rho} = \\lim_{n\\rightarrow\\infty}\\frac{\\sigma}{2} = \\sin\\left(\\frac{\\pi}{7}\\right) \\quad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad $$</p>\n\n<hr>\n\n<p>In order to use this technique in general, for a period of length $\\color{red}{m}$, we have: \n$$ \n\\begin{align} \n\\color{red}{{\\large\\rho}_{-1}=0},\\,\\, {\\large\\rho}_{0} &amp;= \\sqrt{2\\pm{\\rho}_{-1}},\\,\\, {\\large\\rho}_{1}=\\sqrt{2\\pm{\\rho}_{0}},\\,\\, 
{\\large\\rho}_{2}=\\sqrt{2\\pm{\\rho}_{1}},\\,\\, \\cdots ,\\,\\, {\\large\\rho}_{m-1}=\\sqrt{2\\pm{\\rho}_{m-2}},\\,\\, \\cdots \\\\[2mm] \n{\\Large\\rho} &amp;= \\prod_{k=0}^{n}\\left( {\\large\\rho}_{mk}\\, {\\large\\rho}_{mk+1}\\, \\cdots\\, {\\large\\rho}_{mk+m-2}\\, {\\large\\rho}_{mk+m-1} \\right) \\qquad \\\\[2mm] \n&amp;= \\prod_{k=0}^{n}\\left( \\sqrt{2\\pm{\\large\\rho}_{mk-1}}\\, \\sqrt{2\\pm{\\large\\rho}_{mk}}\\, \\cdots\\, \\sqrt{2\\pm{\\large\\rho}_{mk+m-3}}\\, \\sqrt{2\\color{red}{\\pm}{\\large\\rho}_{mk+m-2}} \\right) \\\\[2mm] \n{\\Large\\sigma} &amp;= \\sqrt{2\\color{red}{\\mp}{\\large\\rho}_{mn+m-2}}\n\\end{align} \n$$ \nHence, $\\,\\left({\\large\\sigma}\\right)\\,$ <strong>must start with a different sign from the actual period!</strong> <br/> \nNevertheless, if $\\,{\\large\\sigma}\\,$ does not comply with this condition, we still have options to write the product form of a related/equivalent function by applying simple manipulations to the original $\\,{\\large\\sigma}\\,$. <br/> <br/> \nConsider the following example $\\left\\{\\,\\text{period}\\,\\left(+,+,-,-,-\\right)\\,\\right\\}$: \n$$ \n\\begin{align} \n\\cos\\left(\\frac{\\pi}{11}\\right) &amp;= \\frac12\\,\\sqrt{2\\color{red}{+}\\sqrt{2+\\sqrt{2-\\sqrt{2-\\sqrt{2-\\sqrt{2\\color{blue}{+}\\sqrt{\\cdots}}}}}}} \\\\[2mm] \n\\implies \\qquad &amp;\\space \\underbrace{\\color{red}{+}{\\large/}+,-,-,-,} \\underbrace{\\color{blue}{+}{\\large/}+,-,-,-,} \\underbrace{\\color{blue}{+}{\\large/}+,-,-,-,} \\color{blue}{+}{\\large/}\\cdots \n\\end{align} \n$$ \nAnd as it appears, after isolating the first sign, the pattern does not obey the condition. 
<br/> \nWe have two options to convert it into applicable format:\n$$ \n\\color{blue}{\\underline{\\left({\\bf1}\\right) - \\,\\text{Convert to}\\, \\sin(\\pi/11)}} \\space\\quad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad \\\\[2mm] \n\\begin{align} \n\\sin\\left(\\frac{\\pi}{11}\\right) &amp;= \\sqrt{1-\\cos^2\\left(\\frac{\\pi}{11}\\right)} = \\frac12\\,\\sqrt{4-\\left[2\\cos\\left(\\frac{\\pi}{11}\\right)\\right]^2} \\Rightarrow \\\\[2mm] \n\\sin\\left(\\frac{\\pi}{11}\\right) &amp;= \\frac12\\,\\sqrt{2\\color{red}{-}\\sqrt{2+\\sqrt{2-\\sqrt{2-\\sqrt{2-\\sqrt{2\\color{blue}{+}\\sqrt{\\cdots}}}}}}} \\\\[2mm] \n\\implies \\qquad &amp;\\space \\underbrace{\\color{red}{-}{\\large/}+,-,-,-,} \\underbrace{\\color{blue}{+}{\\large/}+,-,-,-,} \\underbrace{\\color{blue}{+}{\\large/}+,-,-,-,} \\color{blue}{+}{\\large/}\\cdots \\\\[2mm] \n\\end{align} \\\\[2mm] \n{\\Large\\rho} = \\prod_{k=0}^{n}\\left( \\sqrt{2-{\\large\\rho}_{5k-1}}\\, \\sqrt{2-{\\large\\rho}_{5k\\space\\space\\space\\space}}\\, \\sqrt{2-{\\large\\rho}_{5k+1}}\\, \\sqrt{2+{\\large\\rho}_{5k+2}}\\, \\sqrt{2\\color{red}{+}{\\large\\rho}_{5k+3}} \\right) \\\\[2mm] \n{\\Large\\sigma} = \\sqrt{2\\color{red}{-}{\\large\\rho}_{5n+3}}\\,,\\quad {\\Large\\rho}\\cdot{\\Large\\sigma}=2\\,,\\quad \\sin\\left(\\frac{\\pi}{11}\\right)=\\frac{{\\large\\sigma}}{2}=\\frac{1}{{\\large\\rho}}\\,,\\quad \\cos\\left(\\frac{\\pi}{11}\\right)=\\sqrt{1-\\left(1/{\\large\\rho}\\right)^2} \\\\[4mm] \n$$ \n$$ \n\\color{blue}{\\underline{\\left({\\bf2}\\right) - \\,\\text{Convert to}\\, \\cos(\\pi/22)}} \\space\\quad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad \\\\[2mm] \n\\begin{align} \n\\cos\\left(\\frac{\\pi}{22}\\right) &amp;= \\sqrt{\\frac12+\\frac12\\,\\cos\\left(\\frac{\\pi}{11}\\right)} = \\frac12\\,\\sqrt{2+2\\,\\cos\\left(\\frac{\\pi}{11}\\right)} \\Rightarrow \\\\[2mm] \n\\cos\\left(\\frac{\\pi}{22}\\right) &amp;= 
\\frac12\\,\\sqrt{2\\color{red}{+}\\sqrt{2+\\sqrt{2+\\sqrt{2-\\sqrt{2-\\sqrt{2\\color{blue}{-}\\sqrt{\\cdots}}}}}}} \\\\[2mm] \n\\implies \\qquad &amp;\\space \\underbrace{\\color{red}{+}{\\large/}+,+,-,-,} \\underbrace{\\color{blue}{-}{\\large/}+,+,-,-,} \\underbrace{\\color{blue}{-}{\\large/}+,+,-,-,} \\color{blue}{-}{\\large/}\\cdots \\\\[2mm] \n\\end{align} \\\\[2mm] \n{\\Large\\rho} = \\prod_{k=0}^{n}\\left( \\sqrt{2-{\\large\\rho}_{5k-1}}\\, \\sqrt{2-{\\large\\rho}_{5k\\space\\space\\space\\space}}\\, \\sqrt{2+{\\large\\rho}_{5k+1}}\\, \\sqrt{2+{\\large\\rho}_{5k+2}}\\, \\sqrt{2\\color{red}{-}{\\large\\rho}_{5k+3}} \\right) \\\\[2mm] \n{\\Large\\sigma} = \\sqrt{2\\color{red}{+}{\\large\\rho}_{5n+3}}\\,,\\quad {\\Large\\rho}\\cdot{\\Large\\sigma}=2\\,,\\quad \\cos\\left(\\frac{\\pi}{22}\\right)=\\frac{{\\large\\sigma}}{2}=\\frac{1}{{\\large\\rho}}\\,,\\quad \\cos\\left(\\frac{\\pi}{11}\\right)=\\sqrt{\\frac12+\\frac12\\,\\left(1/{\\large\\rho}\\right)} \\\\[2mm] \n$$ \nAnd here are Mathematica codes for these two options: <br/> </p>\n\n<pre>\n<code>\nn = 4;\nr = 1; e = 0; For[k = 0, n >= k, k++, {a = Sqrt[2 - e]; b = Sqrt[2 - a]; c = Sqrt[2 - b]; d = Sqrt[2 + c]; e = Sqrt[2 + d]; r = r (a b c d e)}];\ns = 1; x = Sqrt[2 + Sqrt[2 - Sqrt[2 - Sqrt[2]]]]; For[k = 0, (n - 1) >= k, k++, {x = Sqrt[2 + Sqrt[2 - Sqrt[2 - Sqrt[2 - Sqrt[2 + x]]]]]}]; s = Sqrt[2 - x];\nPrint[N[r s], \" \", N[1/r], \" \", N[s/2], \" \", N[Sin[Pi/11]]]\n\nn = 4;\nr = 1; e = 0; For[k = 0, n >= k, k++, {a = Sqrt[2 - e]; b = Sqrt[2 - a]; c = Sqrt[2 + b]; d = Sqrt[2 + c]; e = Sqrt[2 - d]; r = r (a b c d e)}];\ns = 1; x = Sqrt[2 + Sqrt[2 + Sqrt[2 - Sqrt[2]]]]; For[k = 0, (n - 1) >= k, k++, {x = Sqrt[2 + Sqrt[2 + Sqrt[2 - Sqrt[2 - Sqrt[2 - x]]]]]}]; s = Sqrt[2 + x];\nPrint[N[r s], \" \", N[1/r], \" \", N[s/2], \" \", N[Cos[Pi/22]]]\n</code>\n</pre>\n" }, { "answer_id": 4005275, "score": 1, "text": "<p>With the help of Half angle cosine formula it is possible to express many sine and cosine angles 
as finite or infinite nested square roots of 2 <a href=\"https://math.stackexchange.com/questions/3841604/solving-cyclic-infinite-nested-square-roots-of-2-as-cosine-functions\">here</a>.</p>\n<p>In this perspective <span class=\"math-container\">$2\\sin\\frac{\\pi}{7}$</span> can be solved as follows</p>\n<p><span class=\"math-container\">$2\\sin\\frac{\\pi}{7}$</span> = <span class=\"math-container\">$\\sqrt{2-2\\cos\\frac{2\\pi}{7}}$</span> = <span class=\"math-container\">$\\sqrt{2-\\sqrt{2+2\\cos\\frac{4\\pi}{7}}}$</span> = <span class=\"math-container\">$\\sqrt{2-\\sqrt{2-2\\cos\\frac{3\\pi}{7}}}$</span> = <span class=\"math-container\">$\\sqrt{2-\\sqrt{2-\\sqrt{2+2\\cos\\frac{6\\pi}{7}}}}$</span> = <span class=\"math-container\">$\\sqrt{2-\\sqrt{2-\\sqrt{2-2\\cos\\frac{\\pi}{7}}}}$</span> = <span class=\"math-container\">$\\sqrt{2-\\sqrt{2-\\sqrt{2-\\sqrt{2+2\\cos\\frac{2\\pi}{7}}}}}$</span></p>\n<p>One can observe repetition of <span class=\"math-container\">$2\\cos\\frac{2\\pi}{7}$</span> subsequently and <span class=\"math-container\">$\\therefore$</span> <span class=\"math-container\">$2\\sin\\frac{\\pi}{7}$</span> can be represented simply as</p>\n<blockquote>\n<p><strong><span class=\"math-container\">$$\\sqrt{2-\\sqrt{2-\\sqrt{2-\\sqrt{2+\\sqrt{2-\\sqrt{2-\\sqrt{2+...2\\cos\\frac{2\\pi}{7}}}}}}}}$$</span> with repeating pattern <span class=\"math-container\">$--+$</span> infinitely and first <span class=\"math-container\">$-$</span> as single nonrepeating sign</strong></p>\n</blockquote>\n<p>But getting infinite product has to be worked out</p>\n" }, { "answer_id": 2089595, "score": 2, "text": "<p>Let:\n$$ \n\\begin{align} \n\\LARGE{\\rho}\\normalsize &amp;= \\left(\\small \\sqrt{2} \\,\\sqrt{2-\\sqrt{2}} \\,\\sqrt{2+\\sqrt{2-\\sqrt{2}}} \\normalsize\\right) \\times \\\\[2mm] \n&amp; \\space \\left(\\small \\sqrt{2\\color{red}{-}\\sqrt{2+\\sqrt{2-\\sqrt{2}}}} \\,\\sqrt{2\\color{red}{-}\\sqrt{2-\\sqrt{2+\\sqrt{2-\\sqrt{2}}}}} \\,\\color{red}{\\sqrt{2+ 
\\color{black}{\\sqrt{2-\\sqrt{2-\\sqrt{2+\\sqrt{2-\\sqrt{2}}}}}}}} \\normalsize\\right) \\times \\cdots \\\\[4mm] \n\\LARGE{\\sigma}\\normalsize &amp;= \\color{red}{\\sqrt{2- \\color{black}{\\sqrt{2-\\sqrt{2-\\sqrt{2+\\sqrt{2-\\sqrt{2-\\sqrt{2+\\sqrt{\\cdots}}}}}}}}}} \\space\\space\\small\\{\\text{Same as last item of}\\,\\rho\\,\\text{with minus sign}\\} \n\\end{align} \n$$\nFundamentally, the two expressions are the same. <br/>\nAnd to transpose between them, we use the property:\n$$ \n\\boxed{ \\quad \\color{red}{\\Large{\\rho \\cdot \\sigma = 2}} \\quad } \\\\[4mm] \n\\frac{1}{\\rho} = \\frac{1}{\\rho} \\color{red}{\\cdot \\frac{\\sigma}{\\sigma}} = \\frac{\\sigma}{\\rho\\cdot\\sigma} = \\frac{\\sigma}{2} = \\quad\\sin\\left(\\frac{\\pi}{7}\\right)\\quad = \\frac{\\sigma}{2} = \\frac{\\sigma}{2} \\color{red}{\\cdot \\frac{\\rho}{\\rho}} = \\frac{\\rho\\cdot\\sigma/2}{\\rho} = \\frac{1}{\\rho}\n$$\n <br/>\nIn order to trace the simplifying process, let us consider the definition of both $\\left(\\large{\\rho}\\right)$ and $\\left(\\large{\\sigma}\\right)$ upto $n$, <br/>\nWhere $\\left(\\large{\\rho}\\right)$ definition is consist of <strong>three square-roots at a time</strong>, as illustrated above:\n$$ \n\\begin{align} \n\\LARGE{\\rho}\\normalsize &amp;= \\left(\\large{\\rho}_{\\normalsize 0}\\,\\large{\\rho}_{\\normalsize 1}\\,\\large{\\rho}_{\\normalsize 2}\\right) \\left(\\large{\\rho}_{\\normalsize 3}\\,\\large{\\rho}_{\\normalsize 4}\\,\\large{\\rho}_{\\normalsize 5}\\right) \\cdots \\left(\\large{\\rho}_{\\normalsize 3n-3}\\,\\large{\\rho}_{\\normalsize 3n-2}\\,\\large{\\rho}_{\\normalsize 3n-1}\\right) \\left(\\large{\\rho}_{\\normalsize 3n}\\,\\large{\\rho}_{\\normalsize 3n+1}\\,\\large{\\rho}_{\\normalsize 3n+2}\\right) \\\\[2mm] \n&amp;= \\left(\\large{\\rho}_{\\normalsize 0}\\,\\large{\\rho}_{\\normalsize 1}\\,\\large{\\rho}_{\\normalsize 2}\\right) \\left(\\large{\\rho}_{\\normalsize 3}\\,\\large{\\rho}_{\\normalsize 
4}\\,\\large{\\rho}_{\\normalsize 5}\\right) \\cdots \\left(\\large{\\rho}_{\\normalsize 3n-3}\\,\\large{\\rho}_{\\normalsize 3n-2}\\,\\large{\\rho}_{\\normalsize 3n-1}\\right) \\left(\\sqrt{2\\color{red}{-}\\rho_{\\small 3n-1}}\\,\\sqrt{2\\color{red}{-}\\rho_{\\small 3n}}\\,\\sqrt{2\\color{red}{+}\\rho_{\\small 3n+1}}\\right)\n\\end{align} \n$$\nAnd $\\left(\\large{\\sigma}\\right)$ equals <strong>same as last item of</strong> $\\large{\\rho}\\,$ <strong>with minus sign</strong>, as mentioned above:\n$$ \\LARGE{\\sigma}\\normalsize = \\sqrt{2\\color{red}{-}\\rho_{\\small 3n+1}} \\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad $$\nSo:\n$$ \n\\begin{align} \n&amp; \\left(\\large{\\rho}_{\\normalsize 3n}\\,\\large{\\rho}_{\\normalsize 3n+1}\\,\\large{\\rho}_{\\normalsize 3n+2}\\right) \\,\\sqrt{2-\\rho_{\\small 3n+1}} \\\\[2mm] \n&amp; \\qquad = \\sqrt{2-\\rho_{\\small 3n-1}}\\,\\sqrt{2-\\rho_{\\small 3n}}\\,\\sqrt{2+\\rho_{\\small 3n+1}}\\,\\sqrt{2-\\rho_{\\small 3n+1}} = \\sqrt{2-\\rho_{\\small 3n-1}}\\,\\sqrt{2-\\rho_{\\small 3n}}\\,\\sqrt{4-\\rho_{\\small 3n+1}^2} \\\\[2mm] \n&amp; \\qquad = \\sqrt{2-\\rho_{\\small 3n-1}}\\,\\sqrt{2-\\rho_{\\small 3n}}\\,\\sqrt{4-\\left(2-\\rho_{\\small 3n}\\right)} = \\sqrt{2-\\rho_{\\small 3n-1}}\\,\\sqrt{4-\\rho_{\\small 3n}^2} \\\\[2mm] \n&amp; \\qquad = \\sqrt{2-\\rho_{\\small 3n-1}}\\,\\sqrt{4-\\left(2-\\rho_{\\small 3n-1}\\right)} = \\sqrt{4-\\rho_{\\small 3n-1}^2} = \\sqrt{4-\\left(2+\\rho_{\\small 3n-2}\\right)} = \\color{red}{\\sqrt{2-\\rho_{\\small 3n-2}}}\n\\end{align} \n$$\nThus:\n$$ \n\\begin{align} \n\\color{red}{\\Large{\\rho \\cdot \\sigma}}\\normalsize &amp;= \\left[\\left(\\large{\\rho}_{\\normalsize 0}\\,\\large{\\rho}_{\\normalsize 1}\\,\\large{\\rho}_{\\normalsize 2}\\right) \\left(\\large{\\rho}_{\\normalsize 3}\\,\\large{\\rho}_{\\normalsize 4}\\,\\large{\\rho}_{\\normalsize 5}\\right) \\cdots \\left(\\large{\\rho}_{\\normalsize 
3n-3}\\,\\large{\\rho}_{\\normalsize 3n-2}\\,\\large{\\rho}_{\\normalsize 3n-1}\\right) \\left(\\large{\\rho}_{\\normalsize 3n}\\,\\large{\\rho}_{\\normalsize 3n+1}\\,\\large{\\rho}_{\\normalsize 3n+2}\\right)\\right]\\,\\sqrt{2-\\large{\\rho}_{\\normalsize 3n+1}} \\\\[2mm] \n&amp;= \\left[\\left(\\large{\\rho}_{\\normalsize 0}\\,\\large{\\rho}_{\\normalsize 1}\\,\\large{\\rho}_{\\normalsize 2}\\right) \\left(\\large{\\rho}_{\\normalsize 3}\\,\\large{\\rho}_{\\normalsize 4}\\,\\large{\\rho}_{\\normalsize 5}\\right) \\cdots \\left(\\large{\\rho}_{\\normalsize 3n-3}\\,\\large{\\rho}_{\\normalsize 3n-2}\\,\\large{\\rho}_{\\normalsize 3n-1}\\right)\\right]\\,\\sqrt{2-\\large{\\rho}_{\\normalsize 3n-2}} \\\\[2mm] \n&amp;= \\left[\\left(\\large{\\rho}_{\\normalsize 0}\\,\\large{\\rho}_{\\normalsize 1}\\,\\large{\\rho}_{\\normalsize 2}\\right) \\left(\\large{\\rho}_{\\normalsize 3}\\,\\large{\\rho}_{\\normalsize 4}\\,\\large{\\rho}_{\\normalsize 5}\\right) \\cdots \\right]\\,\\sqrt{2-\\large{\\rho}_{\\normalsize 3n-5}} \\\\[2mm] \n&amp;= \\quad \\cdots \\quad \\cdots \\quad \\cdots \\\\[2mm] \n&amp;= \\left[\\left(\\large{\\rho}_{\\normalsize 0}\\,\\large{\\rho}_{\\normalsize 1}\\,\\large{\\rho}_{\\normalsize 2}\\right)\\right]\\,\\sqrt{2-\\large{\\rho}_{\\normalsize 1}} = \\large{\\rho}_{\\normalsize 0}\\normalsize\\,\\sqrt{4-\\rho_{\\small 0}^2} \\normalsize = \\sqrt{2}\\,\\sqrt{4-2} = \\color{red}{2 \\quad \\forall\\space n \\ge 1}\n\\end{align} \n$$\nHence:\n$$ \\lim_{n\\rightarrow\\infty}\\frac{1}{\\rho} = \\lim_{n\\rightarrow\\infty}\\frac{\\sigma}{2} = \\sin\\left(\\frac{\\pi}{7}\\right) \\quad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad $$</p>\n\n<hr>\n\n<p>In order use this technique in general, for a period of length $\\color{red}{m}$, we have: \n$$ \n\\begin{align} \n\\color{red}{{\\large\\rho}_{-1}=0},\\,\\, {\\large\\rho}_{0} &amp;= \\sqrt{2\\pm{\\rho}_{-1}},\\,\\, {\\large\\rho}_{1}=\\sqrt{2\\pm{\\rho}_{0}},\\,\\, 
{\\large\\rho}_{2}=\\sqrt{2\\pm{\\rho}_{1}},\\,\\, \\cdots ,\\,\\, {\\large\\rho}_{m-1}=\\sqrt{2\\pm{\\rho}_{m-2}},\\,\\, \\cdots \\\\[2mm] \n{\\Large\\rho} &amp;= \\prod_{k=0}^{n}\\left( {\\large\\rho}_{mk}\\, {\\large\\rho}_{mk+1}\\, \\cdots\\, {\\large\\rho}_{mk+m-2}\\, {\\large\\rho}_{mk+m-1} \\right) \\qquad \\\\[2mm] \n&amp;= \\prod_{k=0}^{n}\\left( \\sqrt{2\\pm{\\large\\rho}_{mk-1}}\\, \\sqrt{2\\pm{\\large\\rho}_{mk}}\\, \\cdots\\, \\sqrt{2\\pm{\\large\\rho}_{mk+m-3}}\\, \\sqrt{2\\color{red}{\\pm}{\\large\\rho}_{mk+m-2}} \\right) \\\\[2mm] \n{\\Large\\sigma} &amp;= \\sqrt{2\\color{red}{\\mp}{\\large\\rho}_{mn+m-2}}\n\\end{align} \n$$ \nHence, $\\,\\left({\\large\\sigma}\\right)\\,$ <strong>must start with a different sign from the actual period!</strong> <br/> \nNevertheless, if $\\,{\\large\\sigma}\\,$ is not comply with this condition, we still have options to write the product form of related/equivalent function by applying simple manipulations to the original $\\,{\\large\\sigma}\\,$. <br/> <br/> \nConsider the following example $\\left\\{\\,\\text{period}\\,\\left(+,+,-,-,-\\right)\\,\\right\\}$: \n$$ \n\\begin{align} \n\\cos\\left(\\frac{\\pi}{11}\\right) &amp;= \\frac12\\,\\sqrt{2\\color{red}{+}\\sqrt{2+\\sqrt{2-\\sqrt{2-\\sqrt{2-\\sqrt{2\\color{blue}{+}\\sqrt{\\cdots}}}}}}} \\\\[2mm] \n\\implies \\qquad &amp;\\space \\underbrace{\\color{red}{+}{\\large/}+,-,-,-,} \\underbrace{\\color{blue}{+}{\\large/}+,-,-,-,} \\underbrace{\\color{blue}{+}{\\large/}+,-,-,-,} \\color{blue}{+}{\\large/}\\cdots \n\\end{align} \n$$ \nAnd as it appears, after isolating the first sign, the pattern is not obeying the condition. 
<br/> \nWe have two options to convert it into applicable format:\n$$ \n\\color{blue}{\\underline{\\left({\\bf1}\\right) - \\,\\text{Convert to}\\, \\sin(\\pi/11)}} \\space\\quad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad \\\\[2mm] \n\\begin{align} \n\\sin\\left(\\frac{\\pi}{11}\\right) &amp;= \\sqrt{1-\\cos^2\\left(\\frac{\\pi}{11}\\right)} = \\frac12\\,\\sqrt{4-\\left[2\\cos\\left(\\frac{\\pi}{11}\\right)\\right]^2} \\Rightarrow \\\\[2mm] \n\\sin\\left(\\frac{\\pi}{11}\\right) &amp;= \\frac12\\,\\sqrt{2\\color{red}{-}\\sqrt{2+\\sqrt{2-\\sqrt{2-\\sqrt{2-\\sqrt{2\\color{blue}{+}\\sqrt{\\cdots}}}}}}} \\\\[2mm] \n\\implies \\qquad &amp;\\space \\underbrace{\\color{red}{-}{\\large/}+,-,-,-,} \\underbrace{\\color{blue}{+}{\\large/}+,-,-,-,} \\underbrace{\\color{blue}{+}{\\large/}+,-,-,-,} \\color{blue}{+}{\\large/}\\cdots \\\\[2mm] \n\\end{align} \\\\[2mm] \n{\\Large\\rho} = \\prod_{k=0}^{n}\\left( \\sqrt{2-{\\large\\rho}_{5k-1}}\\, \\sqrt{2-{\\large\\rho}_{5k\\space\\space\\space\\space}}\\, \\sqrt{2-{\\large\\rho}_{5k+1}}\\, \\sqrt{2+{\\large\\rho}_{5k+2}}\\, \\sqrt{2\\color{red}{+}{\\large\\rho}_{5k+3}} \\right) \\\\[2mm] \n{\\Large\\sigma} = \\sqrt{2\\color{red}{-}{\\large\\rho}_{5n+3}}\\,,\\quad {\\Large\\rho}\\cdot{\\Large\\sigma}=2\\,,\\quad \\sin\\left(\\frac{\\pi}{11}\\right)=\\frac{{\\large\\sigma}}{2}=\\frac{1}{{\\large\\rho}}\\,,\\quad \\cos\\left(\\frac{\\pi}{11}\\right)=\\sqrt{1-\\left(1/{\\large\\rho}\\right)^2} \\\\[4mm] \n$$ \n$$ \n\\color{blue}{\\underline{\\left({\\bf2}\\right) - \\,\\text{Convert to}\\, \\cos(\\pi/22)}} \\space\\quad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad \\\\[2mm] \n\\begin{align} \n\\cos\\left(\\frac{\\pi}{22}\\right) &amp;= \\sqrt{\\frac12+\\frac12\\,\\cos\\left(\\frac{\\pi}{11}\\right)} = \\frac12\\,\\sqrt{2+2\\,\\cos\\left(\\frac{\\pi}{11}\\right)} \\Rightarrow \\\\[2mm] \n\\cos\\left(\\frac{\\pi}{22}\\right) &amp;= 
\\frac12\\,\\sqrt{2\\color{red}{+}\\sqrt{2+\\sqrt{2+\\sqrt{2-\\sqrt{2-\\sqrt{2\\color{blue}{-}\\sqrt{\\cdots}}}}}}} \\\\[2mm] \n\\implies \\qquad &amp;\\space \\underbrace{\\color{red}{+}{\\large/}+,+,-,-,} \\underbrace{\\color{blue}{-}{\\large/}+,+,-,-,} \\underbrace{\\color{blue}{-}{\\large/}+,+,-,-,} \\color{blue}{-}{\\large/}\\cdots \\\\[2mm] \n\\end{align} \\\\[2mm] \n{\\Large\\rho} = \\prod_{k=0}^{n}\\left( \\sqrt{2-{\\large\\rho}_{5k-1}}\\, \\sqrt{2-{\\large\\rho}_{5k\\space\\space\\space\\space}}\\, \\sqrt{2+{\\large\\rho}_{5k+1}}\\, \\sqrt{2+{\\large\\rho}_{5k+2}}\\, \\sqrt{2\\color{red}{-}{\\large\\rho}_{5k+3}} \\right) \\\\[2mm] \n{\\Large\\sigma} = \\sqrt{2\\color{red}{+}{\\large\\rho}_{5n+3}}\\,,\\quad {\\Large\\rho}\\cdot{\\Large\\sigma}=2\\,,\\quad \\cos\\left(\\frac{\\pi}{22}\\right)=\\frac{{\\large\\sigma}}{2}=\\frac{1}{{\\large\\rho}}\\,,\\quad \\cos\\left(\\frac{\\pi}{11}\\right)=\\sqrt{\\frac12+\\frac12\\,\\left(1/{\\large\\rho}\\right)} \\\\[2mm] \n$$ \nAnd here are Mathematica codes for these two options: <br/> </p>\n\n<pre>\n<code>\nn = 4;\nr = 1; e = 0; For[k = 0, n >= k, k++, {a = Sqrt[2 - e]; b = Sqrt[2 - a]; c = Sqrt[2 - b]; d = Sqrt[2 + c]; e = Sqrt[2 + d]; r = r (a b c d e)}];\ns = 1; x = Sqrt[2 + Sqrt[2 - Sqrt[2 - Sqrt[2]]]]; For[k = 0, (n - 1) >= k, k++, {x = Sqrt[2 + Sqrt[2 - Sqrt[2 - Sqrt[2 - Sqrt[2 + x]]]]]}]; s = Sqrt[2 - x];\nPrint[N[r s], \" \", N[1/r], \" \", N[s/2], \" \", N[Sin[Pi/11]]]\n\nn = 4;\nr = 1; e = 0; For[k = 0, n >= k, k++, {a = Sqrt[2 - e]; b = Sqrt[2 - a]; c = Sqrt[2 + b]; d = Sqrt[2 + c]; e = Sqrt[2 - d]; r = r (a b c d e)}];\ns = 1; x = Sqrt[2 + Sqrt[2 + Sqrt[2 - Sqrt[2]]]]; For[k = 0, (n - 1) >= k, k++, {x = Sqrt[2 + Sqrt[2 + Sqrt[2 - Sqrt[2 - Sqrt[2 - x]]]]]}]; s = Sqrt[2 + x];\nPrint[N[r s], \" \", N[1/r], \" \", N[s/2], \" \", N[Cos[Pi/22]]]\n</code>\n</pre>\n" }, { "answer_id": 4005275, "score": 1, "text": "<p>With the help of Half angle cosine formula it is possible to express many sine and cosine angles 
as finite or infinite nested square roots of 2 <a href=\"https://math.stackexchange.com/questions/3841604/solving-cyclic-infinite-nested-square-roots-of-2-as-cosine-functions\">here</a>.</p>\n<p>In this perspective <span class=\"math-container\">$2\\sin\\frac{\\pi}{7}$</span> can be solved as follows</p>\n<p><span class=\"math-container\">$2\\sin\\frac{\\pi}{7}$</span> = <span class=\"math-container\">$\\sqrt{2-2\\cos\\frac{2\\pi}{7}}$</span> = <span class=\"math-container\">$\\sqrt{2-\\sqrt{2+2\\cos\\frac{4\\pi}{7}}}$</span> = <span class=\"math-container\">$\\sqrt{2-\\sqrt{2-2\\cos\\frac{3\\pi}{7}}}$</span> = <span class=\"math-container\">$\\sqrt{2-\\sqrt{2-\\sqrt{2+2\\cos\\frac{6\\pi}{7}}}}$</span> = <span class=\"math-container\">$\\sqrt{2-\\sqrt{2-\\sqrt{2-2\\cos\\frac{\\pi}{7}}}}$</span> = <span class=\"math-container\">$\\sqrt{2-\\sqrt{2-\\sqrt{2-\\sqrt{2+2\\cos\\frac{2\\pi}{7}}}}}$</span></p>\n<p>One can observe repetition of <span class=\"math-container\">$2\\cos\\frac{2\\pi}{7}$</span> subsequently and <span class=\"math-container\">$\\therefore$</span> <span class=\"math-container\">$2\\sin\\frac{\\pi}{7}$</span> can be represented simply as</p>\n<blockquote>\n<p><strong><span class=\"math-container\">$$\\sqrt{2-\\sqrt{2-\\sqrt{2-\\sqrt{2+\\sqrt{2-\\sqrt{2-\\sqrt{2+...2\\cos\\frac{2\\pi}{7}}}}}}}}$$</span> with repeating pattern <span class=\"math-container\">$--+$</span> infinitely and first <span class=\"math-container\">$-$</span> as single nonrepeating sign</strong></p>\n</blockquote>\n<p>But getting infinite product has to be worked out</p>\n" }, { "answer_id": 2089595, "score": 2, "text": "<p>Let:\n$$ \n\\begin{align} \n\\LARGE{\\rho}\\normalsize &amp;= \\left(\\small \\sqrt{2} \\,\\sqrt{2-\\sqrt{2}} \\,\\sqrt{2+\\sqrt{2-\\sqrt{2}}} \\normalsize\\right) \\times \\\\[2mm] \n&amp; \\space \\left(\\small \\sqrt{2\\color{red}{-}\\sqrt{2+\\sqrt{2-\\sqrt{2}}}} \\,\\sqrt{2\\color{red}{-}\\sqrt{2-\\sqrt{2+\\sqrt{2-\\sqrt{2}}}}} \\,\\color{red}{\\sqrt{2+ 
\\color{black}{\\sqrt{2-\\sqrt{2-\\sqrt{2+\\sqrt{2-\\sqrt{2}}}}}}}} \\normalsize\\right) \\times \\cdots \\\\[4mm] \n\\LARGE{\\sigma}\\normalsize &amp;= \\color{red}{\\sqrt{2- \\color{black}{\\sqrt{2-\\sqrt{2-\\sqrt{2+\\sqrt{2-\\sqrt{2-\\sqrt{2+\\sqrt{\\cdots}}}}}}}}}} \\space\\space\\small\\{\\text{Same as last item of}\\,\\rho\\,\\text{with minus sign}\\} \n\\end{align} \n$$\nFundamentally, the two expressions are the same. <br/>\nAnd to transpose between them, we use the property:\n$$ \n\\boxed{ \\quad \\color{red}{\\Large{\\rho \\cdot \\sigma = 2}} \\quad } \\\\[4mm] \n\\frac{1}{\\rho} = \\frac{1}{\\rho} \\color{red}{\\cdot \\frac{\\sigma}{\\sigma}} = \\frac{\\sigma}{\\rho\\cdot\\sigma} = \\frac{\\sigma}{2} = \\quad\\sin\\left(\\frac{\\pi}{7}\\right)\\quad = \\frac{\\sigma}{2} = \\frac{\\sigma}{2} \\color{red}{\\cdot \\frac{\\rho}{\\rho}} = \\frac{\\rho\\cdot\\sigma/2}{\\rho} = \\frac{1}{\\rho}\n$$\n <br/>\nIn order to trace the simplifying process, let us consider the definition of both $\\left(\\large{\\rho}\\right)$ and $\\left(\\large{\\sigma}\\right)$ upto $n$, <br/>\nWhere $\\left(\\large{\\rho}\\right)$ definition is consist of <strong>three square-roots at a time</strong>, as illustrated above:\n$$ \n\\begin{align} \n\\LARGE{\\rho}\\normalsize &amp;= \\left(\\large{\\rho}_{\\normalsize 0}\\,\\large{\\rho}_{\\normalsize 1}\\,\\large{\\rho}_{\\normalsize 2}\\right) \\left(\\large{\\rho}_{\\normalsize 3}\\,\\large{\\rho}_{\\normalsize 4}\\,\\large{\\rho}_{\\normalsize 5}\\right) \\cdots \\left(\\large{\\rho}_{\\normalsize 3n-3}\\,\\large{\\rho}_{\\normalsize 3n-2}\\,\\large{\\rho}_{\\normalsize 3n-1}\\right) \\left(\\large{\\rho}_{\\normalsize 3n}\\,\\large{\\rho}_{\\normalsize 3n+1}\\,\\large{\\rho}_{\\normalsize 3n+2}\\right) \\\\[2mm] \n&amp;= \\left(\\large{\\rho}_{\\normalsize 0}\\,\\large{\\rho}_{\\normalsize 1}\\,\\large{\\rho}_{\\normalsize 2}\\right) \\left(\\large{\\rho}_{\\normalsize 3}\\,\\large{\\rho}_{\\normalsize 
4}\\,\\large{\\rho}_{\\normalsize 5}\\right) \\cdots \\left(\\large{\\rho}_{\\normalsize 3n-3}\\,\\large{\\rho}_{\\normalsize 3n-2}\\,\\large{\\rho}_{\\normalsize 3n-1}\\right) \\left(\\sqrt{2\\color{red}{-}\\rho_{\\small 3n-1}}\\,\\sqrt{2\\color{red}{-}\\rho_{\\small 3n}}\\,\\sqrt{2\\color{red}{+}\\rho_{\\small 3n+1}}\\right)\n\\end{align} \n$$\nAnd $\\left(\\large{\\sigma}\\right)$ equals <strong>same as last item of</strong> $\\large{\\rho}\\,$ <strong>with minus sign</strong>, as mentioned above:\n$$ \\LARGE{\\sigma}\\normalsize = \\sqrt{2\\color{red}{-}\\rho_{\\small 3n+1}} \\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad $$\nSo:\n$$ \n\\begin{align} \n&amp; \\left(\\large{\\rho}_{\\normalsize 3n}\\,\\large{\\rho}_{\\normalsize 3n+1}\\,\\large{\\rho}_{\\normalsize 3n+2}\\right) \\,\\sqrt{2-\\rho_{\\small 3n+1}} \\\\[2mm] \n&amp; \\qquad = \\sqrt{2-\\rho_{\\small 3n-1}}\\,\\sqrt{2-\\rho_{\\small 3n}}\\,\\sqrt{2+\\rho_{\\small 3n+1}}\\,\\sqrt{2-\\rho_{\\small 3n+1}} = \\sqrt{2-\\rho_{\\small 3n-1}}\\,\\sqrt{2-\\rho_{\\small 3n}}\\,\\sqrt{4-\\rho_{\\small 3n+1}^2} \\\\[2mm] \n&amp; \\qquad = \\sqrt{2-\\rho_{\\small 3n-1}}\\,\\sqrt{2-\\rho_{\\small 3n}}\\,\\sqrt{4-\\left(2-\\rho_{\\small 3n}\\right)} = \\sqrt{2-\\rho_{\\small 3n-1}}\\,\\sqrt{4-\\rho_{\\small 3n}^2} \\\\[2mm] \n&amp; \\qquad = \\sqrt{2-\\rho_{\\small 3n-1}}\\,\\sqrt{4-\\left(2-\\rho_{\\small 3n-1}\\right)} = \\sqrt{4-\\rho_{\\small 3n-1}^2} = \\sqrt{4-\\left(2+\\rho_{\\small 3n-2}\\right)} = \\color{red}{\\sqrt{2-\\rho_{\\small 3n-2}}}\n\\end{align} \n$$\nThus:\n$$ \n\\begin{align} \n\\color{red}{\\Large{\\rho \\cdot \\sigma}}\\normalsize &amp;= \\left[\\left(\\large{\\rho}_{\\normalsize 0}\\,\\large{\\rho}_{\\normalsize 1}\\,\\large{\\rho}_{\\normalsize 2}\\right) \\left(\\large{\\rho}_{\\normalsize 3}\\,\\large{\\rho}_{\\normalsize 4}\\,\\large{\\rho}_{\\normalsize 5}\\right) \\cdots \\left(\\large{\\rho}_{\\normalsize 
3n-3}\\,\\large{\\rho}_{\\normalsize 3n-2}\\,\\large{\\rho}_{\\normalsize 3n-1}\\right) \\left(\\large{\\rho}_{\\normalsize 3n}\\,\\large{\\rho}_{\\normalsize 3n+1}\\,\\large{\\rho}_{\\normalsize 3n+2}\\right)\\right]\\,\\sqrt{2-\\large{\\rho}_{\\normalsize 3n+1}} \\\\[2mm] \n&amp;= \\left[\\left(\\large{\\rho}_{\\normalsize 0}\\,\\large{\\rho}_{\\normalsize 1}\\,\\large{\\rho}_{\\normalsize 2}\\right) \\left(\\large{\\rho}_{\\normalsize 3}\\,\\large{\\rho}_{\\normalsize 4}\\,\\large{\\rho}_{\\normalsize 5}\\right) \\cdots \\left(\\large{\\rho}_{\\normalsize 3n-3}\\,\\large{\\rho}_{\\normalsize 3n-2}\\,\\large{\\rho}_{\\normalsize 3n-1}\\right)\\right]\\,\\sqrt{2-\\large{\\rho}_{\\normalsize 3n-2}} \\\\[2mm] \n&amp;= \\left[\\left(\\large{\\rho}_{\\normalsize 0}\\,\\large{\\rho}_{\\normalsize 1}\\,\\large{\\rho}_{\\normalsize 2}\\right) \\left(\\large{\\rho}_{\\normalsize 3}\\,\\large{\\rho}_{\\normalsize 4}\\,\\large{\\rho}_{\\normalsize 5}\\right) \\cdots \\right]\\,\\sqrt{2-\\large{\\rho}_{\\normalsize 3n-5}} \\\\[2mm] \n&amp;= \\quad \\cdots \\quad \\cdots \\quad \\cdots \\\\[2mm] \n&amp;= \\left[\\left(\\large{\\rho}_{\\normalsize 0}\\,\\large{\\rho}_{\\normalsize 1}\\,\\large{\\rho}_{\\normalsize 2}\\right)\\right]\\,\\sqrt{2-\\large{\\rho}_{\\normalsize 1}} = \\large{\\rho}_{\\normalsize 0}\\normalsize\\,\\sqrt{4-\\rho_{\\small 0}^2} \\normalsize = \\sqrt{2}\\,\\sqrt{4-2} = \\color{red}{2 \\quad \\forall\\space n \\ge 1}\n\\end{align} \n$$\nHence:\n$$ \\lim_{n\\rightarrow\\infty}\\frac{1}{\\rho} = \\lim_{n\\rightarrow\\infty}\\frac{\\sigma}{2} = \\sin\\left(\\frac{\\pi}{7}\\right) \\quad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad $$</p>\n\n<hr>\n\n<p>In order use this technique in general, for a period of length $\\color{red}{m}$, we have: \n$$ \n\\begin{align} \n\\color{red}{{\\large\\rho}_{-1}=0},\\,\\, {\\large\\rho}_{0} &amp;= \\sqrt{2\\pm{\\rho}_{-1}},\\,\\, {\\large\\rho}_{1}=\\sqrt{2\\pm{\\rho}_{0}},\\,\\, 
{\\large\\rho}_{2}=\\sqrt{2\\pm{\\rho}_{1}},\\,\\, \\cdots ,\\,\\, {\\large\\rho}_{m-1}=\\sqrt{2\\pm{\\rho}_{m-2}},\\,\\, \\cdots \\\\[2mm] \n{\\Large\\rho} &amp;= \\prod_{k=0}^{n}\\left( {\\large\\rho}_{mk}\\, {\\large\\rho}_{mk+1}\\, \\cdots\\, {\\large\\rho}_{mk+m-2}\\, {\\large\\rho}_{mk+m-1} \\right) \\qquad \\\\[2mm] \n&amp;= \\prod_{k=0}^{n}\\left( \\sqrt{2\\pm{\\large\\rho}_{mk-1}}\\, \\sqrt{2\\pm{\\large\\rho}_{mk}}\\, \\cdots\\, \\sqrt{2\\pm{\\large\\rho}_{mk+m-3}}\\, \\sqrt{2\\color{red}{\\pm}{\\large\\rho}_{mk+m-2}} \\right) \\\\[2mm] \n{\\Large\\sigma} &amp;= \\sqrt{2\\color{red}{\\mp}{\\large\\rho}_{mn+m-2}}\n\\end{align} \n$$ \nHence, $\\,\\left({\\large\\sigma}\\right)\\,$ <strong>must start with a different sign from the actual period!</strong> <br/> \nNevertheless, if $\\,{\\large\\sigma}\\,$ is not comply with this condition, we still have options to write the product form of related/equivalent function by applying simple manipulations to the original $\\,{\\large\\sigma}\\,$. <br/> <br/> \nConsider the following example $\\left\\{\\,\\text{period}\\,\\left(+,+,-,-,-\\right)\\,\\right\\}$: \n$$ \n\\begin{align} \n\\cos\\left(\\frac{\\pi}{11}\\right) &amp;= \\frac12\\,\\sqrt{2\\color{red}{+}\\sqrt{2+\\sqrt{2-\\sqrt{2-\\sqrt{2-\\sqrt{2\\color{blue}{+}\\sqrt{\\cdots}}}}}}} \\\\[2mm] \n\\implies \\qquad &amp;\\space \\underbrace{\\color{red}{+}{\\large/}+,-,-,-,} \\underbrace{\\color{blue}{+}{\\large/}+,-,-,-,} \\underbrace{\\color{blue}{+}{\\large/}+,-,-,-,} \\color{blue}{+}{\\large/}\\cdots \n\\end{align} \n$$ \nAnd as it appears, after isolating the first sign, the pattern is not obeying the condition. 
<br/> \nWe have two options to convert it into an applicable format:\n$$ \n\\color{blue}{\\underline{\\left({\\bf1}\\right) - \\,\\text{Convert to}\\, \\sin(\\pi/11)}} \\space\\quad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad \\\\[2mm] \n\\begin{align} \n\\sin\\left(\\frac{\\pi}{11}\\right) &amp;= \\sqrt{1-\\cos^2\\left(\\frac{\\pi}{11}\\right)} = \\frac12\\,\\sqrt{4-\\left[2\\cos\\left(\\frac{\\pi}{11}\\right)\\right]^2} \\Rightarrow \\\\[2mm] \n\\sin\\left(\\frac{\\pi}{11}\\right) &amp;= \\frac12\\,\\sqrt{2\\color{red}{-}\\sqrt{2+\\sqrt{2-\\sqrt{2-\\sqrt{2-\\sqrt{2\\color{blue}{+}\\sqrt{\\cdots}}}}}}} \\\\[2mm] \n\\implies \\qquad &amp;\\space \\underbrace{\\color{red}{-}{\\large/}+,-,-,-,} \\underbrace{\\color{blue}{+}{\\large/}+,-,-,-,} \\underbrace{\\color{blue}{+}{\\large/}+,-,-,-,} \\color{blue}{+}{\\large/}\\cdots \\\\[2mm] \n\\end{align} \\\\[2mm] \n{\\Large\\rho} = \\prod_{k=0}^{n}\\left( \\sqrt{2-{\\large\\rho}_{5k-1}}\\, \\sqrt{2-{\\large\\rho}_{5k\\space\\space\\space\\space}}\\, \\sqrt{2-{\\large\\rho}_{5k+1}}\\, \\sqrt{2+{\\large\\rho}_{5k+2}}\\, \\sqrt{2\\color{red}{+}{\\large\\rho}_{5k+3}} \\right) \\\\[2mm] \n{\\Large\\sigma} = \\sqrt{2\\color{red}{-}{\\large\\rho}_{5n+3}}\\,,\\quad {\\Large\\rho}\\cdot{\\Large\\sigma}=2\\,,\\quad \\sin\\left(\\frac{\\pi}{11}\\right)=\\frac{{\\large\\sigma}}{2}=\\frac{1}{{\\large\\rho}}\\,,\\quad \\cos\\left(\\frac{\\pi}{11}\\right)=\\sqrt{1-\\left(1/{\\large\\rho}\\right)^2} \\\\[4mm] \n$$ \n$$ \n\\color{blue}{\\underline{\\left({\\bf2}\\right) - \\,\\text{Convert to}\\, \\cos(\\pi/22)}} \\space\\quad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad \\\\[2mm] \n\\begin{align} \n\\cos\\left(\\frac{\\pi}{22}\\right) &amp;= \\sqrt{\\frac12+\\frac12\\,\\cos\\left(\\frac{\\pi}{11}\\right)} = \\frac12\\,\\sqrt{2+2\\,\\cos\\left(\\frac{\\pi}{11}\\right)} \\Rightarrow \\\\[2mm] \n\\cos\\left(\\frac{\\pi}{22}\\right) &amp;= 
\\frac12\\,\\sqrt{2\\color{red}{+}\\sqrt{2+\\sqrt{2+\\sqrt{2-\\sqrt{2-\\sqrt{2\\color{blue}{-}\\sqrt{\\cdots}}}}}}} \\\\[2mm] \n\\implies \\qquad &amp;\\space \\underbrace{\\color{red}{+}{\\large/}+,+,-,-,} \\underbrace{\\color{blue}{-}{\\large/}+,+,-,-,} \\underbrace{\\color{blue}{-}{\\large/}+,+,-,-,} \\color{blue}{-}{\\large/}\\cdots \\\\[2mm] \n\\end{align} \\\\[2mm] \n{\\Large\\rho} = \\prod_{k=0}^{n}\\left( \\sqrt{2-{\\large\\rho}_{5k-1}}\\, \\sqrt{2-{\\large\\rho}_{5k\\space\\space\\space\\space}}\\, \\sqrt{2+{\\large\\rho}_{5k+1}}\\, \\sqrt{2+{\\large\\rho}_{5k+2}}\\, \\sqrt{2\\color{red}{-}{\\large\\rho}_{5k+3}} \\right) \\\\[2mm] \n{\\Large\\sigma} = \\sqrt{2\\color{red}{+}{\\large\\rho}_{5n+3}}\\,,\\quad {\\Large\\rho}\\cdot{\\Large\\sigma}=2\\,,\\quad \\cos\\left(\\frac{\\pi}{22}\\right)=\\frac{{\\large\\sigma}}{2}=\\frac{1}{{\\large\\rho}}\\,,\\quad \\cos\\left(\\frac{\\pi}{11}\\right)=\\sqrt{\\frac12+\\frac12\\,\\left(1/{\\large\\rho}\\right)} \\\\[2mm] \n$$ \nAnd here is the Mathematica code for these two options: <br/> </p>\n\n<pre>\n<code>\nn = 4;\nr = 1; e = 0; For[k = 0, n >= k, k++, {a = Sqrt[2 - e]; b = Sqrt[2 - a]; c = Sqrt[2 - b]; d = Sqrt[2 + c]; e = Sqrt[2 + d]; r = r (a b c d e)}];\ns = 1; x = Sqrt[2 + Sqrt[2 - Sqrt[2 - Sqrt[2]]]]; For[k = 0, (n - 1) >= k, k++, {x = Sqrt[2 + Sqrt[2 - Sqrt[2 - Sqrt[2 - Sqrt[2 + x]]]]]}]; s = Sqrt[2 - x];\nPrint[N[r s], \" \", N[1/r], \" \", N[s/2], \" \", N[Sin[Pi/11]]]\n\nn = 4;\nr = 1; e = 0; For[k = 0, n >= k, k++, {a = Sqrt[2 - e]; b = Sqrt[2 - a]; c = Sqrt[2 + b]; d = Sqrt[2 + c]; e = Sqrt[2 - d]; r = r (a b c d e)}];\ns = 1; x = Sqrt[2 + Sqrt[2 + Sqrt[2 - Sqrt[2]]]]; For[k = 0, (n - 1) >= k, k++, {x = Sqrt[2 + Sqrt[2 + Sqrt[2 - Sqrt[2 - Sqrt[2 - x]]]]]}]; s = Sqrt[2 + x];\nPrint[N[r s], \" \", N[1/r], \" \", N[s/2], \" \", N[Cos[Pi/22]]]\n</code>\n</pre>\n" }, { "answer_id": 4005275, "score": 1, "text": "<p>With the help of the half-angle cosine formula, it is possible to express the sines and cosines of many angles 
as finite or infinite nested square roots of 2 <a href=\"https://math.stackexchange.com/questions/3841604/solving-cyclic-infinite-nested-square-roots-of-2-as-cosine-functions\">here</a>.</p>\n<p>In this perspective, <span class=\"math-container\">$2\\sin\\frac{\\pi}{7}$</span> can be solved as follows</p>\n<p><span class=\"math-container\">$2\\sin\\frac{\\pi}{7}$</span> = <span class=\"math-container\">$\\sqrt{2-2\\cos\\frac{2\\pi}{7}}$</span> = <span class=\"math-container\">$\\sqrt{2-\\sqrt{2+2\\cos\\frac{4\\pi}{7}}}$</span> = <span class=\"math-container\">$\\sqrt{2-\\sqrt{2-2\\cos\\frac{3\\pi}{7}}}$</span> = <span class=\"math-container\">$\\sqrt{2-\\sqrt{2-\\sqrt{2+2\\cos\\frac{6\\pi}{7}}}}$</span> = <span class=\"math-container\">$\\sqrt{2-\\sqrt{2-\\sqrt{2-2\\cos\\frac{\\pi}{7}}}}$</span> = <span class=\"math-container\">$\\sqrt{2-\\sqrt{2-\\sqrt{2-\\sqrt{2+2\\cos\\frac{2\\pi}{7}}}}}$</span></p>\n<p>One can observe the subsequent repetition of <span class=\"math-container\">$2\\cos\\frac{2\\pi}{7}$</span>, and <span class=\"math-container\">$\\therefore$</span> <span class=\"math-container\">$2\\sin\\frac{\\pi}{7}$</span> can be represented simply as</p>\n<blockquote>\n<p><strong><span class=\"math-container\">$$\\sqrt{2-\\sqrt{2-\\sqrt{2-\\sqrt{2+\\sqrt{2-\\sqrt{2-\\sqrt{2+...2\\cos\\frac{2\\pi}{7}}}}}}}}$$</span> with the pattern <span class=\"math-container\">$--+$</span> repeating infinitely and the first <span class=\"math-container\">$-$</span> as a single non-repeating sign</strong></p>\n</blockquote>\n<p>But the corresponding infinite product still has to be worked out</p>\n" }
3n-3}\\,\\large{\\rho}_{\\normalsize 3n-2}\\,\\large{\\rho}_{\\normalsize 3n-1}\\right) \\left(\\large{\\rho}_{\\normalsize 3n}\\,\\large{\\rho}_{\\normalsize 3n+1}\\,\\large{\\rho}_{\\normalsize 3n+2}\\right)\\right]\\,\\sqrt{2-\\large{\\rho}_{\\normalsize 3n+1}} \\\\[2mm] \n&amp;= \\left[\\left(\\large{\\rho}_{\\normalsize 0}\\,\\large{\\rho}_{\\normalsize 1}\\,\\large{\\rho}_{\\normalsize 2}\\right) \\left(\\large{\\rho}_{\\normalsize 3}\\,\\large{\\rho}_{\\normalsize 4}\\,\\large{\\rho}_{\\normalsize 5}\\right) \\cdots \\left(\\large{\\rho}_{\\normalsize 3n-3}\\,\\large{\\rho}_{\\normalsize 3n-2}\\,\\large{\\rho}_{\\normalsize 3n-1}\\right)\\right]\\,\\sqrt{2-\\large{\\rho}_{\\normalsize 3n-2}} \\\\[2mm] \n&amp;= \\left[\\left(\\large{\\rho}_{\\normalsize 0}\\,\\large{\\rho}_{\\normalsize 1}\\,\\large{\\rho}_{\\normalsize 2}\\right) \\left(\\large{\\rho}_{\\normalsize 3}\\,\\large{\\rho}_{\\normalsize 4}\\,\\large{\\rho}_{\\normalsize 5}\\right) \\cdots \\right]\\,\\sqrt{2-\\large{\\rho}_{\\normalsize 3n-5}} \\\\[2mm] \n&amp;= \\quad \\cdots \\quad \\cdots \\quad \\cdots \\\\[2mm] \n&amp;= \\left[\\left(\\large{\\rho}_{\\normalsize 0}\\,\\large{\\rho}_{\\normalsize 1}\\,\\large{\\rho}_{\\normalsize 2}\\right)\\right]\\,\\sqrt{2-\\large{\\rho}_{\\normalsize 1}} = \\large{\\rho}_{\\normalsize 0}\\normalsize\\,\\sqrt{4-\\rho_{\\small 0}^2} \\normalsize = \\sqrt{2}\\,\\sqrt{4-2} = \\color{red}{2 \\quad \\forall\\space n \\ge 1}\n\\end{align} \n$$\nHence:\n$$ \\lim_{n\\rightarrow\\infty}\\frac{1}{\\rho} = \\lim_{n\\rightarrow\\infty}\\frac{\\sigma}{2} = \\sin\\left(\\frac{\\pi}{7}\\right) \\quad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad $$</p>\n\n<hr>\n\n<p>In order use this technique in general, for a period of length $\\color{red}{m}$, we have: \n$$ \n\\begin{align} \n\\color{red}{{\\large\\rho}_{-1}=0},\\,\\, {\\large\\rho}_{0} &amp;= \\sqrt{2\\pm{\\rho}_{-1}},\\,\\, {\\large\\rho}_{1}=\\sqrt{2\\pm{\\rho}_{0}},\\,\\, 
{\\large\\rho}_{2}=\\sqrt{2\\pm{\\rho}_{1}},\\,\\, \\cdots ,\\,\\, {\\large\\rho}_{m-1}=\\sqrt{2\\pm{\\rho}_{m-2}},\\,\\, \\cdots \\\\[2mm] \n{\\Large\\rho} &amp;= \\prod_{k=0}^{n}\\left( {\\large\\rho}_{mk}\\, {\\large\\rho}_{mk+1}\\, \\cdots\\, {\\large\\rho}_{mk+m-2}\\, {\\large\\rho}_{mk+m-1} \\right) \\qquad \\\\[2mm] \n&amp;= \\prod_{k=0}^{n}\\left( \\sqrt{2\\pm{\\large\\rho}_{mk-1}}\\, \\sqrt{2\\pm{\\large\\rho}_{mk}}\\, \\cdots\\, \\sqrt{2\\pm{\\large\\rho}_{mk+m-3}}\\, \\sqrt{2\\color{red}{\\pm}{\\large\\rho}_{mk+m-2}} \\right) \\\\[2mm] \n{\\Large\\sigma} &amp;= \\sqrt{2\\color{red}{\\mp}{\\large\\rho}_{mn+m-2}}\n\\end{align} \n$$ \nHence, $\\,\\left({\\large\\sigma}\\right)\\,$ <strong>must start with a different sign from the actual period!</strong> <br/> \nNevertheless, if $\\,{\\large\\sigma}\\,$ is not comply with this condition, we still have options to write the product form of related/equivalent function by applying simple manipulations to the original $\\,{\\large\\sigma}\\,$. <br/> <br/> \nConsider the following example $\\left\\{\\,\\text{period}\\,\\left(+,+,-,-,-\\right)\\,\\right\\}$: \n$$ \n\\begin{align} \n\\cos\\left(\\frac{\\pi}{11}\\right) &amp;= \\frac12\\,\\sqrt{2\\color{red}{+}\\sqrt{2+\\sqrt{2-\\sqrt{2-\\sqrt{2-\\sqrt{2\\color{blue}{+}\\sqrt{\\cdots}}}}}}} \\\\[2mm] \n\\implies \\qquad &amp;\\space \\underbrace{\\color{red}{+}{\\large/}+,-,-,-,} \\underbrace{\\color{blue}{+}{\\large/}+,-,-,-,} \\underbrace{\\color{blue}{+}{\\large/}+,-,-,-,} \\color{blue}{+}{\\large/}\\cdots \n\\end{align} \n$$ \nAnd as it appears, after isolating the first sign, the pattern is not obeying the condition. 
<br/> \nWe have two options to convert it into applicable format:\n$$ \n\\color{blue}{\\underline{\\left({\\bf1}\\right) - \\,\\text{Convert to}\\, \\sin(\\pi/11)}} \\space\\quad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad \\\\[2mm] \n\\begin{align} \n\\sin\\left(\\frac{\\pi}{11}\\right) &amp;= \\sqrt{1-\\cos^2\\left(\\frac{\\pi}{11}\\right)} = \\frac12\\,\\sqrt{4-\\left[2\\cos\\left(\\frac{\\pi}{11}\\right)\\right]^2} \\Rightarrow \\\\[2mm] \n\\sin\\left(\\frac{\\pi}{11}\\right) &amp;= \\frac12\\,\\sqrt{2\\color{red}{-}\\sqrt{2+\\sqrt{2-\\sqrt{2-\\sqrt{2-\\sqrt{2\\color{blue}{+}\\sqrt{\\cdots}}}}}}} \\\\[2mm] \n\\implies \\qquad &amp;\\space \\underbrace{\\color{red}{-}{\\large/}+,-,-,-,} \\underbrace{\\color{blue}{+}{\\large/}+,-,-,-,} \\underbrace{\\color{blue}{+}{\\large/}+,-,-,-,} \\color{blue}{+}{\\large/}\\cdots \\\\[2mm] \n\\end{align} \\\\[2mm] \n{\\Large\\rho} = \\prod_{k=0}^{n}\\left( \\sqrt{2-{\\large\\rho}_{5k-1}}\\, \\sqrt{2-{\\large\\rho}_{5k\\space\\space\\space\\space}}\\, \\sqrt{2-{\\large\\rho}_{5k+1}}\\, \\sqrt{2+{\\large\\rho}_{5k+2}}\\, \\sqrt{2\\color{red}{+}{\\large\\rho}_{5k+3}} \\right) \\\\[2mm] \n{\\Large\\sigma} = \\sqrt{2\\color{red}{-}{\\large\\rho}_{5n+3}}\\,,\\quad {\\Large\\rho}\\cdot{\\Large\\sigma}=2\\,,\\quad \\sin\\left(\\frac{\\pi}{11}\\right)=\\frac{{\\large\\sigma}}{2}=\\frac{1}{{\\large\\rho}}\\,,\\quad \\cos\\left(\\frac{\\pi}{11}\\right)=\\sqrt{1-\\left(1/{\\large\\rho}\\right)^2} \\\\[4mm] \n$$ \n$$ \n\\color{blue}{\\underline{\\left({\\bf2}\\right) - \\,\\text{Convert to}\\, \\cos(\\pi/22)}} \\space\\quad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad \\\\[2mm] \n\\begin{align} \n\\cos\\left(\\frac{\\pi}{22}\\right) &amp;= \\sqrt{\\frac12+\\frac12\\,\\cos\\left(\\frac{\\pi}{11}\\right)} = \\frac12\\,\\sqrt{2+2\\,\\cos\\left(\\frac{\\pi}{11}\\right)} \\Rightarrow \\\\[2mm] \n\\cos\\left(\\frac{\\pi}{22}\\right) &amp;= 
\\frac12\\,\\sqrt{2\\color{red}{+}\\sqrt{2+\\sqrt{2+\\sqrt{2-\\sqrt{2-\\sqrt{2\\color{blue}{-}\\sqrt{\\cdots}}}}}}} \\\\[2mm] \n\\implies \\qquad &amp;\\space \\underbrace{\\color{red}{+}{\\large/}+,+,-,-,} \\underbrace{\\color{blue}{-}{\\large/}+,+,-,-,} \\underbrace{\\color{blue}{-}{\\large/}+,+,-,-,} \\color{blue}{-}{\\large/}\\cdots \\\\[2mm] \n\\end{align} \\\\[2mm] \n{\\Large\\rho} = \\prod_{k=0}^{n}\\left( \\sqrt{2-{\\large\\rho}_{5k-1}}\\, \\sqrt{2-{\\large\\rho}_{5k\\space\\space\\space\\space}}\\, \\sqrt{2+{\\large\\rho}_{5k+1}}\\, \\sqrt{2+{\\large\\rho}_{5k+2}}\\, \\sqrt{2\\color{red}{-}{\\large\\rho}_{5k+3}} \\right) \\\\[2mm] \n{\\Large\\sigma} = \\sqrt{2\\color{red}{+}{\\large\\rho}_{5n+3}}\\,,\\quad {\\Large\\rho}\\cdot{\\Large\\sigma}=2\\,,\\quad \\cos\\left(\\frac{\\pi}{22}\\right)=\\frac{{\\large\\sigma}}{2}=\\frac{1}{{\\large\\rho}}\\,,\\quad \\cos\\left(\\frac{\\pi}{11}\\right)=\\sqrt{\\frac12+\\frac12\\,\\left(1/{\\large\\rho}\\right)} \\\\[2mm] \n$$ \nAnd here are Mathematica codes for these two options: <br/> </p>\n\n<pre>\n<code>\nn = 4;\nr = 1; e = 0; For[k = 0, n >= k, k++, {a = Sqrt[2 - e]; b = Sqrt[2 - a]; c = Sqrt[2 - b]; d = Sqrt[2 + c]; e = Sqrt[2 + d]; r = r (a b c d e)}];\ns = 1; x = Sqrt[2 + Sqrt[2 - Sqrt[2 - Sqrt[2]]]]; For[k = 0, (n - 1) >= k, k++, {x = Sqrt[2 + Sqrt[2 - Sqrt[2 - Sqrt[2 - Sqrt[2 + x]]]]]}]; s = Sqrt[2 - x];\nPrint[N[r s], \" \", N[1/r], \" \", N[s/2], \" \", N[Sin[Pi/11]]]\n\nn = 4;\nr = 1; e = 0; For[k = 0, n >= k, k++, {a = Sqrt[2 - e]; b = Sqrt[2 - a]; c = Sqrt[2 + b]; d = Sqrt[2 + c]; e = Sqrt[2 - d]; r = r (a b c d e)}];\ns = 1; x = Sqrt[2 + Sqrt[2 + Sqrt[2 - Sqrt[2]]]]; For[k = 0, (n - 1) >= k, k++, {x = Sqrt[2 + Sqrt[2 + Sqrt[2 - Sqrt[2 - Sqrt[2 - x]]]]]}]; s = Sqrt[2 + x];\nPrint[N[r s], \" \", N[1/r], \" \", N[s/2], \" \", N[Cos[Pi/22]]]\n</code>\n</pre>\n" }, { "answer_id": 4005275, "score": 1, "text": "<p>With the help of Half angle cosine formula it is possible to express many sine and cosine angles 
as finite or infinite nested square roots of 2 <a href=\"https://math.stackexchange.com/questions/3841604/solving-cyclic-infinite-nested-square-roots-of-2-as-cosine-functions\">here</a>.</p>\n<p>In this perspective <span class=\"math-container\">$2\\sin\\frac{\\pi}{7}$</span> can be solved as follows</p>\n<p><span class=\"math-container\">$2\\sin\\frac{\\pi}{7}$</span> = <span class=\"math-container\">$\\sqrt{2-2\\cos\\frac{2\\pi}{7}}$</span> = <span class=\"math-container\">$\\sqrt{2-\\sqrt{2+2\\cos\\frac{4\\pi}{7}}}$</span> = <span class=\"math-container\">$\\sqrt{2-\\sqrt{2-2\\cos\\frac{3\\pi}{7}}}$</span> = <span class=\"math-container\">$\\sqrt{2-\\sqrt{2-\\sqrt{2+2\\cos\\frac{6\\pi}{7}}}}$</span> = <span class=\"math-container\">$\\sqrt{2-\\sqrt{2-\\sqrt{2-2\\cos\\frac{\\pi}{7}}}}$</span> = <span class=\"math-container\">$\\sqrt{2-\\sqrt{2-\\sqrt{2-\\sqrt{2+2\\cos\\frac{2\\pi}{7}}}}}$</span></p>\n<p>One can observe repetition of <span class=\"math-container\">$2\\cos\\frac{2\\pi}{7}$</span> subsequently and <span class=\"math-container\">$\\therefore$</span> <span class=\"math-container\">$2\\sin\\frac{\\pi}{7}$</span> can be represented simply as</p>\n<blockquote>\n<p><strong><span class=\"math-container\">$$\\sqrt{2-\\sqrt{2-\\sqrt{2-\\sqrt{2+\\sqrt{2-\\sqrt{2-\\sqrt{2+...2\\cos\\frac{2\\pi}{7}}}}}}}}$$</span> with repeating pattern <span class=\"math-container\">$--+$</span> infinitely and first <span class=\"math-container\">$-$</span> as single nonrepeating sign</strong></p>\n</blockquote>\n<p>But getting infinite product has to be worked out</p>\n" }, { "answer_id": 2089595, "score": 2, "text": "<p>Let:\n$$ \n\\begin{align} \n\\LARGE{\\rho}\\normalsize &amp;= \\left(\\small \\sqrt{2} \\,\\sqrt{2-\\sqrt{2}} \\,\\sqrt{2+\\sqrt{2-\\sqrt{2}}} \\normalsize\\right) \\times \\\\[2mm] \n&amp; \\space \\left(\\small \\sqrt{2\\color{red}{-}\\sqrt{2+\\sqrt{2-\\sqrt{2}}}} \\,\\sqrt{2\\color{red}{-}\\sqrt{2-\\sqrt{2+\\sqrt{2-\\sqrt{2}}}}} \\,\\color{red}{\\sqrt{2+ 
\\color{black}{\\sqrt{2-\\sqrt{2-\\sqrt{2+\\sqrt{2-\\sqrt{2}}}}}}}} \\normalsize\\right) \\times \\cdots \\\\[4mm] \n\\LARGE{\\sigma}\\normalsize &amp;= \\color{red}{\\sqrt{2- \\color{black}{\\sqrt{2-\\sqrt{2-\\sqrt{2+\\sqrt{2-\\sqrt{2-\\sqrt{2+\\sqrt{\\cdots}}}}}}}}}} \\space\\space\\small\\{\\text{Same as last item of}\\,\\rho\\,\\text{with minus sign}\\} \n\\end{align} \n$$\nFundamentally, the two expressions are the same. <br/>\nAnd to transpose between them, we use the property:\n$$ \n\\boxed{ \\quad \\color{red}{\\Large{\\rho \\cdot \\sigma = 2}} \\quad } \\\\[4mm] \n\\frac{1}{\\rho} = \\frac{1}{\\rho} \\color{red}{\\cdot \\frac{\\sigma}{\\sigma}} = \\frac{\\sigma}{\\rho\\cdot\\sigma} = \\frac{\\sigma}{2} = \\quad\\sin\\left(\\frac{\\pi}{7}\\right)\\quad = \\frac{\\sigma}{2} = \\frac{\\sigma}{2} \\color{red}{\\cdot \\frac{\\rho}{\\rho}} = \\frac{\\rho\\cdot\\sigma/2}{\\rho} = \\frac{1}{\\rho}\n$$\n <br/>\nIn order to trace the simplifying process, let us consider the definition of both $\\left(\\large{\\rho}\\right)$ and $\\left(\\large{\\sigma}\\right)$ upto $n$, <br/>\nWhere $\\left(\\large{\\rho}\\right)$ definition is consist of <strong>three square-roots at a time</strong>, as illustrated above:\n$$ \n\\begin{align} \n\\LARGE{\\rho}\\normalsize &amp;= \\left(\\large{\\rho}_{\\normalsize 0}\\,\\large{\\rho}_{\\normalsize 1}\\,\\large{\\rho}_{\\normalsize 2}\\right) \\left(\\large{\\rho}_{\\normalsize 3}\\,\\large{\\rho}_{\\normalsize 4}\\,\\large{\\rho}_{\\normalsize 5}\\right) \\cdots \\left(\\large{\\rho}_{\\normalsize 3n-3}\\,\\large{\\rho}_{\\normalsize 3n-2}\\,\\large{\\rho}_{\\normalsize 3n-1}\\right) \\left(\\large{\\rho}_{\\normalsize 3n}\\,\\large{\\rho}_{\\normalsize 3n+1}\\,\\large{\\rho}_{\\normalsize 3n+2}\\right) \\\\[2mm] \n&amp;= \\left(\\large{\\rho}_{\\normalsize 0}\\,\\large{\\rho}_{\\normalsize 1}\\,\\large{\\rho}_{\\normalsize 2}\\right) \\left(\\large{\\rho}_{\\normalsize 3}\\,\\large{\\rho}_{\\normalsize 
4}\\,\\large{\\rho}_{\\normalsize 5}\\right) \\cdots \\left(\\large{\\rho}_{\\normalsize 3n-3}\\,\\large{\\rho}_{\\normalsize 3n-2}\\,\\large{\\rho}_{\\normalsize 3n-1}\\right) \\left(\\sqrt{2\\color{red}{-}\\rho_{\\small 3n-1}}\\,\\sqrt{2\\color{red}{-}\\rho_{\\small 3n}}\\,\\sqrt{2\\color{red}{+}\\rho_{\\small 3n+1}}\\right)\n\\end{align} \n$$\nAnd $\\left(\\large{\\sigma}\\right)$ equals <strong>same as last item of</strong> $\\large{\\rho}\\,$ <strong>with minus sign</strong>, as mentioned above:\n$$ \\LARGE{\\sigma}\\normalsize = \\sqrt{2\\color{red}{-}\\rho_{\\small 3n+1}} \\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad $$\nSo:\n$$ \n\\begin{align} \n&amp; \\left(\\large{\\rho}_{\\normalsize 3n}\\,\\large{\\rho}_{\\normalsize 3n+1}\\,\\large{\\rho}_{\\normalsize 3n+2}\\right) \\,\\sqrt{2-\\rho_{\\small 3n+1}} \\\\[2mm] \n&amp; \\qquad = \\sqrt{2-\\rho_{\\small 3n-1}}\\,\\sqrt{2-\\rho_{\\small 3n}}\\,\\sqrt{2+\\rho_{\\small 3n+1}}\\,\\sqrt{2-\\rho_{\\small 3n+1}} = \\sqrt{2-\\rho_{\\small 3n-1}}\\,\\sqrt{2-\\rho_{\\small 3n}}\\,\\sqrt{4-\\rho_{\\small 3n+1}^2} \\\\[2mm] \n&amp; \\qquad = \\sqrt{2-\\rho_{\\small 3n-1}}\\,\\sqrt{2-\\rho_{\\small 3n}}\\,\\sqrt{4-\\left(2-\\rho_{\\small 3n}\\right)} = \\sqrt{2-\\rho_{\\small 3n-1}}\\,\\sqrt{4-\\rho_{\\small 3n}^2} \\\\[2mm] \n&amp; \\qquad = \\sqrt{2-\\rho_{\\small 3n-1}}\\,\\sqrt{4-\\left(2-\\rho_{\\small 3n-1}\\right)} = \\sqrt{4-\\rho_{\\small 3n-1}^2} = \\sqrt{4-\\left(2+\\rho_{\\small 3n-2}\\right)} = \\color{red}{\\sqrt{2-\\rho_{\\small 3n-2}}}\n\\end{align} \n$$\nThus:\n$$ \n\\begin{align} \n\\color{red}{\\Large{\\rho \\cdot \\sigma}}\\normalsize &amp;= \\left[\\left(\\large{\\rho}_{\\normalsize 0}\\,\\large{\\rho}_{\\normalsize 1}\\,\\large{\\rho}_{\\normalsize 2}\\right) \\left(\\large{\\rho}_{\\normalsize 3}\\,\\large{\\rho}_{\\normalsize 4}\\,\\large{\\rho}_{\\normalsize 5}\\right) \\cdots \\left(\\large{\\rho}_{\\normalsize 
3n-3}\\,\\large{\\rho}_{\\normalsize 3n-2}\\,\\large{\\rho}_{\\normalsize 3n-1}\\right) \\left(\\large{\\rho}_{\\normalsize 3n}\\,\\large{\\rho}_{\\normalsize 3n+1}\\,\\large{\\rho}_{\\normalsize 3n+2}\\right)\\right]\\,\\sqrt{2-\\large{\\rho}_{\\normalsize 3n+1}} \\\\[2mm] \n&amp;= \\left[\\left(\\large{\\rho}_{\\normalsize 0}\\,\\large{\\rho}_{\\normalsize 1}\\,\\large{\\rho}_{\\normalsize 2}\\right) \\left(\\large{\\rho}_{\\normalsize 3}\\,\\large{\\rho}_{\\normalsize 4}\\,\\large{\\rho}_{\\normalsize 5}\\right) \\cdots \\left(\\large{\\rho}_{\\normalsize 3n-3}\\,\\large{\\rho}_{\\normalsize 3n-2}\\,\\large{\\rho}_{\\normalsize 3n-1}\\right)\\right]\\,\\sqrt{2-\\large{\\rho}_{\\normalsize 3n-2}} \\\\[2mm] \n&amp;= \\left[\\left(\\large{\\rho}_{\\normalsize 0}\\,\\large{\\rho}_{\\normalsize 1}\\,\\large{\\rho}_{\\normalsize 2}\\right) \\left(\\large{\\rho}_{\\normalsize 3}\\,\\large{\\rho}_{\\normalsize 4}\\,\\large{\\rho}_{\\normalsize 5}\\right) \\cdots \\right]\\,\\sqrt{2-\\large{\\rho}_{\\normalsize 3n-5}} \\\\[2mm] \n&amp;= \\quad \\cdots \\quad \\cdots \\quad \\cdots \\\\[2mm] \n&amp;= \\left[\\left(\\large{\\rho}_{\\normalsize 0}\\,\\large{\\rho}_{\\normalsize 1}\\,\\large{\\rho}_{\\normalsize 2}\\right)\\right]\\,\\sqrt{2-\\large{\\rho}_{\\normalsize 1}} = \\large{\\rho}_{\\normalsize 0}\\normalsize\\,\\sqrt{4-\\rho_{\\small 0}^2} \\normalsize = \\sqrt{2}\\,\\sqrt{4-2} = \\color{red}{2 \\quad \\forall\\space n \\ge 1}\n\\end{align} \n$$\nHence:\n$$ \\lim_{n\\rightarrow\\infty}\\frac{1}{\\rho} = \\lim_{n\\rightarrow\\infty}\\frac{\\sigma}{2} = \\sin\\left(\\frac{\\pi}{7}\\right) \\quad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad $$</p>\n\n<hr>\n\n<p>In order use this technique in general, for a period of length $\\color{red}{m}$, we have: \n$$ \n\\begin{align} \n\\color{red}{{\\large\\rho}_{-1}=0},\\,\\, {\\large\\rho}_{0} &amp;= \\sqrt{2\\pm{\\rho}_{-1}},\\,\\, {\\large\\rho}_{1}=\\sqrt{2\\pm{\\rho}_{0}},\\,\\, 
{\\large\\rho}_{2}=\\sqrt{2\\pm{\\rho}_{1}},\\,\\, \\cdots ,\\,\\, {\\large\\rho}_{m-1}=\\sqrt{2\\pm{\\rho}_{m-2}},\\,\\, \\cdots \\\\[2mm] \n{\\Large\\rho} &amp;= \\prod_{k=0}^{n}\\left( {\\large\\rho}_{mk}\\, {\\large\\rho}_{mk+1}\\, \\cdots\\, {\\large\\rho}_{mk+m-2}\\, {\\large\\rho}_{mk+m-1} \\right) \\qquad \\\\[2mm] \n&amp;= \\prod_{k=0}^{n}\\left( \\sqrt{2\\pm{\\large\\rho}_{mk-1}}\\, \\sqrt{2\\pm{\\large\\rho}_{mk}}\\, \\cdots\\, \\sqrt{2\\pm{\\large\\rho}_{mk+m-3}}\\, \\sqrt{2\\color{red}{\\pm}{\\large\\rho}_{mk+m-2}} \\right) \\\\[2mm] \n{\\Large\\sigma} &amp;= \\sqrt{2\\color{red}{\\mp}{\\large\\rho}_{mn+m-2}}\n\\end{align} \n$$ \nHence, $\\,\\left({\\large\\sigma}\\right)\\,$ <strong>must start with a different sign from the actual period!</strong> <br/> \nNevertheless, if $\\,{\\large\\sigma}\\,$ is not comply with this condition, we still have options to write the product form of related/equivalent function by applying simple manipulations to the original $\\,{\\large\\sigma}\\,$. <br/> <br/> \nConsider the following example $\\left\\{\\,\\text{period}\\,\\left(+,+,-,-,-\\right)\\,\\right\\}$: \n$$ \n\\begin{align} \n\\cos\\left(\\frac{\\pi}{11}\\right) &amp;= \\frac12\\,\\sqrt{2\\color{red}{+}\\sqrt{2+\\sqrt{2-\\sqrt{2-\\sqrt{2-\\sqrt{2\\color{blue}{+}\\sqrt{\\cdots}}}}}}} \\\\[2mm] \n\\implies \\qquad &amp;\\space \\underbrace{\\color{red}{+}{\\large/}+,-,-,-,} \\underbrace{\\color{blue}{+}{\\large/}+,-,-,-,} \\underbrace{\\color{blue}{+}{\\large/}+,-,-,-,} \\color{blue}{+}{\\large/}\\cdots \n\\end{align} \n$$ \nAnd as it appears, after isolating the first sign, the pattern is not obeying the condition. 
<br/> \nWe have two options to convert it into applicable format:\n$$ \n\\color{blue}{\\underline{\\left({\\bf1}\\right) - \\,\\text{Convert to}\\, \\sin(\\pi/11)}} \\space\\quad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad \\\\[2mm] \n\\begin{align} \n\\sin\\left(\\frac{\\pi}{11}\\right) &amp;= \\sqrt{1-\\cos^2\\left(\\frac{\\pi}{11}\\right)} = \\frac12\\,\\sqrt{4-\\left[2\\cos\\left(\\frac{\\pi}{11}\\right)\\right]^2} \\Rightarrow \\\\[2mm] \n\\sin\\left(\\frac{\\pi}{11}\\right) &amp;= \\frac12\\,\\sqrt{2\\color{red}{-}\\sqrt{2+\\sqrt{2-\\sqrt{2-\\sqrt{2-\\sqrt{2\\color{blue}{+}\\sqrt{\\cdots}}}}}}} \\\\[2mm] \n\\implies \\qquad &amp;\\space \\underbrace{\\color{red}{-}{\\large/}+,-,-,-,} \\underbrace{\\color{blue}{+}{\\large/}+,-,-,-,} \\underbrace{\\color{blue}{+}{\\large/}+,-,-,-,} \\color{blue}{+}{\\large/}\\cdots \\\\[2mm] \n\\end{align} \\\\[2mm] \n{\\Large\\rho} = \\prod_{k=0}^{n}\\left( \\sqrt{2-{\\large\\rho}_{5k-1}}\\, \\sqrt{2-{\\large\\rho}_{5k\\space\\space\\space\\space}}\\, \\sqrt{2-{\\large\\rho}_{5k+1}}\\, \\sqrt{2+{\\large\\rho}_{5k+2}}\\, \\sqrt{2\\color{red}{+}{\\large\\rho}_{5k+3}} \\right) \\\\[2mm] \n{\\Large\\sigma} = \\sqrt{2\\color{red}{-}{\\large\\rho}_{5n+3}}\\,,\\quad {\\Large\\rho}\\cdot{\\Large\\sigma}=2\\,,\\quad \\sin\\left(\\frac{\\pi}{11}\\right)=\\frac{{\\large\\sigma}}{2}=\\frac{1}{{\\large\\rho}}\\,,\\quad \\cos\\left(\\frac{\\pi}{11}\\right)=\\sqrt{1-\\left(1/{\\large\\rho}\\right)^2} \\\\[4mm] \n$$ \n$$ \n\\color{blue}{\\underline{\\left({\\bf2}\\right) - \\,\\text{Convert to}\\, \\cos(\\pi/22)}} \\space\\quad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad \\\\[2mm] \n\\begin{align} \n\\cos\\left(\\frac{\\pi}{22}\\right) &amp;= \\sqrt{\\frac12+\\frac12\\,\\cos\\left(\\frac{\\pi}{11}\\right)} = \\frac12\\,\\sqrt{2+2\\,\\cos\\left(\\frac{\\pi}{11}\\right)} \\Rightarrow \\\\[2mm] \n\\cos\\left(\\frac{\\pi}{22}\\right) &amp;= 
\\frac12\\,\\sqrt{2\\color{red}{+}\\sqrt{2+\\sqrt{2+\\sqrt{2-\\sqrt{2-\\sqrt{2\\color{blue}{-}\\sqrt{\\cdots}}}}}}} \\\\[2mm] \n\\implies \\qquad &amp;\\space \\underbrace{\\color{red}{+}{\\large/}+,+,-,-,} \\underbrace{\\color{blue}{-}{\\large/}+,+,-,-,} \\underbrace{\\color{blue}{-}{\\large/}+,+,-,-,} \\color{blue}{-}{\\large/}\\cdots \\\\[2mm] \n\\end{align} \\\\[2mm] \n{\\Large\\rho} = \\prod_{k=0}^{n}\\left( \\sqrt{2-{\\large\\rho}_{5k-1}}\\, \\sqrt{2-{\\large\\rho}_{5k\\space\\space\\space\\space}}\\, \\sqrt{2+{\\large\\rho}_{5k+1}}\\, \\sqrt{2+{\\large\\rho}_{5k+2}}\\, \\sqrt{2\\color{red}{-}{\\large\\rho}_{5k+3}} \\right) \\\\[2mm] \n{\\Large\\sigma} = \\sqrt{2\\color{red}{+}{\\large\\rho}_{5n+3}}\\,,\\quad {\\Large\\rho}\\cdot{\\Large\\sigma}=2\\,,\\quad \\cos\\left(\\frac{\\pi}{22}\\right)=\\frac{{\\large\\sigma}}{2}=\\frac{1}{{\\large\\rho}}\\,,\\quad \\cos\\left(\\frac{\\pi}{11}\\right)=\\sqrt{\\frac12+\\frac12\\,\\left(1/{\\large\\rho}\\right)} \\\\[2mm] \n$$ \nAnd here are Mathematica codes for these two options: <br/> </p>\n\n<pre>\n<code>\nn = 4;\nr = 1; e = 0; For[k = 0, n >= k, k++, {a = Sqrt[2 - e]; b = Sqrt[2 - a]; c = Sqrt[2 - b]; d = Sqrt[2 + c]; e = Sqrt[2 + d]; r = r (a b c d e)}];\ns = 1; x = Sqrt[2 + Sqrt[2 - Sqrt[2 - Sqrt[2]]]]; For[k = 0, (n - 1) >= k, k++, {x = Sqrt[2 + Sqrt[2 - Sqrt[2 - Sqrt[2 - Sqrt[2 + x]]]]]}]; s = Sqrt[2 - x];\nPrint[N[r s], \" \", N[1/r], \" \", N[s/2], \" \", N[Sin[Pi/11]]]\n\nn = 4;\nr = 1; e = 0; For[k = 0, n >= k, k++, {a = Sqrt[2 - e]; b = Sqrt[2 - a]; c = Sqrt[2 + b]; d = Sqrt[2 + c]; e = Sqrt[2 - d]; r = r (a b c d e)}];\ns = 1; x = Sqrt[2 + Sqrt[2 + Sqrt[2 - Sqrt[2]]]]; For[k = 0, (n - 1) >= k, k++, {x = Sqrt[2 + Sqrt[2 + Sqrt[2 - Sqrt[2 - Sqrt[2 - x]]]]]}]; s = Sqrt[2 + x];\nPrint[N[r s], \" \", N[1/r], \" \", N[s/2], \" \", N[Cos[Pi/22]]]\n</code>\n</pre>\n" }, { "answer_id": 4005275, "score": 1, "text": "<p>With the help of Half angle cosine formula it is possible to express many sine and cosine angles 
as finite or infinite nested square roots of 2 <a href=\"https://math.stackexchange.com/questions/3841604/solving-cyclic-infinite-nested-square-roots-of-2-as-cosine-functions\">here</a>.</p>\n<p>In this perspective <span class=\"math-container\">$2\\sin\\frac{\\pi}{7}$</span> can be solved as follows</p>\n<p><span class=\"math-container\">$2\\sin\\frac{\\pi}{7}$</span> = <span class=\"math-container\">$\\sqrt{2-2\\cos\\frac{2\\pi}{7}}$</span> = <span class=\"math-container\">$\\sqrt{2-\\sqrt{2+2\\cos\\frac{4\\pi}{7}}}$</span> = <span class=\"math-container\">$\\sqrt{2-\\sqrt{2-2\\cos\\frac{3\\pi}{7}}}$</span> = <span class=\"math-container\">$\\sqrt{2-\\sqrt{2-\\sqrt{2+2\\cos\\frac{6\\pi}{7}}}}$</span> = <span class=\"math-container\">$\\sqrt{2-\\sqrt{2-\\sqrt{2-2\\cos\\frac{\\pi}{7}}}}$</span> = <span class=\"math-container\">$\\sqrt{2-\\sqrt{2-\\sqrt{2-\\sqrt{2+2\\cos\\frac{2\\pi}{7}}}}}$</span></p>\n<p>One can observe repetition of <span class=\"math-container\">$2\\cos\\frac{2\\pi}{7}$</span> subsequently and <span class=\"math-container\">$\\therefore$</span> <span class=\"math-container\">$2\\sin\\frac{\\pi}{7}$</span> can be represented simply as</p>\n<blockquote>\n<p><strong><span class=\"math-container\">$$\\sqrt{2-\\sqrt{2-\\sqrt{2-\\sqrt{2+\\sqrt{2-\\sqrt{2-\\sqrt{2+...2\\cos\\frac{2\\pi}{7}}}}}}}}$$</span> with repeating pattern <span class=\"math-container\">$--+$</span> infinitely and first <span class=\"math-container\">$-$</span> as single nonrepeating sign</strong></p>\n</blockquote>\n<p>But getting infinite product has to be worked out</p>\n" }, { "answer_id": 2089595, "score": 2, "text": "<p>Let:\n$$ \n\\begin{align} \n\\LARGE{\\rho}\\normalsize &amp;= \\left(\\small \\sqrt{2} \\,\\sqrt{2-\\sqrt{2}} \\,\\sqrt{2+\\sqrt{2-\\sqrt{2}}} \\normalsize\\right) \\times \\\\[2mm] \n&amp; \\space \\left(\\small \\sqrt{2\\color{red}{-}\\sqrt{2+\\sqrt{2-\\sqrt{2}}}} \\,\\sqrt{2\\color{red}{-}\\sqrt{2-\\sqrt{2+\\sqrt{2-\\sqrt{2}}}}} \\,\\color{red}{\\sqrt{2+ 
\\color{black}{\\sqrt{2-\\sqrt{2-\\sqrt{2+\\sqrt{2-\\sqrt{2}}}}}}}} \\normalsize\\right) \\times \\cdots \\\\[4mm] \n\\LARGE{\\sigma}\\normalsize &amp;= \\color{red}{\\sqrt{2- \\color{black}{\\sqrt{2-\\sqrt{2-\\sqrt{2+\\sqrt{2-\\sqrt{2-\\sqrt{2+\\sqrt{\\cdots}}}}}}}}}} \\space\\space\\small\\{\\text{Same as last item of}\\,\\rho\\,\\text{with minus sign}\\} \n\\end{align} \n$$\nFundamentally, the two expressions are the same. <br/>\nAnd to transpose between them, we use the property:\n$$ \n\\boxed{ \\quad \\color{red}{\\Large{\\rho \\cdot \\sigma = 2}} \\quad } \\\\[4mm] \n\\frac{1}{\\rho} = \\frac{1}{\\rho} \\color{red}{\\cdot \\frac{\\sigma}{\\sigma}} = \\frac{\\sigma}{\\rho\\cdot\\sigma} = \\frac{\\sigma}{2} = \\quad\\sin\\left(\\frac{\\pi}{7}\\right)\\quad = \\frac{\\sigma}{2} = \\frac{\\sigma}{2} \\color{red}{\\cdot \\frac{\\rho}{\\rho}} = \\frac{\\rho\\cdot\\sigma/2}{\\rho} = \\frac{1}{\\rho}\n$$\n <br/>\nIn order to trace the simplifying process, let us consider the definition of both $\\left(\\large{\\rho}\\right)$ and $\\left(\\large{\\sigma}\\right)$ upto $n$, <br/>\nWhere $\\left(\\large{\\rho}\\right)$ definition is consist of <strong>three square-roots at a time</strong>, as illustrated above:\n$$ \n\\begin{align} \n\\LARGE{\\rho}\\normalsize &amp;= \\left(\\large{\\rho}_{\\normalsize 0}\\,\\large{\\rho}_{\\normalsize 1}\\,\\large{\\rho}_{\\normalsize 2}\\right) \\left(\\large{\\rho}_{\\normalsize 3}\\,\\large{\\rho}_{\\normalsize 4}\\,\\large{\\rho}_{\\normalsize 5}\\right) \\cdots \\left(\\large{\\rho}_{\\normalsize 3n-3}\\,\\large{\\rho}_{\\normalsize 3n-2}\\,\\large{\\rho}_{\\normalsize 3n-1}\\right) \\left(\\large{\\rho}_{\\normalsize 3n}\\,\\large{\\rho}_{\\normalsize 3n+1}\\,\\large{\\rho}_{\\normalsize 3n+2}\\right) \\\\[2mm] \n&amp;= \\left(\\large{\\rho}_{\\normalsize 0}\\,\\large{\\rho}_{\\normalsize 1}\\,\\large{\\rho}_{\\normalsize 2}\\right) \\left(\\large{\\rho}_{\\normalsize 3}\\,\\large{\\rho}_{\\normalsize 
4}\\,\\large{\\rho}_{\\normalsize 5}\\right) \\cdots \\left(\\large{\\rho}_{\\normalsize 3n-3}\\,\\large{\\rho}_{\\normalsize 3n-2}\\,\\large{\\rho}_{\\normalsize 3n-1}\\right) \\left(\\sqrt{2\\color{red}{-}\\rho_{\\small 3n-1}}\\,\\sqrt{2\\color{red}{-}\\rho_{\\small 3n}}\\,\\sqrt{2\\color{red}{+}\\rho_{\\small 3n+1}}\\right)\n\\end{align} \n$$\nAnd $\\left(\\large{\\sigma}\\right)$ equals <strong>same as last item of</strong> $\\large{\\rho}\\,$ <strong>with minus sign</strong>, as mentioned above:\n$$ \\LARGE{\\sigma}\\normalsize = \\sqrt{2\\color{red}{-}\\rho_{\\small 3n+1}} \\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad $$\nSo:\n$$ \n\\begin{align} \n&amp; \\left(\\large{\\rho}_{\\normalsize 3n}\\,\\large{\\rho}_{\\normalsize 3n+1}\\,\\large{\\rho}_{\\normalsize 3n+2}\\right) \\,\\sqrt{2-\\rho_{\\small 3n+1}} \\\\[2mm] \n&amp; \\qquad = \\sqrt{2-\\rho_{\\small 3n-1}}\\,\\sqrt{2-\\rho_{\\small 3n}}\\,\\sqrt{2+\\rho_{\\small 3n+1}}\\,\\sqrt{2-\\rho_{\\small 3n+1}} = \\sqrt{2-\\rho_{\\small 3n-1}}\\,\\sqrt{2-\\rho_{\\small 3n}}\\,\\sqrt{4-\\rho_{\\small 3n+1}^2} \\\\[2mm] \n&amp; \\qquad = \\sqrt{2-\\rho_{\\small 3n-1}}\\,\\sqrt{2-\\rho_{\\small 3n}}\\,\\sqrt{4-\\left(2-\\rho_{\\small 3n}\\right)} = \\sqrt{2-\\rho_{\\small 3n-1}}\\,\\sqrt{4-\\rho_{\\small 3n}^2} \\\\[2mm] \n&amp; \\qquad = \\sqrt{2-\\rho_{\\small 3n-1}}\\,\\sqrt{4-\\left(2-\\rho_{\\small 3n-1}\\right)} = \\sqrt{4-\\rho_{\\small 3n-1}^2} = \\sqrt{4-\\left(2+\\rho_{\\small 3n-2}\\right)} = \\color{red}{\\sqrt{2-\\rho_{\\small 3n-2}}}\n\\end{align} \n$$\nThus:\n$$ \n\\begin{align} \n\\color{red}{\\Large{\\rho \\cdot \\sigma}}\\normalsize &amp;= \\left[\\left(\\large{\\rho}_{\\normalsize 0}\\,\\large{\\rho}_{\\normalsize 1}\\,\\large{\\rho}_{\\normalsize 2}\\right) \\left(\\large{\\rho}_{\\normalsize 3}\\,\\large{\\rho}_{\\normalsize 4}\\,\\large{\\rho}_{\\normalsize 5}\\right) \\cdots \\left(\\large{\\rho}_{\\normalsize 
3n-3}\\,\\large{\\rho}_{\\normalsize 3n-2}\\,\\large{\\rho}_{\\normalsize 3n-1}\\right) \\left(\\large{\\rho}_{\\normalsize 3n}\\,\\large{\\rho}_{\\normalsize 3n+1}\\,\\large{\\rho}_{\\normalsize 3n+2}\\right)\\right]\\,\\sqrt{2-\\large{\\rho}_{\\normalsize 3n+1}} \\\\[2mm] \n&amp;= \\left[\\left(\\large{\\rho}_{\\normalsize 0}\\,\\large{\\rho}_{\\normalsize 1}\\,\\large{\\rho}_{\\normalsize 2}\\right) \\left(\\large{\\rho}_{\\normalsize 3}\\,\\large{\\rho}_{\\normalsize 4}\\,\\large{\\rho}_{\\normalsize 5}\\right) \\cdots \\left(\\large{\\rho}_{\\normalsize 3n-3}\\,\\large{\\rho}_{\\normalsize 3n-2}\\,\\large{\\rho}_{\\normalsize 3n-1}\\right)\\right]\\,\\sqrt{2-\\large{\\rho}_{\\normalsize 3n-2}} \\\\[2mm] \n&amp;= \\left[\\left(\\large{\\rho}_{\\normalsize 0}\\,\\large{\\rho}_{\\normalsize 1}\\,\\large{\\rho}_{\\normalsize 2}\\right) \\left(\\large{\\rho}_{\\normalsize 3}\\,\\large{\\rho}_{\\normalsize 4}\\,\\large{\\rho}_{\\normalsize 5}\\right) \\cdots \\right]\\,\\sqrt{2-\\large{\\rho}_{\\normalsize 3n-5}} \\\\[2mm] \n&amp;= \\quad \\cdots \\quad \\cdots \\quad \\cdots \\\\[2mm] \n&amp;= \\left[\\left(\\large{\\rho}_{\\normalsize 0}\\,\\large{\\rho}_{\\normalsize 1}\\,\\large{\\rho}_{\\normalsize 2}\\right)\\right]\\,\\sqrt{2-\\large{\\rho}_{\\normalsize 1}} = \\large{\\rho}_{\\normalsize 0}\\normalsize\\,\\sqrt{4-\\rho_{\\small 0}^2} \\normalsize = \\sqrt{2}\\,\\sqrt{4-2} = \\color{red}{2 \\quad \\forall\\space n \\ge 1}\n\\end{align} \n$$\nHence:\n$$ \\lim_{n\\rightarrow\\infty}\\frac{1}{\\rho} = \\lim_{n\\rightarrow\\infty}\\frac{\\sigma}{2} = \\sin\\left(\\frac{\\pi}{7}\\right) \\quad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad $$</p>\n\n<hr>\n\n<p>In order use this technique in general, for a period of length $\\color{red}{m}$, we have: \n$$ \n\\begin{align} \n\\color{red}{{\\large\\rho}_{-1}=0},\\,\\, {\\large\\rho}_{0} &amp;= \\sqrt{2\\pm{\\rho}_{-1}},\\,\\, {\\large\\rho}_{1}=\\sqrt{2\\pm{\\rho}_{0}},\\,\\, 
{\\large\\rho}_{2}=\\sqrt{2\\pm{\\rho}_{1}},\\,\\, \\cdots ,\\,\\, {\\large\\rho}_{m-1}=\\sqrt{2\\pm{\\rho}_{m-2}},\\,\\, \\cdots \\\\[2mm] \n{\\Large\\rho} &amp;= \\prod_{k=0}^{n}\\left( {\\large\\rho}_{mk}\\, {\\large\\rho}_{mk+1}\\, \\cdots\\, {\\large\\rho}_{mk+m-2}\\, {\\large\\rho}_{mk+m-1} \\right) \\qquad \\\\[2mm] \n&amp;= \\prod_{k=0}^{n}\\left( \\sqrt{2\\pm{\\large\\rho}_{mk-1}}\\, \\sqrt{2\\pm{\\large\\rho}_{mk}}\\, \\cdots\\, \\sqrt{2\\pm{\\large\\rho}_{mk+m-3}}\\, \\sqrt{2\\color{red}{\\pm}{\\large\\rho}_{mk+m-2}} \\right) \\\\[2mm] \n{\\Large\\sigma} &amp;= \\sqrt{2\\color{red}{\\mp}{\\large\\rho}_{mn+m-2}}\n\\end{align} \n$$ \nHence, $\\,\\left({\\large\\sigma}\\right)\\,$ <strong>must start with a different sign from the actual period!</strong> <br/> \nNevertheless, if $\\,{\\large\\sigma}\\,$ does not comply with this condition, we still have options to write the product form of a related/equivalent function by applying simple manipulations to the original $\\,{\\large\\sigma}\\,$. <br/> <br/> \nConsider the following example $\\left\\{\\,\\text{period}\\,\\left(+,+,-,-,-\\right)\\,\\right\\}$: \n$$ \n\\begin{align} \n\\cos\\left(\\frac{\\pi}{11}\\right) &amp;= \\frac12\\,\\sqrt{2\\color{red}{+}\\sqrt{2+\\sqrt{2-\\sqrt{2-\\sqrt{2-\\sqrt{2\\color{blue}{+}\\sqrt{\\cdots}}}}}}} \\\\[2mm] \n\\implies \\qquad &amp;\\space \\underbrace{\\color{red}{+}{\\large/}+,-,-,-,} \\underbrace{\\color{blue}{+}{\\large/}+,-,-,-,} \\underbrace{\\color{blue}{+}{\\large/}+,-,-,-,} \\color{blue}{+}{\\large/}\\cdots \n\\end{align} \n$$ \nAnd as it appears, after isolating the first sign, the pattern does not obey the condition. 
<br/> \nWe have two options to convert it into applicable format:\n$$ \n\\color{blue}{\\underline{\\left({\\bf1}\\right) - \\,\\text{Convert to}\\, \\sin(\\pi/11)}} \\space\\quad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad \\\\[2mm] \n\\begin{align} \n\\sin\\left(\\frac{\\pi}{11}\\right) &amp;= \\sqrt{1-\\cos^2\\left(\\frac{\\pi}{11}\\right)} = \\frac12\\,\\sqrt{4-\\left[2\\cos\\left(\\frac{\\pi}{11}\\right)\\right]^2} \\Rightarrow \\\\[2mm] \n\\sin\\left(\\frac{\\pi}{11}\\right) &amp;= \\frac12\\,\\sqrt{2\\color{red}{-}\\sqrt{2+\\sqrt{2-\\sqrt{2-\\sqrt{2-\\sqrt{2\\color{blue}{+}\\sqrt{\\cdots}}}}}}} \\\\[2mm] \n\\implies \\qquad &amp;\\space \\underbrace{\\color{red}{-}{\\large/}+,-,-,-,} \\underbrace{\\color{blue}{+}{\\large/}+,-,-,-,} \\underbrace{\\color{blue}{+}{\\large/}+,-,-,-,} \\color{blue}{+}{\\large/}\\cdots \\\\[2mm] \n\\end{align} \\\\[2mm] \n{\\Large\\rho} = \\prod_{k=0}^{n}\\left( \\sqrt{2-{\\large\\rho}_{5k-1}}\\, \\sqrt{2-{\\large\\rho}_{5k\\space\\space\\space\\space}}\\, \\sqrt{2-{\\large\\rho}_{5k+1}}\\, \\sqrt{2+{\\large\\rho}_{5k+2}}\\, \\sqrt{2\\color{red}{+}{\\large\\rho}_{5k+3}} \\right) \\\\[2mm] \n{\\Large\\sigma} = \\sqrt{2\\color{red}{-}{\\large\\rho}_{5n+3}}\\,,\\quad {\\Large\\rho}\\cdot{\\Large\\sigma}=2\\,,\\quad \\sin\\left(\\frac{\\pi}{11}\\right)=\\frac{{\\large\\sigma}}{2}=\\frac{1}{{\\large\\rho}}\\,,\\quad \\cos\\left(\\frac{\\pi}{11}\\right)=\\sqrt{1-\\left(1/{\\large\\rho}\\right)^2} \\\\[4mm] \n$$ \n$$ \n\\color{blue}{\\underline{\\left({\\bf2}\\right) - \\,\\text{Convert to}\\, \\cos(\\pi/22)}} \\space\\quad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad \\\\[2mm] \n\\begin{align} \n\\cos\\left(\\frac{\\pi}{22}\\right) &amp;= \\sqrt{\\frac12+\\frac12\\,\\cos\\left(\\frac{\\pi}{11}\\right)} = \\frac12\\,\\sqrt{2+2\\,\\cos\\left(\\frac{\\pi}{11}\\right)} \\Rightarrow \\\\[2mm] \n\\cos\\left(\\frac{\\pi}{22}\\right) &amp;= 
\\frac12\\,\\sqrt{2\\color{red}{+}\\sqrt{2+\\sqrt{2+\\sqrt{2-\\sqrt{2-\\sqrt{2\\color{blue}{-}\\sqrt{\\cdots}}}}}}} \\\\[2mm] \n\\implies \\qquad &amp;\\space \\underbrace{\\color{red}{+}{\\large/}+,+,-,-,} \\underbrace{\\color{blue}{-}{\\large/}+,+,-,-,} \\underbrace{\\color{blue}{-}{\\large/}+,+,-,-,} \\color{blue}{-}{\\large/}\\cdots \\\\[2mm] \n\\end{align} \\\\[2mm] \n{\\Large\\rho} = \\prod_{k=0}^{n}\\left( \\sqrt{2-{\\large\\rho}_{5k-1}}\\, \\sqrt{2-{\\large\\rho}_{5k\\space\\space\\space\\space}}\\, \\sqrt{2+{\\large\\rho}_{5k+1}}\\, \\sqrt{2+{\\large\\rho}_{5k+2}}\\, \\sqrt{2\\color{red}{-}{\\large\\rho}_{5k+3}} \\right) \\\\[2mm] \n{\\Large\\sigma} = \\sqrt{2\\color{red}{+}{\\large\\rho}_{5n+3}}\\,,\\quad {\\Large\\rho}\\cdot{\\Large\\sigma}=2\\,,\\quad \\cos\\left(\\frac{\\pi}{22}\\right)=\\frac{{\\large\\sigma}}{2}=\\frac{1}{{\\large\\rho}}\\,,\\quad \\cos\\left(\\frac{\\pi}{11}\\right)=\\sqrt{\\frac12+\\frac12\\,\\left(1/{\\large\\rho}\\right)} \\\\[2mm] \n$$ \nAnd here are Mathematica codes for these two options: <br/> </p>\n\n<pre>\n<code>\nn = 4;\nr = 1; e = 0; For[k = 0, n >= k, k++, {a = Sqrt[2 - e]; b = Sqrt[2 - a]; c = Sqrt[2 - b]; d = Sqrt[2 + c]; e = Sqrt[2 + d]; r = r (a b c d e)}];\ns = 1; x = Sqrt[2 + Sqrt[2 - Sqrt[2 - Sqrt[2]]]]; For[k = 0, (n - 1) >= k, k++, {x = Sqrt[2 + Sqrt[2 - Sqrt[2 - Sqrt[2 - Sqrt[2 + x]]]]]}]; s = Sqrt[2 - x];\nPrint[N[r s], \" \", N[1/r], \" \", N[s/2], \" \", N[Sin[Pi/11]]]\n\nn = 4;\nr = 1; e = 0; For[k = 0, n >= k, k++, {a = Sqrt[2 - e]; b = Sqrt[2 - a]; c = Sqrt[2 + b]; d = Sqrt[2 + c]; e = Sqrt[2 - d]; r = r (a b c d e)}];\ns = 1; x = Sqrt[2 + Sqrt[2 + Sqrt[2 - Sqrt[2]]]]; For[k = 0, (n - 1) >= k, k++, {x = Sqrt[2 + Sqrt[2 + Sqrt[2 - Sqrt[2 - Sqrt[2 - x]]]]]}]; s = Sqrt[2 + x];\nPrint[N[r s], \" \", N[1/r], \" \", N[s/2], \" \", N[Cos[Pi/22]]]\n</code>\n</pre>\n" }, { "answer_id": 4005275, "score": 1, "text": "<p>With the help of Half angle cosine formula it is possible to express many sine and cosine angles 
as finite or infinite nested square roots of 2 <a href=\"https://math.stackexchange.com/questions/3841604/solving-cyclic-infinite-nested-square-roots-of-2-as-cosine-functions\">here</a>.</p>\n<p>In this perspective <span class=\"math-container\">$2\\sin\\frac{\\pi}{7}$</span> can be solved as follows</p>\n<p><span class=\"math-container\">$2\\sin\\frac{\\pi}{7}$</span> = <span class=\"math-container\">$\\sqrt{2-2\\cos\\frac{2\\pi}{7}}$</span> = <span class=\"math-container\">$\\sqrt{2-\\sqrt{2+2\\cos\\frac{4\\pi}{7}}}$</span> = <span class=\"math-container\">$\\sqrt{2-\\sqrt{2-2\\cos\\frac{3\\pi}{7}}}$</span> = <span class=\"math-container\">$\\sqrt{2-\\sqrt{2-\\sqrt{2+2\\cos\\frac{6\\pi}{7}}}}$</span> = <span class=\"math-container\">$\\sqrt{2-\\sqrt{2-\\sqrt{2-2\\cos\\frac{\\pi}{7}}}}$</span> = <span class=\"math-container\">$\\sqrt{2-\\sqrt{2-\\sqrt{2-\\sqrt{2+2\\cos\\frac{2\\pi}{7}}}}}$</span></p>\n<p>One can observe repetition of <span class=\"math-container\">$2\\cos\\frac{2\\pi}{7}$</span> subsequently and <span class=\"math-container\">$\\therefore$</span> <span class=\"math-container\">$2\\sin\\frac{\\pi}{7}$</span> can be represented simply as</p>\n<blockquote>\n<p><strong><span class=\"math-container\">$$\\sqrt{2-\\sqrt{2-\\sqrt{2-\\sqrt{2+\\sqrt{2-\\sqrt{2-\\sqrt{2+...2\\cos\\frac{2\\pi}{7}}}}}}}}$$</span> with repeating pattern <span class=\"math-container\">$--+$</span> infinitely and first <span class=\"math-container\">$-$</span> as single nonrepeating sign</strong></p>\n</blockquote>\n<p>But getting infinite product has to be worked out</p>\n" }, { "answer_id": 2089595, "score": 2, "text": "<p>Let:\n$$ \n\\begin{align} \n\\LARGE{\\rho}\\normalsize &amp;= \\left(\\small \\sqrt{2} \\,\\sqrt{2-\\sqrt{2}} \\,\\sqrt{2+\\sqrt{2-\\sqrt{2}}} \\normalsize\\right) \\times \\\\[2mm] \n&amp; \\space \\left(\\small \\sqrt{2\\color{red}{-}\\sqrt{2+\\sqrt{2-\\sqrt{2}}}} \\,\\sqrt{2\\color{red}{-}\\sqrt{2-\\sqrt{2+\\sqrt{2-\\sqrt{2}}}}} \\,\\color{red}{\\sqrt{2+ 
\\color{black}{\\sqrt{2-\\sqrt{2-\\sqrt{2+\\sqrt{2-\\sqrt{2}}}}}}}} \\normalsize\\right) \\times \\cdots \\\\[4mm] \n\\LARGE{\\sigma}\\normalsize &amp;= \\color{red}{\\sqrt{2- \\color{black}{\\sqrt{2-\\sqrt{2-\\sqrt{2+\\sqrt{2-\\sqrt{2-\\sqrt{2+\\sqrt{\\cdots}}}}}}}}}} \\space\\space\\small\\{\\text{Same as last item of}\\,\\rho\\,\\text{with minus sign}\\} \n\\end{align} \n$$\nFundamentally, the two expressions are the same. <br/>\nAnd to transpose between them, we use the property:\n$$ \n\\boxed{ \\quad \\color{red}{\\Large{\\rho \\cdot \\sigma = 2}} \\quad } \\\\[4mm] \n\\frac{1}{\\rho} = \\frac{1}{\\rho} \\color{red}{\\cdot \\frac{\\sigma}{\\sigma}} = \\frac{\\sigma}{\\rho\\cdot\\sigma} = \\frac{\\sigma}{2} = \\quad\\sin\\left(\\frac{\\pi}{7}\\right)\\quad = \\frac{\\sigma}{2} = \\frac{\\sigma}{2} \\color{red}{\\cdot \\frac{\\rho}{\\rho}} = \\frac{\\rho\\cdot\\sigma/2}{\\rho} = \\frac{1}{\\rho}\n$$\n <br/>\nIn order to trace the simplifying process, let us consider the definition of both $\\left(\\large{\\rho}\\right)$ and $\\left(\\large{\\sigma}\\right)$ upto $n$, <br/>\nWhere $\\left(\\large{\\rho}\\right)$ definition is consist of <strong>three square-roots at a time</strong>, as illustrated above:\n$$ \n\\begin{align} \n\\LARGE{\\rho}\\normalsize &amp;= \\left(\\large{\\rho}_{\\normalsize 0}\\,\\large{\\rho}_{\\normalsize 1}\\,\\large{\\rho}_{\\normalsize 2}\\right) \\left(\\large{\\rho}_{\\normalsize 3}\\,\\large{\\rho}_{\\normalsize 4}\\,\\large{\\rho}_{\\normalsize 5}\\right) \\cdots \\left(\\large{\\rho}_{\\normalsize 3n-3}\\,\\large{\\rho}_{\\normalsize 3n-2}\\,\\large{\\rho}_{\\normalsize 3n-1}\\right) \\left(\\large{\\rho}_{\\normalsize 3n}\\,\\large{\\rho}_{\\normalsize 3n+1}\\,\\large{\\rho}_{\\normalsize 3n+2}\\right) \\\\[2mm] \n&amp;= \\left(\\large{\\rho}_{\\normalsize 0}\\,\\large{\\rho}_{\\normalsize 1}\\,\\large{\\rho}_{\\normalsize 2}\\right) \\left(\\large{\\rho}_{\\normalsize 3}\\,\\large{\\rho}_{\\normalsize 
4}\\,\\large{\\rho}_{\\normalsize 5}\\right) \\cdots \\left(\\large{\\rho}_{\\normalsize 3n-3}\\,\\large{\\rho}_{\\normalsize 3n-2}\\,\\large{\\rho}_{\\normalsize 3n-1}\\right) \\left(\\sqrt{2\\color{red}{-}\\rho_{\\small 3n-1}}\\,\\sqrt{2\\color{red}{-}\\rho_{\\small 3n}}\\,\\sqrt{2\\color{red}{+}\\rho_{\\small 3n+1}}\\right)\n\\end{align} \n$$\nAnd $\\left(\\large{\\sigma}\\right)$ equals <strong>same as last item of</strong> $\\large{\\rho}\\,$ <strong>with minus sign</strong>, as mentioned above:\n$$ \\LARGE{\\sigma}\\normalsize = \\sqrt{2\\color{red}{-}\\rho_{\\small 3n+1}} \\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad $$\nSo:\n$$ \n\\begin{align} \n&amp; \\left(\\large{\\rho}_{\\normalsize 3n}\\,\\large{\\rho}_{\\normalsize 3n+1}\\,\\large{\\rho}_{\\normalsize 3n+2}\\right) \\,\\sqrt{2-\\rho_{\\small 3n+1}} \\\\[2mm] \n&amp; \\qquad = \\sqrt{2-\\rho_{\\small 3n-1}}\\,\\sqrt{2-\\rho_{\\small 3n}}\\,\\sqrt{2+\\rho_{\\small 3n+1}}\\,\\sqrt{2-\\rho_{\\small 3n+1}} = \\sqrt{2-\\rho_{\\small 3n-1}}\\,\\sqrt{2-\\rho_{\\small 3n}}\\,\\sqrt{4-\\rho_{\\small 3n+1}^2} \\\\[2mm] \n&amp; \\qquad = \\sqrt{2-\\rho_{\\small 3n-1}}\\,\\sqrt{2-\\rho_{\\small 3n}}\\,\\sqrt{4-\\left(2-\\rho_{\\small 3n}\\right)} = \\sqrt{2-\\rho_{\\small 3n-1}}\\,\\sqrt{4-\\rho_{\\small 3n}^2} \\\\[2mm] \n&amp; \\qquad = \\sqrt{2-\\rho_{\\small 3n-1}}\\,\\sqrt{4-\\left(2-\\rho_{\\small 3n-1}\\right)} = \\sqrt{4-\\rho_{\\small 3n-1}^2} = \\sqrt{4-\\left(2+\\rho_{\\small 3n-2}\\right)} = \\color{red}{\\sqrt{2-\\rho_{\\small 3n-2}}}\n\\end{align} \n$$\nThus:\n$$ \n\\begin{align} \n\\color{red}{\\Large{\\rho \\cdot \\sigma}}\\normalsize &amp;= \\left[\\left(\\large{\\rho}_{\\normalsize 0}\\,\\large{\\rho}_{\\normalsize 1}\\,\\large{\\rho}_{\\normalsize 2}\\right) \\left(\\large{\\rho}_{\\normalsize 3}\\,\\large{\\rho}_{\\normalsize 4}\\,\\large{\\rho}_{\\normalsize 5}\\right) \\cdots \\left(\\large{\\rho}_{\\normalsize 
3n-3}\\,\\large{\\rho}_{\\normalsize 3n-2}\\,\\large{\\rho}_{\\normalsize 3n-1}\\right) \\left(\\large{\\rho}_{\\normalsize 3n}\\,\\large{\\rho}_{\\normalsize 3n+1}\\,\\large{\\rho}_{\\normalsize 3n+2}\\right)\\right]\\,\\sqrt{2-\\large{\\rho}_{\\normalsize 3n+1}} \\\\[2mm] \n&amp;= \\left[\\left(\\large{\\rho}_{\\normalsize 0}\\,\\large{\\rho}_{\\normalsize 1}\\,\\large{\\rho}_{\\normalsize 2}\\right) \\left(\\large{\\rho}_{\\normalsize 3}\\,\\large{\\rho}_{\\normalsize 4}\\,\\large{\\rho}_{\\normalsize 5}\\right) \\cdots \\left(\\large{\\rho}_{\\normalsize 3n-3}\\,\\large{\\rho}_{\\normalsize 3n-2}\\,\\large{\\rho}_{\\normalsize 3n-1}\\right)\\right]\\,\\sqrt{2-\\large{\\rho}_{\\normalsize 3n-2}} \\\\[2mm] \n&amp;= \\left[\\left(\\large{\\rho}_{\\normalsize 0}\\,\\large{\\rho}_{\\normalsize 1}\\,\\large{\\rho}_{\\normalsize 2}\\right) \\left(\\large{\\rho}_{\\normalsize 3}\\,\\large{\\rho}_{\\normalsize 4}\\,\\large{\\rho}_{\\normalsize 5}\\right) \\cdots \\right]\\,\\sqrt{2-\\large{\\rho}_{\\normalsize 3n-5}} \\\\[2mm] \n&amp;= \\quad \\cdots \\quad \\cdots \\quad \\cdots \\\\[2mm] \n&amp;= \\left[\\left(\\large{\\rho}_{\\normalsize 0}\\,\\large{\\rho}_{\\normalsize 1}\\,\\large{\\rho}_{\\normalsize 2}\\right)\\right]\\,\\sqrt{2-\\large{\\rho}_{\\normalsize 1}} = \\large{\\rho}_{\\normalsize 0}\\normalsize\\,\\sqrt{4-\\rho_{\\small 0}^2} \\normalsize = \\sqrt{2}\\,\\sqrt{4-2} = \\color{red}{2 \\quad \\forall\\space n \\ge 1}\n\\end{align} \n$$\nHence:\n$$ \\lim_{n\\rightarrow\\infty}\\frac{1}{\\rho} = \\lim_{n\\rightarrow\\infty}\\frac{\\sigma}{2} = \\sin\\left(\\frac{\\pi}{7}\\right) \\quad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad $$</p>\n\n<hr>\n\n<p>In order use this technique in general, for a period of length $\\color{red}{m}$, we have: \n$$ \n\\begin{align} \n\\color{red}{{\\large\\rho}_{-1}=0},\\,\\, {\\large\\rho}_{0} &amp;= \\sqrt{2\\pm{\\rho}_{-1}},\\,\\, {\\large\\rho}_{1}=\\sqrt{2\\pm{\\rho}_{0}},\\,\\, 
{\\large\\rho}_{2}=\\sqrt{2\\pm{\\rho}_{1}},\\,\\, \\cdots ,\\,\\, {\\large\\rho}_{m-1}=\\sqrt{2\\pm{\\rho}_{m-2}},\\,\\, \\cdots \\\\[2mm] \n{\\Large\\rho} &amp;= \\prod_{k=0}^{n}\\left( {\\large\\rho}_{mk}\\, {\\large\\rho}_{mk+1}\\, \\cdots\\, {\\large\\rho}_{mk+m-2}\\, {\\large\\rho}_{mk+m-1} \\right) \\qquad \\\\[2mm] \n&amp;= \\prod_{k=0}^{n}\\left( \\sqrt{2\\pm{\\large\\rho}_{mk-1}}\\, \\sqrt{2\\pm{\\large\\rho}_{mk}}\\, \\cdots\\, \\sqrt{2\\pm{\\large\\rho}_{mk+m-3}}\\, \\sqrt{2\\color{red}{\\pm}{\\large\\rho}_{mk+m-2}} \\right) \\\\[2mm] \n{\\Large\\sigma} &amp;= \\sqrt{2\\color{red}{\\mp}{\\large\\rho}_{mn+m-2}}\n\\end{align} \n$$ \nHence, $\\,\\left({\\large\\sigma}\\right)\\,$ <strong>must start with a different sign from the actual period!</strong> <br/> \nNevertheless, if $\\,{\\large\\sigma}\\,$ is not comply with this condition, we still have options to write the product form of related/equivalent function by applying simple manipulations to the original $\\,{\\large\\sigma}\\,$. <br/> <br/> \nConsider the following example $\\left\\{\\,\\text{period}\\,\\left(+,+,-,-,-\\right)\\,\\right\\}$: \n$$ \n\\begin{align} \n\\cos\\left(\\frac{\\pi}{11}\\right) &amp;= \\frac12\\,\\sqrt{2\\color{red}{+}\\sqrt{2+\\sqrt{2-\\sqrt{2-\\sqrt{2-\\sqrt{2\\color{blue}{+}\\sqrt{\\cdots}}}}}}} \\\\[2mm] \n\\implies \\qquad &amp;\\space \\underbrace{\\color{red}{+}{\\large/}+,-,-,-,} \\underbrace{\\color{blue}{+}{\\large/}+,-,-,-,} \\underbrace{\\color{blue}{+}{\\large/}+,-,-,-,} \\color{blue}{+}{\\large/}\\cdots \n\\end{align} \n$$ \nAnd as it appears, after isolating the first sign, the pattern is not obeying the condition. 
<br/> \nWe have two options to convert it into applicable format:\n$$ \n\\color{blue}{\\underline{\\left({\\bf1}\\right) - \\,\\text{Convert to}\\, \\sin(\\pi/11)}} \\space\\quad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad \\\\[2mm] \n\\begin{align} \n\\sin\\left(\\frac{\\pi}{11}\\right) &amp;= \\sqrt{1-\\cos^2\\left(\\frac{\\pi}{11}\\right)} = \\frac12\\,\\sqrt{4-\\left[2\\cos\\left(\\frac{\\pi}{11}\\right)\\right]^2} \\Rightarrow \\\\[2mm] \n\\sin\\left(\\frac{\\pi}{11}\\right) &amp;= \\frac12\\,\\sqrt{2\\color{red}{-}\\sqrt{2+\\sqrt{2-\\sqrt{2-\\sqrt{2-\\sqrt{2\\color{blue}{+}\\sqrt{\\cdots}}}}}}} \\\\[2mm] \n\\implies \\qquad &amp;\\space \\underbrace{\\color{red}{-}{\\large/}+,-,-,-,} \\underbrace{\\color{blue}{+}{\\large/}+,-,-,-,} \\underbrace{\\color{blue}{+}{\\large/}+,-,-,-,} \\color{blue}{+}{\\large/}\\cdots \\\\[2mm] \n\\end{align} \\\\[2mm] \n{\\Large\\rho} = \\prod_{k=0}^{n}\\left( \\sqrt{2-{\\large\\rho}_{5k-1}}\\, \\sqrt{2-{\\large\\rho}_{5k\\space\\space\\space\\space}}\\, \\sqrt{2-{\\large\\rho}_{5k+1}}\\, \\sqrt{2+{\\large\\rho}_{5k+2}}\\, \\sqrt{2\\color{red}{+}{\\large\\rho}_{5k+3}} \\right) \\\\[2mm] \n{\\Large\\sigma} = \\sqrt{2\\color{red}{-}{\\large\\rho}_{5n+3}}\\,,\\quad {\\Large\\rho}\\cdot{\\Large\\sigma}=2\\,,\\quad \\sin\\left(\\frac{\\pi}{11}\\right)=\\frac{{\\large\\sigma}}{2}=\\frac{1}{{\\large\\rho}}\\,,\\quad \\cos\\left(\\frac{\\pi}{11}\\right)=\\sqrt{1-\\left(1/{\\large\\rho}\\right)^2} \\\\[4mm] \n$$ \n$$ \n\\color{blue}{\\underline{\\left({\\bf2}\\right) - \\,\\text{Convert to}\\, \\cos(\\pi/22)}} \\space\\quad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad \\\\[2mm] \n\\begin{align} \n\\cos\\left(\\frac{\\pi}{22}\\right) &amp;= \\sqrt{\\frac12+\\frac12\\,\\cos\\left(\\frac{\\pi}{11}\\right)} = \\frac12\\,\\sqrt{2+2\\,\\cos\\left(\\frac{\\pi}{11}\\right)} \\Rightarrow \\\\[2mm] \n\\cos\\left(\\frac{\\pi}{22}\\right) &amp;= 
\\frac12\\,\\sqrt{2\\color{red}{+}\\sqrt{2+\\sqrt{2+\\sqrt{2-\\sqrt{2-\\sqrt{2\\color{blue}{-}\\sqrt{\\cdots}}}}}}} \\\\[2mm] \n\\implies \\qquad &amp;\\space \\underbrace{\\color{red}{+}{\\large/}+,+,-,-,} \\underbrace{\\color{blue}{-}{\\large/}+,+,-,-,} \\underbrace{\\color{blue}{-}{\\large/}+,+,-,-,} \\color{blue}{-}{\\large/}\\cdots \\\\[2mm] \n\\end{align} \\\\[2mm] \n{\\Large\\rho} = \\prod_{k=0}^{n}\\left( \\sqrt{2-{\\large\\rho}_{5k-1}}\\, \\sqrt{2-{\\large\\rho}_{5k\\space\\space\\space\\space}}\\, \\sqrt{2+{\\large\\rho}_{5k+1}}\\, \\sqrt{2+{\\large\\rho}_{5k+2}}\\, \\sqrt{2\\color{red}{-}{\\large\\rho}_{5k+3}} \\right) \\\\[2mm] \n{\\Large\\sigma} = \\sqrt{2\\color{red}{+}{\\large\\rho}_{5n+3}}\\,,\\quad {\\Large\\rho}\\cdot{\\Large\\sigma}=2\\,,\\quad \\cos\\left(\\frac{\\pi}{22}\\right)=\\frac{{\\large\\sigma}}{2}=\\frac{1}{{\\large\\rho}}\\,,\\quad \\cos\\left(\\frac{\\pi}{11}\\right)=\\sqrt{\\frac12+\\frac12\\,\\left(1/{\\large\\rho}\\right)} \\\\[2mm] \n$$ \nAnd here are Mathematica codes for these two options: <br/> </p>\n\n<pre>\n<code>\nn = 4;\nr = 1; e = 0; For[k = 0, n >= k, k++, {a = Sqrt[2 - e]; b = Sqrt[2 - a]; c = Sqrt[2 - b]; d = Sqrt[2 + c]; e = Sqrt[2 + d]; r = r (a b c d e)}];\ns = 1; x = Sqrt[2 + Sqrt[2 - Sqrt[2 - Sqrt[2]]]]; For[k = 0, (n - 1) >= k, k++, {x = Sqrt[2 + Sqrt[2 - Sqrt[2 - Sqrt[2 - Sqrt[2 + x]]]]]}]; s = Sqrt[2 - x];\nPrint[N[r s], \" \", N[1/r], \" \", N[s/2], \" \", N[Sin[Pi/11]]]\n\nn = 4;\nr = 1; e = 0; For[k = 0, n >= k, k++, {a = Sqrt[2 - e]; b = Sqrt[2 - a]; c = Sqrt[2 + b]; d = Sqrt[2 + c]; e = Sqrt[2 - d]; r = r (a b c d e)}];\ns = 1; x = Sqrt[2 + Sqrt[2 + Sqrt[2 - Sqrt[2]]]]; For[k = 0, (n - 1) >= k, k++, {x = Sqrt[2 + Sqrt[2 + Sqrt[2 - Sqrt[2 - Sqrt[2 - x]]]]]}]; s = Sqrt[2 + x];\nPrint[N[r s], \" \", N[1/r], \" \", N[s/2], \" \", N[Cos[Pi/22]]]\n</code>\n</pre>\n" }, { "answer_id": 4005275, "score": 1, "text": "<p>With the help of Half angle cosine formula it is possible to express many sine and cosine angles 
as finite or infinite nested square roots of 2 <a href=\"https://math.stackexchange.com/questions/3841604/solving-cyclic-infinite-nested-square-roots-of-2-as-cosine-functions\">here</a>.</p>\n<p>In this perspective <span class=\"math-container\">$2\\sin\\frac{\\pi}{7}$</span> can be solved as follows</p>\n<p><span class=\"math-container\">$2\\sin\\frac{\\pi}{7}$</span> = <span class=\"math-container\">$\\sqrt{2-2\\cos\\frac{2\\pi}{7}}$</span> = <span class=\"math-container\">$\\sqrt{2-\\sqrt{2+2\\cos\\frac{4\\pi}{7}}}$</span> = <span class=\"math-container\">$\\sqrt{2-\\sqrt{2-2\\cos\\frac{3\\pi}{7}}}$</span> = <span class=\"math-container\">$\\sqrt{2-\\sqrt{2-\\sqrt{2+2\\cos\\frac{6\\pi}{7}}}}$</span> = <span class=\"math-container\">$\\sqrt{2-\\sqrt{2-\\sqrt{2-2\\cos\\frac{\\pi}{7}}}}$</span> = <span class=\"math-container\">$\\sqrt{2-\\sqrt{2-\\sqrt{2-\\sqrt{2+2\\cos\\frac{2\\pi}{7}}}}}$</span></p>\n<p>One can observe repetition of <span class=\"math-container\">$2\\cos\\frac{2\\pi}{7}$</span> subsequently and <span class=\"math-container\">$\\therefore$</span> <span class=\"math-container\">$2\\sin\\frac{\\pi}{7}$</span> can be represented simply as</p>\n<blockquote>\n<p><strong><span class=\"math-container\">$$\\sqrt{2-\\sqrt{2-\\sqrt{2-\\sqrt{2+\\sqrt{2-\\sqrt{2-\\sqrt{2+...2\\cos\\frac{2\\pi}{7}}}}}}}}$$</span> with repeating pattern <span class=\"math-container\">$--+$</span> infinitely and first <span class=\"math-container\">$-$</span> as single nonrepeating sign</strong></p>\n</blockquote>\n<p>But getting infinite product has to be worked out</p>\n" }, { "answer_id": 2089595, "score": 2, "text": "<p>Let:\n$$ \n\\begin{align} \n\\LARGE{\\rho}\\normalsize &amp;= \\left(\\small \\sqrt{2} \\,\\sqrt{2-\\sqrt{2}} \\,\\sqrt{2+\\sqrt{2-\\sqrt{2}}} \\normalsize\\right) \\times \\\\[2mm] \n&amp; \\space \\left(\\small \\sqrt{2\\color{red}{-}\\sqrt{2+\\sqrt{2-\\sqrt{2}}}} \\,\\sqrt{2\\color{red}{-}\\sqrt{2-\\sqrt{2+\\sqrt{2-\\sqrt{2}}}}} \\,\\color{red}{\\sqrt{2+ 
\\color{black}{\\sqrt{2-\\sqrt{2-\\sqrt{2+\\sqrt{2-\\sqrt{2}}}}}}}} \\normalsize\\right) \\times \\cdots \\\\[4mm] \n\\LARGE{\\sigma}\\normalsize &amp;= \\color{red}{\\sqrt{2- \\color{black}{\\sqrt{2-\\sqrt{2-\\sqrt{2+\\sqrt{2-\\sqrt{2-\\sqrt{2+\\sqrt{\\cdots}}}}}}}}}} \\space\\space\\small\\{\\text{Same as last item of}\\,\\rho\\,\\text{with minus sign}\\} \n\\end{align} \n$$\nFundamentally, the two expressions are the same. <br/>\nAnd to transpose between them, we use the property:\n$$ \n\\boxed{ \\quad \\color{red}{\\Large{\\rho \\cdot \\sigma = 2}} \\quad } \\\\[4mm] \n\\frac{1}{\\rho} = \\frac{1}{\\rho} \\color{red}{\\cdot \\frac{\\sigma}{\\sigma}} = \\frac{\\sigma}{\\rho\\cdot\\sigma} = \\frac{\\sigma}{2} = \\quad\\sin\\left(\\frac{\\pi}{7}\\right)\\quad = \\frac{\\sigma}{2} = \\frac{\\sigma}{2} \\color{red}{\\cdot \\frac{\\rho}{\\rho}} = \\frac{\\rho\\cdot\\sigma/2}{\\rho} = \\frac{1}{\\rho}\n$$\n <br/>\nIn order to trace the simplifying process, let us consider the definition of both $\\left(\\large{\\rho}\\right)$ and $\\left(\\large{\\sigma}\\right)$ upto $n$, <br/>\nWhere $\\left(\\large{\\rho}\\right)$ definition is consist of <strong>three square-roots at a time</strong>, as illustrated above:\n$$ \n\\begin{align} \n\\LARGE{\\rho}\\normalsize &amp;= \\left(\\large{\\rho}_{\\normalsize 0}\\,\\large{\\rho}_{\\normalsize 1}\\,\\large{\\rho}_{\\normalsize 2}\\right) \\left(\\large{\\rho}_{\\normalsize 3}\\,\\large{\\rho}_{\\normalsize 4}\\,\\large{\\rho}_{\\normalsize 5}\\right) \\cdots \\left(\\large{\\rho}_{\\normalsize 3n-3}\\,\\large{\\rho}_{\\normalsize 3n-2}\\,\\large{\\rho}_{\\normalsize 3n-1}\\right) \\left(\\large{\\rho}_{\\normalsize 3n}\\,\\large{\\rho}_{\\normalsize 3n+1}\\,\\large{\\rho}_{\\normalsize 3n+2}\\right) \\\\[2mm] \n&amp;= \\left(\\large{\\rho}_{\\normalsize 0}\\,\\large{\\rho}_{\\normalsize 1}\\,\\large{\\rho}_{\\normalsize 2}\\right) \\left(\\large{\\rho}_{\\normalsize 3}\\,\\large{\\rho}_{\\normalsize 
4}\\,\\large{\\rho}_{\\normalsize 5}\\right) \\cdots \\left(\\large{\\rho}_{\\normalsize 3n-3}\\,\\large{\\rho}_{\\normalsize 3n-2}\\,\\large{\\rho}_{\\normalsize 3n-1}\\right) \\left(\\sqrt{2\\color{red}{-}\\rho_{\\small 3n-1}}\\,\\sqrt{2\\color{red}{-}\\rho_{\\small 3n}}\\,\\sqrt{2\\color{red}{+}\\rho_{\\small 3n+1}}\\right)\n\\end{align} \n$$\nAnd $\\left(\\large{\\sigma}\\right)$ equals <strong>same as last item of</strong> $\\large{\\rho}\\,$ <strong>with minus sign</strong>, as mentioned above:\n$$ \\LARGE{\\sigma}\\normalsize = \\sqrt{2\\color{red}{-}\\rho_{\\small 3n+1}} \\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad $$\nSo:\n$$ \n\\begin{align} \n&amp; \\left(\\large{\\rho}_{\\normalsize 3n}\\,\\large{\\rho}_{\\normalsize 3n+1}\\,\\large{\\rho}_{\\normalsize 3n+2}\\right) \\,\\sqrt{2-\\rho_{\\small 3n+1}} \\\\[2mm] \n&amp; \\qquad = \\sqrt{2-\\rho_{\\small 3n-1}}\\,\\sqrt{2-\\rho_{\\small 3n}}\\,\\sqrt{2+\\rho_{\\small 3n+1}}\\,\\sqrt{2-\\rho_{\\small 3n+1}} = \\sqrt{2-\\rho_{\\small 3n-1}}\\,\\sqrt{2-\\rho_{\\small 3n}}\\,\\sqrt{4-\\rho_{\\small 3n+1}^2} \\\\[2mm] \n&amp; \\qquad = \\sqrt{2-\\rho_{\\small 3n-1}}\\,\\sqrt{2-\\rho_{\\small 3n}}\\,\\sqrt{4-\\left(2-\\rho_{\\small 3n}\\right)} = \\sqrt{2-\\rho_{\\small 3n-1}}\\,\\sqrt{4-\\rho_{\\small 3n}^2} \\\\[2mm] \n&amp; \\qquad = \\sqrt{2-\\rho_{\\small 3n-1}}\\,\\sqrt{4-\\left(2-\\rho_{\\small 3n-1}\\right)} = \\sqrt{4-\\rho_{\\small 3n-1}^2} = \\sqrt{4-\\left(2+\\rho_{\\small 3n-2}\\right)} = \\color{red}{\\sqrt{2-\\rho_{\\small 3n-2}}}\n\\end{align} \n$$\nThus:\n$$ \n\\begin{align} \n\\color{red}{\\Large{\\rho \\cdot \\sigma}}\\normalsize &amp;= \\left[\\left(\\large{\\rho}_{\\normalsize 0}\\,\\large{\\rho}_{\\normalsize 1}\\,\\large{\\rho}_{\\normalsize 2}\\right) \\left(\\large{\\rho}_{\\normalsize 3}\\,\\large{\\rho}_{\\normalsize 4}\\,\\large{\\rho}_{\\normalsize 5}\\right) \\cdots \\left(\\large{\\rho}_{\\normalsize 
3n-3}\\,\\large{\\rho}_{\\normalsize 3n-2}\\,\\large{\\rho}_{\\normalsize 3n-1}\\right) \\left(\\large{\\rho}_{\\normalsize 3n}\\,\\large{\\rho}_{\\normalsize 3n+1}\\,\\large{\\rho}_{\\normalsize 3n+2}\\right)\\right]\\,\\sqrt{2-\\large{\\rho}_{\\normalsize 3n+1}} \\\\[2mm] \n&amp;= \\left[\\left(\\large{\\rho}_{\\normalsize 0}\\,\\large{\\rho}_{\\normalsize 1}\\,\\large{\\rho}_{\\normalsize 2}\\right) \\left(\\large{\\rho}_{\\normalsize 3}\\,\\large{\\rho}_{\\normalsize 4}\\,\\large{\\rho}_{\\normalsize 5}\\right) \\cdots \\left(\\large{\\rho}_{\\normalsize 3n-3}\\,\\large{\\rho}_{\\normalsize 3n-2}\\,\\large{\\rho}_{\\normalsize 3n-1}\\right)\\right]\\,\\sqrt{2-\\large{\\rho}_{\\normalsize 3n-2}} \\\\[2mm] \n&amp;= \\left[\\left(\\large{\\rho}_{\\normalsize 0}\\,\\large{\\rho}_{\\normalsize 1}\\,\\large{\\rho}_{\\normalsize 2}\\right) \\left(\\large{\\rho}_{\\normalsize 3}\\,\\large{\\rho}_{\\normalsize 4}\\,\\large{\\rho}_{\\normalsize 5}\\right) \\cdots \\right]\\,\\sqrt{2-\\large{\\rho}_{\\normalsize 3n-5}} \\\\[2mm] \n&amp;= \\quad \\cdots \\quad \\cdots \\quad \\cdots \\\\[2mm] \n&amp;= \\left[\\left(\\large{\\rho}_{\\normalsize 0}\\,\\large{\\rho}_{\\normalsize 1}\\,\\large{\\rho}_{\\normalsize 2}\\right)\\right]\\,\\sqrt{2-\\large{\\rho}_{\\normalsize 1}} = \\large{\\rho}_{\\normalsize 0}\\normalsize\\,\\sqrt{4-\\rho_{\\small 0}^2} \\normalsize = \\sqrt{2}\\,\\sqrt{4-2} = \\color{red}{2 \\quad \\forall\\space n \\ge 1}\n\\end{align} \n$$\nHence:\n$$ \\lim_{n\\rightarrow\\infty}\\frac{1}{\\rho} = \\lim_{n\\rightarrow\\infty}\\frac{\\sigma}{2} = \\sin\\left(\\frac{\\pi}{7}\\right) \\quad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad $$</p>\n\n<hr>\n\n<p>In order use this technique in general, for a period of length $\\color{red}{m}$, we have: \n$$ \n\\begin{align} \n\\color{red}{{\\large\\rho}_{-1}=0},\\,\\, {\\large\\rho}_{0} &amp;= \\sqrt{2\\pm{\\rho}_{-1}},\\,\\, {\\large\\rho}_{1}=\\sqrt{2\\pm{\\rho}_{0}},\\,\\, 
{\\large\\rho}_{2}=\\sqrt{2\\pm{\\rho}_{1}},\\,\\, \\cdots ,\\,\\, {\\large\\rho}_{m-1}=\\sqrt{2\\pm{\\rho}_{m-2}},\\,\\, \\cdots \\\\[2mm] \n{\\Large\\rho} &amp;= \\prod_{k=0}^{n}\\left( {\\large\\rho}_{mk}\\, {\\large\\rho}_{mk+1}\\, \\cdots\\, {\\large\\rho}_{mk+m-2}\\, {\\large\\rho}_{mk+m-1} \\right) \\qquad \\\\[2mm] \n&amp;= \\prod_{k=0}^{n}\\left( \\sqrt{2\\pm{\\large\\rho}_{mk-1}}\\, \\sqrt{2\\pm{\\large\\rho}_{mk}}\\, \\cdots\\, \\sqrt{2\\pm{\\large\\rho}_{mk+m-3}}\\, \\sqrt{2\\color{red}{\\pm}{\\large\\rho}_{mk+m-2}} \\right) \\\\[2mm] \n{\\Large\\sigma} &amp;= \\sqrt{2\\color{red}{\\mp}{\\large\\rho}_{mn+m-2}}\n\\end{align} \n$$ \nHence, $\\,\\left({\\large\\sigma}\\right)\\,$ <strong>must start with a different sign from the actual period!</strong> <br/> \nNevertheless, if $\\,{\\large\\sigma}\\,$ is not comply with this condition, we still have options to write the product form of related/equivalent function by applying simple manipulations to the original $\\,{\\large\\sigma}\\,$. <br/> <br/> \nConsider the following example $\\left\\{\\,\\text{period}\\,\\left(+,+,-,-,-\\right)\\,\\right\\}$: \n$$ \n\\begin{align} \n\\cos\\left(\\frac{\\pi}{11}\\right) &amp;= \\frac12\\,\\sqrt{2\\color{red}{+}\\sqrt{2+\\sqrt{2-\\sqrt{2-\\sqrt{2-\\sqrt{2\\color{blue}{+}\\sqrt{\\cdots}}}}}}} \\\\[2mm] \n\\implies \\qquad &amp;\\space \\underbrace{\\color{red}{+}{\\large/}+,-,-,-,} \\underbrace{\\color{blue}{+}{\\large/}+,-,-,-,} \\underbrace{\\color{blue}{+}{\\large/}+,-,-,-,} \\color{blue}{+}{\\large/}\\cdots \n\\end{align} \n$$ \nAnd as it appears, after isolating the first sign, the pattern is not obeying the condition. 
<br/> \nWe have two options to convert it into applicable format:\n$$ \n\\color{blue}{\\underline{\\left({\\bf1}\\right) - \\,\\text{Convert to}\\, \\sin(\\pi/11)}} \\space\\quad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad \\\\[2mm] \n\\begin{align} \n\\sin\\left(\\frac{\\pi}{11}\\right) &amp;= \\sqrt{1-\\cos^2\\left(\\frac{\\pi}{11}\\right)} = \\frac12\\,\\sqrt{4-\\left[2\\cos\\left(\\frac{\\pi}{11}\\right)\\right]^2} \\Rightarrow \\\\[2mm] \n\\sin\\left(\\frac{\\pi}{11}\\right) &amp;= \\frac12\\,\\sqrt{2\\color{red}{-}\\sqrt{2+\\sqrt{2-\\sqrt{2-\\sqrt{2-\\sqrt{2\\color{blue}{+}\\sqrt{\\cdots}}}}}}} \\\\[2mm] \n\\implies \\qquad &amp;\\space \\underbrace{\\color{red}{-}{\\large/}+,-,-,-,} \\underbrace{\\color{blue}{+}{\\large/}+,-,-,-,} \\underbrace{\\color{blue}{+}{\\large/}+,-,-,-,} \\color{blue}{+}{\\large/}\\cdots \\\\[2mm] \n\\end{align} \\\\[2mm] \n{\\Large\\rho} = \\prod_{k=0}^{n}\\left( \\sqrt{2-{\\large\\rho}_{5k-1}}\\, \\sqrt{2-{\\large\\rho}_{5k\\space\\space\\space\\space}}\\, \\sqrt{2-{\\large\\rho}_{5k+1}}\\, \\sqrt{2+{\\large\\rho}_{5k+2}}\\, \\sqrt{2\\color{red}{+}{\\large\\rho}_{5k+3}} \\right) \\\\[2mm] \n{\\Large\\sigma} = \\sqrt{2\\color{red}{-}{\\large\\rho}_{5n+3}}\\,,\\quad {\\Large\\rho}\\cdot{\\Large\\sigma}=2\\,,\\quad \\sin\\left(\\frac{\\pi}{11}\\right)=\\frac{{\\large\\sigma}}{2}=\\frac{1}{{\\large\\rho}}\\,,\\quad \\cos\\left(\\frac{\\pi}{11}\\right)=\\sqrt{1-\\left(1/{\\large\\rho}\\right)^2} \\\\[4mm] \n$$ \n$$ \n\\color{blue}{\\underline{\\left({\\bf2}\\right) - \\,\\text{Convert to}\\, \\cos(\\pi/22)}} \\space\\quad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad \\\\[2mm] \n\\begin{align} \n\\cos\\left(\\frac{\\pi}{22}\\right) &amp;= \\sqrt{\\frac12+\\frac12\\,\\cos\\left(\\frac{\\pi}{11}\\right)} = \\frac12\\,\\sqrt{2+2\\,\\cos\\left(\\frac{\\pi}{11}\\right)} \\Rightarrow \\\\[2mm] \n\\cos\\left(\\frac{\\pi}{22}\\right) &amp;= 
\\frac12\\,\\sqrt{2\\color{red}{+}\\sqrt{2+\\sqrt{2+\\sqrt{2-\\sqrt{2-\\sqrt{2\\color{blue}{-}\\sqrt{\\cdots}}}}}}} \\\\[2mm] \n\\implies \\qquad &amp;\\space \\underbrace{\\color{red}{+}{\\large/}+,+,-,-,} \\underbrace{\\color{blue}{-}{\\large/}+,+,-,-,} \\underbrace{\\color{blue}{-}{\\large/}+,+,-,-,} \\color{blue}{-}{\\large/}\\cdots \\\\[2mm] \n\\end{align} \\\\[2mm] \n{\\Large\\rho} = \\prod_{k=0}^{n}\\left( \\sqrt{2-{\\large\\rho}_{5k-1}}\\, \\sqrt{2-{\\large\\rho}_{5k\\space\\space\\space\\space}}\\, \\sqrt{2+{\\large\\rho}_{5k+1}}\\, \\sqrt{2+{\\large\\rho}_{5k+2}}\\, \\sqrt{2\\color{red}{-}{\\large\\rho}_{5k+3}} \\right) \\\\[2mm] \n{\\Large\\sigma} = \\sqrt{2\\color{red}{+}{\\large\\rho}_{5n+3}}\\,,\\quad {\\Large\\rho}\\cdot{\\Large\\sigma}=2\\,,\\quad \\cos\\left(\\frac{\\pi}{22}\\right)=\\frac{{\\large\\sigma}}{2}=\\frac{1}{{\\large\\rho}}\\,,\\quad \\cos\\left(\\frac{\\pi}{11}\\right)=\\sqrt{\\frac12+\\frac12\\,\\left(1/{\\large\\rho}\\right)} \\\\[2mm] \n$$ \nAnd here are Mathematica codes for these two options: <br/> </p>\n\n<pre>\n<code>\nn = 4;\nr = 1; e = 0; For[k = 0, n >= k, k++, {a = Sqrt[2 - e]; b = Sqrt[2 - a]; c = Sqrt[2 - b]; d = Sqrt[2 + c]; e = Sqrt[2 + d]; r = r (a b c d e)}];\ns = 1; x = Sqrt[2 + Sqrt[2 - Sqrt[2 - Sqrt[2]]]]; For[k = 0, (n - 1) >= k, k++, {x = Sqrt[2 + Sqrt[2 - Sqrt[2 - Sqrt[2 - Sqrt[2 + x]]]]]}]; s = Sqrt[2 - x];\nPrint[N[r s], \" \", N[1/r], \" \", N[s/2], \" \", N[Sin[Pi/11]]]\n\nn = 4;\nr = 1; e = 0; For[k = 0, n >= k, k++, {a = Sqrt[2 - e]; b = Sqrt[2 - a]; c = Sqrt[2 + b]; d = Sqrt[2 + c]; e = Sqrt[2 - d]; r = r (a b c d e)}];\ns = 1; x = Sqrt[2 + Sqrt[2 + Sqrt[2 - Sqrt[2]]]]; For[k = 0, (n - 1) >= k, k++, {x = Sqrt[2 + Sqrt[2 + Sqrt[2 - Sqrt[2 - Sqrt[2 - x]]]]]}]; s = Sqrt[2 + x];\nPrint[N[r s], \" \", N[1/r], \" \", N[s/2], \" \", N[Cos[Pi/22]]]\n</code>\n</pre>\n" }, { "answer_id": 4005275, "score": 1, "text": "<p>With the help of Half angle cosine formula it is possible to express many sine and cosine angles 
as finite or infinite nested square roots of 2 <a href=\"https://math.stackexchange.com/questions/3841604/solving-cyclic-infinite-nested-square-roots-of-2-as-cosine-functions\">here</a>.</p>\n<p>In this perspective <span class=\"math-container\">$2\\sin\\frac{\\pi}{7}$</span> can be solved as follows</p>\n<p><span class=\"math-container\">$2\\sin\\frac{\\pi}{7}$</span> = <span class=\"math-container\">$\\sqrt{2-2\\cos\\frac{2\\pi}{7}}$</span> = <span class=\"math-container\">$\\sqrt{2-\\sqrt{2+2\\cos\\frac{4\\pi}{7}}}$</span> = <span class=\"math-container\">$\\sqrt{2-\\sqrt{2-2\\cos\\frac{3\\pi}{7}}}$</span> = <span class=\"math-container\">$\\sqrt{2-\\sqrt{2-\\sqrt{2+2\\cos\\frac{6\\pi}{7}}}}$</span> = <span class=\"math-container\">$\\sqrt{2-\\sqrt{2-\\sqrt{2-2\\cos\\frac{\\pi}{7}}}}$</span> = <span class=\"math-container\">$\\sqrt{2-\\sqrt{2-\\sqrt{2-\\sqrt{2+2\\cos\\frac{2\\pi}{7}}}}}$</span></p>\n<p>One can observe repetition of <span class=\"math-container\">$2\\cos\\frac{2\\pi}{7}$</span> subsequently and <span class=\"math-container\">$\\therefore$</span> <span class=\"math-container\">$2\\sin\\frac{\\pi}{7}$</span> can be represented simply as</p>\n<blockquote>\n<p><strong><span class=\"math-container\">$$\\sqrt{2-\\sqrt{2-\\sqrt{2-\\sqrt{2+\\sqrt{2-\\sqrt{2-\\sqrt{2+...2\\cos\\frac{2\\pi}{7}}}}}}}}$$</span> with repeating pattern <span class=\"math-container\">$--+$</span> infinitely and first <span class=\"math-container\">$-$</span> as single nonrepeating sign</strong></p>\n</blockquote>\n<p>But getting infinite product has to be worked out</p>\n" } ]
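The two claims above — the repeating $--+$ radical pattern for $2\sin\frac{\pi}{7}$ and the telescoping identity $\rho\cdot\sigma=2$ behind the infinite product — are easy to check numerically. The sketch below is a Python translation of the Mathematica-style loop used in the answers (the function name and the cutoff $n=12$ are my own choices, not from the original):

```python
import math

def viete_like_sin_pi_over_7(n):
    """Partial product rho and closing radical sigma for sin(pi/7),
    using the repeating sign period (-, -, +) inside the radicals:
        rho_{3k}   = sqrt(2 - rho_{3k-1})
        rho_{3k+1} = sqrt(2 - rho_{3k})
        rho_{3k+2} = sqrt(2 + rho_{3k+1})
    with rho_{-1} = 0, and sigma = sqrt(2 - rho_{3n+1})."""
    prev = 0.0    # rho_{-1} = 0, so the first factor is sqrt(2)
    rho = 1.0
    b = 0.0
    for _ in range(n + 1):
        a = math.sqrt(2 - prev)    # rho_{3k}
        b = math.sqrt(2 - a)       # rho_{3k+1}
        c = math.sqrt(2 + b)       # rho_{3k+2}
        rho *= a * b * c
        prev = c
    sigma = math.sqrt(2 - b)
    return rho, sigma

rho, sigma = viete_like_sin_pi_over_7(12)
print(rho * sigma)            # telescopes to 2 (up to rounding) for every n
print(1 / rho, sigma / 2)     # both approach sin(pi/7)
print(math.sin(math.pi / 7))
```

For $n=12$ both $1/\rho$ and $\sigma/2$ agree with $\sin(\pi/7)\approx0.4338837$ to roughly ten decimal places, while $\rho\cdot\sigma$ stays at $2$ for every $n$, exactly as the telescoping argument predicts.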
Question: A Viète-like infinite product for $\sin (\pi/7)$ Answer 1: <p>Let: $$ \begin{align} \LARGE{\rho}\normalsize &amp;= \left(\small \sqrt{2} \,\sqrt{2-\sqrt{2}} \,\sqrt{2+\sqrt{2-\sqrt{2}}} \normalsize\right) \times \\[2mm] &amp; \space \left(\small \sqrt{2\color{red}{-}\sqrt{2+\sqrt{2-\sqrt{2}}}} \,\sqrt{2\color{red}{-}\sqrt{2-\sqrt{2+\sqrt{2-\sqrt{2}}}}} \,\color{red}{\sqrt{2+ \color{black}{\sqrt{2-\sqrt{2-\sqrt{2+\sqrt{2-\sqrt{2}}}}}}}} \normalsize\right) \times \cdots \\[4mm] \LARGE{\sigma}\normalsize &amp;= \color{red}{\sqrt{2- \color{black}{\sqrt{2-\sqrt{2-\sqrt{2+\sqrt{2-\sqrt{2-\sqrt{2+\sqrt{\cdots}}}}}}}}}} \space\space\small\{\text{Same as last item of}\,\rho\,\text{with minus sign}\} \end{align} $$ Fundamentally, the two expressions are the same. <br/> And to transpose between them, we use the property: $$ \boxed{ \quad \color{red}{\Large{\rho \cdot \sigma = 2}} \quad } \\[4mm] \frac{1}{\rho} = \frac{1}{\rho} \color{red}{\cdot \frac{\sigma}{\sigma}} = \frac{\sigma}{\rho\cdot\sigma} = \frac{\sigma}{2} = \quad\sin\left(\frac{\pi}{7}\right)\quad = \frac{\sigma}{2} = \frac{\sigma}{2} \color{red}{\cdot \frac{\rho}{\rho}} = \frac{\rho\cdot\sigma/2}{\rho} = \frac{1}{\rho} $$ <br/> In order to trace the simplifying process, let us consider the definitions of both $\left(\large{\rho}\right)$ and $\left(\large{\sigma}\right)$ up to $n$, <br/> where the definition of $\left(\large{\rho}\right)$ consists of <strong>three square roots at a time</strong>, as illustrated above: $$ \begin{align} \LARGE{\rho}\normalsize &amp;= \left(\large{\rho}_{\normalsize 0}\,\large{\rho}_{\normalsize 1}\,\large{\rho}_{\normalsize 2}\right) \left(\large{\rho}_{\normalsize 3}\,\large{\rho}_{\normalsize 4}\,\large{\rho}_{\normalsize 5}\right) \cdots \left(\large{\rho}_{\normalsize 3n-3}\,\large{\rho}_{\normalsize 3n-2}\,\large{\rho}_{\normalsize 3n-1}\right) \left(\large{\rho}_{\normalsize 3n}\,\large{\rho}_{\normalsize 3n+1}\,\large{\rho}_{\normalsize 3n+2}\right) \\[2mm] &amp;=
\left(\large{\rho}_{\normalsize 0}\,\large{\rho}_{\normalsize 1}\,\large{\rho}_{\normalsize 2}\right) \left(\large{\rho}_{\normalsize 3}\,\large{\rho}_{\normalsize 4}\,\large{\rho}_{\normalsize 5}\right) \cdots \left(\large{\rho}_{\normalsize 3n-3}\,\large{\rho}_{\normalsize 3n-2}\,\large{\rho}_{\normalsize 3n-1}\right) \left(\sqrt{2\color{red}{-}\rho_{\small 3n-1}}\,\sqrt{2\color{red}{-}\rho_{\small 3n}}\,\sqrt{2\color{red}{+}\rho_{\small 3n+1}}\right) \end{align} $$ And $\left(\large{\sigma}\right)$ is <strong>the same as the last item of</strong> $\large{\rho}\,$ <strong>but with a minus sign</strong>, as mentioned above: $$ \LARGE{\sigma}\normalsize = \sqrt{2\color{red}{-}\rho_{\small 3n+1}} \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad $$ So: $$ \begin{align} &amp; \left(\large{\rho}_{\normalsize 3n}\,\large{\rho}_{\normalsize 3n+1}\,\large{\rho}_{\normalsize 3n+2}\right) \,\sqrt{2-\rho_{\small 3n+1}} \\[2mm] &amp; \qquad = \sqrt{2-\rho_{\small 3n-1}}\,\sqrt{2-\rho_{\small 3n}}\,\sqrt{2+\rho_{\small 3n+1}}\,\sqrt{2-\rho_{\small 3n+1}} = \sqrt{2-\rho_{\small 3n-1}}\,\sqrt{2-\rho_{\small 3n}}\,\sqrt{4-\rho_{\small 3n+1}^2} \\[2mm] &amp; \qquad = \sqrt{2-\rho_{\small 3n-1}}\,\sqrt{2-\rho_{\small 3n}}\,\sqrt{4-\left(2-\rho_{\small 3n}\right)} = \sqrt{2-\rho_{\small 3n-1}}\,\sqrt{4-\rho_{\small 3n}^2} \\[2mm] &amp; \qquad = \sqrt{2-\rho_{\small 3n-1}}\,\sqrt{4-\left(2-\rho_{\small 3n-1}\right)} = \sqrt{4-\rho_{\small 3n-1}^2} = \sqrt{4-\left(2+\rho_{\small 3n-2}\right)} = \color{red}{\sqrt{2-\rho_{\small 3n-2}}} \end{align} $$ Thus: $$ \begin{align} \color{red}{\Large{\rho \cdot \sigma}}\normalsize &amp;= \left[\left(\large{\rho}_{\normalsize 0}\,\large{\rho}_{\normalsize 1}\,\large{\rho}_{\normalsize 2}\right) \left(\large{\rho}_{\normalsize 3}\,\large{\rho}_{\normalsize 4}\,\large{\rho}_{\normalsize 5}\right) \cdots \left(\large{\rho}_{\normalsize 3n-3}\,\large{\rho}_{\normalsize 3n-2}\,\large{\rho}_{\normalsize 3n-1}\right)
\left(\large{\rho}_{\normalsize 3n}\,\large{\rho}_{\normalsize 3n+1}\,\large{\rho}_{\normalsize 3n+2}\right)\right]\,\sqrt{2-\large{\rho}_{\normalsize 3n+1}} \\[2mm] &amp;= \left[\left(\large{\rho}_{\normalsize 0}\,\large{\rho}_{\normalsize 1}\,\large{\rho}_{\normalsize 2}\right) \left(\large{\rho}_{\normalsize 3}\,\large{\rho}_{\normalsize 4}\,\large{\rho}_{\normalsize 5}\right) \cdots \left(\large{\rho}_{\normalsize 3n-3}\,\large{\rho}_{\normalsize 3n-2}\,\large{\rho}_{\normalsize 3n-1}\right)\right]\,\sqrt{2-\large{\rho}_{\normalsize 3n-2}} \\[2mm] &amp;= \left[\left(\large{\rho}_{\normalsize 0}\,\large{\rho}_{\normalsize 1}\,\large{\rho}_{\normalsize 2}\right) \left(\large{\rho}_{\normalsize 3}\,\large{\rho}_{\normalsize 4}\,\large{\rho}_{\normalsize 5}\right) \cdots \right]\,\sqrt{2-\large{\rho}_{\normalsize 3n-5}} \\[2mm] &amp;= \quad \cdots \quad \cdots \quad \cdots \\[2mm] &amp;= \left[\left(\large{\rho}_{\normalsize 0}\,\large{\rho}_{\normalsize 1}\,\large{\rho}_{\normalsize 2}\right)\right]\,\sqrt{2-\large{\rho}_{\normalsize 1}} = \large{\rho}_{\normalsize 0}\normalsize\,\sqrt{4-\rho_{\small 0}^2} \normalsize = \sqrt{2}\,\sqrt{4-2} = \color{red}{2 \quad \forall\space n \ge 1} \end{align} $$ Hence: $$ \lim_{n\rightarrow\infty}\frac{1}{\rho} = \lim_{n\rightarrow\infty}\frac{\sigma}{2} = \sin\left(\frac{\pi}{7}\right) \quad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad $$</p> <hr> <p>In order to use this technique in general, for a period of length $\color{red}{m}$, we have: $$ \begin{align} \color{red}{{\large\rho}_{-1}=0},\,\, {\large\rho}_{0} &amp;= \sqrt{2\pm{\rho}_{-1}},\,\, {\large\rho}_{1}=\sqrt{2\pm{\rho}_{0}},\,\, {\large\rho}_{2}=\sqrt{2\pm{\rho}_{1}},\,\, \cdots ,\,\, {\large\rho}_{m-1}=\sqrt{2\pm{\rho}_{m-2}},\,\, \cdots \\[2mm] {\Large\rho} &amp;= \prod_{k=0}^{n}\left( {\large\rho}_{mk}\, {\large\rho}_{mk+1}\, \cdots\, {\large\rho}_{mk+m-2}\, {\large\rho}_{mk+m-1} \right) \qquad \\[2mm] &amp;= \prod_{k=0}^{n}\left(
\sqrt{2\pm{\large\rho}_{mk-1}}\, \sqrt{2\pm{\large\rho}_{mk}}\, \cdots\, \sqrt{2\pm{\large\rho}_{mk+m-3}}\, \sqrt{2\color{red}{\pm}{\large\rho}_{mk+m-2}} \right) \\[2mm] {\Large\sigma} &amp;= \sqrt{2\color{red}{\mp}{\large\rho}_{mn+m-2}} \end{align} $$ Hence, $\,\left({\large\sigma}\right)\,$ <strong>must start with a different sign from the actual period!</strong> <br/> Nevertheless, if $\,{\large\sigma}\,$ does not comply with this condition, we still have options to write the product form of a related/equivalent function by applying simple manipulations to the original $\,{\large\sigma}\,$. <br/> <br/> Consider the following example $\left\{\,\text{period}\,\left(+,+,-,-,-\right)\,\right\}$: $$ \begin{align} \cos\left(\frac{\pi}{11}\right) &amp;= \frac12\,\sqrt{2\color{red}{+}\sqrt{2+\sqrt{2-\sqrt{2-\sqrt{2-\sqrt{2\color{blue}{+}\sqrt{\cdots}}}}}}} \\[2mm] \implies \qquad &amp;\space \underbrace{\color{red}{+}{\large/}+,-,-,-,} \underbrace{\color{blue}{+}{\large/}+,-,-,-,} \underbrace{\color{blue}{+}{\large/}+,-,-,-,} \color{blue}{+}{\large/}\cdots \end{align} $$ And as it appears, after isolating the first sign, the pattern does not obey the condition.
<br/> We have two options to convert it into applicable format: $$ \color{blue}{\underline{\left({\bf1}\right) - \,\text{Convert to}\, \sin(\pi/11)}} \space\quad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \\[2mm] \begin{align} \sin\left(\frac{\pi}{11}\right) &amp;= \sqrt{1-\cos^2\left(\frac{\pi}{11}\right)} = \frac12\,\sqrt{4-\left[2\cos\left(\frac{\pi}{11}\right)\right]^2} \Rightarrow \\[2mm] \sin\left(\frac{\pi}{11}\right) &amp;= \frac12\,\sqrt{2\color{red}{-}\sqrt{2+\sqrt{2-\sqrt{2-\sqrt{2-\sqrt{2\color{blue}{+}\sqrt{\cdots}}}}}}} \\[2mm] \implies \qquad &amp;\space \underbrace{\color{red}{-}{\large/}+,-,-,-,} \underbrace{\color{blue}{+}{\large/}+,-,-,-,} \underbrace{\color{blue}{+}{\large/}+,-,-,-,} \color{blue}{+}{\large/}\cdots \\[2mm] \end{align} \\[2mm] {\Large\rho} = \prod_{k=0}^{n}\left( \sqrt{2-{\large\rho}_{5k-1}}\, \sqrt{2-{\large\rho}_{5k\space\space\space\space}}\, \sqrt{2-{\large\rho}_{5k+1}}\, \sqrt{2+{\large\rho}_{5k+2}}\, \sqrt{2\color{red}{+}{\large\rho}_{5k+3}} \right) \\[2mm] {\Large\sigma} = \sqrt{2\color{red}{-}{\large\rho}_{5n+3}}\,,\quad {\Large\rho}\cdot{\Large\sigma}=2\,,\quad \sin\left(\frac{\pi}{11}\right)=\frac{{\large\sigma}}{2}=\frac{1}{{\large\rho}}\,,\quad \cos\left(\frac{\pi}{11}\right)=\sqrt{1-\left(1/{\large\rho}\right)^2} \\[4mm] $$ $$ \color{blue}{\underline{\left({\bf2}\right) - \,\text{Convert to}\, \cos(\pi/22)}} \space\quad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \\[2mm] \begin{align} \cos\left(\frac{\pi}{22}\right) &amp;= \sqrt{\frac12+\frac12\,\cos\left(\frac{\pi}{11}\right)} = \frac12\,\sqrt{2+2\,\cos\left(\frac{\pi}{11}\right)} \Rightarrow \\[2mm] \cos\left(\frac{\pi}{22}\right) &amp;= \frac12\,\sqrt{2\color{red}{+}\sqrt{2+\sqrt{2+\sqrt{2-\sqrt{2-\sqrt{2\color{blue}{-}\sqrt{\cdots}}}}}}} \\[2mm] \implies \qquad &amp;\space \underbrace{\color{red}{+}{\large/}+,+,-,-,} \underbrace{\color{blue}{-}{\large/}+,+,-,-,} 
\underbrace{\color{blue}{-}{\large/}+,+,-,-,} \color{blue}{-}{\large/}\cdots \\[2mm] \end{align} \\[2mm] {\Large\rho} = \prod_{k=0}^{n}\left( \sqrt{2-{\large\rho}_{5k-1}}\, \sqrt{2-{\large\rho}_{5k\space\space\space\space}}\, \sqrt{2+{\large\rho}_{5k+1}}\, \sqrt{2+{\large\rho}_{5k+2}}\, \sqrt{2\color{red}{-}{\large\rho}_{5k+3}} \right) \\[2mm] {\Large\sigma} = \sqrt{2\color{red}{+}{\large\rho}_{5n+3}}\,,\quad {\Large\rho}\cdot{\Large\sigma}=2\,,\quad \cos\left(\frac{\pi}{22}\right)=\frac{{\large\sigma}}{2}=\frac{1}{{\large\rho}}\,,\quad \cos\left(\frac{\pi}{11}\right)=\sqrt{\frac12+\frac12\,\left(1/{\large\rho}\right)} \\[2mm] $$ And here are Mathematica codes for these two options: <br/> </p> <pre> <code> n = 4; r = 1; e = 0; For[k = 0, n >= k, k++, {a = Sqrt[2 - e]; b = Sqrt[2 - a]; c = Sqrt[2 - b]; d = Sqrt[2 + c]; e = Sqrt[2 + d]; r = r (a b c d e)}]; s = 1; x = Sqrt[2 + Sqrt[2 - Sqrt[2 - Sqrt[2]]]]; For[k = 0, (n - 1) >= k, k++, {x = Sqrt[2 + Sqrt[2 - Sqrt[2 - Sqrt[2 - Sqrt[2 + x]]]]]}]; s = Sqrt[2 - x]; Print[N[r s], " ", N[1/r], " ", N[s/2], " ", N[Sin[Pi/11]]] n = 4; r = 1; e = 0; For[k = 0, n >= k, k++, {a = Sqrt[2 - e]; b = Sqrt[2 - a]; c = Sqrt[2 + b]; d = Sqrt[2 + c]; e = Sqrt[2 - d]; r = r (a b c d e)}]; s = 1; x = Sqrt[2 + Sqrt[2 + Sqrt[2 - Sqrt[2]]]]; For[k = 0, (n - 1) >= k, k++, {x = Sqrt[2 + Sqrt[2 + Sqrt[2 - Sqrt[2 - Sqrt[2 - x]]]]]}]; s = Sqrt[2 + x]; Print[N[r s], " ", N[1/r], " ", N[s/2], " ", N[Cos[Pi/22]]] </code> </pre> Answer 2: <p>With the help of Half angle cosine formula it is possible to express many sine and cosine angles as finite or infinite nested square roots of 2 <a href="https://math.stackexchange.com/questions/3841604/solving-cyclic-infinite-nested-square-roots-of-2-as-cosine-functions">here</a>.</p> <p>In this perspective <span class="math-container">$2\sin\frac{\pi}{7}$</span> can be solved as follows</p> <p><span class="math-container">$2\sin\frac{\pi}{7}$</span> = <span 
class="math-container">$\sqrt{2-2\cos\frac{2\pi}{7}}$</span> = <span class="math-container">$\sqrt{2-\sqrt{2+2\cos\frac{4\pi}{7}}}$</span> = <span class="math-container">$\sqrt{2-\sqrt{2-2\cos\frac{3\pi}{7}}}$</span> = <span class="math-container">$\sqrt{2-\sqrt{2-\sqrt{2+2\cos\frac{6\pi}{7}}}}$</span> = <span class="math-container">$\sqrt{2-\sqrt{2-\sqrt{2-2\cos\frac{\pi}{7}}}}$</span> = <span class="math-container">$\sqrt{2-\sqrt{2-\sqrt{2-\sqrt{2+2\cos\frac{2\pi}{7}}}}}$</span></p> <p>One can observe repetition of <span class="math-container">$2\cos\frac{2\pi}{7}$</span> subsequently and <span class="math-container">$\therefore$</span> <span class="math-container">$2\sin\frac{\pi}{7}$</span> can be represented simply as</p> <blockquote> <p><strong><span class="math-container">$$\sqrt{2-\sqrt{2-\sqrt{2-\sqrt{2+\sqrt{2-\sqrt{2-\sqrt{2+...2\cos\frac{2\pi}{7}}}}}}}}$$</span> with repeating pattern <span class="math-container">$--+$</span> infinitely and first <span class="math-container">$-$</span> as single nonrepeating sign</strong></p> </blockquote> <p>But getting infinite product has to be worked out</p> Answer 3: <p>Let: $$ \begin{align} \LARGE{\rho}\normalsize &amp;= \left(\small \sqrt{2} \,\sqrt{2-\sqrt{2}} \,\sqrt{2+\sqrt{2-\sqrt{2}}} \normalsize\right) \times \\[2mm] &amp; \space \left(\small \sqrt{2\color{red}{-}\sqrt{2+\sqrt{2-\sqrt{2}}}} \,\sqrt{2\color{red}{-}\sqrt{2-\sqrt{2+\sqrt{2-\sqrt{2}}}}} \,\color{red}{\sqrt{2+ \color{black}{\sqrt{2-\sqrt{2-\sqrt{2+\sqrt{2-\sqrt{2}}}}}}}} \normalsize\right) \times \cdots \\[4mm] \LARGE{\sigma}\normalsize &amp;= \color{red}{\sqrt{2- \color{black}{\sqrt{2-\sqrt{2-\sqrt{2+\sqrt{2-\sqrt{2-\sqrt{2+\sqrt{\cdots}}}}}}}}}} \space\space\small\{\text{Same as last item of}\,\rho\,\text{with minus sign}\} \end{align} $$ Fundamentally, the two expressions are the same. 
<br/> And to transpose between them, we use the property: $$ \boxed{ \quad \color{red}{\Large{\rho \cdot \sigma = 2}} \quad } \\[4mm] \frac{1}{\rho} = \frac{1}{\rho} \color{red}{\cdot \frac{\sigma}{\sigma}} = \frac{\sigma}{\rho\cdot\sigma} = \frac{\sigma}{2} = \quad\sin\left(\frac{\pi}{7}\right)\quad = \frac{\sigma}{2} = \frac{\sigma}{2} \color{red}{\cdot \frac{\rho}{\rho}} = \frac{\rho\cdot\sigma/2}{\rho} = \frac{1}{\rho} $$ <br/> In order to trace the simplifying process, let us consider the definition of both $\left(\large{\rho}\right)$ and $\left(\large{\sigma}\right)$ upto $n$, <br/> Where $\left(\large{\rho}\right)$ definition is consist of <strong>three square-roots at a time</strong>, as illustrated above: $$ \begin{align} \LARGE{\rho}\normalsize &amp;= \left(\large{\rho}_{\normalsize 0}\,\large{\rho}_{\normalsize 1}\,\large{\rho}_{\normalsize 2}\right) \left(\large{\rho}_{\normalsize 3}\,\large{\rho}_{\normalsize 4}\,\large{\rho}_{\normalsize 5}\right) \cdots \left(\large{\rho}_{\normalsize 3n-3}\,\large{\rho}_{\normalsize 3n-2}\,\large{\rho}_{\normalsize 3n-1}\right) \left(\large{\rho}_{\normalsize 3n}\,\large{\rho}_{\normalsize 3n+1}\,\large{\rho}_{\normalsize 3n+2}\right) \\[2mm] &amp;= \left(\large{\rho}_{\normalsize 0}\,\large{\rho}_{\normalsize 1}\,\large{\rho}_{\normalsize 2}\right) \left(\large{\rho}_{\normalsize 3}\,\large{\rho}_{\normalsize 4}\,\large{\rho}_{\normalsize 5}\right) \cdots \left(\large{\rho}_{\normalsize 3n-3}\,\large{\rho}_{\normalsize 3n-2}\,\large{\rho}_{\normalsize 3n-1}\right) \left(\sqrt{2\color{red}{-}\rho_{\small 3n-1}}\,\sqrt{2\color{red}{-}\rho_{\small 3n}}\,\sqrt{2\color{red}{+}\rho_{\small 3n+1}}\right) \end{align} $$ And $\left(\large{\sigma}\right)$ equals <strong>same as last item of</strong> $\large{\rho}\,$ <strong>with minus sign</strong>, as mentioned above: $$ \LARGE{\sigma}\normalsize = \sqrt{2\color{red}{-}\rho_{\small 3n+1}} 
\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad $$ So: $$ \begin{align} &amp; \left(\large{\rho}_{\normalsize 3n}\,\large{\rho}_{\normalsize 3n+1}\,\large{\rho}_{\normalsize 3n+2}\right) \,\sqrt{2-\rho_{\small 3n+1}} \\[2mm] &amp; \qquad = \sqrt{2-\rho_{\small 3n-1}}\,\sqrt{2-\rho_{\small 3n}}\,\sqrt{2+\rho_{\small 3n+1}}\,\sqrt{2-\rho_{\small 3n+1}} = \sqrt{2-\rho_{\small 3n-1}}\,\sqrt{2-\rho_{\small 3n}}\,\sqrt{4-\rho_{\small 3n+1}^2} \\[2mm] &amp; \qquad = \sqrt{2-\rho_{\small 3n-1}}\,\sqrt{2-\rho_{\small 3n}}\,\sqrt{4-\left(2-\rho_{\small 3n}\right)} = \sqrt{2-\rho_{\small 3n-1}}\,\sqrt{4-\rho_{\small 3n}^2} \\[2mm] &amp; \qquad = \sqrt{2-\rho_{\small 3n-1}}\,\sqrt{4-\left(2-\rho_{\small 3n-1}\right)} = \sqrt{4-\rho_{\small 3n-1}^2} = \sqrt{4-\left(2+\rho_{\small 3n-2}\right)} = \color{red}{\sqrt{2-\rho_{\small 3n-2}}} \end{align} $$ Thus: $$ \begin{align} \color{red}{\Large{\rho \cdot \sigma}}\normalsize &amp;= \left[\left(\large{\rho}_{\normalsize 0}\,\large{\rho}_{\normalsize 1}\,\large{\rho}_{\normalsize 2}\right) \left(\large{\rho}_{\normalsize 3}\,\large{\rho}_{\normalsize 4}\,\large{\rho}_{\normalsize 5}\right) \cdots \left(\large{\rho}_{\normalsize 3n-3}\,\large{\rho}_{\normalsize 3n-2}\,\large{\rho}_{\normalsize 3n-1}\right) \left(\large{\rho}_{\normalsize 3n}\,\large{\rho}_{\normalsize 3n+1}\,\large{\rho}_{\normalsize 3n+2}\right)\right]\,\sqrt{2-\large{\rho}_{\normalsize 3n+1}} \\[2mm] &amp;= \left[\left(\large{\rho}_{\normalsize 0}\,\large{\rho}_{\normalsize 1}\,\large{\rho}_{\normalsize 2}\right) \left(\large{\rho}_{\normalsize 3}\,\large{\rho}_{\normalsize 4}\,\large{\rho}_{\normalsize 5}\right) \cdots \left(\large{\rho}_{\normalsize 3n-3}\,\large{\rho}_{\normalsize 3n-2}\,\large{\rho}_{\normalsize 3n-1}\right)\right]\,\sqrt{2-\large{\rho}_{\normalsize 3n-2}} \\[2mm] &amp;= \left[\left(\large{\rho}_{\normalsize 0}\,\large{\rho}_{\normalsize 1}\,\large{\rho}_{\normalsize 2}\right) 
\left(\large{\rho}_{\normalsize 3}\,\large{\rho}_{\normalsize 4}\,\large{\rho}_{\normalsize 5}\right) \cdots \right]\,\sqrt{2-\large{\rho}_{\normalsize 3n-5}} \\[2mm] &amp;= \quad \cdots \quad \cdots \quad \cdots \\[2mm] &amp;= \left[\left(\large{\rho}_{\normalsize 0}\,\large{\rho}_{\normalsize 1}\,\large{\rho}_{\normalsize 2}\right)\right]\,\sqrt{2-\large{\rho}_{\normalsize 1}} = \large{\rho}_{\normalsize 0}\normalsize\,\sqrt{4-\rho_{\small 0}^2} \normalsize = \sqrt{2}\,\sqrt{4-2} = \color{red}{2 \quad \forall\space n \ge 1} \end{align} $$ Hence: $$ \lim_{n\rightarrow\infty}\frac{1}{\rho} = \lim_{n\rightarrow\infty}\frac{\sigma}{2} = \sin\left(\frac{\pi}{7}\right) \quad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad $$</p> <hr> <p>In order use this technique in general, for a period of length $\color{red}{m}$, we have: $$ \begin{align} \color{red}{{\large\rho}_{-1}=0},\,\, {\large\rho}_{0} &amp;= \sqrt{2\pm{\rho}_{-1}},\,\, {\large\rho}_{1}=\sqrt{2\pm{\rho}_{0}},\,\, {\large\rho}_{2}=\sqrt{2\pm{\rho}_{1}},\,\, \cdots ,\,\, {\large\rho}_{m-1}=\sqrt{2\pm{\rho}_{m-2}},\,\, \cdots \\[2mm] {\Large\rho} &amp;= \prod_{k=0}^{n}\left( {\large\rho}_{mk}\, {\large\rho}_{mk+1}\, \cdots\, {\large\rho}_{mk+m-2}\, {\large\rho}_{mk+m-1} \right) \qquad \\[2mm] &amp;= \prod_{k=0}^{n}\left( \sqrt{2\pm{\large\rho}_{mk-1}}\, \sqrt{2\pm{\large\rho}_{mk}}\, \cdots\, \sqrt{2\pm{\large\rho}_{mk+m-3}}\, \sqrt{2\color{red}{\pm}{\large\rho}_{mk+m-2}} \right) \\[2mm] {\Large\sigma} &amp;= \sqrt{2\color{red}{\mp}{\large\rho}_{mn+m-2}} \end{align} $$ Hence, $\,\left({\large\sigma}\right)\,$ <strong>must start with a different sign from the actual period!</strong> <br/> Nevertheless, if $\,{\large\sigma}\,$ is not comply with this condition, we still have options to write the product form of related/equivalent function by applying simple manipulations to the original $\,{\large\sigma}\,$. 
<br/> <br/> Consider the following example $\left\{\,\text{period}\,\left(+,+,-,-,-\right)\,\right\}$: $$ \begin{align} \cos\left(\frac{\pi}{11}\right) &amp;= \frac12\,\sqrt{2\color{red}{+}\sqrt{2+\sqrt{2-\sqrt{2-\sqrt{2-\sqrt{2\color{blue}{+}\sqrt{\cdots}}}}}}} \\[2mm] \implies \qquad &amp;\space \underbrace{\color{red}{+}{\large/}+,-,-,-,} \underbrace{\color{blue}{+}{\large/}+,-,-,-,} \underbrace{\color{blue}{+}{\large/}+,-,-,-,} \color{blue}{+}{\large/}\cdots \end{align} $$ And as it appears, after isolating the first sign, the pattern is not obeying the condition. <br/> We have two options to convert it into applicable format: $$ \color{blue}{\underline{\left({\bf1}\right) - \,\text{Convert to}\, \sin(\pi/11)}} \space\quad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \\[2mm] \begin{align} \sin\left(\frac{\pi}{11}\right) &amp;= \sqrt{1-\cos^2\left(\frac{\pi}{11}\right)} = \frac12\,\sqrt{4-\left[2\cos\left(\frac{\pi}{11}\right)\right]^2} \Rightarrow \\[2mm] \sin\left(\frac{\pi}{11}\right) &amp;= \frac12\,\sqrt{2\color{red}{-}\sqrt{2+\sqrt{2-\sqrt{2-\sqrt{2-\sqrt{2\color{blue}{+}\sqrt{\cdots}}}}}}} \\[2mm] \implies \qquad &amp;\space \underbrace{\color{red}{-}{\large/}+,-,-,-,} \underbrace{\color{blue}{+}{\large/}+,-,-,-,} \underbrace{\color{blue}{+}{\large/}+,-,-,-,} \color{blue}{+}{\large/}\cdots \\[2mm] \end{align} \\[2mm] {\Large\rho} = \prod_{k=0}^{n}\left( \sqrt{2-{\large\rho}_{5k-1}}\, \sqrt{2-{\large\rho}_{5k\space\space\space\space}}\, \sqrt{2-{\large\rho}_{5k+1}}\, \sqrt{2+{\large\rho}_{5k+2}}\, \sqrt{2\color{red}{+}{\large\rho}_{5k+3}} \right) \\[2mm] {\Large\sigma} = \sqrt{2\color{red}{-}{\large\rho}_{5n+3}}\,,\quad {\Large\rho}\cdot{\Large\sigma}=2\,,\quad \sin\left(\frac{\pi}{11}\right)=\frac{{\large\sigma}}{2}=\frac{1}{{\large\rho}}\,,\quad \cos\left(\frac{\pi}{11}\right)=\sqrt{1-\left(1/{\large\rho}\right)^2} \\[4mm] $$ $$ \color{blue}{\underline{\left({\bf2}\right) - \,\text{Convert to}\, \cos(\pi/22)}} 
\space\quad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \\[2mm] \begin{align} \cos\left(\frac{\pi}{22}\right) &amp;= \sqrt{\frac12+\frac12\,\cos\left(\frac{\pi}{11}\right)} = \frac12\,\sqrt{2+2\,\cos\left(\frac{\pi}{11}\right)} \Rightarrow \\[2mm] \cos\left(\frac{\pi}{22}\right) &amp;= \frac12\,\sqrt{2\color{red}{+}\sqrt{2+\sqrt{2+\sqrt{2-\sqrt{2-\sqrt{2\color{blue}{-}\sqrt{\cdots}}}}}}} \\[2mm] \implies \qquad &amp;\space \underbrace{\color{red}{+}{\large/}+,+,-,-,} \underbrace{\color{blue}{-}{\large/}+,+,-,-,} \underbrace{\color{blue}{-}{\large/}+,+,-,-,} \color{blue}{-}{\large/}\cdots \\[2mm] \end{align} \\[2mm] {\Large\rho} = \prod_{k=0}^{n}\left( \sqrt{2-{\large\rho}_{5k-1}}\, \sqrt{2-{\large\rho}_{5k\space\space\space\space}}\, \sqrt{2+{\large\rho}_{5k+1}}\, \sqrt{2+{\large\rho}_{5k+2}}\, \sqrt{2\color{red}{-}{\large\rho}_{5k+3}} \right) \\[2mm] {\Large\sigma} = \sqrt{2\color{red}{+}{\large\rho}_{5n+3}}\,,\quad {\Large\rho}\cdot{\Large\sigma}=2\,,\quad \cos\left(\frac{\pi}{22}\right)=\frac{{\large\sigma}}{2}=\frac{1}{{\large\rho}}\,,\quad \cos\left(\frac{\pi}{11}\right)=\sqrt{\frac12+\frac12\,\left(1/{\large\rho}\right)} \\[2mm] $$ And here are Mathematica codes for these two options: <br/> </p> <pre> <code> n = 4; r = 1; e = 0; For[k = 0, n >= k, k++, {a = Sqrt[2 - e]; b = Sqrt[2 - a]; c = Sqrt[2 - b]; d = Sqrt[2 + c]; e = Sqrt[2 + d]; r = r (a b c d e)}]; s = 1; x = Sqrt[2 + Sqrt[2 - Sqrt[2 - Sqrt[2]]]]; For[k = 0, (n - 1) >= k, k++, {x = Sqrt[2 + Sqrt[2 - Sqrt[2 - Sqrt[2 - Sqrt[2 + x]]]]]}]; s = Sqrt[2 - x]; Print[N[r s], " ", N[1/r], " ", N[s/2], " ", N[Sin[Pi/11]]] n = 4; r = 1; e = 0; For[k = 0, n >= k, k++, {a = Sqrt[2 - e]; b = Sqrt[2 - a]; c = Sqrt[2 + b]; d = Sqrt[2 + c]; e = Sqrt[2 - d]; r = r (a b c d e)}]; s = 1; x = Sqrt[2 + Sqrt[2 + Sqrt[2 - Sqrt[2]]]]; For[k = 0, (n - 1) >= k, k++, {x = Sqrt[2 + Sqrt[2 + Sqrt[2 - Sqrt[2 - Sqrt[2 - x]]]]]}]; s = Sqrt[2 + x]; Print[N[r s], " ", N[1/r], " ", N[s/2], " ", 
N[Cos[Pi/22]]] </code> </pre> Answer 4: <p>With the help of Half angle cosine formula it is possible to express many sine and cosine angles as finite or infinite nested square roots of 2 <a href="https://math.stackexchange.com/questions/3841604/solving-cyclic-infinite-nested-square-roots-of-2-as-cosine-functions">here</a>.</p> <p>In this perspective <span class="math-container">$2\sin\frac{\pi}{7}$</span> can be solved as follows</p> <p><span class="math-container">$2\sin\frac{\pi}{7}$</span> = <span class="math-container">$\sqrt{2-2\cos\frac{2\pi}{7}}$</span> = <span class="math-container">$\sqrt{2-\sqrt{2+2\cos\frac{4\pi}{7}}}$</span> = <span class="math-container">$\sqrt{2-\sqrt{2-2\cos\frac{3\pi}{7}}}$</span> = <span class="math-container">$\sqrt{2-\sqrt{2-\sqrt{2+2\cos\frac{6\pi}{7}}}}$</span> = <span class="math-container">$\sqrt{2-\sqrt{2-\sqrt{2-2\cos\frac{\pi}{7}}}}$</span> = <span class="math-container">$\sqrt{2-\sqrt{2-\sqrt{2-\sqrt{2+2\cos\frac{2\pi}{7}}}}}$</span></p> <p>One can observe repetition of <span class="math-container">$2\cos\frac{2\pi}{7}$</span> subsequently and <span class="math-container">$\therefore$</span> <span class="math-container">$2\sin\frac{\pi}{7}$</span> can be represented simply as</p> <blockquote> <p><strong><span class="math-container">$$\sqrt{2-\sqrt{2-\sqrt{2-\sqrt{2+\sqrt{2-\sqrt{2-\sqrt{2+...2\cos\frac{2\pi}{7}}}}}}}}$$</span> with repeating pattern <span class="math-container">$--+$</span> infinitely and first <span class="math-container">$-$</span> as single nonrepeating sign</strong></p> </blockquote> <p>But getting infinite product has to be worked out</p> Answer 5: <p>Let: $$ \begin{align} \LARGE{\rho}\normalsize &amp;= \left(\small \sqrt{2} \,\sqrt{2-\sqrt{2}} \,\sqrt{2+\sqrt{2-\sqrt{2}}} \normalsize\right) \times \\[2mm] &amp; \space \left(\small \sqrt{2\color{red}{-}\sqrt{2+\sqrt{2-\sqrt{2}}}} \,\sqrt{2\color{red}{-}\sqrt{2-\sqrt{2+\sqrt{2-\sqrt{2}}}}} \,\color{red}{\sqrt{2+ 
\color{black}{\sqrt{2-\sqrt{2-\sqrt{2+\sqrt{2-\sqrt{2}}}}}}}} \normalsize\right) \times \cdots \\[4mm] \LARGE{\sigma}\normalsize &amp;= \color{red}{\sqrt{2- \color{black}{\sqrt{2-\sqrt{2-\sqrt{2+\sqrt{2-\sqrt{2-\sqrt{2+\sqrt{\cdots}}}}}}}}}} \space\space\small\{\text{Same as last item of}\,\rho\,\text{with minus sign}\} \end{align} $$ Fundamentally, the two expressions are the same. <br/> And to transpose between them, we use the property: $$ \boxed{ \quad \color{red}{\Large{\rho \cdot \sigma = 2}} \quad } \\[4mm] \frac{1}{\rho} = \frac{1}{\rho} \color{red}{\cdot \frac{\sigma}{\sigma}} = \frac{\sigma}{\rho\cdot\sigma} = \frac{\sigma}{2} = \quad\sin\left(\frac{\pi}{7}\right)\quad = \frac{\sigma}{2} = \frac{\sigma}{2} \color{red}{\cdot \frac{\rho}{\rho}} = \frac{\rho\cdot\sigma/2}{\rho} = \frac{1}{\rho} $$ <br/> In order to trace the simplifying process, let us consider the definition of both $\left(\large{\rho}\right)$ and $\left(\large{\sigma}\right)$ upto $n$, <br/> Where $\left(\large{\rho}\right)$ definition is consist of <strong>three square-roots at a time</strong>, as illustrated above: $$ \begin{align} \LARGE{\rho}\normalsize &amp;= \left(\large{\rho}_{\normalsize 0}\,\large{\rho}_{\normalsize 1}\,\large{\rho}_{\normalsize 2}\right) \left(\large{\rho}_{\normalsize 3}\,\large{\rho}_{\normalsize 4}\,\large{\rho}_{\normalsize 5}\right) \cdots \left(\large{\rho}_{\normalsize 3n-3}\,\large{\rho}_{\normalsize 3n-2}\,\large{\rho}_{\normalsize 3n-1}\right) \left(\large{\rho}_{\normalsize 3n}\,\large{\rho}_{\normalsize 3n+1}\,\large{\rho}_{\normalsize 3n+2}\right) \\[2mm] &amp;= \left(\large{\rho}_{\normalsize 0}\,\large{\rho}_{\normalsize 1}\,\large{\rho}_{\normalsize 2}\right) \left(\large{\rho}_{\normalsize 3}\,\large{\rho}_{\normalsize 4}\,\large{\rho}_{\normalsize 5}\right) \cdots \left(\large{\rho}_{\normalsize 3n-3}\,\large{\rho}_{\normalsize 3n-2}\,\large{\rho}_{\normalsize 3n-1}\right) \left(\sqrt{2\color{red}{-}\rho_{\small 
3n-1}}\,\sqrt{2\color{red}{-}\rho_{\small 3n}}\,\sqrt{2\color{red}{+}\rho_{\small 3n+1}}\right) \end{align} $$ And $\left(\large{\sigma}\right)$ equals <strong>same as last item of</strong> $\large{\rho}\,$ <strong>with minus sign</strong>, as mentioned above: $$ \LARGE{\sigma}\normalsize = \sqrt{2\color{red}{-}\rho_{\small 3n+1}} \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad $$ So: $$ \begin{align} &amp; \left(\large{\rho}_{\normalsize 3n}\,\large{\rho}_{\normalsize 3n+1}\,\large{\rho}_{\normalsize 3n+2}\right) \,\sqrt{2-\rho_{\small 3n+1}} \\[2mm] &amp; \qquad = \sqrt{2-\rho_{\small 3n-1}}\,\sqrt{2-\rho_{\small 3n}}\,\sqrt{2+\rho_{\small 3n+1}}\,\sqrt{2-\rho_{\small 3n+1}} = \sqrt{2-\rho_{\small 3n-1}}\,\sqrt{2-\rho_{\small 3n}}\,\sqrt{4-\rho_{\small 3n+1}^2} \\[2mm] &amp; \qquad = \sqrt{2-\rho_{\small 3n-1}}\,\sqrt{2-\rho_{\small 3n}}\,\sqrt{4-\left(2-\rho_{\small 3n}\right)} = \sqrt{2-\rho_{\small 3n-1}}\,\sqrt{4-\rho_{\small 3n}^2} \\[2mm] &amp; \qquad = \sqrt{2-\rho_{\small 3n-1}}\,\sqrt{4-\left(2-\rho_{\small 3n-1}\right)} = \sqrt{4-\rho_{\small 3n-1}^2} = \sqrt{4-\left(2+\rho_{\small 3n-2}\right)} = \color{red}{\sqrt{2-\rho_{\small 3n-2}}} \end{align} $$ Thus: $$ \begin{align} \color{red}{\Large{\rho \cdot \sigma}}\normalsize &amp;= \left[\left(\large{\rho}_{\normalsize 0}\,\large{\rho}_{\normalsize 1}\,\large{\rho}_{\normalsize 2}\right) \left(\large{\rho}_{\normalsize 3}\,\large{\rho}_{\normalsize 4}\,\large{\rho}_{\normalsize 5}\right) \cdots \left(\large{\rho}_{\normalsize 3n-3}\,\large{\rho}_{\normalsize 3n-2}\,\large{\rho}_{\normalsize 3n-1}\right) \left(\large{\rho}_{\normalsize 3n}\,\large{\rho}_{\normalsize 3n+1}\,\large{\rho}_{\normalsize 3n+2}\right)\right]\,\sqrt{2-\large{\rho}_{\normalsize 3n+1}} \\[2mm] &amp;= \left[\left(\large{\rho}_{\normalsize 0}\,\large{\rho}_{\normalsize 1}\,\large{\rho}_{\normalsize 2}\right) \left(\large{\rho}_{\normalsize 3}\,\large{\rho}_{\normalsize 
4}\,\large{\rho}_{\normalsize 5}\right) \cdots \left(\large{\rho}_{\normalsize 3n-3}\,\large{\rho}_{\normalsize 3n-2}\,\large{\rho}_{\normalsize 3n-1}\right)\right]\,\sqrt{2-\large{\rho}_{\normalsize 3n-2}} \\[2mm] &amp;= \left[\left(\large{\rho}_{\normalsize 0}\,\large{\rho}_{\normalsize 1}\,\large{\rho}_{\normalsize 2}\right) \left(\large{\rho}_{\normalsize 3}\,\large{\rho}_{\normalsize 4}\,\large{\rho}_{\normalsize 5}\right) \cdots \right]\,\sqrt{2-\large{\rho}_{\normalsize 3n-5}} \\[2mm] &amp;= \quad \cdots \quad \cdots \quad \cdots \\[2mm] &amp;= \left[\left(\large{\rho}_{\normalsize 0}\,\large{\rho}_{\normalsize 1}\,\large{\rho}_{\normalsize 2}\right)\right]\,\sqrt{2-\large{\rho}_{\normalsize 1}} = \large{\rho}_{\normalsize 0}\normalsize\,\sqrt{4-\rho_{\small 0}^2} \normalsize = \sqrt{2}\,\sqrt{4-2} = \color{red}{2 \quad \forall\space n \ge 1} \end{align} $$ Hence: $$ \lim_{n\rightarrow\infty}\frac{1}{\rho} = \lim_{n\rightarrow\infty}\frac{\sigma}{2} = \sin\left(\frac{\pi}{7}\right) \quad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad $$</p> <hr> <p>In order use this technique in general, for a period of length $\color{red}{m}$, we have: $$ \begin{align} \color{red}{{\large\rho}_{-1}=0},\,\, {\large\rho}_{0} &amp;= \sqrt{2\pm{\rho}_{-1}},\,\, {\large\rho}_{1}=\sqrt{2\pm{\rho}_{0}},\,\, {\large\rho}_{2}=\sqrt{2\pm{\rho}_{1}},\,\, \cdots ,\,\, {\large\rho}_{m-1}=\sqrt{2\pm{\rho}_{m-2}},\,\, \cdots \\[2mm] {\Large\rho} &amp;= \prod_{k=0}^{n}\left( {\large\rho}_{mk}\, {\large\rho}_{mk+1}\, \cdots\, {\large\rho}_{mk+m-2}\, {\large\rho}_{mk+m-1} \right) \qquad \\[2mm] &amp;= \prod_{k=0}^{n}\left( \sqrt{2\pm{\large\rho}_{mk-1}}\, \sqrt{2\pm{\large\rho}_{mk}}\, \cdots\, \sqrt{2\pm{\large\rho}_{mk+m-3}}\, \sqrt{2\color{red}{\pm}{\large\rho}_{mk+m-2}} \right) \\[2mm] {\Large\sigma} &amp;= \sqrt{2\color{red}{\mp}{\large\rho}_{mn+m-2}} \end{align} $$ Hence, $\,\left({\large\sigma}\right)\,$ <strong>must start with a different sign from the actual period!</strong> 
<br/> Nevertheless, if $\,{\large\sigma}\,$ does not comply with this condition, we still have options to write the product form of a related/equivalent function by applying simple manipulations to the original $\,{\large\sigma}\,$. <br/> <br/> Consider the following example $\left\{\,\text{period}\,\left(+,+,-,-,-\right)\,\right\}$: $$ \begin{align} \cos\left(\frac{\pi}{11}\right) &amp;= \frac12\,\sqrt{2\color{red}{+}\sqrt{2+\sqrt{2-\sqrt{2-\sqrt{2-\sqrt{2\color{blue}{+}\sqrt{\cdots}}}}}}} \\[2mm] \implies \qquad &amp;\space \underbrace{\color{red}{+}{\large/}+,-,-,-,} \underbrace{\color{blue}{+}{\large/}+,-,-,-,} \underbrace{\color{blue}{+}{\large/}+,-,-,-,} \color{blue}{+}{\large/}\cdots \end{align} $$ And as it appears, after isolating the first sign, the pattern does not obey the condition. <br/> We have two options to convert it into an applicable format: $$ \color{blue}{\underline{\left({\bf1}\right) - \,\text{Convert to}\, \sin(\pi/11)}} \space\quad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \\[2mm] \begin{align} \sin\left(\frac{\pi}{11}\right) &amp;= \sqrt{1-\cos^2\left(\frac{\pi}{11}\right)} = \frac12\,\sqrt{4-\left[2\cos\left(\frac{\pi}{11}\right)\right]^2} \Rightarrow \\[2mm] \sin\left(\frac{\pi}{11}\right) &amp;= \frac12\,\sqrt{2\color{red}{-}\sqrt{2+\sqrt{2-\sqrt{2-\sqrt{2-\sqrt{2\color{blue}{+}\sqrt{\cdots}}}}}}} \\[2mm] \implies \qquad &amp;\space \underbrace{\color{red}{-}{\large/}+,-,-,-,} \underbrace{\color{blue}{+}{\large/}+,-,-,-,} \underbrace{\color{blue}{+}{\large/}+,-,-,-,} \color{blue}{+}{\large/}\cdots \\[2mm] \end{align} \\[2mm] {\Large\rho} = \prod_{k=0}^{n}\left( \sqrt{2-{\large\rho}_{5k-1}}\, \sqrt{2-{\large\rho}_{5k\space\space\space\space}}\, \sqrt{2-{\large\rho}_{5k+1}}\, \sqrt{2+{\large\rho}_{5k+2}}\, \sqrt{2\color{red}{+}{\large\rho}_{5k+3}} \right) \\[2mm] {\Large\sigma} = \sqrt{2\color{red}{-}{\large\rho}_{5n+3}}\,,\quad {\Large\rho}\cdot{\Large\sigma}=2\,,\quad
\sin\left(\frac{\pi}{11}\right)=\frac{{\large\sigma}}{2}=\frac{1}{{\large\rho}}\,,\quad \cos\left(\frac{\pi}{11}\right)=\sqrt{1-\left(1/{\large\rho}\right)^2} \\[4mm] $$ $$ \color{blue}{\underline{\left({\bf2}\right) - \,\text{Convert to}\, \cos(\pi/22)}} \space\quad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \\[2mm] \begin{align} \cos\left(\frac{\pi}{22}\right) &amp;= \sqrt{\frac12+\frac12\,\cos\left(\frac{\pi}{11}\right)} = \frac12\,\sqrt{2+2\,\cos\left(\frac{\pi}{11}\right)} \Rightarrow \\[2mm] \cos\left(\frac{\pi}{22}\right) &amp;= \frac12\,\sqrt{2\color{red}{+}\sqrt{2+\sqrt{2+\sqrt{2-\sqrt{2-\sqrt{2\color{blue}{-}\sqrt{\cdots}}}}}}} \\[2mm] \implies \qquad &amp;\space \underbrace{\color{red}{+}{\large/}+,+,-,-,} \underbrace{\color{blue}{-}{\large/}+,+,-,-,} \underbrace{\color{blue}{-}{\large/}+,+,-,-,} \color{blue}{-}{\large/}\cdots \\[2mm] \end{align} \\[2mm] {\Large\rho} = \prod_{k=0}^{n}\left( \sqrt{2-{\large\rho}_{5k-1}}\, \sqrt{2-{\large\rho}_{5k\space\space\space\space}}\, \sqrt{2+{\large\rho}_{5k+1}}\, \sqrt{2+{\large\rho}_{5k+2}}\, \sqrt{2\color{red}{-}{\large\rho}_{5k+3}} \right) \\[2mm] {\Large\sigma} = \sqrt{2\color{red}{+}{\large\rho}_{5n+3}}\,,\quad {\Large\rho}\cdot{\Large\sigma}=2\,,\quad \cos\left(\frac{\pi}{22}\right)=\frac{{\large\sigma}}{2}=\frac{1}{{\large\rho}}\,,\quad \cos\left(\frac{\pi}{11}\right)=\sqrt{\frac12+\frac12\,\left(1/{\large\rho}\right)} \\[2mm] $$ And here are Mathematica codes for these two options: <br/> </p> <pre> <code> n = 4; r = 1; e = 0; For[k = 0, n >= k, k++, {a = Sqrt[2 - e]; b = Sqrt[2 - a]; c = Sqrt[2 - b]; d = Sqrt[2 + c]; e = Sqrt[2 + d]; r = r (a b c d e)}]; s = 1; x = Sqrt[2 + Sqrt[2 - Sqrt[2 - Sqrt[2]]]]; For[k = 0, (n - 1) >= k, k++, {x = Sqrt[2 + Sqrt[2 - Sqrt[2 - Sqrt[2 - Sqrt[2 + x]]]]]}]; s = Sqrt[2 - x]; Print[N[r s], " ", N[1/r], " ", N[s/2], " ", N[Sin[Pi/11]]] n = 4; r = 1; e = 0; For[k = 0, n >= k, k++, {a = Sqrt[2 - e]; b = Sqrt[2 - a]; c = Sqrt[2 + b]; d = 
Sqrt[2 + c]; e = Sqrt[2 - d]; r = r (a b c d e)}]; s = 1; x = Sqrt[2 + Sqrt[2 + Sqrt[2 - Sqrt[2]]]]; For[k = 0, (n - 1) >= k, k++, {x = Sqrt[2 + Sqrt[2 + Sqrt[2 - Sqrt[2 - Sqrt[2 - x]]]]]}]; s = Sqrt[2 + x]; Print[N[r s], " ", N[1/r], " ", N[s/2], " ", N[Cos[Pi/22]]] </code> </pre> Answer 6: <p>With the help of Half angle cosine formula it is possible to express many sine and cosine angles as finite or infinite nested square roots of 2 <a href="https://math.stackexchange.com/questions/3841604/solving-cyclic-infinite-nested-square-roots-of-2-as-cosine-functions">here</a>.</p> <p>In this perspective <span class="math-container">$2\sin\frac{\pi}{7}$</span> can be solved as follows</p> <p><span class="math-container">$2\sin\frac{\pi}{7}$</span> = <span class="math-container">$\sqrt{2-2\cos\frac{2\pi}{7}}$</span> = <span class="math-container">$\sqrt{2-\sqrt{2+2\cos\frac{4\pi}{7}}}$</span> = <span class="math-container">$\sqrt{2-\sqrt{2-2\cos\frac{3\pi}{7}}}$</span> = <span class="math-container">$\sqrt{2-\sqrt{2-\sqrt{2+2\cos\frac{6\pi}{7}}}}$</span> = <span class="math-container">$\sqrt{2-\sqrt{2-\sqrt{2-2\cos\frac{\pi}{7}}}}$</span> = <span class="math-container">$\sqrt{2-\sqrt{2-\sqrt{2-\sqrt{2+2\cos\frac{2\pi}{7}}}}}$</span></p> <p>One can observe repetition of <span class="math-container">$2\cos\frac{2\pi}{7}$</span> subsequently and <span class="math-container">$\therefore$</span> <span class="math-container">$2\sin\frac{\pi}{7}$</span> can be represented simply as</p> <blockquote> <p><strong><span class="math-container">$$\sqrt{2-\sqrt{2-\sqrt{2-\sqrt{2+\sqrt{2-\sqrt{2-\sqrt{2+...2\cos\frac{2\pi}{7}}}}}}}}$$</span> with repeating pattern <span class="math-container">$--+$</span> infinitely and first <span class="math-container">$-$</span> as single nonrepeating sign</strong></p> </blockquote> <p>But getting infinite product has to be worked out</p> Answer 7: <p>Let: $$ \begin{align} \LARGE{\rho}\normalsize &amp;= \left(\small \sqrt{2} \,\sqrt{2-\sqrt{2}} 
\,\sqrt{2+\sqrt{2-\sqrt{2}}} \normalsize\right) \times \\[2mm] &amp; \space \left(\small \sqrt{2\color{red}{-}\sqrt{2+\sqrt{2-\sqrt{2}}}} \,\sqrt{2\color{red}{-}\sqrt{2-\sqrt{2+\sqrt{2-\sqrt{2}}}}} \,\color{red}{\sqrt{2+ \color{black}{\sqrt{2-\sqrt{2-\sqrt{2+\sqrt{2-\sqrt{2}}}}}}}} \normalsize\right) \times \cdots \\[4mm] \LARGE{\sigma}\normalsize &amp;= \color{red}{\sqrt{2- \color{black}{\sqrt{2-\sqrt{2-\sqrt{2+\sqrt{2-\sqrt{2-\sqrt{2+\sqrt{\cdots}}}}}}}}}} \space\space\small\{\text{Same as last item of}\,\rho\,\text{with minus sign}\} \end{align} $$ Fundamentally, the two expressions are the same. <br/> And to transpose between them, we use the property: $$ \boxed{ \quad \color{red}{\Large{\rho \cdot \sigma = 2}} \quad } \\[4mm] \frac{1}{\rho} = \frac{1}{\rho} \color{red}{\cdot \frac{\sigma}{\sigma}} = \frac{\sigma}{\rho\cdot\sigma} = \frac{\sigma}{2} = \quad\sin\left(\frac{\pi}{7}\right)\quad = \frac{\sigma}{2} = \frac{\sigma}{2} \color{red}{\cdot \frac{\rho}{\rho}} = \frac{\rho\cdot\sigma/2}{\rho} = \frac{1}{\rho} $$ <br/> In order to trace the simplifying process, let us consider the definition of both $\left(\large{\rho}\right)$ and $\left(\large{\sigma}\right)$ upto $n$, <br/> Where $\left(\large{\rho}\right)$ definition is consist of <strong>three square-roots at a time</strong>, as illustrated above: $$ \begin{align} \LARGE{\rho}\normalsize &amp;= \left(\large{\rho}_{\normalsize 0}\,\large{\rho}_{\normalsize 1}\,\large{\rho}_{\normalsize 2}\right) \left(\large{\rho}_{\normalsize 3}\,\large{\rho}_{\normalsize 4}\,\large{\rho}_{\normalsize 5}\right) \cdots \left(\large{\rho}_{\normalsize 3n-3}\,\large{\rho}_{\normalsize 3n-2}\,\large{\rho}_{\normalsize 3n-1}\right) \left(\large{\rho}_{\normalsize 3n}\,\large{\rho}_{\normalsize 3n+1}\,\large{\rho}_{\normalsize 3n+2}\right) \\[2mm] &amp;= \left(\large{\rho}_{\normalsize 0}\,\large{\rho}_{\normalsize 1}\,\large{\rho}_{\normalsize 2}\right) \left(\large{\rho}_{\normalsize 3}\,\large{\rho}_{\normalsize 
4}\,\large{\rho}_{\normalsize 5}\right) \cdots \left(\large{\rho}_{\normalsize 3n-3}\,\large{\rho}_{\normalsize 3n-2}\,\large{\rho}_{\normalsize 3n-1}\right) \left(\sqrt{2\color{red}{-}\rho_{\small 3n-1}}\,\sqrt{2\color{red}{-}\rho_{\small 3n}}\,\sqrt{2\color{red}{+}\rho_{\small 3n+1}}\right) \end{align} $$ And $\left(\large{\sigma}\right)$ equals <strong>same as last item of</strong> $\large{\rho}\,$ <strong>with minus sign</strong>, as mentioned above: $$ \LARGE{\sigma}\normalsize = \sqrt{2\color{red}{-}\rho_{\small 3n+1}} \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad $$ So: $$ \begin{align} &amp; \left(\large{\rho}_{\normalsize 3n}\,\large{\rho}_{\normalsize 3n+1}\,\large{\rho}_{\normalsize 3n+2}\right) \,\sqrt{2-\rho_{\small 3n+1}} \\[2mm] &amp; \qquad = \sqrt{2-\rho_{\small 3n-1}}\,\sqrt{2-\rho_{\small 3n}}\,\sqrt{2+\rho_{\small 3n+1}}\,\sqrt{2-\rho_{\small 3n+1}} = \sqrt{2-\rho_{\small 3n-1}}\,\sqrt{2-\rho_{\small 3n}}\,\sqrt{4-\rho_{\small 3n+1}^2} \\[2mm] &amp; \qquad = \sqrt{2-\rho_{\small 3n-1}}\,\sqrt{2-\rho_{\small 3n}}\,\sqrt{4-\left(2-\rho_{\small 3n}\right)} = \sqrt{2-\rho_{\small 3n-1}}\,\sqrt{4-\rho_{\small 3n}^2} \\[2mm] &amp; \qquad = \sqrt{2-\rho_{\small 3n-1}}\,\sqrt{4-\left(2-\rho_{\small 3n-1}\right)} = \sqrt{4-\rho_{\small 3n-1}^2} = \sqrt{4-\left(2+\rho_{\small 3n-2}\right)} = \color{red}{\sqrt{2-\rho_{\small 3n-2}}} \end{align} $$ Thus: $$ \begin{align} \color{red}{\Large{\rho \cdot \sigma}}\normalsize &amp;= \left[\left(\large{\rho}_{\normalsize 0}\,\large{\rho}_{\normalsize 1}\,\large{\rho}_{\normalsize 2}\right) \left(\large{\rho}_{\normalsize 3}\,\large{\rho}_{\normalsize 4}\,\large{\rho}_{\normalsize 5}\right) \cdots \left(\large{\rho}_{\normalsize 3n-3}\,\large{\rho}_{\normalsize 3n-2}\,\large{\rho}_{\normalsize 3n-1}\right) \left(\large{\rho}_{\normalsize 3n}\,\large{\rho}_{\normalsize 3n+1}\,\large{\rho}_{\normalsize 3n+2}\right)\right]\,\sqrt{2-\large{\rho}_{\normalsize 3n+1}} \\[2mm] 
&amp;= \left[\left(\large{\rho}_{\normalsize 0}\,\large{\rho}_{\normalsize 1}\,\large{\rho}_{\normalsize 2}\right) \left(\large{\rho}_{\normalsize 3}\,\large{\rho}_{\normalsize 4}\,\large{\rho}_{\normalsize 5}\right) \cdots \left(\large{\rho}_{\normalsize 3n-3}\,\large{\rho}_{\normalsize 3n-2}\,\large{\rho}_{\normalsize 3n-1}\right)\right]\,\sqrt{2-\large{\rho}_{\normalsize 3n-2}} \\[2mm] &amp;= \left[\left(\large{\rho}_{\normalsize 0}\,\large{\rho}_{\normalsize 1}\,\large{\rho}_{\normalsize 2}\right) \left(\large{\rho}_{\normalsize 3}\,\large{\rho}_{\normalsize 4}\,\large{\rho}_{\normalsize 5}\right) \cdots \right]\,\sqrt{2-\large{\rho}_{\normalsize 3n-5}} \\[2mm] &amp;= \quad \cdots \quad \cdots \quad \cdots \\[2mm] &amp;= \left[\left(\large{\rho}_{\normalsize 0}\,\large{\rho}_{\normalsize 1}\,\large{\rho}_{\normalsize 2}\right)\right]\,\sqrt{2-\large{\rho}_{\normalsize 1}} = \large{\rho}_{\normalsize 0}\normalsize\,\sqrt{4-\rho_{\small 0}^2} \normalsize = \sqrt{2}\,\sqrt{4-2} = \color{red}{2 \quad \forall\space n \ge 1} \end{align} $$ Hence: $$ \lim_{n\rightarrow\infty}\frac{1}{\rho} = \lim_{n\rightarrow\infty}\frac{\sigma}{2} = \sin\left(\frac{\pi}{7}\right) \quad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad $$</p> <hr> <p>In order use this technique in general, for a period of length $\color{red}{m}$, we have: $$ \begin{align} \color{red}{{\large\rho}_{-1}=0},\,\, {\large\rho}_{0} &amp;= \sqrt{2\pm{\rho}_{-1}},\,\, {\large\rho}_{1}=\sqrt{2\pm{\rho}_{0}},\,\, {\large\rho}_{2}=\sqrt{2\pm{\rho}_{1}},\,\, \cdots ,\,\, {\large\rho}_{m-1}=\sqrt{2\pm{\rho}_{m-2}},\,\, \cdots \\[2mm] {\Large\rho} &amp;= \prod_{k=0}^{n}\left( {\large\rho}_{mk}\, {\large\rho}_{mk+1}\, \cdots\, {\large\rho}_{mk+m-2}\, {\large\rho}_{mk+m-1} \right) \qquad \\[2mm] &amp;= \prod_{k=0}^{n}\left( \sqrt{2\pm{\large\rho}_{mk-1}}\, \sqrt{2\pm{\large\rho}_{mk}}\, \cdots\, \sqrt{2\pm{\large\rho}_{mk+m-3}}\, \sqrt{2\color{red}{\pm}{\large\rho}_{mk+m-2}} \right) \\[2mm] {\Large\sigma} &amp;= 
\sqrt{2\color{red}{\mp}{\large\rho}_{mn+m-2}} \end{align} $$ Hence, $\,\left({\large\sigma}\right)\,$ <strong>must start with a different sign from the actual period!</strong> <br/> Nevertheless, if $\,{\large\sigma}\,$ is not comply with this condition, we still have options to write the product form of related/equivalent function by applying simple manipulations to the original $\,{\large\sigma}\,$. <br/> <br/> Consider the following example $\left\{\,\text{period}\,\left(+,+,-,-,-\right)\,\right\}$: $$ \begin{align} \cos\left(\frac{\pi}{11}\right) &amp;= \frac12\,\sqrt{2\color{red}{+}\sqrt{2+\sqrt{2-\sqrt{2-\sqrt{2-\sqrt{2\color{blue}{+}\sqrt{\cdots}}}}}}} \\[2mm] \implies \qquad &amp;\space \underbrace{\color{red}{+}{\large/}+,-,-,-,} \underbrace{\color{blue}{+}{\large/}+,-,-,-,} \underbrace{\color{blue}{+}{\large/}+,-,-,-,} \color{blue}{+}{\large/}\cdots \end{align} $$ And as it appears, after isolating the first sign, the pattern is not obeying the condition. <br/> We have two options to convert it into applicable format: $$ \color{blue}{\underline{\left({\bf1}\right) - \,\text{Convert to}\, \sin(\pi/11)}} \space\quad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \\[2mm] \begin{align} \sin\left(\frac{\pi}{11}\right) &amp;= \sqrt{1-\cos^2\left(\frac{\pi}{11}\right)} = \frac12\,\sqrt{4-\left[2\cos\left(\frac{\pi}{11}\right)\right]^2} \Rightarrow \\[2mm] \sin\left(\frac{\pi}{11}\right) &amp;= \frac12\,\sqrt{2\color{red}{-}\sqrt{2+\sqrt{2-\sqrt{2-\sqrt{2-\sqrt{2\color{blue}{+}\sqrt{\cdots}}}}}}} \\[2mm] \implies \qquad &amp;\space \underbrace{\color{red}{-}{\large/}+,-,-,-,} \underbrace{\color{blue}{+}{\large/}+,-,-,-,} \underbrace{\color{blue}{+}{\large/}+,-,-,-,} \color{blue}{+}{\large/}\cdots \\[2mm] \end{align} \\[2mm] {\Large\rho} = \prod_{k=0}^{n}\left( \sqrt{2-{\large\rho}_{5k-1}}\, \sqrt{2-{\large\rho}_{5k\space\space\space\space}}\, \sqrt{2-{\large\rho}_{5k+1}}\, \sqrt{2+{\large\rho}_{5k+2}}\, 
\sqrt{2\color{red}{+}{\large\rho}_{5k+3}} \right) \\[2mm] {\Large\sigma} = \sqrt{2\color{red}{-}{\large\rho}_{5n+3}}\,,\quad {\Large\rho}\cdot{\Large\sigma}=2\,,\quad \sin\left(\frac{\pi}{11}\right)=\frac{{\large\sigma}}{2}=\frac{1}{{\large\rho}}\,,\quad \cos\left(\frac{\pi}{11}\right)=\sqrt{1-\left(1/{\large\rho}\right)^2} \\[4mm] $$ $$ \color{blue}{\underline{\left({\bf2}\right) - \,\text{Convert to}\, \cos(\pi/22)}} \space\quad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \\[2mm] \begin{align} \cos\left(\frac{\pi}{22}\right) &amp;= \sqrt{\frac12+\frac12\,\cos\left(\frac{\pi}{11}\right)} = \frac12\,\sqrt{2+2\,\cos\left(\frac{\pi}{11}\right)} \Rightarrow \\[2mm] \cos\left(\frac{\pi}{22}\right) &amp;= \frac12\,\sqrt{2\color{red}{+}\sqrt{2+\sqrt{2+\sqrt{2-\sqrt{2-\sqrt{2\color{blue}{-}\sqrt{\cdots}}}}}}} \\[2mm] \implies \qquad &amp;\space \underbrace{\color{red}{+}{\large/}+,+,-,-,} \underbrace{\color{blue}{-}{\large/}+,+,-,-,} \underbrace{\color{blue}{-}{\large/}+,+,-,-,} \color{blue}{-}{\large/}\cdots \\[2mm] \end{align} \\[2mm] {\Large\rho} = \prod_{k=0}^{n}\left( \sqrt{2-{\large\rho}_{5k-1}}\, \sqrt{2-{\large\rho}_{5k\space\space\space\space}}\, \sqrt{2+{\large\rho}_{5k+1}}\, \sqrt{2+{\large\rho}_{5k+2}}\, \sqrt{2\color{red}{-}{\large\rho}_{5k+3}} \right) \\[2mm] {\Large\sigma} = \sqrt{2\color{red}{+}{\large\rho}_{5n+3}}\,,\quad {\Large\rho}\cdot{\Large\sigma}=2\,,\quad \cos\left(\frac{\pi}{22}\right)=\frac{{\large\sigma}}{2}=\frac{1}{{\large\rho}}\,,\quad \cos\left(\frac{\pi}{11}\right)=\sqrt{\frac12+\frac12\,\left(1/{\large\rho}\right)} \\[2mm] $$ And here are Mathematica codes for these two options: <br/> </p> <pre> <code> n = 4; r = 1; e = 0; For[k = 0, n >= k, k++, {a = Sqrt[2 - e]; b = Sqrt[2 - a]; c = Sqrt[2 - b]; d = Sqrt[2 + c]; e = Sqrt[2 + d]; r = r (a b c d e)}]; s = 1; x = Sqrt[2 + Sqrt[2 - Sqrt[2 - Sqrt[2]]]]; For[k = 0, (n - 1) >= k, k++, {x = Sqrt[2 + Sqrt[2 - Sqrt[2 - Sqrt[2 - Sqrt[2 + x]]]]]}]; s = Sqrt[2 - x]; 
Print[N[r s], " ", N[1/r], " ", N[s/2], " ", N[Sin[Pi/11]]] n = 4; r = 1; e = 0; For[k = 0, n >= k, k++, {a = Sqrt[2 - e]; b = Sqrt[2 - a]; c = Sqrt[2 + b]; d = Sqrt[2 + c]; e = Sqrt[2 - d]; r = r (a b c d e)}]; s = 1; x = Sqrt[2 + Sqrt[2 + Sqrt[2 - Sqrt[2]]]]; For[k = 0, (n - 1) >= k, k++, {x = Sqrt[2 + Sqrt[2 + Sqrt[2 - Sqrt[2 - Sqrt[2 - x]]]]]}]; s = Sqrt[2 + x]; Print[N[r s], " ", N[1/r], " ", N[s/2], " ", N[Cos[Pi/22]]] </code> </pre> Answer 8: <p>With the help of Half angle cosine formula it is possible to express many sine and cosine angles as finite or infinite nested square roots of 2 <a href="https://math.stackexchange.com/questions/3841604/solving-cyclic-infinite-nested-square-roots-of-2-as-cosine-functions">here</a>.</p> <p>In this perspective <span class="math-container">$2\sin\frac{\pi}{7}$</span> can be solved as follows</p> <p><span class="math-container">$2\sin\frac{\pi}{7}$</span> = <span class="math-container">$\sqrt{2-2\cos\frac{2\pi}{7}}$</span> = <span class="math-container">$\sqrt{2-\sqrt{2+2\cos\frac{4\pi}{7}}}$</span> = <span class="math-container">$\sqrt{2-\sqrt{2-2\cos\frac{3\pi}{7}}}$</span> = <span class="math-container">$\sqrt{2-\sqrt{2-\sqrt{2+2\cos\frac{6\pi}{7}}}}$</span> = <span class="math-container">$\sqrt{2-\sqrt{2-\sqrt{2-2\cos\frac{\pi}{7}}}}$</span> = <span class="math-container">$\sqrt{2-\sqrt{2-\sqrt{2-\sqrt{2+2\cos\frac{2\pi}{7}}}}}$</span></p> <p>One can observe repetition of <span class="math-container">$2\cos\frac{2\pi}{7}$</span> subsequently and <span class="math-container">$\therefore$</span> <span class="math-container">$2\sin\frac{\pi}{7}$</span> can be represented simply as</p> <blockquote> <p><strong><span class="math-container">$$\sqrt{2-\sqrt{2-\sqrt{2-\sqrt{2+\sqrt{2-\sqrt{2-\sqrt{2+...2\cos\frac{2\pi}{7}}}}}}}}$$</span> with repeating pattern <span class="math-container">$--+$</span> infinitely and first <span class="math-container">$-$</span> as single nonrepeating sign</strong></p> </blockquote> <p>But 
getting infinite product has to be worked out</p> Answer 9: <p>Let: $$ \begin{align} \LARGE{\rho}\normalsize &amp;= \left(\small \sqrt{2} \,\sqrt{2-\sqrt{2}} \,\sqrt{2+\sqrt{2-\sqrt{2}}} \normalsize\right) \times \\[2mm] &amp; \space \left(\small \sqrt{2\color{red}{-}\sqrt{2+\sqrt{2-\sqrt{2}}}} \,\sqrt{2\color{red}{-}\sqrt{2-\sqrt{2+\sqrt{2-\sqrt{2}}}}} \,\color{red}{\sqrt{2+ \color{black}{\sqrt{2-\sqrt{2-\sqrt{2+\sqrt{2-\sqrt{2}}}}}}}} \normalsize\right) \times \cdots \\[4mm] \LARGE{\sigma}\normalsize &amp;= \color{red}{\sqrt{2- \color{black}{\sqrt{2-\sqrt{2-\sqrt{2+\sqrt{2-\sqrt{2-\sqrt{2+\sqrt{\cdots}}}}}}}}}} \space\space\small\{\text{Same as last item of}\,\rho\,\text{with minus sign}\} \end{align} $$ Fundamentally, the two expressions are the same. <br/> And to transpose between them, we use the property: $$ \boxed{ \quad \color{red}{\Large{\rho \cdot \sigma = 2}} \quad } \\[4mm] \frac{1}{\rho} = \frac{1}{\rho} \color{red}{\cdot \frac{\sigma}{\sigma}} = \frac{\sigma}{\rho\cdot\sigma} = \frac{\sigma}{2} = \quad\sin\left(\frac{\pi}{7}\right)\quad = \frac{\sigma}{2} = \frac{\sigma}{2} \color{red}{\cdot \frac{\rho}{\rho}} = \frac{\rho\cdot\sigma/2}{\rho} = \frac{1}{\rho} $$ <br/> In order to trace the simplifying process, let us consider the definition of both $\left(\large{\rho}\right)$ and $\left(\large{\sigma}\right)$ upto $n$, <br/> Where $\left(\large{\rho}\right)$ definition is consist of <strong>three square-roots at a time</strong>, as illustrated above: $$ \begin{align} \LARGE{\rho}\normalsize &amp;= \left(\large{\rho}_{\normalsize 0}\,\large{\rho}_{\normalsize 1}\,\large{\rho}_{\normalsize 2}\right) \left(\large{\rho}_{\normalsize 3}\,\large{\rho}_{\normalsize 4}\,\large{\rho}_{\normalsize 5}\right) \cdots \left(\large{\rho}_{\normalsize 3n-3}\,\large{\rho}_{\normalsize 3n-2}\,\large{\rho}_{\normalsize 3n-1}\right) \left(\large{\rho}_{\normalsize 3n}\,\large{\rho}_{\normalsize 3n+1}\,\large{\rho}_{\normalsize 3n+2}\right) \\[2mm] &amp;= 
\left(\large{\rho}_{\normalsize 0}\,\large{\rho}_{\normalsize 1}\,\large{\rho}_{\normalsize 2}\right) \left(\large{\rho}_{\normalsize 3}\,\large{\rho}_{\normalsize 4}\,\large{\rho}_{\normalsize 5}\right) \cdots \left(\large{\rho}_{\normalsize 3n-3}\,\large{\rho}_{\normalsize 3n-2}\,\large{\rho}_{\normalsize 3n-1}\right) \left(\sqrt{2\color{red}{-}\rho_{\small 3n-1}}\,\sqrt{2\color{red}{-}\rho_{\small 3n}}\,\sqrt{2\color{red}{+}\rho_{\small 3n+1}}\right) \end{align} $$ And $\left(\large{\sigma}\right)$ equals <strong>same as last item of</strong> $\large{\rho}\,$ <strong>with minus sign</strong>, as mentioned above: $$ \LARGE{\sigma}\normalsize = \sqrt{2\color{red}{-}\rho_{\small 3n+1}} \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad $$ So: $$ \begin{align} &amp; \left(\large{\rho}_{\normalsize 3n}\,\large{\rho}_{\normalsize 3n+1}\,\large{\rho}_{\normalsize 3n+2}\right) \,\sqrt{2-\rho_{\small 3n+1}} \\[2mm] &amp; \qquad = \sqrt{2-\rho_{\small 3n-1}}\,\sqrt{2-\rho_{\small 3n}}\,\sqrt{2+\rho_{\small 3n+1}}\,\sqrt{2-\rho_{\small 3n+1}} = \sqrt{2-\rho_{\small 3n-1}}\,\sqrt{2-\rho_{\small 3n}}\,\sqrt{4-\rho_{\small 3n+1}^2} \\[2mm] &amp; \qquad = \sqrt{2-\rho_{\small 3n-1}}\,\sqrt{2-\rho_{\small 3n}}\,\sqrt{4-\left(2-\rho_{\small 3n}\right)} = \sqrt{2-\rho_{\small 3n-1}}\,\sqrt{4-\rho_{\small 3n}^2} \\[2mm] &amp; \qquad = \sqrt{2-\rho_{\small 3n-1}}\,\sqrt{4-\left(2-\rho_{\small 3n-1}\right)} = \sqrt{4-\rho_{\small 3n-1}^2} = \sqrt{4-\left(2+\rho_{\small 3n-2}\right)} = \color{red}{\sqrt{2-\rho_{\small 3n-2}}} \end{align} $$ Thus: $$ \begin{align} \color{red}{\Large{\rho \cdot \sigma}}\normalsize &amp;= \left[\left(\large{\rho}_{\normalsize 0}\,\large{\rho}_{\normalsize 1}\,\large{\rho}_{\normalsize 2}\right) \left(\large{\rho}_{\normalsize 3}\,\large{\rho}_{\normalsize 4}\,\large{\rho}_{\normalsize 5}\right) \cdots \left(\large{\rho}_{\normalsize 3n-3}\,\large{\rho}_{\normalsize 3n-2}\,\large{\rho}_{\normalsize 3n-1}\right) 
\left(\large{\rho}_{\normalsize 3n}\,\large{\rho}_{\normalsize 3n+1}\,\large{\rho}_{\normalsize 3n+2}\right)\right]\,\sqrt{2-\large{\rho}_{\normalsize 3n+1}} \\[2mm] &amp;= \left[\left(\large{\rho}_{\normalsize 0}\,\large{\rho}_{\normalsize 1}\,\large{\rho}_{\normalsize 2}\right) \left(\large{\rho}_{\normalsize 3}\,\large{\rho}_{\normalsize 4}\,\large{\rho}_{\normalsize 5}\right) \cdots \left(\large{\rho}_{\normalsize 3n-3}\,\large{\rho}_{\normalsize 3n-2}\,\large{\rho}_{\normalsize 3n-1}\right)\right]\,\sqrt{2-\large{\rho}_{\normalsize 3n-2}} \\[2mm] &amp;= \left[\left(\large{\rho}_{\normalsize 0}\,\large{\rho}_{\normalsize 1}\,\large{\rho}_{\normalsize 2}\right) \left(\large{\rho}_{\normalsize 3}\,\large{\rho}_{\normalsize 4}\,\large{\rho}_{\normalsize 5}\right) \cdots \right]\,\sqrt{2-\large{\rho}_{\normalsize 3n-5}} \\[2mm] &amp;= \quad \cdots \quad \cdots \quad \cdots \\[2mm] &amp;= \left[\left(\large{\rho}_{\normalsize 0}\,\large{\rho}_{\normalsize 1}\,\large{\rho}_{\normalsize 2}\right)\right]\,\sqrt{2-\large{\rho}_{\normalsize 1}} = \large{\rho}_{\normalsize 0}\normalsize\,\sqrt{4-\rho_{\small 0}^2} \normalsize = \sqrt{2}\,\sqrt{4-2} = \color{red}{2 \quad \forall\space n \ge 1} \end{align} $$ Hence: $$ \lim_{n\rightarrow\infty}\frac{1}{\rho} = \lim_{n\rightarrow\infty}\frac{\sigma}{2} = \sin\left(\frac{\pi}{7}\right) \quad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad $$</p> <hr> <p>In order use this technique in general, for a period of length $\color{red}{m}$, we have: $$ \begin{align} \color{red}{{\large\rho}_{-1}=0},\,\, {\large\rho}_{0} &amp;= \sqrt{2\pm{\rho}_{-1}},\,\, {\large\rho}_{1}=\sqrt{2\pm{\rho}_{0}},\,\, {\large\rho}_{2}=\sqrt{2\pm{\rho}_{1}},\,\, \cdots ,\,\, {\large\rho}_{m-1}=\sqrt{2\pm{\rho}_{m-2}},\,\, \cdots \\[2mm] {\Large\rho} &amp;= \prod_{k=0}^{n}\left( {\large\rho}_{mk}\, {\large\rho}_{mk+1}\, \cdots\, {\large\rho}_{mk+m-2}\, {\large\rho}_{mk+m-1} \right) \qquad \\[2mm] &amp;= \prod_{k=0}^{n}\left( 
\sqrt{2\pm{\large\rho}_{mk-1}}\, \sqrt{2\pm{\large\rho}_{mk}}\, \cdots\, \sqrt{2\pm{\large\rho}_{mk+m-3}}\, \sqrt{2\color{red}{\pm}{\large\rho}_{mk+m-2}} \right) \\[2mm] {\Large\sigma} &amp;= \sqrt{2\color{red}{\mp}{\large\rho}_{mn+m-2}} \end{align} $$ Hence, $\,\left({\large\sigma}\right)\,$ <strong>must start with a different sign from the actual period!</strong> <br/> Nevertheless, if $\,{\large\sigma}\,$ is not comply with this condition, we still have options to write the product form of related/equivalent function by applying simple manipulations to the original $\,{\large\sigma}\,$. <br/> <br/> Consider the following example $\left\{\,\text{period}\,\left(+,+,-,-,-\right)\,\right\}$: $$ \begin{align} \cos\left(\frac{\pi}{11}\right) &amp;= \frac12\,\sqrt{2\color{red}{+}\sqrt{2+\sqrt{2-\sqrt{2-\sqrt{2-\sqrt{2\color{blue}{+}\sqrt{\cdots}}}}}}} \\[2mm] \implies \qquad &amp;\space \underbrace{\color{red}{+}{\large/}+,-,-,-,} \underbrace{\color{blue}{+}{\large/}+,-,-,-,} \underbrace{\color{blue}{+}{\large/}+,-,-,-,} \color{blue}{+}{\large/}\cdots \end{align} $$ And as it appears, after isolating the first sign, the pattern is not obeying the condition. 
<br/> We have two options to convert it into applicable format: $$ \color{blue}{\underline{\left({\bf1}\right) - \,\text{Convert to}\, \sin(\pi/11)}} \space\quad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \\[2mm] \begin{align} \sin\left(\frac{\pi}{11}\right) &amp;= \sqrt{1-\cos^2\left(\frac{\pi}{11}\right)} = \frac12\,\sqrt{4-\left[2\cos\left(\frac{\pi}{11}\right)\right]^2} \Rightarrow \\[2mm] \sin\left(\frac{\pi}{11}\right) &amp;= \frac12\,\sqrt{2\color{red}{-}\sqrt{2+\sqrt{2-\sqrt{2-\sqrt{2-\sqrt{2\color{blue}{+}\sqrt{\cdots}}}}}}} \\[2mm] \implies \qquad &amp;\space \underbrace{\color{red}{-}{\large/}+,-,-,-,} \underbrace{\color{blue}{+}{\large/}+,-,-,-,} \underbrace{\color{blue}{+}{\large/}+,-,-,-,} \color{blue}{+}{\large/}\cdots \\[2mm] \end{align} \\[2mm] {\Large\rho} = \prod_{k=0}^{n}\left( \sqrt{2-{\large\rho}_{5k-1}}\, \sqrt{2-{\large\rho}_{5k\space\space\space\space}}\, \sqrt{2-{\large\rho}_{5k+1}}\, \sqrt{2+{\large\rho}_{5k+2}}\, \sqrt{2\color{red}{+}{\large\rho}_{5k+3}} \right) \\[2mm] {\Large\sigma} = \sqrt{2\color{red}{-}{\large\rho}_{5n+3}}\,,\quad {\Large\rho}\cdot{\Large\sigma}=2\,,\quad \sin\left(\frac{\pi}{11}\right)=\frac{{\large\sigma}}{2}=\frac{1}{{\large\rho}}\,,\quad \cos\left(\frac{\pi}{11}\right)=\sqrt{1-\left(1/{\large\rho}\right)^2} \\[4mm] $$ $$ \color{blue}{\underline{\left({\bf2}\right) - \,\text{Convert to}\, \cos(\pi/22)}} \space\quad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \\[2mm] \begin{align} \cos\left(\frac{\pi}{22}\right) &amp;= \sqrt{\frac12+\frac12\,\cos\left(\frac{\pi}{11}\right)} = \frac12\,\sqrt{2+2\,\cos\left(\frac{\pi}{11}\right)} \Rightarrow \\[2mm] \cos\left(\frac{\pi}{22}\right) &amp;= \frac12\,\sqrt{2\color{red}{+}\sqrt{2+\sqrt{2+\sqrt{2-\sqrt{2-\sqrt{2\color{blue}{-}\sqrt{\cdots}}}}}}} \\[2mm] \implies \qquad &amp;\space \underbrace{\color{red}{+}{\large/}+,+,-,-,} \underbrace{\color{blue}{-}{\large/}+,+,-,-,} 
\underbrace{\color{blue}{-}{\large/}+,+,-,-,} \color{blue}{-}{\large/}\cdots \\[2mm] \end{align} \\[2mm] {\Large\rho} = \prod_{k=0}^{n}\left( \sqrt{2-{\large\rho}_{5k-1}}\, \sqrt{2-{\large\rho}_{5k\space\space\space\space}}\, \sqrt{2+{\large\rho}_{5k+1}}\, \sqrt{2+{\large\rho}_{5k+2}}\, \sqrt{2\color{red}{-}{\large\rho}_{5k+3}} \right) \\[2mm] {\Large\sigma} = \sqrt{2\color{red}{+}{\large\rho}_{5n+3}}\,,\quad {\Large\rho}\cdot{\Large\sigma}=2\,,\quad \cos\left(\frac{\pi}{22}\right)=\frac{{\large\sigma}}{2}=\frac{1}{{\large\rho}}\,,\quad \cos\left(\frac{\pi}{11}\right)=\sqrt{\frac12+\frac12\,\left(1/{\large\rho}\right)} \\[2mm] $$ And here are Mathematica codes for these two options: <br/> </p> <pre> <code> n = 4; r = 1; e = 0; For[k = 0, n >= k, k++, {a = Sqrt[2 - e]; b = Sqrt[2 - a]; c = Sqrt[2 - b]; d = Sqrt[2 + c]; e = Sqrt[2 + d]; r = r (a b c d e)}]; s = 1; x = Sqrt[2 + Sqrt[2 - Sqrt[2 - Sqrt[2]]]]; For[k = 0, (n - 1) >= k, k++, {x = Sqrt[2 + Sqrt[2 - Sqrt[2 - Sqrt[2 - Sqrt[2 + x]]]]]}]; s = Sqrt[2 - x]; Print[N[r s], " ", N[1/r], " ", N[s/2], " ", N[Sin[Pi/11]]] n = 4; r = 1; e = 0; For[k = 0, n >= k, k++, {a = Sqrt[2 - e]; b = Sqrt[2 - a]; c = Sqrt[2 + b]; d = Sqrt[2 + c]; e = Sqrt[2 - d]; r = r (a b c d e)}]; s = 1; x = Sqrt[2 + Sqrt[2 + Sqrt[2 - Sqrt[2]]]]; For[k = 0, (n - 1) >= k, k++, {x = Sqrt[2 + Sqrt[2 + Sqrt[2 - Sqrt[2 - Sqrt[2 - x]]]]]}]; s = Sqrt[2 + x]; Print[N[r s], " ", N[1/r], " ", N[s/2], " ", N[Cos[Pi/22]]] </code> </pre> Answer 10: <p>With the help of Half angle cosine formula it is possible to express many sine and cosine angles as finite or infinite nested square roots of 2 <a href="https://math.stackexchange.com/questions/3841604/solving-cyclic-infinite-nested-square-roots-of-2-as-cosine-functions">here</a>.</p> <p>In this perspective <span class="math-container">$2\sin\frac{\pi}{7}$</span> can be solved as follows</p> <p><span class="math-container">$2\sin\frac{\pi}{7}$</span> = <span 
class="math-container">$\sqrt{2-2\cos\frac{2\pi}{7}}$</span> = <span class="math-container">$\sqrt{2-\sqrt{2+2\cos\frac{4\pi}{7}}}$</span> = <span class="math-container">$\sqrt{2-\sqrt{2-2\cos\frac{3\pi}{7}}}$</span> = <span class="math-container">$\sqrt{2-\sqrt{2-\sqrt{2+2\cos\frac{6\pi}{7}}}}$</span> = <span class="math-container">$\sqrt{2-\sqrt{2-\sqrt{2-2\cos\frac{\pi}{7}}}}$</span> = <span class="math-container">$\sqrt{2-\sqrt{2-\sqrt{2-\sqrt{2+2\cos\frac{2\pi}{7}}}}}$</span></p> <p>One can observe repetition of <span class="math-container">$2\cos\frac{2\pi}{7}$</span> subsequently and <span class="math-container">$\therefore$</span> <span class="math-container">$2\sin\frac{\pi}{7}$</span> can be represented simply as</p> <blockquote> <p><strong><span class="math-container">$$\sqrt{2-\sqrt{2-\sqrt{2-\sqrt{2+\sqrt{2-\sqrt{2-\sqrt{2+...2\cos\frac{2\pi}{7}}}}}}}}$$</span> with repeating pattern <span class="math-container">$--+$</span> infinitely and first <span class="math-container">$-$</span> as single nonrepeating sign</strong></p> </blockquote> <p>But getting infinite product has to be worked out</p> Answer 11: <p>Let: $$ \begin{align} \LARGE{\rho}\normalsize &amp;= \left(\small \sqrt{2} \,\sqrt{2-\sqrt{2}} \,\sqrt{2+\sqrt{2-\sqrt{2}}} \normalsize\right) \times \\[2mm] &amp; \space \left(\small \sqrt{2\color{red}{-}\sqrt{2+\sqrt{2-\sqrt{2}}}} \,\sqrt{2\color{red}{-}\sqrt{2-\sqrt{2+\sqrt{2-\sqrt{2}}}}} \,\color{red}{\sqrt{2+ \color{black}{\sqrt{2-\sqrt{2-\sqrt{2+\sqrt{2-\sqrt{2}}}}}}}} \normalsize\right) \times \cdots \\[4mm] \LARGE{\sigma}\normalsize &amp;= \color{red}{\sqrt{2- \color{black}{\sqrt{2-\sqrt{2-\sqrt{2+\sqrt{2-\sqrt{2-\sqrt{2+\sqrt{\cdots}}}}}}}}}} \space\space\small\{\text{Same as last item of}\,\rho\,\text{with minus sign}\} \end{align} $$ Fundamentally, the two expressions are the same. 
<br/> And to transpose between them, we use the property: $$ \boxed{ \quad \color{red}{\Large{\rho \cdot \sigma = 2}} \quad } \\[4mm] \frac{1}{\rho} = \frac{1}{\rho} \color{red}{\cdot \frac{\sigma}{\sigma}} = \frac{\sigma}{\rho\cdot\sigma} = \frac{\sigma}{2} = \quad\sin\left(\frac{\pi}{7}\right)\quad = \frac{\sigma}{2} = \frac{\sigma}{2} \color{red}{\cdot \frac{\rho}{\rho}} = \frac{\rho\cdot\sigma/2}{\rho} = \frac{1}{\rho} $$ <br/> In order to trace the simplifying process, let us consider the definition of both $\left(\large{\rho}\right)$ and $\left(\large{\sigma}\right)$ upto $n$, <br/> Where $\left(\large{\rho}\right)$ definition is consist of <strong>three square-roots at a time</strong>, as illustrated above: $$ \begin{align} \LARGE{\rho}\normalsize &amp;= \left(\large{\rho}_{\normalsize 0}\,\large{\rho}_{\normalsize 1}\,\large{\rho}_{\normalsize 2}\right) \left(\large{\rho}_{\normalsize 3}\,\large{\rho}_{\normalsize 4}\,\large{\rho}_{\normalsize 5}\right) \cdots \left(\large{\rho}_{\normalsize 3n-3}\,\large{\rho}_{\normalsize 3n-2}\,\large{\rho}_{\normalsize 3n-1}\right) \left(\large{\rho}_{\normalsize 3n}\,\large{\rho}_{\normalsize 3n+1}\,\large{\rho}_{\normalsize 3n+2}\right) \\[2mm] &amp;= \left(\large{\rho}_{\normalsize 0}\,\large{\rho}_{\normalsize 1}\,\large{\rho}_{\normalsize 2}\right) \left(\large{\rho}_{\normalsize 3}\,\large{\rho}_{\normalsize 4}\,\large{\rho}_{\normalsize 5}\right) \cdots \left(\large{\rho}_{\normalsize 3n-3}\,\large{\rho}_{\normalsize 3n-2}\,\large{\rho}_{\normalsize 3n-1}\right) \left(\sqrt{2\color{red}{-}\rho_{\small 3n-1}}\,\sqrt{2\color{red}{-}\rho_{\small 3n}}\,\sqrt{2\color{red}{+}\rho_{\small 3n+1}}\right) \end{align} $$ And $\left(\large{\sigma}\right)$ equals <strong>same as last item of</strong> $\large{\rho}\,$ <strong>with minus sign</strong>, as mentioned above: $$ \LARGE{\sigma}\normalsize = \sqrt{2\color{red}{-}\rho_{\small 3n+1}} 
\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad $$ So: $$ \begin{align} &amp; \left(\large{\rho}_{\normalsize 3n}\,\large{\rho}_{\normalsize 3n+1}\,\large{\rho}_{\normalsize 3n+2}\right) \,\sqrt{2-\rho_{\small 3n+1}} \\[2mm] &amp; \qquad = \sqrt{2-\rho_{\small 3n-1}}\,\sqrt{2-\rho_{\small 3n}}\,\sqrt{2+\rho_{\small 3n+1}}\,\sqrt{2-\rho_{\small 3n+1}} = \sqrt{2-\rho_{\small 3n-1}}\,\sqrt{2-\rho_{\small 3n}}\,\sqrt{4-\rho_{\small 3n+1}^2} \\[2mm] &amp; \qquad = \sqrt{2-\rho_{\small 3n-1}}\,\sqrt{2-\rho_{\small 3n}}\,\sqrt{4-\left(2-\rho_{\small 3n}\right)} = \sqrt{2-\rho_{\small 3n-1}}\,\sqrt{4-\rho_{\small 3n}^2} \\[2mm] &amp; \qquad = \sqrt{2-\rho_{\small 3n-1}}\,\sqrt{4-\left(2-\rho_{\small 3n-1}\right)} = \sqrt{4-\rho_{\small 3n-1}^2} = \sqrt{4-\left(2+\rho_{\small 3n-2}\right)} = \color{red}{\sqrt{2-\rho_{\small 3n-2}}} \end{align} $$ Thus: $$ \begin{align} \color{red}{\Large{\rho \cdot \sigma}}\normalsize &amp;= \left[\left(\large{\rho}_{\normalsize 0}\,\large{\rho}_{\normalsize 1}\,\large{\rho}_{\normalsize 2}\right) \left(\large{\rho}_{\normalsize 3}\,\large{\rho}_{\normalsize 4}\,\large{\rho}_{\normalsize 5}\right) \cdots \left(\large{\rho}_{\normalsize 3n-3}\,\large{\rho}_{\normalsize 3n-2}\,\large{\rho}_{\normalsize 3n-1}\right) \left(\large{\rho}_{\normalsize 3n}\,\large{\rho}_{\normalsize 3n+1}\,\large{\rho}_{\normalsize 3n+2}\right)\right]\,\sqrt{2-\large{\rho}_{\normalsize 3n+1}} \\[2mm] &amp;= \left[\left(\large{\rho}_{\normalsize 0}\,\large{\rho}_{\normalsize 1}\,\large{\rho}_{\normalsize 2}\right) \left(\large{\rho}_{\normalsize 3}\,\large{\rho}_{\normalsize 4}\,\large{\rho}_{\normalsize 5}\right) \cdots \left(\large{\rho}_{\normalsize 3n-3}\,\large{\rho}_{\normalsize 3n-2}\,\large{\rho}_{\normalsize 3n-1}\right)\right]\,\sqrt{2-\large{\rho}_{\normalsize 3n-2}} \\[2mm] &amp;= \left[\left(\large{\rho}_{\normalsize 0}\,\large{\rho}_{\normalsize 1}\,\large{\rho}_{\normalsize 2}\right) 
\left(\large{\rho}_{\normalsize 3}\,\large{\rho}_{\normalsize 4}\,\large{\rho}_{\normalsize 5}\right) \cdots \right]\,\sqrt{2-\large{\rho}_{\normalsize 3n-5}} \\[2mm] &amp;= \quad \cdots \quad \cdots \quad \cdots \\[2mm] &amp;= \left[\left(\large{\rho}_{\normalsize 0}\,\large{\rho}_{\normalsize 1}\,\large{\rho}_{\normalsize 2}\right)\right]\,\sqrt{2-\large{\rho}_{\normalsize 1}} = \large{\rho}_{\normalsize 0}\normalsize\,\sqrt{4-\rho_{\small 0}^2} \normalsize = \sqrt{2}\,\sqrt{4-2} = \color{red}{2 \quad \forall\space n \ge 1} \end{align} $$ Hence: $$ \lim_{n\rightarrow\infty}\frac{1}{\rho} = \lim_{n\rightarrow\infty}\frac{\sigma}{2} = \sin\left(\frac{\pi}{7}\right) \quad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad $$</p> <hr> <p>In order to use this technique in general, for a period of length $\color{red}{m}$, we have: $$ \begin{align} \color{red}{{\large\rho}_{-1}=0},\,\, {\large\rho}_{0} &amp;= \sqrt{2\pm{\rho}_{-1}},\,\, {\large\rho}_{1}=\sqrt{2\pm{\rho}_{0}},\,\, {\large\rho}_{2}=\sqrt{2\pm{\rho}_{1}},\,\, \cdots ,\,\, {\large\rho}_{m-1}=\sqrt{2\pm{\rho}_{m-2}},\,\, \cdots \\[2mm] {\Large\rho} &amp;= \prod_{k=0}^{n}\left( {\large\rho}_{mk}\, {\large\rho}_{mk+1}\, \cdots\, {\large\rho}_{mk+m-2}\, {\large\rho}_{mk+m-1} \right) \qquad \\[2mm] &amp;= \prod_{k=0}^{n}\left( \sqrt{2\pm{\large\rho}_{mk-1}}\, \sqrt{2\pm{\large\rho}_{mk}}\, \cdots\, \sqrt{2\pm{\large\rho}_{mk+m-3}}\, \sqrt{2\color{red}{\pm}{\large\rho}_{mk+m-2}} \right) \\[2mm] {\Large\sigma} &amp;= \sqrt{2\color{red}{\mp}{\large\rho}_{mn+m-2}} \end{align} $$ Hence, $\,\left({\large\sigma}\right)\,$ <strong>must start with a different sign from the actual period!</strong>
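<p>Before turning to periods where this condition fails, the $\sin(\pi/7)$ construction above can be sanity-checked numerically. The following is a short Python sketch (a stand-in for the Mathematica style used later in this answer; the names <code>n</code>, <code>signs</code>, <code>rho</code>, <code>sigma</code> are mine). It builds $\rho_k$ from $\rho_{-1}=0$, $\rho_k=\sqrt{2\pm\rho_{k-1}}$ with the repeating sign pattern $(-,-,+)$, forms the partial product $\rho$ and the companion $\sigma$, and confirms $\rho\cdot\sigma=2$ and $1/\rho=\sigma/2\approx\sin(\pi/7)$:</p>

```python
import math

# rho_{-1} = 0, rho_k = sqrt(2 + s_k * rho_{k-1}) with signs (-, -, +) repeating;
# rho is the partial product rho_0 * ... * rho_{3n+2} and
# sigma = sqrt(2 - rho_{3n+1}) (the last factor of rho, taken with a minus sign).
n = 12                       # number of three-factor groups (k = 0 .. n)
signs = (-1, -1, 1)          # repeating sign pattern of the period
rho, prev, terms = 1.0, 0.0, []
for k in range(3 * (n + 1)):         # indices 0 .. 3n+2
    prev = math.sqrt(2 + signs[k % 3] * prev)
    terms.append(prev)
    rho *= prev
sigma = math.sqrt(2 - terms[-2])     # 2 - rho_{3n+1}

print(rho * sigma)                   # ~ 2, up to floating-point rounding
print(1 / rho, math.sin(math.pi / 7))
```

<p>The identity $\rho\cdot\sigma=2$ holds exactly for every $n\ge1$ (as proved above), while $1/\rho=\sigma/2$ converges to $\sin(\pi/7)$ as $n$ grows.</p>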
<br/> <br/> Consider the following example $\left\{\,\text{period}\,\left(+,+,-,-,-\right)\,\right\}$: $$ \begin{align} \cos\left(\frac{\pi}{11}\right) &amp;= \frac12\,\sqrt{2\color{red}{+}\sqrt{2+\sqrt{2-\sqrt{2-\sqrt{2-\sqrt{2\color{blue}{+}\sqrt{\cdots}}}}}}} \\[2mm] \implies \qquad &amp;\space \underbrace{\color{red}{+}{\large/}+,-,-,-,} \underbrace{\color{blue}{+}{\large/}+,-,-,-,} \underbrace{\color{blue}{+}{\large/}+,-,-,-,} \color{blue}{+}{\large/}\cdots \end{align} $$ And as it appears, after isolating the first sign, the pattern is not obeying the condition. <br/> We have two options to convert it into applicable format: $$ \color{blue}{\underline{\left({\bf1}\right) - \,\text{Convert to}\, \sin(\pi/11)}} \space\quad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \\[2mm] \begin{align} \sin\left(\frac{\pi}{11}\right) &amp;= \sqrt{1-\cos^2\left(\frac{\pi}{11}\right)} = \frac12\,\sqrt{4-\left[2\cos\left(\frac{\pi}{11}\right)\right]^2} \Rightarrow \\[2mm] \sin\left(\frac{\pi}{11}\right) &amp;= \frac12\,\sqrt{2\color{red}{-}\sqrt{2+\sqrt{2-\sqrt{2-\sqrt{2-\sqrt{2\color{blue}{+}\sqrt{\cdots}}}}}}} \\[2mm] \implies \qquad &amp;\space \underbrace{\color{red}{-}{\large/}+,-,-,-,} \underbrace{\color{blue}{+}{\large/}+,-,-,-,} \underbrace{\color{blue}{+}{\large/}+,-,-,-,} \color{blue}{+}{\large/}\cdots \\[2mm] \end{align} \\[2mm] {\Large\rho} = \prod_{k=0}^{n}\left( \sqrt{2-{\large\rho}_{5k-1}}\, \sqrt{2-{\large\rho}_{5k\space\space\space\space}}\, \sqrt{2-{\large\rho}_{5k+1}}\, \sqrt{2+{\large\rho}_{5k+2}}\, \sqrt{2\color{red}{+}{\large\rho}_{5k+3}} \right) \\[2mm] {\Large\sigma} = \sqrt{2\color{red}{-}{\large\rho}_{5n+3}}\,,\quad {\Large\rho}\cdot{\Large\sigma}=2\,,\quad \sin\left(\frac{\pi}{11}\right)=\frac{{\large\sigma}}{2}=\frac{1}{{\large\rho}}\,,\quad \cos\left(\frac{\pi}{11}\right)=\sqrt{1-\left(1/{\large\rho}\right)^2} \\[4mm] $$ $$ \color{blue}{\underline{\left({\bf2}\right) - \,\text{Convert to}\, \cos(\pi/22)}} 
\space\quad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \\[2mm] \begin{align} \cos\left(\frac{\pi}{22}\right) &amp;= \sqrt{\frac12+\frac12\,\cos\left(\frac{\pi}{11}\right)} = \frac12\,\sqrt{2+2\,\cos\left(\frac{\pi}{11}\right)} \Rightarrow \\[2mm] \cos\left(\frac{\pi}{22}\right) &amp;= \frac12\,\sqrt{2\color{red}{+}\sqrt{2+\sqrt{2+\sqrt{2-\sqrt{2-\sqrt{2\color{blue}{-}\sqrt{\cdots}}}}}}} \\[2mm] \implies \qquad &amp;\space \underbrace{\color{red}{+}{\large/}+,+,-,-,} \underbrace{\color{blue}{-}{\large/}+,+,-,-,} \underbrace{\color{blue}{-}{\large/}+,+,-,-,} \color{blue}{-}{\large/}\cdots \\[2mm] \end{align} \\[2mm] {\Large\rho} = \prod_{k=0}^{n}\left( \sqrt{2-{\large\rho}_{5k-1}}\, \sqrt{2-{\large\rho}_{5k\space\space\space\space}}\, \sqrt{2+{\large\rho}_{5k+1}}\, \sqrt{2+{\large\rho}_{5k+2}}\, \sqrt{2\color{red}{-}{\large\rho}_{5k+3}} \right) \\[2mm] {\Large\sigma} = \sqrt{2\color{red}{+}{\large\rho}_{5n+3}}\,,\quad {\Large\rho}\cdot{\Large\sigma}=2\,,\quad \cos\left(\frac{\pi}{22}\right)=\frac{{\large\sigma}}{2}=\frac{1}{{\large\rho}}\,,\quad \cos\left(\frac{\pi}{11}\right)=\sqrt{\frac12+\frac12\,\left(1/{\large\rho}\right)} \\[2mm] $$ And here are Mathematica codes for these two options: <br/> </p> <pre> <code> n = 4; r = 1; e = 0; For[k = 0, n >= k, k++, {a = Sqrt[2 - e]; b = Sqrt[2 - a]; c = Sqrt[2 - b]; d = Sqrt[2 + c]; e = Sqrt[2 + d]; r = r (a b c d e)}]; s = 1; x = Sqrt[2 + Sqrt[2 - Sqrt[2 - Sqrt[2]]]]; For[k = 0, (n - 1) >= k, k++, {x = Sqrt[2 + Sqrt[2 - Sqrt[2 - Sqrt[2 - Sqrt[2 + x]]]]]}]; s = Sqrt[2 - x]; Print[N[r s], " ", N[1/r], " ", N[s/2], " ", N[Sin[Pi/11]]] n = 4; r = 1; e = 0; For[k = 0, n >= k, k++, {a = Sqrt[2 - e]; b = Sqrt[2 - a]; c = Sqrt[2 + b]; d = Sqrt[2 + c]; e = Sqrt[2 - d]; r = r (a b c d e)}]; s = 1; x = Sqrt[2 + Sqrt[2 + Sqrt[2 - Sqrt[2]]]]; For[k = 0, (n - 1) >= k, k++, {x = Sqrt[2 + Sqrt[2 + Sqrt[2 - Sqrt[2 - Sqrt[2 - x]]]]]}]; s = Sqrt[2 + x]; Print[N[r s], " ", N[1/r], " ", N[s/2], " ", 
N[Cos[Pi/22]]] </code> </pre>
\left(\large{\rho}_{\normalsize 3n}\,\large{\rho}_{\normalsize 3n+1}\,\large{\rho}_{\normalsize 3n+2}\right)\right]\,\sqrt{2-\large{\rho}_{\normalsize 3n+1}} \\[2mm] &amp;= \left[\left(\large{\rho}_{\normalsize 0}\,\large{\rho}_{\normalsize 1}\,\large{\rho}_{\normalsize 2}\right) \left(\large{\rho}_{\normalsize 3}\,\large{\rho}_{\normalsize 4}\,\large{\rho}_{\normalsize 5}\right) \cdots \left(\large{\rho}_{\normalsize 3n-3}\,\large{\rho}_{\normalsize 3n-2}\,\large{\rho}_{\normalsize 3n-1}\right)\right]\,\sqrt{2-\large{\rho}_{\normalsize 3n-2}} \\[2mm] &amp;= \left[\left(\large{\rho}_{\normalsize 0}\,\large{\rho}_{\normalsize 1}\,\large{\rho}_{\normalsize 2}\right) \left(\large{\rho}_{\normalsize 3}\,\large{\rho}_{\normalsize 4}\,\large{\rho}_{\normalsize 5}\right) \cdots \right]\,\sqrt{2-\large{\rho}_{\normalsize 3n-5}} \\[2mm] &amp;= \quad \cdots \quad \cdots \quad \cdots \\[2mm] &amp;= \left[\left(\large{\rho}_{\normalsize 0}\,\large{\rho}_{\normalsize 1}\,\large{\rho}_{\normalsize 2}\right)\right]\,\sqrt{2-\large{\rho}_{\normalsize 1}} = \large{\rho}_{\normalsize 0}\normalsize\,\sqrt{4-\rho_{\small 0}^2} \normalsize = \sqrt{2}\,\sqrt{4-2} = \color{red}{2 \quad \forall\space n \ge 1} \end{align} $$ Hence: $$ \lim_{n\rightarrow\infty}\frac{1}{\rho} = \lim_{n\rightarrow\infty}\frac{\sigma}{2} = \sin\left(\frac{\pi}{7}\right) \quad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad $$</p> <hr> <p>In order use this technique in general, for a period of length $\color{red}{m}$, we have: $$ \begin{align} \color{red}{{\large\rho}_{-1}=0},\,\, {\large\rho}_{0} &amp;= \sqrt{2\pm{\rho}_{-1}},\,\, {\large\rho}_{1}=\sqrt{2\pm{\rho}_{0}},\,\, {\large\rho}_{2}=\sqrt{2\pm{\rho}_{1}},\,\, \cdots ,\,\, {\large\rho}_{m-1}=\sqrt{2\pm{\rho}_{m-2}},\,\, \cdots \\[2mm] {\Large\rho} &amp;= \prod_{k=0}^{n}\left( {\large\rho}_{mk}\, {\large\rho}_{mk+1}\, \cdots\, {\large\rho}_{mk+m-2}\, {\large\rho}_{mk+m-1} \right) \qquad \\[2mm] &amp;= \prod_{k=0}^{n}\left( 
\sqrt{2\pm{\large\rho}_{mk-1}}\, \sqrt{2\pm{\large\rho}_{mk}}\, \cdots\, \sqrt{2\pm{\large\rho}_{mk+m-3}}\, \sqrt{2\color{red}{\pm}{\large\rho}_{mk+m-2}} \right) \\[2mm] {\Large\sigma} &amp;= \sqrt{2\color{red}{\mp}{\large\rho}_{mn+m-2}} \end{align} $$ Hence, $\,\left({\large\sigma}\right)\,$ <strong>must start with a different sign from the actual period!</strong> <br/> Nevertheless, if $\,{\large\sigma}\,$ is not comply with this condition, we still have options to write the product form of related/equivalent function by applying simple manipulations to the original $\,{\large\sigma}\,$. <br/> <br/> Consider the following example $\left\{\,\text{period}\,\left(+,+,-,-,-\right)\,\right\}$: $$ \begin{align} \cos\left(\frac{\pi}{11}\right) &amp;= \frac12\,\sqrt{2\color{red}{+}\sqrt{2+\sqrt{2-\sqrt{2-\sqrt{2-\sqrt{2\color{blue}{+}\sqrt{\cdots}}}}}}} \\[2mm] \implies \qquad &amp;\space \underbrace{\color{red}{+}{\large/}+,-,-,-,} \underbrace{\color{blue}{+}{\large/}+,-,-,-,} \underbrace{\color{blue}{+}{\large/}+,-,-,-,} \color{blue}{+}{\large/}\cdots \end{align} $$ And as it appears, after isolating the first sign, the pattern is not obeying the condition. 
<br/> We have two options to convert it into applicable format: $$ \color{blue}{\underline{\left({\bf1}\right) - \,\text{Convert to}\, \sin(\pi/11)}} \space\quad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \\[2mm] \begin{align} \sin\left(\frac{\pi}{11}\right) &amp;= \sqrt{1-\cos^2\left(\frac{\pi}{11}\right)} = \frac12\,\sqrt{4-\left[2\cos\left(\frac{\pi}{11}\right)\right]^2} \Rightarrow \\[2mm] \sin\left(\frac{\pi}{11}\right) &amp;= \frac12\,\sqrt{2\color{red}{-}\sqrt{2+\sqrt{2-\sqrt{2-\sqrt{2-\sqrt{2\color{blue}{+}\sqrt{\cdots}}}}}}} \\[2mm] \implies \qquad &amp;\space \underbrace{\color{red}{-}{\large/}+,-,-,-,} \underbrace{\color{blue}{+}{\large/}+,-,-,-,} \underbrace{\color{blue}{+}{\large/}+,-,-,-,} \color{blue}{+}{\large/}\cdots \\[2mm] \end{align} \\[2mm] {\Large\rho} = \prod_{k=0}^{n}\left( \sqrt{2-{\large\rho}_{5k-1}}\, \sqrt{2-{\large\rho}_{5k\space\space\space\space}}\, \sqrt{2-{\large\rho}_{5k+1}}\, \sqrt{2+{\large\rho}_{5k+2}}\, \sqrt{2\color{red}{+}{\large\rho}_{5k+3}} \right) \\[2mm] {\Large\sigma} = \sqrt{2\color{red}{-}{\large\rho}_{5n+3}}\,,\quad {\Large\rho}\cdot{\Large\sigma}=2\,,\quad \sin\left(\frac{\pi}{11}\right)=\frac{{\large\sigma}}{2}=\frac{1}{{\large\rho}}\,,\quad \cos\left(\frac{\pi}{11}\right)=\sqrt{1-\left(1/{\large\rho}\right)^2} \\[4mm] $$ $$ \color{blue}{\underline{\left({\bf2}\right) - \,\text{Convert to}\, \cos(\pi/22)}} \space\quad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \\[2mm] \begin{align} \cos\left(\frac{\pi}{22}\right) &amp;= \sqrt{\frac12+\frac12\,\cos\left(\frac{\pi}{11}\right)} = \frac12\,\sqrt{2+2\,\cos\left(\frac{\pi}{11}\right)} \Rightarrow \\[2mm] \cos\left(\frac{\pi}{22}\right) &amp;= \frac12\,\sqrt{2\color{red}{+}\sqrt{2+\sqrt{2+\sqrt{2-\sqrt{2-\sqrt{2\color{blue}{-}\sqrt{\cdots}}}}}}} \\[2mm] \implies \qquad &amp;\space \underbrace{\color{red}{+}{\large/}+,+,-,-,} \underbrace{\color{blue}{-}{\large/}+,+,-,-,} 
\underbrace{\color{blue}{-}{\large/}+,+,-,-,} \color{blue}{-}{\large/}\cdots \\[2mm] \end{align} \\[2mm] {\Large\rho} = \prod_{k=0}^{n}\left( \sqrt{2-{\large\rho}_{5k-1}}\, \sqrt{2-{\large\rho}_{5k\space\space\space\space}}\, \sqrt{2+{\large\rho}_{5k+1}}\, \sqrt{2+{\large\rho}_{5k+2}}\, \sqrt{2\color{red}{-}{\large\rho}_{5k+3}} \right) \\[2mm] {\Large\sigma} = \sqrt{2\color{red}{+}{\large\rho}_{5n+3}}\,,\quad {\Large\rho}\cdot{\Large\sigma}=2\,,\quad \cos\left(\frac{\pi}{22}\right)=\frac{{\large\sigma}}{2}=\frac{1}{{\large\rho}}\,,\quad \cos\left(\frac{\pi}{11}\right)=\sqrt{\frac12+\frac12\,\left(1/{\large\rho}\right)} \\[2mm] $$ And here are Mathematica codes for these two options: <br/> </p> <pre> <code> n = 4; r = 1; e = 0; For[k = 0, n >= k, k++, {a = Sqrt[2 - e]; b = Sqrt[2 - a]; c = Sqrt[2 - b]; d = Sqrt[2 + c]; e = Sqrt[2 + d]; r = r (a b c d e)}]; s = 1; x = Sqrt[2 + Sqrt[2 - Sqrt[2 - Sqrt[2]]]]; For[k = 0, (n - 1) >= k, k++, {x = Sqrt[2 + Sqrt[2 - Sqrt[2 - Sqrt[2 - Sqrt[2 + x]]]]]}]; s = Sqrt[2 - x]; Print[N[r s], " ", N[1/r], " ", N[s/2], " ", N[Sin[Pi/11]]] n = 4; r = 1; e = 0; For[k = 0, n >= k, k++, {a = Sqrt[2 - e]; b = Sqrt[2 - a]; c = Sqrt[2 + b]; d = Sqrt[2 + c]; e = Sqrt[2 - d]; r = r (a b c d e)}]; s = 1; x = Sqrt[2 + Sqrt[2 + Sqrt[2 - Sqrt[2]]]]; For[k = 0, (n - 1) >= k, k++, {x = Sqrt[2 + Sqrt[2 + Sqrt[2 - Sqrt[2 - Sqrt[2 - x]]]]]}]; s = Sqrt[2 + x]; Print[N[r s], " ", N[1/r], " ", N[s/2], " ", N[Cos[Pi/22]]] </code> </pre> Answer 18: <p>With the help of Half angle cosine formula it is possible to express many sine and cosine angles as finite or infinite nested square roots of 2 <a href="https://math.stackexchange.com/questions/3841604/solving-cyclic-infinite-nested-square-roots-of-2-as-cosine-functions">here</a>.</p> <p>In this perspective <span class="math-container">$2\sin\frac{\pi}{7}$</span> can be solved as follows</p> <p><span class="math-container">$2\sin\frac{\pi}{7}$</span> = <span 
class="math-container">$\sqrt{2-2\cos\frac{2\pi}{7}}$</span> = <span class="math-container">$\sqrt{2-\sqrt{2+2\cos\frac{4\pi}{7}}}$</span> = <span class="math-container">$\sqrt{2-\sqrt{2-2\cos\frac{3\pi}{7}}}$</span> = <span class="math-container">$\sqrt{2-\sqrt{2-\sqrt{2+2\cos\frac{6\pi}{7}}}}$</span> = <span class="math-container">$\sqrt{2-\sqrt{2-\sqrt{2-2\cos\frac{\pi}{7}}}}$</span> = <span class="math-container">$\sqrt{2-\sqrt{2-\sqrt{2-\sqrt{2+2\cos\frac{2\pi}{7}}}}}$</span></p> <p>One can observe repetition of <span class="math-container">$2\cos\frac{2\pi}{7}$</span> subsequently and <span class="math-container">$\therefore$</span> <span class="math-container">$2\sin\frac{\pi}{7}$</span> can be represented simply as</p> <blockquote> <p><strong><span class="math-container">$$\sqrt{2-\sqrt{2-\sqrt{2-\sqrt{2+\sqrt{2-\sqrt{2-\sqrt{2+...2\cos\frac{2\pi}{7}}}}}}}}$$</span> with repeating pattern <span class="math-container">$--+$</span> infinitely and first <span class="math-container">$-$</span> as single nonrepeating sign</strong></p> </blockquote> <p>But getting infinite product has to be worked out</p> Answer 19: <p>Let: $$ \begin{align} \LARGE{\rho}\normalsize &amp;= \left(\small \sqrt{2} \,\sqrt{2-\sqrt{2}} \,\sqrt{2+\sqrt{2-\sqrt{2}}} \normalsize\right) \times \\[2mm] &amp; \space \left(\small \sqrt{2\color{red}{-}\sqrt{2+\sqrt{2-\sqrt{2}}}} \,\sqrt{2\color{red}{-}\sqrt{2-\sqrt{2+\sqrt{2-\sqrt{2}}}}} \,\color{red}{\sqrt{2+ \color{black}{\sqrt{2-\sqrt{2-\sqrt{2+\sqrt{2-\sqrt{2}}}}}}}} \normalsize\right) \times \cdots \\[4mm] \LARGE{\sigma}\normalsize &amp;= \color{red}{\sqrt{2- \color{black}{\sqrt{2-\sqrt{2-\sqrt{2+\sqrt{2-\sqrt{2-\sqrt{2+\sqrt{\cdots}}}}}}}}}} \space\space\small\{\text{Same as last item of}\,\rho\,\text{with minus sign}\} \end{align} $$ Fundamentally, the two expressions are the same. 
<br/> And to transpose between them, we use the property: $$ \boxed{ \quad \color{red}{\Large{\rho \cdot \sigma = 2}} \quad } \\[4mm] \frac{1}{\rho} = \frac{1}{\rho} \color{red}{\cdot \frac{\sigma}{\sigma}} = \frac{\sigma}{\rho\cdot\sigma} = \frac{\sigma}{2} = \quad\sin\left(\frac{\pi}{7}\right)\quad = \frac{\sigma}{2} = \frac{\sigma}{2} \color{red}{\cdot \frac{\rho}{\rho}} = \frac{\rho\cdot\sigma/2}{\rho} = \frac{1}{\rho} $$ <br/> In order to trace the simplifying process, let us consider the definition of both $\left(\large{\rho}\right)$ and $\left(\large{\sigma}\right)$ upto $n$, <br/> Where $\left(\large{\rho}\right)$ definition is consist of <strong>three square-roots at a time</strong>, as illustrated above: $$ \begin{align} \LARGE{\rho}\normalsize &amp;= \left(\large{\rho}_{\normalsize 0}\,\large{\rho}_{\normalsize 1}\,\large{\rho}_{\normalsize 2}\right) \left(\large{\rho}_{\normalsize 3}\,\large{\rho}_{\normalsize 4}\,\large{\rho}_{\normalsize 5}\right) \cdots \left(\large{\rho}_{\normalsize 3n-3}\,\large{\rho}_{\normalsize 3n-2}\,\large{\rho}_{\normalsize 3n-1}\right) \left(\large{\rho}_{\normalsize 3n}\,\large{\rho}_{\normalsize 3n+1}\,\large{\rho}_{\normalsize 3n+2}\right) \\[2mm] &amp;= \left(\large{\rho}_{\normalsize 0}\,\large{\rho}_{\normalsize 1}\,\large{\rho}_{\normalsize 2}\right) \left(\large{\rho}_{\normalsize 3}\,\large{\rho}_{\normalsize 4}\,\large{\rho}_{\normalsize 5}\right) \cdots \left(\large{\rho}_{\normalsize 3n-3}\,\large{\rho}_{\normalsize 3n-2}\,\large{\rho}_{\normalsize 3n-1}\right) \left(\sqrt{2\color{red}{-}\rho_{\small 3n-1}}\,\sqrt{2\color{red}{-}\rho_{\small 3n}}\,\sqrt{2\color{red}{+}\rho_{\small 3n+1}}\right) \end{align} $$ And $\left(\large{\sigma}\right)$ equals <strong>same as last item of</strong> $\large{\rho}\,$ <strong>with minus sign</strong>, as mentioned above: $$ \LARGE{\sigma}\normalsize = \sqrt{2\color{red}{-}\rho_{\small 3n+1}} 
\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad $$ So: $$ \begin{align} &amp; \left(\large{\rho}_{\normalsize 3n}\,\large{\rho}_{\normalsize 3n+1}\,\large{\rho}_{\normalsize 3n+2}\right) \,\sqrt{2-\rho_{\small 3n+1}} \\[2mm] &amp; \qquad = \sqrt{2-\rho_{\small 3n-1}}\,\sqrt{2-\rho_{\small 3n}}\,\sqrt{2+\rho_{\small 3n+1}}\,\sqrt{2-\rho_{\small 3n+1}} = \sqrt{2-\rho_{\small 3n-1}}\,\sqrt{2-\rho_{\small 3n}}\,\sqrt{4-\rho_{\small 3n+1}^2} \\[2mm] &amp; \qquad = \sqrt{2-\rho_{\small 3n-1}}\,\sqrt{2-\rho_{\small 3n}}\,\sqrt{4-\left(2-\rho_{\small 3n}\right)} = \sqrt{2-\rho_{\small 3n-1}}\,\sqrt{4-\rho_{\small 3n}^2} \\[2mm] &amp; \qquad = \sqrt{2-\rho_{\small 3n-1}}\,\sqrt{4-\left(2-\rho_{\small 3n-1}\right)} = \sqrt{4-\rho_{\small 3n-1}^2} = \sqrt{4-\left(2+\rho_{\small 3n-2}\right)} = \color{red}{\sqrt{2-\rho_{\small 3n-2}}} \end{align} $$ Thus: $$ \begin{align} \color{red}{\Large{\rho \cdot \sigma}}\normalsize &amp;= \left[\left(\large{\rho}_{\normalsize 0}\,\large{\rho}_{\normalsize 1}\,\large{\rho}_{\normalsize 2}\right) \left(\large{\rho}_{\normalsize 3}\,\large{\rho}_{\normalsize 4}\,\large{\rho}_{\normalsize 5}\right) \cdots \left(\large{\rho}_{\normalsize 3n-3}\,\large{\rho}_{\normalsize 3n-2}\,\large{\rho}_{\normalsize 3n-1}\right) \left(\large{\rho}_{\normalsize 3n}\,\large{\rho}_{\normalsize 3n+1}\,\large{\rho}_{\normalsize 3n+2}\right)\right]\,\sqrt{2-\large{\rho}_{\normalsize 3n+1}} \\[2mm] &amp;= \left[\left(\large{\rho}_{\normalsize 0}\,\large{\rho}_{\normalsize 1}\,\large{\rho}_{\normalsize 2}\right) \left(\large{\rho}_{\normalsize 3}\,\large{\rho}_{\normalsize 4}\,\large{\rho}_{\normalsize 5}\right) \cdots \left(\large{\rho}_{\normalsize 3n-3}\,\large{\rho}_{\normalsize 3n-2}\,\large{\rho}_{\normalsize 3n-1}\right)\right]\,\sqrt{2-\large{\rho}_{\normalsize 3n-2}} \\[2mm] &amp;= \left[\left(\large{\rho}_{\normalsize 0}\,\large{\rho}_{\normalsize 1}\,\large{\rho}_{\normalsize 2}\right) 
\left(\large{\rho}_{\normalsize 3}\,\large{\rho}_{\normalsize 4}\,\large{\rho}_{\normalsize 5}\right) \cdots \right]\,\sqrt{2-\large{\rho}_{\normalsize 3n-5}} \\[2mm] &amp;= \quad \cdots \quad \cdots \quad \cdots \\[2mm] &amp;= \left[\left(\large{\rho}_{\normalsize 0}\,\large{\rho}_{\normalsize 1}\,\large{\rho}_{\normalsize 2}\right)\right]\,\sqrt{2-\large{\rho}_{\normalsize 1}} = \large{\rho}_{\normalsize 0}\normalsize\,\sqrt{4-\rho_{\small 0}^2} \normalsize = \sqrt{2}\,\sqrt{4-2} = \color{red}{2 \quad \forall\space n \ge 1} \end{align} $$ Hence: $$ \lim_{n\rightarrow\infty}\frac{1}{\rho} = \lim_{n\rightarrow\infty}\frac{\sigma}{2} = \sin\left(\frac{\pi}{7}\right) \quad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad $$</p> <hr> <p>In order use this technique in general, for a period of length $\color{red}{m}$, we have: $$ \begin{align} \color{red}{{\large\rho}_{-1}=0},\,\, {\large\rho}_{0} &amp;= \sqrt{2\pm{\rho}_{-1}},\,\, {\large\rho}_{1}=\sqrt{2\pm{\rho}_{0}},\,\, {\large\rho}_{2}=\sqrt{2\pm{\rho}_{1}},\,\, \cdots ,\,\, {\large\rho}_{m-1}=\sqrt{2\pm{\rho}_{m-2}},\,\, \cdots \\[2mm] {\Large\rho} &amp;= \prod_{k=0}^{n}\left( {\large\rho}_{mk}\, {\large\rho}_{mk+1}\, \cdots\, {\large\rho}_{mk+m-2}\, {\large\rho}_{mk+m-1} \right) \qquad \\[2mm] &amp;= \prod_{k=0}^{n}\left( \sqrt{2\pm{\large\rho}_{mk-1}}\, \sqrt{2\pm{\large\rho}_{mk}}\, \cdots\, \sqrt{2\pm{\large\rho}_{mk+m-3}}\, \sqrt{2\color{red}{\pm}{\large\rho}_{mk+m-2}} \right) \\[2mm] {\Large\sigma} &amp;= \sqrt{2\color{red}{\mp}{\large\rho}_{mn+m-2}} \end{align} $$ Hence, $\,\left({\large\sigma}\right)\,$ <strong>must start with a different sign from the actual period!</strong> <br/> Nevertheless, if $\,{\large\sigma}\,$ is not comply with this condition, we still have options to write the product form of related/equivalent function by applying simple manipulations to the original $\,{\large\sigma}\,$. 
<br/> <br/> Consider the following example $\left\{\,\text{period}\,\left(+,+,-,-,-\right)\,\right\}$: $$ \begin{align} \cos\left(\frac{\pi}{11}\right) &amp;= \frac12\,\sqrt{2\color{red}{+}\sqrt{2+\sqrt{2-\sqrt{2-\sqrt{2-\sqrt{2\color{blue}{+}\sqrt{\cdots}}}}}}} \\[2mm] \implies \qquad &amp;\space \underbrace{\color{red}{+}{\large/}+,-,-,-,} \underbrace{\color{blue}{+}{\large/}+,-,-,-,} \underbrace{\color{blue}{+}{\large/}+,-,-,-,} \color{blue}{+}{\large/}\cdots \end{align} $$ And as it appears, after isolating the first sign, the pattern is not obeying the condition. <br/> We have two options to convert it into applicable format: $$ \color{blue}{\underline{\left({\bf1}\right) - \,\text{Convert to}\, \sin(\pi/11)}} \space\quad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \\[2mm] \begin{align} \sin\left(\frac{\pi}{11}\right) &amp;= \sqrt{1-\cos^2\left(\frac{\pi}{11}\right)} = \frac12\,\sqrt{4-\left[2\cos\left(\frac{\pi}{11}\right)\right]^2} \Rightarrow \\[2mm] \sin\left(\frac{\pi}{11}\right) &amp;= \frac12\,\sqrt{2\color{red}{-}\sqrt{2+\sqrt{2-\sqrt{2-\sqrt{2-\sqrt{2\color{blue}{+}\sqrt{\cdots}}}}}}} \\[2mm] \implies \qquad &amp;\space \underbrace{\color{red}{-}{\large/}+,-,-,-,} \underbrace{\color{blue}{+}{\large/}+,-,-,-,} \underbrace{\color{blue}{+}{\large/}+,-,-,-,} \color{blue}{+}{\large/}\cdots \\[2mm] \end{align} \\[2mm] {\Large\rho} = \prod_{k=0}^{n}\left( \sqrt{2-{\large\rho}_{5k-1}}\, \sqrt{2-{\large\rho}_{5k\space\space\space\space}}\, \sqrt{2-{\large\rho}_{5k+1}}\, \sqrt{2+{\large\rho}_{5k+2}}\, \sqrt{2\color{red}{+}{\large\rho}_{5k+3}} \right) \\[2mm] {\Large\sigma} = \sqrt{2\color{red}{-}{\large\rho}_{5n+3}}\,,\quad {\Large\rho}\cdot{\Large\sigma}=2\,,\quad \sin\left(\frac{\pi}{11}\right)=\frac{{\large\sigma}}{2}=\frac{1}{{\large\rho}}\,,\quad \cos\left(\frac{\pi}{11}\right)=\sqrt{1-\left(1/{\large\rho}\right)^2} \\[4mm] $$ $$ \color{blue}{\underline{\left({\bf2}\right) - \,\text{Convert to}\, \cos(\pi/22)}} 
\space\quad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \\[2mm] \begin{align} \cos\left(\frac{\pi}{22}\right) &amp;= \sqrt{\frac12+\frac12\,\cos\left(\frac{\pi}{11}\right)} = \frac12\,\sqrt{2+2\,\cos\left(\frac{\pi}{11}\right)} \Rightarrow \\[2mm] \cos\left(\frac{\pi}{22}\right) &amp;= \frac12\,\sqrt{2\color{red}{+}\sqrt{2+\sqrt{2+\sqrt{2-\sqrt{2-\sqrt{2\color{blue}{-}\sqrt{\cdots}}}}}}} \\[2mm] \implies \qquad &amp;\space \underbrace{\color{red}{+}{\large/}+,+,-,-,} \underbrace{\color{blue}{-}{\large/}+,+,-,-,} \underbrace{\color{blue}{-}{\large/}+,+,-,-,} \color{blue}{-}{\large/}\cdots \\[2mm] \end{align} \\[2mm] {\Large\rho} = \prod_{k=0}^{n}\left( \sqrt{2-{\large\rho}_{5k-1}}\, \sqrt{2-{\large\rho}_{5k\space\space\space\space}}\, \sqrt{2+{\large\rho}_{5k+1}}\, \sqrt{2+{\large\rho}_{5k+2}}\, \sqrt{2\color{red}{-}{\large\rho}_{5k+3}} \right) \\[2mm] {\Large\sigma} = \sqrt{2\color{red}{+}{\large\rho}_{5n+3}}\,,\quad {\Large\rho}\cdot{\Large\sigma}=2\,,\quad \cos\left(\frac{\pi}{22}\right)=\frac{{\large\sigma}}{2}=\frac{1}{{\large\rho}}\,,\quad \cos\left(\frac{\pi}{11}\right)=\sqrt{\frac12+\frac12\,\left(1/{\large\rho}\right)} \\[2mm] $$ And here are Mathematica codes for these two options: <br/> </p> <pre> <code> n = 4; r = 1; e = 0; For[k = 0, n >= k, k++, {a = Sqrt[2 - e]; b = Sqrt[2 - a]; c = Sqrt[2 - b]; d = Sqrt[2 + c]; e = Sqrt[2 + d]; r = r (a b c d e)}]; s = 1; x = Sqrt[2 + Sqrt[2 - Sqrt[2 - Sqrt[2]]]]; For[k = 0, (n - 1) >= k, k++, {x = Sqrt[2 + Sqrt[2 - Sqrt[2 - Sqrt[2 - Sqrt[2 + x]]]]]}]; s = Sqrt[2 - x]; Print[N[r s], " ", N[1/r], " ", N[s/2], " ", N[Sin[Pi/11]]] n = 4; r = 1; e = 0; For[k = 0, n >= k, k++, {a = Sqrt[2 - e]; b = Sqrt[2 - a]; c = Sqrt[2 + b]; d = Sqrt[2 + c]; e = Sqrt[2 - d]; r = r (a b c d e)}]; s = 1; x = Sqrt[2 + Sqrt[2 + Sqrt[2 - Sqrt[2]]]]; For[k = 0, (n - 1) >= k, k++, {x = Sqrt[2 + Sqrt[2 + Sqrt[2 - Sqrt[2 - Sqrt[2 - x]]]]]}]; s = Sqrt[2 + x]; Print[N[r s], " ", N[1/r], " ", N[s/2], " ", 
N[Cos[Pi/22]]] </code> </pre> Answer 20: <p>With the help of Half angle cosine formula it is possible to express many sine and cosine angles as finite or infinite nested square roots of 2 <a href="https://math.stackexchange.com/questions/3841604/solving-cyclic-infinite-nested-square-roots-of-2-as-cosine-functions">here</a>.</p> <p>In this perspective <span class="math-container">$2\sin\frac{\pi}{7}$</span> can be solved as follows</p> <p><span class="math-container">$2\sin\frac{\pi}{7}$</span> = <span class="math-container">$\sqrt{2-2\cos\frac{2\pi}{7}}$</span> = <span class="math-container">$\sqrt{2-\sqrt{2+2\cos\frac{4\pi}{7}}}$</span> = <span class="math-container">$\sqrt{2-\sqrt{2-2\cos\frac{3\pi}{7}}}$</span> = <span class="math-container">$\sqrt{2-\sqrt{2-\sqrt{2+2\cos\frac{6\pi}{7}}}}$</span> = <span class="math-container">$\sqrt{2-\sqrt{2-\sqrt{2-2\cos\frac{\pi}{7}}}}$</span> = <span class="math-container">$\sqrt{2-\sqrt{2-\sqrt{2-\sqrt{2+2\cos\frac{2\pi}{7}}}}}$</span></p> <p>One can observe repetition of <span class="math-container">$2\cos\frac{2\pi}{7}$</span> subsequently and <span class="math-container">$\therefore$</span> <span class="math-container">$2\sin\frac{\pi}{7}$</span> can be represented simply as</p> <blockquote> <p><strong><span class="math-container">$$\sqrt{2-\sqrt{2-\sqrt{2-\sqrt{2+\sqrt{2-\sqrt{2-\sqrt{2+...2\cos\frac{2\pi}{7}}}}}}}}$$</span> with repeating pattern <span class="math-container">$--+$</span> infinitely and first <span class="math-container">$-$</span> as single nonrepeating sign</strong></p> </blockquote> <p>But getting infinite product has to be worked out</p> Answer 21: <p>Let: $$ \begin{align} \LARGE{\rho}\normalsize &amp;= \left(\small \sqrt{2} \,\sqrt{2-\sqrt{2}} \,\sqrt{2+\sqrt{2-\sqrt{2}}} \normalsize\right) \times \\[2mm] &amp; \space \left(\small \sqrt{2\color{red}{-}\sqrt{2+\sqrt{2-\sqrt{2}}}} \,\sqrt{2\color{red}{-}\sqrt{2-\sqrt{2+\sqrt{2-\sqrt{2}}}}} \,\color{red}{\sqrt{2+ 
\color{black}{\sqrt{2-\sqrt{2-\sqrt{2+\sqrt{2-\sqrt{2}}}}}}}} \normalsize\right) \times \cdots \\[4mm] \LARGE{\sigma}\normalsize &amp;= \color{red}{\sqrt{2- \color{black}{\sqrt{2-\sqrt{2-\sqrt{2+\sqrt{2-\sqrt{2-\sqrt{2+\sqrt{\cdots}}}}}}}}}} \space\space\small\{\text{Same as last item of}\,\rho\,\text{with minus sign}\} \end{align} $$ Fundamentally, the two expressions are the same. <br/> And to transpose between them, we use the property: $$ \boxed{ \quad \color{red}{\Large{\rho \cdot \sigma = 2}} \quad } \\[4mm] \frac{1}{\rho} = \frac{1}{\rho} \color{red}{\cdot \frac{\sigma}{\sigma}} = \frac{\sigma}{\rho\cdot\sigma} = \frac{\sigma}{2} = \quad\sin\left(\frac{\pi}{7}\right)\quad = \frac{\sigma}{2} = \frac{\sigma}{2} \color{red}{\cdot \frac{\rho}{\rho}} = \frac{\rho\cdot\sigma/2}{\rho} = \frac{1}{\rho} $$ <br/> In order to trace the simplifying process, let us consider the definition of both $\left(\large{\rho}\right)$ and $\left(\large{\sigma}\right)$ upto $n$, <br/> Where $\left(\large{\rho}\right)$ definition is consist of <strong>three square-roots at a time</strong>, as illustrated above: $$ \begin{align} \LARGE{\rho}\normalsize &amp;= \left(\large{\rho}_{\normalsize 0}\,\large{\rho}_{\normalsize 1}\,\large{\rho}_{\normalsize 2}\right) \left(\large{\rho}_{\normalsize 3}\,\large{\rho}_{\normalsize 4}\,\large{\rho}_{\normalsize 5}\right) \cdots \left(\large{\rho}_{\normalsize 3n-3}\,\large{\rho}_{\normalsize 3n-2}\,\large{\rho}_{\normalsize 3n-1}\right) \left(\large{\rho}_{\normalsize 3n}\,\large{\rho}_{\normalsize 3n+1}\,\large{\rho}_{\normalsize 3n+2}\right) \\[2mm] &amp;= \left(\large{\rho}_{\normalsize 0}\,\large{\rho}_{\normalsize 1}\,\large{\rho}_{\normalsize 2}\right) \left(\large{\rho}_{\normalsize 3}\,\large{\rho}_{\normalsize 4}\,\large{\rho}_{\normalsize 5}\right) \cdots \left(\large{\rho}_{\normalsize 3n-3}\,\large{\rho}_{\normalsize 3n-2}\,\large{\rho}_{\normalsize 3n-1}\right) \left(\sqrt{2\color{red}{-}\rho_{\small 
3n-1}}\,\sqrt{2\color{red}{-}\rho_{\small 3n}}\,\sqrt{2\color{red}{+}\rho_{\small 3n+1}}\right) \end{align} $$ And $\left(\large{\sigma}\right)$ equals <strong>same as last item of</strong> $\large{\rho}\,$ <strong>with minus sign</strong>, as mentioned above: $$ \LARGE{\sigma}\normalsize = \sqrt{2\color{red}{-}\rho_{\small 3n+1}} \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad $$ So: $$ \begin{align} &amp; \left(\large{\rho}_{\normalsize 3n}\,\large{\rho}_{\normalsize 3n+1}\,\large{\rho}_{\normalsize 3n+2}\right) \,\sqrt{2-\rho_{\small 3n+1}} \\[2mm] &amp; \qquad = \sqrt{2-\rho_{\small 3n-1}}\,\sqrt{2-\rho_{\small 3n}}\,\sqrt{2+\rho_{\small 3n+1}}\,\sqrt{2-\rho_{\small 3n+1}} = \sqrt{2-\rho_{\small 3n-1}}\,\sqrt{2-\rho_{\small 3n}}\,\sqrt{4-\rho_{\small 3n+1}^2} \\[2mm] &amp; \qquad = \sqrt{2-\rho_{\small 3n-1}}\,\sqrt{2-\rho_{\small 3n}}\,\sqrt{4-\left(2-\rho_{\small 3n}\right)} = \sqrt{2-\rho_{\small 3n-1}}\,\sqrt{4-\rho_{\small 3n}^2} \\[2mm] &amp; \qquad = \sqrt{2-\rho_{\small 3n-1}}\,\sqrt{4-\left(2-\rho_{\small 3n-1}\right)} = \sqrt{4-\rho_{\small 3n-1}^2} = \sqrt{4-\left(2+\rho_{\small 3n-2}\right)} = \color{red}{\sqrt{2-\rho_{\small 3n-2}}} \end{align} $$ Thus: $$ \begin{align} \color{red}{\Large{\rho \cdot \sigma}}\normalsize &amp;= \left[\left(\large{\rho}_{\normalsize 0}\,\large{\rho}_{\normalsize 1}\,\large{\rho}_{\normalsize 2}\right) \left(\large{\rho}_{\normalsize 3}\,\large{\rho}_{\normalsize 4}\,\large{\rho}_{\normalsize 5}\right) \cdots \left(\large{\rho}_{\normalsize 3n-3}\,\large{\rho}_{\normalsize 3n-2}\,\large{\rho}_{\normalsize 3n-1}\right) \left(\large{\rho}_{\normalsize 3n}\,\large{\rho}_{\normalsize 3n+1}\,\large{\rho}_{\normalsize 3n+2}\right)\right]\,\sqrt{2-\large{\rho}_{\normalsize 3n+1}} \\[2mm] &amp;= \left[\left(\large{\rho}_{\normalsize 0}\,\large{\rho}_{\normalsize 1}\,\large{\rho}_{\normalsize 2}\right) \left(\large{\rho}_{\normalsize 3}\,\large{\rho}_{\normalsize 
4}\,\large{\rho}_{\normalsize 5}\right) \cdots \left(\large{\rho}_{\normalsize 3n-3}\,\large{\rho}_{\normalsize 3n-2}\,\large{\rho}_{\normalsize 3n-1}\right)\right]\,\sqrt{2-\large{\rho}_{\normalsize 3n-2}} \\[2mm] &amp;= \left[\left(\large{\rho}_{\normalsize 0}\,\large{\rho}_{\normalsize 1}\,\large{\rho}_{\normalsize 2}\right) \left(\large{\rho}_{\normalsize 3}\,\large{\rho}_{\normalsize 4}\,\large{\rho}_{\normalsize 5}\right) \cdots \right]\,\sqrt{2-\large{\rho}_{\normalsize 3n-5}} \\[2mm] &amp;= \quad \cdots \quad \cdots \quad \cdots \\[2mm] &amp;= \left[\left(\large{\rho}_{\normalsize 0}\,\large{\rho}_{\normalsize 1}\,\large{\rho}_{\normalsize 2}\right)\right]\,\sqrt{2-\large{\rho}_{\normalsize 1}} = \large{\rho}_{\normalsize 0}\normalsize\,\sqrt{4-\rho_{\small 0}^2} \normalsize = \sqrt{2}\,\sqrt{4-2} = \color{red}{2 \quad \forall\space n \ge 1} \end{align} $$ Hence: $$ \lim_{n\rightarrow\infty}\frac{1}{\rho} = \lim_{n\rightarrow\infty}\frac{\sigma}{2} = \sin\left(\frac{\pi}{7}\right) \quad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad $$</p> <hr> <p>In order use this technique in general, for a period of length $\color{red}{m}$, we have: $$ \begin{align} \color{red}{{\large\rho}_{-1}=0},\,\, {\large\rho}_{0} &amp;= \sqrt{2\pm{\rho}_{-1}},\,\, {\large\rho}_{1}=\sqrt{2\pm{\rho}_{0}},\,\, {\large\rho}_{2}=\sqrt{2\pm{\rho}_{1}},\,\, \cdots ,\,\, {\large\rho}_{m-1}=\sqrt{2\pm{\rho}_{m-2}},\,\, \cdots \\[2mm] {\Large\rho} &amp;= \prod_{k=0}^{n}\left( {\large\rho}_{mk}\, {\large\rho}_{mk+1}\, \cdots\, {\large\rho}_{mk+m-2}\, {\large\rho}_{mk+m-1} \right) \qquad \\[2mm] &amp;= \prod_{k=0}^{n}\left( \sqrt{2\pm{\large\rho}_{mk-1}}\, \sqrt{2\pm{\large\rho}_{mk}}\, \cdots\, \sqrt{2\pm{\large\rho}_{mk+m-3}}\, \sqrt{2\color{red}{\pm}{\large\rho}_{mk+m-2}} \right) \\[2mm] {\Large\sigma} &amp;= \sqrt{2\color{red}{\mp}{\large\rho}_{mn+m-2}} \end{align} $$ Hence, $\,\left({\large\sigma}\right)\,$ <strong>must start with a different sign from the actual period!</strong> 
<br/> Nevertheless, if $\,{\large\sigma}\,$ is not comply with this condition, we still have options to write the product form of related/equivalent function by applying simple manipulations to the original $\,{\large\sigma}\,$. <br/> <br/> Consider the following example $\left\{\,\text{period}\,\left(+,+,-,-,-\right)\,\right\}$: $$ \begin{align} \cos\left(\frac{\pi}{11}\right) &amp;= \frac12\,\sqrt{2\color{red}{+}\sqrt{2+\sqrt{2-\sqrt{2-\sqrt{2-\sqrt{2\color{blue}{+}\sqrt{\cdots}}}}}}} \\[2mm] \implies \qquad &amp;\space \underbrace{\color{red}{+}{\large/}+,-,-,-,} \underbrace{\color{blue}{+}{\large/}+,-,-,-,} \underbrace{\color{blue}{+}{\large/}+,-,-,-,} \color{blue}{+}{\large/}\cdots \end{align} $$ And as it appears, after isolating the first sign, the pattern is not obeying the condition. <br/> We have two options to convert it into applicable format: $$ \color{blue}{\underline{\left({\bf1}\right) - \,\text{Convert to}\, \sin(\pi/11)}} \space\quad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \\[2mm] \begin{align} \sin\left(\frac{\pi}{11}\right) &amp;= \sqrt{1-\cos^2\left(\frac{\pi}{11}\right)} = \frac12\,\sqrt{4-\left[2\cos\left(\frac{\pi}{11}\right)\right]^2} \Rightarrow \\[2mm] \sin\left(\frac{\pi}{11}\right) &amp;= \frac12\,\sqrt{2\color{red}{-}\sqrt{2+\sqrt{2-\sqrt{2-\sqrt{2-\sqrt{2\color{blue}{+}\sqrt{\cdots}}}}}}} \\[2mm] \implies \qquad &amp;\space \underbrace{\color{red}{-}{\large/}+,-,-,-,} \underbrace{\color{blue}{+}{\large/}+,-,-,-,} \underbrace{\color{blue}{+}{\large/}+,-,-,-,} \color{blue}{+}{\large/}\cdots \\[2mm] \end{align} \\[2mm] {\Large\rho} = \prod_{k=0}^{n}\left( \sqrt{2-{\large\rho}_{5k-1}}\, \sqrt{2-{\large\rho}_{5k\space\space\space\space}}\, \sqrt{2-{\large\rho}_{5k+1}}\, \sqrt{2+{\large\rho}_{5k+2}}\, \sqrt{2\color{red}{+}{\large\rho}_{5k+3}} \right) \\[2mm] {\Large\sigma} = \sqrt{2\color{red}{-}{\large\rho}_{5n+3}}\,,\quad {\Large\rho}\cdot{\Large\sigma}=2\,,\quad 
\sin\left(\frac{\pi}{11}\right)=\frac{{\large\sigma}}{2}=\frac{1}{{\large\rho}}\,,\quad \cos\left(\frac{\pi}{11}\right)=\sqrt{1-\left(1/{\large\rho}\right)^2} \\[4mm] $$ $$ \color{blue}{\underline{\left({\bf2}\right) - \,\text{Convert to}\, \cos(\pi/22)}} \space\quad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \\[2mm] \begin{align} \cos\left(\frac{\pi}{22}\right) &amp;= \sqrt{\frac12+\frac12\,\cos\left(\frac{\pi}{11}\right)} = \frac12\,\sqrt{2+2\,\cos\left(\frac{\pi}{11}\right)} \Rightarrow \\[2mm] \cos\left(\frac{\pi}{22}\right) &amp;= \frac12\,\sqrt{2\color{red}{+}\sqrt{2+\sqrt{2+\sqrt{2-\sqrt{2-\sqrt{2\color{blue}{-}\sqrt{\cdots}}}}}}} \\[2mm] \implies \qquad &amp;\space \underbrace{\color{red}{+}{\large/}+,+,-,-,} \underbrace{\color{blue}{-}{\large/}+,+,-,-,} \underbrace{\color{blue}{-}{\large/}+,+,-,-,} \color{blue}{-}{\large/}\cdots \\[2mm] \end{align} \\[2mm] {\Large\rho} = \prod_{k=0}^{n}\left( \sqrt{2-{\large\rho}_{5k-1}}\, \sqrt{2-{\large\rho}_{5k\space\space\space\space}}\, \sqrt{2+{\large\rho}_{5k+1}}\, \sqrt{2+{\large\rho}_{5k+2}}\, \sqrt{2\color{red}{-}{\large\rho}_{5k+3}} \right) \\[2mm] {\Large\sigma} = \sqrt{2\color{red}{+}{\large\rho}_{5n+3}}\,,\quad {\Large\rho}\cdot{\Large\sigma}=2\,,\quad \cos\left(\frac{\pi}{22}\right)=\frac{{\large\sigma}}{2}=\frac{1}{{\large\rho}}\,,\quad \cos\left(\frac{\pi}{11}\right)=\sqrt{\frac12+\frac12\,\left(1/{\large\rho}\right)} \\[2mm] $$ And here are Mathematica codes for these two options: <br/> </p> <pre> <code> n = 4; r = 1; e = 0; For[k = 0, n >= k, k++, {a = Sqrt[2 - e]; b = Sqrt[2 - a]; c = Sqrt[2 - b]; d = Sqrt[2 + c]; e = Sqrt[2 + d]; r = r (a b c d e)}]; s = 1; x = Sqrt[2 + Sqrt[2 - Sqrt[2 - Sqrt[2]]]]; For[k = 0, (n - 1) >= k, k++, {x = Sqrt[2 + Sqrt[2 - Sqrt[2 - Sqrt[2 - Sqrt[2 + x]]]]]}]; s = Sqrt[2 - x]; Print[N[r s], " ", N[1/r], " ", N[s/2], " ", N[Sin[Pi/11]]] n = 4; r = 1; e = 0; For[k = 0, n >= k, k++, {a = Sqrt[2 - e]; b = Sqrt[2 - a]; c = Sqrt[2 + b]; d = 
Sqrt[2 + c]; e = Sqrt[2 - d]; r = r (a b c d e)}]; s = 1; x = Sqrt[2 + Sqrt[2 + Sqrt[2 - Sqrt[2]]]]; For[k = 0, (n - 1) >= k, k++, {x = Sqrt[2 + Sqrt[2 + Sqrt[2 - Sqrt[2 - Sqrt[2 - x]]]]]}]; s = Sqrt[2 + x]; Print[N[r s], " ", N[1/r], " ", N[s/2], " ", N[Cos[Pi/22]]] </code> </pre> Answer 22: <p>With the help of Half angle cosine formula it is possible to express many sine and cosine angles as finite or infinite nested square roots of 2 <a href="https://math.stackexchange.com/questions/3841604/solving-cyclic-infinite-nested-square-roots-of-2-as-cosine-functions">here</a>.</p> <p>In this perspective <span class="math-container">$2\sin\frac{\pi}{7}$</span> can be solved as follows</p> <p><span class="math-container">$2\sin\frac{\pi}{7}$</span> = <span class="math-container">$\sqrt{2-2\cos\frac{2\pi}{7}}$</span> = <span class="math-container">$\sqrt{2-\sqrt{2+2\cos\frac{4\pi}{7}}}$</span> = <span class="math-container">$\sqrt{2-\sqrt{2-2\cos\frac{3\pi}{7}}}$</span> = <span class="math-container">$\sqrt{2-\sqrt{2-\sqrt{2+2\cos\frac{6\pi}{7}}}}$</span> = <span class="math-container">$\sqrt{2-\sqrt{2-\sqrt{2-2\cos\frac{\pi}{7}}}}$</span> = <span class="math-container">$\sqrt{2-\sqrt{2-\sqrt{2-\sqrt{2+2\cos\frac{2\pi}{7}}}}}$</span></p> <p>One can observe repetition of <span class="math-container">$2\cos\frac{2\pi}{7}$</span> subsequently and <span class="math-container">$\therefore$</span> <span class="math-container">$2\sin\frac{\pi}{7}$</span> can be represented simply as</p> <blockquote> <p><strong><span class="math-container">$$\sqrt{2-\sqrt{2-\sqrt{2-\sqrt{2+\sqrt{2-\sqrt{2-\sqrt{2+...2\cos\frac{2\pi}{7}}}}}}}}$$</span> with repeating pattern <span class="math-container">$--+$</span> infinitely and first <span class="math-container">$-$</span> as single nonrepeating sign</strong></p> </blockquote> <p>But getting infinite product has to be worked out</p> Answer 23: <p>Let: $$ \begin{align} \LARGE{\rho}\normalsize &amp;= \left(\small \sqrt{2} \,\sqrt{2-\sqrt{2}} 
\,\sqrt{2+\sqrt{2-\sqrt{2}}} \normalsize\right) \times \\[2mm] &amp; \space \left(\small \sqrt{2\color{red}{-}\sqrt{2+\sqrt{2-\sqrt{2}}}} \,\sqrt{2\color{red}{-}\sqrt{2-\sqrt{2+\sqrt{2-\sqrt{2}}}}} \,\color{red}{\sqrt{2+ \color{black}{\sqrt{2-\sqrt{2-\sqrt{2+\sqrt{2-\sqrt{2}}}}}}}} \normalsize\right) \times \cdots \\[4mm] \LARGE{\sigma}\normalsize &amp;= \color{red}{\sqrt{2- \color{black}{\sqrt{2-\sqrt{2-\sqrt{2+\sqrt{2-\sqrt{2-\sqrt{2+\sqrt{\cdots}}}}}}}}}} \space\space\small\{\text{Same as last item of}\,\rho\,\text{with minus sign}\} \end{align} $$ Fundamentally, the two expressions are the same. <br/> And to transpose between them, we use the property: $$ \boxed{ \quad \color{red}{\Large{\rho \cdot \sigma = 2}} \quad } \\[4mm] \frac{1}{\rho} = \frac{1}{\rho} \color{red}{\cdot \frac{\sigma}{\sigma}} = \frac{\sigma}{\rho\cdot\sigma} = \frac{\sigma}{2} = \quad\sin\left(\frac{\pi}{7}\right)\quad = \frac{\sigma}{2} = \frac{\sigma}{2} \color{red}{\cdot \frac{\rho}{\rho}} = \frac{\rho\cdot\sigma/2}{\rho} = \frac{1}{\rho} $$ <br/> In order to trace the simplifying process, let us consider the definition of both $\left(\large{\rho}\right)$ and $\left(\large{\sigma}\right)$ upto $n$, <br/> Where $\left(\large{\rho}\right)$ definition is consist of <strong>three square-roots at a time</strong>, as illustrated above: $$ \begin{align} \LARGE{\rho}\normalsize &amp;= \left(\large{\rho}_{\normalsize 0}\,\large{\rho}_{\normalsize 1}\,\large{\rho}_{\normalsize 2}\right) \left(\large{\rho}_{\normalsize 3}\,\large{\rho}_{\normalsize 4}\,\large{\rho}_{\normalsize 5}\right) \cdots \left(\large{\rho}_{\normalsize 3n-3}\,\large{\rho}_{\normalsize 3n-2}\,\large{\rho}_{\normalsize 3n-1}\right) \left(\large{\rho}_{\normalsize 3n}\,\large{\rho}_{\normalsize 3n+1}\,\large{\rho}_{\normalsize 3n+2}\right) \\[2mm] &amp;= \left(\large{\rho}_{\normalsize 0}\,\large{\rho}_{\normalsize 1}\,\large{\rho}_{\normalsize 2}\right) \left(\large{\rho}_{\normalsize 3}\,\large{\rho}_{\normalsize 
4}\,\large{\rho}_{\normalsize 5}\right) \cdots \left(\large{\rho}_{\normalsize 3n-3}\,\large{\rho}_{\normalsize 3n-2}\,\large{\rho}_{\normalsize 3n-1}\right) \left(\sqrt{2\color{red}{-}\rho_{\small 3n-1}}\,\sqrt{2\color{red}{-}\rho_{\small 3n}}\,\sqrt{2\color{red}{+}\rho_{\small 3n+1}}\right) \end{align} $$ And $\left(\large{\sigma}\right)$ equals <strong>same as last item of</strong> $\large{\rho}\,$ <strong>with minus sign</strong>, as mentioned above: $$ \LARGE{\sigma}\normalsize = \sqrt{2\color{red}{-}\rho_{\small 3n+1}} \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad $$ So: $$ \begin{align} &amp; \left(\large{\rho}_{\normalsize 3n}\,\large{\rho}_{\normalsize 3n+1}\,\large{\rho}_{\normalsize 3n+2}\right) \,\sqrt{2-\rho_{\small 3n+1}} \\[2mm] &amp; \qquad = \sqrt{2-\rho_{\small 3n-1}}\,\sqrt{2-\rho_{\small 3n}}\,\sqrt{2+\rho_{\small 3n+1}}\,\sqrt{2-\rho_{\small 3n+1}} = \sqrt{2-\rho_{\small 3n-1}}\,\sqrt{2-\rho_{\small 3n}}\,\sqrt{4-\rho_{\small 3n+1}^2} \\[2mm] &amp; \qquad = \sqrt{2-\rho_{\small 3n-1}}\,\sqrt{2-\rho_{\small 3n}}\,\sqrt{4-\left(2-\rho_{\small 3n}\right)} = \sqrt{2-\rho_{\small 3n-1}}\,\sqrt{4-\rho_{\small 3n}^2} \\[2mm] &amp; \qquad = \sqrt{2-\rho_{\small 3n-1}}\,\sqrt{4-\left(2-\rho_{\small 3n-1}\right)} = \sqrt{4-\rho_{\small 3n-1}^2} = \sqrt{4-\left(2+\rho_{\small 3n-2}\right)} = \color{red}{\sqrt{2-\rho_{\small 3n-2}}} \end{align} $$ Thus: $$ \begin{align} \color{red}{\Large{\rho \cdot \sigma}}\normalsize &amp;= \left[\left(\large{\rho}_{\normalsize 0}\,\large{\rho}_{\normalsize 1}\,\large{\rho}_{\normalsize 2}\right) \left(\large{\rho}_{\normalsize 3}\,\large{\rho}_{\normalsize 4}\,\large{\rho}_{\normalsize 5}\right) \cdots \left(\large{\rho}_{\normalsize 3n-3}\,\large{\rho}_{\normalsize 3n-2}\,\large{\rho}_{\normalsize 3n-1}\right) \left(\large{\rho}_{\normalsize 3n}\,\large{\rho}_{\normalsize 3n+1}\,\large{\rho}_{\normalsize 3n+2}\right)\right]\,\sqrt{2-\large{\rho}_{\normalsize 3n+1}} \\[2mm] 
&amp;= \left[\left(\large{\rho}_{\normalsize 0}\,\large{\rho}_{\normalsize 1}\,\large{\rho}_{\normalsize 2}\right) \left(\large{\rho}_{\normalsize 3}\,\large{\rho}_{\normalsize 4}\,\large{\rho}_{\normalsize 5}\right) \cdots \left(\large{\rho}_{\normalsize 3n-3}\,\large{\rho}_{\normalsize 3n-2}\,\large{\rho}_{\normalsize 3n-1}\right)\right]\,\sqrt{2-\large{\rho}_{\normalsize 3n-2}} \\[2mm] &amp;= \left[\left(\large{\rho}_{\normalsize 0}\,\large{\rho}_{\normalsize 1}\,\large{\rho}_{\normalsize 2}\right) \left(\large{\rho}_{\normalsize 3}\,\large{\rho}_{\normalsize 4}\,\large{\rho}_{\normalsize 5}\right) \cdots \right]\,\sqrt{2-\large{\rho}_{\normalsize 3n-5}} \\[2mm] &amp;= \quad \cdots \quad \cdots \quad \cdots \\[2mm] &amp;= \left[\left(\large{\rho}_{\normalsize 0}\,\large{\rho}_{\normalsize 1}\,\large{\rho}_{\normalsize 2}\right)\right]\,\sqrt{2-\large{\rho}_{\normalsize 1}} = \large{\rho}_{\normalsize 0}\normalsize\,\sqrt{4-\rho_{\small 0}^2} \normalsize = \sqrt{2}\,\sqrt{4-2} = \color{red}{2 \quad \forall\space n \ge 1} \end{align} $$ Hence: $$ \lim_{n\rightarrow\infty}\frac{1}{\rho} = \lim_{n\rightarrow\infty}\frac{\sigma}{2} = \sin\left(\frac{\pi}{7}\right) \quad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad $$</p> <hr> <p>In order use this technique in general, for a period of length $\color{red}{m}$, we have: $$ \begin{align} \color{red}{{\large\rho}_{-1}=0},\,\, {\large\rho}_{0} &amp;= \sqrt{2\pm{\rho}_{-1}},\,\, {\large\rho}_{1}=\sqrt{2\pm{\rho}_{0}},\,\, {\large\rho}_{2}=\sqrt{2\pm{\rho}_{1}},\,\, \cdots ,\,\, {\large\rho}_{m-1}=\sqrt{2\pm{\rho}_{m-2}},\,\, \cdots \\[2mm] {\Large\rho} &amp;= \prod_{k=0}^{n}\left( {\large\rho}_{mk}\, {\large\rho}_{mk+1}\, \cdots\, {\large\rho}_{mk+m-2}\, {\large\rho}_{mk+m-1} \right) \qquad \\[2mm] &amp;= \prod_{k=0}^{n}\left( \sqrt{2\pm{\large\rho}_{mk-1}}\, \sqrt{2\pm{\large\rho}_{mk}}\, \cdots\, \sqrt{2\pm{\large\rho}_{mk+m-3}}\, \sqrt{2\color{red}{\pm}{\large\rho}_{mk+m-2}} \right) \\[2mm] {\Large\sigma} &amp;= 
\sqrt{2\color{red}{\mp}{\large\rho}_{mn+m-2}} \end{align} $$ Hence, $\,\left({\large\sigma}\right)\,$ <strong>must start with a different sign from the actual period!</strong> <br/> Nevertheless, if $\,{\large\sigma}\,$ is not comply with this condition, we still have options to write the product form of related/equivalent function by applying simple manipulations to the original $\,{\large\sigma}\,$. <br/> <br/> Consider the following example $\left\{\,\text{period}\,\left(+,+,-,-,-\right)\,\right\}$: $$ \begin{align} \cos\left(\frac{\pi}{11}\right) &amp;= \frac12\,\sqrt{2\color{red}{+}\sqrt{2+\sqrt{2-\sqrt{2-\sqrt{2-\sqrt{2\color{blue}{+}\sqrt{\cdots}}}}}}} \\[2mm] \implies \qquad &amp;\space \underbrace{\color{red}{+}{\large/}+,-,-,-,} \underbrace{\color{blue}{+}{\large/}+,-,-,-,} \underbrace{\color{blue}{+}{\large/}+,-,-,-,} \color{blue}{+}{\large/}\cdots \end{align} $$ And as it appears, after isolating the first sign, the pattern is not obeying the condition. <br/> We have two options to convert it into applicable format: $$ \color{blue}{\underline{\left({\bf1}\right) - \,\text{Convert to}\, \sin(\pi/11)}} \space\quad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \\[2mm] \begin{align} \sin\left(\frac{\pi}{11}\right) &amp;= \sqrt{1-\cos^2\left(\frac{\pi}{11}\right)} = \frac12\,\sqrt{4-\left[2\cos\left(\frac{\pi}{11}\right)\right]^2} \Rightarrow \\[2mm] \sin\left(\frac{\pi}{11}\right) &amp;= \frac12\,\sqrt{2\color{red}{-}\sqrt{2+\sqrt{2-\sqrt{2-\sqrt{2-\sqrt{2\color{blue}{+}\sqrt{\cdots}}}}}}} \\[2mm] \implies \qquad &amp;\space \underbrace{\color{red}{-}{\large/}+,-,-,-,} \underbrace{\color{blue}{+}{\large/}+,-,-,-,} \underbrace{\color{blue}{+}{\large/}+,-,-,-,} \color{blue}{+}{\large/}\cdots \\[2mm] \end{align} \\[2mm] {\Large\rho} = \prod_{k=0}^{n}\left( \sqrt{2-{\large\rho}_{5k-1}}\, \sqrt{2-{\large\rho}_{5k\space\space\space\space}}\, \sqrt{2-{\large\rho}_{5k+1}}\, \sqrt{2+{\large\rho}_{5k+2}}\, 
\sqrt{2\color{red}{+}{\large\rho}_{5k+3}} \right) \\[2mm] {\Large\sigma} = \sqrt{2\color{red}{-}{\large\rho}_{5n+3}}\,,\quad {\Large\rho}\cdot{\Large\sigma}=2\,,\quad \sin\left(\frac{\pi}{11}\right)=\frac{{\large\sigma}}{2}=\frac{1}{{\large\rho}}\,,\quad \cos\left(\frac{\pi}{11}\right)=\sqrt{1-\left(1/{\large\rho}\right)^2} \\[4mm] $$ $$ \color{blue}{\underline{\left({\bf2}\right) - \,\text{Convert to}\, \cos(\pi/22)}} \space\quad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \\[2mm] \begin{align} \cos\left(\frac{\pi}{22}\right) &amp;= \sqrt{\frac12+\frac12\,\cos\left(\frac{\pi}{11}\right)} = \frac12\,\sqrt{2+2\,\cos\left(\frac{\pi}{11}\right)} \Rightarrow \\[2mm] \cos\left(\frac{\pi}{22}\right) &amp;= \frac12\,\sqrt{2\color{red}{+}\sqrt{2+\sqrt{2+\sqrt{2-\sqrt{2-\sqrt{2\color{blue}{-}\sqrt{\cdots}}}}}}} \\[2mm] \implies \qquad &amp;\space \underbrace{\color{red}{+}{\large/}+,+,-,-,} \underbrace{\color{blue}{-}{\large/}+,+,-,-,} \underbrace{\color{blue}{-}{\large/}+,+,-,-,} \color{blue}{-}{\large/}\cdots \\[2mm] \end{align} \\[2mm] {\Large\rho} = \prod_{k=0}^{n}\left( \sqrt{2-{\large\rho}_{5k-1}}\, \sqrt{2-{\large\rho}_{5k\space\space\space\space}}\, \sqrt{2+{\large\rho}_{5k+1}}\, \sqrt{2+{\large\rho}_{5k+2}}\, \sqrt{2\color{red}{-}{\large\rho}_{5k+3}} \right) \\[2mm] {\Large\sigma} = \sqrt{2\color{red}{+}{\large\rho}_{5n+3}}\,,\quad {\Large\rho}\cdot{\Large\sigma}=2\,,\quad \cos\left(\frac{\pi}{22}\right)=\frac{{\large\sigma}}{2}=\frac{1}{{\large\rho}}\,,\quad \cos\left(\frac{\pi}{11}\right)=\sqrt{\frac12+\frac12\,\left(1/{\large\rho}\right)} \\[2mm] $$ And here are Mathematica codes for these two options: <br/> </p> <pre> <code> n = 4; r = 1; e = 0; For[k = 0, n >= k, k++, {a = Sqrt[2 - e]; b = Sqrt[2 - a]; c = Sqrt[2 - b]; d = Sqrt[2 + c]; e = Sqrt[2 + d]; r = r (a b c d e)}]; s = 1; x = Sqrt[2 + Sqrt[2 - Sqrt[2 - Sqrt[2]]]]; For[k = 0, (n - 1) >= k, k++, {x = Sqrt[2 + Sqrt[2 - Sqrt[2 - Sqrt[2 - Sqrt[2 + x]]]]]}]; s = Sqrt[2 - x]; 
Print[N[r s], " ", N[1/r], " ", N[s/2], " ", N[Sin[Pi/11]]] n = 4; r = 1; e = 0; For[k = 0, n >= k, k++, {a = Sqrt[2 - e]; b = Sqrt[2 - a]; c = Sqrt[2 + b]; d = Sqrt[2 + c]; e = Sqrt[2 - d]; r = r (a b c d e)}]; s = 1; x = Sqrt[2 + Sqrt[2 + Sqrt[2 - Sqrt[2]]]]; For[k = 0, (n - 1) >= k, k++, {x = Sqrt[2 + Sqrt[2 + Sqrt[2 - Sqrt[2 - Sqrt[2 - x]]]]]}]; s = Sqrt[2 + x]; Print[N[r s], " ", N[1/r], " ", N[s/2], " ", N[Cos[Pi/22]]] </code> </pre> Answer 24: <p>With the help of Half angle cosine formula it is possible to express many sine and cosine angles as finite or infinite nested square roots of 2 <a href="https://math.stackexchange.com/questions/3841604/solving-cyclic-infinite-nested-square-roots-of-2-as-cosine-functions">here</a>.</p> <p>In this perspective <span class="math-container">$2\sin\frac{\pi}{7}$</span> can be solved as follows</p> <p><span class="math-container">$2\sin\frac{\pi}{7}$</span> = <span class="math-container">$\sqrt{2-2\cos\frac{2\pi}{7}}$</span> = <span class="math-container">$\sqrt{2-\sqrt{2+2\cos\frac{4\pi}{7}}}$</span> = <span class="math-container">$\sqrt{2-\sqrt{2-2\cos\frac{3\pi}{7}}}$</span> = <span class="math-container">$\sqrt{2-\sqrt{2-\sqrt{2+2\cos\frac{6\pi}{7}}}}$</span> = <span class="math-container">$\sqrt{2-\sqrt{2-\sqrt{2-2\cos\frac{\pi}{7}}}}$</span> = <span class="math-container">$\sqrt{2-\sqrt{2-\sqrt{2-\sqrt{2+2\cos\frac{2\pi}{7}}}}}$</span></p> <p>One can observe repetition of <span class="math-container">$2\cos\frac{2\pi}{7}$</span> subsequently and <span class="math-container">$\therefore$</span> <span class="math-container">$2\sin\frac{\pi}{7}$</span> can be represented simply as</p> <blockquote> <p><strong><span class="math-container">$$\sqrt{2-\sqrt{2-\sqrt{2-\sqrt{2+\sqrt{2-\sqrt{2-\sqrt{2+...2\cos\frac{2\pi}{7}}}}}}}}$$</span> with repeating pattern <span class="math-container">$--+$</span> infinitely and first <span class="math-container">$-$</span> as single nonrepeating sign</strong></p> </blockquote> <p>But 
getting infinite product has to be worked out</p> Answer 25: <p>Let: $$ \begin{align} \LARGE{\rho}\normalsize &amp;= \left(\small \sqrt{2} \,\sqrt{2-\sqrt{2}} \,\sqrt{2+\sqrt{2-\sqrt{2}}} \normalsize\right) \times \\[2mm] &amp; \space \left(\small \sqrt{2\color{red}{-}\sqrt{2+\sqrt{2-\sqrt{2}}}} \,\sqrt{2\color{red}{-}\sqrt{2-\sqrt{2+\sqrt{2-\sqrt{2}}}}} \,\color{red}{\sqrt{2+ \color{black}{\sqrt{2-\sqrt{2-\sqrt{2+\sqrt{2-\sqrt{2}}}}}}}} \normalsize\right) \times \cdots \\[4mm] \LARGE{\sigma}\normalsize &amp;= \color{red}{\sqrt{2- \color{black}{\sqrt{2-\sqrt{2-\sqrt{2+\sqrt{2-\sqrt{2-\sqrt{2+\sqrt{\cdots}}}}}}}}}} \space\space\small\{\text{Same as last item of}\,\rho\,\text{with minus sign}\} \end{align} $$ Fundamentally, the two expressions are the same. <br/> And to transpose between them, we use the property: $$ \boxed{ \quad \color{red}{\Large{\rho \cdot \sigma = 2}} \quad } \\[4mm] \frac{1}{\rho} = \frac{1}{\rho} \color{red}{\cdot \frac{\sigma}{\sigma}} = \frac{\sigma}{\rho\cdot\sigma} = \frac{\sigma}{2} = \quad\sin\left(\frac{\pi}{7}\right)\quad = \frac{\sigma}{2} = \frac{\sigma}{2} \color{red}{\cdot \frac{\rho}{\rho}} = \frac{\rho\cdot\sigma/2}{\rho} = \frac{1}{\rho} $$ <br/> In order to trace the simplifying process, let us consider the definition of both $\left(\large{\rho}\right)$ and $\left(\large{\sigma}\right)$ upto $n$, <br/> Where $\left(\large{\rho}\right)$ definition is consist of <strong>three square-roots at a time</strong>, as illustrated above: $$ \begin{align} \LARGE{\rho}\normalsize &amp;= \left(\large{\rho}_{\normalsize 0}\,\large{\rho}_{\normalsize 1}\,\large{\rho}_{\normalsize 2}\right) \left(\large{\rho}_{\normalsize 3}\,\large{\rho}_{\normalsize 4}\,\large{\rho}_{\normalsize 5}\right) \cdots \left(\large{\rho}_{\normalsize 3n-3}\,\large{\rho}_{\normalsize 3n-2}\,\large{\rho}_{\normalsize 3n-1}\right) \left(\large{\rho}_{\normalsize 3n}\,\large{\rho}_{\normalsize 3n+1}\,\large{\rho}_{\normalsize 3n+2}\right) \\[2mm] &amp;= 
\left(\large{\rho}_{\normalsize 0}\,\large{\rho}_{\normalsize 1}\,\large{\rho}_{\normalsize 2}\right) \left(\large{\rho}_{\normalsize 3}\,\large{\rho}_{\normalsize 4}\,\large{\rho}_{\normalsize 5}\right) \cdots \left(\large{\rho}_{\normalsize 3n-3}\,\large{\rho}_{\normalsize 3n-2}\,\large{\rho}_{\normalsize 3n-1}\right) \left(\sqrt{2\color{red}{-}\rho_{\small 3n-1}}\,\sqrt{2\color{red}{-}\rho_{\small 3n}}\,\sqrt{2\color{red}{+}\rho_{\small 3n+1}}\right) \end{align} $$ And $\left(\large{\sigma}\right)$ equals <strong>same as last item of</strong> $\large{\rho}\,$ <strong>with minus sign</strong>, as mentioned above: $$ \LARGE{\sigma}\normalsize = \sqrt{2\color{red}{-}\rho_{\small 3n+1}} \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad $$ So: $$ \begin{align} &amp; \left(\large{\rho}_{\normalsize 3n}\,\large{\rho}_{\normalsize 3n+1}\,\large{\rho}_{\normalsize 3n+2}\right) \,\sqrt{2-\rho_{\small 3n+1}} \\[2mm] &amp; \qquad = \sqrt{2-\rho_{\small 3n-1}}\,\sqrt{2-\rho_{\small 3n}}\,\sqrt{2+\rho_{\small 3n+1}}\,\sqrt{2-\rho_{\small 3n+1}} = \sqrt{2-\rho_{\small 3n-1}}\,\sqrt{2-\rho_{\small 3n}}\,\sqrt{4-\rho_{\small 3n+1}^2} \\[2mm] &amp; \qquad = \sqrt{2-\rho_{\small 3n-1}}\,\sqrt{2-\rho_{\small 3n}}\,\sqrt{4-\left(2-\rho_{\small 3n}\right)} = \sqrt{2-\rho_{\small 3n-1}}\,\sqrt{4-\rho_{\small 3n}^2} \\[2mm] &amp; \qquad = \sqrt{2-\rho_{\small 3n-1}}\,\sqrt{4-\left(2-\rho_{\small 3n-1}\right)} = \sqrt{4-\rho_{\small 3n-1}^2} = \sqrt{4-\left(2+\rho_{\small 3n-2}\right)} = \color{red}{\sqrt{2-\rho_{\small 3n-2}}} \end{align} $$ Thus: $$ \begin{align} \color{red}{\Large{\rho \cdot \sigma}}\normalsize &amp;= \left[\left(\large{\rho}_{\normalsize 0}\,\large{\rho}_{\normalsize 1}\,\large{\rho}_{\normalsize 2}\right) \left(\large{\rho}_{\normalsize 3}\,\large{\rho}_{\normalsize 4}\,\large{\rho}_{\normalsize 5}\right) \cdots \left(\large{\rho}_{\normalsize 3n-3}\,\large{\rho}_{\normalsize 3n-2}\,\large{\rho}_{\normalsize 3n-1}\right) 
\left(\large{\rho}_{\normalsize 3n}\,\large{\rho}_{\normalsize 3n+1}\,\large{\rho}_{\normalsize 3n+2}\right)\right]\,\sqrt{2-\large{\rho}_{\normalsize 3n+1}} \\[2mm] &amp;= \left[\left(\large{\rho}_{\normalsize 0}\,\large{\rho}_{\normalsize 1}\,\large{\rho}_{\normalsize 2}\right) \left(\large{\rho}_{\normalsize 3}\,\large{\rho}_{\normalsize 4}\,\large{\rho}_{\normalsize 5}\right) \cdots \left(\large{\rho}_{\normalsize 3n-3}\,\large{\rho}_{\normalsize 3n-2}\,\large{\rho}_{\normalsize 3n-1}\right)\right]\,\sqrt{2-\large{\rho}_{\normalsize 3n-2}} \\[2mm] &amp;= \left[\left(\large{\rho}_{\normalsize 0}\,\large{\rho}_{\normalsize 1}\,\large{\rho}_{\normalsize 2}\right) \left(\large{\rho}_{\normalsize 3}\,\large{\rho}_{\normalsize 4}\,\large{\rho}_{\normalsize 5}\right) \cdots \right]\,\sqrt{2-\large{\rho}_{\normalsize 3n-5}} \\[2mm] &amp;= \quad \cdots \quad \cdots \quad \cdots \\[2mm] &amp;= \left[\left(\large{\rho}_{\normalsize 0}\,\large{\rho}_{\normalsize 1}\,\large{\rho}_{\normalsize 2}\right)\right]\,\sqrt{2-\large{\rho}_{\normalsize 1}} = \large{\rho}_{\normalsize 0}\normalsize\,\sqrt{4-\rho_{\small 0}^2} \normalsize = \sqrt{2}\,\sqrt{4-2} = \color{red}{2 \quad \forall\space n \ge 1} \end{align} $$ Hence: $$ \lim_{n\rightarrow\infty}\frac{1}{\rho} = \lim_{n\rightarrow\infty}\frac{\sigma}{2} = \sin\left(\frac{\pi}{7}\right) \quad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad $$</p> <hr> <p>In order use this technique in general, for a period of length $\color{red}{m}$, we have: $$ \begin{align} \color{red}{{\large\rho}_{-1}=0},\,\, {\large\rho}_{0} &amp;= \sqrt{2\pm{\rho}_{-1}},\,\, {\large\rho}_{1}=\sqrt{2\pm{\rho}_{0}},\,\, {\large\rho}_{2}=\sqrt{2\pm{\rho}_{1}},\,\, \cdots ,\,\, {\large\rho}_{m-1}=\sqrt{2\pm{\rho}_{m-2}},\,\, \cdots \\[2mm] {\Large\rho} &amp;= \prod_{k=0}^{n}\left( {\large\rho}_{mk}\, {\large\rho}_{mk+1}\, \cdots\, {\large\rho}_{mk+m-2}\, {\large\rho}_{mk+m-1} \right) \qquad \\[2mm] &amp;= \prod_{k=0}^{n}\left( 
\sqrt{2\pm{\large\rho}_{mk-1}}\, \sqrt{2\pm{\large\rho}_{mk}}\, \cdots\, \sqrt{2\pm{\large\rho}_{mk+m-3}}\, \sqrt{2\color{red}{\pm}{\large\rho}_{mk+m-2}} \right) \\[2mm] {\Large\sigma} &amp;= \sqrt{2\color{red}{\mp}{\large\rho}_{mn+m-2}} \end{align} $$ Hence, $\,\left({\large\sigma}\right)\,$ <strong>must start with a different sign from the actual period!</strong> <br/> Nevertheless, if $\,{\large\sigma}\,$ is not comply with this condition, we still have options to write the product form of related/equivalent function by applying simple manipulations to the original $\,{\large\sigma}\,$. <br/> <br/> Consider the following example $\left\{\,\text{period}\,\left(+,+,-,-,-\right)\,\right\}$: $$ \begin{align} \cos\left(\frac{\pi}{11}\right) &amp;= \frac12\,\sqrt{2\color{red}{+}\sqrt{2+\sqrt{2-\sqrt{2-\sqrt{2-\sqrt{2\color{blue}{+}\sqrt{\cdots}}}}}}} \\[2mm] \implies \qquad &amp;\space \underbrace{\color{red}{+}{\large/}+,-,-,-,} \underbrace{\color{blue}{+}{\large/}+,-,-,-,} \underbrace{\color{blue}{+}{\large/}+,-,-,-,} \color{blue}{+}{\large/}\cdots \end{align} $$ And as it appears, after isolating the first sign, the pattern is not obeying the condition. 
<br/> We have two options to convert it into applicable format: $$ \color{blue}{\underline{\left({\bf1}\right) - \,\text{Convert to}\, \sin(\pi/11)}} \space\quad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \\[2mm] \begin{align} \sin\left(\frac{\pi}{11}\right) &amp;= \sqrt{1-\cos^2\left(\frac{\pi}{11}\right)} = \frac12\,\sqrt{4-\left[2\cos\left(\frac{\pi}{11}\right)\right]^2} \Rightarrow \\[2mm] \sin\left(\frac{\pi}{11}\right) &amp;= \frac12\,\sqrt{2\color{red}{-}\sqrt{2+\sqrt{2-\sqrt{2-\sqrt{2-\sqrt{2\color{blue}{+}\sqrt{\cdots}}}}}}} \\[2mm] \implies \qquad &amp;\space \underbrace{\color{red}{-}{\large/}+,-,-,-,} \underbrace{\color{blue}{+}{\large/}+,-,-,-,} \underbrace{\color{blue}{+}{\large/}+,-,-,-,} \color{blue}{+}{\large/}\cdots \\[2mm] \end{align} \\[2mm] {\Large\rho} = \prod_{k=0}^{n}\left( \sqrt{2-{\large\rho}_{5k-1}}\, \sqrt{2-{\large\rho}_{5k\space\space\space\space}}\, \sqrt{2-{\large\rho}_{5k+1}}\, \sqrt{2+{\large\rho}_{5k+2}}\, \sqrt{2\color{red}{+}{\large\rho}_{5k+3}} \right) \\[2mm] {\Large\sigma} = \sqrt{2\color{red}{-}{\large\rho}_{5n+3}}\,,\quad {\Large\rho}\cdot{\Large\sigma}=2\,,\quad \sin\left(\frac{\pi}{11}\right)=\frac{{\large\sigma}}{2}=\frac{1}{{\large\rho}}\,,\quad \cos\left(\frac{\pi}{11}\right)=\sqrt{1-\left(1/{\large\rho}\right)^2} \\[4mm] $$ $$ \color{blue}{\underline{\left({\bf2}\right) - \,\text{Convert to}\, \cos(\pi/22)}} \space\quad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \\[2mm] \begin{align} \cos\left(\frac{\pi}{22}\right) &amp;= \sqrt{\frac12+\frac12\,\cos\left(\frac{\pi}{11}\right)} = \frac12\,\sqrt{2+2\,\cos\left(\frac{\pi}{11}\right)} \Rightarrow \\[2mm] \cos\left(\frac{\pi}{22}\right) &amp;= \frac12\,\sqrt{2\color{red}{+}\sqrt{2+\sqrt{2+\sqrt{2-\sqrt{2-\sqrt{2\color{blue}{-}\sqrt{\cdots}}}}}}} \\[2mm] \implies \qquad &amp;\space \underbrace{\color{red}{+}{\large/}+,+,-,-,} \underbrace{\color{blue}{-}{\large/}+,+,-,-,} 
\underbrace{\color{blue}{-}{\large/}+,+,-,-,} \color{blue}{-}{\large/}\cdots \\[2mm] \end{align} \\[2mm] {\Large\rho} = \prod_{k=0}^{n}\left( \sqrt{2-{\large\rho}_{5k-1}}\, \sqrt{2-{\large\rho}_{5k\space\space\space\space}}\, \sqrt{2+{\large\rho}_{5k+1}}\, \sqrt{2+{\large\rho}_{5k+2}}\, \sqrt{2\color{red}{-}{\large\rho}_{5k+3}} \right) \\[2mm] {\Large\sigma} = \sqrt{2\color{red}{+}{\large\rho}_{5n+3}}\,,\quad {\Large\rho}\cdot{\Large\sigma}=2\,,\quad \cos\left(\frac{\pi}{22}\right)=\frac{{\large\sigma}}{2}=\frac{1}{{\large\rho}}\,,\quad \cos\left(\frac{\pi}{11}\right)=\sqrt{\frac12+\frac12\,\left(1/{\large\rho}\right)} \\[2mm] $$ And here are Mathematica codes for these two options: <br/> </p> <pre> <code> n = 4; r = 1; e = 0; For[k = 0, n >= k, k++, {a = Sqrt[2 - e]; b = Sqrt[2 - a]; c = Sqrt[2 - b]; d = Sqrt[2 + c]; e = Sqrt[2 + d]; r = r (a b c d e)}]; s = 1; x = Sqrt[2 + Sqrt[2 - Sqrt[2 - Sqrt[2]]]]; For[k = 0, (n - 1) >= k, k++, {x = Sqrt[2 + Sqrt[2 - Sqrt[2 - Sqrt[2 - Sqrt[2 + x]]]]]}]; s = Sqrt[2 - x]; Print[N[r s], " ", N[1/r], " ", N[s/2], " ", N[Sin[Pi/11]]] n = 4; r = 1; e = 0; For[k = 0, n >= k, k++, {a = Sqrt[2 - e]; b = Sqrt[2 - a]; c = Sqrt[2 + b]; d = Sqrt[2 + c]; e = Sqrt[2 - d]; r = r (a b c d e)}]; s = 1; x = Sqrt[2 + Sqrt[2 + Sqrt[2 - Sqrt[2]]]]; For[k = 0, (n - 1) >= k, k++, {x = Sqrt[2 + Sqrt[2 + Sqrt[2 - Sqrt[2 - Sqrt[2 - x]]]]]}]; s = Sqrt[2 + x]; Print[N[r s], " ", N[1/r], " ", N[s/2], " ", N[Cos[Pi/22]]] </code> </pre> Answer 26: <p>With the help of Half angle cosine formula it is possible to express many sine and cosine angles as finite or infinite nested square roots of 2 <a href="https://math.stackexchange.com/questions/3841604/solving-cyclic-infinite-nested-square-roots-of-2-as-cosine-functions">here</a>.</p> <p>In this perspective <span class="math-container">$2\sin\frac{\pi}{7}$</span> can be solved as follows</p> <p><span class="math-container">$2\sin\frac{\pi}{7}$</span> = <span 
class="math-container">$\sqrt{2-2\cos\frac{2\pi}{7}}$</span> = <span class="math-container">$\sqrt{2-\sqrt{2+2\cos\frac{4\pi}{7}}}$</span> = <span class="math-container">$\sqrt{2-\sqrt{2-2\cos\frac{3\pi}{7}}}$</span> = <span class="math-container">$\sqrt{2-\sqrt{2-\sqrt{2+2\cos\frac{6\pi}{7}}}}$</span> = <span class="math-container">$\sqrt{2-\sqrt{2-\sqrt{2-2\cos\frac{\pi}{7}}}}$</span> = <span class="math-container">$\sqrt{2-\sqrt{2-\sqrt{2-\sqrt{2+2\cos\frac{2\pi}{7}}}}}$</span></p> <p>One can observe repetition of <span class="math-container">$2\cos\frac{2\pi}{7}$</span> subsequently and <span class="math-container">$\therefore$</span> <span class="math-container">$2\sin\frac{\pi}{7}$</span> can be represented simply as</p> <blockquote> <p><strong><span class="math-container">$$\sqrt{2-\sqrt{2-\sqrt{2-\sqrt{2+\sqrt{2-\sqrt{2-\sqrt{2+...2\cos\frac{2\pi}{7}}}}}}}}$$</span> with repeating pattern <span class="math-container">$--+$</span> infinitely and first <span class="math-container">$-$</span> as single nonrepeating sign</strong></p> </blockquote> <p>But getting infinite product has to be worked out</p> Answer 27: <p>Let: $$ \begin{align} \LARGE{\rho}\normalsize &amp;= \left(\small \sqrt{2} \,\sqrt{2-\sqrt{2}} \,\sqrt{2+\sqrt{2-\sqrt{2}}} \normalsize\right) \times \\[2mm] &amp; \space \left(\small \sqrt{2\color{red}{-}\sqrt{2+\sqrt{2-\sqrt{2}}}} \,\sqrt{2\color{red}{-}\sqrt{2-\sqrt{2+\sqrt{2-\sqrt{2}}}}} \,\color{red}{\sqrt{2+ \color{black}{\sqrt{2-\sqrt{2-\sqrt{2+\sqrt{2-\sqrt{2}}}}}}}} \normalsize\right) \times \cdots \\[4mm] \LARGE{\sigma}\normalsize &amp;= \color{red}{\sqrt{2- \color{black}{\sqrt{2-\sqrt{2-\sqrt{2+\sqrt{2-\sqrt{2-\sqrt{2+\sqrt{\cdots}}}}}}}}}} \space\space\small\{\text{Same as last item of}\,\rho\,\text{with minus sign}\} \end{align} $$ Fundamentally, the two expressions are the same. 
<br/> And to transpose between them, we use the property: $$ \boxed{ \quad \color{red}{\Large{\rho \cdot \sigma = 2}} \quad } \\[4mm] \frac{1}{\rho} = \frac{1}{\rho} \color{red}{\cdot \frac{\sigma}{\sigma}} = \frac{\sigma}{\rho\cdot\sigma} = \frac{\sigma}{2} = \quad\sin\left(\frac{\pi}{7}\right)\quad = \frac{\sigma}{2} = \frac{\sigma}{2} \color{red}{\cdot \frac{\rho}{\rho}} = \frac{\rho\cdot\sigma/2}{\rho} = \frac{1}{\rho} $$ <br/> In order to trace the simplifying process, let us consider the definition of both $\left(\large{\rho}\right)$ and $\left(\large{\sigma}\right)$ upto $n$, <br/> Where $\left(\large{\rho}\right)$ definition is consist of <strong>three square-roots at a time</strong>, as illustrated above: $$ \begin{align} \LARGE{\rho}\normalsize &amp;= \left(\large{\rho}_{\normalsize 0}\,\large{\rho}_{\normalsize 1}\,\large{\rho}_{\normalsize 2}\right) \left(\large{\rho}_{\normalsize 3}\,\large{\rho}_{\normalsize 4}\,\large{\rho}_{\normalsize 5}\right) \cdots \left(\large{\rho}_{\normalsize 3n-3}\,\large{\rho}_{\normalsize 3n-2}\,\large{\rho}_{\normalsize 3n-1}\right) \left(\large{\rho}_{\normalsize 3n}\,\large{\rho}_{\normalsize 3n+1}\,\large{\rho}_{\normalsize 3n+2}\right) \\[2mm] &amp;= \left(\large{\rho}_{\normalsize 0}\,\large{\rho}_{\normalsize 1}\,\large{\rho}_{\normalsize 2}\right) \left(\large{\rho}_{\normalsize 3}\,\large{\rho}_{\normalsize 4}\,\large{\rho}_{\normalsize 5}\right) \cdots \left(\large{\rho}_{\normalsize 3n-3}\,\large{\rho}_{\normalsize 3n-2}\,\large{\rho}_{\normalsize 3n-1}\right) \left(\sqrt{2\color{red}{-}\rho_{\small 3n-1}}\,\sqrt{2\color{red}{-}\rho_{\small 3n}}\,\sqrt{2\color{red}{+}\rho_{\small 3n+1}}\right) \end{align} $$ And $\left(\large{\sigma}\right)$ equals <strong>same as last item of</strong> $\large{\rho}\,$ <strong>with minus sign</strong>, as mentioned above: $$ \LARGE{\sigma}\normalsize = \sqrt{2\color{red}{-}\rho_{\small 3n+1}} 
\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad $$ So: $$ \begin{align} &amp; \left(\large{\rho}_{\normalsize 3n}\,\large{\rho}_{\normalsize 3n+1}\,\large{\rho}_{\normalsize 3n+2}\right) \,\sqrt{2-\rho_{\small 3n+1}} \\[2mm] &amp; \qquad = \sqrt{2-\rho_{\small 3n-1}}\,\sqrt{2-\rho_{\small 3n}}\,\sqrt{2+\rho_{\small 3n+1}}\,\sqrt{2-\rho_{\small 3n+1}} = \sqrt{2-\rho_{\small 3n-1}}\,\sqrt{2-\rho_{\small 3n}}\,\sqrt{4-\rho_{\small 3n+1}^2} \\[2mm] &amp; \qquad = \sqrt{2-\rho_{\small 3n-1}}\,\sqrt{2-\rho_{\small 3n}}\,\sqrt{4-\left(2-\rho_{\small 3n}\right)} = \sqrt{2-\rho_{\small 3n-1}}\,\sqrt{4-\rho_{\small 3n}^2} \\[2mm] &amp; \qquad = \sqrt{2-\rho_{\small 3n-1}}\,\sqrt{4-\left(2-\rho_{\small 3n-1}\right)} = \sqrt{4-\rho_{\small 3n-1}^2} = \sqrt{4-\left(2+\rho_{\small 3n-2}\right)} = \color{red}{\sqrt{2-\rho_{\small 3n-2}}} \end{align} $$ Thus: $$ \begin{align} \color{red}{\Large{\rho \cdot \sigma}}\normalsize &amp;= \left[\left(\large{\rho}_{\normalsize 0}\,\large{\rho}_{\normalsize 1}\,\large{\rho}_{\normalsize 2}\right) \left(\large{\rho}_{\normalsize 3}\,\large{\rho}_{\normalsize 4}\,\large{\rho}_{\normalsize 5}\right) \cdots \left(\large{\rho}_{\normalsize 3n-3}\,\large{\rho}_{\normalsize 3n-2}\,\large{\rho}_{\normalsize 3n-1}\right) \left(\large{\rho}_{\normalsize 3n}\,\large{\rho}_{\normalsize 3n+1}\,\large{\rho}_{\normalsize 3n+2}\right)\right]\,\sqrt{2-\large{\rho}_{\normalsize 3n+1}} \\[2mm] &amp;= \left[\left(\large{\rho}_{\normalsize 0}\,\large{\rho}_{\normalsize 1}\,\large{\rho}_{\normalsize 2}\right) \left(\large{\rho}_{\normalsize 3}\,\large{\rho}_{\normalsize 4}\,\large{\rho}_{\normalsize 5}\right) \cdots \left(\large{\rho}_{\normalsize 3n-3}\,\large{\rho}_{\normalsize 3n-2}\,\large{\rho}_{\normalsize 3n-1}\right)\right]\,\sqrt{2-\large{\rho}_{\normalsize 3n-2}} \\[2mm] &amp;= \left[\left(\large{\rho}_{\normalsize 0}\,\large{\rho}_{\normalsize 1}\,\large{\rho}_{\normalsize 2}\right) 
\left(\large{\rho}_{\normalsize 3}\,\large{\rho}_{\normalsize 4}\,\large{\rho}_{\normalsize 5}\right) \cdots \right]\,\sqrt{2-\large{\rho}_{\normalsize 3n-5}} \\[2mm] &amp;= \quad \cdots \quad \cdots \quad \cdots \\[2mm] &amp;= \left[\left(\large{\rho}_{\normalsize 0}\,\large{\rho}_{\normalsize 1}\,\large{\rho}_{\normalsize 2}\right)\right]\,\sqrt{2-\large{\rho}_{\normalsize 1}} = \large{\rho}_{\normalsize 0}\normalsize\,\sqrt{4-\rho_{\small 0}^2} \normalsize = \sqrt{2}\,\sqrt{4-2} = \color{red}{2 \quad \forall\space n \ge 1} \end{align} $$ Hence: $$ \lim_{n\rightarrow\infty}\frac{1}{\rho} = \lim_{n\rightarrow\infty}\frac{\sigma}{2} = \sin\left(\frac{\pi}{7}\right) \quad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad $$</p> <hr> <p>In order to use this technique in general, for a period of length $\color{red}{m}$, we have: $$ \begin{align} \color{red}{{\large\rho}_{-1}=0},\,\, {\large\rho}_{0} &amp;= \sqrt{2\pm{\rho}_{-1}},\,\, {\large\rho}_{1}=\sqrt{2\pm{\rho}_{0}},\,\, {\large\rho}_{2}=\sqrt{2\pm{\rho}_{1}},\,\, \cdots ,\,\, {\large\rho}_{m-1}=\sqrt{2\pm{\rho}_{m-2}},\,\, \cdots \\[2mm] {\Large\rho} &amp;= \prod_{k=0}^{n}\left( {\large\rho}_{mk}\, {\large\rho}_{mk+1}\, \cdots\, {\large\rho}_{mk+m-2}\, {\large\rho}_{mk+m-1} \right) \qquad \\[2mm] &amp;= \prod_{k=0}^{n}\left( \sqrt{2\pm{\large\rho}_{mk-1}}\, \sqrt{2\pm{\large\rho}_{mk}}\, \cdots\, \sqrt{2\pm{\large\rho}_{mk+m-3}}\, \sqrt{2\color{red}{\pm}{\large\rho}_{mk+m-2}} \right) \\[2mm] {\Large\sigma} &amp;= \sqrt{2\color{red}{\mp}{\large\rho}_{mn+m-2}} \end{align} $$ Hence, $\,\left({\large\sigma}\right)\,$ <strong>must start with a different sign from the actual period!</strong> <br/> Nevertheless, if $\,{\large\sigma}\,$ does not comply with this condition, we still have options to write the product form of a related/equivalent function by applying simple manipulations to the original $\,{\large\sigma}\,$.
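Before turning to an example, the $\sin(\pi/7)$ construction above can be sanity-checked numerically. The following short Python sketch (an added illustration, not part of the original answer) generates the $\rho_k$ with sign period $(-,-,+)$ starting from $\rho_{-1}=0$, accumulates the partial product $\rho$, forms $\sigma$ as the last radical repeated with a minus sign, and confirms that $\rho\cdot\sigma=2$ while $\sigma/2\to\sin(\pi/7)$:

```python
import math

# Numerical check of the rho/sigma construction for sin(pi/7):
# sign period is (-, -, +), with rho_{-1} = 0, so rho_0 = sqrt(2).
def rho_sigma(n):
    prev = 0.0        # rho_{-1} = 0
    rho = 1.0         # running product of all rho_k
    middle = 0.0      # rho_{3k+1}, the radical that sigma repeats
    for _ in range(n + 1):          # k = 0, 1, ..., n
        a = math.sqrt(2 - prev)     # rho_{3k}   = sqrt(2 - rho_{3k-1})
        b = math.sqrt(2 - a)        # rho_{3k+1} = sqrt(2 - rho_{3k})
        c = math.sqrt(2 + b)        # rho_{3k+2} = sqrt(2 + rho_{3k+1})
        rho *= a * b * c
        prev, middle = c, b
    sigma = math.sqrt(2 - middle)   # last item of rho, with a minus sign
    return rho, sigma

rho, sigma = rho_sigma(20)
print(rho * sigma)                       # 2 (up to rounding), for every n >= 1
print(sigma / 2, math.sin(math.pi / 7))  # both ~0.43388..., agreeing to double precision
```

The identity $\rho\cdot\sigma=2$ holds (up to floating-point rounding) already from $n=1$, exactly as the telescoping derivation above shows, while $\sigma/2$ converges to $\sin(\pi/7)$ geometrically in $n$.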
<br/> <br/> Consider the following example $\left\{\,\text{period}\,\left(+,+,-,-,-\right)\,\right\}$: $$ \begin{align} \cos\left(\frac{\pi}{11}\right) &amp;= \frac12\,\sqrt{2\color{red}{+}\sqrt{2+\sqrt{2-\sqrt{2-\sqrt{2-\sqrt{2\color{blue}{+}\sqrt{\cdots}}}}}}} \\[2mm] \implies \qquad &amp;\space \underbrace{\color{red}{+}{\large/}+,-,-,-,} \underbrace{\color{blue}{+}{\large/}+,-,-,-,} \underbrace{\color{blue}{+}{\large/}+,-,-,-,} \color{blue}{+}{\large/}\cdots \end{align} $$ And as it appears, after isolating the first sign, the pattern does not obey the condition. <br/> We have two options to convert it into an applicable format: $$ \color{blue}{\underline{\left({\bf1}\right) - \,\text{Convert to}\, \sin(\pi/11)}} \space\quad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \\[2mm] \begin{align} \sin\left(\frac{\pi}{11}\right) &amp;= \sqrt{1-\cos^2\left(\frac{\pi}{11}\right)} = \frac12\,\sqrt{4-\left[2\cos\left(\frac{\pi}{11}\right)\right]^2} \Rightarrow \\[2mm] \sin\left(\frac{\pi}{11}\right) &amp;= \frac12\,\sqrt{2\color{red}{-}\sqrt{2+\sqrt{2-\sqrt{2-\sqrt{2-\sqrt{2\color{blue}{+}\sqrt{\cdots}}}}}}} \\[2mm] \implies \qquad &amp;\space \underbrace{\color{red}{-}{\large/}+,-,-,-,} \underbrace{\color{blue}{+}{\large/}+,-,-,-,} \underbrace{\color{blue}{+}{\large/}+,-,-,-,} \color{blue}{+}{\large/}\cdots \\[2mm] \end{align} \\[2mm] {\Large\rho} = \prod_{k=0}^{n}\left( \sqrt{2-{\large\rho}_{5k-1}}\, \sqrt{2-{\large\rho}_{5k\space\space\space\space}}\, \sqrt{2-{\large\rho}_{5k+1}}\, \sqrt{2+{\large\rho}_{5k+2}}\, \sqrt{2\color{red}{+}{\large\rho}_{5k+3}} \right) \\[2mm] {\Large\sigma} = \sqrt{2\color{red}{-}{\large\rho}_{5n+3}}\,,\quad {\Large\rho}\cdot{\Large\sigma}=2\,,\quad \sin\left(\frac{\pi}{11}\right)=\frac{{\large\sigma}}{2}=\frac{1}{{\large\rho}}\,,\quad \cos\left(\frac{\pi}{11}\right)=\sqrt{1-\left(1/{\large\rho}\right)^2} \\[4mm] $$ $$ \color{blue}{\underline{\left({\bf2}\right) - \,\text{Convert to}\, \cos(\pi/22)}}
\space\quad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \\[2mm] \begin{align} \cos\left(\frac{\pi}{22}\right) &amp;= \sqrt{\frac12+\frac12\,\cos\left(\frac{\pi}{11}\right)} = \frac12\,\sqrt{2+2\,\cos\left(\frac{\pi}{11}\right)} \Rightarrow \\[2mm] \cos\left(\frac{\pi}{22}\right) &amp;= \frac12\,\sqrt{2\color{red}{+}\sqrt{2+\sqrt{2+\sqrt{2-\sqrt{2-\sqrt{2\color{blue}{-}\sqrt{\cdots}}}}}}} \\[2mm] \implies \qquad &amp;\space \underbrace{\color{red}{+}{\large/}+,+,-,-,} \underbrace{\color{blue}{-}{\large/}+,+,-,-,} \underbrace{\color{blue}{-}{\large/}+,+,-,-,} \color{blue}{-}{\large/}\cdots \\[2mm] \end{align} \\[2mm] {\Large\rho} = \prod_{k=0}^{n}\left( \sqrt{2-{\large\rho}_{5k-1}}\, \sqrt{2-{\large\rho}_{5k\space\space\space\space}}\, \sqrt{2+{\large\rho}_{5k+1}}\, \sqrt{2+{\large\rho}_{5k+2}}\, \sqrt{2\color{red}{-}{\large\rho}_{5k+3}} \right) \\[2mm] {\Large\sigma} = \sqrt{2\color{red}{+}{\large\rho}_{5n+3}}\,,\quad {\Large\rho}\cdot{\Large\sigma}=2\,,\quad \cos\left(\frac{\pi}{22}\right)=\frac{{\large\sigma}}{2}=\frac{1}{{\large\rho}}\,,\quad \cos\left(\frac{\pi}{11}\right)=\sqrt{\frac12+\frac12\,\left(1/{\large\rho}\right)} \\[2mm] $$ And here are Mathematica codes for these two options: <br/> </p> <pre> <code>
n = 4; r = 1; e = 0;
For[k = 0, n >= k, k++,
 {a = Sqrt[2 - e]; b = Sqrt[2 - a]; c = Sqrt[2 - b];
  d = Sqrt[2 + c]; e = Sqrt[2 + d];
  r = r (a b c d e)}];
s = 1; x = Sqrt[2 + Sqrt[2 - Sqrt[2 - Sqrt[2]]]];
For[k = 0, (n - 1) >= k, k++,
 {x = Sqrt[2 + Sqrt[2 - Sqrt[2 - Sqrt[2 - Sqrt[2 + x]]]]]}];
s = Sqrt[2 - x];
Print[N[r s], " ", N[1/r], " ", N[s/2], " ", N[Sin[Pi/11]]]

n = 4; r = 1; e = 0;
For[k = 0, n >= k, k++,
 {a = Sqrt[2 - e]; b = Sqrt[2 - a]; c = Sqrt[2 + b];
  d = Sqrt[2 + c]; e = Sqrt[2 - d];
  r = r (a b c d e)}];
s = 1; x = Sqrt[2 + Sqrt[2 + Sqrt[2 - Sqrt[2]]]];
For[k = 0, (n - 1) >= k, k++,
 {x = Sqrt[2 + Sqrt[2 + Sqrt[2 - Sqrt[2 - Sqrt[2 - x]]]]]}];
s = Sqrt[2 + x];
Print[N[r s], " ", N[1/r], " ", N[s/2], " ",
N[Cos[Pi/22]]] </code> </pre> Answer 28: <p>With the help of Half angle cosine formula it is possible to express many sine and cosine angles as finite or infinite nested square roots of 2 <a href="https://math.stackexchange.com/questions/3841604/solving-cyclic-infinite-nested-square-roots-of-2-as-cosine-functions">here</a>.</p> <p>In this perspective <span class="math-container">$2\sin\frac{\pi}{7}$</span> can be solved as follows</p> <p><span class="math-container">$2\sin\frac{\pi}{7}$</span> = <span class="math-container">$\sqrt{2-2\cos\frac{2\pi}{7}}$</span> = <span class="math-container">$\sqrt{2-\sqrt{2+2\cos\frac{4\pi}{7}}}$</span> = <span class="math-container">$\sqrt{2-\sqrt{2-2\cos\frac{3\pi}{7}}}$</span> = <span class="math-container">$\sqrt{2-\sqrt{2-\sqrt{2+2\cos\frac{6\pi}{7}}}}$</span> = <span class="math-container">$\sqrt{2-\sqrt{2-\sqrt{2-2\cos\frac{\pi}{7}}}}$</span> = <span class="math-container">$\sqrt{2-\sqrt{2-\sqrt{2-\sqrt{2+2\cos\frac{2\pi}{7}}}}}$</span></p> <p>One can observe repetition of <span class="math-container">$2\cos\frac{2\pi}{7}$</span> subsequently and <span class="math-container">$\therefore$</span> <span class="math-container">$2\sin\frac{\pi}{7}$</span> can be represented simply as</p> <blockquote> <p><strong><span class="math-container">$$\sqrt{2-\sqrt{2-\sqrt{2-\sqrt{2+\sqrt{2-\sqrt{2-\sqrt{2+...2\cos\frac{2\pi}{7}}}}}}}}$$</span> with repeating pattern <span class="math-container">$--+$</span> infinitely and first <span class="math-container">$-$</span> as single nonrepeating sign</strong></p> </blockquote> <p>But getting infinite product has to be worked out</p>
\sqrt{2\pm{\large\rho}_{mk-1}}\, \sqrt{2\pm{\large\rho}_{mk}}\, \cdots\, \sqrt{2\pm{\large\rho}_{mk+m-3}}\, \sqrt{2\color{red}{\pm}{\large\rho}_{mk+m-2}} \right) \\[2mm] {\Large\sigma} &amp;= \sqrt{2\color{red}{\mp}{\large\rho}_{mn+m-2}} \end{align} $$ Hence, $\,\left({\large\sigma}\right)\,$ <strong>must start with a different sign from the actual period!</strong> <br/> Nevertheless, if $\,{\large\sigma}\,$ is not comply with this condition, we still have options to write the product form of related/equivalent function by applying simple manipulations to the original $\,{\large\sigma}\,$. <br/> <br/> Consider the following example $\left\{\,\text{period}\,\left(+,+,-,-,-\right)\,\right\}$: $$ \begin{align} \cos\left(\frac{\pi}{11}\right) &amp;= \frac12\,\sqrt{2\color{red}{+}\sqrt{2+\sqrt{2-\sqrt{2-\sqrt{2-\sqrt{2\color{blue}{+}\sqrt{\cdots}}}}}}} \\[2mm] \implies \qquad &amp;\space \underbrace{\color{red}{+}{\large/}+,-,-,-,} \underbrace{\color{blue}{+}{\large/}+,-,-,-,} \underbrace{\color{blue}{+}{\large/}+,-,-,-,} \color{blue}{+}{\large/}\cdots \end{align} $$ And as it appears, after isolating the first sign, the pattern is not obeying the condition. 
<br/> We have two options to convert it into applicable format: $$ \color{blue}{\underline{\left({\bf1}\right) - \,\text{Convert to}\, \sin(\pi/11)}} \space\quad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \\[2mm] \begin{align} \sin\left(\frac{\pi}{11}\right) &amp;= \sqrt{1-\cos^2\left(\frac{\pi}{11}\right)} = \frac12\,\sqrt{4-\left[2\cos\left(\frac{\pi}{11}\right)\right]^2} \Rightarrow \\[2mm] \sin\left(\frac{\pi}{11}\right) &amp;= \frac12\,\sqrt{2\color{red}{-}\sqrt{2+\sqrt{2-\sqrt{2-\sqrt{2-\sqrt{2\color{blue}{+}\sqrt{\cdots}}}}}}} \\[2mm] \implies \qquad &amp;\space \underbrace{\color{red}{-}{\large/}+,-,-,-,} \underbrace{\color{blue}{+}{\large/}+,-,-,-,} \underbrace{\color{blue}{+}{\large/}+,-,-,-,} \color{blue}{+}{\large/}\cdots \\[2mm] \end{align} \\[2mm] {\Large\rho} = \prod_{k=0}^{n}\left( \sqrt{2-{\large\rho}_{5k-1}}\, \sqrt{2-{\large\rho}_{5k\space\space\space\space}}\, \sqrt{2-{\large\rho}_{5k+1}}\, \sqrt{2+{\large\rho}_{5k+2}}\, \sqrt{2\color{red}{+}{\large\rho}_{5k+3}} \right) \\[2mm] {\Large\sigma} = \sqrt{2\color{red}{-}{\large\rho}_{5n+3}}\,,\quad {\Large\rho}\cdot{\Large\sigma}=2\,,\quad \sin\left(\frac{\pi}{11}\right)=\frac{{\large\sigma}}{2}=\frac{1}{{\large\rho}}\,,\quad \cos\left(\frac{\pi}{11}\right)=\sqrt{1-\left(1/{\large\rho}\right)^2} \\[4mm] $$ $$ \color{blue}{\underline{\left({\bf2}\right) - \,\text{Convert to}\, \cos(\pi/22)}} \space\quad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \\[2mm] \begin{align} \cos\left(\frac{\pi}{22}\right) &amp;= \sqrt{\frac12+\frac12\,\cos\left(\frac{\pi}{11}\right)} = \frac12\,\sqrt{2+2\,\cos\left(\frac{\pi}{11}\right)} \Rightarrow \\[2mm] \cos\left(\frac{\pi}{22}\right) &amp;= \frac12\,\sqrt{2\color{red}{+}\sqrt{2+\sqrt{2+\sqrt{2-\sqrt{2-\sqrt{2\color{blue}{-}\sqrt{\cdots}}}}}}} \\[2mm] \implies \qquad &amp;\space \underbrace{\color{red}{+}{\large/}+,+,-,-,} \underbrace{\color{blue}{-}{\large/}+,+,-,-,} 
\underbrace{\color{blue}{-}{\large/}+,+,-,-,} \color{blue}{-}{\large/}\cdots \\[2mm] \end{align} \\[2mm] {\Large\rho} = \prod_{k=0}^{n}\left( \sqrt{2-{\large\rho}_{5k-1}}\, \sqrt{2-{\large\rho}_{5k\space\space\space\space}}\, \sqrt{2+{\large\rho}_{5k+1}}\, \sqrt{2+{\large\rho}_{5k+2}}\, \sqrt{2\color{red}{-}{\large\rho}_{5k+3}} \right) \\[2mm] {\Large\sigma} = \sqrt{2\color{red}{+}{\large\rho}_{5n+3}}\,,\quad {\Large\rho}\cdot{\Large\sigma}=2\,,\quad \cos\left(\frac{\pi}{22}\right)=\frac{{\large\sigma}}{2}=\frac{1}{{\large\rho}}\,,\quad \cos\left(\frac{\pi}{11}\right)=\sqrt{\frac12+\frac12\,\left(1/{\large\rho}\right)} \\[2mm] $$ And here are Mathematica codes for these two options: <br/> </p> <pre> <code> n = 4; r = 1; e = 0; For[k = 0, n >= k, k++, {a = Sqrt[2 - e]; b = Sqrt[2 - a]; c = Sqrt[2 - b]; d = Sqrt[2 + c]; e = Sqrt[2 + d]; r = r (a b c d e)}]; s = 1; x = Sqrt[2 + Sqrt[2 - Sqrt[2 - Sqrt[2]]]]; For[k = 0, (n - 1) >= k, k++, {x = Sqrt[2 + Sqrt[2 - Sqrt[2 - Sqrt[2 - Sqrt[2 + x]]]]]}]; s = Sqrt[2 - x]; Print[N[r s], " ", N[1/r], " ", N[s/2], " ", N[Sin[Pi/11]]] n = 4; r = 1; e = 0; For[k = 0, n >= k, k++, {a = Sqrt[2 - e]; b = Sqrt[2 - a]; c = Sqrt[2 + b]; d = Sqrt[2 + c]; e = Sqrt[2 - d]; r = r (a b c d e)}]; s = 1; x = Sqrt[2 + Sqrt[2 + Sqrt[2 - Sqrt[2]]]]; For[k = 0, (n - 1) >= k, k++, {x = Sqrt[2 + Sqrt[2 + Sqrt[2 - Sqrt[2 - Sqrt[2 - x]]]]]}]; s = Sqrt[2 + x]; Print[N[r s], " ", N[1/r], " ", N[s/2], " ", N[Cos[Pi/22]]] </code> </pre>
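The cyclic nested-radical representation of $2\sin\frac{\pi}{7}$ discussed above can be checked numerically. The sketch below is an editor-added Python illustration (not code from the thread): it iterates the repeating $-,-,+$ block to its fixed point $2\cos\frac{2\pi}{7}$, then applies the single non-repeating leading minus sign.

```python
import math

# The repeating --+ block defines a fixed point A = 2*cos(2*pi/7):
#   A = sqrt(2 - sqrt(2 - sqrt(2 + A)))
# Near the fixed point the map is a contraction with factor
# 1/(8*A*B*C) = 1/8, using 8*cos(pi/7)*cos(2*pi/7)*cos(3*pi/7) = 1,
# so the iteration converges quickly from any start in (0, 2).
a = 1.0
for _ in range(40):
    a = math.sqrt(2 - math.sqrt(2 - math.sqrt(2 + a)))

# The single non-repeating leading minus sign then gives 2*sin(pi/7):
two_sin = math.sqrt(2 - a)
print(two_sin / 2, math.sin(math.pi / 7))  # the two values agree
```

With 40 iterations the fixed point is resolved far below double precision, so the printed values coincide to machine accuracy.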
Canceling out integral
2,085,629
8
https://math.stackexchange.com/questions/2085629/canceling-out-integral
<p>Bear with my naivety: I wanted to ask if it is possible to cancel out $\int$ with a $\frac{\mathrm d}{\mathrm dx}$. I had $\frac{\partial}{\partial v}$ in a question, so I moved $\partial v$ to the other side and integrated both sides. Now only $\partial$ remains on the left-hand side, so will it cancel out with the integral?</p>
[ { "answer_id": 2085648, "score": 0, "text": "<p>Usually when we take integration we remove <strong>d</strong> or $\\partial$</p>\n\n<p>You can say it is cancel out or it is integrated.</p>\n\n<p>Example -</p>\n\n<p>$\\frac{d}{dv} x = v^2 + 3$</p>\n\n<p>$dx = (v^2 + 3) dv$</p>\n\n<p>On integrating both sides,</p>\n\n<p>x = $\\int(v^2 + 3)dv$</p>\n" }, { "answer_id": 2085669, "score": 0, "text": "<p>I wouldn't say that $\\frac{\\mathrm d}{\\mathrm dx}$ and $\\int$ cancel each other out. Consider the following examples\n$$\\frac{\\mathrm d}{\\mathrm dx}\\int e^t\\ \\mathrm dt=\\frac{\\mathrm d}{\\mathrm dx}\\left(e^t+C\\right)=0$$\n$$\\frac{\\mathrm d}{\\mathrm dx}\\int_0^2 e^x\\ \\mathrm dx=\\frac{\\mathrm d}{\\mathrm dx}\\left(e^2-1\\right)=0$$\nStudy the <a href=\"https://en.m.wikipedia.org/wiki/Fundamental_theorem_of_calculus\" rel=\"nofollow noreferrer\">fundamental theorem of calculus</a> for a deeper understanding of what's going on.</p>\n\n<hr>\n\n<p>I'll also address what you mentioned in the comments. Although $\\frac{\\mathrm dM}{\\mathrm dx}$ look and act like a ratio, in modern mathematical analysis, <a href=\"https://math.stackexchange.com/q/21199/161538\">it's not a ratio</a>. It just so happens that\n$$\\mathrm dM=\\frac{\\mathrm dM}{\\mathrm dx}\\ \\mathrm dx$$\nTherefore\n$$V=\\frac{\\mathrm dM}{\\mathrm dx}$$\n$$\\int V\\ \\mathrm dx=\\int\\frac{\\mathrm dM}{\\mathrm dx}\\ \\mathrm dx=\\int\\ \\mathrm dM=M+C$$</p>\n" } ]
Question: Canceling out integral
Answer 1: <p>Usually when we take integration we remove <strong>d</strong> or $\partial$</p>

<p>You can say it is cancel out or it is integrated.</p>

<p>Example -</p>

<p>$\frac{d}{dv} x = v^2 + 3$</p>

<p>$dx = (v^2 + 3) dv$</p>

<p>On integrating both sides,</p>

<p>x = $\int(v^2 + 3)dv$</p>

Answer 2: <p>I wouldn't say that $\frac{\mathrm d}{\mathrm dx}$ and $\int$ cancel each other out. Consider the following examples
$$\frac{\mathrm d}{\mathrm dx}\int e^t\ \mathrm dt=\frac{\mathrm d}{\mathrm dx}\left(e^t+C\right)=0$$
$$\frac{\mathrm d}{\mathrm dx}\int_0^2 e^x\ \mathrm dx=\frac{\mathrm d}{\mathrm dx}\left(e^2-1\right)=0$$
Study the <a href="https://en.m.wikipedia.org/wiki/Fundamental_theorem_of_calculus" rel="nofollow noreferrer">fundamental theorem of calculus</a> for a deeper understanding of what's going on.</p>

<hr>

<p>I'll also address what you mentioned in the comments. Although $\frac{\mathrm dM}{\mathrm dx}$ look and act like a ratio, in modern mathematical analysis, <a href="https://math.stackexchange.com/q/21199/161538">it's not a ratio</a>. It just so happens that
$$\mathrm dM=\frac{\mathrm dM}{\mathrm dx}\ \mathrm dx$$
Therefore
$$V=\frac{\mathrm dM}{\mathrm dx}$$
$$\int V\ \mathrm dx=\int\frac{\mathrm dM}{\mathrm dx}\ \mathrm dx=\int\ \mathrm dM=M+C$$</p>
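The distinction Answer 2 draws can be checked numerically. A minimal sketch (the `integrate` helper below is a hypothetical midpoint-rule routine written for this illustration, not part of either answer): the definite integral $\int_0^2 e^x\,\mathrm dx$ is a fixed constant, so its derivative is $0$, whereas differentiating $F(x)=\int_0^x e^t\,\mathrm dt$ recovers the integrand — the sense in which $\frac{\mathrm d}{\mathrm dx}$ and $\int$ undo each other per the fundamental theorem of calculus.

```python
from math import e, exp

# Illustrative helper (an assumption of this sketch, not from the answers):
# a plain midpoint-rule approximation of a definite integral.
def integrate(f, a, b, n=100_000):
    """Midpoint-rule approximation of the integral of f over [a, b]."""
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

# A definite integral with fixed bounds is just the constant e^2 - 1,
# so differentiating it with respect to x gives 0.
const = integrate(exp, 0.0, 2.0)
assert abs(const - (e**2 - 1)) < 1e-6

# FTC part 1: a central difference quotient of F(x) = int_0^x e^t dt,
# taken at x = 1, recovers the integrand value e^1.
h = 1e-4
F = lambda x: integrate(exp, 0.0, x)
deriv = (F(1.0 + h) - F(1.0 - h)) / (2 * h)
assert abs(deriv - e) < 1e-3
```

A central difference is used because its $O(h^2)$ truncation error is small enough that the quadrature noise does not swamp it.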
$f(xf(y)-f(x))=2f(x)+xy$
2,085,726
8
https://math.stackexchange.com/questions/2085726/fxfy-fx-2fxxy
<p>This is my problem: "Find $f:\mathbb{R}\to \mathbb{R}$ s.t. $f(xf(y)-f(x))=2f(x)+xy$". I'm not so good at functional equations, so all I have done so far is substitute $1,0$ for $x,y$, and I couldn't even calculate a value.</p> <p>One other approach: I found one solution, $f(x)=-x+1$. So I let $g=f+x-1$ and try to prove $g=0$, but substituting that into the equation makes it a mess. Help me please.</p>
[ { "answer_id": 2085809, "score": 8, "text": "<p>This is a really tough problem, I like it very much! I tried to make it clear where I used what, but nevertheless it's a bit messy. Feel free to ask if something is unclear!</p>\n\n<p>Let $f$ be a solution of the equation and let $P(x,y)$ be the assertion $f(xf(y)-f(x))=2f(x)+xy$.\nThen\n$$\nP(1,y):\\qquad f(f(y)-f(1))=2f(1)+y\\implies f\\text{ surjective}\n$$\nThus let $a,b\\in\\mathbb{R}$ be such that $f(a)=0$ and $f(b)=1$.\n$$\nP(a,b):\\qquad 0=f(af(b)-f(a))=ab\n$$\nTherefore either $a=0$ or $b=0$.</p>\n\n<p><strong>Case 1 :</strong> $a=0$\n$$\nP(1,1):\\qquad 0=2f(1)+1\\implies f(1)=-\\frac12\n$$\nThus if we reconsider $P(1,y)$ we obtain $f(f(y)+1/2)=y-1$. Hence\n$$\nP(1,f(y)+1/2):\\qquad f(y-1/2)=f(f(f(y)+1/2)+1/2)=f(y)+1/2-1=f(y)-1/2\n$$\nand with $y=f(x)+1/2$ in the above equation:\n$$\nf(f(x))=f(f(x)+1/2)-1/2=x-3/2\n$$\nBut thus $0=f(f(0))=-3/2$, contradiction.</p>\n\n<p><strong>Case 2 :</strong> $b=0$</p>\n\n<p>Then $f(0)=f(b)=1$, and $P(1,1)$ gives $1=f(0)=2f(1)+1$, i.e. $f(1)=0$. Therefore if we reconsider $P(1,y)$ we get $f(f(y))=y\\tag{1}\\label{1}$ and thus $f$ is bijective. Now\n$$\nP(x,0):\\qquad f(x-f(x))=2f(x)\\tag{2}\\label{2}\n$$\n$$\nP(f(x),0):\\qquad f(f(x)-x)=2x\\\\\\stackrel{f()}\\implies f(x)-x\\stackrel{\\eqref{1}}=f(f(f(x)-x))=f(2x)\\tag{3}\\label{3}\n$$\n$$\nP(f(x),1):\\qquad f(-x)\\stackrel{\\eqref{1}}=f(-f(f(x)))\\stackrel{P(f(x),1)}=2f(f(x))+f(x)\\stackrel{\\eqref{1}}=2x+f(x)\\tag{4}\\label{4}\n$$\nNow $f(-1)\\stackrel{\\eqref{4}}=2+f(1)=2$ and thus $f(2)=f(f(-1))\\stackrel{\\eqref{1}}=-1$. Finally we take \n$$\nP(x,2):f(-x-f(x))=2f(x)+2x\n$$\nBut $f(-x-f(x))\\stackrel{\\eqref{4}}=2(x+f(x))+f(x+f(x))$ and thus the above equation simplifies to $f(x+f(x))=0$. 
But $f$ is injective and we know already that $f(1)=0$, so\n$$\nx+f(x)=1\\implies f(x)=1-x\\qquad\\forall x\\in\\mathbb{R}\n$$\nSubstituting this back into the original equation we see that this is indeed a solution.</p>\n" }, { "answer_id": 2085768, "score": 4, "text": "<p>Fact 1: $f$ has a root. To see this, take $x \\not = 0$ and $y = \\frac{-2f(x)}{x}$. </p>\n\n<p>Fact 2: If $f(c) = 0$, then $c = \\pm\\sqrt{f(0)}$. To see this, set $x=y=c$.</p>\n\n<p>Fact 3: $f(0) = \\pm 1$. To see this, let $c$ be a root and set $x=c, y=0$. Then, $f(cf(0)) = 0 \\implies cf(0)=\\pm\\sqrt{f(0)}$ but $c = \\pm\\sqrt{f(0)}$.</p>\n\n<p>Fact 4: $f(0) = 1$. If $f(0) = -1$, then $x=y=0$ gives $f(1) = -2$. Then $x=1,y=0$ gives $f(-1-(-2))=2(-2) = -4$, a contradiction.</p>\n\n<p>Fact 5: $f(-1) = 2$ and $f(1) = 0$. The first equality follows from $x=0,y=0$. The second then comes from Facts 1, 2, and 4, together with $f(-1)\\not = 0$.</p>\n\n<p>Fact 6: $f(f(x)) = x$ for all $x \\in \\mathbb{R}$. This follows from setting $x=1$.</p>\n\n<p>Fact 7: $f(-a) = 2a+f(a)$ for all $a \\in \\mathbb{R}$. This follows from $x=f(a),y=1$ and noting Fact 6 implies $f$ is surjective.</p>\n\n<p>Fact 8: $f(-x-f(x)) = 2x+2f(x)$. This follows from $y=2$, since $f(2)=f(f(-1))=-1$ by Facts 5 and 6.</p>\n\n<p>Fact 9: $f(-x-f(x)) = 2x+2f(x)+f(x+f(x))$. This follows from letting $a = x+f(x)$ in Fact 7.</p>\n\n<p>Fact 10: $f(x) = -x+1$. This follows from Facts 8 and 9, Fact 6 showing $f$ is injective, and Fact 5, which gives $f(1) = 0$.</p>\n" }, { "answer_id": 2085773, "score": 2, "text": "<p>Let's investigate first the affine solutions $x\\mapsto ax+b$. 
The equation rewrites as</p>\n\n<p>$$a(axy+bx-ax-b)+b=2ax+2b+xy\\iff (a^2-1)xy+a(b-a-2)x-b(a+1)=0$$</p>\n\n<p>Identifying the coefficients of the terms of the two sides of this polynomial equations we get that $x\\mapsto -x+1$ is the only affine solution to the problem.</p>\n\n<p>More painful but with the same method we prove that the only polynomial (power series solution) is $x\\mapsto -x+1$</p>\n\n<p>That's all I have for the moment</p>\n" }, { "answer_id": 3687369, "score": 0, "text": "<p><span class=\"math-container\">$$P(x,y) \\implies f(xf(y)−f(x))=2f(x)+xy$$</span>\n<span class=\"math-container\">$$P(1,y) \\implies f(f(y)-f(1))=2f(1)+y \\implies \\text{Surjectivity}$$</span>\nBecause <span class=\"math-container\">$2f(1)+y$</span> can attain any real value. Then there exists a real number <span class=\"math-container\">$a$</span> such that <span class=\"math-container\">$f(a)=0$</span>\n<span class=\"math-container\">$$P(a,a) \\implies a=\\pm \\sqrt{f(0)}$$</span>\n<span class=\"math-container\">$$f(xf(1)-f(x))=2f(x)+x$$</span>\nand there exists a real number <span class=\"math-container\">$c$</span> such that <span class=\"math-container\">$f(c)=1$</span>\n<span class=\"math-container\">$$P(x,c) \\implies f(x-f(x))=2f(x)+xc \\tag{1}$$</span>\nLet <span class=\"math-container\">$b \\ne a \\implies f(b) \\ne 0$</span> be a real number,\n<span class=\"math-container\">$$P(x/f(b),b) \\implies f(x-f(x))=2f(x)+xb/f(b) \\tag{2}$$</span>\nCombining <span class=\"math-container\">$(1)$</span> with <span class=\"math-container\">$(2)$</span>, we get\n<span class=\"math-container\">$$f(b)=b/c, \\ \\ \\ \\forall \\ b \\ne a, \\ c \\ne 0$$</span>\nSubstituting that in the original equation, we get\n<span class=\"math-container\">$$xy/c^2-x/c^2=2x/c+xy$$</span>\nwhich implies that no such <span class=\"math-container\">$c \\ne 0$</span> exists \n<span class=\"math-container\">$$\\implies c=0 \\implies f(0)=1 \\implies a= \\pm 1$$</span>\nCase 1: <span 
class=\"math-container\">$a=1 \\ \\ (f(1)=0)$</span>\n<span class=\"math-container\">$$P(1,y) \\implies f(f(y))=y \\tag{3}$$</span>\n<span class=\"math-container\">$$P(x,1) \\implies f(-f(x))=2f(x)+x $$</span>\n<span class=\"math-container\">$$\\overset{(3)}{\\implies} f(-x)=2x+f(x) \\tag{4}$$</span>\n<span class=\"math-container\">$$\\overset{x=1}{\\implies} f(-1)=2$$</span>\n<span class=\"math-container\">$$P(-1,x) \\implies f(-f(x)-2)=4-x \\overset{(3)}{\\implies} f(-x-2)=4-f(x) \\overset{(4)}{\\implies} f(x-2)=4-2x-f(x)$$</span>\nSo, by strong induction, we have for all integers <span class=\"math-container\">$k$</span>, \n<span class=\"math-container\">$$f(k)=1-k$$</span>\nand to prove it we put <span class=\"math-container\">$x=k+1$</span> in <span class=\"math-container\">$f(x-2)=4-2x-f(x)$</span> getting\n<span class=\"math-container\">$$f(k-1)=2-2k-f(k+1) \\iff f(k+1)=2-f(k-1)-2k=2-(1-(k-1))-2k=-k$$</span>\nSo, it's proven. Now, let <span class=\"math-container\">$k$</span> be a positive integer,\n<span class=\"math-container\">$$P(x,k) \\implies f(x-xk-f(x))=2f(x)+xk$$</span>\n<span class=\"math-container\">$$P(x,0) \\implies f(x-f(x))=2f(x)$$</span>\nSo, we combine the above two equations to get\n<span class=\"math-container\">$$f((x-f(x))-xk)=f(x-f(x))+xk$$</span>\nDue to the surjectivity, <span class=\"math-container\">$x-f(x)$</span> can attain any real value, and so does <span class=\"math-container\">$xk$</span>, so we set <span class=\"math-container\">$x-f(x)=u$</span> and <span class=\"math-container\">$-xk=v$</span> getting\n<span class=\"math-container\">$$f(u+v)=f(u)-v \\tag{5}$$</span>\n<span class=\"math-container\">$$\\implies f(xf(y)-f(x))=f(xf(y))+f(x) \\overset{(3), P(x,y)}{\\implies} f(xf(y))=f(x)+xy$$</span>\n<span class=\"math-container\">$$ \\overset{(3), y=f(y)}{\\implies} f(xy)=f(x)+xf(y)$$</span>\n<span class=\"math-container\">$$\\overset{x \\mapsto x+1}{\\implies}f(xy+y)=f(x+1)+xf(y)+f(y)\\overset{(5)}{\\implies} 
f(xy)-y=f(x)-1+xf(y)+f(y)$$</span>\n<span class=\"math-container\">$$\\overset{x=1}{\\implies} f(y)-y=f(1)-1+f(y)+f(y) \\iff f(y)=1-y \\ \\ \\ \\Box.$$</span>\nCase 2: <span class=\"math-container\">$a=-1$</span>\n<span class=\"math-container\">$$P(-1,x) \\implies f(-f(x))=-x$$</span>\n<span class=\"math-container\">$$P(x,-1) \\implies f(-f(x))=2f(x)-x$$</span>\nSo <span class=\"math-container\">$f(x)=0$</span>, which doesn't work in the original equation.</p>\n" }, { "answer_id": 2085809, "score": 8, "text": "<p>This is a really tough problem, I like it very much! I tried to make it clear where I used what, but nevertheless it's a bit messy. Feel free to ask if something is unclear!</p>\n\n<p>Let $f$ be a solution of the equation and let $P(x,y)$ be the assertion $f(xf(y)-f(x))=2f(x)+xy$.\nThen\n$$\nP(1,y):\\qquad f(f(y)-f(1))=2f(1)+y\\implies f\\text{ surjective}\n$$\nThus let $a,b\\in\\mathbb{R}$ be such that $f(a)=0$ and $f(b)=1$.\n$$\nP(a,b):\\qquad 0=f(af(b)-f(a))=ab\n$$\nTherefore either $a=0$ or $b=0$.</p>\n\n<p><strong>Case 1:</strong> $a=0$\n$$\nP(1,1):\\qquad 0=2f(1)+1\\implies f(1)=-\\frac12\n$$\nThus if we reconsider $P(1,y)$ we obtain $f(f(y)+1/2)=y-1$. Hence\n$$\nP(1,f(y)+1/2):\\qquad f(y-1/2)=f(f(f(y)+1/2)+1/2)=f(y)+1/2-1=f(y)-1/2\n$$\nand with $y=f(x)+1/2$ in the above equation:\n$$\nf(f(x))=f(f(x)+1/2)-1/2=x-3/2\n$$\nBut then $0=f(f(0))=-3/2$, a contradiction.</p>\n\n<p><strong>Case 2:</strong> $b=0$</p>\n\n<p>Then again $P(1,1)$ gives $f(0)=1$. Therefore if we reconsider $P(1,y)$ we get $f(f(y))=y\\tag{1}\\label{1}$ and thus $f$ is bijective. 
Now\n$$\nP(x,0):\\qquad f(x-f(x))=2f(x)\\tag{2}\\label{2}\n$$\n$$\nP(f(x),0):\\qquad f(f(x)-x)=2x\\\\\\stackrel{f()}\\implies f(x)-x\\stackrel{\\eqref{1}}=f(f(f(x)-x))=f(2x)\\tag{3}\\label{3}\n$$\n$$\nP(f(x),1):\\qquad f(-x)\\stackrel{\\eqref{1}}=f(-f(f(x)))\\stackrel{P(f(x),1)}=2f(f(x))+f(x)\\stackrel{\\eqref{1}}=2x+f(x)\\tag{4}\\label{4}\n$$\nNow $f(-1)\\stackrel{\\eqref{4}}=2+f(1)=2$ and thus $f(2)=f(f(-1))\\stackrel{\\eqref{1}}=-1$. Finally we take \n$$\nP(x,2):f(-x-f(x))=2f(x)+2x\n$$\nBut $f(-x-f(x))\\stackrel{\\eqref{4}}=2(x+f(x))+f(x+f(x))$ and thus the above equation simplifies to $f(x+f(x))=0$. But $f$ is injective and we know already that $f(1)=0$, so\n$$\nx+f(x)=1\\implies f(x)=1-x\\qquad\\forall x\\in\\mathbb{R}\n$$\nSubstituting this back into the original equation we see that this is indeed a solution.</p>\n" }, { "answer_id": 2085768, "score": 4, "text": "<p>Fact 1: $f$ has a root. To see this, take $x \\not = 0$ and $y = \\frac{-2f(x)}{x}$. </p>\n\n<p>Fact 2: If $f(c) = 0$, then $c = \\pm\\sqrt{f(0)}$. To see this, set $x=y=c$.</p>\n\n<p>Fact 3: $f(0) = \\pm 1$. To see this, let $c$ be a root and set $x=c, y=0$. Then, $f(cf(0)) = 0 \\implies cf(0)=\\pm\\sqrt{f(0)}$ but $c = \\pm\\sqrt{f(0)}$.</p>\n\n<p>Fact 4: $f(0) = 1$. If $f(0) = -1$, then $x=y=0$ gives $f(1) = -2$. Then $x=1,y=0$ gives $f(-1-(-2))=2(-2) = -4$, a contradiction.</p>\n\n<p>Fact 5: $f(-1) = 2$ and $f(1) = 0$. The first equality follows from $x=0,y=0$. The second then comes from Fact 1, Fact 2, and $f(-1)\\not = 0$.</p>\n\n<p>Fact 6: $f(f(x)) = x$ for all $x \\in \\mathbb{R}$. This follows from setting $x=1$.</p>\n\n<p>Fact 7: $f(-a) = 2a+f(a)$ for all $a \\in \\mathbb{R}$. This follows from $x=f(a),y=1$ and noting Fact 6 implies $f$ is surjective.</p>\n\n<p>Fact 8: $f(-x-f(x)) = 2x+2f(x)$. This follows from $y=2$.</p>\n\n<p>Fact 9: $f(-x-f(x)) = 2x+2f(x)+f(x+f(x))$. This follows from letting $a = x+f(x)$ in Fact 7.</p>\n\n<p>Fact 10: $f(x) = -x+1$. 
This follows from Facts 8,9 , Fact 6 showing $f$ is injective, and Fact 5, which gives $f(1) = 0$.</p>\n" }, { "answer_id": 3687369, "score": 0, "text": "<p><span class=\"math-container\">$$P(x,y) \\implies f(xf(y)−f(x))=2f(x)+xy$$</span>\n<span class=\"math-container\">$$P(1,y) \\implies f(f(y)-f(1))=2f(1)+y \\implies \\text{Surjectivity}$$</span>\nBecause <span class=\"math-container\">$2f(1)+y$</span> can attain any real value. 
Then there exists a real number <span class=\"math-container\">$a$</span> such that <span class=\"math-container\">$f(a)=0$</span>\n<span class=\"math-container\">$$P(a,a) \\implies a=\\pm \\sqrt{f(0)}$$</span>\n<span class=\"math-container\">$$f(xf(1)-f(x))=2f(x)+x$$</span>\nand there exists a real number <span class=\"math-container\">$c$</span> such that <span class=\"math-container\">$f(c)=1$</span>\n<span class=\"math-container\">$$P(x,c) \\implies f(x-f(x))=2f(x)+xc \\tag{1}$$</span>\nLet <span class=\"math-container\">$b \\ne a \\implies f(b) \\ne 0$</span> be a real number,\n<span class=\"math-container\">$$P(x/f(b),b) \\implies f(x-f(x))=2f(x)+xb/f(b) \\tag{2}$$</span>\nCombining <span class=\"math-container\">$(1)$</span> with <span class=\"math-container\">$(2)$</span>, we get\n<span class=\"math-container\">$$f(b)=b/c, \\ \\ \\ \\forall \\ b \\ne a, \\ c \\ne 0$$</span>\nSubstituting that in the original equation, we get\n<span class=\"math-container\">$$xy/c^2-x/c^2=2x/c+xy$$</span>\nwhich implies that no such <span class=\"math-container\">$c \\ne 0$</span> exists \n<span class=\"math-container\">$$\\implies c=0 \\implies f(0)=1 \\implies a= \\pm 1$$</span>\nCase 1: <span class=\"math-container\">$a=1 \\ \\ (f(1)=0)$</span>\n<span class=\"math-container\">$$P(1,y) \\implies f(f(y))=y \\tag{3}$$</span>\n<span class=\"math-container\">$$P(x,1) \\implies f(-f(x))=2f(x)+x $$</span>\n<span class=\"math-container\">$$\\overset{(3)}{\\implies} f(-x)=2x+f(x) \\tag{4}$$</span>\n<span class=\"math-container\">$$\\overset{x=1}{\\implies} f(-1)=2$$</span>\n<span class=\"math-container\">$$P(-1,x) \\implies f(-f(x)-2)=4-x \\overset{(3)}{\\implies} f(-x-2)=4-f(x) \\overset{(4)}{\\implies} f(x-2)=4-2x-f(x)$$</span>\nSo, by strong induction, we have for all integers <span class=\"math-container\">$k$</span>, \n<span class=\"math-container\">$$f(k)=1-k$$</span>\nand to prove it we put <span class=\"math-container\">$x=k+1$</span> in <span 
class=\"math-container\">$f(x-2)=4-2x-f(x)$</span> getting\n<span class=\"math-container\">$$f(k-1)=2-2k-f(k+1) \\iff f(k+1)=2-f(k-1)-2k=2-(1-(k-1))-2k=-k$$</span>\nSo, it's proven. Now, let <span class=\"math-container\">$k$</span> be a positive integer,\n<span class=\"math-container\">$$P(x,k) \\implies f(x-xk-f(x))=2f(x)+xk$$</span>\n<span class=\"math-container\">$$P(x,0) \\implies f(x-f(x))=2f(x)$$</span>\nSo, we combine the above two equations to get\n<span class=\"math-container\">$$f((x-f(x))-xk)=f(x-f(x))+xk$$</span>\nDue to the surjectivity, <span class=\"math-container\">$x-f(x)$</span> can attain any real value, and so does <span class=\"math-container\">$xk$</span>, so we set <span class=\"math-container\">$x-f(x)=u$</span> and <span class=\"math-container\">$-xk=v$</span> getting\n<span class=\"math-container\">$$f(u+v)=f(u)-v \\tag{5}$$</span>\n<span class=\"math-container\">$$\\implies f(xf(y)-f(x))=f(xf(y))+f(x) \\overset{(3), P(x,y)}{\\implies} f(xf(y))=f(x)+xy$$</span>\n<span class=\"math-container\">$$ \\overset{(3), y=f(y)}{\\implies} f(xy)=f(x)+xf(y)$$</span>\n<span class=\"math-container\">$$\\overset{x \\mapsto x+1}{\\implies}f(xy+y)=f(x+1)+xf(y)+f(y)\\overset{(5)}{\\implies} f(xy)-y=f(x)-1+xf(y)+f(y)$$</span>\n<span class=\"math-container\">$$\\overset{x=1}{\\implies} f(y)-y=f(1)-1+f(y)+f(y) \\iff f(y)=1-y \\ \\ \\ \\Box.$$</span>\nCase 2: <span class=\"math-container\">$a=-1$</span>\n<span class=\"math-container\">$$P(-1,x) \\implies f(-f(x))=-x$$</span>\n<span class=\"math-container\">$$P(x,-1) \\implies f(-f(x))=2f(x)-x$$</span>\nSo <span class=\"math-container\">$f(x)=0$</span> which doesn't work in the original equation.</p>\n" }, { "answer_id": 2085809, "score": 8, "text": "<p>This is a really tough problem, I like it very much! I tried to make it clear where I used what, but nevertheless its a bit messy. 
Feel free to ask if something is unclear!</p>\n\n<p>Let $f$ be a solution of the equation and let $P(x,y)$ e the assertion $f(xf(y)-f(x))=2f(x)+xy$\nThen\n$$\nP(1,y):\\qquad f(f(y)-f(1))=2f(1)+y\\implies f\\text{ surjective}\n$$\nThus let $a,b\\in\\mathbb{R}$ be such that $f(a)=0$ and $f(b)=1$.\n$$\nP(a,b):\\qquad 0=f(af(b)-f(a))=ab\n$$\nTherefore either $a=0$ or $b=0$.</p>\n\n<p><strong>Case 1 :</strong> $a=0$\n$$\nP(1,1):\\qquad 0=2f(1)+1\\implies f(1)=-\\frac12\n$$\nThus if we reconsider $P(1,y)$ we obtain $f(f(y)+1/2)=y-1$. Hence\n$$\nP(1,f(y)+1/2):\\qquad f(y-1/2)=f(f(f(y)+1/2)+1/2)=f(y)+1/2-1=f(y)-1/2\n$$\nand with $y=f(x)+1/2$ in the above equation:\n$$\nf(f(x))=f(f(x)+1/2)-1/2=x-3/2\n$$\nBut thus $0=f(f(0))=-3/2$, contradiction.</p>\n\n<p><strong>Case 2 :</strong> $b=0$</p>\n\n<p>Then again $P(1,1)$ gives $f(0)=1$. Therefore if we reconsider $P(1,y)$ we get $f(f(y))=y\\tag{1}\\label{1}$ and thus $f$ is bijective. Now\n$$\nP(x,0):\\qquad f(x-f(x))=2f(x)\\tag{2}\\label{2}\n$$\n$$\nP(f(x),0):\\qquad f(f(x)-x)=2x\\\\\\stackrel{f()}\\implies f(x)-x\\stackrel{\\eqref{1}}=f(f(f(x)-x))=f(2x)\\tag{3}\\label{3}\n$$\n$$\nP(f(x),1):\\qquad f(-x)\\stackrel{\\eqref{1}}=f(-f(f(x)))\\stackrel{P(f(x),1)}=2f(f(x))+f(x)\\stackrel{\\eqref{1}}=2x+f(x)\\tag{4}\\label{4}\n$$\nNow $f(-1)\\stackrel{\\eqref{4}}=2+f(1)=2$ and thus $f(2)=f(f(-1))\\stackrel{\\eqref{1}}=-1$. Finally we take \n$$\nP(x,2):f(-x-f(x))=2f(x)+2x\n$$\nBut $f(-x-f(x))\\stackrel{\\eqref{4}}=2(x+f(x))+f(x+f(x))$ and thus the above equation simplifies to $f(x+f(x))=0$. But $f$ is injective and we know already that $f(1)=0$, so\n$$\nx+f(x)=1\\implies f(x)=1-x\\qquad\\forall x\\in\\mathbb{R}\n$$\nSubstituting this back into the original equation we see that this is indeed a solution.</p>\n" }, { "answer_id": 2085768, "score": 4, "text": "<p>Fact 1: $f$ has a root. To see this, take $x \\not = 0$ and $y = \\frac{-2f(x)}{x}$. </p>\n\n<p>Fact 2: If $f(c) = 0$, then $c = \\pm\\sqrt{f(0)}$. 
To see this, set $x=y=c$.</p>\n\n<p>Fact 3: $f(0) = \\pm 1$. To see this, let $c$ be a root and set $x=c, y=0$. Then, $f(cf(0)) = 0 \\implies cf(0)=\\pm\\sqrt{f(0)}$ but $c = \\pm\\sqrt{f(0)}$.</p>\n\n<p>Fact 4: $f(0) = 1$. If $f(0) = -1$, then $x=y=0$ gives $f(1) = -2$. Then $x=1,y=0$ gives $f(-1-(-2))=2(-2) = -4$, a contradiction.</p>\n\n<p>Fact 5: $f(-1) = 2$ and $f(1) = 0$. The first equality follows from $x=0,y=0$. The second then comes from Fact 1, Fact 2, and $f(-1)\\not = 0$.</p>\n\n<p>Fact 6: $f(f(x)) = x$ for all $x \\in \\mathbb{R}$. This follows from setting $x=1$.</p>\n\n<p>Fact 7: $f(-a) = 2a+f(a)$ for all $a \\in \\mathbb{R}$. This follows from $x=f(a),y=1$ and noting Fact 6 implies $f$ is surjective.</p>\n\n<p>Fact 8: $f(-x-f(x)) = 2x+2f(x)$. This follows from $y=2$.</p>\n\n<p>Fact 9: $f(-x-f(x)) = 2x+2f(x)+f(x+f(x))$. This follows from letting $a = x+f(x)$ in Fact 7.</p>\n\n<p>Fact 10: $f(x) = -x+1$. This follows from Facts 8,9 , Fact 6 showing $f$ is injective, and Fact 5, which gives $f(1) = 0$.</p>\n" }, { "answer_id": 2085773, "score": 2, "text": "<p>Let's investigate first the affine solutions $x\\mapsto ax+b$. The equation rewrites as</p>\n\n<p>$$a(axy+bx-ax-b)+b=2ax+2b+xy\\iff (a^2-1)xy+a(b-a-2)x-b(a+1)=0$$</p>\n\n<p>Identifying the coefficients of the terms of the two sides of this polynomial equations we get that $x\\mapsto -x+1$ is the only affine solution to the problem.</p>\n\n<p>More painful but with the same method we prove that the only polynomial (power series solution) is $x\\mapsto -x+1$</p>\n\n<p>That's all I have for the moment</p>\n" }, { "answer_id": 3687369, "score": 0, "text": "<p><span class=\"math-container\">$$P(x,y) \\implies f(xf(y)−f(x))=2f(x)+xy$$</span>\n<span class=\"math-container\">$$P(1,y) \\implies f(f(y)-f(1))=2f(1)+y \\implies \\text{Surjectivity}$$</span>\nBecause <span class=\"math-container\">$2f(1)+y$</span> can attain any real value. 
Then there exists a real number <span class=\"math-container\">$a$</span> such that <span class=\"math-container\">$f(a)=0$</span>\n<span class=\"math-container\">$$P(a,a) \\implies a=\\pm \\sqrt{f(0)}$$</span>\n<span class=\"math-container\">$$f(xf(1)-f(x))=2f(x)+x$$</span>\nand there exists a real number <span class=\"math-container\">$c$</span> such that <span class=\"math-container\">$f(c)=1$</span>\n<span class=\"math-container\">$$P(x,c) \\implies f(x-f(x))=2f(x)+xc \\tag{1}$$</span>\nLet <span class=\"math-container\">$b \\ne a \\implies f(b) \\ne 0$</span> be a real number,\n<span class=\"math-container\">$$P(x/f(b),b) \\implies f(x-f(x))=2f(x)+xb/f(b) \\tag{2}$$</span>\nCombining <span class=\"math-container\">$(1)$</span> with <span class=\"math-container\">$(2)$</span>, we get\n<span class=\"math-container\">$$f(b)=b/c, \\ \\ \\ \\forall \\ b \\ne a, \\ c \\ne 0$$</span>\nSubstituting that in the original equation, we get\n<span class=\"math-container\">$$xy/c^2-x/c^2=2x/c+xy$$</span>\nwhich implies that no such <span class=\"math-container\">$c \\ne 0$</span> exists \n<span class=\"math-container\">$$\\implies c=0 \\implies f(0)=1 \\implies a= \\pm 1$$</span>\nCase 1: <span class=\"math-container\">$a=1 \\ \\ (f(1)=0)$</span>\n<span class=\"math-container\">$$P(1,y) \\implies f(f(y))=y \\tag{3}$$</span>\n<span class=\"math-container\">$$P(x,1) \\implies f(-f(x))=2f(x)+x $$</span>\n<span class=\"math-container\">$$\\overset{(3)}{\\implies} f(-x)=2x+f(x) \\tag{4}$$</span>\n<span class=\"math-container\">$$\\overset{x=1}{\\implies} f(-1)=2$$</span>\n<span class=\"math-container\">$$P(-1,x) \\implies f(-f(x)-2)=4-x \\overset{(3)}{\\implies} f(-x-2)=4-f(x) \\overset{(4)}{\\implies} f(x-2)=4-2x-f(x)$$</span>\nSo, by strong induction, we have for all integers <span class=\"math-container\">$k$</span>, \n<span class=\"math-container\">$$f(k)=1-k$$</span>\nand to prove it we put <span class=\"math-container\">$x=k+1$</span> in <span 
class=\"math-container\">$f(x-2)=4-2x-f(x)$</span> getting\n<span class=\"math-container\">$$f(k-1)=2-2k-f(k+1) \\iff f(k+1)=2-f(k-1)-2k=2-(1-(k-1))-2k=-k$$</span>\nSo, it's proven. Now, let <span class=\"math-container\">$k$</span> be a positive integer,\n<span class=\"math-container\">$$P(x,k) \\implies f(x-xk-f(x))=2f(x)+xk$$</span>\n<span class=\"math-container\">$$P(x,0) \\implies f(x-f(x))=2f(x)$$</span>\nSo, we combine the above two equations to get\n<span class=\"math-container\">$$f((x-f(x))-xk)=f(x-f(x))+xk$$</span>\nDue to the surjectivity, <span class=\"math-container\">$x-f(x)$</span> can attain any real value, and so does <span class=\"math-container\">$xk$</span>, so we set <span class=\"math-container\">$x-f(x)=u$</span> and <span class=\"math-container\">$-xk=v$</span> getting\n<span class=\"math-container\">$$f(u+v)=f(u)-v \\tag{5}$$</span>\n<span class=\"math-container\">$$\\implies f(xf(y)-f(x))=f(xf(y))+f(x) \\overset{(3), P(x,y)}{\\implies} f(xf(y))=f(x)+xy$$</span>\n<span class=\"math-container\">$$ \\overset{(3), y=f(y)}{\\implies} f(xy)=f(x)+xf(y)$$</span>\n<span class=\"math-container\">$$\\overset{x \\mapsto x+1}{\\implies}f(xy+y)=f(x+1)+xf(y)+f(y)\\overset{(5)}{\\implies} f(xy)-y=f(x)-1+xf(y)+f(y)$$</span>\n<span class=\"math-container\">$$\\overset{x=1}{\\implies} f(y)-y=f(1)-1+f(y)+f(y) \\iff f(y)=1-y \\ \\ \\ \\Box.$$</span>\nCase 2: <span class=\"math-container\">$a=-1$</span>\n<span class=\"math-container\">$$P(-1,x) \\implies f(-f(x))=-x$$</span>\n<span class=\"math-container\">$$P(x,-1) \\implies f(-f(x))=2f(x)-x$$</span>\nSo <span class=\"math-container\">$f(x)=0$</span> which doesn't work in the original equation.</p>\n" }, { "answer_id": 2085809, "score": 8, "text": "<p>This is a really tough problem, I like it very much! I tried to make it clear where I used what, but nevertheless its a bit messy. 
Feel free to ask if something is unclear!</p>\n\n<p>Let $f$ be a solution of the equation and let $P(x,y)$ e the assertion $f(xf(y)-f(x))=2f(x)+xy$\nThen\n$$\nP(1,y):\\qquad f(f(y)-f(1))=2f(1)+y\\implies f\\text{ surjective}\n$$\nThus let $a,b\\in\\mathbb{R}$ be such that $f(a)=0$ and $f(b)=1$.\n$$\nP(a,b):\\qquad 0=f(af(b)-f(a))=ab\n$$\nTherefore either $a=0$ or $b=0$.</p>\n\n<p><strong>Case 1 :</strong> $a=0$\n$$\nP(1,1):\\qquad 0=2f(1)+1\\implies f(1)=-\\frac12\n$$\nThus if we reconsider $P(1,y)$ we obtain $f(f(y)+1/2)=y-1$. Hence\n$$\nP(1,f(y)+1/2):\\qquad f(y-1/2)=f(f(f(y)+1/2)+1/2)=f(y)+1/2-1=f(y)-1/2\n$$\nand with $y=f(x)+1/2$ in the above equation:\n$$\nf(f(x))=f(f(x)+1/2)-1/2=x-3/2\n$$\nBut thus $0=f(f(0))=-3/2$, contradiction.</p>\n\n<p><strong>Case 2 :</strong> $b=0$</p>\n\n<p>Then again $P(1,1)$ gives $f(0)=1$. Therefore if we reconsider $P(1,y)$ we get $f(f(y))=y\\tag{1}\\label{1}$ and thus $f$ is bijective. Now\n$$\nP(x,0):\\qquad f(x-f(x))=2f(x)\\tag{2}\\label{2}\n$$\n$$\nP(f(x),0):\\qquad f(f(x)-x)=2x\\\\\\stackrel{f()}\\implies f(x)-x\\stackrel{\\eqref{1}}=f(f(f(x)-x))=f(2x)\\tag{3}\\label{3}\n$$\n$$\nP(f(x),1):\\qquad f(-x)\\stackrel{\\eqref{1}}=f(-f(f(x)))\\stackrel{P(f(x),1)}=2f(f(x))+f(x)\\stackrel{\\eqref{1}}=2x+f(x)\\tag{4}\\label{4}\n$$\nNow $f(-1)\\stackrel{\\eqref{4}}=2+f(1)=2$ and thus $f(2)=f(f(-1))\\stackrel{\\eqref{1}}=-1$. Finally we take \n$$\nP(x,2):f(-x-f(x))=2f(x)+2x\n$$\nBut $f(-x-f(x))\\stackrel{\\eqref{4}}=2(x+f(x))+f(x+f(x))$ and thus the above equation simplifies to $f(x+f(x))=0$. But $f$ is injective and we know already that $f(1)=0$, so\n$$\nx+f(x)=1\\implies f(x)=1-x\\qquad\\forall x\\in\\mathbb{R}\n$$\nSubstituting this back into the original equation we see that this is indeed a solution.</p>\n" }, { "answer_id": 2085768, "score": 4, "text": "<p>Fact 1: $f$ has a root. To see this, take $x \\not = 0$ and $y = \\frac{-2f(x)}{x}$. </p>\n\n<p>Fact 2: If $f(c) = 0$, then $c = \\pm\\sqrt{f(0)}$. 
To see this, set $x=y=c$.</p>\n\n<p>Fact 3: $f(0) = \\pm 1$. To see this, let $c$ be a root and set $x=c, y=0$. Then, $f(cf(0)) = 0 \\implies cf(0)=\\pm\\sqrt{f(0)}$ but $c = \\pm\\sqrt{f(0)}$.</p>\n\n<p>Fact 4: $f(0) = 1$. If $f(0) = -1$, then $x=y=0$ gives $f(1) = -2$. Then $x=1,y=0$ gives $f(-1-(-2))=2(-2) = -4$, a contradiction.</p>\n\n<p>Fact 5: $f(-1) = 2$ and $f(1) = 0$. The first equality follows from $x=0,y=0$. The second then comes from Fact 1, Fact 2, and $f(-1)\\not = 0$.</p>\n\n<p>Fact 6: $f(f(x)) = x$ for all $x \\in \\mathbb{R}$. This follows from setting $x=1$.</p>\n\n<p>Fact 7: $f(-a) = 2a+f(a)$ for all $a \\in \\mathbb{R}$. This follows from $x=f(a),y=1$ and noting Fact 6 implies $f$ is surjective.</p>\n\n<p>Fact 8: $f(-x-f(x)) = 2x+2f(x)$. This follows from $y=2$.</p>\n\n<p>Fact 9: $f(-x-f(x)) = 2x+2f(x)+f(x+f(x))$. This follows from letting $a = x+f(x)$ in Fact 7.</p>\n\n<p>Fact 10: $f(x) = -x+1$. This follows from Facts 8,9 , Fact 6 showing $f$ is injective, and Fact 5, which gives $f(1) = 0$.</p>\n" }, { "answer_id": 2085773, "score": 2, "text": "<p>Let's investigate first the affine solutions $x\\mapsto ax+b$. The equation rewrites as</p>\n\n<p>$$a(axy+bx-ax-b)+b=2ax+2b+xy\\iff (a^2-1)xy+a(b-a-2)x-b(a+1)=0$$</p>\n\n<p>Identifying the coefficients of the terms of the two sides of this polynomial equations we get that $x\\mapsto -x+1$ is the only affine solution to the problem.</p>\n\n<p>More painful but with the same method we prove that the only polynomial (power series solution) is $x\\mapsto -x+1$</p>\n\n<p>That's all I have for the moment</p>\n" }, { "answer_id": 3687369, "score": 0, "text": "<p><span class=\"math-container\">$$P(x,y) \\implies f(xf(y)−f(x))=2f(x)+xy$$</span>\n<span class=\"math-container\">$$P(1,y) \\implies f(f(y)-f(1))=2f(1)+y \\implies \\text{Surjectivity}$$</span>\nBecause <span class=\"math-container\">$2f(1)+y$</span> can attain any real value. 
Then there exists a real number <span class=\"math-container\">$a$</span> such that <span class=\"math-container\">$f(a)=0$</span>\n<span class=\"math-container\">$$P(a,a) \\implies a=\\pm \\sqrt{f(0)}$$</span>\n<span class=\"math-container\">$$f(xf(1)-f(x))=2f(x)+x$$</span>\nand there exists a real number <span class=\"math-container\">$c$</span> such that <span class=\"math-container\">$f(c)=1$</span>\n<span class=\"math-container\">$$P(x,c) \\implies f(x-f(x))=2f(x)+xc \\tag{1}$$</span>\nLet <span class=\"math-container\">$b \\ne a \\implies f(b) \\ne 0$</span> be a real number,\n<span class=\"math-container\">$$P(x/f(b),b) \\implies f(x-f(x))=2f(x)+xb/f(b) \\tag{2}$$</span>\nCombining <span class=\"math-container\">$(1)$</span> with <span class=\"math-container\">$(2)$</span>, we get\n<span class=\"math-container\">$$f(b)=b/c, \\ \\ \\ \\forall \\ b \\ne a, \\ c \\ne 0$$</span>\nSubstituting that in the original equation, we get\n<span class=\"math-container\">$$xy/c^2-x/c^2=2x/c+xy$$</span>\nwhich implies that no such <span class=\"math-container\">$c \\ne 0$</span> exists \n<span class=\"math-container\">$$\\implies c=0 \\implies f(0)=1 \\implies a= \\pm 1$$</span>\nCase 1: <span class=\"math-container\">$a=1 \\ \\ (f(1)=0)$</span>\n<span class=\"math-container\">$$P(1,y) \\implies f(f(y))=y \\tag{3}$$</span>\n<span class=\"math-container\">$$P(x,1) \\implies f(-f(x))=2f(x)+x $$</span>\n<span class=\"math-container\">$$\\overset{(3)}{\\implies} f(-x)=2x+f(x) \\tag{4}$$</span>\n<span class=\"math-container\">$$\\overset{x=1}{\\implies} f(-1)=2$$</span>\n<span class=\"math-container\">$$P(-1,x) \\implies f(-f(x)-2)=4-x \\overset{(3)}{\\implies} f(-x-2)=4-f(x) \\overset{(4)}{\\implies} f(x-2)=4-2x-f(x)$$</span>\nSo, by strong induction, we have for all integers <span class=\"math-container\">$k$</span>, \n<span class=\"math-container\">$$f(k)=1-k$$</span>\nand to prove it we put <span class=\"math-container\">$x=k+1$</span> in <span 
class=\"math-container\">$f(x-2)=4-2x-f(x)$</span> getting\n<span class=\"math-container\">$$f(k-1)=2-2k-f(k+1) \\iff f(k+1)=2-f(k-1)-2k=2-(1-(k-1))-2k=-k$$</span>\nSo, it's proven. Now, let <span class=\"math-container\">$k$</span> be a positive integer,\n<span class=\"math-container\">$$P(x,k) \\implies f(x-xk-f(x))=2f(x)+xk$$</span>\n<span class=\"math-container\">$$P(x,0) \\implies f(x-f(x))=2f(x)$$</span>\nSo, we combine the above two equations to get\n<span class=\"math-container\">$$f((x-f(x))-xk)=f(x-f(x))+xk$$</span>\nDue to the surjectivity, <span class=\"math-container\">$x-f(x)$</span> can attain any real value, and so does <span class=\"math-container\">$xk$</span>, so we set <span class=\"math-container\">$x-f(x)=u$</span> and <span class=\"math-container\">$-xk=v$</span> getting\n<span class=\"math-container\">$$f(u+v)=f(u)-v \\tag{5}$$</span>\n<span class=\"math-container\">$$\\implies f(xf(y)-f(x))=f(xf(y))+f(x) \\overset{(3), P(x,y)}{\\implies} f(xf(y))=f(x)+xy$$</span>\n<span class=\"math-container\">$$ \\overset{(3), y=f(y)}{\\implies} f(xy)=f(x)+xf(y)$$</span>\n<span class=\"math-container\">$$\\overset{x \\mapsto x+1}{\\implies}f(xy+y)=f(x+1)+xf(y)+f(y)\\overset{(5)}{\\implies} f(xy)-y=f(x)-1+xf(y)+f(y)$$</span>\n<span class=\"math-container\">$$\\overset{x=1}{\\implies} f(y)-y=f(1)-1+f(y)+f(y) \\iff f(y)=1-y \\ \\ \\ \\Box.$$</span>\nCase 2: <span class=\"math-container\">$a=-1$</span>\n<span class=\"math-container\">$$P(-1,x) \\implies f(-f(x))=-x$$</span>\n<span class=\"math-container\">$$P(x,-1) \\implies f(-f(x))=2f(x)-x$$</span>\nSo <span class=\"math-container\">$f(x)=0$</span> which doesn't work in the original equation.</p>\n" }, { "answer_id": 2085809, "score": 8, "text": "<p>This is a really tough problem, I like it very much! I tried to make it clear where I used what, but nevertheless its a bit messy. 
Feel free to ask if something is unclear!</p>\n\n<p>Let $f$ be a solution of the equation and let $P(x,y)$ e the assertion $f(xf(y)-f(x))=2f(x)+xy$\nThen\n$$\nP(1,y):\\qquad f(f(y)-f(1))=2f(1)+y\\implies f\\text{ surjective}\n$$\nThus let $a,b\\in\\mathbb{R}$ be such that $f(a)=0$ and $f(b)=1$.\n$$\nP(a,b):\\qquad 0=f(af(b)-f(a))=ab\n$$\nTherefore either $a=0$ or $b=0$.</p>\n\n<p><strong>Case 1 :</strong> $a=0$\n$$\nP(1,1):\\qquad 0=2f(1)+1\\implies f(1)=-\\frac12\n$$\nThus if we reconsider $P(1,y)$ we obtain $f(f(y)+1/2)=y-1$. Hence\n$$\nP(1,f(y)+1/2):\\qquad f(y-1/2)=f(f(f(y)+1/2)+1/2)=f(y)+1/2-1=f(y)-1/2\n$$\nand with $y=f(x)+1/2$ in the above equation:\n$$\nf(f(x))=f(f(x)+1/2)-1/2=x-3/2\n$$\nBut thus $0=f(f(0))=-3/2$, contradiction.</p>\n\n<p><strong>Case 2 :</strong> $b=0$</p>\n\n<p>Then again $P(1,1)$ gives $f(0)=1$. Therefore if we reconsider $P(1,y)$ we get $f(f(y))=y\\tag{1}\\label{1}$ and thus $f$ is bijective. Now\n$$\nP(x,0):\\qquad f(x-f(x))=2f(x)\\tag{2}\\label{2}\n$$\n$$\nP(f(x),0):\\qquad f(f(x)-x)=2x\\\\\\stackrel{f()}\\implies f(x)-x\\stackrel{\\eqref{1}}=f(f(f(x)-x))=f(2x)\\tag{3}\\label{3}\n$$\n$$\nP(f(x),1):\\qquad f(-x)\\stackrel{\\eqref{1}}=f(-f(f(x)))\\stackrel{P(f(x),1)}=2f(f(x))+f(x)\\stackrel{\\eqref{1}}=2x+f(x)\\tag{4}\\label{4}\n$$\nNow $f(-1)\\stackrel{\\eqref{4}}=2+f(1)=2$ and thus $f(2)=f(f(-1))\\stackrel{\\eqref{1}}=-1$. Finally we take \n$$\nP(x,2):f(-x-f(x))=2f(x)+2x\n$$\nBut $f(-x-f(x))\\stackrel{\\eqref{4}}=2(x+f(x))+f(x+f(x))$ and thus the above equation simplifies to $f(x+f(x))=0$. But $f$ is injective and we know already that $f(1)=0$, so\n$$\nx+f(x)=1\\implies f(x)=1-x\\qquad\\forall x\\in\\mathbb{R}\n$$\nSubstituting this back into the original equation we see that this is indeed a solution.</p>\n" }, { "answer_id": 2085768, "score": 4, "text": "<p>Fact 1: $f$ has a root. To see this, take $x \\not = 0$ and $y = \\frac{-2f(x)}{x}$. </p>\n\n<p>Fact 2: If $f(c) = 0$, then $c = \\pm\\sqrt{f(0)}$. 
To see this, set $x=y=c$.</p>\n\n<p>Fact 3: $f(0) = \\pm 1$. To see this, let $c$ be a root and set $x=c, y=0$. Then, $f(cf(0)) = 0 \\implies cf(0)=\\pm\\sqrt{f(0)}$ but $c = \\pm\\sqrt{f(0)}$.</p>\n\n<p>Fact 4: $f(0) = 1$. If $f(0) = -1$, then $x=y=0$ gives $f(1) = -2$. Then $x=1,y=0$ gives $f(-1-(-2))=2(-2) = -4$, a contradiction.</p>\n\n<p>Fact 5: $f(-1) = 2$ and $f(1) = 0$. The first equality follows from $x=0,y=0$. The second then comes from Fact 1, Fact 2, and $f(-1)\\not = 0$.</p>\n\n<p>Fact 6: $f(f(x)) = x$ for all $x \\in \\mathbb{R}$. This follows from setting $x=1$.</p>\n\n<p>Fact 7: $f(-a) = 2a+f(a)$ for all $a \\in \\mathbb{R}$. This follows from $x=f(a),y=1$ and noting Fact 6 implies $f$ is surjective.</p>\n\n<p>Fact 8: $f(-x-f(x)) = 2x+2f(x)$. This follows from $y=2$.</p>\n\n<p>Fact 9: $f(-x-f(x)) = 2x+2f(x)+f(x+f(x))$. This follows from letting $a = x+f(x)$ in Fact 7.</p>\n\n<p>Fact 10: $f(x) = -x+1$. This follows from Facts 8,9 , Fact 6 showing $f$ is injective, and Fact 5, which gives $f(1) = 0$.</p>\n" }, { "answer_id": 2085773, "score": 2, "text": "<p>Let's investigate first the affine solutions $x\\mapsto ax+b$. The equation rewrites as</p>\n\n<p>$$a(axy+bx-ax-b)+b=2ax+2b+xy\\iff (a^2-1)xy+a(b-a-2)x-b(a+1)=0$$</p>\n\n<p>Identifying the coefficients of the terms of the two sides of this polynomial equations we get that $x\\mapsto -x+1$ is the only affine solution to the problem.</p>\n\n<p>More painful but with the same method we prove that the only polynomial (power series solution) is $x\\mapsto -x+1$</p>\n\n<p>That's all I have for the moment</p>\n" }, { "answer_id": 3687369, "score": 0, "text": "<p><span class=\"math-container\">$$P(x,y) \\implies f(xf(y)−f(x))=2f(x)+xy$$</span>\n<span class=\"math-container\">$$P(1,y) \\implies f(f(y)-f(1))=2f(1)+y \\implies \\text{Surjectivity}$$</span>\nBecause <span class=\"math-container\">$2f(1)+y$</span> can attain any real value. 
Then there exists a real number <span class=\"math-container\">$a$</span> such that <span class=\"math-container\">$f(a)=0$</span>\n<span class=\"math-container\">$$P(a,a) \\implies a=\\pm \\sqrt{f(0)}$$</span>\n<span class=\"math-container\">$$f(xf(1)-f(x))=2f(x)+x$$</span>\nand there exists a real number <span class=\"math-container\">$c$</span> such that <span class=\"math-container\">$f(c)=1$</span>\n<span class=\"math-container\">$$P(x,c) \\implies f(x-f(x))=2f(x)+xc \\tag{1}$$</span>\nLet <span class=\"math-container\">$b \\ne a \\implies f(b) \\ne 0$</span> be a real number,\n<span class=\"math-container\">$$P(x/f(b),b) \\implies f(x-f(x))=2f(x)+xb/f(b) \\tag{2}$$</span>\nCombining <span class=\"math-container\">$(1)$</span> with <span class=\"math-container\">$(2)$</span>, we get\n<span class=\"math-container\">$$f(b)=b/c, \\ \\ \\ \\forall \\ b \\ne a, \\ c \\ne 0$$</span>\nSubstituting that in the original equation, we get\n<span class=\"math-container\">$$xy/c^2-x/c^2=2x/c+xy$$</span>\nwhich implies that no such <span class=\"math-container\">$c \\ne 0$</span> exists \n<span class=\"math-container\">$$\\implies c=0 \\implies f(0)=1 \\implies a= \\pm 1$$</span>\nCase 1: <span class=\"math-container\">$a=1 \\ \\ (f(1)=0)$</span>\n<span class=\"math-container\">$$P(1,y) \\implies f(f(y))=y \\tag{3}$$</span>\n<span class=\"math-container\">$$P(x,1) \\implies f(-f(x))=2f(x)+x $$</span>\n<span class=\"math-container\">$$\\overset{(3)}{\\implies} f(-x)=2x+f(x) \\tag{4}$$</span>\n<span class=\"math-container\">$$\\overset{x=1}{\\implies} f(-1)=2$$</span>\n<span class=\"math-container\">$$P(-1,x) \\implies f(-f(x)-2)=4-x \\overset{(3)}{\\implies} f(-x-2)=4-f(x) \\overset{(4)}{\\implies} f(x-2)=4-2x-f(x)$$</span>\nSo, by strong induction, we have for all integers <span class=\"math-container\">$k$</span>, \n<span class=\"math-container\">$$f(k)=1-k$$</span>\nand to prove it we put <span class=\"math-container\">$x=k+1$</span> in <span 
class=\"math-container\">$f(x-2)=4-2x-f(x)$</span> getting\n<span class=\"math-container\">$$f(k-1)=2-2k-f(k+1) \\iff f(k+1)=2-f(k-1)-2k=2-(1-(k-1))-2k=-k$$</span>\nSo, it's proven. Now, let <span class=\"math-container\">$k$</span> be a positive integer,\n<span class=\"math-container\">$$P(x,k) \\implies f(x-xk-f(x))=2f(x)+xk$$</span>\n<span class=\"math-container\">$$P(x,0) \\implies f(x-f(x))=2f(x)$$</span>\nSo, we combine the above two equations to get\n<span class=\"math-container\">$$f((x-f(x))-xk)=f(x-f(x))+xk$$</span>\nDue to the surjectivity, <span class=\"math-container\">$x-f(x)$</span> can attain any real value, and so does <span class=\"math-container\">$xk$</span>, so we set <span class=\"math-container\">$x-f(x)=u$</span> and <span class=\"math-container\">$-xk=v$</span> getting\n<span class=\"math-container\">$$f(u+v)=f(u)-v \\tag{5}$$</span>\n<span class=\"math-container\">$$\\implies f(xf(y)-f(x))=f(xf(y))+f(x) \\overset{(3), P(x,y)}{\\implies} f(xf(y))=f(x)+xy$$</span>\n<span class=\"math-container\">$$ \\overset{(3), y=f(y)}{\\implies} f(xy)=f(x)+xf(y)$$</span>\n<span class=\"math-container\">$$\\overset{x \\mapsto x+1}{\\implies}f(xy+y)=f(x+1)+xf(y)+f(y)\\overset{(5)}{\\implies} f(xy)-y=f(x)-1+xf(y)+f(y)$$</span>\n<span class=\"math-container\">$$\\overset{x=1}{\\implies} f(y)-y=f(1)-1+f(y)+f(y) \\iff f(y)=1-y \\ \\ \\ \\Box.$$</span>\nCase 2: <span class=\"math-container\">$a=-1$</span>\n<span class=\"math-container\">$$P(-1,x) \\implies f(-f(x))=-x$$</span>\n<span class=\"math-container\">$$P(x,-1) \\implies f(-f(x))=2f(x)-x$$</span>\nSo <span class=\"math-container\">$f(x)=0$</span> which doesn't work in the original equation.</p>\n" }, { "answer_id": 2085809, "score": 8, "text": "<p>This is a really tough problem, I like it very much! I tried to make it clear where I used what, but nevertheless its a bit messy. 
Feel free to ask if something is unclear!</p>\n\n<p>Let $f$ be a solution of the equation and let $P(x,y)$ e the assertion $f(xf(y)-f(x))=2f(x)+xy$\nThen\n$$\nP(1,y):\\qquad f(f(y)-f(1))=2f(1)+y\\implies f\\text{ surjective}\n$$\nThus let $a,b\\in\\mathbb{R}$ be such that $f(a)=0$ and $f(b)=1$.\n$$\nP(a,b):\\qquad 0=f(af(b)-f(a))=ab\n$$\nTherefore either $a=0$ or $b=0$.</p>\n\n<p><strong>Case 1 :</strong> $a=0$\n$$\nP(1,1):\\qquad 0=2f(1)+1\\implies f(1)=-\\frac12\n$$\nThus if we reconsider $P(1,y)$ we obtain $f(f(y)+1/2)=y-1$. Hence\n$$\nP(1,f(y)+1/2):\\qquad f(y-1/2)=f(f(f(y)+1/2)+1/2)=f(y)+1/2-1=f(y)-1/2\n$$\nand with $y=f(x)+1/2$ in the above equation:\n$$\nf(f(x))=f(f(x)+1/2)-1/2=x-3/2\n$$\nBut thus $0=f(f(0))=-3/2$, contradiction.</p>\n\n<p><strong>Case 2 :</strong> $b=0$</p>\n\n<p>Then again $P(1,1)$ gives $f(0)=1$. Therefore if we reconsider $P(1,y)$ we get $f(f(y))=y\\tag{1}\\label{1}$ and thus $f$ is bijective. Now\n$$\nP(x,0):\\qquad f(x-f(x))=2f(x)\\tag{2}\\label{2}\n$$\n$$\nP(f(x),0):\\qquad f(f(x)-x)=2x\\\\\\stackrel{f()}\\implies f(x)-x\\stackrel{\\eqref{1}}=f(f(f(x)-x))=f(2x)\\tag{3}\\label{3}\n$$\n$$\nP(f(x),1):\\qquad f(-x)\\stackrel{\\eqref{1}}=f(-f(f(x)))\\stackrel{P(f(x),1)}=2f(f(x))+f(x)\\stackrel{\\eqref{1}}=2x+f(x)\\tag{4}\\label{4}\n$$\nNow $f(-1)\\stackrel{\\eqref{4}}=2+f(1)=2$ and thus $f(2)=f(f(-1))\\stackrel{\\eqref{1}}=-1$. Finally we take \n$$\nP(x,2):f(-x-f(x))=2f(x)+2x\n$$\nBut $f(-x-f(x))\\stackrel{\\eqref{4}}=2(x+f(x))+f(x+f(x))$ and thus the above equation simplifies to $f(x+f(x))=0$. But $f$ is injective and we know already that $f(1)=0$, so\n$$\nx+f(x)=1\\implies f(x)=1-x\\qquad\\forall x\\in\\mathbb{R}\n$$\nSubstituting this back into the original equation we see that this is indeed a solution.</p>\n" }, { "answer_id": 2085768, "score": 4, "text": "<p>Fact 1: $f$ has a root. To see this, take $x \\not = 0$ and $y = \\frac{-2f(x)}{x}$. </p>\n\n<p>Fact 2: If $f(c) = 0$, then $c = \\pm\\sqrt{f(0)}$. 
To see this, set $x=y=c$.</p>\n\n<p>Fact 3: $f(0) = \\pm 1$. To see this, let $c$ be a root and set $x=c, y=0$. Then, $f(cf(0)) = 0 \\implies cf(0)=\\pm\\sqrt{f(0)}$ but $c = \\pm\\sqrt{f(0)}$.</p>\n\n<p>Fact 4: $f(0) = 1$. If $f(0) = -1$, then $x=y=0$ gives $f(1) = -2$. Then $x=1,y=0$ gives $f(-1-(-2))=2(-2) = -4$, a contradiction.</p>\n\n<p>Fact 5: $f(-1) = 2$ and $f(1) = 0$. The first equality follows from $x=0,y=0$. The second then comes from Fact 1, Fact 2, and $f(-1)\\not = 0$.</p>\n\n<p>Fact 6: $f(f(x)) = x$ for all $x \\in \\mathbb{R}$. This follows from setting $x=1$.</p>\n\n<p>Fact 7: $f(-a) = 2a+f(a)$ for all $a \\in \\mathbb{R}$. This follows from $x=f(a),y=1$ and noting Fact 6 implies $f$ is surjective.</p>\n\n<p>Fact 8: $f(-x-f(x)) = 2x+2f(x)$. This follows from $y=2$.</p>\n\n<p>Fact 9: $f(-x-f(x)) = 2x+2f(x)+f(x+f(x))$. This follows from letting $a = x+f(x)$ in Fact 7.</p>\n\n<p>Fact 10: $f(x) = -x+1$. This follows from Facts 8,9 , Fact 6 showing $f$ is injective, and Fact 5, which gives $f(1) = 0$.</p>\n" }, { "answer_id": 2085773, "score": 2, "text": "<p>Let's investigate first the affine solutions $x\\mapsto ax+b$. The equation rewrites as</p>\n\n<p>$$a(axy+bx-ax-b)+b=2ax+2b+xy\\iff (a^2-1)xy+a(b-a-2)x-b(a+1)=0$$</p>\n\n<p>Identifying the coefficients of the terms of the two sides of this polynomial equations we get that $x\\mapsto -x+1$ is the only affine solution to the problem.</p>\n\n<p>More painful but with the same method we prove that the only polynomial (power series solution) is $x\\mapsto -x+1$</p>\n\n<p>That's all I have for the moment</p>\n" }, { "answer_id": 3687369, "score": 0, "text": "<p><span class=\"math-container\">$$P(x,y) \\implies f(xf(y)−f(x))=2f(x)+xy$$</span>\n<span class=\"math-container\">$$P(1,y) \\implies f(f(y)-f(1))=2f(1)+y \\implies \\text{Surjectivity}$$</span>\nBecause <span class=\"math-container\">$2f(1)+y$</span> can attain any real value. 
Then there exists a real number <span class=\"math-container\">$a$</span> such that <span class=\"math-container\">$f(a)=0$</span>\n<span class=\"math-container\">$$P(a,a) \\implies a=\\pm \\sqrt{f(0)}$$</span>\n<span class=\"math-container\">$$f(xf(1)-f(x))=2f(x)+x$$</span>\nand there exists a real number <span class=\"math-container\">$c$</span> such that <span class=\"math-container\">$f(c)=1$</span>\n<span class=\"math-container\">$$P(x,c) \\implies f(x-f(x))=2f(x)+xc \\tag{1}$$</span>\nLet <span class=\"math-container\">$b \\ne a \\implies f(b) \\ne 0$</span> be a real number,\n<span class=\"math-container\">$$P(x/f(b),b) \\implies f(x-f(x))=2f(x)+xb/f(b) \\tag{2}$$</span>\nCombining <span class=\"math-container\">$(1)$</span> with <span class=\"math-container\">$(2)$</span>, we get\n<span class=\"math-container\">$$f(b)=b/c, \\ \\ \\ \\forall \\ b \\ne a, \\ c \\ne 0$$</span>\nSubstituting that in the original equation, we get\n<span class=\"math-container\">$$xy/c^2-x/c^2=2x/c+xy$$</span>\nwhich implies that no such <span class=\"math-container\">$c \\ne 0$</span> exists \n<span class=\"math-container\">$$\\implies c=0 \\implies f(0)=1 \\implies a= \\pm 1$$</span>\nCase 1: <span class=\"math-container\">$a=1 \\ \\ (f(1)=0)$</span>\n<span class=\"math-container\">$$P(1,y) \\implies f(f(y))=y \\tag{3}$$</span>\n<span class=\"math-container\">$$P(x,1) \\implies f(-f(x))=2f(x)+x $$</span>\n<span class=\"math-container\">$$\\overset{(3)}{\\implies} f(-x)=2x+f(x) \\tag{4}$$</span>\n<span class=\"math-container\">$$\\overset{x=1}{\\implies} f(-1)=2$$</span>\n<span class=\"math-container\">$$P(-1,x) \\implies f(-f(x)-2)=4-x \\overset{(3)}{\\implies} f(-x-2)=4-f(x) \\overset{(4)}{\\implies} f(x-2)=4-2x-f(x)$$</span>\nSo, by strong induction, we have for all integers <span class=\"math-container\">$k$</span>, \n<span class=\"math-container\">$$f(k)=1-k$$</span>\nand to prove it we put <span class=\"math-container\">$x=k+1$</span> in <span 
class=\"math-container\">$f(x-2)=4-2x-f(x)$</span> getting\n<span class=\"math-container\">$$f(k-1)=2-2k-f(k+1) \\iff f(k+1)=2-f(k-1)-2k=2-(1-(k-1))-2k=-k$$</span>\nSo, it's proven. Now, let <span class=\"math-container\">$k$</span> be a positive integer,\n<span class=\"math-container\">$$P(x,k) \\implies f(x-xk-f(x))=2f(x)+xk$$</span>\n<span class=\"math-container\">$$P(x,0) \\implies f(x-f(x))=2f(x)$$</span>\nSo, we combine the above two equations to get\n<span class=\"math-container\">$$f((x-f(x))-xk)=f(x-f(x))+xk$$</span>\nDue to the surjectivity, <span class=\"math-container\">$x-f(x)$</span> can attain any real value, and so does <span class=\"math-container\">$xk$</span>, so we set <span class=\"math-container\">$x-f(x)=u$</span> and <span class=\"math-container\">$-xk=v$</span> getting\n<span class=\"math-container\">$$f(u+v)=f(u)-v \\tag{5}$$</span>\n<span class=\"math-container\">$$\\implies f(xf(y)-f(x))=f(xf(y))+f(x) \\overset{(3), P(x,y)}{\\implies} f(xf(y))=f(x)+xy$$</span>\n<span class=\"math-container\">$$ \\overset{(3), y=f(y)}{\\implies} f(xy)=f(x)+xf(y)$$</span>\n<span class=\"math-container\">$$\\overset{x \\mapsto x+1}{\\implies}f(xy+y)=f(x+1)+xf(y)+f(y)\\overset{(5)}{\\implies} f(xy)-y=f(x)-1+xf(y)+f(y)$$</span>\n<span class=\"math-container\">$$\\overset{x=1}{\\implies} f(y)-y=f(1)-1+f(y)+f(y) \\iff f(y)=1-y \\ \\ \\ \\Box.$$</span>\nCase 2: <span class=\"math-container\">$a=-1$</span>\n<span class=\"math-container\">$$P(-1,x) \\implies f(-f(x))=-x$$</span>\n<span class=\"math-container\">$$P(x,-1) \\implies f(-f(x))=2f(x)-x$$</span>\nSo <span class=\"math-container\">$f(x)=0$</span> which doesn't work in the original equation.</p>\n" }, { "answer_id": 2085809, "score": 8, "text": "<p>This is a really tough problem, I like it very much! I tried to make it clear where I used what, but nevertheless its a bit messy. 
Feel free to ask if something is unclear!</p>\n\n<p>Let $f$ be a solution of the equation and let $P(x,y)$ e the assertion $f(xf(y)-f(x))=2f(x)+xy$\nThen\n$$\nP(1,y):\\qquad f(f(y)-f(1))=2f(1)+y\\implies f\\text{ surjective}\n$$\nThus let $a,b\\in\\mathbb{R}$ be such that $f(a)=0$ and $f(b)=1$.\n$$\nP(a,b):\\qquad 0=f(af(b)-f(a))=ab\n$$\nTherefore either $a=0$ or $b=0$.</p>\n\n<p><strong>Case 1 :</strong> $a=0$\n$$\nP(1,1):\\qquad 0=2f(1)+1\\implies f(1)=-\\frac12\n$$\nThus if we reconsider $P(1,y)$ we obtain $f(f(y)+1/2)=y-1$. Hence\n$$\nP(1,f(y)+1/2):\\qquad f(y-1/2)=f(f(f(y)+1/2)+1/2)=f(y)+1/2-1=f(y)-1/2\n$$\nand with $y=f(x)+1/2$ in the above equation:\n$$\nf(f(x))=f(f(x)+1/2)-1/2=x-3/2\n$$\nBut thus $0=f(f(0))=-3/2$, contradiction.</p>\n\n<p><strong>Case 2 :</strong> $b=0$</p>\n\n<p>Then again $P(1,1)$ gives $f(0)=1$. Therefore if we reconsider $P(1,y)$ we get $f(f(y))=y\\tag{1}\\label{1}$ and thus $f$ is bijective. Now\n$$\nP(x,0):\\qquad f(x-f(x))=2f(x)\\tag{2}\\label{2}\n$$\n$$\nP(f(x),0):\\qquad f(f(x)-x)=2x\\\\\\stackrel{f()}\\implies f(x)-x\\stackrel{\\eqref{1}}=f(f(f(x)-x))=f(2x)\\tag{3}\\label{3}\n$$\n$$\nP(f(x),1):\\qquad f(-x)\\stackrel{\\eqref{1}}=f(-f(f(x)))\\stackrel{P(f(x),1)}=2f(f(x))+f(x)\\stackrel{\\eqref{1}}=2x+f(x)\\tag{4}\\label{4}\n$$\nNow $f(-1)\\stackrel{\\eqref{4}}=2+f(1)=2$ and thus $f(2)=f(f(-1))\\stackrel{\\eqref{1}}=-1$. Finally we take \n$$\nP(x,2):f(-x-f(x))=2f(x)+2x\n$$\nBut $f(-x-f(x))\\stackrel{\\eqref{4}}=2(x+f(x))+f(x+f(x))$ and thus the above equation simplifies to $f(x+f(x))=0$. But $f$ is injective and we know already that $f(1)=0$, so\n$$\nx+f(x)=1\\implies f(x)=1-x\\qquad\\forall x\\in\\mathbb{R}\n$$\nSubstituting this back into the original equation we see that this is indeed a solution.</p>\n" }, { "answer_id": 2085768, "score": 4, "text": "<p>Fact 1: $f$ has a root. To see this, take $x \\not = 0$ and $y = \\frac{-2f(x)}{x}$. </p>\n\n<p>Fact 2: If $f(c) = 0$, then $c = \\pm\\sqrt{f(0)}$. 
To see this, set $x=y=c$.</p>\n\n<p>Fact 3: $f(0) = \\pm 1$. To see this, let $c$ be a root and set $x=c, y=0$. Then, $f(cf(0)) = 0 \\implies cf(0)=\\pm\\sqrt{f(0)}$ but $c = \\pm\\sqrt{f(0)}$.</p>\n\n<p>Fact 4: $f(0) = 1$. If $f(0) = -1$, then $x=y=0$ gives $f(1) = -2$. Then $x=1,y=0$ gives $f(-1-(-2))=2(-2) = -4$, a contradiction.</p>\n\n<p>Fact 5: $f(-1) = 2$ and $f(1) = 0$. The first equality follows from $x=0,y=0$. The second then comes from Fact 1, Fact 2, and $f(-1)\\not = 0$.</p>\n\n<p>Fact 6: $f(f(x)) = x$ for all $x \\in \\mathbb{R}$. This follows from setting $x=1$.</p>\n\n<p>Fact 7: $f(-a) = 2a+f(a)$ for all $a \\in \\mathbb{R}$. This follows from $x=f(a),y=1$ and noting Fact 6 implies $f$ is surjective.</p>\n\n<p>Fact 8: $f(-x-f(x)) = 2x+2f(x)$. This follows from $y=2$.</p>\n\n<p>Fact 9: $f(-x-f(x)) = 2x+2f(x)+f(x+f(x))$. This follows from letting $a = x+f(x)$ in Fact 7.</p>\n\n<p>Fact 10: $f(x) = -x+1$. This follows from Facts 8,9 , Fact 6 showing $f$ is injective, and Fact 5, which gives $f(1) = 0$.</p>\n" }, { "answer_id": 2085773, "score": 2, "text": "<p>Let's investigate first the affine solutions $x\\mapsto ax+b$. The equation rewrites as</p>\n\n<p>$$a(axy+bx-ax-b)+b=2ax+2b+xy\\iff (a^2-1)xy+a(b-a-2)x-b(a+1)=0$$</p>\n\n<p>Identifying the coefficients of the terms of the two sides of this polynomial equations we get that $x\\mapsto -x+1$ is the only affine solution to the problem.</p>\n\n<p>More painful but with the same method we prove that the only polynomial (power series solution) is $x\\mapsto -x+1$</p>\n\n<p>That's all I have for the moment</p>\n" }, { "answer_id": 3687369, "score": 0, "text": "<p><span class=\"math-container\">$$P(x,y) \\implies f(xf(y)−f(x))=2f(x)+xy$$</span>\n<span class=\"math-container\">$$P(1,y) \\implies f(f(y)-f(1))=2f(1)+y \\implies \\text{Surjectivity}$$</span>\nBecause <span class=\"math-container\">$2f(1)+y$</span> can attain any real value. 
Then there exists a real number <span class=\"math-container\">$a$</span> such that <span class=\"math-container\">$f(a)=0$</span>\n<span class=\"math-container\">$$P(a,a) \\implies a=\\pm \\sqrt{f(0)}$$</span>\n<span class=\"math-container\">$$f(xf(1)-f(x))=2f(x)+x$$</span>\nand there exists a real number <span class=\"math-container\">$c$</span> such that <span class=\"math-container\">$f(c)=1$</span>\n<span class=\"math-container\">$$P(x,c) \\implies f(x-f(x))=2f(x)+xc \\tag{1}$$</span>\nLet <span class=\"math-container\">$b \\ne a \\implies f(b) \\ne 0$</span> be a real number,\n<span class=\"math-container\">$$P(x/f(b),b) \\implies f(x-f(x))=2f(x)+xb/f(b) \\tag{2}$$</span>\nCombining <span class=\"math-container\">$(1)$</span> with <span class=\"math-container\">$(2)$</span>, we get\n<span class=\"math-container\">$$f(b)=b/c, \\ \\ \\ \\forall \\ b \\ne a, \\ c \\ne 0$$</span>\nSubstituting that in the original equation, we get\n<span class=\"math-container\">$$xy/c^2-x/c^2=2x/c+xy$$</span>\nwhich implies that no such <span class=\"math-container\">$c \\ne 0$</span> exists \n<span class=\"math-container\">$$\\implies c=0 \\implies f(0)=1 \\implies a= \\pm 1$$</span>\nCase 1: <span class=\"math-container\">$a=1 \\ \\ (f(1)=0)$</span>\n<span class=\"math-container\">$$P(1,y) \\implies f(f(y))=y \\tag{3}$$</span>\n<span class=\"math-container\">$$P(x,1) \\implies f(-f(x))=2f(x)+x $$</span>\n<span class=\"math-container\">$$\\overset{(3)}{\\implies} f(-x)=2x+f(x) \\tag{4}$$</span>\n<span class=\"math-container\">$$\\overset{x=1}{\\implies} f(-1)=2$$</span>\n<span class=\"math-container\">$$P(-1,x) \\implies f(-f(x)-2)=4-x \\overset{(3)}{\\implies} f(-x-2)=4-f(x) \\overset{(4)}{\\implies} f(x-2)=4-2x-f(x)$$</span>\nSo, by strong induction, we have for all integers <span class=\"math-container\">$k$</span>, \n<span class=\"math-container\">$$f(k)=1-k$$</span>\nand to prove it we put <span class=\"math-container\">$x=k+1$</span> in <span 
class=\"math-container\">$f(x-2)=4-2x-f(x)$</span> getting\n<span class=\"math-container\">$$f(k-1)=2-2k-f(k+1) \\iff f(k+1)=2-f(k-1)-2k=2-(1-(k-1))-2k=-k$$</span>\nSo, it's proven. Now, let <span class=\"math-container\">$k$</span> be a positive integer,\n<span class=\"math-container\">$$P(x,k) \\implies f(x-xk-f(x))=2f(x)+xk$$</span>\n<span class=\"math-container\">$$P(x,0) \\implies f(x-f(x))=2f(x)$$</span>\nSo, we combine the above two equations to get\n<span class=\"math-container\">$$f((x-f(x))-xk)=f(x-f(x))+xk$$</span>\nDue to the surjectivity, <span class=\"math-container\">$x-f(x)$</span> can attain any real value, and so does <span class=\"math-container\">$xk$</span>, so we set <span class=\"math-container\">$x-f(x)=u$</span> and <span class=\"math-container\">$-xk=v$</span> getting\n<span class=\"math-container\">$$f(u+v)=f(u)-v \\tag{5}$$</span>\n<span class=\"math-container\">$$\\implies f(xf(y)-f(x))=f(xf(y))+f(x) \\overset{(3), P(x,y)}{\\implies} f(xf(y))=f(x)+xy$$</span>\n<span class=\"math-container\">$$ \\overset{(3), y=f(y)}{\\implies} f(xy)=f(x)+xf(y)$$</span>\n<span class=\"math-container\">$$\\overset{x \\mapsto x+1}{\\implies}f(xy+y)=f(x+1)+xf(y)+f(y)\\overset{(5)}{\\implies} f(xy)-y=f(x)-1+xf(y)+f(y)$$</span>\n<span class=\"math-container\">$$\\overset{x=1}{\\implies} f(y)-y=f(1)-1+f(y)+f(y) \\iff f(y)=1-y \\ \\ \\ \\Box.$$</span>\nCase 2: <span class=\"math-container\">$a=-1$</span>\n<span class=\"math-container\">$$P(-1,x) \\implies f(-f(x))=-x$$</span>\n<span class=\"math-container\">$$P(x,-1) \\implies f(-f(x))=2f(x)-x$$</span>\nSo <span class=\"math-container\">$f(x)=0$</span> which doesn't work in the original equation.</p>\n" }, { "answer_id": 2085809, "score": 8, "text": "<p>This is a really tough problem, I like it very much! I tried to make it clear where I used what, but nevertheless its a bit messy. 
Feel free to ask if something is unclear!</p>\n\n<p>Let $f$ be a solution of the equation and let $P(x,y)$ e the assertion $f(xf(y)-f(x))=2f(x)+xy$\nThen\n$$\nP(1,y):\\qquad f(f(y)-f(1))=2f(1)+y\\implies f\\text{ surjective}\n$$\nThus let $a,b\\in\\mathbb{R}$ be such that $f(a)=0$ and $f(b)=1$.\n$$\nP(a,b):\\qquad 0=f(af(b)-f(a))=ab\n$$\nTherefore either $a=0$ or $b=0$.</p>\n\n<p><strong>Case 1 :</strong> $a=0$\n$$\nP(1,1):\\qquad 0=2f(1)+1\\implies f(1)=-\\frac12\n$$\nThus if we reconsider $P(1,y)$ we obtain $f(f(y)+1/2)=y-1$. Hence\n$$\nP(1,f(y)+1/2):\\qquad f(y-1/2)=f(f(f(y)+1/2)+1/2)=f(y)+1/2-1=f(y)-1/2\n$$\nand with $y=f(x)+1/2$ in the above equation:\n$$\nf(f(x))=f(f(x)+1/2)-1/2=x-3/2\n$$\nBut thus $0=f(f(0))=-3/2$, contradiction.</p>\n\n<p><strong>Case 2 :</strong> $b=0$</p>\n\n<p>Then again $P(1,1)$ gives $f(0)=1$. Therefore if we reconsider $P(1,y)$ we get $f(f(y))=y\\tag{1}\\label{1}$ and thus $f$ is bijective. Now\n$$\nP(x,0):\\qquad f(x-f(x))=2f(x)\\tag{2}\\label{2}\n$$\n$$\nP(f(x),0):\\qquad f(f(x)-x)=2x\\\\\\stackrel{f()}\\implies f(x)-x\\stackrel{\\eqref{1}}=f(f(f(x)-x))=f(2x)\\tag{3}\\label{3}\n$$\n$$\nP(f(x),1):\\qquad f(-x)\\stackrel{\\eqref{1}}=f(-f(f(x)))\\stackrel{P(f(x),1)}=2f(f(x))+f(x)\\stackrel{\\eqref{1}}=2x+f(x)\\tag{4}\\label{4}\n$$\nNow $f(-1)\\stackrel{\\eqref{4}}=2+f(1)=2$ and thus $f(2)=f(f(-1))\\stackrel{\\eqref{1}}=-1$. Finally we take \n$$\nP(x,2):f(-x-f(x))=2f(x)+2x\n$$\nBut $f(-x-f(x))\\stackrel{\\eqref{4}}=2(x+f(x))+f(x+f(x))$ and thus the above equation simplifies to $f(x+f(x))=0$. But $f$ is injective and we know already that $f(1)=0$, so\n$$\nx+f(x)=1\\implies f(x)=1-x\\qquad\\forall x\\in\\mathbb{R}\n$$\nSubstituting this back into the original equation we see that this is indeed a solution.</p>\n" }, { "answer_id": 2085768, "score": 4, "text": "<p>Fact 1: $f$ has a root. To see this, take $x \\not = 0$ and $y = \\frac{-2f(x)}{x}$. </p>\n\n<p>Fact 2: If $f(c) = 0$, then $c = \\pm\\sqrt{f(0)}$. 
To see this, set $x=y=c$.</p>\n\n<p>Fact 3: $f(0) = \\pm 1$. To see this, let $c$ be a root and set $x=c, y=0$. Then, $f(cf(0)) = 0 \\implies cf(0)=\\pm\\sqrt{f(0)}$ but $c = \\pm\\sqrt{f(0)}$.</p>\n\n<p>Fact 4: $f(0) = 1$. If $f(0) = -1$, then $x=y=0$ gives $f(1) = -2$. Then $x=1,y=0$ gives $f(-1-(-2))=2(-2) = -4$, a contradiction.</p>\n\n<p>Fact 5: $f(-1) = 2$ and $f(1) = 0$. The first equality follows from $x=0,y=0$. The second then comes from Fact 1, Fact 2, and $f(-1)\\not = 0$.</p>\n\n<p>Fact 6: $f(f(x)) = x$ for all $x \\in \\mathbb{R}$. This follows from setting $x=1$.</p>\n\n<p>Fact 7: $f(-a) = 2a+f(a)$ for all $a \\in \\mathbb{R}$. This follows from $x=f(a),y=1$ and noting Fact 6 implies $f$ is surjective.</p>\n\n<p>Fact 8: $f(-x-f(x)) = 2x+2f(x)$. This follows from $y=2$.</p>\n\n<p>Fact 9: $f(-x-f(x)) = 2x+2f(x)+f(x+f(x))$. This follows from letting $a = x+f(x)$ in Fact 7.</p>\n\n<p>Fact 10: $f(x) = -x+1$. This follows from Facts 8,9 , Fact 6 showing $f$ is injective, and Fact 5, which gives $f(1) = 0$.</p>\n" }, { "answer_id": 2085773, "score": 2, "text": "<p>Let's investigate first the affine solutions $x\\mapsto ax+b$. The equation rewrites as</p>\n\n<p>$$a(axy+bx-ax-b)+b=2ax+2b+xy\\iff (a^2-1)xy+a(b-a-2)x-b(a+1)=0$$</p>\n\n<p>Identifying the coefficients of the terms of the two sides of this polynomial equations we get that $x\\mapsto -x+1$ is the only affine solution to the problem.</p>\n\n<p>More painful but with the same method we prove that the only polynomial (power series solution) is $x\\mapsto -x+1$</p>\n\n<p>That's all I have for the moment</p>\n" }, { "answer_id": 3687369, "score": 0, "text": "<p><span class=\"math-container\">$$P(x,y) \\implies f(xf(y)−f(x))=2f(x)+xy$$</span>\n<span class=\"math-container\">$$P(1,y) \\implies f(f(y)-f(1))=2f(1)+y \\implies \\text{Surjectivity}$$</span>\nBecause <span class=\"math-container\">$2f(1)+y$</span> can attain any real value. 
Then there exists a real number <span class=\"math-container\">$a$</span> such that <span class=\"math-container\">$f(a)=0$</span>\n<span class=\"math-container\">$$P(a,a) \\implies a=\\pm \\sqrt{f(0)}$$</span>\n<span class=\"math-container\">$$f(xf(1)-f(x))=2f(x)+x$$</span>\nand there exists a real number <span class=\"math-container\">$c$</span> such that <span class=\"math-container\">$f(c)=1$</span>\n<span class=\"math-container\">$$P(x,c) \\implies f(x-f(x))=2f(x)+xc \\tag{1}$$</span>\nLet <span class=\"math-container\">$b \\ne a \\implies f(b) \\ne 0$</span> be a real number,\n<span class=\"math-container\">$$P(x/f(b),b) \\implies f(x-f(x))=2f(x)+xb/f(b) \\tag{2}$$</span>\nCombining <span class=\"math-container\">$(1)$</span> with <span class=\"math-container\">$(2)$</span>, we get\n<span class=\"math-container\">$$f(b)=b/c, \\ \\ \\ \\forall \\ b \\ne a, \\ c \\ne 0$$</span>\nSubstituting that in the original equation, we get\n<span class=\"math-container\">$$xy/c^2-x/c^2=2x/c+xy$$</span>\nwhich implies that no such <span class=\"math-container\">$c \\ne 0$</span> exists \n<span class=\"math-container\">$$\\implies c=0 \\implies f(0)=1 \\implies a= \\pm 1$$</span>\nCase 1: <span class=\"math-container\">$a=1 \\ \\ (f(1)=0)$</span>\n<span class=\"math-container\">$$P(1,y) \\implies f(f(y))=y \\tag{3}$$</span>\n<span class=\"math-container\">$$P(x,1) \\implies f(-f(x))=2f(x)+x $$</span>\n<span class=\"math-container\">$$\\overset{(3)}{\\implies} f(-x)=2x+f(x) \\tag{4}$$</span>\n<span class=\"math-container\">$$\\overset{x=1}{\\implies} f(-1)=2$$</span>\n<span class=\"math-container\">$$P(-1,x) \\implies f(-f(x)-2)=4-x \\overset{(3)}{\\implies} f(-x-2)=4-f(x) \\overset{(4)}{\\implies} f(x-2)=4-2x-f(x)$$</span>\nSo, by strong induction, we have for all integers <span class=\"math-container\">$k$</span>, \n<span class=\"math-container\">$$f(k)=1-k$$</span>\nand to prove it we put <span class=\"math-container\">$x=k+1$</span> in <span 
class=\"math-container\">$f(x-2)=4-2x-f(x)$</span> getting\n<span class=\"math-container\">$$f(k-1)=2-2k-f(k+1) \\iff f(k+1)=2-f(k-1)-2k=2-(1-(k-1))-2k=-k$$</span>\nSo, it's proven. Now, let <span class=\"math-container\">$k$</span> be a positive integer,\n<span class=\"math-container\">$$P(x,k) \\implies f(x-xk-f(x))=2f(x)+xk$$</span>\n<span class=\"math-container\">$$P(x,0) \\implies f(x-f(x))=2f(x)$$</span>\nSo, we combine the above two equations to get\n<span class=\"math-container\">$$f((x-f(x))-xk)=f(x-f(x))+xk$$</span>\nDue to the surjectivity, <span class=\"math-container\">$x-f(x)$</span> can attain any real value, and so does <span class=\"math-container\">$xk$</span>, so we set <span class=\"math-container\">$x-f(x)=u$</span> and <span class=\"math-container\">$-xk=v$</span> getting\n<span class=\"math-container\">$$f(u+v)=f(u)-v \\tag{5}$$</span>\n<span class=\"math-container\">$$\\implies f(xf(y)-f(x))=f(xf(y))+f(x) \\overset{(3), P(x,y)}{\\implies} f(xf(y))=f(x)+xy$$</span>\n<span class=\"math-container\">$$ \\overset{(3), y=f(y)}{\\implies} f(xy)=f(x)+xf(y)$$</span>\n<span class=\"math-container\">$$\\overset{x \\mapsto x+1}{\\implies}f(xy+y)=f(x+1)+xf(y)+f(y)\\overset{(5)}{\\implies} f(xy)-y=f(x)-1+xf(y)+f(y)$$</span>\n<span class=\"math-container\">$$\\overset{x=1}{\\implies} f(y)-y=f(1)-1+f(y)+f(y) \\iff f(y)=1-y \\ \\ \\ \\Box.$$</span>\nCase 2: <span class=\"math-container\">$a=-1$</span>\n<span class=\"math-container\">$$P(-1,x) \\implies f(-f(x))=-x$$</span>\n<span class=\"math-container\">$$P(x,-1) \\implies f(-f(x))=2f(x)-x$$</span>\nSo <span class=\"math-container\">$f(x)=0$</span> which doesn't work in the original equation.</p>\n" }, { "answer_id": 2085809, "score": 8, "text": "<p>This is a really tough problem, I like it very much! I tried to make it clear where I used what, but nevertheless its a bit messy. 
Feel free to ask if something is unclear!</p>\n\n<p>Let $f$ be a solution of the equation and let $P(x,y)$ e the assertion $f(xf(y)-f(x))=2f(x)+xy$\nThen\n$$\nP(1,y):\\qquad f(f(y)-f(1))=2f(1)+y\\implies f\\text{ surjective}\n$$\nThus let $a,b\\in\\mathbb{R}$ be such that $f(a)=0$ and $f(b)=1$.\n$$\nP(a,b):\\qquad 0=f(af(b)-f(a))=ab\n$$\nTherefore either $a=0$ or $b=0$.</p>\n\n<p><strong>Case 1 :</strong> $a=0$\n$$\nP(1,1):\\qquad 0=2f(1)+1\\implies f(1)=-\\frac12\n$$\nThus if we reconsider $P(1,y)$ we obtain $f(f(y)+1/2)=y-1$. Hence\n$$\nP(1,f(y)+1/2):\\qquad f(y-1/2)=f(f(f(y)+1/2)+1/2)=f(y)+1/2-1=f(y)-1/2\n$$\nand with $y=f(x)+1/2$ in the above equation:\n$$\nf(f(x))=f(f(x)+1/2)-1/2=x-3/2\n$$\nBut thus $0=f(f(0))=-3/2$, contradiction.</p>\n\n<p><strong>Case 2 :</strong> $b=0$</p>\n\n<p>Then again $P(1,1)$ gives $f(0)=1$. Therefore if we reconsider $P(1,y)$ we get $f(f(y))=y\\tag{1}\\label{1}$ and thus $f$ is bijective. Now\n$$\nP(x,0):\\qquad f(x-f(x))=2f(x)\\tag{2}\\label{2}\n$$\n$$\nP(f(x),0):\\qquad f(f(x)-x)=2x\\\\\\stackrel{f()}\\implies f(x)-x\\stackrel{\\eqref{1}}=f(f(f(x)-x))=f(2x)\\tag{3}\\label{3}\n$$\n$$\nP(f(x),1):\\qquad f(-x)\\stackrel{\\eqref{1}}=f(-f(f(x)))\\stackrel{P(f(x),1)}=2f(f(x))+f(x)\\stackrel{\\eqref{1}}=2x+f(x)\\tag{4}\\label{4}\n$$\nNow $f(-1)\\stackrel{\\eqref{4}}=2+f(1)=2$ and thus $f(2)=f(f(-1))\\stackrel{\\eqref{1}}=-1$. Finally we take \n$$\nP(x,2):f(-x-f(x))=2f(x)+2x\n$$\nBut $f(-x-f(x))\\stackrel{\\eqref{4}}=2(x+f(x))+f(x+f(x))$ and thus the above equation simplifies to $f(x+f(x))=0$. But $f$ is injective and we know already that $f(1)=0$, so\n$$\nx+f(x)=1\\implies f(x)=1-x\\qquad\\forall x\\in\\mathbb{R}\n$$\nSubstituting this back into the original equation we see that this is indeed a solution.</p>\n" }, { "answer_id": 2085768, "score": 4, "text": "<p>Fact 1: $f$ has a root. To see this, take $x \\not = 0$ and $y = \\frac{-2f(x)}{x}$. </p>\n\n<p>Fact 2: If $f(c) = 0$, then $c = \\pm\\sqrt{f(0)}$. 
To see this, set $x=y=c$.</p>\n\n<p>Fact 3: $f(0) = \\pm 1$. To see this, let $c$ be a root and set $x=c, y=0$. Then, $f(cf(0)) = 0 \\implies cf(0)=\\pm\\sqrt{f(0)}$ but $c = \\pm\\sqrt{f(0)}$.</p>\n\n<p>Fact 4: $f(0) = 1$. If $f(0) = -1$, then $x=y=0$ gives $f(1) = -2$. Then $x=1,y=0$ gives $f(-1-(-2))=2(-2) = -4$, a contradiction.</p>\n\n<p>Fact 5: $f(-1) = 2$ and $f(1) = 0$. The first equality follows from $x=0,y=0$. The second then comes from Fact 1, Fact 2, and $f(-1)\\not = 0$.</p>\n\n<p>Fact 6: $f(f(x)) = x$ for all $x \\in \\mathbb{R}$. This follows from setting $x=1$.</p>\n\n<p>Fact 7: $f(-a) = 2a+f(a)$ for all $a \\in \\mathbb{R}$. This follows from $x=f(a),y=1$ and noting Fact 6 implies $f$ is surjective.</p>\n\n<p>Fact 8: $f(-x-f(x)) = 2x+2f(x)$. This follows from $y=2$.</p>\n\n<p>Fact 9: $f(-x-f(x)) = 2x+2f(x)+f(x+f(x))$. This follows from letting $a = x+f(x)$ in Fact 7.</p>\n\n<p>Fact 10: $f(x) = -x+1$. This follows from Facts 8,9 , Fact 6 showing $f$ is injective, and Fact 5, which gives $f(1) = 0$.</p>\n" }, { "answer_id": 2085773, "score": 2, "text": "<p>Let's investigate first the affine solutions $x\\mapsto ax+b$. The equation rewrites as</p>\n\n<p>$$a(axy+bx-ax-b)+b=2ax+2b+xy\\iff (a^2-1)xy+a(b-a-2)x-b(a+1)=0$$</p>\n\n<p>Identifying the coefficients of the terms of the two sides of this polynomial equations we get that $x\\mapsto -x+1$ is the only affine solution to the problem.</p>\n\n<p>More painful but with the same method we prove that the only polynomial (power series solution) is $x\\mapsto -x+1$</p>\n\n<p>That's all I have for the moment</p>\n" }, { "answer_id": 3687369, "score": 0, "text": "<p><span class=\"math-container\">$$P(x,y) \\implies f(xf(y)−f(x))=2f(x)+xy$$</span>\n<span class=\"math-container\">$$P(1,y) \\implies f(f(y)-f(1))=2f(1)+y \\implies \\text{Surjectivity}$$</span>\nBecause <span class=\"math-container\">$2f(1)+y$</span> can attain any real value. 
Then there exists a real number <span class=\"math-container\">$a$</span> such that <span class=\"math-container\">$f(a)=0$</span>\n<span class=\"math-container\">$$P(a,a) \\implies a=\\pm \\sqrt{f(0)}$$</span>\n<span class=\"math-container\">$$f(xf(1)-f(x))=2f(x)+x$$</span>\nand there exists a real number <span class=\"math-container\">$c$</span> such that <span class=\"math-container\">$f(c)=1$</span>\n<span class=\"math-container\">$$P(x,c) \\implies f(x-f(x))=2f(x)+xc \\tag{1}$$</span>\nLet <span class=\"math-container\">$b \\ne a \\implies f(b) \\ne 0$</span> be a real number,\n<span class=\"math-container\">$$P(x/f(b),b) \\implies f(x-f(x))=2f(x)+xb/f(b) \\tag{2}$$</span>\nCombining <span class=\"math-container\">$(1)$</span> with <span class=\"math-container\">$(2)$</span>, we get\n<span class=\"math-container\">$$f(b)=b/c, \\ \\ \\ \\forall \\ b \\ne a, \\ c \\ne 0$$</span>\nSubstituting that in the original equation, we get\n<span class=\"math-container\">$$xy/c^2-x/c^2=2x/c+xy$$</span>\nwhich implies that no such <span class=\"math-container\">$c \\ne 0$</span> exists \n<span class=\"math-container\">$$\\implies c=0 \\implies f(0)=1 \\implies a= \\pm 1$$</span>\nCase 1: <span class=\"math-container\">$a=1 \\ \\ (f(1)=0)$</span>\n<span class=\"math-container\">$$P(1,y) \\implies f(f(y))=y \\tag{3}$$</span>\n<span class=\"math-container\">$$P(x,1) \\implies f(-f(x))=2f(x)+x $$</span>\n<span class=\"math-container\">$$\\overset{(3)}{\\implies} f(-x)=2x+f(x) \\tag{4}$$</span>\n<span class=\"math-container\">$$\\overset{x=1}{\\implies} f(-1)=2$$</span>\n<span class=\"math-container\">$$P(-1,x) \\implies f(-f(x)-2)=4-x \\overset{(3)}{\\implies} f(-x-2)=4-f(x) \\overset{(4)}{\\implies} f(x-2)=4-2x-f(x)$$</span>\nSo, by strong induction, we have for all integers <span class=\"math-container\">$k$</span>, \n<span class=\"math-container\">$$f(k)=1-k$$</span>\nand to prove it we put <span class=\"math-container\">$x=k+1$</span> in <span 
class=\"math-container\">$f(x-2)=4-2x-f(x)$</span> getting\n<span class=\"math-container\">$$f(k-1)=2-2k-f(k+1) \\iff f(k+1)=2-f(k-1)-2k=2-(1-(k-1))-2k=-k$$</span>\nSo, it's proven. Now, let <span class=\"math-container\">$k$</span> be a positive integer,\n<span class=\"math-container\">$$P(x,k) \\implies f(x-xk-f(x))=2f(x)+xk$$</span>\n<span class=\"math-container\">$$P(x,0) \\implies f(x-f(x))=2f(x)$$</span>\nSo, we combine the above two equations to get\n<span class=\"math-container\">$$f((x-f(x))-xk)=f(x-f(x))+xk$$</span>\nDue to the surjectivity, <span class=\"math-container\">$x-f(x)$</span> can attain any real value, and so does <span class=\"math-container\">$xk$</span>, so we set <span class=\"math-container\">$x-f(x)=u$</span> and <span class=\"math-container\">$-xk=v$</span> getting\n<span class=\"math-container\">$$f(u+v)=f(u)-v \\tag{5}$$</span>\n<span class=\"math-container\">$$\\implies f(xf(y)-f(x))=f(xf(y))+f(x) \\overset{(3), P(x,y)}{\\implies} f(xf(y))=f(x)+xy$$</span>\n<span class=\"math-container\">$$ \\overset{(3), y=f(y)}{\\implies} f(xy)=f(x)+xf(y)$$</span>\n<span class=\"math-container\">$$\\overset{x \\mapsto x+1}{\\implies}f(xy+y)=f(x+1)+xf(y)+f(y)\\overset{(5)}{\\implies} f(xy)-y=f(x)-1+xf(y)+f(y)$$</span>\n<span class=\"math-container\">$$\\overset{x=1}{\\implies} f(y)-y=f(1)-1+f(y)+f(y) \\iff f(y)=1-y \\ \\ \\ \\Box.$$</span>\nCase 2: <span class=\"math-container\">$a=-1$</span>\n<span class=\"math-container\">$$P(-1,x) \\implies f(-f(x))=-x$$</span>\n<span class=\"math-container\">$$P(x,-1) \\implies f(-f(x))=2f(x)-x$$</span>\nSo <span class=\"math-container\">$f(x)=0$</span> which doesn't work in the original equation.</p>\n" }, { "answer_id": 2085809, "score": 8, "text": "<p>This is a really tough problem, I like it very much! I tried to make it clear where I used what, but nevertheless its a bit messy. 
Feel free to ask if something is unclear!</p>\n\n<p>Let $f$ be a solution of the equation and let $P(x,y)$ e the assertion $f(xf(y)-f(x))=2f(x)+xy$\nThen\n$$\nP(1,y):\\qquad f(f(y)-f(1))=2f(1)+y\\implies f\\text{ surjective}\n$$\nThus let $a,b\\in\\mathbb{R}$ be such that $f(a)=0$ and $f(b)=1$.\n$$\nP(a,b):\\qquad 0=f(af(b)-f(a))=ab\n$$\nTherefore either $a=0$ or $b=0$.</p>\n\n<p><strong>Case 1 :</strong> $a=0$\n$$\nP(1,1):\\qquad 0=2f(1)+1\\implies f(1)=-\\frac12\n$$\nThus if we reconsider $P(1,y)$ we obtain $f(f(y)+1/2)=y-1$. Hence\n$$\nP(1,f(y)+1/2):\\qquad f(y-1/2)=f(f(f(y)+1/2)+1/2)=f(y)+1/2-1=f(y)-1/2\n$$\nand with $y=f(x)+1/2$ in the above equation:\n$$\nf(f(x))=f(f(x)+1/2)-1/2=x-3/2\n$$\nBut thus $0=f(f(0))=-3/2$, contradiction.</p>\n\n<p><strong>Case 2 :</strong> $b=0$</p>\n\n<p>Then again $P(1,1)$ gives $f(0)=1$. Therefore if we reconsider $P(1,y)$ we get $f(f(y))=y\\tag{1}\\label{1}$ and thus $f$ is bijective. Now\n$$\nP(x,0):\\qquad f(x-f(x))=2f(x)\\tag{2}\\label{2}\n$$\n$$\nP(f(x),0):\\qquad f(f(x)-x)=2x\\\\\\stackrel{f()}\\implies f(x)-x\\stackrel{\\eqref{1}}=f(f(f(x)-x))=f(2x)\\tag{3}\\label{3}\n$$\n$$\nP(f(x),1):\\qquad f(-x)\\stackrel{\\eqref{1}}=f(-f(f(x)))\\stackrel{P(f(x),1)}=2f(f(x))+f(x)\\stackrel{\\eqref{1}}=2x+f(x)\\tag{4}\\label{4}\n$$\nNow $f(-1)\\stackrel{\\eqref{4}}=2+f(1)=2$ and thus $f(2)=f(f(-1))\\stackrel{\\eqref{1}}=-1$. Finally we take \n$$\nP(x,2):f(-x-f(x))=2f(x)+2x\n$$\nBut $f(-x-f(x))\\stackrel{\\eqref{4}}=2(x+f(x))+f(x+f(x))$ and thus the above equation simplifies to $f(x+f(x))=0$. But $f$ is injective and we know already that $f(1)=0$, so\n$$\nx+f(x)=1\\implies f(x)=1-x\\qquad\\forall x\\in\\mathbb{R}\n$$\nSubstituting this back into the original equation we see that this is indeed a solution.</p>\n" }, { "answer_id": 2085768, "score": 4, "text": "<p>Fact 1: $f$ has a root. To see this, take $x \\not = 0$ and $y = \\frac{-2f(x)}{x}$. </p>\n\n<p>Fact 2: If $f(c) = 0$, then $c = \\pm\\sqrt{f(0)}$. 
To see this, set $x=y=c$.</p>\n\n<p>Fact 3: $f(0) = \\pm 1$. To see this, let $c$ be a root and set $x=c, y=0$. Then, $f(cf(0)) = 0 \\implies cf(0)=\\pm\\sqrt{f(0)}$ but $c = \\pm\\sqrt{f(0)}$.</p>\n\n<p>Fact 4: $f(0) = 1$. If $f(0) = -1$, then $x=y=0$ gives $f(1) = -2$. Then $x=1,y=0$ gives $f(-1-(-2))=2(-2) = -4$, a contradiction.</p>\n\n<p>Fact 5: $f(-1) = 2$ and $f(1) = 0$. The first equality follows from $x=0,y=0$. The second then comes from Fact 1, Fact 2, and $f(-1)\\not = 0$.</p>\n\n<p>Fact 6: $f(f(x)) = x$ for all $x \\in \\mathbb{R}$. This follows from setting $x=1$.</p>\n\n<p>Fact 7: $f(-a) = 2a+f(a)$ for all $a \\in \\mathbb{R}$. This follows from $x=f(a),y=1$ and noting Fact 6 implies $f$ is surjective.</p>\n\n<p>Fact 8: $f(-x-f(x)) = 2x+2f(x)$. This follows from $y=2$.</p>\n\n<p>Fact 9: $f(-x-f(x)) = 2x+2f(x)+f(x+f(x))$. This follows from letting $a = x+f(x)$ in Fact 7.</p>\n\n<p>Fact 10: $f(x) = -x+1$. This follows from Facts 8,9 , Fact 6 showing $f$ is injective, and Fact 5, which gives $f(1) = 0$.</p>\n" }, { "answer_id": 2085773, "score": 2, "text": "<p>Let's investigate first the affine solutions $x\\mapsto ax+b$. The equation rewrites as</p>\n\n<p>$$a(axy+bx-ax-b)+b=2ax+2b+xy\\iff (a^2-1)xy+a(b-a-2)x-b(a+1)=0$$</p>\n\n<p>Identifying the coefficients of the terms of the two sides of this polynomial equations we get that $x\\mapsto -x+1$ is the only affine solution to the problem.</p>\n\n<p>More painful but with the same method we prove that the only polynomial (power series solution) is $x\\mapsto -x+1$</p>\n\n<p>That's all I have for the moment</p>\n" }, { "answer_id": 3687369, "score": 0, "text": "<p><span class=\"math-container\">$$P(x,y) \\implies f(xf(y)−f(x))=2f(x)+xy$$</span>\n<span class=\"math-container\">$$P(1,y) \\implies f(f(y)-f(1))=2f(1)+y \\implies \\text{Surjectivity}$$</span>\nBecause <span class=\"math-container\">$2f(1)+y$</span> can attain any real value. 
Then there exists a real number <span class=\"math-container\">$a$</span> such that <span class=\"math-container\">$f(a)=0$</span>\n<span class=\"math-container\">$$P(a,a) \\implies a=\\pm \\sqrt{f(0)}$$</span>\n<span class=\"math-container\">$$f(xf(1)-f(x))=2f(x)+x$$</span>\nand there exists a real number <span class=\"math-container\">$c$</span> such that <span class=\"math-container\">$f(c)=1$</span>\n<span class=\"math-container\">$$P(x,c) \\implies f(x-f(x))=2f(x)+xc \\tag{1}$$</span>\nLet <span class=\"math-container\">$b \\ne a \\implies f(b) \\ne 0$</span> be a real number,\n<span class=\"math-container\">$$P(x/f(b),b) \\implies f(x-f(x))=2f(x)+xb/f(b) \\tag{2}$$</span>\nCombining <span class=\"math-container\">$(1)$</span> with <span class=\"math-container\">$(2)$</span>, we get\n<span class=\"math-container\">$$f(b)=b/c, \\ \\ \\ \\forall \\ b \\ne a, \\ c \\ne 0$$</span>\nSubstituting that in the original equation, we get\n<span class=\"math-container\">$$xy/c^2-x/c^2=2x/c+xy$$</span>\nwhich implies that no such <span class=\"math-container\">$c \\ne 0$</span> exists \n<span class=\"math-container\">$$\\implies c=0 \\implies f(0)=1 \\implies a= \\pm 1$$</span>\nCase 1: <span class=\"math-container\">$a=1 \\ \\ (f(1)=0)$</span>\n<span class=\"math-container\">$$P(1,y) \\implies f(f(y))=y \\tag{3}$$</span>\n<span class=\"math-container\">$$P(x,1) \\implies f(-f(x))=2f(x)+x $$</span>\n<span class=\"math-container\">$$\\overset{(3)}{\\implies} f(-x)=2x+f(x) \\tag{4}$$</span>\n<span class=\"math-container\">$$\\overset{x=1}{\\implies} f(-1)=2$$</span>\n<span class=\"math-container\">$$P(-1,x) \\implies f(-f(x)-2)=4-x \\overset{(3)}{\\implies} f(-x-2)=4-f(x) \\overset{(4)}{\\implies} f(x-2)=4-2x-f(x)$$</span>\nSo, by strong induction, we have for all integers <span class=\"math-container\">$k$</span>, \n<span class=\"math-container\">$$f(k)=1-k$$</span>\nand to prove it we put <span class=\"math-container\">$x=k+1$</span> in <span 
class=\"math-container\">$f(x-2)=4-2x-f(x)$</span> getting\n<span class=\"math-container\">$$f(k-1)=2-2k-f(k+1) \\iff f(k+1)=2-f(k-1)-2k=2-(1-(k-1))-2k=-k$$</span>\nSo, it's proven. Now, let <span class=\"math-container\">$k$</span> be a positive integer,\n<span class=\"math-container\">$$P(x,k) \\implies f(x-xk-f(x))=2f(x)+xk$$</span>\n<span class=\"math-container\">$$P(x,0) \\implies f(x-f(x))=2f(x)$$</span>\nSo, we combine the above two equations to get\n<span class=\"math-container\">$$f((x-f(x))-xk)=f(x-f(x))+xk$$</span>\nDue to the surjectivity, <span class=\"math-container\">$x-f(x)$</span> can attain any real value, and so does <span class=\"math-container\">$xk$</span>, so we set <span class=\"math-container\">$x-f(x)=u$</span> and <span class=\"math-container\">$-xk=v$</span> getting\n<span class=\"math-container\">$$f(u+v)=f(u)-v \\tag{5}$$</span>\n<span class=\"math-container\">$$\\implies f(xf(y)-f(x))=f(xf(y))+f(x) \\overset{(3), P(x,y)}{\\implies} f(xf(y))=f(x)+xy$$</span>\n<span class=\"math-container\">$$ \\overset{(3), y=f(y)}{\\implies} f(xy)=f(x)+xf(y)$$</span>\n<span class=\"math-container\">$$\\overset{x \\mapsto x+1}{\\implies}f(xy+y)=f(x+1)+xf(y)+f(y)\\overset{(5)}{\\implies} f(xy)-y=f(x)-1+xf(y)+f(y)$$</span>\n<span class=\"math-container\">$$\\overset{x=1}{\\implies} f(y)-y=f(1)-1+f(y)+f(y) \\iff f(y)=1-y \\ \\ \\ \\Box.$$</span>\nCase 2: <span class=\"math-container\">$a=-1$</span>\n<span class=\"math-container\">$$P(-1,x) \\implies f(-f(x))=-x$$</span>\n<span class=\"math-container\">$$P(x,-1) \\implies f(-f(x))=2f(x)-x$$</span>\nSo <span class=\"math-container\">$f(x)=0$</span> which doesn't work in the original equation.</p>\n" }, { "answer_id": 2085809, "score": 8, "text": "<p>This is a really tough problem, I like it very much! I tried to make it clear where I used what, but nevertheless its a bit messy. 
Feel free to ask if something is unclear!</p>\n\n<p>Let $f$ be a solution of the equation and let $P(x,y)$ e the assertion $f(xf(y)-f(x))=2f(x)+xy$\nThen\n$$\nP(1,y):\\qquad f(f(y)-f(1))=2f(1)+y\\implies f\\text{ surjective}\n$$\nThus let $a,b\\in\\mathbb{R}$ be such that $f(a)=0$ and $f(b)=1$.\n$$\nP(a,b):\\qquad 0=f(af(b)-f(a))=ab\n$$\nTherefore either $a=0$ or $b=0$.</p>\n\n<p><strong>Case 1 :</strong> $a=0$\n$$\nP(1,1):\\qquad 0=2f(1)+1\\implies f(1)=-\\frac12\n$$\nThus if we reconsider $P(1,y)$ we obtain $f(f(y)+1/2)=y-1$. Hence\n$$\nP(1,f(y)+1/2):\\qquad f(y-1/2)=f(f(f(y)+1/2)+1/2)=f(y)+1/2-1=f(y)-1/2\n$$\nand with $y=f(x)+1/2$ in the above equation:\n$$\nf(f(x))=f(f(x)+1/2)-1/2=x-3/2\n$$\nBut thus $0=f(f(0))=-3/2$, contradiction.</p>\n\n<p><strong>Case 2 :</strong> $b=0$</p>\n\n<p>Then again $P(1,1)$ gives $f(0)=1$. Therefore if we reconsider $P(1,y)$ we get $f(f(y))=y\\tag{1}\\label{1}$ and thus $f$ is bijective. Now\n$$\nP(x,0):\\qquad f(x-f(x))=2f(x)\\tag{2}\\label{2}\n$$\n$$\nP(f(x),0):\\qquad f(f(x)-x)=2x\\\\\\stackrel{f()}\\implies f(x)-x\\stackrel{\\eqref{1}}=f(f(f(x)-x))=f(2x)\\tag{3}\\label{3}\n$$\n$$\nP(f(x),1):\\qquad f(-x)\\stackrel{\\eqref{1}}=f(-f(f(x)))\\stackrel{P(f(x),1)}=2f(f(x))+f(x)\\stackrel{\\eqref{1}}=2x+f(x)\\tag{4}\\label{4}\n$$\nNow $f(-1)\\stackrel{\\eqref{4}}=2+f(1)=2$ and thus $f(2)=f(f(-1))\\stackrel{\\eqref{1}}=-1$. Finally we take \n$$\nP(x,2):f(-x-f(x))=2f(x)+2x\n$$\nBut $f(-x-f(x))\\stackrel{\\eqref{4}}=2(x+f(x))+f(x+f(x))$ and thus the above equation simplifies to $f(x+f(x))=0$. But $f$ is injective and we know already that $f(1)=0$, so\n$$\nx+f(x)=1\\implies f(x)=1-x\\qquad\\forall x\\in\\mathbb{R}\n$$\nSubstituting this back into the original equation we see that this is indeed a solution.</p>\n" } ]
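The affine analysis in one of the answers (substituting $x\mapsto ax+b$ and requiring every coefficient of the resulting polynomial in $x,y$ to vanish) can be checked symbolically. A minimal sketch using sympy — the variable names are my own, not from the thread:

```python
import sympy as sp

a, b, x, y = sp.symbols('a b x y')
f = lambda t: a*t + b  # affine ansatz f(t) = a*t + b

# difference of the two sides of f(x*f(y) - f(x)) = 2*f(x) + x*y
expr = sp.expand(f(x*f(y) - f(x)) - (2*f(x) + x*y))

# every coefficient of this polynomial in x, y must vanish;
# solving the resulting system in (a, b) recovers f(x) = -x + 1
coeffs = sp.Poly(expr, x, y).coeffs()
print(sp.solve(coeffs, [a, b], dict=True))  # -> [{a: -1, b: 1}]
```

The coefficient list is exactly $(a^2-1)$, $a(b-a-2)$, $-b(a+1)$ from the answer, and the only simultaneous root is $a=-1$, $b=1$.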
Question: $f(xf(y)-f(x))=2f(x)+xy$ Answer 1: <p>This is a really tough problem, I like it very much! I tried to make it clear where I used what, but nevertheless it's a bit messy. Feel free to ask if something is unclear!</p> <p>Let $f$ be a solution of the equation and let $P(x,y)$ be the assertion $f(xf(y)-f(x))=2f(x)+xy$. Then $$ P(1,y):\qquad f(f(y)-f(1))=2f(1)+y\implies f\text{ surjective} $$ Thus let $a,b\in\mathbb{R}$ be such that $f(a)=0$ and $f(b)=1$. $$ P(a,b):\qquad 0=f(af(b)-f(a))=ab $$ Therefore either $a=0$ or $b=0$.</p> <p><strong>Case 1:</strong> $a=0$ $$ P(1,1):\qquad 0=2f(1)+1\implies f(1)=-\frac12 $$ Thus if we reconsider $P(1,y)$ we obtain $f(f(y)+1/2)=y-1$. Hence $$ P(1,f(y)+1/2):\qquad f(y-1/2)=f(f(f(y)+1/2)+1/2)=f(y)+1/2-1=f(y)-1/2 $$ and with $y=f(x)+1/2$ in the above equation: $$ f(f(x))=f(f(x)+1/2)-1/2=x-3/2 $$ But then $0=f(f(0))=-3/2$, a contradiction.</p> <p><strong>Case 2:</strong> $b=0$</p> <p>Then again $P(1,1)$ gives $f(0)=1$. Therefore if we reconsider $P(1,y)$ we get $f(f(y))=y\tag{1}\label{1}$ and thus $f$ is bijective. Now $$ P(x,0):\qquad f(x-f(x))=2f(x)\tag{2}\label{2} $$ $$ P(f(x),0):\qquad f(f(x)-x)=2x\\\stackrel{f()}\implies f(x)-x\stackrel{\eqref{1}}=f(f(f(x)-x))=f(2x)\tag{3}\label{3} $$ $$ P(f(x),1):\qquad f(-x)\stackrel{\eqref{1}}=f(-f(f(x)))\stackrel{P(f(x),1)}=2f(f(x))+f(x)\stackrel{\eqref{1}}=2x+f(x)\tag{4}\label{4} $$ Now $f(-1)\stackrel{\eqref{4}}=2+f(1)=2$ and thus $f(2)=f(f(-1))\stackrel{\eqref{1}}=-1$. Finally we take $$ P(x,2):f(-x-f(x))=2f(x)+2x $$ But $f(-x-f(x))\stackrel{\eqref{4}}=2(x+f(x))+f(x+f(x))$ and thus the above equation simplifies to $f(x+f(x))=0$. But $f$ is injective and we know already that $f(1)=0$, so $$ x+f(x)=1\implies f(x)=1-x\qquad\forall x\in\mathbb{R} $$ Substituting this back into the original equation we see that this is indeed a solution.</p> Answer 2: <p>Fact 1: $f$ has a root. To see this, take $x \not = 0$ and $y = \frac{-2f(x)}{x}$.
</p> <p>Fact 2: If $f(c) = 0$, then $c = \pm\sqrt{f(0)}$. To see this, set $x=y=c$.</p> <p>Fact 3: $f(0) = \pm 1$. To see this, let $c$ be a root and set $x=c, y=0$. Then, $f(cf(0)) = 0 \implies cf(0)=\pm\sqrt{f(0)}$ but $c = \pm\sqrt{f(0)}$.</p> <p>Fact 4: $f(0) = 1$. If $f(0) = -1$, then $x=y=0$ gives $f(1) = -2$. Then $x=1,y=0$ gives $f(-1-(-2))=2(-2) = -4$, a contradiction.</p> <p>Fact 5: $f(-1) = 2$ and $f(1) = 0$. The first equality follows from $x=0,y=0$. The second then comes from Fact 1, Fact 2, and $f(-1)\not = 0$.</p> <p>Fact 6: $f(f(x)) = x$ for all $x \in \mathbb{R}$. This follows from setting $x=1$.</p> <p>Fact 7: $f(-a) = 2a+f(a)$ for all $a \in \mathbb{R}$. This follows from $x=f(a),y=1$ and noting Fact 6 implies $f$ is surjective.</p> <p>Fact 8: $f(-x-f(x)) = 2x+2f(x)$. This follows from $y=2$.</p> <p>Fact 9: $f(-x-f(x)) = 2x+2f(x)+f(x+f(x))$. This follows from letting $a = x+f(x)$ in Fact 7.</p> <p>Fact 10: $f(x) = -x+1$. This follows from Facts 8, 9, Fact 6 showing $f$ is injective, and Fact 5, which gives $f(1) = 0$.</p> Answer 3: <p>Let's first investigate the affine solutions $x\mapsto ax+b$. The equation rewrites as</p> <p>$$a(axy+bx-ax-b)+b=2ax+2b+xy\iff (a^2-1)xy+a(b-a-2)x-b(a+1)=0$$</p> <p>Identifying the coefficients of the terms on the two sides of this polynomial equation, we get that $x\mapsto -x+1$ is the only affine solution to the problem.</p> <p>More painful, but with the same method, we can prove that the only polynomial (power series) solution is $x\mapsto -x+1$.</p> <p>That's all I have for the moment.</p> Answer 4: <p><span class="math-container">$$P(x,y) \implies f(xf(y)-f(x))=2f(x)+xy$$</span> <span class="math-container">$$P(1,y) \implies f(f(y)-f(1))=2f(1)+y \implies \text{Surjectivity}$$</span> Because <span class="math-container">$2f(1)+y$</span> can attain any real value.
Then there exists a real number <span class="math-container">$a$</span> such that <span class="math-container">$f(a)=0$</span> <span class="math-container">$$P(a,a) \implies a=\pm \sqrt{f(0)}$$</span> <span class="math-container">$$f(xf(1)-f(x))=2f(x)+x$$</span> and there exists a real number <span class="math-container">$c$</span> such that <span class="math-container">$f(c)=1$</span> <span class="math-container">$$P(x,c) \implies f(x-f(x))=2f(x)+xc \tag{1}$$</span> Let <span class="math-container">$b \ne a \implies f(b) \ne 0$</span> be a real number, <span class="math-container">$$P(x/f(b),b) \implies f(x-f(x))=2f(x)+xb/f(b) \tag{2}$$</span> Combining <span class="math-container">$(1)$</span> with <span class="math-container">$(2)$</span>, we get <span class="math-container">$$f(b)=b/c, \ \ \ \forall \ b \ne a, \ c \ne 0$$</span> Substituting that in the original equation, we get <span class="math-container">$$xy/c^2-x/c^2=2x/c+xy$$</span> which implies that no such <span class="math-container">$c \ne 0$</span> exists <span class="math-container">$$\implies c=0 \implies f(0)=1 \implies a= \pm 1$$</span> Case 1: <span class="math-container">$a=1 \ \ (f(1)=0)$</span> <span class="math-container">$$P(1,y) \implies f(f(y))=y \tag{3}$$</span> <span class="math-container">$$P(x,1) \implies f(-f(x))=2f(x)+x $$</span> <span class="math-container">$$\overset{(3)}{\implies} f(-x)=2x+f(x) \tag{4}$$</span> <span class="math-container">$$\overset{x=1}{\implies} f(-1)=2$$</span> <span class="math-container">$$P(-1,x) \implies f(-f(x)-2)=4-x \overset{(3)}{\implies} f(-x-2)=4-f(x) \overset{(4)}{\implies} f(x-2)=4-2x-f(x)$$</span> So, by strong induction, we have for all integers <span class="math-container">$k$</span>, <span class="math-container">$$f(k)=1-k$$</span> and to prove it we put <span class="math-container">$x=k+1$</span> in <span class="math-container">$f(x-2)=4-2x-f(x)$</span> getting <span class="math-container">$$f(k-1)=2-2k-f(k+1) \iff 
f(k+1)=2-f(k-1)-2k=2-(1-(k-1))-2k=-k$$</span> So, it's proven. Now, let <span class="math-container">$k$</span> be a positive integer, <span class="math-container">$$P(x,k) \implies f(x-xk-f(x))=2f(x)+xk$$</span> <span class="math-container">$$P(x,0) \implies f(x-f(x))=2f(x)$$</span> So, we combine the above two equations to get <span class="math-container">$$f((x-f(x))-xk)=f(x-f(x))+xk$$</span> Due to the surjectivity, <span class="math-container">$x-f(x)$</span> can attain any real value, and so does <span class="math-container">$xk$</span>, so we set <span class="math-container">$x-f(x)=u$</span> and <span class="math-container">$-xk=v$</span> getting <span class="math-container">$$f(u+v)=f(u)-v \tag{5}$$</span> <span class="math-container">$$\implies f(xf(y)-f(x))=f(xf(y))+f(x) \overset{(3), P(x,y)}{\implies} f(xf(y))=f(x)+xy$$</span> <span class="math-container">$$ \overset{(3), y=f(y)}{\implies} f(xy)=f(x)+xf(y)$$</span> <span class="math-container">$$\overset{x \mapsto x+1}{\implies}f(xy+y)=f(x+1)+xf(y)+f(y)\overset{(5)}{\implies} f(xy)-y=f(x)-1+xf(y)+f(y)$$</span> <span class="math-container">$$\overset{x=1}{\implies} f(y)-y=f(1)-1+f(y)+f(y) \iff f(y)=1-y \ \ \ \Box.$$</span> Case 2: <span class="math-container">$a=-1$</span> <span class="math-container">$$P(-1,x) \implies f(-f(x))=-x$$</span> <span class="math-container">$$P(x,-1) \implies f(-f(x))=2f(x)-x$$</span> So <span class="math-container">$f(x)=0$</span> which doesn't work in the original equation.</p> Answer 5: <p>This is a really tough problem, I like it very much! I tried to make it clear where I used what, but nevertheless its a bit messy. Feel free to ask if something is unclear!</p> <p>Let $f$ be a solution of the equation and let $P(x,y)$ e the assertion $f(xf(y)-f(x))=2f(x)+xy$ Then $$ P(1,y):\qquad f(f(y)-f(1))=2f(1)+y\implies f\text{ surjective} $$ Thus let $a,b\in\mathbb{R}$ be such that $f(a)=0$ and $f(b)=1$. 
$$ P(a,b):\qquad 0=f(af(b)-f(a))=ab $$ Therefore either $a=0$ or $b=0$.</p> <p><strong>Case 1 :</strong> $a=0$ $$ P(1,1):\qquad 0=2f(1)+1\implies f(1)=-\frac12 $$ Thus if we reconsider $P(1,y)$ we obtain $f(f(y)+1/2)=y-1$. Hence $$ P(1,f(y)+1/2):\qquad f(y-1/2)=f(f(f(y)+1/2)+1/2)=f(y)+1/2-1=f(y)-1/2 $$ and with $y=f(x)+1/2$ in the above equation: $$ f(f(x))=f(f(x)+1/2)-1/2=x-3/2 $$ But then $0=f(f(0))=-3/2$, a contradiction.</p> <p><strong>Case 2 :</strong> $b=0$</p> <p>Then $b=0$ gives $f(0)=f(b)=1$. Therefore if we reconsider $P(1,y)$ we get $f(f(y))=y\tag{1}\label{1}$ and thus $f$ is bijective. Note also that $f(1)=f(f(0))\stackrel{\eqref{1}}=0$, since $f(0)=1$. Now $$ P(x,0):\qquad f(x-f(x))=2f(x)\tag{2}\label{2} $$ $$ P(f(x),0):\qquad f(f(x)-x)=2x\\\stackrel{f()}\implies f(x)-x\stackrel{\eqref{1}}=f(f(f(x)-x))=f(2x)\tag{3}\label{3} $$ $$ P(f(x),1):\qquad f(-x)\stackrel{\eqref{1}}=f(-f(f(x)))\stackrel{P(f(x),1)}=2f(f(x))+f(x)\stackrel{\eqref{1}}=2x+f(x)\tag{4}\label{4} $$ Now $f(-1)\stackrel{\eqref{4}}=2+f(1)=2$ and thus $f(2)=f(f(-1))\stackrel{\eqref{1}}=-1$. Finally we take $$ P(x,2):f(-x-f(x))=2f(x)+2x $$ But $f(-x-f(x))\stackrel{\eqref{4}}=2(x+f(x))+f(x+f(x))$ and thus the above equation simplifies to $f(x+f(x))=0$. But $f$ is injective and we know already that $f(1)=0$, so $$ x+f(x)=1\implies f(x)=1-x\qquad\forall x\in\mathbb{R} $$ Substituting this back into the original equation we see that this is indeed a solution.</p> Answer 6: <p>Fact 1: $f$ has a root. To see this, take $x \not = 0$ and $y = \frac{-2f(x)}{x}$. </p> <p>Fact 2: If $f(c) = 0$, then $c = \pm\sqrt{f(0)}$. To see this, set $x=y=c$.</p> <p>Fact 3: $f(0) = \pm 1$. To see this, let $c$ be a root and set $x=c, y=0$. Then, $f(cf(0)) = 0 \implies cf(0)=\pm\sqrt{f(0)}$, but also $c = \pm\sqrt{f(0)}$, so $f(0)=\pm 1$.</p> <p>Fact 4: $f(0) = 1$. If $f(0) = -1$, then $x=y=0$ gives $f(1) = -2$. Then $x=1,y=0$ gives $f(-1-(-2))=2(-2) = -4$, a contradiction.</p> <p>Fact 5: $f(-1) = 2$ and $f(1) = 0$. The first equality follows from $x=0,y=0$.
The second then comes from Fact 1, Fact 2, and $f(-1)\not = 0$.</p> <p>Fact 6: $f(f(x)) = x$ for all $x \in \mathbb{R}$. This follows from setting $x=1$.</p> <p>Fact 7: $f(-a) = 2a+f(a)$ for all $a \in \mathbb{R}$. This follows from $x=f(a),y=1$ and noting Fact 6 implies $f$ is surjective.</p> <p>Fact 8: $f(-x-f(x)) = 2x+2f(x)$. This follows from $y=2$.</p> <p>Fact 9: $f(-x-f(x)) = 2x+2f(x)+f(x+f(x))$. This follows from letting $a = x+f(x)$ in Fact 7.</p> <p>Fact 10: $f(x) = -x+1$. This follows from Facts 8,9 , Fact 6 showing $f$ is injective, and Fact 5, which gives $f(1) = 0$.</p> Answer 7: <p>Let's investigate first the affine solutions $x\mapsto ax+b$. The equation rewrites as</p> <p>$$a(axy+bx-ax-b)+b=2ax+2b+xy\iff (a^2-1)xy+a(b-a-2)x-b(a+1)=0$$</p> <p>Identifying the coefficients of the terms of the two sides of this polynomial equations we get that $x\mapsto -x+1$ is the only affine solution to the problem.</p> <p>More painful but with the same method we prove that the only polynomial (power series solution) is $x\mapsto -x+1$</p> <p>That's all I have for the moment</p> Answer 8: <p><span class="math-container">$$P(x,y) \implies f(xf(y)−f(x))=2f(x)+xy$$</span> <span class="math-container">$$P(1,y) \implies f(f(y)-f(1))=2f(1)+y \implies \text{Surjectivity}$$</span> Because <span class="math-container">$2f(1)+y$</span> can attain any real value. 
Then there exists a real number <span class="math-container">$a$</span> such that <span class="math-container">$f(a)=0$</span> <span class="math-container">$$P(a,a) \implies a=\pm \sqrt{f(0)}$$</span> <span class="math-container">$$f(xf(1)-f(x))=2f(x)+x$$</span> and there exists a real number <span class="math-container">$c$</span> such that <span class="math-container">$f(c)=1$</span> <span class="math-container">$$P(x,c) \implies f(x-f(x))=2f(x)+xc \tag{1}$$</span> Let <span class="math-container">$b \ne a \implies f(b) \ne 0$</span> be a real number, <span class="math-container">$$P(x/f(b),b) \implies f(x-f(x))=2f(x)+xb/f(b) \tag{2}$$</span> Combining <span class="math-container">$(1)$</span> with <span class="math-container">$(2)$</span>, we get <span class="math-container">$$f(b)=b/c, \ \ \ \forall \ b \ne a, \ c \ne 0$$</span> Substituting that in the original equation, we get <span class="math-container">$$xy/c^2-x/c^2=2x/c+xy$$</span> which implies that no such <span class="math-container">$c \ne 0$</span> exists <span class="math-container">$$\implies c=0 \implies f(0)=1 \implies a= \pm 1$$</span> Case 1: <span class="math-container">$a=1 \ \ (f(1)=0)$</span> <span class="math-container">$$P(1,y) \implies f(f(y))=y \tag{3}$$</span> <span class="math-container">$$P(x,1) \implies f(-f(x))=2f(x)+x $$</span> <span class="math-container">$$\overset{(3)}{\implies} f(-x)=2x+f(x) \tag{4}$$</span> <span class="math-container">$$\overset{x=1}{\implies} f(-1)=2$$</span> <span class="math-container">$$P(-1,x) \implies f(-f(x)-2)=4-x \overset{(3)}{\implies} f(-x-2)=4-f(x) \overset{(4)}{\implies} f(x-2)=4-2x-f(x)$$</span> So, by strong induction, we have for all integers <span class="math-container">$k$</span>, <span class="math-container">$$f(k)=1-k$$</span> and to prove it we put <span class="math-container">$x=k+1$</span> in <span class="math-container">$f(x-2)=4-2x-f(x)$</span> getting <span class="math-container">$$f(k-1)=2-2k-f(k+1) \iff 
f(k+1)=2-f(k-1)-2k=2-(1-(k-1))-2k=-k$$</span> So, it's proven. Now, let <span class="math-container">$k$</span> be a positive integer, <span class="math-container">$$P(x,k) \implies f(x-xk-f(x))=2f(x)+xk$$</span> <span class="math-container">$$P(x,0) \implies f(x-f(x))=2f(x)$$</span> So, we combine the above two equations to get <span class="math-container">$$f((x-f(x))-xk)=f(x-f(x))+xk$$</span> Due to the surjectivity, <span class="math-container">$x-f(x)$</span> can attain any real value, and so does <span class="math-container">$xk$</span>, so we set <span class="math-container">$x-f(x)=u$</span> and <span class="math-container">$-xk=v$</span> getting <span class="math-container">$$f(u+v)=f(u)-v \tag{5}$$</span> <span class="math-container">$$\implies f(xf(y)-f(x))=f(xf(y))+f(x) \overset{(3), P(x,y)}{\implies} f(xf(y))=f(x)+xy$$</span> <span class="math-container">$$ \overset{(3), y=f(y)}{\implies} f(xy)=f(x)+xf(y)$$</span> <span class="math-container">$$\overset{x \mapsto x+1}{\implies}f(xy+y)=f(x+1)+xf(y)+f(y)\overset{(5)}{\implies} f(xy)-y=f(x)-1+xf(y)+f(y)$$</span> <span class="math-container">$$\overset{x=1}{\implies} f(y)-y=f(1)-1+f(y)+f(y) \iff f(y)=1-y \ \ \ \Box.$$</span> Case 2: <span class="math-container">$a=-1$</span> <span class="math-container">$$P(-1,x) \implies f(-f(x))=-x$$</span> <span class="math-container">$$P(x,-1) \implies f(-f(x))=2f(x)-x$$</span> So <span class="math-container">$f(x)=0$</span> which doesn't work in the original equation.</p> Answer 9: <p>This is a really tough problem, I like it very much! I tried to make it clear where I used what, but nevertheless its a bit messy. Feel free to ask if something is unclear!</p> <p>Let $f$ be a solution of the equation and let $P(x,y)$ e the assertion $f(xf(y)-f(x))=2f(x)+xy$ Then $$ P(1,y):\qquad f(f(y)-f(1))=2f(1)+y\implies f\text{ surjective} $$ Thus let $a,b\in\mathbb{R}$ be such that $f(a)=0$ and $f(b)=1$. 
$$ P(a,b):\qquad 0=f(af(b)-f(a))=ab $$ Therefore either $a=0$ or $b=0$.</p> <p><strong>Case 1 :</strong> $a=0$ $$ P(1,1):\qquad 0=2f(1)+1\implies f(1)=-\frac12 $$ Thus if we reconsider $P(1,y)$ we obtain $f(f(y)+1/2)=y-1$. Hence $$ P(1,f(y)+1/2):\qquad f(y-1/2)=f(f(f(y)+1/2)+1/2)=f(y)+1/2-1=f(y)-1/2 $$ and with $y=f(x)+1/2$ in the above equation: $$ f(f(x))=f(f(x)+1/2)-1/2=x-3/2 $$ But thus $0=f(f(0))=-3/2$, contradiction.</p> <p><strong>Case 2 :</strong> $b=0$</p> <p>Then again $P(1,1)$ gives $f(0)=1$. Therefore if we reconsider $P(1,y)$ we get $f(f(y))=y\tag{1}\label{1}$ and thus $f$ is bijective. Now $$ P(x,0):\qquad f(x-f(x))=2f(x)\tag{2}\label{2} $$ $$ P(f(x),0):\qquad f(f(x)-x)=2x\\\stackrel{f()}\implies f(x)-x\stackrel{\eqref{1}}=f(f(f(x)-x))=f(2x)\tag{3}\label{3} $$ $$ P(f(x),1):\qquad f(-x)\stackrel{\eqref{1}}=f(-f(f(x)))\stackrel{P(f(x),1)}=2f(f(x))+f(x)\stackrel{\eqref{1}}=2x+f(x)\tag{4}\label{4} $$ Now $f(-1)\stackrel{\eqref{4}}=2+f(1)=2$ and thus $f(2)=f(f(-1))\stackrel{\eqref{1}}=-1$. Finally we take $$ P(x,2):f(-x-f(x))=2f(x)+2x $$ But $f(-x-f(x))\stackrel{\eqref{4}}=2(x+f(x))+f(x+f(x))$ and thus the above equation simplifies to $f(x+f(x))=0$. But $f$ is injective and we know already that $f(1)=0$, so $$ x+f(x)=1\implies f(x)=1-x\qquad\forall x\in\mathbb{R} $$ Substituting this back into the original equation we see that this is indeed a solution.</p> Answer 10: <p>Fact 1: $f$ has a root. To see this, take $x \not = 0$ and $y = \frac{-2f(x)}{x}$. </p> <p>Fact 2: If $f(c) = 0$, then $c = \pm\sqrt{f(0)}$. To see this, set $x=y=c$.</p> <p>Fact 3: $f(0) = \pm 1$. To see this, let $c$ be a root and set $x=c, y=0$. Then, $f(cf(0)) = 0 \implies cf(0)=\pm\sqrt{f(0)}$ but $c = \pm\sqrt{f(0)}$.</p> <p>Fact 4: $f(0) = 1$. If $f(0) = -1$, then $x=y=0$ gives $f(1) = -2$. Then $x=1,y=0$ gives $f(-1-(-2))=2(-2) = -4$, a contradiction.</p> <p>Fact 5: $f(-1) = 2$ and $f(1) = 0$. The first equality follows from $x=0,y=0$. 
The second then comes from Fact 1, Fact 2, and $f(-1)\not = 0$.</p> <p>Fact 6: $f(f(x)) = x$ for all $x \in \mathbb{R}$. This follows from setting $x=1$.</p> <p>Fact 7: $f(-a) = 2a+f(a)$ for all $a \in \mathbb{R}$. This follows from $x=f(a),y=1$ and noting Fact 6 implies $f$ is surjective.</p> <p>Fact 8: $f(-x-f(x)) = 2x+2f(x)$. This follows from $y=2$.</p> <p>Fact 9: $f(-x-f(x)) = 2x+2f(x)+f(x+f(x))$. This follows from letting $a = x+f(x)$ in Fact 7.</p> <p>Fact 10: $f(x) = -x+1$. This follows from Facts 8,9 , Fact 6 showing $f$ is injective, and Fact 5, which gives $f(1) = 0$.</p> Answer 11: <p>Let's investigate first the affine solutions $x\mapsto ax+b$. The equation rewrites as</p> <p>$$a(axy+bx-ax-b)+b=2ax+2b+xy\iff (a^2-1)xy+a(b-a-2)x-b(a+1)=0$$</p> <p>Identifying the coefficients of the terms of the two sides of this polynomial equations we get that $x\mapsto -x+1$ is the only affine solution to the problem.</p> <p>More painful but with the same method we prove that the only polynomial (power series solution) is $x\mapsto -x+1$</p> <p>That's all I have for the moment</p> Answer 12: <p><span class="math-container">$$P(x,y) \implies f(xf(y)−f(x))=2f(x)+xy$$</span> <span class="math-container">$$P(1,y) \implies f(f(y)-f(1))=2f(1)+y \implies \text{Surjectivity}$$</span> Because <span class="math-container">$2f(1)+y$</span> can attain any real value. 
Then there exists a real number <span class="math-container">$a$</span> such that <span class="math-container">$f(a)=0$</span> <span class="math-container">$$P(a,a) \implies a=\pm \sqrt{f(0)}$$</span> <span class="math-container">$$f(xf(1)-f(x))=2f(x)+x$$</span> and there exists a real number <span class="math-container">$c$</span> such that <span class="math-container">$f(c)=1$</span> <span class="math-container">$$P(x,c) \implies f(x-f(x))=2f(x)+xc \tag{1}$$</span> Let <span class="math-container">$b \ne a \implies f(b) \ne 0$</span> be a real number, <span class="math-container">$$P(x/f(b),b) \implies f(x-f(x))=2f(x)+xb/f(b) \tag{2}$$</span> Combining <span class="math-container">$(1)$</span> with <span class="math-container">$(2)$</span>, we get <span class="math-container">$$f(b)=b/c, \ \ \ \forall \ b \ne a, \ c \ne 0$$</span> Substituting that in the original equation, we get <span class="math-container">$$xy/c^2-x/c^2=2x/c+xy$$</span> which implies that no such <span class="math-container">$c \ne 0$</span> exists <span class="math-container">$$\implies c=0 \implies f(0)=1 \implies a= \pm 1$$</span> Case 1: <span class="math-container">$a=1 \ \ (f(1)=0)$</span> <span class="math-container">$$P(1,y) \implies f(f(y))=y \tag{3}$$</span> <span class="math-container">$$P(x,1) \implies f(-f(x))=2f(x)+x $$</span> <span class="math-container">$$\overset{(3)}{\implies} f(-x)=2x+f(x) \tag{4}$$</span> <span class="math-container">$$\overset{x=1}{\implies} f(-1)=2$$</span> <span class="math-container">$$P(-1,x) \implies f(-f(x)-2)=4-x \overset{(3)}{\implies} f(-x-2)=4-f(x) \overset{(4)}{\implies} f(x-2)=4-2x-f(x)$$</span> So, by strong induction, we have for all integers <span class="math-container">$k$</span>, <span class="math-container">$$f(k)=1-k$$</span> and to prove it we put <span class="math-container">$x=k+1$</span> in <span class="math-container">$f(x-2)=4-2x-f(x)$</span> getting <span class="math-container">$$f(k-1)=2-2k-f(k+1) \iff 
f(k+1)=2-f(k-1)-2k=2-(1-(k-1))-2k=-k$$</span> So, it's proven. Now, let <span class="math-container">$k$</span> be a positive integer, <span class="math-container">$$P(x,k) \implies f(x-xk-f(x))=2f(x)+xk$$</span> <span class="math-container">$$P(x,0) \implies f(x-f(x))=2f(x)$$</span> So, we combine the above two equations to get <span class="math-container">$$f((x-f(x))-xk)=f(x-f(x))+xk$$</span> Due to the surjectivity, <span class="math-container">$x-f(x)$</span> can attain any real value, and so does <span class="math-container">$xk$</span>, so we set <span class="math-container">$x-f(x)=u$</span> and <span class="math-container">$-xk=v$</span> getting <span class="math-container">$$f(u+v)=f(u)-v \tag{5}$$</span> <span class="math-container">$$\implies f(xf(y)-f(x))=f(xf(y))+f(x) \overset{(3), P(x,y)}{\implies} f(xf(y))=f(x)+xy$$</span> <span class="math-container">$$ \overset{(3), y=f(y)}{\implies} f(xy)=f(x)+xf(y)$$</span> <span class="math-container">$$\overset{x \mapsto x+1}{\implies}f(xy+y)=f(x+1)+xf(y)+f(y)\overset{(5)}{\implies} f(xy)-y=f(x)-1+xf(y)+f(y)$$</span> <span class="math-container">$$\overset{x=1}{\implies} f(y)-y=f(1)-1+f(y)+f(y) \iff f(y)=1-y \ \ \ \Box.$$</span> Case 2: <span class="math-container">$a=-1$</span> <span class="math-container">$$P(-1,x) \implies f(-f(x))=-x$$</span> <span class="math-container">$$P(x,-1) \implies f(-f(x))=2f(x)-x$$</span> So <span class="math-container">$f(x)=0$</span> which doesn't work in the original equation.</p> Answer 13: <p>This is a really tough problem, I like it very much! I tried to make it clear where I used what, but nevertheless its a bit messy. Feel free to ask if something is unclear!</p> <p>Let $f$ be a solution of the equation and let $P(x,y)$ e the assertion $f(xf(y)-f(x))=2f(x)+xy$ Then $$ P(1,y):\qquad f(f(y)-f(1))=2f(1)+y\implies f\text{ surjective} $$ Thus let $a,b\in\mathbb{R}$ be such that $f(a)=0$ and $f(b)=1$. 
$$ P(a,b):\qquad 0=f(af(b)-f(a))=ab $$ Therefore either $a=0$ or $b=0$.</p> <p><strong>Case 1 :</strong> $a=0$ $$ P(1,1):\qquad 0=2f(1)+1\implies f(1)=-\frac12 $$ Thus if we reconsider $P(1,y)$ we obtain $f(f(y)+1/2)=y-1$. Hence $$ P(1,f(y)+1/2):\qquad f(y-1/2)=f(f(f(y)+1/2)+1/2)=f(y)+1/2-1=f(y)-1/2 $$ and with $y=f(x)+1/2$ in the above equation: $$ f(f(x))=f(f(x)+1/2)-1/2=x-3/2 $$ But thus $0=f(f(0))=-3/2$, contradiction.</p> <p><strong>Case 2 :</strong> $b=0$</p> <p>Then again $P(1,1)$ gives $f(0)=1$. Therefore if we reconsider $P(1,y)$ we get $f(f(y))=y\tag{1}\label{1}$ and thus $f$ is bijective. Now $$ P(x,0):\qquad f(x-f(x))=2f(x)\tag{2}\label{2} $$ $$ P(f(x),0):\qquad f(f(x)-x)=2x\\\stackrel{f()}\implies f(x)-x\stackrel{\eqref{1}}=f(f(f(x)-x))=f(2x)\tag{3}\label{3} $$ $$ P(f(x),1):\qquad f(-x)\stackrel{\eqref{1}}=f(-f(f(x)))\stackrel{P(f(x),1)}=2f(f(x))+f(x)\stackrel{\eqref{1}}=2x+f(x)\tag{4}\label{4} $$ Now $f(-1)\stackrel{\eqref{4}}=2+f(1)=2$ and thus $f(2)=f(f(-1))\stackrel{\eqref{1}}=-1$. Finally we take $$ P(x,2):f(-x-f(x))=2f(x)+2x $$ But $f(-x-f(x))\stackrel{\eqref{4}}=2(x+f(x))+f(x+f(x))$ and thus the above equation simplifies to $f(x+f(x))=0$. But $f$ is injective and we know already that $f(1)=0$, so $$ x+f(x)=1\implies f(x)=1-x\qquad\forall x\in\mathbb{R} $$ Substituting this back into the original equation we see that this is indeed a solution.</p> Answer 14: <p>Fact 1: $f$ has a root. To see this, take $x \not = 0$ and $y = \frac{-2f(x)}{x}$. </p> <p>Fact 2: If $f(c) = 0$, then $c = \pm\sqrt{f(0)}$. To see this, set $x=y=c$.</p> <p>Fact 3: $f(0) = \pm 1$. To see this, let $c$ be a root and set $x=c, y=0$. Then, $f(cf(0)) = 0 \implies cf(0)=\pm\sqrt{f(0)}$ but $c = \pm\sqrt{f(0)}$.</p> <p>Fact 4: $f(0) = 1$. If $f(0) = -1$, then $x=y=0$ gives $f(1) = -2$. Then $x=1,y=0$ gives $f(-1-(-2))=2(-2) = -4$, a contradiction.</p> <p>Fact 5: $f(-1) = 2$ and $f(1) = 0$. The first equality follows from $x=0,y=0$. 
The second then comes from Fact 1, Fact 2, and $f(-1)\not = 0$.</p> <p>Fact 6: $f(f(x)) = x$ for all $x \in \mathbb{R}$. This follows from setting $x=1$.</p> <p>Fact 7: $f(-a) = 2a+f(a)$ for all $a \in \mathbb{R}$. This follows from $x=f(a),y=1$ and noting Fact 6 implies $f$ is surjective.</p> <p>Fact 8: $f(-x-f(x)) = 2x+2f(x)$. This follows from $y=2$.</p> <p>Fact 9: $f(-x-f(x)) = 2x+2f(x)+f(x+f(x))$. This follows from letting $a = x+f(x)$ in Fact 7.</p> <p>Fact 10: $f(x) = -x+1$. This follows from Facts 8,9 , Fact 6 showing $f$ is injective, and Fact 5, which gives $f(1) = 0$.</p> Answer 15: <p>Let's investigate first the affine solutions $x\mapsto ax+b$. The equation rewrites as</p> <p>$$a(axy+bx-ax-b)+b=2ax+2b+xy\iff (a^2-1)xy+a(b-a-2)x-b(a+1)=0$$</p> <p>Identifying the coefficients of the terms of the two sides of this polynomial equations we get that $x\mapsto -x+1$ is the only affine solution to the problem.</p> <p>More painful but with the same method we prove that the only polynomial (power series solution) is $x\mapsto -x+1$</p> <p>That's all I have for the moment</p> Answer 16: <p><span class="math-container">$$P(x,y) \implies f(xf(y)−f(x))=2f(x)+xy$$</span> <span class="math-container">$$P(1,y) \implies f(f(y)-f(1))=2f(1)+y \implies \text{Surjectivity}$$</span> Because <span class="math-container">$2f(1)+y$</span> can attain any real value. 
Then there exists a real number <span class="math-container">$a$</span> such that <span class="math-container">$f(a)=0$</span> <span class="math-container">$$P(a,a) \implies a=\pm \sqrt{f(0)}$$</span> <span class="math-container">$$f(xf(1)-f(x))=2f(x)+x$$</span> and there exists a real number <span class="math-container">$c$</span> such that <span class="math-container">$f(c)=1$</span> <span class="math-container">$$P(x,c) \implies f(x-f(x))=2f(x)+xc \tag{1}$$</span> Let <span class="math-container">$b \ne a \implies f(b) \ne 0$</span> be a real number, <span class="math-container">$$P(x/f(b),b) \implies f(x-f(x))=2f(x)+xb/f(b) \tag{2}$$</span> Combining <span class="math-container">$(1)$</span> with <span class="math-container">$(2)$</span>, we get <span class="math-container">$$f(b)=b/c, \ \ \ \forall \ b \ne a, \ c \ne 0$$</span> Substituting that in the original equation, we get <span class="math-container">$$xy/c^2-x/c^2=2x/c+xy$$</span> which implies that no such <span class="math-container">$c \ne 0$</span> exists <span class="math-container">$$\implies c=0 \implies f(0)=1 \implies a= \pm 1$$</span> Case 1: <span class="math-container">$a=1 \ \ (f(1)=0)$</span> <span class="math-container">$$P(1,y) \implies f(f(y))=y \tag{3}$$</span> <span class="math-container">$$P(x,1) \implies f(-f(x))=2f(x)+x $$</span> <span class="math-container">$$\overset{(3)}{\implies} f(-x)=2x+f(x) \tag{4}$$</span> <span class="math-container">$$\overset{x=1}{\implies} f(-1)=2$$</span> <span class="math-container">$$P(-1,x) \implies f(-f(x)-2)=4-x \overset{(3)}{\implies} f(-x-2)=4-f(x) \overset{(4)}{\implies} f(x-2)=4-2x-f(x)$$</span> So, by strong induction, we have for all integers <span class="math-container">$k$</span>, <span class="math-container">$$f(k)=1-k$$</span> and to prove it we put <span class="math-container">$x=k+1$</span> in <span class="math-container">$f(x-2)=4-2x-f(x)$</span> getting <span class="math-container">$$f(k-1)=2-2k-f(k+1) \iff 
f(k+1)=2-f(k-1)-2k=2-(1-(k-1))-2k=-k$$</span> So, it's proven. Now, let <span class="math-container">$k$</span> be a positive integer, <span class="math-container">$$P(x,k) \implies f(x-xk-f(x))=2f(x)+xk$$</span> <span class="math-container">$$P(x,0) \implies f(x-f(x))=2f(x)$$</span> So, we combine the above two equations to get <span class="math-container">$$f((x-f(x))-xk)=f(x-f(x))+xk$$</span> Due to the surjectivity, <span class="math-container">$x-f(x)$</span> can attain any real value, and so does <span class="math-container">$xk$</span>, so we set <span class="math-container">$x-f(x)=u$</span> and <span class="math-container">$-xk=v$</span> getting <span class="math-container">$$f(u+v)=f(u)-v \tag{5}$$</span> <span class="math-container">$$\implies f(xf(y)-f(x))=f(xf(y))+f(x) \overset{(3), P(x,y)}{\implies} f(xf(y))=f(x)+xy$$</span> <span class="math-container">$$ \overset{(3), y=f(y)}{\implies} f(xy)=f(x)+xf(y)$$</span> <span class="math-container">$$\overset{x \mapsto x+1}{\implies}f(xy+y)=f(x+1)+xf(y)+f(y)\overset{(5)}{\implies} f(xy)-y=f(x)-1+xf(y)+f(y)$$</span> <span class="math-container">$$\overset{x=1}{\implies} f(y)-y=f(1)-1+f(y)+f(y) \iff f(y)=1-y \ \ \ \Box.$$</span> Case 2: <span class="math-container">$a=-1$</span> <span class="math-container">$$P(-1,x) \implies f(-f(x))=-x$$</span> <span class="math-container">$$P(x,-1) \implies f(-f(x))=2f(x)-x$$</span> So <span class="math-container">$f(x)=0$</span> which doesn't work in the original equation.</p> Answer 17: <p>This is a really tough problem, I like it very much! I tried to make it clear where I used what, but nevertheless its a bit messy. Feel free to ask if something is unclear!</p> <p>Let $f$ be a solution of the equation and let $P(x,y)$ e the assertion $f(xf(y)-f(x))=2f(x)+xy$ Then $$ P(1,y):\qquad f(f(y)-f(1))=2f(1)+y\implies f\text{ surjective} $$ Thus let $a,b\in\mathbb{R}$ be such that $f(a)=0$ and $f(b)=1$. 
$$ P(a,b):\qquad 0=f(af(b)-f(a))=ab $$ Therefore either $a=0$ or $b=0$.</p> <p><strong>Case 1 :</strong> $a=0$ $$ P(1,1):\qquad 0=2f(1)+1\implies f(1)=-\frac12 $$ Thus if we reconsider $P(1,y)$ we obtain $f(f(y)+1/2)=y-1$. Hence $$ P(1,f(y)+1/2):\qquad f(y-1/2)=f(f(f(y)+1/2)+1/2)=f(y)+1/2-1=f(y)-1/2 $$ and with $y=f(x)+1/2$ in the above equation: $$ f(f(x))=f(f(x)+1/2)-1/2=x-3/2 $$ But thus $0=f(f(0))=-3/2$, contradiction.</p> <p><strong>Case 2 :</strong> $b=0$</p> <p>Then again $P(1,1)$ gives $f(0)=1$. Therefore if we reconsider $P(1,y)$ we get $f(f(y))=y\tag{1}\label{1}$ and thus $f$ is bijective. Now $$ P(x,0):\qquad f(x-f(x))=2f(x)\tag{2}\label{2} $$ $$ P(f(x),0):\qquad f(f(x)-x)=2x\\\stackrel{f()}\implies f(x)-x\stackrel{\eqref{1}}=f(f(f(x)-x))=f(2x)\tag{3}\label{3} $$ $$ P(f(x),1):\qquad f(-x)\stackrel{\eqref{1}}=f(-f(f(x)))\stackrel{P(f(x),1)}=2f(f(x))+f(x)\stackrel{\eqref{1}}=2x+f(x)\tag{4}\label{4} $$ Now $f(-1)\stackrel{\eqref{4}}=2+f(1)=2$ and thus $f(2)=f(f(-1))\stackrel{\eqref{1}}=-1$. Finally we take $$ P(x,2):f(-x-f(x))=2f(x)+2x $$ But $f(-x-f(x))\stackrel{\eqref{4}}=2(x+f(x))+f(x+f(x))$ and thus the above equation simplifies to $f(x+f(x))=0$. But $f$ is injective and we know already that $f(1)=0$, so $$ x+f(x)=1\implies f(x)=1-x\qquad\forall x\in\mathbb{R} $$ Substituting this back into the original equation we see that this is indeed a solution.</p> Answer 18: <p>Fact 1: $f$ has a root. To see this, take $x \not = 0$ and $y = \frac{-2f(x)}{x}$. </p> <p>Fact 2: If $f(c) = 0$, then $c = \pm\sqrt{f(0)}$. To see this, set $x=y=c$.</p> <p>Fact 3: $f(0) = \pm 1$. To see this, let $c$ be a root and set $x=c, y=0$. Then, $f(cf(0)) = 0 \implies cf(0)=\pm\sqrt{f(0)}$ but $c = \pm\sqrt{f(0)}$.</p> <p>Fact 4: $f(0) = 1$. If $f(0) = -1$, then $x=y=0$ gives $f(1) = -2$. Then $x=1,y=0$ gives $f(-1-(-2))=2(-2) = -4$, a contradiction.</p> <p>Fact 5: $f(-1) = 2$ and $f(1) = 0$. The first equality follows from $x=0,y=0$. 
The second then comes from Fact 1, Fact 2, and $f(-1)\not = 0$.</p> <p>Fact 6: $f(f(x)) = x$ for all $x \in \mathbb{R}$. This follows from setting $x=1$.</p> <p>Fact 7: $f(-a) = 2a+f(a)$ for all $a \in \mathbb{R}$. This follows from $x=f(a),y=1$ and noting Fact 6 implies $f$ is surjective.</p> <p>Fact 8: $f(-x-f(x)) = 2x+2f(x)$. This follows from $y=2$.</p> <p>Fact 9: $f(-x-f(x)) = 2x+2f(x)+f(x+f(x))$. This follows from letting $a = x+f(x)$ in Fact 7.</p> <p>Fact 10: $f(x) = -x+1$. This follows from Facts 8,9 , Fact 6 showing $f$ is injective, and Fact 5, which gives $f(1) = 0$.</p> Answer 19: <p>Let's investigate first the affine solutions $x\mapsto ax+b$. The equation rewrites as</p> <p>$$a(axy+bx-ax-b)+b=2ax+2b+xy\iff (a^2-1)xy+a(b-a-2)x-b(a+1)=0$$</p> <p>Identifying the coefficients of the terms of the two sides of this polynomial equations we get that $x\mapsto -x+1$ is the only affine solution to the problem.</p> <p>More painful but with the same method we prove that the only polynomial (power series solution) is $x\mapsto -x+1$</p> <p>That's all I have for the moment</p> Answer 20: <p><span class="math-container">$$P(x,y) \implies f(xf(y)−f(x))=2f(x)+xy$$</span> <span class="math-container">$$P(1,y) \implies f(f(y)-f(1))=2f(1)+y \implies \text{Surjectivity}$$</span> Because <span class="math-container">$2f(1)+y$</span> can attain any real value. 
Then there exists a real number <span class="math-container">$a$</span> such that <span class="math-container">$f(a)=0$</span> <span class="math-container">$$P(a,a) \implies a=\pm \sqrt{f(0)}$$</span> <span class="math-container">$$f(xf(1)-f(x))=2f(x)+x$$</span> and there exists a real number <span class="math-container">$c$</span> such that <span class="math-container">$f(c)=1$</span> <span class="math-container">$$P(x,c) \implies f(x-f(x))=2f(x)+xc \tag{1}$$</span> Let <span class="math-container">$b \ne a \implies f(b) \ne 0$</span> be a real number, <span class="math-container">$$P(x/f(b),b) \implies f(x-f(x))=2f(x)+xb/f(b) \tag{2}$$</span> Combining <span class="math-container">$(1)$</span> with <span class="math-container">$(2)$</span>, we get <span class="math-container">$$f(b)=b/c, \ \ \ \forall \ b \ne a, \ c \ne 0$$</span> Substituting that in the original equation, we get <span class="math-container">$$xy/c^2-x/c^2=2x/c+xy$$</span> which implies that no such <span class="math-container">$c \ne 0$</span> exists <span class="math-container">$$\implies c=0 \implies f(0)=1 \implies a= \pm 1$$</span> Case 1: <span class="math-container">$a=1 \ \ (f(1)=0)$</span> <span class="math-container">$$P(1,y) \implies f(f(y))=y \tag{3}$$</span> <span class="math-container">$$P(x,1) \implies f(-f(x))=2f(x)+x $$</span> <span class="math-container">$$\overset{(3)}{\implies} f(-x)=2x+f(x) \tag{4}$$</span> <span class="math-container">$$\overset{x=1}{\implies} f(-1)=2$$</span> <span class="math-container">$$P(-1,x) \implies f(-f(x)-2)=4-x \overset{(3)}{\implies} f(-x-2)=4-f(x) \overset{(4)}{\implies} f(x-2)=4-2x-f(x)$$</span> So, by strong induction, we have for all integers <span class="math-container">$k$</span>, <span class="math-container">$$f(k)=1-k$$</span> and to prove it we put <span class="math-container">$x=k+1$</span> in <span class="math-container">$f(x-2)=4-2x-f(x)$</span> getting <span class="math-container">$$f(k-1)=2-2k-f(k+1) \iff 
f(k+1)=2-f(k-1)-2k=2-(1-(k-1))-2k=-k$$</span> So, it's proven. Now, let <span class="math-container">$k$</span> be a positive integer, <span class="math-container">$$P(x,k) \implies f(x-xk-f(x))=2f(x)+xk$$</span> <span class="math-container">$$P(x,0) \implies f(x-f(x))=2f(x)$$</span> So, we combine the above two equations to get <span class="math-container">$$f((x-f(x))-xk)=f(x-f(x))+xk$$</span> Due to the surjectivity, <span class="math-container">$x-f(x)$</span> can attain any real value, and so does <span class="math-container">$xk$</span>, so we set <span class="math-container">$x-f(x)=u$</span> and <span class="math-container">$-xk=v$</span> getting <span class="math-container">$$f(u+v)=f(u)-v \tag{5}$$</span> <span class="math-container">$$\implies f(xf(y)-f(x))=f(xf(y))+f(x) \overset{(3), P(x,y)}{\implies} f(xf(y))=f(x)+xy$$</span> <span class="math-container">$$ \overset{(3), y=f(y)}{\implies} f(xy)=f(x)+xf(y)$$</span> <span class="math-container">$$\overset{x \mapsto x+1}{\implies}f(xy+y)=f(x+1)+xf(y)+f(y)\overset{(5)}{\implies} f(xy)-y=f(x)-1+xf(y)+f(y)$$</span> <span class="math-container">$$\overset{x=1}{\implies} f(y)-y=f(1)-1+f(y)+f(y) \iff f(y)=1-y \ \ \ \Box.$$</span> Case 2: <span class="math-container">$a=-1$</span> <span class="math-container">$$P(-1,x) \implies f(-f(x))=-x$$</span> <span class="math-container">$$P(x,-1) \implies f(-f(x))=2f(x)-x$$</span> So <span class="math-container">$f(x)=0$</span> which doesn't work in the original equation.</p> Answer 21: <p>This is a really tough problem, I like it very much! I tried to make it clear where I used what, but nevertheless its a bit messy. Feel free to ask if something is unclear!</p> <p>Let $f$ be a solution of the equation and let $P(x,y)$ e the assertion $f(xf(y)-f(x))=2f(x)+xy$ Then $$ P(1,y):\qquad f(f(y)-f(1))=2f(1)+y\implies f\text{ surjective} $$ Thus let $a,b\in\mathbb{R}$ be such that $f(a)=0$ and $f(b)=1$. 
$$ P(a,b):\qquad 0=f(af(b)-f(a))=ab $$ Therefore either $a=0$ or $b=0$.</p> <p><strong>Case 1 :</strong> $a=0$ $$ P(1,1):\qquad 0=2f(1)+1\implies f(1)=-\frac12 $$ Thus if we reconsider $P(1,y)$ we obtain $f(f(y)+1/2)=y-1$. Hence $$ P(1,f(y)+1/2):\qquad f(y-1/2)=f(f(f(y)+1/2)+1/2)=f(y)+1/2-1=f(y)-1/2 $$ and with $y=f(x)+1/2$ in the above equation: $$ f(f(x))=f(f(x)+1/2)-1/2=x-3/2 $$ But thus $0=f(f(0))=-3/2$, contradiction.</p> <p><strong>Case 2 :</strong> $b=0$</p> <p>Then again $P(1,1)$ gives $f(0)=1$. Therefore if we reconsider $P(1,y)$ we get $f(f(y))=y\tag{1}\label{1}$ and thus $f$ is bijective. Now $$ P(x,0):\qquad f(x-f(x))=2f(x)\tag{2}\label{2} $$ $$ P(f(x),0):\qquad f(f(x)-x)=2x\\\stackrel{f()}\implies f(x)-x\stackrel{\eqref{1}}=f(f(f(x)-x))=f(2x)\tag{3}\label{3} $$ $$ P(f(x),1):\qquad f(-x)\stackrel{\eqref{1}}=f(-f(f(x)))\stackrel{P(f(x),1)}=2f(f(x))+f(x)\stackrel{\eqref{1}}=2x+f(x)\tag{4}\label{4} $$ Now $f(-1)\stackrel{\eqref{4}}=2+f(1)=2$ and thus $f(2)=f(f(-1))\stackrel{\eqref{1}}=-1$. Finally we take $$ P(x,2):f(-x-f(x))=2f(x)+2x $$ But $f(-x-f(x))\stackrel{\eqref{4}}=2(x+f(x))+f(x+f(x))$ and thus the above equation simplifies to $f(x+f(x))=0$. But $f$ is injective and we know already that $f(1)=0$, so $$ x+f(x)=1\implies f(x)=1-x\qquad\forall x\in\mathbb{R} $$ Substituting this back into the original equation we see that this is indeed a solution.</p> Answer 22: <p>Fact 1: $f$ has a root. To see this, take $x \not = 0$ and $y = \frac{-2f(x)}{x}$. </p> <p>Fact 2: If $f(c) = 0$, then $c = \pm\sqrt{f(0)}$. To see this, set $x=y=c$.</p> <p>Fact 3: $f(0) = \pm 1$. To see this, let $c$ be a root and set $x=c, y=0$. Then, $f(cf(0)) = 0 \implies cf(0)=\pm\sqrt{f(0)}$ but $c = \pm\sqrt{f(0)}$.</p> <p>Fact 4: $f(0) = 1$. If $f(0) = -1$, then $x=y=0$ gives $f(1) = -2$. Then $x=1,y=0$ gives $f(-1-(-2))=2(-2) = -4$, a contradiction.</p> <p>Fact 5: $f(-1) = 2$ and $f(1) = 0$. The first equality follows from $x=0,y=0$. 
The second then comes from Fact 1, Fact 2, and $f(-1)\not = 0$.</p> <p>Fact 6: $f(f(x)) = x$ for all $x \in \mathbb{R}$. This follows from setting $x=1$.</p> <p>Fact 7: $f(-a) = 2a+f(a)$ for all $a \in \mathbb{R}$. This follows from $x=f(a),y=1$ and noting Fact 6 implies $f$ is surjective.</p> <p>Fact 8: $f(-x-f(x)) = 2x+2f(x)$. This follows from $y=2$.</p> <p>Fact 9: $f(-x-f(x)) = 2x+2f(x)+f(x+f(x))$. This follows from letting $a = x+f(x)$ in Fact 7.</p> <p>Fact 10: $f(x) = -x+1$. This follows from Facts 8,9 , Fact 6 showing $f$ is injective, and Fact 5, which gives $f(1) = 0$.</p> Answer 23: <p>Let's investigate first the affine solutions $x\mapsto ax+b$. The equation rewrites as</p> <p>$$a(axy+bx-ax-b)+b=2ax+2b+xy\iff (a^2-1)xy+a(b-a-2)x-b(a+1)=0$$</p> <p>Identifying the coefficients of the terms of the two sides of this polynomial equations we get that $x\mapsto -x+1$ is the only affine solution to the problem.</p> <p>More painful but with the same method we prove that the only polynomial (power series solution) is $x\mapsto -x+1$</p> <p>That's all I have for the moment</p> Answer 24: <p><span class="math-container">$$P(x,y) \implies f(xf(y)−f(x))=2f(x)+xy$$</span> <span class="math-container">$$P(1,y) \implies f(f(y)-f(1))=2f(1)+y \implies \text{Surjectivity}$$</span> Because <span class="math-container">$2f(1)+y$</span> can attain any real value. 
Then there exists a real number <span class="math-container">$a$</span> such that <span class="math-container">$f(a)=0$</span> <span class="math-container">$$P(a,a) \implies a=\pm \sqrt{f(0)}$$</span> <span class="math-container">$$f(xf(1)-f(x))=2f(x)+x$$</span> and there exists a real number <span class="math-container">$c$</span> such that <span class="math-container">$f(c)=1$</span> <span class="math-container">$$P(x,c) \implies f(x-f(x))=2f(x)+xc \tag{1}$$</span> Let <span class="math-container">$b \ne a \implies f(b) \ne 0$</span> be a real number, <span class="math-container">$$P(x/f(b),b) \implies f(x-f(x))=2f(x)+xb/f(b) \tag{2}$$</span> Combining <span class="math-container">$(1)$</span> with <span class="math-container">$(2)$</span>, we get <span class="math-container">$$f(b)=b/c, \ \ \ \forall \ b \ne a, \ c \ne 0$$</span> Substituting that in the original equation, we get <span class="math-container">$$xy/c^2-x/c^2=2x/c+xy$$</span> which implies that no such <span class="math-container">$c \ne 0$</span> exists <span class="math-container">$$\implies c=0 \implies f(0)=1 \implies a= \pm 1$$</span> Case 1: <span class="math-container">$a=1 \ \ (f(1)=0)$</span> <span class="math-container">$$P(1,y) \implies f(f(y))=y \tag{3}$$</span> <span class="math-container">$$P(x,1) \implies f(-f(x))=2f(x)+x $$</span> <span class="math-container">$$\overset{(3)}{\implies} f(-x)=2x+f(x) \tag{4}$$</span> <span class="math-container">$$\overset{x=1}{\implies} f(-1)=2$$</span> <span class="math-container">$$P(-1,x) \implies f(-f(x)-2)=4-x \overset{(3)}{\implies} f(-x-2)=4-f(x) \overset{(4)}{\implies} f(x-2)=4-2x-f(x)$$</span> So, by strong induction, we have for all integers <span class="math-container">$k$</span>, <span class="math-container">$$f(k)=1-k$$</span> and to prove it we put <span class="math-container">$x=k+1$</span> in <span class="math-container">$f(x-2)=4-2x-f(x)$</span> getting <span class="math-container">$$f(k-1)=2-2k-f(k+1) \iff 
f(k+1)=2-f(k-1)-2k=2-(1-(k-1))-2k=-k$$</span> So, it's proven. Now, let <span class="math-container">$k$</span> be a positive integer, <span class="math-container">$$P(x,k) \implies f(x-xk-f(x))=2f(x)+xk$$</span> <span class="math-container">$$P(x,0) \implies f(x-f(x))=2f(x)$$</span> So, we combine the above two equations to get <span class="math-container">$$f((x-f(x))-xk)=f(x-f(x))+xk$$</span> Due to the surjectivity, <span class="math-container">$x-f(x)$</span> can attain any real value, and so does <span class="math-container">$xk$</span>, so we set <span class="math-container">$x-f(x)=u$</span> and <span class="math-container">$-xk=v$</span> getting <span class="math-container">$$f(u+v)=f(u)-v \tag{5}$$</span> <span class="math-container">$$\implies f(xf(y)-f(x))=f(xf(y))+f(x) \overset{(3), P(x,y)}{\implies} f(xf(y))=f(x)+xy$$</span> <span class="math-container">$$ \overset{(3), y=f(y)}{\implies} f(xy)=f(x)+xf(y)$$</span> <span class="math-container">$$\overset{x \mapsto x+1}{\implies}f(xy+y)=f(x+1)+xf(y)+f(y)\overset{(5)}{\implies} f(xy)-y=f(x)-1+xf(y)+f(y)$$</span> <span class="math-container">$$\overset{x=1}{\implies} f(y)-y=f(1)-1+f(y)+f(y) \iff f(y)=1-y \ \ \ \Box.$$</span> Case 2: <span class="math-container">$a=-1$</span> <span class="math-container">$$P(-1,x) \implies f(-f(x))=-x$$</span> <span class="math-container">$$P(x,-1) \implies f(-f(x))=2f(x)-x$$</span> So <span class="math-container">$f(x)=0$</span> which doesn't work in the original equation.</p>
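Every approach above converges on the same closed form, $f(x)=1-x$. As a quick numerical sanity check (a sketch only, not part of any of the proofs; the helper name `check` is just an illustration), one can confirm on a grid of real test points that this candidate satisfies $f(xf(y)-f(x))=2f(x)+xy$:

```python
# Numerically verify that f(x) = 1 - x satisfies
# f(x*f(y) - f(x)) = 2*f(x) + x*y at every (x, y) in a test grid.

def f(t):
    return 1 - t

def check(points, tol=1e-9):
    """Return True if the functional equation holds at every (x, y) pair."""
    for x in points:
        for y in points:
            lhs = f(x * f(y) - f(x))
            rhs = 2 * f(x) + x * y
            if abs(lhs - rhs) > tol:
                return False
    return True

grid = [k / 4 for k in range(-20, 21)]  # x, y ranging over -5.0 .. 5.0
print(check(grid))
```

Algebraically the verification is immediate: $f(xf(y)-f(x)) = 1-\bigl(x(1-y)-(1-x)\bigr) = 2-2x+xy = 2f(x)+xy$.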
$$ P(a,b):\qquad 0=f(af(b)-f(a))=ab $$ Therefore either $a=0$ or $b=0$.</p> <p><strong>Case 1 :</strong> $a=0$ $$ P(1,1):\qquad 0=f(0)=2f(1)+1\implies f(1)=-\frac12 $$ Thus if we reconsider $P(1,y)$ we obtain $f(f(y)+1/2)=y-1$. Hence $$ P(1,f(y)+1/2):\qquad f(y-1/2)=f(f(f(y)+1/2)+1/2)=f(y)+1/2-1=f(y)-1/2 $$ and with $y=f(x)+1/2$ in the above equation: $$ f(f(x))=f(f(x)+1/2)-1/2=x-3/2 $$ But then, since $f(0)=0$, we get $0=f(f(0))=-3/2$, a contradiction.</p> <p><strong>Case 2 :</strong> $b=0$</p> <p>Then $f(0)=f(b)=1$, and $P(1,1)$ gives $f(0)=2f(1)+1$, hence $f(1)=0$. Therefore if we reconsider $P(1,y)$ we get $f(f(y))=y\tag{1}\label{1}$ and thus $f$ is bijective. Now $$ P(x,0):\qquad f(x-f(x))=2f(x)\tag{2}\label{2} $$ $$ P(f(x),0):\qquad f(f(x)-x)=2x\\\stackrel{f()}\implies f(x)-x\stackrel{\eqref{1}}=f(f(f(x)-x))=f(2x)\tag{3}\label{3} $$ $$ P(f(x),1):\qquad f(-x)\stackrel{\eqref{1}}=f(-f(f(x)))\stackrel{P(f(x),1)}=2f(f(x))+f(x)\stackrel{\eqref{1}}=2x+f(x)\tag{4}\label{4} $$ Now $f(-1)\stackrel{\eqref{4}}=2+f(1)=2$ and thus $f(2)=f(f(-1))\stackrel{\eqref{1}}=-1$. Finally we take $$ P(x,2):f(-x-f(x))=2f(x)+2x $$ But $f(-x-f(x))\stackrel{\eqref{4}}=2(x+f(x))+f(x+f(x))$ and thus the above equation simplifies to $f(x+f(x))=0$. But $f$ is injective and we know already that $f(1)=0$, so $$ x+f(x)=1\implies f(x)=1-x\qquad\forall x\in\mathbb{R} $$ Substituting this back into the original equation we see that this is indeed a solution.</p> Answer 42: <p>Fact 1: $f$ has a root. To see this, take $x \not = 0$ and $y = \frac{-2f(x)}{x}$, so that the right-hand side vanishes.</p> <p>Fact 2: If $f(c) = 0$, then $c = \pm\sqrt{f(0)}$. To see this, set $x=y=c$.</p> <p>Fact 3: $f(0) = \pm 1$. To see this, let $c$ be a root and set $x=c, y=0$. Then, $f(cf(0)) = 0 \implies cf(0)=\pm\sqrt{f(0)}$; but also $c = \pm\sqrt{f(0)}$, which forces $f(0)=\pm 1$.</p> <p>Fact 4: $f(0) = 1$. If $f(0) = -1$, then $x=y=0$ gives $f(1) = -2$. Then $x=1,y=0$ gives $f(-1-(-2))=2(-2) = -4$, a contradiction.</p> <p>Fact 5: $f(-1) = 2$ and $f(1) = 0$. The first equality follows from $x=0,y=0$.
The second then comes from Fact 1, Fact 2, and $f(-1)\not = 0$.</p> <p>Fact 6: $f(f(x)) = x$ for all $x \in \mathbb{R}$. This follows from setting $x=1$.</p> <p>Fact 7: $f(-a) = 2a+f(a)$ for all $a \in \mathbb{R}$. This follows from $x=f(a),y=1$ and noting Fact 6 implies $f$ is surjective.</p> <p>Fact 8: $f(-x-f(x)) = 2x+2f(x)$. This follows from $y=2$.</p> <p>Fact 9: $f(-x-f(x)) = 2x+2f(x)+f(x+f(x))$. This follows from letting $a = x+f(x)$ in Fact 7.</p> <p>Fact 10: $f(x) = -x+1$. This follows from Facts 8,9 , Fact 6 showing $f$ is injective, and Fact 5, which gives $f(1) = 0$.</p> Answer 43: <p>Let's investigate first the affine solutions $x\mapsto ax+b$. The equation rewrites as</p> <p>$$a(axy+bx-ax-b)+b=2ax+2b+xy\iff (a^2-1)xy+a(b-a-2)x-b(a+1)=0$$</p> <p>Identifying the coefficients of the terms of the two sides of this polynomial equations we get that $x\mapsto -x+1$ is the only affine solution to the problem.</p> <p>More painful but with the same method we prove that the only polynomial (power series solution) is $x\mapsto -x+1$</p> <p>That's all I have for the moment</p> Answer 44: <p><span class="math-container">$$P(x,y) \implies f(xf(y)−f(x))=2f(x)+xy$$</span> <span class="math-container">$$P(1,y) \implies f(f(y)-f(1))=2f(1)+y \implies \text{Surjectivity}$$</span> Because <span class="math-container">$2f(1)+y$</span> can attain any real value. 
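As a quick illustrative aside (the script below is an addition for this write-up, not part of any of the answers), the closed form $f(x)=1-x$ is easy to machine-check against the functional equation $f(xf(y)-f(x))=2f(x)+xy$ on randomly sampled reals:

```python
# Sanity check (illustrative): verify numerically that f(x) = 1 - x
# satisfies f(x*f(y) - f(x)) = 2*f(x) + x*y for many random (x, y).
import random

def f(t):
    return 1.0 - t

random.seed(0)
for _ in range(10_000):
    x = random.uniform(-1e3, 1e3)
    y = random.uniform(-1e3, 1e3)
    lhs = f(x * f(y) - f(x))
    rhs = 2.0 * f(x) + x * y
    # Allow for floating-point rounding with a relative tolerance.
    assert abs(lhs - rhs) <= 1e-9 * max(1.0, abs(rhs)), (x, y, lhs, rhs)

print("f(x) = 1 - x satisfies the equation at all sampled points")
```

By hand, the check is exact: with $f(t)=1-t$ we have $xf(y)-f(x)=2x-xy-1$, so the left side is $1-(2x-xy-1)=2-2x+xy$, which equals $2f(x)+xy=2(1-x)+xy$.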
Then there exists a real number <span class="math-container">$a$</span> such that <span class="math-container">$f(a)=0$</span> <span class="math-container">$$P(a,a) \implies a=\pm \sqrt{f(0)}$$</span> <span class="math-container">$$f(xf(1)-f(x))=2f(x)+x$$</span> and there exists a real number <span class="math-container">$c$</span> such that <span class="math-container">$f(c)=1$</span> <span class="math-container">$$P(x,c) \implies f(x-f(x))=2f(x)+xc \tag{1}$$</span> Let <span class="math-container">$b \ne a \implies f(b) \ne 0$</span> be a real number, <span class="math-container">$$P(x/f(b),b) \implies f(x-f(x))=2f(x)+xb/f(b) \tag{2}$$</span> Combining <span class="math-container">$(1)$</span> with <span class="math-container">$(2)$</span>, we get <span class="math-container">$$f(b)=b/c, \ \ \ \forall \ b \ne a, \ c \ne 0$$</span> Substituting that in the original equation, we get <span class="math-container">$$xy/c^2-x/c^2=2x/c+xy$$</span> which implies that no such <span class="math-container">$c \ne 0$</span> exists <span class="math-container">$$\implies c=0 \implies f(0)=1 \implies a= \pm 1$$</span> Case 1: <span class="math-container">$a=1 \ \ (f(1)=0)$</span> <span class="math-container">$$P(1,y) \implies f(f(y))=y \tag{3}$$</span> <span class="math-container">$$P(x,1) \implies f(-f(x))=2f(x)+x $$</span> <span class="math-container">$$\overset{(3)}{\implies} f(-x)=2x+f(x) \tag{4}$$</span> <span class="math-container">$$\overset{x=1}{\implies} f(-1)=2$$</span> <span class="math-container">$$P(-1,x) \implies f(-f(x)-2)=4-x \overset{(3)}{\implies} f(-x-2)=4-f(x) \overset{(4)}{\implies} f(x-2)=4-2x-f(x)$$</span> So, by strong induction, we have for all integers <span class="math-container">$k$</span>, <span class="math-container">$$f(k)=1-k$$</span> and to prove it we put <span class="math-container">$x=k+1$</span> in <span class="math-container">$f(x-2)=4-2x-f(x)$</span> getting <span class="math-container">$$f(k-1)=2-2k-f(k+1) \iff 
f(k+1)=2-f(k-1)-2k=2-(1-(k-1))-2k=-k$$</span> So, it's proven. Now, let <span class="math-container">$k$</span> be a positive integer, <span class="math-container">$$P(x,k) \implies f(x-xk-f(x))=2f(x)+xk$$</span> <span class="math-container">$$P(x,0) \implies f(x-f(x))=2f(x)$$</span> So, we combine the above two equations to get <span class="math-container">$$f((x-f(x))-xk)=f(x-f(x))+xk$$</span> Due to the surjectivity, <span class="math-container">$x-f(x)$</span> can attain any real value, and so does <span class="math-container">$xk$</span>, so we set <span class="math-container">$x-f(x)=u$</span> and <span class="math-container">$-xk=v$</span> getting <span class="math-container">$$f(u+v)=f(u)-v \tag{5}$$</span> <span class="math-container">$$\implies f(xf(y)-f(x))=f(xf(y))+f(x) \overset{(3), P(x,y)}{\implies} f(xf(y))=f(x)+xy$$</span> <span class="math-container">$$ \overset{(3), y=f(y)}{\implies} f(xy)=f(x)+xf(y)$$</span> <span class="math-container">$$\overset{x \mapsto x+1}{\implies}f(xy+y)=f(x+1)+xf(y)+f(y)\overset{(5)}{\implies} f(xy)-y=f(x)-1+xf(y)+f(y)$$</span> <span class="math-container">$$\overset{x=1}{\implies} f(y)-y=f(1)-1+f(y)+f(y) \iff f(y)=1-y \ \ \ \Box.$$</span> Case 2: <span class="math-container">$a=-1$</span> <span class="math-container">$$P(-1,x) \implies f(-f(x))=-x$$</span> <span class="math-container">$$P(x,-1) \implies f(-f(x))=2f(x)-x$$</span> So <span class="math-container">$f(x)=0$</span> which doesn't work in the original equation.</p> Answer 45: <p>This is a really tough problem, I like it very much! I tried to make it clear where I used what, but nevertheless its a bit messy. Feel free to ask if something is unclear!</p> <p>Let $f$ be a solution of the equation and let $P(x,y)$ e the assertion $f(xf(y)-f(x))=2f(x)+xy$ Then $$ P(1,y):\qquad f(f(y)-f(1))=2f(1)+y\implies f\text{ surjective} $$ Thus let $a,b\in\mathbb{R}$ be such that $f(a)=0$ and $f(b)=1$. 
$$ P(a,b):\qquad 0=f(af(b)-f(a))=ab $$ Therefore either $a=0$ or $b=0$.</p> <p><strong>Case 1 :</strong> $a=0$ $$ P(1,1):\qquad 0=2f(1)+1\implies f(1)=-\frac12 $$ Thus if we reconsider $P(1,y)$ we obtain $f(f(y)+1/2)=y-1$. Hence $$ P(1,f(y)+1/2):\qquad f(y-1/2)=f(f(f(y)+1/2)+1/2)=f(y)+1/2-1=f(y)-1/2 $$ and with $y=f(x)+1/2$ in the above equation: $$ f(f(x))=f(f(x)+1/2)-1/2=x-3/2 $$ But thus $0=f(f(0))=-3/2$, contradiction.</p> <p><strong>Case 2 :</strong> $b=0$</p> <p>Then again $P(1,1)$ gives $f(0)=1$. Therefore if we reconsider $P(1,y)$ we get $f(f(y))=y\tag{1}\label{1}$ and thus $f$ is bijective. Now $$ P(x,0):\qquad f(x-f(x))=2f(x)\tag{2}\label{2} $$ $$ P(f(x),0):\qquad f(f(x)-x)=2x\\\stackrel{f()}\implies f(x)-x\stackrel{\eqref{1}}=f(f(f(x)-x))=f(2x)\tag{3}\label{3} $$ $$ P(f(x),1):\qquad f(-x)\stackrel{\eqref{1}}=f(-f(f(x)))\stackrel{P(f(x),1)}=2f(f(x))+f(x)\stackrel{\eqref{1}}=2x+f(x)\tag{4}\label{4} $$ Now $f(-1)\stackrel{\eqref{4}}=2+f(1)=2$ and thus $f(2)=f(f(-1))\stackrel{\eqref{1}}=-1$. Finally we take $$ P(x,2):f(-x-f(x))=2f(x)+2x $$ But $f(-x-f(x))\stackrel{\eqref{4}}=2(x+f(x))+f(x+f(x))$ and thus the above equation simplifies to $f(x+f(x))=0$. But $f$ is injective and we know already that $f(1)=0$, so $$ x+f(x)=1\implies f(x)=1-x\qquad\forall x\in\mathbb{R} $$ Substituting this back into the original equation we see that this is indeed a solution.</p> Answer 46: <p>Fact 1: $f$ has a root. To see this, take $x \not = 0$ and $y = \frac{-2f(x)}{x}$. </p> <p>Fact 2: If $f(c) = 0$, then $c = \pm\sqrt{f(0)}$. To see this, set $x=y=c$.</p> <p>Fact 3: $f(0) = \pm 1$. To see this, let $c$ be a root and set $x=c, y=0$. Then, $f(cf(0)) = 0 \implies cf(0)=\pm\sqrt{f(0)}$ but $c = \pm\sqrt{f(0)}$.</p> <p>Fact 4: $f(0) = 1$. If $f(0) = -1$, then $x=y=0$ gives $f(1) = -2$. Then $x=1,y=0$ gives $f(-1-(-2))=2(-2) = -4$, a contradiction.</p> <p>Fact 5: $f(-1) = 2$ and $f(1) = 0$. The first equality follows from $x=0,y=0$. 
The second then comes from Fact 1, Fact 2, and $f(-1)\not = 0$.</p> <p>Fact 6: $f(f(x)) = x$ for all $x \in \mathbb{R}$. This follows from setting $x=1$.</p> <p>Fact 7: $f(-a) = 2a+f(a)$ for all $a \in \mathbb{R}$. This follows from $x=f(a),y=1$ and noting Fact 6 implies $f$ is surjective.</p> <p>Fact 8: $f(-x-f(x)) = 2x+2f(x)$. This follows from $y=2$.</p> <p>Fact 9: $f(-x-f(x)) = 2x+2f(x)+f(x+f(x))$. This follows from letting $a = x+f(x)$ in Fact 7.</p> <p>Fact 10: $f(x) = -x+1$. This follows from Facts 8,9 , Fact 6 showing $f$ is injective, and Fact 5, which gives $f(1) = 0$.</p> Answer 47: <p>Let's investigate first the affine solutions $x\mapsto ax+b$. The equation rewrites as</p> <p>$$a(axy+bx-ax-b)+b=2ax+2b+xy\iff (a^2-1)xy+a(b-a-2)x-b(a+1)=0$$</p> <p>Identifying the coefficients of the terms of the two sides of this polynomial equations we get that $x\mapsto -x+1$ is the only affine solution to the problem.</p> <p>More painful but with the same method we prove that the only polynomial (power series solution) is $x\mapsto -x+1$</p> <p>That's all I have for the moment</p> Answer 48: <p><span class="math-container">$$P(x,y) \implies f(xf(y)−f(x))=2f(x)+xy$$</span> <span class="math-container">$$P(1,y) \implies f(f(y)-f(1))=2f(1)+y \implies \text{Surjectivity}$$</span> Because <span class="math-container">$2f(1)+y$</span> can attain any real value. 
Then there exists a real number <span class="math-container">$a$</span> such that <span class="math-container">$f(a)=0$</span> <span class="math-container">$$P(a,a) \implies a=\pm \sqrt{f(0)}$$</span> <span class="math-container">$$f(xf(1)-f(x))=2f(x)+x$$</span> and there exists a real number <span class="math-container">$c$</span> such that <span class="math-container">$f(c)=1$</span> <span class="math-container">$$P(x,c) \implies f(x-f(x))=2f(x)+xc \tag{1}$$</span> Let <span class="math-container">$b \ne a \implies f(b) \ne 0$</span> be a real number, <span class="math-container">$$P(x/f(b),b) \implies f(x-f(x))=2f(x)+xb/f(b) \tag{2}$$</span> Combining <span class="math-container">$(1)$</span> with <span class="math-container">$(2)$</span>, we get <span class="math-container">$$f(b)=b/c, \ \ \ \forall \ b \ne a, \ c \ne 0$$</span> Substituting that in the original equation, we get <span class="math-container">$$xy/c^2-x/c^2=2x/c+xy$$</span> which implies that no such <span class="math-container">$c \ne 0$</span> exists <span class="math-container">$$\implies c=0 \implies f(0)=1 \implies a= \pm 1$$</span> Case 1: <span class="math-container">$a=1 \ \ (f(1)=0)$</span> <span class="math-container">$$P(1,y) \implies f(f(y))=y \tag{3}$$</span> <span class="math-container">$$P(x,1) \implies f(-f(x))=2f(x)+x $$</span> <span class="math-container">$$\overset{(3)}{\implies} f(-x)=2x+f(x) \tag{4}$$</span> <span class="math-container">$$\overset{x=1}{\implies} f(-1)=2$$</span> <span class="math-container">$$P(-1,x) \implies f(-f(x)-2)=4-x \overset{(3)}{\implies} f(-x-2)=4-f(x) \overset{(4)}{\implies} f(x-2)=4-2x-f(x)$$</span> So, by strong induction, we have for all integers <span class="math-container">$k$</span>, <span class="math-container">$$f(k)=1-k$$</span> and to prove it we put <span class="math-container">$x=k+1$</span> in <span class="math-container">$f(x-2)=4-2x-f(x)$</span> getting <span class="math-container">$$f(k-1)=2-2k-f(k+1) \iff 
f(k+1)=2-f(k-1)-2k=2-(1-(k-1))-2k=-k$$</span> So, it's proven. Now, let <span class="math-container">$k$</span> be a positive integer, <span class="math-container">$$P(x,k) \implies f(x-xk-f(x))=2f(x)+xk$$</span> <span class="math-container">$$P(x,0) \implies f(x-f(x))=2f(x)$$</span> So, we combine the above two equations to get <span class="math-container">$$f((x-f(x))-xk)=f(x-f(x))+xk$$</span> Due to the surjectivity, <span class="math-container">$x-f(x)$</span> can attain any real value, and so does <span class="math-container">$xk$</span>, so we set <span class="math-container">$x-f(x)=u$</span> and <span class="math-container">$-xk=v$</span> getting <span class="math-container">$$f(u+v)=f(u)-v \tag{5}$$</span> <span class="math-container">$$\implies f(xf(y)-f(x))=f(xf(y))+f(x) \overset{(3), P(x,y)}{\implies} f(xf(y))=f(x)+xy$$</span> <span class="math-container">$$ \overset{(3), y=f(y)}{\implies} f(xy)=f(x)+xf(y)$$</span> <span class="math-container">$$\overset{x \mapsto x+1}{\implies}f(xy+y)=f(x+1)+xf(y)+f(y)\overset{(5)}{\implies} f(xy)-y=f(x)-1+xf(y)+f(y)$$</span> <span class="math-container">$$\overset{x=1}{\implies} f(y)-y=f(1)-1+f(y)+f(y) \iff f(y)=1-y \ \ \ \Box.$$</span> Case 2: <span class="math-container">$a=-1$</span> <span class="math-container">$$P(-1,x) \implies f(-f(x))=-x$$</span> <span class="math-container">$$P(x,-1) \implies f(-f(x))=2f(x)-x$$</span> So <span class="math-container">$f(x)=0$</span> which doesn't work in the original equation.</p> Answer 49: <p>This is a really tough problem, I like it very much! I tried to make it clear where I used what, but nevertheless its a bit messy. Feel free to ask if something is unclear!</p> <p>Let $f$ be a solution of the equation and let $P(x,y)$ e the assertion $f(xf(y)-f(x))=2f(x)+xy$ Then $$ P(1,y):\qquad f(f(y)-f(1))=2f(1)+y\implies f\text{ surjective} $$ Thus let $a,b\in\mathbb{R}$ be such that $f(a)=0$ and $f(b)=1$. 
$$ P(a,b):\qquad 0=f(af(b)-f(a))=ab $$ Therefore either $a=0$ or $b=0$.</p> <p><strong>Case 1 :</strong> $a=0$ $$ P(1,1):\qquad 0=2f(1)+1\implies f(1)=-\frac12 $$ Thus if we reconsider $P(1,y)$ we obtain $f(f(y)+1/2)=y-1$. Hence $$ P(1,f(y)+1/2):\qquad f(y-1/2)=f(f(f(y)+1/2)+1/2)=f(y)+1/2-1=f(y)-1/2 $$ and with $y=f(x)+1/2$ in the above equation: $$ f(f(x))=f(f(x)+1/2)-1/2=x-3/2 $$ But thus $0=f(f(0))=-3/2$, contradiction.</p> <p><strong>Case 2 :</strong> $b=0$</p> <p>Then again $P(1,1)$ gives $f(0)=1$. Therefore if we reconsider $P(1,y)$ we get $f(f(y))=y\tag{1}\label{1}$ and thus $f$ is bijective. Now $$ P(x,0):\qquad f(x-f(x))=2f(x)\tag{2}\label{2} $$ $$ P(f(x),0):\qquad f(f(x)-x)=2x\\\stackrel{f()}\implies f(x)-x\stackrel{\eqref{1}}=f(f(f(x)-x))=f(2x)\tag{3}\label{3} $$ $$ P(f(x),1):\qquad f(-x)\stackrel{\eqref{1}}=f(-f(f(x)))\stackrel{P(f(x),1)}=2f(f(x))+f(x)\stackrel{\eqref{1}}=2x+f(x)\tag{4}\label{4} $$ Now $f(-1)\stackrel{\eqref{4}}=2+f(1)=2$ and thus $f(2)=f(f(-1))\stackrel{\eqref{1}}=-1$. Finally we take $$ P(x,2):f(-x-f(x))=2f(x)+2x $$ But $f(-x-f(x))\stackrel{\eqref{4}}=2(x+f(x))+f(x+f(x))$ and thus the above equation simplifies to $f(x+f(x))=0$. But $f$ is injective and we know already that $f(1)=0$, so $$ x+f(x)=1\implies f(x)=1-x\qquad\forall x\in\mathbb{R} $$ Substituting this back into the original equation we see that this is indeed a solution.</p> Answer 50: <p>Fact 1: $f$ has a root. To see this, take $x \not = 0$ and $y = \frac{-2f(x)}{x}$. </p> <p>Fact 2: If $f(c) = 0$, then $c = \pm\sqrt{f(0)}$. To see this, set $x=y=c$.</p> <p>Fact 3: $f(0) = \pm 1$. To see this, let $c$ be a root and set $x=c, y=0$. Then, $f(cf(0)) = 0 \implies cf(0)=\pm\sqrt{f(0)}$ but $c = \pm\sqrt{f(0)}$.</p> <p>Fact 4: $f(0) = 1$. If $f(0) = -1$, then $x=y=0$ gives $f(1) = -2$. Then $x=1,y=0$ gives $f(-1-(-2))=2(-2) = -4$, a contradiction.</p> <p>Fact 5: $f(-1) = 2$ and $f(1) = 0$. The first equality follows from $x=0,y=0$. 
The second then comes from Fact 1, Fact 2, and $f(-1)\not = 0$.</p> <p>Fact 6: $f(f(x)) = x$ for all $x \in \mathbb{R}$. This follows from setting $x=1$.</p> <p>Fact 7: $f(-a) = 2a+f(a)$ for all $a \in \mathbb{R}$. This follows from $x=f(a),y=1$ and noting Fact 6 implies $f$ is surjective.</p> <p>Fact 8: $f(-x-f(x)) = 2x+2f(x)$. This follows from $y=2$.</p> <p>Fact 9: $f(-x-f(x)) = 2x+2f(x)+f(x+f(x))$. This follows from letting $a = x+f(x)$ in Fact 7.</p> <p>Fact 10: $f(x) = -x+1$. This follows from Facts 8,9 , Fact 6 showing $f$ is injective, and Fact 5, which gives $f(1) = 0$.</p> Answer 51: <p>Let's investigate first the affine solutions $x\mapsto ax+b$. The equation rewrites as</p> <p>$$a(axy+bx-ax-b)+b=2ax+2b+xy\iff (a^2-1)xy+a(b-a-2)x-b(a+1)=0$$</p> <p>Identifying the coefficients of the terms of the two sides of this polynomial equations we get that $x\mapsto -x+1$ is the only affine solution to the problem.</p> <p>More painful but with the same method we prove that the only polynomial (power series solution) is $x\mapsto -x+1$</p> <p>That's all I have for the moment</p> Answer 52: <p><span class="math-container">$$P(x,y) \implies f(xf(y)−f(x))=2f(x)+xy$$</span> <span class="math-container">$$P(1,y) \implies f(f(y)-f(1))=2f(1)+y \implies \text{Surjectivity}$$</span> Because <span class="math-container">$2f(1)+y$</span> can attain any real value. 
Then there exists a real number <span class="math-container">$a$</span> such that <span class="math-container">$f(a)=0$</span> <span class="math-container">$$P(a,a) \implies a=\pm \sqrt{f(0)}$$</span> <span class="math-container">$$f(xf(1)-f(x))=2f(x)+x$$</span> and there exists a real number <span class="math-container">$c$</span> such that <span class="math-container">$f(c)=1$</span> <span class="math-container">$$P(x,c) \implies f(x-f(x))=2f(x)+xc \tag{1}$$</span> Let <span class="math-container">$b \ne a \implies f(b) \ne 0$</span> be a real number, <span class="math-container">$$P(x/f(b),b) \implies f(x-f(x))=2f(x)+xb/f(b) \tag{2}$$</span> Combining <span class="math-container">$(1)$</span> with <span class="math-container">$(2)$</span>, we get <span class="math-container">$$f(b)=b/c, \ \ \ \forall \ b \ne a, \ c \ne 0$$</span> Substituting that in the original equation, we get <span class="math-container">$$xy/c^2-x/c^2=2x/c+xy$$</span> which implies that no such <span class="math-container">$c \ne 0$</span> exists <span class="math-container">$$\implies c=0 \implies f(0)=1 \implies a= \pm 1$$</span> Case 1: <span class="math-container">$a=1 \ \ (f(1)=0)$</span> <span class="math-container">$$P(1,y) \implies f(f(y))=y \tag{3}$$</span> <span class="math-container">$$P(x,1) \implies f(-f(x))=2f(x)+x $$</span> <span class="math-container">$$\overset{(3)}{\implies} f(-x)=2x+f(x) \tag{4}$$</span> <span class="math-container">$$\overset{x=1}{\implies} f(-1)=2$$</span> <span class="math-container">$$P(-1,x) \implies f(-f(x)-2)=4-x \overset{(3)}{\implies} f(-x-2)=4-f(x) \overset{(4)}{\implies} f(x-2)=4-2x-f(x)$$</span> So, by strong induction, we have for all integers <span class="math-container">$k$</span>, <span class="math-container">$$f(k)=1-k$$</span> and to prove it we put <span class="math-container">$x=k+1$</span> in <span class="math-container">$f(x-2)=4-2x-f(x)$</span> getting <span class="math-container">$$f(k-1)=2-2k-f(k+1) \iff 
f(k+1)=2-f(k-1)-2k=2-(1-(k-1))-2k=-k$$</span> So, it's proven. Now, let <span class="math-container">$k$</span> be a positive integer, <span class="math-container">$$P(x,k) \implies f(x-xk-f(x))=2f(x)+xk$$</span> <span class="math-container">$$P(x,0) \implies f(x-f(x))=2f(x)$$</span> So, we combine the above two equations to get <span class="math-container">$$f((x-f(x))-xk)=f(x-f(x))+xk$$</span> Due to the surjectivity, <span class="math-container">$x-f(x)$</span> can attain any real value, and so does <span class="math-container">$xk$</span>, so we set <span class="math-container">$x-f(x)=u$</span> and <span class="math-container">$-xk=v$</span> getting <span class="math-container">$$f(u+v)=f(u)-v \tag{5}$$</span> <span class="math-container">$$\implies f(xf(y)-f(x))=f(xf(y))+f(x) \overset{(3), P(x,y)}{\implies} f(xf(y))=f(x)+xy$$</span> <span class="math-container">$$ \overset{(3), y=f(y)}{\implies} f(xy)=f(x)+xf(y)$$</span> <span class="math-container">$$\overset{x \mapsto x+1}{\implies}f(xy+y)=f(x+1)+xf(y)+f(y)\overset{(5)}{\implies} f(xy)-y=f(x)-1+xf(y)+f(y)$$</span> <span class="math-container">$$\overset{x=1}{\implies} f(y)-y=f(1)-1+f(y)+f(y) \iff f(y)=1-y \ \ \ \Box.$$</span> Case 2: <span class="math-container">$a=-1$</span> <span class="math-container">$$P(-1,x) \implies f(-f(x))=-x$$</span> <span class="math-container">$$P(x,-1) \implies f(-f(x))=2f(x)-x$$</span> So <span class="math-container">$f(x)=0$</span> which doesn't work in the original equation.</p> Answer 53: <p>This is a really tough problem, I like it very much! I tried to make it clear where I used what, but nevertheless its a bit messy. Feel free to ask if something is unclear!</p> <p>Let $f$ be a solution of the equation and let $P(x,y)$ e the assertion $f(xf(y)-f(x))=2f(x)+xy$ Then $$ P(1,y):\qquad f(f(y)-f(1))=2f(1)+y\implies f\text{ surjective} $$ Thus let $a,b\in\mathbb{R}$ be such that $f(a)=0$ and $f(b)=1$. 
$$ P(a,b):\qquad 0=f(af(b)-f(a))=ab $$ Therefore either $a=0$ or $b=0$.</p> <p><strong>Case 1 :</strong> $a=0$ $$ P(1,1):\qquad 0=2f(1)+1\implies f(1)=-\frac12 $$ Thus if we reconsider $P(1,y)$ we obtain $f(f(y)+1/2)=y-1$. Hence $$ P(1,f(y)+1/2):\qquad f(y-1/2)=f(f(f(y)+1/2)+1/2)=f(y)+1/2-1=f(y)-1/2 $$ and with $y=f(x)+1/2$ in the above equation: $$ f(f(x))=f(f(x)+1/2)-1/2=x-3/2 $$ But thus $0=f(f(0))=-3/2$, contradiction.</p> <p><strong>Case 2 :</strong> $b=0$</p> <p>Then again $P(1,1)$ gives $f(0)=1$. Therefore if we reconsider $P(1,y)$ we get $f(f(y))=y\tag{1}\label{1}$ and thus $f$ is bijective. Now $$ P(x,0):\qquad f(x-f(x))=2f(x)\tag{2}\label{2} $$ $$ P(f(x),0):\qquad f(f(x)-x)=2x\\\stackrel{f()}\implies f(x)-x\stackrel{\eqref{1}}=f(f(f(x)-x))=f(2x)\tag{3}\label{3} $$ $$ P(f(x),1):\qquad f(-x)\stackrel{\eqref{1}}=f(-f(f(x)))\stackrel{P(f(x),1)}=2f(f(x))+f(x)\stackrel{\eqref{1}}=2x+f(x)\tag{4}\label{4} $$ Now $f(-1)\stackrel{\eqref{4}}=2+f(1)=2$ and thus $f(2)=f(f(-1))\stackrel{\eqref{1}}=-1$. Finally we take $$ P(x,2):f(-x-f(x))=2f(x)+2x $$ But $f(-x-f(x))\stackrel{\eqref{4}}=2(x+f(x))+f(x+f(x))$ and thus the above equation simplifies to $f(x+f(x))=0$. But $f$ is injective and we know already that $f(1)=0$, so $$ x+f(x)=1\implies f(x)=1-x\qquad\forall x\in\mathbb{R} $$ Substituting this back into the original equation we see that this is indeed a solution.</p> Answer 54: <p>Fact 1: $f$ has a root. To see this, take $x \not = 0$ and $y = \frac{-2f(x)}{x}$. </p> <p>Fact 2: If $f(c) = 0$, then $c = \pm\sqrt{f(0)}$. To see this, set $x=y=c$.</p> <p>Fact 3: $f(0) = \pm 1$. To see this, let $c$ be a root and set $x=c, y=0$. Then, $f(cf(0)) = 0 \implies cf(0)=\pm\sqrt{f(0)}$ but $c = \pm\sqrt{f(0)}$.</p> <p>Fact 4: $f(0) = 1$. If $f(0) = -1$, then $x=y=0$ gives $f(1) = -2$. Then $x=1,y=0$ gives $f(-1-(-2))=2(-2) = -4$, a contradiction.</p> <p>Fact 5: $f(-1) = 2$ and $f(1) = 0$. The first equality follows from $x=0,y=0$. 
The second then comes from Fact 1, Fact 2, and $f(-1)\not = 0$.</p> <p>Fact 6: $f(f(x)) = x$ for all $x \in \mathbb{R}$. This follows from setting $x=1$.</p> <p>Fact 7: $f(-a) = 2a+f(a)$ for all $a \in \mathbb{R}$. This follows from $x=f(a),y=1$ and noting Fact 6 implies $f$ is surjective.</p> <p>Fact 8: $f(-x-f(x)) = 2x+2f(x)$. This follows from $y=2$.</p> <p>Fact 9: $f(-x-f(x)) = 2x+2f(x)+f(x+f(x))$. This follows from letting $a = x+f(x)$ in Fact 7.</p> <p>Fact 10: $f(x) = -x+1$. This follows from Facts 8,9 , Fact 6 showing $f$ is injective, and Fact 5, which gives $f(1) = 0$.</p> Answer 55: <p>Let's investigate first the affine solutions $x\mapsto ax+b$. The equation rewrites as</p> <p>$$a(axy+bx-ax-b)+b=2ax+2b+xy\iff (a^2-1)xy+a(b-a-2)x-b(a+1)=0$$</p> <p>Identifying the coefficients of the terms of the two sides of this polynomial equations we get that $x\mapsto -x+1$ is the only affine solution to the problem.</p> <p>More painful but with the same method we prove that the only polynomial (power series solution) is $x\mapsto -x+1$</p> <p>That's all I have for the moment</p> Answer 56: <p><span class="math-container">$$P(x,y) \implies f(xf(y)−f(x))=2f(x)+xy$$</span> <span class="math-container">$$P(1,y) \implies f(f(y)-f(1))=2f(1)+y \implies \text{Surjectivity}$$</span> Because <span class="math-container">$2f(1)+y$</span> can attain any real value. 
Then there exists a real number <span class="math-container">$a$</span> such that <span class="math-container">$f(a)=0$</span> <span class="math-container">$$P(a,a) \implies a=\pm \sqrt{f(0)}$$</span> <span class="math-container">$$f(xf(1)-f(x))=2f(x)+x$$</span> and there exists a real number <span class="math-container">$c$</span> such that <span class="math-container">$f(c)=1$</span> <span class="math-container">$$P(x,c) \implies f(x-f(x))=2f(x)+xc \tag{1}$$</span> Let <span class="math-container">$b \ne a \implies f(b) \ne 0$</span> be a real number, <span class="math-container">$$P(x/f(b),b) \implies f(x-f(x))=2f(x)+xb/f(b) \tag{2}$$</span> Combining <span class="math-container">$(1)$</span> with <span class="math-container">$(2)$</span>, we get <span class="math-container">$$f(b)=b/c, \ \ \ \forall \ b \ne a, \ c \ne 0$$</span> Substituting that in the original equation, we get <span class="math-container">$$xy/c^2-x/c^2=2x/c+xy$$</span> which implies that no such <span class="math-container">$c \ne 0$</span> exists <span class="math-container">$$\implies c=0 \implies f(0)=1 \implies a= \pm 1$$</span> Case 1: <span class="math-container">$a=1 \ \ (f(1)=0)$</span> <span class="math-container">$$P(1,y) \implies f(f(y))=y \tag{3}$$</span> <span class="math-container">$$P(x,1) \implies f(-f(x))=2f(x)+x $$</span> <span class="math-container">$$\overset{(3)}{\implies} f(-x)=2x+f(x) \tag{4}$$</span> <span class="math-container">$$\overset{x=1}{\implies} f(-1)=2$$</span> <span class="math-container">$$P(-1,x) \implies f(-f(x)-2)=4-x \overset{(3)}{\implies} f(-x-2)=4-f(x) \overset{(4)}{\implies} f(x-2)=4-2x-f(x)$$</span> So, by strong induction, we have for all integers <span class="math-container">$k$</span>, <span class="math-container">$$f(k)=1-k$$</span> and to prove it we put <span class="math-container">$x=k+1$</span> in <span class="math-container">$f(x-2)=4-2x-f(x)$</span> getting <span class="math-container">$$f(k-1)=2-2k-f(k+1) \iff 
f(k+1)=2-f(k-1)-2k=2-(1-(k-1))-2k=-k$$</span> So, it's proven. Now, let <span class="math-container">$k$</span> be a positive integer, <span class="math-container">$$P(x,k) \implies f(x-xk-f(x))=2f(x)+xk$$</span> <span class="math-container">$$P(x,0) \implies f(x-f(x))=2f(x)$$</span> So, we combine the above two equations to get <span class="math-container">$$f((x-f(x))-xk)=f(x-f(x))+xk$$</span> Due to the surjectivity, <span class="math-container">$x-f(x)$</span> can attain any real value, and so does <span class="math-container">$xk$</span>, so we set <span class="math-container">$x-f(x)=u$</span> and <span class="math-container">$-xk=v$</span> getting <span class="math-container">$$f(u+v)=f(u)-v \tag{5}$$</span> <span class="math-container">$$\implies f(xf(y)-f(x))=f(xf(y))+f(x) \overset{(3), P(x,y)}{\implies} f(xf(y))=f(x)+xy$$</span> <span class="math-container">$$ \overset{(3), y=f(y)}{\implies} f(xy)=f(x)+xf(y)$$</span> <span class="math-container">$$\overset{x \mapsto x+1}{\implies}f(xy+y)=f(x+1)+xf(y)+f(y)\overset{(5)}{\implies} f(xy)-y=f(x)-1+xf(y)+f(y)$$</span> <span class="math-container">$$\overset{x=1}{\implies} f(y)-y=f(1)-1+f(y)+f(y) \iff f(y)=1-y \ \ \ \Box.$$</span> Case 2: <span class="math-container">$a=-1$</span> <span class="math-container">$$P(-1,x) \implies f(-f(x))=-x$$</span> <span class="math-container">$$P(x,-1) \implies f(-f(x))=2f(x)-x$$</span> So <span class="math-container">$f(x)=0$</span> which doesn't work in the original equation.</p> Answer 57: <p>This is a really tough problem, I like it very much! I tried to make it clear where I used what, but nevertheless its a bit messy. Feel free to ask if something is unclear!</p> <p>Let $f$ be a solution of the equation and let $P(x,y)$ e the assertion $f(xf(y)-f(x))=2f(x)+xy$ Then $$ P(1,y):\qquad f(f(y)-f(1))=2f(1)+y\implies f\text{ surjective} $$ Thus let $a,b\in\mathbb{R}$ be such that $f(a)=0$ and $f(b)=1$. 
$$ P(a,b):\qquad 0=f(af(b)-f(a))=ab $$ Therefore either $a=0$ or $b=0$.</p> <p><strong>Case 1 :</strong> $a=0$ $$ P(1,1):\qquad 0=2f(1)+1\implies f(1)=-\frac12 $$ Thus if we reconsider $P(1,y)$ we obtain $f(f(y)+1/2)=y-1$. Hence $$ P(1,f(y)+1/2):\qquad f(y-1/2)=f(f(f(y)+1/2)+1/2)=f(y)+1/2-1=f(y)-1/2 $$ and with $y=f(x)+1/2$ in the above equation: $$ f(f(x))=f(f(x)+1/2)-1/2=x-3/2 $$ But thus $0=f(f(0))=-3/2$, contradiction.</p> <p><strong>Case 2 :</strong> $b=0$</p> <p>Then again $P(1,1)$ gives $f(0)=1$. Therefore if we reconsider $P(1,y)$ we get $f(f(y))=y\tag{1}\label{1}$ and thus $f$ is bijective. Now $$ P(x,0):\qquad f(x-f(x))=2f(x)\tag{2}\label{2} $$ $$ P(f(x),0):\qquad f(f(x)-x)=2x\\\stackrel{f()}\implies f(x)-x\stackrel{\eqref{1}}=f(f(f(x)-x))=f(2x)\tag{3}\label{3} $$ $$ P(f(x),1):\qquad f(-x)\stackrel{\eqref{1}}=f(-f(f(x)))\stackrel{P(f(x),1)}=2f(f(x))+f(x)\stackrel{\eqref{1}}=2x+f(x)\tag{4}\label{4} $$ Now $f(-1)\stackrel{\eqref{4}}=2+f(1)=2$ and thus $f(2)=f(f(-1))\stackrel{\eqref{1}}=-1$. Finally we take $$ P(x,2):f(-x-f(x))=2f(x)+2x $$ But $f(-x-f(x))\stackrel{\eqref{4}}=2(x+f(x))+f(x+f(x))$ and thus the above equation simplifies to $f(x+f(x))=0$. But $f$ is injective and we know already that $f(1)=0$, so $$ x+f(x)=1\implies f(x)=1-x\qquad\forall x\in\mathbb{R} $$ Substituting this back into the original equation we see that this is indeed a solution.</p> Answer 58: <p>Fact 1: $f$ has a root. To see this, take $x \not = 0$ and $y = \frac{-2f(x)}{x}$. </p> <p>Fact 2: If $f(c) = 0$, then $c = \pm\sqrt{f(0)}$. To see this, set $x=y=c$.</p> <p>Fact 3: $f(0) = \pm 1$. To see this, let $c$ be a root and set $x=c, y=0$. Then, $f(cf(0)) = 0 \implies cf(0)=\pm\sqrt{f(0)}$ but $c = \pm\sqrt{f(0)}$.</p> <p>Fact 4: $f(0) = 1$. If $f(0) = -1$, then $x=y=0$ gives $f(1) = -2$. Then $x=1,y=0$ gives $f(-1-(-2))=2(-2) = -4$, a contradiction.</p> <p>Fact 5: $f(-1) = 2$ and $f(1) = 0$. The first equality follows from $x=0,y=0$. 
The second then comes from Fact 1, Fact 2, and $f(-1)\not = 0$.</p> <p>Fact 6: $f(f(x)) = x$ for all $x \in \mathbb{R}$. This follows from setting $x=1$.</p> <p>Fact 7: $f(-a) = 2a+f(a)$ for all $a \in \mathbb{R}$. This follows from $x=f(a),y=1$ and noting Fact 6 implies $f$ is surjective.</p> <p>Fact 8: $f(-x-f(x)) = 2x+2f(x)$. This follows from $y=2$.</p> <p>Fact 9: $f(-x-f(x)) = 2x+2f(x)+f(x+f(x))$. This follows from letting $a = x+f(x)$ in Fact 7.</p> <p>Fact 10: $f(x) = -x+1$. This follows from Facts 8,9 , Fact 6 showing $f$ is injective, and Fact 5, which gives $f(1) = 0$.</p> Answer 59: <p>Let's investigate first the affine solutions $x\mapsto ax+b$. The equation rewrites as</p> <p>$$a(axy+bx-ax-b)+b=2ax+2b+xy\iff (a^2-1)xy+a(b-a-2)x-b(a+1)=0$$</p> <p>Identifying the coefficients of the terms of the two sides of this polynomial equations we get that $x\mapsto -x+1$ is the only affine solution to the problem.</p> <p>More painful but with the same method we prove that the only polynomial (power series solution) is $x\mapsto -x+1$</p> <p>That's all I have for the moment</p> Answer 60: <p><span class="math-container">$$P(x,y) \implies f(xf(y)−f(x))=2f(x)+xy$$</span> <span class="math-container">$$P(1,y) \implies f(f(y)-f(1))=2f(1)+y \implies \text{Surjectivity}$$</span> Because <span class="math-container">$2f(1)+y$</span> can attain any real value. 
Then there exists a real number <span class="math-container">$a$</span> such that <span class="math-container">$f(a)=0$</span> <span class="math-container">$$P(a,a) \implies a=\pm \sqrt{f(0)}$$</span> <span class="math-container">$$f(xf(1)-f(x))=2f(x)+x$$</span> and there exists a real number <span class="math-container">$c$</span> such that <span class="math-container">$f(c)=1$</span> <span class="math-container">$$P(x,c) \implies f(x-f(x))=2f(x)+xc \tag{1}$$</span> Let <span class="math-container">$b \ne a \implies f(b) \ne 0$</span> be a real number, <span class="math-container">$$P(x/f(b),b) \implies f(x-f(x))=2f(x)+xb/f(b) \tag{2}$$</span> Combining <span class="math-container">$(1)$</span> with <span class="math-container">$(2)$</span>, we get <span class="math-container">$$f(b)=b/c, \ \ \ \forall \ b \ne a, \ c \ne 0$$</span> Substituting that in the original equation, we get <span class="math-container">$$xy/c^2-x/c^2=2x/c+xy$$</span> which implies that no such <span class="math-container">$c \ne 0$</span> exists <span class="math-container">$$\implies c=0 \implies f(0)=1 \implies a= \pm 1$$</span> Case 1: <span class="math-container">$a=1 \ \ (f(1)=0)$</span> <span class="math-container">$$P(1,y) \implies f(f(y))=y \tag{3}$$</span> <span class="math-container">$$P(x,1) \implies f(-f(x))=2f(x)+x $$</span> <span class="math-container">$$\overset{(3)}{\implies} f(-x)=2x+f(x) \tag{4}$$</span> <span class="math-container">$$\overset{x=1}{\implies} f(-1)=2$$</span> <span class="math-container">$$P(-1,x) \implies f(-f(x)-2)=4-x \overset{(3)}{\implies} f(-x-2)=4-f(x) \overset{(4)}{\implies} f(x-2)=4-2x-f(x)$$</span> So, by strong induction, we have for all integers <span class="math-container">$k$</span>, <span class="math-container">$$f(k)=1-k$$</span> and to prove it we put <span class="math-container">$x=k+1$</span> in <span class="math-container">$f(x-2)=4-2x-f(x)$</span> getting <span class="math-container">$$f(k-1)=2-2k-f(k+1) \iff 
f(k+1)=2-f(k-1)-2k=2-(1-(k-1))-2k=-k$$</span> So, it's proven. Now, let <span class="math-container">$k$</span> be a positive integer, <span class="math-container">$$P(x,k) \implies f(x-xk-f(x))=2f(x)+xk$$</span> <span class="math-container">$$P(x,0) \implies f(x-f(x))=2f(x)$$</span> So, we combine the above two equations to get <span class="math-container">$$f((x-f(x))-xk)=f(x-f(x))+xk$$</span> Due to the surjectivity, <span class="math-container">$x-f(x)$</span> can attain any real value, and so does <span class="math-container">$xk$</span>, so we set <span class="math-container">$x-f(x)=u$</span> and <span class="math-container">$-xk=v$</span> getting <span class="math-container">$$f(u+v)=f(u)-v \tag{5}$$</span> <span class="math-container">$$\implies f(xf(y)-f(x))=f(xf(y))+f(x) \overset{(3), P(x,y)}{\implies} f(xf(y))=f(x)+xy$$</span> <span class="math-container">$$ \overset{(3), y=f(y)}{\implies} f(xy)=f(x)+xf(y)$$</span> <span class="math-container">$$\overset{x \mapsto x+1}{\implies}f(xy+y)=f(x+1)+xf(y)+f(y)\overset{(5)}{\implies} f(xy)-y=f(x)-1+xf(y)+f(y)$$</span> <span class="math-container">$$\overset{x=1}{\implies} f(y)-y=f(1)-1+f(y)+f(y) \iff f(y)=1-y \ \ \ \Box.$$</span> Case 2: <span class="math-container">$a=-1$</span> <span class="math-container">$$P(-1,x) \implies f(-f(x))=-x$$</span> <span class="math-container">$$P(x,-1) \implies f(-f(x))=2f(x)-x$$</span> So <span class="math-container">$f(x)=0$</span> which doesn't work in the original equation.</p>
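All of the answers above converge on $f(x)=1-x$. This can be double-checked mechanically; the following is a small editorial sketch (using sympy; the symbol names are mine, not from the answers) that verifies the solution satisfies $f(xf(y)-f(x))=2f(x)+xy$ identically, and redoes the affine coefficient matching $(a^2-1)xy+a(b-a-2)x-b(a+1)=0$ from the affine-ansatz answer:

```python
import sympy as sp

x, y, a, b = sp.symbols('x y a b', real=True)

# Verify that f(t) = 1 - t satisfies f(x*f(y) - f(x)) = 2*f(x) + x*y.
f = lambda t: 1 - t
assert sp.simplify(f(x*f(y) - f(x)) - (2*f(x) + x*y)) == 0

# Coefficient matching for an affine ansatz f(t) = a*t + b:
# the identity forces all three coefficients below to vanish.
sols = sp.solve([a**2 - 1, a*(b - a - 2), b*(a + 1)], [a, b], dict=True)
print(sols)  # only a = -1, b = 1 survives, i.e. f(t) = -t + 1
```

Running this confirms both the verification and that the affine system has the single solution $a=-1$, $b=1$.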
The matrix of a projection can never be invertible
2,086,814
8
https://math.stackexchange.com/questions/2086814/the-matrix-of-a-projection-can-never-be-invertible
<p>I am currently studying linear transformations in order to refresh my knowledge of linear algebra. One statement in my textbook (by David Poole) is:</p> <blockquote> <p>When considering linear transformations from $\mathbb{R}^2$ to $\mathbb{R}^2$, the matrix of a projection can never be invertible.</p> </blockquote> <p>I know that a projection matrix satisfies the equation $P^2 = P$. Taking the determinant of both sides gives </p> <p>$$\text{det}(P)^2 = \text{det}(P)$$ </p> <p>which is always true when $P$ is singular. However, take $\color{blue} {P = I_2}$; then the equality is true and the projection matrix is invertible. What mistake am I making in my reasoning?</p>
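Note that the determinant identity in the question does not force $\det(P)=0$; it only restricts $\det(P)$ to $\{0,1\}$, which is exactly where the identity-matrix exception comes from. A one-line symbolic check (an editorial addition, using sympy, with $d$ standing for $\det(P)$):

```python
import sympy as sp

d = sp.symbols('d')  # d stands for det(P)
print(sp.solve(sp.Eq(d**2, d), d))  # [0, 1]
```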
[ { "answer_id": 2086819, "score": 15, "text": "<p>Well, the statement is plainly false when $P=I$. However, the only invertible projection matrix is the identity. To see this, notice that $P^2x=Px$ for all $x$. So if $P$ were invertible, we get $Px=x$ for all $x$, and since the identity is unique, we get $P=I$. </p>\n" }, { "answer_id": 2086821, "score": 4, "text": "<p>Technically this is not true. The only invertible projection is the identity. Proof:</p>\n\n<p>If $P$ is invertible, then $P^{-1}(P^2)=P^{-1}P$ so $P=I$.</p>\n" }, { "answer_id": 2086820, "score": -1, "text": "<p>This isn't entirely true, you can have the identity, but assume it's not the identity, then the minimal polynomial divides $x^2-x$ and is not just $x-1$, hence $x$ divides the minimal polynomial. But then as $0$ is a root of the polynomial, it is an eigenvalue for the matrix, $P$, hence $P$ cannot be invertible as its determinant is the product of its eigenvalues.</p>\n" }, { "answer_id": 2086819, "score": 15, "text": "<p>Well, the statement is plainly false when $P=I$. However, the only invertible projection matrix is the identity. To see this, notice that $P^2x=Px$ for all $x$. So if $P$ were invertible, we get $Px=x$ for all $x$, and since the identity is unique, we get $P=I$. </p>\n" }, { "answer_id": 2086821, "score": 4, "text": "<p>Technically this is not true. The only invertible projection is the identity. Proof:</p>\n\n<p>If $P$ is invertible, then $P^{-1}(P^2)=P^{-1}P$ so $P=I$.</p>\n" }, { "answer_id": 2086820, "score": -1, "text": "<p>This isn't entirely true, you can have the identity, but assume it's not the identity, then the minimal polynomial divides $x^2-x$ and is not just $x-1$, hence $x$ divides the minimal polynomial. 
But then as $0$ is a root of the polynomial, it is an eigenvalue for the matrix, $P$, hence $P$ cannot be invertible as its determinant is the product of its eigenvalues.</p>\n" } ]
Question: The matrix of a projection can never be invertible Answer 1: <p>Well, the statement is plainly false when $P=I$. However, the only invertible projection matrix is the identity. To see this, notice that $P^2x=Px$ for all $x$. So if $P$ were invertible, we get $Px=x$ for all $x$, and since the identity is unique, we get $P=I$. </p> Answer 2: <p>Technically this is not true. The only invertible projection is the identity. Proof:</p> <p>If $P$ is invertible, then $P^{-1}(P^2)=P^{-1}P$ so $P=I$.</p> Answer 3: <p>This isn't entirely true, you can have the identity, but assume it's not the identity, then the minimal polynomial divides $x^2-x$ and is not just $x-1$, hence $x$ divides the minimal polynomial. But then as $0$ is a root of the polynomial, it is an eigenvalue for the matrix, $P$, hence $P$ cannot be invertible as its determinant is the product of its eigenvalues.</p>
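All three answers above reduce to the same fact: an idempotent matrix other than the identity is singular. A quick numerical sanity check of that claim (a sketch assuming NumPy; the specific rank-1 matrix below is an arbitrary illustration, not taken from the posts):

```python
import numpy as np

# Rank-1 projection onto the line spanned by v = (1, 1): P = v v^T / (v^T v).
v = np.array([1.0, 1.0])
P = np.outer(v, v) / np.dot(v, v)

# P is idempotent (P^2 = P), i.e. a projection ...
assert np.allclose(P @ P, P)

# ... and since P != I, it is singular, exactly as the answers argue:
# 0 is an eigenvalue of P, so det(P) = 0 and P has no inverse.
assert abs(np.linalg.det(P)) < 1e-12

# The identity is the one projection that *is* invertible.
I = np.eye(2)
assert np.allclose(I @ I, I)
assert abs(np.linalg.det(I) - 1.0) < 1e-12
```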
Is the binomial theorem actually more efficient than just distributing
2,087,099
8
https://math.stackexchange.com/questions/2087099/is-the-binomial-theorem-actually-more-efficient-than-just-distributing
<p>Is the binomial theorem actually more efficient than just distributing a given binomial. I believe it is more confusing for me to remember and work out using the binomial theorem as a guide, than to just distribute when given a problem like: $$(x-4)^6$$ I think distributing that would be equally as painful as using the binomial theorem. Am I alone or is there actually a reason to practice the painful procedures?</p>
[ { "answer_id": 2087103, "score": 37, "text": "<p>The binomial theorem\nallows you to\nwrite out the expansion\nof your polynomial immediately.</p>\n\n<p>It also allows you\nto answer such questions as\n\"What is the coefficient\nof $x^{20}$ in\n$(1+x)^{100}$?\"</p>\n\n<p>Its generalisation to\nnon-integer exponents\nallows you to\nget the expansion of\n$(1-x)^{-1/2}$.</p>\n\n<p>It is a good thing.</p>\n" }, { "answer_id": 2087105, "score": 14, "text": "<p>There are all sorts of reasons. First of all, you'll learn when you've moved forward a bit that the binomial theorem can actually be applied in cases of non-integer exponents, by defining $\\binom{n}{k} = \\dfrac{n^{\\underline{k}}}{k!}$, where the underline represents a falling power (i.e., $n(n-1)(n-2)\\cdots$). Another that comes to mind immediately is the series expansion of $e$...</p>\n\n<p>$$e^x = \\lim_{n\\to\\infty}\\left(1+\\frac{x}{n}\\right)^n = \\lim_{n\\to\\infty}\\left(1 + n\\frac{x}{n} + \\frac{n^\\underline{2}}{2!}\\frac{x^2}{n^2} + \\frac{n^\\underline{3}}{3!}\\frac{x^3}{n^3} + \\cdots + \\frac{x^n}{n^n}\\right)$$</p>\n\n<p>As $n$ gets very large, for fixed $k$, $n^\\underline{k}$ approaches $n^k$, and $\\frac{x^n}{n^n}$ vanishes since $x$ is fixed, so you end up with the familiar series...</p>\n\n<p>$$e^x = 1 + x + \\frac{x^2}{2} + \\frac{x^3}{6} + \\frac{x^4}{24} + \\cdots$$</p>\n" }, { "answer_id": 2087134, "score": 11, "text": "<p>If you're not seeing the utility of the binomial theorem, then I think you're missing an important observation:</p>\n\n<p>The coefficients of every term can be calculated extremely efficiently (your example of $(x-4)^6$ is easy to expand <em>by hand</em> with the binomial theorem, without, it's doable but very tedious).</p>\n\n<p>A useful shorthand for lower powers of the exponent is to remember the first few rows of <em>Pascal's Triangle</em> along with the rule for adding more if you need them. 
Example of first few rows:</p>\n\n<p>$$1$$\n$$1\\qquad 1$$\n$$1\\qquad 2\\qquad 1$$\n$$1\\qquad 3\\qquad 3\\qquad 1$$\n$$1\\qquad 4\\qquad 6\\qquad 4\\qquad 1$$\n$$1\\qquad 5\\qquad 10\\qquad 10\\qquad 5\\qquad 1$$\n$$1\\qquad 6\\qquad 15\\qquad 20\\qquad 15\\qquad 6\\qquad 1$$</p>\n\n<p>The last row gives the coefficients of $x^k\\cdot(-4)^{(6-k)}$ for $k$ running from $0$ to $6$. Computing $(-4)^{(6-k)}$ is easy enough by hand, and multiplying it by the relevant entry in the last row above is also fairly easy (give it a try).</p>\n\n<p>One more thing: The binomial theorem admits an interesting generalization to powers of general polynomials of a given length sometimes called the multinomial theorem. (Challenge: try investigating the coefficients of the first few powers of $(x+y+z)^n$ and perhaps you can figure out the pattern for trinomials? How does it relate to the pattern for binomials? Can you generalize it to general polynomials?)</p>\n" }, { "answer_id": 2087132, "score": 10, "text": "<p>Marty Cohen and Phicar answered your question correctly. In mathematics, as we always use small numbers, It could be feasible to think this. Why learn the binomial theorem to compute $(a+b)^2$? But what can you say about $(3a+5b)^{987658}$? The binomial theorem is way more efficient in this case and the actual number of cases in which it wouldn't be efficient is very small.</p>\n\n<p>The binomial theorem gives you a powerful tool: The computation for any exponent in - perhaps - the fastest way possible in which it is only not effective for a finite amount of exponents, but its the best possible for an endless amount of exponents. </p>\n\n<p>This is one trap in mathematics: We implicitly assume that the mathematical tools are created to treat familiar cases, but they are created to treat <strong>all</strong> cases in the neatest way possible, they will be cumbersome for some simple cases (Example: Integrating $f(x)=mx$ in the interval $[0,a]$. 
This is just a triangle, there is no need for integral calculus here). I remember of me asking about why we need analysis if calculus seems to work well, but I was supposing (without realizing) that <em>calculus</em> is used only on familiar cases, but we want to be useful also for <a href=\"https://math.stackexchange.com/q/2010206/25805\">very imaginative cases</a>!</p>\n" }, { "answer_id": 2087847, "score": 8, "text": "<p>To add to other answers, in higher-level math it's important to be able to <em>generalize</em> the behavior of $(a + b)^n$ when $n$ is a variable. The work that you're doing now is mostly just training exercises for being familiar with, and having intuition for, the general binomial theorem.</p>\n\n<p>For example, there's a really key moment early on in calculus when it's critical to understand that, for all $n$: $(x + h)^n = x^n + nx^{n-1}h....$ plus a bunch of other stuff with higher powers of $h$ in it, but whose exact numerical coefficients we can ignore. </p>\n" }, { "answer_id": 2087103, "score": 37, "text": "<p>The binomial theorem\nallows you to\nwrite out the expansion\nof your polynomial immediately.</p>\n\n<p>It also allows you\nto answer such questions as\n\"What is the coefficient\nof $x^{20}$ in\n$(1+x)^{100}$?\"</p>\n\n<p>Its generalisation to\nnon-integer exponents\nallows you to\nget the expansion of\n$(1-x)^{-1/2}$.</p>\n\n<p>It is a good thing.</p>\n" }, { "answer_id": 2087105, "score": 14, "text": "<p>There are all sorts of reasons. First of all, you'll learn when you've moved forward a bit that the binomial theorem can actually be applied in cases of non-integer exponents, by defining $\\binom{n}{k} = \\dfrac{n^{\\underline{k}}}{k!}$, where the underline represents a falling power (i.e., $n(n-1)(n-2)\\cdots$). 
Another that comes to mind immediately is the series expansion of $e$...</p>\n\n<p>$$e^x = \\lim_{n\\to\\infty}\\left(1+\\frac{x}{n}\\right)^n = \\lim_{n\\to\\infty}\\left(1 + n\\frac{x}{n} + \\frac{n^\\underline{2}}{2!}\\frac{x^2}{n^2} + \\frac{n^\\underline{3}}{3!}\\frac{x^3}{n^3} + \\cdots + \\frac{x^n}{n^n}\\right)$$</p>\n\n<p>As $n$ gets very large, for fixed $k$, $n^\\underline{k}$ approaches $n^k$, and $\\frac{x^n}{n^n}$ vanishes since $x$ is fixed, so you end up with the familiar series...</p>\n\n<p>$$e^x = 1 + x + \\frac{x^2}{2} + \\frac{x^3}{6} + \\frac{x^4}{24} + \\cdots$$</p>\n" }, { "answer_id": 2087134, "score": 11, "text": "<p>If you're not seeing the utility of the binomial theorem, then I think you're missing an important observation:</p>\n\n<p>The coefficients of every term can be calculated extremely efficiently (your example of $(x-4)^6$ is easy to expand <em>by hand</em> with the binomial theorem, without, it's doable but very tedious).</p>\n\n<p>A useful shorthand for lower powers of the exponent is to remember the first few rows of <em>Pascal's Triangle</em> along with the rule for adding more if you need them. Example of first few rows:</p>\n\n<p>$$1$$\n$$1\\qquad 1$$\n$$1\\qquad 2\\qquad 1$$\n$$1\\qquad 3\\qquad 3\\qquad 1$$\n$$1\\qquad 4\\qquad 6\\qquad 4\\qquad 1$$\n$$1\\qquad 5\\qquad 10\\qquad 10\\qquad 5\\qquad 1$$\n$$1\\qquad 6\\qquad 15\\qquad 20\\qquad 15\\qquad 6\\qquad 1$$</p>\n\n<p>The last row gives the coefficients of $x^k\\cdot(-4)^{(6-k)}$ for $k$ running from $0$ to $6$. Computing $(-4)^{(6-k)}$ is easy enough by hand, and multiplying it by the relevant entry in the last row above is also fairly easy (give it a try).</p>\n\n<p>One more thing: The binomial theorem admits an interesting generalization to powers of general polynomials of a given length sometimes called the multinomial theorem. 
(Challenge: try investigating the coefficients of the first few powers of $(x+y+z)^n$ and perhaps you can figure out the pattern for trinomials? How does it relate to the pattern for binomials? Can you generalize it to general polynomials?)</p>\n" }, { "answer_id": 2087132, "score": 10, "text": "<p>Marty Cohen and Phicar answered your question correctly. In mathematics, as we always use small numbers, It could be feasible to think this. Why learn the binomial theorem to compute $(a+b)^2$? But what can you say about $(3a+5b)^{987658}$? The binomial theorem is way more efficient in this case and the actual number of cases in which it wouldn't be efficient is very small.</p>\n\n<p>The binomial theorem gives you a powerful tool: The computation for any exponent in - perhaps - the fastest way possible in which it is only not effective for a finite amount of exponents, but its the best possible for an endless amount of exponents. </p>\n\n<p>This is one trap in mathematics: We implicitly assume that the mathematical tools are created to treat familiar cases, but they are created to treat <strong>all</strong> cases in the neatest way possible, they will be cumbersome for some simple cases (Example: Integrating $f(x)=mx$ in the interval $[0,a]$. This is just a triangle, there is no need for integral calculus here). I remember of me asking about why we need analysis if calculus seems to work well, but I was supposing (without realizing) that <em>calculus</em> is used only on familiar cases, but we want to be useful also for <a href=\"https://math.stackexchange.com/q/2010206/25805\">very imaginative cases</a>!</p>\n" }, { "answer_id": 2087847, "score": 8, "text": "<p>To add to other answers, in higher-level math it's important to be able to <em>generalize</em> the behavior of $(a + b)^n$ when $n$ is a variable. 
The work that you're doing now is mostly just training exercises for being familiar with, and having intuition for, the general binomial theorem.</p>\n\n<p>For example, there's a really key moment early on in calculus when it's critical to understand that, for all $n$: $(x + h)^n = x^n + nx^{n-1}h....$ plus a bunch of other stuff with higher powers of $h$ in it, but whose exact numerical coefficients we can ignore. </p>\n" }, { "answer_id": 2087103, "score": 37, "text": "<p>The binomial theorem\nallows you to\nwrite out the expansion\nof your polynomial immediately.</p>\n\n<p>It also allows you\nto answer such questions as\n\"What is the coefficient\nof $x^{20}$ in\n$(1+x)^{100}$?\"</p>\n\n<p>Its generalisation to\nnon-integer exponents\nallows you to\nget the expansion of\n$(1-x)^{-1/2}$.</p>\n\n<p>It is a good thing.</p>\n" }, { "answer_id": 2087105, "score": 14, "text": "<p>There are all sorts of reasons. First of all, you'll learn when you've moved forward a bit that the binomial theorem can actually be applied in cases of non-integer exponents, by defining $\\binom{n}{k} = \\dfrac{n^{\\underline{k}}}{k!}$, where the underline represents a falling power (i.e., $n(n-1)(n-2)\\cdots$). 
Another that comes to mind immediately is the series expansion of $e$...</p>\n\n<p>$$e^x = \\lim_{n\\to\\infty}\\left(1+\\frac{x}{n}\\right)^n = \\lim_{n\\to\\infty}\\left(1 + n\\frac{x}{n} + \\frac{n^\\underline{2}}{2!}\\frac{x^2}{n^2} + \\frac{n^\\underline{3}}{3!}\\frac{x^3}{n^3} + \\cdots + \\frac{x^n}{n^n}\\right)$$</p>\n\n<p>As $n$ gets very large, for fixed $k$, $n^\\underline{k}$ approaches $n^k$, and $\\frac{x^n}{n^n}$ vanishes since $x$ is fixed, so you end up with the familiar series...</p>\n\n<p>$$e^x = 1 + x + \\frac{x^2}{2} + \\frac{x^3}{6} + \\frac{x^4}{24} + \\cdots$$</p>\n" }, { "answer_id": 2087134, "score": 11, "text": "<p>If you're not seeing the utility of the binomial theorem, then I think you're missing an important observation:</p>\n\n<p>The coefficients of every term can be calculated extremely efficiently (your example of $(x-4)^6$ is easy to expand <em>by hand</em> with the binomial theorem, without, it's doable but very tedious).</p>\n\n<p>A useful shorthand for lower powers of the exponent is to remember the first few rows of <em>Pascal's Triangle</em> along with the rule for adding more if you need them. Example of first few rows:</p>\n\n<p>$$1$$\n$$1\\qquad 1$$\n$$1\\qquad 2\\qquad 1$$\n$$1\\qquad 3\\qquad 3\\qquad 1$$\n$$1\\qquad 4\\qquad 6\\qquad 4\\qquad 1$$\n$$1\\qquad 5\\qquad 10\\qquad 10\\qquad 5\\qquad 1$$\n$$1\\qquad 6\\qquad 15\\qquad 20\\qquad 15\\qquad 6\\qquad 1$$</p>\n\n<p>The last row gives the coefficients of $x^k\\cdot(-4)^{(6-k)}$ for $k$ running from $0$ to $6$. Computing $(-4)^{(6-k)}$ is easy enough by hand, and multiplying it by the relevant entry in the last row above is also fairly easy (give it a try).</p>\n\n<p>One more thing: The binomial theorem admits an interesting generalization to powers of general polynomials of a given length sometimes called the multinomial theorem. 
(Challenge: try investigating the coefficients of the first few powers of $(x+y+z)^n$ and perhaps you can figure out the pattern for trinomials? How does it relate to the pattern for binomials? Can you generalize it to general polynomials?)</p>\n" }, { "answer_id": 2087132, "score": 10, "text": "<p>Marty Cohen and Phicar answered your question correctly. In mathematics, as we always use small numbers, It could be feasible to think this. Why learn the binomial theorem to compute $(a+b)^2$? But what can you say about $(3a+5b)^{987658}$? The binomial theorem is way more efficient in this case and the actual number of cases in which it wouldn't be efficient is very small.</p>\n\n<p>The binomial theorem gives you a powerful tool: The computation for any exponent in - perhaps - the fastest way possible in which it is only not effective for a finite amount of exponents, but its the best possible for an endless amount of exponents. </p>\n\n<p>This is one trap in mathematics: We implicitly assume that the mathematical tools are created to treat familiar cases, but they are created to treat <strong>all</strong> cases in the neatest way possible, they will be cumbersome for some simple cases (Example: Integrating $f(x)=mx$ in the interval $[0,a]$. This is just a triangle, there is no need for integral calculus here). I remember of me asking about why we need analysis if calculus seems to work well, but I was supposing (without realizing) that <em>calculus</em> is used only on familiar cases, but we want to be useful also for <a href=\"https://math.stackexchange.com/q/2010206/25805\">very imaginative cases</a>!</p>\n" }, { "answer_id": 2087847, "score": 8, "text": "<p>To add to other answers, in higher-level math it's important to be able to <em>generalize</em> the behavior of $(a + b)^n$ when $n$ is a variable. 
The work that you're doing now is mostly just training exercises for being familiar with, and having intuition for, the general binomial theorem.</p>\n\n<p>For example, there's a really key moment early on in calculus when it's critical to understand that, for all $n$: $(x + h)^n = x^n + nx^{n-1}h$ plus a bunch of other stuff with higher powers of $h$ in it, but whose exact numerical coefficients we can ignore. </p>\n" }, { "answer_id": 2087103, "score": 37, "text": "<p>The binomial theorem\nallows you to\nwrite out the expansion\nof your polynomial immediately.</p>\n\n<p>It also allows you\nto answer such questions as\n\"What is the coefficient\nof $x^{20}$ in\n$(1+x)^{100}$?\"</p>\n\n<p>Its generalisation to\nnon-integer exponents\nallows you to\nget the expansion of\n$(1-x)^{-1/2}$.</p>\n\n<p>It is a good thing.</p>\n" }, { "answer_id": 2087105, "score": 14, "text": "<p>There are all sorts of reasons. First of all, you'll learn when you've moved forward a bit that the binomial theorem can actually be applied in cases of non-integer exponents, by defining $\\binom{n}{k} = \\dfrac{n^{\\underline{k}}}{k!}$, where the underline represents a falling power (i.e., $n(n-1)(n-2)\\cdots$). 
Another that comes to mind immediately is the series expansion of $e$...</p>\n\n<p>$$e^x = \\lim_{n\\to\\infty}\\left(1+\\frac{x}{n}\\right)^n = \\lim_{n\\to\\infty}\\left(1 + n\\frac{x}{n} + \\frac{n^\\underline{2}}{2!}\\frac{x^2}{n^2} + \\frac{n^\\underline{3}}{3!}\\frac{x^3}{n^3} + \\cdots + \\frac{x^n}{n^n}\\right)$$</p>\n\n<p>As $n$ gets very large, for fixed $k$, $n^\\underline{k}$ approaches $n^k$, and $\\frac{x^n}{n^n}$ vanishes since $x$ is fixed, so you end up with the familiar series...</p>\n\n<p>$$e^x = 1 + x + \\frac{x^2}{2} + \\frac{x^3}{6} + \\frac{x^4}{24} + \\cdots$$</p>\n" }, { "answer_id": 2087134, "score": 11, "text": "<p>If you're not seeing the utility of the binomial theorem, then I think you're missing an important observation:</p>\n\n<p>The coefficients of every term can be calculated extremely efficiently (your example of $(x-4)^6$ is easy to expand <em>by hand</em> with the binomial theorem, without, it's doable but very tedious).</p>\n\n<p>A useful shorthand for lower powers of the exponent is to remember the first few rows of <em>Pascal's Triangle</em> along with the rule for adding more if you need them. Example of first few rows:</p>\n\n<p>$$1$$\n$$1\\qquad 1$$\n$$1\\qquad 2\\qquad 1$$\n$$1\\qquad 3\\qquad 3\\qquad 1$$\n$$1\\qquad 4\\qquad 6\\qquad 4\\qquad 1$$\n$$1\\qquad 5\\qquad 10\\qquad 10\\qquad 5\\qquad 1$$\n$$1\\qquad 6\\qquad 15\\qquad 20\\qquad 15\\qquad 6\\qquad 1$$</p>\n\n<p>The last row gives the coefficients of $x^k\\cdot(-4)^{(6-k)}$ for $k$ running from $0$ to $6$. Computing $(-4)^{(6-k)}$ is easy enough by hand, and multiplying it by the relevant entry in the last row above is also fairly easy (give it a try).</p>\n\n<p>One more thing: The binomial theorem admits an interesting generalization to powers of general polynomials of a given length sometimes called the multinomial theorem. 
(Challenge: try investigating the coefficients of the first few powers of $(x+y+z)^n$ and perhaps you can figure out the pattern for trinomials? How does it relate to the pattern for binomials? Can you generalize it to general polynomials?)</p>\n" }, { "answer_id": 2087132, "score": 10, "text": "<p>Marty Cohen and Phicar answered your question correctly. In mathematics, as we always use small numbers, It could be feasible to think this. Why learn the binomial theorem to compute $(a+b)^2$? But what can you say about $(3a+5b)^{987658}$? The binomial theorem is way more efficient in this case and the actual number of cases in which it wouldn't be efficient is very small.</p>\n\n<p>The binomial theorem gives you a powerful tool: The computation for any exponent in - perhaps - the fastest way possible in which it is only not effective for a finite amount of exponents, but its the best possible for an endless amount of exponents. </p>\n\n<p>This is one trap in mathematics: We implicitly assume that the mathematical tools are created to treat familiar cases, but they are created to treat <strong>all</strong> cases in the neatest way possible, they will be cumbersome for some simple cases (Example: Integrating $f(x)=mx$ in the interval $[0,a]$. This is just a triangle, there is no need for integral calculus here). I remember of me asking about why we need analysis if calculus seems to work well, but I was supposing (without realizing) that <em>calculus</em> is used only on familiar cases, but we want to be useful also for <a href=\"https://math.stackexchange.com/q/2010206/25805\">very imaginative cases</a>!</p>\n" }, { "answer_id": 2087847, "score": 8, "text": "<p>To add to other answers, in higher-level math it's important to be able to <em>generalize</em> the behavior of $(a + b)^n$ when $n$ is a variable. 
The work that you're doing now is mostly just training exercises for being familiar with, and having intuition for, the general binomial theorem.</p>\n\n<p>For example, there's a really key moment early on in calculus when it's critical to understand that, for all $n$: $(x + h)^n = x^n + nx^{n-1}h....$ plus a bunch of other stuff with higher powers of $h$ in it, but whose exact numerical coefficients we can ignore. </p>\n" }, { "answer_id": 2087103, "score": 37, "text": "<p>The binomial theorem\nallows you to\nwrite out the expansion\nof your polynomial immediately.</p>\n\n<p>It also allows you\nto answer such questions as\n\"What is the coefficient\nof $x^{20}$ in\n$(1+x)^{100}$?\"</p>\n\n<p>Its generalisation to\nnon-integer exponents\nallows you to\nget the expansion of\n$(1-x)^{-1/2}$.</p>\n\n<p>It is a good thing.</p>\n" }, { "answer_id": 2087105, "score": 14, "text": "<p>There are all sorts of reasons. First of all, you'll learn when you've moved forward a bit that the binomial theorem can actually be applied in cases of non-integer exponents, by defining $\\binom{n}{k} = \\dfrac{n^{\\underline{k}}}{k!}$, where the underline represents a falling power (i.e., $n(n-1)(n-2)\\cdots$). 
Another that comes to mind immediately is the series expansion of $e$...</p>\n\n<p>$$e^x = \\lim_{n\\to\\infty}\\left(1+\\frac{x}{n}\\right)^n = \\lim_{n\\to\\infty}\\left(1 + n\\frac{x}{n} + \\frac{n^\\underline{2}}{2!}\\frac{x^2}{n^2} + \\frac{n^\\underline{3}}{3!}\\frac{x^3}{n^3} + \\cdots + \\frac{x^n}{n^n}\\right)$$</p>\n\n<p>As $n$ gets very large, for fixed $k$, $n^\\underline{k}$ approaches $n^k$, and $\\frac{x^n}{n^n}$ vanishes since $x$ is fixed, so you end up with the familiar series...</p>\n\n<p>$$e^x = 1 + x + \\frac{x^2}{2} + \\frac{x^3}{6} + \\frac{x^4}{24} + \\cdots$$</p>\n" }, { "answer_id": 2087134, "score": 11, "text": "<p>If you're not seeing the utility of the binomial theorem, then I think you're missing an important observation:</p>\n\n<p>The coefficients of every term can be calculated extremely efficiently (your example of $(x-4)^6$ is easy to expand <em>by hand</em> with the binomial theorem, without, it's doable but very tedious).</p>\n\n<p>A useful shorthand for lower powers of the exponent is to remember the first few rows of <em>Pascal's Triangle</em> along with the rule for adding more if you need them. Example of first few rows:</p>\n\n<p>$$1$$\n$$1\\qquad 1$$\n$$1\\qquad 2\\qquad 1$$\n$$1\\qquad 3\\qquad 3\\qquad 1$$\n$$1\\qquad 4\\qquad 6\\qquad 4\\qquad 1$$\n$$1\\qquad 5\\qquad 10\\qquad 10\\qquad 5\\qquad 1$$\n$$1\\qquad 6\\qquad 15\\qquad 20\\qquad 15\\qquad 6\\qquad 1$$</p>\n\n<p>The last row gives the coefficients of $x^k\\cdot(-4)^{(6-k)}$ for $k$ running from $0$ to $6$. Computing $(-4)^{(6-k)}$ is easy enough by hand, and multiplying it by the relevant entry in the last row above is also fairly easy (give it a try).</p>\n\n<p>One more thing: The binomial theorem admits an interesting generalization to powers of general polynomials of a given length sometimes called the multinomial theorem. 
(Challenge: try investigating the coefficients of the first few powers of $(x+y+z)^n$ and perhaps you can figure out the pattern for trinomials? How does it relate to the pattern for binomials? Can you generalize it to general polynomials?)</p>\n" }, { "answer_id": 2087132, "score": 10, "text": "<p>Marty Cohen and Phicar answered your question correctly. In mathematics, as we always use small numbers, It could be feasible to think this. Why learn the binomial theorem to compute $(a+b)^2$? But what can you say about $(3a+5b)^{987658}$? The binomial theorem is way more efficient in this case and the actual number of cases in which it wouldn't be efficient is very small.</p>\n\n<p>The binomial theorem gives you a powerful tool: The computation for any exponent in - perhaps - the fastest way possible in which it is only not effective for a finite amount of exponents, but its the best possible for an endless amount of exponents. </p>\n\n<p>This is one trap in mathematics: We implicitly assume that the mathematical tools are created to treat familiar cases, but they are created to treat <strong>all</strong> cases in the neatest way possible, they will be cumbersome for some simple cases (Example: Integrating $f(x)=mx$ in the interval $[0,a]$. This is just a triangle, there is no need for integral calculus here). I remember of me asking about why we need analysis if calculus seems to work well, but I was supposing (without realizing) that <em>calculus</em> is used only on familiar cases, but we want to be useful also for <a href=\"https://math.stackexchange.com/q/2010206/25805\">very imaginative cases</a>!</p>\n" }, { "answer_id": 2087847, "score": 8, "text": "<p>To add to other answers, in higher-level math it's important to be able to <em>generalize</em> the behavior of $(a + b)^n$ when $n$ is a variable. 
The work that you're doing now is mostly just training exercises for being familiar with, and having intuition for, the general binomial theorem.</p>\n\n<p>For example, there's a really key moment early on in calculus when it's critical to understand that, for all $n$: $(x + h)^n = x^n + nx^{n-1}h....$ plus a bunch of other stuff with higher powers of $h$ in it, but whose exact numerical coefficients we can ignore. </p>\n" }, { "answer_id": 2087103, "score": 37, "text": "<p>The binomial theorem\nallows you to\nwrite out the expansion\nof your polynomial immediately.</p>\n\n<p>It also allows you\nto answer such questions as\n\"What is the coefficient\nof $x^{20}$ in\n$(1+x)^{100}$?\"</p>\n\n<p>Its generalisation to\nnon-integer exponents\nallows you to\nget the expansion of\n$(1-x)^{-1/2}$.</p>\n\n<p>It is a good thing.</p>\n" }, { "answer_id": 2087105, "score": 14, "text": "<p>There are all sorts of reasons. First of all, you'll learn when you've moved forward a bit that the binomial theorem can actually be applied in cases of non-integer exponents, by defining $\\binom{n}{k} = \\dfrac{n^{\\underline{k}}}{k!}$, where the underline represents a falling power (i.e., $n(n-1)(n-2)\\cdots$). 
Another that comes to mind immediately is the series expansion of $e$...</p>\n\n<p>$$e^x = \\lim_{n\\to\\infty}\\left(1+\\frac{x}{n}\\right)^n = \\lim_{n\\to\\infty}\\left(1 + n\\frac{x}{n} + \\frac{n^\\underline{2}}{2!}\\frac{x^2}{n^2} + \\frac{n^\\underline{3}}{3!}\\frac{x^3}{n^3} + \\cdots + \\frac{x^n}{n^n}\\right)$$</p>\n\n<p>As $n$ gets very large, for fixed $k$, $n^\\underline{k}$ approaches $n^k$, and $\\frac{x^n}{n^n}$ vanishes since $x$ is fixed, so you end up with the familiar series...</p>\n\n<p>$$e^x = 1 + x + \\frac{x^2}{2} + \\frac{x^3}{6} + \\frac{x^4}{24} + \\cdots$$</p>\n" }, { "answer_id": 2087134, "score": 11, "text": "<p>If you're not seeing the utility of the binomial theorem, then I think you're missing an important observation:</p>\n\n<p>The coefficients of every term can be calculated extremely efficiently (your example of $(x-4)^6$ is easy to expand <em>by hand</em> with the binomial theorem, without, it's doable but very tedious).</p>\n\n<p>A useful shorthand for lower powers of the exponent is to remember the first few rows of <em>Pascal's Triangle</em> along with the rule for adding more if you need them. Example of first few rows:</p>\n\n<p>$$1$$\n$$1\\qquad 1$$\n$$1\\qquad 2\\qquad 1$$\n$$1\\qquad 3\\qquad 3\\qquad 1$$\n$$1\\qquad 4\\qquad 6\\qquad 4\\qquad 1$$\n$$1\\qquad 5\\qquad 10\\qquad 10\\qquad 5\\qquad 1$$\n$$1\\qquad 6\\qquad 15\\qquad 20\\qquad 15\\qquad 6\\qquad 1$$</p>\n\n<p>The last row gives the coefficients of $x^k\\cdot(-4)^{(6-k)}$ for $k$ running from $0$ to $6$. Computing $(-4)^{(6-k)}$ is easy enough by hand, and multiplying it by the relevant entry in the last row above is also fairly easy (give it a try).</p>\n\n<p>One more thing: The binomial theorem admits an interesting generalization to powers of general polynomials of a given length sometimes called the multinomial theorem. 
(Challenge: try investigating the coefficients of the first few powers of $(x+y+z)^n$ and perhaps you can figure out the pattern for trinomials? How does it relate to the pattern for binomials? Can you generalize it to general polynomials?)</p>\n" }, { "answer_id": 2087132, "score": 10, "text": "<p>Marty Cohen and Phicar answered your question correctly. In mathematics, as we always use small numbers, It could be feasible to think this. Why learn the binomial theorem to compute $(a+b)^2$? But what can you say about $(3a+5b)^{987658}$? The binomial theorem is way more efficient in this case and the actual number of cases in which it wouldn't be efficient is very small.</p>\n\n<p>The binomial theorem gives you a powerful tool: The computation for any exponent in - perhaps - the fastest way possible in which it is only not effective for a finite amount of exponents, but its the best possible for an endless amount of exponents. </p>\n\n<p>This is one trap in mathematics: We implicitly assume that the mathematical tools are created to treat familiar cases, but they are created to treat <strong>all</strong> cases in the neatest way possible, they will be cumbersome for some simple cases (Example: Integrating $f(x)=mx$ in the interval $[0,a]$. This is just a triangle, there is no need for integral calculus here). I remember of me asking about why we need analysis if calculus seems to work well, but I was supposing (without realizing) that <em>calculus</em> is used only on familiar cases, but we want to be useful also for <a href=\"https://math.stackexchange.com/q/2010206/25805\">very imaginative cases</a>!</p>\n" }, { "answer_id": 2087847, "score": 8, "text": "<p>To add to other answers, in higher-level math it's important to be able to <em>generalize</em> the behavior of $(a + b)^n$ when $n$ is a variable. 
The work that you're doing now is mostly just training exercises for being familiar with, and having intuition for, the general binomial theorem.</p>\n\n<p>For example, there's a really key moment early on in calculus when it's critical to understand that, for all $n$: $(x + h)^n = x^n + nx^{n-1}h....$ plus a bunch of other stuff with higher powers of $h$ in it, but whose exact numerical coefficients we can ignore. </p>\n" }, { "answer_id": 2087103, "score": 37, "text": "<p>The binomial theorem\nallows you to\nwrite out the expansion\nof your polynomial immediately.</p>\n\n<p>It also allows you\nto answer such questions as\n\"What is the coefficient\nof $x^{20}$ in\n$(1+x)^{100}$?\"</p>\n\n<p>Its generalisation to\nnon-integer exponents\nallows you to\nget the expansion of\n$(1-x)^{-1/2}$.</p>\n\n<p>It is a good thing.</p>\n" }, { "answer_id": 2087105, "score": 14, "text": "<p>There are all sorts of reasons. First of all, you'll learn when you've moved forward a bit that the binomial theorem can actually be applied in cases of non-integer exponents, by defining $\\binom{n}{k} = \\dfrac{n^{\\underline{k}}}{k!}$, where the underline represents a falling power (i.e., $n(n-1)(n-2)\\cdots$). 
Another that comes to mind immediately is the series expansion of $e$...</p>\n\n<p>$$e^x = \\lim_{n\\to\\infty}\\left(1+\\frac{x}{n}\\right)^n = \\lim_{n\\to\\infty}\\left(1 + n\\frac{x}{n} + \\frac{n^\\underline{2}}{2!}\\frac{x^2}{n^2} + \\frac{n^\\underline{3}}{3!}\\frac{x^3}{n^3} + \\cdots + \\frac{x^n}{n^n}\\right)$$</p>\n\n<p>As $n$ gets very large, for fixed $k$, $n^\\underline{k}$ approaches $n^k$, and $\\frac{x^n}{n^n}$ vanishes since $x$ is fixed, so you end up with the familiar series...</p>\n\n<p>$$e^x = 1 + x + \\frac{x^2}{2} + \\frac{x^3}{6} + \\frac{x^4}{24} + \\cdots$$</p>\n" }, { "answer_id": 2087134, "score": 11, "text": "<p>If you're not seeing the utility of the binomial theorem, then I think you're missing an important observation:</p>\n\n<p>The coefficients of every term can be calculated extremely efficiently (your example of $(x-4)^6$ is easy to expand <em>by hand</em> with the binomial theorem, without, it's doable but very tedious).</p>\n\n<p>A useful shorthand for lower powers of the exponent is to remember the first few rows of <em>Pascal's Triangle</em> along with the rule for adding more if you need them. Example of first few rows:</p>\n\n<p>$$1$$\n$$1\\qquad 1$$\n$$1\\qquad 2\\qquad 1$$\n$$1\\qquad 3\\qquad 3\\qquad 1$$\n$$1\\qquad 4\\qquad 6\\qquad 4\\qquad 1$$\n$$1\\qquad 5\\qquad 10\\qquad 10\\qquad 5\\qquad 1$$\n$$1\\qquad 6\\qquad 15\\qquad 20\\qquad 15\\qquad 6\\qquad 1$$</p>\n\n<p>The last row gives the coefficients of $x^k\\cdot(-4)^{(6-k)}$ for $k$ running from $0$ to $6$. Computing $(-4)^{(6-k)}$ is easy enough by hand, and multiplying it by the relevant entry in the last row above is also fairly easy (give it a try).</p>\n\n<p>One more thing: The binomial theorem admits an interesting generalization to powers of general polynomials of a given length sometimes called the multinomial theorem. 
(Challenge: try investigating the coefficients of the first few powers of $(x+y+z)^n$ and perhaps you can figure out the pattern for trinomials? How does it relate to the pattern for binomials? Can you generalize it to general polynomials?)</p>\n" }, { "answer_id": 2087132, "score": 10, "text": "<p>Marty Cohen and Phicar answered your question correctly. In mathematics, as we always use small numbers, It could be feasible to think this. Why learn the binomial theorem to compute $(a+b)^2$? But what can you say about $(3a+5b)^{987658}$? The binomial theorem is way more efficient in this case and the actual number of cases in which it wouldn't be efficient is very small.</p>\n\n<p>The binomial theorem gives you a powerful tool: The computation for any exponent in - perhaps - the fastest way possible in which it is only not effective for a finite amount of exponents, but its the best possible for an endless amount of exponents. </p>\n\n<p>This is one trap in mathematics: We implicitly assume that the mathematical tools are created to treat familiar cases, but they are created to treat <strong>all</strong> cases in the neatest way possible, they will be cumbersome for some simple cases (Example: Integrating $f(x)=mx$ in the interval $[0,a]$. This is just a triangle, there is no need for integral calculus here). I remember of me asking about why we need analysis if calculus seems to work well, but I was supposing (without realizing) that <em>calculus</em> is used only on familiar cases, but we want to be useful also for <a href=\"https://math.stackexchange.com/q/2010206/25805\">very imaginative cases</a>!</p>\n" }, { "answer_id": 2087847, "score": 8, "text": "<p>To add to other answers, in higher-level math it's important to be able to <em>generalize</em> the behavior of $(a + b)^n$ when $n$ is a variable. 
The work that you're doing now is mostly just training exercises for being familiar with, and having intuition for, the general binomial theorem.</p>\n\n<p>For example, there's a really key moment early on in calculus when it's critical to understand that, for all $n$: $(x + h)^n = x^n + nx^{n-1}h....$ plus a bunch of other stuff with higher powers of $h$ in it, but whose exact numerical coefficients we can ignore. </p>\n" }, { "answer_id": 2087103, "score": 37, "text": "<p>The binomial theorem\nallows you to\nwrite out the expansion\nof your polynomial immediately.</p>\n\n<p>It also allows you\nto answer such questions as\n\"What is the coefficient\nof $x^{20}$ in\n$(1+x)^{100}$?\"</p>\n\n<p>Its generalisation to\nnon-integer exponents\nallows you to\nget the expansion of\n$(1-x)^{-1/2}$.</p>\n\n<p>It is a good thing.</p>\n" }, { "answer_id": 2087105, "score": 14, "text": "<p>There are all sorts of reasons. First of all, you'll learn when you've moved forward a bit that the binomial theorem can actually be applied in cases of non-integer exponents, by defining $\\binom{n}{k} = \\dfrac{n^{\\underline{k}}}{k!}$, where the underline represents a falling power (i.e., $n(n-1)(n-2)\\cdots$). 
Another that comes to mind immediately is the series expansion of $e$...</p>\n\n<p>$$e^x = \\lim_{n\\to\\infty}\\left(1+\\frac{x}{n}\\right)^n = \\lim_{n\\to\\infty}\\left(1 + n\\frac{x}{n} + \\frac{n^\\underline{2}}{2!}\\frac{x^2}{n^2} + \\frac{n^\\underline{3}}{3!}\\frac{x^3}{n^3} + \\cdots + \\frac{x^n}{n^n}\\right)$$</p>\n\n<p>As $n$ gets very large, for fixed $k$, $n^\\underline{k}$ approaches $n^k$, and $\\frac{x^n}{n^n}$ vanishes since $x$ is fixed, so you end up with the familiar series...</p>\n\n<p>$$e^x = 1 + x + \\frac{x^2}{2} + \\frac{x^3}{6} + \\frac{x^4}{24} + \\cdots$$</p>\n" }, { "answer_id": 2087134, "score": 11, "text": "<p>If you're not seeing the utility of the binomial theorem, then I think you're missing an important observation:</p>\n\n<p>The coefficients of every term can be calculated extremely efficiently (your example of $(x-4)^6$ is easy to expand <em>by hand</em> with the binomial theorem, without, it's doable but very tedious).</p>\n\n<p>A useful shorthand for lower powers of the exponent is to remember the first few rows of <em>Pascal's Triangle</em> along with the rule for adding more if you need them. Example of first few rows:</p>\n\n<p>$$1$$\n$$1\\qquad 1$$\n$$1\\qquad 2\\qquad 1$$\n$$1\\qquad 3\\qquad 3\\qquad 1$$\n$$1\\qquad 4\\qquad 6\\qquad 4\\qquad 1$$\n$$1\\qquad 5\\qquad 10\\qquad 10\\qquad 5\\qquad 1$$\n$$1\\qquad 6\\qquad 15\\qquad 20\\qquad 15\\qquad 6\\qquad 1$$</p>\n\n<p>The last row gives the coefficients of $x^k\\cdot(-4)^{(6-k)}$ for $k$ running from $0$ to $6$. Computing $(-4)^{(6-k)}$ is easy enough by hand, and multiplying it by the relevant entry in the last row above is also fairly easy (give it a try).</p>\n\n<p>One more thing: The binomial theorem admits an interesting generalization to powers of general polynomials of a given length sometimes called the multinomial theorem. 
(Challenge: try investigating the coefficients of the first few powers of $(x+y+z)^n$ and perhaps you can figure out the pattern for trinomials? How does it relate to the pattern for binomials? Can you generalize it to general polynomials?)</p>\n" }, { "answer_id": 2087132, "score": 10, "text": "<p>Marty Cohen and Phicar answered your question correctly. In mathematics, as we always use small numbers, It could be feasible to think this. Why learn the binomial theorem to compute $(a+b)^2$? But what can you say about $(3a+5b)^{987658}$? The binomial theorem is way more efficient in this case and the actual number of cases in which it wouldn't be efficient is very small.</p>\n\n<p>The binomial theorem gives you a powerful tool: The computation for any exponent in - perhaps - the fastest way possible in which it is only not effective for a finite amount of exponents, but its the best possible for an endless amount of exponents. </p>\n\n<p>This is one trap in mathematics: We implicitly assume that the mathematical tools are created to treat familiar cases, but they are created to treat <strong>all</strong> cases in the neatest way possible, they will be cumbersome for some simple cases (Example: Integrating $f(x)=mx$ in the interval $[0,a]$. This is just a triangle, there is no need for integral calculus here). I remember of me asking about why we need analysis if calculus seems to work well, but I was supposing (without realizing) that <em>calculus</em> is used only on familiar cases, but we want to be useful also for <a href=\"https://math.stackexchange.com/q/2010206/25805\">very imaginative cases</a>!</p>\n" }, { "answer_id": 2087847, "score": 8, "text": "<p>To add to other answers, in higher-level math it's important to be able to <em>generalize</em> the behavior of $(a + b)^n$ when $n$ is a variable. 
The work that you're doing now is mostly just training exercises for being familiar with, and having intuition for, the general binomial theorem.</p>\n\n<p>For example, there's a really key moment early on in calculus when it's critical to understand that, for all $n$: $(x + h)^n = x^n + nx^{n-1}h....$ plus a bunch of other stuff with higher powers of $h$ in it, but whose exact numerical coefficients we can ignore. </p>\n" }, { "answer_id": 2087103, "score": 37, "text": "<p>The binomial theorem\nallows you to\nwrite out the expansion\nof your polynomial immediately.</p>\n\n<p>It also allows you\nto answer such questions as\n\"What is the coefficient\nof $x^{20}$ in\n$(1+x)^{100}$?\"</p>\n\n<p>Its generalisation to\nnon-integer exponents\nallows you to\nget the expansion of\n$(1-x)^{-1/2}$.</p>\n\n<p>It is a good thing.</p>\n" }, { "answer_id": 2087105, "score": 14, "text": "<p>There are all sorts of reasons. First of all, you'll learn when you've moved forward a bit that the binomial theorem can actually be applied in cases of non-integer exponents, by defining $\\binom{n}{k} = \\dfrac{n^{\\underline{k}}}{k!}$, where the underline represents a falling power (i.e., $n(n-1)(n-2)\\cdots$). 
Another that comes to mind immediately is the series expansion of $e$...</p>\n\n<p>$$e^x = \\lim_{n\\to\\infty}\\left(1+\\frac{x}{n}\\right)^n = \\lim_{n\\to\\infty}\\left(1 + n\\frac{x}{n} + \\frac{n^\\underline{2}}{2!}\\frac{x^2}{n^2} + \\frac{n^\\underline{3}}{3!}\\frac{x^3}{n^3} + \\cdots + \\frac{x^n}{n^n}\\right)$$</p>\n\n<p>As $n$ gets very large, for fixed $k$, $n^\\underline{k}$ approaches $n^k$, and $\\frac{x^n}{n^n}$ vanishes since $x$ is fixed, so you end up with the familiar series...</p>\n\n<p>$$e^x = 1 + x + \\frac{x^2}{2} + \\frac{x^3}{6} + \\frac{x^4}{24} + \\cdots$$</p>\n" }, { "answer_id": 2087134, "score": 11, "text": "<p>If you're not seeing the utility of the binomial theorem, then I think you're missing an important observation:</p>\n\n<p>The coefficients of every term can be calculated extremely efficiently (your example of $(x-4)^6$ is easy to expand <em>by hand</em> with the binomial theorem, without, it's doable but very tedious).</p>\n\n<p>A useful shorthand for lower powers of the exponent is to remember the first few rows of <em>Pascal's Triangle</em> along with the rule for adding more if you need them. Example of first few rows:</p>\n\n<p>$$1$$\n$$1\\qquad 1$$\n$$1\\qquad 2\\qquad 1$$\n$$1\\qquad 3\\qquad 3\\qquad 1$$\n$$1\\qquad 4\\qquad 6\\qquad 4\\qquad 1$$\n$$1\\qquad 5\\qquad 10\\qquad 10\\qquad 5\\qquad 1$$\n$$1\\qquad 6\\qquad 15\\qquad 20\\qquad 15\\qquad 6\\qquad 1$$</p>\n\n<p>The last row gives the coefficients of $x^k\\cdot(-4)^{(6-k)}$ for $k$ running from $0$ to $6$. Computing $(-4)^{(6-k)}$ is easy enough by hand, and multiplying it by the relevant entry in the last row above is also fairly easy (give it a try).</p>\n\n<p>One more thing: The binomial theorem admits an interesting generalization to powers of general polynomials of a given length sometimes called the multinomial theorem. 
(Challenge: try investigating the coefficients of the first few powers of $(x+y+z)^n$ and perhaps you can figure out the pattern for trinomials? How does it relate to the pattern for binomials? Can you generalize it to general polynomials?)</p>\n" }, { "answer_id": 2087132, "score": 10, "text": "<p>Marty Cohen and Phicar answered your question correctly. In mathematics, as we always use small numbers, It could be feasible to think this. Why learn the binomial theorem to compute $(a+b)^2$? But what can you say about $(3a+5b)^{987658}$? The binomial theorem is way more efficient in this case and the actual number of cases in which it wouldn't be efficient is very small.</p>\n\n<p>The binomial theorem gives you a powerful tool: The computation for any exponent in - perhaps - the fastest way possible in which it is only not effective for a finite amount of exponents, but its the best possible for an endless amount of exponents. </p>\n\n<p>This is one trap in mathematics: We implicitly assume that the mathematical tools are created to treat familiar cases, but they are created to treat <strong>all</strong> cases in the neatest way possible, they will be cumbersome for some simple cases (Example: Integrating $f(x)=mx$ in the interval $[0,a]$. This is just a triangle, there is no need for integral calculus here). I remember of me asking about why we need analysis if calculus seems to work well, but I was supposing (without realizing) that <em>calculus</em> is used only on familiar cases, but we want to be useful also for <a href=\"https://math.stackexchange.com/q/2010206/25805\">very imaginative cases</a>!</p>\n" }, { "answer_id": 2087847, "score": 8, "text": "<p>To add to other answers, in higher-level math it's important to be able to <em>generalize</em> the behavior of $(a + b)^n$ when $n$ is a variable. 
The work that you're doing now is mostly just training exercises for being familiar with, and having intuition for, the general binomial theorem.</p>\n\n<p>For example, there's a really key moment early on in calculus when it's critical to understand that, for all $n$: $(x + h)^n = x^n + nx^{n-1}h....$ plus a bunch of other stuff with higher powers of $h$ in it, but whose exact numerical coefficients we can ignore. </p>\n" }, { "answer_id": 2087103, "score": 37, "text": "<p>The binomial theorem\nallows you to\nwrite out the expansion\nof your polynomial immediately.</p>\n\n<p>It also allows you\nto answer such questions as\n\"What is the coefficient\nof $x^{20}$ in\n$(1+x)^{100}$?\"</p>\n\n<p>Its generalisation to\nnon-integer exponents\nallows you to\nget the expansion of\n$(1-x)^{-1/2}$.</p>\n\n<p>It is a good thing.</p>\n" }, { "answer_id": 2087105, "score": 14, "text": "<p>There are all sorts of reasons. First of all, you'll learn when you've moved forward a bit that the binomial theorem can actually be applied in cases of non-integer exponents, by defining $\\binom{n}{k} = \\dfrac{n^{\\underline{k}}}{k!}$, where the underline represents a falling power (i.e., $n(n-1)(n-2)\\cdots$). 
Another that comes to mind immediately is the series expansion of $e$...</p>\n\n<p>$$e^x = \\lim_{n\\to\\infty}\\left(1+\\frac{x}{n}\\right)^n = \\lim_{n\\to\\infty}\\left(1 + n\\frac{x}{n} + \\frac{n^\\underline{2}}{2!}\\frac{x^2}{n^2} + \\frac{n^\\underline{3}}{3!}\\frac{x^3}{n^3} + \\cdots + \\frac{x^n}{n^n}\\right)$$</p>\n\n<p>As $n$ gets very large, for fixed $k$, $n^\\underline{k}$ approaches $n^k$, and $\\frac{x^n}{n^n}$ vanishes since $x$ is fixed, so you end up with the familiar series...</p>\n\n<p>$$e^x = 1 + x + \\frac{x^2}{2} + \\frac{x^3}{6} + \\frac{x^4}{24} + \\cdots$$</p>\n" }, { "answer_id": 2087134, "score": 11, "text": "<p>If you're not seeing the utility of the binomial theorem, then I think you're missing an important observation:</p>\n\n<p>The coefficients of every term can be calculated extremely efficiently (your example of $(x-4)^6$ is easy to expand <em>by hand</em> with the binomial theorem, without, it's doable but very tedious).</p>\n\n<p>A useful shorthand for lower powers of the exponent is to remember the first few rows of <em>Pascal's Triangle</em> along with the rule for adding more if you need them. Example of first few rows:</p>\n\n<p>$$1$$\n$$1\\qquad 1$$\n$$1\\qquad 2\\qquad 1$$\n$$1\\qquad 3\\qquad 3\\qquad 1$$\n$$1\\qquad 4\\qquad 6\\qquad 4\\qquad 1$$\n$$1\\qquad 5\\qquad 10\\qquad 10\\qquad 5\\qquad 1$$\n$$1\\qquad 6\\qquad 15\\qquad 20\\qquad 15\\qquad 6\\qquad 1$$</p>\n\n<p>The last row gives the coefficients of $x^k\\cdot(-4)^{(6-k)}$ for $k$ running from $0$ to $6$. Computing $(-4)^{(6-k)}$ is easy enough by hand, and multiplying it by the relevant entry in the last row above is also fairly easy (give it a try).</p>\n\n<p>One more thing: The binomial theorem admits an interesting generalization to powers of general polynomials of a given length sometimes called the multinomial theorem. 
(Challenge: try investigating the coefficients of the first few powers of $(x+y+z)$, and perhaps you can figure out the pattern for trinomials? How does it relate to the pattern for binomials? Can you generalize it to general polynomials?)</p>\n" }, { "answer_id": 2087132, "score": 10, "text": "<p>Marty Cohen and Phicar answered your question correctly. In mathematics, since we usually work with small numbers, it's tempting to think this way: why learn the binomial theorem to compute $(a+b)^2$? But what can you say about $(3a+5b)^{987658}$? The binomial theorem is way more efficient in this case, and the number of cases in which it wouldn't be efficient is actually very small.</p>\n\n<p>The binomial theorem gives you a powerful tool: the computation for any exponent in, perhaps, the fastest way possible. It is inefficient for only finitely many exponents, but it's the best possible for infinitely many exponents. </p>\n\n<p>This is one trap in mathematics: we implicitly assume that mathematical tools are created to treat familiar cases, but they are created to treat <strong>all</strong> cases in the neatest way possible, so they will be cumbersome for some simple cases (example: integrating $f(x)=mx$ over the interval $[0,a]$. This is just a triangle, so there is no need for integral calculus here). I remember asking why we need analysis if calculus seems to work well, but I was supposing (without realizing it) that <em>calculus</em> is used only on familiar cases, when we want it to be useful also for <a href=\"https://math.stackexchange.com/q/2010206/25805\">very imaginative cases</a>!</p>\n" }, { "answer_id": 2087847, "score": 8, "text": "<p>To add to other answers, in higher-level math it's important to be able to <em>generalize</em> the behavior of $(a + b)^n$ when $n$ is a variable. 
The work that you're doing now is mostly just training exercises for being familiar with, and having intuition for, the general binomial theorem.</p>\n\n<p>For example, there's a really key moment early on in calculus when it's critical to understand that, for all $n$: $(x + h)^n = x^n + nx^{n-1}h....$ plus a bunch of other stuff with higher powers of $h$ in it, but whose exact numerical coefficients we can ignore. </p>\n" } ]
Question: Is the binomial theorem actually more efficient than just distributing?

Answer 1: The binomial theorem allows you to write out the expansion of your polynomial immediately.

It also allows you to answer such questions as "What is the coefficient of $x^{20}$ in $(1+x)^{100}$?"

Its generalisation to non-integer exponents allows you to get the expansion of $(1-x)^{-1/2}$.

It is a good thing.

Answer 2: There are all sorts of reasons. First of all, you'll learn when you've moved a bit further on that the binomial theorem can actually be applied to non-integer exponents, by defining $\binom{n}{k} = \dfrac{n^{\underline{k}}}{k!}$, where the underline represents a falling power, i.e., $n^{\underline{k}} = n(n-1)(n-2)\cdots(n-k+1)$. Another application that comes to mind immediately is the series expansion of $e^x$:

$$e^x = \lim_{n\to\infty}\left(1+\frac{x}{n}\right)^n = \lim_{n\to\infty}\left(1 + n\frac{x}{n} + \frac{n^{\underline{2}}}{2!}\frac{x^2}{n^2} + \frac{n^{\underline{3}}}{3!}\frac{x^3}{n^3} + \cdots + \frac{x^n}{n^n}\right)$$

As $n$ gets very large, for fixed $k$, $n^{\underline{k}}$ approaches $n^k$, and $\frac{x^n}{n^n}$ vanishes since $x$ is fixed, so you end up with the familiar series

$$e^x = 1 + x + \frac{x^2}{2} + \frac{x^3}{6} + \frac{x^4}{24} + \cdots$$

Answer 3: If you're not seeing the utility of the binomial theorem, I think you're missing an important observation: the coefficient of every term can be calculated extremely efficiently. (Your example of $(x-4)^6$ is easy to expand by hand with the binomial theorem; without it, the expansion is doable but very tedious.)

A useful shorthand for lower exponents is to remember the first few rows of Pascal's Triangle, along with the rule for adding more rows if you need them.
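To make the coefficient-extraction point concrete, here is a small sketch in Python (the helper name `binom_coeff` is invented for this example): `math.comb` answers the "$x^{20}$ in $(1+x)^{100}$" question directly, and it agrees with the falling-power definition $\binom{n}{k} = n^{\underline{k}}/k!$ mentioned in Answer 2.

```python
import math

def binom_coeff(n: int, k: int) -> int:
    """Coefficient of x^k in (1 + x)^n -- no expansion of 101 terms needed."""
    return math.comb(n, k)

# "What is the coefficient of x^20 in (1 + x)^100?"
c = binom_coeff(100, 20)
print(c)  # an exact 21-digit integer, obtained instantly

# Cross-check against the falling-power definition C(n, k) = n^(falling k) / k!
falling = 1
for i in range(20):
    falling *= 100 - i          # 100 * 99 * ... * 81
assert c == falling // math.factorial(20)
```

By the symmetry $\binom{n}{k}=\binom{n}{n-k}$, the same call with $k=80$ returns the same value.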
Example of the first few rows:

$$1$$
$$1\qquad 1$$
$$1\qquad 2\qquad 1$$
$$1\qquad 3\qquad 3\qquad 1$$
$$1\qquad 4\qquad 6\qquad 4\qquad 1$$
$$1\qquad 5\qquad 10\qquad 10\qquad 5\qquad 1$$
$$1\qquad 6\qquad 15\qquad 20\qquad 15\qquad 6\qquad 1$$

The last row gives the coefficients of $x^k\cdot(-4)^{6-k}$ for $k$ running from $0$ to $6$. Computing $(-4)^{6-k}$ is easy enough by hand, and multiplying it by the corresponding entry in the last row is also fairly easy (give it a try).

One more thing: the binomial theorem admits an interesting generalization to powers of polynomials with any number of terms, sometimes called the multinomial theorem. (Challenge: try investigating the coefficients of the first few powers of $(x+y+z)^n$; perhaps you can figure out the pattern for trinomials. How does it relate to the pattern for binomials? Can you generalize it to arbitrary polynomials?)

Answer 4: Marty Cohen and Phicar answered your question correctly. Since we usually work with small numbers, it is tempting to ask: why learn the binomial theorem just to compute $(a+b)^2$? But what can you say about $(3a+5b)^{987658}$? The binomial theorem is far more efficient in that case, and the number of cases in which it wouldn't be efficient is very small.

The binomial theorem gives you a powerful tool: a way to compute the expansion for any exponent, perhaps the fastest way possible. It is less convenient for only finitely many exponents, but it is the best option for infinitely many.

This is one trap in mathematics: we implicitly assume that mathematical tools are created to treat familiar cases, but they are created to treat all cases in the neatest way possible, so they can be cumbersome for some simple cases. (Example: integrating $f(x)=mx$ over the interval $[0,a]$. The region is just a triangle, so there is no need for integral calculus here.)
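The addition rule and the $(x-4)^6$ example can be sketched in a few lines of Python (a minimal illustration; `pascal_rows` is a hypothetical helper name). The last row of the triangle supplies the binomial coefficients, and a brute-force repeated distribution confirms the result:

```python
def pascal_rows(n: int) -> list[list[int]]:
    """Rows 0..n of Pascal's triangle, built with the addition rule."""
    rows = [[1]]
    for _ in range(n):
        prev = rows[-1]
        rows.append([1] + [prev[i] + prev[i + 1] for i in range(len(prev) - 1)] + [1])
    return rows

row6 = pascal_rows(6)[-1]                       # [1, 6, 15, 20, 15, 6, 1]
# Coefficient of x^k in (x - 4)^6 is C(6, k) * (-4)^(6 - k).
coeffs = [row6[k] * (-4) ** (6 - k) for k in range(7)]
print(coeffs)   # [4096, -6144, 3840, -1280, 240, -24, 1]

# Cross-check the tedious way: distribute (x - 4) six times.
poly = [1]                                      # constant term first
for _ in range(6):
    poly = [(poly[i - 1] if i > 0 else 0)
            - 4 * (poly[i] if i < len(poly) else 0)
            for i in range(len(poly) + 1)]
assert poly == coeffs
```

A quick sanity check: the coefficients sum to $(1-4)^6 = 729$, the value of the polynomial at $x=1$.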
I remember asking why we need analysis when calculus already seems to work well, but I was assuming (without realizing it) that calculus is used only on familiar cases; we want our tools to be useful for very imaginative cases as well (https://math.stackexchange.com/q/2010206/25805).

Answer 5: To add to the other answers: in higher-level math it's important to be able to generalize the behavior of $(a + b)^n$ when $n$ is a variable. The work that you're doing now is mostly training exercises for becoming familiar with, and building intuition for, the general binomial theorem.

For example, there's a really key moment early in calculus when it's critical to understand that, for all $n$, $(x + h)^n = x^n + nx^{n-1}h + \cdots$, where the remaining terms contain higher powers of $h$ whose exact numerical coefficients we can ignore.
Another that comes to mind immediately is the series expansion of $e$...</p> <p>$$e^x = \lim_{n\to\infty}\left(1+\frac{x}{n}\right)^n = \lim_{n\to\infty}\left(1 + n\frac{x}{n} + \frac{n^\underline{2}}{2!}\frac{x^2}{n^2} + \frac{n^\underline{3}}{3!}\frac{x^3}{n^3} + \cdots + \frac{x^n}{n^n}\right)$$</p> <p>As $n$ gets very large, for fixed $k$, $n^\underline{k}$ approaches $n^k$, and $\frac{x^n}{n^n}$ vanishes since $x$ is fixed, so you end up with the familiar series...</p> <p>$$e^x = 1 + x + \frac{x^2}{2} + \frac{x^3}{6} + \frac{x^4}{24} + \cdots$$</p> Answer 38: <p>If you're not seeing the utility of the binomial theorem, then I think you're missing an important observation:</p> <p>The coefficients of every term can be calculated extremely efficiently (your example of $(x-4)^6$ is easy to expand <em>by hand</em> with the binomial theorem, without, it's doable but very tedious).</p> <p>A useful shorthand for lower powers of the exponent is to remember the first few rows of <em>Pascal's Triangle</em> along with the rule for adding more if you need them. Example of first few rows:</p> <p>$$1$$ $$1\qquad 1$$ $$1\qquad 2\qquad 1$$ $$1\qquad 3\qquad 3\qquad 1$$ $$1\qquad 4\qquad 6\qquad 4\qquad 1$$ $$1\qquad 5\qquad 10\qquad 10\qquad 5\qquad 1$$ $$1\qquad 6\qquad 15\qquad 20\qquad 15\qquad 6\qquad 1$$</p> <p>The last row gives the coefficients of $x^k\cdot(-4)^{(6-k)}$ for $k$ running from $0$ to $6$. Computing $(-4)^{(6-k)}$ is easy enough by hand, and multiplying it by the relevant entry in the last row above is also fairly easy (give it a try).</p> <p>One more thing: The binomial theorem admits an interesting generalization to powers of general polynomials of a given length sometimes called the multinomial theorem. (Challenge: try investigating the coefficients of the first few powers of $(x+y+z)^n$ and perhaps you can figure out the pattern for trinomials? How does it relate to the pattern for binomials? 
Can you generalize it to general polynomials?)</p> Answer 39: <p>Marty Cohen and Phicar answered your question correctly. In mathematics, as we always use small numbers, It could be feasible to think this. Why learn the binomial theorem to compute $(a+b)^2$? But what can you say about $(3a+5b)^{987658}$? The binomial theorem is way more efficient in this case and the actual number of cases in which it wouldn't be efficient is very small.</p> <p>The binomial theorem gives you a powerful tool: The computation for any exponent in - perhaps - the fastest way possible in which it is only not effective for a finite amount of exponents, but its the best possible for an endless amount of exponents. </p> <p>This is one trap in mathematics: We implicitly assume that the mathematical tools are created to treat familiar cases, but they are created to treat <strong>all</strong> cases in the neatest way possible, they will be cumbersome for some simple cases (Example: Integrating $f(x)=mx$ in the interval $[0,a]$. This is just a triangle, there is no need for integral calculus here). I remember of me asking about why we need analysis if calculus seems to work well, but I was supposing (without realizing) that <em>calculus</em> is used only on familiar cases, but we want to be useful also for <a href="https://math.stackexchange.com/q/2010206/25805">very imaginative cases</a>!</p> Answer 40: <p>To add to other answers, in higher-level math it's important to be able to <em>generalize</em> the behavior of $(a + b)^n$ when $n$ is a variable. The work that you're doing now is mostly just training exercises for being familiar with, and having intuition for, the general binomial theorem.</p> <p>For example, there's a really key moment early on in calculus when it's critical to understand that, for all $n$: $(x + h)^n = x^n + nx^{n-1}h....$ plus a bunch of other stuff with higher powers of $h$ in it, but whose exact numerical coefficients we can ignore. 
</p> Answer 41: <p>The binomial theorem allows you to write out the expansion of your polynomial immediately.</p> <p>It also allows you to answer such questions as "What is the coefficient of $x^{20}$ in $(1+x)^{100}$?"</p> <p>Its generalisation to non-integer exponents allows you to get the expansion of $(1-x)^{-1/2}$.</p> <p>It is a good thing.</p> Answer 42: <p>There are all sorts of reasons. First of all, you'll learn when you've moved forward a bit that the binomial theorem can actually be applied in cases of non-integer exponents, by defining $\binom{n}{k} = \dfrac{n^{\underline{k}}}{k!}$, where the underline represents a falling power (i.e., $n(n-1)(n-2)\cdots$). Another that comes to mind immediately is the series expansion of $e$...</p> <p>$$e^x = \lim_{n\to\infty}\left(1+\frac{x}{n}\right)^n = \lim_{n\to\infty}\left(1 + n\frac{x}{n} + \frac{n^\underline{2}}{2!}\frac{x^2}{n^2} + \frac{n^\underline{3}}{3!}\frac{x^3}{n^3} + \cdots + \frac{x^n}{n^n}\right)$$</p> <p>As $n$ gets very large, for fixed $k$, $n^\underline{k}$ approaches $n^k$, and $\frac{x^n}{n^n}$ vanishes since $x$ is fixed, so you end up with the familiar series...</p> <p>$$e^x = 1 + x + \frac{x^2}{2} + \frac{x^3}{6} + \frac{x^4}{24} + \cdots$$</p> Answer 43: <p>If you're not seeing the utility of the binomial theorem, then I think you're missing an important observation:</p> <p>The coefficients of every term can be calculated extremely efficiently (your example of $(x-4)^6$ is easy to expand <em>by hand</em> with the binomial theorem, without, it's doable but very tedious).</p> <p>A useful shorthand for lower powers of the exponent is to remember the first few rows of <em>Pascal's Triangle</em> along with the rule for adding more if you need them. 
Example of first few rows:</p> <p>$$1$$ $$1\qquad 1$$ $$1\qquad 2\qquad 1$$ $$1\qquad 3\qquad 3\qquad 1$$ $$1\qquad 4\qquad 6\qquad 4\qquad 1$$ $$1\qquad 5\qquad 10\qquad 10\qquad 5\qquad 1$$ $$1\qquad 6\qquad 15\qquad 20\qquad 15\qquad 6\qquad 1$$</p> <p>The last row gives the coefficients of $x^k\cdot(-4)^{(6-k)}$ for $k$ running from $0$ to $6$. Computing $(-4)^{(6-k)}$ is easy enough by hand, and multiplying it by the relevant entry in the last row above is also fairly easy (give it a try).</p> <p>One more thing: The binomial theorem admits an interesting generalization to powers of general polynomials of a given length sometimes called the multinomial theorem. (Challenge: try investigating the coefficients of the first few powers of $(x+y+z)^n$ and perhaps you can figure out the pattern for trinomials? How does it relate to the pattern for binomials? Can you generalize it to general polynomials?)</p> Answer 44: <p>Marty Cohen and Phicar answered your question correctly. In mathematics, as we always use small numbers, It could be feasible to think this. Why learn the binomial theorem to compute $(a+b)^2$? But what can you say about $(3a+5b)^{987658}$? The binomial theorem is way more efficient in this case and the actual number of cases in which it wouldn't be efficient is very small.</p> <p>The binomial theorem gives you a powerful tool: The computation for any exponent in - perhaps - the fastest way possible in which it is only not effective for a finite amount of exponents, but its the best possible for an endless amount of exponents. </p> <p>This is one trap in mathematics: We implicitly assume that the mathematical tools are created to treat familiar cases, but they are created to treat <strong>all</strong> cases in the neatest way possible, they will be cumbersome for some simple cases (Example: Integrating $f(x)=mx$ in the interval $[0,a]$. This is just a triangle, there is no need for integral calculus here). 
I remember of me asking about why we need analysis if calculus seems to work well, but I was supposing (without realizing) that <em>calculus</em> is used only on familiar cases, but we want to be useful also for <a href="https://math.stackexchange.com/q/2010206/25805">very imaginative cases</a>!</p> Answer 45: <p>To add to other answers, in higher-level math it's important to be able to <em>generalize</em> the behavior of $(a + b)^n$ when $n$ is a variable. The work that you're doing now is mostly just training exercises for being familiar with, and having intuition for, the general binomial theorem.</p> <p>For example, there's a really key moment early on in calculus when it's critical to understand that, for all $n$: $(x + h)^n = x^n + nx^{n-1}h....$ plus a bunch of other stuff with higher powers of $h$ in it, but whose exact numerical coefficients we can ignore. </p> Answer 46: <p>The binomial theorem allows you to write out the expansion of your polynomial immediately.</p> <p>It also allows you to answer such questions as "What is the coefficient of $x^{20}$ in $(1+x)^{100}$?"</p> <p>Its generalisation to non-integer exponents allows you to get the expansion of $(1-x)^{-1/2}$.</p> <p>It is a good thing.</p> Answer 47: <p>There are all sorts of reasons. First of all, you'll learn when you've moved forward a bit that the binomial theorem can actually be applied in cases of non-integer exponents, by defining $\binom{n}{k} = \dfrac{n^{\underline{k}}}{k!}$, where the underline represents a falling power (i.e., $n(n-1)(n-2)\cdots$). 
Another that comes to mind immediately is the series expansion of $e$...</p> <p>$$e^x = \lim_{n\to\infty}\left(1+\frac{x}{n}\right)^n = \lim_{n\to\infty}\left(1 + n\frac{x}{n} + \frac{n^\underline{2}}{2!}\frac{x^2}{n^2} + \frac{n^\underline{3}}{3!}\frac{x^3}{n^3} + \cdots + \frac{x^n}{n^n}\right)$$</p> <p>As $n$ gets very large, for fixed $k$, $n^\underline{k}$ approaches $n^k$, and $\frac{x^n}{n^n}$ vanishes since $x$ is fixed, so you end up with the familiar series...</p> <p>$$e^x = 1 + x + \frac{x^2}{2} + \frac{x^3}{6} + \frac{x^4}{24} + \cdots$$</p> Answer 48: <p>If you're not seeing the utility of the binomial theorem, then I think you're missing an important observation:</p> <p>The coefficients of every term can be calculated extremely efficiently (your example of $(x-4)^6$ is easy to expand <em>by hand</em> with the binomial theorem, without, it's doable but very tedious).</p> <p>A useful shorthand for lower powers of the exponent is to remember the first few rows of <em>Pascal's Triangle</em> along with the rule for adding more if you need them. Example of first few rows:</p> <p>$$1$$ $$1\qquad 1$$ $$1\qquad 2\qquad 1$$ $$1\qquad 3\qquad 3\qquad 1$$ $$1\qquad 4\qquad 6\qquad 4\qquad 1$$ $$1\qquad 5\qquad 10\qquad 10\qquad 5\qquad 1$$ $$1\qquad 6\qquad 15\qquad 20\qquad 15\qquad 6\qquad 1$$</p> <p>The last row gives the coefficients of $x^k\cdot(-4)^{(6-k)}$ for $k$ running from $0$ to $6$. Computing $(-4)^{(6-k)}$ is easy enough by hand, and multiplying it by the relevant entry in the last row above is also fairly easy (give it a try).</p> <p>One more thing: The binomial theorem admits an interesting generalization to powers of general polynomials of a given length sometimes called the multinomial theorem. (Challenge: try investigating the coefficients of the first few powers of $(x+y+z)^n$ and perhaps you can figure out the pattern for trinomials? How does it relate to the pattern for binomials? 
Can you generalize it to general polynomials?)</p> Answer 49: <p>Marty Cohen and Phicar answered your question correctly. In mathematics, as we always use small numbers, It could be feasible to think this. Why learn the binomial theorem to compute $(a+b)^2$? But what can you say about $(3a+5b)^{987658}$? The binomial theorem is way more efficient in this case and the actual number of cases in which it wouldn't be efficient is very small.</p> <p>The binomial theorem gives you a powerful tool: The computation for any exponent in - perhaps - the fastest way possible in which it is only not effective for a finite amount of exponents, but its the best possible for an endless amount of exponents. </p> <p>This is one trap in mathematics: We implicitly assume that the mathematical tools are created to treat familiar cases, but they are created to treat <strong>all</strong> cases in the neatest way possible, they will be cumbersome for some simple cases (Example: Integrating $f(x)=mx$ in the interval $[0,a]$. This is just a triangle, there is no need for integral calculus here). I remember of me asking about why we need analysis if calculus seems to work well, but I was supposing (without realizing) that <em>calculus</em> is used only on familiar cases, but we want to be useful also for <a href="https://math.stackexchange.com/q/2010206/25805">very imaginative cases</a>!</p> Answer 50: <p>To add to other answers, in higher-level math it's important to be able to <em>generalize</em> the behavior of $(a + b)^n$ when $n$ is a variable. The work that you're doing now is mostly just training exercises for being familiar with, and having intuition for, the general binomial theorem.</p> <p>For example, there's a really key moment early on in calculus when it's critical to understand that, for all $n$: $(x + h)^n = x^n + nx^{n-1}h....$ plus a bunch of other stuff with higher powers of $h$ in it, but whose exact numerical coefficients we can ignore. 
</p> Answer 51: <p>The binomial theorem allows you to write out the expansion of your polynomial immediately.</p> <p>It also allows you to answer such questions as "What is the coefficient of $x^{20}$ in $(1+x)^{100}$?"</p> <p>Its generalisation to non-integer exponents allows you to get the expansion of $(1-x)^{-1/2}$.</p> <p>It is a good thing.</p> Answer 52: <p>There are all sorts of reasons. First of all, you'll learn when you've moved forward a bit that the binomial theorem can actually be applied in cases of non-integer exponents, by defining $\binom{n}{k} = \dfrac{n^{\underline{k}}}{k!}$, where the underline represents a falling power (i.e., $n(n-1)(n-2)\cdots$). Another that comes to mind immediately is the series expansion of $e$...</p> <p>$$e^x = \lim_{n\to\infty}\left(1+\frac{x}{n}\right)^n = \lim_{n\to\infty}\left(1 + n\frac{x}{n} + \frac{n^\underline{2}}{2!}\frac{x^2}{n^2} + \frac{n^\underline{3}}{3!}\frac{x^3}{n^3} + \cdots + \frac{x^n}{n^n}\right)$$</p> <p>As $n$ gets very large, for fixed $k$, $n^\underline{k}$ approaches $n^k$, and $\frac{x^n}{n^n}$ vanishes since $x$ is fixed, so you end up with the familiar series...</p> <p>$$e^x = 1 + x + \frac{x^2}{2} + \frac{x^3}{6} + \frac{x^4}{24} + \cdots$$</p> Answer 53: <p>If you're not seeing the utility of the binomial theorem, then I think you're missing an important observation:</p> <p>The coefficients of every term can be calculated extremely efficiently (your example of $(x-4)^6$ is easy to expand <em>by hand</em> with the binomial theorem, without, it's doable but very tedious).</p> <p>A useful shorthand for lower powers of the exponent is to remember the first few rows of <em>Pascal's Triangle</em> along with the rule for adding more if you need them. 
Example of first few rows:</p> <p>$$1$$ $$1\qquad 1$$ $$1\qquad 2\qquad 1$$ $$1\qquad 3\qquad 3\qquad 1$$ $$1\qquad 4\qquad 6\qquad 4\qquad 1$$ $$1\qquad 5\qquad 10\qquad 10\qquad 5\qquad 1$$ $$1\qquad 6\qquad 15\qquad 20\qquad 15\qquad 6\qquad 1$$</p> <p>The last row gives the coefficients of $x^k\cdot(-4)^{(6-k)}$ for $k$ running from $0$ to $6$. Computing $(-4)^{(6-k)}$ is easy enough by hand, and multiplying it by the relevant entry in the last row above is also fairly easy (give it a try).</p> <p>One more thing: The binomial theorem admits an interesting generalization to powers of general polynomials of a given length sometimes called the multinomial theorem. (Challenge: try investigating the coefficients of the first few powers of $(x+y+z)^n$ and perhaps you can figure out the pattern for trinomials? How does it relate to the pattern for binomials? Can you generalize it to general polynomials?)</p> Answer 54: <p>Marty Cohen and Phicar answered your question correctly. In mathematics, as we always use small numbers, It could be feasible to think this. Why learn the binomial theorem to compute $(a+b)^2$? But what can you say about $(3a+5b)^{987658}$? The binomial theorem is way more efficient in this case and the actual number of cases in which it wouldn't be efficient is very small.</p> <p>The binomial theorem gives you a powerful tool: The computation for any exponent in - perhaps - the fastest way possible in which it is only not effective for a finite amount of exponents, but its the best possible for an endless amount of exponents. </p> <p>This is one trap in mathematics: We implicitly assume that the mathematical tools are created to treat familiar cases, but they are created to treat <strong>all</strong> cases in the neatest way possible, they will be cumbersome for some simple cases (Example: Integrating $f(x)=mx$ in the interval $[0,a]$. This is just a triangle, there is no need for integral calculus here). 
I remember of me asking about why we need analysis if calculus seems to work well, but I was supposing (without realizing) that <em>calculus</em> is used only on familiar cases, but we want to be useful also for <a href="https://math.stackexchange.com/q/2010206/25805">very imaginative cases</a>!</p> Answer 55: <p>To add to other answers, in higher-level math it's important to be able to <em>generalize</em> the behavior of $(a + b)^n$ when $n$ is a variable. The work that you're doing now is mostly just training exercises for being familiar with, and having intuition for, the general binomial theorem.</p> <p>For example, there's a really key moment early on in calculus when it's critical to understand that, for all $n$: $(x + h)^n = x^n + nx^{n-1}h....$ plus a bunch of other stuff with higher powers of $h$ in it, but whose exact numerical coefficients we can ignore. </p> Answer 56: <p>The binomial theorem allows you to write out the expansion of your polynomial immediately.</p> <p>It also allows you to answer such questions as "What is the coefficient of $x^{20}$ in $(1+x)^{100}$?"</p> <p>Its generalisation to non-integer exponents allows you to get the expansion of $(1-x)^{-1/2}$.</p> <p>It is a good thing.</p> Answer 57: <p>There are all sorts of reasons. First of all, you'll learn when you've moved forward a bit that the binomial theorem can actually be applied in cases of non-integer exponents, by defining $\binom{n}{k} = \dfrac{n^{\underline{k}}}{k!}$, where the underline represents a falling power (i.e., $n(n-1)(n-2)\cdots$). 
Another that comes to mind immediately is the series expansion of $e$...</p> <p>$$e^x = \lim_{n\to\infty}\left(1+\frac{x}{n}\right)^n = \lim_{n\to\infty}\left(1 + n\frac{x}{n} + \frac{n^\underline{2}}{2!}\frac{x^2}{n^2} + \frac{n^\underline{3}}{3!}\frac{x^3}{n^3} + \cdots + \frac{x^n}{n^n}\right)$$</p> <p>As $n$ gets very large, for fixed $k$, $n^\underline{k}$ approaches $n^k$, and $\frac{x^n}{n^n}$ vanishes since $x$ is fixed, so you end up with the familiar series...</p> <p>$$e^x = 1 + x + \frac{x^2}{2} + \frac{x^3}{6} + \frac{x^4}{24} + \cdots$$</p> Answer 58: <p>If you're not seeing the utility of the binomial theorem, then I think you're missing an important observation:</p> <p>The coefficients of every term can be calculated extremely efficiently (your example of $(x-4)^6$ is easy to expand <em>by hand</em> with the binomial theorem, without, it's doable but very tedious).</p> <p>A useful shorthand for lower powers of the exponent is to remember the first few rows of <em>Pascal's Triangle</em> along with the rule for adding more if you need them. Example of first few rows:</p> <p>$$1$$ $$1\qquad 1$$ $$1\qquad 2\qquad 1$$ $$1\qquad 3\qquad 3\qquad 1$$ $$1\qquad 4\qquad 6\qquad 4\qquad 1$$ $$1\qquad 5\qquad 10\qquad 10\qquad 5\qquad 1$$ $$1\qquad 6\qquad 15\qquad 20\qquad 15\qquad 6\qquad 1$$</p> <p>The last row gives the coefficients of $x^k\cdot(-4)^{(6-k)}$ for $k$ running from $0$ to $6$. Computing $(-4)^{(6-k)}$ is easy enough by hand, and multiplying it by the relevant entry in the last row above is also fairly easy (give it a try).</p> <p>One more thing: The binomial theorem admits an interesting generalization to powers of general polynomials of a given length sometimes called the multinomial theorem. (Challenge: try investigating the coefficients of the first few powers of $(x+y+z)^n$ and perhaps you can figure out the pattern for trinomials? How does it relate to the pattern for binomials? 
Can you generalize it to general polynomials?)</p> Answer 59: <p>Marty Cohen and Phicar answered your question correctly. In mathematics, as we always use small numbers, It could be feasible to think this. Why learn the binomial theorem to compute $(a+b)^2$? But what can you say about $(3a+5b)^{987658}$? The binomial theorem is way more efficient in this case and the actual number of cases in which it wouldn't be efficient is very small.</p> <p>The binomial theorem gives you a powerful tool: The computation for any exponent in - perhaps - the fastest way possible in which it is only not effective for a finite amount of exponents, but its the best possible for an endless amount of exponents. </p> <p>This is one trap in mathematics: We implicitly assume that the mathematical tools are created to treat familiar cases, but they are created to treat <strong>all</strong> cases in the neatest way possible, they will be cumbersome for some simple cases (Example: Integrating $f(x)=mx$ in the interval $[0,a]$. This is just a triangle, there is no need for integral calculus here). I remember of me asking about why we need analysis if calculus seems to work well, but I was supposing (without realizing) that <em>calculus</em> is used only on familiar cases, but we want to be useful also for <a href="https://math.stackexchange.com/q/2010206/25805">very imaginative cases</a>!</p> Answer 60: <p>To add to other answers, in higher-level math it's important to be able to <em>generalize</em> the behavior of $(a + b)^n$ when $n$ is a variable. The work that you're doing now is mostly just training exercises for being familiar with, and having intuition for, the general binomial theorem.</p> <p>For example, there's a really key moment early on in calculus when it's critical to understand that, for all $n$: $(x + h)^n = x^n + nx^{n-1}h....$ plus a bunch of other stuff with higher powers of $h$ in it, but whose exact numerical coefficients we can ignore. 
</p> Answer 61: <p>The binomial theorem allows you to write out the expansion of your polynomial immediately.</p> <p>It also allows you to answer such questions as "What is the coefficient of $x^{20}$ in $(1+x)^{100}$?"</p> <p>Its generalisation to non-integer exponents allows you to get the expansion of $(1-x)^{-1/2}$.</p> <p>It is a good thing.</p> Answer 62: <p>There are all sorts of reasons. First of all, you'll learn when you've moved forward a bit that the binomial theorem can actually be applied in cases of non-integer exponents, by defining $\binom{n}{k} = \dfrac{n^{\underline{k}}}{k!}$, where the underline represents a falling power (i.e., $n(n-1)(n-2)\cdots$). Another that comes to mind immediately is the series expansion of $e$...</p> <p>$$e^x = \lim_{n\to\infty}\left(1+\frac{x}{n}\right)^n = \lim_{n\to\infty}\left(1 + n\frac{x}{n} + \frac{n^\underline{2}}{2!}\frac{x^2}{n^2} + \frac{n^\underline{3}}{3!}\frac{x^3}{n^3} + \cdots + \frac{x^n}{n^n}\right)$$</p> <p>As $n$ gets very large, for fixed $k$, $n^\underline{k}$ approaches $n^k$, and $\frac{x^n}{n^n}$ vanishes since $x$ is fixed, so you end up with the familiar series...</p> <p>$$e^x = 1 + x + \frac{x^2}{2} + \frac{x^3}{6} + \frac{x^4}{24} + \cdots$$</p> Answer 63: <p>If you're not seeing the utility of the binomial theorem, then I think you're missing an important observation:</p> <p>The coefficients of every term can be calculated extremely efficiently (your example of $(x-4)^6$ is easy to expand <em>by hand</em> with the binomial theorem, without, it's doable but very tedious).</p> <p>A useful shorthand for lower powers of the exponent is to remember the first few rows of <em>Pascal's Triangle</em> along with the rule for adding more if you need them. 
Example of first few rows:</p> <p>$$1$$ $$1\qquad 1$$ $$1\qquad 2\qquad 1$$ $$1\qquad 3\qquad 3\qquad 1$$ $$1\qquad 4\qquad 6\qquad 4\qquad 1$$ $$1\qquad 5\qquad 10\qquad 10\qquad 5\qquad 1$$ $$1\qquad 6\qquad 15\qquad 20\qquad 15\qquad 6\qquad 1$$</p> <p>The last row gives the coefficients of $x^k\cdot(-4)^{(6-k)}$ for $k$ running from $0$ to $6$. Computing $(-4)^{(6-k)}$ is easy enough by hand, and multiplying it by the relevant entry in the last row above is also fairly easy (give it a try).</p> <p>One more thing: The binomial theorem admits an interesting generalization to powers of general polynomials of a given length sometimes called the multinomial theorem. (Challenge: try investigating the coefficients of the first few powers of $(x+y+z)^n$ and perhaps you can figure out the pattern for trinomials? How does it relate to the pattern for binomials? Can you generalize it to general polynomials?)</p> Answer 64: <p>Marty Cohen and Phicar answered your question correctly. In mathematics, as we always use small numbers, It could be feasible to think this. Why learn the binomial theorem to compute $(a+b)^2$? But what can you say about $(3a+5b)^{987658}$? The binomial theorem is way more efficient in this case and the actual number of cases in which it wouldn't be efficient is very small.</p> <p>The binomial theorem gives you a powerful tool: The computation for any exponent in - perhaps - the fastest way possible in which it is only not effective for a finite amount of exponents, but its the best possible for an endless amount of exponents. </p> <p>This is one trap in mathematics: We implicitly assume that the mathematical tools are created to treat familiar cases, but they are created to treat <strong>all</strong> cases in the neatest way possible, they will be cumbersome for some simple cases (Example: Integrating $f(x)=mx$ in the interval $[0,a]$. This is just a triangle, there is no need for integral calculus here). 
I remember of me asking about why we need analysis if calculus seems to work well, but I was supposing (without realizing) that <em>calculus</em> is used only on familiar cases, but we want to be useful also for <a href="https://math.stackexchange.com/q/2010206/25805">very imaginative cases</a>!</p> Answer 65: <p>To add to other answers, in higher-level math it's important to be able to <em>generalize</em> the behavior of $(a + b)^n$ when $n$ is a variable. The work that you're doing now is mostly just training exercises for being familiar with, and having intuition for, the general binomial theorem.</p> <p>For example, there's a really key moment early on in calculus when it's critical to understand that, for all $n$: $(x + h)^n = x^n + nx^{n-1}h....$ plus a bunch of other stuff with higher powers of $h$ in it, but whose exact numerical coefficients we can ignore. </p> Answer 66: <p>The binomial theorem allows you to write out the expansion of your polynomial immediately.</p> <p>It also allows you to answer such questions as "What is the coefficient of $x^{20}$ in $(1+x)^{100}$?"</p> <p>Its generalisation to non-integer exponents allows you to get the expansion of $(1-x)^{-1/2}$.</p> <p>It is a good thing.</p> Answer 67: <p>There are all sorts of reasons. First of all, you'll learn when you've moved forward a bit that the binomial theorem can actually be applied in cases of non-integer exponents, by defining $\binom{n}{k} = \dfrac{n^{\underline{k}}}{k!}$, where the underline represents a falling power (i.e., $n(n-1)(n-2)\cdots$). 
Another that comes to mind immediately is the series expansion of $e$...</p> <p>$$e^x = \lim_{n\to\infty}\left(1+\frac{x}{n}\right)^n = \lim_{n\to\infty}\left(1 + n\frac{x}{n} + \frac{n^\underline{2}}{2!}\frac{x^2}{n^2} + \frac{n^\underline{3}}{3!}\frac{x^3}{n^3} + \cdots + \frac{x^n}{n^n}\right)$$</p> <p>As $n$ gets very large, for fixed $k$, $n^\underline{k}$ approaches $n^k$, and $\frac{x^n}{n^n}$ vanishes since $x$ is fixed, so you end up with the familiar series...</p> <p>$$e^x = 1 + x + \frac{x^2}{2} + \frac{x^3}{6} + \frac{x^4}{24} + \cdots$$</p> Answer 68: <p>If you're not seeing the utility of the binomial theorem, then I think you're missing an important observation:</p> <p>The coefficients of every term can be calculated extremely efficiently (your example of $(x-4)^6$ is easy to expand <em>by hand</em> with the binomial theorem, without, it's doable but very tedious).</p> <p>A useful shorthand for lower powers of the exponent is to remember the first few rows of <em>Pascal's Triangle</em> along with the rule for adding more if you need them. Example of first few rows:</p> <p>$$1$$ $$1\qquad 1$$ $$1\qquad 2\qquad 1$$ $$1\qquad 3\qquad 3\qquad 1$$ $$1\qquad 4\qquad 6\qquad 4\qquad 1$$ $$1\qquad 5\qquad 10\qquad 10\qquad 5\qquad 1$$ $$1\qquad 6\qquad 15\qquad 20\qquad 15\qquad 6\qquad 1$$</p> <p>The last row gives the coefficients of $x^k\cdot(-4)^{(6-k)}$ for $k$ running from $0$ to $6$. Computing $(-4)^{(6-k)}$ is easy enough by hand, and multiplying it by the relevant entry in the last row above is also fairly easy (give it a try).</p> <p>One more thing: The binomial theorem admits an interesting generalization to powers of general polynomials of a given length sometimes called the multinomial theorem. (Challenge: try investigating the coefficients of the first few powers of $(x+y+z)^n$ and perhaps you can figure out the pattern for trinomials? How does it relate to the pattern for binomials? 
Can you generalize it to general polynomials?)</p> Answer 69: <p>Marty Cohen and Phicar answered your question correctly. In mathematics, as we always use small numbers, It could be feasible to think this. Why learn the binomial theorem to compute $(a+b)^2$? But what can you say about $(3a+5b)^{987658}$? The binomial theorem is way more efficient in this case and the actual number of cases in which it wouldn't be efficient is very small.</p> <p>The binomial theorem gives you a powerful tool: The computation for any exponent in - perhaps - the fastest way possible in which it is only not effective for a finite amount of exponents, but its the best possible for an endless amount of exponents. </p> <p>This is one trap in mathematics: We implicitly assume that the mathematical tools are created to treat familiar cases, but they are created to treat <strong>all</strong> cases in the neatest way possible, they will be cumbersome for some simple cases (Example: Integrating $f(x)=mx$ in the interval $[0,a]$. This is just a triangle, there is no need for integral calculus here). I remember of me asking about why we need analysis if calculus seems to work well, but I was supposing (without realizing) that <em>calculus</em> is used only on familiar cases, but we want to be useful also for <a href="https://math.stackexchange.com/q/2010206/25805">very imaginative cases</a>!</p>
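The coefficient claims in the binomial-theorem answers above are easy to check mechanically. A minimal Python sketch (the helper name `binomial_expand` is ours, not from any answer) that reproduces the Pascal's-triangle expansion of $(x-4)^6$ discussed in the answers:

```python
from math import comb  # binomial coefficients C(n, k)


def binomial_expand(a, n):
    """Coefficients of (x + a)^n, lowest degree first.

    By the binomial theorem, the coefficient of x^k is C(n, k) * a^(n - k).
    """
    return [comb(n, k) * a ** (n - k) for k in range(n + 1)]


# (x - 4)^6 = x^6 - 24x^5 + 240x^4 - 1280x^3 + 3840x^2 - 6144x + 4096
print(binomial_expand(-4, 6))
```

With `a = 1` this reproduces the quoted rows of Pascal's triangle directly; the generalisation to non-integer exponents mentioned in the answers (falling powers in $\binom{n}{k}$) would need a different coefficient function, since `math.comb` only accepts non-negative integers.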
If $f(x+1)+f(x-1)= \sqrt{2}\cdot f(x)$, find the period of $f(x)$.
2,088,809
8
https://math.stackexchange.com/questions/2088809/if-fx1fx-1-sqrt2-cdot-fx-find-the-period-of-fx
<p>If $f(x+1)+f(x-1)= \sqrt{2}\cdot f(x)$, then the period of $f(x)$ is?</p> <p>I tried replacing $x$ with $x-1$ and stuff, but didn't lead me to anything...</p>
[ { "answer_id": 2088819, "score": 4, "text": "<p>Make $x=x-1$</p>\n\n<p>$$f(x)+f(x-2)=\\sqrt{2}f(x-1) \\tag {1}$$</p>\n\n<p>Make $x=x+1$</p>\n\n<p>$$f(x+2)+f(x)=\\sqrt{2}f(x+1) \\tag {2}$$</p>\n\n<p>Now sum $(1)$ and $(2)$:</p>\n\n<p>$$2f(x)+f(x+2)+f(x-2)=\\sqrt{2}(f(x+1)+f(x-1))=2f(x) $$</p>\n\n<p>$$f(x+2)+f(x-2)=0$$</p>\n\n<p>Now make $x=x+2$ then $f(x+4)=-f(x)$. </p>\n\n<p>Make $x+4$ and get $f(x+8)=-f(x+4)=f(x)$, then the period is $8$.</p>\n" }, { "answer_id": 2088826, "score": 2, "text": "<p>Note that $\\sqrt{2} = \\frac{2}{\\sqrt{2}}$. Hence,</p>\n\n<p>$$f(x + 1) + f(x - 1) = \\frac{1}{\\sqrt{2}}f(x) + \\frac{1}{\\sqrt{2}}f(x)$$\n$$\\begin{align}f(x + 1) &amp;= \\frac{1}{\\sqrt{2}}f(x) + \\frac{1}{\\sqrt{2}}\\left(f(x) - \\sqrt{2}f(x - 1)\\right)\\\\\n&amp;=\\frac{1}{\\sqrt{2}}f(x) + \\frac{1}{\\sqrt{2}}\\left(-f(x - 2)\\right)\\\\\n&amp;= \\frac{1}{\\sqrt{2}}\\left(f(x) - f(x - 2)\\right)\\end{align}$$</p>\n\n<p>$$\\sqrt{2}f(x + 1) = f(x) - f(x - 2)$$\n$$\\sqrt{2}f(x) = f(x - 1) - f(x - 3)$$</p>\n\n<p>Hence,</p>\n\n<p>$$f(x - 1) - f(x - 3) = f(x + 1) + f(x - 1)$$\n$$f(x + 1) = -f(x - 3)$$\n$$f(x) = -f(x - 4)$$</p>\n\n<p>Therefore,</p>\n\n<p>$$f(x) = -f(x - 4) = -(-f(x - 8)) = f(x - 8)$$</p>\n\n<p>thus the period is $8$.</p>\n" }, { "answer_id": 2088857, "score": 1, "text": "<p>This is an ordinary linear recurrence, which you can solve using the characteristic equation</p>\n\n<p>$$r+\\frac1r=\\sqrt 2.$$</p>\n\n<p>The roots are $r=\\dfrac{1\\pm i}{\\sqrt2}$, and the general solution is</p>\n\n<p>$$f(n)=C\\left(\\dfrac{1+i}{\\sqrt2}\\right)^n+C^*\\left(\\dfrac{1-i}{\\sqrt2}\\right)^n=Ce^{in\\pi/4}+C^*e^{-in\\pi/4}=A\\cos\\frac{n\\pi}4+B\\sin\\frac{n\\pi}4.$$</p>\n\n<p>The constants $C$ or $A,B$ can be obtained from some given conditions such as $f(0)$ and $f(1)$. This will only solve for integer arguments. 
For real arguments, you can specify the initial conditions by assigning $f(x)$ arbitrary values for $0\\le x &lt; 2$.</p>\n\n<p>This proves that the period is $8$.</p>\n" } ]
For real arguments, you can specify the initial conditions by assigning $f(x)$ arbitrary values for $0\\le x &lt; 2$.</p>\n\n<p>This proves that the period is $8$.</p>\n" }, { "answer_id": 2088819, "score": 4, "text": "<p>Make $x=x-1$</p>\n\n<p>$$f(x)+f(x-2)=\\sqrt{2}f(x-1) \\tag {1}$$</p>\n\n<p>Make $x=x+1$</p>\n\n<p>$$f(x+2)+f(x)=\\sqrt{2}f(x+1) \\tag {2}$$</p>\n\n<p>Now sum $(1)$ and $(2)$:</p>\n\n<p>$$2f(x)+f(x+2)+f(x-2)=\\sqrt{2}(f(x+1)+f(x-1))=2f(x) $$</p>\n\n<p>$$f(x+2)+f(x-2)=0$$</p>\n\n<p>Now make $x=x+2$ then $f(x+4)=-f(x)$. </p>\n\n<p>Make $x+4$ and get $f(x+8)=-f(x+4)=f(x)$, then the period is $8$.</p>\n" }, { "answer_id": 2088826, "score": 2, "text": "<p>Note that $\\sqrt{2} = \\frac{2}{\\sqrt{2}}$. Hence,</p>\n\n<p>$$f(x + 1) + f(x - 1) = \\frac{1}{\\sqrt{2}}f(x) + \\frac{1}{\\sqrt{2}}f(x)$$\n$$\\begin{align}f(x + 1) &amp;= \\frac{1}{\\sqrt{2}}f(x) + \\frac{1}{\\sqrt{2}}\\left(f(x) - \\sqrt{2}f(x - 1)\\right)\\\\\n&amp;=\\frac{1}{\\sqrt{2}}f(x) + \\frac{1}{\\sqrt{2}}\\left(-f(x - 2)\\right)\\\\\n&amp;= \\frac{1}{\\sqrt{2}}\\left(f(x) - f(x - 2)\\right)\\end{align}$$</p>\n\n<p>$$\\sqrt{2}f(x + 1) = f(x) - f(x - 2)$$\n$$\\sqrt{2}f(x) = f(x - 1) - f(x - 3)$$</p>\n\n<p>Hence,</p>\n\n<p>$$f(x - 1) - f(x - 3) = f(x + 1) + f(x - 1)$$\n$$f(x + 1) = -f(x - 3)$$\n$$f(x) = -f(x - 4)$$</p>\n\n<p>Therefore,</p>\n\n<p>$$f(x) = -f(x - 4) = -(-f(x - 8)) = f(x - 8)$$</p>\n\n<p>thus the period is $8$.</p>\n" }, { "answer_id": 2088857, "score": 1, "text": "<p>This is an ordinary linear recurrence, which you can solve using the characteristic equation</p>\n\n<p>$$r+\\frac1r=\\sqrt 2.$$</p>\n\n<p>The roots are $r=\\dfrac{1\\pm i}{\\sqrt2}$, and the general solution is</p>\n\n<p>$$f(n)=C\\left(\\dfrac{1+i}{\\sqrt2}\\right)^n+C^*\\left(\\dfrac{1-i}{\\sqrt2}\\right)^n=Ce^{in\\pi/4}+C^*e^{-in\\pi/4}=A\\cos\\frac{n\\pi}4+B\\sin\\frac{n\\pi}4.$$</p>\n\n<p>The constants $C$ or $A,B$ can be obtained from some given conditions such as $f(0)$ and $f(1)$. 
This will only solve for integer arguments. For real arguments, you can specify the initial conditions by assigning $f(x)$ arbitrary values for $0\\le x &lt; 2$.</p>\n\n<p>This proves that the period is $8$.</p>\n" }, { "answer_id": 2088819, "score": 4, "text": "<p>Make $x=x-1$</p>\n\n<p>$$f(x)+f(x-2)=\\sqrt{2}f(x-1) \\tag {1}$$</p>\n\n<p>Make $x=x+1$</p>\n\n<p>$$f(x+2)+f(x)=\\sqrt{2}f(x+1) \\tag {2}$$</p>\n\n<p>Now sum $(1)$ and $(2)$:</p>\n\n<p>$$2f(x)+f(x+2)+f(x-2)=\\sqrt{2}(f(x+1)+f(x-1))=2f(x) $$</p>\n\n<p>$$f(x+2)+f(x-2)=0$$</p>\n\n<p>Now make $x=x+2$ then $f(x+4)=-f(x)$. </p>\n\n<p>Make $x+4$ and get $f(x+8)=-f(x+4)=f(x)$, then the period is $8$.</p>\n" }, { "answer_id": 2088826, "score": 2, "text": "<p>Note that $\\sqrt{2} = \\frac{2}{\\sqrt{2}}$. Hence,</p>\n\n<p>$$f(x + 1) + f(x - 1) = \\frac{1}{\\sqrt{2}}f(x) + \\frac{1}{\\sqrt{2}}f(x)$$\n$$\\begin{align}f(x + 1) &amp;= \\frac{1}{\\sqrt{2}}f(x) + \\frac{1}{\\sqrt{2}}\\left(f(x) - \\sqrt{2}f(x - 1)\\right)\\\\\n&amp;=\\frac{1}{\\sqrt{2}}f(x) + \\frac{1}{\\sqrt{2}}\\left(-f(x - 2)\\right)\\\\\n&amp;= \\frac{1}{\\sqrt{2}}\\left(f(x) - f(x - 2)\\right)\\end{align}$$</p>\n\n<p>$$\\sqrt{2}f(x + 1) = f(x) - f(x - 2)$$\n$$\\sqrt{2}f(x) = f(x - 1) - f(x - 3)$$</p>\n\n<p>Hence,</p>\n\n<p>$$f(x - 1) - f(x - 3) = f(x + 1) + f(x - 1)$$\n$$f(x + 1) = -f(x - 3)$$\n$$f(x) = -f(x - 4)$$</p>\n\n<p>Therefore,</p>\n\n<p>$$f(x) = -f(x - 4) = -(-f(x - 8)) = f(x - 8)$$</p>\n\n<p>thus the period is $8$.</p>\n" }, { "answer_id": 2088857, "score": 1, "text": "<p>This is an ordinary linear recurrence, which you can solve using the characteristic equation</p>\n\n<p>$$r+\\frac1r=\\sqrt 2.$$</p>\n\n<p>The roots are $r=\\dfrac{1\\pm i}{\\sqrt2}$, and the general solution is</p>\n\n<p>$$f(n)=C\\left(\\dfrac{1+i}{\\sqrt2}\\right)^n+C^*\\left(\\dfrac{1-i}{\\sqrt2}\\right)^n=Ce^{in\\pi/4}+C^*e^{-in\\pi/4}=A\\cos\\frac{n\\pi}4+B\\sin\\frac{n\\pi}4.$$</p>\n\n<p>The constants $C$ or $A,B$ can be obtained from some given conditions such as 
$f(0)$ and $f(1)$. This will only solve for integer arguments. For real arguments, you can specify the initial conditions by assigning $f(x)$ arbitrary values for $0\\le x &lt; 2$.</p>\n\n<p>This proves that the period is $8$.</p>\n" }, { "answer_id": 2088819, "score": 4, "text": "<p>Make $x=x-1$</p>\n\n<p>$$f(x)+f(x-2)=\\sqrt{2}f(x-1) \\tag {1}$$</p>\n\n<p>Make $x=x+1$</p>\n\n<p>$$f(x+2)+f(x)=\\sqrt{2}f(x+1) \\tag {2}$$</p>\n\n<p>Now sum $(1)$ and $(2)$:</p>\n\n<p>$$2f(x)+f(x+2)+f(x-2)=\\sqrt{2}(f(x+1)+f(x-1))=2f(x) $$</p>\n\n<p>$$f(x+2)+f(x-2)=0$$</p>\n\n<p>Now make $x=x+2$ then $f(x+4)=-f(x)$. </p>\n\n<p>Make $x+4$ and get $f(x+8)=-f(x+4)=f(x)$, then the period is $8$.</p>\n" }, { "answer_id": 2088826, "score": 2, "text": "<p>Note that $\\sqrt{2} = \\frac{2}{\\sqrt{2}}$. Hence,</p>\n\n<p>$$f(x + 1) + f(x - 1) = \\frac{1}{\\sqrt{2}}f(x) + \\frac{1}{\\sqrt{2}}f(x)$$\n$$\\begin{align}f(x + 1) &amp;= \\frac{1}{\\sqrt{2}}f(x) + \\frac{1}{\\sqrt{2}}\\left(f(x) - \\sqrt{2}f(x - 1)\\right)\\\\\n&amp;=\\frac{1}{\\sqrt{2}}f(x) + \\frac{1}{\\sqrt{2}}\\left(-f(x - 2)\\right)\\\\\n&amp;= \\frac{1}{\\sqrt{2}}\\left(f(x) - f(x - 2)\\right)\\end{align}$$</p>\n\n<p>$$\\sqrt{2}f(x + 1) = f(x) - f(x - 2)$$\n$$\\sqrt{2}f(x) = f(x - 1) - f(x - 3)$$</p>\n\n<p>Hence,</p>\n\n<p>$$f(x - 1) - f(x - 3) = f(x + 1) + f(x - 1)$$\n$$f(x + 1) = -f(x - 3)$$\n$$f(x) = -f(x - 4)$$</p>\n\n<p>Therefore,</p>\n\n<p>$$f(x) = -f(x - 4) = -(-f(x - 8)) = f(x - 8)$$</p>\n\n<p>thus the period is $8$.</p>\n" }, { "answer_id": 2088857, "score": 1, "text": "<p>This is an ordinary linear recurrence, which you can solve using the characteristic equation</p>\n\n<p>$$r+\\frac1r=\\sqrt 2.$$</p>\n\n<p>The roots are $r=\\dfrac{1\\pm i}{\\sqrt2}$, and the general solution is</p>\n\n<p>$$f(n)=C\\left(\\dfrac{1+i}{\\sqrt2}\\right)^n+C^*\\left(\\dfrac{1-i}{\\sqrt2}\\right)^n=Ce^{in\\pi/4}+C^*e^{-in\\pi/4}=A\\cos\\frac{n\\pi}4+B\\sin\\frac{n\\pi}4.$$</p>\n\n<p>The constants $C$ or $A,B$ can be obtained from some given 
conditions such as $f(0)$ and $f(1)$. This will only solve for integer arguments. For real arguments, you can specify the initial conditions by assigning $f(x)$ arbitrary values for $0\\le x &lt; 2$.</p>\n\n<p>This proves that the period is $8$.</p>\n" }, { "answer_id": 2088819, "score": 4, "text": "<p>Make $x=x-1$</p>\n\n<p>$$f(x)+f(x-2)=\\sqrt{2}f(x-1) \\tag {1}$$</p>\n\n<p>Make $x=x+1$</p>\n\n<p>$$f(x+2)+f(x)=\\sqrt{2}f(x+1) \\tag {2}$$</p>\n\n<p>Now sum $(1)$ and $(2)$:</p>\n\n<p>$$2f(x)+f(x+2)+f(x-2)=\\sqrt{2}(f(x+1)+f(x-1))=2f(x) $$</p>\n\n<p>$$f(x+2)+f(x-2)=0$$</p>\n\n<p>Now make $x=x+2$ then $f(x+4)=-f(x)$. </p>\n\n<p>Make $x+4$ and get $f(x+8)=-f(x+4)=f(x)$, then the period is $8$.</p>\n" }, { "answer_id": 2088826, "score": 2, "text": "<p>Note that $\\sqrt{2} = \\frac{2}{\\sqrt{2}}$. Hence,</p>\n\n<p>$$f(x + 1) + f(x - 1) = \\frac{1}{\\sqrt{2}}f(x) + \\frac{1}{\\sqrt{2}}f(x)$$\n$$\\begin{align}f(x + 1) &amp;= \\frac{1}{\\sqrt{2}}f(x) + \\frac{1}{\\sqrt{2}}\\left(f(x) - \\sqrt{2}f(x - 1)\\right)\\\\\n&amp;=\\frac{1}{\\sqrt{2}}f(x) + \\frac{1}{\\sqrt{2}}\\left(-f(x - 2)\\right)\\\\\n&amp;= \\frac{1}{\\sqrt{2}}\\left(f(x) - f(x - 2)\\right)\\end{align}$$</p>\n\n<p>$$\\sqrt{2}f(x + 1) = f(x) - f(x - 2)$$\n$$\\sqrt{2}f(x) = f(x - 1) - f(x - 3)$$</p>\n\n<p>Hence,</p>\n\n<p>$$f(x - 1) - f(x - 3) = f(x + 1) + f(x - 1)$$\n$$f(x + 1) = -f(x - 3)$$\n$$f(x) = -f(x - 4)$$</p>\n\n<p>Therefore,</p>\n\n<p>$$f(x) = -f(x - 4) = -(-f(x - 8)) = f(x - 8)$$</p>\n\n<p>thus the period is $8$.</p>\n" }, { "answer_id": 2088857, "score": 1, "text": "<p>This is an ordinary linear recurrence, which you can solve using the characteristic equation</p>\n\n<p>$$r+\\frac1r=\\sqrt 2.$$</p>\n\n<p>The roots are $r=\\dfrac{1\\pm i}{\\sqrt2}$, and the general solution is</p>\n\n<p>$$f(n)=C\\left(\\dfrac{1+i}{\\sqrt2}\\right)^n+C^*\\left(\\dfrac{1-i}{\\sqrt2}\\right)^n=Ce^{in\\pi/4}+C^*e^{-in\\pi/4}=A\\cos\\frac{n\\pi}4+B\\sin\\frac{n\\pi}4.$$</p>\n\n<p>The constants $C$ or $A,B$ can be obtained 
from some given conditions such as $f(0)$ and $f(1)$. This will only solve for integer arguments. For real arguments, you can specify the initial conditions by assigning $f(x)$ arbitrary values for $0\\le x &lt; 2$.</p>\n\n<p>This proves that the period is $8$.</p>\n" }, { "answer_id": 2088819, "score": 4, "text": "<p>Make $x=x-1$</p>\n\n<p>$$f(x)+f(x-2)=\\sqrt{2}f(x-1) \\tag {1}$$</p>\n\n<p>Make $x=x+1$</p>\n\n<p>$$f(x+2)+f(x)=\\sqrt{2}f(x+1) \\tag {2}$$</p>\n\n<p>Now sum $(1)$ and $(2)$:</p>\n\n<p>$$2f(x)+f(x+2)+f(x-2)=\\sqrt{2}(f(x+1)+f(x-1))=2f(x) $$</p>\n\n<p>$$f(x+2)+f(x-2)=0$$</p>\n\n<p>Now make $x=x+2$ then $f(x+4)=-f(x)$. </p>\n\n<p>Make $x+4$ and get $f(x+8)=-f(x+4)=f(x)$, then the period is $8$.</p>\n" }, { "answer_id": 2088826, "score": 2, "text": "<p>Note that $\\sqrt{2} = \\frac{2}{\\sqrt{2}}$. Hence,</p>\n\n<p>$$f(x + 1) + f(x - 1) = \\frac{1}{\\sqrt{2}}f(x) + \\frac{1}{\\sqrt{2}}f(x)$$\n$$\\begin{align}f(x + 1) &amp;= \\frac{1}{\\sqrt{2}}f(x) + \\frac{1}{\\sqrt{2}}\\left(f(x) - \\sqrt{2}f(x - 1)\\right)\\\\\n&amp;=\\frac{1}{\\sqrt{2}}f(x) + \\frac{1}{\\sqrt{2}}\\left(-f(x - 2)\\right)\\\\\n&amp;= \\frac{1}{\\sqrt{2}}\\left(f(x) - f(x - 2)\\right)\\end{align}$$</p>\n\n<p>$$\\sqrt{2}f(x + 1) = f(x) - f(x - 2)$$\n$$\\sqrt{2}f(x) = f(x - 1) - f(x - 3)$$</p>\n\n<p>Hence,</p>\n\n<p>$$f(x - 1) - f(x - 3) = f(x + 1) + f(x - 1)$$\n$$f(x + 1) = -f(x - 3)$$\n$$f(x) = -f(x - 4)$$</p>\n\n<p>Therefore,</p>\n\n<p>$$f(x) = -f(x - 4) = -(-f(x - 8)) = f(x - 8)$$</p>\n\n<p>thus the period is $8$.</p>\n" }, { "answer_id": 2088857, "score": 1, "text": "<p>This is an ordinary linear recurrence, which you can solve using the characteristic equation</p>\n\n<p>$$r+\\frac1r=\\sqrt 2.$$</p>\n\n<p>The roots are $r=\\dfrac{1\\pm i}{\\sqrt2}$, and the general solution is</p>\n\n<p>$$f(n)=C\\left(\\dfrac{1+i}{\\sqrt2}\\right)^n+C^*\\left(\\dfrac{1-i}{\\sqrt2}\\right)^n=Ce^{in\\pi/4}+C^*e^{-in\\pi/4}=A\\cos\\frac{n\\pi}4+B\\sin\\frac{n\\pi}4.$$</p>\n\n<p>The constants $C$ or $A,B$ 
can be obtained from some given conditions such as $f(0)$ and $f(1)$. This will only solve for integer arguments. For real arguments, you can specify the initial conditions by assigning $f(x)$ arbitrary values for $0\\le x &lt; 2$.</p>\n\n<p>This proves that the period is $8$.</p>\n" }, { "answer_id": 2088819, "score": 4, "text": "<p>Make $x=x-1$</p>\n\n<p>$$f(x)+f(x-2)=\\sqrt{2}f(x-1) \\tag {1}$$</p>\n\n<p>Make $x=x+1$</p>\n\n<p>$$f(x+2)+f(x)=\\sqrt{2}f(x+1) \\tag {2}$$</p>\n\n<p>Now sum $(1)$ and $(2)$:</p>\n\n<p>$$2f(x)+f(x+2)+f(x-2)=\\sqrt{2}(f(x+1)+f(x-1))=2f(x) $$</p>\n\n<p>$$f(x+2)+f(x-2)=0$$</p>\n\n<p>Now make $x=x+2$ then $f(x+4)=-f(x)$. </p>\n\n<p>Make $x+4$ and get $f(x+8)=-f(x+4)=f(x)$, then the period is $8$.</p>\n" }, { "answer_id": 2088826, "score": 2, "text": "<p>Note that $\\sqrt{2} = \\frac{2}{\\sqrt{2}}$. Hence,</p>\n\n<p>$$f(x + 1) + f(x - 1) = \\frac{1}{\\sqrt{2}}f(x) + \\frac{1}{\\sqrt{2}}f(x)$$\n$$\\begin{align}f(x + 1) &amp;= \\frac{1}{\\sqrt{2}}f(x) + \\frac{1}{\\sqrt{2}}\\left(f(x) - \\sqrt{2}f(x - 1)\\right)\\\\\n&amp;=\\frac{1}{\\sqrt{2}}f(x) + \\frac{1}{\\sqrt{2}}\\left(-f(x - 2)\\right)\\\\\n&amp;= \\frac{1}{\\sqrt{2}}\\left(f(x) - f(x - 2)\\right)\\end{align}$$</p>\n\n<p>$$\\sqrt{2}f(x + 1) = f(x) - f(x - 2)$$\n$$\\sqrt{2}f(x) = f(x - 1) - f(x - 3)$$</p>\n\n<p>Hence,</p>\n\n<p>$$f(x - 1) - f(x - 3) = f(x + 1) + f(x - 1)$$\n$$f(x + 1) = -f(x - 3)$$\n$$f(x) = -f(x - 4)$$</p>\n\n<p>Therefore,</p>\n\n<p>$$f(x) = -f(x - 4) = -(-f(x - 8)) = f(x - 8)$$</p>\n\n<p>thus the period is $8$.</p>\n" }, { "answer_id": 2088857, "score": 1, "text": "<p>This is an ordinary linear recurrence, which you can solve using the characteristic equation</p>\n\n<p>$$r+\\frac1r=\\sqrt 2.$$</p>\n\n<p>The roots are $r=\\dfrac{1\\pm i}{\\sqrt2}$, and the general solution is</p>\n\n<p>$$f(n)=C\\left(\\dfrac{1+i}{\\sqrt2}\\right)^n+C^*\\left(\\dfrac{1-i}{\\sqrt2}\\right)^n=Ce^{in\\pi/4}+C^*e^{-in\\pi/4}=A\\cos\\frac{n\\pi}4+B\\sin\\frac{n\\pi}4.$$</p>\n\n<p>The 
constants $C$ or $A,B$ can be obtained from some given conditions such as $f(0)$ and $f(1)$. This will only solve for integer arguments. For real arguments, you can specify the initial conditions by assigning $f(x)$ arbitrary values for $0\\le x &lt; 2$.</p>\n\n<p>This proves that the period is $8$.</p>\n" }, { "answer_id": 2088819, "score": 4, "text": "<p>Make $x=x-1$</p>\n\n<p>$$f(x)+f(x-2)=\\sqrt{2}f(x-1) \\tag {1}$$</p>\n\n<p>Make $x=x+1$</p>\n\n<p>$$f(x+2)+f(x)=\\sqrt{2}f(x+1) \\tag {2}$$</p>\n\n<p>Now sum $(1)$ and $(2)$:</p>\n\n<p>$$2f(x)+f(x+2)+f(x-2)=\\sqrt{2}(f(x+1)+f(x-1))=2f(x) $$</p>\n\n<p>$$f(x+2)+f(x-2)=0$$</p>\n\n<p>Now make $x=x+2$ then $f(x+4)=-f(x)$. </p>\n\n<p>Make $x+4$ and get $f(x+8)=-f(x+4)=f(x)$, then the period is $8$.</p>\n" }, { "answer_id": 2088826, "score": 2, "text": "<p>Note that $\\sqrt{2} = \\frac{2}{\\sqrt{2}}$. Hence,</p>\n\n<p>$$f(x + 1) + f(x - 1) = \\frac{1}{\\sqrt{2}}f(x) + \\frac{1}{\\sqrt{2}}f(x)$$\n$$\\begin{align}f(x + 1) &amp;= \\frac{1}{\\sqrt{2}}f(x) + \\frac{1}{\\sqrt{2}}\\left(f(x) - \\sqrt{2}f(x - 1)\\right)\\\\\n&amp;=\\frac{1}{\\sqrt{2}}f(x) + \\frac{1}{\\sqrt{2}}\\left(-f(x - 2)\\right)\\\\\n&amp;= \\frac{1}{\\sqrt{2}}\\left(f(x) - f(x - 2)\\right)\\end{align}$$</p>\n\n<p>$$\\sqrt{2}f(x + 1) = f(x) - f(x - 2)$$\n$$\\sqrt{2}f(x) = f(x - 1) - f(x - 3)$$</p>\n\n<p>Hence,</p>\n\n<p>$$f(x - 1) - f(x - 3) = f(x + 1) + f(x - 1)$$\n$$f(x + 1) = -f(x - 3)$$\n$$f(x) = -f(x - 4)$$</p>\n\n<p>Therefore,</p>\n\n<p>$$f(x) = -f(x - 4) = -(-f(x - 8)) = f(x - 8)$$</p>\n\n<p>thus the period is $8$.</p>\n" }, { "answer_id": 2088857, "score": 1, "text": "<p>This is an ordinary linear recurrence, which you can solve using the characteristic equation</p>\n\n<p>$$r+\\frac1r=\\sqrt 2.$$</p>\n\n<p>The roots are $r=\\dfrac{1\\pm i}{\\sqrt2}$, and the general solution 
is</p>\n\n<p>$$f(n)=C\\left(\\dfrac{1+i}{\\sqrt2}\\right)^n+C^*\\left(\\dfrac{1-i}{\\sqrt2}\\right)^n=Ce^{in\\pi/4}+C^*e^{-in\\pi/4}=A\\cos\\frac{n\\pi}4+B\\sin\\frac{n\\pi}4.$$</p>\n\n<p>The constants $C$ or $A,B$ can be obtained from some given conditions such as $f(0)$ and $f(1)$. This will only solve for integer arguments. For real arguments, you can specify the initial conditions by assigning $f(x)$ arbitrary values for $0\\le x &lt; 2$.</p>\n\n<p>This proves that the period is $8$.</p>\n" } ]
Question: If $f(x+1)+f(x-1)= \sqrt{2}\cdot f(x)$, find the period of $f(x)$. Answer 1: <p>Substitute $x-1$ for $x$:</p> <p>$$f(x)+f(x-2)=\sqrt{2}f(x-1) \tag {1}$$</p> <p>Substitute $x+1$ for $x$:</p> <p>$$f(x+2)+f(x)=\sqrt{2}f(x+1) \tag {2}$$</p> <p>Now sum $(1)$ and $(2)$:</p> <p>$$2f(x)+f(x+2)+f(x-2)=\sqrt{2}(f(x+1)+f(x-1))=2f(x)$$</p> <p>$$f(x+2)+f(x-2)=0$$</p> <p>Now substitute $x+2$ for $x$ to get $f(x+4)=-f(x)$.</p> <p>Substitute $x+4$ for $x$ to get $f(x+8)=-f(x+4)=f(x)$, so the period is $8$.</p> Answer 2: <p>Note that $\sqrt{2} = \frac{2}{\sqrt{2}}$. Hence,</p> <p>$$f(x + 1) + f(x - 1) = \frac{1}{\sqrt{2}}f(x) + \frac{1}{\sqrt{2}}f(x)$$ $$\begin{align}f(x + 1) &amp;= \frac{1}{\sqrt{2}}f(x) + \frac{1}{\sqrt{2}}\left(f(x) - \sqrt{2}f(x - 1)\right)\\ &amp;=\frac{1}{\sqrt{2}}f(x) + \frac{1}{\sqrt{2}}\left(-f(x - 2)\right)\\ &amp;= \frac{1}{\sqrt{2}}\left(f(x) - f(x - 2)\right)\end{align}$$</p> <p>$$\sqrt{2}f(x + 1) = f(x) - f(x - 2)$$ $$\sqrt{2}f(x) = f(x - 1) - f(x - 3)$$</p> <p>Hence,</p> <p>$$f(x - 1) - f(x - 3) = f(x + 1) + f(x - 1)$$ $$f(x + 1) = -f(x - 3)$$ $$f(x) = -f(x - 4)$$</p> <p>Therefore,</p> <p>$$f(x) = -f(x - 4) = -(-f(x - 8)) = f(x - 8)$$</p> <p>Thus the period is $8$.</p> Answer 3: <p>This is an ordinary linear recurrence, which you can solve using the characteristic equation</p> <p>$$r+\frac1r=\sqrt 2.$$</p> <p>The roots are $r=\dfrac{1\pm i}{\sqrt2}$, and the general solution is</p> <p>$$f(n)=C\left(\dfrac{1+i}{\sqrt2}\right)^n+C^*\left(\dfrac{1-i}{\sqrt2}\right)^n=Ce^{in\pi/4}+C^*e^{-in\pi/4}=A\cos\frac{n\pi}4+B\sin\frac{n\pi}4.$$</p> <p>The constants $C$ or $A,B$ can be obtained from some given conditions such as $f(0)$ and $f(1)$. This will only solve for integer arguments.
For real arguments, you can specify the initial conditions by assigning $f(x)$ arbitrary values for $0\le x &lt; 2$.</p> <p>This proves that the period is $8$.</p>
For real arguments, you can specify the initial conditions by assigning $f(x)$ arbitrary values for $0\le x &lt; 2$.</p> <p>This proves that the period is $8$.</p> Answer 34: <p>Make $x=x-1$</p> <p>$$f(x)+f(x-2)=\sqrt{2}f(x-1) \tag {1}$$</p> <p>Make $x=x+1$</p> <p>$$f(x+2)+f(x)=\sqrt{2}f(x+1) \tag {2}$$</p> <p>Now sum $(1)$ and $(2)$:</p> <p>$$2f(x)+f(x+2)+f(x-2)=\sqrt{2}(f(x+1)+f(x-1))=2f(x) $$</p> <p>$$f(x+2)+f(x-2)=0$$</p> <p>Now make $x=x+2$ then $f(x+4)=-f(x)$. </p> <p>Make $x+4$ and get $f(x+8)=-f(x+4)=f(x)$, then the period is $8$.</p> Answer 35: <p>Note that $\sqrt{2} = \frac{2}{\sqrt{2}}$. Hence,</p> <p>$$f(x + 1) + f(x - 1) = \frac{1}{\sqrt{2}}f(x) + \frac{1}{\sqrt{2}}f(x)$$ $$\begin{align}f(x + 1) &amp;= \frac{1}{\sqrt{2}}f(x) + \frac{1}{\sqrt{2}}\left(f(x) - \sqrt{2}f(x - 1)\right)\\ &amp;=\frac{1}{\sqrt{2}}f(x) + \frac{1}{\sqrt{2}}\left(-f(x - 2)\right)\\ &amp;= \frac{1}{\sqrt{2}}\left(f(x) - f(x - 2)\right)\end{align}$$</p> <p>$$\sqrt{2}f(x + 1) = f(x) - f(x - 2)$$ $$\sqrt{2}f(x) = f(x - 1) - f(x - 3)$$</p> <p>Hence,</p> <p>$$f(x - 1) - f(x - 3) = f(x + 1) + f(x - 1)$$ $$f(x + 1) = -f(x - 3)$$ $$f(x) = -f(x - 4)$$</p> <p>Therefore,</p> <p>$$f(x) = -f(x - 4) = -(-f(x - 8)) = f(x - 8)$$</p> <p>thus the period is $8$.</p> Answer 36: <p>This is an ordinary linear recurrence, which you can solve using the characteristic equation</p> <p>$$r+\frac1r=\sqrt 2.$$</p> <p>The roots are $r=\dfrac{1\pm i}{\sqrt2}$, and the general solution is</p> <p>$$f(n)=C\left(\dfrac{1+i}{\sqrt2}\right)^n+C^*\left(\dfrac{1-i}{\sqrt2}\right)^n=Ce^{in\pi/4}+C^*e^{-in\pi/4}=A\cos\frac{n\pi}4+B\sin\frac{n\pi}4.$$</p> <p>The constants $C$ or $A,B$ can be obtained from some given conditions such as $f(0)$ and $f(1)$. This will only solve for integer arguments. 
For real arguments, you can specify the initial conditions by assigning $f(x)$ arbitrary values for $0\le x &lt; 2$.</p> <p>This proves that the period is $8$.</p> Answer 37: <p>Make $x=x-1$</p> <p>$$f(x)+f(x-2)=\sqrt{2}f(x-1) \tag {1}$$</p> <p>Make $x=x+1$</p> <p>$$f(x+2)+f(x)=\sqrt{2}f(x+1) \tag {2}$$</p> <p>Now sum $(1)$ and $(2)$:</p> <p>$$2f(x)+f(x+2)+f(x-2)=\sqrt{2}(f(x+1)+f(x-1))=2f(x) $$</p> <p>$$f(x+2)+f(x-2)=0$$</p> <p>Now make $x=x+2$ then $f(x+4)=-f(x)$. </p> <p>Make $x+4$ and get $f(x+8)=-f(x+4)=f(x)$, then the period is $8$.</p> Answer 38: <p>Note that $\sqrt{2} = \frac{2}{\sqrt{2}}$. Hence,</p> <p>$$f(x + 1) + f(x - 1) = \frac{1}{\sqrt{2}}f(x) + \frac{1}{\sqrt{2}}f(x)$$ $$\begin{align}f(x + 1) &amp;= \frac{1}{\sqrt{2}}f(x) + \frac{1}{\sqrt{2}}\left(f(x) - \sqrt{2}f(x - 1)\right)\\ &amp;=\frac{1}{\sqrt{2}}f(x) + \frac{1}{\sqrt{2}}\left(-f(x - 2)\right)\\ &amp;= \frac{1}{\sqrt{2}}\left(f(x) - f(x - 2)\right)\end{align}$$</p> <p>$$\sqrt{2}f(x + 1) = f(x) - f(x - 2)$$ $$\sqrt{2}f(x) = f(x - 1) - f(x - 3)$$</p> <p>Hence,</p> <p>$$f(x - 1) - f(x - 3) = f(x + 1) + f(x - 1)$$ $$f(x + 1) = -f(x - 3)$$ $$f(x) = -f(x - 4)$$</p> <p>Therefore,</p> <p>$$f(x) = -f(x - 4) = -(-f(x - 8)) = f(x - 8)$$</p> <p>thus the period is $8$.</p> Answer 39: <p>This is an ordinary linear recurrence, which you can solve using the characteristic equation</p> <p>$$r+\frac1r=\sqrt 2.$$</p> <p>The roots are $r=\dfrac{1\pm i}{\sqrt2}$, and the general solution is</p> <p>$$f(n)=C\left(\dfrac{1+i}{\sqrt2}\right)^n+C^*\left(\dfrac{1-i}{\sqrt2}\right)^n=Ce^{in\pi/4}+C^*e^{-in\pi/4}=A\cos\frac{n\pi}4+B\sin\frac{n\pi}4.$$</p> <p>The constants $C$ or $A,B$ can be obtained from some given conditions such as $f(0)$ and $f(1)$. This will only solve for integer arguments. 
For real arguments, you can specify the initial conditions by assigning $f(x)$ arbitrary values for $0\le x &lt; 2$.</p> <p>This proves that the period is $8$.</p> Answer 40: <p>Make $x=x-1$</p> <p>$$f(x)+f(x-2)=\sqrt{2}f(x-1) \tag {1}$$</p> <p>Make $x=x+1$</p> <p>$$f(x+2)+f(x)=\sqrt{2}f(x+1) \tag {2}$$</p> <p>Now sum $(1)$ and $(2)$:</p> <p>$$2f(x)+f(x+2)+f(x-2)=\sqrt{2}(f(x+1)+f(x-1))=2f(x) $$</p> <p>$$f(x+2)+f(x-2)=0$$</p> <p>Now make $x=x+2$ then $f(x+4)=-f(x)$. </p> <p>Make $x+4$ and get $f(x+8)=-f(x+4)=f(x)$, then the period is $8$.</p> Answer 41: <p>Note that $\sqrt{2} = \frac{2}{\sqrt{2}}$. Hence,</p> <p>$$f(x + 1) + f(x - 1) = \frac{1}{\sqrt{2}}f(x) + \frac{1}{\sqrt{2}}f(x)$$ $$\begin{align}f(x + 1) &amp;= \frac{1}{\sqrt{2}}f(x) + \frac{1}{\sqrt{2}}\left(f(x) - \sqrt{2}f(x - 1)\right)\\ &amp;=\frac{1}{\sqrt{2}}f(x) + \frac{1}{\sqrt{2}}\left(-f(x - 2)\right)\\ &amp;= \frac{1}{\sqrt{2}}\left(f(x) - f(x - 2)\right)\end{align}$$</p> <p>$$\sqrt{2}f(x + 1) = f(x) - f(x - 2)$$ $$\sqrt{2}f(x) = f(x - 1) - f(x - 3)$$</p> <p>Hence,</p> <p>$$f(x - 1) - f(x - 3) = f(x + 1) + f(x - 1)$$ $$f(x + 1) = -f(x - 3)$$ $$f(x) = -f(x - 4)$$</p> <p>Therefore,</p> <p>$$f(x) = -f(x - 4) = -(-f(x - 8)) = f(x - 8)$$</p> <p>thus the period is $8$.</p> Answer 42: <p>This is an ordinary linear recurrence, which you can solve using the characteristic equation</p> <p>$$r+\frac1r=\sqrt 2.$$</p> <p>The roots are $r=\dfrac{1\pm i}{\sqrt2}$, and the general solution is</p> <p>$$f(n)=C\left(\dfrac{1+i}{\sqrt2}\right)^n+C^*\left(\dfrac{1-i}{\sqrt2}\right)^n=Ce^{in\pi/4}+C^*e^{-in\pi/4}=A\cos\frac{n\pi}4+B\sin\frac{n\pi}4.$$</p> <p>The constants $C$ or $A,B$ can be obtained from some given conditions such as $f(0)$ and $f(1)$. This will only solve for integer arguments. 
For real arguments, you can specify the initial conditions by assigning $f(x)$ arbitrary values for $0\le x &lt; 2$.</p> <p>This proves that the period is $8$.</p> Answer 43: <p>Make $x=x-1$</p> <p>$$f(x)+f(x-2)=\sqrt{2}f(x-1) \tag {1}$$</p> <p>Make $x=x+1$</p> <p>$$f(x+2)+f(x)=\sqrt{2}f(x+1) \tag {2}$$</p> <p>Now sum $(1)$ and $(2)$:</p> <p>$$2f(x)+f(x+2)+f(x-2)=\sqrt{2}(f(x+1)+f(x-1))=2f(x) $$</p> <p>$$f(x+2)+f(x-2)=0$$</p> <p>Now make $x=x+2$ then $f(x+4)=-f(x)$. </p> <p>Make $x+4$ and get $f(x+8)=-f(x+4)=f(x)$, then the period is $8$.</p> Answer 44: <p>Note that $\sqrt{2} = \frac{2}{\sqrt{2}}$. Hence,</p> <p>$$f(x + 1) + f(x - 1) = \frac{1}{\sqrt{2}}f(x) + \frac{1}{\sqrt{2}}f(x)$$ $$\begin{align}f(x + 1) &amp;= \frac{1}{\sqrt{2}}f(x) + \frac{1}{\sqrt{2}}\left(f(x) - \sqrt{2}f(x - 1)\right)\\ &amp;=\frac{1}{\sqrt{2}}f(x) + \frac{1}{\sqrt{2}}\left(-f(x - 2)\right)\\ &amp;= \frac{1}{\sqrt{2}}\left(f(x) - f(x - 2)\right)\end{align}$$</p> <p>$$\sqrt{2}f(x + 1) = f(x) - f(x - 2)$$ $$\sqrt{2}f(x) = f(x - 1) - f(x - 3)$$</p> <p>Hence,</p> <p>$$f(x - 1) - f(x - 3) = f(x + 1) + f(x - 1)$$ $$f(x + 1) = -f(x - 3)$$ $$f(x) = -f(x - 4)$$</p> <p>Therefore,</p> <p>$$f(x) = -f(x - 4) = -(-f(x - 8)) = f(x - 8)$$</p> <p>thus the period is $8$.</p> Answer 45: <p>This is an ordinary linear recurrence, which you can solve using the characteristic equation</p> <p>$$r+\frac1r=\sqrt 2.$$</p> <p>The roots are $r=\dfrac{1\pm i}{\sqrt2}$, and the general solution is</p> <p>$$f(n)=C\left(\dfrac{1+i}{\sqrt2}\right)^n+C^*\left(\dfrac{1-i}{\sqrt2}\right)^n=Ce^{in\pi/4}+C^*e^{-in\pi/4}=A\cos\frac{n\pi}4+B\sin\frac{n\pi}4.$$</p> <p>The constants $C$ or $A,B$ can be obtained from some given conditions such as $f(0)$ and $f(1)$. This will only solve for integer arguments. 
For real arguments, you can specify the initial conditions by assigning $f(x)$ arbitrary values for $0\le x &lt; 2$.</p> <p>This proves that the period is $8$.</p> Answer 46: <p>Make $x=x-1$</p> <p>$$f(x)+f(x-2)=\sqrt{2}f(x-1) \tag {1}$$</p> <p>Make $x=x+1$</p> <p>$$f(x+2)+f(x)=\sqrt{2}f(x+1) \tag {2}$$</p> <p>Now sum $(1)$ and $(2)$:</p> <p>$$2f(x)+f(x+2)+f(x-2)=\sqrt{2}(f(x+1)+f(x-1))=2f(x) $$</p> <p>$$f(x+2)+f(x-2)=0$$</p> <p>Now make $x=x+2$ then $f(x+4)=-f(x)$. </p> <p>Make $x+4$ and get $f(x+8)=-f(x+4)=f(x)$, then the period is $8$.</p> Answer 47: <p>Note that $\sqrt{2} = \frac{2}{\sqrt{2}}$. Hence,</p> <p>$$f(x + 1) + f(x - 1) = \frac{1}{\sqrt{2}}f(x) + \frac{1}{\sqrt{2}}f(x)$$ $$\begin{align}f(x + 1) &amp;= \frac{1}{\sqrt{2}}f(x) + \frac{1}{\sqrt{2}}\left(f(x) - \sqrt{2}f(x - 1)\right)\\ &amp;=\frac{1}{\sqrt{2}}f(x) + \frac{1}{\sqrt{2}}\left(-f(x - 2)\right)\\ &amp;= \frac{1}{\sqrt{2}}\left(f(x) - f(x - 2)\right)\end{align}$$</p> <p>$$\sqrt{2}f(x + 1) = f(x) - f(x - 2)$$ $$\sqrt{2}f(x) = f(x - 1) - f(x - 3)$$</p> <p>Hence,</p> <p>$$f(x - 1) - f(x - 3) = f(x + 1) + f(x - 1)$$ $$f(x + 1) = -f(x - 3)$$ $$f(x) = -f(x - 4)$$</p> <p>Therefore,</p> <p>$$f(x) = -f(x - 4) = -(-f(x - 8)) = f(x - 8)$$</p> <p>thus the period is $8$.</p> Answer 48: <p>This is an ordinary linear recurrence, which you can solve using the characteristic equation</p> <p>$$r+\frac1r=\sqrt 2.$$</p> <p>The roots are $r=\dfrac{1\pm i}{\sqrt2}$, and the general solution is</p> <p>$$f(n)=C\left(\dfrac{1+i}{\sqrt2}\right)^n+C^*\left(\dfrac{1-i}{\sqrt2}\right)^n=Ce^{in\pi/4}+C^*e^{-in\pi/4}=A\cos\frac{n\pi}4+B\sin\frac{n\pi}4.$$</p> <p>The constants $C$ or $A,B$ can be obtained from some given conditions such as $f(0)$ and $f(1)$. This will only solve for integer arguments. 
For real arguments, you can specify the initial conditions by assigning $f(x)$ arbitrary values for $0\le x &lt; 2$.</p> <p>This proves that the period is $8$.</p> Answer 49: <p>Make $x=x-1$</p> <p>$$f(x)+f(x-2)=\sqrt{2}f(x-1) \tag {1}$$</p> <p>Make $x=x+1$</p> <p>$$f(x+2)+f(x)=\sqrt{2}f(x+1) \tag {2}$$</p> <p>Now sum $(1)$ and $(2)$:</p> <p>$$2f(x)+f(x+2)+f(x-2)=\sqrt{2}(f(x+1)+f(x-1))=2f(x) $$</p> <p>$$f(x+2)+f(x-2)=0$$</p> <p>Now make $x=x+2$ then $f(x+4)=-f(x)$. </p> <p>Make $x+4$ and get $f(x+8)=-f(x+4)=f(x)$, then the period is $8$.</p> Answer 50: <p>Note that $\sqrt{2} = \frac{2}{\sqrt{2}}$. Hence,</p> <p>$$f(x + 1) + f(x - 1) = \frac{1}{\sqrt{2}}f(x) + \frac{1}{\sqrt{2}}f(x)$$ $$\begin{align}f(x + 1) &amp;= \frac{1}{\sqrt{2}}f(x) + \frac{1}{\sqrt{2}}\left(f(x) - \sqrt{2}f(x - 1)\right)\\ &amp;=\frac{1}{\sqrt{2}}f(x) + \frac{1}{\sqrt{2}}\left(-f(x - 2)\right)\\ &amp;= \frac{1}{\sqrt{2}}\left(f(x) - f(x - 2)\right)\end{align}$$</p> <p>$$\sqrt{2}f(x + 1) = f(x) - f(x - 2)$$ $$\sqrt{2}f(x) = f(x - 1) - f(x - 3)$$</p> <p>Hence,</p> <p>$$f(x - 1) - f(x - 3) = f(x + 1) + f(x - 1)$$ $$f(x + 1) = -f(x - 3)$$ $$f(x) = -f(x - 4)$$</p> <p>Therefore,</p> <p>$$f(x) = -f(x - 4) = -(-f(x - 8)) = f(x - 8)$$</p> <p>thus the period is $8$.</p> Answer 51: <p>This is an ordinary linear recurrence, which you can solve using the characteristic equation</p> <p>$$r+\frac1r=\sqrt 2.$$</p> <p>The roots are $r=\dfrac{1\pm i}{\sqrt2}$, and the general solution is</p> <p>$$f(n)=C\left(\dfrac{1+i}{\sqrt2}\right)^n+C^*\left(\dfrac{1-i}{\sqrt2}\right)^n=Ce^{in\pi/4}+C^*e^{-in\pi/4}=A\cos\frac{n\pi}4+B\sin\frac{n\pi}4.$$</p> <p>The constants $C$ or $A,B$ can be obtained from some given conditions such as $f(0)$ and $f(1)$. This will only solve for integer arguments. For real arguments, you can specify the initial conditions by assigning $f(x)$ arbitrary values for $0\le x &lt; 2$.</p> <p>This proves that the period is $8$.</p>
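The period-8 conclusion in the answers above is easy to check numerically: iterate the recurrence $f(x+1)=\sqrt{2}\,f(x)-f(x-1)$ from arbitrary starting values and compare $f(n+8)$ with $f(n)$. This is a quick sketch (the initial values $0.3$ and $-1.7$ are arbitrary choices for illustration):

```python
import math

def sequence(f0, f1, n):
    """Return [f(0), f(1), ..., f(n)] generated by f(x+1) = sqrt(2)*f(x) - f(x-1)."""
    vals = [f0, f1]
    for _ in range(n - 1):
        vals.append(math.sqrt(2) * vals[-1] - vals[-2])
    return vals

vals = sequence(0.3, -1.7, 24)

# f(n+8) == f(n) for all n, up to floating-point error
assert all(abs(vals[n + 8] - vals[n]) < 1e-9 for n in range(16))
# The intermediate identity f(x+4) = -f(x) from the answers also holds
assert abs(vals[4] + vals[0]) < 1e-9
```

This only exercises integer arguments, matching the remark in the last answer that real arguments need initial data on the whole interval $0\le x &lt; 2$.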
Riccati Comparison Principle
2,089,782
8
https://math.stackexchange.com/questions/2089782/riccati-comparison-principle
<p>This is a proposition in a book I'm reading; I am unsure how to fill in the details of its proof.</p> <p><strong>Proposition</strong></p> <p>Suppose we have two smooth functions <span class="math-container">$\rho_{1,2} : (0,b) \to \mathbb{R}$</span> such that <span class="math-container">$$\rho_1'+ \rho_1^2 \leq \rho_2'+ \rho_2^2 $$</span> then <span class="math-container">$$\rho_2 - \rho_1 \geq \limsup_{t \to 0}\,(\, \rho_2(t) - \rho_1(t)\,)$$</span></p> <p><strong>Question</strong></p> <p>The author says that this follows from the easily verified fact that the function <span class="math-container">$(\rho_2 - \rho_1) e^F$</span> is increasing, where <span class="math-container">$F$</span> is the antiderivative of <span class="math-container">$\rho_2 + \rho_1$</span> on <span class="math-container">$(0, b)$</span>. This would be true if <span class="math-container">$\rho_2 - \rho_1$</span> were increasing, but I don't think that is the case. How does the lim sup inequality follow from the fact that <span class="math-container">$(\rho_2 - \rho_1) e^F$</span> is increasing?</p>
[ { "answer_id": 3086715, "score": 6, "text": "<p>The book is wrong and it's easy to find counterexamples. For example if <span class=\"math-container\">$ρ_1(x)=1/(x+2)$</span> and <span class=\"math-container\">$ρ_2(x)=1/(x+1)$</span> then <span class=\"math-container\">$ρ_i'+ρ_i^2=0$</span> and <span class=\"math-container\">$ρ_2(0)-ρ_1(0)=1/2$</span> but <span class=\"math-container\">$ρ_2(1)-ρ_1(1)=1/6.$</span></p>\n\n<p>The author may have been thinking of something like the following argument which I will paraphrase from \"<a href=\"https://web.math.sinica.edu.tw/bulletin_ns/20142/2014202.pdf\" rel=\"nofollow noreferrer\">New comparison theorems in Riemannian geometry</a>\" by Y. Han and coauthors, Theorem 5.1.\nSuppose <span class=\"math-container\">$ρ_1$</span> and <span class=\"math-container\">$ρ_2$</span> are both <span class=\"math-container\">$1/t+O(1).$</span>\nDefine <span class=\"math-container\">$ϕ_i(t)=t\\exp(\\int_0^t (ρ_i(s)-1/s)ds).$</span> Then each <span class=\"math-container\">$ϕ_i$</span> is a <span class=\"math-container\">$C^1$</span> function on <span class=\"math-container\">$[0,b)$</span> (i.e. 
the right-sided derivative exists at zero - in fact it's <span class=\"math-container\">$1$</span>), smooth and strictly positive on <span class=\"math-container\">$(0,b),$</span> with <span class=\"math-container\">$ϕ_i(0)=0.$</span> Differentiating gives <span class=\"math-container\">$ϕ'_i(t)=ρ_i(t) ϕ_i(t)$</span> and hence <span class=\"math-container\">$ϕ''_i(t)=(ρ'_i(t)+ρ_i(t)^2)ϕ_i(t).$</span> This leads to\n<span class=\"math-container\">$$(ϕ_2'ϕ_1-ϕ_1'ϕ_2)'=ϕ_2''ϕ_1-ϕ_1''ϕ_2=((ρ'_2+ρ_2^2)-(ρ'_1+ρ_1^2))ϕ_1ϕ_2\\geq 0.$$</span>\nUsing the fact that <span class=\"math-container\">$ϕ_2'(0)ϕ_1(0)-ϕ_1'(0)ϕ_2(0)=0$</span> we get <span class=\"math-container\">$ϕ_2'ϕ_1\\geq ϕ_1'ϕ_2.$</span> Using our previous equation for <span class=\"math-container\">$ϕ_i',$</span> this means <span class=\"math-container\">$ρ_2(t)\\geq ρ_1(t).$</span></p>\n" }, { "answer_id": 2089955, "score": 3, "text": "<p>Note that\n$$\\begin{align}\\frac d{dt}\\left[(\\rho_2-\\rho_1)e^F\\right]&amp;=(\\rho'_2-\\rho'_1)e^F+(\\rho_2-\\rho_1)(\\rho_2+\\rho_1)e^F\\\\\n&amp;=e^F\\left(\\rho'_2-\\rho'_1+\\rho_2^2-\\rho_1^2\\right)\n\\end{align}$$\nand since $e^F&gt;0$, your inequality implies that the <em>derivative</em> of $(\\rho_2-\\rho_1)e^F$ is positive. So the function $(\\rho_2-\\rho_1)e^F$ is increasing.</p>\n\n<p>On the other hand, a careless approach <strong>with some possible errors</strong> could be this:\n$$\\begin{align}\n\\rho_1'+ \\rho_1^2 \\leq \\rho_2'+ \\rho_2^2 &amp;\\implies\\rho_1'-\\rho_2' \\leq \\rho_2^2- \\rho_1^2\\\\\n&amp;\\implies-\\dfrac{\\rho_1'-\\rho_2'}{\\rho_1-\\rho_2}\\ge\\rho_1+\\rho_2\\\\\n&amp;\\implies-\\ln|\\rho_1-\\rho_2|\\ge F\\\\\n&amp;\\implies\\frac 1{|\\rho_1-\\rho_2|}\\le e^F\\implies(\\rho_2-\\rho_1)e^F\\ge 1\n\\end{align}$$\nor something like that....</p>\n" }, { "answer_id": 3086715, "score": 6, "text": "<p>The book is wrong and it's easy to find counterexamples. 
For example if <span class=\"math-container\">$ρ_1(x)=1/(x+2)$</span> and <span class=\"math-container\">$ρ_2(x)=1/(x+1)$</span> then <span class=\"math-container\">$ρ_i'+ρ_i^2=0$</span> and <span class=\"math-container\">$ρ_2(0)-ρ_1(0)=1/2$</span> but <span class=\"math-container\">$ρ_2(1)-ρ_1(1)=1/6.$</span></p>\n\n<p>The author may have been thinking of something like the following argument which I will paraphrase from \"<a href=\"https://web.math.sinica.edu.tw/bulletin_ns/20142/2014202.pdf\" rel=\"nofollow noreferrer\">New comparison theorems in Riemannian geometry</a>\" by Y. Han and coauthors, Theorem 5.1.\nSuppose <span class=\"math-container\">$ρ_1$</span> and <span class=\"math-container\">$ρ_2$</span> are both <span class=\"math-container\">$1/t+O(1).$</span>\nDefine <span class=\"math-container\">$ϕ_i(t)=t\\exp(\\int_0^t (ρ_i(s)-1/s)ds).$</span> Then each <span class=\"math-container\">$ϕ_i$</span> is a <span class=\"math-container\">$C^1$</span> function on <span class=\"math-container\">$[0,b)$</span> (i.e. 
the right-sided derivative exists at zero - in fact it's <span class=\"math-container\">$1$</span>), smooth and strictly positive on <span class=\"math-container\">$(0,b),$</span> with <span class=\"math-container\">$ϕ_i(0)=0.$</span> Differentiating gives <span class=\"math-container\">$ϕ'_i(t)=ρ_i(t) ϕ_i(t)$</span> and hence <span class=\"math-container\">$ϕ''_i(t)=(ρ'_i(t)+ρ_i(t)^2)ϕ_i(t).$</span> This leads to\n<span class=\"math-container\">$$(ϕ_2'ϕ_1-ϕ_1'ϕ_2)'=ϕ_2''ϕ_1-ϕ_1''ϕ_2=((ρ'_2+ρ_2^2)-(ρ'_1+ρ_1^2))ϕ_1ϕ_2\\geq 0.$$</span>\nUsing the fact that <span class=\"math-container\">$ϕ_2'(0)ϕ_1(0)-ϕ_1'(0)ϕ_2(0)=0$</span> we get <span class=\"math-container\">$ϕ_2'ϕ_1\\geq ϕ_1'ϕ_2.$</span> Using our previous equation for <span class=\"math-container\">$ϕ_i',$</span> this means <span class=\"math-container\">$ρ_2(t)\\geq ρ_1(t).$</span></p>\n" }, { "answer_id": 2089955, "score": 3, "text": "<p>Note that\n$$\\begin{align}\\frac d{dt}\\left[(\\rho_2-\\rho_1)e^F\\right]&amp;=(\\rho'_2-\\rho'_1)e^F+(\\rho_2-\\rho_1)(\\rho_2+\\rho_1)e^F\\\\\n&amp;=e^F\\left(\\rho'_2-\\rho'_1+\\rho_2^2-\\rho_1^2\\right)\n\\end{align}$$\nand since $e^F&gt;0$, your inequality implies that the <em>derivative</em> of $(\\rho_2-\\rho_1)e^F$ is positive. So the function $(\\rho_2-\\rho_1)e^F$ is increasing.</p>\n\n<p>On the other hand, a careless approach <strong>with some possible errors</strong> could be this:\n$$\\begin{align}\n\\rho_1'+ \\rho_1^2 \\leq \\rho_2'+ \\rho_2^2 &amp;\\implies\\rho_1'-\\rho_2' \\leq \\rho_2^2- \\rho_1^2\\\\\n&amp;\\implies-\\dfrac{\\rho_1'-\\rho_2'}{\\rho_1-\\rho_2}\\ge\\rho_1+\\rho_2\\\\\n&amp;\\implies-\\ln|\\rho_1-\\rho_2|\\ge F\\\\\n&amp;\\implies\\frac 1{|\\rho_1-\\rho_2|}\\le e^F\\implies(\\rho_2-\\rho_1)e^F\\ge 1\n\\end{align}$$\nor something like that....</p>\n" }, { "answer_id": 3086715, "score": 6, "text": "<p>The book is wrong and it's easy to find counterexamples. 
For example if <span class=\"math-container\">$ρ_1(x)=1/(x+2)$</span> and <span class=\"math-container\">$ρ_2(x)=1/(x+1)$</span> then <span class=\"math-container\">$ρ_i'+ρ_i^2=0$</span> and <span class=\"math-container\">$ρ_2(0)-ρ_1(0)=1/2$</span> but <span class=\"math-container\">$ρ_2(1)-ρ_1(1)=1/6.$</span></p>\n\n<p>The author may have been thinking of something like the following argument which I will paraphrase from \"<a href=\"https://web.math.sinica.edu.tw/bulletin_ns/20142/2014202.pdf\" rel=\"nofollow noreferrer\">New comparison theorems in Riemannian geometry</a>\" by Y. Han and coauthors, Theorem 5.1.\nSuppose <span class=\"math-container\">$ρ_1$</span> and <span class=\"math-container\">$ρ_2$</span> are both <span class=\"math-container\">$1/t+O(1).$</span>\nDefine <span class=\"math-container\">$ϕ_i(t)=t\\exp(\\int_0^t (ρ_i(s)-1/s)ds).$</span> Then each <span class=\"math-container\">$ϕ_i$</span> is a <span class=\"math-container\">$C^1$</span> function on <span class=\"math-container\">$[0,b)$</span> (i.e. 
the right-sided derivative exists at zero - in fact it's <span class=\"math-container\">$1$</span>), smooth and strictly positive on <span class=\"math-container\">$(0,b),$</span> with <span class=\"math-container\">$ϕ_i(0)=0.$</span> Differentiating gives <span class=\"math-container\">$ϕ'_i(t)=ρ_i(t) ϕ_i(t)$</span> and hence <span class=\"math-container\">$ϕ''_i(t)=(ρ'_i(t)+ρ_i(t)^2)ϕ_i(t).$</span> This leads to\n<span class=\"math-container\">$$(ϕ_2'ϕ_1-ϕ_1'ϕ_2)'=ϕ_2''ϕ_1-ϕ_1''ϕ_2=((ρ'_2+ρ_2^2)-(ρ'_1+ρ_1^2))ϕ_1ϕ_2\\geq 0.$$</span>\nUsing the fact that <span class=\"math-container\">$ϕ_2'(0)ϕ_1(0)-ϕ_1'(0)ϕ_2(0)=0$</span> we get <span class=\"math-container\">$ϕ_2'ϕ_1\\geq ϕ_1'ϕ_2.$</span> Using our previous equation for <span class=\"math-container\">$ϕ_i',$</span> this means <span class=\"math-container\">$ρ_2(t)\\geq ρ_1(t).$</span></p>\n" }, { "answer_id": 2089955, "score": 3, "text": "<p>Note that\n$$\\begin{align}\\frac d{dt}\\left[(\\rho_2-\\rho_1)e^F\\right]&amp;=(\\rho'_2-\\rho'_1)e^F+(\\rho_2-\\rho_1)(\\rho_2+\\rho_1)e^F\\\\\n&amp;=e^F\\left(\\rho'_2-\\rho'_1+\\rho_2^2-\\rho_1^2\\right)\n\\end{align}$$\nand since $e^F&gt;0$, your inequality implies that the <em>derivative</em> of $(\\rho_2-\\rho_1)e^F$ is positive. So the function $(\\rho_2-\\rho_1)e^F$ is increasing.</p>\n\n<p>On the other hand, a careless approach <strong>with some possible errors</strong> could be this:\n$$\\begin{align}\n\\rho_1'+ \\rho_1^2 \\leq \\rho_2'+ \\rho_2^2 &amp;\\implies\\rho_1'-\\rho_2' \\leq \\rho_2^2- \\rho_1^2\\\\\n&amp;\\implies-\\dfrac{\\rho_1'-\\rho_2'}{\\rho_1-\\rho_2}\\ge\\rho_1+\\rho_2\\\\\n&amp;\\implies-\\ln|\\rho_1-\\rho_2|\\ge F\\\\\n&amp;\\implies\\frac 1{|\\rho_1-\\rho_2|}\\le e^F\\implies(\\rho_2-\\rho_1)e^F\\ge 1\n\\end{align}$$\nor something like that....</p>\n" }, { "answer_id": 3086715, "score": 6, "text": "<p>The book is wrong and it's easy to find counterexamples. 
For example if <span class=\"math-container\">$ρ_1(x)=1/(x+2)$</span> and <span class=\"math-container\">$ρ_2(x)=1/(x+1)$</span> then <span class=\"math-container\">$ρ_i'+ρ_i^2=0$</span> and <span class=\"math-container\">$ρ_2(0)-ρ_1(0)=1/2$</span> but <span class=\"math-container\">$ρ_2(1)-ρ_1(1)=1/6.$</span></p>\n\n<p>The author may have been thinking of something like the following argument which I will paraphrase from \"<a href=\"https://web.math.sinica.edu.tw/bulletin_ns/20142/2014202.pdf\" rel=\"nofollow noreferrer\">New comparison theorems in Riemannian geometry</a>\" by Y. Han and coauthors, Theorem 5.1.\nSuppose <span class=\"math-container\">$ρ_1$</span> and <span class=\"math-container\">$ρ_2$</span> are both <span class=\"math-container\">$1/t+O(1).$</span>\nDefine <span class=\"math-container\">$ϕ_i(t)=t\\exp(\\int_0^t (ρ_i(s)-1/s)ds).$</span> Then each <span class=\"math-container\">$ϕ_i$</span> is a <span class=\"math-container\">$C^1$</span> function on <span class=\"math-container\">$[0,b)$</span> (i.e. 
the right-sided derivative exists at zero - in fact it's <span class=\"math-container\">$1$</span>), smooth and strictly positive on <span class=\"math-container\">$(0,b),$</span> with <span class=\"math-container\">$ϕ_i(0)=0.$</span> Differentiating gives <span class=\"math-container\">$ϕ'_i(t)=ρ_i(t) ϕ_i(t)$</span> and hence <span class=\"math-container\">$ϕ''_i(t)=(ρ'_i(t)+ρ_i(t)^2)ϕ_i(t).$</span> This leads to\n<span class=\"math-container\">$$(ϕ_2'ϕ_1-ϕ_1'ϕ_2)'=ϕ_2''ϕ_1-ϕ_1''ϕ_2=((ρ'_2+ρ_2^2)-(ρ'_1+ρ_1^2))ϕ_1ϕ_2\\geq 0.$$</span>\nUsing the fact that <span class=\"math-container\">$ϕ_2'(0)ϕ_1(0)-ϕ_1'(0)ϕ_2(0)=0$</span> we get <span class=\"math-container\">$ϕ_2'ϕ_1\\geq ϕ_1'ϕ_2.$</span> Using our previous equation for <span class=\"math-container\">$ϕ_i',$</span> this means <span class=\"math-container\">$ρ_2(t)\\geq ρ_1(t).$</span></p>\n" }, { "answer_id": 2089955, "score": 3, "text": "<p>Note that\n$$\\begin{align}\\frac d{dt}\\left[(\\rho_2-\\rho_1)e^F\\right]&amp;=(\\rho'_2-\\rho'_1)e^F+(\\rho_2-\\rho_1)(\\rho_2+\\rho_1)e^F\\\\\n&amp;=e^F\\left(\\rho'_2-\\rho'_1+\\rho_2^2-\\rho_1^2\\right)\n\\end{align}$$\nand since $e^F&gt;0$, your inequality implies that the <em>derivative</em> of $(\\rho_2-\\rho_1)e^F$ is positive. So the function $(\\rho_2-\\rho_1)e^F$ is increasing.</p>\n\n<p>On the other hand, a careless approach <strong>with some possible errors</strong> could be this:\n$$\\begin{align}\n\\rho_1'+ \\rho_1^2 \\leq \\rho_2'+ \\rho_2^2 &amp;\\implies\\rho_1'-\\rho_2' \\leq \\rho_2^2- \\rho_1^2\\\\\n&amp;\\implies-\\dfrac{\\rho_1'-\\rho_2'}{\\rho_1-\\rho_2}\\ge\\rho_1+\\rho_2\\\\\n&amp;\\implies-\\ln|\\rho_1-\\rho_2|\\ge F\\\\\n&amp;\\implies\\frac 1{|\\rho_1-\\rho_2|}\\le e^F\\implies(\\rho_2-\\rho_1)e^F\\ge 1\n\\end{align}$$\nor something like that....</p>\n" }, { "answer_id": 3086715, "score": 6, "text": "<p>The book is wrong and it's easy to find counterexamples. 
For example if <span class=\"math-container\">$ρ_1(x)=1/(x+2)$</span> and <span class=\"math-container\">$ρ_2(x)=1/(x+1)$</span> then <span class=\"math-container\">$ρ_i'+ρ_i^2=0$</span> and <span class=\"math-container\">$ρ_2(0)-ρ_1(0)=1/2$</span> but <span class=\"math-container\">$ρ_2(1)-ρ_1(1)=1/6.$</span></p>\n\n<p>The author may have been thinking of something like the following argument which I will paraphrase from \"<a href=\"https://web.math.sinica.edu.tw/bulletin_ns/20142/2014202.pdf\" rel=\"nofollow noreferrer\">New comparison theorems in Riemannian geometry</a>\" by Y. Han and coauthors, Theorem 5.1.\nSuppose <span class=\"math-container\">$ρ_1$</span> and <span class=\"math-container\">$ρ_2$</span> are both <span class=\"math-container\">$1/t+O(1).$</span>\nDefine <span class=\"math-container\">$ϕ_i(t)=t\\exp(\\int_0^t (ρ_i(s)-1/s)ds).$</span> Then each <span class=\"math-container\">$ϕ_i$</span> is a <span class=\"math-container\">$C^1$</span> function on <span class=\"math-container\">$[0,b)$</span> (i.e. 
the right-sided derivative exists at zero - in fact it's <span class=\"math-container\">$1$</span>), smooth and strictly positive on <span class=\"math-container\">$(0,b),$</span> with <span class=\"math-container\">$ϕ_i(0)=0.$</span> Differentiating gives <span class=\"math-container\">$ϕ'_i(t)=ρ_i(t) ϕ_i(t)$</span> and hence <span class=\"math-container\">$ϕ''_i(t)=(ρ'_i(t)+ρ_i(t)^2)ϕ_i(t).$</span> This leads to\n<span class=\"math-container\">$$(ϕ_2'ϕ_1-ϕ_1'ϕ_2)'=ϕ_2''ϕ_1-ϕ_1''ϕ_2=((ρ'_2+ρ_2^2)-(ρ'_1+ρ_1^2))ϕ_1ϕ_2\\geq 0.$$</span>\nUsing the fact that <span class=\"math-container\">$ϕ_2'(0)ϕ_1(0)-ϕ_1'(0)ϕ_2(0)=0$</span> we get <span class=\"math-container\">$ϕ_2'ϕ_1\\geq ϕ_1'ϕ_2.$</span> Using our previous equation for <span class=\"math-container\">$ϕ_i',$</span> this means <span class=\"math-container\">$ρ_2(t)\\geq ρ_1(t).$</span></p>\n" }, { "answer_id": 2089955, "score": 3, "text": "<p>Note that\n$$\\begin{align}\\frac d{dt}\\left[(\\rho_2-\\rho_1)e^F\\right]&amp;=(\\rho'_2-\\rho'_1)e^F+(\\rho_2-\\rho_1)(\\rho_2+\\rho_1)e^F\\\\\n&amp;=e^F\\left(\\rho'_2-\\rho'_1+\\rho_2^2-\\rho_1^2\\right)\n\\end{align}$$\nand since $e^F&gt;0$, your inequality implies that the <em>derivative</em> of $(\\rho_2-\\rho_1)e^F$ is positive. So the function $(\\rho_2-\\rho_1)e^F$ is increasing.</p>\n\n<p>On the other hand, a careless approach <strong>with some possible errors</strong> could be this:\n$$\\begin{align}\n\\rho_1'+ \\rho_1^2 \\leq \\rho_2'+ \\rho_2^2 &amp;\\implies\\rho_1'-\\rho_2' \\leq \\rho_2^2- \\rho_1^2\\\\\n&amp;\\implies-\\dfrac{\\rho_1'-\\rho_2'}{\\rho_1-\\rho_2}\\ge\\rho_1+\\rho_2\\\\\n&amp;\\implies-\\ln|\\rho_1-\\rho_2|\\ge F\\\\\n&amp;\\implies\\frac 1{|\\rho_1-\\rho_2|}\\le e^F\\implies(\\rho_2-\\rho_1)e^F\\ge 1\n\\end{align}$$\nor something like that....</p>\n" }, { "answer_id": 3086715, "score": 6, "text": "<p>The book is wrong and it's easy to find counterexamples. 
For example if <span class=\"math-container\">$ρ_1(x)=1/(x+2)$</span> and <span class=\"math-container\">$ρ_2(x)=1/(x+1)$</span> then <span class=\"math-container\">$ρ_i'+ρ_i^2=0$</span> and <span class=\"math-container\">$ρ_2(0)-ρ_1(0)=1/2$</span> but <span class=\"math-container\">$ρ_2(1)-ρ_1(1)=1/6.$</span></p>\n\n<p>The author may have been thinking of something like the following argument which I will paraphrase from \"<a href=\"https://web.math.sinica.edu.tw/bulletin_ns/20142/2014202.pdf\" rel=\"nofollow noreferrer\">New comparison theorems in Riemannian geometry</a>\" by Y. Han and coauthors, Theorem 5.1.\nSuppose <span class=\"math-container\">$ρ_1$</span> and <span class=\"math-container\">$ρ_2$</span> are both <span class=\"math-container\">$1/t+O(1).$</span>\nDefine <span class=\"math-container\">$ϕ_i(t)=t\\exp(\\int_0^t (ρ_i(s)-1/s)ds).$</span> Then each <span class=\"math-container\">$ϕ_i$</span> is a <span class=\"math-container\">$C^1$</span> function on <span class=\"math-container\">$[0,b)$</span> (i.e. 
the right-sided derivative exists at zero - in fact it's <span class=\"math-container\">$1$</span>), smooth and strictly positive on <span class=\"math-container\">$(0,b),$</span> with <span class=\"math-container\">$ϕ_i(0)=0.$</span> Differentiating gives <span class=\"math-container\">$ϕ'_i(t)=ρ_i(t) ϕ_i(t)$</span> and hence <span class=\"math-container\">$ϕ''_i(t)=(ρ'_i(t)+ρ_i(t)^2)ϕ_i(t).$</span> This leads to\n<span class=\"math-container\">$$(ϕ_2'ϕ_1-ϕ_1'ϕ_2)'=ϕ_2''ϕ_1-ϕ_1''ϕ_2=((ρ'_2+ρ_2^2)-(ρ'_1+ρ_1^2))ϕ_1ϕ_2\\geq 0.$$</span>\nUsing the fact that <span class=\"math-container\">$ϕ_2'(0)ϕ_1(0)-ϕ_1'(0)ϕ_2(0)=0$</span> we get <span class=\"math-container\">$ϕ_2'ϕ_1\\geq ϕ_1'ϕ_2.$</span> Using our previous equation for <span class=\"math-container\">$ϕ_i',$</span> this means <span class=\"math-container\">$ρ_2(t)\\geq ρ_1(t).$</span></p>\n" }, { "answer_id": 2089955, "score": 3, "text": "<p>Note that\n$$\\begin{align}\\frac d{dt}\\left[(\\rho_2-\\rho_1)e^F\\right]&amp;=(\\rho'_2-\\rho'_1)e^F+(\\rho_2-\\rho_1)(\\rho_2+\\rho_1)e^F\\\\\n&amp;=e^F\\left(\\rho'_2-\\rho'_1+\\rho_2^2-\\rho_1^2\\right)\n\\end{align}$$\nand since $e^F&gt;0$, your inequality implies that the <em>derivative</em> of $(\\rho_2-\\rho_1)e^F$ is positive. So the function $(\\rho_2-\\rho_1)e^F$ is increasing.</p>\n\n<p>On the other hand, a careless approach <strong>with some possible errors</strong> could be this:\n$$\\begin{align}\n\\rho_1'+ \\rho_1^2 \\leq \\rho_2'+ \\rho_2^2 &amp;\\implies\\rho_1'-\\rho_2' \\leq \\rho_2^2- \\rho_1^2\\\\\n&amp;\\implies-\\dfrac{\\rho_1'-\\rho_2'}{\\rho_1-\\rho_2}\\ge\\rho_1+\\rho_2\\\\\n&amp;\\implies-\\ln|\\rho_1-\\rho_2|\\ge F\\\\\n&amp;\\implies\\frac 1{|\\rho_1-\\rho_2|}\\le e^F\\implies(\\rho_2-\\rho_1)e^F\\ge 1\n\\end{align}$$\nor something like that....</p>\n" } ]
Question: Riccati Comparison Principle Answer 1: <p>The book is wrong and it's easy to find counterexamples. For example if <span class="math-container">$ρ_1(x)=1/(x+2)$</span> and <span class="math-container">$ρ_2(x)=1/(x+1)$</span> then <span class="math-container">$ρ_i'+ρ_i^2=0$</span> and <span class="math-container">$ρ_2(0)-ρ_1(0)=1/2$</span> but <span class="math-container">$ρ_2(1)-ρ_1(1)=1/6.$</span></p> <p>The author may have been thinking of something like the following argument which I will paraphrase from "<a href="https://web.math.sinica.edu.tw/bulletin_ns/20142/2014202.pdf" rel="nofollow noreferrer">New comparison theorems in Riemannian geometry</a>" by Y. Han and coauthors, Theorem 5.1. Suppose <span class="math-container">$ρ_1$</span> and <span class="math-container">$ρ_2$</span> are both <span class="math-container">$1/t+O(1).$</span> Define <span class="math-container">$ϕ_i(t)=t\exp(\int_0^t (ρ_i(s)-1/s)ds).$</span> Then each <span class="math-container">$ϕ_i$</span> is a <span class="math-container">$C^1$</span> function on <span class="math-container">$[0,b)$</span> (i.e. 
the right-sided derivative exists at zero - in fact it's <span class="math-container">$1$</span>), smooth and strictly positive on <span class="math-container">$(0,b),$</span> with <span class="math-container">$ϕ_i(0)=0.$</span> Differentiating gives <span class="math-container">$ϕ'_i(t)=ρ_i(t) ϕ_i(t)$</span> and hence <span class="math-container">$ϕ''_i(t)=(ρ'_i(t)+ρ_i(t)^2)ϕ_i(t).$</span> This leads to <span class="math-container">$$(ϕ_2'ϕ_1-ϕ_1'ϕ_2)'=ϕ_2''ϕ_1-ϕ_1''ϕ_2=((ρ'_2+ρ_2^2)-(ρ'_1+ρ_1^2))ϕ_1ϕ_2\geq 0.$$</span> Using the fact that <span class="math-container">$ϕ_2'(0)ϕ_1(0)-ϕ_1'(0)ϕ_2(0)=0$</span> we get <span class="math-container">$ϕ_2'ϕ_1\geq ϕ_1'ϕ_2.$</span> Using our previous equation for <span class="math-container">$ϕ_i',$</span> this means <span class="math-container">$ρ_2(t)\geq ρ_1(t).$</span></p> Answer 2: <p>Note that $$\begin{align}\frac d{dt}\left[(\rho_2-\rho_1)e^F\right]&amp;=(\rho'_2-\rho'_1)e^F+(\rho_2-\rho_1)(\rho_2+\rho_1)e^F\\ &amp;=e^F\left(\rho'_2-\rho'_1+\rho_2^2-\rho_1^2\right) \end{align}$$ and since $e^F&gt;0$, your inequality implies that the <em>derivative</em> of $(\rho_2-\rho_1)e^F$ is positive. So the function $(\rho_2-\rho_1)e^F$ is increasing.</p> <p>On the other hand, a careless approach <strong>with some possible errors</strong> could be this: $$\begin{align} \rho_1'+ \rho_1^2 \leq \rho_2'+ \rho_2^2 &amp;\implies\rho_1'-\rho_2' \leq \rho_2^2- \rho_1^2\\ &amp;\implies-\dfrac{\rho_1'-\rho_2'}{\rho_1-\rho_2}\ge\rho_1+\rho_2\\ &amp;\implies-\ln|\rho_1-\rho_2|\ge F\\ &amp;\implies\frac 1{|\rho_1-\rho_2|}\le e^F\implies(\rho_2-\rho_1)e^F\ge 1 \end{align}$$ or something like that....</p>
For example if <span class="math-container">$ρ_1(x)=1/(x+2)$</span> and <span class="math-container">$ρ_2(x)=1/(x+1)$</span> then <span class="math-container">$ρ_i'+ρ_i^2=0$</span> and <span class="math-container">$ρ_2(0)-ρ_1(0)=1/2$</span> but <span class="math-container">$ρ_2(1)-ρ_1(1)=1/6.$</span></p> <p>The author may have been thinking of something like the following argument which I will paraphrase from "<a href="https://web.math.sinica.edu.tw/bulletin_ns/20142/2014202.pdf" rel="nofollow noreferrer">New comparison theorems in Riemannian geometry</a>" by Y. Han and coauthors, Theorem 5.1. Suppose <span class="math-container">$ρ_1$</span> and <span class="math-container">$ρ_2$</span> are both <span class="math-container">$1/t+O(1).$</span> Define <span class="math-container">$ϕ_i(t)=t\exp(\int_0^t (ρ_i(s)-1/s)ds).$</span> Then each <span class="math-container">$ϕ_i$</span> is a <span class="math-container">$C^1$</span> function on <span class="math-container">$[0,b)$</span> (i.e. the right-sided derivative exists at zero - in fact it's <span class="math-container">$1$</span>), smooth and strictly positive on <span class="math-container">$(0,b),$</span> with <span class="math-container">$ϕ_i(0)=0.$</span> Differentiating gives <span class="math-container">$ϕ'_i(t)=ρ_i(t) ϕ_i(t)$</span> and hence <span class="math-container">$ϕ''_i(t)=(ρ'_i(t)+ρ_i(t)^2)ϕ_i(t).$</span> This leads to <span class="math-container">$$(ϕ_2'ϕ_1-ϕ_1'ϕ_2)'=ϕ_2''ϕ_1-ϕ_1''ϕ_2=((ρ'_2+ρ_2^2)-(ρ'_1+ρ_1^2))ϕ_1ϕ_2\geq 0.$$</span> Using the fact that <span class="math-container">$ϕ_2'(0)ϕ_1(0)-ϕ_1'(0)ϕ_2(0)=0$</span> we get <span class="math-container">$ϕ_2'ϕ_1\geq ϕ_1'ϕ_2.$</span> Using our previous equation for <span class="math-container">$ϕ_i',$</span> this means <span class="math-container">$ρ_2(t)\geq ρ_1(t).$</span></p> Answer 28: <p>Note that $$\begin{align}\frac d{dt}\left[(\rho_2-\rho_1)e^F\right]&amp;=(\rho'_2-\rho'_1)e^F+(\rho_2-\rho_1)(\rho_2+\rho_1)e^F\\ 
&amp;=e^F\left(\rho'_2-\rho'_1+\rho_2^2-\rho_1^2\right) \end{align}$$ and since $e^F&gt;0$, your inequality implies that the <em>derivative</em> of $(\rho_2-\rho_1)e^F$ is positive. So the function $(\rho_2-\rho_1)e^F$ is increasing.</p> <p>On the other hand, a careless approach <strong>with some possible errors</strong> could be this: $$\begin{align} \rho_1'+ \rho_1^2 \leq \rho_2'+ \rho_2^2 &amp;\implies\rho_1'-\rho_2' \leq \rho_2^2- \rho_1^2\\ &amp;\implies-\dfrac{\rho_1'-\rho_2'}{\rho_1-\rho_2}\ge\rho_1+\rho_2\\ &amp;\implies-\ln|\rho_1-\rho_2|\ge F\\ &amp;\implies\frac 1{|\rho_1-\rho_2|}\le e^F\implies(\rho_2-\rho_1)e^F\ge 1 \end{align}$$ or something like that....</p> Answer 29: <p>The book is wrong and it's easy to find counterexamples. For example if <span class="math-container">$ρ_1(x)=1/(x+2)$</span> and <span class="math-container">$ρ_2(x)=1/(x+1)$</span> then <span class="math-container">$ρ_i'+ρ_i^2=0$</span> and <span class="math-container">$ρ_2(0)-ρ_1(0)=1/2$</span> but <span class="math-container">$ρ_2(1)-ρ_1(1)=1/6.$</span></p> <p>The author may have been thinking of something like the following argument which I will paraphrase from "<a href="https://web.math.sinica.edu.tw/bulletin_ns/20142/2014202.pdf" rel="nofollow noreferrer">New comparison theorems in Riemannian geometry</a>" by Y. Han and coauthors, Theorem 5.1. Suppose <span class="math-container">$ρ_1$</span> and <span class="math-container">$ρ_2$</span> are both <span class="math-container">$1/t+O(1).$</span> Define <span class="math-container">$ϕ_i(t)=t\exp(\int_0^t (ρ_i(s)-1/s)ds).$</span> Then each <span class="math-container">$ϕ_i$</span> is a <span class="math-container">$C^1$</span> function on <span class="math-container">$[0,b)$</span> (i.e. 
the right-sided derivative exists at zero - in fact it's <span class="math-container">$1$</span>), smooth and strictly positive on <span class="math-container">$(0,b),$</span> with <span class="math-container">$ϕ_i(0)=0.$</span> Differentiating gives <span class="math-container">$ϕ'_i(t)=ρ_i(t) ϕ_i(t)$</span> and hence <span class="math-container">$ϕ''_i(t)=(ρ'_i(t)+ρ_i(t)^2)ϕ_i(t).$</span> This leads to <span class="math-container">$$(ϕ_2'ϕ_1-ϕ_1'ϕ_2)'=ϕ_2''ϕ_1-ϕ_1''ϕ_2=((ρ'_2+ρ_2^2)-(ρ'_1+ρ_1^2))ϕ_1ϕ_2\geq 0.$$</span> Using the fact that <span class="math-container">$ϕ_2'(0)ϕ_1(0)-ϕ_1'(0)ϕ_2(0)=0$</span> we get <span class="math-container">$ϕ_2'ϕ_1\geq ϕ_1'ϕ_2.$</span> Using our previous equation for <span class="math-container">$ϕ_i',$</span> this means <span class="math-container">$ρ_2(t)\geq ρ_1(t).$</span></p> Answer 30: <p>Note that $$\begin{align}\frac d{dt}\left[(\rho_2-\rho_1)e^F\right]&amp;=(\rho'_2-\rho'_1)e^F+(\rho_2-\rho_1)(\rho_2+\rho_1)e^F\\ &amp;=e^F\left(\rho'_2-\rho'_1+\rho_2^2-\rho_1^2\right) \end{align}$$ and since $e^F&gt;0$, your inequality implies that the <em>derivative</em> of $(\rho_2-\rho_1)e^F$ is positive. So the function $(\rho_2-\rho_1)e^F$ is increasing.</p> <p>On the other hand, a careless approach <strong>with some possible errors</strong> could be this: $$\begin{align} \rho_1'+ \rho_1^2 \leq \rho_2'+ \rho_2^2 &amp;\implies\rho_1'-\rho_2' \leq \rho_2^2- \rho_1^2\\ &amp;\implies-\dfrac{\rho_1'-\rho_2'}{\rho_1-\rho_2}\ge\rho_1+\rho_2\\ &amp;\implies-\ln|\rho_1-\rho_2|\ge F\\ &amp;\implies\frac 1{|\rho_1-\rho_2|}\le e^F\implies(\rho_2-\rho_1)e^F\ge 1 \end{align}$$ or something like that....</p> Answer 31: <p>The book is wrong and it's easy to find counterexamples. 
For example if <span class="math-container">$ρ_1(x)=1/(x+2)$</span> and <span class="math-container">$ρ_2(x)=1/(x+1)$</span> then <span class="math-container">$ρ_i'+ρ_i^2=0$</span> and <span class="math-container">$ρ_2(0)-ρ_1(0)=1/2$</span> but <span class="math-container">$ρ_2(1)-ρ_1(1)=1/6.$</span></p> <p>The author may have been thinking of something like the following argument which I will paraphrase from "<a href="https://web.math.sinica.edu.tw/bulletin_ns/20142/2014202.pdf" rel="nofollow noreferrer">New comparison theorems in Riemannian geometry</a>" by Y. Han and coauthors, Theorem 5.1. Suppose <span class="math-container">$ρ_1$</span> and <span class="math-container">$ρ_2$</span> are both <span class="math-container">$1/t+O(1).$</span> Define <span class="math-container">$ϕ_i(t)=t\exp(\int_0^t (ρ_i(s)-1/s)ds).$</span> Then each <span class="math-container">$ϕ_i$</span> is a <span class="math-container">$C^1$</span> function on <span class="math-container">$[0,b)$</span> (i.e. the right-sided derivative exists at zero - in fact it's <span class="math-container">$1$</span>), smooth and strictly positive on <span class="math-container">$(0,b),$</span> with <span class="math-container">$ϕ_i(0)=0.$</span> Differentiating gives <span class="math-container">$ϕ'_i(t)=ρ_i(t) ϕ_i(t)$</span> and hence <span class="math-container">$ϕ''_i(t)=(ρ'_i(t)+ρ_i(t)^2)ϕ_i(t).$</span> This leads to <span class="math-container">$$(ϕ_2'ϕ_1-ϕ_1'ϕ_2)'=ϕ_2''ϕ_1-ϕ_1''ϕ_2=((ρ'_2+ρ_2^2)-(ρ'_1+ρ_1^2))ϕ_1ϕ_2\geq 0.$$</span> Using the fact that <span class="math-container">$ϕ_2'(0)ϕ_1(0)-ϕ_1'(0)ϕ_2(0)=0$</span> we get <span class="math-container">$ϕ_2'ϕ_1\geq ϕ_1'ϕ_2.$</span> Using our previous equation for <span class="math-container">$ϕ_i',$</span> this means <span class="math-container">$ρ_2(t)\geq ρ_1(t).$</span></p> Answer 32: <p>Note that $$\begin{align}\frac d{dt}\left[(\rho_2-\rho_1)e^F\right]&amp;=(\rho'_2-\rho'_1)e^F+(\rho_2-\rho_1)(\rho_2+\rho_1)e^F\\ 
&amp;=e^F\left(\rho'_2-\rho'_1+\rho_2^2-\rho_1^2\right) \end{align}$$ and since $e^F&gt;0$, your inequality implies that the <em>derivative</em> of $(\rho_2-\rho_1)e^F$ is positive. So the function $(\rho_2-\rho_1)e^F$ is increasing.</p> <p>On the other hand, a careless approach <strong>with some possible errors</strong> could be this: $$\begin{align} \rho_1'+ \rho_1^2 \leq \rho_2'+ \rho_2^2 &amp;\implies\rho_1'-\rho_2' \leq \rho_2^2- \rho_1^2\\ &amp;\implies-\dfrac{\rho_1'-\rho_2'}{\rho_1-\rho_2}\ge\rho_1+\rho_2\\ &amp;\implies-\ln|\rho_1-\rho_2|\ge F\\ &amp;\implies\frac 1{|\rho_1-\rho_2|}\le e^F\implies(\rho_2-\rho_1)e^F\ge 1 \end{align}$$ or something like that....</p> Answer 33: <p>The book is wrong and it's easy to find counterexamples. For example if <span class="math-container">$ρ_1(x)=1/(x+2)$</span> and <span class="math-container">$ρ_2(x)=1/(x+1)$</span> then <span class="math-container">$ρ_i'+ρ_i^2=0$</span> and <span class="math-container">$ρ_2(0)-ρ_1(0)=1/2$</span> but <span class="math-container">$ρ_2(1)-ρ_1(1)=1/6.$</span></p> <p>The author may have been thinking of something like the following argument which I will paraphrase from "<a href="https://web.math.sinica.edu.tw/bulletin_ns/20142/2014202.pdf" rel="nofollow noreferrer">New comparison theorems in Riemannian geometry</a>" by Y. Han and coauthors, Theorem 5.1. Suppose <span class="math-container">$ρ_1$</span> and <span class="math-container">$ρ_2$</span> are both <span class="math-container">$1/t+O(1).$</span> Define <span class="math-container">$ϕ_i(t)=t\exp(\int_0^t (ρ_i(s)-1/s)ds).$</span> Then each <span class="math-container">$ϕ_i$</span> is a <span class="math-container">$C^1$</span> function on <span class="math-container">$[0,b)$</span> (i.e. 
the right-sided derivative exists at zero - in fact it's <span class="math-container">$1$</span>), smooth and strictly positive on <span class="math-container">$(0,b),$</span> with <span class="math-container">$ϕ_i(0)=0.$</span> Differentiating gives <span class="math-container">$ϕ'_i(t)=ρ_i(t) ϕ_i(t)$</span> and hence <span class="math-container">$ϕ''_i(t)=(ρ'_i(t)+ρ_i(t)^2)ϕ_i(t).$</span> This leads to <span class="math-container">$$(ϕ_2'ϕ_1-ϕ_1'ϕ_2)'=ϕ_2''ϕ_1-ϕ_1''ϕ_2=((ρ'_2+ρ_2^2)-(ρ'_1+ρ_1^2))ϕ_1ϕ_2\geq 0.$$</span> Using the fact that <span class="math-container">$ϕ_2'(0)ϕ_1(0)-ϕ_1'(0)ϕ_2(0)=0$</span> we get <span class="math-container">$ϕ_2'ϕ_1\geq ϕ_1'ϕ_2.$</span> Using our previous equation for <span class="math-container">$ϕ_i',$</span> this means <span class="math-container">$ρ_2(t)\geq ρ_1(t).$</span></p> Answer 34: <p>Note that $$\begin{align}\frac d{dt}\left[(\rho_2-\rho_1)e^F\right]&amp;=(\rho'_2-\rho'_1)e^F+(\rho_2-\rho_1)(\rho_2+\rho_1)e^F\\ &amp;=e^F\left(\rho'_2-\rho'_1+\rho_2^2-\rho_1^2\right) \end{align}$$ and since $e^F&gt;0$, your inequality implies that the <em>derivative</em> of $(\rho_2-\rho_1)e^F$ is positive. So the function $(\rho_2-\rho_1)e^F$ is increasing.</p> <p>On the other hand, a careless approach <strong>with some possible errors</strong> could be this: $$\begin{align} \rho_1'+ \rho_1^2 \leq \rho_2'+ \rho_2^2 &amp;\implies\rho_1'-\rho_2' \leq \rho_2^2- \rho_1^2\\ &amp;\implies-\dfrac{\rho_1'-\rho_2'}{\rho_1-\rho_2}\ge\rho_1+\rho_2\\ &amp;\implies-\ln|\rho_1-\rho_2|\ge F\\ &amp;\implies\frac 1{|\rho_1-\rho_2|}\le e^F\implies(\rho_2-\rho_1)e^F\ge 1 \end{align}$$ or something like that....</p>
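The counterexample discussed in this thread ($ρ_1(x)=1/(x+2)$, $ρ_2(x)=1/(x+1)$) can be verified with exact rational arithmetic. A minimal sketch in Python; the helper names are mine, not from the answers:

```python
from fractions import Fraction

# rho(x) = 1/(x + c) satisfies rho'(x) = -1/(x + c)^2 = -rho(x)^2,
# so the Riccati quantity rho' + rho^2 vanishes identically for every shift c.
def rho(c):
    return lambda x: Fraction(1, x + c)

def rho_prime(c):
    return lambda x: -Fraction(1, (x + c) ** 2)

rho1, rho1p = rho(2), rho_prime(2)   # rho_1(x) = 1/(x+2)
rho2, rho2p = rho(1), rho_prime(1)   # rho_2(x) = 1/(x+1)

for x in range(5):
    # The hypothesis rho_1' + rho_1^2 <= rho_2' + rho_2^2 holds with equality:
    assert rho1p(x) + rho1(x) ** 2 == 0
    assert rho2p(x) + rho2(x) ** 2 == 0

# ...yet the difference rho_2 - rho_1 shrinks, so it is not non-decreasing:
assert rho2(0) - rho1(0) == Fraction(1, 2)
assert rho2(1) - rho1(1) == Fraction(1, 6)
```

This matches the accepted fix in the thread: only the weighted difference $(ρ_2-ρ_1)e^F$ is non-decreasing, not $ρ_2-ρ_1$ itself.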
Distinguishing field extensions of $\mathbb{Q}$ of degree $3$.
2,090,400
8
https://math.stackexchange.com/questions/2090400/distinguishing-field-extensions-of-mathbbq-of-degree-3
<p>I came across the following question in an old qualifying exam: </p> <p><em>Find a Galois extension of $\mathbb{Q}$ with Galois group $\mathbb{Z}/3\mathbb{Z} \times \mathbb{Z}/3\mathbb{Z}$</em>. </p> <p>One very simple way to approach this problem is to find two distinct polynomials of degree $3$ with Galois groups $A_3 \cong \mathbb{Z}/3\mathbb{Z}$, and show that the field extensions they generate do not coincide. Then the Galois group must be $\mathbb{Z}/3\mathbb{Z} \times \mathbb{Z}/3\mathbb{Z}$ by <a href="https://books.google.com/books?id=Fge-BwqhqIYC&amp;lpg=PA476&amp;ots=ILscV7Bay2&amp;dq=lang%20algebra&amp;pg=PA267#v=onepage&amp;q&amp;f=false" rel="noreferrer">Lang Th. VI.1.14</a>.</p> <p>There is an abundance of such polynomials; e.g. if $a = k + k^2 + 7$ for some integer $k$, then $X^3 - aX + a$ does the trick (see <a href="http://www.math.uconn.edu/~kconrad/blurbs/galoistheory/cubicquartic.pdf" rel="noreferrer">Conrad</a>). Unfortunately, I have no clue how to show that two such field extensions do not coincide, except for possibly explicitly finding the roots of the two polynomials and then trying to derive a contradiction when expressing a root of one polynomial in terms of the roots of the other. However, doing this would make me very sad, thus my question is: </p> <blockquote> <p>How do I show (elegantly) that two field extensions of $\mathbb{Q}$ of degree $3$ do not coincide? </p> </blockquote>
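As a quick sanity check on the criterion cited from Conrad's note (an irreducible cubic has Galois group $A_3$ exactly when its discriminant is a square), the family $X^3 - aX + a$ with $a = k^2 + k + 7$ can be tested numerically. The sketch and its helper names below are illustrative, not part of the question:

```python
from math import isqrt

# Discriminant of a depressed cubic x^3 + p*x + q.
def disc(p, q):
    return -4 * p ** 3 - 27 * q ** 2

def is_square(n):
    return n >= 0 and isqrt(n) ** 2 == n

# For a = k^2 + k + 7 the cubic is x^3 - a*x + a (p = -a, q = a), and
# disc = 4a^3 - 27a^2 = a^2 * (4a - 27) = a^2 * (2k + 1)^2,
# a perfect square, as the A_3 criterion requires.
for k in range(-10, 11):
    a = k * k + k + 7
    d = disc(-a, a)
    assert d == (a * (2 * k + 1)) ** 2
    assert is_square(d)
```

The algebraic identity $4a - 27 = 4k^2 + 4k + 1 = (2k+1)^2$ is what makes the whole family work.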
[ { "answer_id": 2090910, "score": 7, "text": "<p>If I were answering the exam question, I would have approached the question a little more computationally and specifically. The extensions $\\Bbb Q(\\zeta_7)\\supset\\Bbb Q$ and $\\Bbb Q(\\zeta_9)\\supset\\Bbb Q$ are both of degree six, with cyclic Galois group, so each has a subfield cubic over $\\Bbb Q$. Since $\\Bbb Q(\\zeta_7)$ is ramified only at $7$ and $\\Bbb Q(\\zeta_9)$ is ramified only at $3$, the cubic extensions also are ramified only at $7$ and $3$, respectively. So they’re different.</p>\n" }, { "answer_id": 2090530, "score": 3, "text": "<p>Many pairs of fields can be distinguished by their discriminant. And knowing the discriminant of a defining polynomial instead of that of the field is often good enough.</p>\n\n<p>Recall that if $f$ is an irreducible polynomial over $\\mathbb{Q}$ that defines a number field $K$, then there exists some integer $s$ such that</p>\n\n<p>$$ \\operatorname{disc}(f) = s^2 \\operatorname{disc}(K) $$</p>\n\n<p>Therefore, if you have two irreducible polynomials $f$ and $g$ such that the ratio of their discriminants is not a square, then they cannot define the same field.</p>\n\n<p>Although, as pointed out in the comments, cubics with Galois group $A_3$ are precisely those whose discriminant is a square, so for this approach to work you have to do more than look at the discriminant of a single defining polynomial; e.g.</p>\n\n<ul>\n<li>Obtain the actual field discriminant</li>\n<li>Find polynomials defining the two fields whose discriminants are relatively prime (exploiting the fact that $\\operatorname{disc}(K) \\neq 1$ for every number field $K \\neq \\mathbb{Q}$).</li>\n</ul>\n\n<hr>\n\n<p>Another simple test is to factor an integer. 
If $K$ and $L$ are two fields such that $p\\mathcal{O}_K$ and $p\\mathcal{O}_L$ have factorizations into ideals that don't have the same pattern of degrees and ramification indices, then they aren't the same field.</p>\n\n<p>For example, if $f$ and $g$ are cubics with Galois group $A_3$ and both have discriminant relatively prime to $p$, and $f$ has three roots in $\\mathbf{F}_p$ but $g$ has no roots in $\\mathbf{F}_p$, then the fields defined by $f$ and $g$ are distinct.</p>\n\n<p>In that case, in the field defined by $f$, $(p)$ factors into a product of three ideals of degree $1$, but in the field defined by $g$, $(p)$ is a prime ideal.</p>\n" }, { "answer_id": 2093431, "score": 1, "text": "<p>In your question at the end, you do not specify whether your cubic fields are Galois over $Q$, but the whole context of your post suggests that they are. In this case your problem admits a purely Galois theoretic solution (arithmetic will come into play only over $Q$).</p>\n\n<p>Let $p$ be an odd (for simplification) prime and $k$ be a field of characteristic $\\neq p$. Denote by $G_k$ the absolute Galois group of $k$. Any cyclic extension of $k$ of degree $p$ (say a $C_p$-extension for short) is the fixed field of a subgroup of index $p$ of $G_k$, and any such subgroup is the kernel of a non-trivial homomorphism $G_k \\to C_p$, so that the $C_p$-extensions of $k$ are classified by the non-trivial elements of the group $Hom(G_k, C_p)$. To progress further, introduce the field $K = k(\\mu_p)$, where $\\mu_p$ is the group of $p$-th roots of unity, and consider the restriction map $Hom(G_k, C_p) \\to Hom(G_K, C_p)$. For any Galois module $X$, $\\Delta=Gal(K/k)$ acts on $Hom(G_K, X)$ in the usual way, more precisely via $(\\delta, f) \\in \\Delta \\times Hom(G_K, X) \\to f^{\\delta}$ given by $f^{\\delta}(x)= \\delta(f(\\delta^{-1}(x)))$ for $x \\in G_K$. Note that this action is the only one which is functorial; here $\\Delta$ acts trivially on $C_p$. 
Since $\\Delta$ has order prime to $p$, it is classically known that $Hom(G_k, C_p)\\cong Hom(G_K, C_p)^{\\Delta}$, where the superscript $(.)^{\\Delta}$ denotes the invariants under $\\Delta$. Since $G_K$ acts trivially on $\\mu_p$, one has $Hom(G_K, C_p)\\cong Hom(G_K,\\mu_p)$ as groups, but not as $\\Delta$-modules because of the canonical action of $\\Delta$ defined above. Actually, one can check easily that $Hom(G_K, C_p)\\cong Hom(G_K,\\mu_p)(-1)$, where $(.)(-1)$ denotes the so-called \"Tate twist\", which means that the Galois action on $Hom(G_K,\\mu_p)$ has been replaced by a new action defined by $\\delta^{new}(f)= \\kappa(\\delta)^{-1}.\\delta^{old}(f)$, where $\\kappa$ is the mod $p$ character defined by $\\delta(\\zeta)=\\zeta^{\\kappa(\\delta)}$ for $\\zeta \\in \\mu_p$. Consequently, $Hom(G_k, C_p)$ can be identified with the elements $f\\in Hom(G_K,\\mu_p)$ such that $\\delta(f)=\\kappa(\\delta).f$ (these can be viewed as \"eigenvectors\" corresponding to the \"eigenvalues\" $\\kappa(\\delta)$).</p>\n\n<p>Now, by Kummer theory, $Hom(G_K,\\mu_p)\\cong K^{*}/(K^{*})^p$, which means more concretely that any $C_p$-extension of $K$ is of the form $K(\\sqrt [p]a)$, where $a$ represents a class $\\bar a \\in K^{*}/(K^{*})^p$, and two such extensions $K(\\sqrt [p]a)$ and $K(\\sqrt [p]b)$ coincide iff $\\bar a = \\bar b^j$ for a certain $j\\neq 0$ mod $p$. Summarizing: <em>two $C_p$-extensions $L$ and $L'$ of $k$ coincide iff the composite fields $L.K = K(\\sqrt [p]a)$ and $L'.K = K(\\sqrt [p]b)$ coincide, which means that $\\delta(\\bar a)=\\kappa(\\delta).\\bar a$ and $\\bar a = \\bar b^j$, $j\\neq 0$ mod $p$.</em> This is an \"elegant\" theoretical way to distinguish between two $C_p$-extensions of $k$, but of course, in concrete examples, it requires some computations - although in your particular case $k=Q$ and $p=3$, these do not seem especially complicated. For pure cubic fields, i.e. 
of the form $Q(\\sqrt [3] a)$, with $a \\in Q^{*}$, they even start in an obvious way ./. </p>\n" }
of the form $Q(\\sqrt[3]{a})$, with $a \\in Q^{*}$, they even start in an obvious way.</p>\n" }, { "answer_id": 2090910, "score": 7, "text": "<p>If I were answering the exam question, I would have approached the question a little more computationally and specifically. The extensions $\\Bbb Q(\\zeta_7)\\supset\\Bbb Q$ and $\\Bbb Q(\\zeta_9)\\supset\\Bbb Q$ are both of degree six, with cyclic Galois group, so each has a subfield cubic over $\\Bbb Q$. Since $\\Bbb Q(\\zeta_7)$ is ramified only at $7$ and $\\Bbb Q(\\zeta_9)$ is ramified only at $3$, the cubic extensions also are ramified only at $7$ and $3$, respectively. So they’re different.</p>\n" }, { "answer_id": 2090530, "score": 3, "text": "<p>Many pairs of fields can be distinguished by their discriminant. And knowing the discriminant of a defining polynomial instead of that of the field is often good enough.</p>\n\n<p>Recall that if $f$ is an irreducible polynomial over $\\mathbb{Q}$ that defines a number field $K$, then there exists some integer $s$ such that</p>\n\n<p>$$ \\operatorname{disc}(f) = s^2 \\operatorname{disc}(K) $$</p>\n\n<p>Therefore, if you have two irreducible polynomials $f$ and $g$ such that the ratio of their discriminants is not a square, then they cannot define the same field.</p>\n\n<p>Although, as pointed out in the comments, cubics with Galois group $A_3$ are precisely those whose discriminant is a square, so for this approach to work you have to do more than look at the discriminant of a single defining polynomial; e.g.</p>\n\n<ul>\n<li>Obtain the actual field discriminant</li>\n<li>Find polynomials defining the two fields whose discriminants are relatively prime (exploiting the fact that $\\operatorname{disc}(K) \\neq 1$ for every number field $K \\neq \\mathbb{Q}$).</li>\n</ul>\n\n<hr>\n\n<p>Another simple test is to factor an integer. 
If $K$ and $L$ are two fields such that $p\\mathcal{O}_K$ and $p\\mathcal{O}_L$ have factorizations into ideals that don't have the same pattern of degrees and ramification indices, then they aren't the same field.</p>\n\n<p>For example, if the discriminants of $f$ and $g$ are cubics with galois group $A_3$ and both have discriminant relatively prime to $p$, and $f$ has three roots in $\\mathbf{F}_p$ but $g$ has no roots in $\\mathbf{F}_p$, then the fields defined by $f$ and $g$ are distinct.</p>\n\n<p>In that case, in the field defined by $f$, $(p)$ factors into a product of three ideals of degree $1$, but in the field defined by $g$, $(p)$ is a prime ideal.</p>\n" }, { "answer_id": 2093431, "score": 1, "text": "<p>In your question at the end, you do not specify wheteher your cubic fields are Galois above $Q$, but the whole context of your post suggests that they are. In this case your problem admits a purely Galois theoretic solution (arithmetic will come into play only over $Q$).</p>\n\n<p>Let $p$ be an odd (for simplification) prime and $k$ be a field of characteristic $\\neq p $. Denote by $G_k$ the absolute Galois group of $k$. Any cyclic extension of $k$ of degree $p$ (say a $C_p$- extension for short) is the fixed field of a subgroup of index $p$ of $G_k$, and any such subgroup is the kernel of a non trivial homomorphism $G_k \\to C_p$, so that the $C_p$-extensions of $k$ are classified by the non trivial elements of the group $Hom(G_k, C_p)$. To progress further, introduce the field $K = k(\\mu_p)$, where $\\mu_p$ is the group of $p$-th roots of unity, and consider the restriction map $Hom(G_k, C_p) \\to Hom(G_K, C_p)$. For any Galois module $X$, $\\Delta=Gal(K/k)$ acts on $Hom (G_K , X)$ in the usual way, more precisely via $(\\delta, f) \\in \\Delta \\times Hom(G_K, X) \\to f^{\\delta}$ given by $f^{\\delta}(x)= \\delta (f(\\delta^{-1}(x))$ for $x \\in X$ . Note that this action is the only one which is functorial; here $\\Delta$ acts trivially on $C_p$. 
Since $\\Delta$ has order prime to $p$, it is classically known that $Hom(G_k, C_p)\\cong Hom(G_K, C_p)^{\\Delta}$, where the superscript $(.)^{\\Delta}$ denotes the invariants under $\\Delta$. Since $G_K$ acts trivially on $\\mu_p$, one has $Hom(G_K, C_p)\\cong Hom(G_K,\\mu_p)$ as groups, but not as $\\Delta$-modules because of the canonical action of $\\Delta$ defined above. Actually, one can check easily that $Hom(G_K, C_p)\\cong Hom(G_K,\\mu_p)(-1)$, where $(.)(-1)$ denotes the so called \"Tate twist\", which means that the Galois action on $Hom(G_K,\\mu_p)$ has been replaced by a new action defined by $\\delta^{new} (f)= \\kappa(\\delta)^{-1}.\\delta^{old}(f)$, where $\\kappa$ is the mod $p$ character defined by $\\delta(\\zeta)=\\zeta^{\\kappa(\\delta)}$ for $\\zeta \\in \\mu_p$ . Consequently, $Hom(G_k, C_p)$ can be identied with the elements $f\\in Hom(G_K,\\mu_p)$ such that $\\delta (f)=\\kappa(\\delta).f$ (these can be viewed as \"eigenvectors\" corresponding to the \"eigenvalues\" $\\kappa(\\delta)$.)</p>\n\n<p>Now, by Kummer theory, $Hom(G_K,\\mu_p)\\cong (K^{*}/K^{*})^p$ , which means more concretely that any $C_p$-extension of $K$ is of the form $K(\\sqrt [p]a)$, where $a$ represents a class $\\bar a \\in (K^{*}/K^{*})^p$, and two such extensions $K(\\sqrt [p]a)$ and $K(\\sqrt [p]b)$ coincide iff $\\bar a = \\bar b^j$ for a certain $j\\neq 0$ mod $p$. Summarizing : <em>two $C_p$-extensions $L$ and $L'$ of $k$ coincide iff the composite fields $L.K = K(\\sqrt [p]a)$ and $L'.K = K(\\sqrt [p]b)$ coincide, which means that $\\delta(\\bar a)=\\kappa(\\delta).\\bar a$ and $\\bar a = \\bar b^j$, $j\\neq 0$ mod $p$.</em> This is an \"elegant\" theoretical way to distinguish between two $C_p$-extensions of $k$, but of course, in concrete examples, it requires some computations - although in your paticular case $k=Q$ and $p=3$, these do not seem especially complicated. For pure cubic fields, i.e. 
of the form $Q(\\sqrt [3] a)$, with $a \\in Q^{*}$, they even start in an obvious way ./. </p>\n" }, { "answer_id": 2090910, "score": 7, "text": "<p>If I were answering the exam question, I would have approached the question a little more computationally and specifically. The extensions $\\Bbb Q(\\zeta_7)\\supset\\Bbb Q$ and $\\Bbb Q(\\zeta_9)\\supset\\Bbb Q$ are both of degree six, with cyclic Galois group, so each has a subfield cubic over $\\Bbb Q$. Since $\\Bbb Q(\\zeta_7)$ is ramified only at $7$ and $\\Bbb Q(\\zeta_9)$ is ramified only at $3$, the cubic extensions also are ramified only at $7$ and $3$, respectively. So they’re different.</p>\n" }, { "answer_id": 2090530, "score": 3, "text": "<p>Many pairs fields can be distinguished by their discriminant. And knowing the discriminant of a defining polynomial instead of that of the field is often good enough.</p>\n\n<p>Recall that if $f$ is an irreducible polynomial over $\\mathbb{Q}$ that defines a number field $K$, then there exists some integer $s$ such that</p>\n\n<p>$$ \\operatorname{disc}(f) = s^2 \\operatorname{disc}(K) $$</p>\n\n<p>Therefore, if you have two irreducible polynomials $f$ and $g$ such that the ratio of their discriminants is not a square, then they cannot define the same field.</p>\n\n<p>Although, as pointed out in the comments, cubics with galois group $A_3$ are precisely those whose discriminant is a square, so for this approach to work you have to do more than look at the discriminant of a single defining polynomial; e.g.</p>\n\n<ul>\n<li>Obtain the actual field discriminant</li>\n<li>Find polynomials defining the two fields whose discriminants are relatively prime (exploiting the fact that $\\operatorname{disc}(K) \\neq 1$ for every number field $K \\neq \\mathbb{Q}$).</li>\n</ul>\n\n<hr>\n\n<p>Another simple test is to factor an integer. 
If $K$ and $L$ are two fields such that $p\\mathcal{O}_K$ and $p\\mathcal{O}_L$ have factorizations into ideals that don't have the same pattern of degrees and ramification indices, then they aren't the same field.</p>\n\n<p>For example, if the discriminants of $f$ and $g$ are cubics with galois group $A_3$ and both have discriminant relatively prime to $p$, and $f$ has three roots in $\\mathbf{F}_p$ but $g$ has no roots in $\\mathbf{F}_p$, then the fields defined by $f$ and $g$ are distinct.</p>\n\n<p>In that case, in the field defined by $f$, $(p)$ factors into a product of three ideals of degree $1$, but in the field defined by $g$, $(p)$ is a prime ideal.</p>\n" }, { "answer_id": 2093431, "score": 1, "text": "<p>In your question at the end, you do not specify wheteher your cubic fields are Galois above $Q$, but the whole context of your post suggests that they are. In this case your problem admits a purely Galois theoretic solution (arithmetic will come into play only over $Q$).</p>\n\n<p>Let $p$ be an odd (for simplification) prime and $k$ be a field of characteristic $\\neq p $. Denote by $G_k$ the absolute Galois group of $k$. Any cyclic extension of $k$ of degree $p$ (say a $C_p$- extension for short) is the fixed field of a subgroup of index $p$ of $G_k$, and any such subgroup is the kernel of a non trivial homomorphism $G_k \\to C_p$, so that the $C_p$-extensions of $k$ are classified by the non trivial elements of the group $Hom(G_k, C_p)$. To progress further, introduce the field $K = k(\\mu_p)$, where $\\mu_p$ is the group of $p$-th roots of unity, and consider the restriction map $Hom(G_k, C_p) \\to Hom(G_K, C_p)$. For any Galois module $X$, $\\Delta=Gal(K/k)$ acts on $Hom (G_K , X)$ in the usual way, more precisely via $(\\delta, f) \\in \\Delta \\times Hom(G_K, X) \\to f^{\\delta}$ given by $f^{\\delta}(x)= \\delta (f(\\delta^{-1}(x))$ for $x \\in X$ . Note that this action is the only one which is functorial; here $\\Delta$ acts trivially on $C_p$. 
Since $\\Delta$ has order prime to $p$, it is classically known that $Hom(G_k, C_p)\\cong Hom(G_K, C_p)^{\\Delta}$, where the superscript $(.)^{\\Delta}$ denotes the invariants under $\\Delta$. Since $G_K$ acts trivially on $\\mu_p$, one has $Hom(G_K, C_p)\\cong Hom(G_K,\\mu_p)$ as groups, but not as $\\Delta$-modules because of the canonical action of $\\Delta$ defined above. Actually, one can check easily that $Hom(G_K, C_p)\\cong Hom(G_K,\\mu_p)(-1)$, where $(.)(-1)$ denotes the so called \"Tate twist\", which means that the Galois action on $Hom(G_K,\\mu_p)$ has been replaced by a new action defined by $\\delta^{new} (f)= \\kappa(\\delta)^{-1}.\\delta^{old}(f)$, where $\\kappa$ is the mod $p$ character defined by $\\delta(\\zeta)=\\zeta^{\\kappa(\\delta)}$ for $\\zeta \\in \\mu_p$ . Consequently, $Hom(G_k, C_p)$ can be identied with the elements $f\\in Hom(G_K,\\mu_p)$ such that $\\delta (f)=\\kappa(\\delta).f$ (these can be viewed as \"eigenvectors\" corresponding to the \"eigenvalues\" $\\kappa(\\delta)$.)</p>\n\n<p>Now, by Kummer theory, $Hom(G_K,\\mu_p)\\cong (K^{*}/K^{*})^p$ , which means more concretely that any $C_p$-extension of $K$ is of the form $K(\\sqrt [p]a)$, where $a$ represents a class $\\bar a \\in (K^{*}/K^{*})^p$, and two such extensions $K(\\sqrt [p]a)$ and $K(\\sqrt [p]b)$ coincide iff $\\bar a = \\bar b^j$ for a certain $j\\neq 0$ mod $p$. Summarizing : <em>two $C_p$-extensions $L$ and $L'$ of $k$ coincide iff the composite fields $L.K = K(\\sqrt [p]a)$ and $L'.K = K(\\sqrt [p]b)$ coincide, which means that $\\delta(\\bar a)=\\kappa(\\delta).\\bar a$ and $\\bar a = \\bar b^j$, $j\\neq 0$ mod $p$.</em> This is an \"elegant\" theoretical way to distinguish between two $C_p$-extensions of $k$, but of course, in concrete examples, it requires some computations - although in your paticular case $k=Q$ and $p=3$, these do not seem especially complicated. For pure cubic fields, i.e. 
of the form $Q(\\sqrt [3] a)$, with $a \\in Q^{*}$, they even start in an obvious way ./. </p>\n" }, { "answer_id": 2090910, "score": 7, "text": "<p>If I were answering the exam question, I would have approached the question a little more computationally and specifically. The extensions $\\Bbb Q(\\zeta_7)\\supset\\Bbb Q$ and $\\Bbb Q(\\zeta_9)\\supset\\Bbb Q$ are both of degree six, with cyclic Galois group, so each has a subfield cubic over $\\Bbb Q$. Since $\\Bbb Q(\\zeta_7)$ is ramified only at $7$ and $\\Bbb Q(\\zeta_9)$ is ramified only at $3$, the cubic extensions also are ramified only at $7$ and $3$, respectively. So they’re different.</p>\n" }, { "answer_id": 2090530, "score": 3, "text": "<p>Many pairs fields can be distinguished by their discriminant. And knowing the discriminant of a defining polynomial instead of that of the field is often good enough.</p>\n\n<p>Recall that if $f$ is an irreducible polynomial over $\\mathbb{Q}$ that defines a number field $K$, then there exists some integer $s$ such that</p>\n\n<p>$$ \\operatorname{disc}(f) = s^2 \\operatorname{disc}(K) $$</p>\n\n<p>Therefore, if you have two irreducible polynomials $f$ and $g$ such that the ratio of their discriminants is not a square, then they cannot define the same field.</p>\n\n<p>Although, as pointed out in the comments, cubics with galois group $A_3$ are precisely those whose discriminant is a square, so for this approach to work you have to do more than look at the discriminant of a single defining polynomial; e.g.</p>\n\n<ul>\n<li>Obtain the actual field discriminant</li>\n<li>Find polynomials defining the two fields whose discriminants are relatively prime (exploiting the fact that $\\operatorname{disc}(K) \\neq 1$ for every number field $K \\neq \\mathbb{Q}$).</li>\n</ul>\n\n<hr>\n\n<p>Another simple test is to factor an integer. 
If $K$ and $L$ are two fields such that $p\\mathcal{O}_K$ and $p\\mathcal{O}_L$ have factorizations into ideals that don't have the same pattern of degrees and ramification indices, then they aren't the same field.</p>\n\n<p>For example, if the discriminants of $f$ and $g$ are cubics with galois group $A_3$ and both have discriminant relatively prime to $p$, and $f$ has three roots in $\\mathbf{F}_p$ but $g$ has no roots in $\\mathbf{F}_p$, then the fields defined by $f$ and $g$ are distinct.</p>\n\n<p>In that case, in the field defined by $f$, $(p)$ factors into a product of three ideals of degree $1$, but in the field defined by $g$, $(p)$ is a prime ideal.</p>\n" }, { "answer_id": 2093431, "score": 1, "text": "<p>In your question at the end, you do not specify wheteher your cubic fields are Galois above $Q$, but the whole context of your post suggests that they are. In this case your problem admits a purely Galois theoretic solution (arithmetic will come into play only over $Q$).</p>\n\n<p>Let $p$ be an odd (for simplification) prime and $k$ be a field of characteristic $\\neq p $. Denote by $G_k$ the absolute Galois group of $k$. Any cyclic extension of $k$ of degree $p$ (say a $C_p$- extension for short) is the fixed field of a subgroup of index $p$ of $G_k$, and any such subgroup is the kernel of a non trivial homomorphism $G_k \\to C_p$, so that the $C_p$-extensions of $k$ are classified by the non trivial elements of the group $Hom(G_k, C_p)$. To progress further, introduce the field $K = k(\\mu_p)$, where $\\mu_p$ is the group of $p$-th roots of unity, and consider the restriction map $Hom(G_k, C_p) \\to Hom(G_K, C_p)$. For any Galois module $X$, $\\Delta=Gal(K/k)$ acts on $Hom (G_K , X)$ in the usual way, more precisely via $(\\delta, f) \\in \\Delta \\times Hom(G_K, X) \\to f^{\\delta}$ given by $f^{\\delta}(x)= \\delta (f(\\delta^{-1}(x))$ for $x \\in X$ . Note that this action is the only one which is functorial; here $\\Delta$ acts trivially on $C_p$. 
Since $\\Delta$ has order prime to $p$, it is classically known that $Hom(G_k, C_p)\\cong Hom(G_K, C_p)^{\\Delta}$, where the superscript $(.)^{\\Delta}$ denotes the invariants under $\\Delta$. Since $G_K$ acts trivially on $\\mu_p$, one has $Hom(G_K, C_p)\\cong Hom(G_K,\\mu_p)$ as groups, but not as $\\Delta$-modules because of the canonical action of $\\Delta$ defined above. Actually, one can check easily that $Hom(G_K, C_p)\\cong Hom(G_K,\\mu_p)(-1)$, where $(.)(-1)$ denotes the so called \"Tate twist\", which means that the Galois action on $Hom(G_K,\\mu_p)$ has been replaced by a new action defined by $\\delta^{new} (f)= \\kappa(\\delta)^{-1}.\\delta^{old}(f)$, where $\\kappa$ is the mod $p$ character defined by $\\delta(\\zeta)=\\zeta^{\\kappa(\\delta)}$ for $\\zeta \\in \\mu_p$ . Consequently, $Hom(G_k, C_p)$ can be identied with the elements $f\\in Hom(G_K,\\mu_p)$ such that $\\delta (f)=\\kappa(\\delta).f$ (these can be viewed as \"eigenvectors\" corresponding to the \"eigenvalues\" $\\kappa(\\delta)$.)</p>\n\n<p>Now, by Kummer theory, $Hom(G_K,\\mu_p)\\cong (K^{*}/K^{*})^p$ , which means more concretely that any $C_p$-extension of $K$ is of the form $K(\\sqrt [p]a)$, where $a$ represents a class $\\bar a \\in (K^{*}/K^{*})^p$, and two such extensions $K(\\sqrt [p]a)$ and $K(\\sqrt [p]b)$ coincide iff $\\bar a = \\bar b^j$ for a certain $j\\neq 0$ mod $p$. Summarizing : <em>two $C_p$-extensions $L$ and $L'$ of $k$ coincide iff the composite fields $L.K = K(\\sqrt [p]a)$ and $L'.K = K(\\sqrt [p]b)$ coincide, which means that $\\delta(\\bar a)=\\kappa(\\delta).\\bar a$ and $\\bar a = \\bar b^j$, $j\\neq 0$ mod $p$.</em> This is an \"elegant\" theoretical way to distinguish between two $C_p$-extensions of $k$, but of course, in concrete examples, it requires some computations - although in your paticular case $k=Q$ and $p=3$, these do not seem especially complicated. For pure cubic fields, i.e. 
of the form $Q(\\sqrt [3] a)$, with $a \\in Q^{*}$, they even start in an obvious way ./. </p>\n" }, { "answer_id": 2090910, "score": 7, "text": "<p>If I were answering the exam question, I would have approached the question a little more computationally and specifically. The extensions $\\Bbb Q(\\zeta_7)\\supset\\Bbb Q$ and $\\Bbb Q(\\zeta_9)\\supset\\Bbb Q$ are both of degree six, with cyclic Galois group, so each has a subfield cubic over $\\Bbb Q$. Since $\\Bbb Q(\\zeta_7)$ is ramified only at $7$ and $\\Bbb Q(\\zeta_9)$ is ramified only at $3$, the cubic extensions also are ramified only at $7$ and $3$, respectively. So they’re different.</p>\n" }, { "answer_id": 2090530, "score": 3, "text": "<p>Many pairs fields can be distinguished by their discriminant. And knowing the discriminant of a defining polynomial instead of that of the field is often good enough.</p>\n\n<p>Recall that if $f$ is an irreducible polynomial over $\\mathbb{Q}$ that defines a number field $K$, then there exists some integer $s$ such that</p>\n\n<p>$$ \\operatorname{disc}(f) = s^2 \\operatorname{disc}(K) $$</p>\n\n<p>Therefore, if you have two irreducible polynomials $f$ and $g$ such that the ratio of their discriminants is not a square, then they cannot define the same field.</p>\n\n<p>Although, as pointed out in the comments, cubics with galois group $A_3$ are precisely those whose discriminant is a square, so for this approach to work you have to do more than look at the discriminant of a single defining polynomial; e.g.</p>\n\n<ul>\n<li>Obtain the actual field discriminant</li>\n<li>Find polynomials defining the two fields whose discriminants are relatively prime (exploiting the fact that $\\operatorname{disc}(K) \\neq 1$ for every number field $K \\neq \\mathbb{Q}$).</li>\n</ul>\n\n<hr>\n\n<p>Another simple test is to factor an integer. 
If $K$ and $L$ are two fields such that $p\\mathcal{O}_K$ and $p\\mathcal{O}_L$ have factorizations into ideals that don't have the same pattern of degrees and ramification indices, then they aren't the same field.</p>\n\n<p>For example, if the discriminants of $f$ and $g$ are cubics with galois group $A_3$ and both have discriminant relatively prime to $p$, and $f$ has three roots in $\\mathbf{F}_p$ but $g$ has no roots in $\\mathbf{F}_p$, then the fields defined by $f$ and $g$ are distinct.</p>\n\n<p>In that case, in the field defined by $f$, $(p)$ factors into a product of three ideals of degree $1$, but in the field defined by $g$, $(p)$ is a prime ideal.</p>\n" }, { "answer_id": 2093431, "score": 1, "text": "<p>In your question at the end, you do not specify wheteher your cubic fields are Galois above $Q$, but the whole context of your post suggests that they are. In this case your problem admits a purely Galois theoretic solution (arithmetic will come into play only over $Q$).</p>\n\n<p>Let $p$ be an odd (for simplification) prime and $k$ be a field of characteristic $\\neq p $. Denote by $G_k$ the absolute Galois group of $k$. Any cyclic extension of $k$ of degree $p$ (say a $C_p$- extension for short) is the fixed field of a subgroup of index $p$ of $G_k$, and any such subgroup is the kernel of a non trivial homomorphism $G_k \\to C_p$, so that the $C_p$-extensions of $k$ are classified by the non trivial elements of the group $Hom(G_k, C_p)$. To progress further, introduce the field $K = k(\\mu_p)$, where $\\mu_p$ is the group of $p$-th roots of unity, and consider the restriction map $Hom(G_k, C_p) \\to Hom(G_K, C_p)$. For any Galois module $X$, $\\Delta=Gal(K/k)$ acts on $Hom (G_K , X)$ in the usual way, more precisely via $(\\delta, f) \\in \\Delta \\times Hom(G_K, X) \\to f^{\\delta}$ given by $f^{\\delta}(x)= \\delta (f(\\delta^{-1}(x))$ for $x \\in X$ . Note that this action is the only one which is functorial; here $\\Delta$ acts trivially on $C_p$. 
Since $\\Delta$ has order prime to $p$, it is classically known that $Hom(G_k, C_p)\\cong Hom(G_K, C_p)^{\\Delta}$, where the superscript $(.)^{\\Delta}$ denotes the invariants under $\\Delta$. Since $G_K$ acts trivially on $\\mu_p$, one has $Hom(G_K, C_p)\\cong Hom(G_K,\\mu_p)$ as groups, but not as $\\Delta$-modules because of the canonical action of $\\Delta$ defined above. Actually, one can check easily that $Hom(G_K, C_p)\\cong Hom(G_K,\\mu_p)(-1)$, where $(.)(-1)$ denotes the so called \"Tate twist\", which means that the Galois action on $Hom(G_K,\\mu_p)$ has been replaced by a new action defined by $\\delta^{new} (f)= \\kappa(\\delta)^{-1}.\\delta^{old}(f)$, where $\\kappa$ is the mod $p$ character defined by $\\delta(\\zeta)=\\zeta^{\\kappa(\\delta)}$ for $\\zeta \\in \\mu_p$ . Consequently, $Hom(G_k, C_p)$ can be identied with the elements $f\\in Hom(G_K,\\mu_p)$ such that $\\delta (f)=\\kappa(\\delta).f$ (these can be viewed as \"eigenvectors\" corresponding to the \"eigenvalues\" $\\kappa(\\delta)$.)</p>\n\n<p>Now, by Kummer theory, $Hom(G_K,\\mu_p)\\cong (K^{*}/K^{*})^p$ , which means more concretely that any $C_p$-extension of $K$ is of the form $K(\\sqrt [p]a)$, where $a$ represents a class $\\bar a \\in (K^{*}/K^{*})^p$, and two such extensions $K(\\sqrt [p]a)$ and $K(\\sqrt [p]b)$ coincide iff $\\bar a = \\bar b^j$ for a certain $j\\neq 0$ mod $p$. Summarizing : <em>two $C_p$-extensions $L$ and $L'$ of $k$ coincide iff the composite fields $L.K = K(\\sqrt [p]a)$ and $L'.K = K(\\sqrt [p]b)$ coincide, which means that $\\delta(\\bar a)=\\kappa(\\delta).\\bar a$ and $\\bar a = \\bar b^j$, $j\\neq 0$ mod $p$.</em> This is an \"elegant\" theoretical way to distinguish between two $C_p$-extensions of $k$, but of course, in concrete examples, it requires some computations - although in your paticular case $k=Q$ and $p=3$, these do not seem especially complicated. For pure cubic fields, i.e. 
of the form $Q(\\sqrt [3] a)$, with $a \\in Q^{*}$, they even start in an obvious way ./. </p>\n" }, { "answer_id": 2090910, "score": 7, "text": "<p>If I were answering the exam question, I would have approached the question a little more computationally and specifically. The extensions $\\Bbb Q(\\zeta_7)\\supset\\Bbb Q$ and $\\Bbb Q(\\zeta_9)\\supset\\Bbb Q$ are both of degree six, with cyclic Galois group, so each has a subfield cubic over $\\Bbb Q$. Since $\\Bbb Q(\\zeta_7)$ is ramified only at $7$ and $\\Bbb Q(\\zeta_9)$ is ramified only at $3$, the cubic extensions also are ramified only at $7$ and $3$, respectively. So they’re different.</p>\n" }, { "answer_id": 2090530, "score": 3, "text": "<p>Many pairs fields can be distinguished by their discriminant. And knowing the discriminant of a defining polynomial instead of that of the field is often good enough.</p>\n\n<p>Recall that if $f$ is an irreducible polynomial over $\\mathbb{Q}$ that defines a number field $K$, then there exists some integer $s$ such that</p>\n\n<p>$$ \\operatorname{disc}(f) = s^2 \\operatorname{disc}(K) $$</p>\n\n<p>Therefore, if you have two irreducible polynomials $f$ and $g$ such that the ratio of their discriminants is not a square, then they cannot define the same field.</p>\n\n<p>Although, as pointed out in the comments, cubics with galois group $A_3$ are precisely those whose discriminant is a square, so for this approach to work you have to do more than look at the discriminant of a single defining polynomial; e.g.</p>\n\n<ul>\n<li>Obtain the actual field discriminant</li>\n<li>Find polynomials defining the two fields whose discriminants are relatively prime (exploiting the fact that $\\operatorname{disc}(K) \\neq 1$ for every number field $K \\neq \\mathbb{Q}$).</li>\n</ul>\n\n<hr>\n\n<p>Another simple test is to factor an integer. 
If $K$ and $L$ are two fields such that $p\\mathcal{O}_K$ and $p\\mathcal{O}_L$ have factorizations into ideals that don't have the same pattern of degrees and ramification indices, then they aren't the same field.</p>\n\n<p>For example, if the discriminants of $f$ and $g$ are cubics with galois group $A_3$ and both have discriminant relatively prime to $p$, and $f$ has three roots in $\\mathbf{F}_p$ but $g$ has no roots in $\\mathbf{F}_p$, then the fields defined by $f$ and $g$ are distinct.</p>\n\n<p>In that case, in the field defined by $f$, $(p)$ factors into a product of three ideals of degree $1$, but in the field defined by $g$, $(p)$ is a prime ideal.</p>\n" }, { "answer_id": 2093431, "score": 1, "text": "<p>In your question at the end, you do not specify wheteher your cubic fields are Galois above $Q$, but the whole context of your post suggests that they are. In this case your problem admits a purely Galois theoretic solution (arithmetic will come into play only over $Q$).</p>\n\n<p>Let $p$ be an odd (for simplification) prime and $k$ be a field of characteristic $\\neq p $. Denote by $G_k$ the absolute Galois group of $k$. Any cyclic extension of $k$ of degree $p$ (say a $C_p$- extension for short) is the fixed field of a subgroup of index $p$ of $G_k$, and any such subgroup is the kernel of a non trivial homomorphism $G_k \\to C_p$, so that the $C_p$-extensions of $k$ are classified by the non trivial elements of the group $Hom(G_k, C_p)$. To progress further, introduce the field $K = k(\\mu_p)$, where $\\mu_p$ is the group of $p$-th roots of unity, and consider the restriction map $Hom(G_k, C_p) \\to Hom(G_K, C_p)$. For any Galois module $X$, $\\Delta=Gal(K/k)$ acts on $Hom (G_K , X)$ in the usual way, more precisely via $(\\delta, f) \\in \\Delta \\times Hom(G_K, X) \\to f^{\\delta}$ given by $f^{\\delta}(x)= \\delta (f(\\delta^{-1}(x))$ for $x \\in X$ . Note that this action is the only one which is functorial; here $\\Delta$ acts trivially on $C_p$. 
Since $\\Delta$ has order prime to $p$, it is classically known that $Hom(G_k, C_p)\\cong Hom(G_K, C_p)^{\\Delta}$, where the superscript $(.)^{\\Delta}$ denotes the invariants under $\\Delta$. Since $G_K$ acts trivially on $\\mu_p$, one has $Hom(G_K, C_p)\\cong Hom(G_K,\\mu_p)$ as groups, but not as $\\Delta$-modules because of the canonical action of $\\Delta$ defined above. Actually, one can check easily that $Hom(G_K, C_p)\\cong Hom(G_K,\\mu_p)(-1)$, where $(.)(-1)$ denotes the so called \"Tate twist\", which means that the Galois action on $Hom(G_K,\\mu_p)$ has been replaced by a new action defined by $\\delta^{new} (f)= \\kappa(\\delta)^{-1}.\\delta^{old}(f)$, where $\\kappa$ is the mod $p$ character defined by $\\delta(\\zeta)=\\zeta^{\\kappa(\\delta)}$ for $\\zeta \\in \\mu_p$ . Consequently, $Hom(G_k, C_p)$ can be identied with the elements $f\\in Hom(G_K,\\mu_p)$ such that $\\delta (f)=\\kappa(\\delta).f$ (these can be viewed as \"eigenvectors\" corresponding to the \"eigenvalues\" $\\kappa(\\delta)$.)</p>\n\n<p>Now, by Kummer theory, $Hom(G_K,\\mu_p)\\cong (K^{*}/K^{*})^p$ , which means more concretely that any $C_p$-extension of $K$ is of the form $K(\\sqrt [p]a)$, where $a$ represents a class $\\bar a \\in (K^{*}/K^{*})^p$, and two such extensions $K(\\sqrt [p]a)$ and $K(\\sqrt [p]b)$ coincide iff $\\bar a = \\bar b^j$ for a certain $j\\neq 0$ mod $p$. Summarizing : <em>two $C_p$-extensions $L$ and $L'$ of $k$ coincide iff the composite fields $L.K = K(\\sqrt [p]a)$ and $L'.K = K(\\sqrt [p]b)$ coincide, which means that $\\delta(\\bar a)=\\kappa(\\delta).\\bar a$ and $\\bar a = \\bar b^j$, $j\\neq 0$ mod $p$.</em> This is an \"elegant\" theoretical way to distinguish between two $C_p$-extensions of $k$, but of course, in concrete examples, it requires some computations - although in your paticular case $k=Q$ and $p=3$, these do not seem especially complicated. For pure cubic fields, i.e. 
of the form $Q(\\sqrt [3] a)$, with $a \\in Q^{*}$, they even start in an obvious way ./. </p>\n" }, { "answer_id": 2090910, "score": 7, "text": "<p>If I were answering the exam question, I would have approached the question a little more computationally and specifically. The extensions $\\Bbb Q(\\zeta_7)\\supset\\Bbb Q$ and $\\Bbb Q(\\zeta_9)\\supset\\Bbb Q$ are both of degree six, with cyclic Galois group, so each has a subfield cubic over $\\Bbb Q$. Since $\\Bbb Q(\\zeta_7)$ is ramified only at $7$ and $\\Bbb Q(\\zeta_9)$ is ramified only at $3$, the cubic extensions also are ramified only at $7$ and $3$, respectively. So they’re different.</p>\n" }, { "answer_id": 2090530, "score": 3, "text": "<p>Many pairs fields can be distinguished by their discriminant. And knowing the discriminant of a defining polynomial instead of that of the field is often good enough.</p>\n\n<p>Recall that if $f$ is an irreducible polynomial over $\\mathbb{Q}$ that defines a number field $K$, then there exists some integer $s$ such that</p>\n\n<p>$$ \\operatorname{disc}(f) = s^2 \\operatorname{disc}(K) $$</p>\n\n<p>Therefore, if you have two irreducible polynomials $f$ and $g$ such that the ratio of their discriminants is not a square, then they cannot define the same field.</p>\n\n<p>Although, as pointed out in the comments, cubics with galois group $A_3$ are precisely those whose discriminant is a square, so for this approach to work you have to do more than look at the discriminant of a single defining polynomial; e.g.</p>\n\n<ul>\n<li>Obtain the actual field discriminant</li>\n<li>Find polynomials defining the two fields whose discriminants are relatively prime (exploiting the fact that $\\operatorname{disc}(K) \\neq 1$ for every number field $K \\neq \\mathbb{Q}$).</li>\n</ul>\n\n<hr>\n\n<p>Another simple test is to factor an integer. 
If $K$ and $L$ are two fields such that $p\\mathcal{O}_K$ and $p\\mathcal{O}_L$ have factorizations into ideals that don't have the same pattern of degrees and ramification indices, then they aren't the same field.</p>\n\n<p>For example, if the discriminants of $f$ and $g$ are cubics with galois group $A_3$ and both have discriminant relatively prime to $p$, and $f$ has three roots in $\\mathbf{F}_p$ but $g$ has no roots in $\\mathbf{F}_p$, then the fields defined by $f$ and $g$ are distinct.</p>\n\n<p>In that case, in the field defined by $f$, $(p)$ factors into a product of three ideals of degree $1$, but in the field defined by $g$, $(p)$ is a prime ideal.</p>\n" }, { "answer_id": 2093431, "score": 1, "text": "<p>In your question at the end, you do not specify wheteher your cubic fields are Galois above $Q$, but the whole context of your post suggests that they are. In this case your problem admits a purely Galois theoretic solution (arithmetic will come into play only over $Q$).</p>\n\n<p>Let $p$ be an odd (for simplification) prime and $k$ be a field of characteristic $\\neq p $. Denote by $G_k$ the absolute Galois group of $k$. Any cyclic extension of $k$ of degree $p$ (say a $C_p$- extension for short) is the fixed field of a subgroup of index $p$ of $G_k$, and any such subgroup is the kernel of a non trivial homomorphism $G_k \\to C_p$, so that the $C_p$-extensions of $k$ are classified by the non trivial elements of the group $Hom(G_k, C_p)$. To progress further, introduce the field $K = k(\\mu_p)$, where $\\mu_p$ is the group of $p$-th roots of unity, and consider the restriction map $Hom(G_k, C_p) \\to Hom(G_K, C_p)$. For any Galois module $X$, $\\Delta=Gal(K/k)$ acts on $Hom (G_K , X)$ in the usual way, more precisely via $(\\delta, f) \\in \\Delta \\times Hom(G_K, X) \\to f^{\\delta}$ given by $f^{\\delta}(x)= \\delta (f(\\delta^{-1}(x))$ for $x \\in X$ . Note that this action is the only one which is functorial; here $\\Delta$ acts trivially on $C_p$. 
Since $\\Delta$ has order prime to $p$, it is classically known that $Hom(G_k, C_p)\\cong Hom(G_K, C_p)^{\\Delta}$, where the superscript $(.)^{\\Delta}$ denotes the invariants under $\\Delta$. Since $G_K$ acts trivially on $\\mu_p$, one has $Hom(G_K, C_p)\\cong Hom(G_K,\\mu_p)$ as groups, but not as $\\Delta$-modules because of the canonical action of $\\Delta$ defined above. Actually, one can check easily that $Hom(G_K, C_p)\\cong Hom(G_K,\\mu_p)(-1)$, where $(.)(-1)$ denotes the so-called \"Tate twist\", which means that the Galois action on $Hom(G_K,\\mu_p)$ has been replaced by a new action defined by $\\delta^{new} (f)= \\kappa(\\delta)^{-1}.\\delta^{old}(f)$, where $\\kappa$ is the mod $p$ character defined by $\\delta(\\zeta)=\\zeta^{\\kappa(\\delta)}$ for $\\zeta \\in \\mu_p$. Consequently, $Hom(G_k, C_p)$ can be identified with the elements $f\\in Hom(G_K,\\mu_p)$ such that $\\delta (f)=\\kappa(\\delta).f$ (these can be viewed as \"eigenvectors\" corresponding to the \"eigenvalues\" $\\kappa(\\delta)$).</p>\n\n<p>Now, by Kummer theory, $Hom(G_K,\\mu_p)\\cong K^{*}/(K^{*})^{p}$, which means more concretely that any $C_p$-extension of $K$ is of the form $K(\\sqrt [p]a)$, where $a$ represents a class $\\bar a \\in K^{*}/(K^{*})^{p}$, and two such extensions $K(\\sqrt [p]a)$ and $K(\\sqrt [p]b)$ coincide iff $\\bar a = \\bar b^j$ for a certain $j\\neq 0$ mod $p$. Summarizing: <em>two $C_p$-extensions $L$ and $L'$ of $k$ coincide iff the composite fields $L.K = K(\\sqrt [p]a)$ and $L'.K = K(\\sqrt [p]b)$ coincide, which means that $\\delta(\\bar a)=\\kappa(\\delta).\\bar a$ and $\\bar a = \\bar b^j$, $j\\neq 0$ mod $p$.</em> This is an \"elegant\" theoretical way to distinguish between two $C_p$-extensions of $k$, but of course, in concrete examples, it requires some computations - although in your particular case $k=Q$ and $p=3$, these do not seem especially complicated. For pure cubic fields, i.e. 
of the form $Q(\\sqrt [3] a)$, with $a \\in Q^{*}$, they even start in an obvious way ./. </p>\n" }, { "answer_id": 2090910, "score": 7, "text": "<p>If I were answering the exam question, I would have approached the question a little more computationally and specifically. The extensions $\\Bbb Q(\\zeta_7)\\supset\\Bbb Q$ and $\\Bbb Q(\\zeta_9)\\supset\\Bbb Q$ are both of degree six, with cyclic Galois group, so each has a subfield cubic over $\\Bbb Q$. Since $\\Bbb Q(\\zeta_7)$ is ramified only at $7$ and $\\Bbb Q(\\zeta_9)$ is ramified only at $3$, the cubic extensions also are ramified only at $7$ and $3$, respectively. So they’re different.</p>\n" }, { "answer_id": 2090530, "score": 3, "text": "<p>Many pairs of fields can be distinguished by their discriminant. And knowing the discriminant of a defining polynomial instead of that of the field is often good enough.</p>\n\n<p>Recall that if $f$ is an irreducible polynomial over $\\mathbb{Q}$ that defines a number field $K$, then there exists some integer $s$ such that</p>\n\n<p>$$ \\operatorname{disc}(f) = s^2 \\operatorname{disc}(K) $$</p>\n\n<p>Therefore, if you have two irreducible polynomials $f$ and $g$ such that the ratio of their discriminants is not a square, then they cannot define the same field.</p>\n\n<p>Although, as pointed out in the comments, cubics with Galois group $A_3$ are precisely those whose discriminant is a square, so for this approach to work you have to do more than look at the discriminant of a single defining polynomial; e.g.</p>\n\n<ul>\n<li>Obtain the actual field discriminant</li>\n<li>Find polynomials defining the two fields whose discriminants are relatively prime (exploiting the fact that $\\operatorname{disc}(K) \\neq 1$ for every number field $K \\neq \\mathbb{Q}$).</li>\n</ul>\n\n<hr>\n\n<p>Another simple test is to factor an integer. 
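To see why the discriminant test alone cannot settle the $A_3$ case, here is a quick check (a sketch with invented helper names, not from the answer) on the same two cubic subfields:

```python
from math import isqrt

def disc_monic_cubic(a, b, c):
    """Discriminant of the monic cubic x^3 + a*x^2 + b*x + c."""
    return 18*a*b*c - 4*a**3*c + a**2*b**2 - 4*b**3 - 27*c**2

def is_perfect_square(n):
    return n >= 0 and isqrt(n) ** 2 == n

d7 = disc_monic_cubic(1, -2, -1)  # x^3 + x^2 - 2x - 1: cubic subfield of Q(zeta_7)
d9 = disc_monic_cubic(0, -3, 1)   # x^3 - 3x + 1:       cubic subfield of Q(zeta_9)

print(d7, d9)                                        # 49 81
print(is_perfect_square(d7), is_perfect_square(d9))  # True True
```

Both discriminants are squares (as they must be for $A_3$ cubics), and their ratio 81/49 = (9/7)^2 is a square as well, so comparing polynomial discriminants is inconclusive for this pair; the prime-factorization test settles it instead.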
If $K$ and $L$ are two fields such that $p\\mathcal{O}_K$ and $p\\mathcal{O}_L$ have factorizations into ideals that don't have the same pattern of degrees and ramification indices, then they aren't the same field.</p>\n\n<p>For example, if $f$ and $g$ are cubics with Galois group $A_3$, both with discriminant relatively prime to $p$, and $f$ has three roots in $\\mathbf{F}_p$ but $g$ has no roots in $\\mathbf{F}_p$, then the fields defined by $f$ and $g$ are distinct.</p>\n\n<p>In that case, in the field defined by $f$, $(p)$ factors into a product of three ideals of degree $1$, but in the field defined by $g$, $(p)$ is a prime ideal.</p>\n" } ] 
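The splitting test from the answers can be sketched concretely. A minimal, stdlib-only Python sketch; the two defining polynomials are illustrative assumptions on my part (the standard minimal polynomials of $2\cos(2\pi/7)$ and $2\cos(2\pi/9)$, generating the cubic subfields of $\Bbb Q(\zeta_7)$ and $\Bbb Q(\zeta_9)$), not taken from the answers themselves:

```python
# Sketch of the splitting test: for a cubic with square discriminant
# (Galois group A_3), an unramified prime p either splits completely
# (three roots mod p) or stays inert (no roots mod p).

def disc_cubic(coeffs):
    """Discriminant of the monic cubic x^3 + a*x^2 + b*x + c."""
    a, b, c = coeffs
    return 18*a*b*c - 4*a**3*c + a*a*b*b - 4*b**3 - 27*c*c

def roots_mod_p(coeffs, p):
    """Roots in F_p of the monic cubic x^3 + a*x^2 + b*x + c."""
    a, b, c = coeffs
    return [x for x in range(p) if (x**3 + a*x**2 + b*x + c) % p == 0]

f = (1, -2, -1)  # x^3 + x^2 - 2x - 1: cubic subfield of Q(zeta_7) (assumed)
g = (0, -3, 1)   # x^3 - 3x + 1:       cubic subfield of Q(zeta_9) (assumed)

# Both discriminants are squares (49 and 81), as expected for Galois cubics,
# so the discriminant-ratio test alone cannot separate these two fields.
print(disc_cubic(f), disc_cubic(g))  # 49 81

# 13 divides neither discriminant, hence is unramified in both fields:
print(roots_mod_p(f, 13))  # [7, 8, 10] -> (13) splits completely
print(roots_mod_p(g, 13))  # []         -> (13) is inert
# Different splitting behaviour at 13, so the two cubic fields are distinct.
```

Here $13 \equiv -1 \pmod 7$, so its Frobenius is trivial on the $\zeta_7$ cubic and $(13)$ splits, while $13$ has order $3$ modulo $9$, so $(13)$ stays inert in the $\zeta_9$ cubic — consistent with the ramification argument in the first answer.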
Question: Distinguishing field extensions of $\mathbb{Q}$ of degree $3$. Answer 1: <p>If I were answering the exam question, I would have approached the question a little more computationally and specifically. The extensions $\Bbb Q(\zeta_7)\supset\Bbb Q$ and $\Bbb Q(\zeta_9)\supset\Bbb Q$ are both of degree six, with cyclic Galois group, so each has a subfield cubic over $\Bbb Q$. Since $\Bbb Q(\zeta_7)$ is ramified only at $7$ and $\Bbb Q(\zeta_9)$ is ramified only at $3$, the cubic extensions also are ramified only at $7$ and $3$, respectively. So they’re different.</p> Answer 2: <p>Many pairs of fields can be distinguished by their discriminant. And knowing the discriminant of a defining polynomial instead of that of the field is often good enough.</p> <p>Recall that if $f$ is an irreducible polynomial over $\mathbb{Q}$ that defines a number field $K$, then there exists some integer $s$ such that</p> <p>$$ \operatorname{disc}(f) = s^2 \operatorname{disc}(K) $$</p> <p>Therefore, if you have two irreducible polynomials $f$ and $g$ such that the ratio of their discriminants is not a square, then they cannot define the same field.</p> <p>Although, as pointed out in the comments, cubics with Galois group $A_3$ are precisely those whose discriminant is a square, so for this approach to work you have to do more than look at the discriminant of a single defining polynomial; e.g.</p> <ul> <li>Obtain the actual field discriminant</li> <li>Find polynomials defining the two fields whose discriminants are relatively prime (exploiting the fact that $\operatorname{disc}(K) \neq 1$ for every number field $K \neq \mathbb{Q}$).</li> </ul> <hr> <p>Another simple test is to factor an integer. 
If $K$ and $L$ are two fields such that $p\mathcal{O}_K$ and $p\mathcal{O}_L$ have factorizations into ideals that don't have the same pattern of degrees and ramification indices, then they aren't the same field.</p> <p>For example, if $f$ and $g$ are cubics with Galois group $A_3$, both with discriminant relatively prime to $p$, and $f$ has three roots in $\mathbf{F}_p$ but $g$ has no roots in $\mathbf{F}_p$, then the fields defined by $f$ and $g$ are distinct.</p> <p>In that case, in the field defined by $f$, $(p)$ factors into a product of three ideals of degree $1$, but in the field defined by $g$, $(p)$ is a prime ideal.</p> Answer 3: <p>In your question at the end, you do not specify whether your cubic fields are Galois above $Q$, but the whole context of your post suggests that they are. In this case your problem admits a purely Galois theoretic solution (arithmetic will come into play only over $Q$).</p> <p>Let $p$ be an odd prime (for simplicity) and $k$ be a field of characteristic $\neq p$. Denote by $G_k$ the absolute Galois group of $k$. Any cyclic extension of $k$ of degree $p$ (a $C_p$-extension for short) is the fixed field of a subgroup of index $p$ of $G_k$, and any such subgroup is the kernel of a non trivial homomorphism $G_k \to C_p$, so that the $C_p$-extensions of $k$ are classified by the non trivial elements of the group $Hom(G_k, C_p)$. To progress further, introduce the field $K = k(\mu_p)$, where $\mu_p$ is the group of $p$-th roots of unity, and consider the restriction map $Hom(G_k, C_p) \to Hom(G_K, C_p)$. For any Galois module $X$, $\Delta=Gal(K/k)$ acts on $Hom (G_K , X)$ in the usual way, more precisely via $(\delta, f) \in \Delta \times Hom(G_K, X) \to f^{\delta}$ given by $f^{\delta}(x)= \delta (f(\delta^{-1}(x)))$ for $x \in X$. Note that this action is the only one which is functorial; here $\Delta$ acts trivially on $C_p$. 
Since $\Delta$ has order prime to $p$, it is classically known that $Hom(G_k, C_p)\cong Hom(G_K, C_p)^{\Delta}$, where the superscript $(.)^{\Delta}$ denotes the invariants under $\Delta$. Since $G_K$ acts trivially on $\mu_p$, one has $Hom(G_K, C_p)\cong Hom(G_K,\mu_p)$ as groups, but not as $\Delta$-modules because of the canonical action of $\Delta$ defined above. Actually, one can check easily that $Hom(G_K, C_p)\cong Hom(G_K,\mu_p)(-1)$, where $(.)(-1)$ denotes the so-called "Tate twist", which means that the Galois action on $Hom(G_K,\mu_p)$ has been replaced by a new action defined by $\delta^{new} (f)= \kappa(\delta)^{-1}.\delta^{old}(f)$, where $\kappa$ is the mod $p$ character defined by $\delta(\zeta)=\zeta^{\kappa(\delta)}$ for $\zeta \in \mu_p$. Consequently, $Hom(G_k, C_p)$ can be identified with the elements $f\in Hom(G_K,\mu_p)$ such that $\delta (f)=\kappa(\delta).f$ (these can be viewed as "eigenvectors" corresponding to the "eigenvalues" $\kappa(\delta)$.)</p> <p>Now, by Kummer theory, $Hom(G_K,\mu_p)\cong K^{*}/(K^{*})^p$, which means more concretely that any $C_p$-extension of $K$ is of the form $K(\sqrt [p]a)$, where $a$ represents a class $\bar a \in K^{*}/(K^{*})^p$, and two such extensions $K(\sqrt [p]a)$ and $K(\sqrt [p]b)$ coincide iff $\bar a = \bar b^j$ for a certain $j\neq 0$ mod $p$. Summarizing: <em>two $C_p$-extensions $L$ and $L'$ of $k$ coincide iff the composite fields $L.K = K(\sqrt [p]a)$ and $L'.K = K(\sqrt [p]b)$ coincide, which means that $\delta(\bar a)=\kappa(\delta).\bar a$ and $\bar a = \bar b^j$, $j\neq 0$ mod $p$.</em> This is an "elegant" theoretical way to distinguish between two $C_p$-extensions of $k$, but of course, in concrete examples, it requires some computations - although in your particular case $k=Q$ and $p=3$, these do not seem especially complicated. For pure cubic fields, i.e. of the form $Q(\sqrt [3] a)$, with $a \in Q^{*}$, they even start in an obvious way ./. 
</p> Answer 4: <p>If I were answering the exam question, I would have approached the question a little more computationally and specifically. The extensions $\Bbb Q(\zeta_7)\supset\Bbb Q$ and $\Bbb Q(\zeta_9)\supset\Bbb Q$ are both of degree six, with cyclic Galois group, so each has a subfield cubic over $\Bbb Q$. Since $\Bbb Q(\zeta_7)$ is ramified only at $7$ and $\Bbb Q(\zeta_9)$ is ramified only at $3$, the cubic extensions also are ramified only at $7$ and $3$, respectively. So they’re different.</p> Answer 5: <p>Many pairs fields can be distinguished by their discriminant. And knowing the discriminant of a defining polynomial instead of that of the field is often good enough.</p> <p>Recall that if $f$ is an irreducible polynomial over $\mathbb{Q}$ that defines a number field $K$, then there exists some integer $s$ such that</p> <p>$$ \operatorname{disc}(f) = s^2 \operatorname{disc}(K) $$</p> <p>Therefore, if you have two irreducible polynomials $f$ and $g$ such that the ratio of their discriminants is not a square, then they cannot define the same field.</p> <p>Although, as pointed out in the comments, cubics with galois group $A_3$ are precisely those whose discriminant is a square, so for this approach to work you have to do more than look at the discriminant of a single defining polynomial; e.g.</p> <ul> <li>Obtain the actual field discriminant</li> <li>Find polynomials defining the two fields whose discriminants are relatively prime (exploiting the fact that $\operatorname{disc}(K) \neq 1$ for every number field $K \neq \mathbb{Q}$).</li> </ul> <hr> <p>Another simple test is to factor an integer. 
If $K$ and $L$ are two fields such that $p\mathcal{O}_K$ and $p\mathcal{O}_L$ have factorizations into ideals that don't have the same pattern of degrees and ramification indices, then they aren't the same field.</p> <p>For example, if the discriminants of $f$ and $g$ are cubics with galois group $A_3$ and both have discriminant relatively prime to $p$, and $f$ has three roots in $\mathbf{F}_p$ but $g$ has no roots in $\mathbf{F}_p$, then the fields defined by $f$ and $g$ are distinct.</p> <p>In that case, in the field defined by $f$, $(p)$ factors into a product of three ideals of degree $1$, but in the field defined by $g$, $(p)$ is a prime ideal.</p> Answer 6: <p>In your question at the end, you do not specify wheteher your cubic fields are Galois above $Q$, but the whole context of your post suggests that they are. In this case your problem admits a purely Galois theoretic solution (arithmetic will come into play only over $Q$).</p> <p>Let $p$ be an odd (for simplification) prime and $k$ be a field of characteristic $\neq p $. Denote by $G_k$ the absolute Galois group of $k$. Any cyclic extension of $k$ of degree $p$ (say a $C_p$- extension for short) is the fixed field of a subgroup of index $p$ of $G_k$, and any such subgroup is the kernel of a non trivial homomorphism $G_k \to C_p$, so that the $C_p$-extensions of $k$ are classified by the non trivial elements of the group $Hom(G_k, C_p)$. To progress further, introduce the field $K = k(\mu_p)$, where $\mu_p$ is the group of $p$-th roots of unity, and consider the restriction map $Hom(G_k, C_p) \to Hom(G_K, C_p)$. For any Galois module $X$, $\Delta=Gal(K/k)$ acts on $Hom (G_K , X)$ in the usual way, more precisely via $(\delta, f) \in \Delta \times Hom(G_K, X) \to f^{\delta}$ given by $f^{\delta}(x)= \delta (f(\delta^{-1}(x))$ for $x \in X$ . Note that this action is the only one which is functorial; here $\Delta$ acts trivially on $C_p$. 
Since $\Delta$ has order prime to $p$, it is classically known that $Hom(G_k, C_p)\cong Hom(G_K, C_p)^{\Delta}$, where the superscript $(.)^{\Delta}$ denotes the invariants under $\Delta$. Since $G_K$ acts trivially on $\mu_p$, one has $Hom(G_K, C_p)\cong Hom(G_K,\mu_p)$ as groups, but not as $\Delta$-modules because of the canonical action of $\Delta$ defined above. Actually, one can check easily that $Hom(G_K, C_p)\cong Hom(G_K,\mu_p)(-1)$, where $(.)(-1)$ denotes the so called "Tate twist", which means that the Galois action on $Hom(G_K,\mu_p)$ has been replaced by a new action defined by $\delta^{new} (f)= \kappa(\delta)^{-1}.\delta^{old}(f)$, where $\kappa$ is the mod $p$ character defined by $\delta(\zeta)=\zeta^{\kappa(\delta)}$ for $\zeta \in \mu_p$ . Consequently, $Hom(G_k, C_p)$ can be identied with the elements $f\in Hom(G_K,\mu_p)$ such that $\delta (f)=\kappa(\delta).f$ (these can be viewed as "eigenvectors" corresponding to the "eigenvalues" $\kappa(\delta)$.)</p> <p>Now, by Kummer theory, $Hom(G_K,\mu_p)\cong (K^{*}/K^{*})^p$ , which means more concretely that any $C_p$-extension of $K$ is of the form $K(\sqrt [p]a)$, where $a$ represents a class $\bar a \in (K^{*}/K^{*})^p$, and two such extensions $K(\sqrt [p]a)$ and $K(\sqrt [p]b)$ coincide iff $\bar a = \bar b^j$ for a certain $j\neq 0$ mod $p$. Summarizing : <em>two $C_p$-extensions $L$ and $L'$ of $k$ coincide iff the composite fields $L.K = K(\sqrt [p]a)$ and $L'.K = K(\sqrt [p]b)$ coincide, which means that $\delta(\bar a)=\kappa(\delta).\bar a$ and $\bar a = \bar b^j$, $j\neq 0$ mod $p$.</em> This is an "elegant" theoretical way to distinguish between two $C_p$-extensions of $k$, but of course, in concrete examples, it requires some computations - although in your paticular case $k=Q$ and $p=3$, these do not seem especially complicated. For pure cubic fields, i.e. of the form $Q(\sqrt [3] a)$, with $a \in Q^{*}$, they even start in an obvious way ./. 
</p> Answer 7: <p>If I were answering the exam question, I would have approached the question a little more computationally and specifically. The extensions $\Bbb Q(\zeta_7)\supset\Bbb Q$ and $\Bbb Q(\zeta_9)\supset\Bbb Q$ are both of degree six, with cyclic Galois group, so each has a subfield cubic over $\Bbb Q$. Since $\Bbb Q(\zeta_7)$ is ramified only at $7$ and $\Bbb Q(\zeta_9)$ is ramified only at $3$, the cubic extensions also are ramified only at $7$ and $3$, respectively. So they’re different.</p> Answer 8: <p>Many pairs fields can be distinguished by their discriminant. And knowing the discriminant of a defining polynomial instead of that of the field is often good enough.</p> <p>Recall that if $f$ is an irreducible polynomial over $\mathbb{Q}$ that defines a number field $K$, then there exists some integer $s$ such that</p> <p>$$ \operatorname{disc}(f) = s^2 \operatorname{disc}(K) $$</p> <p>Therefore, if you have two irreducible polynomials $f$ and $g$ such that the ratio of their discriminants is not a square, then they cannot define the same field.</p> <p>Although, as pointed out in the comments, cubics with galois group $A_3$ are precisely those whose discriminant is a square, so for this approach to work you have to do more than look at the discriminant of a single defining polynomial; e.g.</p> <ul> <li>Obtain the actual field discriminant</li> <li>Find polynomials defining the two fields whose discriminants are relatively prime (exploiting the fact that $\operatorname{disc}(K) \neq 1$ for every number field $K \neq \mathbb{Q}$).</li> </ul> <hr> <p>Another simple test is to factor an integer. 
If $K$ and $L$ are two fields such that $p\mathcal{O}_K$ and $p\mathcal{O}_L$ have factorizations into ideals that don't have the same pattern of degrees and ramification indices, then they aren't the same field.</p> <p>For example, if the discriminants of $f$ and $g$ are cubics with galois group $A_3$ and both have discriminant relatively prime to $p$, and $f$ has three roots in $\mathbf{F}_p$ but $g$ has no roots in $\mathbf{F}_p$, then the fields defined by $f$ and $g$ are distinct.</p> <p>In that case, in the field defined by $f$, $(p)$ factors into a product of three ideals of degree $1$, but in the field defined by $g$, $(p)$ is a prime ideal.</p> Answer 9: <p>In your question at the end, you do not specify wheteher your cubic fields are Galois above $Q$, but the whole context of your post suggests that they are. In this case your problem admits a purely Galois theoretic solution (arithmetic will come into play only over $Q$).</p> <p>Let $p$ be an odd (for simplification) prime and $k$ be a field of characteristic $\neq p $. Denote by $G_k$ the absolute Galois group of $k$. Any cyclic extension of $k$ of degree $p$ (say a $C_p$- extension for short) is the fixed field of a subgroup of index $p$ of $G_k$, and any such subgroup is the kernel of a non trivial homomorphism $G_k \to C_p$, so that the $C_p$-extensions of $k$ are classified by the non trivial elements of the group $Hom(G_k, C_p)$. To progress further, introduce the field $K = k(\mu_p)$, where $\mu_p$ is the group of $p$-th roots of unity, and consider the restriction map $Hom(G_k, C_p) \to Hom(G_K, C_p)$. For any Galois module $X$, $\Delta=Gal(K/k)$ acts on $Hom (G_K , X)$ in the usual way, more precisely via $(\delta, f) \in \Delta \times Hom(G_K, X) \to f^{\delta}$ given by $f^{\delta}(x)= \delta (f(\delta^{-1}(x))$ for $x \in X$ . Note that this action is the only one which is functorial; here $\Delta$ acts trivially on $C_p$. 
Since $\Delta$ has order prime to $p$, it is classically known that $Hom(G_k, C_p)\cong Hom(G_K, C_p)^{\Delta}$, where the superscript $(.)^{\Delta}$ denotes the invariants under $\Delta$. Since $G_K$ acts trivially on $\mu_p$, one has $Hom(G_K, C_p)\cong Hom(G_K,\mu_p)$ as groups, but not as $\Delta$-modules because of the canonical action of $\Delta$ defined above. Actually, one can check easily that $Hom(G_K, C_p)\cong Hom(G_K,\mu_p)(-1)$, where $(.)(-1)$ denotes the so called "Tate twist", which means that the Galois action on $Hom(G_K,\mu_p)$ has been replaced by a new action defined by $\delta^{new} (f)= \kappa(\delta)^{-1}.\delta^{old}(f)$, where $\kappa$ is the mod $p$ character defined by $\delta(\zeta)=\zeta^{\kappa(\delta)}$ for $\zeta \in \mu_p$ . Consequently, $Hom(G_k, C_p)$ can be identied with the elements $f\in Hom(G_K,\mu_p)$ such that $\delta (f)=\kappa(\delta).f$ (these can be viewed as "eigenvectors" corresponding to the "eigenvalues" $\kappa(\delta)$.)</p> <p>Now, by Kummer theory, $Hom(G_K,\mu_p)\cong (K^{*}/K^{*})^p$ , which means more concretely that any $C_p$-extension of $K$ is of the form $K(\sqrt [p]a)$, where $a$ represents a class $\bar a \in (K^{*}/K^{*})^p$, and two such extensions $K(\sqrt [p]a)$ and $K(\sqrt [p]b)$ coincide iff $\bar a = \bar b^j$ for a certain $j\neq 0$ mod $p$. Summarizing : <em>two $C_p$-extensions $L$ and $L'$ of $k$ coincide iff the composite fields $L.K = K(\sqrt [p]a)$ and $L'.K = K(\sqrt [p]b)$ coincide, which means that $\delta(\bar a)=\kappa(\delta).\bar a$ and $\bar a = \bar b^j$, $j\neq 0$ mod $p$.</em> This is an "elegant" theoretical way to distinguish between two $C_p$-extensions of $k$, but of course, in concrete examples, it requires some computations - although in your paticular case $k=Q$ and $p=3$, these do not seem especially complicated. For pure cubic fields, i.e. of the form $Q(\sqrt [3] a)$, with $a \in Q^{*}$, they even start in an obvious way ./. 
</p> Answer 10: <p>If I were answering the exam question, I would have approached the question a little more computationally and specifically. The extensions $\Bbb Q(\zeta_7)\supset\Bbb Q$ and $\Bbb Q(\zeta_9)\supset\Bbb Q$ are both of degree six, with cyclic Galois group, so each has a subfield cubic over $\Bbb Q$. Since $\Bbb Q(\zeta_7)$ is ramified only at $7$ and $\Bbb Q(\zeta_9)$ is ramified only at $3$, the cubic extensions are also ramified only at $7$ and $3$, respectively. So they’re different.</p> Answer 11: <p>Many pairs of fields can be distinguished by their discriminant. And knowing the discriminant of a defining polynomial instead of that of the field is often good enough.</p> <p>Recall that if $f$ is an irreducible polynomial over $\mathbb{Q}$ that defines a number field $K$, then there exists some integer $s$ such that</p> <p>$$ \operatorname{disc}(f) = s^2 \operatorname{disc}(K) $$</p> <p>Therefore, if you have two irreducible polynomials $f$ and $g$ such that the ratio of their discriminants is not a square, then they cannot define the same field.</p> <p>Although, as pointed out in the comments, cubics with Galois group $A_3$ are precisely those whose discriminant is a square, so for this approach to work you have to do more than look at the discriminant of a single defining polynomial; e.g.</p> <ul> <li>Obtain the actual field discriminant</li> <li>Find polynomials defining the two fields whose discriminants are relatively prime (exploiting the fact that $\operatorname{disc}(K) \neq 1$ for every number field $K \neq \mathbb{Q}$).</li> </ul> <hr> <p>Another simple test is to factor an integer.
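A minimal sketch of this root-counting test, as an editor's illustration (none of the specifics below come from the answer itself): $x^3+x^2-2x-1$ and $x^3-3x+1$ are the classical minimal polynomials of $2\cos(2\pi/7)$ and $2\cos(2\pi/9)$, i.e. defining polynomials for the cubic subfields of $\Bbb Q(\zeta_7)$ and $\Bbb Q(\zeta_9)$; the prime $13$ is chosen because $13\equiv -1 \pmod 7$ but $13\not\equiv\pm 1\pmod 9$.

```python
# Editor's sketch: distinguish two A_3 cubic fields by counting roots of
# their defining polynomials mod p. This reflects the splitting of (p) only
# when p divides neither discriminant, which holds for p = 13 here.

def roots_mod_p(coeffs, p):
    """Roots in F_p of the integer polynomial whose coefficients are
    listed highest degree first (Horner evaluation)."""
    def value(x):
        acc = 0
        for c in coeffs:
            acc = (acc * x + c) % p
        return acc
    return [x for x in range(p) if value(x) == 0]

f = [1, 1, -2, -1]  # x^3 + x^2 - 2x - 1: cubic subfield of Q(zeta_7)
g = [1, 0, -3, 1]   # x^3 - 3x + 1:       cubic subfield of Q(zeta_9)

print(roots_mod_p(f, 13))  # [7, 8, 10] -> (13) splits into three degree-1 primes
print(roots_mod_p(g, 13))  # []         -> (13) stays prime
```

Three roots for the first polynomial and none for the second means $(13)$ splits completely in one field but stays prime in the other, so the two fields differ.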
If $K$ and $L$ are two fields such that $p\mathcal{O}_K$ and $p\mathcal{O}_L$ have factorizations into ideals that don't have the same pattern of degrees and ramification indices, then they aren't the same field.</p> <p>For example, if $f$ and $g$ are cubics with Galois group $A_3$, both with discriminant relatively prime to $p$, and $f$ has three roots in $\mathbf{F}_p$ but $g$ has no roots in $\mathbf{F}_p$, then the fields defined by $f$ and $g$ are distinct.</p> <p>In that case, in the field defined by $f$, $(p)$ factors into a product of three ideals of degree $1$, but in the field defined by $g$, $(p)$ is a prime ideal.</p>
</p> Answer 28: <p>If I were answering the exam question, I would have approached the question a little more computationally and specifically. The extensions $\Bbb Q(\zeta_7)\supset\Bbb Q$ and $\Bbb Q(\zeta_9)\supset\Bbb Q$ are both of degree six, with cyclic Galois group, so each has a subfield cubic over $\Bbb Q$. Since $\Bbb Q(\zeta_7)$ is ramified only at $7$ and $\Bbb Q(\zeta_9)$ is ramified only at $3$, the cubic extensions also are ramified only at $7$ and $3$, respectively. So they’re different.</p>
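The factor-an-integer test from the discriminant answer can also be run by hand on these two cubic subfields: count the roots of each defining cubic modulo a prime $p$ dividing neither discriminant. Taking the standard defining polynomials $x^3+x^2-2x-1$ for the cubic subfield of $\Bbb Q(\zeta_7)$ and $x^3-3x-1$ for that of $\Bbb Q(\zeta_9)$, a small Python sketch at $p=13$:

```python
def roots_mod_p(coeffs, p):
    # coeffs lists integer coefficients, highest degree first
    deg = len(coeffs) - 1
    return [x for x in range(p)
            if sum(c * pow(x, deg - i, p) for i, c in enumerate(coeffs)) % p == 0]

f = [1, 1, -2, -1]  # x^3 + x^2 - 2x - 1: minimal polynomial of 2*cos(2*pi/7)
g = [1, 0, -3, -1]  # x^3 - 3x - 1:       minimal polynomial of 2*cos(2*pi/9)

print(roots_mod_p(f, 13))  # [7, 8, 10] -> (13) splits into three degree-1 primes
print(roots_mod_p(g, 13))  # []         -> (13) stays prime (A_3 cubic, no root)
```

Since the two cubics factor differently modulo $13$, the fields they define are distinct — consistent with the ramification argument above.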
If $K$ and $L$ are two fields such that $p\mathcal{O}_K$ and $p\mathcal{O}_L$ have factorizations into ideals that don't have the same pattern of degrees and ramification indices, then they aren't the same field.</p> <p>For example, if the discriminants of $f$ and $g$ are cubics with galois group $A_3$ and both have discriminant relatively prime to $p$, and $f$ has three roots in $\mathbf{F}_p$ but $g$ has no roots in $\mathbf{F}_p$, then the fields defined by $f$ and $g$ are distinct.</p> <p>In that case, in the field defined by $f$, $(p)$ factors into a product of three ideals of degree $1$, but in the field defined by $g$, $(p)$ is a prime ideal.</p> Answer 30: <p>In your question at the end, you do not specify wheteher your cubic fields are Galois above $Q$, but the whole context of your post suggests that they are. In this case your problem admits a purely Galois theoretic solution (arithmetic will come into play only over $Q$).</p> <p>Let $p$ be an odd (for simplification) prime and $k$ be a field of characteristic $\neq p $. Denote by $G_k$ the absolute Galois group of $k$. Any cyclic extension of $k$ of degree $p$ (say a $C_p$- extension for short) is the fixed field of a subgroup of index $p$ of $G_k$, and any such subgroup is the kernel of a non trivial homomorphism $G_k \to C_p$, so that the $C_p$-extensions of $k$ are classified by the non trivial elements of the group $Hom(G_k, C_p)$. To progress further, introduce the field $K = k(\mu_p)$, where $\mu_p$ is the group of $p$-th roots of unity, and consider the restriction map $Hom(G_k, C_p) \to Hom(G_K, C_p)$. For any Galois module $X$, $\Delta=Gal(K/k)$ acts on $Hom (G_K , X)$ in the usual way, more precisely via $(\delta, f) \in \Delta \times Hom(G_K, X) \to f^{\delta}$ given by $f^{\delta}(x)= \delta (f(\delta^{-1}(x))$ for $x \in X$ . Note that this action is the only one which is functorial; here $\Delta$ acts trivially on $C_p$. 
Since $\Delta$ has order prime to $p$, it is classically known that $Hom(G_k, C_p)\cong Hom(G_K, C_p)^{\Delta}$, where the superscript $(.)^{\Delta}$ denotes the invariants under $\Delta$. Since $G_K$ acts trivially on $\mu_p$, one has $Hom(G_K, C_p)\cong Hom(G_K,\mu_p)$ as groups, but not as $\Delta$-modules because of the canonical action of $\Delta$ defined above. Actually, one can check easily that $Hom(G_K, C_p)\cong Hom(G_K,\mu_p)(-1)$, where $(.)(-1)$ denotes the so called "Tate twist", which means that the Galois action on $Hom(G_K,\mu_p)$ has been replaced by a new action defined by $\delta^{new} (f)= \kappa(\delta)^{-1}.\delta^{old}(f)$, where $\kappa$ is the mod $p$ character defined by $\delta(\zeta)=\zeta^{\kappa(\delta)}$ for $\zeta \in \mu_p$ . Consequently, $Hom(G_k, C_p)$ can be identied with the elements $f\in Hom(G_K,\mu_p)$ such that $\delta (f)=\kappa(\delta).f$ (these can be viewed as "eigenvectors" corresponding to the "eigenvalues" $\kappa(\delta)$.)</p> <p>Now, by Kummer theory, $Hom(G_K,\mu_p)\cong (K^{*}/K^{*})^p$ , which means more concretely that any $C_p$-extension of $K$ is of the form $K(\sqrt [p]a)$, where $a$ represents a class $\bar a \in (K^{*}/K^{*})^p$, and two such extensions $K(\sqrt [p]a)$ and $K(\sqrt [p]b)$ coincide iff $\bar a = \bar b^j$ for a certain $j\neq 0$ mod $p$. Summarizing : <em>two $C_p$-extensions $L$ and $L'$ of $k$ coincide iff the composite fields $L.K = K(\sqrt [p]a)$ and $L'.K = K(\sqrt [p]b)$ coincide, which means that $\delta(\bar a)=\kappa(\delta).\bar a$ and $\bar a = \bar b^j$, $j\neq 0$ mod $p$.</em> This is an "elegant" theoretical way to distinguish between two $C_p$-extensions of $k$, but of course, in concrete examples, it requires some computations - although in your paticular case $k=Q$ and $p=3$, these do not seem especially complicated. For pure cubic fields, i.e. of the form $Q(\sqrt [3] a)$, with $a \in Q^{*}$, they even start in an obvious way ./. 
</p> Answer 31: <p>If I were answering the exam question, I would have approached the question a little more computationally and specifically. The extensions $\Bbb Q(\zeta_7)\supset\Bbb Q$ and $\Bbb Q(\zeta_9)\supset\Bbb Q$ are both of degree six, with cyclic Galois group, so each has a subfield cubic over $\Bbb Q$. Since $\Bbb Q(\zeta_7)$ is ramified only at $7$ and $\Bbb Q(\zeta_9)$ is ramified only at $3$, the cubic extensions also are ramified only at $7$ and $3$, respectively. So they’re different.</p> Answer 32: <p>Many pairs fields can be distinguished by their discriminant. And knowing the discriminant of a defining polynomial instead of that of the field is often good enough.</p> <p>Recall that if $f$ is an irreducible polynomial over $\mathbb{Q}$ that defines a number field $K$, then there exists some integer $s$ such that</p> <p>$$ \operatorname{disc}(f) = s^2 \operatorname{disc}(K) $$</p> <p>Therefore, if you have two irreducible polynomials $f$ and $g$ such that the ratio of their discriminants is not a square, then they cannot define the same field.</p> <p>Although, as pointed out in the comments, cubics with galois group $A_3$ are precisely those whose discriminant is a square, so for this approach to work you have to do more than look at the discriminant of a single defining polynomial; e.g.</p> <ul> <li>Obtain the actual field discriminant</li> <li>Find polynomials defining the two fields whose discriminants are relatively prime (exploiting the fact that $\operatorname{disc}(K) \neq 1$ for every number field $K \neq \mathbb{Q}$).</li> </ul> <hr> <p>Another simple test is to factor an integer. 
If $K$ and $L$ are two fields such that $p\mathcal{O}_K$ and $p\mathcal{O}_L$ have factorizations into ideals that don't have the same pattern of degrees and ramification indices, then they aren't the same field.</p> <p>For example, if the discriminants of $f$ and $g$ are cubics with galois group $A_3$ and both have discriminant relatively prime to $p$, and $f$ has three roots in $\mathbf{F}_p$ but $g$ has no roots in $\mathbf{F}_p$, then the fields defined by $f$ and $g$ are distinct.</p> <p>In that case, in the field defined by $f$, $(p)$ factors into a product of three ideals of degree $1$, but in the field defined by $g$, $(p)$ is a prime ideal.</p> Answer 33: <p>In your question at the end, you do not specify wheteher your cubic fields are Galois above $Q$, but the whole context of your post suggests that they are. In this case your problem admits a purely Galois theoretic solution (arithmetic will come into play only over $Q$).</p> <p>Let $p$ be an odd (for simplification) prime and $k$ be a field of characteristic $\neq p $. Denote by $G_k$ the absolute Galois group of $k$. Any cyclic extension of $k$ of degree $p$ (say a $C_p$- extension for short) is the fixed field of a subgroup of index $p$ of $G_k$, and any such subgroup is the kernel of a non trivial homomorphism $G_k \to C_p$, so that the $C_p$-extensions of $k$ are classified by the non trivial elements of the group $Hom(G_k, C_p)$. To progress further, introduce the field $K = k(\mu_p)$, where $\mu_p$ is the group of $p$-th roots of unity, and consider the restriction map $Hom(G_k, C_p) \to Hom(G_K, C_p)$. For any Galois module $X$, $\Delta=Gal(K/k)$ acts on $Hom (G_K , X)$ in the usual way, more precisely via $(\delta, f) \in \Delta \times Hom(G_K, X) \to f^{\delta}$ given by $f^{\delta}(x)= \delta (f(\delta^{-1}(x))$ for $x \in X$ . Note that this action is the only one which is functorial; here $\Delta$ acts trivially on $C_p$. 
Since $\Delta$ has order prime to $p$, it is classically known that $Hom(G_k, C_p)\cong Hom(G_K, C_p)^{\Delta}$, where the superscript $(.)^{\Delta}$ denotes the invariants under $\Delta$. Since $G_K$ acts trivially on $\mu_p$, one has $Hom(G_K, C_p)\cong Hom(G_K,\mu_p)$ as groups, but not as $\Delta$-modules because of the canonical action of $\Delta$ defined above. Actually, one can check easily that $Hom(G_K, C_p)\cong Hom(G_K,\mu_p)(-1)$, where $(.)(-1)$ denotes the so called "Tate twist", which means that the Galois action on $Hom(G_K,\mu_p)$ has been replaced by a new action defined by $\delta^{new} (f)= \kappa(\delta)^{-1}.\delta^{old}(f)$, where $\kappa$ is the mod $p$ character defined by $\delta(\zeta)=\zeta^{\kappa(\delta)}$ for $\zeta \in \mu_p$ . Consequently, $Hom(G_k, C_p)$ can be identied with the elements $f\in Hom(G_K,\mu_p)$ such that $\delta (f)=\kappa(\delta).f$ (these can be viewed as "eigenvectors" corresponding to the "eigenvalues" $\kappa(\delta)$.)</p> <p>Now, by Kummer theory, $Hom(G_K,\mu_p)\cong (K^{*}/K^{*})^p$ , which means more concretely that any $C_p$-extension of $K$ is of the form $K(\sqrt [p]a)$, where $a$ represents a class $\bar a \in (K^{*}/K^{*})^p$, and two such extensions $K(\sqrt [p]a)$ and $K(\sqrt [p]b)$ coincide iff $\bar a = \bar b^j$ for a certain $j\neq 0$ mod $p$. Summarizing : <em>two $C_p$-extensions $L$ and $L'$ of $k$ coincide iff the composite fields $L.K = K(\sqrt [p]a)$ and $L'.K = K(\sqrt [p]b)$ coincide, which means that $\delta(\bar a)=\kappa(\delta).\bar a$ and $\bar a = \bar b^j$, $j\neq 0$ mod $p$.</em> This is an "elegant" theoretical way to distinguish between two $C_p$-extensions of $k$, but of course, in concrete examples, it requires some computations - although in your paticular case $k=Q$ and $p=3$, these do not seem especially complicated. For pure cubic fields, i.e. of the form $Q(\sqrt [3] a)$, with $a \in Q^{*}$, they even start in an obvious way ./. 
</p> Answer 34: <p>If I were answering the exam question, I would have approached the question a little more computationally and specifically. The extensions $\Bbb Q(\zeta_7)\supset\Bbb Q$ and $\Bbb Q(\zeta_9)\supset\Bbb Q$ are both of degree six, with cyclic Galois group, so each has a subfield cubic over $\Bbb Q$. Since $\Bbb Q(\zeta_7)$ is ramified only at $7$ and $\Bbb Q(\zeta_9)$ is ramified only at $3$, the cubic extensions also are ramified only at $7$ and $3$, respectively. So they’re different.</p> Answer 35: <p>Many pairs fields can be distinguished by their discriminant. And knowing the discriminant of a defining polynomial instead of that of the field is often good enough.</p> <p>Recall that if $f$ is an irreducible polynomial over $\mathbb{Q}$ that defines a number field $K$, then there exists some integer $s$ such that</p> <p>$$ \operatorname{disc}(f) = s^2 \operatorname{disc}(K) $$</p> <p>Therefore, if you have two irreducible polynomials $f$ and $g$ such that the ratio of their discriminants is not a square, then they cannot define the same field.</p> <p>Although, as pointed out in the comments, cubics with galois group $A_3$ are precisely those whose discriminant is a square, so for this approach to work you have to do more than look at the discriminant of a single defining polynomial; e.g.</p> <ul> <li>Obtain the actual field discriminant</li> <li>Find polynomials defining the two fields whose discriminants are relatively prime (exploiting the fact that $\operatorname{disc}(K) \neq 1$ for every number field $K \neq \mathbb{Q}$).</li> </ul> <hr> <p>Another simple test is to factor an integer. 
If $K$ and $L$ are two fields such that $p\mathcal{O}_K$ and $p\mathcal{O}_L$ have factorizations into ideals that don't have the same pattern of degrees and ramification indices, then they aren't the same field.</p> <p>For example, if the discriminants of $f$ and $g$ are cubics with galois group $A_3$ and both have discriminant relatively prime to $p$, and $f$ has three roots in $\mathbf{F}_p$ but $g$ has no roots in $\mathbf{F}_p$, then the fields defined by $f$ and $g$ are distinct.</p> <p>In that case, in the field defined by $f$, $(p)$ factors into a product of three ideals of degree $1$, but in the field defined by $g$, $(p)$ is a prime ideal.</p> Answer 36: <p>In your question at the end, you do not specify wheteher your cubic fields are Galois above $Q$, but the whole context of your post suggests that they are. In this case your problem admits a purely Galois theoretic solution (arithmetic will come into play only over $Q$).</p> <p>Let $p$ be an odd (for simplification) prime and $k$ be a field of characteristic $\neq p $. Denote by $G_k$ the absolute Galois group of $k$. Any cyclic extension of $k$ of degree $p$ (say a $C_p$- extension for short) is the fixed field of a subgroup of index $p$ of $G_k$, and any such subgroup is the kernel of a non trivial homomorphism $G_k \to C_p$, so that the $C_p$-extensions of $k$ are classified by the non trivial elements of the group $Hom(G_k, C_p)$. To progress further, introduce the field $K = k(\mu_p)$, where $\mu_p$ is the group of $p$-th roots of unity, and consider the restriction map $Hom(G_k, C_p) \to Hom(G_K, C_p)$. For any Galois module $X$, $\Delta=Gal(K/k)$ acts on $Hom (G_K , X)$ in the usual way, more precisely via $(\delta, f) \in \Delta \times Hom(G_K, X) \to f^{\delta}$ given by $f^{\delta}(x)= \delta (f(\delta^{-1}(x))$ for $x \in X$ . Note that this action is the only one which is functorial; here $\Delta$ acts trivially on $C_p$. 
Since $\Delta$ has order prime to $p$, it is classically known that $Hom(G_k, C_p)\cong Hom(G_K, C_p)^{\Delta}$, where the superscript $(.)^{\Delta}$ denotes the invariants under $\Delta$. Since $G_K$ acts trivially on $\mu_p$, one has $Hom(G_K, C_p)\cong Hom(G_K,\mu_p)$ as groups, but not as $\Delta$-modules because of the canonical action of $\Delta$ defined above. Actually, one can check easily that $Hom(G_K, C_p)\cong Hom(G_K,\mu_p)(-1)$, where $(.)(-1)$ denotes the so called "Tate twist", which means that the Galois action on $Hom(G_K,\mu_p)$ has been replaced by a new action defined by $\delta^{new} (f)= \kappa(\delta)^{-1}.\delta^{old}(f)$, where $\kappa$ is the mod $p$ character defined by $\delta(\zeta)=\zeta^{\kappa(\delta)}$ for $\zeta \in \mu_p$ . Consequently, $Hom(G_k, C_p)$ can be identied with the elements $f\in Hom(G_K,\mu_p)$ such that $\delta (f)=\kappa(\delta).f$ (these can be viewed as "eigenvectors" corresponding to the "eigenvalues" $\kappa(\delta)$.)</p> <p>Now, by Kummer theory, $Hom(G_K,\mu_p)\cong (K^{*}/K^{*})^p$ , which means more concretely that any $C_p$-extension of $K$ is of the form $K(\sqrt [p]a)$, where $a$ represents a class $\bar a \in (K^{*}/K^{*})^p$, and two such extensions $K(\sqrt [p]a)$ and $K(\sqrt [p]b)$ coincide iff $\bar a = \bar b^j$ for a certain $j\neq 0$ mod $p$. Summarizing : <em>two $C_p$-extensions $L$ and $L'$ of $k$ coincide iff the composite fields $L.K = K(\sqrt [p]a)$ and $L'.K = K(\sqrt [p]b)$ coincide, which means that $\delta(\bar a)=\kappa(\delta).\bar a$ and $\bar a = \bar b^j$, $j\neq 0$ mod $p$.</em> This is an "elegant" theoretical way to distinguish between two $C_p$-extensions of $k$, but of course, in concrete examples, it requires some computations - although in your paticular case $k=Q$ and $p=3$, these do not seem especially complicated. For pure cubic fields, i.e. of the form $Q(\sqrt [3] a)$, with $a \in Q^{*}$, they even start in an obvious way ./. 
</p> Answer 37: <p>If I were answering the exam question, I would have approached the question a little more computationally and specifically. The extensions $\Bbb Q(\zeta_7)\supset\Bbb Q$ and $\Bbb Q(\zeta_9)\supset\Bbb Q$ are both of degree six, with cyclic Galois group, so each has a subfield cubic over $\Bbb Q$. Since $\Bbb Q(\zeta_7)$ is ramified only at $7$ and $\Bbb Q(\zeta_9)$ is ramified only at $3$, the cubic extensions also are ramified only at $7$ and $3$, respectively. So they’re different.</p> Answer 38: <p>Many pairs fields can be distinguished by their discriminant. And knowing the discriminant of a defining polynomial instead of that of the field is often good enough.</p> <p>Recall that if $f$ is an irreducible polynomial over $\mathbb{Q}$ that defines a number field $K$, then there exists some integer $s$ such that</p> <p>$$ \operatorname{disc}(f) = s^2 \operatorname{disc}(K) $$</p> <p>Therefore, if you have two irreducible polynomials $f$ and $g$ such that the ratio of their discriminants is not a square, then they cannot define the same field.</p> <p>Although, as pointed out in the comments, cubics with galois group $A_3$ are precisely those whose discriminant is a square, so for this approach to work you have to do more than look at the discriminant of a single defining polynomial; e.g.</p> <ul> <li>Obtain the actual field discriminant</li> <li>Find polynomials defining the two fields whose discriminants are relatively prime (exploiting the fact that $\operatorname{disc}(K) \neq 1$ for every number field $K \neq \mathbb{Q}$).</li> </ul> <hr> <p>Another simple test is to factor an integer. 
If $K$ and $L$ are two fields such that $p\mathcal{O}_K$ and $p\mathcal{O}_L$ have factorizations into ideals that don't have the same pattern of degrees and ramification indices, then they aren't the same field.</p> <p>For example, if the discriminants of $f$ and $g$ are cubics with galois group $A_3$ and both have discriminant relatively prime to $p$, and $f$ has three roots in $\mathbf{F}_p$ but $g$ has no roots in $\mathbf{F}_p$, then the fields defined by $f$ and $g$ are distinct.</p> <p>In that case, in the field defined by $f$, $(p)$ factors into a product of three ideals of degree $1$, but in the field defined by $g$, $(p)$ is a prime ideal.</p> Answer 39: <p>In your question at the end, you do not specify wheteher your cubic fields are Galois above $Q$, but the whole context of your post suggests that they are. In this case your problem admits a purely Galois theoretic solution (arithmetic will come into play only over $Q$).</p> <p>Let $p$ be an odd (for simplification) prime and $k$ be a field of characteristic $\neq p $. Denote by $G_k$ the absolute Galois group of $k$. Any cyclic extension of $k$ of degree $p$ (say a $C_p$- extension for short) is the fixed field of a subgroup of index $p$ of $G_k$, and any such subgroup is the kernel of a non trivial homomorphism $G_k \to C_p$, so that the $C_p$-extensions of $k$ are classified by the non trivial elements of the group $Hom(G_k, C_p)$. To progress further, introduce the field $K = k(\mu_p)$, where $\mu_p$ is the group of $p$-th roots of unity, and consider the restriction map $Hom(G_k, C_p) \to Hom(G_K, C_p)$. For any Galois module $X$, $\Delta=Gal(K/k)$ acts on $Hom (G_K , X)$ in the usual way, more precisely via $(\delta, f) \in \Delta \times Hom(G_K, X) \to f^{\delta}$ given by $f^{\delta}(x)= \delta (f(\delta^{-1}(x))$ for $x \in X$ . Note that this action is the only one which is functorial; here $\Delta$ acts trivially on $C_p$. 
Since $\Delta$ has order prime to $p$, it is classically known that $Hom(G_k, C_p)\cong Hom(G_K, C_p)^{\Delta}$, where the superscript $(.)^{\Delta}$ denotes the invariants under $\Delta$. Since $G_K$ acts trivially on $\mu_p$, one has $Hom(G_K, C_p)\cong Hom(G_K,\mu_p)$ as groups, but not as $\Delta$-modules because of the canonical action of $\Delta$ defined above. Actually, one can check easily that $Hom(G_K, C_p)\cong Hom(G_K,\mu_p)(-1)$, where $(.)(-1)$ denotes the so called "Tate twist", which means that the Galois action on $Hom(G_K,\mu_p)$ has been replaced by a new action defined by $\delta^{new} (f)= \kappa(\delta)^{-1}.\delta^{old}(f)$, where $\kappa$ is the mod $p$ character defined by $\delta(\zeta)=\zeta^{\kappa(\delta)}$ for $\zeta \in \mu_p$ . Consequently, $Hom(G_k, C_p)$ can be identied with the elements $f\in Hom(G_K,\mu_p)$ such that $\delta (f)=\kappa(\delta).f$ (these can be viewed as "eigenvectors" corresponding to the "eigenvalues" $\kappa(\delta)$.)</p> <p>Now, by Kummer theory, $Hom(G_K,\mu_p)\cong (K^{*}/K^{*})^p$ , which means more concretely that any $C_p$-extension of $K$ is of the form $K(\sqrt [p]a)$, where $a$ represents a class $\bar a \in (K^{*}/K^{*})^p$, and two such extensions $K(\sqrt [p]a)$ and $K(\sqrt [p]b)$ coincide iff $\bar a = \bar b^j$ for a certain $j\neq 0$ mod $p$. Summarizing : <em>two $C_p$-extensions $L$ and $L'$ of $k$ coincide iff the composite fields $L.K = K(\sqrt [p]a)$ and $L'.K = K(\sqrt [p]b)$ coincide, which means that $\delta(\bar a)=\kappa(\delta).\bar a$ and $\bar a = \bar b^j$, $j\neq 0$ mod $p$.</em> This is an "elegant" theoretical way to distinguish between two $C_p$-extensions of $k$, but of course, in concrete examples, it requires some computations - although in your paticular case $k=Q$ and $p=3$, these do not seem especially complicated. For pure cubic fields, i.e. of the form $Q(\sqrt [3] a)$, with $a \in Q^{*}$, they even start in an obvious way ./. 
</p> Answer 40: <p>If I were answering the exam question, I would have approached the question a little more computationally and specifically. The extensions $\Bbb Q(\zeta_7)\supset\Bbb Q$ and $\Bbb Q(\zeta_9)\supset\Bbb Q$ are both of degree six, with cyclic Galois group, so each has a subfield cubic over $\Bbb Q$. Since $\Bbb Q(\zeta_7)$ is ramified only at $7$ and $\Bbb Q(\zeta_9)$ is ramified only at $3$, the cubic extensions also are ramified only at $7$ and $3$, respectively. So they’re different.</p> Answer 41: <p>Many pairs fields can be distinguished by their discriminant. And knowing the discriminant of a defining polynomial instead of that of the field is often good enough.</p> <p>Recall that if $f$ is an irreducible polynomial over $\mathbb{Q}$ that defines a number field $K$, then there exists some integer $s$ such that</p> <p>$$ \operatorname{disc}(f) = s^2 \operatorname{disc}(K) $$</p> <p>Therefore, if you have two irreducible polynomials $f$ and $g$ such that the ratio of their discriminants is not a square, then they cannot define the same field.</p> <p>Although, as pointed out in the comments, cubics with galois group $A_3$ are precisely those whose discriminant is a square, so for this approach to work you have to do more than look at the discriminant of a single defining polynomial; e.g.</p> <ul> <li>Obtain the actual field discriminant</li> <li>Find polynomials defining the two fields whose discriminants are relatively prime (exploiting the fact that $\operatorname{disc}(K) \neq 1$ for every number field $K \neq \mathbb{Q}$).</li> </ul> <hr> <p>Another simple test is to factor an integer. 
If $K$ and $L$ are two fields such that $p\mathcal{O}_K$ and $p\mathcal{O}_L$ have factorizations into ideals that don't have the same pattern of degrees and ramification indices, then they aren't the same field.</p> <p>For example, if the discriminants of $f$ and $g$ are cubics with galois group $A_3$ and both have discriminant relatively prime to $p$, and $f$ has three roots in $\mathbf{F}_p$ but $g$ has no roots in $\mathbf{F}_p$, then the fields defined by $f$ and $g$ are distinct.</p> <p>In that case, in the field defined by $f$, $(p)$ factors into a product of three ideals of degree $1$, but in the field defined by $g$, $(p)$ is a prime ideal.</p> Answer 42: <p>In your question at the end, you do not specify wheteher your cubic fields are Galois above $Q$, but the whole context of your post suggests that they are. In this case your problem admits a purely Galois theoretic solution (arithmetic will come into play only over $Q$).</p> <p>Let $p$ be an odd (for simplification) prime and $k$ be a field of characteristic $\neq p $. Denote by $G_k$ the absolute Galois group of $k$. Any cyclic extension of $k$ of degree $p$ (say a $C_p$- extension for short) is the fixed field of a subgroup of index $p$ of $G_k$, and any such subgroup is the kernel of a non trivial homomorphism $G_k \to C_p$, so that the $C_p$-extensions of $k$ are classified by the non trivial elements of the group $Hom(G_k, C_p)$. To progress further, introduce the field $K = k(\mu_p)$, where $\mu_p$ is the group of $p$-th roots of unity, and consider the restriction map $Hom(G_k, C_p) \to Hom(G_K, C_p)$. For any Galois module $X$, $\Delta=Gal(K/k)$ acts on $Hom (G_K , X)$ in the usual way, more precisely via $(\delta, f) \in \Delta \times Hom(G_K, X) \to f^{\delta}$ given by $f^{\delta}(x)= \delta (f(\delta^{-1}(x))$ for $x \in X$ . Note that this action is the only one which is functorial; here $\Delta$ acts trivially on $C_p$. 
Since $\Delta$ has order prime to $p$, it is classically known that $Hom(G_k, C_p)\cong Hom(G_K, C_p)^{\Delta}$, where the superscript $(.)^{\Delta}$ denotes the invariants under $\Delta$. Since $G_K$ acts trivially on $\mu_p$, one has $Hom(G_K, C_p)\cong Hom(G_K,\mu_p)$ as groups, but not as $\Delta$-modules because of the canonical action of $\Delta$ defined above. Actually, one can check easily that $Hom(G_K, C_p)\cong Hom(G_K,\mu_p)(-1)$, where $(.)(-1)$ denotes the so called "Tate twist", which means that the Galois action on $Hom(G_K,\mu_p)$ has been replaced by a new action defined by $\delta^{new} (f)= \kappa(\delta)^{-1}.\delta^{old}(f)$, where $\kappa$ is the mod $p$ character defined by $\delta(\zeta)=\zeta^{\kappa(\delta)}$ for $\zeta \in \mu_p$ . Consequently, $Hom(G_k, C_p)$ can be identied with the elements $f\in Hom(G_K,\mu_p)$ such that $\delta (f)=\kappa(\delta).f$ (these can be viewed as "eigenvectors" corresponding to the "eigenvalues" $\kappa(\delta)$.)</p> <p>Now, by Kummer theory, $Hom(G_K,\mu_p)\cong (K^{*}/K^{*})^p$ , which means more concretely that any $C_p$-extension of $K$ is of the form $K(\sqrt [p]a)$, where $a$ represents a class $\bar a \in (K^{*}/K^{*})^p$, and two such extensions $K(\sqrt [p]a)$ and $K(\sqrt [p]b)$ coincide iff $\bar a = \bar b^j$ for a certain $j\neq 0$ mod $p$. Summarizing : <em>two $C_p$-extensions $L$ and $L'$ of $k$ coincide iff the composite fields $L.K = K(\sqrt [p]a)$ and $L'.K = K(\sqrt [p]b)$ coincide, which means that $\delta(\bar a)=\kappa(\delta).\bar a$ and $\bar a = \bar b^j$, $j\neq 0$ mod $p$.</em> This is an "elegant" theoretical way to distinguish between two $C_p$-extensions of $k$, but of course, in concrete examples, it requires some computations - although in your paticular case $k=Q$ and $p=3$, these do not seem especially complicated. For pure cubic fields, i.e. of the form $Q(\sqrt [3] a)$, with $a \in Q^{*}$, they even start in an obvious way ./. 
</p> Answer 43: <p>If I were answering the exam question, I would have approached the question a little more computationally and specifically. The extensions $\Bbb Q(\zeta_7)\supset\Bbb Q$ and $\Bbb Q(\zeta_9)\supset\Bbb Q$ are both of degree six, with cyclic Galois group, so each has a subfield cubic over $\Bbb Q$. Since $\Bbb Q(\zeta_7)$ is ramified only at $7$ and $\Bbb Q(\zeta_9)$ is ramified only at $3$, the cubic extensions also are ramified only at $7$ and $3$, respectively. So they’re different.</p> Answer 44: <p>Many pairs fields can be distinguished by their discriminant. And knowing the discriminant of a defining polynomial instead of that of the field is often good enough.</p> <p>Recall that if $f$ is an irreducible polynomial over $\mathbb{Q}$ that defines a number field $K$, then there exists some integer $s$ such that</p> <p>$$ \operatorname{disc}(f) = s^2 \operatorname{disc}(K) $$</p> <p>Therefore, if you have two irreducible polynomials $f$ and $g$ such that the ratio of their discriminants is not a square, then they cannot define the same field.</p> <p>Although, as pointed out in the comments, cubics with galois group $A_3$ are precisely those whose discriminant is a square, so for this approach to work you have to do more than look at the discriminant of a single defining polynomial; e.g.</p> <ul> <li>Obtain the actual field discriminant</li> <li>Find polynomials defining the two fields whose discriminants are relatively prime (exploiting the fact that $\operatorname{disc}(K) \neq 1$ for every number field $K \neq \mathbb{Q}$).</li> </ul> <hr> <p>Another simple test is to factor an integer. 
If $K$ and $L$ are two fields such that $p\mathcal{O}_K$ and $p\mathcal{O}_L$ have factorizations into ideals that don't have the same pattern of degrees and ramification indices, then they aren't the same field.</p> <p>For example, if the discriminants of $f$ and $g$ are cubics with galois group $A_3$ and both have discriminant relatively prime to $p$, and $f$ has three roots in $\mathbf{F}_p$ but $g$ has no roots in $\mathbf{F}_p$, then the fields defined by $f$ and $g$ are distinct.</p> <p>In that case, in the field defined by $f$, $(p)$ factors into a product of three ideals of degree $1$, but in the field defined by $g$, $(p)$ is a prime ideal.</p> Answer 45: <p>In your question at the end, you do not specify wheteher your cubic fields are Galois above $Q$, but the whole context of your post suggests that they are. In this case your problem admits a purely Galois theoretic solution (arithmetic will come into play only over $Q$).</p> <p>Let $p$ be an odd (for simplification) prime and $k$ be a field of characteristic $\neq p $. Denote by $G_k$ the absolute Galois group of $k$. Any cyclic extension of $k$ of degree $p$ (say a $C_p$- extension for short) is the fixed field of a subgroup of index $p$ of $G_k$, and any such subgroup is the kernel of a non trivial homomorphism $G_k \to C_p$, so that the $C_p$-extensions of $k$ are classified by the non trivial elements of the group $Hom(G_k, C_p)$. To progress further, introduce the field $K = k(\mu_p)$, where $\mu_p$ is the group of $p$-th roots of unity, and consider the restriction map $Hom(G_k, C_p) \to Hom(G_K, C_p)$. For any Galois module $X$, $\Delta=Gal(K/k)$ acts on $Hom (G_K , X)$ in the usual way, more precisely via $(\delta, f) \in \Delta \times Hom(G_K, X) \to f^{\delta}$ given by $f^{\delta}(x)= \delta (f(\delta^{-1}(x))$ for $x \in X$ . Note that this action is the only one which is functorial; here $\Delta$ acts trivially on $C_p$. 
Since $\Delta$ has order prime to $p$, it is classically known that $Hom(G_k, C_p)\cong Hom(G_K, C_p)^{\Delta}$, where the superscript $(.)^{\Delta}$ denotes the invariants under $\Delta$. Since $G_K$ acts trivially on $\mu_p$, one has $Hom(G_K, C_p)\cong Hom(G_K,\mu_p)$ as groups, but not as $\Delta$-modules because of the canonical action of $\Delta$ defined above. Actually, one can check easily that $Hom(G_K, C_p)\cong Hom(G_K,\mu_p)(-1)$, where $(.)(-1)$ denotes the so called "Tate twist", which means that the Galois action on $Hom(G_K,\mu_p)$ has been replaced by a new action defined by $\delta^{new} (f)= \kappa(\delta)^{-1}.\delta^{old}(f)$, where $\kappa$ is the mod $p$ character defined by $\delta(\zeta)=\zeta^{\kappa(\delta)}$ for $\zeta \in \mu_p$ . Consequently, $Hom(G_k, C_p)$ can be identied with the elements $f\in Hom(G_K,\mu_p)$ such that $\delta (f)=\kappa(\delta).f$ (these can be viewed as "eigenvectors" corresponding to the "eigenvalues" $\kappa(\delta)$.)</p> <p>Now, by Kummer theory, $Hom(G_K,\mu_p)\cong (K^{*}/K^{*})^p$ , which means more concretely that any $C_p$-extension of $K$ is of the form $K(\sqrt [p]a)$, where $a$ represents a class $\bar a \in (K^{*}/K^{*})^p$, and two such extensions $K(\sqrt [p]a)$ and $K(\sqrt [p]b)$ coincide iff $\bar a = \bar b^j$ for a certain $j\neq 0$ mod $p$. Summarizing : <em>two $C_p$-extensions $L$ and $L'$ of $k$ coincide iff the composite fields $L.K = K(\sqrt [p]a)$ and $L'.K = K(\sqrt [p]b)$ coincide, which means that $\delta(\bar a)=\kappa(\delta).\bar a$ and $\bar a = \bar b^j$, $j\neq 0$ mod $p$.</em> This is an "elegant" theoretical way to distinguish between two $C_p$-extensions of $k$, but of course, in concrete examples, it requires some computations - although in your paticular case $k=Q$ and $p=3$, these do not seem especially complicated. For pure cubic fields, i.e. of the form $Q(\sqrt [3] a)$, with $a \in Q^{*}$, they even start in an obvious way ./. 
</p> Answer 46: <p>If I were answering the exam question, I would have approached the question a little more computationally and specifically. The extensions $\Bbb Q(\zeta_7)\supset\Bbb Q$ and $\Bbb Q(\zeta_9)\supset\Bbb Q$ are both of degree six, with cyclic Galois group, so each has a subfield cubic over $\Bbb Q$. Since $\Bbb Q(\zeta_7)$ is ramified only at $7$ and $\Bbb Q(\zeta_9)$ is ramified only at $3$, the cubic extensions also are ramified only at $7$ and $3$, respectively. So they’re different.</p> Answer 47: <p>Many pairs of fields can be distinguished by their discriminant. And knowing the discriminant of a defining polynomial instead of that of the field is often good enough.</p> <p>Recall that if $f$ is an irreducible polynomial over $\mathbb{Q}$ that defines a number field $K$, then there exists some integer $s$ such that</p> <p>$$ \operatorname{disc}(f) = s^2 \operatorname{disc}(K) $$</p> <p>Therefore, if you have two irreducible polynomials $f$ and $g$ such that the ratio of their discriminants is not a square, then they cannot define the same field.</p> <p>Although, as pointed out in the comments, cubics with Galois group $A_3$ are precisely those whose discriminant is a square, so for this approach to work you have to do more than look at the discriminant of a single defining polynomial; e.g.</p> <ul> <li>Obtain the actual field discriminant</li> <li>Find polynomials defining the two fields whose discriminants are relatively prime (exploiting the fact that $\operatorname{disc}(K) \neq 1$ for every number field $K \neq \mathbb{Q}$).</li> </ul> <hr> <p>Another simple test is to factor an integer. 
If $K$ and $L$ are two fields such that $p\mathcal{O}_K$ and $p\mathcal{O}_L$ have factorizations into ideals that don't have the same pattern of degrees and ramification indices, then they aren't the same field.</p> <p>For example, if $f$ and $g$ are cubics with Galois group $A_3$, both with discriminant relatively prime to $p$, and $f$ has three roots in $\mathbf{F}_p$ but $g$ has no roots in $\mathbf{F}_p$, then the fields defined by $f$ and $g$ are distinct.</p> <p>In that case, in the field defined by $f$, $(p)$ factors into a product of three ideals of degree $1$, but in the field defined by $g$, $(p)$ is a prime ideal.</p>
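As a concrete illustration of the "factor an integer" test in the answers above: the real cubic subfields of $\Bbb Q(\zeta_7)$ and $\Bbb Q(\zeta_9)$ are defined by the standard cubics $x^3+x^2-2x-1$ and $x^3-3x-1$ (both with square discriminant, hence Galois group $A_3$), and counting roots modulo a well-chosen prime already tells the two fields apart. Here is a minimal Python sketch; the helper name `count_roots_mod_p` and the choice $p=13$ are ours, not from the answers.

```python
def count_roots_mod_p(coeffs, p):
    """Count roots in F_p of a polynomial given by coefficients
    [a_0, a_1, ..., a_n] representing a_0 + a_1*x + ... + a_n*x^n."""
    return sum(
        1 for x in range(p)
        if sum(c * pow(x, k, p) for k, c in enumerate(coeffs)) % p == 0
    )

# f defines the real cubic subfield of Q(zeta_7): x^3 + x^2 - 2x - 1
# g defines the real cubic subfield of Q(zeta_9): x^3 - 3x - 1
f = [-1, -2, 1, 1]
g = [-1, -3, 0, 1]

p = 13  # 13 = -1 (mod 7), so 13 splits completely in the field of f,
        # while 13 is not +-1 (mod 9), so it stays inert in the field of g
print(count_roots_mod_p(f, p))  # 3 -> (13) splits into three degree-1 primes
print(count_roots_mod_p(g, p))  # 0 -> (13) remains a prime ideal
```

Since the splitting patterns of $(13)$ differ, the two cubic fields are distinct, in agreement with the ramification argument of Answer 46.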
How to factorize this $\sqrt{8 - 2\sqrt{7}}$?
2,090,836
8
https://math.stackexchange.com/questions/2090836/how-to-factorize-this-sqrt8-2-sqrt7
<p>When I was at high school, our teacher showed us a technique to simplify square roots like $\sqrt{8 - 2\sqrt{7}}$ that I have forgotten. It was something like $8 = 7+1$ and $7 = 7\cdot 1$, and using these we could represent $\sqrt{8 - 2\sqrt{7}}$ in a simpler form. I would be happy if you can show how it works, and what this technique is called.</p>
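The trick being asked about (completing a square under the radical) matches the classical denesting identity $\sqrt{a\pm\sqrt b}=\sqrt{\tfrac{a+\sqrt{a^2-b}}2}\pm\sqrt{\tfrac{a-\sqrt{a^2-b}}2}$ that appears in the answers. As a sketch (the helper name `denest` is made up for illustration), it can be checked mechanically in a few lines of Python:

```python
import math

def denest(a, b):
    """Try to rewrite sqrt(a - sqrt(b)) as sqrt(u) - sqrt(v), using
    sqrt(a - sqrt(b)) = sqrt((a+s)/2) - sqrt((a-s)/2) with s = sqrt(a^2 - b).
    Yields a clean answer only when a^2 - b is a perfect square (and a+s even)."""
    s = math.isqrt(a * a - b)
    if s * s != a * a - b or (a + s) % 2:
        return None  # the identity gives nothing nicer here
    return (a + s) // 2, (a - s) // 2

# sqrt(8 - 2*sqrt(7)) = sqrt(8 - sqrt(28)), so a = 8, b = 28, a^2 - b = 36
u, v = denest(8, 28)
print(u, v)  # 7 1, i.e. sqrt(8 - 2*sqrt(7)) = sqrt(7) - 1
assert math.isclose(math.sqrt(8 - 2 * math.sqrt(7)), math.sqrt(u) - math.sqrt(v))
```

The same helper returns `None` for something like $\sqrt{7-\sqrt{14}}$, where $a^2-b=35$ is not a perfect square, which is exactly when the hand technique fails too.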
[ { "answer_id": 2090837, "score": 6, "text": "<p>HINT:</p>\n\n<p>$$\n8=1^2+(\\sqrt{7})^2\n$$</p>\n\n<p>and $(a-b)^2=a^2+b^2-2ab$.</p>\n" }, { "answer_id": 2090845, "score": 3, "text": "<p>$$\\sqrt { 8-2\\sqrt { 7 } } =\\sqrt { { \\left( \\sqrt { 7 } \\right) }^{ 2 }-2\\sqrt { 7 } +1 } =\\sqrt { { \\left( \\sqrt { 7 } -1 \\right) }^{ 2 } } =\\sqrt { 7 } -1$$</p>\n" }, { "answer_id": 2090872, "score": 3, "text": "<p>Maybe the formula that you do not remember is:\n$$\n\\sqrt{a\\pm \\sqrt{b}}=\\sqrt{\\dfrac{a+ \\sqrt{a^2-b}}{2}}\\pm\\sqrt{\\dfrac{a- \\sqrt{a^2-b}}{2}}\n$$\nwhich can be easily verified (see my answer to the similar question: <a href=\"https://math.stackexchange.com/questions/1214527/denesting-a-square-root-sqrt7-sqrt14/1454914#1454914\">Denesting a square root: $\\sqrt{7 + \\sqrt{14}}$</a>) and works well when $a^2-b$ is a perfect square.</p>\n\n<p>In this case we have:\n$$\\sqrt{8 - 2\\sqrt{7}}=\\sqrt{8 - \\sqrt{28}} \\quad \\Rightarrow \\quad a^2-b=36$$</p>\n\n<p>and, using the formula:\n$$\n\\sqrt{8 - 2\\sqrt{7}}=\\sqrt{7}-1\n$$</p>\n" }, { "answer_id": 2090848, "score": 2, "text": "<p>Try making a perfect square inside of that square root, as you can see that $$2\\sqrt{7}=2\\times1\\times\\sqrt{7}$$ and $$8=1+(\\sqrt{7})^2$$ Then apply $$a^2+b^2-2ab=(a-b)^2$$</p>\n" }, { "answer_id": 2090857, "score": 0, "text": "<p>Such square roots can be <em>mechanically</em> computed by a <a href=\"https://math.stackexchange.com/a/816527/242\">Simple Denesting Rule:</a></p>\n\n<p>Here $\\ 8-2\\sqrt 7\\ $ has norm $= 36.\\:$ $\\rm\\ \\color{blue}{Subtracting\\ out}\\,\\ \\sqrt{norm}\\ = 6\\,\\ $ yields $\\,\\ 2-2\\sqrt 7\\:$</p>\n\n<p>which has $\\, {\\rm\\ \\sqrt{trace}}\\, =\\, \\sqrt{4}\\, =\\, 2.\\,\\ \\ \\ \\rm \\color{brown}{Dividing\\ it\\ out}\\,\\ $ of the above yields $\\ \\ \\ 1-\\sqrt 7$ </p>\n\n<p>Negate it to get the positive root. 
Many more worked examples are in <a href=\"https://math.stackexchange.com/search?q=user%3A242+denesting+subtract+norm+trace\">prior posts on this rule.</a></p>\n" }, { "answer_id": 2092919, "score": 0, "text": "<p>Consider this:$$\\sqrt{X\\pm Y}=\\sqrt{\\dfrac {X+\\sqrt{X^2-Y^2}}2}\\pm\\sqrt{\\dfrac {X-\\sqrt{X^2-Y^2}}2}\\tag1$$\nFor $X,Y\\in\\mathbb{R}$ and $X&gt;Y$. Therefore, we have $x=8,\\ Y=\\sqrt{28}$ so$$\\sqrt{8-2\\sqrt7}=\\sqrt{\\dfrac {8+\\sqrt{64-28}}2}-\\sqrt{\\dfrac {8-\\sqrt{64-28}}2}=\\sqrt7-1\\tag2$$</p>\n" }, { "answer_id": 2090837, "score": 6, "text": "<p>HINT:</p>\n\n<p>$$\n8=1^2+(\\sqrt{7})^2\n$$</p>\n\n<p>and $(a-b)^2=a^2+b^2-2ab$.</p>\n" }, { "answer_id": 2090845, "score": 3, "text": "<p>$$\\sqrt { 8-2\\sqrt { 7 } } =\\sqrt { { \\left( \\sqrt { 7 } \\right) }^{ 2 }-2\\sqrt { 7 } +1 } =\\sqrt { { \\left( \\sqrt { 7 } -1 \\right) }^{ 2 } } =\\sqrt { 7 } -1$$</p>\n" }, { "answer_id": 2090872, "score": 3, "text": "<p>Maybe that the formula that you do not remember is:\n$$\n\\sqrt{a\\pm \\sqrt{b}}=\\sqrt{\\dfrac{a+ \\sqrt{a^2-b}}{2}}\\pm\\sqrt{\\dfrac{a- \\sqrt{a^2-b}}{2}}\n$$\nthat can easily verified ( see my answer to the similar question: <a href=\"https://math.stackexchange.com/questions/1214527/denesting-a-square-root-sqrt7-sqrt14/1454914#1454914\">Denesting a square root: $\\sqrt{7 + \\sqrt{14}}$</a>) and works well when $a^2-b$ is a perfect square.</p>\n\n<p>In this case we have:\n$$\\sqrt{8 - 2\\sqrt{7}}=\\sqrt{8 - \\sqrt{28}} \\quad \\Rightarrow \\quad a^2-b=36$$</p>\n\n<p>and, using the formula:\n$$\n\\sqrt{8 - 2\\sqrt{7}}=\\sqrt{7}-1\n$$</p>\n" }, { "answer_id": 2090848, "score": 2, "text": "<p>Try making perfect square inside of that square root, as you can see that $$2\\sqrt{7}=2\\times1\\times\\sqrt{7}$$ and $$8=1+(\\sqrt{7})^2$$ Then apply $$a^2+b^2-2ab=(a-b^2)$$</p>\n" }, { "answer_id": 2090857, "score": 0, "text": "<p>Such square roots can be <em>mechanically</em> computed by a <a 
href=\"https://math.stackexchange.com/a/816527/242\">Simple Denesting Rule:</a></p>\n\n<p>Here $\\ 8-2\\sqrt 7\\ $ has norm $= 36.\\:$ $\\rm\\ \\color{blue}{Subtracting\\ out}\\,\\ \\sqrt{norm}\\ = 6\\,\\ $ yields $\\,\\ 2-2\\sqrt 7\\:$</p>\n\n<p>which has $\\, {\\rm\\ \\sqrt{trace}}\\, =\\, \\sqrt{4}\\, =\\, 2.\\,\\ \\ \\ \\rm \\color{brown}{Dividing\\ it\\ out}\\,\\ $ of the above yields $\\ \\ \\ 1-\\sqrt 7$ </p>\n\n<p>Negate it to get the positive root. Many more worked examples are in <a href=\"https://math.stackexchange.com/search?q=user%3A242+denesting+subtract+norm+trace\">prior posts on this rule.</a></p>\n" }, { "answer_id": 2092919, "score": 0, "text": "<p>Consider this:$$\\sqrt{X\\pm Y}=\\sqrt{\\dfrac {X+\\sqrt{X^2-Y^2}}2}\\pm\\sqrt{\\dfrac {X-\\sqrt{X^2-Y^2}}2}\\tag1$$\nFor $X,Y\\in\\mathbb{R}$ and $X&gt;Y$. Therefore, we have $x=8,\\ Y=\\sqrt{28}$ so$$\\sqrt{8-2\\sqrt7}=\\sqrt{\\dfrac {8+\\sqrt{64-28}}2}-\\sqrt{\\dfrac {8-\\sqrt{64-28}}2}=\\sqrt7-1\\tag2$$</p>\n" }, { "answer_id": 2090837, "score": 6, "text": "<p>HINT:</p>\n\n<p>$$\n8=1^2+(\\sqrt{7})^2\n$$</p>\n\n<p>and $(a-b)^2=a^2+b^2-2ab$.</p>\n" }, { "answer_id": 2090845, "score": 3, "text": "<p>$$\\sqrt { 8-2\\sqrt { 7 } } =\\sqrt { { \\left( \\sqrt { 7 } \\right) }^{ 2 }-2\\sqrt { 7 } +1 } =\\sqrt { { \\left( \\sqrt { 7 } -1 \\right) }^{ 2 } } =\\sqrt { 7 } -1$$</p>\n" }, { "answer_id": 2090872, "score": 3, "text": "<p>Maybe that the formula that you do not remember is:\n$$\n\\sqrt{a\\pm \\sqrt{b}}=\\sqrt{\\dfrac{a+ \\sqrt{a^2-b}}{2}}\\pm\\sqrt{\\dfrac{a- \\sqrt{a^2-b}}{2}}\n$$\nthat can easily verified ( see my answer to the similar question: <a href=\"https://math.stackexchange.com/questions/1214527/denesting-a-square-root-sqrt7-sqrt14/1454914#1454914\">Denesting a square root: $\\sqrt{7 + \\sqrt{14}}$</a>) and works well when $a^2-b$ is a perfect square.</p>\n\n<p>In this case we have:\n$$\\sqrt{8 - 2\\sqrt{7}}=\\sqrt{8 - \\sqrt{28}} \\quad \\Rightarrow \\quad a^2-b=36$$</p>\n\n<p>and, using 
the formula:\n$$\n\\sqrt{8 - 2\\sqrt{7}}=\\sqrt{7}-1\n$$</p>\n" }, { "answer_id": 2090848, "score": 2, "text": "<p>Try making perfect square inside of that square root, as you can see that $$2\\sqrt{7}=2\\times1\\times\\sqrt{7}$$ and $$8=1+(\\sqrt{7})^2$$ Then apply $$a^2+b^2-2ab=(a-b^2)$$</p>\n" }, { "answer_id": 2090857, "score": 0, "text": "<p>Such square roots can be <em>mechanically</em> computed by a <a href=\"https://math.stackexchange.com/a/816527/242\">Simple Denesting Rule:</a></p>\n\n<p>Here $\\ 8-2\\sqrt 7\\ $ has norm $= 36.\\:$ $\\rm\\ \\color{blue}{Subtracting\\ out}\\,\\ \\sqrt{norm}\\ = 6\\,\\ $ yields $\\,\\ 2-2\\sqrt 7\\:$</p>\n\n<p>which has $\\, {\\rm\\ \\sqrt{trace}}\\, =\\, \\sqrt{4}\\, =\\, 2.\\,\\ \\ \\ \\rm \\color{brown}{Dividing\\ it\\ out}\\,\\ $ of the above yields $\\ \\ \\ 1-\\sqrt 7$ </p>\n\n<p>Negate it to get the positive root. Many more worked examples are in <a href=\"https://math.stackexchange.com/search?q=user%3A242+denesting+subtract+norm+trace\">prior posts on this rule.</a></p>\n" }, { "answer_id": 2092919, "score": 0, "text": "<p>Consider this:$$\\sqrt{X\\pm Y}=\\sqrt{\\dfrac {X+\\sqrt{X^2-Y^2}}2}\\pm\\sqrt{\\dfrac {X-\\sqrt{X^2-Y^2}}2}\\tag1$$\nFor $X,Y\\in\\mathbb{R}$ and $X&gt;Y$. 
Therefore, we have $x=8,\\ Y=\\sqrt{28}$ so$$\\sqrt{8-2\\sqrt7}=\\sqrt{\\dfrac {8+\\sqrt{64-28}}2}-\\sqrt{\\dfrac {8-\\sqrt{64-28}}2}=\\sqrt7-1\\tag2$$</p>\n" }, { "answer_id": 2090837, "score": 6, "text": "<p>HINT:</p>\n\n<p>$$\n8=1^2+(\\sqrt{7})^2\n$$</p>\n\n<p>and $(a-b)^2=a^2+b^2-2ab$.</p>\n" }, { "answer_id": 2090845, "score": 3, "text": "<p>$$\\sqrt { 8-2\\sqrt { 7 } } =\\sqrt { { \\left( \\sqrt { 7 } \\right) }^{ 2 }-2\\sqrt { 7 } +1 } =\\sqrt { { \\left( \\sqrt { 7 } -1 \\right) }^{ 2 } } =\\sqrt { 7 } -1$$</p>\n" }, { "answer_id": 2090872, "score": 3, "text": "<p>Maybe that the formula that you do not remember is:\n$$\n\\sqrt{a\\pm \\sqrt{b}}=\\sqrt{\\dfrac{a+ \\sqrt{a^2-b}}{2}}\\pm\\sqrt{\\dfrac{a- \\sqrt{a^2-b}}{2}}\n$$\nthat can easily verified ( see my answer to the similar question: <a href=\"https://math.stackexchange.com/questions/1214527/denesting-a-square-root-sqrt7-sqrt14/1454914#1454914\">Denesting a square root: $\\sqrt{7 + \\sqrt{14}}$</a>) and works well when $a^2-b$ is a perfect square.</p>\n\n<p>In this case we have:\n$$\\sqrt{8 - 2\\sqrt{7}}=\\sqrt{8 - \\sqrt{28}} \\quad \\Rightarrow \\quad a^2-b=36$$</p>\n\n<p>and, using the formula:\n$$\n\\sqrt{8 - 2\\sqrt{7}}=\\sqrt{7}-1\n$$</p>\n" }, { "answer_id": 2090848, "score": 2, "text": "<p>Try making perfect square inside of that square root, as you can see that $$2\\sqrt{7}=2\\times1\\times\\sqrt{7}$$ and $$8=1+(\\sqrt{7})^2$$ Then apply $$a^2+b^2-2ab=(a-b^2)$$</p>\n" }, { "answer_id": 2090857, "score": 0, "text": "<p>Such square roots can be <em>mechanically</em> computed by a <a href=\"https://math.stackexchange.com/a/816527/242\">Simple Denesting Rule:</a></p>\n\n<p>Here $\\ 8-2\\sqrt 7\\ $ has norm $= 36.\\:$ $\\rm\\ \\color{blue}{Subtracting\\ out}\\,\\ \\sqrt{norm}\\ = 6\\,\\ $ yields $\\,\\ 2-2\\sqrt 7\\:$</p>\n\n<p>which has $\\, {\\rm\\ \\sqrt{trace}}\\, =\\, \\sqrt{4}\\, =\\, 2.\\,\\ \\ \\ \\rm \\color{brown}{Dividing\\ it\\ out}\\,\\ $ of the above yields $\\ \\ \\ 1-\\sqrt 7$ 
</p>\n\n<p>Negate it to get the positive root. Many more worked examples are in <a href=\"https://math.stackexchange.com/search?q=user%3A242+denesting+subtract+norm+trace\">prior posts on this rule.</a></p>\n" }, { "answer_id": 2092919, "score": 0, "text": "<p>Consider this:$$\\sqrt{X\\pm Y}=\\sqrt{\\dfrac {X+\\sqrt{X^2-Y^2}}2}\\pm\\sqrt{\\dfrac {X-\\sqrt{X^2-Y^2}}2}\\tag1$$\nFor $X,Y\\in\\mathbb{R}$ and $X&gt;Y$. Therefore, we have $x=8,\\ Y=\\sqrt{28}$ so$$\\sqrt{8-2\\sqrt7}=\\sqrt{\\dfrac {8+\\sqrt{64-28}}2}-\\sqrt{\\dfrac {8-\\sqrt{64-28}}2}=\\sqrt7-1\\tag2$$</p>\n" }, { "answer_id": 2090837, "score": 6, "text": "<p>HINT:</p>\n\n<p>$$\n8=1^2+(\\sqrt{7})^2\n$$</p>\n\n<p>and $(a-b)^2=a^2+b^2-2ab$.</p>\n" }, { "answer_id": 2090845, "score": 3, "text": "<p>$$\\sqrt { 8-2\\sqrt { 7 } } =\\sqrt { { \\left( \\sqrt { 7 } \\right) }^{ 2 }-2\\sqrt { 7 } +1 } =\\sqrt { { \\left( \\sqrt { 7 } -1 \\right) }^{ 2 } } =\\sqrt { 7 } -1$$</p>\n" }, { "answer_id": 2090872, "score": 3, "text": "<p>Maybe that the formula that you do not remember is:\n$$\n\\sqrt{a\\pm \\sqrt{b}}=\\sqrt{\\dfrac{a+ \\sqrt{a^2-b}}{2}}\\pm\\sqrt{\\dfrac{a- \\sqrt{a^2-b}}{2}}\n$$\nthat can easily verified ( see my answer to the similar question: <a href=\"https://math.stackexchange.com/questions/1214527/denesting-a-square-root-sqrt7-sqrt14/1454914#1454914\">Denesting a square root: $\\sqrt{7 + \\sqrt{14}}$</a>) and works well when $a^2-b$ is a perfect square.</p>\n\n<p>In this case we have:\n$$\\sqrt{8 - 2\\sqrt{7}}=\\sqrt{8 - \\sqrt{28}} \\quad \\Rightarrow \\quad a^2-b=36$$</p>\n\n<p>and, using the formula:\n$$\n\\sqrt{8 - 2\\sqrt{7}}=\\sqrt{7}-1\n$$</p>\n" }, { "answer_id": 2090848, "score": 2, "text": "<p>Try making perfect square inside of that square root, as you can see that $$2\\sqrt{7}=2\\times1\\times\\sqrt{7}$$ and $$8=1+(\\sqrt{7})^2$$ Then apply $$a^2+b^2-2ab=(a-b^2)$$</p>\n" }, { "answer_id": 2090857, "score": 0, "text": "<p>Such square roots can be <em>mechanically</em> computed by a <a 
href=\"https://math.stackexchange.com/a/816527/242\">Simple Denesting Rule:</a></p>\n\n<p>Here $\\ 8-2\\sqrt 7\\ $ has norm $= 36.\\:$ $\\rm\\ \\color{blue}{Subtracting\\ out}\\,\\ \\sqrt{norm}\\ = 6\\,\\ $ yields $\\,\\ 2-2\\sqrt 7\\:$</p>\n\n<p>which has $\\, {\\rm\\ \\sqrt{trace}}\\, =\\, \\sqrt{4}\\, =\\, 2.\\,\\ \\ \\ \\rm \\color{brown}{Dividing\\ it\\ out}\\,\\ $ of the above yields $\\ \\ \\ 1-\\sqrt 7$ </p>\n\n<p>Negate it to get the positive root. Many more worked examples are in <a href=\"https://math.stackexchange.com/search?q=user%3A242+denesting+subtract+norm+trace\">prior posts on this rule.</a></p>\n" }, { "answer_id": 2092919, "score": 0, "text": "<p>Consider this:$$\\sqrt{X\\pm Y}=\\sqrt{\\dfrac {X+\\sqrt{X^2-Y^2}}2}\\pm\\sqrt{\\dfrac {X-\\sqrt{X^2-Y^2}}2}\\tag1$$\nFor $X,Y\\in\\mathbb{R}$ and $X&gt;Y$. Therefore, we have $x=8,\\ Y=\\sqrt{28}$ so$$\\sqrt{8-2\\sqrt7}=\\sqrt{\\dfrac {8+\\sqrt{64-28}}2}-\\sqrt{\\dfrac {8-\\sqrt{64-28}}2}=\\sqrt7-1\\tag2$$</p>\n" }, { "answer_id": 2090837, "score": 6, "text": "<p>HINT:</p>\n\n<p>$$\n8=1^2+(\\sqrt{7})^2\n$$</p>\n\n<p>and $(a-b)^2=a^2+b^2-2ab$.</p>\n" }, { "answer_id": 2090845, "score": 3, "text": "<p>$$\\sqrt { 8-2\\sqrt { 7 } } =\\sqrt { { \\left( \\sqrt { 7 } \\right) }^{ 2 }-2\\sqrt { 7 } +1 } =\\sqrt { { \\left( \\sqrt { 7 } -1 \\right) }^{ 2 } } =\\sqrt { 7 } -1$$</p>\n" }, { "answer_id": 2090872, "score": 3, "text": "<p>Maybe that the formula that you do not remember is:\n$$\n\\sqrt{a\\pm \\sqrt{b}}=\\sqrt{\\dfrac{a+ \\sqrt{a^2-b}}{2}}\\pm\\sqrt{\\dfrac{a- \\sqrt{a^2-b}}{2}}\n$$\nthat can easily verified ( see my answer to the similar question: <a href=\"https://math.stackexchange.com/questions/1214527/denesting-a-square-root-sqrt7-sqrt14/1454914#1454914\">Denesting a square root: $\\sqrt{7 + \\sqrt{14}}$</a>) and works well when $a^2-b$ is a perfect square.</p>\n\n<p>In this case we have:\n$$\\sqrt{8 - 2\\sqrt{7}}=\\sqrt{8 - \\sqrt{28}} \\quad \\Rightarrow \\quad a^2-b=36$$</p>\n\n<p>and, using 
the formula:\n$$\n\\sqrt{8 - 2\\sqrt{7}}=\\sqrt{7}-1\n$$</p>\n" }, { "answer_id": 2090848, "score": 2, "text": "<p>Try making perfect square inside of that square root, as you can see that $$2\\sqrt{7}=2\\times1\\times\\sqrt{7}$$ and $$8=1+(\\sqrt{7})^2$$ Then apply $$a^2+b^2-2ab=(a-b^2)$$</p>\n" }, { "answer_id": 2090857, "score": 0, "text": "<p>Such square roots can be <em>mechanically</em> computed by a <a href=\"https://math.stackexchange.com/a/816527/242\">Simple Denesting Rule:</a></p>\n\n<p>Here $\\ 8-2\\sqrt 7\\ $ has norm $= 36.\\:$ $\\rm\\ \\color{blue}{Subtracting\\ out}\\,\\ \\sqrt{norm}\\ = 6\\,\\ $ yields $\\,\\ 2-2\\sqrt 7\\:$</p>\n\n<p>which has $\\, {\\rm\\ \\sqrt{trace}}\\, =\\, \\sqrt{4}\\, =\\, 2.\\,\\ \\ \\ \\rm \\color{brown}{Dividing\\ it\\ out}\\,\\ $ of the above yields $\\ \\ \\ 1-\\sqrt 7$ </p>\n\n<p>Negate it to get the positive root. Many more worked examples are in <a href=\"https://math.stackexchange.com/search?q=user%3A242+denesting+subtract+norm+trace\">prior posts on this rule.</a></p>\n" }, { "answer_id": 2092919, "score": 0, "text": "<p>Consider this:$$\\sqrt{X\\pm Y}=\\sqrt{\\dfrac {X+\\sqrt{X^2-Y^2}}2}\\pm\\sqrt{\\dfrac {X-\\sqrt{X^2-Y^2}}2}\\tag1$$\nFor $X,Y\\in\\mathbb{R}$ and $X&gt;Y$. 
Therefore, we have $x=8,\\ Y=\\sqrt{28}$ so$$\\sqrt{8-2\\sqrt7}=\\sqrt{\\dfrac {8+\\sqrt{64-28}}2}-\\sqrt{\\dfrac {8-\\sqrt{64-28}}2}=\\sqrt7-1\\tag2$$</p>\n" }, { "answer_id": 2090837, "score": 6, "text": "<p>HINT:</p>\n\n<p>$$\n8=1^2+(\\sqrt{7})^2\n$$</p>\n\n<p>and $(a-b)^2=a^2+b^2-2ab$.</p>\n" }, { "answer_id": 2090845, "score": 3, "text": "<p>$$\\sqrt { 8-2\\sqrt { 7 } } =\\sqrt { { \\left( \\sqrt { 7 } \\right) }^{ 2 }-2\\sqrt { 7 } +1 } =\\sqrt { { \\left( \\sqrt { 7 } -1 \\right) }^{ 2 } } =\\sqrt { 7 } -1$$</p>\n" }, { "answer_id": 2090872, "score": 3, "text": "<p>Maybe that the formula that you do not remember is:\n$$\n\\sqrt{a\\pm \\sqrt{b}}=\\sqrt{\\dfrac{a+ \\sqrt{a^2-b}}{2}}\\pm\\sqrt{\\dfrac{a- \\sqrt{a^2-b}}{2}}\n$$\nthat can easily verified ( see my answer to the similar question: <a href=\"https://math.stackexchange.com/questions/1214527/denesting-a-square-root-sqrt7-sqrt14/1454914#1454914\">Denesting a square root: $\\sqrt{7 + \\sqrt{14}}$</a>) and works well when $a^2-b$ is a perfect square.</p>\n\n<p>In this case we have:\n$$\\sqrt{8 - 2\\sqrt{7}}=\\sqrt{8 - \\sqrt{28}} \\quad \\Rightarrow \\quad a^2-b=36$$</p>\n\n<p>and, using the formula:\n$$\n\\sqrt{8 - 2\\sqrt{7}}=\\sqrt{7}-1\n$$</p>\n" }, { "answer_id": 2090848, "score": 2, "text": "<p>Try making perfect square inside of that square root, as you can see that $$2\\sqrt{7}=2\\times1\\times\\sqrt{7}$$ and $$8=1+(\\sqrt{7})^2$$ Then apply $$a^2+b^2-2ab=(a-b^2)$$</p>\n" }, { "answer_id": 2090857, "score": 0, "text": "<p>Such square roots can be <em>mechanically</em> computed by a <a href=\"https://math.stackexchange.com/a/816527/242\">Simple Denesting Rule:</a></p>\n\n<p>Here $\\ 8-2\\sqrt 7\\ $ has norm $= 36.\\:$ $\\rm\\ \\color{blue}{Subtracting\\ out}\\,\\ \\sqrt{norm}\\ = 6\\,\\ $ yields $\\,\\ 2-2\\sqrt 7\\:$</p>\n\n<p>which has $\\, {\\rm\\ \\sqrt{trace}}\\, =\\, \\sqrt{4}\\, =\\, 2.\\,\\ \\ \\ \\rm \\color{brown}{Dividing\\ it\\ out}\\,\\ $ of the above yields $\\ \\ \\ 1-\\sqrt 7$ 
</p>\n\n<p>Negate it to get the positive root. Many more worked examples are in <a href=\"https://math.stackexchange.com/search?q=user%3A242+denesting+subtract+norm+trace\">prior posts on this rule.</a></p>\n" }, { "answer_id": 2092919, "score": 0, "text": "<p>Consider this:$$\\sqrt{X\\pm Y}=\\sqrt{\\dfrac {X+\\sqrt{X^2-Y^2}}2}\\pm\\sqrt{\\dfrac {X-\\sqrt{X^2-Y^2}}2}\\tag1$$\nFor $X,Y\\in\\mathbb{R}$ and $X&gt;Y$. Therefore, we have $x=8,\\ Y=\\sqrt{28}$ so$$\\sqrt{8-2\\sqrt7}=\\sqrt{\\dfrac {8+\\sqrt{64-28}}2}-\\sqrt{\\dfrac {8-\\sqrt{64-28}}2}=\\sqrt7-1\\tag2$$</p>\n" }, { "answer_id": 2090837, "score": 6, "text": "<p>HINT:</p>\n\n<p>$$\n8=1^2+(\\sqrt{7})^2\n$$</p>\n\n<p>and $(a-b)^2=a^2+b^2-2ab$.</p>\n" }, { "answer_id": 2090845, "score": 3, "text": "<p>$$\\sqrt { 8-2\\sqrt { 7 } } =\\sqrt { { \\left( \\sqrt { 7 } \\right) }^{ 2 }-2\\sqrt { 7 } +1 } =\\sqrt { { \\left( \\sqrt { 7 } -1 \\right) }^{ 2 } } =\\sqrt { 7 } -1$$</p>\n" }, { "answer_id": 2090872, "score": 3, "text": "<p>Maybe that the formula that you do not remember is:\n$$\n\\sqrt{a\\pm \\sqrt{b}}=\\sqrt{\\dfrac{a+ \\sqrt{a^2-b}}{2}}\\pm\\sqrt{\\dfrac{a- \\sqrt{a^2-b}}{2}}\n$$\nthat can easily verified ( see my answer to the similar question: <a href=\"https://math.stackexchange.com/questions/1214527/denesting-a-square-root-sqrt7-sqrt14/1454914#1454914\">Denesting a square root: $\\sqrt{7 + \\sqrt{14}}$</a>) and works well when $a^2-b$ is a perfect square.</p>\n\n<p>In this case we have:\n$$\\sqrt{8 - 2\\sqrt{7}}=\\sqrt{8 - \\sqrt{28}} \\quad \\Rightarrow \\quad a^2-b=36$$</p>\n\n<p>and, using the formula:\n$$\n\\sqrt{8 - 2\\sqrt{7}}=\\sqrt{7}-1\n$$</p>\n" }, { "answer_id": 2090848, "score": 2, "text": "<p>Try making perfect square inside of that square root, as you can see that $$2\\sqrt{7}=2\\times1\\times\\sqrt{7}$$ and $$8=1+(\\sqrt{7})^2$$ Then apply $$a^2+b^2-2ab=(a-b^2)$$</p>\n" }, { "answer_id": 2090857, "score": 0, "text": "<p>Such square roots can be <em>mechanically</em> computed by a <a 
href=\"https://math.stackexchange.com/a/816527/242\">Simple Denesting Rule:</a></p>\n\n<p>Here $\\ 8-2\\sqrt 7\\ $ has norm $= 36.\\:$ $\\rm\\ \\color{blue}{Subtracting\\ out}\\,\\ \\sqrt{norm}\\ = 6\\,\\ $ yields $\\,\\ 2-2\\sqrt 7\\:$</p>\n\n<p>which has $\\, {\\rm\\ \\sqrt{trace}}\\, =\\, \\sqrt{4}\\, =\\, 2.\\,\\ \\ \\ \\rm \\color{brown}{Dividing\\ it\\ out}\\,\\ $ of the above yields $\\ \\ \\ 1-\\sqrt 7$ </p>\n\n<p>Negate it to get the positive root. Many more worked examples are in <a href=\"https://math.stackexchange.com/search?q=user%3A242+denesting+subtract+norm+trace\">prior posts on this rule.</a></p>\n" }, { "answer_id": 2092919, "score": 0, "text": "<p>Consider this:$$\\sqrt{X\\pm Y}=\\sqrt{\\dfrac {X+\\sqrt{X^2-Y^2}}2}\\pm\\sqrt{\\dfrac {X-\\sqrt{X^2-Y^2}}2}\\tag1$$\nFor $X,Y\\in\\mathbb{R}$ and $X&gt;Y$. Therefore, we have $x=8,\\ Y=\\sqrt{28}$ so$$\\sqrt{8-2\\sqrt7}=\\sqrt{\\dfrac {8+\\sqrt{64-28}}2}-\\sqrt{\\dfrac {8-\\sqrt{64-28}}2}=\\sqrt7-1\\tag2$$</p>\n" }, { "answer_id": 2090837, "score": 6, "text": "<p>HINT:</p>\n\n<p>$$\n8=1^2+(\\sqrt{7})^2\n$$</p>\n\n<p>and $(a-b)^2=a^2+b^2-2ab$.</p>\n" }, { "answer_id": 2090845, "score": 3, "text": "<p>$$\\sqrt { 8-2\\sqrt { 7 } } =\\sqrt { { \\left( \\sqrt { 7 } \\right) }^{ 2 }-2\\sqrt { 7 } +1 } =\\sqrt { { \\left( \\sqrt { 7 } -1 \\right) }^{ 2 } } =\\sqrt { 7 } -1$$</p>\n" }, { "answer_id": 2090872, "score": 3, "text": "<p>Maybe that the formula that you do not remember is:\n$$\n\\sqrt{a\\pm \\sqrt{b}}=\\sqrt{\\dfrac{a+ \\sqrt{a^2-b}}{2}}\\pm\\sqrt{\\dfrac{a- \\sqrt{a^2-b}}{2}}\n$$\nthat can easily verified ( see my answer to the similar question: <a href=\"https://math.stackexchange.com/questions/1214527/denesting-a-square-root-sqrt7-sqrt14/1454914#1454914\">Denesting a square root: $\\sqrt{7 + \\sqrt{14}}$</a>) and works well when $a^2-b$ is a perfect square.</p>\n\n<p>In this case we have:\n$$\\sqrt{8 - 2\\sqrt{7}}=\\sqrt{8 - \\sqrt{28}} \\quad \\Rightarrow \\quad a^2-b=36$$</p>\n\n<p>and, using 
the formula:\n$$\n\sqrt{8 - 2\sqrt{7}}=\sqrt{7}-1\n$$</p>\n" }, { "answer_id": 2090848, "score": 2, "text": "<p>Try making perfect square inside of that square root, as you can see that $$2\sqrt{7}=2\times1\times\sqrt{7}$$ and $$8=1+(\sqrt{7})^2$$ Then apply $$a^2+b^2-2ab=(a-b)^2$$</p>\n" } ]
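The denesting identity that these answers keep applying can be sanity-checked numerically. The sketch below is not part of the original thread; it assumes only Python's standard-library `math` module, and `denest` is a hypothetical helper name implementing the formula $\sqrt{a-\sqrt b}=\sqrt{\tfrac{a+r}{2}}-\sqrt{\tfrac{a-r}{2}}$ with $r=\sqrt{a^2-b}$:

```python
# Numerical check of the denesting discussed above: sqrt(8 - 2*sqrt(7)) = sqrt(7) - 1.
import math

def denest(a, b):
    """Evaluate sqrt(a - sqrt(b)) as sqrt((a+r)/2) - sqrt((a-r)/2), r = sqrt(a^2 - b).

    The result is a sum of simple square roots when a^2 - b is a perfect
    square (here a=8, b=28 gives 64 - 28 = 36).
    """
    r = math.sqrt(a * a - b)
    return math.sqrt((a + r) / 2) - math.sqrt((a - r) / 2)

nested = math.sqrt(8 - 2 * math.sqrt(7))   # note 2*sqrt(7) = sqrt(28)
denested = math.sqrt(7) - 1
assert abs(nested - denested) < 1e-12
assert abs(denest(8, 28) - denested) < 1e-12
```

This only confirms the identity in floating point; the algebraic proof is the completed square $8-2\sqrt 7=(\sqrt 7-1)^2$ given in the answers.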
Question: How to factorize this $\sqrt{8 - 2\sqrt{7}}$? Answer 1: <p>HINT:</p> <p>$$ 8=1^2+(\sqrt{7})^2 $$</p> <p>and $(a-b)^2=a^2+b^2-2ab$.</p> Answer 2: <p>$$\sqrt { 8-2\sqrt { 7 } } =\sqrt { { \left( \sqrt { 7 } \right) }^{ 2 }-2\sqrt { 7 } +1 } =\sqrt { { \left( \sqrt { 7 } -1 \right) }^{ 2 } } =\sqrt { 7 } -1$$</p> Answer 3: <p>Maybe the formula that you do not remember is: $$ \sqrt{a\pm \sqrt{b}}=\sqrt{\dfrac{a+ \sqrt{a^2-b}}{2}}\pm\sqrt{\dfrac{a- \sqrt{a^2-b}}{2}} $$ which can be easily verified (see my answer to the similar question: <a href="https://math.stackexchange.com/questions/1214527/denesting-a-square-root-sqrt7-sqrt14/1454914#1454914">Denesting a square root: $\sqrt{7 + \sqrt{14}}$</a>) and works well when $a^2-b$ is a perfect square.</p> <p>In this case we have: $$\sqrt{8 - 2\sqrt{7}}=\sqrt{8 - \sqrt{28}} \quad \Rightarrow \quad a^2-b=36$$</p> <p>and, using the formula: $$ \sqrt{8 - 2\sqrt{7}}=\sqrt{7}-1 $$</p> Answer 4: <p>Try making a perfect square inside of that square root; you can see that $$2\sqrt{7}=2\times1\times\sqrt{7}$$ and $$8=1+(\sqrt{7})^2$$ Then apply $$a^2+b^2-2ab=(a-b)^2$$</p> Answer 5: <p>Such square roots can be <em>mechanically</em> computed by a <a href="https://math.stackexchange.com/a/816527/242">Simple Denesting Rule:</a></p> <p>Here $\ 8-2\sqrt 7\ $ has norm $= 36.\:$ $\rm\ \color{blue}{Subtracting\ out}\,\ \sqrt{norm}\ = 6\,\ $ yields $\,\ 2-2\sqrt 7\:$</p> <p>which has $\, {\rm\ \sqrt{trace}}\, =\, \sqrt{4}\, =\, 2.\,\ \ \ \rm \color{brown}{Dividing\ it\ out}\,\ $ of the above yields $\ \ \ 1-\sqrt 7$ </p> <p>Negate it to get the positive root. Many more worked examples are in <a href="https://math.stackexchange.com/search?q=user%3A242+denesting+subtract+norm+trace">prior posts on this rule.</a></p> Answer 6: <p>Consider this:$$\sqrt{X\pm Y}=\sqrt{\dfrac {X+\sqrt{X^2-Y^2}}2}\pm\sqrt{\dfrac {X-\sqrt{X^2-Y^2}}2}\tag1$$ For $X,Y\in\mathbb{R}$ and $X&gt;Y$. 
Therefore, we have $x=8,\ Y=\sqrt{28}$ so$$\sqrt{8-2\sqrt7}=\sqrt{\dfrac {8+\sqrt{64-28}}2}-\sqrt{\dfrac {8-\sqrt{64-28}}2}=\sqrt7-1\tag2$$</p> Answer 7: <p>HINT:</p> <p>$$ 8=1^2+(\sqrt{7})^2 $$</p> <p>and $(a-b)^2=a^2+b^2-2ab$.</p> Answer 8: <p>$$\sqrt { 8-2\sqrt { 7 } } =\sqrt { { \left( \sqrt { 7 } \right) }^{ 2 }-2\sqrt { 7 } +1 } =\sqrt { { \left( \sqrt { 7 } -1 \right) }^{ 2 } } =\sqrt { 7 } -1$$</p> Answer 9: <p>Maybe that the formula that you do not remember is: $$ \sqrt{a\pm \sqrt{b}}=\sqrt{\dfrac{a+ \sqrt{a^2-b}}{2}}\pm\sqrt{\dfrac{a- \sqrt{a^2-b}}{2}} $$ that can easily verified ( see my answer to the similar question: <a href="https://math.stackexchange.com/questions/1214527/denesting-a-square-root-sqrt7-sqrt14/1454914#1454914">Denesting a square root: $\sqrt{7 + \sqrt{14}}$</a>) and works well when $a^2-b$ is a perfect square.</p> <p>In this case we have: $$\sqrt{8 - 2\sqrt{7}}=\sqrt{8 - \sqrt{28}} \quad \Rightarrow \quad a^2-b=36$$</p> <p>and, using the formula: $$ \sqrt{8 - 2\sqrt{7}}=\sqrt{7}-1 $$</p> Answer 10: <p>Try making perfect square inside of that square root, as you can see that $$2\sqrt{7}=2\times1\times\sqrt{7}$$ and $$8=1+(\sqrt{7})^2$$ Then apply $$a^2+b^2-2ab=(a-b^2)$$</p> Answer 11: <p>Such square roots can be <em>mechanically</em> computed by a <a href="https://math.stackexchange.com/a/816527/242">Simple Denesting Rule:</a></p> <p>Here $\ 8-2\sqrt 7\ $ has norm $= 36.\:$ $\rm\ \color{blue}{Subtracting\ out}\,\ \sqrt{norm}\ = 6\,\ $ yields $\,\ 2-2\sqrt 7\:$</p> <p>which has $\, {\rm\ \sqrt{trace}}\, =\, \sqrt{4}\, =\, 2.\,\ \ \ \rm \color{brown}{Dividing\ it\ out}\,\ $ of the above yields $\ \ \ 1-\sqrt 7$ </p> <p>Negate it to get the positive root. 
Many more worked examples are in <a href="https://math.stackexchange.com/search?q=user%3A242+denesting+subtract+norm+trace">prior posts on this rule.</a></p> Answer 12: <p>Consider this:$$\sqrt{X\pm Y}=\sqrt{\dfrac {X+\sqrt{X^2-Y^2}}2}\pm\sqrt{\dfrac {X-\sqrt{X^2-Y^2}}2}\tag1$$ For $X,Y\in\mathbb{R}$ and $X&gt;Y$. Therefore, we have $x=8,\ Y=\sqrt{28}$ so$$\sqrt{8-2\sqrt7}=\sqrt{\dfrac {8+\sqrt{64-28}}2}-\sqrt{\dfrac {8-\sqrt{64-28}}2}=\sqrt7-1\tag2$$</p> Answer 13: <p>HINT:</p> <p>$$ 8=1^2+(\sqrt{7})^2 $$</p> <p>and $(a-b)^2=a^2+b^2-2ab$.</p> Answer 14: <p>$$\sqrt { 8-2\sqrt { 7 } } =\sqrt { { \left( \sqrt { 7 } \right) }^{ 2 }-2\sqrt { 7 } +1 } =\sqrt { { \left( \sqrt { 7 } -1 \right) }^{ 2 } } =\sqrt { 7 } -1$$</p> Answer 15: <p>Maybe that the formula that you do not remember is: $$ \sqrt{a\pm \sqrt{b}}=\sqrt{\dfrac{a+ \sqrt{a^2-b}}{2}}\pm\sqrt{\dfrac{a- \sqrt{a^2-b}}{2}} $$ that can easily verified ( see my answer to the similar question: <a href="https://math.stackexchange.com/questions/1214527/denesting-a-square-root-sqrt7-sqrt14/1454914#1454914">Denesting a square root: $\sqrt{7 + \sqrt{14}}$</a>) and works well when $a^2-b$ is a perfect square.</p> <p>In this case we have: $$\sqrt{8 - 2\sqrt{7}}=\sqrt{8 - \sqrt{28}} \quad \Rightarrow \quad a^2-b=36$$</p> <p>and, using the formula: $$ \sqrt{8 - 2\sqrt{7}}=\sqrt{7}-1 $$</p> Answer 16: <p>Try making perfect square inside of that square root, as you can see that $$2\sqrt{7}=2\times1\times\sqrt{7}$$ and $$8=1+(\sqrt{7})^2$$ Then apply $$a^2+b^2-2ab=(a-b^2)$$</p> Answer 17: <p>Such square roots can be <em>mechanically</em> computed by a <a href="https://math.stackexchange.com/a/816527/242">Simple Denesting Rule:</a></p> <p>Here $\ 8-2\sqrt 7\ $ has norm $= 36.\:$ $\rm\ \color{blue}{Subtracting\ out}\,\ \sqrt{norm}\ = 6\,\ $ yields $\,\ 2-2\sqrt 7\:$</p> <p>which has $\, {\rm\ \sqrt{trace}}\, =\, \sqrt{4}\, =\, 2.\,\ \ \ \rm \color{brown}{Dividing\ it\ out}\,\ $ of the above yields $\ \ \ 1-\sqrt 7$ </p> <p>Negate 
it to get the positive root. Many more worked examples are in <a href="https://math.stackexchange.com/search?q=user%3A242+denesting+subtract+norm+trace">prior posts on this rule.</a></p> Answer 18: <p>Consider this:$$\sqrt{X\pm Y}=\sqrt{\dfrac {X+\sqrt{X^2-Y^2}}2}\pm\sqrt{\dfrac {X-\sqrt{X^2-Y^2}}2}\tag1$$ For $X,Y\in\mathbb{R}$ and $X&gt;Y$. Therefore, we have $x=8,\ Y=\sqrt{28}$ so$$\sqrt{8-2\sqrt7}=\sqrt{\dfrac {8+\sqrt{64-28}}2}-\sqrt{\dfrac {8-\sqrt{64-28}}2}=\sqrt7-1\tag2$$</p> Answer 19: <p>HINT:</p> <p>$$ 8=1^2+(\sqrt{7})^2 $$</p> <p>and $(a-b)^2=a^2+b^2-2ab$.</p> Answer 20: <p>$$\sqrt { 8-2\sqrt { 7 } } =\sqrt { { \left( \sqrt { 7 } \right) }^{ 2 }-2\sqrt { 7 } +1 } =\sqrt { { \left( \sqrt { 7 } -1 \right) }^{ 2 } } =\sqrt { 7 } -1$$</p> Answer 21: <p>Maybe that the formula that you do not remember is: $$ \sqrt{a\pm \sqrt{b}}=\sqrt{\dfrac{a+ \sqrt{a^2-b}}{2}}\pm\sqrt{\dfrac{a- \sqrt{a^2-b}}{2}} $$ that can easily verified ( see my answer to the similar question: <a href="https://math.stackexchange.com/questions/1214527/denesting-a-square-root-sqrt7-sqrt14/1454914#1454914">Denesting a square root: $\sqrt{7 + \sqrt{14}}$</a>) and works well when $a^2-b$ is a perfect square.</p> <p>In this case we have: $$\sqrt{8 - 2\sqrt{7}}=\sqrt{8 - \sqrt{28}} \quad \Rightarrow \quad a^2-b=36$$</p> <p>and, using the formula: $$ \sqrt{8 - 2\sqrt{7}}=\sqrt{7}-1 $$</p> Answer 22: <p>Try making perfect square inside of that square root, as you can see that $$2\sqrt{7}=2\times1\times\sqrt{7}$$ and $$8=1+(\sqrt{7})^2$$ Then apply $$a^2+b^2-2ab=(a-b^2)$$</p> Answer 23: <p>Such square roots can be <em>mechanically</em> computed by a <a href="https://math.stackexchange.com/a/816527/242">Simple Denesting Rule:</a></p> <p>Here $\ 8-2\sqrt 7\ $ has norm $= 36.\:$ $\rm\ \color{blue}{Subtracting\ out}\,\ \sqrt{norm}\ = 6\,\ $ yields $\,\ 2-2\sqrt 7\:$</p> <p>which has $\, {\rm\ \sqrt{trace}}\, =\, \sqrt{4}\, =\, 2.\,\ \ \ \rm \color{brown}{Dividing\ it\ out}\,\ $ of the above yields $\ 
\ \ 1-\sqrt 7$ </p> <p>Negate it to get the positive root. Many more worked examples are in <a href="https://math.stackexchange.com/search?q=user%3A242+denesting+subtract+norm+trace">prior posts on this rule.</a></p> Answer 24: <p>Consider this:$$\sqrt{X\pm Y}=\sqrt{\dfrac {X+\sqrt{X^2-Y^2}}2}\pm\sqrt{\dfrac {X-\sqrt{X^2-Y^2}}2}\tag1$$ For $X,Y\in\mathbb{R}$ and $X&gt;Y$. Therefore, we have $x=8,\ Y=\sqrt{28}$ so$$\sqrt{8-2\sqrt7}=\sqrt{\dfrac {8+\sqrt{64-28}}2}-\sqrt{\dfrac {8-\sqrt{64-28}}2}=\sqrt7-1\tag2$$</p> Answer 25: <p>HINT:</p> <p>$$ 8=1^2+(\sqrt{7})^2 $$</p> <p>and $(a-b)^2=a^2+b^2-2ab$.</p> Answer 26: <p>$$\sqrt { 8-2\sqrt { 7 } } =\sqrt { { \left( \sqrt { 7 } \right) }^{ 2 }-2\sqrt { 7 } +1 } =\sqrt { { \left( \sqrt { 7 } -1 \right) }^{ 2 } } =\sqrt { 7 } -1$$</p> Answer 27: <p>Maybe that the formula that you do not remember is: $$ \sqrt{a\pm \sqrt{b}}=\sqrt{\dfrac{a+ \sqrt{a^2-b}}{2}}\pm\sqrt{\dfrac{a- \sqrt{a^2-b}}{2}} $$ that can easily verified ( see my answer to the similar question: <a href="https://math.stackexchange.com/questions/1214527/denesting-a-square-root-sqrt7-sqrt14/1454914#1454914">Denesting a square root: $\sqrt{7 + \sqrt{14}}$</a>) and works well when $a^2-b$ is a perfect square.</p> <p>In this case we have: $$\sqrt{8 - 2\sqrt{7}}=\sqrt{8 - \sqrt{28}} \quad \Rightarrow \quad a^2-b=36$$</p> <p>and, using the formula: $$ \sqrt{8 - 2\sqrt{7}}=\sqrt{7}-1 $$</p> Answer 28: <p>Try making perfect square inside of that square root, as you can see that $$2\sqrt{7}=2\times1\times\sqrt{7}$$ and $$8=1+(\sqrt{7})^2$$ Then apply $$a^2+b^2-2ab=(a-b^2)$$</p> Answer 29: <p>Such square roots can be <em>mechanically</em> computed by a <a href="https://math.stackexchange.com/a/816527/242">Simple Denesting Rule:</a></p> <p>Here $\ 8-2\sqrt 7\ $ has norm $= 36.\:$ $\rm\ \color{blue}{Subtracting\ out}\,\ \sqrt{norm}\ = 6\,\ $ yields $\,\ 2-2\sqrt 7\:$</p> <p>which has $\, {\rm\ \sqrt{trace}}\, =\, \sqrt{4}\, =\, 2.\,\ \ \ \rm \color{brown}{Dividing\ it\ 
out}\,\ $ of the above yields $\ \ \ 1-\sqrt 7$ </p> <p>Negate it to get the positive root. Many more worked examples are in <a href="https://math.stackexchange.com/search?q=user%3A242+denesting+subtract+norm+trace">prior posts on this rule.</a></p> Answer 30: <p>Consider this:$$\sqrt{X\pm Y}=\sqrt{\dfrac {X+\sqrt{X^2-Y^2}}2}\pm\sqrt{\dfrac {X-\sqrt{X^2-Y^2}}2}\tag1$$ For $X,Y\in\mathbb{R}$ and $X&gt;Y$. Therefore, we have $x=8,\ Y=\sqrt{28}$ so$$\sqrt{8-2\sqrt7}=\sqrt{\dfrac {8+\sqrt{64-28}}2}-\sqrt{\dfrac {8-\sqrt{64-28}}2}=\sqrt7-1\tag2$$</p> Answer 31: <p>HINT:</p> <p>$$ 8=1^2+(\sqrt{7})^2 $$</p> <p>and $(a-b)^2=a^2+b^2-2ab$.</p> Answer 32: <p>$$\sqrt { 8-2\sqrt { 7 } } =\sqrt { { \left( \sqrt { 7 } \right) }^{ 2 }-2\sqrt { 7 } +1 } =\sqrt { { \left( \sqrt { 7 } -1 \right) }^{ 2 } } =\sqrt { 7 } -1$$</p> Answer 33: <p>Maybe that the formula that you do not remember is: $$ \sqrt{a\pm \sqrt{b}}=\sqrt{\dfrac{a+ \sqrt{a^2-b}}{2}}\pm\sqrt{\dfrac{a- \sqrt{a^2-b}}{2}} $$ that can easily verified ( see my answer to the similar question: <a href="https://math.stackexchange.com/questions/1214527/denesting-a-square-root-sqrt7-sqrt14/1454914#1454914">Denesting a square root: $\sqrt{7 + \sqrt{14}}$</a>) and works well when $a^2-b$ is a perfect square.</p> <p>In this case we have: $$\sqrt{8 - 2\sqrt{7}}=\sqrt{8 - \sqrt{28}} \quad \Rightarrow \quad a^2-b=36$$</p> <p>and, using the formula: $$ \sqrt{8 - 2\sqrt{7}}=\sqrt{7}-1 $$</p> Answer 34: <p>Try making perfect square inside of that square root, as you can see that $$2\sqrt{7}=2\times1\times\sqrt{7}$$ and $$8=1+(\sqrt{7})^2$$ Then apply $$a^2+b^2-2ab=(a-b^2)$$</p> Answer 35: <p>Such square roots can be <em>mechanically</em> computed by a <a href="https://math.stackexchange.com/a/816527/242">Simple Denesting Rule:</a></p> <p>Here $\ 8-2\sqrt 7\ $ has norm $= 36.\:$ $\rm\ \color{blue}{Subtracting\ out}\,\ \sqrt{norm}\ = 6\,\ $ yields $\,\ 2-2\sqrt 7\:$</p> <p>which has $\, {\rm\ \sqrt{trace}}\, =\, \sqrt{4}\, =\, 2.\,\ \ \ 
\rm \color{brown}{Dividing\ it\ out}\,\ $ of the above yields $\ \ \ 1-\sqrt 7$ </p> <p>Negate it to get the positive root. Many more worked examples are in <a href="https://math.stackexchange.com/search?q=user%3A242+denesting+subtract+norm+trace">prior posts on this rule.</a></p> Answer 36: <p>Consider this:$$\sqrt{X\pm Y}=\sqrt{\dfrac {X+\sqrt{X^2-Y^2}}2}\pm\sqrt{\dfrac {X-\sqrt{X^2-Y^2}}2}\tag1$$ For $X,Y\in\mathbb{R}$ and $X&gt;Y$. Therefore, we have $x=8,\ Y=\sqrt{28}$ so$$\sqrt{8-2\sqrt7}=\sqrt{\dfrac {8+\sqrt{64-28}}2}-\sqrt{\dfrac {8-\sqrt{64-28}}2}=\sqrt7-1\tag2$$</p> Answer 37: <p>HINT:</p> <p>$$ 8=1^2+(\sqrt{7})^2 $$</p> <p>and $(a-b)^2=a^2+b^2-2ab$.</p> Answer 38: <p>$$\sqrt { 8-2\sqrt { 7 } } =\sqrt { { \left( \sqrt { 7 } \right) }^{ 2 }-2\sqrt { 7 } +1 } =\sqrt { { \left( \sqrt { 7 } -1 \right) }^{ 2 } } =\sqrt { 7 } -1$$</p> Answer 39: <p>Maybe that the formula that you do not remember is: $$ \sqrt{a\pm \sqrt{b}}=\sqrt{\dfrac{a+ \sqrt{a^2-b}}{2}}\pm\sqrt{\dfrac{a- \sqrt{a^2-b}}{2}} $$ that can easily verified ( see my answer to the similar question: <a href="https://math.stackexchange.com/questions/1214527/denesting-a-square-root-sqrt7-sqrt14/1454914#1454914">Denesting a square root: $\sqrt{7 + \sqrt{14}}$</a>) and works well when $a^2-b$ is a perfect square.</p> <p>In this case we have: $$\sqrt{8 - 2\sqrt{7}}=\sqrt{8 - \sqrt{28}} \quad \Rightarrow \quad a^2-b=36$$</p> <p>and, using the formula: $$ \sqrt{8 - 2\sqrt{7}}=\sqrt{7}-1 $$</p> Answer 40: <p>Try making perfect square inside of that square root, as you can see that $$2\sqrt{7}=2\times1\times\sqrt{7}$$ and $$8=1+(\sqrt{7})^2$$ Then apply $$a^2+b^2-2ab=(a-b^2)$$</p> Answer 41: <p>Such square roots can be <em>mechanically</em> computed by a <a href="https://math.stackexchange.com/a/816527/242">Simple Denesting Rule:</a></p> <p>Here $\ 8-2\sqrt 7\ $ has norm $= 36.\:$ $\rm\ \color{blue}{Subtracting\ out}\,\ \sqrt{norm}\ = 6\,\ $ yields $\,\ 2-2\sqrt 7\:$</p> <p>which has $\, {\rm\ \sqrt{trace}}\, 
=\, \sqrt{4}\, =\, 2.\,\ \ \ \rm \color{brown}{Dividing\ it\ out}\,\ $ of the above yields $\ \ \ 1-\sqrt 7$ </p> <p>Negate it to get the positive root. Many more worked examples are in <a href="https://math.stackexchange.com/search?q=user%3A242+denesting+subtract+norm+trace">prior posts on this rule.</a></p>
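The two approaches in the answers above — completing the square and the general denesting formula — are easy to sanity-check numerically. The following short Python snippet (not part of the original thread) verifies both for $a=8$, $b=28$:

```python
import math

# Claim of the thread: sqrt(8 - 2*sqrt(7)) denests to sqrt(7) - 1.
nested = math.sqrt(8 - 2 * math.sqrt(7))
assert math.isclose(nested, math.sqrt(7) - 1)

# General formula quoted in the answers:
#   sqrt(a - sqrt(b)) = sqrt((a + sqrt(a^2 - b))/2) - sqrt((a - sqrt(a^2 - b))/2),
# which denests nicely when a^2 - b is a perfect square.  Here a = 8, b = 28.
a, b = 8.0, 28.0
d = math.sqrt(a * a - b)  # sqrt(36) = 6, a perfect square, so the formula pays off
rhs = math.sqrt((a + d) / 2) - math.sqrt((a - d) / 2)
assert math.isclose(math.sqrt(a - math.sqrt(b)), rhs)

print(round(nested, 6))  # 1.645751, the decimal value of sqrt(7) - 1
```

Both assertions pass, confirming that the closed form and the general formula agree with the nested radical.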
Why does combining $\int_{-\infty}^{\infty} e^{-x^2} dx\int_{-\infty}^{\infty} e^{-y^2} dy$ into a $dx\,dy$ integral work?
2,091,140
8
https://math.stackexchange.com/questions/2091140/why-does-combining-int-infty-infty-e-x2-dx-int-infty-infty-e
<p>I was looking here: <a href="https://math.stackexchange.com/questions/552218/how-to-compute-the-integral-int-infty-infty-e-x2-dx">How to compute the integral $\int_{-\infty}^\infty e^{-x^2}\,dx$?</a> to find out how to evaluate <span class="math-container">$\displaystyle\int\limits_{-\infty}^{\infty} e^{-x^2} dx$</span>, and didn't understand why <span class="math-container">$$\left(\displaystyle\int\limits_{-\infty}^{\infty} e^{-x^2} dx\right)\left(\int\limits_{-\infty}^{\infty} e^{-y^2} dy\right)=\displaystyle\int\limits_{-\infty}^{\infty}\int\limits_{-\infty}^{\infty} e^{-x^2} e^{-y^2} dx\thinspace dy$$</span> I know you can use Fubini's Theorem from this: <a href="https://math.stackexchange.com/questions/1426891/why-does-left-int-infty-inftye-t2-dt-right2-int-infty-i">Why does $\left(\int_{-\infty}^{\infty}e^{-t^2} dt \right)^2= \int_{-\infty}^{\infty}\int_{-\infty}^{\infty}e^{-(x^2 + y^2)}dx\,dy$?</a>, but I'm still confused about how exactly you can just multiply two integrals together like that.</p> <p>A detailed answer would be very nice.</p> <p>Thanks!</p>
[ { "answer_id": 2091155, "score": 7, "text": "<p>$$\\int_{-\\infty}^{\\infty} \\int_{-\\infty}^{\\infty} e^{-x^2}e^{-y^2} dx dy$$</p>\n\n<p>Because we treat $y$ as constant in the first (inner) integral, we can pull it out.</p>\n\n<p>$$=\\int_{-\\infty}^{\\infty} e^{-y^2} \\int_{-\\infty}^{\\infty} e^{-x^2} dx dy$$</p>\n\n<p>Now because $\\int_{-\\infty}^{\\infty} e^{-x^2} dx$ is some constant we can pull it out,</p>\n\n<p>$$=\\int_{-\\infty}^{\\infty} e^{-x^2} dx\\int_{-\\infty}^{\\infty} e^{-y^2} dy$$</p>\n\n<p>The result we got is generalizable: given $g(x,y)=f(x)h(y)$ we have,</p>\n\n<p>$$\\int_{a}^{b} \\int_{c}^{d} g(x,y) dx dy=\\int_{a}^{b} h(y) dy \\int_{c}^{d} f(x) dx$$</p>\n\n<p>Provided everything we write down exists.</p>\n" }, { "answer_id": 2091166, "score": 5, "text": "<p>The Fubini-Tonelli theorem implies that if $\\int_{\\mathbb{R}} dx \\left(\\int_{\\mathbb{R}} dy |f(x,y)|\\right) &lt; \\infty$ then $$\\int_{\\mathbb{R}^2} f(x,y)dxdy = \\int_{\\mathbb{R}} dx \\left(\\int_{\\mathbb{R}} dy f(x,y)\\right).$$ First, to establish the premise, note that $f(x,y) = e^{-(x^2+y^2)}$ is positive, so $|f(x,y)| = f(x,y).$ So we just need to establish the finiteness of the RHS. 
The inner integral is\n$$\n\\int_{\\mathbb{R}} dy e^{-(x^2+y^2)} = e^{-x^2}\\int_{\\mathbb{R}} dy e^{-y^2}.\n$$\nBy comparing the integrand with $e^{-|y|}$ which can be integrated to 2 by elementary means, we see that $$\\int_{\\mathbb{R}} e^{-y^2}dy = C &lt; \\infty$$ so that $$\\int_{\\mathbb{R}} dx \\left(\\int_{\\mathbb{R}} dy e^{-(x^2+y^2)}\\right) = C\\int_{\\mathbb{R}} dx e^{-x^2} = C^2 &lt; \\infty.$$</p>\n\n<p>Thus the theorem gives\n$$\n\\int_{\\mathbb{R}^2} dxdy e^{-(x^2+y^2)} = C^2\n$$\nwhere $$ C = \\int_{\\mathbb{R}} dx e^{-x^2}.$$</p>\n\n<p>The two-dimensional integral can then be transformed to polar coordinates by the change of variables theorem, which (by Fubini again) is a doable iterated integral that comes out to $\\pi.$</p>\n" }, { "answer_id": 2091199, "score": 3, "text": "<p>The integral $\\displaystyle \\int_{-\\infty}^\\infty e^{-y^2} \\,dy$ does not change as $x$ goes from $-\\infty$ to $\\infty$, so you have\n\\begin{align}\n\\int_{-\\infty}^\\infty e^{-x^2} \\, dx \\cdot \\int_{-\\infty}^\\infty e^{-y^2} \\, dy &amp; = \\int_{-\\infty}^\\infty e^{-x^2} \\, dx \\cdot \\text{constant} \\\\[10pt]\n&amp; = \\int_{-\\infty}^\\infty \\left( e^{-x^2}\\cdot\\text{constant} \\right)\\,dx \\\\[10pt]\n&amp; = \\int_{-\\infty}^\\infty \\left( e^{-x^2} \\int_{-\\infty}^\\infty e^{-y^2}\\,dy \\right)\\,dx\n\\end{align}\nThe factor $e^{-x^2}$ does not change as $y$ goes from $-\\infty$ to $\\infty$, so you now have\n\\begin{align}\n\\int_{-\\infty}^\\infty \\left( e^{-x^2} \\int_{-\\infty}^\\infty e^{-y^2}\\,dy \\right) \\, dx &amp;= \\int_{-\\infty}^\\infty \\left( \\text{constant} \\cdot \\int_{-\\infty}^\\infty e^{-y^2}\\,dy \\right) \\,dx \\\\[10pt]\n&amp; = \\int_{-\\infty}^\\infty \\left( \\int_{-\\infty}^\\infty \\text{constant} \\cdot e^{-y^2}\\,dy \\right) \\,dx \\\\[10pt]\n&amp; = \\int_{-\\infty}^\\infty \\left( \\int_{-\\infty}^\\infty e^{-x^2} \\cdot e^{-y^2}\\,dy \\right) \\,dx\n\\end{align}</p>\n" }, { "answer_id": 2091158, "score": 1, 
"text": "<p>Sorry This is not a complete answer but i hope it can help you </p>\n\n<p>if we integral of dx then $e^{-y^2}$ assume constant \n$$\\displaystyle\\int\\limits_{y=-\\infty}^{y=\\infty}\\int\\limits_{x=-\\infty}^{x=\\infty} e^{-x^2} e^{-y^2} dx\\thinspace dy=\\Big(\\int\\limits_{y=-\\infty}^{y=\\infty} e^{-y^2} dy\\Big)\\Big(\\displaystyle\\int\\limits_{x=-\\infty}^{x=\\infty} e^{-x^2} dx\\Big)=\\thinspace\\int\\limits_{y=-\\infty}^{y=\\infty} e^{-y^2}dy \\displaystyle\\int\\limits_{x=-\\infty}^{x=\\infty} e^{-x^2} dx $$ </p>\n" }, { "answer_id": 2091155, "score": 7, "text": "<p>$$\\int_{-\\infty}^{\\infty} \\int_{-\\infty}^{\\infty} e^{-x^2}e^{-y^2} dx dy$$</p>\n\n<p>Because we treat $y$ constant in the first, inner, integral we can pull it out.</p>\n\n<p>$$=\\int_{-\\infty}^{\\infty} e^{-y^2} \\int_{-\\infty}^{\\infty} e^{-x^2} dx dy$$</p>\n\n<p>Now because $\\int_{-\\infty}^{\\infty} e^{-x^2} dx$ is some constant we can pull it out,</p>\n\n<p>$$=\\int_{-\\infty}^{\\infty} e^{-x^2} dx\\int_{-\\infty}^{\\infty} e^{-y^2} dy$$</p>\n\n<p>The result we got is generalizable, given $g(x,y)=f(x)h(y)$ we have,</p>\n\n<p>$$\\int_{a}^{b} \\int_{c}^{d} g(x,y) dx dy=\\int_{a}^{b} h(y) dy \\int_{c}^{d} f(x) dx$$</p>\n\n<p>Provided everything we write down exists.</p>\n" }, { "answer_id": 2091166, "score": 5, "text": "<p>Fubini-Tonelli theorem implies that if $\\int_{\\mathbb{R}} dx \\left(\\int_{\\mathbb{R}} dy |f(x,y)|\\right) &lt; \\infty$ then $$\\int_{\\mathbb{R}^2} f(x,y)dxdy = \\int_{\\mathbb{R}} dx \\left(\\int_{\\mathbb{R}} dy f(x,y)\\right).$$ First, to establish the premise, note that $f(x,y) = e^{-(x^2+y^2)},$ is positive so $|f(x,y)| = f(x,y).$ So we just need to establish the finiteness of the RHS. 
The inner integral is\n$$\n\\int_{\\mathbb{R}} dy e^{-(x^2+y^2)} = e^{-x^2}\\int_{\\mathbb{R}} dy e^{-y^2}.\n$$\nBy comparing the integrand with $e^{-|y|}$ which can be integrated to 2 by elementary means, we see that $$\\int_{\\mathbb{R}} e^{-y^2}dy = C &lt; \\infty$$ so that $$\\int_{\\mathbb{R}} dx \\left(\\int_{\\mathbb{R}} dy e^{-(x^2+y^2)}\\right) = C\\int_{\\mathbb{R}} dx e^{-x^2} = C^2 &lt; \\infty.$$</p>\n\n<p>Thus the theorem gives\n$$\n\\int_{\\mathbb{R}^2} dxdy e^{-(x^2+y^2)} = C^2\n$$\nwhere $$ C = \\int_{\\mathbb{R}} dx e^{-x^2}.$$</p>\n\n<p>The two-dimensional integral can then be transformed to polar coordinates by the change of variables theorem, which (by Fubini again) is a doable iterated integral that comes out to $\\pi.$</p>\n" }, { "answer_id": 2091199, "score": 3, "text": "<p>The integral $\\displaystyle \\int_{-\\infty}^\\infty e^{-y^2} \\,dy$ does not change as $x$ goes from $-\\infty$ to $\\infty$, so you have\n\\begin{align}\n\\int_{-\\infty}^\\infty e^{-x^2} \\, dx \\cdot \\int_{-\\infty}^\\infty e^{-y^2} \\, dy &amp; = \\int_{-\\infty}^\\infty e^{-x^2} \\, dx \\cdot \\text{constant} \\\\[10pt]\n&amp; = \\int_{-\\infty}^\\infty \\left( e^{-x^2}\\cdot\\text{constant} \\right)\\,dx \\\\[10pt]\n&amp; = \\int_{-\\infty}^\\infty \\left( e^{-x^2} \\int_{-\\infty}^\\infty e^{-y^2}\\,dy \\right)\\,dx\n\\end{align}\nThe factor $e^{-x^2}$ does not change as $y$ goes from $-\\infty$ to $\\infty$, so you now have\n\\begin{align}\n\\int_{-\\infty}^\\infty \\left( e^{-x^2} \\int_{-\\infty}^\\infty e^{-y^2}\\,dy \\right) \\, dx &amp;= \\int_{-\\infty}^\\infty \\left( \\text{constant} \\cdot \\int_{-\\infty}^\\infty e^{-y^2}\\,dy \\right) \\,dx \\\\[10pt]\n&amp; = \\int_{-\\infty}^\\infty \\left( \\int_{-\\infty}^\\infty \\text{constant} \\cdot e^{-y^2}\\,dy \\right) \\,dx \\\\[10pt]\n&amp; = \\int_{-\\infty}^\\infty \\left( \\int_{-\\infty}^\\infty e^{-x^2} \\cdot e^{-y^2}\\,dy \\right) \\,dx\n\\end{align}</p>\n" }, { "answer_id": 2091158, "score": 1, 
"text": "<p>Sorry This is not a complete answer but i hope it can help you </p>\n\n<p>if we integral of dx then $e^{-y^2}$ assume constant \n$$\\displaystyle\\int\\limits_{y=-\\infty}^{y=\\infty}\\int\\limits_{x=-\\infty}^{x=\\infty} e^{-x^2} e^{-y^2} dx\\thinspace dy=\\Big(\\int\\limits_{y=-\\infty}^{y=\\infty} e^{-y^2} dy\\Big)\\Big(\\displaystyle\\int\\limits_{x=-\\infty}^{x=\\infty} e^{-x^2} dx\\Big)=\\thinspace\\int\\limits_{y=-\\infty}^{y=\\infty} e^{-y^2}dy \\displaystyle\\int\\limits_{x=-\\infty}^{x=\\infty} e^{-x^2} dx $$ </p>\n" }, { "answer_id": 2091155, "score": 7, "text": "<p>$$\\int_{-\\infty}^{\\infty} \\int_{-\\infty}^{\\infty} e^{-x^2}e^{-y^2} dx dy$$</p>\n\n<p>Because we treat $y$ constant in the first, inner, integral we can pull it out.</p>\n\n<p>$$=\\int_{-\\infty}^{\\infty} e^{-y^2} \\int_{-\\infty}^{\\infty} e^{-x^2} dx dy$$</p>\n\n<p>Now because $\\int_{-\\infty}^{\\infty} e^{-x^2} dx$ is some constant we can pull it out,</p>\n\n<p>$$=\\int_{-\\infty}^{\\infty} e^{-x^2} dx\\int_{-\\infty}^{\\infty} e^{-y^2} dy$$</p>\n\n<p>The result we got is generalizable, given $g(x,y)=f(x)h(y)$ we have,</p>\n\n<p>$$\\int_{a}^{b} \\int_{c}^{d} g(x,y) dx dy=\\int_{a}^{b} h(y) dy \\int_{c}^{d} f(x) dx$$</p>\n\n<p>Provided everything we write down exists.</p>\n" }, { "answer_id": 2091166, "score": 5, "text": "<p>Fubini-Tonelli theorem implies that if $\\int_{\\mathbb{R}} dx \\left(\\int_{\\mathbb{R}} dy |f(x,y)|\\right) &lt; \\infty$ then $$\\int_{\\mathbb{R}^2} f(x,y)dxdy = \\int_{\\mathbb{R}} dx \\left(\\int_{\\mathbb{R}} dy f(x,y)\\right).$$ First, to establish the premise, note that $f(x,y) = e^{-(x^2+y^2)},$ is positive so $|f(x,y)| = f(x,y).$ So we just need to establish the finiteness of the RHS. 
The inner integral is\n$$\n\\int_{\\mathbb{R}} dy e^{-(x^2+y^2)} = e^{-x^2}\\int_{\\mathbb{R}} dy e^{-y^2}.\n$$\nBy comparing the integrand with $e^{-|y|}$ which can be integrated to 2 by elementary means, we see that $$\\int_{\\mathbb{R}} e^{-y^2}dy = C &lt; \\infty$$ so that $$\\int_{\\mathbb{R}} dx \\left(\\int_{\\mathbb{R}} dy e^{-(x^2+y^2)}\\right) = C\\int_{\\mathbb{R}} dx e^{-x^2} = C^2 &lt; \\infty.$$</p>\n\n<p>Thus the theorem gives\n$$\n\\int_{\\mathbb{R}^2} dxdy e^{-(x^2+y^2)} = C^2\n$$\nwhere $$ C = \\int_{\\mathbb{R}} dx e^{-x^2}.$$</p>\n\n<p>The two-dimensional integral can then be transformed to polar coordinates by the change of variables theorem, which (by Fubini again) is a doable iterated integral that comes out to $\\pi.$</p>\n" }, { "answer_id": 2091199, "score": 3, "text": "<p>The integral $\\displaystyle \\int_{-\\infty}^\\infty e^{-y^2} \\,dy$ does not change as $x$ goes from $-\\infty$ to $\\infty$, so you have\n\\begin{align}\n\\int_{-\\infty}^\\infty e^{-x^2} \\, dx \\cdot \\int_{-\\infty}^\\infty e^{-y^2} \\, dy &amp; = \\int_{-\\infty}^\\infty e^{-x^2} \\, dx \\cdot \\text{constant} \\\\[10pt]\n&amp; = \\int_{-\\infty}^\\infty \\left( e^{-x^2}\\cdot\\text{constant} \\right)\\,dx \\\\[10pt]\n&amp; = \\int_{-\\infty}^\\infty \\left( e^{-x^2} \\int_{-\\infty}^\\infty e^{-y^2}\\,dy \\right)\\,dx\n\\end{align}\nThe factor $e^{-x^2}$ does not change as $y$ goes from $-\\infty$ to $\\infty$, so you now have\n\\begin{align}\n\\int_{-\\infty}^\\infty \\left( e^{-x^2} \\int_{-\\infty}^\\infty e^{-y^2}\\,dy \\right) \\, dx &amp;= \\int_{-\\infty}^\\infty \\left( \\text{constant} \\cdot \\int_{-\\infty}^\\infty e^{-y^2}\\,dy \\right) \\,dx \\\\[10pt]\n&amp; = \\int_{-\\infty}^\\infty \\left( \\int_{-\\infty}^\\infty \\text{constant} \\cdot e^{-y^2}\\,dy \\right) \\,dx \\\\[10pt]\n&amp; = \\int_{-\\infty}^\\infty \\left( \\int_{-\\infty}^\\infty e^{-x^2} \\cdot e^{-y^2}\\,dy \\right) \\,dx\n\\end{align}</p>\n" }, { "answer_id": 2091158, "score": 1, 
"text": "<p>Sorry, this is not a complete answer, but I hope it can help you.</p>\n\n<p>If we integrate with respect to $x$, then $e^{-y^2}$ is treated as a constant:\n$$\\displaystyle\\int\\limits_{y=-\\infty}^{y=\\infty}\\int\\limits_{x=-\\infty}^{x=\\infty} e^{-x^2} e^{-y^2} dx\\thinspace dy=\\Big(\\int\\limits_{y=-\\infty}^{y=\\infty} e^{-y^2} dy\\Big)\\Big(\\displaystyle\\int\\limits_{x=-\\infty}^{x=\\infty} e^{-x^2} dx\\Big)=\\thinspace\\int\\limits_{y=-\\infty}^{y=\\infty} e^{-y^2}dy \\displaystyle\\int\\limits_{x=-\\infty}^{x=\\infty} e^{-x^2} dx $$ </p>\n" }, { "answer_id": 2091155, "score": 7, "text": "<p>$$\\int_{-\\infty}^{\\infty} \\int_{-\\infty}^{\\infty} e^{-x^2}e^{-y^2} dx dy$$</p>\n\n<p>Because we treat $y$ as constant in the first (inner) integral, we can pull $e^{-y^2}$ out.</p>\n\n<p>$$=\\int_{-\\infty}^{\\infty} e^{-y^2} \\int_{-\\infty}^{\\infty} e^{-x^2} dx dy$$</p>\n\n<p>Now, because $\\int_{-\\infty}^{\\infty} e^{-x^2} dx$ is some constant, we can pull it out,</p>\n\n<p>$$=\\int_{-\\infty}^{\\infty} e^{-x^2} dx\\int_{-\\infty}^{\\infty} e^{-y^2} dy$$</p>\n\n<p>The result we got is generalizable: given $g(x,y)=f(x)h(y)$ we have</p>\n\n<p>$$\\int_{a}^{b} \\int_{c}^{d} g(x,y) dx dy=\\int_{a}^{b} h(y) dy \\int_{c}^{d} f(x) dx$$</p>\n\n<p>Provided everything we write down exists.</p>\n" }, { "answer_id": 2091166, "score": 5, "text": "<p>The Fubini-Tonelli theorem implies that if $\\int_{\\mathbb{R}} dx \\left(\\int_{\\mathbb{R}} dy |f(x,y)|\\right) &lt; \\infty$ then $$\\int_{\\mathbb{R}^2} f(x,y)dxdy = \\int_{\\mathbb{R}} dx \\left(\\int_{\\mathbb{R}} dy f(x,y)\\right).$$ First, to establish the premise, note that $f(x,y) = e^{-(x^2+y^2)}$ is positive, so $|f(x,y)| = f(x,y).$ So we just need to establish the finiteness of the RHS. 
The inner integral is\n$$\n\\int_{\\mathbb{R}} dy e^{-(x^2+y^2)} = e^{-x^2}\\int_{\\mathbb{R}} dy e^{-y^2}.\n$$\nBy comparing the integrand with $e^{-|y|}$ which can be integrated to 2 by elementary means, we see that $$\\int_{\\mathbb{R}} e^{-y^2}dy = C &lt; \\infty$$ so that $$\\int_{\\mathbb{R}} dx \\left(\\int_{\\mathbb{R}} dy e^{-(x^2+y^2)}\\right) = C\\int_{\\mathbb{R}} dx e^{-x^2} = C^2 &lt; \\infty.$$</p>\n\n<p>Thus the theorem gives\n$$\n\\int_{\\mathbb{R}^2} dxdy e^{-(x^2+y^2)} = C^2\n$$\nwhere $$ C = \\int_{\\mathbb{R}} dx e^{-x^2}.$$</p>\n\n<p>The two-dimensional integral can then be transformed to polar coordinates by the change of variables theorem, which (by Fubini again) is a doable iterated integral that comes out to $\\pi.$</p>\n" }, { "answer_id": 2091199, "score": 3, "text": "<p>The integral $\\displaystyle \\int_{-\\infty}^\\infty e^{-y^2} \\,dy$ does not change as $x$ goes from $-\\infty$ to $\\infty$, so you have\n\\begin{align}\n\\int_{-\\infty}^\\infty e^{-x^2} \\, dx \\cdot \\int_{-\\infty}^\\infty e^{-y^2} \\, dy &amp; = \\int_{-\\infty}^\\infty e^{-x^2} \\, dx \\cdot \\text{constant} \\\\[10pt]\n&amp; = \\int_{-\\infty}^\\infty \\left( e^{-x^2}\\cdot\\text{constant} \\right)\\,dx \\\\[10pt]\n&amp; = \\int_{-\\infty}^\\infty \\left( e^{-x^2} \\int_{-\\infty}^\\infty e^{-y^2}\\,dy \\right)\\,dx\n\\end{align}\nThe factor $e^{-x^2}$ does not change as $y$ goes from $-\\infty$ to $\\infty$, so you now have\n\\begin{align}\n\\int_{-\\infty}^\\infty \\left( e^{-x^2} \\int_{-\\infty}^\\infty e^{-y^2}\\,dy \\right) \\, dx &amp;= \\int_{-\\infty}^\\infty \\left( \\text{constant} \\cdot \\int_{-\\infty}^\\infty e^{-y^2}\\,dy \\right) \\,dx \\\\[10pt]\n&amp; = \\int_{-\\infty}^\\infty \\left( \\int_{-\\infty}^\\infty \\text{constant} \\cdot e^{-y^2}\\,dy \\right) \\,dx \\\\[10pt]\n&amp; = \\int_{-\\infty}^\\infty \\left( \\int_{-\\infty}^\\infty e^{-x^2} \\cdot e^{-y^2}\\,dy \\right) \\,dx\n\\end{align}</p>\n" }, { "answer_id": 2091158, "score": 1, 
"text": "<p>Sorry This is not a complete answer but i hope it can help you </p>\n\n<p>if we integral of dx then $e^{-y^2}$ assume constant \n$$\\displaystyle\\int\\limits_{y=-\\infty}^{y=\\infty}\\int\\limits_{x=-\\infty}^{x=\\infty} e^{-x^2} e^{-y^2} dx\\thinspace dy=\\Big(\\int\\limits_{y=-\\infty}^{y=\\infty} e^{-y^2} dy\\Big)\\Big(\\displaystyle\\int\\limits_{x=-\\infty}^{x=\\infty} e^{-x^2} dx\\Big)=\\thinspace\\int\\limits_{y=-\\infty}^{y=\\infty} e^{-y^2}dy \\displaystyle\\int\\limits_{x=-\\infty}^{x=\\infty} e^{-x^2} dx $$ </p>\n" }, { "answer_id": 2091155, "score": 7, "text": "<p>$$\\int_{-\\infty}^{\\infty} \\int_{-\\infty}^{\\infty} e^{-x^2}e^{-y^2} dx dy$$</p>\n\n<p>Because we treat $y$ constant in the first, inner, integral we can pull it out.</p>\n\n<p>$$=\\int_{-\\infty}^{\\infty} e^{-y^2} \\int_{-\\infty}^{\\infty} e^{-x^2} dx dy$$</p>\n\n<p>Now because $\\int_{-\\infty}^{\\infty} e^{-x^2} dx$ is some constant we can pull it out,</p>\n\n<p>$$=\\int_{-\\infty}^{\\infty} e^{-x^2} dx\\int_{-\\infty}^{\\infty} e^{-y^2} dy$$</p>\n\n<p>The result we got is generalizable, given $g(x,y)=f(x)h(y)$ we have,</p>\n\n<p>$$\\int_{a}^{b} \\int_{c}^{d} g(x,y) dx dy=\\int_{a}^{b} h(y) dy \\int_{c}^{d} f(x) dx$$</p>\n\n<p>Provided everything we write down exists.</p>\n" }, { "answer_id": 2091166, "score": 5, "text": "<p>Fubini-Tonelli theorem implies that if $\\int_{\\mathbb{R}} dx \\left(\\int_{\\mathbb{R}} dy |f(x,y)|\\right) &lt; \\infty$ then $$\\int_{\\mathbb{R}^2} f(x,y)dxdy = \\int_{\\mathbb{R}} dx \\left(\\int_{\\mathbb{R}} dy f(x,y)\\right).$$ First, to establish the premise, note that $f(x,y) = e^{-(x^2+y^2)},$ is positive so $|f(x,y)| = f(x,y).$ So we just need to establish the finiteness of the RHS. 
The inner integral is\n$$\n\\int_{\\mathbb{R}} dy e^{-(x^2+y^2)} = e^{-x^2}\\int_{\\mathbb{R}} dy e^{-y^2}.\n$$\nBy comparing the integrand with $e^{-|y|}$ which can be integrated to 2 by elementary means, we see that $$\\int_{\\mathbb{R}} e^{-y^2}dy = C &lt; \\infty$$ so that $$\\int_{\\mathbb{R}} dx \\left(\\int_{\\mathbb{R}} dy e^{-(x^2+y^2)}\\right) = C\\int_{\\mathbb{R}} dx e^{-x^2} = C^2 &lt; \\infty.$$</p>\n\n<p>Thus the theorem gives\n$$\n\\int_{\\mathbb{R}^2} dxdy e^{-(x^2+y^2)} = C^2\n$$\nwhere $$ C = \\int_{\\mathbb{R}} dx e^{-x^2}.$$</p>\n\n<p>The two-dimensional integral can then be transformed to polar coordinates by the change of variables theorem, which (by Fubini again) is a doable iterated integral that comes out to $\\pi.$</p>\n" }, { "answer_id": 2091199, "score": 3, "text": "<p>The integral $\\displaystyle \\int_{-\\infty}^\\infty e^{-y^2} \\,dy$ does not change as $x$ goes from $-\\infty$ to $\\infty$, so you have\n\\begin{align}\n\\int_{-\\infty}^\\infty e^{-x^2} \\, dx \\cdot \\int_{-\\infty}^\\infty e^{-y^2} \\, dy &amp; = \\int_{-\\infty}^\\infty e^{-x^2} \\, dx \\cdot \\text{constant} \\\\[10pt]\n&amp; = \\int_{-\\infty}^\\infty \\left( e^{-x^2}\\cdot\\text{constant} \\right)\\,dx \\\\[10pt]\n&amp; = \\int_{-\\infty}^\\infty \\left( e^{-x^2} \\int_{-\\infty}^\\infty e^{-y^2}\\,dy \\right)\\,dx\n\\end{align}\nThe factor $e^{-x^2}$ does not change as $y$ goes from $-\\infty$ to $\\infty$, so you now have\n\\begin{align}\n\\int_{-\\infty}^\\infty \\left( e^{-x^2} \\int_{-\\infty}^\\infty e^{-y^2}\\,dy \\right) \\, dx &amp;= \\int_{-\\infty}^\\infty \\left( \\text{constant} \\cdot \\int_{-\\infty}^\\infty e^{-y^2}\\,dy \\right) \\,dx \\\\[10pt]\n&amp; = \\int_{-\\infty}^\\infty \\left( \\int_{-\\infty}^\\infty \\text{constant} \\cdot e^{-y^2}\\,dy \\right) \\,dx \\\\[10pt]\n&amp; = \\int_{-\\infty}^\\infty \\left( \\int_{-\\infty}^\\infty e^{-x^2} \\cdot e^{-y^2}\\,dy \\right) \\,dx\n\\end{align}</p>\n" }, { "answer_id": 2091158, "score": 1, 
"text": "<p>Sorry This is not a complete answer but i hope it can help you </p>\n\n<p>if we integral of dx then $e^{-y^2}$ assume constant \n$$\\displaystyle\\int\\limits_{y=-\\infty}^{y=\\infty}\\int\\limits_{x=-\\infty}^{x=\\infty} e^{-x^2} e^{-y^2} dx\\thinspace dy=\\Big(\\int\\limits_{y=-\\infty}^{y=\\infty} e^{-y^2} dy\\Big)\\Big(\\displaystyle\\int\\limits_{x=-\\infty}^{x=\\infty} e^{-x^2} dx\\Big)=\\thinspace\\int\\limits_{y=-\\infty}^{y=\\infty} e^{-y^2}dy \\displaystyle\\int\\limits_{x=-\\infty}^{x=\\infty} e^{-x^2} dx $$ </p>\n" }, { "answer_id": 2091155, "score": 7, "text": "<p>$$\\int_{-\\infty}^{\\infty} \\int_{-\\infty}^{\\infty} e^{-x^2}e^{-y^2} dx dy$$</p>\n\n<p>Because we treat $y$ constant in the first, inner, integral we can pull it out.</p>\n\n<p>$$=\\int_{-\\infty}^{\\infty} e^{-y^2} \\int_{-\\infty}^{\\infty} e^{-x^2} dx dy$$</p>\n\n<p>Now because $\\int_{-\\infty}^{\\infty} e^{-x^2} dx$ is some constant we can pull it out,</p>\n\n<p>$$=\\int_{-\\infty}^{\\infty} e^{-x^2} dx\\int_{-\\infty}^{\\infty} e^{-y^2} dy$$</p>\n\n<p>The result we got is generalizable, given $g(x,y)=f(x)h(y)$ we have,</p>\n\n<p>$$\\int_{a}^{b} \\int_{c}^{d} g(x,y) dx dy=\\int_{a}^{b} h(y) dy \\int_{c}^{d} f(x) dx$$</p>\n\n<p>Provided everything we write down exists.</p>\n" }, { "answer_id": 2091166, "score": 5, "text": "<p>Fubini-Tonelli theorem implies that if $\\int_{\\mathbb{R}} dx \\left(\\int_{\\mathbb{R}} dy |f(x,y)|\\right) &lt; \\infty$ then $$\\int_{\\mathbb{R}^2} f(x,y)dxdy = \\int_{\\mathbb{R}} dx \\left(\\int_{\\mathbb{R}} dy f(x,y)\\right).$$ First, to establish the premise, note that $f(x,y) = e^{-(x^2+y^2)},$ is positive so $|f(x,y)| = f(x,y).$ So we just need to establish the finiteness of the RHS. 
The inner integral is\n$$\n\\int_{\\mathbb{R}} dy e^{-(x^2+y^2)} = e^{-x^2}\\int_{\\mathbb{R}} dy e^{-y^2}.\n$$\nBy comparing the integrand with $e^{-|y|}$ which can be integrated to 2 by elementary means, we see that $$\\int_{\\mathbb{R}} e^{-y^2}dy = C &lt; \\infty$$ so that $$\\int_{\\mathbb{R}} dx \\left(\\int_{\\mathbb{R}} dy e^{-(x^2+y^2)}\\right) = C\\int_{\\mathbb{R}} dx e^{-x^2} = C^2 &lt; \\infty.$$</p>\n\n<p>Thus the theorem gives\n$$\n\\int_{\\mathbb{R}^2} dxdy e^{-(x^2+y^2)} = C^2\n$$\nwhere $$ C = \\int_{\\mathbb{R}} dx e^{-x^2}.$$</p>\n\n<p>The two-dimensional integral can then be transformed to polar coordinates by the change of variables theorem, which (by Fubini again) is a doable iterated integral that comes out to $\\pi.$</p>\n" }, { "answer_id": 2091199, "score": 3, "text": "<p>The integral $\\displaystyle \\int_{-\\infty}^\\infty e^{-y^2} \\,dy$ does not change as $x$ goes from $-\\infty$ to $\\infty$, so you have\n\\begin{align}\n\\int_{-\\infty}^\\infty e^{-x^2} \\, dx \\cdot \\int_{-\\infty}^\\infty e^{-y^2} \\, dy &amp; = \\int_{-\\infty}^\\infty e^{-x^2} \\, dx \\cdot \\text{constant} \\\\[10pt]\n&amp; = \\int_{-\\infty}^\\infty \\left( e^{-x^2}\\cdot\\text{constant} \\right)\\,dx \\\\[10pt]\n&amp; = \\int_{-\\infty}^\\infty \\left( e^{-x^2} \\int_{-\\infty}^\\infty e^{-y^2}\\,dy \\right)\\,dx\n\\end{align}\nThe factor $e^{-x^2}$ does not change as $y$ goes from $-\\infty$ to $\\infty$, so you now have\n\\begin{align}\n\\int_{-\\infty}^\\infty \\left( e^{-x^2} \\int_{-\\infty}^\\infty e^{-y^2}\\,dy \\right) \\, dx &amp;= \\int_{-\\infty}^\\infty \\left( \\text{constant} \\cdot \\int_{-\\infty}^\\infty e^{-y^2}\\,dy \\right) \\,dx \\\\[10pt]\n&amp; = \\int_{-\\infty}^\\infty \\left( \\int_{-\\infty}^\\infty \\text{constant} \\cdot e^{-y^2}\\,dy \\right) \\,dx \\\\[10pt]\n&amp; = \\int_{-\\infty}^\\infty \\left( \\int_{-\\infty}^\\infty e^{-x^2} \\cdot e^{-y^2}\\,dy \\right) \\,dx\n\\end{align}</p>\n" }, { "answer_id": 2091158, "score": 1, 
"text": "<p>Sorry This is not a complete answer but i hope it can help you </p>\n\n<p>if we integral of dx then $e^{-y^2}$ assume constant \n$$\\displaystyle\\int\\limits_{y=-\\infty}^{y=\\infty}\\int\\limits_{x=-\\infty}^{x=\\infty} e^{-x^2} e^{-y^2} dx\\thinspace dy=\\Big(\\int\\limits_{y=-\\infty}^{y=\\infty} e^{-y^2} dy\\Big)\\Big(\\displaystyle\\int\\limits_{x=-\\infty}^{x=\\infty} e^{-x^2} dx\\Big)=\\thinspace\\int\\limits_{y=-\\infty}^{y=\\infty} e^{-y^2}dy \\displaystyle\\int\\limits_{x=-\\infty}^{x=\\infty} e^{-x^2} dx $$ </p>\n" }, { "answer_id": 2091155, "score": 7, "text": "<p>$$\\int_{-\\infty}^{\\infty} \\int_{-\\infty}^{\\infty} e^{-x^2}e^{-y^2} dx dy$$</p>\n\n<p>Because we treat $y$ constant in the first, inner, integral we can pull it out.</p>\n\n<p>$$=\\int_{-\\infty}^{\\infty} e^{-y^2} \\int_{-\\infty}^{\\infty} e^{-x^2} dx dy$$</p>\n\n<p>Now because $\\int_{-\\infty}^{\\infty} e^{-x^2} dx$ is some constant we can pull it out,</p>\n\n<p>$$=\\int_{-\\infty}^{\\infty} e^{-x^2} dx\\int_{-\\infty}^{\\infty} e^{-y^2} dy$$</p>\n\n<p>The result we got is generalizable, given $g(x,y)=f(x)h(y)$ we have,</p>\n\n<p>$$\\int_{a}^{b} \\int_{c}^{d} g(x,y) dx dy=\\int_{a}^{b} h(y) dy \\int_{c}^{d} f(x) dx$$</p>\n\n<p>Provided everything we write down exists.</p>\n" }, { "answer_id": 2091166, "score": 5, "text": "<p>Fubini-Tonelli theorem implies that if $\\int_{\\mathbb{R}} dx \\left(\\int_{\\mathbb{R}} dy |f(x,y)|\\right) &lt; \\infty$ then $$\\int_{\\mathbb{R}^2} f(x,y)dxdy = \\int_{\\mathbb{R}} dx \\left(\\int_{\\mathbb{R}} dy f(x,y)\\right).$$ First, to establish the premise, note that $f(x,y) = e^{-(x^2+y^2)},$ is positive so $|f(x,y)| = f(x,y).$ So we just need to establish the finiteness of the RHS. 
The inner integral is\n$$\n\\int_{\\mathbb{R}} dy e^{-(x^2+y^2)} = e^{-x^2}\\int_{\\mathbb{R}} dy e^{-y^2}.\n$$\nBy comparing the integrand with $e^{-|y|}$ which can be integrated to 2 by elementary means, we see that $$\\int_{\\mathbb{R}} e^{-y^2}dy = C &lt; \\infty$$ so that $$\\int_{\\mathbb{R}} dx \\left(\\int_{\\mathbb{R}} dy e^{-(x^2+y^2)}\\right) = C\\int_{\\mathbb{R}} dx e^{-x^2} = C^2 &lt; \\infty.$$</p>\n\n<p>Thus the theorem gives\n$$\n\\int_{\\mathbb{R}^2} dxdy e^{-(x^2+y^2)} = C^2\n$$\nwhere $$ C = \\int_{\\mathbb{R}} dx e^{-x^2}.$$</p>\n\n<p>The two-dimensional integral can then be transformed to polar coordinates by the change of variables theorem, which (by Fubini again) is a doable iterated integral that comes out to $\\pi.$</p>\n" }, { "answer_id": 2091199, "score": 3, "text": "<p>The integral $\\displaystyle \\int_{-\\infty}^\\infty e^{-y^2} \\,dy$ does not change as $x$ goes from $-\\infty$ to $\\infty$, so you have\n\\begin{align}\n\\int_{-\\infty}^\\infty e^{-x^2} \\, dx \\cdot \\int_{-\\infty}^\\infty e^{-y^2} \\, dy &amp; = \\int_{-\\infty}^\\infty e^{-x^2} \\, dx \\cdot \\text{constant} \\\\[10pt]\n&amp; = \\int_{-\\infty}^\\infty \\left( e^{-x^2}\\cdot\\text{constant} \\right)\\,dx \\\\[10pt]\n&amp; = \\int_{-\\infty}^\\infty \\left( e^{-x^2} \\int_{-\\infty}^\\infty e^{-y^2}\\,dy \\right)\\,dx\n\\end{align}\nThe factor $e^{-x^2}$ does not change as $y$ goes from $-\\infty$ to $\\infty$, so you now have\n\\begin{align}\n\\int_{-\\infty}^\\infty \\left( e^{-x^2} \\int_{-\\infty}^\\infty e^{-y^2}\\,dy \\right) \\, dx &amp;= \\int_{-\\infty}^\\infty \\left( \\text{constant} \\cdot \\int_{-\\infty}^\\infty e^{-y^2}\\,dy \\right) \\,dx \\\\[10pt]\n&amp; = \\int_{-\\infty}^\\infty \\left( \\int_{-\\infty}^\\infty \\text{constant} \\cdot e^{-y^2}\\,dy \\right) \\,dx \\\\[10pt]\n&amp; = \\int_{-\\infty}^\\infty \\left( \\int_{-\\infty}^\\infty e^{-x^2} \\cdot e^{-y^2}\\,dy \\right) \\,dx\n\\end{align}</p>\n" }, { "answer_id": 2091158, "score": 1, 
"text": "<p>Sorry This is not a complete answer but i hope it can help you </p>\n\n<p>if we integral of dx then $e^{-y^2}$ assume constant \n$$\\displaystyle\\int\\limits_{y=-\\infty}^{y=\\infty}\\int\\limits_{x=-\\infty}^{x=\\infty} e^{-x^2} e^{-y^2} dx\\thinspace dy=\\Big(\\int\\limits_{y=-\\infty}^{y=\\infty} e^{-y^2} dy\\Big)\\Big(\\displaystyle\\int\\limits_{x=-\\infty}^{x=\\infty} e^{-x^2} dx\\Big)=\\thinspace\\int\\limits_{y=-\\infty}^{y=\\infty} e^{-y^2}dy \\displaystyle\\int\\limits_{x=-\\infty}^{x=\\infty} e^{-x^2} dx $$ </p>\n" }, { "answer_id": 2091155, "score": 7, "text": "<p>$$\\int_{-\\infty}^{\\infty} \\int_{-\\infty}^{\\infty} e^{-x^2}e^{-y^2} dx dy$$</p>\n\n<p>Because we treat $y$ constant in the first, inner, integral we can pull it out.</p>\n\n<p>$$=\\int_{-\\infty}^{\\infty} e^{-y^2} \\int_{-\\infty}^{\\infty} e^{-x^2} dx dy$$</p>\n\n<p>Now because $\\int_{-\\infty}^{\\infty} e^{-x^2} dx$ is some constant we can pull it out,</p>\n\n<p>$$=\\int_{-\\infty}^{\\infty} e^{-x^2} dx\\int_{-\\infty}^{\\infty} e^{-y^2} dy$$</p>\n\n<p>The result we got is generalizable, given $g(x,y)=f(x)h(y)$ we have,</p>\n\n<p>$$\\int_{a}^{b} \\int_{c}^{d} g(x,y) dx dy=\\int_{a}^{b} h(y) dy \\int_{c}^{d} f(x) dx$$</p>\n\n<p>Provided everything we write down exists.</p>\n" }, { "answer_id": 2091166, "score": 5, "text": "<p>Fubini-Tonelli theorem implies that if $\\int_{\\mathbb{R}} dx \\left(\\int_{\\mathbb{R}} dy |f(x,y)|\\right) &lt; \\infty$ then $$\\int_{\\mathbb{R}^2} f(x,y)dxdy = \\int_{\\mathbb{R}} dx \\left(\\int_{\\mathbb{R}} dy f(x,y)\\right).$$ First, to establish the premise, note that $f(x,y) = e^{-(x^2+y^2)},$ is positive so $|f(x,y)| = f(x,y).$ So we just need to establish the finiteness of the RHS. 
The inner integral is\n$$\n\\int_{\\mathbb{R}} dy e^{-(x^2+y^2)} = e^{-x^2}\\int_{\\mathbb{R}} dy e^{-y^2}.\n$$\nBy comparing the integrand with $e^{-|y|}$ which can be integrated to 2 by elementary means, we see that $$\\int_{\\mathbb{R}} e^{-y^2}dy = C &lt; \\infty$$ so that $$\\int_{\\mathbb{R}} dx \\left(\\int_{\\mathbb{R}} dy e^{-(x^2+y^2)}\\right) = C\\int_{\\mathbb{R}} dx e^{-x^2} = C^2 &lt; \\infty.$$</p>\n\n<p>Thus the theorem gives\n$$\n\\int_{\\mathbb{R}^2} dxdy e^{-(x^2+y^2)} = C^2\n$$\nwhere $$ C = \\int_{\\mathbb{R}} dx e^{-x^2}.$$</p>\n\n<p>The two-dimensional integral can then be transformed to polar coordinates by the change of variables theorem, which (by Fubini again) is a doable iterated integral that comes out to $\\pi.$</p>\n" }, { "answer_id": 2091199, "score": 3, "text": "<p>The integral $\\displaystyle \\int_{-\\infty}^\\infty e^{-y^2} \\,dy$ does not change as $x$ goes from $-\\infty$ to $\\infty$, so you have\n\\begin{align}\n\\int_{-\\infty}^\\infty e^{-x^2} \\, dx \\cdot \\int_{-\\infty}^\\infty e^{-y^2} \\, dy &amp; = \\int_{-\\infty}^\\infty e^{-x^2} \\, dx \\cdot \\text{constant} \\\\[10pt]\n&amp; = \\int_{-\\infty}^\\infty \\left( e^{-x^2}\\cdot\\text{constant} \\right)\\,dx \\\\[10pt]\n&amp; = \\int_{-\\infty}^\\infty \\left( e^{-x^2} \\int_{-\\infty}^\\infty e^{-y^2}\\,dy \\right)\\,dx\n\\end{align}\nThe factor $e^{-x^2}$ does not change as $y$ goes from $-\\infty$ to $\\infty$, so you now have\n\\begin{align}\n\\int_{-\\infty}^\\infty \\left( e^{-x^2} \\int_{-\\infty}^\\infty e^{-y^2}\\,dy \\right) \\, dx &amp;= \\int_{-\\infty}^\\infty \\left( \\text{constant} \\cdot \\int_{-\\infty}^\\infty e^{-y^2}\\,dy \\right) \\,dx \\\\[10pt]\n&amp; = \\int_{-\\infty}^\\infty \\left( \\int_{-\\infty}^\\infty \\text{constant} \\cdot e^{-y^2}\\,dy \\right) \\,dx \\\\[10pt]\n&amp; = \\int_{-\\infty}^\\infty \\left( \\int_{-\\infty}^\\infty e^{-x^2} \\cdot e^{-y^2}\\,dy \\right) \\,dx\n\\end{align}</p>\n" }, { "answer_id": 2091158, "score": 1, 
"text": "<p>Sorry This is not a complete answer but i hope it can help you </p>\n\n<p>if we integral of dx then $e^{-y^2}$ assume constant \n$$\\displaystyle\\int\\limits_{y=-\\infty}^{y=\\infty}\\int\\limits_{x=-\\infty}^{x=\\infty} e^{-x^2} e^{-y^2} dx\\thinspace dy=\\Big(\\int\\limits_{y=-\\infty}^{y=\\infty} e^{-y^2} dy\\Big)\\Big(\\displaystyle\\int\\limits_{x=-\\infty}^{x=\\infty} e^{-x^2} dx\\Big)=\\thinspace\\int\\limits_{y=-\\infty}^{y=\\infty} e^{-y^2}dy \\displaystyle\\int\\limits_{x=-\\infty}^{x=\\infty} e^{-x^2} dx $$ </p>\n" }, { "answer_id": 2091155, "score": 7, "text": "<p>$$\\int_{-\\infty}^{\\infty} \\int_{-\\infty}^{\\infty} e^{-x^2}e^{-y^2} dx dy$$</p>\n\n<p>Because we treat $y$ constant in the first, inner, integral we can pull it out.</p>\n\n<p>$$=\\int_{-\\infty}^{\\infty} e^{-y^2} \\int_{-\\infty}^{\\infty} e^{-x^2} dx dy$$</p>\n\n<p>Now because $\\int_{-\\infty}^{\\infty} e^{-x^2} dx$ is some constant we can pull it out,</p>\n\n<p>$$=\\int_{-\\infty}^{\\infty} e^{-x^2} dx\\int_{-\\infty}^{\\infty} e^{-y^2} dy$$</p>\n\n<p>The result we got is generalizable, given $g(x,y)=f(x)h(y)$ we have,</p>\n\n<p>$$\\int_{a}^{b} \\int_{c}^{d} g(x,y) dx dy=\\int_{a}^{b} h(y) dy \\int_{c}^{d} f(x) dx$$</p>\n\n<p>Provided everything we write down exists.</p>\n" }, { "answer_id": 2091166, "score": 5, "text": "<p>Fubini-Tonelli theorem implies that if $\\int_{\\mathbb{R}} dx \\left(\\int_{\\mathbb{R}} dy |f(x,y)|\\right) &lt; \\infty$ then $$\\int_{\\mathbb{R}^2} f(x,y)dxdy = \\int_{\\mathbb{R}} dx \\left(\\int_{\\mathbb{R}} dy f(x,y)\\right).$$ First, to establish the premise, note that $f(x,y) = e^{-(x^2+y^2)},$ is positive so $|f(x,y)| = f(x,y).$ So we just need to establish the finiteness of the RHS. 
The inner integral is\n$$\n\\int_{\\mathbb{R}} dy e^{-(x^2+y^2)} = e^{-x^2}\\int_{\\mathbb{R}} dy e^{-y^2}.\n$$\nBy comparing the integrand with $e^{-|y|}$ which can be integrated to 2 by elementary means, we see that $$\\int_{\\mathbb{R}} e^{-y^2}dy = C &lt; \\infty$$ so that $$\\int_{\\mathbb{R}} dx \\left(\\int_{\\mathbb{R}} dy e^{-(x^2+y^2)}\\right) = C\\int_{\\mathbb{R}} dx e^{-x^2} = C^2 &lt; \\infty.$$</p>\n\n<p>Thus the theorem gives\n$$\n\\int_{\\mathbb{R}^2} dxdy e^{-(x^2+y^2)} = C^2\n$$\nwhere $$ C = \\int_{\\mathbb{R}} dx e^{-x^2}.$$</p>\n\n<p>The two-dimensional integral can then be transformed to polar coordinates by the change of variables theorem, which (by Fubini again) is a doable iterated integral that comes out to $\\pi.$</p>\n" }, { "answer_id": 2091199, "score": 3, "text": "<p>The integral $\\displaystyle \\int_{-\\infty}^\\infty e^{-y^2} \\,dy$ does not change as $x$ goes from $-\\infty$ to $\\infty$, so you have\n\\begin{align}\n\\int_{-\\infty}^\\infty e^{-x^2} \\, dx \\cdot \\int_{-\\infty}^\\infty e^{-y^2} \\, dy &amp; = \\int_{-\\infty}^\\infty e^{-x^2} \\, dx \\cdot \\text{constant} \\\\[10pt]\n&amp; = \\int_{-\\infty}^\\infty \\left( e^{-x^2}\\cdot\\text{constant} \\right)\\,dx \\\\[10pt]\n&amp; = \\int_{-\\infty}^\\infty \\left( e^{-x^2} \\int_{-\\infty}^\\infty e^{-y^2}\\,dy \\right)\\,dx\n\\end{align}\nThe factor $e^{-x^2}$ does not change as $y$ goes from $-\\infty$ to $\\infty$, so you now have\n\\begin{align}\n\\int_{-\\infty}^\\infty \\left( e^{-x^2} \\int_{-\\infty}^\\infty e^{-y^2}\\,dy \\right) \\, dx &amp;= \\int_{-\\infty}^\\infty \\left( \\text{constant} \\cdot \\int_{-\\infty}^\\infty e^{-y^2}\\,dy \\right) \\,dx \\\\[10pt]\n&amp; = \\int_{-\\infty}^\\infty \\left( \\int_{-\\infty}^\\infty \\text{constant} \\cdot e^{-y^2}\\,dy \\right) \\,dx \\\\[10pt]\n&amp; = \\int_{-\\infty}^\\infty \\left( \\int_{-\\infty}^\\infty e^{-x^2} \\cdot e^{-y^2}\\,dy \\right) \\,dx\n\\end{align}</p>\n" }, { "answer_id": 2091158, "score": 1, 
"text": "<p>Sorry This is not a complete answer but i hope it can help you </p>\n\n<p>if we integral of dx then $e^{-y^2}$ assume constant \n$$\\displaystyle\\int\\limits_{y=-\\infty}^{y=\\infty}\\int\\limits_{x=-\\infty}^{x=\\infty} e^{-x^2} e^{-y^2} dx\\thinspace dy=\\Big(\\int\\limits_{y=-\\infty}^{y=\\infty} e^{-y^2} dy\\Big)\\Big(\\displaystyle\\int\\limits_{x=-\\infty}^{x=\\infty} e^{-x^2} dx\\Big)=\\thinspace\\int\\limits_{y=-\\infty}^{y=\\infty} e^{-y^2}dy \\displaystyle\\int\\limits_{x=-\\infty}^{x=\\infty} e^{-x^2} dx $$ </p>\n" }, { "answer_id": 2091155, "score": 7, "text": "<p>$$\\int_{-\\infty}^{\\infty} \\int_{-\\infty}^{\\infty} e^{-x^2}e^{-y^2} dx dy$$</p>\n\n<p>Because we treat $y$ constant in the first, inner, integral we can pull it out.</p>\n\n<p>$$=\\int_{-\\infty}^{\\infty} e^{-y^2} \\int_{-\\infty}^{\\infty} e^{-x^2} dx dy$$</p>\n\n<p>Now because $\\int_{-\\infty}^{\\infty} e^{-x^2} dx$ is some constant we can pull it out,</p>\n\n<p>$$=\\int_{-\\infty}^{\\infty} e^{-x^2} dx\\int_{-\\infty}^{\\infty} e^{-y^2} dy$$</p>\n\n<p>The result we got is generalizable, given $g(x,y)=f(x)h(y)$ we have,</p>\n\n<p>$$\\int_{a}^{b} \\int_{c}^{d} g(x,y) dx dy=\\int_{a}^{b} h(y) dy \\int_{c}^{d} f(x) dx$$</p>\n\n<p>Provided everything we write down exists.</p>\n" }, { "answer_id": 2091166, "score": 5, "text": "<p>Fubini-Tonelli theorem implies that if $\\int_{\\mathbb{R}} dx \\left(\\int_{\\mathbb{R}} dy |f(x,y)|\\right) &lt; \\infty$ then $$\\int_{\\mathbb{R}^2} f(x,y)dxdy = \\int_{\\mathbb{R}} dx \\left(\\int_{\\mathbb{R}} dy f(x,y)\\right).$$ First, to establish the premise, note that $f(x,y) = e^{-(x^2+y^2)},$ is positive so $|f(x,y)| = f(x,y).$ So we just need to establish the finiteness of the RHS. 
The inner integral is\n$$\n\\int_{\\mathbb{R}} dy e^{-(x^2+y^2)} = e^{-x^2}\\int_{\\mathbb{R}} dy e^{-y^2}.\n$$\nBy comparing the integrand with $e^{-|y|}$ which can be integrated to 2 by elementary means, we see that $$\\int_{\\mathbb{R}} e^{-y^2}dy = C &lt; \\infty$$ so that $$\\int_{\\mathbb{R}} dx \\left(\\int_{\\mathbb{R}} dy e^{-(x^2+y^2)}\\right) = C\\int_{\\mathbb{R}} dx e^{-x^2} = C^2 &lt; \\infty.$$</p>\n\n<p>Thus the theorem gives\n$$\n\\int_{\\mathbb{R}^2} dxdy e^{-(x^2+y^2)} = C^2\n$$\nwhere $$ C = \\int_{\\mathbb{R}} dx e^{-x^2}.$$</p>\n\n<p>The two-dimensional integral can then be transformed to polar coordinates by the change of variables theorem, which (by Fubini again) is a doable iterated integral that comes out to $\\pi.$</p>\n" }, { "answer_id": 2091199, "score": 3, "text": "<p>The integral $\\displaystyle \\int_{-\\infty}^\\infty e^{-y^2} \\,dy$ does not change as $x$ goes from $-\\infty$ to $\\infty$, so you have\n\\begin{align}\n\\int_{-\\infty}^\\infty e^{-x^2} \\, dx \\cdot \\int_{-\\infty}^\\infty e^{-y^2} \\, dy &amp; = \\int_{-\\infty}^\\infty e^{-x^2} \\, dx \\cdot \\text{constant} \\\\[10pt]\n&amp; = \\int_{-\\infty}^\\infty \\left( e^{-x^2}\\cdot\\text{constant} \\right)\\,dx \\\\[10pt]\n&amp; = \\int_{-\\infty}^\\infty \\left( e^{-x^2} \\int_{-\\infty}^\\infty e^{-y^2}\\,dy \\right)\\,dx\n\\end{align}\nThe factor $e^{-x^2}$ does not change as $y$ goes from $-\\infty$ to $\\infty$, so you now have\n\\begin{align}\n\\int_{-\\infty}^\\infty \\left( e^{-x^2} \\int_{-\\infty}^\\infty e^{-y^2}\\,dy \\right) \\, dx &amp;= \\int_{-\\infty}^\\infty \\left( \\text{constant} \\cdot \\int_{-\\infty}^\\infty e^{-y^2}\\,dy \\right) \\,dx \\\\[10pt]\n&amp; = \\int_{-\\infty}^\\infty \\left( \\int_{-\\infty}^\\infty \\text{constant} \\cdot e^{-y^2}\\,dy \\right) \\,dx \\\\[10pt]\n&amp; = \\int_{-\\infty}^\\infty \\left( \\int_{-\\infty}^\\infty e^{-x^2} \\cdot e^{-y^2}\\,dy \\right) \\,dx\n\\end{align}</p>\n" }, { "answer_id": 2091158, "score": 1, 
"text": "<p>Sorry This is not a complete answer but i hope it can help you </p>\n\n<p>if we integral of dx then $e^{-y^2}$ assume constant \n$$\\displaystyle\\int\\limits_{y=-\\infty}^{y=\\infty}\\int\\limits_{x=-\\infty}^{x=\\infty} e^{-x^2} e^{-y^2} dx\\thinspace dy=\\Big(\\int\\limits_{y=-\\infty}^{y=\\infty} e^{-y^2} dy\\Big)\\Big(\\displaystyle\\int\\limits_{x=-\\infty}^{x=\\infty} e^{-x^2} dx\\Big)=\\thinspace\\int\\limits_{y=-\\infty}^{y=\\infty} e^{-y^2}dy \\displaystyle\\int\\limits_{x=-\\infty}^{x=\\infty} e^{-x^2} dx $$ </p>\n" }, { "answer_id": 2091155, "score": 7, "text": "<p>$$\\int_{-\\infty}^{\\infty} \\int_{-\\infty}^{\\infty} e^{-x^2}e^{-y^2} dx dy$$</p>\n\n<p>Because we treat $y$ constant in the first, inner, integral we can pull it out.</p>\n\n<p>$$=\\int_{-\\infty}^{\\infty} e^{-y^2} \\int_{-\\infty}^{\\infty} e^{-x^2} dx dy$$</p>\n\n<p>Now because $\\int_{-\\infty}^{\\infty} e^{-x^2} dx$ is some constant we can pull it out,</p>\n\n<p>$$=\\int_{-\\infty}^{\\infty} e^{-x^2} dx\\int_{-\\infty}^{\\infty} e^{-y^2} dy$$</p>\n\n<p>The result we got is generalizable, given $g(x,y)=f(x)h(y)$ we have,</p>\n\n<p>$$\\int_{a}^{b} \\int_{c}^{d} g(x,y) dx dy=\\int_{a}^{b} h(y) dy \\int_{c}^{d} f(x) dx$$</p>\n\n<p>Provided everything we write down exists.</p>\n" }, { "answer_id": 2091166, "score": 5, "text": "<p>Fubini-Tonelli theorem implies that if $\\int_{\\mathbb{R}} dx \\left(\\int_{\\mathbb{R}} dy |f(x,y)|\\right) &lt; \\infty$ then $$\\int_{\\mathbb{R}^2} f(x,y)dxdy = \\int_{\\mathbb{R}} dx \\left(\\int_{\\mathbb{R}} dy f(x,y)\\right).$$ First, to establish the premise, note that $f(x,y) = e^{-(x^2+y^2)},$ is positive so $|f(x,y)| = f(x,y).$ So we just need to establish the finiteness of the RHS. 
The inner integral is\n$$\n\\int_{\\mathbb{R}} dy e^{-(x^2+y^2)} = e^{-x^2}\\int_{\\mathbb{R}} dy e^{-y^2}.\n$$\nBy comparing the integrand with $e^{-|y|}$ which can be integrated to 2 by elementary means, we see that $$\\int_{\\mathbb{R}} e^{-y^2}dy = C &lt; \\infty$$ so that $$\\int_{\\mathbb{R}} dx \\left(\\int_{\\mathbb{R}} dy e^{-(x^2+y^2)}\\right) = C\\int_{\\mathbb{R}} dx e^{-x^2} = C^2 &lt; \\infty.$$</p>\n\n<p>Thus the theorem gives\n$$\n\\int_{\\mathbb{R}^2} dxdy e^{-(x^2+y^2)} = C^2\n$$\nwhere $$ C = \\int_{\\mathbb{R}} dx e^{-x^2}.$$</p>\n\n<p>The two-dimensional integral can then be transformed to polar coordinates by the change of variables theorem, which (by Fubini again) is a doable iterated integral that comes out to $\\pi.$</p>\n" }, { "answer_id": 2091199, "score": 3, "text": "<p>The integral $\\displaystyle \\int_{-\\infty}^\\infty e^{-y^2} \\,dy$ does not change as $x$ goes from $-\\infty$ to $\\infty$, so you have\n\\begin{align}\n\\int_{-\\infty}^\\infty e^{-x^2} \\, dx \\cdot \\int_{-\\infty}^\\infty e^{-y^2} \\, dy &amp; = \\int_{-\\infty}^\\infty e^{-x^2} \\, dx \\cdot \\text{constant} \\\\[10pt]\n&amp; = \\int_{-\\infty}^\\infty \\left( e^{-x^2}\\cdot\\text{constant} \\right)\\,dx \\\\[10pt]\n&amp; = \\int_{-\\infty}^\\infty \\left( e^{-x^2} \\int_{-\\infty}^\\infty e^{-y^2}\\,dy \\right)\\,dx\n\\end{align}\nThe factor $e^{-x^2}$ does not change as $y$ goes from $-\\infty$ to $\\infty$, so you now have\n\\begin{align}\n\\int_{-\\infty}^\\infty \\left( e^{-x^2} \\int_{-\\infty}^\\infty e^{-y^2}\\,dy \\right) \\, dx &amp;= \\int_{-\\infty}^\\infty \\left( \\text{constant} \\cdot \\int_{-\\infty}^\\infty e^{-y^2}\\,dy \\right) \\,dx \\\\[10pt]\n&amp; = \\int_{-\\infty}^\\infty \\left( \\int_{-\\infty}^\\infty \\text{constant} \\cdot e^{-y^2}\\,dy \\right) \\,dx \\\\[10pt]\n&amp; = \\int_{-\\infty}^\\infty \\left( \\int_{-\\infty}^\\infty e^{-x^2} \\cdot e^{-y^2}\\,dy \\right) \\,dx\n\\end{align}</p>\n" }, { "answer_id": 2091158, "score": 1, 
"text": "<p>Sorry This is not a complete answer but i hope it can help you </p>\n\n<p>if we integral of dx then $e^{-y^2}$ assume constant \n$$\\displaystyle\\int\\limits_{y=-\\infty}^{y=\\infty}\\int\\limits_{x=-\\infty}^{x=\\infty} e^{-x^2} e^{-y^2} dx\\thinspace dy=\\Big(\\int\\limits_{y=-\\infty}^{y=\\infty} e^{-y^2} dy\\Big)\\Big(\\displaystyle\\int\\limits_{x=-\\infty}^{x=\\infty} e^{-x^2} dx\\Big)=\\thinspace\\int\\limits_{y=-\\infty}^{y=\\infty} e^{-y^2}dy \\displaystyle\\int\\limits_{x=-\\infty}^{x=\\infty} e^{-x^2} dx $$ </p>\n" }, { "answer_id": 2091155, "score": 7, "text": "<p>$$\\int_{-\\infty}^{\\infty} \\int_{-\\infty}^{\\infty} e^{-x^2}e^{-y^2} dx dy$$</p>\n\n<p>Because we treat $y$ constant in the first, inner, integral we can pull it out.</p>\n\n<p>$$=\\int_{-\\infty}^{\\infty} e^{-y^2} \\int_{-\\infty}^{\\infty} e^{-x^2} dx dy$$</p>\n\n<p>Now because $\\int_{-\\infty}^{\\infty} e^{-x^2} dx$ is some constant we can pull it out,</p>\n\n<p>$$=\\int_{-\\infty}^{\\infty} e^{-x^2} dx\\int_{-\\infty}^{\\infty} e^{-y^2} dy$$</p>\n\n<p>The result we got is generalizable, given $g(x,y)=f(x)h(y)$ we have,</p>\n\n<p>$$\\int_{a}^{b} \\int_{c}^{d} g(x,y) dx dy=\\int_{a}^{b} h(y) dy \\int_{c}^{d} f(x) dx$$</p>\n\n<p>Provided everything we write down exists.</p>\n" }, { "answer_id": 2091166, "score": 5, "text": "<p>Fubini-Tonelli theorem implies that if $\\int_{\\mathbb{R}} dx \\left(\\int_{\\mathbb{R}} dy |f(x,y)|\\right) &lt; \\infty$ then $$\\int_{\\mathbb{R}^2} f(x,y)dxdy = \\int_{\\mathbb{R}} dx \\left(\\int_{\\mathbb{R}} dy f(x,y)\\right).$$ First, to establish the premise, note that $f(x,y) = e^{-(x^2+y^2)},$ is positive so $|f(x,y)| = f(x,y).$ So we just need to establish the finiteness of the RHS. 
The inner integral is\n$$\n\\int_{\\mathbb{R}} dy e^{-(x^2+y^2)} = e^{-x^2}\\int_{\\mathbb{R}} dy e^{-y^2}.\n$$\nBy comparing the integrand with $e^{-|y|}$ which can be integrated to 2 by elementary means, we see that $$\\int_{\\mathbb{R}} e^{-y^2}dy = C &lt; \\infty$$ so that $$\\int_{\\mathbb{R}} dx \\left(\\int_{\\mathbb{R}} dy e^{-(x^2+y^2)}\\right) = C\\int_{\\mathbb{R}} dx e^{-x^2} = C^2 &lt; \\infty.$$</p>\n\n<p>Thus the theorem gives\n$$\n\\int_{\\mathbb{R}^2} dxdy e^{-(x^2+y^2)} = C^2\n$$\nwhere $$ C = \\int_{\\mathbb{R}} dx e^{-x^2}.$$</p>\n\n<p>The two-dimensional integral can then be transformed to polar coordinates by the change of variables theorem, which (by Fubini again) is a doable iterated integral that comes out to $\\pi.$</p>\n" }, { "answer_id": 2091199, "score": 3, "text": "<p>The integral $\\displaystyle \\int_{-\\infty}^\\infty e^{-y^2} \\,dy$ does not change as $x$ goes from $-\\infty$ to $\\infty$, so you have\n\\begin{align}\n\\int_{-\\infty}^\\infty e^{-x^2} \\, dx \\cdot \\int_{-\\infty}^\\infty e^{-y^2} \\, dy &amp; = \\int_{-\\infty}^\\infty e^{-x^2} \\, dx \\cdot \\text{constant} \\\\[10pt]\n&amp; = \\int_{-\\infty}^\\infty \\left( e^{-x^2}\\cdot\\text{constant} \\right)\\,dx \\\\[10pt]\n&amp; = \\int_{-\\infty}^\\infty \\left( e^{-x^2} \\int_{-\\infty}^\\infty e^{-y^2}\\,dy \\right)\\,dx\n\\end{align}\nThe factor $e^{-x^2}$ does not change as $y$ goes from $-\\infty$ to $\\infty$, so you now have\n\\begin{align}\n\\int_{-\\infty}^\\infty \\left( e^{-x^2} \\int_{-\\infty}^\\infty e^{-y^2}\\,dy \\right) \\, dx &amp;= \\int_{-\\infty}^\\infty \\left( \\text{constant} \\cdot \\int_{-\\infty}^\\infty e^{-y^2}\\,dy \\right) \\,dx \\\\[10pt]\n&amp; = \\int_{-\\infty}^\\infty \\left( \\int_{-\\infty}^\\infty \\text{constant} \\cdot e^{-y^2}\\,dy \\right) \\,dx \\\\[10pt]\n&amp; = \\int_{-\\infty}^\\infty \\left( \\int_{-\\infty}^\\infty e^{-x^2} \\cdot e^{-y^2}\\,dy \\right) \\,dx\n\\end{align}</p>\n" }, { "answer_id": 2091158, "score": 1, 
"text": "<p>Sorry, this is not a complete answer, but I hope it can help you.</p>\n\n<p>If we integrate with respect to $x$, then $e^{-y^2}$ is treated as a constant:\n$$\\displaystyle\\int\\limits_{y=-\\infty}^{y=\\infty}\\int\\limits_{x=-\\infty}^{x=\\infty} e^{-x^2} e^{-y^2} dx\\thinspace dy=\\Big(\\int\\limits_{y=-\\infty}^{y=\\infty} e^{-y^2} dy\\Big)\\Big(\\displaystyle\\int\\limits_{x=-\\infty}^{x=\\infty} e^{-x^2} dx\\Big)=\\thinspace\\int\\limits_{y=-\\infty}^{y=\\infty} e^{-y^2}dy \\displaystyle\\int\\limits_{x=-\\infty}^{x=\\infty} e^{-x^2} dx $$ </p>\n" }, { "answer_id": 2091155, "score": 7, "text": "<p>$$\\int_{-\\infty}^{\\infty} \\int_{-\\infty}^{\\infty} e^{-x^2}e^{-y^2} dx dy$$</p>\n\n<p>Because we treat $y$ as constant in the first, inner, integral we can pull it out.</p>\n\n<p>$$=\\int_{-\\infty}^{\\infty} e^{-y^2} \\int_{-\\infty}^{\\infty} e^{-x^2} dx dy$$</p>\n\n<p>Now because $\\int_{-\\infty}^{\\infty} e^{-x^2} dx$ is some constant we can pull it out,</p>\n\n<p>$$=\\int_{-\\infty}^{\\infty} e^{-x^2} dx\\int_{-\\infty}^{\\infty} e^{-y^2} dy$$</p>\n\n<p>The result we got is generalizable: given $g(x,y)=f(x)h(y)$ we have,</p>\n\n<p>$$\\int_{a}^{b} \\int_{c}^{d} g(x,y) dx dy=\\int_{a}^{b} h(y) dy \\int_{c}^{d} f(x) dx$$</p>\n\n<p>Provided everything we write down exists.</p>\n" }, { "answer_id": 2091166, "score": 5, "text": "<p>The Fubini-Tonelli theorem implies that if $\\int_{\\mathbb{R}} dx \\left(\\int_{\\mathbb{R}} dy |f(x,y)|\\right) &lt; \\infty$ then $$\\int_{\\mathbb{R}^2} f(x,y)dxdy = \\int_{\\mathbb{R}} dx \\left(\\int_{\\mathbb{R}} dy f(x,y)\\right).$$ First, to establish the premise, note that $f(x,y) = e^{-(x^2+y^2)}$ is positive, so $|f(x,y)| = f(x,y).$ So we just need to establish the finiteness of the RHS.
The inner integral is\n$$\n\\int_{\\mathbb{R}} dy e^{-(x^2+y^2)} = e^{-x^2}\\int_{\\mathbb{R}} dy e^{-y^2}.\n$$\nBy comparing the integrand with $e^{-|y|}$ which can be integrated to 2 by elementary means, we see that $$\\int_{\\mathbb{R}} e^{-y^2}dy = C &lt; \\infty$$ so that $$\\int_{\\mathbb{R}} dx \\left(\\int_{\\mathbb{R}} dy e^{-(x^2+y^2)}\\right) = C\\int_{\\mathbb{R}} dx e^{-x^2} = C^2 &lt; \\infty.$$</p>\n\n<p>Thus the theorem gives\n$$\n\\int_{\\mathbb{R}^2} dxdy e^{-(x^2+y^2)} = C^2\n$$\nwhere $$ C = \\int_{\\mathbb{R}} dx e^{-x^2}.$$</p>\n\n<p>The two-dimensional integral can then be transformed to polar coordinates by the change of variables theorem, which (by Fubini again) is a doable iterated integral that comes out to $\\pi.$</p>\n" } ]
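The factorization argued in the answers above is easy to check numerically. The following sketch is mine, not part of the dataset row: the helper name `midpoint_integral` and the truncation of $\mathbb{R}$ to $[-8, 8]$ are my choices (the Gaussian tail beyond $|x|=8$ is below $e^{-64}$, so the truncation is harmless). It verifies that the product of the two one-dimensional integrals, the iterated double integral, and $\pi$ all agree.

```python
import math

def midpoint_integral(f, a, b, n=1000):
    """Midpoint-rule approximation of the integral of f over [a, b]."""
    h = (b - a) / n
    return h * sum(f(a + (i + 0.5) * h) for i in range(n))

# C = integral of e^{-x^2} over R, truncated to [-8, 8].
C = midpoint_integral(lambda x: math.exp(-x * x), -8.0, 8.0)

# Iterated integral: integrate over y first (x held constant), then over x,
# exactly as in the Fubini-Tonelli argument above.
double = midpoint_integral(
    lambda x: midpoint_integral(lambda y: math.exp(-(x * x + y * y)), -8.0, 8.0),
    -8.0, 8.0,
)

print(C * C, double, math.pi)
```

Note that the inner sum factors as $e^{-x^2}$ times the (constant) sum over $y$, so the discrete computation factorizes for exactly the same reason the integral does.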
Question: Why does combining $\int_{-\infty}^{\infty} e^{-x^2} dx\int_{-\infty}^{\infty} e^{-y^2} dy$ into a $dx\,dy$ integral work? Answer 1: <p>$$\int_{-\infty}^{\infty} \int_{-\infty}^{\infty} e^{-x^2}e^{-y^2} dx dy$$</p> <p>Because we treat $y$ as constant in the first, inner, integral we can pull it out.</p> <p>$$=\int_{-\infty}^{\infty} e^{-y^2} \int_{-\infty}^{\infty} e^{-x^2} dx dy$$</p> <p>Now because $\int_{-\infty}^{\infty} e^{-x^2} dx$ is some constant we can pull it out,</p> <p>$$=\int_{-\infty}^{\infty} e^{-x^2} dx\int_{-\infty}^{\infty} e^{-y^2} dy$$</p> <p>The result we got is generalizable: given $g(x,y)=f(x)h(y)$ we have,</p> <p>$$\int_{a}^{b} \int_{c}^{d} g(x,y) dx dy=\int_{a}^{b} h(y) dy \int_{c}^{d} f(x) dx$$</p> <p>Provided everything we write down exists.</p> Answer 2: <p>The Fubini-Tonelli theorem implies that if $\int_{\mathbb{R}} dx \left(\int_{\mathbb{R}} dy |f(x,y)|\right) &lt; \infty$ then $$\int_{\mathbb{R}^2} f(x,y)dxdy = \int_{\mathbb{R}} dx \left(\int_{\mathbb{R}} dy f(x,y)\right).$$ First, to establish the premise, note that $f(x,y) = e^{-(x^2+y^2)}$ is positive, so $|f(x,y)| = f(x,y).$ So we just need to establish the finiteness of the RHS. The inner integral is $$ \int_{\mathbb{R}} dy e^{-(x^2+y^2)} = e^{-x^2}\int_{\mathbb{R}} dy e^{-y^2}.
$$ By comparing the integrand with $e^{-|y|}$ which can be integrated to 2 by elementary means, we see that $$\int_{\mathbb{R}} e^{-y^2}dy = C &lt; \infty$$ so that $$\int_{\mathbb{R}} dx \left(\int_{\mathbb{R}} dy e^{-(x^2+y^2)}\right) = C\int_{\mathbb{R}} dx e^{-x^2} = C^2 &lt; \infty.$$</p> <p>Thus the theorem gives $$ \int_{\mathbb{R}^2} dxdy e^{-(x^2+y^2)} = C^2 $$ where $$ C = \int_{\mathbb{R}} dx e^{-x^2}.$$</p> <p>The two-dimensional integral can then be transformed to polar coordinates by the change of variables theorem, which (by Fubini again) is a doable iterated integral that comes out to $\pi.$</p> Answer 3: <p>The integral $\displaystyle \int_{-\infty}^\infty e^{-y^2} \,dy$ does not change as $x$ goes from $-\infty$ to $\infty$, so you have \begin{align} \int_{-\infty}^\infty e^{-x^2} \, dx \cdot \int_{-\infty}^\infty e^{-y^2} \, dy &amp; = \int_{-\infty}^\infty e^{-x^2} \, dx \cdot \text{constant} \\[10pt] &amp; = \int_{-\infty}^\infty \left( e^{-x^2}\cdot\text{constant} \right)\,dx \\[10pt] &amp; = \int_{-\infty}^\infty \left( e^{-x^2} \int_{-\infty}^\infty e^{-y^2}\,dy \right)\,dx \end{align} The factor $e^{-x^2}$ does not change as $y$ goes from $-\infty$ to $\infty$, so you now have \begin{align} \int_{-\infty}^\infty \left( e^{-x^2} \int_{-\infty}^\infty e^{-y^2}\,dy \right) \, dx &amp;= \int_{-\infty}^\infty \left( \text{constant} \cdot \int_{-\infty}^\infty e^{-y^2}\,dy \right) \,dx \\[10pt] &amp; = \int_{-\infty}^\infty \left( \int_{-\infty}^\infty \text{constant} \cdot e^{-y^2}\,dy \right) \,dx \\[10pt] &amp; = \int_{-\infty}^\infty \left( \int_{-\infty}^\infty e^{-x^2} \cdot e^{-y^2}\,dy \right) \,dx \end{align}</p> Answer 4: <p>Sorry, this is not a complete answer, but I hope it can help you.</p> <p>If we integrate with respect to $x$, then $e^{-y^2}$ is treated as a constant: $$\displaystyle\int\limits_{y=-\infty}^{y=\infty}\int\limits_{x=-\infty}^{x=\infty} e^{-x^2} e^{-y^2} dx\thinspace dy=\Big(\int\limits_{y=-\infty}^{y=\infty} e^{-y^2}
dy\Big)\Big(\displaystyle\int\limits_{x=-\infty}^{x=\infty} e^{-x^2} dx\Big)=\thinspace\int\limits_{y=-\infty}^{y=\infty} e^{-y^2}dy \displaystyle\int\limits_{x=-\infty}^{x=\infty} e^{-x^2} dx $$ </p>
dy\Big)\Big(\displaystyle\int\limits_{x=-\infty}^{x=\infty} e^{-x^2} dx\Big)=\thinspace\int\limits_{y=-\infty}^{y=\infty} e^{-y^2}dy \displaystyle\int\limits_{x=-\infty}^{x=\infty} e^{-x^2} dx $$ </p> Answer 21: <p>$$\int_{-\infty}^{\infty} \int_{-\infty}^{\infty} e^{-x^2}e^{-y^2} dx dy$$</p> <p>Because we treat $y$ constant in the first, inner, integral we can pull it out.</p> <p>$$=\int_{-\infty}^{\infty} e^{-y^2} \int_{-\infty}^{\infty} e^{-x^2} dx dy$$</p> <p>Now because $\int_{-\infty}^{\infty} e^{-x^2} dx$ is some constant we can pull it out,</p> <p>$$=\int_{-\infty}^{\infty} e^{-x^2} dx\int_{-\infty}^{\infty} e^{-y^2} dy$$</p> <p>The result we got is generalizable, given $g(x,y)=f(x)h(y)$ we have,</p> <p>$$\int_{a}^{b} \int_{c}^{d} g(x,y) dx dy=\int_{a}^{b} h(y) dy \int_{c}^{d} f(x) dx$$</p> <p>Provided everything we write down exists.</p> Answer 22: <p>Fubini-Tonelli theorem implies that if $\int_{\mathbb{R}} dx \left(\int_{\mathbb{R}} dy |f(x,y)|\right) &lt; \infty$ then $$\int_{\mathbb{R}^2} f(x,y)dxdy = \int_{\mathbb{R}} dx \left(\int_{\mathbb{R}} dy f(x,y)\right).$$ First, to establish the premise, note that $f(x,y) = e^{-(x^2+y^2)},$ is positive so $|f(x,y)| = f(x,y).$ So we just need to establish the finiteness of the RHS. The inner integral is $$ \int_{\mathbb{R}} dy e^{-(x^2+y^2)} = e^{-x^2}\int_{\mathbb{R}} dy e^{-y^2}. 
$$ By comparing the integrand with $e^{-|y|}$ which can be integrated to 2 by elementary means, we see that $$\int_{\mathbb{R}} e^{-y^2}dy = C &lt; \infty$$ so that $$\int_{\mathbb{R}} dx \left(\int_{\mathbb{R}} dy e^{-(x^2+y^2)}\right) = C\int_{\mathbb{R}} dx e^{-x^2} = C^2 &lt; \infty.$$</p> <p>Thus the theorem gives $$ \int_{\mathbb{R}^2} dxdy e^{-(x^2+y^2)} = C^2 $$ where $$ C = \int_{\mathbb{R}} dx e^{-x^2}.$$</p> <p>The two-dimensional integral can then be transformed to polar coordinates by the change of variables theorem, which (by Fubini again) is a doable iterated integral that comes out to $\pi.$</p> Answer 23: <p>The integral $\displaystyle \int_{-\infty}^\infty e^{-y^2} \,dy$ does not change as $x$ goes from $-\infty$ to $\infty$, so you have \begin{align} \int_{-\infty}^\infty e^{-x^2} \, dx \cdot \int_{-\infty}^\infty e^{-y^2} \, dy &amp; = \int_{-\infty}^\infty e^{-x^2} \, dx \cdot \text{constant} \\[10pt] &amp; = \int_{-\infty}^\infty \left( e^{-x^2}\cdot\text{constant} \right)\,dx \\[10pt] &amp; = \int_{-\infty}^\infty \left( e^{-x^2} \int_{-\infty}^\infty e^{-y^2}\,dy \right)\,dx \end{align} The factor $e^{-x^2}$ does not change as $y$ goes from $-\infty$ to $\infty$, so you now have \begin{align} \int_{-\infty}^\infty \left( e^{-x^2} \int_{-\infty}^\infty e^{-y^2}\,dy \right) \, dx &amp;= \int_{-\infty}^\infty \left( \text{constant} \cdot \int_{-\infty}^\infty e^{-y^2}\,dy \right) \,dx \\[10pt] &amp; = \int_{-\infty}^\infty \left( \int_{-\infty}^\infty \text{constant} \cdot e^{-y^2}\,dy \right) \,dx \\[10pt] &amp; = \int_{-\infty}^\infty \left( \int_{-\infty}^\infty e^{-x^2} \cdot e^{-y^2}\,dy \right) \,dx \end{align}</p> Answer 24: <p>Sorry This is not a complete answer but i hope it can help you </p> <p>if we integral of dx then $e^{-y^2}$ assume constant $$\displaystyle\int\limits_{y=-\infty}^{y=\infty}\int\limits_{x=-\infty}^{x=\infty} e^{-x^2} e^{-y^2} dx\thinspace dy=\Big(\int\limits_{y=-\infty}^{y=\infty} e^{-y^2} 
dy\Big)\Big(\displaystyle\int\limits_{x=-\infty}^{x=\infty} e^{-x^2} dx\Big)=\thinspace\int\limits_{y=-\infty}^{y=\infty} e^{-y^2}dy \displaystyle\int\limits_{x=-\infty}^{x=\infty} e^{-x^2} dx $$ </p> Answer 25: <p>$$\int_{-\infty}^{\infty} \int_{-\infty}^{\infty} e^{-x^2}e^{-y^2} dx dy$$</p> <p>Because we treat $y$ constant in the first, inner, integral we can pull it out.</p> <p>$$=\int_{-\infty}^{\infty} e^{-y^2} \int_{-\infty}^{\infty} e^{-x^2} dx dy$$</p> <p>Now because $\int_{-\infty}^{\infty} e^{-x^2} dx$ is some constant we can pull it out,</p> <p>$$=\int_{-\infty}^{\infty} e^{-x^2} dx\int_{-\infty}^{\infty} e^{-y^2} dy$$</p> <p>The result we got is generalizable, given $g(x,y)=f(x)h(y)$ we have,</p> <p>$$\int_{a}^{b} \int_{c}^{d} g(x,y) dx dy=\int_{a}^{b} h(y) dy \int_{c}^{d} f(x) dx$$</p> <p>Provided everything we write down exists.</p> Answer 26: <p>Fubini-Tonelli theorem implies that if $\int_{\mathbb{R}} dx \left(\int_{\mathbb{R}} dy |f(x,y)|\right) &lt; \infty$ then $$\int_{\mathbb{R}^2} f(x,y)dxdy = \int_{\mathbb{R}} dx \left(\int_{\mathbb{R}} dy f(x,y)\right).$$ First, to establish the premise, note that $f(x,y) = e^{-(x^2+y^2)},$ is positive so $|f(x,y)| = f(x,y).$ So we just need to establish the finiteness of the RHS. The inner integral is $$ \int_{\mathbb{R}} dy e^{-(x^2+y^2)} = e^{-x^2}\int_{\mathbb{R}} dy e^{-y^2}. 
$$ By comparing the integrand with $e^{-|y|}$ which can be integrated to 2 by elementary means, we see that $$\int_{\mathbb{R}} e^{-y^2}dy = C &lt; \infty$$ so that $$\int_{\mathbb{R}} dx \left(\int_{\mathbb{R}} dy e^{-(x^2+y^2)}\right) = C\int_{\mathbb{R}} dx e^{-x^2} = C^2 &lt; \infty.$$</p> <p>Thus the theorem gives $$ \int_{\mathbb{R}^2} dxdy e^{-(x^2+y^2)} = C^2 $$ where $$ C = \int_{\mathbb{R}} dx e^{-x^2}.$$</p> <p>The two-dimensional integral can then be transformed to polar coordinates by the change of variables theorem, which (by Fubini again) is a doable iterated integral that comes out to $\pi.$</p> Answer 27: <p>The integral $\displaystyle \int_{-\infty}^\infty e^{-y^2} \,dy$ does not change as $x$ goes from $-\infty$ to $\infty$, so you have \begin{align} \int_{-\infty}^\infty e^{-x^2} \, dx \cdot \int_{-\infty}^\infty e^{-y^2} \, dy &amp; = \int_{-\infty}^\infty e^{-x^2} \, dx \cdot \text{constant} \\[10pt] &amp; = \int_{-\infty}^\infty \left( e^{-x^2}\cdot\text{constant} \right)\,dx \\[10pt] &amp; = \int_{-\infty}^\infty \left( e^{-x^2} \int_{-\infty}^\infty e^{-y^2}\,dy \right)\,dx \end{align} The factor $e^{-x^2}$ does not change as $y$ goes from $-\infty$ to $\infty$, so you now have \begin{align} \int_{-\infty}^\infty \left( e^{-x^2} \int_{-\infty}^\infty e^{-y^2}\,dy \right) \, dx &amp;= \int_{-\infty}^\infty \left( \text{constant} \cdot \int_{-\infty}^\infty e^{-y^2}\,dy \right) \,dx \\[10pt] &amp; = \int_{-\infty}^\infty \left( \int_{-\infty}^\infty \text{constant} \cdot e^{-y^2}\,dy \right) \,dx \\[10pt] &amp; = \int_{-\infty}^\infty \left( \int_{-\infty}^\infty e^{-x^2} \cdot e^{-y^2}\,dy \right) \,dx \end{align}</p> Answer 28: <p>Sorry This is not a complete answer but i hope it can help you </p> <p>if we integral of dx then $e^{-y^2}$ assume constant $$\displaystyle\int\limits_{y=-\infty}^{y=\infty}\int\limits_{x=-\infty}^{x=\infty} e^{-x^2} e^{-y^2} dx\thinspace dy=\Big(\int\limits_{y=-\infty}^{y=\infty} e^{-y^2} 
dy\Big)\Big(\displaystyle\int\limits_{x=-\infty}^{x=\infty} e^{-x^2} dx\Big)=\thinspace\int\limits_{y=-\infty}^{y=\infty} e^{-y^2}dy \displaystyle\int\limits_{x=-\infty}^{x=\infty} e^{-x^2} dx $$ </p> Answer 29: <p>$$\int_{-\infty}^{\infty} \int_{-\infty}^{\infty} e^{-x^2}e^{-y^2} dx dy$$</p> <p>Because we treat $y$ constant in the first, inner, integral we can pull it out.</p> <p>$$=\int_{-\infty}^{\infty} e^{-y^2} \int_{-\infty}^{\infty} e^{-x^2} dx dy$$</p> <p>Now because $\int_{-\infty}^{\infty} e^{-x^2} dx$ is some constant we can pull it out,</p> <p>$$=\int_{-\infty}^{\infty} e^{-x^2} dx\int_{-\infty}^{\infty} e^{-y^2} dy$$</p> <p>The result we got is generalizable, given $g(x,y)=f(x)h(y)$ we have,</p> <p>$$\int_{a}^{b} \int_{c}^{d} g(x,y) dx dy=\int_{a}^{b} h(y) dy \int_{c}^{d} f(x) dx$$</p> <p>Provided everything we write down exists.</p> Answer 30: <p>Fubini-Tonelli theorem implies that if $\int_{\mathbb{R}} dx \left(\int_{\mathbb{R}} dy |f(x,y)|\right) &lt; \infty$ then $$\int_{\mathbb{R}^2} f(x,y)dxdy = \int_{\mathbb{R}} dx \left(\int_{\mathbb{R}} dy f(x,y)\right).$$ First, to establish the premise, note that $f(x,y) = e^{-(x^2+y^2)},$ is positive so $|f(x,y)| = f(x,y).$ So we just need to establish the finiteness of the RHS. The inner integral is $$ \int_{\mathbb{R}} dy e^{-(x^2+y^2)} = e^{-x^2}\int_{\mathbb{R}} dy e^{-y^2}. 
$$ By comparing the integrand with $e^{-|y|}$ which can be integrated to 2 by elementary means, we see that $$\int_{\mathbb{R}} e^{-y^2}dy = C &lt; \infty$$ so that $$\int_{\mathbb{R}} dx \left(\int_{\mathbb{R}} dy e^{-(x^2+y^2)}\right) = C\int_{\mathbb{R}} dx e^{-x^2} = C^2 &lt; \infty.$$</p> <p>Thus the theorem gives $$ \int_{\mathbb{R}^2} dxdy e^{-(x^2+y^2)} = C^2 $$ where $$ C = \int_{\mathbb{R}} dx e^{-x^2}.$$</p> <p>The two-dimensional integral can then be transformed to polar coordinates by the change of variables theorem, which (by Fubini again) is a doable iterated integral that comes out to $\pi.$</p> Answer 31: <p>The integral $\displaystyle \int_{-\infty}^\infty e^{-y^2} \,dy$ does not change as $x$ goes from $-\infty$ to $\infty$, so you have \begin{align} \int_{-\infty}^\infty e^{-x^2} \, dx \cdot \int_{-\infty}^\infty e^{-y^2} \, dy &amp; = \int_{-\infty}^\infty e^{-x^2} \, dx \cdot \text{constant} \\[10pt] &amp; = \int_{-\infty}^\infty \left( e^{-x^2}\cdot\text{constant} \right)\,dx \\[10pt] &amp; = \int_{-\infty}^\infty \left( e^{-x^2} \int_{-\infty}^\infty e^{-y^2}\,dy \right)\,dx \end{align} The factor $e^{-x^2}$ does not change as $y$ goes from $-\infty$ to $\infty$, so you now have \begin{align} \int_{-\infty}^\infty \left( e^{-x^2} \int_{-\infty}^\infty e^{-y^2}\,dy \right) \, dx &amp;= \int_{-\infty}^\infty \left( \text{constant} \cdot \int_{-\infty}^\infty e^{-y^2}\,dy \right) \,dx \\[10pt] &amp; = \int_{-\infty}^\infty \left( \int_{-\infty}^\infty \text{constant} \cdot e^{-y^2}\,dy \right) \,dx \\[10pt] &amp; = \int_{-\infty}^\infty \left( \int_{-\infty}^\infty e^{-x^2} \cdot e^{-y^2}\,dy \right) \,dx \end{align}</p> Answer 32: <p>Sorry This is not a complete answer but i hope it can help you </p> <p>if we integral of dx then $e^{-y^2}$ assume constant $$\displaystyle\int\limits_{y=-\infty}^{y=\infty}\int\limits_{x=-\infty}^{x=\infty} e^{-x^2} e^{-y^2} dx\thinspace dy=\Big(\int\limits_{y=-\infty}^{y=\infty} e^{-y^2} 
dy\Big)\Big(\displaystyle\int\limits_{x=-\infty}^{x=\infty} e^{-x^2} dx\Big)=\thinspace\int\limits_{y=-\infty}^{y=\infty} e^{-y^2}dy \displaystyle\int\limits_{x=-\infty}^{x=\infty} e^{-x^2} dx $$ </p> Answer 33: <p>$$\int_{-\infty}^{\infty} \int_{-\infty}^{\infty} e^{-x^2}e^{-y^2} dx dy$$</p> <p>Because we treat $y$ constant in the first, inner, integral we can pull it out.</p> <p>$$=\int_{-\infty}^{\infty} e^{-y^2} \int_{-\infty}^{\infty} e^{-x^2} dx dy$$</p> <p>Now because $\int_{-\infty}^{\infty} e^{-x^2} dx$ is some constant we can pull it out,</p> <p>$$=\int_{-\infty}^{\infty} e^{-x^2} dx\int_{-\infty}^{\infty} e^{-y^2} dy$$</p> <p>The result we got is generalizable, given $g(x,y)=f(x)h(y)$ we have,</p> <p>$$\int_{a}^{b} \int_{c}^{d} g(x,y) dx dy=\int_{a}^{b} h(y) dy \int_{c}^{d} f(x) dx$$</p> <p>Provided everything we write down exists.</p> Answer 34: <p>Fubini-Tonelli theorem implies that if $\int_{\mathbb{R}} dx \left(\int_{\mathbb{R}} dy |f(x,y)|\right) &lt; \infty$ then $$\int_{\mathbb{R}^2} f(x,y)dxdy = \int_{\mathbb{R}} dx \left(\int_{\mathbb{R}} dy f(x,y)\right).$$ First, to establish the premise, note that $f(x,y) = e^{-(x^2+y^2)},$ is positive so $|f(x,y)| = f(x,y).$ So we just need to establish the finiteness of the RHS. The inner integral is $$ \int_{\mathbb{R}} dy e^{-(x^2+y^2)} = e^{-x^2}\int_{\mathbb{R}} dy e^{-y^2}. 
$$ By comparing the integrand with $e^{-|y|}$ which can be integrated to 2 by elementary means, we see that $$\int_{\mathbb{R}} e^{-y^2}dy = C &lt; \infty$$ so that $$\int_{\mathbb{R}} dx \left(\int_{\mathbb{R}} dy e^{-(x^2+y^2)}\right) = C\int_{\mathbb{R}} dx e^{-x^2} = C^2 &lt; \infty.$$</p> <p>Thus the theorem gives $$ \int_{\mathbb{R}^2} dxdy e^{-(x^2+y^2)} = C^2 $$ where $$ C = \int_{\mathbb{R}} dx e^{-x^2}.$$</p> <p>The two-dimensional integral can then be transformed to polar coordinates by the change of variables theorem, which (by Fubini again) is a doable iterated integral that comes out to $\pi.$</p> Answer 35: <p>The integral $\displaystyle \int_{-\infty}^\infty e^{-y^2} \,dy$ does not change as $x$ goes from $-\infty$ to $\infty$, so you have \begin{align} \int_{-\infty}^\infty e^{-x^2} \, dx \cdot \int_{-\infty}^\infty e^{-y^2} \, dy &amp; = \int_{-\infty}^\infty e^{-x^2} \, dx \cdot \text{constant} \\[10pt] &amp; = \int_{-\infty}^\infty \left( e^{-x^2}\cdot\text{constant} \right)\,dx \\[10pt] &amp; = \int_{-\infty}^\infty \left( e^{-x^2} \int_{-\infty}^\infty e^{-y^2}\,dy \right)\,dx \end{align} The factor $e^{-x^2}$ does not change as $y$ goes from $-\infty$ to $\infty$, so you now have \begin{align} \int_{-\infty}^\infty \left( e^{-x^2} \int_{-\infty}^\infty e^{-y^2}\,dy \right) \, dx &amp;= \int_{-\infty}^\infty \left( \text{constant} \cdot \int_{-\infty}^\infty e^{-y^2}\,dy \right) \,dx \\[10pt] &amp; = \int_{-\infty}^\infty \left( \int_{-\infty}^\infty \text{constant} \cdot e^{-y^2}\,dy \right) \,dx \\[10pt] &amp; = \int_{-\infty}^\infty \left( \int_{-\infty}^\infty e^{-x^2} \cdot e^{-y^2}\,dy \right) \,dx \end{align}</p> Answer 36: <p>Sorry This is not a complete answer but i hope it can help you </p> <p>if we integral of dx then $e^{-y^2}$ assume constant $$\displaystyle\int\limits_{y=-\infty}^{y=\infty}\int\limits_{x=-\infty}^{x=\infty} e^{-x^2} e^{-y^2} dx\thinspace dy=\Big(\int\limits_{y=-\infty}^{y=\infty} e^{-y^2} 
dy\Big)\Big(\displaystyle\int\limits_{x=-\infty}^{x=\infty} e^{-x^2} dx\Big)=\thinspace\int\limits_{y=-\infty}^{y=\infty} e^{-y^2}dy \displaystyle\int\limits_{x=-\infty}^{x=\infty} e^{-x^2} dx $$ </p> Answer 37: <p>$$\int_{-\infty}^{\infty} \int_{-\infty}^{\infty} e^{-x^2}e^{-y^2} dx dy$$</p> <p>Because we treat $y$ constant in the first, inner, integral we can pull it out.</p> <p>$$=\int_{-\infty}^{\infty} e^{-y^2} \int_{-\infty}^{\infty} e^{-x^2} dx dy$$</p> <p>Now because $\int_{-\infty}^{\infty} e^{-x^2} dx$ is some constant we can pull it out,</p> <p>$$=\int_{-\infty}^{\infty} e^{-x^2} dx\int_{-\infty}^{\infty} e^{-y^2} dy$$</p> <p>The result we got is generalizable, given $g(x,y)=f(x)h(y)$ we have,</p> <p>$$\int_{a}^{b} \int_{c}^{d} g(x,y) dx dy=\int_{a}^{b} h(y) dy \int_{c}^{d} f(x) dx$$</p> <p>Provided everything we write down exists.</p> Answer 38: <p>Fubini-Tonelli theorem implies that if $\int_{\mathbb{R}} dx \left(\int_{\mathbb{R}} dy |f(x,y)|\right) &lt; \infty$ then $$\int_{\mathbb{R}^2} f(x,y)dxdy = \int_{\mathbb{R}} dx \left(\int_{\mathbb{R}} dy f(x,y)\right).$$ First, to establish the premise, note that $f(x,y) = e^{-(x^2+y^2)},$ is positive so $|f(x,y)| = f(x,y).$ So we just need to establish the finiteness of the RHS. The inner integral is $$ \int_{\mathbb{R}} dy e^{-(x^2+y^2)} = e^{-x^2}\int_{\mathbb{R}} dy e^{-y^2}. 
$$ By comparing the integrand with $e^{-|y|}$ which can be integrated to 2 by elementary means, we see that $$\int_{\mathbb{R}} e^{-y^2}dy = C &lt; \infty$$ so that $$\int_{\mathbb{R}} dx \left(\int_{\mathbb{R}} dy e^{-(x^2+y^2)}\right) = C\int_{\mathbb{R}} dx e^{-x^2} = C^2 &lt; \infty.$$</p> <p>Thus the theorem gives $$ \int_{\mathbb{R}^2} dxdy e^{-(x^2+y^2)} = C^2 $$ where $$ C = \int_{\mathbb{R}} dx e^{-x^2}.$$</p> <p>The two-dimensional integral can then be transformed to polar coordinates by the change of variables theorem, which (by Fubini again) is a doable iterated integral that comes out to $\pi.$</p> Answer 39: <p>The integral $\displaystyle \int_{-\infty}^\infty e^{-y^2} \,dy$ does not change as $x$ goes from $-\infty$ to $\infty$, so you have \begin{align} \int_{-\infty}^\infty e^{-x^2} \, dx \cdot \int_{-\infty}^\infty e^{-y^2} \, dy &amp; = \int_{-\infty}^\infty e^{-x^2} \, dx \cdot \text{constant} \\[10pt] &amp; = \int_{-\infty}^\infty \left( e^{-x^2}\cdot\text{constant} \right)\,dx \\[10pt] &amp; = \int_{-\infty}^\infty \left( e^{-x^2} \int_{-\infty}^\infty e^{-y^2}\,dy \right)\,dx \end{align} The factor $e^{-x^2}$ does not change as $y$ goes from $-\infty$ to $\infty$, so you now have \begin{align} \int_{-\infty}^\infty \left( e^{-x^2} \int_{-\infty}^\infty e^{-y^2}\,dy \right) \, dx &amp;= \int_{-\infty}^\infty \left( \text{constant} \cdot \int_{-\infty}^\infty e^{-y^2}\,dy \right) \,dx \\[10pt] &amp; = \int_{-\infty}^\infty \left( \int_{-\infty}^\infty \text{constant} \cdot e^{-y^2}\,dy \right) \,dx \\[10pt] &amp; = \int_{-\infty}^\infty \left( \int_{-\infty}^\infty e^{-x^2} \cdot e^{-y^2}\,dy \right) \,dx \end{align}</p> Answer 40: <p>Sorry This is not a complete answer but i hope it can help you </p> <p>if we integral of dx then $e^{-y^2}$ assume constant $$\displaystyle\int\limits_{y=-\infty}^{y=\infty}\int\limits_{x=-\infty}^{x=\infty} e^{-x^2} e^{-y^2} dx\thinspace dy=\Big(\int\limits_{y=-\infty}^{y=\infty} e^{-y^2} 
dy\Big)\Big(\displaystyle\int\limits_{x=-\infty}^{x=\infty} e^{-x^2} dx\Big)=\thinspace\int\limits_{y=-\infty}^{y=\infty} e^{-y^2}dy \displaystyle\int\limits_{x=-\infty}^{x=\infty} e^{-x^2} dx $$ </p> Answer 41: <p>$$\int_{-\infty}^{\infty} \int_{-\infty}^{\infty} e^{-x^2}e^{-y^2} dx dy$$</p> <p>Because we treat $y$ constant in the first, inner, integral we can pull it out.</p> <p>$$=\int_{-\infty}^{\infty} e^{-y^2} \int_{-\infty}^{\infty} e^{-x^2} dx dy$$</p> <p>Now because $\int_{-\infty}^{\infty} e^{-x^2} dx$ is some constant we can pull it out,</p> <p>$$=\int_{-\infty}^{\infty} e^{-x^2} dx\int_{-\infty}^{\infty} e^{-y^2} dy$$</p> <p>The result we got is generalizable, given $g(x,y)=f(x)h(y)$ we have,</p> <p>$$\int_{a}^{b} \int_{c}^{d} g(x,y) dx dy=\int_{a}^{b} h(y) dy \int_{c}^{d} f(x) dx$$</p> <p>Provided everything we write down exists.</p> Answer 42: <p>Fubini-Tonelli theorem implies that if $\int_{\mathbb{R}} dx \left(\int_{\mathbb{R}} dy |f(x,y)|\right) &lt; \infty$ then $$\int_{\mathbb{R}^2} f(x,y)dxdy = \int_{\mathbb{R}} dx \left(\int_{\mathbb{R}} dy f(x,y)\right).$$ First, to establish the premise, note that $f(x,y) = e^{-(x^2+y^2)},$ is positive so $|f(x,y)| = f(x,y).$ So we just need to establish the finiteness of the RHS. The inner integral is $$ \int_{\mathbb{R}} dy e^{-(x^2+y^2)} = e^{-x^2}\int_{\mathbb{R}} dy e^{-y^2}. 
$$ By comparing the integrand with $e^{-|y|}$ which can be integrated to 2 by elementary means, we see that $$\int_{\mathbb{R}} e^{-y^2}dy = C &lt; \infty$$ so that $$\int_{\mathbb{R}} dx \left(\int_{\mathbb{R}} dy e^{-(x^2+y^2)}\right) = C\int_{\mathbb{R}} dx e^{-x^2} = C^2 &lt; \infty.$$</p> <p>Thus the theorem gives $$ \int_{\mathbb{R}^2} dxdy e^{-(x^2+y^2)} = C^2 $$ where $$ C = \int_{\mathbb{R}} dx e^{-x^2}.$$</p> <p>The two-dimensional integral can then be transformed to polar coordinates by the change of variables theorem, which (by Fubini again) is a doable iterated integral that comes out to $\pi.$</p> Answer 43: <p>The integral $\displaystyle \int_{-\infty}^\infty e^{-y^2} \,dy$ does not change as $x$ goes from $-\infty$ to $\infty$, so you have \begin{align} \int_{-\infty}^\infty e^{-x^2} \, dx \cdot \int_{-\infty}^\infty e^{-y^2} \, dy &amp; = \int_{-\infty}^\infty e^{-x^2} \, dx \cdot \text{constant} \\[10pt] &amp; = \int_{-\infty}^\infty \left( e^{-x^2}\cdot\text{constant} \right)\,dx \\[10pt] &amp; = \int_{-\infty}^\infty \left( e^{-x^2} \int_{-\infty}^\infty e^{-y^2}\,dy \right)\,dx \end{align} The factor $e^{-x^2}$ does not change as $y$ goes from $-\infty$ to $\infty$, so you now have \begin{align} \int_{-\infty}^\infty \left( e^{-x^2} \int_{-\infty}^\infty e^{-y^2}\,dy \right) \, dx &amp;= \int_{-\infty}^\infty \left( \text{constant} \cdot \int_{-\infty}^\infty e^{-y^2}\,dy \right) \,dx \\[10pt] &amp; = \int_{-\infty}^\infty \left( \int_{-\infty}^\infty \text{constant} \cdot e^{-y^2}\,dy \right) \,dx \\[10pt] &amp; = \int_{-\infty}^\infty \left( \int_{-\infty}^\infty e^{-x^2} \cdot e^{-y^2}\,dy \right) \,dx \end{align}</p> Answer 44: <p>Sorry This is not a complete answer but i hope it can help you </p> <p>if we integral of dx then $e^{-y^2}$ assume constant $$\displaystyle\int\limits_{y=-\infty}^{y=\infty}\int\limits_{x=-\infty}^{x=\infty} e^{-x^2} e^{-y^2} dx\thinspace dy=\Big(\int\limits_{y=-\infty}^{y=\infty} e^{-y^2} 
dy\Big)\Big(\displaystyle\int\limits_{x=-\infty}^{x=\infty} e^{-x^2} dx\Big)=\thinspace\int\limits_{y=-\infty}^{y=\infty} e^{-y^2}dy \displaystyle\int\limits_{x=-\infty}^{x=\infty} e^{-x^2} dx $$ </p> Answer 45: <p>$$\int_{-\infty}^{\infty} \int_{-\infty}^{\infty} e^{-x^2}e^{-y^2} dx dy$$</p> <p>Because we treat $y$ constant in the first, inner, integral we can pull it out.</p> <p>$$=\int_{-\infty}^{\infty} e^{-y^2} \int_{-\infty}^{\infty} e^{-x^2} dx dy$$</p> <p>Now because $\int_{-\infty}^{\infty} e^{-x^2} dx$ is some constant we can pull it out,</p> <p>$$=\int_{-\infty}^{\infty} e^{-x^2} dx\int_{-\infty}^{\infty} e^{-y^2} dy$$</p> <p>The result we got is generalizable, given $g(x,y)=f(x)h(y)$ we have,</p> <p>$$\int_{a}^{b} \int_{c}^{d} g(x,y) dx dy=\int_{a}^{b} h(y) dy \int_{c}^{d} f(x) dx$$</p> <p>Provided everything we write down exists.</p> Answer 46: <p>Fubini-Tonelli theorem implies that if $\int_{\mathbb{R}} dx \left(\int_{\mathbb{R}} dy |f(x,y)|\right) &lt; \infty$ then $$\int_{\mathbb{R}^2} f(x,y)dxdy = \int_{\mathbb{R}} dx \left(\int_{\mathbb{R}} dy f(x,y)\right).$$ First, to establish the premise, note that $f(x,y) = e^{-(x^2+y^2)},$ is positive so $|f(x,y)| = f(x,y).$ So we just need to establish the finiteness of the RHS. The inner integral is $$ \int_{\mathbb{R}} dy e^{-(x^2+y^2)} = e^{-x^2}\int_{\mathbb{R}} dy e^{-y^2}. 
$$ By comparing the integrand with $e^{-|y|}$ which can be integrated to 2 by elementary means, we see that $$\int_{\mathbb{R}} e^{-y^2}dy = C &lt; \infty$$ so that $$\int_{\mathbb{R}} dx \left(\int_{\mathbb{R}} dy e^{-(x^2+y^2)}\right) = C\int_{\mathbb{R}} dx e^{-x^2} = C^2 &lt; \infty.$$</p> <p>Thus the theorem gives $$ \int_{\mathbb{R}^2} dxdy e^{-(x^2+y^2)} = C^2 $$ where $$ C = \int_{\mathbb{R}} dx e^{-x^2}.$$</p> <p>The two-dimensional integral can then be transformed to polar coordinates by the change of variables theorem, which (by Fubini again) is a doable iterated integral that comes out to $\pi.$</p> Answer 47: <p>The integral $\displaystyle \int_{-\infty}^\infty e^{-y^2} \,dy$ does not change as $x$ goes from $-\infty$ to $\infty$, so you have \begin{align} \int_{-\infty}^\infty e^{-x^2} \, dx \cdot \int_{-\infty}^\infty e^{-y^2} \, dy &amp; = \int_{-\infty}^\infty e^{-x^2} \, dx \cdot \text{constant} \\[10pt] &amp; = \int_{-\infty}^\infty \left( e^{-x^2}\cdot\text{constant} \right)\,dx \\[10pt] &amp; = \int_{-\infty}^\infty \left( e^{-x^2} \int_{-\infty}^\infty e^{-y^2}\,dy \right)\,dx \end{align} The factor $e^{-x^2}$ does not change as $y$ goes from $-\infty$ to $\infty$, so you now have \begin{align} \int_{-\infty}^\infty \left( e^{-x^2} \int_{-\infty}^\infty e^{-y^2}\,dy \right) \, dx &amp;= \int_{-\infty}^\infty \left( \text{constant} \cdot \int_{-\infty}^\infty e^{-y^2}\,dy \right) \,dx \\[10pt] &amp; = \int_{-\infty}^\infty \left( \int_{-\infty}^\infty \text{constant} \cdot e^{-y^2}\,dy \right) \,dx \\[10pt] &amp; = \int_{-\infty}^\infty \left( \int_{-\infty}^\infty e^{-x^2} \cdot e^{-y^2}\,dy \right) \,dx \end{align}</p> Answer 48: <p>Sorry This is not a complete answer but i hope it can help you </p> <p>if we integral of dx then $e^{-y^2}$ assume constant $$\displaystyle\int\limits_{y=-\infty}^{y=\infty}\int\limits_{x=-\infty}^{x=\infty} e^{-x^2} e^{-y^2} dx\thinspace dy=\Big(\int\limits_{y=-\infty}^{y=\infty} e^{-y^2} 
dy\Big)\Big(\displaystyle\int\limits_{x=-\infty}^{x=\infty} e^{-x^2} dx\Big)=\thinspace\int\limits_{y=-\infty}^{y=\infty} e^{-y^2}dy \displaystyle\int\limits_{x=-\infty}^{x=\infty} e^{-x^2} dx $$ </p> Answer 49: <p>$$\int_{-\infty}^{\infty} \int_{-\infty}^{\infty} e^{-x^2}e^{-y^2} dx dy$$</p> <p>Because we treat $y$ constant in the first, inner, integral we can pull it out.</p> <p>$$=\int_{-\infty}^{\infty} e^{-y^2} \int_{-\infty}^{\infty} e^{-x^2} dx dy$$</p> <p>Now because $\int_{-\infty}^{\infty} e^{-x^2} dx$ is some constant we can pull it out,</p> <p>$$=\int_{-\infty}^{\infty} e^{-x^2} dx\int_{-\infty}^{\infty} e^{-y^2} dy$$</p> <p>The result we got is generalizable, given $g(x,y)=f(x)h(y)$ we have,</p> <p>$$\int_{a}^{b} \int_{c}^{d} g(x,y) dx dy=\int_{a}^{b} h(y) dy \int_{c}^{d} f(x) dx$$</p> <p>Provided everything we write down exists.</p> Answer 50: <p>Fubini-Tonelli theorem implies that if $\int_{\mathbb{R}} dx \left(\int_{\mathbb{R}} dy |f(x,y)|\right) &lt; \infty$ then $$\int_{\mathbb{R}^2} f(x,y)dxdy = \int_{\mathbb{R}} dx \left(\int_{\mathbb{R}} dy f(x,y)\right).$$ First, to establish the premise, note that $f(x,y) = e^{-(x^2+y^2)},$ is positive so $|f(x,y)| = f(x,y).$ So we just need to establish the finiteness of the RHS. The inner integral is $$ \int_{\mathbb{R}} dy e^{-(x^2+y^2)} = e^{-x^2}\int_{\mathbb{R}} dy e^{-y^2}. 
$$ By comparing the integrand with $e^{-|y|}$ which can be integrated to 2 by elementary means, we see that $$\int_{\mathbb{R}} e^{-y^2}dy = C &lt; \infty$$ so that $$\int_{\mathbb{R}} dx \left(\int_{\mathbb{R}} dy e^{-(x^2+y^2)}\right) = C\int_{\mathbb{R}} dx e^{-x^2} = C^2 &lt; \infty.$$</p> <p>Thus the theorem gives $$ \int_{\mathbb{R}^2} dxdy e^{-(x^2+y^2)} = C^2 $$ where $$ C = \int_{\mathbb{R}} dx e^{-x^2}.$$</p> <p>The two-dimensional integral can then be transformed to polar coordinates by the change of variables theorem, which (by Fubini again) is a doable iterated integral that comes out to $\pi.$</p> Answer 51: <p>The integral $\displaystyle \int_{-\infty}^\infty e^{-y^2} \,dy$ does not change as $x$ goes from $-\infty$ to $\infty$, so you have \begin{align} \int_{-\infty}^\infty e^{-x^2} \, dx \cdot \int_{-\infty}^\infty e^{-y^2} \, dy &amp; = \int_{-\infty}^\infty e^{-x^2} \, dx \cdot \text{constant} \\[10pt] &amp; = \int_{-\infty}^\infty \left( e^{-x^2}\cdot\text{constant} \right)\,dx \\[10pt] &amp; = \int_{-\infty}^\infty \left( e^{-x^2} \int_{-\infty}^\infty e^{-y^2}\,dy \right)\,dx \end{align} The factor $e^{-x^2}$ does not change as $y$ goes from $-\infty$ to $\infty$, so you now have \begin{align} \int_{-\infty}^\infty \left( e^{-x^2} \int_{-\infty}^\infty e^{-y^2}\,dy \right) \, dx &amp;= \int_{-\infty}^\infty \left( \text{constant} \cdot \int_{-\infty}^\infty e^{-y^2}\,dy \right) \,dx \\[10pt] &amp; = \int_{-\infty}^\infty \left( \int_{-\infty}^\infty \text{constant} \cdot e^{-y^2}\,dy \right) \,dx \\[10pt] &amp; = \int_{-\infty}^\infty \left( \int_{-\infty}^\infty e^{-x^2} \cdot e^{-y^2}\,dy \right) \,dx \end{align}</p> Answer 52: <p>Sorry This is not a complete answer but i hope it can help you </p> <p>if we integral of dx then $e^{-y^2}$ assume constant $$\displaystyle\int\limits_{y=-\infty}^{y=\infty}\int\limits_{x=-\infty}^{x=\infty} e^{-x^2} e^{-y^2} dx\thinspace dy=\Big(\int\limits_{y=-\infty}^{y=\infty} e^{-y^2} 
dy\Big)\Big(\displaystyle\int\limits_{x=-\infty}^{x=\infty} e^{-x^2} dx\Big)=\thinspace\int\limits_{y=-\infty}^{y=\infty} e^{-y^2}dy \displaystyle\int\limits_{x=-\infty}^{x=\infty} e^{-x^2} dx $$ </p> Answer 53: <p>$$\int_{-\infty}^{\infty} \int_{-\infty}^{\infty} e^{-x^2}e^{-y^2} dx dy$$</p> <p>Because we treat $y$ constant in the first, inner, integral we can pull it out.</p> <p>$$=\int_{-\infty}^{\infty} e^{-y^2} \int_{-\infty}^{\infty} e^{-x^2} dx dy$$</p> <p>Now because $\int_{-\infty}^{\infty} e^{-x^2} dx$ is some constant we can pull it out,</p> <p>$$=\int_{-\infty}^{\infty} e^{-x^2} dx\int_{-\infty}^{\infty} e^{-y^2} dy$$</p> <p>The result we got is generalizable, given $g(x,y)=f(x)h(y)$ we have,</p> <p>$$\int_{a}^{b} \int_{c}^{d} g(x,y) dx dy=\int_{a}^{b} h(y) dy \int_{c}^{d} f(x) dx$$</p> <p>Provided everything we write down exists.</p> Answer 54: <p>Fubini-Tonelli theorem implies that if $\int_{\mathbb{R}} dx \left(\int_{\mathbb{R}} dy |f(x,y)|\right) &lt; \infty$ then $$\int_{\mathbb{R}^2} f(x,y)dxdy = \int_{\mathbb{R}} dx \left(\int_{\mathbb{R}} dy f(x,y)\right).$$ First, to establish the premise, note that $f(x,y) = e^{-(x^2+y^2)},$ is positive so $|f(x,y)| = f(x,y).$ So we just need to establish the finiteness of the RHS. The inner integral is $$ \int_{\mathbb{R}} dy e^{-(x^2+y^2)} = e^{-x^2}\int_{\mathbb{R}} dy e^{-y^2}. 
$$ By comparing the integrand with $e^{-|y|}$ which can be integrated to 2 by elementary means, we see that $$\int_{\mathbb{R}} e^{-y^2}dy = C &lt; \infty$$ so that $$\int_{\mathbb{R}} dx \left(\int_{\mathbb{R}} dy e^{-(x^2+y^2)}\right) = C\int_{\mathbb{R}} dx e^{-x^2} = C^2 &lt; \infty.$$</p> <p>Thus the theorem gives $$ \int_{\mathbb{R}^2} dxdy e^{-(x^2+y^2)} = C^2 $$ where $$ C = \int_{\mathbb{R}} dx e^{-x^2}.$$</p> <p>The two-dimensional integral can then be transformed to polar coordinates by the change of variables theorem, which (by Fubini again) is a doable iterated integral that comes out to $\pi.$</p> Answer 55: <p>The integral $\displaystyle \int_{-\infty}^\infty e^{-y^2} \,dy$ does not change as $x$ goes from $-\infty$ to $\infty$, so you have \begin{align} \int_{-\infty}^\infty e^{-x^2} \, dx \cdot \int_{-\infty}^\infty e^{-y^2} \, dy &amp; = \int_{-\infty}^\infty e^{-x^2} \, dx \cdot \text{constant} \\[10pt] &amp; = \int_{-\infty}^\infty \left( e^{-x^2}\cdot\text{constant} \right)\,dx \\[10pt] &amp; = \int_{-\infty}^\infty \left( e^{-x^2} \int_{-\infty}^\infty e^{-y^2}\,dy \right)\,dx \end{align} The factor $e^{-x^2}$ does not change as $y$ goes from $-\infty$ to $\infty$, so you now have \begin{align} \int_{-\infty}^\infty \left( e^{-x^2} \int_{-\infty}^\infty e^{-y^2}\,dy \right) \, dx &amp;= \int_{-\infty}^\infty \left( \text{constant} \cdot \int_{-\infty}^\infty e^{-y^2}\,dy \right) \,dx \\[10pt] &amp; = \int_{-\infty}^\infty \left( \int_{-\infty}^\infty \text{constant} \cdot e^{-y^2}\,dy \right) \,dx \\[10pt] &amp; = \int_{-\infty}^\infty \left( \int_{-\infty}^\infty e^{-x^2} \cdot e^{-y^2}\,dy \right) \,dx \end{align}</p> Answer 56: <p>Sorry This is not a complete answer but i hope it can help you </p> <p>if we integral of dx then $e^{-y^2}$ assume constant $$\displaystyle\int\limits_{y=-\infty}^{y=\infty}\int\limits_{x=-\infty}^{x=\infty} e^{-x^2} e^{-y^2} dx\thinspace dy=\Big(\int\limits_{y=-\infty}^{y=\infty} e^{-y^2} 
dy\Big)\Big(\displaystyle\int\limits_{x=-\infty}^{x=\infty} e^{-x^2} dx\Big)=\thinspace\int\limits_{y=-\infty}^{y=\infty} e^{-y^2}dy \displaystyle\int\limits_{x=-\infty}^{x=\infty} e^{-x^2} dx $$ </p>
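The answers above all rest on the same fact: the iterated integral of $e^{-x^2}e^{-y^2}$ factors into the product of two one-dimensional integrals, and the double integral comes out to $\pi$. Here is a quick numerical sanity check of that factorization, a sketch only: the truncation limit and grid size below are arbitrary choices, not taken from the answers.

```python
import math

LIM = 6.0   # truncate the infinite domain: exp(-36) is negligible
N = 400     # midpoint-rule grid points per axis (arbitrary choice)
H = 2 * LIM / N
GRID = [-LIM + (i + 0.5) * H for i in range(N)]  # midpoint nodes

def one_dim():
    """Midpoint-rule approximation of the integral of exp(-x^2) over R."""
    return sum(math.exp(-x * x) for x in GRID) * H

def two_dim():
    """Midpoint-rule approximation of the double integral of
    exp(-x^2 - y^2) over R^2, summed directly on the product grid."""
    return sum(math.exp(-x * x - y * y) for x in GRID for y in GRID) * H * H

c = one_dim()
d = two_dim()
print(d, c * c, math.pi)  # all three agree to high accuracy
```

Because the integrand decays rapidly, the midpoint rule is extremely accurate here, so the agreement between `d`, `c * c`, and `math.pi` is far better than the coarse grid might suggest.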
Proving $f&#39;(1)$ exist for $f$ satisfying $f(xy)=xf(y)+yf(x)$
2,091,212
8
https://math.stackexchange.com/questions/2091212/proving-f1-exist-for-f-satisfying-fxy-xfyyfx
<p>The problem is: I have a continuous $f: \mathbb R_+ \to \mathbb R$ which satisfies $f(xy)=xf(y)+yf(x)$ for all $x,y \in \mathbb R_+$. I want to prove that $f$ is differentiable.</p> <p>The problem is equivalent to proving that $f'(1)$ exists. I have no idea how to go on.</p>
[ { "answer_id": 2091337, "score": 18, "text": "<p>Using integrals the job is easy. Since $f$ is continuous the anti-derivative of $f$, say $F$ exists. Now integrate the functional equation with respect to $y$ on interval $[1,2]$ to get $$\\frac{F(2x)-F(x)}{x}=x(F(2)-F(1))+ \\frac{3}{2}f(x)$$ Since $F'$ exists it now follows that $f'(x)$ exists for all $x\\in(0,\\infty)$.</p>\n\n<hr>\n\n<p>The problem is difficult if we stick to differential calculus only. I have managed to make some progress to solve the problem using methods of differential calculus and I present the same below.</p>\n\n<p>Dividing by $xy$ we get the functional equation $$\\frac{f(xy)}{xy} = \\frac{f(x)}{x} + \\frac{f(y)}{y}\\tag{1}$$ and if we put $g(x) = f(x)/x$ then $$g(xy) = g(x) + g(y)\\tag{2}$$ We prove the following theorem:</p>\n\n<blockquote>\n <p><strong>Theorem</strong>: <em>If $g:\\mathbb{R}^{+}\\to\\mathbb{R}$ is continuous at a point $c \\in \\mathbb{R}^{+}$ and satisfies $g(xy) = g(x) + g(y)$ for all $x, y \\in \\mathbb{R}^{+}$ then $g(x)$ is differentiable for all $x \\in \\mathbb{R}^{+}$ and $g'(x) = g'(1)/x$.</em></p>\n</blockquote>\n\n<p>First we prove that $g$ is continuous on $\\mathbb{R}^{+}$. Let $x &gt; 0$ and $x \\neq c$. Then we have $$\\lim_{h \\to 0}g(x + h) = \\lim_{h \\to 0}g(x/c)+g(c + ch/x) = g(x/c)+g(c) = g(x)$$ From the functional equation the following properties are obvious $$g(1) = 0, g(1/x) = -g(x), g(x/y) = g(x) - g(y), g(x^{q}) = qg(x), q\\in \\mathbb{Q}\\tag{3}$$ Next we prove that $x = 1$ is the only solution for $g(x) = 0$. More precisely we prove that</p>\n\n<blockquote>\n <p><em>If a continuous function $g:\\mathbb{R}^{+}\\to\\mathbb{R}$ satisfies functional equation $(2)$ and there is a number $k \\neq 1$ such that $g(k) = 0$ then $g(x) = 0$ for all $x\\in\\mathbb{R}^{+}$.</em></p>\n</blockquote>\n\n<p>Let us suppose that $k \\neq 1$ and $g(k) = 0$ and then by $(3)$ we have $g(k^{q}) = 0$ for all $q \\in \\mathbb{Q}$. 
Now it can be proved with some effort that <em>every positive real number is an accumulation point of set $A = \\{k^{q}: q\\in \\mathbb{Q}\\}$</em> or in other words if we take any positive real number $x$ then every neighborhood of $x$ contains points of the form $k^{q}$. It now follows by sign preserving nature of continuous functions that $g(x) = 0$ for all positive $x$.</p>\n\n<p>So one solution of the functional equation is the constant $0$ function and apart from this trivial solution any other $g$ is such that $g(x) = 0\\Rightarrow x = 1$. From the relation $g(x/y) = g(x) - g(y)$ it then follows that $g$ is one-one function. We further note that <em>any continuous one one function is strictly monotone</em> (this is an easy consequence of intermediate value property of continuous functions) and hence $g$ is strictly monotone.</p>\n\n<p>It follows that if $k \\neq 1$ then $g(k)\\neq 0$ and $g(k^{n}) = ng(k)$ for all integers $n$ so that function $g$ can take arbitrarily large negative and positive values. By intermediate value property it follows that range of $g$ is $\\mathbb{R}$ and hence $g$ is one-one onto function.</p>\n\n<p>Next we prove that if $g'(1)$ exists then $g'(x) = g'(1)/x$. Clearly we have\n\\begin{align}\ng'(x) &amp;= \\lim_{h \\to 0}\\frac{g(x + h) - g(x)}{h}\\notag\\\\\n&amp;= \\lim_{h \\to 0}\\frac{g(1 + h/x)}{h}\\notag\\\\\n&amp;= \\lim_{h \\to 0}\\frac{g(1 + h/x) - g(1)}{h/x}\\cdot\\frac{1}{x}\\notag\\\\\n&amp;= \\frac{g'(1)}{x}\\notag\n\\end{align}\nWe next need to show that $g'(1)$ exists or more explicitly the following limit exists $$\\lim_{x \\to 1}\\frac{g(x)}{x - 1} = g'(1)\\tag{4}$$ I have not been able to prove the existence of this limit in a direct manner. Rather I use the fact that $g$ is invertible and thus there is a function $h:\\mathbb{R}\\to\\mathbb{R}^{+}$ such that $g(h(x)) = x, x\\in \\mathbb{R}$ and $h(g(x)) = x, x \\in \\mathbb{R}^{+}$. 
Moreover because of equation $(2)$ the function $h$ satisfies the functional equation $$h(x+y) = h(x)h(y)\\tag{5}$$ and <a href=\"https://math.stackexchange.com/a/1885860/72031\">it has already been established</a> that under these conditions the limit $$\\lim_{x \\to 0}\\frac{h(x) - 1}{x}\\tag{6}$$ exists and is non-zero unless $h$ is constant. Putting $h(x) - 1 = t$ we see that $x = g(1 + t)$ and as $x \\to 0, t \\to 0$ and thus $$\\lim_{t \\to 0}\\frac{g(1 + t)}{t}$$ exists and thus the limit $g'(1)$ in equation $(4)$ exists.</p>\n\n<p>It is now clear that $f(x) = xg(x)$ is differentiable for all $x &gt; 0$.</p>\n\n<hr>\n\n<p>In the above we have only discussed that if a function $g$ satisfies the equation $(2)$ and is continuous then it is differentiable also. With slightly more effort we can show that such functions actually exist. One way is to integrate the equation $g'(x) = g'(1)/x$ to get $$g(x) = g'(1)\\int_{1}^{x}\\frac{dt}{t}\\tag{7}$$ so that any multiple of the above integral is a solution for $(2)$. There is another way to find an expression for $g$. Let $x\\neq 1$ and then we know that $x^{1/n}\\neq 1$ for all positive integers $n$ and also $x^{1/n}\\to 1$ as $n\\to\\infty $. We then have $$g(x) =ng(x^{1/n})=n(x^{1/n}-1)\\cdot\\frac{g(x^{1/n})}{x^{1/n}-1}$$ Taking limits as $n\\to\\infty$ and noting that fraction on right in above equation tends to $g'(1)$ we can see that $$g(x) = g'(1)\\lim_{n\\to\\infty}n(x^{1/n}-1)\\tag{8}$$ Thus the solution $g$ is always a constant multiple of the limit in above equation. 
Both the integral in $(7)$ and the limit in $(8)$ are equivalent expressions for the logarithm function conventionally denoted by $\\log x$ and the theory of this function can be developed independently using either <a href=\"http://paramanands.blogspot.com/2014/05/theories-of-exponential-and-logarithmic-functions-part-1.html\" rel=\"noreferrer\">the integral</a> or <a href=\"http://paramanands.blogspot.com/2014/05/theories-of-exponential-and-logarithmic-functions-part-2_10.html\" rel=\"noreferrer\">the limit approach</a>. The current answer shows that using functional equation $(2)$ and assumption of continuity we can discover a very useful and important function called logarithm.</p>\n" }, { "answer_id": 2091217, "score": 7, "text": "<p>Divide the functional equation by $xy$ to get</p>\n\n<p>$$\\frac{f(xy)}{xy}=\\frac{f(y)}y+\\frac{f(x)}x$$</p>\n\n<p>Set $g(x)=\\frac{f(x)}x$ so that</p>\n\n<p>$$g(xy)=g(y)+g(x)$$</p>\n\n<p>This is actually well known, and if $g(x)$ is continuous, then the solution is given by</p>\n\n<p>$$g(x)=a\\log(x)$$</p>\n\n<p>which implies</p>\n\n<p>$$f(x)=ax\\log(x)$$</p>\n" }, { "answer_id": 2091337, "score": 18, "text": "<p>Using integrals the job is easy. Since $f$ is continuous the anti-derivarive of $f$, say $F$ exists. Now integrate the functional equation with respect to $y$ on interval $[1,2]$ to get $$\\frac{F(2x)-F(x)}{x}=x(F(2)-F(1))+ \\frac{3}{2}f(x)$$ Since $F'$ exists it now follows that $f'(x) $ exists for all $x\\in(0,\\infty)$.</p>\n\n<hr>\n\n<p>The problem is difficult if we stick to differential calculus only. 
I have managed to make some progress to solve the problem using methods of differential calculus and I present the same below.</p>\n\n<p>Dividing by $xy$ we get the functional equation $$\\frac{f(xy)}{xy} = \\frac{f(x)}{x} + \\frac{f(y)}{y}\\tag{1}$$ and if we put $g(x) = f(x)/x$ then $$g(xy) = g(x) + g(y)\\tag{2}$$ We prove the following theorem:</p>\n\n<blockquote>\n <p><strong>Theorem</strong>: <em>If $g:\\mathbb{R}^{+}\\to\\mathbb{R}$ is continuous at a point $c \\in \\mathbb{R}^{+}$ and satisfies $g(xy) = g(x) + g(y)$ for all $x, y \\in \\mathbb{R}^{+}$ then $g(x)$ is differentiable for all $x \\in \\mathbb{R}^{+}$ and $g'(x) = g'(1)/x$.</em></p>\n</blockquote>\n\n<p>First we prove that $g$ is continuous on $\\mathbb{R}^{+}$. Let $x &gt; 0$ and $x \\neq c$. Then we have $$\\lim_{h \\to 0}g(x + h) = \\lim_{h \\to 0}g(x/c)+g (c + ch/x) = g(x/c)+g(c) = g(x)$$ From the functional equation the following properties are obvious $$g(1) = 0, g(1/x) = -g(x), g(x/y) = g(x) - g(y), g(x^{q}) = qg(x), q\\in \\mathbb{Q}\\tag{3}$$ Next we prove that $x = 1$ is the only solution for $g(x) = 0$. More precisely we prove that</p>\n\n<blockquote>\n <p><em>If a continuous function $g:\\mathbb{R}^{+}\\to\\mathbb{R}$ satisfies functional equation $(2)$ and there is a number $k \\neq 1$ such that $g(k) = 0$ then $g(x) = 0$ for all $x\\in\\mathbb{R}^{+}$.</em></p>\n</blockquote>\n\n<p>Let us suppose that $k \\neq 1$ and $g(k) = 0$ and then by $(3)$ we have $g(k^{q}) = 0$ for all $q \\in \\mathbb{Q}$. Now it can be proved with some effort that <em>every positive real number is an accumulation point of set $A = \\{k^{q}: q\\in \\mathbb{Q}\\}$</em> or in other words if we take any positive real number $x$ then every neighborhood of $x$ contains points of the form $k^{q}$. 
It now follows by sign preserving nature of continuous functions that $g(x) = 0$ for all positive $x$.</p>\n\n<p>So one solution of the functional equation is the constant $0$ function and apart from this trivial solution any other $g$ is such that $g(x) = 0\\Rightarrow x = 1$. From the relation $g(x/y) = g(x) - g(y)$ it then follows that $g$ is one-one function. We further note that <em>any continuous one one function is strictly monotone</em> (this is an easy consequence of intermediate value property of continuous functions) and hence $g$ is strictly monotone.</p>\n\n<p>It follows that if $k \\neq 1$ then $g(k)\\neq 0$ and $g(k^{n}) = ng(k)$ for all integers $n$ so that function $g$ can take arbitrarily large negative and positive values. By intermediate value property it follows that range of $g$ is $\\mathbb{R}$ and hence $g$ is one-one onto function.</p>\n\n<p>Next we prove that if $g'(1)$ exists then $g'(x) = g'(1)/x$. Clearly we have\n\\begin{align}\ng'(x) &amp;= \\lim_{h \\to 0}\\frac{g(x + h) - g(x)}{h}\\notag\\\\\n&amp;= \\lim_{h \\to 0}\\frac{g(1 + h/x)}{h}\\notag\\\\\n&amp;= \\lim_{h \\to 0}\\frac{g(1 + h/x) - g(1)}{h/x}\\cdot\\frac{1}{x}\\notag\\\\\n&amp;= \\frac{g'(1)}{x}\\notag\n\\end{align}\nWe next need to show that $g'(1)$ exists or more explicitly the following limit exists $$\\lim_{x \\to 1}\\frac{g(x)}{x - 1} = g'(1)\\tag{4}$$ I have not been able to prove the existence of this limit in a direct manner. Rather I use the fact that $g$ is invertible and thus there is a function $h:\\mathbb{R}\\to\\mathbb{R}^{+}$ such that $g(h(x)) = x, x\\in \\mathbb{R}$ and $h(g(x)) = x, x \\in \\mathbb{R}^{+}$. Moreover because of equation $(2)$ the function $h$ satisfies the functional equation $$h(x+y) = h(x)h(y)\\tag{5}$$ and <a href=\"https://math.stackexchange.com/a/1885860/72031\">it has already been established</a> that under these conditions the limit $$\\lim_{x \\to 0}\\frac{h(x) - 1}{x}\\tag{6}$$ exists and is non-zero unless $h$ is constant. 
Putting $h(x) - 1 = t$ we see that $x = g(1 + t)$ and as $x \\to 0, t \\to 0$ and thus $$\\lim_{t \\to 0}\\frac{g(1 + t)}{t}$$ exists and thus the limit $g'(1)$ in equation $(4)$ exists.</p>\n\n<p>It is now clear that $f(x) = xg(x)$ is differentiable for all $x &gt; 0$.</p>\n\n<hr>\n\n<p>In the above we have only discussed that if a function $g$ satisfies the equation $(2)$ and is continuous then it is differentiable also. With slightly more effort we can show that such functions actually exist. One way is to integrate the equation $g'(x) = g'(1)/x$ to get $$g(x) = g'(1)\\int_{1}^{x}\\frac{dt}{t}\\tag{7}$$ so that any multiple of the above integral is a solution for $(2)$. There is another way to find an expression for $g$. Let $x\\neq 1$ and then we know that $x^{1/n}\\neq 1$ for all positive integers $n$ and also $x^{1/n}\\to 1$ as $n\\to\\infty $. We then have $$g(x) =ng(x^{1/n})=n(x^{1/n}-1)\\cdot\\frac{g(x^{1/n})}{x^{1/n}-1}$$ Taking limits as $n\\to\\infty$ and noting that fraction on right in above equation tends to $g'(1)$ we can see that $$g(x) = g'(1)\\lim_{n\\to\\infty}n(x^{1/n}-1)\\tag{8}$$ Thus the solution $g$ is always a constant multiple of the limit in above equation. Both the integral in $(7)$ and the limit in $(8)$ are equivalent expressions for the logarithm function conventionally denoted by $\\log x$ and the theory of this function can be developed independently using either <a href=\"http://paramanands.blogspot.com/2014/05/theories-of-exponential-and-logarithmic-functions-part-1.html\" rel=\"noreferrer\">the integral</a> or <a href=\"http://paramanands.blogspot.com/2014/05/theories-of-exponential-and-logarithmic-functions-part-2_10.html\" rel=\"noreferrer\">the limit approach</a>. 
The current answer shows that using functional equation $(2)$ and assumption of continuity we can discover a very useful and important function called logarithm.</p>\n" }, { "answer_id": 2091217, "score": 7, "text": "<p>Divide the functional equation by $xy$ to get</p>\n\n<p>$$\\frac{f(xy)}{xy}=\\frac{f(y)}y+\\frac{f(x)}x$$</p>\n\n<p>Set $g(x)=\\frac{f(x)}x$ so that</p>\n\n<p>$$g(xy)=g(y)+g(x)$$</p>\n\n<p>This is actually well known, and if $g(x)$ is continuous, then the solution is given by</p>\n\n<p>$$g(x)=a\\log(x)$$</p>\n\n<p>which implies</p>\n\n<p>$$f(x)=ax\\log(x)$$</p>\n" }, { "answer_id": 2091337, "score": 18, "text": "<p>Using integrals the job is easy. Since $f$ is continuous the anti-derivarive of $f$, say $F$ exists. Now integrate the functional equation with respect to $y$ on interval $[1,2]$ to get $$\\frac{F(2x)-F(x)}{x}=x(F(2)-F(1))+ \\frac{3}{2}f(x)$$ Since $F'$ exists it now follows that $f'(x) $ exists for all $x\\in(0,\\infty)$.</p>\n\n<hr>\n\n<p>The problem is difficult if we stick to differential calculus only. I have managed to make some progress to solve the problem using methods of differential calculus and I present the same below.</p>\n\n<p>Dividing by $xy$ we get the functional equation $$\\frac{f(xy)}{xy} = \\frac{f(x)}{x} + \\frac{f(y)}{y}\\tag{1}$$ and if we put $g(x) = f(x)/x$ then $$g(xy) = g(x) + g(y)\\tag{2}$$ We prove the following theorem:</p>\n\n<blockquote>\n <p><strong>Theorem</strong>: <em>If $g:\\mathbb{R}^{+}\\to\\mathbb{R}$ is continuous at a point $c \\in \\mathbb{R}^{+}$ and satisfies $g(xy) = g(x) + g(y)$ for all $x, y \\in \\mathbb{R}^{+}$ then $g(x)$ is differentiable for all $x \\in \\mathbb{R}^{+}$ and $g'(x) = g'(1)/x$.</em></p>\n</blockquote>\n\n<p>First we prove that $g$ is continuous on $\\mathbb{R}^{+}$. Let $x &gt; 0$ and $x \\neq c$. 
Then we have $$\\lim_{h \\to 0}g(x + h) = \\lim_{h \\to 0}g(x/c)+g (c + ch/x) = g(x/c)+g(c) = g(x)$$ From the functional equation the following properties are obvious $$g(1) = 0, g(1/x) = -g(x), g(x/y) = g(x) - g(y), g(x^{q}) = qg(x), q\\in \\mathbb{Q}\\tag{3}$$ Next we prove that $x = 1$ is the only solution for $g(x) = 0$. More precisely we prove that</p>\n\n<blockquote>\n <p><em>If a continuous function $g:\\mathbb{R}^{+}\\to\\mathbb{R}$ satisfies functional equation $(2)$ and there is a number $k \\neq 1$ such that $g(k) = 0$ then $g(x) = 0$ for all $x\\in\\mathbb{R}^{+}$.</em></p>\n</blockquote>\n\n<p>Let us suppose that $k \\neq 1$ and $g(k) = 0$ and then by $(3)$ we have $g(k^{q}) = 0$ for all $q \\in \\mathbb{Q}$. Now it can be proved with some effort that <em>every positive real number is an accumulation point of set $A = \\{k^{q}: q\\in \\mathbb{Q}\\}$</em> or in other words if we take any positive real number $x$ then every neighborhood of $x$ contains points of the form $k^{q}$. It now follows by sign preserving nature of continuous functions that $g(x) = 0$ for all positive $x$.</p>\n\n<p>So one solution of the functional equation is the constant $0$ function and apart from this trivial solution any other $g$ is such that $g(x) = 0\\Rightarrow x = 1$. From the relation $g(x/y) = g(x) - g(y)$ it then follows that $g$ is one-one function. We further note that <em>any continuous one one function is strictly monotone</em> (this is an easy consequence of intermediate value property of continuous functions) and hence $g$ is strictly monotone.</p>\n\n<p>It follows that if $k \\neq 1$ then $g(k)\\neq 0$ and $g(k^{n}) = ng(k)$ for all integers $n$ so that function $g$ can take arbitrarily large negative and positive values. By intermediate value property it follows that range of $g$ is $\\mathbb{R}$ and hence $g$ is one-one onto function.</p>\n\n<p>Next we prove that if $g'(1)$ exists then $g'(x) = g'(1)/x$. 
Clearly we have\n\\begin{align}\ng'(x) &amp;= \\lim_{h \\to 0}\\frac{g(x + h) - g(x)}{h}\\notag\\\\\n&amp;= \\lim_{h \\to 0}\\frac{g(1 + h/x)}{h}\\notag\\\\\n&amp;= \\lim_{h \\to 0}\\frac{g(1 + h/x) - g(1)}{h/x}\\cdot\\frac{1}{x}\\notag\\\\\n&amp;= \\frac{g'(1)}{x}\\notag\n\\end{align}\nWe next need to show that $g'(1)$ exists or more explicitly the following limit exists $$\\lim_{x \\to 1}\\frac{g(x)}{x - 1} = g'(1)\\tag{4}$$ I have not been able to prove the existence of this limit in a direct manner. Rather I use the fact that $g$ is invertible and thus there is a function $h:\\mathbb{R}\\to\\mathbb{R}^{+}$ such that $g(h(x)) = x, x\\in \\mathbb{R}$ and $h(g(x)) = x, x \\in \\mathbb{R}^{+}$. Moreover because of equation $(2)$ the function $h$ satisfies the functional equation $$h(x+y) = h(x)h(y)\\tag{5}$$ and <a href=\"https://math.stackexchange.com/a/1885860/72031\">it has already been established</a> that under these conditions the limit $$\\lim_{x \\to 0}\\frac{h(x) - 1}{x}\\tag{6}$$ exists and is non-zero unless $h$ is constant. Putting $h(x) - 1 = t$ we see that $x = g(1 + t)$ and as $x \\to 0, t \\to 0$ and thus $$\\lim_{t \\to 0}\\frac{g(1 + t)}{t}$$ exists and thus the limit $g'(1)$ in equation $(4)$ exists.</p>\n\n<p>It is now clear that $f(x) = xg(x)$ is differentiable for all $x &gt; 0$.</p>\n\n<hr>\n\n<p>In the above we have only discussed that if a function $g$ satisfies the equation $(2)$ and is continuous then it is differentiable also. With slightly more effort we can show that such functions actually exist. One way is to integrate the equation $g'(x) = g'(1)/x$ to get $$g(x) = g'(1)\\int_{1}^{x}\\frac{dt}{t}\\tag{7}$$ so that any multiple of the above integral is a solution for $(2)$. There is another way to find an expression for $g$. Let $x\\neq 1$ and then we know that $x^{1/n}\\neq 1$ for all positive integers $n$ and also $x^{1/n}\\to 1$ as $n\\to\\infty $. 
We then have $$g(x) =ng(x^{1/n})=n(x^{1/n}-1)\\cdot\\frac{g(x^{1/n})}{x^{1/n}-1}$$ Taking limits as $n\\to\\infty$ and noting that fraction on right in above equation tends to $g'(1)$ we can see that $$g(x) = g'(1)\\lim_{n\\to\\infty}n(x^{1/n}-1)\\tag{8}$$ Thus the solution $g$ is always a constant multiple of the limit in above equation. Both the integral in $(7)$ and the limit in $(8)$ are equivalent expressions for the logarithm function conventionally denoted by $\\log x$ and the theory of this function can be developed independently using either <a href=\"http://paramanands.blogspot.com/2014/05/theories-of-exponential-and-logarithmic-functions-part-1.html\" rel=\"noreferrer\">the integral</a> or <a href=\"http://paramanands.blogspot.com/2014/05/theories-of-exponential-and-logarithmic-functions-part-2_10.html\" rel=\"noreferrer\">the limit approach</a>. The current answer shows that using functional equation $(2)$ and assumption of continuity we can discover a very useful and important function called logarithm.</p>\n" }, { "answer_id": 2091217, "score": 7, "text": "<p>Divide the functional equation by $xy$ to get</p>\n\n<p>$$\\frac{f(xy)}{xy}=\\frac{f(y)}y+\\frac{f(x)}x$$</p>\n\n<p>Set $g(x)=\\frac{f(x)}x$ so that</p>\n\n<p>$$g(xy)=g(y)+g(x)$$</p>\n\n<p>This is actually well known, and if $g(x)$ is continuous, then the solution is given by</p>\n\n<p>$$g(x)=a\\log(x)$$</p>\n\n<p>which implies</p>\n\n<p>$$f(x)=ax\\log(x)$$</p>\n" }, { "answer_id": 2091337, "score": 18, "text": "<p>Using integrals the job is easy. Since $f$ is continuous the anti-derivarive of $f$, say $F$ exists. Now integrate the functional equation with respect to $y$ on interval $[1,2]$ to get $$\\frac{F(2x)-F(x)}{x}=x(F(2)-F(1))+ \\frac{3}{2}f(x)$$ Since $F'$ exists it now follows that $f'(x) $ exists for all $x\\in(0,\\infty)$.</p>\n\n<hr>\n\n<p>The problem is difficult if we stick to differential calculus only. 
I have managed to make some progress to solve the problem using methods of differential calculus and I present the same below.</p>\n\n<p>Dividing by $xy$ we get the functional equation $$\\frac{f(xy)}{xy} = \\frac{f(x)}{x} + \\frac{f(y)}{y}\\tag{1}$$ and if we put $g(x) = f(x)/x$ then $$g(xy) = g(x) + g(y)\\tag{2}$$ We prove the following theorem:</p>\n\n<blockquote>\n <p><strong>Theorem</strong>: <em>If $g:\\mathbb{R}^{+}\\to\\mathbb{R}$ is continuous at a point $c \\in \\mathbb{R}^{+}$ and satisfies $g(xy) = g(x) + g(y)$ for all $x, y \\in \\mathbb{R}^{+}$ then $g(x)$ is differentiable for all $x \\in \\mathbb{R}^{+}$ and $g'(x) = g'(1)/x$.</em></p>\n</blockquote>\n\n<p>First we prove that $g$ is continuous on $\\mathbb{R}^{+}$. Let $x &gt; 0$ and $x \\neq c$. Then we have $$\\lim_{h \\to 0}g(x + h) = \\lim_{h \\to 0}g(x/c)+g (c + ch/x) = g(x/c)+g(c) = g(x)$$ From the functional equation the following properties are obvious $$g(1) = 0, g(1/x) = -g(x), g(x/y) = g(x) - g(y), g(x^{q}) = qg(x), q\\in \\mathbb{Q}\\tag{3}$$ Next we prove that $x = 1$ is the only solution for $g(x) = 0$. More precisely we prove that</p>\n\n<blockquote>\n <p><em>If a continuous function $g:\\mathbb{R}^{+}\\to\\mathbb{R}$ satisfies functional equation $(2)$ and there is a number $k \\neq 1$ such that $g(k) = 0$ then $g(x) = 0$ for all $x\\in\\mathbb{R}^{+}$.</em></p>\n</blockquote>\n\n<p>Let us suppose that $k \\neq 1$ and $g(k) = 0$ and then by $(3)$ we have $g(k^{q}) = 0$ for all $q \\in \\mathbb{Q}$. Now it can be proved with some effort that <em>every positive real number is an accumulation point of set $A = \\{k^{q}: q\\in \\mathbb{Q}\\}$</em> or in other words if we take any positive real number $x$ then every neighborhood of $x$ contains points of the form $k^{q}$. 
It now follows by the sign-preserving nature of continuous functions that $g(x) = 0$ for all positive $x$.</p>\n\n<p>So one solution of the functional equation is the constant $0$ function and apart from this trivial solution any other $g$ is such that $g(x) = 0\\Rightarrow x = 1$. From the relation $g(x/y) = g(x) - g(y)$ it then follows that $g$ is a one-one function. We further note that <em>any continuous one-one function is strictly monotone</em> (this is an easy consequence of the intermediate value property of continuous functions) and hence $g$ is strictly monotone.</p>\n\n<p>It follows that if $k \\neq 1$ then $g(k)\\neq 0$ and $g(k^{n}) = ng(k)$ for all integers $n$, so that the function $g$ can take arbitrarily large negative and positive values. By the intermediate value property it follows that the range of $g$ is $\\mathbb{R}$ and hence $g$ is a one-one onto function.</p>\n\n<p>Next we prove that if $g'(1)$ exists then $g'(x) = g'(1)/x$. Clearly we have\n\\begin{align}\ng'(x) &amp;= \\lim_{h \\to 0}\\frac{g(x + h) - g(x)}{h}\\notag\\\\\n&amp;= \\lim_{h \\to 0}\\frac{g(1 + h/x)}{h}\\notag\\\\\n&amp;= \\lim_{h \\to 0}\\frac{g(1 + h/x) - g(1)}{h/x}\\cdot\\frac{1}{x}\\notag\\\\\n&amp;= \\frac{g'(1)}{x}\\notag\n\\end{align}\nWe next need to show that $g'(1)$ exists, or more explicitly that the following limit exists: $$\\lim_{x \\to 1}\\frac{g(x)}{x - 1} = g'(1)\\tag{4}$$ I have not been able to prove the existence of this limit in a direct manner. Rather I use the fact that $g$ is invertible and thus there is a function $h:\\mathbb{R}\\to\\mathbb{R}^{+}$ such that $g(h(x)) = x, x\\in \\mathbb{R}$ and $h(g(x)) = x, x \\in \\mathbb{R}^{+}$. Moreover, because of equation $(2)$, the function $h$ satisfies the functional equation $$h(x+y) = h(x)h(y)\\tag{5}$$ and <a href=\"https://math.stackexchange.com/a/1885860/72031\">it has already been established</a> that under these conditions the limit $$\\lim_{x \\to 0}\\frac{h(x) - 1}{x}\\tag{6}$$ exists and is non-zero unless $h$ is constant. 
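(Aside, not part of the original argument: the relation $g'(x) = g'(1)/x$ can be checked numerically for the concrete solution $g = \log$, for which $g'(1) = 1$.)

```python
import math

# Numerical check (illustrative only) of g'(x) = g'(1)/x for the concrete
# continuous solution g = log of g(xy) = g(x) + g(y); here g'(1) = 1.
g = math.log
h = 1e-6
for x in (0.5, 2.0, 10.0):
    dq = (g(x + h) - g(x)) / h          # difference quotient approximating g'(x)
    assert abs(dq - 1.0 / x) < 1e-4, (x, dq)
```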
Putting $h(x) - 1 = t$ we see that $x = g(1 + t)$ and as $x \\to 0, t \\to 0$. The limit $(6)$ thus equals $\\lim_{t \\to 0}t/g(1 + t)$, and since it is non-zero (the invertible function $h$ is not constant), its reciprocal $$\\lim_{t \\to 0}\\frac{g(1 + t)}{t}$$ also exists, i.e. the limit $g'(1)$ in equation $(4)$ exists.</p>\n\n<p>It is now clear that $f(x) = xg(x)$ is differentiable for all $x &gt; 0$.</p>\n\n<hr>\n\n<p>In the above we have only shown that if a function $g$ satisfies the equation $(2)$ and is continuous then it is also differentiable. With slightly more effort we can show that such functions actually exist. One way is to integrate the equation $g'(x) = g'(1)/x$ to get $$g(x) = g'(1)\\int_{1}^{x}\\frac{dt}{t}\\tag{7}$$ so that any multiple of the above integral is a solution of $(2)$. There is another way to find an expression for $g$. Let $x\\neq 1$; then we know that $x^{1/n}\\neq 1$ for all positive integers $n$ and also $x^{1/n}\\to 1$ as $n\\to\\infty$. 
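(Illustrative aside, not from the original answer: the limit in $(8)$ can be observed numerically. For an arbitrarily chosen $x$, the sequence $n(x^{1/n}-1)$ approaches $\log x$, which is the value of the integral in $(7)$ when $g'(1)=1$.)

```python
import math

# Numerical illustration (x chosen arbitrarily) that the limit (8) agrees
# with the integral (7): n*(x**(1/n) - 1) tends to log(x) as n grows.
x = 7.0
approx = 0.0
for n in (10, 1000, 10**6):
    approx = n * (x ** (1.0 / n) - 1.0)
    print(n, approx)   # approaches log(7) = 1.9459... from above
```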
The current answer shows that using functional equation $(2)$ and assumption of continuity we can discover a very useful and important function called logarithm.</p>\n" }, { "answer_id": 2091217, "score": 7, "text": "<p>Divide the functional equation by $xy$ to get</p>\n\n<p>$$\\frac{f(xy)}{xy}=\\frac{f(y)}y+\\frac{f(x)}x$$</p>\n\n<p>Set $g(x)=\\frac{f(x)}x$ so that</p>\n\n<p>$$g(xy)=g(y)+g(x)$$</p>\n\n<p>This is actually well known, and if $g(x)$ is continuous, then the solution is given by</p>\n\n<p>$$g(x)=a\\log(x)$$</p>\n\n<p>which implies</p>\n\n<p>$$f(x)=ax\\log(x)$$</p>\n" }, { "answer_id": 2091337, "score": 18, "text": "<p>Using integrals the job is easy. Since $f$ is continuous the anti-derivarive of $f$, say $F$ exists. Now integrate the functional equation with respect to $y$ on interval $[1,2]$ to get $$\\frac{F(2x)-F(x)}{x}=x(F(2)-F(1))+ \\frac{3}{2}f(x)$$ Since $F'$ exists it now follows that $f'(x) $ exists for all $x\\in(0,\\infty)$.</p>\n\n<hr>\n\n<p>The problem is difficult if we stick to differential calculus only. I have managed to make some progress to solve the problem using methods of differential calculus and I present the same below.</p>\n\n<p>Dividing by $xy$ we get the functional equation $$\\frac{f(xy)}{xy} = \\frac{f(x)}{x} + \\frac{f(y)}{y}\\tag{1}$$ and if we put $g(x) = f(x)/x$ then $$g(xy) = g(x) + g(y)\\tag{2}$$ We prove the following theorem:</p>\n\n<blockquote>\n <p><strong>Theorem</strong>: <em>If $g:\\mathbb{R}^{+}\\to\\mathbb{R}$ is continuous at a point $c \\in \\mathbb{R}^{+}$ and satisfies $g(xy) = g(x) + g(y)$ for all $x, y \\in \\mathbb{R}^{+}$ then $g(x)$ is differentiable for all $x \\in \\mathbb{R}^{+}$ and $g'(x) = g'(1)/x$.</em></p>\n</blockquote>\n\n<p>First we prove that $g$ is continuous on $\\mathbb{R}^{+}$. Let $x &gt; 0$ and $x \\neq c$. 
Then we have $$\\lim_{h \\to 0}g(x + h) = \\lim_{h \\to 0}g(x/c)+g (c + ch/x) = g(x/c)+g(c) = g(x)$$ From the functional equation the following properties are obvious $$g(1) = 0, g(1/x) = -g(x), g(x/y) = g(x) - g(y), g(x^{q}) = qg(x), q\\in \\mathbb{Q}\\tag{3}$$ Next we prove that $x = 1$ is the only solution for $g(x) = 0$. More precisely we prove that</p>\n\n<blockquote>\n <p><em>If a continuous function $g:\\mathbb{R}^{+}\\to\\mathbb{R}$ satisfies functional equation $(2)$ and there is a number $k \\neq 1$ such that $g(k) = 0$ then $g(x) = 0$ for all $x\\in\\mathbb{R}^{+}$.</em></p>\n</blockquote>\n\n<p>Let us suppose that $k \\neq 1$ and $g(k) = 0$ and then by $(3)$ we have $g(k^{q}) = 0$ for all $q \\in \\mathbb{Q}$. Now it can be proved with some effort that <em>every positive real number is an accumulation point of set $A = \\{k^{q}: q\\in \\mathbb{Q}\\}$</em> or in other words if we take any positive real number $x$ then every neighborhood of $x$ contains points of the form $k^{q}$. It now follows by sign preserving nature of continuous functions that $g(x) = 0$ for all positive $x$.</p>\n\n<p>So one solution of the functional equation is the constant $0$ function and apart from this trivial solution any other $g$ is such that $g(x) = 0\\Rightarrow x = 1$. From the relation $g(x/y) = g(x) - g(y)$ it then follows that $g$ is one-one function. We further note that <em>any continuous one one function is strictly monotone</em> (this is an easy consequence of intermediate value property of continuous functions) and hence $g$ is strictly monotone.</p>\n\n<p>It follows that if $k \\neq 1$ then $g(k)\\neq 0$ and $g(k^{n}) = ng(k)$ for all integers $n$ so that function $g$ can take arbitrarily large negative and positive values. By intermediate value property it follows that range of $g$ is $\\mathbb{R}$ and hence $g$ is one-one onto function.</p>\n\n<p>Next we prove that if $g'(1)$ exists then $g'(x) = g'(1)/x$. 
Clearly we have\n\\begin{align}\ng'(x) &amp;= \\lim_{h \\to 0}\\frac{g(x + h) - g(x)}{h}\\notag\\\\\n&amp;= \\lim_{h \\to 0}\\frac{g(1 + h/x)}{h}\\notag\\\\\n&amp;= \\lim_{h \\to 0}\\frac{g(1 + h/x) - g(1)}{h/x}\\cdot\\frac{1}{x}\\notag\\\\\n&amp;= \\frac{g'(1)}{x}\\notag\n\\end{align}\nWe next need to show that $g'(1)$ exists or more explicitly the following limit exists $$\\lim_{x \\to 1}\\frac{g(x)}{x - 1} = g'(1)\\tag{4}$$ I have not been able to prove the existence of this limit in a direct manner. Rather I use the fact that $g$ is invertible and thus there is a function $h:\\mathbb{R}\\to\\mathbb{R}^{+}$ such that $g(h(x)) = x, x\\in \\mathbb{R}$ and $h(g(x)) = x, x \\in \\mathbb{R}^{+}$. Moreover because of equation $(2)$ the function $h$ satisfies the functional equation $$h(x+y) = h(x)h(y)\\tag{5}$$ and <a href=\"https://math.stackexchange.com/a/1885860/72031\">it has already been established</a> that under these conditions the limit $$\\lim_{x \\to 0}\\frac{h(x) - 1}{x}\\tag{6}$$ exists and is non-zero unless $h$ is constant. Putting $h(x) - 1 = t$ we see that $x = g(1 + t)$ and as $x \\to 0, t \\to 0$ and thus $$\\lim_{t \\to 0}\\frac{g(1 + t)}{t}$$ exists and thus the limit $g'(1)$ in equation $(4)$ exists.</p>\n\n<p>It is now clear that $f(x) = xg(x)$ is differentiable for all $x &gt; 0$.</p>\n\n<hr>\n\n<p>In the above we have only discussed that if a function $g$ satisfies the equation $(2)$ and is continuous then it is differentiable also. With slightly more effort we can show that such functions actually exist. One way is to integrate the equation $g'(x) = g'(1)/x$ to get $$g(x) = g'(1)\\int_{1}^{x}\\frac{dt}{t}\\tag{7}$$ so that any multiple of the above integral is a solution for $(2)$. There is another way to find an expression for $g$. Let $x\\neq 1$ and then we know that $x^{1/n}\\neq 1$ for all positive integers $n$ and also $x^{1/n}\\to 1$ as $n\\to\\infty $. 
We then have $$g(x) =ng(x^{1/n})=n(x^{1/n}-1)\\cdot\\frac{g(x^{1/n})}{x^{1/n}-1}$$ Taking limits as $n\\to\\infty$ and noting that fraction on right in above equation tends to $g'(1)$ we can see that $$g(x) = g'(1)\\lim_{n\\to\\infty}n(x^{1/n}-1)\\tag{8}$$ Thus the solution $g$ is always a constant multiple of the limit in above equation. Both the integral in $(7)$ and the limit in $(8)$ are equivalent expressions for the logarithm function conventionally denoted by $\\log x$ and the theory of this function can be developed independently using either <a href=\"http://paramanands.blogspot.com/2014/05/theories-of-exponential-and-logarithmic-functions-part-1.html\" rel=\"noreferrer\">the integral</a> or <a href=\"http://paramanands.blogspot.com/2014/05/theories-of-exponential-and-logarithmic-functions-part-2_10.html\" rel=\"noreferrer\">the limit approach</a>. The current answer shows that using functional equation $(2)$ and assumption of continuity we can discover a very useful and important function called logarithm.</p>\n" }, { "answer_id": 2091217, "score": 7, "text": "<p>Divide the functional equation by $xy$ to get</p>\n\n<p>$$\\frac{f(xy)}{xy}=\\frac{f(y)}y+\\frac{f(x)}x$$</p>\n\n<p>Set $g(x)=\\frac{f(x)}x$ so that</p>\n\n<p>$$g(xy)=g(y)+g(x)$$</p>\n\n<p>This is actually well known, and if $g(x)$ is continuous, then the solution is given by</p>\n\n<p>$$g(x)=a\\log(x)$$</p>\n\n<p>which implies</p>\n\n<p>$$f(x)=ax\\log(x)$$</p>\n" }, { "answer_id": 2091337, "score": 18, "text": "<p>Using integrals the job is easy. Since $f$ is continuous the anti-derivarive of $f$, say $F$ exists. Now integrate the functional equation with respect to $y$ on interval $[1,2]$ to get $$\\frac{F(2x)-F(x)}{x}=x(F(2)-F(1))+ \\frac{3}{2}f(x)$$ Since $F'$ exists it now follows that $f'(x) $ exists for all $x\\in(0,\\infty)$.</p>\n\n<hr>\n\n<p>The problem is difficult if we stick to differential calculus only. 
I have managed to make some progress to solve the problem using methods of differential calculus and I present the same below.</p>\n\n<p>Dividing by $xy$ we get the functional equation $$\\frac{f(xy)}{xy} = \\frac{f(x)}{x} + \\frac{f(y)}{y}\\tag{1}$$ and if we put $g(x) = f(x)/x$ then $$g(xy) = g(x) + g(y)\\tag{2}$$ We prove the following theorem:</p>\n\n<blockquote>\n <p><strong>Theorem</strong>: <em>If $g:\\mathbb{R}^{+}\\to\\mathbb{R}$ is continuous at a point $c \\in \\mathbb{R}^{+}$ and satisfies $g(xy) = g(x) + g(y)$ for all $x, y \\in \\mathbb{R}^{+}$ then $g(x)$ is differentiable for all $x \\in \\mathbb{R}^{+}$ and $g'(x) = g'(1)/x$.</em></p>\n</blockquote>\n\n<p>First we prove that $g$ is continuous on $\\mathbb{R}^{+}$. Let $x &gt; 0$ and $x \\neq c$. Then we have $$\\lim_{h \\to 0}g(x + h) = \\lim_{h \\to 0}g(x/c)+g (c + ch/x) = g(x/c)+g(c) = g(x)$$ From the functional equation the following properties are obvious $$g(1) = 0, g(1/x) = -g(x), g(x/y) = g(x) - g(y), g(x^{q}) = qg(x), q\\in \\mathbb{Q}\\tag{3}$$ Next we prove that $x = 1$ is the only solution for $g(x) = 0$. More precisely we prove that</p>\n\n<blockquote>\n <p><em>If a continuous function $g:\\mathbb{R}^{+}\\to\\mathbb{R}$ satisfies functional equation $(2)$ and there is a number $k \\neq 1$ such that $g(k) = 0$ then $g(x) = 0$ for all $x\\in\\mathbb{R}^{+}$.</em></p>\n</blockquote>\n\n<p>Let us suppose that $k \\neq 1$ and $g(k) = 0$ and then by $(3)$ we have $g(k^{q}) = 0$ for all $q \\in \\mathbb{Q}$. Now it can be proved with some effort that <em>every positive real number is an accumulation point of set $A = \\{k^{q}: q\\in \\mathbb{Q}\\}$</em> or in other words if we take any positive real number $x$ then every neighborhood of $x$ contains points of the form $k^{q}$. 
It now follows by sign preserving nature of continuous functions that $g(x) = 0$ for all positive $x$.</p>\n\n<p>So one solution of the functional equation is the constant $0$ function and apart from this trivial solution any other $g$ is such that $g(x) = 0\\Rightarrow x = 1$. From the relation $g(x/y) = g(x) - g(y)$ it then follows that $g$ is one-one function. We further note that <em>any continuous one one function is strictly monotone</em> (this is an easy consequence of intermediate value property of continuous functions) and hence $g$ is strictly monotone.</p>\n\n<p>It follows that if $k \\neq 1$ then $g(k)\\neq 0$ and $g(k^{n}) = ng(k)$ for all integers $n$ so that function $g$ can take arbitrarily large negative and positive values. By intermediate value property it follows that range of $g$ is $\\mathbb{R}$ and hence $g$ is one-one onto function.</p>\n\n<p>Next we prove that if $g'(1)$ exists then $g'(x) = g'(1)/x$. Clearly we have\n\\begin{align}\ng'(x) &amp;= \\lim_{h \\to 0}\\frac{g(x + h) - g(x)}{h}\\notag\\\\\n&amp;= \\lim_{h \\to 0}\\frac{g(1 + h/x)}{h}\\notag\\\\\n&amp;= \\lim_{h \\to 0}\\frac{g(1 + h/x) - g(1)}{h/x}\\cdot\\frac{1}{x}\\notag\\\\\n&amp;= \\frac{g'(1)}{x}\\notag\n\\end{align}\nWe next need to show that $g'(1)$ exists or more explicitly the following limit exists $$\\lim_{x \\to 1}\\frac{g(x)}{x - 1} = g'(1)\\tag{4}$$ I have not been able to prove the existence of this limit in a direct manner. Rather I use the fact that $g$ is invertible and thus there is a function $h:\\mathbb{R}\\to\\mathbb{R}^{+}$ such that $g(h(x)) = x, x\\in \\mathbb{R}$ and $h(g(x)) = x, x \\in \\mathbb{R}^{+}$. Moreover because of equation $(2)$ the function $h$ satisfies the functional equation $$h(x+y) = h(x)h(y)\\tag{5}$$ and <a href=\"https://math.stackexchange.com/a/1885860/72031\">it has already been established</a> that under these conditions the limit $$\\lim_{x \\to 0}\\frac{h(x) - 1}{x}\\tag{6}$$ exists and is non-zero unless $h$ is constant. 
Putting $h(x) - 1 = t$ we see that $x = g(1 + t)$ and as $x \\to 0, t \\to 0$ and thus $$\\lim_{t \\to 0}\\frac{g(1 + t)}{t}$$ exists and thus the limit $g'(1)$ in equation $(4)$ exists.</p>\n\n<p>It is now clear that $f(x) = xg(x)$ is differentiable for all $x &gt; 0$.</p>\n\n<hr>\n\n<p>In the above we have only discussed that if a function $g$ satisfies the equation $(2)$ and is continuous then it is differentiable also. With slightly more effort we can show that such functions actually exist. One way is to integrate the equation $g'(x) = g'(1)/x$ to get $$g(x) = g'(1)\\int_{1}^{x}\\frac{dt}{t}\\tag{7}$$ so that any multiple of the above integral is a solution for $(2)$. There is another way to find an expression for $g$. Let $x\\neq 1$ and then we know that $x^{1/n}\\neq 1$ for all positive integers $n$ and also $x^{1/n}\\to 1$ as $n\\to\\infty $. We then have $$g(x) =ng(x^{1/n})=n(x^{1/n}-1)\\cdot\\frac{g(x^{1/n})}{x^{1/n}-1}$$ Taking limits as $n\\to\\infty$ and noting that fraction on right in above equation tends to $g'(1)$ we can see that $$g(x) = g'(1)\\lim_{n\\to\\infty}n(x^{1/n}-1)\\tag{8}$$ Thus the solution $g$ is always a constant multiple of the limit in above equation. Both the integral in $(7)$ and the limit in $(8)$ are equivalent expressions for the logarithm function conventionally denoted by $\\log x$ and the theory of this function can be developed independently using either <a href=\"http://paramanands.blogspot.com/2014/05/theories-of-exponential-and-logarithmic-functions-part-1.html\" rel=\"noreferrer\">the integral</a> or <a href=\"http://paramanands.blogspot.com/2014/05/theories-of-exponential-and-logarithmic-functions-part-2_10.html\" rel=\"noreferrer\">the limit approach</a>. 
The current answer shows that using functional equation $(2)$ and assumption of continuity we can discover a very useful and important function called logarithm.</p>\n" }, { "answer_id": 2091217, "score": 7, "text": "<p>Divide the functional equation by $xy$ to get</p>\n\n<p>$$\\frac{f(xy)}{xy}=\\frac{f(y)}y+\\frac{f(x)}x$$</p>\n\n<p>Set $g(x)=\\frac{f(x)}x$ so that</p>\n\n<p>$$g(xy)=g(y)+g(x)$$</p>\n\n<p>This is actually well known, and if $g(x)$ is continuous, then the solution is given by</p>\n\n<p>$$g(x)=a\\log(x)$$</p>\n\n<p>which implies</p>\n\n<p>$$f(x)=ax\\log(x)$$</p>\n" }, { "answer_id": 2091337, "score": 18, "text": "<p>Using integrals the job is easy. Since $f$ is continuous the anti-derivarive of $f$, say $F$ exists. Now integrate the functional equation with respect to $y$ on interval $[1,2]$ to get $$\\frac{F(2x)-F(x)}{x}=x(F(2)-F(1))+ \\frac{3}{2}f(x)$$ Since $F'$ exists it now follows that $f'(x) $ exists for all $x\\in(0,\\infty)$.</p>\n\n<hr>\n\n<p>The problem is difficult if we stick to differential calculus only. I have managed to make some progress to solve the problem using methods of differential calculus and I present the same below.</p>\n\n<p>Dividing by $xy$ we get the functional equation $$\\frac{f(xy)}{xy} = \\frac{f(x)}{x} + \\frac{f(y)}{y}\\tag{1}$$ and if we put $g(x) = f(x)/x$ then $$g(xy) = g(x) + g(y)\\tag{2}$$ We prove the following theorem:</p>\n\n<blockquote>\n <p><strong>Theorem</strong>: <em>If $g:\\mathbb{R}^{+}\\to\\mathbb{R}$ is continuous at a point $c \\in \\mathbb{R}^{+}$ and satisfies $g(xy) = g(x) + g(y)$ for all $x, y \\in \\mathbb{R}^{+}$ then $g(x)$ is differentiable for all $x \\in \\mathbb{R}^{+}$ and $g'(x) = g'(1)/x$.</em></p>\n</blockquote>\n\n<p>First we prove that $g$ is continuous on $\\mathbb{R}^{+}$. Let $x &gt; 0$ and $x \\neq c$. 
Then we have $$\\lim_{h \\to 0}g(x + h) = \\lim_{h \\to 0}g(x/c)+g (c + ch/x) = g(x/c)+g(c) = g(x)$$ From the functional equation the following properties are obvious $$g(1) = 0, g(1/x) = -g(x), g(x/y) = g(x) - g(y), g(x^{q}) = qg(x), q\\in \\mathbb{Q}\\tag{3}$$ Next we prove that $x = 1$ is the only solution for $g(x) = 0$. More precisely we prove that</p>\n\n<blockquote>\n <p><em>If a continuous function $g:\\mathbb{R}^{+}\\to\\mathbb{R}$ satisfies functional equation $(2)$ and there is a number $k \\neq 1$ such that $g(k) = 0$ then $g(x) = 0$ for all $x\\in\\mathbb{R}^{+}$.</em></p>\n</blockquote>\n\n<p>Let us suppose that $k \\neq 1$ and $g(k) = 0$ and then by $(3)$ we have $g(k^{q}) = 0$ for all $q \\in \\mathbb{Q}$. Now it can be proved with some effort that <em>every positive real number is an accumulation point of set $A = \\{k^{q}: q\\in \\mathbb{Q}\\}$</em> or in other words if we take any positive real number $x$ then every neighborhood of $x$ contains points of the form $k^{q}$. It now follows by sign preserving nature of continuous functions that $g(x) = 0$ for all positive $x$.</p>\n\n<p>So one solution of the functional equation is the constant $0$ function and apart from this trivial solution any other $g$ is such that $g(x) = 0\\Rightarrow x = 1$. From the relation $g(x/y) = g(x) - g(y)$ it then follows that $g$ is one-one function. We further note that <em>any continuous one one function is strictly monotone</em> (this is an easy consequence of intermediate value property of continuous functions) and hence $g$ is strictly monotone.</p>\n\n<p>It follows that if $k \\neq 1$ then $g(k)\\neq 0$ and $g(k^{n}) = ng(k)$ for all integers $n$ so that function $g$ can take arbitrarily large negative and positive values. By intermediate value property it follows that range of $g$ is $\\mathbb{R}$ and hence $g$ is one-one onto function.</p>\n\n<p>Next we prove that if $g'(1)$ exists then $g'(x) = g'(1)/x$. 
Clearly we have\n\\begin{align}\ng'(x) &amp;= \\lim_{h \\to 0}\\frac{g(x + h) - g(x)}{h}\\notag\\\\\n&amp;= \\lim_{h \\to 0}\\frac{g(1 + h/x)}{h}\\notag\\\\\n&amp;= \\lim_{h \\to 0}\\frac{g(1 + h/x) - g(1)}{h/x}\\cdot\\frac{1}{x}\\notag\\\\\n&amp;= \\frac{g'(1)}{x}\\notag\n\\end{align}\nWe next need to show that $g'(1)$ exists or more explicitly the following limit exists $$\\lim_{x \\to 1}\\frac{g(x)}{x - 1} = g'(1)\\tag{4}$$ I have not been able to prove the existence of this limit in a direct manner. Rather I use the fact that $g$ is invertible and thus there is a function $h:\\mathbb{R}\\to\\mathbb{R}^{+}$ such that $g(h(x)) = x, x\\in \\mathbb{R}$ and $h(g(x)) = x, x \\in \\mathbb{R}^{+}$. Moreover because of equation $(2)$ the function $h$ satisfies the functional equation $$h(x+y) = h(x)h(y)\\tag{5}$$ and <a href=\"https://math.stackexchange.com/a/1885860/72031\">it has already been established</a> that under these conditions the limit $$\\lim_{x \\to 0}\\frac{h(x) - 1}{x}\\tag{6}$$ exists and is non-zero unless $h$ is constant. Putting $h(x) - 1 = t$ we see that $x = g(1 + t)$ and as $x \\to 0, t \\to 0$ and thus $$\\lim_{t \\to 0}\\frac{g(1 + t)}{t}$$ exists and thus the limit $g'(1)$ in equation $(4)$ exists.</p>\n\n<p>It is now clear that $f(x) = xg(x)$ is differentiable for all $x &gt; 0$.</p>\n\n<hr>\n\n<p>In the above we have only discussed that if a function $g$ satisfies the equation $(2)$ and is continuous then it is differentiable also. With slightly more effort we can show that such functions actually exist. One way is to integrate the equation $g'(x) = g'(1)/x$ to get $$g(x) = g'(1)\\int_{1}^{x}\\frac{dt}{t}\\tag{7}$$ so that any multiple of the above integral is a solution for $(2)$. There is another way to find an expression for $g$. Let $x\\neq 1$ and then we know that $x^{1/n}\\neq 1$ for all positive integers $n$ and also $x^{1/n}\\to 1$ as $n\\to\\infty $. 
We then have $$g(x) =ng(x^{1/n})=n(x^{1/n}-1)\\cdot\\frac{g(x^{1/n})}{x^{1/n}-1}$$ Taking limits as $n\\to\\infty$ and noting that fraction on right in above equation tends to $g'(1)$ we can see that $$g(x) = g'(1)\\lim_{n\\to\\infty}n(x^{1/n}-1)\\tag{8}$$ Thus the solution $g$ is always a constant multiple of the limit in above equation. Both the integral in $(7)$ and the limit in $(8)$ are equivalent expressions for the logarithm function conventionally denoted by $\\log x$ and the theory of this function can be developed independently using either <a href=\"http://paramanands.blogspot.com/2014/05/theories-of-exponential-and-logarithmic-functions-part-1.html\" rel=\"noreferrer\">the integral</a> or <a href=\"http://paramanands.blogspot.com/2014/05/theories-of-exponential-and-logarithmic-functions-part-2_10.html\" rel=\"noreferrer\">the limit approach</a>. The current answer shows that using functional equation $(2)$ and assumption of continuity we can discover a very useful and important function called logarithm.</p>\n" }, { "answer_id": 2091217, "score": 7, "text": "<p>Divide the functional equation by $xy$ to get</p>\n\n<p>$$\\frac{f(xy)}{xy}=\\frac{f(y)}y+\\frac{f(x)}x$$</p>\n\n<p>Set $g(x)=\\frac{f(x)}x$ so that</p>\n\n<p>$$g(xy)=g(y)+g(x)$$</p>\n\n<p>This is actually well known, and if $g(x)$ is continuous, then the solution is given by</p>\n\n<p>$$g(x)=a\\log(x)$$</p>\n\n<p>which implies</p>\n\n<p>$$f(x)=ax\\log(x)$$</p>\n" }, { "answer_id": 2091337, "score": 18, "text": "<p>Using integrals the job is easy. Since $f$ is continuous the anti-derivarive of $f$, say $F$ exists. Now integrate the functional equation with respect to $y$ on interval $[1,2]$ to get $$\\frac{F(2x)-F(x)}{x}=x(F(2)-F(1))+ \\frac{3}{2}f(x)$$ Since $F'$ exists it now follows that $f'(x) $ exists for all $x\\in(0,\\infty)$.</p>\n\n<hr>\n\n<p>The problem is difficult if we stick to differential calculus only. 
I have managed to make some progress to solve the problem using methods of differential calculus and I present the same below.</p>\n\n<p>Dividing by $xy$ we get the functional equation $$\\frac{f(xy)}{xy} = \\frac{f(x)}{x} + \\frac{f(y)}{y}\\tag{1}$$ and if we put $g(x) = f(x)/x$ then $$g(xy) = g(x) + g(y)\\tag{2}$$ We prove the following theorem:</p>\n\n<blockquote>\n <p><strong>Theorem</strong>: <em>If $g:\\mathbb{R}^{+}\\to\\mathbb{R}$ is continuous at a point $c \\in \\mathbb{R}^{+}$ and satisfies $g(xy) = g(x) + g(y)$ for all $x, y \\in \\mathbb{R}^{+}$ then $g(x)$ is differentiable for all $x \\in \\mathbb{R}^{+}$ and $g'(x) = g'(1)/x$.</em></p>\n</blockquote>\n\n<p>First we prove that $g$ is continuous on $\\mathbb{R}^{+}$. Let $x &gt; 0$ and $x \\neq c$. Then we have $$\\lim_{h \\to 0}g(x + h) = \\lim_{h \\to 0}g(x/c)+g (c + ch/x) = g(x/c)+g(c) = g(x)$$ From the functional equation the following properties are obvious $$g(1) = 0, g(1/x) = -g(x), g(x/y) = g(x) - g(y), g(x^{q}) = qg(x), q\\in \\mathbb{Q}\\tag{3}$$ Next we prove that $x = 1$ is the only solution for $g(x) = 0$. More precisely we prove that</p>\n\n<blockquote>\n <p><em>If a continuous function $g:\\mathbb{R}^{+}\\to\\mathbb{R}$ satisfies functional equation $(2)$ and there is a number $k \\neq 1$ such that $g(k) = 0$ then $g(x) = 0$ for all $x\\in\\mathbb{R}^{+}$.</em></p>\n</blockquote>\n\n<p>Let us suppose that $k \\neq 1$ and $g(k) = 0$ and then by $(3)$ we have $g(k^{q}) = 0$ for all $q \\in \\mathbb{Q}$. Now it can be proved with some effort that <em>every positive real number is an accumulation point of set $A = \\{k^{q}: q\\in \\mathbb{Q}\\}$</em> or in other words if we take any positive real number $x$ then every neighborhood of $x$ contains points of the form $k^{q}$. 
It now follows by sign preserving nature of continuous functions that $g(x) = 0$ for all positive $x$.</p>\n\n<p>So one solution of the functional equation is the constant $0$ function and apart from this trivial solution any other $g$ is such that $g(x) = 0\\Rightarrow x = 1$. From the relation $g(x/y) = g(x) - g(y)$ it then follows that $g$ is one-one function. We further note that <em>any continuous one one function is strictly monotone</em> (this is an easy consequence of intermediate value property of continuous functions) and hence $g$ is strictly monotone.</p>\n\n<p>It follows that if $k \\neq 1$ then $g(k)\\neq 0$ and $g(k^{n}) = ng(k)$ for all integers $n$ so that function $g$ can take arbitrarily large negative and positive values. By intermediate value property it follows that range of $g$ is $\\mathbb{R}$ and hence $g$ is one-one onto function.</p>\n\n<p>Next we prove that if $g'(1)$ exists then $g'(x) = g'(1)/x$. Clearly we have\n\\begin{align}\ng'(x) &amp;= \\lim_{h \\to 0}\\frac{g(x + h) - g(x)}{h}\\notag\\\\\n&amp;= \\lim_{h \\to 0}\\frac{g(1 + h/x)}{h}\\notag\\\\\n&amp;= \\lim_{h \\to 0}\\frac{g(1 + h/x) - g(1)}{h/x}\\cdot\\frac{1}{x}\\notag\\\\\n&amp;= \\frac{g'(1)}{x}\\notag\n\\end{align}\nWe next need to show that $g'(1)$ exists or more explicitly the following limit exists $$\\lim_{x \\to 1}\\frac{g(x)}{x - 1} = g'(1)\\tag{4}$$ I have not been able to prove the existence of this limit in a direct manner. Rather I use the fact that $g$ is invertible and thus there is a function $h:\\mathbb{R}\\to\\mathbb{R}^{+}$ such that $g(h(x)) = x, x\\in \\mathbb{R}$ and $h(g(x)) = x, x \\in \\mathbb{R}^{+}$. Moreover because of equation $(2)$ the function $h$ satisfies the functional equation $$h(x+y) = h(x)h(y)\\tag{5}$$ and <a href=\"https://math.stackexchange.com/a/1885860/72031\">it has already been established</a> that under these conditions the limit $$\\lim_{x \\to 0}\\frac{h(x) - 1}{x}\\tag{6}$$ exists and is non-zero unless $h$ is constant. 
Putting $h(x) - 1 = t$ we see that $x = g(1 + t)$ and as $x \\to 0, t \\to 0$ and thus $$\\lim_{t \\to 0}\\frac{g(1 + t)}{t}$$ exists and thus the limit $g'(1)$ in equation $(4)$ exists.</p>\n\n<p>It is now clear that $f(x) = xg(x)$ is differentiable for all $x &gt; 0$.</p>\n\n<hr>\n\n<p>In the above we have only discussed that if a function $g$ satisfies the equation $(2)$ and is continuous then it is differentiable also. With slightly more effort we can show that such functions actually exist. One way is to integrate the equation $g'(x) = g'(1)/x$ to get $$g(x) = g'(1)\\int_{1}^{x}\\frac{dt}{t}\\tag{7}$$ so that any multiple of the above integral is a solution for $(2)$. There is another way to find an expression for $g$. Let $x\\neq 1$ and then we know that $x^{1/n}\\neq 1$ for all positive integers $n$ and also $x^{1/n}\\to 1$ as $n\\to\\infty $. We then have $$g(x) =ng(x^{1/n})=n(x^{1/n}-1)\\cdot\\frac{g(x^{1/n})}{x^{1/n}-1}$$ Taking limits as $n\\to\\infty$ and noting that fraction on right in above equation tends to $g'(1)$ we can see that $$g(x) = g'(1)\\lim_{n\\to\\infty}n(x^{1/n}-1)\\tag{8}$$ Thus the solution $g$ is always a constant multiple of the limit in above equation. Both the integral in $(7)$ and the limit in $(8)$ are equivalent expressions for the logarithm function conventionally denoted by $\\log x$ and the theory of this function can be developed independently using either <a href=\"http://paramanands.blogspot.com/2014/05/theories-of-exponential-and-logarithmic-functions-part-1.html\" rel=\"noreferrer\">the integral</a> or <a href=\"http://paramanands.blogspot.com/2014/05/theories-of-exponential-and-logarithmic-functions-part-2_10.html\" rel=\"noreferrer\">the limit approach</a>. 
The current answer shows that using functional equation $(2)$ and assumption of continuity we can discover a very useful and important function called logarithm.</p>\n" }, { "answer_id": 2091217, "score": 7, "text": "<p>Divide the functional equation by $xy$ to get</p>\n\n<p>$$\\frac{f(xy)}{xy}=\\frac{f(y)}y+\\frac{f(x)}x$$</p>\n\n<p>Set $g(x)=\\frac{f(x)}x$ so that</p>\n\n<p>$$g(xy)=g(y)+g(x)$$</p>\n\n<p>This is actually well known, and if $g(x)$ is continuous, then the solution is given by</p>\n\n<p>$$g(x)=a\\log(x)$$</p>\n\n<p>which implies</p>\n\n<p>$$f(x)=ax\\log(x)$$</p>\n" }, { "answer_id": 2091337, "score": 18, "text": "<p>Using integrals the job is easy. Since $f$ is continuous the anti-derivarive of $f$, say $F$ exists. Now integrate the functional equation with respect to $y$ on interval $[1,2]$ to get $$\\frac{F(2x)-F(x)}{x}=x(F(2)-F(1))+ \\frac{3}{2}f(x)$$ Since $F'$ exists it now follows that $f'(x) $ exists for all $x\\in(0,\\infty)$.</p>\n\n<hr>\n\n<p>The problem is difficult if we stick to differential calculus only. I have managed to make some progress to solve the problem using methods of differential calculus and I present the same below.</p>\n\n<p>Dividing by $xy$ we get the functional equation $$\\frac{f(xy)}{xy} = \\frac{f(x)}{x} + \\frac{f(y)}{y}\\tag{1}$$ and if we put $g(x) = f(x)/x$ then $$g(xy) = g(x) + g(y)\\tag{2}$$ We prove the following theorem:</p>\n\n<blockquote>\n <p><strong>Theorem</strong>: <em>If $g:\\mathbb{R}^{+}\\to\\mathbb{R}$ is continuous at a point $c \\in \\mathbb{R}^{+}$ and satisfies $g(xy) = g(x) + g(y)$ for all $x, y \\in \\mathbb{R}^{+}$ then $g(x)$ is differentiable for all $x \\in \\mathbb{R}^{+}$ and $g'(x) = g'(1)/x$.</em></p>\n</blockquote>\n\n<p>First we prove that $g$ is continuous on $\\mathbb{R}^{+}$. Let $x &gt; 0$ and $x \\neq c$. 
Then we have $$\\lim_{h \\to 0}g(x + h) = \\lim_{h \\to 0}g(x/c)+g (c + ch/x) = g(x/c)+g(c) = g(x)$$ From the functional equation the following properties are obvious $$g(1) = 0, g(1/x) = -g(x), g(x/y) = g(x) - g(y), g(x^{q}) = qg(x), q\\in \\mathbb{Q}\\tag{3}$$ Next we prove that $x = 1$ is the only solution for $g(x) = 0$. More precisely we prove that</p>\n\n<blockquote>\n <p><em>If a continuous function $g:\\mathbb{R}^{+}\\to\\mathbb{R}$ satisfies functional equation $(2)$ and there is a number $k \\neq 1$ such that $g(k) = 0$ then $g(x) = 0$ for all $x\\in\\mathbb{R}^{+}$.</em></p>\n</blockquote>\n\n<p>Let us suppose that $k \\neq 1$ and $g(k) = 0$ and then by $(3)$ we have $g(k^{q}) = 0$ for all $q \\in \\mathbb{Q}$. Now it can be proved with some effort that <em>every positive real number is an accumulation point of set $A = \\{k^{q}: q\\in \\mathbb{Q}\\}$</em> or in other words if we take any positive real number $x$ then every neighborhood of $x$ contains points of the form $k^{q}$. It now follows by sign preserving nature of continuous functions that $g(x) = 0$ for all positive $x$.</p>\n\n<p>So one solution of the functional equation is the constant $0$ function and apart from this trivial solution any other $g$ is such that $g(x) = 0\\Rightarrow x = 1$. From the relation $g(x/y) = g(x) - g(y)$ it then follows that $g$ is one-one function. We further note that <em>any continuous one one function is strictly monotone</em> (this is an easy consequence of intermediate value property of continuous functions) and hence $g$ is strictly monotone.</p>\n\n<p>It follows that if $k \\neq 1$ then $g(k)\\neq 0$ and $g(k^{n}) = ng(k)$ for all integers $n$ so that function $g$ can take arbitrarily large negative and positive values. By intermediate value property it follows that range of $g$ is $\\mathbb{R}$ and hence $g$ is one-one onto function.</p>\n\n<p>Next we prove that if $g'(1)$ exists then $g'(x) = g'(1)/x$. 
Clearly we have\n\\begin{align}\ng'(x) &amp;= \\lim_{h \\to 0}\\frac{g(x + h) - g(x)}{h}\\notag\\\\\n&amp;= \\lim_{h \\to 0}\\frac{g(1 + h/x)}{h}\\notag\\\\\n&amp;= \\lim_{h \\to 0}\\frac{g(1 + h/x) - g(1)}{h/x}\\cdot\\frac{1}{x}\\notag\\\\\n&amp;= \\frac{g'(1)}{x}\\notag\n\\end{align}\nWe next need to show that $g'(1)$ exists, or more explicitly that the following limit exists: $$\\lim_{x \\to 1}\\frac{g(x)}{x - 1} = g'(1)\\tag{4}$$ I have not been able to prove the existence of this limit in a direct manner. Rather, I use the fact that $g$ is invertible and thus there is a function $h:\\mathbb{R}\\to\\mathbb{R}^{+}$ such that $g(h(x)) = x, x\\in \\mathbb{R}$ and $h(g(x)) = x, x \\in \\mathbb{R}^{+}$. Moreover, because of equation $(2)$ the function $h$ satisfies the functional equation $$h(x+y) = h(x)h(y)\\tag{5}$$ and <a href=\"https://math.stackexchange.com/a/1885860/72031\">it has already been established</a> that under these conditions the limit $$\\lim_{x \\to 0}\\frac{h(x) - 1}{x}\\tag{6}$$ exists and is non-zero unless $h$ is constant. Putting $h(x) - 1 = t$ we see that $x = g(1 + t)$, and as $x \\to 0$ we have $t \\to 0$; since $h$ is invertible it is not constant, so the limit $(6)$ is non-zero and its reciprocal $$\\lim_{t \\to 0}\\frac{g(1 + t)}{t}$$ exists, and hence the limit $g'(1)$ in equation $(4)$ exists.</p>\n\n<p>It is now clear that $f(x) = xg(x)$ is differentiable for all $x &gt; 0$.</p>\n\n<hr>\n\n<p>In the above we have only shown that if a function $g$ satisfies the equation $(2)$ and is continuous then it is also differentiable. With slightly more effort we can show that such functions actually exist. One way is to integrate the equation $g'(x) = g'(1)/x$ to get $$g(x) = g'(1)\\int_{1}^{x}\\frac{dt}{t}\\tag{7}$$ so that any multiple of the above integral is a solution of $(2)$. There is another way to find an expression for $g$. Let $x\\neq 1$; then we know that $x^{1/n}\\neq 1$ for all positive integers $n$ and also $x^{1/n}\\to 1$ as $n\\to\\infty $. 
We then have $$g(x) =ng(x^{1/n})=n(x^{1/n}-1)\\cdot\\frac{g(x^{1/n})}{x^{1/n}-1}$$ Taking limits as $n\\to\\infty$ and noting that fraction on right in above equation tends to $g'(1)$ we can see that $$g(x) = g'(1)\\lim_{n\\to\\infty}n(x^{1/n}-1)\\tag{8}$$ Thus the solution $g$ is always a constant multiple of the limit in above equation. Both the integral in $(7)$ and the limit in $(8)$ are equivalent expressions for the logarithm function conventionally denoted by $\\log x$ and the theory of this function can be developed independently using either <a href=\"http://paramanands.blogspot.com/2014/05/theories-of-exponential-and-logarithmic-functions-part-1.html\" rel=\"noreferrer\">the integral</a> or <a href=\"http://paramanands.blogspot.com/2014/05/theories-of-exponential-and-logarithmic-functions-part-2_10.html\" rel=\"noreferrer\">the limit approach</a>. The current answer shows that using functional equation $(2)$ and assumption of continuity we can discover a very useful and important function called logarithm.</p>\n" }, { "answer_id": 2091217, "score": 7, "text": "<p>Divide the functional equation by $xy$ to get</p>\n\n<p>$$\\frac{f(xy)}{xy}=\\frac{f(y)}y+\\frac{f(x)}x$$</p>\n\n<p>Set $g(x)=\\frac{f(x)}x$ so that</p>\n\n<p>$$g(xy)=g(y)+g(x)$$</p>\n\n<p>This is actually well known, and if $g(x)$ is continuous, then the solution is given by</p>\n\n<p>$$g(x)=a\\log(x)$$</p>\n\n<p>which implies</p>\n\n<p>$$f(x)=ax\\log(x)$$</p>\n" }, { "answer_id": 2091337, "score": 18, "text": "<p>Using integrals the job is easy. Since $f$ is continuous the anti-derivarive of $f$, say $F$ exists. Now integrate the functional equation with respect to $y$ on interval $[1,2]$ to get $$\\frac{F(2x)-F(x)}{x}=x(F(2)-F(1))+ \\frac{3}{2}f(x)$$ Since $F'$ exists it now follows that $f'(x) $ exists for all $x\\in(0,\\infty)$.</p>\n\n<hr>\n\n<p>The problem is difficult if we stick to differential calculus only. 
I have managed to make some progress to solve the problem using methods of differential calculus and I present the same below.</p>\n\n<p>Dividing by $xy$ we get the functional equation $$\\frac{f(xy)}{xy} = \\frac{f(x)}{x} + \\frac{f(y)}{y}\\tag{1}$$ and if we put $g(x) = f(x)/x$ then $$g(xy) = g(x) + g(y)\\tag{2}$$ We prove the following theorem:</p>\n\n<blockquote>\n <p><strong>Theorem</strong>: <em>If $g:\\mathbb{R}^{+}\\to\\mathbb{R}$ is continuous at a point $c \\in \\mathbb{R}^{+}$ and satisfies $g(xy) = g(x) + g(y)$ for all $x, y \\in \\mathbb{R}^{+}$ then $g(x)$ is differentiable for all $x \\in \\mathbb{R}^{+}$ and $g'(x) = g'(1)/x$.</em></p>\n</blockquote>\n\n<p>First we prove that $g$ is continuous on $\\mathbb{R}^{+}$. Let $x &gt; 0$ and $x \\neq c$. Then we have $$\\lim_{h \\to 0}g(x + h) = \\lim_{h \\to 0}g(x/c)+g (c + ch/x) = g(x/c)+g(c) = g(x)$$ From the functional equation the following properties are obvious $$g(1) = 0, g(1/x) = -g(x), g(x/y) = g(x) - g(y), g(x^{q}) = qg(x), q\\in \\mathbb{Q}\\tag{3}$$ Next we prove that $x = 1$ is the only solution for $g(x) = 0$. More precisely we prove that</p>\n\n<blockquote>\n <p><em>If a continuous function $g:\\mathbb{R}^{+}\\to\\mathbb{R}$ satisfies functional equation $(2)$ and there is a number $k \\neq 1$ such that $g(k) = 0$ then $g(x) = 0$ for all $x\\in\\mathbb{R}^{+}$.</em></p>\n</blockquote>\n\n<p>Let us suppose that $k \\neq 1$ and $g(k) = 0$ and then by $(3)$ we have $g(k^{q}) = 0$ for all $q \\in \\mathbb{Q}$. Now it can be proved with some effort that <em>every positive real number is an accumulation point of set $A = \\{k^{q}: q\\in \\mathbb{Q}\\}$</em> or in other words if we take any positive real number $x$ then every neighborhood of $x$ contains points of the form $k^{q}$. 
It now follows by sign preserving nature of continuous functions that $g(x) = 0$ for all positive $x$.</p>\n\n<p>So one solution of the functional equation is the constant $0$ function and apart from this trivial solution any other $g$ is such that $g(x) = 0\\Rightarrow x = 1$. From the relation $g(x/y) = g(x) - g(y)$ it then follows that $g$ is one-one function. We further note that <em>any continuous one one function is strictly monotone</em> (this is an easy consequence of intermediate value property of continuous functions) and hence $g$ is strictly monotone.</p>\n\n<p>It follows that if $k \\neq 1$ then $g(k)\\neq 0$ and $g(k^{n}) = ng(k)$ for all integers $n$ so that function $g$ can take arbitrarily large negative and positive values. By intermediate value property it follows that range of $g$ is $\\mathbb{R}$ and hence $g$ is one-one onto function.</p>\n\n<p>Next we prove that if $g'(1)$ exists then $g'(x) = g'(1)/x$. Clearly we have\n\\begin{align}\ng'(x) &amp;= \\lim_{h \\to 0}\\frac{g(x + h) - g(x)}{h}\\notag\\\\\n&amp;= \\lim_{h \\to 0}\\frac{g(1 + h/x)}{h}\\notag\\\\\n&amp;= \\lim_{h \\to 0}\\frac{g(1 + h/x) - g(1)}{h/x}\\cdot\\frac{1}{x}\\notag\\\\\n&amp;= \\frac{g'(1)}{x}\\notag\n\\end{align}\nWe next need to show that $g'(1)$ exists or more explicitly the following limit exists $$\\lim_{x \\to 1}\\frac{g(x)}{x - 1} = g'(1)\\tag{4}$$ I have not been able to prove the existence of this limit in a direct manner. Rather I use the fact that $g$ is invertible and thus there is a function $h:\\mathbb{R}\\to\\mathbb{R}^{+}$ such that $g(h(x)) = x, x\\in \\mathbb{R}$ and $h(g(x)) = x, x \\in \\mathbb{R}^{+}$. Moreover because of equation $(2)$ the function $h$ satisfies the functional equation $$h(x+y) = h(x)h(y)\\tag{5}$$ and <a href=\"https://math.stackexchange.com/a/1885860/72031\">it has already been established</a> that under these conditions the limit $$\\lim_{x \\to 0}\\frac{h(x) - 1}{x}\\tag{6}$$ exists and is non-zero unless $h$ is constant. 
Putting $h(x) - 1 = t$ we see that $x = g(1 + t)$ and as $x \\to 0, t \\to 0$ and thus $$\\lim_{t \\to 0}\\frac{g(1 + t)}{t}$$ exists and thus the limit $g'(1)$ in equation $(4)$ exists.</p>\n\n<p>It is now clear that $f(x) = xg(x)$ is differentiable for all $x &gt; 0$.</p>\n\n<hr>\n\n<p>In the above we have only discussed that if a function $g$ satisfies the equation $(2)$ and is continuous then it is differentiable also. With slightly more effort we can show that such functions actually exist. One way is to integrate the equation $g'(x) = g'(1)/x$ to get $$g(x) = g'(1)\\int_{1}^{x}\\frac{dt}{t}\\tag{7}$$ so that any multiple of the above integral is a solution for $(2)$. There is another way to find an expression for $g$. Let $x\\neq 1$ and then we know that $x^{1/n}\\neq 1$ for all positive integers $n$ and also $x^{1/n}\\to 1$ as $n\\to\\infty $. We then have $$g(x) =ng(x^{1/n})=n(x^{1/n}-1)\\cdot\\frac{g(x^{1/n})}{x^{1/n}-1}$$ Taking limits as $n\\to\\infty$ and noting that fraction on right in above equation tends to $g'(1)$ we can see that $$g(x) = g'(1)\\lim_{n\\to\\infty}n(x^{1/n}-1)\\tag{8}$$ Thus the solution $g$ is always a constant multiple of the limit in above equation. Both the integral in $(7)$ and the limit in $(8)$ are equivalent expressions for the logarithm function conventionally denoted by $\\log x$ and the theory of this function can be developed independently using either <a href=\"http://paramanands.blogspot.com/2014/05/theories-of-exponential-and-logarithmic-functions-part-1.html\" rel=\"noreferrer\">the integral</a> or <a href=\"http://paramanands.blogspot.com/2014/05/theories-of-exponential-and-logarithmic-functions-part-2_10.html\" rel=\"noreferrer\">the limit approach</a>. 
The current answer shows that using functional equation $(2)$ and assumption of continuity we can discover a very useful and important function called logarithm.</p>\n" }, { "answer_id": 2091217, "score": 7, "text": "<p>Divide the functional equation by $xy$ to get</p>\n\n<p>$$\\frac{f(xy)}{xy}=\\frac{f(y)}y+\\frac{f(x)}x$$</p>\n\n<p>Set $g(x)=\\frac{f(x)}x$ so that</p>\n\n<p>$$g(xy)=g(y)+g(x)$$</p>\n\n<p>This is actually well known, and if $g(x)$ is continuous, then the solution is given by</p>\n\n<p>$$g(x)=a\\log(x)$$</p>\n\n<p>which implies</p>\n\n<p>$$f(x)=ax\\log(x)$$</p>\n" }, { "answer_id": 2091337, "score": 18, "text": "<p>Using integrals the job is easy. Since $f$ is continuous the anti-derivarive of $f$, say $F$ exists. Now integrate the functional equation with respect to $y$ on interval $[1,2]$ to get $$\\frac{F(2x)-F(x)}{x}=x(F(2)-F(1))+ \\frac{3}{2}f(x)$$ Since $F'$ exists it now follows that $f'(x) $ exists for all $x\\in(0,\\infty)$.</p>\n\n<hr>\n\n<p>The problem is difficult if we stick to differential calculus only. I have managed to make some progress to solve the problem using methods of differential calculus and I present the same below.</p>\n\n<p>Dividing by $xy$ we get the functional equation $$\\frac{f(xy)}{xy} = \\frac{f(x)}{x} + \\frac{f(y)}{y}\\tag{1}$$ and if we put $g(x) = f(x)/x$ then $$g(xy) = g(x) + g(y)\\tag{2}$$ We prove the following theorem:</p>\n\n<blockquote>\n <p><strong>Theorem</strong>: <em>If $g:\\mathbb{R}^{+}\\to\\mathbb{R}$ is continuous at a point $c \\in \\mathbb{R}^{+}$ and satisfies $g(xy) = g(x) + g(y)$ for all $x, y \\in \\mathbb{R}^{+}$ then $g(x)$ is differentiable for all $x \\in \\mathbb{R}^{+}$ and $g'(x) = g'(1)/x$.</em></p>\n</blockquote>\n\n<p>First we prove that $g$ is continuous on $\\mathbb{R}^{+}$. Let $x &gt; 0$ and $x \\neq c$. 
Then we have $$\\lim_{h \\to 0}g(x + h) = \\lim_{h \\to 0}g(x/c)+g (c + ch/x) = g(x/c)+g(c) = g(x)$$ From the functional equation the following properties are obvious $$g(1) = 0, g(1/x) = -g(x), g(x/y) = g(x) - g(y), g(x^{q}) = qg(x), q\\in \\mathbb{Q}\\tag{3}$$ Next we prove that $x = 1$ is the only solution for $g(x) = 0$. More precisely we prove that</p>\n\n<blockquote>\n <p><em>If a continuous function $g:\\mathbb{R}^{+}\\to\\mathbb{R}$ satisfies functional equation $(2)$ and there is a number $k \\neq 1$ such that $g(k) = 0$ then $g(x) = 0$ for all $x\\in\\mathbb{R}^{+}$.</em></p>\n</blockquote>\n\n<p>Let us suppose that $k \\neq 1$ and $g(k) = 0$ and then by $(3)$ we have $g(k^{q}) = 0$ for all $q \\in \\mathbb{Q}$. Now it can be proved with some effort that <em>every positive real number is an accumulation point of set $A = \\{k^{q}: q\\in \\mathbb{Q}\\}$</em> or in other words if we take any positive real number $x$ then every neighborhood of $x$ contains points of the form $k^{q}$. It now follows by sign preserving nature of continuous functions that $g(x) = 0$ for all positive $x$.</p>\n\n<p>So one solution of the functional equation is the constant $0$ function and apart from this trivial solution any other $g$ is such that $g(x) = 0\\Rightarrow x = 1$. From the relation $g(x/y) = g(x) - g(y)$ it then follows that $g$ is one-one function. We further note that <em>any continuous one one function is strictly monotone</em> (this is an easy consequence of intermediate value property of continuous functions) and hence $g$ is strictly monotone.</p>\n\n<p>It follows that if $k \\neq 1$ then $g(k)\\neq 0$ and $g(k^{n}) = ng(k)$ for all integers $n$ so that function $g$ can take arbitrarily large negative and positive values. By intermediate value property it follows that range of $g$ is $\\mathbb{R}$ and hence $g$ is one-one onto function.</p>\n\n<p>Next we prove that if $g'(1)$ exists then $g'(x) = g'(1)/x$. 
Clearly we have\n\\begin{align}\ng'(x) &amp;= \\lim_{h \\to 0}\\frac{g(x + h) - g(x)}{h}\\notag\\\\\n&amp;= \\lim_{h \\to 0}\\frac{g(1 + h/x)}{h}\\notag\\\\\n&amp;= \\lim_{h \\to 0}\\frac{g(1 + h/x) - g(1)}{h/x}\\cdot\\frac{1}{x}\\notag\\\\\n&amp;= \\frac{g'(1)}{x}\\notag\n\\end{align}\nWe next need to show that $g'(1)$ exists or more explicitly the following limit exists $$\\lim_{x \\to 1}\\frac{g(x)}{x - 1} = g'(1)\\tag{4}$$ I have not been able to prove the existence of this limit in a direct manner. Rather I use the fact that $g$ is invertible and thus there is a function $h:\\mathbb{R}\\to\\mathbb{R}^{+}$ such that $g(h(x)) = x, x\\in \\mathbb{R}$ and $h(g(x)) = x, x \\in \\mathbb{R}^{+}$. Moreover because of equation $(2)$ the function $h$ satisfies the functional equation $$h(x+y) = h(x)h(y)\\tag{5}$$ and <a href=\"https://math.stackexchange.com/a/1885860/72031\">it has already been established</a> that under these conditions the limit $$\\lim_{x \\to 0}\\frac{h(x) - 1}{x}\\tag{6}$$ exists and is non-zero unless $h$ is constant. Putting $h(x) - 1 = t$ we see that $x = g(1 + t)$ and as $x \\to 0, t \\to 0$ and thus $$\\lim_{t \\to 0}\\frac{g(1 + t)}{t}$$ exists and thus the limit $g'(1)$ in equation $(4)$ exists.</p>\n\n<p>It is now clear that $f(x) = xg(x)$ is differentiable for all $x &gt; 0$.</p>\n\n<hr>\n\n<p>In the above we have only discussed that if a function $g$ satisfies the equation $(2)$ and is continuous then it is differentiable also. With slightly more effort we can show that such functions actually exist. One way is to integrate the equation $g'(x) = g'(1)/x$ to get $$g(x) = g'(1)\\int_{1}^{x}\\frac{dt}{t}\\tag{7}$$ so that any multiple of the above integral is a solution for $(2)$. There is another way to find an expression for $g$. Let $x\\neq 1$ and then we know that $x^{1/n}\\neq 1$ for all positive integers $n$ and also $x^{1/n}\\to 1$ as $n\\to\\infty $. 
We then have $$g(x) =ng(x^{1/n})=n(x^{1/n}-1)\\cdot\\frac{g(x^{1/n})}{x^{1/n}-1}$$ Taking limits as $n\\to\\infty$ and noting that fraction on right in above equation tends to $g'(1)$ we can see that $$g(x) = g'(1)\\lim_{n\\to\\infty}n(x^{1/n}-1)\\tag{8}$$ Thus the solution $g$ is always a constant multiple of the limit in above equation. Both the integral in $(7)$ and the limit in $(8)$ are equivalent expressions for the logarithm function conventionally denoted by $\\log x$ and the theory of this function can be developed independently using either <a href=\"http://paramanands.blogspot.com/2014/05/theories-of-exponential-and-logarithmic-functions-part-1.html\" rel=\"noreferrer\">the integral</a> or <a href=\"http://paramanands.blogspot.com/2014/05/theories-of-exponential-and-logarithmic-functions-part-2_10.html\" rel=\"noreferrer\">the limit approach</a>. The current answer shows that using functional equation $(2)$ and assumption of continuity we can discover a very useful and important function called logarithm.</p>\n" }, { "answer_id": 2091217, "score": 7, "text": "<p>Divide the functional equation by $xy$ to get</p>\n\n<p>$$\\frac{f(xy)}{xy}=\\frac{f(y)}y+\\frac{f(x)}x$$</p>\n\n<p>Set $g(x)=\\frac{f(x)}x$ so that</p>\n\n<p>$$g(xy)=g(y)+g(x)$$</p>\n\n<p>This is actually well known, and if $g(x)$ is continuous, then the solution is given by</p>\n\n<p>$$g(x)=a\\log(x)$$</p>\n\n<p>which implies</p>\n\n<p>$$f(x)=ax\\log(x)$$</p>\n" }, { "answer_id": 2091337, "score": 18, "text": "<p>Using integrals the job is easy. Since $f$ is continuous the anti-derivarive of $f$, say $F$ exists. Now integrate the functional equation with respect to $y$ on interval $[1,2]$ to get $$\\frac{F(2x)-F(x)}{x}=x(F(2)-F(1))+ \\frac{3}{2}f(x)$$ Since $F'$ exists it now follows that $f'(x) $ exists for all $x\\in(0,\\infty)$.</p>\n\n<hr>\n\n<p>The problem is difficult if we stick to differential calculus only. 
I have managed to make some progress to solve the problem using methods of differential calculus and I present the same below.</p>\n\n<p>Dividing by $xy$ we get the functional equation $$\\frac{f(xy)}{xy} = \\frac{f(x)}{x} + \\frac{f(y)}{y}\\tag{1}$$ and if we put $g(x) = f(x)/x$ then $$g(xy) = g(x) + g(y)\\tag{2}$$ We prove the following theorem:</p>\n\n<blockquote>\n <p><strong>Theorem</strong>: <em>If $g:\\mathbb{R}^{+}\\to\\mathbb{R}$ is continuous at a point $c \\in \\mathbb{R}^{+}$ and satisfies $g(xy) = g(x) + g(y)$ for all $x, y \\in \\mathbb{R}^{+}$ then $g(x)$ is differentiable for all $x \\in \\mathbb{R}^{+}$ and $g'(x) = g'(1)/x$.</em></p>\n</blockquote>\n\n<p>First we prove that $g$ is continuous on $\\mathbb{R}^{+}$. Let $x &gt; 0$ and $x \\neq c$. Then we have $$\\lim_{h \\to 0}g(x + h) = \\lim_{h \\to 0}g(x/c)+g (c + ch/x) = g(x/c)+g(c) = g(x)$$ From the functional equation the following properties are obvious $$g(1) = 0, g(1/x) = -g(x), g(x/y) = g(x) - g(y), g(x^{q}) = qg(x), q\\in \\mathbb{Q}\\tag{3}$$ Next we prove that $x = 1$ is the only solution for $g(x) = 0$. More precisely we prove that</p>\n\n<blockquote>\n <p><em>If a continuous function $g:\\mathbb{R}^{+}\\to\\mathbb{R}$ satisfies functional equation $(2)$ and there is a number $k \\neq 1$ such that $g(k) = 0$ then $g(x) = 0$ for all $x\\in\\mathbb{R}^{+}$.</em></p>\n</blockquote>\n\n<p>Let us suppose that $k \\neq 1$ and $g(k) = 0$ and then by $(3)$ we have $g(k^{q}) = 0$ for all $q \\in \\mathbb{Q}$. Now it can be proved with some effort that <em>every positive real number is an accumulation point of set $A = \\{k^{q}: q\\in \\mathbb{Q}\\}$</em> or in other words if we take any positive real number $x$ then every neighborhood of $x$ contains points of the form $k^{q}$. 
It now follows by sign preserving nature of continuous functions that $g(x) = 0$ for all positive $x$.</p>\n\n<p>So one solution of the functional equation is the constant $0$ function and apart from this trivial solution any other $g$ is such that $g(x) = 0\\Rightarrow x = 1$. From the relation $g(x/y) = g(x) - g(y)$ it then follows that $g$ is one-one function. We further note that <em>any continuous one one function is strictly monotone</em> (this is an easy consequence of intermediate value property of continuous functions) and hence $g$ is strictly monotone.</p>\n\n<p>It follows that if $k \\neq 1$ then $g(k)\\neq 0$ and $g(k^{n}) = ng(k)$ for all integers $n$ so that function $g$ can take arbitrarily large negative and positive values. By intermediate value property it follows that range of $g$ is $\\mathbb{R}$ and hence $g$ is one-one onto function.</p>\n\n<p>Next we prove that if $g'(1)$ exists then $g'(x) = g'(1)/x$. Clearly we have\n\\begin{align}\ng'(x) &amp;= \\lim_{h \\to 0}\\frac{g(x + h) - g(x)}{h}\\notag\\\\\n&amp;= \\lim_{h \\to 0}\\frac{g(1 + h/x)}{h}\\notag\\\\\n&amp;= \\lim_{h \\to 0}\\frac{g(1 + h/x) - g(1)}{h/x}\\cdot\\frac{1}{x}\\notag\\\\\n&amp;= \\frac{g'(1)}{x}\\notag\n\\end{align}\nWe next need to show that $g'(1)$ exists or more explicitly the following limit exists $$\\lim_{x \\to 1}\\frac{g(x)}{x - 1} = g'(1)\\tag{4}$$ I have not been able to prove the existence of this limit in a direct manner. Rather I use the fact that $g$ is invertible and thus there is a function $h:\\mathbb{R}\\to\\mathbb{R}^{+}$ such that $g(h(x)) = x, x\\in \\mathbb{R}$ and $h(g(x)) = x, x \\in \\mathbb{R}^{+}$. Moreover because of equation $(2)$ the function $h$ satisfies the functional equation $$h(x+y) = h(x)h(y)\\tag{5}$$ and <a href=\"https://math.stackexchange.com/a/1885860/72031\">it has already been established</a> that under these conditions the limit $$\\lim_{x \\to 0}\\frac{h(x) - 1}{x}\\tag{6}$$ exists and is non-zero unless $h$ is constant. 
Putting $h(x) - 1 = t$ we see that $x = g(1 + t)$ and as $x \\to 0, t \\to 0$ and thus $$\\lim_{t \\to 0}\\frac{g(1 + t)}{t}$$ exists and thus the limit $g'(1)$ in equation $(4)$ exists.</p>\n\n<p>It is now clear that $f(x) = xg(x)$ is differentiable for all $x &gt; 0$.</p>\n\n<hr>\n\n<p>In the above we have only discussed that if a function $g$ satisfies the equation $(2)$ and is continuous then it is differentiable also. With slightly more effort we can show that such functions actually exist. One way is to integrate the equation $g'(x) = g'(1)/x$ to get $$g(x) = g'(1)\\int_{1}^{x}\\frac{dt}{t}\\tag{7}$$ so that any multiple of the above integral is a solution for $(2)$. There is another way to find an expression for $g$. Let $x\\neq 1$ and then we know that $x^{1/n}\\neq 1$ for all positive integers $n$ and also $x^{1/n}\\to 1$ as $n\\to\\infty $. We then have $$g(x) =ng(x^{1/n})=n(x^{1/n}-1)\\cdot\\frac{g(x^{1/n})}{x^{1/n}-1}$$ Taking limits as $n\\to\\infty$ and noting that fraction on right in above equation tends to $g'(1)$ we can see that $$g(x) = g'(1)\\lim_{n\\to\\infty}n(x^{1/n}-1)\\tag{8}$$ Thus the solution $g$ is always a constant multiple of the limit in above equation. Both the integral in $(7)$ and the limit in $(8)$ are equivalent expressions for the logarithm function conventionally denoted by $\\log x$ and the theory of this function can be developed independently using either <a href=\"http://paramanands.blogspot.com/2014/05/theories-of-exponential-and-logarithmic-functions-part-1.html\" rel=\"noreferrer\">the integral</a> or <a href=\"http://paramanands.blogspot.com/2014/05/theories-of-exponential-and-logarithmic-functions-part-2_10.html\" rel=\"noreferrer\">the limit approach</a>. 
The current answer shows that using functional equation $(2)$ and assumption of continuity we can discover a very useful and important function called logarithm.</p>\n" }, { "answer_id": 2091217, "score": 7, "text": "<p>Divide the functional equation by $xy$ to get</p>\n\n<p>$$\\frac{f(xy)}{xy}=\\frac{f(y)}y+\\frac{f(x)}x$$</p>\n\n<p>Set $g(x)=\\frac{f(x)}x$ so that</p>\n\n<p>$$g(xy)=g(y)+g(x)$$</p>\n\n<p>This is actually well known, and if $g(x)$ is continuous, then the solution is given by</p>\n\n<p>$$g(x)=a\\log(x)$$</p>\n\n<p>which implies</p>\n\n<p>$$f(x)=ax\\log(x)$$</p>\n" }, { "answer_id": 2091337, "score": 18, "text": "<p>Using integrals the job is easy. Since $f$ is continuous the anti-derivarive of $f$, say $F$ exists. Now integrate the functional equation with respect to $y$ on interval $[1,2]$ to get $$\\frac{F(2x)-F(x)}{x}=x(F(2)-F(1))+ \\frac{3}{2}f(x)$$ Since $F'$ exists it now follows that $f'(x) $ exists for all $x\\in(0,\\infty)$.</p>\n\n<hr>\n\n<p>The problem is difficult if we stick to differential calculus only. I have managed to make some progress to solve the problem using methods of differential calculus and I present the same below.</p>\n\n<p>Dividing by $xy$ we get the functional equation $$\\frac{f(xy)}{xy} = \\frac{f(x)}{x} + \\frac{f(y)}{y}\\tag{1}$$ and if we put $g(x) = f(x)/x$ then $$g(xy) = g(x) + g(y)\\tag{2}$$ We prove the following theorem:</p>\n\n<blockquote>\n <p><strong>Theorem</strong>: <em>If $g:\\mathbb{R}^{+}\\to\\mathbb{R}$ is continuous at a point $c \\in \\mathbb{R}^{+}$ and satisfies $g(xy) = g(x) + g(y)$ for all $x, y \\in \\mathbb{R}^{+}$ then $g(x)$ is differentiable for all $x \\in \\mathbb{R}^{+}$ and $g'(x) = g'(1)/x$.</em></p>\n</blockquote>\n\n<p>First we prove that $g$ is continuous on $\\mathbb{R}^{+}$. Let $x &gt; 0$ and $x \\neq c$. 
Then we have $$\\lim_{h \\to 0}g(x + h) = \\lim_{h \\to 0}g(x/c)+g (c + ch/x) = g(x/c)+g(c) = g(x)$$ From the functional equation the following properties are obvious $$g(1) = 0, g(1/x) = -g(x), g(x/y) = g(x) - g(y), g(x^{q}) = qg(x), q\\in \\mathbb{Q}\\tag{3}$$ Next we prove that $x = 1$ is the only solution for $g(x) = 0$. More precisely we prove that</p>\n\n<blockquote>\n <p><em>If a continuous function $g:\\mathbb{R}^{+}\\to\\mathbb{R}$ satisfies functional equation $(2)$ and there is a number $k \\neq 1$ such that $g(k) = 0$ then $g(x) = 0$ for all $x\\in\\mathbb{R}^{+}$.</em></p>\n</blockquote>\n\n<p>Let us suppose that $k \\neq 1$ and $g(k) = 0$ and then by $(3)$ we have $g(k^{q}) = 0$ for all $q \\in \\mathbb{Q}$. Now it can be proved with some effort that <em>every positive real number is an accumulation point of set $A = \\{k^{q}: q\\in \\mathbb{Q}\\}$</em> or in other words if we take any positive real number $x$ then every neighborhood of $x$ contains points of the form $k^{q}$. It now follows by sign preserving nature of continuous functions that $g(x) = 0$ for all positive $x$.</p>\n\n<p>So one solution of the functional equation is the constant $0$ function and apart from this trivial solution any other $g$ is such that $g(x) = 0\\Rightarrow x = 1$. From the relation $g(x/y) = g(x) - g(y)$ it then follows that $g$ is one-one function. We further note that <em>any continuous one one function is strictly monotone</em> (this is an easy consequence of intermediate value property of continuous functions) and hence $g$ is strictly monotone.</p>\n\n<p>It follows that if $k \\neq 1$ then $g(k)\\neq 0$ and $g(k^{n}) = ng(k)$ for all integers $n$ so that function $g$ can take arbitrarily large negative and positive values. By intermediate value property it follows that range of $g$ is $\\mathbb{R}$ and hence $g$ is one-one onto function.</p>\n\n<p>Next we prove that if $g'(1)$ exists then $g'(x) = g'(1)/x$. 
Clearly we have\n\\begin{align}\ng'(x) &amp;= \\lim_{h \\to 0}\\frac{g(x + h) - g(x)}{h}\\notag\\\\\n&amp;= \\lim_{h \\to 0}\\frac{g(1 + h/x)}{h}\\notag\\\\\n&amp;= \\lim_{h \\to 0}\\frac{g(1 + h/x) - g(1)}{h/x}\\cdot\\frac{1}{x}\\notag\\\\\n&amp;= \\frac{g'(1)}{x}\\notag\n\\end{align}\nWe next need to show that $g'(1)$ exists or more explicitly the following limit exists $$\\lim_{x \\to 1}\\frac{g(x)}{x - 1} = g'(1)\\tag{4}$$ I have not been able to prove the existence of this limit in a direct manner. Rather I use the fact that $g$ is invertible and thus there is a function $h:\\mathbb{R}\\to\\mathbb{R}^{+}$ such that $g(h(x)) = x, x\\in \\mathbb{R}$ and $h(g(x)) = x, x \\in \\mathbb{R}^{+}$. Moreover because of equation $(2)$ the function $h$ satisfies the functional equation $$h(x+y) = h(x)h(y)\\tag{5}$$ and <a href=\"https://math.stackexchange.com/a/1885860/72031\">it has already been established</a> that under these conditions the limit $$\\lim_{x \\to 0}\\frac{h(x) - 1}{x}\\tag{6}$$ exists and is non-zero unless $h$ is constant. Putting $h(x) - 1 = t$ we see that $x = g(1 + t)$ and as $x \\to 0, t \\to 0$ and thus $$\\lim_{t \\to 0}\\frac{g(1 + t)}{t}$$ exists and thus the limit $g'(1)$ in equation $(4)$ exists.</p>\n\n<p>It is now clear that $f(x) = xg(x)$ is differentiable for all $x &gt; 0$.</p>\n\n<hr>\n\n<p>In the above we have only discussed that if a function $g$ satisfies the equation $(2)$ and is continuous then it is differentiable also. With slightly more effort we can show that such functions actually exist. One way is to integrate the equation $g'(x) = g'(1)/x$ to get $$g(x) = g'(1)\\int_{1}^{x}\\frac{dt}{t}\\tag{7}$$ so that any multiple of the above integral is a solution for $(2)$. There is another way to find an expression for $g$. Let $x\\neq 1$ and then we know that $x^{1/n}\\neq 1$ for all positive integers $n$ and also $x^{1/n}\\to 1$ as $n\\to\\infty $. 
We then have $$g(x) =ng(x^{1/n})=n(x^{1/n}-1)\\cdot\\frac{g(x^{1/n})}{x^{1/n}-1}$$ Taking limits as $n\\to\\infty$ and noting that fraction on right in above equation tends to $g'(1)$ we can see that $$g(x) = g'(1)\\lim_{n\\to\\infty}n(x^{1/n}-1)\\tag{8}$$ Thus the solution $g$ is always a constant multiple of the limit in above equation. Both the integral in $(7)$ and the limit in $(8)$ are equivalent expressions for the logarithm function conventionally denoted by $\\log x$ and the theory of this function can be developed independently using either <a href=\"http://paramanands.blogspot.com/2014/05/theories-of-exponential-and-logarithmic-functions-part-1.html\" rel=\"noreferrer\">the integral</a> or <a href=\"http://paramanands.blogspot.com/2014/05/theories-of-exponential-and-logarithmic-functions-part-2_10.html\" rel=\"noreferrer\">the limit approach</a>. The current answer shows that using functional equation $(2)$ and assumption of continuity we can discover a very useful and important function called logarithm.</p>\n" }, { "answer_id": 2091217, "score": 7, "text": "<p>Divide the functional equation by $xy$ to get</p>\n\n<p>$$\\frac{f(xy)}{xy}=\\frac{f(y)}y+\\frac{f(x)}x$$</p>\n\n<p>Set $g(x)=\\frac{f(x)}x$ so that</p>\n\n<p>$$g(xy)=g(y)+g(x)$$</p>\n\n<p>This is actually well known, and if $g(x)$ is continuous, then the solution is given by</p>\n\n<p>$$g(x)=a\\log(x)$$</p>\n\n<p>which implies</p>\n\n<p>$$f(x)=ax\\log(x)$$</p>\n" }, { "answer_id": 2091337, "score": 18, "text": "<p>Using integrals the job is easy. Since $f$ is continuous the anti-derivarive of $f$, say $F$ exists. Now integrate the functional equation with respect to $y$ on interval $[1,2]$ to get $$\\frac{F(2x)-F(x)}{x}=x(F(2)-F(1))+ \\frac{3}{2}f(x)$$ Since $F'$ exists it now follows that $f'(x) $ exists for all $x\\in(0,\\infty)$.</p>\n\n<hr>\n\n<p>The problem is difficult if we stick to differential calculus only. 
I have managed to make some progress to solve the problem using methods of differential calculus and I present the same below.</p>\n\n<p>Dividing by $xy$ we get the functional equation $$\\frac{f(xy)}{xy} = \\frac{f(x)}{x} + \\frac{f(y)}{y}\\tag{1}$$ and if we put $g(x) = f(x)/x$ then $$g(xy) = g(x) + g(y)\\tag{2}$$ We prove the following theorem:</p>\n\n<blockquote>\n <p><strong>Theorem</strong>: <em>If $g:\\mathbb{R}^{+}\\to\\mathbb{R}$ is continuous at a point $c \\in \\mathbb{R}^{+}$ and satisfies $g(xy) = g(x) + g(y)$ for all $x, y \\in \\mathbb{R}^{+}$ then $g(x)$ is differentiable for all $x \\in \\mathbb{R}^{+}$ and $g'(x) = g'(1)/x$.</em></p>\n</blockquote>\n\n<p>First we prove that $g$ is continuous on $\\mathbb{R}^{+}$. Let $x &gt; 0$ and $x \\neq c$. Then we have $$\\lim_{h \\to 0}g(x + h) = \\lim_{h \\to 0}g(x/c)+g (c + ch/x) = g(x/c)+g(c) = g(x)$$ From the functional equation the following properties are obvious $$g(1) = 0, g(1/x) = -g(x), g(x/y) = g(x) - g(y), g(x^{q}) = qg(x), q\\in \\mathbb{Q}\\tag{3}$$ Next we prove that $x = 1$ is the only solution for $g(x) = 0$. More precisely we prove that</p>\n\n<blockquote>\n <p><em>If a continuous function $g:\\mathbb{R}^{+}\\to\\mathbb{R}$ satisfies functional equation $(2)$ and there is a number $k \\neq 1$ such that $g(k) = 0$ then $g(x) = 0$ for all $x\\in\\mathbb{R}^{+}$.</em></p>\n</blockquote>\n\n<p>Let us suppose that $k \\neq 1$ and $g(k) = 0$ and then by $(3)$ we have $g(k^{q}) = 0$ for all $q \\in \\mathbb{Q}$. Now it can be proved with some effort that <em>every positive real number is an accumulation point of set $A = \\{k^{q}: q\\in \\mathbb{Q}\\}$</em> or in other words if we take any positive real number $x$ then every neighborhood of $x$ contains points of the form $k^{q}$. 
It now follows by sign preserving nature of continuous functions that $g(x) = 0$ for all positive $x$.</p>\n\n<p>So one solution of the functional equation is the constant $0$ function and apart from this trivial solution any other $g$ is such that $g(x) = 0\\Rightarrow x = 1$. From the relation $g(x/y) = g(x) - g(y)$ it then follows that $g$ is one-one function. We further note that <em>any continuous one one function is strictly monotone</em> (this is an easy consequence of intermediate value property of continuous functions) and hence $g$ is strictly monotone.</p>\n\n<p>It follows that if $k \\neq 1$ then $g(k)\\neq 0$ and $g(k^{n}) = ng(k)$ for all integers $n$ so that function $g$ can take arbitrarily large negative and positive values. By intermediate value property it follows that range of $g$ is $\\mathbb{R}$ and hence $g$ is one-one onto function.</p>\n\n<p>Next we prove that if $g'(1)$ exists then $g'(x) = g'(1)/x$. Clearly we have\n\\begin{align}\ng'(x) &amp;= \\lim_{h \\to 0}\\frac{g(x + h) - g(x)}{h}\\notag\\\\\n&amp;= \\lim_{h \\to 0}\\frac{g(1 + h/x)}{h}\\notag\\\\\n&amp;= \\lim_{h \\to 0}\\frac{g(1 + h/x) - g(1)}{h/x}\\cdot\\frac{1}{x}\\notag\\\\\n&amp;= \\frac{g'(1)}{x}\\notag\n\\end{align}\nWe next need to show that $g'(1)$ exists or more explicitly the following limit exists $$\\lim_{x \\to 1}\\frac{g(x)}{x - 1} = g'(1)\\tag{4}$$ I have not been able to prove the existence of this limit in a direct manner. Rather I use the fact that $g$ is invertible and thus there is a function $h:\\mathbb{R}\\to\\mathbb{R}^{+}$ such that $g(h(x)) = x, x\\in \\mathbb{R}$ and $h(g(x)) = x, x \\in \\mathbb{R}^{+}$. Moreover because of equation $(2)$ the function $h$ satisfies the functional equation $$h(x+y) = h(x)h(y)\\tag{5}$$ and <a href=\"https://math.stackexchange.com/a/1885860/72031\">it has already been established</a> that under these conditions the limit $$\\lim_{x \\to 0}\\frac{h(x) - 1}{x}\\tag{6}$$ exists and is non-zero unless $h$ is constant. 
Putting $h(x) - 1 = t$ we see that $x = g(1 + t)$ and as $x \\to 0, t \\to 0$ and thus $$\\lim_{t \\to 0}\\frac{g(1 + t)}{t}$$ exists and thus the limit $g'(1)$ in equation $(4)$ exists.</p>\n\n<p>It is now clear that $f(x) = xg(x)$ is differentiable for all $x &gt; 0$.</p>\n\n<hr>\n\n<p>In the above we have only discussed that if a function $g$ satisfies the equation $(2)$ and is continuous then it is differentiable also. With slightly more effort we can show that such functions actually exist. One way is to integrate the equation $g'(x) = g'(1)/x$ to get $$g(x) = g'(1)\\int_{1}^{x}\\frac{dt}{t}\\tag{7}$$ so that any multiple of the above integral is a solution for $(2)$. There is another way to find an expression for $g$. Let $x\\neq 1$ and then we know that $x^{1/n}\\neq 1$ for all positive integers $n$ and also $x^{1/n}\\to 1$ as $n\\to\\infty $. We then have $$g(x) =ng(x^{1/n})=n(x^{1/n}-1)\\cdot\\frac{g(x^{1/n})}{x^{1/n}-1}$$ Taking limits as $n\\to\\infty$ and noting that fraction on right in above equation tends to $g'(1)$ we can see that $$g(x) = g'(1)\\lim_{n\\to\\infty}n(x^{1/n}-1)\\tag{8}$$ Thus the solution $g$ is always a constant multiple of the limit in above equation. Both the integral in $(7)$ and the limit in $(8)$ are equivalent expressions for the logarithm function conventionally denoted by $\\log x$ and the theory of this function can be developed independently using either <a href=\"http://paramanands.blogspot.com/2014/05/theories-of-exponential-and-logarithmic-functions-part-1.html\" rel=\"noreferrer\">the integral</a> or <a href=\"http://paramanands.blogspot.com/2014/05/theories-of-exponential-and-logarithmic-functions-part-2_10.html\" rel=\"noreferrer\">the limit approach</a>. 
The current answer shows that using functional equation $(2)$ and assumption of continuity we can discover a very useful and important function called logarithm.</p>\n" }, { "answer_id": 2091217, "score": 7, "text": "<p>Divide the functional equation by $xy$ to get</p>\n\n<p>$$\\frac{f(xy)}{xy}=\\frac{f(y)}y+\\frac{f(x)}x$$</p>\n\n<p>Set $g(x)=\\frac{f(x)}x$ so that</p>\n\n<p>$$g(xy)=g(y)+g(x)$$</p>\n\n<p>This is actually well known, and if $g(x)$ is continuous, then the solution is given by</p>\n\n<p>$$g(x)=a\\log(x)$$</p>\n\n<p>which implies</p>\n\n<p>$$f(x)=ax\\log(x)$$</p>\n" }, { "answer_id": 2091337, "score": 18, "text": "<p>Using integrals the job is easy. Since $f$ is continuous the anti-derivative of $f$, say $F$, exists. Now integrate the functional equation with respect to $y$ on the interval $[1,2]$ to get $$\\frac{F(2x)-F(x)}{x}=x(F(2)-F(1))+ \\frac{3}{2}f(x)$$ Since $F'$ exists it now follows that $f'(x)$ exists for all $x\\in(0,\\infty)$.</p>\n\n<hr>\n\n<p>The problem is difficult if we stick to differential calculus only. I have managed to make some progress to solve the problem using methods of differential calculus and I present the same below.</p>\n\n<p>Dividing by $xy$ we get the functional equation $$\\frac{f(xy)}{xy} = \\frac{f(x)}{x} + \\frac{f(y)}{y}\\tag{1}$$ and if we put $g(x) = f(x)/x$ then $$g(xy) = g(x) + g(y)\\tag{2}$$ We prove the following theorem:</p>\n\n<blockquote>\n <p><strong>Theorem</strong>: <em>If $g:\\mathbb{R}^{+}\\to\\mathbb{R}$ is continuous at a point $c \\in \\mathbb{R}^{+}$ and satisfies $g(xy) = g(x) + g(y)$ for all $x, y \\in \\mathbb{R}^{+}$ then $g(x)$ is differentiable for all $x \\in \\mathbb{R}^{+}$ and $g'(x) = g'(1)/x$.</em></p>\n</blockquote>\n\n<p>First we prove that $g$ is continuous on $\\mathbb{R}^{+}$. Let $x &gt; 0$ and $x \\neq c$. 
Then we have $$\\lim_{h \\to 0}g(x + h) = \\lim_{h \\to 0}g(x/c)+g (c + ch/x) = g(x/c)+g(c) = g(x)$$ From the functional equation the following properties are obvious $$g(1) = 0, g(1/x) = -g(x), g(x/y) = g(x) - g(y), g(x^{q}) = qg(x), q\\in \\mathbb{Q}\\tag{3}$$ Next we prove that $x = 1$ is the only solution for $g(x) = 0$. More precisely we prove that</p>\n\n<blockquote>\n <p><em>If a continuous function $g:\\mathbb{R}^{+}\\to\\mathbb{R}$ satisfies functional equation $(2)$ and there is a number $k \\neq 1$ such that $g(k) = 0$ then $g(x) = 0$ for all $x\\in\\mathbb{R}^{+}$.</em></p>\n</blockquote>\n\n<p>Let us suppose that $k \\neq 1$ and $g(k) = 0$ and then by $(3)$ we have $g(k^{q}) = 0$ for all $q \\in \\mathbb{Q}$. Now it can be proved with some effort that <em>every positive real number is an accumulation point of set $A = \\{k^{q}: q\\in \\mathbb{Q}\\}$</em> or in other words if we take any positive real number $x$ then every neighborhood of $x$ contains points of the form $k^{q}$. It now follows by sign preserving nature of continuous functions that $g(x) = 0$ for all positive $x$.</p>\n\n<p>So one solution of the functional equation is the constant $0$ function and apart from this trivial solution any other $g$ is such that $g(x) = 0\\Rightarrow x = 1$. From the relation $g(x/y) = g(x) - g(y)$ it then follows that $g$ is one-one function. We further note that <em>any continuous one one function is strictly monotone</em> (this is an easy consequence of intermediate value property of continuous functions) and hence $g$ is strictly monotone.</p>\n\n<p>It follows that if $k \\neq 1$ then $g(k)\\neq 0$ and $g(k^{n}) = ng(k)$ for all integers $n$ so that function $g$ can take arbitrarily large negative and positive values. By intermediate value property it follows that range of $g$ is $\\mathbb{R}$ and hence $g$ is one-one onto function.</p>\n\n<p>Next we prove that if $g'(1)$ exists then $g'(x) = g'(1)/x$. 
Clearly we have\n\\begin{align}\ng'(x) &amp;= \\lim_{h \\to 0}\\frac{g(x + h) - g(x)}{h}\\notag\\\\\n&amp;= \\lim_{h \\to 0}\\frac{g(1 + h/x)}{h}\\notag\\\\\n&amp;= \\lim_{h \\to 0}\\frac{g(1 + h/x) - g(1)}{h/x}\\cdot\\frac{1}{x}\\notag\\\\\n&amp;= \\frac{g'(1)}{x}\\notag\n\\end{align}\nWe next need to show that $g'(1)$ exists or more explicitly the following limit exists $$\\lim_{x \\to 1}\\frac{g(x)}{x - 1} = g'(1)\\tag{4}$$ I have not been able to prove the existence of this limit in a direct manner. Rather I use the fact that $g$ is invertible and thus there is a function $h:\\mathbb{R}\\to\\mathbb{R}^{+}$ such that $g(h(x)) = x, x\\in \\mathbb{R}$ and $h(g(x)) = x, x \\in \\mathbb{R}^{+}$. Moreover because of equation $(2)$ the function $h$ satisfies the functional equation $$h(x+y) = h(x)h(y)\\tag{5}$$ and <a href=\"https://math.stackexchange.com/a/1885860/72031\">it has already been established</a> that under these conditions the limit $$\\lim_{x \\to 0}\\frac{h(x) - 1}{x}\\tag{6}$$ exists and is non-zero unless $h$ is constant. Putting $h(x) - 1 = t$ we see that $x = g(1 + t)$ and as $x \\to 0, t \\to 0$ and thus $$\\lim_{t \\to 0}\\frac{g(1 + t)}{t}$$ exists and thus the limit $g'(1)$ in equation $(4)$ exists.</p>\n\n<p>It is now clear that $f(x) = xg(x)$ is differentiable for all $x &gt; 0$.</p>\n\n<hr>\n\n<p>In the above we have only discussed that if a function $g$ satisfies the equation $(2)$ and is continuous then it is differentiable also. With slightly more effort we can show that such functions actually exist. One way is to integrate the equation $g'(x) = g'(1)/x$ to get $$g(x) = g'(1)\\int_{1}^{x}\\frac{dt}{t}\\tag{7}$$ so that any multiple of the above integral is a solution for $(2)$. There is another way to find an expression for $g$. Let $x\\neq 1$ and then we know that $x^{1/n}\\neq 1$ for all positive integers $n$ and also $x^{1/n}\\to 1$ as $n\\to\\infty $. 
We then have $$g(x) =ng(x^{1/n})=n(x^{1/n}-1)\\cdot\\frac{g(x^{1/n})}{x^{1/n}-1}$$ Taking limits as $n\\to\\infty$ and noting that fraction on right in above equation tends to $g'(1)$ we can see that $$g(x) = g'(1)\\lim_{n\\to\\infty}n(x^{1/n}-1)\\tag{8}$$ Thus the solution $g$ is always a constant multiple of the limit in above equation. Both the integral in $(7)$ and the limit in $(8)$ are equivalent expressions for the logarithm function conventionally denoted by $\\log x$ and the theory of this function can be developed independently using either <a href=\"http://paramanands.blogspot.com/2014/05/theories-of-exponential-and-logarithmic-functions-part-1.html\" rel=\"noreferrer\">the integral</a> or <a href=\"http://paramanands.blogspot.com/2014/05/theories-of-exponential-and-logarithmic-functions-part-2_10.html\" rel=\"noreferrer\">the limit approach</a>. The current answer shows that using functional equation $(2)$ and assumption of continuity we can discover a very useful and important function called logarithm.</p>\n" }
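Editor's note: the integration trick in the highest-scored answer above is easy to check numerically. The sketch below is an added illustration, not part of either answer; it takes the concrete solution $f(x) = x\log x$ (the case $a = 1$ of the general solution $f(x) = ax\log x$), the hand-computed antiderivative $F(x) = \frac{x^2}{2}\log x - \frac{x^2}{4}$, and verifies both the functional equation $f(xy) = xf(y) + yf(x)$ and the identity obtained by integrating it over $y \in [1,2]$.

```python
import math

# Sketch only: the a = 1 instance of the general solution f(x) = a*x*log(x).
def f(x):
    return x * math.log(x)

# Antiderivative of f, computed by hand: F(x) = x^2/2 * log(x) - x^2/4,
# so that F'(x) = x*log(x) + x/2 - x/2 = f(x).
def F(x):
    return 0.5 * x * x * math.log(x) - 0.25 * x * x

# The functional equation f(xy) = x*f(y) + y*f(x) holds on a small grid.
for x in (0.5, 1.0, 2.0, 3.7):
    for y in (0.25, 1.0, 5.0):
        assert abs(f(x * y) - (x * f(y) + y * f(x))) < 1e-12

# Integrating the functional equation over y in [1, 2] gives the identity
# (F(2x) - F(x))/x = x*(F(2) - F(1)) + (3/2)*f(x) used in the answer.
for x in (0.5, 1.0, 2.0, 3.7):
    lhs = (F(2 * x) - F(x)) / x
    rhs = x * (F(2) - F(1)) + 1.5 * f(x)
    assert abs(lhs - rhs) < 1e-10

print("both identities check out")
```

Since $F$ is differentiable, the right-hand side above is differentiable in $x$, which is exactly why $f'(x)$ exists everywhere on $(0,\infty)$.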
Question: Proving $f&#39;(1)$ exist for $f$ satisfying $f(xy)=xf(y)+yf(x)$ Answer 1: <p>Using integrals the job is easy. Since $f$ is continuous the anti-derivative of $f$, say $F$, exists. Now integrate the functional equation with respect to $y$ on the interval $[1,2]$ to get $$\frac{F(2x)-F(x)}{x}=x(F(2)-F(1))+ \frac{3}{2}f(x)$$ Since $F'$ exists it now follows that $f'(x)$ exists for all $x\in(0,\infty)$.</p> <hr> <p>The problem is difficult if we stick to differential calculus only. I have managed to make some progress to solve the problem using methods of differential calculus and I present the same below.</p> <p>Dividing by $xy$ we get the functional equation $$\frac{f(xy)}{xy} = \frac{f(x)}{x} + \frac{f(y)}{y}\tag{1}$$ and if we put $g(x) = f(x)/x$ then $$g(xy) = g(x) + g(y)\tag{2}$$ We prove the following theorem:</p> <blockquote> <p><strong>Theorem</strong>: <em>If $g:\mathbb{R}^{+}\to\mathbb{R}$ is continuous at a point $c \in \mathbb{R}^{+}$ and satisfies $g(xy) = g(x) + g(y)$ for all $x, y \in \mathbb{R}^{+}$ then $g(x)$ is differentiable for all $x \in \mathbb{R}^{+}$ and $g'(x) = g'(1)/x$.</em></p> </blockquote> <p>First we prove that $g$ is continuous on $\mathbb{R}^{+}$. Let $x &gt; 0$ and $x \neq c$. Then we have $$\lim_{h \to 0}g(x + h) = \lim_{h \to 0}g(x/c)+g(c + ch/x) = g(x/c)+g(c) = g(x)$$ From the functional equation the following properties are obvious $$g(1) = 0, g(1/x) = -g(x), g(x/y) = g(x) - g(y), g(x^{q}) = qg(x), q\in \mathbb{Q}\tag{3}$$ Next we prove that $x = 1$ is the only solution for $g(x) = 0$. More precisely we prove that</p> <blockquote> <p><em>If a continuous function $g:\mathbb{R}^{+}\to\mathbb{R}$ satisfies functional equation $(2)$ and there is a number $k \neq 1$ such that $g(k) = 0$ then $g(x) = 0$ for all $x\in\mathbb{R}^{+}$.</em></p> </blockquote> <p>Let us suppose that $k \neq 1$ and $g(k) = 0$ and then by $(3)$ we have $g(k^{q}) = 0$ for all $q \in \mathbb{Q}$. 
Now it can be proved with some effort that <em>every positive real number is an accumulation point of set $A = \{k^{q}: q\in \mathbb{Q}\}$</em> or in other words if we take any positive real number $x$ then every neighborhood of $x$ contains points of the form $k^{q}$. It now follows by sign preserving nature of continuous functions that $g(x) = 0$ for all positive $x$.</p> <p>So one solution of the functional equation is the constant $0$ function and apart from this trivial solution any other $g$ is such that $g(x) = 0\Rightarrow x = 1$. From the relation $g(x/y) = g(x) - g(y)$ it then follows that $g$ is one-one function. We further note that <em>any continuous one one function is strictly monotone</em> (this is an easy consequence of intermediate value property of continuous functions) and hence $g$ is strictly monotone.</p> <p>It follows that if $k \neq 1$ then $g(k)\neq 0$ and $g(k^{n}) = ng(k)$ for all integers $n$ so that function $g$ can take arbitrarily large negative and positive values. By intermediate value property it follows that range of $g$ is $\mathbb{R}$ and hence $g$ is one-one onto function.</p> <p>Next we prove that if $g'(1)$ exists then $g'(x) = g'(1)/x$. Clearly we have \begin{align} g'(x) &amp;= \lim_{h \to 0}\frac{g(x + h) - g(x)}{h}\notag\\ &amp;= \lim_{h \to 0}\frac{g(1 + h/x)}{h}\notag\\ &amp;= \lim_{h \to 0}\frac{g(1 + h/x) - g(1)}{h/x}\cdot\frac{1}{x}\notag\\ &amp;= \frac{g'(1)}{x}\notag \end{align} We next need to show that $g'(1)$ exists or more explicitly the following limit exists $$\lim_{x \to 1}\frac{g(x)}{x - 1} = g'(1)\tag{4}$$ I have not been able to prove the existence of this limit in a direct manner. Rather I use the fact that $g$ is invertible and thus there is a function $h:\mathbb{R}\to\mathbb{R}^{+}$ such that $g(h(x)) = x, x\in \mathbb{R}$ and $h(g(x)) = x, x \in \mathbb{R}^{+}$. 
Moreover because of equation $(2)$ the function $h$ satisfies the functional equation $$h(x+y) = h(x)h(y)\tag{5}$$ and <a href="https://math.stackexchange.com/a/1885860/72031">it has already been established</a> that under these conditions the limit $$\lim_{x \to 0}\frac{h(x) - 1}{x}\tag{6}$$ exists and is non-zero unless $h$ is constant. Putting $h(x) - 1 = t$ we see that $x = g(1 + t)$ and as $x \to 0, t \to 0$ and thus $$\lim_{t \to 0}\frac{g(1 + t)}{t}$$ exists and thus the limit $g'(1)$ in equation $(4)$ exists.</p> <p>It is now clear that $f(x) = xg(x)$ is differentiable for all $x &gt; 0$.</p> <hr> <p>In the above we have only discussed that if a function $g$ satisfies the equation $(2)$ and is continuous then it is differentiable also. With slightly more effort we can show that such functions actually exist. One way is to integrate the equation $g'(x) = g'(1)/x$ to get $$g(x) = g'(1)\int_{1}^{x}\frac{dt}{t}\tag{7}$$ so that any multiple of the above integral is a solution for $(2)$. There is another way to find an expression for $g$. Let $x\neq 1$ and then we know that $x^{1/n}\neq 1$ for all positive integers $n$ and also $x^{1/n}\to 1$ as $n\to\infty $. We then have $$g(x) =ng(x^{1/n})=n(x^{1/n}-1)\cdot\frac{g(x^{1/n})}{x^{1/n}-1}$$ Taking limits as $n\to\infty$ and noting that fraction on right in above equation tends to $g'(1)$ we can see that $$g(x) = g'(1)\lim_{n\to\infty}n(x^{1/n}-1)\tag{8}$$ Thus the solution $g$ is always a constant multiple of the limit in above equation. 
Both the integral in $(7)$ and the limit in $(8)$ are equivalent expressions for the logarithm function conventionally denoted by $\log x$ and the theory of this function can be developed independently using either <a href="http://paramanands.blogspot.com/2014/05/theories-of-exponential-and-logarithmic-functions-part-1.html" rel="noreferrer">the integral</a> or <a href="http://paramanands.blogspot.com/2014/05/theories-of-exponential-and-logarithmic-functions-part-2_10.html" rel="noreferrer">the limit approach</a>. The current answer shows that using functional equation $(2)$ and assumption of continuity we can discover a very useful and important function called logarithm.</p> Answer 2: <p>Divide the functional equation by $xy$ to get</p> <p>$$\frac{f(xy)}{xy}=\frac{f(y)}y+\frac{f(x)}x$$</p> <p>Set $g(x)=\frac{f(x)}x$ so that</p> <p>$$g(xy)=g(y)+g(x)$$</p> <p>This is actually well known, and if $g(x)$ is continuous, then the solution is given by</p> <p>$$g(x)=a\log(x)$$</p> <p>which implies</p> <p>$$f(x)=ax\log(x)$$</p> Answer 3: <p>Using integrals the job is easy. Since $f$ is continuous the anti-derivarive of $f$, say $F$ exists. Now integrate the functional equation with respect to $y$ on interval $[1,2]$ to get $$\frac{F(2x)-F(x)}{x}=x(F(2)-F(1))+ \frac{3}{2}f(x)$$ Since $F'$ exists it now follows that $f'(x) $ exists for all $x\in(0,\infty)$.</p> <hr> <p>The problem is difficult if we stick to differential calculus only. 
I have managed to make some progress to solve the problem using methods of differential calculus and I present the same below.</p> <p>Dividing by $xy$ we get the functional equation $$\frac{f(xy)}{xy} = \frac{f(x)}{x} + \frac{f(y)}{y}\tag{1}$$ and if we put $g(x) = f(x)/x$ then $$g(xy) = g(x) + g(y)\tag{2}$$ We prove the following theorem:</p> <blockquote> <p><strong>Theorem</strong>: <em>If $g:\mathbb{R}^{+}\to\mathbb{R}$ is continuous at a point $c \in \mathbb{R}^{+}$ and satisfies $g(xy) = g(x) + g(y)$ for all $x, y \in \mathbb{R}^{+}$ then $g(x)$ is differentiable for all $x \in \mathbb{R}^{+}$ and $g'(x) = g'(1)/x$.</em></p> </blockquote> <p>First we prove that $g$ is continuous on $\mathbb{R}^{+}$. Let $x &gt; 0$ and $x \neq c$. Then we have $$\lim_{h \to 0}g(x + h) = \lim_{h \to 0}g(x/c)+g (c + ch/x) = g(x/c)+g(c) = g(x)$$ From the functional equation the following properties are obvious $$g(1) = 0, g(1/x) = -g(x), g(x/y) = g(x) - g(y), g(x^{q}) = qg(x), q\in \mathbb{Q}\tag{3}$$ Next we prove that $x = 1$ is the only solution for $g(x) = 0$. More precisely we prove that</p> <blockquote> <p><em>If a continuous function $g:\mathbb{R}^{+}\to\mathbb{R}$ satisfies functional equation $(2)$ and there is a number $k \neq 1$ such that $g(k) = 0$ then $g(x) = 0$ for all $x\in\mathbb{R}^{+}$.</em></p> </blockquote> <p>Let us suppose that $k \neq 1$ and $g(k) = 0$ and then by $(3)$ we have $g(k^{q}) = 0$ for all $q \in \mathbb{Q}$. Now it can be proved with some effort that <em>every positive real number is an accumulation point of set $A = \{k^{q}: q\in \mathbb{Q}\}$</em> or in other words if we take any positive real number $x$ then every neighborhood of $x$ contains points of the form $k^{q}$. It now follows by sign preserving nature of continuous functions that $g(x) = 0$ for all positive $x$.</p> <p>So one solution of the functional equation is the constant $0$ function and apart from this trivial solution any other $g$ is such that $g(x) = 0\Rightarrow x = 1$. 
From the relation $g(x/y) = g(x) - g(y)$ it then follows that $g$ is one-one function. We further note that <em>any continuous one one function is strictly monotone</em> (this is an easy consequence of intermediate value property of continuous functions) and hence $g$ is strictly monotone.</p> <p>It follows that if $k \neq 1$ then $g(k)\neq 0$ and $g(k^{n}) = ng(k)$ for all integers $n$ so that function $g$ can take arbitrarily large negative and positive values. By intermediate value property it follows that range of $g$ is $\mathbb{R}$ and hence $g$ is one-one onto function.</p> <p>Next we prove that if $g'(1)$ exists then $g'(x) = g'(1)/x$. Clearly we have \begin{align} g'(x) &amp;= \lim_{h \to 0}\frac{g(x + h) - g(x)}{h}\notag\\ &amp;= \lim_{h \to 0}\frac{g(1 + h/x)}{h}\notag\\ &amp;= \lim_{h \to 0}\frac{g(1 + h/x) - g(1)}{h/x}\cdot\frac{1}{x}\notag\\ &amp;= \frac{g'(1)}{x}\notag \end{align} We next need to show that $g'(1)$ exists or more explicitly the following limit exists $$\lim_{x \to 1}\frac{g(x)}{x - 1} = g'(1)\tag{4}$$ I have not been able to prove the existence of this limit in a direct manner. Rather I use the fact that $g$ is invertible and thus there is a function $h:\mathbb{R}\to\mathbb{R}^{+}$ such that $g(h(x)) = x, x\in \mathbb{R}$ and $h(g(x)) = x, x \in \mathbb{R}^{+}$. Moreover because of equation $(2)$ the function $h$ satisfies the functional equation $$h(x+y) = h(x)h(y)\tag{5}$$ and <a href="https://math.stackexchange.com/a/1885860/72031">it has already been established</a> that under these conditions the limit $$\lim_{x \to 0}\frac{h(x) - 1}{x}\tag{6}$$ exists and is non-zero unless $h$ is constant. 
Putting $h(x) - 1 = t$ we see that $x = g(1 + t)$ and as $x \to 0, t \to 0$ and thus $$\lim_{t \to 0}\frac{g(1 + t)}{t}$$ exists and thus the limit $g'(1)$ in equation $(4)$ exists.</p> <p>It is now clear that $f(x) = xg(x)$ is differentiable for all $x &gt; 0$.</p> <hr> <p>In the above we have only discussed that if a function $g$ satisfies the equation $(2)$ and is continuous then it is differentiable also. With slightly more effort we can show that such functions actually exist. One way is to integrate the equation $g'(x) = g'(1)/x$ to get $$g(x) = g'(1)\int_{1}^{x}\frac{dt}{t}\tag{7}$$ so that any multiple of the above integral is a solution for $(2)$. There is another way to find an expression for $g$. Let $x\neq 1$ and then we know that $x^{1/n}\neq 1$ for all positive integers $n$ and also $x^{1/n}\to 1$ as $n\to\infty $. We then have $$g(x) =ng(x^{1/n})=n(x^{1/n}-1)\cdot\frac{g(x^{1/n})}{x^{1/n}-1}$$ Taking limits as $n\to\infty$ and noting that fraction on right in above equation tends to $g'(1)$ we can see that $$g(x) = g'(1)\lim_{n\to\infty}n(x^{1/n}-1)\tag{8}$$ Thus the solution $g$ is always a constant multiple of the limit in above equation. Both the integral in $(7)$ and the limit in $(8)$ are equivalent expressions for the logarithm function conventionally denoted by $\log x$ and the theory of this function can be developed independently using either <a href="http://paramanands.blogspot.com/2014/05/theories-of-exponential-and-logarithmic-functions-part-1.html" rel="noreferrer">the integral</a> or <a href="http://paramanands.blogspot.com/2014/05/theories-of-exponential-and-logarithmic-functions-part-2_10.html" rel="noreferrer">the limit approach</a>. 
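<p>To make the equivalence of $(7)$ and $(8)$ concrete, here is a small numerical sanity check (a Python sketch of my own, not part of the original answer; the helper names are mine): both expressions approach $\log x$, and the additivity $(2)$ holds for the computed values.</p>

```python
import math

def g_limit(x, n=10**7):
    # equation (8): g(x)/g'(1) = lim_{n -> oo} n*(x**(1/n) - 1),
    # approximated by taking one large fixed n
    return n * (x ** (1.0 / n) - 1.0)

def g_integral(x, steps=100_000):
    # equation (7): g(x)/g'(1) = integral from 1 to x of dt/t,
    # approximated by the midpoint rule (works for x < 1 too,
    # since the step h is then negative)
    h = (x - 1.0) / steps
    return sum(h / (1.0 + (i + 0.5) * h) for i in range(steps))

# both approximations agree with math.log
for x in (0.5, 2.0, 10.0):
    print(x, g_limit(x), g_integral(x), math.log(x))

# additivity g(xy) = g(x) + g(y) holds for the computed values
print(abs(g_limit(6.0) - (g_limit(2.0) + g_limit(3.0))))
```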
The current answer shows that using functional equation $(2)$ and assumption of continuity we can discover a very useful and important function called logarithm.</p> Answer 4: <p>Divide the functional equation by $xy$ to get</p> <p>$$\frac{f(xy)}{xy}=\frac{f(y)}y+\frac{f(x)}x$$</p> <p>Set $g(x)=\frac{f(x)}x$ so that</p> <p>$$g(xy)=g(y)+g(x)$$</p> <p>This is actually well known, and if $g(x)$ is continuous, then the solution is given by</p> <p>$$g(x)=a\log(x)$$</p> <p>which implies</p> <p>$$f(x)=ax\log(x)$$</p> Answer 5: <p>Using integrals the job is easy. Since $f$ is continuous the anti-derivarive of $f$, say $F$ exists. Now integrate the functional equation with respect to $y$ on interval $[1,2]$ to get $$\frac{F(2x)-F(x)}{x}=x(F(2)-F(1))+ \frac{3}{2}f(x)$$ Since $F'$ exists it now follows that $f'(x) $ exists for all $x\in(0,\infty)$.</p> <hr> <p>The problem is difficult if we stick to differential calculus only. I have managed to make some progress to solve the problem using methods of differential calculus and I present the same below.</p> <p>Dividing by $xy$ we get the functional equation $$\frac{f(xy)}{xy} = \frac{f(x)}{x} + \frac{f(y)}{y}\tag{1}$$ and if we put $g(x) = f(x)/x$ then $$g(xy) = g(x) + g(y)\tag{2}$$ We prove the following theorem:</p> <blockquote> <p><strong>Theorem</strong>: <em>If $g:\mathbb{R}^{+}\to\mathbb{R}$ is continuous at a point $c \in \mathbb{R}^{+}$ and satisfies $g(xy) = g(x) + g(y)$ for all $x, y \in \mathbb{R}^{+}$ then $g(x)$ is differentiable for all $x \in \mathbb{R}^{+}$ and $g'(x) = g'(1)/x$.</em></p> </blockquote> <p>First we prove that $g$ is continuous on $\mathbb{R}^{+}$. Let $x &gt; 0$ and $x \neq c$. Then we have $$\lim_{h \to 0}g(x + h) = \lim_{h \to 0}g(x/c)+g (c + ch/x) = g(x/c)+g(c) = g(x)$$ From the functional equation the following properties are obvious $$g(1) = 0, g(1/x) = -g(x), g(x/y) = g(x) - g(y), g(x^{q}) = qg(x), q\in \mathbb{Q}\tag{3}$$ Next we prove that $x = 1$ is the only solution for $g(x) = 0$. 
More precisely we prove that</p> <blockquote> <p><em>If a continuous function $g:\mathbb{R}^{+}\to\mathbb{R}$ satisfies functional equation $(2)$ and there is a number $k \neq 1$ such that $g(k) = 0$ then $g(x) = 0$ for all $x\in\mathbb{R}^{+}$.</em></p> </blockquote> <p>Let us suppose that $k \neq 1$ and $g(k) = 0$ and then by $(3)$ we have $g(k^{q}) = 0$ for all $q \in \mathbb{Q}$. Now it can be proved with some effort that <em>every positive real number is an accumulation point of set $A = \{k^{q}: q\in \mathbb{Q}\}$</em> or in other words if we take any positive real number $x$ then every neighborhood of $x$ contains points of the form $k^{q}$. It now follows by sign preserving nature of continuous functions that $g(x) = 0$ for all positive $x$.</p> <p>So one solution of the functional equation is the constant $0$ function and apart from this trivial solution any other $g$ is such that $g(x) = 0\Rightarrow x = 1$. From the relation $g(x/y) = g(x) - g(y)$ it then follows that $g$ is one-one function. We further note that <em>any continuous one one function is strictly monotone</em> (this is an easy consequence of intermediate value property of continuous functions) and hence $g$ is strictly monotone.</p> <p>It follows that if $k \neq 1$ then $g(k)\neq 0$ and $g(k^{n}) = ng(k)$ for all integers $n$ so that function $g$ can take arbitrarily large negative and positive values. By intermediate value property it follows that range of $g$ is $\mathbb{R}$ and hence $g$ is one-one onto function.</p> <p>Next we prove that if $g'(1)$ exists then $g'(x) = g'(1)/x$. 
Clearly we have \begin{align} g'(x) &amp;= \lim_{h \to 0}\frac{g(x + h) - g(x)}{h}\notag\\ &amp;= \lim_{h \to 0}\frac{g(1 + h/x)}{h}\notag\\ &amp;= \lim_{h \to 0}\frac{g(1 + h/x) - g(1)}{h/x}\cdot\frac{1}{x}\notag\\ &amp;= \frac{g'(1)}{x}\notag \end{align} We next need to show that $g'(1)$ exists or more explicitly the following limit exists $$\lim_{x \to 1}\frac{g(x)}{x - 1} = g'(1)\tag{4}$$ I have not been able to prove the existence of this limit in a direct manner. Rather I use the fact that $g$ is invertible and thus there is a function $h:\mathbb{R}\to\mathbb{R}^{+}$ such that $g(h(x)) = x, x\in \mathbb{R}$ and $h(g(x)) = x, x \in \mathbb{R}^{+}$. Moreover because of equation $(2)$ the function $h$ satisfies the functional equation $$h(x+y) = h(x)h(y)\tag{5}$$ and <a href="https://math.stackexchange.com/a/1885860/72031">it has already been established</a> that under these conditions the limit $$\lim_{x \to 0}\frac{h(x) - 1}{x}\tag{6}$$ exists and is non-zero unless $h$ is constant. Putting $h(x) - 1 = t$ we see that $x = g(1 + t)$ and as $x \to 0, t \to 0$ and thus $$\lim_{t \to 0}\frac{g(1 + t)}{t}$$ exists and thus the limit $g'(1)$ in equation $(4)$ exists.</p> <p>It is now clear that $f(x) = xg(x)$ is differentiable for all $x &gt; 0$.</p> <hr> <p>In the above we have only discussed that if a function $g$ satisfies the equation $(2)$ and is continuous then it is differentiable also. With slightly more effort we can show that such functions actually exist. One way is to integrate the equation $g'(x) = g'(1)/x$ to get $$g(x) = g'(1)\int_{1}^{x}\frac{dt}{t}\tag{7}$$ so that any multiple of the above integral is a solution for $(2)$. There is another way to find an expression for $g$. Let $x\neq 1$ and then we know that $x^{1/n}\neq 1$ for all positive integers $n$ and also $x^{1/n}\to 1$ as $n\to\infty $. 
We then have $$g(x) =ng(x^{1/n})=n(x^{1/n}-1)\cdot\frac{g(x^{1/n})}{x^{1/n}-1}$$ Taking limits as $n\to\infty$ and noting that fraction on right in above equation tends to $g'(1)$ we can see that $$g(x) = g'(1)\lim_{n\to\infty}n(x^{1/n}-1)\tag{8}$$ Thus the solution $g$ is always a constant multiple of the limit in above equation. Both the integral in $(7)$ and the limit in $(8)$ are equivalent expressions for the logarithm function conventionally denoted by $\log x$ and the theory of this function can be developed independently using either <a href="http://paramanands.blogspot.com/2014/05/theories-of-exponential-and-logarithmic-functions-part-1.html" rel="noreferrer">the integral</a> or <a href="http://paramanands.blogspot.com/2014/05/theories-of-exponential-and-logarithmic-functions-part-2_10.html" rel="noreferrer">the limit approach</a>. The current answer shows that using functional equation $(2)$ and assumption of continuity we can discover a very useful and important function called logarithm.</p> Answer 6: <p>Divide the functional equation by $xy$ to get</p> <p>$$\frac{f(xy)}{xy}=\frac{f(y)}y+\frac{f(x)}x$$</p> <p>Set $g(x)=\frac{f(x)}x$ so that</p> <p>$$g(xy)=g(y)+g(x)$$</p> <p>This is actually well known, and if $g(x)$ is continuous, then the solution is given by</p> <p>$$g(x)=a\log(x)$$</p> <p>which implies</p> <p>$$f(x)=ax\log(x)$$</p> Answer 7: <p>Using integrals the job is easy. Since $f$ is continuous the anti-derivarive of $f$, say $F$ exists. Now integrate the functional equation with respect to $y$ on interval $[1,2]$ to get $$\frac{F(2x)-F(x)}{x}=x(F(2)-F(1))+ \frac{3}{2}f(x)$$ Since $F'$ exists it now follows that $f'(x) $ exists for all $x\in(0,\infty)$.</p> <hr> <p>The problem is difficult if we stick to differential calculus only. 
I have managed to make some progress to solve the problem using methods of differential calculus and I present the same below.</p> <p>Dividing by $xy$ we get the functional equation $$\frac{f(xy)}{xy} = \frac{f(x)}{x} + \frac{f(y)}{y}\tag{1}$$ and if we put $g(x) = f(x)/x$ then $$g(xy) = g(x) + g(y)\tag{2}$$ We prove the following theorem:</p> <blockquote> <p><strong>Theorem</strong>: <em>If $g:\mathbb{R}^{+}\to\mathbb{R}$ is continuous at a point $c \in \mathbb{R}^{+}$ and satisfies $g(xy) = g(x) + g(y)$ for all $x, y \in \mathbb{R}^{+}$ then $g(x)$ is differentiable for all $x \in \mathbb{R}^{+}$ and $g'(x) = g'(1)/x$.</em></p> </blockquote> <p>First we prove that $g$ is continuous on $\mathbb{R}^{+}$. Let $x &gt; 0$ and $x \neq c$. Then we have $$\lim_{h \to 0}g(x + h) = \lim_{h \to 0}g(x/c)+g (c + ch/x) = g(x/c)+g(c) = g(x)$$ From the functional equation the following properties are obvious $$g(1) = 0, g(1/x) = -g(x), g(x/y) = g(x) - g(y), g(x^{q}) = qg(x), q\in \mathbb{Q}\tag{3}$$ Next we prove that $x = 1$ is the only solution for $g(x) = 0$. More precisely we prove that</p> <blockquote> <p><em>If a continuous function $g:\mathbb{R}^{+}\to\mathbb{R}$ satisfies functional equation $(2)$ and there is a number $k \neq 1$ such that $g(k) = 0$ then $g(x) = 0$ for all $x\in\mathbb{R}^{+}$.</em></p> </blockquote> <p>Let us suppose that $k \neq 1$ and $g(k) = 0$ and then by $(3)$ we have $g(k^{q}) = 0$ for all $q \in \mathbb{Q}$. Now it can be proved with some effort that <em>every positive real number is an accumulation point of set $A = \{k^{q}: q\in \mathbb{Q}\}$</em> or in other words if we take any positive real number $x$ then every neighborhood of $x$ contains points of the form $k^{q}$. It now follows by sign preserving nature of continuous functions that $g(x) = 0$ for all positive $x$.</p> <p>So one solution of the functional equation is the constant $0$ function and apart from this trivial solution any other $g$ is such that $g(x) = 0\Rightarrow x = 1$. 
From the relation $g(x/y) = g(x) - g(y)$ it then follows that $g$ is one-one function. We further note that <em>any continuous one one function is strictly monotone</em> (this is an easy consequence of intermediate value property of continuous functions) and hence $g$ is strictly monotone.</p> <p>It follows that if $k \neq 1$ then $g(k)\neq 0$ and $g(k^{n}) = ng(k)$ for all integers $n$ so that function $g$ can take arbitrarily large negative and positive values. By intermediate value property it follows that range of $g$ is $\mathbb{R}$ and hence $g$ is one-one onto function.</p> <p>Next we prove that if $g'(1)$ exists then $g'(x) = g'(1)/x$. Clearly we have \begin{align} g'(x) &amp;= \lim_{h \to 0}\frac{g(x + h) - g(x)}{h}\notag\\ &amp;= \lim_{h \to 0}\frac{g(1 + h/x)}{h}\notag\\ &amp;= \lim_{h \to 0}\frac{g(1 + h/x) - g(1)}{h/x}\cdot\frac{1}{x}\notag\\ &amp;= \frac{g'(1)}{x}\notag \end{align} We next need to show that $g'(1)$ exists or more explicitly the following limit exists $$\lim_{x \to 1}\frac{g(x)}{x - 1} = g'(1)\tag{4}$$ I have not been able to prove the existence of this limit in a direct manner. Rather I use the fact that $g$ is invertible and thus there is a function $h:\mathbb{R}\to\mathbb{R}^{+}$ such that $g(h(x)) = x, x\in \mathbb{R}$ and $h(g(x)) = x, x \in \mathbb{R}^{+}$. Moreover because of equation $(2)$ the function $h$ satisfies the functional equation $$h(x+y) = h(x)h(y)\tag{5}$$ and <a href="https://math.stackexchange.com/a/1885860/72031">it has already been established</a> that under these conditions the limit $$\lim_{x \to 0}\frac{h(x) - 1}{x}\tag{6}$$ exists and is non-zero unless $h$ is constant. 
Putting $h(x) - 1 = t$ we see that $x = g(1 + t)$ and as $x \to 0, t \to 0$ and thus $$\lim_{t \to 0}\frac{g(1 + t)}{t}$$ exists and thus the limit $g'(1)$ in equation $(4)$ exists.</p> <p>It is now clear that $f(x) = xg(x)$ is differentiable for all $x &gt; 0$.</p> <hr> <p>In the above we have only discussed that if a function $g$ satisfies the equation $(2)$ and is continuous then it is differentiable also. With slightly more effort we can show that such functions actually exist. One way is to integrate the equation $g'(x) = g'(1)/x$ to get $$g(x) = g'(1)\int_{1}^{x}\frac{dt}{t}\tag{7}$$ so that any multiple of the above integral is a solution for $(2)$. There is another way to find an expression for $g$. Let $x\neq 1$ and then we know that $x^{1/n}\neq 1$ for all positive integers $n$ and also $x^{1/n}\to 1$ as $n\to\infty $. We then have $$g(x) =ng(x^{1/n})=n(x^{1/n}-1)\cdot\frac{g(x^{1/n})}{x^{1/n}-1}$$ Taking limits as $n\to\infty$ and noting that fraction on right in above equation tends to $g'(1)$ we can see that $$g(x) = g'(1)\lim_{n\to\infty}n(x^{1/n}-1)\tag{8}$$ Thus the solution $g$ is always a constant multiple of the limit in above equation. Both the integral in $(7)$ and the limit in $(8)$ are equivalent expressions for the logarithm function conventionally denoted by $\log x$ and the theory of this function can be developed independently using either <a href="http://paramanands.blogspot.com/2014/05/theories-of-exponential-and-logarithmic-functions-part-1.html" rel="noreferrer">the integral</a> or <a href="http://paramanands.blogspot.com/2014/05/theories-of-exponential-and-logarithmic-functions-part-2_10.html" rel="noreferrer">the limit approach</a>. 
The current answer shows that using functional equation $(2)$ and assumption of continuity we can discover a very useful and important function called logarithm.</p> Answer 8: <p>Divide the functional equation by $xy$ to get</p> <p>$$\frac{f(xy)}{xy}=\frac{f(y)}y+\frac{f(x)}x$$</p> <p>Set $g(x)=\frac{f(x)}x$ so that</p> <p>$$g(xy)=g(y)+g(x)$$</p> <p>This is actually well known, and if $g(x)$ is continuous, then the solution is given by</p> <p>$$g(x)=a\log(x)$$</p> <p>which implies</p> <p>$$f(x)=ax\log(x)$$</p> Answer 9: <p>Using integrals the job is easy. Since $f$ is continuous the anti-derivarive of $f$, say $F$ exists. Now integrate the functional equation with respect to $y$ on interval $[1,2]$ to get $$\frac{F(2x)-F(x)}{x}=x(F(2)-F(1))+ \frac{3}{2}f(x)$$ Since $F'$ exists it now follows that $f'(x) $ exists for all $x\in(0,\infty)$.</p> <hr> <p>The problem is difficult if we stick to differential calculus only. I have managed to make some progress to solve the problem using methods of differential calculus and I present the same below.</p> <p>Dividing by $xy$ we get the functional equation $$\frac{f(xy)}{xy} = \frac{f(x)}{x} + \frac{f(y)}{y}\tag{1}$$ and if we put $g(x) = f(x)/x$ then $$g(xy) = g(x) + g(y)\tag{2}$$ We prove the following theorem:</p> <blockquote> <p><strong>Theorem</strong>: <em>If $g:\mathbb{R}^{+}\to\mathbb{R}$ is continuous at a point $c \in \mathbb{R}^{+}$ and satisfies $g(xy) = g(x) + g(y)$ for all $x, y \in \mathbb{R}^{+}$ then $g(x)$ is differentiable for all $x \in \mathbb{R}^{+}$ and $g'(x) = g'(1)/x$.</em></p> </blockquote> <p>First we prove that $g$ is continuous on $\mathbb{R}^{+}$. Let $x &gt; 0$ and $x \neq c$. Then we have $$\lim_{h \to 0}g(x + h) = \lim_{h \to 0}g(x/c)+g (c + ch/x) = g(x/c)+g(c) = g(x)$$ From the functional equation the following properties are obvious $$g(1) = 0, g(1/x) = -g(x), g(x/y) = g(x) - g(y), g(x^{q}) = qg(x), q\in \mathbb{Q}\tag{3}$$ Next we prove that $x = 1$ is the only solution for $g(x) = 0$. 
More precisely we prove that</p> <blockquote> <p><em>If a continuous function $g:\mathbb{R}^{+}\to\mathbb{R}$ satisfies functional equation $(2)$ and there is a number $k \neq 1$ such that $g(k) = 0$ then $g(x) = 0$ for all $x\in\mathbb{R}^{+}$.</em></p> </blockquote> <p>Let us suppose that $k \neq 1$ and $g(k) = 0$ and then by $(3)$ we have $g(k^{q}) = 0$ for all $q \in \mathbb{Q}$. Now it can be proved with some effort that <em>every positive real number is an accumulation point of set $A = \{k^{q}: q\in \mathbb{Q}\}$</em> or in other words if we take any positive real number $x$ then every neighborhood of $x$ contains points of the form $k^{q}$. It now follows by sign preserving nature of continuous functions that $g(x) = 0$ for all positive $x$.</p> <p>So one solution of the functional equation is the constant $0$ function and apart from this trivial solution any other $g$ is such that $g(x) = 0\Rightarrow x = 1$. From the relation $g(x/y) = g(x) - g(y)$ it then follows that $g$ is one-one function. We further note that <em>any continuous one one function is strictly monotone</em> (this is an easy consequence of intermediate value property of continuous functions) and hence $g$ is strictly monotone.</p> <p>It follows that if $k \neq 1$ then $g(k)\neq 0$ and $g(k^{n}) = ng(k)$ for all integers $n$ so that function $g$ can take arbitrarily large negative and positive values. By intermediate value property it follows that range of $g$ is $\mathbb{R}$ and hence $g$ is one-one onto function.</p> <p>Next we prove that if $g'(1)$ exists then $g'(x) = g'(1)/x$. 
Clearly we have \begin{align} g'(x) &amp;= \lim_{h \to 0}\frac{g(x + h) - g(x)}{h}\notag\\ &amp;= \lim_{h \to 0}\frac{g(1 + h/x)}{h}\notag\\ &amp;= \lim_{h \to 0}\frac{g(1 + h/x) - g(1)}{h/x}\cdot\frac{1}{x}\notag\\ &amp;= \frac{g'(1)}{x}\notag \end{align} We next need to show that $g'(1)$ exists or more explicitly the following limit exists $$\lim_{x \to 1}\frac{g(x)}{x - 1} = g'(1)\tag{4}$$ I have not been able to prove the existence of this limit in a direct manner. Rather I use the fact that $g$ is invertible and thus there is a function $h:\mathbb{R}\to\mathbb{R}^{+}$ such that $g(h(x)) = x, x\in \mathbb{R}$ and $h(g(x)) = x, x \in \mathbb{R}^{+}$. Moreover because of equation $(2)$ the function $h$ satisfies the functional equation $$h(x+y) = h(x)h(y)\tag{5}$$ and <a href="https://math.stackexchange.com/a/1885860/72031">it has already been established</a> that under these conditions the limit $$\lim_{x \to 0}\frac{h(x) - 1}{x}\tag{6}$$ exists and is non-zero unless $h$ is constant. Putting $h(x) - 1 = t$ we see that $x = g(1 + t)$ and as $x \to 0, t \to 0$ and thus $$\lim_{t \to 0}\frac{g(1 + t)}{t}$$ exists and thus the limit $g'(1)$ in equation $(4)$ exists.</p> <p>It is now clear that $f(x) = xg(x)$ is differentiable for all $x &gt; 0$.</p> <hr> <p>In the above we have only discussed that if a function $g$ satisfies the equation $(2)$ and is continuous then it is differentiable also. With slightly more effort we can show that such functions actually exist. One way is to integrate the equation $g'(x) = g'(1)/x$ to get $$g(x) = g'(1)\int_{1}^{x}\frac{dt}{t}\tag{7}$$ so that any multiple of the above integral is a solution for $(2)$. There is another way to find an expression for $g$. Let $x\neq 1$ and then we know that $x^{1/n}\neq 1$ for all positive integers $n$ and also $x^{1/n}\to 1$ as $n\to\infty $. 
We then have $$g(x) =ng(x^{1/n})=n(x^{1/n}-1)\cdot\frac{g(x^{1/n})}{x^{1/n}-1}$$ Taking limits as $n\to\infty$ and noting that fraction on right in above equation tends to $g'(1)$ we can see that $$g(x) = g'(1)\lim_{n\to\infty}n(x^{1/n}-1)\tag{8}$$ Thus the solution $g$ is always a constant multiple of the limit in above equation. Both the integral in $(7)$ and the limit in $(8)$ are equivalent expressions for the logarithm function conventionally denoted by $\log x$ and the theory of this function can be developed independently using either <a href="http://paramanands.blogspot.com/2014/05/theories-of-exponential-and-logarithmic-functions-part-1.html" rel="noreferrer">the integral</a> or <a href="http://paramanands.blogspot.com/2014/05/theories-of-exponential-and-logarithmic-functions-part-2_10.html" rel="noreferrer">the limit approach</a>. The current answer shows that using functional equation $(2)$ and assumption of continuity we can discover a very useful and important function called logarithm.</p> Answer 10: <p>Divide the functional equation by $xy$ to get</p> <p>$$\frac{f(xy)}{xy}=\frac{f(y)}y+\frac{f(x)}x$$</p> <p>Set $g(x)=\frac{f(x)}x$ so that</p> <p>$$g(xy)=g(y)+g(x)$$</p> <p>This is actually well known, and if $g(x)$ is continuous, then the solution is given by</p> <p>$$g(x)=a\log(x)$$</p> <p>which implies</p> <p>$$f(x)=ax\log(x)$$</p> Answer 11: <p>Using integrals the job is easy. Since $f$ is continuous the anti-derivarive of $f$, say $F$ exists. Now integrate the functional equation with respect to $y$ on interval $[1,2]$ to get $$\frac{F(2x)-F(x)}{x}=x(F(2)-F(1))+ \frac{3}{2}f(x)$$ Since $F'$ exists it now follows that $f'(x) $ exists for all $x\in(0,\infty)$.</p> <hr> <p>The problem is difficult if we stick to differential calculus only. 
I have managed to make some progress to solve the problem using methods of differential calculus and I present the same below.</p> <p>Dividing by $xy$ we get the functional equation $$\frac{f(xy)}{xy} = \frac{f(x)}{x} + \frac{f(y)}{y}\tag{1}$$ and if we put $g(x) = f(x)/x$ then $$g(xy) = g(x) + g(y)\tag{2}$$ We prove the following theorem:</p> <blockquote> <p><strong>Theorem</strong>: <em>If $g:\mathbb{R}^{+}\to\mathbb{R}$ is continuous at a point $c \in \mathbb{R}^{+}$ and satisfies $g(xy) = g(x) + g(y)$ for all $x, y \in \mathbb{R}^{+}$ then $g(x)$ is differentiable for all $x \in \mathbb{R}^{+}$ and $g'(x) = g'(1)/x$.</em></p> </blockquote> <p>First we prove that $g$ is continuous on $\mathbb{R}^{+}$. Let $x &gt; 0$ and $x \neq c$. Then we have $$\lim_{h \to 0}g(x + h) = \lim_{h \to 0}g(x/c)+g (c + ch/x) = g(x/c)+g(c) = g(x)$$ From the functional equation the following properties are obvious $$g(1) = 0, g(1/x) = -g(x), g(x/y) = g(x) - g(y), g(x^{q}) = qg(x), q\in \mathbb{Q}\tag{3}$$ Next we prove that $x = 1$ is the only solution for $g(x) = 0$. More precisely we prove that</p> <blockquote> <p><em>If a continuous function $g:\mathbb{R}^{+}\to\mathbb{R}$ satisfies functional equation $(2)$ and there is a number $k \neq 1$ such that $g(k) = 0$ then $g(x) = 0$ for all $x\in\mathbb{R}^{+}$.</em></p> </blockquote> <p>Let us suppose that $k \neq 1$ and $g(k) = 0$ and then by $(3)$ we have $g(k^{q}) = 0$ for all $q \in \mathbb{Q}$. Now it can be proved with some effort that <em>every positive real number is an accumulation point of set $A = \{k^{q}: q\in \mathbb{Q}\}$</em> or in other words if we take any positive real number $x$ then every neighborhood of $x$ contains points of the form $k^{q}$. It now follows by sign preserving nature of continuous functions that $g(x) = 0$ for all positive $x$.</p> <p>So one solution of the functional equation is the constant $0$ function and apart from this trivial solution any other $g$ is such that $g(x) = 0\Rightarrow x = 1$. 
From the relation $g(x/y) = g(x) - g(y)$ it then follows that $g$ is one-one function. We further note that <em>any continuous one one function is strictly monotone</em> (this is an easy consequence of intermediate value property of continuous functions) and hence $g$ is strictly monotone.</p> <p>It follows that if $k \neq 1$ then $g(k)\neq 0$ and $g(k^{n}) = ng(k)$ for all integers $n$ so that function $g$ can take arbitrarily large negative and positive values. By intermediate value property it follows that range of $g$ is $\mathbb{R}$ and hence $g$ is one-one onto function.</p> <p>Next we prove that if $g'(1)$ exists then $g'(x) = g'(1)/x$. Clearly we have \begin{align} g'(x) &amp;= \lim_{h \to 0}\frac{g(x + h) - g(x)}{h}\notag\\ &amp;= \lim_{h \to 0}\frac{g(1 + h/x)}{h}\notag\\ &amp;= \lim_{h \to 0}\frac{g(1 + h/x) - g(1)}{h/x}\cdot\frac{1}{x}\notag\\ &amp;= \frac{g'(1)}{x}\notag \end{align} We next need to show that $g'(1)$ exists or more explicitly the following limit exists $$\lim_{x \to 1}\frac{g(x)}{x - 1} = g'(1)\tag{4}$$ I have not been able to prove the existence of this limit in a direct manner. Rather I use the fact that $g$ is invertible and thus there is a function $h:\mathbb{R}\to\mathbb{R}^{+}$ such that $g(h(x)) = x, x\in \mathbb{R}$ and $h(g(x)) = x, x \in \mathbb{R}^{+}$. Moreover because of equation $(2)$ the function $h$ satisfies the functional equation $$h(x+y) = h(x)h(y)\tag{5}$$ and <a href="https://math.stackexchange.com/a/1885860/72031">it has already been established</a> that under these conditions the limit $$\lim_{x \to 0}\frac{h(x) - 1}{x}\tag{6}$$ exists and is non-zero unless $h$ is constant. 
Putting $h(x) - 1 = t$ we see that $x = g(1 + t)$ and as $x \to 0, t \to 0$ and thus $$\lim_{t \to 0}\frac{g(1 + t)}{t}$$ exists and thus the limit $g'(1)$ in equation $(4)$ exists.</p> <p>It is now clear that $f(x) = xg(x)$ is differentiable for all $x &gt; 0$.</p> <hr> <p>In the above we have only discussed that if a function $g$ satisfies the equation $(2)$ and is continuous then it is differentiable also. With slightly more effort we can show that such functions actually exist. One way is to integrate the equation $g'(x) = g'(1)/x$ to get $$g(x) = g'(1)\int_{1}^{x}\frac{dt}{t}\tag{7}$$ so that any multiple of the above integral is a solution for $(2)$. There is another way to find an expression for $g$. Let $x\neq 1$ and then we know that $x^{1/n}\neq 1$ for all positive integers $n$ and also $x^{1/n}\to 1$ as $n\to\infty $. We then have $$g(x) =ng(x^{1/n})=n(x^{1/n}-1)\cdot\frac{g(x^{1/n})}{x^{1/n}-1}$$ Taking limits as $n\to\infty$ and noting that fraction on right in above equation tends to $g'(1)$ we can see that $$g(x) = g'(1)\lim_{n\to\infty}n(x^{1/n}-1)\tag{8}$$ Thus the solution $g$ is always a constant multiple of the limit in above equation. Both the integral in $(7)$ and the limit in $(8)$ are equivalent expressions for the logarithm function conventionally denoted by $\log x$ and the theory of this function can be developed independently using either <a href="http://paramanands.blogspot.com/2014/05/theories-of-exponential-and-logarithmic-functions-part-1.html" rel="noreferrer">the integral</a> or <a href="http://paramanands.blogspot.com/2014/05/theories-of-exponential-and-logarithmic-functions-part-2_10.html" rel="noreferrer">the limit approach</a>. 
The current answer shows that using functional equation $(2)$ and assumption of continuity we can discover a very useful and important function called logarithm.</p> Answer 12: <p>Divide the functional equation by $xy$ to get</p> <p>$$\frac{f(xy)}{xy}=\frac{f(y)}y+\frac{f(x)}x$$</p> <p>Set $g(x)=\frac{f(x)}x$ so that</p> <p>$$g(xy)=g(y)+g(x)$$</p> <p>This is actually well known, and if $g(x)$ is continuous, then the solution is given by</p> <p>$$g(x)=a\log(x)$$</p> <p>which implies</p> <p>$$f(x)=ax\log(x)$$</p> Answer 13: <p>Using integrals the job is easy. Since $f$ is continuous the anti-derivarive of $f$, say $F$ exists. Now integrate the functional equation with respect to $y$ on interval $[1,2]$ to get $$\frac{F(2x)-F(x)}{x}=x(F(2)-F(1))+ \frac{3}{2}f(x)$$ Since $F'$ exists it now follows that $f'(x) $ exists for all $x\in(0,\infty)$.</p> <hr> <p>The problem is difficult if we stick to differential calculus only. I have managed to make some progress to solve the problem using methods of differential calculus and I present the same below.</p> <p>Dividing by $xy$ we get the functional equation $$\frac{f(xy)}{xy} = \frac{f(x)}{x} + \frac{f(y)}{y}\tag{1}$$ and if we put $g(x) = f(x)/x$ then $$g(xy) = g(x) + g(y)\tag{2}$$ We prove the following theorem:</p> <blockquote> <p><strong>Theorem</strong>: <em>If $g:\mathbb{R}^{+}\to\mathbb{R}$ is continuous at a point $c \in \mathbb{R}^{+}$ and satisfies $g(xy) = g(x) + g(y)$ for all $x, y \in \mathbb{R}^{+}$ then $g(x)$ is differentiable for all $x \in \mathbb{R}^{+}$ and $g'(x) = g'(1)/x$.</em></p> </blockquote> <p>First we prove that $g$ is continuous on $\mathbb{R}^{+}$. Let $x &gt; 0$ and $x \neq c$. Then we have $$\lim_{h \to 0}g(x + h) = \lim_{h \to 0}g(x/c)+g (c + ch/x) = g(x/c)+g(c) = g(x)$$ From the functional equation the following properties are obvious $$g(1) = 0, g(1/x) = -g(x), g(x/y) = g(x) - g(y), g(x^{q}) = qg(x), q\in \mathbb{Q}\tag{3}$$ Next we prove that $x = 1$ is the only solution for $g(x) = 0$. 
More precisely we prove that</p> <blockquote> <p><em>If a continuous function $g:\mathbb{R}^{+}\to\mathbb{R}$ satisfies functional equation $(2)$ and there is a number $k \neq 1$ such that $g(k) = 0$ then $g(x) = 0$ for all $x\in\mathbb{R}^{+}$.</em></p> </blockquote> <p>Let us suppose that $k \neq 1$ and $g(k) = 0$ and then by $(3)$ we have $g(k^{q}) = 0$ for all $q \in \mathbb{Q}$. Now it can be proved with some effort that <em>every positive real number is an accumulation point of set $A = \{k^{q}: q\in \mathbb{Q}\}$</em> or in other words if we take any positive real number $x$ then every neighborhood of $x$ contains points of the form $k^{q}$. It now follows by sign preserving nature of continuous functions that $g(x) = 0$ for all positive $x$.</p> <p>So one solution of the functional equation is the constant $0$ function and apart from this trivial solution any other $g$ is such that $g(x) = 0\Rightarrow x = 1$. From the relation $g(x/y) = g(x) - g(y)$ it then follows that $g$ is one-one function. We further note that <em>any continuous one one function is strictly monotone</em> (this is an easy consequence of intermediate value property of continuous functions) and hence $g$ is strictly monotone.</p> <p>It follows that if $k \neq 1$ then $g(k)\neq 0$ and $g(k^{n}) = ng(k)$ for all integers $n$ so that function $g$ can take arbitrarily large negative and positive values. By intermediate value property it follows that range of $g$ is $\mathbb{R}$ and hence $g$ is one-one onto function.</p> <p>Next we prove that if $g'(1)$ exists then $g'(x) = g'(1)/x$. 
Clearly we have \begin{align} g'(x) &amp;= \lim_{h \to 0}\frac{g(x + h) - g(x)}{h}\notag\\ &amp;= \lim_{h \to 0}\frac{g(1 + h/x)}{h}\notag\\ &amp;= \lim_{h \to 0}\frac{g(1 + h/x) - g(1)}{h/x}\cdot\frac{1}{x}\notag\\ &amp;= \frac{g'(1)}{x}\notag \end{align} We next need to show that $g'(1)$ exists or more explicitly the following limit exists $$\lim_{x \to 1}\frac{g(x)}{x - 1} = g'(1)\tag{4}$$ I have not been able to prove the existence of this limit in a direct manner. Rather I use the fact that $g$ is invertible and thus there is a function $h:\mathbb{R}\to\mathbb{R}^{+}$ such that $g(h(x)) = x, x\in \mathbb{R}$ and $h(g(x)) = x, x \in \mathbb{R}^{+}$. Moreover because of equation $(2)$ the function $h$ satisfies the functional equation $$h(x+y) = h(x)h(y)\tag{5}$$ and <a href="https://math.stackexchange.com/a/1885860/72031">it has already been established</a> that under these conditions the limit $$\lim_{x \to 0}\frac{h(x) - 1}{x}\tag{6}$$ exists and is non-zero unless $h$ is constant. Putting $h(x) - 1 = t$ we see that $x = g(1 + t)$ and as $x \to 0, t \to 0$ and thus $$\lim_{t \to 0}\frac{g(1 + t)}{t}$$ exists and thus the limit $g'(1)$ in equation $(4)$ exists.</p> <p>It is now clear that $f(x) = xg(x)$ is differentiable for all $x &gt; 0$.</p> <hr> <p>In the above we have only discussed that if a function $g$ satisfies the equation $(2)$ and is continuous then it is differentiable also. With slightly more effort we can show that such functions actually exist. One way is to integrate the equation $g'(x) = g'(1)/x$ to get $$g(x) = g'(1)\int_{1}^{x}\frac{dt}{t}\tag{7}$$ so that any multiple of the above integral is a solution for $(2)$. There is another way to find an expression for $g$. Let $x\neq 1$ and then we know that $x^{1/n}\neq 1$ for all positive integers $n$ and also $x^{1/n}\to 1$ as $n\to\infty $. 
We then have $$g(x) = ng(x^{1/n})=n(x^{1/n}-1)\cdot\frac{g(x^{1/n})}{x^{1/n}-1}$$ Taking limits as $n\to\infty$ and noting that the fraction on the right in the above equation tends to $g'(1)$, we see that $$g(x) = g'(1)\lim_{n\to\infty}n(x^{1/n}-1)\tag{8}$$ Thus the solution $g$ is always a constant multiple of the limit in the above equation. Both the integral in $(7)$ and the limit in $(8)$ are equivalent expressions for the logarithm function, conventionally denoted by $\log x$, and the theory of this function can be developed independently using either <a href="http://paramanands.blogspot.com/2014/05/theories-of-exponential-and-logarithmic-functions-part-1.html" rel="noreferrer">the integral</a> or <a href="http://paramanands.blogspot.com/2014/05/theories-of-exponential-and-logarithmic-functions-part-2_10.html" rel="noreferrer">the limit approach</a>. The current answer shows that using functional equation $(2)$ and the assumption of continuity we can discover a very useful and important function called the logarithm.</p> Answer 14: <p>Divide the functional equation by $xy$ to get</p> <p>$$\frac{f(xy)}{xy}=\frac{f(y)}y+\frac{f(x)}x$$</p> <p>Set $g(x)=\frac{f(x)}x$ so that</p> <p>$$g(xy)=g(y)+g(x)$$</p> <p>This is actually well known, and if $g(x)$ is continuous, then the solution is given by</p> <p>$$g(x)=a\log(x)$$</p> <p>which implies</p> <p>$$f(x)=ax\log(x)$$</p>
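As a quick numerical sanity check (my addition, not part of the original answers), the closed form $f(x) = ax\log(x)$ can be tested against the functional equation $f(xy) = xf(y) + yf(x)$ at a handful of sample points; the constant $a$ below is an arbitrary choice, since any real $a$ gives a solution:

```python
import math

a = 2.5  # arbitrary constant; any real a yields a solution

def f(x):
    # candidate solution f(x) = a * x * log(x)
    return a * x * math.log(x)

# check f(x*y) == x*f(y) + y*f(x) on a grid of positive sample points
for x in (0.5, 1.0, 3.0, 7.2):
    for y in (0.25, 1.0, 2.0, 9.9):
        lhs = f(x * y)
        rhs = x * f(y) + y * f(x)
        assert math.isclose(lhs, rhs, rel_tol=1e-12, abs_tol=1e-12)
print("f(xy) = x f(y) + y f(x) holds at all sampled points")
```

Of course this only checks the identity at finitely many points; the algebraic verification amounts to $a\,xy\log(xy) = a\,xy(\log x + \log y)$, which is just $\log(xy) = \log x + \log y$.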
I have managed to make some progress to solve the problem using methods of differential calculus and I present the same below.</p> <p>Dividing by $xy$ we get the functional equation $$\frac{f(xy)}{xy} = \frac{f(x)}{x} + \frac{f(y)}{y}\tag{1}$$ and if we put $g(x) = f(x)/x$ then $$g(xy) = g(x) + g(y)\tag{2}$$ We prove the following theorem:</p> <blockquote> <p><strong>Theorem</strong>: <em>If $g:\mathbb{R}^{+}\to\mathbb{R}$ is continuous at a point $c \in \mathbb{R}^{+}$ and satisfies $g(xy) = g(x) + g(y)$ for all $x, y \in \mathbb{R}^{+}$ then $g(x)$ is differentiable for all $x \in \mathbb{R}^{+}$ and $g'(x) = g'(1)/x$.</em></p> </blockquote> <p>First we prove that $g$ is continuous on $\mathbb{R}^{+}$. Let $x &gt; 0$ and $x \neq c$. Then we have $$\lim_{h \to 0}g(x + h) = \lim_{h \to 0}g(x/c)+g (c + ch/x) = g(x/c)+g(c) = g(x)$$ From the functional equation the following properties are obvious $$g(1) = 0, g(1/x) = -g(x), g(x/y) = g(x) - g(y), g(x^{q}) = qg(x), q\in \mathbb{Q}\tag{3}$$ Next we prove that $x = 1$ is the only solution for $g(x) = 0$. More precisely we prove that</p> <blockquote> <p><em>If a continuous function $g:\mathbb{R}^{+}\to\mathbb{R}$ satisfies functional equation $(2)$ and there is a number $k \neq 1$ such that $g(k) = 0$ then $g(x) = 0$ for all $x\in\mathbb{R}^{+}$.</em></p> </blockquote> <p>Let us suppose that $k \neq 1$ and $g(k) = 0$ and then by $(3)$ we have $g(k^{q}) = 0$ for all $q \in \mathbb{Q}$. Now it can be proved with some effort that <em>every positive real number is an accumulation point of set $A = \{k^{q}: q\in \mathbb{Q}\}$</em> or in other words if we take any positive real number $x$ then every neighborhood of $x$ contains points of the form $k^{q}$. It now follows by sign preserving nature of continuous functions that $g(x) = 0$ for all positive $x$.</p> <p>So one solution of the functional equation is the constant $0$ function and apart from this trivial solution any other $g$ is such that $g(x) = 0\Rightarrow x = 1$. 
From the relation $g(x/y) = g(x) - g(y)$ it then follows that $g$ is one-one function. We further note that <em>any continuous one one function is strictly monotone</em> (this is an easy consequence of intermediate value property of continuous functions) and hence $g$ is strictly monotone.</p> <p>It follows that if $k \neq 1$ then $g(k)\neq 0$ and $g(k^{n}) = ng(k)$ for all integers $n$ so that function $g$ can take arbitrarily large negative and positive values. By intermediate value property it follows that range of $g$ is $\mathbb{R}$ and hence $g$ is one-one onto function.</p> <p>Next we prove that if $g'(1)$ exists then $g'(x) = g'(1)/x$. Clearly we have \begin{align} g'(x) &amp;= \lim_{h \to 0}\frac{g(x + h) - g(x)}{h}\notag\\ &amp;= \lim_{h \to 0}\frac{g(1 + h/x)}{h}\notag\\ &amp;= \lim_{h \to 0}\frac{g(1 + h/x) - g(1)}{h/x}\cdot\frac{1}{x}\notag\\ &amp;= \frac{g'(1)}{x}\notag \end{align} We next need to show that $g'(1)$ exists or more explicitly the following limit exists $$\lim_{x \to 1}\frac{g(x)}{x - 1} = g'(1)\tag{4}$$ I have not been able to prove the existence of this limit in a direct manner. Rather I use the fact that $g$ is invertible and thus there is a function $h:\mathbb{R}\to\mathbb{R}^{+}$ such that $g(h(x)) = x, x\in \mathbb{R}$ and $h(g(x)) = x, x \in \mathbb{R}^{+}$. Moreover because of equation $(2)$ the function $h$ satisfies the functional equation $$h(x+y) = h(x)h(y)\tag{5}$$ and <a href="https://math.stackexchange.com/a/1885860/72031">it has already been established</a> that under these conditions the limit $$\lim_{x \to 0}\frac{h(x) - 1}{x}\tag{6}$$ exists and is non-zero unless $h$ is constant. 
Putting $h(x) - 1 = t$ we see that $x = g(1 + t)$ and as $x \to 0, t \to 0$ and thus $$\lim_{t \to 0}\frac{g(1 + t)}{t}$$ exists and thus the limit $g'(1)$ in equation $(4)$ exists.</p> <p>It is now clear that $f(x) = xg(x)$ is differentiable for all $x &gt; 0$.</p> <hr> <p>In the above we have only discussed that if a function $g$ satisfies the equation $(2)$ and is continuous then it is differentiable also. With slightly more effort we can show that such functions actually exist. One way is to integrate the equation $g'(x) = g'(1)/x$ to get $$g(x) = g'(1)\int_{1}^{x}\frac{dt}{t}\tag{7}$$ so that any multiple of the above integral is a solution for $(2)$. There is another way to find an expression for $g$. Let $x\neq 1$ and then we know that $x^{1/n}\neq 1$ for all positive integers $n$ and also $x^{1/n}\to 1$ as $n\to\infty $. We then have $$g(x) =ng(x^{1/n})=n(x^{1/n}-1)\cdot\frac{g(x^{1/n})}{x^{1/n}-1}$$ Taking limits as $n\to\infty$ and noting that fraction on right in above equation tends to $g'(1)$ we can see that $$g(x) = g'(1)\lim_{n\to\infty}n(x^{1/n}-1)\tag{8}$$ Thus the solution $g$ is always a constant multiple of the limit in above equation. Both the integral in $(7)$ and the limit in $(8)$ are equivalent expressions for the logarithm function conventionally denoted by $\log x$ and the theory of this function can be developed independently using either <a href="http://paramanands.blogspot.com/2014/05/theories-of-exponential-and-logarithmic-functions-part-1.html" rel="noreferrer">the integral</a> or <a href="http://paramanands.blogspot.com/2014/05/theories-of-exponential-and-logarithmic-functions-part-2_10.html" rel="noreferrer">the limit approach</a>. 
The current answer shows that using functional equation $(2)$ and assumption of continuity we can discover a very useful and important function called logarithm.</p> Answer 16: <p>Divide the functional equation by $xy$ to get</p> <p>$$\frac{f(xy)}{xy}=\frac{f(y)}y+\frac{f(x)}x$$</p> <p>Set $g(x)=\frac{f(x)}x$ so that</p> <p>$$g(xy)=g(y)+g(x)$$</p> <p>This is actually well known, and if $g(x)$ is continuous, then the solution is given by</p> <p>$$g(x)=a\log(x)$$</p> <p>which implies</p> <p>$$f(x)=ax\log(x)$$</p> Answer 17: <p>Using integrals the job is easy. Since $f$ is continuous the anti-derivarive of $f$, say $F$ exists. Now integrate the functional equation with respect to $y$ on interval $[1,2]$ to get $$\frac{F(2x)-F(x)}{x}=x(F(2)-F(1))+ \frac{3}{2}f(x)$$ Since $F'$ exists it now follows that $f'(x) $ exists for all $x\in(0,\infty)$.</p> <hr> <p>The problem is difficult if we stick to differential calculus only. I have managed to make some progress to solve the problem using methods of differential calculus and I present the same below.</p> <p>Dividing by $xy$ we get the functional equation $$\frac{f(xy)}{xy} = \frac{f(x)}{x} + \frac{f(y)}{y}\tag{1}$$ and if we put $g(x) = f(x)/x$ then $$g(xy) = g(x) + g(y)\tag{2}$$ We prove the following theorem:</p> <blockquote> <p><strong>Theorem</strong>: <em>If $g:\mathbb{R}^{+}\to\mathbb{R}$ is continuous at a point $c \in \mathbb{R}^{+}$ and satisfies $g(xy) = g(x) + g(y)$ for all $x, y \in \mathbb{R}^{+}$ then $g(x)$ is differentiable for all $x \in \mathbb{R}^{+}$ and $g'(x) = g'(1)/x$.</em></p> </blockquote> <p>First we prove that $g$ is continuous on $\mathbb{R}^{+}$. Let $x &gt; 0$ and $x \neq c$. Then we have $$\lim_{h \to 0}g(x + h) = \lim_{h \to 0}g(x/c)+g (c + ch/x) = g(x/c)+g(c) = g(x)$$ From the functional equation the following properties are obvious $$g(1) = 0, g(1/x) = -g(x), g(x/y) = g(x) - g(y), g(x^{q}) = qg(x), q\in \mathbb{Q}\tag{3}$$ Next we prove that $x = 1$ is the only solution for $g(x) = 0$. 
More precisely we prove that</p> <blockquote> <p><em>If a continuous function $g:\mathbb{R}^{+}\to\mathbb{R}$ satisfies functional equation $(2)$ and there is a number $k \neq 1$ such that $g(k) = 0$ then $g(x) = 0$ for all $x\in\mathbb{R}^{+}$.</em></p> </blockquote> <p>Let us suppose that $k \neq 1$ and $g(k) = 0$ and then by $(3)$ we have $g(k^{q}) = 0$ for all $q \in \mathbb{Q}$. Now it can be proved with some effort that <em>every positive real number is an accumulation point of set $A = \{k^{q}: q\in \mathbb{Q}\}$</em> or in other words if we take any positive real number $x$ then every neighborhood of $x$ contains points of the form $k^{q}$. It now follows by sign preserving nature of continuous functions that $g(x) = 0$ for all positive $x$.</p> <p>So one solution of the functional equation is the constant $0$ function and apart from this trivial solution any other $g$ is such that $g(x) = 0\Rightarrow x = 1$. From the relation $g(x/y) = g(x) - g(y)$ it then follows that $g$ is one-one function. We further note that <em>any continuous one one function is strictly monotone</em> (this is an easy consequence of intermediate value property of continuous functions) and hence $g$ is strictly monotone.</p> <p>It follows that if $k \neq 1$ then $g(k)\neq 0$ and $g(k^{n}) = ng(k)$ for all integers $n$ so that function $g$ can take arbitrarily large negative and positive values. By intermediate value property it follows that range of $g$ is $\mathbb{R}$ and hence $g$ is one-one onto function.</p> <p>Next we prove that if $g'(1)$ exists then $g'(x) = g'(1)/x$. 
Clearly we have \begin{align} g'(x) &amp;= \lim_{h \to 0}\frac{g(x + h) - g(x)}{h}\notag\\ &amp;= \lim_{h \to 0}\frac{g(1 + h/x)}{h}\notag\\ &amp;= \lim_{h \to 0}\frac{g(1 + h/x) - g(1)}{h/x}\cdot\frac{1}{x}\notag\\ &amp;= \frac{g'(1)}{x}\notag \end{align} We next need to show that $g'(1)$ exists or more explicitly the following limit exists $$\lim_{x \to 1}\frac{g(x)}{x - 1} = g'(1)\tag{4}$$ I have not been able to prove the existence of this limit in a direct manner. Rather I use the fact that $g$ is invertible and thus there is a function $h:\mathbb{R}\to\mathbb{R}^{+}$ such that $g(h(x)) = x, x\in \mathbb{R}$ and $h(g(x)) = x, x \in \mathbb{R}^{+}$. Moreover because of equation $(2)$ the function $h$ satisfies the functional equation $$h(x+y) = h(x)h(y)\tag{5}$$ and <a href="https://math.stackexchange.com/a/1885860/72031">it has already been established</a> that under these conditions the limit $$\lim_{x \to 0}\frac{h(x) - 1}{x}\tag{6}$$ exists and is non-zero unless $h$ is constant. Putting $h(x) - 1 = t$ we see that $x = g(1 + t)$ and as $x \to 0, t \to 0$ and thus $$\lim_{t \to 0}\frac{g(1 + t)}{t}$$ exists and thus the limit $g'(1)$ in equation $(4)$ exists.</p> <p>It is now clear that $f(x) = xg(x)$ is differentiable for all $x &gt; 0$.</p> <hr> <p>In the above we have only discussed that if a function $g$ satisfies the equation $(2)$ and is continuous then it is differentiable also. With slightly more effort we can show that such functions actually exist. One way is to integrate the equation $g'(x) = g'(1)/x$ to get $$g(x) = g'(1)\int_{1}^{x}\frac{dt}{t}\tag{7}$$ so that any multiple of the above integral is a solution for $(2)$. There is another way to find an expression for $g$. Let $x\neq 1$ and then we know that $x^{1/n}\neq 1$ for all positive integers $n$ and also $x^{1/n}\to 1$ as $n\to\infty $. 
We then have $$g(x) =ng(x^{1/n})=n(x^{1/n}-1)\cdot\frac{g(x^{1/n})}{x^{1/n}-1}$$ Taking limits as $n\to\infty$ and noting that fraction on right in above equation tends to $g'(1)$ we can see that $$g(x) = g'(1)\lim_{n\to\infty}n(x^{1/n}-1)\tag{8}$$ Thus the solution $g$ is always a constant multiple of the limit in above equation. Both the integral in $(7)$ and the limit in $(8)$ are equivalent expressions for the logarithm function conventionally denoted by $\log x$ and the theory of this function can be developed independently using either <a href="http://paramanands.blogspot.com/2014/05/theories-of-exponential-and-logarithmic-functions-part-1.html" rel="noreferrer">the integral</a> or <a href="http://paramanands.blogspot.com/2014/05/theories-of-exponential-and-logarithmic-functions-part-2_10.html" rel="noreferrer">the limit approach</a>. The current answer shows that using functional equation $(2)$ and assumption of continuity we can discover a very useful and important function called logarithm.</p> Answer 18: <p>Divide the functional equation by $xy$ to get</p> <p>$$\frac{f(xy)}{xy}=\frac{f(y)}y+\frac{f(x)}x$$</p> <p>Set $g(x)=\frac{f(x)}x$ so that</p> <p>$$g(xy)=g(y)+g(x)$$</p> <p>This is actually well known, and if $g(x)$ is continuous, then the solution is given by</p> <p>$$g(x)=a\log(x)$$</p> <p>which implies</p> <p>$$f(x)=ax\log(x)$$</p> Answer 19: <p>Using integrals the job is easy. Since $f$ is continuous the anti-derivarive of $f$, say $F$ exists. Now integrate the functional equation with respect to $y$ on interval $[1,2]$ to get $$\frac{F(2x)-F(x)}{x}=x(F(2)-F(1))+ \frac{3}{2}f(x)$$ Since $F'$ exists it now follows that $f'(x) $ exists for all $x\in(0,\infty)$.</p> <hr> <p>The problem is difficult if we stick to differential calculus only. 
I have managed to make some progress to solve the problem using methods of differential calculus and I present the same below.</p> <p>Dividing by $xy$ we get the functional equation $$\frac{f(xy)}{xy} = \frac{f(x)}{x} + \frac{f(y)}{y}\tag{1}$$ and if we put $g(x) = f(x)/x$ then $$g(xy) = g(x) + g(y)\tag{2}$$ We prove the following theorem:</p> <blockquote> <p><strong>Theorem</strong>: <em>If $g:\mathbb{R}^{+}\to\mathbb{R}$ is continuous at a point $c \in \mathbb{R}^{+}$ and satisfies $g(xy) = g(x) + g(y)$ for all $x, y \in \mathbb{R}^{+}$ then $g(x)$ is differentiable for all $x \in \mathbb{R}^{+}$ and $g'(x) = g'(1)/x$.</em></p> </blockquote> <p>First we prove that $g$ is continuous on $\mathbb{R}^{+}$. Let $x &gt; 0$ and $x \neq c$. Then we have $$\lim_{h \to 0}g(x + h) = \lim_{h \to 0}g(x/c)+g (c + ch/x) = g(x/c)+g(c) = g(x)$$ From the functional equation the following properties are obvious $$g(1) = 0, g(1/x) = -g(x), g(x/y) = g(x) - g(y), g(x^{q}) = qg(x), q\in \mathbb{Q}\tag{3}$$ Next we prove that $x = 1$ is the only solution for $g(x) = 0$. More precisely we prove that</p> <blockquote> <p><em>If a continuous function $g:\mathbb{R}^{+}\to\mathbb{R}$ satisfies functional equation $(2)$ and there is a number $k \neq 1$ such that $g(k) = 0$ then $g(x) = 0$ for all $x\in\mathbb{R}^{+}$.</em></p> </blockquote> <p>Let us suppose that $k \neq 1$ and $g(k) = 0$ and then by $(3)$ we have $g(k^{q}) = 0$ for all $q \in \mathbb{Q}$. Now it can be proved with some effort that <em>every positive real number is an accumulation point of set $A = \{k^{q}: q\in \mathbb{Q}\}$</em> or in other words if we take any positive real number $x$ then every neighborhood of $x$ contains points of the form $k^{q}$. It now follows by sign preserving nature of continuous functions that $g(x) = 0$ for all positive $x$.</p> <p>So one solution of the functional equation is the constant $0$ function and apart from this trivial solution any other $g$ is such that $g(x) = 0\Rightarrow x = 1$. 
From the relation $g(x/y) = g(x) - g(y)$ it then follows that $g$ is one-one function. We further note that <em>any continuous one one function is strictly monotone</em> (this is an easy consequence of intermediate value property of continuous functions) and hence $g$ is strictly monotone.</p> <p>It follows that if $k \neq 1$ then $g(k)\neq 0$ and $g(k^{n}) = ng(k)$ for all integers $n$ so that function $g$ can take arbitrarily large negative and positive values. By intermediate value property it follows that range of $g$ is $\mathbb{R}$ and hence $g$ is one-one onto function.</p> <p>Next we prove that if $g'(1)$ exists then $g'(x) = g'(1)/x$. Clearly we have \begin{align} g'(x) &amp;= \lim_{h \to 0}\frac{g(x + h) - g(x)}{h}\notag\\ &amp;= \lim_{h \to 0}\frac{g(1 + h/x)}{h}\notag\\ &amp;= \lim_{h \to 0}\frac{g(1 + h/x) - g(1)}{h/x}\cdot\frac{1}{x}\notag\\ &amp;= \frac{g'(1)}{x}\notag \end{align} We next need to show that $g'(1)$ exists or more explicitly the following limit exists $$\lim_{x \to 1}\frac{g(x)}{x - 1} = g'(1)\tag{4}$$ I have not been able to prove the existence of this limit in a direct manner. Rather I use the fact that $g$ is invertible and thus there is a function $h:\mathbb{R}\to\mathbb{R}^{+}$ such that $g(h(x)) = x, x\in \mathbb{R}$ and $h(g(x)) = x, x \in \mathbb{R}^{+}$. Moreover because of equation $(2)$ the function $h$ satisfies the functional equation $$h(x+y) = h(x)h(y)\tag{5}$$ and <a href="https://math.stackexchange.com/a/1885860/72031">it has already been established</a> that under these conditions the limit $$\lim_{x \to 0}\frac{h(x) - 1}{x}\tag{6}$$ exists and is non-zero unless $h$ is constant. 
Putting $h(x) - 1 = t$ we see that $x = g(1 + t)$ and as $x \to 0, t \to 0$ and thus $$\lim_{t \to 0}\frac{g(1 + t)}{t}$$ exists and thus the limit $g'(1)$ in equation $(4)$ exists.</p> <p>It is now clear that $f(x) = xg(x)$ is differentiable for all $x &gt; 0$.</p> <hr> <p>In the above we have only discussed that if a function $g$ satisfies the equation $(2)$ and is continuous then it is differentiable also. With slightly more effort we can show that such functions actually exist. One way is to integrate the equation $g'(x) = g'(1)/x$ to get $$g(x) = g'(1)\int_{1}^{x}\frac{dt}{t}\tag{7}$$ so that any multiple of the above integral is a solution for $(2)$. There is another way to find an expression for $g$. Let $x\neq 1$ and then we know that $x^{1/n}\neq 1$ for all positive integers $n$ and also $x^{1/n}\to 1$ as $n\to\infty $. We then have $$g(x) =ng(x^{1/n})=n(x^{1/n}-1)\cdot\frac{g(x^{1/n})}{x^{1/n}-1}$$ Taking limits as $n\to\infty$ and noting that fraction on right in above equation tends to $g'(1)$ we can see that $$g(x) = g'(1)\lim_{n\to\infty}n(x^{1/n}-1)\tag{8}$$ Thus the solution $g$ is always a constant multiple of the limit in above equation. Both the integral in $(7)$ and the limit in $(8)$ are equivalent expressions for the logarithm function conventionally denoted by $\log x$ and the theory of this function can be developed independently using either <a href="http://paramanands.blogspot.com/2014/05/theories-of-exponential-and-logarithmic-functions-part-1.html" rel="noreferrer">the integral</a> or <a href="http://paramanands.blogspot.com/2014/05/theories-of-exponential-and-logarithmic-functions-part-2_10.html" rel="noreferrer">the limit approach</a>. 
The current answer shows that using functional equation $(2)$ and assumption of continuity we can discover a very useful and important function called logarithm.</p> Answer 20: <p>Divide the functional equation by $xy$ to get</p> <p>$$\frac{f(xy)}{xy}=\frac{f(y)}y+\frac{f(x)}x$$</p> <p>Set $g(x)=\frac{f(x)}x$ so that</p> <p>$$g(xy)=g(y)+g(x)$$</p> <p>This is actually well known, and if $g(x)$ is continuous, then the solution is given by</p> <p>$$g(x)=a\log(x)$$</p> <p>which implies</p> <p>$$f(x)=ax\log(x)$$</p> Answer 21: <p>Using integrals the job is easy. Since $f$ is continuous the anti-derivarive of $f$, say $F$ exists. Now integrate the functional equation with respect to $y$ on interval $[1,2]$ to get $$\frac{F(2x)-F(x)}{x}=x(F(2)-F(1))+ \frac{3}{2}f(x)$$ Since $F'$ exists it now follows that $f'(x) $ exists for all $x\in(0,\infty)$.</p> <hr> <p>The problem is difficult if we stick to differential calculus only. I have managed to make some progress to solve the problem using methods of differential calculus and I present the same below.</p> <p>Dividing by $xy$ we get the functional equation $$\frac{f(xy)}{xy} = \frac{f(x)}{x} + \frac{f(y)}{y}\tag{1}$$ and if we put $g(x) = f(x)/x$ then $$g(xy) = g(x) + g(y)\tag{2}$$ We prove the following theorem:</p> <blockquote> <p><strong>Theorem</strong>: <em>If $g:\mathbb{R}^{+}\to\mathbb{R}$ is continuous at a point $c \in \mathbb{R}^{+}$ and satisfies $g(xy) = g(x) + g(y)$ for all $x, y \in \mathbb{R}^{+}$ then $g(x)$ is differentiable for all $x \in \mathbb{R}^{+}$ and $g'(x) = g'(1)/x$.</em></p> </blockquote> <p>First we prove that $g$ is continuous on $\mathbb{R}^{+}$. Let $x &gt; 0$ and $x \neq c$. Then we have $$\lim_{h \to 0}g(x + h) = \lim_{h \to 0}g(x/c)+g (c + ch/x) = g(x/c)+g(c) = g(x)$$ From the functional equation the following properties are obvious $$g(1) = 0, g(1/x) = -g(x), g(x/y) = g(x) - g(y), g(x^{q}) = qg(x), q\in \mathbb{Q}\tag{3}$$ Next we prove that $x = 1$ is the only solution for $g(x) = 0$. 
More precisely we prove that</p> <blockquote> <p><em>If a continuous function $g:\mathbb{R}^{+}\to\mathbb{R}$ satisfies functional equation $(2)$ and there is a number $k \neq 1$ such that $g(k) = 0$ then $g(x) = 0$ for all $x\in\mathbb{R}^{+}$.</em></p> </blockquote> <p>Let us suppose that $k \neq 1$ and $g(k) = 0$ and then by $(3)$ we have $g(k^{q}) = 0$ for all $q \in \mathbb{Q}$. Now it can be proved with some effort that <em>every positive real number is an accumulation point of set $A = \{k^{q}: q\in \mathbb{Q}\}$</em> or in other words if we take any positive real number $x$ then every neighborhood of $x$ contains points of the form $k^{q}$. It now follows by sign preserving nature of continuous functions that $g(x) = 0$ for all positive $x$.</p> <p>So one solution of the functional equation is the constant $0$ function and apart from this trivial solution any other $g$ is such that $g(x) = 0\Rightarrow x = 1$. From the relation $g(x/y) = g(x) - g(y)$ it then follows that $g$ is one-one function. We further note that <em>any continuous one one function is strictly monotone</em> (this is an easy consequence of intermediate value property of continuous functions) and hence $g$ is strictly monotone.</p> <p>It follows that if $k \neq 1$ then $g(k)\neq 0$ and $g(k^{n}) = ng(k)$ for all integers $n$ so that function $g$ can take arbitrarily large negative and positive values. By intermediate value property it follows that range of $g$ is $\mathbb{R}$ and hence $g$ is one-one onto function.</p> <p>Next we prove that if $g'(1)$ exists then $g'(x) = g'(1)/x$. 
Clearly we have \begin{align} g'(x) &amp;= \lim_{h \to 0}\frac{g(x + h) - g(x)}{h}\notag\\ &amp;= \lim_{h \to 0}\frac{g(1 + h/x)}{h}\notag\\ &amp;= \lim_{h \to 0}\frac{g(1 + h/x) - g(1)}{h/x}\cdot\frac{1}{x}\notag\\ &amp;= \frac{g'(1)}{x}\notag \end{align} We next need to show that $g'(1)$ exists or more explicitly the following limit exists $$\lim_{x \to 1}\frac{g(x)}{x - 1} = g'(1)\tag{4}$$ I have not been able to prove the existence of this limit in a direct manner. Rather I use the fact that $g$ is invertible and thus there is a function $h:\mathbb{R}\to\mathbb{R}^{+}$ such that $g(h(x)) = x, x\in \mathbb{R}$ and $h(g(x)) = x, x \in \mathbb{R}^{+}$. Moreover because of equation $(2)$ the function $h$ satisfies the functional equation $$h(x+y) = h(x)h(y)\tag{5}$$ and <a href="https://math.stackexchange.com/a/1885860/72031">it has already been established</a> that under these conditions the limit $$\lim_{x \to 0}\frac{h(x) - 1}{x}\tag{6}$$ exists and is non-zero unless $h$ is constant. Putting $h(x) - 1 = t$ we see that $x = g(1 + t)$ and as $x \to 0, t \to 0$ and thus $$\lim_{t \to 0}\frac{g(1 + t)}{t}$$ exists and thus the limit $g'(1)$ in equation $(4)$ exists.</p> <p>It is now clear that $f(x) = xg(x)$ is differentiable for all $x &gt; 0$.</p> <hr> <p>In the above we have only discussed that if a function $g$ satisfies the equation $(2)$ and is continuous then it is differentiable also. With slightly more effort we can show that such functions actually exist. One way is to integrate the equation $g'(x) = g'(1)/x$ to get $$g(x) = g'(1)\int_{1}^{x}\frac{dt}{t}\tag{7}$$ so that any multiple of the above integral is a solution for $(2)$. There is another way to find an expression for $g$. Let $x\neq 1$ and then we know that $x^{1/n}\neq 1$ for all positive integers $n$ and also $x^{1/n}\to 1$ as $n\to\infty $. 
We then have $$g(x) =ng(x^{1/n})=n(x^{1/n}-1)\cdot\frac{g(x^{1/n})}{x^{1/n}-1}$$ Taking limits as $n\to\infty$ and noting that fraction on right in above equation tends to $g'(1)$ we can see that $$g(x) = g'(1)\lim_{n\to\infty}n(x^{1/n}-1)\tag{8}$$ Thus the solution $g$ is always a constant multiple of the limit in above equation. Both the integral in $(7)$ and the limit in $(8)$ are equivalent expressions for the logarithm function conventionally denoted by $\log x$ and the theory of this function can be developed independently using either <a href="http://paramanands.blogspot.com/2014/05/theories-of-exponential-and-logarithmic-functions-part-1.html" rel="noreferrer">the integral</a> or <a href="http://paramanands.blogspot.com/2014/05/theories-of-exponential-and-logarithmic-functions-part-2_10.html" rel="noreferrer">the limit approach</a>. The current answer shows that using functional equation $(2)$ and assumption of continuity we can discover a very useful and important function called logarithm.</p> Answer 22: <p>Divide the functional equation by $xy$ to get</p> <p>$$\frac{f(xy)}{xy}=\frac{f(y)}y+\frac{f(x)}x$$</p> <p>Set $g(x)=\frac{f(x)}x$ so that</p> <p>$$g(xy)=g(y)+g(x)$$</p> <p>This is actually well known, and if $g(x)$ is continuous, then the solution is given by</p> <p>$$g(x)=a\log(x)$$</p> <p>which implies</p> <p>$$f(x)=ax\log(x)$$</p> Answer 23: <p>Using integrals the job is easy. Since $f$ is continuous the anti-derivarive of $f$, say $F$ exists. Now integrate the functional equation with respect to $y$ on interval $[1,2]$ to get $$\frac{F(2x)-F(x)}{x}=x(F(2)-F(1))+ \frac{3}{2}f(x)$$ Since $F'$ exists it now follows that $f'(x) $ exists for all $x\in(0,\infty)$.</p> <hr> <p>The problem is difficult if we stick to differential calculus only. 
I have managed to make some progress to solve the problem using methods of differential calculus and I present the same below.</p> <p>Dividing by $xy$ we get the functional equation $$\frac{f(xy)}{xy} = \frac{f(x)}{x} + \frac{f(y)}{y}\tag{1}$$ and if we put $g(x) = f(x)/x$ then $$g(xy) = g(x) + g(y)\tag{2}$$ We prove the following theorem:</p> <blockquote> <p><strong>Theorem</strong>: <em>If $g:\mathbb{R}^{+}\to\mathbb{R}$ is continuous at a point $c \in \mathbb{R}^{+}$ and satisfies $g(xy) = g(x) + g(y)$ for all $x, y \in \mathbb{R}^{+}$ then $g(x)$ is differentiable for all $x \in \mathbb{R}^{+}$ and $g'(x) = g'(1)/x$.</em></p> </blockquote> <p>First we prove that $g$ is continuous on $\mathbb{R}^{+}$. Let $x &gt; 0$ and $x \neq c$. Then we have $$\lim_{h \to 0}g(x + h) = \lim_{h \to 0}g(x/c)+g (c + ch/x) = g(x/c)+g(c) = g(x)$$ From the functional equation the following properties are obvious $$g(1) = 0, g(1/x) = -g(x), g(x/y) = g(x) - g(y), g(x^{q}) = qg(x), q\in \mathbb{Q}\tag{3}$$ Next we prove that $x = 1$ is the only solution for $g(x) = 0$. More precisely we prove that</p> <blockquote> <p><em>If a continuous function $g:\mathbb{R}^{+}\to\mathbb{R}$ satisfies functional equation $(2)$ and there is a number $k \neq 1$ such that $g(k) = 0$ then $g(x) = 0$ for all $x\in\mathbb{R}^{+}$.</em></p> </blockquote> <p>Let us suppose that $k \neq 1$ and $g(k) = 0$ and then by $(3)$ we have $g(k^{q}) = 0$ for all $q \in \mathbb{Q}$. Now it can be proved with some effort that <em>every positive real number is an accumulation point of set $A = \{k^{q}: q\in \mathbb{Q}\}$</em> or in other words if we take any positive real number $x$ then every neighborhood of $x$ contains points of the form $k^{q}$. It now follows by sign preserving nature of continuous functions that $g(x) = 0$ for all positive $x$.</p> <p>So one solution of the functional equation is the constant $0$ function and apart from this trivial solution any other $g$ is such that $g(x) = 0\Rightarrow x = 1$. 
From the relation $g(x/y) = g(x) - g(y)$ it then follows that $g$ is one-one function. We further note that <em>any continuous one one function is strictly monotone</em> (this is an easy consequence of intermediate value property of continuous functions) and hence $g$ is strictly monotone.</p> <p>It follows that if $k \neq 1$ then $g(k)\neq 0$ and $g(k^{n}) = ng(k)$ for all integers $n$ so that function $g$ can take arbitrarily large negative and positive values. By intermediate value property it follows that range of $g$ is $\mathbb{R}$ and hence $g$ is one-one onto function.</p> <p>Next we prove that if $g'(1)$ exists then $g'(x) = g'(1)/x$. Clearly we have \begin{align} g'(x) &amp;= \lim_{h \to 0}\frac{g(x + h) - g(x)}{h}\notag\\ &amp;= \lim_{h \to 0}\frac{g(1 + h/x)}{h}\notag\\ &amp;= \lim_{h \to 0}\frac{g(1 + h/x) - g(1)}{h/x}\cdot\frac{1}{x}\notag\\ &amp;= \frac{g'(1)}{x}\notag \end{align} We next need to show that $g'(1)$ exists or more explicitly the following limit exists $$\lim_{x \to 1}\frac{g(x)}{x - 1} = g'(1)\tag{4}$$ I have not been able to prove the existence of this limit in a direct manner. Rather I use the fact that $g$ is invertible and thus there is a function $h:\mathbb{R}\to\mathbb{R}^{+}$ such that $g(h(x)) = x, x\in \mathbb{R}$ and $h(g(x)) = x, x \in \mathbb{R}^{+}$. Moreover because of equation $(2)$ the function $h$ satisfies the functional equation $$h(x+y) = h(x)h(y)\tag{5}$$ and <a href="https://math.stackexchange.com/a/1885860/72031">it has already been established</a> that under these conditions the limit $$\lim_{x \to 0}\frac{h(x) - 1}{x}\tag{6}$$ exists and is non-zero unless $h$ is constant. 
Putting $h(x) - 1 = t$ we see that $x = g(1 + t)$ and as $x \to 0, t \to 0$ and thus $$\lim_{t \to 0}\frac{g(1 + t)}{t}$$ exists and thus the limit $g'(1)$ in equation $(4)$ exists.</p> <p>It is now clear that $f(x) = xg(x)$ is differentiable for all $x &gt; 0$.</p> <hr> <p>In the above we have only discussed that if a function $g$ satisfies the equation $(2)$ and is continuous then it is differentiable also. With slightly more effort we can show that such functions actually exist. One way is to integrate the equation $g'(x) = g'(1)/x$ to get $$g(x) = g'(1)\int_{1}^{x}\frac{dt}{t}\tag{7}$$ so that any multiple of the above integral is a solution for $(2)$. There is another way to find an expression for $g$. Let $x\neq 1$ and then we know that $x^{1/n}\neq 1$ for all positive integers $n$ and also $x^{1/n}\to 1$ as $n\to\infty $. We then have $$g(x) =ng(x^{1/n})=n(x^{1/n}-1)\cdot\frac{g(x^{1/n})}{x^{1/n}-1}$$ Taking limits as $n\to\infty$ and noting that fraction on right in above equation tends to $g'(1)$ we can see that $$g(x) = g'(1)\lim_{n\to\infty}n(x^{1/n}-1)\tag{8}$$ Thus the solution $g$ is always a constant multiple of the limit in above equation. Both the integral in $(7)$ and the limit in $(8)$ are equivalent expressions for the logarithm function conventionally denoted by $\log x$ and the theory of this function can be developed independently using either <a href="http://paramanands.blogspot.com/2014/05/theories-of-exponential-and-logarithmic-functions-part-1.html" rel="noreferrer">the integral</a> or <a href="http://paramanands.blogspot.com/2014/05/theories-of-exponential-and-logarithmic-functions-part-2_10.html" rel="noreferrer">the limit approach</a>. 
The current answer shows that using functional equation $(2)$ and assumption of continuity we can discover a very useful and important function called logarithm.</p> Answer 24: <p>Divide the functional equation by $xy$ to get</p> <p>$$\frac{f(xy)}{xy}=\frac{f(y)}y+\frac{f(x)}x$$</p> <p>Set $g(x)=\frac{f(x)}x$ so that</p> <p>$$g(xy)=g(y)+g(x)$$</p> <p>This is actually well known, and if $g(x)$ is continuous, then the solution is given by</p> <p>$$g(x)=a\log(x)$$</p> <p>which implies</p> <p>$$f(x)=ax\log(x)$$</p> 
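The closed form $f(x)=ax\log(x)$ and the limit representation $n(x^{1/n}-1)\to\log x$ from equation $(8)$ can both be sanity-checked numerically. The following sketch is illustrative only (not part of any answer above); the constant $a$, the sample points, and the tolerances are arbitrary choices:

```python
import math

# Illustrative check (not from the answers above): the closed form
# f(x) = a*x*log(x) satisfies f(xy) = x*f(y) + y*f(x), and the
# sequence n*(x^(1/n) - 1) from equation (8) converges to log(x).

a = 2.5  # arbitrary constant; any real value of a works

def f(x):
    return a * x * math.log(x)

# f(xy) = a*xy*(log x + log y) = x*f(y) + y*f(x) holds identically
for x, y in [(2.0, 3.0), (0.5, 7.0), (1.0, 4.2)]:
    assert abs(f(x * y) - (x * f(y) + y * f(x))) < 1e-9

# limit representation (8) with g'(1) = 1: n*(x^(1/n) - 1) -> log(x)
x, n = 5.0, 10**6
approx = n * (x ** (1.0 / n) - 1.0)
assert abs(approx - math.log(x)) < 1e-4
```

For large but finite $n$ the error in the limit check behaves like $(\log x)^2/(2n)$, which is why the loose `1e-4` tolerance suffices at $n=10^6$.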
Prove that $\frac{x_1}{x_2}+\frac{x_2}{x_3}+\frac{x_3}{x_4}+\cdots +\frac{x_{n-1}}{x_n}+\frac{x_n}{x_1}&lt;2n-1$
2,091,511
8
https://math.stackexchange.com/questions/2091511/prove-that-fracx-1x-2-fracx-2x-3-fracx-3x-4-cdots-fracx-n-1
<p>Let $x_1,x_2,x_3,\cdots ,x_n (n\ge2)$ be real numbers greater than $1.$ Suppose that $|x_i-x_{i+1}|&lt;1$ for $i=1,2,3,\cdots,(n-1)$. </p> <blockquote> <p>Prove that $$\frac{x_1}{x_2}+\frac{x_2}{x_3}+\frac{x_3}{x_4}+\cdots +\frac{x_{n-1}}{x_n}+\frac{x_n}{x_1}&lt;2n-1$$</p> </blockquote> <p>Please help!!!</p>
[ { "answer_id": 2091705, "score": 5, "text": "<p>Set $d_i = x_{i+1} - x_i$ for $i = 1, \\ldots, n-1$. Then\n$$\n \\frac{x_n}{x_1} = 1 + \\sum_{i=1}^{n-1} \\frac{d_i}{x_1} \\quad , \n\\quad \\frac{x_i}{x_{i+1}} = 1 - \\frac{d_i}{x_{i+1}}\n$$\nand therefore\n$$\n\\frac{x_n}{x_1} + \\sum_{i=1}^{n-1} \\frac{x_i}{x_{i+1}} = \n n + \\sum_{i=1}^{n-1} d_i \\left( \\frac{1}{x_1} - \\frac{1}{x_{i+1}} \\right) \\\\\n \\le n + \\sum_{i=1}^{n-1} \\lvert d_i \\lvert \\left\\lvert \\frac{1}{x_1} - \\frac{1}{x_{i+1}} \\right \\rvert &lt; n + (n-1) = 2n-1\n$$\nbecause</p>\n\n<ul>\n<li>$\\lvert d_i \\lvert &lt; 1$ and</li>\n<li>all numbers $1/x_i$ are in the\ninterval $(0, 1]$ so that the (absolute value of) any difference\n$1/{x_1} - 1/{x_{i+1}}$ is less than one.</li>\n</ul>\n\n<hr>\n\n<p>The inequality can actually be improved a bit.\nFor fixed $i$ set $m = \\min(x_1, x_{i+1})$ and $M = \\max(x_1, x_{i+1})$.\nThen $M &lt; m + i$ and therefore\n$$\n\\left\\lvert \\frac{1}{x_1} - \\frac{1}{x_{i+1}} \\right \\rvert\n = \\frac 1m - \\frac 1M &lt; \\frac 1m - \\frac{1}{m+i}\n = \\frac{i}{m(m+i)} \\le \\frac{i}{i+1}\n$$\nand the above method gives\n$$\n\\frac{x_n}{x_1} + \\sum_{i=1}^{n-1} \\frac{x_i}{x_{i+1}} \n&lt; n + \\sum_{i=1}^{n-1}\\frac{i}{i+1} \n= (2n-1) - \\sum_{i=2}^{n} \\frac 1i \\, .\n$$\nChoosing $x_i = i + (i-1) \\varepsilon$ shows that this bound is sharp.</p>\n" }, { "answer_id": 2091555, "score": 4, "text": "<p>First, suppose that at least one of the fractions is strictly less than 1. Without loss of generality, we can assume that $\\dfrac{x_n}{x_1}&lt;1$. Then, $$\\dfrac{x_n}{x_1}+\\sum_{i=1}^{n-1}\\dfrac{x_i}{x_{i+1}}&lt;1+\\sum_{i=1}^{n-1}(1+\\dfrac{x_i-x_{i+1}}{x_{i+1}})&lt;1+2(n-1) = 2n-1.$$</p>\n\n<p>Now if, all of the fractions are at least 1, then they are all equal to 1 because their product is 1. In that case, the sum is $n$ and readily satisfies $n&lt;2n-1$. 
</p>\n" }, { "answer_id": 2092767, "score": 2, "text": "<p>Proof: I get this proof for two case</p>\n\n<p>(1): if for all $k=1,2,\\cdots,n-1$,have $a_{k}\\le a_{k+1}$,then we have\n$$a_{k}\\le a_{k+1}&lt;a_{k}+1$$\nso we have\n$$a_{i}&lt;a_{i-1}+1&lt;a_{i-2}+2&lt;\\cdots&lt;a_{1}+(i-1)$$\nthen we have\n$$\\dfrac{a_{1}}{a_{2}}+\\dfrac{a_{2}}{a_{3}}+\\cdots+\\dfrac{a_{n-1}}{a_{n}}+\\dfrac{a_{n}}{a_{1}}&lt;(n-1)+\\dfrac{a_{1}+(n-1)}{a_{1}}&lt;(n-1)+1+(n-1)=2n-1$$</p>\n\n<p>(2):define set $A=\\{k|a_{k}&gt;a_{k+1}\\}$,Assmue that $|A|=p$,for $k\\in A$,we have\n$$a_{k+1}&lt;a_{k}&lt;a_{k+1}+1,\\dfrac{a_{k}}{a_{k+1}}&lt;\\dfrac{a_{k+1}+1}{a_{k+1}}\n&lt;1+\\dfrac{1}{a_{k+1}}&lt;2$$\nthen we have\n$$\\dfrac{a_{1}}{a_{2}}+\\dfrac{a_{2}}{a_{3}}+\\cdots+\\dfrac{a_{n-1}}{a_{n}}&lt;2p+(n-1-p)=n-1+p$$\nand\n$$\\dfrac{a_{n}}{a_{1}}=\\dfrac{(a_{n}-a_{n-1})+(a_{n-1}-a_{n-2})+\\cdots+(a_{2}-a_{1})+a_{1}}{a_{1}}&lt;\\dfrac{(n-1-p)+a_{1}}{a_{1}}&lt;(n-1-p)+1=n-p$$\nso for this case also have\n$$\\dfrac{a_{1}}{a_{2}}+\\dfrac{a_{2}}{a_{3}}+\\cdots+\\dfrac{a_{n-1}}{a_{n}}+\\dfrac{a_{n}}{a_{1}}&lt;n-1+p+n-p=2n-1$$</p>\n" }, { "answer_id": 2091705, "score": 5, "text": "<p>Set $d_i = x_{i+1} - x_i$ for $i = 1, \\ldots, n-1$. 
Then\n$$\n \\frac{x_n}{x_1} = 1 + \\sum_{i=1}^{n-1} \\frac{d_i}{x_1} \\quad , \n\\quad \\frac{x_i}{x_{i+1}} = 1 - \\frac{d_i}{x_{i+1}}\n$$\nand therefore\n$$\n\\frac{x_n}{x_1} + \\sum_{i=1}^{n-1} \\frac{x_i}{x_{i+1}} = \n n + \\sum_{i=1}^{n-1} d_i \\left( \\frac{1}{x_1} - \\frac{1}{x_{i+1}} \\right) \\\\\n \\le n + \\sum_{i=1}^{n-1} \\lvert d_i \\lvert \\left\\lvert \\frac{1}{x_1} - \\frac{1}{x_{i+1}} \\right \\rvert &lt; n + (n-1) = 2n-1\n$$\nbecause</p>\n\n<ul>\n<li>$\\lvert d_i \\lvert &lt; 1$ and</li>\n<li>all numbers $1/x_i$ are in the\ninterval $(0, 1]$ so that the (absolute value of) any difference\n$1/{x_1} - 1/{x_{i+1}}$ is less than one.</li>\n</ul>\n\n<hr>\n\n<p>The inequality can actually be improved a bit.\nFor fixed $i$ set $m = \\min(x_1, x_{i+1})$ and $M = \\max(x_1, x_{i+1})$.\nThen $M &lt; m + i$ and therefore\n$$\n\\left\\lvert \\frac{1}{x_1} - \\frac{1}{x_{i+1}} \\right \\rvert\n = \\frac 1m - \\frac 1M &lt; \\frac 1m - \\frac{1}{m+i}\n = \\frac{i}{m(m+i)} \\le \\frac{i}{i+1}\n$$\nand the above method gives\n$$\n\\frac{x_n}{x_1} + \\sum_{i=1}^{n-1} \\frac{x_i}{x_{i+1}} \n&lt; n + \\sum_{i=1}^{n-1}\\frac{i}{i+1} \n= (2n-1) - \\sum_{i=2}^{n} \\frac 1i \\, .\n$$\nChoosing $x_i = i + (i-1) \\varepsilon$ shows that this bound is sharp.</p>\n" }, { "answer_id": 2091555, "score": 4, "text": "<p>First, suppose that at least one of the fractions is strictly less than 1. Without loss of generality, we can assume that $\\dfrac{x_n}{x_1}&lt;1$. Then, $$\\dfrac{x_n}{x_1}+\\sum_{i=1}^{n-1}\\dfrac{x_i}{x_{i+1}}&lt;1+\\sum_{i=1}^{n-1}(1+\\dfrac{x_i-x_{i+1}}{x_{i+1}})&lt;1+2(n-1) = 2n-1.$$</p>\n\n<p>Now if, all of the fractions are at least 1, then they are all equal to 1 because their product is 1. In that case, the sum is $n$ and readily satisfies $n&lt;2n-1$. 
</p>\n" }, { "answer_id": 2092767, "score": 2, "text": "<p>Proof: I get this proof for two case</p>\n\n<p>(1): if for all $k=1,2,\\cdots,n-1$,have $a_{k}\\le a_{k+1}$,then we have\n$$a_{k}\\le a_{k+1}&lt;a_{k}+1$$\nso we have\n$$a_{i}&lt;a_{i-1}+1&lt;a_{i-2}+2&lt;\\cdots&lt;a_{1}+(i-1)$$\nthen we have\n$$\\dfrac{a_{1}}{a_{2}}+\\dfrac{a_{2}}{a_{3}}+\\cdots+\\dfrac{a_{n-1}}{a_{n}}+\\dfrac{a_{n}}{a_{1}}&lt;(n-1)+\\dfrac{a_{1}+(n-1)}{a_{1}}&lt;(n-1)+1+(n-1)=2n-1$$</p>\n\n<p>(2):define set $A=\\{k|a_{k}&gt;a_{k+1}\\}$,Assmue that $|A|=p$,for $k\\in A$,we have\n$$a_{k+1}&lt;a_{k}&lt;a_{k+1}+1,\\dfrac{a_{k}}{a_{k+1}}&lt;\\dfrac{a_{k+1}+1}{a_{k+1}}\n&lt;1+\\dfrac{1}{a_{k+1}}&lt;2$$\nthen we have\n$$\\dfrac{a_{1}}{a_{2}}+\\dfrac{a_{2}}{a_{3}}+\\cdots+\\dfrac{a_{n-1}}{a_{n}}&lt;2p+(n-1-p)=n-1+p$$\nand\n$$\\dfrac{a_{n}}{a_{1}}=\\dfrac{(a_{n}-a_{n-1})+(a_{n-1}-a_{n-2})+\\cdots+(a_{2}-a_{1})+a_{1}}{a_{1}}&lt;\\dfrac{(n-1-p)+a_{1}}{a_{1}}&lt;(n-1-p)+1=n-p$$\nso for this case also have\n$$\\dfrac{a_{1}}{a_{2}}+\\dfrac{a_{2}}{a_{3}}+\\cdots+\\dfrac{a_{n-1}}{a_{n}}+\\dfrac{a_{n}}{a_{1}}&lt;n-1+p+n-p=2n-1$$</p>\n" }, { "answer_id": 2091705, "score": 5, "text": "<p>Set $d_i = x_{i+1} - x_i$ for $i = 1, \\ldots, n-1$. 
Then\n$$\n \\frac{x_n}{x_1} = 1 + \\sum_{i=1}^{n-1} \\frac{d_i}{x_1} \\quad , \n\\quad \\frac{x_i}{x_{i+1}} = 1 - \\frac{d_i}{x_{i+1}}\n$$\nand therefore\n$$\n\\frac{x_n}{x_1} + \\sum_{i=1}^{n-1} \\frac{x_i}{x_{i+1}} = \n n + \\sum_{i=1}^{n-1} d_i \\left( \\frac{1}{x_1} - \\frac{1}{x_{i+1}} \\right) \\\\\n \\le n + \\sum_{i=1}^{n-1} \\lvert d_i \\lvert \\left\\lvert \\frac{1}{x_1} - \\frac{1}{x_{i+1}} \\right \\rvert &lt; n + (n-1) = 2n-1\n$$\nbecause</p>\n\n<ul>\n<li>$\\lvert d_i \\lvert &lt; 1$ and</li>\n<li>all numbers $1/x_i$ are in the\ninterval $(0, 1]$ so that the (absolute value of) any difference\n$1/{x_1} - 1/{x_{i+1}}$ is less than one.</li>\n</ul>\n\n<hr>\n\n<p>The inequality can actually be improved a bit.\nFor fixed $i$ set $m = \\min(x_1, x_{i+1})$ and $M = \\max(x_1, x_{i+1})$.\nThen $M &lt; m + i$ and therefore\n$$\n\\left\\lvert \\frac{1}{x_1} - \\frac{1}{x_{i+1}} \\right \\rvert\n = \\frac 1m - \\frac 1M &lt; \\frac 1m - \\frac{1}{m+i}\n = \\frac{i}{m(m+i)} \\le \\frac{i}{i+1}\n$$\nand the above method gives\n$$\n\\frac{x_n}{x_1} + \\sum_{i=1}^{n-1} \\frac{x_i}{x_{i+1}} \n&lt; n + \\sum_{i=1}^{n-1}\\frac{i}{i+1} \n= (2n-1) - \\sum_{i=2}^{n} \\frac 1i \\, .\n$$\nChoosing $x_i = i + (i-1) \\varepsilon$ shows that this bound is sharp.</p>\n" }, { "answer_id": 2091555, "score": 4, "text": "<p>First, suppose that at least one of the fractions is strictly less than 1. Without loss of generality, we can assume that $\\dfrac{x_n}{x_1}&lt;1$. Then, $$\\dfrac{x_n}{x_1}+\\sum_{i=1}^{n-1}\\dfrac{x_i}{x_{i+1}}&lt;1+\\sum_{i=1}^{n-1}(1+\\dfrac{x_i-x_{i+1}}{x_{i+1}})&lt;1+2(n-1) = 2n-1.$$</p>\n\n<p>Now if, all of the fractions are at least 1, then they are all equal to 1 because their product is 1. In that case, the sum is $n$ and readily satisfies $n&lt;2n-1$. 
</p>\n" }, { "answer_id": 2092767, "score": 2, "text": "<p>Proof: I get this proof for two case</p>\n\n<p>(1): if for all $k=1,2,\\cdots,n-1$,have $a_{k}\\le a_{k+1}$,then we have\n$$a_{k}\\le a_{k+1}&lt;a_{k}+1$$\nso we have\n$$a_{i}&lt;a_{i-1}+1&lt;a_{i-2}+2&lt;\\cdots&lt;a_{1}+(i-1)$$\nthen we have\n$$\\dfrac{a_{1}}{a_{2}}+\\dfrac{a_{2}}{a_{3}}+\\cdots+\\dfrac{a_{n-1}}{a_{n}}+\\dfrac{a_{n}}{a_{1}}&lt;(n-1)+\\dfrac{a_{1}+(n-1)}{a_{1}}&lt;(n-1)+1+(n-1)=2n-1$$</p>\n\n<p>(2):define set $A=\\{k|a_{k}&gt;a_{k+1}\\}$,Assmue that $|A|=p$,for $k\\in A$,we have\n$$a_{k+1}&lt;a_{k}&lt;a_{k+1}+1,\\dfrac{a_{k}}{a_{k+1}}&lt;\\dfrac{a_{k+1}+1}{a_{k+1}}\n&lt;1+\\dfrac{1}{a_{k+1}}&lt;2$$\nthen we have\n$$\\dfrac{a_{1}}{a_{2}}+\\dfrac{a_{2}}{a_{3}}+\\cdots+\\dfrac{a_{n-1}}{a_{n}}&lt;2p+(n-1-p)=n-1+p$$\nand\n$$\\dfrac{a_{n}}{a_{1}}=\\dfrac{(a_{n}-a_{n-1})+(a_{n-1}-a_{n-2})+\\cdots+(a_{2}-a_{1})+a_{1}}{a_{1}}&lt;\\dfrac{(n-1-p)+a_{1}}{a_{1}}&lt;(n-1-p)+1=n-p$$\nso for this case also have\n$$\\dfrac{a_{1}}{a_{2}}+\\dfrac{a_{2}}{a_{3}}+\\cdots+\\dfrac{a_{n-1}}{a_{n}}+\\dfrac{a_{n}}{a_{1}}&lt;n-1+p+n-p=2n-1$$</p>\n" }, { "answer_id": 2091705, "score": 5, "text": "<p>Set $d_i = x_{i+1} - x_i$ for $i = 1, \\ldots, n-1$. 
Then\n$$\n \\frac{x_n}{x_1} = 1 + \\sum_{i=1}^{n-1} \\frac{d_i}{x_1} \\quad , \n\\quad \\frac{x_i}{x_{i+1}} = 1 - \\frac{d_i}{x_{i+1}}\n$$\nand therefore\n$$\n\\frac{x_n}{x_1} + \\sum_{i=1}^{n-1} \\frac{x_i}{x_{i+1}} = \n n + \\sum_{i=1}^{n-1} d_i \\left( \\frac{1}{x_1} - \\frac{1}{x_{i+1}} \\right) \\\\\n \\le n + \\sum_{i=1}^{n-1} \\lvert d_i \\lvert \\left\\lvert \\frac{1}{x_1} - \\frac{1}{x_{i+1}} \\right \\rvert &lt; n + (n-1) = 2n-1\n$$\nbecause</p>\n\n<ul>\n<li>$\\lvert d_i \\lvert &lt; 1$ and</li>\n<li>all numbers $1/x_i$ are in the\ninterval $(0, 1]$ so that the (absolute value of) any difference\n$1/{x_1} - 1/{x_{i+1}}$ is less than one.</li>\n</ul>\n\n<hr>\n\n<p>The inequality can actually be improved a bit.\nFor fixed $i$ set $m = \\min(x_1, x_{i+1})$ and $M = \\max(x_1, x_{i+1})$.\nThen $M &lt; m + i$ and therefore\n$$\n\\left\\lvert \\frac{1}{x_1} - \\frac{1}{x_{i+1}} \\right \\rvert\n = \\frac 1m - \\frac 1M &lt; \\frac 1m - \\frac{1}{m+i}\n = \\frac{i}{m(m+i)} \\le \\frac{i}{i+1}\n$$\nand the above method gives\n$$\n\\frac{x_n}{x_1} + \\sum_{i=1}^{n-1} \\frac{x_i}{x_{i+1}} \n&lt; n + \\sum_{i=1}^{n-1}\\frac{i}{i+1} \n= (2n-1) - \\sum_{i=2}^{n} \\frac 1i \\, .\n$$\nChoosing $x_i = i + (i-1) \\varepsilon$ shows that this bound is sharp.</p>\n" }, { "answer_id": 2091555, "score": 4, "text": "<p>First, suppose that at least one of the fractions is strictly less than 1. Without loss of generality, we can assume that $\\dfrac{x_n}{x_1}&lt;1$. Then, $$\\dfrac{x_n}{x_1}+\\sum_{i=1}^{n-1}\\dfrac{x_i}{x_{i+1}}&lt;1+\\sum_{i=1}^{n-1}(1+\\dfrac{x_i-x_{i+1}}{x_{i+1}})&lt;1+2(n-1) = 2n-1.$$</p>\n\n<p>Now if, all of the fractions are at least 1, then they are all equal to 1 because their product is 1. In that case, the sum is $n$ and readily satisfies $n&lt;2n-1$. 
</p>\n" }, { "answer_id": 2092767, "score": 2, "text": "<p>Proof: I get this proof for two case</p>\n\n<p>(1): if for all $k=1,2,\\cdots,n-1$,have $a_{k}\\le a_{k+1}$,then we have\n$$a_{k}\\le a_{k+1}&lt;a_{k}+1$$\nso we have\n$$a_{i}&lt;a_{i-1}+1&lt;a_{i-2}+2&lt;\\cdots&lt;a_{1}+(i-1)$$\nthen we have\n$$\\dfrac{a_{1}}{a_{2}}+\\dfrac{a_{2}}{a_{3}}+\\cdots+\\dfrac{a_{n-1}}{a_{n}}+\\dfrac{a_{n}}{a_{1}}&lt;(n-1)+\\dfrac{a_{1}+(n-1)}{a_{1}}&lt;(n-1)+1+(n-1)=2n-1$$</p>\n\n<p>(2):define set $A=\\{k|a_{k}&gt;a_{k+1}\\}$,Assmue that $|A|=p$,for $k\\in A$,we have\n$$a_{k+1}&lt;a_{k}&lt;a_{k+1}+1,\\dfrac{a_{k}}{a_{k+1}}&lt;\\dfrac{a_{k+1}+1}{a_{k+1}}\n&lt;1+\\dfrac{1}{a_{k+1}}&lt;2$$\nthen we have\n$$\\dfrac{a_{1}}{a_{2}}+\\dfrac{a_{2}}{a_{3}}+\\cdots+\\dfrac{a_{n-1}}{a_{n}}&lt;2p+(n-1-p)=n-1+p$$\nand\n$$\\dfrac{a_{n}}{a_{1}}=\\dfrac{(a_{n}-a_{n-1})+(a_{n-1}-a_{n-2})+\\cdots+(a_{2}-a_{1})+a_{1}}{a_{1}}&lt;\\dfrac{(n-1-p)+a_{1}}{a_{1}}&lt;(n-1-p)+1=n-p$$\nso for this case also have\n$$\\dfrac{a_{1}}{a_{2}}+\\dfrac{a_{2}}{a_{3}}+\\cdots+\\dfrac{a_{n-1}}{a_{n}}+\\dfrac{a_{n}}{a_{1}}&lt;n-1+p+n-p=2n-1$$</p>\n" }, { "answer_id": 2091705, "score": 5, "text": "<p>Set $d_i = x_{i+1} - x_i$ for $i = 1, \\ldots, n-1$. 
Then\n$$\n \\frac{x_n}{x_1} = 1 + \\sum_{i=1}^{n-1} \\frac{d_i}{x_1} \\quad , \n\\quad \\frac{x_i}{x_{i+1}} = 1 - \\frac{d_i}{x_{i+1}}\n$$\nand therefore\n$$\n\\frac{x_n}{x_1} + \\sum_{i=1}^{n-1} \\frac{x_i}{x_{i+1}} = \n n + \\sum_{i=1}^{n-1} d_i \\left( \\frac{1}{x_1} - \\frac{1}{x_{i+1}} \\right) \\\\\n \\le n + \\sum_{i=1}^{n-1} \\lvert d_i \\lvert \\left\\lvert \\frac{1}{x_1} - \\frac{1}{x_{i+1}} \\right \\rvert &lt; n + (n-1) = 2n-1\n$$\nbecause</p>\n\n<ul>\n<li>$\\lvert d_i \\lvert &lt; 1$ and</li>\n<li>all numbers $1/x_i$ are in the\ninterval $(0, 1]$ so that the (absolute value of) any difference\n$1/{x_1} - 1/{x_{i+1}}$ is less than one.</li>\n</ul>\n\n<hr>\n\n<p>The inequality can actually be improved a bit.\nFor fixed $i$ set $m = \\min(x_1, x_{i+1})$ and $M = \\max(x_1, x_{i+1})$.\nThen $M &lt; m + i$ and therefore\n$$\n\\left\\lvert \\frac{1}{x_1} - \\frac{1}{x_{i+1}} \\right \\rvert\n = \\frac 1m - \\frac 1M &lt; \\frac 1m - \\frac{1}{m+i}\n = \\frac{i}{m(m+i)} \\le \\frac{i}{i+1}\n$$\nand the above method gives\n$$\n\\frac{x_n}{x_1} + \\sum_{i=1}^{n-1} \\frac{x_i}{x_{i+1}} \n&lt; n + \\sum_{i=1}^{n-1}\\frac{i}{i+1} \n= (2n-1) - \\sum_{i=2}^{n} \\frac 1i \\, .\n$$\nChoosing $x_i = i + (i-1) \\varepsilon$ shows that this bound is sharp.</p>\n" }, { "answer_id": 2091555, "score": 4, "text": "<p>First, suppose that at least one of the fractions is strictly less than 1. Without loss of generality, we can assume that $\\dfrac{x_n}{x_1}&lt;1$. Then, $$\\dfrac{x_n}{x_1}+\\sum_{i=1}^{n-1}\\dfrac{x_i}{x_{i+1}}&lt;1+\\sum_{i=1}^{n-1}(1+\\dfrac{x_i-x_{i+1}}{x_{i+1}})&lt;1+2(n-1) = 2n-1.$$</p>\n\n<p>Now if, all of the fractions are at least 1, then they are all equal to 1 because their product is 1. In that case, the sum is $n$ and readily satisfies $n&lt;2n-1$. 
</p>\n" }, { "answer_id": 2092767, "score": 2, "text": "<p>Proof: I get this proof for two case</p>\n\n<p>(1): if for all $k=1,2,\\cdots,n-1$,have $a_{k}\\le a_{k+1}$,then we have\n$$a_{k}\\le a_{k+1}&lt;a_{k}+1$$\nso we have\n$$a_{i}&lt;a_{i-1}+1&lt;a_{i-2}+2&lt;\\cdots&lt;a_{1}+(i-1)$$\nthen we have\n$$\\dfrac{a_{1}}{a_{2}}+\\dfrac{a_{2}}{a_{3}}+\\cdots+\\dfrac{a_{n-1}}{a_{n}}+\\dfrac{a_{n}}{a_{1}}&lt;(n-1)+\\dfrac{a_{1}+(n-1)}{a_{1}}&lt;(n-1)+1+(n-1)=2n-1$$</p>\n\n<p>(2):define set $A=\\{k|a_{k}&gt;a_{k+1}\\}$,Assmue that $|A|=p$,for $k\\in A$,we have\n$$a_{k+1}&lt;a_{k}&lt;a_{k+1}+1,\\dfrac{a_{k}}{a_{k+1}}&lt;\\dfrac{a_{k+1}+1}{a_{k+1}}\n&lt;1+\\dfrac{1}{a_{k+1}}&lt;2$$\nthen we have\n$$\\dfrac{a_{1}}{a_{2}}+\\dfrac{a_{2}}{a_{3}}+\\cdots+\\dfrac{a_{n-1}}{a_{n}}&lt;2p+(n-1-p)=n-1+p$$\nand\n$$\\dfrac{a_{n}}{a_{1}}=\\dfrac{(a_{n}-a_{n-1})+(a_{n-1}-a_{n-2})+\\cdots+(a_{2}-a_{1})+a_{1}}{a_{1}}&lt;\\dfrac{(n-1-p)+a_{1}}{a_{1}}&lt;(n-1-p)+1=n-p$$\nso for this case also have\n$$\\dfrac{a_{1}}{a_{2}}+\\dfrac{a_{2}}{a_{3}}+\\cdots+\\dfrac{a_{n-1}}{a_{n}}+\\dfrac{a_{n}}{a_{1}}&lt;n-1+p+n-p=2n-1$$</p>\n" }, { "answer_id": 2091705, "score": 5, "text": "<p>Set $d_i = x_{i+1} - x_i$ for $i = 1, \\ldots, n-1$. 
Then\n$$\n \\frac{x_n}{x_1} = 1 + \\sum_{i=1}^{n-1} \\frac{d_i}{x_1} \\quad , \n\\quad \\frac{x_i}{x_{i+1}} = 1 - \\frac{d_i}{x_{i+1}}\n$$\nand therefore\n$$\n\\frac{x_n}{x_1} + \\sum_{i=1}^{n-1} \\frac{x_i}{x_{i+1}} = \n n + \\sum_{i=1}^{n-1} d_i \\left( \\frac{1}{x_1} - \\frac{1}{x_{i+1}} \\right) \\\\\n \\le n + \\sum_{i=1}^{n-1} \\lvert d_i \\lvert \\left\\lvert \\frac{1}{x_1} - \\frac{1}{x_{i+1}} \\right \\rvert &lt; n + (n-1) = 2n-1\n$$\nbecause</p>\n\n<ul>\n<li>$\\lvert d_i \\lvert &lt; 1$ and</li>\n<li>all numbers $1/x_i$ are in the\ninterval $(0, 1]$ so that the (absolute value of) any difference\n$1/{x_1} - 1/{x_{i+1}}$ is less than one.</li>\n</ul>\n\n<hr>\n\n<p>The inequality can actually be improved a bit.\nFor fixed $i$ set $m = \\min(x_1, x_{i+1})$ and $M = \\max(x_1, x_{i+1})$.\nThen $M &lt; m + i$ and therefore\n$$\n\\left\\lvert \\frac{1}{x_1} - \\frac{1}{x_{i+1}} \\right \\rvert\n = \\frac 1m - \\frac 1M &lt; \\frac 1m - \\frac{1}{m+i}\n = \\frac{i}{m(m+i)} \\le \\frac{i}{i+1}\n$$\nand the above method gives\n$$\n\\frac{x_n}{x_1} + \\sum_{i=1}^{n-1} \\frac{x_i}{x_{i+1}} \n&lt; n + \\sum_{i=1}^{n-1}\\frac{i}{i+1} \n= (2n-1) - \\sum_{i=2}^{n} \\frac 1i \\, .\n$$\nChoosing $x_i = i + (i-1) \\varepsilon$ shows that this bound is sharp.</p>\n" }, { "answer_id": 2091555, "score": 4, "text": "<p>First, suppose that at least one of the fractions is strictly less than 1. Without loss of generality, we can assume that $\\dfrac{x_n}{x_1}&lt;1$. Then, $$\\dfrac{x_n}{x_1}+\\sum_{i=1}^{n-1}\\dfrac{x_i}{x_{i+1}}&lt;1+\\sum_{i=1}^{n-1}(1+\\dfrac{x_i-x_{i+1}}{x_{i+1}})&lt;1+2(n-1) = 2n-1.$$</p>\n\n<p>Now if, all of the fractions are at least 1, then they are all equal to 1 because their product is 1. In that case, the sum is $n$ and readily satisfies $n&lt;2n-1$. 
</p>\n" }, { "answer_id": 2092767, "score": 2, "text": "<p>Proof: I get this proof for two case</p>\n\n<p>(1): if for all $k=1,2,\\cdots,n-1$,have $a_{k}\\le a_{k+1}$,then we have\n$$a_{k}\\le a_{k+1}&lt;a_{k}+1$$\nso we have\n$$a_{i}&lt;a_{i-1}+1&lt;a_{i-2}+2&lt;\\cdots&lt;a_{1}+(i-1)$$\nthen we have\n$$\\dfrac{a_{1}}{a_{2}}+\\dfrac{a_{2}}{a_{3}}+\\cdots+\\dfrac{a_{n-1}}{a_{n}}+\\dfrac{a_{n}}{a_{1}}&lt;(n-1)+\\dfrac{a_{1}+(n-1)}{a_{1}}&lt;(n-1)+1+(n-1)=2n-1$$</p>\n\n<p>(2):define set $A=\\{k|a_{k}&gt;a_{k+1}\\}$,Assmue that $|A|=p$,for $k\\in A$,we have\n$$a_{k+1}&lt;a_{k}&lt;a_{k+1}+1,\\dfrac{a_{k}}{a_{k+1}}&lt;\\dfrac{a_{k+1}+1}{a_{k+1}}\n&lt;1+\\dfrac{1}{a_{k+1}}&lt;2$$\nthen we have\n$$\\dfrac{a_{1}}{a_{2}}+\\dfrac{a_{2}}{a_{3}}+\\cdots+\\dfrac{a_{n-1}}{a_{n}}&lt;2p+(n-1-p)=n-1+p$$\nand\n$$\\dfrac{a_{n}}{a_{1}}=\\dfrac{(a_{n}-a_{n-1})+(a_{n-1}-a_{n-2})+\\cdots+(a_{2}-a_{1})+a_{1}}{a_{1}}&lt;\\dfrac{(n-1-p)+a_{1}}{a_{1}}&lt;(n-1-p)+1=n-p$$\nso for this case also have\n$$\\dfrac{a_{1}}{a_{2}}+\\dfrac{a_{2}}{a_{3}}+\\cdots+\\dfrac{a_{n-1}}{a_{n}}+\\dfrac{a_{n}}{a_{1}}&lt;n-1+p+n-p=2n-1$$</p>\n" }, { "answer_id": 2091705, "score": 5, "text": "<p>Set $d_i = x_{i+1} - x_i$ for $i = 1, \\ldots, n-1$. 
Then\n$$\n \\frac{x_n}{x_1} = 1 + \\sum_{i=1}^{n-1} \\frac{d_i}{x_1} \\quad , \n\\quad \\frac{x_i}{x_{i+1}} = 1 - \\frac{d_i}{x_{i+1}}\n$$\nand therefore\n$$\n\\frac{x_n}{x_1} + \\sum_{i=1}^{n-1} \\frac{x_i}{x_{i+1}} = \n n + \\sum_{i=1}^{n-1} d_i \\left( \\frac{1}{x_1} - \\frac{1}{x_{i+1}} \\right) \\\\\n \\le n + \\sum_{i=1}^{n-1} \\lvert d_i \\lvert \\left\\lvert \\frac{1}{x_1} - \\frac{1}{x_{i+1}} \\right \\rvert &lt; n + (n-1) = 2n-1\n$$\nbecause</p>\n\n<ul>\n<li>$\\lvert d_i \\lvert &lt; 1$ and</li>\n<li>all numbers $1/x_i$ are in the\ninterval $(0, 1]$ so that the (absolute value of) any difference\n$1/{x_1} - 1/{x_{i+1}}$ is less than one.</li>\n</ul>\n\n<hr>\n\n<p>The inequality can actually be improved a bit.\nFor fixed $i$ set $m = \\min(x_1, x_{i+1})$ and $M = \\max(x_1, x_{i+1})$.\nThen $M &lt; m + i$ and therefore\n$$\n\\left\\lvert \\frac{1}{x_1} - \\frac{1}{x_{i+1}} \\right \\rvert\n = \\frac 1m - \\frac 1M &lt; \\frac 1m - \\frac{1}{m+i}\n = \\frac{i}{m(m+i)} \\le \\frac{i}{i+1}\n$$\nand the above method gives\n$$\n\\frac{x_n}{x_1} + \\sum_{i=1}^{n-1} \\frac{x_i}{x_{i+1}} \n&lt; n + \\sum_{i=1}^{n-1}\\frac{i}{i+1} \n= (2n-1) - \\sum_{i=2}^{n} \\frac 1i \\, .\n$$\nChoosing $x_i = i + (i-1) \\varepsilon$ shows that this bound is sharp.</p>\n" }, { "answer_id": 2091555, "score": 4, "text": "<p>First, suppose that at least one of the fractions is strictly less than 1. Without loss of generality, we can assume that $\\dfrac{x_n}{x_1}&lt;1$. Then, $$\\dfrac{x_n}{x_1}+\\sum_{i=1}^{n-1}\\dfrac{x_i}{x_{i+1}}&lt;1+\\sum_{i=1}^{n-1}(1+\\dfrac{x_i-x_{i+1}}{x_{i+1}})&lt;1+2(n-1) = 2n-1.$$</p>\n\n<p>Now if, all of the fractions are at least 1, then they are all equal to 1 because their product is 1. In that case, the sum is $n$ and readily satisfies $n&lt;2n-1$. 
</p>\n" }, { "answer_id": 2092767, "score": 2, "text": "<p>Proof: I get this proof for two case</p>\n\n<p>(1): if for all $k=1,2,\\cdots,n-1$,have $a_{k}\\le a_{k+1}$,then we have\n$$a_{k}\\le a_{k+1}&lt;a_{k}+1$$\nso we have\n$$a_{i}&lt;a_{i-1}+1&lt;a_{i-2}+2&lt;\\cdots&lt;a_{1}+(i-1)$$\nthen we have\n$$\\dfrac{a_{1}}{a_{2}}+\\dfrac{a_{2}}{a_{3}}+\\cdots+\\dfrac{a_{n-1}}{a_{n}}+\\dfrac{a_{n}}{a_{1}}&lt;(n-1)+\\dfrac{a_{1}+(n-1)}{a_{1}}&lt;(n-1)+1+(n-1)=2n-1$$</p>\n\n<p>(2):define set $A=\\{k|a_{k}&gt;a_{k+1}\\}$,Assmue that $|A|=p$,for $k\\in A$,we have\n$$a_{k+1}&lt;a_{k}&lt;a_{k+1}+1,\\dfrac{a_{k}}{a_{k+1}}&lt;\\dfrac{a_{k+1}+1}{a_{k+1}}\n&lt;1+\\dfrac{1}{a_{k+1}}&lt;2$$\nthen we have\n$$\\dfrac{a_{1}}{a_{2}}+\\dfrac{a_{2}}{a_{3}}+\\cdots+\\dfrac{a_{n-1}}{a_{n}}&lt;2p+(n-1-p)=n-1+p$$\nand\n$$\\dfrac{a_{n}}{a_{1}}=\\dfrac{(a_{n}-a_{n-1})+(a_{n-1}-a_{n-2})+\\cdots+(a_{2}-a_{1})+a_{1}}{a_{1}}&lt;\\dfrac{(n-1-p)+a_{1}}{a_{1}}&lt;(n-1-p)+1=n-p$$\nso for this case also have\n$$\\dfrac{a_{1}}{a_{2}}+\\dfrac{a_{2}}{a_{3}}+\\cdots+\\dfrac{a_{n-1}}{a_{n}}+\\dfrac{a_{n}}{a_{1}}&lt;n-1+p+n-p=2n-1$$</p>\n" }, { "answer_id": 2091705, "score": 5, "text": "<p>Set $d_i = x_{i+1} - x_i$ for $i = 1, \\ldots, n-1$. 
Then\n$$\n \\frac{x_n}{x_1} = 1 + \\sum_{i=1}^{n-1} \\frac{d_i}{x_1} \\quad , \n\\quad \\frac{x_i}{x_{i+1}} = 1 - \\frac{d_i}{x_{i+1}}\n$$\nand therefore\n$$\n\\frac{x_n}{x_1} + \\sum_{i=1}^{n-1} \\frac{x_i}{x_{i+1}} = \n n + \\sum_{i=1}^{n-1} d_i \\left( \\frac{1}{x_1} - \\frac{1}{x_{i+1}} \\right) \\\\\n \\le n + \\sum_{i=1}^{n-1} \\lvert d_i \\lvert \\left\\lvert \\frac{1}{x_1} - \\frac{1}{x_{i+1}} \\right \\rvert &lt; n + (n-1) = 2n-1\n$$\nbecause</p>\n\n<ul>\n<li>$\\lvert d_i \\lvert &lt; 1$ and</li>\n<li>all numbers $1/x_i$ are in the\ninterval $(0, 1]$ so that the (absolute value of) any difference\n$1/{x_1} - 1/{x_{i+1}}$ is less than one.</li>\n</ul>\n\n<hr>\n\n<p>The inequality can actually be improved a bit.\nFor fixed $i$ set $m = \\min(x_1, x_{i+1})$ and $M = \\max(x_1, x_{i+1})$.\nThen $M &lt; m + i$ and therefore\n$$\n\\left\\lvert \\frac{1}{x_1} - \\frac{1}{x_{i+1}} \\right \\rvert\n = \\frac 1m - \\frac 1M &lt; \\frac 1m - \\frac{1}{m+i}\n = \\frac{i}{m(m+i)} \\le \\frac{i}{i+1}\n$$\nand the above method gives\n$$\n\\frac{x_n}{x_1} + \\sum_{i=1}^{n-1} \\frac{x_i}{x_{i+1}} \n&lt; n + \\sum_{i=1}^{n-1}\\frac{i}{i+1} \n= (2n-1) - \\sum_{i=2}^{n} \\frac 1i \\, .\n$$\nChoosing $x_i = i + (i-1) \\varepsilon$ shows that this bound is sharp.</p>\n" }, { "answer_id": 2091555, "score": 4, "text": "<p>First, suppose that at least one of the fractions is strictly less than 1. Without loss of generality, we can assume that $\\dfrac{x_n}{x_1}&lt;1$. Then, $$\\dfrac{x_n}{x_1}+\\sum_{i=1}^{n-1}\\dfrac{x_i}{x_{i+1}}&lt;1+\\sum_{i=1}^{n-1}(1+\\dfrac{x_i-x_{i+1}}{x_{i+1}})&lt;1+2(n-1) = 2n-1.$$</p>\n\n<p>Now if, all of the fractions are at least 1, then they are all equal to 1 because their product is 1. In that case, the sum is $n$ and readily satisfies $n&lt;2n-1$. 
</p>\n" }, { "answer_id": 2092767, "score": 2, "text": "<p>Proof: I get this proof for two case</p>\n\n<p>(1): if for all $k=1,2,\\cdots,n-1$,have $a_{k}\\le a_{k+1}$,then we have\n$$a_{k}\\le a_{k+1}&lt;a_{k}+1$$\nso we have\n$$a_{i}&lt;a_{i-1}+1&lt;a_{i-2}+2&lt;\\cdots&lt;a_{1}+(i-1)$$\nthen we have\n$$\\dfrac{a_{1}}{a_{2}}+\\dfrac{a_{2}}{a_{3}}+\\cdots+\\dfrac{a_{n-1}}{a_{n}}+\\dfrac{a_{n}}{a_{1}}&lt;(n-1)+\\dfrac{a_{1}+(n-1)}{a_{1}}&lt;(n-1)+1+(n-1)=2n-1$$</p>\n\n<p>(2):define set $A=\\{k|a_{k}&gt;a_{k+1}\\}$,Assmue that $|A|=p$,for $k\\in A$,we have\n$$a_{k+1}&lt;a_{k}&lt;a_{k+1}+1,\\dfrac{a_{k}}{a_{k+1}}&lt;\\dfrac{a_{k+1}+1}{a_{k+1}}\n&lt;1+\\dfrac{1}{a_{k+1}}&lt;2$$\nthen we have\n$$\\dfrac{a_{1}}{a_{2}}+\\dfrac{a_{2}}{a_{3}}+\\cdots+\\dfrac{a_{n-1}}{a_{n}}&lt;2p+(n-1-p)=n-1+p$$\nand\n$$\\dfrac{a_{n}}{a_{1}}=\\dfrac{(a_{n}-a_{n-1})+(a_{n-1}-a_{n-2})+\\cdots+(a_{2}-a_{1})+a_{1}}{a_{1}}&lt;\\dfrac{(n-1-p)+a_{1}}{a_{1}}&lt;(n-1-p)+1=n-p$$\nso for this case also have\n$$\\dfrac{a_{1}}{a_{2}}+\\dfrac{a_{2}}{a_{3}}+\\cdots+\\dfrac{a_{n-1}}{a_{n}}+\\dfrac{a_{n}}{a_{1}}&lt;n-1+p+n-p=2n-1$$</p>\n" }, { "answer_id": 2091705, "score": 5, "text": "<p>Set $d_i = x_{i+1} - x_i$ for $i = 1, \\ldots, n-1$. 
Then\n$$\n \\frac{x_n}{x_1} = 1 + \\sum_{i=1}^{n-1} \\frac{d_i}{x_1} \\quad , \n\\quad \\frac{x_i}{x_{i+1}} = 1 - \\frac{d_i}{x_{i+1}}\n$$\nand therefore\n$$\n\\frac{x_n}{x_1} + \\sum_{i=1}^{n-1} \\frac{x_i}{x_{i+1}} = \n n + \\sum_{i=1}^{n-1} d_i \\left( \\frac{1}{x_1} - \\frac{1}{x_{i+1}} \\right) \\\\\n \\le n + \\sum_{i=1}^{n-1} \\lvert d_i \\lvert \\left\\lvert \\frac{1}{x_1} - \\frac{1}{x_{i+1}} \\right \\rvert &lt; n + (n-1) = 2n-1\n$$\nbecause</p>\n\n<ul>\n<li>$\\lvert d_i \\lvert &lt; 1$ and</li>\n<li>all numbers $1/x_i$ are in the\ninterval $(0, 1]$ so that the (absolute value of) any difference\n$1/{x_1} - 1/{x_{i+1}}$ is less than one.</li>\n</ul>\n\n<hr>\n\n<p>The inequality can actually be improved a bit.\nFor fixed $i$ set $m = \\min(x_1, x_{i+1})$ and $M = \\max(x_1, x_{i+1})$.\nThen $M &lt; m + i$ and therefore\n$$\n\\left\\lvert \\frac{1}{x_1} - \\frac{1}{x_{i+1}} \\right \\rvert\n = \\frac 1m - \\frac 1M &lt; \\frac 1m - \\frac{1}{m+i}\n = \\frac{i}{m(m+i)} \\le \\frac{i}{i+1}\n$$\nand the above method gives\n$$\n\\frac{x_n}{x_1} + \\sum_{i=1}^{n-1} \\frac{x_i}{x_{i+1}} \n&lt; n + \\sum_{i=1}^{n-1}\\frac{i}{i+1} \n= (2n-1) - \\sum_{i=2}^{n} \\frac 1i \\, .\n$$\nChoosing $x_i = i + (i-1) \\varepsilon$ shows that this bound is sharp.</p>\n" }, { "answer_id": 2091555, "score": 4, "text": "<p>First, suppose that at least one of the fractions is strictly less than 1. Without loss of generality, we can assume that $\\dfrac{x_n}{x_1}&lt;1$. Then, $$\\dfrac{x_n}{x_1}+\\sum_{i=1}^{n-1}\\dfrac{x_i}{x_{i+1}}&lt;1+\\sum_{i=1}^{n-1}(1+\\dfrac{x_i-x_{i+1}}{x_{i+1}})&lt;1+2(n-1) = 2n-1.$$</p>\n\n<p>Now if, all of the fractions are at least 1, then they are all equal to 1 because their product is 1. In that case, the sum is $n$ and readily satisfies $n&lt;2n-1$. 
</p>\n" }, { "answer_id": 2092767, "score": 2, "text": "<p>Proof: I get this proof for two case</p>\n\n<p>(1): if for all $k=1,2,\\cdots,n-1$,have $a_{k}\\le a_{k+1}$,then we have\n$$a_{k}\\le a_{k+1}&lt;a_{k}+1$$\nso we have\n$$a_{i}&lt;a_{i-1}+1&lt;a_{i-2}+2&lt;\\cdots&lt;a_{1}+(i-1)$$\nthen we have\n$$\\dfrac{a_{1}}{a_{2}}+\\dfrac{a_{2}}{a_{3}}+\\cdots+\\dfrac{a_{n-1}}{a_{n}}+\\dfrac{a_{n}}{a_{1}}&lt;(n-1)+\\dfrac{a_{1}+(n-1)}{a_{1}}&lt;(n-1)+1+(n-1)=2n-1$$</p>\n\n<p>(2):define set $A=\\{k|a_{k}&gt;a_{k+1}\\}$,Assmue that $|A|=p$,for $k\\in A$,we have\n$$a_{k+1}&lt;a_{k}&lt;a_{k+1}+1,\\dfrac{a_{k}}{a_{k+1}}&lt;\\dfrac{a_{k+1}+1}{a_{k+1}}\n&lt;1+\\dfrac{1}{a_{k+1}}&lt;2$$\nthen we have\n$$\\dfrac{a_{1}}{a_{2}}+\\dfrac{a_{2}}{a_{3}}+\\cdots+\\dfrac{a_{n-1}}{a_{n}}&lt;2p+(n-1-p)=n-1+p$$\nand\n$$\\dfrac{a_{n}}{a_{1}}=\\dfrac{(a_{n}-a_{n-1})+(a_{n-1}-a_{n-2})+\\cdots+(a_{2}-a_{1})+a_{1}}{a_{1}}&lt;\\dfrac{(n-1-p)+a_{1}}{a_{1}}&lt;(n-1-p)+1=n-p$$\nso for this case also have\n$$\\dfrac{a_{1}}{a_{2}}+\\dfrac{a_{2}}{a_{3}}+\\cdots+\\dfrac{a_{n-1}}{a_{n}}+\\dfrac{a_{n}}{a_{1}}&lt;n-1+p+n-p=2n-1$$</p>\n" }, { "answer_id": 2091705, "score": 5, "text": "<p>Set $d_i = x_{i+1} - x_i$ for $i = 1, \\ldots, n-1$. 
Then\n$$\n \\frac{x_n}{x_1} = 1 + \\sum_{i=1}^{n-1} \\frac{d_i}{x_1} \\quad , \n\\quad \\frac{x_i}{x_{i+1}} = 1 - \\frac{d_i}{x_{i+1}}\n$$\nand therefore\n$$\n\\frac{x_n}{x_1} + \\sum_{i=1}^{n-1} \\frac{x_i}{x_{i+1}} = \n n + \\sum_{i=1}^{n-1} d_i \\left( \\frac{1}{x_1} - \\frac{1}{x_{i+1}} \\right) \\\\\n \\le n + \\sum_{i=1}^{n-1} \\lvert d_i \\lvert \\left\\lvert \\frac{1}{x_1} - \\frac{1}{x_{i+1}} \\right \\rvert &lt; n + (n-1) = 2n-1\n$$\nbecause</p>\n\n<ul>\n<li>$\\lvert d_i \\lvert &lt; 1$ and</li>\n<li>all numbers $1/x_i$ are in the\ninterval $(0, 1]$ so that the (absolute value of) any difference\n$1/{x_1} - 1/{x_{i+1}}$ is less than one.</li>\n</ul>\n\n<hr>\n\n<p>The inequality can actually be improved a bit.\nFor fixed $i$ set $m = \\min(x_1, x_{i+1})$ and $M = \\max(x_1, x_{i+1})$.\nThen $M &lt; m + i$ and therefore\n$$\n\\left\\lvert \\frac{1}{x_1} - \\frac{1}{x_{i+1}} \\right \\rvert\n = \\frac 1m - \\frac 1M &lt; \\frac 1m - \\frac{1}{m+i}\n = \\frac{i}{m(m+i)} \\le \\frac{i}{i+1}\n$$\nand the above method gives\n$$\n\\frac{x_n}{x_1} + \\sum_{i=1}^{n-1} \\frac{x_i}{x_{i+1}} \n&lt; n + \\sum_{i=1}^{n-1}\\frac{i}{i+1} \n= (2n-1) - \\sum_{i=2}^{n} \\frac 1i \\, .\n$$\nChoosing $x_i = i + (i-1) \\varepsilon$ shows that this bound is sharp.</p>\n" }, { "answer_id": 2091555, "score": 4, "text": "<p>First, suppose that at least one of the fractions is strictly less than 1. Without loss of generality, we can assume that $\\dfrac{x_n}{x_1}&lt;1$. Then, $$\\dfrac{x_n}{x_1}+\\sum_{i=1}^{n-1}\\dfrac{x_i}{x_{i+1}}&lt;1+\\sum_{i=1}^{n-1}(1+\\dfrac{x_i-x_{i+1}}{x_{i+1}})&lt;1+2(n-1) = 2n-1.$$</p>\n\n<p>Now if, all of the fractions are at least 1, then they are all equal to 1 because their product is 1. In that case, the sum is $n$ and readily satisfies $n&lt;2n-1$. 
</p>\n" }, { "answer_id": 2092767, "score": 2, "text": "<p>Proof: I get this proof for two case</p>\n\n<p>(1): if for all $k=1,2,\\cdots,n-1$,have $a_{k}\\le a_{k+1}$,then we have\n$$a_{k}\\le a_{k+1}&lt;a_{k}+1$$\nso we have\n$$a_{i}&lt;a_{i-1}+1&lt;a_{i-2}+2&lt;\\cdots&lt;a_{1}+(i-1)$$\nthen we have\n$$\\dfrac{a_{1}}{a_{2}}+\\dfrac{a_{2}}{a_{3}}+\\cdots+\\dfrac{a_{n-1}}{a_{n}}+\\dfrac{a_{n}}{a_{1}}&lt;(n-1)+\\dfrac{a_{1}+(n-1)}{a_{1}}&lt;(n-1)+1+(n-1)=2n-1$$</p>\n\n<p>(2):define set $A=\\{k|a_{k}&gt;a_{k+1}\\}$,Assmue that $|A|=p$,for $k\\in A$,we have\n$$a_{k+1}&lt;a_{k}&lt;a_{k+1}+1,\\dfrac{a_{k}}{a_{k+1}}&lt;\\dfrac{a_{k+1}+1}{a_{k+1}}\n&lt;1+\\dfrac{1}{a_{k+1}}&lt;2$$\nthen we have\n$$\\dfrac{a_{1}}{a_{2}}+\\dfrac{a_{2}}{a_{3}}+\\cdots+\\dfrac{a_{n-1}}{a_{n}}&lt;2p+(n-1-p)=n-1+p$$\nand\n$$\\dfrac{a_{n}}{a_{1}}=\\dfrac{(a_{n}-a_{n-1})+(a_{n-1}-a_{n-2})+\\cdots+(a_{2}-a_{1})+a_{1}}{a_{1}}&lt;\\dfrac{(n-1-p)+a_{1}}{a_{1}}&lt;(n-1-p)+1=n-p$$\nso for this case also have\n$$\\dfrac{a_{1}}{a_{2}}+\\dfrac{a_{2}}{a_{3}}+\\cdots+\\dfrac{a_{n-1}}{a_{n}}+\\dfrac{a_{n}}{a_{1}}&lt;n-1+p+n-p=2n-1$$</p>\n" }, { "answer_id": 2091705, "score": 5, "text": "<p>Set $d_i = x_{i+1} - x_i$ for $i = 1, \\ldots, n-1$. 
Then\n$$\n \\frac{x_n}{x_1} = 1 + \\sum_{i=1}^{n-1} \\frac{d_i}{x_1} \\quad , \n\\quad \\frac{x_i}{x_{i+1}} = 1 - \\frac{d_i}{x_{i+1}}\n$$\nand therefore\n$$\n\\frac{x_n}{x_1} + \\sum_{i=1}^{n-1} \\frac{x_i}{x_{i+1}} = \n n + \\sum_{i=1}^{n-1} d_i \\left( \\frac{1}{x_1} - \\frac{1}{x_{i+1}} \\right) \\\\\n \\le n + \\sum_{i=1}^{n-1} \\lvert d_i \\lvert \\left\\lvert \\frac{1}{x_1} - \\frac{1}{x_{i+1}} \\right \\rvert &lt; n + (n-1) = 2n-1\n$$\nbecause</p>\n\n<ul>\n<li>$\\lvert d_i \\lvert &lt; 1$ and</li>\n<li>all numbers $1/x_i$ are in the\ninterval $(0, 1]$ so that the (absolute value of) any difference\n$1/{x_1} - 1/{x_{i+1}}$ is less than one.</li>\n</ul>\n\n<hr>\n\n<p>The inequality can actually be improved a bit.\nFor fixed $i$ set $m = \\min(x_1, x_{i+1})$ and $M = \\max(x_1, x_{i+1})$.\nThen $M &lt; m + i$ and therefore\n$$\n\\left\\lvert \\frac{1}{x_1} - \\frac{1}{x_{i+1}} \\right \\rvert\n = \\frac 1m - \\frac 1M &lt; \\frac 1m - \\frac{1}{m+i}\n = \\frac{i}{m(m+i)} \\le \\frac{i}{i+1}\n$$\nand the above method gives\n$$\n\\frac{x_n}{x_1} + \\sum_{i=1}^{n-1} \\frac{x_i}{x_{i+1}} \n&lt; n + \\sum_{i=1}^{n-1}\\frac{i}{i+1} \n= (2n-1) - \\sum_{i=2}^{n} \\frac 1i \\, .\n$$\nChoosing $x_i = i + (i-1) \\varepsilon$ shows that this bound is sharp.</p>\n" }, { "answer_id": 2091555, "score": 4, "text": "<p>First, suppose that at least one of the fractions is strictly less than 1. Without loss of generality, we can assume that $\\dfrac{x_n}{x_1}&lt;1$. Then, $$\\dfrac{x_n}{x_1}+\\sum_{i=1}^{n-1}\\dfrac{x_i}{x_{i+1}}&lt;1+\\sum_{i=1}^{n-1}(1+\\dfrac{x_i-x_{i+1}}{x_{i+1}})&lt;1+2(n-1) = 2n-1.$$</p>\n\n<p>Now if, all of the fractions are at least 1, then they are all equal to 1 because their product is 1. In that case, the sum is $n$ and readily satisfies $n&lt;2n-1$. 
</p>\n" }
]
Question: Prove that $\frac{x_1}{x_2}+\frac{x_2}{x_3}+\frac{x_3}{x_4}+\cdots +\frac{x_{n-1}}{x_n}+\frac{x_n}{x_1}&lt;2n-1$ Answer 1: <p>Set $d_i = x_{i+1} - x_i$ for $i = 1, \ldots, n-1$. Then $$ \frac{x_n}{x_1} = 1 + \sum_{i=1}^{n-1} \frac{d_i}{x_1} \quad , \quad \frac{x_i}{x_{i+1}} = 1 - \frac{d_i}{x_{i+1}} $$ and therefore $$ \frac{x_n}{x_1} + \sum_{i=1}^{n-1} \frac{x_i}{x_{i+1}} = n + \sum_{i=1}^{n-1} d_i \left( \frac{1}{x_1} - \frac{1}{x_{i+1}} \right) \\ \le n + \sum_{i=1}^{n-1} \lvert d_i \lvert \left\lvert \frac{1}{x_1} - \frac{1}{x_{i+1}} \right \rvert &lt; n + (n-1) = 2n-1 $$ because</p> <ul> <li>$\lvert d_i \lvert &lt; 1$ and</li> <li>all numbers $1/x_i$ are in the interval $(0, 1]$ so that the (absolute value of) any difference $1/{x_1} - 1/{x_{i+1}}$ is less than one.</li> </ul> <hr> <p>The inequality can actually be improved a bit. For fixed $i$ set $m = \min(x_1, x_{i+1})$ and $M = \max(x_1, x_{i+1})$. Then $M &lt; m + i$ and therefore $$ \left\lvert \frac{1}{x_1} - \frac{1}{x_{i+1}} \right \rvert = \frac 1m - \frac 1M &lt; \frac 1m - \frac{1}{m+i} = \frac{i}{m(m+i)} \le \frac{i}{i+1} $$ and the above method gives $$ \frac{x_n}{x_1} + \sum_{i=1}^{n-1} \frac{x_i}{x_{i+1}} &lt; n + \sum_{i=1}^{n-1}\frac{i}{i+1} = (2n-1) - \sum_{i=2}^{n} \frac 1i \, . $$ Choosing $x_i = i + (i-1) \varepsilon$ shows that this bound is sharp.</p> Answer 2: <p>First, suppose that at least one of the fractions is strictly less than 1. Without loss of generality, we can assume that $\dfrac{x_n}{x_1}&lt;1$. Then, $$\dfrac{x_n}{x_1}+\sum_{i=1}^{n-1}\dfrac{x_i}{x_{i+1}}&lt;1+\sum_{i=1}^{n-1}(1+\dfrac{x_i-x_{i+1}}{x_{i+1}})&lt;1+2(n-1) = 2n-1.$$</p> <p>Now if, all of the fractions are at least 1, then they are all equal to 1 because their product is 1. In that case, the sum is $n$ and readily satisfies $n&lt;2n-1$. 
</p> Answer 3: <p>Proof: I get this proof for two case</p> <p>(1): if for all $k=1,2,\cdots,n-1$,have $a_{k}\le a_{k+1}$,then we have $$a_{k}\le a_{k+1}&lt;a_{k}+1$$ so we have $$a_{i}&lt;a_{i-1}+1&lt;a_{i-2}+2&lt;\cdots&lt;a_{1}+(i-1)$$ then we have $$\dfrac{a_{1}}{a_{2}}+\dfrac{a_{2}}{a_{3}}+\cdots+\dfrac{a_{n-1}}{a_{n}}+\dfrac{a_{n}}{a_{1}}&lt;(n-1)+\dfrac{a_{1}+(n-1)}{a_{1}}&lt;(n-1)+1+(n-1)=2n-1$$</p> <p>(2):define set $A=\{k|a_{k}&gt;a_{k+1}\}$,Assmue that $|A|=p$,for $k\in A$,we have $$a_{k+1}&lt;a_{k}&lt;a_{k+1}+1,\dfrac{a_{k}}{a_{k+1}}&lt;\dfrac{a_{k+1}+1}{a_{k+1}} &lt;1+\dfrac{1}{a_{k+1}}&lt;2$$ then we have $$\dfrac{a_{1}}{a_{2}}+\dfrac{a_{2}}{a_{3}}+\cdots+\dfrac{a_{n-1}}{a_{n}}&lt;2p+(n-1-p)=n-1+p$$ and $$\dfrac{a_{n}}{a_{1}}=\dfrac{(a_{n}-a_{n-1})+(a_{n-1}-a_{n-2})+\cdots+(a_{2}-a_{1})+a_{1}}{a_{1}}&lt;\dfrac{(n-1-p)+a_{1}}{a_{1}}&lt;(n-1-p)+1=n-p$$ so for this case also have $$\dfrac{a_{1}}{a_{2}}+\dfrac{a_{2}}{a_{3}}+\cdots+\dfrac{a_{n-1}}{a_{n}}+\dfrac{a_{n}}{a_{1}}&lt;n-1+p+n-p=2n-1$$</p> Answer 4: <p>Set $d_i = x_{i+1} - x_i$ for $i = 1, \ldots, n-1$. Then $$ \frac{x_n}{x_1} = 1 + \sum_{i=1}^{n-1} \frac{d_i}{x_1} \quad , \quad \frac{x_i}{x_{i+1}} = 1 - \frac{d_i}{x_{i+1}} $$ and therefore $$ \frac{x_n}{x_1} + \sum_{i=1}^{n-1} \frac{x_i}{x_{i+1}} = n + \sum_{i=1}^{n-1} d_i \left( \frac{1}{x_1} - \frac{1}{x_{i+1}} \right) \\ \le n + \sum_{i=1}^{n-1} \lvert d_i \lvert \left\lvert \frac{1}{x_1} - \frac{1}{x_{i+1}} \right \rvert &lt; n + (n-1) = 2n-1 $$ because</p> <ul> <li>$\lvert d_i \lvert &lt; 1$ and</li> <li>all numbers $1/x_i$ are in the interval $(0, 1]$ so that the (absolute value of) any difference $1/{x_1} - 1/{x_{i+1}}$ is less than one.</li> </ul> <hr> <p>The inequality can actually be improved a bit. For fixed $i$ set $m = \min(x_1, x_{i+1})$ and $M = \max(x_1, x_{i+1})$. 
Then $M &lt; m + i$ and therefore $$ \left\lvert \frac{1}{x_1} - \frac{1}{x_{i+1}} \right \rvert = \frac 1m - \frac 1M &lt; \frac 1m - \frac{1}{m+i} = \frac{i}{m(m+i)} \le \frac{i}{i+1} $$ and the above method gives $$ \frac{x_n}{x_1} + \sum_{i=1}^{n-1} \frac{x_i}{x_{i+1}} &lt; n + \sum_{i=1}^{n-1}\frac{i}{i+1} = (2n-1) - \sum_{i=2}^{n} \frac 1i \, . $$ Choosing $x_i = i + (i-1) \varepsilon$ shows that this bound is sharp.</p> Answer 5: <p>First, suppose that at least one of the fractions is strictly less than 1. Without loss of generality, we can assume that $\dfrac{x_n}{x_1}&lt;1$. Then, $$\dfrac{x_n}{x_1}+\sum_{i=1}^{n-1}\dfrac{x_i}{x_{i+1}}&lt;1+\sum_{i=1}^{n-1}(1+\dfrac{x_i-x_{i+1}}{x_{i+1}})&lt;1+2(n-1) = 2n-1.$$</p> <p>Now if, all of the fractions are at least 1, then they are all equal to 1 because their product is 1. In that case, the sum is $n$ and readily satisfies $n&lt;2n-1$. </p> Answer 6: <p>Proof: I get this proof for two case</p> <p>(1): if for all $k=1,2,\cdots,n-1$,have $a_{k}\le a_{k+1}$,then we have $$a_{k}\le a_{k+1}&lt;a_{k}+1$$ so we have $$a_{i}&lt;a_{i-1}+1&lt;a_{i-2}+2&lt;\cdots&lt;a_{1}+(i-1)$$ then we have $$\dfrac{a_{1}}{a_{2}}+\dfrac{a_{2}}{a_{3}}+\cdots+\dfrac{a_{n-1}}{a_{n}}+\dfrac{a_{n}}{a_{1}}&lt;(n-1)+\dfrac{a_{1}+(n-1)}{a_{1}}&lt;(n-1)+1+(n-1)=2n-1$$</p> <p>(2):define set $A=\{k|a_{k}&gt;a_{k+1}\}$,Assmue that $|A|=p$,for $k\in A$,we have $$a_{k+1}&lt;a_{k}&lt;a_{k+1}+1,\dfrac{a_{k}}{a_{k+1}}&lt;\dfrac{a_{k+1}+1}{a_{k+1}} &lt;1+\dfrac{1}{a_{k+1}}&lt;2$$ then we have $$\dfrac{a_{1}}{a_{2}}+\dfrac{a_{2}}{a_{3}}+\cdots+\dfrac{a_{n-1}}{a_{n}}&lt;2p+(n-1-p)=n-1+p$$ and $$\dfrac{a_{n}}{a_{1}}=\dfrac{(a_{n}-a_{n-1})+(a_{n-1}-a_{n-2})+\cdots+(a_{2}-a_{1})+a_{1}}{a_{1}}&lt;\dfrac{(n-1-p)+a_{1}}{a_{1}}&lt;(n-1-p)+1=n-p$$ so for this case also have $$\dfrac{a_{1}}{a_{2}}+\dfrac{a_{2}}{a_{3}}+\cdots+\dfrac{a_{n-1}}{a_{n}}+\dfrac{a_{n}}{a_{1}}&lt;n-1+p+n-p=2n-1$$</p> Answer 7: <p>Set $d_i = x_{i+1} - x_i$ for $i = 1, \ldots, n-1$. 
Then $$ \frac{x_n}{x_1} = 1 + \sum_{i=1}^{n-1} \frac{d_i}{x_1} \quad , \quad \frac{x_i}{x_{i+1}} = 1 - \frac{d_i}{x_{i+1}} $$ and therefore $$ \frac{x_n}{x_1} + \sum_{i=1}^{n-1} \frac{x_i}{x_{i+1}} = n + \sum_{i=1}^{n-1} d_i \left( \frac{1}{x_1} - \frac{1}{x_{i+1}} \right) \\ \le n + \sum_{i=1}^{n-1} \lvert d_i \lvert \left\lvert \frac{1}{x_1} - \frac{1}{x_{i+1}} \right \rvert &lt; n + (n-1) = 2n-1 $$ because</p> <ul> <li>$\lvert d_i \lvert &lt; 1$ and</li> <li>all numbers $1/x_i$ are in the interval $(0, 1]$ so that the (absolute value of) any difference $1/{x_1} - 1/{x_{i+1}}$ is less than one.</li> </ul> <hr> <p>The inequality can actually be improved a bit. For fixed $i$ set $m = \min(x_1, x_{i+1})$ and $M = \max(x_1, x_{i+1})$. Then $M &lt; m + i$ and therefore $$ \left\lvert \frac{1}{x_1} - \frac{1}{x_{i+1}} \right \rvert = \frac 1m - \frac 1M &lt; \frac 1m - \frac{1}{m+i} = \frac{i}{m(m+i)} \le \frac{i}{i+1} $$ and the above method gives $$ \frac{x_n}{x_1} + \sum_{i=1}^{n-1} \frac{x_i}{x_{i+1}} &lt; n + \sum_{i=1}^{n-1}\frac{i}{i+1} = (2n-1) - \sum_{i=2}^{n} \frac 1i \, . $$ Choosing $x_i = i + (i-1) \varepsilon$ shows that this bound is sharp.</p> Answer 8: <p>First, suppose that at least one of the fractions is strictly less than 1. Without loss of generality, we can assume that $\dfrac{x_n}{x_1}&lt;1$. Then, $$\dfrac{x_n}{x_1}+\sum_{i=1}^{n-1}\dfrac{x_i}{x_{i+1}}&lt;1+\sum_{i=1}^{n-1}(1+\dfrac{x_i-x_{i+1}}{x_{i+1}})&lt;1+2(n-1) = 2n-1.$$</p> <p>Now if, all of the fractions are at least 1, then they are all equal to 1 because their product is 1. In that case, the sum is $n$ and readily satisfies $n&lt;2n-1$. 
</p> Answer 9: <p>Proof: I get this proof for two case</p> <p>(1): if for all $k=1,2,\cdots,n-1$,have $a_{k}\le a_{k+1}$,then we have $$a_{k}\le a_{k+1}&lt;a_{k}+1$$ so we have $$a_{i}&lt;a_{i-1}+1&lt;a_{i-2}+2&lt;\cdots&lt;a_{1}+(i-1)$$ then we have $$\dfrac{a_{1}}{a_{2}}+\dfrac{a_{2}}{a_{3}}+\cdots+\dfrac{a_{n-1}}{a_{n}}+\dfrac{a_{n}}{a_{1}}&lt;(n-1)+\dfrac{a_{1}+(n-1)}{a_{1}}&lt;(n-1)+1+(n-1)=2n-1$$</p> <p>(2):define set $A=\{k|a_{k}&gt;a_{k+1}\}$,Assmue that $|A|=p$,for $k\in A$,we have $$a_{k+1}&lt;a_{k}&lt;a_{k+1}+1,\dfrac{a_{k}}{a_{k+1}}&lt;\dfrac{a_{k+1}+1}{a_{k+1}} &lt;1+\dfrac{1}{a_{k+1}}&lt;2$$ then we have $$\dfrac{a_{1}}{a_{2}}+\dfrac{a_{2}}{a_{3}}+\cdots+\dfrac{a_{n-1}}{a_{n}}&lt;2p+(n-1-p)=n-1+p$$ and $$\dfrac{a_{n}}{a_{1}}=\dfrac{(a_{n}-a_{n-1})+(a_{n-1}-a_{n-2})+\cdots+(a_{2}-a_{1})+a_{1}}{a_{1}}&lt;\dfrac{(n-1-p)+a_{1}}{a_{1}}&lt;(n-1-p)+1=n-p$$ so for this case also have $$\dfrac{a_{1}}{a_{2}}+\dfrac{a_{2}}{a_{3}}+\cdots+\dfrac{a_{n-1}}{a_{n}}+\dfrac{a_{n}}{a_{1}}&lt;n-1+p+n-p=2n-1$$</p> Answer 10: <p>Set $d_i = x_{i+1} - x_i$ for $i = 1, \ldots, n-1$. Then $$ \frac{x_n}{x_1} = 1 + \sum_{i=1}^{n-1} \frac{d_i}{x_1} \quad , \quad \frac{x_i}{x_{i+1}} = 1 - \frac{d_i}{x_{i+1}} $$ and therefore $$ \frac{x_n}{x_1} + \sum_{i=1}^{n-1} \frac{x_i}{x_{i+1}} = n + \sum_{i=1}^{n-1} d_i \left( \frac{1}{x_1} - \frac{1}{x_{i+1}} \right) \\ \le n + \sum_{i=1}^{n-1} \lvert d_i \lvert \left\lvert \frac{1}{x_1} - \frac{1}{x_{i+1}} \right \rvert &lt; n + (n-1) = 2n-1 $$ because</p> <ul> <li>$\lvert d_i \lvert &lt; 1$ and</li> <li>all numbers $1/x_i$ are in the interval $(0, 1]$ so that the (absolute value of) any difference $1/{x_1} - 1/{x_{i+1}}$ is less than one.</li> </ul> <hr> <p>The inequality can actually be improved a bit. For fixed $i$ set $m = \min(x_1, x_{i+1})$ and $M = \max(x_1, x_{i+1})$. 
Then $M &lt; m + i$ and therefore $$ \left\lvert \frac{1}{x_1} - \frac{1}{x_{i+1}} \right \rvert = \frac 1m - \frac 1M &lt; \frac 1m - \frac{1}{m+i} = \frac{i}{m(m+i)} \le \frac{i}{i+1} $$ and the above method gives $$ \frac{x_n}{x_1} + \sum_{i=1}^{n-1} \frac{x_i}{x_{i+1}} &lt; n + \sum_{i=1}^{n-1}\frac{i}{i+1} = (2n-1) - \sum_{i=2}^{n} \frac 1i \, . $$ Choosing $x_i = i + (i-1) \varepsilon$ shows that this bound is sharp.</p> Answer 11: <p>First, suppose that at least one of the fractions is strictly less than 1. Without loss of generality, we can assume that $\dfrac{x_n}{x_1}&lt;1$. Then, $$\dfrac{x_n}{x_1}+\sum_{i=1}^{n-1}\dfrac{x_i}{x_{i+1}}&lt;1+\sum_{i=1}^{n-1}(1+\dfrac{x_i-x_{i+1}}{x_{i+1}})&lt;1+2(n-1) = 2n-1.$$</p> <p>Now if, all of the fractions are at least 1, then they are all equal to 1 because their product is 1. In that case, the sum is $n$ and readily satisfies $n&lt;2n-1$. </p> Answer 12: <p>Proof: I get this proof for two case</p> <p>(1): if for all $k=1,2,\cdots,n-1$,have $a_{k}\le a_{k+1}$,then we have $$a_{k}\le a_{k+1}&lt;a_{k}+1$$ so we have $$a_{i}&lt;a_{i-1}+1&lt;a_{i-2}+2&lt;\cdots&lt;a_{1}+(i-1)$$ then we have $$\dfrac{a_{1}}{a_{2}}+\dfrac{a_{2}}{a_{3}}+\cdots+\dfrac{a_{n-1}}{a_{n}}+\dfrac{a_{n}}{a_{1}}&lt;(n-1)+\dfrac{a_{1}+(n-1)}{a_{1}}&lt;(n-1)+1+(n-1)=2n-1$$</p> <p>(2):define set $A=\{k|a_{k}&gt;a_{k+1}\}$,Assmue that $|A|=p$,for $k\in A$,we have $$a_{k+1}&lt;a_{k}&lt;a_{k+1}+1,\dfrac{a_{k}}{a_{k+1}}&lt;\dfrac{a_{k+1}+1}{a_{k+1}} &lt;1+\dfrac{1}{a_{k+1}}&lt;2$$ then we have $$\dfrac{a_{1}}{a_{2}}+\dfrac{a_{2}}{a_{3}}+\cdots+\dfrac{a_{n-1}}{a_{n}}&lt;2p+(n-1-p)=n-1+p$$ and $$\dfrac{a_{n}}{a_{1}}=\dfrac{(a_{n}-a_{n-1})+(a_{n-1}-a_{n-2})+\cdots+(a_{2}-a_{1})+a_{1}}{a_{1}}&lt;\dfrac{(n-1-p)+a_{1}}{a_{1}}&lt;(n-1-p)+1=n-p$$ so for this case also have $$\dfrac{a_{1}}{a_{2}}+\dfrac{a_{2}}{a_{3}}+\cdots+\dfrac{a_{n-1}}{a_{n}}+\dfrac{a_{n}}{a_{1}}&lt;n-1+p+n-p=2n-1$$</p> Answer 13: <p>Set $d_i = x_{i+1} - x_i$ for $i = 1, \ldots, n-1$. 
Then $$ \frac{x_n}{x_1} = 1 + \sum_{i=1}^{n-1} \frac{d_i}{x_1} \quad , \quad \frac{x_i}{x_{i+1}} = 1 - \frac{d_i}{x_{i+1}} $$ and therefore $$ \frac{x_n}{x_1} + \sum_{i=1}^{n-1} \frac{x_i}{x_{i+1}} = n + \sum_{i=1}^{n-1} d_i \left( \frac{1}{x_1} - \frac{1}{x_{i+1}} \right) \\ \le n + \sum_{i=1}^{n-1} \lvert d_i \lvert \left\lvert \frac{1}{x_1} - \frac{1}{x_{i+1}} \right \rvert &lt; n + (n-1) = 2n-1 $$ because</p> <ul> <li>$\lvert d_i \lvert &lt; 1$ and</li> <li>all numbers $1/x_i$ are in the interval $(0, 1]$ so that the (absolute value of) any difference $1/{x_1} - 1/{x_{i+1}}$ is less than one.</li> </ul> <hr> <p>The inequality can actually be improved a bit. For fixed $i$ set $m = \min(x_1, x_{i+1})$ and $M = \max(x_1, x_{i+1})$. Then $M &lt; m + i$ and therefore $$ \left\lvert \frac{1}{x_1} - \frac{1}{x_{i+1}} \right \rvert = \frac 1m - \frac 1M &lt; \frac 1m - \frac{1}{m+i} = \frac{i}{m(m+i)} \le \frac{i}{i+1} $$ and the above method gives $$ \frac{x_n}{x_1} + \sum_{i=1}^{n-1} \frac{x_i}{x_{i+1}} &lt; n + \sum_{i=1}^{n-1}\frac{i}{i+1} = (2n-1) - \sum_{i=2}^{n} \frac 1i \, . $$ Choosing $x_i = i + (i-1) \varepsilon$ shows that this bound is sharp.</p> Answer 14: <p>First, suppose that at least one of the fractions is strictly less than 1. Without loss of generality, we can assume that $\dfrac{x_n}{x_1}&lt;1$. Then, $$\dfrac{x_n}{x_1}+\sum_{i=1}^{n-1}\dfrac{x_i}{x_{i+1}}&lt;1+\sum_{i=1}^{n-1}(1+\dfrac{x_i-x_{i+1}}{x_{i+1}})&lt;1+2(n-1) = 2n-1.$$</p> <p>Now if, all of the fractions are at least 1, then they are all equal to 1 because their product is 1. In that case, the sum is $n$ and readily satisfies $n&lt;2n-1$. 
</p> Answer 15: <p>Proof: I give this proof in two cases.</p> <p>(1): If for all $k=1,2,\cdots,n-1$ we have $a_{k}\le a_{k+1}$, then $$a_{k}\le a_{k+1}&lt;a_{k}+1$$ so we have $$a_{i}&lt;a_{i-1}+1&lt;a_{i-2}+2&lt;\cdots&lt;a_{1}+(i-1)$$ and therefore $$\dfrac{a_{1}}{a_{2}}+\dfrac{a_{2}}{a_{3}}+\cdots+\dfrac{a_{n-1}}{a_{n}}+\dfrac{a_{n}}{a_{1}}&lt;(n-1)+\dfrac{a_{1}+(n-1)}{a_{1}}&lt;(n-1)+1+(n-1)=2n-1$$</p> <p>(2): Define the set $A=\{k \mid a_{k}&gt;a_{k+1}\}$ and assume that $|A|=p$. For $k\in A$ we have $$a_{k+1}&lt;a_{k}&lt;a_{k+1}+1,\qquad \dfrac{a_{k}}{a_{k+1}}&lt;\dfrac{a_{k+1}+1}{a_{k+1}} &lt;1+\dfrac{1}{a_{k+1}}&lt;2$$ so $$\dfrac{a_{1}}{a_{2}}+\dfrac{a_{2}}{a_{3}}+\cdots+\dfrac{a_{n-1}}{a_{n}}&lt;2p+(n-1-p)=n-1+p$$ and $$\dfrac{a_{n}}{a_{1}}=\dfrac{(a_{n}-a_{n-1})+(a_{n-1}-a_{n-2})+\cdots+(a_{2}-a_{1})+a_{1}}{a_{1}}&lt;\dfrac{(n-1-p)+a_{1}}{a_{1}}&lt;(n-1-p)+1=n-p$$ so in this case we also have $$\dfrac{a_{1}}{a_{2}}+\dfrac{a_{2}}{a_{3}}+\cdots+\dfrac{a_{n-1}}{a_{n}}+\dfrac{a_{n}}{a_{1}}&lt;n-1+p+n-p=2n-1$$</p>
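The hypotheses used in the answers above appear to be that every $x_i$ is at least $1$ and that consecutive terms differ by less than $1$; the first answer above also derives the sharper bound $(2n-1) - \sum_{i=2}^{n} \frac 1i$ and claims it is sharp. A quick numerical sanity check of both bounds (my addition, not part of the original thread):

```python
import random

def cyclic_sum(x):
    """x_n/x_1 + sum_{i=1}^{n-1} x_i/x_{i+1} for a sequence x_1, ..., x_n."""
    return x[-1] / x[0] + sum(x[i] / x[i + 1] for i in range(len(x) - 1))

def sharp_bound(n):
    """The improved bound (2n - 1) - sum_{i=2}^{n} 1/i from the answer above."""
    return (2 * n - 1) - sum(1.0 / i for i in range(2, n + 1))

# Near-extremal sequence x_i = i - (i - 1) * eps: consecutive differences all
# equal 1 - eps (the answer's x_i = i + (i - 1) * eps with a small negative eps).
n, eps = 5, 1e-6
x = [i - (i - 1) * eps for i in range(1, n + 1)]
assert cyclic_sum(x) < 2 * n - 1
assert abs(cyclic_sum(x) - sharp_bound(n)) < 1e-4  # bound is nearly attained

# Random sequences with x_i >= 1 and |x_{i+1} - x_i| < 1 all stay below 2n - 1.
rng = random.Random(0)
for _ in range(1000):
    n = rng.randint(2, 12)
    x = [1.0 + 4.0 * rng.random()]
    for _ in range(n - 1):
        x.append(max(1.0, x[-1] + rng.uniform(-0.999, 0.999)))
    assert cyclic_sum(x) < 2 * n - 1
```

All assertions pass silently; pushing `eps` toward $0$ drives `cyclic_sum(x)` arbitrarily close to `sharp_bound(n)`, matching the sharpness claim.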
Why is $1-\frac{1}{3}+\frac{1}{5}-\frac{1}{7}+\cdots = \frac{\pi}{4}$?
2,092,423
8
https://math.stackexchange.com/questions/2092423/why-is-1-frac13-frac15-frac17-cdots-frac-pi4
<p>I was watching a numberphile video, and it stated that the limit of $1-\frac{1}{3}+\frac{1}{5}-\frac{1}{7}+\cdots$ was $\frac \pi 4.$</p> <p>I was just wondering if anybody could prove this using some terms and math <strong><em>I</em> am likely to understand</strong>. I am a high school student taking Algebra II.</p>
[ { "answer_id": 2092577, "score": 12, "text": "<p>I am doing my best to only use simple notions, but the explanation will look very strange and indirect. We will start with geometry.</p>\n\n<p><a href=\"https://i.stack.imgur.com/2cDAw.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/2cDAw.png\" alt=\"enter image description here\"></a></p>\n\n<p>On the figure you see two right triangles, with base $1$ and heights $x$ and $x+h$ respectively. Note that $x$ and $h$ are line segments, while $y$ and $g$ are circular arcs. We are particularly interested in the relation between the values of $y$ and $x$, which we will study by observing how the height difference $h$ makes the corresponding arc length $g$ vary.</p>\n\n<p>By Pythagoras, the hypotenuse of the small triangle is $\\sqrt{1+x^2}$, which is also the radius of the large circle. Then the arc facing $g$ on the larger circle has a length of $g\\sqrt{1+x^2}$; if you straighten it, it becomes the base of the small outer triangle, which by proportionality has the length $(g\\sqrt{1+x^2})\\sqrt{1+x^2}$.</p>\n\n<p>So we have established</p>\n\n<p>$$\\frac{g}{h}\\approx\\frac1{1+x^2}.$$</p>\n\n<p>This relation is quite approximate in the figure, but if you make $h$ smaller and smaller, it becomes more and more accurate, because the arcs become more and more straight.</p>\n\n<p>Actually, we just computed a <em>derivative</em>, i.e. the ratio of a variation of $y$ (from $y$ to $y+g$) over a variation of $x$ (from $x$ to $x+h$). This is a notion from calculus.</p>\n\n<p>Now we want to express the relation between $x$ and $y$, and we will assume that it is of the polynomial type:</p>\n\n<p>$$y(x)=a+bx+cx^2+dx^3+ex^4+\\cdots$$ without specifying the final degree. 
Using this formula, let us evaluate the ratio $g/h$, i.e.</p>\n\n<p>$$\\frac gh=\\frac{y(x+h)-y(x)}h=\\frac{a-a+b(x+h-x)+c((x+h)^2-x^2)+d((x+h)^3-x^3)+\\cdots}h$$</p>\n\n<p>For example, the cubic term gives, using a remarkable product,</p>\n\n<p>$$\\frac{(x+h)^3-x^3}h=\\frac{(x+h-x)((x+h)^2+(x+h)x+x^2)}h=(x+h)^2+(x+h)x+x^2.$$</p>\n\n<p>If we again decrease $h$ so that it becomes negligible compared to $x$, we obtain $3x^2$. Repeating this reasoning with all terms, you can check that we will get the polynomial expression</p>\n\n<p>$$\\frac gh=b+2cx+3\\,dx^2+4\\,ex^3+5\\,fx^4+\\cdots$$</p>\n\n<p>We would like to compare it to the previous expression, which unfortunately is not in a polynomial form. We can achieve this goal by using the \"geometric progression\" formula.</p>\n\n<p>Let us consider the numbers $(-x^2)^k$ and compute their sum for $k$ from $0$ to $n$:</p>\n\n<p>$$S_n=1-x^2+x^4-\\cdots+(-x^2)^n.$$</p>\n\n<p>Then if we multiply the sum by $-x^2$ and subtract, we obtain after a lot of cancellations</p>\n\n<p>$$(1+x^2)S_n=1-(-x^2)^{n+1},$$ so that</p>\n\n<p>$$S_n=1-x^2+x^4-\\cdots+(-x^2)^n=\\frac{1-(-x^2)^{n+1}}{1+x^2}.$$</p>\n\n<p>If $x^2&lt;1$, then $(x^2)^n$ becomes smaller and smaller, so that we can simplify</p>\n\n<p>$$S_n=1-x^2+x^4-\\cdots+(-x^2)^n\\approx\\frac1{1+x^2},$$ and the larger the final degree, the better the approximation. The figure shows the matching of the polynomial of degree $10$ with the exact formula.</p>\n\n<p><a href=\"https://i.stack.imgur.com/3Zoh9.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/3Zoh9.png\" alt=\"enter image description here\"></a></p>\n\n<p>Now we are able to compare the formulas for $g/h$, and by identification,</p>\n\n<p>$$b=1,c=0,d=-\\frac13,e=0,f=\\frac15,\\cdots$$</p>\n\n<p>Then, noting that $y(0)=0$, we also have $a=0$ and we establish the famous Gregory series,\n$$y(x)=x-\\frac{x^3}3+\\frac{x^5}5-\\frac{x^7}7+\\cdots$$\n which is illustrated in the next picture (to degree $11$). 
It develops the so-called arc tangent function:</p>\n\n<p><a href=\"https://i.stack.imgur.com/H75IC.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/H75IC.png\" alt=\"enter image description here\"></a></p>\n\n<p>Finally, we can observe that when $x=1$, the arc $y$ covers an eighth of the circumference, hence the Leibnitz formula</p>\n\n<p>$$\\frac\\pi4=1-\\frac13+\\frac15-\\frac17+\\cdots$$</p>\n\n<hr>\n\n<p>The attentive reader may have noticed a little twist in the explanation. While obtaining the formula for $1/(1+x^2)$, we assumed that $x^2&lt;1$, but later used $x=1$. For reasons that deserve many more explanations, the \"polynomial\" expression for $y(x)$ is still valid for $x=1$.</p>\n" }, { "answer_id": 2092432, "score": 3, "text": "<p>If you have covered $$\\arctan x=x-\\dfrac{x^3}{3}+\\dfrac{x^5}{5}-\\dfrac{x^7}{7}+\\cdots+(-1)^{n-1}\\dfrac{x^{2n-1}}{2n-1}+\\cdots\\quad |x|\\le 1$$ then, for $x=1$ $$\\frac{\\pi}{4}=\\arctan 1=1-\\dfrac{1}{3}+\\dfrac{1}{5}-\\dfrac{1}{7}+\\cdots+(-1)^{n-1}\\dfrac{1}{2n-1}+\\cdots$$ $$$$</p>\n" }, { "answer_id": 2092577, "score": 12, "text": "<p>I am doing my best to only use simple notions, but the explanation will look very strange and indirect. We will start with geometry.</p>\n\n<p><a href=\"https://i.stack.imgur.com/2cDAw.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/2cDAw.png\" alt=\"enter image description here\"></a></p>\n\n<p>On the figure you see two right triangles, of basis $1$ and height $x$ and $x+h$ respectively. Note that $x$ and $h$ are line segments, while $y$ and $g$ arc circular arcs. We are particularly interested in the relation between the values of $y$ and $x$, which we will study by observing how the height difference $h$ makes the corresponding arc length $g$ vary.</p>\n\n<p>By Pythagoras, the hypothenuses of the small triangle is $\\sqrt{1+x^2}$, which is also the radius of the large circle. 
Then the arc facing $g$ on the larger circle has a length of $g\\sqrt{1+x^2}$; if you straighten it, it becomes the base of the small outer triangle, which by proportionality has the length $(g\\sqrt{1+x^2})\\sqrt{1+x^2}$.</p>\n\n<p>So we have established</p>\n\n<p>$$\\frac{g}{h}\\approx\\frac1{1+x^2}.$$</p>\n\n<p>This relation is quite approximate in the figure, but if you make $h$ smaller and smaller, it becomes more and more accurate, because the arcs become more and more straight.</p>\n\n<p>Actually, we just computed a <em>derivative</em>, i.e. the ratio of a variation of $y$ (from $y$ to $y+g$) over a variation of $x$ (from $x$ to $x+h$). This is a notion from calculus.</p>\n\n<p>Now we want to express the relation between $x$ and $y$ and we will assume that it is of the polynomial type:</p>\n\n<p>$$y(x)=a+bx+cx^2+dx^3+ex^4\\cdots$$ without specifying the final degree. Using this formula, let us evaluate the ratio $g/h$, i.e.</p>\n\n<p>$$\\frac gh=\\frac{y(x+h)-y(x)}h=\\frac{a-a+b(x+h-x)+c((x+h)^2-x^2)+d((x+h)^3-x^3)+\\cdots}h$$</p>\n\n<p>For example, the cubic term gives, using a remarkable product</p>\n\n<p>$$\\frac{(x+h)^3-x^3}h=\\frac{(x+h-x)((x+h)^2+(x+h)x+x^2)}h=(x+h)^2+(x+h)x+x^2.$$</p>\n\n<p>If we again decrease $h$ so that it becomes neglectible compared to $x$, we obtain $3x^3$. Repeating this reasoning with all terms, you can check that we will get the polynomial expression</p>\n\n<p>$$\\frac gh=b+2cx+3\\,dx^2+4\\,dx^3+5\\,dx^4+\\cdots$$</p>\n\n<p>We would like to compare it to the previous expression, which unfortunately is not in a polynomial form. 
We can achieve this goal by using the \"geometric progression\" formula.</p>\n\n<p>Let us consider the numbers $(-x^2)^k$ and compute their sum for $k$ from $0$ to $n$:</p>\n\n<p>$$S_n=1-x^2+x^4-\\cdots(-x^2)^n.$$</p>\n\n<p>Then if we multiply the sum by $-x^2$ and subtract, we obtain after a lot of terms cancellation</p>\n\n<p>$$(1+x^2)S_n=1-(-x^2)^{n+1},$$ so that</p>\n\n<p>$$S_n=1-x^2+x^4-\\cdots(-x^2)^n=\\frac{1-(-x^2)^{n+1}}{1+x^2}.$$</p>\n\n<p>If $x^2&lt;1$, then $(x^2)^n$ becomes smaller and smaller so that we can simplify </p>\n\n<p>$$S_n=1-x^2+x^4-\\cdots(-x^2)^n\\approx\\frac1{1+x^2},$$ and the larger the final degree, the better the approximation. The figure shows the matching of the polynomial of degree $10$ with the exact formula.</p>\n\n<p><a href=\"https://i.stack.imgur.com/3Zoh9.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/3Zoh9.png\" alt=\"enter image description here\"></a></p>\n\n<p>Now we are able to compare the formulas for $g/h$, and by identification,</p>\n\n<p>$$b=1,c=0,d=-\\frac13,e=0,f=\\frac15,\\cdots$$</p>\n\n<p>Then, noting that $y(0)=0$, we also have $a=0$ and we establish the famous Gregory series,\n$$y(x)=x-\\frac{x^3}3+\\frac{x^5}5-\\frac{x^7}7+\\cdots$$\n which is illustrated in the next picture (to degree $11$). It develops the so-called arc tangent function:</p>\n\n<p><a href=\"https://i.stack.imgur.com/H75IC.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/H75IC.png\" alt=\"enter image description here\"></a></p>\n\n<p>Finally, we can observe that when $x=1$, the arc $y$ covers an eighth of the circumference, hence the Leibnitz formula</p>\n\n<p>$$\\frac\\pi4=1-\\frac13+\\frac15-\\frac17+\\cdots$$</p>\n\n<hr>\n\n<p>The attentive reader may have noticed a little twist in the explanation. While obtaining the formula for $1/(1+x^2)$, we assumed that $x^2&lt;1$, but later used $x=1$. 
For reasons that deserve many more explanations, the \"polynomial\" expression for $y(x)$ is still valid for $x=1$.</p>\n" }, { "answer_id": 2092432, "score": 3, "text": "<p>If you have covered $$\\arctan x=x-\\dfrac{x^3}{3}+\\dfrac{x^5}{5}-\\dfrac{x^7}{7}+\\cdots+(-1)^{n-1}\\dfrac{x^{2n-1}}{2n-1}+\\cdots\\quad |x|\\le 1$$ then, for $x=1$ $$\\frac{\\pi}{4}=\\arctan 1=1-\\dfrac{1}{3}+\\dfrac{1}{5}-\\dfrac{1}{7}+\\cdots+(-1)^{n-1}\\dfrac{1}{2n-1}+\\cdots$$ $$$$</p>\n" }, { "answer_id": 2092577, "score": 12, "text": "<p>I am doing my best to only use simple notions, but the explanation will look very strange and indirect. We will start with geometry.</p>\n\n<p><a href=\"https://i.stack.imgur.com/2cDAw.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/2cDAw.png\" alt=\"enter image description here\"></a></p>\n\n<p>On the figure you see two right triangles, of basis $1$ and height $x$ and $x+h$ respectively. Note that $x$ and $h$ are line segments, while $y$ and $g$ arc circular arcs. We are particularly interested in the relation between the values of $y$ and $x$, which we will study by observing how the height difference $h$ makes the corresponding arc length $g$ vary.</p>\n\n<p>By Pythagoras, the hypothenuses of the small triangle is $\\sqrt{1+x^2}$, which is also the radius of the large circle. Then the arc facing $g$ on the larger circle has a length of $g\\sqrt{1+x^2}$; if you straighten it, it becomes the base of the small outer triangle, which by proportionality has the length $(g\\sqrt{1+x^2})\\sqrt{1+x^2}$.</p>\n\n<p>So we have established</p>\n\n<p>$$\\frac{g}{h}\\approx\\frac1{1+x^2}.$$</p>\n\n<p>This relation is quite approximate in the figure, but if you make $h$ smaller and smaller, it becomes more and more accurate, because the arcs become more and more straight.</p>\n\n<p>Actually, we just computed a <em>derivative</em>, i.e. the ratio of a variation of $y$ (from $y$ to $y+g$) over a variation of $x$ (from $x$ to $x+h$). 
This is a notion from calculus.</p>\n\n<p>Now we want to express the relation between $x$ and $y$ and we will assume that it is of the polynomial type:</p>\n\n<p>$$y(x)=a+bx+cx^2+dx^3+ex^4\\cdots$$ without specifying the final degree. Using this formula, let us evaluate the ratio $g/h$, i.e.</p>\n\n<p>$$\\frac gh=\\frac{y(x+h)-y(x)}h=\\frac{a-a+b(x+h-x)+c((x+h)^2-x^2)+d((x+h)^3-x^3)+\\cdots}h$$</p>\n\n<p>For example, the cubic term gives, using a remarkable product</p>\n\n<p>$$\\frac{(x+h)^3-x^3}h=\\frac{(x+h-x)((x+h)^2+(x+h)x+x^2)}h=(x+h)^2+(x+h)x+x^2.$$</p>\n\n<p>If we again decrease $h$ so that it becomes neglectible compared to $x$, we obtain $3x^3$. Repeating this reasoning with all terms, you can check that we will get the polynomial expression</p>\n\n<p>$$\\frac gh=b+2cx+3\\,dx^2+4\\,dx^3+5\\,dx^4+\\cdots$$</p>\n\n<p>We would like to compare it to the previous expression, which unfortunately is not in a polynomial form. We can achieve this goal by using the \"geometric progression\" formula.</p>\n\n<p>Let us consider the numbers $(-x^2)^k$ and compute their sum for $k$ from $0$ to $n$:</p>\n\n<p>$$S_n=1-x^2+x^4-\\cdots(-x^2)^n.$$</p>\n\n<p>Then if we multiply the sum by $-x^2$ and subtract, we obtain after a lot of terms cancellation</p>\n\n<p>$$(1+x^2)S_n=1-(-x^2)^{n+1},$$ so that</p>\n\n<p>$$S_n=1-x^2+x^4-\\cdots(-x^2)^n=\\frac{1-(-x^2)^{n+1}}{1+x^2}.$$</p>\n\n<p>If $x^2&lt;1$, then $(x^2)^n$ becomes smaller and smaller so that we can simplify </p>\n\n<p>$$S_n=1-x^2+x^4-\\cdots(-x^2)^n\\approx\\frac1{1+x^2},$$ and the larger the final degree, the better the approximation. 
The figure shows the matching of the polynomial of degree $10$ with the exact formula.</p>\n\n<p><a href=\"https://i.stack.imgur.com/3Zoh9.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/3Zoh9.png\" alt=\"enter image description here\"></a></p>\n\n<p>Now we are able to compare the formulas for $g/h$, and by identification,</p>\n\n<p>$$b=1,c=0,d=-\\frac13,e=0,f=\\frac15,\\cdots$$</p>\n\n<p>Then, noting that $y(0)=0$, we also have $a=0$ and we establish the famous Gregory series,\n$$y(x)=x-\\frac{x^3}3+\\frac{x^5}5-\\frac{x^7}7+\\cdots$$\n which is illustrated in the next picture (to degree $11$). It develops the so-called arc tangent function:</p>\n\n<p><a href=\"https://i.stack.imgur.com/H75IC.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/H75IC.png\" alt=\"enter image description here\"></a></p>\n\n<p>Finally, we can observe that when $x=1$, the arc $y$ covers an eighth of the circumference, hence the Leibnitz formula</p>\n\n<p>$$\\frac\\pi4=1-\\frac13+\\frac15-\\frac17+\\cdots$$</p>\n\n<hr>\n\n<p>The attentive reader may have noticed a little twist in the explanation. While obtaining the formula for $1/(1+x^2)$, we assumed that $x^2&lt;1$, but later used $x=1$. For reasons that deserve many more explanations, the \"polynomial\" expression for $y(x)$ is still valid for $x=1$.</p>\n" }, { "answer_id": 2092432, "score": 3, "text": "<p>If you have covered $$\\arctan x=x-\\dfrac{x^3}{3}+\\dfrac{x^5}{5}-\\dfrac{x^7}{7}+\\cdots+(-1)^{n-1}\\dfrac{x^{2n-1}}{2n-1}+\\cdots\\quad |x|\\le 1$$ then, for $x=1$ $$\\frac{\\pi}{4}=\\arctan 1=1-\\dfrac{1}{3}+\\dfrac{1}{5}-\\dfrac{1}{7}+\\cdots+(-1)^{n-1}\\dfrac{1}{2n-1}+\\cdots$$ $$$$</p>\n" }, { "answer_id": 2092577, "score": 12, "text": "<p>I am doing my best to only use simple notions, but the explanation will look very strange and indirect. 
We will start with geometry.</p>\n\n<p><a href=\"https://i.stack.imgur.com/2cDAw.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/2cDAw.png\" alt=\"enter image description here\"></a></p>\n\n<p>On the figure you see two right triangles, of basis $1$ and height $x$ and $x+h$ respectively. Note that $x$ and $h$ are line segments, while $y$ and $g$ arc circular arcs. We are particularly interested in the relation between the values of $y$ and $x$, which we will study by observing how the height difference $h$ makes the corresponding arc length $g$ vary.</p>\n\n<p>By Pythagoras, the hypothenuses of the small triangle is $\\sqrt{1+x^2}$, which is also the radius of the large circle. Then the arc facing $g$ on the larger circle has a length of $g\\sqrt{1+x^2}$; if you straighten it, it becomes the base of the small outer triangle, which by proportionality has the length $(g\\sqrt{1+x^2})\\sqrt{1+x^2}$.</p>\n\n<p>So we have established</p>\n\n<p>$$\\frac{g}{h}\\approx\\frac1{1+x^2}.$$</p>\n\n<p>This relation is quite approximate in the figure, but if you make $h$ smaller and smaller, it becomes more and more accurate, because the arcs become more and more straight.</p>\n\n<p>Actually, we just computed a <em>derivative</em>, i.e. the ratio of a variation of $y$ (from $y$ to $y+g$) over a variation of $x$ (from $x$ to $x+h$). This is a notion from calculus.</p>\n\n<p>Now we want to express the relation between $x$ and $y$ and we will assume that it is of the polynomial type:</p>\n\n<p>$$y(x)=a+bx+cx^2+dx^3+ex^4\\cdots$$ without specifying the final degree. 
Using this formula, let us evaluate the ratio $g/h$, i.e.</p>\n\n<p>$$\\frac gh=\\frac{y(x+h)-y(x)}h=\\frac{a-a+b(x+h-x)+c((x+h)^2-x^2)+d((x+h)^3-x^3)+\\cdots}h$$</p>\n\n<p>For example, the cubic term gives, using a remarkable product</p>\n\n<p>$$\\frac{(x+h)^3-x^3}h=\\frac{(x+h-x)((x+h)^2+(x+h)x+x^2)}h=(x+h)^2+(x+h)x+x^2.$$</p>\n\n<p>If we again decrease $h$ so that it becomes neglectible compared to $x$, we obtain $3x^3$. Repeating this reasoning with all terms, you can check that we will get the polynomial expression</p>\n\n<p>$$\\frac gh=b+2cx+3\\,dx^2+4\\,dx^3+5\\,dx^4+\\cdots$$</p>\n\n<p>We would like to compare it to the previous expression, which unfortunately is not in a polynomial form. We can achieve this goal by using the \"geometric progression\" formula.</p>\n\n<p>Let us consider the numbers $(-x^2)^k$ and compute their sum for $k$ from $0$ to $n$:</p>\n\n<p>$$S_n=1-x^2+x^4-\\cdots(-x^2)^n.$$</p>\n\n<p>Then if we multiply the sum by $-x^2$ and subtract, we obtain after a lot of terms cancellation</p>\n\n<p>$$(1+x^2)S_n=1-(-x^2)^{n+1},$$ so that</p>\n\n<p>$$S_n=1-x^2+x^4-\\cdots(-x^2)^n=\\frac{1-(-x^2)^{n+1}}{1+x^2}.$$</p>\n\n<p>If $x^2&lt;1$, then $(x^2)^n$ becomes smaller and smaller so that we can simplify </p>\n\n<p>$$S_n=1-x^2+x^4-\\cdots(-x^2)^n\\approx\\frac1{1+x^2},$$ and the larger the final degree, the better the approximation. The figure shows the matching of the polynomial of degree $10$ with the exact formula.</p>\n\n<p><a href=\"https://i.stack.imgur.com/3Zoh9.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/3Zoh9.png\" alt=\"enter image description here\"></a></p>\n\n<p>Now we are able to compare the formulas for $g/h$, and by identification,</p>\n\n<p>$$b=1,c=0,d=-\\frac13,e=0,f=\\frac15,\\cdots$$</p>\n\n<p>Then, noting that $y(0)=0$, we also have $a=0$ and we establish the famous Gregory series,\n$$y(x)=x-\\frac{x^3}3+\\frac{x^5}5-\\frac{x^7}7+\\cdots$$\n which is illustrated in the next picture (to degree $11$). 
It develops the so-called arc tangent function:</p>\n\n<p><a href=\"https://i.stack.imgur.com/H75IC.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/H75IC.png\" alt=\"enter image description here\"></a></p>\n\n<p>Finally, we can observe that when $x=1$, the arc $y$ covers an eighth of the circumference, hence the Leibnitz formula</p>\n\n<p>$$\\frac\\pi4=1-\\frac13+\\frac15-\\frac17+\\cdots$$</p>\n\n<hr>\n\n<p>The attentive reader may have noticed a little twist in the explanation. While obtaining the formula for $1/(1+x^2)$, we assumed that $x^2&lt;1$, but later used $x=1$. For reasons that deserve many more explanations, the \"polynomial\" expression for $y(x)$ is still valid for $x=1$.</p>\n" }, { "answer_id": 2092432, "score": 3, "text": "<p>If you have covered $$\\arctan x=x-\\dfrac{x^3}{3}+\\dfrac{x^5}{5}-\\dfrac{x^7}{7}+\\cdots+(-1)^{n-1}\\dfrac{x^{2n-1}}{2n-1}+\\cdots\\quad |x|\\le 1$$ then, for $x=1$ $$\\frac{\\pi}{4}=\\arctan 1=1-\\dfrac{1}{3}+\\dfrac{1}{5}-\\dfrac{1}{7}+\\cdots+(-1)^{n-1}\\dfrac{1}{2n-1}+\\cdots$$ $$$$</p>\n" }, { "answer_id": 2092577, "score": 12, "text": "<p>I am doing my best to only use simple notions, but the explanation will look very strange and indirect. We will start with geometry.</p>\n\n<p><a href=\"https://i.stack.imgur.com/2cDAw.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/2cDAw.png\" alt=\"enter image description here\"></a></p>\n\n<p>On the figure you see two right triangles, of basis $1$ and height $x$ and $x+h$ respectively. Note that $x$ and $h$ are line segments, while $y$ and $g$ arc circular arcs. We are particularly interested in the relation between the values of $y$ and $x$, which we will study by observing how the height difference $h$ makes the corresponding arc length $g$ vary.</p>\n\n<p>By Pythagoras, the hypothenuses of the small triangle is $\\sqrt{1+x^2}$, which is also the radius of the large circle. 
Then the arc facing $g$ on the larger circle has a length of $g\\sqrt{1+x^2}$; if you straighten it, it becomes the base of the small outer triangle, which by proportionality has the length $(g\\sqrt{1+x^2})\\sqrt{1+x^2}$.</p>\n\n<p>So we have established</p>\n\n<p>$$\\frac{g}{h}\\approx\\frac1{1+x^2}.$$</p>\n\n<p>This relation is quite approximate in the figure, but if you make $h$ smaller and smaller, it becomes more and more accurate, because the arcs become more and more straight.</p>\n\n<p>Actually, we just computed a <em>derivative</em>, i.e. the ratio of a variation of $y$ (from $y$ to $y+g$) over a variation of $x$ (from $x$ to $x+h$). This is a notion from calculus.</p>\n\n<p>Now we want to express the relation between $x$ and $y$ and we will assume that it is of the polynomial type:</p>\n\n<p>$$y(x)=a+bx+cx^2+dx^3+ex^4\\cdots$$ without specifying the final degree. Using this formula, let us evaluate the ratio $g/h$, i.e.</p>\n\n<p>$$\\frac gh=\\frac{y(x+h)-y(x)}h=\\frac{a-a+b(x+h-x)+c((x+h)^2-x^2)+d((x+h)^3-x^3)+\\cdots}h$$</p>\n\n<p>For example, the cubic term gives, using a remarkable product</p>\n\n<p>$$\\frac{(x+h)^3-x^3}h=\\frac{(x+h-x)((x+h)^2+(x+h)x+x^2)}h=(x+h)^2+(x+h)x+x^2.$$</p>\n\n<p>If we again decrease $h$ so that it becomes neglectible compared to $x$, we obtain $3x^3$. Repeating this reasoning with all terms, you can check that we will get the polynomial expression</p>\n\n<p>$$\\frac gh=b+2cx+3\\,dx^2+4\\,dx^3+5\\,dx^4+\\cdots$$</p>\n\n<p>We would like to compare it to the previous expression, which unfortunately is not in a polynomial form. 
We can achieve this goal by using the \"geometric progression\" formula.</p>\n\n<p>Let us consider the numbers $(-x^2)^k$ and compute their sum for $k$ from $0$ to $n$:</p>\n\n<p>$$S_n=1-x^2+x^4-\\cdots(-x^2)^n.$$</p>\n\n<p>Then if we multiply the sum by $-x^2$ and subtract, we obtain after a lot of terms cancellation</p>\n\n<p>$$(1+x^2)S_n=1-(-x^2)^{n+1},$$ so that</p>\n\n<p>$$S_n=1-x^2+x^4-\\cdots(-x^2)^n=\\frac{1-(-x^2)^{n+1}}{1+x^2}.$$</p>\n\n<p>If $x^2&lt;1$, then $(x^2)^n$ becomes smaller and smaller so that we can simplify </p>\n\n<p>$$S_n=1-x^2+x^4-\\cdots(-x^2)^n\\approx\\frac1{1+x^2},$$ and the larger the final degree, the better the approximation. The figure shows the matching of the polynomial of degree $10$ with the exact formula.</p>\n\n<p><a href=\"https://i.stack.imgur.com/3Zoh9.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/3Zoh9.png\" alt=\"enter image description here\"></a></p>\n\n<p>Now we are able to compare the formulas for $g/h$, and by identification,</p>\n\n<p>$$b=1,c=0,d=-\\frac13,e=0,f=\\frac15,\\cdots$$</p>\n\n<p>Then, noting that $y(0)=0$, we also have $a=0$ and we establish the famous Gregory series,\n$$y(x)=x-\\frac{x^3}3+\\frac{x^5}5-\\frac{x^7}7+\\cdots$$\n which is illustrated in the next picture (to degree $11$). It develops the so-called arc tangent function:</p>\n\n<p><a href=\"https://i.stack.imgur.com/H75IC.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/H75IC.png\" alt=\"enter image description here\"></a></p>\n\n<p>Finally, we can observe that when $x=1$, the arc $y$ covers an eighth of the circumference, hence the Leibnitz formula</p>\n\n<p>$$\\frac\\pi4=1-\\frac13+\\frac15-\\frac17+\\cdots$$</p>\n\n<hr>\n\n<p>The attentive reader may have noticed a little twist in the explanation. While obtaining the formula for $1/(1+x^2)$, we assumed that $x^2&lt;1$, but later used $x=1$. 
For reasons that deserve many more explanations, the \"polynomial\" expression for $y(x)$ is still valid for $x=1$.</p>\n" }, { "answer_id": 2092432, "score": 3, "text": "<p>If you have covered $$\\arctan x=x-\\dfrac{x^3}{3}+\\dfrac{x^5}{5}-\\dfrac{x^7}{7}+\\cdots+(-1)^{n-1}\\dfrac{x^{2n-1}}{2n-1}+\\cdots\\quad |x|\\le 1$$ then, for $x=1$ $$\\frac{\\pi}{4}=\\arctan 1=1-\\dfrac{1}{3}+\\dfrac{1}{5}-\\dfrac{1}{7}+\\cdots+(-1)^{n-1}\\dfrac{1}{2n-1}+\\cdots$$ $$$$</p>\n" }, { "answer_id": 2092577, "score": 12, "text": "<p>I am doing my best to only use simple notions, but the explanation will look very strange and indirect. We will start with geometry.</p>\n\n<p><a href=\"https://i.stack.imgur.com/2cDAw.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/2cDAw.png\" alt=\"enter image description here\"></a></p>\n\n<p>On the figure you see two right triangles, of basis $1$ and height $x$ and $x+h$ respectively. Note that $x$ and $h$ are line segments, while $y$ and $g$ arc circular arcs. We are particularly interested in the relation between the values of $y$ and $x$, which we will study by observing how the height difference $h$ makes the corresponding arc length $g$ vary.</p>\n\n<p>By Pythagoras, the hypothenuses of the small triangle is $\\sqrt{1+x^2}$, which is also the radius of the large circle. Then the arc facing $g$ on the larger circle has a length of $g\\sqrt{1+x^2}$; if you straighten it, it becomes the base of the small outer triangle, which by proportionality has the length $(g\\sqrt{1+x^2})\\sqrt{1+x^2}$.</p>\n\n<p>So we have established</p>\n\n<p>$$\\frac{g}{h}\\approx\\frac1{1+x^2}.$$</p>\n\n<p>This relation is quite approximate in the figure, but if you make $h$ smaller and smaller, it becomes more and more accurate, because the arcs become more and more straight.</p>\n\n<p>Actually, we just computed a <em>derivative</em>, i.e. the ratio of a variation of $y$ (from $y$ to $y+g$) over a variation of $x$ (from $x$ to $x+h$). 
This is a notion from calculus.</p>\n\n<p>Now we want to express the relation between $x$ and $y$ and we will assume that it is of the polynomial type:</p>\n\n<p>$$y(x)=a+bx+cx^2+dx^3+ex^4\\cdots$$ without specifying the final degree. Using this formula, let us evaluate the ratio $g/h$, i.e.</p>\n\n<p>$$\\frac gh=\\frac{y(x+h)-y(x)}h=\\frac{a-a+b(x+h-x)+c((x+h)^2-x^2)+d((x+h)^3-x^3)+\\cdots}h$$</p>\n\n<p>For example, the cubic term gives, using a remarkable product</p>\n\n<p>$$\\frac{(x+h)^3-x^3}h=\\frac{(x+h-x)((x+h)^2+(x+h)x+x^2)}h=(x+h)^2+(x+h)x+x^2.$$</p>\n\n<p>If we again decrease $h$ so that it becomes neglectible compared to $x$, we obtain $3x^3$. Repeating this reasoning with all terms, you can check that we will get the polynomial expression</p>\n\n<p>$$\\frac gh=b+2cx+3\\,dx^2+4\\,dx^3+5\\,dx^4+\\cdots$$</p>\n\n<p>We would like to compare it to the previous expression, which unfortunately is not in a polynomial form. We can achieve this goal by using the \"geometric progression\" formula.</p>\n\n<p>Let us consider the numbers $(-x^2)^k$ and compute their sum for $k$ from $0$ to $n$:</p>\n\n<p>$$S_n=1-x^2+x^4-\\cdots(-x^2)^n.$$</p>\n\n<p>Then if we multiply the sum by $-x^2$ and subtract, we obtain after a lot of terms cancellation</p>\n\n<p>$$(1+x^2)S_n=1-(-x^2)^{n+1},$$ so that</p>\n\n<p>$$S_n=1-x^2+x^4-\\cdots(-x^2)^n=\\frac{1-(-x^2)^{n+1}}{1+x^2}.$$</p>\n\n<p>If $x^2&lt;1$, then $(x^2)^n$ becomes smaller and smaller so that we can simplify </p>\n\n<p>$$S_n=1-x^2+x^4-\\cdots(-x^2)^n\\approx\\frac1{1+x^2},$$ and the larger the final degree, the better the approximation. 
The figure shows the matching of the polynomial of degree $10$ with the exact formula.</p>\n\n<p><a href=\"https://i.stack.imgur.com/3Zoh9.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/3Zoh9.png\" alt=\"enter image description here\"></a></p>\n\n<p>Now we are able to compare the formulas for $g/h$, and by identification,</p>\n\n<p>$$b=1,c=0,d=-\\frac13,e=0,f=\\frac15,\\cdots$$</p>\n\n<p>Then, noting that $y(0)=0$, we also have $a=0$ and we establish the famous Gregory series,\n$$y(x)=x-\\frac{x^3}3+\\frac{x^5}5-\\frac{x^7}7+\\cdots$$\n which is illustrated in the next picture (to degree $11$). It develops the so-called arc tangent function:</p>\n\n<p><a href=\"https://i.stack.imgur.com/H75IC.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/H75IC.png\" alt=\"enter image description here\"></a></p>\n\n<p>Finally, we can observe that when $x=1$, the arc $y$ covers an eighth of the circumference, hence the Leibnitz formula</p>\n\n<p>$$\\frac\\pi4=1-\\frac13+\\frac15-\\frac17+\\cdots$$</p>\n\n<hr>\n\n<p>The attentive reader may have noticed a little twist in the explanation. While obtaining the formula for $1/(1+x^2)$, we assumed that $x^2&lt;1$, but later used $x=1$. For reasons that deserve many more explanations, the \"polynomial\" expression for $y(x)$ is still valid for $x=1$.</p>\n" }, { "answer_id": 2092432, "score": 3, "text": "<p>If you have covered $$\\arctan x=x-\\dfrac{x^3}{3}+\\dfrac{x^5}{5}-\\dfrac{x^7}{7}+\\cdots+(-1)^{n-1}\\dfrac{x^{2n-1}}{2n-1}+\\cdots\\quad |x|\\le 1$$ then, for $x=1$ $$\\frac{\\pi}{4}=\\arctan 1=1-\\dfrac{1}{3}+\\dfrac{1}{5}-\\dfrac{1}{7}+\\cdots+(-1)^{n-1}\\dfrac{1}{2n-1}+\\cdots$$ $$$$</p>\n" }, { "answer_id": 2092577, "score": 12, "text": "<p>I am doing my best to only use simple notions, but the explanation will look very strange and indirect. 
We will start with geometry.</p>\n\n<p><a href=\"https://i.stack.imgur.com/2cDAw.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/2cDAw.png\" alt=\"enter image description here\"></a></p>\n\n<p>On the figure you see two right triangles, of basis $1$ and height $x$ and $x+h$ respectively. Note that $x$ and $h$ are line segments, while $y$ and $g$ arc circular arcs. We are particularly interested in the relation between the values of $y$ and $x$, which we will study by observing how the height difference $h$ makes the corresponding arc length $g$ vary.</p>\n\n<p>By Pythagoras, the hypothenuses of the small triangle is $\\sqrt{1+x^2}$, which is also the radius of the large circle. Then the arc facing $g$ on the larger circle has a length of $g\\sqrt{1+x^2}$; if you straighten it, it becomes the base of the small outer triangle, which by proportionality has the length $(g\\sqrt{1+x^2})\\sqrt{1+x^2}$.</p>\n\n<p>So we have established</p>\n\n<p>$$\\frac{g}{h}\\approx\\frac1{1+x^2}.$$</p>\n\n<p>This relation is quite approximate in the figure, but if you make $h$ smaller and smaller, it becomes more and more accurate, because the arcs become more and more straight.</p>\n\n<p>Actually, we just computed a <em>derivative</em>, i.e. the ratio of a variation of $y$ (from $y$ to $y+g$) over a variation of $x$ (from $x$ to $x+h$). This is a notion from calculus.</p>\n\n<p>Now we want to express the relation between $x$ and $y$ and we will assume that it is of the polynomial type:</p>\n\n<p>$$y(x)=a+bx+cx^2+dx^3+ex^4\\cdots$$ without specifying the final degree. 
Using this formula, let us evaluate the ratio $g/h$, i.e.</p>\n\n<p>$$\\frac gh=\\frac{y(x+h)-y(x)}h=\\frac{a-a+b(x+h-x)+c((x+h)^2-x^2)+d((x+h)^3-x^3)+\\cdots}h$$</p>\n\n<p>For example, the cubic term gives, using a remarkable product</p>\n\n<p>$$\\frac{(x+h)^3-x^3}h=\\frac{(x+h-x)((x+h)^2+(x+h)x+x^2)}h=(x+h)^2+(x+h)x+x^2.$$</p>\n\n<p>If we again decrease $h$ so that it becomes neglectible compared to $x$, we obtain $3x^3$. Repeating this reasoning with all terms, you can check that we will get the polynomial expression</p>\n\n<p>$$\\frac gh=b+2cx+3\\,dx^2+4\\,dx^3+5\\,dx^4+\\cdots$$</p>\n\n<p>We would like to compare it to the previous expression, which unfortunately is not in a polynomial form. We can achieve this goal by using the \"geometric progression\" formula.</p>\n\n<p>Let us consider the numbers $(-x^2)^k$ and compute their sum for $k$ from $0$ to $n$:</p>\n\n<p>$$S_n=1-x^2+x^4-\\cdots(-x^2)^n.$$</p>\n\n<p>Then if we multiply the sum by $-x^2$ and subtract, we obtain after a lot of terms cancellation</p>\n\n<p>$$(1+x^2)S_n=1-(-x^2)^{n+1},$$ so that</p>\n\n<p>$$S_n=1-x^2+x^4-\\cdots(-x^2)^n=\\frac{1-(-x^2)^{n+1}}{1+x^2}.$$</p>\n\n<p>If $x^2&lt;1$, then $(x^2)^n$ becomes smaller and smaller so that we can simplify </p>\n\n<p>$$S_n=1-x^2+x^4-\\cdots(-x^2)^n\\approx\\frac1{1+x^2},$$ and the larger the final degree, the better the approximation. The figure shows the matching of the polynomial of degree $10$ with the exact formula.</p>\n\n<p><a href=\"https://i.stack.imgur.com/3Zoh9.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/3Zoh9.png\" alt=\"enter image description here\"></a></p>\n\n<p>Now we are able to compare the formulas for $g/h$, and by identification,</p>\n\n<p>$$b=1,c=0,d=-\\frac13,e=0,f=\\frac15,\\cdots$$</p>\n\n<p>Then, noting that $y(0)=0$, we also have $a=0$ and we establish the famous Gregory series,\n$$y(x)=x-\\frac{x^3}3+\\frac{x^5}5-\\frac{x^7}7+\\cdots$$\n which is illustrated in the next picture (to degree $11$). 
It develops the so-called arc tangent function:</p>\n\n<p><a href=\"https://i.stack.imgur.com/H75IC.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/H75IC.png\" alt=\"enter image description here\"></a></p>\n\n<p>Finally, we can observe that when $x=1$, the arc $y$ covers an eighth of the circumference, hence the Leibniz formula</p>\n\n<p>$$\\frac\\pi4=1-\\frac13+\\frac15-\\frac17+\\cdots$$</p>\n\n<hr>\n\n<p>The attentive reader may have noticed a little twist in the explanation. While obtaining the formula for $1/(1+x^2)$, we assumed that $x^2&lt;1$, but later used $x=1$. For reasons that deserve many more explanations, the \"polynomial\" expression for $y(x)$ is still valid for $x=1$.</p>\n" }, { "answer_id": 2092432, "score": 3, "text": "<p>If you have covered $$\\arctan x=x-\\dfrac{x^3}{3}+\\dfrac{x^5}{5}-\\dfrac{x^7}{7}+\\cdots+(-1)^{n-1}\\dfrac{x^{2n-1}}{2n-1}+\\cdots\\quad |x|\\le 1$$ then, for $x=1$,$$\\frac{\\pi}{4}=\\arctan 1=1-\\dfrac{1}{3}+\\dfrac{1}{5}-\\dfrac{1}{7}+\\cdots+(-1)^{n-1}\\dfrac{1}{2n-1}+\\cdots$$</p>\n" }, { "answer_id": 2092577, "score": 12, "text": "<p>I am doing my best to only use simple notions, but the explanation will look very strange and indirect. We will start with geometry.</p>\n\n<p><a href=\"https://i.stack.imgur.com/2cDAw.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/2cDAw.png\" alt=\"enter image description here\"></a></p>\n\n<p>On the figure you see two right triangles, of base $1$ and height $x$ and $x+h$ respectively. Note that $x$ and $h$ are line segments, while $y$ and $g$ are circular arcs. We are particularly interested in the relation between the values of $y$ and $x$, which we will study by observing how the height difference $h$ makes the corresponding arc length $g$ vary.</p>\n\n<p>By Pythagoras, the hypotenuse of the small triangle is $\\sqrt{1+x^2}$, which is also the radius of the large circle. 
Then the arc facing $g$ on the larger circle has a length of $g\\sqrt{1+x^2}$; if you straighten it, it becomes the base of the small outer triangle, which by proportionality has the length $(g\\sqrt{1+x^2})\\sqrt{1+x^2}=g(1+x^2)$; this length is precisely the segment $h$, up to the small error made by straightening the arcs, so $h\\approx g(1+x^2)$.</p>\n\n<p>So we have established</p>\n\n<p>$$\\frac{g}{h}\\approx\\frac1{1+x^2}.$$</p>\n\n<p>This relation is quite approximate in the figure, but if you make $h$ smaller and smaller, it becomes more and more accurate, because the arcs become more and more straight.</p>\n\n<p>Actually, we just computed a <em>derivative</em>, i.e. the ratio of a variation of $y$ (from $y$ to $y+g$) over a variation of $x$ (from $x$ to $x+h$). This is a notion from calculus.</p>\n\n<p>Now we want to express the relation between $x$ and $y$ and we will assume that it is of the polynomial type:</p>\n\n<p>$$y(x)=a+bx+cx^2+dx^3+ex^4+\\cdots$$ without specifying the final degree. Using this formula, let us evaluate the ratio $g/h$, i.e.</p>\n\n<p>$$\\frac gh=\\frac{y(x+h)-y(x)}h=\\frac{a-a+b(x+h-x)+c((x+h)^2-x^2)+d((x+h)^3-x^3)+\\cdots}h$$</p>\n\n<p>For example, the cubic term gives, using a remarkable product</p>\n\n<p>$$\\frac{(x+h)^3-x^3}h=\\frac{(x+h-x)((x+h)^2+(x+h)x+x^2)}h=(x+h)^2+(x+h)x+x^2.$$</p>\n\n<p>If we again decrease $h$ so that it becomes negligible compared to $x$, we obtain $3x^2$. Repeating this reasoning with all terms, you can check that we will get the polynomial expression</p>\n\n<p>$$\\frac gh=b+2cx+3\\,dx^2+4\\,ex^3+5\\,fx^4+\\cdots$$</p>\n\n<p>We would like to compare it to the previous expression, which unfortunately is not in a polynomial form. 
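The term-by-term rule for the difference quotient can be tested numerically on a sample cubic. This is my own illustration (the coefficient values are made up), showing that $(y(x+h)-y(x))/h$ approaches $b+2cx+3dx^2$ as $h$ shrinks:

```python
# Sample polynomial y(x) = a + b x + c x^2 + d x^3 with arbitrary coefficients.
a, b, c, d = 2.0, 1.0, -0.5, 3.0

def y(x):
    return a + b * x + c * x ** 2 + d * x ** 3

def quotient(x, h):
    """g/h = (y(x+h) - y(x)) / h, the ratio studied in the text."""
    return (y(x + h) - y(x)) / h

x = 0.7
exact = b + 2 * c * x + 3 * d * x ** 2  # the term-by-term rule
for h in (0.1, 0.01, 0.001):
    print(h, quotient(x, h), exact)
```

Each tenfold decrease of $h$ brings the quotient roughly ten times closer to the exact value, matching the claim that the approximation improves as $h$ becomes negligible.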
We can achieve this goal by using the \"geometric progression\" formula.</p>\n\n<p>Let us consider the numbers $(-x^2)^k$ and compute their sum for $k$ from $0$ to $n$:</p>\n\n<p>$$S_n=1-x^2+x^4-\\cdots+(-x^2)^n.$$</p>\n\n<p>Then if we multiply the sum by $-x^2$ and subtract, we obtain, after a lot of terms cancel,</p>\n\n<p>$$(1+x^2)S_n=1-(-x^2)^{n+1},$$ so that</p>\n\n<p>$$S_n=1-x^2+x^4-\\cdots+(-x^2)^n=\\frac{1-(-x^2)^{n+1}}{1+x^2}.$$</p>\n\n<p>If $x^2&lt;1$, then $(x^2)^n$ becomes smaller and smaller so that we can simplify</p>\n\n<p>$$S_n=1-x^2+x^4-\\cdots+(-x^2)^n\\approx\\frac1{1+x^2},$$ and the larger the final degree, the better the approximation. The figure shows the matching of the polynomial of degree $10$ with the exact formula.</p>\n\n<p><a href=\"https://i.stack.imgur.com/3Zoh9.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/3Zoh9.png\" alt=\"enter image description here\"></a></p>\n\n<p>Now we are able to compare the formulas for $g/h$, and by identification,</p>\n\n<p>$$b=1,c=0,d=-\\frac13,e=0,f=\\frac15,\\cdots$$</p>\n\n<p>Then, noting that $y(0)=0$, we also have $a=0$ and we establish the famous Gregory series,\n$$y(x)=x-\\frac{x^3}3+\\frac{x^5}5-\\frac{x^7}7+\\cdots$$\n which is illustrated in the next picture (to degree $11$). It develops the so-called arc tangent function:</p>\n\n<p><a href=\"https://i.stack.imgur.com/H75IC.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/H75IC.png\" alt=\"enter image description here\"></a></p>\n\n<p>Finally, we can observe that when $x=1$, the arc $y$ covers an eighth of the circumference, hence the Leibniz formula</p>\n\n<p>$$\\frac\\pi4=1-\\frac13+\\frac15-\\frac17+\\cdots$$</p>\n\n<hr>\n\n<p>The attentive reader may have noticed a little twist in the explanation. While obtaining the formula for $1/(1+x^2)$, we assumed that $x^2&lt;1$, but later used $x=1$. 
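Both the telescoping identity for $S_n$ and the slow convergence at $x=1$ are easy to verify numerically. The following sketch is my own addition, not part of the original answer:

```python
import math

def S(x, n):
    """S_n = 1 - x^2 + x^4 - ... + (-x^2)^n."""
    return sum((-x * x) ** k for k in range(n + 1))

x, n = 0.6, 8
# The telescoping identity (1 + x^2) S_n = 1 - (-x^2)^(n+1) holds exactly.
lhs = (1 + x * x) * S(x, n)
rhs = 1 - (-x * x) ** (n + 1)
print(lhs, rhs)

# Partial sums of the Leibniz series: convergence to pi/4 is real but slow,
# with an error of roughly 1/(2N) after N terms.
def leibniz(terms):
    return sum((-1) ** k / (2 * k + 1) for k in range(terms))

print(leibniz(1000), math.pi / 4)
```

A thousand terms still only give three or four correct digits of $\pi/4$, which makes the boundary case $x=1$ feel very different from $x^2<1$, where the tail $(x^2)^{n+1}$ dies off geometrically.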
For reasons that deserve many more explanations, the \"polynomial\" expression for $y(x)$ is still valid for $x=1$.</p>\n" }, { "answer_id": 2092577, "score": 12, "text": "<p>I am doing my best to only use simple notions, but the explanation will look very strange and indirect. We will start with geometry.</p>\n\n<p><a href=\"https://i.stack.imgur.com/2cDAw.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/2cDAw.png\" alt=\"enter image description here\"></a></p>\n\n<p>On the figure you see two right triangles, of base $1$ and height $x$ and $x+h$ respectively. Note that $x$ and $h$ are line segments, while $y$ and $g$ are circular arcs. We are particularly interested in the relation between the values of $y$ and $x$, which we will study by observing how the height difference $h$ makes the corresponding arc length $g$ vary.</p>\n\n<p>By Pythagoras, the hypotenuse of the small triangle is $\\sqrt{1+x^2}$, which is also the radius of the large circle. Then the arc facing $g$ on the larger circle has a length of $g\\sqrt{1+x^2}$; if you straighten it, it becomes the base of the small outer triangle, which by proportionality has the length $(g\\sqrt{1+x^2})\\sqrt{1+x^2}$.</p>\n\n<p>So we have established</p>\n\n<p>$$\\frac{g}{h}\\approx\\frac1{1+x^2}.$$</p>\n\n<p>This relation is quite approximate in the figure, but if you make $h$ smaller and smaller, it becomes more and more accurate, because the arcs become more and more straight.</p>\n\n<p>Actually, we just computed a <em>derivative</em>, i.e. the ratio of a variation of $y$ (from $y$ to $y+g$) over a variation of $x$ (from $x$ to $x+h$). 
This is a notion from calculus.</p>\n\n<p>Now we want to express the relation between $x$ and $y$ and we will assume that it is of the polynomial type:</p>\n\n<p>$$y(x)=a+bx+cx^2+dx^3+ex^4\\cdots$$ without specifying the final degree. Using this formula, let us evaluate the ratio $g/h$, i.e.</p>\n\n<p>$$\\frac gh=\\frac{y(x+h)-y(x)}h=\\frac{a-a+b(x+h-x)+c((x+h)^2-x^2)+d((x+h)^3-x^3)+\\cdots}h$$</p>\n\n<p>For example, the cubic term gives, using a remarkable product</p>\n\n<p>$$\\frac{(x+h)^3-x^3}h=\\frac{(x+h-x)((x+h)^2+(x+h)x+x^2)}h=(x+h)^2+(x+h)x+x^2.$$</p>\n\n<p>If we again decrease $h$ so that it becomes neglectible compared to $x$, we obtain $3x^3$. Repeating this reasoning with all terms, you can check that we will get the polynomial expression</p>\n\n<p>$$\\frac gh=b+2cx+3\\,dx^2+4\\,dx^3+5\\,dx^4+\\cdots$$</p>\n\n<p>We would like to compare it to the previous expression, which unfortunately is not in a polynomial form. We can achieve this goal by using the \"geometric progression\" formula.</p>\n\n<p>Let us consider the numbers $(-x^2)^k$ and compute their sum for $k$ from $0$ to $n$:</p>\n\n<p>$$S_n=1-x^2+x^4-\\cdots(-x^2)^n.$$</p>\n\n<p>Then if we multiply the sum by $-x^2$ and subtract, we obtain after a lot of terms cancellation</p>\n\n<p>$$(1+x^2)S_n=1-(-x^2)^{n+1},$$ so that</p>\n\n<p>$$S_n=1-x^2+x^4-\\cdots(-x^2)^n=\\frac{1-(-x^2)^{n+1}}{1+x^2}.$$</p>\n\n<p>If $x^2&lt;1$, then $(x^2)^n$ becomes smaller and smaller so that we can simplify </p>\n\n<p>$$S_n=1-x^2+x^4-\\cdots(-x^2)^n\\approx\\frac1{1+x^2},$$ and the larger the final degree, the better the approximation. 
The figure shows the matching of the polynomial of degree $10$ with the exact formula.</p>\n\n<p><a href=\"https://i.stack.imgur.com/3Zoh9.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/3Zoh9.png\" alt=\"enter image description here\"></a></p>\n\n<p>Now we are able to compare the formulas for $g/h$, and by identification,</p>\n\n<p>$$b=1,c=0,d=-\\frac13,e=0,f=\\frac15,\\cdots$$</p>\n\n<p>Then, noting that $y(0)=0$, we also have $a=0$ and we establish the famous Gregory series,\n$$y(x)=x-\\frac{x^3}3+\\frac{x^5}5-\\frac{x^7}7+\\cdots$$\n which is illustrated in the next picture (to degree $11$). It develops the so-called arc tangent function:</p>\n\n<p><a href=\"https://i.stack.imgur.com/H75IC.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/H75IC.png\" alt=\"enter image description here\"></a></p>\n\n<p>Finally, we can observe that when $x=1$, the arc $y$ covers an eighth of the circumference, hence the Leibnitz formula</p>\n\n<p>$$\\frac\\pi4=1-\\frac13+\\frac15-\\frac17+\\cdots$$</p>\n\n<hr>\n\n<p>The attentive reader may have noticed a little twist in the explanation. While obtaining the formula for $1/(1+x^2)$, we assumed that $x^2&lt;1$, but later used $x=1$. For reasons that deserve many more explanations, the \"polynomial\" expression for $y(x)$ is still valid for $x=1$.</p>\n" }, { "answer_id": 2092432, "score": 3, "text": "<p>If you have covered $$\\arctan x=x-\\dfrac{x^3}{3}+\\dfrac{x^5}{5}-\\dfrac{x^7}{7}+\\cdots+(-1)^{n-1}\\dfrac{x^{2n-1}}{2n-1}+\\cdots\\quad |x|\\le 1$$ then, for $x=1$ $$\\frac{\\pi}{4}=\\arctan 1=1-\\dfrac{1}{3}+\\dfrac{1}{5}-\\dfrac{1}{7}+\\cdots+(-1)^{n-1}\\dfrac{1}{2n-1}+\\cdots$$ $$$$</p>\n" }, { "answer_id": 2092577, "score": 12, "text": "<p>I am doing my best to only use simple notions, but the explanation will look very strange and indirect. 
We will start with geometry.</p>\n\n<p><a href=\"https://i.stack.imgur.com/2cDAw.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/2cDAw.png\" alt=\"enter image description here\"></a></p>\n\n<p>On the figure you see two right triangles, of basis $1$ and height $x$ and $x+h$ respectively. Note that $x$ and $h$ are line segments, while $y$ and $g$ arc circular arcs. We are particularly interested in the relation between the values of $y$ and $x$, which we will study by observing how the height difference $h$ makes the corresponding arc length $g$ vary.</p>\n\n<p>By Pythagoras, the hypothenuses of the small triangle is $\\sqrt{1+x^2}$, which is also the radius of the large circle. Then the arc facing $g$ on the larger circle has a length of $g\\sqrt{1+x^2}$; if you straighten it, it becomes the base of the small outer triangle, which by proportionality has the length $(g\\sqrt{1+x^2})\\sqrt{1+x^2}$.</p>\n\n<p>So we have established</p>\n\n<p>$$\\frac{g}{h}\\approx\\frac1{1+x^2}.$$</p>\n\n<p>This relation is quite approximate in the figure, but if you make $h$ smaller and smaller, it becomes more and more accurate, because the arcs become more and more straight.</p>\n\n<p>Actually, we just computed a <em>derivative</em>, i.e. the ratio of a variation of $y$ (from $y$ to $y+g$) over a variation of $x$ (from $x$ to $x+h$). This is a notion from calculus.</p>\n\n<p>Now we want to express the relation between $x$ and $y$ and we will assume that it is of the polynomial type:</p>\n\n<p>$$y(x)=a+bx+cx^2+dx^3+ex^4\\cdots$$ without specifying the final degree. 
Using this formula, let us evaluate the ratio $g/h$, i.e.</p>\n\n<p>$$\\frac gh=\\frac{y(x+h)-y(x)}h=\\frac{a-a+b(x+h-x)+c((x+h)^2-x^2)+d((x+h)^3-x^3)+\\cdots}h$$</p>\n\n<p>For example, the cubic term gives, using a remarkable product</p>\n\n<p>$$\\frac{(x+h)^3-x^3}h=\\frac{(x+h-x)((x+h)^2+(x+h)x+x^2)}h=(x+h)^2+(x+h)x+x^2.$$</p>\n\n<p>If we again decrease $h$ so that it becomes neglectible compared to $x$, we obtain $3x^3$. Repeating this reasoning with all terms, you can check that we will get the polynomial expression</p>\n\n<p>$$\\frac gh=b+2cx+3\\,dx^2+4\\,dx^3+5\\,dx^4+\\cdots$$</p>\n\n<p>We would like to compare it to the previous expression, which unfortunately is not in a polynomial form. We can achieve this goal by using the \"geometric progression\" formula.</p>\n\n<p>Let us consider the numbers $(-x^2)^k$ and compute their sum for $k$ from $0$ to $n$:</p>\n\n<p>$$S_n=1-x^2+x^4-\\cdots(-x^2)^n.$$</p>\n\n<p>Then if we multiply the sum by $-x^2$ and subtract, we obtain after a lot of terms cancellation</p>\n\n<p>$$(1+x^2)S_n=1-(-x^2)^{n+1},$$ so that</p>\n\n<p>$$S_n=1-x^2+x^4-\\cdots(-x^2)^n=\\frac{1-(-x^2)^{n+1}}{1+x^2}.$$</p>\n\n<p>If $x^2&lt;1$, then $(x^2)^n$ becomes smaller and smaller so that we can simplify </p>\n\n<p>$$S_n=1-x^2+x^4-\\cdots(-x^2)^n\\approx\\frac1{1+x^2},$$ and the larger the final degree, the better the approximation. The figure shows the matching of the polynomial of degree $10$ with the exact formula.</p>\n\n<p><a href=\"https://i.stack.imgur.com/3Zoh9.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/3Zoh9.png\" alt=\"enter image description here\"></a></p>\n\n<p>Now we are able to compare the formulas for $g/h$, and by identification,</p>\n\n<p>$$b=1,c=0,d=-\\frac13,e=0,f=\\frac15,\\cdots$$</p>\n\n<p>Then, noting that $y(0)=0$, we also have $a=0$ and we establish the famous Gregory series,\n$$y(x)=x-\\frac{x^3}3+\\frac{x^5}5-\\frac{x^7}7+\\cdots$$\n which is illustrated in the next picture (to degree $11$). 
It develops the so-called arc tangent function:</p>\n\n<p><a href=\"https://i.stack.imgur.com/H75IC.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/H75IC.png\" alt=\"enter image description here\"></a></p>\n\n<p>Finally, we can observe that when $x=1$, the arc $y$ covers an eighth of the circumference, hence the Leibnitz formula</p>\n\n<p>$$\\frac\\pi4=1-\\frac13+\\frac15-\\frac17+\\cdots$$</p>\n\n<hr>\n\n<p>The attentive reader may have noticed a little twist in the explanation. While obtaining the formula for $1/(1+x^2)$, we assumed that $x^2&lt;1$, but later used $x=1$. For reasons that deserve many more explanations, the \"polynomial\" expression for $y(x)$ is still valid for $x=1$.</p>\n" }, { "answer_id": 2092432, "score": 3, "text": "<p>If you have covered $$\\arctan x=x-\\dfrac{x^3}{3}+\\dfrac{x^5}{5}-\\dfrac{x^7}{7}+\\cdots+(-1)^{n-1}\\dfrac{x^{2n-1}}{2n-1}+\\cdots\\quad |x|\\le 1$$ then, for $x=1$ $$\\frac{\\pi}{4}=\\arctan 1=1-\\dfrac{1}{3}+\\dfrac{1}{5}-\\dfrac{1}{7}+\\cdots+(-1)^{n-1}\\dfrac{1}{2n-1}+\\cdots$$ $$$$</p>\n" }, { "answer_id": 2092577, "score": 12, "text": "<p>I am doing my best to only use simple notions, but the explanation will look very strange and indirect. We will start with geometry.</p>\n\n<p><a href=\"https://i.stack.imgur.com/2cDAw.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/2cDAw.png\" alt=\"enter image description here\"></a></p>\n\n<p>On the figure you see two right triangles, of basis $1$ and height $x$ and $x+h$ respectively. Note that $x$ and $h$ are line segments, while $y$ and $g$ arc circular arcs. We are particularly interested in the relation between the values of $y$ and $x$, which we will study by observing how the height difference $h$ makes the corresponding arc length $g$ vary.</p>\n\n<p>By Pythagoras, the hypothenuses of the small triangle is $\\sqrt{1+x^2}$, which is also the radius of the large circle. 
Then the arc facing $g$ on the larger circle has a length of $g\\sqrt{1+x^2}$; if you straighten it, it becomes the base of the small outer triangle, which by proportionality has the length $(g\\sqrt{1+x^2})\\sqrt{1+x^2}$.</p>\n\n<p>So we have established</p>\n\n<p>$$\\frac{g}{h}\\approx\\frac1{1+x^2}.$$</p>\n\n<p>This relation is quite approximate in the figure, but if you make $h$ smaller and smaller, it becomes more and more accurate, because the arcs become more and more straight.</p>\n\n<p>Actually, we just computed a <em>derivative</em>, i.e. the ratio of a variation of $y$ (from $y$ to $y+g$) over a variation of $x$ (from $x$ to $x+h$). This is a notion from calculus.</p>\n\n<p>Now we want to express the relation between $x$ and $y$ and we will assume that it is of the polynomial type:</p>\n\n<p>$$y(x)=a+bx+cx^2+dx^3+ex^4\\cdots$$ without specifying the final degree. Using this formula, let us evaluate the ratio $g/h$, i.e.</p>\n\n<p>$$\\frac gh=\\frac{y(x+h)-y(x)}h=\\frac{a-a+b(x+h-x)+c((x+h)^2-x^2)+d((x+h)^3-x^3)+\\cdots}h$$</p>\n\n<p>For example, the cubic term gives, using a remarkable product</p>\n\n<p>$$\\frac{(x+h)^3-x^3}h=\\frac{(x+h-x)((x+h)^2+(x+h)x+x^2)}h=(x+h)^2+(x+h)x+x^2.$$</p>\n\n<p>If we again decrease $h$ so that it becomes neglectible compared to $x$, we obtain $3x^3$. Repeating this reasoning with all terms, you can check that we will get the polynomial expression</p>\n\n<p>$$\\frac gh=b+2cx+3\\,dx^2+4\\,dx^3+5\\,dx^4+\\cdots$$</p>\n\n<p>We would like to compare it to the previous expression, which unfortunately is not in a polynomial form. 
We can achieve this goal by using the \"geometric progression\" formula.</p>\n\n<p>Let us consider the numbers $(-x^2)^k$ and compute their sum for $k$ from $0$ to $n$:</p>\n\n<p>$$S_n=1-x^2+x^4-\\cdots(-x^2)^n.$$</p>\n\n<p>Then if we multiply the sum by $-x^2$ and subtract, we obtain after a lot of terms cancellation</p>\n\n<p>$$(1+x^2)S_n=1-(-x^2)^{n+1},$$ so that</p>\n\n<p>$$S_n=1-x^2+x^4-\\cdots(-x^2)^n=\\frac{1-(-x^2)^{n+1}}{1+x^2}.$$</p>\n\n<p>If $x^2&lt;1$, then $(x^2)^n$ becomes smaller and smaller so that we can simplify </p>\n\n<p>$$S_n=1-x^2+x^4-\\cdots(-x^2)^n\\approx\\frac1{1+x^2},$$ and the larger the final degree, the better the approximation. The figure shows the matching of the polynomial of degree $10$ with the exact formula.</p>\n\n<p><a href=\"https://i.stack.imgur.com/3Zoh9.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/3Zoh9.png\" alt=\"enter image description here\"></a></p>\n\n<p>Now we are able to compare the formulas for $g/h$, and by identification,</p>\n\n<p>$$b=1,c=0,d=-\\frac13,e=0,f=\\frac15,\\cdots$$</p>\n\n<p>Then, noting that $y(0)=0$, we also have $a=0$ and we establish the famous Gregory series,\n$$y(x)=x-\\frac{x^3}3+\\frac{x^5}5-\\frac{x^7}7+\\cdots$$\n which is illustrated in the next picture (to degree $11$). It develops the so-called arc tangent function:</p>\n\n<p><a href=\"https://i.stack.imgur.com/H75IC.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/H75IC.png\" alt=\"enter image description here\"></a></p>\n\n<p>Finally, we can observe that when $x=1$, the arc $y$ covers an eighth of the circumference, hence the Leibnitz formula</p>\n\n<p>$$\\frac\\pi4=1-\\frac13+\\frac15-\\frac17+\\cdots$$</p>\n\n<hr>\n\n<p>The attentive reader may have noticed a little twist in the explanation. While obtaining the formula for $1/(1+x^2)$, we assumed that $x^2&lt;1$, but later used $x=1$. 
For reasons that deserve many more explanations, the \"polynomial\" expression for $y(x)$ is still valid for $x=1$.</p>\n" }, { "answer_id": 2092432, "score": 3, "text": "<p>If you have covered $$\\arctan x=x-\\dfrac{x^3}{3}+\\dfrac{x^5}{5}-\\dfrac{x^7}{7}+\\cdots+(-1)^{n-1}\\dfrac{x^{2n-1}}{2n-1}+\\cdots\\quad |x|\\le 1$$ then, for $x=1$ $$\\frac{\\pi}{4}=\\arctan 1=1-\\dfrac{1}{3}+\\dfrac{1}{5}-\\dfrac{1}{7}+\\cdots+(-1)^{n-1}\\dfrac{1}{2n-1}+\\cdots$$ $$$$</p>\n" } ]
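The key step in the longer answer above is that the slope of the arc-length function $y(x)=\arctan x$ is $1/(1+x^2)$, established there via the ratio $g/h$. A quick numerical illustration of that claim (a sketch added for checking; the evaluation point and step sizes are arbitrary choices, not from the original answer):

```python
import math

# Finite-difference ratio g/h = (y(x+h) - y(x)) / h for y = arctan,
# compared with the claimed derivative 1/(1+x^2).
def difference_quotient(x, h):
    return (math.atan(x + h) - math.atan(x)) / h

x = 0.5
exact = 1 / (1 + x * x)

# Shrinking h makes the ratio approach 1/(1+x^2), as the answer argues
# when it says the approximation improves as h gets smaller.
err_coarse = abs(difference_quotient(x, 1e-2) - exact)
err_fine = abs(difference_quotient(x, 1e-6) - exact)
```

With the smaller step the difference quotient agrees with $1/(1+x^2)$ to several more digits than with the coarse step, which is exactly the limiting behaviour the geometric argument relies on.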
Question: Why is $1-\frac{1}{3}+\frac{1}{5}-\frac{1}{7}+\cdots = \frac{\pi}{4}$? Answer 1: <p>I am doing my best to only use simple notions, but the explanation will look very strange and indirect. We will start with geometry.</p> <p><a href="https://i.stack.imgur.com/2cDAw.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/2cDAw.png" alt="enter image description here"></a></p> <p>On the figure you see two right triangles, with base $1$ and heights $x$ and $x+h$ respectively. Note that $x$ and $h$ are line segments, while $y$ and $g$ are circular arcs. We are particularly interested in the relation between the values of $y$ and $x$, which we will study by observing how the height difference $h$ makes the corresponding arc length $g$ vary.</p> <p>By Pythagoras, the hypotenuse of the small triangle is $\sqrt{1+x^2}$, which is also the radius of the large circle. Then the arc facing $g$ on the larger circle has a length of $g\sqrt{1+x^2}$; if you straighten it, it becomes the base of the small outer triangle, which by proportionality has the length $(g\sqrt{1+x^2})\sqrt{1+x^2}$.</p> <p>So we have established</p> <p>$$\frac{g}{h}\approx\frac1{1+x^2}.$$</p> <p>This relation is quite approximate in the figure, but if you make $h$ smaller and smaller, it becomes more and more accurate, because the arcs become more and more straight.</p> <p>Actually, we just computed a <em>derivative</em>, i.e. the ratio of a variation of $y$ (from $y$ to $y+g$) over a variation of $x$ (from $x$ to $x+h$). This is a notion from calculus.</p> <p>Now we want to express the relation between $x$ and $y$, and we will assume that it is of polynomial type:</p> <p>$$y(x)=a+bx+cx^2+dx^3+ex^4+\cdots$$ without specifying the final degree. 
Using this formula, let us evaluate the ratio $g/h$, i.e.</p> <p>$$\frac gh=\frac{y(x+h)-y(x)}h=\frac{a-a+b(x+h-x)+c((x+h)^2-x^2)+d((x+h)^3-x^3)+\cdots}h$$</p> <p>For example, the cubic term gives, using a standard factorization,</p> <p>$$\frac{(x+h)^3-x^3}h=\frac{(x+h-x)((x+h)^2+(x+h)x+x^2)}h=(x+h)^2+(x+h)x+x^2.$$</p> <p>If we again decrease $h$ so that it becomes negligible compared to $x$, we obtain $3x^2$. Repeating this reasoning with all terms, you can check that we will get the polynomial expression</p> <p>$$\frac gh=b+2cx+3\,dx^2+4\,ex^3+5\,fx^4+\cdots$$</p> <p>We would like to compare it to the previous expression, which unfortunately is not in a polynomial form. We can achieve this goal by using the "geometric progression" formula.</p> <p>Let us consider the numbers $(-x^2)^k$ and compute their sum for $k$ from $0$ to $n$:</p> <p>$$S_n=1-x^2+x^4-\cdots+(-x^2)^n.$$</p> <p>Then if we multiply the sum by $-x^2$ and subtract, we obtain after a lot of cancellation of terms</p> <p>$$(1+x^2)S_n=1-(-x^2)^{n+1},$$ so that</p> <p>$$S_n=1-x^2+x^4-\cdots+(-x^2)^n=\frac{1-(-x^2)^{n+1}}{1+x^2}.$$</p> <p>If $x^2&lt;1$, then $(x^2)^n$ becomes smaller and smaller so that we can simplify </p> <p>$$S_n=1-x^2+x^4-\cdots+(-x^2)^n\approx\frac1{1+x^2},$$ and the larger the final degree, the better the approximation. The figure shows the matching of the polynomial of degree $10$ with the exact formula.</p> <p><a href="https://i.stack.imgur.com/3Zoh9.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/3Zoh9.png" alt="enter image description here"></a></p> <p>Now we are able to compare the formulas for $g/h$, and by identification,</p> <p>$$b=1,c=0,d=-\frac13,e=0,f=\frac15,\cdots$$</p> <p>Then, noting that $y(0)=0$, we also have $a=0$ and we establish the famous Gregory series, $$y(x)=x-\frac{x^3}3+\frac{x^5}5-\frac{x^7}7+\cdots$$ which is illustrated in the next picture (to degree $11$). 
It develops the so-called arc tangent function:</p> <p><a href="https://i.stack.imgur.com/H75IC.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/H75IC.png" alt="enter image description here"></a></p> <p>Finally, we can observe that when $x=1$, the arc $y$ covers an eighth of the circumference, hence the Leibniz formula</p> <p>$$\frac\pi4=1-\frac13+\frac15-\frac17+\cdots$$</p> <hr> <p>The attentive reader may have noticed a little twist in the explanation. While obtaining the formula for $1/(1+x^2)$, we assumed that $x^2&lt;1$, but later used $x=1$. For reasons that deserve many more explanations, the "polynomial" expression for $y(x)$ is still valid for $x=1$.</p> Answer 2: <p>If you have covered $$\arctan x=x-\dfrac{x^3}{3}+\dfrac{x^5}{5}-\dfrac{x^7}{7}+\cdots+(-1)^{n-1}\dfrac{x^{2n-1}}{2n-1}+\cdots\quad |x|\le 1$$ then, for $x=1$ $$\frac{\pi}{4}=\arctan 1=1-\dfrac{1}{3}+\dfrac{1}{5}-\dfrac{1}{7}+\cdots+(-1)^{n-1}\dfrac{1}{2n-1}+\cdots$$</p>
This is a notion from calculus.</p> <p>Now we want to express the relation between $x$ and $y$ and we will assume that it is of the polynomial type:</p> <p>$$y(x)=a+bx+cx^2+dx^3+ex^4\cdots$$ without specifying the final degree. Using this formula, let us evaluate the ratio $g/h$, i.e.</p> <p>$$\frac gh=\frac{y(x+h)-y(x)}h=\frac{a-a+b(x+h-x)+c((x+h)^2-x^2)+d((x+h)^3-x^3)+\cdots}h$$</p> <p>For example, the cubic term gives, using a remarkable product</p> <p>$$\frac{(x+h)^3-x^3}h=\frac{(x+h-x)((x+h)^2+(x+h)x+x^2)}h=(x+h)^2+(x+h)x+x^2.$$</p> <p>If we again decrease $h$ so that it becomes neglectible compared to $x$, we obtain $3x^3$. Repeating this reasoning with all terms, you can check that we will get the polynomial expression</p> <p>$$\frac gh=b+2cx+3\,dx^2+4\,dx^3+5\,dx^4+\cdots$$</p> <p>We would like to compare it to the previous expression, which unfortunately is not in a polynomial form. We can achieve this goal by using the "geometric progression" formula.</p> <p>Let us consider the numbers $(-x^2)^k$ and compute their sum for $k$ from $0$ to $n$:</p> <p>$$S_n=1-x^2+x^4-\cdots(-x^2)^n.$$</p> <p>Then if we multiply the sum by $-x^2$ and subtract, we obtain after a lot of terms cancellation</p> <p>$$(1+x^2)S_n=1-(-x^2)^{n+1},$$ so that</p> <p>$$S_n=1-x^2+x^4-\cdots(-x^2)^n=\frac{1-(-x^2)^{n+1}}{1+x^2}.$$</p> <p>If $x^2&lt;1$, then $(x^2)^n$ becomes smaller and smaller so that we can simplify </p> <p>$$S_n=1-x^2+x^4-\cdots(-x^2)^n\approx\frac1{1+x^2},$$ and the larger the final degree, the better the approximation. 
The figure shows the matching of the polynomial of degree $10$ with the exact formula.</p> <p><a href="https://i.stack.imgur.com/3Zoh9.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/3Zoh9.png" alt="enter image description here"></a></p> <p>Now we are able to compare the formulas for $g/h$, and by identification,</p> <p>$$b=1,c=0,d=-\frac13,e=0,f=\frac15,\cdots$$</p> <p>Then, noting that $y(0)=0$, we also have $a=0$ and we establish the famous Gregory series, $$y(x)=x-\frac{x^3}3+\frac{x^5}5-\frac{x^7}7+\cdots$$ which is illustrated in the next picture (to degree $11$). It develops the so-called arc tangent function:</p> <p><a href="https://i.stack.imgur.com/H75IC.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/H75IC.png" alt="enter image description here"></a></p> <p>Finally, we can observe that when $x=1$, the arc $y$ covers an eighth of the circumference, hence the Leibnitz formula</p> <p>$$\frac\pi4=1-\frac13+\frac15-\frac17+\cdots$$</p> <hr> <p>The attentive reader may have noticed a little twist in the explanation. While obtaining the formula for $1/(1+x^2)$, we assumed that $x^2&lt;1$, but later used $x=1$. For reasons that deserve many more explanations, the "polynomial" expression for $y(x)$ is still valid for $x=1$.</p> Answer 10: <p>If you have covered $$\arctan x=x-\dfrac{x^3}{3}+\dfrac{x^5}{5}-\dfrac{x^7}{7}+\cdots+(-1)^{n-1}\dfrac{x^{2n-1}}{2n-1}+\cdots\quad |x|\le 1$$ then, for $x=1$ $$\frac{\pi}{4}=\arctan 1=1-\dfrac{1}{3}+\dfrac{1}{5}-\dfrac{1}{7}+\cdots+(-1)^{n-1}\dfrac{1}{2n-1}+\cdots$$ $$$$</p> Answer 11: <p>I am doing my best to only use simple notions, but the explanation will look very strange and indirect. We will start with geometry.</p> <p><a href="https://i.stack.imgur.com/2cDAw.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/2cDAw.png" alt="enter image description here"></a></p> <p>On the figure you see two right triangles, of basis $1$ and height $x$ and $x+h$ respectively. 
Note that $x$ and $h$ are line segments, while $y$ and $g$ arc circular arcs. We are particularly interested in the relation between the values of $y$ and $x$, which we will study by observing how the height difference $h$ makes the corresponding arc length $g$ vary.</p> <p>By Pythagoras, the hypothenuses of the small triangle is $\sqrt{1+x^2}$, which is also the radius of the large circle. Then the arc facing $g$ on the larger circle has a length of $g\sqrt{1+x^2}$; if you straighten it, it becomes the base of the small outer triangle, which by proportionality has the length $(g\sqrt{1+x^2})\sqrt{1+x^2}$.</p> <p>So we have established</p> <p>$$\frac{g}{h}\approx\frac1{1+x^2}.$$</p> <p>This relation is quite approximate in the figure, but if you make $h$ smaller and smaller, it becomes more and more accurate, because the arcs become more and more straight.</p> <p>Actually, we just computed a <em>derivative</em>, i.e. the ratio of a variation of $y$ (from $y$ to $y+g$) over a variation of $x$ (from $x$ to $x+h$). This is a notion from calculus.</p> <p>Now we want to express the relation between $x$ and $y$ and we will assume that it is of the polynomial type:</p> <p>$$y(x)=a+bx+cx^2+dx^3+ex^4\cdots$$ without specifying the final degree. Using this formula, let us evaluate the ratio $g/h$, i.e.</p> <p>$$\frac gh=\frac{y(x+h)-y(x)}h=\frac{a-a+b(x+h-x)+c((x+h)^2-x^2)+d((x+h)^3-x^3)+\cdots}h$$</p> <p>For example, the cubic term gives, using a remarkable product</p> <p>$$\frac{(x+h)^3-x^3}h=\frac{(x+h-x)((x+h)^2+(x+h)x+x^2)}h=(x+h)^2+(x+h)x+x^2.$$</p> <p>If we again decrease $h$ so that it becomes neglectible compared to $x$, we obtain $3x^3$. Repeating this reasoning with all terms, you can check that we will get the polynomial expression</p> <p>$$\frac gh=b+2cx+3\,dx^2+4\,dx^3+5\,dx^4+\cdots$$</p> <p>We would like to compare it to the previous expression, which unfortunately is not in a polynomial form. 
We can achieve this goal by using the "geometric progression" formula.</p> <p>Let us consider the numbers $(-x^2)^k$ and compute their sum for $k$ from $0$ to $n$:</p> <p>$$S_n=1-x^2+x^4-\cdots(-x^2)^n.$$</p> <p>Then if we multiply the sum by $-x^2$ and subtract, we obtain after a lot of terms cancellation</p> <p>$$(1+x^2)S_n=1-(-x^2)^{n+1},$$ so that</p> <p>$$S_n=1-x^2+x^4-\cdots(-x^2)^n=\frac{1-(-x^2)^{n+1}}{1+x^2}.$$</p> <p>If $x^2&lt;1$, then $(x^2)^n$ becomes smaller and smaller so that we can simplify </p> <p>$$S_n=1-x^2+x^4-\cdots(-x^2)^n\approx\frac1{1+x^2},$$ and the larger the final degree, the better the approximation. The figure shows the matching of the polynomial of degree $10$ with the exact formula.</p> <p><a href="https://i.stack.imgur.com/3Zoh9.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/3Zoh9.png" alt="enter image description here"></a></p> <p>Now we are able to compare the formulas for $g/h$, and by identification,</p> <p>$$b=1,c=0,d=-\frac13,e=0,f=\frac15,\cdots$$</p> <p>Then, noting that $y(0)=0$, we also have $a=0$ and we establish the famous Gregory series, $$y(x)=x-\frac{x^3}3+\frac{x^5}5-\frac{x^7}7+\cdots$$ which is illustrated in the next picture (to degree $11$). It develops the so-called arc tangent function:</p> <p><a href="https://i.stack.imgur.com/H75IC.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/H75IC.png" alt="enter image description here"></a></p> <p>Finally, we can observe that when $x=1$, the arc $y$ covers an eighth of the circumference, hence the Leibnitz formula</p> <p>$$\frac\pi4=1-\frac13+\frac15-\frac17+\cdots$$</p> <hr> <p>The attentive reader may have noticed a little twist in the explanation. While obtaining the formula for $1/(1+x^2)$, we assumed that $x^2&lt;1$, but later used $x=1$. 
For reasons that deserve many more explanations, the "polynomial" expression for $y(x)$ is still valid for $x=1$.</p> Answer 12: <p>If you have covered $$\arctan x=x-\dfrac{x^3}{3}+\dfrac{x^5}{5}-\dfrac{x^7}{7}+\cdots+(-1)^{n-1}\dfrac{x^{2n-1}}{2n-1}+\cdots\quad |x|\le 1$$ then, for $x=1$ $$\frac{\pi}{4}=\arctan 1=1-\dfrac{1}{3}+\dfrac{1}{5}-\dfrac{1}{7}+\cdots+(-1)^{n-1}\dfrac{1}{2n-1}+\cdots$$ $$$$</p> Answer 13: <p>I am doing my best to only use simple notions, but the explanation will look very strange and indirect. We will start with geometry.</p> <p><a href="https://i.stack.imgur.com/2cDAw.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/2cDAw.png" alt="enter image description here"></a></p> <p>On the figure you see two right triangles, of basis $1$ and height $x$ and $x+h$ respectively. Note that $x$ and $h$ are line segments, while $y$ and $g$ arc circular arcs. We are particularly interested in the relation between the values of $y$ and $x$, which we will study by observing how the height difference $h$ makes the corresponding arc length $g$ vary.</p> <p>By Pythagoras, the hypothenuses of the small triangle is $\sqrt{1+x^2}$, which is also the radius of the large circle. Then the arc facing $g$ on the larger circle has a length of $g\sqrt{1+x^2}$; if you straighten it, it becomes the base of the small outer triangle, which by proportionality has the length $(g\sqrt{1+x^2})\sqrt{1+x^2}$.</p> <p>So we have established</p> <p>$$\frac{g}{h}\approx\frac1{1+x^2}.$$</p> <p>This relation is quite approximate in the figure, but if you make $h$ smaller and smaller, it becomes more and more accurate, because the arcs become more and more straight.</p> <p>Actually, we just computed a <em>derivative</em>, i.e. the ratio of a variation of $y$ (from $y$ to $y+g$) over a variation of $x$ (from $x$ to $x+h$). 
This is a notion from calculus.</p> <p>Now we want to express the relation between $x$ and $y$ and we will assume that it is of the polynomial type:</p> <p>$$y(x)=a+bx+cx^2+dx^3+ex^4\cdots$$ without specifying the final degree. Using this formula, let us evaluate the ratio $g/h$, i.e.</p> <p>$$\frac gh=\frac{y(x+h)-y(x)}h=\frac{a-a+b(x+h-x)+c((x+h)^2-x^2)+d((x+h)^3-x^3)+\cdots}h$$</p> <p>For example, the cubic term gives, using a remarkable product</p> <p>$$\frac{(x+h)^3-x^3}h=\frac{(x+h-x)((x+h)^2+(x+h)x+x^2)}h=(x+h)^2+(x+h)x+x^2.$$</p> <p>If we again decrease $h$ so that it becomes neglectible compared to $x$, we obtain $3x^3$. Repeating this reasoning with all terms, you can check that we will get the polynomial expression</p> <p>$$\frac gh=b+2cx+3\,dx^2+4\,dx^3+5\,dx^4+\cdots$$</p> <p>We would like to compare it to the previous expression, which unfortunately is not in a polynomial form. We can achieve this goal by using the "geometric progression" formula.</p> <p>Let us consider the numbers $(-x^2)^k$ and compute their sum for $k$ from $0$ to $n$:</p> <p>$$S_n=1-x^2+x^4-\cdots(-x^2)^n.$$</p> <p>Then if we multiply the sum by $-x^2$ and subtract, we obtain after a lot of terms cancellation</p> <p>$$(1+x^2)S_n=1-(-x^2)^{n+1},$$ so that</p> <p>$$S_n=1-x^2+x^4-\cdots(-x^2)^n=\frac{1-(-x^2)^{n+1}}{1+x^2}.$$</p> <p>If $x^2&lt;1$, then $(x^2)^n$ becomes smaller and smaller so that we can simplify </p> <p>$$S_n=1-x^2+x^4-\cdots(-x^2)^n\approx\frac1{1+x^2},$$ and the larger the final degree, the better the approximation. 
The figure shows the matching of the polynomial of degree $10$ with the exact formula.</p> <p><a href="https://i.stack.imgur.com/3Zoh9.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/3Zoh9.png" alt="enter image description here"></a></p> <p>Now we are able to compare the formulas for $g/h$, and by identification,</p> <p>$$b=1,c=0,d=-\frac13,e=0,f=\frac15,\cdots$$</p> <p>Then, noting that $y(0)=0$, we also have $a=0$ and we establish the famous Gregory series, $$y(x)=x-\frac{x^3}3+\frac{x^5}5-\frac{x^7}7+\cdots$$ which is illustrated in the next picture (to degree $11$). It develops the so-called arc tangent function:</p> <p><a href="https://i.stack.imgur.com/H75IC.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/H75IC.png" alt="enter image description here"></a></p> <p>Finally, we can observe that when $x=1$, the arc $y$ covers an eighth of the circumference, hence the Leibnitz formula</p> <p>$$\frac\pi4=1-\frac13+\frac15-\frac17+\cdots$$</p> <hr> <p>The attentive reader may have noticed a little twist in the explanation. While obtaining the formula for $1/(1+x^2)$, we assumed that $x^2&lt;1$, but later used $x=1$. For reasons that deserve many more explanations, the "polynomial" expression for $y(x)$ is still valid for $x=1$.</p> Answer 14: <p>If you have covered $$\arctan x=x-\dfrac{x^3}{3}+\dfrac{x^5}{5}-\dfrac{x^7}{7}+\cdots+(-1)^{n-1}\dfrac{x^{2n-1}}{2n-1}+\cdots\quad |x|\le 1$$ then, for $x=1$ $$\frac{\pi}{4}=\arctan 1=1-\dfrac{1}{3}+\dfrac{1}{5}-\dfrac{1}{7}+\cdots+(-1)^{n-1}\dfrac{1}{2n-1}+\cdots$$ $$$$</p> Answer 15: <p>I am doing my best to only use simple notions, but the explanation will look very strange and indirect. We will start with geometry.</p> <p><a href="https://i.stack.imgur.com/2cDAw.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/2cDAw.png" alt="enter image description here"></a></p> <p>On the figure you see two right triangles, of basis $1$ and height $x$ and $x+h$ respectively. 
Note that $x$ and $h$ are line segments, while $y$ and $g$ arc circular arcs. We are particularly interested in the relation between the values of $y$ and $x$, which we will study by observing how the height difference $h$ makes the corresponding arc length $g$ vary.</p> <p>By Pythagoras, the hypothenuses of the small triangle is $\sqrt{1+x^2}$, which is also the radius of the large circle. Then the arc facing $g$ on the larger circle has a length of $g\sqrt{1+x^2}$; if you straighten it, it becomes the base of the small outer triangle, which by proportionality has the length $(g\sqrt{1+x^2})\sqrt{1+x^2}$.</p> <p>So we have established</p> <p>$$\frac{g}{h}\approx\frac1{1+x^2}.$$</p> <p>This relation is quite approximate in the figure, but if you make $h$ smaller and smaller, it becomes more and more accurate, because the arcs become more and more straight.</p> <p>Actually, we just computed a <em>derivative</em>, i.e. the ratio of a variation of $y$ (from $y$ to $y+g$) over a variation of $x$ (from $x$ to $x+h$). This is a notion from calculus.</p> <p>Now we want to express the relation between $x$ and $y$ and we will assume that it is of the polynomial type:</p> <p>$$y(x)=a+bx+cx^2+dx^3+ex^4\cdots$$ without specifying the final degree. Using this formula, let us evaluate the ratio $g/h$, i.e.</p> <p>$$\frac gh=\frac{y(x+h)-y(x)}h=\frac{a-a+b(x+h-x)+c((x+h)^2-x^2)+d((x+h)^3-x^3)+\cdots}h$$</p> <p>For example, the cubic term gives, using a remarkable product</p> <p>$$\frac{(x+h)^3-x^3}h=\frac{(x+h-x)((x+h)^2+(x+h)x+x^2)}h=(x+h)^2+(x+h)x+x^2.$$</p> <p>If we again decrease $h$ so that it becomes neglectible compared to $x$, we obtain $3x^3$. Repeating this reasoning with all terms, you can check that we will get the polynomial expression</p> <p>$$\frac gh=b+2cx+3\,dx^2+4\,dx^3+5\,dx^4+\cdots$$</p> <p>We would like to compare it to the previous expression, which unfortunately is not in a polynomial form. 
We can achieve this goal by using the "geometric progression" formula.</p> <p>Let us consider the numbers $(-x^2)^k$ and compute their sum for $k$ from $0$ to $n$:</p> <p>$$S_n=1-x^2+x^4-\cdots(-x^2)^n.$$</p> <p>Then if we multiply the sum by $-x^2$ and subtract, we obtain after a lot of terms cancellation</p> <p>$$(1+x^2)S_n=1-(-x^2)^{n+1},$$ so that</p> <p>$$S_n=1-x^2+x^4-\cdots(-x^2)^n=\frac{1-(-x^2)^{n+1}}{1+x^2}.$$</p> <p>If $x^2&lt;1$, then $(x^2)^n$ becomes smaller and smaller so that we can simplify </p> <p>$$S_n=1-x^2+x^4-\cdots(-x^2)^n\approx\frac1{1+x^2},$$ and the larger the final degree, the better the approximation. The figure shows the matching of the polynomial of degree $10$ with the exact formula.</p> <p><a href="https://i.stack.imgur.com/3Zoh9.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/3Zoh9.png" alt="enter image description here"></a></p> <p>Now we are able to compare the formulas for $g/h$, and by identification,</p> <p>$$b=1,c=0,d=-\frac13,e=0,f=\frac15,\cdots$$</p> <p>Then, noting that $y(0)=0$, we also have $a=0$ and we establish the famous Gregory series, $$y(x)=x-\frac{x^3}3+\frac{x^5}5-\frac{x^7}7+\cdots$$ which is illustrated in the next picture (to degree $11$). It develops the so-called arc tangent function:</p> <p><a href="https://i.stack.imgur.com/H75IC.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/H75IC.png" alt="enter image description here"></a></p> <p>Finally, we can observe that when $x=1$, the arc $y$ covers an eighth of the circumference, hence the Leibnitz formula</p> <p>$$\frac\pi4=1-\frac13+\frac15-\frac17+\cdots$$</p> <hr> <p>The attentive reader may have noticed a little twist in the explanation. While obtaining the formula for $1/(1+x^2)$, we assumed that $x^2&lt;1$, but later used $x=1$. 
For reasons that deserve many more explanations, the "polynomial" expression for $y(x)$ is still valid for $x=1$.</p> Answer 16: <p>If you have covered $$\arctan x=x-\dfrac{x^3}{3}+\dfrac{x^5}{5}-\dfrac{x^7}{7}+\cdots+(-1)^{n-1}\dfrac{x^{2n-1}}{2n-1}+\cdots\quad |x|\le 1$$ then, for $x=1$ $$\frac{\pi}{4}=\arctan 1=1-\dfrac{1}{3}+\dfrac{1}{5}-\dfrac{1}{7}+\cdots+(-1)^{n-1}\dfrac{1}{2n-1}+\cdots$$ $$$$</p> Answer 17: <p>I am doing my best to only use simple notions, but the explanation will look very strange and indirect. We will start with geometry.</p> <p><a href="https://i.stack.imgur.com/2cDAw.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/2cDAw.png" alt="enter image description here"></a></p> <p>On the figure you see two right triangles, of basis $1$ and height $x$ and $x+h$ respectively. Note that $x$ and $h$ are line segments, while $y$ and $g$ arc circular arcs. We are particularly interested in the relation between the values of $y$ and $x$, which we will study by observing how the height difference $h$ makes the corresponding arc length $g$ vary.</p> <p>By Pythagoras, the hypothenuses of the small triangle is $\sqrt{1+x^2}$, which is also the radius of the large circle. Then the arc facing $g$ on the larger circle has a length of $g\sqrt{1+x^2}$; if you straighten it, it becomes the base of the small outer triangle, which by proportionality has the length $(g\sqrt{1+x^2})\sqrt{1+x^2}$.</p> <p>So we have established</p> <p>$$\frac{g}{h}\approx\frac1{1+x^2}.$$</p> <p>This relation is quite approximate in the figure, but if you make $h$ smaller and smaller, it becomes more and more accurate, because the arcs become more and more straight.</p> <p>Actually, we just computed a <em>derivative</em>, i.e. the ratio of a variation of $y$ (from $y$ to $y+g$) over a variation of $x$ (from $x$ to $x+h$). 
This is a notion from calculus.</p> <p>Now we want to express the relation between $x$ and $y$ and we will assume that it is of the polynomial type:</p> <p>$$y(x)=a+bx+cx^2+dx^3+ex^4\cdots$$ without specifying the final degree. Using this formula, let us evaluate the ratio $g/h$, i.e.</p> <p>$$\frac gh=\frac{y(x+h)-y(x)}h=\frac{a-a+b(x+h-x)+c((x+h)^2-x^2)+d((x+h)^3-x^3)+\cdots}h$$</p> <p>For example, the cubic term gives, using a remarkable product</p> <p>$$\frac{(x+h)^3-x^3}h=\frac{(x+h-x)((x+h)^2+(x+h)x+x^2)}h=(x+h)^2+(x+h)x+x^2.$$</p> <p>If we again decrease $h$ so that it becomes neglectible compared to $x$, we obtain $3x^3$. Repeating this reasoning with all terms, you can check that we will get the polynomial expression</p> <p>$$\frac gh=b+2cx+3\,dx^2+4\,dx^3+5\,dx^4+\cdots$$</p> <p>We would like to compare it to the previous expression, which unfortunately is not in a polynomial form. We can achieve this goal by using the "geometric progression" formula.</p> <p>Let us consider the numbers $(-x^2)^k$ and compute their sum for $k$ from $0$ to $n$:</p> <p>$$S_n=1-x^2+x^4-\cdots(-x^2)^n.$$</p> <p>Then if we multiply the sum by $-x^2$ and subtract, we obtain after a lot of terms cancellation</p> <p>$$(1+x^2)S_n=1-(-x^2)^{n+1},$$ so that</p> <p>$$S_n=1-x^2+x^4-\cdots(-x^2)^n=\frac{1-(-x^2)^{n+1}}{1+x^2}.$$</p> <p>If $x^2&lt;1$, then $(x^2)^n$ becomes smaller and smaller so that we can simplify </p> <p>$$S_n=1-x^2+x^4-\cdots(-x^2)^n\approx\frac1{1+x^2},$$ and the larger the final degree, the better the approximation. 
The figure shows the matching of the polynomial of degree $10$ with the exact formula.</p> <p><a href="https://i.stack.imgur.com/3Zoh9.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/3Zoh9.png" alt="enter image description here"></a></p> <p>Now we are able to compare the formulas for $g/h$, and by identification,</p> <p>$$b=1,c=0,d=-\frac13,e=0,f=\frac15,\cdots$$</p> <p>Then, noting that $y(0)=0$, we also have $a=0$ and we establish the famous Gregory series, $$y(x)=x-\frac{x^3}3+\frac{x^5}5-\frac{x^7}7+\cdots$$ which is illustrated in the next picture (to degree $11$). It develops the so-called arc tangent function:</p> <p><a href="https://i.stack.imgur.com/H75IC.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/H75IC.png" alt="enter image description here"></a></p> <p>Finally, we can observe that when $x=1$, the arc $y$ covers an eighth of the circumference, hence the Leibnitz formula</p> <p>$$\frac\pi4=1-\frac13+\frac15-\frac17+\cdots$$</p> <hr> <p>The attentive reader may have noticed a little twist in the explanation. While obtaining the formula for $1/(1+x^2)$, we assumed that $x^2&lt;1$, but later used $x=1$. For reasons that deserve many more explanations, the "polynomial" expression for $y(x)$ is still valid for $x=1$.</p> Answer 18: <p>If you have covered $$\arctan x=x-\dfrac{x^3}{3}+\dfrac{x^5}{5}-\dfrac{x^7}{7}+\cdots+(-1)^{n-1}\dfrac{x^{2n-1}}{2n-1}+\cdots\quad |x|\le 1$$ then, for $x=1$ $$\frac{\pi}{4}=\arctan 1=1-\dfrac{1}{3}+\dfrac{1}{5}-\dfrac{1}{7}+\cdots+(-1)^{n-1}\dfrac{1}{2n-1}+\cdots$$ $$$$</p> Answer 19: <p>I am doing my best to only use simple notions, but the explanation will look very strange and indirect. We will start with geometry.</p> <p><a href="https://i.stack.imgur.com/2cDAw.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/2cDAw.png" alt="enter image description here"></a></p> <p>On the figure you see two right triangles, of basis $1$ and height $x$ and $x+h$ respectively. 
Note that $x$ and $h$ are line segments, while $y$ and $g$ arc circular arcs. We are particularly interested in the relation between the values of $y$ and $x$, which we will study by observing how the height difference $h$ makes the corresponding arc length $g$ vary.</p> <p>By Pythagoras, the hypothenuses of the small triangle is $\sqrt{1+x^2}$, which is also the radius of the large circle. Then the arc facing $g$ on the larger circle has a length of $g\sqrt{1+x^2}$; if you straighten it, it becomes the base of the small outer triangle, which by proportionality has the length $(g\sqrt{1+x^2})\sqrt{1+x^2}$.</p> <p>So we have established</p> <p>$$\frac{g}{h}\approx\frac1{1+x^2}.$$</p> <p>This relation is quite approximate in the figure, but if you make $h$ smaller and smaller, it becomes more and more accurate, because the arcs become more and more straight.</p> <p>Actually, we just computed a <em>derivative</em>, i.e. the ratio of a variation of $y$ (from $y$ to $y+g$) over a variation of $x$ (from $x$ to $x+h$). This is a notion from calculus.</p> <p>Now we want to express the relation between $x$ and $y$ and we will assume that it is of the polynomial type:</p> <p>$$y(x)=a+bx+cx^2+dx^3+ex^4\cdots$$ without specifying the final degree. Using this formula, let us evaluate the ratio $g/h$, i.e.</p> <p>$$\frac gh=\frac{y(x+h)-y(x)}h=\frac{a-a+b(x+h-x)+c((x+h)^2-x^2)+d((x+h)^3-x^3)+\cdots}h$$</p> <p>For example, the cubic term gives, using a remarkable product</p> <p>$$\frac{(x+h)^3-x^3}h=\frac{(x+h-x)((x+h)^2+(x+h)x+x^2)}h=(x+h)^2+(x+h)x+x^2.$$</p> <p>If we again decrease $h$ so that it becomes neglectible compared to $x$, we obtain $3x^3$. Repeating this reasoning with all terms, you can check that we will get the polynomial expression</p> <p>$$\frac gh=b+2cx+3\,dx^2+4\,dx^3+5\,dx^4+\cdots$$</p> <p>We would like to compare it to the previous expression, which unfortunately is not in a polynomial form. 
We can achieve this goal by using the "geometric progression" formula.</p> <p>Let us consider the numbers $(-x^2)^k$ and compute their sum for $k$ from $0$ to $n$:</p> <p>$$S_n=1-x^2+x^4-\cdots(-x^2)^n.$$</p> <p>Then if we multiply the sum by $-x^2$ and subtract, we obtain after a lot of terms cancellation</p> <p>$$(1+x^2)S_n=1-(-x^2)^{n+1},$$ so that</p> <p>$$S_n=1-x^2+x^4-\cdots(-x^2)^n=\frac{1-(-x^2)^{n+1}}{1+x^2}.$$</p> <p>If $x^2&lt;1$, then $(x^2)^n$ becomes smaller and smaller so that we can simplify </p> <p>$$S_n=1-x^2+x^4-\cdots(-x^2)^n\approx\frac1{1+x^2},$$ and the larger the final degree, the better the approximation. The figure shows the matching of the polynomial of degree $10$ with the exact formula.</p> <p><a href="https://i.stack.imgur.com/3Zoh9.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/3Zoh9.png" alt="enter image description here"></a></p> <p>Now we are able to compare the formulas for $g/h$, and by identification,</p> <p>$$b=1,c=0,d=-\frac13,e=0,f=\frac15,\cdots$$</p> <p>Then, noting that $y(0)=0$, we also have $a=0$ and we establish the famous Gregory series, $$y(x)=x-\frac{x^3}3+\frac{x^5}5-\frac{x^7}7+\cdots$$ which is illustrated in the next picture (to degree $11$). It develops the so-called arc tangent function:</p> <p><a href="https://i.stack.imgur.com/H75IC.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/H75IC.png" alt="enter image description here"></a></p> <p>Finally, we can observe that when $x=1$, the arc $y$ covers an eighth of the circumference, hence the Leibnitz formula</p> <p>$$\frac\pi4=1-\frac13+\frac15-\frac17+\cdots$$</p> <hr> <p>The attentive reader may have noticed a little twist in the explanation. While obtaining the formula for $1/(1+x^2)$, we assumed that $x^2&lt;1$, but later used $x=1$. 
For reasons that deserve many more explanations, the "polynomial" expression for $y(x)$ is still valid for $x=1$.</p> Answer 20: <p>If you have covered $$\arctan x=x-\dfrac{x^3}{3}+\dfrac{x^5}{5}-\dfrac{x^7}{7}+\cdots+(-1)^{n-1}\dfrac{x^{2n-1}}{2n-1}+\cdots\quad |x|\le 1$$ then, for $x=1$ $$\frac{\pi}{4}=\arctan 1=1-\dfrac{1}{3}+\dfrac{1}{5}-\dfrac{1}{7}+\cdots+(-1)^{n-1}\dfrac{1}{2n-1}+\cdots$$ $$$$</p> Answer 21: <p>I am doing my best to only use simple notions, but the explanation will look very strange and indirect. We will start with geometry.</p> <p><a href="https://i.stack.imgur.com/2cDAw.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/2cDAw.png" alt="enter image description here"></a></p> <p>On the figure you see two right triangles, of basis $1$ and height $x$ and $x+h$ respectively. Note that $x$ and $h$ are line segments, while $y$ and $g$ arc circular arcs. We are particularly interested in the relation between the values of $y$ and $x$, which we will study by observing how the height difference $h$ makes the corresponding arc length $g$ vary.</p> <p>By Pythagoras, the hypothenuses of the small triangle is $\sqrt{1+x^2}$, which is also the radius of the large circle. Then the arc facing $g$ on the larger circle has a length of $g\sqrt{1+x^2}$; if you straighten it, it becomes the base of the small outer triangle, which by proportionality has the length $(g\sqrt{1+x^2})\sqrt{1+x^2}$.</p> <p>So we have established</p> <p>$$\frac{g}{h}\approx\frac1{1+x^2}.$$</p> <p>This relation is quite approximate in the figure, but if you make $h$ smaller and smaller, it becomes more and more accurate, because the arcs become more and more straight.</p> <p>Actually, we just computed a <em>derivative</em>, i.e. the ratio of a variation of $y$ (from $y$ to $y+g$) over a variation of $x$ (from $x$ to $x+h$). 
This is a notion from calculus.</p> <p>Now we want to express the relation between $x$ and $y$ and we will assume that it is of the polynomial type:</p> <p>$$y(x)=a+bx+cx^2+dx^3+ex^4\cdots$$ without specifying the final degree. Using this formula, let us evaluate the ratio $g/h$, i.e.</p> <p>$$\frac gh=\frac{y(x+h)-y(x)}h=\frac{a-a+b(x+h-x)+c((x+h)^2-x^2)+d((x+h)^3-x^3)+\cdots}h$$</p> <p>For example, the cubic term gives, using a remarkable product</p> <p>$$\frac{(x+h)^3-x^3}h=\frac{(x+h-x)((x+h)^2+(x+h)x+x^2)}h=(x+h)^2+(x+h)x+x^2.$$</p> <p>If we again decrease $h$ so that it becomes neglectible compared to $x$, we obtain $3x^3$. Repeating this reasoning with all terms, you can check that we will get the polynomial expression</p> <p>$$\frac gh=b+2cx+3\,dx^2+4\,dx^3+5\,dx^4+\cdots$$</p> <p>We would like to compare it to the previous expression, which unfortunately is not in a polynomial form. We can achieve this goal by using the "geometric progression" formula.</p> <p>Let us consider the numbers $(-x^2)^k$ and compute their sum for $k$ from $0$ to $n$:</p> <p>$$S_n=1-x^2+x^4-\cdots(-x^2)^n.$$</p> <p>Then if we multiply the sum by $-x^2$ and subtract, we obtain after a lot of terms cancellation</p> <p>$$(1+x^2)S_n=1-(-x^2)^{n+1},$$ so that</p> <p>$$S_n=1-x^2+x^4-\cdots(-x^2)^n=\frac{1-(-x^2)^{n+1}}{1+x^2}.$$</p> <p>If $x^2&lt;1$, then $(x^2)^n$ becomes smaller and smaller so that we can simplify </p> <p>$$S_n=1-x^2+x^4-\cdots(-x^2)^n\approx\frac1{1+x^2},$$ and the larger the final degree, the better the approximation. 
The figure shows the matching of the polynomial of degree $10$ with the exact formula.</p> <p><a href="https://i.stack.imgur.com/3Zoh9.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/3Zoh9.png" alt="enter image description here"></a></p> <p>Now we are able to compare the formulas for $g/h$, and by identification,</p> <p>$$b=1,c=0,d=-\frac13,e=0,f=\frac15,\cdots$$</p> <p>Then, noting that $y(0)=0$, we also have $a=0$ and we establish the famous Gregory series, $$y(x)=x-\frac{x^3}3+\frac{x^5}5-\frac{x^7}7+\cdots$$ which is illustrated in the next picture (to degree $11$). It develops the so-called arc tangent function:</p> <p><a href="https://i.stack.imgur.com/H75IC.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/H75IC.png" alt="enter image description here"></a></p> <p>Finally, we can observe that when $x=1$, the arc $y$ covers an eighth of the circumference, hence the Leibnitz formula</p> <p>$$\frac\pi4=1-\frac13+\frac15-\frac17+\cdots$$</p> <hr> <p>The attentive reader may have noticed a little twist in the explanation. While obtaining the formula for $1/(1+x^2)$, we assumed that $x^2&lt;1$, but later used $x=1$. For reasons that deserve many more explanations, the "polynomial" expression for $y(x)$ is still valid for $x=1$.</p>
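Both answers above arrive at the Leibniz formula $\frac\pi4=1-\frac13+\frac15-\frac17+\cdots$, i.e. the Gregory series evaluated at $x=1$. As a quick numeric sanity check (a sketch, not part of either answer; the helper name `leibniz_partial_sum` is made up here), the partial sums can be computed directly. The alternating-series estimate bounds the error after $n$ terms by the first omitted term, $1/(2n+1)$, so convergence is slow but certain:

```python
import math

def leibniz_partial_sum(n):
    """Sum of the first n terms of the Gregory series at x = 1:
    1 - 1/3 + 1/5 - 1/7 + ..."""
    return sum((-1) ** k / (2 * k + 1) for k in range(n))

# Error after n terms is at most 1/(2n + 1) by the
# alternating-series bound, so 100,000 terms pin down
# pi/4 to roughly five decimal places.
approx = leibniz_partial_sum(100_000)
print(approx, math.pi / 4)
```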
Is The Union of Intervals an Interval or not?
2092585
8
https://math.stackexchange.com/questions/2092585/is-the-union-of-intervals-an-interval-or-not
<p>I have a question on union of intervals.</p> <p>My teacher says the union of intervals is not an interval.</p> <p>Is this always true? </p> <p>I mean</p> <p>$[0,2] \cup [4,5]$ is not an interval, because $[0,2] \cap [4,5]= \varnothing ,$</p> <p>but what about</p> <p>$ (0,8) \cup (7,9).$</p> <p>I think this is an interval because $(0,8) \cup (7,9)=(0,9)$.</p>
[ { "answer_id": 2092592, "score": 26, "text": "<p>For a statement to be true in mathematics, it must <em>always</em> be true without exception. The statement \"The union of two intervals is an interval\" is false because even though, sometimes, the union of two intervals is an interval (as in your second example), there are counterexamples. Since the statement is not always true (as in your first example), we say that it is false.</p>\n" }, { "answer_id": 2092590, "score": 8, "text": "<p>The union of intervals is <strong>not always</strong> an interval. Sometimes it is, sometimes it isn't. Your second example is in fact an interval, while your first example is not.</p>\n" }, { "answer_id": 2093276, "score": 7, "text": "<p>The problem is that you and your teacher have different interpretations of an ambiguous English statement as a precise mathematical statement.</p>\n\n<h1>Your teacher's intended meaning</h1>\n\n<p>\"The union of [any two] intervals is not [necessarily] an interval\". This statement is easily proved true by providing two disjoint intervals whose union is not an interval, such as [0,2] and [4,5].</p>\n\n<p>In other words, there is a counterexample to the false statement \"For all intervals $X$ and $Y$, $X \\cup Y$ is also an interval.\"</p>\n\n<h1>Your interpretation</h1>\n\n<p>\"The union of [two particular] intervals is not [ever] an interval\". This statement is easily proved false by providing an example of two overlapping intervals whose union <em>is</em> an interval, such as (0,8) and (7,9).</p>\n" }, { "answer_id": 2092588, "score": 4, "text": "<p>The union of intervals is sometimes, but not always, an interval. So your teacher is right.</p>\n" } ]
Your second example is in fact an interval, while your first example is not.</p>\n" }, { "answer_id": 2093276, "score": 7, "text": "<p>The problem is that you and your teacher have different interpretations of an ambiguous English statement as a precise mathematical statement.</p>\n\n<h1>Your teacher's intended meaning.</h1>\n\n<p>\"The union of [any two] intervals is not [necessarily] an interval\". This statement is easily proved true by providing two disjoint intervals whose union is not an interval, such as [0,2] and [4,5]. </p>\n\n<p>In other words, there is counterexample to the false statement \"For all intervals $X$ and $Y$, $X \\cup Y$ is also an interval.\"</p>\n\n<h1>Your interpretation</h1>\n\n<p>\"The union of [two particular] intervals is not [ever] an interval\". This statement is easily proved false providing an example of two overlapping intervals whose union <em>is</em> an interval, such as (0,8) and (7,9).</p>\n" }, { "answer_id": 2092588, "score": 4, "text": "<p>The union of intervals is sometimes, but not always, an interval. So your teacher is right.</p>\n" }, { "answer_id": 2092592, "score": 26, "text": "<p>For a statement to be true in mathematics, it must <em>always</em> be true without exception. The statement \"The union of two intervals is an interval\" is false because even though, sometimes, the union of two intervals is an interval (as in your second example), there are counterexamples. Since the statement is not always true (as in your first example), we say that it is false.</p>\n" }, { "answer_id": 2092590, "score": 8, "text": "<p>The union of intervals is <strong>not always</strong> an interval. Sometimes it is, sometimes it isn't. 
Your second example is in fact an interval, while your first example is not.</p>\n" }, { "answer_id": 2093276, "score": 7, "text": "<p>The problem is that you and your teacher have different interpretations of an ambiguous English statement as a precise mathematical statement.</p>\n\n<h1>Your teacher's intended meaning.</h1>\n\n<p>\"The union of [any two] intervals is not [necessarily] an interval\". This statement is easily proved true by providing two disjoint intervals whose union is not an interval, such as [0,2] and [4,5]. </p>\n\n<p>In other words, there is counterexample to the false statement \"For all intervals $X$ and $Y$, $X \\cup Y$ is also an interval.\"</p>\n\n<h1>Your interpretation</h1>\n\n<p>\"The union of [two particular] intervals is not [ever] an interval\". This statement is easily proved false providing an example of two overlapping intervals whose union <em>is</em> an interval, such as (0,8) and (7,9).</p>\n" }, { "answer_id": 2092588, "score": 4, "text": "<p>The union of intervals is sometimes, but not always, an interval. So your teacher is right.</p>\n" }, { "answer_id": 2092592, "score": 26, "text": "<p>For a statement to be true in mathematics, it must <em>always</em> be true without exception. The statement \"The union of two intervals is an interval\" is false because even though, sometimes, the union of two intervals is an interval (as in your second example), there are counterexamples. Since the statement is not always true (as in your first example), we say that it is false.</p>\n" }, { "answer_id": 2092590, "score": 8, "text": "<p>The union of intervals is <strong>not always</strong> an interval. Sometimes it is, sometimes it isn't. 
Your second example is in fact an interval, while your first example is not.</p>\n" }, { "answer_id": 2093276, "score": 7, "text": "<p>The problem is that you and your teacher have different interpretations of an ambiguous English statement as a precise mathematical statement.</p>\n\n<h1>Your teacher's intended meaning.</h1>\n\n<p>\"The union of [any two] intervals is not [necessarily] an interval\". This statement is easily proved true by providing two disjoint intervals whose union is not an interval, such as [0,2] and [4,5]. </p>\n\n<p>In other words, there is counterexample to the false statement \"For all intervals $X$ and $Y$, $X \\cup Y$ is also an interval.\"</p>\n\n<h1>Your interpretation</h1>\n\n<p>\"The union of [two particular] intervals is not [ever] an interval\". This statement is easily proved false providing an example of two overlapping intervals whose union <em>is</em> an interval, such as (0,8) and (7,9).</p>\n" }, { "answer_id": 2092588, "score": 4, "text": "<p>The union of intervals is sometimes, but not always, an interval. So your teacher is right.</p>\n" }, { "answer_id": 2092592, "score": 26, "text": "<p>For a statement to be true in mathematics, it must <em>always</em> be true without exception. The statement \"The union of two intervals is an interval\" is false because even though, sometimes, the union of two intervals is an interval (as in your second example), there are counterexamples. Since the statement is not always true (as in your first example), we say that it is false.</p>\n" }, { "answer_id": 2092590, "score": 8, "text": "<p>The union of intervals is <strong>not always</strong> an interval. Sometimes it is, sometimes it isn't. 
Your second example is in fact an interval, while your first example is not.</p>\n" }, { "answer_id": 2093276, "score": 7, "text": "<p>The problem is that you and your teacher have different interpretations of an ambiguous English statement as a precise mathematical statement.</p>\n\n<h1>Your teacher's intended meaning.</h1>\n\n<p>\"The union of [any two] intervals is not [necessarily] an interval\". This statement is easily proved true by providing two disjoint intervals whose union is not an interval, such as [0,2] and [4,5]. </p>\n\n<p>In other words, there is counterexample to the false statement \"For all intervals $X$ and $Y$, $X \\cup Y$ is also an interval.\"</p>\n\n<h1>Your interpretation</h1>\n\n<p>\"The union of [two particular] intervals is not [ever] an interval\". This statement is easily proved false providing an example of two overlapping intervals whose union <em>is</em> an interval, such as (0,8) and (7,9).</p>\n" }, { "answer_id": 2092588, "score": 4, "text": "<p>The union of intervals is sometimes, but not always, an interval. So your teacher is right.</p>\n" }, { "answer_id": 2092592, "score": 26, "text": "<p>For a statement to be true in mathematics, it must <em>always</em> be true without exception. The statement \"The union of two intervals is an interval\" is false because even though, sometimes, the union of two intervals is an interval (as in your second example), there are counterexamples. Since the statement is not always true (as in your first example), we say that it is false.</p>\n" }, { "answer_id": 2092590, "score": 8, "text": "<p>The union of intervals is <strong>not always</strong> an interval. Sometimes it is, sometimes it isn't. 
Your second example is in fact an interval, while your first example is not.</p>\n" }, { "answer_id": 2093276, "score": 7, "text": "<p>The problem is that you and your teacher have different interpretations of an ambiguous English statement as a precise mathematical statement.</p>\n\n<h1>Your teacher's intended meaning.</h1>\n\n<p>\"The union of [any two] intervals is not [necessarily] an interval\". This statement is easily proved true by providing two disjoint intervals whose union is not an interval, such as [0,2] and [4,5]. </p>\n\n<p>In other words, there is counterexample to the false statement \"For all intervals $X$ and $Y$, $X \\cup Y$ is also an interval.\"</p>\n\n<h1>Your interpretation</h1>\n\n<p>\"The union of [two particular] intervals is not [ever] an interval\". This statement is easily proved false providing an example of two overlapping intervals whose union <em>is</em> an interval, such as (0,8) and (7,9).</p>\n" }, { "answer_id": 2092588, "score": 4, "text": "<p>The union of intervals is sometimes, but not always, an interval. So your teacher is right.</p>\n" }, { "answer_id": 2092592, "score": 26, "text": "<p>For a statement to be true in mathematics, it must <em>always</em> be true without exception. The statement \"The union of two intervals is an interval\" is false because even though, sometimes, the union of two intervals is an interval (as in your second example), there are counterexamples. Since the statement is not always true (as in your first example), we say that it is false.</p>\n" }, { "answer_id": 2092590, "score": 8, "text": "<p>The union of intervals is <strong>not always</strong> an interval. Sometimes it is, sometimes it isn't. 
Your second example is in fact an interval, while your first example is not.</p>\n" }, { "answer_id": 2093276, "score": 7, "text": "<p>The problem is that you and your teacher have different interpretations of an ambiguous English statement as a precise mathematical statement.</p>\n\n<h1>Your teacher's intended meaning.</h1>\n\n<p>\"The union of [any two] intervals is not [necessarily] an interval\". This statement is easily proved true by providing two disjoint intervals whose union is not an interval, such as [0,2] and [4,5]. </p>\n\n<p>In other words, there is counterexample to the false statement \"For all intervals $X$ and $Y$, $X \\cup Y$ is also an interval.\"</p>\n\n<h1>Your interpretation</h1>\n\n<p>\"The union of [two particular] intervals is not [ever] an interval\". This statement is easily proved false providing an example of two overlapping intervals whose union <em>is</em> an interval, such as (0,8) and (7,9).</p>\n" }, { "answer_id": 2092588, "score": 4, "text": "<p>The union of intervals is sometimes, but not always, an interval. So your teacher is right.</p>\n" }, { "answer_id": 2092592, "score": 26, "text": "<p>For a statement to be true in mathematics, it must <em>always</em> be true without exception. The statement \"The union of two intervals is an interval\" is false because even though, sometimes, the union of two intervals is an interval (as in your second example), there are counterexamples. Since the statement is not always true (as in your first example), we say that it is false.</p>\n" }, { "answer_id": 2092590, "score": 8, "text": "<p>The union of intervals is <strong>not always</strong> an interval. Sometimes it is, sometimes it isn't. 
Your second example is in fact an interval, while your first example is not.</p>\n" }, { "answer_id": 2093276, "score": 7, "text": "<p>The problem is that you and your teacher have different interpretations of an ambiguous English statement as a precise mathematical statement.</p>\n\n<h1>Your teacher's intended meaning.</h1>\n\n<p>\"The union of [any two] intervals is not [necessarily] an interval\". This statement is easily proved true by providing two disjoint intervals whose union is not an interval, such as [0,2] and [4,5]. </p>\n\n<p>In other words, there is counterexample to the false statement \"For all intervals $X$ and $Y$, $X \\cup Y$ is also an interval.\"</p>\n\n<h1>Your interpretation</h1>\n\n<p>\"The union of [two particular] intervals is not [ever] an interval\". This statement is easily proved false providing an example of two overlapping intervals whose union <em>is</em> an interval, such as (0,8) and (7,9).</p>\n" }, { "answer_id": 2092588, "score": 4, "text": "<p>The union of intervals is sometimes, but not always, an interval. So your teacher is right.</p>\n" }, { "answer_id": 2092592, "score": 26, "text": "<p>For a statement to be true in mathematics, it must <em>always</em> be true without exception. The statement \"The union of two intervals is an interval\" is false because even though, sometimes, the union of two intervals is an interval (as in your second example), there are counterexamples. Since the statement is not always true (as in your first example), we say that it is false.</p>\n" }, { "answer_id": 2092590, "score": 8, "text": "<p>The union of intervals is <strong>not always</strong> an interval. Sometimes it is, sometimes it isn't. 
Your second example is in fact an interval, while your first example is not.</p>\n" }, { "answer_id": 2093276, "score": 7, "text": "<p>The problem is that you and your teacher have different interpretations of an ambiguous English statement as a precise mathematical statement.</p>\n\n<h1>Your teacher's intended meaning.</h1>\n\n<p>\"The union of [any two] intervals is not [necessarily] an interval\". This statement is easily proved true by providing two disjoint intervals whose union is not an interval, such as [0,2] and [4,5]. </p>\n\n<p>In other words, there is counterexample to the false statement \"For all intervals $X$ and $Y$, $X \\cup Y$ is also an interval.\"</p>\n\n<h1>Your interpretation</h1>\n\n<p>\"The union of [two particular] intervals is not [ever] an interval\". This statement is easily proved false providing an example of two overlapping intervals whose union <em>is</em> an interval, such as (0,8) and (7,9).</p>\n" }, { "answer_id": 2092588, "score": 4, "text": "<p>The union of intervals is sometimes, but not always, an interval. So your teacher is right.</p>\n" }, { "answer_id": 2092592, "score": 26, "text": "<p>For a statement to be true in mathematics, it must <em>always</em> be true without exception. The statement \"The union of two intervals is an interval\" is false because even though, sometimes, the union of two intervals is an interval (as in your second example), there are counterexamples. Since the statement is not always true (as in your first example), we say that it is false.</p>\n" }, { "answer_id": 2092590, "score": 8, "text": "<p>The union of intervals is <strong>not always</strong> an interval. Sometimes it is, sometimes it isn't. 
Your second example is in fact an interval, while your first example is not.</p>\n" }, { "answer_id": 2093276, "score": 7, "text": "<p>The problem is that you and your teacher have different interpretations of an ambiguous English statement as a precise mathematical statement.</p>\n\n<h1>Your teacher's intended meaning.</h1>\n\n<p>\"The union of [any two] intervals is not [necessarily] an interval\". This statement is easily proved true by providing two disjoint intervals whose union is not an interval, such as [0,2] and [4,5]. </p>\n\n<p>In other words, there is counterexample to the false statement \"For all intervals $X$ and $Y$, $X \\cup Y$ is also an interval.\"</p>\n\n<h1>Your interpretation</h1>\n\n<p>\"The union of [two particular] intervals is not [ever] an interval\". This statement is easily proved false providing an example of two overlapping intervals whose union <em>is</em> an interval, such as (0,8) and (7,9).</p>\n" }, { "answer_id": 2092588, "score": 4, "text": "<p>The union of intervals is sometimes, but not always, an interval. So your teacher is right.</p>\n" }, { "answer_id": 2092592, "score": 26, "text": "<p>For a statement to be true in mathematics, it must <em>always</em> be true without exception. The statement \"The union of two intervals is an interval\" is false because even though, sometimes, the union of two intervals is an interval (as in your second example), there are counterexamples. Since the statement is not always true (as in your first example), we say that it is false.</p>\n" }, { "answer_id": 2092590, "score": 8, "text": "<p>The union of intervals is <strong>not always</strong> an interval. Sometimes it is, sometimes it isn't. 
Your second example is in fact an interval, while your first example is not.</p>\n" }, { "answer_id": 2093276, "score": 7, "text": "<p>The problem is that you and your teacher have different interpretations of an ambiguous English statement as a precise mathematical statement.</p>\n\n<h1>Your teacher's intended meaning.</h1>\n\n<p>\"The union of [any two] intervals is not [necessarily] an interval\". This statement is easily proved true by providing two disjoint intervals whose union is not an interval, such as [0,2] and [4,5]. </p>\n\n<p>In other words, there is counterexample to the false statement \"For all intervals $X$ and $Y$, $X \\cup Y$ is also an interval.\"</p>\n\n<h1>Your interpretation</h1>\n\n<p>\"The union of [two particular] intervals is not [ever] an interval\". This statement is easily proved false providing an example of two overlapping intervals whose union <em>is</em> an interval, such as (0,8) and (7,9).</p>\n" }, { "answer_id": 2092588, "score": 4, "text": "<p>The union of intervals is sometimes, but not always, an interval. So your teacher is right.</p>\n" }, { "answer_id": 2092592, "score": 26, "text": "<p>For a statement to be true in mathematics, it must <em>always</em> be true without exception. The statement \"The union of two intervals is an interval\" is false because even though, sometimes, the union of two intervals is an interval (as in your second example), there are counterexamples. Since the statement is not always true (as in your first example), we say that it is false.</p>\n" }, { "answer_id": 2092590, "score": 8, "text": "<p>The union of intervals is <strong>not always</strong> an interval. Sometimes it is, sometimes it isn't. 
Your second example is in fact an interval, while your first example is not.</p>\n" }, { "answer_id": 2093276, "score": 7, "text": "<p>The problem is that you and your teacher have different interpretations of an ambiguous English statement as a precise mathematical statement.</p>\n\n<h1>Your teacher's intended meaning.</h1>\n\n<p>\"The union of [any two] intervals is not [necessarily] an interval\". This statement is easily proved true by providing two disjoint intervals whose union is not an interval, such as [0,2] and [4,5]. </p>\n\n<p>In other words, there is counterexample to the false statement \"For all intervals $X$ and $Y$, $X \\cup Y$ is also an interval.\"</p>\n\n<h1>Your interpretation</h1>\n\n<p>\"The union of [two particular] intervals is not [ever] an interval\". This statement is easily proved false providing an example of two overlapping intervals whose union <em>is</em> an interval, such as (0,8) and (7,9).</p>\n" }, { "answer_id": 2092588, "score": 4, "text": "<p>The union of intervals is sometimes, but not always, an interval. So your teacher is right.</p>\n" }, { "answer_id": 2092592, "score": 26, "text": "<p>For a statement to be true in mathematics, it must <em>always</em> be true without exception. The statement \"The union of two intervals is an interval\" is false because even though, sometimes, the union of two intervals is an interval (as in your second example), there are counterexamples. Since the statement is not always true (as in your first example), we say that it is false.</p>\n" }, { "answer_id": 2092590, "score": 8, "text": "<p>The union of intervals is <strong>not always</strong> an interval. Sometimes it is, sometimes it isn't. 
Your second example is in fact an interval, while your first example is not.</p>\n" }, { "answer_id": 2093276, "score": 7, "text": "<p>The problem is that you and your teacher have different interpretations of an ambiguous English statement as a precise mathematical statement.</p>\n\n<h1>Your teacher's intended meaning.</h1>\n\n<p>\"The union of [any two] intervals is not [necessarily] an interval\". This statement is easily proved true by providing two disjoint intervals whose union is not an interval, such as [0,2] and [4,5]. </p>\n\n<p>In other words, there is counterexample to the false statement \"For all intervals $X$ and $Y$, $X \\cup Y$ is also an interval.\"</p>\n\n<h1>Your interpretation</h1>\n\n<p>\"The union of [two particular] intervals is not [ever] an interval\". This statement is easily proved false providing an example of two overlapping intervals whose union <em>is</em> an interval, such as (0,8) and (7,9).</p>\n" }, { "answer_id": 2092588, "score": 4, "text": "<p>The union of intervals is sometimes, but not always, an interval. So your teacher is right.</p>\n" }, { "answer_id": 2092592, "score": 26, "text": "<p>For a statement to be true in mathematics, it must <em>always</em> be true without exception. The statement \"The union of two intervals is an interval\" is false because even though, sometimes, the union of two intervals is an interval (as in your second example), there are counterexamples. Since the statement is not always true (as in your first example), we say that it is false.</p>\n" }, { "answer_id": 2092590, "score": 8, "text": "<p>The union of intervals is <strong>not always</strong> an interval. Sometimes it is, sometimes it isn't. 
Your second example is in fact an interval, while your first example is not.</p>\n" }, { "answer_id": 2093276, "score": 7, "text": "<p>The problem is that you and your teacher have different interpretations of an ambiguous English statement as a precise mathematical statement.</p>\n\n<h1>Your teacher's intended meaning.</h1>\n\n<p>\"The union of [any two] intervals is not [necessarily] an interval\". This statement is easily proved true by providing two disjoint intervals whose union is not an interval, such as [0,2] and [4,5]. </p>\n\n<p>In other words, there is counterexample to the false statement \"For all intervals $X$ and $Y$, $X \\cup Y$ is also an interval.\"</p>\n\n<h1>Your interpretation</h1>\n\n<p>\"The union of [two particular] intervals is not [ever] an interval\". This statement is easily proved false providing an example of two overlapping intervals whose union <em>is</em> an interval, such as (0,8) and (7,9).</p>\n" }, { "answer_id": 2092588, "score": 4, "text": "<p>The union of intervals is sometimes, but not always, an interval. So your teacher is right.</p>\n" } ]
Question: Is The Union of Intervals an Interval or not? Answer 1: <p>For a statement to be true in mathematics, it must <em>always</em> be true without exception. The statement "The union of two intervals is an interval" is false because even though, sometimes, the union of two intervals is an interval (as in your second example), there are counterexamples. Since the statement is not always true (as in your first example), we say that it is false.</p> Answer 2: <p>The union of intervals is <strong>not always</strong> an interval. Sometimes it is, sometimes it isn't. Your second example is in fact an interval, while your first example is not.</p> Answer 3: <p>The problem is that you and your teacher have different interpretations of an ambiguous English statement as a precise mathematical statement.</p> <h1>Your teacher's intended meaning.</h1> <p>"The union of [any two] intervals is not [necessarily] an interval". This statement is easily proved true by providing two disjoint intervals whose union is not an interval, such as [0,2] and [4,5].</p> <p>In other words, there is a counterexample to the false statement "For all intervals $X$ and $Y$, $X \cup Y$ is also an interval."</p> <h1>Your interpretation</h1> <p>"The union of [two particular] intervals is not [ever] an interval". This statement is easily proved false by providing an example of two overlapping intervals whose union <em>is</em> an interval, such as (0,8) and (7,9).</p> Answer 4: <p>The union of intervals is sometimes, but not always, an interval. So your teacher is right.</p>
Your second example is in fact an interval, while your first example is not.</p> Answer 31: <p>The problem is that you and your teacher have different interpretations of an ambiguous English statement as a precise mathematical statement.</p> <h1>Your teacher's intended meaning.</h1> <p>"The union of [any two] intervals is not [necessarily] an interval". This statement is easily proved true by providing two disjoint intervals whose union is not an interval, such as [0,2] and [4,5]. </p> <p>In other words, there is counterexample to the false statement "For all intervals $X$ and $Y$, $X \cup Y$ is also an interval."</p> <h1>Your interpretation</h1> <p>"The union of [two particular] intervals is not [ever] an interval". This statement is easily proved false providing an example of two overlapping intervals whose union <em>is</em> an interval, such as (0,8) and (7,9).</p> Answer 32: <p>The union of intervals is sometimes, but not always, an interval. So your teacher is right.</p> Answer 33: <p>For a statement to be true in mathematics, it must <em>always</em> be true without exception. The statement "The union of two intervals is an interval" is false because even though, sometimes, the union of two intervals is an interval (as in your second example), there are counterexamples. Since the statement is not always true (as in your first example), we say that it is false.</p> Answer 34: <p>The union of intervals is <strong>not always</strong> an interval. Sometimes it is, sometimes it isn't. Your second example is in fact an interval, while your first example is not.</p> Answer 35: <p>The problem is that you and your teacher have different interpretations of an ambiguous English statement as a precise mathematical statement.</p> <h1>Your teacher's intended meaning.</h1> <p>"The union of [any two] intervals is not [necessarily] an interval". This statement is easily proved true by providing two disjoint intervals whose union is not an interval, such as [0,2] and [4,5]. 
</p> <p>In other words, there is counterexample to the false statement "For all intervals $X$ and $Y$, $X \cup Y$ is also an interval."</p> <h1>Your interpretation</h1> <p>"The union of [two particular] intervals is not [ever] an interval". This statement is easily proved false providing an example of two overlapping intervals whose union <em>is</em> an interval, such as (0,8) and (7,9).</p> Answer 36: <p>The union of intervals is sometimes, but not always, an interval. So your teacher is right.</p> Answer 37: <p>For a statement to be true in mathematics, it must <em>always</em> be true without exception. The statement "The union of two intervals is an interval" is false because even though, sometimes, the union of two intervals is an interval (as in your second example), there are counterexamples. Since the statement is not always true (as in your first example), we say that it is false.</p> Answer 38: <p>The union of intervals is <strong>not always</strong> an interval. Sometimes it is, sometimes it isn't. Your second example is in fact an interval, while your first example is not.</p> Answer 39: <p>The problem is that you and your teacher have different interpretations of an ambiguous English statement as a precise mathematical statement.</p> <h1>Your teacher's intended meaning.</h1> <p>"The union of [any two] intervals is not [necessarily] an interval". This statement is easily proved true by providing two disjoint intervals whose union is not an interval, such as [0,2] and [4,5]. </p> <p>In other words, there is counterexample to the false statement "For all intervals $X$ and $Y$, $X \cup Y$ is also an interval."</p> <h1>Your interpretation</h1> <p>"The union of [two particular] intervals is not [ever] an interval". This statement is easily proved false providing an example of two overlapping intervals whose union <em>is</em> an interval, such as (0,8) and (7,9).</p> Answer 40: <p>The union of intervals is sometimes, but not always, an interval. 
So your teacher is right.</p> Answer 41: <p>For a statement to be true in mathematics, it must <em>always</em> be true without exception. The statement "The union of two intervals is an interval" is false because even though, sometimes, the union of two intervals is an interval (as in your second example), there are counterexamples. Since the statement is not always true (as in your first example), we say that it is false.</p> Answer 42: <p>The union of intervals is <strong>not always</strong> an interval. Sometimes it is, sometimes it isn't. Your second example is in fact an interval, while your first example is not.</p> Answer 43: <p>The problem is that you and your teacher have different interpretations of an ambiguous English statement as a precise mathematical statement.</p> <h1>Your teacher's intended meaning.</h1> <p>"The union of [any two] intervals is not [necessarily] an interval". This statement is easily proved true by providing two disjoint intervals whose union is not an interval, such as [0,2] and [4,5]. </p> <p>In other words, there is counterexample to the false statement "For all intervals $X$ and $Y$, $X \cup Y$ is also an interval."</p> <h1>Your interpretation</h1> <p>"The union of [two particular] intervals is not [ever] an interval". This statement is easily proved false providing an example of two overlapping intervals whose union <em>is</em> an interval, such as (0,8) and (7,9).</p> Answer 44: <p>The union of intervals is sometimes, but not always, an interval. So your teacher is right.</p> Answer 45: <p>For a statement to be true in mathematics, it must <em>always</em> be true without exception. The statement "The union of two intervals is an interval" is false because even though, sometimes, the union of two intervals is an interval (as in your second example), there are counterexamples. 
Since the statement is not always true (as in your first example), we say that it is false.</p> Answer 46: <p>The union of intervals is <strong>not always</strong> an interval. Sometimes it is, sometimes it isn't. Your second example is in fact an interval, while your first example is not.</p> Answer 47: <p>The problem is that you and your teacher have different interpretations of an ambiguous English statement as a precise mathematical statement.</p> <h1>Your teacher's intended meaning.</h1> <p>"The union of [any two] intervals is not [necessarily] an interval". This statement is easily proved true by providing two disjoint intervals whose union is not an interval, such as [0,2] and [4,5]. </p> <p>In other words, there is counterexample to the false statement "For all intervals $X$ and $Y$, $X \cup Y$ is also an interval."</p> <h1>Your interpretation</h1> <p>"The union of [two particular] intervals is not [ever] an interval". This statement is easily proved false providing an example of two overlapping intervals whose union <em>is</em> an interval, such as (0,8) and (7,9).</p> Answer 48: <p>The union of intervals is sometimes, but not always, an interval. So your teacher is right.</p> Answer 49: <p>For a statement to be true in mathematics, it must <em>always</em> be true without exception. The statement "The union of two intervals is an interval" is false because even though, sometimes, the union of two intervals is an interval (as in your second example), there are counterexamples. Since the statement is not always true (as in your first example), we say that it is false.</p> Answer 50: <p>The union of intervals is <strong>not always</strong> an interval. Sometimes it is, sometimes it isn't. 
Your second example is in fact an interval, while your first example is not.</p> Answer 51: <p>The problem is that you and your teacher have different interpretations of an ambiguous English statement as a precise mathematical statement.</p> <h1>Your teacher's intended meaning.</h1> <p>"The union of [any two] intervals is not [necessarily] an interval". This statement is easily proved true by providing two disjoint intervals whose union is not an interval, such as [0,2] and [4,5]. </p> <p>In other words, there is counterexample to the false statement "For all intervals $X$ and $Y$, $X \cup Y$ is also an interval."</p> <h1>Your interpretation</h1> <p>"The union of [two particular] intervals is not [ever] an interval". This statement is easily proved false providing an example of two overlapping intervals whose union <em>is</em> an interval, such as (0,8) and (7,9).</p> Answer 52: <p>The union of intervals is sometimes, but not always, an interval. So your teacher is right.</p> Answer 53: <p>For a statement to be true in mathematics, it must <em>always</em> be true without exception. The statement "The union of two intervals is an interval" is false because even though, sometimes, the union of two intervals is an interval (as in your second example), there are counterexamples. Since the statement is not always true (as in your first example), we say that it is false.</p> Answer 54: <p>The union of intervals is <strong>not always</strong> an interval. Sometimes it is, sometimes it isn't. Your second example is in fact an interval, while your first example is not.</p> Answer 55: <p>The problem is that you and your teacher have different interpretations of an ambiguous English statement as a precise mathematical statement.</p> <h1>Your teacher's intended meaning.</h1> <p>"The union of [any two] intervals is not [necessarily] an interval". This statement is easily proved true by providing two disjoint intervals whose union is not an interval, such as [0,2] and [4,5]. 
</p> <p>In other words, there is counterexample to the false statement "For all intervals $X$ and $Y$, $X \cup Y$ is also an interval."</p> <h1>Your interpretation</h1> <p>"The union of [two particular] intervals is not [ever] an interval". This statement is easily proved false providing an example of two overlapping intervals whose union <em>is</em> an interval, such as (0,8) and (7,9).</p> Answer 56: <p>The union of intervals is sometimes, but not always, an interval. So your teacher is right.</p> Answer 57: <p>For a statement to be true in mathematics, it must <em>always</em> be true without exception. The statement "The union of two intervals is an interval" is false because even though, sometimes, the union of two intervals is an interval (as in your second example), there are counterexamples. Since the statement is not always true (as in your first example), we say that it is false.</p> Answer 58: <p>The union of intervals is <strong>not always</strong> an interval. Sometimes it is, sometimes it isn't. Your second example is in fact an interval, while your first example is not.</p> Answer 59: <p>The problem is that you and your teacher have different interpretations of an ambiguous English statement as a precise mathematical statement.</p> <h1>Your teacher's intended meaning.</h1> <p>"The union of [any two] intervals is not [necessarily] an interval". This statement is easily proved true by providing two disjoint intervals whose union is not an interval, such as [0,2] and [4,5]. </p> <p>In other words, there is counterexample to the false statement "For all intervals $X$ and $Y$, $X \cup Y$ is also an interval."</p> <h1>Your interpretation</h1> <p>"The union of [two particular] intervals is not [ever] an interval". This statement is easily proved false providing an example of two overlapping intervals whose union <em>is</em> an interval, such as (0,8) and (7,9).</p> Answer 60: <p>The union of intervals is sometimes, but not always, an interval. 
So your teacher is right.</p> Answer 61: <p>For a statement to be true in mathematics, it must <em>always</em> be true without exception. The statement "The union of two intervals is an interval" is false because even though, sometimes, the union of two intervals is an interval (as in your second example), there are counterexamples. Since the statement is not always true (as in your first example), we say that it is false.</p> Answer 62: <p>The union of intervals is <strong>not always</strong> an interval. Sometimes it is, sometimes it isn't. Your second example is in fact an interval, while your first example is not.</p> Answer 63: <p>The problem is that you and your teacher have different interpretations of an ambiguous English statement as a precise mathematical statement.</p> <h1>Your teacher's intended meaning.</h1> <p>"The union of [any two] intervals is not [necessarily] an interval". This statement is easily proved true by providing two disjoint intervals whose union is not an interval, such as [0,2] and [4,5]. </p> <p>In other words, there is counterexample to the false statement "For all intervals $X$ and $Y$, $X \cup Y$ is also an interval."</p> <h1>Your interpretation</h1> <p>"The union of [two particular] intervals is not [ever] an interval". This statement is easily proved false providing an example of two overlapping intervals whose union <em>is</em> an interval, such as (0,8) and (7,9).</p> Answer 64: <p>The union of intervals is sometimes, but not always, an interval. So your teacher is right.</p> Answer 65: <p>For a statement to be true in mathematics, it must <em>always</em> be true without exception. The statement "The union of two intervals is an interval" is false because even though, sometimes, the union of two intervals is an interval (as in your second example), there are counterexamples. 
Since the statement is not always true (as in your first example), we say that it is false.</p> Answer 66: <p>The union of intervals is <strong>not always</strong> an interval. Sometimes it is, sometimes it isn't. Your second example is in fact an interval, while your first example is not.</p> Answer 67: <p>The problem is that you and your teacher have different interpretations of an ambiguous English statement as a precise mathematical statement.</p> <h1>Your teacher's intended meaning.</h1> <p>"The union of [any two] intervals is not [necessarily] an interval". This statement is easily proved true by providing two disjoint intervals whose union is not an interval, such as [0,2] and [4,5]. </p> <p>In other words, there is counterexample to the false statement "For all intervals $X$ and $Y$, $X \cup Y$ is also an interval."</p> <h1>Your interpretation</h1> <p>"The union of [two particular] intervals is not [ever] an interval". This statement is easily proved false providing an example of two overlapping intervals whose union <em>is</em> an interval, such as (0,8) and (7,9).</p> Answer 68: <p>The union of intervals is sometimes, but not always, an interval. So your teacher is right.</p>
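The "sometimes, but not always" point in the answers above can be checked mechanically. The following sketch (the function name is illustrative, not from any of the answers) tests whether the union of two closed intervals is itself an interval, which holds exactly when they overlap or touch:

```python
def union_is_interval(a, b):
    """Return True when the union of two closed intervals, given as
    (lo, hi) tuples, is itself an interval (they overlap or touch)."""
    first, second = sorted([a, b])   # order by left endpoint
    return second[0] <= first[1]     # second starts before first ends

# Disjoint intervals -- the union is NOT an interval (the teacher's counterexample):
print(union_is_interval((0, 2), (4, 5)))   # False
# Overlapping intervals -- the union IS an interval (the asker's example):
print(union_is_interval((0, 8), (7, 9)))   # True
```

Both outcomes occur, which is exactly why the universal statement "the union of two intervals is an interval" is false while particular unions can still be intervals.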
Shouldn't the harmonic series converge?
2,093,530
8
https://math.stackexchange.com/questions/2093530/shouldnt-the-harmonic-series-converge
<p>If a sequence converges in a metric space, it is Cauchy, and in $\mathbb{R}^k$ every Cauchy sequence converges. <br> Therefore, in $\mathbb{R}^k$ a sequence converges iff it is Cauchy. <br> Let $\{s_n\}$ be a sequence in $\mathbb{R}$ where each $s_n=\sum_{k=1}^na_k$. Therefore, by the above, every series converges iff $$\left | \sum_{k=m}^n a_k\right| &lt;\epsilon$$ for a given $\epsilon &gt;0$ and an integer $N$ such that $N\le m\le n$. If $n=m$ then the statement reduces to: <br> <strong>A series converges if and only if</strong> $$|a_n| &lt; \epsilon $$ <strong>for a given $\epsilon &gt;0$ and an integer $N$ such that $N\le n$.</strong> <br> This clearly cannot be (e.g. the harmonic series). When does the equivalence become an implication?</p>
[ { "answer_id": 2093544, "score": 30, "text": "<p>In the definition of a Cauchy sequence, you have not gotten the order of the quantifiers correct. The partial sums of a series are Cauchy iff for all $\\epsilon &gt; 0$ there exists an $N$ such that for all $m,n \\ge N$, $|\\sum_{k=m}^{n} a_k| &lt; \\epsilon$. Notice the \"for all $m,n$.\" You cannot just let $m=n$ and verify the condition for that case.</p>\n" }, { "answer_id": 2093547, "score": 13, "text": "<p>It is true that a sequence in $\\mathbb{R}$ converges if and only if it is Cauchy, and as such a series in $\\mathbb{R}$ converges if and only if the sequence of partial sums $\\{S_k\\}_{k=0}^\\infty$ is Cauchy. The problem is that you are not using the definition of Cauchy correctly. For a sequence $\\{x_k\\}_{k=0}^\\infty$ it reads that for every $\\epsilon &gt;0$ there exists $K \\ge 0$ such that if $m,n \\ge K$, then $|x_n - x_m | &lt; \\epsilon$. Here one <em>does not</em> get to restrict to $n = m+1$, one must check all possible values of $m,n \\ge K$. Note, though, that it is possible to restrict to $m \\ge n \\ge K$ in the definition.</p>\n\n<p>Let's look at this for the harmonic series. In this case $S_k = \\sum_{m=1}^k 1/m$. You are looking at $S_{k+1} - S_k = 1/(k+1)$, which indeed can be made arbitrarily small. 
In fact, to use the Cauchy definition we must consider arbitrary $k \\ge \\ell$ to get \n$$\nS_k - S_\\ell = \\sum_{m=\\ell+1}^k \\frac{1}{m},\n$$<br>\nand it is these partial sums of arbitrary length that cannot be made small.</p>\n" }, { "answer_id": 2093543, "score": 5, "text": "<p>If $\\epsilon&gt;0$ and $m\\in\\mathbb{N}$ then there is an $n\\in\\mathbb{N}$, $n&gt;m$ such that</p>\n\n<p>$$ \\sum_{k=m}^n\\frac{1}{k}&gt;\\epsilon $$</p>\n" }, { "answer_id": 2093574, "score": 4, "text": "<blockquote>\n <p>Therefore, by the above, every series converges iff $$\\left | \\sum_{k=m}^n a_k\\right| &lt;\\epsilon$$ for a given $\\epsilon &gt;0$ and an integer $N$ such that $N\\le m\\le n$.</p>\n</blockquote>\n\n<p>What are $m$ and $n$ in this statement? They have not been introduced. This statement doesn't make sense.</p>\n\n<p>Here's what you should have said. </p>\n\n<blockquote>\n <p>The following two statements are equivalent:</p>\n \n <ol>\n <li>The series $\\sum_{k=1}^\\infty a_k$ converges.</li>\n <li>If $\\epsilon &gt; 0$, then there exists a positive integer $N$ such that if $m$ and $n$ are positive integers and $N &lt; m \\leq n$, then $\\left | \\sum_{k=m}^n a_k \\right| &lt; \\epsilon$.</li>\n </ol>\n</blockquote>\n" }, { "answer_id": 2093660, "score": 2, "text": "<p>It is true that a series converges if and only if the following is true:</p>\n\n<blockquote>\n <p>For any $\\epsilon &gt; 0$, there is some $N$ such that\n $$\\left\\lvert \\sum_{k=m}^n a_k\\right\\rvert &lt; \\epsilon$$\n whenever $N\\leq m \\leq n$.</p>\n</blockquote>\n\n<p>The statement in the box above implies this one:</p>\n\n<blockquote>\n <p>For any $\\epsilon &gt; 0$, there is some $N$ such that\n $$\\lvert a_n \\rvert &lt; \\epsilon$$\n whenever $N\\leq n$.</p>\n</blockquote>\n\n<p>The reason is that the statement in the first box says what happens\nfor all $m,n$ such that $N\\leq m \\leq n$,\n<strong>including</strong> the pairs $m,n$ in which $m=n$.</p>\n\n<p>On the other hand, the 
statement in the second box does <em>not</em> tell you\nwhat happens for all $m,n$ such that $N\leq m \leq n$,\nonly what happens in a small subset of those cases.\nTherefore you cannot use the statement in the second box to conclude\nthe statement in the first box.</p>\n\n<p>A correct statement using the second box is,</p>\n\n<blockquote>\n <p>A series $\\{a_n\\}$ converges <strong>only if</strong> for any $\\epsilon &gt; 0$, there is some $N$ such that\n $$\\lvert a_n \\rvert &lt; \\epsilon$$\n whenever $N\\leq n$.</p>\n</blockquote>\n\n<p>What you did was to write \"if and only if\" when you should\nhave written \"only if\".</p>\n" }, { "answer_id": 2093546, "score": 1, "text": "<p>I think your confusion is with the quantifiers. Your initial statements are correct, but to put everything in more specific terms, the correct statement, with correct quantifiers, is:</p>\n\n<p>Let $\\{s_n\\}$ be a sequence in $\\mathbb{R}$ where $s_n=\\sum_{k=1}^na_k$. Then the series converges iff $\\textit{for all } \\epsilon &gt; 0$ $\\textit{there exists } N\\in\\mathbb{N}$ such that $\\textit{for all } n\\geq m &gt; N$ we have that \n$$\\left|\\sum_m^na_k\\right| &lt; \\epsilon$$</p>\n\n<p>This does not hold for the harmonic series since $\\sum_{k=N}^\\infty\\frac{1}{k}=\\infty$ for all $N$.</p>\n" } ]
Your initial statements are correct, but to put everything in more specific terms the correct statement, with correct quantifiers is:</p>\n\n<p>Let $\\{s_n\\}$ be a sequence in $\\mathbb{R}$ where $s_n=\\sum_{k=1}^na_k$. Then the series converges iff $\\textit{for all } \\epsilon &gt; 0$ $\\textit{there exists } N\\in\\mathbb{N}$ such that $\\textit{for all } n\\geq m &gt; N$ we have that \n$$\\left|\\sum_m^na_k\\right| &lt; \\epsilon$$</p>\n\n<p>This does not hold for the harmonic series since $\\sum_{k=N}^\\infty\\frac{1}{k}=\\infty for all N$</p>\n" }, { "answer_id": 2093544, "score": 30, "text": "<p>In the definition of a Cauchy sequence, you have not gotten the order of the quantifiers correct. The partial sums of a series are Cauchy iff for all $\\epsilon &gt; 0$ there exists an $N$ such that for all $m,n \\ge N$, $|\\sum_{k=m}^{n} a_k| &lt; \\epsilon$. Notice the \"for all $m,n$.\" You cannot just let $m=n$ and verify the condition for that case.</p>\n" }, { "answer_id": 2093547, "score": 13, "text": "<p>It is true that a sequence in $\\mathbb{R}$ converges if and only if it is Cauchy, and as such a series in $\\mathbb{R}$ converges if and only if the sequence of partials sums $\\{S_k\\}_{k=0}^\\infty$ is Cauchy. The problem is that you are not using the definition of Cauchy correctly. For a sequence $\\{x_k\\}_{k=0}^\\infty$ it reads that for every $\\epsilon &gt;0$ there exists $K \\ge 0$ such that if $m,n \\ge K$, then $|x_n - x_m | &lt; \\epsilon$. Here one <em>does not</em> get to restrict to $n = m+1$, one must check all possible values of $m,n \\ge K$. Note, though, that it is possible to restrict to $m \\ge n \\ge K$ in the definition.</p>\n\n<p>Let's look at this for the harmonic series. In this case $S_k = \\sum_{m=1}^k 1/m$. You are looking at $S_{k+1} - S_k = 1/(m+1)$, which indeed can be made arbitrarily small. 
In fact, to use the Cauchy definition we must consider arbitrary $k \\ge \\ell$ to get \n$$\nS_k - S_\\ell = \\sum_{m=\\ell+1}^k \\frac{1}{m},\n$$<br>\nand it is these partial sums of arbitrary length that cannot be made small.</p>\n" }, { "answer_id": 2093543, "score": 5, "text": "<p>If $\\epsilon&gt;0$ and $m\\in\\mathbb{N}$ then there is an $n\\in\\mathbb{N}$, $n&gt;m$ such that</p>\n\n<p>$$ \\sum_{k=m}^n\\frac{1}{k}&gt;\\epsilon $$</p>\n" }, { "answer_id": 2093574, "score": 4, "text": "<blockquote>\n <p>Therefore, by the above, every series converges iff $$\\left | \\sum_{k=m}^n a_k\\right| &lt;\\epsilon$$ for a given $\\epsilon &gt;0$ and an integer $N$ such that $N\\le m\\le n$.</p>\n</blockquote>\n\n<p>What are $m$ and $n$ in this statement? They have not been introduced. This statement doesn't make sense.</p>\n\n<p>Here's what you should have said. </p>\n\n<blockquote>\n <p>The following two statements are equivalent:</p>\n \n <ol>\n <li>The series $\\sum_{k=1}^\\infty a_k$ converges.</li>\n <li>If $\\epsilon &gt; 0$, then there exists a positive integer $N$ such that if $m$ and $n$ are positive integers and $N &lt; m \\leq n$, then $\\left | \\sum_{k=m}^n a_k \\right| &lt; \\epsilon$.</li>\n </ol>\n</blockquote>\n" }, { "answer_id": 2093660, "score": 2, "text": "<p>It is true that a series converges if and only if the following is true:</p>\n\n<blockquote>\n <p>For any $\\epsilon &gt; 0$, there is some $N$ such that\n $$\\left\\lvert \\sum_{k=m}^n a_k\\right\\rvert &lt; \\epsilon$$\n whenever $N\\leq m \\leq n$.</p>\n</blockquote>\n\n<p>The statement in the box above implies this one:</p>\n\n<blockquote>\n <p>For any $\\epsilon &gt; 0$, there is some $N$ such that\n $$\\lvert a_n \\rvert &lt; \\epsilon$$\n whenever $N\\leq n$.</p>\n</blockquote>\n\n<p>The reason is that the statement in the first box says what happens\nfor all $m,n$ such that $N\\leq m \\leq n$,\n<strong>including</strong> the pairs $m,n$ in which $m=n$.</p>\n\n<p>On the other hand, the 
statement in the second box does <em>not</em> tell you\nwhat happens for all $m,n$ such that $N\\leq m \\leq n$,\nonly what happens in a small subset of those cases.\nTherefore you cannot use the statement in the second box to conclude\nthe statement in the first box.</p>\n\n<p>A correct statement using the second box is,</p>\n\n<blockquote>\n <p>A series $\\{a_n\\}$ converges <strong>only if</strong> for any $\\epsilon &gt; 0$, there is some $N$ such that\n $$\\lvert a_n \\rvert &lt; \\epsilon$$\n whenever $N\\leq n$.</p>\n</blockquote>\n\n<p>What you did was to write \"if and only if\" when you should\nhave written \"only if\".</p>\n" }, { "answer_id": 2093546, "score": 1, "text": "<p>I think you're confusion is with the quantifiers. Your initial statements are correct, but to put everything in more specific terms the correct statement, with correct quantifiers is:</p>\n\n<p>Let $\\{s_n\\}$ be a sequence in $\\mathbb{R}$ where $s_n=\\sum_{k=1}^na_k$. Then the series converges iff $\\textit{for all } \\epsilon &gt; 0$ $\\textit{there exists } N\\in\\mathbb{N}$ such that $\\textit{for all } n\\geq m &gt; N$ we have that \n$$\\left|\\sum_m^na_k\\right| &lt; \\epsilon$$</p>\n\n<p>This does not hold for the harmonic series since $\\sum_{k=N}^\\infty\\frac{1}{k}=\\infty for all N$</p>\n" }, { "answer_id": 2093544, "score": 30, "text": "<p>In the definition of a Cauchy sequence, you have not gotten the order of the quantifiers correct. The partial sums of a series are Cauchy iff for all $\\epsilon &gt; 0$ there exists an $N$ such that for all $m,n \\ge N$, $|\\sum_{k=m}^{n} a_k| &lt; \\epsilon$. Notice the \"for all $m,n$.\" You cannot just let $m=n$ and verify the condition for that case.</p>\n" }, { "answer_id": 2093547, "score": 13, "text": "<p>It is true that a sequence in $\\mathbb{R}$ converges if and only if it is Cauchy, and as such a series in $\\mathbb{R}$ converges if and only if the sequence of partials sums $\\{S_k\\}_{k=0}^\\infty$ is Cauchy. 
The problem is that you are not using the definition of Cauchy correctly. For a sequence $\\{x_k\\}_{k=0}^\\infty$ it reads that for every $\\epsilon &gt;0$ there exists $K \\ge 0$ such that if $m,n \\ge K$, then $|x_n - x_m | &lt; \\epsilon$. Here one <em>does not</em> get to restrict to $n = m+1$, one must check all possible values of $m,n \\ge K$. Note, though, that it is possible to restrict to $m \\ge n \\ge K$ in the definition.</p>\n\n<p>Let's look at this for the harmonic series. In this case $S_k = \\sum_{m=1}^k 1/m$. You are looking at $S_{k+1} - S_k = 1/(m+1)$, which indeed can be made arbitrarily small. In fact, to use the Cauchy definition we must consider arbitrary $k \\ge \\ell$ to get \n$$\nS_k - S_\\ell = \\sum_{m=\\ell+1}^k \\frac{1}{m},\n$$<br>\nand it is these partial sums of arbitrary length that cannot be made small.</p>\n" }, { "answer_id": 2093543, "score": 5, "text": "<p>If $\\epsilon&gt;0$ and $m\\in\\mathbb{N}$ then there is an $n\\in\\mathbb{N}$, $n&gt;m$ such that</p>\n\n<p>$$ \\sum_{k=m}^n\\frac{1}{k}&gt;\\epsilon $$</p>\n" }, { "answer_id": 2093574, "score": 4, "text": "<blockquote>\n <p>Therefore, by the above, every series converges iff $$\\left | \\sum_{k=m}^n a_k\\right| &lt;\\epsilon$$ for a given $\\epsilon &gt;0$ and an integer $N$ such that $N\\le m\\le n$.</p>\n</blockquote>\n\n<p>What are $m$ and $n$ in this statement? They have not been introduced. This statement doesn't make sense.</p>\n\n<p>Here's what you should have said. 
</p>\n\n<blockquote>\n <p>The following two statements are equivalent:</p>\n \n <ol>\n <li>The series $\\sum_{k=1}^\\infty a_k$ converges.</li>\n <li>If $\\epsilon &gt; 0$, then there exists a positive integer $N$ such that if $m$ and $n$ are positive integers and $N &lt; m \\leq n$, then $\\left | \\sum_{k=m}^n a_k \\right| &lt; \\epsilon$.</li>\n </ol>\n</blockquote>\n" }, { "answer_id": 2093660, "score": 2, "text": "<p>It is true that a series converges if and only if the following is true:</p>\n\n<blockquote>\n <p>For any $\\epsilon &gt; 0$, there is some $N$ such that\n $$\\left\\lvert \\sum_{k=m}^n a_k\\right\\rvert &lt; \\epsilon$$\n whenever $N\\leq m \\leq n$.</p>\n</blockquote>\n\n<p>The statement in the box above implies this one:</p>\n\n<blockquote>\n <p>For any $\\epsilon &gt; 0$, there is some $N$ such that\n $$\\lvert a_n \\rvert &lt; \\epsilon$$\n whenever $N\\leq n$.</p>\n</blockquote>\n\n<p>The reason is that the statement in the first box says what happens\nfor all $m,n$ such that $N\\leq m \\leq n$,\n<strong>including</strong> the pairs $m,n$ in which $m=n$.</p>\n\n<p>On the other hand, the statement in the second box does <em>not</em> tell you\nwhat happens for all $m,n$ such that $N\\leq m \\leq n$,\nonly what happens in a small subset of those cases.\nTherefore you cannot use the statement in the second box to conclude\nthe statement in the first box.</p>\n\n<p>A correct statement using the second box is,</p>\n\n<blockquote>\n <p>A series $\\{a_n\\}$ converges <strong>only if</strong> for any $\\epsilon &gt; 0$, there is some $N$ such that\n $$\\lvert a_n \\rvert &lt; \\epsilon$$\n whenever $N\\leq n$.</p>\n</blockquote>\n\n<p>What you did was to write \"if and only if\" when you should\nhave written \"only if\".</p>\n" }, { "answer_id": 2093546, "score": 1, "text": "<p>I think you're confusion is with the quantifiers. 
Your initial statements are correct, but to put everything in more specific terms the correct statement, with correct quantifiers is:</p>\n\n<p>Let $\\{s_n\\}$ be a sequence in $\\mathbb{R}$ where $s_n=\\sum_{k=1}^na_k$. Then the series converges iff $\\textit{for all } \\epsilon &gt; 0$ $\\textit{there exists } N\\in\\mathbb{N}$ such that $\\textit{for all } n\\geq m &gt; N$ we have that \n$$\\left|\\sum_m^na_k\\right| &lt; \\epsilon$$</p>\n\n<p>This does not hold for the harmonic series since $\\sum_{k=N}^\\infty\\frac{1}{k}=\\infty for all N$</p>\n" }, { "answer_id": 2093544, "score": 30, "text": "<p>In the definition of a Cauchy sequence, you have not gotten the order of the quantifiers correct. The partial sums of a series are Cauchy iff for all $\\epsilon &gt; 0$ there exists an $N$ such that for all $m,n \\ge N$, $|\\sum_{k=m}^{n} a_k| &lt; \\epsilon$. Notice the \"for all $m,n$.\" You cannot just let $m=n$ and verify the condition for that case.</p>\n" }, { "answer_id": 2093547, "score": 13, "text": "<p>It is true that a sequence in $\\mathbb{R}$ converges if and only if it is Cauchy, and as such a series in $\\mathbb{R}$ converges if and only if the sequence of partials sums $\\{S_k\\}_{k=0}^\\infty$ is Cauchy. The problem is that you are not using the definition of Cauchy correctly. For a sequence $\\{x_k\\}_{k=0}^\\infty$ it reads that for every $\\epsilon &gt;0$ there exists $K \\ge 0$ such that if $m,n \\ge K$, then $|x_n - x_m | &lt; \\epsilon$. Here one <em>does not</em> get to restrict to $n = m+1$, one must check all possible values of $m,n \\ge K$. Note, though, that it is possible to restrict to $m \\ge n \\ge K$ in the definition.</p>\n\n<p>Let's look at this for the harmonic series. In this case $S_k = \\sum_{m=1}^k 1/m$. You are looking at $S_{k+1} - S_k = 1/(m+1)$, which indeed can be made arbitrarily small. 
In fact, to use the Cauchy definition we must consider arbitrary $k \\ge \\ell$ to get \n$$\nS_k - S_\\ell = \\sum_{m=\\ell+1}^k \\frac{1}{m},\n$$<br>\nand it is these partial sums of arbitrary length that cannot be made small.</p>\n" }, { "answer_id": 2093543, "score": 5, "text": "<p>If $\\epsilon&gt;0$ and $m\\in\\mathbb{N}$ then there is an $n\\in\\mathbb{N}$, $n&gt;m$ such that</p>\n\n<p>$$ \\sum_{k=m}^n\\frac{1}{k}&gt;\\epsilon $$</p>\n" }, { "answer_id": 2093574, "score": 4, "text": "<blockquote>\n <p>Therefore, by the above, every series converges iff $$\\left | \\sum_{k=m}^n a_k\\right| &lt;\\epsilon$$ for a given $\\epsilon &gt;0$ and an integer $N$ such that $N\\le m\\le n$.</p>\n</blockquote>\n\n<p>What are $m$ and $n$ in this statement? They have not been introduced. This statement doesn't make sense.</p>\n\n<p>Here's what you should have said. </p>\n\n<blockquote>\n <p>The following two statements are equivalent:</p>\n \n <ol>\n <li>The series $\\sum_{k=1}^\\infty a_k$ converges.</li>\n <li>If $\\epsilon &gt; 0$, then there exists a positive integer $N$ such that if $m$ and $n$ are positive integers and $N &lt; m \\leq n$, then $\\left | \\sum_{k=m}^n a_k \\right| &lt; \\epsilon$.</li>\n </ol>\n</blockquote>\n" }, { "answer_id": 2093660, "score": 2, "text": "<p>It is true that a series converges if and only if the following is true:</p>\n\n<blockquote>\n <p>For any $\\epsilon &gt; 0$, there is some $N$ such that\n $$\\left\\lvert \\sum_{k=m}^n a_k\\right\\rvert &lt; \\epsilon$$\n whenever $N\\leq m \\leq n$.</p>\n</blockquote>\n\n<p>The statement in the box above implies this one:</p>\n\n<blockquote>\n <p>For any $\\epsilon &gt; 0$, there is some $N$ such that\n $$\\lvert a_n \\rvert &lt; \\epsilon$$\n whenever $N\\leq n$.</p>\n</blockquote>\n\n<p>The reason is that the statement in the first box says what happens\nfor all $m,n$ such that $N\\leq m \\leq n$,\n<strong>including</strong> the pairs $m,n$ in which $m=n$.</p>\n\n<p>On the other hand, the 
statement in the second box does <em>not</em> tell you\nwhat happens for all $m,n$ such that $N\\leq m \\leq n$,\nonly what happens in a small subset of those cases.\nTherefore you cannot use the statement in the second box to conclude\nthe statement in the first box.</p>\n\n<p>A correct statement using the second box is,</p>\n\n<blockquote>\n <p>A series $\\{a_n\\}$ converges <strong>only if</strong> for any $\\epsilon &gt; 0$, there is some $N$ such that\n $$\\lvert a_n \\rvert &lt; \\epsilon$$\n whenever $N\\leq n$.</p>\n</blockquote>\n\n<p>What you did was to write \"if and only if\" when you should\nhave written \"only if\".</p>\n" }, { "answer_id": 2093546, "score": 1, "text": "<p>I think you're confusion is with the quantifiers. Your initial statements are correct, but to put everything in more specific terms the correct statement, with correct quantifiers is:</p>\n\n<p>Let $\\{s_n\\}$ be a sequence in $\\mathbb{R}$ where $s_n=\\sum_{k=1}^na_k$. Then the series converges iff $\\textit{for all } \\epsilon &gt; 0$ $\\textit{there exists } N\\in\\mathbb{N}$ such that $\\textit{for all } n\\geq m &gt; N$ we have that \n$$\\left|\\sum_m^na_k\\right| &lt; \\epsilon$$</p>\n\n<p>This does not hold for the harmonic series since $\\sum_{k=N}^\\infty\\frac{1}{k}=\\infty for all N$</p>\n" }, { "answer_id": 2093544, "score": 30, "text": "<p>In the definition of a Cauchy sequence, you have not gotten the order of the quantifiers correct. The partial sums of a series are Cauchy iff for all $\\epsilon &gt; 0$ there exists an $N$ such that for all $m,n \\ge N$, $|\\sum_{k=m}^{n} a_k| &lt; \\epsilon$. Notice the \"for all $m,n$.\" You cannot just let $m=n$ and verify the condition for that case.</p>\n" }, { "answer_id": 2093547, "score": 13, "text": "<p>It is true that a sequence in $\\mathbb{R}$ converges if and only if it is Cauchy, and as such a series in $\\mathbb{R}$ converges if and only if the sequence of partials sums $\\{S_k\\}_{k=0}^\\infty$ is Cauchy. 
The problem is that you are not using the definition of Cauchy correctly. For a sequence $\\{x_k\\}_{k=0}^\\infty$ it reads that for every $\\epsilon &gt;0$ there exists $K \\ge 0$ such that if $m,n \\ge K$, then $|x_n - x_m | &lt; \\epsilon$. Here one <em>does not</em> get to restrict to $n = m+1$, one must check all possible values of $m,n \\ge K$. Note, though, that it is possible to restrict to $m \\ge n \\ge K$ in the definition.</p>\n\n<p>Let's look at this for the harmonic series. In this case $S_k = \\sum_{m=1}^k 1/m$. You are looking at $S_{k+1} - S_k = 1/(m+1)$, which indeed can be made arbitrarily small. In fact, to use the Cauchy definition we must consider arbitrary $k \\ge \\ell$ to get \n$$\nS_k - S_\\ell = \\sum_{m=\\ell+1}^k \\frac{1}{m},\n$$<br>\nand it is these partial sums of arbitrary length that cannot be made small.</p>\n" }, { "answer_id": 2093543, "score": 5, "text": "<p>If $\\epsilon&gt;0$ and $m\\in\\mathbb{N}$ then there is an $n\\in\\mathbb{N}$, $n&gt;m$ such that</p>\n\n<p>$$ \\sum_{k=m}^n\\frac{1}{k}&gt;\\epsilon $$</p>\n" }, { "answer_id": 2093574, "score": 4, "text": "<blockquote>\n <p>Therefore, by the above, every series converges iff $$\\left | \\sum_{k=m}^n a_k\\right| &lt;\\epsilon$$ for a given $\\epsilon &gt;0$ and an integer $N$ such that $N\\le m\\le n$.</p>\n</blockquote>\n\n<p>What are $m$ and $n$ in this statement? They have not been introduced. This statement doesn't make sense.</p>\n\n<p>Here's what you should have said. 
</p>\n\n<blockquote>\n <p>The following two statements are equivalent:</p>\n \n <ol>\n <li>The series $\\sum_{k=1}^\\infty a_k$ converges.</li>\n <li>If $\\epsilon &gt; 0$, then there exists a positive integer $N$ such that if $m$ and $n$ are positive integers and $N &lt; m \\leq n$, then $\\left | \\sum_{k=m}^n a_k \\right| &lt; \\epsilon$.</li>\n </ol>\n</blockquote>\n" }, { "answer_id": 2093660, "score": 2, "text": "<p>It is true that a series converges if and only if the following is true:</p>\n\n<blockquote>\n <p>For any $\\epsilon &gt; 0$, there is some $N$ such that\n $$\\left\\lvert \\sum_{k=m}^n a_k\\right\\rvert &lt; \\epsilon$$\n whenever $N\\leq m \\leq n$.</p>\n</blockquote>\n\n<p>The statement in the box above implies this one:</p>\n\n<blockquote>\n <p>For any $\\epsilon &gt; 0$, there is some $N$ such that\n $$\\lvert a_n \\rvert &lt; \\epsilon$$\n whenever $N\\leq n$.</p>\n</blockquote>\n\n<p>The reason is that the statement in the first box says what happens\nfor all $m,n$ such that $N\\leq m \\leq n$,\n<strong>including</strong> the pairs $m,n$ in which $m=n$.</p>\n\n<p>On the other hand, the statement in the second box does <em>not</em> tell you\nwhat happens for all $m,n$ such that $N\\leq m \\leq n$,\nonly what happens in a small subset of those cases.\nTherefore you cannot use the statement in the second box to conclude\nthe statement in the first box.</p>\n\n<p>A correct statement using the second box is,</p>\n\n<blockquote>\n <p>A series $\\{a_n\\}$ converges <strong>only if</strong> for any $\\epsilon &gt; 0$, there is some $N$ such that\n $$\\lvert a_n \\rvert &lt; \\epsilon$$\n whenever $N\\leq n$.</p>\n</blockquote>\n\n<p>What you did was to write \"if and only if\" when you should\nhave written \"only if\".</p>\n" }, { "answer_id": 2093546, "score": 1, "text": "<p>I think you're confusion is with the quantifiers. 
Your initial statements are correct, but to put everything in more specific terms the correct statement, with correct quantifiers is:</p>\n\n<p>Let $\\{s_n\\}$ be a sequence in $\\mathbb{R}$ where $s_n=\\sum_{k=1}^na_k$. Then the series converges iff $\\textit{for all } \\epsilon &gt; 0$ $\\textit{there exists } N\\in\\mathbb{N}$ such that $\\textit{for all } n\\geq m &gt; N$ we have that \n$$\\left|\\sum_m^na_k\\right| &lt; \\epsilon$$</p>\n\n<p>This does not hold for the harmonic series since $\\sum_{k=N}^\\infty\\frac{1}{k}=\\infty for all N$</p>\n" }, { "answer_id": 2093544, "score": 30, "text": "<p>In the definition of a Cauchy sequence, you have not gotten the order of the quantifiers correct. The partial sums of a series are Cauchy iff for all $\\epsilon &gt; 0$ there exists an $N$ such that for all $m,n \\ge N$, $|\\sum_{k=m}^{n} a_k| &lt; \\epsilon$. Notice the \"for all $m,n$.\" You cannot just let $m=n$ and verify the condition for that case.</p>\n" }, { "answer_id": 2093547, "score": 13, "text": "<p>It is true that a sequence in $\\mathbb{R}$ converges if and only if it is Cauchy, and as such a series in $\\mathbb{R}$ converges if and only if the sequence of partials sums $\\{S_k\\}_{k=0}^\\infty$ is Cauchy. The problem is that you are not using the definition of Cauchy correctly. For a sequence $\\{x_k\\}_{k=0}^\\infty$ it reads that for every $\\epsilon &gt;0$ there exists $K \\ge 0$ such that if $m,n \\ge K$, then $|x_n - x_m | &lt; \\epsilon$. Here one <em>does not</em> get to restrict to $n = m+1$, one must check all possible values of $m,n \\ge K$. Note, though, that it is possible to restrict to $m \\ge n \\ge K$ in the definition.</p>\n\n<p>Let's look at this for the harmonic series. In this case $S_k = \\sum_{m=1}^k 1/m$. You are looking at $S_{k+1} - S_k = 1/(m+1)$, which indeed can be made arbitrarily small. 
In fact, to use the Cauchy definition we must consider arbitrary $k \\ge \\ell$ to get \n$$\nS_k - S_\\ell = \\sum_{m=\\ell+1}^k \\frac{1}{m},\n$$<br>\nand it is these partial sums of arbitrary length that cannot be made small.</p>\n" }, { "answer_id": 2093543, "score": 5, "text": "<p>If $\\epsilon&gt;0$ and $m\\in\\mathbb{N}$ then there is an $n\\in\\mathbb{N}$, $n&gt;m$ such that</p>\n\n<p>$$ \\sum_{k=m}^n\\frac{1}{k}&gt;\\epsilon $$</p>\n" }, { "answer_id": 2093574, "score": 4, "text": "<blockquote>\n <p>Therefore, by the above, every series converges iff $$\\left | \\sum_{k=m}^n a_k\\right| &lt;\\epsilon$$ for a given $\\epsilon &gt;0$ and an integer $N$ such that $N\\le m\\le n$.</p>\n</blockquote>\n\n<p>What are $m$ and $n$ in this statement? They have not been introduced. This statement doesn't make sense.</p>\n\n<p>Here's what you should have said. </p>\n\n<blockquote>\n <p>The following two statements are equivalent:</p>\n \n <ol>\n <li>The series $\\sum_{k=1}^\\infty a_k$ converges.</li>\n <li>If $\\epsilon &gt; 0$, then there exists a positive integer $N$ such that if $m$ and $n$ are positive integers and $N &lt; m \\leq n$, then $\\left | \\sum_{k=m}^n a_k \\right| &lt; \\epsilon$.</li>\n </ol>\n</blockquote>\n" }, { "answer_id": 2093660, "score": 2, "text": "<p>It is true that a series converges if and only if the following is true:</p>\n\n<blockquote>\n <p>For any $\\epsilon &gt; 0$, there is some $N$ such that\n $$\\left\\lvert \\sum_{k=m}^n a_k\\right\\rvert &lt; \\epsilon$$\n whenever $N\\leq m \\leq n$.</p>\n</blockquote>\n\n<p>The statement in the box above implies this one:</p>\n\n<blockquote>\n <p>For any $\\epsilon &gt; 0$, there is some $N$ such that\n $$\\lvert a_n \\rvert &lt; \\epsilon$$\n whenever $N\\leq n$.</p>\n</blockquote>\n\n<p>The reason is that the statement in the first box says what happens\nfor all $m,n$ such that $N\\leq m \\leq n$,\n<strong>including</strong> the pairs $m,n$ in which $m=n$.</p>\n\n<p>On the other hand, the 
statement in the second box does <em>not</em> tell you\nwhat happens for all $m,n$ such that $N\\leq m \\leq n$,\nonly what happens in a small subset of those cases.\nTherefore you cannot use the statement in the second box to conclude\nthe statement in the first box.</p>\n\n<p>A correct statement using the second box is,</p>\n\n<blockquote>\n <p>A series $\\{a_n\\}$ converges <strong>only if</strong> for any $\\epsilon &gt; 0$, there is some $N$ such that\n $$\\lvert a_n \\rvert &lt; \\epsilon$$\n whenever $N\\leq n$.</p>\n</blockquote>\n\n<p>What you did was to write \"if and only if\" when you should\nhave written \"only if\".</p>\n" }, { "answer_id": 2093546, "score": 1, "text": "<p>I think you're confusion is with the quantifiers. Your initial statements are correct, but to put everything in more specific terms the correct statement, with correct quantifiers is:</p>\n\n<p>Let $\\{s_n\\}$ be a sequence in $\\mathbb{R}$ where $s_n=\\sum_{k=1}^na_k$. Then the series converges iff $\\textit{for all } \\epsilon &gt; 0$ $\\textit{there exists } N\\in\\mathbb{N}$ such that $\\textit{for all } n\\geq m &gt; N$ we have that \n$$\\left|\\sum_m^na_k\\right| &lt; \\epsilon$$</p>\n\n<p>This does not hold for the harmonic series since $\\sum_{k=N}^\\infty\\frac{1}{k}=\\infty for all N$</p>\n" }, { "answer_id": 2093544, "score": 30, "text": "<p>In the definition of a Cauchy sequence, you have not gotten the order of the quantifiers correct. The partial sums of a series are Cauchy iff for all $\\epsilon &gt; 0$ there exists an $N$ such that for all $m,n \\ge N$, $|\\sum_{k=m}^{n} a_k| &lt; \\epsilon$. Notice the \"for all $m,n$.\" You cannot just let $m=n$ and verify the condition for that case.</p>\n" }, { "answer_id": 2093547, "score": 13, "text": "<p>It is true that a sequence in $\\mathbb{R}$ converges if and only if it is Cauchy, and as such a series in $\\mathbb{R}$ converges if and only if the sequence of partials sums $\\{S_k\\}_{k=0}^\\infty$ is Cauchy. 
The problem is that you are not using the definition of Cauchy correctly. For a sequence $\\{x_k\\}_{k=0}^\\infty$ it reads that for every $\\epsilon &gt;0$ there exists $K \\ge 0$ such that if $m,n \\ge K$, then $|x_n - x_m | &lt; \\epsilon$. Here one <em>does not</em> get to restrict to $n = m+1$, one must check all possible values of $m,n \\ge K$. Note, though, that it is possible to restrict to $m \\ge n \\ge K$ in the definition.</p>\n\n<p>Let's look at this for the harmonic series. In this case $S_k = \\sum_{m=1}^k 1/m$. You are looking at $S_{k+1} - S_k = 1/(m+1)$, which indeed can be made arbitrarily small. In fact, to use the Cauchy definition we must consider arbitrary $k \\ge \\ell$ to get \n$$\nS_k - S_\\ell = \\sum_{m=\\ell+1}^k \\frac{1}{m},\n$$<br>\nand it is these partial sums of arbitrary length that cannot be made small.</p>\n" }, { "answer_id": 2093543, "score": 5, "text": "<p>If $\\epsilon&gt;0$ and $m\\in\\mathbb{N}$ then there is an $n\\in\\mathbb{N}$, $n&gt;m$ such that</p>\n\n<p>$$ \\sum_{k=m}^n\\frac{1}{k}&gt;\\epsilon $$</p>\n" }, { "answer_id": 2093574, "score": 4, "text": "<blockquote>\n <p>Therefore, by the above, every series converges iff $$\\left | \\sum_{k=m}^n a_k\\right| &lt;\\epsilon$$ for a given $\\epsilon &gt;0$ and an integer $N$ such that $N\\le m\\le n$.</p>\n</blockquote>\n\n<p>What are $m$ and $n$ in this statement? They have not been introduced. This statement doesn't make sense.</p>\n\n<p>Here's what you should have said. 
</p>\n\n<blockquote>\n <p>The following two statements are equivalent:</p>\n \n <ol>\n <li>The series $\\sum_{k=1}^\\infty a_k$ converges.</li>\n <li>If $\\epsilon &gt; 0$, then there exists a positive integer $N$ such that if $m$ and $n$ are positive integers and $N &lt; m \\leq n$, then $\\left | \\sum_{k=m}^n a_k \\right| &lt; \\epsilon$.</li>\n </ol>\n</blockquote>\n" }, { "answer_id": 2093660, "score": 2, "text": "<p>It is true that a series converges if and only if the following is true:</p>\n\n<blockquote>\n <p>For any $\\epsilon &gt; 0$, there is some $N$ such that\n $$\\left\\lvert \\sum_{k=m}^n a_k\\right\\rvert &lt; \\epsilon$$\n whenever $N\\leq m \\leq n$.</p>\n</blockquote>\n\n<p>The statement in the box above implies this one:</p>\n\n<blockquote>\n <p>For any $\\epsilon &gt; 0$, there is some $N$ such that\n $$\\lvert a_n \\rvert &lt; \\epsilon$$\n whenever $N\\leq n$.</p>\n</blockquote>\n\n<p>The reason is that the statement in the first box says what happens\nfor all $m,n$ such that $N\\leq m \\leq n$,\n<strong>including</strong> the pairs $m,n$ in which $m=n$.</p>\n\n<p>On the other hand, the statement in the second box does <em>not</em> tell you\nwhat happens for all $m,n$ such that $N\\leq m \\leq n$,\nonly what happens in a small subset of those cases.\nTherefore you cannot use the statement in the second box to conclude\nthe statement in the first box.</p>\n\n<p>A correct statement using the second box is,</p>\n\n<blockquote>\n <p>A series $\\{a_n\\}$ converges <strong>only if</strong> for any $\\epsilon &gt; 0$, there is some $N$ such that\n $$\\lvert a_n \\rvert &lt; \\epsilon$$\n whenever $N\\leq n$.</p>\n</blockquote>\n\n<p>What you did was to write \"if and only if\" when you should\nhave written \"only if\".</p>\n" }, { "answer_id": 2093546, "score": 1, "text": "<p>I think you're confusion is with the quantifiers. 
Your initial statements are correct, but to put everything in more specific terms the correct statement, with correct quantifiers is:</p>\n\n<p>Let $\\{s_n\\}$ be a sequence in $\\mathbb{R}$ where $s_n=\\sum_{k=1}^na_k$. Then the series converges iff $\\textit{for all } \\epsilon &gt; 0$ $\\textit{there exists } N\\in\\mathbb{N}$ such that $\\textit{for all } n\\geq m &gt; N$ we have that \n$$\\left|\\sum_m^na_k\\right| &lt; \\epsilon$$</p>\n\n<p>This does not hold for the harmonic series since $\\sum_{k=N}^\\infty\\frac{1}{k}=\\infty for all N$</p>\n" }, { "answer_id": 2093544, "score": 30, "text": "<p>In the definition of a Cauchy sequence, you have not gotten the order of the quantifiers correct. The partial sums of a series are Cauchy iff for all $\\epsilon &gt; 0$ there exists an $N$ such that for all $m,n \\ge N$, $|\\sum_{k=m}^{n} a_k| &lt; \\epsilon$. Notice the \"for all $m,n$.\" You cannot just let $m=n$ and verify the condition for that case.</p>\n" }, { "answer_id": 2093547, "score": 13, "text": "<p>It is true that a sequence in $\\mathbb{R}$ converges if and only if it is Cauchy, and as such a series in $\\mathbb{R}$ converges if and only if the sequence of partials sums $\\{S_k\\}_{k=0}^\\infty$ is Cauchy. The problem is that you are not using the definition of Cauchy correctly. For a sequence $\\{x_k\\}_{k=0}^\\infty$ it reads that for every $\\epsilon &gt;0$ there exists $K \\ge 0$ such that if $m,n \\ge K$, then $|x_n - x_m | &lt; \\epsilon$. Here one <em>does not</em> get to restrict to $n = m+1$, one must check all possible values of $m,n \\ge K$. Note, though, that it is possible to restrict to $m \\ge n \\ge K$ in the definition.</p>\n\n<p>Let's look at this for the harmonic series. In this case $S_k = \\sum_{m=1}^k 1/m$. You are looking at $S_{k+1} - S_k = 1/(m+1)$, which indeed can be made arbitrarily small. 
In fact, to use the Cauchy definition we must consider arbitrary $k \\ge \\ell$ to get \n$$\nS_k - S_\\ell = \\sum_{m=\\ell+1}^k \\frac{1}{m},\n$$<br>\nand it is these partial sums of arbitrary length that cannot be made small.</p>\n" }, { "answer_id": 2093543, "score": 5, "text": "<p>If $\\epsilon&gt;0$ and $m\\in\\mathbb{N}$ then there is an $n\\in\\mathbb{N}$, $n&gt;m$ such that</p>\n\n<p>$$ \\sum_{k=m}^n\\frac{1}{k}&gt;\\epsilon $$</p>\n" }\n]
Question: Shouldn't the harmonic series converge? Answer 1: <p>In the definition of a Cauchy sequence, you have not gotten the order of the quantifiers correct. The partial sums of a series are Cauchy iff for all $\epsilon &gt; 0$ there exists an $N$ such that for all $m,n \ge N$, $|\sum_{k=m}^{n} a_k| &lt; \epsilon$. Notice the "for all $m,n$." You cannot just let $m=n$ and verify the condition for that case.</p> Answer 2: <p>It is true that a sequence in $\mathbb{R}$ converges if and only if it is Cauchy, and as such a series in $\mathbb{R}$ converges if and only if the sequence of partial sums $\{S_k\}_{k=0}^\infty$ is Cauchy. The problem is that you are not using the definition of Cauchy correctly. For a sequence $\{x_k\}_{k=0}^\infty$ it reads that for every $\epsilon &gt;0$ there exists $K \ge 0$ such that if $m,n \ge K$, then $|x_n - x_m | &lt; \epsilon$. Here one <em>does not</em> get to restrict to $n = m+1$, one must check all possible values of $m,n \ge K$. Note, though, that it is possible to restrict to $m \ge n \ge K$ in the definition.</p> <p>Let's look at this for the harmonic series. In this case $S_k = \sum_{m=1}^k 1/m$. You are looking at $S_{k+1} - S_k = 1/(k+1)$, which indeed can be made arbitrarily small. In fact, to use the Cauchy definition we must consider arbitrary $k \ge \ell$ to get $$ S_k - S_\ell = \sum_{m=\ell+1}^k \frac{1}{m}, $$<br> and it is these partial sums of arbitrary length that cannot be made small.</p> Answer 3: <p>If $\epsilon&gt;0$ and $m\in\mathbb{N}$ then there is an $n\in\mathbb{N}$, $n&gt;m$ such that</p> <p>$$ \sum_{k=m}^n\frac{1}{k}&gt;\epsilon $$</p> Answer 4: <blockquote> <p>Therefore, by the above, every series converges iff $$\left | \sum_{k=m}^n a_k\right| &lt;\epsilon$$ for a given $\epsilon &gt;0$ and an integer $N$ such that $N\le m\le n$.</p> </blockquote> <p>What are $m$ and $n$ in this statement? They have not been introduced. 
This statement doesn't make sense.</p> <p>Here's what you should have said.</p> <blockquote> <p>The following two statements are equivalent:</p> <ol> <li>The series $\sum_{k=1}^\infty a_k$ converges.</li> <li>If $\epsilon &gt; 0$, then there exists a positive integer $N$ such that if $m$ and $n$ are positive integers and $N &lt; m \leq n$, then $\left | \sum_{k=m}^n a_k \right| &lt; \epsilon$.</li> </ol> </blockquote> Answer 5: <p>It is true that a series converges if and only if the following is true:</p> <blockquote> <p>For any $\epsilon &gt; 0$, there is some $N$ such that $$\left\lvert \sum_{k=m}^n a_k\right\rvert &lt; \epsilon$$ whenever $N\leq m \leq n$.</p> </blockquote> <p>The statement in the box above implies this one:</p> <blockquote> <p>For any $\epsilon &gt; 0$, there is some $N$ such that $$\lvert a_n \rvert &lt; \epsilon$$ whenever $N\leq n$.</p> </blockquote> <p>The reason is that the statement in the first box says what happens for all $m,n$ such that $N\leq m \leq n$, <strong>including</strong> the pairs $m,n$ in which $m=n$.</p> <p>On the other hand, the statement in the second box does <em>not</em> tell you what happens for all $m,n$ such that $N\leq m \leq n$, only what happens in a small subset of those cases. Therefore you cannot use the statement in the second box to conclude the statement in the first box.</p> <p>A correct statement using the second box is,</p> <blockquote> <p>A series $\{a_n\}$ converges <strong>only if</strong> for any $\epsilon &gt; 0$, there is some $N$ such that $$\lvert a_n \rvert &lt; \epsilon$$ whenever $N\leq n$.</p> </blockquote> <p>What you did was to write "if and only if" when you should have written "only if".</p> Answer 6: <p>I think your confusion is with the quantifiers. Your initial statements are correct, but to put everything in more specific terms the correct statement, with correct quantifiers, is:</p> <p>Let $\{s_n\}$ be a sequence in $\mathbb{R}$ where $s_n=\sum_{k=1}^na_k$.
Then the series converges iff $\textit{for all } \epsilon &gt; 0$ $\textit{there exists } N\in\mathbb{N}$ such that $\textit{for all } n\geq m &gt; N$ we have that $$\left|\sum_{k=m}^n a_k\right| &lt; \epsilon$$</p> <p>This does not hold for the harmonic series, since $\sum_{k=N}^\infty\frac{1}{k}=\infty$ for all $N$.</p>
Then the series converges iff $\textit{for all } \epsilon &gt; 0$ $\textit{there exists } N\in\mathbb{N}$ such that $\textit{for all } n\geq m &gt; N$ we have that $$\left|\sum_m^na_k\right| &lt; \epsilon$$</p> <p>This does not hold for the harmonic series since $\sum_{k=N}^\infty\frac{1}{k}=\infty for all N$</p> Answer 55: <p>In the definition of a Cauchy sequence, you have not gotten the order of the quantifiers correct. The partial sums of a series are Cauchy iff for all $\epsilon &gt; 0$ there exists an $N$ such that for all $m,n \ge N$, $|\sum_{k=m}^{n} a_k| &lt; \epsilon$. Notice the "for all $m,n$." You cannot just let $m=n$ and verify the condition for that case.</p> Answer 56: <p>It is true that a sequence in $\mathbb{R}$ converges if and only if it is Cauchy, and as such a series in $\mathbb{R}$ converges if and only if the sequence of partials sums $\{S_k\}_{k=0}^\infty$ is Cauchy. The problem is that you are not using the definition of Cauchy correctly. For a sequence $\{x_k\}_{k=0}^\infty$ it reads that for every $\epsilon &gt;0$ there exists $K \ge 0$ such that if $m,n \ge K$, then $|x_n - x_m | &lt; \epsilon$. Here one <em>does not</em> get to restrict to $n = m+1$, one must check all possible values of $m,n \ge K$. Note, though, that it is possible to restrict to $m \ge n \ge K$ in the definition.</p> <p>Let's look at this for the harmonic series. In this case $S_k = \sum_{m=1}^k 1/m$. You are looking at $S_{k+1} - S_k = 1/(m+1)$, which indeed can be made arbitrarily small. 
In fact, to use the Cauchy definition we must consider arbitrary $k \ge \ell$ to get $$ S_k - S_\ell = \sum_{m=\ell+1}^k \frac{1}{m}, $$<br> and it is these partial sums of arbitrary length that cannot be made small.</p> Answer 57: <p>If $\epsilon&gt;0$ and $m\in\mathbb{N}$ then there is an $n\in\mathbb{N}$, $n&gt;m$ such that</p> <p>$$ \sum_{k=m}^n\frac{1}{k}&gt;\epsilon $$</p> Answer 58: <blockquote> <p>Therefore, by the above, every series converges iff $$\left | \sum_{k=m}^n a_k\right| &lt;\epsilon$$ for a given $\epsilon &gt;0$ and an integer $N$ such that $N\le m\le n$.</p> </blockquote> <p>What are $m$ and $n$ in this statement? They have not been introduced. This statement doesn't make sense.</p> <p>Here's what you should have said. </p> <blockquote> <p>The following two statements are equivalent:</p> <ol> <li>The series $\sum_{k=1}^\infty a_k$ converges.</li> <li>If $\epsilon &gt; 0$, then there exists a positive integer $N$ such that if $m$ and $n$ are positive integers and $N &lt; m \leq n$, then $\left | \sum_{k=m}^n a_k \right| &lt; \epsilon$.</li> </ol> </blockquote> Answer 59: <p>It is true that a series converges if and only if the following is true:</p> <blockquote> <p>For any $\epsilon &gt; 0$, there is some $N$ such that $$\left\lvert \sum_{k=m}^n a_k\right\rvert &lt; \epsilon$$ whenever $N\leq m \leq n$.</p> </blockquote> <p>The statement in the box above implies this one:</p> <blockquote> <p>For any $\epsilon &gt; 0$, there is some $N$ such that $$\lvert a_n \rvert &lt; \epsilon$$ whenever $N\leq n$.</p> </blockquote> <p>The reason is that the statement in the first box says what happens for all $m,n$ such that $N\leq m \leq n$, <strong>including</strong> the pairs $m,n$ in which $m=n$.</p> <p>On the other hand, the statement in the second box does <em>not</em> tell you what happens for all $m,n$ such that $N\leq m \leq n$, only what happens in a small subset of those cases. 
Therefore you cannot use the statement in the second box to conclude the statement in the first box.</p> <p>A correct statement using the second box is,</p> <blockquote> <p>A series $\{a_n\}$ converges <strong>only if</strong> for any $\epsilon &gt; 0$, there is some $N$ such that $$\lvert a_n \rvert &lt; \epsilon$$ whenever $N\leq n$.</p> </blockquote> <p>What you did was to write "if and only if" when you should have written "only if".</p> Answer 60: <p>I think you're confusion is with the quantifiers. Your initial statements are correct, but to put everything in more specific terms the correct statement, with correct quantifiers is:</p> <p>Let $\{s_n\}$ be a sequence in $\mathbb{R}$ where $s_n=\sum_{k=1}^na_k$. Then the series converges iff $\textit{for all } \epsilon &gt; 0$ $\textit{there exists } N\in\mathbb{N}$ such that $\textit{for all } n\geq m &gt; N$ we have that $$\left|\sum_m^na_k\right| &lt; \epsilon$$</p> <p>This does not hold for the harmonic series since $\sum_{k=N}^\infty\frac{1}{k}=\infty for all N$</p> Answer 61: <p>In the definition of a Cauchy sequence, you have not gotten the order of the quantifiers correct. The partial sums of a series are Cauchy iff for all $\epsilon &gt; 0$ there exists an $N$ such that for all $m,n \ge N$, $|\sum_{k=m}^{n} a_k| &lt; \epsilon$. Notice the "for all $m,n$." You cannot just let $m=n$ and verify the condition for that case.</p> Answer 62: <p>It is true that a sequence in $\mathbb{R}$ converges if and only if it is Cauchy, and as such a series in $\mathbb{R}$ converges if and only if the sequence of partials sums $\{S_k\}_{k=0}^\infty$ is Cauchy. The problem is that you are not using the definition of Cauchy correctly. For a sequence $\{x_k\}_{k=0}^\infty$ it reads that for every $\epsilon &gt;0$ there exists $K \ge 0$ such that if $m,n \ge K$, then $|x_n - x_m | &lt; \epsilon$. Here one <em>does not</em> get to restrict to $n = m+1$, one must check all possible values of $m,n \ge K$. 
Note, though, that it is possible to restrict to $m \ge n \ge K$ in the definition.</p> <p>Let's look at this for the harmonic series. In this case $S_k = \sum_{m=1}^k 1/m$. You are looking at $S_{k+1} - S_k = 1/(m+1)$, which indeed can be made arbitrarily small. In fact, to use the Cauchy definition we must consider arbitrary $k \ge \ell$ to get $$ S_k - S_\ell = \sum_{m=\ell+1}^k \frac{1}{m}, $$<br> and it is these partial sums of arbitrary length that cannot be made small.</p> Answer 63: <p>If $\epsilon&gt;0$ and $m\in\mathbb{N}$ then there is an $n\in\mathbb{N}$, $n&gt;m$ such that</p> <p>$$ \sum_{k=m}^n\frac{1}{k}&gt;\epsilon $$</p> Answer 64: <blockquote> <p>Therefore, by the above, every series converges iff $$\left | \sum_{k=m}^n a_k\right| &lt;\epsilon$$ for a given $\epsilon &gt;0$ and an integer $N$ such that $N\le m\le n$.</p> </blockquote> <p>What are $m$ and $n$ in this statement? They have not been introduced. This statement doesn't make sense.</p> <p>Here's what you should have said. 
</p> <blockquote> <p>The following two statements are equivalent:</p> <ol> <li>The series $\sum_{k=1}^\infty a_k$ converges.</li> <li>If $\epsilon &gt; 0$, then there exists a positive integer $N$ such that if $m$ and $n$ are positive integers and $N &lt; m \leq n$, then $\left | \sum_{k=m}^n a_k \right| &lt; \epsilon$.</li> </ol> </blockquote> Answer 65: <p>It is true that a series converges if and only if the following is true:</p> <blockquote> <p>For any $\epsilon &gt; 0$, there is some $N$ such that $$\left\lvert \sum_{k=m}^n a_k\right\rvert &lt; \epsilon$$ whenever $N\leq m \leq n$.</p> </blockquote> <p>The statement in the box above implies this one:</p> <blockquote> <p>For any $\epsilon &gt; 0$, there is some $N$ such that $$\lvert a_n \rvert &lt; \epsilon$$ whenever $N\leq n$.</p> </blockquote> <p>The reason is that the statement in the first box says what happens for all $m,n$ such that $N\leq m \leq n$, <strong>including</strong> the pairs $m,n$ in which $m=n$.</p> <p>On the other hand, the statement in the second box does <em>not</em> tell you what happens for all $m,n$ such that $N\leq m \leq n$, only what happens in a small subset of those cases. Therefore you cannot use the statement in the second box to conclude the statement in the first box.</p> <p>A correct statement using the second box is,</p> <blockquote> <p>A series $\{a_n\}$ converges <strong>only if</strong> for any $\epsilon &gt; 0$, there is some $N$ such that $$\lvert a_n \rvert &lt; \epsilon$$ whenever $N\leq n$.</p> </blockquote> <p>What you did was to write "if and only if" when you should have written "only if".</p> Answer 66: <p>I think you're confusion is with the quantifiers. Your initial statements are correct, but to put everything in more specific terms the correct statement, with correct quantifiers is:</p> <p>Let $\{s_n\}$ be a sequence in $\mathbb{R}$ where $s_n=\sum_{k=1}^na_k$. 
Then the series converges iff $\textit{for all } \epsilon &gt; 0$ $\textit{there exists } N\in\mathbb{N}$ such that $\textit{for all } n\geq m &gt; N$ we have that $$\left|\sum_m^na_k\right| &lt; \epsilon$$</p> <p>This does not hold for the harmonic series since $\sum_{k=N}^\infty\frac{1}{k}=\infty for all N$</p> Answer 67: <p>In the definition of a Cauchy sequence, you have not gotten the order of the quantifiers correct. The partial sums of a series are Cauchy iff for all $\epsilon &gt; 0$ there exists an $N$ such that for all $m,n \ge N$, $|\sum_{k=m}^{n} a_k| &lt; \epsilon$. Notice the "for all $m,n$." You cannot just let $m=n$ and verify the condition for that case.</p> Answer 68: <p>It is true that a sequence in $\mathbb{R}$ converges if and only if it is Cauchy, and as such a series in $\mathbb{R}$ converges if and only if the sequence of partials sums $\{S_k\}_{k=0}^\infty$ is Cauchy. The problem is that you are not using the definition of Cauchy correctly. For a sequence $\{x_k\}_{k=0}^\infty$ it reads that for every $\epsilon &gt;0$ there exists $K \ge 0$ such that if $m,n \ge K$, then $|x_n - x_m | &lt; \epsilon$. Here one <em>does not</em> get to restrict to $n = m+1$, one must check all possible values of $m,n \ge K$. Note, though, that it is possible to restrict to $m \ge n \ge K$ in the definition.</p> <p>Let's look at this for the harmonic series. In this case $S_k = \sum_{m=1}^k 1/m$. You are looking at $S_{k+1} - S_k = 1/(m+1)$, which indeed can be made arbitrarily small. 
In fact, to use the Cauchy definition we must consider arbitrary $k \ge \ell$ to get $$ S_k - S_\ell = \sum_{m=\ell+1}^k \frac{1}{m}, $$<br> and it is these partial sums of arbitrary length that cannot be made small.</p> Answer 69: <p>If $\epsilon&gt;0$ and $m\in\mathbb{N}$ then there is an $n\in\mathbb{N}$, $n&gt;m$ such that</p> <p>$$ \sum_{k=m}^n\frac{1}{k}&gt;\epsilon $$</p> Answer 70: <blockquote> <p>Therefore, by the above, every series converges iff $$\left | \sum_{k=m}^n a_k\right| &lt;\epsilon$$ for a given $\epsilon &gt;0$ and an integer $N$ such that $N\le m\le n$.</p> </blockquote> <p>What are $m$ and $n$ in this statement? They have not been introduced. This statement doesn't make sense.</p> <p>Here's what you should have said. </p> <blockquote> <p>The following two statements are equivalent:</p> <ol> <li>The series $\sum_{k=1}^\infty a_k$ converges.</li> <li>If $\epsilon &gt; 0$, then there exists a positive integer $N$ such that if $m$ and $n$ are positive integers and $N &lt; m \leq n$, then $\left | \sum_{k=m}^n a_k \right| &lt; \epsilon$.</li> </ol> </blockquote> Answer 71: <p>It is true that a series converges if and only if the following is true:</p> <blockquote> <p>For any $\epsilon &gt; 0$, there is some $N$ such that $$\left\lvert \sum_{k=m}^n a_k\right\rvert &lt; \epsilon$$ whenever $N\leq m \leq n$.</p> </blockquote> <p>The statement in the box above implies this one:</p> <blockquote> <p>For any $\epsilon &gt; 0$, there is some $N$ such that $$\lvert a_n \rvert &lt; \epsilon$$ whenever $N\leq n$.</p> </blockquote> <p>The reason is that the statement in the first box says what happens for all $m,n$ such that $N\leq m \leq n$, <strong>including</strong> the pairs $m,n$ in which $m=n$.</p> <p>On the other hand, the statement in the second box does <em>not</em> tell you what happens for all $m,n$ such that $N\leq m \leq n$, only what happens in a small subset of those cases. 
Therefore you cannot use the statement in the second box to conclude the statement in the first box.</p> <p>A correct statement using the second box is,</p> <blockquote> <p>A series $\{a_n\}$ converges <strong>only if</strong> for any $\epsilon &gt; 0$, there is some $N$ such that $$\lvert a_n \rvert &lt; \epsilon$$ whenever $N\leq n$.</p> </blockquote> <p>What you did was to write "if and only if" when you should have written "only if".</p> Answer 72: <p>I think you're confusion is with the quantifiers. Your initial statements are correct, but to put everything in more specific terms the correct statement, with correct quantifiers is:</p> <p>Let $\{s_n\}$ be a sequence in $\mathbb{R}$ where $s_n=\sum_{k=1}^na_k$. Then the series converges iff $\textit{for all } \epsilon &gt; 0$ $\textit{there exists } N\in\mathbb{N}$ such that $\textit{for all } n\geq m &gt; N$ we have that $$\left|\sum_m^na_k\right| &lt; \epsilon$$</p> <p>This does not hold for the harmonic series since $\sum_{k=N}^\infty\frac{1}{k}=\infty for all N$</p> Answer 73: <p>In the definition of a Cauchy sequence, you have not gotten the order of the quantifiers correct. The partial sums of a series are Cauchy iff for all $\epsilon &gt; 0$ there exists an $N$ such that for all $m,n \ge N$, $|\sum_{k=m}^{n} a_k| &lt; \epsilon$. Notice the "for all $m,n$." You cannot just let $m=n$ and verify the condition for that case.</p> Answer 74: <p>It is true that a sequence in $\mathbb{R}$ converges if and only if it is Cauchy, and as such a series in $\mathbb{R}$ converges if and only if the sequence of partials sums $\{S_k\}_{k=0}^\infty$ is Cauchy. The problem is that you are not using the definition of Cauchy correctly. For a sequence $\{x_k\}_{k=0}^\infty$ it reads that for every $\epsilon &gt;0$ there exists $K \ge 0$ such that if $m,n \ge K$, then $|x_n - x_m | &lt; \epsilon$. Here one <em>does not</em> get to restrict to $n = m+1$, one must check all possible values of $m,n \ge K$. 
Note, though, that it is possible to restrict to $m \ge n \ge K$ in the definition.</p> <p>Let's look at this for the harmonic series. In this case $S_k = \sum_{m=1}^k 1/m$. You are looking at $S_{k+1} - S_k = 1/(m+1)$, which indeed can be made arbitrarily small. In fact, to use the Cauchy definition we must consider arbitrary $k \ge \ell$ to get $$ S_k - S_\ell = \sum_{m=\ell+1}^k \frac{1}{m}, $$<br> and it is these partial sums of arbitrary length that cannot be made small.</p> Answer 75: <p>If $\epsilon&gt;0$ and $m\in\mathbb{N}$ then there is an $n\in\mathbb{N}$, $n&gt;m$ such that</p> <p>$$ \sum_{k=m}^n\frac{1}{k}&gt;\epsilon $$</p> Answer 76: <blockquote> <p>Therefore, by the above, every series converges iff $$\left | \sum_{k=m}^n a_k\right| &lt;\epsilon$$ for a given $\epsilon &gt;0$ and an integer $N$ such that $N\le m\le n$.</p> </blockquote> <p>What are $m$ and $n$ in this statement? They have not been introduced. This statement doesn't make sense.</p> <p>Here's what you should have said. 
</p> <blockquote> <p>The following two statements are equivalent:</p> <ol> <li>The series $\sum_{k=1}^\infty a_k$ converges.</li> <li>If $\epsilon &gt; 0$, then there exists a positive integer $N$ such that if $m$ and $n$ are positive integers and $N &lt; m \leq n$, then $\left | \sum_{k=m}^n a_k \right| &lt; \epsilon$.</li> </ol> </blockquote> Answer 77: <p>It is true that a series converges if and only if the following is true:</p> <blockquote> <p>For any $\epsilon &gt; 0$, there is some $N$ such that $$\left\lvert \sum_{k=m}^n a_k\right\rvert &lt; \epsilon$$ whenever $N\leq m \leq n$.</p> </blockquote> <p>The statement in the box above implies this one:</p> <blockquote> <p>For any $\epsilon &gt; 0$, there is some $N$ such that $$\lvert a_n \rvert &lt; \epsilon$$ whenever $N\leq n$.</p> </blockquote> <p>The reason is that the statement in the first box says what happens for all $m,n$ such that $N\leq m \leq n$, <strong>including</strong> the pairs $m,n$ in which $m=n$.</p> <p>On the other hand, the statement in the second box does <em>not</em> tell you what happens for all $m,n$ such that $N\leq m \leq n$, only what happens in a small subset of those cases. Therefore you cannot use the statement in the second box to conclude the statement in the first box.</p> <p>A correct statement using the second box is,</p> <blockquote> <p>A series $\{a_n\}$ converges <strong>only if</strong> for any $\epsilon &gt; 0$, there is some $N$ such that $$\lvert a_n \rvert &lt; \epsilon$$ whenever $N\leq n$.</p> </blockquote> <p>What you did was to write "if and only if" when you should have written "only if".</p> Answer 78: <p>I think you're confusion is with the quantifiers. Your initial statements are correct, but to put everything in more specific terms the correct statement, with correct quantifiers is:</p> <p>Let $\{s_n\}$ be a sequence in $\mathbb{R}$ where $s_n=\sum_{k=1}^na_k$. 
Then the series converges iff $\textit{for all } \epsilon &gt; 0$ $\textit{there exists } N\in\mathbb{N}$ such that $\textit{for all } n\geq m &gt; N$ we have that $$\left|\sum_m^na_k\right| &lt; \epsilon$$</p> <p>This does not hold for the harmonic series since $\sum_{k=N}^\infty\frac{1}{k}=\infty for all N$</p> Answer 79: <p>In the definition of a Cauchy sequence, you have not gotten the order of the quantifiers correct. The partial sums of a series are Cauchy iff for all $\epsilon &gt; 0$ there exists an $N$ such that for all $m,n \ge N$, $|\sum_{k=m}^{n} a_k| &lt; \epsilon$. Notice the "for all $m,n$." You cannot just let $m=n$ and verify the condition for that case.</p> Answer 80: <p>It is true that a sequence in $\mathbb{R}$ converges if and only if it is Cauchy, and as such a series in $\mathbb{R}$ converges if and only if the sequence of partials sums $\{S_k\}_{k=0}^\infty$ is Cauchy. The problem is that you are not using the definition of Cauchy correctly. For a sequence $\{x_k\}_{k=0}^\infty$ it reads that for every $\epsilon &gt;0$ there exists $K \ge 0$ such that if $m,n \ge K$, then $|x_n - x_m | &lt; \epsilon$. Here one <em>does not</em> get to restrict to $n = m+1$, one must check all possible values of $m,n \ge K$. Note, though, that it is possible to restrict to $m \ge n \ge K$ in the definition.</p> <p>Let's look at this for the harmonic series. In this case $S_k = \sum_{m=1}^k 1/m$. You are looking at $S_{k+1} - S_k = 1/(m+1)$, which indeed can be made arbitrarily small. 
In fact, to use the Cauchy definition we must consider arbitrary $k \ge \ell$ to get $$ S_k - S_\ell = \sum_{m=\ell+1}^k \frac{1}{m}, $$<br> and it is these partial sums of arbitrary length that cannot be made small.</p> Answer 81: <p>If $\epsilon&gt;0$ and $m\in\mathbb{N}$ then there is an $n\in\mathbb{N}$, $n&gt;m$ such that</p> <p>$$ \sum_{k=m}^n\frac{1}{k}&gt;\epsilon $$</p> Answer 82: <blockquote> <p>Therefore, by the above, every series converges iff $$\left | \sum_{k=m}^n a_k\right| &lt;\epsilon$$ for a given $\epsilon &gt;0$ and an integer $N$ such that $N\le m\le n$.</p> </blockquote> <p>What are $m$ and $n$ in this statement? They have not been introduced. This statement doesn't make sense.</p> <p>Here's what you should have said. </p> <blockquote> <p>The following two statements are equivalent:</p> <ol> <li>The series $\sum_{k=1}^\infty a_k$ converges.</li> <li>If $\epsilon &gt; 0$, then there exists a positive integer $N$ such that if $m$ and $n$ are positive integers and $N &lt; m \leq n$, then $\left | \sum_{k=m}^n a_k \right| &lt; \epsilon$.</li> </ol> </blockquote> Answer 83: <p>It is true that a series converges if and only if the following is true:</p> <blockquote> <p>For any $\epsilon &gt; 0$, there is some $N$ such that $$\left\lvert \sum_{k=m}^n a_k\right\rvert &lt; \epsilon$$ whenever $N\leq m \leq n$.</p> </blockquote> <p>The statement in the box above implies this one:</p> <blockquote> <p>For any $\epsilon &gt; 0$, there is some $N$ such that $$\lvert a_n \rvert &lt; \epsilon$$ whenever $N\leq n$.</p> </blockquote> <p>The reason is that the statement in the first box says what happens for all $m,n$ such that $N\leq m \leq n$, <strong>including</strong> the pairs $m,n$ in which $m=n$.</p> <p>On the other hand, the statement in the second box does <em>not</em> tell you what happens for all $m,n$ such that $N\leq m \leq n$, only what happens in a small subset of those cases. 
Therefore you cannot use the statement in the second box to conclude the statement in the first box.</p> <p>A correct statement using the second box is,</p> <blockquote> <p>A series $\{a_n\}$ converges <strong>only if</strong> for any $\epsilon &gt; 0$, there is some $N$ such that $$\lvert a_n \rvert &lt; \epsilon$$ whenever $N\leq n$.</p> </blockquote> <p>What you did was to write "if and only if" when you should have written "only if".</p> Answer 84: <p>I think you're confusion is with the quantifiers. Your initial statements are correct, but to put everything in more specific terms the correct statement, with correct quantifiers is:</p> <p>Let $\{s_n\}$ be a sequence in $\mathbb{R}$ where $s_n=\sum_{k=1}^na_k$. Then the series converges iff $\textit{for all } \epsilon &gt; 0$ $\textit{there exists } N\in\mathbb{N}$ such that $\textit{for all } n\geq m &gt; N$ we have that $$\left|\sum_m^na_k\right| &lt; \epsilon$$</p> <p>This does not hold for the harmonic series since $\sum_{k=N}^\infty\frac{1}{k}=\infty for all N$</p> Answer 85: <p>In the definition of a Cauchy sequence, you have not gotten the order of the quantifiers correct. The partial sums of a series are Cauchy iff for all $\epsilon &gt; 0$ there exists an $N$ such that for all $m,n \ge N$, $|\sum_{k=m}^{n} a_k| &lt; \epsilon$. Notice the "for all $m,n$." You cannot just let $m=n$ and verify the condition for that case.</p> Answer 86: <p>It is true that a sequence in $\mathbb{R}$ converges if and only if it is Cauchy, and as such a series in $\mathbb{R}$ converges if and only if the sequence of partials sums $\{S_k\}_{k=0}^\infty$ is Cauchy. The problem is that you are not using the definition of Cauchy correctly. For a sequence $\{x_k\}_{k=0}^\infty$ it reads that for every $\epsilon &gt;0$ there exists $K \ge 0$ such that if $m,n \ge K$, then $|x_n - x_m | &lt; \epsilon$. Here one <em>does not</em> get to restrict to $n = m+1$, one must check all possible values of $m,n \ge K$. 
Note, though, that it is possible to restrict to $m \ge n \ge K$ in the definition.</p> <p>Let's look at this for the harmonic series. In this case $S_k = \sum_{m=1}^k 1/m$. You are looking at $S_{k+1} - S_k = 1/(m+1)$, which indeed can be made arbitrarily small. In fact, to use the Cauchy definition we must consider arbitrary $k \ge \ell$ to get $$ S_k - S_\ell = \sum_{m=\ell+1}^k \frac{1}{m}, $$<br> and it is these partial sums of arbitrary length that cannot be made small.</p> Answer 87: <p>If $\epsilon&gt;0$ and $m\in\mathbb{N}$ then there is an $n\in\mathbb{N}$, $n&gt;m$ such that</p> <p>$$ \sum_{k=m}^n\frac{1}{k}&gt;\epsilon $$</p> Answer 88: <blockquote> <p>Therefore, by the above, every series converges iff $$\left | \sum_{k=m}^n a_k\right| &lt;\epsilon$$ for a given $\epsilon &gt;0$ and an integer $N$ such that $N\le m\le n$.</p> </blockquote> <p>What are $m$ and $n$ in this statement? They have not been introduced. This statement doesn't make sense.</p> <p>Here's what you should have said. 
</p> <blockquote> <p>The following two statements are equivalent:</p> <ol> <li>The series $\sum_{k=1}^\infty a_k$ converges.</li> <li>If $\epsilon &gt; 0$, then there exists a positive integer $N$ such that if $m$ and $n$ are positive integers and $N &lt; m \leq n$, then $\left | \sum_{k=m}^n a_k \right| &lt; \epsilon$.</li> </ol> </blockquote> Answer 89: <p>It is true that a series converges if and only if the following is true:</p> <blockquote> <p>For any $\epsilon &gt; 0$, there is some $N$ such that $$\left\lvert \sum_{k=m}^n a_k\right\rvert &lt; \epsilon$$ whenever $N\leq m \leq n$.</p> </blockquote> <p>The statement in the box above implies this one:</p> <blockquote> <p>For any $\epsilon &gt; 0$, there is some $N$ such that $$\lvert a_n \rvert &lt; \epsilon$$ whenever $N\leq n$.</p> </blockquote> <p>The reason is that the statement in the first box says what happens for all $m,n$ such that $N\leq m \leq n$, <strong>including</strong> the pairs $m,n$ in which $m=n$.</p> <p>On the other hand, the statement in the second box does <em>not</em> tell you what happens for all $m,n$ such that $N\leq m \leq n$, only what happens in a small subset of those cases. Therefore you cannot use the statement in the second box to conclude the statement in the first box.</p> <p>A correct statement using the second box is,</p> <blockquote> <p>A series $\{a_n\}$ converges <strong>only if</strong> for any $\epsilon &gt; 0$, there is some $N$ such that $$\lvert a_n \rvert &lt; \epsilon$$ whenever $N\leq n$.</p> </blockquote> <p>What you did was to write "if and only if" when you should have written "only if".</p> Answer 90: <p>I think you're confusion is with the quantifiers. Your initial statements are correct, but to put everything in more specific terms the correct statement, with correct quantifiers is:</p> <p>Let $\{s_n\}$ be a sequence in $\mathbb{R}$ where $s_n=\sum_{k=1}^na_k$. 
Then the series converges iff $\textit{for all } \epsilon &gt; 0$ $\textit{there exists } N\in\mathbb{N}$ such that $\textit{for all } n\geq m &gt; N$ we have that $$\left|\sum_m^na_k\right| &lt; \epsilon$$</p> <p>This does not hold for the harmonic series since $\sum_{k=N}^\infty\frac{1}{k}=\infty for all N$</p> Answer 91: <p>In the definition of a Cauchy sequence, you have not gotten the order of the quantifiers correct. The partial sums of a series are Cauchy iff for all $\epsilon &gt; 0$ there exists an $N$ such that for all $m,n \ge N$, $|\sum_{k=m}^{n} a_k| &lt; \epsilon$. Notice the "for all $m,n$." You cannot just let $m=n$ and verify the condition for that case.</p> Answer 92: <p>It is true that a sequence in $\mathbb{R}$ converges if and only if it is Cauchy, and as such a series in $\mathbb{R}$ converges if and only if the sequence of partials sums $\{S_k\}_{k=0}^\infty$ is Cauchy. The problem is that you are not using the definition of Cauchy correctly. For a sequence $\{x_k\}_{k=0}^\infty$ it reads that for every $\epsilon &gt;0$ there exists $K \ge 0$ such that if $m,n \ge K$, then $|x_n - x_m | &lt; \epsilon$. Here one <em>does not</em> get to restrict to $n = m+1$, one must check all possible values of $m,n \ge K$. Note, though, that it is possible to restrict to $m \ge n \ge K$ in the definition.</p> <p>Let's look at this for the harmonic series. In this case $S_k = \sum_{m=1}^k 1/m$. You are looking at $S_{k+1} - S_k = 1/(m+1)$, which indeed can be made arbitrarily small. 
In fact, to use the Cauchy definition we must consider arbitrary $k \ge \ell$ to get $$ S_k - S_\ell = \sum_{m=\ell+1}^k \frac{1}{m}, $$<br> and it is these partial sums of arbitrary length that cannot be made small.</p> Answer 93: <p>If $\epsilon&gt;0$ and $m\in\mathbb{N}$ then there is an $n\in\mathbb{N}$, $n&gt;m$ such that</p> <p>$$ \sum_{k=m}^n\frac{1}{k}&gt;\epsilon $$</p> Answer 94: <blockquote> <p>Therefore, by the above, every series converges iff $$\left | \sum_{k=m}^n a_k\right| &lt;\epsilon$$ for a given $\epsilon &gt;0$ and an integer $N$ such that $N\le m\le n$.</p> </blockquote> <p>What are $m$ and $n$ in this statement? They have not been introduced. This statement doesn't make sense.</p> <p>Here's what you should have said. </p> <blockquote> <p>The following two statements are equivalent:</p> <ol> <li>The series $\sum_{k=1}^\infty a_k$ converges.</li> <li>If $\epsilon &gt; 0$, then there exists a positive integer $N$ such that if $m$ and $n$ are positive integers and $N &lt; m \leq n$, then $\left | \sum_{k=m}^n a_k \right| &lt; \epsilon$.</li> </ol> </blockquote> Answer 95: <p>It is true that a series converges if and only if the following is true:</p> <blockquote> <p>For any $\epsilon &gt; 0$, there is some $N$ such that $$\left\lvert \sum_{k=m}^n a_k\right\rvert &lt; \epsilon$$ whenever $N\leq m \leq n$.</p> </blockquote> <p>The statement in the box above implies this one:</p> <blockquote> <p>For any $\epsilon &gt; 0$, there is some $N$ such that $$\lvert a_n \rvert &lt; \epsilon$$ whenever $N\leq n$.</p> </blockquote> <p>The reason is that the statement in the first box says what happens for all $m,n$ such that $N\leq m \leq n$, <strong>including</strong> the pairs $m,n$ in which $m=n$.</p> <p>On the other hand, the statement in the second box does <em>not</em> tell you what happens for all $m,n$ such that $N\leq m \leq n$, only what happens in a small subset of those cases. 
Therefore you cannot use the statement in the second box to conclude the statement in the first box.</p> <p>A correct statement using the second box is,</p> <blockquote> <p>A series $\{a_n\}$ converges <strong>only if</strong> for any $\epsilon &gt; 0$, there is some $N$ such that $$\lvert a_n \rvert &lt; \epsilon$$ whenever $N\leq n$.</p> </blockquote> <p>What you did was to write "if and only if" when you should have written "only if".</p> Answer 96: <p>I think you're confusion is with the quantifiers. Your initial statements are correct, but to put everything in more specific terms the correct statement, with correct quantifiers is:</p> <p>Let $\{s_n\}$ be a sequence in $\mathbb{R}$ where $s_n=\sum_{k=1}^na_k$. Then the series converges iff $\textit{for all } \epsilon &gt; 0$ $\textit{there exists } N\in\mathbb{N}$ such that $\textit{for all } n\geq m &gt; N$ we have that $$\left|\sum_m^na_k\right| &lt; \epsilon$$</p> <p>This does not hold for the harmonic series since $\sum_{k=N}^\infty\frac{1}{k}=\infty for all N$</p> Answer 97: <p>In the definition of a Cauchy sequence, you have not gotten the order of the quantifiers correct. The partial sums of a series are Cauchy iff for all $\epsilon &gt; 0$ there exists an $N$ such that for all $m,n \ge N$, $|\sum_{k=m}^{n} a_k| &lt; \epsilon$. Notice the "for all $m,n$." You cannot just let $m=n$ and verify the condition for that case.</p> Answer 98: <p>It is true that a sequence in $\mathbb{R}$ converges if and only if it is Cauchy, and as such a series in $\mathbb{R}$ converges if and only if the sequence of partials sums $\{S_k\}_{k=0}^\infty$ is Cauchy. The problem is that you are not using the definition of Cauchy correctly. For a sequence $\{x_k\}_{k=0}^\infty$ it reads that for every $\epsilon &gt;0$ there exists $K \ge 0$ such that if $m,n \ge K$, then $|x_n - x_m | &lt; \epsilon$. Here one <em>does not</em> get to restrict to $n = m+1$, one must check all possible values of $m,n \ge K$. 
Note, though, that it is possible to restrict to $m \ge n \ge K$ in the definition.</p> <p>Let's look at this for the harmonic series. In this case $S_k = \sum_{m=1}^k 1/m$. You are looking at $S_{k+1} - S_k = 1/(k+1)$, which indeed can be made arbitrarily small. In fact, to use the Cauchy definition we must consider arbitrary $k \ge \ell$ to get $$ S_k - S_\ell = \sum_{m=\ell+1}^k \frac{1}{m}, $$<br> and it is these partial sums of arbitrary length that cannot be made small.</p>
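A quick numerical sketch of the point the answers above make (an editorial illustration, not taken from any answer): the individual terms $1/k$ of the harmonic series can be made arbitrarily small, but the Cauchy "blocks" $\sum_{k=m+1}^{2m} 1/k$ cannot.

```python
# Sketch: single terms 1/m shrink to 0, yet the block sums from m+1 to 2m
# never drop below 1/2 (they tend to ln 2), so the Cauchy criterion fails
# even though consecutive partial sums get close.

def block_sum(m: int) -> float:
    """Return sum of 1/k for k = m+1, ..., 2m (a Cauchy block of length m)."""
    return sum(1.0 / k for k in range(m + 1, 2 * m + 1))

for m in (10, 1000, 100000):
    print(f"m={m}: term 1/m = {1.0 / m:.6f}, block sum = {block_sum(m):.6f}")
```

Every block sum stays above $1/2$, which is exactly why checking only $m=n$ (or only $n=m+1$) in the Cauchy condition is not enough.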
Surreal arithmetic with $\frac{1}{2}\omega$
2,093,559
8
https://math.stackexchange.com/questions/2093559/surreal-arithmetic-with-frac12-omega
<p>In the final two chapters of Knuth's <em>Surreal Numbers</em>, both the world of multiplication and infinite/infinitesimal numbers are introduced. The basic ideas of both of these make sense to me, but I'm having two difficulties with the arithmetic of $\frac{1}{2}\omega$ that I'm afraid may reveal a deeper misunderstanding on my part.</p> <p>1) Multiplication between two surreal numbers $xy$ creates: $$ \langle (X_Ly+xY_L-X_LY_L)\cup (X_{R}y+xY_{R}-X_RY_R) | (X_Ly+xY_R-X_LY_R)\cup (X_Ry+xY_L-X_RY_L)\rangle $$ Later, the value for $\frac{1}{2}x$ is given as: $$ \langle\frac{1}{2}X_L\cup(x-\frac{1}{2}X_R)|(x-\frac{1}{2}X_L)\cup\frac{1}{2}X_R\rangle $$ which follows directly. I'm thrown a bit, however, when $\frac{1}{2}\omega$ is calculated to be: $$ \langle\{1,2,3,4,...\}|\{\omega-1,\omega-2,\omega-3,\omega-4,...\}\rangle $$ Since $\frac{1}{2}\equiv\langle0|1\rangle$ and $\omega\equiv\langle\{1,2,3,4,...\}|\emptyset\rangle$ it seems like the left component of their multiplication should be $$ \frac{1}{2}\{1,2,3,4,...\}\cup\emptyset $$ $$ \{\frac{1}{2},1,\frac{3}{2},2,...\} $$ and that the right should be $$ \{\omega-\frac{1}{2},\omega-1,\omega-\frac{3}{2},\omega-2,...\} $$ Is Knuth just making the intuitive leap that $\{\frac{1}{2},1,\frac{3}{2},2,...\}\equiv\{1,2,3,4,...\}$ or am I missing something crucial about multiplying $\frac{1}{2}$ through an infinite set? I know that isn't a huge leap, and I'm perfectly fine with making it, but I suspect this leads to my more fundamental confusion about:</p> <p>2) Intuitively it seems obvious that $\frac{1}{2}\omega+\frac{1}{2}\omega\equiv\omega$, but, even when I accept the representation of $\frac{1}{2}\omega$ discussed above in 1), I don't see how it works. 
It seems to me that surreal addition of $\frac{1}{2}\omega$ to itself gives: $$ \langle\{(1+\frac{1}{2}\omega),(2+\frac{1}{2}\omega),(3+\frac{1}{2}\omega),...\}|\{(\omega-1+\frac{1}{2}\omega),(\omega-2+\frac{1}{2}\omega),...\}\rangle $$ which I think reduces to: $$ \langle\{(1+\frac{1}{2}\omega),(2+\frac{1}{2}\omega),(3+\frac{1}{2}\omega),...\}|\{(\frac{3}{2}\omega-1),(\frac{3}{2}\omega-2),(\frac{3}{2}\omega-3),...\}\rangle $$ and I don't see at all how any of that could be $\equiv\langle\{1,2,3,4,...\}|\emptyset\rangle$. What am I missing?</p>
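Regarding question (1), the "intuitive leap" is normally licensed by the simplification (cofinality) theorem rather than taken on faith. A hedged sketch in Conway-style notation (my own wording, not a passage from Knuth):

```latex
% Cofinality sketch: two left sets that are mutually cofinal determine the
% same number.  Every element of \{1/2, 1, 3/2, 2, \dots\} lies at or below
% some element of \{1, 2, 3, \dots\} and conversely, so for any common
% right set R,
\langle \tfrac{1}{2}, 1, \tfrac{3}{2}, 2, \dots \mid R \rangle
  \;=\; \langle 1, 2, 3, \dots \mid R \rangle ,
% and dually the right set \{\omega-\tfrac{1}{2}, \omega-1, \dots\} may be
% thinned to \{\omega-1, \omega-2, \dots\}, giving Knuth's form of
% \tfrac{1}{2}\omega.
```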
[ { "answer_id": 2093759, "score": 5, "text": "<p>Your calculation looks right. What you are overlooking is a comment in the earlier chapter which discusses various tricks that help one calculate values. Conway notes that one can often remove elements from Left and/or Right which do not affect the value. </p>\n\n<p>For example:</p>\n\n<p>The Number $$\\langle 0, 1, 2, 3 | \\rangle$$ is equal to the number $$\\langle 3 | \\rangle$$ because $0$, $1$, and $2$ are to the left of $3$ and don't affect the value (which is 'between Left and Right').</p>\n\n<p>By the same token, one can remove $\\frac12$, $\\frac32$ ,$2$, etc. from Left above because there is always an integer greater than the removed ones to force the remainder to be 'to the left of the value'; similarly for Right above.</p>\n" }, { "answer_id": 2557556, "score": 4, "text": "<p>Pat Muchmore had the right idea. For simplicity I will use $\\psi:=\\frac{1}{2}\\omega$. You calculated $$\\psi+\\psi=\\langle\\{1+\\psi,2+\\psi,3+\\psi,\\ldots\\}|\\{\\psi+\\omega-1,\\psi+\\omega-2,\\psi+\\omega-3,\\ldots\\}\\rangle$$ and want to show $\\psi+\\psi=\\omega$. For this, it suffices to show $\\psi+\\psi\\leq\\omega$ and $\\psi+\\psi\\geq\\omega$.</p>\n\n<p>By definition$^1$, $x\\geq y$ iff there aren't any $v\\in X^R,w\\in Y^L$ with $v\\leq y$ or $x\\leq w$. </p>\n\n<p>Now for any $n\\in\\mathbb{N}$ we have $\\psi+\\omega-n&gt;\\omega$, so there is no $x\\in(\\psi+\\psi)^R$ with $x\\leq\\omega$ (note that $\\psi+\\psi, \\omega\\notin (\\psi+\\psi)^R$). On the other hand, $\\psi&gt;n$, then $\\psi+\\psi&gt;n$ and so there is no $x\\in\\omega^L$ with $\\psi+\\psi\\leq x$. We conclude $\\psi+\\psi\\geq\\omega$.</p>\n\n<p>There isn't any $x\\in\\omega^R$ with $x\\leq\\psi+\\psi$ because $\\omega^R=\\{\\}$. And for any $n\\in\\mathbb{N}$ we have $\\omega&gt;n+\\psi$, therefore there is no $x\\in(\\psi+\\psi)^L$ with $\\omega\\leq x$. 
Therefore $\\omega\\geq\\psi+\\psi$ and we conclude $$\\frac{1}{2}\\omega+\\frac{1}{2}\\omega=\\omega$$</p>\n\n<p>$^1$ I'm working with \"On numbers and games\" from J.Conway, and don't have Knuth's book at hand, but I'm sure $\\leq$ is somehow equivalent defined there.</p>\n" }, { "answer_id": 2093759, "score": 5, "text": "<p>Your calculation looks right. 
What you are overlooking is a comment in the earlier chapter which discusses various tricks that help one calculate values. Conway notes that one can often remove elements from Left and/or Right which do not affect the value. </p>\n\n<p>For example:</p>\n\n<p>The Number $$\\langle 0, 1, 2, 3 | \\rangle$$ is equal to the number $$\\langle 3 | \\rangle$$ because $0$, $1$, and $2$ are to the left of $3$ and don't affect the value (which is 'between Left and Right').</p>\n\n<p>By the same token, one can remove $\\frac12$, $\\frac32$ ,$2$, etc. from Left above because there is always an integer greater than the removed ones to force the remainder to be 'to the left of the value'; similarly for Right above.</p>\n" }, { "answer_id": 2557556, "score": 4, "text": "<p>Pat Muchmore had the right idea. For simplicity I will use $\\psi:=\\frac{1}{2}\\omega$. You calculated $$\\psi+\\psi=\\langle\\{1+\\psi,2+\\psi,3+\\psi,\\ldots\\}|\\{\\psi+\\omega-1,\\psi+\\omega-2,\\psi+\\omega-3,\\ldots\\}\\rangle$$ and want to show $\\psi+\\psi=\\omega$. For this, it suffices to show $\\psi+\\psi\\leq\\omega$ and $\\psi+\\psi\\geq\\omega$.</p>\n\n<p>By definition$^1$, $x\\geq y$ iff there aren't any $v\\in X^R,w\\in Y^L$ with $v\\leq y$ or $x\\leq w$. </p>\n\n<p>Now for any $n\\in\\mathbb{N}$ we have $\\psi+\\omega-n&gt;\\omega$, so there is no $x\\in(\\psi+\\psi)^R$ with $x\\leq\\omega$ (note that $\\psi+\\psi, \\omega\\notin (\\psi+\\psi)^R$). On the other hand, $\\psi&gt;n$, then $\\psi+\\psi&gt;n$ and so there is no $x\\in\\omega^L$ with $\\psi+\\psi\\leq x$. We conclude $\\psi+\\psi\\geq\\omega$.</p>\n\n<p>There isn't any $x\\in\\omega^R$ with $x\\leq\\psi+\\psi$ because $\\omega^R=\\{\\}$. And for any $n\\in\\mathbb{N}$ we have $\\omega&gt;n+\\psi$, therefore there is no $x\\in(\\psi+\\psi)^L$ with $\\omega\\leq x$. 
Therefore $\\omega\\geq\\psi+\\psi$ and we conclude $$\\frac{1}{2}\\omega+\\frac{1}{2}\\omega=\\omega$$</p>\n\n<p>$^1$ I'm working with \"On numbers and games\" from J.Conway, and don't have Knuth's book at hand, but I'm sure $\\leq$ is somehow equivalent defined there.</p>\n" }, { "answer_id": 2093759, "score": 5, "text": "<p>Your calculation looks right. What you are overlooking is a comment in the earlier chapter which discusses various tricks that help one calculate values. Conway notes that one can often remove elements from Left and/or Right which do not affect the value. </p>\n\n<p>For example:</p>\n\n<p>The Number $$\\langle 0, 1, 2, 3 | \\rangle$$ is equal to the number $$\\langle 3 | \\rangle$$ because $0$, $1$, and $2$ are to the left of $3$ and don't affect the value (which is 'between Left and Right').</p>\n\n<p>By the same token, one can remove $\\frac12$, $\\frac32$ ,$2$, etc. from Left above because there is always an integer greater than the removed ones to force the remainder to be 'to the left of the value'; similarly for Right above.</p>\n" }, { "answer_id": 2557556, "score": 4, "text": "<p>Pat Muchmore had the right idea. For simplicity I will use $\\psi:=\\frac{1}{2}\\omega$. You calculated $$\\psi+\\psi=\\langle\\{1+\\psi,2+\\psi,3+\\psi,\\ldots\\}|\\{\\psi+\\omega-1,\\psi+\\omega-2,\\psi+\\omega-3,\\ldots\\}\\rangle$$ and want to show $\\psi+\\psi=\\omega$. For this, it suffices to show $\\psi+\\psi\\leq\\omega$ and $\\psi+\\psi\\geq\\omega$.</p>\n\n<p>By definition$^1$, $x\\geq y$ iff there aren't any $v\\in X^R,w\\in Y^L$ with $v\\leq y$ or $x\\leq w$. </p>\n\n<p>Now for any $n\\in\\mathbb{N}$ we have $\\psi+\\omega-n&gt;\\omega$, so there is no $x\\in(\\psi+\\psi)^R$ with $x\\leq\\omega$ (note that $\\psi+\\psi, \\omega\\notin (\\psi+\\psi)^R$). On the other hand, $\\psi&gt;n$, then $\\psi+\\psi&gt;n$ and so there is no $x\\in\\omega^L$ with $\\psi+\\psi\\leq x$. 
We conclude $\\psi+\\psi\\geq\\omega$.</p>\n\n<p>There isn't any $x\\in\\omega^R$ with $x\\leq\\psi+\\psi$ because $\\omega^R=\\{\\}$. And for any $n\\in\\mathbb{N}$ we have $\\omega&gt;n+\\psi$, therefore there is no $x\\in(\\psi+\\psi)^L$ with $\\omega\\leq x$. Therefore $\\omega\\geq\\psi+\\psi$ and we conclude $$\\frac{1}{2}\\omega+\\frac{1}{2}\\omega=\\omega$$</p>\n\n<p>$^1$ I'm working with \"On numbers and games\" from J.Conway, and don't have Knuth's book at hand, but I'm sure $\\leq$ is somehow equivalent defined there.</p>\n" } ]
Question: Surreal arithmetic with $\frac{1}{2}\omega$ Answer 1: <p>Your calculation looks right. What you are overlooking is a comment in the earlier chapter which discusses various tricks that help one calculate values. Conway notes that one can often remove elements from Left and/or Right which do not affect the value. </p>

<p>For example:</p>

<p>The Number $$\langle 0, 1, 2, 3 | \rangle$$ is equal to the number $$\langle 3 | \rangle$$ because $0$, $1$, and $2$ are to the left of $3$ and don't affect the value (which is 'between Left and Right').</p>

<p>By the same token, one can remove $\frac12$, $\frac32$, $2$, etc. from Left above because there is always an integer greater than the removed ones to force the remainder to be 'to the left of the value'; similarly for Right above.</p> Answer 2: <p>Pat Muchmore had the right idea. For simplicity I will use $\psi:=\frac{1}{2}\omega$. You calculated $$\psi+\psi=\langle\{1+\psi,2+\psi,3+\psi,\ldots\}|\{\psi+\omega-1,\psi+\omega-2,\psi+\omega-3,\ldots\}\rangle$$ and want to show $\psi+\psi=\omega$. For this, it suffices to show $\psi+\psi\leq\omega$ and $\psi+\psi\geq\omega$.</p>

<p>By definition$^1$, $x\geq y$ iff there is no $v\in X^R$ with $v\leq y$ and no $w\in Y^L$ with $x\leq w$. </p>

<p>Now for any $n\in\mathbb{N}$ we have $\psi+\omega-n&gt;\omega$, so there is no $x\in(\psi+\psi)^R$ with $x\leq\omega$ (note that $\psi+\psi, \omega\notin (\psi+\psi)^R$). On the other hand, since $\psi&gt;n$ for every $n$, also $\psi+\psi&gt;n$, and so there is no $x\in\omega^L$ with $\psi+\psi\leq x$. We conclude $\psi+\psi\geq\omega$.</p>

<p>There isn't any $x\in\omega^R$ with $x\leq\psi+\psi$ because $\omega^R=\{\}$. And for any $n\in\mathbb{N}$ we have $\omega&gt;n+\psi$, therefore there is no $x\in(\psi+\psi)^L$ with $\omega\leq x$. 
Therefore $\omega\geq\psi+\psi$ and we conclude $$\frac{1}{2}\omega+\frac{1}{2}\omega=\omega$$</p> <p>$^1$ I'm working with "On numbers and games" from J.Conway, and don't have Knuth's book at hand, but I'm sure $\leq$ is somehow equivalent defined there.</p> Answer 31: <p>Your calculation looks right. What you are overlooking is a comment in the earlier chapter which discusses various tricks that help one calculate values. Conway notes that one can often remove elements from Left and/or Right which do not affect the value. </p> <p>For example:</p> <p>The Number $$\langle 0, 1, 2, 3 | \rangle$$ is equal to the number $$\langle 3 | \rangle$$ because $0$, $1$, and $2$ are to the left of $3$ and don't affect the value (which is 'between Left and Right').</p> <p>By the same token, one can remove $\frac12$, $\frac32$ ,$2$, etc. from Left above because there is always an integer greater than the removed ones to force the remainder to be 'to the left of the value'; similarly for Right above.</p> Answer 32: <p>Pat Muchmore had the right idea. For simplicity I will use $\psi:=\frac{1}{2}\omega$. You calculated $$\psi+\psi=\langle\{1+\psi,2+\psi,3+\psi,\ldots\}|\{\psi+\omega-1,\psi+\omega-2,\psi+\omega-3,\ldots\}\rangle$$ and want to show $\psi+\psi=\omega$. For this, it suffices to show $\psi+\psi\leq\omega$ and $\psi+\psi\geq\omega$.</p> <p>By definition$^1$, $x\geq y$ iff there aren't any $v\in X^R,w\in Y^L$ with $v\leq y$ or $x\leq w$. </p> <p>Now for any $n\in\mathbb{N}$ we have $\psi+\omega-n&gt;\omega$, so there is no $x\in(\psi+\psi)^R$ with $x\leq\omega$ (note that $\psi+\psi, \omega\notin (\psi+\psi)^R$). On the other hand, $\psi&gt;n$, then $\psi+\psi&gt;n$ and so there is no $x\in\omega^L$ with $\psi+\psi\leq x$. We conclude $\psi+\psi\geq\omega$.</p> <p>There isn't any $x\in\omega^R$ with $x\leq\psi+\psi$ because $\omega^R=\{\}$. And for any $n\in\mathbb{N}$ we have $\omega&gt;n+\psi$, therefore there is no $x\in(\psi+\psi)^L$ with $\omega\leq x$. 
Therefore $\omega\geq\psi+\psi$ and we conclude $$\frac{1}{2}\omega+\frac{1}{2}\omega=\omega$$</p> <p>$^1$ I'm working with "On numbers and games" from J.Conway, and don't have Knuth's book at hand, but I'm sure $\leq$ is somehow equivalent defined there.</p>
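As a compact recap of the argument above, both inequalities follow from Conway's order on games (this is a summary of the answer's own steps, not additional material):

```latex
% Conway's definition: x >= y iff no right option of x is <= y,
% and x is <= no left option of y.
x \ge y \;\iff\; \neg\bigl(\exists\, x^R \le y\bigr) \;\wedge\; \neg\bigl(\exists\, y^L \ge x\bigr)
```

Applying this with $x=\psi+\psi$, $y=\omega$: every right option $\psi+\omega-n>\omega$ and every left option $n<\psi+\psi$, giving $\psi+\psi\ge\omega$; with the roles swapped, $\omega^R=\{\}$ and every $n+\psi<\omega$, giving $\omega\ge\psi+\psi$.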
Why do we need rationalisation?
2,093,721
8
https://math.stackexchange.com/questions/2093721/why-do-we-need-rationalisation
<p>While solving a problem I arrived at the answer $\frac{1}{\sqrt{3}+1}$.</p> <p>But this was not the given answer. </p> <p>The answer was $\frac{\sqrt{3}-1}{2}$, which comes from rationalising my answer.</p> <p>Then I divided both results and got the same decimal value.</p> <p>So why do we need rationalisation if it doesn't change the decimal value?</p>
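For reference, the rationalisation step in question is just multiplication by the conjugate, which the answers below take as given:

```latex
\[
\frac{1}{\sqrt{3}+1}
= \frac{1}{\sqrt{3}+1}\cdot\frac{\sqrt{3}-1}{\sqrt{3}-1}
= \frac{\sqrt{3}-1}{(\sqrt{3})^2-1^2}
= \frac{\sqrt{3}-1}{2}.
\]
```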
[ { "answer_id": 2093725, "score": 15, "text": "<p>We don't need it. In fact your answer is correct. However, it is standard practice to rationalize those types of fractions. This makes it easier to compare results; it's like having some sort of \"standard representation\" of a number instead of several, much in the same way as when we write $1/3$ and not $27/81$</p>\n" }, { "answer_id": 2094004, "score": 3, "text": "<p>In the pre-calculator days, it was a handy skill. If you need to compute $2/\\sqrt{3}$ then you have to do long division of $2$ by $1.732.$ Ick. But if you rationalize the denominator, you get $2(1.732)/3$. It's much easier to divide by $3$ than $1.732.$</p>\n\n<p>But even in these days where you can just ask Siri for the answer, it's still a handy skill. It's easier to compare sizes of numbers if the denominators are rational. Some integrands are easier to integrate if the radicals are moved to the numerator. And probably some other things I can't think of. </p>\n\n<p>So asking students in pre-calc to put their answers in this standard form is preparing them for later stuff.</p>\n" }, { "answer_id": 2094122, "score": 3, "text": "<p>It's generally easier to deal with fractions where the numerator is complicated; for instance, $\\frac{\\sqrt{3}-1}{2}=\\frac{\\sqrt{3}}{2}-\\frac{1}{2}$, but there's no similar obvious way to split up $\\frac{1}{\\sqrt{3}+1}$.</p>\n\n<p>As a result, it's often more useful to rationalize. Before calculators, when it was hard to check if two expressions represented the same number, it was common to <em>always</em> rationalize. These days it's less common to insist on it, but being able to do it when needed is important: situations like that come up a lot in calculus, and rationalizing at the right time can make a problem much simpler, or even be the difference between a problem looking unsolvable, and turning it into something easy.
(Of course, in calculus the fractions usually have variables, so it's harder for a calculator to check if they're equal.)</p>\n" }, { "answer_id": 2093725, "score": 15, "text": "<p>We don't need it. In fact your answer is correct. However, it is standard practice to rationalize those type of fractions. This makes it easier to compare result; it's like having some sort of \"standard representation\" of a number instead of several, much in the same way as when we write $1/3$ and not $27/81$</p>\n" }, { "answer_id": 2094004, "score": 3, "text": "<p>In the pre-calculator days, it was a handy skill. If you need to compute $2/\\sqrt{3}$ then you have to do long division of $2$ by $1.732.$ Ick. But if you rationalize the denominator, you get $2(1.732)/3$. It's much easier to divide by $3$ than $1.732.$</p>\n\n<p>But even in these days were you can just ask Siri for the answer, it's still a handy skill. It's easier to compare sizes of numbers if the denominators are rational. Some integrands are easier to integrate if the radicals are moved to the numerator. And probably some other things I can't think of. </p>\n\n<p>So asking students in pre-calc to put their answers in this standard form is preparing them for later stuff.</p>\n" }, { "answer_id": 2094122, "score": 3, "text": "<p>It's generally easier to deal with fractions where the numerator is complicated; for instance, $\\frac{\\sqrt{3}-1}{2}=\\frac{\\sqrt{3}}{2}-\\frac{1}{2}$, but there's no similar obvious way to split up $\\frac{1}{\\sqrt{3}+1}$.</p>\n\n<p>As a result, it's often more useful to rationalize. Before calculators, when it was hard to check if two expressions represented the same number, it was common to <em>always</em> rationalize. 
These days it's less common to insist on it, but being able to do it when needed is important: situations like that come up a lot in calculus, and rationalizing at the right time can make a problem much simpler, or even be the difference between a problem looking unsolvable, and turning it into something easy. (Of course, in calculus the fractions usually have variables, so it's harder for a calculator to check if they're equal.)</p>\n" }, { "answer_id": 2093725, "score": 15, "text": "<p>We don't need it. In fact your answer is correct. However, it is standard practice to rationalize those type of fractions. This makes it easier to compare result; it's like having some sort of \"standard representation\" of a number instead of several, much in the same way as when we write $1/3$ and not $27/81$</p>\n" }, { "answer_id": 2094004, "score": 3, "text": "<p>In the pre-calculator days, it was a handy skill. If you need to compute $2/\\sqrt{3}$ then you have to do long division of $2$ by $1.732.$ Ick. But if you rationalize the denominator, you get $2(1.732)/3$. It's much easier to divide by $3$ than $1.732.$</p>\n\n<p>But even in these days were you can just ask Siri for the answer, it's still a handy skill. It's easier to compare sizes of numbers if the denominators are rational. Some integrands are easier to integrate if the radicals are moved to the numerator. And probably some other things I can't think of. </p>\n\n<p>So asking students in pre-calc to put their answers in this standard form is preparing them for later stuff.</p>\n" }, { "answer_id": 2094122, "score": 3, "text": "<p>It's generally easier to deal with fractions where the numerator is complicated; for instance, $\\frac{\\sqrt{3}-1}{2}=\\frac{\\sqrt{3}}{2}-\\frac{1}{2}$, but there's no similar obvious way to split up $\\frac{1}{\\sqrt{3}+1}$.</p>\n\n<p>As a result, it's often more useful to rationalize. 
Before calculators, when it was hard to check if two expressions represented the same number, it was common to <em>always</em> rationalize. These days it's less common to insist on it, but being able to do it when needed is important: situations like that come up a lot in calculus, and rationalizing at the right time can make a problem much simpler, or even be the difference between a problem looking unsolvable, and turning it into something easy. (Of course, in calculus the fractions usually have variables, so it's harder for a calculator to check if they're equal.)</p>\n" }, { "answer_id": 2093725, "score": 15, "text": "<p>We don't need it. In fact your answer is correct. However, it is standard practice to rationalize those type of fractions. This makes it easier to compare result; it's like having some sort of \"standard representation\" of a number instead of several, much in the same way as when we write $1/3$ and not $27/81$</p>\n" }, { "answer_id": 2094004, "score": 3, "text": "<p>In the pre-calculator days, it was a handy skill. If you need to compute $2/\\sqrt{3}$ then you have to do long division of $2$ by $1.732.$ Ick. But if you rationalize the denominator, you get $2(1.732)/3$. It's much easier to divide by $3$ than $1.732.$</p>\n\n<p>But even in these days were you can just ask Siri for the answer, it's still a handy skill. It's easier to compare sizes of numbers if the denominators are rational. Some integrands are easier to integrate if the radicals are moved to the numerator. And probably some other things I can't think of. 
</p>\n\n<p>So asking students in pre-calc to put their answers in this standard form is preparing them for later stuff.</p>\n" }, { "answer_id": 2094122, "score": 3, "text": "<p>It's generally easier to deal with fractions where the numerator is complicated; for instance, $\\frac{\\sqrt{3}-1}{2}=\\frac{\\sqrt{3}}{2}-\\frac{1}{2}$, but there's no similar obvious way to split up $\\frac{1}{\\sqrt{3}+1}$.</p>\n\n<p>As a result, it's often more useful to rationalize. Before calculators, when it was hard to check if two expressions represented the same number, it was common to <em>always</em> rationalize. These days it's less common to insist on it, but being able to do it when needed is important: situations like that come up a lot in calculus, and rationalizing at the right time can make a problem much simpler, or even be the difference between a problem looking unsolvable, and turning it into something easy. (Of course, in calculus the fractions usually have variables, so it's harder for a calculator to check if they're equal.)</p>\n" }, { "answer_id": 2093725, "score": 15, "text": "<p>We don't need it. In fact your answer is correct. However, it is standard practice to rationalize those type of fractions. This makes it easier to compare result; it's like having some sort of \"standard representation\" of a number instead of several, much in the same way as when we write $1/3$ and not $27/81$</p>\n" }, { "answer_id": 2094004, "score": 3, "text": "<p>In the pre-calculator days, it was a handy skill. If you need to compute $2/\\sqrt{3}$ then you have to do long division of $2$ by $1.732.$ Ick. But if you rationalize the denominator, you get $2(1.732)/3$. It's much easier to divide by $3$ than $1.732.$</p>\n\n<p>But even in these days were you can just ask Siri for the answer, it's still a handy skill. It's easier to compare sizes of numbers if the denominators are rational. Some integrands are easier to integrate if the radicals are moved to the numerator. 
And probably some other things I can't think of. </p>\n\n<p>So asking students in pre-calc to put their answers in this standard form is preparing them for later stuff.</p>\n" }, { "answer_id": 2094122, "score": 3, "text": "<p>It's generally easier to deal with fractions where the numerator is complicated; for instance, $\\frac{\\sqrt{3}-1}{2}=\\frac{\\sqrt{3}}{2}-\\frac{1}{2}$, but there's no similar obvious way to split up $\\frac{1}{\\sqrt{3}+1}$.</p>\n\n<p>As a result, it's often more useful to rationalize. Before calculators, when it was hard to check if two expressions represented the same number, it was common to <em>always</em> rationalize. These days it's less common to insist on it, but being able to do it when needed is important: situations like that come up a lot in calculus, and rationalizing at the right time can make a problem much simpler, or even be the difference between a problem looking unsolvable, and turning it into something easy. (Of course, in calculus the fractions usually have variables, so it's harder for a calculator to check if they're equal.)</p>\n" }, { "answer_id": 2093725, "score": 15, "text": "<p>We don't need it. In fact your answer is correct. However, it is standard practice to rationalize those type of fractions. This makes it easier to compare result; it's like having some sort of \"standard representation\" of a number instead of several, much in the same way as when we write $1/3$ and not $27/81$</p>\n" }, { "answer_id": 2094004, "score": 3, "text": "<p>In the pre-calculator days, it was a handy skill. If you need to compute $2/\\sqrt{3}$ then you have to do long division of $2$ by $1.732.$ Ick. But if you rationalize the denominator, you get $2(1.732)/3$. It's much easier to divide by $3$ than $1.732.$</p>\n\n<p>But even in these days were you can just ask Siri for the answer, it's still a handy skill. It's easier to compare sizes of numbers if the denominators are rational. 
Some integrands are easier to integrate if the radicals are moved to the numerator. And probably some other things I can't think of. </p>\n\n<p>So asking students in pre-calc to put their answers in this standard form is preparing them for later stuff.</p>\n" }, { "answer_id": 2094122, "score": 3, "text": "<p>It's generally easier to deal with fractions where the numerator is complicated; for instance, $\\frac{\\sqrt{3}-1}{2}=\\frac{\\sqrt{3}}{2}-\\frac{1}{2}$, but there's no similar obvious way to split up $\\frac{1}{\\sqrt{3}+1}$.</p>\n\n<p>As a result, it's often more useful to rationalize. Before calculators, when it was hard to check if two expressions represented the same number, it was common to <em>always</em> rationalize. These days it's less common to insist on it, but being able to do it when needed is important: situations like that come up a lot in calculus, and rationalizing at the right time can make a problem much simpler, or even be the difference between a problem looking unsolvable, and turning it into something easy. (Of course, in calculus the fractions usually have variables, so it's harder for a calculator to check if they're equal.)</p>\n" }, { "answer_id": 2093725, "score": 15, "text": "<p>We don't need it. In fact your answer is correct. However, it is standard practice to rationalize those type of fractions. This makes it easier to compare result; it's like having some sort of \"standard representation\" of a number instead of several, much in the same way as when we write $1/3$ and not $27/81$</p>\n" }, { "answer_id": 2094004, "score": 3, "text": "<p>In the pre-calculator days, it was a handy skill. If you need to compute $2/\\sqrt{3}$ then you have to do long division of $2$ by $1.732.$ Ick. But if you rationalize the denominator, you get $2(1.732)/3$. It's much easier to divide by $3$ than $1.732.$</p>\n\n<p>But even in these days were you can just ask Siri for the answer, it's still a handy skill. 
It's easier to compare sizes of numbers if the denominators are rational. Some integrands are easier to integrate if the radicals are moved to the numerator. And probably some other things I can't think of. </p>\n\n<p>So asking students in pre-calc to put their answers in this standard form is preparing them for later stuff.</p>\n" }, { "answer_id": 2094122, "score": 3, "text": "<p>It's generally easier to deal with fractions where the numerator is complicated; for instance, $\\frac{\\sqrt{3}-1}{2}=\\frac{\\sqrt{3}}{2}-\\frac{1}{2}$, but there's no similar obvious way to split up $\\frac{1}{\\sqrt{3}+1}$.</p>\n\n<p>As a result, it's often more useful to rationalize. Before calculators, when it was hard to check if two expressions represented the same number, it was common to <em>always</em> rationalize. These days it's less common to insist on it, but being able to do it when needed is important: situations like that come up a lot in calculus, and rationalizing at the right time can make a problem much simpler, or even be the difference between a problem looking unsolvable, and turning it into something easy. (Of course, in calculus the fractions usually have variables, so it's harder for a calculator to check if they're equal.)</p>\n" }, { "answer_id": 2093725, "score": 15, "text": "<p>We don't need it. In fact your answer is correct. However, it is standard practice to rationalize those type of fractions. This makes it easier to compare result; it's like having some sort of \"standard representation\" of a number instead of several, much in the same way as when we write $1/3$ and not $27/81$</p>\n" }, { "answer_id": 2094004, "score": 3, "text": "<p>In the pre-calculator days, it was a handy skill. If you need to compute $2/\\sqrt{3}$ then you have to do long division of $2$ by $1.732.$ Ick. But if you rationalize the denominator, you get $2(1.732)/3$. 
It's much easier to divide by $3$ than $1.732.$</p>\n\n<p>But even in these days were you can just ask Siri for the answer, it's still a handy skill. It's easier to compare sizes of numbers if the denominators are rational. Some integrands are easier to integrate if the radicals are moved to the numerator. And probably some other things I can't think of. </p>\n\n<p>So asking students in pre-calc to put their answers in this standard form is preparing them for later stuff.</p>\n" }, { "answer_id": 2094122, "score": 3, "text": "<p>It's generally easier to deal with fractions where the numerator is complicated; for instance, $\\frac{\\sqrt{3}-1}{2}=\\frac{\\sqrt{3}}{2}-\\frac{1}{2}$, but there's no similar obvious way to split up $\\frac{1}{\\sqrt{3}+1}$.</p>\n\n<p>As a result, it's often more useful to rationalize. Before calculators, when it was hard to check if two expressions represented the same number, it was common to <em>always</em> rationalize. These days it's less common to insist on it, but being able to do it when needed is important: situations like that come up a lot in calculus, and rationalizing at the right time can make a problem much simpler, or even be the difference between a problem looking unsolvable, and turning it into something easy. (Of course, in calculus the fractions usually have variables, so it's harder for a calculator to check if they're equal.)</p>\n" }, { "answer_id": 2093725, "score": 15, "text": "<p>We don't need it. In fact your answer is correct. However, it is standard practice to rationalize those type of fractions. This makes it easier to compare result; it's like having some sort of \"standard representation\" of a number instead of several, much in the same way as when we write $1/3$ and not $27/81$</p>\n" }, { "answer_id": 2094004, "score": 3, "text": "<p>In the pre-calculator days, it was a handy skill. If you need to compute $2/\\sqrt{3}$ then you have to do long division of $2$ by $1.732.$ Ick. 
But if you rationalize the denominator, you get $2(1.732)/3$. It's much easier to divide by $3$ than $1.732.$</p>\n\n<p>But even in these days were you can just ask Siri for the answer, it's still a handy skill. It's easier to compare sizes of numbers if the denominators are rational. Some integrands are easier to integrate if the radicals are moved to the numerator. And probably some other things I can't think of. </p>\n\n<p>So asking students in pre-calc to put their answers in this standard form is preparing them for later stuff.</p>\n" }, { "answer_id": 2094122, "score": 3, "text": "<p>It's generally easier to deal with fractions where the numerator is complicated; for instance, $\\frac{\\sqrt{3}-1}{2}=\\frac{\\sqrt{3}}{2}-\\frac{1}{2}$, but there's no similar obvious way to split up $\\frac{1}{\\sqrt{3}+1}$.</p>\n\n<p>As a result, it's often more useful to rationalize. Before calculators, when it was hard to check if two expressions represented the same number, it was common to <em>always</em> rationalize. These days it's less common to insist on it, but being able to do it when needed is important: situations like that come up a lot in calculus, and rationalizing at the right time can make a problem much simpler, or even be the difference between a problem looking unsolvable, and turning it into something easy. (Of course, in calculus the fractions usually have variables, so it's harder for a calculator to check if they're equal.)</p>\n" }, { "answer_id": 2093725, "score": 15, "text": "<p>We don't need it. In fact your answer is correct. However, it is standard practice to rationalize those type of fractions. This makes it easier to compare result; it's like having some sort of \"standard representation\" of a number instead of several, much in the same way as when we write $1/3$ and not $27/81$</p>\n" }, { "answer_id": 2094004, "score": 3, "text": "<p>In the pre-calculator days, it was a handy skill. 
If you need to compute $2/\\sqrt{3}$ then you have to do long division of $2$ by $1.732.$ Ick. But if you rationalize the denominator, you get $2(1.732)/3$. It's much easier to divide by $3$ than $1.732.$</p>\n\n<p>But even in these days were you can just ask Siri for the answer, it's still a handy skill. It's easier to compare sizes of numbers if the denominators are rational. Some integrands are easier to integrate if the radicals are moved to the numerator. And probably some other things I can't think of. </p>\n\n<p>So asking students in pre-calc to put their answers in this standard form is preparing them for later stuff.</p>\n" }, { "answer_id": 2094122, "score": 3, "text": "<p>It's generally easier to deal with fractions where the numerator is complicated; for instance, $\\frac{\\sqrt{3}-1}{2}=\\frac{\\sqrt{3}}{2}-\\frac{1}{2}$, but there's no similar obvious way to split up $\\frac{1}{\\sqrt{3}+1}$.</p>\n\n<p>As a result, it's often more useful to rationalize. Before calculators, when it was hard to check if two expressions represented the same number, it was common to <em>always</em> rationalize. These days it's less common to insist on it, but being able to do it when needed is important: situations like that come up a lot in calculus, and rationalizing at the right time can make a problem much simpler, or even be the difference between a problem looking unsolvable, and turning it into something easy. (Of course, in calculus the fractions usually have variables, so it's harder for a calculator to check if they're equal.)</p>\n" }, { "answer_id": 2093725, "score": 15, "text": "<p>We don't need it. In fact your answer is correct. However, it is standard practice to rationalize those type of fractions. 
This makes it easier to compare result; it's like having some sort of \"standard representation\" of a number instead of several, much in the same way as when we write $1/3$ and not $27/81$</p>\n" }, { "answer_id": 2094004, "score": 3, "text": "<p>In the pre-calculator days, it was a handy skill. If you need to compute $2/\\sqrt{3}$ then you have to do long division of $2$ by $1.732.$ Ick. But if you rationalize the denominator, you get $2(1.732)/3$. It's much easier to divide by $3$ than $1.732.$</p>\n\n<p>But even in these days were you can just ask Siri for the answer, it's still a handy skill. It's easier to compare sizes of numbers if the denominators are rational. Some integrands are easier to integrate if the radicals are moved to the numerator. And probably some other things I can't think of. </p>\n\n<p>So asking students in pre-calc to put their answers in this standard form is preparing them for later stuff.</p>\n" }, { "answer_id": 2094122, "score": 3, "text": "<p>It's generally easier to deal with fractions where the numerator is complicated; for instance, $\\frac{\\sqrt{3}-1}{2}=\\frac{\\sqrt{3}}{2}-\\frac{1}{2}$, but there's no similar obvious way to split up $\\frac{1}{\\sqrt{3}+1}$.</p>\n\n<p>As a result, it's often more useful to rationalize. Before calculators, when it was hard to check if two expressions represented the same number, it was common to <em>always</em> rationalize. These days it's less common to insist on it, but being able to do it when needed is important: situations like that come up a lot in calculus, and rationalizing at the right time can make a problem much simpler, or even be the difference between a problem looking unsolvable, and turning it into something easy. (Of course, in calculus the fractions usually have variables, so it's harder for a calculator to check if they're equal.)</p>\n" }, { "answer_id": 2093725, "score": 15, "text": "<p>We don't need it. In fact your answer is correct. 
However, it is standard practice to rationalize those type of fractions. This makes it easier to compare result; it's like having some sort of \"standard representation\" of a number instead of several, much in the same way as when we write $1/3$ and not $27/81$</p>\n" }, { "answer_id": 2094004, "score": 3, "text": "<p>In the pre-calculator days, it was a handy skill. If you need to compute $2/\\sqrt{3}$ then you have to do long division of $2$ by $1.732.$ Ick. But if you rationalize the denominator, you get $2(1.732)/3$. It's much easier to divide by $3$ than $1.732.$</p>\n\n<p>But even in these days were you can just ask Siri for the answer, it's still a handy skill. It's easier to compare sizes of numbers if the denominators are rational. Some integrands are easier to integrate if the radicals are moved to the numerator. And probably some other things I can't think of. </p>\n\n<p>So asking students in pre-calc to put their answers in this standard form is preparing them for later stuff.</p>\n" }, { "answer_id": 2094122, "score": 3, "text": "<p>It's generally easier to deal with fractions where the numerator is complicated; for instance, $\\frac{\\sqrt{3}-1}{2}=\\frac{\\sqrt{3}}{2}-\\frac{1}{2}$, but there's no similar obvious way to split up $\\frac{1}{\\sqrt{3}+1}$.</p>\n\n<p>As a result, it's often more useful to rationalize. Before calculators, when it was hard to check if two expressions represented the same number, it was common to <em>always</em> rationalize. These days it's less common to insist on it, but being able to do it when needed is important: situations like that come up a lot in calculus, and rationalizing at the right time can make a problem much simpler, or even be the difference between a problem looking unsolvable, and turning it into something easy. 
(Of course, in calculus the fractions usually have variables, so it's harder for a calculator to check if they're equal.)</p>\n" } ]
Question: Why do we need rationalisation? Answer 1: <p>We don't need it. In fact your answer is correct. However, it is standard practice to rationalize those type of fractions. This makes it easier to compare result; it's like having some sort of "standard representation" of a number instead of several, much in the same way as when we write $1/3$ and not $27/81$</p> Answer 2: <p>In the pre-calculator days, it was a handy skill. If you need to compute $2/\sqrt{3}$ then you have to do long division of $2$ by $1.732.$ Ick. But if you rationalize the denominator, you get $2(1.732)/3$. It's much easier to divide by $3$ than $1.732.$</p> <p>But even in these days were you can just ask Siri for the answer, it's still a handy skill. It's easier to compare sizes of numbers if the denominators are rational. Some integrands are easier to integrate if the radicals are moved to the numerator. And probably some other things I can't think of. </p> <p>So asking students in pre-calc to put their answers in this standard form is preparing them for later stuff.</p> Answer 3: <p>It's generally easier to deal with fractions where the numerator is complicated; for instance, $\frac{\sqrt{3}-1}{2}=\frac{\sqrt{3}}{2}-\frac{1}{2}$, but there's no similar obvious way to split up $\frac{1}{\sqrt{3}+1}$.</p> <p>As a result, it's often more useful to rationalize. Before calculators, when it was hard to check if two expressions represented the same number, it was common to <em>always</em> rationalize. These days it's less common to insist on it, but being able to do it when needed is important: situations like that come up a lot in calculus, and rationalizing at the right time can make a problem much simpler, or even be the difference between a problem looking unsolvable, and turning it into something easy. (Of course, in calculus the fractions usually have variables, so it's harder for a calculator to check if they're equal.)</p> Answer 4: <p>We don't need it. In fact your answer is correct. 
However, it is standard practice to rationalize those type of fractions. This makes it easier to compare result; it's like having some sort of "standard representation" of a number instead of several, much in the same way as when we write $1/3$ and not $27/81$</p> Answer 5: <p>In the pre-calculator days, it was a handy skill. If you need to compute $2/\sqrt{3}$ then you have to do long division of $2$ by $1.732.$ Ick. But if you rationalize the denominator, you get $2(1.732)/3$. It's much easier to divide by $3$ than $1.732.$</p> <p>But even in these days were you can just ask Siri for the answer, it's still a handy skill. It's easier to compare sizes of numbers if the denominators are rational. Some integrands are easier to integrate if the radicals are moved to the numerator. And probably some other things I can't think of. </p> <p>So asking students in pre-calc to put their answers in this standard form is preparing them for later stuff.</p> Answer 6: <p>It's generally easier to deal with fractions where the numerator is complicated; for instance, $\frac{\sqrt{3}-1}{2}=\frac{\sqrt{3}}{2}-\frac{1}{2}$, but there's no similar obvious way to split up $\frac{1}{\sqrt{3}+1}$.</p> <p>As a result, it's often more useful to rationalize. Before calculators, when it was hard to check if two expressions represented the same number, it was common to <em>always</em> rationalize. These days it's less common to insist on it, but being able to do it when needed is important: situations like that come up a lot in calculus, and rationalizing at the right time can make a problem much simpler, or even be the difference between a problem looking unsolvable, and turning it into something easy. (Of course, in calculus the fractions usually have variables, so it's harder for a calculator to check if they're equal.)</p> Answer 7: <p>We don't need it. In fact your answer is correct. However, it is standard practice to rationalize those type of fractions. 
This makes it easier to compare result; it's like having some sort of "standard representation" of a number instead of several, much in the same way as when we write $1/3$ and not $27/81$</p> Answer 8: <p>In the pre-calculator days, it was a handy skill. If you need to compute $2/\sqrt{3}$ then you have to do long division of $2$ by $1.732.$ Ick. But if you rationalize the denominator, you get $2(1.732)/3$. It's much easier to divide by $3$ than $1.732.$</p> <p>But even in these days were you can just ask Siri for the answer, it's still a handy skill. It's easier to compare sizes of numbers if the denominators are rational. Some integrands are easier to integrate if the radicals are moved to the numerator. And probably some other things I can't think of. </p> <p>So asking students in pre-calc to put their answers in this standard form is preparing them for later stuff.</p> Answer 9: <p>It's generally easier to deal with fractions where the numerator is complicated; for instance, $\frac{\sqrt{3}-1}{2}=\frac{\sqrt{3}}{2}-\frac{1}{2}$, but there's no similar obvious way to split up $\frac{1}{\sqrt{3}+1}$.</p> <p>As a result, it's often more useful to rationalize. Before calculators, when it was hard to check if two expressions represented the same number, it was common to <em>always</em> rationalize. These days it's less common to insist on it, but being able to do it when needed is important: situations like that come up a lot in calculus, and rationalizing at the right time can make a problem much simpler, or even be the difference between a problem looking unsolvable, and turning it into something easy. (Of course, in calculus the fractions usually have variables, so it's harder for a calculator to check if they're equal.)</p> Answer 10: <p>We don't need it. In fact your answer is correct. However, it is standard practice to rationalize those type of fractions. 
This makes it easier to compare result; it's like having some sort of "standard representation" of a number instead of several, much in the same way as when we write $1/3$ and not $27/81$</p> Answer 11: <p>In the pre-calculator days, it was a handy skill. If you need to compute $2/\sqrt{3}$ then you have to do long division of $2$ by $1.732.$ Ick. But if you rationalize the denominator, you get $2(1.732)/3$. It's much easier to divide by $3$ than $1.732.$</p> <p>But even in these days were you can just ask Siri for the answer, it's still a handy skill. It's easier to compare sizes of numbers if the denominators are rational. Some integrands are easier to integrate if the radicals are moved to the numerator. And probably some other things I can't think of. </p> <p>So asking students in pre-calc to put their answers in this standard form is preparing them for later stuff.</p> Answer 12: <p>It's generally easier to deal with fractions where the numerator is complicated; for instance, $\frac{\sqrt{3}-1}{2}=\frac{\sqrt{3}}{2}-\frac{1}{2}$, but there's no similar obvious way to split up $\frac{1}{\sqrt{3}+1}$.</p> <p>As a result, it's often more useful to rationalize. Before calculators, when it was hard to check if two expressions represented the same number, it was common to <em>always</em> rationalize. These days it's less common to insist on it, but being able to do it when needed is important: situations like that come up a lot in calculus, and rationalizing at the right time can make a problem much simpler, or even be the difference between a problem looking unsolvable, and turning it into something easy. (Of course, in calculus the fractions usually have variables, so it's harder for a calculator to check if they're equal.)</p> Answer 13: <p>We don't need it. In fact your answer is correct. However, it is standard practice to rationalize those type of fractions. 
This makes it easier to compare result; it's like having some sort of "standard representation" of a number instead of several, much in the same way as when we write $1/3$ and not $27/81$</p> Answer 14: <p>In the pre-calculator days, it was a handy skill. If you need to compute $2/\sqrt{3}$ then you have to do long division of $2$ by $1.732.$ Ick. But if you rationalize the denominator, you get $2(1.732)/3$. It's much easier to divide by $3$ than $1.732.$</p> <p>But even in these days were you can just ask Siri for the answer, it's still a handy skill. It's easier to compare sizes of numbers if the denominators are rational. Some integrands are easier to integrate if the radicals are moved to the numerator. And probably some other things I can't think of. </p> <p>So asking students in pre-calc to put their answers in this standard form is preparing them for later stuff.</p> Answer 15: <p>It's generally easier to deal with fractions where the numerator is complicated; for instance, $\frac{\sqrt{3}-1}{2}=\frac{\sqrt{3}}{2}-\frac{1}{2}$, but there's no similar obvious way to split up $\frac{1}{\sqrt{3}+1}$.</p> <p>As a result, it's often more useful to rationalize. Before calculators, when it was hard to check if two expressions represented the same number, it was common to <em>always</em> rationalize. These days it's less common to insist on it, but being able to do it when needed is important: situations like that come up a lot in calculus, and rationalizing at the right time can make a problem much simpler, or even be the difference between a problem looking unsolvable, and turning it into something easy. (Of course, in calculus the fractions usually have variables, so it's harder for a calculator to check if they're equal.)</p> Answer 16: <p>We don't need it. In fact your answer is correct. However, it is standard practice to rationalize those type of fractions. 
This makes it easier to compare result; it's like having some sort of "standard representation" of a number instead of several, much in the same way as when we write $1/3$ and not $27/81$</p> Answer 17: <p>In the pre-calculator days, it was a handy skill. If you need to compute $2/\sqrt{3}$ then you have to do long division of $2$ by $1.732.$ Ick. But if you rationalize the denominator, you get $2(1.732)/3$. It's much easier to divide by $3$ than $1.732.$</p> <p>But even in these days were you can just ask Siri for the answer, it's still a handy skill. It's easier to compare sizes of numbers if the denominators are rational. Some integrands are easier to integrate if the radicals are moved to the numerator. And probably some other things I can't think of. </p> <p>So asking students in pre-calc to put their answers in this standard form is preparing them for later stuff.</p> Answer 18: <p>It's generally easier to deal with fractions where the numerator is complicated; for instance, $\frac{\sqrt{3}-1}{2}=\frac{\sqrt{3}}{2}-\frac{1}{2}$, but there's no similar obvious way to split up $\frac{1}{\sqrt{3}+1}$.</p> <p>As a result, it's often more useful to rationalize. Before calculators, when it was hard to check if two expressions represented the same number, it was common to <em>always</em> rationalize. These days it's less common to insist on it, but being able to do it when needed is important: situations like that come up a lot in calculus, and rationalizing at the right time can make a problem much simpler, or even be the difference between a problem looking unsolvable, and turning it into something easy. (Of course, in calculus the fractions usually have variables, so it's harder for a calculator to check if they're equal.)</p> Answer 19: <p>We don't need it. In fact your answer is correct. However, it is standard practice to rationalize those type of fractions. 
This makes it easier to compare result; it's like having some sort of "standard representation" of a number instead of several, much in the same way as when we write $1/3$ and not $27/81$</p> Answer 20: <p>In the pre-calculator days, it was a handy skill. If you need to compute $2/\sqrt{3}$ then you have to do long division of $2$ by $1.732.$ Ick. But if you rationalize the denominator, you get $2(1.732)/3$. It's much easier to divide by $3$ than $1.732.$</p> <p>But even in these days were you can just ask Siri for the answer, it's still a handy skill. It's easier to compare sizes of numbers if the denominators are rational. Some integrands are easier to integrate if the radicals are moved to the numerator. And probably some other things I can't think of. </p> <p>So asking students in pre-calc to put their answers in this standard form is preparing them for later stuff.</p> Answer 21: <p>It's generally easier to deal with fractions where the numerator is complicated; for instance, $\frac{\sqrt{3}-1}{2}=\frac{\sqrt{3}}{2}-\frac{1}{2}$, but there's no similar obvious way to split up $\frac{1}{\sqrt{3}+1}$.</p> <p>As a result, it's often more useful to rationalize. Before calculators, when it was hard to check if two expressions represented the same number, it was common to <em>always</em> rationalize. These days it's less common to insist on it, but being able to do it when needed is important: situations like that come up a lot in calculus, and rationalizing at the right time can make a problem much simpler, or even be the difference between a problem looking unsolvable, and turning it into something easy. (Of course, in calculus the fractions usually have variables, so it's harder for a calculator to check if they're equal.)</p> Answer 22: <p>We don't need it. In fact your answer is correct. However, it is standard practice to rationalize those type of fractions. 
This makes it easier to compare result; it's like having some sort of "standard representation" of a number instead of several, much in the same way as when we write $1/3$ and not $27/81$</p> Answer 23: <p>In the pre-calculator days, it was a handy skill. If you need to compute $2/\sqrt{3}$ then you have to do long division of $2$ by $1.732.$ Ick. But if you rationalize the denominator, you get $2(1.732)/3$. It's much easier to divide by $3$ than $1.732.$</p> <p>But even in these days were you can just ask Siri for the answer, it's still a handy skill. It's easier to compare sizes of numbers if the denominators are rational. Some integrands are easier to integrate if the radicals are moved to the numerator. And probably some other things I can't think of. </p> <p>So asking students in pre-calc to put their answers in this standard form is preparing them for later stuff.</p> Answer 24: <p>It's generally easier to deal with fractions where the numerator is complicated; for instance, $\frac{\sqrt{3}-1}{2}=\frac{\sqrt{3}}{2}-\frac{1}{2}$, but there's no similar obvious way to split up $\frac{1}{\sqrt{3}+1}$.</p> <p>As a result, it's often more useful to rationalize. Before calculators, when it was hard to check if two expressions represented the same number, it was common to <em>always</em> rationalize. These days it's less common to insist on it, but being able to do it when needed is important: situations like that come up a lot in calculus, and rationalizing at the right time can make a problem much simpler, or even be the difference between a problem looking unsolvable, and turning it into something easy. (Of course, in calculus the fractions usually have variables, so it's harder for a calculator to check if they're equal.)</p> Answer 25: <p>We don't need it. In fact your answer is correct. However, it is standard practice to rationalize those type of fractions. 
This makes it easier to compare result; it's like having some sort of "standard representation" of a number instead of several, much in the same way as when we write $1/3$ and not $27/81$</p> Answer 26: <p>In the pre-calculator days, it was a handy skill. If you need to compute $2/\sqrt{3}$ then you have to do long division of $2$ by $1.732.$ Ick. But if you rationalize the denominator, you get $2(1.732)/3$. It's much easier to divide by $3$ than $1.732.$</p> <p>But even in these days were you can just ask Siri for the answer, it's still a handy skill. It's easier to compare sizes of numbers if the denominators are rational. Some integrands are easier to integrate if the radicals are moved to the numerator. And probably some other things I can't think of. </p> <p>So asking students in pre-calc to put their answers in this standard form is preparing them for later stuff.</p> Answer 27: <p>It's generally easier to deal with fractions where the numerator is complicated; for instance, $\frac{\sqrt{3}-1}{2}=\frac{\sqrt{3}}{2}-\frac{1}{2}$, but there's no similar obvious way to split up $\frac{1}{\sqrt{3}+1}$.</p> <p>As a result, it's often more useful to rationalize. Before calculators, when it was hard to check if two expressions represented the same number, it was common to <em>always</em> rationalize. These days it's less common to insist on it, but being able to do it when needed is important: situations like that come up a lot in calculus, and rationalizing at the right time can make a problem much simpler, or even be the difference between a problem looking unsolvable, and turning it into something easy. (Of course, in calculus the fractions usually have variables, so it's harder for a calculator to check if they're equal.)</p> Answer 28: <p>We don't need it. In fact your answer is correct. However, it is standard practice to rationalize those type of fractions. 
This makes it easier to compare result; it's like having some sort of "standard representation" of a number instead of several, much in the same way as when we write $1/3$ and not $27/81$</p> Answer 29: <p>In the pre-calculator days, it was a handy skill. If you need to compute $2/\sqrt{3}$ then you have to do long division of $2$ by $1.732.$ Ick. But if you rationalize the denominator, you get $2(1.732)/3$. It's much easier to divide by $3$ than $1.732.$</p> <p>But even in these days were you can just ask Siri for the answer, it's still a handy skill. It's easier to compare sizes of numbers if the denominators are rational. Some integrands are easier to integrate if the radicals are moved to the numerator. And probably some other things I can't think of. </p> <p>So asking students in pre-calc to put their answers in this standard form is preparing them for later stuff.</p> Answer 30: <p>It's generally easier to deal with fractions where the numerator is complicated; for instance, $\frac{\sqrt{3}-1}{2}=\frac{\sqrt{3}}{2}-\frac{1}{2}$, but there's no similar obvious way to split up $\frac{1}{\sqrt{3}+1}$.</p> <p>As a result, it's often more useful to rationalize. Before calculators, when it was hard to check if two expressions represented the same number, it was common to <em>always</em> rationalize. These days it's less common to insist on it, but being able to do it when needed is important: situations like that come up a lot in calculus, and rationalizing at the right time can make a problem much simpler, or even be the difference between a problem looking unsolvable, and turning it into something easy. (Of course, in calculus the fractions usually have variables, so it's harder for a calculator to check if they're equal.)</p> Answer 31: <p>We don't need it. In fact your answer is correct. However, it is standard practice to rationalize those type of fractions. 
This makes it easier to compare result; it's like having some sort of "standard representation" of a number instead of several, much in the same way as when we write $1/3$ and not $27/81$</p> Answer 32: <p>In the pre-calculator days, it was a handy skill. If you need to compute $2/\sqrt{3}$ then you have to do long division of $2$ by $1.732.$ Ick. But if you rationalize the denominator, you get $2(1.732)/3$. It's much easier to divide by $3$ than $1.732.$</p> <p>But even in these days were you can just ask Siri for the answer, it's still a handy skill. It's easier to compare sizes of numbers if the denominators are rational. Some integrands are easier to integrate if the radicals are moved to the numerator. And probably some other things I can't think of. </p> <p>So asking students in pre-calc to put their answers in this standard form is preparing them for later stuff.</p> Answer 33: <p>It's generally easier to deal with fractions where the numerator is complicated; for instance, $\frac{\sqrt{3}-1}{2}=\frac{\sqrt{3}}{2}-\frac{1}{2}$, but there's no similar obvious way to split up $\frac{1}{\sqrt{3}+1}$.</p> <p>As a result, it's often more useful to rationalize. Before calculators, when it was hard to check if two expressions represented the same number, it was common to <em>always</em> rationalize. These days it's less common to insist on it, but being able to do it when needed is important: situations like that come up a lot in calculus, and rationalizing at the right time can make a problem much simpler, or even be the difference between a problem looking unsolvable, and turning it into something easy. (Of course, in calculus the fractions usually have variables, so it's harder for a calculator to check if they're equal.)</p> Answer 34: <p>We don't need it. In fact your answer is correct. However, it is standard practice to rationalize those type of fractions. 
This makes it easier to compare result; it's like having some sort of "standard representation" of a number instead of several, much in the same way as when we write $1/3$ and not $27/81$</p> Answer 35: <p>In the pre-calculator days, it was a handy skill. If you need to compute $2/\sqrt{3}$ then you have to do long division of $2$ by $1.732.$ Ick. But if you rationalize the denominator, you get $2(1.732)/3$. It's much easier to divide by $3$ than $1.732.$</p> <p>But even in these days were you can just ask Siri for the answer, it's still a handy skill. It's easier to compare sizes of numbers if the denominators are rational. Some integrands are easier to integrate if the radicals are moved to the numerator. And probably some other things I can't think of. </p> <p>So asking students in pre-calc to put their answers in this standard form is preparing them for later stuff.</p> Answer 36: <p>It's generally easier to deal with fractions where the numerator is complicated; for instance, $\frac{\sqrt{3}-1}{2}=\frac{\sqrt{3}}{2}-\frac{1}{2}$, but there's no similar obvious way to split up $\frac{1}{\sqrt{3}+1}$.</p> <p>As a result, it's often more useful to rationalize. Before calculators, when it was hard to check if two expressions represented the same number, it was common to <em>always</em> rationalize. These days it's less common to insist on it, but being able to do it when needed is important: situations like that come up a lot in calculus, and rationalizing at the right time can make a problem much simpler, or even be the difference between a problem looking unsolvable, and turning it into something easy. (Of course, in calculus the fractions usually have variables, so it's harder for a calculator to check if they're equal.)</p> Answer 37: <p>We don't need it. In fact your answer is correct. However, it is standard practice to rationalize those type of fractions. 
This makes it easier to compare result; it's like having some sort of "standard representation" of a number instead of several, much in the same way as when we write $1/3$ and not $27/81$</p> Answer 38: <p>In the pre-calculator days, it was a handy skill. If you need to compute $2/\sqrt{3}$ then you have to do long division of $2$ by $1.732.$ Ick. But if you rationalize the denominator, you get $2(1.732)/3$. It's much easier to divide by $3$ than $1.732.$</p> <p>But even in these days were you can just ask Siri for the answer, it's still a handy skill. It's easier to compare sizes of numbers if the denominators are rational. Some integrands are easier to integrate if the radicals are moved to the numerator. And probably some other things I can't think of. </p> <p>So asking students in pre-calc to put their answers in this standard form is preparing them for later stuff.</p> Answer 39: <p>It's generally easier to deal with fractions where the numerator is complicated; for instance, $\frac{\sqrt{3}-1}{2}=\frac{\sqrt{3}}{2}-\frac{1}{2}$, but there's no similar obvious way to split up $\frac{1}{\sqrt{3}+1}$.</p> <p>As a result, it's often more useful to rationalize. Before calculators, when it was hard to check if two expressions represented the same number, it was common to <em>always</em> rationalize. These days it's less common to insist on it, but being able to do it when needed is important: situations like that come up a lot in calculus, and rationalizing at the right time can make a problem much simpler, or even be the difference between a problem looking unsolvable, and turning it into something easy. (Of course, in calculus the fractions usually have variables, so it's harder for a calculator to check if they're equal.)</p> Answer 40: <p>We don't need it. In fact your answer is correct. However, it is standard practice to rationalize those type of fractions. 
This makes it easier to compare result; it's like having some sort of "standard representation" of a number instead of several, much in the same way as when we write $1/3$ and not $27/81$</p>
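The conjugate trick the answers above rely on is easy to verify numerically. A quick illustrative sketch (mine, not part of any answer):

```python
import math

# Rationalize 1/(sqrt(3)+1) by multiplying through by the conjugate:
# 1/(sqrt(3)+1) * (sqrt(3)-1)/(sqrt(3)-1) = (sqrt(3)-1)/(3 - 1) = (sqrt(3)-1)/2
original = 1 / (math.sqrt(3) + 1)
rationalized = (math.sqrt(3) - 1) / 2

assert math.isclose(original, rationalized)

# As noted above, the rationalized form splits into simple terms:
assert math.isclose(rationalized, math.sqrt(3) / 2 - 1 / 2)
print(rationalized)  # 0.3660254...
```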
Approximating square roots using binomial expansion.
2,093,811
8
https://math.stackexchange.com/questions/2093811/approximating-square-roots-using-binomial-expansion
<p><strong>Through the binomial expansion of $(1 - 2x)^\frac{1}{2}$, I am required to find an approximation of $\sqrt2$.</strong></p> <blockquote> <p><strong>Binomial expansion</strong></p> <p>$ (1 + x)^n = 1 + \frac{n}{1}x + \frac{n(n-1)}{1*2}x^2 + ... $</p> </blockquote> <p>Thus, the expansion of $(1 - 2x)^\frac{1}{2}$: $$ = 1 - x -\frac{1}{2}x^2 - \frac{1}{2}x^3 + ... $$</p> <p>The suggested way is to choose a value for $x$ so that $(1-2x)$ has the form <em>$2\times$ a perfect square</em>. This can be done by taking $x = 0.01$. Thus, $(1 - 2x)=(1-2*0.01) = 0.98 = 2*0.7^2$</p> <p>And $$ (1 - 2x)^\frac{1}{2} = 0.98^\frac{1}{2} = 0.7\sqrt2,$$ which is equal to the previously established expansion, so we can now go ahead and find $\sqrt2$. The problem I am facing is that there was no mention of how the value of $x=0.01$ was arrived at.</p> <p><strong>Is there an easy way to determine an appropriate value for $x$?</strong></p>
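The effect of the choice $x = 0.01$ can be checked numerically. A minimal sketch (my own, truncating the series after the $x^3$ term):

```python
import math

# Truncated binomial series for (1 - 2x)^(1/2): 1 - x - x^2/2 - x^3/2
def series(x):
    return 1 - x - x**2 / 2 - x**3 / 2

x = 0.01                           # chosen so that 1 - 2x = 0.98 = 2 * 0.7^2
sqrt2_estimate = series(x) / 0.7   # since (1 - 2x)^(1/2) = 0.7 * sqrt(2)

print(sqrt2_estimate)                       # 1.4142135714...
print(abs(sqrt2_estimate - math.sqrt(2)))   # error of order 1e-8
```

Four series terms already give $\sqrt 2$ to about eight decimal places, which is why a small $x$ of this form is such a good pick.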
[ { "answer_id": 3198245, "score": 8, "text": "<blockquote>\n <p>We want to (manually) approximate <span class=\"math-container\">$\\sqrt{2}$</span> by using the first few terms of the binomial series expansion of \n <span class=\"math-container\">\\begin{align*}\n\\sqrt{1-2x}&amp;= \\sum_{n=0}^\\infty \\binom{\\frac{1}{2}}{n}(-2x)^n\\qquad\\qquad\\qquad\\qquad |x|&lt;\\frac{1}{2}\\\\\n&amp;= 1-x-\\frac{1}{2}x^2-\\frac{1}{2}x^3+\\cdots\\tag{1}\n\\end{align*}</span>\n Here we look for a way to determine appropriate values of <span class=\"math-container\">$x$</span> using the binomial expansion.</p>\n</blockquote>\n\n<p>In order to apply (1) we are looking for a number <span class=\"math-container\">$y$</span> with\n<span class=\"math-container\">\\begin{align*}\n\\sqrt{1-2x}&amp;=\\sqrt{2y^2}=y\\sqrt{2}\\tag{2}\\\\\n\\color{blue}{\\sqrt{2}}&amp;\\color{blue}{=\\frac{1}{y}\\sqrt{1-2x}}\n\\end{align*}</span></p>\n\n<p>We see it is convenient to choose <span class=\"math-container\">$y$</span> to be a square number which can be easily factored out from the root. 
We obtain from (2)\n<span class=\"math-container\">\\begin{align*}\n1-2x&amp;=2y^2\\\\\n\\color{blue}{x}&amp;\\color{blue}{=\\frac{1}{2}-y^2}\\tag{3}\n\\end{align*}</span></p>\n\n<blockquote>\n <p>When looking for an appropriate <span class=\"math-container\">$y$</span> which fulfills (3) there are some aspects to consider:</p>\n \n <ul>\n <li><p>We have to respect the radius of convergence <span class=\"math-container\">$|x|&lt;\\frac{1}{2}$</span>.</p></li>\n <li><p>Since we want to calculate an approximation of <span class=\"math-container\">$\\sqrt{2}$</span> by hand we should take <span class=\"math-container\">$y\\in\\mathbb{Q}$</span> with rather small numbers as numerator and denominator.</p></li>\n <li><p>Last but not least: We want to find a value <span class=\"math-container\">$x$</span> which provides a good approximation for <span class=\"math-container\">$\\sqrt{2}$</span>.</p></li>\n </ul>\n \n <p>We will see it's not hard to find values which have these properties.</p>\n</blockquote>\n\n<p>We see in (1) a good approximation is given if <span class=\"math-container\">$x$</span> is close to <span class=\"math-container\">$0$</span>. If <span class=\"math-container\">$x$</span> is close to zero we will also fulfill the convergence condition. <span class=\"math-container\">$x$</span> close to zero means that in (3) we have to choose <span class=\"math-container\">$y$</span> so that\n<span class=\"math-container\">$\ny^2\n$</span> \nis close to <span class=\"math-container\">$\\frac{1}{2}$</span>. We have already (1) and (3) appropriately considered. Now we want to find small natural numbers <span class=\"math-container\">$a,b$</span> so that<br>\n<span class=\"math-container\">\\begin{align*}\ny^2=\\frac{a^2}{b^2}\\approx \\frac{1}{2}\n\\end{align*}</span>\nThis can be done easily. 
When going through small numbers of <span class=\"math-container\">$a$</span> and <span class=\"math-container\">$b$</span> whose squares are apart by a factor <span class=\"math-container\">$2$</span> we might quickly come to <span class=\"math-container\">$100$</span> and <span class=\"math-container\">$49$</span>. These are two small squares and we have <span class=\"math-container\">$2\\cdot 49=98$</span> close to <span class=\"math-container\">$100$</span>. That's all.</p>\n\n<blockquote>\n <p>Now it's time to harvest. We choose <span class=\"math-container\">$y^2=\\frac{49}{100}$</span> resp. <span class=\"math-container\">$\\color{blue}{y=\\frac{7}{10}}$</span>. We obtain for <span class=\"math-container\">$x$</span> from (3)\n <span class=\"math-container\">\\begin{align*}\nx=\\frac{1}{2}-y^2=\\frac{1}{2}-\\frac{49}{100}=\\frac{1}{100}\n\\end{align*}</span>\n We have now a nice value <span class=\"math-container\">$\\color{blue}{x=\\frac{1}{100}}$</span> and we finally get from (1) the approximation:\n <span class=\"math-container\">\\begin{align*}\n\\color{blue}{\\sqrt{2}} \\approx \\frac{10}{7}\\left(1- 10^{-2}-\\frac{1}{2}\\cdot 10^{-4}-\\frac{1}{2}\\cdot 10^{-6}\\right)\\color{blue}{=1.414\\,213\\,5}71\\ldots\n\\end{align*}</span></p>\n \n <p>We have <span class=\"math-container\">$\\color{blue}{\\sqrt{2}=1.414\\,213\\,5}62\\ldots$</span> with an approximation error <span class=\"math-container\">$\\approx 9.055\\times 10^{-9}$</span>. This result is quite impressive when considering that we have used just four terms of the binomial series.</p>\n</blockquote>\n\n<p><em>Note:</em> In a section about binomial series expansion in <em><a href=\"https://rads.stackoverflow.com/amzn/click/com/014014739X\" rel=\"noreferrer\" rel=\"nofollow noreferrer\">Journey through Genius</a></em> by W. 
Dunham the author cites Newton: <em>Extraction of roots are much shortened by this theorem</em>, indicating how valuable this technique was for Newton.</p>\n" }, { "answer_id": 2093850, "score": 5, "text": "<p>Personally, I wouldn't have done it that way. So here is how I would've done it:</p>\n\n<h2>Method 1:</h2>\n\n<p>$$\\sqrt2=\\sqrt{1+1}=1+\\frac12-\\frac18+\\dots\\approx1+\\frac12-\\frac18=\\frac{11}8=1.375$$</p>\n\n<p>which is much clearer to me, since it avoids having to take decimals raised to powers and gives you something you can easily do by hand.</p>\n\n<p>$$1.375^2=1.890625$$</p>\n\n<p>Obviously it approaches the correct value as you take more terms.</p>\n\n<hr>\n\n<h2>Method 2:</h2>\n\n<p>This is called fixed-point iteration/Newton's method, and it basically goes like this:</p>\n\n<p>$$x=\\sqrt2\\implies x^2=2$$</p>\n\n<p>$$2x^2=2+x^2$$</p>\n\n<p>Divide both sides by $2x$ and we get</p>\n\n<p>$$x=\\frac{2+x^2}{2x}$$</p>\n\n<p>Now, interestingly, I'm going to call the $x$'s on the left $x_{n+1}$ and the $x$'s on the right $x_n$, so</p>\n\n<p>$$x_{n+1}=\\frac{2+(x_n)^2}{2x_n}$$</p>\n\n<p>and with $x_0\\approx\\sqrt2$, we will then have $x=\\lim_{n\\to\\infty}x_n$. For example, with $x_0=1$,</p>\n\n<p>$x_0=1$</p>\n\n<p>$x_1=\\frac{2+1^2}{2(1)}=\\frac32=1.5$</p>\n\n<p>$x_2=\\frac{2+(3/2)^2}{2(3/2)}=\\frac{17}{12}=1.41666\\dots$</p>\n\n<p>$x_3=\\dots=\\frac{577}{408}=1.414215686$</p>\n\n<p>And one can quickly check that $(x_3)^2=2.000006007\\dots$, which is pretty much the square root of $2$.</p>\n" }, { "answer_id": 4741401, "score": 0, "text": "<p>It is often easier to begin with a slightly different problem to simplify slightly. 
Stay with me and you will see why.</p>\n<p>Instead of finding the square root of 2, look at the square root of 200 and note that:</p>\n<p>200 = 14² + 4</p>\n<p>This is part of a general approach that says if N = a² + b, where b &lt;&lt; a, then we may write:</p>\n<p><span class=\"math-container\">$$\\sqrt N = a\\left(1 + \\frac b {a^2} \\right)^{1/2} = a[1 + \\frac b {2a^2} - \\frac {b^2} {8a^4} + \\frac {b^3} {16a^6} - ... ] = a + \\frac b {2a} - \\frac {b^2} {8a^3} + ...$$</span></p>\n<p>Now, if we use N = 200, a = 14, b = 4, the first two terms give:</p>\n<p><span class=\"math-container\">$$ \\sqrt {200} = 14\\frac 1 7 $$</span></p>\n<p>That is <span class=\"math-container\">$ \\sqrt 2 = 1.4142857...$</span> which is correct to 5 significant figures. By including the next term, we get <span class=\"math-container\">$\\sqrt {200} = 14 + \\frac 1 7 - \\frac 1 {1372}$</span>. This gives the superb result:</p>\n<p><span class=\"math-container\">$ \\sqrt 2 = 1.414212827...$</span>, which is correct to 6 significant figures.</p>\n<p><strong>APPENDIX - Another example of</strong> <span class=\"math-container\">$\\sqrt{13}$</span></p>\n<p>Using the same approach and only the first two terms, we note that 36² = 1296. Thus, <span class=\"math-container\">$\\sqrt{1300} = 36\\frac 1 {18}$</span>. 
This gives:</p>\n<p><span class=\"math-container\">$\\sqrt{13} = 3.60555 $</span>, correct to 6 significant figures.</p>\n" }, { "answer_id": 2728435, "score": -1, "text": "<p>In fact you can take any two numbers which add to give 2 (not necessarily via $x = 0.01$), but you should know the root of at least one of them.</p>\n\n<p>So for example $\\sqrt{2} = {(1+1)^{1/2}}$ </p>\n\n<p>Now all you need is to expand it using the binomial theorem, and for 2 terms you'll get 1.5:</p>\n\n<p>$1+(0.5)\\cdot 1= 1.5, $\nwhich is not too bad an approximation for $\\sqrt{2} = 1.4142135624 $</p>\n" } ]
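The fixed-point iteration from Method 2 above can be run directly. A minimal sketch:

```python
# Fixed-point (Newton) iteration for sqrt(2): x_{n+1} = (2 + x_n^2) / (2 * x_n)
x = 1.0
for n in range(4):
    x = (2 + x * x) / (2 * x)
    print(n + 1, x)

# The third iterate is 577/408 = 1.4142156862..., matching the hand
# computation in the answer; the fourth is already accurate to ~12 digits.
```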
We obtain from (2)\n<span class=\"math-container\">\\begin{align*}\n1-2x&amp;=2y^2\\\\\n\\color{blue}{x}&amp;\\color{blue}{=\\frac{1}{2}-y^2}\\tag{3}\n\\end{align*}</span></p>\n\n<blockquote>\n <p>When looking for an appropriate <span class=\"math-container\">$y$</span> which fulfills (3) there are some aspects to consider:</p>\n \n <ul>\n <li><p>We have to respect the radius of convergence <span class=\"math-container\">$|x|&lt;\\frac{1}{2}$</span>.</p></li>\n <li><p>Since we want to calculate an approximation of <span class=\"math-container\">$\\sqrt{2}$</span> by hand we should take <span class=\"math-container\">$y\\in\\mathbb{Q}$</span> with rather small numbers as numerator and denominator.</p></li>\n <li><p>Last but not least: We want to find a value <span class=\"math-container\">$x$</span> which provides a good approximation for <span class=\"math-container\">$\\sqrt{2}$</span>.</p></li>\n </ul>\n \n <p>We will see it's not hard to find values which have these properties.</p>\n</blockquote>\n\n<p>We see in (1) a good approximation is given if <span class=\"math-container\">$x$</span> is close to <span class=\"math-container\">$0$</span>. If <span class=\"math-container\">$x$</span> is close to zero we will also fulfill the convergence condition. <span class=\"math-container\">$x$</span> close to zero means that in (3) we have to choose <span class=\"math-container\">$y$</span> so that\n<span class=\"math-container\">$\ny^2\n$</span> \nis close to <span class=\"math-container\">$\\frac{1}{2}$</span>. We have already (1) and (3) appropriately considered. Now we want to find small natural numbers <span class=\"math-container\">$a,b$</span> so that<br>\n<span class=\"math-container\">\\begin{align*}\ny^2=\\frac{a^2}{b^2}\\approx \\frac{1}{2}\n\\end{align*}</span>\nThis can be done easily. 
When going through small numbers of <span class=\"math-container\">$a$</span> and <span class=\"math-container\">$b$</span> whose squares are apart by a factor <span class=\"math-container\">$2$</span> we might quickly come to <span class=\"math-container\">$100$</span> and <span class=\"math-container\">$49$</span>. These are two small squares and we have <span class=\"math-container\">$2\\cdot 49=98$</span> close to <span class=\"math-container\">$100$</span>. That's all.</p>\n\n<blockquote>\n <p>Now it's time to harvest. We choose <span class=\"math-container\">$y^2=\\frac{49}{100}$</span> resp. <span class=\"math-container\">$\\color{blue}{y=\\frac{7}{10}}$</span>. We obtain for <span class=\"math-container\">$x$</span> from (3)\n <span class=\"math-container\">\\begin{align*}\nx=\\frac{1}{2}-y^2=\\frac{1}{2}-\\frac{49}{100}=\\frac{1}{100}\n\\end{align*}</span>\n We have now a nice value <span class=\"math-container\">$\\color{blue}{x=\\frac{1}{100}}$</span> and we finally get from (1) the approximation:\n <span class=\"math-container\">\\begin{align*}\n\\color{blue}{\\sqrt{2}} \\approx \\frac{10}{7}\\left(1- 10^{-2}-\\frac{1}{2}\\cdot 10^{-4}-\\frac{1}{2}\\cdot 10^{-6}\\right)\\color{blue}{=1.414\\,213\\,5}71\\ldots\n\\end{align*}</span></p>\n \n <p>We have <span class=\"math-container\">$\\color{blue}{\\sqrt{2}=1.414\\,213\\,5}62\\ldots$</span> with an approximation error <span class=\"math-container\">$\\approx 9.055\\times 10^{-9}$</span>. This result is quite impressive when considering that we have used just four terms of the binomial series.</p>\n</blockquote>\n\n<p><em>Note:</em> In a section about binomial series expansion in <em><a href=\"https://rads.stackoverflow.com/amzn/click/com/014014739X\" rel=\"noreferrer\" rel=\"nofollow noreferrer\">Journey through Genius</a></em> by W. 
Dunham the author cites Newton: <em>Extraction of roots are much shortened by this theorem</em>, indicating how valuable this technique was for Newton.</p>\n" }, { "answer_id": 2093850, "score": 5, "text": "<p>Personally, I wouldn't have done it that way. So here is how I would've done it:</p>\n\n<h2>Method 1:</h2>\n\n<p>$$\\sqrt2=\\sqrt{1+1}=1+\\frac12-\\frac18+\\dots\\approx1+\\frac12-\\frac18=\\frac{11}8=1.375$$</p>\n\n<p>which is much clearer to me, since it avoids having to take decimals raised to powers and gives you something you can easily do by hand.</p>\n\n<p>$$1.375^2=1.890625$$</p>\n\n<p>Obviously it approaches the correct value as you take more terms.</p>\n\n<hr>\n\n<h2>Method 2:</h2>\n\n<p>This is called fixed-point iteration/Newton's method, and it basically goes like this:</p>\n\n<p>$$x=\\sqrt2\\implies x^2=2$$</p>\n\n<p>$$2x^2=2+x^2$$</p>\n\n<p>Divide both sides by $2x$ and we get</p>\n\n<p>$$x=\\frac{2+x^2}{2x}$$</p>\n\n<p>Now, interestingly, I'm going to call the $x$'s on the left $x_{n+1}$ and the $x$'s on the right $x_n$, so</p>\n\n<p>$$x_{n+1}=\\frac{2+(x_n)^2}{2x_n}$$</p>\n\n<p>and with $x_0\\approx\\sqrt2$, we will then have $x=\\lim_{n\\to\\infty}x_n$. For example, with $x_0=1$,</p>\n\n<p>$x_0=1$</p>\n\n<p>$x_1=\\frac{2+1^2}{2(1)}=\\frac32=1.5$</p>\n\n<p>$x_2=\\frac{2+(3/2)^2}{2(3/2)}=\\frac{17}{12}=1.41666\\dots$</p>\n\n<p>$x_3=\\dots=\\frac{577}{408}=1.414215686$</p>\n\n<p>And one can quickly check that $(x_3)^2=2.000006007\\dots$, which is pretty much the square root of $2$.</p>\n" }, { "answer_id": 4741401, "score": 0, "text": "<p>It is often easier to begin with a slightly different problem to simplify slightly. 
Stay with me and you will see why.</p>\n<p>Instead of finding the square root of 2, look at the square root of 200 and note that:</p>\n<p>200 = 14² + 4</p>\n<p>This is part of a general approach that says if N = a² + b, where b &lt;&lt; a, then we may write:</p>\n<p><span class=\"math-container\">$$\\sqrt N = a(1 + \\frac b {a^2} ) =a[1 + \\frac b {2a^2} - \\frac {b^2} {8a^4} + \\frac {b^3} {16a^6} - ... ] = a + \\frac b {2a} - \\frac {b^2} {8a^3} + ...$$</span></p>\n<p>Now, if we use N = 200, a = 14, b = 4, we get the first two terms give:</p>\n<p><span class=\"math-container\">$$ \\sqrt 200 = 14\\frac 1 7 $$</span></p>\n<p>That is <span class=\"math-container\">$ \\sqrt 2 = 1.4142857...$</span> which is correct to 5 significant figures. By including the next term, we get <span class=\"math-container\">$\\sqrt 200 = 14 + \\frac 1 7 - \\frac 1 {1372}$</span>. This gives the superb result:</p>\n<p><span class=\"math-container\">$ \\sqrt 2 = 1.414212827...$</span>, which is correct to 6 significant figures.</p>\n<p><strong>APPENDIX - Another example of</strong> <span class=\"math-container\">$\\sqrt{13}$</span></p>\n<p>Using the same approach and only the first two terms, we note that 36² = 1296. Thus, <span class=\"math-container\">$\\sqrt{1300} = 36\\frac 1 {18} = $</span>. 
This gives:</p>\n<p><span class=\"math-container\">$\\sqrt{13} = 3.60555 $</span>, correct to 6 significant figures.</p>\n" }, { "answer_id": 2728435, "score": -1, "text": "<p>In fact you can take any two numbers which can be added to get 2 (not nesserly 0.01 but at least you should know the root of one of them </p>\n\n<p>So for example $\\sqrt{2} = {(1+1)^{1/2}}$ </p>\n\n<p>Know all what you need is to expand it using bio theorem and for 2 terms you ll get 1.5</p>\n\n<p>$1+(0.5)\\cdot 1= 1.5 $\nWhich is not too bad approximation for $\\sqrt{2} = 1.4142135624 $</p>\n" }, { "answer_id": 3198245, "score": 8, "text": "<blockquote>\n <p>We want to (manually) approximate <span class=\"math-container\">$\\sqrt{2}$</span> by using the first few terms of the binomial series expansion of \n <span class=\"math-container\">\\begin{align*}\n\\sqrt{1-2x}&amp;= \\sum_{n=0}^\\infty \\binom{\\frac{1}{2}}{n}(-2x)^n\\qquad\\qquad\\qquad\\qquad |x|&lt;\\frac{1}{2}\\\\\n&amp;= 1-x-\\frac{1}{2}x^2-\\frac{1}{2}x^3+\\cdots\\tag{1}\n\\end{align*}</span>\n Here we look for a way to determine appropriate values of <span class=\"math-container\">$x$</span> using the binomial expansion.</p>\n</blockquote>\n\n<p>In order to apply (1) we are looking for a number <span class=\"math-container\">$y$</span> with\n<span class=\"math-container\">\\begin{align*}\n\\sqrt{1-2x}&amp;=\\sqrt{2y^2}=y\\sqrt{2}\\tag{2}\\\\\n\\color{blue}{\\sqrt{2}}&amp;\\color{blue}{=\\frac{1}{y}\\sqrt{1-2x}}\n\\end{align*}</span></p>\n\n<p>We see it is convenient to choose <span class=\"math-container\">$y$</span> to be a square number which can be easily factored out from the root. 
We obtain from (2)\n<span class=\"math-container\">\\begin{align*}\n1-2x&amp;=2y^2\\\\\n\\color{blue}{x}&amp;\\color{blue}{=\\frac{1}{2}-y^2}\\tag{3}\n\\end{align*}</span></p>\n\n<blockquote>\n <p>When looking for an appropriate <span class=\"math-container\">$y$</span> which fulfills (3) there are some aspects to consider:</p>\n \n <ul>\n <li><p>We have to respect the radius of convergence <span class=\"math-container\">$|x|&lt;\\frac{1}{2}$</span>.</p></li>\n <li><p>Since we want to calculate an approximation of <span class=\"math-container\">$\\sqrt{2}$</span> by hand we should take <span class=\"math-container\">$y\\in\\mathbb{Q}$</span> with rather small numbers as numerator and denominator.</p></li>\n <li><p>Last but not least: We want to find a value <span class=\"math-container\">$x$</span> which provides a good approximation for <span class=\"math-container\">$\\sqrt{2}$</span>.</p></li>\n </ul>\n \n <p>We will see it's not hard to find values which have these properties.</p>\n</blockquote>\n\n<p>We see in (1) a good approximation is given if <span class=\"math-container\">$x$</span> is close to <span class=\"math-container\">$0$</span>. If <span class=\"math-container\">$x$</span> is close to zero we will also fulfill the convergence condition. <span class=\"math-container\">$x$</span> close to zero means that in (3) we have to choose <span class=\"math-container\">$y$</span> so that\n<span class=\"math-container\">$\ny^2\n$</span> \nis close to <span class=\"math-container\">$\\frac{1}{2}$</span>. We have already (1) and (3) appropriately considered. Now we want to find small natural numbers <span class=\"math-container\">$a,b$</span> so that<br>\n<span class=\"math-container\">\\begin{align*}\ny^2=\\frac{a^2}{b^2}\\approx \\frac{1}{2}\n\\end{align*}</span>\nThis can be done easily. 
When going through small numbers of <span class=\"math-container\">$a$</span> and <span class=\"math-container\">$b$</span> whose squares are apart by a factor <span class=\"math-container\">$2$</span> we might quickly come to <span class=\"math-container\">$100$</span> and <span class=\"math-container\">$49$</span>. These are two small squares and we have <span class=\"math-container\">$2\\cdot 49=98$</span> close to <span class=\"math-container\">$100$</span>. That's all.</p>\n\n<blockquote>\n <p>Now it's time to harvest. We choose <span class=\"math-container\">$y^2=\\frac{49}{100}$</span> resp. <span class=\"math-container\">$\\color{blue}{y=\\frac{7}{10}}$</span>. We obtain for <span class=\"math-container\">$x$</span> from (3)\n <span class=\"math-container\">\\begin{align*}\nx=\\frac{1}{2}-y^2=\\frac{1}{2}-\\frac{49}{100}=\\frac{1}{100}\n\\end{align*}</span>\n We have now a nice value <span class=\"math-container\">$\\color{blue}{x=\\frac{1}{100}}$</span> and we finally get from (1) the approximation:\n <span class=\"math-container\">\\begin{align*}\n\\color{blue}{\\sqrt{2}} \\approx \\frac{10}{7}\\left(1- 10^{-2}-\\frac{1}{2}\\cdot 10^{-4}-\\frac{1}{2}\\cdot 10^{-6}\\right)\\color{blue}{=1.414\\,213\\,5}71\\ldots\n\\end{align*}</span></p>\n \n <p>We have <span class=\"math-container\">$\\color{blue}{\\sqrt{2}=1.414\\,213\\,5}62\\ldots$</span> with an approximation error <span class=\"math-container\">$\\approx 9.055\\times 10^{-9}$</span>. This result is quite impressive when considering that we have used just four terms of the binomial series.</p>\n</blockquote>\n\n<p><em>Note:</em> In a section about binomial series expansion in <em><a href=\"https://rads.stackoverflow.com/amzn/click/com/014014739X\" rel=\"noreferrer\" rel=\"nofollow noreferrer\">Journey through Genius</a></em> by W. 
Dunham the author cites Newton: <em>Extraction of roots are much shortened by this theorem</em>, indicating how valuable this technique was for Newton.</p>\n" }, { "answer_id": 2093850, "score": 5, "text": "<p>Personally, I wouldn't have done it that way. So here is how I would've done it:</p>\n\n<h2>Method 1:</h2>\n\n<p>$$\\sqrt2=\\sqrt{1+1}=1+\\frac12-\\frac18+\\dots\\approx1+\\frac12-\\frac18=\\frac{11}8=1.375$$</p>\n\n<p>which is much clearer to me, since it avoids having to take decimals raised to powers and gives you something you can easily do by hand.</p>\n\n<p>$$1.375^2=1.890625$$</p>\n\n<p>Obviously it approaches the correct value as you take more terms.</p>\n\n<hr>\n\n<h2>Method 2:</h2>\n\n<p>This is called fixed-point iteration/Newton's method, and it basically goes like this:</p>\n\n<p>$$x=\\sqrt2\\implies x^2=2$$</p>\n\n<p>$$2x^2=2+x^2$$</p>\n\n<p>Divide both sides by $2x$ and we get</p>\n\n<p>$$x=\\frac{2+x^2}{2x}$$</p>\n\n<p>Now, interestingly, I'm going to call the $x$'s on the left $x_{n+1}$ and the $x$'s on the right $x_n$, so</p>\n\n<p>$$x_{n+1}=\\frac{2+(x_n)^2}{2x_n}$$</p>\n\n<p>and with $x_0\\approx\\sqrt2$, we will then have $x=\\lim_{n\\to\\infty}x_n$. For example, with $x_0=1$,</p>\n\n<p>$x_0=1$</p>\n\n<p>$x_1=\\frac{2+1^2}{2(1)}=\\frac32=1.5$</p>\n\n<p>$x_2=\\frac{2+(3/2)^2}{2(3/2)}=\\frac{17}{12}=1.41666\\dots$</p>\n\n<p>$x_3=\\dots=\\frac{577}{408}=1.414215686$</p>\n\n<p>And one can quickly check that $(x_3)^2=2.000006007\\dots$, which is pretty much the square root of $2$.</p>\n" }, { "answer_id": 4741401, "score": 0, "text": "<p>It is often easier to begin with a slightly different problem to simplify slightly. 
Stay with me and you will see why.</p>\n<p>Instead of finding the square root of 2, look at the square root of 200 and note that:</p>\n<p>200 = 14² + 4</p>\n<p>This is part of a general approach that says if N = a² + b, where b &lt;&lt; a, then we may write:</p>\n<p><span class=\"math-container\">$$\\sqrt N = a(1 + \\frac b {a^2} ) =a[1 + \\frac b {2a^2} - \\frac {b^2} {8a^4} + \\frac {b^3} {16a^6} - ... ] = a + \\frac b {2a} - \\frac {b^2} {8a^3} + ...$$</span></p>\n<p>Now, if we use N = 200, a = 14, b = 4, we get the first two terms give:</p>\n<p><span class=\"math-container\">$$ \\sqrt 200 = 14\\frac 1 7 $$</span></p>\n<p>That is <span class=\"math-container\">$ \\sqrt 2 = 1.4142857...$</span> which is correct to 5 significant figures. By including the next term, we get <span class=\"math-container\">$\\sqrt 200 = 14 + \\frac 1 7 - \\frac 1 {1372}$</span>. This gives the superb result:</p>\n<p><span class=\"math-container\">$ \\sqrt 2 = 1.414212827...$</span>, which is correct to 6 significant figures.</p>\n<p><strong>APPENDIX - Another example of</strong> <span class=\"math-container\">$\\sqrt{13}$</span></p>\n<p>Using the same approach and only the first two terms, we note that 36² = 1296. Thus, <span class=\"math-container\">$\\sqrt{1300} = 36\\frac 1 {18} = $</span>. 
This gives:</p>\n<p><span class=\"math-container\">$\\sqrt{13} = 3.60555 $</span>, correct to 6 significant figures.</p>\n" }, { "answer_id": 2728435, "score": -1, "text": "<p>In fact you can take any two numbers which can be added to get 2 (not nesserly 0.01 but at least you should know the root of one of them </p>\n\n<p>So for example $\\sqrt{2} = {(1+1)^{1/2}}$ </p>\n\n<p>Know all what you need is to expand it using bio theorem and for 2 terms you ll get 1.5</p>\n\n<p>$1+(0.5)\\cdot 1= 1.5 $\nWhich is not too bad approximation for $\\sqrt{2} = 1.4142135624 $</p>\n" }, { "answer_id": 3198245, "score": 8, "text": "<blockquote>\n <p>We want to (manually) approximate <span class=\"math-container\">$\\sqrt{2}$</span> by using the first few terms of the binomial series expansion of \n <span class=\"math-container\">\\begin{align*}\n\\sqrt{1-2x}&amp;= \\sum_{n=0}^\\infty \\binom{\\frac{1}{2}}{n}(-2x)^n\\qquad\\qquad\\qquad\\qquad |x|&lt;\\frac{1}{2}\\\\\n&amp;= 1-x-\\frac{1}{2}x^2-\\frac{1}{2}x^3+\\cdots\\tag{1}\n\\end{align*}</span>\n Here we look for a way to determine appropriate values of <span class=\"math-container\">$x$</span> using the binomial expansion.</p>\n</blockquote>\n\n<p>In order to apply (1) we are looking for a number <span class=\"math-container\">$y$</span> with\n<span class=\"math-container\">\\begin{align*}\n\\sqrt{1-2x}&amp;=\\sqrt{2y^2}=y\\sqrt{2}\\tag{2}\\\\\n\\color{blue}{\\sqrt{2}}&amp;\\color{blue}{=\\frac{1}{y}\\sqrt{1-2x}}\n\\end{align*}</span></p>\n\n<p>We see it is convenient to choose <span class=\"math-container\">$y$</span> to be a square number which can be easily factored out from the root. 
We obtain from (2)\n<span class=\"math-container\">\\begin{align*}\n1-2x&amp;=2y^2\\\\\n\\color{blue}{x}&amp;\\color{blue}{=\\frac{1}{2}-y^2}\\tag{3}\n\\end{align*}</span></p>\n\n<blockquote>\n <p>When looking for an appropriate <span class=\"math-container\">$y$</span> which fulfills (3) there are some aspects to consider:</p>\n \n <ul>\n <li><p>We have to respect the radius of convergence <span class=\"math-container\">$|x|&lt;\\frac{1}{2}$</span>.</p></li>\n <li><p>Since we want to calculate an approximation of <span class=\"math-container\">$\\sqrt{2}$</span> by hand we should take <span class=\"math-container\">$y\\in\\mathbb{Q}$</span> with rather small numbers as numerator and denominator.</p></li>\n <li><p>Last but not least: We want to find a value <span class=\"math-container\">$x$</span> which provides a good approximation for <span class=\"math-container\">$\\sqrt{2}$</span>.</p></li>\n </ul>\n \n <p>We will see it's not hard to find values which have these properties.</p>\n</blockquote>\n\n<p>We see in (1) a good approximation is given if <span class=\"math-container\">$x$</span> is close to <span class=\"math-container\">$0$</span>. If <span class=\"math-container\">$x$</span> is close to zero we will also fulfill the convergence condition. <span class=\"math-container\">$x$</span> close to zero means that in (3) we have to choose <span class=\"math-container\">$y$</span> so that\n<span class=\"math-container\">$\ny^2\n$</span> \nis close to <span class=\"math-container\">$\\frac{1}{2}$</span>. We have already (1) and (3) appropriately considered. Now we want to find small natural numbers <span class=\"math-container\">$a,b$</span> so that<br>\n<span class=\"math-container\">\\begin{align*}\ny^2=\\frac{a^2}{b^2}\\approx \\frac{1}{2}\n\\end{align*}</span>\nThis can be done easily. 
When going through small numbers of <span class=\"math-container\">$a$</span> and <span class=\"math-container\">$b$</span> whose squares differ by a factor of <span class=\"math-container\">$2$</span> we might quickly come to <span class=\"math-container\">$100$</span> and <span class=\"math-container\">$49$</span>. These are two small squares and we have <span class=\"math-container\">$2\\cdot 49=98$</span> close to <span class=\"math-container\">$100$</span>. That's all.</p>\n\n<blockquote>\n <p>Now it's time to harvest. We choose <span class=\"math-container\">$y^2=\\frac{49}{100}$</span>, i.e. <span class=\"math-container\">$\\color{blue}{y=\\frac{7}{10}}$</span>. We obtain for <span class=\"math-container\">$x$</span> from (3)\n <span class=\"math-container\">\\begin{align*}\nx=\\frac{1}{2}-y^2=\\frac{1}{2}-\\frac{49}{100}=\\frac{1}{100}\n\\end{align*}</span>\n We now have a nice value <span class=\"math-container\">$\\color{blue}{x=\\frac{1}{100}}$</span> and we finally get from (1) the approximation:\n <span class=\"math-container\">\\begin{align*}\n\\color{blue}{\\sqrt{2}} \\approx \\frac{10}{7}\\left(1- 10^{-2}-\\frac{1}{2}\\cdot 10^{-4}-\\frac{1}{2}\\cdot 10^{-6}\\right)\\color{blue}{=1.414\\,213\\,5}71\\ldots\n\\end{align*}</span></p>\n \n <p>We have <span class=\"math-container\">$\\color{blue}{\\sqrt{2}=1.414\\,213\\,5}62\\ldots$</span> with an approximation error <span class=\"math-container\">$\\approx 9.055\\times 10^{-9}$</span>. This result is quite impressive considering that we have used just four terms of the binomial series.</p>\n</blockquote>\n\n<p><em>Note:</em> In a section about binomial series expansion in <em><a href=\"https://rads.stackoverflow.com/amzn/click/com/014014739X\" rel=\"nofollow noreferrer\">Journey through Genius</a></em> by W. 
Dunham, the author cites Newton: <em>Extraction of roots are much shortened by this theorem</em>, indicating how valuable this technique was for Newton.</p>\n" }, { "answer_id": 2093850, "score": 5, "text": "<p>Personally, I wouldn't have done it that way. So here is how I would've done it:</p>\n\n<h2>Method 1:</h2>\n\n<p>$$\\sqrt2=\\sqrt{1+1}=1+\\frac12-\\frac18+\\dots\\approx1+\\frac12-\\frac18=\\frac{11}8=1.375$$</p>\n\n<p>which is much clearer to me, since it avoids having to take decimals raised to powers and gives you something you can easily do by hand.</p>\n\n<p>$$1.375^2=1.890625$$</p>\n\n<p>Obviously it approaches the correct value as you take more terms.</p>\n\n<hr>\n\n<h2>Method 2:</h2>\n\n<p>This is called fixed-point iteration/Newton's method, and it basically goes like this:</p>\n\n<p>$$x=\\sqrt2\\implies x^2=2$$</p>\n\n<p>$$2x^2=2+x^2$$</p>\n\n<p>Divide both sides by $2x$ and we get</p>\n\n<p>$$x=\\frac{2+x^2}{2x}$$</p>\n\n<p>Now, interestingly, I'm going to call the $x$'s on the left $x_{n+1}$ and the $x$'s on the right $x_n$, so</p>\n\n<p>$$x_{n+1}=\\frac{2+(x_n)^2}{2x_n}$$</p>\n\n<p>and with $x_0\\approx\\sqrt2$, we will then have $x=\\lim_{n\\to\\infty}x_n$. For example, with $x_0=1$,</p>\n\n<p>$x_0=1$</p>\n\n<p>$x_1=\\frac{2+1^2}{2(1)}=\\frac32=1.5$</p>\n\n<p>$x_2=\\frac{2+(3/2)^2}{2(3/2)}=\\frac{17}{12}=1.41666\\dots$</p>\n\n<p>$x_3=\\dots=\\frac{577}{408}=1.414215686$</p>\n\n<p>And one can quickly check that $(x_3)^2=2.000006007\\dots$, so $x_3$ is pretty much the square root of $2$.</p>\n" }, { "answer_id": 4741401, "score": 0, "text": "<p>It is often easier to begin with a slightly different problem in order to simplify the arithmetic. 
Stay with me and you will see why.</p>\n<p>Instead of finding the square root of 2, look at the square root of 200 and note that:</p>\n<p>200 = 14² + 4</p>\n<p>This is part of a general approach that says if N = a² + b, where b &lt;&lt; a², then we may write:</p>\n<p><span class=\"math-container\">$$\\sqrt N = a\\left(1 + \\frac b {a^2} \\right)^{1/2} =a\\left[1 + \\frac b {2a^2} - \\frac {b^2} {8a^4} + \\frac {b^3} {16a^6} - \\cdots \\right] = a + \\frac b {2a} - \\frac {b^2} {8a^3} + \\cdots$$</span></p>\n<p>Now, if we use N = 200, a = 14, b = 4, the first two terms give:</p>\n<p><span class=\"math-container\">$$ \\sqrt {200} \\approx 14\\frac 1 7 $$</span></p>\n<p>That is <span class=\"math-container\">$ \\sqrt 2 = 1.4142857...$</span> which is correct to 5 significant figures. By including the next term, we get <span class=\"math-container\">$\\sqrt {200} \\approx 14 + \\frac 1 7 - \\frac 1 {1372}$</span>. This gives the superb result:</p>\n<p><span class=\"math-container\">$ \\sqrt 2 = 1.414212827...$</span>, which is correct to 6 significant figures.</p>\n<p><strong>APPENDIX - Another example of</strong> <span class=\"math-container\">$\\sqrt{13}$</span></p>\n<p>Using the same approach and only the first two terms, we note that 36² = 1296. Thus, <span class=\"math-container\">$\\sqrt{1300} \\approx 36\\frac 1 {18}$</span>. 
This gives:</p>\n<p><span class=\"math-container\">$\\sqrt{13} = 3.60555 $</span>, correct to 6 significant figures.</p>\n" }, { "answer_id": 2728435, "score": -1, "text": "<p>In fact you can take any two numbers which can be added to get 2 (not nesserly 0.01 but at least you should know the root of one of them </p>\n\n<p>So for example $\\sqrt{2} = {(1+1)^{1/2}}$ </p>\n\n<p>Know all what you need is to expand it using bio theorem and for 2 terms you ll get 1.5</p>\n\n<p>$1+(0.5)\\cdot 1= 1.5 $\nWhich is not too bad approximation for $\\sqrt{2} = 1.4142135624 $</p>\n" }, { "answer_id": 3198245, "score": 8, "text": "<blockquote>\n <p>We want to (manually) approximate <span class=\"math-container\">$\\sqrt{2}$</span> by using the first few terms of the binomial series expansion of \n <span class=\"math-container\">\\begin{align*}\n\\sqrt{1-2x}&amp;= \\sum_{n=0}^\\infty \\binom{\\frac{1}{2}}{n}(-2x)^n\\qquad\\qquad\\qquad\\qquad |x|&lt;\\frac{1}{2}\\\\\n&amp;= 1-x-\\frac{1}{2}x^2-\\frac{1}{2}x^3+\\cdots\\tag{1}\n\\end{align*}</span>\n Here we look for a way to determine appropriate values of <span class=\"math-container\">$x$</span> using the binomial expansion.</p>\n</blockquote>\n\n<p>In order to apply (1) we are looking for a number <span class=\"math-container\">$y$</span> with\n<span class=\"math-container\">\\begin{align*}\n\\sqrt{1-2x}&amp;=\\sqrt{2y^2}=y\\sqrt{2}\\tag{2}\\\\\n\\color{blue}{\\sqrt{2}}&amp;\\color{blue}{=\\frac{1}{y}\\sqrt{1-2x}}\n\\end{align*}</span></p>\n\n<p>We see it is convenient to choose <span class=\"math-container\">$y$</span> to be a square number which can be easily factored out from the root. 
We obtain from (2)\n<span class=\"math-container\">\\begin{align*}\n1-2x&amp;=2y^2\\\\\n\\color{blue}{x}&amp;\\color{blue}{=\\frac{1}{2}-y^2}\\tag{3}\n\\end{align*}</span></p>\n\n<blockquote>\n <p>When looking for an appropriate <span class=\"math-container\">$y$</span> which fulfills (3) there are some aspects to consider:</p>\n \n <ul>\n <li><p>We have to respect the radius of convergence <span class=\"math-container\">$|x|&lt;\\frac{1}{2}$</span>.</p></li>\n <li><p>Since we want to calculate an approximation of <span class=\"math-container\">$\\sqrt{2}$</span> by hand we should take <span class=\"math-container\">$y\\in\\mathbb{Q}$</span> with rather small numbers as numerator and denominator.</p></li>\n <li><p>Last but not least: We want to find a value <span class=\"math-container\">$x$</span> which provides a good approximation for <span class=\"math-container\">$\\sqrt{2}$</span>.</p></li>\n </ul>\n \n <p>We will see it's not hard to find values which have these properties.</p>\n</blockquote>\n\n<p>We see in (1) a good approximation is given if <span class=\"math-container\">$x$</span> is close to <span class=\"math-container\">$0$</span>. If <span class=\"math-container\">$x$</span> is close to zero we will also fulfill the convergence condition. <span class=\"math-container\">$x$</span> close to zero means that in (3) we have to choose <span class=\"math-container\">$y$</span> so that\n<span class=\"math-container\">$\ny^2\n$</span> \nis close to <span class=\"math-container\">$\\frac{1}{2}$</span>. We have already (1) and (3) appropriately considered. Now we want to find small natural numbers <span class=\"math-container\">$a,b$</span> so that<br>\n<span class=\"math-container\">\\begin{align*}\ny^2=\\frac{a^2}{b^2}\\approx \\frac{1}{2}\n\\end{align*}</span>\nThis can be done easily. 
When going through small numbers of <span class=\"math-container\">$a$</span> and <span class=\"math-container\">$b$</span> whose squares are apart by a factor <span class=\"math-container\">$2$</span> we might quickly come to <span class=\"math-container\">$100$</span> and <span class=\"math-container\">$49$</span>. These are two small squares and we have <span class=\"math-container\">$2\\cdot 49=98$</span> close to <span class=\"math-container\">$100$</span>. That's all.</p>\n\n<blockquote>\n <p>Now it's time to harvest. We choose <span class=\"math-container\">$y^2=\\frac{49}{100}$</span> resp. <span class=\"math-container\">$\\color{blue}{y=\\frac{7}{10}}$</span>. We obtain for <span class=\"math-container\">$x$</span> from (3)\n <span class=\"math-container\">\\begin{align*}\nx=\\frac{1}{2}-y^2=\\frac{1}{2}-\\frac{49}{100}=\\frac{1}{100}\n\\end{align*}</span>\n We have now a nice value <span class=\"math-container\">$\\color{blue}{x=\\frac{1}{100}}$</span> and we finally get from (1) the approximation:\n <span class=\"math-container\">\\begin{align*}\n\\color{blue}{\\sqrt{2}} \\approx \\frac{10}{7}\\left(1- 10^{-2}-\\frac{1}{2}\\cdot 10^{-4}-\\frac{1}{2}\\cdot 10^{-6}\\right)\\color{blue}{=1.414\\,213\\,5}71\\ldots\n\\end{align*}</span></p>\n \n <p>We have <span class=\"math-container\">$\\color{blue}{\\sqrt{2}=1.414\\,213\\,5}62\\ldots$</span> with an approximation error <span class=\"math-container\">$\\approx 9.055\\times 10^{-9}$</span>. This result is quite impressive when considering that we have used just four terms of the binomial series.</p>\n</blockquote>\n\n<p><em>Note:</em> In a section about binomial series expansion in <em><a href=\"https://rads.stackoverflow.com/amzn/click/com/014014739X\" rel=\"noreferrer\" rel=\"nofollow noreferrer\">Journey through Genius</a></em> by W. 
Dunham the author cites Newton: <em>Extraction of roots are much shortened by this theorem</em>, indicating how valuable this technique was for Newton.</p>\n" }, { "answer_id": 2093850, "score": 5, "text": "<p>Personally, I wouldn't have done it that way. So here is how I would've done it:</p>\n\n<h2>Method 1:</h2>\n\n<p>$$\\sqrt2=\\sqrt{1+1}=1+\\frac12-\\frac18+\\dots\\approx1+\\frac12-\\frac18=\\frac{11}8=1.375$$</p>\n\n<p>which is much clearer to me, since it avoids having to take decimals raised to powers and gives you something you can easily do by hand.</p>\n\n<p>$$1.375^2=1.890625$$</p>\n\n<p>Obviously it approaches the correct value as you take more terms.</p>\n\n<hr>\n\n<h2>Method 2:</h2>\n\n<p>This is called fixed-point iteration/Newton's method, and it basically goes like this:</p>\n\n<p>$$x=\\sqrt2\\implies x^2=2$$</p>\n\n<p>$$2x^2=2+x^2$$</p>\n\n<p>Divide both sides by $2x$ and we get</p>\n\n<p>$$x=\\frac{2+x^2}{2x}$$</p>\n\n<p>Now, interestingly, I'm going to call the $x$'s on the left $x_{n+1}$ and the $x$'s on the right $x_n$, so</p>\n\n<p>$$x_{n+1}=\\frac{2+(x_n)^2}{2x_n}$$</p>\n\n<p>and with $x_0\\approx\\sqrt2$, we will then have $x=\\lim_{n\\to\\infty}x_n$. For example, with $x_0=1$,</p>\n\n<p>$x_0=1$</p>\n\n<p>$x_1=\\frac{2+1^2}{2(1)}=\\frac32=1.5$</p>\n\n<p>$x_2=\\frac{2+(3/2)^2}{2(3/2)}=\\frac{17}{12}=1.41666\\dots$</p>\n\n<p>$x_3=\\dots=\\frac{577}{408}=1.414215686$</p>\n\n<p>And one can quickly check that $(x_3)^2=2.000006007\\dots$, which is pretty much the square root of $2$.</p>\n" }, { "answer_id": 4741401, "score": 0, "text": "<p>It is often easier to begin with a slightly different problem to simplify slightly. 
Stay with me and you will see why.</p>\n<p>Instead of finding the square root of 2, look at the square root of 200 and note that:</p>\n<p>200 = 14² + 4</p>\n<p>This is part of a general approach that says if N = a² + b, where b &lt;&lt; a, then we may write:</p>\n<p><span class=\"math-container\">$$\\sqrt N = a(1 + \\frac b {a^2} ) =a[1 + \\frac b {2a^2} - \\frac {b^2} {8a^4} + \\frac {b^3} {16a^6} - ... ] = a + \\frac b {2a} - \\frac {b^2} {8a^3} + ...$$</span></p>\n<p>Now, if we use N = 200, a = 14, b = 4, we get the first two terms give:</p>\n<p><span class=\"math-container\">$$ \\sqrt 200 = 14\\frac 1 7 $$</span></p>\n<p>That is <span class=\"math-container\">$ \\sqrt 2 = 1.4142857...$</span> which is correct to 5 significant figures. By including the next term, we get <span class=\"math-container\">$\\sqrt 200 = 14 + \\frac 1 7 - \\frac 1 {1372}$</span>. This gives the superb result:</p>\n<p><span class=\"math-container\">$ \\sqrt 2 = 1.414212827...$</span>, which is correct to 6 significant figures.</p>\n<p><strong>APPENDIX - Another example of</strong> <span class=\"math-container\">$\\sqrt{13}$</span></p>\n<p>Using the same approach and only the first two terms, we note that 36² = 1296. Thus, <span class=\"math-container\">$\\sqrt{1300} = 36\\frac 1 {18} = $</span>. 
This gives:</p>\n<p><span class=\"math-container\">$\\sqrt{13} = 3.60555 $</span>, correct to 6 significant figures.</p>\n" }, { "answer_id": 2728435, "score": -1, "text": "<p>In fact you can take any two numbers which can be added to get 2 (not nesserly 0.01 but at least you should know the root of one of them </p>\n\n<p>So for example $\\sqrt{2} = {(1+1)^{1/2}}$ </p>\n\n<p>Know all what you need is to expand it using bio theorem and for 2 terms you ll get 1.5</p>\n\n<p>$1+(0.5)\\cdot 1= 1.5 $\nWhich is not too bad approximation for $\\sqrt{2} = 1.4142135624 $</p>\n" }, { "answer_id": 3198245, "score": 8, "text": "<blockquote>\n <p>We want to (manually) approximate <span class=\"math-container\">$\\sqrt{2}$</span> by using the first few terms of the binomial series expansion of \n <span class=\"math-container\">\\begin{align*}\n\\sqrt{1-2x}&amp;= \\sum_{n=0}^\\infty \\binom{\\frac{1}{2}}{n}(-2x)^n\\qquad\\qquad\\qquad\\qquad |x|&lt;\\frac{1}{2}\\\\\n&amp;= 1-x-\\frac{1}{2}x^2-\\frac{1}{2}x^3+\\cdots\\tag{1}\n\\end{align*}</span>\n Here we look for a way to determine appropriate values of <span class=\"math-container\">$x$</span> using the binomial expansion.</p>\n</blockquote>\n\n<p>In order to apply (1) we are looking for a number <span class=\"math-container\">$y$</span> with\n<span class=\"math-container\">\\begin{align*}\n\\sqrt{1-2x}&amp;=\\sqrt{2y^2}=y\\sqrt{2}\\tag{2}\\\\\n\\color{blue}{\\sqrt{2}}&amp;\\color{blue}{=\\frac{1}{y}\\sqrt{1-2x}}\n\\end{align*}</span></p>\n\n<p>We see it is convenient to choose <span class=\"math-container\">$y$</span> to be a square number which can be easily factored out from the root. 
We obtain from (2)\n<span class=\"math-container\">\\begin{align*}\n1-2x&amp;=2y^2\\\\\n\\color{blue}{x}&amp;\\color{blue}{=\\frac{1}{2}-y^2}\\tag{3}\n\\end{align*}</span></p>\n\n<blockquote>\n <p>When looking for an appropriate <span class=\"math-container\">$y$</span> which fulfills (3) there are some aspects to consider:</p>\n \n <ul>\n <li><p>We have to respect the radius of convergence <span class=\"math-container\">$|x|&lt;\\frac{1}{2}$</span>.</p></li>\n <li><p>Since we want to calculate an approximation of <span class=\"math-container\">$\\sqrt{2}$</span> by hand we should take <span class=\"math-container\">$y\\in\\mathbb{Q}$</span> with rather small numbers as numerator and denominator.</p></li>\n <li><p>Last but not least: We want to find a value <span class=\"math-container\">$x$</span> which provides a good approximation for <span class=\"math-container\">$\\sqrt{2}$</span>.</p></li>\n </ul>\n \n <p>We will see it's not hard to find values which have these properties.</p>\n</blockquote>\n\n<p>We see in (1) a good approximation is given if <span class=\"math-container\">$x$</span> is close to <span class=\"math-container\">$0$</span>. If <span class=\"math-container\">$x$</span> is close to zero we will also fulfill the convergence condition. <span class=\"math-container\">$x$</span> close to zero means that in (3) we have to choose <span class=\"math-container\">$y$</span> so that\n<span class=\"math-container\">$\ny^2\n$</span> \nis close to <span class=\"math-container\">$\\frac{1}{2}$</span>. We have already (1) and (3) appropriately considered. Now we want to find small natural numbers <span class=\"math-container\">$a,b$</span> so that<br>\n<span class=\"math-container\">\\begin{align*}\ny^2=\\frac{a^2}{b^2}\\approx \\frac{1}{2}\n\\end{align*}</span>\nThis can be done easily. 
When going through small numbers of <span class=\"math-container\">$a$</span> and <span class=\"math-container\">$b$</span> whose squares are apart by a factor <span class=\"math-container\">$2$</span> we might quickly come to <span class=\"math-container\">$100$</span> and <span class=\"math-container\">$49$</span>. These are two small squares and we have <span class=\"math-container\">$2\\cdot 49=98$</span> close to <span class=\"math-container\">$100$</span>. That's all.</p>\n\n<blockquote>\n <p>Now it's time to harvest. We choose <span class=\"math-container\">$y^2=\\frac{49}{100}$</span> resp. <span class=\"math-container\">$\\color{blue}{y=\\frac{7}{10}}$</span>. We obtain for <span class=\"math-container\">$x$</span> from (3)\n <span class=\"math-container\">\\begin{align*}\nx=\\frac{1}{2}-y^2=\\frac{1}{2}-\\frac{49}{100}=\\frac{1}{100}\n\\end{align*}</span>\n We have now a nice value <span class=\"math-container\">$\\color{blue}{x=\\frac{1}{100}}$</span> and we finally get from (1) the approximation:\n <span class=\"math-container\">\\begin{align*}\n\\color{blue}{\\sqrt{2}} \\approx \\frac{10}{7}\\left(1- 10^{-2}-\\frac{1}{2}\\cdot 10^{-4}-\\frac{1}{2}\\cdot 10^{-6}\\right)\\color{blue}{=1.414\\,213\\,5}71\\ldots\n\\end{align*}</span></p>\n \n <p>We have <span class=\"math-container\">$\\color{blue}{\\sqrt{2}=1.414\\,213\\,5}62\\ldots$</span> with an approximation error <span class=\"math-container\">$\\approx 9.055\\times 10^{-9}$</span>. This result is quite impressive when considering that we have used just four terms of the binomial series.</p>\n</blockquote>\n\n<p><em>Note:</em> In a section about binomial series expansion in <em><a href=\"https://rads.stackoverflow.com/amzn/click/com/014014739X\" rel=\"noreferrer\" rel=\"nofollow noreferrer\">Journey through Genius</a></em> by W. 
Dunham the author cites Newton: <em>Extraction of roots are much shortened by this theorem</em>, indicating how valuable this technique was for Newton.</p>\n" }, { "answer_id": 2093850, "score": 5, "text": "<p>Personally, I wouldn't have done it that way. So here is how I would've done it:</p>\n\n<h2>Method 1:</h2>\n\n<p>$$\\sqrt2=\\sqrt{1+1}=1+\\frac12-\\frac18+\\dots\\approx1+\\frac12-\\frac18=\\frac{11}8=1.375$$</p>\n\n<p>which is much clearer to me, since it avoids having to take decimals raised to powers and gives you something you can easily do by hand.</p>\n\n<p>$$1.375^2=1.890625$$</p>\n\n<p>Obviously it approaches the correct value as you take more terms.</p>\n\n<hr>\n\n<h2>Method 2:</h2>\n\n<p>This is called fixed-point iteration/Newton's method, and it basically goes like this:</p>\n\n<p>$$x=\\sqrt2\\implies x^2=2$$</p>\n\n<p>$$2x^2=2+x^2$$</p>\n\n<p>Divide both sides by $2x$ and we get</p>\n\n<p>$$x=\\frac{2+x^2}{2x}$$</p>\n\n<p>Now, interestingly, I'm going to call the $x$'s on the left $x_{n+1}$ and the $x$'s on the right $x_n$, so</p>\n\n<p>$$x_{n+1}=\\frac{2+(x_n)^2}{2x_n}$$</p>\n\n<p>and with $x_0\\approx\\sqrt2$, we will then have $x=\\lim_{n\\to\\infty}x_n$. For example, with $x_0=1$,</p>\n\n<p>$x_0=1$</p>\n\n<p>$x_1=\\frac{2+1^2}{2(1)}=\\frac32=1.5$</p>\n\n<p>$x_2=\\frac{2+(3/2)^2}{2(3/2)}=\\frac{17}{12}=1.41666\\dots$</p>\n\n<p>$x_3=\\dots=\\frac{577}{408}=1.414215686$</p>\n\n<p>And one can quickly check that $(x_3)^2=2.000006007\\dots$, which is pretty much the square root of $2$.</p>\n" }, { "answer_id": 4741401, "score": 0, "text": "<p>It is often easier to begin with a slightly different problem to simplify slightly. 
Stay with me and you will see why.</p>\n<p>Instead of finding the square root of 2, look at the square root of 200 and note that:</p>\n<p>200 = 14² + 4</p>\n<p>This is part of a general approach that says if N = a² + b, where b &lt;&lt; a, then we may write:</p>\n<p><span class=\"math-container\">$$\\sqrt N = a(1 + \\frac b {a^2} ) =a[1 + \\frac b {2a^2} - \\frac {b^2} {8a^4} + \\frac {b^3} {16a^6} - ... ] = a + \\frac b {2a} - \\frac {b^2} {8a^3} + ...$$</span></p>\n<p>Now, if we use N = 200, a = 14, b = 4, we get the first two terms give:</p>\n<p><span class=\"math-container\">$$ \\sqrt 200 = 14\\frac 1 7 $$</span></p>\n<p>That is <span class=\"math-container\">$ \\sqrt 2 = 1.4142857...$</span> which is correct to 5 significant figures. By including the next term, we get <span class=\"math-container\">$\\sqrt 200 = 14 + \\frac 1 7 - \\frac 1 {1372}$</span>. This gives the superb result:</p>\n<p><span class=\"math-container\">$ \\sqrt 2 = 1.414212827...$</span>, which is correct to 6 significant figures.</p>\n<p><strong>APPENDIX - Another example of</strong> <span class=\"math-container\">$\\sqrt{13}$</span></p>\n<p>Using the same approach and only the first two terms, we note that 36² = 1296. Thus, <span class=\"math-container\">$\\sqrt{1300} = 36\\frac 1 {18} = $</span>. 
This gives:</p>\n<p><span class=\"math-container\">$\\sqrt{13} = 3.60555 $</span>, correct to 6 significant figures.</p>\n" }, { "answer_id": 2728435, "score": -1, "text": "<p>In fact you can take any two numbers which can be added to get 2 (not nesserly 0.01 but at least you should know the root of one of them </p>\n\n<p>So for example $\\sqrt{2} = {(1+1)^{1/2}}$ </p>\n\n<p>Know all what you need is to expand it using bio theorem and for 2 terms you ll get 1.5</p>\n\n<p>$1+(0.5)\\cdot 1= 1.5 $\nWhich is not too bad approximation for $\\sqrt{2} = 1.4142135624 $</p>\n" }, { "answer_id": 3198245, "score": 8, "text": "<blockquote>\n <p>We want to (manually) approximate <span class=\"math-container\">$\\sqrt{2}$</span> by using the first few terms of the binomial series expansion of \n <span class=\"math-container\">\\begin{align*}\n\\sqrt{1-2x}&amp;= \\sum_{n=0}^\\infty \\binom{\\frac{1}{2}}{n}(-2x)^n\\qquad\\qquad\\qquad\\qquad |x|&lt;\\frac{1}{2}\\\\\n&amp;= 1-x-\\frac{1}{2}x^2-\\frac{1}{2}x^3+\\cdots\\tag{1}\n\\end{align*}</span>\n Here we look for a way to determine appropriate values of <span class=\"math-container\">$x$</span> using the binomial expansion.</p>\n</blockquote>\n\n<p>In order to apply (1) we are looking for a number <span class=\"math-container\">$y$</span> with\n<span class=\"math-container\">\\begin{align*}\n\\sqrt{1-2x}&amp;=\\sqrt{2y^2}=y\\sqrt{2}\\tag{2}\\\\\n\\color{blue}{\\sqrt{2}}&amp;\\color{blue}{=\\frac{1}{y}\\sqrt{1-2x}}\n\\end{align*}</span></p>\n\n<p>We see it is convenient to choose <span class=\"math-container\">$y$</span> to be a square number which can be easily factored out from the root. 
We obtain from (2)\n<span class=\"math-container\">\\begin{align*}\n1-2x&amp;=2y^2\\\\\n\\color{blue}{x}&amp;\\color{blue}{=\\frac{1}{2}-y^2}\\tag{3}\n\\end{align*}</span></p>\n\n<blockquote>\n <p>When looking for an appropriate <span class=\"math-container\">$y$</span> which fulfills (3) there are some aspects to consider:</p>\n \n <ul>\n <li><p>We have to respect the radius of convergence <span class=\"math-container\">$|x|&lt;\\frac{1}{2}$</span>.</p></li>\n <li><p>Since we want to calculate an approximation of <span class=\"math-container\">$\\sqrt{2}$</span> by hand we should take <span class=\"math-container\">$y\\in\\mathbb{Q}$</span> with rather small numbers as numerator and denominator.</p></li>\n <li><p>Last but not least: We want to find a value <span class=\"math-container\">$x$</span> which provides a good approximation for <span class=\"math-container\">$\\sqrt{2}$</span>.</p></li>\n </ul>\n \n <p>We will see it's not hard to find values which have these properties.</p>\n</blockquote>\n\n<p>We see in (1) a good approximation is given if <span class=\"math-container\">$x$</span> is close to <span class=\"math-container\">$0$</span>. If <span class=\"math-container\">$x$</span> is close to zero we will also fulfill the convergence condition. <span class=\"math-container\">$x$</span> close to zero means that in (3) we have to choose <span class=\"math-container\">$y$</span> so that\n<span class=\"math-container\">$\ny^2\n$</span> \nis close to <span class=\"math-container\">$\\frac{1}{2}$</span>. We have already (1) and (3) appropriately considered. Now we want to find small natural numbers <span class=\"math-container\">$a,b$</span> so that<br>\n<span class=\"math-container\">\\begin{align*}\ny^2=\\frac{a^2}{b^2}\\approx \\frac{1}{2}\n\\end{align*}</span>\nThis can be done easily. 
When going through small numbers of <span class=\"math-container\">$a$</span> and <span class=\"math-container\">$b$</span> whose squares are apart by a factor <span class=\"math-container\">$2$</span> we might quickly come to <span class=\"math-container\">$100$</span> and <span class=\"math-container\">$49$</span>. These are two small squares and we have <span class=\"math-container\">$2\\cdot 49=98$</span> close to <span class=\"math-container\">$100$</span>. That's all.</p>\n\n<blockquote>\n <p>Now it's time to harvest. We choose <span class=\"math-container\">$y^2=\\frac{49}{100}$</span> resp. <span class=\"math-container\">$\\color{blue}{y=\\frac{7}{10}}$</span>. We obtain for <span class=\"math-container\">$x$</span> from (3)\n <span class=\"math-container\">\\begin{align*}\nx=\\frac{1}{2}-y^2=\\frac{1}{2}-\\frac{49}{100}=\\frac{1}{100}\n\\end{align*}</span>\n We have now a nice value <span class=\"math-container\">$\\color{blue}{x=\\frac{1}{100}}$</span> and we finally get from (1) the approximation:\n <span class=\"math-container\">\\begin{align*}\n\\color{blue}{\\sqrt{2}} \\approx \\frac{10}{7}\\left(1- 10^{-2}-\\frac{1}{2}\\cdot 10^{-4}-\\frac{1}{2}\\cdot 10^{-6}\\right)\\color{blue}{=1.414\\,213\\,5}71\\ldots\n\\end{align*}</span></p>\n \n <p>We have <span class=\"math-container\">$\\color{blue}{\\sqrt{2}=1.414\\,213\\,5}62\\ldots$</span> with an approximation error <span class=\"math-container\">$\\approx 9.055\\times 10^{-9}$</span>. This result is quite impressive when considering that we have used just four terms of the binomial series.</p>\n</blockquote>\n\n<p><em>Note:</em> In a section about binomial series expansion in <em><a href=\"https://rads.stackoverflow.com/amzn/click/com/014014739X\" rel=\"noreferrer\" rel=\"nofollow noreferrer\">Journey through Genius</a></em> by W. 
Dunham the author cites Newton: <em>Extraction of roots are much shortened by this theorem</em>, indicating how valuable this technique was for Newton.</p>\n" }, { "answer_id": 2093850, "score": 5, "text": "<p>Personally, I wouldn't have done it that way. So here is how I would've done it:</p>\n\n<h2>Method 1:</h2>\n\n<p>$$\\sqrt2=\\sqrt{1+1}=1+\\frac12-\\frac18+\\dots\\approx1+\\frac12-\\frac18=\\frac{11}8=1.375$$</p>\n\n<p>which is much clearer to me, since it avoids having to take decimals raised to powers and gives you something you can easily do by hand.</p>\n\n<p>$$1.375^2=1.890625$$</p>\n\n<p>Obviously it approaches the correct value as you take more terms.</p>\n\n<hr>\n\n<h2>Method 2:</h2>\n\n<p>This is called fixed-point iteration/Newton's method, and it basically goes like this:</p>\n\n<p>$$x=\\sqrt2\\implies x^2=2$$</p>\n\n<p>$$2x^2=2+x^2$$</p>\n\n<p>Divide both sides by $2x$ and we get</p>\n\n<p>$$x=\\frac{2+x^2}{2x}$$</p>\n\n<p>Now, interestingly, I'm going to call the $x$'s on the left $x_{n+1}$ and the $x$'s on the right $x_n$, so</p>\n\n<p>$$x_{n+1}=\\frac{2+(x_n)^2}{2x_n}$$</p>\n\n<p>and with $x_0\\approx\\sqrt2$, we will then have $x=\\lim_{n\\to\\infty}x_n$. For example, with $x_0=1$,</p>\n\n<p>$x_0=1$</p>\n\n<p>$x_1=\\frac{2+1^2}{2(1)}=\\frac32=1.5$</p>\n\n<p>$x_2=\\frac{2+(3/2)^2}{2(3/2)}=\\frac{17}{12}=1.41666\\dots$</p>\n\n<p>$x_3=\\dots=\\frac{577}{408}=1.414215686$</p>\n\n<p>And one can quickly check that $(x_3)^2=2.000006007\\dots$, which is pretty much the square root of $2$.</p>\n" }, { "answer_id": 4741401, "score": 0, "text": "<p>It is often easier to begin with a slightly different problem to simplify slightly. 
Stay with me and you will see why.</p>\n<p>Instead of finding the square root of 2, look at the square root of 200 and note that:</p>\n<p>200 = 14² + 4</p>\n<p>This is part of a general approach that says if N = a² + b, where b &lt;&lt; a, then we may write:</p>\n<p><span class=\"math-container\">$$\\sqrt N = a(1 + \\frac b {a^2} ) =a[1 + \\frac b {2a^2} - \\frac {b^2} {8a^4} + \\frac {b^3} {16a^6} - ... ] = a + \\frac b {2a} - \\frac {b^2} {8a^3} + ...$$</span></p>\n<p>Now, if we use N = 200, a = 14, b = 4, we get the first two terms give:</p>\n<p><span class=\"math-container\">$$ \\sqrt 200 = 14\\frac 1 7 $$</span></p>\n<p>That is <span class=\"math-container\">$ \\sqrt 2 = 1.4142857...$</span> which is correct to 5 significant figures. By including the next term, we get <span class=\"math-container\">$\\sqrt 200 = 14 + \\frac 1 7 - \\frac 1 {1372}$</span>. This gives the superb result:</p>\n<p><span class=\"math-container\">$ \\sqrt 2 = 1.414212827...$</span>, which is correct to 6 significant figures.</p>\n<p><strong>APPENDIX - Another example of</strong> <span class=\"math-container\">$\\sqrt{13}$</span></p>\n<p>Using the same approach and only the first two terms, we note that 36² = 1296. Thus, <span class=\"math-container\">$\\sqrt{1300} = 36\\frac 1 {18} = $</span>. 
This gives:</p>\n<p><span class=\"math-container\">$\\sqrt{13} = 3.60555 $</span>, correct to 6 significant figures.</p>\n" }, { "answer_id": 2728435, "score": -1, "text": "<p>In fact you can take any two numbers which can be added to get 2 (not nesserly 0.01 but at least you should know the root of one of them </p>\n\n<p>So for example $\\sqrt{2} = {(1+1)^{1/2}}$ </p>\n\n<p>Know all what you need is to expand it using bio theorem and for 2 terms you ll get 1.5</p>\n\n<p>$1+(0.5)\\cdot 1= 1.5 $\nWhich is not too bad approximation for $\\sqrt{2} = 1.4142135624 $</p>\n" }, { "answer_id": 3198245, "score": 8, "text": "<blockquote>\n <p>We want to (manually) approximate <span class=\"math-container\">$\\sqrt{2}$</span> by using the first few terms of the binomial series expansion of \n <span class=\"math-container\">\\begin{align*}\n\\sqrt{1-2x}&amp;= \\sum_{n=0}^\\infty \\binom{\\frac{1}{2}}{n}(-2x)^n\\qquad\\qquad\\qquad\\qquad |x|&lt;\\frac{1}{2}\\\\\n&amp;= 1-x-\\frac{1}{2}x^2-\\frac{1}{2}x^3+\\cdots\\tag{1}\n\\end{align*}</span>\n Here we look for a way to determine appropriate values of <span class=\"math-container\">$x$</span> using the binomial expansion.</p>\n</blockquote>\n\n<p>In order to apply (1) we are looking for a number <span class=\"math-container\">$y$</span> with\n<span class=\"math-container\">\\begin{align*}\n\\sqrt{1-2x}&amp;=\\sqrt{2y^2}=y\\sqrt{2}\\tag{2}\\\\\n\\color{blue}{\\sqrt{2}}&amp;\\color{blue}{=\\frac{1}{y}\\sqrt{1-2x}}\n\\end{align*}</span></p>\n\n<p>We see it is convenient to choose <span class=\"math-container\">$y$</span> to be a square number which can be easily factored out from the root. 
We obtain from (2)\n<span class=\"math-container\">\\begin{align*}\n1-2x&amp;=2y^2\\\\\n\\color{blue}{x}&amp;\\color{blue}{=\\frac{1}{2}-y^2}\\tag{3}\n\\end{align*}</span></p>\n\n<blockquote>\n <p>When looking for an appropriate <span class=\"math-container\">$y$</span> which fulfills (3) there are some aspects to consider:</p>\n \n <ul>\n <li><p>We have to respect the radius of convergence <span class=\"math-container\">$|x|&lt;\\frac{1}{2}$</span>.</p></li>\n <li><p>Since we want to calculate an approximation of <span class=\"math-container\">$\\sqrt{2}$</span> by hand we should take <span class=\"math-container\">$y\\in\\mathbb{Q}$</span> with rather small numbers as numerator and denominator.</p></li>\n <li><p>Last but not least: We want to find a value <span class=\"math-container\">$x$</span> which provides a good approximation for <span class=\"math-container\">$\\sqrt{2}$</span>.</p></li>\n </ul>\n \n <p>We will see it's not hard to find values which have these properties.</p>\n</blockquote>\n\n<p>We see in (1) a good approximation is given if <span class=\"math-container\">$x$</span> is close to <span class=\"math-container\">$0$</span>. If <span class=\"math-container\">$x$</span> is close to zero we will also fulfill the convergence condition. <span class=\"math-container\">$x$</span> close to zero means that in (3) we have to choose <span class=\"math-container\">$y$</span> so that\n<span class=\"math-container\">$\ny^2\n$</span> \nis close to <span class=\"math-container\">$\\frac{1}{2}$</span>. We have already (1) and (3) appropriately considered. Now we want to find small natural numbers <span class=\"math-container\">$a,b$</span> so that<br>\n<span class=\"math-container\">\\begin{align*}\ny^2=\\frac{a^2}{b^2}\\approx \\frac{1}{2}\n\\end{align*}</span>\nThis can be done easily. 
When going through small numbers <span class=\"math-container\">$a$</span> and <span class=\"math-container\">$b$</span> whose squares differ by roughly a factor of <span class=\"math-container\">$2$</span> we might quickly come to <span class=\"math-container\">$100$</span> and <span class=\"math-container\">$49$</span>. These are two small squares and we have <span class=\"math-container\">$2\\cdot 49=98$</span> close to <span class=\"math-container\">$100$</span>. That's all.</p>\n\n<blockquote>\n <p>Now it's time to harvest. We choose <span class=\"math-container\">$y^2=\\frac{49}{100}$</span>, i.e. <span class=\"math-container\">$\\color{blue}{y=\\frac{7}{10}}$</span>. We obtain for <span class=\"math-container\">$x$</span> from (3)\n <span class=\"math-container\">\\begin{align*}\nx=\\frac{1}{2}-y^2=\\frac{1}{2}-\\frac{49}{100}=\\frac{1}{100}\n\\end{align*}</span>\n We now have a nice value <span class=\"math-container\">$\\color{blue}{x=\\frac{1}{100}}$</span> and we finally get from (1) the approximation:\n <span class=\"math-container\">\\begin{align*}\n\\color{blue}{\\sqrt{2}} \\approx \\frac{10}{7}\\left(1- 10^{-2}-\\frac{1}{2}\\cdot 10^{-4}-\\frac{1}{2}\\cdot 10^{-6}\\right)\\color{blue}{=1.414\\,213\\,5}71\\ldots\n\\end{align*}</span></p>\n \n <p>We have <span class=\"math-container\">$\\color{blue}{\\sqrt{2}=1.414\\,213\\,5}62\\ldots$</span> with an approximation error <span class=\"math-container\">$\\approx 9.055\\times 10^{-9}$</span>. This result is quite impressive when considering that we have used just four terms of the binomial series.</p>\n</blockquote>\n\n<p><em>Note:</em> In a section about binomial series expansion in <em><a href=\"https://rads.stackoverflow.com/amzn/click/com/014014739X\" rel=\"nofollow noreferrer\">Journey through Genius</a></em> by W. 
Dunham the author cites Newton: <em>Extraction of roots are much shortened by this theorem</em>, indicating how valuable this technique was for Newton.</p>\n" }, { "answer_id": 2093850, "score": 5, "text": "<p>Personally, I wouldn't have done it that way. So here is how I would've done it:</p>\n\n<h2>Method 1:</h2>\n\n<p>$$\\sqrt2=\\sqrt{1+1}=1+\\frac12-\\frac18+\\dots\\approx1+\\frac12-\\frac18=\\frac{11}8=1.375$$</p>\n\n<p>which is much clearer to me, since it avoids having to take decimals raised to powers and gives you something you can easily do by hand.</p>\n\n<p>$$1.375^2=1.890625$$</p>\n\n<p>Obviously it approaches the correct value as you take more terms.</p>\n\n<hr>\n\n<h2>Method 2:</h2>\n\n<p>This is called fixed-point iteration/Newton's method, and it basically goes like this:</p>\n\n<p>$$x=\\sqrt2\\implies x^2=2$$</p>\n\n<p>$$2x^2=2+x^2$$</p>\n\n<p>Divide both sides by $2x$ and we get</p>\n\n<p>$$x=\\frac{2+x^2}{2x}$$</p>\n\n<p>Now, interestingly, I'm going to call the $x$'s on the left $x_{n+1}$ and the $x$'s on the right $x_n$, so</p>\n\n<p>$$x_{n+1}=\\frac{2+(x_n)^2}{2x_n}$$</p>\n\n<p>and with $x_0\\approx\\sqrt2$, we will then have $x=\\lim_{n\\to\\infty}x_n$. For example, with $x_0=1$,</p>\n\n<p>$x_0=1$</p>\n\n<p>$x_1=\\frac{2+1^2}{2(1)}=\\frac32=1.5$</p>\n\n<p>$x_2=\\frac{2+(3/2)^2}{2(3/2)}=\\frac{17}{12}=1.41666\\dots$</p>\n\n<p>$x_3=\\dots=\\frac{577}{408}=1.414215686$</p>\n\n<p>And one can quickly check that $(x_3)^2=2.000006007\\dots$, which is pretty much the square root of $2$.</p>\n" }, { "answer_id": 4741401, "score": 0, "text": "<p>It is often easier to begin with a slightly different problem to simplify slightly. 
Stay with me and you will see why.</p>\n<p>Instead of finding the square root of 2, look at the square root of 200 and note that:</p>\n<p>200 = 14² + 4</p>\n<p>This is part of a general approach that says if N = a² + b, where b &lt;&lt; a², then we may write:</p>\n<p><span class=\"math-container\">$$\\sqrt N = a\\left(1 + \\frac b {a^2} \\right)^{1/2} =a\\left[1 + \\frac b {2a^2} - \\frac {b^2} {8a^4} + \\frac {b^3} {16a^6} - ... \\right] = a + \\frac b {2a} - \\frac {b^2} {8a^3} + ...$$</span></p>\n<p>Now, if we use N = 200, a = 14, b = 4, the first two terms give:</p>\n<p><span class=\"math-container\">$$ \\sqrt {200} = 14\\frac 1 7 $$</span></p>\n<p>That is <span class=\"math-container\">$ \\sqrt 2 = 1.4142857...$</span>, which is correct to 5 significant figures. By including the next term, we get <span class=\"math-container\">$\\sqrt {200} = 14 + \\frac 1 7 - \\frac 1 {1372}$</span>. This gives the superb result:</p>\n<p><span class=\"math-container\">$ \\sqrt 2 = 1.414212827...$</span>, which is correct to 6 significant figures.</p>\n<p><strong>APPENDIX - Another example:</strong> <span class=\"math-container\">$\\sqrt{13}$</span></p>\n<p>Using the same approach and only the first two terms, we note that 36² = 1296. Thus, <span class=\"math-container\">$\\sqrt{1300} \\approx 36\\frac 1 {18}$</span>. 
This gives:</p>\n<p><span class=\"math-container\">$\\sqrt{13} = 3.60555 $</span>, correct to 6 significant figures.</p>\n" }
Stay with me and you will see why.</p>\n<p>Instead of finding the square root of 2, look at the square root of 200 and note that:</p>\n<p>200 = 14² + 4</p>\n<p>This is part of a general approach that says if N = a² + b, where b &lt;&lt; a², then we may write:</p>\n<p><span class=\"math-container\">$$\\sqrt N = a\\left(1 + \\frac b {a^2} \\right)^{1/2} =a\\left[1 + \\frac b {2a^2} - \\frac {b^2} {8a^4} + \\frac {b^3} {16a^6} - ... \\right] = a + \\frac b {2a} - \\frac {b^2} {8a^3} + ...$$</span></p>\n<p>Now, if we use N = 200, a = 14, b = 4, the first two terms give:</p>\n<p><span class=\"math-container\">$$ \\sqrt {200} = 14\\frac 1 7 $$</span></p>\n<p>That is <span class=\"math-container\">$ \\sqrt 2 = 1.4142857...$</span>, which is correct to 5 significant figures. By including the next term, we get <span class=\"math-container\">$\\sqrt {200} = 14 + \\frac 1 7 - \\frac 1 {1372}$</span>. This gives the superb result:</p>\n<p><span class=\"math-container\">$ \\sqrt 2 = 1.414212827...$</span>, which is correct to 6 significant figures.</p>\n<p><strong>APPENDIX - Another example:</strong> <span class=\"math-container\">$\\sqrt{13}$</span></p>\n<p>Using the same approach and only the first two terms, we note that 36² = 1296. Thus, <span class=\"math-container\">$\\sqrt{1300} \\approx 36\\frac 1 {18}$</span>. 
This gives:</p>\n<p><span class=\"math-container\">$\\sqrt{13} = 3.60555 $</span>, correct to 6 significant figures.</p>\n" }
Stay with me and you will see why.</p>\n<p>Instead of finding the square root of 2, look at the square root of 200 and note that:</p>\n<p>200 = 14² + 4</p>\n<p>This is part of a general approach that says if N = a² + b, where b &lt;&lt; a, then we may write:</p>\n<p><span class=\"math-container\">$$\\sqrt N = a(1 + \\frac b {a^2} ) =a[1 + \\frac b {2a^2} - \\frac {b^2} {8a^4} + \\frac {b^3} {16a^6} - ... ] = a + \\frac b {2a} - \\frac {b^2} {8a^3} + ...$$</span></p>\n<p>Now, if we use N = 200, a = 14, b = 4, we get the first two terms give:</p>\n<p><span class=\"math-container\">$$ \\sqrt 200 = 14\\frac 1 7 $$</span></p>\n<p>That is <span class=\"math-container\">$ \\sqrt 2 = 1.4142857...$</span> which is correct to 5 significant figures. By including the next term, we get <span class=\"math-container\">$\\sqrt 200 = 14 + \\frac 1 7 - \\frac 1 {1372}$</span>. This gives the superb result:</p>\n<p><span class=\"math-container\">$ \\sqrt 2 = 1.414212827...$</span>, which is correct to 6 significant figures.</p>\n<p><strong>APPENDIX - Another example of</strong> <span class=\"math-container\">$\\sqrt{13}$</span></p>\n<p>Using the same approach and only the first two terms, we note that 36² = 1296. Thus, <span class=\"math-container\">$\\sqrt{1300} = 36\\frac 1 {18} = $</span>. This gives:</p>\n<p><span class=\"math-container\">$\\sqrt{13} = 3.60555 $</span>, correct to 6 significant figures.</p>\n" }, { "answer_id": 2728435, "score": -1, "text": "<p>In fact you can take any two numbers which can be added to get 2 (not nesserly 0.01 but at least you should know the root of one of them </p>\n\n<p>So for example $\\sqrt{2} = {(1+1)^{1/2}}$ </p>\n\n<p>Know all what you need is to expand it using bio theorem and for 2 terms you ll get 1.5</p>\n\n<p>$1+(0.5)\\cdot 1= 1.5 $\nWhich is not too bad approximation for $\\sqrt{2} = 1.4142135624 $</p>\n" } ]
Question: Approximating square roots using binomial expansion. Answer 1: <blockquote> <p>We want to (manually) approximate <span class="math-container">$\sqrt{2}$</span> by using the first few terms of the binomial series expansion of <span class="math-container">\begin{align*} \sqrt{1-2x}&amp;= \sum_{n=0}^\infty \binom{\frac{1}{2}}{n}(-2x)^n\qquad\qquad\qquad\qquad |x|&lt;\frac{1}{2}\\ &amp;= 1-x-\frac{1}{2}x^2-\frac{1}{2}x^3+\cdots\tag{1} \end{align*}</span> Here we look for a way to determine appropriate values of <span class="math-container">$x$</span> using the binomial expansion.</p> </blockquote> <p>In order to apply (1) we are looking for a number <span class="math-container">$y$</span> with <span class="math-container">\begin{align*} \sqrt{1-2x}&amp;=\sqrt{2y^2}=y\sqrt{2}\tag{2}\\ \color{blue}{\sqrt{2}}&amp;\color{blue}{=\frac{1}{y}\sqrt{1-2x}} \end{align*}</span></p> <p>We see it is convenient to choose <span class="math-container">$y$</span> so that <span class="math-container">$y^2$</span> is a quotient of perfect squares, which can then be easily factored out from the root.
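As a quick numerical sanity check of (1), here is a small sketch (not part of the original answer; the helper names `binom` and `sqrt_series` are mine) that compares the truncated series against a library square root:

```python
from math import sqrt

def binom(alpha, n):
    """Generalised binomial coefficient alpha*(alpha-1)*...*(alpha-n+1)/n!."""
    c = 1.0
    for k in range(n):
        c *= (alpha - k) / (k + 1)
    return c

def sqrt_series(x, terms):
    """Partial sum of the binomial series (1) for sqrt(1 - 2x)."""
    return sum(binom(0.5, n) * (-2.0 * x) ** n for n in range(terms))

# The coefficients reproduce 1 - x - x^2/2 - x^3/2 - ... from (1).
x = 0.01
for terms in (2, 3, 4):
    err = abs(sqrt_series(x, terms) - sqrt(1.0 - 2.0 * x))
    print(terms, sqrt_series(x, terms), err)
```

For $x$ close to $0$ the error shrinks rapidly with each extra term, which is what makes a four-term truncation sufficient in this answer.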
We obtain from (2) <span class="math-container">\begin{align*} 1-2x&amp;=2y^2\\ \color{blue}{x}&amp;\color{blue}{=\frac{1}{2}-y^2}\tag{3} \end{align*}</span></p> <blockquote> <p>When looking for an appropriate <span class="math-container">$y$</span> which fulfills (3) there are some aspects to consider:</p> <ul> <li><p>We have to respect the radius of convergence <span class="math-container">$|x|&lt;\frac{1}{2}$</span>.</p></li> <li><p>Since we want to calculate an approximation of <span class="math-container">$\sqrt{2}$</span> by hand, we should take <span class="math-container">$y\in\mathbb{Q}$</span> with a rather small numerator and denominator.</p></li> <li><p>Last but not least: we want to find a value <span class="math-container">$x$</span> which provides a good approximation for <span class="math-container">$\sqrt{2}$</span>.</p></li> </ul> <p>We will see it's not hard to find values which have these properties.</p> </blockquote> <p>We see from (1) that a good approximation is obtained when <span class="math-container">$x$</span> is close to <span class="math-container">$0$</span>. If <span class="math-container">$x$</span> is close to zero, we will also fulfill the convergence condition. <span class="math-container">$x$</span> close to zero means that in (3) we have to choose <span class="math-container">$y$</span> so that <span class="math-container">$y^2$</span> is close to <span class="math-container">$\frac{1}{2}$</span>. This takes care of both (1) and (3). Now we want to find small natural numbers <span class="math-container">$a,b$</span> so that<br> <span class="math-container">\begin{align*} y^2=\frac{a^2}{b^2}\approx \frac{1}{2} \end{align*}</span> This can be done easily.
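The claim that this can be done easily can be made concrete with a tiny brute-force scan (an illustrative sketch, not part of the original answer), using exact rational arithmetic to find the small-denominator fraction whose square is closest to $\frac{1}{2}$:

```python
from fractions import Fraction

# Among all fractions a/b with 2 <= b <= 12, find the one whose square
# is closest to 1/2, i.e. minimise |a^2/b^2 - 1/2| exactly.
best = min(
    (Fraction(a, b) for b in range(2, 13) for a in range(1, b)),
    key=lambda y: abs(y * y - Fraction(1, 2)),
)
print(best, best * best)  # -> 7/10 49/100
```

This recovers exactly the choice $y=\frac{7}{10}$, $y^2=\frac{49}{100}$ made in the answer.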
When going through small values of <span class="math-container">$a$</span> and <span class="math-container">$b$</span> whose squares differ by roughly a factor of <span class="math-container">$2$</span>, we might quickly come to <span class="math-container">$100$</span> and <span class="math-container">$49$</span>. These are two small squares, and <span class="math-container">$2\cdot 49=98$</span> is close to <span class="math-container">$100$</span>. That's all.</p> <blockquote> <p>Now it's time to harvest. We choose <span class="math-container">$y^2=\frac{49}{100}$</span>, i.e. <span class="math-container">$\color{blue}{y=\frac{7}{10}}$</span>. From (3) we obtain for <span class="math-container">$x$</span> <span class="math-container">\begin{align*} x=\frac{1}{2}-y^2=\frac{1}{2}-\frac{49}{100}=\frac{1}{100} \end{align*}</span> We now have a nice value <span class="math-container">$\color{blue}{x=\frac{1}{100}}$</span> and we finally get from (1) the approximation: <span class="math-container">\begin{align*} \color{blue}{\sqrt{2}} \approx \frac{10}{7}\left(1- 10^{-2}-\frac{1}{2}\cdot 10^{-4}-\frac{1}{2}\cdot 10^{-6}\right)\color{blue}{=1.414\,213\,5}71\ldots \end{align*}</span></p> <p>We have <span class="math-container">$\color{blue}{\sqrt{2}=1.414\,213\,5}62\ldots$</span>, so the approximation error is <span class="math-container">$\approx 9.055\times 10^{-9}$</span>. This result is quite impressive considering that we have used just four terms of the binomial series.</p> </blockquote> <p><em>Note:</em> In a section about binomial series expansion in <em><a href="https://rads.stackoverflow.com/amzn/click/com/014014739X" rel="nofollow noreferrer">Journey through Genius</a></em> by W.
So here is how I would've done it:</p> <h2>Method 1:</h2> <p>$$\sqrt2=\sqrt{1+1}=1+\frac12-\frac18+\dots\approx1+\frac12-\frac18=\frac{11}8=1.375$$</p> <p>which is much clearer to me, since it avoids having to take decimals raised to powers and gives you something you can easily do by hand.</p> <p>$$1.375^2=1.890625$$</p> <p>Obviously it approaches the correct value as you take more terms.</p> <hr> <h2>Method 2:</h2> <p>This is called fixed-point iteration/Newton's method, and it basically goes like this:</p> <p>$$x=\sqrt2\implies x^2=2$$</p> <p>$$2x^2=2+x^2$$</p> <p>Divide both sides by $2x$ and we get</p> <p>$$x=\frac{2+x^2}{2x}$$</p> <p>Now, interestingly, I'm going to call the $x$'s on the left $x_{n+1}$ and the $x$'s on the right $x_n$, so</p> <p>$$x_{n+1}=\frac{2+(x_n)^2}{2x_n}$$</p> <p>and with $x_0\approx\sqrt2$, we will then have $x=\lim_{n\to\infty}x_n$. For example, with $x_0=1$,</p> <p>$x_0=1$</p> <p>$x_1=\frac{2+1^2}{2(1)}=\frac32=1.5$</p> <p>$x_2=\frac{2+(3/2)^2}{2(3/2)}=\frac{17}{12}=1.41666\dots$</p> <p>$x_3=\dots=\frac{577}{408}=1.414215686$</p> <p>And one can quickly check that $(x_3)^2=2.000006007\dots$, which is pretty much the square root of $2$.</p> Answer 3: <p>It is often easier to begin with a slightly different problem to simplify slightly. Stay with me and you will see why.</p> <p>Instead of finding the square root of 2, look at the square root of 200 and note that:</p> <p>200 = 14² + 4</p> <p>This is part of a general approach that says if N = a² + b, where b &lt;&lt; a, then we may write:</p> <p><span class="math-container">$$\sqrt N = a(1 + \frac b {a^2} ) =a[1 + \frac b {2a^2} - \frac {b^2} {8a^4} + \frac {b^3} {16a^6} - ... 
] = a + \frac b {2a} - \frac {b^2} {8a^3} + ...$$</span></p> <p>Now, if we use N = 200, a = 14, b = 4, we get the first two terms give:</p> <p><span class="math-container">$$ \sqrt 200 = 14\frac 1 7 $$</span></p> <p>That is <span class="math-container">$ \sqrt 2 = 1.4142857...$</span> which is correct to 5 significant figures. By including the next term, we get <span class="math-container">$\sqrt 200 = 14 + \frac 1 7 - \frac 1 {1372}$</span>. This gives the superb result:</p> <p><span class="math-container">$ \sqrt 2 = 1.414212827...$</span>, which is correct to 6 significant figures.</p> <p><strong>APPENDIX - Another example of</strong> <span class="math-container">$\sqrt{13}$</span></p> <p>Using the same approach and only the first two terms, we note that 36² = 1296. Thus, <span class="math-container">$\sqrt{1300} = 36\frac 1 {18} = $</span>. This gives:</p> <p><span class="math-container">$\sqrt{13} = 3.60555 $</span>, correct to 6 significant figures.</p> Answer 4: <p>In fact you can take any two numbers which can be added to get 2 (not nesserly 0.01 but at least you should know the root of one of them </p> <p>So for example $\sqrt{2} = {(1+1)^{1/2}}$ </p> <p>Know all what you need is to expand it using bio theorem and for 2 terms you ll get 1.5</p> <p>$1+(0.5)\cdot 1= 1.5 $ Which is not too bad approximation for $\sqrt{2} = 1.4142135624 $</p> Answer 5: <blockquote> <p>We want to (manually) approximate <span class="math-container">$\sqrt{2}$</span> by using the first few terms of the binomial series expansion of <span class="math-container">\begin{align*} \sqrt{1-2x}&amp;= \sum_{n=0}^\infty \binom{\frac{1}{2}}{n}(-2x)^n\qquad\qquad\qquad\qquad |x|&lt;\frac{1}{2}\\ &amp;= 1-x-\frac{1}{2}x^2-\frac{1}{2}x^3+\cdots\tag{1} \end{align*}</span> Here we look for a way to determine appropriate values of <span class="math-container">$x$</span> using the binomial expansion.</p> </blockquote> <p>In order to apply (1) we are looking for a number <span 
class="math-container">$y$</span> with <span class="math-container">\begin{align*} \sqrt{1-2x}&amp;=\sqrt{2y^2}=y\sqrt{2}\tag{2}\\ \color{blue}{\sqrt{2}}&amp;\color{blue}{=\frac{1}{y}\sqrt{1-2x}} \end{align*}</span></p> <p>We see it is convenient to choose <span class="math-container">$y$</span> to be a square number which can be easily factored out from the root. We obtain from (2) <span class="math-container">\begin{align*} 1-2x&amp;=2y^2\\ \color{blue}{x}&amp;\color{blue}{=\frac{1}{2}-y^2}\tag{3} \end{align*}</span></p> <blockquote> <p>When looking for an appropriate <span class="math-container">$y$</span> which fulfills (3) there are some aspects to consider:</p> <ul> <li><p>We have to respect the radius of convergence <span class="math-container">$|x|&lt;\frac{1}{2}$</span>.</p></li> <li><p>Since we want to calculate an approximation of <span class="math-container">$\sqrt{2}$</span> by hand we should take <span class="math-container">$y\in\mathbb{Q}$</span> with rather small numbers as numerator and denominator.</p></li> <li><p>Last but not least: We want to find a value <span class="math-container">$x$</span> which provides a good approximation for <span class="math-container">$\sqrt{2}$</span>.</p></li> </ul> <p>We will see it's not hard to find values which have these properties.</p> </blockquote> <p>We see in (1) a good approximation is given if <span class="math-container">$x$</span> is close to <span class="math-container">$0$</span>. If <span class="math-container">$x$</span> is close to zero we will also fulfill the convergence condition. <span class="math-container">$x$</span> close to zero means that in (3) we have to choose <span class="math-container">$y$</span> so that <span class="math-container">$ y^2 $</span> is close to <span class="math-container">$\frac{1}{2}$</span>. We have already (1) and (3) appropriately considered. 
Now we want to find small natural numbers <span class="math-container">$a,b$</span> so that<br> <span class="math-container">\begin{align*} y^2=\frac{a^2}{b^2}\approx \frac{1}{2} \end{align*}</span> This can be done easily. When going through small numbers of <span class="math-container">$a$</span> and <span class="math-container">$b$</span> whose squares are apart by a factor <span class="math-container">$2$</span> we might quickly come to <span class="math-container">$100$</span> and <span class="math-container">$49$</span>. These are two small squares and we have <span class="math-container">$2\cdot 49=98$</span> close to <span class="math-container">$100$</span>. That's all.</p> <blockquote> <p>Now it's time to harvest. We choose <span class="math-container">$y^2=\frac{49}{100}$</span> resp. <span class="math-container">$\color{blue}{y=\frac{7}{10}}$</span>. We obtain for <span class="math-container">$x$</span> from (3) <span class="math-container">\begin{align*} x=\frac{1}{2}-y^2=\frac{1}{2}-\frac{49}{100}=\frac{1}{100} \end{align*}</span> We have now a nice value <span class="math-container">$\color{blue}{x=\frac{1}{100}}$</span> and we finally get from (1) the approximation: <span class="math-container">\begin{align*} \color{blue}{\sqrt{2}} \approx \frac{10}{7}\left(1- 10^{-2}-\frac{1}{2}\cdot 10^{-4}-\frac{1}{2}\cdot 10^{-6}\right)\color{blue}{=1.414\,213\,5}71\ldots \end{align*}</span></p> <p>We have <span class="math-container">$\color{blue}{\sqrt{2}=1.414\,213\,5}62\ldots$</span> with an approximation error <span class="math-container">$\approx 9.055\times 10^{-9}$</span>. This result is quite impressive when considering that we have used just four terms of the binomial series.</p> </blockquote> <p><em>Note:</em> In a section about binomial series expansion in <em><a href="https://rads.stackoverflow.com/amzn/click/com/014014739X" rel="noreferrer" rel="nofollow noreferrer">Journey through Genius</a></em> by W. 
Dunham the author cites Newton: <em>Extraction of roots are much shortened by this theorem</em>, indicating how valuable this technique was for Newton.</p> Answer 6: <p>Personally, I wouldn't have done it that way. So here is how I would've done it:</p> <h2>Method 1:</h2> <p>$$\sqrt2=\sqrt{1+1}=1+\frac12-\frac18+\dots\approx1+\frac12-\frac18=\frac{11}8=1.375$$</p> <p>which is much clearer to me, since it avoids having to take decimals raised to powers and gives you something you can easily do by hand.</p> <p>$$1.375^2=1.890625$$</p> <p>Obviously it approaches the correct value as you take more terms.</p> <hr> <h2>Method 2:</h2> <p>This is called fixed-point iteration/Newton's method, and it basically goes like this:</p> <p>$$x=\sqrt2\implies x^2=2$$</p> <p>$$2x^2=2+x^2$$</p> <p>Divide both sides by $2x$ and we get</p> <p>$$x=\frac{2+x^2}{2x}$$</p> <p>Now, interestingly, I'm going to call the $x$'s on the left $x_{n+1}$ and the $x$'s on the right $x_n$, so</p> <p>$$x_{n+1}=\frac{2+(x_n)^2}{2x_n}$$</p> <p>and with $x_0\approx\sqrt2$, we will then have $x=\lim_{n\to\infty}x_n$. For example, with $x_0=1$,</p> <p>$x_0=1$</p> <p>$x_1=\frac{2+1^2}{2(1)}=\frac32=1.5$</p> <p>$x_2=\frac{2+(3/2)^2}{2(3/2)}=\frac{17}{12}=1.41666\dots$</p> <p>$x_3=\dots=\frac{577}{408}=1.414215686$</p> <p>And one can quickly check that $(x_3)^2=2.000006007\dots$, which is pretty much the square root of $2$.</p> Answer 7: <p>It is often easier to begin with a slightly different problem to simplify slightly. Stay with me and you will see why.</p> <p>Instead of finding the square root of 2, look at the square root of 200 and note that:</p> <p>200 = 14² + 4</p> <p>This is part of a general approach that says if N = a² + b, where b &lt;&lt; a, then we may write:</p> <p><span class="math-container">$$\sqrt N = a(1 + \frac b {a^2} ) =a[1 + \frac b {2a^2} - \frac {b^2} {8a^4} + \frac {b^3} {16a^6} - ... 
] = a + \frac b {2a} - \frac {b^2} {8a^3} + ...$$</span></p> <p>Now, if we use N = 200, a = 14, b = 4, we get the first two terms give:</p> <p><span class="math-container">$$ \sqrt 200 = 14\frac 1 7 $$</span></p> <p>That is <span class="math-container">$ \sqrt 2 = 1.4142857...$</span> which is correct to 5 significant figures. By including the next term, we get <span class="math-container">$\sqrt 200 = 14 + \frac 1 7 - \frac 1 {1372}$</span>. This gives the superb result:</p> <p><span class="math-container">$ \sqrt 2 = 1.414212827...$</span>, which is correct to 6 significant figures.</p> <p><strong>APPENDIX - Another example of</strong> <span class="math-container">$\sqrt{13}$</span></p> <p>Using the same approach and only the first two terms, we note that 36² = 1296. Thus, <span class="math-container">$\sqrt{1300} = 36\frac 1 {18} = $</span>. This gives:</p> <p><span class="math-container">$\sqrt{13} = 3.60555 $</span>, correct to 6 significant figures.</p> Answer 8: <p>In fact you can take any two numbers which can be added to get 2 (not nesserly 0.01 but at least you should know the root of one of them </p> <p>So for example $\sqrt{2} = {(1+1)^{1/2}}$ </p> <p>Know all what you need is to expand it using bio theorem and for 2 terms you ll get 1.5</p> <p>$1+(0.5)\cdot 1= 1.5 $ Which is not too bad approximation for $\sqrt{2} = 1.4142135624 $</p> Answer 9: <blockquote> <p>We want to (manually) approximate <span class="math-container">$\sqrt{2}$</span> by using the first few terms of the binomial series expansion of <span class="math-container">\begin{align*} \sqrt{1-2x}&amp;= \sum_{n=0}^\infty \binom{\frac{1}{2}}{n}(-2x)^n\qquad\qquad\qquad\qquad |x|&lt;\frac{1}{2}\\ &amp;= 1-x-\frac{1}{2}x^2-\frac{1}{2}x^3+\cdots\tag{1} \end{align*}</span> Here we look for a way to determine appropriate values of <span class="math-container">$x$</span> using the binomial expansion.</p> </blockquote> <p>In order to apply (1) we are looking for a number <span 
class="math-container">$y$</span> with <span class="math-container">\begin{align*} \sqrt{1-2x}&amp;=\sqrt{2y^2}=y\sqrt{2}\tag{2}\\ \color{blue}{\sqrt{2}}&amp;\color{blue}{=\frac{1}{y}\sqrt{1-2x}} \end{align*}</span></p> <p>We see it is convenient to choose <span class="math-container">$y$</span> to be a square number which can be easily factored out from the root. We obtain from (2) <span class="math-container">\begin{align*} 1-2x&amp;=2y^2\\ \color{blue}{x}&amp;\color{blue}{=\frac{1}{2}-y^2}\tag{3} \end{align*}</span></p> <blockquote> <p>When looking for an appropriate <span class="math-container">$y$</span> which fulfills (3) there are some aspects to consider:</p> <ul> <li><p>We have to respect the radius of convergence <span class="math-container">$|x|&lt;\frac{1}{2}$</span>.</p></li> <li><p>Since we want to calculate an approximation of <span class="math-container">$\sqrt{2}$</span> by hand we should take <span class="math-container">$y\in\mathbb{Q}$</span> with rather small numbers as numerator and denominator.</p></li> <li><p>Last but not least: We want to find a value <span class="math-container">$x$</span> which provides a good approximation for <span class="math-container">$\sqrt{2}$</span>.</p></li> </ul> <p>We will see it's not hard to find values which have these properties.</p> </blockquote> <p>We see in (1) a good approximation is given if <span class="math-container">$x$</span> is close to <span class="math-container">$0$</span>. If <span class="math-container">$x$</span> is close to zero we will also fulfill the convergence condition. <span class="math-container">$x$</span> close to zero means that in (3) we have to choose <span class="math-container">$y$</span> so that <span class="math-container">$ y^2 $</span> is close to <span class="math-container">$\frac{1}{2}$</span>. We have already (1) and (3) appropriately considered. 
Now we want to find small natural numbers <span class="math-container">$a,b$</span> so that<br> <span class="math-container">\begin{align*} y^2=\frac{a^2}{b^2}\approx \frac{1}{2} \end{align*}</span> This can be done easily. When going through small numbers of <span class="math-container">$a$</span> and <span class="math-container">$b$</span> whose squares are apart by a factor <span class="math-container">$2$</span> we might quickly come to <span class="math-container">$100$</span> and <span class="math-container">$49$</span>. These are two small squares and we have <span class="math-container">$2\cdot 49=98$</span> close to <span class="math-container">$100$</span>. That's all.</p> <blockquote> <p>Now it's time to harvest. We choose <span class="math-container">$y^2=\frac{49}{100}$</span> resp. <span class="math-container">$\color{blue}{y=\frac{7}{10}}$</span>. We obtain for <span class="math-container">$x$</span> from (3) <span class="math-container">\begin{align*} x=\frac{1}{2}-y^2=\frac{1}{2}-\frac{49}{100}=\frac{1}{100} \end{align*}</span> We have now a nice value <span class="math-container">$\color{blue}{x=\frac{1}{100}}$</span> and we finally get from (1) the approximation: <span class="math-container">\begin{align*} \color{blue}{\sqrt{2}} \approx \frac{10}{7}\left(1- 10^{-2}-\frac{1}{2}\cdot 10^{-4}-\frac{1}{2}\cdot 10^{-6}\right)\color{blue}{=1.414\,213\,5}71\ldots \end{align*}</span></p> <p>We have <span class="math-container">$\color{blue}{\sqrt{2}=1.414\,213\,5}62\ldots$</span> with an approximation error <span class="math-container">$\approx 9.055\times 10^{-9}$</span>. This result is quite impressive when considering that we have used just four terms of the binomial series.</p> </blockquote> <p><em>Note:</em> In a section about binomial series expansion in <em><a href="https://rads.stackoverflow.com/amzn/click/com/014014739X" rel="noreferrer" rel="nofollow noreferrer">Journey through Genius</a></em> by W. 
Dunham the author cites Newton: <em>Extraction of roots are much shortened by this theorem</em>, indicating how valuable this technique was for Newton.</p> Answer 10: <p>Personally, I wouldn't have done it that way. So here is how I would've done it:</p> <h2>Method 1:</h2> <p>$$\sqrt2=\sqrt{1+1}=1+\frac12-\frac18+\dots\approx1+\frac12-\frac18=\frac{11}8=1.375$$</p> <p>which is much clearer to me, since it avoids having to take decimals raised to powers and gives you something you can easily do by hand.</p> <p>$$1.375^2=1.890625$$</p> <p>Obviously it approaches the correct value as you take more terms.</p> <hr> <h2>Method 2:</h2> <p>This is called fixed-point iteration/Newton's method, and it basically goes like this:</p> <p>$$x=\sqrt2\implies x^2=2$$</p> <p>$$2x^2=2+x^2$$</p> <p>Divide both sides by $2x$ and we get</p> <p>$$x=\frac{2+x^2}{2x}$$</p> <p>Now, interestingly, I'm going to call the $x$'s on the left $x_{n+1}$ and the $x$'s on the right $x_n$, so</p> <p>$$x_{n+1}=\frac{2+(x_n)^2}{2x_n}$$</p> <p>and with $x_0\approx\sqrt2$, we will then have $x=\lim_{n\to\infty}x_n$. For example, with $x_0=1$,</p> <p>$x_0=1$</p> <p>$x_1=\frac{2+1^2}{2(1)}=\frac32=1.5$</p> <p>$x_2=\frac{2+(3/2)^2}{2(3/2)}=\frac{17}{12}=1.41666\dots$</p> <p>$x_3=\dots=\frac{577}{408}=1.414215686$</p> <p>And one can quickly check that $(x_3)^2=2.000006007\dots$, which is pretty much the square root of $2$.</p> Answer 11: <p>It is often easier to begin with a slightly different problem to simplify slightly. Stay with me and you will see why.</p> <p>Instead of finding the square root of 2, look at the square root of 200 and note that:</p> <p>200 = 14² + 4</p> <p>This is part of a general approach that says if N = a² + b, where b &lt;&lt; a, then we may write:</p> <p><span class="math-container">$$\sqrt N = a(1 + \frac b {a^2} ) =a[1 + \frac b {2a^2} - \frac {b^2} {8a^4} + \frac {b^3} {16a^6} - ... 
] = a + \frac b {2a} - \frac {b^2} {8a^3} + ...$$</span></p> <p>Now, if we use N = 200, a = 14, b = 4, we get the first two terms give:</p> <p><span class="math-container">$$ \sqrt 200 = 14\frac 1 7 $$</span></p> <p>That is <span class="math-container">$ \sqrt 2 = 1.4142857...$</span> which is correct to 5 significant figures. By including the next term, we get <span class="math-container">$\sqrt 200 = 14 + \frac 1 7 - \frac 1 {1372}$</span>. This gives the superb result:</p> <p><span class="math-container">$ \sqrt 2 = 1.414212827...$</span>, which is correct to 6 significant figures.</p> <p><strong>APPENDIX - Another example of</strong> <span class="math-container">$\sqrt{13}$</span></p> <p>Using the same approach and only the first two terms, we note that 36² = 1296. Thus, <span class="math-container">$\sqrt{1300} = 36\frac 1 {18} = $</span>. This gives:</p> <p><span class="math-container">$\sqrt{13} = 3.60555 $</span>, correct to 6 significant figures.</p> Answer 12: <p>In fact you can take any two numbers which can be added to get 2 (not nesserly 0.01 but at least you should know the root of one of them </p> <p>So for example $\sqrt{2} = {(1+1)^{1/2}}$ </p> <p>Know all what you need is to expand it using bio theorem and for 2 terms you ll get 1.5</p> <p>$1+(0.5)\cdot 1= 1.5 $ Which is not too bad approximation for $\sqrt{2} = 1.4142135624 $</p> Answer 13: <blockquote> <p>We want to (manually) approximate <span class="math-container">$\sqrt{2}$</span> by using the first few terms of the binomial series expansion of <span class="math-container">\begin{align*} \sqrt{1-2x}&amp;= \sum_{n=0}^\infty \binom{\frac{1}{2}}{n}(-2x)^n\qquad\qquad\qquad\qquad |x|&lt;\frac{1}{2}\\ &amp;= 1-x-\frac{1}{2}x^2-\frac{1}{2}x^3+\cdots\tag{1} \end{align*}</span> Here we look for a way to determine appropriate values of <span class="math-container">$x$</span> using the binomial expansion.</p> </blockquote> <p>In order to apply (1) we are looking for a number <span 
class="math-container">$y$</span> with <span class="math-container">\begin{align*} \sqrt{1-2x}&amp;=\sqrt{2y^2}=y\sqrt{2}\tag{2}\\ \color{blue}{\sqrt{2}}&amp;\color{blue}{=\frac{1}{y}\sqrt{1-2x}} \end{align*}</span></p> <p>We see it is convenient to choose <span class="math-container">$y$</span> to be a square number which can be easily factored out from the root. We obtain from (2) <span class="math-container">\begin{align*} 1-2x&amp;=2y^2\\ \color{blue}{x}&amp;\color{blue}{=\frac{1}{2}-y^2}\tag{3} \end{align*}</span></p> <blockquote> <p>When looking for an appropriate <span class="math-container">$y$</span> which fulfills (3) there are some aspects to consider:</p> <ul> <li><p>We have to respect the radius of convergence <span class="math-container">$|x|&lt;\frac{1}{2}$</span>.</p></li> <li><p>Since we want to calculate an approximation of <span class="math-container">$\sqrt{2}$</span> by hand we should take <span class="math-container">$y\in\mathbb{Q}$</span> with rather small numbers as numerator and denominator.</p></li> <li><p>Last but not least: We want to find a value <span class="math-container">$x$</span> which provides a good approximation for <span class="math-container">$\sqrt{2}$</span>.</p></li> </ul> <p>We will see it's not hard to find values which have these properties.</p> </blockquote> <p>We see in (1) a good approximation is given if <span class="math-container">$x$</span> is close to <span class="math-container">$0$</span>. If <span class="math-container">$x$</span> is close to zero we will also fulfill the convergence condition. <span class="math-container">$x$</span> close to zero means that in (3) we have to choose <span class="math-container">$y$</span> so that <span class="math-container">$ y^2 $</span> is close to <span class="math-container">$\frac{1}{2}$</span>. We have already (1) and (3) appropriately considered. 
Now we want to find small natural numbers <span class="math-container">$a,b$</span> so that<br> <span class="math-container">\begin{align*} y^2=\frac{a^2}{b^2}\approx \frac{1}{2} \end{align*}</span> This can be done easily. When going through small numbers of <span class="math-container">$a$</span> and <span class="math-container">$b$</span> whose squares are apart by a factor <span class="math-container">$2$</span> we might quickly come to <span class="math-container">$100$</span> and <span class="math-container">$49$</span>. These are two small squares and we have <span class="math-container">$2\cdot 49=98$</span> close to <span class="math-container">$100$</span>. That's all.</p> <blockquote> <p>Now it's time to harvest. We choose <span class="math-container">$y^2=\frac{49}{100}$</span> resp. <span class="math-container">$\color{blue}{y=\frac{7}{10}}$</span>. We obtain for <span class="math-container">$x$</span> from (3) <span class="math-container">\begin{align*} x=\frac{1}{2}-y^2=\frac{1}{2}-\frac{49}{100}=\frac{1}{100} \end{align*}</span> We have now a nice value <span class="math-container">$\color{blue}{x=\frac{1}{100}}$</span> and we finally get from (1) the approximation: <span class="math-container">\begin{align*} \color{blue}{\sqrt{2}} \approx \frac{10}{7}\left(1- 10^{-2}-\frac{1}{2}\cdot 10^{-4}-\frac{1}{2}\cdot 10^{-6}\right)\color{blue}{=1.414\,213\,5}71\ldots \end{align*}</span></p> <p>We have <span class="math-container">$\color{blue}{\sqrt{2}=1.414\,213\,5}62\ldots$</span> with an approximation error <span class="math-container">$\approx 9.055\times 10^{-9}$</span>. This result is quite impressive when considering that we have used just four terms of the binomial series.</p> </blockquote> <p><em>Note:</em> In a section about binomial series expansion in <em><a href="https://rads.stackoverflow.com/amzn/click/com/014014739X" rel="noreferrer" rel="nofollow noreferrer">Journey through Genius</a></em> by W. 
Dunham, the author cites Newton: <em>Extraction of roots are much shortened by this theorem</em>, indicating how valuable this technique was for Newton.</p> Answer 14: <p>Personally, I wouldn't have done it that way. So here is how I would've done it:</p> <h2>Method 1:</h2> <p>$$\sqrt2=\sqrt{1+1}=1+\frac12-\frac18+\dots\approx1+\frac12-\frac18=\frac{11}8=1.375$$</p> <p>which is much clearer to me, since it avoids having to take decimals raised to powers and gives you something you can easily do by hand.</p> <p>$$1.375^2=1.890625$$</p> <p>Obviously it approaches the correct value as you take more terms.</p> <hr> <h2>Method 2:</h2> <p>This is called fixed-point iteration/Newton's method, and it basically goes like this:</p> <p>$$x=\sqrt2\implies x^2=2$$</p> <p>$$2x^2=2+x^2$$</p> <p>Divide both sides by $2x$ and we get</p> <p>$$x=\frac{2+x^2}{2x}$$</p> <p>Now, interestingly, I'm going to call the $x$'s on the left $x_{n+1}$ and the $x$'s on the right $x_n$, so</p> <p>$$x_{n+1}=\frac{2+(x_n)^2}{2x_n}$$</p> <p>and with $x_0\approx\sqrt2$, we will then have $x=\lim_{n\to\infty}x_n$. For example, with $x_0=1$,</p> <p>$x_0=1$</p> <p>$x_1=\frac{2+1^2}{2(1)}=\frac32=1.5$</p> <p>$x_2=\frac{2+(3/2)^2}{2(3/2)}=\frac{17}{12}=1.41666\dots$</p> <p>$x_3=\dots=\frac{577}{408}=1.414215686$</p> <p>And one can quickly check that $(x_3)^2=2.000006007\dots$, which is pretty much the square root of $2$.</p> Answer 15: <p>It is often easier to begin with a slightly different problem that simplifies the computation. Stay with me and you will see why.</p> <p>Instead of finding the square root of 2, look at the square root of 200 and note that:</p> <p>200 = 14² + 4</p> <p>This is part of a general approach that says if N = a² + b, where b &lt;&lt; a², then we may write:</p> <p><span class="math-container">$$\sqrt N = a\sqrt{1 + \frac b {a^2}} = a[1 + \frac b {2a^2} - \frac {b^2} {8a^4} + \frac {b^3} {16a^6} - ... 
] = a + \frac b {2a} - \frac {b^2} {8a^3} + ...$$</span></p> <p>Now, if we use N = 200, a = 14, b = 4, we get the first two terms give:</p> <p><span class="math-container">$$ \sqrt 200 = 14\frac 1 7 $$</span></p> <p>That is <span class="math-container">$ \sqrt 2 = 1.4142857...$</span> which is correct to 5 significant figures. By including the next term, we get <span class="math-container">$\sqrt 200 = 14 + \frac 1 7 - \frac 1 {1372}$</span>. This gives the superb result:</p> <p><span class="math-container">$ \sqrt 2 = 1.414212827...$</span>, which is correct to 6 significant figures.</p> <p><strong>APPENDIX - Another example of</strong> <span class="math-container">$\sqrt{13}$</span></p> <p>Using the same approach and only the first two terms, we note that 36² = 1296. Thus, <span class="math-container">$\sqrt{1300} = 36\frac 1 {18} = $</span>. This gives:</p> <p><span class="math-container">$\sqrt{13} = 3.60555 $</span>, correct to 6 significant figures.</p> Answer 16: <p>In fact you can take any two numbers which can be added to get 2 (not nesserly 0.01 but at least you should know the root of one of them </p> <p>So for example $\sqrt{2} = {(1+1)^{1/2}}$ </p> <p>Know all what you need is to expand it using bio theorem and for 2 terms you ll get 1.5</p> <p>$1+(0.5)\cdot 1= 1.5 $ Which is not too bad approximation for $\sqrt{2} = 1.4142135624 $</p> Answer 17: <blockquote> <p>We want to (manually) approximate <span class="math-container">$\sqrt{2}$</span> by using the first few terms of the binomial series expansion of <span class="math-container">\begin{align*} \sqrt{1-2x}&amp;= \sum_{n=0}^\infty \binom{\frac{1}{2}}{n}(-2x)^n\qquad\qquad\qquad\qquad |x|&lt;\frac{1}{2}\\ &amp;= 1-x-\frac{1}{2}x^2-\frac{1}{2}x^3+\cdots\tag{1} \end{align*}</span> Here we look for a way to determine appropriate values of <span class="math-container">$x$</span> using the binomial expansion.</p> </blockquote> <p>In order to apply (1) we are looking for a number <span 
class="math-container">$y$</span> with <span class="math-container">\begin{align*} \sqrt{1-2x}&amp;=\sqrt{2y^2}=y\sqrt{2}\tag{2}\\ \color{blue}{\sqrt{2}}&amp;\color{blue}{=\frac{1}{y}\sqrt{1-2x}} \end{align*}</span></p> <p>We see it is convenient to choose <span class="math-container">$y$</span> to be a square number which can be easily factored out from the root. We obtain from (2) <span class="math-container">\begin{align*} 1-2x&amp;=2y^2\\ \color{blue}{x}&amp;\color{blue}{=\frac{1}{2}-y^2}\tag{3} \end{align*}</span></p> <blockquote> <p>When looking for an appropriate <span class="math-container">$y$</span> which fulfills (3) there are some aspects to consider:</p> <ul> <li><p>We have to respect the radius of convergence <span class="math-container">$|x|&lt;\frac{1}{2}$</span>.</p></li> <li><p>Since we want to calculate an approximation of <span class="math-container">$\sqrt{2}$</span> by hand we should take <span class="math-container">$y\in\mathbb{Q}$</span> with rather small numbers as numerator and denominator.</p></li> <li><p>Last but not least: We want to find a value <span class="math-container">$x$</span> which provides a good approximation for <span class="math-container">$\sqrt{2}$</span>.</p></li> </ul> <p>We will see it's not hard to find values which have these properties.</p> </blockquote> <p>We see in (1) a good approximation is given if <span class="math-container">$x$</span> is close to <span class="math-container">$0$</span>. If <span class="math-container">$x$</span> is close to zero we will also fulfill the convergence condition. <span class="math-container">$x$</span> close to zero means that in (3) we have to choose <span class="math-container">$y$</span> so that <span class="math-container">$ y^2 $</span> is close to <span class="math-container">$\frac{1}{2}$</span>. We have already (1) and (3) appropriately considered. 
Now we want to find small natural numbers <span class="math-container">$a,b$</span> so that<br> <span class="math-container">\begin{align*} y^2=\frac{a^2}{b^2}\approx \frac{1}{2} \end{align*}</span> This can be done easily. When going through small numbers of <span class="math-container">$a$</span> and <span class="math-container">$b$</span> whose squares are apart by a factor <span class="math-container">$2$</span> we might quickly come to <span class="math-container">$100$</span> and <span class="math-container">$49$</span>. These are two small squares and we have <span class="math-container">$2\cdot 49=98$</span> close to <span class="math-container">$100$</span>. That's all.</p> <blockquote> <p>Now it's time to harvest. We choose <span class="math-container">$y^2=\frac{49}{100}$</span> resp. <span class="math-container">$\color{blue}{y=\frac{7}{10}}$</span>. We obtain for <span class="math-container">$x$</span> from (3) <span class="math-container">\begin{align*} x=\frac{1}{2}-y^2=\frac{1}{2}-\frac{49}{100}=\frac{1}{100} \end{align*}</span> We have now a nice value <span class="math-container">$\color{blue}{x=\frac{1}{100}}$</span> and we finally get from (1) the approximation: <span class="math-container">\begin{align*} \color{blue}{\sqrt{2}} \approx \frac{10}{7}\left(1- 10^{-2}-\frac{1}{2}\cdot 10^{-4}-\frac{1}{2}\cdot 10^{-6}\right)\color{blue}{=1.414\,213\,5}71\ldots \end{align*}</span></p> <p>We have <span class="math-container">$\color{blue}{\sqrt{2}=1.414\,213\,5}62\ldots$</span> with an approximation error <span class="math-container">$\approx 9.055\times 10^{-9}$</span>. This result is quite impressive when considering that we have used just four terms of the binomial series.</p> </blockquote> <p><em>Note:</em> In a section about binomial series expansion in <em><a href="https://rads.stackoverflow.com/amzn/click/com/014014739X" rel="noreferrer" rel="nofollow noreferrer">Journey through Genius</a></em> by W. 
Dunham the author cites Newton: <em>Extraction of roots are much shortened by this theorem</em>, indicating how valuable this technique was for Newton.</p> Answer 18: <p>Personally, I wouldn't have done it that way. So here is how I would've done it:</p> <h2>Method 1:</h2> <p>$$\sqrt2=\sqrt{1+1}=1+\frac12-\frac18+\dots\approx1+\frac12-\frac18=\frac{11}8=1.375$$</p> <p>which is much clearer to me, since it avoids having to take decimals raised to powers and gives you something you can easily do by hand.</p> <p>$$1.375^2=1.890625$$</p> <p>Obviously it approaches the correct value as you take more terms.</p> <hr> <h2>Method 2:</h2> <p>This is called fixed-point iteration/Newton's method, and it basically goes like this:</p> <p>$$x=\sqrt2\implies x^2=2$$</p> <p>$$2x^2=2+x^2$$</p> <p>Divide both sides by $2x$ and we get</p> <p>$$x=\frac{2+x^2}{2x}$$</p> <p>Now, interestingly, I'm going to call the $x$'s on the left $x_{n+1}$ and the $x$'s on the right $x_n$, so</p> <p>$$x_{n+1}=\frac{2+(x_n)^2}{2x_n}$$</p> <p>and with $x_0\approx\sqrt2$, we will then have $x=\lim_{n\to\infty}x_n$. For example, with $x_0=1$,</p> <p>$x_0=1$</p> <p>$x_1=\frac{2+1^2}{2(1)}=\frac32=1.5$</p> <p>$x_2=\frac{2+(3/2)^2}{2(3/2)}=\frac{17}{12}=1.41666\dots$</p> <p>$x_3=\dots=\frac{577}{408}=1.414215686$</p> <p>And one can quickly check that $(x_3)^2=2.000006007\dots$, which is pretty much the square root of $2$.</p> Answer 19: <p>It is often easier to begin with a slightly different problem to simplify slightly. Stay with me and you will see why.</p> <p>Instead of finding the square root of 2, look at the square root of 200 and note that:</p> <p>200 = 14² + 4</p> <p>This is part of a general approach that says if N = a² + b, where b &lt;&lt; a, then we may write:</p> <p><span class="math-container">$$\sqrt N = a(1 + \frac b {a^2} ) =a[1 + \frac b {2a^2} - \frac {b^2} {8a^4} + \frac {b^3} {16a^6} - ... 
] = a + \frac b {2a} - \frac {b^2} {8a^3} + ...$$</span></p> <p>Now, if we use N = 200, a = 14, b = 4, we get the first two terms give:</p> <p><span class="math-container">$$ \sqrt 200 = 14\frac 1 7 $$</span></p> <p>That is <span class="math-container">$ \sqrt 2 = 1.4142857...$</span> which is correct to 5 significant figures. By including the next term, we get <span class="math-container">$\sqrt 200 = 14 + \frac 1 7 - \frac 1 {1372}$</span>. This gives the superb result:</p> <p><span class="math-container">$ \sqrt 2 = 1.414212827...$</span>, which is correct to 6 significant figures.</p> <p><strong>APPENDIX - Another example of</strong> <span class="math-container">$\sqrt{13}$</span></p> <p>Using the same approach and only the first two terms, we note that 36² = 1296. Thus, <span class="math-container">$\sqrt{1300} = 36\frac 1 {18} = $</span>. This gives:</p> <p><span class="math-container">$\sqrt{13} = 3.60555 $</span>, correct to 6 significant figures.</p> Answer 20: <p>In fact you can take any two numbers which can be added to get 2 (not nesserly 0.01 but at least you should know the root of one of them </p> <p>So for example $\sqrt{2} = {(1+1)^{1/2}}$ </p> <p>Know all what you need is to expand it using bio theorem and for 2 terms you ll get 1.5</p> <p>$1+(0.5)\cdot 1= 1.5 $ Which is not too bad approximation for $\sqrt{2} = 1.4142135624 $</p> Answer 21: <blockquote> <p>We want to (manually) approximate <span class="math-container">$\sqrt{2}$</span> by using the first few terms of the binomial series expansion of <span class="math-container">\begin{align*} \sqrt{1-2x}&amp;= \sum_{n=0}^\infty \binom{\frac{1}{2}}{n}(-2x)^n\qquad\qquad\qquad\qquad |x|&lt;\frac{1}{2}\\ &amp;= 1-x-\frac{1}{2}x^2-\frac{1}{2}x^3+\cdots\tag{1} \end{align*}</span> Here we look for a way to determine appropriate values of <span class="math-container">$x$</span> using the binomial expansion.</p> </blockquote> <p>In order to apply (1) we are looking for a number <span 
class="math-container">$y$</span> with <span class="math-container">\begin{align*} \sqrt{1-2x}&amp;=\sqrt{2y^2}=y\sqrt{2}\tag{2}\\ \color{blue}{\sqrt{2}}&amp;\color{blue}{=\frac{1}{y}\sqrt{1-2x}} \end{align*}</span></p> <p>We see it is convenient to choose <span class="math-container">$y$</span> to be a square number which can be easily factored out from the root. We obtain from (2) <span class="math-container">\begin{align*} 1-2x&amp;=2y^2\\ \color{blue}{x}&amp;\color{blue}{=\frac{1}{2}-y^2}\tag{3} \end{align*}</span></p> <blockquote> <p>When looking for an appropriate <span class="math-container">$y$</span> which fulfills (3) there are some aspects to consider:</p> <ul> <li><p>We have to respect the radius of convergence <span class="math-container">$|x|&lt;\frac{1}{2}$</span>.</p></li> <li><p>Since we want to calculate an approximation of <span class="math-container">$\sqrt{2}$</span> by hand we should take <span class="math-container">$y\in\mathbb{Q}$</span> with rather small numbers as numerator and denominator.</p></li> <li><p>Last but not least: We want to find a value <span class="math-container">$x$</span> which provides a good approximation for <span class="math-container">$\sqrt{2}$</span>.</p></li> </ul> <p>We will see it's not hard to find values which have these properties.</p> </blockquote> <p>We see in (1) a good approximation is given if <span class="math-container">$x$</span> is close to <span class="math-container">$0$</span>. If <span class="math-container">$x$</span> is close to zero we will also fulfill the convergence condition. <span class="math-container">$x$</span> close to zero means that in (3) we have to choose <span class="math-container">$y$</span> so that <span class="math-container">$ y^2 $</span> is close to <span class="math-container">$\frac{1}{2}$</span>. We have already (1) and (3) appropriately considered. 
Now we want to find small natural numbers <span class="math-container">$a,b$</span> so that<br> <span class="math-container">\begin{align*} y^2=\frac{a^2}{b^2}\approx \frac{1}{2} \end{align*}</span> This can be done easily. When going through small numbers of <span class="math-container">$a$</span> and <span class="math-container">$b$</span> whose squares are apart by a factor <span class="math-container">$2$</span> we might quickly come to <span class="math-container">$100$</span> and <span class="math-container">$49$</span>. These are two small squares and we have <span class="math-container">$2\cdot 49=98$</span> close to <span class="math-container">$100$</span>. That's all.</p> <blockquote> <p>Now it's time to harvest. We choose <span class="math-container">$y^2=\frac{49}{100}$</span> resp. <span class="math-container">$\color{blue}{y=\frac{7}{10}}$</span>. We obtain for <span class="math-container">$x$</span> from (3) <span class="math-container">\begin{align*} x=\frac{1}{2}-y^2=\frac{1}{2}-\frac{49}{100}=\frac{1}{100} \end{align*}</span> We have now a nice value <span class="math-container">$\color{blue}{x=\frac{1}{100}}$</span> and we finally get from (1) the approximation: <span class="math-container">\begin{align*} \color{blue}{\sqrt{2}} \approx \frac{10}{7}\left(1- 10^{-2}-\frac{1}{2}\cdot 10^{-4}-\frac{1}{2}\cdot 10^{-6}\right)\color{blue}{=1.414\,213\,5}71\ldots \end{align*}</span></p> <p>We have <span class="math-container">$\color{blue}{\sqrt{2}=1.414\,213\,5}62\ldots$</span> with an approximation error <span class="math-container">$\approx 9.055\times 10^{-9}$</span>. This result is quite impressive when considering that we have used just four terms of the binomial series.</p> </blockquote> <p><em>Note:</em> In a section about binomial series expansion in <em><a href="https://rads.stackoverflow.com/amzn/click/com/014014739X" rel="noreferrer" rel="nofollow noreferrer">Journey through Genius</a></em> by W. 
Dunham the author cites Newton: <em>Extraction of roots are much shortened by this theorem</em>, indicating how valuable this technique was for Newton.</p> Answer 22: <p>Personally, I wouldn't have done it that way. So here is how I would've done it:</p> <h2>Method 1:</h2> <p>$$\sqrt2=\sqrt{1+1}=1+\frac12-\frac18+\dots\approx1+\frac12-\frac18=\frac{11}8=1.375$$</p> <p>which is much clearer to me, since it avoids having to take decimals raised to powers and gives you something you can easily do by hand.</p> <p>$$1.375^2=1.890625$$</p> <p>Obviously it approaches the correct value as you take more terms.</p> <hr> <h2>Method 2:</h2> <p>This is called fixed-point iteration/Newton's method, and it basically goes like this:</p> <p>$$x=\sqrt2\implies x^2=2$$</p> <p>$$2x^2=2+x^2$$</p> <p>Divide both sides by $2x$ and we get</p> <p>$$x=\frac{2+x^2}{2x}$$</p> <p>Now, interestingly, I'm going to call the $x$'s on the left $x_{n+1}$ and the $x$'s on the right $x_n$, so</p> <p>$$x_{n+1}=\frac{2+(x_n)^2}{2x_n}$$</p> <p>and with $x_0\approx\sqrt2$, we will then have $x=\lim_{n\to\infty}x_n$. For example, with $x_0=1$,</p> <p>$x_0=1$</p> <p>$x_1=\frac{2+1^2}{2(1)}=\frac32=1.5$</p> <p>$x_2=\frac{2+(3/2)^2}{2(3/2)}=\frac{17}{12}=1.41666\dots$</p> <p>$x_3=\dots=\frac{577}{408}=1.414215686$</p> <p>And one can quickly check that $(x_3)^2=2.000006007\dots$, which is pretty much the square root of $2$.</p> Answer 23: <p>It is often easier to begin with a slightly different problem to simplify slightly. Stay with me and you will see why.</p> <p>Instead of finding the square root of 2, look at the square root of 200 and note that:</p> <p>200 = 14² + 4</p> <p>This is part of a general approach that says if N = a² + b, where b &lt;&lt; a, then we may write:</p> <p><span class="math-container">$$\sqrt N = a(1 + \frac b {a^2} ) =a[1 + \frac b {2a^2} - \frac {b^2} {8a^4} + \frac {b^3} {16a^6} - ... 
] = a + \frac b {2a} - \frac {b^2} {8a^3} + ...$$</span></p> <p>Now, if we use N = 200, a = 14, b = 4, we get the first two terms give:</p> <p><span class="math-container">$$ \sqrt 200 = 14\frac 1 7 $$</span></p> <p>That is <span class="math-container">$ \sqrt 2 = 1.4142857...$</span> which is correct to 5 significant figures. By including the next term, we get <span class="math-container">$\sqrt 200 = 14 + \frac 1 7 - \frac 1 {1372}$</span>. This gives the superb result:</p> <p><span class="math-container">$ \sqrt 2 = 1.414212827...$</span>, which is correct to 6 significant figures.</p> <p><strong>APPENDIX - Another example of</strong> <span class="math-container">$\sqrt{13}$</span></p> <p>Using the same approach and only the first two terms, we note that 36² = 1296. Thus, <span class="math-container">$\sqrt{1300} = 36\frac 1 {18} = $</span>. This gives:</p> <p><span class="math-container">$\sqrt{13} = 3.60555 $</span>, correct to 6 significant figures.</p> Answer 24: <p>In fact you can take any two numbers which can be added to get 2 (not nesserly 0.01 but at least you should know the root of one of them </p> <p>So for example $\sqrt{2} = {(1+1)^{1/2}}$ </p> <p>Know all what you need is to expand it using bio theorem and for 2 terms you ll get 1.5</p> <p>$1+(0.5)\cdot 1= 1.5 $ Which is not too bad approximation for $\sqrt{2} = 1.4142135624 $</p> Answer 25: <blockquote> <p>We want to (manually) approximate <span class="math-container">$\sqrt{2}$</span> by using the first few terms of the binomial series expansion of <span class="math-container">\begin{align*} \sqrt{1-2x}&amp;= \sum_{n=0}^\infty \binom{\frac{1}{2}}{n}(-2x)^n\qquad\qquad\qquad\qquad |x|&lt;\frac{1}{2}\\ &amp;= 1-x-\frac{1}{2}x^2-\frac{1}{2}x^3+\cdots\tag{1} \end{align*}</span> Here we look for a way to determine appropriate values of <span class="math-container">$x$</span> using the binomial expansion.</p> </blockquote> <p>In order to apply (1) we are looking for a number <span 
class="math-container">$y$</span> with <span class="math-container">\begin{align*} \sqrt{1-2x}&amp;=\sqrt{2y^2}=y\sqrt{2}\tag{2}\\ \color{blue}{\sqrt{2}}&amp;\color{blue}{=\frac{1}{y}\sqrt{1-2x}} \end{align*}</span></p> <p>We see it is convenient to choose <span class="math-container">$y$</span> to be a square number which can be easily factored out from the root. We obtain from (2) <span class="math-container">\begin{align*} 1-2x&amp;=2y^2\\ \color{blue}{x}&amp;\color{blue}{=\frac{1}{2}-y^2}\tag{3} \end{align*}</span></p> <blockquote> <p>When looking for an appropriate <span class="math-container">$y$</span> which fulfills (3) there are some aspects to consider:</p> <ul> <li><p>We have to respect the radius of convergence <span class="math-container">$|x|&lt;\frac{1}{2}$</span>.</p></li> <li><p>Since we want to calculate an approximation of <span class="math-container">$\sqrt{2}$</span> by hand we should take <span class="math-container">$y\in\mathbb{Q}$</span> with rather small numbers as numerator and denominator.</p></li> <li><p>Last but not least: We want to find a value <span class="math-container">$x$</span> which provides a good approximation for <span class="math-container">$\sqrt{2}$</span>.</p></li> </ul> <p>We will see it's not hard to find values which have these properties.</p> </blockquote> <p>We see in (1) a good approximation is given if <span class="math-container">$x$</span> is close to <span class="math-container">$0$</span>. If <span class="math-container">$x$</span> is close to zero we will also fulfill the convergence condition. <span class="math-container">$x$</span> close to zero means that in (3) we have to choose <span class="math-container">$y$</span> so that <span class="math-container">$ y^2 $</span> is close to <span class="math-container">$\frac{1}{2}$</span>. We have already (1) and (3) appropriately considered. 
Now we want to find small natural numbers <span class="math-container">$a,b$</span> so that<br> <span class="math-container">\begin{align*} y^2=\frac{a^2}{b^2}\approx \frac{1}{2} \end{align*}</span> This can be done easily. When going through small numbers of <span class="math-container">$a$</span> and <span class="math-container">$b$</span> whose squares are apart by a factor <span class="math-container">$2$</span> we might quickly come to <span class="math-container">$100$</span> and <span class="math-container">$49$</span>. These are two small squares and we have <span class="math-container">$2\cdot 49=98$</span> close to <span class="math-container">$100$</span>. That's all.</p> <blockquote> <p>Now it's time to harvest. We choose <span class="math-container">$y^2=\frac{49}{100}$</span> resp. <span class="math-container">$\color{blue}{y=\frac{7}{10}}$</span>. We obtain for <span class="math-container">$x$</span> from (3) <span class="math-container">\begin{align*} x=\frac{1}{2}-y^2=\frac{1}{2}-\frac{49}{100}=\frac{1}{100} \end{align*}</span> We have now a nice value <span class="math-container">$\color{blue}{x=\frac{1}{100}}$</span> and we finally get from (1) the approximation: <span class="math-container">\begin{align*} \color{blue}{\sqrt{2}} \approx \frac{10}{7}\left(1- 10^{-2}-\frac{1}{2}\cdot 10^{-4}-\frac{1}{2}\cdot 10^{-6}\right)\color{blue}{=1.414\,213\,5}71\ldots \end{align*}</span></p> <p>We have <span class="math-container">$\color{blue}{\sqrt{2}=1.414\,213\,5}62\ldots$</span> with an approximation error <span class="math-container">$\approx 9.055\times 10^{-9}$</span>. This result is quite impressive when considering that we have used just four terms of the binomial series.</p> </blockquote> <p><em>Note:</em> In a section about binomial series expansion in <em><a href="https://rads.stackoverflow.com/amzn/click/com/014014739X" rel="noreferrer" rel="nofollow noreferrer">Journey through Genius</a></em> by W. 
Dunham the author cites Newton: <em>Extraction of roots are much shortened by this theorem</em>, indicating how valuable this technique was for Newton.</p> Answer 26: <p>Personally, I wouldn't have done it that way. So here is how I would've done it:</p> <h2>Method 1:</h2> <p>$$\sqrt2=\sqrt{1+1}=1+\frac12-\frac18+\dots\approx1+\frac12-\frac18=\frac{11}8=1.375$$</p> <p>which is much clearer to me, since it avoids having to take decimals raised to powers and gives you something you can easily do by hand.</p> <p>$$1.375^2=1.890625$$</p> <p>Obviously it approaches the correct value as you take more terms.</p> <hr> <h2>Method 2:</h2> <p>This is called fixed-point iteration/Newton's method, and it basically goes like this:</p> <p>$$x=\sqrt2\implies x^2=2$$</p> <p>$$2x^2=2+x^2$$</p> <p>Divide both sides by $2x$ and we get</p> <p>$$x=\frac{2+x^2}{2x}$$</p> <p>Now, interestingly, I'm going to call the $x$'s on the left $x_{n+1}$ and the $x$'s on the right $x_n$, so</p> <p>$$x_{n+1}=\frac{2+(x_n)^2}{2x_n}$$</p> <p>and with $x_0\approx\sqrt2$, we will then have $x=\lim_{n\to\infty}x_n$. For example, with $x_0=1$,</p> <p>$x_0=1$</p> <p>$x_1=\frac{2+1^2}{2(1)}=\frac32=1.5$</p> <p>$x_2=\frac{2+(3/2)^2}{2(3/2)}=\frac{17}{12}=1.41666\dots$</p> <p>$x_3=\dots=\frac{577}{408}=1.414215686$</p> <p>And one can quickly check that $(x_3)^2=2.000006007\dots$, which is pretty much the square root of $2$.</p> Answer 27: <p>It is often easier to begin with a slightly different problem to simplify slightly. Stay with me and you will see why.</p> <p>Instead of finding the square root of 2, look at the square root of 200 and note that:</p> <p>200 = 14² + 4</p> <p>This is part of a general approach that says if N = a² + b, where b &lt;&lt; a, then we may write:</p> <p><span class="math-container">$$\sqrt N = a(1 + \frac b {a^2} ) =a[1 + \frac b {2a^2} - \frac {b^2} {8a^4} + \frac {b^3} {16a^6} - ... 
] = a + \frac b {2a} - \frac {b^2} {8a^3} + ...$$</span></p> <p>Now, if we use N = 200, a = 14, b = 4, we get the first two terms give:</p> <p><span class="math-container">$$ \sqrt 200 = 14\frac 1 7 $$</span></p> <p>That is <span class="math-container">$ \sqrt 2 = 1.4142857...$</span> which is correct to 5 significant figures. By including the next term, we get <span class="math-container">$\sqrt 200 = 14 + \frac 1 7 - \frac 1 {1372}$</span>. This gives the superb result:</p> <p><span class="math-container">$ \sqrt 2 = 1.414212827...$</span>, which is correct to 6 significant figures.</p> <p><strong>APPENDIX - Another example of</strong> <span class="math-container">$\sqrt{13}$</span></p> <p>Using the same approach and only the first two terms, we note that 36² = 1296. Thus, <span class="math-container">$\sqrt{1300} = 36\frac 1 {18} = $</span>. This gives:</p> <p><span class="math-container">$\sqrt{13} = 3.60555 $</span>, correct to 6 significant figures.</p> Answer 28: <p>In fact you can take any two numbers which can be added to get 2 (not nesserly 0.01 but at least you should know the root of one of them </p> <p>So for example $\sqrt{2} = {(1+1)^{1/2}}$ </p> <p>Know all what you need is to expand it using bio theorem and for 2 terms you ll get 1.5</p> <p>$1+(0.5)\cdot 1= 1.5 $ Which is not too bad approximation for $\sqrt{2} = 1.4142135624 $</p> Answer 29: <blockquote> <p>We want to (manually) approximate <span class="math-container">$\sqrt{2}$</span> by using the first few terms of the binomial series expansion of <span class="math-container">\begin{align*} \sqrt{1-2x}&amp;= \sum_{n=0}^\infty \binom{\frac{1}{2}}{n}(-2x)^n\qquad\qquad\qquad\qquad |x|&lt;\frac{1}{2}\\ &amp;= 1-x-\frac{1}{2}x^2-\frac{1}{2}x^3+\cdots\tag{1} \end{align*}</span> Here we look for a way to determine appropriate values of <span class="math-container">$x$</span> using the binomial expansion.</p> </blockquote> <p>In order to apply (1) we are looking for a number <span 
class="math-container">$y$</span> with <span class="math-container">\begin{align*} \sqrt{1-2x}&amp;=\sqrt{2y^2}=y\sqrt{2}\tag{2}\\ \color{blue}{\sqrt{2}}&amp;\color{blue}{=\frac{1}{y}\sqrt{1-2x}} \end{align*}</span></p> <p>We see it is convenient to choose <span class="math-container">$y$</span> to be a square number which can be easily factored out from the root. We obtain from (2) <span class="math-container">\begin{align*} 1-2x&amp;=2y^2\\ \color{blue}{x}&amp;\color{blue}{=\frac{1}{2}-y^2}\tag{3} \end{align*}</span></p> <blockquote> <p>When looking for an appropriate <span class="math-container">$y$</span> which fulfills (3) there are some aspects to consider:</p> <ul> <li><p>We have to respect the radius of convergence <span class="math-container">$|x|&lt;\frac{1}{2}$</span>.</p></li> <li><p>Since we want to calculate an approximation of <span class="math-container">$\sqrt{2}$</span> by hand we should take <span class="math-container">$y\in\mathbb{Q}$</span> with rather small numbers as numerator and denominator.</p></li> <li><p>Last but not least: We want to find a value <span class="math-container">$x$</span> which provides a good approximation for <span class="math-container">$\sqrt{2}$</span>.</p></li> </ul> <p>We will see it's not hard to find values which have these properties.</p> </blockquote> <p>We see in (1) a good approximation is given if <span class="math-container">$x$</span> is close to <span class="math-container">$0$</span>. If <span class="math-container">$x$</span> is close to zero we will also fulfill the convergence condition. <span class="math-container">$x$</span> close to zero means that in (3) we have to choose <span class="math-container">$y$</span> so that <span class="math-container">$ y^2 $</span> is close to <span class="math-container">$\frac{1}{2}$</span>. We have already (1) and (3) appropriately considered. 
Now we want to find small natural numbers <span class="math-container">$a,b$</span> so that<br> <span class="math-container">\begin{align*} y^2=\frac{a^2}{b^2}\approx \frac{1}{2} \end{align*}</span> This can be done easily. When going through small numbers of <span class="math-container">$a$</span> and <span class="math-container">$b$</span> whose squares are apart by a factor <span class="math-container">$2$</span> we might quickly come to <span class="math-container">$100$</span> and <span class="math-container">$49$</span>. These are two small squares and we have <span class="math-container">$2\cdot 49=98$</span> close to <span class="math-container">$100$</span>. That's all.</p> <blockquote> <p>Now it's time to harvest. We choose <span class="math-container">$y^2=\frac{49}{100}$</span> resp. <span class="math-container">$\color{blue}{y=\frac{7}{10}}$</span>. We obtain for <span class="math-container">$x$</span> from (3) <span class="math-container">\begin{align*} x=\frac{1}{2}-y^2=\frac{1}{2}-\frac{49}{100}=\frac{1}{100} \end{align*}</span> We have now a nice value <span class="math-container">$\color{blue}{x=\frac{1}{100}}$</span> and we finally get from (1) the approximation: <span class="math-container">\begin{align*} \color{blue}{\sqrt{2}} \approx \frac{10}{7}\left(1- 10^{-2}-\frac{1}{2}\cdot 10^{-4}-\frac{1}{2}\cdot 10^{-6}\right)\color{blue}{=1.414\,213\,5}71\ldots \end{align*}</span></p> <p>We have <span class="math-container">$\color{blue}{\sqrt{2}=1.414\,213\,5}62\ldots$</span> with an approximation error <span class="math-container">$\approx 9.055\times 10^{-9}$</span>. This result is quite impressive when considering that we have used just four terms of the binomial series.</p> </blockquote> <p><em>Note:</em> In a section about binomial series expansion in <em><a href="https://rads.stackoverflow.com/amzn/click/com/014014739X" rel="noreferrer" rel="nofollow noreferrer">Journey through Genius</a></em> by W. 
Dunham, the author cites Newton: <em>Extraction of roots are much shortened by this theorem</em>, indicating how valuable this technique was for Newton.</p> Answer 30: <p>Personally, I wouldn't have done it that way. Here is how I would've done it:</p> <h2>Method 1:</h2> <p>$$\sqrt2=\sqrt{1+1}=1+\frac12-\frac18+\dots\approx1+\frac12-\frac18=\frac{11}8=1.375$$</p> <p>which is much clearer to me, since it avoids taking decimals raised to powers and gives you something you can easily do by hand.</p> <p>$$1.375^2=1.890625$$</p> <p>The estimate is still rough because $x=1$ sits on the boundary of the series' convergence, but it approaches the correct value as you take more terms.</p> <hr> <h2>Method 2:</h2> <p>This is fixed-point iteration (equivalent here to Newton's method for $x^2-2=0$), and it goes like this:</p> <p>$$x=\sqrt2\implies x^2=2$$</p> <p>$$2x^2=2+x^2$$</p> <p>Dividing both sides by $2x$, we get</p> <p>$$x=\frac{2+x^2}{2x}$$</p> <p>Now call the $x$ on the left $x_{n+1}$ and the $x$'s on the right $x_n$, so</p> <p>$$x_{n+1}=\frac{2+x_n^2}{2x_n}$$</p> <p>and with $x_0\approx\sqrt2$, we have $\sqrt2=\lim_{n\to\infty}x_n$. For example, with $x_0=1$,</p> <p>$x_0=1$</p> <p>$x_1=\frac{2+1^2}{2(1)}=\frac32=1.5$</p> <p>$x_2=\frac{2+(3/2)^2}{2(3/2)}=\frac{17}{12}=1.41666\dots$</p> <p>$x_3=\frac{2+(17/12)^2}{2(17/12)}=\frac{577}{408}=1.414215686\ldots$</p> <p>One can quickly check that $(x_3)^2=2.000006007\dots$, so $x_3$ is already essentially the square root of $2$.</p> Answer 31: <p>It is often easier to begin with a slightly different problem. Stay with me and you will see why.</p> <p>Instead of finding the square root of 2, look at the square root of 200 and note that:</p> <p>200 = 14² + 4</p> <p>This is part of a general approach which says that if N = a² + b, where b &lt;&lt; a², then we may write:</p> <p><span class="math-container">$$\sqrt N = a\left(1 + \frac{b}{a^2}\right)^{1/2} = a\left[1 + \frac{b}{2a^2} - \frac{b^2}{8a^4} + \frac{b^3}{16a^6} - \dots\right] = a + \frac{b}{2a} - \frac{b^2}{8a^3} + \dots$$</span></p> <p>Now, with N = 200, a = 14, b = 4, the first two terms give:</p> <p><span class="math-container">$$ \sqrt{200} \approx 14\tfrac{1}{7} $$</span></p> <p>That is <span class="math-container">$ \sqrt 2 \approx 1.4142857\ldots$</span>, which matches the true value in the first 5 significant figures. Including the next term gives <span class="math-container">$\sqrt{200} \approx 14 + \frac 1 7 - \frac 1 {1372}$</span>, and with it the superb result:</p> <p><span class="math-container">$ \sqrt 2 \approx 1.414212827\ldots$</span>, correct to 6 significant figures.</p> <p><strong>APPENDIX: another example,</strong> <span class="math-container">$\sqrt{13}$</span></p> <p>Using the same approach and only the first two terms, note that 36² = 1296, so 1300 = 36² + 4. Thus <span class="math-container">$\sqrt{1300} \approx 36\tfrac 1 {18}$</span>, which gives:</p> <p><span class="math-container">$\sqrt{13} \approx 3.60555 $</span>, correct to 6 significant figures.</p> Answer 32: <p>In fact you can take any two numbers that add up to 2 (not necessarily 0.01; you just need to know the square root of one of them).</p> <p>So for example $\sqrt{2} = {(1+1)^{1/2}}$.</p> <p>Now all you need is to expand it with the binomial theorem; keeping two terms you get</p> <p>$1+(0.5)\cdot 1= 1.5$, which is not too bad an approximation for $\sqrt{2} = 1.4142135624\ldots$</p>
Dunham the author cites Newton: <em>Extraction of roots are much shortened by this theorem</em>, indicating how valuable this technique was for Newton.</p>
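The four-term binomial estimate above is easy to check numerically. A minimal sketch in Python (the function name and structure are my own, not taken from any answer):

```python
import math

def sqrt2_binomial(terms=4):
    # sqrt(2) = (10/7) * sqrt(1 - 2x) with x = 1/100, using the partial sums
    # of expansion (1): sqrt(1 - 2x) = 1 - x - x^2/2 - x^3/2 - ...
    x = 1 / 100
    coeffs = [1.0, -x, -x**2 / 2, -x**3 / 2]
    return (10 / 7) * sum(coeffs[:terms])

approx = sqrt2_binomial()
print(approx, abs(approx - math.sqrt(2)))  # error is about 9e-9
```

With only two terms the same function returns $\frac{10}{7}\cdot 0.99 \approx 1.41428$, already good to five significant figures, matching the hand computation above.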
50% probability of seeing all cards
2,094,506
8
https://math.stackexchange.com/questions/2094506/50-probability-of-seeing-all-cards
<p>I have a deck of $20$ different cards. </p> <p>Each time, a card is pulled randomly, then put back in the deck. We shuffle and proceed to pull another card (thus keeping the randomness of pulls).</p> <p>The question is this:</p> <p>After how many pulls is the probability of having seen all cards in the deck $50\%$?</p>
[ { "answer_id": 2094633, "score": 4, "text": "<p>Suppose a deck consists of $n$ cards $1,...,n$, where $n$ is a fixed positive integer. For $0 \\le k \\le n$, and $d \\ge 0$, let $p(k,d)$ be the probability, with $k$ cards not yet seen in previous draws, of seeing all the unseen cards (each at least once) in the next $d$ draws, where each draw consists of randomly selecting one card with replacement from a shuffled deck. Then $p(k,d)$ can be computed via the following recursion:\n$$\np(k,d)=\\begin{cases}\n1, &amp; \\text{if $k=0$} \\\\\n0, &amp; \\text{if $0 \\le d &lt; k$}\\\\\n\\left(\\dfrac{k}{n}\\right)p(k-1,d-1)+\\left(\\dfrac{n-k}{n}\\right)p(k,d-1) &amp; \\text{otherwise}\n\\end{cases}\n$$</p>\n\n<p>Explanation:\n<p>\nIf $k = 0$, there are no cards yet unseen, so the probability of seeing all the unseen cards is $1$.\n<p>\nIf $0 \\le d &lt; k$, the $d$ remaining draws are not enough to see the $k$ yet unseen cards, so the probability of seeing all the unseen cards is 0.\n<p>\nOtherwise, draw one card. If the drawn card is one of the $k$ unseen cards, that leaves $k-1$ unseen cards; if not, it's still $k$ unseen cards. Either way, the number of remaining draws goes down by $1$, from $d$ to $d-1$.\n \n<p>\nSetting $n=20$ and applying the above recursion, Maple gets</p>\n\n<p>$$p(20,66) &lt; 1/2 &lt; p(20,67)$$</p>\n\n<p>so the answer is $67$.</p>\n" }, { "answer_id": 3560310, "score": 0, "text": "<p>This is the <a href=\"https://en.wikipedia.org/wiki/Coupon_collector%27s_problem\" rel=\"nofollow noreferrer\">coupon collector&rsquo;s problem</a>. 
The probability to have seen all <span class=\"math-container\">$m$</span> cards after <span class=\"math-container\">$n$</span> attempts is</p>\n\n<p><span class=\"math-container\">$$\n\\def\\stir#1#2{\\left\\{#1\\atop#2\\right\\}}\n\\frac{m!}{m^n}\\stir nm\\;,\n$$</span></p>\n\n<p>where <span class=\"math-container\">$\\stir nm$</span> is a <a href=\"https://en.wikipedia.org/wiki/Stirling_numbers_of_the_second_kind\" rel=\"nofollow noreferrer\">Stirling number of the second kind</a>. (See <a href=\"https://math.stackexchange.com/questions/379525\">Probability distribution in the coupon collector&#39;s problem</a>.) Thus we want to find <span class=\"math-container\">$n$</span> such that</p>\n\n<p><span class=\"math-container\">$$\n\\frac{20!}{{20}^n}\\stir n{20}\\ge\\frac12\\;.\n$$</span></p>\n\n<p>We can take the expected number of attempts required to see all cards as an estimate. This is <span class=\"math-container\">$20H_{20}\\approx72$</span> (where <span class=\"math-container\">$H_{20}$</span> is the <span class=\"math-container\">$20$</span>th <a href=\"https://en.wikipedia.org/wiki/Harmonic_number\" rel=\"nofollow noreferrer\">harmonic number</a>). Then some trial and error shows that <span class=\"math-container\">$n=67$</span> is the least value for which the above inequality is fulfilled.</p>\n" } ]
For $0 \\le k \\le n$, and $d \\ge 0$, let $p(k,d)$ be the probability, with $k$ cards not yet seen in previous draws, of seeing all the unseen cards (each at least once) in the next $d$ draws, where each draw consists of randomly selecting one card with replacement from a shuffled deck. Then $p(k,d)$ can be computed via the following recursion:\n$$\np(k,d)=\\begin{cases}\n1, &amp; \\text{if $k=0$} \\\\\n0, &amp; \\text{if $0 \\le d &lt; k$}\\\\\n\\left(\\dfrac{k}{n}\\right)p(k-1,d-1)+\\left(\\dfrac{n-k}{n}\\right)p(k,d-1) &amp; \\text{otherwise}\n\\end{cases}\n$$</p>\n\n<p>Explanation:\n<p>\nIf $k = 0$, there are no cards yet unseen, so the probability of seeing all the unseen cards is $1$.\n<p>\nIf $0 \\le d &lt; k$, the $d$ remaining draws are not enough to see the $k$ yet unseen cards, so the probability of seeing all the unseen cards is 0.\n<p>\nOtherwise, draw one card. If the drawn card is one of the $k$ unseen cards, that leaves $k-1$ unseen cards; if not, it's still $k$ unseen cards. Either way, the number of remaining draws goes down by $1$, from $d$ to $d-1$.\n \n<p>\nSetting $n=20$ and applying the above recursion, Maple gets</p>\n\n<p>$$p(20,66) &lt; 1/2 &lt; p(20,67)$$</p>\n\n<p>so the answer is $67$.</p>\n" }, { "answer_id": 3560310, "score": 0, "text": "<p>This is the <a href=\"https://en.wikipedia.org/wiki/Coupon_collector%27s_problem\" rel=\"nofollow noreferrer\">coupon collector&rsquo;s problem</a>. The probability to have seen all <span class=\"math-container\">$m$</span>cards after <span class=\"math-container\">$n$</span> attempts is</p>\n\n<p><span class=\"math-container\">$$\n\\def\\stir#1#2{\\left\\{#1\\atop#2\\right\\}}\n\\frac{m!}{m^n}\\stir nm\\;,\n$$</span></p>\n\n<p>where <span class=\"math-container\">$\\stir nm$</span> is a <a href=\"https://en.wikipedia.org/wiki/Stirling_numbers_of_the_second_kind\" rel=\"nofollow noreferrer\">Stirling number of the second kind</a>. 
(See <a href=\"https://math.stackexchange.com/questions/379525\">Probability distribution in the coupon collector&#39;s problem</a>.) Thus we want to find <span class=\"math-container\">$n$</span> such that</p>\n\n<p><span class=\"math-container\">$$\n\\frac{20!}{{20}^n}\\stir n{20}\\ge\\frac12\\;.\n$$</span></p>\n\n<p>We can take the expected number of attempts required to see all cards as an estimate. This is <span class=\"math-container\">$20H_{20}\\approx72$</span> (where <span class=\"math-container\">$H_{20}$</span> is the <span class=\"math-container\">$20$</span>th <a href=\"https://en.wikipedia.org/wiki/Harmonic_number\" rel=\"nofollow noreferrer\">harmonic number</a>). Then some trial and error shows that <span class=\"math-container\">$n=67$</span> is the least value for which the above inequality is fulfilled.</p>\n" }, { "answer_id": 2094633, "score": 4, "text": "<p>Suppose a deck consists of $n$ cards $1,...,n$, where $n$ is a fixed positive integer. For $0 \\le k \\le n$, and $d \\ge 0$, let $p(k,d)$ be the probability, with $k$ cards not yet seen in previous draws, of seeing all the unseen cards (each at least once) in the next $d$ draws, where each draw consists of randomly selecting one card with replacement from a shuffled deck. Then $p(k,d)$ can be computed via the following recursion:\n$$\np(k,d)=\\begin{cases}\n1, &amp; \\text{if $k=0$} \\\\\n0, &amp; \\text{if $0 \\le d &lt; k$}\\\\\n\\left(\\dfrac{k}{n}\\right)p(k-1,d-1)+\\left(\\dfrac{n-k}{n}\\right)p(k,d-1) &amp; \\text{otherwise}\n\\end{cases}\n$$</p>\n\n<p>Explanation:\n<p>\nIf $k = 0$, there are no cards yet unseen, so the probability of seeing all the unseen cards is $1$.\n<p>\nIf $0 \\le d &lt; k$, the $d$ remaining draws are not enough to see the $k$ yet unseen cards, so the probability of seeing all the unseen cards is 0.\n<p>\nOtherwise, draw one card. If the drawn card is one of the $k$ unseen cards, that leaves $k-1$ unseen cards; if not, it's still $k$ unseen cards. 
Either way, the number of remaining draws goes down by $1$, from $d$ to $d-1$.\n \n<p>\nSetting $n=20$ and applying the above recursion, Maple gets</p>\n\n<p>$$p(20,66) &lt; 1/2 &lt; p(20,67)$$</p>\n\n<p>so the answer is $67$.</p>\n" }, { "answer_id": 3560310, "score": 0, "text": "<p>This is the <a href=\"https://en.wikipedia.org/wiki/Coupon_collector%27s_problem\" rel=\"nofollow noreferrer\">coupon collector&rsquo;s problem</a>. The probability to have seen all <span class=\"math-container\">$m$</span>cards after <span class=\"math-container\">$n$</span> attempts is</p>\n\n<p><span class=\"math-container\">$$\n\\def\\stir#1#2{\\left\\{#1\\atop#2\\right\\}}\n\\frac{m!}{m^n}\\stir nm\\;,\n$$</span></p>\n\n<p>where <span class=\"math-container\">$\\stir nm$</span> is a <a href=\"https://en.wikipedia.org/wiki/Stirling_numbers_of_the_second_kind\" rel=\"nofollow noreferrer\">Stirling number of the second kind</a>. (See <a href=\"https://math.stackexchange.com/questions/379525\">Probability distribution in the coupon collector&#39;s problem</a>.) Thus we want to find <span class=\"math-container\">$n$</span> such that</p>\n\n<p><span class=\"math-container\">$$\n\\frac{20!}{{20}^n}\\stir n{20}\\ge\\frac12\\;.\n$$</span></p>\n\n<p>We can take the expected number of attempts required to see all cards as an estimate. This is <span class=\"math-container\">$20H_{20}\\approx72$</span> (where <span class=\"math-container\">$H_{20}$</span> is the <span class=\"math-container\">$20$</span>th <a href=\"https://en.wikipedia.org/wiki/Harmonic_number\" rel=\"nofollow noreferrer\">harmonic number</a>). Then some trial and error shows that <span class=\"math-container\">$n=67$</span> is the least value for which the above inequality is fulfilled.</p>\n" }, { "answer_id": 2094633, "score": 4, "text": "<p>Suppose a deck consists of $n$ cards $1,...,n$, where $n$ is a fixed positive integer. 
For $0 \\le k \\le n$, and $d \\ge 0$, let $p(k,d)$ be the probability, with $k$ cards not yet seen in previous draws, of seeing all the unseen cards (each at least once) in the next $d$ draws, where each draw consists of randomly selecting one card with replacement from a shuffled deck. Then $p(k,d)$ can be computed via the following recursion:\n$$\np(k,d)=\\begin{cases}\n1, &amp; \\text{if $k=0$} \\\\\n0, &amp; \\text{if $0 \\le d &lt; k$}\\\\\n\\left(\\dfrac{k}{n}\\right)p(k-1,d-1)+\\left(\\dfrac{n-k}{n}\\right)p(k,d-1) &amp; \\text{otherwise}\n\\end{cases}\n$$</p>\n\n<p>Explanation:\n<p>\nIf $k = 0$, there are no cards yet unseen, so the probability of seeing all the unseen cards is $1$.\n<p>\nIf $0 \\le d &lt; k$, the $d$ remaining draws are not enough to see the $k$ yet unseen cards, so the probability of seeing all the unseen cards is 0.\n<p>\nOtherwise, draw one card. If the drawn card is one of the $k$ unseen cards, that leaves $k-1$ unseen cards; if not, it's still $k$ unseen cards. Either way, the number of remaining draws goes down by $1$, from $d$ to $d-1$.\n \n<p>\nSetting $n=20$ and applying the above recursion, Maple gets</p>\n\n<p>$$p(20,66) &lt; 1/2 &lt; p(20,67)$$</p>\n\n<p>so the answer is $67$.</p>\n" }, { "answer_id": 3560310, "score": 0, "text": "<p>This is the <a href=\"https://en.wikipedia.org/wiki/Coupon_collector%27s_problem\" rel=\"nofollow noreferrer\">coupon collector&rsquo;s problem</a>. The probability to have seen all <span class=\"math-container\">$m$</span>cards after <span class=\"math-container\">$n$</span> attempts is</p>\n\n<p><span class=\"math-container\">$$\n\\def\\stir#1#2{\\left\\{#1\\atop#2\\right\\}}\n\\frac{m!}{m^n}\\stir nm\\;,\n$$</span></p>\n\n<p>where <span class=\"math-container\">$\\stir nm$</span> is a <a href=\"https://en.wikipedia.org/wiki/Stirling_numbers_of_the_second_kind\" rel=\"nofollow noreferrer\">Stirling number of the second kind</a>. 
(See <a href=\"https://math.stackexchange.com/questions/379525\">Probability distribution in the coupon collector&#39;s problem</a>.) Thus we want to find <span class=\"math-container\">$n$</span> such that</p>\n\n<p><span class=\"math-container\">$$\n\\frac{20!}{{20}^n}\\stir n{20}\\ge\\frac12\\;.\n$$</span></p>\n\n<p>We can take the expected number of attempts required to see all cards as an estimate. This is <span class=\"math-container\">$20H_{20}\\approx72$</span> (where <span class=\"math-container\">$H_{20}$</span> is the <span class=\"math-container\">$20$</span>th <a href=\"https://en.wikipedia.org/wiki/Harmonic_number\" rel=\"nofollow noreferrer\">harmonic number</a>). Then some trial and error shows that <span class=\"math-container\">$n=67$</span> is the least value for which the above inequality is fulfilled.</p>\n" } ]
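The recursion from answer 2094633 can be cross-checked without Maple. The sketch below is not part of the original answers; the function name, the memoization, and the module-level `N` are our own choices:

```python
from functools import lru_cache

N = 20  # deck size, as in the question

@lru_cache(maxsize=None)
def p(k, d):
    """Probability of seeing all k not-yet-seen cards in the next d draws,
    drawing one card with replacement per draw (recursion from the answer)."""
    if k == 0:
        return 1.0   # nothing left unseen
    if d < k:
        return 0.0   # too few draws remain
    return (k / N) * p(k - 1, d - 1) + ((N - k) / N) * p(k, d - 1)

# Reproduces the stated result p(20,66) < 1/2 < p(20,67), i.e. 67 draws.
print(p(N, 66), p(N, 67))
```

The `lru_cache` memoization keeps the computation linear in the number of distinct `(k, d)` pairs rather than exponential in `d`.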
Question: 50% probability of seeing all cards Answer 1: <p>Suppose a deck consists of $n$ cards $1,...,n$, where $n$ is a fixed positive integer. For $0 \le k \le n$, and $d \ge 0$, let $p(k,d)$ be the probability, with $k$ cards not yet seen in previous draws, of seeing all the unseen cards (each at least once) in the next $d$ draws, where each draw consists of randomly selecting one card with replacement from a shuffled deck. Then $p(k,d)$ can be computed via the following recursion: $$ p(k,d)=\begin{cases} 1, &amp; \text{if $k=0$} \\ 0, &amp; \text{if $0 \le d &lt; k$}\\ \left(\dfrac{k}{n}\right)p(k-1,d-1)+\left(\dfrac{n-k}{n}\right)p(k,d-1) &amp; \text{otherwise} \end{cases} $$</p> <p>Explanation: <p> If $k = 0$, there are no cards yet unseen, so the probability of seeing all the unseen cards is $1$. <p> If $0 \le d &lt; k$, the $d$ remaining draws are not enough to see the $k$ yet unseen cards, so the probability of seeing all the unseen cards is 0. <p> Otherwise, draw one card. If the drawn card is one of the $k$ unseen cards, that leaves $k-1$ unseen cards; if not, it's still $k$ unseen cards. Either way, the number of remaining draws goes down by $1$, from $d$ to $d-1$. <p> Setting $n=20$ and applying the above recursion, Maple gets</p> <p>$$p(20,66) &lt; 1/2 &lt; p(20,67)$$</p> <p>so the answer is $67$.</p> Answer 2: <p>This is the <a href="https://en.wikipedia.org/wiki/Coupon_collector%27s_problem" rel="nofollow noreferrer">coupon collector&rsquo;s problem</a>. The probability to have seen all <span class="math-container">$m$</span>cards after <span class="math-container">$n$</span> attempts is</p> <p><span class="math-container">$$ \def\stir#1#2{\left\{#1\atop#2\right\}} \frac{m!}{m^n}\stir nm\;, $$</span></p> <p>where <span class="math-container">$\stir nm$</span> is a <a href="https://en.wikipedia.org/wiki/Stirling_numbers_of_the_second_kind" rel="nofollow noreferrer">Stirling number of the second kind</a>. 
(See <a href="https://math.stackexchange.com/questions/379525">Probability distribution in the coupon collector&#39;s problem</a>.) Thus we want to find <span class="math-container">$n$</span> such that</p> <p><span class="math-container">$$ \frac{20!}{{20}^n}\stir n{20}\ge\frac12\;. $$</span></p> <p>We can take the expected number of attempts required to see all cards as an estimate. This is <span class="math-container">$20H_{20}\approx72$</span> (where <span class="math-container">$H_{20}$</span> is the <span class="math-container">$20$</span>th <a href="https://en.wikipedia.org/wiki/Harmonic_number" rel="nofollow noreferrer">harmonic number</a>). Then some trial and error shows that <span class="math-container">$n=67$</span> is the least value for which the above inequality is fulfilled.</p> Answer 3: <p>Suppose a deck consists of $n$ cards $1,...,n$, where $n$ is a fixed positive integer. For $0 \le k \le n$, and $d \ge 0$, let $p(k,d)$ be the probability, with $k$ cards not yet seen in previous draws, of seeing all the unseen cards (each at least once) in the next $d$ draws, where each draw consists of randomly selecting one card with replacement from a shuffled deck. Then $p(k,d)$ can be computed via the following recursion: $$ p(k,d)=\begin{cases} 1, &amp; \text{if $k=0$} \\ 0, &amp; \text{if $0 \le d &lt; k$}\\ \left(\dfrac{k}{n}\right)p(k-1,d-1)+\left(\dfrac{n-k}{n}\right)p(k,d-1) &amp; \text{otherwise} \end{cases} $$</p> <p>Explanation: <p> If $k = 0$, there are no cards yet unseen, so the probability of seeing all the unseen cards is $1$. <p> If $0 \le d &lt; k$, the $d$ remaining draws are not enough to see the $k$ yet unseen cards, so the probability of seeing all the unseen cards is 0. <p> Otherwise, draw one card. If the drawn card is one of the $k$ unseen cards, that leaves $k-1$ unseen cards; if not, it's still $k$ unseen cards. Either way, the number of remaining draws goes down by $1$, from $d$ to $d-1$. 
<p> Setting $n=20$ and applying the above recursion, Maple gets</p> <p>$$p(20,66) &lt; 1/2 &lt; p(20,67)$$</p> <p>so the answer is $67$.</p> Answer 4: <p>This is the <a href="https://en.wikipedia.org/wiki/Coupon_collector%27s_problem" rel="nofollow noreferrer">coupon collector&rsquo;s problem</a>. The probability to have seen all <span class="math-container">$m$</span>cards after <span class="math-container">$n$</span> attempts is</p> <p><span class="math-container">$$ \def\stir#1#2{\left\{#1\atop#2\right\}} \frac{m!}{m^n}\stir nm\;, $$</span></p> <p>where <span class="math-container">$\stir nm$</span> is a <a href="https://en.wikipedia.org/wiki/Stirling_numbers_of_the_second_kind" rel="nofollow noreferrer">Stirling number of the second kind</a>. (See <a href="https://math.stackexchange.com/questions/379525">Probability distribution in the coupon collector&#39;s problem</a>.) Thus we want to find <span class="math-container">$n$</span> such that</p> <p><span class="math-container">$$ \frac{20!}{{20}^n}\stir n{20}\ge\frac12\;. $$</span></p> <p>We can take the expected number of attempts required to see all cards as an estimate. This is <span class="math-container">$20H_{20}\approx72$</span> (where <span class="math-container">$H_{20}$</span> is the <span class="math-container">$20$</span>th <a href="https://en.wikipedia.org/wiki/Harmonic_number" rel="nofollow noreferrer">harmonic number</a>). Then some trial and error shows that <span class="math-container">$n=67$</span> is the least value for which the above inequality is fulfilled.</p> Answer 5: <p>Suppose a deck consists of $n$ cards $1,...,n$, where $n$ is a fixed positive integer. For $0 \le k \le n$, and $d \ge 0$, let $p(k,d)$ be the probability, with $k$ cards not yet seen in previous draws, of seeing all the unseen cards (each at least once) in the next $d$ draws, where each draw consists of randomly selecting one card with replacement from a shuffled deck. 
Then $p(k,d)$ can be computed via the following recursion: $$ p(k,d)=\begin{cases} 1, &amp; \text{if $k=0$} \\ 0, &amp; \text{if $0 \le d &lt; k$}\\ \left(\dfrac{k}{n}\right)p(k-1,d-1)+\left(\dfrac{n-k}{n}\right)p(k,d-1) &amp; \text{otherwise} \end{cases} $$</p> <p>Explanation: <p> If $k = 0$, there are no cards yet unseen, so the probability of seeing all the unseen cards is $1$. <p> If $0 \le d &lt; k$, the $d$ remaining draws are not enough to see the $k$ yet unseen cards, so the probability of seeing all the unseen cards is 0. <p> Otherwise, draw one card. If the drawn card is one of the $k$ unseen cards, that leaves $k-1$ unseen cards; if not, it's still $k$ unseen cards. Either way, the number of remaining draws goes down by $1$, from $d$ to $d-1$. <p> Setting $n=20$ and applying the above recursion, Maple gets</p> <p>$$p(20,66) &lt; 1/2 &lt; p(20,67)$$</p> <p>so the answer is $67$.</p> Answer 6: <p>This is the <a href="https://en.wikipedia.org/wiki/Coupon_collector%27s_problem" rel="nofollow noreferrer">coupon collector&rsquo;s problem</a>. The probability to have seen all <span class="math-container">$m$</span>cards after <span class="math-container">$n$</span> attempts is</p> <p><span class="math-container">$$ \def\stir#1#2{\left\{#1\atop#2\right\}} \frac{m!}{m^n}\stir nm\;, $$</span></p> <p>where <span class="math-container">$\stir nm$</span> is a <a href="https://en.wikipedia.org/wiki/Stirling_numbers_of_the_second_kind" rel="nofollow noreferrer">Stirling number of the second kind</a>. (See <a href="https://math.stackexchange.com/questions/379525">Probability distribution in the coupon collector&#39;s problem</a>.) Thus we want to find <span class="math-container">$n$</span> such that</p> <p><span class="math-container">$$ \frac{20!}{{20}^n}\stir n{20}\ge\frac12\;. $$</span></p> <p>We can take the expected number of attempts required to see all cards as an estimate. 
This is <span class="math-container">$20H_{20}\approx72$</span> (where <span class="math-container">$H_{20}$</span> is the <span class="math-container">$20$</span>th <a href="https://en.wikipedia.org/wiki/Harmonic_number" rel="nofollow noreferrer">harmonic number</a>). Then some trial and error shows that <span class="math-container">$n=67$</span> is the least value for which the above inequality is fulfilled.</p> Answer 7: <p>Suppose a deck consists of $n$ cards $1,...,n$, where $n$ is a fixed positive integer. For $0 \le k \le n$, and $d \ge 0$, let $p(k,d)$ be the probability, with $k$ cards not yet seen in previous draws, of seeing all the unseen cards (each at least once) in the next $d$ draws, where each draw consists of randomly selecting one card with replacement from a shuffled deck. Then $p(k,d)$ can be computed via the following recursion: $$ p(k,d)=\begin{cases} 1, &amp; \text{if $k=0$} \\ 0, &amp; \text{if $0 \le d &lt; k$}\\ \left(\dfrac{k}{n}\right)p(k-1,d-1)+\left(\dfrac{n-k}{n}\right)p(k,d-1) &amp; \text{otherwise} \end{cases} $$</p> <p>Explanation: <p> If $k = 0$, there are no cards yet unseen, so the probability of seeing all the unseen cards is $1$. <p> If $0 \le d &lt; k$, the $d$ remaining draws are not enough to see the $k$ yet unseen cards, so the probability of seeing all the unseen cards is 0. <p> Otherwise, draw one card. If the drawn card is one of the $k$ unseen cards, that leaves $k-1$ unseen cards; if not, it's still $k$ unseen cards. Either way, the number of remaining draws goes down by $1$, from $d$ to $d-1$. <p> Setting $n=20$ and applying the above recursion, Maple gets</p> <p>$$p(20,66) &lt; 1/2 &lt; p(20,67)$$</p> <p>so the answer is $67$.</p> Answer 8: <p>This is the <a href="https://en.wikipedia.org/wiki/Coupon_collector%27s_problem" rel="nofollow noreferrer">coupon collector&rsquo;s problem</a>. 
The probability to have seen all <span class="math-container">$m$</span>cards after <span class="math-container">$n$</span> attempts is</p> <p><span class="math-container">$$ \def\stir#1#2{\left\{#1\atop#2\right\}} \frac{m!}{m^n}\stir nm\;, $$</span></p> <p>where <span class="math-container">$\stir nm$</span> is a <a href="https://en.wikipedia.org/wiki/Stirling_numbers_of_the_second_kind" rel="nofollow noreferrer">Stirling number of the second kind</a>. (See <a href="https://math.stackexchange.com/questions/379525">Probability distribution in the coupon collector&#39;s problem</a>.) Thus we want to find <span class="math-container">$n$</span> such that</p> <p><span class="math-container">$$ \frac{20!}{{20}^n}\stir n{20}\ge\frac12\;. $$</span></p> <p>We can take the expected number of attempts required to see all cards as an estimate. This is <span class="math-container">$20H_{20}\approx72$</span> (where <span class="math-container">$H_{20}$</span> is the <span class="math-container">$20$</span>th <a href="https://en.wikipedia.org/wiki/Harmonic_number" rel="nofollow noreferrer">harmonic number</a>). Then some trial and error shows that <span class="math-container">$n=67$</span> is the least value for which the above inequality is fulfilled.</p> Answer 9: <p>Suppose a deck consists of $n$ cards $1,...,n$, where $n$ is a fixed positive integer. For $0 \le k \le n$, and $d \ge 0$, let $p(k,d)$ be the probability, with $k$ cards not yet seen in previous draws, of seeing all the unseen cards (each at least once) in the next $d$ draws, where each draw consists of randomly selecting one card with replacement from a shuffled deck. 
Then $p(k,d)$ can be computed via the following recursion: $$ p(k,d)=\begin{cases} 1, &amp; \text{if $k=0$} \\ 0, &amp; \text{if $0 \le d &lt; k$}\\ \left(\dfrac{k}{n}\right)p(k-1,d-1)+\left(\dfrac{n-k}{n}\right)p(k,d-1) &amp; \text{otherwise} \end{cases} $$</p> <p>Explanation: <p> If $k = 0$, there are no cards yet unseen, so the probability of seeing all the unseen cards is $1$. <p> If $0 \le d &lt; k$, the $d$ remaining draws are not enough to see the $k$ yet unseen cards, so the probability of seeing all the unseen cards is 0. <p> Otherwise, draw one card. If the drawn card is one of the $k$ unseen cards, that leaves $k-1$ unseen cards; if not, it's still $k$ unseen cards. Either way, the number of remaining draws goes down by $1$, from $d$ to $d-1$. <p> Setting $n=20$ and applying the above recursion, Maple gets</p> <p>$$p(20,66) &lt; 1/2 &lt; p(20,67)$$</p> <p>so the answer is $67$.</p> Answer 10: <p>This is the <a href="https://en.wikipedia.org/wiki/Coupon_collector%27s_problem" rel="nofollow noreferrer">coupon collector&rsquo;s problem</a>. The probability to have seen all <span class="math-container">$m$</span>cards after <span class="math-container">$n$</span> attempts is</p> <p><span class="math-container">$$ \def\stir#1#2{\left\{#1\atop#2\right\}} \frac{m!}{m^n}\stir nm\;, $$</span></p> <p>where <span class="math-container">$\stir nm$</span> is a <a href="https://en.wikipedia.org/wiki/Stirling_numbers_of_the_second_kind" rel="nofollow noreferrer">Stirling number of the second kind</a>. (See <a href="https://math.stackexchange.com/questions/379525">Probability distribution in the coupon collector&#39;s problem</a>.) Thus we want to find <span class="math-container">$n$</span> such that</p> <p><span class="math-container">$$ \frac{20!}{{20}^n}\stir n{20}\ge\frac12\;. $$</span></p> <p>We can take the expected number of attempts required to see all cards as an estimate. 
This is <span class="math-container">$20H_{20}\approx72$</span> (where <span class="math-container">$H_{20}$</span> is the <span class="math-container">$20$</span>th <a href="https://en.wikipedia.org/wiki/Harmonic_number" rel="nofollow noreferrer">harmonic number</a>). Then some trial and error shows that <span class="math-container">$n=67$</span> is the least value for which the above inequality is fulfilled.</p> Answer 11: <p>Suppose a deck consists of $n$ cards $1,...,n$, where $n$ is a fixed positive integer. For $0 \le k \le n$, and $d \ge 0$, let $p(k,d)$ be the probability, with $k$ cards not yet seen in previous draws, of seeing all the unseen cards (each at least once) in the next $d$ draws, where each draw consists of randomly selecting one card with replacement from a shuffled deck. Then $p(k,d)$ can be computed via the following recursion: $$ p(k,d)=\begin{cases} 1, &amp; \text{if $k=0$} \\ 0, &amp; \text{if $0 \le d &lt; k$}\\ \left(\dfrac{k}{n}\right)p(k-1,d-1)+\left(\dfrac{n-k}{n}\right)p(k,d-1) &amp; \text{otherwise} \end{cases} $$</p> <p>Explanation: <p> If $k = 0$, there are no cards yet unseen, so the probability of seeing all the unseen cards is $1$. <p> If $0 \le d &lt; k$, the $d$ remaining draws are not enough to see the $k$ yet unseen cards, so the probability of seeing all the unseen cards is 0. <p> Otherwise, draw one card. If the drawn card is one of the $k$ unseen cards, that leaves $k-1$ unseen cards; if not, it's still $k$ unseen cards. Either way, the number of remaining draws goes down by $1$, from $d$ to $d-1$. <p> Setting $n=20$ and applying the above recursion, Maple gets</p> <p>$$p(20,66) &lt; 1/2 &lt; p(20,67)$$</p> <p>so the answer is $67$.</p> Answer 12: <p>This is the <a href="https://en.wikipedia.org/wiki/Coupon_collector%27s_problem" rel="nofollow noreferrer">coupon collector&rsquo;s problem</a>. 
The probability to have seen all <span class="math-container">$m$</span>cards after <span class="math-container">$n$</span> attempts is</p> <p><span class="math-container">$$ \def\stir#1#2{\left\{#1\atop#2\right\}} \frac{m!}{m^n}\stir nm\;, $$</span></p> <p>where <span class="math-container">$\stir nm$</span> is a <a href="https://en.wikipedia.org/wiki/Stirling_numbers_of_the_second_kind" rel="nofollow noreferrer">Stirling number of the second kind</a>. (See <a href="https://math.stackexchange.com/questions/379525">Probability distribution in the coupon collector&#39;s problem</a>.) Thus we want to find <span class="math-container">$n$</span> such that</p> <p><span class="math-container">$$ \frac{20!}{{20}^n}\stir n{20}\ge\frac12\;. $$</span></p> <p>We can take the expected number of attempts required to see all cards as an estimate. This is <span class="math-container">$20H_{20}\approx72$</span> (where <span class="math-container">$H_{20}$</span> is the <span class="math-container">$20$</span>th <a href="https://en.wikipedia.org/wiki/Harmonic_number" rel="nofollow noreferrer">harmonic number</a>). Then some trial and error shows that <span class="math-container">$n=67$</span> is the least value for which the above inequality is fulfilled.</p> Answer 13: <p>Suppose a deck consists of $n$ cards $1,...,n$, where $n$ is a fixed positive integer. For $0 \le k \le n$, and $d \ge 0$, let $p(k,d)$ be the probability, with $k$ cards not yet seen in previous draws, of seeing all the unseen cards (each at least once) in the next $d$ draws, where each draw consists of randomly selecting one card with replacement from a shuffled deck. 
Then $p(k,d)$ can be computed via the following recursion: $$ p(k,d)=\begin{cases} 1, &amp; \text{if $k=0$} \\ 0, &amp; \text{if $0 \le d &lt; k$}\\ \left(\dfrac{k}{n}\right)p(k-1,d-1)+\left(\dfrac{n-k}{n}\right)p(k,d-1) &amp; \text{otherwise} \end{cases} $$</p> <p>Explanation: <p> If $k = 0$, there are no cards yet unseen, so the probability of seeing all the unseen cards is $1$. <p> If $0 \le d &lt; k$, the $d$ remaining draws are not enough to see the $k$ yet unseen cards, so the probability of seeing all the unseen cards is 0. <p> Otherwise, draw one card. If the drawn card is one of the $k$ unseen cards, that leaves $k-1$ unseen cards; if not, it's still $k$ unseen cards. Either way, the number of remaining draws goes down by $1$, from $d$ to $d-1$. <p> Setting $n=20$ and applying the above recursion, Maple gets</p> <p>$$p(20,66) &lt; 1/2 &lt; p(20,67)$$</p> <p>so the answer is $67$.</p> Answer 14: <p>This is the <a href="https://en.wikipedia.org/wiki/Coupon_collector%27s_problem" rel="nofollow noreferrer">coupon collector&rsquo;s problem</a>. The probability to have seen all <span class="math-container">$m$</span>cards after <span class="math-container">$n$</span> attempts is</p> <p><span class="math-container">$$ \def\stir#1#2{\left\{#1\atop#2\right\}} \frac{m!}{m^n}\stir nm\;, $$</span></p> <p>where <span class="math-container">$\stir nm$</span> is a <a href="https://en.wikipedia.org/wiki/Stirling_numbers_of_the_second_kind" rel="nofollow noreferrer">Stirling number of the second kind</a>. (See <a href="https://math.stackexchange.com/questions/379525">Probability distribution in the coupon collector&#39;s problem</a>.) Thus we want to find <span class="math-container">$n$</span> such that</p> <p><span class="math-container">$$ \frac{20!}{{20}^n}\stir n{20}\ge\frac12\;. $$</span></p> <p>We can take the expected number of attempts required to see all cards as an estimate. 
This is <span class="math-container">$20H_{20}\approx72$</span> (where <span class="math-container">$H_{20}$</span> is the <span class="math-container">$20$</span>th <a href="https://en.wikipedia.org/wiki/Harmonic_number" rel="nofollow noreferrer">harmonic number</a>). Then some trial and error shows that <span class="math-container">$n=67$</span> is the least value for which the above inequality is fulfilled.</p>
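Both answers above can be checked numerically: the recursion $p(k,d)$ and the Stirling-number closed form $\frac{m!}{m^n}\left\{n\atop m\right\}$ must agree, and both should locate the crossover at $67$ draws. A minimal sketch (the function and variable names are illustrative, not from either answer):

```python
import math
from fractions import Fraction
from functools import lru_cache

N = 20  # deck size

@lru_cache(maxsize=None)
def p(k, d):
    """Recursion from the first answer: probability of seeing all k
    still-unseen cards at least once within the next d draws."""
    if k == 0:
        return 1.0
    if d < k:
        return 0.0
    return (k / N) * p(k - 1, d - 1) + ((N - k) / N) * p(k, d - 1)

@lru_cache(maxsize=None)
def stirling2(n, m):
    """Stirling numbers of the second kind via S(n,m) = m*S(n-1,m) + S(n-1,m-1)."""
    if n == 0:
        return 1 if m == 0 else 0
    if m == 0:
        return 0
    return m * stirling2(n - 1, m) + stirling2(n - 1, m - 1)

def seen_all(m, n):
    """Closed form from the second answer, evaluated exactly as a rational."""
    return Fraction(math.factorial(m) * stirling2(n, m), m ** n)

# least number of draws for which the probability reaches 1/2
d = N
while seen_all(N, d) < Fraction(1, 2):
    d += 1
print(d)                                   # 67
print(float(seen_all(N, 66)), p(N, 66))    # both ≈ 0.481
print(float(seen_all(N, 67)), p(N, 67))    # both ≈ 0.500
```

The exact `Fraction` comparison matters here: $p(20,67)$ exceeds $\tfrac12$ by less than $10^{-4}$, so a floating-point threshold test would be cutting it close.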
Very interesting integral limit
2,094,657
8
https://math.stackexchange.com/questions/2094657/very-interesting-integral-limit
<p>I found this interesting problem on the AoPS forum, but no one has posted an answer. I have no idea how to solve it.</p> <blockquote> <p>$$ \int_0^\infty \sin(x^n)\,dx $$ For all positive rationals $n&gt;1$, $I_n$ denotes the integral as above.</p> <p>If $P_n$ denotes the product $$ P_n=\prod_{r=1}^{n-1}I_{\bigl(\!\frac{n}{r}\!\bigr)}\,, $$ then evaluate the following limit $L$ $$ L=\lim_{n\to\infty}\bigl(\sqrt{n}\,P_n\bigr)^{\frac{1}{n}} $$</p> </blockquote>
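The answers below derive the closed form $\int_0^\infty\sin(x^a)\,dx=\Gamma\bigl(1+\tfrac1a\bigr)\sin\tfrac{\pi}{2a}$ for $a>1$ and conclude $L=\frac{\sqrt{2\pi}}{2e}$. A quick numerical sanity check (the helper names are illustrative):

```python
import math

def I(a):
    # closed form for the integral ∫₀^∞ sin(x^a) dx, valid for a > 1
    return math.gamma(1 + 1 / a) * math.sin(math.pi / (2 * a))

def approx_L(n):
    # (sqrt(n) * prod_{r=1}^{n-1} I(n/r)) ** (1/n), via logs to avoid underflow
    log_prod = sum(math.log(I(n / r)) for r in range(1, n))
    return math.exp((0.5 * math.log(n) + log_prod) / n)

target = math.sqrt(2 * math.pi) / (2 * math.e)   # ≈ 0.46107
print(approx_L(2000), target)   # convergence is slow: ≈ 0.4630 vs 0.46107
```

The convergence is only at rate $(2n)^{1/n}$, so even $n=2000$ is still a couple of thousandths away from the limit, but the trend toward $\frac{\sqrt{2\pi}}{2e}$ is visible.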
[ { "answer_id": 2094758, "score": 6, "text": "<p>since\n$$T_{\\frac{r}{n}}=\\int_{0}^{+\\infty}\\sin{(x^{\\frac{r}{n}})}dx=\\Gamma\\left(\\dfrac{r}{n}+1\\right)\\sin{\\dfrac{\\pi r}{2n}}=\\dfrac{r}{n}\\Gamma\\left(\\dfrac{r}{n}\\right)\\sin{\\dfrac{\\pi r}{2n}},r=1,2,,\\cdots,n-1$$\nproof see:<a href=\"https://de.wikibooks.org/wiki/Formelsammlung_Mathematik:_Bestimmte_Integrale:_Form_R(x,sin)\" rel=\"nofollow noreferrer\">3.1</a></p>\n\n<p><strong>Lemma $1$:</strong></p>\n\n<p>$$\\prod_{i=1}^{n-1}\\sin{\\dfrac{i\\pi}{2n}}=\\dfrac{\\sqrt{n}}{2^{n-1}}$$\nUse this well known\n$$z^{n-1}+z^{n-2}+\\cdots+z+1=\\prod_{k=1}^{n-1}(z-(z-z_{k})$$\nlet $z=1$ then we have\n$$n=\\prod_{i=1}^{n-1}\\left|1-\\cos{\\dfrac{2k\\pi}{n}}+i\\sin{\\dfrac{2k\\pi}{n}}\\right|$$\nsince\n$$\\left|1-\\cos{\\dfrac{2k\\pi}{n}}+i\\sin{\\dfrac{2k\\pi}{n}}\\right|=2\\sin{\\dfrac{k\\pi}{n}}$$\n$$\\prod_{i=1}^{n-1}\\sin{\\dfrac{i\\pi}{n}}=\\dfrac{n}{2^{n-1}}$$\nand\n$$\\dfrac{2n}{2^{2n-1}}=\\prod_{i=1}^{2n-1}\\sin{\\dfrac{i\\pi}{2n}}=\\left(\\prod_{i=1}^{n-1}\\sin{\\dfrac{i\\pi}{2n}}\\right)^2$$\nBy done!</p>\n\n<p><strong>Lemma 2:</strong> $$f=\\Gamma\\left(\\dfrac{1}{n}\\right)\\Gamma\\left(\\dfrac{2}{n}\\right)\\cdots\\Gamma\\left(\\dfrac{n-1}{n}\\right)=\\dfrac{(2\\pi)^{\\frac{n-1}{2}}}{\\sqrt{n}}$$</p>\n\n<p>since\n$$f^2=\\left(\\Gamma\\left(\\dfrac{1}{n}\\right)\\Gamma\\left(\\dfrac{n-1}{n}\\right)\\right)\\cdot \\left(\\Gamma\\left(\\dfrac{2}{n}\\right)\\Gamma\\left(\\dfrac{n-2}{n}\\right)\\right)\\cdots\\left(\\Gamma\\left(\\dfrac{n-1}{n}\\right)\\Gamma\\left(\\dfrac{1}{n}\\right)\\right)=\\dfrac{\\pi}{\\sin{\\frac{\\pi}{n}}}\\cdot\\dfrac{2\\pi}{\\sin{\\frac{2\\pi}{n}}}\\cdots\\dfrac{\\pi}{\\sin{\\dfrac{(n-1)\\pi}{n}}}$$\nso we have \n$$f=\\dfrac{(2\\pi)^{\\frac{n-1}{2}}}{\\sqrt{n}}$$\nso we have\n$$P_{n}=\\prod_{r=1}^{n-1}T_{\\frac{r}{n}}=\\prod_{r=1}^{n-1}\\dfrac{r}{n}\\Gamma\\left(\\dfrac{r}{n}\\right)\\sin{\\dfrac{\\pi 
r}{2n}}=\\dfrac{(n-1)!}{n^{n-1}}\\cdot\\dfrac{\\sqrt{n}}{2^{n-1}}\\cdot \\dfrac{(2\\pi)^{\\frac{n-1}{2}}}{\\sqrt{n}}=\\left(\\dfrac{\\sqrt{2\\pi}}{2n}\\right)^{n-1}(n-1)!$$\nBy stirling formula \n we have\n$$\\lim_{n\\to+\\infty}(\\sqrt{n}P_{n})^{\\frac{1}{n}}=\\dfrac{\\sqrt{2\\pi}}{2e}$$</p>\n" }, { "answer_id": 2094925, "score": 4, "text": "<p>The integral $I(a)=\\int_0^{\\infty}\\sin(x^a)=\\Im\\int_0^{\\infty}\\exp(ix^a)$ is easily calulated by using the analyticality of the integrand: Rotate the the contour of integration by an angle of $\\frac{\\pi}{2a}$ to get $I(a)=\\Im\\left(e^{\\frac{i\\pi}{2a}}\\int_0^{\\infty}e^{-x^a} \\right)$ or </p>\n\n<blockquote>\n <p>$$\nI(a)=\\frac{1}{a}\\Gamma\\left(\\frac{1}{a}\\right)\\sin\\left(\\frac{\\pi}{2a}\\right)\n$$</p>\n</blockquote>\n\n<p>Now let us have a look at </p>\n\n<p>$$\nP_n=\\left(n^{\\frac12}\\prod_{r=1}^{n-1}I\\left(\\frac{n}r\\right)\\right)^{\\frac1n}=n^{\\frac1{2n}}\\exp\\left(\\frac1n\\underbrace{\\sum_{r=1}^{n-1}\\log\\left(I\\left(\\frac{n}{r}\\right)\\right)}_{s_{n-1}}\\right)\n$$</p>\n\n<p>Now we observe that $I(1)=0$ so $s_n=s_{n-1}$ so </p>\n\n<p>$$\n\\frac{1}{n}s_{n-1}=\\frac{1}{n}s_n=\\frac{1}{n}\\sum_{r=1}^{n}\\log\\left(\\frac{r}{n}\\sin\\left(\\frac{\\pi r}{2n}\\right)\\Gamma\\left(\\frac{r}{n}\\right)\\right)\n$$</p>\n\n<p>which is the defintion of a Riemannian integral. 
Since also the prefactor in $P_n$ is finite ($n^{\\frac{1}{2n}}\\sim_{\\infty}1$) we can split the limit</p>\n\n<blockquote>\n <p>$$\nL=\\lim_{n\\rightarrow\\infty}P_n=\\lim_{n\\rightarrow\\infty}n^{\\frac{1}{2n}}\\lim_{n\\rightarrow\\infty}e^{\\frac{1}{n}s_n}=1\\cdot e^J \\,\\,\\quad \\left(\\star\\right)\n$$</p>\n</blockquote>\n\n<p>where $J$ is given by</p>\n\n<p>$$\nJ=\\int_0^1\\log(I(x))dx=\\\\\\int_0^1\\log(x)dx+\\int_0^1\\log\\left(\\sin\\left(\\frac{\\pi }{2}x\\right)\\right)dx+\\int_0^1\\log\\left(\\Gamma(x)\\right)dx\n$$</p>\n\n<p>or </p>\n\n<p>$$\nJ=-1-\\log(2)+\\frac{1}{2}\\log(2\\pi)\n$$</p>\n\n<p>where we used a classic result by <a href=\"https://math.stackexchange.com/questions/1338801/how-to-prove-raabes-formula\">Raabe</a> as well as <a href=\"https://math.stackexchange.com/questions/37829/computing-the-integral-of-log-sin-x\">this all time favourite</a>. </p>\n\n<p>We now can conclude from $\\left(\\star\\right)$ that</p>\n\n<blockquote>\n <p>$$\nL=\\frac{\\sqrt{2\\pi}}{2e}\n$$</p>\n</blockquote>\n\n<p>in accordance with @math110</p>\n" }, { "answer_id": 2094758, "score": 6, "text": "<p>since\n$$T_{\\frac{r}{n}}=\\int_{0}^{+\\infty}\\sin{(x^{\\frac{r}{n}})}dx=\\Gamma\\left(\\dfrac{r}{n}+1\\right)\\sin{\\dfrac{\\pi r}{2n}}=\\dfrac{r}{n}\\Gamma\\left(\\dfrac{r}{n}\\right)\\sin{\\dfrac{\\pi r}{2n}},r=1,2,,\\cdots,n-1$$\nproof see:<a href=\"https://de.wikibooks.org/wiki/Formelsammlung_Mathematik:_Bestimmte_Integrale:_Form_R(x,sin)\" rel=\"nofollow noreferrer\">3.1</a></p>\n\n<p><strong>Lemma $1$:</strong></p>\n\n<p>$$\\prod_{i=1}^{n-1}\\sin{\\dfrac{i\\pi}{2n}}=\\dfrac{\\sqrt{n}}{2^{n-1}}$$\nUse this well known\n$$z^{n-1}+z^{n-2}+\\cdots+z+1=\\prod_{k=1}^{n-1}(z-(z-z_{k})$$\nlet $z=1$ then we 
have\n$$n=\\prod_{i=1}^{n-1}\\left|1-\\cos{\\dfrac{2k\\pi}{n}}+i\\sin{\\dfrac{2k\\pi}{n}}\\right|$$\nsince\n$$\\left|1-\\cos{\\dfrac{2k\\pi}{n}}+i\\sin{\\dfrac{2k\\pi}{n}}\\right|=2\\sin{\\dfrac{k\\pi}{n}}$$\n$$\\prod_{i=1}^{n-1}\\sin{\\dfrac{i\\pi}{n}}=\\dfrac{n}{2^{n-1}}$$\nand\n$$\\dfrac{2n}{2^{2n-1}}=\\prod_{i=1}^{2n-1}\\sin{\\dfrac{i\\pi}{2n}}=\\left(\\prod_{i=1}^{n-1}\\sin{\\dfrac{i\\pi}{2n}}\\right)^2$$\nBy done!</p>\n\n<p><strong>Lemma 2:</strong> $$f=\\Gamma\\left(\\dfrac{1}{n}\\right)\\Gamma\\left(\\dfrac{2}{n}\\right)\\cdots\\Gamma\\left(\\dfrac{n-1}{n}\\right)=\\dfrac{(2\\pi)^{\\frac{n-1}{2}}}{\\sqrt{n}}$$</p>\n\n<p>since\n$$f^2=\\left(\\Gamma\\left(\\dfrac{1}{n}\\right)\\Gamma\\left(\\dfrac{n-1}{n}\\right)\\right)\\cdot \\left(\\Gamma\\left(\\dfrac{2}{n}\\right)\\Gamma\\left(\\dfrac{n-2}{n}\\right)\\right)\\cdots\\left(\\Gamma\\left(\\dfrac{n-1}{n}\\right)\\Gamma\\left(\\dfrac{1}{n}\\right)\\right)=\\dfrac{\\pi}{\\sin{\\frac{\\pi}{n}}}\\cdot\\dfrac{2\\pi}{\\sin{\\frac{2\\pi}{n}}}\\cdots\\dfrac{\\pi}{\\sin{\\dfrac{(n-1)\\pi}{n}}}$$\nso we have \n$$f=\\dfrac{(2\\pi)^{\\frac{n-1}{2}}}{\\sqrt{n}}$$\nso we have\n$$P_{n}=\\prod_{r=1}^{n-1}T_{\\frac{r}{n}}=\\prod_{r=1}^{n-1}\\dfrac{r}{n}\\Gamma\\left(\\dfrac{r}{n}\\right)\\sin{\\dfrac{\\pi r}{2n}}=\\dfrac{(n-1)!}{n^{n-1}}\\cdot\\dfrac{\\sqrt{n}}{2^{n-1}}\\cdot \\dfrac{(2\\pi)^{\\frac{n-1}{2}}}{\\sqrt{n}}=\\left(\\dfrac{\\sqrt{2\\pi}}{2n}\\right)^{n-1}(n-1)!$$\nBy stirling formula \n we have\n$$\\lim_{n\\to+\\infty}(\\sqrt{n}P_{n})^{\\frac{1}{n}}=\\dfrac{\\sqrt{2\\pi}}{2e}$$</p>\n" }, { "answer_id": 2094925, "score": 4, "text": "<p>The integral $I(a)=\\int_0^{\\infty}\\sin(x^a)=\\Im\\int_0^{\\infty}\\exp(ix^a)$ is easily calulated by using the analyticality of the integrand: Rotate the the contour of integration by an angle of $\\frac{\\pi}{2a}$ to get $I(a)=\\Im\\left(e^{\\frac{i\\pi}{2a}}\\int_0^{\\infty}e^{-x^a} \\right)$ or </p>\n\n<blockquote>\n 
<p>$$\nI(a)=\\frac{1}{a}\\Gamma\\left(\\frac{1}{a}\\right)\\sin\\left(\\frac{\\pi}{2a}\\right)\n$$</p>\n</blockquote>\n\n<p>Now let us have a look at </p>\n\n<p>$$\nP_n=\\left(n^{\\frac12}\\prod_{r=1}^{n-1}I\\left(\\frac{n}r\\right)\\right)^{\\frac1n}=n^{\\frac1{2n}}\\exp\\left(\\frac1n\\underbrace{\\sum_{r=1}^{n-1}\\log\\left(I\\left(\\frac{n}{r}\\right)\\right)}_{s_{n-1}}\\right)\n$$</p>\n\n<p>Now we observe that $I(1)=0$ so $s_n=s_{n-1}$ so </p>\n\n<p>$$\n\\frac{1}{n}s_{n-1}=\\frac{1}{n}s_n=\\frac{1}{n}\\sum_{r=1}^{n}\\log\\left(\\frac{r}{n}\\sin\\left(\\frac{\\pi r}{2n}\\right)\\Gamma\\left(\\frac{r}{n}\\right)\\right)\n$$</p>\n\n<p>which is the defintion of a Riemannian integral. Since also the prefactor in $P_n$ is finite ($n^{\\frac{1}{2n}}\\sim_{\\infty}1$) we can split the limit</p>\n\n<blockquote>\n <p>$$\nL=\\lim_{n\\rightarrow\\infty}P_n=\\lim_{n\\rightarrow\\infty}n^{\\frac{1}{2n}}\\lim_{n\\rightarrow\\infty}e^{\\frac{1}{n}s_n}=1\\cdot e^J \\,\\,\\quad \\left(\\star\\right)\n$$</p>\n</blockquote>\n\n<p>where $J$ is given by</p>\n\n<p>$$\nJ=\\int_0^1\\log(I(x))dx=\\\\\\int_0^1\\log(x)dx+\\int_0^1\\log\\left(\\sin\\left(\\frac{\\pi }{2}x\\right)\\right)dx+\\int_0^1\\log\\left(\\Gamma(x)\\right)dx\n$$</p>\n\n<p>or </p>\n\n<p>$$\nJ=-1-\\log(2)+\\frac{1}{2}\\log(2\\pi)\n$$</p>\n\n<p>where we used a classic result by <a href=\"https://math.stackexchange.com/questions/1338801/how-to-prove-raabes-formula\">Raabe</a> as well as <a href=\"https://math.stackexchange.com/questions/37829/computing-the-integral-of-log-sin-x\">this all time favourite</a>. 
</p>\n\n<p>We can now conclude from $\\left(\\star\\right)$ that</p>\n\n<blockquote>\n <p>$$\nL=\\frac{\\sqrt{2\\pi}}{2e}\n$$</p>\n</blockquote>\n\n<p>in accordance with @math110</p>\n" }, { "answer_id": 2094758, "score": 6, "text": "<p>since\n$$T_{\\frac{r}{n}}=\\int_{0}^{+\\infty}\\sin{(x^{\\frac{n}{r}})}dx=\\Gamma\\left(\\dfrac{r}{n}+1\\right)\\sin{\\dfrac{\\pi r}{2n}}=\\dfrac{r}{n}\\Gamma\\left(\\dfrac{r}{n}\\right)\\sin{\\dfrac{\\pi r}{2n}},\\quad r=1,2,\\cdots,n-1$$\nfor a proof see <a href=\"https://de.wikibooks.org/wiki/Formelsammlung_Mathematik:_Bestimmte_Integrale:_Form_R(x,sin)\" rel=\"nofollow noreferrer\">3.1</a></p>\n\n<p><strong>Lemma $1$:</strong></p>\n\n<p>$$\\prod_{i=1}^{n-1}\\sin{\\dfrac{i\\pi}{2n}}=\\dfrac{\\sqrt{n}}{2^{n-1}}$$\nUse the well known factorization\n$$z^{n-1}+z^{n-2}+\\cdots+z+1=\\prod_{k=1}^{n-1}(z-z_{k}),\\qquad z_{k}=\\cos{\\dfrac{2k\\pi}{n}}+i\\sin{\\dfrac{2k\\pi}{n}}$$\nlet $z=1$ then we have\n$$n=\\prod_{k=1}^{n-1}\\left|1-\\cos{\\dfrac{2k\\pi}{n}}+i\\sin{\\dfrac{2k\\pi}{n}}\\right|$$\nsince\n$$\\left|1-\\cos{\\dfrac{2k\\pi}{n}}+i\\sin{\\dfrac{2k\\pi}{n}}\\right|=2\\sin{\\dfrac{k\\pi}{n}}$$\nthis gives\n$$\\prod_{i=1}^{n-1}\\sin{\\dfrac{i\\pi}{n}}=\\dfrac{n}{2^{n-1}}$$\nand\n$$\\dfrac{2n}{2^{2n-1}}=\\prod_{i=1}^{2n-1}\\sin{\\dfrac{i\\pi}{2n}}=\\left(\\prod_{i=1}^{n-1}\\sin{\\dfrac{i\\pi}{2n}}\\right)^2$$\nDone!</p>\n\n<p><strong>Lemma 2:</strong> $$f=\\Gamma\\left(\\dfrac{1}{n}\\right)\\Gamma\\left(\\dfrac{2}{n}\\right)\\cdots\\Gamma\\left(\\dfrac{n-1}{n}\\right)=\\dfrac{(2\\pi)^{\\frac{n-1}{2}}}{\\sqrt{n}}$$</p>\n\n<p>since by the reflection formula\n$$f^2=\\left(\\Gamma\\left(\\dfrac{1}{n}\\right)\\Gamma\\left(\\dfrac{n-1}{n}\\right)\\right)\\cdot \\left(\\Gamma\\left(\\dfrac{2}{n}\\right)\\Gamma\\left(\\dfrac{n-2}{n}\\right)\\right)\\cdots\\left(\\Gamma\\left(\\dfrac{n-1}{n}\\right)\\Gamma\\left(\\dfrac{1}{n}\\right)\\right)=\\dfrac{\\pi}{\\sin{\\frac{\\pi}{n}}}\\cdot\\dfrac{\\pi}{\\sin{\\frac{2\\pi}{n}}}\\cdots\\dfrac{\\pi}{\\sin{\\dfrac{(n-1)\\pi}{n}}}$$\nso we have \n$$f=\\dfrac{(2\\pi)^{\\frac{n-1}{2}}}{\\sqrt{n}}$$\nso we 
have\n$$P_{n}=\\prod_{r=1}^{n-1}T_{\\frac{r}{n}}=\\prod_{r=1}^{n-1}\\dfrac{r}{n}\\Gamma\\left(\\dfrac{r}{n}\\right)\\sin{\\dfrac{\\pi r}{2n}}=\\dfrac{(n-1)!}{n^{n-1}}\\cdot\\dfrac{\\sqrt{n}}{2^{n-1}}\\cdot \\dfrac{(2\\pi)^{\\frac{n-1}{2}}}{\\sqrt{n}}=\\left(\\dfrac{\\sqrt{2\\pi}}{2n}\\right)^{n-1}(n-1)!$$\nBy stirling formula \n we have\n$$\\lim_{n\\to+\\infty}(\\sqrt{n}P_{n})^{\\frac{1}{n}}=\\dfrac{\\sqrt{2\\pi}}{2e}$$</p>\n" }, { "answer_id": 2094925, "score": 4, "text": "<p>The integral $I(a)=\\int_0^{\\infty}\\sin(x^a)=\\Im\\int_0^{\\infty}\\exp(ix^a)$ is easily calulated by using the analyticality of the integrand: Rotate the the contour of integration by an angle of $\\frac{\\pi}{2a}$ to get $I(a)=\\Im\\left(e^{\\frac{i\\pi}{2a}}\\int_0^{\\infty}e^{-x^a} \\right)$ or </p>\n\n<blockquote>\n <p>$$\nI(a)=\\frac{1}{a}\\Gamma\\left(\\frac{1}{a}\\right)\\sin\\left(\\frac{\\pi}{2a}\\right)\n$$</p>\n</blockquote>\n\n<p>Now let us have a look at </p>\n\n<p>$$\nP_n=\\left(n^{\\frac12}\\prod_{r=1}^{n-1}I\\left(\\frac{n}r\\right)\\right)^{\\frac1n}=n^{\\frac1{2n}}\\exp\\left(\\frac1n\\underbrace{\\sum_{r=1}^{n-1}\\log\\left(I\\left(\\frac{n}{r}\\right)\\right)}_{s_{n-1}}\\right)\n$$</p>\n\n<p>Now we observe that $I(1)=0$ so $s_n=s_{n-1}$ so </p>\n\n<p>$$\n\\frac{1}{n}s_{n-1}=\\frac{1}{n}s_n=\\frac{1}{n}\\sum_{r=1}^{n}\\log\\left(\\frac{r}{n}\\sin\\left(\\frac{\\pi r}{2n}\\right)\\Gamma\\left(\\frac{r}{n}\\right)\\right)\n$$</p>\n\n<p>which is the defintion of a Riemannian integral. 
Since also the prefactor in $P_n$ is finite ($n^{\\frac{1}{2n}}\\sim_{\\infty}1$) we can split the limit</p>\n\n<blockquote>\n <p>$$\nL=\\lim_{n\\rightarrow\\infty}P_n=\\lim_{n\\rightarrow\\infty}n^{\\frac{1}{2n}}\\lim_{n\\rightarrow\\infty}e^{\\frac{1}{n}s_n}=1\\cdot e^J \\,\\,\\quad \\left(\\star\\right)\n$$</p>\n</blockquote>\n\n<p>where $J$ is given by</p>\n\n<p>$$\nJ=\\int_0^1\\log(I(x))dx=\\\\\\int_0^1\\log(x)dx+\\int_0^1\\log\\left(\\sin\\left(\\frac{\\pi }{2}x\\right)\\right)dx+\\int_0^1\\log\\left(\\Gamma(x)\\right)dx\n$$</p>\n\n<p>or </p>\n\n<p>$$\nJ=-1-\\log(2)+\\frac{1}{2}\\log(2\\pi)\n$$</p>\n\n<p>where we used a classic result by <a href=\"https://math.stackexchange.com/questions/1338801/how-to-prove-raabes-formula\">Raabe</a> as well as <a href=\"https://math.stackexchange.com/questions/37829/computing-the-integral-of-log-sin-x\">this all time favourite</a>. </p>\n\n<p>We now can conclude from $\\left(\\star\\right)$ that</p>\n\n<blockquote>\n <p>$$\nL=\\frac{\\sqrt{2\\pi}}{2e}\n$$</p>\n</blockquote>\n\n<p>in accordance with @math110</p>\n" }, { "answer_id": 2094758, "score": 6, "text": "<p>since\n$$T_{\\frac{r}{n}}=\\int_{0}^{+\\infty}\\sin{(x^{\\frac{r}{n}})}dx=\\Gamma\\left(\\dfrac{r}{n}+1\\right)\\sin{\\dfrac{\\pi r}{2n}}=\\dfrac{r}{n}\\Gamma\\left(\\dfrac{r}{n}\\right)\\sin{\\dfrac{\\pi r}{2n}},r=1,2,,\\cdots,n-1$$\nproof see:<a href=\"https://de.wikibooks.org/wiki/Formelsammlung_Mathematik:_Bestimmte_Integrale:_Form_R(x,sin)\" rel=\"nofollow noreferrer\">3.1</a></p>\n\n<p><strong>Lemma $1$:</strong></p>\n\n<p>$$\\prod_{i=1}^{n-1}\\sin{\\dfrac{i\\pi}{2n}}=\\dfrac{\\sqrt{n}}{2^{n-1}}$$\nUse this well known\n$$z^{n-1}+z^{n-2}+\\cdots+z+1=\\prod_{k=1}^{n-1}(z-(z-z_{k})$$\nlet $z=1$ then we 
have\n$$n=\\prod_{i=1}^{n-1}\\left|1-\\cos{\\dfrac{2k\\pi}{n}}+i\\sin{\\dfrac{2k\\pi}{n}}\\right|$$\nsince\n$$\\left|1-\\cos{\\dfrac{2k\\pi}{n}}+i\\sin{\\dfrac{2k\\pi}{n}}\\right|=2\\sin{\\dfrac{k\\pi}{n}}$$\n$$\\prod_{i=1}^{n-1}\\sin{\\dfrac{i\\pi}{n}}=\\dfrac{n}{2^{n-1}}$$\nand\n$$\\dfrac{2n}{2^{2n-1}}=\\prod_{i=1}^{2n-1}\\sin{\\dfrac{i\\pi}{2n}}=\\left(\\prod_{i=1}^{n-1}\\sin{\\dfrac{i\\pi}{2n}}\\right)^2$$\nBy done!</p>\n\n<p><strong>Lemma 2:</strong> $$f=\\Gamma\\left(\\dfrac{1}{n}\\right)\\Gamma\\left(\\dfrac{2}{n}\\right)\\cdots\\Gamma\\left(\\dfrac{n-1}{n}\\right)=\\dfrac{(2\\pi)^{\\frac{n-1}{2}}}{\\sqrt{n}}$$</p>\n\n<p>since\n$$f^2=\\left(\\Gamma\\left(\\dfrac{1}{n}\\right)\\Gamma\\left(\\dfrac{n-1}{n}\\right)\\right)\\cdot \\left(\\Gamma\\left(\\dfrac{2}{n}\\right)\\Gamma\\left(\\dfrac{n-2}{n}\\right)\\right)\\cdots\\left(\\Gamma\\left(\\dfrac{n-1}{n}\\right)\\Gamma\\left(\\dfrac{1}{n}\\right)\\right)=\\dfrac{\\pi}{\\sin{\\frac{\\pi}{n}}}\\cdot\\dfrac{2\\pi}{\\sin{\\frac{2\\pi}{n}}}\\cdots\\dfrac{\\pi}{\\sin{\\dfrac{(n-1)\\pi}{n}}}$$\nso we have \n$$f=\\dfrac{(2\\pi)^{\\frac{n-1}{2}}}{\\sqrt{n}}$$\nso we have\n$$P_{n}=\\prod_{r=1}^{n-1}T_{\\frac{r}{n}}=\\prod_{r=1}^{n-1}\\dfrac{r}{n}\\Gamma\\left(\\dfrac{r}{n}\\right)\\sin{\\dfrac{\\pi r}{2n}}=\\dfrac{(n-1)!}{n^{n-1}}\\cdot\\dfrac{\\sqrt{n}}{2^{n-1}}\\cdot \\dfrac{(2\\pi)^{\\frac{n-1}{2}}}{\\sqrt{n}}=\\left(\\dfrac{\\sqrt{2\\pi}}{2n}\\right)^{n-1}(n-1)!$$\nBy stirling formula \n we have\n$$\\lim_{n\\to+\\infty}(\\sqrt{n}P_{n})^{\\frac{1}{n}}=\\dfrac{\\sqrt{2\\pi}}{2e}$$</p>\n" }, { "answer_id": 2094925, "score": 4, "text": "<p>The integral $I(a)=\\int_0^{\\infty}\\sin(x^a)=\\Im\\int_0^{\\infty}\\exp(ix^a)$ is easily calulated by using the analyticality of the integrand: Rotate the the contour of integration by an angle of $\\frac{\\pi}{2a}$ to get $I(a)=\\Im\\left(e^{\\frac{i\\pi}{2a}}\\int_0^{\\infty}e^{-x^a} \\right)$ or </p>\n\n<blockquote>\n 
<p>$$\nI(a)=\\frac{1}{a}\\Gamma\\left(\\frac{1}{a}\\right)\\sin\\left(\\frac{\\pi}{2a}\\right)\n$$</p>\n</blockquote>\n\n<p>Now let us have a look at </p>\n\n<p>$$\nP_n=\\left(n^{\\frac12}\\prod_{r=1}^{n-1}I\\left(\\frac{n}r\\right)\\right)^{\\frac1n}=n^{\\frac1{2n}}\\exp\\left(\\frac1n\\underbrace{\\sum_{r=1}^{n-1}\\log\\left(I\\left(\\frac{n}{r}\\right)\\right)}_{s_{n-1}}\\right)\n$$</p>\n\n<p>Now we observe that $I(1)=0$ so $s_n=s_{n-1}$ so </p>\n\n<p>$$\n\\frac{1}{n}s_{n-1}=\\frac{1}{n}s_n=\\frac{1}{n}\\sum_{r=1}^{n}\\log\\left(\\frac{r}{n}\\sin\\left(\\frac{\\pi r}{2n}\\right)\\Gamma\\left(\\frac{r}{n}\\right)\\right)\n$$</p>\n\n<p>which is the defintion of a Riemannian integral. Since also the prefactor in $P_n$ is finite ($n^{\\frac{1}{2n}}\\sim_{\\infty}1$) we can split the limit</p>\n\n<blockquote>\n <p>$$\nL=\\lim_{n\\rightarrow\\infty}P_n=\\lim_{n\\rightarrow\\infty}n^{\\frac{1}{2n}}\\lim_{n\\rightarrow\\infty}e^{\\frac{1}{n}s_n}=1\\cdot e^J \\,\\,\\quad \\left(\\star\\right)\n$$</p>\n</blockquote>\n\n<p>where $J$ is given by</p>\n\n<p>$$\nJ=\\int_0^1\\log(I(x))dx=\\\\\\int_0^1\\log(x)dx+\\int_0^1\\log\\left(\\sin\\left(\\frac{\\pi }{2}x\\right)\\right)dx+\\int_0^1\\log\\left(\\Gamma(x)\\right)dx\n$$</p>\n\n<p>or </p>\n\n<p>$$\nJ=-1-\\log(2)+\\frac{1}{2}\\log(2\\pi)\n$$</p>\n\n<p>where we used a classic result by <a href=\"https://math.stackexchange.com/questions/1338801/how-to-prove-raabes-formula\">Raabe</a> as well as <a href=\"https://math.stackexchange.com/questions/37829/computing-the-integral-of-log-sin-x\">this all time favourite</a>. 
</p>\n\n<p>We now can conclude from $\\left(\\star\\right)$ that</p>\n\n<blockquote>\n <p>$$\nL=\\frac{\\sqrt{2\\pi}}{2e}\n$$</p>\n</blockquote>\n\n<p>in accordance with @math110</p>\n" }, { "answer_id": 2094758, "score": 6, "text": "<p>since\n$$T_{\\frac{r}{n}}=\\int_{0}^{+\\infty}\\sin{(x^{\\frac{r}{n}})}dx=\\Gamma\\left(\\dfrac{r}{n}+1\\right)\\sin{\\dfrac{\\pi r}{2n}}=\\dfrac{r}{n}\\Gamma\\left(\\dfrac{r}{n}\\right)\\sin{\\dfrac{\\pi r}{2n}},r=1,2,,\\cdots,n-1$$\nproof see:<a href=\"https://de.wikibooks.org/wiki/Formelsammlung_Mathematik:_Bestimmte_Integrale:_Form_R(x,sin)\" rel=\"nofollow noreferrer\">3.1</a></p>\n\n<p><strong>Lemma $1$:</strong></p>\n\n<p>$$\\prod_{i=1}^{n-1}\\sin{\\dfrac{i\\pi}{2n}}=\\dfrac{\\sqrt{n}}{2^{n-1}}$$\nUse this well known\n$$z^{n-1}+z^{n-2}+\\cdots+z+1=\\prod_{k=1}^{n-1}(z-(z-z_{k})$$\nlet $z=1$ then we have\n$$n=\\prod_{i=1}^{n-1}\\left|1-\\cos{\\dfrac{2k\\pi}{n}}+i\\sin{\\dfrac{2k\\pi}{n}}\\right|$$\nsince\n$$\\left|1-\\cos{\\dfrac{2k\\pi}{n}}+i\\sin{\\dfrac{2k\\pi}{n}}\\right|=2\\sin{\\dfrac{k\\pi}{n}}$$\n$$\\prod_{i=1}^{n-1}\\sin{\\dfrac{i\\pi}{n}}=\\dfrac{n}{2^{n-1}}$$\nand\n$$\\dfrac{2n}{2^{2n-1}}=\\prod_{i=1}^{2n-1}\\sin{\\dfrac{i\\pi}{2n}}=\\left(\\prod_{i=1}^{n-1}\\sin{\\dfrac{i\\pi}{2n}}\\right)^2$$\nBy done!</p>\n\n<p><strong>Lemma 2:</strong> $$f=\\Gamma\\left(\\dfrac{1}{n}\\right)\\Gamma\\left(\\dfrac{2}{n}\\right)\\cdots\\Gamma\\left(\\dfrac{n-1}{n}\\right)=\\dfrac{(2\\pi)^{\\frac{n-1}{2}}}{\\sqrt{n}}$$</p>\n\n<p>since\n$$f^2=\\left(\\Gamma\\left(\\dfrac{1}{n}\\right)\\Gamma\\left(\\dfrac{n-1}{n}\\right)\\right)\\cdot \\left(\\Gamma\\left(\\dfrac{2}{n}\\right)\\Gamma\\left(\\dfrac{n-2}{n}\\right)\\right)\\cdots\\left(\\Gamma\\left(\\dfrac{n-1}{n}\\right)\\Gamma\\left(\\dfrac{1}{n}\\right)\\right)=\\dfrac{\\pi}{\\sin{\\frac{\\pi}{n}}}\\cdot\\dfrac{2\\pi}{\\sin{\\frac{2\\pi}{n}}}\\cdots\\dfrac{\\pi}{\\sin{\\dfrac{(n-1)\\pi}{n}}}$$\nso we have \n$$f=\\dfrac{(2\\pi)^{\\frac{n-1}{2}}}{\\sqrt{n}}$$\nso we 
have\n$$P_{n}=\\prod_{r=1}^{n-1}T_{\\frac{r}{n}}=\\prod_{r=1}^{n-1}\\dfrac{r}{n}\\Gamma\\left(\\dfrac{r}{n}\\right)\\sin{\\dfrac{\\pi r}{2n}}=\\dfrac{(n-1)!}{n^{n-1}}\\cdot\\dfrac{\\sqrt{n}}{2^{n-1}}\\cdot \\dfrac{(2\\pi)^{\\frac{n-1}{2}}}{\\sqrt{n}}=\\left(\\dfrac{\\sqrt{2\\pi}}{2n}\\right)^{n-1}(n-1)!$$\nBy stirling formula \n we have\n$$\\lim_{n\\to+\\infty}(\\sqrt{n}P_{n})^{\\frac{1}{n}}=\\dfrac{\\sqrt{2\\pi}}{2e}$$</p>\n" }, { "answer_id": 2094925, "score": 4, "text": "<p>The integral $I(a)=\\int_0^{\\infty}\\sin(x^a)=\\Im\\int_0^{\\infty}\\exp(ix^a)$ is easily calulated by using the analyticality of the integrand: Rotate the the contour of integration by an angle of $\\frac{\\pi}{2a}$ to get $I(a)=\\Im\\left(e^{\\frac{i\\pi}{2a}}\\int_0^{\\infty}e^{-x^a} \\right)$ or </p>\n\n<blockquote>\n <p>$$\nI(a)=\\frac{1}{a}\\Gamma\\left(\\frac{1}{a}\\right)\\sin\\left(\\frac{\\pi}{2a}\\right)\n$$</p>\n</blockquote>\n\n<p>Now let us have a look at </p>\n\n<p>$$\nP_n=\\left(n^{\\frac12}\\prod_{r=1}^{n-1}I\\left(\\frac{n}r\\right)\\right)^{\\frac1n}=n^{\\frac1{2n}}\\exp\\left(\\frac1n\\underbrace{\\sum_{r=1}^{n-1}\\log\\left(I\\left(\\frac{n}{r}\\right)\\right)}_{s_{n-1}}\\right)\n$$</p>\n\n<p>Now we observe that $I(1)=0$ so $s_n=s_{n-1}$ so </p>\n\n<p>$$\n\\frac{1}{n}s_{n-1}=\\frac{1}{n}s_n=\\frac{1}{n}\\sum_{r=1}^{n}\\log\\left(\\frac{r}{n}\\sin\\left(\\frac{\\pi r}{2n}\\right)\\Gamma\\left(\\frac{r}{n}\\right)\\right)\n$$</p>\n\n<p>which is the defintion of a Riemannian integral. 
Since also the prefactor in $P_n$ is finite ($n^{\\frac{1}{2n}}\\sim_{\\infty}1$) we can split the limit</p>\n\n<blockquote>\n <p>$$\nL=\\lim_{n\\rightarrow\\infty}P_n=\\lim_{n\\rightarrow\\infty}n^{\\frac{1}{2n}}\\lim_{n\\rightarrow\\infty}e^{\\frac{1}{n}s_n}=1\\cdot e^J \\,\\,\\quad \\left(\\star\\right)\n$$</p>\n</blockquote>\n\n<p>where $J$ is given by</p>\n\n<p>$$\nJ=\\int_0^1\\log(I(x))dx=\\\\\\int_0^1\\log(x)dx+\\int_0^1\\log\\left(\\sin\\left(\\frac{\\pi }{2}x\\right)\\right)dx+\\int_0^1\\log\\left(\\Gamma(x)\\right)dx\n$$</p>\n\n<p>or </p>\n\n<p>$$\nJ=-1-\\log(2)+\\frac{1}{2}\\log(2\\pi)\n$$</p>\n\n<p>where we used a classic result by <a href=\"https://math.stackexchange.com/questions/1338801/how-to-prove-raabes-formula\">Raabe</a> as well as <a href=\"https://math.stackexchange.com/questions/37829/computing-the-integral-of-log-sin-x\">this all time favourite</a>. </p>\n\n<p>We now can conclude from $\\left(\\star\\right)$ that</p>\n\n<blockquote>\n <p>$$\nL=\\frac{\\sqrt{2\\pi}}{2e}\n$$</p>\n</blockquote>\n\n<p>in accordance with @math110</p>\n" }, { "answer_id": 2094758, "score": 6, "text": "<p>since\n$$T_{\\frac{r}{n}}=\\int_{0}^{+\\infty}\\sin{(x^{\\frac{r}{n}})}dx=\\Gamma\\left(\\dfrac{r}{n}+1\\right)\\sin{\\dfrac{\\pi r}{2n}}=\\dfrac{r}{n}\\Gamma\\left(\\dfrac{r}{n}\\right)\\sin{\\dfrac{\\pi r}{2n}},r=1,2,,\\cdots,n-1$$\nproof see:<a href=\"https://de.wikibooks.org/wiki/Formelsammlung_Mathematik:_Bestimmte_Integrale:_Form_R(x,sin)\" rel=\"nofollow noreferrer\">3.1</a></p>\n\n<p><strong>Lemma $1$:</strong></p>\n\n<p>$$\\prod_{i=1}^{n-1}\\sin{\\dfrac{i\\pi}{2n}}=\\dfrac{\\sqrt{n}}{2^{n-1}}$$\nUse this well known\n$$z^{n-1}+z^{n-2}+\\cdots+z+1=\\prod_{k=1}^{n-1}(z-(z-z_{k})$$\nlet $z=1$ then we 
have\n$$n=\\prod_{i=1}^{n-1}\\left|1-\\cos{\\dfrac{2k\\pi}{n}}+i\\sin{\\dfrac{2k\\pi}{n}}\\right|$$\nsince\n$$\\left|1-\\cos{\\dfrac{2k\\pi}{n}}+i\\sin{\\dfrac{2k\\pi}{n}}\\right|=2\\sin{\\dfrac{k\\pi}{n}}$$\n$$\\prod_{i=1}^{n-1}\\sin{\\dfrac{i\\pi}{n}}=\\dfrac{n}{2^{n-1}}$$\nand\n$$\\dfrac{2n}{2^{2n-1}}=\\prod_{i=1}^{2n-1}\\sin{\\dfrac{i\\pi}{2n}}=\\left(\\prod_{i=1}^{n-1}\\sin{\\dfrac{i\\pi}{2n}}\\right)^2$$\nBy done!</p>\n\n<p><strong>Lemma 2:</strong> $$f=\\Gamma\\left(\\dfrac{1}{n}\\right)\\Gamma\\left(\\dfrac{2}{n}\\right)\\cdots\\Gamma\\left(\\dfrac{n-1}{n}\\right)=\\dfrac{(2\\pi)^{\\frac{n-1}{2}}}{\\sqrt{n}}$$</p>\n\n<p>since\n$$f^2=\\left(\\Gamma\\left(\\dfrac{1}{n}\\right)\\Gamma\\left(\\dfrac{n-1}{n}\\right)\\right)\\cdot \\left(\\Gamma\\left(\\dfrac{2}{n}\\right)\\Gamma\\left(\\dfrac{n-2}{n}\\right)\\right)\\cdots\\left(\\Gamma\\left(\\dfrac{n-1}{n}\\right)\\Gamma\\left(\\dfrac{1}{n}\\right)\\right)=\\dfrac{\\pi}{\\sin{\\frac{\\pi}{n}}}\\cdot\\dfrac{2\\pi}{\\sin{\\frac{2\\pi}{n}}}\\cdots\\dfrac{\\pi}{\\sin{\\dfrac{(n-1)\\pi}{n}}}$$\nso we have \n$$f=\\dfrac{(2\\pi)^{\\frac{n-1}{2}}}{\\sqrt{n}}$$\nso we have\n$$P_{n}=\\prod_{r=1}^{n-1}T_{\\frac{r}{n}}=\\prod_{r=1}^{n-1}\\dfrac{r}{n}\\Gamma\\left(\\dfrac{r}{n}\\right)\\sin{\\dfrac{\\pi r}{2n}}=\\dfrac{(n-1)!}{n^{n-1}}\\cdot\\dfrac{\\sqrt{n}}{2^{n-1}}\\cdot \\dfrac{(2\\pi)^{\\frac{n-1}{2}}}{\\sqrt{n}}=\\left(\\dfrac{\\sqrt{2\\pi}}{2n}\\right)^{n-1}(n-1)!$$\nBy stirling formula \n we have\n$$\\lim_{n\\to+\\infty}(\\sqrt{n}P_{n})^{\\frac{1}{n}}=\\dfrac{\\sqrt{2\\pi}}{2e}$$</p>\n" }, { "answer_id": 2094925, "score": 4, "text": "<p>The integral $I(a)=\\int_0^{\\infty}\\sin(x^a)=\\Im\\int_0^{\\infty}\\exp(ix^a)$ is easily calulated by using the analyticality of the integrand: Rotate the the contour of integration by an angle of $\\frac{\\pi}{2a}$ to get $I(a)=\\Im\\left(e^{\\frac{i\\pi}{2a}}\\int_0^{\\infty}e^{-x^a} \\right)$ or </p>\n\n<blockquote>\n 
<p>$$\nI(a)=\\frac{1}{a}\\Gamma\\left(\\frac{1}{a}\\right)\\sin\\left(\\frac{\\pi}{2a}\\right)\n$$</p>\n</blockquote>\n\n<p>Now let us have a look at </p>\n\n<p>$$\nP_n=\\left(n^{\\frac12}\\prod_{r=1}^{n-1}I\\left(\\frac{n}r\\right)\\right)^{\\frac1n}=n^{\\frac1{2n}}\\exp\\left(\\frac1n\\underbrace{\\sum_{r=1}^{n-1}\\log\\left(I\\left(\\frac{n}{r}\\right)\\right)}_{s_{n-1}}\\right)\n$$</p>\n\n<p>Now we observe that $I(1)=0$ so $s_n=s_{n-1}$ so </p>\n\n<p>$$\n\\frac{1}{n}s_{n-1}=\\frac{1}{n}s_n=\\frac{1}{n}\\sum_{r=1}^{n}\\log\\left(\\frac{r}{n}\\sin\\left(\\frac{\\pi r}{2n}\\right)\\Gamma\\left(\\frac{r}{n}\\right)\\right)\n$$</p>\n\n<p>which is the defintion of a Riemannian integral. Since also the prefactor in $P_n$ is finite ($n^{\\frac{1}{2n}}\\sim_{\\infty}1$) we can split the limit</p>\n\n<blockquote>\n <p>$$\nL=\\lim_{n\\rightarrow\\infty}P_n=\\lim_{n\\rightarrow\\infty}n^{\\frac{1}{2n}}\\lim_{n\\rightarrow\\infty}e^{\\frac{1}{n}s_n}=1\\cdot e^J \\,\\,\\quad \\left(\\star\\right)\n$$</p>\n</blockquote>\n\n<p>where $J$ is given by</p>\n\n<p>$$\nJ=\\int_0^1\\log(I(x))dx=\\\\\\int_0^1\\log(x)dx+\\int_0^1\\log\\left(\\sin\\left(\\frac{\\pi }{2}x\\right)\\right)dx+\\int_0^1\\log\\left(\\Gamma(x)\\right)dx\n$$</p>\n\n<p>or </p>\n\n<p>$$\nJ=-1-\\log(2)+\\frac{1}{2}\\log(2\\pi)\n$$</p>\n\n<p>where we used a classic result by <a href=\"https://math.stackexchange.com/questions/1338801/how-to-prove-raabes-formula\">Raabe</a> as well as <a href=\"https://math.stackexchange.com/questions/37829/computing-the-integral-of-log-sin-x\">this all time favourite</a>. 
</p>\n\n<p>We now can conclude from $\\left(\\star\\right)$ that</p>\n\n<blockquote>\n <p>$$\nL=\\frac{\\sqrt{2\\pi}}{2e}\n$$</p>\n</blockquote>\n\n<p>in accordance with @math110</p>\n" }, { "answer_id": 2094758, "score": 6, "text": "<p>since\n$$T_{\\frac{r}{n}}=\\int_{0}^{+\\infty}\\sin{(x^{\\frac{r}{n}})}dx=\\Gamma\\left(\\dfrac{r}{n}+1\\right)\\sin{\\dfrac{\\pi r}{2n}}=\\dfrac{r}{n}\\Gamma\\left(\\dfrac{r}{n}\\right)\\sin{\\dfrac{\\pi r}{2n}},r=1,2,,\\cdots,n-1$$\nproof see:<a href=\"https://de.wikibooks.org/wiki/Formelsammlung_Mathematik:_Bestimmte_Integrale:_Form_R(x,sin)\" rel=\"nofollow noreferrer\">3.1</a></p>\n\n<p><strong>Lemma $1$:</strong></p>\n\n<p>$$\\prod_{i=1}^{n-1}\\sin{\\dfrac{i\\pi}{2n}}=\\dfrac{\\sqrt{n}}{2^{n-1}}$$\nUse this well known\n$$z^{n-1}+z^{n-2}+\\cdots+z+1=\\prod_{k=1}^{n-1}(z-(z-z_{k})$$\nlet $z=1$ then we have\n$$n=\\prod_{i=1}^{n-1}\\left|1-\\cos{\\dfrac{2k\\pi}{n}}+i\\sin{\\dfrac{2k\\pi}{n}}\\right|$$\nsince\n$$\\left|1-\\cos{\\dfrac{2k\\pi}{n}}+i\\sin{\\dfrac{2k\\pi}{n}}\\right|=2\\sin{\\dfrac{k\\pi}{n}}$$\n$$\\prod_{i=1}^{n-1}\\sin{\\dfrac{i\\pi}{n}}=\\dfrac{n}{2^{n-1}}$$\nand\n$$\\dfrac{2n}{2^{2n-1}}=\\prod_{i=1}^{2n-1}\\sin{\\dfrac{i\\pi}{2n}}=\\left(\\prod_{i=1}^{n-1}\\sin{\\dfrac{i\\pi}{2n}}\\right)^2$$\nBy done!</p>\n\n<p><strong>Lemma 2:</strong> $$f=\\Gamma\\left(\\dfrac{1}{n}\\right)\\Gamma\\left(\\dfrac{2}{n}\\right)\\cdots\\Gamma\\left(\\dfrac{n-1}{n}\\right)=\\dfrac{(2\\pi)^{\\frac{n-1}{2}}}{\\sqrt{n}}$$</p>\n\n<p>since\n$$f^2=\\left(\\Gamma\\left(\\dfrac{1}{n}\\right)\\Gamma\\left(\\dfrac{n-1}{n}\\right)\\right)\\cdot \\left(\\Gamma\\left(\\dfrac{2}{n}\\right)\\Gamma\\left(\\dfrac{n-2}{n}\\right)\\right)\\cdots\\left(\\Gamma\\left(\\dfrac{n-1}{n}\\right)\\Gamma\\left(\\dfrac{1}{n}\\right)\\right)=\\dfrac{\\pi}{\\sin{\\frac{\\pi}{n}}}\\cdot\\dfrac{2\\pi}{\\sin{\\frac{2\\pi}{n}}}\\cdots\\dfrac{\\pi}{\\sin{\\dfrac{(n-1)\\pi}{n}}}$$\nso we have \n$$f=\\dfrac{(2\\pi)^{\\frac{n-1}{2}}}{\\sqrt{n}}$$\nso we 
have\n$$P_{n}=\\prod_{r=1}^{n-1}T_{\\frac{r}{n}}=\\prod_{r=1}^{n-1}\\dfrac{r}{n}\\Gamma\\left(\\dfrac{r}{n}\\right)\\sin{\\dfrac{\\pi r}{2n}}=\\dfrac{(n-1)!}{n^{n-1}}\\cdot\\dfrac{\\sqrt{n}}{2^{n-1}}\\cdot \\dfrac{(2\\pi)^{\\frac{n-1}{2}}}{\\sqrt{n}}=\\left(\\dfrac{\\sqrt{2\\pi}}{2n}\\right)^{n-1}(n-1)!$$\nBy stirling formula \n we have\n$$\\lim_{n\\to+\\infty}(\\sqrt{n}P_{n})^{\\frac{1}{n}}=\\dfrac{\\sqrt{2\\pi}}{2e}$$</p>\n" }, { "answer_id": 2094925, "score": 4, "text": "<p>The integral $I(a)=\\int_0^{\\infty}\\sin(x^a)=\\Im\\int_0^{\\infty}\\exp(ix^a)$ is easily calulated by using the analyticality of the integrand: Rotate the the contour of integration by an angle of $\\frac{\\pi}{2a}$ to get $I(a)=\\Im\\left(e^{\\frac{i\\pi}{2a}}\\int_0^{\\infty}e^{-x^a} \\right)$ or </p>\n\n<blockquote>\n <p>$$\nI(a)=\\frac{1}{a}\\Gamma\\left(\\frac{1}{a}\\right)\\sin\\left(\\frac{\\pi}{2a}\\right)\n$$</p>\n</blockquote>\n\n<p>Now let us have a look at </p>\n\n<p>$$\nP_n=\\left(n^{\\frac12}\\prod_{r=1}^{n-1}I\\left(\\frac{n}r\\right)\\right)^{\\frac1n}=n^{\\frac1{2n}}\\exp\\left(\\frac1n\\underbrace{\\sum_{r=1}^{n-1}\\log\\left(I\\left(\\frac{n}{r}\\right)\\right)}_{s_{n-1}}\\right)\n$$</p>\n\n<p>Now we observe that $I(1)=0$ so $s_n=s_{n-1}$ so </p>\n\n<p>$$\n\\frac{1}{n}s_{n-1}=\\frac{1}{n}s_n=\\frac{1}{n}\\sum_{r=1}^{n}\\log\\left(\\frac{r}{n}\\sin\\left(\\frac{\\pi r}{2n}\\right)\\Gamma\\left(\\frac{r}{n}\\right)\\right)\n$$</p>\n\n<p>which is the defintion of a Riemannian integral. 
Since also the prefactor in $P_n$ is finite ($n^{\\frac{1}{2n}}\\sim_{\\infty}1$) we can split the limit</p>\n\n<blockquote>\n <p>$$\nL=\\lim_{n\\rightarrow\\infty}P_n=\\lim_{n\\rightarrow\\infty}n^{\\frac{1}{2n}}\\lim_{n\\rightarrow\\infty}e^{\\frac{1}{n}s_n}=1\\cdot e^J \\,\\,\\quad \\left(\\star\\right)\n$$</p>\n</blockquote>\n\n<p>where $J$ is given by</p>\n\n<p>$$\nJ=\\int_0^1\\log(I(x))dx=\\\\\\int_0^1\\log(x)dx+\\int_0^1\\log\\left(\\sin\\left(\\frac{\\pi }{2}x\\right)\\right)dx+\\int_0^1\\log\\left(\\Gamma(x)\\right)dx\n$$</p>\n\n<p>or </p>\n\n<p>$$\nJ=-1-\\log(2)+\\frac{1}{2}\\log(2\\pi)\n$$</p>\n\n<p>where we used a classic result by <a href=\"https://math.stackexchange.com/questions/1338801/how-to-prove-raabes-formula\">Raabe</a> as well as <a href=\"https://math.stackexchange.com/questions/37829/computing-the-integral-of-log-sin-x\">this all time favourite</a>. </p>\n\n<p>We now can conclude from $\\left(\\star\\right)$ that</p>\n\n<blockquote>\n <p>$$\nL=\\frac{\\sqrt{2\\pi}}{2e}\n$$</p>\n</blockquote>\n\n<p>in accordance with @math110</p>\n" }, { "answer_id": 2094758, "score": 6, "text": "<p>since\n$$T_{\\frac{r}{n}}=\\int_{0}^{+\\infty}\\sin{(x^{\\frac{r}{n}})}dx=\\Gamma\\left(\\dfrac{r}{n}+1\\right)\\sin{\\dfrac{\\pi r}{2n}}=\\dfrac{r}{n}\\Gamma\\left(\\dfrac{r}{n}\\right)\\sin{\\dfrac{\\pi r}{2n}},r=1,2,,\\cdots,n-1$$\nproof see:<a href=\"https://de.wikibooks.org/wiki/Formelsammlung_Mathematik:_Bestimmte_Integrale:_Form_R(x,sin)\" rel=\"nofollow noreferrer\">3.1</a></p>\n\n<p><strong>Lemma $1$:</strong></p>\n\n<p>$$\\prod_{i=1}^{n-1}\\sin{\\dfrac{i\\pi}{2n}}=\\dfrac{\\sqrt{n}}{2^{n-1}}$$\nUse this well known\n$$z^{n-1}+z^{n-2}+\\cdots+z+1=\\prod_{k=1}^{n-1}(z-(z-z_{k})$$\nlet $z=1$ then we 
have\n$$n=\\prod_{i=1}^{n-1}\\left|1-\\cos{\\dfrac{2k\\pi}{n}}+i\\sin{\\dfrac{2k\\pi}{n}}\\right|$$\nsince\n$$\\left|1-\\cos{\\dfrac{2k\\pi}{n}}+i\\sin{\\dfrac{2k\\pi}{n}}\\right|=2\\sin{\\dfrac{k\\pi}{n}}$$\n$$\\prod_{i=1}^{n-1}\\sin{\\dfrac{i\\pi}{n}}=\\dfrac{n}{2^{n-1}}$$\nand\n$$\\dfrac{2n}{2^{2n-1}}=\\prod_{i=1}^{2n-1}\\sin{\\dfrac{i\\pi}{2n}}=\\left(\\prod_{i=1}^{n-1}\\sin{\\dfrac{i\\pi}{2n}}\\right)^2$$\nBy done!</p>\n\n<p><strong>Lemma 2:</strong> $$f=\\Gamma\\left(\\dfrac{1}{n}\\right)\\Gamma\\left(\\dfrac{2}{n}\\right)\\cdots\\Gamma\\left(\\dfrac{n-1}{n}\\right)=\\dfrac{(2\\pi)^{\\frac{n-1}{2}}}{\\sqrt{n}}$$</p>\n\n<p>since\n$$f^2=\\left(\\Gamma\\left(\\dfrac{1}{n}\\right)\\Gamma\\left(\\dfrac{n-1}{n}\\right)\\right)\\cdot \\left(\\Gamma\\left(\\dfrac{2}{n}\\right)\\Gamma\\left(\\dfrac{n-2}{n}\\right)\\right)\\cdots\\left(\\Gamma\\left(\\dfrac{n-1}{n}\\right)\\Gamma\\left(\\dfrac{1}{n}\\right)\\right)=\\dfrac{\\pi}{\\sin{\\frac{\\pi}{n}}}\\cdot\\dfrac{2\\pi}{\\sin{\\frac{2\\pi}{n}}}\\cdots\\dfrac{\\pi}{\\sin{\\dfrac{(n-1)\\pi}{n}}}$$\nso we have \n$$f=\\dfrac{(2\\pi)^{\\frac{n-1}{2}}}{\\sqrt{n}}$$\nso we have\n$$P_{n}=\\prod_{r=1}^{n-1}T_{\\frac{r}{n}}=\\prod_{r=1}^{n-1}\\dfrac{r}{n}\\Gamma\\left(\\dfrac{r}{n}\\right)\\sin{\\dfrac{\\pi r}{2n}}=\\dfrac{(n-1)!}{n^{n-1}}\\cdot\\dfrac{\\sqrt{n}}{2^{n-1}}\\cdot \\dfrac{(2\\pi)^{\\frac{n-1}{2}}}{\\sqrt{n}}=\\left(\\dfrac{\\sqrt{2\\pi}}{2n}\\right)^{n-1}(n-1)!$$\nBy stirling formula \n we have\n$$\\lim_{n\\to+\\infty}(\\sqrt{n}P_{n})^{\\frac{1}{n}}=\\dfrac{\\sqrt{2\\pi}}{2e}$$</p>\n" }, { "answer_id": 2094925, "score": 4, "text": "<p>The integral $I(a)=\\int_0^{\\infty}\\sin(x^a)=\\Im\\int_0^{\\infty}\\exp(ix^a)$ is easily calulated by using the analyticality of the integrand: Rotate the the contour of integration by an angle of $\\frac{\\pi}{2a}$ to get $I(a)=\\Im\\left(e^{\\frac{i\\pi}{2a}}\\int_0^{\\infty}e^{-x^a} \\right)$ or </p>\n\n<blockquote>\n 
<p>$$\nI(a)=\\frac{1}{a}\\Gamma\\left(\\frac{1}{a}\\right)\\sin\\left(\\frac{\\pi}{2a}\\right)\n$$</p>\n</blockquote>\n\n<p>Now let us have a look at </p>\n\n<p>$$\nP_n=\\left(n^{\\frac12}\\prod_{r=1}^{n-1}I\\left(\\frac{n}r\\right)\\right)^{\\frac1n}=n^{\\frac1{2n}}\\exp\\left(\\frac1n\\underbrace{\\sum_{r=1}^{n-1}\\log\\left(I\\left(\\frac{n}{r}\\right)\\right)}_{s_{n-1}}\\right)\n$$</p>\n\n<p>Now we observe that $I(1)=0$ so $s_n=s_{n-1}$ so </p>\n\n<p>$$\n\\frac{1}{n}s_{n-1}=\\frac{1}{n}s_n=\\frac{1}{n}\\sum_{r=1}^{n}\\log\\left(\\frac{r}{n}\\sin\\left(\\frac{\\pi r}{2n}\\right)\\Gamma\\left(\\frac{r}{n}\\right)\\right)\n$$</p>\n\n<p>which is the defintion of a Riemannian integral. Since also the prefactor in $P_n$ is finite ($n^{\\frac{1}{2n}}\\sim_{\\infty}1$) we can split the limit</p>\n\n<blockquote>\n <p>$$\nL=\\lim_{n\\rightarrow\\infty}P_n=\\lim_{n\\rightarrow\\infty}n^{\\frac{1}{2n}}\\lim_{n\\rightarrow\\infty}e^{\\frac{1}{n}s_n}=1\\cdot e^J \\,\\,\\quad \\left(\\star\\right)\n$$</p>\n</blockquote>\n\n<p>where $J$ is given by</p>\n\n<p>$$\nJ=\\int_0^1\\log(I(x))dx=\\\\\\int_0^1\\log(x)dx+\\int_0^1\\log\\left(\\sin\\left(\\frac{\\pi }{2}x\\right)\\right)dx+\\int_0^1\\log\\left(\\Gamma(x)\\right)dx\n$$</p>\n\n<p>or </p>\n\n<p>$$\nJ=-1-\\log(2)+\\frac{1}{2}\\log(2\\pi)\n$$</p>\n\n<p>where we used a classic result by <a href=\"https://math.stackexchange.com/questions/1338801/how-to-prove-raabes-formula\">Raabe</a> as well as <a href=\"https://math.stackexchange.com/questions/37829/computing-the-integral-of-log-sin-x\">this all time favourite</a>. 
</p>\n\n<p>We now can conclude from $\\left(\\star\\right)$ that</p>\n\n<blockquote>\n <p>$$\nL=\\frac{\\sqrt{2\\pi}}{2e}\n$$</p>\n</blockquote>\n\n<p>in accordance with @math110</p>\n" }, { "answer_id": 2094758, "score": 6, "text": "<p>since\n$$T_{\\frac{r}{n}}=\\int_{0}^{+\\infty}\\sin{(x^{\\frac{r}{n}})}dx=\\Gamma\\left(\\dfrac{r}{n}+1\\right)\\sin{\\dfrac{\\pi r}{2n}}=\\dfrac{r}{n}\\Gamma\\left(\\dfrac{r}{n}\\right)\\sin{\\dfrac{\\pi r}{2n}},r=1,2,,\\cdots,n-1$$\nproof see:<a href=\"https://de.wikibooks.org/wiki/Formelsammlung_Mathematik:_Bestimmte_Integrale:_Form_R(x,sin)\" rel=\"nofollow noreferrer\">3.1</a></p>\n\n<p><strong>Lemma $1$:</strong></p>\n\n<p>$$\\prod_{i=1}^{n-1}\\sin{\\dfrac{i\\pi}{2n}}=\\dfrac{\\sqrt{n}}{2^{n-1}}$$\nUse this well known\n$$z^{n-1}+z^{n-2}+\\cdots+z+1=\\prod_{k=1}^{n-1}(z-(z-z_{k})$$\nlet $z=1$ then we have\n$$n=\\prod_{i=1}^{n-1}\\left|1-\\cos{\\dfrac{2k\\pi}{n}}+i\\sin{\\dfrac{2k\\pi}{n}}\\right|$$\nsince\n$$\\left|1-\\cos{\\dfrac{2k\\pi}{n}}+i\\sin{\\dfrac{2k\\pi}{n}}\\right|=2\\sin{\\dfrac{k\\pi}{n}}$$\n$$\\prod_{i=1}^{n-1}\\sin{\\dfrac{i\\pi}{n}}=\\dfrac{n}{2^{n-1}}$$\nand\n$$\\dfrac{2n}{2^{2n-1}}=\\prod_{i=1}^{2n-1}\\sin{\\dfrac{i\\pi}{2n}}=\\left(\\prod_{i=1}^{n-1}\\sin{\\dfrac{i\\pi}{2n}}\\right)^2$$\nBy done!</p>\n\n<p><strong>Lemma 2:</strong> $$f=\\Gamma\\left(\\dfrac{1}{n}\\right)\\Gamma\\left(\\dfrac{2}{n}\\right)\\cdots\\Gamma\\left(\\dfrac{n-1}{n}\\right)=\\dfrac{(2\\pi)^{\\frac{n-1}{2}}}{\\sqrt{n}}$$</p>\n\n<p>since\n$$f^2=\\left(\\Gamma\\left(\\dfrac{1}{n}\\right)\\Gamma\\left(\\dfrac{n-1}{n}\\right)\\right)\\cdot \\left(\\Gamma\\left(\\dfrac{2}{n}\\right)\\Gamma\\left(\\dfrac{n-2}{n}\\right)\\right)\\cdots\\left(\\Gamma\\left(\\dfrac{n-1}{n}\\right)\\Gamma\\left(\\dfrac{1}{n}\\right)\\right)=\\dfrac{\\pi}{\\sin{\\frac{\\pi}{n}}}\\cdot\\dfrac{2\\pi}{\\sin{\\frac{2\\pi}{n}}}\\cdots\\dfrac{\\pi}{\\sin{\\dfrac{(n-1)\\pi}{n}}}$$\nso we have \n$$f=\\dfrac{(2\\pi)^{\\frac{n-1}{2}}}{\\sqrt{n}}$$\nso we 
have\n$$P_{n}=\\prod_{r=1}^{n-1}T_{\\frac{r}{n}}=\\prod_{r=1}^{n-1}\\dfrac{r}{n}\\Gamma\\left(\\dfrac{r}{n}\\right)\\sin{\\dfrac{\\pi r}{2n}}=\\dfrac{(n-1)!}{n^{n-1}}\\cdot\\dfrac{\\sqrt{n}}{2^{n-1}}\\cdot \\dfrac{(2\\pi)^{\\frac{n-1}{2}}}{\\sqrt{n}}=\\left(\\dfrac{\\sqrt{2\\pi}}{2n}\\right)^{n-1}(n-1)!$$\nBy stirling formula \n we have\n$$\\lim_{n\\to+\\infty}(\\sqrt{n}P_{n})^{\\frac{1}{n}}=\\dfrac{\\sqrt{2\\pi}}{2e}$$</p>\n" }, { "answer_id": 2094925, "score": 4, "text": "<p>The integral $I(a)=\\int_0^{\\infty}\\sin(x^a)=\\Im\\int_0^{\\infty}\\exp(ix^a)$ is easily calulated by using the analyticality of the integrand: Rotate the the contour of integration by an angle of $\\frac{\\pi}{2a}$ to get $I(a)=\\Im\\left(e^{\\frac{i\\pi}{2a}}\\int_0^{\\infty}e^{-x^a} \\right)$ or </p>\n\n<blockquote>\n <p>$$\nI(a)=\\frac{1}{a}\\Gamma\\left(\\frac{1}{a}\\right)\\sin\\left(\\frac{\\pi}{2a}\\right)\n$$</p>\n</blockquote>\n\n<p>Now let us have a look at </p>\n\n<p>$$\nP_n=\\left(n^{\\frac12}\\prod_{r=1}^{n-1}I\\left(\\frac{n}r\\right)\\right)^{\\frac1n}=n^{\\frac1{2n}}\\exp\\left(\\frac1n\\underbrace{\\sum_{r=1}^{n-1}\\log\\left(I\\left(\\frac{n}{r}\\right)\\right)}_{s_{n-1}}\\right)\n$$</p>\n\n<p>Now we observe that $I(1)=0$ so $s_n=s_{n-1}$ so </p>\n\n<p>$$\n\\frac{1}{n}s_{n-1}=\\frac{1}{n}s_n=\\frac{1}{n}\\sum_{r=1}^{n}\\log\\left(\\frac{r}{n}\\sin\\left(\\frac{\\pi r}{2n}\\right)\\Gamma\\left(\\frac{r}{n}\\right)\\right)\n$$</p>\n\n<p>which is the defintion of a Riemannian integral. 
Since also the prefactor in $P_n$ is finite ($n^{\\frac{1}{2n}}\\sim_{\\infty}1$) we can split the limit</p>\n\n<blockquote>\n <p>$$\nL=\\lim_{n\\rightarrow\\infty}P_n=\\lim_{n\\rightarrow\\infty}n^{\\frac{1}{2n}}\\lim_{n\\rightarrow\\infty}e^{\\frac{1}{n}s_n}=1\\cdot e^J \\,\\,\\quad \\left(\\star\\right)\n$$</p>\n</blockquote>\n\n<p>where $J$ is given by</p>\n\n<p>$$\nJ=\\int_0^1\\log(I(x))dx=\\\\\\int_0^1\\log(x)dx+\\int_0^1\\log\\left(\\sin\\left(\\frac{\\pi }{2}x\\right)\\right)dx+\\int_0^1\\log\\left(\\Gamma(x)\\right)dx\n$$</p>\n\n<p>or </p>\n\n<p>$$\nJ=-1-\\log(2)+\\frac{1}{2}\\log(2\\pi)\n$$</p>\n\n<p>where we used a classic result by <a href=\"https://math.stackexchange.com/questions/1338801/how-to-prove-raabes-formula\">Raabe</a> as well as <a href=\"https://math.stackexchange.com/questions/37829/computing-the-integral-of-log-sin-x\">this all time favourite</a>. </p>\n\n<p>We now can conclude from $\\left(\\star\\right)$ that</p>\n\n<blockquote>\n <p>$$\nL=\\frac{\\sqrt{2\\pi}}{2e}\n$$</p>\n</blockquote>\n\n<p>in accordance with @math110</p>\n" }, { "answer_id": 2094758, "score": 6, "text": "<p>since\n$$T_{\\frac{r}{n}}=\\int_{0}^{+\\infty}\\sin{(x^{\\frac{r}{n}})}dx=\\Gamma\\left(\\dfrac{r}{n}+1\\right)\\sin{\\dfrac{\\pi r}{2n}}=\\dfrac{r}{n}\\Gamma\\left(\\dfrac{r}{n}\\right)\\sin{\\dfrac{\\pi r}{2n}},r=1,2,,\\cdots,n-1$$\nproof see:<a href=\"https://de.wikibooks.org/wiki/Formelsammlung_Mathematik:_Bestimmte_Integrale:_Form_R(x,sin)\" rel=\"nofollow noreferrer\">3.1</a></p>\n\n<p><strong>Lemma $1$:</strong></p>\n\n<p>$$\\prod_{i=1}^{n-1}\\sin{\\dfrac{i\\pi}{2n}}=\\dfrac{\\sqrt{n}}{2^{n-1}}$$\nUse this well known\n$$z^{n-1}+z^{n-2}+\\cdots+z+1=\\prod_{k=1}^{n-1}(z-(z-z_{k})$$\nlet $z=1$ then we 
have\n$$n=\\prod_{i=1}^{n-1}\\left|1-\\cos{\\dfrac{2k\\pi}{n}}+i\\sin{\\dfrac{2k\\pi}{n}}\\right|$$\nsince\n$$\\left|1-\\cos{\\dfrac{2k\\pi}{n}}+i\\sin{\\dfrac{2k\\pi}{n}}\\right|=2\\sin{\\dfrac{k\\pi}{n}}$$\n$$\\prod_{i=1}^{n-1}\\sin{\\dfrac{i\\pi}{n}}=\\dfrac{n}{2^{n-1}}$$\nand\n$$\\dfrac{2n}{2^{2n-1}}=\\prod_{i=1}^{2n-1}\\sin{\\dfrac{i\\pi}{2n}}=\\left(\\prod_{i=1}^{n-1}\\sin{\\dfrac{i\\pi}{2n}}\\right)^2$$\nBy done!</p>\n\n<p><strong>Lemma 2:</strong> $$f=\\Gamma\\left(\\dfrac{1}{n}\\right)\\Gamma\\left(\\dfrac{2}{n}\\right)\\cdots\\Gamma\\left(\\dfrac{n-1}{n}\\right)=\\dfrac{(2\\pi)^{\\frac{n-1}{2}}}{\\sqrt{n}}$$</p>\n\n<p>since\n$$f^2=\\left(\\Gamma\\left(\\dfrac{1}{n}\\right)\\Gamma\\left(\\dfrac{n-1}{n}\\right)\\right)\\cdot \\left(\\Gamma\\left(\\dfrac{2}{n}\\right)\\Gamma\\left(\\dfrac{n-2}{n}\\right)\\right)\\cdots\\left(\\Gamma\\left(\\dfrac{n-1}{n}\\right)\\Gamma\\left(\\dfrac{1}{n}\\right)\\right)=\\dfrac{\\pi}{\\sin{\\frac{\\pi}{n}}}\\cdot\\dfrac{2\\pi}{\\sin{\\frac{2\\pi}{n}}}\\cdots\\dfrac{\\pi}{\\sin{\\dfrac{(n-1)\\pi}{n}}}$$\nso we have \n$$f=\\dfrac{(2\\pi)^{\\frac{n-1}{2}}}{\\sqrt{n}}$$\nso we have\n$$P_{n}=\\prod_{r=1}^{n-1}T_{\\frac{r}{n}}=\\prod_{r=1}^{n-1}\\dfrac{r}{n}\\Gamma\\left(\\dfrac{r}{n}\\right)\\sin{\\dfrac{\\pi r}{2n}}=\\dfrac{(n-1)!}{n^{n-1}}\\cdot\\dfrac{\\sqrt{n}}{2^{n-1}}\\cdot \\dfrac{(2\\pi)^{\\frac{n-1}{2}}}{\\sqrt{n}}=\\left(\\dfrac{\\sqrt{2\\pi}}{2n}\\right)^{n-1}(n-1)!$$\nBy stirling formula \n we have\n$$\\lim_{n\\to+\\infty}(\\sqrt{n}P_{n})^{\\frac{1}{n}}=\\dfrac{\\sqrt{2\\pi}}{2e}$$</p>\n" }, { "answer_id": 2094925, "score": 4, "text": "<p>The integral $I(a)=\\int_0^{\\infty}\\sin(x^a)=\\Im\\int_0^{\\infty}\\exp(ix^a)$ is easily calulated by using the analyticality of the integrand: Rotate the the contour of integration by an angle of $\\frac{\\pi}{2a}$ to get $I(a)=\\Im\\left(e^{\\frac{i\\pi}{2a}}\\int_0^{\\infty}e^{-x^a} \\right)$ or </p>\n\n<blockquote>\n 
<p>$$\nI(a)=\\frac{1}{a}\\Gamma\\left(\\frac{1}{a}\\right)\\sin\\left(\\frac{\\pi}{2a}\\right)\n$$</p>\n</blockquote>\n\n<p>Now let us have a look at </p>\n\n<p>$$\nP_n=\\left(n^{\\frac12}\\prod_{r=1}^{n-1}I\\left(\\frac{n}r\\right)\\right)^{\\frac1n}=n^{\\frac1{2n}}\\exp\\left(\\frac1n\\underbrace{\\sum_{r=1}^{n-1}\\log\\left(I\\left(\\frac{n}{r}\\right)\\right)}_{s_{n-1}}\\right)\n$$</p>\n\n<p>Now we observe that $\\log\\left(I(1)\\right)=0$ so $s_n=s_{n-1}$ and </p>\n\n<p>$$\n\\frac{1}{n}s_{n-1}=\\frac{1}{n}s_n=\\frac{1}{n}\\sum_{r=1}^{n}\\log\\left(\\frac{r}{n}\\sin\\left(\\frac{\\pi r}{2n}\\right)\\Gamma\\left(\\frac{r}{n}\\right)\\right)\n$$</p>\n\n<p>which is the definition of a Riemann integral. Since also the prefactor in $P_n$ is finite ($n^{\\frac{1}{2n}}\\sim_{\\infty}1$) we can split the limit</p>\n\n<blockquote>\n <p>$$\nL=\\lim_{n\\rightarrow\\infty}P_n=\\lim_{n\\rightarrow\\infty}n^{\\frac{1}{2n}}\\lim_{n\\rightarrow\\infty}e^{\\frac{1}{n}s_n}=1\\cdot e^J \\,\\,\\quad \\left(\\star\\right)\n$$</p>\n</blockquote>\n\n<p>where $J$ is given by</p>\n\n<p>$$\nJ=\\int_0^1\\log(I(x))dx=\\\\\\int_0^1\\log(x)dx+\\int_0^1\\log\\left(\\sin\\left(\\frac{\\pi }{2}x\\right)\\right)dx+\\int_0^1\\log\\left(\\Gamma(x)\\right)dx\n$$</p>\n\n<p>or </p>\n\n<p>$$\nJ=-1-\\log(2)+\\frac{1}{2}\\log(2\\pi)\n$$</p>\n\n<p>where we used a classic result by <a href=\"https://math.stackexchange.com/questions/1338801/how-to-prove-raabes-formula\">Raabe</a> as well as <a href=\"https://math.stackexchange.com/questions/37829/computing-the-integral-of-log-sin-x\">this all time favourite</a>.
</p>\n\n<p>We now can conclude from $\\left(\\star\\right)$ that</p>\n\n<blockquote>\n <p>$$\nL=\\frac{\\sqrt{2\\pi}}{2e}\n$$</p>\n</blockquote>\n\n<p>in accordance with @math110</p>\n" }
]
Question: Very interesting integral limit Answer 1: <p>since $$T_{\frac{r}{n}}=\int_{0}^{+\infty}\sin{(x^{\frac{r}{n}})}dx=\Gamma\left(\dfrac{r}{n}+1\right)\sin{\dfrac{\pi r}{2n}}=\dfrac{r}{n}\Gamma\left(\dfrac{r}{n}\right)\sin{\dfrac{\pi r}{2n}},r=1,2,\cdots,n-1$$ proof see:<a href="https://de.wikibooks.org/wiki/Formelsammlung_Mathematik:_Bestimmte_Integrale:_Form_R(x,sin)" rel="nofollow noreferrer">3.1</a></p> <p><strong>Lemma $1$:</strong></p> <p>$$\prod_{i=1}^{n-1}\sin{\dfrac{i\pi}{2n}}=\dfrac{\sqrt{n}}{2^{n-1}}$$ Use the well-known factorization $$z^{n-1}+z^{n-2}+\cdots+z+1=\prod_{k=1}^{n-1}(z-z_{k})$$ where the $z_{k}$ are the $n$-th roots of unity other than $1$; let $z=1$ then we have $$n=\prod_{k=1}^{n-1}\left|1-\cos{\dfrac{2k\pi}{n}}+i\sin{\dfrac{2k\pi}{n}}\right|$$ since $$\left|1-\cos{\dfrac{2k\pi}{n}}+i\sin{\dfrac{2k\pi}{n}}\right|=2\sin{\dfrac{k\pi}{n}}$$ it follows that $$\prod_{i=1}^{n-1}\sin{\dfrac{i\pi}{n}}=\dfrac{n}{2^{n-1}}$$ and $$\dfrac{2n}{2^{2n-1}}=\prod_{i=1}^{2n-1}\sin{\dfrac{i\pi}{2n}}=\left(\prod_{i=1}^{n-1}\sin{\dfrac{i\pi}{2n}}\right)^2$$ Done!</p> <p><strong>Lemma 2:</strong> $$f=\Gamma\left(\dfrac{1}{n}\right)\Gamma\left(\dfrac{2}{n}\right)\cdots\Gamma\left(\dfrac{n-1}{n}\right)=\dfrac{(2\pi)^{\frac{n-1}{2}}}{\sqrt{n}}$$</p> <p>By the reflection formula $\Gamma(x)\Gamma(1-x)=\frac{\pi}{\sin(\pi x)}$, $$f^2=\left(\Gamma\left(\dfrac{1}{n}\right)\Gamma\left(\dfrac{n-1}{n}\right)\right)\cdot \left(\Gamma\left(\dfrac{2}{n}\right)\Gamma\left(\dfrac{n-2}{n}\right)\right)\cdots\left(\Gamma\left(\dfrac{n-1}{n}\right)\Gamma\left(\dfrac{1}{n}\right)\right)=\dfrac{\pi}{\sin{\frac{\pi}{n}}}\cdot\dfrac{\pi}{\sin{\frac{2\pi}{n}}}\cdots\dfrac{\pi}{\sin{\dfrac{(n-1)\pi}{n}}}$$ so we have $$f=\dfrac{(2\pi)^{\frac{n-1}{2}}}{\sqrt{n}}$$ and therefore $$P_{n}=\prod_{r=1}^{n-1}T_{\frac{r}{n}}=\prod_{r=1}^{n-1}\dfrac{r}{n}\Gamma\left(\dfrac{r}{n}\right)\sin{\dfrac{\pi r}{2n}}=\dfrac{(n-1)!}{n^{n-1}}\cdot\dfrac{\sqrt{n}}{2^{n-1}}\cdot \dfrac{(2\pi)^{\frac{n-1}{2}}}{\sqrt{n}}=\left(\dfrac{\sqrt{2\pi}}{2n}\right)^{n-1}(n-1)!$$ By Stirling's formula we have
$$\lim_{n\to+\infty}(\sqrt{n}P_{n})^{\frac{1}{n}}=\dfrac{\sqrt{2\pi}}{2e}$$</p> Answer 2: <p>The integral $I(a)=\int_0^{\infty}\sin(x^a)\,dx=\Im\int_0^{\infty}\exp(ix^a)\,dx$ is easily calculated by using the analyticity of the integrand: Rotate the contour of integration by an angle of $\frac{\pi}{2a}$ to get $I(a)=\Im\left(e^{\frac{i\pi}{2a}}\int_0^{\infty}e^{-x^a}\,dx \right)$ or </p> <blockquote> <p>$$ I(a)=\frac{1}{a}\Gamma\left(\frac{1}{a}\right)\sin\left(\frac{\pi}{2a}\right) $$</p> </blockquote> <p>Now let us have a look at </p> <p>$$ P_n=\left(n^{\frac12}\prod_{r=1}^{n-1}I\left(\frac{n}r\right)\right)^{\frac1n}=n^{\frac1{2n}}\exp\left(\frac1n\underbrace{\sum_{r=1}^{n-1}\log\left(I\left(\frac{n}{r}\right)\right)}_{s_{n-1}}\right) $$</p> <p>Now we observe that $\log\left(I(1)\right)=0$ so $s_n=s_{n-1}$ and </p> <p>$$ \frac{1}{n}s_{n-1}=\frac{1}{n}s_n=\frac{1}{n}\sum_{r=1}^{n}\log\left(\frac{r}{n}\sin\left(\frac{\pi r}{2n}\right)\Gamma\left(\frac{r}{n}\right)\right) $$</p> <p>which is the definition of a Riemann integral. Since also the prefactor in $P_n$ is finite ($n^{\frac{1}{2n}}\sim_{\infty}1$) we can split the limit</p> <blockquote> <p>$$ L=\lim_{n\rightarrow\infty}P_n=\lim_{n\rightarrow\infty}n^{\frac{1}{2n}}\lim_{n\rightarrow\infty}e^{\frac{1}{n}s_n}=1\cdot e^J \,\,\quad \left(\star\right) $$</p> </blockquote> <p>where $J$ is given by</p> <p>$$ J=\int_0^1\log(I(x))dx=\\\int_0^1\log(x)dx+\int_0^1\log\left(\sin\left(\frac{\pi }{2}x\right)\right)dx+\int_0^1\log\left(\Gamma(x)\right)dx $$</p> <p>or </p> <p>$$ J=-1-\log(2)+\frac{1}{2}\log(2\pi) $$</p> <p>where we used a classic result by <a href="https://math.stackexchange.com/questions/1338801/how-to-prove-raabes-formula">Raabe</a> as well as <a href="https://math.stackexchange.com/questions/37829/computing-the-integral-of-log-sin-x">this all time favourite</a>.
</p> <p>We now can conclude from $\left(\star\right)$ that</p> <blockquote> <p>$$ L=\frac{\sqrt{2\pi}}{2e} $$</p> </blockquote> <p>in accordance with @math110</p> Answer 5: <p>since $$T_{\frac{r}{n}}=\int_{0}^{+\infty}\sin{(x^{\frac{r}{n}})}dx=\Gamma\left(\dfrac{r}{n}+1\right)\sin{\dfrac{\pi r}{2n}}=\dfrac{r}{n}\Gamma\left(\dfrac{r}{n}\right)\sin{\dfrac{\pi r}{2n}},r=1,2,,\cdots,n-1$$ proof see:<a href="https://de.wikibooks.org/wiki/Formelsammlung_Mathematik:_Bestimmte_Integrale:_Form_R(x,sin)" rel="nofollow noreferrer">3.1</a></p> <p><strong>Lemma $1$:</strong></p> <p>$$\prod_{i=1}^{n-1}\sin{\dfrac{i\pi}{2n}}=\dfrac{\sqrt{n}}{2^{n-1}}$$ Use this well known $$z^{n-1}+z^{n-2}+\cdots+z+1=\prod_{k=1}^{n-1}(z-(z-z_{k})$$ let $z=1$ then we have $$n=\prod_{i=1}^{n-1}\left|1-\cos{\dfrac{2k\pi}{n}}+i\sin{\dfrac{2k\pi}{n}}\right|$$ since $$\left|1-\cos{\dfrac{2k\pi}{n}}+i\sin{\dfrac{2k\pi}{n}}\right|=2\sin{\dfrac{k\pi}{n}}$$ $$\prod_{i=1}^{n-1}\sin{\dfrac{i\pi}{n}}=\dfrac{n}{2^{n-1}}$$ and $$\dfrac{2n}{2^{2n-1}}=\prod_{i=1}^{2n-1}\sin{\dfrac{i\pi}{2n}}=\left(\prod_{i=1}^{n-1}\sin{\dfrac{i\pi}{2n}}\right)^2$$ By done!</p> <p><strong>Lemma 2:</strong> $$f=\Gamma\left(\dfrac{1}{n}\right)\Gamma\left(\dfrac{2}{n}\right)\cdots\Gamma\left(\dfrac{n-1}{n}\right)=\dfrac{(2\pi)^{\frac{n-1}{2}}}{\sqrt{n}}$$</p> <p>since $$f^2=\left(\Gamma\left(\dfrac{1}{n}\right)\Gamma\left(\dfrac{n-1}{n}\right)\right)\cdot \left(\Gamma\left(\dfrac{2}{n}\right)\Gamma\left(\dfrac{n-2}{n}\right)\right)\cdots\left(\Gamma\left(\dfrac{n-1}{n}\right)\Gamma\left(\dfrac{1}{n}\right)\right)=\dfrac{\pi}{\sin{\frac{\pi}{n}}}\cdot\dfrac{2\pi}{\sin{\frac{2\pi}{n}}}\cdots\dfrac{\pi}{\sin{\dfrac{(n-1)\pi}{n}}}$$ so we have $$f=\dfrac{(2\pi)^{\frac{n-1}{2}}}{\sqrt{n}}$$ so we have $$P_{n}=\prod_{r=1}^{n-1}T_{\frac{r}{n}}=\prod_{r=1}^{n-1}\dfrac{r}{n}\Gamma\left(\dfrac{r}{n}\right)\sin{\dfrac{\pi r}{2n}}=\dfrac{(n-1)!}{n^{n-1}}\cdot\dfrac{\sqrt{n}}{2^{n-1}}\cdot
\dfrac{(2\pi)^{\frac{n-1}{2}}}{\sqrt{n}}=\left(\dfrac{\sqrt{2\pi}}{2n}\right)^{n-1}(n-1)!$$ By stirling formula we have $$\lim_{n\to+\infty}(\sqrt{n}P_{n})^{\frac{1}{n}}=\dfrac{\sqrt{2\pi}}{2e}$$</p> Answer 6: <p>The integral $I(a)=\int_0^{\infty}\sin(x^a)=\Im\int_0^{\infty}\exp(ix^a)$ is easily calulated by using the analyticality of the integrand: Rotate the the contour of integration by an angle of $\frac{\pi}{2a}$ to get $I(a)=\Im\left(e^{\frac{i\pi}{2a}}\int_0^{\infty}e^{-x^a} \right)$ or </p> <blockquote> <p>$$ I(a)=\frac{1}{a}\Gamma\left(\frac{1}{a}\right)\sin\left(\frac{\pi}{2a}\right) $$</p> </blockquote> <p>Now let us have a look at </p> <p>$$ P_n=\left(n^{\frac12}\prod_{r=1}^{n-1}I\left(\frac{n}r\right)\right)^{\frac1n}=n^{\frac1{2n}}\exp\left(\frac1n\underbrace{\sum_{r=1}^{n-1}\log\left(I\left(\frac{n}{r}\right)\right)}_{s_{n-1}}\right) $$</p> <p>Now we observe that $I(1)=0$ so $s_n=s_{n-1}$ so </p> <p>$$ \frac{1}{n}s_{n-1}=\frac{1}{n}s_n=\frac{1}{n}\sum_{r=1}^{n}\log\left(\frac{r}{n}\sin\left(\frac{\pi r}{2n}\right)\Gamma\left(\frac{r}{n}\right)\right) $$</p> <p>which is the defintion of a Riemannian integral. Since also the prefactor in $P_n$ is finite ($n^{\frac{1}{2n}}\sim_{\infty}1$) we can split the limit</p> <blockquote> <p>$$ L=\lim_{n\rightarrow\infty}P_n=\lim_{n\rightarrow\infty}n^{\frac{1}{2n}}\lim_{n\rightarrow\infty}e^{\frac{1}{n}s_n}=1\cdot e^J \,\,\quad \left(\star\right) $$</p> </blockquote> <p>where $J$ is given by</p> <p>$$ J=\int_0^1\log(I(x))dx=\\\int_0^1\log(x)dx+\int_0^1\log\left(\sin\left(\frac{\pi }{2}x\right)\right)dx+\int_0^1\log\left(\Gamma(x)\right)dx $$</p> <p>or </p> <p>$$ J=-1-\log(2)+\frac{1}{2}\log(2\pi) $$</p> <p>where we used a classic result by <a href="https://math.stackexchange.com/questions/1338801/how-to-prove-raabes-formula">Raabe</a> as well as <a href="https://math.stackexchange.com/questions/37829/computing-the-integral-of-log-sin-x">this all time favourite</a>. 
</p> <p>We now can conclude from $\left(\star\right)$ that</p> <blockquote> <p>$$ L=\frac{\sqrt{2\pi}}{2e} $$</p> </blockquote> <p>in accordance with @math110</p> Answer 7: <p>since $$T_{\frac{r}{n}}=\int_{0}^{+\infty}\sin{(x^{\frac{r}{n}})}dx=\Gamma\left(\dfrac{r}{n}+1\right)\sin{\dfrac{\pi r}{2n}}=\dfrac{r}{n}\Gamma\left(\dfrac{r}{n}\right)\sin{\dfrac{\pi r}{2n}},r=1,2,,\cdots,n-1$$ proof see:<a href="https://de.wikibooks.org/wiki/Formelsammlung_Mathematik:_Bestimmte_Integrale:_Form_R(x,sin)" rel="nofollow noreferrer">3.1</a></p> <p><strong>Lemma $1$:</strong></p> <p>$$\prod_{i=1}^{n-1}\sin{\dfrac{i\pi}{2n}}=\dfrac{\sqrt{n}}{2^{n-1}}$$ Use this well known $$z^{n-1}+z^{n-2}+\cdots+z+1=\prod_{k=1}^{n-1}(z-(z-z_{k})$$ let $z=1$ then we have $$n=\prod_{i=1}^{n-1}\left|1-\cos{\dfrac{2k\pi}{n}}+i\sin{\dfrac{2k\pi}{n}}\right|$$ since $$\left|1-\cos{\dfrac{2k\pi}{n}}+i\sin{\dfrac{2k\pi}{n}}\right|=2\sin{\dfrac{k\pi}{n}}$$ $$\prod_{i=1}^{n-1}\sin{\dfrac{i\pi}{n}}=\dfrac{n}{2^{n-1}}$$ and $$\dfrac{2n}{2^{2n-1}}=\prod_{i=1}^{2n-1}\sin{\dfrac{i\pi}{2n}}=\left(\prod_{i=1}^{n-1}\sin{\dfrac{i\pi}{2n}}\right)^2$$ By done!</p> <p><strong>Lemma 2:</strong> $$f=\Gamma\left(\dfrac{1}{n}\right)\Gamma\left(\dfrac{2}{n}\right)\cdots\Gamma\left(\dfrac{n-1}{n}\right)=\dfrac{(2\pi)^{\frac{n-1}{2}}}{\sqrt{n}}$$</p> <p>since $$f^2=\left(\Gamma\left(\dfrac{1}{n}\right)\Gamma\left(\dfrac{n-1}{n}\right)\right)\cdot \left(\Gamma\left(\dfrac{2}{n}\right)\Gamma\left(\dfrac{n-2}{n}\right)\right)\cdots\left(\Gamma\left(\dfrac{n-1}{n}\right)\Gamma\left(\dfrac{1}{n}\right)\right)=\dfrac{\pi}{\sin{\frac{\pi}{n}}}\cdot\dfrac{2\pi}{\sin{\frac{2\pi}{n}}}\cdots\dfrac{\pi}{\sin{\dfrac{(n-1)\pi}{n}}}$$ so we have $$f=\dfrac{(2\pi)^{\frac{n-1}{2}}}{\sqrt{n}}$$ so we have $$P_{n}=\prod_{r=1}^{n-1}T_{\frac{r}{n}}=\prod_{r=1}^{n-1}\dfrac{r}{n}\Gamma\left(\dfrac{r}{n}\right)\sin{\dfrac{\pi r}{2n}}=\dfrac{(n-1)!}{n^{n-1}}\cdot\dfrac{\sqrt{n}}{2^{n-1}}\cdot 
\dfrac{(2\pi)^{\frac{n-1}{2}}}{\sqrt{n}}=\left(\dfrac{\sqrt{2\pi}}{2n}\right)^{n-1}(n-1)!$$ By stirling formula we have $$\lim_{n\to+\infty}(\sqrt{n}P_{n})^{\frac{1}{n}}=\dfrac{\sqrt{2\pi}}{2e}$$</p> Answer 8: <p>The integral $I(a)=\int_0^{\infty}\sin(x^a)=\Im\int_0^{\infty}\exp(ix^a)$ is easily calulated by using the analyticality of the integrand: Rotate the the contour of integration by an angle of $\frac{\pi}{2a}$ to get $I(a)=\Im\left(e^{\frac{i\pi}{2a}}\int_0^{\infty}e^{-x^a} \right)$ or </p> <blockquote> <p>$$ I(a)=\frac{1}{a}\Gamma\left(\frac{1}{a}\right)\sin\left(\frac{\pi}{2a}\right) $$</p> </blockquote> <p>Now let us have a look at </p> <p>$$ P_n=\left(n^{\frac12}\prod_{r=1}^{n-1}I\left(\frac{n}r\right)\right)^{\frac1n}=n^{\frac1{2n}}\exp\left(\frac1n\underbrace{\sum_{r=1}^{n-1}\log\left(I\left(\frac{n}{r}\right)\right)}_{s_{n-1}}\right) $$</p> <p>Now we observe that $I(1)=0$ so $s_n=s_{n-1}$ so </p> <p>$$ \frac{1}{n}s_{n-1}=\frac{1}{n}s_n=\frac{1}{n}\sum_{r=1}^{n}\log\left(\frac{r}{n}\sin\left(\frac{\pi r}{2n}\right)\Gamma\left(\frac{r}{n}\right)\right) $$</p> <p>which is the defintion of a Riemannian integral. Since also the prefactor in $P_n$ is finite ($n^{\frac{1}{2n}}\sim_{\infty}1$) we can split the limit</p> <blockquote> <p>$$ L=\lim_{n\rightarrow\infty}P_n=\lim_{n\rightarrow\infty}n^{\frac{1}{2n}}\lim_{n\rightarrow\infty}e^{\frac{1}{n}s_n}=1\cdot e^J \,\,\quad \left(\star\right) $$</p> </blockquote> <p>where $J$ is given by</p> <p>$$ J=\int_0^1\log(I(x))dx=\\\int_0^1\log(x)dx+\int_0^1\log\left(\sin\left(\frac{\pi }{2}x\right)\right)dx+\int_0^1\log\left(\Gamma(x)\right)dx $$</p> <p>or </p> <p>$$ J=-1-\log(2)+\frac{1}{2}\log(2\pi) $$</p> <p>where we used a classic result by <a href="https://math.stackexchange.com/questions/1338801/how-to-prove-raabes-formula">Raabe</a> as well as <a href="https://math.stackexchange.com/questions/37829/computing-the-integral-of-log-sin-x">this all time favourite</a>. 
</p> <p>We now can conclude from $\left(\star\right)$ that</p> <blockquote> <p>$$ L=\frac{\sqrt{2\pi}}{2e} $$</p> </blockquote> <p>in accordance with @math110</p> Answer 9: <p>since $$T_{\frac{r}{n}}=\int_{0}^{+\infty}\sin{(x^{\frac{r}{n}})}dx=\Gamma\left(\dfrac{r}{n}+1\right)\sin{\dfrac{\pi r}{2n}}=\dfrac{r}{n}\Gamma\left(\dfrac{r}{n}\right)\sin{\dfrac{\pi r}{2n}},r=1,2,,\cdots,n-1$$ proof see:<a href="https://de.wikibooks.org/wiki/Formelsammlung_Mathematik:_Bestimmte_Integrale:_Form_R(x,sin)" rel="nofollow noreferrer">3.1</a></p> <p><strong>Lemma $1$:</strong></p> <p>$$\prod_{i=1}^{n-1}\sin{\dfrac{i\pi}{2n}}=\dfrac{\sqrt{n}}{2^{n-1}}$$ Use this well known $$z^{n-1}+z^{n-2}+\cdots+z+1=\prod_{k=1}^{n-1}(z-(z-z_{k})$$ let $z=1$ then we have $$n=\prod_{i=1}^{n-1}\left|1-\cos{\dfrac{2k\pi}{n}}+i\sin{\dfrac{2k\pi}{n}}\right|$$ since $$\left|1-\cos{\dfrac{2k\pi}{n}}+i\sin{\dfrac{2k\pi}{n}}\right|=2\sin{\dfrac{k\pi}{n}}$$ $$\prod_{i=1}^{n-1}\sin{\dfrac{i\pi}{n}}=\dfrac{n}{2^{n-1}}$$ and $$\dfrac{2n}{2^{2n-1}}=\prod_{i=1}^{2n-1}\sin{\dfrac{i\pi}{2n}}=\left(\prod_{i=1}^{n-1}\sin{\dfrac{i\pi}{2n}}\right)^2$$ By done!</p> <p><strong>Lemma 2:</strong> $$f=\Gamma\left(\dfrac{1}{n}\right)\Gamma\left(\dfrac{2}{n}\right)\cdots\Gamma\left(\dfrac{n-1}{n}\right)=\dfrac{(2\pi)^{\frac{n-1}{2}}}{\sqrt{n}}$$</p> <p>since $$f^2=\left(\Gamma\left(\dfrac{1}{n}\right)\Gamma\left(\dfrac{n-1}{n}\right)\right)\cdot \left(\Gamma\left(\dfrac{2}{n}\right)\Gamma\left(\dfrac{n-2}{n}\right)\right)\cdots\left(\Gamma\left(\dfrac{n-1}{n}\right)\Gamma\left(\dfrac{1}{n}\right)\right)=\dfrac{\pi}{\sin{\frac{\pi}{n}}}\cdot\dfrac{2\pi}{\sin{\frac{2\pi}{n}}}\cdots\dfrac{\pi}{\sin{\dfrac{(n-1)\pi}{n}}}$$ so we have $$f=\dfrac{(2\pi)^{\frac{n-1}{2}}}{\sqrt{n}}$$ so we have $$P_{n}=\prod_{r=1}^{n-1}T_{\frac{r}{n}}=\prod_{r=1}^{n-1}\dfrac{r}{n}\Gamma\left(\dfrac{r}{n}\right)\sin{\dfrac{\pi r}{2n}}=\dfrac{(n-1)!}{n^{n-1}}\cdot\dfrac{\sqrt{n}}{2^{n-1}}\cdot 
\dfrac{(2\pi)^{\frac{n-1}{2}}}{\sqrt{n}}=\left(\dfrac{\sqrt{2\pi}}{2n}\right)^{n-1}(n-1)!$$ By stirling formula we have $$\lim_{n\to+\infty}(\sqrt{n}P_{n})^{\frac{1}{n}}=\dfrac{\sqrt{2\pi}}{2e}$$</p> Answer 10: <p>The integral $I(a)=\int_0^{\infty}\sin(x^a)=\Im\int_0^{\infty}\exp(ix^a)$ is easily calulated by using the analyticality of the integrand: Rotate the the contour of integration by an angle of $\frac{\pi}{2a}$ to get $I(a)=\Im\left(e^{\frac{i\pi}{2a}}\int_0^{\infty}e^{-x^a} \right)$ or </p> <blockquote> <p>$$ I(a)=\frac{1}{a}\Gamma\left(\frac{1}{a}\right)\sin\left(\frac{\pi}{2a}\right) $$</p> </blockquote> <p>Now let us have a look at </p> <p>$$ P_n=\left(n^{\frac12}\prod_{r=1}^{n-1}I\left(\frac{n}r\right)\right)^{\frac1n}=n^{\frac1{2n}}\exp\left(\frac1n\underbrace{\sum_{r=1}^{n-1}\log\left(I\left(\frac{n}{r}\right)\right)}_{s_{n-1}}\right) $$</p> <p>Now we observe that $I(1)=0$ so $s_n=s_{n-1}$ so </p> <p>$$ \frac{1}{n}s_{n-1}=\frac{1}{n}s_n=\frac{1}{n}\sum_{r=1}^{n}\log\left(\frac{r}{n}\sin\left(\frac{\pi r}{2n}\right)\Gamma\left(\frac{r}{n}\right)\right) $$</p> <p>which is the defintion of a Riemannian integral. Since also the prefactor in $P_n$ is finite ($n^{\frac{1}{2n}}\sim_{\infty}1$) we can split the limit</p> <blockquote> <p>$$ L=\lim_{n\rightarrow\infty}P_n=\lim_{n\rightarrow\infty}n^{\frac{1}{2n}}\lim_{n\rightarrow\infty}e^{\frac{1}{n}s_n}=1\cdot e^J \,\,\quad \left(\star\right) $$</p> </blockquote> <p>where $J$ is given by</p> <p>$$ J=\int_0^1\log(I(x))dx=\\\int_0^1\log(x)dx+\int_0^1\log\left(\sin\left(\frac{\pi }{2}x\right)\right)dx+\int_0^1\log\left(\Gamma(x)\right)dx $$</p> <p>or </p> <p>$$ J=-1-\log(2)+\frac{1}{2}\log(2\pi) $$</p> <p>where we used a classic result by <a href="https://math.stackexchange.com/questions/1338801/how-to-prove-raabes-formula">Raabe</a> as well as <a href="https://math.stackexchange.com/questions/37829/computing-the-integral-of-log-sin-x">this all time favourite</a>. 
</p> <p>We now can conclude from $\left(\star\right)$ that</p> <blockquote> <p>$$ L=\frac{\sqrt{2\pi}}{2e} $$</p> </blockquote> <p>in accordance with @math110</p> Answer 11: <p>since $$T_{\frac{r}{n}}=\int_{0}^{+\infty}\sin{(x^{\frac{r}{n}})}dx=\Gamma\left(\dfrac{r}{n}+1\right)\sin{\dfrac{\pi r}{2n}}=\dfrac{r}{n}\Gamma\left(\dfrac{r}{n}\right)\sin{\dfrac{\pi r}{2n}},r=1,2,,\cdots,n-1$$ proof see:<a href="https://de.wikibooks.org/wiki/Formelsammlung_Mathematik:_Bestimmte_Integrale:_Form_R(x,sin)" rel="nofollow noreferrer">3.1</a></p> <p><strong>Lemma $1$:</strong></p> <p>$$\prod_{i=1}^{n-1}\sin{\dfrac{i\pi}{2n}}=\dfrac{\sqrt{n}}{2^{n-1}}$$ Use this well known $$z^{n-1}+z^{n-2}+\cdots+z+1=\prod_{k=1}^{n-1}(z-(z-z_{k})$$ let $z=1$ then we have $$n=\prod_{i=1}^{n-1}\left|1-\cos{\dfrac{2k\pi}{n}}+i\sin{\dfrac{2k\pi}{n}}\right|$$ since $$\left|1-\cos{\dfrac{2k\pi}{n}}+i\sin{\dfrac{2k\pi}{n}}\right|=2\sin{\dfrac{k\pi}{n}}$$ $$\prod_{i=1}^{n-1}\sin{\dfrac{i\pi}{n}}=\dfrac{n}{2^{n-1}}$$ and $$\dfrac{2n}{2^{2n-1}}=\prod_{i=1}^{2n-1}\sin{\dfrac{i\pi}{2n}}=\left(\prod_{i=1}^{n-1}\sin{\dfrac{i\pi}{2n}}\right)^2$$ By done!</p> <p><strong>Lemma 2:</strong> $$f=\Gamma\left(\dfrac{1}{n}\right)\Gamma\left(\dfrac{2}{n}\right)\cdots\Gamma\left(\dfrac{n-1}{n}\right)=\dfrac{(2\pi)^{\frac{n-1}{2}}}{\sqrt{n}}$$</p> <p>since $$f^2=\left(\Gamma\left(\dfrac{1}{n}\right)\Gamma\left(\dfrac{n-1}{n}\right)\right)\cdot \left(\Gamma\left(\dfrac{2}{n}\right)\Gamma\left(\dfrac{n-2}{n}\right)\right)\cdots\left(\Gamma\left(\dfrac{n-1}{n}\right)\Gamma\left(\dfrac{1}{n}\right)\right)=\dfrac{\pi}{\sin{\frac{\pi}{n}}}\cdot\dfrac{2\pi}{\sin{\frac{2\pi}{n}}}\cdots\dfrac{\pi}{\sin{\dfrac{(n-1)\pi}{n}}}$$ so we have $$f=\dfrac{(2\pi)^{\frac{n-1}{2}}}{\sqrt{n}}$$ so we have $$P_{n}=\prod_{r=1}^{n-1}T_{\frac{r}{n}}=\prod_{r=1}^{n-1}\dfrac{r}{n}\Gamma\left(\dfrac{r}{n}\right)\sin{\dfrac{\pi r}{2n}}=\dfrac{(n-1)!}{n^{n-1}}\cdot\dfrac{\sqrt{n}}{2^{n-1}}\cdot 
\dfrac{(2\pi)^{\frac{n-1}{2}}}{\sqrt{n}}=\left(\dfrac{\sqrt{2\pi}}{2n}\right)^{n-1}(n-1)!$$ By stirling formula we have $$\lim_{n\to+\infty}(\sqrt{n}P_{n})^{\frac{1}{n}}=\dfrac{\sqrt{2\pi}}{2e}$$</p> Answer 12: <p>The integral $I(a)=\int_0^{\infty}\sin(x^a)=\Im\int_0^{\infty}\exp(ix^a)$ is easily calulated by using the analyticality of the integrand: Rotate the the contour of integration by an angle of $\frac{\pi}{2a}$ to get $I(a)=\Im\left(e^{\frac{i\pi}{2a}}\int_0^{\infty}e^{-x^a} \right)$ or </p> <blockquote> <p>$$ I(a)=\frac{1}{a}\Gamma\left(\frac{1}{a}\right)\sin\left(\frac{\pi}{2a}\right) $$</p> </blockquote> <p>Now let us have a look at </p> <p>$$ P_n=\left(n^{\frac12}\prod_{r=1}^{n-1}I\left(\frac{n}r\right)\right)^{\frac1n}=n^{\frac1{2n}}\exp\left(\frac1n\underbrace{\sum_{r=1}^{n-1}\log\left(I\left(\frac{n}{r}\right)\right)}_{s_{n-1}}\right) $$</p> <p>Now we observe that $I(1)=0$ so $s_n=s_{n-1}$ so </p> <p>$$ \frac{1}{n}s_{n-1}=\frac{1}{n}s_n=\frac{1}{n}\sum_{r=1}^{n}\log\left(\frac{r}{n}\sin\left(\frac{\pi r}{2n}\right)\Gamma\left(\frac{r}{n}\right)\right) $$</p> <p>which is the defintion of a Riemannian integral. Since also the prefactor in $P_n$ is finite ($n^{\frac{1}{2n}}\sim_{\infty}1$) we can split the limit</p> <blockquote> <p>$$ L=\lim_{n\rightarrow\infty}P_n=\lim_{n\rightarrow\infty}n^{\frac{1}{2n}}\lim_{n\rightarrow\infty}e^{\frac{1}{n}s_n}=1\cdot e^J \,\,\quad \left(\star\right) $$</p> </blockquote> <p>where $J$ is given by</p> <p>$$ J=\int_0^1\log(I(x))dx=\\\int_0^1\log(x)dx+\int_0^1\log\left(\sin\left(\frac{\pi }{2}x\right)\right)dx+\int_0^1\log\left(\Gamma(x)\right)dx $$</p> <p>or </p> <p>$$ J=-1-\log(2)+\frac{1}{2}\log(2\pi) $$</p> <p>where we used a classic result by <a href="https://math.stackexchange.com/questions/1338801/how-to-prove-raabes-formula">Raabe</a> as well as <a href="https://math.stackexchange.com/questions/37829/computing-the-integral-of-log-sin-x">this all time favourite</a>. 
</p> <p>We now can conclude from $\left(\star\right)$ that</p> <blockquote> <p>$$ L=\frac{\sqrt{2\pi}}{2e} $$</p> </blockquote> <p>in accordance with @math110</p> Answer 13: <p>since $$T_{\frac{r}{n}}=\int_{0}^{+\infty}\sin{(x^{\frac{r}{n}})}dx=\Gamma\left(\dfrac{r}{n}+1\right)\sin{\dfrac{\pi r}{2n}}=\dfrac{r}{n}\Gamma\left(\dfrac{r}{n}\right)\sin{\dfrac{\pi r}{2n}},r=1,2,,\cdots,n-1$$ proof see:<a href="https://de.wikibooks.org/wiki/Formelsammlung_Mathematik:_Bestimmte_Integrale:_Form_R(x,sin)" rel="nofollow noreferrer">3.1</a></p> <p><strong>Lemma $1$:</strong></p> <p>$$\prod_{i=1}^{n-1}\sin{\dfrac{i\pi}{2n}}=\dfrac{\sqrt{n}}{2^{n-1}}$$ Use this well known $$z^{n-1}+z^{n-2}+\cdots+z+1=\prod_{k=1}^{n-1}(z-(z-z_{k})$$ let $z=1$ then we have $$n=\prod_{i=1}^{n-1}\left|1-\cos{\dfrac{2k\pi}{n}}+i\sin{\dfrac{2k\pi}{n}}\right|$$ since $$\left|1-\cos{\dfrac{2k\pi}{n}}+i\sin{\dfrac{2k\pi}{n}}\right|=2\sin{\dfrac{k\pi}{n}}$$ $$\prod_{i=1}^{n-1}\sin{\dfrac{i\pi}{n}}=\dfrac{n}{2^{n-1}}$$ and $$\dfrac{2n}{2^{2n-1}}=\prod_{i=1}^{2n-1}\sin{\dfrac{i\pi}{2n}}=\left(\prod_{i=1}^{n-1}\sin{\dfrac{i\pi}{2n}}\right)^2$$ By done!</p> <p><strong>Lemma 2:</strong> $$f=\Gamma\left(\dfrac{1}{n}\right)\Gamma\left(\dfrac{2}{n}\right)\cdots\Gamma\left(\dfrac{n-1}{n}\right)=\dfrac{(2\pi)^{\frac{n-1}{2}}}{\sqrt{n}}$$</p> <p>since $$f^2=\left(\Gamma\left(\dfrac{1}{n}\right)\Gamma\left(\dfrac{n-1}{n}\right)\right)\cdot \left(\Gamma\left(\dfrac{2}{n}\right)\Gamma\left(\dfrac{n-2}{n}\right)\right)\cdots\left(\Gamma\left(\dfrac{n-1}{n}\right)\Gamma\left(\dfrac{1}{n}\right)\right)=\dfrac{\pi}{\sin{\frac{\pi}{n}}}\cdot\dfrac{2\pi}{\sin{\frac{2\pi}{n}}}\cdots\dfrac{\pi}{\sin{\dfrac{(n-1)\pi}{n}}}$$ so we have $$f=\dfrac{(2\pi)^{\frac{n-1}{2}}}{\sqrt{n}}$$ so we have $$P_{n}=\prod_{r=1}^{n-1}T_{\frac{r}{n}}=\prod_{r=1}^{n-1}\dfrac{r}{n}\Gamma\left(\dfrac{r}{n}\right)\sin{\dfrac{\pi r}{2n}}=\dfrac{(n-1)!}{n^{n-1}}\cdot\dfrac{\sqrt{n}}{2^{n-1}}\cdot 
\dfrac{(2\pi)^{\frac{n-1}{2}}}{\sqrt{n}}=\left(\dfrac{\sqrt{2\pi}}{2n}\right)^{n-1}(n-1)!$$ By stirling formula we have $$\lim_{n\to+\infty}(\sqrt{n}P_{n})^{\frac{1}{n}}=\dfrac{\sqrt{2\pi}}{2e}$$</p> Answer 14: <p>The integral $I(a)=\int_0^{\infty}\sin(x^a)=\Im\int_0^{\infty}\exp(ix^a)$ is easily calulated by using the analyticality of the integrand: Rotate the the contour of integration by an angle of $\frac{\pi}{2a}$ to get $I(a)=\Im\left(e^{\frac{i\pi}{2a}}\int_0^{\infty}e^{-x^a} \right)$ or </p> <blockquote> <p>$$ I(a)=\frac{1}{a}\Gamma\left(\frac{1}{a}\right)\sin\left(\frac{\pi}{2a}\right) $$</p> </blockquote> <p>Now let us have a look at </p> <p>$$ P_n=\left(n^{\frac12}\prod_{r=1}^{n-1}I\left(\frac{n}r\right)\right)^{\frac1n}=n^{\frac1{2n}}\exp\left(\frac1n\underbrace{\sum_{r=1}^{n-1}\log\left(I\left(\frac{n}{r}\right)\right)}_{s_{n-1}}\right) $$</p> <p>Now we observe that $I(1)=0$ so $s_n=s_{n-1}$ so </p> <p>$$ \frac{1}{n}s_{n-1}=\frac{1}{n}s_n=\frac{1}{n}\sum_{r=1}^{n}\log\left(\frac{r}{n}\sin\left(\frac{\pi r}{2n}\right)\Gamma\left(\frac{r}{n}\right)\right) $$</p> <p>which is the defintion of a Riemannian integral. Since also the prefactor in $P_n$ is finite ($n^{\frac{1}{2n}}\sim_{\infty}1$) we can split the limit</p> <blockquote> <p>$$ L=\lim_{n\rightarrow\infty}P_n=\lim_{n\rightarrow\infty}n^{\frac{1}{2n}}\lim_{n\rightarrow\infty}e^{\frac{1}{n}s_n}=1\cdot e^J \,\,\quad \left(\star\right) $$</p> </blockquote> <p>where $J$ is given by</p> <p>$$ J=\int_0^1\log(I(x))dx=\\\int_0^1\log(x)dx+\int_0^1\log\left(\sin\left(\frac{\pi }{2}x\right)\right)dx+\int_0^1\log\left(\Gamma(x)\right)dx $$</p> <p>or </p> <p>$$ J=-1-\log(2)+\frac{1}{2}\log(2\pi) $$</p> <p>where we used a classic result by <a href="https://math.stackexchange.com/questions/1338801/how-to-prove-raabes-formula">Raabe</a> as well as <a href="https://math.stackexchange.com/questions/37829/computing-the-integral-of-log-sin-x">this all time favourite</a>. 
</p> <p>We now can conclude from $\left(\star\right)$ that</p> <blockquote> <p>$$ L=\frac{\sqrt{2\pi}}{2e} $$</p> </blockquote> <p>in accordance with @math110</p> Answer 15: <p>since $$T_{\frac{r}{n}}=\int_{0}^{+\infty}\sin{(x^{\frac{r}{n}})}dx=\Gamma\left(\dfrac{r}{n}+1\right)\sin{\dfrac{\pi r}{2n}}=\dfrac{r}{n}\Gamma\left(\dfrac{r}{n}\right)\sin{\dfrac{\pi r}{2n}},r=1,2,,\cdots,n-1$$ proof see:<a href="https://de.wikibooks.org/wiki/Formelsammlung_Mathematik:_Bestimmte_Integrale:_Form_R(x,sin)" rel="nofollow noreferrer">3.1</a></p> <p><strong>Lemma $1$:</strong></p> <p>$$\prod_{i=1}^{n-1}\sin{\dfrac{i\pi}{2n}}=\dfrac{\sqrt{n}}{2^{n-1}}$$ Use this well known $$z^{n-1}+z^{n-2}+\cdots+z+1=\prod_{k=1}^{n-1}(z-(z-z_{k})$$ let $z=1$ then we have $$n=\prod_{i=1}^{n-1}\left|1-\cos{\dfrac{2k\pi}{n}}+i\sin{\dfrac{2k\pi}{n}}\right|$$ since $$\left|1-\cos{\dfrac{2k\pi}{n}}+i\sin{\dfrac{2k\pi}{n}}\right|=2\sin{\dfrac{k\pi}{n}}$$ $$\prod_{i=1}^{n-1}\sin{\dfrac{i\pi}{n}}=\dfrac{n}{2^{n-1}}$$ and $$\dfrac{2n}{2^{2n-1}}=\prod_{i=1}^{2n-1}\sin{\dfrac{i\pi}{2n}}=\left(\prod_{i=1}^{n-1}\sin{\dfrac{i\pi}{2n}}\right)^2$$ By done!</p> <p><strong>Lemma 2:</strong> $$f=\Gamma\left(\dfrac{1}{n}\right)\Gamma\left(\dfrac{2}{n}\right)\cdots\Gamma\left(\dfrac{n-1}{n}\right)=\dfrac{(2\pi)^{\frac{n-1}{2}}}{\sqrt{n}}$$</p> <p>since $$f^2=\left(\Gamma\left(\dfrac{1}{n}\right)\Gamma\left(\dfrac{n-1}{n}\right)\right)\cdot \left(\Gamma\left(\dfrac{2}{n}\right)\Gamma\left(\dfrac{n-2}{n}\right)\right)\cdots\left(\Gamma\left(\dfrac{n-1}{n}\right)\Gamma\left(\dfrac{1}{n}\right)\right)=\dfrac{\pi}{\sin{\frac{\pi}{n}}}\cdot\dfrac{2\pi}{\sin{\frac{2\pi}{n}}}\cdots\dfrac{\pi}{\sin{\dfrac{(n-1)\pi}{n}}}$$ so we have $$f=\dfrac{(2\pi)^{\frac{n-1}{2}}}{\sqrt{n}}$$ so we have $$P_{n}=\prod_{r=1}^{n-1}T_{\frac{r}{n}}=\prod_{r=1}^{n-1}\dfrac{r}{n}\Gamma\left(\dfrac{r}{n}\right)\sin{\dfrac{\pi r}{2n}}=\dfrac{(n-1)!}{n^{n-1}}\cdot\dfrac{\sqrt{n}}{2^{n-1}}\cdot 
\dfrac{(2\pi)^{\frac{n-1}{2}}}{\sqrt{n}}=\left(\dfrac{\sqrt{2\pi}}{2n}\right)^{n-1}(n-1)!$$ By stirling formula we have $$\lim_{n\to+\infty}(\sqrt{n}P_{n})^{\frac{1}{n}}=\dfrac{\sqrt{2\pi}}{2e}$$</p> Answer 16: <p>The integral $I(a)=\int_0^{\infty}\sin(x^a)=\Im\int_0^{\infty}\exp(ix^a)$ is easily calulated by using the analyticality of the integrand: Rotate the the contour of integration by an angle of $\frac{\pi}{2a}$ to get $I(a)=\Im\left(e^{\frac{i\pi}{2a}}\int_0^{\infty}e^{-x^a} \right)$ or </p> <blockquote> <p>$$ I(a)=\frac{1}{a}\Gamma\left(\frac{1}{a}\right)\sin\left(\frac{\pi}{2a}\right) $$</p> </blockquote> <p>Now let us have a look at </p> <p>$$ P_n=\left(n^{\frac12}\prod_{r=1}^{n-1}I\left(\frac{n}r\right)\right)^{\frac1n}=n^{\frac1{2n}}\exp\left(\frac1n\underbrace{\sum_{r=1}^{n-1}\log\left(I\left(\frac{n}{r}\right)\right)}_{s_{n-1}}\right) $$</p> <p>Now we observe that $I(1)=0$ so $s_n=s_{n-1}$ so </p> <p>$$ \frac{1}{n}s_{n-1}=\frac{1}{n}s_n=\frac{1}{n}\sum_{r=1}^{n}\log\left(\frac{r}{n}\sin\left(\frac{\pi r}{2n}\right)\Gamma\left(\frac{r}{n}\right)\right) $$</p> <p>which is the defintion of a Riemannian integral. Since also the prefactor in $P_n$ is finite ($n^{\frac{1}{2n}}\sim_{\infty}1$) we can split the limit</p> <blockquote> <p>$$ L=\lim_{n\rightarrow\infty}P_n=\lim_{n\rightarrow\infty}n^{\frac{1}{2n}}\lim_{n\rightarrow\infty}e^{\frac{1}{n}s_n}=1\cdot e^J \,\,\quad \left(\star\right) $$</p> </blockquote> <p>where $J$ is given by</p> <p>$$ J=\int_0^1\log(I(x))dx=\\\int_0^1\log(x)dx+\int_0^1\log\left(\sin\left(\frac{\pi }{2}x\right)\right)dx+\int_0^1\log\left(\Gamma(x)\right)dx $$</p> <p>or </p> <p>$$ J=-1-\log(2)+\frac{1}{2}\log(2\pi) $$</p> <p>where we used a classic result by <a href="https://math.stackexchange.com/questions/1338801/how-to-prove-raabes-formula">Raabe</a> as well as <a href="https://math.stackexchange.com/questions/37829/computing-the-integral-of-log-sin-x">this all time favourite</a>. 
</p> <p>We now can conclude from $\left(\star\right)$ that</p> <blockquote> <p>$$ L=\frac{\sqrt{2\pi}}{2e} $$</p> </blockquote> <p>in accordance with @math110</p> Answer 17: <p>since $$T_{\frac{r}{n}}=\int_{0}^{+\infty}\sin{(x^{\frac{r}{n}})}dx=\Gamma\left(\dfrac{r}{n}+1\right)\sin{\dfrac{\pi r}{2n}}=\dfrac{r}{n}\Gamma\left(\dfrac{r}{n}\right)\sin{\dfrac{\pi r}{2n}},r=1,2,,\cdots,n-1$$ proof see:<a href="https://de.wikibooks.org/wiki/Formelsammlung_Mathematik:_Bestimmte_Integrale:_Form_R(x,sin)" rel="nofollow noreferrer">3.1</a></p> <p><strong>Lemma $1$:</strong></p> <p>$$\prod_{i=1}^{n-1}\sin{\dfrac{i\pi}{2n}}=\dfrac{\sqrt{n}}{2^{n-1}}$$ Use this well known $$z^{n-1}+z^{n-2}+\cdots+z+1=\prod_{k=1}^{n-1}(z-(z-z_{k})$$ let $z=1$ then we have $$n=\prod_{i=1}^{n-1}\left|1-\cos{\dfrac{2k\pi}{n}}+i\sin{\dfrac{2k\pi}{n}}\right|$$ since $$\left|1-\cos{\dfrac{2k\pi}{n}}+i\sin{\dfrac{2k\pi}{n}}\right|=2\sin{\dfrac{k\pi}{n}}$$ $$\prod_{i=1}^{n-1}\sin{\dfrac{i\pi}{n}}=\dfrac{n}{2^{n-1}}$$ and $$\dfrac{2n}{2^{2n-1}}=\prod_{i=1}^{2n-1}\sin{\dfrac{i\pi}{2n}}=\left(\prod_{i=1}^{n-1}\sin{\dfrac{i\pi}{2n}}\right)^2$$ By done!</p> <p><strong>Lemma 2:</strong> $$f=\Gamma\left(\dfrac{1}{n}\right)\Gamma\left(\dfrac{2}{n}\right)\cdots\Gamma\left(\dfrac{n-1}{n}\right)=\dfrac{(2\pi)^{\frac{n-1}{2}}}{\sqrt{n}}$$</p> <p>since $$f^2=\left(\Gamma\left(\dfrac{1}{n}\right)\Gamma\left(\dfrac{n-1}{n}\right)\right)\cdot \left(\Gamma\left(\dfrac{2}{n}\right)\Gamma\left(\dfrac{n-2}{n}\right)\right)\cdots\left(\Gamma\left(\dfrac{n-1}{n}\right)\Gamma\left(\dfrac{1}{n}\right)\right)=\dfrac{\pi}{\sin{\frac{\pi}{n}}}\cdot\dfrac{2\pi}{\sin{\frac{2\pi}{n}}}\cdots\dfrac{\pi}{\sin{\dfrac{(n-1)\pi}{n}}}$$ so we have $$f=\dfrac{(2\pi)^{\frac{n-1}{2}}}{\sqrt{n}}$$ so we have $$P_{n}=\prod_{r=1}^{n-1}T_{\frac{r}{n}}=\prod_{r=1}^{n-1}\dfrac{r}{n}\Gamma\left(\dfrac{r}{n}\right)\sin{\dfrac{\pi r}{2n}}=\dfrac{(n-1)!}{n^{n-1}}\cdot\dfrac{\sqrt{n}}{2^{n-1}}\cdot 
\dfrac{(2\pi)^{\frac{n-1}{2}}}{\sqrt{n}}=\left(\dfrac{\sqrt{2\pi}}{2n}\right)^{n-1}(n-1)!$$ By stirling formula we have $$\lim_{n\to+\infty}(\sqrt{n}P_{n})^{\frac{1}{n}}=\dfrac{\sqrt{2\pi}}{2e}$$</p> Answer 18: <p>The integral $I(a)=\int_0^{\infty}\sin(x^a)=\Im\int_0^{\infty}\exp(ix^a)$ is easily calulated by using the analyticality of the integrand: Rotate the the contour of integration by an angle of $\frac{\pi}{2a}$ to get $I(a)=\Im\left(e^{\frac{i\pi}{2a}}\int_0^{\infty}e^{-x^a} \right)$ or </p> <blockquote> <p>$$ I(a)=\frac{1}{a}\Gamma\left(\frac{1}{a}\right)\sin\left(\frac{\pi}{2a}\right) $$</p> </blockquote> <p>Now let us have a look at </p> <p>$$ P_n=\left(n^{\frac12}\prod_{r=1}^{n-1}I\left(\frac{n}r\right)\right)^{\frac1n}=n^{\frac1{2n}}\exp\left(\frac1n\underbrace{\sum_{r=1}^{n-1}\log\left(I\left(\frac{n}{r}\right)\right)}_{s_{n-1}}\right) $$</p> <p>Now we observe that $I(1)=0$ so $s_n=s_{n-1}$ so </p> <p>$$ \frac{1}{n}s_{n-1}=\frac{1}{n}s_n=\frac{1}{n}\sum_{r=1}^{n}\log\left(\frac{r}{n}\sin\left(\frac{\pi r}{2n}\right)\Gamma\left(\frac{r}{n}\right)\right) $$</p> <p>which is the defintion of a Riemannian integral. Since also the prefactor in $P_n$ is finite ($n^{\frac{1}{2n}}\sim_{\infty}1$) we can split the limit</p> <blockquote> <p>$$ L=\lim_{n\rightarrow\infty}P_n=\lim_{n\rightarrow\infty}n^{\frac{1}{2n}}\lim_{n\rightarrow\infty}e^{\frac{1}{n}s_n}=1\cdot e^J \,\,\quad \left(\star\right) $$</p> </blockquote> <p>where $J$ is given by</p> <p>$$ J=\int_0^1\log(I(x))dx=\\\int_0^1\log(x)dx+\int_0^1\log\left(\sin\left(\frac{\pi }{2}x\right)\right)dx+\int_0^1\log\left(\Gamma(x)\right)dx $$</p> <p>or </p> <p>$$ J=-1-\log(2)+\frac{1}{2}\log(2\pi) $$</p> <p>where we used a classic result by <a href="https://math.stackexchange.com/questions/1338801/how-to-prove-raabes-formula">Raabe</a> as well as <a href="https://math.stackexchange.com/questions/37829/computing-the-integral-of-log-sin-x">this all time favourite</a>. 
</p> <p>We now can conclude from $\left(\star\right)$ that</p> <blockquote> <p>$$ L=\frac{\sqrt{2\pi}}{2e} $$</p> </blockquote> <p>in accordance with @math110</p> Answer 19: <p>since $$T_{\frac{r}{n}}=\int_{0}^{+\infty}\sin{(x^{\frac{r}{n}})}dx=\Gamma\left(\dfrac{r}{n}+1\right)\sin{\dfrac{\pi r}{2n}}=\dfrac{r}{n}\Gamma\left(\dfrac{r}{n}\right)\sin{\dfrac{\pi r}{2n}},r=1,2,,\cdots,n-1$$ proof see:<a href="https://de.wikibooks.org/wiki/Formelsammlung_Mathematik:_Bestimmte_Integrale:_Form_R(x,sin)" rel="nofollow noreferrer">3.1</a></p> <p><strong>Lemma $1$:</strong></p> <p>$$\prod_{i=1}^{n-1}\sin{\dfrac{i\pi}{2n}}=\dfrac{\sqrt{n}}{2^{n-1}}$$ Use this well known $$z^{n-1}+z^{n-2}+\cdots+z+1=\prod_{k=1}^{n-1}(z-(z-z_{k})$$ let $z=1$ then we have $$n=\prod_{i=1}^{n-1}\left|1-\cos{\dfrac{2k\pi}{n}}+i\sin{\dfrac{2k\pi}{n}}\right|$$ since $$\left|1-\cos{\dfrac{2k\pi}{n}}+i\sin{\dfrac{2k\pi}{n}}\right|=2\sin{\dfrac{k\pi}{n}}$$ $$\prod_{i=1}^{n-1}\sin{\dfrac{i\pi}{n}}=\dfrac{n}{2^{n-1}}$$ and $$\dfrac{2n}{2^{2n-1}}=\prod_{i=1}^{2n-1}\sin{\dfrac{i\pi}{2n}}=\left(\prod_{i=1}^{n-1}\sin{\dfrac{i\pi}{2n}}\right)^2$$ By done!</p> <p><strong>Lemma 2:</strong> $$f=\Gamma\left(\dfrac{1}{n}\right)\Gamma\left(\dfrac{2}{n}\right)\cdots\Gamma\left(\dfrac{n-1}{n}\right)=\dfrac{(2\pi)^{\frac{n-1}{2}}}{\sqrt{n}}$$</p> <p>since $$f^2=\left(\Gamma\left(\dfrac{1}{n}\right)\Gamma\left(\dfrac{n-1}{n}\right)\right)\cdot \left(\Gamma\left(\dfrac{2}{n}\right)\Gamma\left(\dfrac{n-2}{n}\right)\right)\cdots\left(\Gamma\left(\dfrac{n-1}{n}\right)\Gamma\left(\dfrac{1}{n}\right)\right)=\dfrac{\pi}{\sin{\frac{\pi}{n}}}\cdot\dfrac{2\pi}{\sin{\frac{2\pi}{n}}}\cdots\dfrac{\pi}{\sin{\dfrac{(n-1)\pi}{n}}}$$ so we have $$f=\dfrac{(2\pi)^{\frac{n-1}{2}}}{\sqrt{n}}$$ so we have $$P_{n}=\prod_{r=1}^{n-1}T_{\frac{r}{n}}=\prod_{r=1}^{n-1}\dfrac{r}{n}\Gamma\left(\dfrac{r}{n}\right)\sin{\dfrac{\pi r}{2n}}=\dfrac{(n-1)!}{n^{n-1}}\cdot\dfrac{\sqrt{n}}{2^{n-1}}\cdot 
\dfrac{(2\pi)^{\frac{n-1}{2}}}{\sqrt{n}}=\left(\dfrac{\sqrt{2\pi}}{2n}\right)^{n-1}(n-1)!$$ By stirling formula we have $$\lim_{n\to+\infty}(\sqrt{n}P_{n})^{\frac{1}{n}}=\dfrac{\sqrt{2\pi}}{2e}$$</p> Answer 20: <p>The integral $I(a)=\int_0^{\infty}\sin(x^a)=\Im\int_0^{\infty}\exp(ix^a)$ is easily calulated by using the analyticality of the integrand: Rotate the the contour of integration by an angle of $\frac{\pi}{2a}$ to get $I(a)=\Im\left(e^{\frac{i\pi}{2a}}\int_0^{\infty}e^{-x^a} \right)$ or </p> <blockquote> <p>$$ I(a)=\frac{1}{a}\Gamma\left(\frac{1}{a}\right)\sin\left(\frac{\pi}{2a}\right) $$</p> </blockquote> <p>Now let us have a look at </p> <p>$$ P_n=\left(n^{\frac12}\prod_{r=1}^{n-1}I\left(\frac{n}r\right)\right)^{\frac1n}=n^{\frac1{2n}}\exp\left(\frac1n\underbrace{\sum_{r=1}^{n-1}\log\left(I\left(\frac{n}{r}\right)\right)}_{s_{n-1}}\right) $$</p> <p>Now we observe that $I(1)=0$ so $s_n=s_{n-1}$ so </p> <p>$$ \frac{1}{n}s_{n-1}=\frac{1}{n}s_n=\frac{1}{n}\sum_{r=1}^{n}\log\left(\frac{r}{n}\sin\left(\frac{\pi r}{2n}\right)\Gamma\left(\frac{r}{n}\right)\right) $$</p> <p>which is the defintion of a Riemannian integral. Since also the prefactor in $P_n$ is finite ($n^{\frac{1}{2n}}\sim_{\infty}1$) we can split the limit</p> <blockquote> <p>$$ L=\lim_{n\rightarrow\infty}P_n=\lim_{n\rightarrow\infty}n^{\frac{1}{2n}}\lim_{n\rightarrow\infty}e^{\frac{1}{n}s_n}=1\cdot e^J \,\,\quad \left(\star\right) $$</p> </blockquote> <p>where $J$ is given by</p> <p>$$ J=\int_0^1\log(I(x))dx=\\\int_0^1\log(x)dx+\int_0^1\log\left(\sin\left(\frac{\pi }{2}x\right)\right)dx+\int_0^1\log\left(\Gamma(x)\right)dx $$</p> <p>or </p> <p>$$ J=-1-\log(2)+\frac{1}{2}\log(2\pi) $$</p> <p>where we used a classic result by <a href="https://math.stackexchange.com/questions/1338801/how-to-prove-raabes-formula">Raabe</a> as well as <a href="https://math.stackexchange.com/questions/37829/computing-the-integral-of-log-sin-x">this all time favourite</a>. 
</p> <p>We now can conclude from $\left(\star\right)$ that</p> <blockquote> <p>$$ L=\frac{\sqrt{2\pi}}{2e} $$</p> </blockquote> <p>in accordance with @math110</p>
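Both answers arrive at the same closed form $P_n=\left(\sqrt{2\pi}/(2n)\right)^{n-1}(n-1)!$ and the same limit $\sqrt{2\pi}/(2e)\approx 0.4611$. Neither answer includes code, but the algebra is easy to sanity-check numerically; a minimal Python sketch (the function names are my own, not from either answer):

```python
import math

def T(r, n):
    # value of the sine integral: T_{r/n} = (r/n) * Gamma(r/n) * sin(pi*r/(2n))
    return (r / n) * math.gamma(r / n) * math.sin(math.pi * r / (2 * n))

def P_direct(n):
    # P_n as the raw product of the T_{r/n} for r = 1 .. n-1
    p = 1.0
    for r in range(1, n):
        p *= T(r, n)
    return p

def P_closed(n):
    # closed form from the first answer: (sqrt(2*pi)/(2n))**(n-1) * (n-1)!
    return (math.sqrt(2 * math.pi) / (2 * n)) ** (n - 1) * math.factorial(n - 1)

def scaled_root(n):
    # (sqrt(n) * P_n)**(1/n), evaluated in log space so large n does not overflow
    log_p = (n - 1) * math.log(math.sqrt(2 * math.pi) / (2 * n)) + math.lgamma(n)
    return math.exp((0.5 * math.log(n) + log_p) / n)

print(P_direct(10), P_closed(10))        # the two forms agree
print(scaled_root(10**6), math.sqrt(2 * math.pi) / (2 * math.e))
```

Working in log space (via `math.lgamma`) matters here: for $n=10^6$ the factor $(n-1)!$ alone overflows a double, while its logarithm is perfectly tame.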
Intersection of Set A and a set containing empty set
2,095,609
8
https://math.stackexchange.com/questions/2095609/intersection-of-set-a-and-a-set-containing-empty-set
<p>Given the set $A = \{\emptyset, \{\emptyset\}, \{\{\emptyset\}\}\}$, is $\{\emptyset\} \cap A = \{\emptyset\}$ or just $\emptyset$? I understand that $\emptyset \cap A = \emptyset$ (and why this is true), but I'm thrown off by the repeated nesting of empty sets within sets.</p> <p>Given that, I am under the impression that $\emptyset \cup A = A =\{\emptyset\} \cup A$. Is this also true?</p>
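Before the answers, note that the question can also be checked mechanically. A small Python sketch of my own (frozensets stand in for the nested sets, since elements of a Python set must be hashable):

```python
e = frozenset()        # the empty set, ∅
se = frozenset({e})    # {∅}
sse = frozenset({se})  # {{∅}}
A = {e, se, sse}       # A = {∅, {∅}, {{∅}}}

assert A & se == {e}   # {∅} ∩ A = {∅}, because ∅ is a member of A
assert A & e == set()  # ∅ ∩ A = ∅
assert A | e == A      # ∅ ∪ A = A
assert A | se == A     # {∅} ∪ A = A, only because ∅ ∈ A already
print("all identities hold")
```

The point the check makes concrete: $\{\emptyset\} \cap A = \{\emptyset\}$ holds because $\emptyset$ is an *element* of $A$, not because $\emptyset$ is a subset of it.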
[ { "answer_id": 2095616, "score": 5, "text": "<p>In general,</p>\n\n<p>$$\\emptyset \\cup S = S \\not=\\{\\emptyset\\} \\cup S$$</p>\n\n<p>Note that in our case, $A = \\{\\emptyset, \\{\\emptyset\\} , \\{\\{\\emptyset\\}\\}\\}$, $A $ is a <em>set</em> of <strong>sets</strong> and $\\emptyset $ is just a set with nothing. $B = \\{\\emptyset\\}$ is also a set of sets, but happens to only contain the empty set. Then we have</p>\n\n<p>$$A = \\{\\emptyset, B, \\{B\\}\\} $$</p>\n\n<p>And thus $A \\cap B = B = \\{\\emptyset\\}$ given that $B \\subset A $.</p>\n\n<p>Also, for the $A $ given, $A \\cup B = A $ <strong>only because</strong> $B \\subset A$. There are plenty of sets $S $ where $S \\cup B \\not= S$.</p>\n" }, { "answer_id": 2095669, "score": 4, "text": "<p>\"Nothing exists\" is one (tongue in cheek) way of thinking about <a href=\"https://en.wikipedia.org/wiki/Axiom_of_empty_set\" rel=\"nofollow noreferrer\">the axiom of the empty set</a>.</p>\n\n<p>The empty set $\\emptyset$ exists by fiat; which is to say (either directly as an axiom or derived from other axioms), $\\exists \\emptyset : \\forall x, x \\notin \\emptyset$.</p>\n\n<p>Once we have <strong>any</strong> set $x$, we have rules by which we can prove the existence of other sets; such as the set $\\{x\\}$ whose only element is $x$, and so on. $\\emptyset$ is no different in this respect; it is a set, and we have rules that show how to construct other sets from sets whose existence we have already proven.</p>\n\n<p>The union of two sets $A$ and $B$ is a set with the property $\\forall x: (x \\in A \\cup B \\iff x \\in A \\lor x \\in B)$. 
So thinking about $\\emptyset \\cup A$, we are talking about a set:</p>\n\n<p>$$\\forall x: (x \\in \\emptyset \\cup A \\iff x \\in \\emptyset \\lor x \\in A)$$</p>\n\n<p>But by <strong>definition</strong>, it can never be true that $x \\in \\emptyset$; so the above boils down to:</p>\n\n<p>$$\\forall x: (x \\in \\emptyset \\cup A \\iff x \\in A)$$</p>\n\n<p>or in other words (see <a href=\"https://en.wikipedia.org/wiki/Axiom_of_extensionality\" rel=\"nofollow noreferrer\">axiom of extensionality</a>); $\\emptyset \\cup A = A$. </p>\n\n<p>On the other hand, there does exist some set $x$ s.t. $x \\in \\{\\emptyset\\}$; namely, the set $x = \\emptyset$. So when we consider $\\{\\emptyset\\} \\cup A$, we are talking about the set:</p>\n\n<p>$$\\forall x: (x \\in \\{\\emptyset\\} \\cup A \\iff x \\in \\{\\emptyset\\} \\lor x \\in A)$$</p>\n\n<p>Thus $\\{\\emptyset\\} \\cup A$ is not necessarily equal to $A$; because (depending on $A$) we might not have $\\emptyset \\in A$, but we do have $\\emptyset \\in \\{\\emptyset\\}$; and therefore $\\emptyset \\in \\{\\emptyset\\} \\cup A$.</p>\n" }, { "answer_id": 2095632, "score": 2, "text": "<p>Note that $\\emptyset=${ } which contains no element while the set $X=${$\\emptyset$} is non-empty containing an element $\\emptyset$.</p>\n\n<p>{$\\emptyset$}$\\cap A$ represents the intersection of $X$ with $A$, which is clearly the set $X$ itself as $X\\subset A$.</p>\n" }, { "answer_id": 2095971, "score": 1, "text": "<p>I think you are confused between subsets and members of a set.\n$\\emptyset$ is a subset of every set but it is not necessarily a member of every set. But in your question set A has $\\emptyset$ as a member and the set $\\{\\emptyset\\}$ also has $\\emptyset$ as a member; as they have only one member in common, $A \\cap \\{\\emptyset\\}=\\{\\emptyset\\}$, because $A \\cap B$ is the set of all common members of A and B.</p>\n\n<p>Let $A \\cap {\\emptyset}=B$; then B is the set of all common members in set A and ${\\emptyset}$ 
as they have no common members their intersection B is the empty set, hence $A \\cap {\\emptyset}=B={\\emptyset}$.\n yes $\\emptyset \\cup A = A =\\{\\emptyset\\} \\cup A$ is a true statement but it's not true for any set A in general.</p>\n" }, { "answer_id": 2095830, "score": 0, "text": "<p>Let $A=\\{0,1,2\\}$ and $B=\\{0\\}$. Now your question is (up to renaming symbols) if $A\\cap B=B$ or $A\\cap B =\\emptyset$, which I think you can answer with ease.</p>\n" }
Now your question is (up to renaming symbols) if $A\\cap B=B$ or $A\\cap B =\\emptyset$, which I think you can answer with ease.</p>\n" }, { "answer_id": 2095616, "score": 5, "text": "<p>In general,</p>\n\n<p>$$\\emptyset \\cup S = S \\not=\\{\\emptyset\\} \\cup S$$</p>\n\n<p>Note that in our case, $A = \\{\\emptyset, \\{\\emptyset\\} , \\{\\{\\emptyset\\}\\}\\}$, $A $ is a <em>set</em> of <strong>sets</strong> and $\\emptyset $ is just a set with nothing. $B = \\{\\emptyset\\}$ is also a set of sets, but happens to only contain the empty set. Then we have</p>\n\n<p>$$A = \\{\\emptyset, B, \\{B\\}\\} $$</p>\n\n<p>And thus $A \\cap B = B = \\{\\emptyset\\}$ given that $B \\subset A $.</p>\n\n<p>Also, for the $A $ given, $A \\cup B = A $ <strong>only because</strong> $B \\subset A$. There are plenty of sets $S $ where $S \\cup B \\not= S$.</p>\n" }, { "answer_id": 2095669, "score": 4, "text": "<p>\"Nothing exists\" is one (tongue in cheek) way of thinking about <a href=\"https://en.wikipedia.org/wiki/Axiom_of_empty_set\" rel=\"nofollow noreferrer\">the axiom of the empty set</a>.</p>\n\n<p>The empty set $\\emptyset$ exists by fiat; which is to say (either directly as an axiom or derived from other axioms), $\\exists \\emptyset : \\forall x, x \\notin \\emptyset$.</p>\n\n<p>Once we have <strong>any</strong> set $x$, we have rules by which we can prove the existence of other sets; such as the set $\\{x\\}$ whose only element is $x$, and so on. $\\emptyset$ is no different in this respect; it is a set, and we have rules that show how to construct other sets from sets whose existence we have already proven.</p>\n\n<p>The union of two sets $A$ and $B$ is a set with the property $\\forall x: (x \\in A \\cup B \\iff x \\in A \\lor x \\in B)$. 
So thinking about $\\emptyset \\cup A$, we are talking about a set:</p>\n\n<p>$$\\forall x: (x \\in \\emptyset \\cup A \\iff x \\in \\emptyset \\lor x \\in A)$$</p>\n\n<p>But by <strong>definition</strong>, it can never be true that $x \\in \\emptyset$; so the above boils down to:</p>\n\n<p>$$\\forall x: (x \\in \\emptyset \\cup A \\iff x \\in A)$$</p>\n\n<p>or in other words (see <a href=\"https://en.wikipedia.org/wiki/Axiom_of_extensionality\" rel=\"nofollow noreferrer\">axiom of extensionality</a>); $\\emptyset \\cup A = A$. </p>\n\n<p>On the other hand there does exist some set $x$ s.t. $x \\in \\{\\emptyset\\}$; namely, the set $x = \\emptyset$. So when we consider $\\{\\emptyset\\} \\cup A$,we are talking about the set:</p>\n\n<p>$$\\forall x: (x \\in \\{\\emptyset\\} \\cup A \\iff x \\in \\{\\emptyset\\} \\lor x \\in A)$$</p>\n\n<p>Thus $\\{\\emptyset\\} \\cup A$ is not necessarily equal to $A$; because (depending on $A$) we might not have $\\emptyset \\in A$, but we do have $\\emptyset \\in \\{\\emptyset\\}$; and therefore $\\emptyset \\in \\{\\emptyset\\} \\cup A$.</p>\n" }, { "answer_id": 2095632, "score": 2, "text": "<p>Note that $\\emptyset=${ } which contains no element while the set $X=${$\\emptyset$} is non-empty containing an element $\\emptyset$.</p>\n\n<p>{$\\emptyset$}$\\cap A$ represents the intersection of $X$ with $A$ which is clearly the set $X$ itself as $X\\subset A$</p>\n" }, { "answer_id": 2095971, "score": 1, "text": "<p>I think you are confused between subsets and members of a set.\n$\\emptyset$ is a subset of every set but it is not necessarily a member of every set but in your question set A has $\\emptyset$ as a member and the set$\\{\\emptyset\\}$ also has $\\emptyset$ as a member, as they have only one member in common $A \\cap \\{\\emptyset\\}=\\{ \\emptyset\\}$ because $A \\cap B$ is the set of all common members of A and B.</p>\n\n<p>Let $A \\cap {\\emptyset}=B$ then B is the set of all common members in set A and ${\\emptyset}$ 
as they have no common members their intersection B is the empty set, hence $A \\cap {\\emptyset}=B={\\emptyset}$.\n yes $\\emptyset \\cup A = A =\\{\\emptyset\\} \\cup A$ is a true statement but it's not true for any set A in general.</p>\n" }, { "answer_id": 2095830, "score": 0, "text": "<p>Let $A=\\{0,1,2\\}$ and $B=\\{0\\}$. Now your question is (up to renaming symbols) if $A\\cap B=B$ or $A\\cap B =\\emptyset$, which I think you can answer with ease.</p>\n" }, { "answer_id": 2095616, "score": 5, "text": "<p>In general,</p>\n\n<p>$$\\emptyset \\cup S = S \\not=\\{\\emptyset\\} \\cup S$$</p>\n\n<p>Note that in our case, $A = \\{\\emptyset, \\{\\emptyset\\} , \\{\\{\\emptyset\\}\\}\\}$, $A $ is a <em>set</em> of <strong>sets</strong> and $\\emptyset $ is just a set with nothing. $B = \\{\\emptyset\\}$ is also a set of sets, but happens to only contain the empty set. Then we have</p>\n\n<p>$$A = \\{\\emptyset, B, \\{B\\}\\} $$</p>\n\n<p>And thus $A \\cap B = B = \\{\\emptyset\\}$ given that $B \\subset A $.</p>\n\n<p>Also, for the $A $ given, $A \\cup B = A $ <strong>only because</strong> $B \\subset A$. There are plenty of sets $S $ where $S \\cup B \\not= S$.</p>\n" }, { "answer_id": 2095669, "score": 4, "text": "<p>\"Nothing exists\" is one (tongue in cheek) way of thinking about <a href=\"https://en.wikipedia.org/wiki/Axiom_of_empty_set\" rel=\"nofollow noreferrer\">the axiom of the empty set</a>.</p>\n\n<p>The empty set $\\emptyset$ exists by fiat; which is to say (either directly as an axiom or derived from other axioms), $\\exists \\emptyset : \\forall x, x \\notin \\emptyset$.</p>\n\n<p>Once we have <strong>any</strong> set $x$, we have rules by which we can prove the existence of other sets; such as the set $\\{x\\}$ whose only element is $x$, and so on. 
$\\emptyset$ is no different in this respect; it is a set, and we have rules that show how to construct other sets from sets whose existence we have already proven.</p>\n\n<p>The union of two sets $A$ and $B$ is a set with the property $\\forall x: (x \\in A \\cup B \\iff x \\in A \\lor x \\in B)$. So thinking about $\\emptyset \\cup A$, we are talking about a set:</p>\n\n<p>$$\\forall x: (x \\in \\emptyset \\cup A \\iff x \\in \\emptyset \\lor x \\in A)$$</p>\n\n<p>But by <strong>definition</strong>, it can never be true that $x \\in \\emptyset$; so the above boils down to:</p>\n\n<p>$$\\forall x: (x \\in \\emptyset \\cup A \\iff x \\in A)$$</p>\n\n<p>or in other words (see <a href=\"https://en.wikipedia.org/wiki/Axiom_of_extensionality\" rel=\"nofollow noreferrer\">axiom of extensionality</a>); $\\emptyset \\cup A = A$. </p>\n\n<p>On the other hand there does exist some set $x$ s.t. $x \\in \\{\\emptyset\\}$; namely, the set $x = \\emptyset$. So when we consider $\\{\\emptyset\\} \\cup A$,we are talking about the set:</p>\n\n<p>$$\\forall x: (x \\in \\{\\emptyset\\} \\cup A \\iff x \\in \\{\\emptyset\\} \\lor x \\in A)$$</p>\n\n<p>Thus $\\{\\emptyset\\} \\cup A$ is not necessarily equal to $A$; because (depending on $A$) we might not have $\\emptyset \\in A$, but we do have $\\emptyset \\in \\{\\emptyset\\}$; and therefore $\\emptyset \\in \\{\\emptyset\\} \\cup A$.</p>\n" }, { "answer_id": 2095632, "score": 2, "text": "<p>Note that $\\emptyset=${ } which contains no element while the set $X=${$\\emptyset$} is non-empty containing an element $\\emptyset$.</p>\n\n<p>{$\\emptyset$}$\\cap A$ represents the intersection of $X$ with $A$ which is clearly the set $X$ itself as $X\\subset A$</p>\n" }, { "answer_id": 2095971, "score": 1, "text": "<p>I think you are confused between subsets and members of a set.\n$\\emptyset$ is a subset of every set but it is not necessarily a member of every set but in your question set A has $\\emptyset$ as a member and the 
set$\\{\\emptyset\\}$ also has $\\emptyset$ as a member, as they have only one member in common $A \\cap \\{\\emptyset\\}=\\{ \\emptyset\\}$ because $A \\cap B$ is the set of all common members of A and B.</p>\n\n<p>Let $A \\cap {\\emptyset}=B$ then B is the set of all common members in set A and ${\\emptyset}$ as they have no common members their intersection B is the empty set, hence $A \\cap {\\emptyset}=B={\\emptyset}$.\n yes $\\emptyset \\cup A = A =\\{\\emptyset\\} \\cup A$ is a true statement but it's not true for any set A in general.</p>\n" }, { "answer_id": 2095830, "score": 0, "text": "<p>Let $A=\\{0,1,2\\}$ and $B=\\{0\\}$. Now your question is (up to renaming symbols) if $A\\cap B=B$ or $A\\cap B =\\emptyset$, which I think you can answer with ease.</p>\n" }, { "answer_id": 2095616, "score": 5, "text": "<p>In general,</p>\n\n<p>$$\\emptyset \\cup S = S \\not=\\{\\emptyset\\} \\cup S$$</p>\n\n<p>Note that in our case, $A = \\{\\emptyset, \\{\\emptyset\\} , \\{\\{\\emptyset\\}\\}\\}$, $A $ is a <em>set</em> of <strong>sets</strong> and $\\emptyset $ is just a set with nothing. $B = \\{\\emptyset\\}$ is also a set of sets, but happens to only contain the empty set. Then we have</p>\n\n<p>$$A = \\{\\emptyset, B, \\{B\\}\\} $$</p>\n\n<p>And thus $A \\cap B = B = \\{\\emptyset\\}$ given that $B \\subset A $.</p>\n\n<p>Also, for the $A $ given, $A \\cup B = A $ <strong>only because</strong> $B \\subset A$. 
There are plenty of sets $S $ where $S \\cup B \\not= S$.</p>\n" }, { "answer_id": 2095669, "score": 4, "text": "<p>\"Nothing exists\" is one (tongue in cheek) way of thinking about <a href=\"https://en.wikipedia.org/wiki/Axiom_of_empty_set\" rel=\"nofollow noreferrer\">the axiom of the empty set</a>.</p>\n\n<p>The empty set $\\emptyset$ exists by fiat; which is to say (either directly as an axiom or derived from other axioms), $\\exists \\emptyset : \\forall x, x \\notin \\emptyset$.</p>\n\n<p>Once we have <strong>any</strong> set $x$, we have rules by which we can prove the existence of other sets; such as the set $\\{x\\}$ whose only element is $x$, and so on. $\\emptyset$ is no different in this respect; it is a set, and we have rules that show how to construct other sets from sets whose existence we have already proven.</p>\n\n<p>The union of two sets $A$ and $B$ is a set with the property $\\forall x: (x \\in A \\cup B \\iff x \\in A \\lor x \\in B)$. So thinking about $\\emptyset \\cup A$, we are talking about a set:</p>\n\n<p>$$\\forall x: (x \\in \\emptyset \\cup A \\iff x \\in \\emptyset \\lor x \\in A)$$</p>\n\n<p>But by <strong>definition</strong>, it can never be true that $x \\in \\emptyset$; so the above boils down to:</p>\n\n<p>$$\\forall x: (x \\in \\emptyset \\cup A \\iff x \\in A)$$</p>\n\n<p>or in other words (see <a href=\"https://en.wikipedia.org/wiki/Axiom_of_extensionality\" rel=\"nofollow noreferrer\">axiom of extensionality</a>); $\\emptyset \\cup A = A$. </p>\n\n<p>On the other hand there does exist some set $x$ s.t. $x \\in \\{\\emptyset\\}$; namely, the set $x = \\emptyset$. 
So when we consider $\\{\\emptyset\\} \\cup A$,we are talking about the set:</p>\n\n<p>$$\\forall x: (x \\in \\{\\emptyset\\} \\cup A \\iff x \\in \\{\\emptyset\\} \\lor x \\in A)$$</p>\n\n<p>Thus $\\{\\emptyset\\} \\cup A$ is not necessarily equal to $A$; because (depending on $A$) we might not have $\\emptyset \\in A$, but we do have $\\emptyset \\in \\{\\emptyset\\}$; and therefore $\\emptyset \\in \\{\\emptyset\\} \\cup A$.</p>\n" }, { "answer_id": 2095632, "score": 2, "text": "<p>Note that $\\emptyset=${ } which contains no element while the set $X=${$\\emptyset$} is non-empty containing an element $\\emptyset$.</p>\n\n<p>{$\\emptyset$}$\\cap A$ represents the intersection of $X$ with $A$ which is clearly the set $X$ itself as $X\\subset A$</p>\n" }, { "answer_id": 2095971, "score": 1, "text": "<p>I think you are confused between subsets and members of a set.\n$\\emptyset$ is a subset of every set but it is not necessarily a member of every set but in your question set A has $\\emptyset$ as a member and the set$\\{\\emptyset\\}$ also has $\\emptyset$ as a member, as they have only one member in common $A \\cap \\{\\emptyset\\}=\\{ \\emptyset\\}$ because $A \\cap B$ is the set of all common members of A and B.</p>\n\n<p>Let $A \\cap {\\emptyset}=B$ then B is the set of all common members in set A and ${\\emptyset}$ as they have no common members their intersection B is the empty set, hence $A \\cap {\\emptyset}=B={\\emptyset}$.\n yes $\\emptyset \\cup A = A =\\{\\emptyset\\} \\cup A$ is a true statement but it's not true for any set A in general.</p>\n" }, { "answer_id": 2095830, "score": 0, "text": "<p>Let $A=\\{0,1,2\\}$ and $B=\\{0\\}$. 
Now your question is (up to renaming symbols) if $A\\cap B=B$ or $A\\cap B =\\emptyset$, which I think you can answer with ease.</p>\n" }, { "answer_id": 2095616, "score": 5, "text": "<p>In general,</p>\n\n<p>$$\\emptyset \\cup S = S \\not=\\{\\emptyset\\} \\cup S$$</p>\n\n<p>Note that in our case, $A = \\{\\emptyset, \\{\\emptyset\\} , \\{\\{\\emptyset\\}\\}\\}$, $A $ is a <em>set</em> of <strong>sets</strong> and $\\emptyset $ is just a set with nothing. $B = \\{\\emptyset\\}$ is also a set of sets, but happens to only contain the empty set. Then we have</p>\n\n<p>$$A = \\{\\emptyset, B, \\{B\\}\\} $$</p>\n\n<p>And thus $A \\cap B = B = \\{\\emptyset\\}$ given that $B \\subset A $.</p>\n\n<p>Also, for the $A $ given, $A \\cup B = A $ <strong>only because</strong> $B \\subset A$. There are plenty of sets $S $ where $S \\cup B \\not= S$.</p>\n" }, { "answer_id": 2095669, "score": 4, "text": "<p>\"Nothing exists\" is one (tongue in cheek) way of thinking about <a href=\"https://en.wikipedia.org/wiki/Axiom_of_empty_set\" rel=\"nofollow noreferrer\">the axiom of the empty set</a>.</p>\n\n<p>The empty set $\\emptyset$ exists by fiat; which is to say (either directly as an axiom or derived from other axioms), $\\exists \\emptyset : \\forall x, x \\notin \\emptyset$.</p>\n\n<p>Once we have <strong>any</strong> set $x$, we have rules by which we can prove the existence of other sets; such as the set $\\{x\\}$ whose only element is $x$, and so on. $\\emptyset$ is no different in this respect; it is a set, and we have rules that show how to construct other sets from sets whose existence we have already proven.</p>\n\n<p>The union of two sets $A$ and $B$ is a set with the property $\\forall x: (x \\in A \\cup B \\iff x \\in A \\lor x \\in B)$. 
So thinking about $\\emptyset \\cup A$, we are talking about a set:</p>\n\n<p>$$\\forall x: (x \\in \\emptyset \\cup A \\iff x \\in \\emptyset \\lor x \\in A)$$</p>\n\n<p>But by <strong>definition</strong>, it can never be true that $x \\in \\emptyset$; so the above boils down to:</p>\n\n<p>$$\\forall x: (x \\in \\emptyset \\cup A \\iff x \\in A)$$</p>\n\n<p>or in other words (see <a href=\"https://en.wikipedia.org/wiki/Axiom_of_extensionality\" rel=\"nofollow noreferrer\">axiom of extensionality</a>); $\\emptyset \\cup A = A$. </p>\n\n<p>On the other hand there does exist some set $x$ s.t. $x \\in \\{\\emptyset\\}$; namely, the set $x = \\emptyset$. So when we consider $\\{\\emptyset\\} \\cup A$,we are talking about the set:</p>\n\n<p>$$\\forall x: (x \\in \\{\\emptyset\\} \\cup A \\iff x \\in \\{\\emptyset\\} \\lor x \\in A)$$</p>\n\n<p>Thus $\\{\\emptyset\\} \\cup A$ is not necessarily equal to $A$; because (depending on $A$) we might not have $\\emptyset \\in A$, but we do have $\\emptyset \\in \\{\\emptyset\\}$; and therefore $\\emptyset \\in \\{\\emptyset\\} \\cup A$.</p>\n" }, { "answer_id": 2095632, "score": 2, "text": "<p>Note that $\\emptyset=${ } which contains no element while the set $X=${$\\emptyset$} is non-empty containing an element $\\emptyset$.</p>\n\n<p>{$\\emptyset$}$\\cap A$ represents the intersection of $X$ with $A$ which is clearly the set $X$ itself as $X\\subset A$</p>\n" }, { "answer_id": 2095971, "score": 1, "text": "<p>I think you are confused between subsets and members of a set.\n$\\emptyset$ is a subset of every set but it is not necessarily a member of every set but in your question set A has $\\emptyset$ as a member and the set$\\{\\emptyset\\}$ also has $\\emptyset$ as a member, as they have only one member in common $A \\cap \\{\\emptyset\\}=\\{ \\emptyset\\}$ because $A \\cap B$ is the set of all common members of A and B.</p>\n\n<p>Let $A \\cap {\\emptyset}=B$ then B is the set of all common members in set A and ${\\emptyset}$ 
as they have no common members their intersection B is the empty set, hence $A \\cap {\\emptyset}=B={\\emptyset}$.\n yes $\\emptyset \\cup A = A =\\{\\emptyset\\} \\cup A$ is a true statement but it's not true for any set A in general.</p>\n" }, { "answer_id": 2095830, "score": 0, "text": "<p>Let $A=\\{0,1,2\\}$ and $B=\\{0\\}$. Now your question is (up to renaming symbols) if $A\\cap B=B$ or $A\\cap B =\\emptyset$, which I think you can answer with ease.</p>\n" }, { "answer_id": 2095616, "score": 5, "text": "<p>In general,</p>\n\n<p>$$\\emptyset \\cup S = S \\not=\\{\\emptyset\\} \\cup S$$</p>\n\n<p>Note that in our case, $A = \\{\\emptyset, \\{\\emptyset\\} , \\{\\{\\emptyset\\}\\}\\}$, $A $ is a <em>set</em> of <strong>sets</strong> and $\\emptyset $ is just a set with nothing. $B = \\{\\emptyset\\}$ is also a set of sets, but happens to only contain the empty set. Then we have</p>\n\n<p>$$A = \\{\\emptyset, B, \\{B\\}\\} $$</p>\n\n<p>And thus $A \\cap B = B = \\{\\emptyset\\}$ given that $B \\subset A $.</p>\n\n<p>Also, for the $A $ given, $A \\cup B = A $ <strong>only because</strong> $B \\subset A$. There are plenty of sets $S $ where $S \\cup B \\not= S$.</p>\n" }, { "answer_id": 2095669, "score": 4, "text": "<p>\"Nothing exists\" is one (tongue in cheek) way of thinking about <a href=\"https://en.wikipedia.org/wiki/Axiom_of_empty_set\" rel=\"nofollow noreferrer\">the axiom of the empty set</a>.</p>\n\n<p>The empty set $\\emptyset$ exists by fiat; which is to say (either directly as an axiom or derived from other axioms), $\\exists \\emptyset : \\forall x, x \\notin \\emptyset$.</p>\n\n<p>Once we have <strong>any</strong> set $x$, we have rules by which we can prove the existence of other sets; such as the set $\\{x\\}$ whose only element is $x$, and so on. 
$\\emptyset$ is no different in this respect; it is a set, and we have rules that show how to construct other sets from sets whose existence we have already proven.</p>\n\n<p>The union of two sets $A$ and $B$ is a set with the property $\\forall x: (x \\in A \\cup B \\iff x \\in A \\lor x \\in B)$. So thinking about $\\emptyset \\cup A$, we are talking about a set:</p>\n\n<p>$$\\forall x: (x \\in \\emptyset \\cup A \\iff x \\in \\emptyset \\lor x \\in A)$$</p>\n\n<p>But by <strong>definition</strong>, it can never be true that $x \\in \\emptyset$; so the above boils down to:</p>\n\n<p>$$\\forall x: (x \\in \\emptyset \\cup A \\iff x \\in A)$$</p>\n\n<p>or in other words (see <a href=\"https://en.wikipedia.org/wiki/Axiom_of_extensionality\" rel=\"nofollow noreferrer\">axiom of extensionality</a>); $\\emptyset \\cup A = A$. </p>\n\n<p>On the other hand there does exist some set $x$ s.t. $x \\in \\{\\emptyset\\}$; namely, the set $x = \\emptyset$. So when we consider $\\{\\emptyset\\} \\cup A$,we are talking about the set:</p>\n\n<p>$$\\forall x: (x \\in \\{\\emptyset\\} \\cup A \\iff x \\in \\{\\emptyset\\} \\lor x \\in A)$$</p>\n\n<p>Thus $\\{\\emptyset\\} \\cup A$ is not necessarily equal to $A$; because (depending on $A$) we might not have $\\emptyset \\in A$, but we do have $\\emptyset \\in \\{\\emptyset\\}$; and therefore $\\emptyset \\in \\{\\emptyset\\} \\cup A$.</p>\n" }, { "answer_id": 2095632, "score": 2, "text": "<p>Note that $\\emptyset=${ } which contains no element while the set $X=${$\\emptyset$} is non-empty containing an element $\\emptyset$.</p>\n\n<p>{$\\emptyset$}$\\cap A$ represents the intersection of $X$ with $A$ which is clearly the set $X$ itself as $X\\subset A$</p>\n" }, { "answer_id": 2095971, "score": 1, "text": "<p>I think you are confused between subsets and members of a set.\n$\\emptyset$ is a subset of every set but it is not necessarily a member of every set but in your question set A has $\\emptyset$ as a member and the 
set$\\{\\emptyset\\}$ also has $\\emptyset$ as a member, as they have only one member in common $A \\cap \\{\\emptyset\\}=\\{ \\emptyset\\}$ because $A \\cap B$ is the set of all common members of A and B.</p>\n\n<p>Let $A \\cap {\\emptyset}=B$ then B is the set of all common members in set A and ${\\emptyset}$ as they have no common members their intersection B is the empty set, hence $A \\cap {\\emptyset}=B={\\emptyset}$.\n yes $\\emptyset \\cup A = A =\\{\\emptyset\\} \\cup A$ is a true statement but it's not true for any set A in general.</p>\n" }, { "answer_id": 2095830, "score": 0, "text": "<p>Let $A=\\{0,1,2\\}$ and $B=\\{0\\}$. Now your question is (up to renaming symbols) if $A\\cap B=B$ or $A\\cap B =\\emptyset$, which I think you can answer with ease.</p>\n" }, { "answer_id": 2095616, "score": 5, "text": "<p>In general,</p>\n\n<p>$$\\emptyset \\cup S = S \\not=\\{\\emptyset\\} \\cup S$$</p>\n\n<p>Note that in our case, $A = \\{\\emptyset, \\{\\emptyset\\} , \\{\\{\\emptyset\\}\\}\\}$, $A $ is a <em>set</em> of <strong>sets</strong> and $\\emptyset $ is just a set with nothing. $B = \\{\\emptyset\\}$ is also a set of sets, but happens to only contain the empty set. Then we have</p>\n\n<p>$$A = \\{\\emptyset, B, \\{B\\}\\} $$</p>\n\n<p>And thus $A \\cap B = B = \\{\\emptyset\\}$ given that $B \\subset A $.</p>\n\n<p>Also, for the $A $ given, $A \\cup B = A $ <strong>only because</strong> $B \\subset A$. 
There are plenty of sets $S $ where $S \\cup B \\not= S$.</p>\n" }, { "answer_id": 2095669, "score": 4, "text": "<p>\"Nothing exists\" is one (tongue in cheek) way of thinking about <a href=\"https://en.wikipedia.org/wiki/Axiom_of_empty_set\" rel=\"nofollow noreferrer\">the axiom of the empty set</a>.</p>\n\n<p>The empty set $\\emptyset$ exists by fiat; which is to say (either directly as an axiom or derived from other axioms), $\\exists \\emptyset : \\forall x, x \\notin \\emptyset$.</p>\n\n<p>Once we have <strong>any</strong> set $x$, we have rules by which we can prove the existence of other sets; such as the set $\\{x\\}$ whose only element is $x$, and so on. $\\emptyset$ is no different in this respect; it is a set, and we have rules that show how to construct other sets from sets whose existence we have already proven.</p>\n\n<p>The union of two sets $A$ and $B$ is a set with the property $\\forall x: (x \\in A \\cup B \\iff x \\in A \\lor x \\in B)$. So thinking about $\\emptyset \\cup A$, we are talking about a set:</p>\n\n<p>$$\\forall x: (x \\in \\emptyset \\cup A \\iff x \\in \\emptyset \\lor x \\in A)$$</p>\n\n<p>But by <strong>definition</strong>, it can never be true that $x \\in \\emptyset$; so the above boils down to:</p>\n\n<p>$$\\forall x: (x \\in \\emptyset \\cup A \\iff x \\in A)$$</p>\n\n<p>or in other words (see <a href=\"https://en.wikipedia.org/wiki/Axiom_of_extensionality\" rel=\"nofollow noreferrer\">axiom of extensionality</a>); $\\emptyset \\cup A = A$. </p>\n\n<p>On the other hand there does exist some set $x$ s.t. $x \\in \\{\\emptyset\\}$; namely, the set $x = \\emptyset$. 
So when we consider $\\{\\emptyset\\} \\cup A$,we are talking about the set:</p>\n\n<p>$$\\forall x: (x \\in \\{\\emptyset\\} \\cup A \\iff x \\in \\{\\emptyset\\} \\lor x \\in A)$$</p>\n\n<p>Thus $\\{\\emptyset\\} \\cup A$ is not necessarily equal to $A$; because (depending on $A$) we might not have $\\emptyset \\in A$, but we do have $\\emptyset \\in \\{\\emptyset\\}$; and therefore $\\emptyset \\in \\{\\emptyset\\} \\cup A$.</p>\n" }, { "answer_id": 2095632, "score": 2, "text": "<p>Note that $\\emptyset=${ } which contains no element while the set $X=${$\\emptyset$} is non-empty containing an element $\\emptyset$.</p>\n\n<p>{$\\emptyset$}$\\cap A$ represents the intersection of $X$ with $A$ which is clearly the set $X$ itself as $X\\subset A$</p>\n" }, { "answer_id": 2095971, "score": 1, "text": "<p>I think you are confused between subsets and members of a set.\n$\\emptyset$ is a subset of every set but it is not necessarily a member of every set but in your question set A has $\\emptyset$ as a member and the set$\\{\\emptyset\\}$ also has $\\emptyset$ as a member, as they have only one member in common $A \\cap \\{\\emptyset\\}=\\{ \\emptyset\\}$ because $A \\cap B$ is the set of all common members of A and B.</p>\n\n<p>Let $A \\cap {\\emptyset}=B$ then B is the set of all common members in set A and ${\\emptyset}$ as they have no common members their intersection B is the empty set, hence $A \\cap {\\emptyset}=B={\\emptyset}$.\n yes $\\emptyset \\cup A = A =\\{\\emptyset\\} \\cup A$ is a true statement but it's not true for any set A in general.</p>\n" }, { "answer_id": 2095830, "score": 0, "text": "<p>Let $A=\\{0,1,2\\}$ and $B=\\{0\\}$. 
Now your question is (up to renaming symbols) if $A\\cap B=B$ or $A\\cap B =\\emptyset$, which I think you can answer with ease.</p>\n" }, { "answer_id": 2095616, "score": 5, "text": "<p>In general,</p>\n\n<p>$$\\emptyset \\cup S = S \\not=\\{\\emptyset\\} \\cup S$$</p>\n\n<p>Note that in our case, $A = \\{\\emptyset, \\{\\emptyset\\} , \\{\\{\\emptyset\\}\\}\\}$, $A $ is a <em>set</em> of <strong>sets</strong> and $\\emptyset $ is just a set with nothing. $B = \\{\\emptyset\\}$ is also a set of sets, but happens to only contain the empty set. Then we have</p>\n\n<p>$$A = \\{\\emptyset, B, \\{B\\}\\} $$</p>\n\n<p>And thus $A \\cap B = B = \\{\\emptyset\\}$ given that $B \\subset A $.</p>\n\n<p>Also, for the $A $ given, $A \\cup B = A $ <strong>only because</strong> $B \\subset A$. There are plenty of sets $S $ where $S \\cup B \\not= S$.</p>\n" }, { "answer_id": 2095669, "score": 4, "text": "<p>\"Nothing exists\" is one (tongue in cheek) way of thinking about <a href=\"https://en.wikipedia.org/wiki/Axiom_of_empty_set\" rel=\"nofollow noreferrer\">the axiom of the empty set</a>.</p>\n\n<p>The empty set $\\emptyset$ exists by fiat; which is to say (either directly as an axiom or derived from other axioms), $\\exists \\emptyset : \\forall x, x \\notin \\emptyset$.</p>\n\n<p>Once we have <strong>any</strong> set $x$, we have rules by which we can prove the existence of other sets; such as the set $\\{x\\}$ whose only element is $x$, and so on. $\\emptyset$ is no different in this respect; it is a set, and we have rules that show how to construct other sets from sets whose existence we have already proven.</p>\n\n<p>The union of two sets $A$ and $B$ is a set with the property $\\forall x: (x \\in A \\cup B \\iff x \\in A \\lor x \\in B)$. 
So thinking about $\\emptyset \\cup A$, we are talking about a set:</p>\n\n<p>$$\\forall x: (x \\in \\emptyset \\cup A \\iff x \\in \\emptyset \\lor x \\in A)$$</p>\n\n<p>But by <strong>definition</strong>, it can never be true that $x \\in \\emptyset$; so the above boils down to:</p>\n\n<p>$$\\forall x: (x \\in \\emptyset \\cup A \\iff x \\in A)$$</p>\n\n<p>or in other words (see <a href=\"https://en.wikipedia.org/wiki/Axiom_of_extensionality\" rel=\"nofollow noreferrer\">axiom of extensionality</a>); $\\emptyset \\cup A = A$. </p>\n\n<p>On the other hand there does exist some set $x$ s.t. $x \\in \\{\\emptyset\\}$; namely, the set $x = \\emptyset$. So when we consider $\\{\\emptyset\\} \\cup A$,we are talking about the set:</p>\n\n<p>$$\\forall x: (x \\in \\{\\emptyset\\} \\cup A \\iff x \\in \\{\\emptyset\\} \\lor x \\in A)$$</p>\n\n<p>Thus $\\{\\emptyset\\} \\cup A$ is not necessarily equal to $A$; because (depending on $A$) we might not have $\\emptyset \\in A$, but we do have $\\emptyset \\in \\{\\emptyset\\}$; and therefore $\\emptyset \\in \\{\\emptyset\\} \\cup A$.</p>\n" }, { "answer_id": 2095632, "score": 2, "text": "<p>Note that $\\emptyset=${ } which contains no element while the set $X=${$\\emptyset$} is non-empty containing an element $\\emptyset$.</p>\n\n<p>{$\\emptyset$}$\\cap A$ represents the intersection of $X$ with $A$ which is clearly the set $X$ itself as $X\\subset A$</p>\n" }, { "answer_id": 2095971, "score": 1, "text": "<p>I think you are confused between subsets and members of a set.\n$\\emptyset$ is a subset of every set but it is not necessarily a member of every set but in your question set A has $\\emptyset$ as a member and the set$\\{\\emptyset\\}$ also has $\\emptyset$ as a member, as they have only one member in common $A \\cap \\{\\emptyset\\}=\\{ \\emptyset\\}$ because $A \\cap B$ is the set of all common members of A and B.</p>\n\n<p>Let $A \\cap {\\emptyset}=B$ then B is the set of all common members in set A and ${\\emptyset}$ 
as they have no common members their intersection B is the empty set, hence $A \\cap {\\emptyset}=B={\\emptyset}$.\n yes $\\emptyset \\cup A = A =\\{\\emptyset\\} \\cup A$ is a true statement but it's not true for any set A in general.</p>\n" }, { "answer_id": 2095830, "score": 0, "text": "<p>Let $A=\\{0,1,2\\}$ and $B=\\{0\\}$. Now your question is (up to renaming symbols) if $A\\cap B=B$ or $A\\cap B =\\emptyset$, which I think you can answer with ease.</p>\n" } ]
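The distinction the answers above draw between $\emptyset$ and $\{\emptyset\}$ can be checked mechanically. Below is a minimal Python sketch (an editorial aside, not part of the original thread; the names `empty`, `singleton`, and `A` are mine) using `frozenset` so that sets can be members of other sets:

```python
# Model the sets from the question. frozenset is hashable, so a set
# can contain other sets as elements.
empty = frozenset()                 # the empty set
singleton = frozenset({empty})      # the set whose single element is the empty set

# A = {empty, {empty}, {{empty}}}, as in the question.
A = frozenset({empty, singleton, frozenset({singleton})})

# The empty set has no elements, so it contributes nothing to a union.
assert A | empty == A

# singleton is NOT empty: it contains one element, namely the empty set.
assert empty in singleton

# Intersection keeps the common elements. The empty set is a member of
# both A and singleton, so the intersection is {empty}, not the empty set.
assert A & singleton == singleton
assert A & singleton != empty

# singleton | A equals A here only because the empty set is already in A.
assert A | singleton == A
```

This matches the conclusion of the accepted reasoning: $A \cap \{\emptyset\} = \{\emptyset\}$, and $\{\emptyset\} \cup A = A$ only because $\emptyset \in A$ for this particular $A$.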
Question: Intersection of Set A and a set containing empty set Answer 1: <p>In general,</p> <p>$$\emptyset \cup S = S \not=\{\emptyset\} \cup S$$</p> <p>Note that in our case, $A = \{\emptyset, \{\emptyset\} , \{\{\emptyset\}\}\}$, $A $ is a <em>set</em> of <strong>sets</strong> and $\emptyset $ is just a set with nothing. $B = \{\emptyset\}$ is also a set of sets, but happens to only contain the empty set. Then we have</p> <p>$$A = \{\emptyset, B, \{B\}\} $$</p> <p>And thus $A \cap B = B = \{\emptyset\}$ given that $B \subset A $.</p> <p>Also, for the $A $ given, $A \cup B = A $ <strong>only because</strong> $B \subset A$. There are plenty of sets $S $ where $S \cup B \not= S$.</p> Answer 2: <p>"Nothing exists" is one (tongue in cheek) way of thinking about <a href="https://en.wikipedia.org/wiki/Axiom_of_empty_set" rel="nofollow noreferrer">the axiom of the empty set</a>.</p> <p>The empty set $\emptyset$ exists by fiat; which is to say (either directly as an axiom or derived from other axioms), $\exists \emptyset : \forall x, x \notin \emptyset$.</p> <p>Once we have <strong>any</strong> set $x$, we have rules by which we can prove the existence of other sets; such as the set $\{x\}$ whose only element is $x$, and so on. $\emptyset$ is no different in this respect; it is a set, and we have rules that show how to construct other sets from sets whose existence we have already proven.</p> <p>The union of two sets $A$ and $B$ is a set with the property $\forall x: (x \in A \cup B \iff x \in A \lor x \in B)$. 
So thinking about $\emptyset \cup A$, we are talking about a set:</p> <p>$$\forall x: (x \in \emptyset \cup A \iff x \in \emptyset \lor x \in A)$$</p> <p>But by <strong>definition</strong>, it can never be true that $x \in \emptyset$; so the above boils down to:</p> <p>$$\forall x: (x \in \emptyset \cup A \iff x \in A)$$</p> <p>or in other words (see <a href="https://en.wikipedia.org/wiki/Axiom_of_extensionality" rel="nofollow noreferrer">axiom of extensionality</a>); $\emptyset \cup A = A$. </p> <p>On the other hand there does exist some set $x$ s.t. $x \in \{\emptyset\}$; namely, the set $x = \emptyset$. So when we consider $\{\emptyset\} \cup A$,we are talking about the set:</p> <p>$$\forall x: (x \in \{\emptyset\} \cup A \iff x \in \{\emptyset\} \lor x \in A)$$</p> <p>Thus $\{\emptyset\} \cup A$ is not necessarily equal to $A$; because (depending on $A$) we might not have $\emptyset \in A$, but we do have $\emptyset \in \{\emptyset\}$; and therefore $\emptyset \in \{\emptyset\} \cup A$.</p> Answer 3: <p>Note that $\emptyset=${ } which contains no element while the set $X=${$\emptyset$} is non-empty containing an element $\emptyset$.</p> <p>{$\emptyset$}$\cap A$ represents the intersection of $X$ with $A$ which is clearly the set $X$ itself as $X\subset A$</p> Answer 4: <p>I think you are confused between subsets and members of a set. $\emptyset$ is a subset of every set but it is not necessarily a member of every set but in your question set A has $\emptyset$ as a member and the set$\{\emptyset\}$ also has $\emptyset$ as a member, as they have only one member in common $A \cap \{\emptyset\}=\{ \emptyset\}$ because $A \cap B$ is the set of all common members of A and B.</p> <p>Let $A \cap {\emptyset}=B$ then B is the set of all common members in set A and ${\emptyset}$ as they have no common members their intersection B is the empty set, hence $A \cap {\emptyset}=B={\emptyset}$. 
yes $\emptyset \cup A = A =\{\emptyset\} \cup A$ is a true statement but it's not true for any set A in general.</p> Answer 5: <p>Let $A=\{0,1,2\}$ and $B=\{0\}$. Now your question is (up to renaming symbols) if $A\cap B=B$ or $A\cap B =\emptyset$, which I think you can answer with ease.</p> Answer 17: <p>"Nothing exists" is one (tongue in cheek) way of thinking about <a href="https://en.wikipedia.org/wiki/Axiom_of_empty_set" rel="nofollow noreferrer">the axiom of the empty set</a>.</p> <p>The empty set $\emptyset$ exists by fiat; which is to say (either directly as an axiom or derived from other axioms), $\exists \emptyset : \forall x, x \notin \emptyset$.</p> <p>Once we have <strong>any</strong> set $x$, we have rules by which we can prove the existence of other sets; such as the set $\{x\}$ whose only element is $x$, and so on. $\emptyset$ is no different in this respect; it is a set, and we have rules that show how to construct other sets from sets whose existence we have already proven.</p> <p>The union of two sets $A$ and $B$ is a set with the property $\forall x: (x \in A \cup B \iff x \in A \lor x \in B)$. 
So thinking about $\emptyset \cup A$, we are talking about a set:</p> <p>$$\forall x: (x \in \emptyset \cup A \iff x \in \emptyset \lor x \in A)$$</p> <p>But by <strong>definition</strong>, it can never be true that $x \in \emptyset$; so the above boils down to:</p> <p>$$\forall x: (x \in \emptyset \cup A \iff x \in A)$$</p> <p>or in other words (see <a href="https://en.wikipedia.org/wiki/Axiom_of_extensionality" rel="nofollow noreferrer">axiom of extensionality</a>); $\emptyset \cup A = A$. </p> <p>On the other hand there does exist some set $x$ s.t. $x \in \{\emptyset\}$; namely, the set $x = \emptyset$. So when we consider $\{\emptyset\} \cup A$,we are talking about the set:</p> <p>$$\forall x: (x \in \{\emptyset\} \cup A \iff x \in \{\emptyset\} \lor x \in A)$$</p> <p>Thus $\{\emptyset\} \cup A$ is not necessarily equal to $A$; because (depending on $A$) we might not have $\emptyset \in A$, but we do have $\emptyset \in \{\emptyset\}$; and therefore $\emptyset \in \{\emptyset\} \cup A$.</p> Answer 18: <p>Note that $\emptyset=${ } which contains no element while the set $X=${$\emptyset$} is non-empty containing an element $\emptyset$.</p> <p>{$\emptyset$}$\cap A$ represents the intersection of $X$ with $A$ which is clearly the set $X$ itself as $X\subset A$</p> Answer 19: <p>I think you are confused between subsets and members of a set. $\emptyset$ is a subset of every set but it is not necessarily a member of every set but in your question set A has $\emptyset$ as a member and the set$\{\emptyset\}$ also has $\emptyset$ as a member, as they have only one member in common $A \cap \{\emptyset\}=\{ \emptyset\}$ because $A \cap B$ is the set of all common members of A and B.</p> <p>Let $A \cap {\emptyset}=B$ then B is the set of all common members in set A and ${\emptyset}$ as they have no common members their intersection B is the empty set, hence $A \cap {\emptyset}=B={\emptyset}$. 
yes $\emptyset \cup A = A =\{\emptyset\} \cup A$ is a true statement but it's not true for any set A in general.</p> Answer 20: <p>Let $A=\{0,1,2\}$ and $B=\{0\}$. Now your question is (up to renaming symbols) if $A\cap B=B$ or $A\cap B =\emptyset$, which I think you can answer with ease.</p> Answer 21: <p>In general,</p> <p>$$\emptyset \cup S = S \not=\{\emptyset\} \cup S$$</p> <p>Note that in our case, $A = \{\emptyset, \{\emptyset\} , \{\{\emptyset\}\}\}$, $A $ is a <em>set</em> of <strong>sets</strong> and $\emptyset $ is just a set with nothing. $B = \{\emptyset\}$ is also a set of sets, but happens to only contain the empty set. Then we have</p> <p>$$A = \{\emptyset, B, \{B\}\} $$</p> <p>And thus $A \cap B = B = \{\emptyset\}$ given that $B \subset A $.</p> <p>Also, for the $A $ given, $A \cup B = A $ <strong>only because</strong> $B \subset A$. There are plenty of sets $S $ where $S \cup B \not= S$.</p> Answer 22: <p>"Nothing exists" is one (tongue in cheek) way of thinking about <a href="https://en.wikipedia.org/wiki/Axiom_of_empty_set" rel="nofollow noreferrer">the axiom of the empty set</a>.</p> <p>The empty set $\emptyset$ exists by fiat; which is to say (either directly as an axiom or derived from other axioms), $\exists \emptyset : \forall x, x \notin \emptyset$.</p> <p>Once we have <strong>any</strong> set $x$, we have rules by which we can prove the existence of other sets; such as the set $\{x\}$ whose only element is $x$, and so on. $\emptyset$ is no different in this respect; it is a set, and we have rules that show how to construct other sets from sets whose existence we have already proven.</p> <p>The union of two sets $A$ and $B$ is a set with the property $\forall x: (x \in A \cup B \iff x \in A \lor x \in B)$. 
So thinking about $\emptyset \cup A$, we are talking about a set:</p> <p>$$\forall x: (x \in \emptyset \cup A \iff x \in \emptyset \lor x \in A)$$</p> <p>But by <strong>definition</strong>, it can never be true that $x \in \emptyset$; so the above boils down to:</p> <p>$$\forall x: (x \in \emptyset \cup A \iff x \in A)$$</p> <p>or in other words (see <a href="https://en.wikipedia.org/wiki/Axiom_of_extensionality" rel="nofollow noreferrer">axiom of extensionality</a>); $\emptyset \cup A = A$. </p> <p>On the other hand there does exist some set $x$ s.t. $x \in \{\emptyset\}$; namely, the set $x = \emptyset$. So when we consider $\{\emptyset\} \cup A$,we are talking about the set:</p> <p>$$\forall x: (x \in \{\emptyset\} \cup A \iff x \in \{\emptyset\} \lor x \in A)$$</p> <p>Thus $\{\emptyset\} \cup A$ is not necessarily equal to $A$; because (depending on $A$) we might not have $\emptyset \in A$, but we do have $\emptyset \in \{\emptyset\}$; and therefore $\emptyset \in \{\emptyset\} \cup A$.</p> Answer 23: <p>Note that $\emptyset=${ } which contains no element while the set $X=${$\emptyset$} is non-empty containing an element $\emptyset$.</p> <p>{$\emptyset$}$\cap A$ represents the intersection of $X$ with $A$ which is clearly the set $X$ itself as $X\subset A$</p> Answer 24: <p>I think you are confused between subsets and members of a set. $\emptyset$ is a subset of every set but it is not necessarily a member of every set but in your question set A has $\emptyset$ as a member and the set$\{\emptyset\}$ also has $\emptyset$ as a member, as they have only one member in common $A \cap \{\emptyset\}=\{ \emptyset\}$ because $A \cap B$ is the set of all common members of A and B.</p> <p>Let $A \cap {\emptyset}=B$ then B is the set of all common members in set A and ${\emptyset}$ as they have no common members their intersection B is the empty set, hence $A \cap {\emptyset}=B={\emptyset}$. 
yes $\emptyset \cup A = A =\{\emptyset\} \cup A$ is a true statement but it's not true for any set A in general.</p> Answer 25: <p>Let $A=\{0,1,2\}$ and $B=\{0\}$. Now your question is (up to renaming symbols) if $A\cap B=B$ or $A\cap B =\emptyset$, which I think you can answer with ease.</p> Answer 26: <p>In general,</p> <p>$$\emptyset \cup S = S \not=\{\emptyset\} \cup S$$</p> <p>Note that in our case, $A = \{\emptyset, \{\emptyset\} , \{\{\emptyset\}\}\}$, $A $ is a <em>set</em> of <strong>sets</strong> and $\emptyset $ is just a set with nothing. $B = \{\emptyset\}$ is also a set of sets, but happens to only contain the empty set. Then we have</p> <p>$$A = \{\emptyset, B, \{B\}\} $$</p> <p>And thus $A \cap B = B = \{\emptyset\}$ given that $B \subset A $.</p> <p>Also, for the $A $ given, $A \cup B = A $ <strong>only because</strong> $B \subset A$. There are plenty of sets $S $ where $S \cup B \not= S$.</p> Answer 27: <p>"Nothing exists" is one (tongue in cheek) way of thinking about <a href="https://en.wikipedia.org/wiki/Axiom_of_empty_set" rel="nofollow noreferrer">the axiom of the empty set</a>.</p> <p>The empty set $\emptyset$ exists by fiat; which is to say (either directly as an axiom or derived from other axioms), $\exists \emptyset : \forall x, x \notin \emptyset$.</p> <p>Once we have <strong>any</strong> set $x$, we have rules by which we can prove the existence of other sets; such as the set $\{x\}$ whose only element is $x$, and so on. $\emptyset$ is no different in this respect; it is a set, and we have rules that show how to construct other sets from sets whose existence we have already proven.</p> <p>The union of two sets $A$ and $B$ is a set with the property $\forall x: (x \in A \cup B \iff x \in A \lor x \in B)$. 
So thinking about $\emptyset \cup A$, we are talking about a set:</p> <p>$$\forall x: (x \in \emptyset \cup A \iff x \in \emptyset \lor x \in A)$$</p> <p>But by <strong>definition</strong>, it can never be true that $x \in \emptyset$; so the above boils down to:</p> <p>$$\forall x: (x \in \emptyset \cup A \iff x \in A)$$</p> <p>or in other words (see <a href="https://en.wikipedia.org/wiki/Axiom_of_extensionality" rel="nofollow noreferrer">axiom of extensionality</a>); $\emptyset \cup A = A$. </p> <p>On the other hand there does exist some set $x$ s.t. $x \in \{\emptyset\}$; namely, the set $x = \emptyset$. So when we consider $\{\emptyset\} \cup A$,we are talking about the set:</p> <p>$$\forall x: (x \in \{\emptyset\} \cup A \iff x \in \{\emptyset\} \lor x \in A)$$</p> <p>Thus $\{\emptyset\} \cup A$ is not necessarily equal to $A$; because (depending on $A$) we might not have $\emptyset \in A$, but we do have $\emptyset \in \{\emptyset\}$; and therefore $\emptyset \in \{\emptyset\} \cup A$.</p> Answer 28: <p>Note that $\emptyset=${ } which contains no element while the set $X=${$\emptyset$} is non-empty containing an element $\emptyset$.</p> <p>{$\emptyset$}$\cap A$ represents the intersection of $X$ with $A$ which is clearly the set $X$ itself as $X\subset A$</p> Answer 29: <p>I think you are confused between subsets and members of a set. $\emptyset$ is a subset of every set but it is not necessarily a member of every set but in your question set A has $\emptyset$ as a member and the set$\{\emptyset\}$ also has $\emptyset$ as a member, as they have only one member in common $A \cap \{\emptyset\}=\{ \emptyset\}$ because $A \cap B$ is the set of all common members of A and B.</p> <p>Let $A \cap {\emptyset}=B$ then B is the set of all common members in set A and ${\emptyset}$ as they have no common members their intersection B is the empty set, hence $A \cap {\emptyset}=B={\emptyset}$. 
yes $\emptyset \cup A = A =\{\emptyset\} \cup A$ is a true statement but it's not true for any set A in general.</p> Answer 30: <p>Let $A=\{0,1,2\}$ and $B=\{0\}$. Now your question is (up to renaming symbols) if $A\cap B=B$ or $A\cap B =\emptyset$, which I think you can answer with ease.</p> Answer 31: <p>In general,</p> <p>$$\emptyset \cup S = S \not=\{\emptyset\} \cup S$$</p> <p>Note that in our case, $A = \{\emptyset, \{\emptyset\} , \{\{\emptyset\}\}\}$, $A $ is a <em>set</em> of <strong>sets</strong> and $\emptyset $ is just a set with nothing. $B = \{\emptyset\}$ is also a set of sets, but happens to only contain the empty set. Then we have</p> <p>$$A = \{\emptyset, B, \{B\}\} $$</p> <p>And thus $A \cap B = B = \{\emptyset\}$ given that $B \subset A $.</p> <p>Also, for the $A $ given, $A \cup B = A $ <strong>only because</strong> $B \subset A$. There are plenty of sets $S $ where $S \cup B \not= S$.</p> Answer 32: <p>"Nothing exists" is one (tongue in cheek) way of thinking about <a href="https://en.wikipedia.org/wiki/Axiom_of_empty_set" rel="nofollow noreferrer">the axiom of the empty set</a>.</p> <p>The empty set $\emptyset$ exists by fiat; which is to say (either directly as an axiom or derived from other axioms), $\exists \emptyset : \forall x, x \notin \emptyset$.</p> <p>Once we have <strong>any</strong> set $x$, we have rules by which we can prove the existence of other sets; such as the set $\{x\}$ whose only element is $x$, and so on. $\emptyset$ is no different in this respect; it is a set, and we have rules that show how to construct other sets from sets whose existence we have already proven.</p> <p>The union of two sets $A$ and $B$ is a set with the property $\forall x: (x \in A \cup B \iff x \in A \lor x \in B)$. 
So thinking about $\emptyset \cup A$, we are talking about a set:</p> <p>$$\forall x: (x \in \emptyset \cup A \iff x \in \emptyset \lor x \in A)$$</p> <p>But by <strong>definition</strong>, it can never be true that $x \in \emptyset$; so the above boils down to:</p> <p>$$\forall x: (x \in \emptyset \cup A \iff x \in A)$$</p> <p>or in other words (see <a href="https://en.wikipedia.org/wiki/Axiom_of_extensionality" rel="nofollow noreferrer">axiom of extensionality</a>); $\emptyset \cup A = A$. </p> <p>On the other hand there does exist some set $x$ s.t. $x \in \{\emptyset\}$; namely, the set $x = \emptyset$. So when we consider $\{\emptyset\} \cup A$,we are talking about the set:</p> <p>$$\forall x: (x \in \{\emptyset\} \cup A \iff x \in \{\emptyset\} \lor x \in A)$$</p> <p>Thus $\{\emptyset\} \cup A$ is not necessarily equal to $A$; because (depending on $A$) we might not have $\emptyset \in A$, but we do have $\emptyset \in \{\emptyset\}$; and therefore $\emptyset \in \{\emptyset\} \cup A$.</p> Answer 33: <p>Note that $\emptyset=${ } which contains no element while the set $X=${$\emptyset$} is non-empty containing an element $\emptyset$.</p> <p>{$\emptyset$}$\cap A$ represents the intersection of $X$ with $A$ which is clearly the set $X$ itself as $X\subset A$</p> Answer 34: <p>I think you are confused between subsets and members of a set. $\emptyset$ is a subset of every set but it is not necessarily a member of every set but in your question set A has $\emptyset$ as a member and the set$\{\emptyset\}$ also has $\emptyset$ as a member, as they have only one member in common $A \cap \{\emptyset\}=\{ \emptyset\}$ because $A \cap B$ is the set of all common members of A and B.</p> <p>Let $A \cap {\emptyset}=B$ then B is the set of all common members in set A and ${\emptyset}$ as they have no common members their intersection B is the empty set, hence $A \cap {\emptyset}=B={\emptyset}$. 
yes $\emptyset \cup A = A =\{\emptyset\} \cup A$ is a true statement but it's not true for any set A in general.</p> Answer 35: <p>Let $A=\{0,1,2\}$ and $B=\{0\}$. Now your question is (up to renaming symbols) if $A\cap B=B$ or $A\cap B =\emptyset$, which I think you can answer with ease.</p> Answer 36: <p>In general,</p> <p>$$\emptyset \cup S = S \not=\{\emptyset\} \cup S$$</p> <p>Note that in our case, $A = \{\emptyset, \{\emptyset\} , \{\{\emptyset\}\}\}$, $A $ is a <em>set</em> of <strong>sets</strong> and $\emptyset $ is just a set with nothing. $B = \{\emptyset\}$ is also a set of sets, but happens to only contain the empty set. Then we have</p> <p>$$A = \{\emptyset, B, \{B\}\} $$</p> <p>And thus $A \cap B = B = \{\emptyset\}$ given that $B \subset A $.</p> <p>Also, for the $A $ given, $A \cup B = A $ <strong>only because</strong> $B \subset A$. There are plenty of sets $S $ where $S \cup B \not= S$.</p> Answer 37: <p>"Nothing exists" is one (tongue in cheek) way of thinking about <a href="https://en.wikipedia.org/wiki/Axiom_of_empty_set" rel="nofollow noreferrer">the axiom of the empty set</a>.</p> <p>The empty set $\emptyset$ exists by fiat; which is to say (either directly as an axiom or derived from other axioms), $\exists \emptyset : \forall x, x \notin \emptyset$.</p> <p>Once we have <strong>any</strong> set $x$, we have rules by which we can prove the existence of other sets; such as the set $\{x\}$ whose only element is $x$, and so on. $\emptyset$ is no different in this respect; it is a set, and we have rules that show how to construct other sets from sets whose existence we have already proven.</p> <p>The union of two sets $A$ and $B$ is a set with the property $\forall x: (x \in A \cup B \iff x \in A \lor x \in B)$. 
So thinking about $\emptyset \cup A$, we are talking about a set:</p> <p>$$\forall x: (x \in \emptyset \cup A \iff x \in \emptyset \lor x \in A)$$</p> <p>But by <strong>definition</strong>, it can never be true that $x \in \emptyset$; so the above boils down to:</p> <p>$$\forall x: (x \in \emptyset \cup A \iff x \in A)$$</p> <p>or in other words (see <a href="https://en.wikipedia.org/wiki/Axiom_of_extensionality" rel="nofollow noreferrer">axiom of extensionality</a>); $\emptyset \cup A = A$. </p> <p>On the other hand there does exist some set $x$ s.t. $x \in \{\emptyset\}$; namely, the set $x = \emptyset$. So when we consider $\{\emptyset\} \cup A$,we are talking about the set:</p> <p>$$\forall x: (x \in \{\emptyset\} \cup A \iff x \in \{\emptyset\} \lor x \in A)$$</p> <p>Thus $\{\emptyset\} \cup A$ is not necessarily equal to $A$; because (depending on $A$) we might not have $\emptyset \in A$, but we do have $\emptyset \in \{\emptyset\}$; and therefore $\emptyset \in \{\emptyset\} \cup A$.</p> Answer 38: <p>Note that $\emptyset=${ } which contains no element while the set $X=${$\emptyset$} is non-empty containing an element $\emptyset$.</p> <p>{$\emptyset$}$\cap A$ represents the intersection of $X$ with $A$ which is clearly the set $X$ itself as $X\subset A$</p> Answer 39: <p>I think you are confused between subsets and members of a set. $\emptyset$ is a subset of every set but it is not necessarily a member of every set but in your question set A has $\emptyset$ as a member and the set$\{\emptyset\}$ also has $\emptyset$ as a member, as they have only one member in common $A \cap \{\emptyset\}=\{ \emptyset\}$ because $A \cap B$ is the set of all common members of A and B.</p> <p>Let $A \cap {\emptyset}=B$ then B is the set of all common members in set A and ${\emptyset}$ as they have no common members their intersection B is the empty set, hence $A \cap {\emptyset}=B={\emptyset}$. 
yes $\emptyset \cup A = A =\{\emptyset\} \cup A$ is a true statement but it's not true for any set A in general.</p> Answer 40: <p>Let $A=\{0,1,2\}$ and $B=\{0\}$. Now your question is (up to renaming symbols) if $A\cap B=B$ or $A\cap B =\emptyset$, which I think you can answer with ease.</p> Answer 41: <p>In general,</p> <p>$$\emptyset \cup S = S \not=\{\emptyset\} \cup S$$</p> <p>Note that in our case, $A = \{\emptyset, \{\emptyset\} , \{\{\emptyset\}\}\}$, $A $ is a <em>set</em> of <strong>sets</strong> and $\emptyset $ is just a set with nothing. $B = \{\emptyset\}$ is also a set of sets, but happens to only contain the empty set. Then we have</p> <p>$$A = \{\emptyset, B, \{B\}\} $$</p> <p>And thus $A \cap B = B = \{\emptyset\}$ given that $B \subset A $.</p> <p>Also, for the $A $ given, $A \cup B = A $ <strong>only because</strong> $B \subset A$. There are plenty of sets $S $ where $S \cup B \not= S$.</p> Answer 42: <p>"Nothing exists" is one (tongue in cheek) way of thinking about <a href="https://en.wikipedia.org/wiki/Axiom_of_empty_set" rel="nofollow noreferrer">the axiom of the empty set</a>.</p> <p>The empty set $\emptyset$ exists by fiat; which is to say (either directly as an axiom or derived from other axioms), $\exists \emptyset : \forall x, x \notin \emptyset$.</p> <p>Once we have <strong>any</strong> set $x$, we have rules by which we can prove the existence of other sets; such as the set $\{x\}$ whose only element is $x$, and so on. $\emptyset$ is no different in this respect; it is a set, and we have rules that show how to construct other sets from sets whose existence we have already proven.</p> <p>The union of two sets $A$ and $B$ is a set with the property $\forall x: (x \in A \cup B \iff x \in A \lor x \in B)$. 
So thinking about $\emptyset \cup A$, we are talking about a set:</p> <p>$$\forall x: (x \in \emptyset \cup A \iff x \in \emptyset \lor x \in A)$$</p> <p>But by <strong>definition</strong>, it can never be true that $x \in \emptyset$; so the above boils down to:</p> <p>$$\forall x: (x \in \emptyset \cup A \iff x \in A)$$</p> <p>or in other words (see <a href="https://en.wikipedia.org/wiki/Axiom_of_extensionality" rel="nofollow noreferrer">axiom of extensionality</a>); $\emptyset \cup A = A$. </p> <p>On the other hand there does exist some set $x$ s.t. $x \in \{\emptyset\}$; namely, the set $x = \emptyset$. So when we consider $\{\emptyset\} \cup A$,we are talking about the set:</p> <p>$$\forall x: (x \in \{\emptyset\} \cup A \iff x \in \{\emptyset\} \lor x \in A)$$</p> <p>Thus $\{\emptyset\} \cup A$ is not necessarily equal to $A$; because (depending on $A$) we might not have $\emptyset \in A$, but we do have $\emptyset \in \{\emptyset\}$; and therefore $\emptyset \in \{\emptyset\} \cup A$.</p> Answer 43: <p>Note that $\emptyset=${ } which contains no element while the set $X=${$\emptyset$} is non-empty containing an element $\emptyset$.</p> <p>{$\emptyset$}$\cap A$ represents the intersection of $X$ with $A$ which is clearly the set $X$ itself as $X\subset A$</p> Answer 44: <p>I think you are confused between subsets and members of a set. $\emptyset$ is a subset of every set but it is not necessarily a member of every set but in your question set A has $\emptyset$ as a member and the set$\{\emptyset\}$ also has $\emptyset$ as a member, as they have only one member in common $A \cap \{\emptyset\}=\{ \emptyset\}$ because $A \cap B$ is the set of all common members of A and B.</p> <p>Let $A \cap {\emptyset}=B$ then B is the set of all common members in set A and ${\emptyset}$ as they have no common members their intersection B is the empty set, hence $A \cap {\emptyset}=B={\emptyset}$. 
yes $\emptyset \cup A = A =\{\emptyset\} \cup A$ is a true statement but it's not true for any set A in general.</p> Answer 45: <p>Let $A=\{0,1,2\}$ and $B=\{0\}$. Now your question is (up to renaming symbols) if $A\cap B=B$ or $A\cap B =\emptyset$, which I think you can answer with ease.</p> Answer 46: <p>In general,</p> <p>$$\emptyset \cup S = S \not=\{\emptyset\} \cup S$$</p> <p>Note that in our case, $A = \{\emptyset, \{\emptyset\} , \{\{\emptyset\}\}\}$, $A $ is a <em>set</em> of <strong>sets</strong> and $\emptyset $ is just a set with nothing. $B = \{\emptyset\}$ is also a set of sets, but happens to only contain the empty set. Then we have</p> <p>$$A = \{\emptyset, B, \{B\}\} $$</p> <p>And thus $A \cap B = B = \{\emptyset\}$ given that $B \subset A $.</p> <p>Also, for the $A $ given, $A \cup B = A $ <strong>only because</strong> $B \subset A$. There are plenty of sets $S $ where $S \cup B \not= S$.</p> Answer 47: <p>"Nothing exists" is one (tongue in cheek) way of thinking about <a href="https://en.wikipedia.org/wiki/Axiom_of_empty_set" rel="nofollow noreferrer">the axiom of the empty set</a>.</p> <p>The empty set $\emptyset$ exists by fiat; which is to say (either directly as an axiom or derived from other axioms), $\exists \emptyset : \forall x, x \notin \emptyset$.</p> <p>Once we have <strong>any</strong> set $x$, we have rules by which we can prove the existence of other sets; such as the set $\{x\}$ whose only element is $x$, and so on. $\emptyset$ is no different in this respect; it is a set, and we have rules that show how to construct other sets from sets whose existence we have already proven.</p> <p>The union of two sets $A$ and $B$ is a set with the property $\forall x: (x \in A \cup B \iff x \in A \lor x \in B)$. 
So thinking about $\emptyset \cup A$, we are talking about a set:</p> <p>$$\forall x: (x \in \emptyset \cup A \iff x \in \emptyset \lor x \in A)$$</p> <p>But by <strong>definition</strong>, it can never be true that $x \in \emptyset$; so the above boils down to:</p> <p>$$\forall x: (x \in \emptyset \cup A \iff x \in A)$$</p> <p>or in other words (see <a href="https://en.wikipedia.org/wiki/Axiom_of_extensionality" rel="nofollow noreferrer">axiom of extensionality</a>); $\emptyset \cup A = A$. </p> <p>On the other hand there does exist some set $x$ s.t. $x \in \{\emptyset\}$; namely, the set $x = \emptyset$. So when we consider $\{\emptyset\} \cup A$,we are talking about the set:</p> <p>$$\forall x: (x \in \{\emptyset\} \cup A \iff x \in \{\emptyset\} \lor x \in A)$$</p> <p>Thus $\{\emptyset\} \cup A$ is not necessarily equal to $A$; because (depending on $A$) we might not have $\emptyset \in A$, but we do have $\emptyset \in \{\emptyset\}$; and therefore $\emptyset \in \{\emptyset\} \cup A$.</p> Answer 48: <p>Note that $\emptyset=${ } which contains no element while the set $X=${$\emptyset$} is non-empty containing an element $\emptyset$.</p> <p>{$\emptyset$}$\cap A$ represents the intersection of $X$ with $A$ which is clearly the set $X$ itself as $X\subset A$</p> Answer 49: <p>I think you are confused between subsets and members of a set. $\emptyset$ is a subset of every set but it is not necessarily a member of every set but in your question set A has $\emptyset$ as a member and the set$\{\emptyset\}$ also has $\emptyset$ as a member, as they have only one member in common $A \cap \{\emptyset\}=\{ \emptyset\}$ because $A \cap B$ is the set of all common members of A and B.</p> <p>Let $A \cap {\emptyset}=B$ then B is the set of all common members in set A and ${\emptyset}$ as they have no common members their intersection B is the empty set, hence $A \cap {\emptyset}=B={\emptyset}$. 
yes $\emptyset \cup A = A =\{\emptyset\} \cup A$ is a true statement but it's not true for any set A in general.</p> Answer 50: <p>Let $A=\{0,1,2\}$ and $B=\{0\}$. Now your question is (up to renaming symbols) if $A\cap B=B$ or $A\cap B =\emptyset$, which I think you can answer with ease.</p> Answer 51: <p>In general,</p> <p>$$\emptyset \cup S = S \not=\{\emptyset\} \cup S$$</p> <p>Note that in our case, $A = \{\emptyset, \{\emptyset\} , \{\{\emptyset\}\}\}$, $A $ is a <em>set</em> of <strong>sets</strong> and $\emptyset $ is just a set with nothing. $B = \{\emptyset\}$ is also a set of sets, but happens to only contain the empty set. Then we have</p> <p>$$A = \{\emptyset, B, \{B\}\} $$</p> <p>And thus $A \cap B = B = \{\emptyset\}$ given that $B \subset A $.</p> <p>Also, for the $A $ given, $A \cup B = A $ <strong>only because</strong> $B \subset A$. There are plenty of sets $S $ where $S \cup B \not= S$.</p> Answer 52: <p>"Nothing exists" is one (tongue in cheek) way of thinking about <a href="https://en.wikipedia.org/wiki/Axiom_of_empty_set" rel="nofollow noreferrer">the axiom of the empty set</a>.</p> <p>The empty set $\emptyset$ exists by fiat; which is to say (either directly as an axiom or derived from other axioms), $\exists \emptyset : \forall x, x \notin \emptyset$.</p> <p>Once we have <strong>any</strong> set $x$, we have rules by which we can prove the existence of other sets; such as the set $\{x\}$ whose only element is $x$, and so on. $\emptyset$ is no different in this respect; it is a set, and we have rules that show how to construct other sets from sets whose existence we have already proven.</p> <p>The union of two sets $A$ and $B$ is a set with the property $\forall x: (x \in A \cup B \iff x \in A \lor x \in B)$. 
So thinking about $\emptyset \cup A$, we are talking about a set:</p> <p>$$\forall x: (x \in \emptyset \cup A \iff x \in \emptyset \lor x \in A)$$</p> <p>But by <strong>definition</strong>, it can never be true that $x \in \emptyset$; so the above boils down to:</p> <p>$$\forall x: (x \in \emptyset \cup A \iff x \in A)$$</p> <p>or in other words (see <a href="https://en.wikipedia.org/wiki/Axiom_of_extensionality" rel="nofollow noreferrer">axiom of extensionality</a>); $\emptyset \cup A = A$. </p> <p>On the other hand there does exist some set $x$ s.t. $x \in \{\emptyset\}$; namely, the set $x = \emptyset$. So when we consider $\{\emptyset\} \cup A$,we are talking about the set:</p> <p>$$\forall x: (x \in \{\emptyset\} \cup A \iff x \in \{\emptyset\} \lor x \in A)$$</p> <p>Thus $\{\emptyset\} \cup A$ is not necessarily equal to $A$; because (depending on $A$) we might not have $\emptyset \in A$, but we do have $\emptyset \in \{\emptyset\}$; and therefore $\emptyset \in \{\emptyset\} \cup A$.</p> Answer 53: <p>Note that $\emptyset=${ } which contains no element while the set $X=${$\emptyset$} is non-empty containing an element $\emptyset$.</p> <p>{$\emptyset$}$\cap A$ represents the intersection of $X$ with $A$ which is clearly the set $X$ itself as $X\subset A$</p> Answer 54: <p>I think you are confused between subsets and members of a set. $\emptyset$ is a subset of every set but it is not necessarily a member of every set but in your question set A has $\emptyset$ as a member and the set$\{\emptyset\}$ also has $\emptyset$ as a member, as they have only one member in common $A \cap \{\emptyset\}=\{ \emptyset\}$ because $A \cap B$ is the set of all common members of A and B.</p> <p>Let $A \cap {\emptyset}=B$ then B is the set of all common members in set A and ${\emptyset}$ as they have no common members their intersection B is the empty set, hence $A \cap {\emptyset}=B={\emptyset}$. 
yes $\emptyset \cup A = A =\{\emptyset\} \cup A$ is a true statement but it's not true for any set A in general.</p> Answer 55: <p>Let $A=\{0,1,2\}$ and $B=\{0\}$. Now your question is (up to renaming symbols) if $A\cap B=B$ or $A\cap B =\emptyset$, which I think you can answer with ease.</p> Answer 56: <p>In general,</p> <p>$$\emptyset \cup S = S \not=\{\emptyset\} \cup S$$</p> <p>Note that in our case, $A = \{\emptyset, \{\emptyset\} , \{\{\emptyset\}\}\}$, $A $ is a <em>set</em> of <strong>sets</strong> and $\emptyset $ is just a set with nothing. $B = \{\emptyset\}$ is also a set of sets, but happens to only contain the empty set. Then we have</p> <p>$$A = \{\emptyset, B, \{B\}\} $$</p> <p>And thus $A \cap B = B = \{\emptyset\}$ given that $B \subset A $.</p> <p>Also, for the $A $ given, $A \cup B = A $ <strong>only because</strong> $B \subset A$. There are plenty of sets $S $ where $S \cup B \not= S$.</p> Answer 57: <p>"Nothing exists" is one (tongue in cheek) way of thinking about <a href="https://en.wikipedia.org/wiki/Axiom_of_empty_set" rel="nofollow noreferrer">the axiom of the empty set</a>.</p> <p>The empty set $\emptyset$ exists by fiat; which is to say (either directly as an axiom or derived from other axioms), $\exists \emptyset : \forall x, x \notin \emptyset$.</p> <p>Once we have <strong>any</strong> set $x$, we have rules by which we can prove the existence of other sets; such as the set $\{x\}$ whose only element is $x$, and so on. $\emptyset$ is no different in this respect; it is a set, and we have rules that show how to construct other sets from sets whose existence we have already proven.</p> <p>The union of two sets $A$ and $B$ is a set with the property $\forall x: (x \in A \cup B \iff x \in A \lor x \in B)$. 
So thinking about $\emptyset \cup A$, we are talking about a set:</p> <p>$$\forall x: (x \in \emptyset \cup A \iff x \in \emptyset \lor x \in A)$$</p> <p>But by <strong>definition</strong>, it can never be true that $x \in \emptyset$; so the above boils down to:</p> <p>$$\forall x: (x \in \emptyset \cup A \iff x \in A)$$</p> <p>or in other words (see <a href="https://en.wikipedia.org/wiki/Axiom_of_extensionality" rel="nofollow noreferrer">axiom of extensionality</a>); $\emptyset \cup A = A$. </p> <p>On the other hand there does exist some set $x$ s.t. $x \in \{\emptyset\}$; namely, the set $x = \emptyset$. So when we consider $\{\emptyset\} \cup A$, we are talking about the set:</p> <p>$$\forall x: (x \in \{\emptyset\} \cup A \iff x \in \{\emptyset\} \lor x \in A)$$</p> <p>Thus $\{\emptyset\} \cup A$ is not necessarily equal to $A$; because (depending on $A$) we might not have $\emptyset \in A$, but we do have $\emptyset \in \{\emptyset\}$; and therefore $\emptyset \in \{\emptyset\} \cup A$.</p>
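The answers above argue the identities $\emptyset \cup A = A$ and (in general) $\{\emptyset\} \cup A \ne A$ purely from the definitions. As an editorial addition (not part of the original thread), the same identities can be checked mechanically in Python, using `frozenset` so that the empty set can itself be a member of another set:

```python
# Editorial sketch: verify the set identities discussed in the answers.
# frozenset is hashable, so a set can contain the empty set as an element.
empty = frozenset()                 # plays the role of the empty set
singleton = frozenset({empty})      # plays the role of {empty set}

# A has the empty set, {empty set}, and {{empty set}} as members,
# mirroring the A in the question.
A = frozenset({empty, singleton, frozenset({singleton})})

# Union with the empty set changes nothing, for every set A.
assert empty | A == A

# Union with {empty set} happens to equal A here, but only because
# the empty set is already a member of A ...
assert singleton | A == A

# ... for a set B that does not contain the empty set, the union grows.
B = frozenset({singleton})          # B = {{empty set}}; empty not in B
assert singleton | B != B

# Intersections behave exactly as the answers state.
assert A & singleton == singleton   # A ∩ {∅} = {∅}
assert A & empty == empty           # A ∩ ∅ = ∅
print("all identities check out")
```

The `|` and `&` operators are Python's set union and intersection; the assertions fail loudly if any identity were wrong.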
Limit at infinity of cubic roots and square roots without using conjugate $\lim_{x \to \infty} \frac{\sqrt[3]{x+2}}{\sqrt{x+3}}$
2,096,027
8
https://math.stackexchange.com/questions/2096027/limit-at-infinity-of-cubic-roots-and-square-roots-without-using-conjugate-lim
<blockquote> <p>$$\lim_{x \to \infty} \frac{\sqrt[3]{x+2}}{\sqrt{x+3}} $$</p> </blockquote> <p>How would you proceed to find this limit? By eyeballing, I would guess it goes to zero since the numerator has a smaller power than the denominator. Normally I would use the binomial theorem if I had something like $$\lim_{x \to \infty} \frac{\sqrt[3]{x+2}-1}{\sqrt{x+3}-1} $$ But here I don't know how to find the limit since I can't really use the binomial theorem.</p>
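The eyeballed guess can be supported numerically before reading the answers (an editorial sketch, not part of the original post): for large $x$ the ratio behaves like $x^{-1/6}$, so it should shrink toward zero as $x$ grows.

```python
# Editorial sketch: (x+2)^(1/3) / (x+3)^(1/2) should behave like
# x**(-1/6) for large x, hence tend to 0 as x -> infinity.
def ratio(x: float) -> float:
    return (x + 2) ** (1 / 3) / (x + 3) ** 0.5

for x in (1e3, 1e6, 1e12):
    # the two columns agree better and better as x grows
    print(f"x = {x:.0e}:  ratio = {ratio(x):.6f},  x**(-1/6) = {x ** (-1 / 6):.6f}")
```

At $x = 10^{12}$ both columns are about $0.01$, matching the $x^{-1/6}$ behaviour derived in the answers below.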
[ { "answer_id": 2096029, "score": 14, "text": "<p>$$\\lim_{x \\to \\infty} \\frac{\\sqrt[3]{x+2}}{\\sqrt{x+3}}=\\lim_{x \\to \\infty} \\sqrt[6]{\\dfrac{(x+2)^2}{(x+3)^3}}=\\sqrt[6]{0}=0$$</p>\n" }, { "answer_id": 2096037, "score": 13, "text": "<p>Simply use <em>equivalents</em>: $\\;\\sqrt[3]{x+2}\\sim_\\infty \\sqrt[3]{x}$, $\\;\\sqrt{x+3}\\sim_\\infty \\sqrt{x}$, hence\n$$\\frac{\\sqrt[3]{x+2}}{\\sqrt{x+3}}\\sim_\\infty \\frac{x^{1/3}}{x^{1/2}}=x^{-1/6}\\xrightarrow[x\\to\\infty]{}0.$$</p>\n" }, { "answer_id": 2096031, "score": 8, "text": "<p>If you factorize you get \n$$\\frac{x^{1/3}(1+2/x)^{1/3}}{x^{1/2}(1+3/x)^{1/2}} = \\frac{(1+2/x)^{1/3}}{x^{1/6}(1+3/x)^{1/2}} $$\nI'll let you do the limit yourself.</p>\n" }, { "answer_id": 2096030, "score": 5, "text": "<p>For $x \\ge 2$ we have</p>\n\n<p>$0 \\le \\frac{^3\\sqrt{x+2}}{\\sqrt{x + 3}} \\le \\frac{\\sqrt[3]{2x}}{\\sqrt{x}}$.</p>\n\n<p>Your turn!</p>\n" }, { "answer_id": 2096039, "score": 4, "text": "<p>A slightly longer way: use Generalized Binomial coefficients: \n$$\nx^{-\\frac{1}{6}}\\frac{(1+\\frac{2}{x})^{\\frac{1}{3}}}{(1+\\frac{3}{x})^{\\frac{1}{2}}} \\sim x^{-\\frac{1}{6}}\\frac{1+\\frac{2}{x} + O(x^{-2})}{1+\\frac{3}{x} + O(x^{-2}) } \\to_x 0\n$$</p>\n" }, { "answer_id": 2096033, "score": 2, "text": "<p><strong>Hint</strong>:</p>\n\n<p>$$\\frac{\\sqrt[3]{x+2}}{\\sqrt{x+3}}=\\frac{\\sqrt[3]x}{\\sqrt x}\\frac{\\sqrt[3]{1+\\frac2x}}{\\sqrt{1+\\frac3x}}.$$</p>\n" }, { "answer_id": 2096923, "score": -1, "text": "<p>Hint: $x +2 &lt; x + 3$.</p>\n\n<p>(Yes, really, you can solve it using this.)</p>\n" }, { "answer_id": 2096029, "score": 14, "text": "<p>$$\\lim_{x \\to \\infty} \\frac{\\sqrt[3]{x+2}}{\\sqrt{x+3}}=\\lim_{x \\to \\infty} \\sqrt[6]{\\dfrac{(x+2)^2}{(x+3)^3}}=\\sqrt[6]{0}=0$$</p>\n" }, { "answer_id": 2096037, "score": 13, "text": "<p>Simply use <em>equivalents</em>: $\\;\\sqrt[3]{x+2}\\sim_\\infty \\sqrt[3]{x}$, $\\;\\sqrt{x+3}\\sim_\\infty \\sqrt{x}$, 
hence\n$$\\frac{\\sqrt[3]{x+2}}{\\sqrt{x+3}}\\sim_\\infty \\frac{x^{1/3}}{x^{1/2}}=x^{-1/6}\\xrightarrow[x\\to\\infty]{}0.$$</p>\n" }, { "answer_id": 2096031, "score": 8, "text": "<p>If you factorize you get \n$$\\frac{x^{1/3}(1+2/x)^{1/3}}{x^{1/2}(1+3/x)^{1/2}} = \\frac{(1+2/x)^{1/3}}{x^{1/6}(1+3/x)^{1/2}} $$\nI'll let you do the limit yourself.</p>\n" }, { "answer_id": 2096030, "score": 5, "text": "<p>For $x \\ge 2$ we have</p>\n\n<p>$0 \\le \\frac{^3\\sqrt{x+2}}{\\sqrt{x + 3}} \\le \\frac{\\sqrt[3]{2x}}{\\sqrt{x}}$.</p>\n\n<p>Your turn!</p>\n" }, { "answer_id": 2096039, "score": 4, "text": "<p>A slightly longer way: use Generalized Binomial coefficients: \n$$\nx^{-\\frac{1}{6}}\\frac{(1+\\frac{2}{x})^{\\frac{1}{3}}}{(1+\\frac{3}{x})^{\\frac{1}{2}}} \\sim x^{-\\frac{1}{6}}\\frac{1+\\frac{2}{x} + O(x^{-2})}{1+\\frac{3}{x} + O(x^{-2}) } \\to_x 0\n$$</p>\n" }, { "answer_id": 2096033, "score": 2, "text": "<p><strong>Hint</strong>:</p>\n\n<p>$$\\frac{\\sqrt[3]{x+2}}{\\sqrt{x+3}}=\\frac{\\sqrt[3]x}{\\sqrt x}\\frac{\\sqrt[3]{1+\\frac2x}}{\\sqrt{1+\\frac3x}}.$$</p>\n" }, { "answer_id": 2096923, "score": -1, "text": "<p>Hint: $x +2 &lt; x + 3$.</p>\n\n<p>(Yes, really, you can solve it using this.)</p>\n" }, { "answer_id": 2096029, "score": 14, "text": "<p>$$\\lim_{x \\to \\infty} \\frac{\\sqrt[3]{x+2}}{\\sqrt{x+3}}=\\lim_{x \\to \\infty} \\sqrt[6]{\\dfrac{(x+2)^2}{(x+3)^3}}=\\sqrt[6]{0}=0$$</p>\n" }, { "answer_id": 2096037, "score": 13, "text": "<p>Simply use <em>equivalents</em>: $\\;\\sqrt[3]{x+2}\\sim_\\infty \\sqrt[3]{x}$, $\\;\\sqrt{x+3}\\sim_\\infty \\sqrt{x}$, hence\n$$\\frac{\\sqrt[3]{x+2}}{\\sqrt{x+3}}\\sim_\\infty \\frac{x^{1/3}}{x^{1/2}}=x^{-1/6}\\xrightarrow[x\\to\\infty]{}0.$$</p>\n" }, { "answer_id": 2096031, "score": 8, "text": "<p>If you factorize you get \n$$\\frac{x^{1/3}(1+2/x)^{1/3}}{x^{1/2}(1+3/x)^{1/2}} = \\frac{(1+2/x)^{1/3}}{x^{1/6}(1+3/x)^{1/2}} $$\nI'll let you do the limit yourself.</p>\n" }, { "answer_id": 2096030, "score": 5, "text": "<p>For 
$x \\ge 2$ we have</p>\n\n<p>$0 \\le \\frac{^3\\sqrt{x+2}}{\\sqrt{x + 3}} \\le \\frac{\\sqrt[3]{2x}}{\\sqrt{x}}$.</p>\n\n<p>Your turn!</p>\n" }, { "answer_id": 2096039, "score": 4, "text": "<p>A slightly longer way: use Generalized Binomial coefficients: \n$$\nx^{-\\frac{1}{6}}\\frac{(1+\\frac{2}{x})^{\\frac{1}{3}}}{(1+\\frac{3}{x})^{\\frac{1}{2}}} \\sim x^{-\\frac{1}{6}}\\frac{1+\\frac{2}{x} + O(x^{-2})}{1+\\frac{3}{x} + O(x^{-2}) } \\to_x 0\n$$</p>\n" }, { "answer_id": 2096033, "score": 2, "text": "<p><strong>Hint</strong>:</p>\n\n<p>$$\\frac{\\sqrt[3]{x+2}}{\\sqrt{x+3}}=\\frac{\\sqrt[3]x}{\\sqrt x}\\frac{\\sqrt[3]{1+\\frac2x}}{\\sqrt{1+\\frac3x}}.$$</p>\n" }, { "answer_id": 2096923, "score": -1, "text": "<p>Hint: $x +2 &lt; x + 3$.</p>\n\n<p>(Yes, really, you can solve it using this.)</p>\n" }, { "answer_id": 2096030, "score": 5, "text": "<p>For
$x \\ge 2$ we have</p>\n\n<p>$0 \\le \\frac{^3\\sqrt{x+2}}{\\sqrt{x + 3}} \\le \\frac{\\sqrt[3]{2x}}{\\sqrt{x}}$.</p>\n\n<p>Your turn!</p>\n" }, { "answer_id": 2096039, "score": 4, "text": "<p>A slightly longer way: use Generalized Binomial coefficients: \n$$\nx^{-\\frac{1}{6}}\\frac{(1+\\frac{2}{x})^{\\frac{1}{3}}}{(1+\\frac{3}{x})^{\\frac{1}{2}}} \\sim x^{-\\frac{1}{6}}\\frac{1+\\frac{2}{x} + O(x^{-2})}{1+\\frac{3}{x} + O(x^{-2}) } \\to_x 0\n$$</p>\n" }, { "answer_id": 2096033, "score": 2, "text": "<p><strong>Hint</strong>:</p>\n\n<p>$$\\frac{\\sqrt[3]{x+2}}{\\sqrt{x+3}}=\\frac{\\sqrt[3]x}{\\sqrt x}\\frac{\\sqrt[3]{1+\\frac2x}}{\\sqrt{1+\\frac3x}}.$$</p>\n" }, { "answer_id": 2096923, "score": -1, "text": "<p>Hint: $x +2 &lt; x + 3$.</p>\n\n<p>(Yes, really, you can solve it using this.)</p>\n" }, { "answer_id": 2096029, "score": 14, "text": "<p>$$\\lim_{x \\to \\infty} \\frac{\\sqrt[3]{x+2}}{\\sqrt{x+3}}=\\lim_{x \\to \\infty} \\sqrt[6]{\\dfrac{(x+2)^2}{(x+3)^3}}=\\sqrt[6]{0}=0$$</p>\n" }, { "answer_id": 2096037, "score": 13, "text": "<p>Simply use <em>equivalents</em>: $\\;\\sqrt[3]{x+2}\\sim_\\infty \\sqrt[3]{x}$, $\\;\\sqrt{x+3}\\sim_\\infty \\sqrt{x}$, hence\n$$\\frac{\\sqrt[3]{x+2}}{\\sqrt{x+3}}\\sim_\\infty \\frac{x^{1/3}}{x^{1/2}}=x^{-1/6}\\xrightarrow[x\\to\\infty]{}0.$$</p>\n" }, { "answer_id": 2096031, "score": 8, "text": "<p>If you factorize you get \n$$\\frac{x^{1/3}(1+2/x)^{1/3}}{x^{1/2}(1+3/x)^{1/2}} = \\frac{(1+2/x)^{1/3}}{x^{1/6}(1+3/x)^{1/2}} $$\nI'll let you do the limit yourself.</p>\n" }, { "answer_id": 2096030, "score": 5, "text": "<p>For $x \\ge 2$ we have</p>\n\n<p>$0 \\le \\frac{^3\\sqrt{x+2}}{\\sqrt{x + 3}} \\le \\frac{\\sqrt[3]{2x}}{\\sqrt{x}}$.</p>\n\n<p>Your turn!</p>\n" }, { "answer_id": 2096039, "score": 4, "text": "<p>A slightly longer way: use Generalized Binomial coefficients: \n$$\nx^{-\\frac{1}{6}}\\frac{(1+\\frac{2}{x})^{\\frac{1}{3}}}{(1+\\frac{3}{x})^{\\frac{1}{2}}} \\sim x^{-\\frac{1}{6}}\\frac{1+\\frac{2}{x} + 
O(x^{-2})}{1+\\frac{3}{x} + O(x^{-2}) } \\to_x 0\n$$</p>\n" }, { "answer_id": 2096033, "score": 2, "text": "<p><strong>Hint</strong>:</p>\n\n<p>$$\\frac{\\sqrt[3]{x+2}}{\\sqrt{x+3}}=\\frac{\\sqrt[3]x}{\\sqrt x}\\frac{\\sqrt[3]{1+\\frac2x}}{\\sqrt{1+\\frac3x}}.$$</p>\n" }, { "answer_id": 2096923, "score": -1, "text": "<p>Hint: $x +2 &lt; x + 3$.</p>\n\n<p>(Yes, really, you can solve it using this.)</p>\n" }, { "answer_id": 2096029, "score": 14, "text": "<p>$$\\lim_{x \\to \\infty} \\frac{\\sqrt[3]{x+2}}{\\sqrt{x+3}}=\\lim_{x \\to \\infty} \\sqrt[6]{\\dfrac{(x+2)^2}{(x+3)^3}}=\\sqrt[6]{0}=0$$</p>\n" }, { "answer_id": 2096037, "score": 13, "text": "<p>Simply use <em>equivalents</em>: $\\;\\sqrt[3]{x+2}\\sim_\\infty \\sqrt[3]{x}$, $\\;\\sqrt{x+3}\\sim_\\infty \\sqrt{x}$, hence\n$$\\frac{\\sqrt[3]{x+2}}{\\sqrt{x+3}}\\sim_\\infty \\frac{x^{1/3}}{x^{1/2}}=x^{-1/6}\\xrightarrow[x\\to\\infty]{}0.$$</p>\n" }, { "answer_id": 2096031, "score": 8, "text": "<p>If you factorize you get \n$$\\frac{x^{1/3}(1+2/x)^{1/3}}{x^{1/2}(1+3/x)^{1/2}} = \\frac{(1+2/x)^{1/3}}{x^{1/6}(1+3/x)^{1/2}} $$\nI'll let you do the limit yourself.</p>\n" }, { "answer_id": 2096030, "score": 5, "text": "<p>For $x \\ge 2$ we have</p>\n\n<p>$0 \\le \\frac{^3\\sqrt{x+2}}{\\sqrt{x + 3}} \\le \\frac{\\sqrt[3]{2x}}{\\sqrt{x}}$.</p>\n\n<p>Your turn!</p>\n" }, { "answer_id": 2096039, "score": 4, "text": "<p>A slightly longer way: use Generalized Binomial coefficients: \n$$\nx^{-\\frac{1}{6}}\\frac{(1+\\frac{2}{x})^{\\frac{1}{3}}}{(1+\\frac{3}{x})^{\\frac{1}{2}}} \\sim x^{-\\frac{1}{6}}\\frac{1+\\frac{2}{x} + O(x^{-2})}{1+\\frac{3}{x} + O(x^{-2}) } \\to_x 0\n$$</p>\n" }, { "answer_id": 2096033, "score": 2, "text": "<p><strong>Hint</strong>:</p>\n\n<p>$$\\frac{\\sqrt[3]{x+2}}{\\sqrt{x+3}}=\\frac{\\sqrt[3]x}{\\sqrt x}\\frac{\\sqrt[3]{1+\\frac2x}}{\\sqrt{1+\\frac3x}}.$$</p>\n" }, { "answer_id": 2096923, "score": -1, "text": "<p>Hint: $x +2 &lt; x + 3$.</p>\n\n<p>(Yes, really, you can solve it using this.)</p>\n" }, { 
"answer_id": 2096029, "score": 14, "text": "<p>$$\\lim_{x \\to \\infty} \\frac{\\sqrt[3]{x+2}}{\\sqrt{x+3}}=\\lim_{x \\to \\infty} \\sqrt[6]{\\dfrac{(x+2)^2}{(x+3)^3}}=\\sqrt[6]{0}=0$$</p>\n" }, { "answer_id": 2096037, "score": 13, "text": "<p>Simply use <em>equivalents</em>: $\\;\\sqrt[3]{x+2}\\sim_\\infty \\sqrt[3]{x}$, $\\;\\sqrt{x+3}\\sim_\\infty \\sqrt{x}$, hence\n$$\\frac{\\sqrt[3]{x+2}}{\\sqrt{x+3}}\\sim_\\infty \\frac{x^{1/3}}{x^{1/2}}=x^{-1/6}\\xrightarrow[x\\to\\infty]{}0.$$</p>\n" }, { "answer_id": 2096031, "score": 8, "text": "<p>If you factorize you get \n$$\\frac{x^{1/3}(1+2/x)^{1/3}}{x^{1/2}(1+3/x)^{1/2}} = \\frac{(1+2/x)^{1/3}}{x^{1/6}(1+3/x)^{1/2}} $$\nI'll let you do the limit yourself.</p>\n" }, { "answer_id": 2096030, "score": 5, "text": "<p>For $x \\ge 2$ we have</p>\n\n<p>$0 \\le \\frac{^3\\sqrt{x+2}}{\\sqrt{x + 3}} \\le \\frac{\\sqrt[3]{2x}}{\\sqrt{x}}$.</p>\n\n<p>Your turn!</p>\n" }, { "answer_id": 2096039, "score": 4, "text": "<p>A slightly longer way: use Generalized Binomial coefficients: \n$$\nx^{-\\frac{1}{6}}\\frac{(1+\\frac{2}{x})^{\\frac{1}{3}}}{(1+\\frac{3}{x})^{\\frac{1}{2}}} \\sim x^{-\\frac{1}{6}}\\frac{1+\\frac{2}{x} + O(x^{-2})}{1+\\frac{3}{x} + O(x^{-2}) } \\to_x 0\n$$</p>\n" }, { "answer_id": 2096033, "score": 2, "text": "<p><strong>Hint</strong>:</p>\n\n<p>$$\\frac{\\sqrt[3]{x+2}}{\\sqrt{x+3}}=\\frac{\\sqrt[3]x}{\\sqrt x}\\frac{\\sqrt[3]{1+\\frac2x}}{\\sqrt{1+\\frac3x}}.$$</p>\n" }, { "answer_id": 2096923, "score": -1, "text": "<p>Hint: $x +2 &lt; x + 3$.</p>\n\n<p>(Yes, really, you can solve it using this.)</p>\n" } ]
Question: Limit at infinity of cubic roots and square roots without using conjugate $\lim_{x \to \infty} \frac{\sqrt[3]{x+2}}{\sqrt{x+3}}$ Answer 1: <p>$$\lim_{x \to \infty} \frac{\sqrt[3]{x+2}}{\sqrt{x+3}}=\lim_{x \to \infty} \sqrt[6]{\dfrac{(x+2)^2}{(x+3)^3}}=\sqrt[6]{0}=0$$</p> Answer 2: <p>Simply use <em>equivalents</em>: $\;\sqrt[3]{x+2}\sim_\infty \sqrt[3]{x}$, $\;\sqrt{x+3}\sim_\infty \sqrt{x}$, hence $$\frac{\sqrt[3]{x+2}}{\sqrt{x+3}}\sim_\infty \frac{x^{1/3}}{x^{1/2}}=x^{-1/6}\xrightarrow[x\to\infty]{}0.$$</p> Answer 3: <p>If you factorize you get $$\frac{x^{1/3}(1+2/x)^{1/3}}{x^{1/2}(1+3/x)^{1/2}} = \frac{(1+2/x)^{1/3}}{x^{1/6}(1+3/x)^{1/2}} $$ I'll let you do the limit yourself.</p> Answer 4: <p>For $x \ge 2$ we have</p> <p>$0 \le \frac{^3\sqrt{x+2}}{\sqrt{x + 3}} \le \frac{\sqrt[3]{2x}}{\sqrt{x}}$.</p> <p>Your turn!</p> Answer 5: <p>A slightly longer way: use Generalized Binomial coefficients: $$ x^{-\frac{1}{6}}\frac{(1+\frac{2}{x})^{\frac{1}{3}}}{(1+\frac{3}{x})^{\frac{1}{2}}} \sim x^{-\frac{1}{6}}\frac{1+\frac{2}{x} + O(x^{-2})}{1+\frac{3}{x} + O(x^{-2}) } \to_x 0 $$</p> Answer 6: <p><strong>Hint</strong>:</p> <p>$$\frac{\sqrt[3]{x+2}}{\sqrt{x+3}}=\frac{\sqrt[3]x}{\sqrt x}\frac{\sqrt[3]{1+\frac2x}}{\sqrt{1+\frac3x}}.$$</p> Answer 7: <p>Hint: $x +2 &lt; x + 3$.</p> <p>(Yes, really, you can solve it using this.)</p>
Prove that the elements of the sequence are integers
2,096,310
8
https://math.stackexchange.com/questions/2096310/prove-that-the-elements-of-the-sequence-are-integers
<p>A sequence is given by the following recursion. Prove that the elements of this sequence are integers!</p> <p>$a_0=1$</p> <p>$a_1=41$</p> <p>$a_{n+2}=3a_n+\sqrt{8(a_n^2+a_{n+1}^2)}$</p>
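Before proving the claim, one can sanity-check it numerically. The sketch below (Python; the helper name `next_term` is mine) iterates the recursion with `math.isqrt` so the arithmetic stays in exact integers, and it raises if the radicand ever fails to be a perfect square:

```python
import math

def next_term(a_n, a_n1):
    # a_{n+2} = 3*a_n + sqrt(8*(a_n^2 + a_{n+1}^2)); isqrt keeps the arithmetic exact
    radicand = 8 * (a_n * a_n + a_n1 * a_n1)
    root = math.isqrt(radicand)
    assert root * root == radicand, "radicand is not a perfect square"
    return 3 * a_n + root

terms = [1, 41]  # a_0 and a_1 from the question
for _ in range(10):
    terms.append(next_term(terms[-2], terms[-1]))
```

The first computed terms are $a_2 = 119$ and $a_3 = 479$, and the values also satisfy the linear recurrence $a_n = 3a_{n-1} + 3a_{n-2} - a_{n-3}$ that one of the answers derives.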
[ { "answer_id": 2099083, "score": 8, "text": "<p><strong>Sequence is Integral</strong></p>\n\n<p>Suppose\n$$\na_{n+2}=3a_n+b_n\\tag{1}\n$$\nwhere\n$$\nb_n^2=8\\!\\left(a_n^2+a_{n+1}^2\\right)\\tag{2}\n$$\nThen\n$$\n\\begin{align}\nb_{n+1}^2\n&amp;=8\\!\\left(\\color{#C00}{a_{n+2}^2}+\\color{#090}{a_{n+1}^2}\\right)\\\\\n&amp;=8\\!\\left(\\color{#C00}{9a_n^2+6a_nb_n+b_n^2}+\\color{#090}{\\frac{b_n^2}8-a_n^2}\\right)\\\\\n&amp;=64a_n^2+48a_nb_n+9b_n^2\\\\[6pt]\n&amp;=(8a_n+3b_n)^2\\tag{3}\n\\end{align}\n$$\nTherefore,\n$$\nb_{n+1}=8a_n+3b_n\\tag{4}\n$$\nWe can continue the integer sequences using $(1)$ and $(4)$:\n$$\n\\begin{bmatrix}a_{n+1}\\\\a_{n+2}\\\\b_{n+1}\\end{bmatrix}\n=\n\\begin{bmatrix}0&amp;1&amp;0\\\\3&amp;0&amp;1\\\\8&amp;0&amp;3\\end{bmatrix}\n\\begin{bmatrix}a_n\\\\a_{n+1}\\\\b_n\\end{bmatrix}\\tag{5}\n$$\nThat is,\n$$\n\\begin{bmatrix}a_n\\\\a_{n+1}\\\\b_n\\end{bmatrix}\n=\n\\begin{bmatrix}0&amp;1&amp;0\\\\3&amp;0&amp;1\\\\8&amp;0&amp;3\\end{bmatrix}^{\\,\\large n}\n\\begin{bmatrix}1\\\\41\\\\116\\end{bmatrix}\\tag{6}\n$$</p>\n\n<hr>\n\n<p><strong>Closed Form for the Sequence</strong></p>\n\n<p>Using the <a href=\"https://en.wikipedia.org/wiki/Jordan_normal_form\" rel=\"nofollow noreferrer\">Jordan Normal Form</a> of the matrix in $(6)$, we get\n$$\n\\begin{bmatrix}a_n\\\\a_{n+1}\\\\b_n\\end{bmatrix}\n=\n\\begin{bmatrix}\n-1&amp;\\frac{-1-\\sqrt3}8&amp;\\frac{-1+\\sqrt3}8\\\\\n1&amp;\\frac{1-\\sqrt3}8&amp;\\frac{1+\\sqrt3}8\\\\\n2&amp;1&amp;1\n\\end{bmatrix}\n\\begin{bmatrix}\n-1&amp;0&amp;0\\vphantom{\\frac{\\sqrt3}8}\\\\\n0&amp;2-\\sqrt3&amp;0\\vphantom{\\frac{\\sqrt3}8}\\\\\n0&amp;0&amp;2+\\sqrt3\n\\end{bmatrix}^{\\,\\large n}\n\\begin{bmatrix}\n\\frac{22}3\\\\\n\\frac{152-84\\sqrt3}3\\\\\n\\frac{152+84\\sqrt3}3\n\\end{bmatrix}\\tag{7}\n$$\nfrom which, we can extract\n$$\na_n=\\frac{25+17\\sqrt3}6\\left(2+\\sqrt3\\right)^n-\\frac{22}3(-1)^n+\\frac{25-17\\sqrt3}6\\left(2-\\sqrt3\\right)^n\\tag{8}\n$$\nIt is not immediately evident that $(8)$ is 
always an integer; however, since we have $\\left(x-2-\\sqrt3\\right)\\left(x-2+\\sqrt3\\right)(x+1)=x^3-3x^2-3x+1$, equation $(8)$ implies that\n$$\na_n=3a_{n-1}+3a_{n-2}-a_{n-3}\\tag{9}\n$$\nwhich does imply that $a_n$ is always an integer since $a_0=1$, $a_1=41$, and $a_2=119$.</p>\n" }, { "answer_id": 2096330, "score": 6, "text": "<p><strong>Hint:</strong></p>\n\n<p>Clearly, induction is likely the way to go here. But, what should your hypothesis be?</p>\n\n<p>It isn't enough to assume that the previous two terms are integers. So, let's look at what it means for $a_{n+2}$ to be an integer.</p>\n\n<p>Clearly, if $a_n$ is an integer, then $3a_n$ is as well. So, you need to consider the square root. In order for that to be an integer, it must be the case that $8(a_n^2+a_{n+1}^2)$ is a perfect square. Since $8=2^3$, the combination of this condition with the assumption that $a_n$ and $a_{n+1}$ must be integers is equivalent to saying that $a_n^2+a_{n+1}^2=2m^2$ for some $m$. (Can you see why?)</p>\n\n<p>So, I'd suggest trying to prove by induction the slightly stronger statement that for all $n$, $a_n$ is an integer AND that for each $n$, you can find an integer $m$ so that $a_n^2+a_{n+1}^2=2m^2$.</p>\n" }, { "answer_id": 2099083, "score": 8, "text": "<p><strong>Sequence is Integral</strong></p>\n\n<p>Suppose\n$$\na_{n+2}=3a_n+b_n\\tag{1}\n$$\nwhere\n$$\nb_n^2=8\\!\\left(a_n^2+a_{n+1}^2\\right)\\tag{2}\n$$\nThen\n$$\n\\begin{align}\nb_{n+1}^2\n&amp;=8\\!\\left(\\color{#C00}{a_{n+2}^2}+\\color{#090}{a_{n+1}^2}\\right)\\\\\n&amp;=8\\!\\left(\\color{#C00}{9a_n^2+6a_nb_n+b_n^2}+\\color{#090}{\\frac{b_n^2}8-a_n^2}\\right)\\\\\n&amp;=64a_n^2+48a_nb_n+9b_n^2\\\\[6pt]\n&amp;=(8a_n+3b_n)^2\\tag{3}\n\\end{align}\n$$\nTherefore,\n$$\nb_{n+1}=8a_n+3b_n\\tag{4}\n$$\nWe can continue the integer sequences using $(1)$ and $(4)$:\n$$\n\\begin{bmatrix}a_{n+1}\\\\a_{n+2}\\\\b_{n+1}\\end{bmatrix}\n=\n\\begin{bmatrix}0&amp;1&amp;0\\\\3&amp;0&amp;1\\\\8&amp;0&amp;3\\end{bmatrix}\n\\begin{bmatrix}a_n\\\\a_{n+1}\\\\b_n\\end{bmatrix}\\tag{5}\n$$\nThat is,\n$$\n\\begin{bmatrix}a_n\\\\a_{n+1}\\\\b_n\\end{bmatrix}\n=\n\\begin{bmatrix}0&amp;1&amp;0\\\\3&amp;0&amp;1\\\\8&amp;0&amp;3\\end{bmatrix}^{\\,\\large n}\n\\begin{bmatrix}1\\\\41\\\\116\\end{bmatrix}\\tag{6}\n$$</p>\n\n<hr>\n\n<p><strong>Closed Form for the Sequence</strong></p>\n\n<p>Using the <a href=\"https://en.wikipedia.org/wiki/Jordan_normal_form\" rel=\"nofollow noreferrer\">Jordan Normal Form</a> of the matrix in $(6)$, we 
get\n$$\n\\begin{bmatrix}a_n\\\\a_{n+1}\\\\b_n\\end{bmatrix}\n=\n\\begin{bmatrix}\n-1&amp;\\frac{-1-\\sqrt3}8&amp;\\frac{-1+\\sqrt3}8\\\\\n1&amp;\\frac{1-\\sqrt3}8&amp;\\frac{1+\\sqrt3}8\\\\\n2&amp;1&amp;1\n\\end{bmatrix}\n\\begin{bmatrix}\n-1&amp;0&amp;0\\vphantom{\\frac{\\sqrt3}8}\\\\\n0&amp;2-\\sqrt3&amp;0\\vphantom{\\frac{\\sqrt3}8}\\\\\n0&amp;0&amp;2+\\sqrt3\n\\end{bmatrix}^{\\,\\large n}\n\\begin{bmatrix}\n\\frac{22}3\\\\\n\\frac{152-84\\sqrt3}3\\\\\n\\frac{152+84\\sqrt3}3\n\\end{bmatrix}\\tag{7}\n$$\nfrom which, we can extract\n$$\na_n=\\frac{25+17\\sqrt3}6\\left(2+\\sqrt3\\right)^n-\\frac{22}3(-1)^n+\\frac{25-17\\sqrt3}6\\left(2-\\sqrt3\\right)^n\\tag{8}\n$$\nIt is not immediately evident that $(8)$ is always an integer; however, since we have $\\left(x-2-\\sqrt3\\right)\\left(x-2+\\sqrt3\\right)(x+1)=x^3-3x^2-3x+1$, equation $(8)$ implies that\n$$\na_n=3a_{n-1}+3a_{n-2}-a_{n-3}\\tag{9}\n$$\nwhich does imply that $a_n$ is always an integer since $a_0=1$, $a_1=41$, and $a_2=119$.</p>\n" }, { "answer_id": 2096330, "score": 6, "text": "<p><strong>Hint:</strong></p>\n\n<p>Clearly, induction is likely the way to go here. But, what should your hypothesis be?</p>\n\n<p>It isn't enough to assume that the previous two terms are integers. So, let's look at what it means for $a_{n+2}$ to be an integer.</p>\n\n<p>Clearly, if $a_n$ is an integer, then $3a_n$ is as well. So, you need to consider the square root. In order for that to be an integer, it must be the case that $8(a_n^2+a_{n+1}^2)$ is a perfect square. Since $8=2^3$, the combination of this condition with the assumption that $a_n$ and $a_{n+1}$ must be integers is equivalent to saying that $a_n^2+a_{n+1}^2=2m^2$ for some $m$. 
(Can you see why?)</p>\n\n<p>So, I'd suggest trying to prove by induction the slightly stronger statement that for all $n$, $a_n$ is an integer AND that for each $n$, you can find an integer $m$ so that $a_n^2+a_{n+1}^2=2m^2$.</p>\n" }, { "answer_id": 2099083, "score": 8, "text": "<p><strong>Sequence is Integral</strong></p>\n\n<p>Suppose\n$$\na_{n+2}=3a_n+b_n\\tag{1}\n$$\nwhere\n$$\nb_n^2=8\\!\\left(a_n^2+a_{n+1}^2\\right)\\tag{2}\n$$\nThen\n$$\n\\begin{align}\nb_{n+1}^2\n&amp;=8\\!\\left(\\color{#C00}{a_{n+2}^2}+\\color{#090}{a_{n+1}^2}\\right)\\\\\n&amp;=8\\!\\left(\\color{#C00}{9a_n^2+6a_nb_n+b_n^2}+\\color{#090}{\\frac{b_n^2}8-a_n^2}\\right)\\\\\n&amp;=64a_n^2+48a_nb_n+9b_n^2\\\\[6pt]\n&amp;=(8a_n+3b_n)^2\\tag{3}\n\\end{align}\n$$\nTherefore,\n$$\nb_{n+1}=8a_n+3b_n\\tag{4}\n$$\nWe can continue the integer sequences using $(1)$ and $(4)$:\n$$\n\\begin{bmatrix}a_{n+1}\\\\a_{n+2}\\\\b_{n+1}\\end{bmatrix}\n=\n\\begin{bmatrix}0&amp;1&amp;0\\\\3&amp;0&amp;1\\\\8&amp;0&amp;3\\end{bmatrix}\n\\begin{bmatrix}a_n\\\\a_{n+1}\\\\b_n\\end{bmatrix}\\tag{5}\n$$\nThat is,\n$$\n\\begin{bmatrix}a_n\\\\a_{n+1}\\\\b_n\\end{bmatrix}\n=\n\\begin{bmatrix}0&amp;1&amp;0\\\\3&amp;0&amp;1\\\\8&amp;0&amp;3\\end{bmatrix}^{\\,\\large n}\n\\begin{bmatrix}1\\\\41\\\\116\\end{bmatrix}\\tag{6}\n$$</p>\n\n<hr>\n\n<p><strong>Closed Form for the Sequence</strong></p>\n\n<p>Using the <a href=\"https://en.wikipedia.org/wiki/Jordan_normal_form\" rel=\"nofollow noreferrer\">Jordan Normal Form</a> of the matrix in $(6)$, we get\n$$\n\\begin{bmatrix}a_n\\\\a_{n+1}\\\\b_n\\end{bmatrix}\n=\n\\begin{bmatrix}\n-1&amp;\\frac{-1-\\sqrt3}8&amp;\\frac{-1+\\sqrt3}8\\\\\n1&amp;\\frac{1-\\sqrt3}8&amp;\\frac{1+\\sqrt3}8\\\\\n2&amp;1&amp;1\n\\end{bmatrix}\n\\begin{bmatrix}\n-1&amp;0&amp;0\\vphantom{\\frac{\\sqrt3}8}\\\\\n0&amp;2-\\sqrt3&amp;0\\vphantom{\\frac{\\sqrt3}8}\\\\\n0&amp;0&amp;2+\\sqrt3\n\\end{bmatrix}^{\\,\\large 
n}\n\\begin{bmatrix}\n\\frac{22}3\\\\\n\\frac{152-84\\sqrt3}3\\\\\n\\frac{152+84\\sqrt3}3\n\\end{bmatrix}\\tag{7}\n$$\nfrom which, we can extract\n$$\na_n=\\frac{25+17\\sqrt3}6\\left(2+\\sqrt3\\right)^n-\\frac{22}3(-1)^n+\\frac{25-17\\sqrt3}6\\left(2-\\sqrt3\\right)^n\\tag{8}\n$$\nIt is not immediately evident that $(8)$ is always an integer; however, since we have $\\left(x-2-\\sqrt3\\right)\\left(x-2+\\sqrt3\\right)(x+1)=x^3-3x^2-3x+1$, equation $(8)$ implies that\n$$\na_n=3a_{n-1}+3a_{n-2}-a_{n-3}\\tag{9}\n$$\nwhich does imply that $a_n$ is always an integer since $a_0=1$, $a_1=41$, and $a_2=119$.</p>\n" }, { "answer_id": 2096330, "score": 6, "text": "<p><strong>Hint:</strong></p>\n\n<p>Clearly, induction is likely the way to go here. But, what should your hypothesis be?</p>\n\n<p>It isn't enough to assume that the previous two terms are integers. So, let's look at what it means for $a_{n+2}$ to be an integer.</p>\n\n<p>Clearly, if $a_n$ is an integer, then $3a_n$ is as well. So, you need to consider the square root. In order for that to be an integer, it must be the case that $8(a_n^2+a_{n+1}^2)$ is a perfect square. Since $8=2^3$, the combination of this condition with the assumption that $a_n$ and $a_{n+1}$ must be integers is equivalent to saying that $a_n^2+a_{n+1}^2=2m^2$ for some $m$. 
(Can you see why?)</p>\n\n<p>So, I'd suggest trying to prove by induction the slightly stronger statement that for all $n$, $a_n$ is an integer AND that for each $n$, you can find an integer $m$ so that $a_n^2+a_{n+1}^2=2m^2$.</p>\n" }, { "answer_id": 2099083, "score": 8, "text": "<p><strong>Sequence is Integral</strong></p>\n\n<p>Suppose\n$$\na_{n+2}=3a_n+b_n\\tag{1}\n$$\nwhere\n$$\nb_n^2=8\\!\\left(a_n^2+a_{n+1}^2\\right)\\tag{2}\n$$\nThen\n$$\n\\begin{align}\nb_{n+1}^2\n&amp;=8\\!\\left(\\color{#C00}{a_{n+2}^2}+\\color{#090}{a_{n+1}^2}\\right)\\\\\n&amp;=8\\!\\left(\\color{#C00}{9a_n^2+6a_nb_n+b_n^2}+\\color{#090}{\\frac{b_n^2}8-a_n^2}\\right)\\\\\n&amp;=64a_n^2+48a_nb_n+9b_n^2\\\\[6pt]\n&amp;=(8a_n+3b_n)^2\\tag{3}\n\\end{align}\n$$\nTherefore,\n$$\nb_{n+1}=8a_n+3b_n\\tag{4}\n$$\nWe can continue the integer sequences using $(1)$ and $(4)$:\n$$\n\\begin{bmatrix}a_{n+1}\\\\a_{n+2}\\\\b_{n+1}\\end{bmatrix}\n=\n\\begin{bmatrix}0&amp;1&amp;0\\\\3&amp;0&amp;1\\\\8&amp;0&amp;3\\end{bmatrix}\n\\begin{bmatrix}a_n\\\\a_{n+1}\\\\b_n\\end{bmatrix}\\tag{5}\n$$\nThat is,\n$$\n\\begin{bmatrix}a_n\\\\a_{n+1}\\\\b_n\\end{bmatrix}\n=\n\\begin{bmatrix}0&amp;1&amp;0\\\\3&amp;0&amp;1\\\\8&amp;0&amp;3\\end{bmatrix}^{\\,\\large n}\n\\begin{bmatrix}1\\\\41\\\\116\\end{bmatrix}\\tag{6}\n$$</p>\n\n<hr>\n\n<p><strong>Closed Form for the Sequence</strong></p>\n\n<p>Using the <a href=\"https://en.wikipedia.org/wiki/Jordan_normal_form\" rel=\"nofollow noreferrer\">Jordan Normal Form</a> of the matrix in $(6)$, we get\n$$\n\\begin{bmatrix}a_n\\\\a_{n+1}\\\\b_n\\end{bmatrix}\n=\n\\begin{bmatrix}\n-1&amp;\\frac{-1-\\sqrt3}8&amp;\\frac{-1+\\sqrt3}8\\\\\n1&amp;\\frac{1-\\sqrt3}8&amp;\\frac{1+\\sqrt3}8\\\\\n2&amp;1&amp;1\n\\end{bmatrix}\n\\begin{bmatrix}\n-1&amp;0&amp;0\\vphantom{\\frac{\\sqrt3}8}\\\\\n0&amp;2-\\sqrt3&amp;0\\vphantom{\\frac{\\sqrt3}8}\\\\\n0&amp;0&amp;2+\\sqrt3\n\\end{bmatrix}^{\\,\\large 
n}\n\\begin{bmatrix}\n\\frac{22}3\\\\\n\\frac{152-84\\sqrt3}3\\\\\n\\frac{152+84\\sqrt3}3\n\\end{bmatrix}\\tag{7}\n$$\nfrom which, we can extract\n$$\na_n=\\frac{25+17\\sqrt3}6\\left(2+\\sqrt3\\right)^n-\\frac{22}3(-1)^n+\\frac{25-17\\sqrt3}6\\left(2-\\sqrt3\\right)^n\\tag{8}\n$$\nIt is not immediately evident that $(8)$ is always an integer; however, since we have $\\left(x-2-\\sqrt3\\right)\\left(x-2+\\sqrt3\\right)(x+1)=x^3-3x^2-3x+1$, equation $(8)$ implies that\n$$\na_n=3a_{n-1}+3a_{n-2}-a_{n-3}\\tag{9}\n$$\nwhich does imply that $a_n$ is always an integer since $a_0=1$, $a_1=41$, and $a_2=119$.</p>\n" }, { "answer_id": 2096330, "score": 6, "text": "<p><strong>Hint:</strong></p>\n\n<p>Clearly, induction is likely the way to go here. But, what should your hypothesis be?</p>\n\n<p>It isn't enough to assume that the previous two terms are integers. So, let's look at what it means for $a_{n+2}$ to be an integer.</p>\n\n<p>Clearly, if $a_n$ is an integer, then $3a_n$ is as well. So, you need to consider the square root. In order for that to be an integer, it must be the case that $8(a_n^2+a_{n+1}^2)$ is a perfect square. Since $8=2^3$, the combination of this condition with the assumption that $a_n$ and $a_{n+1}$ must be integers is equivalent to saying that $a_n^2+a_{n+1}^2=2m^2$ for some $m$. 
(Can you see why?)</p>\n\n<p>So, I'd suggest trying to prove by induction the slightly stronger statement that for all $n$, $a_n$ is an integer AND that for each $n$, you can find an integer $m$ so that $a_n^2+a_{n+1}^2=2m^2$.</p>\n" }, { "answer_id": 2099083, "score": 8, "text": "<p><strong>Sequence is Integral</strong></p>\n\n<p>Suppose\n$$\na_{n+2}=3a_n+b_n\\tag{1}\n$$\nwhere\n$$\nb_n^2=8\\!\\left(a_n^2+a_{n+1}^2\\right)\\tag{2}\n$$\nThen\n$$\n\\begin{align}\nb_{n+1}^2\n&amp;=8\\!\\left(\\color{#C00}{a_{n+2}^2}+\\color{#090}{a_{n+1}^2}\\right)\\\\\n&amp;=8\\!\\left(\\color{#C00}{9a_n^2+6a_nb_n+b_n^2}+\\color{#090}{\\frac{b_n^2}8-a_n^2}\\right)\\\\\n&amp;=64a_n^2+48a_nb_n+9b_n^2\\\\[6pt]\n&amp;=(8a_n+3b_n)^2\\tag{3}\n\\end{align}\n$$\nTherefore,\n$$\nb_{n+1}=8a_n+3b_n\\tag{4}\n$$\nWe can continue the integer sequences using $(1)$ and $(4)$:\n$$\n\\begin{bmatrix}a_{n+1}\\\\a_{n+2}\\\\b_{n+1}\\end{bmatrix}\n=\n\\begin{bmatrix}0&amp;1&amp;0\\\\3&amp;0&amp;1\\\\8&amp;0&amp;3\\end{bmatrix}\n\\begin{bmatrix}a_n\\\\a_{n+1}\\\\b_n\\end{bmatrix}\\tag{5}\n$$\nThat is,\n$$\n\\begin{bmatrix}a_n\\\\a_{n+1}\\\\b_n\\end{bmatrix}\n=\n\\begin{bmatrix}0&amp;1&amp;0\\\\3&amp;0&amp;1\\\\8&amp;0&amp;3\\end{bmatrix}^{\\,\\large n}\n\\begin{bmatrix}1\\\\41\\\\116\\end{bmatrix}\\tag{6}\n$$</p>\n\n<hr>\n\n<p><strong>Closed Form for the Sequence</strong></p>\n\n<p>Using the <a href=\"https://en.wikipedia.org/wiki/Jordan_normal_form\" rel=\"nofollow noreferrer\">Jordan Normal Form</a> of the matrix in $(6)$, we get\n$$\n\\begin{bmatrix}a_n\\\\a_{n+1}\\\\b_n\\end{bmatrix}\n=\n\\begin{bmatrix}\n-1&amp;\\frac{-1-\\sqrt3}8&amp;\\frac{-1+\\sqrt3}8\\\\\n1&amp;\\frac{1-\\sqrt3}8&amp;\\frac{1+\\sqrt3}8\\\\\n2&amp;1&amp;1\n\\end{bmatrix}\n\\begin{bmatrix}\n-1&amp;0&amp;0\\vphantom{\\frac{\\sqrt3}8}\\\\\n0&amp;2-\\sqrt3&amp;0\\vphantom{\\frac{\\sqrt3}8}\\\\\n0&amp;0&amp;2+\\sqrt3\n\\end{bmatrix}^{\\,\\large 
n}\n\\begin{bmatrix}\n\\frac{22}3\\\\\n\\frac{152-84\\sqrt3}3\\\\\n\\frac{152+84\\sqrt3}3\n\\end{bmatrix}\\tag{7}\n$$\nfrom which, we can extract\n$$\na_n=\\frac{25+17\\sqrt3}6\\left(2+\\sqrt3\\right)^n-\\frac{22}3(-1)^n+\\frac{25-17\\sqrt3}6\\left(2-\\sqrt3\\right)^n\\tag{8}\n$$\nIt is not immediately evident that $(8)$ is always an integer; however, since we have $\\left(x-2-\\sqrt3\\right)\\left(x-2+\\sqrt3\\right)(x+1)=x^3-3x^2-3x+1$, equation $(8)$ implies that\n$$\na_n=3a_{n-1}+3a_{n-2}-a_{n-3}\\tag{9}\n$$\nwhich does imply that $a_n$ is always an integer since $a_0=1$, $a_1=41$, and $a_2=119$.</p>\n" }, { "answer_id": 2096330, "score": 6, "text": "<p><strong>Hint:</strong></p>\n\n<p>Clearly, induction is likely the way to go here. But, what should your hypothesis be?</p>\n\n<p>It isn't enough to assume that the previous two terms are integers. So, let's look at what it means for $a_{n+2}$ to be an integer.</p>\n\n<p>Clearly, if $a_n$ is an integer, then $3a_n$ is as well. So, you need to consider the square root. In order for that to be an integer, it must be the case that $8(a_n^2+a_{n+1}^2)$ is a perfect square. Since $8=2^3$, the combination of this condition with the assumption that $a_n$ and $a_{n+1}$ must be integers is equivalent to saying that $a_n^2+a_{n+1}^2=2m^2$ for some $m$. 
(Can you see why?)</p>\n\n<p>So, I'd suggest trying to prove by induction the slightly stronger statement that for all $n$, $a_n$ is an integer AND that for each $n$, you can find an integer $m$ so that $a_n^2+a_{n+1}^2=2m^2$.</p>\n" }, { "answer_id": 2099083, "score": 8, "text": "<p><strong>Sequence is Integral</strong></p>\n\n<p>Suppose\n$$\na_{n+2}=3a_n+b_n\\tag{1}\n$$\nwhere\n$$\nb_n^2=8\\!\\left(a_n^2+a_{n+1}^2\\right)\\tag{2}\n$$\nThen\n$$\n\\begin{align}\nb_{n+1}^2\n&amp;=8\\!\\left(\\color{#C00}{a_{n+2}^2}+\\color{#090}{a_{n+1}^2}\\right)\\\\\n&amp;=8\\!\\left(\\color{#C00}{9a_n^2+6a_nb_n+b_n^2}+\\color{#090}{\\frac{b_n^2}8-a_n^2}\\right)\\\\\n&amp;=64a_n^2+48a_nb_n+9b_n^2\\\\[6pt]\n&amp;=(8a_n+3b_n)^2\\tag{3}\n\\end{align}\n$$\nTherefore,\n$$\nb_{n+1}=8a_n+3b_n\\tag{4}\n$$\nWe can continue the integer sequences using $(1)$ and $(4)$:\n$$\n\\begin{bmatrix}a_{n+1}\\\\a_{n+2}\\\\b_{n+1}\\end{bmatrix}\n=\n\\begin{bmatrix}0&amp;1&amp;0\\\\3&amp;0&amp;1\\\\8&amp;0&amp;3\\end{bmatrix}\n\\begin{bmatrix}a_n\\\\a_{n+1}\\\\b_n\\end{bmatrix}\\tag{5}\n$$\nThat is,\n$$\n\\begin{bmatrix}a_n\\\\a_{n+1}\\\\b_n\\end{bmatrix}\n=\n\\begin{bmatrix}0&amp;1&amp;0\\\\3&amp;0&amp;1\\\\8&amp;0&amp;3\\end{bmatrix}^{\\,\\large n}\n\\begin{bmatrix}1\\\\41\\\\116\\end{bmatrix}\\tag{6}\n$$</p>\n\n<hr>\n\n<p><strong>Closed Form for the Sequence</strong></p>\n\n<p>Using the <a href=\"https://en.wikipedia.org/wiki/Jordan_normal_form\" rel=\"nofollow noreferrer\">Jordan Normal Form</a> of the matrix in $(6)$, we get\n$$\n\\begin{bmatrix}a_n\\\\a_{n+1}\\\\b_n\\end{bmatrix}\n=\n\\begin{bmatrix}\n-1&amp;\\frac{-1-\\sqrt3}8&amp;\\frac{-1+\\sqrt3}8\\\\\n1&amp;\\frac{1-\\sqrt3}8&amp;\\frac{1+\\sqrt3}8\\\\\n2&amp;1&amp;1\n\\end{bmatrix}\n\\begin{bmatrix}\n-1&amp;0&amp;0\\vphantom{\\frac{\\sqrt3}8}\\\\\n0&amp;2-\\sqrt3&amp;0\\vphantom{\\frac{\\sqrt3}8}\\\\\n0&amp;0&amp;2+\\sqrt3\n\\end{bmatrix}^{\\,\\large 
n}\n\\begin{bmatrix}\n\\frac{22}3\\\\\n\\frac{152-84\\sqrt3}3\\\\\n\\frac{152+84\\sqrt3}3\n\\end{bmatrix}\\tag{7}\n$$\nfrom which, we can extract\n$$\na_n=\\frac{25+17\\sqrt3}6\\left(2+\\sqrt3\\right)^n-\\frac{22}3(-1)^n+\\frac{25-17\\sqrt3}6\\left(2-\\sqrt3\\right)^n\\tag{8}\n$$\nIt is not immediately evident that $(8)$ is always an integer; however, since we have $\\left(x-2-\\sqrt3\\right)\\left(x-2+\\sqrt3\\right)(x+1)=x^3-3x^2-3x+1$, equation $(8)$ implies that\n$$\na_n=3a_{n-1}+3a_{n-2}-a_{n-3}\\tag{9}\n$$\nwhich does imply that $a_n$ is always an integer since $a_0=1$, $a_1=41$, and $a_2=119$.</p>\n" }, { "answer_id": 2096330, "score": 6, "text": "<p><strong>Hint:</strong></p>\n\n<p>Clearly, induction is likely the way to go here. But, what should your hypothesis be?</p>\n\n<p>It isn't enough to assume that the previous two terms are integers. So, let's look at what it means for $a_{n+2}$ to be an integer.</p>\n\n<p>Clearly, if $a_n$ is an integer, then $3a_n$ is as well. So, you need to consider the square root. In order for that to be an integer, it must be the case that $8(a_n^2+a_{n+1}^2)$ is a perfect square. Since $8=2^3$, the combination of this condition with the assumption that $a_n$ and $a_{n+1}$ must be integers is equivalent to saying that $a_n^2+a_{n+1}^2=2m^2$ for some $m$. 
(Can you see why?)</p>\n\n<p>So, I'd suggest trying to prove by induction the slightly stronger statement that for all $n$, $a_n$ is an integer AND that for each $n$, you can find an integer $m$ so that $a_n^2+a_{n+1}^2=2m^2$.</p>\n" }, { "answer_id": 2099083, "score": 8, "text": "<p><strong>Sequence is Integral</strong></p>\n\n<p>Suppose\n$$\na_{n+2}=3a_n+b_n\\tag{1}\n$$\nwhere\n$$\nb_n^2=8\\!\\left(a_n^2+a_{n+1}^2\\right)\\tag{2}\n$$\nThen\n$$\n\\begin{align}\nb_{n+1}^2\n&amp;=8\\!\\left(\\color{#C00}{a_{n+2}^2}+\\color{#090}{a_{n+1}^2}\\right)\\\\\n&amp;=8\\!\\left(\\color{#C00}{9a_n^2+6a_nb_n+b_n^2}+\\color{#090}{\\frac{b_n^2}8-a_n^2}\\right)\\\\\n&amp;=64a_n^2+48a_nb_n+9b_n^2\\\\[6pt]\n&amp;=(8a_n+3b_n)^2\\tag{3}\n\\end{align}\n$$\nTherefore,\n$$\nb_{n+1}=8a_n+3b_n\\tag{4}\n$$\nWe can continue the integer sequences using $(1)$ and $(4)$:\n$$\n\\begin{bmatrix}a_{n+1}\\\\a_{n+2}\\\\b_{n+1}\\end{bmatrix}\n=\n\\begin{bmatrix}0&amp;1&amp;0\\\\3&amp;0&amp;1\\\\8&amp;0&amp;3\\end{bmatrix}\n\\begin{bmatrix}a_n\\\\a_{n+1}\\\\b_n\\end{bmatrix}\\tag{5}\n$$\nThat is,\n$$\n\\begin{bmatrix}a_n\\\\a_{n+1}\\\\b_n\\end{bmatrix}\n=\n\\begin{bmatrix}0&amp;1&amp;0\\\\3&amp;0&amp;1\\\\8&amp;0&amp;3\\end{bmatrix}^{\\,\\large n}\n\\begin{bmatrix}1\\\\41\\\\116\\end{bmatrix}\\tag{6}\n$$</p>\n\n<hr>\n\n<p><strong>Closed Form for the Sequence</strong></p>\n\n<p>Using the <a href=\"https://en.wikipedia.org/wiki/Jordan_normal_form\" rel=\"nofollow noreferrer\">Jordan Normal Form</a> of the matrix in $(6)$, we get\n$$\n\\begin{bmatrix}a_n\\\\a_{n+1}\\\\b_n\\end{bmatrix}\n=\n\\begin{bmatrix}\n-1&amp;\\frac{-1-\\sqrt3}8&amp;\\frac{-1+\\sqrt3}8\\\\\n1&amp;\\frac{1-\\sqrt3}8&amp;\\frac{1+\\sqrt3}8\\\\\n2&amp;1&amp;1\n\\end{bmatrix}\n\\begin{bmatrix}\n-1&amp;0&amp;0\\vphantom{\\frac{\\sqrt3}8}\\\\\n0&amp;2-\\sqrt3&amp;0\\vphantom{\\frac{\\sqrt3}8}\\\\\n0&amp;0&amp;2+\\sqrt3\n\\end{bmatrix}^{\\,\\large 
n}\n\\begin{bmatrix}\n\\frac{22}3\\\\\n\\frac{152-84\\sqrt3}3\\\\\n\\frac{152+84\\sqrt3}3\n\\end{bmatrix}\\tag{7}\n$$\nfrom which, we can extract\n$$\na_n=\\frac{25+17\\sqrt3}6\\left(2+\\sqrt3\\right)^n-\\frac{22}3(-1)^n+\\frac{25-17\\sqrt3}6\\left(2-\\sqrt3\\right)^n\\tag{8}\n$$\nIt is not immediately evident that $(8)$ is always an integer; however, since we have $\\left(x-2-\\sqrt3\\right)\\left(x-2+\\sqrt3\\right)(x+1)=x^3-3x^2-3x+1$, equation $(8)$ implies that\n$$\na_n=3a_{n-1}+3a_{n-2}-a_{n-3}\\tag{9}\n$$\nwhich does imply that $a_n$ is always an integer since $a_0=1$, $a_1=41$, and $a_2=119$.</p>\n" }, { "answer_id": 2096330, "score": 6, "text": "<p><strong>Hint:</strong></p>\n\n<p>Clearly, induction is likely the way to go here. But, what should your hypothesis be?</p>\n\n<p>It isn't enough to assume that the previous two terms are integers. So, let's look at what it means for $a_{n+2}$ to be an integer.</p>\n\n<p>Clearly, if $a_n$ is an integer, then $3a_n$ is as well. So, you need to consider the square root. In order for that to be an integer, it must be the case that $8(a_n^2+a_{n+1}^2)$ is a perfect square. Since $8=2^3$, the combination of this condition with the assumption that $a_n$ and $a_{n+1}$ must be integers is equivalent to saying that $a_n^2+a_{n+1}^2=2m^2$ for some $m$. 
(Can you see why?)</p>\n\n<p>So, I'd suggest trying to prove by induction the slightly stronger statement that for all $n$, $a_n$ is an integer AND that for each $n$, you can find an integer $m$ so that $a_n^2+a_{n+1}^2=2m^2$.</p>\n" }, { "answer_id": 2099083, "score": 8, "text": "<p><strong>Sequence is Integral</strong></p>\n\n<p>Suppose\n$$\na_{n+2}=3a_n+b_n\\tag{1}\n$$\nwhere\n$$\nb_n^2=8\\!\\left(a_n^2+a_{n+1}^2\\right)\\tag{2}\n$$\nThen\n$$\n\\begin{align}\nb_{n+1}^2\n&amp;=8\\!\\left(\\color{#C00}{a_{n+2}^2}+\\color{#090}{a_{n+1}^2}\\right)\\\\\n&amp;=8\\!\\left(\\color{#C00}{9a_n^2+6a_nb_n+b_n^2}+\\color{#090}{\\frac{b_n^2}8-a_n^2}\\right)\\\\\n&amp;=64a_n^2+48a_nb_n+9b_n^2\\\\[6pt]\n&amp;=(8a_n+3b_n)^2\\tag{3}\n\\end{align}\n$$\nTherefore,\n$$\nb_{n+1}=8a_n+3b_n\\tag{4}\n$$\nWe can continue the integer sequences using $(1)$ and $(4)$:\n$$\n\\begin{bmatrix}a_{n+1}\\\\a_{n+2}\\\\b_{n+1}\\end{bmatrix}\n=\n\\begin{bmatrix}0&amp;1&amp;0\\\\3&amp;0&amp;1\\\\8&amp;0&amp;3\\end{bmatrix}\n\\begin{bmatrix}a_n\\\\a_{n+1}\\\\b_n\\end{bmatrix}\\tag{5}\n$$\nThat is,\n$$\n\\begin{bmatrix}a_n\\\\a_{n+1}\\\\b_n\\end{bmatrix}\n=\n\\begin{bmatrix}0&amp;1&amp;0\\\\3&amp;0&amp;1\\\\8&amp;0&amp;3\\end{bmatrix}^{\\,\\large n}\n\\begin{bmatrix}1\\\\41\\\\116\\end{bmatrix}\\tag{6}\n$$</p>\n\n<hr>\n\n<p><strong>Closed Form for the Sequence</strong></p>\n\n<p>Using the <a href=\"https://en.wikipedia.org/wiki/Jordan_normal_form\" rel=\"nofollow noreferrer\">Jordan Normal Form</a> of the matrix in $(6)$, we get\n$$\n\\begin{bmatrix}a_n\\\\a_{n+1}\\\\b_n\\end{bmatrix}\n=\n\\begin{bmatrix}\n-1&amp;\\frac{-1-\\sqrt3}8&amp;\\frac{-1+\\sqrt3}8\\\\\n1&amp;\\frac{1-\\sqrt3}8&amp;\\frac{1+\\sqrt3}8\\\\\n2&amp;1&amp;1\n\\end{bmatrix}\n\\begin{bmatrix}\n-1&amp;0&amp;0\\vphantom{\\frac{\\sqrt3}8}\\\\\n0&amp;2-\\sqrt3&amp;0\\vphantom{\\frac{\\sqrt3}8}\\\\\n0&amp;0&amp;2+\\sqrt3\n\\end{bmatrix}^{\\,\\large 
n}\n\\begin{bmatrix}\n\\frac{22}3\\\\\n\\frac{152-84\\sqrt3}3\\\\\n\\frac{152+84\\sqrt3}3\n\\end{bmatrix}\\tag{7}\n$$\nfrom which, we can extract\n$$\na_n=\\frac{25+17\\sqrt3}6\\left(2+\\sqrt3\\right)^n-\\frac{22}3(-1)^n+\\frac{25-17\\sqrt3}6\\left(2-\\sqrt3\\right)^n\\tag{8}\n$$\nIt is not immediately evident that $(8)$ is always an integer; however, since we have $\\left(x-2-\\sqrt3\\right)\\left(x-2+\\sqrt3\\right)(x+1)=x^3-3x^2-3x+1$, equation $(8)$ implies that\n$$\na_n=3a_{n-1}+3a_{n-2}-a_{n-3}\\tag{9}\n$$\nwhich does imply that $a_n$ is always an integer since $a_0=1$, $a_1=41$, and $a_2=119$.</p>\n" }, { "answer_id": 2096330, "score": 6, "text": "<p><strong>Hint:</strong></p>\n\n<p>Clearly, induction is likely the way to go here. But, what should your hypothesis be?</p>\n\n<p>It isn't enough to assume that the previous two terms are integers. So, let's look at what it means for $a_{n+2}$ to be an integer.</p>\n\n<p>Clearly, if $a_n$ is an integer, then $3a_n$ is as well. So, you need to consider the square root. In order for that to be an integer, it must be the case that $8(a_n^2+a_{n+1}^2)$ is a perfect square. Since $8=2^3$, the combination of this condition with the assumption that $a_n$ and $a_{n+1}$ must be integers is equivalent to saying that $a_n^2+a_{n+1}^2=2m^2$ for some $m$. 
(Can you see why?)</p>\n\n<p>So, I'd suggest trying to prove by induction the slightly stronger statement that for all $n$, $a_n$ is an integer AND that for each $n$, you can find an integer $m$ so that $a_n^2+a_{n+1}^2=2m^2$.</p>\n" }, { "answer_id": 2099083, "score": 8, "text": "<p><strong>Sequence is Integral</strong></p>\n\n<p>Suppose\n$$\na_{n+2}=3a_n+b_n\\tag{1}\n$$\nwhere\n$$\nb_n^2=8\\!\\left(a_n^2+a_{n+1}^2\\right)\\tag{2}\n$$\nThen\n$$\n\\begin{align}\nb_{n+1}^2\n&amp;=8\\!\\left(\\color{#C00}{a_{n+2}^2}+\\color{#090}{a_{n+1}^2}\\right)\\\\\n&amp;=8\\!\\left(\\color{#C00}{9a_n^2+6a_nb_n+b_n^2}+\\color{#090}{\\frac{b_n^2}8-a_n^2}\\right)\\\\\n&amp;=64a_n^2+48a_nb_n+9b_n^2\\\\[6pt]\n&amp;=(8a_n+3b_n)^2\\tag{3}\n\\end{align}\n$$\nTherefore,\n$$\nb_{n+1}=8a_n+3b_n\\tag{4}\n$$\nWe can continue the integer sequences using $(1)$ and $(4)$:\n$$\n\\begin{bmatrix}a_{n+1}\\\\a_{n+2}\\\\b_{n+1}\\end{bmatrix}\n=\n\\begin{bmatrix}0&amp;1&amp;0\\\\3&amp;0&amp;1\\\\8&amp;0&amp;3\\end{bmatrix}\n\\begin{bmatrix}a_n\\\\a_{n+1}\\\\b_n\\end{bmatrix}\\tag{5}\n$$\nThat is,\n$$\n\\begin{bmatrix}a_n\\\\a_{n+1}\\\\b_n\\end{bmatrix}\n=\n\\begin{bmatrix}0&amp;1&amp;0\\\\3&amp;0&amp;1\\\\8&amp;0&amp;3\\end{bmatrix}^{\\,\\large n}\n\\begin{bmatrix}1\\\\41\\\\116\\end{bmatrix}\\tag{6}\n$$</p>\n\n<hr>\n\n<p><strong>Closed Form for the Sequence</strong></p>\n\n<p>Using the <a href=\"https://en.wikipedia.org/wiki/Jordan_normal_form\" rel=\"nofollow noreferrer\">Jordan Normal Form</a> of the matrix in $(6)$, we get\n$$\n\\begin{bmatrix}a_n\\\\a_{n+1}\\\\b_n\\end{bmatrix}\n=\n\\begin{bmatrix}\n-1&amp;\\frac{-1-\\sqrt3}8&amp;\\frac{-1+\\sqrt3}8\\\\\n1&amp;\\frac{1-\\sqrt3}8&amp;\\frac{1+\\sqrt3}8\\\\\n2&amp;1&amp;1\n\\end{bmatrix}\n\\begin{bmatrix}\n-1&amp;0&amp;0\\vphantom{\\frac{\\sqrt3}8}\\\\\n0&amp;2-\\sqrt3&amp;0\\vphantom{\\frac{\\sqrt3}8}\\\\\n0&amp;0&amp;2+\\sqrt3\n\\end{bmatrix}^{\\,\\large 
n}\n\\begin{bmatrix}\n\\frac{22}3\\\\\n\\frac{152-84\\sqrt3}3\\\\\n\\frac{152+84\\sqrt3}3\n\\end{bmatrix}\\tag{7}\n$$\nfrom which, we can extract\n$$\na_n=\\frac{25+17\\sqrt3}6\\left(2+\\sqrt3\\right)^n-\\frac{22}3(-1)^n+\\frac{25-17\\sqrt3}6\\left(2-\\sqrt3\\right)^n\\tag{8}\n$$\nIt is not immediately evident that $(8)$ is always an integer; however, since we have $\\left(x-2-\\sqrt3\\right)\\left(x-2+\\sqrt3\\right)(x+1)=x^3-3x^2-3x+1$, equation $(8)$ implies that\n$$\na_n=3a_{n-1}+3a_{n-2}-a_{n-3}\\tag{9}\n$$\nwhich does imply that $a_n$ is always an integer since $a_0=1$, $a_1=41$, and $a_2=119$.</p>\n" }, { "answer_id": 2096330, "score": 6, "text": "<p><strong>Hint:</strong></p>\n\n<p>Clearly, induction is likely the way to go here. But, what should your hypothesis be?</p>\n\n<p>It isn't enough to assume that the previous two terms are integers. So, let's look at what it means for $a_{n+2}$ to be an integer.</p>\n\n<p>Clearly, if $a_n$ is an integer, then $3a_n$ is as well. So, you need to consider the square root. In order for that to be an integer, it must be the case that $8(a_n^2+a_{n+1}^2)$ is a perfect square. Since $8=2^3$, the combination of this condition with the assumption that $a_n$ and $a_{n+1}$ must be integers is equivalent to saying that $a_n^2+a_{n+1}^2=2m^2$ for some $m$. 
(Can you see why?)</p>\n\n<p>So, I'd suggest trying to prove by induction the slightly stronger statement that for all $n$, $a_n$ is an integer AND that for each $n$, you can find an integer $m$ so that $a_n^2+a_{n+1}^2=2m^2$.</p>\n" }, { "answer_id": 2099083, "score": 8, "text": "<p><strong>Sequence is Integral</strong></p>\n\n<p>Suppose\n$$\na_{n+2}=3a_n+b_n\\tag{1}\n$$\nwhere\n$$\nb_n^2=8\\!\\left(a_n^2+a_{n+1}^2\\right)\\tag{2}\n$$\nThen\n$$\n\\begin{align}\nb_{n+1}^2\n&amp;=8\\!\\left(\\color{#C00}{a_{n+2}^2}+\\color{#090}{a_{n+1}^2}\\right)\\\\\n&amp;=8\\!\\left(\\color{#C00}{9a_n^2+6a_nb_n+b_n^2}+\\color{#090}{\\frac{b_n^2}8-a_n^2}\\right)\\\\\n&amp;=64a_n^2+48a_nb_n+9b_n^2\\\\[6pt]\n&amp;=(8a_n+3b_n)^2\\tag{3}\n\\end{align}\n$$\nTherefore,\n$$\nb_{n+1}=8a_n+3b_n\\tag{4}\n$$\nWe can continue the integer sequences using $(1)$ and $(4)$:\n$$\n\\begin{bmatrix}a_{n+1}\\\\a_{n+2}\\\\b_{n+1}\\end{bmatrix}\n=\n\\begin{bmatrix}0&amp;1&amp;0\\\\3&amp;0&amp;1\\\\8&amp;0&amp;3\\end{bmatrix}\n\\begin{bmatrix}a_n\\\\a_{n+1}\\\\b_n\\end{bmatrix}\\tag{5}\n$$\nThat is,\n$$\n\\begin{bmatrix}a_n\\\\a_{n+1}\\\\b_n\\end{bmatrix}\n=\n\\begin{bmatrix}0&amp;1&amp;0\\\\3&amp;0&amp;1\\\\8&amp;0&amp;3\\end{bmatrix}^{\\,\\large n}\n\\begin{bmatrix}1\\\\41\\\\116\\end{bmatrix}\\tag{6}\n$$</p>\n\n<hr>\n\n<p><strong>Closed Form for the Sequence</strong></p>\n\n<p>Using the <a href=\"https://en.wikipedia.org/wiki/Jordan_normal_form\" rel=\"nofollow noreferrer\">Jordan Normal Form</a> of the matrix in $(6)$, we get\n$$\n\\begin{bmatrix}a_n\\\\a_{n+1}\\\\b_n\\end{bmatrix}\n=\n\\begin{bmatrix}\n-1&amp;\\frac{-1-\\sqrt3}8&amp;\\frac{-1+\\sqrt3}8\\\\\n1&amp;\\frac{1-\\sqrt3}8&amp;\\frac{1+\\sqrt3}8\\\\\n2&amp;1&amp;1\n\\end{bmatrix}\n\\begin{bmatrix}\n-1&amp;0&amp;0\\vphantom{\\frac{\\sqrt3}8}\\\\\n0&amp;2-\\sqrt3&amp;0\\vphantom{\\frac{\\sqrt3}8}\\\\\n0&amp;0&amp;2+\\sqrt3\n\\end{bmatrix}^{\\,\\large 
n}\n\\begin{bmatrix}\n\\frac{22}3\\\\\n\\frac{152-84\\sqrt3}3\\\\\n\\frac{152+84\\sqrt3}3\n\\end{bmatrix}\\tag{7}\n$$\nfrom which, we can extract\n$$\na_n=\\frac{25+17\\sqrt3}6\\left(2+\\sqrt3\\right)^n-\\frac{22}3(-1)^n+\\frac{25-17\\sqrt3}6\\left(2-\\sqrt3\\right)^n\\tag{8}\n$$\nIt is not immediately evident that $(8)$ is always an integer; however, since we have $\\left(x-2-\\sqrt3\\right)\\left(x-2+\\sqrt3\\right)(x+1)=x^3-3x^2-3x+1$, equation $(8)$ implies that\n$$\na_n=3a_{n-1}+3a_{n-2}-a_{n-3}\\tag{9}\n$$\nwhich does imply that $a_n$ is always an integer since $a_0=1$, $a_1=41$, and $a_2=119$.</p>\n" }, { "answer_id": 2096330, "score": 6, "text": "<p><strong>Hint:</strong></p>\n\n<p>Clearly, induction is likely the way to go here. But, what should your hypothesis be?</p>\n\n<p>It isn't enough to assume that the previous two terms are integers. So, let's look at what it means for $a_{n+2}$ to be an integer.</p>\n\n<p>Clearly, if $a_n$ is an integer, then $3a_n$ is as well. So, you need to consider the square root. In order for that to be an integer, it must be the case that $8(a_n^2+a_{n+1}^2)$ is a perfect square. Since $8=2^3$, the combination of this condition with the assumption that $a_n$ and $a_{n+1}$ must be integers is equivalent to saying that $a_n^2+a_{n+1}^2=2m^2$ for some $m$. 
(Can you see why?)</p>\n\n<p>So, I'd suggest trying to prove by induction the slightly stronger statement that for all $n$, $a_n$ is an integer AND that for each $n$, you can find an integer $m$ so that $a_n^2+a_{n+1}^2=2m^2$.</p>\n" } ]
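The matrix formulation in equations (5) and (6) of the first answer can be checked numerically. Below is a small sketch (plain Python lists, no external libraries; the starting vector [1, 41, 116] is the one appearing in equation (6)) confirming that iterating the 3×3 matrix reproduces the integer terms quoted in the answers:

```python
# Iterate the 3x3 matrix of equation (5) on the starting state
# [a_0, a_1, b_0] = [1, 41, 116] and collect the a_n values.
def mat_vec(M, v):
    """Multiply a 3x3 integer matrix by a length-3 vector."""
    return [sum(M[i][j] * v[j] for j in range(3)) for i in range(3)]

M = [[0, 1, 0],
     [3, 0, 1],
     [8, 0, 3]]

state = [1, 41, 116]          # [a_0, a_1, b_0] from equation (6)
a_values = []
for _ in range(6):
    a_values.append(state[0])
    state = mat_vec(M, state)

print(a_values)  # -> [1, 41, 119, 479, 1753, 6577]
```

Since the matrix and the starting vector are integral, every iterate is integral, which is exactly the point of recasting the recurrence in the form (6).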
Question: Prove that the elements of the sequence are integers Answer 1: <p><strong>Sequence is Integral</strong></p> <p>Suppose $$ a_{n+2}=3a_n+b_n\tag{1} $$ where $$ b_n^2=8\!\left(a_n^2+a_{n+1}^2\right)\tag{2} $$ Then $$ \begin{align} b_{n+1}^2 &amp;=8\!\left(\color{#C00}{a_{n+2}^2}+\color{#090}{a_{n+1}^2}\right)\\ &amp;=8\!\left(\color{#C00}{9a_n^2+6a_nb_n+b_n^2}+\color{#090}{\frac{b_n^2}8-a_n^2}\right)\\ &amp;=64a_n^2+48a_nb_n+9b_n^2\\[6pt] &amp;=(8a_n+3b_n)^2\tag{3} \end{align} $$ Therefore, $$ b_{n+1}=8a_n+3b_n\tag{4} $$ We can continue the integer sequences using $(1)$ and $(4)$: $$ \begin{bmatrix}a_{n+1}\\a_{n+2}\\b_{n+1}\end{bmatrix} = \begin{bmatrix}0&amp;1&amp;0\\3&amp;0&amp;1\\8&amp;0&amp;3\end{bmatrix} \begin{bmatrix}a_n\\a_{n+1}\\b_n\end{bmatrix}\tag{5} $$ That is, $$ \begin{bmatrix}a_n\\a_{n+1}\\b_n\end{bmatrix} = \begin{bmatrix}0&amp;1&amp;0\\3&amp;0&amp;1\\8&amp;0&amp;3\end{bmatrix}^{\,\large n} \begin{bmatrix}1\\41\\116\end{bmatrix}\tag{6} $$</p> <hr> <p><strong>Closed Form for the Sequence</strong></p> <p>Using the <a href="https://en.wikipedia.org/wiki/Jordan_normal_form" rel="nofollow noreferrer">Jordan Normal Form</a> of the matrix in $(6)$, we get $$ \begin{bmatrix}a_n\\a_{n+1}\\b_n\end{bmatrix} = \begin{bmatrix} -1&amp;\frac{-1-\sqrt3}8&amp;\frac{-1+\sqrt3}8\\ 1&amp;\frac{1-\sqrt3}8&amp;\frac{1+\sqrt3}8\\ 2&amp;1&amp;1 \end{bmatrix} \begin{bmatrix} -1&amp;0&amp;0\vphantom{\frac{\sqrt3}8}\\ 0&amp;2-\sqrt3&amp;0\vphantom{\frac{\sqrt3}8}\\ 0&amp;0&amp;2+\sqrt3 \end{bmatrix}^{\,\large n} \begin{bmatrix} \frac{22}3\\ \frac{152-84\sqrt3}3\\ \frac{152+84\sqrt3}3 \end{bmatrix}\tag{7} $$ from which, we can extract $$ a_n=\frac{25+17\sqrt3}6\left(2+\sqrt3\right)^n-\frac{22}3(-1)^n+\frac{25-17\sqrt3}6\left(2-\sqrt3\right)^n\tag{8} $$ It is not immediately evident that $(8)$ is always an integer; however, since we have $\left(x-2-\sqrt3\right)\left(x-2+\sqrt3\right)(x+1)=x^3-3x^2-3x+1$, equation $(8)$ implies that $$ 
a_n=3a_{n-1}+3a_{n-2}-a_{n-3}\tag{9} $$ which does imply that $a_n$ is always an integer since $a_0=1$, $a_1=41$, and $a_2=119$.</p> Answer 2: <p><strong>Hint:</strong></p> <p>Clearly, induction is likely the way to go here. But, what should your hypothesis be?</p> <p>It isn't enough to assume that the previous two terms are integers. So, let's look at what it means for $a_{n+2}$ to be an integer.</p> <p>Clearly, if $a_n$ is an integer, then $3a_n$ is as well. So, you need to consider the square root. In order for that to be an integer, it must be the case that $8(a_n^2+a_{n+1}^2)$ is a perfect square. Since $8=2^3$, the combination of this condition with the assumption that $a_n$ and $a_{n+1}$ must be integers is equivalent to saying that $a_n^2+a_{n+1}^2=2m^2$ for some $m$. (Can you see why?)</p> <p>So, I'd suggest trying to prove by induction the slightly stronger statement that for all $n$, $a_n$ is an integer AND that for each $n$, you can find an integer $m$ so that $a_n^2+a_{n+1}^2=2m^2$.</p> Answer 3: <p><strong>Sequence is Integral</strong></p> <p>Suppose $$ a_{n+2}=3a_n+b_n\tag{1} $$ where $$ b_n^2=8\!\left(a_n^2+a_{n+1}^2\right)\tag{2} $$ Then $$ \begin{align} b_{n+1}^2 &amp;=8\!\left(\color{#C00}{a_{n+2}^2}+\color{#090}{a_{n+1}^2}\right)\\ &amp;=8\!\left(\color{#C00}{9a_n^2+6a_nb_n+b_n^2}+\color{#090}{\frac{b_n^2}8-a_n^2}\right)\\ &amp;=64a_n^2+48a_nb_n+9b_n^2\\[6pt] &amp;=(8a_n+3b_n)^2\tag{3} \end{align} $$ Therefore, $$ b_{n+1}=8a_n+3b_n\tag{4} $$ We can continue the integer sequences using $(1)$ and $(4)$: $$ \begin{bmatrix}a_{n+1}\\a_{n+2}\\b_{n+1}\end{bmatrix} = \begin{bmatrix}0&amp;1&amp;0\\3&amp;0&amp;1\\8&amp;0&amp;3\end{bmatrix} \begin{bmatrix}a_n\\a_{n+1}\\b_n\end{bmatrix}\tag{5} $$ That is, $$ \begin{bmatrix}a_n\\a_{n+1}\\b_n\end{bmatrix} = \begin{bmatrix}0&amp;1&amp;0\\3&amp;0&amp;1\\8&amp;0&amp;3\end{bmatrix}^{\,\large n} \begin{bmatrix}1\\41\\116\end{bmatrix}\tag{6} $$</p> <hr> <p><strong>Closed Form for the Sequence</strong></p> <p>Using 
the <a href="https://en.wikipedia.org/wiki/Jordan_normal_form" rel="nofollow noreferrer">Jordan Normal Form</a> of the matrix in $(6)$, we get $$ \begin{bmatrix}a_n\\a_{n+1}\\b_n\end{bmatrix} = \begin{bmatrix} -1&amp;\frac{-1-\sqrt3}8&amp;\frac{-1+\sqrt3}8\\ 1&amp;\frac{1-\sqrt3}8&amp;\frac{1+\sqrt3}8\\ 2&amp;1&amp;1 \end{bmatrix} \begin{bmatrix} -1&amp;0&amp;0\vphantom{\frac{\sqrt3}8}\\ 0&amp;2-\sqrt3&amp;0\vphantom{\frac{\sqrt3}8}\\ 0&amp;0&amp;2+\sqrt3 \end{bmatrix}^{\,\large n} \begin{bmatrix} \frac{22}3\\ \frac{152-84\sqrt3}3\\ \frac{152+84\sqrt3}3 \end{bmatrix}\tag{7} $$ from which, we can extract $$ a_n=\frac{25+17\sqrt3}6\left(2+\sqrt3\right)^n-\frac{22}3(-1)^n+\frac{25-17\sqrt3}6\left(2-\sqrt3\right)^n\tag{8} $$ It is not immediately evident that $(8)$ is always an integer; however, since we have $\left(x-2-\sqrt3\right)\left(x-2+\sqrt3\right)(x+1)=x^3-3x^2-3x+1$, equation $(8)$ implies that $$ a_n=3a_{n-1}+3a_{n-2}-a_{n-3}\tag{9} $$ which does imply that $a_n$ is always an integer since $a_0=1$, $a_1=41$, and $a_2=119$.</p> Answer 4: <p><strong>Hint:</strong></p> <p>Clearly, induction is likely the way to go here. But, what should your hypothesis be?</p> <p>It isn't enough to assume that the previous two terms are integers. So, let's look at what it means for $a_{n+2}$ to be an integer.</p> <p>Clearly, if $a_n$ is an integer, then $3a_n$ is as well. So, you need to consider the square root. In order for that to be an integer, it must be the case that $8(a_n^2+a_{n+1}^2)$ is a perfect square. Since $8=2^3$, the combination of this condition with the assumption that $a_n$ and $a_{n+1}$ must be integers is equivalent to saying that $a_n^2+a_{n+1}^2=2m^2$ for some $m$. 
(Can you see why?)</p> <p>So, I'd suggest trying to prove by induction the slightly stronger statement that for all $n$, $a_n$ is an integer AND that for each $n$, you can find an integer $m$ so that $a_n^2+a_{n+1}^2=2m^2$.</p> Answer 5: <p><strong>Sequence is Integral</strong></p> <p>Suppose $$ a_{n+2}=3a_n+b_n\tag{1} $$ where $$ b_n^2=8\!\left(a_n^2+a_{n+1}^2\right)\tag{2} $$ Then $$ \begin{align} b_{n+1}^2 &amp;=8\!\left(\color{#C00}{a_{n+2}^2}+\color{#090}{a_{n+1}^2}\right)\\ &amp;=8\!\left(\color{#C00}{9a_n^2+6a_nb_n+b_n^2}+\color{#090}{\frac{b_n^2}8-a_n^2}\right)\\ &amp;=64a_n^2+48a_nb_n+9b_n^2\\[6pt] &amp;=(8a_n+3b_n)^2\tag{3} \end{align} $$ Therefore, $$ b_{n+1}=8a_n+3b_n\tag{4} $$ We can continue the integer sequences using $(1)$ and $(4)$: $$ \begin{bmatrix}a_{n+1}\\a_{n+2}\\b_{n+1}\end{bmatrix} = \begin{bmatrix}0&amp;1&amp;0\\3&amp;0&amp;1\\8&amp;0&amp;3\end{bmatrix} \begin{bmatrix}a_n\\a_{n+1}\\b_n\end{bmatrix}\tag{5} $$ That is, $$ \begin{bmatrix}a_n\\a_{n+1}\\b_n\end{bmatrix} = \begin{bmatrix}0&amp;1&amp;0\\3&amp;0&amp;1\\8&amp;0&amp;3\end{bmatrix}^{\,\large n} \begin{bmatrix}1\\41\\116\end{bmatrix}\tag{6} $$</p> <hr> <p><strong>Closed Form for the Sequence</strong></p> <p>Using the <a href="https://en.wikipedia.org/wiki/Jordan_normal_form" rel="nofollow noreferrer">Jordan Normal Form</a> of the matrix in $(6)$, we get $$ \begin{bmatrix}a_n\\a_{n+1}\\b_n\end{bmatrix} = \begin{bmatrix} -1&amp;\frac{-1-\sqrt3}8&amp;\frac{-1+\sqrt3}8\\ 1&amp;\frac{1-\sqrt3}8&amp;\frac{1+\sqrt3}8\\ 2&amp;1&amp;1 \end{bmatrix} \begin{bmatrix} -1&amp;0&amp;0\vphantom{\frac{\sqrt3}8}\\ 0&amp;2-\sqrt3&amp;0\vphantom{\frac{\sqrt3}8}\\ 0&amp;0&amp;2+\sqrt3 \end{bmatrix}^{\,\large n} \begin{bmatrix} \frac{22}3\\ \frac{152-84\sqrt3}3\\ \frac{152+84\sqrt3}3 \end{bmatrix}\tag{7} $$ from which, we can extract $$ a_n=\frac{25+17\sqrt3}6\left(2+\sqrt3\right)^n-\frac{22}3(-1)^n+\frac{25-17\sqrt3}6\left(2-\sqrt3\right)^n\tag{8} $$ It is not immediately evident that $(8)$ is always an 
integer; however, since we have $\left(x-2-\sqrt3\right)\left(x-2+\sqrt3\right)(x+1)=x^3-3x^2-3x+1$, equation $(8)$ implies that $$ a_n=3a_{n-1}+3a_{n-2}-a_{n-3}\tag{9} $$ which does imply that $a_n$ is always an integer since $a_0=1$, $a_1=41$, and $a_2=119$.</p> Answer 6: <p><strong>Hint:</strong></p> <p>Clearly, induction is likely the way to go here. But, what should your hypothesis be?</p> <p>It isn't enough to assume that the previous two terms are integers. So, let's look at what it means for $a_{n+2}$ to be an integer.</p> <p>Clearly, if $a_n$ is an integer, then $3a_n$ is as well. So, you need to consider the square root. In order for that to be an integer, it must be the case that $8(a_n^2+a_{n+1}^2)$ is a perfect square. Since $8=2^3$, the combination of this condition with the assumption that $a_n$ and $a_{n+1}$ must be integers is equivalent to saying that $a_n^2+a_{n+1}^2=2m^2$ for some $m$. (Can you see why?)</p> <p>So, I'd suggest trying to prove by induction the slightly stronger statement that for all $n$, $a_n$ is an integer AND that for each $n$, you can find an integer $m$ so that $a_n^2+a_{n+1}^2=2m^2$.</p> Answer 7: <p><strong>Sequence is Integral</strong></p> <p>Suppose $$ a_{n+2}=3a_n+b_n\tag{1} $$ where $$ b_n^2=8\!\left(a_n^2+a_{n+1}^2\right)\tag{2} $$ Then $$ \begin{align} b_{n+1}^2 &amp;=8\!\left(\color{#C00}{a_{n+2}^2}+\color{#090}{a_{n+1}^2}\right)\\ &amp;=8\!\left(\color{#C00}{9a_n^2+6a_nb_n+b_n^2}+\color{#090}{\frac{b_n^2}8-a_n^2}\right)\\ &amp;=64a_n^2+48a_nb_n+9b_n^2\\[6pt] &amp;=(8a_n+3b_n)^2\tag{3} \end{align} $$ Therefore, $$ b_{n+1}=8a_n+3b_n\tag{4} $$ We can continue the integer sequences using $(1)$ and $(4)$: $$ \begin{bmatrix}a_{n+1}\\a_{n+2}\\b_{n+1}\end{bmatrix} = \begin{bmatrix}0&amp;1&amp;0\\3&amp;0&amp;1\\8&amp;0&amp;3\end{bmatrix} \begin{bmatrix}a_n\\a_{n+1}\\b_n\end{bmatrix}\tag{5} $$ That is, $$ \begin{bmatrix}a_n\\a_{n+1}\\b_n\end{bmatrix} = 
\begin{bmatrix}0&amp;1&amp;0\\3&amp;0&amp;1\\8&amp;0&amp;3\end{bmatrix}^{\,\large n} \begin{bmatrix}1\\41\\116\end{bmatrix}\tag{6} $$</p> <hr> <p><strong>Closed Form for the Sequence</strong></p> <p>Using the <a href="https://en.wikipedia.org/wiki/Jordan_normal_form" rel="nofollow noreferrer">Jordan Normal Form</a> of the matrix in $(6)$, we get $$ \begin{bmatrix}a_n\\a_{n+1}\\b_n\end{bmatrix} = \begin{bmatrix} -1&amp;\frac{-1-\sqrt3}8&amp;\frac{-1+\sqrt3}8\\ 1&amp;\frac{1-\sqrt3}8&amp;\frac{1+\sqrt3}8\\ 2&amp;1&amp;1 \end{bmatrix} \begin{bmatrix} -1&amp;0&amp;0\vphantom{\frac{\sqrt3}8}\\ 0&amp;2-\sqrt3&amp;0\vphantom{\frac{\sqrt3}8}\\ 0&amp;0&amp;2+\sqrt3 \end{bmatrix}^{\,\large n} \begin{bmatrix} \frac{22}3\\ \frac{152-84\sqrt3}3\\ \frac{152+84\sqrt3}3 \end{bmatrix}\tag{7} $$ from which, we can extract $$ a_n=\frac{25+17\sqrt3}6\left(2+\sqrt3\right)^n-\frac{22}3(-1)^n+\frac{25-17\sqrt3}6\left(2-\sqrt3\right)^n\tag{8} $$ It is not immediately evident that $(8)$ is always an integer; however, since we have $\left(x-2-\sqrt3\right)\left(x-2+\sqrt3\right)(x+1)=x^3-3x^2-3x+1$, equation $(8)$ implies that $$ a_n=3a_{n-1}+3a_{n-2}-a_{n-3}\tag{9} $$ which does imply that $a_n$ is always an integer since $a_0=1$, $a_1=41$, and $a_2=119$.</p> Answer 8: <p><strong>Hint:</strong></p> <p>Clearly, induction is likely the way to go here. But, what should your hypothesis be?</p> <p>It isn't enough to assume that the previous two terms are integers. So, let's look at what it means for $a_{n+2}$ to be an integer.</p> <p>Clearly, if $a_n$ is an integer, then $3a_n$ is as well. So, you need to consider the square root. In order for that to be an integer, it must be the case that $8(a_n^2+a_{n+1}^2)$ is a perfect square. Since $8=2^3$, the combination of this condition with the assumption that $a_n$ and $a_{n+1}$ must be integers is equivalent to saying that $a_n^2+a_{n+1}^2=2m^2$ for some $m$. 
(Can you see why?)</p> <p>So, I'd suggest trying to prove by induction the slightly stronger statement that for all $n$, $a_n$ is an integer AND that for each $n$, you can find an integer $m$ so that $a_n^2+a_{n+1}^2=2m^2$.</p> Answer 9: <p><strong>Sequence is Integral</strong></p> <p>Suppose $$ a_{n+2}=3a_n+b_n\tag{1} $$ where $$ b_n^2=8\!\left(a_n^2+a_{n+1}^2\right)\tag{2} $$ Then $$ \begin{align} b_{n+1}^2 &amp;=8\!\left(\color{#C00}{a_{n+2}^2}+\color{#090}{a_{n+1}^2}\right)\\ &amp;=8\!\left(\color{#C00}{9a_n^2+6a_nb_n+b_n^2}+\color{#090}{\frac{b_n^2}8-a_n^2}\right)\\ &amp;=64a_n^2+48a_nb_n+9b_n^2\\[6pt] &amp;=(8a_n+3b_n)^2\tag{3} \end{align} $$ Therefore, $$ b_{n+1}=8a_n+3b_n\tag{4} $$ We can continue the integer sequences using $(1)$ and $(4)$: $$ \begin{bmatrix}a_{n+1}\\a_{n+2}\\b_{n+1}\end{bmatrix} = \begin{bmatrix}0&amp;1&amp;0\\3&amp;0&amp;1\\8&amp;0&amp;3\end{bmatrix} \begin{bmatrix}a_n\\a_{n+1}\\b_n\end{bmatrix}\tag{5} $$ That is, $$ \begin{bmatrix}a_n\\a_{n+1}\\b_n\end{bmatrix} = \begin{bmatrix}0&amp;1&amp;0\\3&amp;0&amp;1\\8&amp;0&amp;3\end{bmatrix}^{\,\large n} \begin{bmatrix}1\\41\\116\end{bmatrix}\tag{6} $$</p> <hr> <p><strong>Closed Form for the Sequence</strong></p> <p>Using the <a href="https://en.wikipedia.org/wiki/Jordan_normal_form" rel="nofollow noreferrer">Jordan Normal Form</a> of the matrix in $(6)$, we get $$ \begin{bmatrix}a_n\\a_{n+1}\\b_n\end{bmatrix} = \begin{bmatrix} -1&amp;\frac{-1-\sqrt3}8&amp;\frac{-1+\sqrt3}8\\ 1&amp;\frac{1-\sqrt3}8&amp;\frac{1+\sqrt3}8\\ 2&amp;1&amp;1 \end{bmatrix} \begin{bmatrix} -1&amp;0&amp;0\vphantom{\frac{\sqrt3}8}\\ 0&amp;2-\sqrt3&amp;0\vphantom{\frac{\sqrt3}8}\\ 0&amp;0&amp;2+\sqrt3 \end{bmatrix}^{\,\large n} \begin{bmatrix} \frac{22}3\\ \frac{152-84\sqrt3}3\\ \frac{152+84\sqrt3}3 \end{bmatrix}\tag{7} $$ from which, we can extract $$ a_n=\frac{25+17\sqrt3}6\left(2+\sqrt3\right)^n-\frac{22}3(-1)^n+\frac{25-17\sqrt3}6\left(2-\sqrt3\right)^n\tag{8} $$ It is not immediately evident that $(8)$ is always an 
integer; however, since we have $\left(x-2-\sqrt3\right)\left(x-2+\sqrt3\right)(x+1)=x^3-3x^2-3x+1$, equation $(8)$ implies that $$ a_n=3a_{n-1}+3a_{n-2}-a_{n-3}\tag{9} $$ which does imply that $a_n$ is always an integer since $a_0=1$, $a_1=41$, and $a_2=119$.</p> Answer 10: <p><strong>Hint:</strong></p> <p>Clearly, induction is likely the way to go here. But, what should your hypothesis be?</p> <p>It isn't enough to assume that the previous two terms are integers. So, let's look at what it means for $a_{n+2}$ to be an integer.</p> <p>Clearly, if $a_n$ is an integer, then $3a_n$ is as well. So, you need to consider the square root. In order for that to be an integer, it must be the case that $8(a_n^2+a_{n+1}^2)$ is a perfect square. Since $8=2^3$, the combination of this condition with the assumption that $a_n$ and $a_{n+1}$ must be integers is equivalent to saying that $a_n^2+a_{n+1}^2=2m^2$ for some $m$. (Can you see why?)</p> <p>So, I'd suggest trying to prove by induction the slightly stronger statement that for all $n$, $a_n$ is an integer AND that for each $n$, you can find an integer $m$ so that $a_n^2+a_{n+1}^2=2m^2$.</p> Answer 11: <p><strong>Sequence is Integral</strong></p> <p>Suppose $$ a_{n+2}=3a_n+b_n\tag{1} $$ where $$ b_n^2=8\!\left(a_n^2+a_{n+1}^2\right)\tag{2} $$ Then $$ \begin{align} b_{n+1}^2 &amp;=8\!\left(\color{#C00}{a_{n+2}^2}+\color{#090}{a_{n+1}^2}\right)\\ &amp;=8\!\left(\color{#C00}{9a_n^2+6a_nb_n+b_n^2}+\color{#090}{\frac{b_n^2}8-a_n^2}\right)\\ &amp;=64a_n^2+48a_nb_n+9b_n^2\\[6pt] &amp;=(8a_n+3b_n)^2\tag{3} \end{align} $$ Therefore, $$ b_{n+1}=8a_n+3b_n\tag{4} $$ We can continue the integer sequences using $(1)$ and $(4)$: $$ \begin{bmatrix}a_{n+1}\\a_{n+2}\\b_{n+1}\end{bmatrix} = \begin{bmatrix}0&amp;1&amp;0\\3&amp;0&amp;1\\8&amp;0&amp;3\end{bmatrix} \begin{bmatrix}a_n\\a_{n+1}\\b_n\end{bmatrix}\tag{5} $$ That is, $$ \begin{bmatrix}a_n\\a_{n+1}\\b_n\end{bmatrix} = 
\begin{bmatrix}0&amp;1&amp;0\\3&amp;0&amp;1\\8&amp;0&amp;3\end{bmatrix}^{\,\large n} \begin{bmatrix}1\\41\\116\end{bmatrix}\tag{6} $$</p> <hr> <p><strong>Closed Form for the Sequence</strong></p> <p>Using the <a href="https://en.wikipedia.org/wiki/Jordan_normal_form" rel="nofollow noreferrer">Jordan Normal Form</a> of the matrix in $(6)$, we get $$ \begin{bmatrix}a_n\\a_{n+1}\\b_n\end{bmatrix} = \begin{bmatrix} -1&amp;\frac{-1-\sqrt3}8&amp;\frac{-1+\sqrt3}8\\ 1&amp;\frac{1-\sqrt3}8&amp;\frac{1+\sqrt3}8\\ 2&amp;1&amp;1 \end{bmatrix} \begin{bmatrix} -1&amp;0&amp;0\vphantom{\frac{\sqrt3}8}\\ 0&amp;2-\sqrt3&amp;0\vphantom{\frac{\sqrt3}8}\\ 0&amp;0&amp;2+\sqrt3 \end{bmatrix}^{\,\large n} \begin{bmatrix} \frac{22}3\\ \frac{152-84\sqrt3}3\\ \frac{152+84\sqrt3}3 \end{bmatrix}\tag{7} $$ from which, we can extract $$ a_n=\frac{25+17\sqrt3}6\left(2+\sqrt3\right)^n-\frac{22}3(-1)^n+\frac{25-17\sqrt3}6\left(2-\sqrt3\right)^n\tag{8} $$ It is not immediately evident that $(8)$ is always an integer; however, since we have $\left(x-2-\sqrt3\right)\left(x-2+\sqrt3\right)(x+1)=x^3-3x^2-3x+1$, equation $(8)$ implies that $$ a_n=3a_{n-1}+3a_{n-2}-a_{n-3}\tag{9} $$ which does imply that $a_n$ is always an integer since $a_0=1$, $a_1=41$, and $a_2=119$.</p> Answer 12: <p><strong>Hint:</strong></p> <p>Clearly, induction is likely the way to go here. But, what should your hypothesis be?</p> <p>It isn't enough to assume that the previous two terms are integers. So, let's look at what it means for $a_{n+2}$ to be an integer.</p> <p>Clearly, if $a_n$ is an integer, then $3a_n$ is as well. So, you need to consider the square root. In order for that to be an integer, it must be the case that $8(a_n^2+a_{n+1}^2)$ is a perfect square. Since $8=2^3$, the combination of this condition with the assumption that $a_n$ and $a_{n+1}$ must be integers is equivalent to saying that $a_n^2+a_{n+1}^2=2m^2$ for some $m$. 
(Can you see why?)</p> <p>So, I'd suggest trying to prove by induction the slightly stronger statement that for all $n$, $a_n$ is an integer AND that for each $n$, you can find an integer $m$ so that $a_n^2+a_{n+1}^2=2m^2$.</p> Answer 13: <p><strong>Sequence is Integral</strong></p> <p>Suppose $$ a_{n+2}=3a_n+b_n\tag{1} $$ where $$ b_n^2=8\!\left(a_n^2+a_{n+1}^2\right)\tag{2} $$ Then $$ \begin{align} b_{n+1}^2 &amp;=8\!\left(\color{#C00}{a_{n+2}^2}+\color{#090}{a_{n+1}^2}\right)\\ &amp;=8\!\left(\color{#C00}{9a_n^2+6a_nb_n+b_n^2}+\color{#090}{\frac{b_n^2}8-a_n^2}\right)\\ &amp;=64a_n^2+48a_nb_n+9b_n^2\\[6pt] &amp;=(8a_n+3b_n)^2\tag{3} \end{align} $$ Therefore, $$ b_{n+1}=8a_n+3b_n\tag{4} $$ We can continue the integer sequences using $(1)$ and $(4)$: $$ \begin{bmatrix}a_{n+1}\\a_{n+2}\\b_{n+1}\end{bmatrix} = \begin{bmatrix}0&amp;1&amp;0\\3&amp;0&amp;1\\8&amp;0&amp;3\end{bmatrix} \begin{bmatrix}a_n\\a_{n+1}\\b_n\end{bmatrix}\tag{5} $$ That is, $$ \begin{bmatrix}a_n\\a_{n+1}\\b_n\end{bmatrix} = \begin{bmatrix}0&amp;1&amp;0\\3&amp;0&amp;1\\8&amp;0&amp;3\end{bmatrix}^{\,\large n} \begin{bmatrix}1\\41\\116\end{bmatrix}\tag{6} $$</p> <hr> <p><strong>Closed Form for the Sequence</strong></p> <p>Using the <a href="https://en.wikipedia.org/wiki/Jordan_normal_form" rel="nofollow noreferrer">Jordan Normal Form</a> of the matrix in $(6)$, we get $$ \begin{bmatrix}a_n\\a_{n+1}\\b_n\end{bmatrix} = \begin{bmatrix} -1&amp;\frac{-1-\sqrt3}8&amp;\frac{-1+\sqrt3}8\\ 1&amp;\frac{1-\sqrt3}8&amp;\frac{1+\sqrt3}8\\ 2&amp;1&amp;1 \end{bmatrix} \begin{bmatrix} -1&amp;0&amp;0\vphantom{\frac{\sqrt3}8}\\ 0&amp;2-\sqrt3&amp;0\vphantom{\frac{\sqrt3}8}\\ 0&amp;0&amp;2+\sqrt3 \end{bmatrix}^{\,\large n} \begin{bmatrix} \frac{22}3\\ \frac{152-84\sqrt3}3\\ \frac{152+84\sqrt3}3 \end{bmatrix}\tag{7} $$ from which, we can extract $$ a_n=\frac{25+17\sqrt3}6\left(2+\sqrt3\right)^n-\frac{22}3(-1)^n+\frac{25-17\sqrt3}6\left(2-\sqrt3\right)^n\tag{8} $$ It is not immediately evident that $(8)$ is always an 
integer; however, since we have $\left(x-2-\sqrt3\right)\left(x-2+\sqrt3\right)(x+1)=x^3-3x^2-3x+1$, equation $(8)$ implies that $$ a_n=3a_{n-1}+3a_{n-2}-a_{n-3}\tag{9} $$ which does imply that $a_n$ is always an integer since $a_0=1$, $a_1=41$, and $a_2=119$.</p>
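The two recurrences in the answers above lend themselves to a quick numerical sanity check. The sketch below (my addition, not part of either answer) iterates the coupled recurrences $(1)$ and $(4)$, verifies the invariant $(2)$ at each step, and compares the result against the cubic recurrence $(9)$:

```python
# Quick numerical check (added sketch, not part of the original answers):
# iterate the coupled recurrences a_{n+2} = 3 a_n + b_n   (1) and
# b_{n+1} = 8 a_n + 3 b_n   (4), verify the invariant
# b_n^2 = 8 (a_n^2 + a_{n+1}^2)   (2), and compare against the cubic
# recurrence a_n = 3 a_{n-1} + 3 a_{n-2} - a_{n-3}   (9).

def coupled(n_terms):
    a = [1, 41]        # a_0, a_1
    b = 116            # b_0, since 8 * (1**2 + 41**2) = 13456 = 116**2
    for n in range(n_terms - 2):
        assert b * b == 8 * (a[n] ** 2 + a[n + 1] ** 2)   # invariant (2)
        a.append(3 * a[n] + b)        # (1)
        b = 8 * a[n] + 3 * b          # (4)
    return a

def cubic(n_terms):
    a = [1, 41, 119]   # a_0, a_1, a_2
    while len(a) < n_terms:
        a.append(3 * a[-1] + 3 * a[-2] - a[-3])           # (9)
    return a

print(coupled(10))                  # all integers: 1, 41, 119, 479, 1753, ...
print(coupled(10) == cubic(10))     # True
```

Both generators produce the same integer sequence, matching the claim that $(1)$ and $(4)$ stay integral and that $(9)$ reproduces the sequence from its first three terms.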
Does the series $\sum_{n\ge1}\int_0^1\left(1-(1-t^n)^{1/n}\right)\,dt$ converge?
2,097,522
8
https://math.stackexchange.com/questions/2097522/does-the-series-sum-n-ge1-int-01-left1-1-tn1-n-right-dt-converge
<p>Here is a question that I worked on a few years ago. I do know how to solve it, but I am convinced that it deserves at least another point of view.</p> <p>I will post my own solution soon (within a week, at most) and I hope that, meanwhile, other people will suggest various approaches.</p> <p>Consider, for all $n\in\mathbb{N^\star}$:</p> <p>$$u_n=\int_0^1\left(1-(1-t^n)^{1/n}\right)\,dt$$</p> <p>Does the series $\sum_{n\ge 1}u_n$ converge?</p>
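Before looking at the answers, a quick numerical experiment (my addition, not part of the question) is instructive: estimating $u_n$ with a simple midpoint rule shows $n^2 u_n$ staying bounded, which suggests $u_n = O(1/n^2)$ and hence convergence of the series.

```python
import math

# Quick numerical experiment (added sketch, not part of the question):
# estimate u_n = \int_0^1 (1 - (1 - t^n)^(1/n)) dt with a midpoint rule
# and watch n^2 * u_n; it stays bounded, suggesting u_n = O(1/n^2).

def u(n, steps=100000):
    h = 1.0 / steps
    total = 0.0
    for k in range(steps):
        t = (k + 0.5) * h                      # midpoint of the k-th cell
        total += 1.0 - (1.0 - t ** n) ** (1.0 / n)
    return total * h

for n in (1, 2, 4, 8, 16):
    print(n, round(u(n), 6), round(n * n * u(n), 6))
```

For $n=1$ the integral is exactly $\int_0^1 t\,dt = 1/2$, and for $n=2$ it is $1-\pi/4$ (the area outside a quarter circle), both of which the estimate reproduces; the answers that follow make the $O(1/n^2)$ bound rigorous.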
[ { "answer_id": 2097809, "score": 6, "text": "<p>We have </p>\n\n<p>$$\\begin{align}\n(1-t^n)^{1/n}&amp;=e^{\\frac1n \\log(1-t^n)}\\\\\\\\\n&amp;\\ge 1+\\frac1n\\log(1-t^n)\n\\end{align}$$</p>\n\n<p>from which </p>\n\n<p>$$\\begin{align}\n\\int_0^1 (1-t^n)^{1/n}\\,dt&amp;\\ge 1+\\frac1n \\int_0^1 \\log(1-t^n)\\,dt\\\\\\\\\n&amp;=1+\\frac1{n^2} \\int_0^1 \\frac{\\log(1-t)}{t}t^{1/n}\\,dt\\\\\\\\\n&amp;\\ge 1+\\frac{1}{n^2}\\int_0^1 \\frac{\\log(1-t)}{t}\\,dt\\\\\\\n&amp;= 1-\\frac{\\pi^2}{6n^2} \n\\end{align}$$</p>\n\n<p>Hence, </p>\n\n<p>$$0\\le \\int_0^1 \\left(1-(1-t^n)^{1/n}\\right)\\,dt\\le \\frac{\\pi^2}{6n^2}$$</p>\n\n<p>By the comparison test, the series converges.</p>\n" }, { "answer_id": 2097802, "score": 4, "text": "<p>Here is a rather elementary solution.</p>\n\n<p>Notice that $u_n$ is the area of the region $D$ in the unit square $[0,1]^2$ defined by $x^n + y^n \\geq 1$. Now let $a_n = 2^{-1/n}$ and we split $D$ into three parts,</p>\n\n<ul>\n<li>$ D_1 = \\{(x, y) \\in [0,1]^2 : x^n + y^n \\geq 1 \\text{ and } x \\leq a_n \\} $</li>\n<li>$ D_2 = \\{(x, y) \\in [0,1]^2 : x^n + y^n \\geq 1 \\text{ and } y \\leq a_n \\} $</li>\n<li>$ D_3 = \\{(x, y) \\in [0,1]^2 : x, y \\geq a_n \\} $</li>\n</ul>\n\n<p>$\\hspace{8em}$ <a href=\"https://i.stack.imgur.com/9N10Q.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/9N10Q.png\" alt=\"enter image description here\"></a></p>\n\n<p>Then it is easy to check that $D = D_1 \\cup D_2 \\cup D_3$ and they are non-overlapping. Also, exploiting the symmetry, we check that $D_1$ and $D_2$ have the same area. So it follows that</p>\n\n<p>\\begin{align*}\nu_n\n&amp;= 2\\text{[Area of $D_1$]} + \\text{[Area of $D_3$]} \\\\\n&amp;= 2\\int_{0}^{a_n} (1 - (1-x^n)^{1/n}) \\, dx + (1 - a_n)^2.\n\\end{align*}</p>\n\n<p>Since $a_n = 1 - \\mathcal{O}(\\frac{1}{n})$, the term $(1-a_n)^2$ is good. 
For the integral term, notice that for $x \\in [0, a_n]$,</p>\n\n<p>\\begin{align*}\n1 - (1-x^n)^{1/n}\n&amp;= 1 - e^{\\frac{1}{n}\\log(1-x^n)}\n \\stackrel{\\text{(1)}}{\\leq} -\\frac{1}{n}\\log(1-x^n) \\\\\n&amp;= \\int_{0}^{x} \\frac{t^{n-1}}{1-t^n} \\, dt\n \\stackrel{\\text{(2)}}{\\leq} \\int_{0}^{x} 2t^{n-1} \\, dt \\\\\n&amp;= \\frac{2}{n}x^n.\n\\end{align*}</p>\n\n<p>For $\\text{(1)}$, we used the inequality $e^t \\geq 1 + t$ which holds for all real $t$. The second inequality $\\text{(2)}$ follows from the fact that $1-t^n \\geq \\frac{1}{2}$ for $t \\in [0,a_n]$. Thus</p>\n\n<p>$$ u_n \\leq \\frac{4}{n}\\int_{0}^{a_n} x^n \\, dx + (1-a_n)^2 \\leq \\frac{4}{n(n+1)} + (1-a_n)^2 = \\mathcal{O}\\left(\\frac{1}{n^2}\\right). $$</p>\n\n<p>This proves the convergence of $\\sum_{n=1}^{\\infty} u_n$.</p>\n" }, { "answer_id": 2097707, "score": 2, "text": "<p>Using Euler's Beta function, we are simply asked about the convergence of\n$$ \\sum_{n\\geq 1}\\left(1-\\frac{\\Gamma\\left(1+\\tfrac{1}{n}\\right)^2}{\\Gamma\\left(1+\\tfrac{2}{n}\\right)}\\right) $$\nwhere $\\Gamma(1+x)$ is a regular function in a neighbourhood of $x=0$. Since\n$$ \\log\\Gamma(1+x) = -\\gamma x+\\frac{\\zeta(2)}{2!}x^2-\\frac{\\zeta(3)}{3}x^3+\\frac{\\zeta(4)}{4}x^4-\\ldots $$\nwe have that \n$$ 1-\\frac{\\Gamma\\left(1+\\tfrac{1}{n}\\right)^2}{\\Gamma\\left(1+\\tfrac{2}{n}\\right)} \\sim \\frac{\\pi^2}{6n^2} $$\nso the given series is convergent for sure. Geometrically, we are saying that if $A_n$ is the area of the region in the $[0,1]\\times[0,1]$ square given by $x^{n+1}+y^{n+1}\\leq 1 \\leq x^n+y^n$, then\n$$ A_1+2A_2+3 A_3+\\ldots $$\nis a convergent series. 
That can be proved by elementary means, too.</p>\n" }, { "answer_id": 2097809, "score": 6, "text": "<p>We have </p>\n\n<p>$$\\begin{align}\n(1-t^n)^{1/n}&amp;=e^{\\frac1n \\log(1-t^n)}\\\\\\\\\n&amp;\\ge 1+\\frac1n\\log(1-t^n)\n\\end{align}$$</p>\n\n<p>from which </p>\n\n<p>$$\\begin{align}\n\\int_0^1 (1-t^n)^{1/n}\\,dt&amp;\\ge 1+\\frac1n \\int_0^1 \\log(1-t^n)\\,dt\\\\\\\\\n&amp;=1+\\frac1{n^2} \\int_0^1 \\frac{\\log(1-t)}{t}t^{1/n}\\,dt\\\\\\\\\n&amp;\\ge 1+\\frac{1}{n^2}\\int_0^1 \\frac{\\log(1-t)}{t}\\,dt\\\\\\\n&amp;= 1-\\frac{\\pi^2}{6n^2} \n\\end{align}$$</p>\n\n<p>Hence, </p>\n\n<p>$$0\\le \\int_0^1 \\left(1-(1-t^n)^{1/n}\\right)\\,dt\\le \\frac{\\pi^2}{6n^2}$$</p>\n\n<p>By the comparison test, the series converges.</p>\n" }, { "answer_id": 2097802, "score": 4, "text": "<p>Here is a rather elementary solution.</p>\n\n<p>Notice that $u_n$ is the area of the region $D$ in the unit square $[0,1]^2$ defined by $x^n + y^n \\geq 1$. Now let $a_n = 2^{-1/n}$ and we split $D$ into three parts,</p>\n\n<ul>\n<li>$ D_1 = \\{(x, y) \\in [0,1]^2 : x^n + y^n \\geq 1 \\text{ and } x \\leq a_n \\} $</li>\n<li>$ D_2 = \\{(x, y) \\in [0,1]^2 : x^n + y^n \\geq 1 \\text{ and } y \\leq a_n \\} $</li>\n<li>$ D_3 = \\{(x, y) \\in [0,1]^2 : x, y \\geq a_n \\} $</li>\n</ul>\n\n<p>$\\hspace{8em}$ <a href=\"https://i.stack.imgur.com/9N10Q.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/9N10Q.png\" alt=\"enter image description here\"></a></p>\n\n<p>Then it is easy to check that $D = D_1 \\cup D_2 \\cup D_3$ and they are non-overlapping. Also, exploiting the symmetry, we check that $D_1$ and $D_2$ have the same area. So it follows that</p>\n\n<p>\\begin{align*}\nu_n\n&amp;= 2\\text{[Area of $D_1$]} + \\text{[Area of $D_3$]} \\\\\n&amp;= 2\\int_{0}^{a_n} (1 - (1-x^n)^{1/n}) \\, dx + (1 - a_n)^2.\n\\end{align*}</p>\n\n<p>Since $a_n = 1 - \\mathcal{O}(\\frac{1}{n})$, the term $(1-a_n)^2$ is good. 
For the integral term, notice that for $x \\in [0, a_n]$,</p>\n\n<p>\\begin{align*}\n1 - (1-x^n)^{1/n}\n&amp;= 1 - e^{\\frac{1}{n}\\log(1-x^n)}\n \\stackrel{\\text{(1)}}{\\leq} -\\frac{1}{n}\\log(1-x^n) \\\\\n&amp;= \\int_{0}^{x} \\frac{t^{n-1}}{1-t^n} \\, dt\n \\stackrel{\\text{(2)}}{\\leq} \\int_{0}^{x} 2t^{n-1} \\, dt \\\\\n&amp;= \\frac{2}{n}x^n.\n\\end{align*}</p>\n\n<p>For $\\text{(1)}$, we used the inequality $e^t \\geq 1 + t$ which holds for all real $t$. The second inequality $\\text{(2)}$ follows from the fact that $1-t^n \\geq \\frac{1}{2}$ for $t \\in [0,a_n]$. Thus</p>\n\n<p>$$ u_n \\leq \\frac{4}{n}\\int_{0}^{a_n} x^n \\, dx + (1-a_n)^2 \\leq \\frac{4}{n(n+1)} + (1-a_n)^2 = \\mathcal{O}\\left(\\frac{1}{n^2}\\right). $$</p>\n\n<p>This proves the convergence of $\\sum_{n=1}^{\\infty} u_n$.</p>\n" }, { "answer_id": 2097707, "score": 2, "text": "<p>Using Euler's Beta function, we are simply asked about the convergence of\n$$ \\sum_{n\\geq 1}\\left(1-\\frac{\\Gamma\\left(1+\\tfrac{1}{n}\\right)^2}{\\Gamma\\left(1+\\tfrac{2}{n}\\right)}\\right) $$\nwhere $\\Gamma(1+x)$ is a regular function in a neighbourhood of $x=0$. Since\n$$ \\log\\Gamma(1+x) = -\\gamma x+\\frac{\\zeta(2)}{2!}x^2-\\frac{\\zeta(3)}{3}x^3+\\frac{\\zeta(4)}{4}x^4-\\ldots $$\nwe have that \n$$ 1-\\frac{\\Gamma\\left(1+\\tfrac{1}{n}\\right)^2}{\\Gamma\\left(1+\\tfrac{2}{n}\\right)} \\sim \\frac{\\pi^2}{6n^2} $$\nso the given series is convergent for sure. Geometrically, we are saying that if $A_n$ is the area of the region in the $[0,1]\\times[0,1]$ square given by $x^{n+1}+y^{n+1}\\leq 1 \\leq x^n+y^n$, then\n$$ A_1+2A_2+3 A_3+\\ldots $$\nis a convergent series. 
That can be proved by elementary means, too.</p>\n" }, { "answer_id": 2097809, "score": 6, "text": "<p>We have </p>\n\n<p>$$\\begin{align}\n(1-t^n)^{1/n}&amp;=e^{\\frac1n \\log(1-t^n)}\\\\\\\\\n&amp;\\ge 1+\\frac1n\\log(1-t^n)\n\\end{align}$$</p>\n\n<p>from which </p>\n\n<p>$$\\begin{align}\n\\int_0^1 (1-t^n)^{1/n}\\,dt&amp;\\ge 1+\\frac1n \\int_0^1 \\log(1-t^n)\\,dt\\\\\\\\\n&amp;=1+\\frac1{n^2} \\int_0^1 \\frac{\\log(1-t)}{t}t^{1/n}\\,dt\\\\\\\\\n&amp;\\ge 1+\\frac{1}{n^2}\\int_0^1 \\frac{\\log(1-t)}{t}\\,dt\\\\\\\n&amp;= 1-\\frac{\\pi^2}{6n^2} \n\\end{align}$$</p>\n\n<p>Hence, </p>\n\n<p>$$0\\le \\int_0^1 \\left(1-(1-t^n)^{1/n}\\right)\\,dt\\le \\frac{\\pi^2}{6n^2}$$</p>\n\n<p>By the comparison test, the series converges.</p>\n" }, { "answer_id": 2097802, "score": 4, "text": "<p>Here is a rather elementary solution.</p>\n\n<p>Notice that $u_n$ is the area of the region $D$ in the unit square $[0,1]^2$ defined by $x^n + y^n \\geq 1$. Now let $a_n = 2^{-1/n}$ and we split $D$ into three parts,</p>\n\n<ul>\n<li>$ D_1 = \\{(x, y) \\in [0,1]^2 : x^n + y^n \\geq 1 \\text{ and } x \\leq a_n \\} $</li>\n<li>$ D_2 = \\{(x, y) \\in [0,1]^2 : x^n + y^n \\geq 1 \\text{ and } y \\leq a_n \\} $</li>\n<li>$ D_3 = \\{(x, y) \\in [0,1]^2 : x, y \\geq a_n \\} $</li>\n</ul>\n\n<p>$\\hspace{8em}$ <a href=\"https://i.stack.imgur.com/9N10Q.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/9N10Q.png\" alt=\"enter image description here\"></a></p>\n\n<p>Then it is easy to check that $D = D_1 \\cup D_2 \\cup D_3$ and they are non-overlapping. Also, exploiting the symmetry, we check that $D_1$ and $D_2$ have the same area. So it follows that</p>\n\n<p>\\begin{align*}\nu_n\n&amp;= 2\\text{[Area of $D_1$]} + \\text{[Area of $D_3$]} \\\\\n&amp;= 2\\int_{0}^{a_n} (1 - (1-x^n)^{1/n}) \\, dx + (1 - a_n)^2.\n\\end{align*}</p>\n\n<p>Since $a_n = 1 - \\mathcal{O}(\\frac{1}{n})$, the term $(1-a_n)^2$ is good. 
For the integral term, notice that for $x \\in [0, a_n]$,</p>\n\n<p>\\begin{align*}\n1 - (1-x^n)^{1/n}\n&amp;= 1 - e^{\\frac{1}{n}\\log(1-x^n)}\n \\stackrel{\\text{(1)}}{\\leq} -\\frac{1}{n}\\log(1-x^n) \\\\\n&amp;= \\int_{0}^{x} \\frac{t^{n-1}}{1-t^n} \\, dt\n \\stackrel{\\text{(2)}}{\\leq} \\int_{0}^{x} 2t^{n-1} \\, dt \\\\\n&amp;= \\frac{2}{n}x^n.\n\\end{align*}</p>\n\n<p>For $\\text{(1)}$, we used the inequality $e^t \\geq 1 + t$ which holds for all real $t$. The second inequality $\\text{(2)}$ follows from the fact that $1-t^n \\geq \\frac{1}{2}$ for $t \\in [0,a_n]$. Thus</p>\n\n<p>$$ u_n \\leq \\frac{4}{n}\\int_{0}^{a_n} x^n \\, dx + (1-a_n)^2 \\leq \\frac{4}{n(n+1)} + (1-a_n)^2 = \\mathcal{O}\\left(\\frac{1}{n^2}\\right). $$</p>\n\n<p>This proves the convergence of $\\sum_{n=1}^{\\infty} u_n$.</p>\n" }, { "answer_id": 2097707, "score": 2, "text": "<p>Using Euler's Beta function, we are simply asked about the convergence of\n$$ \\sum_{n\\geq 1}\\left(1-\\frac{\\Gamma\\left(1+\\tfrac{1}{n}\\right)^2}{\\Gamma\\left(1+\\tfrac{2}{n}\\right)}\\right) $$\nwhere $\\Gamma(1+x)$ is a regular function in a neighbourhood of $x=0$. Since\n$$ \\log\\Gamma(1+x) = -\\gamma x+\\frac{\\zeta(2)}{2!}x^2-\\frac{\\zeta(3)}{3}x^3+\\frac{\\zeta(4)}{4}x^4-\\ldots $$\nwe have that \n$$ 1-\\frac{\\Gamma\\left(1+\\tfrac{1}{n}\\right)^2}{\\Gamma\\left(1+\\tfrac{2}{n}\\right)} \\sim \\frac{\\pi^2}{6n^2} $$\nso the given series is convergent for sure. Geometrically, we are saying that if $A_n$ is the area of the region in the $[0,1]\\times[0,1]$ square given by $x^{n+1}+y^{n+1}\\leq 1 \\leq x^n+y^n$, then\n$$ A_1+2A_2+3 A_3+\\ldots $$\nis a convergent series. 
That can be proved by elementary means, too.</p>\n" }, { "answer_id": 2097809, "score": 6, "text": "<p>We have </p>\n\n<p>$$\\begin{align}\n(1-t^n)^{1/n}&amp;=e^{\\frac1n \\log(1-t^n)}\\\\\\\\\n&amp;\\ge 1+\\frac1n\\log(1-t^n)\n\\end{align}$$</p>\n\n<p>from which </p>\n\n<p>$$\\begin{align}\n\\int_0^1 (1-t^n)^{1/n}\\,dt&amp;\\ge 1+\\frac1n \\int_0^1 \\log(1-t^n)\\,dt\\\\\\\\\n&amp;=1+\\frac1{n^2} \\int_0^1 \\frac{\\log(1-t)}{t}t^{1/n}\\,dt\\\\\\\\\n&amp;\\ge 1+\\frac{1}{n^2}\\int_0^1 \\frac{\\log(1-t)}{t}\\,dt\\\\\\\n&amp;= 1-\\frac{\\pi^2}{6n^2} \n\\end{align}$$</p>\n\n<p>Hence, </p>\n\n<p>$$0\\le \\int_0^1 \\left(1-(1-t^n)^{1/n}\\right)\\,dt\\le \\frac{\\pi^2}{6n^2}$$</p>\n\n<p>By the comparison test, the series converges.</p>\n" }, { "answer_id": 2097802, "score": 4, "text": "<p>Here is a rather elementary solution.</p>\n\n<p>Notice that $u_n$ is the area of the region $D$ in the unit square $[0,1]^2$ defined by $x^n + y^n \\geq 1$. Now let $a_n = 2^{-1/n}$ and we split $D$ into three parts,</p>\n\n<ul>\n<li>$ D_1 = \\{(x, y) \\in [0,1]^2 : x^n + y^n \\geq 1 \\text{ and } x \\leq a_n \\} $</li>\n<li>$ D_2 = \\{(x, y) \\in [0,1]^2 : x^n + y^n \\geq 1 \\text{ and } y \\leq a_n \\} $</li>\n<li>$ D_3 = \\{(x, y) \\in [0,1]^2 : x, y \\geq a_n \\} $</li>\n</ul>\n\n<p>$\\hspace{8em}$ <a href=\"https://i.stack.imgur.com/9N10Q.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/9N10Q.png\" alt=\"enter image description here\"></a></p>\n\n<p>Then it is easy to check that $D = D_1 \\cup D_2 \\cup D_3$ and they are non-overlapping. Also, exploiting the symmetry, we check that $D_1$ and $D_2$ have the same area. So it follows that</p>\n\n<p>\\begin{align*}\nu_n\n&amp;= 2\\text{[Area of $D_1$]} + \\text{[Area of $D_3$]} \\\\\n&amp;= 2\\int_{0}^{a_n} (1 - (1-x^n)^{1/n}) \\, dx + (1 - a_n)^2.\n\\end{align*}</p>\n\n<p>Since $a_n = 1 - \\mathcal{O}(\\frac{1}{n})$, the term $(1-a_n)^2$ is good. 
For the integral term, notice that for $x \\in [0, a_n]$,</p>\n\n<p>\\begin{align*}\n1 - (1-x^n)^{1/n}\n&amp;= 1 - e^{\\frac{1}{n}\\log(1-x^n)}\n \\stackrel{\\text{(1)}}{\\leq} -\\frac{1}{n}\\log(1-x^n) \\\\\n&amp;= \\int_{0}^{x} \\frac{t^{n-1}}{1-t^n} \\, dt\n \\stackrel{\\text{(2)}}{\\leq} \\int_{0}^{x} 2t^{n-1} \\, dt \\\\\n&amp;= \\frac{2}{n}x^n.\n\\end{align*}</p>\n\n<p>For $\\text{(1)}$, we used the inequality $e^t \\geq 1 + t$ which holds for all real $t$. The second inequality $\\text{(2)}$ follows from the fact that $1-t^n \\geq \\frac{1}{2}$ for $t \\in [0,a_n]$. Thus</p>\n\n<p>$$ u_n \\leq \\frac{4}{n}\\int_{0}^{a_n} x^n \\, dx + (1-a_n)^2 \\leq \\frac{4}{n(n+1)} + (1-a_n)^2 = \\mathcal{O}\\left(\\frac{1}{n^2}\\right). $$</p>\n\n<p>This proves the convergence of $\\sum_{n=1}^{\\infty} u_n$.</p>\n" }, { "answer_id": 2097707, "score": 2, "text": "<p>Using Euler's Beta function, we are simply asked about the convergence of\n$$ \\sum_{n\\geq 1}\\left(1-\\frac{\\Gamma\\left(1+\\tfrac{1}{n}\\right)^2}{\\Gamma\\left(1+\\tfrac{2}{n}\\right)}\\right) $$\nwhere $\\Gamma(1+x)$ is a regular function in a neighbourhood of $x=0$. Since\n$$ \\log\\Gamma(1+x) = -\\gamma x+\\frac{\\zeta(2)}{2!}x^2-\\frac{\\zeta(3)}{3}x^3+\\frac{\\zeta(4)}{4}x^4-\\ldots $$\nwe have that \n$$ 1-\\frac{\\Gamma\\left(1+\\tfrac{1}{n}\\right)^2}{\\Gamma\\left(1+\\tfrac{2}{n}\\right)} \\sim \\frac{\\pi^2}{6n^2} $$\nso the given series is convergent for sure. Geometrically, we are saying that if $A_n$ is the area of the region in the $[0,1]\\times[0,1]$ square given by $x^{n+1}+y^{n+1}\\leq 1 \\leq x^n+y^n$, then\n$$ A_1+2A_2+3 A_3+\\ldots $$\nis a convergent series. 
That can be proved by elementary means, too.</p>\n" }, { "answer_id": 2097809, "score": 6, "text": "<p>We have </p>\n\n<p>$$\\begin{align}\n(1-t^n)^{1/n}&amp;=e^{\\frac1n \\log(1-t^n)}\\\\\\\\\n&amp;\\ge 1+\\frac1n\\log(1-t^n)\n\\end{align}$$</p>\n\n<p>from which </p>\n\n<p>$$\\begin{align}\n\\int_0^1 (1-t^n)^{1/n}\\,dt&amp;\\ge 1+\\frac1n \\int_0^1 \\log(1-t^n)\\,dt\\\\\\\\\n&amp;=1+\\frac1{n^2} \\int_0^1 \\frac{\\log(1-t)}{t}t^{1/n}\\,dt\\\\\\\\\n&amp;\\ge 1+\\frac{1}{n^2}\\int_0^1 \\frac{\\log(1-t)}{t}\\,dt\\\\\\\n&amp;= 1-\\frac{\\pi^2}{6n^2} \n\\end{align}$$</p>\n\n<p>Hence, </p>\n\n<p>$$0\\le \\int_0^1 \\left(1-(1-t^n)^{1/n}\\right)\\,dt\\le \\frac{\\pi^2}{6n^2}$$</p>\n\n<p>By the comparison test, the series converges.</p>\n" }, { "answer_id": 2097802, "score": 4, "text": "<p>Here is a rather elementary solution.</p>\n\n<p>Notice that $u_n$ is the area of the region $D$ in the unit square $[0,1]^2$ defined by $x^n + y^n \\geq 1$. Now let $a_n = 2^{-1/n}$ and we split $D$ into three parts,</p>\n\n<ul>\n<li>$ D_1 = \\{(x, y) \\in [0,1]^2 : x^n + y^n \\geq 1 \\text{ and } x \\leq a_n \\} $</li>\n<li>$ D_2 = \\{(x, y) \\in [0,1]^2 : x^n + y^n \\geq 1 \\text{ and } y \\leq a_n \\} $</li>\n<li>$ D_3 = \\{(x, y) \\in [0,1]^2 : x, y \\geq a_n \\} $</li>\n</ul>\n\n<p>$\\hspace{8em}$ <a href=\"https://i.stack.imgur.com/9N10Q.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/9N10Q.png\" alt=\"enter image description here\"></a></p>\n\n<p>Then it is easy to check that $D = D_1 \\cup D_2 \\cup D_3$ and they are non-overlapping. Also, exploiting the symmetry, we check that $D_1$ and $D_2$ have the same area. So it follows that</p>\n\n<p>\\begin{align*}\nu_n\n&amp;= 2\\text{[Area of $D_1$]} + \\text{[Area of $D_3$]} \\\\\n&amp;= 2\\int_{0}^{a_n} (1 - (1-x^n)^{1/n}) \\, dx + (1 - a_n)^2.\n\\end{align*}</p>\n\n<p>Since $a_n = 1 - \\mathcal{O}(\\frac{1}{n})$, the term $(1-a_n)^2$ is good. 
For the integral term, notice that for $x \\in [0, a_n]$,</p>\n\n<p>\\begin{align*}\n1 - (1-x^n)^{1/n}\n&amp;= 1 - e^{\\frac{1}{n}\\log(1-x^n)}\n \\stackrel{\\text{(1)}}{\\leq} -\\frac{1}{n}\\log(1-x^n) \\\\\n&amp;= \\int_{0}^{x} \\frac{t^{n-1}}{1-t^n} \\, dt\n \\stackrel{\\text{(2)}}{\\leq} \\int_{0}^{x} 2t^{n-1} \\, dt \\\\\n&amp;= \\frac{2}{n}x^n.\n\\end{align*}</p>\n\n<p>For $\\text{(1)}$, we used the inequality $e^t \\geq 1 + t$ which holds for all real $t$. The second inequality $\\text{(2)}$ follows from the fact that $1-t^n \\geq \\frac{1}{2}$ for $t \\in [0,a_n]$. Thus</p>\n\n<p>$$ u_n \\leq \\frac{4}{n}\\int_{0}^{a_n} x^n \\, dx + (1-a_n)^2 \\leq \\frac{4}{n(n+1)} + (1-a_n)^2 = \\mathcal{O}\\left(\\frac{1}{n^2}\\right). $$</p>\n\n<p>This proves the convergence of $\\sum_{n=1}^{\\infty} u_n$.</p>\n" }, { "answer_id": 2097707, "score": 2, "text": "<p>Using Euler's Beta function, we are simply asked about the convergence of\n$$ \\sum_{n\\geq 1}\\left(1-\\frac{\\Gamma\\left(1+\\tfrac{1}{n}\\right)^2}{\\Gamma\\left(1+\\tfrac{2}{n}\\right)}\\right) $$\nwhere $\\Gamma(1+x)$ is a regular function in a neighbourhood of $x=0$. Since\n$$ \\log\\Gamma(1+x) = -\\gamma x+\\frac{\\zeta(2)}{2!}x^2-\\frac{\\zeta(3)}{3}x^3+\\frac{\\zeta(4)}{4}x^4-\\ldots $$\nwe have that \n$$ 1-\\frac{\\Gamma\\left(1+\\tfrac{1}{n}\\right)^2}{\\Gamma\\left(1+\\tfrac{2}{n}\\right)} \\sim \\frac{\\pi^2}{6n^2} $$\nso the given series is convergent for sure. Geometrically, we are saying that if $A_n$ is the area of the region in the $[0,1]\\times[0,1]$ square given by $x^{n+1}+y^{n+1}\\leq 1 \\leq x^n+y^n$, then\n$$ A_1+2A_2+3 A_3+\\ldots $$\nis a convergent series. 
That can be proved by elementary means, too.</p>\n" }, { "answer_id": 2097809, "score": 6, "text": "<p>We have </p>\n\n<p>$$\\begin{align}\n(1-t^n)^{1/n}&amp;=e^{\\frac1n \\log(1-t^n)}\\\\\\\\\n&amp;\\ge 1+\\frac1n\\log(1-t^n)\n\\end{align}$$</p>\n\n<p>from which </p>\n\n<p>$$\\begin{align}\n\\int_0^1 (1-t^n)^{1/n}\\,dt&amp;\\ge 1+\\frac1n \\int_0^1 \\log(1-t^n)\\,dt\\\\\\\\\n&amp;=1+\\frac1{n^2} \\int_0^1 \\frac{\\log(1-t)}{t}t^{1/n}\\,dt\\\\\\\\\n&amp;\\ge 1+\\frac{1}{n^2}\\int_0^1 \\frac{\\log(1-t)}{t}\\,dt\\\\\\\n&amp;= 1-\\frac{\\pi^2}{6n^2} \n\\end{align}$$</p>\n\n<p>Hence, </p>\n\n<p>$$0\\le \\int_0^1 \\left(1-(1-t^n)^{1/n}\\right)\\,dt\\le \\frac{\\pi^2}{6n^2}$$</p>\n\n<p>By the comparison test, the series converges.</p>\n" }, { "answer_id": 2097802, "score": 4, "text": "<p>Here is a rather elementary solution.</p>\n\n<p>Notice that $u_n$ is the area of the region $D$ in the unit square $[0,1]^2$ defined by $x^n + y^n \\geq 1$. Now let $a_n = 2^{-1/n}$ and we split $D$ into three parts,</p>\n\n<ul>\n<li>$ D_1 = \\{(x, y) \\in [0,1]^2 : x^n + y^n \\geq 1 \\text{ and } x \\leq a_n \\} $</li>\n<li>$ D_2 = \\{(x, y) \\in [0,1]^2 : x^n + y^n \\geq 1 \\text{ and } y \\leq a_n \\} $</li>\n<li>$ D_3 = \\{(x, y) \\in [0,1]^2 : x, y \\geq a_n \\} $</li>\n</ul>\n\n<p>$\\hspace{8em}$ <a href=\"https://i.stack.imgur.com/9N10Q.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/9N10Q.png\" alt=\"enter image description here\"></a></p>\n\n<p>Then it is easy to check that $D = D_1 \\cup D_2 \\cup D_3$ and they are non-overlapping. Also, exploiting the symmetry, we check that $D_1$ and $D_2$ have the same area. So it follows that</p>\n\n<p>\\begin{align*}\nu_n\n&amp;= 2\\text{[Area of $D_1$]} + \\text{[Area of $D_3$]} \\\\\n&amp;= 2\\int_{0}^{a_n} (1 - (1-x^n)^{1/n}) \\, dx + (1 - a_n)^2.\n\\end{align*}</p>\n\n<p>Since $a_n = 1 - \\mathcal{O}(\\frac{1}{n})$, the term $(1-a_n)^2$ is good. 
For the integral term, notice that for $x \\in [0, a_n]$,</p>\n\n<p>\\begin{align*}\n1 - (1-x^n)^{1/n}\n&amp;= 1 - e^{\\frac{1}{n}\\log(1-x^n)}\n \\stackrel{\\text{(1)}}{\\leq} -\\frac{1}{n}\\log(1-x^n) \\\\\n&amp;= \\int_{0}^{x} \\frac{t^{n-1}}{1-t^n} \\, dt\n \\stackrel{\\text{(2)}}{\\leq} \\int_{0}^{x} 2t^{n-1} \\, dt \\\\\n&amp;= \\frac{2}{n}x^n.\n\\end{align*}</p>\n\n<p>For $\\text{(1)}$, we used the inequality $e^t \\geq 1 + t$ which holds for all real $t$. The second inequality $\\text{(2)}$ follows from the fact that $1-t^n \\geq \\frac{1}{2}$ for $t \\in [0,a_n]$. Thus</p>\n\n<p>$$ u_n \\leq \\frac{4}{n}\\int_{0}^{a_n} x^n \\, dx + (1-a_n)^2 \\leq \\frac{4}{n(n+1)} + (1-a_n)^2 = \\mathcal{O}\\left(\\frac{1}{n^2}\\right). $$</p>\n\n<p>This proves the convergence of $\\sum_{n=1}^{\\infty} u_n$.</p>\n" }, { "answer_id": 2097707, "score": 2, "text": "<p>Using Euler's Beta function, we are simply asked about the convergence of\n$$ \\sum_{n\\geq 1}\\left(1-\\frac{\\Gamma\\left(1+\\tfrac{1}{n}\\right)^2}{\\Gamma\\left(1+\\tfrac{2}{n}\\right)}\\right) $$\nwhere $\\Gamma(1+x)$ is a regular function in a neighbourhood of $x=0$. Since\n$$ \\log\\Gamma(1+x) = -\\gamma x+\\frac{\\zeta(2)}{2!}x^2-\\frac{\\zeta(3)}{3}x^3+\\frac{\\zeta(4)}{4}x^4-\\ldots $$\nwe have that \n$$ 1-\\frac{\\Gamma\\left(1+\\tfrac{1}{n}\\right)^2}{\\Gamma\\left(1+\\tfrac{2}{n}\\right)} \\sim \\frac{\\pi^2}{6n^2} $$\nso the given series is convergent for sure. Geometrically, we are saying that if $A_n$ is the area of the region in the $[0,1]\\times[0,1]$ square given by $x^{n+1}+y^{n+1}\\leq 1 \\leq x^n+y^n$, then\n$$ A_1+2A_2+3 A_3+\\ldots $$\nis a convergent series. 
That can be proved by elementary means, too.</p>\n" }, { "answer_id": 2097809, "score": 6, "text": "<p>We have </p>\n\n<p>$$\\begin{align}\n(1-t^n)^{1/n}&amp;=e^{\\frac1n \\log(1-t^n)}\\\\\\\\\n&amp;\\ge 1+\\frac1n\\log(1-t^n)\n\\end{align}$$</p>\n\n<p>from which </p>\n\n<p>$$\\begin{align}\n\\int_0^1 (1-t^n)^{1/n}\\,dt&amp;\\ge 1+\\frac1n \\int_0^1 \\log(1-t^n)\\,dt\\\\\\\\\n&amp;=1+\\frac1{n^2} \\int_0^1 \\frac{\\log(1-t)}{t}t^{1/n}\\,dt\\\\\\\\\n&amp;\\ge 1+\\frac{1}{n^2}\\int_0^1 \\frac{\\log(1-t)}{t}\\,dt\\\\\\\n&amp;= 1-\\frac{\\pi^2}{6n^2} \n\\end{align}$$</p>\n\n<p>Hence, </p>\n\n<p>$$0\\le \\int_0^1 \\left(1-(1-t^n)^{1/n}\\right)\\,dt\\le \\frac{\\pi^2}{6n^2}$$</p>\n\n<p>By the comparison test, the series converges.</p>\n" }, { "answer_id": 2097802, "score": 4, "text": "<p>Here is a rather elementary solution.</p>\n\n<p>Notice that $u_n$ is the area of the region $D$ in the unit square $[0,1]^2$ defined by $x^n + y^n \\geq 1$. Now let $a_n = 2^{-1/n}$ and we split $D$ into three parts,</p>\n\n<ul>\n<li>$ D_1 = \\{(x, y) \\in [0,1]^2 : x^n + y^n \\geq 1 \\text{ and } x \\leq a_n \\} $</li>\n<li>$ D_2 = \\{(x, y) \\in [0,1]^2 : x^n + y^n \\geq 1 \\text{ and } y \\leq a_n \\} $</li>\n<li>$ D_3 = \\{(x, y) \\in [0,1]^2 : x, y \\geq a_n \\} $</li>\n</ul>\n\n<p>$\\hspace{8em}$ <a href=\"https://i.stack.imgur.com/9N10Q.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/9N10Q.png\" alt=\"enter image description here\"></a></p>\n\n<p>Then it is easy to check that $D = D_1 \\cup D_2 \\cup D_3$ and they are non-overlapping. Also, exploiting the symmetry, we check that $D_1$ and $D_2$ have the same area. So it follows that</p>\n\n<p>\\begin{align*}\nu_n\n&amp;= 2\\text{[Area of $D_1$]} + \\text{[Area of $D_3$]} \\\\\n&amp;= 2\\int_{0}^{a_n} (1 - (1-x^n)^{1/n}) \\, dx + (1 - a_n)^2.\n\\end{align*}</p>\n\n<p>Since $a_n = 1 - \\mathcal{O}(\\frac{1}{n})$, the term $(1-a_n)^2$ is good. 
For the integral term, notice that for $x \\in [0, a_n]$,</p>\n\n<p>\\begin{align*}\n1 - (1-x^n)^{1/n}\n&amp;= 1 - e^{\\frac{1}{n}\\log(1-x^n)}\n \\stackrel{\\text{(1)}}{\\leq} -\\frac{1}{n}\\log(1-x^n) \\\\\n&amp;= \\int_{0}^{x} \\frac{t^{n-1}}{1-t^n} \\, dt\n \\stackrel{\\text{(2)}}{\\leq} \\int_{0}^{x} 2t^{n-1} \\, dt \\\\\n&amp;= \\frac{2}{n}x^n.\n\\end{align*}</p>\n\n<p>For $\\text{(1)}$, we used the inequality $e^t \\geq 1 + t$ which holds for all real $t$. The second inequality $\\text{(2)}$ follows from the fact that $1-t^n \\geq \\frac{1}{2}$ for $t \\in [0,a_n]$. Thus</p>\n\n<p>$$ u_n \\leq \\frac{4}{n}\\int_{0}^{a_n} x^n \\, dx + (1-a_n)^2 \\leq \\frac{4}{n(n+1)} + (1-a_n)^2 = \\mathcal{O}\\left(\\frac{1}{n^2}\\right). $$</p>\n\n<p>This proves the convergence of $\\sum_{n=1}^{\\infty} u_n$.</p>\n" }, { "answer_id": 2097707, "score": 2, "text": "<p>Using Euler's Beta function, we are simply asked about the convergence of\n$$ \\sum_{n\\geq 1}\\left(1-\\frac{\\Gamma\\left(1+\\tfrac{1}{n}\\right)^2}{\\Gamma\\left(1+\\tfrac{2}{n}\\right)}\\right) $$\nwhere $\\Gamma(1+x)$ is a regular function in a neighbourhood of $x=0$. Since\n$$ \\log\\Gamma(1+x) = -\\gamma x+\\frac{\\zeta(2)}{2!}x^2-\\frac{\\zeta(3)}{3}x^3+\\frac{\\zeta(4)}{4}x^4-\\ldots $$\nwe have that \n$$ 1-\\frac{\\Gamma\\left(1+\\tfrac{1}{n}\\right)^2}{\\Gamma\\left(1+\\tfrac{2}{n}\\right)} \\sim \\frac{\\pi^2}{6n^2} $$\nso the given series is convergent for sure. Geometrically, we are saying that if $A_n$ is the area of the region in the $[0,1]\\times[0,1]$ square given by $x^{n+1}+y^{n+1}\\leq 1 \\leq x^n+y^n$, then\n$$ A_1+2A_2+3 A_3+\\ldots $$\nis a convergent series. 
That can be proved by elementary means, too.</p>\n" }, { "answer_id": 2097809, "score": 6, "text": "<p>We have </p>\n\n<p>$$\\begin{align}\n(1-t^n)^{1/n}&amp;=e^{\\frac1n \\log(1-t^n)}\\\\\\\\\n&amp;\\ge 1+\\frac1n\\log(1-t^n)\n\\end{align}$$</p>\n\n<p>from which </p>\n\n<p>$$\\begin{align}\n\\int_0^1 (1-t^n)^{1/n}\\,dt&amp;\\ge 1+\\frac1n \\int_0^1 \\log(1-t^n)\\,dt\\\\\\\\\n&amp;=1+\\frac1{n^2} \\int_0^1 \\frac{\\log(1-t)}{t}t^{1/n}\\,dt\\\\\\\\\n&amp;\\ge 1+\\frac{1}{n^2}\\int_0^1 \\frac{\\log(1-t)}{t}\\,dt\\\\\\\n&amp;= 1-\\frac{\\pi^2}{6n^2} \n\\end{align}$$</p>\n\n<p>Hence, </p>\n\n<p>$$0\\le \\int_0^1 \\left(1-(1-t^n)^{1/n}\\right)\\,dt\\le \\frac{\\pi^2}{6n^2}$$</p>\n\n<p>By the comparison test, the series converges.</p>\n" }, { "answer_id": 2097802, "score": 4, "text": "<p>Here is a rather elementary solution.</p>\n\n<p>Notice that $u_n$ is the area of the region $D$ in the unit square $[0,1]^2$ defined by $x^n + y^n \\geq 1$. Now let $a_n = 2^{-1/n}$ and we split $D$ into three parts,</p>\n\n<ul>\n<li>$ D_1 = \\{(x, y) \\in [0,1]^2 : x^n + y^n \\geq 1 \\text{ and } x \\leq a_n \\} $</li>\n<li>$ D_2 = \\{(x, y) \\in [0,1]^2 : x^n + y^n \\geq 1 \\text{ and } y \\leq a_n \\} $</li>\n<li>$ D_3 = \\{(x, y) \\in [0,1]^2 : x, y \\geq a_n \\} $</li>\n</ul>\n\n<p>$\\hspace{8em}$ <a href=\"https://i.stack.imgur.com/9N10Q.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/9N10Q.png\" alt=\"enter image description here\"></a></p>\n\n<p>Then it is easy to check that $D = D_1 \\cup D_2 \\cup D_3$ and they are non-overlapping. Also, exploiting the symmetry, we check that $D_1$ and $D_2$ have the same area. So it follows that</p>\n\n<p>\\begin{align*}\nu_n\n&amp;= 2\\text{[Area of $D_1$]} + \\text{[Area of $D_3$]} \\\\\n&amp;= 2\\int_{0}^{a_n} (1 - (1-x^n)^{1/n}) \\, dx + (1 - a_n)^2.\n\\end{align*}</p>\n\n<p>Since $a_n = 1 - \\mathcal{O}(\\frac{1}{n})$, the term $(1-a_n)^2$ is good. 
For the integral term, notice that for $x \\in [0, a_n]$,</p>\n\n<p>\\begin{align*}\n1 - (1-x^n)^{1/n}\n&amp;= 1 - e^{\\frac{1}{n}\\log(1-x^n)}\n \\stackrel{\\text{(1)}}{\\leq} -\\frac{1}{n}\\log(1-x^n) \\\\\n&amp;= \\int_{0}^{x} \\frac{t^{n-1}}{1-t^n} \\, dt\n \\stackrel{\\text{(2)}}{\\leq} \\int_{0}^{x} 2t^{n-1} \\, dt \\\\\n&amp;= \\frac{2}{n}x^n.\n\\end{align*}</p>\n\n<p>For $\\text{(1)}$, we used the inequality $e^t \\geq 1 + t$ which holds for all real $t$. The second inequality $\\text{(2)}$ follows from the fact that $1-t^n \\geq \\frac{1}{2}$ for $t \\in [0,a_n]$. Thus</p>\n\n<p>$$ u_n \\leq \\frac{4}{n}\\int_{0}^{a_n} x^n \\, dx + (1-a_n)^2 \\leq \\frac{4}{n(n+1)} + (1-a_n)^2 = \\mathcal{O}\\left(\\frac{1}{n^2}\\right). $$</p>\n\n<p>This proves the convergence of $\\sum_{n=1}^{\\infty} u_n$.</p>\n" }, { "answer_id": 2097707, "score": 2, "text": "<p>Using Euler's Beta function, we are simply asked about the convergence of\n$$ \\sum_{n\\geq 1}\\left(1-\\frac{\\Gamma\\left(1+\\tfrac{1}{n}\\right)^2}{\\Gamma\\left(1+\\tfrac{2}{n}\\right)}\\right) $$\nwhere $\\Gamma(1+x)$ is a regular function in a neighbourhood of $x=0$. Since\n$$ \\log\\Gamma(1+x) = -\\gamma x+\\frac{\\zeta(2)}{2!}x^2-\\frac{\\zeta(3)}{3}x^3+\\frac{\\zeta(4)}{4}x^4-\\ldots $$\nwe have that \n$$ 1-\\frac{\\Gamma\\left(1+\\tfrac{1}{n}\\right)^2}{\\Gamma\\left(1+\\tfrac{2}{n}\\right)} \\sim \\frac{\\pi^2}{6n^2} $$\nso the given series is convergent for sure. Geometrically, we are saying that if $A_n$ is the area of the region in the $[0,1]\\times[0,1]$ square given by $x^{n+1}+y^{n+1}\\leq 1 \\leq x^n+y^n$, then\n$$ A_1+2A_2+3 A_3+\\ldots $$\nis a convergent series. 
That can be proved by elementary means, too.</p>\n" }, { "answer_id": 2097809, "score": 6, "text": "<p>We have </p>\n\n<p>$$\\begin{align}\n(1-t^n)^{1/n}&amp;=e^{\\frac1n \\log(1-t^n)}\\\\\\\\\n&amp;\\ge 1+\\frac1n\\log(1-t^n)\n\\end{align}$$</p>\n\n<p>from which </p>\n\n<p>$$\\begin{align}\n\\int_0^1 (1-t^n)^{1/n}\\,dt&amp;\\ge 1+\\frac1n \\int_0^1 \\log(1-t^n)\\,dt\\\\\\\\\n&amp;=1+\\frac1{n^2} \\int_0^1 \\frac{\\log(1-t)}{t}t^{1/n}\\,dt\\\\\\\\\n&amp;\\ge 1+\\frac{1}{n^2}\\int_0^1 \\frac{\\log(1-t)}{t}\\,dt\\\\\\\n&amp;= 1-\\frac{\\pi^2}{6n^2} \n\\end{align}$$</p>\n\n<p>Hence, </p>\n\n<p>$$0\\le \\int_0^1 \\left(1-(1-t^n)^{1/n}\\right)\\,dt\\le \\frac{\\pi^2}{6n^2}$$</p>\n\n<p>By the comparison test, the series converges.</p>\n" }, { "answer_id": 2097802, "score": 4, "text": "<p>Here is a rather elementary solution.</p>\n\n<p>Notice that $u_n$ is the area of the region $D$ in the unit square $[0,1]^2$ defined by $x^n + y^n \\geq 1$. Now let $a_n = 2^{-1/n}$ and we split $D$ into three parts,</p>\n\n<ul>\n<li>$ D_1 = \\{(x, y) \\in [0,1]^2 : x^n + y^n \\geq 1 \\text{ and } x \\leq a_n \\} $</li>\n<li>$ D_2 = \\{(x, y) \\in [0,1]^2 : x^n + y^n \\geq 1 \\text{ and } y \\leq a_n \\} $</li>\n<li>$ D_3 = \\{(x, y) \\in [0,1]^2 : x, y \\geq a_n \\} $</li>\n</ul>\n\n<p>$\\hspace{8em}$ <a href=\"https://i.stack.imgur.com/9N10Q.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/9N10Q.png\" alt=\"enter image description here\"></a></p>\n\n<p>Then it is easy to check that $D = D_1 \\cup D_2 \\cup D_3$ and they are non-overlapping. Also, exploiting the symmetry, we check that $D_1$ and $D_2$ have the same area. So it follows that</p>\n\n<p>\\begin{align*}\nu_n\n&amp;= 2\\text{[Area of $D_1$]} + \\text{[Area of $D_3$]} \\\\\n&amp;= 2\\int_{0}^{a_n} (1 - (1-x^n)^{1/n}) \\, dx + (1 - a_n)^2.\n\\end{align*}</p>\n\n<p>Since $a_n = 1 - \\mathcal{O}(\\frac{1}{n})$, the term $(1-a_n)^2$ is good. 
For the integral term, notice that for $x \\in [0, a_n]$,</p>\n\n<p>\\begin{align*}\n1 - (1-x^n)^{1/n}\n&amp;= 1 - e^{\\frac{1}{n}\\log(1-x^n)}\n \\stackrel{\\text{(1)}}{\\leq} -\\frac{1}{n}\\log(1-x^n) \\\\\n&amp;= \\int_{0}^{x} \\frac{t^{n-1}}{1-t^n} \\, dt\n \\stackrel{\\text{(2)}}{\\leq} \\int_{0}^{x} 2t^{n-1} \\, dt \\\\\n&amp;= \\frac{2}{n}x^n.\n\\end{align*}</p>\n\n<p>For $\\text{(1)}$, we used the inequality $e^t \\geq 1 + t$ which holds for all real $t$. The second inequality $\\text{(2)}$ follows from the fact that $1-t^n \\geq \\frac{1}{2}$ for $t \\in [0,a_n]$. Thus</p>\n\n<p>$$ u_n \\leq \\frac{4}{n}\\int_{0}^{a_n} x^n \\, dx + (1-a_n)^2 \\leq \\frac{4}{n(n+1)} + (1-a_n)^2 = \\mathcal{O}\\left(\\frac{1}{n^2}\\right). $$</p>\n\n<p>This proves the convergence of $\\sum_{n=1}^{\\infty} u_n$.</p>\n" }, { "answer_id": 2097707, "score": 2, "text": "<p>Using Euler's Beta function, we are simply asked about the convergence of\n$$ \\sum_{n\\geq 1}\\left(1-\\frac{\\Gamma\\left(1+\\tfrac{1}{n}\\right)^2}{\\Gamma\\left(1+\\tfrac{2}{n}\\right)}\\right) $$\nwhere $\\Gamma(1+x)$ is a regular function in a neighbourhood of $x=0$. Since\n$$ \\log\\Gamma(1+x) = -\\gamma x+\\frac{\\zeta(2)}{2!}x^2-\\frac{\\zeta(3)}{3}x^3+\\frac{\\zeta(4)}{4}x^4-\\ldots $$\nwe have that \n$$ 1-\\frac{\\Gamma\\left(1+\\tfrac{1}{n}\\right)^2}{\\Gamma\\left(1+\\tfrac{2}{n}\\right)} \\sim \\frac{\\pi^2}{6n^2} $$\nso the given series is convergent for sure. Geometrically, we are saying that if $A_n$ is the area of the region in the $[0,1]\\times[0,1]$ square given by $x^{n+1}+y^{n+1}\\leq 1 \\leq x^n+y^n$, then\n$$ A_1+2A_2+3 A_3+\\ldots $$\nis a convergent series. 
That can be proved by elementary means, too.</p>\n" }, { "answer_id": 2097809, "score": 6, "text": "<p>We have </p>\n\n<p>$$\\begin{align}\n(1-t^n)^{1/n}&amp;=e^{\\frac1n \\log(1-t^n)}\\\\\\\\\n&amp;\\ge 1+\\frac1n\\log(1-t^n)\n\\end{align}$$</p>\n\n<p>from which </p>\n\n<p>$$\\begin{align}\n\\int_0^1 (1-t^n)^{1/n}\\,dt&amp;\\ge 1+\\frac1n \\int_0^1 \\log(1-t^n)\\,dt\\\\\\\\\n&amp;=1+\\frac1{n^2} \\int_0^1 \\frac{\\log(1-t)}{t}t^{1/n}\\,dt\\\\\\\\\n&amp;\\ge 1+\\frac{1}{n^2}\\int_0^1 \\frac{\\log(1-t)}{t}\\,dt\\\\\\\n&amp;= 1-\\frac{\\pi^2}{6n^2} \n\\end{align}$$</p>\n\n<p>Hence, </p>\n\n<p>$$0\\le \\int_0^1 \\left(1-(1-t^n)^{1/n}\\right)\\,dt\\le \\frac{\\pi^2}{6n^2}$$</p>\n\n<p>By the comparison test, the series converges.</p>\n" }, { "answer_id": 2097802, "score": 4, "text": "<p>Here is a rather elementary solution.</p>\n\n<p>Notice that $u_n$ is the area of the region $D$ in the unit square $[0,1]^2$ defined by $x^n + y^n \\geq 1$. Now let $a_n = 2^{-1/n}$ and we split $D$ into three parts,</p>\n\n<ul>\n<li>$ D_1 = \\{(x, y) \\in [0,1]^2 : x^n + y^n \\geq 1 \\text{ and } x \\leq a_n \\} $</li>\n<li>$ D_2 = \\{(x, y) \\in [0,1]^2 : x^n + y^n \\geq 1 \\text{ and } y \\leq a_n \\} $</li>\n<li>$ D_3 = \\{(x, y) \\in [0,1]^2 : x, y \\geq a_n \\} $</li>\n</ul>\n\n<p>$\\hspace{8em}$ <a href=\"https://i.stack.imgur.com/9N10Q.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/9N10Q.png\" alt=\"enter image description here\"></a></p>\n\n<p>Then it is easy to check that $D = D_1 \\cup D_2 \\cup D_3$ and they are non-overlapping. Also, exploiting the symmetry, we check that $D_1$ and $D_2$ have the same area. So it follows that</p>\n\n<p>\\begin{align*}\nu_n\n&amp;= 2\\text{[Area of $D_1$]} + \\text{[Area of $D_3$]} \\\\\n&amp;= 2\\int_{0}^{a_n} (1 - (1-x^n)^{1/n}) \\, dx + (1 - a_n)^2.\n\\end{align*}</p>\n\n<p>Since $a_n = 1 - \\mathcal{O}(\\frac{1}{n})$, the term $(1-a_n)^2$ is good. 
For the integral term, notice that for $x \\in [0, a_n]$,</p>\n\n<p>\\begin{align*}\n1 - (1-x^n)^{1/n}\n&amp;= 1 - e^{\\frac{1}{n}\\log(1-x^n)}\n \\stackrel{\\text{(1)}}{\\leq} -\\frac{1}{n}\\log(1-x^n) \\\\\n&amp;= \\int_{0}^{x} \\frac{t^{n-1}}{1-t^n} \\, dt\n \\stackrel{\\text{(2)}}{\\leq} \\int_{0}^{x} 2t^{n-1} \\, dt \\\\\n&amp;= \\frac{2}{n}x^n.\n\\end{align*}</p>\n\n<p>For $\\text{(1)}$, we used the inequality $e^t \\geq 1 + t$ which holds for all real $t$. The second inequality $\\text{(2)}$ follows from the fact that $1-t^n \\geq \\frac{1}{2}$ for $t \\in [0,a_n]$. Thus</p>\n\n<p>$$ u_n \\leq \\frac{4}{n}\\int_{0}^{a_n} x^n \\, dx + (1-a_n)^2 \\leq \\frac{4}{n(n+1)} + (1-a_n)^2 = \\mathcal{O}\\left(\\frac{1}{n^2}\\right). $$</p>\n\n<p>This proves the convergence of $\\sum_{n=1}^{\\infty} u_n$.</p>\n" }, { "answer_id": 2097707, "score": 2, "text": "<p>Using Euler's Beta function, we are simply asked about the convergence of\n$$ \\sum_{n\\geq 1}\\left(1-\\frac{\\Gamma\\left(1+\\tfrac{1}{n}\\right)^2}{\\Gamma\\left(1+\\tfrac{2}{n}\\right)}\\right) $$\nwhere $\\Gamma(1+x)$ is a regular function in a neighbourhood of $x=0$. Since\n$$ \\log\\Gamma(1+x) = -\\gamma x+\\frac{\\zeta(2)}{2!}x^2-\\frac{\\zeta(3)}{3}x^3+\\frac{\\zeta(4)}{4}x^4-\\ldots $$\nwe have that \n$$ 1-\\frac{\\Gamma\\left(1+\\tfrac{1}{n}\\right)^2}{\\Gamma\\left(1+\\tfrac{2}{n}\\right)} \\sim \\frac{\\pi^2}{6n^2} $$\nso the given series is convergent for sure. Geometrically, we are saying that if $A_n$ is the area of the region in the $[0,1]\\times[0,1]$ square given by $x^{n+1}+y^{n+1}\\leq 1 \\leq x^n+y^n$, then\n$$ A_1+2A_2+3 A_3+\\ldots $$\nis a convergent series. 
That can be proved by elementary means, too.</p>\n" }, { "answer_id": 2097809, "score": 6, "text": "<p>We have </p>\n\n<p>$$\\begin{align}\n(1-t^n)^{1/n}&amp;=e^{\\frac1n \\log(1-t^n)}\\\\\\\\\n&amp;\\ge 1+\\frac1n\\log(1-t^n)\n\\end{align}$$</p>\n\n<p>from which </p>\n\n<p>$$\\begin{align}\n\\int_0^1 (1-t^n)^{1/n}\\,dt&amp;\\ge 1+\\frac1n \\int_0^1 \\log(1-t^n)\\,dt\\\\\\\\\n&amp;=1+\\frac1{n^2} \\int_0^1 \\frac{\\log(1-t)}{t}t^{1/n}\\,dt\\\\\\\\\n&amp;\\ge 1+\\frac{1}{n^2}\\int_0^1 \\frac{\\log(1-t)}{t}\\,dt\\\\\\\n&amp;= 1-\\frac{\\pi^2}{6n^2} \n\\end{align}$$</p>\n\n<p>Hence, </p>\n\n<p>$$0\\le \\int_0^1 \\left(1-(1-t^n)^{1/n}\\right)\\,dt\\le \\frac{\\pi^2}{6n^2}$$</p>\n\n<p>By the comparison test, the series converges.</p>\n" }, { "answer_id": 2097802, "score": 4, "text": "<p>Here is a rather elementary solution.</p>\n\n<p>Notice that $u_n$ is the area of the region $D$ in the unit square $[0,1]^2$ defined by $x^n + y^n \\geq 1$. Now let $a_n = 2^{-1/n}$ and we split $D$ into three parts,</p>\n\n<ul>\n<li>$ D_1 = \\{(x, y) \\in [0,1]^2 : x^n + y^n \\geq 1 \\text{ and } x \\leq a_n \\} $</li>\n<li>$ D_2 = \\{(x, y) \\in [0,1]^2 : x^n + y^n \\geq 1 \\text{ and } y \\leq a_n \\} $</li>\n<li>$ D_3 = \\{(x, y) \\in [0,1]^2 : x, y \\geq a_n \\} $</li>\n</ul>\n\n<p>$\\hspace{8em}$ <a href=\"https://i.stack.imgur.com/9N10Q.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/9N10Q.png\" alt=\"enter image description here\"></a></p>\n\n<p>Then it is easy to check that $D = D_1 \\cup D_2 \\cup D_3$ and they are non-overlapping. Also, exploiting the symmetry, we check that $D_1$ and $D_2$ have the same area. So it follows that</p>\n\n<p>\\begin{align*}\nu_n\n&amp;= 2\\text{[Area of $D_1$]} + \\text{[Area of $D_3$]} \\\\\n&amp;= 2\\int_{0}^{a_n} (1 - (1-x^n)^{1/n}) \\, dx + (1 - a_n)^2.\n\\end{align*}</p>\n\n<p>Since $a_n = 1 - \\mathcal{O}(\\frac{1}{n})$, the term $(1-a_n)^2$ is good. 
For the integral term, notice that for $x \\in [0, a_n]$,</p>\n\n<p>\\begin{align*}\n1 - (1-x^n)^{1/n}\n&amp;= 1 - e^{\\frac{1}{n}\\log(1-x^n)}\n \\stackrel{\\text{(1)}}{\\leq} -\\frac{1}{n}\\log(1-x^n) \\\\\n&amp;= \\int_{0}^{x} \\frac{t^{n-1}}{1-t^n} \\, dt\n \\stackrel{\\text{(2)}}{\\leq} \\int_{0}^{x} 2t^{n-1} \\, dt \\\\\n&amp;= \\frac{2}{n}x^n.\n\\end{align*}</p>\n\n<p>For $\\text{(1)}$, we used the inequality $e^t \\geq 1 + t$ which holds for all real $t$. The second inequality $\\text{(2)}$ follows from the fact that $1-t^n \\geq \\frac{1}{2}$ for $t \\in [0,a_n]$. Thus</p>\n\n<p>$$ u_n \\leq \\frac{4}{n}\\int_{0}^{a_n} x^n \\, dx + (1-a_n)^2 \\leq \\frac{4}{n(n+1)} + (1-a_n)^2 = \\mathcal{O}\\left(\\frac{1}{n^2}\\right). $$</p>\n\n<p>This proves the convergence of $\\sum_{n=1}^{\\infty} u_n$.</p>\n" } ]
Question: Does the series $\sum_{n\ge1}\int_0^1\left(1-(1-t^n)^{1/n}\right)\,dt$ converge? Answer 1: <p>We have </p> <p>$$\begin{align} (1-t^n)^{1/n}&amp;=e^{\frac1n \log(1-t^n)}\\\\ &amp;\ge 1+\frac1n\log(1-t^n) \end{align}$$</p> <p>from which </p> <p>$$\begin{align} \int_0^1 (1-t^n)^{1/n}\,dt&amp;\ge 1+\frac1n \int_0^1 \log(1-t^n)\,dt\\\\ &amp;=1+\frac1{n^2} \int_0^1 \frac{\log(1-t)}{t}t^{1/n}\,dt\\\\ &amp;\ge 1+\frac{1}{n^2}\int_0^1 \frac{\log(1-t)}{t}\,dt\\\ &amp;= 1-\frac{\pi^2}{6n^2} \end{align}$$</p> <p>Hence, </p> <p>$$0\le \int_0^1 \left(1-(1-t^n)^{1/n}\right)\,dt\le \frac{\pi^2}{6n^2}$$</p> <p>By the comparison test, the series converges.</p> Answer 2: <p>Here is a rather elementary solution.</p> <p>Notice that $u_n$ is the area of the region $D$ in the unit square $[0,1]^2$ defined by $x^n + y^n \geq 1$. Now let $a_n = 2^{-1/n}$ and we split $D$ into three parts,</p> <ul> <li>$ D_1 = \{(x, y) \in [0,1]^2 : x^n + y^n \geq 1 \text{ and } x \leq a_n \} $</li> <li>$ D_2 = \{(x, y) \in [0,1]^2 : x^n + y^n \geq 1 \text{ and } y \leq a_n \} $</li> <li>$ D_3 = \{(x, y) \in [0,1]^2 : x, y \geq a_n \} $</li> </ul> <p>$\hspace{8em}$ <a href="https://i.stack.imgur.com/9N10Q.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/9N10Q.png" alt="enter image description here"></a></p> <p>Then it is easy to check that $D = D_1 \cup D_2 \cup D_3$ and they are non-overlapping. Also, exploiting the symmetry, we check that $D_1$ and $D_2$ have the same area. So it follows that</p> <p>\begin{align*} u_n &amp;= 2\text{[Area of $D_1$]} + \text{[Area of $D_3$]} \\ &amp;= 2\int_{0}^{a_n} (1 - (1-x^n)^{1/n}) \, dx + (1 - a_n)^2. \end{align*}</p> <p>Since $a_n = 1 - \mathcal{O}(\frac{1}{n})$, the term $(1-a_n)^2$ is good. 
For the integral term, notice that for $x \in [0, a_n]$,</p> <p>\begin{align*} 1 - (1-x^n)^{1/n} &amp;= 1 - e^{\frac{1}{n}\log(1-x^n)} \stackrel{\text{(1)}}{\leq} -\frac{1}{n}\log(1-x^n) \\ &amp;= \int_{0}^{x} \frac{t^{n-1}}{1-t^n} \, dt \stackrel{\text{(2)}}{\leq} \int_{0}^{x} 2t^{n-1} \, dt \\ &amp;= \frac{2}{n}x^n. \end{align*}</p> <p>For $\text{(1)}$, we used the inequality $e^t \geq 1 + t$ which holds for all real $t$. The second inequality $\text{(2)}$ follows from the fact that $1-t^n \geq \frac{1}{2}$ for $t \in [0,a_n]$. Thus</p> <p>$$ u_n \leq \frac{4}{n}\int_{0}^{a_n} x^n \, dx + (1-a_n)^2 \leq \frac{4}{n(n+1)} + (1-a_n)^2 = \mathcal{O}\left(\frac{1}{n^2}\right). $$</p> <p>This proves the convergence of $\sum_{n=1}^{\infty} u_n$.</p> Answer 3: <p>Using Euler's Beta function, we are simply asked about the convergence of $$ \sum_{n\geq 1}\left(1-\frac{\Gamma\left(1+\tfrac{1}{n}\right)^2}{\Gamma\left(1+\tfrac{2}{n}\right)}\right) $$ where $\Gamma(1+x)$ is a regular function in a neighbourhood of $x=0$. Since $$ \log\Gamma(1+x) = -\gamma x+\frac{\zeta(2)}{2!}x^2-\frac{\zeta(3)}{3}x^3+\frac{\zeta(4)}{4}x^4-\ldots $$ we have that $$ 1-\frac{\Gamma\left(1+\tfrac{1}{n}\right)^2}{\Gamma\left(1+\tfrac{2}{n}\right)} \sim \frac{\pi^2}{6n^2} $$ so the given series is convergent for sure. Geometrically, we are saying that if $A_n$ is the area of the region in the $[0,1]\times[0,1]$ square given by $x^{n+1}+y^{n+1}\leq 1 \leq x^n+y^n$, then $$ A_1+2A_2+3 A_3+\ldots $$ is a convergent series. 
That can be proved by elementary means, too.</p>
That can be proved by elementary means, too.</p> Answer 16: <p>We have </p> <p>$$\begin{align} (1-t^n)^{1/n}&amp;=e^{\frac1n \log(1-t^n)}\\\\ &amp;\ge 1+\frac1n\log(1-t^n) \end{align}$$</p> <p>from which </p> <p>$$\begin{align} \int_0^1 (1-t^n)^{1/n}\,dt&amp;\ge 1+\frac1n \int_0^1 \log(1-t^n)\,dt\\\\ &amp;=1+\frac1{n^2} \int_0^1 \frac{\log(1-t)}{t}t^{1/n}\,dt\\\\ &amp;\ge 1+\frac{1}{n^2}\int_0^1 \frac{\log(1-t)}{t}\,dt\\\ &amp;= 1-\frac{\pi^2}{6n^2} \end{align}$$</p> <p>Hence, </p> <p>$$0\le \int_0^1 \left(1-(1-t^n)^{1/n}\right)\,dt\le \frac{\pi^2}{6n^2}$$</p> <p>By the comparison test, the series converges.</p> Answer 17: <p>Here is a rather elementary solution.</p> <p>Notice that $u_n$ is the area of the region $D$ in the unit square $[0,1]^2$ defined by $x^n + y^n \geq 1$. Now let $a_n = 2^{-1/n}$ and we split $D$ into three parts,</p> <ul> <li>$ D_1 = \{(x, y) \in [0,1]^2 : x^n + y^n \geq 1 \text{ and } x \leq a_n \} $</li> <li>$ D_2 = \{(x, y) \in [0,1]^2 : x^n + y^n \geq 1 \text{ and } y \leq a_n \} $</li> <li>$ D_3 = \{(x, y) \in [0,1]^2 : x, y \geq a_n \} $</li> </ul> <p>$\hspace{8em}$ <a href="https://i.stack.imgur.com/9N10Q.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/9N10Q.png" alt="enter image description here"></a></p> <p>Then it is easy to check that $D = D_1 \cup D_2 \cup D_3$ and they are non-overlapping. Also, exploiting the symmetry, we check that $D_1$ and $D_2$ have the same area. So it follows that</p> <p>\begin{align*} u_n &amp;= 2\text{[Area of $D_1$]} + \text{[Area of $D_3$]} \\ &amp;= 2\int_{0}^{a_n} (1 - (1-x^n)^{1/n}) \, dx + (1 - a_n)^2. \end{align*}</p> <p>Since $a_n = 1 - \mathcal{O}(\frac{1}{n})$, the term $(1-a_n)^2$ is good. 
For the integral term, notice that for $x \in [0, a_n]$,</p> <p>\begin{align*} 1 - (1-x^n)^{1/n} &amp;= 1 - e^{\frac{1}{n}\log(1-x^n)} \stackrel{\text{(1)}}{\leq} -\frac{1}{n}\log(1-x^n) \\ &amp;= \int_{0}^{x} \frac{t^{n-1}}{1-t^n} \, dt \stackrel{\text{(2)}}{\leq} \int_{0}^{x} 2t^{n-1} \, dt \\ &amp;= \frac{2}{n}x^n. \end{align*}</p> <p>For $\text{(1)}$, we used the inequality $e^t \geq 1 + t$ which holds for all real $t$. The second inequality $\text{(2)}$ follows from the fact that $1-t^n \geq \frac{1}{2}$ for $t \in [0,a_n]$. Thus</p> <p>$$ u_n \leq \frac{4}{n}\int_{0}^{a_n} x^n \, dx + (1-a_n)^2 \leq \frac{4}{n(n+1)} + (1-a_n)^2 = \mathcal{O}\left(\frac{1}{n^2}\right). $$</p> <p>This proves the convergence of $\sum_{n=1}^{\infty} u_n$.</p> Answer 18: <p>Using Euler's Beta function, we are simply asked about the convergence of $$ \sum_{n\geq 1}\left(1-\frac{\Gamma\left(1+\tfrac{1}{n}\right)^2}{\Gamma\left(1+\tfrac{2}{n}\right)}\right) $$ where $\Gamma(1+x)$ is a regular function in a neighbourhood of $x=0$. Since $$ \log\Gamma(1+x) = -\gamma x+\frac{\zeta(2)}{2!}x^2-\frac{\zeta(3)}{3}x^3+\frac{\zeta(4)}{4}x^4-\ldots $$ we have that $$ 1-\frac{\Gamma\left(1+\tfrac{1}{n}\right)^2}{\Gamma\left(1+\tfrac{2}{n}\right)} \sim \frac{\pi^2}{6n^2} $$ so the given series is convergent for sure. Geometrically, we are saying that if $A_n$ is the area of the region in the $[0,1]\times[0,1]$ square given by $x^{n+1}+y^{n+1}\leq 1 \leq x^n+y^n$, then $$ A_1+2A_2+3 A_3+\ldots $$ is a convergent series. 
That can be proved by elementary means, too.</p> Answer 19: <p>We have </p> <p>$$\begin{align} (1-t^n)^{1/n}&amp;=e^{\frac1n \log(1-t^n)}\\\\ &amp;\ge 1+\frac1n\log(1-t^n) \end{align}$$</p> <p>from which </p> <p>$$\begin{align} \int_0^1 (1-t^n)^{1/n}\,dt&amp;\ge 1+\frac1n \int_0^1 \log(1-t^n)\,dt\\\\ &amp;=1+\frac1{n^2} \int_0^1 \frac{\log(1-t)}{t}t^{1/n}\,dt\\\\ &amp;\ge 1+\frac{1}{n^2}\int_0^1 \frac{\log(1-t)}{t}\,dt\\\ &amp;= 1-\frac{\pi^2}{6n^2} \end{align}$$</p> <p>Hence, </p> <p>$$0\le \int_0^1 \left(1-(1-t^n)^{1/n}\right)\,dt\le \frac{\pi^2}{6n^2}$$</p> <p>By the comparison test, the series converges.</p> Answer 20: <p>Here is a rather elementary solution.</p> <p>Notice that $u_n$ is the area of the region $D$ in the unit square $[0,1]^2$ defined by $x^n + y^n \geq 1$. Now let $a_n = 2^{-1/n}$ and we split $D$ into three parts,</p> <ul> <li>$ D_1 = \{(x, y) \in [0,1]^2 : x^n + y^n \geq 1 \text{ and } x \leq a_n \} $</li> <li>$ D_2 = \{(x, y) \in [0,1]^2 : x^n + y^n \geq 1 \text{ and } y \leq a_n \} $</li> <li>$ D_3 = \{(x, y) \in [0,1]^2 : x, y \geq a_n \} $</li> </ul> <p>$\hspace{8em}$ <a href="https://i.stack.imgur.com/9N10Q.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/9N10Q.png" alt="enter image description here"></a></p> <p>Then it is easy to check that $D = D_1 \cup D_2 \cup D_3$ and they are non-overlapping. Also, exploiting the symmetry, we check that $D_1$ and $D_2$ have the same area. So it follows that</p> <p>\begin{align*} u_n &amp;= 2\text{[Area of $D_1$]} + \text{[Area of $D_3$]} \\ &amp;= 2\int_{0}^{a_n} (1 - (1-x^n)^{1/n}) \, dx + (1 - a_n)^2. \end{align*}</p> <p>Since $a_n = 1 - \mathcal{O}(\frac{1}{n})$, the term $(1-a_n)^2$ is good. 
For the integral term, notice that for $x \in [0, a_n]$,</p> <p>\begin{align*} 1 - (1-x^n)^{1/n} &amp;= 1 - e^{\frac{1}{n}\log(1-x^n)} \stackrel{\text{(1)}}{\leq} -\frac{1}{n}\log(1-x^n) \\ &amp;= \int_{0}^{x} \frac{t^{n-1}}{1-t^n} \, dt \stackrel{\text{(2)}}{\leq} \int_{0}^{x} 2t^{n-1} \, dt \\ &amp;= \frac{2}{n}x^n. \end{align*}</p> <p>For $\text{(1)}$, we used the inequality $e^t \geq 1 + t$ which holds for all real $t$. The second inequality $\text{(2)}$ follows from the fact that $1-t^n \geq \frac{1}{2}$ for $t \in [0,a_n]$. Thus</p> <p>$$ u_n \leq \frac{4}{n}\int_{0}^{a_n} x^n \, dx + (1-a_n)^2 \leq \frac{4}{n(n+1)} + (1-a_n)^2 = \mathcal{O}\left(\frac{1}{n^2}\right). $$</p> <p>This proves the convergence of $\sum_{n=1}^{\infty} u_n$.</p> Answer 21: <p>Using Euler's Beta function, we are simply asked about the convergence of $$ \sum_{n\geq 1}\left(1-\frac{\Gamma\left(1+\tfrac{1}{n}\right)^2}{\Gamma\left(1+\tfrac{2}{n}\right)}\right) $$ where $\Gamma(1+x)$ is a regular function in a neighbourhood of $x=0$. Since $$ \log\Gamma(1+x) = -\gamma x+\frac{\zeta(2)}{2!}x^2-\frac{\zeta(3)}{3}x^3+\frac{\zeta(4)}{4}x^4-\ldots $$ we have that $$ 1-\frac{\Gamma\left(1+\tfrac{1}{n}\right)^2}{\Gamma\left(1+\tfrac{2}{n}\right)} \sim \frac{\pi^2}{6n^2} $$ so the given series is convergent for sure. Geometrically, we are saying that if $A_n$ is the area of the region in the $[0,1]\times[0,1]$ square given by $x^{n+1}+y^{n+1}\leq 1 \leq x^n+y^n$, then $$ A_1+2A_2+3 A_3+\ldots $$ is a convergent series. 
That can be proved by elementary means, too.</p> Answer 22: <p>We have </p> <p>$$\begin{align} (1-t^n)^{1/n}&amp;=e^{\frac1n \log(1-t^n)}\\\\ &amp;\ge 1+\frac1n\log(1-t^n) \end{align}$$</p> <p>from which </p> <p>$$\begin{align} \int_0^1 (1-t^n)^{1/n}\,dt&amp;\ge 1+\frac1n \int_0^1 \log(1-t^n)\,dt\\\\ &amp;=1+\frac1{n^2} \int_0^1 \frac{\log(1-t)}{t}t^{1/n}\,dt\\\\ &amp;\ge 1+\frac{1}{n^2}\int_0^1 \frac{\log(1-t)}{t}\,dt\\\ &amp;= 1-\frac{\pi^2}{6n^2} \end{align}$$</p> <p>Hence, </p> <p>$$0\le \int_0^1 \left(1-(1-t^n)^{1/n}\right)\,dt\le \frac{\pi^2}{6n^2}$$</p> <p>By the comparison test, the series converges.</p> Answer 23: <p>Here is a rather elementary solution.</p> <p>Notice that $u_n$ is the area of the region $D$ in the unit square $[0,1]^2$ defined by $x^n + y^n \geq 1$. Now let $a_n = 2^{-1/n}$ and we split $D$ into three parts,</p> <ul> <li>$ D_1 = \{(x, y) \in [0,1]^2 : x^n + y^n \geq 1 \text{ and } x \leq a_n \} $</li> <li>$ D_2 = \{(x, y) \in [0,1]^2 : x^n + y^n \geq 1 \text{ and } y \leq a_n \} $</li> <li>$ D_3 = \{(x, y) \in [0,1]^2 : x, y \geq a_n \} $</li> </ul> <p>$\hspace{8em}$ <a href="https://i.stack.imgur.com/9N10Q.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/9N10Q.png" alt="enter image description here"></a></p> <p>Then it is easy to check that $D = D_1 \cup D_2 \cup D_3$ and they are non-overlapping. Also, exploiting the symmetry, we check that $D_1$ and $D_2$ have the same area. So it follows that</p> <p>\begin{align*} u_n &amp;= 2\text{[Area of $D_1$]} + \text{[Area of $D_3$]} \\ &amp;= 2\int_{0}^{a_n} (1 - (1-x^n)^{1/n}) \, dx + (1 - a_n)^2. \end{align*}</p> <p>Since $a_n = 1 - \mathcal{O}(\frac{1}{n})$, the term $(1-a_n)^2$ is good. 
For the integral term, notice that for $x \in [0, a_n]$,</p> <p>\begin{align*} 1 - (1-x^n)^{1/n} &amp;= 1 - e^{\frac{1}{n}\log(1-x^n)} \stackrel{\text{(1)}}{\leq} -\frac{1}{n}\log(1-x^n) \\ &amp;= \int_{0}^{x} \frac{t^{n-1}}{1-t^n} \, dt \stackrel{\text{(2)}}{\leq} \int_{0}^{x} 2t^{n-1} \, dt \\ &amp;= \frac{2}{n}x^n. \end{align*}</p> <p>For $\text{(1)}$, we used the inequality $e^t \geq 1 + t$ which holds for all real $t$. The second inequality $\text{(2)}$ follows from the fact that $1-t^n \geq \frac{1}{2}$ for $t \in [0,a_n]$. Thus</p> <p>$$ u_n \leq \frac{4}{n}\int_{0}^{a_n} x^n \, dx + (1-a_n)^2 \leq \frac{4}{n(n+1)} + (1-a_n)^2 = \mathcal{O}\left(\frac{1}{n^2}\right). $$</p> <p>This proves the convergence of $\sum_{n=1}^{\infty} u_n$.</p> Answer 24: <p>Using Euler's Beta function, we are simply asked about the convergence of $$ \sum_{n\geq 1}\left(1-\frac{\Gamma\left(1+\tfrac{1}{n}\right)^2}{\Gamma\left(1+\tfrac{2}{n}\right)}\right) $$ where $\Gamma(1+x)$ is a regular function in a neighbourhood of $x=0$. Since $$ \log\Gamma(1+x) = -\gamma x+\frac{\zeta(2)}{2!}x^2-\frac{\zeta(3)}{3}x^3+\frac{\zeta(4)}{4}x^4-\ldots $$ we have that $$ 1-\frac{\Gamma\left(1+\tfrac{1}{n}\right)^2}{\Gamma\left(1+\tfrac{2}{n}\right)} \sim \frac{\pi^2}{6n^2} $$ so the given series is convergent for sure. Geometrically, we are saying that if $A_n$ is the area of the region in the $[0,1]\times[0,1]$ square given by $x^{n+1}+y^{n+1}\leq 1 \leq x^n+y^n$, then $$ A_1+2A_2+3 A_3+\ldots $$ is a convergent series. 
That can be proved by elementary means, too.</p> Answer 25: <p>We have </p> <p>$$\begin{align} (1-t^n)^{1/n}&amp;=e^{\frac1n \log(1-t^n)}\\\\ &amp;\ge 1+\frac1n\log(1-t^n) \end{align}$$</p> <p>from which </p> <p>$$\begin{align} \int_0^1 (1-t^n)^{1/n}\,dt&amp;\ge 1+\frac1n \int_0^1 \log(1-t^n)\,dt\\\\ &amp;=1+\frac1{n^2} \int_0^1 \frac{\log(1-t)}{t}t^{1/n}\,dt\\\\ &amp;\ge 1+\frac{1}{n^2}\int_0^1 \frac{\log(1-t)}{t}\,dt\\\ &amp;= 1-\frac{\pi^2}{6n^2} \end{align}$$</p> <p>Hence, </p> <p>$$0\le \int_0^1 \left(1-(1-t^n)^{1/n}\right)\,dt\le \frac{\pi^2}{6n^2}$$</p> <p>By the comparison test, the series converges.</p> Answer 26: <p>Here is a rather elementary solution.</p> <p>Notice that $u_n$ is the area of the region $D$ in the unit square $[0,1]^2$ defined by $x^n + y^n \geq 1$. Now let $a_n = 2^{-1/n}$ and we split $D$ into three parts,</p> <ul> <li>$ D_1 = \{(x, y) \in [0,1]^2 : x^n + y^n \geq 1 \text{ and } x \leq a_n \} $</li> <li>$ D_2 = \{(x, y) \in [0,1]^2 : x^n + y^n \geq 1 \text{ and } y \leq a_n \} $</li> <li>$ D_3 = \{(x, y) \in [0,1]^2 : x, y \geq a_n \} $</li> </ul> <p>$\hspace{8em}$ <a href="https://i.stack.imgur.com/9N10Q.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/9N10Q.png" alt="enter image description here"></a></p> <p>Then it is easy to check that $D = D_1 \cup D_2 \cup D_3$ and they are non-overlapping. Also, exploiting the symmetry, we check that $D_1$ and $D_2$ have the same area. So it follows that</p> <p>\begin{align*} u_n &amp;= 2\text{[Area of $D_1$]} + \text{[Area of $D_3$]} \\ &amp;= 2\int_{0}^{a_n} (1 - (1-x^n)^{1/n}) \, dx + (1 - a_n)^2. \end{align*}</p> <p>Since $a_n = 1 - \mathcal{O}(\frac{1}{n})$, the term $(1-a_n)^2$ is good. 
For the integral term, notice that for $x \in [0, a_n]$,</p> <p>\begin{align*} 1 - (1-x^n)^{1/n} &amp;= 1 - e^{\frac{1}{n}\log(1-x^n)} \stackrel{\text{(1)}}{\leq} -\frac{1}{n}\log(1-x^n) \\ &amp;= \int_{0}^{x} \frac{t^{n-1}}{1-t^n} \, dt \stackrel{\text{(2)}}{\leq} \int_{0}^{x} 2t^{n-1} \, dt \\ &amp;= \frac{2}{n}x^n. \end{align*}</p> <p>For $\text{(1)}$, we used the inequality $e^t \geq 1 + t$ which holds for all real $t$. The second inequality $\text{(2)}$ follows from the fact that $1-t^n \geq \frac{1}{2}$ for $t \in [0,a_n]$. Thus</p> <p>$$ u_n \leq \frac{4}{n}\int_{0}^{a_n} x^n \, dx + (1-a_n)^2 \leq \frac{4}{n(n+1)} + (1-a_n)^2 = \mathcal{O}\left(\frac{1}{n^2}\right). $$</p> <p>This proves the convergence of $\sum_{n=1}^{\infty} u_n$.</p> Answer 27: <p>Using Euler's Beta function, we are simply asked about the convergence of $$ \sum_{n\geq 1}\left(1-\frac{\Gamma\left(1+\tfrac{1}{n}\right)^2}{\Gamma\left(1+\tfrac{2}{n}\right)}\right) $$ where $\Gamma(1+x)$ is a regular function in a neighbourhood of $x=0$. Since $$ \log\Gamma(1+x) = -\gamma x+\frac{\zeta(2)}{2!}x^2-\frac{\zeta(3)}{3}x^3+\frac{\zeta(4)}{4}x^4-\ldots $$ we have that $$ 1-\frac{\Gamma\left(1+\tfrac{1}{n}\right)^2}{\Gamma\left(1+\tfrac{2}{n}\right)} \sim \frac{\pi^2}{6n^2} $$ so the given series is convergent for sure. Geometrically, we are saying that if $A_n$ is the area of the region in the $[0,1]\times[0,1]$ square given by $x^{n+1}+y^{n+1}\leq 1 \leq x^n+y^n$, then $$ A_1+2A_2+3 A_3+\ldots $$ is a convergent series. 
That can be proved by elementary means, too.</p> Answer 28: <p>We have </p> <p>$$\begin{align} (1-t^n)^{1/n}&amp;=e^{\frac1n \log(1-t^n)}\\\\ &amp;\ge 1+\frac1n\log(1-t^n) \end{align}$$</p> <p>from which </p> <p>$$\begin{align} \int_0^1 (1-t^n)^{1/n}\,dt&amp;\ge 1+\frac1n \int_0^1 \log(1-t^n)\,dt\\\\ &amp;=1+\frac1{n^2} \int_0^1 \frac{\log(1-t)}{t}t^{1/n}\,dt\\\\ &amp;\ge 1+\frac{1}{n^2}\int_0^1 \frac{\log(1-t)}{t}\,dt\\\ &amp;= 1-\frac{\pi^2}{6n^2} \end{align}$$</p> <p>Hence, </p> <p>$$0\le \int_0^1 \left(1-(1-t^n)^{1/n}\right)\,dt\le \frac{\pi^2}{6n^2}$$</p> <p>By the comparison test, the series converges.</p> Answer 29: <p>Here is a rather elementary solution.</p> <p>Notice that $u_n$ is the area of the region $D$ in the unit square $[0,1]^2$ defined by $x^n + y^n \geq 1$. Now let $a_n = 2^{-1/n}$ and we split $D$ into three parts,</p> <ul> <li>$ D_1 = \{(x, y) \in [0,1]^2 : x^n + y^n \geq 1 \text{ and } x \leq a_n \} $</li> <li>$ D_2 = \{(x, y) \in [0,1]^2 : x^n + y^n \geq 1 \text{ and } y \leq a_n \} $</li> <li>$ D_3 = \{(x, y) \in [0,1]^2 : x, y \geq a_n \} $</li> </ul> <p>$\hspace{8em}$ <a href="https://i.stack.imgur.com/9N10Q.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/9N10Q.png" alt="enter image description here"></a></p> <p>Then it is easy to check that $D = D_1 \cup D_2 \cup D_3$ and they are non-overlapping. Also, exploiting the symmetry, we check that $D_1$ and $D_2$ have the same area. So it follows that</p> <p>\begin{align*} u_n &amp;= 2\text{[Area of $D_1$]} + \text{[Area of $D_3$]} \\ &amp;= 2\int_{0}^{a_n} (1 - (1-x^n)^{1/n}) \, dx + (1 - a_n)^2. \end{align*}</p> <p>Since $a_n = 1 - \mathcal{O}(\frac{1}{n})$, the term $(1-a_n)^2$ is good. 
For the integral term, notice that for $x \in [0, a_n]$,</p> <p>\begin{align*} 1 - (1-x^n)^{1/n} &amp;= 1 - e^{\frac{1}{n}\log(1-x^n)} \stackrel{\text{(1)}}{\leq} -\frac{1}{n}\log(1-x^n) \\ &amp;= \int_{0}^{x} \frac{t^{n-1}}{1-t^n} \, dt \stackrel{\text{(2)}}{\leq} \int_{0}^{x} 2t^{n-1} \, dt \\ &amp;= \frac{2}{n}x^n. \end{align*}</p> <p>For $\text{(1)}$, we used the inequality $e^t \geq 1 + t$ which holds for all real $t$. The second inequality $\text{(2)}$ follows from the fact that $1-t^n \geq \frac{1}{2}$ for $t \in [0,a_n]$. Thus</p> <p>$$ u_n \leq \frac{4}{n}\int_{0}^{a_n} x^n \, dx + (1-a_n)^2 \leq \frac{4}{n(n+1)} + (1-a_n)^2 = \mathcal{O}\left(\frac{1}{n^2}\right). $$</p> <p>This proves the convergence of $\sum_{n=1}^{\infty} u_n$.</p> Answer 30: <p>Using Euler's Beta function, we are simply asked about the convergence of $$ \sum_{n\geq 1}\left(1-\frac{\Gamma\left(1+\tfrac{1}{n}\right)^2}{\Gamma\left(1+\tfrac{2}{n}\right)}\right) $$ where $\Gamma(1+x)$ is a regular function in a neighbourhood of $x=0$. Since $$ \log\Gamma(1+x) = -\gamma x+\frac{\zeta(2)}{2!}x^2-\frac{\zeta(3)}{3}x^3+\frac{\zeta(4)}{4}x^4-\ldots $$ we have that $$ 1-\frac{\Gamma\left(1+\tfrac{1}{n}\right)^2}{\Gamma\left(1+\tfrac{2}{n}\right)} \sim \frac{\pi^2}{6n^2} $$ so the given series is convergent for sure. Geometrically, we are saying that if $A_n$ is the area of the region in the $[0,1]\times[0,1]$ square given by $x^{n+1}+y^{n+1}\leq 1 \leq x^n+y^n$, then $$ A_1+2A_2+3 A_3+\ldots $$ is a convergent series. 
That can be proved by elementary means, too.</p> Answer 31: <p>We have </p> <p>$$\begin{align} (1-t^n)^{1/n}&amp;=e^{\frac1n \log(1-t^n)}\\\\ &amp;\ge 1+\frac1n\log(1-t^n) \end{align}$$</p> <p>from which </p> <p>$$\begin{align} \int_0^1 (1-t^n)^{1/n}\,dt&amp;\ge 1+\frac1n \int_0^1 \log(1-t^n)\,dt\\\\ &amp;=1+\frac1{n^2} \int_0^1 \frac{\log(1-t)}{t}t^{1/n}\,dt\\\\ &amp;\ge 1+\frac{1}{n^2}\int_0^1 \frac{\log(1-t)}{t}\,dt\\\ &amp;= 1-\frac{\pi^2}{6n^2} \end{align}$$</p> <p>Hence, </p> <p>$$0\le \int_0^1 \left(1-(1-t^n)^{1/n}\right)\,dt\le \frac{\pi^2}{6n^2}$$</p> <p>By the comparison test, the series converges.</p> Answer 32: <p>Here is a rather elementary solution.</p> <p>Notice that $u_n$ is the area of the region $D$ in the unit square $[0,1]^2$ defined by $x^n + y^n \geq 1$. Now let $a_n = 2^{-1/n}$ and we split $D$ into three parts,</p> <ul> <li>$ D_1 = \{(x, y) \in [0,1]^2 : x^n + y^n \geq 1 \text{ and } x \leq a_n \} $</li> <li>$ D_2 = \{(x, y) \in [0,1]^2 : x^n + y^n \geq 1 \text{ and } y \leq a_n \} $</li> <li>$ D_3 = \{(x, y) \in [0,1]^2 : x, y \geq a_n \} $</li> </ul> <p>$\hspace{8em}$ <a href="https://i.stack.imgur.com/9N10Q.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/9N10Q.png" alt="enter image description here"></a></p> <p>Then it is easy to check that $D = D_1 \cup D_2 \cup D_3$ and they are non-overlapping. Also, exploiting the symmetry, we check that $D_1$ and $D_2$ have the same area. So it follows that</p> <p>\begin{align*} u_n &amp;= 2\text{[Area of $D_1$]} + \text{[Area of $D_3$]} \\ &amp;= 2\int_{0}^{a_n} (1 - (1-x^n)^{1/n}) \, dx + (1 - a_n)^2. \end{align*}</p> <p>Since $a_n = 1 - \mathcal{O}(\frac{1}{n})$, the term $(1-a_n)^2$ is good. 
For the integral term, notice that for $x \in [0, a_n]$,</p> <p>\begin{align*} 1 - (1-x^n)^{1/n} &amp;= 1 - e^{\frac{1}{n}\log(1-x^n)} \stackrel{\text{(1)}}{\leq} -\frac{1}{n}\log(1-x^n) \\ &amp;= \int_{0}^{x} \frac{t^{n-1}}{1-t^n} \, dt \stackrel{\text{(2)}}{\leq} \int_{0}^{x} 2t^{n-1} \, dt \\ &amp;= \frac{2}{n}x^n. \end{align*}</p> <p>For $\text{(1)}$, we used the inequality $e^t \geq 1 + t$ which holds for all real $t$. The second inequality $\text{(2)}$ follows from the fact that $1-t^n \geq \frac{1}{2}$ for $t \in [0,a_n]$. Thus</p> <p>$$ u_n \leq \frac{4}{n}\int_{0}^{a_n} x^n \, dx + (1-a_n)^2 \leq \frac{4}{n(n+1)} + (1-a_n)^2 = \mathcal{O}\left(\frac{1}{n^2}\right). $$</p> <p>This proves the convergence of $\sum_{n=1}^{\infty} u_n$.</p> Answer 33: <p>Using Euler's Beta function, we are simply asked about the convergence of $$ \sum_{n\geq 1}\left(1-\frac{\Gamma\left(1+\tfrac{1}{n}\right)^2}{\Gamma\left(1+\tfrac{2}{n}\right)}\right) $$ where $\Gamma(1+x)$ is a regular function in a neighbourhood of $x=0$. Since $$ \log\Gamma(1+x) = -\gamma x+\frac{\zeta(2)}{2!}x^2-\frac{\zeta(3)}{3}x^3+\frac{\zeta(4)}{4}x^4-\ldots $$ we have that $$ 1-\frac{\Gamma\left(1+\tfrac{1}{n}\right)^2}{\Gamma\left(1+\tfrac{2}{n}\right)} \sim \frac{\pi^2}{6n^2} $$ so the given series is convergent for sure. Geometrically, we are saying that if $A_n$ is the area of the region in the $[0,1]\times[0,1]$ square given by $x^{n+1}+y^{n+1}\leq 1 \leq x^n+y^n$, then $$ A_1+2A_2+3 A_3+\ldots $$ is a convergent series. 
That can be proved by elementary means, too.</p> Answer 34: <p>We have </p> <p>$$\begin{align} (1-t^n)^{1/n}&amp;=e^{\frac1n \log(1-t^n)}\\\\ &amp;\ge 1+\frac1n\log(1-t^n) \end{align}$$</p> <p>from which </p> <p>$$\begin{align} \int_0^1 (1-t^n)^{1/n}\,dt&amp;\ge 1+\frac1n \int_0^1 \log(1-t^n)\,dt\\\\ &amp;=1+\frac1{n^2} \int_0^1 \frac{\log(1-t)}{t}t^{1/n}\,dt\\\\ &amp;\ge 1+\frac{1}{n^2}\int_0^1 \frac{\log(1-t)}{t}\,dt\\\ &amp;= 1-\frac{\pi^2}{6n^2} \end{align}$$</p> <p>Hence, </p> <p>$$0\le \int_0^1 \left(1-(1-t^n)^{1/n}\right)\,dt\le \frac{\pi^2}{6n^2}$$</p> <p>By the comparison test, the series converges.</p> Answer 35: <p>Here is a rather elementary solution.</p> <p>Notice that $u_n$ is the area of the region $D$ in the unit square $[0,1]^2$ defined by $x^n + y^n \geq 1$. Now let $a_n = 2^{-1/n}$ and we split $D$ into three parts,</p> <ul> <li>$ D_1 = \{(x, y) \in [0,1]^2 : x^n + y^n \geq 1 \text{ and } x \leq a_n \} $</li> <li>$ D_2 = \{(x, y) \in [0,1]^2 : x^n + y^n \geq 1 \text{ and } y \leq a_n \} $</li> <li>$ D_3 = \{(x, y) \in [0,1]^2 : x, y \geq a_n \} $</li> </ul> <p>$\hspace{8em}$ <a href="https://i.stack.imgur.com/9N10Q.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/9N10Q.png" alt="enter image description here"></a></p> <p>Then it is easy to check that $D = D_1 \cup D_2 \cup D_3$ and they are non-overlapping. Also, exploiting the symmetry, we check that $D_1$ and $D_2$ have the same area. So it follows that</p> <p>\begin{align*} u_n &amp;= 2\text{[Area of $D_1$]} + \text{[Area of $D_3$]} \\ &amp;= 2\int_{0}^{a_n} (1 - (1-x^n)^{1/n}) \, dx + (1 - a_n)^2. \end{align*}</p> <p>Since $a_n = 1 - \mathcal{O}(\frac{1}{n})$, the term $(1-a_n)^2$ is good. 
For the integral term, notice that for $x \in [0, a_n]$,</p> <p>\begin{align*} 1 - (1-x^n)^{1/n} &amp;= 1 - e^{\frac{1}{n}\log(1-x^n)} \stackrel{\text{(1)}}{\leq} -\frac{1}{n}\log(1-x^n) \\ &amp;= \int_{0}^{x} \frac{t^{n-1}}{1-t^n} \, dt \stackrel{\text{(2)}}{\leq} \int_{0}^{x} 2t^{n-1} \, dt \\ &amp;= \frac{2}{n}x^n. \end{align*}</p> <p>For $\text{(1)}$, we used the inequality $e^t \geq 1 + t$ which holds for all real $t$. The second inequality $\text{(2)}$ follows from the fact that $1-t^n \geq \frac{1}{2}$ for $t \in [0,a_n]$. Thus</p> <p>$$ u_n \leq \frac{4}{n}\int_{0}^{a_n} x^n \, dx + (1-a_n)^2 \leq \frac{4}{n(n+1)} + (1-a_n)^2 = \mathcal{O}\left(\frac{1}{n^2}\right). $$</p> <p>This proves the convergence of $\sum_{n=1}^{\infty} u_n$.</p> Answer 36: <p>Using Euler's Beta function, we are simply asked about the convergence of $$ \sum_{n\geq 1}\left(1-\frac{\Gamma\left(1+\tfrac{1}{n}\right)^2}{\Gamma\left(1+\tfrac{2}{n}\right)}\right) $$ where $\Gamma(1+x)$ is a regular function in a neighbourhood of $x=0$. Since $$ \log\Gamma(1+x) = -\gamma x+\frac{\zeta(2)}{2!}x^2-\frac{\zeta(3)}{3}x^3+\frac{\zeta(4)}{4}x^4-\ldots $$ we have that $$ 1-\frac{\Gamma\left(1+\tfrac{1}{n}\right)^2}{\Gamma\left(1+\tfrac{2}{n}\right)} \sim \frac{\pi^2}{6n^2} $$ so the given series is convergent for sure. Geometrically, we are saying that if $A_n$ is the area of the region in the $[0,1]\times[0,1]$ square given by $x^{n+1}+y^{n+1}\leq 1 \leq x^n+y^n$, then $$ A_1+2A_2+3 A_3+\ldots $$ is a convergent series. 
That can be proved by elementary means, too.</p> Answer 37: <p>We have </p> <p>$$\begin{align} (1-t^n)^{1/n}&amp;=e^{\frac1n \log(1-t^n)}\\\\ &amp;\ge 1+\frac1n\log(1-t^n) \end{align}$$</p> <p>from which </p> <p>$$\begin{align} \int_0^1 (1-t^n)^{1/n}\,dt&amp;\ge 1+\frac1n \int_0^1 \log(1-t^n)\,dt\\\\ &amp;=1+\frac1{n^2} \int_0^1 \frac{\log(1-t)}{t}t^{1/n}\,dt\\\\ &amp;\ge 1+\frac{1}{n^2}\int_0^1 \frac{\log(1-t)}{t}\,dt\\\ &amp;= 1-\frac{\pi^2}{6n^2} \end{align}$$</p> <p>Hence, </p> <p>$$0\le \int_0^1 \left(1-(1-t^n)^{1/n}\right)\,dt\le \frac{\pi^2}{6n^2}$$</p> <p>By the comparison test, the series converges.</p> Answer 38: <p>Here is a rather elementary solution.</p> <p>Notice that $u_n$ is the area of the region $D$ in the unit square $[0,1]^2$ defined by $x^n + y^n \geq 1$. Now let $a_n = 2^{-1/n}$ and we split $D$ into three parts,</p> <ul> <li>$ D_1 = \{(x, y) \in [0,1]^2 : x^n + y^n \geq 1 \text{ and } x \leq a_n \} $</li> <li>$ D_2 = \{(x, y) \in [0,1]^2 : x^n + y^n \geq 1 \text{ and } y \leq a_n \} $</li> <li>$ D_3 = \{(x, y) \in [0,1]^2 : x, y \geq a_n \} $</li> </ul> <p>$\hspace{8em}$ <a href="https://i.stack.imgur.com/9N10Q.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/9N10Q.png" alt="enter image description here"></a></p> <p>Then it is easy to check that $D = D_1 \cup D_2 \cup D_3$ and they are non-overlapping. Also, exploiting the symmetry, we check that $D_1$ and $D_2$ have the same area. So it follows that</p> <p>\begin{align*} u_n &amp;= 2\text{[Area of $D_1$]} + \text{[Area of $D_3$]} \\ &amp;= 2\int_{0}^{a_n} (1 - (1-x^n)^{1/n}) \, dx + (1 - a_n)^2. \end{align*}</p> <p>Since $a_n = 1 - \mathcal{O}(\frac{1}{n})$, the term $(1-a_n)^2$ is good. 
For the integral term, notice that for $x \in [0, a_n]$,</p> <p>\begin{align*} 1 - (1-x^n)^{1/n} &amp;= 1 - e^{\frac{1}{n}\log(1-x^n)} \stackrel{\text{(1)}}{\leq} -\frac{1}{n}\log(1-x^n) \\ &amp;= \int_{0}^{x} \frac{t^{n-1}}{1-t^n} \, dt \stackrel{\text{(2)}}{\leq} \int_{0}^{x} 2t^{n-1} \, dt \\ &amp;= \frac{2}{n}x^n. \end{align*}</p> <p>For $\text{(1)}$, we used the inequality $e^t \geq 1 + t$ which holds for all real $t$. The second inequality $\text{(2)}$ follows from the fact that $1-t^n \geq \frac{1}{2}$ for $t \in [0,a_n]$. Thus</p> <p>$$ u_n \leq \frac{4}{n}\int_{0}^{a_n} x^n \, dx + (1-a_n)^2 \leq \frac{4}{n(n+1)} + (1-a_n)^2 = \mathcal{O}\left(\frac{1}{n^2}\right). $$</p> <p>This proves the convergence of $\sum_{n=1}^{\infty} u_n$.</p> Answer 39: <p>Using Euler's Beta function, we are simply asked about the convergence of $$ \sum_{n\geq 1}\left(1-\frac{\Gamma\left(1+\tfrac{1}{n}\right)^2}{\Gamma\left(1+\tfrac{2}{n}\right)}\right) $$ where $\Gamma(1+x)$ is a regular function in a neighbourhood of $x=0$. Since $$ \log\Gamma(1+x) = -\gamma x+\frac{\zeta(2)}{2!}x^2-\frac{\zeta(3)}{3}x^3+\frac{\zeta(4)}{4}x^4-\ldots $$ we have that $$ 1-\frac{\Gamma\left(1+\tfrac{1}{n}\right)^2}{\Gamma\left(1+\tfrac{2}{n}\right)} \sim \frac{\pi^2}{6n^2} $$ so the given series is convergent for sure. Geometrically, we are saying that if $A_n$ is the area of the region in the $[0,1]\times[0,1]$ square given by $x^{n+1}+y^{n+1}\leq 1 \leq x^n+y^n$, then $$ A_1+2A_2+3 A_3+\ldots $$ is a convergent series. 
That can be proved by elementary means, too.</p> Answer 40: <p>We have </p> <p>$$\begin{align} (1-t^n)^{1/n}&amp;=e^{\frac1n \log(1-t^n)}\\\\ &amp;\ge 1+\frac1n\log(1-t^n) \end{align}$$</p> <p>from which </p> <p>$$\begin{align} \int_0^1 (1-t^n)^{1/n}\,dt&amp;\ge 1+\frac1n \int_0^1 \log(1-t^n)\,dt\\\\ &amp;=1+\frac1{n^2} \int_0^1 \frac{\log(1-t)}{t}t^{1/n}\,dt\\\\ &amp;\ge 1+\frac{1}{n^2}\int_0^1 \frac{\log(1-t)}{t}\,dt\\\ &amp;= 1-\frac{\pi^2}{6n^2} \end{align}$$</p> <p>Hence, </p> <p>$$0\le \int_0^1 \left(1-(1-t^n)^{1/n}\right)\,dt\le \frac{\pi^2}{6n^2}$$</p> <p>By the comparison test, the series converges.</p> Answer 41: <p>Here is a rather elementary solution.</p> <p>Notice that $u_n$ is the area of the region $D$ in the unit square $[0,1]^2$ defined by $x^n + y^n \geq 1$. Now let $a_n = 2^{-1/n}$ and we split $D$ into three parts,</p> <ul> <li>$ D_1 = \{(x, y) \in [0,1]^2 : x^n + y^n \geq 1 \text{ and } x \leq a_n \} $</li> <li>$ D_2 = \{(x, y) \in [0,1]^2 : x^n + y^n \geq 1 \text{ and } y \leq a_n \} $</li> <li>$ D_3 = \{(x, y) \in [0,1]^2 : x, y \geq a_n \} $</li> </ul> <p>$\hspace{8em}$ <a href="https://i.stack.imgur.com/9N10Q.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/9N10Q.png" alt="enter image description here"></a></p> <p>Then it is easy to check that $D = D_1 \cup D_2 \cup D_3$ and they are non-overlapping. Also, exploiting the symmetry, we check that $D_1$ and $D_2$ have the same area. So it follows that</p> <p>\begin{align*} u_n &amp;= 2\text{[Area of $D_1$]} + \text{[Area of $D_3$]} \\ &amp;= 2\int_{0}^{a_n} (1 - (1-x^n)^{1/n}) \, dx + (1 - a_n)^2. \end{align*}</p> <p>Since $a_n = 1 - \mathcal{O}(\frac{1}{n})$, the term $(1-a_n)^2$ is good. 
For the integral term, notice that for $x \in [0, a_n]$,</p> <p>\begin{align*} 1 - (1-x^n)^{1/n} &amp;= 1 - e^{\frac{1}{n}\log(1-x^n)} \stackrel{\text{(1)}}{\leq} -\frac{1}{n}\log(1-x^n) \\ &amp;= \int_{0}^{x} \frac{t^{n-1}}{1-t^n} \, dt \stackrel{\text{(2)}}{\leq} \int_{0}^{x} 2t^{n-1} \, dt \\ &amp;= \frac{2}{n}x^n. \end{align*}</p> <p>For $\text{(1)}$, we used the inequality $e^t \geq 1 + t$ which holds for all real $t$. The second inequality $\text{(2)}$ follows from the fact that $1-t^n \geq \frac{1}{2}$ for $t \in [0,a_n]$. Thus</p> <p>$$ u_n \leq \frac{4}{n}\int_{0}^{a_n} x^n \, dx + (1-a_n)^2 \leq \frac{4}{n(n+1)} + (1-a_n)^2 = \mathcal{O}\left(\frac{1}{n^2}\right). $$</p> <p>This proves the convergence of $\sum_{n=1}^{\infty} u_n$.</p> Answer 42: <p>Using Euler's Beta function, we are simply asked about the convergence of $$ \sum_{n\geq 1}\left(1-\frac{\Gamma\left(1+\tfrac{1}{n}\right)^2}{\Gamma\left(1+\tfrac{2}{n}\right)}\right) $$ where $\Gamma(1+x)$ is a regular function in a neighbourhood of $x=0$. Since $$ \log\Gamma(1+x) = -\gamma x+\frac{\zeta(2)}{2!}x^2-\frac{\zeta(3)}{3}x^3+\frac{\zeta(4)}{4}x^4-\ldots $$ we have that $$ 1-\frac{\Gamma\left(1+\tfrac{1}{n}\right)^2}{\Gamma\left(1+\tfrac{2}{n}\right)} \sim \frac{\pi^2}{6n^2} $$ so the given series is convergent for sure. Geometrically, we are saying that if $A_n$ is the area of the region in the $[0,1]\times[0,1]$ square given by $x^{n+1}+y^{n+1}\leq 1 \leq x^n+y^n$, then $$ A_1+2A_2+3 A_3+\ldots $$ is a convergent series. 
That can be proved by elementary means, too.</p>
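The bound $0\le u_n\le \pi^2/(6n^2)$ obtained in the answers above is easy to sanity-check numerically. The sketch below (a plain midpoint-rule approximation in Python; the helper name `u` is ad hoc, not from any of the answers) evaluates $u_n=\int_0^1\bigl(1-(1-t^n)^{1/n}\bigr)\,dt$ for a few $n$ and compares it with $\pi^2/(6n^2)$:

```python
import math

def u(n, steps=200_000):
    # Midpoint-rule approximation of u_n = int_0^1 (1 - (1 - t^n)^(1/n)) dt.
    h = 1.0 / steps
    total = 0.0
    for k in range(steps):
        t = (k + 0.5) * h          # midpoint of the k-th subinterval, strictly inside (0, 1)
        total += 1.0 - (1.0 - t**n) ** (1.0 / n)
    return total * h

# The answers bound u_n by pi^2 / (6 n^2); compare for a few n.
for n in (2, 5, 10, 20):
    bound = math.pi**2 / (6 * n**2)
    print(n, u(n), bound, u(n) <= bound)
```

For $n=2$ the integral is exactly $1-\pi/4\approx 0.2146$, comfortably below the bound $\pi^2/24\approx 0.4112$, and the ratio approaches the asymptotic $\pi^2/(6n^2)$ from below as $n$ grows.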
Two disjoint convex closed sets that cannot be separated by a closed hyperplane?
2,099,712
8
https://math.stackexchange.com/questions/2099712/two-disjoint-convex-closed-sets-that-cannot-be-separated-by-a-closed-hyperplane
<p>When I was reading "Functional Analysis" written by Brezis, I noticed the following counterexample, saying that two disjoint closed convex sets may not be separated by a closed hyperplane. </p> <p>Let $E=l^{1}$ (the space of real, absolutely convergent series) and define $$X=\{x=(x_{n})_{n\geq1}\in E:x_{2n}=0 \ \forall n\geq 1\}$$ and $$Y=\{y=(y_{n})_{n\geq1}\in E:y_{2n}=\frac {1}{2^{n}}y_{2n-1} \ \forall n\geq 1\}$$ (i) Show that $X$ and $Y$ are closed linear spaces and $\overline{X+Y}=E$. </p> <p>(ii) Let $c=(c_{n})_{n\geq1}\in E$ be defined by $c_{2n}=\frac{1}{2^{n}}$ and $c_{2n-1}=0$ for all $n\geq 1$. Check that $c\notin X+Y$.</p> <p>(iii) Set $Z=X-c$ (then $Y\bigcap Z=\emptyset$). Show that $Y$ and $Z$ cannot be separated by a closed hyperplane.</p> <p>I can prove that (i) and (ii) hold. But I'm stuck in part (iii). How can I prove $Y$ and $Z$ cannot be separated by a closed hyperplane? Thanks!</p>
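As a side note on part (ii) (not part of the original question): if $c=x+y$ with $x\in X$ and $y\in Y$, then comparing even coordinates gives $2^{-n}=c_{2n}=x_{2n}+y_{2n}=0+2^{-n}y_{2n-1}$, so $y_{2n-1}=1$ for every $n$, and such a $y$ cannot lie in $\ell^1$. A small Python sketch (the helper `forced_y` is hypothetical, introduced only for this illustration) shows the forced partial $\ell^1$ norms growing without bound:

```python
def forced_y(N):
    """Truncation (y_1, ..., y_{2N}) of the y forced by c = x + y with x in X, y in Y.

    Since x_{2n} = 0 and y_{2n} = 2^{-n} * y_{2n-1}, matching c_{2n} = 2^{-n}
    forces y_{2n-1} = 1 and hence y_{2n} = 2^{-n}.
    """
    y = []
    for n in range(1, N + 1):
        y.append(1.0)           # y_{2n-1} = 1
        y.append(2.0 ** -n)     # y_{2n} = 2^{-n}
    return y

# The partial l^1 norms are N + (1 - 2^{-N}), which diverges, so y is not in l^1
# and therefore c is not in X + Y.
for N in (10, 100, 1000):
    print(N, sum(abs(t) for t in forced_y(N)))
```
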
[ { "answer_id": 2099830, "score": 9, "text": "<p>Suppose that they are separated by a closed hyperplane, so that you have a continuous functional $\\varphi:E\\to \\mathbb{R}$ and some $\\alpha\\in\\mathbb{R}$ such that $\\varphi(x-c)&gt;\\alpha&gt;\\varphi(y)$ for all $x\\in X, y\\in Y$. Note that since $Y$ is a vector space we must have that $\\varphi(Y)=\\{0\\}$ (because otherwise its image will be $\\mathbb{R}$). Similarly, $\\varphi(X)=\\{0\\}$ because otherwise $\\varphi(X-c)=\\varphi(X)-\\varphi(c)=\\mathbb{R}-\\varphi(c)=\\mathbb{R}$. We conclude that $X+Y\\subseteq \\ker(\\varphi)$, and from continuity $E=\\overline{X+Y}\\subseteq \\ker(\\varphi)$ and you get a contradiction to the separation.</p>\n" }, { "answer_id": 4220770, "score": 1, "text": "<p>As requested by another user, here is a proof of (i):<br />\nConsider the continuous projections <span class=\"math-container\">$\\pi_k:E\\to\\Bbb R, (c_n)_n\\mapsto c_k$</span>. We have <span class=\"math-container\">$X=\\bigcap_{n=1}^\\infty \\pi_{2n}^{-1}(0)$</span>, so <span class=\"math-container\">$X$</span> is closed as an intersection of closed subsets. Similarly <span class=\"math-container\">$Y=\\bigcap_{n=1}^\\infty (\\pi_{2n}-\\frac{1}{2^n}\\pi_{2n-1})^{-1}(0)$</span> is closed.<br />\nNow let <span class=\"math-container\">$\\varepsilon&gt;0$</span> and <span class=\"math-container\">$(c_n)_n\\in E$</span>. Let <span class=\"math-container\">$N\\in\\Bbb N$</span> be big enough such that <span class=\"math-container\">$\\sum_{n&gt;N}|c_n|&lt;\\varepsilon$</span>. We may assume that <span class=\"math-container\">$N$</span> is even. 
Then define <span class=\"math-container\">$(y_n)_n\\in Y$</span> as <span class=\"math-container\">$$(y_n)=(*,c_2,*,c_4,*,\\dots,*,c_N,0,0,\\dots)$$</span>\nNote that the <span class=\"math-container\">$*$</span>-values are uniquely determined by the condition <span class=\"math-container\">$(y_n)\\in Y$</span>.<br />\nThen define <span class=\"math-container\">$(x_n)_n\\in E$</span> by <span class=\"math-container\">$x_k=c_k-y_k$</span> for <span class=\"math-container\">$k\\leq N$</span> and <span class=\"math-container\">$x_k=0$</span> for <span class=\"math-container\">$k&gt;N$</span>. Note that by definition <span class=\"math-container\">$(x_n)_n\\in X$</span>. We have:\n<span class=\"math-container\">\\begin{align*}\n\\|((x_n)_n+(y_n)_n)-(c_n)_n\\|&amp;=\\sum_{n=1}^N |x_n+y_n-c_n|+\\sum_{n=N+1}^\\infty|x_n+y_n-c_n|\\\\\n&amp;=\\sum_{n=N+1}^\\infty|c_n|\\\\\n&amp;&lt;\\varepsilon\n\\end{align*}</span>\nThus <span class=\"math-container\">$(c_n)_n\\in \\overline{X+Y}$</span>.</p>\n" }, { "answer_id": 2099830, "score": 9, "text": "<p>Suppose that they are separated by a closed hyperplane, so that you have a continuous functional $\\varphi:E\\to \\mathbb{R}$ and some $\\alpha\\in\\mathbb{R}$ such that $\\varphi(x-c)&gt;\\alpha&gt;\\varphi(y)$ for all $x\\in X, y\\in Y$. Note that since $Y$ is a vector space we must have that $\\varphi(Y)=\\{0\\}$ (because otherwise its image will be $\\mathbb{R}$). Similarly, $\\varphi(X)=\\{0\\}$ because otherwise $\\varphi(X-c)=\\varphi(X)-\\varphi(c)=\\mathbb{R}-\\varphi(c)=\\mathbb{R}$. We conclude that $X+Y\\subseteq \\ker(\\varphi)$, and from continuity $E=\\overline{X+Y}\\subseteq \\ker(\\varphi)$ and you get a contradiction to the separation.</p>\n" }, { "answer_id": 4220770, "score": 1, "text": "<p>As requested by another user, here is a proof of (i):<br />\nConsider the continuous projections <span class=\"math-container\">$\\pi_k:E\\to\\Bbb R, (c_n)_n\\mapsto c_k$</span>. 
We have <span class=\"math-container\">$X=\\bigcap_{n=1}^\\infty \\pi_{2n}^{-1}(0)$</span>, so <span class=\"math-container\">$X$</span> is closed as a intersection of closed subsets. Similarly <span class=\"math-container\">$Y=\\bigcap_{n=1}^\\infty (\\pi_{2n}-\\frac{1}{2^n}\\pi_{2n-1})^{-1}(0)$</span> is closed.<br />\nNow let <span class=\"math-container\">$\\varepsilon&gt;0$</span> and <span class=\"math-container\">$(c_n)_n\\in E$</span>. Let <span class=\"math-container\">$N\\in\\Bbb N$</span> big enough such that <span class=\"math-container\">$\\sum_{n&gt;N}|c_n|&lt;\\varepsilon$</span>. We may assume that <span class=\"math-container\">$N$</span> is even. Then define <span class=\"math-container\">$(y_n)_n\\in Y$</span> as <span class=\"math-container\">$$(y_n)=(*,c_2,*,c_4,*,\\dots,*,c_N,0,0,\\dots)$$</span>\nNote that the <span class=\"math-container\">$*$</span>-values are uniquely determined by the condition <span class=\"math-container\">$(y_n)\\in Y$</span>.<br />\nThen define <span class=\"math-container\">$(x_n)_n\\in E$</span> by <span class=\"math-container\">$x_k=c_k-y_k$</span> for <span class=\"math-container\">$k\\leq N$</span> and <span class=\"math-container\">$x_k=0$</span> for <span class=\"math-container\">$k&gt;N$</span>. Note that by definition <span class=\"math-container\">$(x_n)_n\\in X$</span>. We have:\n<span class=\"math-container\">\\begin{align*}\n\\|((x_n)_n+(y_n)_n)-(c_n)_n\\|&amp;=\\sum_{n=1}^N |x_n+y_n-c_n|+\\sum_{n=N+1}^\\infty|x_n+y_n-c_n|\\\\\n&amp;=\\sum_{n=N+1}^\\infty|c_n|\\\\\n&amp;&lt;\\varepsilon\n\\end{align*}</span>\nThus <span class=\"math-container\">$(c_n)_n\\in \\overline{X+Y}$</span>.</p>\n" }, { "answer_id": 2099830, "score": 9, "text": "<p>Suppose that they are separated by a closed hyperplane, so that you have a continuous functional $\\varphi:E\\to \\mathbb{R}$ and some $\\alpha\\in\\mathbb{R}$ such that $\\varphi(x-c)&gt;\\alpha&gt;\\varphi(y)$ for all $x\\in X, y\\in Y$. 
Note that since $Y$ is a vector space we must have that $\\varphi(Y)=\\{0\\}$ (because otherwise its image will be $\\mathbb{R}$). Similarly, $\\varphi(X)=\\{0\\}$ because otherwise $\\varphi(X-c)=\\varphi(X)-\\varphi(c)=\\mathbb{R}-\\varphi(c)=\\mathbb{R}$. We conclude that $X+Y\\subseteq \\ker(\\varphi)$, and from continuity $E=\\overline{X+Y}\\subseteq \\ker(\\varphi)$ and you get a contradiction to the separation.</p>\n" }, { "answer_id": 4220770, "score": 1, "text": "<p>As requested by another user, here is a proof of (i):<br />\nConsider the continuous projections <span class=\"math-container\">$\\pi_k:E\\to\\Bbb R, (c_n)_n\\mapsto c_k$</span>. We have <span class=\"math-container\">$X=\\bigcap_{n=1}^\\infty \\pi_{2n}^{-1}(0)$</span>, so <span class=\"math-container\">$X$</span> is closed as a intersection of closed subsets. Similarly <span class=\"math-container\">$Y=\\bigcap_{n=1}^\\infty (\\pi_{2n}-\\frac{1}{2^n}\\pi_{2n-1})^{-1}(0)$</span> is closed.<br />\nNow let <span class=\"math-container\">$\\varepsilon&gt;0$</span> and <span class=\"math-container\">$(c_n)_n\\in E$</span>. Let <span class=\"math-container\">$N\\in\\Bbb N$</span> big enough such that <span class=\"math-container\">$\\sum_{n&gt;N}|c_n|&lt;\\varepsilon$</span>. We may assume that <span class=\"math-container\">$N$</span> is even. Then define <span class=\"math-container\">$(y_n)_n\\in Y$</span> as <span class=\"math-container\">$$(y_n)=(*,c_2,*,c_4,*,\\dots,*,c_N,0,0,\\dots)$$</span>\nNote that the <span class=\"math-container\">$*$</span>-values are uniquely determined by the condition <span class=\"math-container\">$(y_n)\\in Y$</span>.<br />\nThen define <span class=\"math-container\">$(x_n)_n\\in E$</span> by <span class=\"math-container\">$x_k=c_k-y_k$</span> for <span class=\"math-container\">$k\\leq N$</span> and <span class=\"math-container\">$x_k=0$</span> for <span class=\"math-container\">$k&gt;N$</span>. 
Note that by definition <span class=\"math-container\">$(x_n)_n\\in X$</span>. We have:\n<span class=\"math-container\">\\begin{align*}\n\\|((x_n)_n+(y_n)_n)-(c_n)_n\\|&amp;=\\sum_{n=1}^N |x_n+y_n-c_n|+\\sum_{n=N+1}^\\infty|x_n+y_n-c_n|\\\\\n&amp;=\\sum_{n=N+1}^\\infty|c_n|\\\\\n&amp;&lt;\\varepsilon\n\\end{align*}</span>\nThus <span class=\"math-container\">$(c_n)_n\\in \\overline{X+Y}$</span>.</p>\n" }, { "answer_id": 2099830, "score": 9, "text": "<p>Suppose that they are separated by a closed hyperplane, so that you have a continuous functional $\\varphi:E\\to \\mathbb{R}$ and some $\\alpha\\in\\mathbb{R}$ such that $\\varphi(x-c)&gt;\\alpha&gt;\\varphi(y)$ for all $x\\in X, y\\in Y$. Note that since $Y$ is a vector space we must have that $\\varphi(Y)=\\{0\\}$ (because otherwise its image will be $\\mathbb{R}$). Similarly, $\\varphi(X)=\\{0\\}$ because otherwise $\\varphi(X-c)=\\varphi(X)-\\varphi(c)=\\mathbb{R}-\\varphi(c)=\\mathbb{R}$. We conclude that $X+Y\\subseteq \\ker(\\varphi)$, and from continuity $E=\\overline{X+Y}\\subseteq \\ker(\\varphi)$ and you get a contradiction to the separation.</p>\n" }, { "answer_id": 4220770, "score": 1, "text": "<p>As requested by another user, here is a proof of (i):<br />\nConsider the continuous projections <span class=\"math-container\">$\\pi_k:E\\to\\Bbb R, (c_n)_n\\mapsto c_k$</span>. We have <span class=\"math-container\">$X=\\bigcap_{n=1}^\\infty \\pi_{2n}^{-1}(0)$</span>, so <span class=\"math-container\">$X$</span> is closed as a intersection of closed subsets. Similarly <span class=\"math-container\">$Y=\\bigcap_{n=1}^\\infty (\\pi_{2n}-\\frac{1}{2^n}\\pi_{2n-1})^{-1}(0)$</span> is closed.<br />\nNow let <span class=\"math-container\">$\\varepsilon&gt;0$</span> and <span class=\"math-container\">$(c_n)_n\\in E$</span>. Let <span class=\"math-container\">$N\\in\\Bbb N$</span> big enough such that <span class=\"math-container\">$\\sum_{n&gt;N}|c_n|&lt;\\varepsilon$</span>. 
We may assume that <span class=\"math-container\">$N$</span> is even. Then define <span class=\"math-container\">$(y_n)_n\\in Y$</span> as <span class=\"math-container\">$$(y_n)=(*,c_2,*,c_4,*,\\dots,*,c_N,0,0,\\dots)$$</span>\nNote that the <span class=\"math-container\">$*$</span>-values are uniquely determined by the condition <span class=\"math-container\">$(y_n)\\in Y$</span>.<br />\nThen define <span class=\"math-container\">$(x_n)_n\\in E$</span> by <span class=\"math-container\">$x_k=c_k-y_k$</span> for <span class=\"math-container\">$k\\leq N$</span> and <span class=\"math-container\">$x_k=0$</span> for <span class=\"math-container\">$k&gt;N$</span>. Note that by definition <span class=\"math-container\">$(x_n)_n\\in X$</span>. We have:\n<span class=\"math-container\">\\begin{align*}\n\\|((x_n)_n+(y_n)_n)-(c_n)_n\\|&amp;=\\sum_{n=1}^N |x_n+y_n-c_n|+\\sum_{n=N+1}^\\infty|x_n+y_n-c_n|\\\\\n&amp;=\\sum_{n=N+1}^\\infty|c_n|\\\\\n&amp;&lt;\\varepsilon\n\\end{align*}</span>\nThus <span class=\"math-container\">$(c_n)_n\\in \\overline{X+Y}$</span>.</p>\n" }, { "answer_id": 2099830, "score": 9, "text": "<p>Suppose that they are separated by a closed hyperplane, so that you have a continuous functional $\\varphi:E\\to \\mathbb{R}$ and some $\\alpha\\in\\mathbb{R}$ such that $\\varphi(x-c)&gt;\\alpha&gt;\\varphi(y)$ for all $x\\in X, y\\in Y$. Note that since $Y$ is a vector space we must have that $\\varphi(Y)=\\{0\\}$ (because otherwise its image will be $\\mathbb{R}$). Similarly, $\\varphi(X)=\\{0\\}$ because otherwise $\\varphi(X-c)=\\varphi(X)-\\varphi(c)=\\mathbb{R}-\\varphi(c)=\\mathbb{R}$. 
We conclude that $X+Y\\subseteq \\ker(\\varphi)$, and from continuity $E=\\overline{X+Y}\\subseteq \\ker(\\varphi)$ and you get a contradiction to the separation.</p>\n" }, { "answer_id": 4220770, "score": 1, "text": "<p>As requested by another user, here is a proof of (i):<br />\nConsider the continuous projections <span class=\"math-container\">$\\pi_k:E\\to\\Bbb R, (c_n)_n\\mapsto c_k$</span>. We have <span class=\"math-container\">$X=\\bigcap_{n=1}^\\infty \\pi_{2n}^{-1}(0)$</span>, so <span class=\"math-container\">$X$</span> is closed as a intersection of closed subsets. Similarly <span class=\"math-container\">$Y=\\bigcap_{n=1}^\\infty (\\pi_{2n}-\\frac{1}{2^n}\\pi_{2n-1})^{-1}(0)$</span> is closed.<br />\nNow let <span class=\"math-container\">$\\varepsilon&gt;0$</span> and <span class=\"math-container\">$(c_n)_n\\in E$</span>. Let <span class=\"math-container\">$N\\in\\Bbb N$</span> big enough such that <span class=\"math-container\">$\\sum_{n&gt;N}|c_n|&lt;\\varepsilon$</span>. We may assume that <span class=\"math-container\">$N$</span> is even. Then define <span class=\"math-container\">$(y_n)_n\\in Y$</span> as <span class=\"math-container\">$$(y_n)=(*,c_2,*,c_4,*,\\dots,*,c_N,0,0,\\dots)$$</span>\nNote that the <span class=\"math-container\">$*$</span>-values are uniquely determined by the condition <span class=\"math-container\">$(y_n)\\in Y$</span>.<br />\nThen define <span class=\"math-container\">$(x_n)_n\\in E$</span> by <span class=\"math-container\">$x_k=c_k-y_k$</span> for <span class=\"math-container\">$k\\leq N$</span> and <span class=\"math-container\">$x_k=0$</span> for <span class=\"math-container\">$k&gt;N$</span>. Note that by definition <span class=\"math-container\">$(x_n)_n\\in X$</span>. 
We have:\n<span class=\"math-container\">\\begin{align*}\n\\|((x_n)_n+(y_n)_n)-(c_n)_n\\|&amp;=\\sum_{n=1}^N |x_n+y_n-c_n|+\\sum_{n=N+1}^\\infty|x_n+y_n-c_n|\\\\\n&amp;=\\sum_{n=N+1}^\\infty|c_n|\\\\\n&amp;&lt;\\varepsilon\n\\end{align*}</span>\nThus <span class=\"math-container\">$(c_n)_n\\in \\overline{X+Y}$</span>.</p>\n" }, { "answer_id": 2099830, "score": 9, "text": "<p>Suppose that they are separated by a closed hyperplane, so that you have a continuous functional $\\varphi:E\\to \\mathbb{R}$ and some $\\alpha\\in\\mathbb{R}$ such that $\\varphi(x-c)&gt;\\alpha&gt;\\varphi(y)$ for all $x\\in X, y\\in Y$. Note that since $Y$ is a vector space we must have that $\\varphi(Y)=\\{0\\}$ (because otherwise its image will be $\\mathbb{R}$). Similarly, $\\varphi(X)=\\{0\\}$ because otherwise $\\varphi(X-c)=\\varphi(X)-\\varphi(c)=\\mathbb{R}-\\varphi(c)=\\mathbb{R}$. We conclude that $X+Y\\subseteq \\ker(\\varphi)$, and from continuity $E=\\overline{X+Y}\\subseteq \\ker(\\varphi)$ and you get a contradiction to the separation.</p>\n" }, { "answer_id": 4220770, "score": 1, "text": "<p>As requested by another user, here is a proof of (i):<br />\nConsider the continuous projections <span class=\"math-container\">$\\pi_k:E\\to\\Bbb R, (c_n)_n\\mapsto c_k$</span>. We have <span class=\"math-container\">$X=\\bigcap_{n=1}^\\infty \\pi_{2n}^{-1}(0)$</span>, so <span class=\"math-container\">$X$</span> is closed as a intersection of closed subsets. Similarly <span class=\"math-container\">$Y=\\bigcap_{n=1}^\\infty (\\pi_{2n}-\\frac{1}{2^n}\\pi_{2n-1})^{-1}(0)$</span> is closed.<br />\nNow let <span class=\"math-container\">$\\varepsilon&gt;0$</span> and <span class=\"math-container\">$(c_n)_n\\in E$</span>. Let <span class=\"math-container\">$N\\in\\Bbb N$</span> big enough such that <span class=\"math-container\">$\\sum_{n&gt;N}|c_n|&lt;\\varepsilon$</span>. We may assume that <span class=\"math-container\">$N$</span> is even. 
Then define <span class=\"math-container\">$(y_n)_n\\in Y$</span> as <span class=\"math-container\">$$(y_n)=(*,c_2,*,c_4,*,\\dots,*,c_N,0,0,\\dots)$$</span>\nNote that the <span class=\"math-container\">$*$</span>-values are uniquely determined by the condition <span class=\"math-container\">$(y_n)\\in Y$</span>.<br />\nThen define <span class=\"math-container\">$(x_n)_n\\in E$</span> by <span class=\"math-container\">$x_k=c_k-y_k$</span> for <span class=\"math-container\">$k\\leq N$</span> and <span class=\"math-container\">$x_k=0$</span> for <span class=\"math-container\">$k&gt;N$</span>. Note that by definition <span class=\"math-container\">$(x_n)_n\\in X$</span>. We have:\n<span class=\"math-container\">\\begin{align*}\n\\|((x_n)_n+(y_n)_n)-(c_n)_n\\|&amp;=\\sum_{n=1}^N |x_n+y_n-c_n|+\\sum_{n=N+1}^\\infty|x_n+y_n-c_n|\\\\\n&amp;=\\sum_{n=N+1}^\\infty|c_n|\\\\\n&amp;&lt;\\varepsilon\n\\end{align*}</span>\nThus <span class=\"math-container\">$(c_n)_n\\in \\overline{X+Y}$</span>.</p>\n" }, { "answer_id": 2099830, "score": 9, "text": "<p>Suppose that they are separated by a closed hyperplane, so that you have a continuous functional $\\varphi:E\\to \\mathbb{R}$ and some $\\alpha\\in\\mathbb{R}$ such that $\\varphi(x-c)&gt;\\alpha&gt;\\varphi(y)$ for all $x\\in X, y\\in Y$. Note that since $Y$ is a vector space we must have that $\\varphi(Y)=\\{0\\}$ (because otherwise its image will be $\\mathbb{R}$). Similarly, $\\varphi(X)=\\{0\\}$ because otherwise $\\varphi(X-c)=\\varphi(X)-\\varphi(c)=\\mathbb{R}-\\varphi(c)=\\mathbb{R}$. We conclude that $X+Y\\subseteq \\ker(\\varphi)$, and from continuity $E=\\overline{X+Y}\\subseteq \\ker(\\varphi)$ and you get a contradiction to the separation.</p>\n" }, { "answer_id": 4220770, "score": 1, "text": "<p>As requested by another user, here is a proof of (i):<br />\nConsider the continuous projections <span class=\"math-container\">$\\pi_k:E\\to\\Bbb R, (c_n)_n\\mapsto c_k$</span>. 
We have <span class=\"math-container\">$X=\\bigcap_{n=1}^\\infty \\pi_{2n}^{-1}(0)$</span>, so <span class=\"math-container\">$X$</span> is closed as a intersection of closed subsets. Similarly <span class=\"math-container\">$Y=\\bigcap_{n=1}^\\infty (\\pi_{2n}-\\frac{1}{2^n}\\pi_{2n-1})^{-1}(0)$</span> is closed.<br />\nNow let <span class=\"math-container\">$\\varepsilon&gt;0$</span> and <span class=\"math-container\">$(c_n)_n\\in E$</span>. Let <span class=\"math-container\">$N\\in\\Bbb N$</span> big enough such that <span class=\"math-container\">$\\sum_{n&gt;N}|c_n|&lt;\\varepsilon$</span>. We may assume that <span class=\"math-container\">$N$</span> is even. Then define <span class=\"math-container\">$(y_n)_n\\in Y$</span> as <span class=\"math-container\">$$(y_n)=(*,c_2,*,c_4,*,\\dots,*,c_N,0,0,\\dots)$$</span>\nNote that the <span class=\"math-container\">$*$</span>-values are uniquely determined by the condition <span class=\"math-container\">$(y_n)\\in Y$</span>.<br />\nThen define <span class=\"math-container\">$(x_n)_n\\in E$</span> by <span class=\"math-container\">$x_k=c_k-y_k$</span> for <span class=\"math-container\">$k\\leq N$</span> and <span class=\"math-container\">$x_k=0$</span> for <span class=\"math-container\">$k&gt;N$</span>. Note that by definition <span class=\"math-container\">$(x_n)_n\\in X$</span>. We have:\n<span class=\"math-container\">\\begin{align*}\n\\|((x_n)_n+(y_n)_n)-(c_n)_n\\|&amp;=\\sum_{n=1}^N |x_n+y_n-c_n|+\\sum_{n=N+1}^\\infty|x_n+y_n-c_n|\\\\\n&amp;=\\sum_{n=N+1}^\\infty|c_n|\\\\\n&amp;&lt;\\varepsilon\n\\end{align*}</span>\nThus <span class=\"math-container\">$(c_n)_n\\in \\overline{X+Y}$</span>.</p>\n" }, { "answer_id": 2099830, "score": 9, "text": "<p>Suppose that they are separated by a closed hyperplane, so that you have a continuous functional $\\varphi:E\\to \\mathbb{R}$ and some $\\alpha\\in\\mathbb{R}$ such that $\\varphi(x-c)&gt;\\alpha&gt;\\varphi(y)$ for all $x\\in X, y\\in Y$. 
Note that since $Y$ is a vector space we must have that $\\varphi(Y)=\\{0\\}$ (because otherwise its image will be $\\mathbb{R}$). Similarly, $\\varphi(X)=\\{0\\}$ because otherwise $\\varphi(X-c)=\\varphi(X)-\\varphi(c)=\\mathbb{R}-\\varphi(c)=\\mathbb{R}$. We conclude that $X+Y\\subseteq \\ker(\\varphi)$, and from continuity $E=\\overline{X+Y}\\subseteq \\ker(\\varphi)$ and you get a contradiction to the separation.</p>\n" }, { "answer_id": 4220770, "score": 1, "text": "<p>As requested by another user, here is a proof of (i):<br />\nConsider the continuous projections <span class=\"math-container\">$\\pi_k:E\\to\\Bbb R, (c_n)_n\\mapsto c_k$</span>. We have <span class=\"math-container\">$X=\\bigcap_{n=1}^\\infty \\pi_{2n}^{-1}(0)$</span>, so <span class=\"math-container\">$X$</span> is closed as a intersection of closed subsets. Similarly <span class=\"math-container\">$Y=\\bigcap_{n=1}^\\infty (\\pi_{2n}-\\frac{1}{2^n}\\pi_{2n-1})^{-1}(0)$</span> is closed.<br />\nNow let <span class=\"math-container\">$\\varepsilon&gt;0$</span> and <span class=\"math-container\">$(c_n)_n\\in E$</span>. Let <span class=\"math-container\">$N\\in\\Bbb N$</span> big enough such that <span class=\"math-container\">$\\sum_{n&gt;N}|c_n|&lt;\\varepsilon$</span>. We may assume that <span class=\"math-container\">$N$</span> is even. Then define <span class=\"math-container\">$(y_n)_n\\in Y$</span> as <span class=\"math-container\">$$(y_n)=(*,c_2,*,c_4,*,\\dots,*,c_N,0,0,\\dots)$$</span>\nNote that the <span class=\"math-container\">$*$</span>-values are uniquely determined by the condition <span class=\"math-container\">$(y_n)\\in Y$</span>.<br />\nThen define <span class=\"math-container\">$(x_n)_n\\in E$</span> by <span class=\"math-container\">$x_k=c_k-y_k$</span> for <span class=\"math-container\">$k\\leq N$</span> and <span class=\"math-container\">$x_k=0$</span> for <span class=\"math-container\">$k&gt;N$</span>. 
Note that by definition <span class=\"math-container\">$(x_n)_n\\in X$</span>. We have:\n<span class=\"math-container\">\\begin{align*}\n\\|((x_n)_n+(y_n)_n)-(c_n)_n\\|&amp;=\\sum_{n=1}^N |x_n+y_n-c_n|+\\sum_{n=N+1}^\\infty|x_n+y_n-c_n|\\\\\n&amp;=\\sum_{n=N+1}^\\infty|c_n|\\\\\n&amp;&lt;\\varepsilon\n\\end{align*}</span>\nThus <span class=\"math-container\">$(c_n)_n\\in \\overline{X+Y}$</span>.</p>\n" }, { "answer_id": 2099830, "score": 9, "text": "<p>Suppose that they are separated by a closed hyperplane, so that you have a continuous functional $\\varphi:E\\to \\mathbb{R}$ and some $\\alpha\\in\\mathbb{R}$ such that $\\varphi(x-c)&gt;\\alpha&gt;\\varphi(y)$ for all $x\\in X, y\\in Y$. Note that since $Y$ is a vector space we must have that $\\varphi(Y)=\\{0\\}$ (because otherwise its image will be $\\mathbb{R}$). Similarly, $\\varphi(X)=\\{0\\}$ because otherwise $\\varphi(X-c)=\\varphi(X)-\\varphi(c)=\\mathbb{R}-\\varphi(c)=\\mathbb{R}$. We conclude that $X+Y\\subseteq \\ker(\\varphi)$, and from continuity $E=\\overline{X+Y}\\subseteq \\ker(\\varphi)$ and you get a contradiction to the separation.</p>\n" }, { "answer_id": 4220770, "score": 1, "text": "<p>As requested by another user, here is a proof of (i):<br />\nConsider the continuous projections <span class=\"math-container\">$\\pi_k:E\\to\\Bbb R, (c_n)_n\\mapsto c_k$</span>. We have <span class=\"math-container\">$X=\\bigcap_{n=1}^\\infty \\pi_{2n}^{-1}(0)$</span>, so <span class=\"math-container\">$X$</span> is closed as a intersection of closed subsets. Similarly <span class=\"math-container\">$Y=\\bigcap_{n=1}^\\infty (\\pi_{2n}-\\frac{1}{2^n}\\pi_{2n-1})^{-1}(0)$</span> is closed.<br />\nNow let <span class=\"math-container\">$\\varepsilon&gt;0$</span> and <span class=\"math-container\">$(c_n)_n\\in E$</span>. Let <span class=\"math-container\">$N\\in\\Bbb N$</span> big enough such that <span class=\"math-container\">$\\sum_{n&gt;N}|c_n|&lt;\\varepsilon$</span>. 
We may assume that <span class=\"math-container\">$N$</span> is even. Then define <span class=\"math-container\">$(y_n)_n\\in Y$</span> as <span class=\"math-container\">$$(y_n)=(*,c_2,*,c_4,*,\\dots,*,c_N,0,0,\\dots)$$</span>\nNote that the <span class=\"math-container\">$*$</span>-values are uniquely determined by the condition <span class=\"math-container\">$(y_n)\\in Y$</span>.<br />\nThen define <span class=\"math-container\">$(x_n)_n\\in E$</span> by <span class=\"math-container\">$x_k=c_k-y_k$</span> for <span class=\"math-container\">$k\\leq N$</span> and <span class=\"math-container\">$x_k=0$</span> for <span class=\"math-container\">$k&gt;N$</span>. Note that by definition <span class=\"math-container\">$(x_n)_n\\in X$</span>. We have:\n<span class=\"math-container\">\\begin{align*}\n\\|((x_n)_n+(y_n)_n)-(c_n)_n\\|&amp;=\\sum_{n=1}^N |x_n+y_n-c_n|+\\sum_{n=N+1}^\\infty|x_n+y_n-c_n|\\\\\n&amp;=\\sum_{n=N+1}^\\infty|c_n|\\\\\n&amp;&lt;\\varepsilon\n\\end{align*}</span>\nThus <span class=\"math-container\">$(c_n)_n\\in \\overline{X+Y}$</span>.</p>\n" }, { "answer_id": 2099830, "score": 9, "text": "<p>Suppose that they are separated by a closed hyperplane, so that you have a continuous functional $\\varphi:E\\to \\mathbb{R}$ and some $\\alpha\\in\\mathbb{R}$ such that $\\varphi(x-c)&gt;\\alpha&gt;\\varphi(y)$ for all $x\\in X, y\\in Y$. Note that since $Y$ is a vector space we must have that $\\varphi(Y)=\\{0\\}$ (because otherwise its image will be $\\mathbb{R}$). Similarly, $\\varphi(X)=\\{0\\}$ because otherwise $\\varphi(X-c)=\\varphi(X)-\\varphi(c)=\\mathbb{R}-\\varphi(c)=\\mathbb{R}$. 
We conclude that $X+Y\\subseteq \\ker(\\varphi)$, and from continuity $E=\\overline{X+Y}\\subseteq \\ker(\\varphi)$ and you get a contradiction to the separation.</p>\n" }
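As a numerical sanity check of the density argument in the second answer, the decomposition it constructs can be carried out explicitly on a truncated sequence. This is an illustrative sketch only — the `decompose` helper and the sample sequence are not part of the original answers; indices are 1-based in the math and 0-based in the code.

```python
# Illustrative sketch (not from the original answers): split a truncated
# sequence c into x + y with x in X (all even-indexed entries zero, 1-based)
# and y in Y (entries satisfy y_{2n} = y_{2n-1} / 2^n), as in the proof
# that X + Y is dense.

def decompose(c, N):
    """Split c (1-based index n stored at c[n-1]) at an even cutoff N."""
    assert N % 2 == 0 and N <= len(c)
    y = [0.0] * len(c)
    for n in range(1, N // 2 + 1):
        y[2 * n - 1] = c[2 * n - 1]             # y_{2n} = c_{2n}
        y[2 * n - 2] = (2 ** n) * c[2 * n - 1]  # y_{2n-1} = 2^n * c_{2n}
    x = [(c[k] - y[k]) if k < N else 0.0 for k in range(len(c))]
    return x, y

c = [1.0, 0.5, -0.25, 0.125, 0.0625, -0.03125, 0.01, -0.002]
N = 6
x, y = decompose(c, N)
# x lies in X: every even-indexed (1-based) entry vanishes
assert all(x[2 * n - 1] == 0.0 for n in range(1, len(c) // 2 + 1))
# y lies in Y: y_{2n} = y_{2n-1} / 2^n
assert all(y[2 * n - 1] == y[2 * n - 2] / 2 ** n for n in range(1, len(c) // 2 + 1))
# ||(x + y) - c||_1 equals the tail sum beyond N, matching the estimate
tail = sum(abs(v) for v in c[N:])
err = sum(abs(xi + yi - ci) for xi, yi, ci in zip(x, y, c))
print(err, tail)
```

The first N coordinates of x + y reproduce c exactly, so the l1-error is precisely the tail sum, which the proof makes smaller than any given epsilon.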
Question: Two disjoint convex closed sets that cannot be separated by a closed hyperplane? Answer 1: <p>Suppose that they are separated by a closed hyperplane, so that you have a continuous functional $\varphi:E\to \mathbb{R}$ and some $\alpha\in\mathbb{R}$ such that $\varphi(x-c)&gt;\alpha&gt;\varphi(y)$ for all $x\in X, y\in Y$. Note that since $Y$ is a vector space we must have that $\varphi(Y)=\{0\}$ (because otherwise its image would be all of $\mathbb{R}$). Similarly, $\varphi(X)=\{0\}$, because otherwise $\varphi(X-c)=\varphi(X)-\varphi(c)=\mathbb{R}-\varphi(c)=\mathbb{R}$. We conclude that $X+Y\subseteq \ker(\varphi)$, and from continuity $E=\overline{X+Y}\subseteq \ker(\varphi)$, so $\varphi=0$, contradicting the strict separation $\varphi(x-c)&gt;\alpha&gt;\varphi(y)$.</p> Answer 2: <p>As requested by another user, here is a proof of (i):<br />
Consider the continuous projections <span class="math-container">$\pi_k:E\to\Bbb R, (c_n)_n\mapsto c_k$</span>. We have <span class="math-container">$X=\bigcap_{n=1}^\infty \pi_{2n}^{-1}(0)$</span>, so <span class="math-container">$X$</span> is closed as an intersection of closed subsets. Similarly <span class="math-container">$Y=\bigcap_{n=1}^\infty (\pi_{2n}-\frac{1}{2^n}\pi_{2n-1})^{-1}(0)$</span> is closed.<br />
Now let <span class="math-container">$\varepsilon&gt;0$</span> and <span class="math-container">$(c_n)_n\in E$</span>. Let <span class="math-container">$N\in\Bbb N$</span> be big enough such that <span class="math-container">$\sum_{n&gt;N}|c_n|&lt;\varepsilon$</span>. We may assume that <span class="math-container">$N$</span> is even.
Then define <span class="math-container">$(y_n)_n\in Y$</span> as <span class="math-container">$$(y_n)=(*,c_2,*,c_4,*,\dots,*,c_N,0,0,\dots)$$</span> Note that the <span class="math-container">$*$</span>-values are uniquely determined by the condition <span class="math-container">$(y_n)\in Y$</span>.<br /> Then define <span class="math-container">$(x_n)_n\in E$</span> by <span class="math-container">$x_k=c_k-y_k$</span> for <span class="math-container">$k\leq N$</span> and <span class="math-container">$x_k=0$</span> for <span class="math-container">$k&gt;N$</span>. Note that by definition <span class="math-container">$(x_n)_n\in X$</span>. We have: <span class="math-container">\begin{align*} \|((x_n)_n+(y_n)_n)-(c_n)_n\|&amp;=\sum_{n=1}^N |x_n+y_n-c_n|+\sum_{n=N+1}^\infty|x_n+y_n-c_n|\\ &amp;=\sum_{n=N+1}^\infty|c_n|\\ &amp;&lt;\varepsilon \end{align*}</span> Thus <span class="math-container">$(c_n)_n\in \overline{X+Y}$</span>.</p> Answer 3: <p>Suppose that they are separated by a closed hyperplane, so that you have a continuous functional $\varphi:E\to \mathbb{R}$ and some $\alpha\in\mathbb{R}$ such that $\varphi(x-c)&gt;\alpha&gt;\varphi(y)$ for all $x\in X, y\in Y$. Note that since $Y$ is a vector space we must have that $\varphi(Y)=\{0\}$ (because otherwise its image will be $\mathbb{R}$). Similarly, $\varphi(X)=\{0\}$ because otherwise $\varphi(X-c)=\varphi(X)-\varphi(c)=\mathbb{R}-\varphi(c)=\mathbb{R}$. We conclude that $X+Y\subseteq \ker(\varphi)$, and from continuity $E=\overline{X+Y}\subseteq \ker(\varphi)$ and you get a contradiction to the separation.</p> Answer 4: <p>As requested by another user, here is a proof of (i):<br /> Consider the continuous projections <span class="math-container">$\pi_k:E\to\Bbb R, (c_n)_n\mapsto c_k$</span>. We have <span class="math-container">$X=\bigcap_{n=1}^\infty \pi_{2n}^{-1}(0)$</span>, so <span class="math-container">$X$</span> is closed as a intersection of closed subsets. 
Similarly <span class="math-container">$Y=\bigcap_{n=1}^\infty (\pi_{2n}-\frac{1}{2^n}\pi_{2n-1})^{-1}(0)$</span> is closed.<br /> Now let <span class="math-container">$\varepsilon&gt;0$</span> and <span class="math-container">$(c_n)_n\in E$</span>. Let <span class="math-container">$N\in\Bbb N$</span> big enough such that <span class="math-container">$\sum_{n&gt;N}|c_n|&lt;\varepsilon$</span>. We may assume that <span class="math-container">$N$</span> is even. Then define <span class="math-container">$(y_n)_n\in Y$</span> as <span class="math-container">$$(y_n)=(*,c_2,*,c_4,*,\dots,*,c_N,0,0,\dots)$$</span> Note that the <span class="math-container">$*$</span>-values are uniquely determined by the condition <span class="math-container">$(y_n)\in Y$</span>.<br /> Then define <span class="math-container">$(x_n)_n\in E$</span> by <span class="math-container">$x_k=c_k-y_k$</span> for <span class="math-container">$k\leq N$</span> and <span class="math-container">$x_k=0$</span> for <span class="math-container">$k&gt;N$</span>. Note that by definition <span class="math-container">$(x_n)_n\in X$</span>. We have: <span class="math-container">\begin{align*} \|((x_n)_n+(y_n)_n)-(c_n)_n\|&amp;=\sum_{n=1}^N |x_n+y_n-c_n|+\sum_{n=N+1}^\infty|x_n+y_n-c_n|\\ &amp;=\sum_{n=N+1}^\infty|c_n|\\ &amp;&lt;\varepsilon \end{align*}</span> Thus <span class="math-container">$(c_n)_n\in \overline{X+Y}$</span>.</p> Answer 5: <p>Suppose that they are separated by a closed hyperplane, so that you have a continuous functional $\varphi:E\to \mathbb{R}$ and some $\alpha\in\mathbb{R}$ such that $\varphi(x-c)&gt;\alpha&gt;\varphi(y)$ for all $x\in X, y\in Y$. Note that since $Y$ is a vector space we must have that $\varphi(Y)=\{0\}$ (because otherwise its image will be $\mathbb{R}$). Similarly, $\varphi(X)=\{0\}$ because otherwise $\varphi(X-c)=\varphi(X)-\varphi(c)=\mathbb{R}-\varphi(c)=\mathbb{R}$. 
We conclude that $X+Y\subseteq \ker(\varphi)$, and from continuity $E=\overline{X+Y}\subseteq \ker(\varphi)$ and you get a contradiction to the separation.</p> Answer 6: <p>As requested by another user, here is a proof of (i):<br /> Consider the continuous projections <span class="math-container">$\pi_k:E\to\Bbb R, (c_n)_n\mapsto c_k$</span>. We have <span class="math-container">$X=\bigcap_{n=1}^\infty \pi_{2n}^{-1}(0)$</span>, so <span class="math-container">$X$</span> is closed as a intersection of closed subsets. Similarly <span class="math-container">$Y=\bigcap_{n=1}^\infty (\pi_{2n}-\frac{1}{2^n}\pi_{2n-1})^{-1}(0)$</span> is closed.<br /> Now let <span class="math-container">$\varepsilon&gt;0$</span> and <span class="math-container">$(c_n)_n\in E$</span>. Let <span class="math-container">$N\in\Bbb N$</span> big enough such that <span class="math-container">$\sum_{n&gt;N}|c_n|&lt;\varepsilon$</span>. We may assume that <span class="math-container">$N$</span> is even. Then define <span class="math-container">$(y_n)_n\in Y$</span> as <span class="math-container">$$(y_n)=(*,c_2,*,c_4,*,\dots,*,c_N,0,0,\dots)$$</span> Note that the <span class="math-container">$*$</span>-values are uniquely determined by the condition <span class="math-container">$(y_n)\in Y$</span>.<br /> Then define <span class="math-container">$(x_n)_n\in E$</span> by <span class="math-container">$x_k=c_k-y_k$</span> for <span class="math-container">$k\leq N$</span> and <span class="math-container">$x_k=0$</span> for <span class="math-container">$k&gt;N$</span>. Note that by definition <span class="math-container">$(x_n)_n\in X$</span>. 
We have: <span class="math-container">\begin{align*} \|((x_n)_n+(y_n)_n)-(c_n)_n\|&amp;=\sum_{n=1}^N |x_n+y_n-c_n|+\sum_{n=N+1}^\infty|x_n+y_n-c_n|\\ &amp;=\sum_{n=N+1}^\infty|c_n|\\ &amp;&lt;\varepsilon \end{align*}</span> Thus <span class="math-container">$(c_n)_n\in \overline{X+Y}$</span>.</p> Answer 7: <p>Suppose that they are separated by a closed hyperplane, so that you have a continuous functional $\varphi:E\to \mathbb{R}$ and some $\alpha\in\mathbb{R}$ such that $\varphi(x-c)&gt;\alpha&gt;\varphi(y)$ for all $x\in X, y\in Y$. Note that since $Y$ is a vector space we must have that $\varphi(Y)=\{0\}$ (because otherwise its image will be $\mathbb{R}$). Similarly, $\varphi(X)=\{0\}$ because otherwise $\varphi(X-c)=\varphi(X)-\varphi(c)=\mathbb{R}-\varphi(c)=\mathbb{R}$. We conclude that $X+Y\subseteq \ker(\varphi)$, and from continuity $E=\overline{X+Y}\subseteq \ker(\varphi)$ and you get a contradiction to the separation.</p> Answer 8: <p>As requested by another user, here is a proof of (i):<br /> Consider the continuous projections <span class="math-container">$\pi_k:E\to\Bbb R, (c_n)_n\mapsto c_k$</span>. We have <span class="math-container">$X=\bigcap_{n=1}^\infty \pi_{2n}^{-1}(0)$</span>, so <span class="math-container">$X$</span> is closed as a intersection of closed subsets. Similarly <span class="math-container">$Y=\bigcap_{n=1}^\infty (\pi_{2n}-\frac{1}{2^n}\pi_{2n-1})^{-1}(0)$</span> is closed.<br /> Now let <span class="math-container">$\varepsilon&gt;0$</span> and <span class="math-container">$(c_n)_n\in E$</span>. Let <span class="math-container">$N\in\Bbb N$</span> big enough such that <span class="math-container">$\sum_{n&gt;N}|c_n|&lt;\varepsilon$</span>. We may assume that <span class="math-container">$N$</span> is even. 
Then define <span class="math-container">$(y_n)_n\in Y$</span> as <span class="math-container">$$(y_n)=(*,c_2,*,c_4,*,\dots,*,c_N,0,0,\dots)$$</span> Note that the <span class="math-container">$*$</span>-values are uniquely determined by the condition <span class="math-container">$(y_n)\in Y$</span>.<br /> Then define <span class="math-container">$(x_n)_n\in E$</span> by <span class="math-container">$x_k=c_k-y_k$</span> for <span class="math-container">$k\leq N$</span> and <span class="math-container">$x_k=0$</span> for <span class="math-container">$k&gt;N$</span>. Note that by definition <span class="math-container">$(x_n)_n\in X$</span>. We have: <span class="math-container">\begin{align*} \|((x_n)_n+(y_n)_n)-(c_n)_n\|&amp;=\sum_{n=1}^N |x_n+y_n-c_n|+\sum_{n=N+1}^\infty|x_n+y_n-c_n|\\ &amp;=\sum_{n=N+1}^\infty|c_n|\\ &amp;&lt;\varepsilon \end{align*}</span> Thus <span class="math-container">$(c_n)_n\in \overline{X+Y}$</span>.</p> Answer 9: <p>Suppose that they are separated by a closed hyperplane, so that you have a continuous functional $\varphi:E\to \mathbb{R}$ and some $\alpha\in\mathbb{R}$ such that $\varphi(x-c)&gt;\alpha&gt;\varphi(y)$ for all $x\in X, y\in Y$. Note that since $Y$ is a vector space we must have that $\varphi(Y)=\{0\}$ (because otherwise its image will be $\mathbb{R}$). Similarly, $\varphi(X)=\{0\}$ because otherwise $\varphi(X-c)=\varphi(X)-\varphi(c)=\mathbb{R}-\varphi(c)=\mathbb{R}$. We conclude that $X+Y\subseteq \ker(\varphi)$, and from continuity $E=\overline{X+Y}\subseteq \ker(\varphi)$ and you get a contradiction to the separation.</p> Answer 10: <p>As requested by another user, here is a proof of (i):<br /> Consider the continuous projections <span class="math-container">$\pi_k:E\to\Bbb R, (c_n)_n\mapsto c_k$</span>. We have <span class="math-container">$X=\bigcap_{n=1}^\infty \pi_{2n}^{-1}(0)$</span>, so <span class="math-container">$X$</span> is closed as a intersection of closed subsets. 
Similarly <span class="math-container">$Y=\bigcap_{n=1}^\infty (\pi_{2n}-\frac{1}{2^n}\pi_{2n-1})^{-1}(0)$</span> is closed.<br /> Now let <span class="math-container">$\varepsilon&gt;0$</span> and <span class="math-container">$(c_n)_n\in E$</span>. Let <span class="math-container">$N\in\Bbb N$</span> big enough such that <span class="math-container">$\sum_{n&gt;N}|c_n|&lt;\varepsilon$</span>. We may assume that <span class="math-container">$N$</span> is even. Then define <span class="math-container">$(y_n)_n\in Y$</span> as <span class="math-container">$$(y_n)=(*,c_2,*,c_4,*,\dots,*,c_N,0,0,\dots)$$</span> Note that the <span class="math-container">$*$</span>-values are uniquely determined by the condition <span class="math-container">$(y_n)\in Y$</span>.<br /> Then define <span class="math-container">$(x_n)_n\in E$</span> by <span class="math-container">$x_k=c_k-y_k$</span> for <span class="math-container">$k\leq N$</span> and <span class="math-container">$x_k=0$</span> for <span class="math-container">$k&gt;N$</span>. Note that by definition <span class="math-container">$(x_n)_n\in X$</span>. We have: <span class="math-container">\begin{align*} \|((x_n)_n+(y_n)_n)-(c_n)_n\|&amp;=\sum_{n=1}^N |x_n+y_n-c_n|+\sum_{n=N+1}^\infty|x_n+y_n-c_n|\\ &amp;=\sum_{n=N+1}^\infty|c_n|\\ &amp;&lt;\varepsilon \end{align*}</span> Thus <span class="math-container">$(c_n)_n\in \overline{X+Y}$</span>.</p> Answer 11: <p>Suppose that they are separated by a closed hyperplane, so that you have a continuous functional $\varphi:E\to \mathbb{R}$ and some $\alpha\in\mathbb{R}$ such that $\varphi(x-c)&gt;\alpha&gt;\varphi(y)$ for all $x\in X, y\in Y$. Note that since $Y$ is a vector space we must have that $\varphi(Y)=\{0\}$ (because otherwise its image will be $\mathbb{R}$). Similarly, $\varphi(X)=\{0\}$ because otherwise $\varphi(X-c)=\varphi(X)-\varphi(c)=\mathbb{R}-\varphi(c)=\mathbb{R}$. 
We conclude that $X+Y\subseteq \ker(\varphi)$, and from continuity $E=\overline{X+Y}\subseteq \ker(\varphi)$ and you get a contradiction to the separation.</p> Answer 12: <p>As requested by another user, here is a proof of (i):<br /> Consider the continuous projections <span class="math-container">$\pi_k:E\to\Bbb R, (c_n)_n\mapsto c_k$</span>. We have <span class="math-container">$X=\bigcap_{n=1}^\infty \pi_{2n}^{-1}(0)$</span>, so <span class="math-container">$X$</span> is closed as a intersection of closed subsets. Similarly <span class="math-container">$Y=\bigcap_{n=1}^\infty (\pi_{2n}-\frac{1}{2^n}\pi_{2n-1})^{-1}(0)$</span> is closed.<br /> Now let <span class="math-container">$\varepsilon&gt;0$</span> and <span class="math-container">$(c_n)_n\in E$</span>. Let <span class="math-container">$N\in\Bbb N$</span> big enough such that <span class="math-container">$\sum_{n&gt;N}|c_n|&lt;\varepsilon$</span>. We may assume that <span class="math-container">$N$</span> is even. Then define <span class="math-container">$(y_n)_n\in Y$</span> as <span class="math-container">$$(y_n)=(*,c_2,*,c_4,*,\dots,*,c_N,0,0,\dots)$$</span> Note that the <span class="math-container">$*$</span>-values are uniquely determined by the condition <span class="math-container">$(y_n)\in Y$</span>.<br /> Then define <span class="math-container">$(x_n)_n\in E$</span> by <span class="math-container">$x_k=c_k-y_k$</span> for <span class="math-container">$k\leq N$</span> and <span class="math-container">$x_k=0$</span> for <span class="math-container">$k&gt;N$</span>. Note that by definition <span class="math-container">$(x_n)_n\in X$</span>. 
We have: <span class="math-container">\begin{align*} \|((x_n)_n+(y_n)_n)-(c_n)_n\|&amp;=\sum_{n=1}^N |x_n+y_n-c_n|+\sum_{n=N+1}^\infty|x_n+y_n-c_n|\\ &amp;=\sum_{n=N+1}^\infty|c_n|\\ &amp;&lt;\varepsilon \end{align*}</span> Thus <span class="math-container">$(c_n)_n\in \overline{X+Y}$</span>.</p> Answer 13: <p>Suppose that they are separated by a closed hyperplane, so that you have a continuous functional $\varphi:E\to \mathbb{R}$ and some $\alpha\in\mathbb{R}$ such that $\varphi(x-c)&gt;\alpha&gt;\varphi(y)$ for all $x\in X, y\in Y$. Note that since $Y$ is a vector space we must have that $\varphi(Y)=\{0\}$ (because otherwise its image will be $\mathbb{R}$). Similarly, $\varphi(X)=\{0\}$ because otherwise $\varphi(X-c)=\varphi(X)-\varphi(c)=\mathbb{R}-\varphi(c)=\mathbb{R}$. We conclude that $X+Y\subseteq \ker(\varphi)$, and from continuity $E=\overline{X+Y}\subseteq \ker(\varphi)$ and you get a contradiction to the separation.</p>
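The approximation step in the proof of (i) above can be checked numerically. The sketch below is only an illustration: the `approximate` helper and its 0-based index bookkeeping are made up here, with $X$ taken as the sequences whose even coordinates vanish and $Y$ as the sequences satisfying $y_{2n}=y_{2n-1}/2^n$.

```python
def approximate(c, N):
    """Split a truncated sequence c into x in X and y in Y.

    Positions are 1-based in the proof; Python lists are 0-based, so
    position j corresponds to index j - 1.  N must be even.
    X = sequences whose even coordinates vanish,
    Y = sequences with y_{2n} = y_{2n-1} / 2^n.
    """
    assert N % 2 == 0
    y = [0.0] * len(c)
    for n in range(1, N // 2 + 1):
        y[2 * n - 1] = c[2 * n - 1]             # y_{2n} = c_{2n}
        y[2 * n - 2] = (2 ** n) * c[2 * n - 1]  # forced '*'-value: y_{2n-1} = 2^n c_{2n}
    x = [c[k] - y[k] if k < N else 0.0 for k in range(len(c))]
    return x, y

c = [1.0, 0.5, -0.25, 0.125, 0.0625, 0.03125, 0.01, 0.002]
x, y = approximate(c, N=6)
# x has vanishing even coordinates, y satisfies the Y-condition, and x + y
# agrees with c on the first N coordinates.
err = sum(abs(x[k] + y[k] - c[k]) for k in range(len(c)))
tail = sum(abs(c[k]) for k in range(6, len(c)))
assert all(x[2 * n - 1] == 0.0 for n in range(1, len(c) // 2 + 1))    # x in X
assert all(abs(y[2 * n - 1] - y[2 * n - 2] / 2 ** n) < 1e-12
           for n in range(1, len(c) // 2 + 1))                        # y in Y
assert abs(err - tail) < 1e-12    # l^1 error is exactly the tail sum
```

Since $x+y$ matches $c$ on the first $N$ coordinates, the $\ell^1$ distance from $x+y$ to $c$ is exactly the tail sum $\sum_{n>N}|c_n|$, which can be made smaller than any $\varepsilon$.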
"In a party people shake hands with one another"
2,099,740
8
https://math.stackexchange.com/questions/2099740/in-a-party-people-shake-hands-with-one-another
<blockquote> <p>In a party people shake hands with one another (not necessarily everyone with everyone else).<br> (a) Show that 2 people shake hands the same no. of times.<br> (b) Show that the number of people who shake hands an odd no. of times is even. </p> </blockquote> <p>I find this question to be very tough and I can't find a way to even start answering this. It would be great if I could get a small hint in order for me to at least start it.</p>
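Both parts can be sanity-checked by brute force on random handshake patterns before attempting a proof. The sketch below is illustrative only; the `random_party` helper is made up here and represents a party as a list of per-guest handshake counts.

```python
import random

def random_party(n, p=0.5, seed=None):
    """Random handshake pattern: each pair shakes hands with probability p.
    Returns the list of handshake counts (degrees), one per guest."""
    rng = random.Random(seed)
    deg = [0] * n
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < p:
                deg[i] += 1
                deg[j] += 1
    return deg

for trial in range(200):
    deg = random_party(10, seed=trial)
    # (a) with 10 guests the counts lie in {0,...,9}, but 0 and 9 cannot
    # both occur, leaving 9 possible values for 10 guests: two must coincide.
    assert len(set(deg)) < len(deg)
    # (b) each handshake contributes 2 to the sum of counts, so the number
    # of guests with an odd count must be even.
    assert sum(d % 2 for d in deg) % 2 == 0
```

The checks pass on every trial, which matches the pigeonhole argument for (a) and the handshaking lemma for (b).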
[ { "answer_id": 2099746, "score": 10, "text": "<p><strong>Hints:</strong> </p>\n\n<p>For (A) with $n$ people at the party, what are the possible numbers of hands each person could shake?</p>\n\n<p>Show that it is impossible for both someone to have shaken hands with everyone and someone to have shaken hands with no one simultaneously.</p>\n\n<p>Is there some way to use the pigeon-hole principle here?</p>\n\n<hr>\n\n<p>For (B) count the number of handshakes that occur by adding up how many handshakes people participated in individually and recognize that this overcounted somehow (because each handshake we counted twice: once from the shorter person of the two and again from the taller person). This gives us a useful result called \"the handshaking lemma.\"</p>\n\n<p>What happens then if there are an odd number of people who shook an odd number of hands? Remember that the number of handshakes must be a whole number.</p>\n" }, { "answer_id": 2099756, "score": 5, "text": "<p>$(a)$ Say that there are $n$ people in the party and assume everyone shakes hands at least once. The number of handshakes possible for everybody then is just $n-1$ (self-handshaking doesn't count) while there are $n$ people in the party, so the pigeon-hole principle gives the desired result. Now, if we don't assume that everyone shakes at least one hand then we can separate the $n$ people between $k$ which don't shake any hands and $m$ which do ($k+m=n$), and the problem is reduced to the last case (with $n=m$).</p>\n\n<p>$(b)$ If a person $A$ shakes hands with a person $B$, then $B$ shakes hands with $A$. Silly as it sounds, this tells us that summing each person's handshake count counts every handshake twice, so the sum must be even. Counting separately the handshakes of those who have given an even number of handshakes and those who have given an odd number of handshakes leads to the desired conclusion.</p>\n" }, { "answer_id": 2100171, "score": 2, "text": "<p>For (B) try induction.
What happens when the next two people shake hands?</p>\n" }, { "answer_id": 4327550, "score": 0, "text": "<p>Assume that a party has $n$ guests; then each guest shakes between $0$ and $n-1$ hands.\nBut if any guest shakes $0$ hands, then no guest can shake $n-1$ hands, so the counts $0$ and $n-1$ cannot both occur among the guests. Thus at most $n-1$ different handshake counts occur among the $n$ guests, and by the pigeonhole principle at least two people must have the same number of handshakes. Here the pigeons are the guests and the pigeonholes are the possible handshake counts.\nI hope this helps.\nThank you</p>\n" }, { "answer_id": 2099756, "score": 5, "text": "<p>$(a)$ Say that there are $n$ people in the party and assume everyone shakes hands at least once. The number of handshakes possible for everybody then is just $n-1$ (self-handshaking doesn't count) while there are $n$ people in the party, so the pigeon-hole principle gives the desired result. Now, if we don't assume that everyone shakes at least one hand then we can separate the $n$ people between $k$ which don't shake any hands and $m$ which do ($k+m=n$), and the problem is reduced to the last case (with $n=m$).</p>\n\n<p>$(b)$ If a person $A$ shakes hands with a person $B$, then $B$ shakes hands with $A$. Silly as it sounds, this tells us that summing each person's handshake count counts every handshake twice, so the sum must be even.
Counting separately the handshakes of those who have given an even number of handshakes and those who have given an odd number of handshakes leads to the desired conclusion.</p>\n" }, { "answer_id": 2100171, "score": 2, "text": "<p>For (B) try induction. What happens when the next two people shake hands?</p>\n" }, { "answer_id": 4327550, "score": 0, "text": "<p>Assume that a party has n number of guests, then each of that guests has to do hand shake between n-1 and 0.\nBut if any of guests has 0 hand shake, then no guest can have hand shake of n-1 , so it is not possible for the hand shakes in the party and for the guests to have both 0 and n-1, thus the number of hand shakes in the party can only have n-1 different hand shake so by the pigeon hole principals at least two people must have the some number of hand shake in the party . so in here the pigeons are the number of hand shakes and the pigeon holes are the number of the guests.\nthis is my answer for the this question. I hope it well help you.\nthank you</p>\n" }, { "answer_id": 2099746, "score": 10, "text": "<p><strong>Hints:</strong> </p>\n\n<p>For (A) with $n$ people at the party, what are the possible number of hands each person could shake?</p>\n\n<p>Show that it is impossible for both someone to have shaken hands with everyone and someone to have shaken hands with noone simultaneously.</p>\n\n<p>Is there some way to use the pigeon-hole principle here?</p>\n\n<hr>\n\n<p>For (B) count the number of handshakes that occur by adding up how many handshakes people participated in individually and recognize that this overcounted somehow (because each handshake we counted twice: once from the shorter person of the two and again from the taller person). This gives us a useful result called \"the handshaking lemma.\"</p>\n\n<p>What happens then if there are an odd number of people who shook an odd number of hands? 
Remember that the number of handshakes must be a whole number.</p>\n" }, { "answer_id": 2099756, "score": 5, "text": "<p>$(a)$ Say that there is $n$ people in the party and assume everyone shakes hands at least once. The number of handshakes possible for everybody then is just $n-1$ (self-handshaking doesn't count) while there is $n$ people in the party, so the pingeon-hole principle gives the desired result. Now, if we don't assume that everyone shakes at least one hand then we can separate the $n$ people between $k$ which don't shake any hands and $m$ which do ($k+m=n$), and the problem is reduced to the last case (with $n=m$).</p>\n\n<p>$(b)$ If a person $A$ shakes hands with a person $B$, then $B$ shakes hands with $A$. Silly as it sounds this tells us that we can count handshakes in pairs, so the total number of handshakes must be even. Counting separately the handshakes of those who have given an even number of handshakes and those who have given an odd number of handshakes leads to the desired conclusion.</p>\n" }, { "answer_id": 2100171, "score": 2, "text": "<p>For (B) try induction. What happens when the next two people shake hands?</p>\n" }, { "answer_id": 4327550, "score": 0, "text": "<p>Assume that a party has n number of guests, then each of that guests has to do hand shake between n-1 and 0.\nBut if any of guests has 0 hand shake, then no guest can have hand shake of n-1 , so it is not possible for the hand shakes in the party and for the guests to have both 0 and n-1, thus the number of hand shakes in the party can only have n-1 different hand shake so by the pigeon hole principals at least two people must have the some number of hand shake in the party . so in here the pigeons are the number of hand shakes and the pigeon holes are the number of the guests.\nthis is my answer for the this question. 
I hope it well help you.\nthank you</p>\n" }, { "answer_id": 2099746, "score": 10, "text": "<p><strong>Hints:</strong> </p>\n\n<p>For (A) with $n$ people at the party, what are the possible number of hands each person could shake?</p>\n\n<p>Show that it is impossible for both someone to have shaken hands with everyone and someone to have shaken hands with noone simultaneously.</p>\n\n<p>Is there some way to use the pigeon-hole principle here?</p>\n\n<hr>\n\n<p>For (B) count the number of handshakes that occur by adding up how many handshakes people participated in individually and recognize that this overcounted somehow (because each handshake we counted twice: once from the shorter person of the two and again from the taller person). This gives us a useful result called \"the handshaking lemma.\"</p>\n\n<p>What happens then if there are an odd number of people who shook an odd number of hands? Remember that the number of handshakes must be a whole number.</p>\n" }, { "answer_id": 2099756, "score": 5, "text": "<p>$(a)$ Say that there is $n$ people in the party and assume everyone shakes hands at least once. The number of handshakes possible for everybody then is just $n-1$ (self-handshaking doesn't count) while there is $n$ people in the party, so the pingeon-hole principle gives the desired result. Now, if we don't assume that everyone shakes at least one hand then we can separate the $n$ people between $k$ which don't shake any hands and $m$ which do ($k+m=n$), and the problem is reduced to the last case (with $n=m$).</p>\n\n<p>$(b)$ If a person $A$ shakes hands with a person $B$, then $B$ shakes hands with $A$. Silly as it sounds this tells us that we can count handshakes in pairs, so the total number of handshakes must be even. 
Counting separately the handshakes of those who have given an even number of handshakes and those who have given an odd number of handshakes leads to the desired conclusion.</p>\n" }, { "answer_id": 2100171, "score": 2, "text": "<p>For (B) try induction. What happens when the next two people shake hands?</p>\n" }, { "answer_id": 4327550, "score": 0, "text": "<p>Assume that a party has n number of guests, then each of that guests has to do hand shake between n-1 and 0.\nBut if any of guests has 0 hand shake, then no guest can have hand shake of n-1 , so it is not possible for the hand shakes in the party and for the guests to have both 0 and n-1, thus the number of hand shakes in the party can only have n-1 different hand shake so by the pigeon hole principals at least two people must have the some number of hand shake in the party . so in here the pigeons are the number of hand shakes and the pigeon holes are the number of the guests.\nthis is my answer for the this question. I hope it well help you.\nthank you</p>\n" }, { "answer_id": 2099746, "score": 10, "text": "<p><strong>Hints:</strong> </p>\n\n<p>For (A) with $n$ people at the party, what are the possible number of hands each person could shake?</p>\n\n<p>Show that it is impossible for both someone to have shaken hands with everyone and someone to have shaken hands with noone simultaneously.</p>\n\n<p>Is there some way to use the pigeon-hole principle here?</p>\n\n<hr>\n\n<p>For (B) count the number of handshakes that occur by adding up how many handshakes people participated in individually and recognize that this overcounted somehow (because each handshake we counted twice: once from the shorter person of the two and again from the taller person). This gives us a useful result called \"the handshaking lemma.\"</p>\n\n<p>What happens then if there are an odd number of people who shook an odd number of hands? 
Remember that the number of handshakes must be a whole number.</p>\n" }, { "answer_id": 2099756, "score": 5, "text": "<p>$(a)$ Say that there is $n$ people in the party and assume everyone shakes hands at least once. The number of handshakes possible for everybody then is just $n-1$ (self-handshaking doesn't count) while there is $n$ people in the party, so the pingeon-hole principle gives the desired result. Now, if we don't assume that everyone shakes at least one hand then we can separate the $n$ people between $k$ which don't shake any hands and $m$ which do ($k+m=n$), and the problem is reduced to the last case (with $n=m$).</p>\n\n<p>$(b)$ If a person $A$ shakes hands with a person $B$, then $B$ shakes hands with $A$. Silly as it sounds this tells us that we can count handshakes in pairs, so the total number of handshakes must be even. Counting separately the handshakes of those who have given an even number of handshakes and those who have given an odd number of handshakes leads to the desired conclusion.</p>\n" }, { "answer_id": 2100171, "score": 2, "text": "<p>For (B) try induction. What happens when the next two people shake hands?</p>\n" }, { "answer_id": 4327550, "score": 0, "text": "<p>Assume that a party has n number of guests, then each of that guests has to do hand shake between n-1 and 0.\nBut if any of guests has 0 hand shake, then no guest can have hand shake of n-1 , so it is not possible for the hand shakes in the party and for the guests to have both 0 and n-1, thus the number of hand shakes in the party can only have n-1 different hand shake so by the pigeon hole principals at least two people must have the some number of hand shake in the party . so in here the pigeons are the number of hand shakes and the pigeon holes are the number of the guests.\nthis is my answer for the this question. 
I hope it well help you.\nthank you</p>\n" }, { "answer_id": 2099746, "score": 10, "text": "<p><strong>Hints:</strong> </p>\n\n<p>For (A) with $n$ people at the party, what are the possible number of hands each person could shake?</p>\n\n<p>Show that it is impossible for both someone to have shaken hands with everyone and someone to have shaken hands with noone simultaneously.</p>\n\n<p>Is there some way to use the pigeon-hole principle here?</p>\n\n<hr>\n\n<p>For (B) count the number of handshakes that occur by adding up how many handshakes people participated in individually and recognize that this overcounted somehow (because each handshake we counted twice: once from the shorter person of the two and again from the taller person). This gives us a useful result called \"the handshaking lemma.\"</p>\n\n<p>What happens then if there are an odd number of people who shook an odd number of hands? Remember that the number of handshakes must be a whole number.</p>\n" }, { "answer_id": 2099756, "score": 5, "text": "<p>$(a)$ Say that there is $n$ people in the party and assume everyone shakes hands at least once. The number of handshakes possible for everybody then is just $n-1$ (self-handshaking doesn't count) while there is $n$ people in the party, so the pingeon-hole principle gives the desired result. Now, if we don't assume that everyone shakes at least one hand then we can separate the $n$ people between $k$ which don't shake any hands and $m$ which do ($k+m=n$), and the problem is reduced to the last case (with $n=m$).</p>\n\n<p>$(b)$ If a person $A$ shakes hands with a person $B$, then $B$ shakes hands with $A$. Silly as it sounds this tells us that we can count handshakes in pairs, so the total number of handshakes must be even. 
Counting separately the handshakes of those who have given an even number of handshakes and those who have given an odd number of handshakes leads to the desired conclusion.</p>\n" }, { "answer_id": 2100171, "score": 2, "text": "<p>For (B) try induction. What happens when the next two people shake hands?</p>\n" }, { "answer_id": 4327550, "score": 0, "text": "<p>Assume that a party has n number of guests, then each of that guests has to do hand shake between n-1 and 0.\nBut if any of guests has 0 hand shake, then no guest can have hand shake of n-1 , so it is not possible for the hand shakes in the party and for the guests to have both 0 and n-1, thus the number of hand shakes in the party can only have n-1 different hand shake so by the pigeon hole principals at least two people must have the some number of hand shake in the party . so in here the pigeons are the number of hand shakes and the pigeon holes are the number of the guests.\nthis is my answer for the this question. I hope it well help you.\nthank you</p>\n" }, { "answer_id": 2099746, "score": 10, "text": "<p><strong>Hints:</strong> </p>\n\n<p>For (A) with $n$ people at the party, what are the possible number of hands each person could shake?</p>\n\n<p>Show that it is impossible for both someone to have shaken hands with everyone and someone to have shaken hands with noone simultaneously.</p>\n\n<p>Is there some way to use the pigeon-hole principle here?</p>\n\n<hr>\n\n<p>For (B) count the number of handshakes that occur by adding up how many handshakes people participated in individually and recognize that this overcounted somehow (because each handshake we counted twice: once from the shorter person of the two and again from the taller person). This gives us a useful result called \"the handshaking lemma.\"</p>\n\n<p>What happens then if there are an odd number of people who shook an odd number of hands? 
Remember that the number of handshakes must be a whole number.</p>\n" }, { "answer_id": 2099756, "score": 5, "text": "<p>$(a)$ Say that there is $n$ people in the party and assume everyone shakes hands at least once. The number of handshakes possible for everybody then is just $n-1$ (self-handshaking doesn't count) while there is $n$ people in the party, so the pingeon-hole principle gives the desired result. Now, if we don't assume that everyone shakes at least one hand then we can separate the $n$ people between $k$ which don't shake any hands and $m$ which do ($k+m=n$), and the problem is reduced to the last case (with $n=m$).</p>\n\n<p>$(b)$ If a person $A$ shakes hands with a person $B$, then $B$ shakes hands with $A$. Silly as it sounds this tells us that we can count handshakes in pairs, so the total number of handshakes must be even. Counting separately the handshakes of those who have given an even number of handshakes and those who have given an odd number of handshakes leads to the desired conclusion.</p>\n" }, { "answer_id": 2100171, "score": 2, "text": "<p>For (B) try induction. What happens when the next two people shake hands?</p>\n" }, { "answer_id": 4327550, "score": 0, "text": "<p>Assume that a party has n number of guests, then each of that guests has to do hand shake between n-1 and 0.\nBut if any of guests has 0 hand shake, then no guest can have hand shake of n-1 , so it is not possible for the hand shakes in the party and for the guests to have both 0 and n-1, thus the number of hand shakes in the party can only have n-1 different hand shake so by the pigeon hole principals at least two people must have the some number of hand shake in the party . so in here the pigeons are the number of hand shakes and the pigeon holes are the number of the guests.\nthis is my answer for the this question. 
I hope it well help you.\nthank you</p>\n" }, { "answer_id": 2099746, "score": 10, "text": "<p><strong>Hints:</strong> </p>\n\n<p>For (A) with $n$ people at the party, what are the possible number of hands each person could shake?</p>\n\n<p>Show that it is impossible for both someone to have shaken hands with everyone and someone to have shaken hands with noone simultaneously.</p>\n\n<p>Is there some way to use the pigeon-hole principle here?</p>\n\n<hr>\n\n<p>For (B) count the number of handshakes that occur by adding up how many handshakes people participated in individually and recognize that this overcounted somehow (because each handshake we counted twice: once from the shorter person of the two and again from the taller person). This gives us a useful result called \"the handshaking lemma.\"</p>\n\n<p>What happens then if there are an odd number of people who shook an odd number of hands? Remember that the number of handshakes must be a whole number.</p>\n" }, { "answer_id": 2099756, "score": 5, "text": "<p>$(a)$ Say that there is $n$ people in the party and assume everyone shakes hands at least once. The number of handshakes possible for everybody then is just $n-1$ (self-handshaking doesn't count) while there is $n$ people in the party, so the pingeon-hole principle gives the desired result. Now, if we don't assume that everyone shakes at least one hand then we can separate the $n$ people between $k$ which don't shake any hands and $m$ which do ($k+m=n$), and the problem is reduced to the last case (with $n=m$).</p>\n\n<p>$(b)$ If a person $A$ shakes hands with a person $B$, then $B$ shakes hands with $A$. Silly as it sounds this tells us that we can count handshakes in pairs, so the total number of handshakes must be even. 
Counting separately the handshakes of those who have given an even number of handshakes and those who have given an odd number of handshakes leads to the desired conclusion.</p>\n" }, { "answer_id": 2100171, "score": 2, "text": "<p>For (B) try induction. What happens when the next two people shake hands?</p>\n" }, { "answer_id": 4327550, "score": 0, "text": "<p>Assume that a party has n number of guests, then each of that guests has to do hand shake between n-1 and 0.\nBut if any of guests has 0 hand shake, then no guest can have hand shake of n-1 , so it is not possible for the hand shakes in the party and for the guests to have both 0 and n-1, thus the number of hand shakes in the party can only have n-1 different hand shake so by the pigeon hole principals at least two people must have the some number of hand shake in the party . so in here the pigeons are the number of hand shakes and the pigeon holes are the number of the guests.\nthis is my answer for the this question. I hope it well help you.\nthank you</p>\n" }, { "answer_id": 2099746, "score": 10, "text": "<p><strong>Hints:</strong> </p>\n\n<p>For (A) with $n$ people at the party, what are the possible number of hands each person could shake?</p>\n\n<p>Show that it is impossible for both someone to have shaken hands with everyone and someone to have shaken hands with noone simultaneously.</p>\n\n<p>Is there some way to use the pigeon-hole principle here?</p>\n\n<hr>\n\n<p>For (B) count the number of handshakes that occur by adding up how many handshakes people participated in individually and recognize that this overcounted somehow (because each handshake we counted twice: once from the shorter person of the two and again from the taller person). This gives us a useful result called \"the handshaking lemma.\"</p>\n\n<p>What happens then if there are an odd number of people who shook an odd number of hands? 
Remember that the number of handshakes must be a whole number.</p>\n" }, { "answer_id": 2099756, "score": 5, "text": "<p>$(a)$ Say that there is $n$ people in the party and assume everyone shakes hands at least once. The number of handshakes possible for everybody then is just $n-1$ (self-handshaking doesn't count) while there is $n$ people in the party, so the pingeon-hole principle gives the desired result. Now, if we don't assume that everyone shakes at least one hand then we can separate the $n$ people between $k$ which don't shake any hands and $m$ which do ($k+m=n$), and the problem is reduced to the last case (with $n=m$).</p>\n\n<p>$(b)$ If a person $A$ shakes hands with a person $B$, then $B$ shakes hands with $A$. Silly as it sounds this tells us that we can count handshakes in pairs, so the total number of handshakes must be even. Counting separately the handshakes of those who have given an even number of handshakes and those who have given an odd number of handshakes leads to the desired conclusion.</p>\n" }, { "answer_id": 2100171, "score": 2, "text": "<p>For (B) try induction. What happens when the next two people shake hands?</p>\n" }, { "answer_id": 4327550, "score": 0, "text": "<p>Assume that a party has n number of guests, then each of that guests has to do hand shake between n-1 and 0.\nBut if any of guests has 0 hand shake, then no guest can have hand shake of n-1 , so it is not possible for the hand shakes in the party and for the guests to have both 0 and n-1, thus the number of hand shakes in the party can only have n-1 different hand shake so by the pigeon hole principals at least two people must have the some number of hand shake in the party . so in here the pigeons are the number of hand shakes and the pigeon holes are the number of the guests.\nthis is my answer for the this question. 
I hope it well help you.\nthank you</p>\n" }, { "answer_id": 2099746, "score": 10, "text": "<p><strong>Hints:</strong> </p>\n\n<p>For (A) with $n$ people at the party, what are the possible number of hands each person could shake?</p>\n\n<p>Show that it is impossible for both someone to have shaken hands with everyone and someone to have shaken hands with noone simultaneously.</p>\n\n<p>Is there some way to use the pigeon-hole principle here?</p>\n\n<hr>\n\n<p>For (B) count the number of handshakes that occur by adding up how many handshakes people participated in individually and recognize that this overcounted somehow (because each handshake we counted twice: once from the shorter person of the two and again from the taller person). This gives us a useful result called \"the handshaking lemma.\"</p>\n\n<p>What happens then if there are an odd number of people who shook an odd number of hands? Remember that the number of handshakes must be a whole number.</p>\n" }, { "answer_id": 2099756, "score": 5, "text": "<p>$(a)$ Say that there is $n$ people in the party and assume everyone shakes hands at least once. The number of handshakes possible for everybody then is just $n-1$ (self-handshaking doesn't count) while there is $n$ people in the party, so the pingeon-hole principle gives the desired result. Now, if we don't assume that everyone shakes at least one hand then we can separate the $n$ people between $k$ which don't shake any hands and $m$ which do ($k+m=n$), and the problem is reduced to the last case (with $n=m$).</p>\n\n<p>$(b)$ If a person $A$ shakes hands with a person $B$, then $B$ shakes hands with $A$. Silly as it sounds this tells us that we can count handshakes in pairs, so the total number of handshakes must be even. 
Counting separately the handshakes of those who have given an even number of handshakes and those who have given an odd number of handshakes leads to the desired conclusion.</p>\n" }, { "answer_id": 2100171, "score": 2, "text": "<p>For (B) try induction. What happens when the next two people shake hands?</p>\n" }, { "answer_id": 4327550, "score": 0, "text": "<p>Assume that a party has n number of guests, then each of that guests has to do hand shake between n-1 and 0.\nBut if any of guests has 0 hand shake, then no guest can have hand shake of n-1 , so it is not possible for the hand shakes in the party and for the guests to have both 0 and n-1, thus the number of hand shakes in the party can only have n-1 different hand shake so by the pigeon hole principals at least two people must have the some number of hand shake in the party . so in here the pigeons are the number of hand shakes and the pigeon holes are the number of the guests.\nthis is my answer for the this question. I hope it well help you.\nthank you</p>\n" }, { "answer_id": 2099746, "score": 10, "text": "<p><strong>Hints:</strong> </p>\n\n<p>For (A) with $n$ people at the party, what are the possible number of hands each person could shake?</p>\n\n<p>Show that it is impossible for both someone to have shaken hands with everyone and someone to have shaken hands with noone simultaneously.</p>\n\n<p>Is there some way to use the pigeon-hole principle here?</p>\n\n<hr>\n\n<p>For (B) count the number of handshakes that occur by adding up how many handshakes people participated in individually and recognize that this overcounted somehow (because each handshake we counted twice: once from the shorter person of the two and again from the taller person). This gives us a useful result called \"the handshaking lemma.\"</p>\n\n<p>What happens then if there are an odd number of people who shook an odd number of hands? 
Remember that the number of handshakes must be a whole number.</p>\n" }, { "answer_id": 2099756, "score": 5, "text": "<p>$(a)$ Say that there is $n$ people in the party and assume everyone shakes hands at least once. The number of handshakes possible for everybody then is just $n-1$ (self-handshaking doesn't count) while there is $n$ people in the party, so the pingeon-hole principle gives the desired result. Now, if we don't assume that everyone shakes at least one hand then we can separate the $n$ people between $k$ which don't shake any hands and $m$ which do ($k+m=n$), and the problem is reduced to the last case (with $n=m$).</p>\n\n<p>$(b)$ If a person $A$ shakes hands with a person $B$, then $B$ shakes hands with $A$. Silly as it sounds this tells us that we can count handshakes in pairs, so the total number of handshakes must be even. Counting separately the handshakes of those who have given an even number of handshakes and those who have given an odd number of handshakes leads to the desired conclusion.</p>\n" }, { "answer_id": 2100171, "score": 2, "text": "<p>For (B) try induction. What happens when the next two people shake hands?</p>\n" }, { "answer_id": 4327550, "score": 0, "text": "<p>Assume that a party has n number of guests, then each of that guests has to do hand shake between n-1 and 0.\nBut if any of guests has 0 hand shake, then no guest can have hand shake of n-1 , so it is not possible for the hand shakes in the party and for the guests to have both 0 and n-1, thus the number of hand shakes in the party can only have n-1 different hand shake so by the pigeon hole principals at least two people must have the some number of hand shake in the party . so in here the pigeons are the number of hand shakes and the pigeon holes are the number of the guests.\nthis is my answer for the this question. 
I hope it well help you.\nthank you</p>\n" }, { "answer_id": 2099746, "score": 10, "text": "<p><strong>Hints:</strong> </p>\n\n<p>For (A) with $n$ people at the party, what are the possible number of hands each person could shake?</p>\n\n<p>Show that it is impossible for both someone to have shaken hands with everyone and someone to have shaken hands with noone simultaneously.</p>\n\n<p>Is there some way to use the pigeon-hole principle here?</p>\n\n<hr>\n\n<p>For (B) count the number of handshakes that occur by adding up how many handshakes people participated in individually and recognize that this overcounted somehow (because each handshake we counted twice: once from the shorter person of the two and again from the taller person). This gives us a useful result called \"the handshaking lemma.\"</p>\n\n<p>What happens then if there are an odd number of people who shook an odd number of hands? Remember that the number of handshakes must be a whole number.</p>\n" }, { "answer_id": 2099756, "score": 5, "text": "<p>$(a)$ Say that there is $n$ people in the party and assume everyone shakes hands at least once. The number of handshakes possible for everybody then is just $n-1$ (self-handshaking doesn't count) while there is $n$ people in the party, so the pingeon-hole principle gives the desired result. Now, if we don't assume that everyone shakes at least one hand then we can separate the $n$ people between $k$ which don't shake any hands and $m$ which do ($k+m=n$), and the problem is reduced to the last case (with $n=m$).</p>\n\n<p>$(b)$ If a person $A$ shakes hands with a person $B$, then $B$ shakes hands with $A$. Silly as it sounds this tells us that we can count handshakes in pairs, so the total number of handshakes must be even. 
Counting separately the handshakes of those who have given an even number of handshakes and those who have given an odd number of handshakes leads to the desired conclusion.</p>\n" }, { "answer_id": 2100171, "score": 2, "text": "<p>For (B) try induction. What happens when the next two people shake hands?</p>\n" }, { "answer_id": 4327550, "score": 0, "text": "<p>Assume that a party has n guests; then each guest's number of handshakes is between 0 and n-1.\nBut if any guest has 0 handshakes, then no guest can have n-1 handshakes, so the counts 0 and n-1 cannot both occur among the guests; thus only n-1 different handshake counts are possible, so by the pigeon-hole principle at least two people must have the same number of handshakes at the party. Here the pigeons are the guests and the pigeonholes are the possible handshake counts.\nThis is my answer to the question. I hope it will help you.\nThank you.</p>\n" } ]
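Both claims in the answers above are easy to sanity-check mechanically. The following Python sketch is not part of any answer (the function name is mine); it exhaustively verifies claim (A), that some two guests always shake the same number of hands, for every possible handshake pattern among up to 6 guests:

```python
from itertools import combinations

def two_guests_match(n, shakes):
    """True iff some two of the n guests shook the same number of hands."""
    deg = [0] * n
    for a, b in shakes:
        deg[a] += 1
        deg[b] += 1
    return len(set(deg)) < n  # fewer distinct counts than guests

# Exhaustive check of claim (A) for every handshake pattern among up to 6 guests.
for n in range(2, 7):
    pairs = list(combinations(range(n), 2))
    for mask in range(2 ** len(pairs)):  # each bitmask picks a set of handshakes
        shakes = [p for i, p in enumerate(pairs) if mask >> i & 1]
        assert two_guests_match(n, shakes)
print("claim (A) holds for every party of up to 6 guests")
```

The assertion never fails because the counts 0 and n-1 cannot occur together, leaving at most n-1 possible values for n guests.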
Question: &quot;In a party people shake hands with one another&quot; Answer 1: <p><strong>Hints:</strong> </p> <p>For (A) with $n$ people at the party, what are the possible numbers of hands each person could shake?</p> <p>Show that it is impossible for someone to have shaken hands with everyone and someone else to have shaken hands with no one simultaneously.</p> <p>Is there some way to use the pigeon-hole principle here?</p> <hr> <p>For (B) count the number of handshakes that occur by adding up how many handshakes people participated in individually, and recognize that this overcounts (because each handshake is counted twice: once by the shorter person of the two and again by the taller person). This gives us a useful result called "the handshaking lemma."</p> <p>What happens then if there is an odd number of people who shook an odd number of hands? Remember that the number of handshakes must be a whole number.</p> Answer 2: <p>$(a)$ Say that there are $n$ people at the party and assume everyone shakes hands at least once. The number of handshakes possible for each person is then between $1$ and $n-1$ (self-handshaking doesn't count) while there are $n$ people at the party, so the pigeon-hole principle gives the desired result. Now, if we don't assume that everyone shakes at least one hand, then we can separate the $n$ people into $k$ who don't shake any hands and $m$ who do ($k+m=n$), and the problem is reduced to the previous case (with $n=m$).</p> <p>$(b)$ If a person $A$ shakes hands with a person $B$, then $B$ shakes hands with $A$. Silly as it sounds, this tells us that each handshake is counted twice when we sum everyone's individual counts, so that sum must be even. Counting separately the handshakes of those who have given an even number of handshakes and those who have given an odd number of handshakes leads to the desired conclusion.</p> Answer 3: <p>For (B) try induction. 
What happens when the next two people shake hands?</p> Answer 4: <p>Assume that a party has n guests; then each guest's number of handshakes is between 0 and n-1. But if any guest has 0 handshakes, then no guest can have n-1 handshakes, so the counts 0 and n-1 cannot both occur among the guests; thus only n-1 different handshake counts are possible, so by the pigeon-hole principle at least two people must have the same number of handshakes at the party. Here the pigeons are the guests and the pigeonholes are the possible handshake counts. This is my answer to the question. I hope it will help you. Thank you.</p>
Now, if we don't assume that everyone shakes at least one hand then we can separate the $n$ people between $k$ which don't shake any hands and $m$ which do ($k+m=n$), and the problem is reduced to the last case (with $n=m$).</p> <p>$(b)$ If a person $A$ shakes hands with a person $B$, then $B$ shakes hands with $A$. Silly as it sounds this tells us that we can count handshakes in pairs, so the total number of handshakes must be even. Counting separately the handshakes of those who have given an even number of handshakes and those who have given an odd number of handshakes leads to the desired conclusion.</p> Answer 55: <p>For (B) try induction. What happens when the next two people shake hands?</p> Answer 56: <p>Assume that a party has n number of guests, then each of that guests has to do hand shake between n-1 and 0. But if any of guests has 0 hand shake, then no guest can have hand shake of n-1 , so it is not possible for the hand shakes in the party and for the guests to have both 0 and n-1, thus the number of hand shakes in the party can only have n-1 different hand shake so by the pigeon hole principals at least two people must have the some number of hand shake in the party . so in here the pigeons are the number of hand shakes and the pigeon holes are the number of the guests. this is my answer for the this question. I hope it well help you. 
thank you</p> Answer 57: <p><strong>Hints:</strong> </p> <p>For (A) with $n$ people at the party, what are the possible number of hands each person could shake?</p> <p>Show that it is impossible for both someone to have shaken hands with everyone and someone to have shaken hands with noone simultaneously.</p> <p>Is there some way to use the pigeon-hole principle here?</p> <hr> <p>For (B) count the number of handshakes that occur by adding up how many handshakes people participated in individually and recognize that this overcounted somehow (because each handshake we counted twice: once from the shorter person of the two and again from the taller person). This gives us a useful result called "the handshaking lemma."</p> <p>What happens then if there are an odd number of people who shook an odd number of hands? Remember that the number of handshakes must be a whole number.</p> Answer 58: <p>$(a)$ Say that there is $n$ people in the party and assume everyone shakes hands at least once. The number of handshakes possible for everybody then is just $n-1$ (self-handshaking doesn't count) while there is $n$ people in the party, so the pingeon-hole principle gives the desired result. Now, if we don't assume that everyone shakes at least one hand then we can separate the $n$ people between $k$ which don't shake any hands and $m$ which do ($k+m=n$), and the problem is reduced to the last case (with $n=m$).</p> <p>$(b)$ If a person $A$ shakes hands with a person $B$, then $B$ shakes hands with $A$. Silly as it sounds this tells us that we can count handshakes in pairs, so the total number of handshakes must be even. Counting separately the handshakes of those who have given an even number of handshakes and those who have given an odd number of handshakes leads to the desired conclusion.</p> Answer 59: <p>For (B) try induction. 
What happens when the next two people shake hands?</p> Answer 60: <p>Assume that a party has n number of guests, then each of that guests has to do hand shake between n-1 and 0. But if any of guests has 0 hand shake, then no guest can have hand shake of n-1 , so it is not possible for the hand shakes in the party and for the guests to have both 0 and n-1, thus the number of hand shakes in the party can only have n-1 different hand shake so by the pigeon hole principals at least two people must have the some number of hand shake in the party . so in here the pigeons are the number of hand shakes and the pigeon holes are the number of the guests. this is my answer for the this question. I hope it well help you. thank you</p> Answer 61: <p><strong>Hints:</strong> </p> <p>For (A) with $n$ people at the party, what are the possible number of hands each person could shake?</p> <p>Show that it is impossible for both someone to have shaken hands with everyone and someone to have shaken hands with noone simultaneously.</p> <p>Is there some way to use the pigeon-hole principle here?</p> <hr> <p>For (B) count the number of handshakes that occur by adding up how many handshakes people participated in individually and recognize that this overcounted somehow (because each handshake we counted twice: once from the shorter person of the two and again from the taller person). This gives us a useful result called "the handshaking lemma."</p> <p>What happens then if there are an odd number of people who shook an odd number of hands? Remember that the number of handshakes must be a whole number.</p> Answer 62: <p>$(a)$ Say that there is $n$ people in the party and assume everyone shakes hands at least once. The number of handshakes possible for everybody then is just $n-1$ (self-handshaking doesn't count) while there is $n$ people in the party, so the pingeon-hole principle gives the desired result. 
Now, if we don't assume that everyone shakes at least one hand then we can separate the $n$ people between $k$ which don't shake any hands and $m$ which do ($k+m=n$), and the problem is reduced to the last case (with $n=m$).</p> <p>$(b)$ If a person $A$ shakes hands with a person $B$, then $B$ shakes hands with $A$. Silly as it sounds this tells us that we can count handshakes in pairs, so the total number of handshakes must be even. Counting separately the handshakes of those who have given an even number of handshakes and those who have given an odd number of handshakes leads to the desired conclusion.</p> Answer 63: <p>For (B) try induction. What happens when the next two people shake hands?</p> Answer 64: <p>Assume that a party has n number of guests, then each of that guests has to do hand shake between n-1 and 0. But if any of guests has 0 hand shake, then no guest can have hand shake of n-1 , so it is not possible for the hand shakes in the party and for the guests to have both 0 and n-1, thus the number of hand shakes in the party can only have n-1 different hand shake so by the pigeon hole principals at least two people must have the some number of hand shake in the party . so in here the pigeons are the number of hand shakes and the pigeon holes are the number of the guests. this is my answer for the this question. I hope it well help you. thank you</p>
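The two handshake claims above, that (a) some two guests must shake the same number of hands and (b) the number of guests with an odd handshake count is even, can be sanity-checked by brute force over all simple graphs on a few vertices. A minimal sketch, not part of the original answers (the helper name `degree_sequences` is made up for illustration):

```python
from itertools import combinations

def degree_sequences(n):
    """Yield the degree sequence of every simple graph on n labelled vertices."""
    possible_edges = list(combinations(range(n), 2))
    for k in range(len(possible_edges) + 1):
        for edges in combinations(possible_edges, k):
            degrees = [0] * n
            for u, v in edges:
                degrees[u] += 1
                degrees[v] += 1
            yield degrees

for degrees in degree_sequences(5):
    # (a) pigeonhole: some two guests shook the same number of hands
    assert len(set(degrees)) < len(degrees)
    # (b) handshaking lemma: the count of odd-degree guests is even
    assert sum(d % 2 for d in degrees) % 2 == 0
```

The check runs over all 2^10 = 1024 graphs on 5 vertices; both assertions hold for every one of them, matching the pigeonhole and handshaking-lemma arguments above.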
Are homeomorphisms convex-preserving?
2,099,787
8
https://math.stackexchange.com/questions/2099787/are-homeomorphisms-convex-preserving
<p>If $A\subset \mathbb R^n$ is a convex subset and $A$ is homeomorphic to a subset $B\subset R^n$, can I also say $B$ is convex? Any counterexamples?</p>
[ { "answer_id": 2099790, "score": 12, "text": "<p>Define, for $(x,y) \\in \\mathbb R^2$:\n$$ \\varphi(x,y) := (x-|y|,y)\n$$\nThen $\\varphi$ is a homeomorphism $\\mathbb R^2 \\to \\mathbb R^2$ but the image of $\\{(x,y)\\mid x = 0\\}$ under $\\varphi$ is not convex.</p>\n" }, { "answer_id": 2099797, "score": 10, "text": "<p>My preferred example is $\\varphi:(x,y)\\mapsto (x,y^3)$. It is a homeomorphism ($\\varphi^{-1}:(x,y)\\mapsto (x,\\sqrt[3]{y})$) and the image of the first diagonal is $y=x^3$ which is not convex.</p>\n" } ]
Question: Are homeomorphisms convex-preserving? Answer 1: <p>Define, for $(x,y) \in \mathbb R^2$: $$ \varphi(x,y) := (x-|y|,y) $$ Then $\varphi$ is a homeomorphism $\mathbb R^2 \to \mathbb R^2$ but the image of $\{(x,y)\mid x = 0\}$ under $\varphi$ is not convex.</p> Answer 2: <p>My preferred example is $\varphi:(x,y)\mapsto (x,y^3)$. It is a homeomorphism ($\varphi^{-1}:(x,y)\mapsto (x,\sqrt[3]{y})$) and the image of the first diagonal is $y=x^3$ which is not convex.</p>
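Both counterexamples are easy to sanity-check numerically. Here is a small sketch, not part of the original answers, using the second map $\varphi(x,y) = (x, y^3)$: the diagonal is convex, but the midpoint of two points of its image falls off the image curve.

```python
def phi(p):
    """The homeomorphism (x, y) -> (x, y**3) from the second answer."""
    x, y = p
    return (x, y ** 3)

def on_image(p, tol=1e-12):
    """A point (x, y) lies on the image of the diagonal iff y == x**3."""
    x, y = p
    return abs(y - x ** 3) <= tol

# The diagonal {(t, t)} is convex; its image is the curve {(t, t**3)}.
a = phi((0.0, 0.0))   # (0.0, 0.0)
b = phi((1.0, 1.0))   # (1.0, 1.0)
midpoint = ((a[0] + b[0]) / 2, (a[1] + b[1]) / 2)  # (0.5, 0.5)

# The midpoint of two image points escapes the image: 0.5 != 0.5**3 = 0.125.
assert on_image(a) and on_image(b)
assert not on_image(midpoint)
```

So the image of a convex set under this homeomorphism fails the midpoint test for convexity, confirming the answer's claim.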
How to evaluate $\lim _{x\to \infty }\left(\frac{x+3}{\sqrt{x^2-5x}}\right)^{x^2\sin\left(1/x\right)}$?
2,100,087
8
https://math.stackexchange.com/questions/2100087/how-to-evaluate-lim-x-to-infty-left-fracx3-sqrtx2-5x-rightx2
<blockquote> <p>How to evaluate $\lim _{x\to \infty }\left(\frac{x+3}{\sqrt{x^2-5x}}\right)^{x^2\sin\left(1/x\right)}$?</p> </blockquote> <p><strong>My Try:</strong></p> <p>$$\lim _{x\to \infty }\left(x^2\sin\left(\frac{1}{x}\right)\ln\left(\frac{x+3}{\sqrt{x^2-5x}}\right)\right) = \lim _{t\to 0 }\left(\frac{1}{t^2}\sin\left(t\right)\ln\left(\frac{\frac{1}{t}+3}{\sqrt{\frac{1}{t^2}-\frac{5}{t}}}\right)\right)$$ Now $\sin(x) \approx x, x \rightarrow 0$, so: $$\approx \lim _{t\to 0 }\left(\frac{1}{t}\ln\left(\frac{\left(3t+1\right)\sqrt{-5t+1}}{1-5t}\right)\right)$$</p> <p>At this point I used de l'Hôpital's rule, so: $$\lim _{t\to 0 }\left(\frac{1}{t}\ln\left(\frac{\left(3t+1\right)\sqrt{-5t+1}}{1-5t}\right)\right) = \lim _{t\to 0}\left(\frac{\frac{-15t+11}{2\left(-5t+1\right)\left(3t+1\right)}}{1}\right) = \frac{11}{2}$$ So: $$\lim _{x\to \infty }\left(\left(\frac{x+3}{\sqrt{x^2-5x}}\right)^{x^2\sin\left(\frac{1}{x}\right)}\right) = \color{red}{e^\frac{11}{2}}$$ which is the exact result of the proposed limit.<br> My question is: is there another method, different from mine, to get the same result? (Preferably without resorting to de l'Hôpital's rule.)</p>
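The claimed value $e^{11/2}$ can also be sanity-checked numerically. A quick sketch, not part of the original post (the function name `f` is just for illustration); the gap to $e^{11/2}$ shrinks roughly like $1/x$:

```python
import math

def f(x):
    """The original expression ((x+3)/sqrt(x^2-5x))^(x^2 * sin(1/x))."""
    base = (x + 3) / math.sqrt(x * x - 5 * x)
    exponent = x * x * math.sin(1 / x)
    return base ** exponent

target = math.exp(11 / 2)  # conjectured limit, about 244.69

# the absolute gap shrinks as x grows, consistent with the limit e^(11/2)
gaps = [abs(f(x) - target) for x in (1e3, 1e4, 1e5)]
assert gaps[0] > gaps[1] > gaps[2]
assert abs(f(1e6) - target) < 1e-2
```

This is only a consistency check, not a proof, but it agrees with the value $e^{11/2}$ obtained above.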
[ { "answer_id": 2100209, "score": 2, "text": "<p>You're making your own life more difficult. <code>;-)</code> But your idea is good.</p>\n\n<p>After taking the logarithm, apply the substitution $x=1/t$ where it's not restrictive to assume $x&gt;0$ (actually, $x&gt;5$); note that\n$$\n\\frac{x+3}{\\sqrt{x^2-5x}}=\\frac{1+3t}{\\sqrt{1-5t}},\n$$\nso you have\n$$\n\\lim_{t\\to0^+}\\frac{\\sin t}{t^2}\\ln\\frac{1+3t}{\\sqrt{1-5t}}=\n\\lim_{t\\to0^+}\\frac{\\ln(1+3t)-\\frac{1}{2}\\ln(1-5t)}{t}\n$$\nowing to $\\lim_{t\\to0}\\frac{\\sin t}{t}=1$ (of course, conditionally to the existence of the last limit).</p>\n\n<p>This can be rewritten\n$$\n3\\lim_{t\\to0^+}\\frac{\\ln(1+3t)}{3t}+\n\\frac{5}{2}\\lim_{t\\to0^+}\\frac{\\ln(1-5t)}{-5t}=3+\\frac{5}{2}\n$$\nor with Taylor up to degree $1$,\n$$\n\\lim_{t\\to0^+}\\frac{3t+\\frac{5}{2}t+o(t)}{t}=3+\\frac{5}{2}\n$$</p>\n" }, { "answer_id": 2100159, "score": 1, "text": "<p>Notice that\n$$\\frac{x+3}{\\sqrt{x^2-5x}} = \\left(\\dfrac{x^2+6x+9}{x^2-5x}\\right)^{1/2} = \\left(1+ \\dfrac{11x+9}{x^2-5x}\\right)^{1/2}.$$</p>\n\n<p>By setting $y = \\dfrac{x^2-5x}{11x+9}$ and since $x \\rightarrow \\infty \\implies y \\rightarrow \\infty$, we obtain</p>\n\n<p>$$ x = \\dfrac{5+11y + \\sqrt{121y^2+146y+25}}{2}.$$</p>\n\n<p>Thus, $$\\lim_{x \\rightarrow \\infty} \\left(\\dfrac{x+3}{\\sqrt{x^2-5x}}\\right)^{x/2} = \\lim_{y \\rightarrow \\infty} \\left( 1+\\dfrac{1}{y}\\right)^{\\left(\\dfrac{5+11y + \\sqrt{121y^2+146y+25}}{4}\\right)} = $$\n$$= \\lim_{y \\rightarrow \\infty} \\left( 1+\\dfrac{1}{y}\\right)^{5/4}\\cdot \\left( 1+\\dfrac{1}{y}\\right)^{11y/4}\\cdot \\left( 1+\\dfrac{1}{y}\\right)^{11y/4\\cdot \\left(\\sqrt{1+146/(121y)+25/(121y^2)}\\right)} = e^{11/2},$$</p>\n\n<p>Since $g(y) = \\sqrt{1+ \\frac{146}{121y}+\\frac{25}{121y^2}}$ is continuous and $\\lim_{y \\rightarrow \\infty} g(y)$ exists.</p>\n\n<p>Yet, $$\\lim_{x \\rightarrow \\infty}\\dfrac{\\sin(1/x)}{1/x} = 1.$$</p>\n\n<p>Therefore, we have </p>\n\n<p>$$\\lim_{x 
\\rightarrow \\infty} \\left(\\dfrac{x+3}{\\sqrt{x^2-5x}}\\right)^{x^2\\sin\\left(1/x\\right)} = e^{11/2}.$$</p>\n" }, { "answer_id": 2100209, "score": 2, "text": "<p>You're making your own life more difficult. <code>;-)</code> But your idea is good.</p>\n\n<p>After taking the logarithm, apply the substitution $x=1/t$ where it's not restrictive to assume $x&gt;0$ (actually, $x&gt;5$); note that\n$$\n\\frac{x+3}{\\sqrt{x^2-5x}}=\\frac{1+3t}{\\sqrt{1-5t}},\n$$\nso you have\n$$\n\\lim_{t\\to0^+}\\frac{\\sin t}{t^2}\\ln\\frac{1+3t}{\\sqrt{1-5t}}=\n\\lim_{t\\to0^+}\\frac{\\ln(1+3t)-\\frac{1}{2}\\ln(1-5t)}{t}\n$$\nowing to $\\lim_{t\\to0}\\frac{\\sin t}{t}=1$ (of course, conditionally to the existence of the last limit).</p>\n\n<p>This can be rewritten\n$$\n3\\lim_{t\\to0^+}\\frac{\\ln(1+3t)}{3t}+\n\\frac{5}{2}\\lim_{t\\to0^+}\\frac{\\ln(1-5t)}{-5t}=3+\\frac{5}{2}\n$$\nor with Taylor up to degree $1$,\n$$\n\\lim_{t\\to0^+}\\frac{3t+\\frac{5}{2}t+o(t)}{t}=3+\\frac{5}{2}\n$$</p>\n" }, { "answer_id": 2100159, "score": 1, "text": "<p>Notice that\n$$\\frac{x+3}{\\sqrt{x^2-5x}} = \\left(\\dfrac{x^2+6x+9}{x^2-5x}\\right)^{1/2} = \\left(1+ \\dfrac{11x+9}{x^2-5x}\\right)^{1/2}.$$</p>\n\n<p>By setting $y = \\dfrac{x^2-5x}{11x+9}$ and since $x \\rightarrow \\infty \\implies y \\rightarrow \\infty$, we obtain</p>\n\n<p>$$ x = \\dfrac{5+11y + \\sqrt{121y^2+146y+25}}{2}.$$</p>\n\n<p>Thus, $$\\lim_{x \\rightarrow \\infty} \\left(\\dfrac{x+3}{\\sqrt{x^2-5x}}\\right)^{x/2} = \\lim_{y \\rightarrow \\infty} \\left( 1+\\dfrac{1}{y}\\right)^{\\left(\\dfrac{5+11y + \\sqrt{121y^2+146y+25}}{4}\\right)} = $$\n$$= \\lim_{y \\rightarrow \\infty} \\left( 1+\\dfrac{1}{y}\\right)^{5/4}\\cdot \\left( 1+\\dfrac{1}{y}\\right)^{11y/4}\\cdot \\left( 1+\\dfrac{1}{y}\\right)^{11y/4\\cdot \\left(\\sqrt{1+146/(121y)+25/(121y^2)}\\right)} = e^{11/2},$$</p>\n\n<p>Since $g(y) = \\sqrt{1+ \\frac{146}{121y}+\\frac{25}{121y^2}}$ is continuous and $\\lim_{y \\rightarrow \\infty} g(y)$ exists.</p>\n\n<p>Yet, 
$$\\lim_{x \\rightarrow \\infty}\\dfrac{\\sin(1/x)}{1/x} = 1.$$</p>\n\n<p>Therefore, we have </p>\n\n<p>$$\\lim_{x \\rightarrow \\infty} \\left(\\dfrac{x+3}{\\sqrt{x^2-5x}}\\right)^{x^2\\sin\\left(1/x\\right)} = e^{11/2}.$$</p>\n" } ]
Question: How to evaluate $\lim _{x\to \infty }\left(\frac{x+3}{\sqrt{x^2-5x}}\right)^{x^2\sin\left(1/x\right)}$? Answer 1: <p>You're making your own life more difficult. <code>;-)</code> But your idea is good.</p> <p>After taking the logarithm, apply the substitution $x=1/t$ where it's not restrictive to assume $x&gt;0$ (actually, $x&gt;5$); note that $$ \frac{x+3}{\sqrt{x^2-5x}}=\frac{1+3t}{\sqrt{1-5t}}, $$ so you have $$ \lim_{t\to0^+}\frac{\sin t}{t^2}\ln\frac{1+3t}{\sqrt{1-5t}}= \lim_{t\to0^+}\frac{\ln(1+3t)-\frac{1}{2}\ln(1-5t)}{t} $$ owing to $\lim_{t\to0}\frac{\sin t}{t}=1$ (of course, conditionally to the existence of the last limit).</p> <p>This can be rewritten $$ 3\lim_{t\to0^+}\frac{\ln(1+3t)}{3t}+ \frac{5}{2}\lim_{t\to0^+}\frac{\ln(1-5t)}{-5t}=3+\frac{5}{2} $$ or with Taylor up to degree $1$, $$ \lim_{t\to0^+}\frac{3t+\frac{5}{2}t+o(t)}{t}=3+\frac{5}{2} $$</p> Answer 2: <p>Notice that $$\frac{x+3}{\sqrt{x^2-5x}} = \left(\dfrac{x^2+6x+9}{x^2-5x}\right)^{1/2} = \left(1+ \dfrac{11x+9}{x^2-5x}\right)^{1/2}.$$</p> <p>By setting $y = \dfrac{x^2-5x}{11x+9}$ and since $x \rightarrow \infty \implies y \rightarrow \infty$, we obtain</p> <p>$$ x = \dfrac{5+11y + \sqrt{121y^2+146y+25}}{2}.$$</p> <p>Thus, $$\lim_{x \rightarrow \infty} \left(\dfrac{x+3}{\sqrt{x^2-5x}}\right)^{x/2} = \lim_{y \rightarrow \infty} \left( 1+\dfrac{1}{y}\right)^{\left(\dfrac{5+11y + \sqrt{121y^2+146y+25}}{4}\right)} = $$ $$= \lim_{y \rightarrow \infty} \left( 1+\dfrac{1}{y}\right)^{5/4}\cdot \left( 1+\dfrac{1}{y}\right)^{11y/4}\cdot \left( 1+\dfrac{1}{y}\right)^{11y/4\cdot \left(\sqrt{1+146/(121y)+25/(121y^2)}\right)} = e^{11/2},$$</p> <p>Since $g(y) = \sqrt{1+ \frac{146}{121y}+\frac{25}{121y^2}}$ is continuous and $\lim_{y \rightarrow \infty} g(y)$ exists.</p> <p>Yet, $$\lim_{x \rightarrow \infty}\dfrac{\sin(1/x)}{1/x} = 1.$$</p> <p>Therefore, we have </p> <p>$$\lim_{x \rightarrow \infty} \left(\dfrac{x+3}{\sqrt{x^2-5x}}\right)^{x^2\sin\left(1/x\right)} = e^{11/2}.$$</p>
<code>;-)</code> But your idea is good.</p> <p>After taking the logarithm, apply the substitution $x=1/t$ where it's not restrictive to assume $x&gt;0$ (actually, $x&gt;5$); note that $$ \frac{x+3}{\sqrt{x^2-5x}}=\frac{1+3t}{\sqrt{1-5t}}, $$ so you have $$ \lim_{t\to0^+}\frac{\sin t}{t^2}\ln\frac{1+3t}{\sqrt{1-5t}}= \lim_{t\to0^+}\frac{\ln(1+3t)-\frac{1}{2}\ln(1-5t)}{t} $$ owing to $\lim_{t\to0}\frac{\sin t}{t}=1$ (of course, conditionally to the existence of the last limit).</p> <p>This can be rewritten $$ 3\lim_{t\to0^+}\frac{\ln(1+3t)}{3t}+ \frac{5}{2}\lim_{t\to0^+}\frac{\ln(1-5t)}{-5t}=3+\frac{5}{2} $$ or with Taylor up to degree $1$, $$ \lim_{t\to0^+}\frac{3t+\frac{5}{2}t+o(t)}{t}=3+\frac{5}{2} $$</p> Answer 12: <p>Notice that $$\frac{x+3}{\sqrt{x^2-5x}} = \left(\dfrac{x^2+6x+9}{x^2-5x}\right)^{1/2} = \left(1+ \dfrac{11x+9}{x^2-5x}\right)^{1/2}.$$</p> <p>By setting $y = \dfrac{x^2-5x}{11x+9}$ and since $x \rightarrow \infty \implies y \rightarrow \infty$, we obtain</p> <p>$$ x = \dfrac{5+11y + \sqrt{121y^2+146y+25}}{2}.$$</p> <p>Thus, $$\lim_{x \rightarrow \infty} \left(\dfrac{x+3}{\sqrt{x^2-5x}}\right)^{x/2} = \lim_{y \rightarrow \infty} \left( 1+\dfrac{1}{y}\right)^{\left(\dfrac{5+11y + \sqrt{121y^2+146y+25}}{4}\right)} = $$ $$= \lim_{y \rightarrow \infty} \left( 1+\dfrac{1}{y}\right)^{5/4}\cdot \left( 1+\dfrac{1}{y}\right)^{11y/4}\cdot \left( 1+\dfrac{1}{y}\right)^{11y/4\cdot \left(\sqrt{1+146/(121y)+25/(121y^2)}\right)} = e^{11/2},$$</p> <p>Since $g(y) = \sqrt{1+ \frac{146}{121y}+\frac{25}{121y^2}}$ is continuous and $\lim_{y \rightarrow \infty} g(y)$ exists.</p> <p>Yet, $$\lim_{x \rightarrow \infty}\dfrac{\sin(1/x)}{1/x} = 1.$$</p> <p>Therefore, we have </p> <p>$$\lim_{x \rightarrow \infty} \left(\dfrac{x+3}{\sqrt{x^2-5x}}\right)^{x^2\sin\left(1/x\right)} = e^{11/2}.$$</p> Answer 13: <p>You're making your own life more difficult. 
<code>;-)</code> But your idea is good.</p> <p>After taking the logarithm, apply the substitution $x=1/t$ where it's not restrictive to assume $x&gt;0$ (actually, $x&gt;5$); note that $$ \frac{x+3}{\sqrt{x^2-5x}}=\frac{1+3t}{\sqrt{1-5t}}, $$ so you have $$ \lim_{t\to0^+}\frac{\sin t}{t^2}\ln\frac{1+3t}{\sqrt{1-5t}}= \lim_{t\to0^+}\frac{\ln(1+3t)-\frac{1}{2}\ln(1-5t)}{t} $$ owing to $\lim_{t\to0}\frac{\sin t}{t}=1$ (of course, conditionally to the existence of the last limit).</p> <p>This can be rewritten $$ 3\lim_{t\to0^+}\frac{\ln(1+3t)}{3t}+ \frac{5}{2}\lim_{t\to0^+}\frac{\ln(1-5t)}{-5t}=3+\frac{5}{2} $$ or with Taylor up to degree $1$, $$ \lim_{t\to0^+}\frac{3t+\frac{5}{2}t+o(t)}{t}=3+\frac{5}{2} $$</p> Answer 14: <p>Notice that $$\frac{x+3}{\sqrt{x^2-5x}} = \left(\dfrac{x^2+6x+9}{x^2-5x}\right)^{1/2} = \left(1+ \dfrac{11x+9}{x^2-5x}\right)^{1/2}.$$</p> <p>By setting $y = \dfrac{x^2-5x}{11x+9}$ and since $x \rightarrow \infty \implies y \rightarrow \infty$, we obtain</p> <p>$$ x = \dfrac{5+11y + \sqrt{121y^2+146y+25}}{2}.$$</p> <p>Thus, $$\lim_{x \rightarrow \infty} \left(\dfrac{x+3}{\sqrt{x^2-5x}}\right)^{x/2} = \lim_{y \rightarrow \infty} \left( 1+\dfrac{1}{y}\right)^{\left(\dfrac{5+11y + \sqrt{121y^2+146y+25}}{4}\right)} = $$ $$= \lim_{y \rightarrow \infty} \left( 1+\dfrac{1}{y}\right)^{5/4}\cdot \left( 1+\dfrac{1}{y}\right)^{11y/4}\cdot \left( 1+\dfrac{1}{y}\right)^{11y/4\cdot \left(\sqrt{1+146/(121y)+25/(121y^2)}\right)} = e^{11/2},$$</p> <p>Since $g(y) = \sqrt{1+ \frac{146}{121y}+\frac{25}{121y^2}}$ is continuous and $\lim_{y \rightarrow \infty} g(y)$ exists.</p> <p>Yet, $$\lim_{x \rightarrow \infty}\dfrac{\sin(1/x)}{1/x} = 1.$$</p> <p>Therefore, we have </p> <p>$$\lim_{x \rightarrow \infty} \left(\dfrac{x+3}{\sqrt{x^2-5x}}\right)^{x^2\sin\left(1/x\right)} = e^{11/2}.$$</p> Answer 15: <p>You're making your own life more difficult. 
<code>;-)</code> But your idea is good.</p> <p>After taking the logarithm, apply the substitution $x=1/t$ where it's not restrictive to assume $x&gt;0$ (actually, $x&gt;5$); note that $$ \frac{x+3}{\sqrt{x^2-5x}}=\frac{1+3t}{\sqrt{1-5t}}, $$ so you have $$ \lim_{t\to0^+}\frac{\sin t}{t^2}\ln\frac{1+3t}{\sqrt{1-5t}}= \lim_{t\to0^+}\frac{\ln(1+3t)-\frac{1}{2}\ln(1-5t)}{t} $$ owing to $\lim_{t\to0}\frac{\sin t}{t}=1$ (of course, conditionally to the existence of the last limit).</p> <p>This can be rewritten $$ 3\lim_{t\to0^+}\frac{\ln(1+3t)}{3t}+ \frac{5}{2}\lim_{t\to0^+}\frac{\ln(1-5t)}{-5t}=3+\frac{5}{2} $$ or with Taylor up to degree $1$, $$ \lim_{t\to0^+}\frac{3t+\frac{5}{2}t+o(t)}{t}=3+\frac{5}{2} $$</p> Answer 16: <p>Notice that $$\frac{x+3}{\sqrt{x^2-5x}} = \left(\dfrac{x^2+6x+9}{x^2-5x}\right)^{1/2} = \left(1+ \dfrac{11x+9}{x^2-5x}\right)^{1/2}.$$</p> <p>By setting $y = \dfrac{x^2-5x}{11x+9}$ and since $x \rightarrow \infty \implies y \rightarrow \infty$, we obtain</p> <p>$$ x = \dfrac{5+11y + \sqrt{121y^2+146y+25}}{2}.$$</p> <p>Thus, $$\lim_{x \rightarrow \infty} \left(\dfrac{x+3}{\sqrt{x^2-5x}}\right)^{x/2} = \lim_{y \rightarrow \infty} \left( 1+\dfrac{1}{y}\right)^{\left(\dfrac{5+11y + \sqrt{121y^2+146y+25}}{4}\right)} = $$ $$= \lim_{y \rightarrow \infty} \left( 1+\dfrac{1}{y}\right)^{5/4}\cdot \left( 1+\dfrac{1}{y}\right)^{11y/4}\cdot \left( 1+\dfrac{1}{y}\right)^{11y/4\cdot \left(\sqrt{1+146/(121y)+25/(121y^2)}\right)} = e^{11/2},$$</p> <p>Since $g(y) = \sqrt{1+ \frac{146}{121y}+\frac{25}{121y^2}}$ is continuous and $\lim_{y \rightarrow \infty} g(y)$ exists.</p> <p>Yet, $$\lim_{x \rightarrow \infty}\dfrac{\sin(1/x)}{1/x} = 1.$$</p> <p>Therefore, we have </p> <p>$$\lim_{x \rightarrow \infty} \left(\dfrac{x+3}{\sqrt{x^2-5x}}\right)^{x^2\sin\left(1/x\right)} = e^{11/2}.$$</p> Answer 17: <p>You're making your own life more difficult. 
<code>;-)</code> But your idea is good.</p> <p>After taking the logarithm, apply the substitution $x=1/t$ where it's not restrictive to assume $x&gt;0$ (actually, $x&gt;5$); note that $$ \frac{x+3}{\sqrt{x^2-5x}}=\frac{1+3t}{\sqrt{1-5t}}, $$ so you have $$ \lim_{t\to0^+}\frac{\sin t}{t^2}\ln\frac{1+3t}{\sqrt{1-5t}}= \lim_{t\to0^+}\frac{\ln(1+3t)-\frac{1}{2}\ln(1-5t)}{t} $$ owing to $\lim_{t\to0}\frac{\sin t}{t}=1$ (of course, conditionally to the existence of the last limit).</p> <p>This can be rewritten $$ 3\lim_{t\to0^+}\frac{\ln(1+3t)}{3t}+ \frac{5}{2}\lim_{t\to0^+}\frac{\ln(1-5t)}{-5t}=3+\frac{5}{2} $$ or with Taylor up to degree $1$, $$ \lim_{t\to0^+}\frac{3t+\frac{5}{2}t+o(t)}{t}=3+\frac{5}{2} $$</p> Answer 18: <p>Notice that $$\frac{x+3}{\sqrt{x^2-5x}} = \left(\dfrac{x^2+6x+9}{x^2-5x}\right)^{1/2} = \left(1+ \dfrac{11x+9}{x^2-5x}\right)^{1/2}.$$</p> <p>By setting $y = \dfrac{x^2-5x}{11x+9}$ and since $x \rightarrow \infty \implies y \rightarrow \infty$, we obtain</p> <p>$$ x = \dfrac{5+11y + \sqrt{121y^2+146y+25}}{2}.$$</p> <p>Thus, $$\lim_{x \rightarrow \infty} \left(\dfrac{x+3}{\sqrt{x^2-5x}}\right)^{x/2} = \lim_{y \rightarrow \infty} \left( 1+\dfrac{1}{y}\right)^{\left(\dfrac{5+11y + \sqrt{121y^2+146y+25}}{4}\right)} = $$ $$= \lim_{y \rightarrow \infty} \left( 1+\dfrac{1}{y}\right)^{5/4}\cdot \left( 1+\dfrac{1}{y}\right)^{11y/4}\cdot \left( 1+\dfrac{1}{y}\right)^{11y/4\cdot \left(\sqrt{1+146/(121y)+25/(121y^2)}\right)} = e^{11/2},$$</p> <p>Since $g(y) = \sqrt{1+ \frac{146}{121y}+\frac{25}{121y^2}}$ is continuous and $\lim_{y \rightarrow \infty} g(y)$ exists.</p> <p>Yet, $$\lim_{x \rightarrow \infty}\dfrac{\sin(1/x)}{1/x} = 1.$$</p> <p>Therefore, we have </p> <p>$$\lim_{x \rightarrow \infty} \left(\dfrac{x+3}{\sqrt{x^2-5x}}\right)^{x^2\sin\left(1/x\right)} = e^{11/2}.$$</p> Answer 19: <p>You're making your own life more difficult. 
<code>;-)</code> But your idea is good.</p> <p>After taking the logarithm, apply the substitution $x=1/t$ where it's not restrictive to assume $x&gt;0$ (actually, $x&gt;5$); note that $$ \frac{x+3}{\sqrt{x^2-5x}}=\frac{1+3t}{\sqrt{1-5t}}, $$ so you have $$ \lim_{t\to0^+}\frac{\sin t}{t^2}\ln\frac{1+3t}{\sqrt{1-5t}}= \lim_{t\to0^+}\frac{\ln(1+3t)-\frac{1}{2}\ln(1-5t)}{t} $$ owing to $\lim_{t\to0}\frac{\sin t}{t}=1$ (of course, conditionally to the existence of the last limit).</p> <p>This can be rewritten $$ 3\lim_{t\to0^+}\frac{\ln(1+3t)}{3t}+ \frac{5}{2}\lim_{t\to0^+}\frac{\ln(1-5t)}{-5t}=3+\frac{5}{2} $$ or with Taylor up to degree $1$, $$ \lim_{t\to0^+}\frac{3t+\frac{5}{2}t+o(t)}{t}=3+\frac{5}{2} $$</p> Answer 20: <p>Notice that $$\frac{x+3}{\sqrt{x^2-5x}} = \left(\dfrac{x^2+6x+9}{x^2-5x}\right)^{1/2} = \left(1+ \dfrac{11x+9}{x^2-5x}\right)^{1/2}.$$</p> <p>By setting $y = \dfrac{x^2-5x}{11x+9}$ and since $x \rightarrow \infty \implies y \rightarrow \infty$, we obtain</p> <p>$$ x = \dfrac{5+11y + \sqrt{121y^2+146y+25}}{2}.$$</p> <p>Thus, $$\lim_{x \rightarrow \infty} \left(\dfrac{x+3}{\sqrt{x^2-5x}}\right)^{x/2} = \lim_{y \rightarrow \infty} \left( 1+\dfrac{1}{y}\right)^{\left(\dfrac{5+11y + \sqrt{121y^2+146y+25}}{4}\right)} = $$ $$= \lim_{y \rightarrow \infty} \left( 1+\dfrac{1}{y}\right)^{5/4}\cdot \left( 1+\dfrac{1}{y}\right)^{11y/4}\cdot \left( 1+\dfrac{1}{y}\right)^{11y/4\cdot \left(\sqrt{1+146/(121y)+25/(121y^2)}\right)} = e^{11/2},$$</p> <p>Since $g(y) = \sqrt{1+ \frac{146}{121y}+\frac{25}{121y^2}}$ is continuous and $\lim_{y \rightarrow \infty} g(y)$ exists.</p> <p>Yet, $$\lim_{x \rightarrow \infty}\dfrac{\sin(1/x)}{1/x} = 1.$$</p> <p>Therefore, we have </p> <p>$$\lim_{x \rightarrow \infty} \left(\dfrac{x+3}{\sqrt{x^2-5x}}\right)^{x^2\sin\left(1/x\right)} = e^{11/2}.$$</p> Answer 21: <p>You're making your own life more difficult. 
<code>;-)</code> But your idea is good.</p> <p>After taking the logarithm, apply the substitution $x=1/t$ where it's not restrictive to assume $x&gt;0$ (actually, $x&gt;5$); note that $$ \frac{x+3}{\sqrt{x^2-5x}}=\frac{1+3t}{\sqrt{1-5t}}, $$ so you have $$ \lim_{t\to0^+}\frac{\sin t}{t^2}\ln\frac{1+3t}{\sqrt{1-5t}}= \lim_{t\to0^+}\frac{\ln(1+3t)-\frac{1}{2}\ln(1-5t)}{t} $$ owing to $\lim_{t\to0}\frac{\sin t}{t}=1$ (of course, conditionally to the existence of the last limit).</p> <p>This can be rewritten $$ 3\lim_{t\to0^+}\frac{\ln(1+3t)}{3t}+ \frac{5}{2}\lim_{t\to0^+}\frac{\ln(1-5t)}{-5t}=3+\frac{5}{2} $$ or with Taylor up to degree $1$, $$ \lim_{t\to0^+}\frac{3t+\frac{5}{2}t+o(t)}{t}=3+\frac{5}{2} $$</p> Answer 22: <p>Notice that $$\frac{x+3}{\sqrt{x^2-5x}} = \left(\dfrac{x^2+6x+9}{x^2-5x}\right)^{1/2} = \left(1+ \dfrac{11x+9}{x^2-5x}\right)^{1/2}.$$</p> <p>By setting $y = \dfrac{x^2-5x}{11x+9}$ and since $x \rightarrow \infty \implies y \rightarrow \infty$, we obtain</p> <p>$$ x = \dfrac{5+11y + \sqrt{121y^2+146y+25}}{2}.$$</p> <p>Thus, $$\lim_{x \rightarrow \infty} \left(\dfrac{x+3}{\sqrt{x^2-5x}}\right)^{x/2} = \lim_{y \rightarrow \infty} \left( 1+\dfrac{1}{y}\right)^{\left(\dfrac{5+11y + \sqrt{121y^2+146y+25}}{4}\right)} = $$ $$= \lim_{y \rightarrow \infty} \left( 1+\dfrac{1}{y}\right)^{5/4}\cdot \left( 1+\dfrac{1}{y}\right)^{11y/4}\cdot \left( 1+\dfrac{1}{y}\right)^{11y/4\cdot \left(\sqrt{1+146/(121y)+25/(121y^2)}\right)} = e^{11/2},$$</p> <p>Since $g(y) = \sqrt{1+ \frac{146}{121y}+\frac{25}{121y^2}}$ is continuous and $\lim_{y \rightarrow \infty} g(y)$ exists.</p> <p>Yet, $$\lim_{x \rightarrow \infty}\dfrac{\sin(1/x)}{1/x} = 1.$$</p> <p>Therefore, we have </p> <p>$$\lim_{x \rightarrow \infty} \left(\dfrac{x+3}{\sqrt{x^2-5x}}\right)^{x^2\sin\left(1/x\right)} = e^{11/2}.$$</p> Answer 23: <p>You're making your own life more difficult. 
<code>;-)</code> But your idea is good.</p> <p>After taking the logarithm, apply the substitution $x=1/t$ where it's not restrictive to assume $x&gt;0$ (actually, $x&gt;5$); note that $$ \frac{x+3}{\sqrt{x^2-5x}}=\frac{1+3t}{\sqrt{1-5t}}, $$ so you have $$ \lim_{t\to0^+}\frac{\sin t}{t^2}\ln\frac{1+3t}{\sqrt{1-5t}}= \lim_{t\to0^+}\frac{\ln(1+3t)-\frac{1}{2}\ln(1-5t)}{t} $$ owing to $\lim_{t\to0}\frac{\sin t}{t}=1$ (of course, conditionally to the existence of the last limit).</p> <p>This can be rewritten $$ 3\lim_{t\to0^+}\frac{\ln(1+3t)}{3t}+ \frac{5}{2}\lim_{t\to0^+}\frac{\ln(1-5t)}{-5t}=3+\frac{5}{2} $$ or with Taylor up to degree $1$, $$ \lim_{t\to0^+}\frac{3t+\frac{5}{2}t+o(t)}{t}=3+\frac{5}{2} $$</p> Answer 24: <p>Notice that $$\frac{x+3}{\sqrt{x^2-5x}} = \left(\dfrac{x^2+6x+9}{x^2-5x}\right)^{1/2} = \left(1+ \dfrac{11x+9}{x^2-5x}\right)^{1/2}.$$</p> <p>By setting $y = \dfrac{x^2-5x}{11x+9}$ and since $x \rightarrow \infty \implies y \rightarrow \infty$, we obtain</p> <p>$$ x = \dfrac{5+11y + \sqrt{121y^2+146y+25}}{2}.$$</p> <p>Thus, $$\lim_{x \rightarrow \infty} \left(\dfrac{x+3}{\sqrt{x^2-5x}}\right)^{x/2} = \lim_{y \rightarrow \infty} \left( 1+\dfrac{1}{y}\right)^{\left(\dfrac{5+11y + \sqrt{121y^2+146y+25}}{4}\right)} = $$ $$= \lim_{y \rightarrow \infty} \left( 1+\dfrac{1}{y}\right)^{5/4}\cdot \left( 1+\dfrac{1}{y}\right)^{11y/4}\cdot \left( 1+\dfrac{1}{y}\right)^{11y/4\cdot \left(\sqrt{1+146/(121y)+25/(121y^2)}\right)} = e^{11/2},$$</p> <p>Since $g(y) = \sqrt{1+ \frac{146}{121y}+\frac{25}{121y^2}}$ is continuous and $\lim_{y \rightarrow \infty} g(y)$ exists.</p> <p>Yet, $$\lim_{x \rightarrow \infty}\dfrac{\sin(1/x)}{1/x} = 1.$$</p> <p>Therefore, we have </p> <p>$$\lim_{x \rightarrow \infty} \left(\dfrac{x+3}{\sqrt{x^2-5x}}\right)^{x^2\sin\left(1/x\right)} = e^{11/2}.$$</p> Answer 25: <p>You're making your own life more difficult. 
<code>;-)</code> But your idea is good.</p> <p>After taking the logarithm, apply the substitution $x=1/t$ where it's not restrictive to assume $x&gt;0$ (actually, $x&gt;5$); note that $$ \frac{x+3}{\sqrt{x^2-5x}}=\frac{1+3t}{\sqrt{1-5t}}, $$ so you have $$ \lim_{t\to0^+}\frac{\sin t}{t^2}\ln\frac{1+3t}{\sqrt{1-5t}}= \lim_{t\to0^+}\frac{\ln(1+3t)-\frac{1}{2}\ln(1-5t)}{t} $$ owing to $\lim_{t\to0}\frac{\sin t}{t}=1$ (of course, conditionally to the existence of the last limit).</p> <p>This can be rewritten $$ 3\lim_{t\to0^+}\frac{\ln(1+3t)}{3t}+ \frac{5}{2}\lim_{t\to0^+}\frac{\ln(1-5t)}{-5t}=3+\frac{5}{2} $$ or with Taylor up to degree $1$, $$ \lim_{t\to0^+}\frac{3t+\frac{5}{2}t+o(t)}{t}=3+\frac{5}{2} $$</p> Answer 26: <p>Notice that $$\frac{x+3}{\sqrt{x^2-5x}} = \left(\dfrac{x^2+6x+9}{x^2-5x}\right)^{1/2} = \left(1+ \dfrac{11x+9}{x^2-5x}\right)^{1/2}.$$</p> <p>By setting $y = \dfrac{x^2-5x}{11x+9}$ and since $x \rightarrow \infty \implies y \rightarrow \infty$, we obtain</p> <p>$$ x = \dfrac{5+11y + \sqrt{121y^2+146y+25}}{2}.$$</p> <p>Thus, $$\lim_{x \rightarrow \infty} \left(\dfrac{x+3}{\sqrt{x^2-5x}}\right)^{x/2} = \lim_{y \rightarrow \infty} \left( 1+\dfrac{1}{y}\right)^{\left(\dfrac{5+11y + \sqrt{121y^2+146y+25}}{4}\right)} = $$ $$= \lim_{y \rightarrow \infty} \left( 1+\dfrac{1}{y}\right)^{5/4}\cdot \left( 1+\dfrac{1}{y}\right)^{11y/4}\cdot \left( 1+\dfrac{1}{y}\right)^{11y/4\cdot \left(\sqrt{1+146/(121y)+25/(121y^2)}\right)} = e^{11/2},$$</p> <p>Since $g(y) = \sqrt{1+ \frac{146}{121y}+\frac{25}{121y^2}}$ is continuous and $\lim_{y \rightarrow \infty} g(y)$ exists.</p> <p>Yet, $$\lim_{x \rightarrow \infty}\dfrac{\sin(1/x)}{1/x} = 1.$$</p> <p>Therefore, we have </p> <p>$$\lim_{x \rightarrow \infty} \left(\dfrac{x+3}{\sqrt{x^2-5x}}\right)^{x^2\sin\left(1/x\right)} = e^{11/2}.$$</p> Answer 27: <p>You're making your own life more difficult. 
<code>;-)</code> But your idea is good.</p> <p>After taking the logarithm, apply the substitution $x=1/t$ where it's not restrictive to assume $x&gt;0$ (actually, $x&gt;5$); note that $$ \frac{x+3}{\sqrt{x^2-5x}}=\frac{1+3t}{\sqrt{1-5t}}, $$ so you have $$ \lim_{t\to0^+}\frac{\sin t}{t^2}\ln\frac{1+3t}{\sqrt{1-5t}}= \lim_{t\to0^+}\frac{\ln(1+3t)-\frac{1}{2}\ln(1-5t)}{t} $$ owing to $\lim_{t\to0}\frac{\sin t}{t}=1$ (of course, conditionally to the existence of the last limit).</p> <p>This can be rewritten $$ 3\lim_{t\to0^+}\frac{\ln(1+3t)}{3t}+ \frac{5}{2}\lim_{t\to0^+}\frac{\ln(1-5t)}{-5t}=3+\frac{5}{2} $$ or with Taylor up to degree $1$, $$ \lim_{t\to0^+}\frac{3t+\frac{5}{2}t+o(t)}{t}=3+\frac{5}{2} $$</p> Answer 28: <p>Notice that $$\frac{x+3}{\sqrt{x^2-5x}} = \left(\dfrac{x^2+6x+9}{x^2-5x}\right)^{1/2} = \left(1+ \dfrac{11x+9}{x^2-5x}\right)^{1/2}.$$</p> <p>By setting $y = \dfrac{x^2-5x}{11x+9}$ and since $x \rightarrow \infty \implies y \rightarrow \infty$, we obtain</p> <p>$$ x = \dfrac{5+11y + \sqrt{121y^2+146y+25}}{2}.$$</p> <p>Thus, $$\lim_{x \rightarrow \infty} \left(\dfrac{x+3}{\sqrt{x^2-5x}}\right)^{x/2} = \lim_{y \rightarrow \infty} \left( 1+\dfrac{1}{y}\right)^{\left(\dfrac{5+11y + \sqrt{121y^2+146y+25}}{4}\right)} = $$ $$= \lim_{y \rightarrow \infty} \left( 1+\dfrac{1}{y}\right)^{5/4}\cdot \left( 1+\dfrac{1}{y}\right)^{11y/4}\cdot \left( 1+\dfrac{1}{y}\right)^{11y/4\cdot \left(\sqrt{1+146/(121y)+25/(121y^2)}\right)} = e^{11/2},$$</p> <p>Since $g(y) = \sqrt{1+ \frac{146}{121y}+\frac{25}{121y^2}}$ is continuous and $\lim_{y \rightarrow \infty} g(y)$ exists.</p> <p>Yet, $$\lim_{x \rightarrow \infty}\dfrac{\sin(1/x)}{1/x} = 1.$$</p> <p>Therefore, we have </p> <p>$$\lim_{x \rightarrow \infty} \left(\dfrac{x+3}{\sqrt{x^2-5x}}\right)^{x^2\sin\left(1/x\right)} = e^{11/2}.$$</p> Answer 29: <p>You're making your own life more difficult. 
<code>;-)</code> But your idea is good.</p> <p>After taking the logarithm, apply the substitution $x=1/t$ where it's not restrictive to assume $x&gt;0$ (actually, $x&gt;5$); note that $$ \frac{x+3}{\sqrt{x^2-5x}}=\frac{1+3t}{\sqrt{1-5t}}, $$ so you have $$ \lim_{t\to0^+}\frac{\sin t}{t^2}\ln\frac{1+3t}{\sqrt{1-5t}}= \lim_{t\to0^+}\frac{\ln(1+3t)-\frac{1}{2}\ln(1-5t)}{t} $$ owing to $\lim_{t\to0}\frac{\sin t}{t}=1$ (of course, conditionally to the existence of the last limit).</p> <p>This can be rewritten $$ 3\lim_{t\to0^+}\frac{\ln(1+3t)}{3t}+ \frac{5}{2}\lim_{t\to0^+}\frac{\ln(1-5t)}{-5t}=3+\frac{5}{2} $$ or with Taylor up to degree $1$, $$ \lim_{t\to0^+}\frac{3t+\frac{5}{2}t+o(t)}{t}=3+\frac{5}{2} $$</p> Answer 30: <p>Notice that $$\frac{x+3}{\sqrt{x^2-5x}} = \left(\dfrac{x^2+6x+9}{x^2-5x}\right)^{1/2} = \left(1+ \dfrac{11x+9}{x^2-5x}\right)^{1/2}.$$</p> <p>By setting $y = \dfrac{x^2-5x}{11x+9}$ and since $x \rightarrow \infty \implies y \rightarrow \infty$, we obtain</p> <p>$$ x = \dfrac{5+11y + \sqrt{121y^2+146y+25}}{2}.$$</p> <p>Thus, $$\lim_{x \rightarrow \infty} \left(\dfrac{x+3}{\sqrt{x^2-5x}}\right)^{x/2} = \lim_{y \rightarrow \infty} \left( 1+\dfrac{1}{y}\right)^{\left(\dfrac{5+11y + \sqrt{121y^2+146y+25}}{4}\right)} = $$ $$= \lim_{y \rightarrow \infty} \left( 1+\dfrac{1}{y}\right)^{5/4}\cdot \left( 1+\dfrac{1}{y}\right)^{11y/4}\cdot \left( 1+\dfrac{1}{y}\right)^{11y/4\cdot \left(\sqrt{1+146/(121y)+25/(121y^2)}\right)} = e^{11/2},$$</p> <p>Since $g(y) = \sqrt{1+ \frac{146}{121y}+\frac{25}{121y^2}}$ is continuous and $\lim_{y \rightarrow \infty} g(y)$ exists.</p> <p>Yet, $$\lim_{x \rightarrow \infty}\dfrac{\sin(1/x)}{1/x} = 1.$$</p> <p>Therefore, we have </p> <p>$$\lim_{x \rightarrow \infty} \left(\dfrac{x+3}{\sqrt{x^2-5x}}\right)^{x^2\sin\left(1/x\right)} = e^{11/2}.$$</p> Answer 31: <p>You're making your own life more difficult. 
<code>;-)</code> But your idea is good.</p> <p>After taking the logarithm, apply the substitution $x=1/t$ where it's not restrictive to assume $x&gt;0$ (actually, $x&gt;5$); note that $$ \frac{x+3}{\sqrt{x^2-5x}}=\frac{1+3t}{\sqrt{1-5t}}, $$ so you have $$ \lim_{t\to0^+}\frac{\sin t}{t^2}\ln\frac{1+3t}{\sqrt{1-5t}}= \lim_{t\to0^+}\frac{\ln(1+3t)-\frac{1}{2}\ln(1-5t)}{t} $$ owing to $\lim_{t\to0}\frac{\sin t}{t}=1$ (of course, conditionally to the existence of the last limit).</p> <p>This can be rewritten $$ 3\lim_{t\to0^+}\frac{\ln(1+3t)}{3t}+ \frac{5}{2}\lim_{t\to0^+}\frac{\ln(1-5t)}{-5t}=3+\frac{5}{2} $$ or with Taylor up to degree $1$, $$ \lim_{t\to0^+}\frac{3t+\frac{5}{2}t+o(t)}{t}=3+\frac{5}{2} $$</p> Answer 32: <p>Notice that $$\frac{x+3}{\sqrt{x^2-5x}} = \left(\dfrac{x^2+6x+9}{x^2-5x}\right)^{1/2} = \left(1+ \dfrac{11x+9}{x^2-5x}\right)^{1/2}.$$</p> <p>By setting $y = \dfrac{x^2-5x}{11x+9}$ and since $x \rightarrow \infty \implies y \rightarrow \infty$, we obtain</p> <p>$$ x = \dfrac{5+11y + \sqrt{121y^2+146y+25}}{2}.$$</p> <p>Thus, $$\lim_{x \rightarrow \infty} \left(\dfrac{x+3}{\sqrt{x^2-5x}}\right)^{x/2} = \lim_{y \rightarrow \infty} \left( 1+\dfrac{1}{y}\right)^{\left(\dfrac{5+11y + \sqrt{121y^2+146y+25}}{4}\right)} = $$ $$= \lim_{y \rightarrow \infty} \left( 1+\dfrac{1}{y}\right)^{5/4}\cdot \left( 1+\dfrac{1}{y}\right)^{11y/4}\cdot \left( 1+\dfrac{1}{y}\right)^{11y/4\cdot \left(\sqrt{1+146/(121y)+25/(121y^2)}\right)} = e^{11/2},$$</p> <p>Since $g(y) = \sqrt{1+ \frac{146}{121y}+\frac{25}{121y^2}}$ is continuous and $\lim_{y \rightarrow \infty} g(y)$ exists.</p> <p>Yet, $$\lim_{x \rightarrow \infty}\dfrac{\sin(1/x)}{1/x} = 1.$$</p> <p>Therefore, we have </p> <p>$$\lim_{x \rightarrow \infty} \left(\dfrac{x+3}{\sqrt{x^2-5x}}\right)^{x^2\sin\left(1/x\right)} = e^{11/2}.$$</p>
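Both answers above arrive at the value $e^{11/2}$. As a quick numerical sanity check (an editorial sketch, not part of either answer — the function name `f` is illustrative), one can evaluate the original expression at a large $x$ and compare it with $e^{11/2}$:

```python
import math

def f(x):
    # ((x + 3) / sqrt(x^2 - 5x)) ** (x^2 * sin(1/x)), defined for x > 5
    base = (x + 3) / math.sqrt(x * x - 5 * x)
    return base ** (x * x * math.sin(1.0 / x))

value = f(1e6)
target = math.exp(11 / 2)
# At x = 1e6 the expression agrees with e^{11/2} (about 244.69)
# to a few parts in 10^6, consistent with the computed limit.
```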
Prove that there are at least 100 pairs of usable boots
2,101,142
8
https://math.stackexchange.com/questions/2101142/prove-that-there-are-at-least-100-pairs-of-usable-boots
<p>A store has 200 boots of size A, 200 boots of size B and 200 boots of size C. Among these 600 boots, 300 are of the left foot and 300 are of the right foot. Knowing that usable pairs of boots have the same size and are for different feet, prove that it's possible to find at least 100 pairs of usable boots.</p>
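Before reading the answers, the claim can be checked exhaustively by computer (an editorial sketch; the helper name `usable_pairs` is illustrative, not from the problem): enumerate every way to split the 300 left boots among the three sizes and confirm that the worst case still leaves 100 usable pairs.

```python
# Each size has 200 boots, and a_left + b_left + c_left = 300 left boots
# in total. A size with k left boots (and 200 - k right boots) contributes
# min(k, 200 - k) usable pairs, so we minimise the total over all
# feasible distributions of left boots.
def usable_pairs(a_left, b_left, c_left):
    return sum(min(k, 200 - k) for k in (a_left, b_left, c_left))

fewest = min(
    usable_pairs(a, b, 300 - a - b)
    for a in range(201)
    for b in range(201)
    if 0 <= 300 - a - b <= 200
)
print(fewest)  # 100 — no distribution gives fewer than 100 usable pairs
```

The minimum of 100 is attained, for example, when all size-A boots are left boots and all size-B boots are right boots, which is exactly the extreme case some answers analyse.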
[ { "answer_id": 2101231, "score": 6, "text": "<p>If any size category $A,B,C$ has equal numbers of left and right boots, there are the required $100$ pairs of usable boots and we are done.</p>\n\n<p>Otherwise every size will have a majority of one foot. Since $101\\times 3&gt;300$, these cannot all be the same foot, so without loss of generality we can assume that there are two right-foot majorities and one left-foot majority. Now the number of right boots in the right-foot majority sizes is at most $300$, so there are at least $100$ left boots in those two sizes, which can all be matched to right boots, giving us $100$ usable pairs as required.</p>\n" }, { "answer_id": 2101183, "score": 1, "text": "<p>To give you a feel for the problem, here is a proof by cases. Calling $a_L$ the number of size $A$ left foot boots, $b_R$ the number of size $B$ right foot boots, etc.</p>\n\n<p>We first note that the number of pairs of type $A$ is given by the $\\min\\{a_L,a_R\\}$. Now, we have $a_L + a_R = 200$, hence $a_R = 200-a_L$. We can use this to say the number of pairs of size $A$ is given by $\\min\\{a_L,200-a_L\\}$.</p>\n\n<p>Now, calling $p$ the number of pairs we see clearly that</p>\n\n<p>$p = \\min\\{a_L,200-a_L\\} + \\min\\{b_L,200-b_L\\} + \\min\\{c_L,200-c_L\\}$.</p>\n\n<p>Assume that the left-footed shoe is the lesser quantity in all of these terms then we get $p = a_L + b_L + c_L$, but we know there are 300 left-footed shoes, hence $p=300$. Assume that the left-footed shoe is the lesser quantity in two of the terms of $p$. Without loss of generality, assume that this is true for size $A$ and $B$. Then we have that </p>\n\n<p>$p = a_L + b_L + 200 - c_L = 300 - c_L + 200 - c_L = 500 - 2c_L$,</p>\n\n<p>but $c_L \\leq 200$ hence, $p\\geq 100$. Assume that the left-footed shoe is the lesser quantity in one of the terms of $p$. Without loss of generality, assume that this is true for size $A$. 
Then we have</p>\n\n<p>$p = a_L + 400 - (b_L + c_L) = a_L + 400 - (300 - a_L) = 100 + 2a_L \\geq 100$.</p>\n\n<p>Lastly, assume that the left-footed shoe is the lesser quantity in none of the terms of $p$. Then</p>\n\n<p>$p = 600 - a_L - b_L - c_L = 300$.</p>\n" }, { "answer_id": 2101314, "score": 1, "text": "<p><em>(Alternative solution)</em> </p>\n\n<p>Let $\\alpha, \\beta, \\gamma$ be integers with $0 \\leq \\alpha, \\beta, \\gamma \\leq 100$, the sign pattern below being chosen without loss of generality.</p>\n\n<p>The numbers of boots are tabulated as follows:</p>\n\n<p>$$\\begin{array}&amp;&amp;\n\\hline\n\\bf\\text{Left}&amp;\\bf\\text{Right}&amp;\\bf\\text{Total}&amp;\\bf\\text{Usable Pairs}\\\\\n\\hline\n\\bf\\text{Size A}&amp;100+\\alpha &amp;100-\\alpha &amp;200&amp;100-\\alpha\\\\\n\\bf\\text{Size B}&amp;100-\\beta &amp;100+\\beta &amp;200 &amp;100-\\beta\\\\\n\\bf\\text{Size C}&amp;100+\\gamma &amp;100-\\gamma &amp;200 &amp;100-\\gamma\\\\\n\\hline\n\\bf\\text{Total}&amp;300 &amp;300 &amp;600&amp;300-(\\alpha+\\beta+\\gamma)\\\\\n\\hline\n\\end{array}$$</p>\n\n<p>For total Left and Right boots to equal $300$ each, $\\alpha-\\beta+\\gamma=0 \\Rightarrow \\beta=\\alpha+\\gamma$. </p>\n\n<p>The number of usable pairs is given by\n$$\\begin{align}\nN=300-(\\alpha+\\beta+\\gamma)&amp;=300-2\\beta\\\\\nN_{\\text{max}}=N\\big|_{\\beta=0}&amp;=300\\\\\nN_{\\text{min}}=N\\big|_{\\beta=100}&amp;=100\\\\\n\\end{align}\n$$</p>\n\n<hr>\n\n<p><em>(Original solution below)</em></p>\n\n<p>Consider the extreme case where </p>\n\n<ul>\n<li>all 200 size A boots are Left (L), and </li>\n<li>all 200 size B boots are Right (R), </li>\n</ul>\n\n<p>i.e. all unusable. </p>\n\n<p>This leaves 100 R and 100 L boots. These must all be of size C. </p>\n\n<p>Hence there are 100 pairs of usable boots.</p>\n" }, { "answer_id": 2101213, "score": 0, "text": "<p>There are three categories- size A shoes, size B shoes, and size C shoes. We are going to place 600 boots into these three categories. 
Since 300 shoes are of the left foot, by the pigeon-hole principle, at least one category will contain 100 left boots. Now we add in the 300 right boots. </p>\n\n<p>Here's the kind of naive way I thought about this problem. The category containing at least 100 left boots has between 100 and 200 left boots- call this category $c_1$. Now there are around 100 cases. </p>\n\n<p>If $c_1$ contains 200 left boots, then the remaining 100 are contained in the second two categories which when we fill with right boots will yield 100 pairs of working shoes since neither of these categories will have fewer than 100 spots for right boots.</p>\n\n<p>If $c_1$ contains 199 left boots, then the remaining 101 are contained in the second two categories, and we get 1 working pair from $c_1$. If all 101 are contained in one of the remaining two categories, then that category will yield 99 additional working pairs after being filled with right boots. If neither of the remaining two categories contain all 101 left boots, then we get an additional 101 working pairs, since both categories have at least 100 remaining spots for right boots.</p>\n\n<p>If $c_1$ contains 198 left boots, we repeat the above argument. </p>\n\n<p>This kind of argument will work for $c_1$ containing 200 down to 100 left boots.</p>\n\n<p>There is probably a more elegant way of proving this, but I find this first glance type of reasoning more intuitive and still reasonably rigorous.</p>\n" } ]
Now the number of right boots in the right-foot majority sizes is at most $300$, so there are at least $100$ left boots in those two sizes, which can all be matched to right boots, giving us $100$ usable pairs as required.</p>\n" }, { "answer_id": 2101183, "score": 1, "text": "<p>To give you a feel for the problem, here is a proof by cases. Calling $a_L$ the number of size $A$ left foot boots, $b_R$ the number of size $B$ right foot boots, etc.</p>\n\n<p>We first note that the number of pairs of type $A$ is given by the $\\min\\{a_L,a_R\\}$. Now, we have $a_L + a_R = 200$, hence $a_R = 200-a_L$. We can use this to say the number of pairs of size $A$ is given by $\\min\\{a_L,200-a_L\\}$.</p>\n\n<p>Now, calling $p$ the number of pairs we see clearly that</p>\n\n<p>$p = \\min\\{a_L,200-a_L\\} + \\min\\{b_L,200-b_L\\} + \\min\\{c_L,200-c_L\\}$.</p>\n\n<p>Assume that the left-footed shoe is the lesser quantity in all of these terms then we get $p = a_L + b_L + c_L$, but we know there are 300 left-footed shoes, hence $p=300$. Assume that the left-footed shoe is the lesser quantity in two of the terms of $p$. Without loss of generality, assume that this is true for size $A$ and $B$. Then we have that </p>\n\n<p>$p = a_L + b_L + 200 - c_L = 300 - c_L + 200 - c_L = 500 - 2c_L$,</p>\n\n<p>but $c_L \\leq 200$ hence, $p\\geq 100$. Assume that the left-footed shoe is the lesser quantity in one of the terms of $p$. Without loss of generality, assume that this is true for size $A$. Then we have</p>\n\n<p>$p = a_L + 400 - (b_L + c_L) = a_L + 400 - (300 - a_L) = 100 + 2a_L \\geq 100$.</p>\n\n<p>Lastly, assume that the left-footed shoe is the lesser quantity in none of the terms of $p$. 
Then</p>\n\n<p>$p = 600 - a_L - b_L - c_L = 300$.</p>\n" }, { "answer_id": 2101314, "score": 1, "text": "<p><em>(Alternative solution)</em> </p>\n\n<p>Let $\\alpha, \\beta,\\gamma \\in \\mathbb Z\\leq 100 $</p>\n\n<p>Number of boots are tabulated as follows:</p>\n\n<p>$$\\begin{array}&amp;&amp;\n\\hline\n\\bf\\text{Left}&amp;\\bf\\text{Right}&amp;\\bf\\text{Total}&amp;\\bf\\text{Usable Pairs}\\\\\n\\hline\n\\bf\\text{Size A}&amp;100+\\alpha &amp;100-\\alpha &amp;200&amp;100-\\alpha\\\\\n\\bf\\text{Size B}&amp;100-\\beta &amp;100+\\beta &amp;200 &amp;100-\\beta\\\\\n\\bf\\text{Size C}&amp;100+\\gamma &amp;100-\\gamma &amp;200 &amp;100-\\gamma\\\\\n\\hline\n\\bf\\text{Total}&amp;300 &amp;300 &amp;600&amp;300-(\\alpha+\\beta+\\gamma)\\\\\n\\hline\n\\end{array}$$</p>\n\n<p>For total Left and Right boots to equal $300$ each, $\\alpha-\\beta+\\gamma=0 \\Rightarrow \\beta=\\alpha+\\gamma$. </p>\n\n<p>Number of usable pairs is given by\n$$\\begin{align}\nN=300-(\\alpha+\\beta+\\gamma)&amp;=300-2\\beta\\\\\nN_{\\text{max}}=N\\big|_{\\beta=0}&amp;=300\\\\\nN_{\\text{min}}=N\\big|_{\\beta=100}&amp;=100\\\\\n\\end{align}\n$$</p>\n\n<hr>\n\n<p><em>(Original solution below)</em></p>\n\n<p>Consider the extreme case where </p>\n\n<ul>\n<li>all 200 size A boots are Left (L), and </li>\n<li>all 200 size B boots are Right (R), </li>\n</ul>\n\n<p>i.e. all unusable. </p>\n\n<p>This leaves 100 R and 100 L boots. These must all be of size C. </p>\n\n<p>Hence there are 100 pairs of usable boots.</p>\n" }, { "answer_id": 2101213, "score": 0, "text": "<p>There are three categories- size A shoes, size B shoes, and size C shoes. We are going to place 600 boots into these three categories. Since 300 shoes are of the left foot, by the pigeon-hole principle, at least one category will contain 100 left boots. Now we add in the 300 right boots. </p>\n\n<p>Here's the kind of naive way I thought about this problem. 
The category containing at least 100 left boots has between 100 and 200 left boots- call this category $c_1$. Now there are around 100 cases. </p>\n\n<p>If $c_1$ contains 200 left boots, then the remaining 100 are contained in the second two categories which when we fill with right boots will yield 100 pairs of working shoes since neither of these categories will have fewer than 100 spots for right boots.</p>\n\n<p>If $c_1$ contains 199 left boots, then the remaining 101 are contained in the second two categories, and we get 1 working pair from $c_1$. If all 101 are contained in one of the remaining two categories, then that category will yield 99 additional working pairs after being filled with right boots. If neither of the remaining two categories contain all 101 left boots, then we get an additional 101 working pairs, since both categories have at least 100 remaining spots for right boots.</p>\n\n<p>If $c_1$ contains 198 left boots, we repeat the above argument. </p>\n\n<p>This kind of argument will work for $c_1$ containing 200 down to 100 left boots.</p>\n\n<p>There is probably a more elegant way of proving this, but I find this first glance type of reasoning more intuitive and still reasonably rigorous.</p>\n" }, { "answer_id": 2101231, "score": 6, "text": "<p>If any size category $A,B,C$ has equal numbers of left and right boots, there are the required $100$ pairs of usable boots and we are done.</p>\n\n<p>Otherwise every size will have a majority of one foot. Since $101\\times 3&gt;300$, these cannot all be the same foot, so without loss of generality we can assume that there are two right-foot majorities and one left-foot majority. 
Now the number of right boots in the right-foot majority sizes is at most $300$, so there are at least $100$ left boots in those two sizes, which can all be matched to right boots, giving us $100$ usable pairs as required.</p>\n" }, { "answer_id": 2101183, "score": 1, "text": "<p>To give you a feel for the problem, here is a proof by cases. Calling $a_L$ the number of size $A$ left foot boots, $b_R$ the number of size $B$ right foot boots, etc.</p>\n\n<p>We first note that the number of pairs of type $A$ is given by the $\\min\\{a_L,a_R\\}$. Now, we have $a_L + a_R = 200$, hence $a_R = 200-a_L$. We can use this to say the number of pairs of size $A$ is given by $\\min\\{a_L,200-a_L\\}$.</p>\n\n<p>Now, calling $p$ the number of pairs we see clearly that</p>\n\n<p>$p = \\min\\{a_L,200-a_L\\} + \\min\\{b_L,200-b_L\\} + \\min\\{c_L,200-c_L\\}$.</p>\n\n<p>Assume that the left-footed shoe is the lesser quantity in all of these terms then we get $p = a_L + b_L + c_L$, but we know there are 300 left-footed shoes, hence $p=300$. Assume that the left-footed shoe is the lesser quantity in two of the terms of $p$. Without loss of generality, assume that this is true for size $A$ and $B$. Then we have that </p>\n\n<p>$p = a_L + b_L + 200 - c_L = 300 - c_L + 200 - c_L = 500 - 2c_L$,</p>\n\n<p>but $c_L \\leq 200$ hence, $p\\geq 100$. Assume that the left-footed shoe is the lesser quantity in one of the terms of $p$. Without loss of generality, assume that this is true for size $A$. Then we have</p>\n\n<p>$p = a_L + 400 - (b_L + c_L) = a_L + 400 - (300 - a_L) = 100 + 2a_L \\geq 100$.</p>\n\n<p>Lastly, assume that the left-footed shoe is the lesser quantity in none of the terms of $p$. 
Then</p>\n\n<p>$p = 600 - a_L - b_L - c_L = 300$.</p>\n" }, { "answer_id": 2101314, "score": 1, "text": "<p><em>(Alternative solution)</em> </p>\n\n<p>Let $\\alpha, \\beta,\\gamma \\in \\mathbb Z\\leq 100 $</p>\n\n<p>Number of boots are tabulated as follows:</p>\n\n<p>$$\\begin{array}&amp;&amp;\n\\hline\n\\bf\\text{Left}&amp;\\bf\\text{Right}&amp;\\bf\\text{Total}&amp;\\bf\\text{Usable Pairs}\\\\\n\\hline\n\\bf\\text{Size A}&amp;100+\\alpha &amp;100-\\alpha &amp;200&amp;100-\\alpha\\\\\n\\bf\\text{Size B}&amp;100-\\beta &amp;100+\\beta &amp;200 &amp;100-\\beta\\\\\n\\bf\\text{Size C}&amp;100+\\gamma &amp;100-\\gamma &amp;200 &amp;100-\\gamma\\\\\n\\hline\n\\bf\\text{Total}&amp;300 &amp;300 &amp;600&amp;300-(\\alpha+\\beta+\\gamma)\\\\\n\\hline\n\\end{array}$$</p>\n\n<p>For total Left and Right boots to equal $300$ each, $\\alpha-\\beta+\\gamma=0 \\Rightarrow \\beta=\\alpha+\\gamma$. </p>\n\n<p>Number of usable pairs is given by\n$$\\begin{align}\nN=300-(\\alpha+\\beta+\\gamma)&amp;=300-2\\beta\\\\\nN_{\\text{max}}=N\\big|_{\\beta=0}&amp;=300\\\\\nN_{\\text{min}}=N\\big|_{\\beta=100}&amp;=100\\\\\n\\end{align}\n$$</p>\n\n<hr>\n\n<p><em>(Original solution below)</em></p>\n\n<p>Consider the extreme case where </p>\n\n<ul>\n<li>all 200 size A boots are Left (L), and </li>\n<li>all 200 size B boots are Right (R), </li>\n</ul>\n\n<p>i.e. all unusable. </p>\n\n<p>This leaves 100 R and 100 L boots. These must all be of size C. </p>\n\n<p>Hence there are 100 pairs of usable boots.</p>\n" }, { "answer_id": 2101213, "score": 0, "text": "<p>There are three categories- size A shoes, size B shoes, and size C shoes. We are going to place 600 boots into these three categories. Since 300 shoes are of the left foot, by the pigeon-hole principle, at least one category will contain 100 left boots. Now we add in the 300 right boots. </p>\n\n<p>Here's the kind of naive way I thought about this problem. 
The category containing at least 100 left boots has between 100 and 200 left boots- call this category $c_1$. Now there are around 100 cases. </p>\n\n<p>If $c_1$ contains 200 left boots, then the remaining 100 are contained in the second two categories which when we fill with right boots will yield 100 pairs of working shoes since neither of these categories will have fewer than 100 spots for right boots.</p>\n\n<p>If $c_1$ contains 199 left boots, then the remaining 101 are contained in the second two categories, and we get 1 working pair from $c_1$. If all 101 are contained in one of the remaining two categories, then that category will yield 99 additional working pairs after being filled with right boots. If neither of the remaining two categories contain all 101 left boots, then we get an additional 101 working pairs, since both categories have at least 100 remaining spots for right boots.</p>\n\n<p>If $c_1$ contains 198 left boots, we repeat the above argument. </p>\n\n<p>This kind of argument will work for $c_1$ containing 200 down to 100 left boots.</p>\n\n<p>There is probably a more elegant way of proving this, but I find this first glance type of reasoning more intuitive and still reasonably rigorous.</p>\n" }, { "answer_id": 2101231, "score": 6, "text": "<p>If any size category $A,B,C$ has equal numbers of left and right boots, there are the required $100$ pairs of usable boots and we are done.</p>\n\n<p>Otherwise every size will have a majority of one foot. Since $101\\times 3&gt;300$, these cannot all be the same foot, so without loss of generality we can assume that there are two right-foot majorities and one left-foot majority. 
Now the number of right boots in the right-foot majority sizes is at most $300$, so there are at least $100$ left boots in those two sizes, which can all be matched to right boots, giving us $100$ usable pairs as required.</p>\n" }, { "answer_id": 2101183, "score": 1, "text": "<p>To give you a feel for the problem, here is a proof by cases. Calling $a_L$ the number of size $A$ left foot boots, $b_R$ the number of size $B$ right foot boots, etc.</p>\n\n<p>We first note that the number of pairs of type $A$ is given by the $\\min\\{a_L,a_R\\}$. Now, we have $a_L + a_R = 200$, hence $a_R = 200-a_L$. We can use this to say the number of pairs of size $A$ is given by $\\min\\{a_L,200-a_L\\}$.</p>\n\n<p>Now, calling $p$ the number of pairs we see clearly that</p>\n\n<p>$p = \\min\\{a_L,200-a_L\\} + \\min\\{b_L,200-b_L\\} + \\min\\{c_L,200-c_L\\}$.</p>\n\n<p>Assume that the left-footed shoe is the lesser quantity in all of these terms then we get $p = a_L + b_L + c_L$, but we know there are 300 left-footed shoes, hence $p=300$. Assume that the left-footed shoe is the lesser quantity in two of the terms of $p$. Without loss of generality, assume that this is true for size $A$ and $B$. Then we have that </p>\n\n<p>$p = a_L + b_L + 200 - c_L = 300 - c_L + 200 - c_L = 500 - 2c_L$,</p>\n\n<p>but $c_L \\leq 200$ hence, $p\\geq 100$. Assume that the left-footed shoe is the lesser quantity in one of the terms of $p$. Without loss of generality, assume that this is true for size $A$. Then we have</p>\n\n<p>$p = a_L + 400 - (b_L + c_L) = a_L + 400 - (300 - a_L) = 100 + 2a_L \\geq 100$.</p>\n\n<p>Lastly, assume that the left-footed shoe is the lesser quantity in none of the terms of $p$. 
Then</p>\n\n<p>$p = 600 - a_L - b_L - c_L = 300$.</p>\n" }, { "answer_id": 2101314, "score": 1, "text": "<p><em>(Alternative solution)</em> </p>\n\n<p>Let $\\alpha, \\beta,\\gamma \\in \\mathbb Z\\leq 100 $</p>\n\n<p>Number of boots are tabulated as follows:</p>\n\n<p>$$\\begin{array}&amp;&amp;\n\\hline\n\\bf\\text{Left}&amp;\\bf\\text{Right}&amp;\\bf\\text{Total}&amp;\\bf\\text{Usable Pairs}\\\\\n\\hline\n\\bf\\text{Size A}&amp;100+\\alpha &amp;100-\\alpha &amp;200&amp;100-\\alpha\\\\\n\\bf\\text{Size B}&amp;100-\\beta &amp;100+\\beta &amp;200 &amp;100-\\beta\\\\\n\\bf\\text{Size C}&amp;100+\\gamma &amp;100-\\gamma &amp;200 &amp;100-\\gamma\\\\\n\\hline\n\\bf\\text{Total}&amp;300 &amp;300 &amp;600&amp;300-(\\alpha+\\beta+\\gamma)\\\\\n\\hline\n\\end{array}$$</p>\n\n<p>For total Left and Right boots to equal $300$ each, $\\alpha-\\beta+\\gamma=0 \\Rightarrow \\beta=\\alpha+\\gamma$. </p>\n\n<p>Number of usable pairs is given by\n$$\\begin{align}\nN=300-(\\alpha+\\beta+\\gamma)&amp;=300-2\\beta\\\\\nN_{\\text{max}}=N\\big|_{\\beta=0}&amp;=300\\\\\nN_{\\text{min}}=N\\big|_{\\beta=100}&amp;=100\\\\\n\\end{align}\n$$</p>\n\n<hr>\n\n<p><em>(Original solution below)</em></p>\n\n<p>Consider the extreme case where </p>\n\n<ul>\n<li>all 200 size A boots are Left (L), and </li>\n<li>all 200 size B boots are Right (R), </li>\n</ul>\n\n<p>i.e. all unusable. </p>\n\n<p>This leaves 100 R and 100 L boots. These must all be of size C. </p>\n\n<p>Hence there are 100 pairs of usable boots.</p>\n" }, { "answer_id": 2101213, "score": 0, "text": "<p>There are three categories- size A shoes, size B shoes, and size C shoes. We are going to place 600 boots into these three categories. Since 300 shoes are of the left foot, by the pigeon-hole principle, at least one category will contain 100 left boots. Now we add in the 300 right boots. </p>\n\n<p>Here's the kind of naive way I thought about this problem. 
The category containing at least 100 left boots has between 100 and 200 left boots- call this category $c_1$. Now there are around 100 cases. </p>\n\n<p>If $c_1$ contains 200 left boots, then the remaining 100 are contained in the second two categories which when we fill with right boots will yield 100 pairs of working shoes since neither of these categories will have fewer than 100 spots for right boots.</p>\n\n<p>If $c_1$ contains 199 left boots, then the remaining 101 are contained in the second two categories, and we get 1 working pair from $c_1$. If all 101 are contained in one of the remaining two categories, then that category will yield 99 additional working pairs after being filled with right boots. If neither of the remaining two categories contain all 101 left boots, then we get an additional 101 working pairs, since both categories have at least 100 remaining spots for right boots.</p>\n\n<p>If $c_1$ contains 198 left boots, we repeat the above argument. </p>\n\n<p>This kind of argument will work for $c_1$ containing 200 down to 100 left boots.</p>\n\n<p>There is probably a more elegant way of proving this, but I find this first glance type of reasoning more intuitive and still reasonably rigorous.</p>\n" }, { "answer_id": 2101231, "score": 6, "text": "<p>If any size category $A,B,C$ has equal numbers of left and right boots, there are the required $100$ pairs of usable boots and we are done.</p>\n\n<p>Otherwise every size will have a majority of one foot. Since $101\\times 3&gt;300$, these cannot all be the same foot, so without loss of generality we can assume that there are two right-foot majorities and one left-foot majority. 
Now the number of right boots in the right-foot majority sizes is at most $300$, so there are at least $100$ left boots in those two sizes, which can all be matched to right boots, giving us $100$ usable pairs as required.</p>\n" }, { "answer_id": 2101183, "score": 1, "text": "<p>To give you a feel for the problem, here is a proof by cases. Calling $a_L$ the number of size $A$ left foot boots, $b_R$ the number of size $B$ right foot boots, etc.</p>\n\n<p>We first note that the number of pairs of type $A$ is given by the $\\min\\{a_L,a_R\\}$. Now, we have $a_L + a_R = 200$, hence $a_R = 200-a_L$. We can use this to say the number of pairs of size $A$ is given by $\\min\\{a_L,200-a_L\\}$.</p>\n\n<p>Now, calling $p$ the number of pairs we see clearly that</p>\n\n<p>$p = \\min\\{a_L,200-a_L\\} + \\min\\{b_L,200-b_L\\} + \\min\\{c_L,200-c_L\\}$.</p>\n\n<p>Assume that the left-footed shoe is the lesser quantity in all of these terms then we get $p = a_L + b_L + c_L$, but we know there are 300 left-footed shoes, hence $p=300$. Assume that the left-footed shoe is the lesser quantity in two of the terms of $p$. Without loss of generality, assume that this is true for size $A$ and $B$. Then we have that </p>\n\n<p>$p = a_L + b_L + 200 - c_L = 300 - c_L + 200 - c_L = 500 - 2c_L$,</p>\n\n<p>but $c_L \\leq 200$ hence, $p\\geq 100$. Assume that the left-footed shoe is the lesser quantity in one of the terms of $p$. Without loss of generality, assume that this is true for size $A$. Then we have</p>\n\n<p>$p = a_L + 400 - (b_L + c_L) = a_L + 400 - (300 - a_L) = 100 + 2a_L \\geq 100$.</p>\n\n<p>Lastly, assume that the left-footed shoe is the lesser quantity in none of the terms of $p$. 
Then</p>\n\n<p>$p = 600 - a_L - b_L - c_L = 300$.</p>\n" }, { "answer_id": 2101314, "score": 1, "text": "<p><em>(Alternative solution)</em> </p>\n\n<p>Let $\\alpha, \\beta,\\gamma \\in \\mathbb Z\\leq 100 $</p>\n\n<p>Number of boots are tabulated as follows:</p>\n\n<p>$$\\begin{array}&amp;&amp;\n\\hline\n\\bf\\text{Left}&amp;\\bf\\text{Right}&amp;\\bf\\text{Total}&amp;\\bf\\text{Usable Pairs}\\\\\n\\hline\n\\bf\\text{Size A}&amp;100+\\alpha &amp;100-\\alpha &amp;200&amp;100-\\alpha\\\\\n\\bf\\text{Size B}&amp;100-\\beta &amp;100+\\beta &amp;200 &amp;100-\\beta\\\\\n\\bf\\text{Size C}&amp;100+\\gamma &amp;100-\\gamma &amp;200 &amp;100-\\gamma\\\\\n\\hline\n\\bf\\text{Total}&amp;300 &amp;300 &amp;600&amp;300-(\\alpha+\\beta+\\gamma)\\\\\n\\hline\n\\end{array}$$</p>\n\n<p>For total Left and Right boots to equal $300$ each, $\\alpha-\\beta+\\gamma=0 \\Rightarrow \\beta=\\alpha+\\gamma$. </p>\n\n<p>Number of usable pairs is given by\n$$\\begin{align}\nN=300-(\\alpha+\\beta+\\gamma)&amp;=300-2\\beta\\\\\nN_{\\text{max}}=N\\big|_{\\beta=0}&amp;=300\\\\\nN_{\\text{min}}=N\\big|_{\\beta=100}&amp;=100\\\\\n\\end{align}\n$$</p>\n\n<hr>\n\n<p><em>(Original solution below)</em></p>\n\n<p>Consider the extreme case where </p>\n\n<ul>\n<li>all 200 size A boots are Left (L), and </li>\n<li>all 200 size B boots are Right (R), </li>\n</ul>\n\n<p>i.e. all unusable. </p>\n\n<p>This leaves 100 R and 100 L boots. These must all be of size C. </p>\n\n<p>Hence there are 100 pairs of usable boots.</p>\n" }, { "answer_id": 2101213, "score": 0, "text": "<p>There are three categories- size A shoes, size B shoes, and size C shoes. We are going to place 600 boots into these three categories. Since 300 shoes are of the left foot, by the pigeon-hole principle, at least one category will contain 100 left boots. Now we add in the 300 right boots. </p>\n\n<p>Here's the kind of naive way I thought about this problem. 
The category containing at least 100 left boots has between 100 and 200 left boots- call this category $c_1$. Now there are around 100 cases. </p>\n\n<p>If $c_1$ contains 200 left boots, then the remaining 100 are contained in the second two categories which when we fill with right boots will yield 100 pairs of working shoes since neither of these categories will have fewer than 100 spots for right boots.</p>\n\n<p>If $c_1$ contains 199 left boots, then the remaining 101 are contained in the second two categories, and we get 1 working pair from $c_1$. If all 101 are contained in one of the remaining two categories, then that category will yield 99 additional working pairs after being filled with right boots. If neither of the remaining two categories contain all 101 left boots, then we get an additional 101 working pairs, since both categories have at least 100 remaining spots for right boots.</p>\n\n<p>If $c_1$ contains 198 left boots, we repeat the above argument. </p>\n\n<p>This kind of argument will work for $c_1$ containing 200 down to 100 left boots.</p>\n\n<p>There is probably a more elegant way of proving this, but I find this first glance type of reasoning more intuitive and still reasonably rigorous.</p>\n" }, { "answer_id": 2101231, "score": 6, "text": "<p>If any size category $A,B,C$ has equal numbers of left and right boots, there are the required $100$ pairs of usable boots and we are done.</p>\n\n<p>Otherwise every size will have a majority of one foot. Since $101\\times 3&gt;300$, these cannot all be the same foot, so without loss of generality we can assume that there are two right-foot majorities and one left-foot majority. 
Now the number of right boots in the right-foot majority sizes is at most $300$, so there are at least $100$ left boots in those two sizes, which can all be matched to right boots, giving us $100$ usable pairs as required.</p>\n" }, { "answer_id": 2101183, "score": 1, "text": "<p>To give you a feel for the problem, here is a proof by cases. Calling $a_L$ the number of size $A$ left foot boots, $b_R$ the number of size $B$ right foot boots, etc.</p>\n\n<p>We first note that the number of pairs of type $A$ is given by the $\\min\\{a_L,a_R\\}$. Now, we have $a_L + a_R = 200$, hence $a_R = 200-a_L$. We can use this to say the number of pairs of size $A$ is given by $\\min\\{a_L,200-a_L\\}$.</p>\n\n<p>Now, calling $p$ the number of pairs we see clearly that</p>\n\n<p>$p = \\min\\{a_L,200-a_L\\} + \\min\\{b_L,200-b_L\\} + \\min\\{c_L,200-c_L\\}$.</p>\n\n<p>Assume that the left-footed shoe is the lesser quantity in all of these terms then we get $p = a_L + b_L + c_L$, but we know there are 300 left-footed shoes, hence $p=300$. Assume that the left-footed shoe is the lesser quantity in two of the terms of $p$. Without loss of generality, assume that this is true for size $A$ and $B$. Then we have that </p>\n\n<p>$p = a_L + b_L + 200 - c_L = 300 - c_L + 200 - c_L = 500 - 2c_L$,</p>\n\n<p>but $c_L \\leq 200$ hence, $p\\geq 100$. Assume that the left-footed shoe is the lesser quantity in one of the terms of $p$. Without loss of generality, assume that this is true for size $A$. Then we have</p>\n\n<p>$p = a_L + 400 - (b_L + c_L) = a_L + 400 - (300 - a_L) = 100 + 2a_L \\geq 100$.</p>\n\n<p>Lastly, assume that the left-footed shoe is the lesser quantity in none of the terms of $p$. 
Then</p>\n\n<p>$p = 600 - a_L - b_L - c_L = 300$.</p>\n" }, { "answer_id": 2101314, "score": 1, "text": "<p><em>(Alternative solution)</em> </p>\n\n<p>Let $\\alpha, \\beta,\\gamma \\in \\mathbb Z\\leq 100 $</p>\n\n<p>Number of boots are tabulated as follows:</p>\n\n<p>$$\\begin{array}&amp;&amp;\n\\hline\n\\bf\\text{Left}&amp;\\bf\\text{Right}&amp;\\bf\\text{Total}&amp;\\bf\\text{Usable Pairs}\\\\\n\\hline\n\\bf\\text{Size A}&amp;100+\\alpha &amp;100-\\alpha &amp;200&amp;100-\\alpha\\\\\n\\bf\\text{Size B}&amp;100-\\beta &amp;100+\\beta &amp;200 &amp;100-\\beta\\\\\n\\bf\\text{Size C}&amp;100+\\gamma &amp;100-\\gamma &amp;200 &amp;100-\\gamma\\\\\n\\hline\n\\bf\\text{Total}&amp;300 &amp;300 &amp;600&amp;300-(\\alpha+\\beta+\\gamma)\\\\\n\\hline\n\\end{array}$$</p>\n\n<p>For total Left and Right boots to equal $300$ each, $\\alpha-\\beta+\\gamma=0 \\Rightarrow \\beta=\\alpha+\\gamma$. </p>\n\n<p>Number of usable pairs is given by\n$$\\begin{align}\nN=300-(\\alpha+\\beta+\\gamma)&amp;=300-2\\beta\\\\\nN_{\\text{max}}=N\\big|_{\\beta=0}&amp;=300\\\\\nN_{\\text{min}}=N\\big|_{\\beta=100}&amp;=100\\\\\n\\end{align}\n$$</p>\n\n<hr>\n\n<p><em>(Original solution below)</em></p>\n\n<p>Consider the extreme case where </p>\n\n<ul>\n<li>all 200 size A boots are Left (L), and </li>\n<li>all 200 size B boots are Right (R), </li>\n</ul>\n\n<p>i.e. all unusable. </p>\n\n<p>This leaves 100 R and 100 L boots. These must all be of size C. </p>\n\n<p>Hence there are 100 pairs of usable boots.</p>\n" }, { "answer_id": 2101213, "score": 0, "text": "<p>There are three categories- size A shoes, size B shoes, and size C shoes. We are going to place 600 boots into these three categories. Since 300 shoes are of the left foot, by the pigeon-hole principle, at least one category will contain 100 left boots. Now we add in the 300 right boots. </p>\n\n<p>Here's the kind of naive way I thought about this problem. 
The category containing at least 100 left boots has between 100 and 200 left boots- call this category $c_1$. Now there are around 100 cases. </p>\n\n<p>If $c_1$ contains 200 left boots, then the remaining 100 are contained in the second two categories which when we fill with right boots will yield 100 pairs of working shoes since neither of these categories will have fewer than 100 spots for right boots.</p>\n\n<p>If $c_1$ contains 199 left boots, then the remaining 101 are contained in the second two categories, and we get 1 working pair from $c_1$. If all 101 are contained in one of the remaining two categories, then that category will yield 99 additional working pairs after being filled with right boots. If neither of the remaining two categories contain all 101 left boots, then we get an additional 101 working pairs, since both categories have at least 100 remaining spots for right boots.</p>\n\n<p>If $c_1$ contains 198 left boots, we repeat the above argument. </p>\n\n<p>This kind of argument will work for $c_1$ containing 200 down to 100 left boots.</p>\n\n<p>There is probably a more elegant way of proving this, but I find this first glance type of reasoning more intuitive and still reasonably rigorous.</p>\n" }, { "answer_id": 2101231, "score": 6, "text": "<p>If any size category $A,B,C$ has equal numbers of left and right boots, there are the required $100$ pairs of usable boots and we are done.</p>\n\n<p>Otherwise every size will have a majority of one foot. Since $101\\times 3&gt;300$, these cannot all be the same foot, so without loss of generality we can assume that there are two right-foot majorities and one left-foot majority. 
Now the number of right boots in the right-foot majority sizes is at most $300$, so there are at least $100$ left boots in those two sizes, which can all be matched to right boots, giving us $100$ usable pairs as required.</p>\n" }, { "answer_id": 2101183, "score": 1, "text": "<p>To give you a feel for the problem, here is a proof by cases. Calling $a_L$ the number of size $A$ left foot boots, $b_R$ the number of size $B$ right foot boots, etc.</p>\n\n<p>We first note that the number of pairs of type $A$ is given by the $\\min\\{a_L,a_R\\}$. Now, we have $a_L + a_R = 200$, hence $a_R = 200-a_L$. We can use this to say the number of pairs of size $A$ is given by $\\min\\{a_L,200-a_L\\}$.</p>\n\n<p>Now, calling $p$ the number of pairs we see clearly that</p>\n\n<p>$p = \\min\\{a_L,200-a_L\\} + \\min\\{b_L,200-b_L\\} + \\min\\{c_L,200-c_L\\}$.</p>\n\n<p>Assume that the left-footed shoe is the lesser quantity in all of these terms then we get $p = a_L + b_L + c_L$, but we know there are 300 left-footed shoes, hence $p=300$. Assume that the left-footed shoe is the lesser quantity in two of the terms of $p$. Without loss of generality, assume that this is true for size $A$ and $B$. Then we have that </p>\n\n<p>$p = a_L + b_L + 200 - c_L = 300 - c_L + 200 - c_L = 500 - 2c_L$,</p>\n\n<p>but $c_L \\leq 200$ hence, $p\\geq 100$. Assume that the left-footed shoe is the lesser quantity in one of the terms of $p$. Without loss of generality, assume that this is true for size $A$. Then we have</p>\n\n<p>$p = a_L + 400 - (b_L + c_L) = a_L + 400 - (300 - a_L) = 100 + 2a_L \\geq 100$.</p>\n\n<p>Lastly, assume that the left-footed shoe is the lesser quantity in none of the terms of $p$. 
Then</p>\n\n<p>$p = 600 - a_L - b_L - c_L = 300$.</p>\n" }, { "answer_id": 2101314, "score": 1, "text": "<p><em>(Alternative solution)</em> </p>\n\n<p>Let $\\alpha, \\beta,\\gamma \\in \\mathbb Z\\leq 100 $</p>\n\n<p>Number of boots are tabulated as follows:</p>\n\n<p>$$\\begin{array}&amp;&amp;\n\\hline\n\\bf\\text{Left}&amp;\\bf\\text{Right}&amp;\\bf\\text{Total}&amp;\\bf\\text{Usable Pairs}\\\\\n\\hline\n\\bf\\text{Size A}&amp;100+\\alpha &amp;100-\\alpha &amp;200&amp;100-\\alpha\\\\\n\\bf\\text{Size B}&amp;100-\\beta &amp;100+\\beta &amp;200 &amp;100-\\beta\\\\\n\\bf\\text{Size C}&amp;100+\\gamma &amp;100-\\gamma &amp;200 &amp;100-\\gamma\\\\\n\\hline\n\\bf\\text{Total}&amp;300 &amp;300 &amp;600&amp;300-(\\alpha+\\beta+\\gamma)\\\\\n\\hline\n\\end{array}$$</p>\n\n<p>For total Left and Right boots to equal $300$ each, $\\alpha-\\beta+\\gamma=0 \\Rightarrow \\beta=\\alpha+\\gamma$. </p>\n\n<p>Number of usable pairs is given by\n$$\\begin{align}\nN=300-(\\alpha+\\beta+\\gamma)&amp;=300-2\\beta\\\\\nN_{\\text{max}}=N\\big|_{\\beta=0}&amp;=300\\\\\nN_{\\text{min}}=N\\big|_{\\beta=100}&amp;=100\\\\\n\\end{align}\n$$</p>\n\n<hr>\n\n<p><em>(Original solution below)</em></p>\n\n<p>Consider the extreme case where </p>\n\n<ul>\n<li>all 200 size A boots are Left (L), and </li>\n<li>all 200 size B boots are Right (R), </li>\n</ul>\n\n<p>i.e. all unusable. </p>\n\n<p>This leaves 100 R and 100 L boots. These must all be of size C. </p>\n\n<p>Hence there are 100 pairs of usable boots.</p>\n" }, { "answer_id": 2101213, "score": 0, "text": "<p>There are three categories- size A shoes, size B shoes, and size C shoes. We are going to place 600 boots into these three categories. Since 300 shoes are of the left foot, by the pigeon-hole principle, at least one category will contain 100 left boots. Now we add in the 300 right boots. </p>\n\n<p>Here's the kind of naive way I thought about this problem. 
The category containing at least 100 left boots has between 100 and 200 left boots- call this category $c_1$. Now there are around 100 cases. </p>\n\n<p>If $c_1$ contains 200 left boots, then the remaining 100 are contained in the second two categories which when we fill with right boots will yield 100 pairs of working shoes since neither of these categories will have fewer than 100 spots for right boots.</p>\n\n<p>If $c_1$ contains 199 left boots, then the remaining 101 are contained in the second two categories, and we get 1 working pair from $c_1$. If all 101 are contained in one of the remaining two categories, then that category will yield 99 additional working pairs after being filled with right boots. If neither of the remaining two categories contain all 101 left boots, then we get an additional 101 working pairs, since both categories have at least 100 remaining spots for right boots.</p>\n\n<p>If $c_1$ contains 198 left boots, we repeat the above argument. </p>\n\n<p>This kind of argument will work for $c_1$ containing 200 down to 100 left boots.</p>\n\n<p>There is probably a more elegant way of proving this, but I find this first glance type of reasoning more intuitive and still reasonably rigorous.</p>\n" }, { "answer_id": 2101231, "score": 6, "text": "<p>If any size category $A,B,C$ has equal numbers of left and right boots, there are the required $100$ pairs of usable boots and we are done.</p>\n\n<p>Otherwise every size will have a majority of one foot. Since $101\\times 3&gt;300$, these cannot all be the same foot, so without loss of generality we can assume that there are two right-foot majorities and one left-foot majority. 
Now the number of right boots in the right-foot majority sizes is at most $300$, so there are at least $100$ left boots in those two sizes, which can all be matched to right boots, giving us $100$ usable pairs as required.</p>\n" }, { "answer_id": 2101183, "score": 1, "text": "<p>To give you a feel for the problem, here is a proof by cases. Calling $a_L$ the number of size $A$ left foot boots, $b_R$ the number of size $B$ right foot boots, etc.</p>\n\n<p>We first note that the number of pairs of type $A$ is given by the $\\min\\{a_L,a_R\\}$. Now, we have $a_L + a_R = 200$, hence $a_R = 200-a_L$. We can use this to say the number of pairs of size $A$ is given by $\\min\\{a_L,200-a_L\\}$.</p>\n\n<p>Now, calling $p$ the number of pairs we see clearly that</p>\n\n<p>$p = \\min\\{a_L,200-a_L\\} + \\min\\{b_L,200-b_L\\} + \\min\\{c_L,200-c_L\\}$.</p>\n\n<p>Assume that the left-footed shoe is the lesser quantity in all of these terms then we get $p = a_L + b_L + c_L$, but we know there are 300 left-footed shoes, hence $p=300$. Assume that the left-footed shoe is the lesser quantity in two of the terms of $p$. Without loss of generality, assume that this is true for size $A$ and $B$. Then we have that </p>\n\n<p>$p = a_L + b_L + 200 - c_L = 300 - c_L + 200 - c_L = 500 - 2c_L$,</p>\n\n<p>but $c_L \\leq 200$ hence, $p\\geq 100$. Assume that the left-footed shoe is the lesser quantity in one of the terms of $p$. Without loss of generality, assume that this is true for size $A$. Then we have</p>\n\n<p>$p = a_L + 400 - (b_L + c_L) = a_L + 400 - (300 - a_L) = 100 + 2a_L \\geq 100$.</p>\n\n<p>Lastly, assume that the left-footed shoe is the lesser quantity in none of the terms of $p$. 
Then</p>\n\n<p>$p = 600 - a_L - b_L - c_L = 300$.</p>\n" }
]
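The answers above all rest on the same counting identity: a size with $l$ left boots (out of $200$) yields $\min(l,\,200-l)$ usable pairs. The brute-force check below is an editor's illustration in Python, not taken from any of the answers; it enumerates every feasible split of the $300$ left boots across the three sizes and confirms that the pair count always lies between $100$ and $300$:

```python
# Editor's illustration (not from any answer): brute-force check that with
# 200 boots per size and 300 left boots in total, the number of usable
# pairs, sum(min(l_s, 200 - l_s)), always lies between 100 and 300.

def usable_pairs(l_a, l_b, l_c):
    # each size has 200 boots, so the right-boot count per size is 200 - l
    return sum(min(l, 200 - l) for l in (l_a, l_b, l_c))

results = []
for l_a in range(201):
    for l_b in range(201):
        l_c = 300 - l_a - l_b      # forced by the 300-left-boot total
        if 0 <= l_c <= 200:
            results.append(usable_pairs(l_a, l_b, l_c))

print(min(results), max(results))  # prints: 100 300
```

The minimum $100$ is attained at extreme splits such as $(200, 0, 100)$, matching the "extreme case" configuration in the alternative-solution answer.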
Question: Prove that there are at least 100 pairs of usable boots Answer 1: <p>If any size category $A,B,C$ has equal numbers of left and right boots, there are the required $100$ pairs of usable boots and we are done.</p> <p>Otherwise every size will have a majority of one foot. Since $101\times 3&gt;300$, these cannot all be the same foot, so without loss of generality we can assume that there are two right-foot majorities and one left-foot majority. Now the number of right boots in the right-foot majority sizes is at most $300$, so there are at least $100$ left boots in those two sizes, which can all be matched to right boots, giving us $100$ usable pairs as required.</p> Answer 2: <p>To give you a feel for the problem, here is a proof by cases. Call $a_L$ the number of size $A$ left foot boots, $b_R$ the number of size $B$ right foot boots, etc.</p> <p>We first note that the number of pairs of size $A$ is given by $\min\{a_L,a_R\}$. Now, we have $a_L + a_R = 200$, hence $a_R = 200-a_L$. We can use this to say the number of pairs of size $A$ is given by $\min\{a_L,200-a_L\}$.</p> <p>Now, calling $p$ the total number of pairs, we see that</p> <p>$p = \min\{a_L,200-a_L\} + \min\{b_L,200-b_L\} + \min\{c_L,200-c_L\}$.</p> <p>Assume that the left-footed shoe is the lesser quantity in all of these terms; then we get $p = a_L + b_L + c_L$, but we know there are 300 left-footed shoes, hence $p=300$. Assume that the left-footed shoe is the lesser quantity in two of the terms of $p$. Without loss of generality, assume that this is true for size $A$ and $B$. Then we have that</p> <p>$p = a_L + b_L + 200 - c_L = 300 - c_L + 200 - c_L = 500 - 2c_L$,</p> <p>but $c_L \leq 200$, hence $p\geq 100$. Assume that the left-footed shoe is the lesser quantity in one of the terms of $p$. Without loss of generality, assume that this is true for size $A$. 
Then we have</p> <p>$p = a_L + 400 - (b_L + c_L) = a_L + 400 - (300 - a_L) = 100 + 2a_L \geq 100$.</p> <p>Lastly, assume that the left-footed shoe is the lesser quantity in none of the terms of $p$. Then</p> <p>$p = 600 - a_L - b_L - c_L = 300$.</p> Answer 3: <p><em>(Alternative solution)</em></p> <p>Let $\alpha,\beta,\gamma$ be integers with $0 \leq \alpha,\beta,\gamma \leq 100$, with the sign pattern in the table below taken without loss of generality.</p> <p>The numbers of boots are tabulated as follows:</p> <p>$$\begin{array}{l|cccc} \hline &amp;\bf\text{Left}&amp;\bf\text{Right}&amp;\bf\text{Total}&amp;\bf\text{Usable Pairs}\\ \hline \bf\text{Size A}&amp;100+\alpha &amp;100-\alpha &amp;200 &amp;100-\alpha\\ \bf\text{Size B}&amp;100-\beta &amp;100+\beta &amp;200 &amp;100-\beta\\ \bf\text{Size C}&amp;100+\gamma &amp;100-\gamma &amp;200 &amp;100-\gamma\\ \hline \bf\text{Total}&amp;300 &amp;300 &amp;600 &amp;300-(\alpha+\beta+\gamma)\\ \hline \end{array}$$</p> <p>For the total Left and Right boots to equal $300$ each, $\alpha-\beta+\gamma=0 \Rightarrow \beta=\alpha+\gamma$.</p> <p>The number of usable pairs is given by $$\begin{align} N=300-(\alpha+\beta+\gamma)&amp;=300-2\beta\\ N_{\text{max}}=N\big|_{\beta=0}&amp;=300\\ N_{\text{min}}=N\big|_{\beta=100}&amp;=100\\ \end{align}$$</p> <hr> <p><em>(Original solution below)</em></p> <p>Consider the extreme case where</p> <ul> <li>all 200 size A boots are Left (L), and</li> <li>all 200 size B boots are Right (R),</li> </ul> <p>i.e. all unusable.</p> <p>This leaves 100 R and 100 L boots. These must all be of size C.</p> <p>Hence there are 100 pairs of usable boots.</p> Answer 4: <p>There are three categories: size A shoes, size B shoes, and size C shoes. We are going to place 600 boots into these three categories. Since 300 shoes are of the left foot, by the pigeon-hole principle, at least one category will contain at least 100 left boots. Now we add in the 300 right boots.</p> <p>Here's the kind of naive way I thought about this problem. 
The category containing at least 100 left boots has between 100 and 200 left boots- call this category $c_1$. Now there are around 100 cases. </p> <p>If $c_1$ contains 200 left boots, then the remaining 100 are contained in the second two categories which when we fill with right boots will yield 100 pairs of working shoes since neither of these categories will have fewer than 100 spots for right boots.</p> <p>If $c_1$ contains 199 left boots, then the remaining 101 are contained in the second two categories, and we get 1 working pair from $c_1$. If all 101 are contained in one of the remaining two categories, then that category will yield 99 additional working pairs after being filled with right boots. If neither of the remaining two categories contain all 101 left boots, then we get an additional 101 working pairs, since both categories have at least 100 remaining spots for right boots.</p> <p>If $c_1$ contains 198 left boots, we repeat the above argument. </p> <p>This kind of argument will work for $c_1$ containing 200 down to 100 left boots.</p> <p>There is probably a more elegant way of proving this, but I find this first glance type of reasoning more intuitive and still reasonably rigorous.</p>
</p> <p>Number of usable pairs is given by $$\begin{align} N=300-(\alpha+\beta+\gamma)&amp;=300-2\beta\\ N_{\text{max}}=N\big|_{\beta=0}&amp;=300\\ N_{\text{min}}=N\big|_{\beta=100}&amp;=100\\ \end{align} $$</p> <hr> <p><em>(Original solution below)</em></p> <p>Consider the extreme case where </p> <ul> <li>all 200 size A boots are Left (L), and </li> <li>all 200 size B boots are Right (R), </li> </ul> <p>i.e. all unusable. </p> <p>This leaves 100 R and 100 L boots. These must all be of size C. </p> <p>Hence there are 100 pairs of usable boots.</p> Answer 24: <p>There are three categories- size A shoes, size B shoes, and size C shoes. We are going to place 600 boots into these three categories. Since 300 shoes are of the left foot, by the pigeon-hole principle, at least one category will contain 100 left boots. Now we add in the 300 right boots. </p> <p>Here's the kind of naive way I thought about this problem. The category containing at least 100 left boots has between 100 and 200 left boots- call this category $c_1$. Now there are around 100 cases. </p> <p>If $c_1$ contains 200 left boots, then the remaining 100 are contained in the second two categories which when we fill with right boots will yield 100 pairs of working shoes since neither of these categories will have fewer than 100 spots for right boots.</p> <p>If $c_1$ contains 199 left boots, then the remaining 101 are contained in the second two categories, and we get 1 working pair from $c_1$. If all 101 are contained in one of the remaining two categories, then that category will yield 99 additional working pairs after being filled with right boots. If neither of the remaining two categories contain all 101 left boots, then we get an additional 101 working pairs, since both categories have at least 100 remaining spots for right boots.</p> <p>If $c_1$ contains 198 left boots, we repeat the above argument. 
</p> <p>This kind of argument will work for $c_1$ containing 200 down to 100 left boots.</p> <p>There is probably a more elegant way of proving this, but I find this first glance type of reasoning more intuitive and still reasonably rigorous.</p> Answer 25: <p>If any size category $A,B,C$ has equal numbers of left and right boots, there are the required $100$ pairs of usable boots and we are done.</p> <p>Otherwise every size will have a majority of one foot. Since $101\times 3&gt;300$, these cannot all be the same foot, so without loss of generality we can assume that there are two right-foot majorities and one left-foot majority. Now the number of right boots in the right-foot majority sizes is at most $300$, so there are at least $100$ left boots in those two sizes, which can all be matched to right boots, giving us $100$ usable pairs as required.</p> Answer 26: <p>To give you a feel for the problem, here is a proof by cases. Calling $a_L$ the number of size $A$ left foot boots, $b_R$ the number of size $B$ right foot boots, etc.</p> <p>We first note that the number of pairs of type $A$ is given by the $\min\{a_L,a_R\}$. Now, we have $a_L + a_R = 200$, hence $a_R = 200-a_L$. We can use this to say the number of pairs of size $A$ is given by $\min\{a_L,200-a_L\}$.</p> <p>Now, calling $p$ the number of pairs we see clearly that</p> <p>$p = \min\{a_L,200-a_L\} + \min\{b_L,200-b_L\} + \min\{c_L,200-c_L\}$.</p> <p>Assume that the left-footed shoe is the lesser quantity in all of these terms then we get $p = a_L + b_L + c_L$, but we know there are 300 left-footed shoes, hence $p=300$. Assume that the left-footed shoe is the lesser quantity in two of the terms of $p$. Without loss of generality, assume that this is true for size $A$ and $B$. Then we have that </p> <p>$p = a_L + b_L + 200 - c_L = 300 - c_L + 200 - c_L = 500 - 2c_L$,</p> <p>but $c_L \leq 200$ hence, $p\geq 100$. Assume that the left-footed shoe is the lesser quantity in one of the terms of $p$. 
Without loss of generality, assume that this is true for size $A$. Then we have</p> <p>$p = a_L + 400 - (b_L + c_L) = a_L + 400 - (300 - a_L) = 100 + 2a_L \geq 100$.</p> <p>Lastly, assume that the left-footed shoe is the lesser quantity in none of the terms of $p$. Then</p> <p>$p = 600 - a_L - b_L - c_L = 300$.</p> Answer 27: <p><em>(Alternative solution)</em> </p> <p>Let $\alpha, \beta,\gamma \in \mathbb Z\leq 100 $</p> <p>Number of boots are tabulated as follows:</p> <p>$$\begin{array}&amp;&amp; \hline \bf\text{Left}&amp;\bf\text{Right}&amp;\bf\text{Total}&amp;\bf\text{Usable Pairs}\\ \hline \bf\text{Size A}&amp;100+\alpha &amp;100-\alpha &amp;200&amp;100-\alpha\\ \bf\text{Size B}&amp;100-\beta &amp;100+\beta &amp;200 &amp;100-\beta\\ \bf\text{Size C}&amp;100+\gamma &amp;100-\gamma &amp;200 &amp;100-\gamma\\ \hline \bf\text{Total}&amp;300 &amp;300 &amp;600&amp;300-(\alpha+\beta+\gamma)\\ \hline \end{array}$$</p> <p>For total Left and Right boots to equal $300$ each, $\alpha-\beta+\gamma=0 \Rightarrow \beta=\alpha+\gamma$. </p> <p>Number of usable pairs is given by $$\begin{align} N=300-(\alpha+\beta+\gamma)&amp;=300-2\beta\\ N_{\text{max}}=N\big|_{\beta=0}&amp;=300\\ N_{\text{min}}=N\big|_{\beta=100}&amp;=100\\ \end{align} $$</p> <hr> <p><em>(Original solution below)</em></p> <p>Consider the extreme case where </p> <ul> <li>all 200 size A boots are Left (L), and </li> <li>all 200 size B boots are Right (R), </li> </ul> <p>i.e. all unusable. </p> <p>This leaves 100 R and 100 L boots. These must all be of size C. </p> <p>Hence there are 100 pairs of usable boots.</p> Answer 28: <p>There are three categories- size A shoes, size B shoes, and size C shoes. We are going to place 600 boots into these three categories. Since 300 shoes are of the left foot, by the pigeon-hole principle, at least one category will contain 100 left boots. Now we add in the 300 right boots. </p> <p>Here's the kind of naive way I thought about this problem. 
The category containing at least 100 left boots has between 100 and 200 left boots- call this category $c_1$. Now there are around 100 cases. </p> <p>If $c_1$ contains 200 left boots, then the remaining 100 are contained in the second two categories which when we fill with right boots will yield 100 pairs of working shoes since neither of these categories will have fewer than 100 spots for right boots.</p> <p>If $c_1$ contains 199 left boots, then the remaining 101 are contained in the second two categories, and we get 1 working pair from $c_1$. If all 101 are contained in one of the remaining two categories, then that category will yield 99 additional working pairs after being filled with right boots. If neither of the remaining two categories contain all 101 left boots, then we get an additional 101 working pairs, since both categories have at least 100 remaining spots for right boots.</p> <p>If $c_1$ contains 198 left boots, we repeat the above argument. </p> <p>This kind of argument will work for $c_1$ containing 200 down to 100 left boots.</p> <p>There is probably a more elegant way of proving this, but I find this first glance type of reasoning more intuitive and still reasonably rigorous.</p> Answer 29: <p>If any size category $A,B,C$ has equal numbers of left and right boots, there are the required $100$ pairs of usable boots and we are done.</p> <p>Otherwise every size will have a majority of one foot. Since $101\times 3&gt;300$, these cannot all be the same foot, so without loss of generality we can assume that there are two right-foot majorities and one left-foot majority. Now the number of right boots in the right-foot majority sizes is at most $300$, so there are at least $100$ left boots in those two sizes, which can all be matched to right boots, giving us $100$ usable pairs as required.</p> Answer 30: <p>To give you a feel for the problem, here is a proof by cases. 
Calling $a_L$ the number of size $A$ left foot boots, $b_R$ the number of size $B$ right foot boots, etc.</p> <p>We first note that the number of pairs of type $A$ is given by the $\min\{a_L,a_R\}$. Now, we have $a_L + a_R = 200$, hence $a_R = 200-a_L$. We can use this to say the number of pairs of size $A$ is given by $\min\{a_L,200-a_L\}$.</p> <p>Now, calling $p$ the number of pairs we see clearly that</p> <p>$p = \min\{a_L,200-a_L\} + \min\{b_L,200-b_L\} + \min\{c_L,200-c_L\}$.</p> <p>Assume that the left-footed shoe is the lesser quantity in all of these terms then we get $p = a_L + b_L + c_L$, but we know there are 300 left-footed shoes, hence $p=300$. Assume that the left-footed shoe is the lesser quantity in two of the terms of $p$. Without loss of generality, assume that this is true for size $A$ and $B$. Then we have that </p> <p>$p = a_L + b_L + 200 - c_L = 300 - c_L + 200 - c_L = 500 - 2c_L$,</p> <p>but $c_L \leq 200$ hence, $p\geq 100$. Assume that the left-footed shoe is the lesser quantity in one of the terms of $p$. Without loss of generality, assume that this is true for size $A$. Then we have</p> <p>$p = a_L + 400 - (b_L + c_L) = a_L + 400 - (300 - a_L) = 100 + 2a_L \geq 100$.</p> <p>Lastly, assume that the left-footed shoe is the lesser quantity in none of the terms of $p$. 
Then</p> <p>$p = 600 - a_L - b_L - c_L = 300$.</p> Answer 31: <p><em>(Alternative solution)</em> </p> <p>Let $\alpha, \beta,\gamma \in \mathbb Z\leq 100 $</p> <p>Number of boots are tabulated as follows:</p> <p>$$\begin{array}&amp;&amp; \hline \bf\text{Left}&amp;\bf\text{Right}&amp;\bf\text{Total}&amp;\bf\text{Usable Pairs}\\ \hline \bf\text{Size A}&amp;100+\alpha &amp;100-\alpha &amp;200&amp;100-\alpha\\ \bf\text{Size B}&amp;100-\beta &amp;100+\beta &amp;200 &amp;100-\beta\\ \bf\text{Size C}&amp;100+\gamma &amp;100-\gamma &amp;200 &amp;100-\gamma\\ \hline \bf\text{Total}&amp;300 &amp;300 &amp;600&amp;300-(\alpha+\beta+\gamma)\\ \hline \end{array}$$</p> <p>For total Left and Right boots to equal $300$ each, $\alpha-\beta+\gamma=0 \Rightarrow \beta=\alpha+\gamma$. </p> <p>Number of usable pairs is given by $$\begin{align} N=300-(\alpha+\beta+\gamma)&amp;=300-2\beta\\ N_{\text{max}}=N\big|_{\beta=0}&amp;=300\\ N_{\text{min}}=N\big|_{\beta=100}&amp;=100\\ \end{align} $$</p> <hr> <p><em>(Original solution below)</em></p> <p>Consider the extreme case where </p> <ul> <li>all 200 size A boots are Left (L), and </li> <li>all 200 size B boots are Right (R), </li> </ul> <p>i.e. all unusable. </p> <p>This leaves 100 R and 100 L boots. These must all be of size C. </p> <p>Hence there are 100 pairs of usable boots.</p> Answer 32: <p>There are three categories- size A shoes, size B shoes, and size C shoes. We are going to place 600 boots into these three categories. Since 300 shoes are of the left foot, by the pigeon-hole principle, at least one category will contain 100 left boots. Now we add in the 300 right boots. </p> <p>Here's the kind of naive way I thought about this problem. The category containing at least 100 left boots has between 100 and 200 left boots- call this category $c_1$. Now there are around 100 cases. 
</p> <p>If $c_1$ contains 200 left boots, then the remaining 100 are contained in the second two categories which when we fill with right boots will yield 100 pairs of working shoes since neither of these categories will have fewer than 100 spots for right boots.</p> <p>If $c_1$ contains 199 left boots, then the remaining 101 are contained in the second two categories, and we get 1 working pair from $c_1$. If all 101 are contained in one of the remaining two categories, then that category will yield 99 additional working pairs after being filled with right boots. If neither of the remaining two categories contain all 101 left boots, then we get an additional 101 working pairs, since both categories have at least 100 remaining spots for right boots.</p> <p>If $c_1$ contains 198 left boots, we repeat the above argument. </p> <p>This kind of argument will work for $c_1$ containing 200 down to 100 left boots.</p> <p>There is probably a more elegant way of proving this, but I find this first glance type of reasoning more intuitive and still reasonably rigorous.</p> Answer 33: <p>If any size category $A,B,C$ has equal numbers of left and right boots, there are the required $100$ pairs of usable boots and we are done.</p> <p>Otherwise every size will have a majority of one foot. Since $101\times 3&gt;300$, these cannot all be the same foot, so without loss of generality we can assume that there are two right-foot majorities and one left-foot majority. Now the number of right boots in the right-foot majority sizes is at most $300$, so there are at least $100$ left boots in those two sizes, which can all be matched to right boots, giving us $100$ usable pairs as required.</p> Answer 34: <p>To give you a feel for the problem, here is a proof by cases. Calling $a_L$ the number of size $A$ left foot boots, $b_R$ the number of size $B$ right foot boots, etc.</p> <p>We first note that the number of pairs of type $A$ is given by the $\min\{a_L,a_R\}$. 
Now, we have $a_L + a_R = 200$, hence $a_R = 200-a_L$. We can use this to say the number of pairs of size $A$ is given by $\min\{a_L,200-a_L\}$.</p> <p>Now, calling $p$ the number of pairs we see clearly that</p> <p>$p = \min\{a_L,200-a_L\} + \min\{b_L,200-b_L\} + \min\{c_L,200-c_L\}$.</p> <p>Assume that the left-footed shoe is the lesser quantity in all of these terms then we get $p = a_L + b_L + c_L$, but we know there are 300 left-footed shoes, hence $p=300$. Assume that the left-footed shoe is the lesser quantity in two of the terms of $p$. Without loss of generality, assume that this is true for size $A$ and $B$. Then we have that </p> <p>$p = a_L + b_L + 200 - c_L = 300 - c_L + 200 - c_L = 500 - 2c_L$,</p> <p>but $c_L \leq 200$ hence, $p\geq 100$. Assume that the left-footed shoe is the lesser quantity in one of the terms of $p$. Without loss of generality, assume that this is true for size $A$. Then we have</p> <p>$p = a_L + 400 - (b_L + c_L) = a_L + 400 - (300 - a_L) = 100 + 2a_L \geq 100$.</p> <p>Lastly, assume that the left-footed shoe is the lesser quantity in none of the terms of $p$. Then</p> <p>$p = 600 - a_L - b_L - c_L = 300$.</p> Answer 35: <p><em>(Alternative solution)</em> </p> <p>Let $\alpha, \beta,\gamma \in \mathbb Z\leq 100 $</p> <p>Number of boots are tabulated as follows:</p> <p>$$\begin{array}&amp;&amp; \hline \bf\text{Left}&amp;\bf\text{Right}&amp;\bf\text{Total}&amp;\bf\text{Usable Pairs}\\ \hline \bf\text{Size A}&amp;100+\alpha &amp;100-\alpha &amp;200&amp;100-\alpha\\ \bf\text{Size B}&amp;100-\beta &amp;100+\beta &amp;200 &amp;100-\beta\\ \bf\text{Size C}&amp;100+\gamma &amp;100-\gamma &amp;200 &amp;100-\gamma\\ \hline \bf\text{Total}&amp;300 &amp;300 &amp;600&amp;300-(\alpha+\beta+\gamma)\\ \hline \end{array}$$</p> <p>For total Left and Right boots to equal $300$ each, $\alpha-\beta+\gamma=0 \Rightarrow \beta=\alpha+\gamma$. 
</p> <p>Number of usable pairs is given by $$\begin{align} N=300-(\alpha+\beta+\gamma)&amp;=300-2\beta\\ N_{\text{max}}=N\big|_{\beta=0}&amp;=300\\ N_{\text{min}}=N\big|_{\beta=100}&amp;=100\\ \end{align} $$</p> <hr> <p><em>(Original solution below)</em></p> <p>Consider the extreme case where </p> <ul> <li>all 200 size A boots are Left (L), and </li> <li>all 200 size B boots are Right (R), </li> </ul> <p>i.e. all unusable. </p> <p>This leaves 100 R and 100 L boots. These must all be of size C. </p> <p>Hence there are 100 pairs of usable boots.</p> Answer 36: <p>There are three categories- size A shoes, size B shoes, and size C shoes. We are going to place 600 boots into these three categories. Since 300 shoes are of the left foot, by the pigeon-hole principle, at least one category will contain 100 left boots. Now we add in the 300 right boots. </p> <p>Here's the kind of naive way I thought about this problem. The category containing at least 100 left boots has between 100 and 200 left boots- call this category $c_1$. Now there are around 100 cases. </p> <p>If $c_1$ contains 200 left boots, then the remaining 100 are contained in the second two categories which when we fill with right boots will yield 100 pairs of working shoes since neither of these categories will have fewer than 100 spots for right boots.</p> <p>If $c_1$ contains 199 left boots, then the remaining 101 are contained in the second two categories, and we get 1 working pair from $c_1$. If all 101 are contained in one of the remaining two categories, then that category will yield 99 additional working pairs after being filled with right boots. If neither of the remaining two categories contain all 101 left boots, then we get an additional 101 working pairs, since both categories have at least 100 remaining spots for right boots.</p> <p>If $c_1$ contains 198 left boots, we repeat the above argument. 
</p> <p>This kind of argument will work for $c_1$ containing 200 down to 100 left boots.</p> <p>There is probably a more elegant way of proving this, but I find this first glance type of reasoning more intuitive and still reasonably rigorous.</p> Answer 37: <p>If any size category $A,B,C$ has equal numbers of left and right boots, there are the required $100$ pairs of usable boots and we are done.</p> <p>Otherwise every size will have a majority of one foot. Since $101\times 3&gt;300$, these cannot all be the same foot, so without loss of generality we can assume that there are two right-foot majorities and one left-foot majority. Now the number of right boots in the right-foot majority sizes is at most $300$, so there are at least $100$ left boots in those two sizes, which can all be matched to right boots, giving us $100$ usable pairs as required.</p> Answer 38: <p>To give you a feel for the problem, here is a proof by cases. Calling $a_L$ the number of size $A$ left foot boots, $b_R$ the number of size $B$ right foot boots, etc.</p> <p>We first note that the number of pairs of type $A$ is given by the $\min\{a_L,a_R\}$. Now, we have $a_L + a_R = 200$, hence $a_R = 200-a_L$. We can use this to say the number of pairs of size $A$ is given by $\min\{a_L,200-a_L\}$.</p> <p>Now, calling $p$ the number of pairs we see clearly that</p> <p>$p = \min\{a_L,200-a_L\} + \min\{b_L,200-b_L\} + \min\{c_L,200-c_L\}$.</p> <p>Assume that the left-footed shoe is the lesser quantity in all of these terms then we get $p = a_L + b_L + c_L$, but we know there are 300 left-footed shoes, hence $p=300$. Assume that the left-footed shoe is the lesser quantity in two of the terms of $p$. Without loss of generality, assume that this is true for size $A$ and $B$. Then we have that </p> <p>$p = a_L + b_L + 200 - c_L = 300 - c_L + 200 - c_L = 500 - 2c_L$,</p> <p>but $c_L \leq 200$ hence, $p\geq 100$. Assume that the left-footed shoe is the lesser quantity in one of the terms of $p$. 
Without loss of generality, assume that this is true for size $A$. Then we have</p> <p>$p = a_L + 400 - (b_L + c_L) = a_L + 400 - (300 - a_L) = 100 + 2a_L \geq 100$.</p> <p>Lastly, assume that the left-footed shoe is the lesser quantity in none of the terms of $p$. Then</p> <p>$p = 600 - a_L - b_L - c_L = 300$.</p> Answer 39: <p><em>(Alternative solution)</em> </p> <p>Let $\alpha, \beta,\gamma \in \mathbb Z\leq 100 $</p> <p>Number of boots are tabulated as follows:</p> <p>$$\begin{array}&amp;&amp; \hline \bf\text{Left}&amp;\bf\text{Right}&amp;\bf\text{Total}&amp;\bf\text{Usable Pairs}\\ \hline \bf\text{Size A}&amp;100+\alpha &amp;100-\alpha &amp;200&amp;100-\alpha\\ \bf\text{Size B}&amp;100-\beta &amp;100+\beta &amp;200 &amp;100-\beta\\ \bf\text{Size C}&amp;100+\gamma &amp;100-\gamma &amp;200 &amp;100-\gamma\\ \hline \bf\text{Total}&amp;300 &amp;300 &amp;600&amp;300-(\alpha+\beta+\gamma)\\ \hline \end{array}$$</p> <p>For total Left and Right boots to equal $300$ each, $\alpha-\beta+\gamma=0 \Rightarrow \beta=\alpha+\gamma$. </p> <p>Number of usable pairs is given by $$\begin{align} N=300-(\alpha+\beta+\gamma)&amp;=300-2\beta\\ N_{\text{max}}=N\big|_{\beta=0}&amp;=300\\ N_{\text{min}}=N\big|_{\beta=100}&amp;=100\\ \end{align} $$</p> <hr> <p><em>(Original solution below)</em></p> <p>Consider the extreme case where </p> <ul> <li>all 200 size A boots are Left (L), and </li> <li>all 200 size B boots are Right (R), </li> </ul> <p>i.e. all unusable. </p> <p>This leaves 100 R and 100 L boots. These must all be of size C. </p> <p>Hence there are 100 pairs of usable boots.</p> Answer 40: <p>There are three categories- size A shoes, size B shoes, and size C shoes. We are going to place 600 boots into these three categories. Since 300 shoes are of the left foot, by the pigeon-hole principle, at least one category will contain 100 left boots. Now we add in the 300 right boots. </p> <p>Here's the kind of naive way I thought about this problem. 
The category containing at least 100 left boots has between 100 and 200 left boots- call this category $c_1$. Now there are around 100 cases. </p> <p>If $c_1$ contains 200 left boots, then the remaining 100 are contained in the second two categories which when we fill with right boots will yield 100 pairs of working shoes since neither of these categories will have fewer than 100 spots for right boots.</p> <p>If $c_1$ contains 199 left boots, then the remaining 101 are contained in the second two categories, and we get 1 working pair from $c_1$. If all 101 are contained in one of the remaining two categories, then that category will yield 99 additional working pairs after being filled with right boots. If neither of the remaining two categories contain all 101 left boots, then we get an additional 101 working pairs, since both categories have at least 100 remaining spots for right boots.</p> <p>If $c_1$ contains 198 left boots, we repeat the above argument. </p> <p>This kind of argument will work for $c_1$ containing 200 down to 100 left boots.</p> <p>There is probably a more elegant way of proving this, but I find this first glance type of reasoning more intuitive and still reasonably rigorous.</p> Answer 41: <p>If any size category $A,B,C$ has equal numbers of left and right boots, there are the required $100$ pairs of usable boots and we are done.</p> <p>Otherwise every size will have a majority of one foot. Since $101\times 3&gt;300$, these cannot all be the same foot, so without loss of generality we can assume that there are two right-foot majorities and one left-foot majority. Now the number of right boots in the right-foot majority sizes is at most $300$, so there are at least $100$ left boots in those two sizes, which can all be matched to right boots, giving us $100$ usable pairs as required.</p> Answer 42: <p>To give you a feel for the problem, here is a proof by cases. 
Calling $a_L$ the number of size $A$ left foot boots, $b_R$ the number of size $B$ right foot boots, etc.</p> <p>We first note that the number of pairs of type $A$ is given by the $\min\{a_L,a_R\}$. Now, we have $a_L + a_R = 200$, hence $a_R = 200-a_L$. We can use this to say the number of pairs of size $A$ is given by $\min\{a_L,200-a_L\}$.</p> <p>Now, calling $p$ the number of pairs we see clearly that</p> <p>$p = \min\{a_L,200-a_L\} + \min\{b_L,200-b_L\} + \min\{c_L,200-c_L\}$.</p> <p>Assume that the left-footed shoe is the lesser quantity in all of these terms then we get $p = a_L + b_L + c_L$, but we know there are 300 left-footed shoes, hence $p=300$. Assume that the left-footed shoe is the lesser quantity in two of the terms of $p$. Without loss of generality, assume that this is true for size $A$ and $B$. Then we have that </p> <p>$p = a_L + b_L + 200 - c_L = 300 - c_L + 200 - c_L = 500 - 2c_L$,</p> <p>but $c_L \leq 200$ hence, $p\geq 100$. Assume that the left-footed shoe is the lesser quantity in one of the terms of $p$. Without loss of generality, assume that this is true for size $A$. Then we have</p> <p>$p = a_L + 400 - (b_L + c_L) = a_L + 400 - (300 - a_L) = 100 + 2a_L \geq 100$.</p> <p>Lastly, assume that the left-footed shoe is the lesser quantity in none of the terms of $p$. 
Then</p> <p>$p = 600 - a_L - b_L - c_L = 300$.</p> Answer 43: <p><em>(Alternative solution)</em> </p> <p>Let $\alpha, \beta,\gamma \in \mathbb Z\leq 100 $</p> <p>Number of boots are tabulated as follows:</p> <p>$$\begin{array}&amp;&amp; \hline \bf\text{Left}&amp;\bf\text{Right}&amp;\bf\text{Total}&amp;\bf\text{Usable Pairs}\\ \hline \bf\text{Size A}&amp;100+\alpha &amp;100-\alpha &amp;200&amp;100-\alpha\\ \bf\text{Size B}&amp;100-\beta &amp;100+\beta &amp;200 &amp;100-\beta\\ \bf\text{Size C}&amp;100+\gamma &amp;100-\gamma &amp;200 &amp;100-\gamma\\ \hline \bf\text{Total}&amp;300 &amp;300 &amp;600&amp;300-(\alpha+\beta+\gamma)\\ \hline \end{array}$$</p> <p>For total Left and Right boots to equal $300$ each, $\alpha-\beta+\gamma=0 \Rightarrow \beta=\alpha+\gamma$. </p> <p>Number of usable pairs is given by $$\begin{align} N=300-(\alpha+\beta+\gamma)&amp;=300-2\beta\\ N_{\text{max}}=N\big|_{\beta=0}&amp;=300\\ N_{\text{min}}=N\big|_{\beta=100}&amp;=100\\ \end{align} $$</p> <hr> <p><em>(Original solution below)</em></p> <p>Consider the extreme case where </p> <ul> <li>all 200 size A boots are Left (L), and </li> <li>all 200 size B boots are Right (R), </li> </ul> <p>i.e. all unusable. </p> <p>This leaves 100 R and 100 L boots. These must all be of size C. </p> <p>Hence there are 100 pairs of usable boots.</p> Answer 44: <p>There are three categories- size A shoes, size B shoes, and size C shoes. We are going to place 600 boots into these three categories. Since 300 shoes are of the left foot, by the pigeon-hole principle, at least one category will contain 100 left boots. Now we add in the 300 right boots. </p> <p>Here's the kind of naive way I thought about this problem. The category containing at least 100 left boots has between 100 and 200 left boots- call this category $c_1$. Now there are around 100 cases. 
</p> <p>If $c_1$ contains 200 left boots, then the remaining 100 are contained in the second two categories which when we fill with right boots will yield 100 pairs of working shoes since neither of these categories will have fewer than 100 spots for right boots.</p> <p>If $c_1$ contains 199 left boots, then the remaining 101 are contained in the second two categories, and we get 1 working pair from $c_1$. If all 101 are contained in one of the remaining two categories, then that category will yield 99 additional working pairs after being filled with right boots. If neither of the remaining two categories contain all 101 left boots, then we get an additional 101 working pairs, since both categories have at least 100 remaining spots for right boots.</p> <p>If $c_1$ contains 198 left boots, we repeat the above argument. </p> <p>This kind of argument will work for $c_1$ containing 200 down to 100 left boots.</p> <p>There is probably a more elegant way of proving this, but I find this first glance type of reasoning more intuitive and still reasonably rigorous.</p> Answer 45: <p>If any size category $A,B,C$ has equal numbers of left and right boots, there are the required $100$ pairs of usable boots and we are done.</p> <p>Otherwise every size will have a majority of one foot. Since $101\times 3&gt;300$, these cannot all be the same foot, so without loss of generality we can assume that there are two right-foot majorities and one left-foot majority. Now the number of right boots in the right-foot majority sizes is at most $300$, so there are at least $100$ left boots in those two sizes, which can all be matched to right boots, giving us $100$ usable pairs as required.</p> Answer 46: <p>To give you a feel for the problem, here is a proof by cases. Calling $a_L$ the number of size $A$ left foot boots, $b_R$ the number of size $B$ right foot boots, etc.</p> <p>We first note that the number of pairs of type $A$ is given by the $\min\{a_L,a_R\}$. 
Now, we have $a_L + a_R = 200$, hence $a_R = 200-a_L$. We can use this to say the number of pairs of size $A$ is given by $\min\{a_L,200-a_L\}$.</p> <p>Now, calling $p$ the number of pairs we see clearly that</p> <p>$p = \min\{a_L,200-a_L\} + \min\{b_L,200-b_L\} + \min\{c_L,200-c_L\}$.</p> <p>Assume that the left-footed shoe is the lesser quantity in all of these terms then we get $p = a_L + b_L + c_L$, but we know there are 300 left-footed shoes, hence $p=300$. Assume that the left-footed shoe is the lesser quantity in two of the terms of $p$. Without loss of generality, assume that this is true for size $A$ and $B$. Then we have that </p> <p>$p = a_L + b_L + 200 - c_L = 300 - c_L + 200 - c_L = 500 - 2c_L$,</p> <p>but $c_L \leq 200$ hence, $p\geq 100$. Assume that the left-footed shoe is the lesser quantity in one of the terms of $p$. Without loss of generality, assume that this is true for size $A$. Then we have</p> <p>$p = a_L + 400 - (b_L + c_L) = a_L + 400 - (300 - a_L) = 100 + 2a_L \geq 100$.</p> <p>Lastly, assume that the left-footed shoe is the lesser quantity in none of the terms of $p$. Then</p> <p>$p = 600 - a_L - b_L - c_L = 300$.</p> Answer 47: <p><em>(Alternative solution)</em> </p> <p>Let $\alpha, \beta,\gamma \in \mathbb Z\leq 100 $</p> <p>Number of boots are tabulated as follows:</p> <p>$$\begin{array}&amp;&amp; \hline \bf\text{Left}&amp;\bf\text{Right}&amp;\bf\text{Total}&amp;\bf\text{Usable Pairs}\\ \hline \bf\text{Size A}&amp;100+\alpha &amp;100-\alpha &amp;200&amp;100-\alpha\\ \bf\text{Size B}&amp;100-\beta &amp;100+\beta &amp;200 &amp;100-\beta\\ \bf\text{Size C}&amp;100+\gamma &amp;100-\gamma &amp;200 &amp;100-\gamma\\ \hline \bf\text{Total}&amp;300 &amp;300 &amp;600&amp;300-(\alpha+\beta+\gamma)\\ \hline \end{array}$$</p> <p>For total Left and Right boots to equal $300$ each, $\alpha-\beta+\gamma=0 \Rightarrow \beta=\alpha+\gamma$. 
</p> <p>Number of usable pairs is given by $$\begin{align} N=300-(\alpha+\beta+\gamma)&amp;=300-2\beta\\ N_{\text{max}}=N\big|_{\beta=0}&amp;=300\\ N_{\text{min}}=N\big|_{\beta=100}&amp;=100\\ \end{align} $$</p> <hr> <p><em>(Original solution below)</em></p> <p>Consider the extreme case where </p> <ul> <li>all 200 size A boots are Left (L), and </li> <li>all 200 size B boots are Right (R), </li> </ul> <p>i.e. all unusable. </p> <p>This leaves 100 R and 100 L boots. These must all be of size C. </p> <p>Hence there are 100 pairs of usable boots.</p>
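The case analyses above all bound the number of usable pairs between 100 and 300. As a sanity check (a verification sketch, not taken from any of the answers), a short Python brute force enumerates every way the 300 left boots can be split across the three sizes (200 boots per size) and confirms those bounds:

```python
# Exhaustively verify the boot-pairing claim: for every split of the
# 300 left boots among three sizes of 200 boots each, the number of
# matched pairs lies between 100 and 300.
def usable_pairs(a_left: int, b_left: int, c_left: int) -> int:
    # Within one size, pairs = min(#left, #right), and #right = 200 - #left.
    return sum(min(x, 200 - x) for x in (a_left, b_left, c_left))

results = [
    usable_pairs(a, b, 300 - a - b)
    for a in range(201)
    for b in range(201)
    if 0 <= 300 - a - b <= 200  # size C must also hold 0..200 left boots
]

print(min(results), max(results))  # prints: 100 300
```

The minimum 100 is attained, e.g., at the extreme split (200, 100, 0), matching the "extreme case" argument in the alternative solution.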
Prove $n^5+n^4+1$ is not a prime
2,104,682
8
https://math.stackexchange.com/questions/2104682/prove-n5n41-is-not-a-prime
<p>I have to prove that for any $n&gt;1$, the number $n^5+n^4+1$ is not a prime. With induction I have been able to show that it is true for the base case $n=2$ (the smallest case, since $n&gt;1$). However, I cannot break down the given expression, involving fifth and fourth powers, into simpler terms. Any help?</p>
[ { "answer_id": 2104688, "score": 20, "text": "<p>$n^5 + n^4 + 1 = n^5 + n^4 + n^3 – n^3 – n^2 − n + n^2 + n + 1$</p>\n\n<p>$\implies$$ n^3(n^2 + n + 1) − n(n^2 + n + 1) + (n^2 + n + 1)$</p>\n\n<p>=$ (n^2 + n + 1)(n^3 − n + 1)$</p>\n\n<p>Hence, for $n&gt;1$, $n^5 + n^4 + 1$ is not a prime number.</p>\n" }, { "answer_id": 2104686, "score": 7, "text": "<p>$n^5+n^4+1=(n^3-n+1)(n^2+n+1)$</p>\n" }, { "answer_id": 2104695, "score": 5, "text": "<p>$$n^5+n^4+1=n^5-n^2+n^4+n^2+1=n^2(n-1)(n^2+n+1)+(n^2+n+1)(n^2-n+1)=$$\n$$=(n^2+n+1)(n^3-n^2+n^2-n+1)=(n^2+n+1)(n^3-n+1)$$\nI think, the best way is the following:\n$$n^5+n^4+1=n^5+n^4+n^3-(n^3-1)=(n^2+n+1)(n^3-n+1)$$</p>\n" } ]
Question: Prove $n^5+n^4+1$ is not a prime Answer 1: <p>$n^5 + n^4 + 1 = n^5 + n^4 + n^3 – n^3 – n^2 − n + n^2 + n + 1$</p> <p>$\implies$$ n^3(n^2 + n + 1) − n(n^2 + n + 1) + (n^2 + n + 1)$</p> <p>=$ (n^2 + n + 1)(n^3 − n + 1)$</p> <p>Hence, for $n&gt;1$, $n^5 + n^4 + 1$ is not a prime number.</p> Answer 2: <p>$n^5+n^4+1=(n^3-n+1)(n^2+n+1)$</p> Answer 3: <p>$$n^5+n^4+1=n^5-n^2+n^4+n^2+1=n^2(n-1)(n^2+n+1)+(n^2+n+1)(n^2-n+1)=$$ $$=(n^2+n+1)(n^3-n^2+n^2-n+1)=(n^2+n+1)(n^3-n+1)$$ I think, the best way is the following: $$n^5+n^4+1=n^5+n^4+n^3-(n^3-1)=(n^2+n+1)(n^3-n+1)$$</p>
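The factorization that all of the answers above rely on, $n^5+n^4+1=(n^2+n+1)(n^3-n+1)$, can be spot-checked numerically. A minimal sketch in Python (the `factors` helper is my own naming, not from the thread):

```python
# Spot-check the identity n^5 + n^4 + 1 = (n^2 + n + 1)(n^3 - n + 1)
# over a range of integers, then confirm both factors exceed 1 for n > 1,
# which is what makes the value composite there.

def factors(n: int) -> tuple:
    """Return the two claimed factors of n**5 + n**4 + 1."""
    return (n**2 + n + 1, n**3 - n + 1)

# The polynomial identity holds for every integer (checked here on a sample).
for n in range(-50, 51):
    a, b = factors(n)
    assert a * b == n**5 + n**4 + 1

# For n > 1 both factors are strictly greater than 1, so the product is composite.
for n in range(2, 51):
    a, b = factors(n)
    assert a > 1 and b > 1
```

Since $n^2+n+1 \ge 7$ and $n^3-n+1 \ge 7$ for $n \ge 2$, the product is composite for every integer $n>1$, matching the conclusion of Answer 1.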
Christoffel symbols and metric
2,105,182
8
https://math.stackexchange.com/questions/2105182/christoffel-symbols-and-metric
<p>For the Christoffel symbols and the metric, we have the following identity: </p> <blockquote> <p>$$\large \frac{1}{2}g^{\alpha\gamma}(g_{\alpha\beta,\mu}+g_{\alpha\mu,\beta}- g_{\beta\mu,\alpha})={\mathrm\Gamma^{\gamma}}_{\beta\mu}. $$</p> </blockquote> <p>Now even though I've seen the derivation, I still can't understand the motivation behind the steps taken in all the index juggling. Can anyone please give a motivated proof of the identity? I mean one that mentions why we are doing each step, not just what we are doing. If the identity hadn't been discovered yet and you had to find it, what would go through your head to come up with this useful result, and why would you write down the following to arrive at it, instead of just using trial and error?</p> <blockquote> <p>$$\large \begin{aligned} g_{\alpha\beta,\mu}&amp;={\mathrm\Gamma^{\nu}}_{\alpha\mu}g_{\nu\beta}+ {\mathrm\Gamma^{\nu}}_{\beta\mu}g_{\alpha\nu},\\ g_{\alpha\mu,\beta}&amp;={\mathrm\Gamma^{\nu}}_{\alpha\beta}g_{\nu\mu}+ {\mathrm\Gamma^{\nu}}_{\mu\beta}g_{\alpha\nu},\\ -g_{\beta\mu,\alpha}&amp;=-{\mathrm\Gamma^{\nu}}_{\beta\alpha}g_{\nu\mu}- {\mathrm\Gamma^{\nu}}_{\mu\alpha}g_{\beta\nu}. \end{aligned} $$</p> </blockquote>
[ { "answer_id": 2105286, "score": 12, "text": "<p>I'm not exactly sure which \"derivation\" you're referring to, but I've always seen Christoffel symbols introduced in the following context:</p>\n\n<p>Ultimately the motivation is to define the notion of parallel transport, which means when we take a derivative of a vector field, we want our derivative to still be tangent to our manifold. So let's suppose for instance we have a manifold <span class=\"math-container\">$M$</span>, and we have vector fields <span class=\"math-container\">$X, Y \\in \\mathcal{X}(M)$</span> defined locally in <span class=\"math-container\">$M$</span>. We want an affine connection <span class=\"math-container\">$\\nabla:\\mathcal{X}(M) \\times \\mathcal{X}(M) \\to \\mathcal{X}(M)$</span> to act as a derivative that remains tangent to the manifold. Let <span class=\"math-container\">$\\frac{\\partial}{\\partial e^1}, \\ldots, \\frac{\\partial }{\\partial e^n}$</span> be a local basis for the tangent spaces on <span class=\"math-container\">$M$</span> (defined in the same neighborhood as our vector fields), such that we can express our vector fields locally by functions <span class=\"math-container\">$x^i, y^j: M \\to \\mathbb{R}$</span>:</p>\n\n<p><span class=\"math-container\">$$\nX \\;\\; =\\;\\; x^i \\frac{\\partial}{\\partial e^i} \\;\\;\\; \\;\\;\\; Y \\;\\; =\\;\\; y^j \\frac{\\partial}{\\partial e^j}.\n$$</span></p>\n\n<p>Then our covariant derivative is given locally by </p>\n\n<p><span class=\"math-container\">$$\n\\nabla_XY \\;\\; =\\;\\; \\nabla_{x^i\\partial_i} (y^j \\partial_j) \\;\\; =\\;\\; x^i \\nabla_{\\partial_i} (y^j\\partial_j) \\;\\; =\\;\\; x^i \\frac{\\partial y^j}{\\partial e^i} \\frac{\\partial }{\\partial e^j} + x^iy^j \\nabla_{\\partial_i}\\partial_j.\n$$</span></p>\n\n<p>The first term is clearly in the tangent space, but we want to define <span class=\"math-container\">$\\nabla_{\\partial_i}\\partial_j$</span> to lie in the tangent space. 
We therefore <em>define</em> correction functions <span class=\"math-container\">$\\Gamma_{ij}^k: M\\to \\mathbb{R}$</span> known as the <strong>Christoffel symbols</strong> that satisfy</p>\n\n<p><span class=\"math-container\">$$\n\\nabla_{\\partial_i} \\partial_j \\;\\; \\equiv \\;\\; \\Gamma_{ij}^k \\frac{\\partial}{\\partial e^k}.\n$$</span></p>\n\n<p>Now by manipulating the indices around a little bit we have that</p>\n\n<p><span class=\"math-container\">$$\n\\nabla_XY \\;\\; =\\;\\; \\left (x^i \\frac{\\partial y^k}{\\partial e^i} + x^i y^j \\Gamma_{ij}^k \\right ) \\frac{\\partial}{\\partial e^k}.\n$$</span></p>\n\n<p>The above analysis is valid for <em>any</em> affine connection on a manifold, but on a Riemannian manifold we require the connection to be symmetric and compatible with the metric. The <strong>compatibility</strong> condition is that</p>\n\n<p><span class=\"math-container\">$$\nX\\langle Y,Z\\rangle \\;\\; =\\;\\; \\langle \\nabla_XY, Z\\rangle + \\langle Y, \\nabla_XZ\\rangle\n$$</span></p>\n\n<p>while the <strong>symmetry</strong> condition is that </p>\n\n<p><span class=\"math-container\">$$\n\\nabla_XY - \\nabla_YX \\;\\; =\\;\\; [X,Y]\n$$</span></p>\n\n<p>which for the basis vectors reduces to <span class=\"math-container\">$\\nabla_{\\partial_i}\\partial_j - \\nabla_{\\partial_j}\\partial_i = 0$</span>. 
From these equations we can in general prove that </p>\n\n<p><span class=\"math-container\">$$\n\\langle Z, \\nabla_YX\\rangle \\;\\; =\\;\\; \\frac{1}{2} \\left ( X\\langle Y,Z\\rangle + Y\\langle Z, X\\rangle - Z \\langle X, Y\\rangle - \\langle [X,Z], Y\\rangle - \\langle [Y,Z], X\\rangle - \\langle [X,Y], Z \\rangle \\right ).\n$$</span></p>\n\n<p>If we reduce this down to the basis vectors we obtain</p>\n\n<p><span class=\"math-container\">$$\n\\langle \\partial_s, \\nabla_{\\partial_i}\\partial_j\\rangle \\;\\; =\\;\\; \\Gamma_{ij}^k g_{sk} \\;\\; =\\;\\; \\frac{1}{2} \\left ( \\partial_j g_{si} + \\partial_i g_{sj} - \\partial_s g_{ij} \\right ).\n$$</span></p>\n\n<p>Because we have that <span class=\"math-container\">$g^{sm}$</span> is the inverse of <span class=\"math-container\">$g_{sk}$</span> we have that <span class=\"math-container\">$g_{sk} g^{sm} = \\delta_k^m$</span> and thus we obtain</p>\n\n<p><span class=\"math-container\">$$\n\\Gamma_{ij}^m \\;\\; =\\;\\; \\frac{1}{2} g^{ms} \\left ( \\partial_j g_{si} + \\partial_i g_{sj} - \\partial_s g_{ij} \\right ).\n$$</span></p>\n\n<p>This is the intuitive explanation that I like most regarding Christoffel symbols: they are correction functions to make sure that an affine connection keeps the derivative of a vector field within the tangent space of the manifold. On a Riemannian manifold they have the given form above. To get the other equations that you have listed in your question, simply permute the indices to get different copies of the fundamental equation for <span class=\"math-container\">$\\Gamma_{ij}^s$</span>, and then subtract/add them together to get the forms you're interested in. </p>\n" }, { "answer_id": 2128939, "score": 0, "text": "<p>The intuition can be explained (without any formula) in the case of a subvariety in $\\bf R^n$. In that case the Levi-Civita derivative is obtained by composing the usual derivative with the projection on the tangent space. 
If you are in a chart $\\phi (x_1,...,x_n)$, where $\\phi$ is an $N$-vector, the tangent space is spanned by the (non-orthogonal) family of vectors $X_i={\\partial \\phi \\over \\partial x_i}$. We have $g_{i,j}= &lt;X_i,X_j&gt;$, and writing the projection onto this space is equivalent to inverting the matrix $g_{i,j}$. In other words $\\Gamma _{i,j}^k$ is the $k-$th coordinate in the base $X_i={\\partial \\phi \\over \\partial x_i}$ of the $N$ vector ${\\partial ^2 \\phi \\over \\partial x_i \\partial x_j}$. The two-dimensional case is enough to understand how the computation goes. For instance, if we are lucky and the base $X_i$ is orthonormal, this Christoffel symbol is just $&lt;{\\partial ^2 \\phi \\over \\partial x_i \\partial x_j},{\\partial \\phi \\over \\partial x_k} &gt;$, and your formula is easy to check. The case of surfaces in $R^3$ already contains all the difficulties (see Do Carmo).</p>\n" }, { "answer_id": 2105286, "score": 12, "text": "<p>I'm not exactly sure which \"derivation\" you're referring to, but I've always seen Christoffel symbols introduced in the following context:</p>\n\n<p>Ultimately the motivation is to define the notion of parallel transport, which means when we take a derivative of a vector field, we want our derivative to still be tangent to our manifold. 
Let <span class=\"math-container\">$\\frac{\\partial}{\\partial e^1}, \\ldots, \\frac{\\partial }{\\partial e^n}$</span> be a local basis for the tangent spaces on <span class=\"math-container\">$M$</span> (defined in the same neighborhood as our vector fields), such that we can express our vector fields locally by functions <span class=\"math-container\">$x^i, y^j: M \\to \\mathbb{R}$</span>:</p>\n\n<p><span class=\"math-container\">$$\nX \\;\\; =\\;\\; x^i \\frac{\\partial}{\\partial e^i} \\;\\;\\; \\;\\;\\; Y \\;\\; =\\;\\; y^j \\frac{\\partial}{\\partial e^j}.\n$$</span></p>\n\n<p>Then our covariant derivative is given locally by </p>\n\n<p><span class=\"math-container\">$$\n\\nabla_XY \\;\\; =\\;\\; \\nabla_{x^i\\partial_i} (y^j \\partial_j) \\;\\; =\\;\\; x^i \\nabla_{\\partial_i} (y^j\\partial_j) \\;\\; =\\;\\; x^i \\frac{\\partial y^j}{\\partial e^i} \\frac{\\partial }{\\partial e^j} + x^iy^j \\nabla_{\\partial_i}\\partial_j.\n$$</span></p>\n\n<p>The first term is clearly in the tangent space, but we want to define <span class=\"math-container\">$\\nabla_{\\partial_i}\\partial_j$</span> to lie in the tangent space. We therefore <em>define</em> correction functions <span class=\"math-container\">$\\Gamma_{ij}^k: M\\to \\mathbb{R}$</span> known as the <strong>Christoffel symbols</strong> that satisfy</p>\n\n<p><span class=\"math-container\">$$\n\\nabla_{\\partial_i} \\partial_j \\;\\; \\equiv \\;\\; \\Gamma_{ij}^k \\frac{\\partial}{\\partial e^k}.\n$$</span></p>\n\n<p>Now by manipulating the indices around a little bit we have that</p>\n\n<p><span class=\"math-container\">$$\n\\nabla_XY \\;\\; =\\;\\; \\left (x^i \\frac{\\partial y^k}{\\partial e^i} + x^i y^j \\Gamma_{ij}^k \\right ) \\frac{\\partial}{\\partial e^k}.\n$$</span></p>\n\n<p>The above analysis is valid for <em>any</em> affine connection on a manifold, but on a Riemannian manifold we require the connection to be symmetric and compatible with the metric. 
The <strong>compatibility</strong> condition is that</p>\n\n<p><span class=\"math-container\">$$\nX\\langle Y,Z\\rangle \\;\\; =\\;\\; \\langle \\nabla_XY, Z\\rangle + \\langle Y, \\nabla_XZ\\rangle\n$$</span></p>\n\n<p>while the <strong>symmetry</strong> condition is that </p>\n\n<p><span class=\"math-container\">$$\n\\nabla_XY - \\nabla_YX \\;\\; =\\;\\; [X,Y]\n$$</span></p>\n\n<p>which for the basis vectors reduces to <span class=\"math-container\">$\\nabla_{\\partial_i}\\partial_j - \\nabla_{\\partial_j}\\partial_i = 0$</span>. From these equations we can in general prove that </p>\n\n<p><span class=\"math-container\">$$\n\\langle Z, \\nabla_YX\\rangle \\;\\; =\\;\\; \\frac{1}{2} \\left ( X\\langle Y,Z\\rangle + Y\\langle Z, X\\rangle - Z \\langle X, Y\\rangle - \\langle [X,Z], Y\\rangle - \\langle [Y,Z], X\\rangle - \\langle [X,Y], Z \\rangle \\right ).\n$$</span></p>\n\n<p>If we reduce this down to the basis vectors we obtain</p>\n\n<p><span class=\"math-container\">$$\n\\langle \\partial_s, \\nabla_{\\partial_i}\\partial_j\\rangle \\;\\; =\\;\\; \\Gamma_{ij}^k g_{sk} \\;\\; =\\;\\; \\frac{1}{2} \\left ( \\partial_j g_{si} + \\partial_i g_{sj} - \\partial_s g_{ij} \\right ).\n$$</span></p>\n\n<p>Because <span class=\"math-container\">$g^{sm}$</span> is the inverse of <span class=\"math-container\">$g_{sk}$</span>, we have <span class=\"math-container\">$g_{sk} g^{sm} = \\delta_k^m$</span>, and thus we obtain</p>\n\n<p><span class=\"math-container\">$$\n\\Gamma_{ij}^m \\;\\; =\\;\\; \\frac{1}{2} g^{ms} \\left ( \\partial_j g_{si} + \\partial_i g_{sj} - \\partial_s g_{ij} \\right ).\n$$</span></p>\n\n<p>This is the intuitive explanation that I like most regarding Christoffel symbols: they are correction functions to make sure that an affine connection keeps the derivative of a vector field within the tangent space of the manifold. On a Riemannian manifold they have the given form above. 
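As a quick sanity check of the final formula, here is a short numerical sketch (my own illustration, not part of the original answer; the polar-coordinate example and all function names are my assumptions): in polar coordinates on the plane, with metric components $g_{rr}=1$, $g_{\theta\theta}=r^2$, $g_{r\theta}=0$, the formula should reproduce the familiar values $\Gamma_{\theta\theta}^r=-r$ and $\Gamma_{r\theta}^{\theta}=1/r$.

```python
# Numerical check of Gamma^m_{ij} = (1/2) g^{ms} (d_j g_{si} + d_i g_{sj} - d_s g_{ij})
# for polar coordinates on the plane, where g = diag(1, r^2). Illustration only.

def metric(q):
    """Metric components g_{ij} at the point q = (r, theta)."""
    r, theta = q
    return [[1.0, 0.0], [0.0, r * r]]

def inverse_metric(q):
    """Inverse metric g^{ij}; diagonal here, so inversion is entrywise."""
    r, theta = q
    return [[1.0, 0.0], [0.0, 1.0 / (r * r)]]

def d_metric(q, s, h=1e-6):
    """Central finite difference of g_{ij} with respect to coordinate s."""
    qp, qm = list(q), list(q)
    qp[s] += h
    qm[s] -= h
    gp, gm = metric(qp), metric(qm)
    return [[(gp[i][j] - gm[i][j]) / (2 * h) for j in range(2)] for i in range(2)]

def christoffel(q):
    """Gamma[m][i][j] via the closed-form expression derived above."""
    ginv = inverse_metric(q)
    dg = [d_metric(q, s) for s in range(2)]  # dg[a][s][i] = d_a g_{si}
    gamma = [[[0.0] * 2 for _ in range(2)] for _ in range(2)]
    for m in range(2):
        for i in range(2):
            for j in range(2):
                gamma[m][i][j] = 0.5 * sum(
                    ginv[m][s] * (dg[j][s][i] + dg[i][s][j] - dg[s][i][j])
                    for s in range(2))
    return gamma

q = (2.0, 0.7)      # evaluate at r = 2
G = christoffel(q)
print(G[0][1][1])   # Gamma^r_{theta theta}: analytic value -r = -2
print(G[1][0][1])   # Gamma^theta_{r theta}: analytic value 1/r = 0.5
```

A symbolic computation would give the exact values; the finite-difference derivative keeps the sketch dependency-free and agrees to numerical precision.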
To get the other equations that you have listed in your question, simply permute the indices to get different copies of the fundamental equation for <span class=\"math-container\">$\\Gamma_{ij}^s$</span>, and then subtract/add them together to get the forms you're interested in. </p>\n" }, { "answer_id": 2128939, "score": 0, "text": "<p>The intuition can be explained (without any formula) in the case of a subvariety in $\\bf R^n$. In the case the Levi-Cita derivative is obtained by composing the usual derivative with the projection on the tangent space. If you are in a chart $\\phi (x_1,...,x_n)$, where $\\phi $is a $N$vector, the tangent space is spanned by the (non orthogonal) family of vectors $X_i={\\partial \\phi \\over \\partial x_i}$. We have $g_{i,j}= &lt;X_i,X_j&gt;$, and writting the projection onto this space is equivalent to inverse the matrix $g_{i,j}$. In other words $\\Gamma _{i,j}^k$ is the $k-$th coordinate in the base $X_i={\\partial \\phi \\over \\partial x_i}$ of the $N$ vector ${\\partial ^2 \\phi \\over \\partial x_i \\partial x_j}$.The two dimensional case is enough to understand how the computation goes. For instnace if we are lucky and the base $X_i$ is othonormal, this Christoffel is just $&lt;{\\partial ^2 \\phi \\over \\partial x_i \\partial x_j},{\\partial \\phi \\over \\partial x_k} &gt;$, and your formula is easy to check. The case of surfaces in $R^3$ already contains all difficulties (see Do Carmo)</p>\n" }, { "answer_id": 2105286, "score": 12, "text": "<p>I'm not exactly sure which \"derivation\" you're referring to, but I've always seen Christoffel symbols introduced in the following context:</p>\n\n<p>Ultimately the motivation is to define the notion of parallel transport, which means when we take a derivative of a vector field, we want our derivative to still be tangent to our manifold. 
So let's suppose for instance we have a manifold <span class=\"math-container\">$M$</span>, and we have vector fields <span class=\"math-container\">$X, Y \\in \\mathcal{X}(M)$</span> defined locally in <span class=\"math-container\">$M$</span>. We want an affine connection <span class=\"math-container\">$\\nabla:\\mathcal{X}(M) \\times \\mathcal{X}(M) \\to \\mathcal{X}(M)$</span> to act as a derivative that remains tangent to the manifold. Let <span class=\"math-container\">$\\frac{\\partial}{\\partial e^1}, \\ldots, \\frac{\\partial }{\\partial e^n}$</span> be a local basis for the tangent spaces on <span class=\"math-container\">$M$</span> (defined in the same neighborhood as our vector fields), such that we can express our vector fields locally by functions <span class=\"math-container\">$x^i, y^j: M \\to \\mathbb{R}$</span>:</p>\n\n<p><span class=\"math-container\">$$\nX \\;\\; =\\;\\; x^i \\frac{\\partial}{\\partial e^i} \\;\\;\\; \\;\\;\\; Y \\;\\; =\\;\\; y^j \\frac{\\partial}{\\partial e^j}.\n$$</span></p>\n\n<p>Then our covariant derivative is given locally by </p>\n\n<p><span class=\"math-container\">$$\n\\nabla_XY \\;\\; =\\;\\; \\nabla_{x^i\\partial_i} (y^j \\partial_j) \\;\\; =\\;\\; x^i \\nabla_{\\partial_i} (y^j\\partial_j) \\;\\; =\\;\\; x^i \\frac{\\partial y^j}{\\partial e^i} \\frac{\\partial }{\\partial e^j} + x^iy^j \\nabla_{\\partial_i}\\partial_j.\n$$</span></p>\n\n<p>The first term is clearly in the tangent space, but we want to define <span class=\"math-container\">$\\nabla_{\\partial_i}\\partial_j$</span> to lie in the tangent space. 
We therefore <em>define</em> correction functions <span class=\"math-container\">$\\Gamma_{ij}^k: M\\to \\mathbb{R}$</span> known as the <strong>Christoffel symbols</strong> that satisfy</p>\n\n<p><span class=\"math-container\">$$\n\\nabla_{\\partial_i} \\partial_j \\;\\; \\equiv \\;\\; \\Gamma_{ij}^k \\frac{\\partial}{\\partial e^k}.\n$$</span></p>\n\n<p>Now by manipulating the indices around a little bit we have that</p>\n\n<p><span class=\"math-container\">$$\n\\nabla_XY \\;\\; =\\;\\; \\left (x^i \\frac{\\partial y^k}{\\partial e^i} + x^i y^j \\Gamma_{ij}^k \\right ) \\frac{\\partial}{\\partial e^k}.\n$$</span></p>\n\n<p>The above analysis is valid for <em>any</em> affine connection on a manifold, but on a Riemannian manifold we require the connection to be symmetric and compatible with the metric. The <strong>compatibility</strong> condition is that</p>\n\n<p><span class=\"math-container\">$$\nX\\langle Y,Z\\rangle \\;\\; =\\;\\; \\langle \\nabla_XY, Z\\rangle + \\langle Y, \\nabla_XZ\\rangle\n$$</span></p>\n\n<p>while the <strong>symmetry</strong> condition is that </p>\n\n<p><span class=\"math-container\">$$\n\\nabla_XY - \\nabla_YX \\;\\; =\\;\\; [X,Y]\n$$</span></p>\n\n<p>which for the basis vectors reduces to <span class=\"math-container\">$\\nabla_{\\partial_i}\\partial_j - \\nabla_{\\partial_j}\\partial_i = 0$</span>. 
From these equations we can in general prove that </p>\n\n<p><span class=\"math-container\">$$\n\\langle Z, \\nabla_YX\\rangle \\;\\; =\\;\\; \\frac{1}{2} \\left ( X\\langle Y,Z\\rangle + Y\\langle Z, X\\rangle - Z \\langle X, Y\\rangle - \\langle [X,Z], Y\\rangle - \\langle [Y,Z], X\\rangle - \\langle [X,Y], Z \\rangle \\right ).\n$$</span></p>\n\n<p>If we reduce this down to the basis vectors we obtain</p>\n\n<p><span class=\"math-container\">$$\n\\langle \\partial_s, \\nabla_{\\partial_i}\\partial_j\\rangle \\;\\; =\\;\\; \\Gamma_{ij}^k g_{sk} \\;\\; =\\;\\; \\frac{1}{2} \\left ( \\partial_j g_{si} + \\partial_i g_{sj} - \\partial_s g_{ij} \\right ).\n$$</span></p>\n\n<p>Because we have that <span class=\"math-container\">$g^{sm}$</span> is the inverse of <span class=\"math-container\">$g_{sk}$</span> we have that <span class=\"math-container\">$g_{sk} g^{sm} = \\delta_k^m$</span> and thus we obtain</p>\n\n<p><span class=\"math-container\">$$\n\\Gamma_{ij}^m \\;\\; =\\;\\; \\frac{1}{2} g^{ms} \\left ( \\partial_j g_{si} + \\partial_i g_{sj} - \\partial_s g_{ij} \\right ).\n$$</span></p>\n\n<p>This is the intuitive explanation that I like most regarding Christoffel symbols: they are correction functions to make sure that an affine connection keeps the derivative of a vector field within the tangent space of the manifold. On a Riemannian manifold they have the given form above. To get the other equations that you have listed in your question, simply permute the indices to get different copies of the fundamental equation for <span class=\"math-container\">$\\Gamma_{ij}^s$</span>, and then subtract/add them together to get the forms you're interested in. </p>\n" }, { "answer_id": 2128939, "score": 0, "text": "<p>The intuition can be explained (without any formula) in the case of a subvariety in $\\bf R^n$. In the case the Levi-Cita derivative is obtained by composing the usual derivative with the projection on the tangent space. 
If you are in a chart $\\phi (x_1,...,x_n)$, where $\\phi $is a $N$vector, the tangent space is spanned by the (non orthogonal) family of vectors $X_i={\\partial \\phi \\over \\partial x_i}$. We have $g_{i,j}= &lt;X_i,X_j&gt;$, and writting the projection onto this space is equivalent to inverse the matrix $g_{i,j}$. In other words $\\Gamma _{i,j}^k$ is the $k-$th coordinate in the base $X_i={\\partial \\phi \\over \\partial x_i}$ of the $N$ vector ${\\partial ^2 \\phi \\over \\partial x_i \\partial x_j}$.The two dimensional case is enough to understand how the computation goes. For instnace if we are lucky and the base $X_i$ is othonormal, this Christoffel is just $&lt;{\\partial ^2 \\phi \\over \\partial x_i \\partial x_j},{\\partial \\phi \\over \\partial x_k} &gt;$, and your formula is easy to check. The case of surfaces in $R^3$ already contains all difficulties (see Do Carmo)</p>\n" }, { "answer_id": 2105286, "score": 12, "text": "<p>I'm not exactly sure which \"derivation\" you're referring to, but I've always seen Christoffel symbols introduced in the following context:</p>\n\n<p>Ultimately the motivation is to define the notion of parallel transport, which means when we take a derivative of a vector field, we want our derivative to still be tangent to our manifold. So let's suppose for instance we have a manifold <span class=\"math-container\">$M$</span>, and we have vector fields <span class=\"math-container\">$X, Y \\in \\mathcal{X}(M)$</span> defined locally in <span class=\"math-container\">$M$</span>. We want an affine connection <span class=\"math-container\">$\\nabla:\\mathcal{X}(M) \\times \\mathcal{X}(M) \\to \\mathcal{X}(M)$</span> to act as a derivative that remains tangent to the manifold. 
Let <span class=\"math-container\">$\\frac{\\partial}{\\partial e^1}, \\ldots, \\frac{\\partial }{\\partial e^n}$</span> be a local basis for the tangent spaces on <span class=\"math-container\">$M$</span> (defined in the same neighborhood as our vector fields), such that we can express our vector fields locally by functions <span class=\"math-container\">$x^i, y^j: M \\to \\mathbb{R}$</span>:</p>\n\n<p><span class=\"math-container\">$$\nX \\;\\; =\\;\\; x^i \\frac{\\partial}{\\partial e^i} \\;\\;\\; \\;\\;\\; Y \\;\\; =\\;\\; y^j \\frac{\\partial}{\\partial e^j}.\n$$</span></p>\n\n<p>Then our covariant derivative is given locally by </p>\n\n<p><span class=\"math-container\">$$\n\\nabla_XY \\;\\; =\\;\\; \\nabla_{x^i\\partial_i} (y^j \\partial_j) \\;\\; =\\;\\; x^i \\nabla_{\\partial_i} (y^j\\partial_j) \\;\\; =\\;\\; x^i \\frac{\\partial y^j}{\\partial e^i} \\frac{\\partial }{\\partial e^j} + x^iy^j \\nabla_{\\partial_i}\\partial_j.\n$$</span></p>\n\n<p>The first term is clearly in the tangent space, but we want to define <span class=\"math-container\">$\\nabla_{\\partial_i}\\partial_j$</span> to lie in the tangent space. We therefore <em>define</em> correction functions <span class=\"math-container\">$\\Gamma_{ij}^k: M\\to \\mathbb{R}$</span> known as the <strong>Christoffel symbols</strong> that satisfy</p>\n\n<p><span class=\"math-container\">$$\n\\nabla_{\\partial_i} \\partial_j \\;\\; \\equiv \\;\\; \\Gamma_{ij}^k \\frac{\\partial}{\\partial e^k}.\n$$</span></p>\n\n<p>Now by manipulating the indices around a little bit we have that</p>\n\n<p><span class=\"math-container\">$$\n\\nabla_XY \\;\\; =\\;\\; \\left (x^i \\frac{\\partial y^k}{\\partial e^i} + x^i y^j \\Gamma_{ij}^k \\right ) \\frac{\\partial}{\\partial e^k}.\n$$</span></p>\n\n<p>The above analysis is valid for <em>any</em> affine connection on a manifold, but on a Riemannian manifold we require the connection to be symmetric and compatible with the metric. 
The <strong>compatibility</strong> condition is that</p>\n\n<p><span class=\"math-container\">$$\nX\\langle Y,Z\\rangle \\;\\; =\\;\\; \\langle \\nabla_XY, Z\\rangle + \\langle Y, \\nabla_XZ\\rangle\n$$</span></p>\n\n<p>while the <strong>symmetry</strong> condition is that </p>\n\n<p><span class=\"math-container\">$$\n\\nabla_XY - \\nabla_YX \\;\\; =\\;\\; [X,Y]\n$$</span></p>\n\n<p>which for the basis vectors reduces to <span class=\"math-container\">$\\nabla_{\\partial_i}\\partial_j - \\nabla_{\\partial_j}\\partial_i = 0$</span>. From these equations we can in general prove that </p>\n\n<p><span class=\"math-container\">$$\n\\langle Z, \\nabla_YX\\rangle \\;\\; =\\;\\; \\frac{1}{2} \\left ( X\\langle Y,Z\\rangle + Y\\langle Z, X\\rangle - Z \\langle X, Y\\rangle - \\langle [X,Z], Y\\rangle - \\langle [Y,Z], X\\rangle - \\langle [X,Y], Z \\rangle \\right ).\n$$</span></p>\n\n<p>If we reduce this down to the basis vectors we obtain</p>\n\n<p><span class=\"math-container\">$$\n\\langle \\partial_s, \\nabla_{\\partial_i}\\partial_j\\rangle \\;\\; =\\;\\; \\Gamma_{ij}^k g_{sk} \\;\\; =\\;\\; \\frac{1}{2} \\left ( \\partial_j g_{si} + \\partial_i g_{sj} - \\partial_s g_{ij} \\right ).\n$$</span></p>\n\n<p>Because we have that <span class=\"math-container\">$g^{sm}$</span> is the inverse of <span class=\"math-container\">$g_{sk}$</span> we have that <span class=\"math-container\">$g_{sk} g^{sm} = \\delta_k^m$</span> and thus we obtain</p>\n\n<p><span class=\"math-container\">$$\n\\Gamma_{ij}^m \\;\\; =\\;\\; \\frac{1}{2} g^{ms} \\left ( \\partial_j g_{si} + \\partial_i g_{sj} - \\partial_s g_{ij} \\right ).\n$$</span></p>\n\n<p>This is the intuitive explanation that I like most regarding Christoffel symbols: they are correction functions to make sure that an affine connection keeps the derivative of a vector field within the tangent space of the manifold. On a Riemannian manifold they have the given form above. 
To get the other equations that you have listed in your question, simply permute the indices to get different copies of the fundamental equation for <span class=\"math-container\">$\\Gamma_{ij}^s$</span>, and then subtract/add them together to get the forms you're interested in. </p>\n" }, { "answer_id": 2128939, "score": 0, "text": "<p>The intuition can be explained (without any formula) in the case of a subvariety in $\\bf R^n$. In the case the Levi-Cita derivative is obtained by composing the usual derivative with the projection on the tangent space. If you are in a chart $\\phi (x_1,...,x_n)$, where $\\phi $is a $N$vector, the tangent space is spanned by the (non orthogonal) family of vectors $X_i={\\partial \\phi \\over \\partial x_i}$. We have $g_{i,j}= &lt;X_i,X_j&gt;$, and writting the projection onto this space is equivalent to inverse the matrix $g_{i,j}$. In other words $\\Gamma _{i,j}^k$ is the $k-$th coordinate in the base $X_i={\\partial \\phi \\over \\partial x_i}$ of the $N$ vector ${\\partial ^2 \\phi \\over \\partial x_i \\partial x_j}$.The two dimensional case is enough to understand how the computation goes. For instnace if we are lucky and the base $X_i$ is othonormal, this Christoffel is just $&lt;{\\partial ^2 \\phi \\over \\partial x_i \\partial x_j},{\\partial \\phi \\over \\partial x_k} &gt;$, and your formula is easy to check. The case of surfaces in $R^3$ already contains all difficulties (see Do Carmo)</p>\n" }, { "answer_id": 2105286, "score": 12, "text": "<p>I'm not exactly sure which \"derivation\" you're referring to, but I've always seen Christoffel symbols introduced in the following context:</p>\n\n<p>Ultimately the motivation is to define the notion of parallel transport, which means when we take a derivative of a vector field, we want our derivative to still be tangent to our manifold. 
So let's suppose for instance we have a manifold <span class=\"math-container\">$M$</span>, and we have vector fields <span class=\"math-container\">$X, Y \\in \\mathcal{X}(M)$</span> defined locally in <span class=\"math-container\">$M$</span>. We want an affine connection <span class=\"math-container\">$\\nabla:\\mathcal{X}(M) \\times \\mathcal{X}(M) \\to \\mathcal{X}(M)$</span> to act as a derivative that remains tangent to the manifold. Let <span class=\"math-container\">$\\frac{\\partial}{\\partial e^1}, \\ldots, \\frac{\\partial }{\\partial e^n}$</span> be a local basis for the tangent spaces on <span class=\"math-container\">$M$</span> (defined in the same neighborhood as our vector fields), such that we can express our vector fields locally by functions <span class=\"math-container\">$x^i, y^j: M \\to \\mathbb{R}$</span>:</p>\n\n<p><span class=\"math-container\">$$\nX \\;\\; =\\;\\; x^i \\frac{\\partial}{\\partial e^i} \\;\\;\\; \\;\\;\\; Y \\;\\; =\\;\\; y^j \\frac{\\partial}{\\partial e^j}.\n$$</span></p>\n\n<p>Then our covariant derivative is given locally by </p>\n\n<p><span class=\"math-container\">$$\n\\nabla_XY \\;\\; =\\;\\; \\nabla_{x^i\\partial_i} (y^j \\partial_j) \\;\\; =\\;\\; x^i \\nabla_{\\partial_i} (y^j\\partial_j) \\;\\; =\\;\\; x^i \\frac{\\partial y^j}{\\partial e^i} \\frac{\\partial }{\\partial e^j} + x^iy^j \\nabla_{\\partial_i}\\partial_j.\n$$</span></p>\n\n<p>The first term is clearly in the tangent space, but we want to define <span class=\"math-container\">$\\nabla_{\\partial_i}\\partial_j$</span> to lie in the tangent space. 
We therefore <em>define</em> correction functions <span class=\"math-container\">$\\Gamma_{ij}^k: M\\to \\mathbb{R}$</span> known as the <strong>Christoffel symbols</strong> that satisfy</p>\n\n<p><span class=\"math-container\">$$\n\\nabla_{\\partial_i} \\partial_j \\;\\; \\equiv \\;\\; \\Gamma_{ij}^k \\frac{\\partial}{\\partial e^k}.\n$$</span></p>\n\n<p>Now by manipulating the indices around a little bit we have that</p>\n\n<p><span class=\"math-container\">$$\n\\nabla_XY \\;\\; =\\;\\; \\left (x^i \\frac{\\partial y^k}{\\partial e^i} + x^i y^j \\Gamma_{ij}^k \\right ) \\frac{\\partial}{\\partial e^k}.\n$$</span></p>\n\n<p>The above analysis is valid for <em>any</em> affine connection on a manifold, but on a Riemannian manifold we require the connection to be symmetric and compatible with the metric. The <strong>compatibility</strong> condition is that</p>\n\n<p><span class=\"math-container\">$$\nX\\langle Y,Z\\rangle \\;\\; =\\;\\; \\langle \\nabla_XY, Z\\rangle + \\langle Y, \\nabla_XZ\\rangle\n$$</span></p>\n\n<p>while the <strong>symmetry</strong> condition is that </p>\n\n<p><span class=\"math-container\">$$\n\\nabla_XY - \\nabla_YX \\;\\; =\\;\\; [X,Y]\n$$</span></p>\n\n<p>which for the basis vectors reduces to <span class=\"math-container\">$\\nabla_{\\partial_i}\\partial_j - \\nabla_{\\partial_j}\\partial_i = 0$</span>. 
From these equations we can in general prove that </p>\n\n<p><span class=\"math-container\">$$\n\\langle Z, \\nabla_YX\\rangle \\;\\; =\\;\\; \\frac{1}{2} \\left ( X\\langle Y,Z\\rangle + Y\\langle Z, X\\rangle - Z \\langle X, Y\\rangle - \\langle [X,Z], Y\\rangle - \\langle [Y,Z], X\\rangle - \\langle [X,Y], Z \\rangle \\right ).\n$$</span></p>\n\n<p>If we reduce this down to the basis vectors we obtain</p>\n\n<p><span class=\"math-container\">$$\n\\langle \\partial_s, \\nabla_{\\partial_i}\\partial_j\\rangle \\;\\; =\\;\\; \\Gamma_{ij}^k g_{sk} \\;\\; =\\;\\; \\frac{1}{2} \\left ( \\partial_j g_{si} + \\partial_i g_{sj} - \\partial_s g_{ij} \\right ).\n$$</span></p>\n\n<p>Because we have that <span class=\"math-container\">$g^{sm}$</span> is the inverse of <span class=\"math-container\">$g_{sk}$</span> we have that <span class=\"math-container\">$g_{sk} g^{sm} = \\delta_k^m$</span> and thus we obtain</p>\n\n<p><span class=\"math-container\">$$\n\\Gamma_{ij}^m \\;\\; =\\;\\; \\frac{1}{2} g^{ms} \\left ( \\partial_j g_{si} + \\partial_i g_{sj} - \\partial_s g_{ij} \\right ).\n$$</span></p>\n\n<p>This is the intuitive explanation that I like most regarding Christoffel symbols: they are correction functions to make sure that an affine connection keeps the derivative of a vector field within the tangent space of the manifold. On a Riemannian manifold they have the given form above. To get the other equations that you have listed in your question, simply permute the indices to get different copies of the fundamental equation for <span class=\"math-container\">$\\Gamma_{ij}^s$</span>, and then subtract/add them together to get the forms you're interested in. </p>\n" }, { "answer_id": 2128939, "score": 0, "text": "<p>The intuition can be explained (without any formula) in the case of a subvariety in $\\bf R^n$. In the case the Levi-Cita derivative is obtained by composing the usual derivative with the projection on the tangent space. 
If you are in a chart $\\phi (x_1,...,x_n)$, where $\\phi $is a $N$vector, the tangent space is spanned by the (non orthogonal) family of vectors $X_i={\\partial \\phi \\over \\partial x_i}$. We have $g_{i,j}= &lt;X_i,X_j&gt;$, and writting the projection onto this space is equivalent to inverse the matrix $g_{i,j}$. In other words $\\Gamma _{i,j}^k$ is the $k-$th coordinate in the base $X_i={\\partial \\phi \\over \\partial x_i}$ of the $N$ vector ${\\partial ^2 \\phi \\over \\partial x_i \\partial x_j}$.The two dimensional case is enough to understand how the computation goes. For instnace if we are lucky and the base $X_i$ is othonormal, this Christoffel is just $&lt;{\\partial ^2 \\phi \\over \\partial x_i \\partial x_j},{\\partial \\phi \\over \\partial x_k} &gt;$, and your formula is easy to check. The case of surfaces in $R^3$ already contains all difficulties (see Do Carmo)</p>\n" }, { "answer_id": 2105286, "score": 12, "text": "<p>I'm not exactly sure which \"derivation\" you're referring to, but I've always seen Christoffel symbols introduced in the following context:</p>\n\n<p>Ultimately the motivation is to define the notion of parallel transport, which means when we take a derivative of a vector field, we want our derivative to still be tangent to our manifold. So let's suppose for instance we have a manifold <span class=\"math-container\">$M$</span>, and we have vector fields <span class=\"math-container\">$X, Y \\in \\mathcal{X}(M)$</span> defined locally in <span class=\"math-container\">$M$</span>. We want an affine connection <span class=\"math-container\">$\\nabla:\\mathcal{X}(M) \\times \\mathcal{X}(M) \\to \\mathcal{X}(M)$</span> to act as a derivative that remains tangent to the manifold. 
Let <span class=\"math-container\">$\\frac{\\partial}{\\partial e^1}, \\ldots, \\frac{\\partial }{\\partial e^n}$</span> be a local basis for the tangent spaces on <span class=\"math-container\">$M$</span> (defined in the same neighborhood as our vector fields), such that we can express our vector fields locally by functions <span class=\"math-container\">$x^i, y^j: M \\to \\mathbb{R}$</span>:</p>\n\n<p><span class=\"math-container\">$$\nX \\;\\; =\\;\\; x^i \\frac{\\partial}{\\partial e^i} \\;\\;\\; \\;\\;\\; Y \\;\\; =\\;\\; y^j \\frac{\\partial}{\\partial e^j}.\n$$</span></p>\n\n<p>Then our covariant derivative is given locally by </p>\n\n<p><span class=\"math-container\">$$\n\\nabla_XY \\;\\; =\\;\\; \\nabla_{x^i\\partial_i} (y^j \\partial_j) \\;\\; =\\;\\; x^i \\nabla_{\\partial_i} (y^j\\partial_j) \\;\\; =\\;\\; x^i \\frac{\\partial y^j}{\\partial e^i} \\frac{\\partial }{\\partial e^j} + x^iy^j \\nabla_{\\partial_i}\\partial_j.\n$$</span></p>\n\n<p>The first term is clearly in the tangent space, but we want to define <span class=\"math-container\">$\\nabla_{\\partial_i}\\partial_j$</span> to lie in the tangent space. We therefore <em>define</em> correction functions <span class=\"math-container\">$\\Gamma_{ij}^k: M\\to \\mathbb{R}$</span> known as the <strong>Christoffel symbols</strong> that satisfy</p>\n\n<p><span class=\"math-container\">$$\n\\nabla_{\\partial_i} \\partial_j \\;\\; \\equiv \\;\\; \\Gamma_{ij}^k \\frac{\\partial}{\\partial e^k}.\n$$</span></p>\n\n<p>Now by manipulating the indices around a little bit we have that</p>\n\n<p><span class=\"math-container\">$$\n\\nabla_XY \\;\\; =\\;\\; \\left (x^i \\frac{\\partial y^k}{\\partial e^i} + x^i y^j \\Gamma_{ij}^k \\right ) \\frac{\\partial}{\\partial e^k}.\n$$</span></p>\n\n<p>The above analysis is valid for <em>any</em> affine connection on a manifold, but on a Riemannian manifold we require the connection to be symmetric and compatible with the metric. 
The <strong>compatibility</strong> condition is that</p>\n\n<p><span class=\"math-container\">$$\nX\\langle Y,Z\\rangle \\;\\; =\\;\\; \\langle \\nabla_XY, Z\\rangle + \\langle Y, \\nabla_XZ\\rangle\n$$</span></p>\n\n<p>while the <strong>symmetry</strong> condition is that </p>\n\n<p><span class=\"math-container\">$$\n\\nabla_XY - \\nabla_YX \\;\\; =\\;\\; [X,Y]\n$$</span></p>\n\n<p>which for the basis vectors reduces to <span class=\"math-container\">$\\nabla_{\\partial_i}\\partial_j - \\nabla_{\\partial_j}\\partial_i = 0$</span>. From these equations we can in general prove that </p>\n\n<p><span class=\"math-container\">$$\n\\langle Z, \\nabla_YX\\rangle \\;\\; =\\;\\; \\frac{1}{2} \\left ( X\\langle Y,Z\\rangle + Y\\langle Z, X\\rangle - Z \\langle X, Y\\rangle - \\langle [X,Z], Y\\rangle - \\langle [Y,Z], X\\rangle - \\langle [X,Y], Z \\rangle \\right ).\n$$</span></p>\n\n<p>If we reduce this down to the basis vectors we obtain</p>\n\n<p><span class=\"math-container\">$$\n\\langle \\partial_s, \\nabla_{\\partial_i}\\partial_j\\rangle \\;\\; =\\;\\; \\Gamma_{ij}^k g_{sk} \\;\\; =\\;\\; \\frac{1}{2} \\left ( \\partial_j g_{si} + \\partial_i g_{sj} - \\partial_s g_{ij} \\right ).\n$$</span></p>\n\n<p>Because we have that <span class=\"math-container\">$g^{sm}$</span> is the inverse of <span class=\"math-container\">$g_{sk}$</span> we have that <span class=\"math-container\">$g_{sk} g^{sm} = \\delta_k^m$</span> and thus we obtain</p>\n\n<p><span class=\"math-container\">$$\n\\Gamma_{ij}^m \\;\\; =\\;\\; \\frac{1}{2} g^{ms} \\left ( \\partial_j g_{si} + \\partial_i g_{sj} - \\partial_s g_{ij} \\right ).\n$$</span></p>\n\n<p>This is the intuitive explanation that I like most regarding Christoffel symbols: they are correction functions to make sure that an affine connection keeps the derivative of a vector field within the tangent space of the manifold. On a Riemannian manifold they have the given form above. 
To get the other equations that you have listed in your question, simply permute the indices to get different copies of the fundamental equation for <span class=\"math-container\">$\\Gamma_{ij}^s$</span>, and then subtract/add them together to get the forms you're interested in. </p>\n" }, { "answer_id": 2128939, "score": 0, "text": "<p>The intuition can be explained (without any formula) in the case of a subvariety in $\\bf R^n$. In the case the Levi-Cita derivative is obtained by composing the usual derivative with the projection on the tangent space. If you are in a chart $\\phi (x_1,...,x_n)$, where $\\phi $is a $N$vector, the tangent space is spanned by the (non orthogonal) family of vectors $X_i={\\partial \\phi \\over \\partial x_i}$. We have $g_{i,j}= &lt;X_i,X_j&gt;$, and writting the projection onto this space is equivalent to inverse the matrix $g_{i,j}$. In other words $\\Gamma _{i,j}^k$ is the $k-$th coordinate in the base $X_i={\\partial \\phi \\over \\partial x_i}$ of the $N$ vector ${\\partial ^2 \\phi \\over \\partial x_i \\partial x_j}$.The two dimensional case is enough to understand how the computation goes. For instnace if we are lucky and the base $X_i$ is othonormal, this Christoffel is just $&lt;{\\partial ^2 \\phi \\over \\partial x_i \\partial x_j},{\\partial \\phi \\over \\partial x_k} &gt;$, and your formula is easy to check. The case of surfaces in $R^3$ already contains all difficulties (see Do Carmo)</p>\n" }, { "answer_id": 2105286, "score": 12, "text": "<p>I'm not exactly sure which \"derivation\" you're referring to, but I've always seen Christoffel symbols introduced in the following context:</p>\n\n<p>Ultimately the motivation is to define the notion of parallel transport, which means when we take a derivative of a vector field, we want our derivative to still be tangent to our manifold. 
So let's suppose for instance we have a manifold <span class=\"math-container\">$M$</span>, and we have vector fields <span class=\"math-container\">$X, Y \\in \\mathcal{X}(M)$</span> defined locally in <span class=\"math-container\">$M$</span>. We want an affine connection <span class=\"math-container\">$\\nabla:\\mathcal{X}(M) \\times \\mathcal{X}(M) \\to \\mathcal{X}(M)$</span> to act as a derivative that remains tangent to the manifold. Let <span class=\"math-container\">$\\frac{\\partial}{\\partial e^1}, \\ldots, \\frac{\\partial }{\\partial e^n}$</span> be a local basis for the tangent spaces on <span class=\"math-container\">$M$</span> (defined in the same neighborhood as our vector fields), such that we can express our vector fields locally by functions <span class=\"math-container\">$x^i, y^j: M \\to \\mathbb{R}$</span>:</p>\n\n<p><span class=\"math-container\">$$\nX \\;\\; =\\;\\; x^i \\frac{\\partial}{\\partial e^i} \\;\\;\\; \\;\\;\\; Y \\;\\; =\\;\\; y^j \\frac{\\partial}{\\partial e^j}.\n$$</span></p>\n\n<p>Then our covariant derivative is given locally by </p>\n\n<p><span class=\"math-container\">$$\n\\nabla_XY \\;\\; =\\;\\; \\nabla_{x^i\\partial_i} (y^j \\partial_j) \\;\\; =\\;\\; x^i \\nabla_{\\partial_i} (y^j\\partial_j) \\;\\; =\\;\\; x^i \\frac{\\partial y^j}{\\partial e^i} \\frac{\\partial }{\\partial e^j} + x^iy^j \\nabla_{\\partial_i}\\partial_j.\n$$</span></p>\n\n<p>The first term is clearly in the tangent space, but we want to define <span class=\"math-container\">$\\nabla_{\\partial_i}\\partial_j$</span> to lie in the tangent space. 
We therefore <em>define</em> correction functions <span class=\"math-container\">$\\Gamma_{ij}^k: M\\to \\mathbb{R}$</span> known as the <strong>Christoffel symbols</strong> that satisfy</p>\n\n<p><span class=\"math-container\">$$\n\\nabla_{\\partial_i} \\partial_j \\;\\; \\equiv \\;\\; \\Gamma_{ij}^k \\frac{\\partial}{\\partial e^k}.\n$$</span></p>\n\n<p>Now by manipulating the indices around a little bit we have that</p>\n\n<p><span class=\"math-container\">$$\n\\nabla_XY \\;\\; =\\;\\; \\left (x^i \\frac{\\partial y^k}{\\partial e^i} + x^i y^j \\Gamma_{ij}^k \\right ) \\frac{\\partial}{\\partial e^k}.\n$$</span></p>\n\n<p>The above analysis is valid for <em>any</em> affine connection on a manifold, but on a Riemannian manifold we require the connection to be symmetric and compatible with the metric. The <strong>compatibility</strong> condition is that</p>\n\n<p><span class=\"math-container\">$$\nX\\langle Y,Z\\rangle \\;\\; =\\;\\; \\langle \\nabla_XY, Z\\rangle + \\langle Y, \\nabla_XZ\\rangle\n$$</span></p>\n\n<p>while the <strong>symmetry</strong> condition is that </p>\n\n<p><span class=\"math-container\">$$\n\\nabla_XY - \\nabla_YX \\;\\; =\\;\\; [X,Y]\n$$</span></p>\n\n<p>which for the basis vectors reduces to <span class=\"math-container\">$\\nabla_{\\partial_i}\\partial_j - \\nabla_{\\partial_j}\\partial_i = 0$</span>. 
From these equations we can in general prove that </p>\n\n<p><span class=\"math-container\">$$\n\\langle Z, \\nabla_YX\\rangle \\;\\; =\\;\\; \\frac{1}{2} \\left ( X\\langle Y,Z\\rangle + Y\\langle Z, X\\rangle - Z \\langle X, Y\\rangle - \\langle [X,Z], Y\\rangle - \\langle [Y,Z], X\\rangle - \\langle [X,Y], Z \\rangle \\right ).\n$$</span></p>\n\n<p>If we reduce this down to the basis vectors we obtain</p>\n\n<p><span class=\"math-container\">$$\n\\langle \\partial_s, \\nabla_{\\partial_i}\\partial_j\\rangle \\;\\; =\\;\\; \\Gamma_{ij}^k g_{sk} \\;\\; =\\;\\; \\frac{1}{2} \\left ( \\partial_j g_{si} + \\partial_i g_{sj} - \\partial_s g_{ij} \\right ).\n$$</span></p>\n\n<p>Because we have that <span class=\"math-container\">$g^{sm}$</span> is the inverse of <span class=\"math-container\">$g_{sk}$</span> we have that <span class=\"math-container\">$g_{sk} g^{sm} = \\delta_k^m$</span> and thus we obtain</p>\n\n<p><span class=\"math-container\">$$\n\\Gamma_{ij}^m \\;\\; =\\;\\; \\frac{1}{2} g^{ms} \\left ( \\partial_j g_{si} + \\partial_i g_{sj} - \\partial_s g_{ij} \\right ).\n$$</span></p>\n\n<p>This is the intuitive explanation that I like most regarding Christoffel symbols: they are correction functions to make sure that an affine connection keeps the derivative of a vector field within the tangent space of the manifold. On a Riemannian manifold they have the given form above. To get the other equations that you have listed in your question, simply permute the indices to get different copies of the fundamental equation for <span class=\"math-container\">$\\Gamma_{ij}^s$</span>, and then subtract/add them together to get the forms you're interested in. </p>\n" }, { "answer_id": 2128939, "score": 0, "text": "<p>The intuition can be explained (without any formula) in the case of a subvariety in $\\bf R^n$. In the case the Levi-Cita derivative is obtained by composing the usual derivative with the projection on the tangent space. 
If you are in a chart $\\phi (x_1,...,x_n)$, where $\\phi$ is an $N$-vector, the tangent space is spanned by the (non-orthogonal) family of vectors $X_i={\\partial \\phi \\over \\partial x_i}$. We have $g_{i,j}= &lt;X_i,X_j&gt;$, and writing the projection onto this space is equivalent to inverting the matrix $g_{i,j}$. In other words $\\Gamma _{i,j}^k$ is the $k$-th coordinate in the base $X_i={\\partial \\phi \\over \\partial x_i}$ of the $N$-vector ${\\partial ^2 \\phi \\over \\partial x_i \\partial x_j}$. The two-dimensional case is enough to understand how the computation goes. For instance, if we are lucky and the base $X_i$ is orthonormal, this Christoffel symbol is just $&lt;{\\partial ^2 \\phi \\over \\partial x_i \\partial x_j},{\\partial \\phi \\over \\partial x_k} &gt;$, and your formula is easy to check. The case of surfaces in $R^3$ already contains all the difficulties (see Do Carmo).</p>\n" }, { "answer_id": 2105286, "score": 12, "text": "<p>I'm not exactly sure which \"derivation\" you're referring to, but I've always seen Christoffel symbols introduced in the following context:</p>\n\n<p>Ultimately the motivation is to define the notion of parallel transport, which means when we take a derivative of a vector field, we want our derivative to still be tangent to our manifold. 
So let's suppose for instance we have a manifold <span class=\"math-container\">$M$</span>, and we have vector fields <span class=\"math-container\">$X, Y \\in \\mathcal{X}(M)$</span> defined locally in <span class=\"math-container\">$M$</span>. We want an affine connection <span class=\"math-container\">$\\nabla:\\mathcal{X}(M) \\times \\mathcal{X}(M) \\to \\mathcal{X}(M)$</span> to act as a derivative that remains tangent to the manifold. Let <span class=\"math-container\">$\\frac{\\partial}{\\partial e^1}, \\ldots, \\frac{\\partial }{\\partial e^n}$</span> be a local basis for the tangent spaces on <span class=\"math-container\">$M$</span> (defined in the same neighborhood as our vector fields), such that we can express our vector fields locally by functions <span class=\"math-container\">$x^i, y^j: M \\to \\mathbb{R}$</span>:</p>\n\n<p><span class=\"math-container\">$$\nX \\;\\; =\\;\\; x^i \\frac{\\partial}{\\partial e^i} \\;\\;\\; \\;\\;\\; Y \\;\\; =\\;\\; y^j \\frac{\\partial}{\\partial e^j}.\n$$</span></p>\n\n<p>Then our covariant derivative is given locally by </p>\n\n<p><span class=\"math-container\">$$\n\\nabla_XY \\;\\; =\\;\\; \\nabla_{x^i\\partial_i} (y^j \\partial_j) \\;\\; =\\;\\; x^i \\nabla_{\\partial_i} (y^j\\partial_j) \\;\\; =\\;\\; x^i \\frac{\\partial y^j}{\\partial e^i} \\frac{\\partial }{\\partial e^j} + x^iy^j \\nabla_{\\partial_i}\\partial_j.\n$$</span></p>\n\n<p>The first term is clearly in the tangent space, but we want to define <span class=\"math-container\">$\\nabla_{\\partial_i}\\partial_j$</span> to lie in the tangent space. 
We therefore <em>define</em> correction functions <span class=\"math-container\">$\\Gamma_{ij}^k: M\\to \\mathbb{R}$</span> known as the <strong>Christoffel symbols</strong> that satisfy</p>\n\n<p><span class=\"math-container\">$$\n\\nabla_{\\partial_i} \\partial_j \\;\\; \\equiv \\;\\; \\Gamma_{ij}^k \\frac{\\partial}{\\partial e^k}.\n$$</span></p>\n\n<p>Now by manipulating the indices around a little bit we have that</p>\n\n<p><span class=\"math-container\">$$\n\\nabla_XY \\;\\; =\\;\\; \\left (x^i \\frac{\\partial y^k}{\\partial e^i} + x^i y^j \\Gamma_{ij}^k \\right ) \\frac{\\partial}{\\partial e^k}.\n$$</span></p>\n\n<p>The above analysis is valid for <em>any</em> affine connection on a manifold, but on a Riemannian manifold we require the connection to be symmetric and compatible with the metric. The <strong>compatibility</strong> condition is that</p>\n\n<p><span class=\"math-container\">$$\nX\\langle Y,Z\\rangle \\;\\; =\\;\\; \\langle \\nabla_XY, Z\\rangle + \\langle Y, \\nabla_XZ\\rangle\n$$</span></p>\n\n<p>while the <strong>symmetry</strong> condition is that </p>\n\n<p><span class=\"math-container\">$$\n\\nabla_XY - \\nabla_YX \\;\\; =\\;\\; [X,Y]\n$$</span></p>\n\n<p>which for the basis vectors reduces to <span class=\"math-container\">$\\nabla_{\\partial_i}\\partial_j - \\nabla_{\\partial_j}\\partial_i = 0$</span>. 
From these equations we can in general prove that </p>\n\n<p><span class=\"math-container\">$$\n\\langle Z, \\nabla_YX\\rangle \\;\\; =\\;\\; \\frac{1}{2} \\left ( X\\langle Y,Z\\rangle + Y\\langle Z, X\\rangle - Z \\langle X, Y\\rangle - \\langle [X,Z], Y\\rangle - \\langle [Y,Z], X\\rangle - \\langle [X,Y], Z \\rangle \\right ).\n$$</span></p>\n\n<p>If we reduce this down to the basis vectors we obtain</p>\n\n<p><span class=\"math-container\">$$\n\\langle \\partial_s, \\nabla_{\\partial_i}\\partial_j\\rangle \\;\\; =\\;\\; \\Gamma_{ij}^k g_{sk} \\;\\; =\\;\\; \\frac{1}{2} \\left ( \\partial_j g_{si} + \\partial_i g_{sj} - \\partial_s g_{ij} \\right ).\n$$</span></p>\n\n<p>Because we have that <span class=\"math-container\">$g^{sm}$</span> is the inverse of <span class=\"math-container\">$g_{sk}$</span> we have that <span class=\"math-container\">$g_{sk} g^{sm} = \\delta_k^m$</span> and thus we obtain</p>\n\n<p><span class=\"math-container\">$$\n\\Gamma_{ij}^m \\;\\; =\\;\\; \\frac{1}{2} g^{ms} \\left ( \\partial_j g_{si} + \\partial_i g_{sj} - \\partial_s g_{ij} \\right ).\n$$</span></p>\n\n<p>This is the intuitive explanation that I like most regarding Christoffel symbols: they are correction functions to make sure that an affine connection keeps the derivative of a vector field within the tangent space of the manifold. On a Riemannian manifold they have the given form above. To get the other equations that you have listed in your question, simply permute the indices to get different copies of the fundamental equation for <span class=\"math-container\">$\\Gamma_{ij}^s$</span>, and then subtract/add them together to get the forms you're interested in. </p>\n" }, { "answer_id": 2128939, "score": 0, "text": "<p>The intuition can be explained (without any formula) in the case of a subvariety in $\\bf R^n$. In the case the Levi-Cita derivative is obtained by composing the usual derivative with the projection on the tangent space. 
If you are in a chart $\\phi (x_1,...,x_n)$, where $\\phi $is a $N$vector, the tangent space is spanned by the (non orthogonal) family of vectors $X_i={\\partial \\phi \\over \\partial x_i}$. We have $g_{i,j}= &lt;X_i,X_j&gt;$, and writting the projection onto this space is equivalent to inverse the matrix $g_{i,j}$. In other words $\\Gamma _{i,j}^k$ is the $k-$th coordinate in the base $X_i={\\partial \\phi \\over \\partial x_i}$ of the $N$ vector ${\\partial ^2 \\phi \\over \\partial x_i \\partial x_j}$.The two dimensional case is enough to understand how the computation goes. For instnace if we are lucky and the base $X_i$ is othonormal, this Christoffel is just $&lt;{\\partial ^2 \\phi \\over \\partial x_i \\partial x_j},{\\partial \\phi \\over \\partial x_k} &gt;$, and your formula is easy to check. The case of surfaces in $R^3$ already contains all difficulties (see Do Carmo)</p>\n" }, { "answer_id": 2105286, "score": 12, "text": "<p>I'm not exactly sure which \"derivation\" you're referring to, but I've always seen Christoffel symbols introduced in the following context:</p>\n\n<p>Ultimately the motivation is to define the notion of parallel transport, which means when we take a derivative of a vector field, we want our derivative to still be tangent to our manifold. So let's suppose for instance we have a manifold <span class=\"math-container\">$M$</span>, and we have vector fields <span class=\"math-container\">$X, Y \\in \\mathcal{X}(M)$</span> defined locally in <span class=\"math-container\">$M$</span>. We want an affine connection <span class=\"math-container\">$\\nabla:\\mathcal{X}(M) \\times \\mathcal{X}(M) \\to \\mathcal{X}(M)$</span> to act as a derivative that remains tangent to the manifold. 
Let <span class=\"math-container\">$\\frac{\\partial}{\\partial e^1}, \\ldots, \\frac{\\partial }{\\partial e^n}$</span> be a local basis for the tangent spaces on <span class=\"math-container\">$M$</span> (defined in the same neighborhood as our vector fields), such that we can express our vector fields locally by functions <span class=\"math-container\">$x^i, y^j: M \\to \\mathbb{R}$</span>:</p>\n\n<p><span class=\"math-container\">$$\nX \\;\\; =\\;\\; x^i \\frac{\\partial}{\\partial e^i} \\;\\;\\; \\;\\;\\; Y \\;\\; =\\;\\; y^j \\frac{\\partial}{\\partial e^j}.\n$$</span></p>\n\n<p>Then our covariant derivative is given locally by </p>\n\n<p><span class=\"math-container\">$$\n\\nabla_XY \\;\\; =\\;\\; \\nabla_{x^i\\partial_i} (y^j \\partial_j) \\;\\; =\\;\\; x^i \\nabla_{\\partial_i} (y^j\\partial_j) \\;\\; =\\;\\; x^i \\frac{\\partial y^j}{\\partial e^i} \\frac{\\partial }{\\partial e^j} + x^iy^j \\nabla_{\\partial_i}\\partial_j.\n$$</span></p>\n\n<p>The first term is clearly in the tangent space, but we want to define <span class=\"math-container\">$\\nabla_{\\partial_i}\\partial_j$</span> to lie in the tangent space. We therefore <em>define</em> correction functions <span class=\"math-container\">$\\Gamma_{ij}^k: M\\to \\mathbb{R}$</span> known as the <strong>Christoffel symbols</strong> that satisfy</p>\n\n<p><span class=\"math-container\">$$\n\\nabla_{\\partial_i} \\partial_j \\;\\; \\equiv \\;\\; \\Gamma_{ij}^k \\frac{\\partial}{\\partial e^k}.\n$$</span></p>\n\n<p>Now by manipulating the indices around a little bit we have that</p>\n\n<p><span class=\"math-container\">$$\n\\nabla_XY \\;\\; =\\;\\; \\left (x^i \\frac{\\partial y^k}{\\partial e^i} + x^i y^j \\Gamma_{ij}^k \\right ) \\frac{\\partial}{\\partial e^k}.\n$$</span></p>\n\n<p>The above analysis is valid for <em>any</em> affine connection on a manifold, but on a Riemannian manifold we require the connection to be symmetric and compatible with the metric. 
The <strong>compatibility</strong> condition is that</p>\n\n<p><span class=\"math-container\">$$\nX\\langle Y,Z\\rangle \\;\\; =\\;\\; \\langle \\nabla_XY, Z\\rangle + \\langle Y, \\nabla_XZ\\rangle\n$$</span></p>\n\n<p>while the <strong>symmetry</strong> condition is that </p>\n\n<p><span class=\"math-container\">$$\n\\nabla_XY - \\nabla_YX \\;\\; =\\;\\; [X,Y]\n$$</span></p>\n\n<p>which for the basis vectors reduces to <span class=\"math-container\">$\\nabla_{\\partial_i}\\partial_j - \\nabla_{\\partial_j}\\partial_i = 0$</span>. From these equations we can in general prove that </p>\n\n<p><span class=\"math-container\">$$\n\\langle Z, \\nabla_YX\\rangle \\;\\; =\\;\\; \\frac{1}{2} \\left ( X\\langle Y,Z\\rangle + Y\\langle Z, X\\rangle - Z \\langle X, Y\\rangle - \\langle [X,Z], Y\\rangle - \\langle [Y,Z], X\\rangle - \\langle [X,Y], Z \\rangle \\right ).\n$$</span></p>\n\n<p>If we reduce this down to the basis vectors we obtain</p>\n\n<p><span class=\"math-container\">$$\n\\langle \\partial_s, \\nabla_{\\partial_i}\\partial_j\\rangle \\;\\; =\\;\\; \\Gamma_{ij}^k g_{sk} \\;\\; =\\;\\; \\frac{1}{2} \\left ( \\partial_j g_{si} + \\partial_i g_{sj} - \\partial_s g_{ij} \\right ).\n$$</span></p>\n\n<p>Because we have that <span class=\"math-container\">$g^{sm}$</span> is the inverse of <span class=\"math-container\">$g_{sk}$</span> we have that <span class=\"math-container\">$g_{sk} g^{sm} = \\delta_k^m$</span> and thus we obtain</p>\n\n<p><span class=\"math-container\">$$\n\\Gamma_{ij}^m \\;\\; =\\;\\; \\frac{1}{2} g^{ms} \\left ( \\partial_j g_{si} + \\partial_i g_{sj} - \\partial_s g_{ij} \\right ).\n$$</span></p>\n\n<p>This is the intuitive explanation that I like most regarding Christoffel symbols: they are correction functions to make sure that an affine connection keeps the derivative of a vector field within the tangent space of the manifold. On a Riemannian manifold they have the given form above. 
To get the other equations that you have listed in your question, simply permute the indices to get different copies of the fundamental equation for <span class=\"math-container\">$\\Gamma_{ij}^s$</span>, and then subtract/add them together to get the forms you're interested in. </p>\n" }, { "answer_id": 2128939, "score": 0, "text": "<p>The intuition can be explained (without any formula) in the case of a subvariety in $\\bf R^n$. In the case the Levi-Cita derivative is obtained by composing the usual derivative with the projection on the tangent space. If you are in a chart $\\phi (x_1,...,x_n)$, where $\\phi $is a $N$vector, the tangent space is spanned by the (non orthogonal) family of vectors $X_i={\\partial \\phi \\over \\partial x_i}$. We have $g_{i,j}= &lt;X_i,X_j&gt;$, and writting the projection onto this space is equivalent to inverse the matrix $g_{i,j}$. In other words $\\Gamma _{i,j}^k$ is the $k-$th coordinate in the base $X_i={\\partial \\phi \\over \\partial x_i}$ of the $N$ vector ${\\partial ^2 \\phi \\over \\partial x_i \\partial x_j}$.The two dimensional case is enough to understand how the computation goes. For instnace if we are lucky and the base $X_i$ is othonormal, this Christoffel is just $&lt;{\\partial ^2 \\phi \\over \\partial x_i \\partial x_j},{\\partial \\phi \\over \\partial x_k} &gt;$, and your formula is easy to check. The case of surfaces in $R^3$ already contains all difficulties (see Do Carmo)</p>\n" } ]
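The closed-form expression <span class=\"math-container\">$\\Gamma_{ij}^m = \\frac{1}{2} g^{ms} ( \\partial_j g_{si} + \\partial_i g_{sj} - \\partial_s g_{ij} )$</span> derived above can be checked numerically. The sketch below is not part of the original answers: the function names and the finite-difference step are my own choices, and the metric inverse is hardcoded for the 2x2 case. It evaluates the symbols for the round metric on the unit sphere, coordinates (theta, phi), by central differences of the metric components.

```python
import math

def christoffel(g, x, h=1e-6):
    """Christoffel symbols of the second kind,
    Gamma^m_ij = (1/2) g^{ms} (d_j g_si + d_i g_sj - d_s g_ij),
    with the partial derivatives of the metric g(x) taken by
    central finite differences (2x2 metrics only, for brevity)."""
    n = len(x)

    def dg(s, i, j):
        # partial derivative of g_ij with respect to coordinate x^s
        xp, xm = list(x), list(x)
        xp[s] += h
        xm[s] -= h
        return (g(xp)[i][j] - g(xm)[i][j]) / (2 * h)

    # inverse metric g^{ms}, written out for the 2x2 case
    G = g(x)
    det = G[0][0] * G[1][1] - G[0][1] * G[1][0]
    Ginv = [[ G[1][1] / det, -G[0][1] / det],
            [-G[1][0] / det,  G[0][0] / det]]

    Gamma = [[[0.0] * n for _ in range(n)] for _ in range(n)]
    for m in range(n):
        for i in range(n):
            for j in range(n):
                Gamma[m][i][j] = 0.5 * sum(
                    Ginv[m][s] * (dg(j, s, i) + dg(i, s, j) - dg(s, i, j))
                    for s in range(n))
    return Gamma

# Round metric on the unit sphere: g = diag(1, sin^2(theta))
sphere = lambda x: [[1.0, 0.0], [0.0, math.sin(x[0]) ** 2]]

theta = 1.0
Gamma = christoffel(sphere, [theta, 0.3])
```

At theta = 1 this reproduces the standard sphere values Gamma^theta_{phi phi} = -sin(theta)cos(theta) and Gamma^phi_{theta phi} = Gamma^phi_{phi theta} = cot(theta), up to the finite-difference error.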
Question: Christoffel symbols and metric Answer 1: <p>I'm not exactly sure which "derivation" you're referring to, but I've always seen Christoffel symbols introduced in the following context:</p> <p>Ultimately the motivation is to define the notion of parallel transport, which means when we take a derivative of a vector field, we want our derivative to still be tangent to our manifold. So let's suppose for instance we have a manifold <span class="math-container">$M$</span>, and we have vector fields <span class="math-container">$X, Y \in \mathcal{X}(M)$</span> defined locally in <span class="math-container">$M$</span>. We want an affine connection <span class="math-container">$\nabla:\mathcal{X}(M) \times \mathcal{X}(M) \to \mathcal{X}(M)$</span> to act as a derivative that remains tangent to the manifold. Let <span class="math-container">$\frac{\partial}{\partial e^1}, \ldots, \frac{\partial }{\partial e^n}$</span> be a local basis for the tangent spaces on <span class="math-container">$M$</span> (defined in the same neighborhood as our vector fields), such that we can express our vector fields locally by functions <span class="math-container">$x^i, y^j: M \to \mathbb{R}$</span>:</p> <p><span class="math-container">$$ X \;\; =\;\; x^i \frac{\partial}{\partial e^i} \;\;\; \;\;\; Y \;\; =\;\; y^j \frac{\partial}{\partial e^j}. $$</span></p> <p>Then our covariant derivative is given locally by </p> <p><span class="math-container">$$ \nabla_XY \;\; =\;\; \nabla_{x^i\partial_i} (y^j \partial_j) \;\; =\;\; x^i \nabla_{\partial_i} (y^j\partial_j) \;\; =\;\; x^i \frac{\partial y^j}{\partial e^i} \frac{\partial }{\partial e^j} + x^iy^j \nabla_{\partial_i}\partial_j. $$</span></p> <p>The first term is clearly in the tangent space, but we want to define <span class="math-container">$\nabla_{\partial_i}\partial_j$</span> to lie in the tangent space. 
We therefore <em>define</em> correction functions <span class="math-container">$\Gamma_{ij}^k: M\to \mathbb{R}$</span> known as the <strong>Christoffel symbols</strong> that satisfy</p> <p><span class="math-container">$$ \nabla_{\partial_i} \partial_j \;\; \equiv \;\; \Gamma_{ij}^k \frac{\partial}{\partial e^k}. $$</span></p> <p>Now by manipulating the indices around a little bit we have that</p> <p><span class="math-container">$$ \nabla_XY \;\; =\;\; \left (x^i \frac{\partial y^k}{\partial e^i} + x^i y^j \Gamma_{ij}^k \right ) \frac{\partial}{\partial e^k}. $$</span></p> <p>The above analysis is valid for <em>any</em> affine connection on a manifold, but on a Riemannian manifold we require the connection to be symmetric and compatible with the metric. The <strong>compatibility</strong> condition is that</p> <p><span class="math-container">$$ X\langle Y,Z\rangle \;\; =\;\; \langle \nabla_XY, Z\rangle + \langle Y, \nabla_XZ\rangle $$</span></p> <p>while the <strong>symmetry</strong> condition is that </p> <p><span class="math-container">$$ \nabla_XY - \nabla_YX \;\; =\;\; [X,Y] $$</span></p> <p>which for the basis vectors reduces to <span class="math-container">$\nabla_{\partial_i}\partial_j - \nabla_{\partial_j}\partial_i = 0$</span>. From these equations we can in general prove that </p> <p><span class="math-container">$$ \langle Z, \nabla_YX\rangle \;\; =\;\; \frac{1}{2} \left ( X\langle Y,Z\rangle + Y\langle Z, X\rangle - Z \langle X, Y\rangle - \langle [X,Z], Y\rangle - \langle [Y,Z], X\rangle - \langle [X,Y], Z \rangle \right ). $$</span></p> <p>If we reduce this down to the basis vectors we obtain</p> <p><span class="math-container">$$ \langle \partial_s, \nabla_{\partial_i}\partial_j\rangle \;\; =\;\; \Gamma_{ij}^k g_{sk} \;\; =\;\; \frac{1}{2} \left ( \partial_j g_{si} + \partial_i g_{sj} - \partial_s g_{ij} \right ). 
$$</span></p> <p>Because we have that <span class="math-container">$g^{sm}$</span> is the inverse of <span class="math-container">$g_{sk}$</span> we have that <span class="math-container">$g_{sk} g^{sm} = \delta_k^m$</span> and thus we obtain</p> <p><span class="math-container">$$ \Gamma_{ij}^m \;\; =\;\; \frac{1}{2} g^{ms} \left ( \partial_j g_{si} + \partial_i g_{sj} - \partial_s g_{ij} \right ). $$</span></p> <p>This is the intuitive explanation that I like most regarding Christoffel symbols: they are correction functions to make sure that an affine connection keeps the derivative of a vector field within the tangent space of the manifold. On a Riemannian manifold they have the given form above. To get the other equations that you have listed in your question, simply permute the indices to get different copies of the fundamental equation for <span class="math-container">$\Gamma_{ij}^s$</span>, and then subtract/add them together to get the forms you're interested in. </p> Answer 2: <p>The intuition can be explained (without any formula) in the case of a subvariety in $\bf R^n$. In this case the Levi-Civita derivative is obtained by composing the usual derivative with the projection on the tangent space. If you are in a chart $\phi (x_1,...,x_n)$, where $\phi$ is an $N$-vector, the tangent space is spanned by the (non-orthogonal) family of vectors $X_i={\partial \phi \over \partial x_i}$. We have $g_{i,j}= &lt;X_i,X_j&gt;$, and writing the projection onto this space is equivalent to inverting the matrix $g_{i,j}$. In other words $\Gamma _{i,j}^k$ is the $k$-th coordinate in the basis $X_i={\partial \phi \over \partial x_i}$ of the $N$-vector ${\partial ^2 \phi \over \partial x_i \partial x_j}$. The two-dimensional case is enough to understand how the computation goes. 
For instnace if we are lucky and the base $X_i$ is othonormal, this Christoffel is just $&lt;{\partial ^2 \phi \over \partial x_i \partial x_j},{\partial \phi \over \partial x_k} &gt;$, and your formula is easy to check. The case of surfaces in $R^3$ already contains all difficulties (see Do Carmo)</p> Answer 3: <p>I'm not exactly sure which "derivation" you're referring to, but I've always seen Christoffel symbols introduced in the following context:</p> <p>Ultimately the motivation is to define the notion of parallel transport, which means when we take a derivative of a vector field, we want our derivative to still be tangent to our manifold. So let's suppose for instance we have a manifold <span class="math-container">$M$</span>, and we have vector fields <span class="math-container">$X, Y \in \mathcal{X}(M)$</span> defined locally in <span class="math-container">$M$</span>. We want an affine connection <span class="math-container">$\nabla:\mathcal{X}(M) \times \mathcal{X}(M) \to \mathcal{X}(M)$</span> to act as a derivative that remains tangent to the manifold. Let <span class="math-container">$\frac{\partial}{\partial e^1}, \ldots, \frac{\partial }{\partial e^n}$</span> be a local basis for the tangent spaces on <span class="math-container">$M$</span> (defined in the same neighborhood as our vector fields), such that we can express our vector fields locally by functions <span class="math-container">$x^i, y^j: M \to \mathbb{R}$</span>:</p> <p><span class="math-container">$$ X \;\; =\;\; x^i \frac{\partial}{\partial e^i} \;\;\; \;\;\; Y \;\; =\;\; y^j \frac{\partial}{\partial e^j}. $$</span></p> <p>Then our covariant derivative is given locally by </p> <p><span class="math-container">$$ \nabla_XY \;\; =\;\; \nabla_{x^i\partial_i} (y^j \partial_j) \;\; =\;\; x^i \nabla_{\partial_i} (y^j\partial_j) \;\; =\;\; x^i \frac{\partial y^j}{\partial e^i} \frac{\partial }{\partial e^j} + x^iy^j \nabla_{\partial_i}\partial_j. 
$$</span></p> <p>The first term is clearly in the tangent space, but we want to define <span class="math-container">$\nabla_{\partial_i}\partial_j$</span> to lie in the tangent space. We therefore <em>define</em> correction functions <span class="math-container">$\Gamma_{ij}^k: M\to \mathbb{R}$</span> known as the <strong>Christoffel symbols</strong> that satisfy</p> <p><span class="math-container">$$ \nabla_{\partial_i} \partial_j \;\; \equiv \;\; \Gamma_{ij}^k \frac{\partial}{\partial e^k}. $$</span></p> <p>Now by manipulating the indices around a little bit we have that</p> <p><span class="math-container">$$ \nabla_XY \;\; =\;\; \left (x^i \frac{\partial y^k}{\partial e^i} + x^i y^j \Gamma_{ij}^k \right ) \frac{\partial}{\partial e^k}. $$</span></p> <p>The above analysis is valid for <em>any</em> affine connection on a manifold, but on a Riemannian manifold we require the connection to be symmetric and compatible with the metric. The <strong>compatibility</strong> condition is that</p> <p><span class="math-container">$$ X\langle Y,Z\rangle \;\; =\;\; \langle \nabla_XY, Z\rangle + \langle Y, \nabla_XZ\rangle $$</span></p> <p>while the <strong>symmetry</strong> condition is that </p> <p><span class="math-container">$$ \nabla_XY - \nabla_YX \;\; =\;\; [X,Y] $$</span></p> <p>which for the basis vectors reduces to <span class="math-container">$\nabla_{\partial_i}\partial_j - \nabla_{\partial_j}\partial_i = 0$</span>. From these equations we can in general prove that </p> <p><span class="math-container">$$ \langle Z, \nabla_YX\rangle \;\; =\;\; \frac{1}{2} \left ( X\langle Y,Z\rangle + Y\langle Z, X\rangle - Z \langle X, Y\rangle - \langle [X,Z], Y\rangle - \langle [Y,Z], X\rangle - \langle [X,Y], Z \rangle \right ). 
$$</span></p> <p>If we reduce this down to the basis vectors we obtain</p> <p><span class="math-container">$$ \langle \partial_s, \nabla_{\partial_i}\partial_j\rangle \;\; =\;\; \Gamma_{ij}^k g_{sk} \;\; =\;\; \frac{1}{2} \left ( \partial_j g_{si} + \partial_i g_{sj} - \partial_s g_{ij} \right ). $$</span></p> <p>Because we have that <span class="math-container">$g^{sm}$</span> is the inverse of <span class="math-container">$g_{sk}$</span> we have that <span class="math-container">$g_{sk} g^{sm} = \delta_k^m$</span> and thus we obtain</p> <p><span class="math-container">$$ \Gamma_{ij}^m \;\; =\;\; \frac{1}{2} g^{ms} \left ( \partial_j g_{si} + \partial_i g_{sj} - \partial_s g_{ij} \right ). $$</span></p> <p>This is the intuitive explanation that I like most regarding Christoffel symbols: they are correction functions to make sure that an affine connection keeps the derivative of a vector field within the tangent space of the manifold. On a Riemannian manifold they have the given form above. To get the other equations that you have listed in your question, simply permute the indices to get different copies of the fundamental equation for <span class="math-container">$\Gamma_{ij}^s$</span>, and then subtract/add them together to get the forms you're interested in. </p> Answer 4: <p>The intuition can be explained (without any formula) in the case of a subvariety in $\bf R^n$. In the case the Levi-Cita derivative is obtained by composing the usual derivative with the projection on the tangent space. If you are in a chart $\phi (x_1,...,x_n)$, where $\phi $is a $N$vector, the tangent space is spanned by the (non orthogonal) family of vectors $X_i={\partial \phi \over \partial x_i}$. We have $g_{i,j}= &lt;X_i,X_j&gt;$, and writting the projection onto this space is equivalent to inverse the matrix $g_{i,j}$. 
In other words $\Gamma _{i,j}^k$ is the $k-$th coordinate in the base $X_i={\partial \phi \over \partial x_i}$ of the $N$ vector ${\partial ^2 \phi \over \partial x_i \partial x_j}$.The two dimensional case is enough to understand how the computation goes. For instnace if we are lucky and the base $X_i$ is othonormal, this Christoffel is just $&lt;{\partial ^2 \phi \over \partial x_i \partial x_j},{\partial \phi \over \partial x_k} &gt;$, and your formula is easy to check. The case of surfaces in $R^3$ already contains all difficulties (see Do Carmo)</p> Answer 5: <p>I'm not exactly sure which "derivation" you're referring to, but I've always seen Christoffel symbols introduced in the following context:</p> <p>Ultimately the motivation is to define the notion of parallel transport, which means when we take a derivative of a vector field, we want our derivative to still be tangent to our manifold. So let's suppose for instance we have a manifold <span class="math-container">$M$</span>, and we have vector fields <span class="math-container">$X, Y \in \mathcal{X}(M)$</span> defined locally in <span class="math-container">$M$</span>. We want an affine connection <span class="math-container">$\nabla:\mathcal{X}(M) \times \mathcal{X}(M) \to \mathcal{X}(M)$</span> to act as a derivative that remains tangent to the manifold. Let <span class="math-container">$\frac{\partial}{\partial e^1}, \ldots, \frac{\partial }{\partial e^n}$</span> be a local basis for the tangent spaces on <span class="math-container">$M$</span> (defined in the same neighborhood as our vector fields), such that we can express our vector fields locally by functions <span class="math-container">$x^i, y^j: M \to \mathbb{R}$</span>:</p> <p><span class="math-container">$$ X \;\; =\;\; x^i \frac{\partial}{\partial e^i} \;\;\; \;\;\; Y \;\; =\;\; y^j \frac{\partial}{\partial e^j}. 
$$</span></p> <p>Then our covariant derivative is given locally by </p> <p><span class="math-container">$$ \nabla_XY \;\; =\;\; \nabla_{x^i\partial_i} (y^j \partial_j) \;\; =\;\; x^i \nabla_{\partial_i} (y^j\partial_j) \;\; =\;\; x^i \frac{\partial y^j}{\partial e^i} \frac{\partial }{\partial e^j} + x^iy^j \nabla_{\partial_i}\partial_j. $$</span></p> <p>The first term is clearly in the tangent space, but we want to define <span class="math-container">$\nabla_{\partial_i}\partial_j$</span> to lie in the tangent space. We therefore <em>define</em> correction functions <span class="math-container">$\Gamma_{ij}^k: M\to \mathbb{R}$</span> known as the <strong>Christoffel symbols</strong> that satisfy</p> <p><span class="math-container">$$ \nabla_{\partial_i} \partial_j \;\; \equiv \;\; \Gamma_{ij}^k \frac{\partial}{\partial e^k}. $$</span></p> <p>Now by manipulating the indices around a little bit we have that</p> <p><span class="math-container">$$ \nabla_XY \;\; =\;\; \left (x^i \frac{\partial y^k}{\partial e^i} + x^i y^j \Gamma_{ij}^k \right ) \frac{\partial}{\partial e^k}. $$</span></p> <p>The above analysis is valid for <em>any</em> affine connection on a manifold, but on a Riemannian manifold we require the connection to be symmetric and compatible with the metric. The <strong>compatibility</strong> condition is that</p> <p><span class="math-container">$$ X\langle Y,Z\rangle \;\; =\;\; \langle \nabla_XY, Z\rangle + \langle Y, \nabla_XZ\rangle $$</span></p> <p>while the <strong>symmetry</strong> condition is that </p> <p><span class="math-container">$$ \nabla_XY - \nabla_YX \;\; =\;\; [X,Y] $$</span></p> <p>which for the basis vectors reduces to <span class="math-container">$\nabla_{\partial_i}\partial_j - \nabla_{\partial_j}\partial_i = 0$</span>. 
From these equations we can in general prove that </p> <p><span class="math-container">$$ \langle Z, \nabla_YX\rangle \;\; =\;\; \frac{1}{2} \left ( X\langle Y,Z\rangle + Y\langle Z, X\rangle - Z \langle X, Y\rangle - \langle [X,Z], Y\rangle - \langle [Y,Z], X\rangle - \langle [X,Y], Z \rangle \right ). $$</span></p> <p>If we reduce this down to the basis vectors we obtain</p> <p><span class="math-container">$$ \langle \partial_s, \nabla_{\partial_i}\partial_j\rangle \;\; =\;\; \Gamma_{ij}^k g_{sk} \;\; =\;\; \frac{1}{2} \left ( \partial_j g_{si} + \partial_i g_{sj} - \partial_s g_{ij} \right ). $$</span></p> <p>Because we have that <span class="math-container">$g^{sm}$</span> is the inverse of <span class="math-container">$g_{sk}$</span> we have that <span class="math-container">$g_{sk} g^{sm} = \delta_k^m$</span> and thus we obtain</p> <p><span class="math-container">$$ \Gamma_{ij}^m \;\; =\;\; \frac{1}{2} g^{ms} \left ( \partial_j g_{si} + \partial_i g_{sj} - \partial_s g_{ij} \right ). $$</span></p> <p>This is the intuitive explanation that I like most regarding Christoffel symbols: they are correction functions to make sure that an affine connection keeps the derivative of a vector field within the tangent space of the manifold. On a Riemannian manifold they have the given form above. To get the other equations that you have listed in your question, simply permute the indices to get different copies of the fundamental equation for <span class="math-container">$\Gamma_{ij}^s$</span>, and then subtract/add them together to get the forms you're interested in. </p> Answer 6: <p>The intuition can be explained (without any formula) in the case of a subvariety in $\bf R^n$. In the case the Levi-Cita derivative is obtained by composing the usual derivative with the projection on the tangent space. 
If you are in a chart $\phi (x_1,...,x_n)$, where $\phi $is a $N$vector, the tangent space is spanned by the (non orthogonal) family of vectors $X_i={\partial \phi \over \partial x_i}$. We have $g_{i,j}= &lt;X_i,X_j&gt;$, and writting the projection onto this space is equivalent to inverse the matrix $g_{i,j}$. In other words $\Gamma _{i,j}^k$ is the $k-$th coordinate in the base $X_i={\partial \phi \over \partial x_i}$ of the $N$ vector ${\partial ^2 \phi \over \partial x_i \partial x_j}$.The two dimensional case is enough to understand how the computation goes. For instnace if we are lucky and the base $X_i$ is othonormal, this Christoffel is just $&lt;{\partial ^2 \phi \over \partial x_i \partial x_j},{\partial \phi \over \partial x_k} &gt;$, and your formula is easy to check. The case of surfaces in $R^3$ already contains all difficulties (see Do Carmo)</p> Answer 7: <p>I'm not exactly sure which "derivation" you're referring to, but I've always seen Christoffel symbols introduced in the following context:</p> <p>Ultimately the motivation is to define the notion of parallel transport, which means when we take a derivative of a vector field, we want our derivative to still be tangent to our manifold. So let's suppose for instance we have a manifold <span class="math-container">$M$</span>, and we have vector fields <span class="math-container">$X, Y \in \mathcal{X}(M)$</span> defined locally in <span class="math-container">$M$</span>. We want an affine connection <span class="math-container">$\nabla:\mathcal{X}(M) \times \mathcal{X}(M) \to \mathcal{X}(M)$</span> to act as a derivative that remains tangent to the manifold. 
Let <span class="math-container">$\frac{\partial}{\partial e^1}, \ldots, \frac{\partial }{\partial e^n}$</span> be a local basis for the tangent spaces on <span class="math-container">$M$</span> (defined in the same neighborhood as our vector fields), such that we can express our vector fields locally by functions <span class="math-container">$x^i, y^j: M \to \mathbb{R}$</span>:</p> <p><span class="math-container">$$ X \;\; =\;\; x^i \frac{\partial}{\partial e^i} \;\;\; \;\;\; Y \;\; =\;\; y^j \frac{\partial}{\partial e^j}. $$</span></p> <p>Then our covariant derivative is given locally by </p> <p><span class="math-container">$$ \nabla_XY \;\; =\;\; \nabla_{x^i\partial_i} (y^j \partial_j) \;\; =\;\; x^i \nabla_{\partial_i} (y^j\partial_j) \;\; =\;\; x^i \frac{\partial y^j}{\partial e^i} \frac{\partial }{\partial e^j} + x^iy^j \nabla_{\partial_i}\partial_j. $$</span></p> <p>The first term is clearly in the tangent space, but we want to define <span class="math-container">$\nabla_{\partial_i}\partial_j$</span> to lie in the tangent space. We therefore <em>define</em> correction functions <span class="math-container">$\Gamma_{ij}^k: M\to \mathbb{R}$</span> known as the <strong>Christoffel symbols</strong> that satisfy</p> <p><span class="math-container">$$ \nabla_{\partial_i} \partial_j \;\; \equiv \;\; \Gamma_{ij}^k \frac{\partial}{\partial e^k}. $$</span></p> <p>Now by manipulating the indices around a little bit we have that</p> <p><span class="math-container">$$ \nabla_XY \;\; =\;\; \left (x^i \frac{\partial y^k}{\partial e^i} + x^i y^j \Gamma_{ij}^k \right ) \frac{\partial}{\partial e^k}. $$</span></p> <p>The above analysis is valid for <em>any</em> affine connection on a manifold, but on a Riemannian manifold we require the connection to be symmetric and compatible with the metric. 
The <strong>compatibility</strong> condition is that</p> <p><span class="math-container">$$ X\langle Y,Z\rangle \;\; =\;\; \langle \nabla_XY, Z\rangle + \langle Y, \nabla_XZ\rangle $$</span></p> <p>while the <strong>symmetry</strong> condition is that </p> <p><span class="math-container">$$ \nabla_XY - \nabla_YX \;\; =\;\; [X,Y] $$</span></p> <p>which for the basis vectors reduces to <span class="math-container">$\nabla_{\partial_i}\partial_j - \nabla_{\partial_j}\partial_i = 0$</span>. From these equations we can in general prove that </p> <p><span class="math-container">$$ \langle Z, \nabla_YX\rangle \;\; =\;\; \frac{1}{2} \left ( X\langle Y,Z\rangle + Y\langle Z, X\rangle - Z \langle X, Y\rangle - \langle [X,Z], Y\rangle - \langle [Y,Z], X\rangle - \langle [X,Y], Z \rangle \right ). $$</span></p> <p>If we reduce this down to the basis vectors we obtain</p> <p><span class="math-container">$$ \langle \partial_s, \nabla_{\partial_i}\partial_j\rangle \;\; =\;\; \Gamma_{ij}^k g_{sk} \;\; =\;\; \frac{1}{2} \left ( \partial_j g_{si} + \partial_i g_{sj} - \partial_s g_{ij} \right ). $$</span></p> <p>Since <span class="math-container">$g^{sm}$</span> is the inverse of <span class="math-container">$g_{sk}$</span>, we have <span class="math-container">$g_{sk} g^{sm} = \delta_k^m$</span>, and thus we obtain</p> <p><span class="math-container">$$ \Gamma_{ij}^m \;\; =\;\; \frac{1}{2} g^{ms} \left ( \partial_j g_{si} + \partial_i g_{sj} - \partial_s g_{ij} \right ). $$</span></p> <p>This is the intuitive explanation that I like most regarding Christoffel symbols: they are correction functions to make sure that an affine connection keeps the derivative of a vector field within the tangent space of the manifold. On a Riemannian manifold they have the given form above. 
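<p>As a quick sanity check (my own addition, not part of the original answer), the final formula can be evaluated symbolically. The sketch below uses sympy, with the polar-coordinate metric $g = \mathrm{diag}(1, r^2)$ as an assumed example, and builds $\Gamma_{ij}^m$ directly from $\frac{1}{2} g^{ms}(\partial_j g_{si} + \partial_i g_{sj} - \partial_s g_{ij})$:</p>

```python
import sympy as sp

# Polar coordinates on the plane: ds^2 = dr^2 + r^2 dtheta^2
r, th = sp.symbols('r theta', positive=True)
coords = [r, th]
g = sp.Matrix([[1, 0], [0, r**2]])   # metric components g_{ij}
ginv = g.inv()                       # inverse metric g^{ms}

n = len(coords)
# Gamma^m_{ij} = (1/2) g^{ms} (d_j g_{si} + d_i g_{sj} - d_s g_{ij})
Gamma = [[[sp.simplify(sum(
    sp.Rational(1, 2) * ginv[m, s] * (sp.diff(g[s, i], coords[j])
                                      + sp.diff(g[s, j], coords[i])
                                      - sp.diff(g[i, j], coords[s]))
    for s in range(n)))
    for j in range(n)] for i in range(n)] for m in range(n)]

print(Gamma[0][1][1])  # Gamma^r_{theta,theta} = -r
print(Gamma[1][0][1])  # Gamma^theta_{r,theta} = 1/r
```

<p>The only inputs are the metric components and the chart coordinates, so the same loop reproduces the symbols for any metric you substitute in place of this one.</p>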
To get the other equations that you have listed in your question, simply permute the indices to get different copies of the fundamental equation for <span class="math-container">$\Gamma_{ij}^s$</span>, and then subtract/add them together to get the forms you're interested in. </p> 
To get the other equations that you have listed in your question, simply permute the indices to get different copies of the fundamental equation for <span class="math-container">$\Gamma_{ij}^s$</span>, and then subtract/add them together to get the forms you're interested in. </p> Answer 20: <p>The intuition can be explained (without any formula) in the case of a subvariety in $\bf R^n$. In the case the Levi-Cita derivative is obtained by composing the usual derivative with the projection on the tangent space. If you are in a chart $\phi (x_1,...,x_n)$, where $\phi $is a $N$vector, the tangent space is spanned by the (non orthogonal) family of vectors $X_i={\partial \phi \over \partial x_i}$. We have $g_{i,j}= &lt;X_i,X_j&gt;$, and writting the projection onto this space is equivalent to inverse the matrix $g_{i,j}$. In other words $\Gamma _{i,j}^k$ is the $k-$th coordinate in the base $X_i={\partial \phi \over \partial x_i}$ of the $N$ vector ${\partial ^2 \phi \over \partial x_i \partial x_j}$.The two dimensional case is enough to understand how the computation goes. For instnace if we are lucky and the base $X_i$ is othonormal, this Christoffel is just $&lt;{\partial ^2 \phi \over \partial x_i \partial x_j},{\partial \phi \over \partial x_k} &gt;$, and your formula is easy to check. The case of surfaces in $R^3$ already contains all difficulties (see Do Carmo)</p> Answer 21: <p>I'm not exactly sure which "derivation" you're referring to, but I've always seen Christoffel symbols introduced in the following context:</p> <p>Ultimately the motivation is to define the notion of parallel transport, which means when we take a derivative of a vector field, we want our derivative to still be tangent to our manifold. So let's suppose for instance we have a manifold <span class="math-container">$M$</span>, and we have vector fields <span class="math-container">$X, Y \in \mathcal{X}(M)$</span> defined locally in <span class="math-container">$M$</span>. 
We want an affine connection <span class="math-container">$\nabla:\mathcal{X}(M) \times \mathcal{X}(M) \to \mathcal{X}(M)$</span> to act as a derivative that remains tangent to the manifold. Let <span class="math-container">$\frac{\partial}{\partial e^1}, \ldots, \frac{\partial }{\partial e^n}$</span> be a local basis for the tangent spaces on <span class="math-container">$M$</span> (defined in the same neighborhood as our vector fields), such that we can express our vector fields locally by functions <span class="math-container">$x^i, y^j: M \to \mathbb{R}$</span>:</p> <p><span class="math-container">$$ X \;\; =\;\; x^i \frac{\partial}{\partial e^i} \;\;\; \;\;\; Y \;\; =\;\; y^j \frac{\partial}{\partial e^j}. $$</span></p> <p>Then our covariant derivative is given locally by </p> <p><span class="math-container">$$ \nabla_XY \;\; =\;\; \nabla_{x^i\partial_i} (y^j \partial_j) \;\; =\;\; x^i \nabla_{\partial_i} (y^j\partial_j) \;\; =\;\; x^i \frac{\partial y^j}{\partial e^i} \frac{\partial }{\partial e^j} + x^iy^j \nabla_{\partial_i}\partial_j. $$</span></p> <p>The first term is clearly in the tangent space, but we want to define <span class="math-container">$\nabla_{\partial_i}\partial_j$</span> to lie in the tangent space. We therefore <em>define</em> correction functions <span class="math-container">$\Gamma_{ij}^k: M\to \mathbb{R}$</span> known as the <strong>Christoffel symbols</strong> that satisfy</p> <p><span class="math-container">$$ \nabla_{\partial_i} \partial_j \;\; \equiv \;\; \Gamma_{ij}^k \frac{\partial}{\partial e^k}. $$</span></p> <p>Now by manipulating the indices around a little bit we have that</p> <p><span class="math-container">$$ \nabla_XY \;\; =\;\; \left (x^i \frac{\partial y^k}{\partial e^i} + x^i y^j \Gamma_{ij}^k \right ) \frac{\partial}{\partial e^k}. 
$$</span></p> <p>The above analysis is valid for <em>any</em> affine connection on a manifold, but on a Riemannian manifold we require the connection to be symmetric and compatible with the metric. The <strong>compatibility</strong> condition is that</p> <p><span class="math-container">$$ X\langle Y,Z\rangle \;\; =\;\; \langle \nabla_XY, Z\rangle + \langle Y, \nabla_XZ\rangle $$</span></p> <p>while the <strong>symmetry</strong> condition is that </p> <p><span class="math-container">$$ \nabla_XY - \nabla_YX \;\; =\;\; [X,Y] $$</span></p> <p>which for the basis vectors reduces to <span class="math-container">$\nabla_{\partial_i}\partial_j - \nabla_{\partial_j}\partial_i = 0$</span>. From these equations we can in general prove that </p> <p><span class="math-container">$$ \langle Z, \nabla_YX\rangle \;\; =\;\; \frac{1}{2} \left ( X\langle Y,Z\rangle + Y\langle Z, X\rangle - Z \langle X, Y\rangle - \langle [X,Z], Y\rangle - \langle [Y,Z], X\rangle - \langle [X,Y], Z \rangle \right ). $$</span></p> <p>If we reduce this down to the basis vectors we obtain</p> <p><span class="math-container">$$ \langle \partial_s, \nabla_{\partial_i}\partial_j\rangle \;\; =\;\; \Gamma_{ij}^k g_{sk} \;\; =\;\; \frac{1}{2} \left ( \partial_j g_{si} + \partial_i g_{sj} - \partial_s g_{ij} \right ). $$</span></p> <p>Because we have that <span class="math-container">$g^{sm}$</span> is the inverse of <span class="math-container">$g_{sk}$</span> we have that <span class="math-container">$g_{sk} g^{sm} = \delta_k^m$</span> and thus we obtain</p> <p><span class="math-container">$$ \Gamma_{ij}^m \;\; =\;\; \frac{1}{2} g^{ms} \left ( \partial_j g_{si} + \partial_i g_{sj} - \partial_s g_{ij} \right ). $$</span></p> <p>This is the intuitive explanation that I like most regarding Christoffel symbols: they are correction functions to make sure that an affine connection keeps the derivative of a vector field within the tangent space of the manifold. 
On a Riemannian manifold they have the given form above. To get the other equations that you have listed in your question, simply permute the indices to get different copies of the fundamental equation for <span class="math-container">$\Gamma_{ij}^s$</span>, and then subtract/add them together to get the forms you're interested in. </p> Answer 22: <p>The intuition can be explained (without any formula) in the case of a subvariety in $\bf R^n$. In the case the Levi-Cita derivative is obtained by composing the usual derivative with the projection on the tangent space. If you are in a chart $\phi (x_1,...,x_n)$, where $\phi $is a $N$vector, the tangent space is spanned by the (non orthogonal) family of vectors $X_i={\partial \phi \over \partial x_i}$. We have $g_{i,j}= &lt;X_i,X_j&gt;$, and writting the projection onto this space is equivalent to inverse the matrix $g_{i,j}$. In other words $\Gamma _{i,j}^k$ is the $k-$th coordinate in the base $X_i={\partial \phi \over \partial x_i}$ of the $N$ vector ${\partial ^2 \phi \over \partial x_i \partial x_j}$.The two dimensional case is enough to understand how the computation goes. For instnace if we are lucky and the base $X_i$ is othonormal, this Christoffel is just $&lt;{\partial ^2 \phi \over \partial x_i \partial x_j},{\partial \phi \over \partial x_k} &gt;$, and your formula is easy to check. The case of surfaces in $R^3$ already contains all difficulties (see Do Carmo)</p> Answer 23: <p>I'm not exactly sure which "derivation" you're referring to, but I've always seen Christoffel symbols introduced in the following context:</p> <p>Ultimately the motivation is to define the notion of parallel transport, which means when we take a derivative of a vector field, we want our derivative to still be tangent to our manifold. 
So let's suppose for instance we have a manifold <span class="math-container">$M$</span>, and we have vector fields <span class="math-container">$X, Y \in \mathcal{X}(M)$</span> defined locally in <span class="math-container">$M$</span>. We want an affine connection <span class="math-container">$\nabla:\mathcal{X}(M) \times \mathcal{X}(M) \to \mathcal{X}(M)$</span> to act as a derivative that remains tangent to the manifold. Let <span class="math-container">$\frac{\partial}{\partial e^1}, \ldots, \frac{\partial }{\partial e^n}$</span> be a local basis for the tangent spaces on <span class="math-container">$M$</span> (defined in the same neighborhood as our vector fields), such that we can express our vector fields locally by functions <span class="math-container">$x^i, y^j: M \to \mathbb{R}$</span>:</p> <p><span class="math-container">$$ X \;\; =\;\; x^i \frac{\partial}{\partial e^i} \;\;\; \;\;\; Y \;\; =\;\; y^j \frac{\partial}{\partial e^j}. $$</span></p> <p>Then our covariant derivative is given locally by </p> <p><span class="math-container">$$ \nabla_XY \;\; =\;\; \nabla_{x^i\partial_i} (y^j \partial_j) \;\; =\;\; x^i \nabla_{\partial_i} (y^j\partial_j) \;\; =\;\; x^i \frac{\partial y^j}{\partial e^i} \frac{\partial }{\partial e^j} + x^iy^j \nabla_{\partial_i}\partial_j. $$</span></p> <p>The first term is clearly in the tangent space, but we want to define <span class="math-container">$\nabla_{\partial_i}\partial_j$</span> to lie in the tangent space. We therefore <em>define</em> correction functions <span class="math-container">$\Gamma_{ij}^k: M\to \mathbb{R}$</span> known as the <strong>Christoffel symbols</strong> that satisfy</p> <p><span class="math-container">$$ \nabla_{\partial_i} \partial_j \;\; \equiv \;\; \Gamma_{ij}^k \frac{\partial}{\partial e^k}. 
$$</span></p> <p>Now by manipulating the indices around a little bit we have that</p> <p><span class="math-container">$$ \nabla_XY \;\; =\;\; \left (x^i \frac{\partial y^k}{\partial e^i} + x^i y^j \Gamma_{ij}^k \right ) \frac{\partial}{\partial e^k}. $$</span></p> <p>The above analysis is valid for <em>any</em> affine connection on a manifold, but on a Riemannian manifold we require the connection to be symmetric and compatible with the metric. The <strong>compatibility</strong> condition is that</p> <p><span class="math-container">$$ X\langle Y,Z\rangle \;\; =\;\; \langle \nabla_XY, Z\rangle + \langle Y, \nabla_XZ\rangle $$</span></p> <p>while the <strong>symmetry</strong> condition is that </p> <p><span class="math-container">$$ \nabla_XY - \nabla_YX \;\; =\;\; [X,Y] $$</span></p> <p>which for the basis vectors reduces to <span class="math-container">$\nabla_{\partial_i}\partial_j - \nabla_{\partial_j}\partial_i = 0$</span>. From these equations we can in general prove that </p> <p><span class="math-container">$$ \langle Z, \nabla_YX\rangle \;\; =\;\; \frac{1}{2} \left ( X\langle Y,Z\rangle + Y\langle Z, X\rangle - Z \langle X, Y\rangle - \langle [X,Z], Y\rangle - \langle [Y,Z], X\rangle - \langle [X,Y], Z \rangle \right ). $$</span></p> <p>If we reduce this down to the basis vectors we obtain</p> <p><span class="math-container">$$ \langle \partial_s, \nabla_{\partial_i}\partial_j\rangle \;\; =\;\; \Gamma_{ij}^k g_{sk} \;\; =\;\; \frac{1}{2} \left ( \partial_j g_{si} + \partial_i g_{sj} - \partial_s g_{ij} \right ). $$</span></p> <p>Because we have that <span class="math-container">$g^{sm}$</span> is the inverse of <span class="math-container">$g_{sk}$</span> we have that <span class="math-container">$g_{sk} g^{sm} = \delta_k^m$</span> and thus we obtain</p> <p><span class="math-container">$$ \Gamma_{ij}^m \;\; =\;\; \frac{1}{2} g^{ms} \left ( \partial_j g_{si} + \partial_i g_{sj} - \partial_s g_{ij} \right ). 
$$</span></p> <p>This is the intuitive explanation that I like most regarding Christoffel symbols: they are correction functions to make sure that an affine connection keeps the derivative of a vector field within the tangent space of the manifold. On a Riemannian manifold they have the given form above. To get the other equations that you have listed in your question, simply permute the indices to get different copies of the fundamental equation for <span class="math-container">$\Gamma_{ij}^s$</span>, and then subtract/add them together to get the forms you're interested in. </p> Answer 24: <p>The intuition can be explained (without any formula) in the case of a subvariety in $\bf R^n$. In the case the Levi-Cita derivative is obtained by composing the usual derivative with the projection on the tangent space. If you are in a chart $\phi (x_1,...,x_n)$, where $\phi $is a $N$vector, the tangent space is spanned by the (non orthogonal) family of vectors $X_i={\partial \phi \over \partial x_i}$. We have $g_{i,j}= &lt;X_i,X_j&gt;$, and writting the projection onto this space is equivalent to inverse the matrix $g_{i,j}$. In other words $\Gamma _{i,j}^k$ is the $k-$th coordinate in the base $X_i={\partial \phi \over \partial x_i}$ of the $N$ vector ${\partial ^2 \phi \over \partial x_i \partial x_j}$.The two dimensional case is enough to understand how the computation goes. For instnace if we are lucky and the base $X_i$ is othonormal, this Christoffel is just $&lt;{\partial ^2 \phi \over \partial x_i \partial x_j},{\partial \phi \over \partial x_k} &gt;$, and your formula is easy to check. 
The case of surfaces in $R^3$ already contains all difficulties (see Do Carmo)</p> Answer 25: <p>I'm not exactly sure which "derivation" you're referring to, but I've always seen Christoffel symbols introduced in the following context:</p> <p>Ultimately the motivation is to define the notion of parallel transport, which means when we take a derivative of a vector field, we want our derivative to still be tangent to our manifold. So let's suppose for instance we have a manifold <span class="math-container">$M$</span>, and we have vector fields <span class="math-container">$X, Y \in \mathcal{X}(M)$</span> defined locally in <span class="math-container">$M$</span>. We want an affine connection <span class="math-container">$\nabla:\mathcal{X}(M) \times \mathcal{X}(M) \to \mathcal{X}(M)$</span> to act as a derivative that remains tangent to the manifold. Let <span class="math-container">$\frac{\partial}{\partial e^1}, \ldots, \frac{\partial }{\partial e^n}$</span> be a local basis for the tangent spaces on <span class="math-container">$M$</span> (defined in the same neighborhood as our vector fields), such that we can express our vector fields locally by functions <span class="math-container">$x^i, y^j: M \to \mathbb{R}$</span>:</p> <p><span class="math-container">$$ X \;\; =\;\; x^i \frac{\partial}{\partial e^i} \;\;\; \;\;\; Y \;\; =\;\; y^j \frac{\partial}{\partial e^j}. $$</span></p> <p>Then our covariant derivative is given locally by </p> <p><span class="math-container">$$ \nabla_XY \;\; =\;\; \nabla_{x^i\partial_i} (y^j \partial_j) \;\; =\;\; x^i \nabla_{\partial_i} (y^j\partial_j) \;\; =\;\; x^i \frac{\partial y^j}{\partial e^i} \frac{\partial }{\partial e^j} + x^iy^j \nabla_{\partial_i}\partial_j. $$</span></p> <p>The first term is clearly in the tangent space, but we want to define <span class="math-container">$\nabla_{\partial_i}\partial_j$</span> to lie in the tangent space. 
We therefore <em>define</em> correction functions <span class="math-container">$\Gamma_{ij}^k: M\to \mathbb{R}$</span> known as the <strong>Christoffel symbols</strong> that satisfy</p> <p><span class="math-container">$$ \nabla_{\partial_i} \partial_j \;\; \equiv \;\; \Gamma_{ij}^k \frac{\partial}{\partial e^k}. $$</span></p> <p>Now by manipulating the indices around a little bit we have that</p> <p><span class="math-container">$$ \nabla_XY \;\; =\;\; \left (x^i \frac{\partial y^k}{\partial e^i} + x^i y^j \Gamma_{ij}^k \right ) \frac{\partial}{\partial e^k}. $$</span></p> <p>The above analysis is valid for <em>any</em> affine connection on a manifold, but on a Riemannian manifold we require the connection to be symmetric and compatible with the metric. The <strong>compatibility</strong> condition is that</p> <p><span class="math-container">$$ X\langle Y,Z\rangle \;\; =\;\; \langle \nabla_XY, Z\rangle + \langle Y, \nabla_XZ\rangle $$</span></p> <p>while the <strong>symmetry</strong> condition is that </p> <p><span class="math-container">$$ \nabla_XY - \nabla_YX \;\; =\;\; [X,Y] $$</span></p> <p>which for the basis vectors reduces to <span class="math-container">$\nabla_{\partial_i}\partial_j - \nabla_{\partial_j}\partial_i = 0$</span>. From these equations we can in general prove that </p> <p><span class="math-container">$$ \langle Z, \nabla_YX\rangle \;\; =\;\; \frac{1}{2} \left ( X\langle Y,Z\rangle + Y\langle Z, X\rangle - Z \langle X, Y\rangle - \langle [X,Z], Y\rangle - \langle [Y,Z], X\rangle - \langle [X,Y], Z \rangle \right ). $$</span></p> <p>If we reduce this down to the basis vectors we obtain</p> <p><span class="math-container">$$ \langle \partial_s, \nabla_{\partial_i}\partial_j\rangle \;\; =\;\; \Gamma_{ij}^k g_{sk} \;\; =\;\; \frac{1}{2} \left ( \partial_j g_{si} + \partial_i g_{sj} - \partial_s g_{ij} \right ). 
$$</span></p> <p>Because we have that <span class="math-container">$g^{sm}$</span> is the inverse of <span class="math-container">$g_{sk}$</span> we have that <span class="math-container">$g_{sk} g^{sm} = \delta_k^m$</span> and thus we obtain</p> <p><span class="math-container">$$ \Gamma_{ij}^m \;\; =\;\; \frac{1}{2} g^{ms} \left ( \partial_j g_{si} + \partial_i g_{sj} - \partial_s g_{ij} \right ). $$</span></p> <p>This is the intuitive explanation that I like most regarding Christoffel symbols: they are correction functions to make sure that an affine connection keeps the derivative of a vector field within the tangent space of the manifold. On a Riemannian manifold they have the given form above. To get the other equations that you have listed in your question, simply permute the indices to get different copies of the fundamental equation for <span class="math-container">$\Gamma_{ij}^s$</span>, and then subtract/add them together to get the forms you're interested in. </p> Answer 26: <p>The intuition can be explained (without any formula) in the case of a subvariety in $\bf R^n$. In the case the Levi-Cita derivative is obtained by composing the usual derivative with the projection on the tangent space. If you are in a chart $\phi (x_1,...,x_n)$, where $\phi $is a $N$vector, the tangent space is spanned by the (non orthogonal) family of vectors $X_i={\partial \phi \over \partial x_i}$. We have $g_{i,j}= &lt;X_i,X_j&gt;$, and writting the projection onto this space is equivalent to inverse the matrix $g_{i,j}$. In other words $\Gamma _{i,j}^k$ is the $k-$th coordinate in the base $X_i={\partial \phi \over \partial x_i}$ of the $N$ vector ${\partial ^2 \phi \over \partial x_i \partial x_j}$.The two dimensional case is enough to understand how the computation goes. 
For instnace if we are lucky and the base $X_i$ is othonormal, this Christoffel is just $&lt;{\partial ^2 \phi \over \partial x_i \partial x_j},{\partial \phi \over \partial x_k} &gt;$, and your formula is easy to check. The case of surfaces in $R^3$ already contains all difficulties (see Do Carmo)</p> Answer 27: <p>I'm not exactly sure which "derivation" you're referring to, but I've always seen Christoffel symbols introduced in the following context:</p> <p>Ultimately the motivation is to define the notion of parallel transport, which means when we take a derivative of a vector field, we want our derivative to still be tangent to our manifold. So let's suppose for instance we have a manifold <span class="math-container">$M$</span>, and we have vector fields <span class="math-container">$X, Y \in \mathcal{X}(M)$</span> defined locally in <span class="math-container">$M$</span>. We want an affine connection <span class="math-container">$\nabla:\mathcal{X}(M) \times \mathcal{X}(M) \to \mathcal{X}(M)$</span> to act as a derivative that remains tangent to the manifold. Let <span class="math-container">$\frac{\partial}{\partial e^1}, \ldots, \frac{\partial }{\partial e^n}$</span> be a local basis for the tangent spaces on <span class="math-container">$M$</span> (defined in the same neighborhood as our vector fields), such that we can express our vector fields locally by functions <span class="math-container">$x^i, y^j: M \to \mathbb{R}$</span>:</p> <p><span class="math-container">$$ X \;\; =\;\; x^i \frac{\partial}{\partial e^i} \;\;\; \;\;\; Y \;\; =\;\; y^j \frac{\partial}{\partial e^j}. $$</span></p> <p>Then our covariant derivative is given locally by </p> <p><span class="math-container">$$ \nabla_XY \;\; =\;\; \nabla_{x^i\partial_i} (y^j \partial_j) \;\; =\;\; x^i \nabla_{\partial_i} (y^j\partial_j) \;\; =\;\; x^i \frac{\partial y^j}{\partial e^i} \frac{\partial }{\partial e^j} + x^iy^j \nabla_{\partial_i}\partial_j. 
$$</span></p> <p>The first term is clearly in the tangent space, but we want to define <span class="math-container">$\nabla_{\partial_i}\partial_j$</span> to lie in the tangent space. We therefore <em>define</em> correction functions <span class="math-container">$\Gamma_{ij}^k: M\to \mathbb{R}$</span> known as the <strong>Christoffel symbols</strong> that satisfy</p> <p><span class="math-container">$$ \nabla_{\partial_i} \partial_j \;\; \equiv \;\; \Gamma_{ij}^k \frac{\partial}{\partial e^k}. $$</span></p> <p>Now by manipulating the indices around a little bit we have that</p> <p><span class="math-container">$$ \nabla_XY \;\; =\;\; \left (x^i \frac{\partial y^k}{\partial e^i} + x^i y^j \Gamma_{ij}^k \right ) \frac{\partial}{\partial e^k}. $$</span></p> <p>The above analysis is valid for <em>any</em> affine connection on a manifold, but on a Riemannian manifold we require the connection to be symmetric and compatible with the metric. The <strong>compatibility</strong> condition is that</p> <p><span class="math-container">$$ X\langle Y,Z\rangle \;\; =\;\; \langle \nabla_XY, Z\rangle + \langle Y, \nabla_XZ\rangle $$</span></p> <p>while the <strong>symmetry</strong> condition is that </p> <p><span class="math-container">$$ \nabla_XY - \nabla_YX \;\; =\;\; [X,Y] $$</span></p> <p>which for the basis vectors reduces to <span class="math-container">$\nabla_{\partial_i}\partial_j - \nabla_{\partial_j}\partial_i = 0$</span>. From these equations we can in general prove that </p> <p><span class="math-container">$$ \langle Z, \nabla_YX\rangle \;\; =\;\; \frac{1}{2} \left ( X\langle Y,Z\rangle + Y\langle Z, X\rangle - Z \langle X, Y\rangle - \langle [X,Z], Y\rangle - \langle [Y,Z], X\rangle - \langle [X,Y], Z \rangle \right ). 
$$</span></p> <p>If we reduce this down to the basis vectors we obtain</p> <p><span class="math-container">$$ \langle \partial_s, \nabla_{\partial_i}\partial_j\rangle \;\; =\;\; \Gamma_{ij}^k g_{sk} \;\; =\;\; \frac{1}{2} \left ( \partial_j g_{si} + \partial_i g_{sj} - \partial_s g_{ij} \right ). $$</span></p> <p>Because we have that <span class="math-container">$g^{sm}$</span> is the inverse of <span class="math-container">$g_{sk}$</span> we have that <span class="math-container">$g_{sk} g^{sm} = \delta_k^m$</span> and thus we obtain</p> <p><span class="math-container">$$ \Gamma_{ij}^m \;\; =\;\; \frac{1}{2} g^{ms} \left ( \partial_j g_{si} + \partial_i g_{sj} - \partial_s g_{ij} \right ). $$</span></p> <p>This is the intuitive explanation that I like most regarding Christoffel symbols: they are correction functions to make sure that an affine connection keeps the derivative of a vector field within the tangent space of the manifold. On a Riemannian manifold they have the given form above. To get the other equations that you have listed in your question, simply permute the indices to get different copies of the fundamental equation for <span class="math-container">$\Gamma_{ij}^s$</span>, and then subtract/add them together to get the forms you're interested in. </p> Answer 28: <p>The intuition can be explained (without any formula) in the case of a subvariety in $\bf R^n$. In the case the Levi-Cita derivative is obtained by composing the usual derivative with the projection on the tangent space. If you are in a chart $\phi (x_1,...,x_n)$, where $\phi $is a $N$vector, the tangent space is spanned by the (non orthogonal) family of vectors $X_i={\partial \phi \over \partial x_i}$. We have $g_{i,j}= &lt;X_i,X_j&gt;$, and writting the projection onto this space is equivalent to inverse the matrix $g_{i,j}$. 
In other words $\Gamma _{i,j}^k$ is the $k-$th coordinate in the base $X_i={\partial \phi \over \partial x_i}$ of the $N$ vector ${\partial ^2 \phi \over \partial x_i \partial x_j}$.The two dimensional case is enough to understand how the computation goes. For instnace if we are lucky and the base $X_i$ is othonormal, this Christoffel is just $&lt;{\partial ^2 \phi \over \partial x_i \partial x_j},{\partial \phi \over \partial x_k} &gt;$, and your formula is easy to check. The case of surfaces in $R^3$ already contains all difficulties (see Do Carmo)</p> Answer 29: <p>I'm not exactly sure which "derivation" you're referring to, but I've always seen Christoffel symbols introduced in the following context:</p> <p>Ultimately the motivation is to define the notion of parallel transport, which means when we take a derivative of a vector field, we want our derivative to still be tangent to our manifold. So let's suppose for instance we have a manifold <span class="math-container">$M$</span>, and we have vector fields <span class="math-container">$X, Y \in \mathcal{X}(M)$</span> defined locally in <span class="math-container">$M$</span>. We want an affine connection <span class="math-container">$\nabla:\mathcal{X}(M) \times \mathcal{X}(M) \to \mathcal{X}(M)$</span> to act as a derivative that remains tangent to the manifold. Let <span class="math-container">$\frac{\partial}{\partial e^1}, \ldots, \frac{\partial }{\partial e^n}$</span> be a local basis for the tangent spaces on <span class="math-container">$M$</span> (defined in the same neighborhood as our vector fields), such that we can express our vector fields locally by functions <span class="math-container">$x^i, y^j: M \to \mathbb{R}$</span>:</p> <p><span class="math-container">$$ X \;\; =\;\; x^i \frac{\partial}{\partial e^i} \;\;\; \;\;\; Y \;\; =\;\; y^j \frac{\partial}{\partial e^j}. 
$$</span></p> <p>Then our covariant derivative is given locally by </p> <p><span class="math-container">$$ \nabla_XY \;\; =\;\; \nabla_{x^i\partial_i} (y^j \partial_j) \;\; =\;\; x^i \nabla_{\partial_i} (y^j\partial_j) \;\; =\;\; x^i \frac{\partial y^j}{\partial e^i} \frac{\partial }{\partial e^j} + x^iy^j \nabla_{\partial_i}\partial_j. $$</span></p> <p>The first term is clearly in the tangent space, but we want to define <span class="math-container">$\nabla_{\partial_i}\partial_j$</span> to lie in the tangent space. We therefore <em>define</em> correction functions <span class="math-container">$\Gamma_{ij}^k: M\to \mathbb{R}$</span> known as the <strong>Christoffel symbols</strong> that satisfy</p> <p><span class="math-container">$$ \nabla_{\partial_i} \partial_j \;\; \equiv \;\; \Gamma_{ij}^k \frac{\partial}{\partial e^k}. $$</span></p> <p>Now by manipulating the indices around a little bit we have that</p> <p><span class="math-container">$$ \nabla_XY \;\; =\;\; \left (x^i \frac{\partial y^k}{\partial e^i} + x^i y^j \Gamma_{ij}^k \right ) \frac{\partial}{\partial e^k}. $$</span></p> <p>The above analysis is valid for <em>any</em> affine connection on a manifold, but on a Riemannian manifold we require the connection to be symmetric and compatible with the metric. The <strong>compatibility</strong> condition is that</p> <p><span class="math-container">$$ X\langle Y,Z\rangle \;\; =\;\; \langle \nabla_XY, Z\rangle + \langle Y, \nabla_XZ\rangle $$</span></p> <p>while the <strong>symmetry</strong> condition is that </p> <p><span class="math-container">$$ \nabla_XY - \nabla_YX \;\; =\;\; [X,Y] $$</span></p> <p>which for the basis vectors reduces to <span class="math-container">$\nabla_{\partial_i}\partial_j - \nabla_{\partial_j}\partial_i = 0$</span>. 
From these equations we can in general prove that </p> <p><span class="math-container">$$ \langle Z, \nabla_YX\rangle \;\; =\;\; \frac{1}{2} \left ( X\langle Y,Z\rangle + Y\langle Z, X\rangle - Z \langle X, Y\rangle - \langle [X,Z], Y\rangle - \langle [Y,Z], X\rangle - \langle [X,Y], Z \rangle \right ). $$</span></p> <p>If we reduce this down to the basis vectors we obtain</p> <p><span class="math-container">$$ \langle \partial_s, \nabla_{\partial_i}\partial_j\rangle \;\; =\;\; \Gamma_{ij}^k g_{sk} \;\; =\;\; \frac{1}{2} \left ( \partial_j g_{si} + \partial_i g_{sj} - \partial_s g_{ij} \right ). $$</span></p> <p>Because we have that <span class="math-container">$g^{sm}$</span> is the inverse of <span class="math-container">$g_{sk}$</span> we have that <span class="math-container">$g_{sk} g^{sm} = \delta_k^m$</span> and thus we obtain</p> <p><span class="math-container">$$ \Gamma_{ij}^m \;\; =\;\; \frac{1}{2} g^{ms} \left ( \partial_j g_{si} + \partial_i g_{sj} - \partial_s g_{ij} \right ). $$</span></p> <p>This is the intuitive explanation that I like most regarding Christoffel symbols: they are correction functions to make sure that an affine connection keeps the derivative of a vector field within the tangent space of the manifold. On a Riemannian manifold they have the given form above. To get the other equations that you have listed in your question, simply permute the indices to get different copies of the fundamental equation for <span class="math-container">$\Gamma_{ij}^s$</span>, and then subtract/add them together to get the forms you're interested in. </p> Answer 30: <p>The intuition can be explained (without any formula) in the case of a subvariety in $\bf R^n$. In the case the Levi-Cita derivative is obtained by composing the usual derivative with the projection on the tangent space. 
If you are in a chart $\phi (x_1,...,x_n)$, where $\phi$ is an $N$-vector, the tangent space is spanned by the (non-orthogonal) family of vectors $X_i={\partial \phi \over \partial x_i}$. We have $g_{i,j}= &lt;X_i,X_j&gt;$, and writing the projection onto this space is equivalent to inverting the matrix $g_{i,j}$. In other words $\Gamma _{i,j}^k$ is the $k$-th coordinate in the base $X_i={\partial \phi \over \partial x_i}$ of the $N$-vector ${\partial ^2 \phi \over \partial x_i \partial x_j}$. The two-dimensional case is enough to understand how the computation goes. For instance, if we are lucky and the base $X_i$ is orthonormal, this Christoffel symbol is just $&lt;{\partial ^2 \phi \over \partial x_i \partial x_j},{\partial \phi \over \partial x_k} &gt;$, and your formula is easy to check. The case of surfaces in $R^3$ already contains all the difficulties (see Do Carmo).</p>
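The closing formula $\Gamma_{ij}^m = \frac{1}{2} g^{ms} \left( \partial_j g_{si} + \partial_i g_{sj} - \partial_s g_{ij} \right)$ lends itself to a quick numerical sanity check. The sketch below is mine, not part of either answer: it evaluates the formula with central differences for an illustrative choice of metric (the round metric on the unit 2-sphere, an assumption made here just to have a concrete $g_{ij}$), and recovers the well-known symbols $\Gamma^\theta_{\phi\phi} = -\sin\theta\cos\theta$ and $\Gamma^\phi_{\theta\phi} = \cot\theta$.

```python
import math

def metric(x):
    # Illustrative metric (an assumption, not from the answers above): the
    # round metric on the unit 2-sphere in coordinates x = (theta, phi),
    # g = diag(1, sin^2 theta).
    theta = x[0]
    return [[1.0, 0.0], [0.0, math.sin(theta) ** 2]]

def inv2(g):
    # Inverse of a 2x2 matrix: g^{ms} with g_{sk} g^{sm} = delta_k^m.
    det = g[0][0] * g[1][1] - g[0][1] * g[1][0]
    return [[g[1][1] / det, -g[0][1] / det],
            [-g[1][0] / det, g[0][0] / det]]

def dg(x, k, h=1e-6):
    # Central-difference approximation of d_k g_{ab} at the point x.
    xp, xm = list(x), list(x)
    xp[k] += h
    xm[k] -= h
    gp, gm = metric(xp), metric(xm)
    return [[(gp[a][b] - gm[a][b]) / (2 * h) for b in range(2)]
            for a in range(2)]

def christoffel(x, m, i, j):
    # Gamma^m_{ij} = 1/2 g^{ms} (d_j g_{si} + d_i g_{sj} - d_s g_{ij}),
    # where d[k][a][b] holds d_k g_{ab}.
    ginv = inv2(metric(x))
    d = [dg(x, k) for k in range(2)]
    return 0.5 * sum(ginv[m][s] * (d[j][s][i] + d[i][s][j] - d[s][i][j])
                     for s in range(2))

x = (0.7, 0.3)
print(christoffel(x, 0, 1, 1))  # close to -sin(0.7)*cos(0.7)
print(christoffel(x, 1, 0, 1))  # close to cos(0.7)/sin(0.7)
```

With indices $0 = \theta$, $1 = \phi$ this reproduces the two nonzero symbol families up to finite-difference error, and swapping $i$ and $j$ checks the symmetry $\Gamma^m_{ij} = \Gamma^m_{ji}$ that the torsion-free condition forces.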
Proving an alternate quadratic formula
2,105,240
8
https://math.stackexchange.com/questions/2105240/proving-an-alternate-quadratic-formula
<p>It is well known that the quadratic formula for $ax^2+bx+c=0$ is given by$$x=\dfrac {-b\pm\sqrt{b^2-4ac}}{2a}\tag1$$ Where $a\ne0$. <em>However</em>, I read somewhere else that given $ax^2+bx+c=0$, we have another solution for $x$ as$$x=\dfrac {-2c}{b\pm\sqrt{b^2-4ac}}\tag2$$ Where $c\ne0$. In fact, $(2)$ gives solutions for $0x^2+bx+c=0$!</p> <blockquote> <p><strong>Question:</strong></p> </blockquote> <ol> <li>How would you prove $(2)$?</li> <li>Why is $(2)$ somewhat similar to $\dfrac 1{(1)}$ but with $2a$ replaced with $-2c$?</li> </ol>
[ { "answer_id": 2105245, "score": 16, "text": "<p>You can write $ax^2+bx+c=0$ as $a+b(\\frac 1x)+c(\\frac 1x)^2=0,$ solve for $\\frac 1x$, then invert it. You can have the minus sign top or bottom as you like by multiplying top and bottom by $-1$</p>\n" }, { "answer_id": 2105247, "score": 12, "text": "<p>Take (2), and rationalize the denominator: </p>\n\n<p>$$\\frac{-2c}{b \\pm \\sqrt{b^2-4ac}} = \\frac{-2c}{b \\pm \\sqrt{b^2-4ac}}\\frac{b \\mp \\sqrt{b^2-4ac}}{b \\mp \\sqrt{b^2-4ac}} = $$\n$$\\frac{-2c(b\\mp\\sqrt{b^2-4ac})}{b^2-(b^2-4ac)} = \\frac{-b \\mp \\sqrt{b^2-4ac}}{2a}$$</p>\n" }, { "answer_id": 2105283, "score": 9, "text": "<p>Just for the fun trivia, here are some other \"quadratic formulas\" $(a&gt;0):$</p>\n\n<p>for $|\\frac{b^2}{2a}-2c-1|\\le1:$</p>\n\n<p>$$x=-\\frac b{2a}\\pm\\frac1{\\sqrt a}\\cos\\left(\\frac12\\arccos\\left(\\frac{b^2}{2a}-2c-1\\right)\\right)$$</p>\n\n<p>for $\\frac{b^2}{2a}-2c-1&gt;1:$</p>\n\n<p>$$x=-\\frac b{2a}\\pm\\frac1{\\sqrt a}\\cosh\\left(\\frac12\\operatorname{arccosh}\\left(\\frac{b^2}{2a}-2c-1\\right)\\right)$$</p>\n\n<p>You can play around with <a href=\"https://www.desmos.com/calculator/sjrialidhy\">some graphs here</a> if you want.</p>\n" }, { "answer_id": 2105265, "score": 2, "text": "<p>There are a few errors in the question as posed.</p>\n\n<p>First, the quadratic formula is\n$$x = \\frac{-b \\pm \\sqrt {b^2-4ac} }{2a}$$\nNote the factor of $a$ in the denominator, missing from the OP. [<strong>EDIT</strong> -- I see this has been fixed now.]</p>\n\n<p>Second, you write:</p>\n\n<blockquote>\n <p>In fact, (2) gives solutions for $0x^2+bx+c=0$!</p>\n</blockquote>\n\n<p>This is not true, and does not even make sense. 
The equation $0x^2+bx+c=0$ is equivalent to $bx = -c$, whose only solution is $x=-c/b$ (assuming $b\\ne 0$).</p>\n\n<p>In any case, to turn (1) into (2), just multiply by $-b \\mp \\sqrt{b^2-4ac}$ in both the numerator and the denominator:</p>\n\n<p>$$x = \\frac{-b \\pm \\sqrt {b^2-4ac} }{2a} \\cdot \\frac{-b \\mp \\sqrt{b^2-4ac}}{-b \\mp \\sqrt{b^2-4ac}}$$\nThe numerator now has the form $(X \\pm Y)(Y \\pm X)$, which simplifies to just $X^2-Y^2$. So we have\n$$x = \\frac{1}{2a} \\frac{ \\left( -b \\right)^2 - \\left(b^2-4ac\\right)}{-b \\mp \\sqrt{b^2-4ac}}$$\nCleaning up, the $(-b)^2-b^2$ in the numerator cancels out; the $2a$ in the denominator reduces against the $4ac$ in the numerator, leaving just $2c$; and you can move a negative sign out of the denominator and into the numerator, giving the alternative form of the quadratic formula you wanted.</p>\n\n<p>Finally, you ask</p>\n\n<blockquote>\n <p>Why is (2) somewhat similar to $\\frac{1}{(1)}$ but with $2a$ replaced with $−2c$?</p>\n</blockquote>\n\n<p>As Ross Millikan says in his answer, if in the original equation $ax^2 + bx + c =0$ we assume that $x=0$ is not a solution (which is equivalent to assuming that $c\\ne 0$), then we can divide the entire equation through by $x^2$, obtaining\n$$c \\left(\\frac{1}{x}\\right)^2 + b\\left(\\frac1x\\right) + a = 0$$\nIf we set $u=\\frac1x$, then this is $cu^2 + bu + a = 0$, and the quadratic equation tells us\n$$u = \\frac{ - b \\pm \\sqrt{b^2 - 4ac} }{2c}$$\nFinally we get\n$$x = \\frac{2c}{-b \\pm \\sqrt{b^2-4ac}}$$\nand you can pull the negative sign out of the denominator into the numerator. So the reason the equations are so similar is because of a duality in the equation: interchanging $a$ with $c$ and simultaneously replacing $x$ with $1/x$ changes one quadratic equation into an equivalent one.</p>\n" }, { "answer_id": 2105245, "score": 16, "text": "<p>You can write $ax^2+bx+c=0$ as $a+b(\\frac 1x)+c(\\frac 1x)^2=0,$ solve for $\\frac 1x$, then invert it. 
You can have the minus sign top or bottom as you like by multiplying top and bottom by $-1$</p>\n" }, { "answer_id": 2105247, "score": 12, "text": "<p>Take (2), and rationalize the denominator: </p>\n\n<p>$$\\frac{-2c}{b \\pm \\sqrt{b^2-4ac}} = \\frac{-2c}{b \\pm \\sqrt{b^2-4ac}}\\frac{b \\mp \\sqrt{b^2-4ac}}{b \\mp \\sqrt{b^2-4ac}} = $$\n$$\\frac{-2c(b\\mp\\sqrt{b^2-4ac})}{b^2-(b^2-4ac)} = \\frac{-b \\mp \\sqrt{b^2-4ac}}{2a}$$</p>\n" }, { "answer_id": 2105283, "score": 9, "text": "<p>Just for the fun trivia, here are some other \"quadratic formulas\" $(a&gt;0):$</p>\n\n<p>for $|\\frac{b^2}{2a}-2c-1|\\le1:$</p>\n\n<p>$$x=-\\frac b{2a}\\pm\\frac1{\\sqrt a}\\cos\\left(\\frac12\\arccos\\left(\\frac{b^2}{2a}-2c-1\\right)\\right)$$</p>\n\n<p>for $\\frac{b^2}{2a}-2c-1&gt;1:$</p>\n\n<p>$$x=-\\frac b{2a}\\pm\\frac1{\\sqrt a}\\cosh\\left(\\frac12\\operatorname{arccosh}\\left(\\frac{b^2}{2a}-2c-1\\right)\\right)$$</p>\n\n<p>You can play around with <a href=\"https://www.desmos.com/calculator/sjrialidhy\">some graphs here</a> if you want.</p>\n" }, { "answer_id": 2105265, "score": 2, "text": "<p>There are a few errors in the question as posed.</p>\n\n<p>First, the quadratic formula is\n$$x = \\frac{-b \\pm \\sqrt {b^2-4ac} }{2a}$$\nNote the factor of $a$ in the denominator, missing from the OP. [<strong>EDIT</strong> -- I see this has been fixed now.]</p>\n\n<p>Second, you write:</p>\n\n<blockquote>\n <p>In fact, (2) gives solutions for $0x^2+bx+c=0$!</p>\n</blockquote>\n\n<p>This is not true, and does not even make sense. The equation $0x^2+bx+c=0$ is equivalent to $bx = -c$, whose only solution is $x=-c/b$ (assuming $b\\ne 0$).</p>\n\n<p>In any case, to turn (1) into (2), just multiply by $-b \\mp \\sqrt{b^2-4ac}$ in both the numerator and the denominator:</p>\n\n<p>$$x = \\frac{-b \\pm \\sqrt {b^2-4ac} }{2a} \\cdot \\frac{-b \\mp \\sqrt{b^2-4ac}}{-b \\mp \\sqrt{b^2-4ac}}$$\nThe numerator now has the form $(X \\pm Y)(Y \\pm X)$, which simplifies to just $X^2-Y^2$. 
So we have\n$$x = \\frac{1}{2a} \\frac{ \\left( -b \\right)^2 - \\left(b^2-4ac\\right)}{-b \\mp \\sqrt{b^2-4ac}}$$\nCleaning up, the $(-b)^2-b^2$ in the numerator cancels out; the $2a$ in the denominator reduces against the $4ac$ in the numerator, leaving just $2c$; and you can move a negative sign out of the denominator and into the numerator, giving the alternative form of the quadratic formula you wanted.</p>\n\n<p>Finally, you ask</p>\n\n<blockquote>\n <p>Why is (2) somewhat similar to $\\frac{1}{(1)}$ but with $2a$ replaced with $−2c$?</p>\n</blockquote>\n\n<p>As Ross Millikan says in his answer, if in the original equation $ax^2 + bx + c =0$ we assume that $x=0$ is not a solution (which is equivalent to assuming that $c\\ne 0$), then we can divide the entire equation through by $x^2$, obtaining\n$$c \\left(\\frac{1}{x}\\right)^2 + b\\left(\\frac1x\\right) + a = 0$$\nIf we set $u=\\frac1x$, then this is $cu^2 + bu + a = 0$, and the quadratic equation tells us\n$$u = \\frac{ - b \\pm \\sqrt{b^2 - 4ac} }{2c}$$\nFinally we get\n$$x = \\frac{2c}{-b \\pm \\sqrt{b^2-4ac}}$$\nand you can pull the negative sign out of the denominator into the numerator. So the reason the equations are so similar is because of a duality in the equation: interchanging $a$ with $c$ and simultaneously replacing $x$ with $1/x$ changes one quadratic equation into an equivalent one.</p>\n" }, { "answer_id": 2105245, "score": 16, "text": "<p>You can write $ax^2+bx+c=0$ as $a+b(\\frac 1x)+c(\\frac 1x)^2=0,$ solve for $\\frac 1x$, then invert it. 
You can have the minus sign top or bottom as you like by multiplying top and bottom by $-1$</p>\n" }, { "answer_id": 2105247, "score": 12, "text": "<p>Take (2), and rationalize the denominator: </p>\n\n<p>$$\\frac{-2c}{b \\pm \\sqrt{b^2-4ac}} = \\frac{-2c}{b \\pm \\sqrt{b^2-4ac}}\\frac{b \\mp \\sqrt{b^2-4ac}}{b \\mp \\sqrt{b^2-4ac}} = $$\n$$\\frac{-2c(b\\mp\\sqrt{b^2-4ac})}{b^2-(b^2-4ac)} = \\frac{-b \\mp \\sqrt{b^2-4ac}}{2a}$$</p>\n" }, { "answer_id": 2105283, "score": 9, "text": "<p>Just for the fun trivia, here are some other \"quadratic formulas\" $(a&gt;0):$</p>\n\n<p>for $|\\frac{b^2}{2a}-2c-1|\\le1:$</p>\n\n<p>$$x=-\\frac b{2a}\\pm\\frac1{\\sqrt a}\\cos\\left(\\frac12\\arccos\\left(\\frac{b^2}{2a}-2c-1\\right)\\right)$$</p>\n\n<p>for $\\frac{b^2}{2a}-2c-1&gt;1:$</p>\n\n<p>$$x=-\\frac b{2a}\\pm\\frac1{\\sqrt a}\\cosh\\left(\\frac12\\operatorname{arccosh}\\left(\\frac{b^2}{2a}-2c-1\\right)\\right)$$</p>\n\n<p>You can play around with <a href=\"https://www.desmos.com/calculator/sjrialidhy\">some graphs here</a> if you want.</p>\n" }, { "answer_id": 2105265, "score": 2, "text": "<p>There are a few errors in the question as posed.</p>\n\n<p>First, the quadratic formula is\n$$x = \\frac{-b \\pm \\sqrt {b^2-4ac} }{2a}$$\nNote the factor of $a$ in the denominator, missing from the OP. [<strong>EDIT</strong> -- I see this has been fixed now.]</p>\n\n<p>Second, you write:</p>\n\n<blockquote>\n <p>In fact, (2) gives solutions for $0x^2+bx+c=0$!</p>\n</blockquote>\n\n<p>This is not true, and does not even make sense. The equation $0x^2+bx+c=0$ is equivalent to $bx = -c$, whose only solution is $x=-c/b$ (assuming $b\\ne 0$).</p>\n\n<p>In any case, to turn (1) into (2), just multiply by $-b \\mp \\sqrt{b^2-4ac}$ in both the numerator and the denominator:</p>\n\n<p>$$x = \\frac{-b \\pm \\sqrt {b^2-4ac} }{2a} \\cdot \\frac{-b \\mp \\sqrt{b^2-4ac}}{-b \\mp \\sqrt{b^2-4ac}}$$\nThe numerator now has the form $(X \\pm Y)(Y \\pm X)$, which simplifies to just $X^2-Y^2$. 
So we have\n$$x = \\frac{1}{2a} \\frac{ \\left( -b \\right)^2 - \\left(b^2-4ac\\right)}{-b \\mp \\sqrt{b^2-4ac}}$$\nCleaning up, the $(-b)^2-b^2$ in the numerator cancels out; the $2a$ in the denominator reduces against the $4ac$ in the numerator, leaving just $2c$; and you can move a negative sign out of the denominator and into the numerator, giving the alternative form of the quadratic formula you wanted.</p>\n\n<p>Finally, you ask</p>\n\n<blockquote>\n <p>Why is (2) somewhat similar to $\\frac{1}{(1)}$ but with $2a$ replaced with $−2c$?</p>\n</blockquote>\n\n<p>As Ross Millikan says in his answer, if in the original equation $ax^2 + bx + c =0$ we assume that $x=0$ is not a solution (which is equivalent to assuming that $c\\ne 0$), then we can divide the entire equation through by $x^2$, obtaining\n$$c \\left(\\frac{1}{x}\\right)^2 + b\\left(\\frac1x\\right) + a = 0$$\nIf we set $u=\\frac1x$, then this is $cu^2 + bu + a = 0$, and the quadratic equation tells us\n$$u = \\frac{ - b \\pm \\sqrt{b^2 - 4ac} }{2c}$$\nFinally we get\n$$x = \\frac{2c}{-b \\pm \\sqrt{b^2-4ac}}$$\nand you can pull the negative sign out of the denominator into the numerator. So the reason the equations are so similar is because of a duality in the equation: interchanging $a$ with $c$ and simultaneously replacing $x$ with $1/x$ changes one quadratic equation into an equivalent one.</p>\n" }, { "answer_id": 2105245, "score": 16, "text": "<p>You can write $ax^2+bx+c=0$ as $a+b(\\frac 1x)+c(\\frac 1x)^2=0,$ solve for $\\frac 1x$, then invert it. 
You can have the minus sign top or bottom as you like by multiplying top and bottom by $-1$</p>\n" }, { "answer_id": 2105247, "score": 12, "text": "<p>Take (2), and rationalize the denominator: </p>\n\n<p>$$\\frac{-2c}{b \\pm \\sqrt{b^2-4ac}} = \\frac{-2c}{b \\pm \\sqrt{b^2-4ac}}\\frac{b \\mp \\sqrt{b^2-4ac}}{b \\mp \\sqrt{b^2-4ac}} = $$\n$$\\frac{-2c(b\\mp\\sqrt{b^2-4ac})}{b^2-(b^2-4ac)} = \\frac{-b \\mp \\sqrt{b^2-4ac}}{2a}$$</p>\n" }, { "answer_id": 2105283, "score": 9, "text": "<p>Just for the fun trivia, here are some other \"quadratic formulas\" $(a&gt;0):$</p>\n\n<p>for $|\\frac{b^2}{2a}-2c-1|\\le1:$</p>\n\n<p>$$x=-\\frac b{2a}\\pm\\frac1{\\sqrt a}\\cos\\left(\\frac12\\arccos\\left(\\frac{b^2}{2a}-2c-1\\right)\\right)$$</p>\n\n<p>for $\\frac{b^2}{2a}-2c-1&gt;1:$</p>\n\n<p>$$x=-\\frac b{2a}\\pm\\frac1{\\sqrt a}\\cosh\\left(\\frac12\\operatorname{arccosh}\\left(\\frac{b^2}{2a}-2c-1\\right)\\right)$$</p>\n\n<p>You can play around with <a href=\"https://www.desmos.com/calculator/sjrialidhy\">some graphs here</a> if you want.</p>\n" }, { "answer_id": 2105265, "score": 2, "text": "<p>There are a few errors in the question as posed.</p>\n\n<p>First, the quadratic formula is\n$$x = \\frac{-b \\pm \\sqrt {b^2-4ac} }{2a}$$\nNote the factor of $a$ in the denominator, missing from the OP. [<strong>EDIT</strong> -- I see this has been fixed now.]</p>\n\n<p>Second, you write:</p>\n\n<blockquote>\n <p>In fact, (2) gives solutions for $0x^2+bx+c=0$!</p>\n</blockquote>\n\n<p>This is not true, and does not even make sense. The equation $0x^2+bx+c=0$ is equivalent to $bx = -c$, whose only solution is $x=-c/b$ (assuming $b\\ne 0$).</p>\n\n<p>In any case, to turn (1) into (2), just multiply by $-b \\mp \\sqrt{b^2-4ac}$ in both the numerator and the denominator:</p>\n\n<p>$$x = \\frac{-b \\pm \\sqrt {b^2-4ac} }{2a} \\cdot \\frac{-b \\mp \\sqrt{b^2-4ac}}{-b \\mp \\sqrt{b^2-4ac}}$$\nThe numerator now has the form $(X \\pm Y)(Y \\pm X)$, which simplifies to just $X^2-Y^2$. 
So we have\n$$x = \\frac{1}{2a} \\frac{ \\left( -b \\right)^2 - \\left(b^2-4ac\\right)}{-b \\mp \\sqrt{b^2-4ac}}$$\nCleaning up, the $(-b)^2-b^2$ in the numerator cancels out; the $2a$ in the denominator reduces against the $4ac$ in the numerator, leaving just $2c$; and you can move a negative sign out of the denominator and into the numerator, giving the alternative form of the quadratic formula you wanted.</p>\n\n<p>Finally, you ask</p>\n\n<blockquote>\n <p>Why is (2) somewhat similar to $\\frac{1}{(1)}$ but with $2a$ replaced with $−2c$?</p>\n</blockquote>\n\n<p>As Ross Millikan says in his answer, if in the original equation $ax^2 + bx + c =0$ we assume that $x=0$ is not a solution (which is equivalent to assuming that $c\\ne 0$), then we can divide the entire equation through by $x^2$, obtaining\n$$c \\left(\\frac{1}{x}\\right)^2 + b\\left(\\frac1x\\right) + a = 0$$\nIf we set $u=\\frac1x$, then this is $cu^2 + bu + a = 0$, and the quadratic equation tells us\n$$u = \\frac{ - b \\pm \\sqrt{b^2 - 4ac} }{2c}$$\nFinally we get\n$$x = \\frac{2c}{-b \\pm \\sqrt{b^2-4ac}}$$\nand you can pull the negative sign out of the denominator into the numerator. So the reason the equations are so similar is because of a duality in the equation: interchanging $a$ with $c$ and simultaneously replacing $x$ with $1/x$ changes one quadratic equation into an equivalent one.</p>\n" }, { "answer_id": 2105245, "score": 16, "text": "<p>You can write $ax^2+bx+c=0$ as $a+b(\\frac 1x)+c(\\frac 1x)^2=0,$ solve for $\\frac 1x$, then invert it. 
You can have the minus sign top or bottom as you like by multiplying top and bottom by $-1$</p>\n" }, { "answer_id": 2105247, "score": 12, "text": "<p>Take (2), and rationalize the denominator: </p>\n\n<p>$$\\frac{-2c}{b \\pm \\sqrt{b^2-4ac}} = \\frac{-2c}{b \\pm \\sqrt{b^2-4ac}}\\frac{b \\mp \\sqrt{b^2-4ac}}{b \\mp \\sqrt{b^2-4ac}} = $$\n$$\\frac{-2c(b\\mp\\sqrt{b^2-4ac})}{b^2-(b^2-4ac)} = \\frac{-b \\mp \\sqrt{b^2-4ac}}{2a}$$</p>\n" }, { "answer_id": 2105283, "score": 9, "text": "<p>Just for the fun trivia, here are some other \"quadratic formulas\" $(a&gt;0):$</p>\n\n<p>for $|\\frac{b^2}{2a}-2c-1|\\le1:$</p>\n\n<p>$$x=-\\frac b{2a}\\pm\\frac1{\\sqrt a}\\cos\\left(\\frac12\\arccos\\left(\\frac{b^2}{2a}-2c-1\\right)\\right)$$</p>\n\n<p>for $\\frac{b^2}{2a}-2c-1&gt;1:$</p>\n\n<p>$$x=-\\frac b{2a}\\pm\\frac1{\\sqrt a}\\cosh\\left(\\frac12\\operatorname{arccosh}\\left(\\frac{b^2}{2a}-2c-1\\right)\\right)$$</p>\n\n<p>You can play around with <a href=\"https://www.desmos.com/calculator/sjrialidhy\">some graphs here</a> if you want.</p>\n" }, { "answer_id": 2105265, "score": 2, "text": "<p>There are a few errors in the question as posed.</p>\n\n<p>First, the quadratic formula is\n$$x = \\frac{-b \\pm \\sqrt {b^2-4ac} }{2a}$$\nNote the factor of $a$ in the denominator, missing from the OP. [<strong>EDIT</strong> -- I see this has been fixed now.]</p>\n\n<p>Second, you write:</p>\n\n<blockquote>\n <p>In fact, (2) gives solutions for $0x^2+bx+c=0$!</p>\n</blockquote>\n\n<p>This is not true, and does not even make sense. The equation $0x^2+bx+c=0$ is equivalent to $bx = -c$, whose only solution is $x=-c/b$ (assuming $b\\ne 0$).</p>\n\n<p>In any case, to turn (1) into (2), just multiply by $-b \\mp \\sqrt{b^2-4ac}$ in both the numerator and the denominator:</p>\n\n<p>$$x = \\frac{-b \\pm \\sqrt {b^2-4ac} }{2a} \\cdot \\frac{-b \\mp \\sqrt{b^2-4ac}}{-b \\mp \\sqrt{b^2-4ac}}$$\nThe numerator now has the form $(X \\pm Y)(Y \\pm X)$, which simplifies to just $X^2-Y^2$. 
So we have\n$$x = \\frac{1}{2a} \\frac{ \\left( -b \\right)^2 - \\left(b^2-4ac\\right)}{-b \\mp \\sqrt{b^2-4ac}}$$\nCleaning up, the $(-b)^2-b^2$ in the numerator cancels out; the $2a$ in the denominator reduces against the $4ac$ in the numerator, leaving just $2c$; and you can move a negative sign out of the denominator and into the numerator, giving the alternative form of the quadratic formula you wanted.</p>\n\n<p>Finally, you ask</p>\n\n<blockquote>\n <p>Why is (2) somewhat similar to $\\frac{1}{(1)}$ but with $2a$ replaced with $−2c$?</p>\n</blockquote>\n\n<p>As Ross Millikan says in his answer, if in the original equation $ax^2 + bx + c =0$ we assume that $x=0$ is not a solution (which is equivalent to assuming that $c\\ne 0$), then we can divide the entire equation through by $x^2$, obtaining\n$$c \\left(\\frac{1}{x}\\right)^2 + b\\left(\\frac1x\\right) + a = 0$$\nIf we set $u=\\frac1x$, then this is $cu^2 + bu + a = 0$, and the quadratic equation tells us\n$$u = \\frac{ - b \\pm \\sqrt{b^2 - 4ac} }{2c}$$\nFinally we get\n$$x = \\frac{2c}{-b \\pm \\sqrt{b^2-4ac}}$$\nand you can pull the negative sign out of the denominator into the numerator. So the reason the equations are so similar is because of a duality in the equation: interchanging $a$ with $c$ and simultaneously replacing $x$ with $1/x$ changes one quadratic equation into an equivalent one.</p>\n" }, { "answer_id": 2105245, "score": 16, "text": "<p>You can write $ax^2+bx+c=0$ as $a+b(\\frac 1x)+c(\\frac 1x)^2=0,$ solve for $\\frac 1x$, then invert it. 
You can have the minus sign top or bottom as you like by multiplying top and bottom by $-1$</p>\n" }, { "answer_id": 2105247, "score": 12, "text": "<p>Take (2), and rationalize the denominator: </p>\n\n<p>$$\\frac{-2c}{b \\pm \\sqrt{b^2-4ac}} = \\frac{-2c}{b \\pm \\sqrt{b^2-4ac}}\\frac{b \\mp \\sqrt{b^2-4ac}}{b \\mp \\sqrt{b^2-4ac}} = $$\n$$\\frac{-2c(b\\mp\\sqrt{b^2-4ac})}{b^2-(b^2-4ac)} = \\frac{-b \\mp \\sqrt{b^2-4ac}}{2a}$$</p>\n" }, { "answer_id": 2105283, "score": 9, "text": "<p>Just for the fun trivia, here are some other \"quadratic formulas\" $(a&gt;0):$</p>\n\n<p>for $|\\frac{b^2}{2a}-2c-1|\\le1:$</p>\n\n<p>$$x=-\\frac b{2a}\\pm\\frac1{\\sqrt a}\\cos\\left(\\frac12\\arccos\\left(\\frac{b^2}{2a}-2c-1\\right)\\right)$$</p>\n\n<p>for $\\frac{b^2}{2a}-2c-1&gt;1:$</p>\n\n<p>$$x=-\\frac b{2a}\\pm\\frac1{\\sqrt a}\\cosh\\left(\\frac12\\operatorname{arccosh}\\left(\\frac{b^2}{2a}-2c-1\\right)\\right)$$</p>\n\n<p>You can play around with <a href=\"https://www.desmos.com/calculator/sjrialidhy\">some graphs here</a> if you want.</p>\n" }, { "answer_id": 2105265, "score": 2, "text": "<p>There are a few errors in the question as posed.</p>\n\n<p>First, the quadratic formula is\n$$x = \\frac{-b \\pm \\sqrt {b^2-4ac} }{2a}$$\nNote the factor of $a$ in the denominator, missing from the OP. [<strong>EDIT</strong> -- I see this has been fixed now.]</p>\n\n<p>Second, you write:</p>\n\n<blockquote>\n <p>In fact, (2) gives solutions for $0x^2+bx+c=0$!</p>\n</blockquote>\n\n<p>This is not true, and does not even make sense. The equation $0x^2+bx+c=0$ is equivalent to $bx = -c$, whose only solution is $x=-c/b$ (assuming $b\\ne 0$).</p>\n\n<p>In any case, to turn (1) into (2), just multiply by $-b \\mp \\sqrt{b^2-4ac}$ in both the numerator and the denominator:</p>\n\n<p>$$x = \\frac{-b \\pm \\sqrt {b^2-4ac} }{2a} \\cdot \\frac{-b \\mp \\sqrt{b^2-4ac}}{-b \\mp \\sqrt{b^2-4ac}}$$\nThe numerator now has the form $(X \\pm Y)(Y \\pm X)$, which simplifies to just $X^2-Y^2$. 
So we have\n$$x = \\frac{1}{2a} \\frac{ \\left( -b \\right)^2 - \\left(b^2-4ac\\right)}{-b \\mp \\sqrt{b^2-4ac}}$$\nCleaning up, the $(-b)^2-b^2$ in the numerator cancels out; the $2a$ in the denominator reduces against the $4ac$ in the numerator, leaving just $2c$; and you can move a negative sign out of the denominator and into the numerator, giving the alternative form of the quadratic formula you wanted.</p>\n\n<p>Finally, you ask</p>\n\n<blockquote>\n <p>Why is (2) somewhat similar to $\\frac{1}{(1)}$ but with $2a$ replaced with $−2c$?</p>\n</blockquote>\n\n<p>As Ross Millikan says in his answer, if in the original equation $ax^2 + bx + c =0$ we assume that $x=0$ is not a solution (which is equivalent to assuming that $c\\ne 0$), then we can divide the entire equation through by $x^2$, obtaining\n$$c \\left(\\frac{1}{x}\\right)^2 + b\\left(\\frac1x\\right) + a = 0$$\nIf we set $u=\\frac1x$, then this is $cu^2 + bu + a = 0$, and the quadratic equation tells us\n$$u = \\frac{ - b \\pm \\sqrt{b^2 - 4ac} }{2c}$$\nFinally we get\n$$x = \\frac{2c}{-b \\pm \\sqrt{b^2-4ac}}$$\nand you can pull the negative sign out of the denominator into the numerator. So the reason the equations are so similar is because of a duality in the equation: interchanging $a$ with $c$ and simultaneously replacing $x$ with $1/x$ changes one quadratic equation into an equivalent one.</p>\n" }, { "answer_id": 2105245, "score": 16, "text": "<p>You can write $ax^2+bx+c=0$ as $a+b(\\frac 1x)+c(\\frac 1x)^2=0,$ solve for $\\frac 1x$, then invert it. 
You can have the minus sign top or bottom as you like by multiplying top and bottom by $-1$</p>\n" }, { "answer_id": 2105247, "score": 12, "text": "<p>Take (2), and rationalize the denominator: </p>\n\n<p>$$\\frac{-2c}{b \\pm \\sqrt{b^2-4ac}} = \\frac{-2c}{b \\pm \\sqrt{b^2-4ac}}\\frac{b \\mp \\sqrt{b^2-4ac}}{b \\mp \\sqrt{b^2-4ac}} = $$\n$$\\frac{-2c(b\\mp\\sqrt{b^2-4ac})}{b^2-(b^2-4ac)} = \\frac{-b \\mp \\sqrt{b^2-4ac}}{2a}$$</p>\n" }, { "answer_id": 2105283, "score": 9, "text": "<p>Just for the fun trivia, here are some other \"quadratic formulas\" $(a&gt;0):$</p>\n\n<p>for $|\\frac{b^2}{2a}-2c-1|\\le1:$</p>\n\n<p>$$x=-\\frac b{2a}\\pm\\frac1{\\sqrt a}\\cos\\left(\\frac12\\arccos\\left(\\frac{b^2}{2a}-2c-1\\right)\\right)$$</p>\n\n<p>for $\\frac{b^2}{2a}-2c-1&gt;1:$</p>\n\n<p>$$x=-\\frac b{2a}\\pm\\frac1{\\sqrt a}\\cosh\\left(\\frac12\\operatorname{arccosh}\\left(\\frac{b^2}{2a}-2c-1\\right)\\right)$$</p>\n\n<p>You can play around with <a href=\"https://www.desmos.com/calculator/sjrialidhy\">some graphs here</a> if you want.</p>\n" }, { "answer_id": 2105265, "score": 2, "text": "<p>There are a few errors in the question as posed.</p>\n\n<p>First, the quadratic formula is\n$$x = \\frac{-b \\pm \\sqrt {b^2-4ac} }{2a}$$\nNote the factor of $a$ in the denominator, missing from the OP. [<strong>EDIT</strong> -- I see this has been fixed now.]</p>\n\n<p>Second, you write:</p>\n\n<blockquote>\n <p>In fact, (2) gives solutions for $0x^2+bx+c=0$!</p>\n</blockquote>\n\n<p>This is not true, and does not even make sense. The equation $0x^2+bx+c=0$ is equivalent to $bx = -c$, whose only solution is $x=-c/b$ (assuming $b\\ne 0$).</p>\n\n<p>In any case, to turn (1) into (2), just multiply by $-b \\mp \\sqrt{b^2-4ac}$ in both the numerator and the denominator:</p>\n\n<p>$$x = \\frac{-b \\pm \\sqrt {b^2-4ac} }{2a} \\cdot \\frac{-b \\mp \\sqrt{b^2-4ac}}{-b \\mp \\sqrt{b^2-4ac}}$$\nThe numerator now has the form $(X \\pm Y)(Y \\pm X)$, which simplifies to just $X^2-Y^2$. 
So we have\n$$x = \\frac{1}{2a} \\frac{ \\left( -b \\right)^2 - \\left(b^2-4ac\\right)}{-b \\mp \\sqrt{b^2-4ac}}$$\nCleaning up, the $(-b)^2-b^2$ in the numerator cancels out; the $2a$ in the denominator reduces against the $4ac$ in the numerator, leaving just $2c$; and you can move a negative sign out of the denominator and into the numerator, giving the alternative form of the quadratic formula you wanted.</p>\n\n<p>Finally, you ask</p>\n\n<blockquote>\n <p>Why is (2) somewhat similar to $\\frac{1}{(1)}$ but with $2a$ replaced with $−2c$?</p>\n</blockquote>\n\n<p>As Ross Millikan says in his answer, if in the original equation $ax^2 + bx + c =0$ we assume that $x=0$ is not a solution (which is equivalent to assuming that $c\\ne 0$), then we can divide the entire equation through by $x^2$, obtaining\n$$c \\left(\\frac{1}{x}\\right)^2 + b\\left(\\frac1x\\right) + a = 0$$\nIf we set $u=\\frac1x$, then this is $cu^2 + bu + a = 0$, and the quadratic equation tells us\n$$u = \\frac{ - b \\pm \\sqrt{b^2 - 4ac} }{2c}$$\nFinally we get\n$$x = \\frac{2c}{-b \\pm \\sqrt{b^2-4ac}}$$\nand you can pull the negative sign out of the denominator into the numerator. So the reason the equations are so similar is because of a duality in the equation: interchanging $a$ with $c$ and simultaneously replacing $x$ with $1/x$ changes one quadratic equation into an equivalent one.</p>\n" } ]
Question: Proving an alternate quadratic formula Answer 1: <p>You can write $ax^2+bx+c=0$ as $a+b(\frac 1x)+c(\frac 1x)^2=0,$ solve for $\frac 1x$, then invert it. 
You can have the minus sign top or bottom as you like by multiplying top and bottom by $-1$</p>
 Answer 2: <p>Take (2), and rationalize the denominator: </p>

<p>$$\frac{-2c}{b \pm \sqrt{b^2-4ac}} = \frac{-2c}{b \pm \sqrt{b^2-4ac}}\frac{b \mp \sqrt{b^2-4ac}}{b \mp \sqrt{b^2-4ac}} = $$
$$\frac{-2c(b\mp\sqrt{b^2-4ac})}{b^2-(b^2-4ac)} = \frac{-b \mp \sqrt{b^2-4ac}}{2a}$$</p>
 Answer 3: <p>Just for the fun trivia, here are some other "quadratic formulas" $(a&gt;0):$</p>

<p>for $|\frac{b^2}{2a}-2c-1|\le1:$</p>

<p>$$x=-\frac b{2a}\pm\frac1{\sqrt a}\cos\left(\frac12\arccos\left(\frac{b^2}{2a}-2c-1\right)\right)$$</p>

<p>for $\frac{b^2}{2a}-2c-1&gt;1:$</p>

<p>$$x=-\frac b{2a}\pm\frac1{\sqrt a}\cosh\left(\frac12\operatorname{arccosh}\left(\frac{b^2}{2a}-2c-1\right)\right)$$</p>

<p>You can play around with <a href="https://www.desmos.com/calculator/sjrialidhy">some graphs here</a> if you want.</p>
 Answer 4: <p>There are a few errors in the question as posed.</p>

<p>First, the quadratic formula is
$$x = \frac{-b \pm \sqrt {b^2-4ac} }{2a}$$
Note the factor of $a$ in the denominator, missing from the OP. [<strong>EDIT</strong> -- I see this has been fixed now.]</p>

<p>Second, you write:</p>

<blockquote>
 <p>In fact, (2) gives solutions for $0x^2+bx+c=0$!</p>
</blockquote>

<p>This is not true, and does not even make sense. The equation $0x^2+bx+c=0$ is equivalent to $bx = -c$, whose only solution is $x=-c/b$ (assuming $b\ne 0$).</p>

<p>In any case, to turn (1) into (2), just multiply by $-b \mp \sqrt{b^2-4ac}$ in both the numerator and the denominator:</p>

<p>$$x = \frac{-b \pm \sqrt {b^2-4ac} }{2a} \cdot \frac{-b \mp \sqrt{b^2-4ac}}{-b \mp \sqrt{b^2-4ac}}$$
The numerator now has the form $(X \pm Y)(X \mp Y)$, which simplifies to just $X^2-Y^2$. 
So we have $$x = \frac{1}{2a} \frac{ \left( -b \right)^2 - \left(b^2-4ac\right)}{-b \mp \sqrt{b^2-4ac}}$$ Cleaning up, the $(-b)^2-b^2$ in the numerator cancels out; the $2a$ in the denominator reduces against the $4ac$ in the numerator, leaving just $2c$; and you can move a negative sign out of the denominator and into the numerator, giving the alternative form of the quadratic formula you wanted.</p> <p>Finally, you ask</p> <blockquote> <p>Why is (2) somewhat similar to $\frac{1}{(1)}$ but with $2a$ replaced with $−2c$?</p> </blockquote> <p>As Ross Millikan says in his answer, if in the original equation $ax^2 + bx + c =0$ we assume that $x=0$ is not a solution (which is equivalent to assuming that $c\ne 0$), then we can divide the entire equation through by $x^2$, obtaining $$c \left(\frac{1}{x}\right)^2 + b\left(\frac1x\right) + a = 0$$ If we set $u=\frac1x$, then this is $cu^2 + bu + a = 0$, and the quadratic formula tells us $$u = \frac{ - b \pm \sqrt{b^2 - 4ac} }{2c}$$ Finally we get $$x = \frac{2c}{-b \pm \sqrt{b^2-4ac}}$$ and you can pull the negative sign out of the denominator into the numerator. So the reason the equations are so similar is because of a duality in the equation: interchanging $a$ with $c$ and simultaneously replacing $x$ with $1/x$ changes one quadratic equation into an equivalent one.</p>
You can have the minus sign top or bottom as you like by multiplying top and bottom by $-1$</p> Answer 6: <p>Take (2), and rationalize the denominator: </p> <p>$$\frac{-2c}{b \pm \sqrt{b^2-4ac}} = \frac{-2c}{b \pm \sqrt{b^2-4ac}}\frac{b \mp \sqrt{b^2-4ac}}{b \mp \sqrt{b^2-4ac}} = $$ $$\frac{-2c(b\mp\sqrt{b^2-4ac})}{b^2-(b^2-4ac)} = \frac{-b \mp \sqrt{b^2-4ac}}{2a}$$</p> Answer 7: <p>Just for the fun trivia, here are some other "quadratic formulas" $(a&gt;0):$</p> <p>for $|\frac{b^2}{2a}-2c-1|\le1:$</p> <p>$$x=-\frac b{2a}\pm\frac1{\sqrt a}\cos\left(\frac12\arccos\left(\frac{b^2}{2a}-2c-1\right)\right)$$</p> <p>for $\frac{b^2}{2a}-2c-1&gt;1:$</p> <p>$$x=-\frac b{2a}\pm\frac1{\sqrt a}\cosh\left(\frac12\operatorname{arccosh}\left(\frac{b^2}{2a}-2c-1\right)\right)$$</p> <p>You can play around with <a href="https://www.desmos.com/calculator/sjrialidhy">some graphs here</a> if you want.</p> Answer 8: <p>There are a few errors in the question as posed.</p> <p>First, the quadratic formula is $$x = \frac{-b \pm \sqrt {b^2-4ac} }{2a}$$ Note the factor of $a$ in the denominator, missing from the OP. [<strong>EDIT</strong> -- I see this has been fixed now.]</p> <p>Second, you write:</p> <blockquote> <p>In fact, (2) gives solutions for $0x^2+bx+c=0$!</p> </blockquote> <p>This is not true, and does not even make sense. The equation $0x^2+bx+c=0$ is equivalent to $bx = -c$, whose only solution is $x=-c/b$ (assuming $b\ne 0$).</p> <p>In any case, to turn (1) into (2), just multiply by $-b \mp \sqrt{b^2-4ac}$ in both the numerator and the denominator:</p> <p>$$x = \frac{-b \pm \sqrt {b^2-4ac} }{2a} \cdot \frac{-b \mp \sqrt{b^2-4ac}}{-b \mp \sqrt{b^2-4ac}}$$ The numerator now has the form $(X \pm Y)(Y \pm X)$, which simplifies to just $X^2-Y^2$. 
So we have $$x = \frac{1}{2a} \frac{ \left( -b \right)^2 - \left(b^2-4ac\right)}{-b \mp \sqrt{b^2-4ac}}$$ Cleaning up, the $(-b)^2-b^2$ in the numerator cancels out; the $2a$ in the denominator reduces against the $4ac$ in the numerator, leaving just $2c$; and you can move a negative sign out of the denominator and into the numerator, giving the alternative form of the quadratic formula you wanted.</p> <p>Finally, you ask</p> <blockquote> <p>Why is (2) somewhat similar to $\frac{1}{(1)}$ but with $2a$ replaced with $−2c$?</p> </blockquote> <p>As Ross Millikan says in his answer, if in the original equation $ax^2 + bx + c =0$ we assume that $x=0$ is not a solution (which is equivalent to assuming that $c\ne 0$), then we can divide the entire equation through by $x^2$, obtaining $$c \left(\frac{1}{x}\right)^2 + b\left(\frac1x\right) + a = 0$$ If we set $u=\frac1x$, then this is $cu^2 + bu + a = 0$, and the quadratic equation tells us $$u = \frac{ - b \pm \sqrt{b^2 - 4ac} }{2c}$$ Finally we get $$x = \frac{2c}{-b \pm \sqrt{b^2-4ac}}$$ and you can pull the negative sign out of the denominator into the numerator. So the reason the equations are so similar is because of a duality in the equation: interchanging $a$ with $c$ and simultaneously replacing $x$ with $1/x$ changes one quadratic equation into an equivalent one.</p> Answer 9: <p>You can write $ax^2+bx+c=0$ as $a+b(\frac 1x)+c(\frac 1x)^2=0,$ solve for $\frac 1x$, then invert it. 
You can have the minus sign top or bottom as you like by multiplying top and bottom by $-1$</p> Answer 10: <p>Take (2), and rationalize the denominator: </p> <p>$$\frac{-2c}{b \pm \sqrt{b^2-4ac}} = \frac{-2c}{b \pm \sqrt{b^2-4ac}}\frac{b \mp \sqrt{b^2-4ac}}{b \mp \sqrt{b^2-4ac}} = $$ $$\frac{-2c(b\mp\sqrt{b^2-4ac})}{b^2-(b^2-4ac)} = \frac{-b \mp \sqrt{b^2-4ac}}{2a}$$</p> Answer 11: <p>Just for the fun trivia, here are some other "quadratic formulas" $(a&gt;0):$</p> <p>for $|\frac{b^2}{2a}-2c-1|\le1:$</p> <p>$$x=-\frac b{2a}\pm\frac1{\sqrt a}\cos\left(\frac12\arccos\left(\frac{b^2}{2a}-2c-1\right)\right)$$</p> <p>for $\frac{b^2}{2a}-2c-1&gt;1:$</p> <p>$$x=-\frac b{2a}\pm\frac1{\sqrt a}\cosh\left(\frac12\operatorname{arccosh}\left(\frac{b^2}{2a}-2c-1\right)\right)$$</p> <p>You can play around with <a href="https://www.desmos.com/calculator/sjrialidhy">some graphs here</a> if you want.</p> Answer 12: <p>There are a few errors in the question as posed.</p> <p>First, the quadratic formula is $$x = \frac{-b \pm \sqrt {b^2-4ac} }{2a}$$ Note the factor of $a$ in the denominator, missing from the OP. [<strong>EDIT</strong> -- I see this has been fixed now.]</p> <p>Second, you write:</p> <blockquote> <p>In fact, (2) gives solutions for $0x^2+bx+c=0$!</p> </blockquote> <p>This is not true, and does not even make sense. The equation $0x^2+bx+c=0$ is equivalent to $bx = -c$, whose only solution is $x=-c/b$ (assuming $b\ne 0$).</p> <p>In any case, to turn (1) into (2), just multiply by $-b \mp \sqrt{b^2-4ac}$ in both the numerator and the denominator:</p> <p>$$x = \frac{-b \pm \sqrt {b^2-4ac} }{2a} \cdot \frac{-b \mp \sqrt{b^2-4ac}}{-b \mp \sqrt{b^2-4ac}}$$ The numerator now has the form $(X \pm Y)(Y \pm X)$, which simplifies to just $X^2-Y^2$. 
So we have $$x = \frac{1}{2a} \frac{ \left( -b \right)^2 - \left(b^2-4ac\right)}{-b \mp \sqrt{b^2-4ac}}$$ Cleaning up, the $(-b)^2-b^2$ in the numerator cancels out; the $2a$ in the denominator reduces against the $4ac$ in the numerator, leaving just $2c$; and you can move a negative sign out of the denominator and into the numerator, giving the alternative form of the quadratic formula you wanted.</p> <p>Finally, you ask</p> <blockquote> <p>Why is (2) somewhat similar to $\frac{1}{(1)}$ but with $2a$ replaced with $−2c$?</p> </blockquote> <p>As Ross Millikan says in his answer, if in the original equation $ax^2 + bx + c =0$ we assume that $x=0$ is not a solution (which is equivalent to assuming that $c\ne 0$), then we can divide the entire equation through by $x^2$, obtaining $$c \left(\frac{1}{x}\right)^2 + b\left(\frac1x\right) + a = 0$$ If we set $u=\frac1x$, then this is $cu^2 + bu + a = 0$, and the quadratic equation tells us $$u = \frac{ - b \pm \sqrt{b^2 - 4ac} }{2c}$$ Finally we get $$x = \frac{2c}{-b \pm \sqrt{b^2-4ac}}$$ and you can pull the negative sign out of the denominator into the numerator. So the reason the equations are so similar is because of a duality in the equation: interchanging $a$ with $c$ and simultaneously replacing $x$ with $1/x$ changes one quadratic equation into an equivalent one.</p> Answer 13: <p>You can write $ax^2+bx+c=0$ as $a+b(\frac 1x)+c(\frac 1x)^2=0,$ solve for $\frac 1x$, then invert it. 
You can have the minus sign top or bottom as you like by multiplying top and bottom by $-1$</p> Answer 14: <p>Take (2), and rationalize the denominator: </p> <p>$$\frac{-2c}{b \pm \sqrt{b^2-4ac}} = \frac{-2c}{b \pm \sqrt{b^2-4ac}}\frac{b \mp \sqrt{b^2-4ac}}{b \mp \sqrt{b^2-4ac}} = $$ $$\frac{-2c(b\mp\sqrt{b^2-4ac})}{b^2-(b^2-4ac)} = \frac{-b \mp \sqrt{b^2-4ac}}{2a}$$</p> Answer 15: <p>Just for the fun trivia, here are some other "quadratic formulas" $(a&gt;0):$</p> <p>for $|\frac{b^2}{2a}-2c-1|\le1:$</p> <p>$$x=-\frac b{2a}\pm\frac1{\sqrt a}\cos\left(\frac12\arccos\left(\frac{b^2}{2a}-2c-1\right)\right)$$</p> <p>for $\frac{b^2}{2a}-2c-1&gt;1:$</p> <p>$$x=-\frac b{2a}\pm\frac1{\sqrt a}\cosh\left(\frac12\operatorname{arccosh}\left(\frac{b^2}{2a}-2c-1\right)\right)$$</p> <p>You can play around with <a href="https://www.desmos.com/calculator/sjrialidhy">some graphs here</a> if you want.</p> Answer 16: <p>There are a few errors in the question as posed.</p> <p>First, the quadratic formula is $$x = \frac{-b \pm \sqrt {b^2-4ac} }{2a}$$ Note the factor of $a$ in the denominator, missing from the OP. [<strong>EDIT</strong> -- I see this has been fixed now.]</p> <p>Second, you write:</p> <blockquote> <p>In fact, (2) gives solutions for $0x^2+bx+c=0$!</p> </blockquote> <p>This is not true, and does not even make sense. The equation $0x^2+bx+c=0$ is equivalent to $bx = -c$, whose only solution is $x=-c/b$ (assuming $b\ne 0$).</p> <p>In any case, to turn (1) into (2), just multiply by $-b \mp \sqrt{b^2-4ac}$ in both the numerator and the denominator:</p> <p>$$x = \frac{-b \pm \sqrt {b^2-4ac} }{2a} \cdot \frac{-b \mp \sqrt{b^2-4ac}}{-b \mp \sqrt{b^2-4ac}}$$ The numerator now has the form $(X \pm Y)(Y \pm X)$, which simplifies to just $X^2-Y^2$. 
So we have $$x = \frac{1}{2a} \frac{ \left( -b \right)^2 - \left(b^2-4ac\right)}{-b \mp \sqrt{b^2-4ac}}$$ Cleaning up, the $(-b)^2-b^2$ in the numerator cancels out; the $2a$ in the denominator reduces against the $4ac$ in the numerator, leaving just $2c$; and you can move a negative sign out of the denominator and into the numerator, giving the alternative form of the quadratic formula you wanted.</p> <p>Finally, you ask</p> <blockquote> <p>Why is (2) somewhat similar to $\frac{1}{(1)}$ but with $2a$ replaced with $−2c$?</p> </blockquote> <p>As Ross Millikan says in his answer, if in the original equation $ax^2 + bx + c =0$ we assume that $x=0$ is not a solution (which is equivalent to assuming that $c\ne 0$), then we can divide the entire equation through by $x^2$, obtaining $$c \left(\frac{1}{x}\right)^2 + b\left(\frac1x\right) + a = 0$$ If we set $u=\frac1x$, then this is $cu^2 + bu + a = 0$, and the quadratic equation tells us $$u = \frac{ - b \pm \sqrt{b^2 - 4ac} }{2c}$$ Finally we get $$x = \frac{2c}{-b \pm \sqrt{b^2-4ac}}$$ and you can pull the negative sign out of the denominator into the numerator. So the reason the equations are so similar is because of a duality in the equation: interchanging $a$ with $c$ and simultaneously replacing $x$ with $1/x$ changes one quadratic equation into an equivalent one.</p> Answer 17: <p>You can write $ax^2+bx+c=0$ as $a+b(\frac 1x)+c(\frac 1x)^2=0,$ solve for $\frac 1x$, then invert it. 
You can have the minus sign top or bottom as you like by multiplying top and bottom by $-1$</p> Answer 18: <p>Take (2), and rationalize the denominator: </p> <p>$$\frac{-2c}{b \pm \sqrt{b^2-4ac}} = \frac{-2c}{b \pm \sqrt{b^2-4ac}}\frac{b \mp \sqrt{b^2-4ac}}{b \mp \sqrt{b^2-4ac}} = $$ $$\frac{-2c(b\mp\sqrt{b^2-4ac})}{b^2-(b^2-4ac)} = \frac{-b \mp \sqrt{b^2-4ac}}{2a}$$</p> Answer 19: <p>Just for the fun trivia, here are some other "quadratic formulas" $(a&gt;0):$</p> <p>for $|\frac{b^2}{2a}-2c-1|\le1:$</p> <p>$$x=-\frac b{2a}\pm\frac1{\sqrt a}\cos\left(\frac12\arccos\left(\frac{b^2}{2a}-2c-1\right)\right)$$</p> <p>for $\frac{b^2}{2a}-2c-1&gt;1:$</p> <p>$$x=-\frac b{2a}\pm\frac1{\sqrt a}\cosh\left(\frac12\operatorname{arccosh}\left(\frac{b^2}{2a}-2c-1\right)\right)$$</p> <p>You can play around with <a href="https://www.desmos.com/calculator/sjrialidhy">some graphs here</a> if you want.</p> Answer 20: <p>There are a few errors in the question as posed.</p> <p>First, the quadratic formula is $$x = \frac{-b \pm \sqrt {b^2-4ac} }{2a}$$ Note the factor of $a$ in the denominator, missing from the OP. [<strong>EDIT</strong> -- I see this has been fixed now.]</p> <p>Second, you write:</p> <blockquote> <p>In fact, (2) gives solutions for $0x^2+bx+c=0$!</p> </blockquote> <p>This is not true, and does not even make sense. The equation $0x^2+bx+c=0$ is equivalent to $bx = -c$, whose only solution is $x=-c/b$ (assuming $b\ne 0$).</p> <p>In any case, to turn (1) into (2), just multiply by $-b \mp \sqrt{b^2-4ac}$ in both the numerator and the denominator:</p> <p>$$x = \frac{-b \pm \sqrt {b^2-4ac} }{2a} \cdot \frac{-b \mp \sqrt{b^2-4ac}}{-b \mp \sqrt{b^2-4ac}}$$ The numerator now has the form $(X \pm Y)(Y \pm X)$, which simplifies to just $X^2-Y^2$. 
So we have $$x = \frac{1}{2a} \frac{ \left( -b \right)^2 - \left(b^2-4ac\right)}{-b \mp \sqrt{b^2-4ac}}$$ Cleaning up, the $(-b)^2-b^2$ in the numerator cancels out; the $2a$ in the denominator reduces against the $4ac$ in the numerator, leaving just $2c$; and you can move a negative sign out of the denominator and into the numerator, giving the alternative form of the quadratic formula you wanted.</p> <p>Finally, you ask</p> <blockquote> <p>Why is (2) somewhat similar to $\frac{1}{(1)}$ but with $2a$ replaced with $−2c$?</p> </blockquote> <p>As Ross Millikan says in his answer, if in the original equation $ax^2 + bx + c =0$ we assume that $x=0$ is not a solution (which is equivalent to assuming that $c\ne 0$), then we can divide the entire equation through by $x^2$, obtaining $$c \left(\frac{1}{x}\right)^2 + b\left(\frac1x\right) + a = 0$$ If we set $u=\frac1x$, then this is $cu^2 + bu + a = 0$, and the quadratic equation tells us $$u = \frac{ - b \pm \sqrt{b^2 - 4ac} }{2c}$$ Finally we get $$x = \frac{2c}{-b \pm \sqrt{b^2-4ac}}$$ and you can pull the negative sign out of the denominator into the numerator. So the reason the equations are so similar is because of a duality in the equation: interchanging $a$ with $c$ and simultaneously replacing $x$ with $1/x$ changes one quadratic equation into an equivalent one.</p> Answer 21: <p>You can write $ax^2+bx+c=0$ as $a+b(\frac 1x)+c(\frac 1x)^2=0,$ solve for $\frac 1x$, then invert it. 
You can have the minus sign top or bottom as you like by multiplying top and bottom by $-1$</p> Answer 22: <p>Take (2), and rationalize the denominator: </p> <p>$$\frac{-2c}{b \pm \sqrt{b^2-4ac}} = \frac{-2c}{b \pm \sqrt{b^2-4ac}}\frac{b \mp \sqrt{b^2-4ac}}{b \mp \sqrt{b^2-4ac}} = $$ $$\frac{-2c(b\mp\sqrt{b^2-4ac})}{b^2-(b^2-4ac)} = \frac{-b \mp \sqrt{b^2-4ac}}{2a}$$</p> Answer 23: <p>Just for the fun trivia, here are some other "quadratic formulas" $(a&gt;0):$</p> <p>for $|\frac{b^2}{2a}-2c-1|\le1:$</p> <p>$$x=-\frac b{2a}\pm\frac1{\sqrt a}\cos\left(\frac12\arccos\left(\frac{b^2}{2a}-2c-1\right)\right)$$</p> <p>for $\frac{b^2}{2a}-2c-1&gt;1:$</p> <p>$$x=-\frac b{2a}\pm\frac1{\sqrt a}\cosh\left(\frac12\operatorname{arccosh}\left(\frac{b^2}{2a}-2c-1\right)\right)$$</p> <p>You can play around with <a href="https://www.desmos.com/calculator/sjrialidhy">some graphs here</a> if you want.</p> Answer 24: <p>There are a few errors in the question as posed.</p> <p>First, the quadratic formula is $$x = \frac{-b \pm \sqrt {b^2-4ac} }{2a}$$ Note the factor of $a$ in the denominator, missing from the OP. [<strong>EDIT</strong> -- I see this has been fixed now.]</p> <p>Second, you write:</p> <blockquote> <p>In fact, (2) gives solutions for $0x^2+bx+c=0$!</p> </blockquote> <p>This is not true, and does not even make sense. The equation $0x^2+bx+c=0$ is equivalent to $bx = -c$, whose only solution is $x=-c/b$ (assuming $b\ne 0$).</p> <p>In any case, to turn (1) into (2), just multiply by $-b \mp \sqrt{b^2-4ac}$ in both the numerator and the denominator:</p> <p>$$x = \frac{-b \pm \sqrt {b^2-4ac} }{2a} \cdot \frac{-b \mp \sqrt{b^2-4ac}}{-b \mp \sqrt{b^2-4ac}}$$ The numerator now has the form $(X \pm Y)(Y \pm X)$, which simplifies to just $X^2-Y^2$. 
So we have $$x = \frac{1}{2a} \frac{ \left( -b \right)^2 - \left(b^2-4ac\right)}{-b \mp \sqrt{b^2-4ac}}$$ Cleaning up, the $(-b)^2-b^2$ in the numerator cancels out; the $2a$ in the denominator reduces against the $4ac$ in the numerator, leaving just $2c$; and you can move a negative sign out of the denominator and into the numerator, giving the alternative form of the quadratic formula you wanted.</p> <p>Finally, you ask</p> <blockquote> <p>Why is (2) somewhat similar to $\frac{1}{(1)}$ but with $2a$ replaced with $−2c$?</p> </blockquote> <p>As Ross Millikan says in his answer, if in the original equation $ax^2 + bx + c =0$ we assume that $x=0$ is not a solution (which is equivalent to assuming that $c\ne 0$), then we can divide the entire equation through by $x^2$, obtaining $$c \left(\frac{1}{x}\right)^2 + b\left(\frac1x\right) + a = 0$$ If we set $u=\frac1x$, then this is $cu^2 + bu + a = 0$, and the quadratic equation tells us $$u = \frac{ - b \pm \sqrt{b^2 - 4ac} }{2c}$$ Finally we get $$x = \frac{2c}{-b \pm \sqrt{b^2-4ac}}$$ and you can pull the negative sign out of the denominator into the numerator. So the reason the equations are so similar is because of a duality in the equation: interchanging $a$ with $c$ and simultaneously replacing $x$ with $1/x$ changes one quadratic equation into an equivalent one.</p> Answer 25: <p>You can write $ax^2+bx+c=0$ as $a+b(\frac 1x)+c(\frac 1x)^2=0,$ solve for $\frac 1x$, then invert it. 
You can have the minus sign top or bottom as you like by multiplying top and bottom by $-1$</p> Answer 26: <p>Take (2), and rationalize the denominator: </p> <p>$$\frac{-2c}{b \pm \sqrt{b^2-4ac}} = \frac{-2c}{b \pm \sqrt{b^2-4ac}}\frac{b \mp \sqrt{b^2-4ac}}{b \mp \sqrt{b^2-4ac}} = $$ $$\frac{-2c(b\mp\sqrt{b^2-4ac})}{b^2-(b^2-4ac)} = \frac{-b \mp \sqrt{b^2-4ac}}{2a}$$</p> Answer 27: <p>Just for the fun trivia, here are some other "quadratic formulas" $(a&gt;0):$</p> <p>for $|\frac{b^2}{2a}-2c-1|\le1:$</p> <p>$$x=-\frac b{2a}\pm\frac1{\sqrt a}\cos\left(\frac12\arccos\left(\frac{b^2}{2a}-2c-1\right)\right)$$</p> <p>for $\frac{b^2}{2a}-2c-1&gt;1:$</p> <p>$$x=-\frac b{2a}\pm\frac1{\sqrt a}\cosh\left(\frac12\operatorname{arccosh}\left(\frac{b^2}{2a}-2c-1\right)\right)$$</p> <p>You can play around with <a href="https://www.desmos.com/calculator/sjrialidhy">some graphs here</a> if you want.</p> Answer 28: <p>There are a few errors in the question as posed.</p> <p>First, the quadratic formula is $$x = \frac{-b \pm \sqrt {b^2-4ac} }{2a}$$ Note the factor of $a$ in the denominator, missing from the OP. [<strong>EDIT</strong> -- I see this has been fixed now.]</p> <p>Second, you write:</p> <blockquote> <p>In fact, (2) gives solutions for $0x^2+bx+c=0$!</p> </blockquote> <p>This is not true, and does not even make sense. The equation $0x^2+bx+c=0$ is equivalent to $bx = -c$, whose only solution is $x=-c/b$ (assuming $b\ne 0$).</p> <p>In any case, to turn (1) into (2), just multiply by $-b \mp \sqrt{b^2-4ac}$ in both the numerator and the denominator:</p> <p>$$x = \frac{-b \pm \sqrt {b^2-4ac} }{2a} \cdot \frac{-b \mp \sqrt{b^2-4ac}}{-b \mp \sqrt{b^2-4ac}}$$ The numerator now has the form $(X \pm Y)(Y \pm X)$, which simplifies to just $X^2-Y^2$. 
So we have $$x = \frac{1}{2a} \frac{ \left( -b \right)^2 - \left(b^2-4ac\right)}{-b \mp \sqrt{b^2-4ac}}$$ Cleaning up, the $(-b)^2-b^2$ in the numerator cancels out; the $2a$ in the denominator reduces against the $4ac$ in the numerator, leaving just $2c$; and you can move a negative sign out of the denominator and into the numerator, giving the alternative form of the quadratic formula you wanted.</p> <p>Finally, you ask</p> <blockquote> <p>Why is (2) somewhat similar to $\frac{1}{(1)}$ but with $2a$ replaced with $−2c$?</p> </blockquote> <p>As Ross Millikan says in his answer, if in the original equation $ax^2 + bx + c =0$ we assume that $x=0$ is not a solution (which is equivalent to assuming that $c\ne 0$), then we can divide the entire equation through by $x^2$, obtaining $$c \left(\frac{1}{x}\right)^2 + b\left(\frac1x\right) + a = 0$$ If we set $u=\frac1x$, then this is $cu^2 + bu + a = 0$, and the quadratic equation tells us $$u = \frac{ - b \pm \sqrt{b^2 - 4ac} }{2c}$$ Finally we get $$x = \frac{2c}{-b \pm \sqrt{b^2-4ac}}$$ and you can pull the negative sign out of the denominator into the numerator. So the reason the equations are so similar is because of a duality in the equation: interchanging $a$ with $c$ and simultaneously replacing $x$ with $1/x$ changes one quadratic equation into an equivalent one.</p> Answer 29: <p>You can write $ax^2+bx+c=0$ as $a+b(\frac 1x)+c(\frac 1x)^2=0,$ solve for $\frac 1x$, then invert it. 
You can have the minus sign top or bottom as you like by multiplying top and bottom by $-1$</p> Answer 30: <p>Take (2), and rationalize the denominator: </p> <p>$$\frac{-2c}{b \pm \sqrt{b^2-4ac}} = \frac{-2c}{b \pm \sqrt{b^2-4ac}}\frac{b \mp \sqrt{b^2-4ac}}{b \mp \sqrt{b^2-4ac}} = $$ $$\frac{-2c(b\mp\sqrt{b^2-4ac})}{b^2-(b^2-4ac)} = \frac{-b \mp \sqrt{b^2-4ac}}{2a}$$</p> Answer 31: <p>Just for the fun trivia, here are some other "quadratic formulas" $(a&gt;0):$</p> <p>for $|\frac{b^2}{2a}-2c-1|\le1:$</p> <p>$$x=-\frac b{2a}\pm\frac1{\sqrt a}\cos\left(\frac12\arccos\left(\frac{b^2}{2a}-2c-1\right)\right)$$</p> <p>for $\frac{b^2}{2a}-2c-1&gt;1:$</p> <p>$$x=-\frac b{2a}\pm\frac1{\sqrt a}\cosh\left(\frac12\operatorname{arccosh}\left(\frac{b^2}{2a}-2c-1\right)\right)$$</p> <p>You can play around with <a href="https://www.desmos.com/calculator/sjrialidhy">some graphs here</a> if you want.</p> Answer 32: <p>There are a few errors in the question as posed.</p> <p>First, the quadratic formula is $$x = \frac{-b \pm \sqrt {b^2-4ac} }{2a}$$ Note the factor of $a$ in the denominator, missing from the OP. [<strong>EDIT</strong> -- I see this has been fixed now.]</p> <p>Second, you write:</p> <blockquote> <p>In fact, (2) gives solutions for $0x^2+bx+c=0$!</p> </blockquote> <p>This is not true, and does not even make sense. The equation $0x^2+bx+c=0$ is equivalent to $bx = -c$, whose only solution is $x=-c/b$ (assuming $b\ne 0$).</p> <p>In any case, to turn (1) into (2), just multiply by $-b \mp \sqrt{b^2-4ac}$ in both the numerator and the denominator:</p> <p>$$x = \frac{-b \pm \sqrt {b^2-4ac} }{2a} \cdot \frac{-b \mp \sqrt{b^2-4ac}}{-b \mp \sqrt{b^2-4ac}}$$ The numerator now has the form $(X \pm Y)(Y \pm X)$, which simplifies to just $X^2-Y^2$. 
So we have $$x = \frac{1}{2a} \frac{ \left( -b \right)^2 - \left(b^2-4ac\right)}{-b \mp \sqrt{b^2-4ac}}$$ Cleaning up, the $(-b)^2-b^2$ in the numerator cancels out; the $2a$ in the denominator reduces against the $4ac$ in the numerator, leaving just $2c$; and you can move a negative sign out of the denominator and into the numerator, giving the alternative form of the quadratic formula you wanted.</p> <p>Finally, you ask</p> <blockquote> <p>Why is (2) somewhat similar to $\frac{1}{(1)}$ but with $2a$ replaced with $−2c$?</p> </blockquote> <p>As Ross Millikan says in his answer, if in the original equation $ax^2 + bx + c =0$ we assume that $x=0$ is not a solution (which is equivalent to assuming that $c\ne 0$), then we can divide the entire equation through by $x^2$, obtaining $$c \left(\frac{1}{x}\right)^2 + b\left(\frac1x\right) + a = 0$$ If we set $u=\frac1x$, then this is $cu^2 + bu + a = 0$, and the quadratic equation tells us $$u = \frac{ - b \pm \sqrt{b^2 - 4ac} }{2c}$$ Finally we get $$x = \frac{2c}{-b \pm \sqrt{b^2-4ac}}$$ and you can pull the negative sign out of the denominator into the numerator. So the reason the equations are so similar is because of a duality in the equation: interchanging $a$ with $c$ and simultaneously replacing $x$ with $1/x$ changes one quadratic equation into an equivalent one.</p> Answer 33: <p>You can write $ax^2+bx+c=0$ as $a+b(\frac 1x)+c(\frac 1x)^2=0,$ solve for $\frac 1x$, then invert it. 
You can have the minus sign top or bottom as you like by multiplying top and bottom by $-1$</p> Answer 34: <p>Take (2), and rationalize the denominator: </p> <p>$$\frac{-2c}{b \pm \sqrt{b^2-4ac}} = \frac{-2c}{b \pm \sqrt{b^2-4ac}}\frac{b \mp \sqrt{b^2-4ac}}{b \mp \sqrt{b^2-4ac}} = $$ $$\frac{-2c(b\mp\sqrt{b^2-4ac})}{b^2-(b^2-4ac)} = \frac{-b \mp \sqrt{b^2-4ac}}{2a}$$</p> Answer 35: <p>Just for the fun trivia, here are some other "quadratic formulas" $(a&gt;0):$</p> <p>for $|\frac{b^2}{2a}-2c-1|\le1:$</p> <p>$$x=-\frac b{2a}\pm\frac1{\sqrt a}\cos\left(\frac12\arccos\left(\frac{b^2}{2a}-2c-1\right)\right)$$</p> <p>for $\frac{b^2}{2a}-2c-1&gt;1:$</p> <p>$$x=-\frac b{2a}\pm\frac1{\sqrt a}\cosh\left(\frac12\operatorname{arccosh}\left(\frac{b^2}{2a}-2c-1\right)\right)$$</p> <p>You can play around with <a href="https://www.desmos.com/calculator/sjrialidhy">some graphs here</a> if you want.</p> Answer 36: <p>There are a few errors in the question as posed.</p> <p>First, the quadratic formula is $$x = \frac{-b \pm \sqrt {b^2-4ac} }{2a}$$ Note the factor of $a$ in the denominator, missing from the OP. [<strong>EDIT</strong> -- I see this has been fixed now.]</p> <p>Second, you write:</p> <blockquote> <p>In fact, (2) gives solutions for $0x^2+bx+c=0$!</p> </blockquote> <p>This is not true, and does not even make sense. The equation $0x^2+bx+c=0$ is equivalent to $bx = -c$, whose only solution is $x=-c/b$ (assuming $b\ne 0$).</p> <p>In any case, to turn (1) into (2), just multiply by $-b \mp \sqrt{b^2-4ac}$ in both the numerator and the denominator:</p> <p>$$x = \frac{-b \pm \sqrt {b^2-4ac} }{2a} \cdot \frac{-b \mp \sqrt{b^2-4ac}}{-b \mp \sqrt{b^2-4ac}}$$ The numerator now has the form $(X \pm Y)(Y \pm X)$, which simplifies to just $X^2-Y^2$. 
So we have $$x = \frac{1}{2a} \frac{ \left( -b \right)^2 - \left(b^2-4ac\right)}{-b \mp \sqrt{b^2-4ac}}$$ Cleaning up, the $(-b)^2-b^2$ in the numerator cancels out; the $2a$ in the denominator reduces against the $4ac$ in the numerator, leaving just $2c$; and you can move a negative sign out of the denominator and into the numerator, giving the alternative form of the quadratic formula you wanted.</p> <p>Finally, you ask</p> <blockquote> <p>Why is (2) somewhat similar to $\frac{1}{(1)}$ but with $2a$ replaced with $−2c$?</p> </blockquote> <p>As Ross Millikan says in his answer, if in the original equation $ax^2 + bx + c =0$ we assume that $x=0$ is not a solution (which is equivalent to assuming that $c\ne 0$), then we can divide the entire equation through by $x^2$, obtaining $$c \left(\frac{1}{x}\right)^2 + b\left(\frac1x\right) + a = 0$$ If we set $u=\frac1x$, then this is $cu^2 + bu + a = 0$, and the quadratic equation tells us $$u = \frac{ - b \pm \sqrt{b^2 - 4ac} }{2c}$$ Finally we get $$x = \frac{2c}{-b \pm \sqrt{b^2-4ac}}$$ and you can pull the negative sign out of the denominator into the numerator. So the reason the equations are so similar is because of a duality in the equation: interchanging $a$ with $c$ and simultaneously replacing $x$ with $1/x$ changes one quadratic equation into an equivalent one.</p> Answer 37: <p>You can write $ax^2+bx+c=0$ as $a+b(\frac 1x)+c(\frac 1x)^2=0,$ solve for $\frac 1x$, then invert it. 
You can have the minus sign top or bottom as you like by multiplying top and bottom by $-1$</p> Answer 38: <p>Take (2), and rationalize the denominator: </p> <p>$$\frac{-2c}{b \pm \sqrt{b^2-4ac}} = \frac{-2c}{b \pm \sqrt{b^2-4ac}}\frac{b \mp \sqrt{b^2-4ac}}{b \mp \sqrt{b^2-4ac}} = $$ $$\frac{-2c(b\mp\sqrt{b^2-4ac})}{b^2-(b^2-4ac)} = \frac{-b \mp \sqrt{b^2-4ac}}{2a}$$</p> Answer 39: <p>Just for the fun trivia, here are some other "quadratic formulas" $(a&gt;0):$</p> <p>for $|\frac{b^2}{2a}-2c-1|\le1:$</p> <p>$$x=-\frac b{2a}\pm\frac1{\sqrt a}\cos\left(\frac12\arccos\left(\frac{b^2}{2a}-2c-1\right)\right)$$</p> <p>for $\frac{b^2}{2a}-2c-1&gt;1:$</p> <p>$$x=-\frac b{2a}\pm\frac1{\sqrt a}\cosh\left(\frac12\operatorname{arccosh}\left(\frac{b^2}{2a}-2c-1\right)\right)$$</p> <p>You can play around with <a href="https://www.desmos.com/calculator/sjrialidhy">some graphs here</a> if you want.</p> Answer 40: <p>There are a few errors in the question as posed.</p> <p>First, the quadratic formula is $$x = \frac{-b \pm \sqrt {b^2-4ac} }{2a}$$ Note the factor of $a$ in the denominator, missing from the OP. [<strong>EDIT</strong> -- I see this has been fixed now.]</p> <p>Second, you write:</p> <blockquote> <p>In fact, (2) gives solutions for $0x^2+bx+c=0$!</p> </blockquote> <p>This is not true, and does not even make sense. The equation $0x^2+bx+c=0$ is equivalent to $bx = -c$, whose only solution is $x=-c/b$ (assuming $b\ne 0$).</p> <p>In any case, to turn (1) into (2), just multiply by $-b \mp \sqrt{b^2-4ac}$ in both the numerator and the denominator:</p> <p>$$x = \frac{-b \pm \sqrt {b^2-4ac} }{2a} \cdot \frac{-b \mp \sqrt{b^2-4ac}}{-b \mp \sqrt{b^2-4ac}}$$ The numerator now has the form $(X \pm Y)(Y \pm X)$, which simplifies to just $X^2-Y^2$. 
So we have $$x = \frac{1}{2a} \frac{ \left( -b \right)^2 - \left(b^2-4ac\right)}{-b \mp \sqrt{b^2-4ac}}$$ Cleaning up, the $(-b)^2-b^2$ in the numerator cancels out; the $2a$ in the denominator reduces against the $4ac$ in the numerator, leaving just $2c$; and you can move a negative sign out of the denominator and into the numerator, giving the alternative form of the quadratic formula you wanted.</p> <p>Finally, you ask</p> <blockquote> <p>Why is (2) somewhat similar to $\frac{1}{(1)}$ but with $2a$ replaced with $−2c$?</p> </blockquote> <p>As Ross Millikan says in his answer, if in the original equation $ax^2 + bx + c =0$ we assume that $x=0$ is not a solution (which is equivalent to assuming that $c\ne 0$), then we can divide the entire equation through by $x^2$, obtaining $$c \left(\frac{1}{x}\right)^2 + b\left(\frac1x\right) + a = 0$$ If we set $u=\frac1x$, then this is $cu^2 + bu + a = 0$, and the quadratic equation tells us $$u = \frac{ - b \pm \sqrt{b^2 - 4ac} }{2c}$$ Finally we get $$x = \frac{2c}{-b \pm \sqrt{b^2-4ac}}$$ and you can pull the negative sign out of the denominator into the numerator. So the reason the equations are so similar is because of a duality in the equation: interchanging $a$ with $c$ and simultaneously replacing $x$ with $1/x$ changes one quadratic equation into an equivalent one.</p> Answer 41: <p>You can write $ax^2+bx+c=0$ as $a+b(\frac 1x)+c(\frac 1x)^2=0,$ solve for $\frac 1x$, then invert it. 
You can have the minus sign top or bottom as you like by multiplying top and bottom by $-1$</p> Answer 42: <p>Take (2), and rationalize the denominator: </p> <p>$$\frac{-2c}{b \pm \sqrt{b^2-4ac}} = \frac{-2c}{b \pm \sqrt{b^2-4ac}}\frac{b \mp \sqrt{b^2-4ac}}{b \mp \sqrt{b^2-4ac}} = $$ $$\frac{-2c(b\mp\sqrt{b^2-4ac})}{b^2-(b^2-4ac)} = \frac{-b \mp \sqrt{b^2-4ac}}{2a}$$</p> Answer 43: <p>Just for the fun trivia, here are some other "quadratic formulas" $(a&gt;0):$</p> <p>for $|\frac{b^2}{2a}-2c-1|\le1:$</p> <p>$$x=-\frac b{2a}\pm\frac1{\sqrt a}\cos\left(\frac12\arccos\left(\frac{b^2}{2a}-2c-1\right)\right)$$</p> <p>for $\frac{b^2}{2a}-2c-1&gt;1:$</p> <p>$$x=-\frac b{2a}\pm\frac1{\sqrt a}\cosh\left(\frac12\operatorname{arccosh}\left(\frac{b^2}{2a}-2c-1\right)\right)$$</p> <p>You can play around with <a href="https://www.desmos.com/calculator/sjrialidhy">some graphs here</a> if you want.</p> Answer 44: <p>There are a few errors in the question as posed.</p> <p>First, the quadratic formula is $$x = \frac{-b \pm \sqrt {b^2-4ac} }{2a}$$ Note the factor of $a$ in the denominator, missing from the OP. [<strong>EDIT</strong> -- I see this has been fixed now.]</p> <p>Second, you write:</p> <blockquote> <p>In fact, (2) gives solutions for $0x^2+bx+c=0$!</p> </blockquote> <p>This is not true, and does not even make sense. The equation $0x^2+bx+c=0$ is equivalent to $bx = -c$, whose only solution is $x=-c/b$ (assuming $b\ne 0$).</p> <p>In any case, to turn (1) into (2), just multiply by $-b \mp \sqrt{b^2-4ac}$ in both the numerator and the denominator:</p> <p>$$x = \frac{-b \pm \sqrt {b^2-4ac} }{2a} \cdot \frac{-b \mp \sqrt{b^2-4ac}}{-b \mp \sqrt{b^2-4ac}}$$ The numerator now has the form $(X \pm Y)(Y \pm X)$, which simplifies to just $X^2-Y^2$. 
So we have $$x = \frac{1}{2a} \frac{ \left( -b \right)^2 - \left(b^2-4ac\right)}{-b \mp \sqrt{b^2-4ac}}$$ Cleaning up, the $(-b)^2-b^2$ in the numerator cancels out; the $2a$ in the denominator reduces against the $4ac$ in the numerator, leaving just $2c$; and you can move a negative sign out of the denominator and into the numerator, giving the alternative form of the quadratic formula you wanted.</p> <p>Finally, you ask</p> <blockquote> <p>Why is (2) somewhat similar to $\frac{1}{(1)}$ but with $2a$ replaced with $−2c$?</p> </blockquote> <p>As Ross Millikan says in his answer, if in the original equation $ax^2 + bx + c =0$ we assume that $x=0$ is not a solution (which is equivalent to assuming that $c\ne 0$), then we can divide the entire equation through by $x^2$, obtaining $$c \left(\frac{1}{x}\right)^2 + b\left(\frac1x\right) + a = 0$$ If we set $u=\frac1x$, then this is $cu^2 + bu + a = 0$, and the quadratic formula tells us $$u = \frac{ - b \pm \sqrt{b^2 - 4ac} }{2c}$$ Finally we get $$x = \frac{2c}{-b \pm \sqrt{b^2-4ac}}$$ and you can pull the negative sign out of the denominator into the numerator. So the reason the equations are so similar is because of a duality in the equation: interchanging $a$ with $c$ and simultaneously replacing $x$ with $1/x$ changes one quadratic equation into an equivalent one.</p>
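As a quick numerical sanity check of the answers above (a hypothetical sketch, not taken from any of the posted answers; the coefficients are chosen only for illustration), one can verify in Python that form (1), $x = \frac{-b \pm \sqrt{b^2-4ac}}{2a}$, and form (2), $x = \frac{2c}{-b \mp \sqrt{b^2-4ac}}$, give the same roots — noting the sign swap that pairs them up:

```python
import math

# Illustrative coefficients with a real discriminant: 2x^2 - 7x + 3 = 0
a, b, c = 2.0, -7.0, 3.0
d = math.sqrt(b * b - 4 * a * c)

# Form (1): x = (-b ± √D) / (2a)
roots_std = ((-b + d) / (2 * a), (-b - d) / (2 * a))

# Form (2): x = 2c / (-b ∓ √D) — the flipped sign pairs each root
# with its counterpart from form (1), per the rationalization above.
roots_alt = (2 * c / (-b - d), 2 * c / (-b + d))

for r1, r2 in zip(roots_std, roots_alt):
    assert math.isclose(r1, r2), (r1, r2)
    # and each root really solves a*x^2 + b*x + c = 0
    assert math.isclose(a * r1 * r1 + b * r1 + c, 0.0, abs_tol=1e-9)
```

For these coefficients both forms return the roots $3$ and $\tfrac12$; swapping the $\pm$ pairing in form (2) would make the comparison fail, which is why the $\mp$ in the derivation matters.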
Learning algebraic groups through examples
2,106,267
8
https://math.stackexchange.com/questions/2106267/learning-algebraic-groups-through-examples
<p>I am seeking a good first reference on algebraic groups, or even just linear algebraic groups, where the general theory can be understood through examples for the classical groups.</p> <p>Understanding the theoretical definitions of tori, unipotent radical, reductive group, root system, parabolic subgroups, Levi decomposition, etc. is one thing. I always find a huge and sometimes unbridgeable gap between those definitions and formal manipulations and... what I could say about a specific example, say some classical linear algebraic group. I found the notes of Murnaghan in the Clay Math Proceedings, which are really understandable and work out lots of examples. However, in expository notes of only a few pages, the general theory, motivations and further examples are lacking, and I would like to find something more comprehensive to begin studying (and practicing!) algebraic groups.</p> <blockquote> <p>So I wonder, what reference would you advise for learning algebraic groups, illustrated by lots of computable examples?</p> </blockquote> <p>Here are some precisions: I work with automorphic forms, and consider mainly adelic points of an algebraic group as well as its local components. I would like to understand how to deal with them and build an intuition about what that means. For instance: what are the relations between compactness and the existence of parabolic subgroups? why are the tori so central and important? what are the usual isomorphisms between quotients and sequences? etc. I believe I can encapsulate it as: understanding what exactly is behind Murnaghan's notes for number theorists?</p>
[ { "answer_id": 2149354, "score": 2, "text": "<p><a href=\"http://web.stanford.edu/~makisumi/articles/reductivegroups.pdf\" rel=\"nofollow noreferrer\">These</a> notes, especially the worked examples, helped me a lot when I needed to get up to speed with algebraic groups without having the time to learn the theory in depth and from scratch.</p>\n" }, { "answer_id": 2308704, "score": 0, "text": "<p>Algebraic groups are mainly used as a language, not an end in itself, when talking about automorphic forms. You should think of Murnaghan's article as introducing the vocabulary. The tori are important for defining the root decomposition as well as characters and cocharacters, which appear in the definition of root datum. Unfortunately I've looked and looked and for automorphic forms, the expositions that exist mainly consist of lecture notes and research articles. I don't know of any comprehensive source of information. There is certainly a need for it, since I think that the biggest stumbling block for number theorists in coming to automorphic forms may simply just be the language of algebraic groups.</p>\n" } ]
The tori are important for defining the root decomposition as well as characters and cocharacters, which appear in the definition of root datum. Unfortunately I've looked and looked and for automorphic forms, the expositions that exist mainly consist of lecture notes and research articles. I don't know of any comprehensive source of information. There is certainly a need for it, since I think that the biggest stumbling block for number theorists in coming to automorphic forms may simply just be the language of algebraic groups.</p>\n" } ]
Question: Learning algebraic groups through examples Answer 1: <p><a href="http://web.stanford.edu/~makisumi/articles/reductivegroups.pdf" rel="nofollow noreferrer">These</a> notes, especially the worked examples, helped me a lot when I needed to get up to speed with algebraic groups without having the time to learn the theory in depth and from scratch.</p> Answer 2: <p>Algebraic groups are mainly used as a language, not an end in itself, when talking about automorphic forms. You should think of Murnaghan's article as introducing the vocabulary. The tori are important for defining the root decomposition as well as characters and cocharacters, which appear in the definition of root datum. Unfortunately I've looked and looked and for automorphic forms, the expositions that exist mainly consist of lecture notes and research articles. I don't know of any comprehensive source of information. There is certainly a need for it, since I think that the biggest stumbling block for number theorists in coming to automorphic forms may simply just be the language of algebraic groups.</p>
Does $\sum_{n=2}^{\infty}\frac{n^4}{\log(1)+\log(2)+\log(3)+\cdots+\log(n)}$ converge?
2,106,284
8
https://math.stackexchange.com/questions/2106284/does-sum-n-2-infty-fracn4-log1-log2-log3-cdots-logn-con
<p>I can't find a way to test the convergence/divergence of this series:</p> <p>$$ \sum_{n=2}^{\infty}\frac{n^4}{\log(1)+\log(2)+\log(3)+\cdots+\log(n)} $$</p> <p>I tried the Cauchy method, but in order to make the logarithms more manageable I grouped them all (so $\log(1)+\log(2)+\log(3)+\cdots+\log(n)=\log(n!)$). The problem is, I don't know how to differentiate that when I need to. So I'd be grateful for some help if someone can think of a different way, or just a way to improve mine (using the Cauchy method somehow so that it works).</p>
[ { "answer_id": 2106315, "score": 11, "text": "<p><em>Without Stirling.</em></p>\n\n<p>Note that the denominator is $$\n\sum_{k=1}^n \log k \leq \sum_{k=1}^n \log n = n\log n\n$$\nfrom which you can lower bound the general term of your series by $$a_n\stackrel{\rm def}{=} \frac{n^4}{\log 1+\log 2+\dots+\log n} \geq \frac{n^3}{\log n}\xrightarrow[n\to\infty]{} \infty$$\nand therefore the series $\sum_n a_n$ diverges, as its general term does not even go to $0$.</p>\n" }, { "answer_id": 2106312, "score": 7, "text": "<p>It is easy to show that $n!\le n^n$.</p>\n\n<p>Therefore, $\log(n!)\le n\log(n)$ and we have</p>\n\n<p>$$\sum_{n=2}^N \frac{n^4}{\log(n!)}\ge \sum_{n=2}^N \frac{n^3}{\log(n)}\to \infty \,\,\text{as}\,\,N\to \infty$$</p>\n" }, { "answer_id": 2106292, "score": 3, "text": "<p>We can use the comparison test to show it diverges, comparing with the series $b_n = n^2$.</p>\n\n<p>First note that $\log(n!) \approx n\log(n) - n$ by Stirling's formula.</p>\n\n<p>Then $a_n = \frac{n^4}{\log(n!)} \approx \frac{n^4}{n\log(n) - n} = \frac{n^3}{\log(n) - 1} &gt; \frac{n^3}{n} = n^2$, which diverges. 
Then, your series must also diverge.</p>\n" } ]
Then, your series must also diverge.</p>\n" }, { "answer_id": 2106315, "score": 11, "text": "<p><em>Without Stirling.</em></p>\n\n<p>Note that the denominator is $$\n\\sum_{k=1}^n \\log k \\leq \\sum_{k=1}^n \\log n = n\\log n\n$$\nfrom which you can lower bound the general term of your series by $$a_n\\stackrel{\\rm def}{=} \\frac{n^4}{\\log 1+\\log 2+\\dots+\\log n} \\geq \\frac{n^3}{\\log n}\\xrightarrow[n\\to\\infty]{} \\infty$$\nand therefore the series $\\sum_n a_n$ diverges, as its general term does not even go to $0$.</p>\n" }, { "answer_id": 2106312, "score": 7, "text": "<p>It is easy to show that $n!\\le n^n$.</p>\n\n<p>Therefore, $\\log(n!)\\le n\\log(n)$ and we have</p>\n\n<p>$$\\sum_{n=2}^N \\frac{n^4}{\\log(n!)}\\ge \\sum_{n=2}^N \\frac{n^3}{\\log(n)}\\to \\infty \\,\\,\\text{as}\\,\\,N\\to \\infty$$</p>\n" }, { "answer_id": 2106292, "score": 3, "text": "<p>We can use the comparison test to show it diverges and we will be using the series $b_n = n^2$.</p>\n\n<p>First note that $\\log(n!) \\approx n\\log(n) - n$ by the Stirling's formula.</p>\n\n<p>Then $a_n = \\frac{n^4}{\\log(n!)} \\to \\frac{n^4}{n\\log(n) - n} = \\frac{n^3}{\\log(n) - 1} &gt; \\frac{n^3}{n} = n^2$ which diverges. 
Then, your series must also diverge.</p>\n" }, { "answer_id": 2106315, "score": 11, "text": "<p><em>Without Stirling.</em></p>\n\n<p>Note that the denominator is $$\n\\sum_{k=1}^n \\log k \\leq \\sum_{k=1}^n \\log n = n\\log n\n$$\nfrom which you can lower bound the general term of your series by $$a_n\\stackrel{\\rm def}{=} \\frac{n^4}{\\log 1+\\log 2+\\dots+\\log n} \\geq \\frac{n^3}{\\log n}\\xrightarrow[n\\to\\infty]{} \\infty$$\nand therefore the series $\\sum_n a_n$ diverges, as its general term does not even go to $0$.</p>\n" }, { "answer_id": 2106312, "score": 7, "text": "<p>It is easy to show that $n!\\le n^n$.</p>\n\n<p>Therefore, $\\log(n!)\\le n\\log(n)$ and we have</p>\n\n<p>$$\\sum_{n=2}^N \\frac{n^4}{\\log(n!)}\\ge \\sum_{n=2}^N \\frac{n^3}{\\log(n)}\\to \\infty \\,\\,\\text{as}\\,\\,N\\to \\infty$$</p>\n" }, { "answer_id": 2106292, "score": 3, "text": "<p>We can use the comparison test to show it diverges and we will be using the series $b_n = n^2$.</p>\n\n<p>First note that $\\log(n!) \\approx n\\log(n) - n$ by the Stirling's formula.</p>\n\n<p>Then $a_n = \\frac{n^4}{\\log(n!)} \\to \\frac{n^4}{n\\log(n) - n} = \\frac{n^3}{\\log(n) - 1} &gt; \\frac{n^3}{n} = n^2$ which diverges. 
Then, your series must also diverge.</p>\n" }, { "answer_id": 2106315, "score": 11, "text": "<p><em>Without Stirling.</em></p>\n\n<p>Note that the denominator is $$\n\\sum_{k=1}^n \\log k \\leq \\sum_{k=1}^n \\log n = n\\log n\n$$\nfrom which you can lower bound the general term of your series by $$a_n\\stackrel{\\rm def}{=} \\frac{n^4}{\\log 1+\\log 2+\\dots+\\log n} \\geq \\frac{n^3}{\\log n}\\xrightarrow[n\\to\\infty]{} \\infty$$\nand therefore the series $\\sum_n a_n$ diverges, as its general term does not even go to $0$.</p>\n" }, { "answer_id": 2106312, "score": 7, "text": "<p>It is easy to show that $n!\\le n^n$.</p>\n\n<p>Therefore, $\\log(n!)\\le n\\log(n)$ and we have</p>\n\n<p>$$\\sum_{n=2}^N \\frac{n^4}{\\log(n!)}\\ge \\sum_{n=2}^N \\frac{n^3}{\\log(n)}\\to \\infty \\,\\,\\text{as}\\,\\,N\\to \\infty$$</p>\n" }, { "answer_id": 2106292, "score": 3, "text": "<p>We can use the comparison test to show it diverges and we will be using the series $b_n = n^2$.</p>\n\n<p>First note that $\\log(n!) \\approx n\\log(n) - n$ by the Stirling's formula.</p>\n\n<p>Then $a_n = \\frac{n^4}{\\log(n!)} \\to \\frac{n^4}{n\\log(n) - n} = \\frac{n^3}{\\log(n) - 1} &gt; \\frac{n^3}{n} = n^2$ which diverges. 
Then, your series must also diverge.</p>\n" }, { "answer_id": 2106315, "score": 11, "text": "<p><em>Without Stirling.</em></p>\n\n<p>Note that the denominator is $$\n\\sum_{k=1}^n \\log k \\leq \\sum_{k=1}^n \\log n = n\\log n\n$$\nfrom which you can lower bound the general term of your series by $$a_n\\stackrel{\\rm def}{=} \\frac{n^4}{\\log 1+\\log 2+\\dots+\\log n} \\geq \\frac{n^3}{\\log n}\\xrightarrow[n\\to\\infty]{} \\infty$$\nand therefore the series $\\sum_n a_n$ diverges, as its general term does not even go to $0$.</p>\n" }, { "answer_id": 2106312, "score": 7, "text": "<p>It is easy to show that $n!\\le n^n$.</p>\n\n<p>Therefore, $\\log(n!)\\le n\\log(n)$ and we have</p>\n\n<p>$$\\sum_{n=2}^N \\frac{n^4}{\\log(n!)}\\ge \\sum_{n=2}^N \\frac{n^3}{\\log(n)}\\to \\infty \\,\\,\\text{as}\\,\\,N\\to \\infty$$</p>\n" }, { "answer_id": 2106292, "score": 3, "text": "<p>We can use the comparison test to show it diverges and we will be using the series $b_n = n^2$.</p>\n\n<p>First note that $\\log(n!) \\approx n\\log(n) - n$ by the Stirling's formula.</p>\n\n<p>Then $a_n = \\frac{n^4}{\\log(n!)} \\to \\frac{n^4}{n\\log(n) - n} = \\frac{n^3}{\\log(n) - 1} &gt; \\frac{n^3}{n} = n^2$ which diverges. 
Then, your series must also diverge.</p>\n" }, { "answer_id": 2106315, "score": 11, "text": "<p><em>Without Stirling.</em></p>\n\n<p>Note that the denominator is $$\n\\sum_{k=1}^n \\log k \\leq \\sum_{k=1}^n \\log n = n\\log n\n$$\nfrom which you can lower bound the general term of your series by $$a_n\\stackrel{\\rm def}{=} \\frac{n^4}{\\log 1+\\log 2+\\dots+\\log n} \\geq \\frac{n^3}{\\log n}\\xrightarrow[n\\to\\infty]{} \\infty$$\nand therefore the series $\\sum_n a_n$ diverges, as its general term does not even go to $0$.</p>\n" }, { "answer_id": 2106312, "score": 7, "text": "<p>It is easy to show that $n!\\le n^n$.</p>\n\n<p>Therefore, $\\log(n!)\\le n\\log(n)$ and we have</p>\n\n<p>$$\\sum_{n=2}^N \\frac{n^4}{\\log(n!)}\\ge \\sum_{n=2}^N \\frac{n^3}{\\log(n)}\\to \\infty \\,\\,\\text{as}\\,\\,N\\to \\infty$$</p>\n" }, { "answer_id": 2106292, "score": 3, "text": "<p>We can use the comparison test to show it diverges and we will be using the series $b_n = n^2$.</p>\n\n<p>First note that $\\log(n!) \\approx n\\log(n) - n$ by the Stirling's formula.</p>\n\n<p>Then $a_n = \\frac{n^4}{\\log(n!)} \\to \\frac{n^4}{n\\log(n) - n} = \\frac{n^3}{\\log(n) - 1} &gt; \\frac{n^3}{n} = n^2$ which diverges. Then, your series must also diverge.</p>\n" } ]
Question: Does $\sum_{n=2}^{\infty}\frac{n^4}{\log(1)+\log(2)+\log(3)+\cdots+\log(n)}$ converge? Answer 1: <p><em>Without Stirling.</em></p> <p>Note that the denominator is $$ \sum_{k=1}^n \log k \leq \sum_{k=1}^n \log n = n\log n $$ from which you can lower bound the general term of your series by $$a_n\stackrel{\rm def}{=} \frac{n^4}{\log 1+\log 2+\dots+\log n} \geq \frac{n^3}{\log n}\xrightarrow[n\to\infty]{} \infty$$ and therefore the series $\sum_n a_n$ diverges, as its general term does not even go to $0$.</p> Answer 2: <p>It is easy to show that $n!\le n^n$.</p> <p>Therefore, $\log(n!)\le n\log(n)$ and we have</p> <p>$$\sum_{n=2}^N \frac{n^4}{\log(n!)}\ge \sum_{n=2}^N \frac{n^3}{\log(n)}\to \infty \,\,\text{as}\,\,N\to \infty$$</p> Answer 3: <p>We can use the comparison test to show it diverges and we will be using the sequence $b_n = n^2$.</p> <p>First note that $\log(n!) \approx n\log(n) - n$ by Stirling's formula.</p> <p>Then $a_n = \frac{n^4}{\log(n!)} \approx \frac{n^4}{n\log(n) - n} = \frac{n^3}{\log(n) - 1} &gt; \frac{n^3}{n} = n^2$ which diverges. Then, your series must also diverge.</p>
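The bounds used in the answers above ($\log(n!) \le n\log n$, hence $a_n \ge n^3/\log n \to \infty$) are easy to check numerically. A minimal sketch, not part of the original posts (the function name `term` is illustrative); it uses the identity $\log(n!) = \operatorname{lgamma}(n+1)$:

```python
import math

def term(n):
    # a_n = n^4 / log(n!), with log(n!) computed as lgamma(n+1)
    return n ** 4 / math.lgamma(n + 1)

# The general term blows up, so the series cannot converge:
# a_n >= n^3 / log(n), because log(n!) <= n*log(n).
for n in [10, 100, 1000]:
    assert math.lgamma(n + 1) <= n * math.log(n)
    assert term(n) >= n ** 3 / math.log(n)
```

Since the terms grow without bound, the divergence test (general term not tending to $0$) settles the question, exactly as in Answer 1.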
If $a+b=10^{50}$ then show $10|a$
2,107,350
8
https://math.stackexchange.com/questions/2107350/if-ab-1050-then-show-10a
<blockquote> <p>Suppose $a$ is a positive integer and another positive integer $b$ is obtained by jumbling up the digits of $a$. Suppose $a+b=10^{50}$. Then show that $10|a$.</p> </blockquote> <p>I first looked at what happens if $a+b=100$. Letting $a=10x+y$, if $b\neq a$ then $b=10y+x$ implying $11(x+y)=100$, a contradiction. So $a=b$ meaning $a=50$ and hence $10|a$.</p> <p>Then I looked at $a+b=10^3$. Here the situation is more complex, and I don't really see any easy pattern. Of course I also noted that if $10|a$ then $10|b$ also, so essentially by dividing both sides of $a+b=10^3$ by $10$ we can get $a^*+b^*=100$ and we are back in the original situation, which has already been solved, i.e. $a^*=50$ implying $a=500$ implying $10|a$ again.</p> <p>So somehow I have to show that if $a+b=10^3$ holds, then $a=b$. </p> <p>If I can tackle the smaller cases, I think I will be more capable of understanding what happens to $a+b=10^{50}$.</p> <p>Oh, I also know that if $a$ and $b$ are numbers with the same digits, then $9|a-b$. However, I don't know what else I can deduce.</p>
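Before the answers, a brute-force sanity check of the claim for small powers may help; it is not part of the original post, and the function names are illustrative. Note that the parity of the number of digits matters: the analogous claim is actually false for $10^3$ (for instance $275+725=1000$ and $725$ is a jumbling of $275$), while it holds for even digit counts such as $10^2$, $10^4$, and $10^{50}$:

```python
def same_digits(a, b):
    # b is a "jumbling" of a: identical digit multisets (same length, same digits)
    return sorted(str(a)) == sorted(str(b))

def claim_holds(power):
    # Does a + b = 10**power with b a digit-rearrangement of a force 10 | a?
    total = 10 ** power
    return all(a % 10 == 0
               for a in range(1, total)
               if same_digits(a, total - a))

assert claim_holds(2)       # 50 + 50 is the only such pair
assert claim_holds(4)       # even digit count: the claim holds
assert not claim_holds(3)   # odd digit count: 275 + 725 = 1000 is a counterexample
```

The exhaustive loops are feasible only for small exponents, but they suggest the parity-of-digit-count structure that the accepted answer below exploits for $10^{50}$.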
[ { "answer_id": 2107358, "score": 6, "text": "<p>Assume $a \\not \\equiv 0 \\pmod {10}$. Note that $a$ must be smaller than $10^{50}$, yet bigger than $10^{49}$. </p>\n\n<p>Let $$a=\\sum_{i=0}^{49}a_{i}10^{i}$$\nThen let $(b_{0}, b_{1}, b_{2}, b_{3}, b_{4}, \\dots, b_{49})$ be the digits of $b$ in order, and because of the definition of $b$ it follows that they are a rearrangement of $(a_{0}, a_{1}, a_{2}, a_{3}, a_{4}, \\dots, a_{49})$. </p>\n\n<p>So $$10^{50}=\\sum_{i=0}^{49}(a_{i}+b_{i})10^{i}$$</p>\n\n<p>Since $a_{0} \\neq 0$, we have that $a_{0}+b_{0}=10$. Because of carrying, we can use this to claim that $a_{1}+b_{1}=9$, and so on. Eventually, we note that for $49 \\ge i \\ge 1$ we have $a_{i}+b_{i}=9$. So we have $$\\sum_{i=0}^{49}(a_{i}+b_{i})=10 +9 +9+ \\dots +9= 9 \\times 49+10 =451$$\nHowever, since $(b_{0}, b_{1}, b_{2}, b_{3}, b_{4}, \\dots, b_{49})$ is a rearrangement of $(a_{0}, a_{1}, a_{2}, a_{3}, a_{4}, \\dots, a_{49})$, we have $$451=\\sum_{i=0}^{49}(a_{i}+b_{i})=2\\sum_{i=0}^{49}a_{i}$$\nA contradiction, since $451$ is odd while $2\\sum\\limits_{i=0}^{49}a_{i}$ is even. </p>\n" }, { "answer_id": 2107531, "score": 1, "text": "<p>(I'd like to post an answer using more intuition and less fancy technique. The proof may look a bit weird at first, but I think it is valid.)</p>\n\n<p>Step 1: show that the last digit of $a$ can only be $0$ or $5$.</p>\n\n<p>Step 2: show that the last digit of $a$ cannot be $5$.</p>\n\n<p>Now step 1: </p>\n\n<p>Suppose the last digit of $a$ is $2$. Then the last digit of $b$ has to be $8$. Because $a$ is essentially a permutation of the digits of $b$, some digit of $a$ must be $8$; say it is the $K^{th}$ digit of $a$. Then the $K^{th}$ digit of $b$ must be $1$ because $a$ and $b$ add up to $10^{50}$. By similar reasoning, $1$ must be some digit of $b$, say the $L^{th}$ digit of $b$. Then the $L^{th}$ digit of $a$ has to be $8$. In this way, the digits $1$ and $8$ reappear repeatedly and the digit $2$ never appears again. (You can try a few steps to understand the pairing system.) So $a$ is not a permutation of the digits of $b$. A contradiction. </p>\n\n<p>Using similar arguments, we can show that the last digit of $a$ can only be $0$ or $5$.</p>\n\n<p>Now step 2:</p>\n\n<p>Suppose the last digit of $a$ is $5$. Then the last digit of $b$ is $5$. Then we have:</p>\n\n<p>$$\\overline{a5}+\\overline{b5}=10^{50}$$$$\\Rightarrow\\overline{a0}+\\overline{b0}=\\underbrace{99\\ldots99}_{49}0$$$$\\Rightarrow\\overline{a}+\\overline{b}=\\underbrace{99\\ldots99}_{49}$$</p>\n\n<p>We are going to prove that the last equation has no solution. Apply the same method as in step 1. Suppose the last digit of $a$ is $3$. Then the last digit of $b$ is $6$. Then some digit of $a$ is $6$. The pattern in this pairing system is that $3$ and $6$ appear alternately. Note that $49$ is an odd number. So in the end there will be either one more $3$ or one more $6$. So $a$ is not a permutation of $b$. Applying similar reasoning, we can show that the last digit of $a$ cannot be any of $1, 2, 3, 4, 5, 6, 7, 8, 9, 0$. So $a$ does not exist. </p>\n\n<p>We have now proved that the last digit of $a$ can only be $0$.</p>\n" }, { "answer_id": 2107358, "score": 6, "text": "<p>Assume $a \\not \\equiv 0 \\pmod {10}$. Note that $a$ must be smaller than $10^{50}$, yet bigger than $10^{49}$. 
</p>\n\n<p>Let $$a=\\sum_{i=0}^{49}a_{i}10^{i}$$\nThen let $(b_{0}, b_{1}, b_{2}, b_{3}, b_{4}, \\dots, b_{49})$ be the digits of $b$ in order, and because of the definition of $b$ it follows that they are a rearrangement of $(a_{0}, a_{1}, a_{2}, a_{3}, a_{4}, \\dots, a_{49})$. </p>\n\n<p>So $$10^{50}=\\sum_{i=i}^{49}(a_{i}+b_{i})10^{i}$$</p>\n\n<p>Since $a_{0} \\neq 0$, we have that $a_{0}+b_{0}=10$. Because of carrying, we can use this to claim that $a_{1}+b_{1}=9$, and so on. Eventually, we note that for $n \\ge i \\ge 1 $ we have $a_{i}+b_{i}=9$. So we have $$\\sum_{i=i}^{49}(a_{i}+b_{i})=10 +9 +9+ \\dots +9= 9 \\times 49+10 =451$$\nHowever, since $(b_{0}, b_{1}, b_{2}, b_{3}, b_{4}, \\dots, b_{49})$ is a rearrangement of $(a_{0}, a_{1}, a_{2}, a_{3}, a_{4}, \\dots, a_{49})$, we have $$451=\\sum_{i=i}^{49}(a_{i}+b_{i})=2\\sum_{i=0}^{49}a_{i}$$\nA contradiction, since $\\sum\\limits_{i=0}^{49}a_{i}$ is a natural. </p>\n" }, { "answer_id": 2107531, "score": 1, "text": "<p>(I'd like to post an answer using more intuition and less fancy technique. The proof may look a bit weird at first, but I think it is valid. )</p>\n\n<p>Step 1: show that the last digit of $a$ can only be $0$ or $5$.</p>\n\n<p>Step 2: show that the last digit of $a$ cannot be $5$.</p>\n\n<p>Now step 1: </p>\n\n<p>Suppose the last digit of $a$ is $2$. Then the last digit of $b$ has to be $8$. Because $a$ is essentially a permutation of digits of $b$, so some digit of $a$ must be $8$, say its the $K^{th}$ digit of $a$. Then the $K^{th}$ of $b$ must be $1$ because $a$ and $b$ add up to $10^{50}$. By similar reasoning, $1$ must be some digit of $b$, say the $L^{th}$ digit of $b$. Then the $L^{th}$ digit of $a$ has to be $8$. In this way, number $1$ and $8$ reappears reiterately and number $2$ will never appear again. (You can try a few steps to understand the pairing system.) So $a$ is not a permutation of digits of $b$. A contradiction. 
</p>\n\n<p>Using similar ways, we can show that the last digit of $a$ can only be $0$ or $5$.</p>\n\n<p>Now step 2:</p>\n\n<p>Suppose the last digit of $a$ is 5. Then the last digit of $b$ is 5. Then we have:</p>\n\n<p>$$\\overline{a5}+\\overline{b5}=10^{50}$$$$\\Rightarrow\\overline{a0}+\\overline{b0}=\\underbrace{999...99}_{10^{50}-1}0$$$$\\Rightarrow\\overline{a}+\\overline{b}=\\underbrace{999...99}_{10^{50}-1}$$</p>\n\n<p>We are going to prove that the last equation has no solution. Apply the same method in step 1. Suppose the last digit of $a$ is $3$. Then the last digit of $b$ is $6$. Then some digit of $a$ is $6$. The pattern in this pairing system is, $3$ and $6$ appear alternately. Note that $(10^{50}-1)$ is an odd number. So at last, there will be either one more $3$ or one more $6$. So $a$ is not a permutation of $b$. Applying similar reasonings, we have shown that the last digit of $a$ cannot be $1, 2, 3, 4, 5, 6, 7, 8, 9, 0$. So $a$ doesn't exist. </p>\n\n<p>We have now proved that the last digit of $a$ can only be $0$.</p>\n" }, { "answer_id": 2107358, "score": 6, "text": "<p>Assume $a \\not \\equiv 0 \\pmod {10}$. Note that $a$ must be smaller than $10^{50}$, yet bigger than $10^{49}$. </p>\n\n<p>Let $$a=\\sum_{i=0}^{49}a_{i}10^{i}$$\nThen let $(b_{0}, b_{1}, b_{2}, b_{3}, b_{4}, \\dots, b_{49})$ be the digits of $b$ in order, and because of the definition of $b$ it follows that they are a rearrangement of $(a_{0}, a_{1}, a_{2}, a_{3}, a_{4}, \\dots, a_{49})$. </p>\n\n<p>So $$10^{50}=\\sum_{i=i}^{49}(a_{i}+b_{i})10^{i}$$</p>\n\n<p>Since $a_{0} \\neq 0$, we have that $a_{0}+b_{0}=10$. Because of carrying, we can use this to claim that $a_{1}+b_{1}=9$, and so on. Eventually, we note that for $n \\ge i \\ge 1 $ we have $a_{i}+b_{i}=9$. 
So we have $$\\sum_{i=i}^{49}(a_{i}+b_{i})=10 +9 +9+ \\dots +9= 9 \\times 49+10 =451$$\nHowever, since $(b_{0}, b_{1}, b_{2}, b_{3}, b_{4}, \\dots, b_{49})$ is a rearrangement of $(a_{0}, a_{1}, a_{2}, a_{3}, a_{4}, \\dots, a_{49})$, we have $$451=\\sum_{i=i}^{49}(a_{i}+b_{i})=2\\sum_{i=0}^{49}a_{i}$$\nA contradiction, since $\\sum\\limits_{i=0}^{49}a_{i}$ is a natural. </p>\n" }, { "answer_id": 2107531, "score": 1, "text": "<p>(I'd like to post an answer using more intuition and less fancy technique. The proof may look a bit weird at first, but I think it is valid. )</p>\n\n<p>Step 1: show that the last digit of $a$ can only be $0$ or $5$.</p>\n\n<p>Step 2: show that the last digit of $a$ cannot be $5$.</p>\n\n<p>Now step 1: </p>\n\n<p>Suppose the last digit of $a$ is $2$. Then the last digit of $b$ has to be $8$. Because $a$ is essentially a permutation of digits of $b$, so some digit of $a$ must be $8$, say its the $K^{th}$ digit of $a$. Then the $K^{th}$ of $b$ must be $1$ because $a$ and $b$ add up to $10^{50}$. By similar reasoning, $1$ must be some digit of $b$, say the $L^{th}$ digit of $b$. Then the $L^{th}$ digit of $a$ has to be $8$. In this way, number $1$ and $8$ reappears reiterately and number $2$ will never appear again. (You can try a few steps to understand the pairing system.) So $a$ is not a permutation of digits of $b$. A contradiction. </p>\n\n<p>Using similar ways, we can show that the last digit of $a$ can only be $0$ or $5$.</p>\n\n<p>Now step 2:</p>\n\n<p>Suppose the last digit of $a$ is 5. Then the last digit of $b$ is 5. Then we have:</p>\n\n<p>$$\\overline{a5}+\\overline{b5}=10^{50}$$$$\\Rightarrow\\overline{a0}+\\overline{b0}=\\underbrace{999...99}_{10^{50}-1}0$$$$\\Rightarrow\\overline{a}+\\overline{b}=\\underbrace{999...99}_{10^{50}-1}$$</p>\n\n<p>We are going to prove that the last equation has no solution. Apply the same method in step 1. Suppose the last digit of $a$ is $3$. Then the last digit of $b$ is $6$. 
Then some digit of $a$ is $6$. The pattern in this pairing system is, $3$ and $6$ appear alternately. Note that $(10^{50}-1)$ is an odd number. So at last, there will be either one more $3$ or one more $6$. So $a$ is not a permutation of $b$. Applying similar reasonings, we have shown that the last digit of $a$ cannot be $1, 2, 3, 4, 5, 6, 7, 8, 9, 0$. So $a$ doesn't exist. </p>\n\n<p>We have now proved that the last digit of $a$ can only be $0$.</p>\n" }, { "answer_id": 2107358, "score": 6, "text": "<p>Assume $a \\not \\equiv 0 \\pmod {10}$. Note that $a$ must be smaller than $10^{50}$, yet bigger than $10^{49}$. </p>\n\n<p>Let $$a=\\sum_{i=0}^{49}a_{i}10^{i}$$\nThen let $(b_{0}, b_{1}, b_{2}, b_{3}, b_{4}, \\dots, b_{49})$ be the digits of $b$ in order, and because of the definition of $b$ it follows that they are a rearrangement of $(a_{0}, a_{1}, a_{2}, a_{3}, a_{4}, \\dots, a_{49})$. </p>\n\n<p>So $$10^{50}=\\sum_{i=i}^{49}(a_{i}+b_{i})10^{i}$$</p>\n\n<p>Since $a_{0} \\neq 0$, we have that $a_{0}+b_{0}=10$. Because of carrying, we can use this to claim that $a_{1}+b_{1}=9$, and so on. Eventually, we note that for $n \\ge i \\ge 1 $ we have $a_{i}+b_{i}=9$. So we have $$\\sum_{i=i}^{49}(a_{i}+b_{i})=10 +9 +9+ \\dots +9= 9 \\times 49+10 =451$$\nHowever, since $(b_{0}, b_{1}, b_{2}, b_{3}, b_{4}, \\dots, b_{49})$ is a rearrangement of $(a_{0}, a_{1}, a_{2}, a_{3}, a_{4}, \\dots, a_{49})$, we have $$451=\\sum_{i=i}^{49}(a_{i}+b_{i})=2\\sum_{i=0}^{49}a_{i}$$\nA contradiction, since $\\sum\\limits_{i=0}^{49}a_{i}$ is a natural. </p>\n" }, { "answer_id": 2107531, "score": 1, "text": "<p>(I'd like to post an answer using more intuition and less fancy technique. The proof may look a bit weird at first, but I think it is valid. )</p>\n\n<p>Step 1: show that the last digit of $a$ can only be $0$ or $5$.</p>\n\n<p>Step 2: show that the last digit of $a$ cannot be $5$.</p>\n\n<p>Now step 1: </p>\n\n<p>Suppose the last digit of $a$ is $2$. 
Then the last digit of $b$ has to be $8$. Because $a$ is essentially a permutation of digits of $b$, so some digit of $a$ must be $8$, say its the $K^{th}$ digit of $a$. Then the $K^{th}$ of $b$ must be $1$ because $a$ and $b$ add up to $10^{50}$. By similar reasoning, $1$ must be some digit of $b$, say the $L^{th}$ digit of $b$. Then the $L^{th}$ digit of $a$ has to be $8$. In this way, number $1$ and $8$ reappears reiterately and number $2$ will never appear again. (You can try a few steps to understand the pairing system.) So $a$ is not a permutation of digits of $b$. A contradiction. </p>\n\n<p>Using similar ways, we can show that the last digit of $a$ can only be $0$ or $5$.</p>\n\n<p>Now step 2:</p>\n\n<p>Suppose the last digit of $a$ is 5. Then the last digit of $b$ is 5. Then we have:</p>\n\n<p>$$\\overline{a5}+\\overline{b5}=10^{50}$$$$\\Rightarrow\\overline{a0}+\\overline{b0}=\\underbrace{999...99}_{10^{50}-1}0$$$$\\Rightarrow\\overline{a}+\\overline{b}=\\underbrace{999...99}_{10^{50}-1}$$</p>\n\n<p>We are going to prove that the last equation has no solution. Apply the same method in step 1. Suppose the last digit of $a$ is $3$. Then the last digit of $b$ is $6$. Then some digit of $a$ is $6$. The pattern in this pairing system is, $3$ and $6$ appear alternately. Note that $(10^{50}-1)$ is an odd number. So at last, there will be either one more $3$ or one more $6$. So $a$ is not a permutation of $b$. Applying similar reasonings, we have shown that the last digit of $a$ cannot be $1, 2, 3, 4, 5, 6, 7, 8, 9, 0$. So $a$ doesn't exist. </p>\n\n<p>We have now proved that the last digit of $a$ can only be $0$.</p>\n" }, { "answer_id": 2107358, "score": 6, "text": "<p>Assume $a \\not \\equiv 0 \\pmod {10}$. Note that $a$ must be smaller than $10^{50}$, yet bigger than $10^{49}$. 
</p>\n\n<p>Let $$a=\\sum_{i=0}^{49}a_{i}10^{i}$$\nThen let $(b_{0}, b_{1}, b_{2}, b_{3}, b_{4}, \\dots, b_{49})$ be the digits of $b$ in order, and because of the definition of $b$ it follows that they are a rearrangement of $(a_{0}, a_{1}, a_{2}, a_{3}, a_{4}, \\dots, a_{49})$. </p>\n\n<p>So $$10^{50}=\\sum_{i=i}^{49}(a_{i}+b_{i})10^{i}$$</p>\n\n<p>Since $a_{0} \\neq 0$, we have that $a_{0}+b_{0}=10$. Because of carrying, we can use this to claim that $a_{1}+b_{1}=9$, and so on. Eventually, we note that for $n \\ge i \\ge 1 $ we have $a_{i}+b_{i}=9$. So we have $$\\sum_{i=i}^{49}(a_{i}+b_{i})=10 +9 +9+ \\dots +9= 9 \\times 49+10 =451$$\nHowever, since $(b_{0}, b_{1}, b_{2}, b_{3}, b_{4}, \\dots, b_{49})$ is a rearrangement of $(a_{0}, a_{1}, a_{2}, a_{3}, a_{4}, \\dots, a_{49})$, we have $$451=\\sum_{i=i}^{49}(a_{i}+b_{i})=2\\sum_{i=0}^{49}a_{i}$$\nA contradiction, since $\\sum\\limits_{i=0}^{49}a_{i}$ is a natural. </p>\n" }, { "answer_id": 2107531, "score": 1, "text": "<p>(I'd like to post an answer using more intuition and less fancy technique. The proof may look a bit weird at first, but I think it is valid. )</p>\n\n<p>Step 1: show that the last digit of $a$ can only be $0$ or $5$.</p>\n\n<p>Step 2: show that the last digit of $a$ cannot be $5$.</p>\n\n<p>Now step 1: </p>\n\n<p>Suppose the last digit of $a$ is $2$. Then the last digit of $b$ has to be $8$. Because $a$ is essentially a permutation of digits of $b$, so some digit of $a$ must be $8$, say its the $K^{th}$ digit of $a$. Then the $K^{th}$ of $b$ must be $1$ because $a$ and $b$ add up to $10^{50}$. By similar reasoning, $1$ must be some digit of $b$, say the $L^{th}$ digit of $b$. Then the $L^{th}$ digit of $a$ has to be $8$. In this way, number $1$ and $8$ reappears reiterately and number $2$ will never appear again. (You can try a few steps to understand the pairing system.) So $a$ is not a permutation of digits of $b$. A contradiction. 
</p>\n\n<p>Using similar ways, we can show that the last digit of $a$ can only be $0$ or $5$.</p>\n\n<p>Now step 2:</p>\n\n<p>Suppose the last digit of $a$ is 5. Then the last digit of $b$ is 5. Then we have:</p>\n\n<p>$$\\overline{a5}+\\overline{b5}=10^{50}$$$$\\Rightarrow\\overline{a0}+\\overline{b0}=\\underbrace{999...99}_{10^{50}-1}0$$$$\\Rightarrow\\overline{a}+\\overline{b}=\\underbrace{999...99}_{10^{50}-1}$$</p>\n\n<p>We are going to prove that the last equation has no solution. Apply the same method in step 1. Suppose the last digit of $a$ is $3$. Then the last digit of $b$ is $6$. Then some digit of $a$ is $6$. The pattern in this pairing system is, $3$ and $6$ appear alternately. Note that $(10^{50}-1)$ is an odd number. So at last, there will be either one more $3$ or one more $6$. So $a$ is not a permutation of $b$. Applying similar reasonings, we have shown that the last digit of $a$ cannot be $1, 2, 3, 4, 5, 6, 7, 8, 9, 0$. So $a$ doesn't exist. </p>\n\n<p>We have now proved that the last digit of $a$ can only be $0$.</p>\n" }, { "answer_id": 2107358, "score": 6, "text": "<p>Assume $a \\not \\equiv 0 \\pmod {10}$. Note that $a$ must be smaller than $10^{50}$, yet bigger than $10^{49}$. </p>\n\n<p>Let $$a=\\sum_{i=0}^{49}a_{i}10^{i}$$\nThen let $(b_{0}, b_{1}, b_{2}, b_{3}, b_{4}, \\dots, b_{49})$ be the digits of $b$ in order, and because of the definition of $b$ it follows that they are a rearrangement of $(a_{0}, a_{1}, a_{2}, a_{3}, a_{4}, \\dots, a_{49})$. </p>\n\n<p>So $$10^{50}=\\sum_{i=i}^{49}(a_{i}+b_{i})10^{i}$$</p>\n\n<p>Since $a_{0} \\neq 0$, we have that $a_{0}+b_{0}=10$. Because of carrying, we can use this to claim that $a_{1}+b_{1}=9$, and so on. Eventually, we note that for $n \\ge i \\ge 1 $ we have $a_{i}+b_{i}=9$. 
So we have $$\\sum_{i=i}^{49}(a_{i}+b_{i})=10 +9 +9+ \\dots +9= 9 \\times 49+10 =451$$\nHowever, since $(b_{0}, b_{1}, b_{2}, b_{3}, b_{4}, \\dots, b_{49})$ is a rearrangement of $(a_{0}, a_{1}, a_{2}, a_{3}, a_{4}, \\dots, a_{49})$, we have $$451=\\sum_{i=i}^{49}(a_{i}+b_{i})=2\\sum_{i=0}^{49}a_{i}$$\nA contradiction, since $\\sum\\limits_{i=0}^{49}a_{i}$ is a natural. </p>\n" }, { "answer_id": 2107531, "score": 1, "text": "<p>(I'd like to post an answer using more intuition and less fancy technique. The proof may look a bit weird at first, but I think it is valid. )</p>\n\n<p>Step 1: show that the last digit of $a$ can only be $0$ or $5$.</p>\n\n<p>Step 2: show that the last digit of $a$ cannot be $5$.</p>\n\n<p>Now step 1: </p>\n\n<p>Suppose the last digit of $a$ is $2$. Then the last digit of $b$ has to be $8$. Because $a$ is essentially a permutation of digits of $b$, so some digit of $a$ must be $8$, say its the $K^{th}$ digit of $a$. Then the $K^{th}$ of $b$ must be $1$ because $a$ and $b$ add up to $10^{50}$. By similar reasoning, $1$ must be some digit of $b$, say the $L^{th}$ digit of $b$. Then the $L^{th}$ digit of $a$ has to be $8$. In this way, number $1$ and $8$ reappears reiterately and number $2$ will never appear again. (You can try a few steps to understand the pairing system.) So $a$ is not a permutation of digits of $b$. A contradiction. </p>\n\n<p>Using similar ways, we can show that the last digit of $a$ can only be $0$ or $5$.</p>\n\n<p>Now step 2:</p>\n\n<p>Suppose the last digit of $a$ is 5. Then the last digit of $b$ is 5. Then we have:</p>\n\n<p>$$\\overline{a5}+\\overline{b5}=10^{50}$$$$\\Rightarrow\\overline{a0}+\\overline{b0}=\\underbrace{999...99}_{10^{50}-1}0$$$$\\Rightarrow\\overline{a}+\\overline{b}=\\underbrace{999...99}_{10^{50}-1}$$</p>\n\n<p>We are going to prove that the last equation has no solution. Apply the same method in step 1. Suppose the last digit of $a$ is $3$. Then the last digit of $b$ is $6$. 
Then some digit of $a$ is $6$. The pattern in this pairing system is, $3$ and $6$ appear alternately. Note that $(10^{50}-1)$ is an odd number. So at last, there will be either one more $3$ or one more $6$. So $a$ is not a permutation of $b$. Applying similar reasonings, we have shown that the last digit of $a$ cannot be $1, 2, 3, 4, 5, 6, 7, 8, 9, 0$. So $a$ doesn't exist. </p>\n\n<p>We have now proved that the last digit of $a$ can only be $0$.</p>\n" }, { "answer_id": 2107358, "score": 6, "text": "<p>Assume $a \\not \\equiv 0 \\pmod {10}$. Note that $a$ must be smaller than $10^{50}$, yet bigger than $10^{49}$. </p>\n\n<p>Let $$a=\\sum_{i=0}^{49}a_{i}10^{i}$$\nThen let $(b_{0}, b_{1}, b_{2}, b_{3}, b_{4}, \\dots, b_{49})$ be the digits of $b$ in order, and because of the definition of $b$ it follows that they are a rearrangement of $(a_{0}, a_{1}, a_{2}, a_{3}, a_{4}, \\dots, a_{49})$. </p>\n\n<p>So $$10^{50}=\\sum_{i=i}^{49}(a_{i}+b_{i})10^{i}$$</p>\n\n<p>Since $a_{0} \\neq 0$, we have that $a_{0}+b_{0}=10$. Because of carrying, we can use this to claim that $a_{1}+b_{1}=9$, and so on. Eventually, we note that for $n \\ge i \\ge 1 $ we have $a_{i}+b_{i}=9$. So we have $$\\sum_{i=i}^{49}(a_{i}+b_{i})=10 +9 +9+ \\dots +9= 9 \\times 49+10 =451$$\nHowever, since $(b_{0}, b_{1}, b_{2}, b_{3}, b_{4}, \\dots, b_{49})$ is a rearrangement of $(a_{0}, a_{1}, a_{2}, a_{3}, a_{4}, \\dots, a_{49})$, we have $$451=\\sum_{i=i}^{49}(a_{i}+b_{i})=2\\sum_{i=0}^{49}a_{i}$$\nA contradiction, since $\\sum\\limits_{i=0}^{49}a_{i}$ is a natural. </p>\n" }, { "answer_id": 2107531, "score": 1, "text": "<p>(I'd like to post an answer using more intuition and less fancy technique. The proof may look a bit weird at first, but I think it is valid. )</p>\n\n<p>Step 1: show that the last digit of $a$ can only be $0$ or $5$.</p>\n\n<p>Step 2: show that the last digit of $a$ cannot be $5$.</p>\n\n<p>Now step 1: </p>\n\n<p>Suppose the last digit of $a$ is $2$. 
Then the last digit of $b$ has to be $8$. Because $a$ is essentially a permutation of digits of $b$, so some digit of $a$ must be $8$, say its the $K^{th}$ digit of $a$. Then the $K^{th}$ of $b$ must be $1$ because $a$ and $b$ add up to $10^{50}$. By similar reasoning, $1$ must be some digit of $b$, say the $L^{th}$ digit of $b$. Then the $L^{th}$ digit of $a$ has to be $8$. In this way, number $1$ and $8$ reappears reiterately and number $2$ will never appear again. (You can try a few steps to understand the pairing system.) So $a$ is not a permutation of digits of $b$. A contradiction. </p>\n\n<p>Using similar ways, we can show that the last digit of $a$ can only be $0$ or $5$.</p>\n\n<p>Now step 2:</p>\n\n<p>Suppose the last digit of $a$ is 5. Then the last digit of $b$ is 5. Then we have:</p>\n\n<p>$$\\overline{a5}+\\overline{b5}=10^{50}$$$$\\Rightarrow\\overline{a0}+\\overline{b0}=\\underbrace{999...99}_{10^{50}-1}0$$$$\\Rightarrow\\overline{a}+\\overline{b}=\\underbrace{999...99}_{10^{50}-1}$$</p>\n\n<p>We are going to prove that the last equation has no solution. Apply the same method in step 1. Suppose the last digit of $a$ is $3$. Then the last digit of $b$ is $6$. Then some digit of $a$ is $6$. The pattern in this pairing system is, $3$ and $6$ appear alternately. Note that $(10^{50}-1)$ is an odd number. So at last, there will be either one more $3$ or one more $6$. So $a$ is not a permutation of $b$. Applying similar reasonings, we have shown that the last digit of $a$ cannot be $1, 2, 3, 4, 5, 6, 7, 8, 9, 0$. So $a$ doesn't exist. </p>\n\n<p>We have now proved that the last digit of $a$ can only be $0$.</p>\n" }, { "answer_id": 2107358, "score": 6, "text": "<p>Assume $a \\not \\equiv 0 \\pmod {10}$. Note that $a$ must be smaller than $10^{50}$, yet bigger than $10^{49}$. 
</p>\n\n<p>Let $$a=\\sum_{i=0}^{49}a_{i}10^{i}$$\nThen let $(b_{0}, b_{1}, b_{2}, b_{3}, b_{4}, \\dots, b_{49})$ be the digits of $b$ in order, and because of the definition of $b$ it follows that they are a rearrangement of $(a_{0}, a_{1}, a_{2}, a_{3}, a_{4}, \\dots, a_{49})$. </p>\n\n<p>So $$10^{50}=\\sum_{i=i}^{49}(a_{i}+b_{i})10^{i}$$</p>\n\n<p>Since $a_{0} \\neq 0$, we have that $a_{0}+b_{0}=10$. Because of carrying, we can use this to claim that $a_{1}+b_{1}=9$, and so on. Eventually, we note that for $n \\ge i \\ge 1 $ we have $a_{i}+b_{i}=9$. So we have $$\\sum_{i=i}^{49}(a_{i}+b_{i})=10 +9 +9+ \\dots +9= 9 \\times 49+10 =451$$\nHowever, since $(b_{0}, b_{1}, b_{2}, b_{3}, b_{4}, \\dots, b_{49})$ is a rearrangement of $(a_{0}, a_{1}, a_{2}, a_{3}, a_{4}, \\dots, a_{49})$, we have $$451=\\sum_{i=i}^{49}(a_{i}+b_{i})=2\\sum_{i=0}^{49}a_{i}$$\nA contradiction, since $\\sum\\limits_{i=0}^{49}a_{i}$ is a natural. </p>\n" }, { "answer_id": 2107531, "score": 1, "text": "<p>(I'd like to post an answer using more intuition and less fancy technique. The proof may look a bit weird at first, but I think it is valid. )</p>\n\n<p>Step 1: show that the last digit of $a$ can only be $0$ or $5$.</p>\n\n<p>Step 2: show that the last digit of $a$ cannot be $5$.</p>\n\n<p>Now step 1: </p>\n\n<p>Suppose the last digit of $a$ is $2$. Then the last digit of $b$ has to be $8$. Because $a$ is essentially a permutation of digits of $b$, so some digit of $a$ must be $8$, say its the $K^{th}$ digit of $a$. Then the $K^{th}$ of $b$ must be $1$ because $a$ and $b$ add up to $10^{50}$. By similar reasoning, $1$ must be some digit of $b$, say the $L^{th}$ digit of $b$. Then the $L^{th}$ digit of $a$ has to be $8$. In this way, number $1$ and $8$ reappears reiterately and number $2$ will never appear again. (You can try a few steps to understand the pairing system.) So $a$ is not a permutation of digits of $b$. A contradiction. 
</p>\n\n<p>Using similar ways, we can show that the last digit of $a$ can only be $0$ or $5$.</p>\n\n<p>Now step 2:</p>\n\n<p>Suppose the last digit of $a$ is 5. Then the last digit of $b$ is 5. Then we have:</p>\n\n<p>$$\\overline{a5}+\\overline{b5}=10^{50}$$$$\\Rightarrow\\overline{a0}+\\overline{b0}=\\underbrace{999...99}_{10^{50}-1}0$$$$\\Rightarrow\\overline{a}+\\overline{b}=\\underbrace{999...99}_{10^{50}-1}$$</p>\n\n<p>We are going to prove that the last equation has no solution. Apply the same method in step 1. Suppose the last digit of $a$ is $3$. Then the last digit of $b$ is $6$. Then some digit of $a$ is $6$. The pattern in this pairing system is, $3$ and $6$ appear alternately. Note that $(10^{50}-1)$ is an odd number. So at last, there will be either one more $3$ or one more $6$. So $a$ is not a permutation of $b$. Applying similar reasonings, we have shown that the last digit of $a$ cannot be $1, 2, 3, 4, 5, 6, 7, 8, 9, 0$. So $a$ doesn't exist. </p>\n\n<p>We have now proved that the last digit of $a$ can only be $0$.</p>\n" }, { "answer_id": 2107358, "score": 6, "text": "<p>Assume $a \\not \\equiv 0 \\pmod {10}$. Note that $a$ must be smaller than $10^{50}$, yet bigger than $10^{49}$. </p>\n\n<p>Let $$a=\\sum_{i=0}^{49}a_{i}10^{i}$$\nThen let $(b_{0}, b_{1}, b_{2}, b_{3}, b_{4}, \\dots, b_{49})$ be the digits of $b$ in order, and because of the definition of $b$ it follows that they are a rearrangement of $(a_{0}, a_{1}, a_{2}, a_{3}, a_{4}, \\dots, a_{49})$. </p>\n\n<p>So $$10^{50}=\\sum_{i=i}^{49}(a_{i}+b_{i})10^{i}$$</p>\n\n<p>Since $a_{0} \\neq 0$, we have that $a_{0}+b_{0}=10$. Because of carrying, we can use this to claim that $a_{1}+b_{1}=9$, and so on. Eventually, we note that for $n \\ge i \\ge 1 $ we have $a_{i}+b_{i}=9$. 
So we have $$\\sum_{i=i}^{49}(a_{i}+b_{i})=10 +9 +9+ \\dots +9= 9 \\times 49+10 =451$$\nHowever, since $(b_{0}, b_{1}, b_{2}, b_{3}, b_{4}, \\dots, b_{49})$ is a rearrangement of $(a_{0}, a_{1}, a_{2}, a_{3}, a_{4}, \\dots, a_{49})$, we have $$451=\\sum_{i=i}^{49}(a_{i}+b_{i})=2\\sum_{i=0}^{49}a_{i}$$\nA contradiction, since $\\sum\\limits_{i=0}^{49}a_{i}$ is a natural. </p>\n" }, { "answer_id": 2107531, "score": 1, "text": "<p>(I'd like to post an answer using more intuition and less fancy technique. The proof may look a bit weird at first, but I think it is valid. )</p>\n\n<p>Step 1: show that the last digit of $a$ can only be $0$ or $5$.</p>\n\n<p>Step 2: show that the last digit of $a$ cannot be $5$.</p>\n\n<p>Now step 1: </p>\n\n<p>Suppose the last digit of $a$ is $2$. Then the last digit of $b$ has to be $8$. Because $a$ is essentially a permutation of digits of $b$, so some digit of $a$ must be $8$, say its the $K^{th}$ digit of $a$. Then the $K^{th}$ of $b$ must be $1$ because $a$ and $b$ add up to $10^{50}$. By similar reasoning, $1$ must be some digit of $b$, say the $L^{th}$ digit of $b$. Then the $L^{th}$ digit of $a$ has to be $8$. In this way, number $1$ and $8$ reappears reiterately and number $2$ will never appear again. (You can try a few steps to understand the pairing system.) So $a$ is not a permutation of digits of $b$. A contradiction. </p>\n\n<p>Using similar ways, we can show that the last digit of $a$ can only be $0$ or $5$.</p>\n\n<p>Now step 2:</p>\n\n<p>Suppose the last digit of $a$ is 5. Then the last digit of $b$ is 5. Then we have:</p>\n\n<p>$$\\overline{a5}+\\overline{b5}=10^{50}$$$$\\Rightarrow\\overline{a0}+\\overline{b0}=\\underbrace{999...99}_{10^{50}-1}0$$$$\\Rightarrow\\overline{a}+\\overline{b}=\\underbrace{999...99}_{10^{50}-1}$$</p>\n\n<p>We are going to prove that the last equation has no solution. Apply the same method in step 1. Suppose the last digit of $a$ is $3$. Then the last digit of $b$ is $6$. 
Then some digit of $a$ is $6$. The pattern in this pairing system is, $3$ and $6$ appear alternately. Note that $(10^{50}-1)$ is an odd number. So at last, there will be either one more $3$ or one more $6$. So $a$ is not a permutation of $b$. Applying similar reasonings, we have shown that the last digit of $a$ cannot be $1, 2, 3, 4, 5, 6, 7, 8, 9, 0$. So $a$ doesn't exist. </p>\n\n<p>We have now proved that the last digit of $a$ can only be $0$.</p>\n" }, { "answer_id": 2107358, "score": 6, "text": "<p>Assume $a \\not \\equiv 0 \\pmod {10}$. Note that $a$ must be smaller than $10^{50}$, yet bigger than $10^{49}$. </p>\n\n<p>Let $$a=\\sum_{i=0}^{49}a_{i}10^{i}$$\nThen let $(b_{0}, b_{1}, b_{2}, b_{3}, b_{4}, \\dots, b_{49})$ be the digits of $b$ in order, and because of the definition of $b$ it follows that they are a rearrangement of $(a_{0}, a_{1}, a_{2}, a_{3}, a_{4}, \\dots, a_{49})$. </p>\n\n<p>So $$10^{50}=\\sum_{i=i}^{49}(a_{i}+b_{i})10^{i}$$</p>\n\n<p>Since $a_{0} \\neq 0$, we have that $a_{0}+b_{0}=10$. Because of carrying, we can use this to claim that $a_{1}+b_{1}=9$, and so on. Eventually, we note that for $n \\ge i \\ge 1 $ we have $a_{i}+b_{i}=9$. So we have $$\\sum_{i=i}^{49}(a_{i}+b_{i})=10 +9 +9+ \\dots +9= 9 \\times 49+10 =451$$\nHowever, since $(b_{0}, b_{1}, b_{2}, b_{3}, b_{4}, \\dots, b_{49})$ is a rearrangement of $(a_{0}, a_{1}, a_{2}, a_{3}, a_{4}, \\dots, a_{49})$, we have $$451=\\sum_{i=i}^{49}(a_{i}+b_{i})=2\\sum_{i=0}^{49}a_{i}$$\nA contradiction, since $\\sum\\limits_{i=0}^{49}a_{i}$ is a natural. </p>\n" }, { "answer_id": 2107531, "score": 1, "text": "<p>(I'd like to post an answer using more intuition and less fancy technique. The proof may look a bit weird at first, but I think it is valid. )</p>\n\n<p>Step 1: show that the last digit of $a$ can only be $0$ or $5$.</p>\n\n<p>Step 2: show that the last digit of $a$ cannot be $5$.</p>\n\n<p>Now step 1: </p>\n\n<p>Suppose the last digit of $a$ is $2$. 
Then the last digit of $b$ has to be $8$. Because $a$ is essentially a permutation of digits of $b$, so some digit of $a$ must be $8$, say its the $K^{th}$ digit of $a$. Then the $K^{th}$ of $b$ must be $1$ because $a$ and $b$ add up to $10^{50}$. By similar reasoning, $1$ must be some digit of $b$, say the $L^{th}$ digit of $b$. Then the $L^{th}$ digit of $a$ has to be $8$. In this way, number $1$ and $8$ reappears reiterately and number $2$ will never appear again. (You can try a few steps to understand the pairing system.) So $a$ is not a permutation of digits of $b$. A contradiction. </p>\n\n<p>Using similar ways, we can show that the last digit of $a$ can only be $0$ or $5$.</p>\n\n<p>Now step 2:</p>\n\n<p>Suppose the last digit of $a$ is 5. Then the last digit of $b$ is 5. Then we have:</p>\n\n<p>$$\\overline{a5}+\\overline{b5}=10^{50}$$$$\\Rightarrow\\overline{a0}+\\overline{b0}=\\underbrace{999...99}_{10^{50}-1}0$$$$\\Rightarrow\\overline{a}+\\overline{b}=\\underbrace{999...99}_{10^{50}-1}$$</p>\n\n<p>We are going to prove that the last equation has no solution. Apply the same method in step 1. Suppose the last digit of $a$ is $3$. Then the last digit of $b$ is $6$. Then some digit of $a$ is $6$. The pattern in this pairing system is, $3$ and $6$ appear alternately. Note that $(10^{50}-1)$ is an odd number. So at last, there will be either one more $3$ or one more $6$. So $a$ is not a permutation of $b$. Applying similar reasonings, we have shown that the last digit of $a$ cannot be $1, 2, 3, 4, 5, 6, 7, 8, 9, 0$. So $a$ doesn't exist. </p>\n\n<p>We have now proved that the last digit of $a$ can only be $0$.</p>\n" }, { "answer_id": 2107358, "score": 6, "text": "<p>Assume $a \\not \\equiv 0 \\pmod {10}$. Note that $a$ must be smaller than $10^{50}$, yet bigger than $10^{49}$. 
</p>\n\n<p>Let $$a=\\sum_{i=0}^{49}a_{i}10^{i}$$\nThen let $(b_{0}, b_{1}, b_{2}, b_{3}, b_{4}, \\dots, b_{49})$ be the digits of $b$ in order, and because of the definition of $b$ it follows that they are a rearrangement of $(a_{0}, a_{1}, a_{2}, a_{3}, a_{4}, \\dots, a_{49})$. </p>\n\n<p>So $$10^{50}=\\sum_{i=i}^{49}(a_{i}+b_{i})10^{i}$$</p>\n\n<p>Since $a_{0} \\neq 0$, we have that $a_{0}+b_{0}=10$. Because of carrying, we can use this to claim that $a_{1}+b_{1}=9$, and so on. Eventually, we note that for $n \\ge i \\ge 1 $ we have $a_{i}+b_{i}=9$. So we have $$\\sum_{i=i}^{49}(a_{i}+b_{i})=10 +9 +9+ \\dots +9= 9 \\times 49+10 =451$$\nHowever, since $(b_{0}, b_{1}, b_{2}, b_{3}, b_{4}, \\dots, b_{49})$ is a rearrangement of $(a_{0}, a_{1}, a_{2}, a_{3}, a_{4}, \\dots, a_{49})$, we have $$451=\\sum_{i=i}^{49}(a_{i}+b_{i})=2\\sum_{i=0}^{49}a_{i}$$\nA contradiction, since $\\sum\\limits_{i=0}^{49}a_{i}$ is a natural. </p>\n" }, { "answer_id": 2107531, "score": 1, "text": "<p>(I'd like to post an answer using more intuition and less fancy technique. The proof may look a bit weird at first, but I think it is valid. )</p>\n\n<p>Step 1: show that the last digit of $a$ can only be $0$ or $5$.</p>\n\n<p>Step 2: show that the last digit of $a$ cannot be $5$.</p>\n\n<p>Now step 1: </p>\n\n<p>Suppose the last digit of $a$ is $2$. Then the last digit of $b$ has to be $8$. Because $a$ is essentially a permutation of digits of $b$, so some digit of $a$ must be $8$, say its the $K^{th}$ digit of $a$. Then the $K^{th}$ of $b$ must be $1$ because $a$ and $b$ add up to $10^{50}$. By similar reasoning, $1$ must be some digit of $b$, say the $L^{th}$ digit of $b$. Then the $L^{th}$ digit of $a$ has to be $8$. In this way, number $1$ and $8$ reappears reiterately and number $2$ will never appear again. (You can try a few steps to understand the pairing system.) So $a$ is not a permutation of digits of $b$. A contradiction. 
</p>\n\n<p>Using similar ways, we can show that the last digit of $a$ can only be $0$ or $5$.</p>\n\n<p>Now step 2:</p>\n\n<p>Suppose the last digit of $a$ is 5. Then the last digit of $b$ is 5. Then we have:</p>\n\n<p>$$\\overline{a5}+\\overline{b5}=10^{50}$$$$\\Rightarrow\\overline{a0}+\\overline{b0}=\\underbrace{999...99}_{10^{50}-1}0$$$$\\Rightarrow\\overline{a}+\\overline{b}=\\underbrace{999...99}_{10^{50}-1}$$</p>\n\n<p>We are going to prove that the last equation has no solution. Apply the same method in step 1. Suppose the last digit of $a$ is $3$. Then the last digit of $b$ is $6$. Then some digit of $a$ is $6$. The pattern in this pairing system is, $3$ and $6$ appear alternately. Note that $(10^{50}-1)$ is an odd number. So at last, there will be either one more $3$ or one more $6$. So $a$ is not a permutation of $b$. Applying similar reasonings, we have shown that the last digit of $a$ cannot be $1, 2, 3, 4, 5, 6, 7, 8, 9, 0$. So $a$ doesn't exist. </p>\n\n<p>We have now proved that the last digit of $a$ can only be $0$.</p>\n" }, { "answer_id": 2107358, "score": 6, "text": "<p>Assume $a \\not \\equiv 0 \\pmod {10}$. Note that $a$ must be smaller than $10^{50}$, yet bigger than $10^{49}$. </p>\n\n<p>Let $$a=\\sum_{i=0}^{49}a_{i}10^{i}$$\nThen let $(b_{0}, b_{1}, b_{2}, b_{3}, b_{4}, \\dots, b_{49})$ be the digits of $b$ in order, and because of the definition of $b$ it follows that they are a rearrangement of $(a_{0}, a_{1}, a_{2}, a_{3}, a_{4}, \\dots, a_{49})$. </p>\n\n<p>So $$10^{50}=\\sum_{i=i}^{49}(a_{i}+b_{i})10^{i}$$</p>\n\n<p>Since $a_{0} \\neq 0$, we have that $a_{0}+b_{0}=10$. Because of carrying, we can use this to claim that $a_{1}+b_{1}=9$, and so on. Eventually, we note that for $n \\ge i \\ge 1 $ we have $a_{i}+b_{i}=9$. 
So we have $$\\sum_{i=i}^{49}(a_{i}+b_{i})=10 +9 +9+ \\dots +9= 9 \\times 49+10 =451$$\nHowever, since $(b_{0}, b_{1}, b_{2}, b_{3}, b_{4}, \\dots, b_{49})$ is a rearrangement of $(a_{0}, a_{1}, a_{2}, a_{3}, a_{4}, \\dots, a_{49})$, we have $$451=\\sum_{i=i}^{49}(a_{i}+b_{i})=2\\sum_{i=0}^{49}a_{i}$$\nA contradiction, since $\\sum\\limits_{i=0}^{49}a_{i}$ is a natural. </p>\n" }, { "answer_id": 2107531, "score": 1, "text": "<p>(I'd like to post an answer using more intuition and less fancy technique. The proof may look a bit weird at first, but I think it is valid. )</p>\n\n<p>Step 1: show that the last digit of $a$ can only be $0$ or $5$.</p>\n\n<p>Step 2: show that the last digit of $a$ cannot be $5$.</p>\n\n<p>Now step 1: </p>\n\n<p>Suppose the last digit of $a$ is $2$. Then the last digit of $b$ has to be $8$. Because $a$ is essentially a permutation of digits of $b$, so some digit of $a$ must be $8$, say its the $K^{th}$ digit of $a$. Then the $K^{th}$ of $b$ must be $1$ because $a$ and $b$ add up to $10^{50}$. By similar reasoning, $1$ must be some digit of $b$, say the $L^{th}$ digit of $b$. Then the $L^{th}$ digit of $a$ has to be $8$. In this way, number $1$ and $8$ reappears reiterately and number $2$ will never appear again. (You can try a few steps to understand the pairing system.) So $a$ is not a permutation of digits of $b$. A contradiction. </p>\n\n<p>Using similar ways, we can show that the last digit of $a$ can only be $0$ or $5$.</p>\n\n<p>Now step 2:</p>\n\n<p>Suppose the last digit of $a$ is 5. Then the last digit of $b$ is 5. Then we have:</p>\n\n<p>$$\\overline{a5}+\\overline{b5}=10^{50}$$$$\\Rightarrow\\overline{a0}+\\overline{b0}=\\underbrace{999...99}_{10^{50}-1}0$$$$\\Rightarrow\\overline{a}+\\overline{b}=\\underbrace{999...99}_{10^{50}-1}$$</p>\n\n<p>We are going to prove that the last equation has no solution. Apply the same method in step 1. Suppose the last digit of $a$ is $3$. Then the last digit of $b$ is $6$. 
Then some digit of $a$ is $6$. The pattern in this pairing system is, $3$ and $6$ appear alternately. Note that $(10^{50}-1)$ is an odd number. So at last, there will be either one more $3$ or one more $6$. So $a$ is not a permutation of $b$. Applying similar reasonings, we have shown that the last digit of $a$ cannot be $1, 2, 3, 4, 5, 6, 7, 8, 9, 0$. So $a$ doesn't exist. </p>\n\n<p>We have now proved that the last digit of $a$ can only be $0$.</p>\n" }, { "answer_id": 2107358, "score": 6, "text": "<p>Assume $a \\not \\equiv 0 \\pmod {10}$. Note that $a$ must be smaller than $10^{50}$, yet bigger than $10^{49}$. </p>\n\n<p>Let $$a=\\sum_{i=0}^{49}a_{i}10^{i}$$\nThen let $(b_{0}, b_{1}, b_{2}, b_{3}, b_{4}, \\dots, b_{49})$ be the digits of $b$ in order, and because of the definition of $b$ it follows that they are a rearrangement of $(a_{0}, a_{1}, a_{2}, a_{3}, a_{4}, \\dots, a_{49})$. </p>\n\n<p>So $$10^{50}=\\sum_{i=i}^{49}(a_{i}+b_{i})10^{i}$$</p>\n\n<p>Since $a_{0} \\neq 0$, we have that $a_{0}+b_{0}=10$. Because of carrying, we can use this to claim that $a_{1}+b_{1}=9$, and so on. Eventually, we note that for $n \\ge i \\ge 1 $ we have $a_{i}+b_{i}=9$. So we have $$\\sum_{i=i}^{49}(a_{i}+b_{i})=10 +9 +9+ \\dots +9= 9 \\times 49+10 =451$$\nHowever, since $(b_{0}, b_{1}, b_{2}, b_{3}, b_{4}, \\dots, b_{49})$ is a rearrangement of $(a_{0}, a_{1}, a_{2}, a_{3}, a_{4}, \\dots, a_{49})$, we have $$451=\\sum_{i=i}^{49}(a_{i}+b_{i})=2\\sum_{i=0}^{49}a_{i}$$\nA contradiction, since $\\sum\\limits_{i=0}^{49}a_{i}$ is a natural. </p>\n" }, { "answer_id": 2107531, "score": 1, "text": "<p>(I'd like to post an answer using more intuition and less fancy technique. The proof may look a bit weird at first, but I think it is valid. )</p>\n\n<p>Step 1: show that the last digit of $a$ can only be $0$ or $5$.</p>\n\n<p>Step 2: show that the last digit of $a$ cannot be $5$.</p>\n\n<p>Now step 1: </p>\n\n<p>Suppose the last digit of $a$ is $2$. 
Then the last digit of $b$ has to be $8$. Because $a$ is essentially a permutation of digits of $b$, so some digit of $a$ must be $8$, say its the $K^{th}$ digit of $a$. Then the $K^{th}$ of $b$ must be $1$ because $a$ and $b$ add up to $10^{50}$. By similar reasoning, $1$ must be some digit of $b$, say the $L^{th}$ digit of $b$. Then the $L^{th}$ digit of $a$ has to be $8$. In this way, number $1$ and $8$ reappears reiterately and number $2$ will never appear again. (You can try a few steps to understand the pairing system.) So $a$ is not a permutation of digits of $b$. A contradiction. </p>\n\n<p>Using similar ways, we can show that the last digit of $a$ can only be $0$ or $5$.</p>\n\n<p>Now step 2:</p>\n\n<p>Suppose the last digit of $a$ is 5. Then the last digit of $b$ is 5. Then we have:</p>\n\n<p>$$\\overline{a5}+\\overline{b5}=10^{50}$$$$\\Rightarrow\\overline{a0}+\\overline{b0}=\\underbrace{999...99}_{10^{50}-1}0$$$$\\Rightarrow\\overline{a}+\\overline{b}=\\underbrace{999...99}_{10^{50}-1}$$</p>\n\n<p>We are going to prove that the last equation has no solution. Apply the same method in step 1. Suppose the last digit of $a$ is $3$. Then the last digit of $b$ is $6$. Then some digit of $a$ is $6$. The pattern in this pairing system is, $3$ and $6$ appear alternately. Note that $(10^{50}-1)$ is an odd number. So at last, there will be either one more $3$ or one more $6$. So $a$ is not a permutation of $b$. Applying similar reasonings, we have shown that the last digit of $a$ cannot be $1, 2, 3, 4, 5, 6, 7, 8, 9, 0$. So $a$ doesn't exist. </p>\n\n<p>We have now proved that the last digit of $a$ can only be $0$.</p>\n" }, { "answer_id": 2107358, "score": 6, "text": "<p>Assume $a \\not \\equiv 0 \\pmod {10}$. Note that $a$ must be smaller than $10^{50}$, yet bigger than $10^{49}$. 
</p>\n\n<p>Let $$a=\\sum_{i=0}^{49}a_{i}10^{i}$$\nThen let $(b_{0}, b_{1}, b_{2}, b_{3}, b_{4}, \\dots, b_{49})$ be the digits of $b$ in order, and because of the definition of $b$ it follows that they are a rearrangement of $(a_{0}, a_{1}, a_{2}, a_{3}, a_{4}, \\dots, a_{49})$. </p>\n\n<p>So $$10^{50}=\\sum_{i=i}^{49}(a_{i}+b_{i})10^{i}$$</p>\n\n<p>Since $a_{0} \\neq 0$, we have that $a_{0}+b_{0}=10$. Because of carrying, we can use this to claim that $a_{1}+b_{1}=9$, and so on. Eventually, we note that for $n \\ge i \\ge 1 $ we have $a_{i}+b_{i}=9$. So we have $$\\sum_{i=i}^{49}(a_{i}+b_{i})=10 +9 +9+ \\dots +9= 9 \\times 49+10 =451$$\nHowever, since $(b_{0}, b_{1}, b_{2}, b_{3}, b_{4}, \\dots, b_{49})$ is a rearrangement of $(a_{0}, a_{1}, a_{2}, a_{3}, a_{4}, \\dots, a_{49})$, we have $$451=\\sum_{i=i}^{49}(a_{i}+b_{i})=2\\sum_{i=0}^{49}a_{i}$$\nA contradiction, since $\\sum\\limits_{i=0}^{49}a_{i}$ is a natural. </p>\n" }, { "answer_id": 2107531, "score": 1, "text": "<p>(I'd like to post an answer using more intuition and less fancy technique. The proof may look a bit weird at first, but I think it is valid. )</p>\n\n<p>Step 1: show that the last digit of $a$ can only be $0$ or $5$.</p>\n\n<p>Step 2: show that the last digit of $a$ cannot be $5$.</p>\n\n<p>Now step 1: </p>\n\n<p>Suppose the last digit of $a$ is $2$. Then the last digit of $b$ has to be $8$. Because $a$ is essentially a permutation of digits of $b$, so some digit of $a$ must be $8$, say its the $K^{th}$ digit of $a$. Then the $K^{th}$ of $b$ must be $1$ because $a$ and $b$ add up to $10^{50}$. By similar reasoning, $1$ must be some digit of $b$, say the $L^{th}$ digit of $b$. Then the $L^{th}$ digit of $a$ has to be $8$. In this way, number $1$ and $8$ reappears reiterately and number $2$ will never appear again. (You can try a few steps to understand the pairing system.) So $a$ is not a permutation of digits of $b$. A contradiction. 
</p>\n\n<p>Using similar ways, we can show that the last digit of $a$ can only be $0$ or $5$.</p>\n\n<p>Now step 2:</p>\n\n<p>Suppose the last digit of $a$ is 5. Then the last digit of $b$ is 5. Then we have:</p>\n\n<p>$$\\overline{a5}+\\overline{b5}=10^{50}$$$$\\Rightarrow\\overline{a0}+\\overline{b0}=\\underbrace{999...99}_{10^{50}-1}0$$$$\\Rightarrow\\overline{a}+\\overline{b}=\\underbrace{999...99}_{10^{50}-1}$$</p>\n\n<p>We are going to prove that the last equation has no solution. Apply the same method in step 1. Suppose the last digit of $a$ is $3$. Then the last digit of $b$ is $6$. Then some digit of $a$ is $6$. The pattern in this pairing system is, $3$ and $6$ appear alternately. Note that $(10^{50}-1)$ is an odd number. So at last, there will be either one more $3$ or one more $6$. So $a$ is not a permutation of $b$. Applying similar reasonings, we have shown that the last digit of $a$ cannot be $1, 2, 3, 4, 5, 6, 7, 8, 9, 0$. So $a$ doesn't exist. </p>\n\n<p>We have now proved that the last digit of $a$ can only be $0$.</p>\n" } ]
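The digit-sum parity argument in the answers above can be sanity-checked by brute force on a smaller analog. The sketch below is my own illustration, not part of either answer: it uses an exponent of 4 instead of 50 (purely an assumption for tractability; the parity count $9(n-1)+10$ is odd for any digit count $n$), enumerates all pairs $a+b=10^4$ where the zero-padded digits of $b$ are a rearrangement of those of $a$, and confirms every such $a$ ends in $0$.

```python
# Sanity check of the claim for the smaller analog a + b = 10**4:
# if the (zero-padded) decimal digits of b are a permutation of
# those of a, then a must be divisible by 10. Exponent 4 is an
# assumption made here to keep the brute force fast; the answers
# above argue the case 50 in general.
N, WIDTH = 10**4, 4

witnesses = []
for a in range(1, N):
    b = N - a
    # Compare digit multisets, padding to WIDTH digits as the answers do.
    if sorted(str(a).zfill(WIDTH)) == sorted(str(b).zfill(WIDTH)):
        witnesses.append(a)

# Every qualifying a ends in 0, matching the claim 10 | a.
assert all(a % 10 == 0 for a in witnesses)
print(len(witnesses), witnesses[:5])
```

Pairs such as $5000+5000$ and $4550+5450$ do qualify, so the check is not vacuous; no qualifying $a$ with a nonzero last digit turns up, in line with the parity contradiction ($451$ odd versus $2\sum a_i$ even).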
Question: If $a+b=10^{50}$, where the decimal digits of $b$ are a permutation of the decimal digits of $a$, then show $10|a$ Answer 1: <p>Assume $a \not \equiv 0 \pmod {10}$. Note that $a$ must be smaller than $10^{50}$, yet bigger than $10^{49}$. </p> <p>Let $$a=\sum_{i=0}^{49}a_{i}10^{i}$$ Then let $(b_{0}, b_{1}, b_{2}, b_{3}, b_{4}, \dots, b_{49})$ be the digits of $b$ in order, and because of the definition of $b$ it follows that they are a rearrangement of $(a_{0}, a_{1}, a_{2}, a_{3}, a_{4}, \dots, a_{49})$. </p> <p>So $$10^{50}=\sum_{i=0}^{49}(a_{i}+b_{i})10^{i}$$</p> <p>Since $a_{0} \neq 0$, we have that $a_{0}+b_{0}=10$. Because of carrying, we can use this to claim that $a_{1}+b_{1}=9$, and so on. Eventually, we note that for $49 \ge i \ge 1$ we have $a_{i}+b_{i}=9$. So we have $$\sum_{i=0}^{49}(a_{i}+b_{i})=10 +9 +9+ \dots +9= 9 \times 49+10 =451$$ However, since $(b_{0}, b_{1}, b_{2}, b_{3}, b_{4}, \dots, b_{49})$ is a rearrangement of $(a_{0}, a_{1}, a_{2}, a_{3}, a_{4}, \dots, a_{49})$, we have $$451=\sum_{i=0}^{49}(a_{i}+b_{i})=2\sum_{i=0}^{49}a_{i}$$ A contradiction, since $\sum\limits_{i=0}^{49}a_{i}$ is a natural number. </p> Answer 2: <p>(I'd like to post an answer using more intuition and less fancy technique. The proof may look a bit weird at first, but I think it is valid.)</p> <p>Step 1: show that the last digit of $a$ can only be $0$ or $5$.</p> <p>Step 2: show that the last digit of $a$ cannot be $5$.</p> <p>Now step 1: </p> <p>Suppose the last digit of $a$ is $2$. Then the last digit of $b$ has to be $8$. Because $a$ is essentially a permutation of the digits of $b$, some digit of $a$ must be $8$; say it is the $K^{th}$ digit of $a$. Then the $K^{th}$ digit of $b$ must be $1$ because $a$ and $b$ add up to $10^{50}$. By similar reasoning, $1$ must be some digit of $b$, say the $L^{th}$ digit of $b$. Then the $L^{th}$ digit of $a$ has to be $8$. In this way, the digits $1$ and $8$ reappear repeatedly, and the digit $2$ never appears again. (You can try a few steps to understand the pairing system.) 
So $a$ is not a permutation of digits of $b$. A contradiction. </p> <p>Using similar ways, we can show that the last digit of $a$ can only be $0$ or $5$.</p> <p>Now step 2:</p> <p>Suppose the last digit of $a$ is 5. Then the last digit of $b$ is 5. Then we have:</p> <p>$$\overline{a5}+\overline{b5}=10^{50}$$$$\Rightarrow\overline{a0}+\overline{b0}=\underbrace{999...99}_{10^{50}-1}0$$$$\Rightarrow\overline{a}+\overline{b}=\underbrace{999...99}_{10^{50}-1}$$</p> <p>We are going to prove that the last equation has no solution. Apply the same method in step 1. Suppose the last digit of $a$ is $3$. Then the last digit of $b$ is $6$. Then some digit of $a$ is $6$. The pattern in this pairing system is, $3$ and $6$ appear alternately. Note that $(10^{50}-1)$ is an odd number. So at last, there will be either one more $3$ or one more $6$. So $a$ is not a permutation of $b$. Applying similar reasonings, we have shown that the last digit of $a$ cannot be $1, 2, 3, 4, 5, 6, 7, 8, 9, 0$. So $a$ doesn't exist. </p> <p>We have now proved that the last digit of $a$ can only be $0$.</p> Answer 3: <p>Assume $a \not \equiv 0 \pmod {10}$. Note that $a$ must be smaller than $10^{50}$, yet bigger than $10^{49}$. </p> <p>Let $$a=\sum_{i=0}^{49}a_{i}10^{i}$$ Then let $(b_{0}, b_{1}, b_{2}, b_{3}, b_{4}, \dots, b_{49})$ be the digits of $b$ in order, and because of the definition of $b$ it follows that they are a rearrangement of $(a_{0}, a_{1}, a_{2}, a_{3}, a_{4}, \dots, a_{49})$. </p> <p>So $$10^{50}=\sum_{i=i}^{49}(a_{i}+b_{i})10^{i}$$</p> <p>Since $a_{0} \neq 0$, we have that $a_{0}+b_{0}=10$. Because of carrying, we can use this to claim that $a_{1}+b_{1}=9$, and so on. Eventually, we note that for $n \ge i \ge 1 $ we have $a_{i}+b_{i}=9$. 
So we have $$\sum_{i=i}^{49}(a_{i}+b_{i})=10 +9 +9+ \dots +9= 9 \times 49+10 =451$$ However, since $(b_{0}, b_{1}, b_{2}, b_{3}, b_{4}, \dots, b_{49})$ is a rearrangement of $(a_{0}, a_{1}, a_{2}, a_{3}, a_{4}, \dots, a_{49})$, we have $$451=\sum_{i=i}^{49}(a_{i}+b_{i})=2\sum_{i=0}^{49}a_{i}$$ A contradiction, since $\sum\limits_{i=0}^{49}a_{i}$ is a natural. </p> Answer 4: <p>(I'd like to post an answer using more intuition and less fancy technique. The proof may look a bit weird at first, but I think it is valid. )</p> <p>Step 1: show that the last digit of $a$ can only be $0$ or $5$.</p> <p>Step 2: show that the last digit of $a$ cannot be $5$.</p> <p>Now step 1: </p> <p>Suppose the last digit of $a$ is $2$. Then the last digit of $b$ has to be $8$. Because $a$ is essentially a permutation of digits of $b$, so some digit of $a$ must be $8$, say its the $K^{th}$ digit of $a$. Then the $K^{th}$ of $b$ must be $1$ because $a$ and $b$ add up to $10^{50}$. By similar reasoning, $1$ must be some digit of $b$, say the $L^{th}$ digit of $b$. Then the $L^{th}$ digit of $a$ has to be $8$. In this way, number $1$ and $8$ reappears reiterately and number $2$ will never appear again. (You can try a few steps to understand the pairing system.) So $a$ is not a permutation of digits of $b$. A contradiction. </p> <p>Using similar ways, we can show that the last digit of $a$ can only be $0$ or $5$.</p> <p>Now step 2:</p> <p>Suppose the last digit of $a$ is 5. Then the last digit of $b$ is 5. Then we have:</p> <p>$$\overline{a5}+\overline{b5}=10^{50}$$$$\Rightarrow\overline{a0}+\overline{b0}=\underbrace{999...99}_{10^{50}-1}0$$$$\Rightarrow\overline{a}+\overline{b}=\underbrace{999...99}_{10^{50}-1}$$</p> <p>We are going to prove that the last equation has no solution. Apply the same method in step 1. Suppose the last digit of $a$ is $3$. Then the last digit of $b$ is $6$. Then some digit of $a$ is $6$. The pattern in this pairing system is, $3$ and $6$ appear alternately. 
Note that $(10^{50}-1)$ is an odd number. So at last, there will be either one more $3$ or one more $6$. So $a$ is not a permutation of $b$. Applying similar reasonings, we have shown that the last digit of $a$ cannot be $1, 2, 3, 4, 5, 6, 7, 8, 9, 0$. So $a$ doesn't exist. </p> <p>We have now proved that the last digit of $a$ can only be $0$.</p> Answer 5: <p>Assume $a \not \equiv 0 \pmod {10}$. Note that $a$ must be smaller than $10^{50}$, yet bigger than $10^{49}$. </p> <p>Let $$a=\sum_{i=0}^{49}a_{i}10^{i}$$ Then let $(b_{0}, b_{1}, b_{2}, b_{3}, b_{4}, \dots, b_{49})$ be the digits of $b$ in order, and because of the definition of $b$ it follows that they are a rearrangement of $(a_{0}, a_{1}, a_{2}, a_{3}, a_{4}, \dots, a_{49})$. </p> <p>So $$10^{50}=\sum_{i=i}^{49}(a_{i}+b_{i})10^{i}$$</p> <p>Since $a_{0} \neq 0$, we have that $a_{0}+b_{0}=10$. Because of carrying, we can use this to claim that $a_{1}+b_{1}=9$, and so on. Eventually, we note that for $n \ge i \ge 1 $ we have $a_{i}+b_{i}=9$. So we have $$\sum_{i=i}^{49}(a_{i}+b_{i})=10 +9 +9+ \dots +9= 9 \times 49+10 =451$$ However, since $(b_{0}, b_{1}, b_{2}, b_{3}, b_{4}, \dots, b_{49})$ is a rearrangement of $(a_{0}, a_{1}, a_{2}, a_{3}, a_{4}, \dots, a_{49})$, we have $$451=\sum_{i=i}^{49}(a_{i}+b_{i})=2\sum_{i=0}^{49}a_{i}$$ A contradiction, since $\sum\limits_{i=0}^{49}a_{i}$ is a natural. </p> Answer 6: <p>(I'd like to post an answer using more intuition and less fancy technique. The proof may look a bit weird at first, but I think it is valid. )</p> <p>Step 1: show that the last digit of $a$ can only be $0$ or $5$.</p> <p>Step 2: show that the last digit of $a$ cannot be $5$.</p> <p>Now step 1: </p> <p>Suppose the last digit of $a$ is $2$. Then the last digit of $b$ has to be $8$. Because $a$ is essentially a permutation of digits of $b$, so some digit of $a$ must be $8$, say its the $K^{th}$ digit of $a$. Then the $K^{th}$ of $b$ must be $1$ because $a$ and $b$ add up to $10^{50}$. 
By similar reasoning, $1$ must be some digit of $b$, say the $L^{th}$ digit of $b$. Then the $L^{th}$ digit of $a$ has to be $8$. In this way, number $1$ and $8$ reappears reiterately and number $2$ will never appear again. (You can try a few steps to understand the pairing system.) So $a$ is not a permutation of digits of $b$. A contradiction. </p> <p>Using similar ways, we can show that the last digit of $a$ can only be $0$ or $5$.</p> <p>Now step 2:</p> <p>Suppose the last digit of $a$ is 5. Then the last digit of $b$ is 5. Then we have:</p> <p>$$\overline{a5}+\overline{b5}=10^{50}$$$$\Rightarrow\overline{a0}+\overline{b0}=\underbrace{999...99}_{10^{50}-1}0$$$$\Rightarrow\overline{a}+\overline{b}=\underbrace{999...99}_{10^{50}-1}$$</p> <p>We are going to prove that the last equation has no solution. Apply the same method in step 1. Suppose the last digit of $a$ is $3$. Then the last digit of $b$ is $6$. Then some digit of $a$ is $6$. The pattern in this pairing system is, $3$ and $6$ appear alternately. Note that $(10^{50}-1)$ is an odd number. So at last, there will be either one more $3$ or one more $6$. So $a$ is not a permutation of $b$. Applying similar reasonings, we have shown that the last digit of $a$ cannot be $1, 2, 3, 4, 5, 6, 7, 8, 9, 0$. So $a$ doesn't exist. </p> <p>We have now proved that the last digit of $a$ can only be $0$.</p> Answer 7: <p>Assume $a \not \equiv 0 \pmod {10}$. Note that $a$ must be smaller than $10^{50}$, yet bigger than $10^{49}$. </p> <p>Let $$a=\sum_{i=0}^{49}a_{i}10^{i}$$ Then let $(b_{0}, b_{1}, b_{2}, b_{3}, b_{4}, \dots, b_{49})$ be the digits of $b$ in order, and because of the definition of $b$ it follows that they are a rearrangement of $(a_{0}, a_{1}, a_{2}, a_{3}, a_{4}, \dots, a_{49})$. </p> <p>So $$10^{50}=\sum_{i=i}^{49}(a_{i}+b_{i})10^{i}$$</p> <p>Since $a_{0} \neq 0$, we have that $a_{0}+b_{0}=10$. Because of carrying, we can use this to claim that $a_{1}+b_{1}=9$, and so on. 
Eventually, we note that for $n \ge i \ge 1 $ we have $a_{i}+b_{i}=9$. So we have $$\sum_{i=i}^{49}(a_{i}+b_{i})=10 +9 +9+ \dots +9= 9 \times 49+10 =451$$ However, since $(b_{0}, b_{1}, b_{2}, b_{3}, b_{4}, \dots, b_{49})$ is a rearrangement of $(a_{0}, a_{1}, a_{2}, a_{3}, a_{4}, \dots, a_{49})$, we have $$451=\sum_{i=i}^{49}(a_{i}+b_{i})=2\sum_{i=0}^{49}a_{i}$$ A contradiction, since $\sum\limits_{i=0}^{49}a_{i}$ is a natural. </p> Answer 8: <p>(I'd like to post an answer using more intuition and less fancy technique. The proof may look a bit weird at first, but I think it is valid. )</p> <p>Step 1: show that the last digit of $a$ can only be $0$ or $5$.</p> <p>Step 2: show that the last digit of $a$ cannot be $5$.</p> <p>Now step 1: </p> <p>Suppose the last digit of $a$ is $2$. Then the last digit of $b$ has to be $8$. Because $a$ is essentially a permutation of digits of $b$, so some digit of $a$ must be $8$, say its the $K^{th}$ digit of $a$. Then the $K^{th}$ of $b$ must be $1$ because $a$ and $b$ add up to $10^{50}$. By similar reasoning, $1$ must be some digit of $b$, say the $L^{th}$ digit of $b$. Then the $L^{th}$ digit of $a$ has to be $8$. In this way, number $1$ and $8$ reappears reiterately and number $2$ will never appear again. (You can try a few steps to understand the pairing system.) So $a$ is not a permutation of digits of $b$. A contradiction. </p> <p>Using similar ways, we can show that the last digit of $a$ can only be $0$ or $5$.</p> <p>Now step 2:</p> <p>Suppose the last digit of $a$ is 5. Then the last digit of $b$ is 5. Then we have:</p> <p>$$\overline{a5}+\overline{b5}=10^{50}$$$$\Rightarrow\overline{a0}+\overline{b0}=\underbrace{999...99}_{10^{50}-1}0$$$$\Rightarrow\overline{a}+\overline{b}=\underbrace{999...99}_{10^{50}-1}$$</p> <p>We are going to prove that the last equation has no solution. Apply the same method in step 1. Suppose the last digit of $a$ is $3$. Then the last digit of $b$ is $6$. Then some digit of $a$ is $6$. 
The pattern in this pairing system is, $3$ and $6$ appear alternately. Note that $(10^{50}-1)$ is an odd number. So at last, there will be either one more $3$ or one more $6$. So $a$ is not a permutation of $b$. Applying similar reasonings, we have shown that the last digit of $a$ cannot be $1, 2, 3, 4, 5, 6, 7, 8, 9, 0$. So $a$ doesn't exist. </p> <p>We have now proved that the last digit of $a$ can only be $0$.</p> Answer 9: <p>Assume $a \not \equiv 0 \pmod {10}$. Note that $a$ must be smaller than $10^{50}$, yet bigger than $10^{49}$. </p> <p>Let $$a=\sum_{i=0}^{49}a_{i}10^{i}$$ Then let $(b_{0}, b_{1}, b_{2}, b_{3}, b_{4}, \dots, b_{49})$ be the digits of $b$ in order, and because of the definition of $b$ it follows that they are a rearrangement of $(a_{0}, a_{1}, a_{2}, a_{3}, a_{4}, \dots, a_{49})$. </p> <p>So $$10^{50}=\sum_{i=i}^{49}(a_{i}+b_{i})10^{i}$$</p> <p>Since $a_{0} \neq 0$, we have that $a_{0}+b_{0}=10$. Because of carrying, we can use this to claim that $a_{1}+b_{1}=9$, and so on. Eventually, we note that for $n \ge i \ge 1 $ we have $a_{i}+b_{i}=9$. So we have $$\sum_{i=i}^{49}(a_{i}+b_{i})=10 +9 +9+ \dots +9= 9 \times 49+10 =451$$ However, since $(b_{0}, b_{1}, b_{2}, b_{3}, b_{4}, \dots, b_{49})$ is a rearrangement of $(a_{0}, a_{1}, a_{2}, a_{3}, a_{4}, \dots, a_{49})$, we have $$451=\sum_{i=i}^{49}(a_{i}+b_{i})=2\sum_{i=0}^{49}a_{i}$$ A contradiction, since $\sum\limits_{i=0}^{49}a_{i}$ is a natural. </p> Answer 10: <p>(I'd like to post an answer using more intuition and less fancy technique. The proof may look a bit weird at first, but I think it is valid. )</p> <p>Step 1: show that the last digit of $a$ can only be $0$ or $5$.</p> <p>Step 2: show that the last digit of $a$ cannot be $5$.</p> <p>Now step 1: </p> <p>Suppose the last digit of $a$ is $2$. Then the last digit of $b$ has to be $8$. Because $a$ is essentially a permutation of digits of $b$, so some digit of $a$ must be $8$, say its the $K^{th}$ digit of $a$. 
Then the $K^{th}$ of $b$ must be $1$ because $a$ and $b$ add up to $10^{50}$. By similar reasoning, $1$ must be some digit of $b$, say the $L^{th}$ digit of $b$. Then the $L^{th}$ digit of $a$ has to be $8$. In this way, number $1$ and $8$ reappears reiterately and number $2$ will never appear again. (You can try a few steps to understand the pairing system.) So $a$ is not a permutation of digits of $b$. A contradiction. </p> <p>Using similar ways, we can show that the last digit of $a$ can only be $0$ or $5$.</p> <p>Now step 2:</p> <p>Suppose the last digit of $a$ is 5. Then the last digit of $b$ is 5. Then we have:</p> <p>$$\overline{a5}+\overline{b5}=10^{50}$$$$\Rightarrow\overline{a0}+\overline{b0}=\underbrace{999...99}_{10^{50}-1}0$$$$\Rightarrow\overline{a}+\overline{b}=\underbrace{999...99}_{10^{50}-1}$$</p> <p>We are going to prove that the last equation has no solution. Apply the same method in step 1. Suppose the last digit of $a$ is $3$. Then the last digit of $b$ is $6$. Then some digit of $a$ is $6$. The pattern in this pairing system is, $3$ and $6$ appear alternately. Note that $(10^{50}-1)$ is an odd number. So at last, there will be either one more $3$ or one more $6$. So $a$ is not a permutation of $b$. Applying similar reasonings, we have shown that the last digit of $a$ cannot be $1, 2, 3, 4, 5, 6, 7, 8, 9, 0$. So $a$ doesn't exist. </p> <p>We have now proved that the last digit of $a$ can only be $0$.</p> Answer 11: <p>Assume $a \not \equiv 0 \pmod {10}$. Note that $a$ must be smaller than $10^{50}$, yet bigger than $10^{49}$. </p> <p>Let $$a=\sum_{i=0}^{49}a_{i}10^{i}$$ Then let $(b_{0}, b_{1}, b_{2}, b_{3}, b_{4}, \dots, b_{49})$ be the digits of $b$ in order, and because of the definition of $b$ it follows that they are a rearrangement of $(a_{0}, a_{1}, a_{2}, a_{3}, a_{4}, \dots, a_{49})$. </p> <p>So $$10^{50}=\sum_{i=i}^{49}(a_{i}+b_{i})10^{i}$$</p> <p>Since $a_{0} \neq 0$, we have that $a_{0}+b_{0}=10$. 
Because of carrying, we can use this to claim that $a_{1}+b_{1}=9$, and so on. Eventually, we note that for $n \ge i \ge 1 $ we have $a_{i}+b_{i}=9$. So we have $$\sum_{i=i}^{49}(a_{i}+b_{i})=10 +9 +9+ \dots +9= 9 \times 49+10 =451$$ However, since $(b_{0}, b_{1}, b_{2}, b_{3}, b_{4}, \dots, b_{49})$ is a rearrangement of $(a_{0}, a_{1}, a_{2}, a_{3}, a_{4}, \dots, a_{49})$, we have $$451=\sum_{i=i}^{49}(a_{i}+b_{i})=2\sum_{i=0}^{49}a_{i}$$ A contradiction, since $\sum\limits_{i=0}^{49}a_{i}$ is a natural. </p> Answer 12: <p>(I'd like to post an answer using more intuition and less fancy technique. The proof may look a bit weird at first, but I think it is valid. )</p> <p>Step 1: show that the last digit of $a$ can only be $0$ or $5$.</p> <p>Step 2: show that the last digit of $a$ cannot be $5$.</p> <p>Now step 1: </p> <p>Suppose the last digit of $a$ is $2$. Then the last digit of $b$ has to be $8$. Because $a$ is essentially a permutation of digits of $b$, so some digit of $a$ must be $8$, say its the $K^{th}$ digit of $a$. Then the $K^{th}$ of $b$ must be $1$ because $a$ and $b$ add up to $10^{50}$. By similar reasoning, $1$ must be some digit of $b$, say the $L^{th}$ digit of $b$. Then the $L^{th}$ digit of $a$ has to be $8$. In this way, number $1$ and $8$ reappears reiterately and number $2$ will never appear again. (You can try a few steps to understand the pairing system.) So $a$ is not a permutation of digits of $b$. A contradiction. </p> <p>Using similar ways, we can show that the last digit of $a$ can only be $0$ or $5$.</p> <p>Now step 2:</p> <p>Suppose the last digit of $a$ is 5. Then the last digit of $b$ is 5. Then we have:</p> <p>$$\overline{a5}+\overline{b5}=10^{50}$$$$\Rightarrow\overline{a0}+\overline{b0}=\underbrace{999...99}_{10^{50}-1}0$$$$\Rightarrow\overline{a}+\overline{b}=\underbrace{999...99}_{10^{50}-1}$$</p> <p>We are going to prove that the last equation has no solution. Apply the same method in step 1. 
Suppose the last digit of $a$ is $3$. Then the last digit of $b$ is $6$. Then some digit of $a$ is $6$. The pattern in this pairing system is, $3$ and $6$ appear alternately. Note that $(10^{50}-1)$ is an odd number. So at last, there will be either one more $3$ or one more $6$. So $a$ is not a permutation of $b$. Applying similar reasonings, we have shown that the last digit of $a$ cannot be $1, 2, 3, 4, 5, 6, 7, 8, 9, 0$. So $a$ doesn't exist. </p> <p>We have now proved that the last digit of $a$ can only be $0$.</p> Answer 13: <p>Assume $a \not \equiv 0 \pmod {10}$. Note that $a$ must be smaller than $10^{50}$, yet bigger than $10^{49}$. </p> <p>Let $$a=\sum_{i=0}^{49}a_{i}10^{i}$$ Then let $(b_{0}, b_{1}, b_{2}, b_{3}, b_{4}, \dots, b_{49})$ be the digits of $b$ in order, and because of the definition of $b$ it follows that they are a rearrangement of $(a_{0}, a_{1}, a_{2}, a_{3}, a_{4}, \dots, a_{49})$. </p> <p>So $$10^{50}=\sum_{i=i}^{49}(a_{i}+b_{i})10^{i}$$</p> <p>Since $a_{0} \neq 0$, we have that $a_{0}+b_{0}=10$. Because of carrying, we can use this to claim that $a_{1}+b_{1}=9$, and so on. Eventually, we note that for $n \ge i \ge 1 $ we have $a_{i}+b_{i}=9$. So we have $$\sum_{i=i}^{49}(a_{i}+b_{i})=10 +9 +9+ \dots +9= 9 \times 49+10 =451$$ However, since $(b_{0}, b_{1}, b_{2}, b_{3}, b_{4}, \dots, b_{49})$ is a rearrangement of $(a_{0}, a_{1}, a_{2}, a_{3}, a_{4}, \dots, a_{49})$, we have $$451=\sum_{i=i}^{49}(a_{i}+b_{i})=2\sum_{i=0}^{49}a_{i}$$ A contradiction, since $\sum\limits_{i=0}^{49}a_{i}$ is a natural. </p> Answer 14: <p>(I'd like to post an answer using more intuition and less fancy technique. The proof may look a bit weird at first, but I think it is valid. )</p> <p>Step 1: show that the last digit of $a$ can only be $0$ or $5$.</p> <p>Step 2: show that the last digit of $a$ cannot be $5$.</p> <p>Now step 1: </p> <p>Suppose the last digit of $a$ is $2$. Then the last digit of $b$ has to be $8$. 
Because $a$ is essentially a permutation of the digits of $b$, some digit of $a$ must be $8$; say it is the $K^{th}$ digit of $a$. Then the $K^{th}$ digit of $b$ must be $1$, because $a$ and $b$ add up to $10^{50}$. By similar reasoning, $1$ must be some digit of $b$, say the $L^{th}$ digit of $b$. Then the $L^{th}$ digit of $a$ has to be $8$. In this way, the digits $1$ and $8$ reappear over and over, and the digit $2$ never appears again. (You can try a few steps to understand the pairing system.) So $a$ is not a permutation of the digits of $b$. A contradiction. </p> <p>By similar arguments, we can show that the last digit of $a$ can only be $0$ or $5$.</p> <p>Now step 2:</p> <p>Suppose the last digit of $a$ is $5$. Then the last digit of $b$ is $5$. Then we have:</p> <p>$$\overline{a5}+\overline{b5}=10^{50}$$$$\Rightarrow\overline{a0}+\overline{b0}=\underbrace{99\ldots9}_{49}\,0$$$$\Rightarrow\overline{a}+\overline{b}=\underbrace{99\ldots9}_{49}$$</p> <p>We are going to prove that the last equation has no solution. Apply the same method as in step 1. Suppose the last digit of $a$ is $3$. Then the last digit of $b$ is $6$. Then some digit of $a$ is $6$. The pattern in this pairing system is that $3$ and $6$ appear alternately. Note that the number of digits, $49$, is odd. So in the end there is either one extra $3$ or one extra $6$. So $a$ is not a permutation of $b$. Applying similar reasoning to the other possible last digits, the last digit of $a$ cannot be $1, 2, 3, 4, 5, 6, 7, 8, 9, 0$; so no such $a$ exists. </p> <p>We have now proved that the last digit of $a$ can only be $0$.</p>
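The parity argument in the answers above only uses that the number of digits ($50$) is even, so it can be sanity-checked by brute force on a smaller even case. A minimal sketch in Python (the helper name `tail_digit_solutions` is mine, not from the answers), checking every pair $a+b=10^4$ of $4$-digit numbers with equal digit multisets:

```python
# Brute-force check of the claim on a 4-digit analogue of the 50-digit problem:
# if a + b = 10**n with a, b both n-digit numbers and the digits of b are a
# permutation of the digits of a, then a (and b) must end in 0.
def tail_digit_solutions(n):
    total = 10 ** n
    lo = 10 ** (n - 1)                   # smallest n-digit number
    sols = []
    for a in range(lo, total - lo + 1):  # keep b = total - a an n-digit number too
        b = total - a
        if sorted(str(a)) == sorted(str(b)):
            sols.append(a)
    return sols

sols = tail_digit_solutions(4)
assert sols, "at least one solution exists, e.g. 5000 + 5000"
assert all(a % 10 == 0 for a in sols)    # every solution ends in 0
```

Solutions such as $4550+5450=10^4$ show the set is not just $5000+5000$, and every one of them does end in $0$, matching the proofs.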
Lagrange&#39;s theorem and divisibility consequences.
2,107,950
8
https://math.stackexchange.com/questions/2107950/lagranges-theorem-and-divisibility-consequences
<p>There are some simple, but sometimes intriguing, divisibility statements that are straightforward consequences of Lagrange's theorem. For instance:</p> <ul> <li>$p$ divides $a^{p-1}-1$ for $p$ prime with $p\nmid a$ (Fermat's little thm)</li> <li>$n!$ divides $(p^n-1)(p^n-p)\cdots(p^{n}-p^{n-1}).$</li> </ul> <p>The latter one can be derived from the fact that $S_n \hookrightarrow GL_{n}(\mathbb{F}_p)$.</p> <p>I've noticed that simple examples like those can be very compelling for students (beginners).</p> <p><strong>Question</strong>: Are there more interesting divisibility statements that are immediate consequences of Lagrange's theorem? That is, coming from the simple fact that a group $H$ is a subgroup of a finite group $G$?</p>
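Both bullet points in the question can be spot-checked numerically for small parameters; a quick Python sketch (the helper name `gl_order` is mine):

```python
from math import factorial, prod

# |GL_n(F_p)| = (p^n - 1)(p^n - p) ... (p^n - p^(n-1))
def gl_order(p, n):
    return prod(p ** n - p ** i for i in range(n))

# Fermat's little theorem: p divides a^(p-1) - 1 when p is prime and p does not divide a.
for p in (3, 5, 7):
    for a in (2, 3, 4, 10):
        if a % p != 0:
            assert pow(a, p - 1, p) == 1

# S_n embeds in GL_n(F_p), so n! divides |GL_n(F_p)| by Lagrange's theorem.
for p in (2, 3, 5):
    for n in range(1, 7):
        assert gl_order(p, n) % factorial(n) == 0
```

The second loop is exactly the embedding argument from the question: $|S_n| = n!$ must divide $|GL_n(\mathbb{F}_p)|$, which is the displayed product.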
[ { "answer_id": 2107952, "score": 3, "text": "<p>I know one, although it isn't terribly exciting.</p>\n\n<p>We can prove that $k!n!^k$ is a divisor of $(nk)!$ by noting that the first is the order of the subgroup of the permutations of $\\{1,2,\\dots kn\\}$ such that it permutes the groups $\\{1,2,3,\\dots n\\},\\{n+1,\\dots, 2n\\} , \\dots , \\{kn-k+1,\\dots,kn\\}$ internally and externally.</p>\n" }, { "answer_id": 2107958, "score": 2, "text": "<p>For $a,n \\geq 1$, $n\\mid\\phi(a^n - 1)$ ($\\phi$ being Euler's $\\phi$-function). 
This follows from noting that the order of $a$ in $\\left(\\Bbb Z/(a^n-1)\\Bbb Z\\right)^\\times$ is $n$.</p>\n" }, { "answer_id": 2107952, "score": 3, "text": "<p>I know one, although it isn't terribly exciting.</p>\n\n<p>We can prove that $k!n!^k$ is a divisor of $(nk)!$ by noting that the first is the order of the subgroup of the permutations of $\\{1,2,\\dots kn\\}$ such that it permutes the groups $\\{1,2,3,\\dots n\\},\\{n+1,\\dots, 2n\\} , \\dots , \\{kn-k+1,\\dots,kn\\}$ internally and externally.</p>\n" }, { "answer_id": 2107958, "score": 2, "text": "<p>For $a,n \\geq 1$, $n\\mid\\phi(a^n - 1)$ ($\\phi$ being Euler's $\\phi$-function). This follows from noting that the order of $a$ in $\\left(\\Bbb Z/(a^n-1)\\Bbb Z\\right)^\\times$ is $n$.</p>\n" } ]
Question: Lagrange&#39;s theorem and divisibility consequences. Answer 1: <p>I know one, although it isn't terribly exciting.</p> <p>We can prove that $k!n!^k$ is a divisor of $(nk)!$ by noting that the first is the order of the subgroup of the permutations of $\{1,2,\dots kn\}$ such that it permutes the groups $\{1,2,3,\dots n\},\{n+1,\dots, 2n\} , \dots , \{kn-k+1,\dots,kn\}$ internally and externally.</p> Answer 2: <p>For $a,n \geq 1$, $n\mid\phi(a^n - 1)$ ($\phi$ being Euler's $\phi$-function). This follows from noting that the order of $a$ in $\left(\Bbb Z/(a^n-1)\Bbb Z\right)^\times$ is $n$.</p>
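Both divisibility facts above are easy to check numerically; a small Python sketch (the `phi` helper is my own direct-count totient, not something from the answers):

```python
from math import factorial, gcd

def phi(m):
    # Euler's totient by direct count (fine for the small m used here)
    return sum(1 for k in range(1, m + 1) if gcd(k, m) == 1)

# Answer 1: k! * (n!)^k divides (nk)!, being the order of a subgroup of S_{nk}
for n in range(1, 6):
    for k in range(1, 6):
        assert factorial(n * k) % (factorial(k) * factorial(n) ** k) == 0

# Answer 2: n divides phi(a^n - 1), since a has order n modulo a^n - 1
for a in range(2, 6):
    for n in range(1, 6):
        assert phi(a ** n - 1) % n == 0
```

The ranges are arbitrary small bounds; both statements hold for all admissible $n,k$ and $a \ge 2$ by the group-theoretic arguments given.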
How did Euler prove this identity?
2,108,245
8
https://math.stackexchange.com/questions/2108245/how-did-euler-prove-this-identity
<p>While studying Fourier analysis last semester, I saw an interesting identity:</p> <p>$$\sum_{n=1}^{\infty}\frac{1}{n^2-\alpha^2}=\frac{1}{2\alpha^2}-\frac{\pi}{2\alpha\tan\pi\alpha}$$ whenever $\alpha \in \mathbb{C}\setminus \mathbb{Z}$, for which I learned two proofs, using Fourier series and residue calculus.</p> <p>More explicitly, we can deduce the theorem using the Fourier series of $f(\theta)=e^{i(\pi - \theta)\alpha}$ on $[0,2\pi]$, or a contour integral of the function $g(z)=\frac{\pi}{(z^2-\alpha^2)\tan\pi z}$ along large circles.</p> <p>But these techniques, as far as I know, weren't fully developed in Euler's time.</p> <p>So what was Euler's method to prove this identity? Is there any proof at an elementary level?</p>
[ { "answer_id": 2108301, "score": 12, "text": "<p>According to Wikipedia (<a href=\"https://en.wikipedia.org/wiki/Basel_problem\">https://en.wikipedia.org/wiki/Basel_problem</a>),\n Euler was the first to give a representation of the sine function as an infinite product:\n$$(*) \\hspace{2cm}\\sin (\\pi \\alpha)=\\pi \\alpha \\prod\\limits_{n=1}^{\\infty}(\\frac{n^2-\\alpha^2}{n^2}),$$\nwhich was formally proved by Weierstrass about 100 years later.</p>\n\n<p>Now taking \"$\\ln$\" on by sides of (*) gives\n$$\\ln(\\sin (\\pi \\alpha))=\\ln(\\pi \\alpha)+ \\sum \\limits_{n=1}^{\\infty} \\ln (\\frac{n^2-\\alpha^2}{n^2}),$$\nand after taking derivatives on both sides we arrive at </p>\n\n<p>$$\\sum_{n=1}^{\\infty}\\frac{1}{n^2-\\alpha^2}=\\frac{1}{2\\alpha^2}-\\frac{\\pi}{2\\alpha\\tan\\pi\\alpha}.$$</p>\n" }, { "answer_id": 2108251, "score": 4, "text": "<p>For the partial fraction decomposition of the cotangent\n$$\\pi \\cot \\pi z = \\frac{1}{z} + \\sum_{n\\ge 1}\\left( \\frac{1}{z-n} + \\frac{1}{z+n}\\right) = \\lim_{k\\to\\infty} \\sum_{n=-k}^k \\frac{1}{z-n}$$\nhence\n\\begin{align*}\n\\sum_{n=1}^\\infty \\frac{1}{n^2-\\alpha ^2}\n&amp;= \\lim_{k\\to\\infty}\\sum_{n=1}^k \\frac{1}{n^2-\\alpha^2}\\\\\n&amp;= -\\frac{1}{2\\alpha}\\lim_{k\\to\\infty} \\sum_{n=1}^k \\frac{1}{\\alpha-n} + \\frac{1}{\\alpha+n}\\\\\n&amp;= -\\frac{1}{2\\alpha}\\lim_{k\\to\\infty} \\left(-\\frac{1}{\\alpha} +\\sum_{n=-k}^k \\frac{1}{\\alpha-n}\\right)\\\\\n&amp;= -\\frac{1}{2\\alpha}\\left(\\pi \\cot \\pi \\alpha - \\frac{1}{\\alpha}\\right)\\\\\n&amp;= \\frac{1-\\pi \\alpha\\cot \\pi \\alpha}{2\\alpha^2}\\\\\n&amp;=\\frac{1}{2\\alpha^2}-\\frac{\\pi}{2\\alpha\\tan\\pi\\alpha}\n\\end{align*}</p>\n" }, { "answer_id": 2108301, "score": 12, "text": "<p>According to Wikipedia (<a href=\"https://en.wikipedia.org/wiki/Basel_problem\">https://en.wikipedia.org/wiki/Basel_problem</a>),\n Euler was the first to give a representation of the sine function as an infinite product:\n$$(*) 
\\hspace{2cm}\\sin (\\pi \\alpha)=\\pi \\alpha \\prod\\limits_{n=1}^{\\infty}(\\frac{n^2-\\alpha^2}{n^2}),$$\nwhich was formally proved by Weierstrass about 100 years later.</p>\n\n<p>Now taking \"$\\ln$\" on both sides of (*) gives\n$$\\ln(\\sin (\\pi \\alpha))=\\ln(\\pi \\alpha)+ \\sum \\limits_{n=1}^{\\infty} \\ln (\\frac{n^2-\\alpha^2}{n^2}),$$\nand after taking derivatives on both sides we arrive at </p>\n\n<p>$$\\sum_{n=1}^{\\infty}\\frac{1}{n^2-\\alpha^2}=\\frac{1}{2\\alpha^2}-\\frac{\\pi}{2\\alpha\\tan\\pi\\alpha}.$$</p>\n" }, { "answer_id": 2108251, "score": 4, "text": "<p>From the partial fraction decomposition of the cotangent\n$$\\pi \\cot \\pi z = \\frac{1}{z} + \\sum_{n\\ge 1}\\left( \\frac{1}{z-n} + \\frac{1}{z+n}\\right) = \\lim_{k\\to\\infty} \\sum_{n=-k}^k \\frac{1}{z-n}$$\nhence\n\\begin{align*}\n\\sum_{n=1}^\\infty \\frac{1}{n^2-\\alpha ^2}\n&amp;= \\lim_{k\\to\\infty}\\sum_{n=1}^k \\frac{1}{n^2-\\alpha^2}\\\\\n&amp;= -\\frac{1}{2\\alpha}\\lim_{k\\to\\infty} \\sum_{n=1}^k \\frac{1}{\\alpha-n} + \\frac{1}{\\alpha+n}\\\\\n&amp;= -\\frac{1}{2\\alpha}\\lim_{k\\to\\infty} \\left(-\\frac{1}{\\alpha} +\\sum_{n=-k}^k \\frac{1}{\\alpha-n}\\right)\\\\\n&amp;= -\\frac{1}{2\\alpha}\\left(\\pi \\cot \\pi \\alpha - \\frac{1}{\\alpha}\\right)\\\\\n&amp;= \\frac{1-\\pi \\alpha\\cot \\pi \\alpha}{2\\alpha^2}\\\\\n&amp;=\\frac{1}{2\\alpha^2}-\\frac{\\pi}{2\\alpha\\tan\\pi\\alpha}\n\\end{align*}</p>\n" } ]
\\frac{1}{\\alpha-n}\\right)\\\\\n&amp;= -\\frac{1}{2\\alpha}\\left(\\pi \\cot \\pi \\alpha - \\frac{1}{\\alpha}\\right)\\\\\n&amp;= \\frac{1-\\pi \\alpha\\cot \\pi \\alpha}{2\\alpha^2}\\\\\n&amp;=\\frac{1}{2\\alpha^2}-\\frac{\\pi}{2\\alpha\\tan\\pi\\alpha}\n\\end{align*}</p>\n" }, { "answer_id": 2108301, "score": 12, "text": "<p>According to Wikipedia (<a href=\"https://en.wikipedia.org/wiki/Basel_problem\">https://en.wikipedia.org/wiki/Basel_problem</a>),\n Euler was the first to give a representation of the sine function as an infinite product:\n$$(*) \\hspace{2cm}\\sin (\\pi \\alpha)=\\pi \\alpha \\prod\\limits_{n=1}^{\\infty}(\\frac{n^2-\\alpha^2}{n^2}),$$\nwhich was formally proved by Weierstrass about 100 years later.</p>\n\n<p>Now taking \"$\\ln$\" on by sides of (*) gives\n$$\\ln(\\sin (\\pi \\alpha))=\\ln(\\pi \\alpha)+ \\sum \\limits_{n=1}^{\\infty} \\ln (\\frac{n^2-\\alpha^2}{n^2}),$$\nand after taking derivatives on both sides we arrive at </p>\n\n<p>$$\\sum_{n=1}^{\\infty}\\frac{1}{n^2-\\alpha^2}=\\frac{1}{2\\alpha^2}-\\frac{\\pi}{2\\alpha\\tan\\pi\\alpha}.$$</p>\n" }, { "answer_id": 2108251, "score": 4, "text": "<p>For the partial fraction decomposition of the cotangent\n$$\\pi \\cot \\pi z = \\frac{1}{z} + \\sum_{n\\ge 1}\\left( \\frac{1}{z-n} + \\frac{1}{z+n}\\right) = \\lim_{k\\to\\infty} \\sum_{n=-k}^k \\frac{1}{z-n}$$\nhence\n\\begin{align*}\n\\sum_{n=1}^\\infty \\frac{1}{n^2-\\alpha ^2}\n&amp;= \\lim_{k\\to\\infty}\\sum_{n=1}^k \\frac{1}{n^2-\\alpha^2}\\\\\n&amp;= -\\frac{1}{2\\alpha}\\lim_{k\\to\\infty} \\sum_{n=1}^k \\frac{1}{\\alpha-n} + \\frac{1}{\\alpha+n}\\\\\n&amp;= -\\frac{1}{2\\alpha}\\lim_{k\\to\\infty} \\left(-\\frac{1}{\\alpha} +\\sum_{n=-k}^k \\frac{1}{\\alpha-n}\\right)\\\\\n&amp;= -\\frac{1}{2\\alpha}\\left(\\pi \\cot \\pi \\alpha - \\frac{1}{\\alpha}\\right)\\\\\n&amp;= \\frac{1-\\pi \\alpha\\cot \\pi \\alpha}{2\\alpha^2}\\\\\n&amp;=\\frac{1}{2\\alpha^2}-\\frac{\\pi}{2\\alpha\\tan\\pi\\alpha}\n\\end{align*}</p>\n" }, { 
"answer_id": 2108301, "score": 12, "text": "<p>According to Wikipedia (<a href=\"https://en.wikipedia.org/wiki/Basel_problem\">https://en.wikipedia.org/wiki/Basel_problem</a>),\n Euler was the first to give a representation of the sine function as an infinite product:\n$$(*) \\hspace{2cm}\\sin (\\pi \\alpha)=\\pi \\alpha \\prod\\limits_{n=1}^{\\infty}(\\frac{n^2-\\alpha^2}{n^2}),$$\nwhich was formally proved by Weierstrass about 100 years later.</p>\n\n<p>Now taking \"$\\ln$\" on by sides of (*) gives\n$$\\ln(\\sin (\\pi \\alpha))=\\ln(\\pi \\alpha)+ \\sum \\limits_{n=1}^{\\infty} \\ln (\\frac{n^2-\\alpha^2}{n^2}),$$\nand after taking derivatives on both sides we arrive at </p>\n\n<p>$$\\sum_{n=1}^{\\infty}\\frac{1}{n^2-\\alpha^2}=\\frac{1}{2\\alpha^2}-\\frac{\\pi}{2\\alpha\\tan\\pi\\alpha}.$$</p>\n" }, { "answer_id": 2108251, "score": 4, "text": "<p>For the partial fraction decomposition of the cotangent\n$$\\pi \\cot \\pi z = \\frac{1}{z} + \\sum_{n\\ge 1}\\left( \\frac{1}{z-n} + \\frac{1}{z+n}\\right) = \\lim_{k\\to\\infty} \\sum_{n=-k}^k \\frac{1}{z-n}$$\nhence\n\\begin{align*}\n\\sum_{n=1}^\\infty \\frac{1}{n^2-\\alpha ^2}\n&amp;= \\lim_{k\\to\\infty}\\sum_{n=1}^k \\frac{1}{n^2-\\alpha^2}\\\\\n&amp;= -\\frac{1}{2\\alpha}\\lim_{k\\to\\infty} \\sum_{n=1}^k \\frac{1}{\\alpha-n} + \\frac{1}{\\alpha+n}\\\\\n&amp;= -\\frac{1}{2\\alpha}\\lim_{k\\to\\infty} \\left(-\\frac{1}{\\alpha} +\\sum_{n=-k}^k \\frac{1}{\\alpha-n}\\right)\\\\\n&amp;= -\\frac{1}{2\\alpha}\\left(\\pi \\cot \\pi \\alpha - \\frac{1}{\\alpha}\\right)\\\\\n&amp;= \\frac{1-\\pi \\alpha\\cot \\pi \\alpha}{2\\alpha^2}\\\\\n&amp;=\\frac{1}{2\\alpha^2}-\\frac{\\pi}{2\\alpha\\tan\\pi\\alpha}\n\\end{align*}</p>\n" }, { "answer_id": 2108301, "score": 12, "text": "<p>According to Wikipedia (<a href=\"https://en.wikipedia.org/wiki/Basel_problem\">https://en.wikipedia.org/wiki/Basel_problem</a>),\n Euler was the first to give a representation of the sine function as an infinite product:\n$$(*) \\hspace{2cm}\\sin 
(\\pi \\alpha)=\\pi \\alpha \\prod\\limits_{n=1}^{\\infty}(\\frac{n^2-\\alpha^2}{n^2}),$$\nwhich was formally proved by Weierstrass about 100 years later.</p>\n\n<p>Now taking \"$\\ln$\" on by sides of (*) gives\n$$\\ln(\\sin (\\pi \\alpha))=\\ln(\\pi \\alpha)+ \\sum \\limits_{n=1}^{\\infty} \\ln (\\frac{n^2-\\alpha^2}{n^2}),$$\nand after taking derivatives on both sides we arrive at </p>\n\n<p>$$\\sum_{n=1}^{\\infty}\\frac{1}{n^2-\\alpha^2}=\\frac{1}{2\\alpha^2}-\\frac{\\pi}{2\\alpha\\tan\\pi\\alpha}.$$</p>\n" }, { "answer_id": 2108251, "score": 4, "text": "<p>For the partial fraction decomposition of the cotangent\n$$\\pi \\cot \\pi z = \\frac{1}{z} + \\sum_{n\\ge 1}\\left( \\frac{1}{z-n} + \\frac{1}{z+n}\\right) = \\lim_{k\\to\\infty} \\sum_{n=-k}^k \\frac{1}{z-n}$$\nhence\n\\begin{align*}\n\\sum_{n=1}^\\infty \\frac{1}{n^2-\\alpha ^2}\n&amp;= \\lim_{k\\to\\infty}\\sum_{n=1}^k \\frac{1}{n^2-\\alpha^2}\\\\\n&amp;= -\\frac{1}{2\\alpha}\\lim_{k\\to\\infty} \\sum_{n=1}^k \\frac{1}{\\alpha-n} + \\frac{1}{\\alpha+n}\\\\\n&amp;= -\\frac{1}{2\\alpha}\\lim_{k\\to\\infty} \\left(-\\frac{1}{\\alpha} +\\sum_{n=-k}^k \\frac{1}{\\alpha-n}\\right)\\\\\n&amp;= -\\frac{1}{2\\alpha}\\left(\\pi \\cot \\pi \\alpha - \\frac{1}{\\alpha}\\right)\\\\\n&amp;= \\frac{1-\\pi \\alpha\\cot \\pi \\alpha}{2\\alpha^2}\\\\\n&amp;=\\frac{1}{2\\alpha^2}-\\frac{\\pi}{2\\alpha\\tan\\pi\\alpha}\n\\end{align*}</p>\n" } ]
Question: How did Euler prove this identity? Answer 1: <p>According to Wikipedia (<a href="https://en.wikipedia.org/wiki/Basel_problem">https://en.wikipedia.org/wiki/Basel_problem</a>), Euler was the first to give a representation of the sine function as an infinite product: $$(*) \hspace{2cm}\sin (\pi \alpha)=\pi \alpha \prod\limits_{n=1}^{\infty}(\frac{n^2-\alpha^2}{n^2}),$$ which was formally proved by Weierstrass about 100 years later.</p> <p>Now taking "$\ln$" on both sides of (*) gives $$\ln(\sin (\pi \alpha))=\ln(\pi \alpha)+ \sum \limits_{n=1}^{\infty} \ln (\frac{n^2-\alpha^2}{n^2}),$$ and after taking derivatives on both sides we arrive at </p> <p>$$\sum_{n=1}^{\infty}\frac{1}{n^2-\alpha^2}=\frac{1}{2\alpha^2}-\frac{\pi}{2\alpha\tan\pi\alpha}.$$</p> Answer 2: <p>For the partial fraction decomposition of the cotangent $$\pi \cot \pi z = \frac{1}{z} + \sum_{n\ge 1}\left( \frac{1}{z-n} + \frac{1}{z+n}\right) = \lim_{k\to\infty} \sum_{n=-k}^k \frac{1}{z-n}$$ hence \begin{align*} \sum_{n=1}^\infty \frac{1}{n^2-\alpha ^2} &amp;= \lim_{k\to\infty}\sum_{n=1}^k \frac{1}{n^2-\alpha^2}\\ &amp;= -\frac{1}{2\alpha}\lim_{k\to\infty} \sum_{n=1}^k \frac{1}{\alpha-n} + \frac{1}{\alpha+n}\\ &amp;= -\frac{1}{2\alpha}\lim_{k\to\infty} \left(-\frac{1}{\alpha} +\sum_{n=-k}^k \frac{1}{\alpha-n}\right)\\ &amp;= -\frac{1}{2\alpha}\left(\pi \cot \pi \alpha - \frac{1}{\alpha}\right)\\ &amp;= \frac{1-\pi \alpha\cot \pi \alpha}{2\alpha^2}\\ &amp;=\frac{1}{2\alpha^2}-\frac{\pi}{2\alpha\tan\pi\alpha} \end{align*}</p>
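As a quick numerical sanity check of the identity both answers derive (this snippet is an editorial addition, not part of either answer; the function names and tolerance are my own), the closed form $\sum_{n\ge 1} 1/(n^2-\alpha^2) = \frac{1}{2\alpha^2}-\frac{\pi}{2\alpha\tan\pi\alpha}$ can be compared against a partial sum in Python:

```python
import math

def lhs(alpha, terms=200_000):
    # Partial sum of sum_{n>=1} 1/(n^2 - alpha^2); the tail decays like 1/N.
    return sum(1.0 / (n * n - alpha * alpha) for n in range(1, terms + 1))

def rhs(alpha):
    # Closed form from the answers: 1/(2a^2) - pi/(2a tan(pi a)).
    return 1.0 / (2 * alpha * alpha) - math.pi / (2 * alpha * math.tan(math.pi * alpha))

# Check at a few non-integer values of alpha.
for a in (0.1, 0.3, 0.5):
    assert abs(lhs(a) - rhs(a)) < 1e-4
```

With 200,000 terms the truncation error is of order $10^{-5}$, so the two sides agree to the tolerance asserted; $\alpha = 0.5$ is also a nice cross-check, since there the sum telescopes to exactly $2$.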
Can cohomologies always be defined as the quotient of the kernel by the image of a certain operation?
2,110,111
8
https://math.stackexchange.com/questions/2110111/can-cohomologies-always-be-defined-as-the-quotient-of-the-kernel-by-the-image-of
<p>For example, concerning de Rham cohomology as explained on the lower part of page 5 <a href="http://www.math.uni-hamburg.de/home/wockel/teaching/data/cohomology_of_lie_algebras_mavraj.pdf">here</a>, one considers the vector space of all forms on a manifold $M$, $A_{DR}(M)$ in the notation of the paper (?).</p> <p>The $p$-th de Rham cohomology is then defined as the quotient of the kernel by the image of the exterior derivative $d$ acting on the spaces of $p$- and $(p-1)$-forms respectively, as</p> <p>$$ H_{DR}^p(M) = \frac{Ker(d: A_{DR}^{p}(M)\rightarrow A_{DR}^{p+1}(M))} {Im(d: A_{DR}^{p-1}(M)\rightarrow A_{DR}^{p}(M))} $$</p> <p>Can cohomology always be defined as the kernel divided by the image of a certain operation?</p> <p>What does cohomology mean or "measure" in as simple as possible intuitive terms in the de Rham case but also more generally?</p> <p>PS: I tried to read wikipedia of course, but it was (not yet?) very enlightening for me ...</p>
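The formula in the question — cohomology as the kernel of one map modulo the image of the previous one — can be tried out concretely with finite-dimensional linear algebra, using $\dim H^k = \dim\ker d^k - \operatorname{rank} d^{k-1}$. The sketch below (the simplicial circle model and helper functions are illustrative, not taken from the linked notes) computes this for a hollow triangle, a combinatorial model of the circle:

```python
from fractions import Fraction

def rank(mat):
    """Rank of a matrix (given as a list of rows) via exact Gaussian elimination."""
    m = [[Fraction(x) for x in row] for row in mat]
    r = 0
    for col in range(len(m[0]) if m else 0):
        piv = next((i for i in range(r, len(m)) if m[i][col] != 0), None)
        if piv is None:
            continue
        m[r], m[piv] = m[piv], m[r]
        for i in range(len(m)):
            if i != r and m[i][col] != 0:
                f = m[i][col] / m[r][col]
                m[i] = [a - f * b for a, b in zip(m[i], m[r])]
        r += 1
    return r

def cohomology_dims(coboundaries, dims):
    """dim H^k = (dims[k] - rank d^k) - rank d^{k-1}, with the maps off
    either end of the complex taken to be zero."""
    ranks = [rank(d) if d else 0 for d in coboundaries]
    return [(n - ranks[k]) - (ranks[k - 1] if k > 0 else 0)
            for k, n in enumerate(dims)]

# Hollow triangle (a simplicial circle): 3 vertices, 3 edges, no faces.
# (d^0 f)(u, v) = f(v) - f(u); rows are the edges (0,1), (1,2), (0,2).
d0 = [[-1, 1, 0],
      [0, -1, 1],
      [-1, 0, 1]]
d1 = []  # no 2-simplices, so d^1 = 0

betti = cohomology_dims([d0, d1], dims=[3, 3])
print(betti)  # expect [1, 1]: one connected component, one 1-dimensional "hole"
```

Adding 2-simplices (a nonzero $d^1$ matrix) would let the same ker/im quotient detect $H^2$ of a sphere model; the quotient construction itself never changes, which is the point of the accepted answer below.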
[ { "answer_id": 2110147, "score": 12, "text": "<p>Cohomology/homology is always defined this way. A chain complex is a vector space equipped with a linear map $d:V\\to V$ such that $d^2=0$. This ensures that the image is a subspace of the kernel so that the quotient is defined, and this quotient is the homology of the chain complex.</p>\n\n<p>There are many homology theories, some are closely related and some are completely separate. For instance, de Rham cohomology, singular cohomology, simplicial cohomology, cellular cohomology and Morse cohomology are all isomorphic whenever they all make sense. However, there are plenty of other things, like group homology, Hochschild homology, which have nothing much to do with topology and are purely algebraic. And other theories like Khovanov homology, which is an invariant of knots.</p>\n\n<p>In the case of de Rham, you can think of it as measuring \"holes\" of certain dimensions. For example, the first de Rham cohomology of the circle is $H^1(S^1)=\\mathbb{R}$ and higher cohomologies are $0$, but for a sphere we have a generator in the second cohomology $H^2(S^2)=\\mathbb{R}$, so it is detecting the higher dimensional \"hole\" in $S^2$. The related family of homology theories (singular, cellular etc) all do the same thing.</p>\n\n<p>In general, there's no single answer to what a homology theory is measuring, since they arise in many settings for completely different purposes. And precisely what Khovanov homology is measuring is pretty much an open question.</p>\n" }, { "answer_id": 2110568, "score": 3, "text": "<p>A <em>cochain complex</em> $(A^{\\bullet}, d^{\\bullet})$ is a sequence of abelian groups $\\dots, A^{-2}, A^{-1}, A^0, A^1, A^2, \\dots$ and group homomorphisms $d^k : A^k \\to A^{k+1}$ such that $d^k\\circ d^{k-1} = 0$. 
That is, we have a complex </p>\n\n<p>$$\\dots \\xrightarrow{d^{-3}} A^{-2} \\xrightarrow{d^{-2}} A^{-1} \\xrightarrow{d^{-1}} A^0 \\xrightarrow{d^0} A^1 \\xrightarrow{d^1} A^2 \\xrightarrow{d^2} \\dots$$</p>\n\n<p>As $d^k\\circ d^{k-1} = 0$, $\\operatorname{im} d^{k-1} \\subseteq \\ker d^k$. Furthermore, as the group $A^k$ is abelian, $\\operatorname{im} d^{k-1}$ is a normal subgroup, so we can form the quotient group</p>\n\n<p>$$H^k(A^{\\bullet}, d^{\\bullet}) := \\frac{\\ker d^k : A^k \\to A^{k+1}}{\\operatorname{im} d^{k-1} : A^{k-1} \\to A^k}.$$</p>\n\n<p>This is called the $k^{\\text{th}}$ <em>cohomology group</em> of the cochain complex $(A^{\\bullet}, d^{\\bullet})$. </p>\n\n<p>Note that $H^k(A^{\\bullet}, d^{\\bullet}) = 0$ if and only if the complex is <em>exact</em> at $A^k$ (i.e. $\\operatorname{im} d^{k-1} = \\ker d^k$). So we can view $H^k(A^{\\bullet}, d^{\\bullet})$ as a measure of how close the complex is to being exact at $A^k$ - that is, how close the inclusion $\\operatorname{im} d^{k-1} \\subseteq \\ker d^k$ is to being an equality (the bigger $H^k(A^{\\bullet}, d^{\\bullet})$ is, the more elements of $\\ker d^k$ there are which are not in $\\operatorname{im} d^{k-1}$). From this point of view, it seems that considering $\\ker d^k\\setminus \\operatorname{im} d^{k-1}$ is a much more natural thing to do. One problem with this approach is that while $A^{k-1}$, $A^k$, $\\ker d^k$, and $\\operatorname{im} d^{k-1}$ are all groups, $\\ker d^k\\setminus \\operatorname{im} d^{k-1}$ is not a group whereas $H^k(A^{\\bullet}, d^{\\bullet})$ is. The group structure is very useful, which is why cohomology is defined as it is.</p>\n\n<p>The example you gave, de Rham cohomology, is the cohomology of the cochain complex $(A^{\\bullet}_{DR}(M), d^{\\bullet})$ where $d^k : A^k_{DR}(M) \\to A^{k+1}_{DR}(M)$ is the exterior derivative. 
Note that $\\ker d^k$ is the set of closed $k$-forms and $\\operatorname{im} d^{k-1}$ is set of exact $k$-forms, so $H^k_{DR}(M)$ measures how close the statement \"every closed $k$-form is exact\" is to being true.</p>\n\n<p>Your first question seems to be: </p>\n\n<blockquote>\n <p>Do all cohomology groups arise from taking the cohomology of a cochain complex? </p>\n</blockquote>\n\n<p>What it means to be a cohomology group (or more precisely, a cohomology theory) can be defined axiomatically using the Eilenberg-Steenrod axioms.</p>\n\n<p>While there are many different ways of defining a collection of cohomology groups $\\dots H^{-2}, H^{-1}, H^0, H^1, H^2, \\dots$, some of which are not <em>defined</em> as the cohomology of a cochain complex, there does exist a cochain complex $(A^{\\bullet}, d^{\\bullet})$ whose cohomology groups are isomorphic to the given ones: just take $A^k = H^k$ and $d^k = 0$, then</p>\n\n<p>$$H^k(A^{\\bullet}, d^{\\bullet}) = \\frac{\\ker d^k : A^k \\to A^{k+1}}{\\operatorname{im} d^{k-1} : A^{k-1} \\to A^k} = \\frac{A^k}{0} \\cong A^k = H^k.$$</p>\n" }, { "answer_id": 2110814, "score": 3, "text": "<p>$\\newcommand{\\dd}{\\partial}\\newcommand{\\Brak}[1]{\\left\\langle #1\\right\\rangle}$To give a more elementary answer intended to complement the existing answers, here are some multivariable calculus-type observations. Throughout, $M$&nbsp;denotes a compact, smooth manifold of dimension&nbsp;$n$ (possibly disconnected), and differential forms are smooth. Briefly,</p>\n\n<ul>\n<li>$p$-dimensional de Rham coholomogy measures \"how many linearly-independent $p$-dimensional real homology classes $M$&nbsp;contains\". (William S. 
Massey's Algebraic Topology article in the <em>Encyclopedia Britannica</em> gives a clear, geometric introduction to homology.)</li>\n</ul>\n\n<p>If $X$&nbsp;is a <a href=\"https://en.wikipedia.org/wiki/Singular_homology\" rel=\"nofollow noreferrer\">smooth $k$-chain</a> and $\\omega$&nbsp;is a smooth $p$-form on&nbsp;$M$, let\n$$\n\\Brak{X, \\omega} = \\begin{cases}\n \\int_{X} \\omega &amp; k = p, \\\\\n 0 &amp; k \\neq p,\n\\end{cases}\n$$\ndenote the \"integration pairing\". If $X$&nbsp;is a $(p + 1)$-chain, Stokes's theorem gives\n$$\n\\Brak{\\dd X, \\omega} = \\int_{\\dd X} \\omega = \\int_{X} d\\omega = \\Brak{X, d\\omega}.\n$$\nWith respect to the integration pairing, the boundary operator on chains and the exterior derivative operator on forms are formally adjoint.</p>\n\n<ol>\n<li><p>A <em>closed</em> form&nbsp;$\\omega$ (i.e., with $d\\omega = 0$) satisfies\n$$\n\\Brak{\\dd X, \\omega} = \\Brak{X, d\\omega} = 0.\n$$\nIn words, the integral of&nbsp;$\\omega$ over a $p$-chain bounding some $(p + 1)$-chain vanishes. Consequently, if $X_{1}$&nbsp;and $X_{2}$ are homologous $p$-chains (i.e., whose difference is a boundary, say $X_{1} - X_{2} = \\dd Y$), then\n$$\n\\Brak{X_{1}, \\omega} = \\Brak{X_{2}, \\omega}.\n$$</p></li>\n<li><p>An <em>exact</em> $p$-form $\\omega$ (i.e., with $\\omega = d\\eta$ for some $(p - 1)$-form $\\eta$) satisfies\n$$\n\\Brak{X, \\omega} = \\Brak{X, d\\eta} = \\Brak{\\dd X, \\eta}.\n$$\nParticularly, the integral of&nbsp;$\\omega$ over a boundaryless $p$-dimensional chain vanishes. Consequently, if $X_{1}$&nbsp;and $X_{2}$ are $p$-chains having the same boundary, then\n$$\n\\Brak{X_{1}, \\omega} = \\Brak{X_{2}, \\omega}.\n$$</p></li>\n</ol>\n\n<p>Via the integration pairing, a $p$-form determines a real-valued function on $p$-chains (with real coefficients). Item&nbsp;1 says that a <em>closed</em> $p$-form determines a real-valued function on <em>real homology classes</em>. 
Item&nbsp;2 says this function is unchanged upon addition of an exact $p$-form, i.e., is constant on de&nbsp;Rham cohomology classes.</p>\n" } ]
Item&nbsp;2 says this function is unchanged upon addition of an exact $p$-form, i.e., is constant on de&nbsp;Rham cohomology classes.</p>\n" }, { "answer_id": 2110147, "score": 12, "text": "<p>Cohomology/homology is always defined this way. A chain complex is a vector space equipped with a linear map $d:V\\to V$ such that $d^2=0$. This ensures that the image is a subspace of the kernel so that the quotient is defined, and this quotient is the homology of the chain complex.</p>\n\n<p>There are many homology theories, some are closely related and some are completely separate. For instance, de Rham cohomology, singular cohomology, simplicial cohomology, cellular cohomology and Morse cohomology are all isomorphic whenever they all make sense. However, there are plenty of other things, like group homology, Hochschild homology, which have nothing much to do with topology and are purely algebraic. And other theories like Khovanov homology, which is an invariant of knots.</p>\n\n<p>In the case of de Rham, you can think of it as measuring \"holes\" of certain dimensions. For example, the first de Rham cohomology of the circle is $H^1(S^1)=\\mathbb{R}$ and higher cohomologies are $0$, but for a sphere we have a generator in the second cohomology $H^2(S^2)=\\mathbb{R}$, so it is detecting the higher dimensional \"hole\" in $S^2$. The related family of homology theories (singular, cellular etc) all do the same thing.</p>\n\n<p>In general, there's no single answer to what a homology theory is measuring, since they arise in many settings for completely different purposes. And precisely what Khovanov homology is measuring is pretty much an open question.</p>\n" }, { "answer_id": 2110568, "score": 3, "text": "<p>A <em>cochain complex</em> $(A^{\\bullet}, d^{\\bullet})$ is a sequence of abelian groups $\\dots, A^{-2}, A^{-1}, A^0, A^1, A^2, \\dots$ and group homomorphisms $d^k : A^k \\to A^{k+1}$ such that $d^k\\circ d^{k-1} = 0$. 
That is, we have a complex </p>\n\n<p>$$\\dots \\xrightarrow{d^{-3}} A^{-2} \\xrightarrow{d^{-2}} A^{-1} \\xrightarrow{d^{-1}} A^0 \\xrightarrow{d^0} A^1 \\xrightarrow{d^1} A^2 \\xrightarrow{d^2} \\dots$$</p>\n\n<p>As $d^k\\circ d^{k-1} = 0$, $\\operatorname{im} d^{k-1} \\subseteq \\ker d^k$. Furthermore, as the group $A^k$ is abelian, $\\operatorname{im} d^{k-1}$ is a normal subgroup, so we can form the quotient group</p>\n\n<p>$$H^k(A^{\\bullet}, d^{\\bullet}) := \\frac{\\ker d^k : A^k \\to A^{k+1}}{\\operatorname{im} d^{k-1} : A^{k-1} \\to A^k}.$$</p>\n\n<p>This is called the $k^{\\text{th}}$ <em>cohomology group</em> of the cochain complex $(A^{\\bullet}, d^{\\bullet})$. </p>\n\n<p>Note that $H^k(A^{\\bullet}, d^{\\bullet}) = 0$ if and only if the complex is <em>exact</em> at $A^k$ (i.e. $\\operatorname{im} d^{k-1} = \\ker d^k$). So we can view $H^k(A^{\\bullet}, d^{\\bullet})$ as a measure of how close the complex is to being exact at $A^k$ - that is, how close the inclusion $\\operatorname{im} d^{k-1} \\subseteq \\ker d^k$ is to being an equality (the bigger $H^k(A^{\\bullet}, d^{\\bullet})$ is, the more elements of $\\ker d^k$ there are which are not in $\\operatorname{im} d^{k-1}$). From this point of view, it seems that considering $\\ker d^k\\setminus \\operatorname{im} d^{k-1}$ is a much more natural thing to do. One problem with this approach is that while $A^{k-1}$, $A^k$, $\\ker d^k$, and $\\operatorname{im} d^{k-1}$ are all groups, $\\ker d^k\\setminus \\operatorname{im} d^{k-1}$ is not a group whereas $H^k(A^{\\bullet}, d^{\\bullet})$ is. The group structure is very useful, which is why cohomology is defined as it is.</p>\n\n<p>The example you gave, de Rham cohomology, is the cohomology of the cochain complex $(A^{\\bullet}_{DR}(M), d^{\\bullet})$ where $d^k : A^k_{DR}(M) \\to A^{k+1}_{DR}(M)$ is the exterior derivative. 
Note that $\\ker d^k$ is the set of closed $k$-forms and $\\operatorname{im} d^{k-1}$ is the set of exact $k$-forms, so $H^k_{DR}(M)$ measures how close the statement \"every closed $k$-form is exact\" is to being true.</p>\n\n<p>Your first question seems to be: </p>\n\n<blockquote>\n <p>Do all cohomology groups arise from taking the cohomology of a cochain complex? </p>\n</blockquote>\n\n<p>What it means to be a cohomology group (or more precisely, a cohomology theory) can be defined axiomatically using the Eilenberg-Steenrod axioms.</p>\n\n<p>While there are many different ways of defining a collection of cohomology groups $\\dots H^{-2}, H^{-1}, H^0, H^1, H^2, \\dots$, some of which are not <em>defined</em> as the cohomology of a cochain complex, there does exist a cochain complex $(A^{\\bullet}, d^{\\bullet})$ whose cohomology groups are isomorphic to the given ones: just take $A^k = H^k$ and $d^k = 0$, then</p>\n\n<p>$$H^k(A^{\\bullet}, d^{\\bullet}) = \\frac{\\ker d^k : A^k \\to A^{k+1}}{\\operatorname{im} d^{k-1} : A^{k-1} \\to A^k} = \\frac{A^k}{0} \\cong A^k = H^k.$$</p>\n" }, { "answer_id": 2110814, "score": 3, "text": "<p>$\\newcommand{\\dd}{\\partial}\\newcommand{\\Brak}[1]{\\left\\langle #1\\right\\rangle}$To give a more elementary answer intended to complement the existing answers, here are some multivariable calculus-type observations. Throughout, $M$&nbsp;denotes a compact, smooth manifold of dimension&nbsp;$n$ (possibly disconnected), and differential forms are smooth. Briefly,</p>\n\n<ul>\n<li>$p$-dimensional de Rham cohomology measures \"how many linearly-independent $p$-dimensional real homology classes $M$&nbsp;contains\". (William S. 
Massey's Algebraic Topology article in the <em>Encyclopedia Britannica</em> gives a clear, geometric introduction to homology.)</li>\n</ul>\n\n<p>If $X$&nbsp;is a <a href=\"https://en.wikipedia.org/wiki/Singular_homology\" rel=\"nofollow noreferrer\">smooth $k$-chain</a> and $\\omega$&nbsp;is a smooth $p$-form on&nbsp;$M$, let\n$$\n\\Brak{X, \\omega} = \\begin{cases}\n \\int_{X} \\omega &amp; k = p, \\\\\n 0 &amp; k \\neq p,\n\\end{cases}\n$$\ndenote the \"integration pairing\". If $X$&nbsp;is a $(p + 1)$-chain, Stokes's theorem gives\n$$\n\\Brak{\\dd X, \\omega} = \\int_{\\dd X} \\omega = \\int_{X} d\\omega = \\Brak{X, d\\omega}.\n$$\nWith respect to the integration pairing, the boundary operator on chains and the exterior derivative operator on forms are formally adjoint.</p>\n\n<ol>\n<li><p>A <em>closed</em> form&nbsp;$\\omega$ (i.e., with $d\\omega = 0$) satisfies\n$$\n\\Brak{\\dd X, \\omega} = \\Brak{X, d\\omega} = 0.\n$$\nIn words, the integral of&nbsp;$\\omega$ over a $p$-chain bounding some $(p + 1)$-chain vanishes. Consequently, if $X_{1}$&nbsp;and $X_{2}$ are homologous $p$-chains (i.e., whose difference is a boundary, say $X_{1} - X_{2} = \\dd Y$), then\n$$\n\\Brak{X_{1}, \\omega} = \\Brak{X_{2}, \\omega}.\n$$</p></li>\n<li><p>An <em>exact</em> $p$-form $\\omega$ (i.e., with $\\omega = d\\eta$ for some $(p - 1)$-form $\\eta$) satisfies\n$$\n\\Brak{X, \\omega} = \\Brak{X, d\\eta} = \\Brak{\\dd X, \\eta}.\n$$\nParticularly, the integral of&nbsp;$\\omega$ over a boundaryless $p$-dimensional chain vanishes. Consequently, if $X_{1}$&nbsp;and $X_{2}$ are $p$-chains having the same boundary, then\n$$\n\\Brak{X_{1}, \\omega} = \\Brak{X_{2}, \\omega}.\n$$</p></li>\n</ol>\n\n<p>Via the integration pairing, a $p$-form determines a real-valued function on $p$-chains (with real coefficients). Item&nbsp;1 says that a <em>closed</em> $p$-form determines a real-valued function on <em>real homology classes</em>. 
Item&nbsp;2 says this function is unchanged upon addition of an exact $p$-form, i.e., is constant on de&nbsp;Rham cohomology classes.</p>\n" }, { "answer_id": 2110147, "score": 12, "text": "<p>Cohomology/homology is always defined this way. A chain complex is a vector space equipped with a linear map $d:V\\to V$ such that $d^2=0$. This ensures that the image is a subspace of the kernel so that the quotient is defined, and this quotient is the homology of the chain complex.</p>\n\n<p>There are many homology theories, some are closely related and some are completely separate. For instance, de Rham cohomology, singular cohomology, simplicial cohomology, cellular cohomology and Morse cohomology are all isomorphic whenever they all make sense. However, there are plenty of other things, like group homology, Hochschild homology, which have nothing much to do with topology and are purely algebraic. And other theories like Khovanov homology, which is an invariant of knots.</p>\n\n<p>In the case of de Rham, you can think of it as measuring \"holes\" of certain dimensions. For example, the first de Rham cohomology of the circle is $H^1(S^1)=\\mathbb{R}$ and higher cohomologies are $0$, but for a sphere we have a generator in the second cohomology $H^2(S^2)=\\mathbb{R}$, so it is detecting the higher dimensional \"hole\" in $S^2$. The related family of homology theories (singular, cellular etc) all do the same thing.</p>\n\n<p>In general, there's no single answer to what a homology theory is measuring, since they arise in many settings for completely different purposes. And precisely what Khovanov homology is measuring is pretty much an open question.</p>\n" }, { "answer_id": 2110568, "score": 3, "text": "<p>A <em>cochain complex</em> $(A^{\\bullet}, d^{\\bullet})$ is a sequence of abelian groups $\\dots, A^{-2}, A^{-1}, A^0, A^1, A^2, \\dots$ and group homomorphisms $d^k : A^k \\to A^{k+1}$ such that $d^k\\circ d^{k-1} = 0$. 
That is, we have a complex </p>\n\n<p>$$\\dots \\xrightarrow{d^{-3}} A^{-2} \\xrightarrow{d^{-2}} A^{-1} \\xrightarrow{d^{-1}} A^0 \\xrightarrow{d^0} A^1 \\xrightarrow{d^1} A^2 \\xrightarrow{d^2} \\dots$$</p>\n\n<p>As $d^k\\circ d^{k-1} = 0$, $\\operatorname{im} d^{k-1} \\subseteq \\ker d^k$. Furthermore, as the group $A^k$ is abelian, $\\operatorname{im} d^{k-1}$ is a normal subgroup, so we can form the quotient group</p>\n\n<p>$$H^k(A^{\\bullet}, d^{\\bullet}) := \\frac{\\ker d^k : A^k \\to A^{k+1}}{\\operatorname{im} d^{k-1} : A^{k-1} \\to A^k}.$$</p>\n\n<p>This is called the $k^{\\text{th}}$ <em>cohomology group</em> of the cochain complex $(A^{\\bullet}, d^{\\bullet})$. </p>\n\n<p>Note that $H^k(A^{\\bullet}, d^{\\bullet}) = 0$ if and only if the complex is <em>exact</em> at $A^k$ (i.e. $\\operatorname{im} d^{k-1} = \\ker d^k$). So we can view $H^k(A^{\\bullet}, d^{\\bullet})$ as a measure of how close the complex is to being exact at $A^k$ - that is, how close the inclusion $\\operatorname{im} d^{k-1} \\subseteq \\ker d^k$ is to being an equality (the bigger $H^k(A^{\\bullet}, d^{\\bullet})$ is, the more elements of $\\ker d^k$ there are which are not in $\\operatorname{im} d^{k-1}$). From this point of view, it seems that considering $\\ker d^k\\setminus \\operatorname{im} d^{k-1}$ is a much more natural thing to do. One problem with this approach is that while $A^{k-1}$, $A^k$, $\\ker d^k$, and $\\operatorname{im} d^{k-1}$ are all groups, $\\ker d^k\\setminus \\operatorname{im} d^{k-1}$ is not a group whereas $H^k(A^{\\bullet}, d^{\\bullet})$ is. The group structure is very useful, which is why cohomology is defined as it is.</p>\n\n<p>The example you gave, de Rham cohomology, is the cohomology of the cochain complex $(A^{\\bullet}_{DR}(M), d^{\\bullet})$ where $d^k : A^k_{DR}(M) \\to A^{k+1}_{DR}(M)$ is the exterior derivative. 
Note that $\\ker d^k$ is the set of closed $k$-forms and $\\operatorname{im} d^{k-1}$ is set of exact $k$-forms, so $H^k_{DR}(M)$ measures how close the statement \"every closed $k$-form is exact\" is to being true.</p>\n\n<p>Your first question seems to be: </p>\n\n<blockquote>\n <p>Do all cohomology groups arise from taking the cohomology of a cochain complex? </p>\n</blockquote>\n\n<p>What it means to be a cohomology group (or more precisely, a cohomology theory) can be defined axiomatically using the Eilenberg-Steenrod axioms.</p>\n\n<p>While there are many different ways of defining a collection of cohomology groups $\\dots H^{-2}, H^{-1}, H^0, H^1, H^2, \\dots$, some of which are not <em>defined</em> as the cohomology of a cochain complex, there does exist a cochain complex $(A^{\\bullet}, d^{\\bullet})$ whose cohomology groups are isomorphic to the given ones: just take $A^k = H^k$ and $d^k = 0$, then</p>\n\n<p>$$H^k(A^{\\bullet}, d^{\\bullet}) = \\frac{\\ker d^k : A^k \\to A^{k+1}}{\\operatorname{im} d^{k-1} : A^{k-1} \\to A^k} = \\frac{A^k}{0} \\cong A^k = H^k.$$</p>\n" }, { "answer_id": 2110814, "score": 3, "text": "<p>$\\newcommand{\\dd}{\\partial}\\newcommand{\\Brak}[1]{\\left\\langle #1\\right\\rangle}$To give a more elementary answer intended to complement the existing answers, here are some multivariable calculus-type observations. Throughout, $M$&nbsp;denotes a compact, smooth manifold of dimension&nbsp;$n$ (possibly disconnected), and differential forms are smooth. Briefly,</p>\n\n<ul>\n<li>$p$-dimensional de Rham coholomogy measures \"how many linearly-independent $p$-dimensional real homology classes $M$&nbsp;contains\". (William S. 
Massey's Algebraic Topology article in the <em>Encyclopedia Britannica</em> gives a clear, geometric introduction to homology.)</li>\n</ul>\n\n<p>If $X$&nbsp;is a <a href=\"https://en.wikipedia.org/wiki/Singular_homology\" rel=\"nofollow noreferrer\">smooth $k$-chain</a> and $\\omega$&nbsp;is a smooth $p$-form on&nbsp;$M$, let\n$$\n\\Brak{X, \\omega} = \\begin{cases}\n \\int_{X} \\omega &amp; k = p, \\\\\n 0 &amp; k \\neq p,\n\\end{cases}\n$$\ndenote the \"integration pairing\". If $X$&nbsp;is a $(p + 1)$-chain, Stokes's theorem gives\n$$\n\\Brak{\\dd X, \\omega} = \\int_{\\dd X} \\omega = \\int_{X} d\\omega = \\Brak{X, d\\omega}.\n$$\nWith respect to the integration pairing, the boundary operator on chains and the exterior derivative operator on forms are formally adjoint.</p>\n\n<ol>\n<li><p>A <em>closed</em> form&nbsp;$\\omega$ (i.e., with $d\\omega = 0$) satisfies\n$$\n\\Brak{\\dd X, \\omega} = \\Brak{X, d\\omega} = 0.\n$$\nIn words, the integral of&nbsp;$\\omega$ over a $p$-chain bounding some $(p + 1)$-chain vanishes. Consequently, if $X_{1}$&nbsp;and $X_{2}$ are homologous $p$-chains (i.e., whose difference is a boundary, say $X_{1} - X_{2} = \\dd Y$), then\n$$\n\\Brak{X_{1}, \\omega} = \\Brak{X_{2}, \\omega}.\n$$</p></li>\n<li><p>An <em>exact</em> $p$-form $\\omega$ (i.e., with $\\omega = d\\eta$ for some $(p - 1)$-form $\\eta$) satisfies\n$$\n\\Brak{X, \\omega} = \\Brak{X, d\\eta} = \\Brak{\\dd X, \\eta}.\n$$\nParticularly, the integral of&nbsp;$\\omega$ over a boundaryless $p$-dimensional chain vanishes. Consequently, if $X_{1}$&nbsp;and $X_{2}$ are $p$-chains having the same boundary, then\n$$\n\\Brak{X_{1}, \\omega} = \\Brak{X_{2}, \\omega}.\n$$</p></li>\n</ol>\n\n<p>Via the integration pairing, a $p$-form determines a real-valued function on $p$-chains (with real coefficients). Item&nbsp;1 says that a <em>closed</em> $p$-form determines a real-valued function on <em>real homology classes</em>. 
Item&nbsp;2 says this function is unchanged upon addition of an exact $p$-form, i.e., is constant on de&nbsp;Rham cohomology classes.</p>\n" }, { "answer_id": 2110147, "score": 12, "text": "<p>Cohomology/homology is always defined this way. A chain complex is a vector space equipped with a linear map $d:V\\to V$ such that $d^2=0$. This ensures that the image is a subspace of the kernel so that the quotient is defined, and this quotient is the homology of the chain complex.</p>\n\n<p>There are many homology theories, some are closely related and some are completely separate. For instance, de Rham cohomology, singular cohomology, simplicial cohomology, cellular cohomology and Morse cohomology are all isomorphic whenever they all make sense. However, there are plenty of other things, like group homology, Hochschild homology, which have nothing much to do with topology and are purely algebraic. And other theories like Khovanov homology, which is an invariant of knots.</p>\n\n<p>In the case of de Rham, you can think of it as measuring \"holes\" of certain dimensions. For example, the first de Rham cohomology of the circle is $H^1(S^1)=\\mathbb{R}$ and higher cohomologies are $0$, but for a sphere we have a generator in the second cohomology $H^2(S^2)=\\mathbb{R}$, so it is detecting the higher dimensional \"hole\" in $S^2$. The related family of homology theories (singular, cellular etc) all do the same thing.</p>\n\n<p>In general, there's no single answer to what a homology theory is measuring, since they arise in many settings for completely different purposes. And precisely what Khovanov homology is measuring is pretty much an open question.</p>\n" }, { "answer_id": 2110568, "score": 3, "text": "<p>A <em>cochain complex</em> $(A^{\\bullet}, d^{\\bullet})$ is a sequence of abelian groups $\\dots, A^{-2}, A^{-1}, A^0, A^1, A^2, \\dots$ and group homomorphisms $d^k : A^k \\to A^{k+1}$ such that $d^k\\circ d^{k-1} = 0$. 
That is, we have a complex </p>\n\n<p>$$\\dots \\xrightarrow{d^{-3}} A^{-2} \\xrightarrow{d^{-2}} A^{-1} \\xrightarrow{d^{-1}} A^0 \\xrightarrow{d^0} A^1 \\xrightarrow{d^1} A^2 \\xrightarrow{d^2} \\dots$$</p>\n\n<p>As $d^k\\circ d^{k-1} = 0$, $\\operatorname{im} d^{k-1} \\subseteq \\ker d^k$. Furthermore, as the group $A^k$ is abelian, $\\operatorname{im} d^{k-1}$ is a normal subgroup, so we can form the quotient group</p>\n\n<p>$$H^k(A^{\\bullet}, d^{\\bullet}) := \\frac{\\ker d^k : A^k \\to A^{k+1}}{\\operatorname{im} d^{k-1} : A^{k-1} \\to A^k}.$$</p>\n\n<p>This is called the $k^{\\text{th}}$ <em>cohomology group</em> of the cochain complex $(A^{\\bullet}, d^{\\bullet})$. </p>\n\n<p>Note that $H^k(A^{\\bullet}, d^{\\bullet}) = 0$ if and only if the complex is <em>exact</em> at $A^k$ (i.e. $\\operatorname{im} d^{k-1} = \\ker d^k$). So we can view $H^k(A^{\\bullet}, d^{\\bullet})$ as a measure of how close the complex is to being exact at $A^k$ - that is, how close the inclusion $\\operatorname{im} d^{k-1} \\subseteq \\ker d^k$ is to being an equality (the bigger $H^k(A^{\\bullet}, d^{\\bullet})$ is, the more elements of $\\ker d^k$ there are which are not in $\\operatorname{im} d^{k-1}$). From this point of view, it seems that considering $\\ker d^k\\setminus \\operatorname{im} d^{k-1}$ is a much more natural thing to do. One problem with this approach is that while $A^{k-1}$, $A^k$, $\\ker d^k$, and $\\operatorname{im} d^{k-1}$ are all groups, $\\ker d^k\\setminus \\operatorname{im} d^{k-1}$ is not a group whereas $H^k(A^{\\bullet}, d^{\\bullet})$ is. The group structure is very useful, which is why cohomology is defined as it is.</p>\n\n<p>The example you gave, de Rham cohomology, is the cohomology of the cochain complex $(A^{\\bullet}_{DR}(M), d^{\\bullet})$ where $d^k : A^k_{DR}(M) \\to A^{k+1}_{DR}(M)$ is the exterior derivative. 
Note that $\\ker d^k$ is the set of closed $k$-forms and $\\operatorname{im} d^{k-1}$ is set of exact $k$-forms, so $H^k_{DR}(M)$ measures how close the statement \"every closed $k$-form is exact\" is to being true.</p>\n\n<p>Your first question seems to be: </p>\n\n<blockquote>\n <p>Do all cohomology groups arise from taking the cohomology of a cochain complex? </p>\n</blockquote>\n\n<p>What it means to be a cohomology group (or more precisely, a cohomology theory) can be defined axiomatically using the Eilenberg-Steenrod axioms.</p>\n\n<p>While there are many different ways of defining a collection of cohomology groups $\\dots H^{-2}, H^{-1}, H^0, H^1, H^2, \\dots$, some of which are not <em>defined</em> as the cohomology of a cochain complex, there does exist a cochain complex $(A^{\\bullet}, d^{\\bullet})$ whose cohomology groups are isomorphic to the given ones: just take $A^k = H^k$ and $d^k = 0$, then</p>\n\n<p>$$H^k(A^{\\bullet}, d^{\\bullet}) = \\frac{\\ker d^k : A^k \\to A^{k+1}}{\\operatorname{im} d^{k-1} : A^{k-1} \\to A^k} = \\frac{A^k}{0} \\cong A^k = H^k.$$</p>\n" }, { "answer_id": 2110814, "score": 3, "text": "<p>$\\newcommand{\\dd}{\\partial}\\newcommand{\\Brak}[1]{\\left\\langle #1\\right\\rangle}$To give a more elementary answer intended to complement the existing answers, here are some multivariable calculus-type observations. Throughout, $M$&nbsp;denotes a compact, smooth manifold of dimension&nbsp;$n$ (possibly disconnected), and differential forms are smooth. Briefly,</p>\n\n<ul>\n<li>$p$-dimensional de Rham coholomogy measures \"how many linearly-independent $p$-dimensional real homology classes $M$&nbsp;contains\". (William S. 
Massey's Algebraic Topology article in the <em>Encyclopedia Britannica</em> gives a clear, geometric introduction to homology.)</li>\n</ul>\n\n<p>If $X$&nbsp;is a <a href=\"https://en.wikipedia.org/wiki/Singular_homology\" rel=\"nofollow noreferrer\">smooth $k$-chain</a> and $\\omega$&nbsp;is a smooth $p$-form on&nbsp;$M$, let\n$$\n\\Brak{X, \\omega} = \\begin{cases}\n \\int_{X} \\omega &amp; k = p, \\\\\n 0 &amp; k \\neq p,\n\\end{cases}\n$$\ndenote the \"integration pairing\". If $X$&nbsp;is a $(p + 1)$-chain, Stokes's theorem gives\n$$\n\\Brak{\\dd X, \\omega} = \\int_{\\dd X} \\omega = \\int_{X} d\\omega = \\Brak{X, d\\omega}.\n$$\nWith respect to the integration pairing, the boundary operator on chains and the exterior derivative operator on forms are formally adjoint.</p>\n\n<ol>\n<li><p>A <em>closed</em> form&nbsp;$\\omega$ (i.e., with $d\\omega = 0$) satisfies\n$$\n\\Brak{\\dd X, \\omega} = \\Brak{X, d\\omega} = 0.\n$$\nIn words, the integral of&nbsp;$\\omega$ over a $p$-chain bounding some $(p + 1)$-chain vanishes. Consequently, if $X_{1}$&nbsp;and $X_{2}$ are homologous $p$-chains (i.e., whose difference is a boundary, say $X_{1} - X_{2} = \\dd Y$), then\n$$\n\\Brak{X_{1}, \\omega} = \\Brak{X_{2}, \\omega}.\n$$</p></li>\n<li><p>An <em>exact</em> $p$-form $\\omega$ (i.e., with $\\omega = d\\eta$ for some $(p - 1)$-form $\\eta$) satisfies\n$$\n\\Brak{X, \\omega} = \\Brak{X, d\\eta} = \\Brak{\\dd X, \\eta}.\n$$\nParticularly, the integral of&nbsp;$\\omega$ over a boundaryless $p$-dimensional chain vanishes. Consequently, if $X_{1}$&nbsp;and $X_{2}$ are $p$-chains having the same boundary, then\n$$\n\\Brak{X_{1}, \\omega} = \\Brak{X_{2}, \\omega}.\n$$</p></li>\n</ol>\n\n<p>Via the integration pairing, a $p$-form determines a real-valued function on $p$-chains (with real coefficients). Item&nbsp;1 says that a <em>closed</em> $p$-form determines a real-valued function on <em>real homology classes</em>. 
Item&nbsp;2 says this function is unchanged upon addition of an exact $p$-form, i.e., is constant on de&nbsp;Rham cohomology classes.</p>\n" }, { "answer_id": 2110147, "score": 12, "text": "<p>Cohomology/homology is always defined this way. A chain complex is a vector space equipped with a linear map $d:V\\to V$ such that $d^2=0$. This ensures that the image is a subspace of the kernel so that the quotient is defined, and this quotient is the homology of the chain complex.</p>\n\n<p>There are many homology theories, some are closely related and some are completely separate. For instance, de Rham cohomology, singular cohomology, simplicial cohomology, cellular cohomology and Morse cohomology are all isomorphic whenever they all make sense. However, there are plenty of other things, like group homology, Hochschild homology, which have nothing much to do with topology and are purely algebraic. And other theories like Khovanov homology, which is an invariant of knots.</p>\n\n<p>In the case of de Rham, you can think of it as measuring \"holes\" of certain dimensions. For example, the first de Rham cohomology of the circle is $H^1(S^1)=\\mathbb{R}$ and higher cohomologies are $0$, but for a sphere we have a generator in the second cohomology $H^2(S^2)=\\mathbb{R}$, so it is detecting the higher dimensional \"hole\" in $S^2$. The related family of homology theories (singular, cellular etc) all do the same thing.</p>\n\n<p>In general, there's no single answer to what a homology theory is measuring, since they arise in many settings for completely different purposes. And precisely what Khovanov homology is measuring is pretty much an open question.</p>\n" }, { "answer_id": 2110568, "score": 3, "text": "<p>A <em>cochain complex</em> $(A^{\\bullet}, d^{\\bullet})$ is a sequence of abelian groups $\\dots, A^{-2}, A^{-1}, A^0, A^1, A^2, \\dots$ and group homomorphisms $d^k : A^k \\to A^{k+1}$ such that $d^k\\circ d^{k-1} = 0$. 
That is, we have a complex </p>\n\n<p>$$\\dots \\xrightarrow{d^{-3}} A^{-2} \\xrightarrow{d^{-2}} A^{-1} \\xrightarrow{d^{-1}} A^0 \\xrightarrow{d^0} A^1 \\xrightarrow{d^1} A^2 \\xrightarrow{d^2} \\dots$$</p>\n\n<p>As $d^k\\circ d^{k-1} = 0$, $\\operatorname{im} d^{k-1} \\subseteq \\ker d^k$. Furthermore, as the group $A^k$ is abelian, $\\operatorname{im} d^{k-1}$ is a normal subgroup, so we can form the quotient group</p>\n\n<p>$$H^k(A^{\\bullet}, d^{\\bullet}) := \\frac{\\ker d^k : A^k \\to A^{k+1}}{\\operatorname{im} d^{k-1} : A^{k-1} \\to A^k}.$$</p>\n\n<p>This is called the $k^{\\text{th}}$ <em>cohomology group</em> of the cochain complex $(A^{\\bullet}, d^{\\bullet})$. </p>\n\n<p>Note that $H^k(A^{\\bullet}, d^{\\bullet}) = 0$ if and only if the complex is <em>exact</em> at $A^k$ (i.e. $\\operatorname{im} d^{k-1} = \\ker d^k$). So we can view $H^k(A^{\\bullet}, d^{\\bullet})$ as a measure of how close the complex is to being exact at $A^k$ - that is, how close the inclusion $\\operatorname{im} d^{k-1} \\subseteq \\ker d^k$ is to being an equality (the bigger $H^k(A^{\\bullet}, d^{\\bullet})$ is, the more elements of $\\ker d^k$ there are which are not in $\\operatorname{im} d^{k-1}$). From this point of view, it seems that considering $\\ker d^k\\setminus \\operatorname{im} d^{k-1}$ is a much more natural thing to do. One problem with this approach is that while $A^{k-1}$, $A^k$, $\\ker d^k$, and $\\operatorname{im} d^{k-1}$ are all groups, $\\ker d^k\\setminus \\operatorname{im} d^{k-1}$ is not a group whereas $H^k(A^{\\bullet}, d^{\\bullet})$ is. The group structure is very useful, which is why cohomology is defined as it is.</p>\n\n<p>The example you gave, de Rham cohomology, is the cohomology of the cochain complex $(A^{\\bullet}_{DR}(M), d^{\\bullet})$ where $d^k : A^k_{DR}(M) \\to A^{k+1}_{DR}(M)$ is the exterior derivative. 
Note that $\\ker d^k$ is the set of closed $k$-forms and $\\operatorname{im} d^{k-1}$ is set of exact $k$-forms, so $H^k_{DR}(M)$ measures how close the statement \"every closed $k$-form is exact\" is to being true.</p>\n\n<p>Your first question seems to be: </p>\n\n<blockquote>\n <p>Do all cohomology groups arise from taking the cohomology of a cochain complex? </p>\n</blockquote>\n\n<p>What it means to be a cohomology group (or more precisely, a cohomology theory) can be defined axiomatically using the Eilenberg-Steenrod axioms.</p>\n\n<p>While there are many different ways of defining a collection of cohomology groups $\\dots H^{-2}, H^{-1}, H^0, H^1, H^2, \\dots$, some of which are not <em>defined</em> as the cohomology of a cochain complex, there does exist a cochain complex $(A^{\\bullet}, d^{\\bullet})$ whose cohomology groups are isomorphic to the given ones: just take $A^k = H^k$ and $d^k = 0$, then</p>\n\n<p>$$H^k(A^{\\bullet}, d^{\\bullet}) = \\frac{\\ker d^k : A^k \\to A^{k+1}}{\\operatorname{im} d^{k-1} : A^{k-1} \\to A^k} = \\frac{A^k}{0} \\cong A^k = H^k.$$</p>\n" }, { "answer_id": 2110814, "score": 3, "text": "<p>$\\newcommand{\\dd}{\\partial}\\newcommand{\\Brak}[1]{\\left\\langle #1\\right\\rangle}$To give a more elementary answer intended to complement the existing answers, here are some multivariable calculus-type observations. Throughout, $M$&nbsp;denotes a compact, smooth manifold of dimension&nbsp;$n$ (possibly disconnected), and differential forms are smooth. Briefly,</p>\n\n<ul>\n<li>$p$-dimensional de Rham coholomogy measures \"how many linearly-independent $p$-dimensional real homology classes $M$&nbsp;contains\". (William S. 
Massey's Algebraic Topology article in the <em>Encyclopedia Britannica</em> gives a clear, geometric introduction to homology.)</li>\n</ul>\n\n<p>If $X$&nbsp;is a <a href=\"https://en.wikipedia.org/wiki/Singular_homology\" rel=\"nofollow noreferrer\">smooth $k$-chain</a> and $\\omega$&nbsp;is a smooth $p$-form on&nbsp;$M$, let\n$$\n\\Brak{X, \\omega} = \\begin{cases}\n \\int_{X} \\omega &amp; k = p, \\\\\n 0 &amp; k \\neq p,\n\\end{cases}\n$$\ndenote the \"integration pairing\". If $X$&nbsp;is a $(p + 1)$-chain, Stokes's theorem gives\n$$\n\\Brak{\\dd X, \\omega} = \\int_{\\dd X} \\omega = \\int_{X} d\\omega = \\Brak{X, d\\omega}.\n$$\nWith respect to the integration pairing, the boundary operator on chains and the exterior derivative operator on forms are formally adjoint.</p>\n\n<ol>\n<li><p>A <em>closed</em> form&nbsp;$\\omega$ (i.e., with $d\\omega = 0$) satisfies\n$$\n\\Brak{\\dd X, \\omega} = \\Brak{X, d\\omega} = 0.\n$$\nIn words, the integral of&nbsp;$\\omega$ over a $p$-chain bounding some $(p + 1)$-chain vanishes. Consequently, if $X_{1}$&nbsp;and $X_{2}$ are homologous $p$-chains (i.e., whose difference is a boundary, say $X_{1} - X_{2} = \\dd Y$), then\n$$\n\\Brak{X_{1}, \\omega} = \\Brak{X_{2}, \\omega}.\n$$</p></li>\n<li><p>An <em>exact</em> $p$-form $\\omega$ (i.e., with $\\omega = d\\eta$ for some $(p - 1)$-form $\\eta$) satisfies\n$$\n\\Brak{X, \\omega} = \\Brak{X, d\\eta} = \\Brak{\\dd X, \\eta}.\n$$\nParticularly, the integral of&nbsp;$\\omega$ over a boundaryless $p$-dimensional chain vanishes. Consequently, if $X_{1}$&nbsp;and $X_{2}$ are $p$-chains having the same boundary, then\n$$\n\\Brak{X_{1}, \\omega} = \\Brak{X_{2}, \\omega}.\n$$</p></li>\n</ol>\n\n<p>Via the integration pairing, a $p$-form determines a real-valued function on $p$-chains (with real coefficients). Item&nbsp;1 says that a <em>closed</em> $p$-form determines a real-valued function on <em>real homology classes</em>. 
Item&nbsp;2 says this function is unchanged upon addition of an exact $p$-form, i.e., is constant on de&nbsp;Rham cohomology classes.</p>\n" }, { "answer_id": 2110147, "score": 12, "text": "<p>Cohomology/homology is always defined this way. A chain complex is a vector space equipped with a linear map $d:V\\to V$ such that $d^2=0$. This ensures that the image is a subspace of the kernel so that the quotient is defined, and this quotient is the homology of the chain complex.</p>\n\n<p>There are many homology theories, some are closely related and some are completely separate. For instance, de Rham cohomology, singular cohomology, simplicial cohomology, cellular cohomology and Morse cohomology are all isomorphic whenever they all make sense. However, there are plenty of other things, like group homology, Hochschild homology, which have nothing much to do with topology and are purely algebraic. And other theories like Khovanov homology, which is an invariant of knots.</p>\n\n<p>In the case of de Rham, you can think of it as measuring \"holes\" of certain dimensions. For example, the first de Rham cohomology of the circle is $H^1(S^1)=\\mathbb{R}$ and higher cohomologies are $0$, but for a sphere we have a generator in the second cohomology $H^2(S^2)=\\mathbb{R}$, so it is detecting the higher dimensional \"hole\" in $S^2$. The related family of homology theories (singular, cellular etc) all do the same thing.</p>\n\n<p>In general, there's no single answer to what a homology theory is measuring, since they arise in many settings for completely different purposes. And precisely what Khovanov homology is measuring is pretty much an open question.</p>\n" }, { "answer_id": 2110568, "score": 3, "text": "<p>A <em>cochain complex</em> $(A^{\\bullet}, d^{\\bullet})$ is a sequence of abelian groups $\\dots, A^{-2}, A^{-1}, A^0, A^1, A^2, \\dots$ and group homomorphisms $d^k : A^k \\to A^{k+1}$ such that $d^k\\circ d^{k-1} = 0$. 
That is, we have a complex </p>\n\n<p>$$\\dots \\xrightarrow{d^{-3}} A^{-2} \\xrightarrow{d^{-2}} A^{-1} \\xrightarrow{d^{-1}} A^0 \\xrightarrow{d^0} A^1 \\xrightarrow{d^1} A^2 \\xrightarrow{d^2} \\dots$$</p>\n\n<p>As $d^k\\circ d^{k-1} = 0$, $\\operatorname{im} d^{k-1} \\subseteq \\ker d^k$. Furthermore, as the group $A^k$ is abelian, $\\operatorname{im} d^{k-1}$ is a normal subgroup, so we can form the quotient group</p>\n\n<p>$$H^k(A^{\\bullet}, d^{\\bullet}) := \\frac{\\ker d^k : A^k \\to A^{k+1}}{\\operatorname{im} d^{k-1} : A^{k-1} \\to A^k}.$$</p>\n\n<p>This is called the $k^{\\text{th}}$ <em>cohomology group</em> of the cochain complex $(A^{\\bullet}, d^{\\bullet})$. </p>\n\n<p>Note that $H^k(A^{\\bullet}, d^{\\bullet}) = 0$ if and only if the complex is <em>exact</em> at $A^k$ (i.e. $\\operatorname{im} d^{k-1} = \\ker d^k$). So we can view $H^k(A^{\\bullet}, d^{\\bullet})$ as a measure of how close the complex is to being exact at $A^k$ - that is, how close the inclusion $\\operatorname{im} d^{k-1} \\subseteq \\ker d^k$ is to being an equality (the bigger $H^k(A^{\\bullet}, d^{\\bullet})$ is, the more elements of $\\ker d^k$ there are which are not in $\\operatorname{im} d^{k-1}$). From this point of view, it seems that considering $\\ker d^k\\setminus \\operatorname{im} d^{k-1}$ is a much more natural thing to do. One problem with this approach is that while $A^{k-1}$, $A^k$, $\\ker d^k$, and $\\operatorname{im} d^{k-1}$ are all groups, $\\ker d^k\\setminus \\operatorname{im} d^{k-1}$ is not a group whereas $H^k(A^{\\bullet}, d^{\\bullet})$ is. The group structure is very useful, which is why cohomology is defined as it is.</p>\n\n<p>The example you gave, de Rham cohomology, is the cohomology of the cochain complex $(A^{\\bullet}_{DR}(M), d^{\\bullet})$ where $d^k : A^k_{DR}(M) \\to A^{k+1}_{DR}(M)$ is the exterior derivative. 
Note that $\\ker d^k$ is the set of closed $k$-forms and $\\operatorname{im} d^{k-1}$ is set of exact $k$-forms, so $H^k_{DR}(M)$ measures how close the statement \"every closed $k$-form is exact\" is to being true.</p>\n\n<p>Your first question seems to be: </p>\n\n<blockquote>\n <p>Do all cohomology groups arise from taking the cohomology of a cochain complex? </p>\n</blockquote>\n\n<p>What it means to be a cohomology group (or more precisely, a cohomology theory) can be defined axiomatically using the Eilenberg-Steenrod axioms.</p>\n\n<p>While there are many different ways of defining a collection of cohomology groups $\\dots H^{-2}, H^{-1}, H^0, H^1, H^2, \\dots$, some of which are not <em>defined</em> as the cohomology of a cochain complex, there does exist a cochain complex $(A^{\\bullet}, d^{\\bullet})$ whose cohomology groups are isomorphic to the given ones: just take $A^k = H^k$ and $d^k = 0$, then</p>\n\n<p>$$H^k(A^{\\bullet}, d^{\\bullet}) = \\frac{\\ker d^k : A^k \\to A^{k+1}}{\\operatorname{im} d^{k-1} : A^{k-1} \\to A^k} = \\frac{A^k}{0} \\cong A^k = H^k.$$</p>\n" }, { "answer_id": 2110814, "score": 3, "text": "<p>$\\newcommand{\\dd}{\\partial}\\newcommand{\\Brak}[1]{\\left\\langle #1\\right\\rangle}$To give a more elementary answer intended to complement the existing answers, here are some multivariable calculus-type observations. Throughout, $M$&nbsp;denotes a compact, smooth manifold of dimension&nbsp;$n$ (possibly disconnected), and differential forms are smooth. Briefly,</p>\n\n<ul>\n<li>$p$-dimensional de Rham coholomogy measures \"how many linearly-independent $p$-dimensional real homology classes $M$&nbsp;contains\". (William S. 
Massey's Algebraic Topology article in the <em>Encyclopedia Britannica</em> gives a clear, geometric introduction to homology.)</li>\n</ul>\n\n<p>If $X$&nbsp;is a <a href=\"https://en.wikipedia.org/wiki/Singular_homology\" rel=\"nofollow noreferrer\">smooth $k$-chain</a> and $\\omega$&nbsp;is a smooth $p$-form on&nbsp;$M$, let\n$$\n\\Brak{X, \\omega} = \\begin{cases}\n \\int_{X} \\omega &amp; k = p, \\\\\n 0 &amp; k \\neq p,\n\\end{cases}\n$$\ndenote the \"integration pairing\". If $X$&nbsp;is a $(p + 1)$-chain, Stokes's theorem gives\n$$\n\\Brak{\\dd X, \\omega} = \\int_{\\dd X} \\omega = \\int_{X} d\\omega = \\Brak{X, d\\omega}.\n$$\nWith respect to the integration pairing, the boundary operator on chains and the exterior derivative operator on forms are formally adjoint.</p>\n\n<ol>\n<li><p>A <em>closed</em> form&nbsp;$\\omega$ (i.e., with $d\\omega = 0$) satisfies\n$$\n\\Brak{\\dd X, \\omega} = \\Brak{X, d\\omega} = 0.\n$$\nIn words, the integral of&nbsp;$\\omega$ over a $p$-chain bounding some $(p + 1)$-chain vanishes. Consequently, if $X_{1}$&nbsp;and $X_{2}$ are homologous $p$-chains (i.e., whose difference is a boundary, say $X_{1} - X_{2} = \\dd Y$), then\n$$\n\\Brak{X_{1}, \\omega} = \\Brak{X_{2}, \\omega}.\n$$</p></li>\n<li><p>An <em>exact</em> $p$-form $\\omega$ (i.e., with $\\omega = d\\eta$ for some $(p - 1)$-form $\\eta$) satisfies\n$$\n\\Brak{X, \\omega} = \\Brak{X, d\\eta} = \\Brak{\\dd X, \\eta}.\n$$\nParticularly, the integral of&nbsp;$\\omega$ over a boundaryless $p$-dimensional chain vanishes. Consequently, if $X_{1}$&nbsp;and $X_{2}$ are $p$-chains having the same boundary, then\n$$\n\\Brak{X_{1}, \\omega} = \\Brak{X_{2}, \\omega}.\n$$</p></li>\n</ol>\n\n<p>Via the integration pairing, a $p$-form determines a real-valued function on $p$-chains (with real coefficients). Item&nbsp;1 says that a <em>closed</em> $p$-form determines a real-valued function on <em>real homology classes</em>. 
Item&nbsp;2 says this function is unchanged upon addition of an exact $p$-form, i.e., is constant on de&nbsp;Rham cohomology classes.</p>\n" }, { "answer_id": 2110147, "score": 12, "text": "<p>Cohomology/homology is always defined this way. A chain complex is a vector space equipped with a linear map $d:V\\to V$ such that $d^2=0$. This ensures that the image is a subspace of the kernel so that the quotient is defined, and this quotient is the homology of the chain complex.</p>\n\n<p>There are many homology theories, some are closely related and some are completely separate. For instance, de Rham cohomology, singular cohomology, simplicial cohomology, cellular cohomology and Morse cohomology are all isomorphic whenever they all make sense. However, there are plenty of other things, like group homology, Hochschild homology, which have nothing much to do with topology and are purely algebraic. And other theories like Khovanov homology, which is an invariant of knots.</p>\n\n<p>In the case of de Rham, you can think of it as measuring \"holes\" of certain dimensions. For example, the first de Rham cohomology of the circle is $H^1(S^1)=\\mathbb{R}$ and higher cohomologies are $0$, but for a sphere we have a generator in the second cohomology $H^2(S^2)=\\mathbb{R}$, so it is detecting the higher dimensional \"hole\" in $S^2$. The related family of homology theories (singular, cellular etc) all do the same thing.</p>\n\n<p>In general, there's no single answer to what a homology theory is measuring, since they arise in many settings for completely different purposes. And precisely what Khovanov homology is measuring is pretty much an open question.</p>\n" }, { "answer_id": 2110568, "score": 3, "text": "<p>A <em>cochain complex</em> $(A^{\\bullet}, d^{\\bullet})$ is a sequence of abelian groups $\\dots, A^{-2}, A^{-1}, A^0, A^1, A^2, \\dots$ and group homomorphisms $d^k : A^k \\to A^{k+1}$ such that $d^k\\circ d^{k-1} = 0$. 
That is, we have a complex </p>\n\n<p>$$\\dots \\xrightarrow{d^{-3}} A^{-2} \\xrightarrow{d^{-2}} A^{-1} \\xrightarrow{d^{-1}} A^0 \\xrightarrow{d^0} A^1 \\xrightarrow{d^1} A^2 \\xrightarrow{d^2} \\dots$$</p>\n\n<p>As $d^k\\circ d^{k-1} = 0$, $\\operatorname{im} d^{k-1} \\subseteq \\ker d^k$. Furthermore, as the group $A^k$ is abelian, $\\operatorname{im} d^{k-1}$ is a normal subgroup, so we can form the quotient group</p>\n\n<p>$$H^k(A^{\\bullet}, d^{\\bullet}) := \\frac{\\ker d^k : A^k \\to A^{k+1}}{\\operatorname{im} d^{k-1} : A^{k-1} \\to A^k}.$$</p>\n\n<p>This is called the $k^{\\text{th}}$ <em>cohomology group</em> of the cochain complex $(A^{\\bullet}, d^{\\bullet})$. </p>\n\n<p>Note that $H^k(A^{\\bullet}, d^{\\bullet}) = 0$ if and only if the complex is <em>exact</em> at $A^k$ (i.e. $\\operatorname{im} d^{k-1} = \\ker d^k$). So we can view $H^k(A^{\\bullet}, d^{\\bullet})$ as a measure of how close the complex is to being exact at $A^k$ - that is, how close the inclusion $\\operatorname{im} d^{k-1} \\subseteq \\ker d^k$ is to being an equality (the bigger $H^k(A^{\\bullet}, d^{\\bullet})$ is, the more elements of $\\ker d^k$ there are which are not in $\\operatorname{im} d^{k-1}$). From this point of view, it seems that considering $\\ker d^k\\setminus \\operatorname{im} d^{k-1}$ is a much more natural thing to do. One problem with this approach is that while $A^{k-1}$, $A^k$, $\\ker d^k$, and $\\operatorname{im} d^{k-1}$ are all groups, $\\ker d^k\\setminus \\operatorname{im} d^{k-1}$ is not a group whereas $H^k(A^{\\bullet}, d^{\\bullet})$ is. The group structure is very useful, which is why cohomology is defined as it is.</p>\n\n<p>The example you gave, de Rham cohomology, is the cohomology of the cochain complex $(A^{\\bullet}_{DR}(M), d^{\\bullet})$ where $d^k : A^k_{DR}(M) \\to A^{k+1}_{DR}(M)$ is the exterior derivative. 
Note that $\\ker d^k$ is the set of closed $k$-forms and $\\operatorname{im} d^{k-1}$ is set of exact $k$-forms, so $H^k_{DR}(M)$ measures how close the statement \"every closed $k$-form is exact\" is to being true.</p>\n\n<p>Your first question seems to be: </p>\n\n<blockquote>\n <p>Do all cohomology groups arise from taking the cohomology of a cochain complex? </p>\n</blockquote>\n\n<p>What it means to be a cohomology group (or more precisely, a cohomology theory) can be defined axiomatically using the Eilenberg-Steenrod axioms.</p>\n\n<p>While there are many different ways of defining a collection of cohomology groups $\\dots H^{-2}, H^{-1}, H^0, H^1, H^2, \\dots$, some of which are not <em>defined</em> as the cohomology of a cochain complex, there does exist a cochain complex $(A^{\\bullet}, d^{\\bullet})$ whose cohomology groups are isomorphic to the given ones: just take $A^k = H^k$ and $d^k = 0$, then</p>\n\n<p>$$H^k(A^{\\bullet}, d^{\\bullet}) = \\frac{\\ker d^k : A^k \\to A^{k+1}}{\\operatorname{im} d^{k-1} : A^{k-1} \\to A^k} = \\frac{A^k}{0} \\cong A^k = H^k.$$</p>\n" }, { "answer_id": 2110814, "score": 3, "text": "<p>$\\newcommand{\\dd}{\\partial}\\newcommand{\\Brak}[1]{\\left\\langle #1\\right\\rangle}$To give a more elementary answer intended to complement the existing answers, here are some multivariable calculus-type observations. Throughout, $M$&nbsp;denotes a compact, smooth manifold of dimension&nbsp;$n$ (possibly disconnected), and differential forms are smooth. Briefly,</p>\n\n<ul>\n<li>$p$-dimensional de Rham coholomogy measures \"how many linearly-independent $p$-dimensional real homology classes $M$&nbsp;contains\". (William S. 
Massey's Algebraic Topology article in the <em>Encyclopedia Britannica</em> gives a clear, geometric introduction to homology.)</li>\n</ul>\n\n<p>If $X$&nbsp;is a <a href=\"https://en.wikipedia.org/wiki/Singular_homology\" rel=\"nofollow noreferrer\">smooth $k$-chain</a> and $\\omega$&nbsp;is a smooth $p$-form on&nbsp;$M$, let\n$$\n\\Brak{X, \\omega} = \\begin{cases}\n \\int_{X} \\omega &amp; k = p, \\\\\n 0 &amp; k \\neq p,\n\\end{cases}\n$$\ndenote the \"integration pairing\". If $X$&nbsp;is a $(p + 1)$-chain, Stokes's theorem gives\n$$\n\\Brak{\\dd X, \\omega} = \\int_{\\dd X} \\omega = \\int_{X} d\\omega = \\Brak{X, d\\omega}.\n$$\nWith respect to the integration pairing, the boundary operator on chains and the exterior derivative operator on forms are formally adjoint.</p>\n\n<ol>\n<li><p>A <em>closed</em> form&nbsp;$\\omega$ (i.e., with $d\\omega = 0$) satisfies\n$$\n\\Brak{\\dd X, \\omega} = \\Brak{X, d\\omega} = 0.\n$$\nIn words, the integral of&nbsp;$\\omega$ over a $p$-chain bounding some $(p + 1)$-chain vanishes. Consequently, if $X_{1}$&nbsp;and $X_{2}$ are homologous $p$-chains (i.e., whose difference is a boundary, say $X_{1} - X_{2} = \\dd Y$), then\n$$\n\\Brak{X_{1}, \\omega} = \\Brak{X_{2}, \\omega}.\n$$</p></li>\n<li><p>An <em>exact</em> $p$-form $\\omega$ (i.e., with $\\omega = d\\eta$ for some $(p - 1)$-form $\\eta$) satisfies\n$$\n\\Brak{X, \\omega} = \\Brak{X, d\\eta} = \\Brak{\\dd X, \\eta}.\n$$\nParticularly, the integral of&nbsp;$\\omega$ over a boundaryless $p$-dimensional chain vanishes. Consequently, if $X_{1}$&nbsp;and $X_{2}$ are $p$-chains having the same boundary, then\n$$\n\\Brak{X_{1}, \\omega} = \\Brak{X_{2}, \\omega}.\n$$</p></li>\n</ol>\n\n<p>Via the integration pairing, a $p$-form determines a real-valued function on $p$-chains (with real coefficients). Item&nbsp;1 says that a <em>closed</em> $p$-form determines a real-valued function on <em>real homology classes</em>. 
Item&nbsp;2 says this function is unchanged upon addition of an exact $p$-form, i.e., is constant on de&nbsp;Rham cohomology classes.</p>\n" } ]
Question: Can cohomologies always be defined as the quotient of the kernel by the image of a certain operation? Answer 1: <p>Cohomology/homology is always defined this way. A chain complex is a vector space equipped with a linear map $d:V\to V$ such that $d^2=0$. This ensures that the image is a subspace of the kernel so that the quotient is defined, and this quotient is the homology of the chain complex.</p> <p>There are many homology theories, some are closely related and some are completely separate. For instance, de Rham cohomology, singular cohomology, simplicial cohomology, cellular cohomology and Morse cohomology are all isomorphic whenever they all make sense. However, there are plenty of other things, like group homology, Hochschild homology, which have nothing much to do with topology and are purely algebraic. And other theories like Khovanov homology, which is an invariant of knots.</p> <p>In the case of de Rham, you can think of it as measuring "holes" of certain dimensions. For example, the first de Rham cohomology of the circle is $H^1(S^1)=\mathbb{R}$ and higher cohomologies are $0$, but for a sphere we have a generator in the second cohomology $H^2(S^2)=\mathbb{R}$, so it is detecting the higher dimensional "hole" in $S^2$. The related family of homology theories (singular, cellular etc) all do the same thing.</p> <p>In general, there's no single answer to what a homology theory is measuring, since they arise in many settings for completely different purposes. And precisely what Khovanov homology is measuring is pretty much an open question.</p> Answer 2: <p>A <em>cochain complex</em> $(A^{\bullet}, d^{\bullet})$ is a sequence of abelian groups $\dots, A^{-2}, A^{-1}, A^0, A^1, A^2, \dots$ and group homomorphisms $d^k : A^k \to A^{k+1}$ such that $d^k\circ d^{k-1} = 0$. 
That is, we have a complex </p> <p>$$\dots \xrightarrow{d^{-3}} A^{-2} \xrightarrow{d^{-2}} A^{-1} \xrightarrow{d^{-1}} A^0 \xrightarrow{d^0} A^1 \xrightarrow{d^1} A^2 \xrightarrow{d^2} \dots$$</p> <p>As $d^k\circ d^{k-1} = 0$, $\operatorname{im} d^{k-1} \subseteq \ker d^k$. Furthermore, as the group $A^k$ is abelian, $\operatorname{im} d^{k-1}$ is a normal subgroup, so we can form the quotient group</p> <p>$$H^k(A^{\bullet}, d^{\bullet}) := \frac{\ker d^k : A^k \to A^{k+1}}{\operatorname{im} d^{k-1} : A^{k-1} \to A^k}.$$</p> <p>This is called the $k^{\text{th}}$ <em>cohomology group</em> of the cochain complex $(A^{\bullet}, d^{\bullet})$. </p> <p>Note that $H^k(A^{\bullet}, d^{\bullet}) = 0$ if and only if the complex is <em>exact</em> at $A^k$ (i.e. $\operatorname{im} d^{k-1} = \ker d^k$). So we can view $H^k(A^{\bullet}, d^{\bullet})$ as a measure of how close the complex is to being exact at $A^k$ - that is, how close the inclusion $\operatorname{im} d^{k-1} \subseteq \ker d^k$ is to being an equality (the bigger $H^k(A^{\bullet}, d^{\bullet})$ is, the more elements of $\ker d^k$ there are which are not in $\operatorname{im} d^{k-1}$). From this point of view, it seems that considering $\ker d^k\setminus \operatorname{im} d^{k-1}$ is a much more natural thing to do. One problem with this approach is that while $A^{k-1}$, $A^k$, $\ker d^k$, and $\operatorname{im} d^{k-1}$ are all groups, $\ker d^k\setminus \operatorname{im} d^{k-1}$ is not a group whereas $H^k(A^{\bullet}, d^{\bullet})$ is. The group structure is very useful, which is why cohomology is defined as it is.</p> <p>The example you gave, de Rham cohomology, is the cohomology of the cochain complex $(A^{\bullet}_{DR}(M), d^{\bullet})$ where $d^k : A^k_{DR}(M) \to A^{k+1}_{DR}(M)$ is the exterior derivative. 
Note that $\ker d^k$ is the set of closed $k$-forms and $\operatorname{im} d^{k-1}$ is the set of exact $k$-forms, so $H^k_{DR}(M)$ measures how close the statement "every closed $k$-form is exact" is to being true.</p>

<p>Your first question seems to be: </p>

<blockquote>
 <p>Do all cohomology groups arise from taking the cohomology of a cochain complex? </p>
</blockquote>

<p>What it means to be a cohomology group (or more precisely, a cohomology theory) can be defined axiomatically using the Eilenberg-Steenrod axioms.</p>

<p>While there are many different ways of defining a collection of cohomology groups $\dots H^{-2}, H^{-1}, H^0, H^1, H^2, \dots$, some of which are not <em>defined</em> as the cohomology of a cochain complex, there does exist a cochain complex $(A^{\bullet}, d^{\bullet})$ whose cohomology groups are isomorphic to the given ones: just take $A^k = H^k$ and $d^k = 0$, then</p>

<p>$$H^k(A^{\bullet}, d^{\bullet}) = \frac{\ker d^k : A^k \to A^{k+1}}{\operatorname{im} d^{k-1} : A^{k-1} \to A^k} = \frac{A^k}{0} \cong A^k = H^k.$$</p>

Answer 3: <p>$\newcommand{\dd}{\partial}\newcommand{\Brak}[1]{\left\langle #1\right\rangle}$To give a more elementary answer intended to complement the existing answers, here are some multivariable calculus-type observations. Throughout, $M$&nbsp;denotes a compact, smooth manifold of dimension&nbsp;$n$ (possibly disconnected), and differential forms are smooth. Briefly,</p>

<ul>
<li>$p$-dimensional de Rham cohomology measures "how many linearly-independent $p$-dimensional real homology classes $M$&nbsp;contains". (William S. 
Massey's Algebraic Topology article in the <em>Encyclopedia Britannica</em> gives a clear, geometric introduction to homology.)</li> </ul> <p>If $X$&nbsp;is a <a href="https://en.wikipedia.org/wiki/Singular_homology" rel="nofollow noreferrer">smooth $k$-chain</a> and $\omega$&nbsp;is a smooth $p$-form on&nbsp;$M$, let $$ \Brak{X, \omega} = \begin{cases} \int_{X} \omega &amp; k = p, \\ 0 &amp; k \neq p, \end{cases} $$ denote the "integration pairing". If $X$&nbsp;is a $(p + 1)$-chain, Stokes's theorem gives $$ \Brak{\dd X, \omega} = \int_{\dd X} \omega = \int_{X} d\omega = \Brak{X, d\omega}. $$ With respect to the integration pairing, the boundary operator on chains and the exterior derivative operator on forms are formally adjoint.</p> <ol> <li><p>A <em>closed</em> form&nbsp;$\omega$ (i.e., with $d\omega = 0$) satisfies $$ \Brak{\dd X, \omega} = \Brak{X, d\omega} = 0. $$ In words, the integral of&nbsp;$\omega$ over a $p$-chain bounding some $(p + 1)$-chain vanishes. Consequently, if $X_{1}$&nbsp;and $X_{2}$ are homologous $p$-chains (i.e., whose difference is a boundary, say $X_{1} - X_{2} = \dd Y$), then $$ \Brak{X_{1}, \omega} = \Brak{X_{2}, \omega}. $$</p></li> <li><p>An <em>exact</em> $p$-form $\omega$ (i.e., with $\omega = d\eta$ for some $(p - 1)$-form $\eta$) satisfies $$ \Brak{X, \omega} = \Brak{X, d\eta} = \Brak{\dd X, \eta}. $$ Particularly, the integral of&nbsp;$\omega$ over a boundaryless $p$-dimensional chain vanishes. Consequently, if $X_{1}$&nbsp;and $X_{2}$ are $p$-chains having the same boundary, then $$ \Brak{X_{1}, \omega} = \Brak{X_{2}, \omega}. $$</p></li> </ol> <p>Via the integration pairing, a $p$-form determines a real-valued function on $p$-chains (with real coefficients). Item&nbsp;1 says that a <em>closed</em> $p$-form determines a real-valued function on <em>real homology classes</em>. 
Item&nbsp;2 says this function is unchanged upon addition of an exact $p$-form, i.e., is constant on de&nbsp;Rham cohomology classes.</p>
That is, we have a complex </p> <p>$$\dots \xrightarrow{d^{-3}} A^{-2} \xrightarrow{d^{-2}} A^{-1} \xrightarrow{d^{-1}} A^0 \xrightarrow{d^0} A^1 \xrightarrow{d^1} A^2 \xrightarrow{d^2} \dots$$</p> <p>As $d^k\circ d^{k-1} = 0$, $\operatorname{im} d^{k-1} \subseteq \ker d^k$. Furthermore, as the group $A^k$ is abelian, $\operatorname{im} d^{k-1}$ is a normal subgroup, so we can form the quotient group</p> <p>$$H^k(A^{\bullet}, d^{\bullet}) := \frac{\ker d^k : A^k \to A^{k+1}}{\operatorname{im} d^{k-1} : A^{k-1} \to A^k}.$$</p> <p>This is called the $k^{\text{th}}$ <em>cohomology group</em> of the cochain complex $(A^{\bullet}, d^{\bullet})$. </p> <p>Note that $H^k(A^{\bullet}, d^{\bullet}) = 0$ if and only if the complex is <em>exact</em> at $A^k$ (i.e. $\operatorname{im} d^{k-1} = \ker d^k$). So we can view $H^k(A^{\bullet}, d^{\bullet})$ as a measure of how close the complex is to being exact at $A^k$ - that is, how close the inclusion $\operatorname{im} d^{k-1} \subseteq \ker d^k$ is to being an equality (the bigger $H^k(A^{\bullet}, d^{\bullet})$ is, the more elements of $\ker d^k$ there are which are not in $\operatorname{im} d^{k-1}$). From this point of view, it seems that considering $\ker d^k\setminus \operatorname{im} d^{k-1}$ is a much more natural thing to do. One problem with this approach is that while $A^{k-1}$, $A^k$, $\ker d^k$, and $\operatorname{im} d^{k-1}$ are all groups, $\ker d^k\setminus \operatorname{im} d^{k-1}$ is not a group whereas $H^k(A^{\bullet}, d^{\bullet})$ is. The group structure is very useful, which is why cohomology is defined as it is.</p> <p>The example you gave, de Rham cohomology, is the cohomology of the cochain complex $(A^{\bullet}_{DR}(M), d^{\bullet})$ where $d^k : A^k_{DR}(M) \to A^{k+1}_{DR}(M)$ is the exterior derivative. 
Note that $\ker d^k$ is the set of closed $k$-forms and $\operatorname{im} d^{k-1}$ is set of exact $k$-forms, so $H^k_{DR}(M)$ measures how close the statement "every closed $k$-form is exact" is to being true.</p> <p>Your first question seems to be: </p> <blockquote> <p>Do all cohomology groups arise from taking the cohomology of a cochain complex? </p> </blockquote> <p>What it means to be a cohomology group (or more precisely, a cohomology theory) can be defined axiomatically using the Eilenberg-Steenrod axioms.</p> <p>While there are many different ways of defining a collection of cohomology groups $\dots H^{-2}, H^{-1}, H^0, H^1, H^2, \dots$, some of which are not <em>defined</em> as the cohomology of a cochain complex, there does exist a cochain complex $(A^{\bullet}, d^{\bullet})$ whose cohomology groups are isomorphic to the given ones: just take $A^k = H^k$ and $d^k = 0$, then</p> <p>$$H^k(A^{\bullet}, d^{\bullet}) = \frac{\ker d^k : A^k \to A^{k+1}}{\operatorname{im} d^{k-1} : A^{k-1} \to A^k} = \frac{A^k}{0} \cong A^k = H^k.$$</p> Answer 9: <p>$\newcommand{\dd}{\partial}\newcommand{\Brak}[1]{\left\langle #1\right\rangle}$To give a more elementary answer intended to complement the existing answers, here are some multivariable calculus-type observations. Throughout, $M$&nbsp;denotes a compact, smooth manifold of dimension&nbsp;$n$ (possibly disconnected), and differential forms are smooth. Briefly,</p> <ul> <li>$p$-dimensional de Rham coholomogy measures "how many linearly-independent $p$-dimensional real homology classes $M$&nbsp;contains". (William S. 
Massey's Algebraic Topology article in the <em>Encyclopedia Britannica</em> gives a clear, geometric introduction to homology.)</li> </ul> <p>If $X$&nbsp;is a <a href="https://en.wikipedia.org/wiki/Singular_homology" rel="nofollow noreferrer">smooth $k$-chain</a> and $\omega$&nbsp;is a smooth $p$-form on&nbsp;$M$, let $$ \Brak{X, \omega} = \begin{cases} \int_{X} \omega &amp; k = p, \\ 0 &amp; k \neq p, \end{cases} $$ denote the "integration pairing". If $X$&nbsp;is a $(p + 1)$-chain, Stokes's theorem gives $$ \Brak{\dd X, \omega} = \int_{\dd X} \omega = \int_{X} d\omega = \Brak{X, d\omega}. $$ With respect to the integration pairing, the boundary operator on chains and the exterior derivative operator on forms are formally adjoint.</p> <ol> <li><p>A <em>closed</em> form&nbsp;$\omega$ (i.e., with $d\omega = 0$) satisfies $$ \Brak{\dd X, \omega} = \Brak{X, d\omega} = 0. $$ In words, the integral of&nbsp;$\omega$ over a $p$-chain bounding some $(p + 1)$-chain vanishes. Consequently, if $X_{1}$&nbsp;and $X_{2}$ are homologous $p$-chains (i.e., whose difference is a boundary, say $X_{1} - X_{2} = \dd Y$), then $$ \Brak{X_{1}, \omega} = \Brak{X_{2}, \omega}. $$</p></li> <li><p>An <em>exact</em> $p$-form $\omega$ (i.e., with $\omega = d\eta$ for some $(p - 1)$-form $\eta$) satisfies $$ \Brak{X, \omega} = \Brak{X, d\eta} = \Brak{\dd X, \eta}. $$ Particularly, the integral of&nbsp;$\omega$ over a boundaryless $p$-dimensional chain vanishes. Consequently, if $X_{1}$&nbsp;and $X_{2}$ are $p$-chains having the same boundary, then $$ \Brak{X_{1}, \omega} = \Brak{X_{2}, \omega}. $$</p></li> </ol> <p>Via the integration pairing, a $p$-form determines a real-valued function on $p$-chains (with real coefficients). Item&nbsp;1 says that a <em>closed</em> $p$-form determines a real-valued function on <em>real homology classes</em>. 
Item&nbsp;2 says this function is unchanged upon addition of an exact $p$-form, i.e., is constant on de&nbsp;Rham cohomology classes.</p> Answer 10: <p>Cohomology/homology is always defined this way. A chain complex is a vector space equipped with a linear map $d:V\to V$ such that $d^2=0$. This ensures that the image is a subspace of the kernel so that the quotient is defined, and this quotient is the homology of the chain complex.</p> <p>There are many homology theories, some are closely related and some are completely separate. For instance, de Rham cohomology, singular cohomology, simplicial cohomology, cellular cohomology and Morse cohomology are all isomorphic whenever they all make sense. However, there are plenty of other things, like group homology, Hochschild homology, which have nothing much to do with topology and are purely algebraic. And other theories like Khovanov homology, which is an invariant of knots.</p> <p>In the case of de Rham, you can think of it as measuring "holes" of certain dimensions. For example, the first de Rham cohomology of the circle is $H^1(S^1)=\mathbb{R}$ and higher cohomologies are $0$, but for a sphere we have a generator in the second cohomology $H^2(S^2)=\mathbb{R}$, so it is detecting the higher dimensional "hole" in $S^2$. The related family of homology theories (singular, cellular etc) all do the same thing.</p> <p>In general, there's no single answer to what a homology theory is measuring, since they arise in many settings for completely different purposes. And precisely what Khovanov homology is measuring is pretty much an open question.</p> Answer 11: <p>A <em>cochain complex</em> $(A^{\bullet}, d^{\bullet})$ is a sequence of abelian groups $\dots, A^{-2}, A^{-1}, A^0, A^1, A^2, \dots$ and group homomorphisms $d^k : A^k \to A^{k+1}$ such that $d^k\circ d^{k-1} = 0$. 
That is, we have a complex </p> <p>$$\dots \xrightarrow{d^{-3}} A^{-2} \xrightarrow{d^{-2}} A^{-1} \xrightarrow{d^{-1}} A^0 \xrightarrow{d^0} A^1 \xrightarrow{d^1} A^2 \xrightarrow{d^2} \dots$$</p> <p>As $d^k\circ d^{k-1} = 0$, $\operatorname{im} d^{k-1} \subseteq \ker d^k$. Furthermore, as the group $A^k$ is abelian, $\operatorname{im} d^{k-1}$ is a normal subgroup, so we can form the quotient group</p> <p>$$H^k(A^{\bullet}, d^{\bullet}) := \frac{\ker d^k : A^k \to A^{k+1}}{\operatorname{im} d^{k-1} : A^{k-1} \to A^k}.$$</p> <p>This is called the $k^{\text{th}}$ <em>cohomology group</em> of the cochain complex $(A^{\bullet}, d^{\bullet})$. </p> <p>Note that $H^k(A^{\bullet}, d^{\bullet}) = 0$ if and only if the complex is <em>exact</em> at $A^k$ (i.e. $\operatorname{im} d^{k-1} = \ker d^k$). So we can view $H^k(A^{\bullet}, d^{\bullet})$ as a measure of how close the complex is to being exact at $A^k$ - that is, how close the inclusion $\operatorname{im} d^{k-1} \subseteq \ker d^k$ is to being an equality (the bigger $H^k(A^{\bullet}, d^{\bullet})$ is, the more elements of $\ker d^k$ there are which are not in $\operatorname{im} d^{k-1}$). From this point of view, it seems that considering $\ker d^k\setminus \operatorname{im} d^{k-1}$ is a much more natural thing to do. One problem with this approach is that while $A^{k-1}$, $A^k$, $\ker d^k$, and $\operatorname{im} d^{k-1}$ are all groups, $\ker d^k\setminus \operatorname{im} d^{k-1}$ is not a group whereas $H^k(A^{\bullet}, d^{\bullet})$ is. The group structure is very useful, which is why cohomology is defined as it is.</p> <p>The example you gave, de Rham cohomology, is the cohomology of the cochain complex $(A^{\bullet}_{DR}(M), d^{\bullet})$ where $d^k : A^k_{DR}(M) \to A^{k+1}_{DR}(M)$ is the exterior derivative. 
Note that $\ker d^k$ is the set of closed $k$-forms and $\operatorname{im} d^{k-1}$ is set of exact $k$-forms, so $H^k_{DR}(M)$ measures how close the statement "every closed $k$-form is exact" is to being true.</p> <p>Your first question seems to be: </p> <blockquote> <p>Do all cohomology groups arise from taking the cohomology of a cochain complex? </p> </blockquote> <p>What it means to be a cohomology group (or more precisely, a cohomology theory) can be defined axiomatically using the Eilenberg-Steenrod axioms.</p> <p>While there are many different ways of defining a collection of cohomology groups $\dots H^{-2}, H^{-1}, H^0, H^1, H^2, \dots$, some of which are not <em>defined</em> as the cohomology of a cochain complex, there does exist a cochain complex $(A^{\bullet}, d^{\bullet})$ whose cohomology groups are isomorphic to the given ones: just take $A^k = H^k$ and $d^k = 0$, then</p> <p>$$H^k(A^{\bullet}, d^{\bullet}) = \frac{\ker d^k : A^k \to A^{k+1}}{\operatorname{im} d^{k-1} : A^{k-1} \to A^k} = \frac{A^k}{0} \cong A^k = H^k.$$</p> Answer 12: <p>$\newcommand{\dd}{\partial}\newcommand{\Brak}[1]{\left\langle #1\right\rangle}$To give a more elementary answer intended to complement the existing answers, here are some multivariable calculus-type observations. Throughout, $M$&nbsp;denotes a compact, smooth manifold of dimension&nbsp;$n$ (possibly disconnected), and differential forms are smooth. Briefly,</p> <ul> <li>$p$-dimensional de Rham coholomogy measures "how many linearly-independent $p$-dimensional real homology classes $M$&nbsp;contains". (William S. 
Massey's Algebraic Topology article in the <em>Encyclopedia Britannica</em> gives a clear, geometric introduction to homology.)</li> </ul> <p>If $X$&nbsp;is a <a href="https://en.wikipedia.org/wiki/Singular_homology" rel="nofollow noreferrer">smooth $k$-chain</a> and $\omega$&nbsp;is a smooth $p$-form on&nbsp;$M$, let $$ \Brak{X, \omega} = \begin{cases} \int_{X} \omega &amp; k = p, \\ 0 &amp; k \neq p, \end{cases} $$ denote the "integration pairing". If $X$&nbsp;is a $(p + 1)$-chain, Stokes's theorem gives $$ \Brak{\dd X, \omega} = \int_{\dd X} \omega = \int_{X} d\omega = \Brak{X, d\omega}. $$ With respect to the integration pairing, the boundary operator on chains and the exterior derivative operator on forms are formally adjoint.</p> <ol> <li><p>A <em>closed</em> form&nbsp;$\omega$ (i.e., with $d\omega = 0$) satisfies $$ \Brak{\dd X, \omega} = \Brak{X, d\omega} = 0. $$ In words, the integral of&nbsp;$\omega$ over a $p$-chain bounding some $(p + 1)$-chain vanishes. Consequently, if $X_{1}$&nbsp;and $X_{2}$ are homologous $p$-chains (i.e., whose difference is a boundary, say $X_{1} - X_{2} = \dd Y$), then $$ \Brak{X_{1}, \omega} = \Brak{X_{2}, \omega}. $$</p></li> <li><p>An <em>exact</em> $p$-form $\omega$ (i.e., with $\omega = d\eta$ for some $(p - 1)$-form $\eta$) satisfies $$ \Brak{X, \omega} = \Brak{X, d\eta} = \Brak{\dd X, \eta}. $$ Particularly, the integral of&nbsp;$\omega$ over a boundaryless $p$-dimensional chain vanishes. Consequently, if $X_{1}$&nbsp;and $X_{2}$ are $p$-chains having the same boundary, then $$ \Brak{X_{1}, \omega} = \Brak{X_{2}, \omega}. $$</p></li> </ol> <p>Via the integration pairing, a $p$-form determines a real-valued function on $p$-chains (with real coefficients). Item&nbsp;1 says that a <em>closed</em> $p$-form determines a real-valued function on <em>real homology classes</em>. 
Item&nbsp;2 says this function is unchanged upon addition of an exact $p$-form, i.e., is constant on de&nbsp;Rham cohomology classes.</p> Answer 13: <p>Cohomology/homology is always defined this way. A chain complex is a vector space equipped with a linear map $d:V\to V$ such that $d^2=0$. This ensures that the image is a subspace of the kernel so that the quotient is defined, and this quotient is the homology of the chain complex.</p> <p>There are many homology theories, some are closely related and some are completely separate. For instance, de Rham cohomology, singular cohomology, simplicial cohomology, cellular cohomology and Morse cohomology are all isomorphic whenever they all make sense. However, there are plenty of other things, like group homology, Hochschild homology, which have nothing much to do with topology and are purely algebraic. And other theories like Khovanov homology, which is an invariant of knots.</p> <p>In the case of de Rham, you can think of it as measuring "holes" of certain dimensions. For example, the first de Rham cohomology of the circle is $H^1(S^1)=\mathbb{R}$ and higher cohomologies are $0$, but for a sphere we have a generator in the second cohomology $H^2(S^2)=\mathbb{R}$, so it is detecting the higher dimensional "hole" in $S^2$. The related family of homology theories (singular, cellular etc) all do the same thing.</p> <p>In general, there's no single answer to what a homology theory is measuring, since they arise in many settings for completely different purposes. And precisely what Khovanov homology is measuring is pretty much an open question.</p> Answer 14: <p>A <em>cochain complex</em> $(A^{\bullet}, d^{\bullet})$ is a sequence of abelian groups $\dots, A^{-2}, A^{-1}, A^0, A^1, A^2, \dots$ and group homomorphisms $d^k : A^k \to A^{k+1}$ such that $d^k\circ d^{k-1} = 0$. 
That is, we have a complex </p> <p>$$\dots \xrightarrow{d^{-3}} A^{-2} \xrightarrow{d^{-2}} A^{-1} \xrightarrow{d^{-1}} A^0 \xrightarrow{d^0} A^1 \xrightarrow{d^1} A^2 \xrightarrow{d^2} \dots$$</p> <p>As $d^k\circ d^{k-1} = 0$, $\operatorname{im} d^{k-1} \subseteq \ker d^k$. Furthermore, as the group $A^k$ is abelian, $\operatorname{im} d^{k-1}$ is a normal subgroup, so we can form the quotient group</p> <p>$$H^k(A^{\bullet}, d^{\bullet}) := \frac{\ker d^k : A^k \to A^{k+1}}{\operatorname{im} d^{k-1} : A^{k-1} \to A^k}.$$</p> <p>This is called the $k^{\text{th}}$ <em>cohomology group</em> of the cochain complex $(A^{\bullet}, d^{\bullet})$. </p> <p>Note that $H^k(A^{\bullet}, d^{\bullet}) = 0$ if and only if the complex is <em>exact</em> at $A^k$ (i.e. $\operatorname{im} d^{k-1} = \ker d^k$). So we can view $H^k(A^{\bullet}, d^{\bullet})$ as a measure of how close the complex is to being exact at $A^k$ - that is, how close the inclusion $\operatorname{im} d^{k-1} \subseteq \ker d^k$ is to being an equality (the bigger $H^k(A^{\bullet}, d^{\bullet})$ is, the more elements of $\ker d^k$ there are which are not in $\operatorname{im} d^{k-1}$). From this point of view, it seems that considering $\ker d^k\setminus \operatorname{im} d^{k-1}$ is a much more natural thing to do. One problem with this approach is that while $A^{k-1}$, $A^k$, $\ker d^k$, and $\operatorname{im} d^{k-1}$ are all groups, $\ker d^k\setminus \operatorname{im} d^{k-1}$ is not a group whereas $H^k(A^{\bullet}, d^{\bullet})$ is. The group structure is very useful, which is why cohomology is defined as it is.</p> <p>The example you gave, de Rham cohomology, is the cohomology of the cochain complex $(A^{\bullet}_{DR}(M), d^{\bullet})$ where $d^k : A^k_{DR}(M) \to A^{k+1}_{DR}(M)$ is the exterior derivative. 
Note that $\ker d^k$ is the set of closed $k$-forms and $\operatorname{im} d^{k-1}$ is set of exact $k$-forms, so $H^k_{DR}(M)$ measures how close the statement "every closed $k$-form is exact" is to being true.</p> <p>Your first question seems to be: </p> <blockquote> <p>Do all cohomology groups arise from taking the cohomology of a cochain complex? </p> </blockquote> <p>What it means to be a cohomology group (or more precisely, a cohomology theory) can be defined axiomatically using the Eilenberg-Steenrod axioms.</p> <p>While there are many different ways of defining a collection of cohomology groups $\dots H^{-2}, H^{-1}, H^0, H^1, H^2, \dots$, some of which are not <em>defined</em> as the cohomology of a cochain complex, there does exist a cochain complex $(A^{\bullet}, d^{\bullet})$ whose cohomology groups are isomorphic to the given ones: just take $A^k = H^k$ and $d^k = 0$, then</p> <p>$$H^k(A^{\bullet}, d^{\bullet}) = \frac{\ker d^k : A^k \to A^{k+1}}{\operatorname{im} d^{k-1} : A^{k-1} \to A^k} = \frac{A^k}{0} \cong A^k = H^k.$$</p> Answer 15: <p>$\newcommand{\dd}{\partial}\newcommand{\Brak}[1]{\left\langle #1\right\rangle}$To give a more elementary answer intended to complement the existing answers, here are some multivariable calculus-type observations. Throughout, $M$&nbsp;denotes a compact, smooth manifold of dimension&nbsp;$n$ (possibly disconnected), and differential forms are smooth. Briefly,</p> <ul> <li>$p$-dimensional de Rham coholomogy measures "how many linearly-independent $p$-dimensional real homology classes $M$&nbsp;contains". (William S. 
Massey's Algebraic Topology article in the <em>Encyclopedia Britannica</em> gives a clear, geometric introduction to homology.)</li> </ul> <p>If $X$&nbsp;is a <a href="https://en.wikipedia.org/wiki/Singular_homology" rel="nofollow noreferrer">smooth $k$-chain</a> and $\omega$&nbsp;is a smooth $p$-form on&nbsp;$M$, let $$ \Brak{X, \omega} = \begin{cases} \int_{X} \omega &amp; k = p, \\ 0 &amp; k \neq p, \end{cases} $$ denote the "integration pairing". If $X$&nbsp;is a $(p + 1)$-chain, Stokes's theorem gives $$ \Brak{\dd X, \omega} = \int_{\dd X} \omega = \int_{X} d\omega = \Brak{X, d\omega}. $$ With respect to the integration pairing, the boundary operator on chains and the exterior derivative operator on forms are formally adjoint.</p> <ol> <li><p>A <em>closed</em> form&nbsp;$\omega$ (i.e., with $d\omega = 0$) satisfies $$ \Brak{\dd X, \omega} = \Brak{X, d\omega} = 0. $$ In words, the integral of&nbsp;$\omega$ over a $p$-chain bounding some $(p + 1)$-chain vanishes. Consequently, if $X_{1}$&nbsp;and $X_{2}$ are homologous $p$-chains (i.e., whose difference is a boundary, say $X_{1} - X_{2} = \dd Y$), then $$ \Brak{X_{1}, \omega} = \Brak{X_{2}, \omega}. $$</p></li> <li><p>An <em>exact</em> $p$-form $\omega$ (i.e., with $\omega = d\eta$ for some $(p - 1)$-form $\eta$) satisfies $$ \Brak{X, \omega} = \Brak{X, d\eta} = \Brak{\dd X, \eta}. $$ Particularly, the integral of&nbsp;$\omega$ over a boundaryless $p$-dimensional chain vanishes. Consequently, if $X_{1}$&nbsp;and $X_{2}$ are $p$-chains having the same boundary, then $$ \Brak{X_{1}, \omega} = \Brak{X_{2}, \omega}. $$</p></li> </ol> <p>Via the integration pairing, a $p$-form determines a real-valued function on $p$-chains (with real coefficients). Item&nbsp;1 says that a <em>closed</em> $p$-form determines a real-valued function on <em>real homology classes</em>. 
Item&nbsp;2 says this function is unchanged upon addition of an exact $p$-form, i.e., is constant on de&nbsp;Rham cohomology classes.</p> Answer 16: <p>Cohomology/homology is always defined this way. A chain complex is a vector space equipped with a linear map $d:V\to V$ such that $d^2=0$. This ensures that the image is a subspace of the kernel so that the quotient is defined, and this quotient is the homology of the chain complex.</p> <p>There are many homology theories, some are closely related and some are completely separate. For instance, de Rham cohomology, singular cohomology, simplicial cohomology, cellular cohomology and Morse cohomology are all isomorphic whenever they all make sense. However, there are plenty of other things, like group homology, Hochschild homology, which have nothing much to do with topology and are purely algebraic. And other theories like Khovanov homology, which is an invariant of knots.</p> <p>In the case of de Rham, you can think of it as measuring "holes" of certain dimensions. For example, the first de Rham cohomology of the circle is $H^1(S^1)=\mathbb{R}$ and higher cohomologies are $0$, but for a sphere we have a generator in the second cohomology $H^2(S^2)=\mathbb{R}$, so it is detecting the higher dimensional "hole" in $S^2$. The related family of homology theories (singular, cellular etc) all do the same thing.</p> <p>In general, there's no single answer to what a homology theory is measuring, since they arise in many settings for completely different purposes. And precisely what Khovanov homology is measuring is pretty much an open question.</p> Answer 17: <p>A <em>cochain complex</em> $(A^{\bullet}, d^{\bullet})$ is a sequence of abelian groups $\dots, A^{-2}, A^{-1}, A^0, A^1, A^2, \dots$ and group homomorphisms $d^k : A^k \to A^{k+1}$ such that $d^k\circ d^{k-1} = 0$. 
That is, we have a complex </p> <p>$$\dots \xrightarrow{d^{-3}} A^{-2} \xrightarrow{d^{-2}} A^{-1} \xrightarrow{d^{-1}} A^0 \xrightarrow{d^0} A^1 \xrightarrow{d^1} A^2 \xrightarrow{d^2} \dots$$</p> <p>As $d^k\circ d^{k-1} = 0$, $\operatorname{im} d^{k-1} \subseteq \ker d^k$. Furthermore, as the group $A^k$ is abelian, $\operatorname{im} d^{k-1}$ is a normal subgroup, so we can form the quotient group</p> <p>$$H^k(A^{\bullet}, d^{\bullet}) := \frac{\ker d^k : A^k \to A^{k+1}}{\operatorname{im} d^{k-1} : A^{k-1} \to A^k}.$$</p> <p>This is called the $k^{\text{th}}$ <em>cohomology group</em> of the cochain complex $(A^{\bullet}, d^{\bullet})$. </p> <p>Note that $H^k(A^{\bullet}, d^{\bullet}) = 0$ if and only if the complex is <em>exact</em> at $A^k$ (i.e. $\operatorname{im} d^{k-1} = \ker d^k$). So we can view $H^k(A^{\bullet}, d^{\bullet})$ as a measure of how close the complex is to being exact at $A^k$ - that is, how close the inclusion $\operatorname{im} d^{k-1} \subseteq \ker d^k$ is to being an equality (the bigger $H^k(A^{\bullet}, d^{\bullet})$ is, the more elements of $\ker d^k$ there are which are not in $\operatorname{im} d^{k-1}$). From this point of view, it seems that considering $\ker d^k\setminus \operatorname{im} d^{k-1}$ is a much more natural thing to do. One problem with this approach is that while $A^{k-1}$, $A^k$, $\ker d^k$, and $\operatorname{im} d^{k-1}$ are all groups, $\ker d^k\setminus \operatorname{im} d^{k-1}$ is not a group whereas $H^k(A^{\bullet}, d^{\bullet})$ is. The group structure is very useful, which is why cohomology is defined as it is.</p> <p>The example you gave, de Rham cohomology, is the cohomology of the cochain complex $(A^{\bullet}_{DR}(M), d^{\bullet})$ where $d^k : A^k_{DR}(M) \to A^{k+1}_{DR}(M)$ is the exterior derivative. 
Note that $\ker d^k$ is the set of closed $k$-forms and $\operatorname{im} d^{k-1}$ is set of exact $k$-forms, so $H^k_{DR}(M)$ measures how close the statement "every closed $k$-form is exact" is to being true.</p> <p>Your first question seems to be: </p> <blockquote> <p>Do all cohomology groups arise from taking the cohomology of a cochain complex? </p> </blockquote> <p>What it means to be a cohomology group (or more precisely, a cohomology theory) can be defined axiomatically using the Eilenberg-Steenrod axioms.</p> <p>While there are many different ways of defining a collection of cohomology groups $\dots H^{-2}, H^{-1}, H^0, H^1, H^2, \dots$, some of which are not <em>defined</em> as the cohomology of a cochain complex, there does exist a cochain complex $(A^{\bullet}, d^{\bullet})$ whose cohomology groups are isomorphic to the given ones: just take $A^k = H^k$ and $d^k = 0$, then</p> <p>$$H^k(A^{\bullet}, d^{\bullet}) = \frac{\ker d^k : A^k \to A^{k+1}}{\operatorname{im} d^{k-1} : A^{k-1} \to A^k} = \frac{A^k}{0} \cong A^k = H^k.$$</p> Answer 18: <p>$\newcommand{\dd}{\partial}\newcommand{\Brak}[1]{\left\langle #1\right\rangle}$To give a more elementary answer intended to complement the existing answers, here are some multivariable calculus-type observations. Throughout, $M$&nbsp;denotes a compact, smooth manifold of dimension&nbsp;$n$ (possibly disconnected), and differential forms are smooth. Briefly,</p> <ul> <li>$p$-dimensional de Rham coholomogy measures "how many linearly-independent $p$-dimensional real homology classes $M$&nbsp;contains". (William S. 
Massey's Algebraic Topology article in the <em>Encyclopedia Britannica</em> gives a clear, geometric introduction to homology.)</li> </ul> <p>If $X$&nbsp;is a <a href="https://en.wikipedia.org/wiki/Singular_homology" rel="nofollow noreferrer">smooth $k$-chain</a> and $\omega$&nbsp;is a smooth $p$-form on&nbsp;$M$, let $$ \Brak{X, \omega} = \begin{cases} \int_{X} \omega &amp; k = p, \\ 0 &amp; k \neq p, \end{cases} $$ denote the "integration pairing". If $X$&nbsp;is a $(p + 1)$-chain, Stokes's theorem gives $$ \Brak{\dd X, \omega} = \int_{\dd X} \omega = \int_{X} d\omega = \Brak{X, d\omega}. $$ With respect to the integration pairing, the boundary operator on chains and the exterior derivative operator on forms are formally adjoint.</p> <ol> <li><p>A <em>closed</em> form&nbsp;$\omega$ (i.e., with $d\omega = 0$) satisfies $$ \Brak{\dd X, \omega} = \Brak{X, d\omega} = 0. $$ In words, the integral of&nbsp;$\omega$ over a $p$-chain bounding some $(p + 1)$-chain vanishes. Consequently, if $X_{1}$&nbsp;and $X_{2}$ are homologous $p$-chains (i.e., whose difference is a boundary, say $X_{1} - X_{2} = \dd Y$), then $$ \Brak{X_{1}, \omega} = \Brak{X_{2}, \omega}. $$</p></li> <li><p>An <em>exact</em> $p$-form $\omega$ (i.e., with $\omega = d\eta$ for some $(p - 1)$-form $\eta$) satisfies $$ \Brak{X, \omega} = \Brak{X, d\eta} = \Brak{\dd X, \eta}. $$ Particularly, the integral of&nbsp;$\omega$ over a boundaryless $p$-dimensional chain vanishes. Consequently, if $X_{1}$&nbsp;and $X_{2}$ are $p$-chains having the same boundary, then $$ \Brak{X_{1}, \omega} = \Brak{X_{2}, \omega}. $$</p></li> </ol> <p>Via the integration pairing, a $p$-form determines a real-valued function on $p$-chains (with real coefficients). Item&nbsp;1 says that a <em>closed</em> $p$-form determines a real-valued function on <em>real homology classes</em>. 
Item&nbsp;2 says this function is unchanged upon addition of an exact $p$-form, i.e., is constant on de&nbsp;Rham cohomology classes.</p> Answer 19: <p>Cohomology/homology is always defined this way. A chain complex is a vector space equipped with a linear map $d:V\to V$ such that $d^2=0$. This ensures that the image is a subspace of the kernel so that the quotient is defined, and this quotient is the homology of the chain complex.</p> <p>There are many homology theories, some are closely related and some are completely separate. For instance, de Rham cohomology, singular cohomology, simplicial cohomology, cellular cohomology and Morse cohomology are all isomorphic whenever they all make sense. However, there are plenty of other things, like group homology, Hochschild homology, which have nothing much to do with topology and are purely algebraic. And other theories like Khovanov homology, which is an invariant of knots.</p> <p>In the case of de Rham, you can think of it as measuring "holes" of certain dimensions. For example, the first de Rham cohomology of the circle is $H^1(S^1)=\mathbb{R}$ and higher cohomologies are $0$, but for a sphere we have a generator in the second cohomology $H^2(S^2)=\mathbb{R}$, so it is detecting the higher dimensional "hole" in $S^2$. The related family of homology theories (singular, cellular etc) all do the same thing.</p> <p>In general, there's no single answer to what a homology theory is measuring, since they arise in many settings for completely different purposes. And precisely what Khovanov homology is measuring is pretty much an open question.</p> Answer 20: <p>A <em>cochain complex</em> $(A^{\bullet}, d^{\bullet})$ is a sequence of abelian groups $\dots, A^{-2}, A^{-1}, A^0, A^1, A^2, \dots$ and group homomorphisms $d^k : A^k \to A^{k+1}$ such that $d^k\circ d^{k-1} = 0$. 
That is, we have a complex </p> <p>$$\dots \xrightarrow{d^{-3}} A^{-2} \xrightarrow{d^{-2}} A^{-1} \xrightarrow{d^{-1}} A^0 \xrightarrow{d^0} A^1 \xrightarrow{d^1} A^2 \xrightarrow{d^2} \dots$$</p> <p>As $d^k\circ d^{k-1} = 0$, $\operatorname{im} d^{k-1} \subseteq \ker d^k$. Furthermore, as the group $A^k$ is abelian, $\operatorname{im} d^{k-1}$ is a normal subgroup, so we can form the quotient group</p> <p>$$H^k(A^{\bullet}, d^{\bullet}) := \frac{\ker d^k : A^k \to A^{k+1}}{\operatorname{im} d^{k-1} : A^{k-1} \to A^k}.$$</p> <p>This is called the $k^{\text{th}}$ <em>cohomology group</em> of the cochain complex $(A^{\bullet}, d^{\bullet})$. </p> <p>Note that $H^k(A^{\bullet}, d^{\bullet}) = 0$ if and only if the complex is <em>exact</em> at $A^k$ (i.e. $\operatorname{im} d^{k-1} = \ker d^k$). So we can view $H^k(A^{\bullet}, d^{\bullet})$ as a measure of how close the complex is to being exact at $A^k$ - that is, how close the inclusion $\operatorname{im} d^{k-1} \subseteq \ker d^k$ is to being an equality (the bigger $H^k(A^{\bullet}, d^{\bullet})$ is, the more elements of $\ker d^k$ there are which are not in $\operatorname{im} d^{k-1}$). From this point of view, it seems that considering $\ker d^k\setminus \operatorname{im} d^{k-1}$ is a much more natural thing to do. One problem with this approach is that while $A^{k-1}$, $A^k$, $\ker d^k$, and $\operatorname{im} d^{k-1}$ are all groups, $\ker d^k\setminus \operatorname{im} d^{k-1}$ is not a group whereas $H^k(A^{\bullet}, d^{\bullet})$ is. The group structure is very useful, which is why cohomology is defined as it is.</p> <p>The example you gave, de Rham cohomology, is the cohomology of the cochain complex $(A^{\bullet}_{DR}(M), d^{\bullet})$ where $d^k : A^k_{DR}(M) \to A^{k+1}_{DR}(M)$ is the exterior derivative. 
Note that $\ker d^k$ is the set of closed $k$-forms and $\operatorname{im} d^{k-1}$ is the set of exact $k$-forms, so $H^k_{DR}(M)$ measures how close the statement "every closed $k$-form is exact" is to being true.</p> <p>Your first question seems to be: </p> <blockquote> <p>Do all cohomology groups arise from taking the cohomology of a cochain complex? </p> </blockquote> <p>What it means to be a cohomology group (or more precisely, a cohomology theory) can be defined axiomatically using the Eilenberg-Steenrod axioms.</p> <p>While there are many different ways of defining a collection of cohomology groups $\dots H^{-2}, H^{-1}, H^0, H^1, H^2, \dots$, some of which are not <em>defined</em> as the cohomology of a cochain complex, there does exist a cochain complex $(A^{\bullet}, d^{\bullet})$ whose cohomology groups are isomorphic to the given ones: just take $A^k = H^k$ and $d^k = 0$, then</p> <p>$$H^k(A^{\bullet}, d^{\bullet}) = \frac{\ker d^k : A^k \to A^{k+1}}{\operatorname{im} d^{k-1} : A^{k-1} \to A^k} = \frac{A^k}{0} \cong A^k = H^k.$$</p>
Note that $\ker d^k$ is the set of closed $k$-forms and $\operatorname{im} d^{k-1}$ is set of exact $k$-forms, so $H^k_{DR}(M)$ measures how close the statement "every closed $k$-form is exact" is to being true.</p> <p>Your first question seems to be: </p> <blockquote> <p>Do all cohomology groups arise from taking the cohomology of a cochain complex? </p> </blockquote> <p>What it means to be a cohomology group (or more precisely, a cohomology theory) can be defined axiomatically using the Eilenberg-Steenrod axioms.</p> <p>While there are many different ways of defining a collection of cohomology groups $\dots H^{-2}, H^{-1}, H^0, H^1, H^2, \dots$, some of which are not <em>defined</em> as the cohomology of a cochain complex, there does exist a cochain complex $(A^{\bullet}, d^{\bullet})$ whose cohomology groups are isomorphic to the given ones: just take $A^k = H^k$ and $d^k = 0$, then</p> <p>$$H^k(A^{\bullet}, d^{\bullet}) = \frac{\ker d^k : A^k \to A^{k+1}}{\operatorname{im} d^{k-1} : A^{k-1} \to A^k} = \frac{A^k}{0} \cong A^k = H^k.$$</p> Answer 36: <p>$\newcommand{\dd}{\partial}\newcommand{\Brak}[1]{\left\langle #1\right\rangle}$To give a more elementary answer intended to complement the existing answers, here are some multivariable calculus-type observations. Throughout, $M$&nbsp;denotes a compact, smooth manifold of dimension&nbsp;$n$ (possibly disconnected), and differential forms are smooth. Briefly,</p> <ul> <li>$p$-dimensional de Rham coholomogy measures "how many linearly-independent $p$-dimensional real homology classes $M$&nbsp;contains". (William S. 
Massey's Algebraic Topology article in the <em>Encyclopedia Britannica</em> gives a clear, geometric introduction to homology.)</li> </ul> <p>If $X$&nbsp;is a <a href="https://en.wikipedia.org/wiki/Singular_homology" rel="nofollow noreferrer">smooth $k$-chain</a> and $\omega$&nbsp;is a smooth $p$-form on&nbsp;$M$, let $$ \Brak{X, \omega} = \begin{cases} \int_{X} \omega &amp; k = p, \\ 0 &amp; k \neq p, \end{cases} $$ denote the "integration pairing". If $X$&nbsp;is a $(p + 1)$-chain, Stokes's theorem gives $$ \Brak{\dd X, \omega} = \int_{\dd X} \omega = \int_{X} d\omega = \Brak{X, d\omega}. $$ With respect to the integration pairing, the boundary operator on chains and the exterior derivative operator on forms are formally adjoint.</p> <ol> <li><p>A <em>closed</em> form&nbsp;$\omega$ (i.e., with $d\omega = 0$) satisfies $$ \Brak{\dd X, \omega} = \Brak{X, d\omega} = 0. $$ In words, the integral of&nbsp;$\omega$ over a $p$-chain bounding some $(p + 1)$-chain vanishes. Consequently, if $X_{1}$&nbsp;and $X_{2}$ are homologous $p$-chains (i.e., whose difference is a boundary, say $X_{1} - X_{2} = \dd Y$), then $$ \Brak{X_{1}, \omega} = \Brak{X_{2}, \omega}. $$</p></li> <li><p>An <em>exact</em> $p$-form $\omega$ (i.e., with $\omega = d\eta$ for some $(p - 1)$-form $\eta$) satisfies $$ \Brak{X, \omega} = \Brak{X, d\eta} = \Brak{\dd X, \eta}. $$ Particularly, the integral of&nbsp;$\omega$ over a boundaryless $p$-dimensional chain vanishes. Consequently, if $X_{1}$&nbsp;and $X_{2}$ are $p$-chains having the same boundary, then $$ \Brak{X_{1}, \omega} = \Brak{X_{2}, \omega}. $$</p></li> </ol> <p>Via the integration pairing, a $p$-form determines a real-valued function on $p$-chains (with real coefficients). Item&nbsp;1 says that a <em>closed</em> $p$-form determines a real-valued function on <em>real homology classes</em>. 
Item&nbsp;2 says this function is unchanged upon addition of an exact $p$-form, i.e., is constant on de&nbsp;Rham cohomology classes.</p> Answer 37: <p>Cohomology/homology is always defined this way. A chain complex is a vector space equipped with a linear map $d:V\to V$ such that $d^2=0$. This ensures that the image is a subspace of the kernel so that the quotient is defined, and this quotient is the homology of the chain complex.</p> <p>There are many homology theories, some are closely related and some are completely separate. For instance, de Rham cohomology, singular cohomology, simplicial cohomology, cellular cohomology and Morse cohomology are all isomorphic whenever they all make sense. However, there are plenty of other things, like group homology, Hochschild homology, which have nothing much to do with topology and are purely algebraic. And other theories like Khovanov homology, which is an invariant of knots.</p> <p>In the case of de Rham, you can think of it as measuring "holes" of certain dimensions. For example, the first de Rham cohomology of the circle is $H^1(S^1)=\mathbb{R}$ and higher cohomologies are $0$, but for a sphere we have a generator in the second cohomology $H^2(S^2)=\mathbb{R}$, so it is detecting the higher dimensional "hole" in $S^2$. The related family of homology theories (singular, cellular etc) all do the same thing.</p> <p>In general, there's no single answer to what a homology theory is measuring, since they arise in many settings for completely different purposes. And precisely what Khovanov homology is measuring is pretty much an open question.</p> Answer 38: <p>A <em>cochain complex</em> $(A^{\bullet}, d^{\bullet})$ is a sequence of abelian groups $\dots, A^{-2}, A^{-1}, A^0, A^1, A^2, \dots$ and group homomorphisms $d^k : A^k \to A^{k+1}$ such that $d^k\circ d^{k-1} = 0$. 
That is, we have a complex </p> <p>$$\dots \xrightarrow{d^{-3}} A^{-2} \xrightarrow{d^{-2}} A^{-1} \xrightarrow{d^{-1}} A^0 \xrightarrow{d^0} A^1 \xrightarrow{d^1} A^2 \xrightarrow{d^2} \dots$$</p> <p>As $d^k\circ d^{k-1} = 0$, $\operatorname{im} d^{k-1} \subseteq \ker d^k$. Furthermore, as the group $A^k$ is abelian, $\operatorname{im} d^{k-1}$ is a normal subgroup, so we can form the quotient group</p> <p>$$H^k(A^{\bullet}, d^{\bullet}) := \frac{\ker d^k : A^k \to A^{k+1}}{\operatorname{im} d^{k-1} : A^{k-1} \to A^k}.$$</p> <p>This is called the $k^{\text{th}}$ <em>cohomology group</em> of the cochain complex $(A^{\bullet}, d^{\bullet})$. </p> <p>Note that $H^k(A^{\bullet}, d^{\bullet}) = 0$ if and only if the complex is <em>exact</em> at $A^k$ (i.e. $\operatorname{im} d^{k-1} = \ker d^k$). So we can view $H^k(A^{\bullet}, d^{\bullet})$ as a measure of how close the complex is to being exact at $A^k$ - that is, how close the inclusion $\operatorname{im} d^{k-1} \subseteq \ker d^k$ is to being an equality (the bigger $H^k(A^{\bullet}, d^{\bullet})$ is, the more elements of $\ker d^k$ there are which are not in $\operatorname{im} d^{k-1}$). From this point of view, it seems that considering $\ker d^k\setminus \operatorname{im} d^{k-1}$ is a much more natural thing to do. One problem with this approach is that while $A^{k-1}$, $A^k$, $\ker d^k$, and $\operatorname{im} d^{k-1}$ are all groups, $\ker d^k\setminus \operatorname{im} d^{k-1}$ is not a group whereas $H^k(A^{\bullet}, d^{\bullet})$ is. The group structure is very useful, which is why cohomology is defined as it is.</p> <p>The example you gave, de Rham cohomology, is the cohomology of the cochain complex $(A^{\bullet}_{DR}(M), d^{\bullet})$ where $d^k : A^k_{DR}(M) \to A^{k+1}_{DR}(M)$ is the exterior derivative. 
Note that $\ker d^k$ is the set of closed $k$-forms and $\operatorname{im} d^{k-1}$ is set of exact $k$-forms, so $H^k_{DR}(M)$ measures how close the statement "every closed $k$-form is exact" is to being true.</p> <p>Your first question seems to be: </p> <blockquote> <p>Do all cohomology groups arise from taking the cohomology of a cochain complex? </p> </blockquote> <p>What it means to be a cohomology group (or more precisely, a cohomology theory) can be defined axiomatically using the Eilenberg-Steenrod axioms.</p> <p>While there are many different ways of defining a collection of cohomology groups $\dots H^{-2}, H^{-1}, H^0, H^1, H^2, \dots$, some of which are not <em>defined</em> as the cohomology of a cochain complex, there does exist a cochain complex $(A^{\bullet}, d^{\bullet})$ whose cohomology groups are isomorphic to the given ones: just take $A^k = H^k$ and $d^k = 0$, then</p> <p>$$H^k(A^{\bullet}, d^{\bullet}) = \frac{\ker d^k : A^k \to A^{k+1}}{\operatorname{im} d^{k-1} : A^{k-1} \to A^k} = \frac{A^k}{0} \cong A^k = H^k.$$</p> Answer 39: <p>$\newcommand{\dd}{\partial}\newcommand{\Brak}[1]{\left\langle #1\right\rangle}$To give a more elementary answer intended to complement the existing answers, here are some multivariable calculus-type observations. Throughout, $M$&nbsp;denotes a compact, smooth manifold of dimension&nbsp;$n$ (possibly disconnected), and differential forms are smooth. Briefly,</p> <ul> <li>$p$-dimensional de Rham coholomogy measures "how many linearly-independent $p$-dimensional real homology classes $M$&nbsp;contains". (William S. 
Massey's Algebraic Topology article in the <em>Encyclopedia Britannica</em> gives a clear, geometric introduction to homology.)</li> </ul> <p>If $X$&nbsp;is a <a href="https://en.wikipedia.org/wiki/Singular_homology" rel="nofollow noreferrer">smooth $k$-chain</a> and $\omega$&nbsp;is a smooth $p$-form on&nbsp;$M$, let $$ \Brak{X, \omega} = \begin{cases} \int_{X} \omega &amp; k = p, \\ 0 &amp; k \neq p, \end{cases} $$ denote the "integration pairing". If $X$&nbsp;is a $(p + 1)$-chain, Stokes's theorem gives $$ \Brak{\dd X, \omega} = \int_{\dd X} \omega = \int_{X} d\omega = \Brak{X, d\omega}. $$ With respect to the integration pairing, the boundary operator on chains and the exterior derivative operator on forms are formally adjoint.</p> <ol> <li><p>A <em>closed</em> form&nbsp;$\omega$ (i.e., with $d\omega = 0$) satisfies $$ \Brak{\dd X, \omega} = \Brak{X, d\omega} = 0. $$ In words, the integral of&nbsp;$\omega$ over a $p$-chain bounding some $(p + 1)$-chain vanishes. Consequently, if $X_{1}$&nbsp;and $X_{2}$ are homologous $p$-chains (i.e., whose difference is a boundary, say $X_{1} - X_{2} = \dd Y$), then $$ \Brak{X_{1}, \omega} = \Brak{X_{2}, \omega}. $$</p></li> <li><p>An <em>exact</em> $p$-form $\omega$ (i.e., with $\omega = d\eta$ for some $(p - 1)$-form $\eta$) satisfies $$ \Brak{X, \omega} = \Brak{X, d\eta} = \Brak{\dd X, \eta}. $$ Particularly, the integral of&nbsp;$\omega$ over a boundaryless $p$-dimensional chain vanishes. Consequently, if $X_{1}$&nbsp;and $X_{2}$ are $p$-chains having the same boundary, then $$ \Brak{X_{1}, \omega} = \Brak{X_{2}, \omega}. $$</p></li> </ol> <p>Via the integration pairing, a $p$-form determines a real-valued function on $p$-chains (with real coefficients). Item&nbsp;1 says that a <em>closed</em> $p$-form determines a real-valued function on <em>real homology classes</em>. 
Item&nbsp;2 says this function is unchanged upon addition of an exact $p$-form, i.e., is constant on de&nbsp;Rham cohomology classes.</p> Answer 40: <p>Cohomology/homology is always defined this way. A chain complex is a vector space equipped with a linear map $d:V\to V$ such that $d^2=0$. This ensures that the image is a subspace of the kernel so that the quotient is defined, and this quotient is the homology of the chain complex.</p> <p>There are many homology theories, some are closely related and some are completely separate. For instance, de Rham cohomology, singular cohomology, simplicial cohomology, cellular cohomology and Morse cohomology are all isomorphic whenever they all make sense. However, there are plenty of other things, like group homology, Hochschild homology, which have nothing much to do with topology and are purely algebraic. And other theories like Khovanov homology, which is an invariant of knots.</p> <p>In the case of de Rham, you can think of it as measuring "holes" of certain dimensions. For example, the first de Rham cohomology of the circle is $H^1(S^1)=\mathbb{R}$ and higher cohomologies are $0$, but for a sphere we have a generator in the second cohomology $H^2(S^2)=\mathbb{R}$, so it is detecting the higher dimensional "hole" in $S^2$. The related family of homology theories (singular, cellular etc) all do the same thing.</p> <p>In general, there's no single answer to what a homology theory is measuring, since they arise in many settings for completely different purposes. And precisely what Khovanov homology is measuring is pretty much an open question.</p> Answer 41: <p>A <em>cochain complex</em> $(A^{\bullet}, d^{\bullet})$ is a sequence of abelian groups $\dots, A^{-2}, A^{-1}, A^0, A^1, A^2, \dots$ and group homomorphisms $d^k : A^k \to A^{k+1}$ such that $d^k\circ d^{k-1} = 0$. 
That is, we have a complex </p> <p>$$\dots \xrightarrow{d^{-3}} A^{-2} \xrightarrow{d^{-2}} A^{-1} \xrightarrow{d^{-1}} A^0 \xrightarrow{d^0} A^1 \xrightarrow{d^1} A^2 \xrightarrow{d^2} \dots$$</p> <p>As $d^k\circ d^{k-1} = 0$, $\operatorname{im} d^{k-1} \subseteq \ker d^k$. Furthermore, as the group $A^k$ is abelian, $\operatorname{im} d^{k-1}$ is a normal subgroup, so we can form the quotient group</p> <p>$$H^k(A^{\bullet}, d^{\bullet}) := \frac{\ker d^k : A^k \to A^{k+1}}{\operatorname{im} d^{k-1} : A^{k-1} \to A^k}.$$</p> <p>This is called the $k^{\text{th}}$ <em>cohomology group</em> of the cochain complex $(A^{\bullet}, d^{\bullet})$. </p> <p>Note that $H^k(A^{\bullet}, d^{\bullet}) = 0$ if and only if the complex is <em>exact</em> at $A^k$ (i.e. $\operatorname{im} d^{k-1} = \ker d^k$). So we can view $H^k(A^{\bullet}, d^{\bullet})$ as a measure of how close the complex is to being exact at $A^k$ - that is, how close the inclusion $\operatorname{im} d^{k-1} \subseteq \ker d^k$ is to being an equality (the bigger $H^k(A^{\bullet}, d^{\bullet})$ is, the more elements of $\ker d^k$ there are which are not in $\operatorname{im} d^{k-1}$). From this point of view, it seems that considering $\ker d^k\setminus \operatorname{im} d^{k-1}$ is a much more natural thing to do. One problem with this approach is that while $A^{k-1}$, $A^k$, $\ker d^k$, and $\operatorname{im} d^{k-1}$ are all groups, $\ker d^k\setminus \operatorname{im} d^{k-1}$ is not a group whereas $H^k(A^{\bullet}, d^{\bullet})$ is. The group structure is very useful, which is why cohomology is defined as it is.</p> <p>The example you gave, de Rham cohomology, is the cohomology of the cochain complex $(A^{\bullet}_{DR}(M), d^{\bullet})$ where $d^k : A^k_{DR}(M) \to A^{k+1}_{DR}(M)$ is the exterior derivative. 
Note that $\ker d^k$ is the set of closed $k$-forms and $\operatorname{im} d^{k-1}$ is set of exact $k$-forms, so $H^k_{DR}(M)$ measures how close the statement "every closed $k$-form is exact" is to being true.</p> <p>Your first question seems to be: </p> <blockquote> <p>Do all cohomology groups arise from taking the cohomology of a cochain complex? </p> </blockquote> <p>What it means to be a cohomology group (or more precisely, a cohomology theory) can be defined axiomatically using the Eilenberg-Steenrod axioms.</p> <p>While there are many different ways of defining a collection of cohomology groups $\dots H^{-2}, H^{-1}, H^0, H^1, H^2, \dots$, some of which are not <em>defined</em> as the cohomology of a cochain complex, there does exist a cochain complex $(A^{\bullet}, d^{\bullet})$ whose cohomology groups are isomorphic to the given ones: just take $A^k = H^k$ and $d^k = 0$, then</p> <p>$$H^k(A^{\bullet}, d^{\bullet}) = \frac{\ker d^k : A^k \to A^{k+1}}{\operatorname{im} d^{k-1} : A^{k-1} \to A^k} = \frac{A^k}{0} \cong A^k = H^k.$$</p> Answer 42: <p>$\newcommand{\dd}{\partial}\newcommand{\Brak}[1]{\left\langle #1\right\rangle}$To give a more elementary answer intended to complement the existing answers, here are some multivariable calculus-type observations. Throughout, $M$&nbsp;denotes a compact, smooth manifold of dimension&nbsp;$n$ (possibly disconnected), and differential forms are smooth. Briefly,</p> <ul> <li>$p$-dimensional de Rham coholomogy measures "how many linearly-independent $p$-dimensional real homology classes $M$&nbsp;contains". (William S. 
Massey's Algebraic Topology article in the <em>Encyclopedia Britannica</em> gives a clear, geometric introduction to homology.)</li> </ul> <p>If $X$&nbsp;is a <a href="https://en.wikipedia.org/wiki/Singular_homology" rel="nofollow noreferrer">smooth $k$-chain</a> and $\omega$&nbsp;is a smooth $p$-form on&nbsp;$M$, let $$ \Brak{X, \omega} = \begin{cases} \int_{X} \omega &amp; k = p, \\ 0 &amp; k \neq p, \end{cases} $$ denote the "integration pairing". If $X$&nbsp;is a $(p + 1)$-chain, Stokes's theorem gives $$ \Brak{\dd X, \omega} = \int_{\dd X} \omega = \int_{X} d\omega = \Brak{X, d\omega}. $$ With respect to the integration pairing, the boundary operator on chains and the exterior derivative operator on forms are formally adjoint.</p> <ol> <li><p>A <em>closed</em> form&nbsp;$\omega$ (i.e., with $d\omega = 0$) satisfies $$ \Brak{\dd X, \omega} = \Brak{X, d\omega} = 0. $$ In words, the integral of&nbsp;$\omega$ over a $p$-chain bounding some $(p + 1)$-chain vanishes. Consequently, if $X_{1}$&nbsp;and $X_{2}$ are homologous $p$-chains (i.e., whose difference is a boundary, say $X_{1} - X_{2} = \dd Y$), then $$ \Brak{X_{1}, \omega} = \Brak{X_{2}, \omega}. $$</p></li> <li><p>An <em>exact</em> $p$-form $\omega$ (i.e., with $\omega = d\eta$ for some $(p - 1)$-form $\eta$) satisfies $$ \Brak{X, \omega} = \Brak{X, d\eta} = \Brak{\dd X, \eta}. $$ Particularly, the integral of&nbsp;$\omega$ over a boundaryless $p$-dimensional chain vanishes. Consequently, if $X_{1}$&nbsp;and $X_{2}$ are $p$-chains having the same boundary, then $$ \Brak{X_{1}, \omega} = \Brak{X_{2}, \omega}. $$</p></li> </ol> <p>Via the integration pairing, a $p$-form determines a real-valued function on $p$-chains (with real coefficients). Item&nbsp;1 says that a <em>closed</em> $p$-form determines a real-valued function on <em>real homology classes</em>. 
Item&nbsp;2 says this function is unchanged upon addition of an exact $p$-form, i.e., is constant on de&nbsp;Rham cohomology classes.</p> Answer 43: <p>Cohomology/homology is always defined this way. A chain complex is a vector space equipped with a linear map $d:V\to V$ such that $d^2=0$. This ensures that the image is a subspace of the kernel so that the quotient is defined, and this quotient is the homology of the chain complex.</p> <p>There are many homology theories, some are closely related and some are completely separate. For instance, de Rham cohomology, singular cohomology, simplicial cohomology, cellular cohomology and Morse cohomology are all isomorphic whenever they all make sense. However, there are plenty of other things, like group homology, Hochschild homology, which have nothing much to do with topology and are purely algebraic. And other theories like Khovanov homology, which is an invariant of knots.</p> <p>In the case of de Rham, you can think of it as measuring "holes" of certain dimensions. For example, the first de Rham cohomology of the circle is $H^1(S^1)=\mathbb{R}$ and higher cohomologies are $0$, but for a sphere we have a generator in the second cohomology $H^2(S^2)=\mathbb{R}$, so it is detecting the higher dimensional "hole" in $S^2$. The related family of homology theories (singular, cellular etc) all do the same thing.</p> <p>In general, there's no single answer to what a homology theory is measuring, since they arise in many settings for completely different purposes. And precisely what Khovanov homology is measuring is pretty much an open question.</p> Answer 44: <p>A <em>cochain complex</em> $(A^{\bullet}, d^{\bullet})$ is a sequence of abelian groups $\dots, A^{-2}, A^{-1}, A^0, A^1, A^2, \dots$ and group homomorphisms $d^k : A^k \to A^{k+1}$ such that $d^k\circ d^{k-1} = 0$. 
That is, we have a complex </p> <p>$$\dots \xrightarrow{d^{-3}} A^{-2} \xrightarrow{d^{-2}} A^{-1} \xrightarrow{d^{-1}} A^0 \xrightarrow{d^0} A^1 \xrightarrow{d^1} A^2 \xrightarrow{d^2} \dots$$</p> <p>As $d^k\circ d^{k-1} = 0$, $\operatorname{im} d^{k-1} \subseteq \ker d^k$. Furthermore, as the group $A^k$ is abelian, $\operatorname{im} d^{k-1}$ is a normal subgroup, so we can form the quotient group</p> <p>$$H^k(A^{\bullet}, d^{\bullet}) := \frac{\ker d^k : A^k \to A^{k+1}}{\operatorname{im} d^{k-1} : A^{k-1} \to A^k}.$$</p> <p>This is called the $k^{\text{th}}$ <em>cohomology group</em> of the cochain complex $(A^{\bullet}, d^{\bullet})$. </p> <p>Note that $H^k(A^{\bullet}, d^{\bullet}) = 0$ if and only if the complex is <em>exact</em> at $A^k$ (i.e. $\operatorname{im} d^{k-1} = \ker d^k$). So we can view $H^k(A^{\bullet}, d^{\bullet})$ as a measure of how close the complex is to being exact at $A^k$ - that is, how close the inclusion $\operatorname{im} d^{k-1} \subseteq \ker d^k$ is to being an equality (the bigger $H^k(A^{\bullet}, d^{\bullet})$ is, the more elements of $\ker d^k$ there are which are not in $\operatorname{im} d^{k-1}$). From this point of view, it seems that considering $\ker d^k\setminus \operatorname{im} d^{k-1}$ is a much more natural thing to do. One problem with this approach is that while $A^{k-1}$, $A^k$, $\ker d^k$, and $\operatorname{im} d^{k-1}$ are all groups, $\ker d^k\setminus \operatorname{im} d^{k-1}$ is not a group whereas $H^k(A^{\bullet}, d^{\bullet})$ is. The group structure is very useful, which is why cohomology is defined as it is.</p> <p>The example you gave, de Rham cohomology, is the cohomology of the cochain complex $(A^{\bullet}_{DR}(M), d^{\bullet})$ where $d^k : A^k_{DR}(M) \to A^{k+1}_{DR}(M)$ is the exterior derivative. 
Note that $\ker d^k$ is the set of closed $k$-forms and $\operatorname{im} d^{k-1}$ is set of exact $k$-forms, so $H^k_{DR}(M)$ measures how close the statement "every closed $k$-form is exact" is to being true.</p> <p>Your first question seems to be: </p> <blockquote> <p>Do all cohomology groups arise from taking the cohomology of a cochain complex? </p> </blockquote> <p>What it means to be a cohomology group (or more precisely, a cohomology theory) can be defined axiomatically using the Eilenberg-Steenrod axioms.</p> <p>While there are many different ways of defining a collection of cohomology groups $\dots H^{-2}, H^{-1}, H^0, H^1, H^2, \dots$, some of which are not <em>defined</em> as the cohomology of a cochain complex, there does exist a cochain complex $(A^{\bullet}, d^{\bullet})$ whose cohomology groups are isomorphic to the given ones: just take $A^k = H^k$ and $d^k = 0$, then</p> <p>$$H^k(A^{\bullet}, d^{\bullet}) = \frac{\ker d^k : A^k \to A^{k+1}}{\operatorname{im} d^{k-1} : A^{k-1} \to A^k} = \frac{A^k}{0} \cong A^k = H^k.$$</p> Answer 45: <p>$\newcommand{\dd}{\partial}\newcommand{\Brak}[1]{\left\langle #1\right\rangle}$To give a more elementary answer intended to complement the existing answers, here are some multivariable calculus-type observations. Throughout, $M$&nbsp;denotes a compact, smooth manifold of dimension&nbsp;$n$ (possibly disconnected), and differential forms are smooth. Briefly,</p> <ul> <li>$p$-dimensional de Rham coholomogy measures "how many linearly-independent $p$-dimensional real homology classes $M$&nbsp;contains". (William S. 
Massey's Algebraic Topology article in the <em>Encyclopedia Britannica</em> gives a clear, geometric introduction to homology.)</li> </ul> <p>If $X$&nbsp;is a <a href="https://en.wikipedia.org/wiki/Singular_homology" rel="nofollow noreferrer">smooth $k$-chain</a> and $\omega$&nbsp;is a smooth $p$-form on&nbsp;$M$, let $$ \Brak{X, \omega} = \begin{cases} \int_{X} \omega &amp; k = p, \\ 0 &amp; k \neq p, \end{cases} $$ denote the "integration pairing". If $X$&nbsp;is a $(p + 1)$-chain, Stokes's theorem gives $$ \Brak{\dd X, \omega} = \int_{\dd X} \omega = \int_{X} d\omega = \Brak{X, d\omega}. $$ With respect to the integration pairing, the boundary operator on chains and the exterior derivative operator on forms are formally adjoint.</p> <ol> <li><p>A <em>closed</em> form&nbsp;$\omega$ (i.e., with $d\omega = 0$) satisfies $$ \Brak{\dd X, \omega} = \Brak{X, d\omega} = 0. $$ In words, the integral of&nbsp;$\omega$ over a $p$-chain bounding some $(p + 1)$-chain vanishes. Consequently, if $X_{1}$&nbsp;and $X_{2}$ are homologous $p$-chains (i.e., whose difference is a boundary, say $X_{1} - X_{2} = \dd Y$), then $$ \Brak{X_{1}, \omega} = \Brak{X_{2}, \omega}. $$</p></li> <li><p>An <em>exact</em> $p$-form $\omega$ (i.e., with $\omega = d\eta$ for some $(p - 1)$-form $\eta$) satisfies $$ \Brak{X, \omega} = \Brak{X, d\eta} = \Brak{\dd X, \eta}. $$ Particularly, the integral of&nbsp;$\omega$ over a boundaryless $p$-dimensional chain vanishes. Consequently, if $X_{1}$&nbsp;and $X_{2}$ are $p$-chains having the same boundary, then $$ \Brak{X_{1}, \omega} = \Brak{X_{2}, \omega}. $$</p></li> </ol> <p>Via the integration pairing, a $p$-form determines a real-valued function on $p$-chains (with real coefficients). Item&nbsp;1 says that a <em>closed</em> $p$-form determines a real-valued function on <em>real homology classes</em>. 
Item&nbsp;2 says this function is unchanged upon addition of an exact $p$-form, i.e., is constant on de&nbsp;Rham cohomology classes.</p> Answer 46: <p>Cohomology/homology is always defined this way. A chain complex is a vector space equipped with a linear map $d:V\to V$ such that $d^2=0$. This ensures that the image is a subspace of the kernel so that the quotient is defined, and this quotient is the homology of the chain complex.</p> <p>There are many homology theories, some are closely related and some are completely separate. For instance, de Rham cohomology, singular cohomology, simplicial cohomology, cellular cohomology and Morse cohomology are all isomorphic whenever they all make sense. However, there are plenty of other things, like group homology, Hochschild homology, which have nothing much to do with topology and are purely algebraic. And other theories like Khovanov homology, which is an invariant of knots.</p> <p>In the case of de Rham, you can think of it as measuring "holes" of certain dimensions. For example, the first de Rham cohomology of the circle is $H^1(S^1)=\mathbb{R}$ and higher cohomologies are $0$, but for a sphere we have a generator in the second cohomology $H^2(S^2)=\mathbb{R}$, so it is detecting the higher dimensional "hole" in $S^2$. The related family of homology theories (singular, cellular etc) all do the same thing.</p> <p>In general, there's no single answer to what a homology theory is measuring, since they arise in many settings for completely different purposes. And precisely what Khovanov homology is measuring is pretty much an open question.</p> Answer 47: <p>A <em>cochain complex</em> $(A^{\bullet}, d^{\bullet})$ is a sequence of abelian groups $\dots, A^{-2}, A^{-1}, A^0, A^1, A^2, \dots$ and group homomorphisms $d^k : A^k \to A^{k+1}$ such that $d^k\circ d^{k-1} = 0$. 
That is, we have a complex </p> <p>$$\dots \xrightarrow{d^{-3}} A^{-2} \xrightarrow{d^{-2}} A^{-1} \xrightarrow{d^{-1}} A^0 \xrightarrow{d^0} A^1 \xrightarrow{d^1} A^2 \xrightarrow{d^2} \dots$$</p> <p>As $d^k\circ d^{k-1} = 0$, $\operatorname{im} d^{k-1} \subseteq \ker d^k$. Furthermore, as the group $A^k$ is abelian, $\operatorname{im} d^{k-1}$ is a normal subgroup, so we can form the quotient group</p> <p>$$H^k(A^{\bullet}, d^{\bullet}) := \frac{\ker d^k : A^k \to A^{k+1}}{\operatorname{im} d^{k-1} : A^{k-1} \to A^k}.$$</p> <p>This is called the $k^{\text{th}}$ <em>cohomology group</em> of the cochain complex $(A^{\bullet}, d^{\bullet})$. </p> <p>Note that $H^k(A^{\bullet}, d^{\bullet}) = 0$ if and only if the complex is <em>exact</em> at $A^k$ (i.e. $\operatorname{im} d^{k-1} = \ker d^k$). So we can view $H^k(A^{\bullet}, d^{\bullet})$ as a measure of how close the complex is to being exact at $A^k$ - that is, how close the inclusion $\operatorname{im} d^{k-1} \subseteq \ker d^k$ is to being an equality (the bigger $H^k(A^{\bullet}, d^{\bullet})$ is, the more elements of $\ker d^k$ there are which are not in $\operatorname{im} d^{k-1}$). From this point of view, it seems that considering $\ker d^k\setminus \operatorname{im} d^{k-1}$ is a much more natural thing to do. One problem with this approach is that while $A^{k-1}$, $A^k$, $\ker d^k$, and $\operatorname{im} d^{k-1}$ are all groups, $\ker d^k\setminus \operatorname{im} d^{k-1}$ is not a group whereas $H^k(A^{\bullet}, d^{\bullet})$ is. The group structure is very useful, which is why cohomology is defined as it is.</p> <p>The example you gave, de Rham cohomology, is the cohomology of the cochain complex $(A^{\bullet}_{DR}(M), d^{\bullet})$ where $d^k : A^k_{DR}(M) \to A^{k+1}_{DR}(M)$ is the exterior derivative. 
Note that $\ker d^k$ is the set of closed $k$-forms and $\operatorname{im} d^{k-1}$ is set of exact $k$-forms, so $H^k_{DR}(M)$ measures how close the statement "every closed $k$-form is exact" is to being true.</p> <p>Your first question seems to be: </p> <blockquote> <p>Do all cohomology groups arise from taking the cohomology of a cochain complex? </p> </blockquote> <p>What it means to be a cohomology group (or more precisely, a cohomology theory) can be defined axiomatically using the Eilenberg-Steenrod axioms.</p> <p>While there are many different ways of defining a collection of cohomology groups $\dots H^{-2}, H^{-1}, H^0, H^1, H^2, \dots$, some of which are not <em>defined</em> as the cohomology of a cochain complex, there does exist a cochain complex $(A^{\bullet}, d^{\bullet})$ whose cohomology groups are isomorphic to the given ones: just take $A^k = H^k$ and $d^k = 0$, then</p> <p>$$H^k(A^{\bullet}, d^{\bullet}) = \frac{\ker d^k : A^k \to A^{k+1}}{\operatorname{im} d^{k-1} : A^{k-1} \to A^k} = \frac{A^k}{0} \cong A^k = H^k.$$</p> Answer 48: <p>$\newcommand{\dd}{\partial}\newcommand{\Brak}[1]{\left\langle #1\right\rangle}$To give a more elementary answer intended to complement the existing answers, here are some multivariable calculus-type observations. Throughout, $M$&nbsp;denotes a compact, smooth manifold of dimension&nbsp;$n$ (possibly disconnected), and differential forms are smooth. Briefly,</p> <ul> <li>$p$-dimensional de Rham coholomogy measures "how many linearly-independent $p$-dimensional real homology classes $M$&nbsp;contains". (William S. 
Massey's Algebraic Topology article in the <em>Encyclopedia Britannica</em> gives a clear, geometric introduction to homology.)</li> </ul> <p>If $X$&nbsp;is a <a href="https://en.wikipedia.org/wiki/Singular_homology" rel="nofollow noreferrer">smooth $k$-chain</a> and $\omega$&nbsp;is a smooth $p$-form on&nbsp;$M$, let $$ \Brak{X, \omega} = \begin{cases} \int_{X} \omega &amp; k = p, \\ 0 &amp; k \neq p, \end{cases} $$ denote the "integration pairing". If $X$&nbsp;is a $(p + 1)$-chain, Stokes's theorem gives $$ \Brak{\dd X, \omega} = \int_{\dd X} \omega = \int_{X} d\omega = \Brak{X, d\omega}. $$ With respect to the integration pairing, the boundary operator on chains and the exterior derivative operator on forms are formally adjoint.</p> <ol> <li><p>A <em>closed</em> form&nbsp;$\omega$ (i.e., with $d\omega = 0$) satisfies $$ \Brak{\dd X, \omega} = \Brak{X, d\omega} = 0. $$ In words, the integral of&nbsp;$\omega$ over a $p$-chain bounding some $(p + 1)$-chain vanishes. Consequently, if $X_{1}$&nbsp;and $X_{2}$ are homologous $p$-chains (i.e., whose difference is a boundary, say $X_{1} - X_{2} = \dd Y$), then $$ \Brak{X_{1}, \omega} = \Brak{X_{2}, \omega}. $$</p></li> <li><p>An <em>exact</em> $p$-form $\omega$ (i.e., with $\omega = d\eta$ for some $(p - 1)$-form $\eta$) satisfies $$ \Brak{X, \omega} = \Brak{X, d\eta} = \Brak{\dd X, \eta}. $$ Particularly, the integral of&nbsp;$\omega$ over a boundaryless $p$-dimensional chain vanishes. Consequently, if $X_{1}$&nbsp;and $X_{2}$ are $p$-chains having the same boundary, then $$ \Brak{X_{1}, \omega} = \Brak{X_{2}, \omega}. $$</p></li> </ol> <p>Via the integration pairing, a $p$-form determines a real-valued function on $p$-chains (with real coefficients). Item&nbsp;1 says that a <em>closed</em> $p$-form determines a real-valued function on <em>real homology classes</em>. Item&nbsp;2 says this function is unchanged upon addition of an exact $p$-form, i.e., is constant on de&nbsp;Rham cohomology classes.</p>
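To make the closed-versus-exact distinction in items 1 and 2 concrete, here is a standard worked example (not taken from the answer above): the angle form on the circle. Despite the suggestive name, $d\theta$ is not the differential of a globally defined function, since $\theta$ is only defined locally (mod $2\pi$).

```latex
% The angle form on S^1: closed but not exact.
\[
  \omega \;=\; d\theta \quad\text{on } S^1,
  \qquad d\omega = 0 \ \text{(every 1-form on a 1-manifold is closed)},
\]
\[
  \Brak{S^1, \omega} \;=\; \int_{S^1} d\theta \;=\; 2\pi \;\neq\; 0.
\]
% If \omega were exact, say \omega = d\eta, then item 2 together with
% \partial S^1 = \emptyset would force \int_{S^1} \omega = 0,
% a contradiction.  Hence [\omega] \neq 0 and H^1_{DR}(S^1) \neq 0.
```

So the single chain $S^1$ already detects a nonzero de Rham class, in line with the slogan that cohomology pairs against homology.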
Is self-adjointness really a property of an operator, or of an operator and an inner product?
2,110,967
8
https://math.stackexchange.com/questions/2110967/is-self-adjointness-really-a-property-of-an-operator-or-of-an-operator-and-an-i
<p>On a Hilbert- (or otherwise inner-product-) space $\mathcal{H}$ with scalar product $S = \langle.|.\rangle_\mathcal{H}$, a self-adjoint operator is readily defined as a linear mapping $A : \mathcal{H} \to \mathcal{H}$ with the property $$ S (A v, w) = S(v,A w) \quad \forall v,w\in \mathcal{H}. $$ Fair enough, but an inner product is a pretty specific structure on a space. Do we need it to define what self-adjoint means?</p> <p>It seems that an operator which is self-adjoint with respect to $S$ is also self-adjoint with respect to another scalar product $T$, but I really don't see how one could go about proving this.</p> <p>If so, would there be a definition of self-adjointness that makes no reference to any particular inner product? A useful candidate would be something like “a self-adjoint operator is one that has a system of eigenvectors $(\psi_i)_i$ which spans the entire space, such that $A(\psi_i) = \lambda_i\cdot \psi_i$ with real $\lambda$”. But is that actually equivalent to the usually given definition? The spectral theorem only goes one way, and itself makes reference to an inner product.</p>
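Whether $S$-self-adjointness transfers to another scalar product can be probed numerically. Writing an inner product on $\mathbb R^n$ as $\langle x,y\rangle_P = y^T P x$ for a positive definite matrix $P$, a one-line computation ($\langle Ax,y\rangle_P = y^TPAx$ versus $\langle x,Ay\rangle_P = y^TA^TPx$) shows that $A$ is self-adjoint for this product exactly when $PA = A^TP$. The sketch below is a hypothetical illustration (the helper name and the sample matrices are chosen for this example, not part of the original post):

```python
import numpy as np

def is_self_adjoint(A, P, tol=1e-12):
    """Check <A x, y>_P == <x, A y>_P for all x, y.

    With <x, y>_P = y^T P x we get <A x, y>_P = y^T P A x and
    <x, A y>_P = y^T A^T P x, so self-adjointness w.r.t. P is
    equivalent to the matrix identity P A == A^T P.
    """
    return np.allclose(P @ A, A.T @ P, atol=tol)

# A non-symmetric matrix, hence NOT self-adjoint for the
# Euclidean product (P = identity) ...
A = np.array([[4.0, 2.0],
              [3.0, 6.0]])
# ... but self-adjoint for the product induced by this
# positive definite P:
P = np.array([[2.0, 1.0],
              [1.0, 2.0]])

print(is_self_adjoint(A, np.eye(2)))  # False: A != A^T
print(is_self_adjoint(A, P))          # True:  P A == A^T P
```

This already answers the empirical half of the question: self-adjointness is a property of the pair (operator, inner product), not of the operator alone.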
[ { "answer_id": 2111119, "score": 7, "text": "<p>Self-adjointness is the operator-theoretic version of symmetry of bilinear forms. The notion of a symmetric bilinear form can be defined without reference to any extra structure but in order to talk about self-adjoint operators, we need some mechanism to convert operators to bilinear forms. An inner product $g$ allows us to do just that. Namely, let $V$ be a finite dimensional real vector space and let $g$ be an inner product. Given a linear map $T$, we can define the bilinear form associated to $T$ to be $\\beta^g_T(v,w) := g(Tv,w)$. This gives us a map (in fact, a linear isomorphism) from $\\operatorname{End}(V) \\cong V^{*} \\otimes V$ to $\\operatorname{Bil}(V \\times V, \\mathbb{R}) \\cong V \\otimes V$.</p>\n\n<p>The operator $T$ is $g$-self-adjoint if the associated bilinear form $\\beta^g_T$ is symmetric. By changing $g$ we get different maps between $\\operatorname{End}(V)$ and $\\operatorname{Bil}(V \\times V, \\mathbb{R})$ so $T$ might be $g$-self-adjoint but not $g'$-self-adjoint.\nThis is best seen by invoking the spectral theorem. </p>\n\n<p>In the real case, we know that $T$ is $g$-self-adjoint if and only if $T$ is $g$-orthogonally diagonalizable. 
If $T$ is self-adjoint, it is orthogonally diagonalizable and in particular diagonalizable. For the other direction, if $T$ is diagonalizable we can always take an arbitrary basis of eigenvectors of $T$ and define an inner product $g$ by declaring this basis to be orthonormal - then by the spectral theorem $T$ will be $g$-self-adjoint.</p>\n" }, { "answer_id": 2111151, "score": 0, "text": "<p>Start with a finite-dimensional space $X$ and a basis $\\{b_1,b_2,\\cdots,b_n\\}$. You can define an inner-product between $x,y \\in X$ by expanding\n$$\n x = \\alpha_1 b_1 + \\alpha_2 b_2 + \\cdots + \\alpha_n b_n \\\\\n y = \\beta_1 b_1 + \\beta_2 b_2 + \\cdots + \\beta_n b_n\n$$\nand setting\n$$\n \\langle x,y\\rangle_{b} = \\sum_{j=1}^{n}\\alpha_j\\beta_j.\n$$\nIn this inner product, $\\{ b_1,b_2,\\cdots, b_n \\}$ is an orthonormal basis of $X$. A matrix $A$ is selfadjoint on the inner product space $(X,\\langle\\cdot,\\cdot\\rangle_b)$ if, for example, the elements of $b$ are eigenvectors of $A$ with real eigenvalues. Not every diagonalizable matrix $D$ with real eigenvalues is selfadjoint with respect to a given inner product. However, using this procedure, you can find an inner product with respect to which $D$ <em>is</em> selfadjoint.</p>\n" }, { "answer_id": 2111164, "score": 0, "text": "<p>That depends on the inner product, of course, because self-adjointness is all about preservation of inner product between two different bilinear transformations.</p>\n\n<p>To illustrate, consider the Euclidean inner product space $\\mathbb R^n$. Let $P$ be a positive definite matrix and define an inner product $\\langle x,y\\rangle_P=y^TPx$. Then a linear operator $L$ on $\\mathbb R^n$ is self-adjoint with respect to $\\langle\\cdot,\\cdot\\rangle_P$ if and only if its matrix $A$ in the standard basis satisfies $PA=A^TP$.</p>\n\n<p>Clearly not every matrix $L$ satisfies such a condition, and whether it is self-adjoint with respect to the Euclidean inner product (i.e. 
whether $A$ is real symmetric) or whether it is positive definite with respect to the Euclidean inner product (i.e. whether $A$ is a positive definite matrix) are completely irrelevant. It could happen that:</p>\n\n<ul>\n<li>$L$ is <strong>not</strong> self-adjoint w.r.t. the Euclidean inner product but it is self-adjoint w.r.t. $\\langle\\cdot,\\cdot\\rangle_P$, e.g. when $A=\\pmatrix{4&amp;2\\\\ 3&amp;6}$ and $P=\\pmatrix{2&amp;1\\\\ 1&amp;2}$;</li>\n<li>$L$ is self-adjoint and even positive definite w.r.t. the Euclidean inner product but it is <strong>not</strong> self-adjoint w.r.t. $\\langle\\cdot,\\cdot\\rangle_P$, e.g. when $A=\\pmatrix{1\\\\ &amp;2}$ and $P=\\pmatrix{2&amp;1\\\\ 1&amp;2}$.</li>\n</ul>\n" }, { "answer_id": 2111119, "score": 7, "text": "<p>Self-adjointness is the operator-theoretic version of symmetry of bilinear forms. The notion of a symmetric bilinear form can be defined without reference to any extra structure but in order to talk about self-adjoint operators, we need some mechanism to convert operators to bilinear forms. An inner product $g$ allows us to do just that. Namely, let $V$ be a finite dimensional real vector space and let $g$ be an inner product. Given a linear map $T$, we can define the bilinear form associated to $T$ to be $\\beta^g_T(v,w) := g(Tv,w)$. This gives us a map (in fact, a linear isomorphism) from $\\operatorname{End}(V) \\cong V^{*} \\otimes V$ to $\\operatorname{Bil}(V \\times V, \\mathbb{R}) \\cong V \\otimes V$.</p>\n\n<p>The operator $T$ is $g$-self-adjoint if the associated bilinear form $\\beta^g_T$ is symmetric. By changing $g$ we get different different maps between $\\operatorname{End}(V)$ and $\\operatorname{Bil}(V \\times V, \\mathbb{R})$ so $T$ might be $g$-self-adjoint but not $g'$-self-adjoint.\nThis is best seen by invoking the spectral theorem. </p>\n\n<p>In the real case, we know that $T$ is $g$-self-adjoint if and only if $T$ is $g$-orthogonally diagonalizable. 
Denote by $\\lambda_1, \\dots, \\lambda_k$ the distinct eigenvalues of $T$. Then $V = \\bigoplus_{i=1}^k \\ker(T - \\lambda_i I)$ is a direct sum orthogonal decomposition of $V$. If we change the inner product $g$ to an inner product $g'$ such that some $\\ker(T - \\lambda_i I)$ won't be $g'$-orthogonal to some $\\ker(T - \\lambda_j I)$ (with $i \\neq j$) then by the spectral theorem, $T$ won't be self-adjoint. This is always possible as long as $k &gt; 1$. In other words, the only operators $T$ which are self-adjoint with respect to all inner products are the scalar operators.</p>\n\n<p>In fact, a real operator $T$ is $g$-self-adjoint with respect to some inner product $g$ if and only if $T$ is diagonalizable. If $T$ is self-adjoint, it is orthogonally diagonalizable and in particular diagonalizable. For the other direction, if $T$ is diagonalizable we can always take an arbitrary basis of eigenvectors of $T$ and define an inner product $g$ by declaring this basis to be orthonormal - then by the spectral theorem $T$ will be $g$-orthogonal.</p>\n" }, { "answer_id": 2111151, "score": 0, "text": "<p>Start with a finite-dimensional space $X$ and a basis $\\{b_1,b_2,\\cdots,b_n\\}$. You can define an inner-product between $x,y \\in X$ by expanding\n$$\n x = \\alpha_1 b_1 + \\alpha_2 b_2 + \\cdots + \\alpha_n b_n \\\\\n y = \\beta_1 b_1 + \\beta_2 b_2 + \\cdots \\beta_n b_n\n$$\nand setting\n$$\n \\langle x,y\\rangle_{b} = \\sum_{j=1}^{n}\\alpha_j\\beta_j.\n$$\nIn this inner product, $\\{ b_1,b_2,\\cdots, b_n \\}$ is an orthonormal basis of $X$. A matrix $A$ is selfadjoint on the inner product space $(X,\\langle\\cdot,\\cdot\\rangle_b)$ if, for example, the elements of $b$ are eigenvectors of $A$ with real eigenvalues. Not every diagonalizable matrix $D$ with real eigenvalues is selfadjoint with respect to a given inner product. 
However, using this procedure, you can find an inner product with respect to which $D$ <em>is</em> selfadjoint.</p>\n" }, { "answer_id": 2111164, "score": 0, "text": "<p>That depends on the inner product, of course, because self-adjointness is all about preservation of inner product between two different bilinear transformations.</p>\n\n<p>To illustrate, consider the Euclidean inner product space $\\mathbb R^n$. Let $P$ be a positive definite matrix and define an inner product $\\langle x,y\\rangle_P=y^TPx$. Then a linear operator $L$ on $\\mathbb R^n$ is self-adjoint with respect to $\\langle\\cdot,\\cdot\\rangle_P$ if and only if its matrix $A$ in the standard basis satisfies $PA=A^TP$.</p>\n\n<p>Clearly not every matrix $L$ satisfies such a condition, and whether it is self-adjoint with respect to the Euclidean inner product (i.e. whether $A$ is real symmetric) or whether it is positive definite with respect to the Euclidean inner product (i.e. whether $A$ is a positive definite matrix) are completely irrelevant. It could happen that:</p>\n\n<ul>\n<li>$L$ is <strong>not</strong> self-adjoint w.r.t. the Euclidean inner product but it is self-adjoint w.r.t. $\\langle\\cdot,\\cdot\\rangle_P$, e.g. when $A=\\pmatrix{4&amp;2\\\\ 3&amp;6}$ and $P=\\pmatrix{2&amp;1\\\\ 1&amp;2}$;</li>\n<li>$L$ is self-adjoint and even positive definite w.r.t. the Euclidean inner product but it is <strong>not</strong> self-adjoint w.r.t. $\\langle\\cdot,\\cdot\\rangle_P$, e.g. when $A=\\pmatrix{1\\\\ &amp;2}$ and $P=\\pmatrix{2&amp;1\\\\ 1&amp;2}$.</li>\n</ul>\n" }, { "answer_id": 2111119, "score": 7, "text": "<p>Self-adjointness is the operator-theoretic version of symmetry of bilinear forms. The notion of a symmetric bilinear form can be defined without reference to any extra structure but in order to talk about self-adjoint operators, we need some mechanism to convert operators to bilinear forms. An inner product $g$ allows us to do just that. 
Namely, let $V$ be a finite dimensional real vector space and let $g$ be an inner product. Given a linear map $T$, we can define the bilinear form associated to $T$ to be $\\beta^g_T(v,w) := g(Tv,w)$. This gives us a map (in fact, a linear isomorphism) from $\\operatorname{End}(V) \\cong V^{*} \\otimes V$ to $\\operatorname{Bil}(V \\times V, \\mathbb{R}) \\cong V \\otimes V$.</p>\n\n<p>The operator $T$ is $g$-self-adjoint if the associated bilinear form $\\beta^g_T$ is symmetric. By changing $g$ we get different different maps between $\\operatorname{End}(V)$ and $\\operatorname{Bil}(V \\times V, \\mathbb{R})$ so $T$ might be $g$-self-adjoint but not $g'$-self-adjoint.\nThis is best seen by invoking the spectral theorem. </p>\n\n<p>In the real case, we know that $T$ is $g$-self-adjoint if and only if $T$ is $g$-orthogonally diagonalizable. Denote by $\\lambda_1, \\dots, \\lambda_k$ the distinct eigenvalues of $T$. Then $V = \\bigoplus_{i=1}^k \\ker(T - \\lambda_i I)$ is a direct sum orthogonal decomposition of $V$. If we change the inner product $g$ to an inner product $g'$ such that some $\\ker(T - \\lambda_i I)$ won't be $g'$-orthogonal to some $\\ker(T - \\lambda_j I)$ (with $i \\neq j$) then by the spectral theorem, $T$ won't be self-adjoint. This is always possible as long as $k &gt; 1$. In other words, the only operators $T$ which are self-adjoint with respect to all inner products are the scalar operators.</p>\n\n<p>In fact, a real operator $T$ is $g$-self-adjoint with respect to some inner product $g$ if and only if $T$ is diagonalizable. If $T$ is self-adjoint, it is orthogonally diagonalizable and in particular diagonalizable. 
For the other direction, if $T$ is diagonalizable we can always take an arbitrary basis of eigenvectors of $T$ and define an inner product $g$ by declaring this basis to be orthonormal - then by the spectral theorem $T$ will be $g$-orthogonal.</p>\n" }, { "answer_id": 2111151, "score": 0, "text": "<p>Start with a finite-dimensional space $X$ and a basis $\\{b_1,b_2,\\cdots,b_n\\}$. You can define an inner-product between $x,y \\in X$ by expanding\n$$\n x = \\alpha_1 b_1 + \\alpha_2 b_2 + \\cdots + \\alpha_n b_n \\\\\n y = \\beta_1 b_1 + \\beta_2 b_2 + \\cdots \\beta_n b_n\n$$\nand setting\n$$\n \\langle x,y\\rangle_{b} = \\sum_{j=1}^{n}\\alpha_j\\beta_j.\n$$\nIn this inner product, $\\{ b_1,b_2,\\cdots, b_n \\}$ is an orthonormal basis of $X$. A matrix $A$ is selfadjoint on the inner product space $(X,\\langle\\cdot,\\cdot\\rangle_b)$ if, for example, the elements of $b$ are eigenvectors of $A$ with real eigenvalues. Not every diagonalizable matrix $D$ with real eigenvalues is selfadjoint with respect to a given inner product. However, using this procedure, you can find an inner product with respect to which $D$ <em>is</em> selfadjoint.</p>\n" }, { "answer_id": 2111164, "score": 0, "text": "<p>That depends on the inner product, of course, because self-adjointness is all about preservation of inner product between two different bilinear transformations.</p>\n\n<p>To illustrate, consider the Euclidean inner product space $\\mathbb R^n$. Let $P$ be a positive definite matrix and define an inner product $\\langle x,y\\rangle_P=y^TPx$. Then a linear operator $L$ on $\\mathbb R^n$ is self-adjoint with respect to $\\langle\\cdot,\\cdot\\rangle_P$ if and only if its matrix $A$ in the standard basis satisfies $PA=A^TP$.</p>\n\n<p>Clearly not every matrix $L$ satisfies such a condition, and whether it is self-adjoint with respect to the Euclidean inner product (i.e. 
whether $A$ is real symmetric) or whether it is positive definite with respect to the Euclidean inner product (i.e. whether $A$ is a positive definite matrix) are completely irrelevant. It could happen that:</p>\n\n<ul>\n<li>$L$ is <strong>not</strong> self-adjoint w.r.t. the Euclidean inner product but it is self-adjoint w.r.t. $\\langle\\cdot,\\cdot\\rangle_P$, e.g. when $A=\\pmatrix{4&amp;2\\\\ 3&amp;6}$ and $P=\\pmatrix{2&amp;1\\\\ 1&amp;2}$;</li>\n<li>$L$ is self-adjoint and even positive definite w.r.t. the Euclidean inner product but it is <strong>not</strong> self-adjoint w.r.t. $\\langle\\cdot,\\cdot\\rangle_P$, e.g. when $A=\\pmatrix{1\\\\ &amp;2}$ and $P=\\pmatrix{2&amp;1\\\\ 1&amp;2}$.</li>\n</ul>\n" }, { "answer_id": 2111119, "score": 7, "text": "<p>Self-adjointness is the operator-theoretic version of symmetry of bilinear forms. The notion of a symmetric bilinear form can be defined without reference to any extra structure but in order to talk about self-adjoint operators, we need some mechanism to convert operators to bilinear forms. An inner product $g$ allows us to do just that. Namely, let $V$ be a finite dimensional real vector space and let $g$ be an inner product. Given a linear map $T$, we can define the bilinear form associated to $T$ to be $\\beta^g_T(v,w) := g(Tv,w)$. This gives us a map (in fact, a linear isomorphism) from $\\operatorname{End}(V) \\cong V^{*} \\otimes V$ to $\\operatorname{Bil}(V \\times V, \\mathbb{R}) \\cong V \\otimes V$.</p>\n\n<p>The operator $T$ is $g$-self-adjoint if the associated bilinear form $\\beta^g_T$ is symmetric. By changing $g$ we get different different maps between $\\operatorname{End}(V)$ and $\\operatorname{Bil}(V \\times V, \\mathbb{R})$ so $T$ might be $g$-self-adjoint but not $g'$-self-adjoint.\nThis is best seen by invoking the spectral theorem. </p>\n\n<p>In the real case, we know that $T$ is $g$-self-adjoint if and only if $T$ is $g$-orthogonally diagonalizable. 
Denote by $\\lambda_1, \\dots, \\lambda_k$ the distinct eigenvalues of $T$. Then $V = \\bigoplus_{i=1}^k \\ker(T - \\lambda_i I)$ is a direct sum orthogonal decomposition of $V$. If we change the inner product $g$ to an inner product $g'$ such that some $\\ker(T - \\lambda_i I)$ won't be $g'$-orthogonal to some $\\ker(T - \\lambda_j I)$ (with $i \\neq j$) then by the spectral theorem, $T$ won't be self-adjoint. This is always possible as long as $k &gt; 1$. In other words, the only operators $T$ which are self-adjoint with respect to all inner products are the scalar operators.</p>\n\n<p>In fact, a real operator $T$ is $g$-self-adjoint with respect to some inner product $g$ if and only if $T$ is diagonalizable. If $T$ is self-adjoint, it is orthogonally diagonalizable and in particular diagonalizable. For the other direction, if $T$ is diagonalizable we can always take an arbitrary basis of eigenvectors of $T$ and define an inner product $g$ by declaring this basis to be orthonormal - then by the spectral theorem $T$ will be $g$-orthogonal.</p>\n" }, { "answer_id": 2111151, "score": 0, "text": "<p>Start with a finite-dimensional space $X$ and a basis $\\{b_1,b_2,\\cdots,b_n\\}$. You can define an inner-product between $x,y \\in X$ by expanding\n$$\n x = \\alpha_1 b_1 + \\alpha_2 b_2 + \\cdots + \\alpha_n b_n \\\\\n y = \\beta_1 b_1 + \\beta_2 b_2 + \\cdots \\beta_n b_n\n$$\nand setting\n$$\n \\langle x,y\\rangle_{b} = \\sum_{j=1}^{n}\\alpha_j\\beta_j.\n$$\nIn this inner product, $\\{ b_1,b_2,\\cdots, b_n \\}$ is an orthonormal basis of $X$. A matrix $A$ is selfadjoint on the inner product space $(X,\\langle\\cdot,\\cdot\\rangle_b)$ if, for example, the elements of $b$ are eigenvectors of $A$ with real eigenvalues. Not every diagonalizable matrix $D$ with real eigenvalues is selfadjoint with respect to a given inner product. 
However, using this procedure, you can find an inner product with respect to which $D$ <em>is</em> selfadjoint.</p>\n" }, { "answer_id": 2111164, "score": 0, "text": "<p>That depends on the inner product, of course, because self-adjointness is all about preservation of inner product between two different bilinear transformations.</p>\n\n<p>To illustrate, consider the Euclidean inner product space $\\mathbb R^n$. Let $P$ be a positive definite matrix and define an inner product $\\langle x,y\\rangle_P=y^TPx$. Then a linear operator $L$ on $\\mathbb R^n$ is self-adjoint with respect to $\\langle\\cdot,\\cdot\\rangle_P$ if and only if its matrix $A$ in the standard basis satisfies $PA=A^TP$.</p>\n\n<p>Clearly not every matrix $L$ satisfies such a condition, and whether it is self-adjoint with respect to the Euclidean inner product (i.e. whether $A$ is real symmetric) or whether it is positive definite with respect to the Euclidean inner product (i.e. whether $A$ is a positive definite matrix) are completely irrelevant. It could happen that:</p>\n\n<ul>\n<li>$L$ is <strong>not</strong> self-adjoint w.r.t. the Euclidean inner product but it is self-adjoint w.r.t. $\\langle\\cdot,\\cdot\\rangle_P$, e.g. when $A=\\pmatrix{4&amp;2\\\\ 3&amp;6}$ and $P=\\pmatrix{2&amp;1\\\\ 1&amp;2}$;</li>\n<li>$L$ is self-adjoint and even positive definite w.r.t. the Euclidean inner product but it is <strong>not</strong> self-adjoint w.r.t. $\\langle\\cdot,\\cdot\\rangle_P$, e.g. when $A=\\pmatrix{1\\\\ &amp;2}$ and $P=\\pmatrix{2&amp;1\\\\ 1&amp;2}$.</li>\n</ul>\n" }, { "answer_id": 2111119, "score": 7, "text": "<p>Self-adjointness is the operator-theoretic version of symmetry of bilinear forms. The notion of a symmetric bilinear form can be defined without reference to any extra structure but in order to talk about self-adjoint operators, we need some mechanism to convert operators to bilinear forms. An inner product $g$ allows us to do just that. 
Namely, let $V$ be a finite dimensional real vector space and let $g$ be an inner product. Given a linear map $T$, we can define the bilinear form associated to $T$ to be $\\beta^g_T(v,w) := g(Tv,w)$. This gives us a map (in fact, a linear isomorphism) from $\\operatorname{End}(V) \\cong V^{*} \\otimes V$ to $\\operatorname{Bil}(V \\times V, \\mathbb{R}) \\cong V \\otimes V$.</p>\n\n<p>The operator $T$ is $g$-self-adjoint if the associated bilinear form $\\beta^g_T$ is symmetric. By changing $g$ we get different different maps between $\\operatorname{End}(V)$ and $\\operatorname{Bil}(V \\times V, \\mathbb{R})$ so $T$ might be $g$-self-adjoint but not $g'$-self-adjoint.\nThis is best seen by invoking the spectral theorem. </p>\n\n<p>In the real case, we know that $T$ is $g$-self-adjoint if and only if $T$ is $g$-orthogonally diagonalizable. Denote by $\\lambda_1, \\dots, \\lambda_k$ the distinct eigenvalues of $T$. Then $V = \\bigoplus_{i=1}^k \\ker(T - \\lambda_i I)$ is a direct sum orthogonal decomposition of $V$. If we change the inner product $g$ to an inner product $g'$ such that some $\\ker(T - \\lambda_i I)$ won't be $g'$-orthogonal to some $\\ker(T - \\lambda_j I)$ (with $i \\neq j$) then by the spectral theorem, $T$ won't be self-adjoint. This is always possible as long as $k &gt; 1$. In other words, the only operators $T$ which are self-adjoint with respect to all inner products are the scalar operators.</p>\n\n<p>In fact, a real operator $T$ is $g$-self-adjoint with respect to some inner product $g$ if and only if $T$ is diagonalizable. If $T$ is self-adjoint, it is orthogonally diagonalizable and in particular diagonalizable. 
For the other direction, if $T$ is diagonalizable we can always take an arbitrary basis of eigenvectors of $T$ and define an inner product $g$ by declaring this basis to be orthonormal - then by the spectral theorem $T$ will be $g$-orthogonal.</p>\n" }, { "answer_id": 2111151, "score": 0, "text": "<p>Start with a finite-dimensional space $X$ and a basis $\\{b_1,b_2,\\cdots,b_n\\}$. You can define an inner-product between $x,y \\in X$ by expanding\n$$\n x = \\alpha_1 b_1 + \\alpha_2 b_2 + \\cdots + \\alpha_n b_n \\\\\n y = \\beta_1 b_1 + \\beta_2 b_2 + \\cdots \\beta_n b_n\n$$\nand setting\n$$\n \\langle x,y\\rangle_{b} = \\sum_{j=1}^{n}\\alpha_j\\beta_j.\n$$\nIn this inner product, $\\{ b_1,b_2,\\cdots, b_n \\}$ is an orthonormal basis of $X$. A matrix $A$ is selfadjoint on the inner product space $(X,\\langle\\cdot,\\cdot\\rangle_b)$ if, for example, the elements of $b$ are eigenvectors of $A$ with real eigenvalues. Not every diagonalizable matrix $D$ with real eigenvalues is selfadjoint with respect to a given inner product. However, using this procedure, you can find an inner product with respect to which $D$ <em>is</em> selfadjoint.</p>\n" }, { "answer_id": 2111164, "score": 0, "text": "<p>That depends on the inner product, of course, because self-adjointness is all about preservation of inner product between two different bilinear transformations.</p>\n\n<p>To illustrate, consider the Euclidean inner product space $\\mathbb R^n$. Let $P$ be a positive definite matrix and define an inner product $\\langle x,y\\rangle_P=y^TPx$. Then a linear operator $L$ on $\\mathbb R^n$ is self-adjoint with respect to $\\langle\\cdot,\\cdot\\rangle_P$ if and only if its matrix $A$ in the standard basis satisfies $PA=A^TP$.</p>\n\n<p>Clearly not every matrix $L$ satisfies such a condition, and whether it is self-adjoint with respect to the Euclidean inner product (i.e. 
whether $A$ is real symmetric) or whether it is positive definite with respect to the Euclidean inner product (i.e. whether $A$ is a positive definite matrix) are completely irrelevant. It could happen that:</p>\n\n<ul>\n<li>$L$ is <strong>not</strong> self-adjoint w.r.t. the Euclidean inner product but it is self-adjoint w.r.t. $\\langle\\cdot,\\cdot\\rangle_P$, e.g. when $A=\\pmatrix{4&amp;2\\\\ 3&amp;6}$ and $P=\\pmatrix{2&amp;1\\\\ 1&amp;2}$;</li>\n<li>$L$ is self-adjoint and even positive definite w.r.t. the Euclidean inner product but it is <strong>not</strong> self-adjoint w.r.t. $\\langle\\cdot,\\cdot\\rangle_P$, e.g. when $A=\\pmatrix{1\\\\ &amp;2}$ and $P=\\pmatrix{2&amp;1\\\\ 1&amp;2}$.</li>\n</ul>\n" }, { "answer_id": 2111119, "score": 7, "text": "<p>Self-adjointness is the operator-theoretic version of symmetry of bilinear forms. The notion of a symmetric bilinear form can be defined without reference to any extra structure but in order to talk about self-adjoint operators, we need some mechanism to convert operators to bilinear forms. An inner product $g$ allows us to do just that. Namely, let $V$ be a finite dimensional real vector space and let $g$ be an inner product. Given a linear map $T$, we can define the bilinear form associated to $T$ to be $\\beta^g_T(v,w) := g(Tv,w)$. This gives us a map (in fact, a linear isomorphism) from $\\operatorname{End}(V) \\cong V^{*} \\otimes V$ to $\\operatorname{Bil}(V \\times V, \\mathbb{R}) \\cong V \\otimes V$.</p>\n\n<p>The operator $T$ is $g$-self-adjoint if the associated bilinear form $\\beta^g_T$ is symmetric. By changing $g$ we get different different maps between $\\operatorname{End}(V)$ and $\\operatorname{Bil}(V \\times V, \\mathbb{R})$ so $T$ might be $g$-self-adjoint but not $g'$-self-adjoint.\nThis is best seen by invoking the spectral theorem. </p>\n\n<p>In the real case, we know that $T$ is $g$-self-adjoint if and only if $T$ is $g$-orthogonally diagonalizable. 
Denote by $\\lambda_1, \\dots, \\lambda_k$ the distinct eigenvalues of $T$. Then $V = \\bigoplus_{i=1}^k \\ker(T - \\lambda_i I)$ is a direct sum orthogonal decomposition of $V$. If we change the inner product $g$ to an inner product $g'$ such that some $\\ker(T - \\lambda_i I)$ won't be $g'$-orthogonal to some $\\ker(T - \\lambda_j I)$ (with $i \\neq j$) then by the spectral theorem, $T$ won't be self-adjoint. This is always possible as long as $k &gt; 1$. In other words, the only operators $T$ which are self-adjoint with respect to all inner products are the scalar operators.</p>\n\n<p>In fact, a real operator $T$ is $g$-self-adjoint with respect to some inner product $g$ if and only if $T$ is diagonalizable. If $T$ is self-adjoint, it is orthogonally diagonalizable and in particular diagonalizable. For the other direction, if $T$ is diagonalizable we can always take an arbitrary basis of eigenvectors of $T$ and define an inner product $g$ by declaring this basis to be orthonormal - then by the spectral theorem $T$ will be $g$-orthogonal.</p>\n" }, { "answer_id": 2111151, "score": 0, "text": "<p>Start with a finite-dimensional space $X$ and a basis $\\{b_1,b_2,\\cdots,b_n\\}$. You can define an inner-product between $x,y \\in X$ by expanding\n$$\n x = \\alpha_1 b_1 + \\alpha_2 b_2 + \\cdots + \\alpha_n b_n \\\\\n y = \\beta_1 b_1 + \\beta_2 b_2 + \\cdots \\beta_n b_n\n$$\nand setting\n$$\n \\langle x,y\\rangle_{b} = \\sum_{j=1}^{n}\\alpha_j\\beta_j.\n$$\nIn this inner product, $\\{ b_1,b_2,\\cdots, b_n \\}$ is an orthonormal basis of $X$. A matrix $A$ is selfadjoint on the inner product space $(X,\\langle\\cdot,\\cdot\\rangle_b)$ if, for example, the elements of $b$ are eigenvectors of $A$ with real eigenvalues. Not every diagonalizable matrix $D$ with real eigenvalues is selfadjoint with respect to a given inner product. 
However, using this procedure, you can find an inner product with respect to which $D$ <em>is</em> selfadjoint.</p>\n" }, { "answer_id": 2111164, "score": 0, "text": "<p>That depends on the inner product, of course, because self-adjointness is all about preservation of inner product between two different bilinear transformations.</p>\n\n<p>To illustrate, consider the Euclidean inner product space $\\mathbb R^n$. Let $P$ be a positive definite matrix and define an inner product $\\langle x,y\\rangle_P=y^TPx$. Then a linear operator $L$ on $\\mathbb R^n$ is self-adjoint with respect to $\\langle\\cdot,\\cdot\\rangle_P$ if and only if its matrix $A$ in the standard basis satisfies $PA=A^TP$.</p>\n\n<p>Clearly not every matrix $L$ satisfies such a condition, and whether it is self-adjoint with respect to the Euclidean inner product (i.e. whether $A$ is real symmetric) or whether it is positive definite with respect to the Euclidean inner product (i.e. whether $A$ is a positive definite matrix) are completely irrelevant. It could happen that:</p>\n\n<ul>\n<li>$L$ is <strong>not</strong> self-adjoint w.r.t. the Euclidean inner product but it is self-adjoint w.r.t. $\\langle\\cdot,\\cdot\\rangle_P$, e.g. when $A=\\pmatrix{4&amp;2\\\\ 3&amp;6}$ and $P=\\pmatrix{2&amp;1\\\\ 1&amp;2}$;</li>\n<li>$L$ is self-adjoint and even positive definite w.r.t. the Euclidean inner product but it is <strong>not</strong> self-adjoint w.r.t. $\\langle\\cdot,\\cdot\\rangle_P$, e.g. when $A=\\pmatrix{1\\\\ &amp;2}$ and $P=\\pmatrix{2&amp;1\\\\ 1&amp;2}$.</li>\n</ul>\n" }, { "answer_id": 2111119, "score": 7, "text": "<p>Self-adjointness is the operator-theoretic version of symmetry of bilinear forms. The notion of a symmetric bilinear form can be defined without reference to any extra structure but in order to talk about self-adjoint operators, we need some mechanism to convert operators to bilinear forms. An inner product $g$ allows us to do just that. 
Namely, let $V$ be a finite dimensional real vector space and let $g$ be an inner product. Given a linear map $T$, we can define the bilinear form associated to $T$ to be $\\beta^g_T(v,w) := g(Tv,w)$. This gives us a map (in fact, a linear isomorphism) from $\\operatorname{End}(V) \\cong V^{*} \\otimes V$ to $\\operatorname{Bil}(V \\times V, \\mathbb{R}) \\cong V \\otimes V$.</p>\n\n<p>The operator $T$ is $g$-self-adjoint if the associated bilinear form $\\beta^g_T$ is symmetric. By changing $g$ we get different maps between $\\operatorname{End}(V)$ and $\\operatorname{Bil}(V \\times V, \\mathbb{R})$ so $T$ might be $g$-self-adjoint but not $g'$-self-adjoint.\nThis is best seen by invoking the spectral theorem. </p>\n\n<p>In the real case, we know that $T$ is $g$-self-adjoint if and only if $T$ is $g$-orthogonally diagonalizable. Denote by $\\lambda_1, \\dots, \\lambda_k$ the distinct eigenvalues of $T$. Then $V = \\bigoplus_{i=1}^k \\ker(T - \\lambda_i I)$ is a direct sum orthogonal decomposition of $V$. If we change the inner product $g$ to an inner product $g'$ such that some $\\ker(T - \\lambda_i I)$ won't be $g'$-orthogonal to some $\\ker(T - \\lambda_j I)$ (with $i \\neq j$) then by the spectral theorem, $T$ won't be self-adjoint. This is always possible as long as $k &gt; 1$. In other words, the only operators $T$ which are self-adjoint with respect to all inner products are the scalar operators.</p>\n\n<p>In fact, a real operator $T$ is $g$-self-adjoint with respect to some inner product $g$ if and only if $T$ is diagonalizable. If $T$ is self-adjoint, it is orthogonally diagonalizable and in particular diagonalizable. 
For the other direction, if $T$ is diagonalizable we can always take an arbitrary basis of eigenvectors of $T$ and define an inner product $g$ by declaring this basis to be orthonormal - then by the spectral theorem $T$ will be $g$-self-adjoint.</p>\n" }, { "answer_id": 2111151, "score": 0, "text": "<p>Start with a finite-dimensional space $X$ and a basis $\\{b_1,b_2,\\cdots,b_n\\}$. You can define an inner-product between $x,y \\in X$ by expanding\n$$\n x = \\alpha_1 b_1 + \\alpha_2 b_2 + \\cdots + \\alpha_n b_n \\\\\n y = \\beta_1 b_1 + \\beta_2 b_2 + \\cdots \\beta_n b_n\n$$\nand setting\n$$\n \\langle x,y\\rangle_{b} = \\sum_{j=1}^{n}\\alpha_j\\beta_j.\n$$\nIn this inner product, $\\{ b_1,b_2,\\cdots, b_n \\}$ is an orthonormal basis of $X$. A matrix $A$ is selfadjoint on the inner product space $(X,\\langle\\cdot,\\cdot\\rangle_b)$ if, for example, the elements of $b$ are eigenvectors of $A$ with real eigenvalues. Not every diagonalizable matrix $D$ with real eigenvalues is selfadjoint with respect to a given inner product. However, using this procedure, you can find an inner product with respect to which $D$ <em>is</em> selfadjoint.</p>\n" }, { "answer_id": 2111164, "score": 0, "text": "<p>That depends on the inner product, of course, because self-adjointness is all about preservation of inner product between two different bilinear transformations.</p>\n\n<p>To illustrate, consider the Euclidean inner product space $\\mathbb R^n$. Let $P$ be a positive definite matrix and define an inner product $\\langle x,y\\rangle_P=y^TPx$. Then a linear operator $L$ on $\\mathbb R^n$ is self-adjoint with respect to $\\langle\\cdot,\\cdot\\rangle_P$ if and only if its matrix $A$ in the standard basis satisfies $PA=A^TP$.</p>\n\n<p>Clearly not every matrix $L$ satisfies such a condition, and whether it is self-adjoint with respect to the Euclidean inner product (i.e. 
whether $A$ is real symmetric) or whether it is positive definite with respect to the Euclidean inner product (i.e. whether $A$ is a positive definite matrix) are completely irrelevant. It could happen that:</p>\n\n<ul>\n<li>$L$ is <strong>not</strong> self-adjoint w.r.t. the Euclidean inner product but it is self-adjoint w.r.t. $\\langle\\cdot,\\cdot\\rangle_P$, e.g. when $A=\\pmatrix{4&amp;2\\\\ 3&amp;6}$ and $P=\\pmatrix{2&amp;1\\\\ 1&amp;2}$;</li>\n<li>$L$ is self-adjoint and even positive definite w.r.t. the Euclidean inner product but it is <strong>not</strong> self-adjoint w.r.t. $\\langle\\cdot,\\cdot\\rangle_P$, e.g. when $A=\\pmatrix{1\\\\ &amp;2}$ and $P=\\pmatrix{2&amp;1\\\\ 1&amp;2}$.</li>\n</ul>\n" }, { "answer_id": 2111119, "score": 7, "text": "<p>Self-adjointness is the operator-theoretic version of symmetry of bilinear forms. The notion of a symmetric bilinear form can be defined without reference to any extra structure but in order to talk about self-adjoint operators, we need some mechanism to convert operators to bilinear forms. An inner product $g$ allows us to do just that. Namely, let $V$ be a finite dimensional real vector space and let $g$ be an inner product. Given a linear map $T$, we can define the bilinear form associated to $T$ to be $\\beta^g_T(v,w) := g(Tv,w)$. This gives us a map (in fact, a linear isomorphism) from $\\operatorname{End}(V) \\cong V^{*} \\otimes V$ to $\\operatorname{Bil}(V \\times V, \\mathbb{R}) \\cong V \\otimes V$.</p>\n\n<p>The operator $T$ is $g$-self-adjoint if the associated bilinear form $\\beta^g_T$ is symmetric. By changing $g$ we get different maps between $\\operatorname{End}(V)$ and $\\operatorname{Bil}(V \\times V, \\mathbb{R})$ so $T$ might be $g$-self-adjoint but not $g'$-self-adjoint.\nThis is best seen by invoking the spectral theorem. </p>\n\n<p>In the real case, we know that $T$ is $g$-self-adjoint if and only if $T$ is $g$-orthogonally diagonalizable. 
Denote by $\\lambda_1, \\dots, \\lambda_k$ the distinct eigenvalues of $T$. Then $V = \\bigoplus_{i=1}^k \\ker(T - \\lambda_i I)$ is a direct sum orthogonal decomposition of $V$. If we change the inner product $g$ to an inner product $g'$ such that some $\\ker(T - \\lambda_i I)$ won't be $g'$-orthogonal to some $\\ker(T - \\lambda_j I)$ (with $i \\neq j$) then by the spectral theorem, $T$ won't be self-adjoint. This is always possible as long as $k &gt; 1$. In other words, the only operators $T$ which are self-adjoint with respect to all inner products are the scalar operators.</p>\n\n<p>In fact, a real operator $T$ is $g$-self-adjoint with respect to some inner product $g$ if and only if $T$ is diagonalizable. If $T$ is self-adjoint, it is orthogonally diagonalizable and in particular diagonalizable. For the other direction, if $T$ is diagonalizable we can always take an arbitrary basis of eigenvectors of $T$ and define an inner product $g$ by declaring this basis to be orthonormal - then by the spectral theorem $T$ will be $g$-self-adjoint.</p>\n" }, { "answer_id": 2111151, "score": 0, "text": "<p>Start with a finite-dimensional space $X$ and a basis $\\{b_1,b_2,\\cdots,b_n\\}$. You can define an inner-product between $x,y \\in X$ by expanding\n$$\n x = \\alpha_1 b_1 + \\alpha_2 b_2 + \\cdots + \\alpha_n b_n \\\\\n y = \\beta_1 b_1 + \\beta_2 b_2 + \\cdots \\beta_n b_n\n$$\nand setting\n$$\n \\langle x,y\\rangle_{b} = \\sum_{j=1}^{n}\\alpha_j\\beta_j.\n$$\nIn this inner product, $\\{ b_1,b_2,\\cdots, b_n \\}$ is an orthonormal basis of $X$. A matrix $A$ is selfadjoint on the inner product space $(X,\\langle\\cdot,\\cdot\\rangle_b)$ if, for example, the elements of $b$ are eigenvectors of $A$ with real eigenvalues. Not every diagonalizable matrix $D$ with real eigenvalues is selfadjoint with respect to a given inner product. 
However, using this procedure, you can find an inner product with respect to which $D$ <em>is</em> selfadjoint.</p>\n" }, { "answer_id": 2111164, "score": 0, "text": "<p>That depends on the inner product, of course, because self-adjointness is all about preservation of inner product between two different bilinear transformations.</p>\n\n<p>To illustrate, consider the Euclidean inner product space $\\mathbb R^n$. Let $P$ be a positive definite matrix and define an inner product $\\langle x,y\\rangle_P=y^TPx$. Then a linear operator $L$ on $\\mathbb R^n$ is self-adjoint with respect to $\\langle\\cdot,\\cdot\\rangle_P$ if and only if its matrix $A$ in the standard basis satisfies $PA=A^TP$.</p>\n\n<p>Clearly not every matrix $L$ satisfies such a condition, and whether it is self-adjoint with respect to the Euclidean inner product (i.e. whether $A$ is real symmetric) or whether it is positive definite with respect to the Euclidean inner product (i.e. whether $A$ is a positive definite matrix) are completely irrelevant. It could happen that:</p>\n\n<ul>\n<li>$L$ is <strong>not</strong> self-adjoint w.r.t. the Euclidean inner product but it is self-adjoint w.r.t. $\\langle\\cdot,\\cdot\\rangle_P$, e.g. when $A=\\pmatrix{4&amp;2\\\\ 3&amp;6}$ and $P=\\pmatrix{2&amp;1\\\\ 1&amp;2}$;</li>\n<li>$L$ is self-adjoint and even positive definite w.r.t. the Euclidean inner product but it is <strong>not</strong> self-adjoint w.r.t. $\\langle\\cdot,\\cdot\\rangle_P$, e.g. when $A=\\pmatrix{1\\\\ &amp;2}$ and $P=\\pmatrix{2&amp;1\\\\ 1&amp;2}$.</li>\n</ul>\n" } ]
Question: Is self-adjointness really a property of an operator, or of an operator and an inner product? Answer 1: <p>Self-adjointness is the operator-theoretic version of symmetry of bilinear forms. The notion of a symmetric bilinear form can be defined without reference to any extra structure but in order to talk about self-adjoint operators, we need some mechanism to convert operators to bilinear forms. An inner product $g$ allows us to do just that. Namely, let $V$ be a finite dimensional real vector space and let $g$ be an inner product. Given a linear map $T$, we can define the bilinear form associated to $T$ to be $\beta^g_T(v,w) := g(Tv,w)$. This gives us a map (in fact, a linear isomorphism) from $\operatorname{End}(V) \cong V^{*} \otimes V$ to $\operatorname{Bil}(V \times V, \mathbb{R}) \cong V \otimes V$.</p> <p>The operator $T$ is $g$-self-adjoint if the associated bilinear form $\beta^g_T$ is symmetric. By changing $g$ we get different maps between $\operatorname{End}(V)$ and $\operatorname{Bil}(V \times V, \mathbb{R})$ so $T$ might be $g$-self-adjoint but not $g'$-self-adjoint. This is best seen by invoking the spectral theorem. </p> <p>In the real case, we know that $T$ is $g$-self-adjoint if and only if $T$ is $g$-orthogonally diagonalizable. Denote by $\lambda_1, \dots, \lambda_k$ the distinct eigenvalues of $T$. Then $V = \bigoplus_{i=1}^k \ker(T - \lambda_i I)$ is a direct sum orthogonal decomposition of $V$. If we change the inner product $g$ to an inner product $g'$ such that some $\ker(T - \lambda_i I)$ won't be $g'$-orthogonal to some $\ker(T - \lambda_j I)$ (with $i \neq j$) then by the spectral theorem, $T$ won't be self-adjoint. This is always possible as long as $k &gt; 1$. In other words, the only operators $T$ which are self-adjoint with respect to all inner products are the scalar operators.</p> <p>In fact, a real operator $T$ is $g$-self-adjoint with respect to some inner product $g$ if and only if $T$ is diagonalizable. 
If $T$ is self-adjoint, it is orthogonally diagonalizable and in particular diagonalizable. For the other direction, if $T$ is diagonalizable we can always take an arbitrary basis of eigenvectors of $T$ and define an inner product $g$ by declaring this basis to be orthonormal - then by the spectral theorem $T$ will be $g$-self-adjoint.</p> Answer 2: <p>Start with a finite-dimensional space $X$ and a basis $\{b_1,b_2,\cdots,b_n\}$. You can define an inner-product between $x,y \in X$ by expanding $$ x = \alpha_1 b_1 + \alpha_2 b_2 + \cdots + \alpha_n b_n \\ y = \beta_1 b_1 + \beta_2 b_2 + \cdots + \beta_n b_n $$ and setting $$ \langle x,y\rangle_{b} = \sum_{j=1}^{n}\alpha_j\beta_j. $$ In this inner product, $\{ b_1,b_2,\cdots, b_n \}$ is an orthonormal basis of $X$. A matrix $A$ is selfadjoint on the inner product space $(X,\langle\cdot,\cdot\rangle_b)$ if, for example, the elements of $b$ are eigenvectors of $A$ with real eigenvalues. Not every diagonalizable matrix $D$ with real eigenvalues is selfadjoint with respect to a given inner product. However, using this procedure, you can find an inner product with respect to which $D$ <em>is</em> selfadjoint.</p> Answer 3: <p>That depends on the inner product, of course, because self-adjointness is all about preservation of inner product between two different bilinear transformations.</p> <p>To illustrate, consider the Euclidean inner product space $\mathbb R^n$. Let $P$ be a positive definite matrix and define an inner product $\langle x,y\rangle_P=y^TPx$. Then a linear operator $L$ on $\mathbb R^n$ is self-adjoint with respect to $\langle\cdot,\cdot\rangle_P$ if and only if its matrix $A$ in the standard basis satisfies $PA=A^TP$.</p> <p>Clearly not every matrix $L$ satisfies such a condition, and whether it is self-adjoint with respect to the Euclidean inner product (i.e. whether $A$ is real symmetric) or whether it is positive definite with respect to the Euclidean inner product (i.e. 
whether $A$ is a positive definite matrix) are completely irrelevant. It could happen that:</p> <ul> <li>$L$ is <strong>not</strong> self-adjoint w.r.t. the Euclidean inner product but it is self-adjoint w.r.t. $\langle\cdot,\cdot\rangle_P$, e.g. when $A=\pmatrix{4&amp;2\\ 3&amp;6}$ and $P=\pmatrix{2&amp;1\\ 1&amp;2}$;</li> <li>$L$ is self-adjoint and even positive definite w.r.t. the Euclidean inner product but it is <strong>not</strong> self-adjoint w.r.t. $\langle\cdot,\cdot\rangle_P$, e.g. when $A=\pmatrix{1\\ &amp;2}$ and $P=\pmatrix{2&amp;1\\ 1&amp;2}$.</li> </ul>
A matrix $A$ is selfadjoint on the inner product space $(X,\langle\cdot,\cdot\rangle_b)$ if, for example, the elements of $b$ are eigenvectors of $A$ with real eigenvalues. Not every diagonalizable matrix $D$ with real eigenvalues is selfadjoint with respect to a given inner product. However, using this procedure, you can find an inner product with respect to which $D$ <em>is</em> selfadjoint.</p> Answer 18: <p>That depends on the inner product, of course, because self-adjointness is all about preservation of inner product between two different bilinear transformations.</p> <p>To illustrate, consider the Euclidean inner product space $\mathbb R^n$. Let $P$ be a positive definite matrix and define an inner product $\langle x,y\rangle_P=y^TPx$. Then a linear operator $L$ on $\mathbb R^n$ is self-adjoint with respect to $\langle\cdot,\cdot\rangle_P$ if and only if its matrix $A$ in the standard basis satisfies $PA=A^TP$.</p> <p>Clearly not every matrix $L$ satisfies such a condition, and whether it is self-adjoint with respect to the Euclidean inner product (i.e. whether $A$ is real symmetric) or whether it is positive definite with respect to the Euclidean inner product (i.e. whether $A$ is a positive definite matrix) are completely irrelevant. It could happen that:</p> <ul> <li>$L$ is <strong>not</strong> self-adjoint w.r.t. the Euclidean inner product but it is self-adjoint w.r.t. $\langle\cdot,\cdot\rangle_P$, e.g. when $A=\pmatrix{4&amp;2\\ 3&amp;6}$ and $P=\pmatrix{2&amp;1\\ 1&amp;2}$;</li> <li>$L$ is self-adjoint and even positive definite w.r.t. the Euclidean inner product but it is <strong>not</strong> self-adjoint w.r.t. $\langle\cdot,\cdot\rangle_P$, e.g. when $A=\pmatrix{1\\ &amp;2}$ and $P=\pmatrix{2&amp;1\\ 1&amp;2}$.</li> </ul> Answer 19: <p>Self-adjointness is the operator-theoretic version of symmetry of bilinear forms. 
The notion of a symmetric bilinear form can be defined without reference to any extra structure but in order to talk about self-adjoint operators, we need some mechanism to convert operators to bilinear forms. An inner product $g$ allows us to do just that. Namely, let $V$ be a finite dimensional real vector space and let $g$ be an inner product. Given a linear map $T$, we can define the bilinear form associated to $T$ to be $\beta^g_T(v,w) := g(Tv,w)$. This gives us a map (in fact, a linear isomorphism) from $\operatorname{End}(V) \cong V^{*} \otimes V$ to $\operatorname{Bil}(V \times V, \mathbb{R}) \cong V \otimes V$.</p> <p>The operator $T$ is $g$-self-adjoint if the associated bilinear form $\beta^g_T$ is symmetric. By changing $g$ we get different different maps between $\operatorname{End}(V)$ and $\operatorname{Bil}(V \times V, \mathbb{R})$ so $T$ might be $g$-self-adjoint but not $g'$-self-adjoint. This is best seen by invoking the spectral theorem. </p> <p>In the real case, we know that $T$ is $g$-self-adjoint if and only if $T$ is $g$-orthogonally diagonalizable. Denote by $\lambda_1, \dots, \lambda_k$ the distinct eigenvalues of $T$. Then $V = \bigoplus_{i=1}^k \ker(T - \lambda_i I)$ is a direct sum orthogonal decomposition of $V$. If we change the inner product $g$ to an inner product $g'$ such that some $\ker(T - \lambda_i I)$ won't be $g'$-orthogonal to some $\ker(T - \lambda_j I)$ (with $i \neq j$) then by the spectral theorem, $T$ won't be self-adjoint. This is always possible as long as $k &gt; 1$. In other words, the only operators $T$ which are self-adjoint with respect to all inner products are the scalar operators.</p> <p>In fact, a real operator $T$ is $g$-self-adjoint with respect to some inner product $g$ if and only if $T$ is diagonalizable. If $T$ is self-adjoint, it is orthogonally diagonalizable and in particular diagonalizable. 
For the other direction, if $T$ is diagonalizable we can always take an arbitrary basis of eigenvectors of $T$ and define an inner product $g$ by declaring this basis to be orthonormal - then by the spectral theorem $T$ will be $g$-orthogonal.</p> Answer 20: <p>Start with a finite-dimensional space $X$ and a basis $\{b_1,b_2,\cdots,b_n\}$. You can define an inner-product between $x,y \in X$ by expanding $$ x = \alpha_1 b_1 + \alpha_2 b_2 + \cdots + \alpha_n b_n \\ y = \beta_1 b_1 + \beta_2 b_2 + \cdots \beta_n b_n $$ and setting $$ \langle x,y\rangle_{b} = \sum_{j=1}^{n}\alpha_j\beta_j. $$ In this inner product, $\{ b_1,b_2,\cdots, b_n \}$ is an orthonormal basis of $X$. A matrix $A$ is selfadjoint on the inner product space $(X,\langle\cdot,\cdot\rangle_b)$ if, for example, the elements of $b$ are eigenvectors of $A$ with real eigenvalues. Not every diagonalizable matrix $D$ with real eigenvalues is selfadjoint with respect to a given inner product. However, using this procedure, you can find an inner product with respect to which $D$ <em>is</em> selfadjoint.</p> Answer 21: <p>That depends on the inner product, of course, because self-adjointness is all about preservation of inner product between two different bilinear transformations.</p> <p>To illustrate, consider the Euclidean inner product space $\mathbb R^n$. Let $P$ be a positive definite matrix and define an inner product $\langle x,y\rangle_P=y^TPx$. Then a linear operator $L$ on $\mathbb R^n$ is self-adjoint with respect to $\langle\cdot,\cdot\rangle_P$ if and only if its matrix $A$ in the standard basis satisfies $PA=A^TP$.</p> <p>Clearly not every matrix $L$ satisfies such a condition, and whether it is self-adjoint with respect to the Euclidean inner product (i.e. whether $A$ is real symmetric) or whether it is positive definite with respect to the Euclidean inner product (i.e. whether $A$ is a positive definite matrix) are completely irrelevant. 
It could happen that:</p> <ul> <li>$L$ is <strong>not</strong> self-adjoint w.r.t. the Euclidean inner product but it is self-adjoint w.r.t. $\langle\cdot,\cdot\rangle_P$, e.g. when $A=\pmatrix{4&amp;2\\ 3&amp;6}$ and $P=\pmatrix{2&amp;1\\ 1&amp;2}$;</li> <li>$L$ is self-adjoint and even positive definite w.r.t. the Euclidean inner product but it is <strong>not</strong> self-adjoint w.r.t. $\langle\cdot,\cdot\rangle_P$, e.g. when $A=\pmatrix{1\\ &amp;2}$ and $P=\pmatrix{2&amp;1\\ 1&amp;2}$.</li> </ul> Answer 22: <p>Self-adjointness is the operator-theoretic version of symmetry of bilinear forms. The notion of a symmetric bilinear form can be defined without reference to any extra structure but in order to talk about self-adjoint operators, we need some mechanism to convert operators to bilinear forms. An inner product $g$ allows us to do just that. Namely, let $V$ be a finite dimensional real vector space and let $g$ be an inner product. Given a linear map $T$, we can define the bilinear form associated to $T$ to be $\beta^g_T(v,w) := g(Tv,w)$. This gives us a map (in fact, a linear isomorphism) from $\operatorname{End}(V) \cong V^{*} \otimes V$ to $\operatorname{Bil}(V \times V, \mathbb{R}) \cong V \otimes V$.</p> <p>The operator $T$ is $g$-self-adjoint if the associated bilinear form $\beta^g_T$ is symmetric. By changing $g$ we get different different maps between $\operatorname{End}(V)$ and $\operatorname{Bil}(V \times V, \mathbb{R})$ so $T$ might be $g$-self-adjoint but not $g'$-self-adjoint. This is best seen by invoking the spectral theorem. </p> <p>In the real case, we know that $T$ is $g$-self-adjoint if and only if $T$ is $g$-orthogonally diagonalizable. Denote by $\lambda_1, \dots, \lambda_k$ the distinct eigenvalues of $T$. Then $V = \bigoplus_{i=1}^k \ker(T - \lambda_i I)$ is a direct sum orthogonal decomposition of $V$. 
If we change the inner product $g$ to an inner product $g'$ such that some $\ker(T - \lambda_i I)$ won't be $g'$-orthogonal to some $\ker(T - \lambda_j I)$ (with $i \neq j$) then by the spectral theorem, $T$ won't be self-adjoint. This is always possible as long as $k &gt; 1$. In other words, the only operators $T$ which are self-adjoint with respect to all inner products are the scalar operators.</p> <p>In fact, a real operator $T$ is $g$-self-adjoint with respect to some inner product $g$ if and only if $T$ is diagonalizable. If $T$ is self-adjoint, it is orthogonally diagonalizable and in particular diagonalizable. For the other direction, if $T$ is diagonalizable we can always take an arbitrary basis of eigenvectors of $T$ and define an inner product $g$ by declaring this basis to be orthonormal - then by the spectral theorem $T$ will be $g$-orthogonal.</p> Answer 23: <p>Start with a finite-dimensional space $X$ and a basis $\{b_1,b_2,\cdots,b_n\}$. You can define an inner-product between $x,y \in X$ by expanding $$ x = \alpha_1 b_1 + \alpha_2 b_2 + \cdots + \alpha_n b_n \\ y = \beta_1 b_1 + \beta_2 b_2 + \cdots \beta_n b_n $$ and setting $$ \langle x,y\rangle_{b} = \sum_{j=1}^{n}\alpha_j\beta_j. $$ In this inner product, $\{ b_1,b_2,\cdots, b_n \}$ is an orthonormal basis of $X$. A matrix $A$ is selfadjoint on the inner product space $(X,\langle\cdot,\cdot\rangle_b)$ if, for example, the elements of $b$ are eigenvectors of $A$ with real eigenvalues. Not every diagonalizable matrix $D$ with real eigenvalues is selfadjoint with respect to a given inner product. However, using this procedure, you can find an inner product with respect to which $D$ <em>is</em> selfadjoint.</p> Answer 24: <p>That depends on the inner product, of course, because self-adjointness is all about preservation of inner product between two different bilinear transformations.</p> <p>To illustrate, consider the Euclidean inner product space $\mathbb R^n$. 
Let $P$ be a positive definite matrix and define an inner product $\langle x,y\rangle_P=y^TPx$. Then a linear operator $L$ on $\mathbb R^n$ is self-adjoint with respect to $\langle\cdot,\cdot\rangle_P$ if and only if its matrix $A$ in the standard basis satisfies $PA=A^TP$.</p> <p>Clearly not every matrix $L$ satisfies such a condition, and whether it is self-adjoint with respect to the Euclidean inner product (i.e. whether $A$ is real symmetric) or whether it is positive definite with respect to the Euclidean inner product (i.e. whether $A$ is a positive definite matrix) are completely irrelevant. It could happen that:</p> <ul> <li>$L$ is <strong>not</strong> self-adjoint w.r.t. the Euclidean inner product but it is self-adjoint w.r.t. $\langle\cdot,\cdot\rangle_P$, e.g. when $A=\pmatrix{4&amp;2\\ 3&amp;6}$ and $P=\pmatrix{2&amp;1\\ 1&amp;2}$;</li> <li>$L$ is self-adjoint and even positive definite w.r.t. the Euclidean inner product but it is <strong>not</strong> self-adjoint w.r.t. $\langle\cdot,\cdot\rangle_P$, e.g. when $A=\pmatrix{1\\ &amp;2}$ and $P=\pmatrix{2&amp;1\\ 1&amp;2}$.</li> </ul> Answer 25: <p>Self-adjointness is the operator-theoretic version of symmetry of bilinear forms. The notion of a symmetric bilinear form can be defined without reference to any extra structure but in order to talk about self-adjoint operators, we need some mechanism to convert operators to bilinear forms. An inner product $g$ allows us to do just that. Namely, let $V$ be a finite dimensional real vector space and let $g$ be an inner product. Given a linear map $T$, we can define the bilinear form associated to $T$ to be $\beta^g_T(v,w) := g(Tv,w)$. This gives us a map (in fact, a linear isomorphism) from $\operatorname{End}(V) \cong V^{*} \otimes V$ to $\operatorname{Bil}(V \times V, \mathbb{R}) \cong V \otimes V$.</p> <p>The operator $T$ is $g$-self-adjoint if the associated bilinear form $\beta^g_T$ is symmetric. 
By changing $g$ we get different different maps between $\operatorname{End}(V)$ and $\operatorname{Bil}(V \times V, \mathbb{R})$ so $T$ might be $g$-self-adjoint but not $g'$-self-adjoint. This is best seen by invoking the spectral theorem. </p> <p>In the real case, we know that $T$ is $g$-self-adjoint if and only if $T$ is $g$-orthogonally diagonalizable. Denote by $\lambda_1, \dots, \lambda_k$ the distinct eigenvalues of $T$. Then $V = \bigoplus_{i=1}^k \ker(T - \lambda_i I)$ is a direct sum orthogonal decomposition of $V$. If we change the inner product $g$ to an inner product $g'$ such that some $\ker(T - \lambda_i I)$ won't be $g'$-orthogonal to some $\ker(T - \lambda_j I)$ (with $i \neq j$) then by the spectral theorem, $T$ won't be self-adjoint. This is always possible as long as $k &gt; 1$. In other words, the only operators $T$ which are self-adjoint with respect to all inner products are the scalar operators.</p> <p>In fact, a real operator $T$ is $g$-self-adjoint with respect to some inner product $g$ if and only if $T$ is diagonalizable. If $T$ is self-adjoint, it is orthogonally diagonalizable and in particular diagonalizable. For the other direction, if $T$ is diagonalizable we can always take an arbitrary basis of eigenvectors of $T$ and define an inner product $g$ by declaring this basis to be orthonormal - then by the spectral theorem $T$ will be $g$-orthogonal.</p> Answer 26: <p>Start with a finite-dimensional space $X$ and a basis $\{b_1,b_2,\cdots,b_n\}$. You can define an inner-product between $x,y \in X$ by expanding $$ x = \alpha_1 b_1 + \alpha_2 b_2 + \cdots + \alpha_n b_n \\ y = \beta_1 b_1 + \beta_2 b_2 + \cdots \beta_n b_n $$ and setting $$ \langle x,y\rangle_{b} = \sum_{j=1}^{n}\alpha_j\beta_j. $$ In this inner product, $\{ b_1,b_2,\cdots, b_n \}$ is an orthonormal basis of $X$. 
A matrix $A$ is selfadjoint on the inner product space $(X,\langle\cdot,\cdot\rangle_b)$ if, for example, the elements of $b$ are eigenvectors of $A$ with real eigenvalues. Not every diagonalizable matrix $D$ with real eigenvalues is selfadjoint with respect to a given inner product. However, using this procedure, you can find an inner product with respect to which $D$ <em>is</em> selfadjoint.</p> Answer 27: <p>That depends on the inner product, of course, because self-adjointness is all about preservation of inner product between two different bilinear transformations.</p> <p>To illustrate, consider the Euclidean inner product space $\mathbb R^n$. Let $P$ be a positive definite matrix and define an inner product $\langle x,y\rangle_P=y^TPx$. Then a linear operator $L$ on $\mathbb R^n$ is self-adjoint with respect to $\langle\cdot,\cdot\rangle_P$ if and only if its matrix $A$ in the standard basis satisfies $PA=A^TP$.</p> <p>Clearly not every matrix $L$ satisfies such a condition, and whether it is self-adjoint with respect to the Euclidean inner product (i.e. whether $A$ is real symmetric) or whether it is positive definite with respect to the Euclidean inner product (i.e. whether $A$ is a positive definite matrix) are completely irrelevant. It could happen that:</p> <ul> <li>$L$ is <strong>not</strong> self-adjoint w.r.t. the Euclidean inner product but it is self-adjoint w.r.t. $\langle\cdot,\cdot\rangle_P$, e.g. when $A=\pmatrix{4&amp;2\\ 3&amp;6}$ and $P=\pmatrix{2&amp;1\\ 1&amp;2}$;</li> <li>$L$ is self-adjoint and even positive definite w.r.t. the Euclidean inner product but it is <strong>not</strong> self-adjoint w.r.t. $\langle\cdot,\cdot\rangle_P$, e.g. when $A=\pmatrix{1\\ &amp;2}$ and $P=\pmatrix{2&amp;1\\ 1&amp;2}$.</li> </ul> Answer 28: <p>Self-adjointness is the operator-theoretic version of symmetry of bilinear forms. 
The notion of a symmetric bilinear form can be defined without reference to any extra structure but in order to talk about self-adjoint operators, we need some mechanism to convert operators to bilinear forms. An inner product $g$ allows us to do just that. Namely, let $V$ be a finite dimensional real vector space and let $g$ be an inner product. Given a linear map $T$, we can define the bilinear form associated to $T$ to be $\beta^g_T(v,w) := g(Tv,w)$. This gives us a map (in fact, a linear isomorphism) from $\operatorname{End}(V) \cong V^{*} \otimes V$ to $\operatorname{Bil}(V \times V, \mathbb{R}) \cong V \otimes V$.</p> <p>The operator $T$ is $g$-self-adjoint if the associated bilinear form $\beta^g_T$ is symmetric. By changing $g$ we get different different maps between $\operatorname{End}(V)$ and $\operatorname{Bil}(V \times V, \mathbb{R})$ so $T$ might be $g$-self-adjoint but not $g'$-self-adjoint. This is best seen by invoking the spectral theorem. </p> <p>In the real case, we know that $T$ is $g$-self-adjoint if and only if $T$ is $g$-orthogonally diagonalizable. Denote by $\lambda_1, \dots, \lambda_k$ the distinct eigenvalues of $T$. Then $V = \bigoplus_{i=1}^k \ker(T - \lambda_i I)$ is a direct sum orthogonal decomposition of $V$. If we change the inner product $g$ to an inner product $g'$ such that some $\ker(T - \lambda_i I)$ won't be $g'$-orthogonal to some $\ker(T - \lambda_j I)$ (with $i \neq j$) then by the spectral theorem, $T$ won't be self-adjoint. This is always possible as long as $k &gt; 1$. In other words, the only operators $T$ which are self-adjoint with respect to all inner products are the scalar operators.</p> <p>In fact, a real operator $T$ is $g$-self-adjoint with respect to some inner product $g$ if and only if $T$ is diagonalizable. If $T$ is self-adjoint, it is orthogonally diagonalizable and in particular diagonalizable. 
For the other direction, if $T$ is diagonalizable we can always take an arbitrary basis of eigenvectors of $T$ and define an inner product $g$ by declaring this basis to be orthonormal - then by the spectral theorem $T$ will be $g$-orthogonal.</p> Answer 29: <p>Start with a finite-dimensional space $X$ and a basis $\{b_1,b_2,\cdots,b_n\}$. You can define an inner-product between $x,y \in X$ by expanding $$ x = \alpha_1 b_1 + \alpha_2 b_2 + \cdots + \alpha_n b_n \\ y = \beta_1 b_1 + \beta_2 b_2 + \cdots \beta_n b_n $$ and setting $$ \langle x,y\rangle_{b} = \sum_{j=1}^{n}\alpha_j\beta_j. $$ In this inner product, $\{ b_1,b_2,\cdots, b_n \}$ is an orthonormal basis of $X$. A matrix $A$ is selfadjoint on the inner product space $(X,\langle\cdot,\cdot\rangle_b)$ if, for example, the elements of $b$ are eigenvectors of $A$ with real eigenvalues. Not every diagonalizable matrix $D$ with real eigenvalues is selfadjoint with respect to a given inner product. However, using this procedure, you can find an inner product with respect to which $D$ <em>is</em> selfadjoint.</p> Answer 30: <p>That depends on the inner product, of course, because self-adjointness is all about preservation of inner product between two different bilinear transformations.</p> <p>To illustrate, consider the Euclidean inner product space $\mathbb R^n$. Let $P$ be a positive definite matrix and define an inner product $\langle x,y\rangle_P=y^TPx$. Then a linear operator $L$ on $\mathbb R^n$ is self-adjoint with respect to $\langle\cdot,\cdot\rangle_P$ if and only if its matrix $A$ in the standard basis satisfies $PA=A^TP$.</p> <p>Clearly not every matrix $L$ satisfies such a condition, and whether it is self-adjoint with respect to the Euclidean inner product (i.e. whether $A$ is real symmetric) or whether it is positive definite with respect to the Euclidean inner product (i.e. whether $A$ is a positive definite matrix) are completely irrelevant. 
It could happen that:</p> <ul> <li>$L$ is <strong>not</strong> self-adjoint w.r.t. the Euclidean inner product but it is self-adjoint w.r.t. $\langle\cdot,\cdot\rangle_P$, e.g. when $A=\pmatrix{4&amp;2\\ 3&amp;6}$ and $P=\pmatrix{2&amp;1\\ 1&amp;2}$;</li> <li>$L$ is self-adjoint and even positive definite w.r.t. the Euclidean inner product but it is <strong>not</strong> self-adjoint w.r.t. $\langle\cdot,\cdot\rangle_P$, e.g. when $A=\pmatrix{1\\ &amp;2}$ and $P=\pmatrix{2&amp;1\\ 1&amp;2}$.</li> </ul> Answer 31: <p>Self-adjointness is the operator-theoretic version of symmetry of bilinear forms. The notion of a symmetric bilinear form can be defined without reference to any extra structure but in order to talk about self-adjoint operators, we need some mechanism to convert operators to bilinear forms. An inner product $g$ allows us to do just that. Namely, let $V$ be a finite dimensional real vector space and let $g$ be an inner product. Given a linear map $T$, we can define the bilinear form associated to $T$ to be $\beta^g_T(v,w) := g(Tv,w)$. This gives us a map (in fact, a linear isomorphism) from $\operatorname{End}(V) \cong V^{*} \otimes V$ to $\operatorname{Bil}(V \times V, \mathbb{R}) \cong V \otimes V$.</p> <p>The operator $T$ is $g$-self-adjoint if the associated bilinear form $\beta^g_T$ is symmetric. By changing $g$ we get different different maps between $\operatorname{End}(V)$ and $\operatorname{Bil}(V \times V, \mathbb{R})$ so $T$ might be $g$-self-adjoint but not $g'$-self-adjoint. This is best seen by invoking the spectral theorem. </p> <p>In the real case, we know that $T$ is $g$-self-adjoint if and only if $T$ is $g$-orthogonally diagonalizable. Denote by $\lambda_1, \dots, \lambda_k$ the distinct eigenvalues of $T$. Then $V = \bigoplus_{i=1}^k \ker(T - \lambda_i I)$ is a direct sum orthogonal decomposition of $V$. 
If we change the inner product $g$ to an inner product $g'$ such that some $\ker(T - \lambda_i I)$ won't be $g'$-orthogonal to some $\ker(T - \lambda_j I)$ (with $i \neq j$) then by the spectral theorem, $T$ won't be self-adjoint. This is always possible as long as $k &gt; 1$. In other words, the only operators $T$ which are self-adjoint with respect to all inner products are the scalar operators.</p> <p>In fact, a real operator $T$ is $g$-self-adjoint with respect to some inner product $g$ if and only if $T$ is diagonalizable. If $T$ is self-adjoint, it is orthogonally diagonalizable and in particular diagonalizable. For the other direction, if $T$ is diagonalizable we can always take an arbitrary basis of eigenvectors of $T$ and define an inner product $g$ by declaring this basis to be orthonormal - then by the spectral theorem $T$ will be $g$-orthogonal.</p> Answer 32: <p>Start with a finite-dimensional space $X$ and a basis $\{b_1,b_2,\cdots,b_n\}$. You can define an inner-product between $x,y \in X$ by expanding $$ x = \alpha_1 b_1 + \alpha_2 b_2 + \cdots + \alpha_n b_n \\ y = \beta_1 b_1 + \beta_2 b_2 + \cdots \beta_n b_n $$ and setting $$ \langle x,y\rangle_{b} = \sum_{j=1}^{n}\alpha_j\beta_j. $$ In this inner product, $\{ b_1,b_2,\cdots, b_n \}$ is an orthonormal basis of $X$. A matrix $A$ is selfadjoint on the inner product space $(X,\langle\cdot,\cdot\rangle_b)$ if, for example, the elements of $b$ are eigenvectors of $A$ with real eigenvalues. Not every diagonalizable matrix $D$ with real eigenvalues is selfadjoint with respect to a given inner product. However, using this procedure, you can find an inner product with respect to which $D$ <em>is</em> selfadjoint.</p> Answer 33: <p>That depends on the inner product, of course, because self-adjointness is all about preservation of inner product between two different bilinear transformations.</p> <p>To illustrate, consider the Euclidean inner product space $\mathbb R^n$. 
Let $P$ be a positive definite matrix and define an inner product $\langle x,y\rangle_P=y^TPx$. Then a linear operator $L$ on $\mathbb R^n$ is self-adjoint with respect to $\langle\cdot,\cdot\rangle_P$ if and only if its matrix $A$ in the standard basis satisfies $PA=A^TP$.</p> <p>Clearly not every matrix $L$ satisfies such a condition, and whether it is self-adjoint with respect to the Euclidean inner product (i.e. whether $A$ is real symmetric) or whether it is positive definite with respect to the Euclidean inner product (i.e. whether $A$ is a positive definite matrix) are completely irrelevant. It could happen that:</p> <ul> <li>$L$ is <strong>not</strong> self-adjoint w.r.t. the Euclidean inner product but it is self-adjoint w.r.t. $\langle\cdot,\cdot\rangle_P$, e.g. when $A=\pmatrix{4&amp;2\\ 3&amp;6}$ and $P=\pmatrix{2&amp;1\\ 1&amp;2}$;</li> <li>$L$ is self-adjoint and even positive definite w.r.t. the Euclidean inner product but it is <strong>not</strong> self-adjoint w.r.t. $\langle\cdot,\cdot\rangle_P$, e.g. when $A=\pmatrix{1\\ &amp;2}$ and $P=\pmatrix{2&amp;1\\ 1&amp;2}$.</li> </ul> Answer 34: <p>Self-adjointness is the operator-theoretic version of symmetry of bilinear forms. The notion of a symmetric bilinear form can be defined without reference to any extra structure but in order to talk about self-adjoint operators, we need some mechanism to convert operators to bilinear forms. An inner product $g$ allows us to do just that. Namely, let $V$ be a finite dimensional real vector space and let $g$ be an inner product. Given a linear map $T$, we can define the bilinear form associated to $T$ to be $\beta^g_T(v,w) := g(Tv,w)$. This gives us a map (in fact, a linear isomorphism) from $\operatorname{End}(V) \cong V^{*} \otimes V$ to $\operatorname{Bil}(V \times V, \mathbb{R}) \cong V \otimes V$.</p> <p>The operator $T$ is $g$-self-adjoint if the associated bilinear form $\beta^g_T$ is symmetric. 
By changing $g$ we get different different maps between $\operatorname{End}(V)$ and $\operatorname{Bil}(V \times V, \mathbb{R})$ so $T$ might be $g$-self-adjoint but not $g'$-self-adjoint. This is best seen by invoking the spectral theorem. </p> <p>In the real case, we know that $T$ is $g$-self-adjoint if and only if $T$ is $g$-orthogonally diagonalizable. Denote by $\lambda_1, \dots, \lambda_k$ the distinct eigenvalues of $T$. Then $V = \bigoplus_{i=1}^k \ker(T - \lambda_i I)$ is a direct sum orthogonal decomposition of $V$. If we change the inner product $g$ to an inner product $g'$ such that some $\ker(T - \lambda_i I)$ won't be $g'$-orthogonal to some $\ker(T - \lambda_j I)$ (with $i \neq j$) then by the spectral theorem, $T$ won't be self-adjoint. This is always possible as long as $k &gt; 1$. In other words, the only operators $T$ which are self-adjoint with respect to all inner products are the scalar operators.</p> <p>In fact, a real operator $T$ is $g$-self-adjoint with respect to some inner product $g$ if and only if $T$ is diagonalizable. If $T$ is self-adjoint, it is orthogonally diagonalizable and in particular diagonalizable. For the other direction, if $T$ is diagonalizable we can always take an arbitrary basis of eigenvectors of $T$ and define an inner product $g$ by declaring this basis to be orthonormal - then by the spectral theorem $T$ will be $g$-orthogonal.</p> Answer 35: <p>Start with a finite-dimensional space $X$ and a basis $\{b_1,b_2,\cdots,b_n\}$. You can define an inner-product between $x,y \in X$ by expanding $$ x = \alpha_1 b_1 + \alpha_2 b_2 + \cdots + \alpha_n b_n \\ y = \beta_1 b_1 + \beta_2 b_2 + \cdots \beta_n b_n $$ and setting $$ \langle x,y\rangle_{b} = \sum_{j=1}^{n}\alpha_j\beta_j. $$ In this inner product, $\{ b_1,b_2,\cdots, b_n \}$ is an orthonormal basis of $X$. 
A matrix $A$ is selfadjoint on the inner product space $(X,\langle\cdot,\cdot\rangle_b)$ if, for example, the elements of $b$ are eigenvectors of $A$ with real eigenvalues. Not every diagonalizable matrix $D$ with real eigenvalues is selfadjoint with respect to a given inner product. However, using this procedure, you can find an inner product with respect to which $D$ <em>is</em> selfadjoint.</p> Answer 36: <p>That depends on the inner product, of course, because self-adjointness is all about preservation of inner product between two different bilinear transformations.</p> <p>To illustrate, consider the Euclidean inner product space $\mathbb R^n$. Let $P$ be a positive definite matrix and define an inner product $\langle x,y\rangle_P=y^TPx$. Then a linear operator $L$ on $\mathbb R^n$ is self-adjoint with respect to $\langle\cdot,\cdot\rangle_P$ if and only if its matrix $A$ in the standard basis satisfies $PA=A^TP$.</p> <p>Clearly not every matrix $L$ satisfies such a condition, and whether it is self-adjoint with respect to the Euclidean inner product (i.e. whether $A$ is real symmetric) or whether it is positive definite with respect to the Euclidean inner product (i.e. whether $A$ is a positive definite matrix) are completely irrelevant. It could happen that:</p> <ul> <li>$L$ is <strong>not</strong> self-adjoint w.r.t. the Euclidean inner product but it is self-adjoint w.r.t. $\langle\cdot,\cdot\rangle_P$, e.g. when $A=\pmatrix{4&amp;2\\ 3&amp;6}$ and $P=\pmatrix{2&amp;1\\ 1&amp;2}$;</li> <li>$L$ is self-adjoint and even positive definite w.r.t. the Euclidean inner product but it is <strong>not</strong> self-adjoint w.r.t. $\langle\cdot,\cdot\rangle_P$, e.g. when $A=\pmatrix{1\\ &amp;2}$ and $P=\pmatrix{2&amp;1\\ 1&amp;2}$.</li> </ul> Answer 37: <p>Self-adjointness is the operator-theoretic version of symmetry of bilinear forms. 
The notion of a symmetric bilinear form can be defined without reference to any extra structure but in order to talk about self-adjoint operators, we need some mechanism to convert operators to bilinear forms. An inner product $g$ allows us to do just that. Namely, let $V$ be a finite dimensional real vector space and let $g$ be an inner product. Given a linear map $T$, we can define the bilinear form associated to $T$ to be $\beta^g_T(v,w) := g(Tv,w)$. This gives us a map (in fact, a linear isomorphism) from $\operatorname{End}(V) \cong V^{*} \otimes V$ to $\operatorname{Bil}(V \times V, \mathbb{R}) \cong V^{*} \otimes V^{*}$.</p> <p>The operator $T$ is $g$-self-adjoint if the associated bilinear form $\beta^g_T$ is symmetric. By changing $g$ we get different maps between $\operatorname{End}(V)$ and $\operatorname{Bil}(V \times V, \mathbb{R})$ so $T$ might be $g$-self-adjoint but not $g'$-self-adjoint. This is best seen by invoking the spectral theorem. </p> <p>In the real case, we know that $T$ is $g$-self-adjoint if and only if $T$ is $g$-orthogonally diagonalizable. Denote by $\lambda_1, \dots, \lambda_k$ the distinct eigenvalues of $T$. Then $V = \bigoplus_{i=1}^k \ker(T - \lambda_i I)$ is an orthogonal direct sum decomposition of $V$. If we change the inner product $g$ to an inner product $g'$ such that some $\ker(T - \lambda_i I)$ won't be $g'$-orthogonal to some $\ker(T - \lambda_j I)$ (with $i \neq j$) then by the spectral theorem, $T$ won't be self-adjoint. This is always possible as long as $k &gt; 1$. In other words, the only operators $T$ which are self-adjoint with respect to all inner products are the scalar operators.</p> <p>In fact, a real operator $T$ is $g$-self-adjoint with respect to some inner product $g$ if and only if $T$ is diagonalizable. If $T$ is self-adjoint, it is orthogonally diagonalizable and in particular diagonalizable. 
For the other direction, if $T$ is diagonalizable we can always take an arbitrary basis of eigenvectors of $T$ and define an inner product $g$ by declaring this basis to be orthonormal - then by the spectral theorem $T$ will be $g$-self-adjoint.</p>
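The matrix criterion in the answers above — a standard-basis matrix $A$ is self-adjoint with respect to $\langle x,y\rangle_P=y^TPx$ if and only if $PA=A^TP$ — can be checked directly on the two example pairs. A minimal sketch in plain Python (the helper names are ours, not from the answers):

```python
def matmul2(X, Y):
    # product of two 2x2 matrices given as nested lists
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def transpose2(X):
    return [[X[j][i] for j in range(2)] for i in range(2)]

def is_p_selfadjoint(A, P):
    # A is self-adjoint w.r.t. <x, y>_P = y^T P x  iff  P A == A^T P
    return matmul2(P, A) == matmul2(transpose2(A), P)

P = [[2, 1], [1, 2]]
# not symmetric, yet self-adjoint w.r.t. <.,.>_P
print(is_p_selfadjoint([[4, 2], [3, 6]], P))   # True
# symmetric (even positive definite), yet not self-adjoint w.r.t. <.,.>_P
print(is_p_selfadjoint([[1, 0], [0, 2]], P))   # False
```

Both checks agree with the claims made for these matrices: symmetry in the standard basis and self-adjointness with respect to $\langle\cdot,\cdot\rangle_P$ are independent properties.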
How to prove that $\frac{1000!}{(500!)^2}$ is not divisible by 7?
2,111,927
8
https://math.stackexchange.com/questions/2111927/how-to-prove-that-frac10005002-is-not-divisible-by-7
<blockquote> <p>How to prove that $$\frac{1000!}{(500!)^2}$$ is not divisible by 7?</p> </blockquote> <p>I reduced the above fraction to: $$\frac{\prod_{k=501}^{1000}k}{500!} $$</p> <p>But I don't know how to proceed. Please help.</p>
[ { "answer_id": 2111933, "score": 25, "text": "<p>This is overkill, but by <a href=\"https://en.wikipedia.org/wiki/Lucas&#39;s_theorem\">Lucas's Theorem</a>$$\\frac{1000!}{(500!)^2}=\\binom{1000}{500} \\equiv \\binom{2}{1}\\binom{6}{3}\\binom{2}{1}\\binom{6}{3} \\equiv 4 \\pmod{7}$$\nAs $1000=2626_{7}$ and $500=1313_{7}$. </p>\n\n<p>Here is another, more elementary argument, from <a href=\"https://en.wikipedia.org/wiki/Legendre&#39;s_formula\">Legendre's formula</a>, $$\\nu_7(n!) = \\frac{n - s_7(n)}{6}$$\nSo $\\nu_7(1000!)=164$ and $\\nu_7(500!)=82$. So the $7$'s in the denominator and the numerator cancel out. So we have the desired result. </p>\n" }, { "answer_id": 2111950, "score": 7, "text": "<p>It is a general formula that the number of times a prime $p$ occurs in the prime factorisation of $n!$ is given by</p>\n\n<p>$$\n\\left[\\frac{n}{p}\\right] + \\left[\\frac{n}{p^2}\\right] + \\left[\\frac{n}{p^3}\\right] + \\cdots\n$$</p>\n\n<p>where $[x]$ denotes the greatest integer $≤x$.</p>\n\n<p>In $1000!$ you'd have</p>\n\n<p>$$\n\\left[\\frac{1000}{7}\\right] + \\left[\\frac{1000}{7^2}\\right] + \\left[\\frac{1000}{7^3}\\right] + \\cdots = 164\n$$</p>\n\n<p>$7$'s, and in $500!$ you have</p>\n\n<p>$$\n\\left[\\frac{500}{7}\\right] + \\left[\\frac{500}{7^2}\\right] + \\left[\\frac{500}{7^3}\\right] + \\cdots = 82\n$$</p>\n\n<p>$7$'s. So you'll have a total of $164$ 7's in $(500!)^2$.</p>\n\n<p>The rest follows.</p>\n" }, { "answer_id": 2111942, "score": 5, "text": "<p>You can also overkill it by Kummer's theorem:</p>\n\n<p>Notice that $500$ is $1313$ in base $7$. 
Since every digit is smaller than $\\frac{7}{2}$ there are no carries when adding $1313+1313$ in base $7$.</p>\n\n<p>Therefore $\\binom{1000}{500}$ is not a multiple of $7$.</p>\n\n<hr>\n\n<p>In general we have that a prime $p$ divides $\\binom{2n}{n}$ if and only if $n$ has at least one digit larger than or equal to $p/2$ in its base $p$ expansion.</p>\n" }, { "answer_id": 2111938, "score": 4, "text": "<p>Consider prime factorisations:</p>\n\n<p>$$\\lfloor\\frac{1000}{7}\\rfloor+\\lfloor\\frac{1000}{7^2}\\rfloor+\\lfloor\\frac{1000}{7^3}\\rfloor+...=142+20+2=164$$\nThen $$7^{164}|1000!$$</p>\n\n<p>Similarly:</p>\n\n<p>$$\\lfloor\\frac{500}{7}\\rfloor+\\lfloor\\frac{500}{7^2}\\rfloor+\\lfloor\\frac{500}{7^3}\\rfloor+...=71+10+1=82$$\nThen $$7^{164}=7^{82\\cdot2}=(7^{82})^2|(500!)^2$$</p>\n" }, { "answer_id": 2111945, "score": 2, "text": "<p>In $\\{1,\\dots, 500\\}$, there are 71 multiples of 7, 10 multiples of $7^2$ and one multiple of $7^3$.<br>\nHow many multiples of $7$, $7^2$, $7^3$ do you have in $\\{501,\\dots, 1000\\}$? Same amount.</p>\n" }, { "answer_id": 2111933, "score": 25, "text": "<p>This is overkill, but by <a href=\"https://en.wikipedia.org/wiki/Lucas&#39;s_theorem\">Lucas's Theorem</a>$$\\frac{1000!}{(500!)^2}=\\binom{1000}{500} \\equiv \\binom{2}{1}\\binom{6}{3}\\binom{2}{1}\\binom{6}{3} \\equiv 4 \\pmod{7}$$\nAs $1000=2626_{7}$ and $500=1313_{7}$. </p>\n\n<p>Here is another , more elementary argument, from <a href=\"https://en.wikipedia.org/wiki/Legendre&#39;s_formula\">Legendre's formula</a>, $$\\nu_7(n!) = \\frac{n - s_7(n)}{6}$$\nSo $\\nu_7(100!)=164, \\nu_7 (50!)=82$. So the $7$ s in the denominator and the numerator cancel out. So we have the desired result. 
</p>\n" }, { "answer_id": 2111950, "score": 7, "text": "<p>It is a general formula that the number of times a prime $p$ occurs in the prime factorisation of $n!$ is given by</p>\n\n<p>$$\n\\left[\\frac{n}{p}\\right] + \\left[\\frac{n}{p^2}\\right] + \\left[\\frac{n}{p^3}\\right] + \\cdots\n$$</p>\n\n<p>where $[x]$ denotes the greatest integer $≤x$.</p>\n\n<p>In $1000!$ you'd have</p>\n\n<p>$$\n\\left[\\frac{1000}{7}\\right] + \\left[\\frac{1000}{7^2}\\right] + \\left[\\frac{1000}{7^3}\\right] + \\cdots = 164\n$$</p>\n\n<p>$7$'s, and in $500!$ you have</p>\n\n<p>$$\n\\left[\\frac{500}{7}\\right] + \\left[\\frac{500}{7^2}\\right] + \\left[\\frac{500}{7^3}\\right] + \\cdots = 82\n$$</p>\n\n<p>$7$'s. So you'll have a total of $164$ 7's in $(500!)^2$.</p>\n\n<p>The rest follows.</p>\n" }, { "answer_id": 2111942, "score": 5, "text": "<p>You can also overkill it by Kummer's theorem:</p>\n\n<p>Notice that $500$ is $1313$ in base $7$. Since every digit is smaller than $\\frac{7}{2}$ there are no carries when adding $1313+1313$ in base $7$.</p>\n\n<p>Therefore $\\binom{1000}{500}$ is not a multiple of $7$.</p>\n\n<hr>\n\n<p>In general we have that a prime $p$ divides $\\binom{2n}{n}$ if and only if $n$ has at least one digit larger than or equal to $p/2$ in its base $p$ expansion.</p>\n" }, { "answer_id": 2111938, "score": 4, "text": "<p>Consider prime factorisations:</p>\n\n<p>$$\\lfloor\\frac{1000}{7}\\rfloor+\\lfloor\\frac{1000}{7^2}\\rfloor+\\lfloor\\frac{1000}{7^3}\\rfloor+...=142+20+2=164$$\nThen $$7^{164}|1000!$$</p>\n\n<p>Similarly:</p>\n\n<p>$$\\lfloor\\frac{500}{7}\\rfloor+\\lfloor\\frac{500}{7^2}\\rfloor+\\lfloor\\frac{500}{7^3}\\rfloor+...=71+10+1=82$$\nThen $$7^{164}=7^{82\\cdot2}=(7^{82})^2|(500!)^2$$</p>\n" }, { "answer_id": 2111945, "score": 2, "text": "<p>In $\\{1,\\dots, 500\\}$, there are 71 multiples of 7, 10 multiples of $7^2$ and one multiple of $7^3$.<br>\nHow many multiples of $7$, $7^2$, $7^3$ do you have in $\\{501,\\dots, 1000\\}$? 
Same amount.</p>\n" }, { "answer_id": 2111933, "score": 25, "text": "<p>This is overkill, but by <a href=\"https://en.wikipedia.org/wiki/Lucas&#39;s_theorem\">Lucas's Theorem</a>$$\\frac{1000!}{(500!)^2}=\\binom{1000}{500} \\equiv \\binom{2}{1}\\binom{6}{3}\\binom{2}{1}\\binom{6}{3} \\equiv 4 \\pmod{7}$$\nAs $1000=2626_{7}$ and $500=1313_{7}$. </p>\n\n<p>Here is another , more elementary argument, from <a href=\"https://en.wikipedia.org/wiki/Legendre&#39;s_formula\">Legendre's formula</a>, $$\\nu_7(n!) = \\frac{n - s_7(n)}{6}$$\nSo $\\nu_7(100!)=164, \\nu_7 (50!)=82$. So the $7$ s in the denominator and the numerator cancel out. So we have the desired result. </p>\n" }, { "answer_id": 2111950, "score": 7, "text": "<p>It is a general formula that the number of times a prime $p$ occurs in the prime factorisation of $n!$ is given by</p>\n\n<p>$$\n\\left[\\frac{n}{p}\\right] + \\left[\\frac{n}{p^2}\\right] + \\left[\\frac{n}{p^3}\\right] + \\cdots\n$$</p>\n\n<p>where $[x]$ denotes the greatest integer $≤x$.</p>\n\n<p>In $1000!$ you'd have</p>\n\n<p>$$\n\\left[\\frac{1000}{7}\\right] + \\left[\\frac{1000}{7^2}\\right] + \\left[\\frac{1000}{7^3}\\right] + \\cdots = 164\n$$</p>\n\n<p>$7$'s, and in $500!$ you have</p>\n\n<p>$$\n\\left[\\frac{500}{7}\\right] + \\left[\\frac{500}{7^2}\\right] + \\left[\\frac{500}{7^3}\\right] + \\cdots = 82\n$$</p>\n\n<p>$7$'s. So you'll have a total of $164$ 7's in $(500!)^2$.</p>\n\n<p>The rest follows.</p>\n" }, { "answer_id": 2111942, "score": 5, "text": "<p>You can also overkill it by Kummer's theorem:</p>\n\n<p>Notice that $500$ is $1313$ in base $7$. 
Since every digit is smaller than $\\frac{7}{2}$ there are no carries when adding $1313+1313$ in base $7$.</p>\n\n<p>Therefore $\\binom{1000}{500}$ is not a multiple of $7$.</p>\n\n<hr>\n\n<p>In general we have that a prime $p$ divides $\\binom{2n}{n}$ if and only if $n$ has at least one digit larger than or equal to $p/2$ in its base $p$ expansion.</p>\n" }, { "answer_id": 2111938, "score": 4, "text": "<p>Consider prime factorisations:</p>\n\n<p>$$\\lfloor\\frac{1000}{7}\\rfloor+\\lfloor\\frac{1000}{7^2}\\rfloor+\\lfloor\\frac{1000}{7^3}\\rfloor+...=142+20+2=164$$\nThen $$7^{164}|1000!$$</p>\n\n<p>Similarly:</p>\n\n<p>$$\\lfloor\\frac{500}{7}\\rfloor+\\lfloor\\frac{500}{7^2}\\rfloor+\\lfloor\\frac{500}{7^3}\\rfloor+...=71+10+1=82$$\nThen $$7^{164}=7^{82\\cdot2}=(7^{82})^2|(500!)^2$$</p>\n" }, { "answer_id": 2111945, "score": 2, "text": "<p>In $\\{1,\\dots, 500\\}$, there are 71 multiples of 7, 10 multiples of $7^2$ and one multiple of $7^3$.<br>\nHow many multiples of $7$, $7^2$, $7^3$ do you have in $\\{501,\\dots, 1000\\}$? Same amount.</p>\n" }, { "answer_id": 2111933, "score": 25, "text": "<p>This is overkill, but by <a href=\"https://en.wikipedia.org/wiki/Lucas&#39;s_theorem\">Lucas's Theorem</a>$$\\frac{1000!}{(500!)^2}=\\binom{1000}{500} \\equiv \\binom{2}{1}\\binom{6}{3}\\binom{2}{1}\\binom{6}{3} \\equiv 4 \\pmod{7}$$\nAs $1000=2626_{7}$ and $500=1313_{7}$. </p>\n\n<p>Here is another , more elementary argument, from <a href=\"https://en.wikipedia.org/wiki/Legendre&#39;s_formula\">Legendre's formula</a>, $$\\nu_7(n!) = \\frac{n - s_7(n)}{6}$$\nSo $\\nu_7(100!)=164, \\nu_7 (50!)=82$. So the $7$ s in the denominator and the numerator cancel out. So we have the desired result. 
</p>\n" }, { "answer_id": 2111950, "score": 7, "text": "<p>It is a general formula that the number of times a prime $p$ occurs in the prime factorisation of $n!$ is given by</p>\n\n<p>$$\n\\left[\\frac{n}{p}\\right] + \\left[\\frac{n}{p^2}\\right] + \\left[\\frac{n}{p^3}\\right] + \\cdots\n$$</p>\n\n<p>where $[x]$ denotes the greatest integer $≤x$.</p>\n\n<p>In $1000!$ you'd have</p>\n\n<p>$$\n\\left[\\frac{1000}{7}\\right] + \\left[\\frac{1000}{7^2}\\right] + \\left[\\frac{1000}{7^3}\\right] + \\cdots = 164\n$$</p>\n\n<p>$7$'s, and in $500!$ you have</p>\n\n<p>$$\n\\left[\\frac{500}{7}\\right] + \\left[\\frac{500}{7^2}\\right] + \\left[\\frac{500}{7^3}\\right] + \\cdots = 82\n$$</p>\n\n<p>$7$'s. So you'll have a total of $164$ 7's in $(500!)^2$.</p>\n\n<p>The rest follows.</p>\n" }, { "answer_id": 2111942, "score": 5, "text": "<p>You can also overkill it by Kummer's theorem:</p>\n\n<p>Notice that $500$ is $1313$ in base $7$. Since every digit is smaller than $\\frac{7}{2}$ there are no carries when adding $1313+1313$ in base $7$.</p>\n\n<p>Therefore $\\binom{1000}{500}$ is not a multiple of $7$.</p>\n\n<hr>\n\n<p>In general we have that a prime $p$ divides $\\binom{2n}{n}$ if and only if $n$ has at least one digit larger than or equal to $p/2$ in its base $p$ expansion.</p>\n" }, { "answer_id": 2111938, "score": 4, "text": "<p>Consider prime factorisations:</p>\n\n<p>$$\\lfloor\\frac{1000}{7}\\rfloor+\\lfloor\\frac{1000}{7^2}\\rfloor+\\lfloor\\frac{1000}{7^3}\\rfloor+...=142+20+2=164$$\nThen $$7^{164}|1000!$$</p>\n\n<p>Similarly:</p>\n\n<p>$$\\lfloor\\frac{500}{7}\\rfloor+\\lfloor\\frac{500}{7^2}\\rfloor+\\lfloor\\frac{500}{7^3}\\rfloor+...=71+10+1=82$$\nThen $$7^{164}=7^{82\\cdot2}=(7^{82})^2|(500!)^2$$</p>\n" }, { "answer_id": 2111945, "score": 2, "text": "<p>In $\\{1,\\dots, 500\\}$, there are 71 multiples of 7, 10 multiples of $7^2$ and one multiple of $7^3$.<br>\nHow many multiples of $7$, $7^2$, $7^3$ do you have in $\\{501,\\dots, 1000\\}$? 
Same amount.</p>\n" }, { "answer_id": 2111933, "score": 25, "text": "<p>This is overkill, but by <a href=\"https://en.wikipedia.org/wiki/Lucas&#39;s_theorem\">Lucas's Theorem</a>$$\\frac{1000!}{(500!)^2}=\\binom{1000}{500} \\equiv \\binom{2}{1}\\binom{6}{3}\\binom{2}{1}\\binom{6}{3} \\equiv 4 \\pmod{7}$$\nAs $1000=2626_{7}$ and $500=1313_{7}$. </p>\n\n<p>Here is another , more elementary argument, from <a href=\"https://en.wikipedia.org/wiki/Legendre&#39;s_formula\">Legendre's formula</a>, $$\\nu_7(n!) = \\frac{n - s_7(n)}{6}$$\nSo $\\nu_7(100!)=164, \\nu_7 (50!)=82$. So the $7$ s in the denominator and the numerator cancel out. So we have the desired result. </p>\n" }, { "answer_id": 2111950, "score": 7, "text": "<p>It is a general formula that the number of times a prime $p$ occurs in the prime factorisation of $n!$ is given by</p>\n\n<p>$$\n\\left[\\frac{n}{p}\\right] + \\left[\\frac{n}{p^2}\\right] + \\left[\\frac{n}{p^3}\\right] + \\cdots\n$$</p>\n\n<p>where $[x]$ denotes the greatest integer $≤x$.</p>\n\n<p>In $1000!$ you'd have</p>\n\n<p>$$\n\\left[\\frac{1000}{7}\\right] + \\left[\\frac{1000}{7^2}\\right] + \\left[\\frac{1000}{7^3}\\right] + \\cdots = 164\n$$</p>\n\n<p>$7$'s, and in $500!$ you have</p>\n\n<p>$$\n\\left[\\frac{500}{7}\\right] + \\left[\\frac{500}{7^2}\\right] + \\left[\\frac{500}{7^3}\\right] + \\cdots = 82\n$$</p>\n\n<p>$7$'s. So you'll have a total of $164$ 7's in $(500!)^2$.</p>\n\n<p>The rest follows.</p>\n" }, { "answer_id": 2111942, "score": 5, "text": "<p>You can also overkill it by Kummer's theorem:</p>\n\n<p>Notice that $500$ is $1313$ in base $7$. 
Since every digit is smaller than $\\frac{7}{2}$ there are no carries when adding $1313+1313$ in base $7$.</p>\n\n<p>Therefore $\\binom{1000}{500}$ is not a multiple of $7$.</p>\n\n<hr>\n\n<p>In general we have that a prime $p$ divides $\\binom{2n}{n}$ if and only if $n$ has at least one digit larger than or equal to $p/2$ in its base $p$ expansion.</p>\n" }, { "answer_id": 2111938, "score": 4, "text": "<p>Consider prime factorisations:</p>\n\n<p>$$\\lfloor\\frac{1000}{7}\\rfloor+\\lfloor\\frac{1000}{7^2}\\rfloor+\\lfloor\\frac{1000}{7^3}\\rfloor+...=142+20+2=164$$\nThen $$7^{164}|1000!$$</p>\n\n<p>Similarly:</p>\n\n<p>$$\\lfloor\\frac{500}{7}\\rfloor+\\lfloor\\frac{500}{7^2}\\rfloor+\\lfloor\\frac{500}{7^3}\\rfloor+...=71+10+1=82$$\nThen $$7^{164}=7^{82\\cdot2}=(7^{82})^2|(500!)^2$$</p>\n" }, { "answer_id": 2111945, "score": 2, "text": "<p>In $\\{1,\\dots, 500\\}$, there are 71 multiples of 7, 10 multiples of $7^2$ and one multiple of $7^3$.<br>\nHow many multiples of $7$, $7^2$, $7^3$ do you have in $\\{501,\\dots, 1000\\}$? Same amount.</p>\n" }, { "answer_id": 2111933, "score": 25, "text": "<p>This is overkill, but by <a href=\"https://en.wikipedia.org/wiki/Lucas&#39;s_theorem\">Lucas's Theorem</a>$$\\frac{1000!}{(500!)^2}=\\binom{1000}{500} \\equiv \\binom{2}{1}\\binom{6}{3}\\binom{2}{1}\\binom{6}{3} \\equiv 4 \\pmod{7}$$\nAs $1000=2626_{7}$ and $500=1313_{7}$. </p>\n\n<p>Here is another , more elementary argument, from <a href=\"https://en.wikipedia.org/wiki/Legendre&#39;s_formula\">Legendre's formula</a>, $$\\nu_7(n!) = \\frac{n - s_7(n)}{6}$$\nSo $\\nu_7(100!)=164, \\nu_7 (50!)=82$. So the $7$ s in the denominator and the numerator cancel out. So we have the desired result. 
</p>\n" }, { "answer_id": 2111950, "score": 7, "text": "<p>It is a general formula that the number of times a prime $p$ occurs in the prime factorisation of $n!$ is given by</p>\n\n<p>$$\n\\left[\\frac{n}{p}\\right] + \\left[\\frac{n}{p^2}\\right] + \\left[\\frac{n}{p^3}\\right] + \\cdots\n$$</p>\n\n<p>where $[x]$ denotes the greatest integer $≤x$.</p>\n\n<p>In $1000!$ you'd have</p>\n\n<p>$$\n\\left[\\frac{1000}{7}\\right] + \\left[\\frac{1000}{7^2}\\right] + \\left[\\frac{1000}{7^3}\\right] + \\cdots = 164\n$$</p>\n\n<p>$7$'s, and in $500!$ you have</p>\n\n<p>$$\n\\left[\\frac{500}{7}\\right] + \\left[\\frac{500}{7^2}\\right] + \\left[\\frac{500}{7^3}\\right] + \\cdots = 82\n$$</p>\n\n<p>$7$'s. So you'll have a total of $164$ 7's in $(500!)^2$.</p>\n\n<p>The rest follows.</p>\n" }, { "answer_id": 2111942, "score": 5, "text": "<p>You can also overkill it by Kummer's theorem:</p>\n\n<p>Notice that $500$ is $1313$ in base $7$. Since every digit is smaller than $\\frac{7}{2}$ there are no carries when adding $1313+1313$ in base $7$.</p>\n\n<p>Therefore $\\binom{1000}{500}$ is not a multiple of $7$.</p>\n\n<hr>\n\n<p>In general we have that a prime $p$ divides $\\binom{2n}{n}$ if and only if $n$ has at least one digit larger than or equal to $p/2$ in its base $p$ expansion.</p>\n" }, { "answer_id": 2111938, "score": 4, "text": "<p>Consider prime factorisations:</p>\n\n<p>$$\\lfloor\\frac{1000}{7}\\rfloor+\\lfloor\\frac{1000}{7^2}\\rfloor+\\lfloor\\frac{1000}{7^3}\\rfloor+...=142+20+2=164$$\nThen $$7^{164}|1000!$$</p>\n\n<p>Similarly:</p>\n\n<p>$$\\lfloor\\frac{500}{7}\\rfloor+\\lfloor\\frac{500}{7^2}\\rfloor+\\lfloor\\frac{500}{7^3}\\rfloor+...=71+10+1=82$$\nThen $$7^{164}=7^{82\\cdot2}=(7^{82})^2|(500!)^2$$</p>\n" }, { "answer_id": 2111945, "score": 2, "text": "<p>In $\\{1,\\dots, 500\\}$, there are 71 multiples of 7, 10 multiples of $7^2$ and one multiple of $7^3$.<br>\nHow many multiples of $7$, $7^2$, $7^3$ do you have in $\\{501,\\dots, 1000\\}$? 
Same amount.</p>\n" }, { "answer_id": 2111933, "score": 25, "text": "<p>This is overkill, but by <a href=\"https://en.wikipedia.org/wiki/Lucas&#39;s_theorem\">Lucas's Theorem</a>$$\\frac{1000!}{(500!)^2}=\\binom{1000}{500} \\equiv \\binom{2}{1}\\binom{6}{3}\\binom{2}{1}\\binom{6}{3} \\equiv 4 \\pmod{7}$$\nAs $1000=2626_{7}$ and $500=1313_{7}$. </p>\n\n<p>Here is another, more elementary argument, from <a href=\"https://en.wikipedia.org/wiki/Legendre&#39;s_formula\">Legendre's formula</a>, $$\\nu_7(n!) = \\frac{n - s_7(n)}{6}$$\nSo $\\nu_7(1000!)=164, \\nu_7(500!)=82$. So the $7$'s in the denominator and the numerator cancel out. So we have the desired result. </p>\n" }, { "answer_id": 2111950, "score": 7, "text": "<p>It is a general formula that the number of times a prime $p$ occurs in the prime factorisation of $n!$ is given by</p>\n\n<p>$$\n\\left[\\frac{n}{p}\\right] + \\left[\\frac{n}{p^2}\\right] + \\left[\\frac{n}{p^3}\\right] + \\cdots\n$$</p>\n\n<p>where $[x]$ denotes the greatest integer $≤x$.</p>\n\n<p>In $1000!$ you'd have</p>\n\n<p>$$\n\\left[\\frac{1000}{7}\\right] + \\left[\\frac{1000}{7^2}\\right] + \\left[\\frac{1000}{7^3}\\right] + \\cdots = 164\n$$</p>\n\n<p>$7$'s, and in $500!$ you have</p>\n\n<p>$$\n\\left[\\frac{500}{7}\\right] + \\left[\\frac{500}{7^2}\\right] + \\left[\\frac{500}{7^3}\\right] + \\cdots = 82\n$$</p>\n\n<p>$7$'s. So you'll have a total of $164$ 7's in $(500!)^2$.</p>\n\n<p>The rest follows.</p>\n" }, { "answer_id": 2111942, "score": 5, "text": "<p>You can also overkill it by Kummer's theorem:</p>\n\n<p>Notice that $500$ is $1313$ in base $7$. 
Since every digit is smaller than $\\frac{7}{2}$ there are no carries when adding $1313+1313$ in base $7$.</p>\n\n<p>Therefore $\\binom{1000}{500}$ is not a multiple of $7$.</p>\n\n<hr>\n\n<p>In general we have that a prime $p$ divides $\\binom{2n}{n}$ if and only if $n$ has at least one digit larger than or equal to $p/2$ in its base $p$ expansion.</p>\n" }, { "answer_id": 2111938, "score": 4, "text": "<p>Consider prime factorisations:</p>\n\n<p>$$\\lfloor\\frac{1000}{7}\\rfloor+\\lfloor\\frac{1000}{7^2}\\rfloor+\\lfloor\\frac{1000}{7^3}\\rfloor+...=142+20+2=164$$\nThen $$7^{164}|1000!$$</p>\n\n<p>Similarly:</p>\n\n<p>$$\\lfloor\\frac{500}{7}\\rfloor+\\lfloor\\frac{500}{7^2}\\rfloor+\\lfloor\\frac{500}{7^3}\\rfloor+...=71+10+1=82$$\nThen $$7^{164}=7^{82\\cdot2}=(7^{82})^2|(500!)^2$$</p>\n" }, { "answer_id": 2111945, "score": 2, "text": "<p>In $\\{1,\\dots, 500\\}$, there are 71 multiples of 7, 10 multiples of $7^2$ and one multiple of $7^3$.<br>\nHow many multiples of $7$, $7^2$, $7^3$ do you have in $\\{501,\\dots, 1000\\}$? Same amount.</p>\n" }, { "answer_id": 2111933, "score": 25, "text": "<p>This is overkill, but by <a href=\"https://en.wikipedia.org/wiki/Lucas&#39;s_theorem\">Lucas's Theorem</a>$$\\frac{1000!}{(500!)^2}=\\binom{1000}{500} \\equiv \\binom{2}{1}\\binom{6}{3}\\binom{2}{1}\\binom{6}{3} \\equiv 4 \\pmod{7}$$\nAs $1000=2626_{7}$ and $500=1313_{7}$. </p>\n\n<p>Here is another, more elementary argument, from <a href=\"https://en.wikipedia.org/wiki/Legendre&#39;s_formula\">Legendre's formula</a>, $$\\nu_7(n!) = \\frac{n - s_7(n)}{6}$$\nSo $\\nu_7(1000!)=164, \\nu_7(500!)=82$. So the $7$'s in the denominator and the numerator cancel out. So we have the desired result. 
</p>\n" }, { "answer_id": 2111950, "score": 7, "text": "<p>It is a general formula that the number of times a prime $p$ occurs in the prime factorisation of $n!$ is given by</p>\n\n<p>$$\n\\left[\\frac{n}{p}\\right] + \\left[\\frac{n}{p^2}\\right] + \\left[\\frac{n}{p^3}\\right] + \\cdots\n$$</p>\n\n<p>where $[x]$ denotes the greatest integer $≤x$.</p>\n\n<p>In $1000!$ you'd have</p>\n\n<p>$$\n\\left[\\frac{1000}{7}\\right] + \\left[\\frac{1000}{7^2}\\right] + \\left[\\frac{1000}{7^3}\\right] + \\cdots = 164\n$$</p>\n\n<p>$7$'s, and in $500!$ you have</p>\n\n<p>$$\n\\left[\\frac{500}{7}\\right] + \\left[\\frac{500}{7^2}\\right] + \\left[\\frac{500}{7^3}\\right] + \\cdots = 82\n$$</p>\n\n<p>$7$'s. So you'll have a total of $164$ 7's in $(500!)^2$.</p>\n\n<p>The rest follows.</p>\n" }, { "answer_id": 2111942, "score": 5, "text": "<p>You can also overkill it by Kummer's theorem:</p>\n\n<p>Notice that $500$ is $1313$ in base $7$. Since every digit is smaller than $\\frac{7}{2}$ there are no carries when adding $1313+1313$ in base $7$.</p>\n\n<p>Therefore $\\binom{1000}{500}$ is not a multiple of $7$.</p>\n\n<hr>\n\n<p>In general we have that a prime $p$ divides $\\binom{2n}{n}$ if and only if $n$ has at least one digit larger than or equal to $p/2$ in its base $p$ expansion.</p>\n" }, { "answer_id": 2111938, "score": 4, "text": "<p>Consider prime factorisations:</p>\n\n<p>$$\\lfloor\\frac{1000}{7}\\rfloor+\\lfloor\\frac{1000}{7^2}\\rfloor+\\lfloor\\frac{1000}{7^3}\\rfloor+...=142+20+2=164$$\nThen $$7^{164}|1000!$$</p>\n\n<p>Similarly:</p>\n\n<p>$$\\lfloor\\frac{500}{7}\\rfloor+\\lfloor\\frac{500}{7^2}\\rfloor+\\lfloor\\frac{500}{7^3}\\rfloor+...=71+10+1=82$$\nThen $$7^{164}=7^{82\\cdot2}=(7^{82})^2|(500!)^2$$</p>\n" }, { "answer_id": 2111945, "score": 2, "text": "<p>In $\\{1,\\dots, 500\\}$, there are 71 multiples of 7, 10 multiples of $7^2$ and one multiple of $7^3$.<br>\nHow many multiples of $7$, $7^2$, $7^3$ do you have in $\\{501,\\dots, 1000\\}$? 
Same amount.</p>\n" }, { "answer_id": 2111933, "score": 25, "text": "<p>This is overkill, but by <a href=\"https://en.wikipedia.org/wiki/Lucas&#39;s_theorem\">Lucas's Theorem</a>$$\\frac{1000!}{(500!)^2}=\\binom{1000}{500} \\equiv \\binom{2}{1}\\binom{6}{3}\\binom{2}{1}\\binom{6}{3} \\equiv 4 \\pmod{7}$$\nAs $1000=2626_{7}$ and $500=1313_{7}$. </p>\n\n<p>Here is another, more elementary argument, from <a href=\"https://en.wikipedia.org/wiki/Legendre&#39;s_formula\">Legendre's formula</a>, $$\\nu_7(n!) = \\frac{n - s_7(n)}{6}$$\nSo $\\nu_7(1000!)=164, \\nu_7(500!)=82$. So the $7$'s in the denominator and the numerator cancel out. So we have the desired result. </p>\n" }, { "answer_id": 2111950, "score": 7, "text": "<p>It is a general formula that the number of times a prime $p$ occurs in the prime factorisation of $n!$ is given by</p>\n\n<p>$$\n\\left[\\frac{n}{p}\\right] + \\left[\\frac{n}{p^2}\\right] + \\left[\\frac{n}{p^3}\\right] + \\cdots\n$$</p>\n\n<p>where $[x]$ denotes the greatest integer $≤x$.</p>\n\n<p>In $1000!$ you'd have</p>\n\n<p>$$\n\\left[\\frac{1000}{7}\\right] + \\left[\\frac{1000}{7^2}\\right] + \\left[\\frac{1000}{7^3}\\right] + \\cdots = 164\n$$</p>\n\n<p>$7$'s, and in $500!$ you have</p>\n\n<p>$$\n\\left[\\frac{500}{7}\\right] + \\left[\\frac{500}{7^2}\\right] + \\left[\\frac{500}{7^3}\\right] + \\cdots = 82\n$$</p>\n\n<p>$7$'s. So you'll have a total of $164$ 7's in $(500!)^2$.</p>\n\n<p>The rest follows.</p>\n" }, { "answer_id": 2111942, "score": 5, "text": "<p>You can also overkill it by Kummer's theorem:</p>\n\n<p>Notice that $500$ is $1313$ in base $7$. 
Since every digit is smaller than $\\frac{7}{2}$ there are no carries when adding $1313+1313$ in base $7$.</p>\n\n<p>Therefore $\\binom{1000}{500}$ is not a multiple of $7$.</p>\n\n<hr>\n\n<p>In general we have that a prime $p$ divides $\\binom{2n}{n}$ if and only if $n$ has at least one digit larger than or equal to $p/2$ in its base $p$ expansion.</p>\n" }, { "answer_id": 2111938, "score": 4, "text": "<p>Consider prime factorisations:</p>\n\n<p>$$\\lfloor\\frac{1000}{7}\\rfloor+\\lfloor\\frac{1000}{7^2}\\rfloor+\\lfloor\\frac{1000}{7^3}\\rfloor+...=142+20+2=164$$\nThen $$7^{164}|1000!$$</p>\n\n<p>Similarly:</p>\n\n<p>$$\\lfloor\\frac{500}{7}\\rfloor+\\lfloor\\frac{500}{7^2}\\rfloor+\\lfloor\\frac{500}{7^3}\\rfloor+...=71+10+1=82$$\nThen $$7^{164}=7^{82\\cdot2}=(7^{82})^2|(500!)^2$$</p>\n" }, { "answer_id": 2111945, "score": 2, "text": "<p>In $\\{1,\\dots, 500\\}$, there are 71 multiples of 7, 10 multiples of $7^2$ and one multiple of $7^3$.<br>\nHow many multiples of $7$, $7^2$, $7^3$ do you have in $\\{501,\\dots, 1000\\}$? Same amount.</p>\n" }, { "answer_id": 2111933, "score": 25, "text": "<p>This is overkill, but by <a href=\"https://en.wikipedia.org/wiki/Lucas&#39;s_theorem\">Lucas's Theorem</a>$$\\frac{1000!}{(500!)^2}=\\binom{1000}{500} \\equiv \\binom{2}{1}\\binom{6}{3}\\binom{2}{1}\\binom{6}{3} \\equiv 4 \\pmod{7}$$\nAs $1000=2626_{7}$ and $500=1313_{7}$. </p>\n\n<p>Here is another, more elementary argument, from <a href=\"https://en.wikipedia.org/wiki/Legendre&#39;s_formula\">Legendre's formula</a>, $$\\nu_7(n!) = \\frac{n - s_7(n)}{6}$$\nSo $\\nu_7(1000!)=164, \\nu_7(500!)=82$. So the $7$'s in the denominator and the numerator cancel out. So we have the desired result. 
</p>\n" }, { "answer_id": 2111950, "score": 7, "text": "<p>It is a general formula that the number of times a prime $p$ occurs in the prime factorisation of $n!$ is given by</p>\n\n<p>$$\n\\left[\\frac{n}{p}\\right] + \\left[\\frac{n}{p^2}\\right] + \\left[\\frac{n}{p^3}\\right] + \\cdots\n$$</p>\n\n<p>where $[x]$ denotes the greatest integer $≤x$.</p>\n\n<p>In $1000!$ you'd have</p>\n\n<p>$$\n\\left[\\frac{1000}{7}\\right] + \\left[\\frac{1000}{7^2}\\right] + \\left[\\frac{1000}{7^3}\\right] + \\cdots = 164\n$$</p>\n\n<p>$7$'s, and in $500!$ you have</p>\n\n<p>$$\n\\left[\\frac{500}{7}\\right] + \\left[\\frac{500}{7^2}\\right] + \\left[\\frac{500}{7^3}\\right] + \\cdots = 82\n$$</p>\n\n<p>$7$'s. So you'll have a total of $164$ 7's in $(500!)^2$.</p>\n\n<p>The rest follows.</p>\n" }, { "answer_id": 2111942, "score": 5, "text": "<p>You can also overkill it by Kummer's theorem:</p>\n\n<p>Notice that $500$ is $1313$ in base $7$. Since every digit is smaller than $\\frac{7}{2}$ there are no carries when adding $1313+1313$ in base $7$.</p>\n\n<p>Therefore $\\binom{1000}{500}$ is not a multiple of $7$.</p>\n\n<hr>\n\n<p>In general we have that a prime $p$ divides $\\binom{2n}{n}$ if and only if $n$ has at least one digit larger than or equal to $p/2$ in its base $p$ expansion.</p>\n" }, { "answer_id": 2111938, "score": 4, "text": "<p>Consider prime factorisations:</p>\n\n<p>$$\\lfloor\\frac{1000}{7}\\rfloor+\\lfloor\\frac{1000}{7^2}\\rfloor+\\lfloor\\frac{1000}{7^3}\\rfloor+...=142+20+2=164$$\nThen $$7^{164}|1000!$$</p>\n\n<p>Similarly:</p>\n\n<p>$$\\lfloor\\frac{500}{7}\\rfloor+\\lfloor\\frac{500}{7^2}\\rfloor+\\lfloor\\frac{500}{7^3}\\rfloor+...=71+10+1=82$$\nThen $$7^{164}=7^{82\\cdot2}=(7^{82})^2|(500!)^2$$</p>\n" }, { "answer_id": 2111945, "score": 2, "text": "<p>In $\\{1,\\dots, 500\\}$, there are 71 multiples of 7, 10 multiples of $7^2$ and one multiple of $7^3$.<br>\nHow many multiples of $7$, $7^2$, $7^3$ do you have in $\\{501,\\dots, 1000\\}$? 
Same amount.</p>\n" }, { "answer_id": 2111933, "score": 25, "text": "<p>This is overkill, but by <a href=\"https://en.wikipedia.org/wiki/Lucas&#39;s_theorem\">Lucas's Theorem</a>$$\\frac{1000!}{(500!)^2}=\\binom{1000}{500} \\equiv \\binom{2}{1}\\binom{6}{3}\\binom{2}{1}\\binom{6}{3} \\equiv 4 \\pmod{7}$$\nAs $1000=2626_{7}$ and $500=1313_{7}$. </p>\n\n<p>Here is another, more elementary argument, from <a href=\"https://en.wikipedia.org/wiki/Legendre&#39;s_formula\">Legendre's formula</a>, $$\\nu_7(n!) = \\frac{n - s_7(n)}{6}$$\nSo $\\nu_7(1000!)=164, \\nu_7(500!)=82$. So the $7$'s in the denominator and the numerator cancel out. So we have the desired result. </p>\n" }, { "answer_id": 2111950, "score": 7, "text": "<p>It is a general formula that the number of times a prime $p$ occurs in the prime factorisation of $n!$ is given by</p>\n\n<p>$$\n\\left[\\frac{n}{p}\\right] + \\left[\\frac{n}{p^2}\\right] + \\left[\\frac{n}{p^3}\\right] + \\cdots\n$$</p>\n\n<p>where $[x]$ denotes the greatest integer $≤x$.</p>\n\n<p>In $1000!$ you'd have</p>\n\n<p>$$\n\\left[\\frac{1000}{7}\\right] + \\left[\\frac{1000}{7^2}\\right] + \\left[\\frac{1000}{7^3}\\right] + \\cdots = 164\n$$</p>\n\n<p>$7$'s, and in $500!$ you have</p>\n\n<p>$$\n\\left[\\frac{500}{7}\\right] + \\left[\\frac{500}{7^2}\\right] + \\left[\\frac{500}{7^3}\\right] + \\cdots = 82\n$$</p>\n\n<p>$7$'s. So you'll have a total of $164$ 7's in $(500!)^2$.</p>\n\n<p>The rest follows.</p>\n" }, { "answer_id": 2111942, "score": 5, "text": "<p>You can also overkill it by Kummer's theorem:</p>\n\n<p>Notice that $500$ is $1313$ in base $7$. 
Since every digit is smaller than $\\frac{7}{2}$ there are no carries when adding $1313+1313$ in base $7$.</p>\n\n<p>Therefore $\\binom{1000}{500}$ is not a multiple of $7$.</p>\n\n<hr>\n\n<p>In general we have that a prime $p$ divides $\\binom{2n}{n}$ if and only if $n$ has at least one digit larger than or equal to $p/2$ in its base $p$ expansion.</p>\n" }, { "answer_id": 2111938, "score": 4, "text": "<p>Consider prime factorisations:</p>\n\n<p>$$\\lfloor\\frac{1000}{7}\\rfloor+\\lfloor\\frac{1000}{7^2}\\rfloor+\\lfloor\\frac{1000}{7^3}\\rfloor+...=142+20+2=164$$\nThen $$7^{164}|1000!$$</p>\n\n<p>Similarly:</p>\n\n<p>$$\\lfloor\\frac{500}{7}\\rfloor+\\lfloor\\frac{500}{7^2}\\rfloor+\\lfloor\\frac{500}{7^3}\\rfloor+...=71+10+1=82$$\nThen $$7^{164}=7^{82\\cdot2}=(7^{82})^2|(500!)^2$$</p>\n" }, { "answer_id": 2111945, "score": 2, "text": "<p>In $\\{1,\\dots, 500\\}$, there are 71 multiples of 7, 10 multiples of $7^2$ and one multiple of $7^3$.<br>\nHow many multiples of $7$, $7^2$, $7^3$ do you have in $\\{501,\\dots, 1000\\}$? Same amount.</p>\n" }, { "answer_id": 2111933, "score": 25, "text": "<p>This is overkill, but by <a href=\"https://en.wikipedia.org/wiki/Lucas&#39;s_theorem\">Lucas's Theorem</a>$$\\frac{1000!}{(500!)^2}=\\binom{1000}{500} \\equiv \\binom{2}{1}\\binom{6}{3}\\binom{2}{1}\\binom{6}{3} \\equiv 4 \\pmod{7}$$\nAs $1000=2626_{7}$ and $500=1313_{7}$. </p>\n\n<p>Here is another, more elementary argument, from <a href=\"https://en.wikipedia.org/wiki/Legendre&#39;s_formula\">Legendre's formula</a>, $$\\nu_7(n!) = \\frac{n - s_7(n)}{6}$$\nSo $\\nu_7(1000!)=164, \\nu_7(500!)=82$. So the $7$'s in the denominator and the numerator cancel out. So we have the desired result. 
</p>\n" }, { "answer_id": 2111950, "score": 7, "text": "<p>It is a general formula that the number of times a prime $p$ occurs in the prime factorisation of $n!$ is given by</p>\n\n<p>$$\n\\left[\\frac{n}{p}\\right] + \\left[\\frac{n}{p^2}\\right] + \\left[\\frac{n}{p^3}\\right] + \\cdots\n$$</p>\n\n<p>where $[x]$ denotes the greatest integer $≤x$.</p>\n\n<p>In $1000!$ you'd have</p>\n\n<p>$$\n\\left[\\frac{1000}{7}\\right] + \\left[\\frac{1000}{7^2}\\right] + \\left[\\frac{1000}{7^3}\\right] + \\cdots = 164\n$$</p>\n\n<p>$7$'s, and in $500!$ you have</p>\n\n<p>$$\n\\left[\\frac{500}{7}\\right] + \\left[\\frac{500}{7^2}\\right] + \\left[\\frac{500}{7^3}\\right] + \\cdots = 82\n$$</p>\n\n<p>$7$'s. So you'll have a total of $164$ 7's in $(500!)^2$.</p>\n\n<p>The rest follows.</p>\n" }, { "answer_id": 2111942, "score": 5, "text": "<p>You can also overkill it by Kummer's theorem:</p>\n\n<p>Notice that $500$ is $1313$ in base $7$. Since every digit is smaller than $\\frac{7}{2}$ there are no carries when adding $1313+1313$ in base $7$.</p>\n\n<p>Therefore $\\binom{1000}{500}$ is not a multiple of $7$.</p>\n\n<hr>\n\n<p>In general we have that a prime $p$ divides $\\binom{2n}{n}$ if and only if $n$ has at least one digit larger than or equal to $p/2$ in its base $p$ expansion.</p>\n" }, { "answer_id": 2111938, "score": 4, "text": "<p>Consider prime factorisations:</p>\n\n<p>$$\\lfloor\\frac{1000}{7}\\rfloor+\\lfloor\\frac{1000}{7^2}\\rfloor+\\lfloor\\frac{1000}{7^3}\\rfloor+...=142+20+2=164$$\nThen $$7^{164}|1000!$$</p>\n\n<p>Similarly:</p>\n\n<p>$$\\lfloor\\frac{500}{7}\\rfloor+\\lfloor\\frac{500}{7^2}\\rfloor+\\lfloor\\frac{500}{7^3}\\rfloor+...=71+10+1=82$$\nThen $$7^{164}=7^{82\\cdot2}=(7^{82})^2|(500!)^2$$</p>\n" }, { "answer_id": 2111945, "score": 2, "text": "<p>In $\\{1,\\dots, 500\\}$, there are 71 multiples of 7, 10 multiples of $7^2$ and one multiple of $7^3$.<br>\nHow many multiples of $7$, $7^2$, $7^3$ do you have in $\\{501,\\dots, 1000\\}$? 
Same amount.</p>\n" }, { "answer_id": 2111933, "score": 25, "text": "<p>This is overkill, but by <a href=\"https://en.wikipedia.org/wiki/Lucas&#39;s_theorem\">Lucas's Theorem</a>$$\\frac{1000!}{(500!)^2}=\\binom{1000}{500} \\equiv \\binom{2}{1}\\binom{6}{3}\\binom{2}{1}\\binom{6}{3} \\equiv 4 \\pmod{7}$$\nAs $1000=2626_{7}$ and $500=1313_{7}$. </p>\n\n<p>Here is another, more elementary argument, from <a href=\"https://en.wikipedia.org/wiki/Legendre&#39;s_formula\">Legendre's formula</a>, $$\\nu_7(n!) = \\frac{n - s_7(n)}{6}$$\nSo $\\nu_7(1000!)=164, \\nu_7(500!)=82$. So the $7$'s in the denominator and the numerator cancel out. So we have the desired result. </p>\n" }, { "answer_id": 2111950, "score": 7, "text": "<p>It is a general formula that the number of times a prime $p$ occurs in the prime factorisation of $n!$ is given by</p>\n\n<p>$$\n\\left[\\frac{n}{p}\\right] + \\left[\\frac{n}{p^2}\\right] + \\left[\\frac{n}{p^3}\\right] + \\cdots\n$$</p>\n\n<p>where $[x]$ denotes the greatest integer $≤x$.</p>\n\n<p>In $1000!$ you'd have</p>\n\n<p>$$\n\\left[\\frac{1000}{7}\\right] + \\left[\\frac{1000}{7^2}\\right] + \\left[\\frac{1000}{7^3}\\right] + \\cdots = 164\n$$</p>\n\n<p>$7$'s, and in $500!$ you have</p>\n\n<p>$$\n\\left[\\frac{500}{7}\\right] + \\left[\\frac{500}{7^2}\\right] + \\left[\\frac{500}{7^3}\\right] + \\cdots = 82\n$$</p>\n\n<p>$7$'s. So you'll have a total of $164$ 7's in $(500!)^2$.</p>\n\n<p>The rest follows.</p>\n" }, { "answer_id": 2111942, "score": 5, "text": "<p>You can also overkill it by Kummer's theorem:</p>\n\n<p>Notice that $500$ is $1313$ in base $7$. 
Since every digit is smaller than $\\frac{7}{2}$ there are no carries when adding $1313+1313$ in base $7$.</p>\n\n<p>Therefore $\\binom{1000}{500}$ is not a multiple of $7$.</p>\n\n<hr>\n\n<p>In general we have that a prime $p$ divides $\\binom{2n}{n}$ if and only if $n$ has at least one digit larger than or equal to $p/2$ in its base $p$ expansion.</p>\n" }, { "answer_id": 2111938, "score": 4, "text": "<p>Consider prime factorisations:</p>\n\n<p>$$\\lfloor\\frac{1000}{7}\\rfloor+\\lfloor\\frac{1000}{7^2}\\rfloor+\\lfloor\\frac{1000}{7^3}\\rfloor+...=142+20+2=164$$\nThen $$7^{164}|1000!$$</p>\n\n<p>Similarly:</p>\n\n<p>$$\\lfloor\\frac{500}{7}\\rfloor+\\lfloor\\frac{500}{7^2}\\rfloor+\\lfloor\\frac{500}{7^3}\\rfloor+...=71+10+1=82$$\nThen $$7^{164}=7^{82\\cdot2}=(7^{82})^2|(500!)^2$$</p>\n" }, { "answer_id": 2111945, "score": 2, "text": "<p>In $\\{1,\\dots, 500\\}$, there are 71 multiples of 7, 10 multiples of $7^2$ and one multiple of $7^3$.<br>\nHow many multiples of $7$, $7^2$, $7^3$ do you have in $\\{501,\\dots, 1000\\}$? Same amount.</p>\n" }, { "answer_id": 2111933, "score": 25, "text": "<p>This is overkill, but by <a href=\"https://en.wikipedia.org/wiki/Lucas&#39;s_theorem\">Lucas's Theorem</a>$$\\frac{1000!}{(500!)^2}=\\binom{1000}{500} \\equiv \\binom{2}{1}\\binom{6}{3}\\binom{2}{1}\\binom{6}{3} \\equiv 4 \\pmod{7}$$\nAs $1000=2626_{7}$ and $500=1313_{7}$. </p>\n\n<p>Here is another, more elementary argument, from <a href=\"https://en.wikipedia.org/wiki/Legendre&#39;s_formula\">Legendre's formula</a>, $$\\nu_7(n!) = \\frac{n - s_7(n)}{6}$$\nSo $\\nu_7(1000!)=164, \\nu_7(500!)=82$. So the $7$'s in the denominator and the numerator cancel out. So we have the desired result. 
</p>\n" }, { "answer_id": 2111950, "score": 7, "text": "<p>It is a general formula that the number of times a prime $p$ occurs in the prime factorisation of $n!$ is given by</p>\n\n<p>$$\n\\left[\\frac{n}{p}\\right] + \\left[\\frac{n}{p^2}\\right] + \\left[\\frac{n}{p^3}\\right] + \\cdots\n$$</p>\n\n<p>where $[x]$ denotes the greatest integer $≤x$.</p>\n\n<p>In $1000!$ you'd have</p>\n\n<p>$$\n\\left[\\frac{1000}{7}\\right] + \\left[\\frac{1000}{7^2}\\right] + \\left[\\frac{1000}{7^3}\\right] + \\cdots = 164\n$$</p>\n\n<p>$7$'s, and in $500!$ you have</p>\n\n<p>$$\n\\left[\\frac{500}{7}\\right] + \\left[\\frac{500}{7^2}\\right] + \\left[\\frac{500}{7^3}\\right] + \\cdots = 82\n$$</p>\n\n<p>$7$'s. So you'll have a total of $164$ 7's in $(500!)^2$.</p>\n\n<p>The rest follows.</p>\n" }, { "answer_id": 2111942, "score": 5, "text": "<p>You can also overkill it by Kummer's theorem:</p>\n\n<p>Notice that $500$ is $1313$ in base $7$. Since every digit is smaller than $\\frac{7}{2}$ there are no carries when adding $1313+1313$ in base $7$.</p>\n\n<p>Therefore $\\binom{1000}{500}$ is not a multiple of $7$.</p>\n\n<hr>\n\n<p>In general we have that a prime $p$ divides $\\binom{2n}{n}$ if and only if $n$ has at least one digit larger than or equal to $p/2$ in its base $p$ expansion.</p>\n" }, { "answer_id": 2111938, "score": 4, "text": "<p>Consider prime factorisations:</p>\n\n<p>$$\\lfloor\\frac{1000}{7}\\rfloor+\\lfloor\\frac{1000}{7^2}\\rfloor+\\lfloor\\frac{1000}{7^3}\\rfloor+...=142+20+2=164$$\nThen $$7^{164}|1000!$$</p>\n\n<p>Similarly:</p>\n\n<p>$$\\lfloor\\frac{500}{7}\\rfloor+\\lfloor\\frac{500}{7^2}\\rfloor+\\lfloor\\frac{500}{7^3}\\rfloor+...=71+10+1=82$$\nThen $$7^{164}=7^{82\\cdot2}=(7^{82})^2|(500!)^2$$</p>\n" }, { "answer_id": 2111945, "score": 2, "text": "<p>In $\\{1,\\dots, 500\\}$, there are 71 multiples of 7, 10 multiples of $7^2$ and one multiple of $7^3$.<br>\nHow many multiples of $7$, $7^2$, $7^3$ do you have in $\\{501,\\dots, 1000\\}$? 
Same amount.</p>\n" }, { "answer_id": 2111933, "score": 25, "text": "<p>This is overkill, but by <a href=\"https://en.wikipedia.org/wiki/Lucas&#39;s_theorem\">Lucas's Theorem</a>$$\\frac{1000!}{(500!)^2}=\\binom{1000}{500} \\equiv \\binom{2}{1}\\binom{6}{3}\\binom{2}{1}\\binom{6}{3} \\equiv 4 \\pmod{7}$$\nAs $1000=2626_{7}$ and $500=1313_{7}$. </p>\n\n<p>Here is another, more elementary argument, from <a href=\"https://en.wikipedia.org/wiki/Legendre&#39;s_formula\">Legendre's formula</a>, $$\\nu_7(n!) = \\frac{n - s_7(n)}{6}$$\nSo $\\nu_7(1000!)=164, \\nu_7(500!)=82$. So the $7$'s in the denominator and the numerator cancel out. So we have the desired result. </p>\n" }, { "answer_id": 2111950, "score": 7, "text": "<p>It is a general formula that the number of times a prime $p$ occurs in the prime factorisation of $n!$ is given by</p>\n\n<p>$$\n\\left[\\frac{n}{p}\\right] + \\left[\\frac{n}{p^2}\\right] + \\left[\\frac{n}{p^3}\\right] + \\cdots\n$$</p>\n\n<p>where $[x]$ denotes the greatest integer $≤x$.</p>\n\n<p>In $1000!$ you'd have</p>\n\n<p>$$\n\\left[\\frac{1000}{7}\\right] + \\left[\\frac{1000}{7^2}\\right] + \\left[\\frac{1000}{7^3}\\right] + \\cdots = 164\n$$</p>\n\n<p>$7$'s, and in $500!$ you have</p>\n\n<p>$$\n\\left[\\frac{500}{7}\\right] + \\left[\\frac{500}{7^2}\\right] + \\left[\\frac{500}{7^3}\\right] + \\cdots = 82\n$$</p>\n\n<p>$7$'s. So you'll have a total of $164$ 7's in $(500!)^2$.</p>\n\n<p>The rest follows.</p>\n" }, { "answer_id": 2111942, "score": 5, "text": "<p>You can also overkill it by Kummer's theorem:</p>\n\n<p>Notice that $500$ is $1313$ in base $7$. 
Since every digit is smaller than $\\frac{7}{2}$ there are no carries when adding $1313+1313$ in base $7$.</p>\n\n<p>Therefore $\\binom{1000}{500}$ is not a multiple of $7$.</p>\n\n<hr>\n\n<p>In general we have that a prime $p$ divides $\\binom{2n}{n}$ if and only if $n$ has at least one digit larger than or equal to $p/2$ in its base $p$ expansion.</p>\n" }, { "answer_id": 2111938, "score": 4, "text": "<p>Consider prime factorisations:</p>\n\n<p>$$\\lfloor\\frac{1000}{7}\\rfloor+\\lfloor\\frac{1000}{7^2}\\rfloor+\\lfloor\\frac{1000}{7^3}\\rfloor+...=142+20+2=164$$\nThen $$7^{164}|1000!$$</p>\n\n<p>Similarly:</p>\n\n<p>$$\\lfloor\\frac{500}{7}\\rfloor+\\lfloor\\frac{500}{7^2}\\rfloor+\\lfloor\\frac{500}{7^3}\\rfloor+...=71+10+1=82$$\nThen $$7^{164}=7^{82\\cdot2}=(7^{82})^2|(500!)^2$$</p>\n" }, { "answer_id": 2111945, "score": 2, "text": "<p>In $\\{1,\\dots, 500\\}$, there are 71 multiples of 7, 10 multiples of $7^2$ and one multiple of $7^3$.<br>\nHow many multiples of $7$, $7^2$, $7^3$ do you have in $\\{501,\\dots, 1000\\}$? Same amount.</p>\n" }, { "answer_id": 2111933, "score": 25, "text": "<p>This is overkill, but by <a href=\"https://en.wikipedia.org/wiki/Lucas&#39;s_theorem\">Lucas's Theorem</a>$$\\frac{1000!}{(500!)^2}=\\binom{1000}{500} \\equiv \\binom{2}{1}\\binom{6}{3}\\binom{2}{1}\\binom{6}{3} \\equiv 4 \\pmod{7}$$\nAs $1000=2626_{7}$ and $500=1313_{7}$. </p>\n\n<p>Here is another, more elementary argument, from <a href=\"https://en.wikipedia.org/wiki/Legendre&#39;s_formula\">Legendre's formula</a>, $$\\nu_7(n!) = \\frac{n - s_7(n)}{6}$$\nSo $\\nu_7(1000!)=164, \\nu_7(500!)=82$. So the $7$'s in the denominator and the numerator cancel out. So we have the desired result. 
</p>\n" }, { "answer_id": 2111950, "score": 7, "text": "<p>It is a general formula that the number of times a prime $p$ occurs in the prime factorisation of $n!$ is given by</p>\n\n<p>$$\n\\left[\\frac{n}{p}\\right] + \\left[\\frac{n}{p^2}\\right] + \\left[\\frac{n}{p^3}\\right] + \\cdots\n$$</p>\n\n<p>where $[x]$ denotes the greatest integer $≤x$.</p>\n\n<p>In $1000!$ you'd have</p>\n\n<p>$$\n\\left[\\frac{1000}{7}\\right] + \\left[\\frac{1000}{7^2}\\right] + \\left[\\frac{1000}{7^3}\\right] + \\cdots = 164\n$$</p>\n\n<p>$7$'s, and in $500!$ you have</p>\n\n<p>$$\n\\left[\\frac{500}{7}\\right] + \\left[\\frac{500}{7^2}\\right] + \\left[\\frac{500}{7^3}\\right] + \\cdots = 82\n$$</p>\n\n<p>$7$'s. So you'll have a total of $164$ 7's in $(500!)^2$.</p>\n\n<p>The rest follows.</p>\n" }, { "answer_id": 2111942, "score": 5, "text": "<p>You can also overkill it by Kummer's theorem:</p>\n\n<p>Notice that $500$ is $1313$ in base $7$. Since every digit is smaller than $\\frac{7}{2}$ there are no carries when adding $1313+1313$ in base $7$.</p>\n\n<p>Therefore $\\binom{1000}{500}$ is not a multiple of $7$.</p>\n\n<hr>\n\n<p>In general we have that a prime $p$ divides $\\binom{2n}{n}$ if and only if $n$ has at least one digit larger than or equal to $p/2$ in its base $p$ expansion.</p>\n" }, { "answer_id": 2111938, "score": 4, "text": "<p>Consider prime factorisations:</p>\n\n<p>$$\\lfloor\\frac{1000}{7}\\rfloor+\\lfloor\\frac{1000}{7^2}\\rfloor+\\lfloor\\frac{1000}{7^3}\\rfloor+...=142+20+2=164$$\nThen $$7^{164}|1000!$$</p>\n\n<p>Similarly:</p>\n\n<p>$$\\lfloor\\frac{500}{7}\\rfloor+\\lfloor\\frac{500}{7^2}\\rfloor+\\lfloor\\frac{500}{7^3}\\rfloor+...=71+10+1=82$$\nThen $$7^{164}=7^{82\\cdot2}=(7^{82})^2|(500!)^2$$</p>\n" }, { "answer_id": 2111945, "score": 2, "text": "<p>In $\\{1,\\dots, 500\\}$, there are 71 multiples of 7, 10 multiples of $7^2$ and one multiple of $7^3$.<br>\nHow many multiples of $7$, $7^2$, $7^3$ do you have in $\\{501,\\dots, 1000\\}$? 
Same amount.</p>\n" } ]
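The factor-counting in the answers above (Legendre's formula: the exponent of a prime $p$ in $n!$ is $\sum_{k\ge 1}\lfloor n/p^k\rfloor$) is easy to verify numerically. Here is a minimal sketch; the helper name `nu_p` is chosen for illustration and does not come from any of the answers:

```python
def nu_p(n, p):
    """Exponent of the prime p in n!, by Legendre's formula:
    floor(n/p) + floor(n/p^2) + floor(n/p^3) + ..."""
    total = 0
    power = p
    while power <= n:
        total += n // power
        power *= p
    return total

# 142 + 20 + 2 = 164 factors of 7 in 1000!, and 71 + 10 + 1 = 82 in 500!,
# so 1000!/(500!)^2 carries 164 - 2*82 = 0 factors of 7,
# i.e. the quotient is not divisible by 7.
print(nu_p(1000, 7))                     # 164
print(nu_p(500, 7))                      # 82
print(nu_p(1000, 7) - 2 * nu_p(500, 7))  # 0
```

The same function reproduces the multiples-counting answer: the sum for $n=500$ is exactly "71 multiples of 7, 10 multiples of $7^2$, one multiple of $7^3$".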
Question: How to prove that $\frac{1000!}{(500!)^2}$ is not divisible by 7? Answer 1: <p>This is overkill, but by <a href="https://en.wikipedia.org/wiki/Lucas&#39;s_theorem">Lucas's Theorem</a>$$\frac{1000!}{(500!)^2}=\binom{1000}{500} \equiv \binom{2}{1}\binom{6}{3}\binom{2}{1}\binom{6}{3} \equiv 4 \pmod{7}$$
As $1000=2626_{7}$ and $500=1313_{7}$. </p>

<p>Here is another, more elementary argument, from <a href="https://en.wikipedia.org/wiki/Legendre&#39;s_formula">Legendre's formula</a>, $$\nu_7(n!) = \frac{n - s_7(n)}{6}$$
So $\nu_7(1000!)=164, \nu_7(500!)=82$. So the $7$'s in the denominator and the numerator cancel out. So we have the desired result. </p> Answer 2: <p>It is a general formula that the number of times a prime $p$ occurs in the prime factorisation of $n!$ is given by</p>

<p>$$
\left[\frac{n}{p}\right] + \left[\frac{n}{p^2}\right] + \left[\frac{n}{p^3}\right] + \cdots
$$</p>

<p>where $[x]$ denotes the greatest integer $≤x$.</p>

<p>In $1000!$ you'd have</p>

<p>$$
\left[\frac{1000}{7}\right] + \left[\frac{1000}{7^2}\right] + \left[\frac{1000}{7^3}\right] + \cdots = 164
$$</p>

<p>$7$'s, and in $500!$ you have</p>

<p>$$
\left[\frac{500}{7}\right] + \left[\frac{500}{7^2}\right] + \left[\frac{500}{7^3}\right] + \cdots = 82
$$</p>

<p>$7$'s. So you'll have a total of $164$ 7's in $(500!)^2$.</p>

<p>The rest follows.</p> Answer 3: <p>You can also overkill it by Kummer's theorem:</p>

<p>Notice that $500$ is $1313$ in base $7$. 
Since every digit is smaller than $\frac{7}{2}$ there are no carries when adding $1313+1313$ in base $7$.</p>

<p>Therefore $\binom{1000}{500}$ is not a multiple of $7$.</p>

<hr>

<p>In general we have that a prime $p$ divides $\binom{2n}{n}$ if and only if $n$ has at least one digit larger than or equal to $p/2$ in its base $p$ expansion.</p> Answer 4: <p>Consider prime factorisations:</p>

<p>$$\lfloor\frac{1000}{7}\rfloor+\lfloor\frac{1000}{7^2}\rfloor+\lfloor\frac{1000}{7^3}\rfloor+...=142+20+2=164$$
Then $$7^{164}|1000!$$</p>

<p>Similarly:</p>

<p>$$\lfloor\frac{500}{7}\rfloor+\lfloor\frac{500}{7^2}\rfloor+\lfloor\frac{500}{7^3}\rfloor+...=71+10+1=82$$
Then $$7^{164}=7^{82\cdot2}=(7^{82})^2|(500!)^2$$</p> Answer 5: <p>In $\{1,\dots, 500\}$, there are 71 multiples of 7, 10 multiples of $7^2$ and one multiple of $7^3$.<br>
How many multiples of $7$, $7^2$, $7^3$ do you have in $\{501,\dots, 1000\}$? Same amount.</p> Answer 6: <p>This is overkill, but by <a href="https://en.wikipedia.org/wiki/Lucas&#39;s_theorem">Lucas's Theorem</a>$$\frac{1000!}{(500!)^2}=\binom{1000}{500} \equiv \binom{2}{1}\binom{6}{3}\binom{2}{1}\binom{6}{3} \equiv 4 \pmod{7}$$
As $1000=2626_{7}$ and $500=1313_{7}$. </p>

<p>Here is another, more elementary argument, from <a href="https://en.wikipedia.org/wiki/Legendre&#39;s_formula">Legendre's formula</a>, $$\nu_7(n!) = \frac{n - s_7(n)}{6}$$
So $\nu_7(1000!)=164, \nu_7(500!)=82$. So the $7$'s in the denominator and the numerator cancel out. So we have the desired result. 
</p> Answer 7: <p>It is a general formula that the number of times a prime $p$ occurs in the prime factorisation of $n!$ is given by</p> <p>$$ \left[\frac{n}{p}\right] + \left[\frac{n}{p^2}\right] + \left[\frac{n}{p^3}\right] + \cdots $$</p> <p>where $[x]$ denotes the greatest integer $≤x$.</p> <p>In $1000!$ you'd have</p> <p>$$ \left[\frac{1000}{7}\right] + \left[\frac{1000}{7^2}\right] + \left[\frac{1000}{7^3}\right] + \cdots = 164 $$</p> <p>$7$'s, and in $500!$ you have</p> <p>$$ \left[\frac{500}{7}\right] + \left[\frac{500}{7^2}\right] + \left[\frac{500}{7^3}\right] + \cdots = 82 $$</p> <p>$7$'s. So you'll have a total of $164$ 7's in $(500!)^2$.</p> <p>The rest follows.</p> Answer 8: <p>You can also overkill it by Kummer's theorem:</p> <p>Notice that $500$ is $1313$ in base $7$. Since every digit is smaller than $\frac{7}{2}$ there are no carries when adding $1313+1313$ in base $7$.</p> <p>Therefore $\binom{1000}{500}$ is not a multiple of $7$.</p> <hr> <p>In general we have that a prime $p$ divides $\binom{2n}{n}$ if and only if $n$ has at least one digit larger than or equal to $p/2$ in its base $p$ expansion.</p> Answer 9: <p>Consider prime factorisations:</p> <p>$$\lfloor\frac{1000}{7}\rfloor+\lfloor\frac{1000}{7^2}\rfloor+\lfloor\frac{1000}{7^3}\rfloor+...=142+20+2=164$$ Then $$7^{164}|1000!$$</p> <p>Similarly:</p> <p>$$\lfloor\frac{500}{7}\rfloor+\lfloor\frac{500}{7^2}\rfloor+\lfloor\frac{500}{7^3}\rfloor+...=71+10+1=82$$ Then $$7^{164}=7^{82\cdot2}=(7^{82})^2|(500!)^2$$</p> Answer 10: <p>In $\{1,\dots, 500\}$, there are 71 multiples of 7, 10 multiples of $7^2$ and one multiple of $7^3$.<br> How many multiples of $7$, $7^2$, $7^3$ do you have in $\{501,\dots, 1000\}$? 
Same amount.</p> Answer 11: <p>This is overkill, but by <a href="https://en.wikipedia.org/wiki/Lucas&#39;s_theorem">Lucas's Theorem</a>$$\frac{1000!}{(500!)^2}=\binom{1000}{500} \equiv \binom{2}{1}\binom{6}{3}\binom{2}{1}\binom{6}{3} \equiv 4 \pmod{7}$$
As $1000=2626_{7}$ and $500=1313_{7}$. </p>

<p>Here is another, more elementary argument, from <a href="https://en.wikipedia.org/wiki/Legendre&#39;s_formula">Legendre's formula</a>, $$\nu_7(n!) = \frac{n - s_7(n)}{6}$$
So $\nu_7(1000!)=164, \nu_7(500!)=82$. So the $7$'s in the denominator and the numerator cancel out. So we have the desired result. </p> Answer 12: <p>It is a general formula that the number of times a prime $p$ occurs in the prime factorisation of $n!$ is given by</p>

<p>$$
\left[\frac{n}{p}\right] + \left[\frac{n}{p^2}\right] + \left[\frac{n}{p^3}\right] + \cdots
$$</p>

<p>where $[x]$ denotes the greatest integer $≤x$.</p>

<p>In $1000!$ you'd have</p>

<p>$$
\left[\frac{1000}{7}\right] + \left[\frac{1000}{7^2}\right] + \left[\frac{1000}{7^3}\right] + \cdots = 164
$$</p>

<p>$7$'s, and in $500!$ you have</p>

<p>$$
\left[\frac{500}{7}\right] + \left[\frac{500}{7^2}\right] + \left[\frac{500}{7^3}\right] + \cdots = 82
$$</p>

<p>$7$'s. So you'll have a total of $164$ 7's in $(500!)^2$.</p>

<p>The rest follows.</p> Answer 13: <p>You can also overkill it by Kummer's theorem:</p>

<p>Notice that $500$ is $1313$ in base $7$. 
Since every digit is smaller than $\frac{7}{2}$ there are no carries when adding $1313+1313$ in base $7$.</p>

<p>Therefore $\binom{1000}{500}$ is not a multiple of $7$.</p>

<hr>

<p>In general we have that a prime $p$ divides $\binom{2n}{n}$ if and only if $n$ has at least one digit larger than or equal to $p/2$ in its base $p$ expansion.</p> Answer 14: <p>Consider prime factorisations:</p>

<p>$$\lfloor\frac{1000}{7}\rfloor+\lfloor\frac{1000}{7^2}\rfloor+\lfloor\frac{1000}{7^3}\rfloor+...=142+20+2=164$$
Then $$7^{164}|1000!$$</p>

<p>Similarly:</p>

<p>$$\lfloor\frac{500}{7}\rfloor+\lfloor\frac{500}{7^2}\rfloor+\lfloor\frac{500}{7^3}\rfloor+...=71+10+1=82$$
Then $$7^{164}=7^{82\cdot2}=(7^{82})^2|(500!)^2$$</p> Answer 15: <p>In $\{1,\dots, 500\}$, there are 71 multiples of 7, 10 multiples of $7^2$ and one multiple of $7^3$.<br>
How many multiples of $7$, $7^2$, $7^3$ do you have in $\{501,\dots, 1000\}$? Same amount.</p> Answer 16: <p>This is overkill, but by <a href="https://en.wikipedia.org/wiki/Lucas&#39;s_theorem">Lucas's Theorem</a>$$\frac{1000!}{(500!)^2}=\binom{1000}{500} \equiv \binom{2}{1}\binom{6}{3}\binom{2}{1}\binom{6}{3} \equiv 4 \pmod{7}$$
As $1000=2626_{7}$ and $500=1313_{7}$. </p>

<p>Here is another, more elementary argument, from <a href="https://en.wikipedia.org/wiki/Legendre&#39;s_formula">Legendre's formula</a>, $$\nu_7(n!) = \frac{n - s_7(n)}{6}$$
So $\nu_7(1000!)=164, \nu_7(500!)=82$. So the $7$'s in the denominator and the numerator cancel out. So we have the desired result. 
</p> Answer 17: <p>It is a general formula that the number of times a prime $p$ occurs in the prime factorisation of $n!$ is given by</p> <p>$$ \left[\frac{n}{p}\right] + \left[\frac{n}{p^2}\right] + \left[\frac{n}{p^3}\right] + \cdots $$</p> <p>where $[x]$ denotes the greatest integer $≤x$.</p> <p>In $1000!$ you'd have</p> <p>$$ \left[\frac{1000}{7}\right] + \left[\frac{1000}{7^2}\right] + \left[\frac{1000}{7^3}\right] + \cdots = 164 $$</p> <p>$7$'s, and in $500!$ you have</p> <p>$$ \left[\frac{500}{7}\right] + \left[\frac{500}{7^2}\right] + \left[\frac{500}{7^3}\right] + \cdots = 82 $$</p> <p>$7$'s. So you'll have a total of $164$ 7's in $(500!)^2$.</p> <p>The rest follows.</p> Answer 18: <p>You can also overkill it by Kummer's theorem:</p> <p>Notice that $500$ is $1313$ in base $7$. Since every digit is smaller than $\frac{7}{2}$ there are no carries when adding $1313+1313$ in base $7$.</p> <p>Therefore $\binom{1000}{500}$ is not a multiple of $7$.</p> <hr> <p>In general we have that a prime $p$ divides $\binom{2n}{n}$ if and only if $n$ has at least one digit larger than or equal to $p/2$ in its base $p$ expansion.</p> Answer 19: <p>Consider prime factorisations:</p> <p>$$\lfloor\frac{1000}{7}\rfloor+\lfloor\frac{1000}{7^2}\rfloor+\lfloor\frac{1000}{7^3}\rfloor+...=142+20+2=164$$ Then $$7^{164}|1000!$$</p> <p>Similarly:</p> <p>$$\lfloor\frac{500}{7}\rfloor+\lfloor\frac{500}{7^2}\rfloor+\lfloor\frac{500}{7^3}\rfloor+...=71+10+1=82$$ Then $$7^{164}=7^{82\cdot2}=(7^{82})^2|(500!)^2$$</p> Answer 20: <p>In $\{1,\dots, 500\}$, there are 71 multiples of 7, 10 multiples of $7^2$ and one multiple of $7^3$.<br> How many multiples of $7$, $7^2$, $7^3$ do you have in $\{501,\dots, 1000\}$? 
Same amount.</p> Answer 21: <p>This is overkill, but by <a href="https://en.wikipedia.org/wiki/Lucas&#39;s_theorem">Lucas's Theorem</a>$$\frac{1000!}{(500!)^2}=\binom{1000}{500} \equiv \binom{2}{1}\binom{6}{3}\binom{2}{1}\binom{6}{3} \equiv 4 \pmod{7}$$ As $1000=2626_{7}$ and $500=1313_{7}$. </p> <p>Here is another , more elementary argument, from <a href="https://en.wikipedia.org/wiki/Legendre&#39;s_formula">Legendre's formula</a>, $$\nu_7(n!) = \frac{n - s_7(n)}{6}$$ So $\nu_7(100!)=164, \nu_7 (50!)=82$. So the $7$ s in the denominator and the numerator cancel out. So we have the desired result. </p> Answer 22: <p>It is a general formula that the number of times a prime $p$ occurs in the prime factorisation of $n!$ is given by</p> <p>$$ \left[\frac{n}{p}\right] + \left[\frac{n}{p^2}\right] + \left[\frac{n}{p^3}\right] + \cdots $$</p> <p>where $[x]$ denotes the greatest integer $≤x$.</p> <p>In $1000!$ you'd have</p> <p>$$ \left[\frac{1000}{7}\right] + \left[\frac{1000}{7^2}\right] + \left[\frac{1000}{7^3}\right] + \cdots = 164 $$</p> <p>$7$'s, and in $500!$ you have</p> <p>$$ \left[\frac{500}{7}\right] + \left[\frac{500}{7^2}\right] + \left[\frac{500}{7^3}\right] + \cdots = 82 $$</p> <p>$7$'s. So you'll have a total of $164$ 7's in $(500!)^2$.</p> <p>The rest follows.</p> Answer 23: <p>You can also overkill it by Kummer's theorem:</p> <p>Notice that $500$ is $1313$ in base $7$. 
Since every digit is smaller than $\frac{7}{2}$ there are no carries when adding $1313+1313$ in base $7$.</p> <p>Therefore $\binom{1000}{500}$ is not a multiple of $7$.</p> <hr> <p>In general we have that a prime $p$ divides $\binom{2n}{n}$ if and only if $n$ has at least one digit larger than or equal to $p/2$ in its base $p$ expansion.</p> Answer 24: <p>Consider prime factorisations:</p> <p>$$\lfloor\frac{1000}{7}\rfloor+\lfloor\frac{1000}{7^2}\rfloor+\lfloor\frac{1000}{7^3}\rfloor+...=142+20+2=164$$ Then $$7^{164}|1000!$$</p> <p>Similarly:</p> <p>$$\lfloor\frac{500}{7}\rfloor+\lfloor\frac{500}{7^2}\rfloor+\lfloor\frac{500}{7^3}\rfloor+...=71+10+1=82$$ Then $$7^{164}=7^{82\cdot2}=(7^{82})^2|(500!)^2$$</p> Answer 25: <p>In $\{1,\dots, 500\}$, there are 71 multiples of 7, 10 multiples of $7^2$ and one multiple of $7^3$.<br> How many multiples of $7$, $7^2$, $7^3$ do you have in $\{501,\dots, 1000\}$? Same amount.</p> Answer 26: <p>This is overkill, but by <a href="https://en.wikipedia.org/wiki/Lucas&#39;s_theorem">Lucas's Theorem</a>$$\frac{1000!}{(500!)^2}=\binom{1000}{500} \equiv \binom{2}{1}\binom{6}{3}\binom{2}{1}\binom{6}{3} \equiv 4 \pmod{7}$$ As $1000=2626_{7}$ and $500=1313_{7}$. </p> <p>Here is another , more elementary argument, from <a href="https://en.wikipedia.org/wiki/Legendre&#39;s_formula">Legendre's formula</a>, $$\nu_7(n!) = \frac{n - s_7(n)}{6}$$ So $\nu_7(100!)=164, \nu_7 (50!)=82$. So the $7$ s in the denominator and the numerator cancel out. So we have the desired result. 
</p> Answer 27: <p>It is a general formula that the number of times a prime $p$ occurs in the prime factorisation of $n!$ is given by</p> <p>$$ \left[\frac{n}{p}\right] + \left[\frac{n}{p^2}\right] + \left[\frac{n}{p^3}\right] + \cdots $$</p> <p>where $[x]$ denotes the greatest integer $≤x$.</p> <p>In $1000!$ you'd have</p> <p>$$ \left[\frac{1000}{7}\right] + \left[\frac{1000}{7^2}\right] + \left[\frac{1000}{7^3}\right] + \cdots = 164 $$</p> <p>$7$'s, and in $500!$ you have</p> <p>$$ \left[\frac{500}{7}\right] + \left[\frac{500}{7^2}\right] + \left[\frac{500}{7^3}\right] + \cdots = 82 $$</p> <p>$7$'s. So you'll have a total of $164$ 7's in $(500!)^2$.</p> <p>The rest follows.</p> Answer 28: <p>You can also overkill it by Kummer's theorem:</p> <p>Notice that $500$ is $1313$ in base $7$. Since every digit is smaller than $\frac{7}{2}$ there are no carries when adding $1313+1313$ in base $7$.</p> <p>Therefore $\binom{1000}{500}$ is not a multiple of $7$.</p> <hr> <p>In general we have that a prime $p$ divides $\binom{2n}{n}$ if and only if $n$ has at least one digit larger than or equal to $p/2$ in its base $p$ expansion.</p> Answer 29: <p>Consider prime factorisations:</p> <p>$$\lfloor\frac{1000}{7}\rfloor+\lfloor\frac{1000}{7^2}\rfloor+\lfloor\frac{1000}{7^3}\rfloor+...=142+20+2=164$$ Then $$7^{164}|1000!$$</p> <p>Similarly:</p> <p>$$\lfloor\frac{500}{7}\rfloor+\lfloor\frac{500}{7^2}\rfloor+\lfloor\frac{500}{7^3}\rfloor+...=71+10+1=82$$ Then $$7^{164}=7^{82\cdot2}=(7^{82})^2|(500!)^2$$</p> Answer 30: <p>In $\{1,\dots, 500\}$, there are 71 multiples of 7, 10 multiples of $7^2$ and one multiple of $7^3$.<br> How many multiples of $7$, $7^2$, $7^3$ do you have in $\{501,\dots, 1000\}$? 
Same amount.</p> Answer 31: <p>This is overkill, but by <a href="https://en.wikipedia.org/wiki/Lucas&#39;s_theorem">Lucas's Theorem</a>$$\frac{1000!}{(500!)^2}=\binom{1000}{500} \equiv \binom{2}{1}\binom{6}{3}\binom{2}{1}\binom{6}{3} \equiv 4 \pmod{7}$$ As $1000=2626_{7}$ and $500=1313_{7}$. </p> <p>Here is another , more elementary argument, from <a href="https://en.wikipedia.org/wiki/Legendre&#39;s_formula">Legendre's formula</a>, $$\nu_7(n!) = \frac{n - s_7(n)}{6}$$ So $\nu_7(100!)=164, \nu_7 (50!)=82$. So the $7$ s in the denominator and the numerator cancel out. So we have the desired result. </p> Answer 32: <p>It is a general formula that the number of times a prime $p$ occurs in the prime factorisation of $n!$ is given by</p> <p>$$ \left[\frac{n}{p}\right] + \left[\frac{n}{p^2}\right] + \left[\frac{n}{p^3}\right] + \cdots $$</p> <p>where $[x]$ denotes the greatest integer $≤x$.</p> <p>In $1000!$ you'd have</p> <p>$$ \left[\frac{1000}{7}\right] + \left[\frac{1000}{7^2}\right] + \left[\frac{1000}{7^3}\right] + \cdots = 164 $$</p> <p>$7$'s, and in $500!$ you have</p> <p>$$ \left[\frac{500}{7}\right] + \left[\frac{500}{7^2}\right] + \left[\frac{500}{7^3}\right] + \cdots = 82 $$</p> <p>$7$'s. So you'll have a total of $164$ 7's in $(500!)^2$.</p> <p>The rest follows.</p> Answer 33: <p>You can also overkill it by Kummer's theorem:</p> <p>Notice that $500$ is $1313$ in base $7$. 
Since every digit is smaller than $\frac{7}{2}$ there are no carries when adding $1313+1313$ in base $7$.</p> <p>Therefore $\binom{1000}{500}$ is not a multiple of $7$.</p> <hr> <p>In general we have that a prime $p$ divides $\binom{2n}{n}$ if and only if $n$ has at least one digit larger than or equal to $p/2$ in its base $p$ expansion.</p> Answer 34: <p>Consider prime factorisations:</p> <p>$$\lfloor\frac{1000}{7}\rfloor+\lfloor\frac{1000}{7^2}\rfloor+\lfloor\frac{1000}{7^3}\rfloor+...=142+20+2=164$$ Then $$7^{164}|1000!$$</p> <p>Similarly:</p> <p>$$\lfloor\frac{500}{7}\rfloor+\lfloor\frac{500}{7^2}\rfloor+\lfloor\frac{500}{7^3}\rfloor+...=71+10+1=82$$ Then $$7^{164}=7^{82\cdot2}=(7^{82})^2|(500!)^2$$</p> Answer 35: <p>In $\{1,\dots, 500\}$, there are 71 multiples of 7, 10 multiples of $7^2$ and one multiple of $7^3$.<br> How many multiples of $7$, $7^2$, $7^3$ do you have in $\{501,\dots, 1000\}$? Same amount.</p> Answer 36: <p>This is overkill, but by <a href="https://en.wikipedia.org/wiki/Lucas&#39;s_theorem">Lucas's Theorem</a>$$\frac{1000!}{(500!)^2}=\binom{1000}{500} \equiv \binom{2}{1}\binom{6}{3}\binom{2}{1}\binom{6}{3} \equiv 4 \pmod{7}$$ As $1000=2626_{7}$ and $500=1313_{7}$. </p> <p>Here is another , more elementary argument, from <a href="https://en.wikipedia.org/wiki/Legendre&#39;s_formula">Legendre's formula</a>, $$\nu_7(n!) = \frac{n - s_7(n)}{6}$$ So $\nu_7(100!)=164, \nu_7 (50!)=82$. So the $7$ s in the denominator and the numerator cancel out. So we have the desired result. 
</p> Answer 37: <p>It is a general formula that the number of times a prime $p$ occurs in the prime factorisation of $n!$ is given by</p> <p>$$ \left[\frac{n}{p}\right] + \left[\frac{n}{p^2}\right] + \left[\frac{n}{p^3}\right] + \cdots $$</p> <p>where $[x]$ denotes the greatest integer $≤x$.</p> <p>In $1000!$ you'd have</p> <p>$$ \left[\frac{1000}{7}\right] + \left[\frac{1000}{7^2}\right] + \left[\frac{1000}{7^3}\right] + \cdots = 164 $$</p> <p>$7$'s, and in $500!$ you have</p> <p>$$ \left[\frac{500}{7}\right] + \left[\frac{500}{7^2}\right] + \left[\frac{500}{7^3}\right] + \cdots = 82 $$</p> <p>$7$'s. So you'll have a total of $164$ 7's in $(500!)^2$.</p> <p>The rest follows.</p> Answer 38: <p>You can also overkill it by Kummer's theorem:</p> <p>Notice that $500$ is $1313$ in base $7$. Since every digit is smaller than $\frac{7}{2}$ there are no carries when adding $1313+1313$ in base $7$.</p> <p>Therefore $\binom{1000}{500}$ is not a multiple of $7$.</p> <hr> <p>In general we have that a prime $p$ divides $\binom{2n}{n}$ if and only if $n$ has at least one digit larger than or equal to $p/2$ in its base $p$ expansion.</p> Answer 39: <p>Consider prime factorisations:</p> <p>$$\lfloor\frac{1000}{7}\rfloor+\lfloor\frac{1000}{7^2}\rfloor+\lfloor\frac{1000}{7^3}\rfloor+...=142+20+2=164$$ Then $$7^{164}|1000!$$</p> <p>Similarly:</p> <p>$$\lfloor\frac{500}{7}\rfloor+\lfloor\frac{500}{7^2}\rfloor+\lfloor\frac{500}{7^3}\rfloor+...=71+10+1=82$$ Then $$7^{164}=7^{82\cdot2}=(7^{82})^2|(500!)^2$$</p> Answer 40: <p>In $\{1,\dots, 500\}$, there are 71 multiples of 7, 10 multiples of $7^2$ and one multiple of $7^3$.<br> How many multiples of $7$, $7^2$, $7^3$ do you have in $\{501,\dots, 1000\}$? 
Same amount.</p> Answer 41: <p>This is overkill, but by <a href="https://en.wikipedia.org/wiki/Lucas&#39;s_theorem">Lucas's Theorem</a>$$\frac{1000!}{(500!)^2}=\binom{1000}{500} \equiv \binom{2}{1}\binom{6}{3}\binom{2}{1}\binom{6}{3} \equiv 4 \pmod{7}$$ As $1000=2626_{7}$ and $500=1313_{7}$. </p> <p>Here is another , more elementary argument, from <a href="https://en.wikipedia.org/wiki/Legendre&#39;s_formula">Legendre's formula</a>, $$\nu_7(n!) = \frac{n - s_7(n)}{6}$$ So $\nu_7(100!)=164, \nu_7 (50!)=82$. So the $7$ s in the denominator and the numerator cancel out. So we have the desired result. </p> Answer 42: <p>It is a general formula that the number of times a prime $p$ occurs in the prime factorisation of $n!$ is given by</p> <p>$$ \left[\frac{n}{p}\right] + \left[\frac{n}{p^2}\right] + \left[\frac{n}{p^3}\right] + \cdots $$</p> <p>where $[x]$ denotes the greatest integer $≤x$.</p> <p>In $1000!$ you'd have</p> <p>$$ \left[\frac{1000}{7}\right] + \left[\frac{1000}{7^2}\right] + \left[\frac{1000}{7^3}\right] + \cdots = 164 $$</p> <p>$7$'s, and in $500!$ you have</p> <p>$$ \left[\frac{500}{7}\right] + \left[\frac{500}{7^2}\right] + \left[\frac{500}{7^3}\right] + \cdots = 82 $$</p> <p>$7$'s. So you'll have a total of $164$ 7's in $(500!)^2$.</p> <p>The rest follows.</p> Answer 43: <p>You can also overkill it by Kummer's theorem:</p> <p>Notice that $500$ is $1313$ in base $7$. 
Since every digit is smaller than $\frac{7}{2}$ there are no carries when adding $1313+1313$ in base $7$.</p> <p>Therefore $\binom{1000}{500}$ is not a multiple of $7$.</p> <hr> <p>In general we have that a prime $p$ divides $\binom{2n}{n}$ if and only if $n$ has at least one digit larger than or equal to $p/2$ in its base $p$ expansion.</p> Answer 44: <p>Consider prime factorisations:</p> <p>$$\lfloor\frac{1000}{7}\rfloor+\lfloor\frac{1000}{7^2}\rfloor+\lfloor\frac{1000}{7^3}\rfloor+...=142+20+2=164$$ Then $$7^{164}|1000!$$</p> <p>Similarly:</p> <p>$$\lfloor\frac{500}{7}\rfloor+\lfloor\frac{500}{7^2}\rfloor+\lfloor\frac{500}{7^3}\rfloor+...=71+10+1=82$$ Then $$7^{164}=7^{82\cdot2}=(7^{82})^2|(500!)^2$$</p> Answer 45: <p>In $\{1,\dots, 500\}$, there are 71 multiples of 7, 10 multiples of $7^2$ and one multiple of $7^3$.<br> How many multiples of $7$, $7^2$, $7^3$ do you have in $\{501,\dots, 1000\}$? Same amount.</p> Answer 46: <p>This is overkill, but by <a href="https://en.wikipedia.org/wiki/Lucas&#39;s_theorem">Lucas's Theorem</a>$$\frac{1000!}{(500!)^2}=\binom{1000}{500} \equiv \binom{2}{1}\binom{6}{3}\binom{2}{1}\binom{6}{3} \equiv 4 \pmod{7}$$ As $1000=2626_{7}$ and $500=1313_{7}$. </p> <p>Here is another , more elementary argument, from <a href="https://en.wikipedia.org/wiki/Legendre&#39;s_formula">Legendre's formula</a>, $$\nu_7(n!) = \frac{n - s_7(n)}{6}$$ So $\nu_7(100!)=164, \nu_7 (50!)=82$. So the $7$ s in the denominator and the numerator cancel out. So we have the desired result. 
</p> Answer 47: <p>It is a general formula that the number of times a prime $p$ occurs in the prime factorisation of $n!$ is given by</p> <p>$$ \left[\frac{n}{p}\right] + \left[\frac{n}{p^2}\right] + \left[\frac{n}{p^3}\right] + \cdots $$</p> <p>where $[x]$ denotes the greatest integer $≤x$.</p> <p>In $1000!$ you'd have</p> <p>$$ \left[\frac{1000}{7}\right] + \left[\frac{1000}{7^2}\right] + \left[\frac{1000}{7^3}\right] + \cdots = 164 $$</p> <p>$7$'s, and in $500!$ you have</p> <p>$$ \left[\frac{500}{7}\right] + \left[\frac{500}{7^2}\right] + \left[\frac{500}{7^3}\right] + \cdots = 82 $$</p> <p>$7$'s. So you'll have a total of $164$ 7's in $(500!)^2$.</p> <p>The rest follows.</p> Answer 48: <p>You can also overkill it by Kummer's theorem:</p> <p>Notice that $500$ is $1313$ in base $7$. Since every digit is smaller than $\frac{7}{2}$ there are no carries when adding $1313+1313$ in base $7$.</p> <p>Therefore $\binom{1000}{500}$ is not a multiple of $7$.</p> <hr> <p>In general we have that a prime $p$ divides $\binom{2n}{n}$ if and only if $n$ has at least one digit larger than or equal to $p/2$ in its base $p$ expansion.</p> Answer 49: <p>Consider prime factorisations:</p> <p>$$\lfloor\frac{1000}{7}\rfloor+\lfloor\frac{1000}{7^2}\rfloor+\lfloor\frac{1000}{7^3}\rfloor+...=142+20+2=164$$ Then $$7^{164}|1000!$$</p> <p>Similarly:</p> <p>$$\lfloor\frac{500}{7}\rfloor+\lfloor\frac{500}{7^2}\rfloor+\lfloor\frac{500}{7^3}\rfloor+...=71+10+1=82$$ Then $$7^{164}=7^{82\cdot2}=(7^{82})^2|(500!)^2$$</p> Answer 50: <p>In $\{1,\dots, 500\}$, there are 71 multiples of 7, 10 multiples of $7^2$ and one multiple of $7^3$.<br> How many multiples of $7$, $7^2$, $7^3$ do you have in $\{501,\dots, 1000\}$? 
Same amount.</p> Answer 51: <p>This is overkill, but by <a href="https://en.wikipedia.org/wiki/Lucas&#39;s_theorem">Lucas's Theorem</a>$$\frac{1000!}{(500!)^2}=\binom{1000}{500} \equiv \binom{2}{1}\binom{6}{3}\binom{2}{1}\binom{6}{3} \equiv 4 \pmod{7}$$ As $1000=2626_{7}$ and $500=1313_{7}$. </p> <p>Here is another , more elementary argument, from <a href="https://en.wikipedia.org/wiki/Legendre&#39;s_formula">Legendre's formula</a>, $$\nu_7(n!) = \frac{n - s_7(n)}{6}$$ So $\nu_7(100!)=164, \nu_7 (50!)=82$. So the $7$ s in the denominator and the numerator cancel out. So we have the desired result. </p> Answer 52: <p>It is a general formula that the number of times a prime $p$ occurs in the prime factorisation of $n!$ is given by</p> <p>$$ \left[\frac{n}{p}\right] + \left[\frac{n}{p^2}\right] + \left[\frac{n}{p^3}\right] + \cdots $$</p> <p>where $[x]$ denotes the greatest integer $≤x$.</p> <p>In $1000!$ you'd have</p> <p>$$ \left[\frac{1000}{7}\right] + \left[\frac{1000}{7^2}\right] + \left[\frac{1000}{7^3}\right] + \cdots = 164 $$</p> <p>$7$'s, and in $500!$ you have</p> <p>$$ \left[\frac{500}{7}\right] + \left[\frac{500}{7^2}\right] + \left[\frac{500}{7^3}\right] + \cdots = 82 $$</p> <p>$7$'s. So you'll have a total of $164$ 7's in $(500!)^2$.</p> <p>The rest follows.</p> Answer 53: <p>You can also overkill it by Kummer's theorem:</p> <p>Notice that $500$ is $1313$ in base $7$. 
Since every digit is smaller than $\frac{7}{2}$ there are no carries when adding $1313+1313$ in base $7$.</p> <p>Therefore $\binom{1000}{500}$ is not a multiple of $7$.</p> <hr> <p>In general we have that a prime $p$ divides $\binom{2n}{n}$ if and only if $n$ has at least one digit larger than or equal to $p/2$ in its base $p$ expansion.</p> Answer 54: <p>Consider prime factorisations:</p> <p>$$\lfloor\frac{1000}{7}\rfloor+\lfloor\frac{1000}{7^2}\rfloor+\lfloor\frac{1000}{7^3}\rfloor+...=142+20+2=164$$ Then $$7^{164}|1000!$$</p> <p>Similarly:</p> <p>$$\lfloor\frac{500}{7}\rfloor+\lfloor\frac{500}{7^2}\rfloor+\lfloor\frac{500}{7^3}\rfloor+...=71+10+1=82$$ Then $$7^{164}=7^{82\cdot2}=(7^{82})^2|(500!)^2$$</p> Answer 55: <p>In $\{1,\dots, 500\}$, there are 71 multiples of 7, 10 multiples of $7^2$ and one multiple of $7^3$.<br> How many multiples of $7$, $7^2$, $7^3$ do you have in $\{501,\dots, 1000\}$? Same amount.</p> Answer 56: <p>This is overkill, but by <a href="https://en.wikipedia.org/wiki/Lucas&#39;s_theorem">Lucas's Theorem</a>$$\frac{1000!}{(500!)^2}=\binom{1000}{500} \equiv \binom{2}{1}\binom{6}{3}\binom{2}{1}\binom{6}{3} \equiv 4 \pmod{7}$$ As $1000=2626_{7}$ and $500=1313_{7}$. </p> <p>Here is another , more elementary argument, from <a href="https://en.wikipedia.org/wiki/Legendre&#39;s_formula">Legendre's formula</a>, $$\nu_7(n!) = \frac{n - s_7(n)}{6}$$ So $\nu_7(100!)=164, \nu_7 (50!)=82$. So the $7$ s in the denominator and the numerator cancel out. So we have the desired result. 
</p> Answer 57: <p>It is a general formula that the number of times a prime $p$ occurs in the prime factorisation of $n!$ is given by</p> <p>$$ \left[\frac{n}{p}\right] + \left[\frac{n}{p^2}\right] + \left[\frac{n}{p^3}\right] + \cdots $$</p> <p>where $[x]$ denotes the greatest integer $≤x$.</p> <p>In $1000!$ you'd have</p> <p>$$ \left[\frac{1000}{7}\right] + \left[\frac{1000}{7^2}\right] + \left[\frac{1000}{7^3}\right] + \cdots = 164 $$</p> <p>$7$'s, and in $500!$ you have</p> <p>$$ \left[\frac{500}{7}\right] + \left[\frac{500}{7^2}\right] + \left[\frac{500}{7^3}\right] + \cdots = 82 $$</p> <p>$7$'s. So you'll have a total of $164$ 7's in $(500!)^2$.</p> <p>The rest follows.</p> Answer 58: <p>You can also overkill it by Kummer's theorem:</p> <p>Notice that $500$ is $1313$ in base $7$. Since every digit is smaller than $\frac{7}{2}$ there are no carries when adding $1313+1313$ in base $7$.</p> <p>Therefore $\binom{1000}{500}$ is not a multiple of $7$.</p> <hr> <p>In general we have that a prime $p$ divides $\binom{2n}{n}$ if and only if $n$ has at least one digit larger than or equal to $p/2$ in its base $p$ expansion.</p> Answer 59: <p>Consider prime factorisations:</p> <p>$$\lfloor\frac{1000}{7}\rfloor+\lfloor\frac{1000}{7^2}\rfloor+\lfloor\frac{1000}{7^3}\rfloor+...=142+20+2=164$$ Then $$7^{164}|1000!$$</p> <p>Similarly:</p> <p>$$\lfloor\frac{500}{7}\rfloor+\lfloor\frac{500}{7^2}\rfloor+\lfloor\frac{500}{7^3}\rfloor+...=71+10+1=82$$ Then $$7^{164}=7^{82\cdot2}=(7^{82})^2|(500!)^2$$</p> Answer 60: <p>In $\{1,\dots, 500\}$, there are 71 multiples of 7, 10 multiples of $7^2$ and one multiple of $7^3$.<br> How many multiples of $7$, $7^2$, $7^3$ do you have in $\{501,\dots, 1000\}$? 
Same amount.</p> Answer 61: <p>This is overkill, but by <a href="https://en.wikipedia.org/wiki/Lucas&#39;s_theorem">Lucas's Theorem</a>$$\frac{1000!}{(500!)^2}=\binom{1000}{500} \equiv \binom{2}{1}\binom{6}{3}\binom{2}{1}\binom{6}{3} \equiv 4 \pmod{7}$$ As $1000=2626_{7}$ and $500=1313_{7}$. </p> <p>Here is another , more elementary argument, from <a href="https://en.wikipedia.org/wiki/Legendre&#39;s_formula">Legendre's formula</a>, $$\nu_7(n!) = \frac{n - s_7(n)}{6}$$ So $\nu_7(100!)=164, \nu_7 (50!)=82$. So the $7$ s in the denominator and the numerator cancel out. So we have the desired result. </p> Answer 62: <p>It is a general formula that the number of times a prime $p$ occurs in the prime factorisation of $n!$ is given by</p> <p>$$ \left[\frac{n}{p}\right] + \left[\frac{n}{p^2}\right] + \left[\frac{n}{p^3}\right] + \cdots $$</p> <p>where $[x]$ denotes the greatest integer $≤x$.</p> <p>In $1000!$ you'd have</p> <p>$$ \left[\frac{1000}{7}\right] + \left[\frac{1000}{7^2}\right] + \left[\frac{1000}{7^3}\right] + \cdots = 164 $$</p> <p>$7$'s, and in $500!$ you have</p> <p>$$ \left[\frac{500}{7}\right] + \left[\frac{500}{7^2}\right] + \left[\frac{500}{7^3}\right] + \cdots = 82 $$</p> <p>$7$'s. So you'll have a total of $164$ 7's in $(500!)^2$.</p> <p>The rest follows.</p> Answer 63: <p>You can also overkill it by Kummer's theorem:</p> <p>Notice that $500$ is $1313$ in base $7$. 
Since every digit is smaller than $\frac{7}{2}$ there are no carries when adding $1313+1313$ in base $7$.</p> <p>Therefore $\binom{1000}{500}$ is not a multiple of $7$.</p> <hr> <p>In general we have that a prime $p$ divides $\binom{2n}{n}$ if and only if $n$ has at least one digit larger than or equal to $p/2$ in its base $p$ expansion.</p> Answer 64: <p>Consider prime factorisations:</p> <p>$$\lfloor\frac{1000}{7}\rfloor+\lfloor\frac{1000}{7^2}\rfloor+\lfloor\frac{1000}{7^3}\rfloor+...=142+20+2=164$$ Then $$7^{164}|1000!$$</p> <p>Similarly:</p> <p>$$\lfloor\frac{500}{7}\rfloor+\lfloor\frac{500}{7^2}\rfloor+\lfloor\frac{500}{7^3}\rfloor+...=71+10+1=82$$ Then $$7^{164}=7^{82\cdot2}=(7^{82})^2|(500!)^2$$</p> Answer 65: <p>In $\{1,\dots, 500\}$, there are 71 multiples of 7, 10 multiples of $7^2$ and one multiple of $7^3$.<br> How many multiples of $7$, $7^2$, $7^3$ do you have in $\{501,\dots, 1000\}$? Same amount.</p> Answer 66: <p>This is overkill, but by <a href="https://en.wikipedia.org/wiki/Lucas&#39;s_theorem">Lucas's Theorem</a>$$\frac{1000!}{(500!)^2}=\binom{1000}{500} \equiv \binom{2}{1}\binom{6}{3}\binom{2}{1}\binom{6}{3} \equiv 4 \pmod{7}$$ As $1000=2626_{7}$ and $500=1313_{7}$. </p> <p>Here is another , more elementary argument, from <a href="https://en.wikipedia.org/wiki/Legendre&#39;s_formula">Legendre's formula</a>, $$\nu_7(n!) = \frac{n - s_7(n)}{6}$$ So $\nu_7(100!)=164, \nu_7 (50!)=82$. So the $7$ s in the denominator and the numerator cancel out. So we have the desired result. 
</p> Answer 67: <p>It is a general formula that the number of times a prime $p$ occurs in the prime factorisation of $n!$ is given by</p> <p>$$ \left[\frac{n}{p}\right] + \left[\frac{n}{p^2}\right] + \left[\frac{n}{p^3}\right] + \cdots $$</p> <p>where $[x]$ denotes the greatest integer $≤x$.</p> <p>In $1000!$ you'd have</p> <p>$$ \left[\frac{1000}{7}\right] + \left[\frac{1000}{7^2}\right] + \left[\frac{1000}{7^3}\right] + \cdots = 164 $$</p> <p>$7$'s, and in $500!$ you have</p> <p>$$ \left[\frac{500}{7}\right] + \left[\frac{500}{7^2}\right] + \left[\frac{500}{7^3}\right] + \cdots = 82 $$</p> <p>$7$'s. So you'll have a total of $164$ 7's in $(500!)^2$.</p> <p>The rest follows.</p> Answer 68: <p>You can also overkill it by Kummer's theorem:</p> <p>Notice that $500$ is $1313$ in base $7$. Since every digit is smaller than $\frac{7}{2}$ there are no carries when adding $1313+1313$ in base $7$.</p> <p>Therefore $\binom{1000}{500}$ is not a multiple of $7$.</p> <hr> <p>In general we have that a prime $p$ divides $\binom{2n}{n}$ if and only if $n$ has at least one digit larger than or equal to $p/2$ in its base $p$ expansion.</p> Answer 69: <p>Consider prime factorisations:</p> <p>$$\lfloor\frac{1000}{7}\rfloor+\lfloor\frac{1000}{7^2}\rfloor+\lfloor\frac{1000}{7^3}\rfloor+...=142+20+2=164$$ Then $$7^{164}|1000!$$</p> <p>Similarly:</p> <p>$$\lfloor\frac{500}{7}\rfloor+\lfloor\frac{500}{7^2}\rfloor+\lfloor\frac{500}{7^3}\rfloor+...=71+10+1=82$$ Then $$7^{164}=7^{82\cdot2}=(7^{82})^2|(500!)^2$$</p> Answer 70: <p>In $\{1,\dots, 500\}$, there are 71 multiples of 7, 10 multiples of $7^2$ and one multiple of $7^3$.<br> How many multiples of $7$, $7^2$, $7^3$ do you have in $\{501,\dots, 1000\}$? 
Same amount.</p> Answer 71: <p>This is overkill, but by <a href="https://en.wikipedia.org/wiki/Lucas&#39;s_theorem">Lucas's Theorem</a>$$\frac{1000!}{(500!)^2}=\binom{1000}{500} \equiv \binom{2}{1}\binom{6}{3}\binom{2}{1}\binom{6}{3} \equiv 4 \pmod{7}$$ As $1000=2626_{7}$ and $500=1313_{7}$. </p> <p>Here is another , more elementary argument, from <a href="https://en.wikipedia.org/wiki/Legendre&#39;s_formula">Legendre's formula</a>, $$\nu_7(n!) = \frac{n - s_7(n)}{6}$$ So $\nu_7(100!)=164, \nu_7 (50!)=82$. So the $7$ s in the denominator and the numerator cancel out. So we have the desired result. </p> Answer 72: <p>It is a general formula that the number of times a prime $p$ occurs in the prime factorisation of $n!$ is given by</p> <p>$$ \left[\frac{n}{p}\right] + \left[\frac{n}{p^2}\right] + \left[\frac{n}{p^3}\right] + \cdots $$</p> <p>where $[x]$ denotes the greatest integer $≤x$.</p> <p>In $1000!$ you'd have</p> <p>$$ \left[\frac{1000}{7}\right] + \left[\frac{1000}{7^2}\right] + \left[\frac{1000}{7^3}\right] + \cdots = 164 $$</p> <p>$7$'s, and in $500!$ you have</p> <p>$$ \left[\frac{500}{7}\right] + \left[\frac{500}{7^2}\right] + \left[\frac{500}{7^3}\right] + \cdots = 82 $$</p> <p>$7$'s. So you'll have a total of $164$ 7's in $(500!)^2$.</p> <p>The rest follows.</p> Answer 73: <p>You can also overkill it by Kummer's theorem:</p> <p>Notice that $500$ is $1313$ in base $7$. 
Since every digit is smaller than $\frac{7}{2}$ there are no carries when adding $1313+1313$ in base $7$.</p> <p>Therefore $\binom{1000}{500}$ is not a multiple of $7$.</p> <hr> <p>In general we have that a prime $p$ divides $\binom{2n}{n}$ if and only if $n$ has at least one digit larger than or equal to $p/2$ in its base $p$ expansion.</p> Answer 74: <p>Consider prime factorisations:</p> <p>$$\lfloor\frac{1000}{7}\rfloor+\lfloor\frac{1000}{7^2}\rfloor+\lfloor\frac{1000}{7^3}\rfloor+...=142+20+2=164$$ Then $$7^{164}|1000!$$</p> <p>Similarly:</p> <p>$$\lfloor\frac{500}{7}\rfloor+\lfloor\frac{500}{7^2}\rfloor+\lfloor\frac{500}{7^3}\rfloor+...=71+10+1=82$$ Then $$7^{164}=7^{82\cdot2}=(7^{82})^2|(500!)^2$$</p> Answer 75: <p>In $\{1,\dots, 500\}$, there are 71 multiples of 7, 10 multiples of $7^2$ and one multiple of $7^3$.<br> How many multiples of $7$, $7^2$, $7^3$ do you have in $\{501,\dots, 1000\}$? Same amount.</p> Answer 76: <p>This is overkill, but by <a href="https://en.wikipedia.org/wiki/Lucas&#39;s_theorem">Lucas's Theorem</a>$$\frac{1000!}{(500!)^2}=\binom{1000}{500} \equiv \binom{2}{1}\binom{6}{3}\binom{2}{1}\binom{6}{3} \equiv 4 \pmod{7}$$ As $1000=2626_{7}$ and $500=1313_{7}$. </p> <p>Here is another , more elementary argument, from <a href="https://en.wikipedia.org/wiki/Legendre&#39;s_formula">Legendre's formula</a>, $$\nu_7(n!) = \frac{n - s_7(n)}{6}$$ So $\nu_7(100!)=164, \nu_7 (50!)=82$. So the $7$ s in the denominator and the numerator cancel out. So we have the desired result. 
</p> Answer 77: <p>It is a general formula that the number of times a prime $p$ occurs in the prime factorisation of $n!$ is given by</p> <p>$$ \left[\frac{n}{p}\right] + \left[\frac{n}{p^2}\right] + \left[\frac{n}{p^3}\right] + \cdots $$</p> <p>where $[x]$ denotes the greatest integer $≤x$.</p> <p>In $1000!$ you'd have</p> <p>$$ \left[\frac{1000}{7}\right] + \left[\frac{1000}{7^2}\right] + \left[\frac{1000}{7^3}\right] + \cdots = 164 $$</p> <p>$7$'s, and in $500!$ you have</p> <p>$$ \left[\frac{500}{7}\right] + \left[\frac{500}{7^2}\right] + \left[\frac{500}{7^3}\right] + \cdots = 82 $$</p> <p>$7$'s. So you'll have a total of $164$ 7's in $(500!)^2$.</p> <p>The rest follows.</p> Answer 78: <p>You can also overkill it by Kummer's theorem:</p> <p>Notice that $500$ is $1313$ in base $7$. Since every digit is smaller than $\frac{7}{2}$ there are no carries when adding $1313+1313$ in base $7$.</p> <p>Therefore $\binom{1000}{500}$ is not a multiple of $7$.</p> <hr> <p>In general we have that a prime $p$ divides $\binom{2n}{n}$ if and only if $n$ has at least one digit larger than or equal to $p/2$ in its base $p$ expansion.</p> Answer 79: <p>Consider prime factorisations:</p> <p>$$\lfloor\frac{1000}{7}\rfloor+\lfloor\frac{1000}{7^2}\rfloor+\lfloor\frac{1000}{7^3}\rfloor+...=142+20+2=164$$ Then $$7^{164}|1000!$$</p> <p>Similarly:</p> <p>$$\lfloor\frac{500}{7}\rfloor+\lfloor\frac{500}{7^2}\rfloor+\lfloor\frac{500}{7^3}\rfloor+...=71+10+1=82$$ Then $$7^{164}=7^{82\cdot2}=(7^{82})^2|(500!)^2$$</p> Answer 80: <p>In $\{1,\dots, 500\}$, there are 71 multiples of 7, 10 multiples of $7^2$ and one multiple of $7^3$.<br> How many multiples of $7$, $7^2$, $7^3$ do you have in $\{501,\dots, 1000\}$? Same amount.</p>
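The exponent counts quoted in the answers above ($164$ sevens in $1000!$, $82$ in $500!$) and the base-$7$ digit facts are easy to verify programmatically. The sketch below is an added illustration, not part of any answer; the helper names are our own.

```python
import math

def legendre_exponent(n, p):
    """Exponent of the prime p in n!: sum of floor(n / p^k) for k >= 1
    (Legendre's formula; the terms vanish once p^k > n)."""
    e, q = 0, p
    while q <= n:
        e += n // q
        q *= p
    return e

def base_digits(n, b):
    """Digits of n in base b, most significant first."""
    digits = []
    while n:
        digits.append(n % b)
        n //= b
    return digits[::-1] or [0]

# The counts used in the answers: 7^164 divides 1000! and 7^82 divides 500!,
# so the 7s cancel exactly in 1000!/(500!)^2 = C(1000, 500).
print(legendre_exponent(1000, 7), legendre_exponent(500, 7))  # 164 82

# Equivalent digit-sum form: nu_7(n!) = (n - s_7(n)) / 6.
assert legendre_exponent(1000, 7) == (1000 - sum(base_digits(1000, 7))) // 6

# Kummer's criterion: 500 = 1313 in base 7, every digit is below 7/2,
# so adding 1313 + 1313 in base 7 produces no carries, hence
# 7 does not divide C(1000, 500); Lucas's theorem gives the residue 4.
assert base_digits(500, 7) == [1, 3, 1, 3]
assert all(2 * d < 7 for d in base_digits(500, 7))
assert math.comb(1000, 500) % 7 == 4
```

The loop form and the digit-sum form are the same formula; the second is just the first with the geometric pieces of each digit summed in closed form.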
Difficult exponential equation $6^x-3^x-2^x-1=0$
2,112,387
8
https://math.stackexchange.com/questions/2112387/difficult-exponential-equation-6x-3x-2x-1-0
<p>$$6^x-3^x-2^x-1=0$$ I don't have any idea how to solve this equation. I don't know how to show that the function $f(x)=6^x-3^x-2^x-1$ is strictly increasing (if that is even the case); the derivative doesn't show me anything. I also tried to use the Lagrange theorem (as in the equation $2^x+9^x=5^x+6^x$, for example), but it doesn't work here! I need a solution without calculus. Please help! Edit: In my country, the theorem that states (roughly) that $f'(c)=\dfrac{f(b)-f(a)}{b-a}$ for some $c\in(a,b)$, where $f:[a,b]\to\mathbb{R}$, is called the Lagrange theorem (i.e., the mean value theorem).</p>
[ { "answer_id": 2112394, "score": 20, "text": "<p>$6^x-3^x-2^x-1=0$ is equivalent to $1=\\frac 1 {2^x}+\\frac 1 {3^x}+\\frac 1 {6^x}$.</p>\n\n<p>The left side is constant while the right side is strictly decreasing, so there can be at most one solution, and $x=1$ is perfect for the job.</p>\n" }, { "answer_id": 2112638, "score": 2, "text": "<p>I quite liked your question, so I thought this might be of assistance: next time you face a question similar to this one, you can employ some mathematical software to help, e.g. Maple. You can rewrite the equation to look like this\n$$ f(x) = g(x) $$\nwhere $f(x)= 6^x - 3^x -2^x$ and $g(x)=1$ is a constant function. So it's an intersection problem. The graph is below.</p>\n\n<p><a href=\"https://i.stack.imgur.com/vGJXs.jpg\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/vGJXs.jpg\" alt=\"enter image description here\"></a></p>\n\n<p>Clearly, they intersect at $x=1$, or you can use the following code to be $100\\%$ sure:</p>\n\n<p><code>fsolve({f(x) = g(x)}, {x}, x = -1 .. 2)</code></p>\n\n<p>where $f(x)$ and $g(x)$ are defined as above.</p>\n" } ]
Question: Difficult exponential equation $6^x-3^x-2^x-1=0$ Answer 1: <p>$6^x-3^x-2^x-1=0$ is equivalent to $1=\frac 1 {2^x}+\frac 1 {3^x}+\frac 1 {6^x}$.</p> <p>The left side is constant while the right side is strictly decreasing, so there can be at most one solution, and $x=1$ is perfect for the job.</p> Answer 2: <p>I quite liked your question, so I thought this might be of assistance: next time you face a question similar to this one, you can employ some mathematical software to help, e.g. Maple. You can rewrite the equation to look like this $$ f(x) = g(x) $$ where $f(x)= 6^x - 3^x -2^x$ and $g(x)=1$ is a constant function. So it's an intersection problem. The graph is below.</p> <p><a href="https://i.stack.imgur.com/vGJXs.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/vGJXs.jpg" alt="enter image description here"></a></p> <p>Clearly, they intersect at $x=1$, or you can use the following code to be $100\%$ sure:</p> <p><code>fsolve({f(x) = g(x)}, {x}, x = -1 .. 2)</code></p> <p>where $f(x)$ and $g(x)$ are defined as above.</p>
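The monotonicity argument in Answer 1 can also be checked numerically. A minimal Python sketch (my own, not part of the thread):

```python
# Sanity check of the argument in Answer 1:
#   6^x - 3^x - 2^x - 1 = 0   <=>   2^(-x) + 3^(-x) + 6^(-x) = 1,
# and the left side of the second form is strictly decreasing in x.

def h(x):
    return 2.0 ** (-x) + 3.0 ** (-x) + 6.0 ** (-x)

# h is strictly decreasing on a sample grid, so it crosses 1 at most once.
xs = [i / 10.0 for i in range(-20, 41)]
assert all(h(a) > h(b) for a, b in zip(xs, xs[1:]))

# x = 1 is that unique crossing: 1/2 + 1/3 + 1/6 = 1.
assert abs(h(1.0) - 1.0) < 1e-12
```

This only samples a grid, of course; the rigorous statement is the one in the answer (each term $b^{-x}$ with $b>1$ is strictly decreasing, hence so is their sum).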
Vector spaces - Proving that intersection is distributive over summation of vector spaces
2,113,973
8
https://math.stackexchange.com/questions/2113973/vector-spaces-proving-that-intersection-is-distributive-over-summation-of-vect
<p><strong>The problem</strong></p> <p>Define the <em>sum</em> of two vector spaces (often two subspaces of a common vector space) $A$ and $B$ as $A+B=\{a+b: a \in A, b \in B \}.$ Let $U,V,W$ be arbitrary vector spaces. I want to show that</p> <blockquote> <p>$$U \cap (V+W)=(U \cap V)+(U \cap W)$$</p> </blockquote> <p><strong>My approach</strong></p> <p>By starting with $x \in (U \cap V)+(U \cap W)$ and applying the properties of vector spaces (closure under <em>vector addition</em>) to end up with $x \in U \cap (V+W)$, I've shown that $(U \cap V)+(U \cap W) \subset U \cap (V+W)$.</p> <p>My problem is with the reverse, i.e. showing that $U \cap (V+W) \subset (U \cap V)+(U \cap W)$. Following the usual procedure, I started with:</p> <p>$x \in U \cap (V+W)$, which implies that $x \in U \land x \in V+W$.</p> <p>Now, $x \in V+W \implies x=y+z$ for some $y \in V$ and some $z \in W$.</p> <p>Again, $x=y+z \in U$. If we could show that $y$ and $z$ both comes from $U$, then we will be done. But unfortunately $y+z \in U$ doesn't necessarily imply that $y \in U$ and $z \in U.$ Am I missing some very basic properties of <em>vector spaces</em> that would make this trivial? Any help would be greatly appreciated.</p> <p><strong>P.S.</strong> Just a dumb question. Does the result break down if we replace <em>vector spaces</em> by <em>arbitrary sets</em> in the above problem?</p>
[ { "answer_id": 2114039, "score": 13, "text": "<p>The statement is wrong. If $U$, $V$ and $W$ are all lines in the same plane ($U\\neq V$, $V\\neq W$ and $W\\neq U$) then $U\\cap (V+W) = U$ but $(U\\cap V)+(U\\cap W)$ = $\\{0\\}+\\{0\\} =\\{0\\}$.</p>\n" }, { "answer_id": 3718784, "score": 1, "text": "<p>This is from page 2 of one of <a href=\"http://www.math.lsa.umich.edu/%7Ehochster/615W07/L01.08.pdf\" rel=\"nofollow noreferrer\">Hochster's notes</a>:</p>\n<p>Consider a <span class=\"math-container\">$K$</span>-vector space with basis <span class=\"math-container\">$\\mathcal B$</span>, and let <span class=\"math-container\">$\\mathcal S$</span> be the set of <span class=\"math-container\">$K$</span>-vector subspaces that are spanned by a subset of <span class=\"math-container\">$\\mathcal B$</span>. Then there is an order-preserving bijection between <span class=\"math-container\">$\\mathcal S$</span> and the set of subsets of <span class=\"math-container\">$\\mathcal B$</span>. This bijection preserves intersection, even infinite intersection, and takes sums, even infinite sums, to unions. Thus, for such a family of vector spaces, intersection distributes over sum (even when the sum is infinite) and union distributes over intersection (even if the intersection is infinite).</p>\n<p>(<strong>NOTE</strong>: There is an error in the final remark of the last sentence. It should say &quot;sum distributes over intersection&quot;.)</p>\n" } ]
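The counterexample in the accepted answer (three distinct lines in a common plane) can be checked with a small dimension count via Grassmann's identity, dim(A ∩ B) = dim A + dim B − dim(A + B). A NumPy sketch (my own illustration, not part of the original answers; the helper names are arbitrary):

```python
import numpy as np

def dim_sum(A, B):
    # dim(A + B) = rank of the stacked generators
    return np.linalg.matrix_rank(np.vstack([A, B]))

def dim_cap(A, B):
    # Grassmann's identity: dim(A ∩ B) = dim A + dim B - dim(A + B)
    return (np.linalg.matrix_rank(A) + np.linalg.matrix_rank(B)
            - dim_sum(A, B))

# Three distinct lines through the origin in the plane (rows = generators)
U = np.array([[1.0, 0.0]])
V = np.array([[0.0, 1.0]])
W = np.array([[1.0, 1.0]])

# V + W is all of R^2, so U ∩ (V + W) = U has dimension 1 ...
VW = np.vstack([V, W])
print(dim_cap(U, VW))                 # 1

# ... but U ∩ V and U ∩ W are both {0}, so their sum has dimension 0
print(dim_cap(U, V) + dim_cap(U, W))  # 0
```

The two printed dimensions differ, confirming that intersection does not distribute over the sum of subspaces in general.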
Question: Vector spaces - Proving that intersection is distributive over summation of vector spaces Answer 1: <p>The statement is wrong. If $U$, $V$ and $W$ are all lines in the same plane ($U\neq V$, $V\neq W$ and $W\neq U$) then $U\cap (V+W) = U$ but $(U\cap V)+(U\cap W)$ = $\{0\}+\{0\} =\{0\}$.</p> Answer 2: <p>This is from page 2 of one of <a href="http://www.math.lsa.umich.edu/%7Ehochster/615W07/L01.08.pdf" rel="nofollow noreferrer">Hochster's notes</a>:</p> <p>Consider a <span class="math-container">$K$</span>-vector space with basis <span class="math-container">$\mathcal B$</span>, and let <span class="math-container">$\mathcal S$</span> be the set of <span class="math-container">$K$</span>-vector subspaces that are spanned by a subset of <span class="math-container">$\mathcal B$</span>. Then there is an order-preserving bijection between <span class="math-container">$\mathcal S$</span> and the set of subsets of <span class="math-container">$\mathcal B$</span>. This bijection preserves intersection, even infinite intersection, and takes sums, even infinite sums, to unions. Thus, for such a family of vector spaces, intersection distributes over sum (even when the sum is infinite) and union distributes over intersection (even if the intersection is infinite).</p> <p>(<strong>NOTE</strong>: There is an error in the final remark of the last sentence. It should say &quot;sum distributes over intersection&quot;.)</p> Answer 3: <p>The statement is wrong. 
If $U$, $V$ and $W$ are all lines in the same plane ($U\neq V$, $V\neq W$ and $W\neq U$) then $U\cap (V+W) = U$ but $(U\cap V)+(U\cap W)$ = $\{0\}+\{0\} =\{0\}$.</p> Answer 18: <p>This is from page 2 of one of <a href="http://www.math.lsa.umich.edu/%7Ehochster/615W07/L01.08.pdf" rel="nofollow noreferrer">Hochster's notes</a>:</p> <p>Consider a <span class="math-container">$K$</span>-vector space with basis <span class="math-container">$\mathcal B$</span>, and let <span class="math-container">$\mathcal S$</span> be the set of <span class="math-container">$K$</span>-vector subspaces that are spanned by a subset of <span class="math-container">$\mathcal B$</span>. Then there is an order-preserving bijection between <span class="math-container">$\mathcal S$</span> and the set of subsets of <span class="math-container">$\mathcal B$</span>. This bijection preserves intersection, even infinite intersection, and takes sums, even infinite sums, to unions. Thus, for such a family of vector spaces, intersection distributes over sum (even when the sum is infinite) and union distributes over intersection (even if the intersection is infinite).</p> <p>(<strong>NOTE</strong>: There is an error in the final remark of the last sentence. It should say &quot;sum distributes over intersection&quot;.)</p> Answer 19: <p>The statement is wrong. 
If $U$, $V$ and $W$ are all lines in the same plane ($U\neq V$, $V\neq W$ and $W\neq U$) then $U\cap (V+W) = U$ but $(U\cap V)+(U\cap W)$ = $\{0\}+\{0\} =\{0\}$.</p> Answer 20: <p>This is from page 2 of one of <a href="http://www.math.lsa.umich.edu/%7Ehochster/615W07/L01.08.pdf" rel="nofollow noreferrer">Hochster's notes</a>:</p> <p>Consider a <span class="math-container">$K$</span>-vector space with basis <span class="math-container">$\mathcal B$</span>, and let <span class="math-container">$\mathcal S$</span> be the set of <span class="math-container">$K$</span>-vector subspaces that are spanned by a subset of <span class="math-container">$\mathcal B$</span>. Then there is an order-preserving bijection between <span class="math-container">$\mathcal S$</span> and the set of subsets of <span class="math-container">$\mathcal B$</span>. This bijection preserves intersection, even infinite intersection, and takes sums, even infinite sums, to unions. Thus, for such a family of vector spaces, intersection distributes over sum (even when the sum is infinite) and union distributes over intersection (even if the intersection is infinite).</p> <p>(<strong>NOTE</strong>: There is an error in the final remark of the last sentence. It should say &quot;sum distributes over intersection&quot;.)</p> Answer 21: <p>The statement is wrong. 
If $U$, $V$ and $W$ are all lines in the same plane ($U\neq V$, $V\neq W$ and $W\neq U$) then $U\cap (V+W) = U$ but $(U\cap V)+(U\cap W)$ = $\{0\}+\{0\} =\{0\}$.</p> Answer 22: <p>This is from page 2 of one of <a href="http://www.math.lsa.umich.edu/%7Ehochster/615W07/L01.08.pdf" rel="nofollow noreferrer">Hochster's notes</a>:</p> <p>Consider a <span class="math-container">$K$</span>-vector space with basis <span class="math-container">$\mathcal B$</span>, and let <span class="math-container">$\mathcal S$</span> be the set of <span class="math-container">$K$</span>-vector subspaces that are spanned by a subset of <span class="math-container">$\mathcal B$</span>. Then there is an order-preserving bijection between <span class="math-container">$\mathcal S$</span> and the set of subsets of <span class="math-container">$\mathcal B$</span>. This bijection preserves intersection, even infinite intersection, and takes sums, even infinite sums, to unions. Thus, for such a family of vector spaces, intersection distributes over sum (even when the sum is infinite) and union distributes over intersection (even if the intersection is infinite).</p> <p>(<strong>NOTE</strong>: There is an error in the final remark of the last sentence. It should say &quot;sum distributes over intersection&quot;.)</p> Answer 23: <p>The statement is wrong. 
If $U$, $V$ and $W$ are all lines in the same plane ($U\neq V$, $V\neq W$ and $W\neq U$) then $U\cap (V+W) = U$ but $(U\cap V)+(U\cap W)$ = $\{0\}+\{0\} =\{0\}$.</p> Answer 24: <p>This is from page 2 of one of <a href="http://www.math.lsa.umich.edu/%7Ehochster/615W07/L01.08.pdf" rel="nofollow noreferrer">Hochster's notes</a>:</p> <p>Consider a <span class="math-container">$K$</span>-vector space with basis <span class="math-container">$\mathcal B$</span>, and let <span class="math-container">$\mathcal S$</span> be the set of <span class="math-container">$K$</span>-vector subspaces that are spanned by a subset of <span class="math-container">$\mathcal B$</span>. Then there is an order-preserving bijection between <span class="math-container">$\mathcal S$</span> and the set of subsets of <span class="math-container">$\mathcal B$</span>. This bijection preserves intersection, even infinite intersection, and takes sums, even infinite sums, to unions. Thus, for such a family of vector spaces, intersection distributes over sum (even when the sum is infinite) and union distributes over intersection (even if the intersection is infinite).</p> <p>(<strong>NOTE</strong>: There is an error in the final remark of the last sentence. It should say &quot;sum distributes over intersection&quot;.)</p> Answer 25: <p>The statement is wrong. 
If $U$, $V$ and $W$ are all lines in the same plane ($U\neq V$, $V\neq W$ and $W\neq U$) then $U\cap (V+W) = U$ but $(U\cap V)+(U\cap W)$ = $\{0\}+\{0\} =\{0\}$.</p> Answer 26: <p>This is from page 2 of one of <a href="http://www.math.lsa.umich.edu/%7Ehochster/615W07/L01.08.pdf" rel="nofollow noreferrer">Hochster's notes</a>:</p> <p>Consider a <span class="math-container">$K$</span>-vector space with basis <span class="math-container">$\mathcal B$</span>, and let <span class="math-container">$\mathcal S$</span> be the set of <span class="math-container">$K$</span>-vector subspaces that are spanned by a subset of <span class="math-container">$\mathcal B$</span>. Then there is an order-preserving bijection between <span class="math-container">$\mathcal S$</span> and the set of subsets of <span class="math-container">$\mathcal B$</span>. This bijection preserves intersection, even infinite intersection, and takes sums, even infinite sums, to unions. Thus, for such a family of vector spaces, intersection distributes over sum (even when the sum is infinite) and union distributes over intersection (even if the intersection is infinite).</p> <p>(<strong>NOTE</strong>: There is an error in the final remark of the last sentence. It should say &quot;sum distributes over intersection&quot;.)</p> Answer 27: <p>The statement is wrong. 
If $U$, $V$ and $W$ are all lines in the same plane ($U\neq V$, $V\neq W$ and $W\neq U$) then $U\cap (V+W) = U$ but $(U\cap V)+(U\cap W)$ = $\{0\}+\{0\} =\{0\}$.</p> Answer 28: <p>This is from page 2 of one of <a href="http://www.math.lsa.umich.edu/%7Ehochster/615W07/L01.08.pdf" rel="nofollow noreferrer">Hochster's notes</a>:</p> <p>Consider a <span class="math-container">$K$</span>-vector space with basis <span class="math-container">$\mathcal B$</span>, and let <span class="math-container">$\mathcal S$</span> be the set of <span class="math-container">$K$</span>-vector subspaces that are spanned by a subset of <span class="math-container">$\mathcal B$</span>. Then there is an order-preserving bijection between <span class="math-container">$\mathcal S$</span> and the set of subsets of <span class="math-container">$\mathcal B$</span>. This bijection preserves intersection, even infinite intersection, and takes sums, even infinite sums, to unions. Thus, for such a family of vector spaces, intersection distributes over sum (even when the sum is infinite) and union distributes over intersection (even if the intersection is infinite).</p> <p>(<strong>NOTE</strong>: There is an error in the final remark of the last sentence. It should say &quot;sum distributes over intersection&quot;.)</p> Answer 29: <p>The statement is wrong. 
If $U$, $V$ and $W$ are all lines in the same plane ($U\neq V$, $V\neq W$ and $W\neq U$) then $U\cap (V+W) = U$ but $(U\cap V)+(U\cap W)$ = $\{0\}+\{0\} =\{0\}$.</p> Answer 30: <p>This is from page 2 of one of <a href="http://www.math.lsa.umich.edu/%7Ehochster/615W07/L01.08.pdf" rel="nofollow noreferrer">Hochster's notes</a>:</p> <p>Consider a <span class="math-container">$K$</span>-vector space with basis <span class="math-container">$\mathcal B$</span>, and let <span class="math-container">$\mathcal S$</span> be the set of <span class="math-container">$K$</span>-vector subspaces that are spanned by a subset of <span class="math-container">$\mathcal B$</span>. Then there is an order-preserving bijection between <span class="math-container">$\mathcal S$</span> and the set of subsets of <span class="math-container">$\mathcal B$</span>. This bijection preserves intersection, even infinite intersection, and takes sums, even infinite sums, to unions. Thus, for such a family of vector spaces, intersection distributes over sum (even when the sum is infinite) and union distributes over intersection (even if the intersection is infinite).</p> <p>(<strong>NOTE</strong>: There is an error in the final remark of the last sentence. It should say &quot;sum distributes over intersection&quot;.)</p> Answer 31: <p>The statement is wrong. 
If $U$, $V$ and $W$ are all lines in the same plane ($U\neq V$, $V\neq W$ and $W\neq U$) then $U\cap (V+W) = U$ but $(U\cap V)+(U\cap W)$ = $\{0\}+\{0\} =\{0\}$.</p> Answer 32: <p>This is from page 2 of one of <a href="http://www.math.lsa.umich.edu/%7Ehochster/615W07/L01.08.pdf" rel="nofollow noreferrer">Hochster's notes</a>:</p> <p>Consider a <span class="math-container">$K$</span>-vector space with basis <span class="math-container">$\mathcal B$</span>, and let <span class="math-container">$\mathcal S$</span> be the set of <span class="math-container">$K$</span>-vector subspaces that are spanned by a subset of <span class="math-container">$\mathcal B$</span>. Then there is an order-preserving bijection between <span class="math-container">$\mathcal S$</span> and the set of subsets of <span class="math-container">$\mathcal B$</span>. This bijection preserves intersection, even infinite intersection, and takes sums, even infinite sums, to unions. Thus, for such a family of vector spaces, intersection distributes over sum (even when the sum is infinite) and union distributes over intersection (even if the intersection is infinite).</p> <p>(<strong>NOTE</strong>: There is an error in the final remark of the last sentence. It should say &quot;sum distributes over intersection&quot;.)</p>
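A concrete instance of the counterexample above can be checked mechanically. The sketch below (pure Python; the particular lines $U=\operatorname{span}\{(1,0)\}$, $V=\operatorname{span}\{(0,1)\}$, $W=\operatorname{span}\{(1,1)\}$ in $\mathbb{R}^2$ are an arbitrary choice — any three distinct lines through the origin work) computes dimensions via the rank identity $\dim(S\cap T)=\dim S+\dim T-\dim(S+T)$:

```python
def rank(rows):
    # Rank of a small real matrix (list of row tuples) by Gaussian elimination.
    m = [list(r) for r in rows]
    r = 0
    for c in range(len(m[0])):
        piv = next((i for i in range(r, len(m)) if abs(m[i][c]) > 1e-9), None)
        if piv is None:
            continue
        m[r], m[piv] = m[piv], m[r]
        for i in range(len(m)):
            if i != r and abs(m[i][c]) > 1e-9:
                f = m[i][c] / m[r][c]
                m[i] = [a - f * b for a, b in zip(m[i], m[r])]
        r += 1
    return r

def dim_intersection(S, T):
    # dim(span S ∩ span T) = dim span S + dim span T - dim span(S ∪ T)
    return rank(S) + rank(T) - rank(S + T)

U = [(1.0, 0.0)]   # arbitrary concrete lines through 0 in the plane
V = [(0.0, 1.0)]
W = [(1.0, 1.0)]

print(dim_intersection(U, V + W))                       # U ∩ (V+W) = U: dimension 1
print(dim_intersection(U, V) + dim_intersection(U, W))  # {0} + {0}: dimension 0
```

Since both $U\cap V$ and $U\cap W$ are zero, the sum of their dimensions is the dimension of their sum, so the two printed numbers exhibit the failure of distributivity directly.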
Computing the Fourier transform of $H_k(x)e^{-x^2/2}$, where $H_k$ is the Hermite polynomial.
2,114,701
8
https://math.stackexchange.com/questions/2114701/computing-the-fourier-transform-of-h-kxe-x2-2-where-h-k-is-the-hermit
<p>[<strong>Notations</strong>] The definition of Fourier transform of an $L^1$ function $f$ is given by the formula $\int f(x)e^{-ix\cdot\xi}dx$, with no normalizing factors; similarly for the Fourier-Plancherel transform on $L^2$. The Hermite polynomial of order $k$ is given by $H_k(x)=(-1)^ke^{x^2}\left(\frac{d}{dx}\right)^k(e^{-x^2})$. This is "physicists' Hermite polynomials", I believe. I will use $\mathscr{F}$ to denote the Fourier transform.</p> <hr> <p>I'm trying to understand the proof of the following theorem.</p> <blockquote> <p>For any $k=0,1,2,\ldots,$ $$\mathscr{F}(H_k(x)e^{-x^2/2})=\sqrt{2\pi}(-i)^kH_k(\xi)e^{-\xi^2/2}.$$</p> </blockquote> <p>The proof starts by showing that for any fixed $t\in\mathbb{C}$, the series $\sum H_k(x)e^{-x^2/2}\frac{t^k}{k!}$ is (absolutely) convergent in $L^2$ with its limit being $e^{-x^2/2+2xt-t^2}$. This requires the generating-function form of the Hermite polynomials, namely $\sum H_k(x)\frac{t^k}{k!}=e^{2xt-t^2}$, and some orthogonality relations. Then, by continuity of $\mathscr{F}$ on $L^2$, we obtain for any fixed $t\in\mathbb{C}$, $$\sum\mathscr{F}(H_k(x)e^{-x^2/2})\frac{t^k}{k!}=\mathscr{F}(e^{-x^2/2+2xt-t^2})=\sqrt{2\pi}e^{-\xi^2/2}e^{t^2-2it\xi}$$ and by writing $e^{t^2-2it\xi}=e^{-(-it)^2+2(-it)\xi}$, we get $$\sqrt{2\pi}e^{-\xi^2/2}e^{t^2-2it\xi}=\sum \sqrt{2\pi}(-i)^kH_k(\xi)e^{-\xi^2/2}\frac{t^k}{k!}$$ with convergence in $L^2$. Thus we have something like </p> <blockquote> <p>For all $t\in\mathbb{C}$, $$\sum\mathscr{F}(H_k(x)e^{-x^2/2})\frac{t^k}{k!}=\sum \sqrt{2\pi}(-i)^kH_k(\xi)e^{-\xi^2/2}\frac{t^k}{k!}.$$</p> </blockquote> <hr> <p>[<strong>Question</strong>] Can we conclude from here that $\mathscr{F}(H_k(x)e^{-x^2/2})=\sqrt{2\pi}(-i)^kH_k(\xi)e^{-\xi^2/2}$ ?</p> <p>My book just says "equating the coefficients of $t^k$ yields the result", but the situation here differs from the typical ones in complex analysis, since the 'coefficients' are elements of $L^2$, not just numbers. 
I want to say something like "Plugging in $t=0$ gives the result for $k=0$. Differentiating once and plugging in $t=0$ gives the result for $k=1$, and so on.", but I'm not familiar with derivatives of an $L^2$-valued function, and I don't think that is a valid argument.</p> <p>How can we conclude $\mathscr{F}(H_k(x)e^{-x^2/2})=\sqrt{2\pi}(-i)^kH_k(\xi)e^{-\xi^2/2}$ from the above power-series-looking equation in $L^2$? Please help me!</p>
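Aside: before chasing the functional-analytic details, the theorem itself is easy to sanity-check numerically. The throwaway pure-Python script below approximates $\int H_k(x)e^{-x^2/2}e^{-ix\xi}\,dx$ by midpoint quadrature on $[-12,12]$ and compares it with $\sqrt{2\pi}(-i)^k H_k(\xi)e^{-\xi^2/2}$ for $k=0,\dots,3$ (the grid size and the sample point $\xi=0.7$ are arbitrary choices, not part of the proof):

```python
import cmath
import math

def hermite(k, x):
    # Physicists' Hermite polynomials via H_{k+1}(x) = 2x H_k(x) - 2k H_{k-1}(x)
    h_prev, h = 1.0, 2.0 * x
    if k == 0:
        return h_prev
    for n in range(1, k):
        h_prev, h = h, 2.0 * x * h - 2.0 * n * h_prev
    return h

def ft_hermite_gauss(k, xi, a=-12.0, b=12.0, n=8000):
    # Midpoint-rule approximation of  ∫ H_k(x) e^{-x^2/2} e^{-i x xi} dx;
    # the integrand is negligible beyond |x| = 12 at double precision.
    dx = (b - a) / n
    total = 0.0 + 0.0j
    for j in range(n):
        x = a + (j + 0.5) * dx
        total += hermite(k, x) * cmath.exp(-x * x / 2 - 1j * x * xi) * dx
    return total

xi = 0.7
max_err = 0.0
for k in range(4):
    lhs = ft_hermite_gauss(k, xi)
    rhs = math.sqrt(2 * math.pi) * (-1j) ** k * hermite(k, xi) * math.exp(-xi * xi / 2)
    max_err = max(max_err, abs(lhs - rhs))
print(max_err)
```

For a Gaussian-decaying analytic integrand the midpoint rule converges extremely fast, so the printed discrepancy is far below any visible digit — consistent with the claimed identity.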
[ { "answer_id": 2115474, "score": 6, "text": "<p>Yes, once you have the equation\n$$\n \\sum_{n}\\mathscr{F}(e^{-x^2/2}H_n(x))\\frac{t^n}{n!}\n = \\sum_{n}\\sqrt{2\\pi}(-i)^n H_n(\\xi)e^{-\\xi^2/2}\\frac{t^n}{n!},\n$$\nthen you can fix $\\xi$, and view this as a power series equation that holds for all $t$. Therefore, for this fixed $\\xi$, the power series coefficients must be identical, which leads to\n$$\n \\mathscr{F}(e^{-x^2/2}H_n(x))=\\sqrt{2\\pi}(-i)^n H_n(\\xi)e^{-\\xi^2/2}.\n$$\nThe left side implicitly depends on $\\xi$, because it is the transform variable. The transform is continuous in $\\xi$ because of the exponentially decaying nature of the function being transformed. So\n$$\n \\mathscr{F}(e^{-x^2/2}H_n(x))(\\xi)=\\sqrt{2\\pi}(-i)^n H_n(\\xi)e^{-\\xi^2/2},\\;\\;\\xi\\in\\mathbb{R}.\n$$\nOmitting the arguments of the functions, these functions are equal:\n$$ \\mathscr{F}(e^{-x^2/2}H_n(x)) = \\sqrt{2\\pi}(-i)^n H_n\\, e^{-(\\cdot)^2/2}\n$$</p>\n" }, { "answer_id": 2140023, "score": 1, "text": "<p>Once you know that the series in question converges absolutely in $L^2$ \nthen the identification becomes automatic.\nWriting $\\psi_k(x) = \\frac{1}{k!} H_k(x) e^{-x^2/2}$, $k\\geq 0$\nthen each\n$\\psi_k$ is an element in the Hilbert space $L^2({\\Bbb R})$ \nhaving norm\n$\\frac{2^{k/2} } {\\sqrt{k!}} \\pi^{1/4}$.\nOne has $\\lim_k \\|\\psi_k\\|_{L^2}^{1/k} = 0 $, so\nthe series\n$$ \\Psi_t(x) = \n\\lim_{N\\rightarrow \\infty}\n \\sum_{k=0}^{N-1} t^k \\psi_k(x)$$\nis norm convergent in $L^2$ for all complex $t$.\n$\\Psi_t$ is therefore an analytic $L^2$-valued function of $t\\in {\\Bbb C}$.\nCoefficients in an analytic series are uniquely determined by the function and may be recovered e.g. by a Cauchy integral (which works fine in Hilbert or Banach spaces). 
You may also recover them simply from looking at successive derivatives at $0$, as you mention.\nSince Fourier transformation (denoted by a 'hat') preserves the $L^2$ norm \n we automatically get:\n $$ \\widehat{\\Psi}_t(\\xi) = \n\\lim_{N\\rightarrow \\infty}\n \\sum_{k=0}^{N-1} t^k \\widehat{\\psi}_k(\\xi),$$\nagain an analytic $L^2$-valued function.\nAn issue is how to calculate the two sums (as $L^2$ limits) but\nas you say, in your book they prove that\n$\\Psi_t(x)$ (in $L^2$) is identical to \n$$ \\Phi_t= \n\\exp \\left( 2xt-t^2-x^2/2 \\right) $$\nThis then resolves the problem since as elements of $L^2$:\n$$\\widehat{\\Psi}_t(\\xi)=\n\\widehat{\\Phi}_t(\\xi) = \\exp \\left( -2it\\xi+t^2-\\xi^2/2 \\right)\n = \\Phi_{-it}(\\xi) =\\Psi_{-it}(\\xi)$$\nIdentifying the analytic expansions for \n$\\widehat{\\Psi}_t(\\xi)=\\Psi_{-it}(\\xi)$\nyou must have:\n$\\widehat{\\psi_k}(\\xi) = (-i)^k \\psi_k(\\xi)$ in $L^2$ (together with the factor of $\\sqrt{2\\pi}$ that the present transform convention produces, matching the theorem).\nAs both are actually smooth functions, you get the wanted identity.</p>\n\n<p>Returning to the limit in $L^2$, let me provide a relatively simple proof: The defining equation,\n$H_k(x)=(-1)^k e^{x^2} \\partial_x^k e^{-x^2}$ shows that $\\psi_k=\\frac{1}{k!}H_k(x) e^{-x^2/2}$ itself may be expressed through $\\Phi_t$\nas a Cauchy integral as follows:\n$$ \\psi_k(x)= \n (-1)^k e^{x^2/2} \\oint_{\\partial B(x,R)}\n \\frac{e^{-z^2}}{(z-x)^{k+1}} \\frac{dz}{2\\pi i} = \n \\oint_{\\partial B(0,R)}\n \\frac{e^{-x^2/2+2xu-u^2}}{u^{k+1}} \\frac{du}{2\\pi i}$$\nwhere $R&gt;0$ is arbitrary. From this we get the bound $|\\psi_k(x)| \\leq R^{-(k+1)} e^{-x^2/2+2|x|R +R^2}$ which is clearly square integrable\n(in $x$) so that\n$\\psi_k$ is an element in $L^2$.</p>\n\n<p>Now fix a value of $t$ and let $R&gt;|t|$. 
Then by summing a geometric series:\n $$\n \\sum_{k=0}^{N-1} t^k \\psi_k(x) = \n \\oint_{\\partial B(0,R)}\n (1 - (t/u)^N)\\frac{e^{-x^2/2+2xu-u^2}}{u-t} \\frac{du}{2\\pi i}=\n\\Phi_t(x) + {\\cal R}^{N}(t,x)$$\nwhere we have the following bound for the remainder:\n $$ |{\\cal R}^{N}(t,x)| \\leq \\frac{(|t|/R)^N}{R-|t|} \n e^{-x^2/2+2R|x|+R^2}.$$\nThis is indeed a function in $L^2({\\Bbb R})$ and goes to zero in norm when\n $N\\rightarrow \\infty$, thus justifying the above identifications.</p>\n" }, { "answer_id": 2115474, "score": 6, "text": "<p>Yes, once you have the equation\n$$\n \\sum_{n}\\mathscr{F}(e^{-x^2/2}H_n(x))\\frac{t^n}{n!}\n = \\sum_{n}\\sqrt{2\\pi}(-i)^n H_n(\\xi)e^{-\\xi^2/2}\\frac{t^n}{n!},\n$$\nthen you can fix $\\xi$, and view this as a power series equation that holds for all $t$. Therefore, for this fixed $\\xi$, the power series coefficients must be identical, which leads to\n$$\n \\mathscr{F}(e^{-x^2}H_n(x))=\\sqrt{2\\pi}(-i)^n H_n(\\xi)\n$$\nThe left side implicitly depends on $\\xi$, because it is the transform variable. The transform is continuous in $\\xi$ because of the exponentially decaying nature of the function being transformed. 
So\n$$\n \\mathscr{F}(e^{-x^2}H_n(x))(\\xi)=\\sqrt{2\\pi}(-i)^n H_n(\\xi),\\;\\;\\xi\\in\\mathbb{R}.\n$$\nOmitting the arguments of the functions, these functions are equal:\n$$ \\mathscr{F}(e^{-x^2}H_n(x)) = \\sqrt{2\\pi}(-i)^n H_n\n$$</p>\n" }, { "answer_id": 2140023, "score": 1, "text": "<p>Once you know that the series in question converges absolutely in $L^2$ \nthen the identification becomes automatic.\nWriting $\\psi_k(x) = \\frac{1}{k!} H_k(x) e^{-x^2/2}$, $k\\geq 0$\nthen each\n$\\psi_k$ is an element in the Hilbert space $L^2({\\Bbb R})$ \nhaving norm\n$\\frac{2^{k/2} } {\\sqrt{k!}} \\pi^{1/4}$.\nOne has $\\lim_k \\|\\psi_k\\|_{L^2}^{1/k} = 0 $, so\nthe series\n$$ \\Psi_t(x) = \n\\lim_{N\\rightarrow \\infty}\n \\sum_{k=0}^{N-1} t^k \\psi_k(x)$$\nis norm convergent in $L^2$ for all complex $t$.\n$\\Psi_t$ is therefore an analytic $L^2$ valued function of $t\\in {\\Bbb C}$.\nCoefficients in an analytic series are uniquely determined by the function and may be recovered e.g. by a Cauchy integral (which works fine in Hilbert or Banach spaces). 
You may also recover them simply from looking at successive derivatives at $0$, as you mention.\nSince Fourier transformation (denoted by a 'hat') preserves the $L^2$ norm \n we automatically get:\n $$ \\widehat{\\Psi}_t(\\xi) = \n\\lim_{N\\rightarrow \\infty}\n \\sum_{k=0}^{N-1} t^k \\widehat{\\psi}_k(\\xi),$$\nagain an analytic $L^2$ valued function.\nAn issue is how to calculate the two sums (as $L^2$ limits) but\nas you say, in your book they prove that\n$\\Psi_t(x)$ (in $L^2$) is identical to \n$$ \\Phi_t= \n\\exp \\left( 2xt-t^2-x^2/2 \\right) $$\nThis then resolves the problem since as elements of $L^2$:\n$$\\widehat{\\Psi}_t(\\xi)=\n\\widehat{\\Phi}_t(\\xi) = \\exp \\left( -2it\\xi+t^2-\\xi^2/2 \\right)\n = \\Phi_{-it}(\\xi) =\\Psi_{-it}(\\xi)$$\nIdentifying the analytic expansions for \n$\\widehat{\\Psi}_t(\\xi)=\\Psi_{-it}(\\xi)$\nyou must have:\n$\\widehat{\\psi_k}(\\xi) = (-i)^k \\psi_k(\\xi)$ in $L^2$ (where possibly a factor of $\\sqrt{2\\pi}$ should be included).\nAs both are actually smooth functions, you get the wanted identity.</p>\n\n<p>Returning to the limit in $L^2$, let me provide a relatively simple proof: The defining equation,\n$H_k(x)=(-1)^k e^{x^2} \\partial_x^k e^{-x^2}$ shows that $\\psi_k=\\frac{1}{k!}H_k(x) e^{-x^2/2}$ itself may be expressed through $\\Phi_t$\nas a Cauchy integral as follows:\n$$ \\psi_k(x)= \n (-1)^k e^{x^2/2} \\oint_{\\partial B(x,R)}\n \\frac{e^{-z^2}}{(z-x)^{k+1}} \\frac{dz}{2\\pi i} = \n \\oint_{\\partial B(0,R)}\n \\frac{e^{-x^2/2+2xu-u^2}}{u^{k+1}} \\frac{du}{2\\pi i}$$\nwhere $R&gt;0$ is arbitrary. From this we get the bound $|\\psi_k(x)| \\leq R^{-(k+1)} e^{-x^2/2+2|x|R +R^2}$ which is clearly square integrable\n(in $x$) so that\n$\\psi_k$ is an element in $L^2$.</p>\n\n<p>Now fix a value of $t$ and let $R&gt;|t|$. 
Then by summing a geometric series:\n $$\n \\sum_{k=0}^{N-1} t^k \\psi_k(x) = \n \\oint_{\\partial B(0,R)}\n (1 - (t/u)^N)\\frac{e^{-x^2/2+2xu-u^2}}{u-t} \\frac{du}{2\\pi i}=\n\\Phi_t(x) + {\\cal R}^{N}(t,x)$$\nwhere we have the following bound for the remainder:\n $$ |{\\cal R}^{N}(t,x)| \\leq \\frac{(|t|/R)^N}{R-|t|} \n e^{-x^2/2+2R|x|+R^2}.$$\nThis is indeed a function in $L^2({\\Bbb R})$ and goes to zero in norm when\n $N\\rightarrow \\infty$, thus justifying the above identifications.</p>\n" }, { "answer_id": 2115474, "score": 6, "text": "<p>Yes, once you have the equation\n$$\n \\sum_{n}\\mathscr{F}(e^{-x^2/2}H_n(x))\\frac{t^n}{n!}\n = \\sum_{n}\\sqrt{2\\pi}(-i)^n H_n(\\xi)e^{-\\xi^2/2}\\frac{t^n}{n!},\n$$\nthen you can fix $\\xi$, and view this as a power series equation that holds for all $t$. Therefore, for this fixed $\\xi$, the power series coefficients must be identical, which leads to\n$$\n \\mathscr{F}(e^{-x^2}H_n(x))=\\sqrt{2\\pi}(-i)^n H_n(\\xi)\n$$\nThe left side implicitly depends on $\\xi$, because it is the transform variable. The transform is continuous in $\\xi$ because of the exponentially decaying nature of the function being transformed. 
So\n$$\n \\mathscr{F}(e^{-x^2}H_n(x))(\\xi)=\\sqrt{2\\pi}(-i)^n H_n(\\xi),\\;\\;\\xi\\in\\mathbb{R}.\n$$\nOmitting the arguments of the functions, these functions are equal:\n$$ \\mathscr{F}(e^{-x^2}H_n(x)) = \\sqrt{2\\pi}(-i)^n H_n\n$$</p>\n" }, { "answer_id": 2140023, "score": 1, "text": "<p>Once you know that the series in question converges absolutely in $L^2$ \nthen the identification becomes automatic.\nWriting $\\psi_k(x) = \\frac{1}{k!} H_k(x) e^{-x^2/2}$, $k\\geq 0$\nthen each\n$\\psi_k$ is an element in the Hilbert space $L^2({\\Bbb R})$ \nhaving norm\n$\\frac{2^{k/2} } {\\sqrt{k!}} \\pi^{1/4}$.\nOne has $\\lim_k \\|\\psi_k\\|_{L^2}^{1/k} = 0 $, so\nthe series\n$$ \\Psi_t(x) = \n\\lim_{N\\rightarrow \\infty}\n \\sum_{k=0}^{N-1} t^k \\psi_k(x)$$\nis norm convergent in $L^2$ for all complex $t$.\n$\\Psi_t$ is therefore an analytic $L^2$ valued function of $t\\in {\\Bbb C}$.\nCoefficients in an analytic series are uniquely determined by the function and may be recovered e.g. by a Cauchy integral (which works fine in Hilbert or Banach spaces). 
You may also recover them simply from looking at successive derivatives at $0$, as you mention.\nSince Fourier transformation (denoted by a 'hat') preserves the $L^2$ norm \n we automatically get:\n $$ \\widehat{\\Psi}_t(\\xi) = \n\\lim_{N\\rightarrow \\infty}\n \\sum_{k=0}^{N-1} t^k \\widehat{\\psi}_k(\\xi),$$\nagain an analytic $L^2$ valued function.\nAn issue is how to calculate the two sums (as $L^2$ limits) but\nas you say, in your book they prove that\n$\\Psi_t(x)$ (in $L^2$) is identical to \n$$ \\Phi_t= \n\\exp \\left( 2xt-t^2-x^2/2 \\right) $$\nThis then resolves the problem since as elements of $L^2$:\n$$\\widehat{\\Psi}_t(\\xi)=\n\\widehat{\\Phi}_t(\\xi) = \\exp \\left( -2it\\xi+t^2-\\xi^2/2 \\right)\n = \\Phi_{-it}(\\xi) =\\Psi_{-it}(\\xi)$$\nIdentifying the analytic expansions for \n$\\widehat{\\Psi}_t(\\xi)=\\Psi_{-it}(\\xi)$\nyou must have:\n$\\widehat{\\psi_k}(\\xi) = (-i)^k \\psi_k(\\xi)$ in $L^2$ (where possibly a factor of $\\sqrt{2\\pi}$ should be included).\nAs both are actually smooth functions, you get the wanted identity.</p>\n\n<p>Returning to the limit in $L^2$, let me provide a relatively simple proof: The defining equation,\n$H_k(x)=(-1)^k e^{x^2} \\partial_x^k e^{-x^2}$ shows that $\\psi_k=\\frac{1}{k!}H_k(x) e^{-x^2/2}$ itself may be expressed through $\\Phi_t$\nas a Cauchy integral as follows:\n$$ \\psi_k(x)= \n (-1)^k e^{x^2/2} \\oint_{\\partial B(x,R)}\n \\frac{e^{-z^2}}{(z-x)^{k+1}} \\frac{dz}{2\\pi i} = \n \\oint_{\\partial B(0,R)}\n \\frac{e^{-x^2/2+2xu-u^2}}{u^{k+1}} \\frac{du}{2\\pi i}$$\nwhere $R&gt;0$ is arbitrary. From this we get the bound $|\\psi_k(x)| \\leq R^{-(k+1)} e^{-x^2/2+2|x|R +R^2}$ which is clearly square integrable\n(in $x$) so that\n$\\psi_k$ is an element in $L^2$.</p>\n\n<p>Now fix a value of $t$ and let $R&gt;|t|$. 
Then by summing a geometric series:\n $$\n \\sum_{k=0}^{N-1} t^k \\psi_k(x) = \n \\oint_{\\partial B(0,R)}\n (1 - (t/u)^N)\\frac{e^{-x^2/2+2xu-u^2}}{u-t} \\frac{du}{2\\pi i}=\n\\Phi_t(x) + {\\cal R}^{N}(t,x)$$\nwhere we have the following bound for the remainder:\n $$ |{\\cal R}^{N}(t,x)| \\leq \\frac{(|t|/R)^N}{R-|t|} \n e^{-x^2/2+2R|x|+R^2}.$$\nThis is indeed a function in $L^2({\\Bbb R})$ and goes to zero in norm when\n $N\\rightarrow \\infty$, thus justifying the above identifications.</p>\n" }, { "answer_id": 2115474, "score": 6, "text": "<p>Yes, once you have the equation\n$$\n \\sum_{n}\\mathscr{F}(e^{-x^2/2}H_n(x))\\frac{t^n}{n!}\n = \\sum_{n}\\sqrt{2\\pi}(-i)^n H_n(\\xi)e^{-\\xi^2/2}\\frac{t^n}{n!},\n$$\nthen you can fix $\\xi$, and view this as a power series equation that holds for all $t$. Therefore, for this fixed $\\xi$, the power series coefficients must be identical, which leads to\n$$\n \\mathscr{F}(e^{-x^2}H_n(x))=\\sqrt{2\\pi}(-i)^n H_n(\\xi)\n$$\nThe left side implicitly depends on $\\xi$, because it is the transform variable. The transform is continuous in $\\xi$ because of the exponentially decaying nature of the function being transformed. 
So\n$$\n \\mathscr{F}(e^{-x^2}H_n(x))(\\xi)=\\sqrt{2\\pi}(-i)^n H_n(\\xi),\\;\\;\\xi\\in\\mathbb{R}.\n$$\nOmitting the arguments of the functions, these functions are equal:\n$$ \\mathscr{F}(e^{-x^2}H_n(x)) = \\sqrt{2\\pi}(-i)^n H_n\n$$</p>\n" }, { "answer_id": 2140023, "score": 1, "text": "<p>Once you know that the series in question converges absolutely in $L^2$ \nthen the identification becomes automatic.\nWriting $\\psi_k(x) = \\frac{1}{k!} H_k(x) e^{-x^2/2}$, $k\\geq 0$\nthen each\n$\\psi_k$ is an element in the Hilbert space $L^2({\\Bbb R})$ \nhaving norm\n$\\frac{2^{k/2} } {\\sqrt{k!}} \\pi^{1/4}$.\nOne has $\\lim_k \\|\\psi_k\\|_{L^2}^{1/k} = 0 $, so\nthe series\n$$ \\Psi_t(x) = \n\\lim_{N\\rightarrow \\infty}\n \\sum_{k=0}^{N-1} t^k \\psi_k(x)$$\nis norm convergent in $L^2$ for all complex $t$.\n$\\Psi_t$ is therefore an analytic $L^2$ valued function of $t\\in {\\Bbb C}$.\nCoefficients in an analytic series are uniquely determined by the function and may be recovered e.g. by a Cauchy integral (which works fine in Hilbert or Banach spaces). 
You may also recover them simply from looking at successive derivatives at $0$, as you mention.\nSince Fourier transformation (denoted by a 'hat') preserves the $L^2$ norm \n we automatically get:\n $$ \\widehat{\\Psi}_t(\\xi) = \n\\lim_{N\\rightarrow \\infty}\n \\sum_{k=0}^{N-1} t^k \\widehat{\\psi}_k(\\xi),$$\nagain an analytic $L^2$ valued function.\nAn issue is how to calculate the two sums (as $L^2$ limits) but\nas you say, in your book they prove that\n$\\Psi_t(x)$ (in $L^2$) is identical to \n$$ \\Phi_t= \n\\exp \\left( 2xt-t^2-x^2/2 \\right) $$\nThis then resolves the problem since as elements of $L^2$:\n$$\\widehat{\\Psi}_t(\\xi)=\n\\widehat{\\Phi}_t(\\xi) = \\exp \\left( -2it\\xi+t^2-\\xi^2/2 \\right)\n = \\Phi_{-it}(\\xi) =\\Psi_{-it}(\\xi)$$\nIdentifying the analytic expansions for \n$\\widehat{\\Psi}_t(\\xi)=\\Psi_{-it}(\\xi)$\nyou must have:\n$\\widehat{\\psi_k}(\\xi) = (-i)^k \\psi_k(\\xi)$ in $L^2$ (where possibly a factor of $\\sqrt{2\\pi}$ should be included).\nAs both are actually smooth functions, you get the wanted identity.</p>\n\n<p>Returning to the limit in $L^2$, let me provide a relatively simple proof: The defining equation,\n$H_k(x)=(-1)^k e^{x^2} \\partial_x^k e^{-x^2}$ shows that $\\psi_k=\\frac{1}{k!}H_k(x) e^{-x^2/2}$ itself may be expressed through $\\Phi_t$\nas a Cauchy integral as follows:\n$$ \\psi_k(x)= \n (-1)^k e^{x^2/2} \\oint_{\\partial B(x,R)}\n \\frac{e^{-z^2}}{(z-x)^{k+1}} \\frac{dz}{2\\pi i} = \n \\oint_{\\partial B(0,R)}\n \\frac{e^{-x^2/2+2xu-u^2}}{u^{k+1}} \\frac{du}{2\\pi i}$$\nwhere $R&gt;0$ is arbitrary. From this we get the bound $|\\psi_k(x)| \\leq R^{-(k+1)} e^{-x^2/2+2|x|R +R^2}$ which is clearly square integrable\n(in $x$) so that\n$\\psi_k$ is an element in $L^2$.</p>\n\n<p>Now fix a value of $t$ and let $R&gt;|t|$. 
Then by summing a geometric series:\n $$\n \\sum_{k=0}^{N-1} t^k \\psi_k(x) = \n \\oint_{\\partial B(0,R)}\n (1 - (t/u)^N)\\frac{e^{-x^2/2+2xu-u^2}}{u-t} \\frac{du}{2\\pi i}=\n\\Phi_t(x) + {\\cal R}^{N}(t,x)$$\nwhere we have the following bound for the remainder:\n $$ |{\\cal R}^{N}(t,x)| \\leq \\frac{(|t|/R)^N}{R-|t|} \n e^{-x^2/2+2R|x|+R^2}.$$\nThis is indeed a function in $L^2({\\Bbb R})$ and goes to zero in norm when\n $N\\rightarrow \\infty$, thus justifying the above identifications.</p>\n" }, { "answer_id": 2115474, "score": 6, "text": "<p>Yes, once you have the equation\n$$\n \\sum_{n}\\mathscr{F}(e^{-x^2/2}H_n(x))\\frac{t^n}{n!}\n = \\sum_{n}\\sqrt{2\\pi}(-i)^n H_n(\\xi)e^{-\\xi^2/2}\\frac{t^n}{n!},\n$$\nthen you can fix $\\xi$, and view this as a power series equation that holds for all $t$. Therefore, for this fixed $\\xi$, the power series coefficients must be identical, which leads to\n$$\n \\mathscr{F}(e^{-x^2}H_n(x))=\\sqrt{2\\pi}(-i)^n H_n(\\xi)\n$$\nThe left side implicitly depends on $\\xi$, because it is the transform variable. The transform is continuous in $\\xi$ because of the exponentially decaying nature of the function being transformed. 
So\n$$\n \\mathscr{F}(e^{-x^2/2}H_n(x))(\\xi)=\\sqrt{2\\pi}(-i)^n H_n(\\xi)e^{-\\xi^2/2},\\;\\;\\xi\\in\\mathbb{R}.\n$$\nWriting $h_n(x)=H_n(x)e^{-x^2/2}$ and omitting the arguments, these functions are equal:\n$$ \\mathscr{F}h_n = \\sqrt{2\\pi}(-i)^n h_n\n$$</p>\n" }, { "answer_id": 2140023, "score": 1, "text": "<p>Once you know that the series in question converges absolutely in $L^2$, \nthen the identification becomes automatic.\nWriting $\\psi_k(x) = \\frac{1}{k!} H_k(x) e^{-x^2/2}$, $k\\geq 0$,\nthen each\n$\\psi_k$ is an element in the Hilbert space $L^2({\\Bbb R})$ \nhaving norm\n$\\frac{2^{k/2} } {\\sqrt{k!}} \\pi^{1/4}$.\nOne has $\\lim_k \\|\\psi_k\\|_{L^2}^{1/k} = 0 $, so\nthe series\n$$ \\Psi_t(x) = \n\\lim_{N\\rightarrow \\infty}\n \\sum_{k=0}^{N-1} t^k \\psi_k(x)$$\nis norm convergent in $L^2$ for all complex $t$.\n$\\Psi_t$ is therefore an analytic $L^2$-valued function of $t\\in {\\Bbb C}$.\nCoefficients in an analytic series are uniquely determined by the function and may be recovered e.g. by a Cauchy integral (which works fine in Hilbert or Banach spaces). 
You may also recover them simply from looking at successive derivatives at $0$, as you mention.\nSince Fourier transformation (denoted by a 'hat') preserves the $L^2$ norm \n we automatically get:\n $$ \\widehat{\\Psi}_t(\\xi) = \n\\lim_{N\\rightarrow \\infty}\n \\sum_{k=0}^{N-1} t^k \\widehat{\\psi}_k(\\xi),$$\nagain an analytic $L^2$ valued function.\nAn issue is how to calculate the two sums (as $L^2$ limits) but\nas you say, in your book they prove that\n$\\Psi_t(x)$ (in $L^2$) is identical to \n$$ \\Phi_t= \n\\exp \\left( 2xt-t^2-x^2/2 \\right) $$\nThis then resolves the problem since as elements of $L^2$:\n$$\\widehat{\\Psi}_t(\\xi)=\n\\widehat{\\Phi}_t(\\xi) = \\exp \\left( -2it\\xi+t^2-\\xi^2/2 \\right)\n = \\Phi_{-it}(\\xi) =\\Psi_{-it}(\\xi)$$\nIdentifying the analytic expansions for \n$\\widehat{\\Psi}_t(\\xi)=\\Psi_{-it}(\\xi)$\nyou must have:\n$\\widehat{\\psi_k}(\\xi) = (-i)^k \\psi_k(\\xi)$ in $L^2$ (where possibly a factor of $\\sqrt{2\\pi}$ should be included).\nAs both are actually smooth functions, you get the wanted identity.</p>\n\n<p>Returning to the limit in $L^2$, let me provide a relatively simple proof: The defining equation,\n$H_k(x)=(-1)^k e^{x^2} \\partial_x^k e^{-x^2}$ shows that $\\psi_k=\\frac{1}{k!}H_k(x) e^{-x^2/2}$ itself may be expressed through $\\Phi_t$\nas a Cauchy integral as follows:\n$$ \\psi_k(x)= \n (-1)^k e^{x^2/2} \\oint_{\\partial B(x,R)}\n \\frac{e^{-z^2}}{(z-x)^{k+1}} \\frac{dz}{2\\pi i} = \n \\oint_{\\partial B(0,R)}\n \\frac{e^{-x^2/2+2xu-u^2}}{u^{k+1}} \\frac{du}{2\\pi i}$$\nwhere $R&gt;0$ is arbitrary. From this we get the bound $|\\psi_k(x)| \\leq R^{-(k+1)} e^{-x^2/2+2|x|R +R^2}$ which is clearly square integrable\n(in $x$) so that\n$\\psi_k$ is an element in $L^2$.</p>\n\n<p>Now fix a value of $t$ and let $R&gt;|t|$. 
Then by summing a geometric series:\n $$\n \\sum_{k=0}^{N-1} t^k \\psi_k(x) = \n \\oint_{\\partial B(0,R)}\n (1 - (t/u)^N)\\frac{e^{-x^2/2+2xu-u^2}}{u-t} \\frac{du}{2\\pi i}=\n\\Phi_t(x) + {\\cal R}^{N}(t,x)$$\nwhere we have the following bound for the remainder:\n $$ |{\\cal R}^{N}(t,x)| \\leq \\frac{(|t|/R)^N}{R-|t|} \n e^{-x^2/2+2R|x|+R^2}.$$\nThis is indeed a function in $L^2({\\Bbb R})$ and goes to zero in norm when\n $N\\rightarrow \\infty$, thus justifying the above identifications.</p>\n" }, { "answer_id": 2115474, "score": 6, "text": "<p>Yes, once you have the equation\n$$\n \\sum_{n}\\mathscr{F}(e^{-x^2/2}H_n(x))\\frac{t^n}{n!}\n = \\sum_{n}\\sqrt{2\\pi}(-i)^n H_n(\\xi)e^{-\\xi^2/2}\\frac{t^n}{n!},\n$$\nthen you can fix $\\xi$, and view this as a power series equation that holds for all $t$. Therefore, for this fixed $\\xi$, the power series coefficients must be identical, which leads to\n$$\n \\mathscr{F}(e^{-x^2}H_n(x))=\\sqrt{2\\pi}(-i)^n H_n(\\xi)\n$$\nThe left side implicitly depends on $\\xi$, because it is the transform variable. The transform is continuous in $\\xi$ because of the exponentially decaying nature of the function being transformed. 
So\n$$\n \\mathscr{F}(e^{-x^2}H_n(x))(\\xi)=\\sqrt{2\\pi}(-i)^n H_n(\\xi),\\;\\;\\xi\\in\\mathbb{R}.\n$$\nOmitting the arguments of the functions, these functions are equal:\n$$ \\mathscr{F}(e^{-x^2}H_n(x)) = \\sqrt{2\\pi}(-i)^n H_n\n$$</p>\n" }, { "answer_id": 2140023, "score": 1, "text": "<p>Once you know that the series in question converges absolutely in $L^2$ \nthen the identification becomes automatic.\nWriting $\\psi_k(x) = \\frac{1}{k!} H_k(x) e^{-x^2/2}$, $k\\geq 0$\nthen each\n$\\psi_k$ is an element in the Hilbert space $L^2({\\Bbb R})$ \nhaving norm\n$\\frac{2^{k/2} } {\\sqrt{k!}} \\pi^{1/4}$.\nOne has $\\lim_k \\|\\psi_k\\|_{L^2}^{1/k} = 0 $, so\nthe series\n$$ \\Psi_t(x) = \n\\lim_{N\\rightarrow \\infty}\n \\sum_{k=0}^{N-1} t^k \\psi_k(x)$$\nis norm convergent in $L^2$ for all complex $t$.\n$\\Psi_t$ is therefore an analytic $L^2$ valued function of $t\\in {\\Bbb C}$.\nCoefficients in an analytic series are uniquely determined by the function and may be recovered e.g. by a Cauchy integral (which works fine in Hilbert or Banach spaces). 
You may also recover them simply from looking at successive derivatives at $0$, as you mention.\nSince Fourier transformation (denoted by a 'hat') preserves the $L^2$ norm \n we automatically get:\n $$ \\widehat{\\Psi}_t(\\xi) = \n\\lim_{N\\rightarrow \\infty}\n \\sum_{k=0}^{N-1} t^k \\widehat{\\psi}_k(\\xi),$$\nagain an analytic $L^2$ valued function.\nAn issue is how to calculate the two sums (as $L^2$ limits) but\nas you say, in your book they prove that\n$\\Psi_t(x)$ (in $L^2$) is identical to \n$$ \\Phi_t= \n\\exp \\left( 2xt-t^2-x^2/2 \\right) $$\nThis then resolves the problem since as elements of $L^2$:\n$$\\widehat{\\Psi}_t(\\xi)=\n\\widehat{\\Phi}_t(\\xi) = \\exp \\left( -2it\\xi+t^2-\\xi^2/2 \\right)\n = \\Phi_{-it}(\\xi) =\\Psi_{-it}(\\xi)$$\nIdentifying the analytic expansions for \n$\\widehat{\\Psi}_t(\\xi)=\\Psi_{-it}(\\xi)$\nyou must have:\n$\\widehat{\\psi_k}(\\xi) = (-i)^k \\psi_k(\\xi)$ in $L^2$ (where possibly a factor of $\\sqrt{2\\pi}$ should be included).\nAs both are actually smooth functions, you get the wanted identity.</p>\n\n<p>Returning to the limit in $L^2$, let me provide a relatively simple proof: The defining equation,\n$H_k(x)=(-1)^k e^{x^2} \\partial_x^k e^{-x^2}$ shows that $\\psi_k=\\frac{1}{k!}H_k(x) e^{-x^2/2}$ itself may be expressed through $\\Phi_t$\nas a Cauchy integral as follows:\n$$ \\psi_k(x)= \n (-1)^k e^{x^2/2} \\oint_{\\partial B(x,R)}\n \\frac{e^{-z^2}}{(z-x)^{k+1}} \\frac{dz}{2\\pi i} = \n \\oint_{\\partial B(0,R)}\n \\frac{e^{-x^2/2+2xu-u^2}}{u^{k+1}} \\frac{du}{2\\pi i}$$\nwhere $R&gt;0$ is arbitrary. From this we get the bound $|\\psi_k(x)| \\leq R^{-(k+1)} e^{-x^2/2+2|x|R +R^2}$ which is clearly square integrable\n(in $x$) so that\n$\\psi_k$ is an element in $L^2$.</p>\n\n<p>Now fix a value of $t$ and let $R&gt;|t|$. 
Then by summing a geometric series:\n $$\n \\sum_{k=0}^{N-1} t^k \\psi_k(x) = \n \\oint_{\\partial B(0,R)}\n (1 - (t/u)^N)\\frac{e^{-x^2/2+2xu-u^2}}{u-t} \\frac{du}{2\\pi i}=\n\\Phi_t(x) + {\\cal R}^{N}(t,x)$$\nwhere we have the following bound for the remainder:\n $$ |{\\cal R}^{N}(t,x)| \\leq \\frac{(|t|/R)^N}{R-|t|} \n e^{-x^2/2+2R|x|+R^2}.$$\nThis is indeed a function in $L^2({\\Bbb R})$ and goes to zero in norm when\n $N\\rightarrow \\infty$, thus justifying the above identifications.</p>\n" }, { "answer_id": 2115474, "score": 6, "text": "<p>Yes, once you have the equation\n$$\n \\sum_{n}\\mathscr{F}(e^{-x^2/2}H_n(x))\\frac{t^n}{n!}\n = \\sum_{n}\\sqrt{2\\pi}(-i)^n H_n(\\xi)e^{-\\xi^2/2}\\frac{t^n}{n!},\n$$\nthen you can fix $\\xi$, and view this as a power series equation that holds for all $t$. Therefore, for this fixed $\\xi$, the power series coefficients must be identical, which leads to\n$$\n \\mathscr{F}(e^{-x^2}H_n(x))=\\sqrt{2\\pi}(-i)^n H_n(\\xi)\n$$\nThe left side implicitly depends on $\\xi$, because it is the transform variable. The transform is continuous in $\\xi$ because of the exponentially decaying nature of the function being transformed. 
So\n$$\n \\mathscr{F}(e^{-x^2}H_n(x))(\\xi)=\\sqrt{2\\pi}(-i)^n H_n(\\xi),\\;\\;\\xi\\in\\mathbb{R}.\n$$\nOmitting the arguments of the functions, these functions are equal:\n$$ \\mathscr{F}(e^{-x^2}H_n(x)) = \\sqrt{2\\pi}(-i)^n H_n\n$$</p>\n" }, { "answer_id": 2140023, "score": 1, "text": "<p>Once you know that the series in question converges absolutely in $L^2$ \nthen the identification becomes automatic.\nWriting $\\psi_k(x) = \\frac{1}{k!} H_k(x) e^{-x^2/2}$, $k\\geq 0$\nthen each\n$\\psi_k$ is an element in the Hilbert space $L^2({\\Bbb R})$ \nhaving norm\n$\\frac{2^{k/2} } {\\sqrt{k!}} \\pi^{1/4}$.\nOne has $\\lim_k \\|\\psi_k\\|_{L^2}^{1/k} = 0 $, so\nthe series\n$$ \\Psi_t(x) = \n\\lim_{N\\rightarrow \\infty}\n \\sum_{k=0}^{N-1} t^k \\psi_k(x)$$\nis norm convergent in $L^2$ for all complex $t$.\n$\\Psi_t$ is therefore an analytic $L^2$ valued function of $t\\in {\\Bbb C}$.\nCoefficients in an analytic series are uniquely determined by the function and may be recovered e.g. by a Cauchy integral (which works fine in Hilbert or Banach spaces). 
You may also recover them simply from looking at successive derivatives at $0$, as you mention.\nSince Fourier transformation (denoted by a 'hat') preserves the $L^2$ norm \n we automatically get:\n $$ \\widehat{\\Psi}_t(\\xi) = \n\\lim_{N\\rightarrow \\infty}\n \\sum_{k=0}^{N-1} t^k \\widehat{\\psi}_k(\\xi),$$\nagain an analytic $L^2$ valued function.\nAn issue is how to calculate the two sums (as $L^2$ limits) but\nas you say, in your book they prove that\n$\\Psi_t(x)$ (in $L^2$) is identical to \n$$ \\Phi_t= \n\\exp \\left( 2xt-t^2-x^2/2 \\right) $$\nThis then resolves the problem since as elements of $L^2$:\n$$\\widehat{\\Psi}_t(\\xi)=\n\\widehat{\\Phi}_t(\\xi) = \\exp \\left( -2it\\xi+t^2-\\xi^2/2 \\right)\n = \\Phi_{-it}(\\xi) =\\Psi_{-it}(\\xi)$$\nIdentifying the analytic expansions for \n$\\widehat{\\Psi}_t(\\xi)=\\Psi_{-it}(\\xi)$\nyou must have:\n$\\widehat{\\psi_k}(\\xi) = (-i)^k \\psi_k(\\xi)$ in $L^2$ (where possibly a factor of $\\sqrt{2\\pi}$ should be included).\nAs both are actually smooth functions, you get the wanted identity.</p>\n\n<p>Returning to the limit in $L^2$, let me provide a relatively simple proof: The defining equation,\n$H_k(x)=(-1)^k e^{x^2} \\partial_x^k e^{-x^2}$ shows that $\\psi_k=\\frac{1}{k!}H_k(x) e^{-x^2/2}$ itself may be expressed through $\\Phi_t$\nas a Cauchy integral as follows:\n$$ \\psi_k(x)= \n (-1)^k e^{x^2/2} \\oint_{\\partial B(x,R)}\n \\frac{e^{-z^2}}{(z-x)^{k+1}} \\frac{dz}{2\\pi i} = \n \\oint_{\\partial B(0,R)}\n \\frac{e^{-x^2/2+2xu-u^2}}{u^{k+1}} \\frac{du}{2\\pi i}$$\nwhere $R&gt;0$ is arbitrary. From this we get the bound $|\\psi_k(x)| \\leq R^{-(k+1)} e^{-x^2/2+2|x|R +R^2}$ which is clearly square integrable\n(in $x$) so that\n$\\psi_k$ is an element in $L^2$.</p>\n\n<p>Now fix a value of $t$ and let $R&gt;|t|$. 
Then by summing a geometric series:\n $$\n \\sum_{k=0}^{N-1} t^k \\psi_k(x) = \n \\oint_{\\partial B(0,R)}\n (1 - (t/u)^N)\\frac{e^{-x^2/2+2xu-u^2}}{u-t} \\frac{du}{2\\pi i}=\n\\Phi_t(x) + {\\cal R}^{N}(t,x)$$\nwhere we have the following bound for the remainder:\n $$ |{\\cal R}^{N}(t,x)| \\leq \\frac{(|t|/R)^N}{R-|t|} \n e^{-x^2/2+2R|x|+R^2}.$$\nThis is indeed a function in $L^2({\\Bbb R})$ and goes to zero in norm when\n $N\\rightarrow \\infty$, thus justifying the above identifications.</p>\n" }, { "answer_id": 2115474, "score": 6, "text": "<p>Yes, once you have the equation\n$$\n \\sum_{n}\\mathscr{F}(e^{-x^2/2}H_n(x))\\frac{t^n}{n!}\n = \\sum_{n}\\sqrt{2\\pi}(-i)^n H_n(\\xi)e^{-\\xi^2/2}\\frac{t^n}{n!},\n$$\nthen you can fix $\\xi$, and view this as a power series equation that holds for all $t$. Therefore, for this fixed $\\xi$, the power series coefficients must be identical, which leads to\n$$\n \\mathscr{F}(e^{-x^2}H_n(x))=\\sqrt{2\\pi}(-i)^n H_n(\\xi)\n$$\nThe left side implicitly depends on $\\xi$, because it is the transform variable. The transform is continuous in $\\xi$ because of the exponentially decaying nature of the function being transformed. 
So\n$$\n \\mathscr{F}(e^{-x^2}H_n(x))(\\xi)=\\sqrt{2\\pi}(-i)^n H_n(\\xi),\\;\\;\\xi\\in\\mathbb{R}.\n$$\nOmitting the arguments of the functions, these functions are equal:\n$$ \\mathscr{F}(e^{-x^2}H_n(x)) = \\sqrt{2\\pi}(-i)^n H_n\n$$</p>\n" }, { "answer_id": 2140023, "score": 1, "text": "<p>Once you know that the series in question converges absolutely in $L^2$ \nthen the identification becomes automatic.\nWriting $\\psi_k(x) = \\frac{1}{k!} H_k(x) e^{-x^2/2}$, $k\\geq 0$\nthen each\n$\\psi_k$ is an element in the Hilbert space $L^2({\\Bbb R})$ \nhaving norm\n$\\frac{2^{k/2} } {\\sqrt{k!}} \\pi^{1/4}$.\nOne has $\\lim_k \\|\\psi_k\\|_{L^2}^{1/k} = 0 $, so\nthe series\n$$ \\Psi_t(x) = \n\\lim_{N\\rightarrow \\infty}\n \\sum_{k=0}^{N-1} t^k \\psi_k(x)$$\nis norm convergent in $L^2$ for all complex $t$.\n$\\Psi_t$ is therefore an analytic $L^2$ valued function of $t\\in {\\Bbb C}$.\nCoefficients in an analytic series are uniquely determined by the function and may be recovered e.g. by a Cauchy integral (which works fine in Hilbert or Banach spaces). 
You may also recover them simply from looking at successive derivatives at $0$, as you mention.\nSince Fourier transformation (denoted by a 'hat') preserves the $L^2$ norm \n we automatically get:\n $$ \\widehat{\\Psi}_t(\\xi) = \n\\lim_{N\\rightarrow \\infty}\n \\sum_{k=0}^{N-1} t^k \\widehat{\\psi}_k(\\xi),$$\nagain an analytic $L^2$ valued function.\nAn issue is how to calculate the two sums (as $L^2$ limits) but\nas you say, in your book they prove that\n$\\Psi_t(x)$ (in $L^2$) is identical to \n$$ \\Phi_t= \n\\exp \\left( 2xt-t^2-x^2/2 \\right) $$\nThis then resolves the problem since as elements of $L^2$:\n$$\\widehat{\\Psi}_t(\\xi)=\n\\widehat{\\Phi}_t(\\xi) = \\exp \\left( -2it\\xi+t^2-\\xi^2/2 \\right)\n = \\Phi_{-it}(\\xi) =\\Psi_{-it}(\\xi)$$\nIdentifying the analytic expansions for \n$\\widehat{\\Psi}_t(\\xi)=\\Psi_{-it}(\\xi)$\nyou must have:\n$\\widehat{\\psi_k}(\\xi) = (-i)^k \\psi_k(\\xi)$ in $L^2$ (where possibly a factor of $\\sqrt{2\\pi}$ should be included).\nAs both are actually smooth functions, you get the wanted identity.</p>\n\n<p>Returning to the limit in $L^2$, let me provide a relatively simple proof: The defining equation,\n$H_k(x)=(-1)^k e^{x^2} \\partial_x^k e^{-x^2}$ shows that $\\psi_k=\\frac{1}{k!}H_k(x) e^{-x^2/2}$ itself may be expressed through $\\Phi_t$\nas a Cauchy integral as follows:\n$$ \\psi_k(x)= \n (-1)^k e^{x^2/2} \\oint_{\\partial B(x,R)}\n \\frac{e^{-z^2}}{(z-x)^{k+1}} \\frac{dz}{2\\pi i} = \n \\oint_{\\partial B(0,R)}\n \\frac{e^{-x^2/2+2xu-u^2}}{u^{k+1}} \\frac{du}{2\\pi i}$$\nwhere $R&gt;0$ is arbitrary. From this we get the bound $|\\psi_k(x)| \\leq R^{-(k+1)} e^{-x^2/2+2|x|R +R^2}$ which is clearly square integrable\n(in $x$) so that\n$\\psi_k$ is an element in $L^2$.</p>\n\n<p>Now fix a value of $t$ and let $R&gt;|t|$. 
Then by summing a geometric series:\n $$\n \\sum_{k=0}^{N-1} t^k \\psi_k(x) = \n \\oint_{\\partial B(0,R)}\n (1 - (t/u)^N)\\frac{e^{-x^2/2+2xu-u^2}}{u-t} \\frac{du}{2\\pi i}=\n\\Phi_t(x) + {\\cal R}^{N}(t,x)$$\nwhere we have the following bound for the remainder:\n $$ |{\\cal R}^{N}(t,x)| \\leq \\frac{(|t|/R)^N}{R-|t|} \n e^{-x^2/2+2R|x|+R^2}.$$\nThis is indeed a function in $L^2({\\Bbb R})$ and goes to zero in norm when\n $N\\rightarrow \\infty$, thus justifying the above identifications.</p>\n" }, { "answer_id": 2115474, "score": 6, "text": "<p>Yes, once you have the equation\n$$\n \\sum_{n}\\mathscr{F}(e^{-x^2/2}H_n(x))\\frac{t^n}{n!}\n = \\sum_{n}\\sqrt{2\\pi}(-i)^n H_n(\\xi)e^{-\\xi^2/2}\\frac{t^n}{n!},\n$$\nthen you can fix $\\xi$, and view this as a power series equation that holds for all $t$. Therefore, for this fixed $\\xi$, the power series coefficients must be identical, which leads to\n$$\n \\mathscr{F}(e^{-x^2}H_n(x))=\\sqrt{2\\pi}(-i)^n H_n(\\xi)\n$$\nThe left side implicitly depends on $\\xi$, because it is the transform variable. The transform is continuous in $\\xi$ because of the exponentially decaying nature of the function being transformed. 
So\n$$\n \\mathscr{F}(e^{-x^2}H_n(x))(\\xi)=\\sqrt{2\\pi}(-i)^n H_n(\\xi),\\;\\;\\xi\\in\\mathbb{R}.\n$$\nOmitting the arguments of the functions, these functions are equal:\n$$ \\mathscr{F}(e^{-x^2}H_n(x)) = \\sqrt{2\\pi}(-i)^n H_n\n$$</p>\n" }, { "answer_id": 2140023, "score": 1, "text": "<p>Once you know that the series in question converges absolutely in $L^2$ \nthen the identification becomes automatic.\nWriting $\\psi_k(x) = \\frac{1}{k!} H_k(x) e^{-x^2/2}$, $k\\geq 0$\nthen each\n$\\psi_k$ is an element in the Hilbert space $L^2({\\Bbb R})$ \nhaving norm\n$\\frac{2^{k/2} } {\\sqrt{k!}} \\pi^{1/4}$.\nOne has $\\lim_k \\|\\psi_k\\|_{L^2}^{1/k} = 0 $, so\nthe series\n$$ \\Psi_t(x) = \n\\lim_{N\\rightarrow \\infty}\n \\sum_{k=0}^{N-1} t^k \\psi_k(x)$$\nis norm convergent in $L^2$ for all complex $t$.\n$\\Psi_t$ is therefore an analytic $L^2$ valued function of $t\\in {\\Bbb C}$.\nCoefficients in an analytic series are uniquely determined by the function and may be recovered e.g. by a Cauchy integral (which works fine in Hilbert or Banach spaces). 
You may also recover them simply from looking at successive derivatives at $0$, as you mention.\nSince Fourier transformation (denoted by a 'hat') preserves the $L^2$ norm \n we automatically get:\n $$ \\widehat{\\Psi}_t(\\xi) = \n\\lim_{N\\rightarrow \\infty}\n \\sum_{k=0}^{N-1} t^k \\widehat{\\psi}_k(\\xi),$$\nagain an analytic $L^2$ valued function.\nAn issue is how to calculate the two sums (as $L^2$ limits) but\nas you say, in your book they prove that\n$\\Psi_t(x)$ (in $L^2$) is identical to \n$$ \\Phi_t= \n\\exp \\left( 2xt-t^2-x^2/2 \\right) $$\nThis then resolves the problem since as elements of $L^2$:\n$$\\widehat{\\Psi}_t(\\xi)=\n\\widehat{\\Phi}_t(\\xi) = \\exp \\left( -2it\\xi+t^2-\\xi^2/2 \\right)\n = \\Phi_{-it}(\\xi) =\\Psi_{-it}(\\xi)$$\nIdentifying the analytic expansions for \n$\\widehat{\\Psi}_t(\\xi)=\\Psi_{-it}(\\xi)$\nyou must have:\n$\\widehat{\\psi_k}(\\xi) = (-i)^k \\psi_k(\\xi)$ in $L^2$ (where possibly a factor of $\\sqrt{2\\pi}$ should be included).\nAs both are actually smooth functions, you get the wanted identity.</p>\n\n<p>Returning to the limit in $L^2$, let me provide a relatively simple proof: The defining equation,\n$H_k(x)=(-1)^k e^{x^2} \\partial_x^k e^{-x^2}$ shows that $\\psi_k=\\frac{1}{k!}H_k(x) e^{-x^2/2}$ itself may be expressed through $\\Phi_t$\nas a Cauchy integral as follows:\n$$ \\psi_k(x)= \n (-1)^k e^{x^2/2} \\oint_{\\partial B(x,R)}\n \\frac{e^{-z^2}}{(z-x)^{k+1}} \\frac{dz}{2\\pi i} = \n \\oint_{\\partial B(0,R)}\n \\frac{e^{-x^2/2+2xu-u^2}}{u^{k+1}} \\frac{du}{2\\pi i}$$\nwhere $R&gt;0$ is arbitrary. From this we get the bound $|\\psi_k(x)| \\leq R^{-(k+1)} e^{-x^2/2+2|x|R +R^2}$ which is clearly square integrable\n(in $x$) so that\n$\\psi_k$ is an element in $L^2$.</p>\n\n<p>Now fix a value of $t$ and let $R&gt;|t|$. 
Then by summing a geometric series:\n $$\n \\sum_{k=0}^{N-1} t^k \\psi_k(x) = \n \\oint_{\\partial B(0,R)}\n (1 - (t/u)^N)\\frac{e^{-x^2/2+2xu-u^2}}{u-t} \\frac{du}{2\\pi i}=\n\\Phi_t(x) + {\\cal R}^{N}(t,x)$$\nwhere we have the following bound for the remainder:\n $$ |{\\cal R}^{N}(t,x)| \\leq \\frac{(|t|/R)^N}{R-|t|} \n e^{-x^2/2+2R|x|+R^2}.$$\nThis is indeed a function in $L^2({\\Bbb R})$ and goes to zero in norm when\n $N\\rightarrow \\infty$, thus justifying the above identifications.</p>\n" }, { "answer_id": 2115474, "score": 6, "text": "<p>Yes, once you have the equation\n$$\n \\sum_{n}\\mathscr{F}(e^{-x^2/2}H_n(x))\\frac{t^n}{n!}\n = \\sum_{n}\\sqrt{2\\pi}(-i)^n H_n(\\xi)e^{-\\xi^2/2}\\frac{t^n}{n!},\n$$\nthen you can fix $\\xi$, and view this as a power series equation that holds for all $t$. Therefore, for this fixed $\\xi$, the power series coefficients must be identical, which leads to\n$$\n \\mathscr{F}(e^{-x^2}H_n(x))=\\sqrt{2\\pi}(-i)^n H_n(\\xi)\n$$\nThe left side implicitly depends on $\\xi$, because it is the transform variable. The transform is continuous in $\\xi$ because of the exponentially decaying nature of the function being transformed. 
So\n$$\n \\mathscr{F}(e^{-x^2}H_n(x))(\\xi)=\\sqrt{2\\pi}(-i)^n H_n(\\xi),\\;\\;\\xi\\in\\mathbb{R}.\n$$\nOmitting the arguments of the functions, these functions are equal:\n$$ \\mathscr{F}(e^{-x^2}H_n(x)) = \\sqrt{2\\pi}(-i)^n H_n\n$$</p>\n" }, { "answer_id": 2140023, "score": 1, "text": "<p>Once you know that the series in question converges absolutely in $L^2$ \nthen the identification becomes automatic.\nWriting $\\psi_k(x) = \\frac{1}{k!} H_k(x) e^{-x^2/2}$, $k\\geq 0$\nthen each\n$\\psi_k$ is an element in the Hilbert space $L^2({\\Bbb R})$ \nhaving norm\n$\\frac{2^{k/2} } {\\sqrt{k!}} \\pi^{1/4}$.\nOne has $\\lim_k \\|\\psi_k\\|_{L^2}^{1/k} = 0 $, so\nthe series\n$$ \\Psi_t(x) = \n\\lim_{N\\rightarrow \\infty}\n \\sum_{k=0}^{N-1} t^k \\psi_k(x)$$\nis norm convergent in $L^2$ for all complex $t$.\n$\\Psi_t$ is therefore an analytic $L^2$ valued function of $t\\in {\\Bbb C}$.\nCoefficients in an analytic series are uniquely determined by the function and may be recovered e.g. by a Cauchy integral (which works fine in Hilbert or Banach spaces). 
You may also recover them simply from looking at successive derivatives at $0$, as you mention.\nSince Fourier transformation (denoted by a 'hat') preserves the $L^2$ norm \n we automatically get:\n $$ \\widehat{\\Psi}_t(\\xi) = \n\\lim_{N\\rightarrow \\infty}\n \\sum_{k=0}^{N-1} t^k \\widehat{\\psi}_k(\\xi),$$\nagain an analytic $L^2$ valued function.\nAn issue is how to calculate the two sums (as $L^2$ limits) but\nas you say, in your book they prove that\n$\\Psi_t(x)$ (in $L^2$) is identical to \n$$ \\Phi_t= \n\\exp \\left( 2xt-t^2-x^2/2 \\right) $$\nThis then resolves the problem since as elements of $L^2$:\n$$\\widehat{\\Psi}_t(\\xi)=\n\\widehat{\\Phi}_t(\\xi) = \\exp \\left( -2it\\xi+t^2-\\xi^2/2 \\right)\n = \\Phi_{-it}(\\xi) =\\Psi_{-it}(\\xi)$$\nIdentifying the analytic expansions for \n$\\widehat{\\Psi}_t(\\xi)=\\Psi_{-it}(\\xi)$\nyou must have:\n$\\widehat{\\psi_k}(\\xi) = (-i)^k \\psi_k(\\xi)$ in $L^2$ (where possibly a factor of $\\sqrt{2\\pi}$ should be included).\nAs both are actually smooth functions, you get the wanted identity.</p>\n\n<p>Returning to the limit in $L^2$, let me provide a relatively simple proof: The defining equation,\n$H_k(x)=(-1)^k e^{x^2} \\partial_x^k e^{-x^2}$ shows that $\\psi_k=\\frac{1}{k!}H_k(x) e^{-x^2/2}$ itself may be expressed through $\\Phi_t$\nas a Cauchy integral as follows:\n$$ \\psi_k(x)= \n (-1)^k e^{x^2/2} \\oint_{\\partial B(x,R)}\n \\frac{e^{-z^2}}{(z-x)^{k+1}} \\frac{dz}{2\\pi i} = \n \\oint_{\\partial B(0,R)}\n \\frac{e^{-x^2/2+2xu-u^2}}{u^{k+1}} \\frac{du}{2\\pi i}$$\nwhere $R&gt;0$ is arbitrary. From this we get the bound $|\\psi_k(x)| \\leq R^{-(k+1)} e^{-x^2/2+2|x|R +R^2}$ which is clearly square integrable\n(in $x$) so that\n$\\psi_k$ is an element in $L^2$.</p>\n\n<p>Now fix a value of $t$ and let $R&gt;|t|$. 
Then by summing a geometric series:\n $$\n \\sum_{k=0}^{N-1} t^k \\psi_k(x) = \n \\oint_{\\partial B(0,R)}\n (1 - (t/u)^N)\\frac{e^{-x^2/2+2xu-u^2}}{u-t} \\frac{du}{2\\pi i}=\n\\Phi_t(x) + {\\cal R}^{N}(t,x)$$\nwhere we have the following bound for the remainder:\n $$ |{\\cal R}^{N}(t,x)| \\leq \\frac{(|t|/R)^N}{R-|t|} \n e^{-x^2/2+2R|x|+R^2}.$$\nThis is indeed a function in $L^2({\\Bbb R})$ and goes to zero in norm when\n $N\\rightarrow \\infty$, thus justifying the above identifications.</p>\n" }, { "answer_id": 2115474, "score": 6, "text": "<p>Yes, once you have the equation\n$$\n \\sum_{n}\\mathscr{F}(e^{-x^2/2}H_n(x))\\frac{t^n}{n!}\n = \\sum_{n}\\sqrt{2\\pi}(-i)^n H_n(\\xi)e^{-\\xi^2/2}\\frac{t^n}{n!},\n$$\nthen you can fix $\\xi$, and view this as a power series equation that holds for all $t$. Therefore, for this fixed $\\xi$, the power series coefficients must be identical, which leads to\n$$\n \\mathscr{F}(e^{-x^2}H_n(x))=\\sqrt{2\\pi}(-i)^n H_n(\\xi)\n$$\nThe left side implicitly depends on $\\xi$, because it is the transform variable. The transform is continuous in $\\xi$ because of the exponentially decaying nature of the function being transformed. 
So\n$$\n \\mathscr{F}(e^{-x^2}H_n(x))(\\xi)=\\sqrt{2\\pi}(-i)^n H_n(\\xi),\\;\\;\\xi\\in\\mathbb{R}.\n$$\nOmitting the arguments of the functions, these functions are equal:\n$$ \\mathscr{F}(e^{-x^2}H_n(x)) = \\sqrt{2\\pi}(-i)^n H_n\n$$</p>\n" }, { "answer_id": 2140023, "score": 1, "text": "<p>Once you know that the series in question converges absolutely in $L^2$ \nthen the identification becomes automatic.\nWriting $\\psi_k(x) = \\frac{1}{k!} H_k(x) e^{-x^2/2}$, $k\\geq 0$\nthen each\n$\\psi_k$ is an element in the Hilbert space $L^2({\\Bbb R})$ \nhaving norm\n$\\frac{2^{k/2} } {\\sqrt{k!}} \\pi^{1/4}$.\nOne has $\\lim_k \\|\\psi_k\\|_{L^2}^{1/k} = 0 $, so\nthe series\n$$ \\Psi_t(x) = \n\\lim_{N\\rightarrow \\infty}\n \\sum_{k=0}^{N-1} t^k \\psi_k(x)$$\nis norm convergent in $L^2$ for all complex $t$.\n$\\Psi_t$ is therefore an analytic $L^2$ valued function of $t\\in {\\Bbb C}$.\nCoefficients in an analytic series are uniquely determined by the function and may be recovered e.g. by a Cauchy integral (which works fine in Hilbert or Banach spaces). 
You may also recover them simply from looking at successive derivatives at $0$, as you mention.\nSince Fourier transformation (denoted by a 'hat') preserves the $L^2$ norm \n we automatically get:\n $$ \\widehat{\\Psi}_t(\\xi) = \n\\lim_{N\\rightarrow \\infty}\n \\sum_{k=0}^{N-1} t^k \\widehat{\\psi}_k(\\xi),$$\nagain an analytic $L^2$ valued function.\nAn issue is how to calculate the two sums (as $L^2$ limits) but\nas you say, in your book they prove that\n$\\Psi_t(x)$ (in $L^2$) is identical to \n$$ \\Phi_t= \n\\exp \\left( 2xt-t^2-x^2/2 \\right) $$\nThis then resolves the problem since as elements of $L^2$:\n$$\\widehat{\\Psi}_t(\\xi)=\n\\widehat{\\Phi}_t(\\xi) = \\exp \\left( -2it\\xi+t^2-\\xi^2/2 \\right)\n = \\Phi_{-it}(\\xi) =\\Psi_{-it}(\\xi)$$\nIdentifying the analytic expansions for \n$\\widehat{\\Psi}_t(\\xi)=\\Psi_{-it}(\\xi)$\nyou must have:\n$\\widehat{\\psi_k}(\\xi) = (-i)^k \\psi_k(\\xi)$ in $L^2$ (where possibly a factor of $\\sqrt{2\\pi}$ should be included).\nAs both are actually smooth functions, you get the wanted identity.</p>\n\n<p>Returning to the limit in $L^2$, let me provide a relatively simple proof: The defining equation,\n$H_k(x)=(-1)^k e^{x^2} \\partial_x^k e^{-x^2}$ shows that $\\psi_k=\\frac{1}{k!}H_k(x) e^{-x^2/2}$ itself may be expressed through $\\Phi_t$\nas a Cauchy integral as follows:\n$$ \\psi_k(x)= \n (-1)^k e^{x^2/2} \\oint_{\\partial B(x,R)}\n \\frac{e^{-z^2}}{(z-x)^{k+1}} \\frac{dz}{2\\pi i} = \n \\oint_{\\partial B(0,R)}\n \\frac{e^{-x^2/2+2xu-u^2}}{u^{k+1}} \\frac{du}{2\\pi i}$$\nwhere $R&gt;0$ is arbitrary. From this we get the bound $|\\psi_k(x)| \\leq R^{-(k+1)} e^{-x^2/2+2|x|R +R^2}$ which is clearly square integrable\n(in $x$) so that\n$\\psi_k$ is an element in $L^2$.</p>\n\n<p>Now fix a value of $t$ and let $R&gt;|t|$. 
Then by summing a geometric series:\n $$\n \\sum_{k=0}^{N-1} t^k \\psi_k(x) = \n \\oint_{\\partial B(0,R)}\n (1 - (t/u)^N)\\frac{e^{-x^2/2+2xu-u^2}}{u-t} \\frac{du}{2\\pi i}=\n\\Phi_t(x) + {\\cal R}^{N}(t,x)$$\nwhere we have the following bound for the remainder:\n $$ |{\\cal R}^{N}(t,x)| \\leq \\frac{(|t|/R)^N}{R-|t|} \n e^{-x^2/2+2R|x|+R^2}.$$\nThis is indeed a function in $L^2({\\Bbb R})$ and goes to zero in norm when\n $N\\rightarrow \\infty$, thus justifying the above identifications.</p>\n" }, { "answer_id": 2115474, "score": 6, "text": "<p>Yes, once you have the equation\n$$\n \\sum_{n}\\mathscr{F}(e^{-x^2/2}H_n(x))\\frac{t^n}{n!}\n = \\sum_{n}\\sqrt{2\\pi}(-i)^n H_n(\\xi)e^{-\\xi^2/2}\\frac{t^n}{n!},\n$$\nthen you can fix $\\xi$, and view this as a power series equation that holds for all $t$. Therefore, for this fixed $\\xi$, the power series coefficients must be identical, which leads to\n$$\n \\mathscr{F}(e^{-x^2}H_n(x))=\\sqrt{2\\pi}(-i)^n H_n(\\xi)\n$$\nThe left side implicitly depends on $\\xi$, because it is the transform variable. The transform is continuous in $\\xi$ because of the exponentially decaying nature of the function being transformed. 
So\n$$\n \\mathscr{F}(e^{-x^2}H_n(x))(\\xi)=\\sqrt{2\\pi}(-i)^n H_n(\\xi),\\;\\;\\xi\\in\\mathbb{R}.\n$$\nOmitting the arguments of the functions, these functions are equal:\n$$ \\mathscr{F}(e^{-x^2}H_n(x)) = \\sqrt{2\\pi}(-i)^n H_n\n$$</p>\n" }, { "answer_id": 2140023, "score": 1, "text": "<p>Once you know that the series in question converges absolutely in $L^2$ \nthen the identification becomes automatic.\nWriting $\\psi_k(x) = \\frac{1}{k!} H_k(x) e^{-x^2/2}$, $k\\geq 0$\nthen each\n$\\psi_k$ is an element in the Hilbert space $L^2({\\Bbb R})$ \nhaving norm\n$\\frac{2^{k/2} } {\\sqrt{k!}} \\pi^{1/4}$.\nOne has $\\lim_k \\|\\psi_k\\|_{L^2}^{1/k} = 0 $, so\nthe series\n$$ \\Psi_t(x) = \n\\lim_{N\\rightarrow \\infty}\n \\sum_{k=0}^{N-1} t^k \\psi_k(x)$$\nis norm convergent in $L^2$ for all complex $t$.\n$\\Psi_t$ is therefore an analytic $L^2$ valued function of $t\\in {\\Bbb C}$.\nCoefficients in an analytic series are uniquely determined by the function and may be recovered e.g. by a Cauchy integral (which works fine in Hilbert or Banach spaces). 
You may also recover them simply from looking at successive derivatives at $0$, as you mention.\nSince Fourier transformation (denoted by a 'hat') preserves the $L^2$ norm \n we automatically get:\n $$ \\widehat{\\Psi}_t(\\xi) = \n\\lim_{N\\rightarrow \\infty}\n \\sum_{k=0}^{N-1} t^k \\widehat{\\psi}_k(\\xi),$$\nagain an analytic $L^2$ valued function.\nAn issue is how to calculate the two sums (as $L^2$ limits) but\nas you say, in your book they prove that\n$\\Psi_t(x)$ (in $L^2$) is identical to \n$$ \\Phi_t= \n\\exp \\left( 2xt-t^2-x^2/2 \\right) $$\nThis then resolves the problem since as elements of $L^2$:\n$$\\widehat{\\Psi}_t(\\xi)=\n\\widehat{\\Phi}_t(\\xi) = \\exp \\left( -2it\\xi+t^2-\\xi^2/2 \\right)\n = \\Phi_{-it}(\\xi) =\\Psi_{-it}(\\xi)$$\nIdentifying the analytic expansions for \n$\\widehat{\\Psi}_t(\\xi)=\\Psi_{-it}(\\xi)$\nyou must have:\n$\\widehat{\\psi_k}(\\xi) = (-i)^k \\psi_k(\\xi)$ in $L^2$ (where possibly a factor of $\\sqrt{2\\pi}$ should be included).\nAs both are actually smooth functions, you get the wanted identity.</p>\n\n<p>Returning to the limit in $L^2$, let me provide a relatively simple proof: The defining equation,\n$H_k(x)=(-1)^k e^{x^2} \\partial_x^k e^{-x^2}$ shows that $\\psi_k=\\frac{1}{k!}H_k(x) e^{-x^2/2}$ itself may be expressed through $\\Phi_t$\nas a Cauchy integral as follows:\n$$ \\psi_k(x)= \n (-1)^k e^{x^2/2} \\oint_{\\partial B(x,R)}\n \\frac{e^{-z^2}}{(z-x)^{k+1}} \\frac{dz}{2\\pi i} = \n \\oint_{\\partial B(0,R)}\n \\frac{e^{-x^2/2+2xu-u^2}}{u^{k+1}} \\frac{du}{2\\pi i}$$\nwhere $R&gt;0$ is arbitrary. From this we get the bound $|\\psi_k(x)| \\leq R^{-(k+1)} e^{-x^2/2+2|x|R +R^2}$ which is clearly square integrable\n(in $x$) so that\n$\\psi_k$ is an element in $L^2$.</p>\n\n<p>Now fix a value of $t$ and let $R&gt;|t|$. 
Then by summing a geometric series:\n $$\n \sum_{k=0}^{N-1} t^k \psi_k(x) = \n \oint_{\partial B(0,R)}\n (1 - (t/u)^N)\frac{e^{-x^2/2+2xu-u^2}}{u-t} \frac{du}{2\pi i}=\n\Phi_t(x) + {\cal R}^{N}(t,x)$$\nwhere we have the following bound for the remainder:\n $$ |{\cal R}^{N}(t,x)| \leq \frac{(|t|/R)^N}{R-|t|} \n e^{-x^2/2+2R|x|+R^2}.$$\nThis is indeed a function in $L^2({\Bbb R})$ and goes to zero in norm when\n $N\rightarrow \infty$, thus justifying the above identifications.</p>\n" }
]
Question: Computing the Fourier transform of $H_k(x)e^{-x^2/2}$, where $H_k$ is the Hermite polynomial. Answer 1: <p>Yes, once you have the equation $$ \sum_{n}\mathscr{F}(e^{-x^2/2}H_n(x))\frac{t^n}{n!} = \sum_{n}\sqrt{2\pi}(-i)^n H_n(\xi)e^{-\xi^2/2}\frac{t^n}{n!}, $$ then you can fix $\xi$, and view this as a power series equation that holds for all $t$. Therefore, for this fixed $\xi$, the power series coefficients must be identical, which leads to $$ \mathscr{F}(e^{-x^2/2}H_n(x))=\sqrt{2\pi}(-i)^n H_n(\xi)e^{-\xi^2/2} $$ The left side implicitly depends on $\xi$, because it is the transform variable. The transform is continuous in $\xi$ because of the exponentially decaying nature of the function being transformed. So $$ \mathscr{F}(e^{-x^2/2}H_n(x))(\xi)=\sqrt{2\pi}(-i)^n H_n(\xi)e^{-\xi^2/2},\;\;\xi\in\mathbb{R}. $$ Omitting the arguments of the functions, these functions are equal: $$ \mathscr{F}(e^{-x^2/2}H_n) = \sqrt{2\pi}(-i)^n H_n\, e^{-(\cdot)^2/2} $$</p> Answer 2: <p>Once you know that the series in question converges absolutely in $L^2$, the identification becomes automatic. Writing $\psi_k(x) = \frac{1}{k!} H_k(x) e^{-x^2/2}$, $k\geq 0$, each $\psi_k$ is an element in the Hilbert space $L^2({\Bbb R})$ having norm $\frac{2^{k/2} } {\sqrt{k!}} \pi^{1/4}$. One has $\lim_k \|\psi_k\|_{L^2}^{1/k} = 0 $, so the series $$ \Psi_t(x) = \lim_{N\rightarrow \infty} \sum_{k=0}^{N-1} t^k \psi_k(x)$$ is norm convergent in $L^2$ for all complex $t$. $\Psi_t$ is therefore an analytic $L^2$ valued function of $t\in {\Bbb C}$. Coefficients in an analytic series are uniquely determined by the function and may be recovered e.g. by a Cauchy integral (which works fine in Hilbert or Banach spaces). You may also recover them simply from looking at successive derivatives at $0$, as you mention. 
Since Fourier transformation (denoted by a 'hat') preserves the $L^2$ norm we automatically get: $$ \widehat{\Psi}_t(\xi) = \lim_{N\rightarrow \infty} \sum_{k=0}^{N-1} t^k \widehat{\psi}_k(\xi),$$ again an analytic $L^2$ valued function. An issue is how to calculate the two sums (as $L^2$ limits) but as you say, in your book they prove that $\Psi_t(x)$ (in $L^2$) is identical to $$ \Phi_t= \exp \left( 2xt-t^2-x^2/2 \right) $$ This then resolves the problem since as elements of $L^2$: $$\widehat{\Psi}_t(\xi)= \widehat{\Phi}_t(\xi) = \exp \left( -2it\xi+t^2-\xi^2/2 \right) = \Phi_{-it}(\xi) =\Psi_{-it}(\xi)$$ Identifying the analytic expansions for $\widehat{\Psi}_t(\xi)=\Psi_{-it}(\xi)$ you must have: $\widehat{\psi_k}(\xi) = (-i)^k \psi_k(\xi)$ in $L^2$ (where possibly a factor of $\sqrt{2\pi}$ should be included). As both are actually smooth functions, you get the wanted identity.</p> <p>Returning to the limit in $L^2$, let me provide a relatively simple proof: The defining equation, $H_k(x)=(-1)^k e^{x^2} \partial_x^k e^{-x^2}$ shows that $\psi_k=\frac{1}{k!}H_k(x) e^{-x^2/2}$ itself may be expressed through $\Phi_t$ as a Cauchy integral as follows: $$ \psi_k(x)= (-1)^k e^{x^2/2} \oint_{\partial B(x,R)} \frac{e^{-z^2}}{(z-x)^{k+1}} \frac{dz}{2\pi i} = \oint_{\partial B(0,R)} \frac{e^{-x^2/2+2xu-u^2}}{u^{k+1}} \frac{du}{2\pi i}$$ where $R&gt;0$ is arbitrary. From this we get the bound $|\psi_k(x)| \leq R^{-(k+1)} e^{-x^2/2+2|x|R +R^2}$ which is clearly square integrable (in $x$) so that $\psi_k$ is an element in $L^2$.</p> <p>Now fix a value of $t$ and let $R&gt;|t|$. 
Then by summing a geometric series: $$ \sum_{k=0}^{N-1} t^k \psi_k(x) = \oint_{\partial B(0,R)} (1 - (t/u)^N)\frac{e^{-x^2/2+2xu-u^2}}{u-t} \frac{du}{2\pi i}= \Phi_t(x) + {\cal R}^{N}(t,x)$$ where we have the following bound for the remainder: $$ |{\cal R}^{N}(t,x)| \leq \frac{(|t|/R)^N}{R-|t|} e^{-x^2/2+2R|x|+R^2}.$$ This is indeed a function in $L^2({\Bbb R})$ and goes to zero in norm when $N\rightarrow \infty$, thus justifying the above identifications.</p>
by a Cauchy integral (which works fine in Hilbert or Banach spaces). You may also recover them simply from looking at successive derivatives at $0$, as you mention. Since Fourier transformation (denoted by a 'hat') preserves the $L^2$ norm we automatically get: $$ \widehat{\Psi}_t(\xi) = \lim_{N\rightarrow \infty} \sum_{k=0}^{N-1} t^k \widehat{\psi}_k(\xi),$$ again an analytic $L^2$ valued function. An issue is how to calculate the two sums (as $L^2$ limits) but as you say, in your book they prove that $\Psi_t(x)$ (in $L^2$) is identical to $$ \Phi_t= \exp \left( 2xt-t^2-x^2/2 \right) $$ This then resolves the problem since as elements of $L^2$: $$\widehat{\Psi}_t(\xi)= \widehat{\Phi}_t(\xi) = \exp \left( -2it\xi+t^2-\xi^2/2 \right) = \Phi_{-it}(\xi) =\Psi_{-it}(\xi)$$ Identifying the analytic expansions for $\widehat{\Psi}_t(\xi)=\Psi_{-it}(\xi)$ you must have: $\widehat{\psi_k}(\xi) = (-i)^k \psi_k(\xi)$ in $L^2$ (where possibly a factor of $\sqrt{2\pi}$ should be included). As both are actually smooth functions, you get the wanted identity.</p> <p>Returning to the limit in $L^2$, let me provide a relatively simple proof: The defining equation, $H_k(x)=(-1)^k e^{x^2} \partial_x^k e^{-x^2}$ shows that $\psi_k=\frac{1}{k!}H_k(x) e^{-x^2/2}$ itself may be expressed through $\Phi_t$ as a Cauchy integral as follows: $$ \psi_k(x)= (-1)^k e^{x^2/2} \oint_{\partial B(x,R)} \frac{e^{-z^2}}{(z-x)^{k+1}} \frac{dz}{2\pi i} = \oint_{\partial B(0,R)} \frac{e^{-x^2/2+2xu-u^2}}{u^{k+1}} \frac{du}{2\pi i}$$ where $R&gt;0$ is arbitrary. From this we get the bound $|\psi_k(x)| \leq R^{-(k+1)} e^{-x^2/2+2|x|R +R^2}$ which is clearly square integrable (in $x$) so that $\psi_k$ is an element in $L^2$.</p> <p>Now fix a value of $t$ and let $R&gt;|t|$. 
Then by summing a geometric series: $$ \sum_{k=0}^{N-1} t^k \psi_k(x) = \oint_{\partial B(0,R)} (1 - (t/u)^N)\frac{e^{-x^2/2+2xu-u^2}}{u-t} \frac{du}{2\pi i}= \Phi_t(x) + {\cal R}^{N}(t,x)$$ where we have the following bound for the remainder: $$ |{\cal R}^{N}(t,x)| \leq \frac{(|t|/R)^N}{R-|t|} e^{-x^2/2+2R|x|+R^2}.$$ This is indeed a function in $L^2({\Bbb R})$ and goes to zero in norm when $N\rightarrow \infty$, thus justifying the above identifications.</p> Answer 13: <p>Yes, once you have the equation $$ \sum_{n}\mathscr{F}(e^{-x^2/2}H_n(x))\frac{t^n}{n!} = \sum_{n}\sqrt{2\pi}(-i)^n H_n(\xi)e^{-\xi^2/2}\frac{t^n}{n!}, $$ then you can fix $\xi$, and view this as a power series equation that holds for all $t$. Therefore, for this fixed $\xi$, the power series coefficients must be identical, which leads to $$ \mathscr{F}(e^{-x^2}H_n(x))=\sqrt{2\pi}(-i)^n H_n(\xi) $$ The left side implicitly depends on $\xi$, because it is the transform variable. The transform is continuous in $\xi$ because of the exponentially decaying nature of the function being transformed. So $$ \mathscr{F}(e^{-x^2}H_n(x))(\xi)=\sqrt{2\pi}(-i)^n H_n(\xi),\;\;\xi\in\mathbb{R}. $$ Omitting the arguments of the functions, these functions are equal: $$ \mathscr{F}(e^{-x^2}H_n(x)) = \sqrt{2\pi}(-i)^n H_n $$</p> Answer 14: <p>Once you know that the series in question converges absolutely in $L^2$ then the identification becomes automatic. Writing $\psi_k(x) = \frac{1}{k!} H_k(x) e^{-x^2/2}$, $k\geq 0$ then each $\psi_k$ is an element in the Hilbert space $L^2({\Bbb R})$ having norm $\frac{2^{k/2} } {\sqrt{k!}} \pi^{1/4}$. One has $\lim_k \|\psi_k\|_{L^2}^{1/k} = 0 $, so the series $$ \Psi_t(x) = \lim_{N\rightarrow \infty} \sum_{k=0}^{N-1} t^k \psi_k(x)$$ is norm convergent in $L^2$ for all complex $t$. $\Psi_t$ is therefore an analytic $L^2$ valued function of $t\in {\Bbb C}$. Coefficients in an analytic series are uniquely determined by the function and may be recovered e.g. 
by a Cauchy integral (which works fine in Hilbert or Banach spaces). You may also recover them simply from looking at successive derivatives at $0$, as you mention. Since Fourier transformation (denoted by a 'hat') preserves the $L^2$ norm we automatically get: $$ \widehat{\Psi}_t(\xi) = \lim_{N\rightarrow \infty} \sum_{k=0}^{N-1} t^k \widehat{\psi}_k(\xi),$$ again an analytic $L^2$ valued function. An issue is how to calculate the two sums (as $L^2$ limits) but as you say, in your book they prove that $\Psi_t(x)$ (in $L^2$) is identical to $$ \Phi_t= \exp \left( 2xt-t^2-x^2/2 \right) $$ This then resolves the problem since as elements of $L^2$: $$\widehat{\Psi}_t(\xi)= \widehat{\Phi}_t(\xi) = \exp \left( -2it\xi+t^2-\xi^2/2 \right) = \Phi_{-it}(\xi) =\Psi_{-it}(\xi)$$ Identifying the analytic expansions for $\widehat{\Psi}_t(\xi)=\Psi_{-it}(\xi)$ you must have: $\widehat{\psi_k}(\xi) = (-i)^k \psi_k(\xi)$ in $L^2$ (where possibly a factor of $\sqrt{2\pi}$ should be included). As both are actually smooth functions, you get the wanted identity.</p> <p>Returning to the limit in $L^2$, let me provide a relatively simple proof: The defining equation, $H_k(x)=(-1)^k e^{x^2} \partial_x^k e^{-x^2}$ shows that $\psi_k=\frac{1}{k!}H_k(x) e^{-x^2/2}$ itself may be expressed through $\Phi_t$ as a Cauchy integral as follows: $$ \psi_k(x)= (-1)^k e^{x^2/2} \oint_{\partial B(x,R)} \frac{e^{-z^2}}{(z-x)^{k+1}} \frac{dz}{2\pi i} = \oint_{\partial B(0,R)} \frac{e^{-x^2/2+2xu-u^2}}{u^{k+1}} \frac{du}{2\pi i}$$ where $R&gt;0$ is arbitrary. From this we get the bound $|\psi_k(x)| \leq R^{-(k+1)} e^{-x^2/2+2|x|R +R^2}$ which is clearly square integrable (in $x$) so that $\psi_k$ is an element in $L^2$.</p> <p>Now fix a value of $t$ and let $R&gt;|t|$. 
Then by summing a geometric series: $$ \sum_{k=0}^{N-1} t^k \psi_k(x) = \oint_{\partial B(0,R)} (1 - (t/u)^N)\frac{e^{-x^2/2+2xu-u^2}}{u-t} \frac{du}{2\pi i}= \Phi_t(x) + {\cal R}^{N}(t,x)$$ where we have the following bound for the remainder: $$ |{\cal R}^{N}(t,x)| \leq \frac{(|t|/R)^N}{R-|t|} e^{-x^2/2+2R|x|+R^2}.$$ This is indeed a function in $L^2({\Bbb R})$ and goes to zero in norm when $N\rightarrow \infty$, thus justifying the above identifications.</p> Answer 15: <p>Yes, once you have the equation $$ \sum_{n}\mathscr{F}(e^{-x^2/2}H_n(x))\frac{t^n}{n!} = \sum_{n}\sqrt{2\pi}(-i)^n H_n(\xi)e^{-\xi^2/2}\frac{t^n}{n!}, $$ then you can fix $\xi$, and view this as a power series equation that holds for all $t$. Therefore, for this fixed $\xi$, the power series coefficients must be identical, which leads to $$ \mathscr{F}(e^{-x^2}H_n(x))=\sqrt{2\pi}(-i)^n H_n(\xi) $$ The left side implicitly depends on $\xi$, because it is the transform variable. The transform is continuous in $\xi$ because of the exponentially decaying nature of the function being transformed. So $$ \mathscr{F}(e^{-x^2}H_n(x))(\xi)=\sqrt{2\pi}(-i)^n H_n(\xi),\;\;\xi\in\mathbb{R}. $$ Omitting the arguments of the functions, these functions are equal: $$ \mathscr{F}(e^{-x^2}H_n(x)) = \sqrt{2\pi}(-i)^n H_n $$</p> Answer 16: <p>Once you know that the series in question converges absolutely in $L^2$ then the identification becomes automatic. Writing $\psi_k(x) = \frac{1}{k!} H_k(x) e^{-x^2/2}$, $k\geq 0$ then each $\psi_k$ is an element in the Hilbert space $L^2({\Bbb R})$ having norm $\frac{2^{k/2} } {\sqrt{k!}} \pi^{1/4}$. One has $\lim_k \|\psi_k\|_{L^2}^{1/k} = 0 $, so the series $$ \Psi_t(x) = \lim_{N\rightarrow \infty} \sum_{k=0}^{N-1} t^k \psi_k(x)$$ is norm convergent in $L^2$ for all complex $t$. $\Psi_t$ is therefore an analytic $L^2$ valued function of $t\in {\Bbb C}$. Coefficients in an analytic series are uniquely determined by the function and may be recovered e.g. 
by a Cauchy integral (which works fine in Hilbert or Banach spaces). You may also recover them simply from looking at successive derivatives at $0$, as you mention. Since Fourier transformation (denoted by a 'hat') preserves the $L^2$ norm we automatically get: $$ \widehat{\Psi}_t(\xi) = \lim_{N\rightarrow \infty} \sum_{k=0}^{N-1} t^k \widehat{\psi}_k(\xi),$$ again an analytic $L^2$ valued function. An issue is how to calculate the two sums (as $L^2$ limits) but as you say, in your book they prove that $\Psi_t(x)$ (in $L^2$) is identical to $$ \Phi_t= \exp \left( 2xt-t^2-x^2/2 \right) $$ This then resolves the problem since as elements of $L^2$: $$\widehat{\Psi}_t(\xi)= \widehat{\Phi}_t(\xi) = \exp \left( -2it\xi+t^2-\xi^2/2 \right) = \Phi_{-it}(\xi) =\Psi_{-it}(\xi)$$ Identifying the analytic expansions for $\widehat{\Psi}_t(\xi)=\Psi_{-it}(\xi)$ you must have: $\widehat{\psi_k}(\xi) = (-i)^k \psi_k(\xi)$ in $L^2$ (where possibly a factor of $\sqrt{2\pi}$ should be included). As both are actually smooth functions, you get the wanted identity.</p> <p>Returning to the limit in $L^2$, let me provide a relatively simple proof: The defining equation, $H_k(x)=(-1)^k e^{x^2} \partial_x^k e^{-x^2}$ shows that $\psi_k=\frac{1}{k!}H_k(x) e^{-x^2/2}$ itself may be expressed through $\Phi_t$ as a Cauchy integral as follows: $$ \psi_k(x)= (-1)^k e^{x^2/2} \oint_{\partial B(x,R)} \frac{e^{-z^2}}{(z-x)^{k+1}} \frac{dz}{2\pi i} = \oint_{\partial B(0,R)} \frac{e^{-x^2/2+2xu-u^2}}{u^{k+1}} \frac{du}{2\pi i}$$ where $R&gt;0$ is arbitrary. From this we get the bound $|\psi_k(x)| \leq R^{-(k+1)} e^{-x^2/2+2|x|R +R^2}$ which is clearly square integrable (in $x$) so that $\psi_k$ is an element in $L^2$.</p> <p>Now fix a value of $t$ and let $R&gt;|t|$. 
Then by summing a geometric series: $$ \sum_{k=0}^{N-1} t^k \psi_k(x) = \oint_{\partial B(0,R)} (1 - (t/u)^N)\frac{e^{-x^2/2+2xu-u^2}}{u-t} \frac{du}{2\pi i}= \Phi_t(x) + {\cal R}^{N}(t,x)$$ where we have the following bound for the remainder: $$ |{\cal R}^{N}(t,x)| \leq \frac{(|t|/R)^N}{R-|t|} e^{-x^2/2+2R|x|+R^2}.$$ This is indeed a function in $L^2({\Bbb R})$ and goes to zero in norm when $N\rightarrow \infty$, thus justifying the above identifications.</p> Answer 17: <p>Yes, once you have the equation $$ \sum_{n}\mathscr{F}(e^{-x^2/2}H_n(x))\frac{t^n}{n!} = \sum_{n}\sqrt{2\pi}(-i)^n H_n(\xi)e^{-\xi^2/2}\frac{t^n}{n!}, $$ then you can fix $\xi$, and view this as a power series equation that holds for all $t$. Therefore, for this fixed $\xi$, the power series coefficients must be identical, which leads to $$ \mathscr{F}(e^{-x^2}H_n(x))=\sqrt{2\pi}(-i)^n H_n(\xi) $$ The left side implicitly depends on $\xi$, because it is the transform variable. The transform is continuous in $\xi$ because of the exponentially decaying nature of the function being transformed. So $$ \mathscr{F}(e^{-x^2}H_n(x))(\xi)=\sqrt{2\pi}(-i)^n H_n(\xi),\;\;\xi\in\mathbb{R}. $$ Omitting the arguments of the functions, these functions are equal: $$ \mathscr{F}(e^{-x^2}H_n(x)) = \sqrt{2\pi}(-i)^n H_n $$</p> Answer 18: <p>Once you know that the series in question converges absolutely in $L^2$ then the identification becomes automatic. Writing $\psi_k(x) = \frac{1}{k!} H_k(x) e^{-x^2/2}$, $k\geq 0$ then each $\psi_k$ is an element in the Hilbert space $L^2({\Bbb R})$ having norm $\frac{2^{k/2} } {\sqrt{k!}} \pi^{1/4}$. One has $\lim_k \|\psi_k\|_{L^2}^{1/k} = 0 $, so the series $$ \Psi_t(x) = \lim_{N\rightarrow \infty} \sum_{k=0}^{N-1} t^k \psi_k(x)$$ is norm convergent in $L^2$ for all complex $t$. $\Psi_t$ is therefore an analytic $L^2$ valued function of $t\in {\Bbb C}$. Coefficients in an analytic series are uniquely determined by the function and may be recovered e.g. 
by a Cauchy integral (which works fine in Hilbert or Banach spaces). You may also recover them simply from looking at successive derivatives at $0$, as you mention. Since Fourier transformation (denoted by a 'hat') preserves the $L^2$ norm we automatically get: $$ \widehat{\Psi}_t(\xi) = \lim_{N\rightarrow \infty} \sum_{k=0}^{N-1} t^k \widehat{\psi}_k(\xi),$$ again an analytic $L^2$ valued function. An issue is how to calculate the two sums (as $L^2$ limits) but as you say, in your book they prove that $\Psi_t(x)$ (in $L^2$) is identical to $$ \Phi_t= \exp \left( 2xt-t^2-x^2/2 \right) $$ This then resolves the problem since as elements of $L^2$: $$\widehat{\Psi}_t(\xi)= \widehat{\Phi}_t(\xi) = \exp \left( -2it\xi+t^2-\xi^2/2 \right) = \Phi_{-it}(\xi) =\Psi_{-it}(\xi)$$ Identifying the analytic expansions for $\widehat{\Psi}_t(\xi)=\Psi_{-it}(\xi)$ you must have: $\widehat{\psi_k}(\xi) = (-i)^k \psi_k(\xi)$ in $L^2$ (where possibly a factor of $\sqrt{2\pi}$ should be included). As both are actually smooth functions, you get the wanted identity.</p> <p>Returning to the limit in $L^2$, let me provide a relatively simple proof: The defining equation, $H_k(x)=(-1)^k e^{x^2} \partial_x^k e^{-x^2}$ shows that $\psi_k=\frac{1}{k!}H_k(x) e^{-x^2/2}$ itself may be expressed through $\Phi_t$ as a Cauchy integral as follows: $$ \psi_k(x)= (-1)^k e^{x^2/2} \oint_{\partial B(x,R)} \frac{e^{-z^2}}{(z-x)^{k+1}} \frac{dz}{2\pi i} = \oint_{\partial B(0,R)} \frac{e^{-x^2/2+2xu-u^2}}{u^{k+1}} \frac{du}{2\pi i}$$ where $R&gt;0$ is arbitrary. From this we get the bound $|\psi_k(x)| \leq R^{-(k+1)} e^{-x^2/2+2|x|R +R^2}$ which is clearly square integrable (in $x$) so that $\psi_k$ is an element in $L^2$.</p> <p>Now fix a value of $t$ and let $R&gt;|t|$. 
Then by summing a geometric series: $$ \sum_{k=0}^{N-1} t^k \psi_k(x) = \oint_{\partial B(0,R)} (1 - (t/u)^N)\frac{e^{-x^2/2+2xu-u^2}}{u-t} \frac{du}{2\pi i}= \Phi_t(x) + {\cal R}^{N}(t,x)$$ where we have the following bound for the remainder: $$ |{\cal R}^{N}(t,x)| \leq \frac{(|t|/R)^N}{R-|t|} e^{-x^2/2+2R|x|+R^2}.$$ This is indeed a function in $L^2({\Bbb R})$ and goes to zero in norm when $N\rightarrow \infty$, thus justifying the above identifications.</p> Answer 19: <p>Yes, once you have the equation $$ \sum_{n}\mathscr{F}(e^{-x^2/2}H_n(x))\frac{t^n}{n!} = \sum_{n}\sqrt{2\pi}(-i)^n H_n(\xi)e^{-\xi^2/2}\frac{t^n}{n!}, $$ then you can fix $\xi$, and view this as a power series equation that holds for all $t$. Therefore, for this fixed $\xi$, the power series coefficients must be identical, which leads to $$ \mathscr{F}(e^{-x^2}H_n(x))=\sqrt{2\pi}(-i)^n H_n(\xi) $$ The left side implicitly depends on $\xi$, because it is the transform variable. The transform is continuous in $\xi$ because of the exponentially decaying nature of the function being transformed. So $$ \mathscr{F}(e^{-x^2}H_n(x))(\xi)=\sqrt{2\pi}(-i)^n H_n(\xi),\;\;\xi\in\mathbb{R}. $$ Omitting the arguments of the functions, these functions are equal: $$ \mathscr{F}(e^{-x^2}H_n(x)) = \sqrt{2\pi}(-i)^n H_n $$</p> Answer 20: <p>Once you know that the series in question converges absolutely in $L^2$ then the identification becomes automatic. Writing $\psi_k(x) = \frac{1}{k!} H_k(x) e^{-x^2/2}$, $k\geq 0$ then each $\psi_k$ is an element in the Hilbert space $L^2({\Bbb R})$ having norm $\frac{2^{k/2} } {\sqrt{k!}} \pi^{1/4}$. One has $\lim_k \|\psi_k\|_{L^2}^{1/k} = 0 $, so the series $$ \Psi_t(x) = \lim_{N\rightarrow \infty} \sum_{k=0}^{N-1} t^k \psi_k(x)$$ is norm convergent in $L^2$ for all complex $t$. $\Psi_t$ is therefore an analytic $L^2$ valued function of $t\in {\Bbb C}$. Coefficients in an analytic series are uniquely determined by the function and may be recovered e.g. 
by a Cauchy integral (which works fine in Hilbert or Banach spaces). You may also recover them simply from looking at successive derivatives at $0$, as you mention. Since Fourier transformation (denoted by a 'hat') preserves the $L^2$ norm we automatically get: $$ \widehat{\Psi}_t(\xi) = \lim_{N\rightarrow \infty} \sum_{k=0}^{N-1} t^k \widehat{\psi}_k(\xi),$$ again an analytic $L^2$ valued function. An issue is how to calculate the two sums (as $L^2$ limits) but as you say, in your book they prove that $\Psi_t(x)$ (in $L^2$) is identical to $$ \Phi_t= \exp \left( 2xt-t^2-x^2/2 \right) $$ This then resolves the problem since as elements of $L^2$: $$\widehat{\Psi}_t(\xi)= \widehat{\Phi}_t(\xi) = \exp \left( -2it\xi+t^2-\xi^2/2 \right) = \Phi_{-it}(\xi) =\Psi_{-it}(\xi)$$ Identifying the analytic expansions for $\widehat{\Psi}_t(\xi)=\Psi_{-it}(\xi)$ you must have: $\widehat{\psi_k}(\xi) = (-i)^k \psi_k(\xi)$ in $L^2$ (where possibly a factor of $\sqrt{2\pi}$ should be included). As both are actually smooth functions, you get the wanted identity.</p> <p>Returning to the limit in $L^2$, let me provide a relatively simple proof: The defining equation, $H_k(x)=(-1)^k e^{x^2} \partial_x^k e^{-x^2}$ shows that $\psi_k=\frac{1}{k!}H_k(x) e^{-x^2/2}$ itself may be expressed through $\Phi_t$ as a Cauchy integral as follows: $$ \psi_k(x)= (-1)^k e^{x^2/2} \oint_{\partial B(x,R)} \frac{e^{-z^2}}{(z-x)^{k+1}} \frac{dz}{2\pi i} = \oint_{\partial B(0,R)} \frac{e^{-x^2/2+2xu-u^2}}{u^{k+1}} \frac{du}{2\pi i}$$ where $R&gt;0$ is arbitrary. From this we get the bound $|\psi_k(x)| \leq R^{-(k+1)} e^{-x^2/2+2|x|R +R^2}$ which is clearly square integrable (in $x$) so that $\psi_k$ is an element in $L^2$.</p> <p>Now fix a value of $t$ and let $R&gt;|t|$. 
Then by summing a geometric series: $$ \sum_{k=0}^{N-1} t^k \psi_k(x) = \oint_{\partial B(0,R)} (1 - (t/u)^N)\frac{e^{-x^2/2+2xu-u^2}}{u-t} \frac{du}{2\pi i}= \Phi_t(x) + {\cal R}^{N}(t,x)$$ where we have the following bound for the remainder: $$ |{\cal R}^{N}(t,x)| \leq \frac{(|t|/R)^N}{R-|t|} e^{-x^2/2+2R|x|+R^2}.$$ This is indeed a function in $L^2({\Bbb R})$ and goes to zero in norm when $N\rightarrow \infty$, thus justifying the above identifications.</p> Answer 21: <p>Yes, once you have the equation $$ \sum_{n}\mathscr{F}(e^{-x^2/2}H_n(x))\frac{t^n}{n!} = \sum_{n}\sqrt{2\pi}(-i)^n H_n(\xi)e^{-\xi^2/2}\frac{t^n}{n!}, $$ then you can fix $\xi$, and view this as a power series equation that holds for all $t$. Therefore, for this fixed $\xi$, the power series coefficients must be identical, which leads to $$ \mathscr{F}(e^{-x^2}H_n(x))=\sqrt{2\pi}(-i)^n H_n(\xi) $$ The left side implicitly depends on $\xi$, because it is the transform variable. The transform is continuous in $\xi$ because of the exponentially decaying nature of the function being transformed. So $$ \mathscr{F}(e^{-x^2}H_n(x))(\xi)=\sqrt{2\pi}(-i)^n H_n(\xi),\;\;\xi\in\mathbb{R}. $$ Omitting the arguments of the functions, these functions are equal: $$ \mathscr{F}(e^{-x^2}H_n(x)) = \sqrt{2\pi}(-i)^n H_n $$</p> Answer 22: <p>Once you know that the series in question converges absolutely in $L^2$ then the identification becomes automatic. Writing $\psi_k(x) = \frac{1}{k!} H_k(x) e^{-x^2/2}$, $k\geq 0$ then each $\psi_k$ is an element in the Hilbert space $L^2({\Bbb R})$ having norm $\frac{2^{k/2} } {\sqrt{k!}} \pi^{1/4}$. One has $\lim_k \|\psi_k\|_{L^2}^{1/k} = 0 $, so the series $$ \Psi_t(x) = \lim_{N\rightarrow \infty} \sum_{k=0}^{N-1} t^k \psi_k(x)$$ is norm convergent in $L^2$ for all complex $t$. $\Psi_t$ is therefore an analytic $L^2$ valued function of $t\in {\Bbb C}$. Coefficients in an analytic series are uniquely determined by the function and may be recovered e.g. 
by a Cauchy integral (which works fine in Hilbert or Banach spaces). You may also recover them simply from looking at successive derivatives at $0$, as you mention. Since Fourier transformation (denoted by a 'hat') preserves the $L^2$ norm we automatically get: $$ \widehat{\Psi}_t(\xi) = \lim_{N\rightarrow \infty} \sum_{k=0}^{N-1} t^k \widehat{\psi}_k(\xi),$$ again an analytic $L^2$ valued function. An issue is how to calculate the two sums (as $L^2$ limits) but as you say, in your book they prove that $\Psi_t(x)$ (in $L^2$) is identical to $$ \Phi_t= \exp \left( 2xt-t^2-x^2/2 \right) $$ This then resolves the problem since as elements of $L^2$: $$\widehat{\Psi}_t(\xi)= \widehat{\Phi}_t(\xi) = \exp \left( -2it\xi+t^2-\xi^2/2 \right) = \Phi_{-it}(\xi) =\Psi_{-it}(\xi)$$ Identifying the analytic expansions for $\widehat{\Psi}_t(\xi)=\Psi_{-it}(\xi)$ you must have: $\widehat{\psi_k}(\xi) = (-i)^k \psi_k(\xi)$ in $L^2$ (where possibly a factor of $\sqrt{2\pi}$ should be included). As both are actually smooth functions, you get the wanted identity.</p> <p>Returning to the limit in $L^2$, let me provide a relatively simple proof: The defining equation, $H_k(x)=(-1)^k e^{x^2} \partial_x^k e^{-x^2}$ shows that $\psi_k=\frac{1}{k!}H_k(x) e^{-x^2/2}$ itself may be expressed through $\Phi_t$ as a Cauchy integral as follows: $$ \psi_k(x)= (-1)^k e^{x^2/2} \oint_{\partial B(x,R)} \frac{e^{-z^2}}{(z-x)^{k+1}} \frac{dz}{2\pi i} = \oint_{\partial B(0,R)} \frac{e^{-x^2/2+2xu-u^2}}{u^{k+1}} \frac{du}{2\pi i}$$ where $R&gt;0$ is arbitrary. From this we get the bound $|\psi_k(x)| \leq R^{-(k+1)} e^{-x^2/2+2|x|R +R^2}$ which is clearly square integrable (in $x$) so that $\psi_k$ is an element in $L^2$.</p> <p>Now fix a value of $t$ and let $R&gt;|t|$. 
Then by summing a geometric series: $$ \sum_{k=0}^{N-1} t^k \psi_k(x) = \oint_{\partial B(0,R)} (1 - (t/u)^N)\frac{e^{-x^2/2+2xu-u^2}}{u-t} \frac{du}{2\pi i}= \Phi_t(x) + {\cal R}^{N}(t,x)$$ where we have the following bound for the remainder: $$ |{\cal R}^{N}(t,x)| \leq \frac{(|t|/R)^N}{R-|t|} e^{-x^2/2+2R|x|+R^2}.$$ This is indeed a function in $L^2({\Bbb R})$ and goes to zero in norm when $N\rightarrow \infty$, thus justifying the above identifications.</p> Answer 23: <p>Yes, once you have the equation $$ \sum_{n}\mathscr{F}(e^{-x^2/2}H_n(x))\frac{t^n}{n!} = \sum_{n}\sqrt{2\pi}(-i)^n H_n(\xi)e^{-\xi^2/2}\frac{t^n}{n!}, $$ then you can fix $\xi$, and view this as a power series equation that holds for all $t$. Therefore, for this fixed $\xi$, the power series coefficients must be identical, which leads to $$ \mathscr{F}(e^{-x^2}H_n(x))=\sqrt{2\pi}(-i)^n H_n(\xi) $$ The left side implicitly depends on $\xi$, because it is the transform variable. The transform is continuous in $\xi$ because of the exponentially decaying nature of the function being transformed. So $$ \mathscr{F}(e^{-x^2}H_n(x))(\xi)=\sqrt{2\pi}(-i)^n H_n(\xi),\;\;\xi\in\mathbb{R}. $$ Omitting the arguments of the functions, these functions are equal: $$ \mathscr{F}(e^{-x^2}H_n(x)) = \sqrt{2\pi}(-i)^n H_n $$</p> Answer 24: <p>Once you know that the series in question converges absolutely in $L^2$ then the identification becomes automatic. Writing $\psi_k(x) = \frac{1}{k!} H_k(x) e^{-x^2/2}$, $k\geq 0$ then each $\psi_k$ is an element in the Hilbert space $L^2({\Bbb R})$ having norm $\frac{2^{k/2} } {\sqrt{k!}} \pi^{1/4}$. One has $\lim_k \|\psi_k\|_{L^2}^{1/k} = 0 $, so the series $$ \Psi_t(x) = \lim_{N\rightarrow \infty} \sum_{k=0}^{N-1} t^k \psi_k(x)$$ is norm convergent in $L^2$ for all complex $t$. $\Psi_t$ is therefore an analytic $L^2$ valued function of $t\in {\Bbb C}$. Coefficients in an analytic series are uniquely determined by the function and may be recovered e.g. 
by a Cauchy integral (which works fine in Hilbert or Banach spaces). You may also recover them simply from looking at successive derivatives at $0$, as you mention. Since Fourier transformation (denoted by a 'hat') preserves the $L^2$ norm we automatically get: $$ \widehat{\Psi}_t(\xi) = \lim_{N\rightarrow \infty} \sum_{k=0}^{N-1} t^k \widehat{\psi}_k(\xi),$$ again an analytic $L^2$ valued function. An issue is how to calculate the two sums (as $L^2$ limits) but as you say, in your book they prove that $\Psi_t(x)$ (in $L^2$) is identical to $$ \Phi_t= \exp \left( 2xt-t^2-x^2/2 \right) $$ This then resolves the problem since as elements of $L^2$: $$\widehat{\Psi}_t(\xi)= \widehat{\Phi}_t(\xi) = \exp \left( -2it\xi+t^2-\xi^2/2 \right) = \Phi_{-it}(\xi) =\Psi_{-it}(\xi)$$ Identifying the analytic expansions for $\widehat{\Psi}_t(\xi)=\Psi_{-it}(\xi)$ you must have: $\widehat{\psi_k}(\xi) = (-i)^k \psi_k(\xi)$ in $L^2$ (where possibly a factor of $\sqrt{2\pi}$ should be included). As both are actually smooth functions, you get the wanted identity.</p> <p>Returning to the limit in $L^2$, let me provide a relatively simple proof: The defining equation, $H_k(x)=(-1)^k e^{x^2} \partial_x^k e^{-x^2}$ shows that $\psi_k=\frac{1}{k!}H_k(x) e^{-x^2/2}$ itself may be expressed through $\Phi_t$ as a Cauchy integral as follows: $$ \psi_k(x)= (-1)^k e^{x^2/2} \oint_{\partial B(x,R)} \frac{e^{-z^2}}{(z-x)^{k+1}} \frac{dz}{2\pi i} = \oint_{\partial B(0,R)} \frac{e^{-x^2/2+2xu-u^2}}{u^{k+1}} \frac{du}{2\pi i}$$ where $R&gt;0$ is arbitrary. From this we get the bound $|\psi_k(x)| \leq R^{-(k+1)} e^{-x^2/2+2|x|R +R^2}$ which is clearly square integrable (in $x$) so that $\psi_k$ is an element in $L^2$.</p> <p>Now fix a value of $t$ and let $R&gt;|t|$. 
Then by summing a geometric series: $$ \sum_{k=0}^{N-1} t^k \psi_k(x) = \oint_{\partial B(0,R)} (1 - (t/u)^N)\frac{e^{-x^2/2+2xu-u^2}}{u-t} \frac{du}{2\pi i}= \Phi_t(x) + {\cal R}^{N}(t,x)$$ where we have the following bound for the remainder: $$ |{\cal R}^{N}(t,x)| \leq \frac{(|t|/R)^N}{R-|t|} e^{-x^2/2+2R|x|+R^2}.$$ This is indeed a function in $L^2({\Bbb R})$ and goes to zero in norm when $N\rightarrow \infty$, thus justifying the above identifications.</p> Answer 25: <p>Yes, once you have the equation $$ \sum_{n}\mathscr{F}(e^{-x^2/2}H_n(x))\frac{t^n}{n!} = \sum_{n}\sqrt{2\pi}(-i)^n H_n(\xi)e^{-\xi^2/2}\frac{t^n}{n!}, $$ then you can fix $\xi$, and view this as a power series equation that holds for all $t$. Therefore, for this fixed $\xi$, the power series coefficients must be identical, which leads to $$ \mathscr{F}(e^{-x^2}H_n(x))=\sqrt{2\pi}(-i)^n H_n(\xi) $$ The left side implicitly depends on $\xi$, because it is the transform variable. The transform is continuous in $\xi$ because of the exponentially decaying nature of the function being transformed. So $$ \mathscr{F}(e^{-x^2}H_n(x))(\xi)=\sqrt{2\pi}(-i)^n H_n(\xi),\;\;\xi\in\mathbb{R}. $$ Omitting the arguments of the functions, these functions are equal: $$ \mathscr{F}(e^{-x^2}H_n(x)) = \sqrt{2\pi}(-i)^n H_n $$</p> Answer 26: <p>Once you know that the series in question converges absolutely in $L^2$ then the identification becomes automatic. Writing $\psi_k(x) = \frac{1}{k!} H_k(x) e^{-x^2/2}$, $k\geq 0$ then each $\psi_k$ is an element in the Hilbert space $L^2({\Bbb R})$ having norm $\frac{2^{k/2} } {\sqrt{k!}} \pi^{1/4}$. One has $\lim_k \|\psi_k\|_{L^2}^{1/k} = 0 $, so the series $$ \Psi_t(x) = \lim_{N\rightarrow \infty} \sum_{k=0}^{N-1} t^k \psi_k(x)$$ is norm convergent in $L^2$ for all complex $t$. $\Psi_t$ is therefore an analytic $L^2$ valued function of $t\in {\Bbb C}$. Coefficients in an analytic series are uniquely determined by the function and may be recovered e.g. 
by a Cauchy integral (which works fine in Hilbert or Banach spaces). You may also recover them simply from looking at successive derivatives at $0$, as you mention. Since Fourier transformation (denoted by a 'hat') preserves the $L^2$ norm we automatically get: $$ \widehat{\Psi}_t(\xi) = \lim_{N\rightarrow \infty} \sum_{k=0}^{N-1} t^k \widehat{\psi}_k(\xi),$$ again an analytic $L^2$ valued function. An issue is how to calculate the two sums (as $L^2$ limits) but as you say, in your book they prove that $\Psi_t(x)$ (in $L^2$) is identical to $$ \Phi_t= \exp \left( 2xt-t^2-x^2/2 \right) $$ This then resolves the problem since as elements of $L^2$: $$\widehat{\Psi}_t(\xi)= \widehat{\Phi}_t(\xi) = \exp \left( -2it\xi+t^2-\xi^2/2 \right) = \Phi_{-it}(\xi) =\Psi_{-it}(\xi)$$ Identifying the analytic expansions for $\widehat{\Psi}_t(\xi)=\Psi_{-it}(\xi)$ you must have: $\widehat{\psi_k}(\xi) = (-i)^k \psi_k(\xi)$ in $L^2$ (where possibly a factor of $\sqrt{2\pi}$ should be included). As both are actually smooth functions, you get the wanted identity.</p> <p>Returning to the limit in $L^2$, let me provide a relatively simple proof: The defining equation, $H_k(x)=(-1)^k e^{x^2} \partial_x^k e^{-x^2}$ shows that $\psi_k=\frac{1}{k!}H_k(x) e^{-x^2/2}$ itself may be expressed through $\Phi_t$ as a Cauchy integral as follows: $$ \psi_k(x)= (-1)^k e^{x^2/2} \oint_{\partial B(x,R)} \frac{e^{-z^2}}{(z-x)^{k+1}} \frac{dz}{2\pi i} = \oint_{\partial B(0,R)} \frac{e^{-x^2/2+2xu-u^2}}{u^{k+1}} \frac{du}{2\pi i}$$ where $R&gt;0$ is arbitrary. From this we get the bound $|\psi_k(x)| \leq R^{-(k+1)} e^{-x^2/2+2|x|R +R^2}$ which is clearly square integrable (in $x$) so that $\psi_k$ is an element in $L^2$.</p> <p>Now fix a value of $t$ and let $R&gt;|t|$. 
Then by summing a geometric series: $$ \sum_{k=0}^{N-1} t^k \psi_k(x) = \oint_{\partial B(0,R)} (1 - (t/u)^N)\frac{e^{-x^2/2+2xu-u^2}}{u-t} \frac{du}{2\pi i}= \Phi_t(x) + {\cal R}^{N}(t,x)$$ where we have the following bound for the remainder: $$ |{\cal R}^{N}(t,x)| \leq \frac{(|t|/R)^N}{R-|t|} e^{-x^2/2+2R|x|+R^2}.$$ This is indeed a function in $L^2({\Bbb R})$ and goes to zero in norm when $N\rightarrow \infty$, thus justifying the above identifications.</p> Answer 27: <p>Yes, once you have the equation $$ \sum_{n}\mathscr{F}(e^{-x^2/2}H_n(x))\frac{t^n}{n!} = \sum_{n}\sqrt{2\pi}(-i)^n H_n(\xi)e^{-\xi^2/2}\frac{t^n}{n!}, $$ then you can fix $\xi$, and view this as a power series equation that holds for all $t$. Therefore, for this fixed $\xi$, the power series coefficients must be identical, which leads to $$ \mathscr{F}(e^{-x^2}H_n(x))=\sqrt{2\pi}(-i)^n H_n(\xi) $$ The left side implicitly depends on $\xi$, because it is the transform variable. The transform is continuous in $\xi$ because of the exponentially decaying nature of the function being transformed. So $$ \mathscr{F}(e^{-x^2}H_n(x))(\xi)=\sqrt{2\pi}(-i)^n H_n(\xi),\;\;\xi\in\mathbb{R}. $$ Omitting the arguments of the functions, these functions are equal: $$ \mathscr{F}(e^{-x^2}H_n(x)) = \sqrt{2\pi}(-i)^n H_n $$</p> Answer 28: <p>Once you know that the series in question converges absolutely in $L^2$ then the identification becomes automatic. Writing $\psi_k(x) = \frac{1}{k!} H_k(x) e^{-x^2/2}$, $k\geq 0$ then each $\psi_k$ is an element in the Hilbert space $L^2({\Bbb R})$ having norm $\frac{2^{k/2} } {\sqrt{k!}} \pi^{1/4}$. One has $\lim_k \|\psi_k\|_{L^2}^{1/k} = 0 $, so the series $$ \Psi_t(x) = \lim_{N\rightarrow \infty} \sum_{k=0}^{N-1} t^k \psi_k(x)$$ is norm convergent in $L^2$ for all complex $t$. $\Psi_t$ is therefore an analytic $L^2$ valued function of $t\in {\Bbb C}$. Coefficients in an analytic series are uniquely determined by the function and may be recovered e.g. 
by a Cauchy integral (which works fine in Hilbert or Banach spaces). You may also recover them simply from looking at successive derivatives at $0$, as you mention. Since Fourier transformation (denoted by a 'hat') preserves the $L^2$ norm we automatically get: $$ \widehat{\Psi}_t(\xi) = \lim_{N\rightarrow \infty} \sum_{k=0}^{N-1} t^k \widehat{\psi}_k(\xi),$$ again an analytic $L^2$ valued function. An issue is how to calculate the two sums (as $L^2$ limits) but as you say, in your book they prove that $\Psi_t(x)$ (in $L^2$) is identical to $$ \Phi_t= \exp \left( 2xt-t^2-x^2/2 \right) $$ This then resolves the problem since as elements of $L^2$: $$\widehat{\Psi}_t(\xi)= \widehat{\Phi}_t(\xi) = \exp \left( -2it\xi+t^2-\xi^2/2 \right) = \Phi_{-it}(\xi) =\Psi_{-it}(\xi)$$ Identifying the analytic expansions for $\widehat{\Psi}_t(\xi)=\Psi_{-it}(\xi)$ you must have: $\widehat{\psi_k}(\xi) = (-i)^k \psi_k(\xi)$ in $L^2$ (where possibly a factor of $\sqrt{2\pi}$ should be included). As both are actually smooth functions, you get the wanted identity.</p> <p>Returning to the limit in $L^2$, let me provide a relatively simple proof: The defining equation, $H_k(x)=(-1)^k e^{x^2} \partial_x^k e^{-x^2}$ shows that $\psi_k=\frac{1}{k!}H_k(x) e^{-x^2/2}$ itself may be expressed through $\Phi_t$ as a Cauchy integral as follows: $$ \psi_k(x)= (-1)^k e^{x^2/2} \oint_{\partial B(x,R)} \frac{e^{-z^2}}{(z-x)^{k+1}} \frac{dz}{2\pi i} = \oint_{\partial B(0,R)} \frac{e^{-x^2/2+2xu-u^2}}{u^{k+1}} \frac{du}{2\pi i}$$ where $R&gt;0$ is arbitrary. From this we get the bound $|\psi_k(x)| \leq R^{-(k+1)} e^{-x^2/2+2|x|R +R^2}$ which is clearly square integrable (in $x$) so that $\psi_k$ is an element in $L^2$.</p> <p>Now fix a value of $t$ and let $R&gt;|t|$. 
Then by summing a geometric series: $$ \sum_{k=0}^{N-1} t^k \psi_k(x) = \oint_{\partial B(0,R)} (1 - (t/u)^N)\frac{e^{-x^2/2+2xu-u^2}}{u-t} \frac{du}{2\pi i}= \Phi_t(x) + {\cal R}^{N}(t,x)$$ where we have the following bound for the remainder: $$ |{\cal R}^{N}(t,x)| \leq \frac{(|t|/R)^N}{R-|t|} e^{-x^2/2+2R|x|+R^2}.$$ This is indeed a function in $L^2({\Bbb R})$ and goes to zero in norm when $N\rightarrow \infty$, thus justifying the above identifications.</p>
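The eigenfunction identity derived in these answers, $\mathscr{F}(H_k(x)e^{-x^2/2})(\xi)=\sqrt{2\pi}\,(-i)^k H_k(\xi)e^{-\xi^2/2}$ with the convention $\hat f(\xi)=\int f(x)e^{-ix\xi}\,dx$, is easy to check numerically. The sketch below is illustrative (not part of the original answers) and assumes NumPy is available; it approximates the Fourier integral by a Riemann sum on a wide grid, where the Gaussian tails are negligible.

```python
import numpy as np
from numpy.polynomial.hermite import hermval  # physicists' Hermite series H_k

def psi(k, t):
    """Unnormalized Hermite function H_k(t) * exp(-t^2/2)."""
    coeffs = np.zeros(k + 1)
    coeffs[k] = 1.0  # select the single term H_k
    return hermval(t, coeffs) * np.exp(-t**2 / 2)

x = np.linspace(-20.0, 20.0, 8001)  # wide grid: integrand tails are ~e^{-200}
dx = x[1] - x[0]
xi = 1.3                            # arbitrary test frequency

max_err = 0.0
for k in range(6):
    # hat{psi_k}(xi) = int psi_k(x) e^{-i x xi} dx, approximated by a Riemann sum
    ft = np.sum(psi(k, x) * np.exp(-1j * x * xi)) * dx
    expected = np.sqrt(2 * np.pi) * (-1j) ** k * psi(k, xi)
    max_err = max(max_err, abs(ft - expected))

assert max_err < 1e-8
```

Because the integrand is smooth and decays like a Gaussian, the equispaced sum converges extremely fast, so the agreement is close to machine precision for small $k$.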
Standard inner product of matrices
2,115,408
8
https://math.stackexchange.com/questions/2115408/standard-inner-product-of-matrices
<p>What is the correct definition of the <strong>standard</strong> inner product of two given matrices?</p>
[ { "answer_id": 2115413, "score": 7, "text": "<p>According to <a href=\"https://en.wikipedia.org/wiki/Inner_product_space\" rel=\"noreferrer\">wikipedia</a> the standard matrix inner product on <em>square</em> matrices is defined as $\\langle A,B\\rangle=tr(AB^t)$. The properties are also proved <a href=\"https://math.stackexchange.com/questions/1224230/proving-a-matrix-inner-product\">here</a>.</p>\n" }, { "answer_id": 2115452, "score": 4, "text": "<p>It's worth noting that, with this definition (see answer by @Dietrich Burde), the standard inner product of two rectangular real matrices (with the same dimensions) is :</p>\n\n<p>$$\\left&lt;A,B\\right&gt;=\\sum_{1\\le i,j\\le n}a_{i,j}b_{i,j}$$</p>\n\n<p>which clearly reminds us the way we calculate a (standard) inner product in $\\mathbb{R}^n$ : adding the products of coordinates of the same index.</p>\n" } ]
Question: Standard inner product of matrices Answer 1: <p>According to <a href="https://en.wikipedia.org/wiki/Inner_product_space" rel="noreferrer">wikipedia</a> the standard matrix inner product on <em>square</em> matrices is defined as $\langle A,B\rangle=tr(AB^t)$. The properties are also proved <a href="https://math.stackexchange.com/questions/1224230/proving-a-matrix-inner-product">here</a>.</p> Answer 2: <p>It's worth noting that, with this definition (see answer by @Dietrich Burde), the standard inner product of two rectangular real matrices (with the same dimensions) is :</p> <p>$$\left&lt;A,B\right&gt;=\sum_{1\le i,j\le n}a_{i,j}b_{i,j}$$</p> <p>which clearly reminds us the way we calculate a (standard) inner product in $\mathbb{R}^n$ : adding the products of coordinates of the same index.</p> Answer 3: <p>According to <a href="https://en.wikipedia.org/wiki/Inner_product_space" rel="noreferrer">wikipedia</a> the standard matrix inner product on <em>square</em> matrices is defined as $\langle A,B\rangle=tr(AB^t)$. The properties are also proved <a href="https://math.stackexchange.com/questions/1224230/proving-a-matrix-inner-product">here</a>.</p> Answer 4: <p>It's worth noting that, with this definition (see answer by @Dietrich Burde), the standard inner product of two rectangular real matrices (with the same dimensions) is :</p> <p>$$\left&lt;A,B\right&gt;=\sum_{1\le i,j\le n}a_{i,j}b_{i,j}$$</p> <p>which clearly reminds us the way we calculate a (standard) inner product in $\mathbb{R}^n$ : adding the products of coordinates of the same index.</p> Answer 5: <p>According to <a href="https://en.wikipedia.org/wiki/Inner_product_space" rel="noreferrer">wikipedia</a> the standard matrix inner product on <em>square</em> matrices is defined as $\langle A,B\rangle=tr(AB^t)$. 
The properties are also proved <a href="https://math.stackexchange.com/questions/1224230/proving-a-matrix-inner-product">here</a>.</p> Answer 6: <p>It's worth noting that, with this definition (see answer by @Dietrich Burde), the standard inner product of two rectangular real matrices (with the same dimensions) is :</p> <p>$$\left&lt;A,B\right&gt;=\sum_{1\le i,j\le n}a_{i,j}b_{i,j}$$</p> <p>which clearly reminds us the way we calculate a (standard) inner product in $\mathbb{R}^n$ : adding the products of coordinates of the same index.</p> Answer 7: <p>According to <a href="https://en.wikipedia.org/wiki/Inner_product_space" rel="noreferrer">wikipedia</a> the standard matrix inner product on <em>square</em> matrices is defined as $\langle A,B\rangle=tr(AB^t)$. The properties are also proved <a href="https://math.stackexchange.com/questions/1224230/proving-a-matrix-inner-product">here</a>.</p> Answer 8: <p>It's worth noting that, with this definition (see answer by @Dietrich Burde), the standard inner product of two rectangular real matrices (with the same dimensions) is :</p> <p>$$\left&lt;A,B\right&gt;=\sum_{1\le i,j\le n}a_{i,j}b_{i,j}$$</p> <p>which clearly reminds us the way we calculate a (standard) inner product in $\mathbb{R}^n$ : adding the products of coordinates of the same index.</p> Answer 9: <p>According to <a href="https://en.wikipedia.org/wiki/Inner_product_space" rel="noreferrer">wikipedia</a> the standard matrix inner product on <em>square</em> matrices is defined as $\langle A,B\rangle=tr(AB^t)$. 
The properties are also proved <a href="https://math.stackexchange.com/questions/1224230/proving-a-matrix-inner-product">here</a>.</p> Answer 10: <p>It's worth noting that, with this definition (see answer by @Dietrich Burde), the standard inner product of two rectangular real matrices (with the same dimensions) is :</p> <p>$$\left&lt;A,B\right&gt;=\sum_{1\le i,j\le n}a_{i,j}b_{i,j}$$</p> <p>which clearly reminds us the way we calculate a (standard) inner product in $\mathbb{R}^n$ : adding the products of coordinates of the same index.</p> Answer 11: <p>According to <a href="https://en.wikipedia.org/wiki/Inner_product_space" rel="noreferrer">wikipedia</a> the standard matrix inner product on <em>square</em> matrices is defined as $\langle A,B\rangle=tr(AB^t)$. The properties are also proved <a href="https://math.stackexchange.com/questions/1224230/proving-a-matrix-inner-product">here</a>.</p> Answer 12: <p>It's worth noting that, with this definition (see answer by @Dietrich Burde), the standard inner product of two rectangular real matrices (with the same dimensions) is :</p> <p>$$\left&lt;A,B\right&gt;=\sum_{1\le i,j\le n}a_{i,j}b_{i,j}$$</p> <p>which clearly reminds us the way we calculate a (standard) inner product in $\mathbb{R}^n$ : adding the products of coordinates of the same index.</p> Answer 13: <p>According to <a href="https://en.wikipedia.org/wiki/Inner_product_space" rel="noreferrer">wikipedia</a> the standard matrix inner product on <em>square</em> matrices is defined as $\langle A,B\rangle=tr(AB^t)$. 
The properties are also proved <a href="https://math.stackexchange.com/questions/1224230/proving-a-matrix-inner-product">here</a>.</p> Answer 14: <p>It's worth noting that, with this definition (see answer by @Dietrich Burde), the standard inner product of two rectangular real matrices (with the same dimensions) is :</p> <p>$$\left&lt;A,B\right&gt;=\sum_{1\le i,j\le n}a_{i,j}b_{i,j}$$</p> <p>which clearly reminds us the way we calculate a (standard) inner product in $\mathbb{R}^n$ : adding the products of coordinates of the same index.</p> Answer 15: <p>According to <a href="https://en.wikipedia.org/wiki/Inner_product_space" rel="noreferrer">wikipedia</a> the standard matrix inner product on <em>square</em> matrices is defined as $\langle A,B\rangle=tr(AB^t)$. The properties are also proved <a href="https://math.stackexchange.com/questions/1224230/proving-a-matrix-inner-product">here</a>.</p> Answer 16: <p>It's worth noting that, with this definition (see answer by @Dietrich Burde), the standard inner product of two rectangular real matrices (with the same dimensions) is :</p> <p>$$\left&lt;A,B\right&gt;=\sum_{1\le i,j\le n}a_{i,j}b_{i,j}$$</p> <p>which clearly reminds us the way we calculate a (standard) inner product in $\mathbb{R}^n$ : adding the products of coordinates of the same index.</p> Answer 17: <p>According to <a href="https://en.wikipedia.org/wiki/Inner_product_space" rel="noreferrer">wikipedia</a> the standard matrix inner product on <em>square</em> matrices is defined as $\langle A,B\rangle=tr(AB^t)$. 
The properties are also proved <a href="https://math.stackexchange.com/questions/1224230/proving-a-matrix-inner-product">here</a>.</p> Answer 18: <p>It's worth noting that, with this definition (see answer by @Dietrich Burde), the standard inner product of two rectangular real matrices (with the same dimensions) is :</p> <p>$$\left&lt;A,B\right&gt;=\sum_{1\le i,j\le n}a_{i,j}b_{i,j}$$</p> <p>which clearly reminds us the way we calculate a (standard) inner product in $\mathbb{R}^n$ : adding the products of coordinates of the same index.</p> Answer 19: <p>According to <a href="https://en.wikipedia.org/wiki/Inner_product_space" rel="noreferrer">wikipedia</a> the standard matrix inner product on <em>square</em> matrices is defined as $\langle A,B\rangle=tr(AB^t)$. The properties are also proved <a href="https://math.stackexchange.com/questions/1224230/proving-a-matrix-inner-product">here</a>.</p> Answer 20: <p>It's worth noting that, with this definition (see answer by @Dietrich Burde), the standard inner product of two rectangular real matrices (with the same dimensions) is :</p> <p>$$\left&lt;A,B\right&gt;=\sum_{1\le i,j\le n}a_{i,j}b_{i,j}$$</p> <p>which clearly reminds us the way we calculate a (standard) inner product in $\mathbb{R}^n$ : adding the products of coordinates of the same index.</p> Answer 21: <p>According to <a href="https://en.wikipedia.org/wiki/Inner_product_space" rel="noreferrer">wikipedia</a> the standard matrix inner product on <em>square</em> matrices is defined as $\langle A,B\rangle=tr(AB^t)$. 
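Not part of the answers above — a quick numerical sketch (assuming Python with NumPy) checking that the trace form $\langle A,B\rangle=\operatorname{tr}(AB^t)$ and the entrywise-sum form agree, including for rectangular matrices of matching shape:

```python
import numpy as np

# Two real matrices of the same (rectangular) shape.
A = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0]])
B = np.array([[7.0, 8.0, 9.0],
              [1.0, 0.0, 2.0]])

# Trace form: <A, B> = tr(A B^t).  For m x n matrices, A @ B.T is m x m.
inner_trace = np.trace(A @ B.T)

# Entrywise form: sum of products of entries with matching indices.
inner_sum = np.sum(A * B)

print(inner_trace)  # 66.0
print(inner_sum)    # 66.0
```

Both forms give the same number because the $i$-th diagonal entry of $AB^t$ is exactly $\sum_j a_{i,j}b_{i,j}$, so the trace sums over all index pairs.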
What is the Arctangent of Tangent?
2,116,622
8
https://math.stackexchange.com/questions/2116622/what-is-the-arctangent-of-tangent
<p>This is probably just me misunderstanding trig properties. I remember from my trig days that $\tan(\arctan(x)) = x$, but I can't remember if that holds the other way. Can someone help me out? Is this valid:</p> <p>$$\arctan(\tan(\theta)) = \theta$$</p>
[ { "answer_id": 2116661, "score": 14, "text": "<p>It is true that $\\tan\\arctan x=x$, for every $x$. The converse is not true and it cannot be, because the tangent is not an injective function.</p>\n\n<p>Recall that $\\arctan x$ returns a number (an angle if you prefer) in the interval $(-\\pi/2,\\pi/2)$. So we have the equality\n$$\n\\arctan(\\tan\\theta)=\\theta\n$$\nif and only if $\\theta\\in(-\\pi/2,\\pi/2)$.</p>\n\n<p>A formula can be given for any $\\theta$: you just need to “reduce” the angle to the right interval by subtracting an integral multiple of $\\pi$ so that\n$$\n-\\frac{\\pi}{2}&lt;\\theta-k\\pi&lt;\\frac{\\pi}{2}\n$$\nwhich is equivalent to\n$$\n-\\frac{1}{2}&lt;\\frac{\\theta}{\\pi}-k&lt;\\frac{1}{2}\n$$\nor\n$$\n0&lt;\\frac{\\theta}{\\pi}+\\frac{1}{2}-k&lt;1\n$$\nor\n$$\nk&lt;\\frac{\\theta}{\\pi}+\\frac{1}{2}&lt;k+1\n$$\nwhich means\n$$\nk=\\left\\lfloor\\frac{\\theta}{\\pi}+\\frac{1}{2}\\right\\rfloor\n$$\nThus\n$$\n\\arctan(\\tan\\theta)=\\theta-\\pi\\left\\lfloor\\frac{\\theta}{\\pi}+\\frac{1}{2}\\right\\rfloor\n$$\n(of course when $\\tan\\theta$ is defined to begin with).</p>\n\n<p>For instance, if $\\theta=15\\pi/4$, we have\n$$\n\\left\\lfloor\\frac{\\theta}{\\pi}+\\frac{1}{2}\\right\\rfloor=\n\\left\\lfloor\\frac{15}{4}+\\frac{1}{2}\\right\\rfloor=4\n$$\nand\n$$\n\\arctan\\tan\\frac{15\\pi}{4}=\\frac{15\\pi}{4}-4\\pi=-\\frac{\\pi}{4}\n$$</p>\n" }, { "answer_id": 2116624, "score": 3, "text": "<p>Yes, if $-\\pi/2 &lt; \\theta &lt; \\pi/2$.</p>\n\n<p>No, otherwise.</p>\n" }, { "answer_id": 2116642, "score": 3, "text": "<p>It only equals $\\theta $ if $$-\\frac {\\pi}{2}&lt;\\theta &lt; \\frac {\\pi}{2} $$ as the range of $\\arctan$ is only from $-\\frac {\\pi}{2} $ to $\\frac {\\pi}{2} $. 
If $\\theta $ is outside this interval, then you would need to add or subtract $\\pi $ from $\\theta $ until you get to the angle in this interval that has the same value of $\\tan$.</p>\n\n<p>For instance, $\\arctan (\\tan \\frac {\\pi}{6}) = \\frac {\\pi}{6} $, but $\\arctan (\\tan \\frac {3\\pi }{4}) = -\\frac {\\pi}{4} $. </p>\n\n<p>This can also be seen here: </p>\n\n<p><img src=\"https://i.stack.imgur.com/f2LZQ.png\" alt=\"enter image description here\"> Hope it helps.</p>\n" }, { "answer_id": 2116647, "score": 1, "text": "<p>All you can say <em>a priori</em> is\n$$\\arctan(\\tan\\theta)\\equiv \\theta\\mod\\pi.$$</p>\n" } ]
is only from $-\\frac {\\pi}{2} $ to $\\frac {\\pi}{2} $. If $\\theta $ is outside this interval, then you would need to add or subtract $\\pi $ from $\\theta $ until you get to the angle in this interval that has the same value of $\\tan$.</p>\n\n<p>For instance, $\\arctan (\\tan \\frac {\\pi}{6}) = \\frac {\\pi}{6} $, but $\\arctan (\\tan \\frac {3\\pi }{4}) = -\\frac {\\pi}{4} $. </p>\n\n<p>This can also been seen here: </p>\n\n<p><img src=\"https://i.stack.imgur.com/f2LZQ.png\" alt=\"enter image description here\"> Hope it helps.</p>\n" }, { "answer_id": 2116647, "score": 1, "text": "<p>All you can say <em>a priori</em> is\n$$\\arctan(\\tan\\theta)\\equiv \\theta\\mod\\pi.$$</p>\n" }, { "answer_id": 2116661, "score": 14, "text": "<p>It is true that $\\tan\\arctan x=x$, for every $x$. The converse is not true and it cannot be, because the tangent is not an injective function.</p>\n\n<p>Recall that $\\arctan x$ returns a number (an angle if you prefer) in the interval $(-\\pi/2,\\pi/2)$. 
So we have the equality\n$$\n\\arctan(\\tan\\theta)=\\theta\n$$\nif and only if $\\theta\\in(-\\pi/2,\\pi/2)$.</p>\n\n<p>A formula can be given for any $\\theta$: you just need to “reduce” the angle to the right interval by subtracting an integral multiple of $\\pi$ so that\n$$\n-\\frac{\\pi}{2}&lt;\\theta-k\\pi&lt;\\frac{\\pi}{2}\n$$\nwhich is equivalent to\n$$\n-\\frac{1}{2}&lt;\\frac{\\theta}{\\pi}-k&lt;\\frac{1}{2}\n$$\nor\n$$\n0&lt;\\frac{\\theta}{\\pi}+\\frac{1}{2}-k&lt;1\n$$\nor\n$$\nk&lt;\\frac{\\theta}{\\pi}+\\frac{1}{2}&lt;k+1\n$$\nwhich means\n$$\nk=\\left\\lfloor\\frac{\\theta}{\\pi}+\\frac{1}{2}\\right\\rfloor\n$$\nThus\n$$\n\\arctan(\\tan\\theta)=\\theta-\\pi\\left\\lfloor\\frac{\\theta}{\\pi}+\\frac{1}{2}\\right\\rfloor\n$$\n(of course when $\\tan\\theta$ is defined to begin with).</p>\n\n<p>For instance, if $\\theta=15\\pi/4$, we have\n$$\n\\left\\lfloor\\frac{\\theta}{\\pi}+\\frac{1}{2}\\right\\rfloor=\n\\left\\lfloor\\frac{15}{4}+\\frac{1}{2}\\right\\rfloor=4\n$$\nand\n$$\n\\arctan\\tan\\frac{15\\pi}{4}=\\frac{15\\pi}{4}-4\\pi=-\\frac{\\pi}{4}\n$$</p>\n" }, { "answer_id": 2116624, "score": 3, "text": "<p>Yes, if $-\\pi/2 &lt; \\theta &lt; \\pi/2$.</p>\n\n<p>No, otherwise.</p>\n" }, { "answer_id": 2116642, "score": 3, "text": "<p>It only equals $\\theta $ if $$-\\frac {\\pi}{2}&lt;\\theta &lt; \\frac {\\pi}{2} $$ as the range of $\\arctan$ is only from $-\\frac {\\pi}{2} $ to $\\frac {\\pi}{2} $. If $\\theta $ is outside this interval, then you would need to add or subtract $\\pi $ from $\\theta $ until you get to the angle in this interval that has the same value of $\\tan$.</p>\n\n<p>For instance, $\\arctan (\\tan \\frac {\\pi}{6}) = \\frac {\\pi}{6} $, but $\\arctan (\\tan \\frac {3\\pi }{4}) = -\\frac {\\pi}{4} $. 
</p>\n\n<p>This can also been seen here: </p>\n\n<p><img src=\"https://i.stack.imgur.com/f2LZQ.png\" alt=\"enter image description here\"> Hope it helps.</p>\n" }, { "answer_id": 2116647, "score": 1, "text": "<p>All you can say <em>a priori</em> is\n$$\\arctan(\\tan\\theta)\\equiv \\theta\\mod\\pi.$$</p>\n" }, { "answer_id": 2116661, "score": 14, "text": "<p>It is true that $\\tan\\arctan x=x$, for every $x$. The converse is not true and it cannot be, because the tangent is not an injective function.</p>\n\n<p>Recall that $\\arctan x$ returns a number (an angle if you prefer) in the interval $(-\\pi/2,\\pi/2)$. So we have the equality\n$$\n\\arctan(\\tan\\theta)=\\theta\n$$\nif and only if $\\theta\\in(-\\pi/2,\\pi/2)$.</p>\n\n<p>A formula can be given for any $\\theta$: you just need to “reduce” the angle to the right interval by subtracting an integral multiple of $\\pi$ so that\n$$\n-\\frac{\\pi}{2}&lt;\\theta-k\\pi&lt;\\frac{\\pi}{2}\n$$\nwhich is equivalent to\n$$\n-\\frac{1}{2}&lt;\\frac{\\theta}{\\pi}-k&lt;\\frac{1}{2}\n$$\nor\n$$\n0&lt;\\frac{\\theta}{\\pi}+\\frac{1}{2}-k&lt;1\n$$\nor\n$$\nk&lt;\\frac{\\theta}{\\pi}+\\frac{1}{2}&lt;k+1\n$$\nwhich means\n$$\nk=\\left\\lfloor\\frac{\\theta}{\\pi}+\\frac{1}{2}\\right\\rfloor\n$$\nThus\n$$\n\\arctan(\\tan\\theta)=\\theta-\\pi\\left\\lfloor\\frac{\\theta}{\\pi}+\\frac{1}{2}\\right\\rfloor\n$$\n(of course when $\\tan\\theta$ is defined to begin with).</p>\n\n<p>For instance, if $\\theta=15\\pi/4$, we have\n$$\n\\left\\lfloor\\frac{\\theta}{\\pi}+\\frac{1}{2}\\right\\rfloor=\n\\left\\lfloor\\frac{15}{4}+\\frac{1}{2}\\right\\rfloor=4\n$$\nand\n$$\n\\arctan\\tan\\frac{15\\pi}{4}=\\frac{15\\pi}{4}-4\\pi=-\\frac{\\pi}{4}\n$$</p>\n" }, { "answer_id": 2116624, "score": 3, "text": "<p>Yes, if $-\\pi/2 &lt; \\theta &lt; \\pi/2$.</p>\n\n<p>No, otherwise.</p>\n" }, { "answer_id": 2116642, "score": 3, "text": "<p>It only equals $\\theta $ if $$-\\frac {\\pi}{2}&lt;\\theta &lt; \\frac {\\pi}{2} $$ as the range of $\\arctan$ 
is only from $-\\frac {\\pi}{2} $ to $\\frac {\\pi}{2} $. If $\\theta $ is outside this interval, then you would need to add or subtract $\\pi $ from $\\theta $ until you get to the angle in this interval that has the same value of $\\tan$.</p>\n\n<p>For instance, $\\arctan (\\tan \\frac {\\pi}{6}) = \\frac {\\pi}{6} $, but $\\arctan (\\tan \\frac {3\\pi }{4}) = -\\frac {\\pi}{4} $. </p>\n\n<p>This can also been seen here: </p>\n\n<p><img src=\"https://i.stack.imgur.com/f2LZQ.png\" alt=\"enter image description here\"> Hope it helps.</p>\n" }, { "answer_id": 2116647, "score": 1, "text": "<p>All you can say <em>a priori</em> is\n$$\\arctan(\\tan\\theta)\\equiv \\theta\\mod\\pi.$$</p>\n" } ]
Question: What is the Arctangent of Tangent? Answer 1: <p>It is true that $\tan\arctan x=x$, for every $x$. The converse is not true and it cannot be, because the tangent is not an injective function.</p> <p>Recall that $\arctan x$ returns a number (an angle if you prefer) in the interval $(-\pi/2,\pi/2)$. So we have the equality $$ \arctan(\tan\theta)=\theta $$ if and only if $\theta\in(-\pi/2,\pi/2)$.</p> <p>A formula can be given for any $\theta$: you just need to “reduce” the angle to the right interval by subtracting an integral multiple of $\pi$ so that $$ -\frac{\pi}{2}&lt;\theta-k\pi&lt;\frac{\pi}{2} $$ which is equivalent to $$ -\frac{1}{2}&lt;\frac{\theta}{\pi}-k&lt;\frac{1}{2} $$ or $$ 0&lt;\frac{\theta}{\pi}+\frac{1}{2}-k&lt;1 $$ or $$ k&lt;\frac{\theta}{\pi}+\frac{1}{2}&lt;k+1 $$ which means $$ k=\left\lfloor\frac{\theta}{\pi}+\frac{1}{2}\right\rfloor $$ Thus $$ \arctan(\tan\theta)=\theta-\pi\left\lfloor\frac{\theta}{\pi}+\frac{1}{2}\right\rfloor $$ (of course when $\tan\theta$ is defined to begin with).</p> <p>For instance, if $\theta=15\pi/4$, we have $$ \left\lfloor\frac{\theta}{\pi}+\frac{1}{2}\right\rfloor= \left\lfloor\frac{15}{4}+\frac{1}{2}\right\rfloor=4 $$ and $$ \arctan\tan\frac{15\pi}{4}=\frac{15\pi}{4}-4\pi=-\frac{\pi}{4} $$</p> Answer 2: <p>Yes, if $-\pi/2 &lt; \theta &lt; \pi/2$.</p> <p>No, otherwise.</p> Answer 3: <p>It only equals $\theta $ if $$-\frac {\pi}{2}&lt;\theta &lt; \frac {\pi}{2} $$ as the range of $\arctan$ is only from $-\frac {\pi}{2} $ to $\frac {\pi}{2} $. If $\theta $ is outside this interval, then you would need to add or subtract $\pi $ from $\theta $ until you get to the angle in this interval that has the same value of $\tan$.</p> <p>For instance, $\arctan (\tan \frac {\pi}{6}) = \frac {\pi}{6} $, but $\arctan (\tan \frac {3\pi }{4}) = -\frac {\pi}{4} $. </p> <p>This can also been seen here: </p> <p><img src="https://i.stack.imgur.com/f2LZQ.png" alt="enter image description here"> Hope it helps.</p> Answer 4: <p>All you can say <em>a priori</em> is $$\arctan(\tan\theta)\equiv \theta\mod\pi.$$</p> Answer 51: <p>It only equals $\theta $ 
if $$-\frac {\pi}{2}&lt;\theta &lt; \frac {\pi}{2} $$ as the range of $\arctan$ is only from $-\frac {\pi}{2} $ to $\frac {\pi}{2} $. If $\theta $ is outside this interval, then you would need to add or subtract $\pi $ from $\theta $ until you get to the angle in this interval that has the same value of $\tan$.</p> <p>For instance, $\arctan (\tan \frac {\pi}{6}) = \frac {\pi}{6} $, but $\arctan (\tan \frac {3\pi }{4}) = -\frac {\pi}{4} $. </p> <p>This can also been seen here: </p> <p><img src="https://i.stack.imgur.com/f2LZQ.png" alt="enter image description here"> Hope it helps.</p> Answer 52: <p>All you can say <em>a priori</em> is $$\arctan(\tan\theta)\equiv \theta\mod\pi.$$</p> Answer 53: <p>It is true that $\tan\arctan x=x$, for every $x$. The converse is not true and it cannot be, because the tangent is not an injective function.</p> <p>Recall that $\arctan x$ returns a number (an angle if you prefer) in the interval $(-\pi/2,\pi/2)$. So we have the equality $$ \arctan(\tan\theta)=\theta $$ if and only if $\theta\in(-\pi/2,\pi/2)$.</p> <p>A formula can be given for any $\theta$: you just need to “reduce” the angle to the right interval by subtracting an integral multiple of $\pi$ so that $$ -\frac{\pi}{2}&lt;\theta-k\pi&lt;\frac{\pi}{2} $$ which is equivalent to $$ -\frac{1}{2}&lt;\frac{\theta}{\pi}-k&lt;\frac{1}{2} $$ or $$ 0&lt;\frac{\theta}{\pi}+\frac{1}{2}-k&lt;1 $$ or $$ k&lt;\frac{\theta}{\pi}+\frac{1}{2}&lt;k+1 $$ which means $$ k=\left\lfloor\frac{\theta}{\pi}+\frac{1}{2}\right\rfloor $$ Thus $$ \arctan(\tan\theta)=\theta-\pi\left\lfloor\frac{\theta}{\pi}+\frac{1}{2}\right\rfloor $$ (of course when $\tan\theta$ is defined to begin with).</p> <p>For instance, if $\theta=15\pi/4$, we have $$ \left\lfloor\frac{\theta}{\pi}+\frac{1}{2}\right\rfloor= \left\lfloor\frac{15}{4}+\frac{1}{2}\right\rfloor=4 $$ and $$ \arctan\tan\frac{15\pi}{4}=\frac{15\pi}{4}-4\pi=-\frac{\pi}{4} $$</p> Answer 54: <p>Yes, if $-\pi/2 &lt; \theta &lt; \pi/2$.</p> <p>No, 
otherwise.</p> Answer 55: <p>It only equals $\theta $ if $$-\frac {\pi}{2}&lt;\theta &lt; \frac {\pi}{2} $$ as the range of $\arctan$ is only from $-\frac {\pi}{2} $ to $\frac {\pi}{2} $. If $\theta $ is outside this interval, then you would need to add or subtract $\pi $ from $\theta $ until you get to the angle in this interval that has the same value of $\tan$.</p> <p>For instance, $\arctan (\tan \frac {\pi}{6}) = \frac {\pi}{6} $, but $\arctan (\tan \frac {3\pi }{4}) = -\frac {\pi}{4} $. </p> <p>This can also been seen here: </p> <p><img src="https://i.stack.imgur.com/f2LZQ.png" alt="enter image description here"> Hope it helps.</p> Answer 56: <p>All you can say <em>a priori</em> is $$\arctan(\tan\theta)\equiv \theta\mod\pi.$$</p> Answer 57: <p>It is true that $\tan\arctan x=x$, for every $x$. The converse is not true and it cannot be, because the tangent is not an injective function.</p> <p>Recall that $\arctan x$ returns a number (an angle if you prefer) in the interval $(-\pi/2,\pi/2)$. 
So we have the equality $$ \arctan(\tan\theta)=\theta $$ if and only if $\theta\in(-\pi/2,\pi/2)$.</p> <p>A formula can be given for any $\theta$: you just need to “reduce” the angle to the right interval by subtracting an integral multiple of $\pi$ so that $$ -\frac{\pi}{2}&lt;\theta-k\pi&lt;\frac{\pi}{2} $$ which is equivalent to $$ -\frac{1}{2}&lt;\frac{\theta}{\pi}-k&lt;\frac{1}{2} $$ or $$ 0&lt;\frac{\theta}{\pi}+\frac{1}{2}-k&lt;1 $$ or $$ k&lt;\frac{\theta}{\pi}+\frac{1}{2}&lt;k+1 $$ which means $$ k=\left\lfloor\frac{\theta}{\pi}+\frac{1}{2}\right\rfloor $$ Thus $$ \arctan(\tan\theta)=\theta-\pi\left\lfloor\frac{\theta}{\pi}+\frac{1}{2}\right\rfloor $$ (of course when $\tan\theta$ is defined to begin with).</p> <p>For instance, if $\theta=15\pi/4$, we have $$ \left\lfloor\frac{\theta}{\pi}+\frac{1}{2}\right\rfloor= \left\lfloor\frac{15}{4}+\frac{1}{2}\right\rfloor=4 $$ and $$ \arctan\tan\frac{15\pi}{4}=\frac{15\pi}{4}-4\pi=-\frac{\pi}{4} $$</p> Answer 58: <p>Yes, if $-\pi/2 &lt; \theta &lt; \pi/2$.</p> <p>No, otherwise.</p> Answer 59: <p>It only equals $\theta $ if $$-\frac {\pi}{2}&lt;\theta &lt; \frac {\pi}{2} $$ as the range of $\arctan$ is only from $-\frac {\pi}{2} $ to $\frac {\pi}{2} $. If $\theta $ is outside this interval, then you would need to add or subtract $\pi $ from $\theta $ until you get to the angle in this interval that has the same value of $\tan$.</p> <p>For instance, $\arctan (\tan \frac {\pi}{6}) = \frac {\pi}{6} $, but $\arctan (\tan \frac {3\pi }{4}) = -\frac {\pi}{4} $. </p> <p>This can also been seen here: </p> <p><img src="https://i.stack.imgur.com/f2LZQ.png" alt="enter image description here"> Hope it helps.</p> Answer 60: <p>All you can say <em>a priori</em> is $$\arctan(\tan\theta)\equiv \theta\mod\pi.$$</p> Answer 61: <p>It is true that $\tan\arctan x=x$, for every $x$. 
The converse is not true and it cannot be, because the tangent is not an injective function.</p> <p>Recall that $\arctan x$ returns a number (an angle if you prefer) in the interval $(-\pi/2,\pi/2)$. So we have the equality $$ \arctan(\tan\theta)=\theta $$ if and only if $\theta\in(-\pi/2,\pi/2)$.</p> <p>A formula can be given for any $\theta$: you just need to “reduce” the angle to the right interval by subtracting an integral multiple of $\pi$ so that $$ -\frac{\pi}{2}&lt;\theta-k\pi&lt;\frac{\pi}{2} $$ which is equivalent to $$ -\frac{1}{2}&lt;\frac{\theta}{\pi}-k&lt;\frac{1}{2} $$ or $$ 0&lt;\frac{\theta}{\pi}+\frac{1}{2}-k&lt;1 $$ or $$ k&lt;\frac{\theta}{\pi}+\frac{1}{2}&lt;k+1 $$ which means $$ k=\left\lfloor\frac{\theta}{\pi}+\frac{1}{2}\right\rfloor $$ Thus $$ \arctan(\tan\theta)=\theta-\pi\left\lfloor\frac{\theta}{\pi}+\frac{1}{2}\right\rfloor $$ (of course when $\tan\theta$ is defined to begin with).</p> <p>For instance, if $\theta=15\pi/4$, we have $$ \left\lfloor\frac{\theta}{\pi}+\frac{1}{2}\right\rfloor= \left\lfloor\frac{15}{4}+\frac{1}{2}\right\rfloor=4 $$ and $$ \arctan\tan\frac{15\pi}{4}=\frac{15\pi}{4}-4\pi=-\frac{\pi}{4} $$</p> Answer 62: <p>Yes, if $-\pi/2 &lt; \theta &lt; \pi/2$.</p> <p>No, otherwise.</p> Answer 63: <p>It only equals $\theta $ if $$-\frac {\pi}{2}&lt;\theta &lt; \frac {\pi}{2} $$ as the range of $\arctan$ is only from $-\frac {\pi}{2} $ to $\frac {\pi}{2} $. If $\theta $ is outside this interval, then you would need to add or subtract $\pi $ from $\theta $ until you get to the angle in this interval that has the same value of $\tan$.</p> <p>For instance, $\arctan (\tan \frac {\pi}{6}) = \frac {\pi}{6} $, but $\arctan (\tan \frac {3\pi }{4}) = -\frac {\pi}{4} $. </p> <p>This can also been seen here: </p> <p><img src="https://i.stack.imgur.com/f2LZQ.png" alt="enter image description here"> Hope it helps.</p> Answer 64: <p>All you can say <em>a priori</em> is $$\arctan(\tan\theta)\equiv \theta\mod\pi.$$</p>
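The floor-based reduction formula worked out in the answers above is easy to sanity-check numerically. A minimal sketch (Python, standard library only; the function name `arctan_tan` is mine, not from the answers):

```python
import math

def arctan_tan(theta):
    """arctan(tan(theta)) via the reduction theta - pi*floor(theta/pi + 1/2),
    valid whenever tan(theta) is defined."""
    k = math.floor(theta / math.pi + 0.5)
    return theta - k * math.pi

# Agrees with the direct composition on the sample angles from the answers:
for theta in (math.pi / 6, 3 * math.pi / 4, 15 * math.pi / 4):
    assert math.isclose(arctan_tan(theta), math.atan(math.tan(theta)), abs_tol=1e-9)
```

In particular `arctan_tan(15 * math.pi / 4)` reduces to $-\pi/4$, matching the worked example in the answer.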
Set of limit points is closed in a general topological space
2,116,855
8
https://math.stackexchange.com/questions/2116855/set-of-limit-points-is-closed-in-a-general-topological-space
<p>Let $(X, \mathcal{T})$ be a topological space. I came across a question asking for the proof of the fact that the set of limit points $S'$ of any subset $S \subseteq X$ is closed assuming the space is Hausdorff. However, is this always true for a general topological space that is not necessarily Hausdorff?</p> <p>I am expecting the answer is no. To find a counterexample, I tried to use the topology $\mathcal{T} = \{\emptyset, \{1\},\{1,2\},\{1,2,3\}\}$ on $X = \{1,2,3\}$, however I have not been able to find an example of a set whose set of limit points is not closed in this case.</p> <p>I would appreciate a hint to find a counterexample/a proof that none exists rather than a full solution. Thank you very much.</p>
[ { "answer_id": 2116863, "score": 6, "text": "<p>Try any set with cardinality at least <span class=\"math-container\">$2$</span>, equipped with the indiscrete topology.</p>\n<p>In detail, let <span class=\"math-container\">$X=\\{a,b\\}$</span> with topology <span class=\"math-container\">$\\tau=\\{\\emptyset,X\\}$</span>. If we take <span class=\"math-container\">$A=\\{a\\}$</span>, then <span class=\"math-container\">$b$</span> is a limit point of <span class=\"math-container\">$A$</span>. But <span class=\"math-container\">$a$</span> is not a limit point of <span class=\"math-container\">$A$</span> since <span class=\"math-container\">$a$</span> is an isolated point in <span class=\"math-container\">$A$</span>. (A nbhd of <span class=\"math-container\">$a$</span> cannot contain a point of <span class=\"math-container\">$A$</span> different from <span class=\"math-container\">$a$</span>, as there are no other points.) This shows that the derived set is <span class=\"math-container\">$A'=\\{b\\}$</span>, which is not closed in <span class=\"math-container\">$X$</span>.</p>\n" }, { "answer_id": 2117002, "score": 5, "text": "<p>If $X$ is a $T_1$ space then $S'$ is closed. 
</p>\n\n<p>Suppose by contradiction that $p\\in \\overline {S'}$ \\ $S'.$ Since $S'\\subset \\bar S$ we have $p\\in \\bar S.$</p>\n\n<p>Now $p\\not \\in S'$ implies there is an open $U$ with $p\\in U$ and $U\\cap S\\subset \\{p\\}.$</p>\n\n<p>But $U$ must contain $q\\in S'$ because $p\\in \\overline {S'}$, and by hypothesis, $p\\ne q.$ Then there is an open $V$ with $q\\in V$, and $p \\not \\in V$, because $X$ is $T_1.$ The open set $U\\cap V$ contains $q\\in S'$, so there exists $r\\in U\\cap V\\cap S$ with $r\\ne q.$ BUT then $p\\ne r\\in S\\cap U,$ contrary to $U\\cap S \\subset \\{p\\}$.</p>\n\n<p>As Opn Ball has already pointed out, the property does not hold for the coarse topology on $X$ if $X$ has at least 2 points: For $S=\\{p\\}\\subset X$ we have $S'=X$ \\ $\\{p\\}$ and $\\overline {S'}=X\\ne S'.$</p>\n" }, { "answer_id": 2428105, "score": 3, "text": "<p>If $X$ is not $T_1$, it might not work as the following example shows. $X=\\{a,b,c\\}$ and $\\tau=\\{\\emptyset, \\{a,b\\},X\\}$. If we take $A=\\{a\\}$ then $A'=\\{b,c\\}$ which is not closed since its complement $\\{a\\}$ is not in $\\tau$. </p>\n" }, { "answer_id": 2116863, "score": 6, "text": "<p>Try any set with cardinality at least <span class=\"math-container\">$2$</span>, equipped with the indiscrete topology.</p>\n<p>In detail, let <span class=\"math-container\">$X=\\{a,b\\}$</span> with topology <span class=\"math-container\">$\\tau=\\{\\emptyset,X\\}$</span>. If we take <span class=\"math-container\">$A=\\{a\\}$</span>, then <span class=\"math-container\">$b$</span> is a limit point of <span class=\"math-container\">$A$</span>. But <span class=\"math-container\">$a$</span> is not a limit point of <span class=\"math-container\">$A$</span> since <span class=\"math-container\">$a$</span> is an isolated point in <span class=\"math-container\">$A$</span>. 
(A nbhd of <span class=\"math-container\">$a$</span> cannot contain a point of <span class=\"math-container\">$A$</span> different from <span class=\"math-container\">$a$</span>, as there are no other points.) This shows that the derived set is <span class=\"math-container\">$A'=\\{b\\}$</span>, which is not closed in <span class=\"math-container\">$X$</span>.</p>\n" }, { "answer_id": 2117002, "score": 5, "text": "<p>If $X$ is a $T_1$ space then $S'$ is closed. </p>\n\n<p>Suppose by contradiction that $p\\in \\overline {S'}$ \\ $S'.$ Since $S'\\subset \\bar S$ we have $p\\in \\bar S.$</p>\n\n<p>Now $p\\not \\in S'$ implies there is an open $U$ with $p\\in U$ and $U\\cap S\\subset \\{p\\}.$</p>\n\n<p>But $U$ must contain $q\\in S'$ because $p\\in \\overline {S'}$, and by hypothesis, $p\\ne q.$ Then there is an open $V$ with $q\\in V$, and $p \\not \\in V$, because $X$ is $T_1.$ The open set $U\\cap V$ contains $q\\in S'$, so there exists $r\\in U\\cap V\\cap S$ with $r\\ne q.$ BUT then $p\\ne r\\in S\\cap U,$ contrary to $U\\cap S \\subset \\{p\\}$.</p>\n\n<p>As Opn Ball has already pointed out, the property does not hold for the coarse topology on $X$ if $X$ has at least 2 points: For $S=\\{p\\}\\subset X$ we have $S'=X$ \\ $\\{p\\}$ and $\\overline {S'}=X\\ne S'.$</p>\n" }, { "answer_id": 2428105, "score": 3, "text": "<p>If $X$ is not $T_1$, it might not work as the following example shows. $X=\\{a,b,c\\}$ and $\\tau=\\{\\emptyset, \\{a,b\\},X\\}$. If we take $A=\\{a\\}$ then $A'=\\{b,c\\}$ which is not closed since its complement $\\{a\\}$ is not in $\\tau$. </p>\n" }, { "answer_id": 2116863, "score": 6, "text": "<p>Try any set with cardinality at least <span class=\"math-container\">$2$</span>, equipped with the indiscrete topology.</p>\n<p>In detail, let <span class=\"math-container\">$X=\\{a,b\\}$</span> with topology <span class=\"math-container\">$\\tau=\\{\\emptyset,X\\}$</span>. 
If we take <span class=\"math-container\">$A=\\{a\\}$</span>, then <span class=\"math-container\">$b$</span> is a limit point of <span class=\"math-container\">$A$</span>. But <span class=\"math-container\">$a$</span> is not a limit point of <span class=\"math-container\">$A$</span> since <span class=\"math-container\">$a$</span> is an isolated point in <span class=\"math-container\">$A$</span>. (A nbhd of <span class=\"math-container\">$a$</span> cannot contain a point of <span class=\"math-container\">$A$</span> different from <span class=\"math-container\">$a$</span>, as there are no other points.) This shows that the derived set is <span class=\"math-container\">$A'=\\{b\\}$</span>, which is not closed in <span class=\"math-container\">$X$</span>.</p>\n" }, { "answer_id": 2117002, "score": 5, "text": "<p>If $X$ is a $T_1$ space then $S'$ is closed. </p>\n\n<p>Suppose by contradiction that $p\\in \\overline {S'}$ \\ $S'.$ Since $S'\\subset \\bar S$ we have $p\\in \\bar S.$</p>\n\n<p>Now $p\\not \\in S'$ implies there is an open $U$ with $p\\in U$ and $U\\cap S\\subset \\{p\\}.$</p>\n\n<p>But $U$ must contain $q\\in S'$ because $p\\in \\overline {S'}$, and by hypothesis, $p\\ne q.$ Then there is an open $V$ with $q\\in V$, and $p \\not \\in V$, because $X$ is $T_1.$ The open set $U\\cap V$ contains $q\\in S'$, so there exists $r\\in U\\cap V\\cap S$ with $r\\ne q.$ BUT then $p\\ne r\\in S\\cap U,$ contrary to $U\\cap S \\subset \\{p\\}$.</p>\n\n<p>As Opn Ball has already pointed out, the property does not hold for the coarse topology on $X$ if $X$ has at least 2 points: For $S=\\{p\\}\\subset X$ we have $S'=X$ \\ $\\{p\\}$ and $\\overline {S'}=X\\ne S'.$</p>\n" }, { "answer_id": 2428105, "score": 3, "text": "<p>If $X$ is not $T_1$, it might not work as the following example shows. $X=\\{a,b,c\\}$ and $\\tau=\\{\\emptyset, \\{a,b\\},X\\}$. If we take $A=\\{a\\}$ then $A'=\\{b,c\\}$ which is not closed since its complement $\\{a\\}$ is not in $\\tau$. 
</p>\n" }, { "answer_id": 2116863, "score": 6, "text": "<p>Try any set with cardinality at least <span class=\"math-container\">$2$</span>, equipped with the indiscrete topology.</p>\n<p>In detail, let <span class=\"math-container\">$X=\\{a,b\\}$</span> with topology <span class=\"math-container\">$\\tau=\\{\\emptyset,X\\}$</span>. If we take <span class=\"math-container\">$A=\\{a\\}$</span>, then <span class=\"math-container\">$b$</span> is a limit point of <span class=\"math-container\">$A$</span>. But <span class=\"math-container\">$a$</span> is not a limit point of <span class=\"math-container\">$A$</span> since <span class=\"math-container\">$a$</span> is an isolated point in <span class=\"math-container\">$A$</span>. (A nbhd of <span class=\"math-container\">$a$</span> cannot contain a point of <span class=\"math-container\">$A$</span> different from <span class=\"math-container\">$a$</span>, as there are no other points.) This shows that the derived set is <span class=\"math-container\">$A'=\\{b\\}$</span>, which is not closed in <span class=\"math-container\">$X$</span>.</p>\n" }, { "answer_id": 2117002, "score": 5, "text": "<p>If $X$ is a $T_1$ space then $S'$ is closed. 
</p>\n\n<p>Suppose by contradiction that $p\\in \\overline {S'}$ \\ $S'.$ Since $S'\\subset \\bar S$ we have $p\\in \\bar S.$</p>\n\n<p>Now $p\\not \\in S'$ implies there is an open $U$ with $p\\in U$ and $U\\cap S\\subset \\{p\\}.$</p>\n\n<p>But $U$ must contain $q\\in S'$ because $p\\in \\overline {S'}$, and by hypothesis, $p\\ne q.$ Then there is an open $V$ with $q\\in V$, and $p \\not \\in V$, because $X$ is $T_1.$ The open set $U\\cap V$ contains $q\\in S'$, so there exists $r\\in U\\cap V\\cap S$ with $r\\ne q.$ BUT then $p\\ne r\\in S\\cap U,$ contrary to $U\\cap S \\subset \\{p\\}$.</p>\n\n<p>As Opn Ball has already pointed out, the property does not hold for the coarse topology on $X$ if $X$ has at least 2 points: For $S=\\{p\\}\\subset X$ we have $S'=X$ \\ $\\{p\\}$ and $\\overline {S'}=X\\ne S'.$</p>\n" }, { "answer_id": 2428105, "score": 3, "text": "<p>If $X$ is not $T_1$, it might not work as the following example shows. $X=\\{a,b,c\\}$ and $\\tau=\\{\\emptyset, \\{a,b\\},X\\}$. If we take $A=\\{a\\}$ then $A'=\\{b,c\\}$ which is not closed since its complement $\\{a\\}$ is not in $\\tau$. </p>\n" }, { "answer_id": 2116863, "score": 6, "text": "<p>Try any set with cardinality at least <span class=\"math-container\">$2$</span>, equipped with the indiscrete topology.</p>\n<p>In detail, let <span class=\"math-container\">$X=\\{a,b\\}$</span> with topology <span class=\"math-container\">$\\tau=\\{\\emptyset,X\\}$</span>. If we take <span class=\"math-container\">$A=\\{a\\}$</span>, then <span class=\"math-container\">$b$</span> is a limit point of <span class=\"math-container\">$A$</span>. But <span class=\"math-container\">$a$</span> is not a limit point of <span class=\"math-container\">$A$</span> since <span class=\"math-container\">$a$</span> is an isolated point in <span class=\"math-container\">$A$</span>. 
(A nbhd of <span class=\"math-container\">$a$</span> cannot contain a point of <span class=\"math-container\">$A$</span> different from <span class=\"math-container\">$a$</span>, as there are no other points.) This shows that the derived set is <span class=\"math-container\">$A'=\\{b\\}$</span>, which is not closed in <span class=\"math-container\">$X$</span>.</p>\n" }, { "answer_id": 2117002, "score": 5, "text": "<p>If $X$ is a $T_1$ space then $S'$ is closed. </p>\n\n<p>Suppose by contradiction that $p\\in \\overline {S'}$ \\ $S'.$ Since $S'\\subset \\bar S$ we have $p\\in \\bar S.$</p>\n\n<p>Now $p\\not \\in S'$ implies there is an open $U$ with $p\\in U$ and $U\\cap S\\subset \\{p\\}.$</p>\n\n<p>But $U$ must contain $q\\in S'$ because $p\\in \\overline {S'}$, and by hypothesis, $p\\ne q.$ Then there is an open $V$ with $q\\in V$, and $p \\not \\in V$, because $X$ is $T_1.$ The open set $U\\cap V$ contains $q\\in S'$, so there exists $r\\in U\\cap V\\cap S$ with $r\\ne q.$ BUT then $p\\ne r\\in S\\cap U,$ contrary to $U\\cap S \\subset \\{p\\}$.</p>\n\n<p>As Opn Ball has already pointed out, the property does not hold for the coarse topology on $X$ if $X$ has at least 2 points: For $S=\\{p\\}\\subset X$ we have $S'=X$ \\ $\\{p\\}$ and $\\overline {S'}=X\\ne S'.$</p>\n" }, { "answer_id": 2428105, "score": 3, "text": "<p>If $X$ is not $T_1$, it might not work as the following example shows. $X=\\{a,b,c\\}$ and $\\tau=\\{\\emptyset, \\{a,b\\},X\\}$. If we take $A=\\{a\\}$ then $A'=\\{b,c\\}$ which is not closed since its complement $\\{a\\}$ is not in $\\tau$. </p>\n" }, { "answer_id": 2116863, "score": 6, "text": "<p>Try any set with cardinality at least <span class=\"math-container\">$2$</span>, equipped with the indiscrete topology.</p>\n<p>In detail, let <span class=\"math-container\">$X=\\{a,b\\}$</span> with topology <span class=\"math-container\">$\\tau=\\{\\emptyset,X\\}$</span>. 
If we take <span class=\"math-container\">$A=\\{a\\}$</span>, then <span class=\"math-container\">$b$</span> is a limit point of <span class=\"math-container\">$A$</span>. But <span class=\"math-container\">$a$</span> is not a limit point of <span class=\"math-container\">$A$</span> since <span class=\"math-container\">$a$</span> is an isolated point in <span class=\"math-container\">$A$</span>. (A nbhd of <span class=\"math-container\">$a$</span> cannot contain a point of <span class=\"math-container\">$A$</span> different from <span class=\"math-container\">$a$</span>, as there are no other points.) This shows that the derived set is <span class=\"math-container\">$A'=\\{b\\}$</span>, which is not closed in <span class=\"math-container\">$X$</span>.</p>\n" }, { "answer_id": 2117002, "score": 5, "text": "<p>If $X$ is a $T_1$ space then $S'$ is closed. </p>\n\n<p>Suppose by contradiction that $p\\in \\overline {S'}$ \\ $S'.$ Since $S'\\subset \\bar S$ we have $p\\in \\bar S.$</p>\n\n<p>Now $p\\not \\in S'$ implies there is an open $U$ with $p\\in U$ and $U\\cap S\\subset \\{p\\}.$</p>\n\n<p>But $U$ must contain $q\\in S'$ because $p\\in \\overline {S'}$, and by hypothesis, $p\\ne q.$ Then there is an open $V$ with $q\\in V$, and $p \\not \\in V$, because $X$ is $T_1.$ The open set $U\\cap V$ contains $q\\in S'$, so there exists $r\\in U\\cap V\\cap S$ with $r\\ne q.$ BUT then $p\\ne r\\in S\\cap U,$ contrary to $U\\cap S \\subset \\{p\\}$.</p>\n\n<p>As Opn Ball has already pointed out, the property does not hold for the coarse topology on $X$ if $X$ has at least 2 points: For $S=\\{p\\}\\subset X$ we have $S'=X$ \\ $\\{p\\}$ and $\\overline {S'}=X\\ne S'.$</p>\n" }, { "answer_id": 2428105, "score": 3, "text": "<p>If $X$ is not $T_1$, it might not work as the following example shows. $X=\\{a,b,c\\}$ and $\\tau=\\{\\emptyset, \\{a,b\\},X\\}$. If we take $A=\\{a\\}$ then $A'=\\{b,c\\}$ which is not closed since its complement $\\{a\\}$ is not in $\\tau$. 
</p>\n" }, { "answer_id": 2116863, "score": 6, "text": "<p>Try any set with cardinality at least <span class=\"math-container\">$2$</span>, equipped with the indiscrete topology.</p>\n<p>In detail, let <span class=\"math-container\">$X=\\{a,b\\}$</span> with topology <span class=\"math-container\">$\\tau=\\{\\emptyset,X\\}$</span>. If we take <span class=\"math-container\">$A=\\{a\\}$</span>, then <span class=\"math-container\">$b$</span> is a limit point of <span class=\"math-container\">$A$</span>. But <span class=\"math-container\">$a$</span> is not a limit point of <span class=\"math-container\">$A$</span> since <span class=\"math-container\">$a$</span> is an isolated point in <span class=\"math-container\">$A$</span>. (A nbhd of <span class=\"math-container\">$a$</span> cannot contain a point of <span class=\"math-container\">$A$</span> different from <span class=\"math-container\">$a$</span>, as there are no other points.) This shows that the derived set is <span class=\"math-container\">$A'=\\{b\\}$</span>, which is not closed in <span class=\"math-container\">$X$</span>.</p>\n" }, { "answer_id": 2117002, "score": 5, "text": "<p>If $X$ is a $T_1$ space then $S'$ is closed. 
</p>\n\n<p>Suppose by contradiction that $p\\in \\overline {S'}$ \\ $S'.$ Since $S'\\subset \\bar S$ we have $p\\in \\bar S.$</p>\n\n<p>Now $p\\not \\in S'$ implies there is an open $U$ with $p\\in U$ and $U\\cap S\\subset \\{p\\}.$</p>\n\n<p>But $U$ must contain $q\\in S'$ because $p\\in \\overline {S'}$, and by hypothesis, $p\\ne q.$ Then there is an open $V$ with $q\\in V$, and $p \\not \\in V$, because $X$ is $T_1.$ The open set $U\\cap V$ contains $q\\in S'$, so there exists $r\\in U\\cap V\\cap S$ with $r\\ne q.$ BUT then $p\\ne r\\in S\\cap U,$ contrary to $U\\cap S \\subset \\{p\\}$.</p>\n\n<p>As Opn Ball has already pointed out, the property does not hold for the coarse topology on $X$ if $X$ has at least 2 points: For $S=\\{p\\}\\subset X$ we have $S'=X$ \\ $\\{p\\}$ and $\\overline {S'}=X\\ne S'.$</p>\n" }, { "answer_id": 2428105, "score": 3, "text": "<p>If $X$ is not $T_1$, it might not work as the following example shows. $X=\\{a,b,c\\}$ and $\\tau=\\{\\emptyset, \\{a,b\\},X\\}$. If we take $A=\\{a\\}$ then $A'=\\{b,c\\}$ which is not closed since its complement $\\{a\\}$ is not in $\\tau$. </p>\n" }, { "answer_id": 2116863, "score": 6, "text": "<p>Try any set with cardinality at least <span class=\"math-container\">$2$</span>, equipped with the indiscrete topology.</p>\n<p>In detail, let <span class=\"math-container\">$X=\\{a,b\\}$</span> with topology <span class=\"math-container\">$\\tau=\\{\\emptyset,X\\}$</span>. If we take <span class=\"math-container\">$A=\\{a\\}$</span>, then <span class=\"math-container\">$b$</span> is a limit point of <span class=\"math-container\">$A$</span>. But <span class=\"math-container\">$a$</span> is not a limit point of <span class=\"math-container\">$A$</span> since <span class=\"math-container\">$a$</span> is an isolated point in <span class=\"math-container\">$A$</span>. 
(A nbhd of <span class=\"math-container\">$a$</span> cannot contain a point of <span class=\"math-container\">$A$</span> different from <span class=\"math-container\">$a$</span>, as there are no other points.) This shows that the derived set is <span class=\"math-container\">$A'=\\{b\\}$</span>, which is not closed in <span class=\"math-container\">$X$</span>.</p>\n" }, { "answer_id": 2117002, "score": 5, "text": "<p>If $X$ is a $T_1$ space then $S'$ is closed. </p>\n\n<p>Suppose by contradiction that $p\\in \\overline {S'}$ \\ $S'.$ Since $S'\\subset \\bar S$ we have $p\\in \\bar S.$</p>\n\n<p>Now $p\\not \\in S'$ implies there is an open $U$ with $p\\in U$ and $U\\cap S\\subset \\{p\\}.$</p>\n\n<p>But $U$ must contain $q\\in S'$ because $p\\in \\overline {S'}$, and by hypothesis, $p\\ne q.$ Then there is an open $V$ with $q\\in V$, and $p \\not \\in V$, because $X$ is $T_1.$ The open set $U\\cap V$ contains $q\\in S'$, so there exists $r\\in U\\cap V\\cap S$ with $r\\ne q.$ BUT then $p\\ne r\\in S\\cap U,$ contrary to $U\\cap S \\subset \\{p\\}$.</p>\n\n<p>As Opn Ball has already pointed out, the property does not hold for the coarse topology on $X$ if $X$ has at least 2 points: For $S=\\{p\\}\\subset X$ we have $S'=X$ \\ $\\{p\\}$ and $\\overline {S'}=X\\ne S'.$</p>\n" }, { "answer_id": 2428105, "score": 3, "text": "<p>If $X$ is not $T_1$, it might not work as the following example shows. $X=\\{a,b,c\\}$ and $\\tau=\\{\\emptyset, \\{a,b\\},X\\}$. If we take $A=\\{a\\}$ then $A'=\\{b,c\\}$ which is not closed since its complement $\\{a\\}$ is not in $\\tau$. </p>\n" }, { "answer_id": 2116863, "score": 6, "text": "<p>Try any set with cardinality at least <span class=\"math-container\">$2$</span>, equipped with the indiscrete topology.</p>\n<p>In detail, let <span class=\"math-container\">$X=\\{a,b\\}$</span> with topology <span class=\"math-container\">$\\tau=\\{\\emptyset,X\\}$</span>. 
If we take <span class=\"math-container\">$A=\\{a\\}$</span>, then <span class=\"math-container\">$b$</span> is a limit point of <span class=\"math-container\">$A$</span>. But <span class=\"math-container\">$a$</span> is not a limit point of <span class=\"math-container\">$A$</span> since <span class=\"math-container\">$a$</span> is an isolated point in <span class=\"math-container\">$A$</span>. (A nbhd of <span class=\"math-container\">$a$</span> cannot contain a point of <span class=\"math-container\">$A$</span> different from <span class=\"math-container\">$a$</span>, as there are no other points.) This shows that the derived set is <span class=\"math-container\">$A'=\\{b\\}$</span>, which is not closed in <span class=\"math-container\">$X$</span>.</p>\n" }, { "answer_id": 2117002, "score": 5, "text": "<p>If $X$ is a $T_1$ space then $S'$ is closed. </p>\n\n<p>Suppose by contradiction that $p\\in \\overline {S'}$ \\ $S'.$ Since $S'\\subset \\bar S$ we have $p\\in \\bar S.$</p>\n\n<p>Now $p\\not \\in S'$ implies there is an open $U$ with $p\\in U$ and $U\\cap S\\subset \\{p\\}.$</p>\n\n<p>But $U$ must contain $q\\in S'$ because $p\\in \\overline {S'}$, and by hypothesis, $p\\ne q.$ Then there is an open $V$ with $q\\in V$, and $p \\not \\in V$, because $X$ is $T_1.$ The open set $U\\cap V$ contains $q\\in S'$, so there exists $r\\in U\\cap V\\cap S$ with $r\\ne q.$ BUT then $p\\ne r\\in S\\cap U,$ contrary to $U\\cap S \\subset \\{p\\}$.</p>\n\n<p>As Opn Ball has already pointed out, the property does not hold for the coarse topology on $X$ if $X$ has at least 2 points: For $S=\\{p\\}\\subset X$ we have $S'=X$ \\ $\\{p\\}$ and $\\overline {S'}=X\\ne S'.$</p>\n" }, { "answer_id": 2428105, "score": 3, "text": "<p>If $X$ is not $T_1$, it might not work as the following example shows. $X=\\{a,b,c\\}$ and $\\tau=\\{\\emptyset, \\{a,b\\},X\\}$. If we take $A=\\{a\\}$ then $A'=\\{b,c\\}$ which is not closed since its complement $\\{a\\}$ is not in $\\tau$. 
</p>\n" }, { "answer_id": 2116863, "score": 6, "text": "<p>Try any set with cardinality at least <span class=\"math-container\">$2$</span>, equipped with the indiscrete topology.</p>\n<p>In detail, let <span class=\"math-container\">$X=\\{a,b\\}$</span> with topology <span class=\"math-container\">$\\tau=\\{\\emptyset,X\\}$</span>. If we take <span class=\"math-container\">$A=\\{a\\}$</span>, then <span class=\"math-container\">$b$</span> is a limit point of <span class=\"math-container\">$A$</span>. But <span class=\"math-container\">$a$</span> is not a limit point of <span class=\"math-container\">$A$</span> since <span class=\"math-container\">$a$</span> is an isolated point in <span class=\"math-container\">$A$</span>. (A nbhd of <span class=\"math-container\">$a$</span> cannot contain a point of <span class=\"math-container\">$A$</span> different from <span class=\"math-container\">$a$</span>, as there are no other points.) This shows that the derived set is <span class=\"math-container\">$A'=\\{b\\}$</span>, which is not closed in <span class=\"math-container\">$X$</span>.</p>\n" }, { "answer_id": 2117002, "score": 5, "text": "<p>If $X$ is a $T_1$ space then $S'$ is closed. 
</p>\n\n<p>Suppose by contradiction that $p\in \overline {S'}$ \ $S'.$ Since $S'\subset \bar S$ we have $p\in \bar S.$</p>\n\n<p>Now $p\not \in S'$ implies there is an open $U$ with $p\in U$ and $U\cap S\subset \{p\}.$</p>\n\n<p>But $U$ must contain $q\in S'$ because $p\in \overline {S'}$, and by hypothesis, $p\ne q.$ Then there is an open $V$ with $q\in V$, and $p \not \in V$, because $X$ is $T_1.$ The open set $U\cap V$ contains $q\in S'$, so there exists $r\in U\cap V\cap S$ with $r\ne q.$ BUT then $p\ne r\in S\cap U,$ contrary to $U\cap S \subset \{p\}$.</p>\n\n<p>As Opn Ball has already pointed out, the property does not hold for the coarse topology on $X$ if $X$ has at least 2 points: For $S=\{p\}\subset X$ we have $S'=X$ \ $\{p\}$ and $\overline {S'}=X\ne S'.$</p>\n" }, { "answer_id": 2428105, "score": 3, "text": "<p>If $X$ is not $T_1$, it might not work as the following example shows. $X=\{a,b,c\}$ and $\tau=\{\emptyset, \{a,b\},X\}$. If we take $A=\{a\}$ then $A'=\{b,c\}$ which is not closed since its complement $\{a\}$ is not in $\tau$. </p>\n" }
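The two counterexamples in the answers (the indiscrete topology on $\{a,b\}$ and the non-$T_1$ space $\{a,b,c\}$ with $\tau=\{\emptyset,\{a,b\},X\}$) are small enough to verify by brute force. A minimal Python sketch, where `derived_set` is a helper name introduced here for illustration (not from the answers):

```python
# Brute-force check of the finite counterexamples from the answers above.

def derived_set(X, opens, A):
    """Limit points of A: points x such that every open set containing x
    meets A in some point other than x."""
    return {x for x in X
            if all((U & A) - {x} for U in opens if x in U)}

# Indiscrete topology on X = {a, b}: only the empty set and X are open.
X = frozenset("ab")
opens = [frozenset(), X]
A = frozenset("a")
Aprime = derived_set(X, opens, A)   # {'b'}: b is a limit point, a is not
closed = (X - Aprime) in opens      # False: the complement {a} is not open

# Non-T_1 example: X = {a, b, c}, tau = {empty, {a, b}, X}, A = {a}.
Y = frozenset("abc")
opens2 = [frozenset(), frozenset("ab"), Y]
Bprime = derived_set(Y, opens2, frozenset("a"))   # {'b', 'c'}
closed2 = (Y - Bprime) in opens2                  # False: {a} is not open
```

In both cases the derived set fails to be closed because its complement is not a member of the topology, matching the answers above.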
Question: Set of limit points is closed in a general topological space Answer 1: <p>Try any set with cardinality at least <span class="math-container">$2$</span>, equipped with the indiscrete topology.</p> <p>In detail, let <span class="math-container">$X=\{a,b\}$</span> with topology <span class="math-container">$\tau=\{\emptyset,X\}$</span>. If we take <span class="math-container">$A=\{a\}$</span>, then <span class="math-container">$b$</span> is a limit point of <span class="math-container">$A$</span>. But <span class="math-container">$a$</span> is not a limit point of <span class="math-container">$A$</span> since <span class="math-container">$a$</span> is an isolated point in <span class="math-container">$A$</span>. (A nbhd of <span class="math-container">$a$</span> cannot contain a point of <span class="math-container">$A$</span> different from <span class="math-container">$a$</span>, as there are no other points.) This shows that the derived set is <span class="math-container">$A'=\{b\}$</span>, which is not closed in <span class="math-container">$X$</span>.</p> Answer 2: <p>If $X$ is a $T_1$ space then $S'$ is closed. </p> <p>Suppose by contradiction that $p\in \overline {S'}$ \ $S'.$ Since $S'\subset \bar S$ we have $p\in \bar S.$</p> <p>Now $p\not \in S'$ implies there is an open $U$ with $p\in U$ and $U\cap S\subset \{p\}.$</p> <p>But $U$ must contain $q\in S'$ because $p\in \overline {S'}$, and by hypothesis, $p\ne q.$ Then there is an open $V$ with $q\in V$, and $p \not \in V$, because $X$ is $T_1.$ The open set $U\cap V$ contains $q\in S'$, so there exists $r\in U\cap V\cap S$ with $r\ne q.$ BUT then $p\ne r\in S\cap U,$ contrary to $U\cap S \subset \{p\}$.</p> <p>As Opn Ball has already pointed out, the property does not hold for the coarse topology on $X$ if $X$ has at least 2 points: For $S=\{p\}\subset X$ we have $S'=X$ \ $\{p\}$ and $\overline {S'}=X\ne S'.$</p> Answer 3: <p>If $X$ is not $T_1$, it might not work as the following example shows. 
$X=\{a,b,c\}$ and $\tau=\{\emptyset, \{a,b\},X\}$. If we take $A=\{a\}$ then $A'=\{b,c\}$ which is not closed since its complement $\{a\}$ is not in $\tau$. </p> Answer 4: <p>Try any set with cardinality at least <span class="math-container">$2$</span>, equipped with the indiscrete topology.</p> <p>In detail, let <span class="math-container">$X=\{a,b\}$</span> with topology <span class="math-container">$\tau=\{\emptyset,X\}$</span>. If we take <span class="math-container">$A=\{a\}$</span>, then <span class="math-container">$b$</span> is a limit point of <span class="math-container">$A$</span>. But <span class="math-container">$a$</span> is not a limit point of <span class="math-container">$A$</span> since <span class="math-container">$a$</span> is an isolated point in <span class="math-container">$A$</span>. (A nbhd of <span class="math-container">$a$</span> cannot contain a point of <span class="math-container">$A$</span> different from <span class="math-container">$a$</span>, as there are no other points.) This shows that the derived set is <span class="math-container">$A'=\{b\}$</span>, which is not closed in <span class="math-container">$X$</span>.</p> Answer 5: <p>If $X$ is a $T_1$ space then $S'$ is closed. 
</p> <p>Suppose by contradiction that $p\in \overline {S'}$ \ $S'.$ Since $S'\subset \bar S$ we have $p\in \bar S.$</p> <p>Now $p\not \in S'$ implies there is an open $U$ with $p\in U$ and $U\cap S\subset \{p\}.$</p> <p>But $U$ must contain $q\in S'$ because $p\in \overline {S'}$, and by hypothesis, $p\ne q.$ Then there is an open $V$ with $q\in V$, and $p \not \in V$, because $X$ is $T_1.$ The open set $U\cap V$ contains $q\in S'$, so there exists $r\in U\cap V\cap S$ with $r\ne q.$ BUT then $p\ne r\in S\cap U,$ contrary to $U\cap S \subset \{p\}$.</p> <p>As Opn Ball has already pointed out, the property does not hold for the coarse topology on $X$ if $X$ has at least 2 points: For $S=\{p\}\subset X$ we have $S'=X$ \ $\{p\}$ and $\overline {S'}=X\ne S'.$</p> Answer 6: <p>If $X$ is not $T_1$, it might not work as the following example shows. $X=\{a,b,c\}$ and $\tau=\{\emptyset, \{a,b\},X\}$. If we take $A=\{a\}$ then $A'=\{b,c\}$ which is not closed since its complement $\{a\}$ is not in $\tau$. </p> Answer 7: <p>Try any set with cardinality at least <span class="math-container">$2$</span>, equipped with the indiscrete topology.</p> <p>In detail, let <span class="math-container">$X=\{a,b\}$</span> with topology <span class="math-container">$\tau=\{\emptyset,X\}$</span>. If we take <span class="math-container">$A=\{a\}$</span>, then <span class="math-container">$b$</span> is a limit point of <span class="math-container">$A$</span>. But <span class="math-container">$a$</span> is not a limit point of <span class="math-container">$A$</span> since <span class="math-container">$a$</span> is an isolated point in <span class="math-container">$A$</span>. (A nbhd of <span class="math-container">$a$</span> cannot contain a point of <span class="math-container">$A$</span> different from <span class="math-container">$a$</span>, as there are no other points.) 
This shows that the derived set is <span class="math-container">$A'=\{b\}$</span>, which is not closed in <span class="math-container">$X$</span>.</p> Answer 8: <p>If $X$ is a $T_1$ space then $S'$ is closed. </p> <p>Suppose by contradiction that $p\in \overline {S'}$ \ $S'.$ Since $S'\subset \bar S$ we have $p\in \bar S.$</p> <p>Now $p\not \in S'$ implies there is an open $U$ with $p\in U$ and $U\cap S\subset \{p\}.$</p> <p>But $U$ must contain $q\in S'$ because $p\in \overline {S'}$, and by hypothesis, $p\ne q.$ Then there is an open $V$ with $q\in V$, and $p \not \in V$, because $X$ is $T_1.$ The open set $U\cap V$ contains $q\in S'$, so there exists $r\in U\cap V\cap S$ with $r\ne q.$ BUT then $p\ne r\in S\cap U,$ contrary to $U\cap S \subset \{p\}$.</p> <p>As Opn Ball has already pointed out, the property does not hold for the coarse topology on $X$ if $X$ has at least 2 points: For $S=\{p\}\subset X$ we have $S'=X$ \ $\{p\}$ and $\overline {S'}=X\ne S'.$</p> Answer 9: <p>If $X$ is not $T_1$, it might not work as the following example shows. $X=\{a,b,c\}$ and $\tau=\{\emptyset, \{a,b\},X\}$. If we take $A=\{a\}$ then $A'=\{b,c\}$ which is not closed since its complement $\{a\}$ is not in $\tau$. </p> Answer 10: <p>Try any set with cardinality at least <span class="math-container">$2$</span>, equipped with the indiscrete topology.</p> <p>In detail, let <span class="math-container">$X=\{a,b\}$</span> with topology <span class="math-container">$\tau=\{\emptyset,X\}$</span>. If we take <span class="math-container">$A=\{a\}$</span>, then <span class="math-container">$b$</span> is a limit point of <span class="math-container">$A$</span>. But <span class="math-container">$a$</span> is not a limit point of <span class="math-container">$A$</span> since <span class="math-container">$a$</span> is an isolated point in <span class="math-container">$A$</span>. 
(A nbhd of <span class="math-container">$a$</span> cannot contain a point of <span class="math-container">$A$</span> different from <span class="math-container">$a$</span>, as there are no other points.) This shows that the derived set is <span class="math-container">$A'=\{b\}$</span>, which is not closed in <span class="math-container">$X$</span>.</p> Answer 11: <p>If $X$ is a $T_1$ space then $S'$ is closed. </p> <p>Suppose by contradiction that $p\in \overline {S'}$ \ $S'.$ Since $S'\subset \bar S$ we have $p\in \bar S.$</p> <p>Now $p\not \in S'$ implies there is an open $U$ with $p\in U$ and $U\cap S\subset \{p\}.$</p> <p>But $U$ must contain $q\in S'$ because $p\in \overline {S'}$, and by hypothesis, $p\ne q.$ Then there is an open $V$ with $q\in V$, and $p \not \in V$, because $X$ is $T_1.$ The open set $U\cap V$ contains $q\in S'$, so there exists $r\in U\cap V\cap S$ with $r\ne q.$ BUT then $p\ne r\in S\cap U,$ contrary to $U\cap S \subset \{p\}$.</p> <p>As Opn Ball has already pointed out, the property does not hold for the coarse topology on $X$ if $X$ has at least 2 points: For $S=\{p\}\subset X$ we have $S'=X$ \ $\{p\}$ and $\overline {S'}=X\ne S'.$</p> Answer 12: <p>If $X$ is not $T_1$, it might not work as the following example shows. $X=\{a,b,c\}$ and $\tau=\{\emptyset, \{a,b\},X\}$. If we take $A=\{a\}$ then $A'=\{b,c\}$ which is not closed since its complement $\{a\}$ is not in $\tau$. </p> Answer 13: <p>Try any set with cardinality at least <span class="math-container">$2$</span>, equipped with the indiscrete topology.</p> <p>In detail, let <span class="math-container">$X=\{a,b\}$</span> with topology <span class="math-container">$\tau=\{\emptyset,X\}$</span>. If we take <span class="math-container">$A=\{a\}$</span>, then <span class="math-container">$b$</span> is a limit point of <span class="math-container">$A$</span>. 
But <span class="math-container">$a$</span> is not a limit point of <span class="math-container">$A$</span> since <span class="math-container">$a$</span> is an isolated point in <span class="math-container">$A$</span>. (A nbhd of <span class="math-container">$a$</span> cannot contain a point of <span class="math-container">$A$</span> different from <span class="math-container">$a$</span>, as there are no other points.) This shows that the derived set is <span class="math-container">$A'=\{b\}$</span>, which is not closed in <span class="math-container">$X$</span>.</p> Answer 14: <p>If $X$ is a $T_1$ space then $S'$ is closed. </p> <p>Suppose by contradiction that $p\in \overline {S'}$ \ $S'.$ Since $S'\subset \bar S$ we have $p\in \bar S.$</p> <p>Now $p\not \in S'$ implies there is an open $U$ with $p\in U$ and $U\cap S\subset \{p\}.$</p> <p>But $U$ must contain $q\in S'$ because $p\in \overline {S'}$, and by hypothesis, $p\ne q.$ Then there is an open $V$ with $q\in V$, and $p \not \in V$, because $X$ is $T_1.$ The open set $U\cap V$ contains $q\in S'$, so there exists $r\in U\cap V\cap S$ with $r\ne q.$ BUT then $p\ne r\in S\cap U,$ contrary to $U\cap S \subset \{p\}$.</p> <p>As Opn Ball has already pointed out, the property does not hold for the coarse topology on $X$ if $X$ has at least 2 points: For $S=\{p\}\subset X$ we have $S'=X$ \ $\{p\}$ and $\overline {S'}=X\ne S'.$</p> Answer 15: <p>If $X$ is not $T_1$, it might not work as the following example shows. $X=\{a,b,c\}$ and $\tau=\{\emptyset, \{a,b\},X\}$. If we take $A=\{a\}$ then $A'=\{b,c\}$ which is not closed since its complement $\{a\}$ is not in $\tau$. </p> Answer 16: <p>Try any set with cardinality at least <span class="math-container">$2$</span>, equipped with the indiscrete topology.</p> <p>In detail, let <span class="math-container">$X=\{a,b\}$</span> with topology <span class="math-container">$\tau=\{\emptyset,X\}$</span>. 
If we take <span class="math-container">$A=\{a\}$</span>, then <span class="math-container">$b$</span> is a limit point of <span class="math-container">$A$</span>. But <span class="math-container">$a$</span> is not a limit point of <span class="math-container">$A$</span> since <span class="math-container">$a$</span> is an isolated point in <span class="math-container">$A$</span>. (A nbhd of <span class="math-container">$a$</span> cannot contain a point of <span class="math-container">$A$</span> different from <span class="math-container">$a$</span>, as there are no other points.) This shows that the derived set is <span class="math-container">$A'=\{b\}$</span>, which is not closed in <span class="math-container">$X$</span>.</p> Answer 17: <p>If $X$ is a $T_1$ space then $S'$ is closed. </p> <p>Suppose by contradiction that $p\in \overline {S'}$ \ $S'.$ Since $S'\subset \bar S$ we have $p\in \bar S.$</p> <p>Now $p\not \in S'$ implies there is an open $U$ with $p\in U$ and $U\cap S\subset \{p\}.$</p> <p>But $U$ must contain $q\in S'$ because $p\in \overline {S'}$, and by hypothesis, $p\ne q.$ Then there is an open $V$ with $q\in V$, and $p \not \in V$, because $X$ is $T_1.$ The open set $U\cap V$ contains $q\in S'$, so there exists $r\in U\cap V\cap S$ with $r\ne q.$ BUT then $p\ne r\in S\cap U,$ contrary to $U\cap S \subset \{p\}$.</p> <p>As Opn Ball has already pointed out, the property does not hold for the coarse topology on $X$ if $X$ has at least 2 points: For $S=\{p\}\subset X$ we have $S'=X$ \ $\{p\}$ and $\overline {S'}=X\ne S'.$</p> Answer 18: <p>If $X$ is not $T_1$, it might not work as the following example shows. $X=\{a,b,c\}$ and $\tau=\{\emptyset, \{a,b\},X\}$. If we take $A=\{a\}$ then $A'=\{b,c\}$ which is not closed since its complement $\{a\}$ is not in $\tau$. 
</p> Answer 19: <p>Try any set with cardinality at least <span class="math-container">$2$</span>, equipped with the indiscrete topology.</p> <p>In detail, let <span class="math-container">$X=\{a,b\}$</span> with topology <span class="math-container">$\tau=\{\emptyset,X\}$</span>. If we take <span class="math-container">$A=\{a\}$</span>, then <span class="math-container">$b$</span> is a limit point of <span class="math-container">$A$</span>. But <span class="math-container">$a$</span> is not a limit point of <span class="math-container">$A$</span> since <span class="math-container">$a$</span> is an isolated point in <span class="math-container">$A$</span>. (A nbhd of <span class="math-container">$a$</span> cannot contain a point of <span class="math-container">$A$</span> different from <span class="math-container">$a$</span>, as there are no other points.) This shows that the derived set is <span class="math-container">$A'=\{b\}$</span>, which is not closed in <span class="math-container">$X$</span>.</p> Answer 20: <p>If $X$ is a $T_1$ space then $S'$ is closed. </p> <p>Suppose by contradiction that $p\in \overline {S'}$ \ $S'.$ Since $S'\subset \bar S$ we have $p\in \bar S.$</p> <p>Now $p\not \in S'$ implies there is an open $U$ with $p\in U$ and $U\cap S\subset \{p\}.$</p> <p>But $U$ must contain $q\in S'$ because $p\in \overline {S'}$, and by hypothesis, $p\ne q.$ Then there is an open $V$ with $q\in V$, and $p \not \in V$, because $X$ is $T_1.$ The open set $U\cap V$ contains $q\in S'$, so there exists $r\in U\cap V\cap S$ with $r\ne q.$ BUT then $p\ne r\in S\cap U,$ contrary to $U\cap S \subset \{p\}$.</p> <p>As Opn Ball has already pointed out, the property does not hold for the coarse topology on $X$ if $X$ has at least 2 points: For $S=\{p\}\subset X$ we have $S'=X$ \ $\{p\}$ and $\overline {S'}=X\ne S'.$</p> Answer 21: <p>If $X$ is not $T_1$, it might not work as the following example shows. $X=\{a,b,c\}$ and $\tau=\{\emptyset, \{a,b\},X\}$. 
If we take $A=\{a\}$ then $A'=\{b,c\}$ which is not closed since its complement $\{a\}$ is not in $\tau$. </p> Answer 22: <p>Try any set with cardinality at least <span class="math-container">$2$</span>, equipped with the indiscrete topology.</p> <p>In detail, let <span class="math-container">$X=\{a,b\}$</span> with topology <span class="math-container">$\tau=\{\emptyset,X\}$</span>. If we take <span class="math-container">$A=\{a\}$</span>, then <span class="math-container">$b$</span> is a limit point of <span class="math-container">$A$</span>. But <span class="math-container">$a$</span> is not a limit point of <span class="math-container">$A$</span> since <span class="math-container">$a$</span> is an isolated point in <span class="math-container">$A$</span>. (A nbhd of <span class="math-container">$a$</span> cannot contain a point of <span class="math-container">$A$</span> different from <span class="math-container">$a$</span>, as there are no other points.) This shows that the derived set is <span class="math-container">$A'=\{b\}$</span>, which is not closed in <span class="math-container">$X$</span>.</p> Answer 23: <p>If $X$ is a $T_1$ space then $S'$ is closed. 
</p> <p>Suppose by contradiction that $p\in \overline {S'}$ \ $S'.$ Since $S'\subset \bar S$ we have $p\in \bar S.$</p> <p>Now $p\not \in S'$ implies there is an open $U$ with $p\in U$ and $U\cap S\subset \{p\}.$</p> <p>But $U$ must contain $q\in S'$ because $p\in \overline {S'}$, and by hypothesis, $p\ne q.$ Then there is an open $V$ with $q\in V$, and $p \not \in V$, because $X$ is $T_1.$ The open set $U\cap V$ contains $q\in S'$, so there exists $r\in U\cap V\cap S$ with $r\ne q.$ BUT then $p\ne r\in S\cap U,$ contrary to $U\cap S \subset \{p\}$.</p> <p>As Opn Ball has already pointed out, the property does not hold for the coarse topology on $X$ if $X$ has at least 2 points: For $S=\{p\}\subset X$ we have $S'=X$ \ $\{p\}$ and $\overline {S'}=X\ne S'.$</p> Answer 24: <p>If $X$ is not $T_1$, it might not work as the following example shows. $X=\{a,b,c\}$ and $\tau=\{\emptyset, \{a,b\},X\}$. If we take $A=\{a\}$ then $A'=\{b,c\}$ which is not closed since its complement $\{a\}$ is not in $\tau$. </p> Answer 25: <p>Try any set with cardinality at least <span class="math-container">$2$</span>, equipped with the indiscrete topology.</p> <p>In detail, let <span class="math-container">$X=\{a,b\}$</span> with topology <span class="math-container">$\tau=\{\emptyset,X\}$</span>. If we take <span class="math-container">$A=\{a\}$</span>, then <span class="math-container">$b$</span> is a limit point of <span class="math-container">$A$</span>. But <span class="math-container">$a$</span> is not a limit point of <span class="math-container">$A$</span> since <span class="math-container">$a$</span> is an isolated point in <span class="math-container">$A$</span>. (A nbhd of <span class="math-container">$a$</span> cannot contain a point of <span class="math-container">$A$</span> different from <span class="math-container">$a$</span>, as there are no other points.) 
This shows that the derived set is <span class="math-container">$A'=\{b\}$</span>, which is not closed in <span class="math-container">$X$</span>.</p>
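The three-point counterexample above is small enough to verify by brute force. Here is a short Python sketch (the `derived_set` helper is my own illustration, not taken from the answers) that computes the derived set of $A=\{a\}$ in $X=\{a,b,c\}$ with $\tau=\{\emptyset,\{a,b\},X\}$ and confirms that $A'=\{b,c\}$ is not closed:

```python
# Mechanical check of the finite counterexample.
X = frozenset("abc")
tau = [frozenset(), frozenset("ab"), X]   # the topology {∅, {a,b}, X}

def derived_set(A):
    """Limit points of A: points x such that every open U containing x
    meets A in a point other than x."""
    return {x for x in X
            if all((U - {x}) & A for U in tau if x in U)}

A = frozenset("a")
print(sorted(derived_set(A)))          # ['b', 'c']
print((X - derived_set(A)) in tau)     # False: the complement of A' is not open
```

Replacing `tau` with the indiscrete topology `[frozenset(), X]` reproduces the other counterexample in the same way.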
Euler Limit of $\frac{x^x-x}{1-x+\ln(x)}$ Without L'Hopital
2,116,999
8
https://math.stackexchange.com/questions/2116999/euler-limit-of-fracxx-x1-x-lnx-without-lhopital
<p>On slide $10$ here: <a href="http://math.cmu.edu/~bwsulliv/basel-problem.pdf">http://math.cmu.edu/~bwsulliv/basel-problem.pdf</a>, it is stated that Euler may have used L'Hopital's Rule to show that the limit $$\lim_{x\to 1}\dfrac{x^x-x}{1-x+\ln(x)}=-2$$ I was able to reproduce this easily with two applications of L'Hopital's Rule (shown at the bottom of this post).</p> <p>I've attempted and failed to show this limit using various methods including substitutions, Squeeze Theorem, splitting it up into $\dfrac{x^x}{1-x+\ln(x)}-\dfrac{x}{1-x+\ln(x)}$ (doesn't seem to work because it's just indeterminate), and series expansion representations (sort of).</p> <p>Substitutions tried and problems with them:</p> <ul> <li>Let $y=\ln(x)\implies x=e^y$ and $x^x=e^{ye^y}$, but I still get $\frac{0}{0}$ and couldn't find a way to transform it into a known limit.</li> <li>Let $y=x^x\implies \ln(y)=x\ln(x)$, but don't know how to solve for $x$ here in terms of $y$.</li> <li>Let $y=e^x\implies \ln(y)=x\implies x^x=(\ln(y))^{(\ln(y))}$. This limit did not seem to be easier.</li> </ul> <p>For Squeeze Theorem, I simply couldn't find suitable comparisons, mostly since I could not remove the $\ln(x)$ in those comparisons. Even removing the $x^x$ doesn't seem like it would help since $\dfrac{x^x-x}{1-x+\ln(x)}$ is strictly less than $\dfrac{-x}{1-x+\ln(x)}$. So, even if the new limit went to $-2$, which it doesn't, we wouldn't have strictly shown the original limit to be $-2$. 
As for series expansions, I have no idea how to get a good series expansion of that (I tried Taylor at $x=0,1,2$, but it was very messy, still gave $\frac{0}{0}$, and didn't give any hints).</p> <p><strong>With L'Hopital</strong> $$\lim_{x\to 1}\dfrac{x^x-x}{1-x+\ln(x)}\to\dfrac{0}{0}\implies \lim_{x\to 1}\dfrac{x^x-x}{1-x+\ln(x)}=\lim_{x\to 1}\dfrac{x^x(\ln(x)+1)-1}{-1+\frac{1}{x}}=\dfrac{0}{0}$$ $$\implies \lim_{x\to 1}\dfrac{x^x(\ln(x)+1)-1}{-1+\frac{1}{x}}=\lim_{x\to 1}\dfrac{x^{x-1}+x^x(\ln(x)+1)\ln(x)+x^x(\ln(x)+1)}{-\frac{1}{x^2}}$$ $$=-(1^2)\cdot\left[1^{1-1}+1^1(\ln(1)+1)\ln(1)+1^1(\ln(1)+1)\right]=(-1)\cdot\left[1+0+1\right]=\boxed{-2}$$</p> <p>I appreciate any help or hints on reducing this to a bunch of known/simpler limits.</p>
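For what it's worth, the claimed value is easy to sanity-check numerically before proving it. The sketch below (a float check, not a proof; `f` is just a throwaway name) evaluates the quotient on both sides of $x=1$:

```python
# Numerical sanity check (not a proof): the quotient should drift toward -2
# as x -> 1 from either side.
import math

def f(x):
    return (x**x - x) / (1 - x + math.log(x))

for h in (1e-1, 1e-2, 1e-3):
    print(f(1 + h), f(1 - h))   # both values approach -2
```

Taking `h` much smaller than about `1e-6` starts to lose accuracy, since both the numerator and the denominator are differences of nearly equal floating-point quantities.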
[ { "answer_id": 2117014, "score": 9, "text": "<p>You mention allowing series expansions. This problem is trivial with them.\n$$\\lim_{x\\to 1}\\dfrac{x^x-x}{1-x+\\ln(x)}$$\nSo that I can use McLaurin series I already know, I am going to enforce the substitution $u = x-1$\n$$=\\lim_{u\\to 0}\\dfrac{(u+1)^{u+1}-u-1}{\\ln(u+1)-u}$$\n$$=\\lim_{u\\to 0}\\dfrac{[1+u+u^2+O(u^3)]-u-1}{[u-\\frac{u^2}{2}+O(u^3)]-u}$$\n$$=\\lim_{u\\to 0}\\dfrac{u^2+O(u^3)}{-\\frac{u^2}{2}+O(u^3)}$$\n$$=\\lim_{u\\to 0}\\dfrac{u^2}{-\\frac{u^2}{2}}$$\n$$=\\color{red}{-2}$$</p>\n" }, { "answer_id": 2117033, "score": 6, "text": "<p>$y=\\ln(x)$ actually works </p>\n\n<p>$$\\lim_{x\\to 1}\\dfrac{x^x-x}{1-x+\\ln(x)}=\\lim_{y \\to 0}\\dfrac{e^{ye^y}-e^y}{1-e^{y}+y}=\\lim_{y \\to 0}e^{y}\\dfrac{e^{y(e^y-1)}-1}{1-e^{y}+y}=\\lim_{y \\to 0}\\dfrac{e^{y(e^y-1)}-1}{1-e^{y}+y}$$</p>\n\n<p>Now, by the definition of the derivative\n$$\\lim_{z \\to 0}\\frac{e^z-1}{z}=1$$</p>\n\n<p>Replacing $z=y(e^y-1)$ you get\n$$\\lim_{y \\to 0}\\frac{e^{y(e^y-1)}-1}{y(e^y-1)}=1$$</p>\n\n<p>Therefore\n$$\\lim_{y \\to 0}\\frac{e^{y(e^y-1)}-1}{1+y-e^y}=\\lim_{y \\to 0}\\frac{e^{y(e^y-1)}-1}{y(e^y-1)}\\frac{y(e^y-1)}{1+y-e^y}=\\lim_{y \\to 0}\\frac{y(e^y-1)}{1+y-e^y}$$</p>\n\n<p>This last limit is much easier and can be calculated easily, either with power series, or by calculating\n$$\n\\lim_{y \\to 0}\\frac{y(e^y-1)}{1+y-e^y}=\\lim_{y \\to 0}\\frac{y(e^y-1)}{y^2}\\frac{y^2}{1+y-e^y}=\\lim_{y \\to 0}\\frac{e^y-1}{y}\\frac{y^2}{1+y-e^y}=\\lim_{y \\to 0}\\frac{y^2}{1+y-e^y}\n$$\nwhich is pretty standard.</p>\n" }, { "answer_id": 2117045, "score": 4, "text": "<p>You can collect a factor $x$ from the numerator and then do $x=t+1$, so the limit becomes\n$$\n\\lim_{t\\to0}(1+t)\\frac{(t+1)^t-1}{\\log(1+t)-t}\n$$\n(surely Euler didn't use “ln”).</p>\n\n<p>The factor $1+t$ contributes $1$, so we can concentrate on the fraction.\nThe numerator is $e^{t\\log(t+1)}-1$ and can be expanded 
like\n$$\n1+t\\log(1+t)+o(t\\log(1+t))-1=t^2+o(t^2)\n$$\nwhereas the denominator is\n$$\n\\log(1+t)-t=t-\\frac{t^2}{2}+o(t^2)-t=-\\frac{t^2}{2}+o(t^2)\n$$\nso the limit is $-2$.</p>\n" }, { "answer_id": 2117100, "score": 0, "text": "<p>This is a footnote about the method used in some of the answers already given.</p>\n\n<p>The functions $f(x)=x^x-x$ and $g(x)=1-x+\\ln x$ are infinitely differentiable for $x&gt;0$. So $f'''$ and $g'''$ are continuous for $x&gt;0.$</p>\n\n<p>So $A=\\sup \\{|f'''(x)|:x\\in [1/2,2]\\}&lt;\\infty$ and $B=\\sup \\{|g'''(x)|:x\\in [1/2,2]\\}&lt;\\infty.$ </p>\n\n<p>Let $M=\\max (A,B).$ </p>\n\n<p>For $1+y\\in [1/2,2]$ we have $$f(1+y)=f(1)+yf'(1)+y^2f''(1)/2!+y^3f'''(c)/3!$$ for some $c$ such that $|c|\\leq |y|.$ So $$|y^3f'''(c)/3!|\\leq |y|^3M/3!.$$ Similarly for $g.$ The key is that $ M&lt;\\infty$ so we have $$f(1+y)=f(1)+yf'(1)+y^2f''(1)/2+O(y^3)$$ as $y\\to 0.$ </p>\n\n<p>The choice of the interval $[1/2,2]$ is arbitrary. We could instead take any $[a,b]$ with $0&lt;a&lt;1&lt;b&lt;\\infty.$</p>\n" }
is\n$$\n\\log(1+t)-t=t-\\frac{t^2}{2}+o(t^2)-t=-\\frac{t^2}{2}+o(t^2)\n$$\nso the limit is $-2$.</p>\n" }, { "answer_id": 2117100, "score": 0, "text": "<p>This is a footnote about the method used in some of the answers already given.</p>\n\n<p>The functions $f(x)=x^x-x$ and $g(x)=1-x+\\ln x$ are infinitely differentiable for $x&gt;0$. So $f'''$ and $g'''$ are continuous for $x&gt;0.$</p>\n\n<p>So $A=\\sup \\{|f'''(x)|:x\\in [1/2,2]\\}&lt;\\infty$ and $B=\\sup \\{|g'''(x)|:x\\in [1/2,2]\\}&lt;\\infty.$ </p>\n\n<p>Let $M=\\max (A,B).$ </p>\n\n<p>For $1+y\\in [1/2,2]$ we have $$f(1+y)=f(1)+yf'(1)+y^2f''(1)/2!+y^3f'''(c)/3!$$ for some $c$ such that $|c|\\leq |y|.$ So $$|y^3f'''(c)/3!|\\leq |y|^3M/3!.$$ Similarly for $g.$ The key is that $ M&lt;\\infty$ so we have $$f(1+y)=f(1)+yf'(1)+y^2f''(1)/2+O(y^3)$$ as $y\\to 0.$ </p>\n\n<p>The choice of the interval $[1/2,2]$ is arbitrary. We could instead take any $[a,b]$ with $0&lt;a&lt;1&lt;b&lt;\\infty.$</p>\n" }, { "answer_id": 2117014, "score": 9, "text": "<p>You mention allowing series expansions. 
This problem is trivial with them.\n$$\\lim_{x\\to 1}\\dfrac{x^x-x}{1-x+\\ln(x)}$$\nSo that I can use McLaurin series I already know, I am going to enforce the substitution $u = x-1$\n$$=\\lim_{u\\to 0}\\dfrac{(u+1)^{u+1}-u-1}{\\ln(u+1)-u}$$\n$$=\\lim_{u\\to 0}\\dfrac{[1+u+u^2+O(u^3)]-u-1}{[u-\\frac{u^2}{2}+O(u^3)]-u}$$\n$$=\\lim_{u\\to 0}\\dfrac{u^2+O(u^3)}{-\\frac{u^2}{2}+O(u^3)}$$\n$$=\\lim_{u\\to 0}\\dfrac{u^2}{-\\frac{u^2}{2}}$$\n$$=\\color{red}{-2}$$</p>\n" }, { "answer_id": 2117033, "score": 6, "text": "<p>$y=\\ln(x)$ actually works </p>\n\n<p>$$\\lim_{x\\to 1}\\dfrac{x^x-x}{1-x+\\ln(x)}=\\lim_{y \\to 0}\\dfrac{e^{ye^y}-e^y}{1-e^{y}+y}=\\lim_{y \\to 0}e^{y}\\dfrac{e^{y(e^y-1)}-1}{1-e^{y}+y}=\\lim_{y \\to 0}\\dfrac{e^{y(e^y-1)}-1}{1-e^{y}+y}$$</p>\n\n<p>Now, by the definition of the derivative\n$$\\lim_{z \\to 0}\\frac{e^z-1}{z}=1$$</p>\n\n<p>Replacing $z=y(e^y-1)$ you get\n$$\\lim_{y \\to 0}\\frac{e^{y(e^y-1)}-1}{y(e^y-1)}=1$$</p>\n\n<p>Therefore\n$$\\lim_{y \\to 0}\\frac{e^{y(e^y-1)}-1}{1+y-e^y}=\\lim_{y \\to 0}\\frac{e^{y(e^y-1)}-1}{y(e^y-1)}\\frac{y(e^y-1)}{1+y-e^y}=\\lim_{y \\to 0}\\frac{y(e^y-1)}{1+y-e^y}$$</p>\n\n<p>This last limit is much easier and can be calculated easily, either with power series, or by calculating\n$$\n\\lim_{y \\to 0}\\frac{y(e^y-1)}{1+y-e^y}=\\lim_{y \\to 0}\\frac{y(e^y-1)}{y^2}\\frac{y^2}{1+y-e^y}=\\lim_{y \\to 0}\\frac{e^y-1}{y}\\frac{y^2}{1+y-e^y}=\\lim_{y \\to 0}\\frac{y^2}{1+y-e^y}\n$$\nwhich is pretty standard.</p>\n" }, { "answer_id": 2117045, "score": 4, "text": "<p>You can collect a factor $x$ from the numerator and then do $x=t+1$, so the limit becomes\n$$\n\\lim_{t\\to0}(1+t)\\frac{(t+1)^t-1}{\\log(1+t)-t}\n$$\n(surely Euler didn't use “ln”).</p>\n\n<p>The factor $1+t$ contributes $1$, so we can concentrate on the fraction.\nThe numerator is $e^{t\\log(t+1)}-1$ and can be expanded like\n$$\n1+t\\log(1+t)+o(t\\log(1+t))-1=t^2+o(t^2)\n$$\nwhereas the denominator 
is\n$$\n\\log(1+t)-t=t-\\frac{t^2}{2}+o(t^2)-t=-\\frac{t^2}{2}+o(t^2)\n$$\nso the limit is $-2$.</p>\n" }, { "answer_id": 2117100, "score": 0, "text": "<p>This is a footnote about the method used in some of the answers already given.</p>\n\n<p>The functions $f(x)=x^x-x$ and $g(x)=1-x+\\ln x$ are infinitely differentiable for $x&gt;0$. So $f'''$ and $g'''$ are continuous for $x&gt;0.$</p>\n\n<p>So $A=\\sup \\{|f'''(x)|:x\\in [1/2,2]\\}&lt;\\infty$ and $B=\\sup \\{|g'''(x)|:x\\in [1/2,2]\\}&lt;\\infty.$ </p>\n\n<p>Let $M=\\max (A,B).$ </p>\n\n<p>For $1+y\\in [1/2,2]$ we have $$f(1+y)=f(1)+yf'(1)+y^2f''(1)/2!+y^3f'''(c)/3!$$ for some $c$ such that $|c|\\leq |y|.$ So $$|y^3f'''(c)/3!|\\leq |y|^3M/3!.$$ Similarly for $g.$ The key is that $ M&lt;\\infty$ so we have $$f(1+y)=f(1)+yf'(1)+y^2f''(1)/2+O(y^3)$$ as $y\\to 0.$ </p>\n\n<p>The choice of the interval $[1/2,2]$ is arbitrary. We could instead take any $[a,b]$ with $0&lt;a&lt;1&lt;b&lt;\\infty.$</p>\n" }, { "answer_id": 2117014, "score": 9, "text": "<p>You mention allowing series expansions. 
This problem is trivial with them.\n$$\\lim_{x\\to 1}\\dfrac{x^x-x}{1-x+\\ln(x)}$$\nSo that I can use McLaurin series I already know, I am going to enforce the substitution $u = x-1$\n$$=\\lim_{u\\to 0}\\dfrac{(u+1)^{u+1}-u-1}{\\ln(u+1)-u}$$\n$$=\\lim_{u\\to 0}\\dfrac{[1+u+u^2+O(u^3)]-u-1}{[u-\\frac{u^2}{2}+O(u^3)]-u}$$\n$$=\\lim_{u\\to 0}\\dfrac{u^2+O(u^3)}{-\\frac{u^2}{2}+O(u^3)}$$\n$$=\\lim_{u\\to 0}\\dfrac{u^2}{-\\frac{u^2}{2}}$$\n$$=\\color{red}{-2}$$</p>\n" }, { "answer_id": 2117033, "score": 6, "text": "<p>$y=\\ln(x)$ actually works </p>\n\n<p>$$\\lim_{x\\to 1}\\dfrac{x^x-x}{1-x+\\ln(x)}=\\lim_{y \\to 0}\\dfrac{e^{ye^y}-e^y}{1-e^{y}+y}=\\lim_{y \\to 0}e^{y}\\dfrac{e^{y(e^y-1)}-1}{1-e^{y}+y}=\\lim_{y \\to 0}\\dfrac{e^{y(e^y-1)}-1}{1-e^{y}+y}$$</p>\n\n<p>Now, by the definition of the derivative\n$$\\lim_{z \\to 0}\\frac{e^z-1}{z}=1$$</p>\n\n<p>Replacing $z=y(e^y-1)$ you get\n$$\\lim_{y \\to 0}\\frac{e^{y(e^y-1)}-1}{y(e^y-1)}=1$$</p>\n\n<p>Therefore\n$$\\lim_{y \\to 0}\\frac{e^{y(e^y-1)}-1}{1+y-e^y}=\\lim_{y \\to 0}\\frac{e^{y(e^y-1)}-1}{y(e^y-1)}\\frac{y(e^y-1)}{1+y-e^y}=\\lim_{y \\to 0}\\frac{y(e^y-1)}{1+y-e^y}$$</p>\n\n<p>This last limit is much easier and can be calculated easily, either with power series, or by calculating\n$$\n\\lim_{y \\to 0}\\frac{y(e^y-1)}{1+y-e^y}=\\lim_{y \\to 0}\\frac{y(e^y-1)}{y^2}\\frac{y^2}{1+y-e^y}=\\lim_{y \\to 0}\\frac{e^y-1}{y}\\frac{y^2}{1+y-e^y}=\\lim_{y \\to 0}\\frac{y^2}{1+y-e^y}\n$$\nwhich is pretty standard.</p>\n" }, { "answer_id": 2117045, "score": 4, "text": "<p>You can collect a factor $x$ from the numerator and then do $x=t+1$, so the limit becomes\n$$\n\\lim_{t\\to0}(1+t)\\frac{(t+1)^t-1}{\\log(1+t)-t}\n$$\n(surely Euler didn't use “ln”).</p>\n\n<p>The factor $1+t$ contributes $1$, so we can concentrate on the fraction.\nThe numerator is $e^{t\\log(t+1)}-1$ and can be expanded like\n$$\n1+t\\log(1+t)+o(t\\log(1+t))-1=t^2+o(t^2)\n$$\nwhereas the denominator 
is\n$$\n\\log(1+t)-t=t-\\frac{t^2}{2}+o(t^2)-t=-\\frac{t^2}{2}+o(t^2)\n$$\nso the limit is $-2$.</p>\n" }, { "answer_id": 2117100, "score": 0, "text": "<p>This is a footnote about the method used in some of the answers already given.</p>\n\n<p>The functions $f(x)=x^x-x$ and $g(x)=1-x+\\ln x$ are infinitely differentiable for $x&gt;0$. So $f'''$ and $g'''$ are continuous for $x&gt;0.$</p>\n\n<p>So $A=\\sup \\{|f'''(x)|:x\\in [1/2,2]\\}&lt;\\infty$ and $B=\\sup \\{|g'''(x)|:x\\in [1/2,2]\\}&lt;\\infty.$ </p>\n\n<p>Let $M=\\max (A,B).$ </p>\n\n<p>For $1+y\\in [1/2,2]$ we have $$f(1+y)=f(1)+yf'(1)+y^2f''(1)/2!+y^3f'''(c)/3!$$ for some $c$ such that $|c|\\leq |y|.$ So $$|y^3f'''(c)/3!|\\leq |y|^3M/3!.$$ Similarly for $g.$ The key is that $ M&lt;\\infty$ so we have $$f(1+y)=f(1)+yf'(1)+y^2f''(1)/2+O(y^3)$$ as $y\\to 0.$ </p>\n\n<p>The choice of the interval $[1/2,2]$ is arbitrary. We could instead take any $[a,b]$ with $0&lt;a&lt;1&lt;b&lt;\\infty.$</p>\n" }, { "answer_id": 2117014, "score": 9, "text": "<p>You mention allowing series expansions. 
This problem is trivial with them.\n$$\\lim_{x\\to 1}\\dfrac{x^x-x}{1-x+\\ln(x)}$$\nSo that I can use McLaurin series I already know, I am going to enforce the substitution $u = x-1$\n$$=\\lim_{u\\to 0}\\dfrac{(u+1)^{u+1}-u-1}{\\ln(u+1)-u}$$\n$$=\\lim_{u\\to 0}\\dfrac{[1+u+u^2+O(u^3)]-u-1}{[u-\\frac{u^2}{2}+O(u^3)]-u}$$\n$$=\\lim_{u\\to 0}\\dfrac{u^2+O(u^3)}{-\\frac{u^2}{2}+O(u^3)}$$\n$$=\\lim_{u\\to 0}\\dfrac{u^2}{-\\frac{u^2}{2}}$$\n$$=\\color{red}{-2}$$</p>\n" }, { "answer_id": 2117033, "score": 6, "text": "<p>$y=\\ln(x)$ actually works </p>\n\n<p>$$\\lim_{x\\to 1}\\dfrac{x^x-x}{1-x+\\ln(x)}=\\lim_{y \\to 0}\\dfrac{e^{ye^y}-e^y}{1-e^{y}+y}=\\lim_{y \\to 0}e^{y}\\dfrac{e^{y(e^y-1)}-1}{1-e^{y}+y}=\\lim_{y \\to 0}\\dfrac{e^{y(e^y-1)}-1}{1-e^{y}+y}$$</p>\n\n<p>Now, by the definition of the derivative\n$$\\lim_{z \\to 0}\\frac{e^z-1}{z}=1$$</p>\n\n<p>Replacing $z=y(e^y-1)$ you get\n$$\\lim_{y \\to 0}\\frac{e^{y(e^y-1)}-1}{y(e^y-1)}=1$$</p>\n\n<p>Therefore\n$$\\lim_{y \\to 0}\\frac{e^{y(e^y-1)}-1}{1+y-e^y}=\\lim_{y \\to 0}\\frac{e^{y(e^y-1)}-1}{y(e^y-1)}\\frac{y(e^y-1)}{1+y-e^y}=\\lim_{y \\to 0}\\frac{y(e^y-1)}{1+y-e^y}$$</p>\n\n<p>This last limit is much easier and can be calculated easily, either with power series, or by calculating\n$$\n\\lim_{y \\to 0}\\frac{y(e^y-1)}{1+y-e^y}=\\lim_{y \\to 0}\\frac{y(e^y-1)}{y^2}\\frac{y^2}{1+y-e^y}=\\lim_{y \\to 0}\\frac{e^y-1}{y}\\frac{y^2}{1+y-e^y}=\\lim_{y \\to 0}\\frac{y^2}{1+y-e^y}\n$$\nwhich is pretty standard.</p>\n" }, { "answer_id": 2117045, "score": 4, "text": "<p>You can collect a factor $x$ from the numerator and then do $x=t+1$, so the limit becomes\n$$\n\\lim_{t\\to0}(1+t)\\frac{(t+1)^t-1}{\\log(1+t)-t}\n$$\n(surely Euler didn't use “ln”).</p>\n\n<p>The factor $1+t$ contributes $1$, so we can concentrate on the fraction.\nThe numerator is $e^{t\\log(t+1)}-1$ and can be expanded like\n$$\n1+t\\log(1+t)+o(t\\log(1+t))-1=t^2+o(t^2)\n$$\nwhereas the denominator 
is\n$$\n\\log(1+t)-t=t-\\frac{t^2}{2}+o(t^2)-t=-\\frac{t^2}{2}+o(t^2)\n$$\nso the limit is $-2$.</p>\n" }, { "answer_id": 2117100, "score": 0, "text": "<p>This is a footnote about the method used in some of the answers already given.</p>\n\n<p>The functions $f(x)=x^x-x$ and $g(x)=1-x+\\ln x$ are infinitely differentiable for $x&gt;0$. So $f'''$ and $g'''$ are continuous for $x&gt;0.$</p>\n\n<p>So $A=\\sup \\{|f'''(x)|:x\\in [1/2,2]\\}&lt;\\infty$ and $B=\\sup \\{|g'''(x)|:x\\in [1/2,2]\\}&lt;\\infty.$ </p>\n\n<p>Let $M=\\max (A,B).$ </p>\n\n<p>For $1+y\\in [1/2,2]$ we have $$f(1+y)=f(1)+yf'(1)+y^2f''(1)/2!+y^3f'''(c)/3!$$ for some $c$ such that $|c|\\leq |y|.$ So $$|y^3f'''(c)/3!|\\leq |y|^3M/3!.$$ Similarly for $g.$ The key is that $ M&lt;\\infty$ so we have $$f(1+y)=f(1)+yf'(1)+y^2f''(1)/2+O(y^3)$$ as $y\\to 0.$ </p>\n\n<p>The choice of the interval $[1/2,2]$ is arbitrary. We could instead take any $[a,b]$ with $0&lt;a&lt;1&lt;b&lt;\\infty.$</p>\n" }, { "answer_id": 2117014, "score": 9, "text": "<p>You mention allowing series expansions. 
This problem is trivial with them.\n$$\\lim_{x\\to 1}\\dfrac{x^x-x}{1-x+\\ln(x)}$$\nSo that I can use McLaurin series I already know, I am going to enforce the substitution $u = x-1$\n$$=\\lim_{u\\to 0}\\dfrac{(u+1)^{u+1}-u-1}{\\ln(u+1)-u}$$\n$$=\\lim_{u\\to 0}\\dfrac{[1+u+u^2+O(u^3)]-u-1}{[u-\\frac{u^2}{2}+O(u^3)]-u}$$\n$$=\\lim_{u\\to 0}\\dfrac{u^2+O(u^3)}{-\\frac{u^2}{2}+O(u^3)}$$\n$$=\\lim_{u\\to 0}\\dfrac{u^2}{-\\frac{u^2}{2}}$$\n$$=\\color{red}{-2}$$</p>\n" }, { "answer_id": 2117033, "score": 6, "text": "<p>$y=\\ln(x)$ actually works </p>\n\n<p>$$\\lim_{x\\to 1}\\dfrac{x^x-x}{1-x+\\ln(x)}=\\lim_{y \\to 0}\\dfrac{e^{ye^y}-e^y}{1-e^{y}+y}=\\lim_{y \\to 0}e^{y}\\dfrac{e^{y(e^y-1)}-1}{1-e^{y}+y}=\\lim_{y \\to 0}\\dfrac{e^{y(e^y-1)}-1}{1-e^{y}+y}$$</p>\n\n<p>Now, by the definition of the derivative\n$$\\lim_{z \\to 0}\\frac{e^z-1}{z}=1$$</p>\n\n<p>Replacing $z=y(e^y-1)$ you get\n$$\\lim_{y \\to 0}\\frac{e^{y(e^y-1)}-1}{y(e^y-1)}=1$$</p>\n\n<p>Therefore\n$$\\lim_{y \\to 0}\\frac{e^{y(e^y-1)}-1}{1+y-e^y}=\\lim_{y \\to 0}\\frac{e^{y(e^y-1)}-1}{y(e^y-1)}\\frac{y(e^y-1)}{1+y-e^y}=\\lim_{y \\to 0}\\frac{y(e^y-1)}{1+y-e^y}$$</p>\n\n<p>This last limit is much easier and can be calculated easily, either with power series, or by calculating\n$$\n\\lim_{y \\to 0}\\frac{y(e^y-1)}{1+y-e^y}=\\lim_{y \\to 0}\\frac{y(e^y-1)}{y^2}\\frac{y^2}{1+y-e^y}=\\lim_{y \\to 0}\\frac{e^y-1}{y}\\frac{y^2}{1+y-e^y}=\\lim_{y \\to 0}\\frac{y^2}{1+y-e^y}\n$$\nwhich is pretty standard.</p>\n" }, { "answer_id": 2117045, "score": 4, "text": "<p>You can collect a factor $x$ from the numerator and then do $x=t+1$, so the limit becomes\n$$\n\\lim_{t\\to0}(1+t)\\frac{(t+1)^t-1}{\\log(1+t)-t}\n$$\n(surely Euler didn't use “ln”).</p>\n\n<p>The factor $1+t$ contributes $1$, so we can concentrate on the fraction.\nThe numerator is $e^{t\\log(t+1)}-1$ and can be expanded like\n$$\n1+t\\log(1+t)+o(t\\log(1+t))-1=t^2+o(t^2)\n$$\nwhereas the denominator 
is\n$$\n\\log(1+t)-t=t-\\frac{t^2}{2}+o(t^2)-t=-\\frac{t^2}{2}+o(t^2)\n$$\nso the limit is $-2$.</p>\n" }, { "answer_id": 2117100, "score": 0, "text": "<p>This is a footnote about the method used in some of the answers already given.</p>\n\n<p>The functions $f(x)=x^x-x$ and $g(x)=1-x+\\ln x$ are infinitely differentiable for $x&gt;0$. So $f'''$ and $g'''$ are continuous for $x&gt;0.$</p>\n\n<p>So $A=\\sup \\{|f'''(x)|:x\\in [1/2,2]\\}&lt;\\infty$ and $B=\\sup \\{|g'''(x)|:x\\in [1/2,2]\\}&lt;\\infty.$ </p>\n\n<p>Let $M=\\max (A,B).$ </p>\n\n<p>For $1+y\\in [1/2,2]$ we have $$f(1+y)=f(1)+yf'(1)+y^2f''(1)/2!+y^3f'''(c)/3!$$ for some $c$ such that $|c|\\leq |y|.$ So $$|y^3f'''(c)/3!|\\leq |y|^3M/3!.$$ Similarly for $g.$ The key is that $ M&lt;\\infty$ so we have $$f(1+y)=f(1)+yf'(1)+y^2f''(1)/2+O(y^3)$$ as $y\\to 0.$ </p>\n\n<p>The choice of the interval $[1/2,2]$ is arbitrary. We could instead take any $[a,b]$ with $0&lt;a&lt;1&lt;b&lt;\\infty.$</p>\n" }, { "answer_id": 2117014, "score": 9, "text": "<p>You mention allowing series expansions. 
This problem is trivial with them.\n$$\\lim_{x\\to 1}\\dfrac{x^x-x}{1-x+\\ln(x)}$$\nSo that I can use McLaurin series I already know, I am going to enforce the substitution $u = x-1$\n$$=\\lim_{u\\to 0}\\dfrac{(u+1)^{u+1}-u-1}{\\ln(u+1)-u}$$\n$$=\\lim_{u\\to 0}\\dfrac{[1+u+u^2+O(u^3)]-u-1}{[u-\\frac{u^2}{2}+O(u^3)]-u}$$\n$$=\\lim_{u\\to 0}\\dfrac{u^2+O(u^3)}{-\\frac{u^2}{2}+O(u^3)}$$\n$$=\\lim_{u\\to 0}\\dfrac{u^2}{-\\frac{u^2}{2}}$$\n$$=\\color{red}{-2}$$</p>\n" }, { "answer_id": 2117033, "score": 6, "text": "<p>$y=\\ln(x)$ actually works </p>\n\n<p>$$\\lim_{x\\to 1}\\dfrac{x^x-x}{1-x+\\ln(x)}=\\lim_{y \\to 0}\\dfrac{e^{ye^y}-e^y}{1-e^{y}+y}=\\lim_{y \\to 0}e^{y}\\dfrac{e^{y(e^y-1)}-1}{1-e^{y}+y}=\\lim_{y \\to 0}\\dfrac{e^{y(e^y-1)}-1}{1-e^{y}+y}$$</p>\n\n<p>Now, by the definition of the derivative\n$$\\lim_{z \\to 0}\\frac{e^z-1}{z}=1$$</p>\n\n<p>Replacing $z=y(e^y-1)$ you get\n$$\\lim_{y \\to 0}\\frac{e^{y(e^y-1)}-1}{y(e^y-1)}=1$$</p>\n\n<p>Therefore\n$$\\lim_{y \\to 0}\\frac{e^{y(e^y-1)}-1}{1+y-e^y}=\\lim_{y \\to 0}\\frac{e^{y(e^y-1)}-1}{y(e^y-1)}\\frac{y(e^y-1)}{1+y-e^y}=\\lim_{y \\to 0}\\frac{y(e^y-1)}{1+y-e^y}$$</p>\n\n<p>This last limit is much easier and can be calculated easily, either with power series, or by calculating\n$$\n\\lim_{y \\to 0}\\frac{y(e^y-1)}{1+y-e^y}=\\lim_{y \\to 0}\\frac{y(e^y-1)}{y^2}\\frac{y^2}{1+y-e^y}=\\lim_{y \\to 0}\\frac{e^y-1}{y}\\frac{y^2}{1+y-e^y}=\\lim_{y \\to 0}\\frac{y^2}{1+y-e^y}\n$$\nwhich is pretty standard.</p>\n" }, { "answer_id": 2117045, "score": 4, "text": "<p>You can collect a factor $x$ from the numerator and then do $x=t+1$, so the limit becomes\n$$\n\\lim_{t\\to0}(1+t)\\frac{(t+1)^t-1}{\\log(1+t)-t}\n$$\n(surely Euler didn't use “ln”).</p>\n\n<p>The factor $1+t$ contributes $1$, so we can concentrate on the fraction.\nThe numerator is $e^{t\\log(t+1)}-1$ and can be expanded like\n$$\n1+t\\log(1+t)+o(t\\log(1+t))-1=t^2+o(t^2)\n$$\nwhereas the denominator 
is\n$$\n\\log(1+t)-t=t-\\frac{t^2}{2}+o(t^2)-t=-\\frac{t^2}{2}+o(t^2)\n$$\nso the limit is $-2$.</p>\n" }, { "answer_id": 2117100, "score": 0, "text": "<p>This is a footnote about the method used in some of the answers already given.</p>\n\n<p>The functions $f(x)=x^x-x$ and $g(x)=1-x+\\ln x$ are infinitely differentiable for $x&gt;0$. So $f'''$ and $g'''$ are continuous for $x&gt;0.$</p>\n\n<p>So $A=\\sup \\{|f'''(x)|:x\\in [1/2,2]\\}&lt;\\infty$ and $B=\\sup \\{|g'''(x)|:x\\in [1/2,2]\\}&lt;\\infty.$ </p>\n\n<p>Let $M=\\max (A,B).$ </p>\n\n<p>For $1+y\\in [1/2,2]$ we have $$f(1+y)=f(1)+yf'(1)+y^2f''(1)/2!+y^3f'''(c)/3!$$ for some $c$ such that $|c|\\leq |y|.$ So $$|y^3f'''(c)/3!|\\leq |y|^3M/3!.$$ Similarly for $g.$ The key is that $ M&lt;\\infty$ so we have $$f(1+y)=f(1)+yf'(1)+y^2f''(1)/2+O(y^3)$$ as $y\\to 0.$ </p>\n\n<p>The choice of the interval $[1/2,2]$ is arbitrary. We could instead take any $[a,b]$ with $0&lt;a&lt;1&lt;b&lt;\\infty.$</p>\n" }, { "answer_id": 2117014, "score": 9, "text": "<p>You mention allowing series expansions. 
This problem is trivial with them.\n$$\\lim_{x\\to 1}\\dfrac{x^x-x}{1-x+\\ln(x)}$$\nSo that I can use McLaurin series I already know, I am going to enforce the substitution $u = x-1$\n$$=\\lim_{u\\to 0}\\dfrac{(u+1)^{u+1}-u-1}{\\ln(u+1)-u}$$\n$$=\\lim_{u\\to 0}\\dfrac{[1+u+u^2+O(u^3)]-u-1}{[u-\\frac{u^2}{2}+O(u^3)]-u}$$\n$$=\\lim_{u\\to 0}\\dfrac{u^2+O(u^3)}{-\\frac{u^2}{2}+O(u^3)}$$\n$$=\\lim_{u\\to 0}\\dfrac{u^2}{-\\frac{u^2}{2}}$$\n$$=\\color{red}{-2}$$</p>\n" }, { "answer_id": 2117033, "score": 6, "text": "<p>$y=\\ln(x)$ actually works </p>\n\n<p>$$\\lim_{x\\to 1}\\dfrac{x^x-x}{1-x+\\ln(x)}=\\lim_{y \\to 0}\\dfrac{e^{ye^y}-e^y}{1-e^{y}+y}=\\lim_{y \\to 0}e^{y}\\dfrac{e^{y(e^y-1)}-1}{1-e^{y}+y}=\\lim_{y \\to 0}\\dfrac{e^{y(e^y-1)}-1}{1-e^{y}+y}$$</p>\n\n<p>Now, by the definition of the derivative\n$$\\lim_{z \\to 0}\\frac{e^z-1}{z}=1$$</p>\n\n<p>Replacing $z=y(e^y-1)$ you get\n$$\\lim_{y \\to 0}\\frac{e^{y(e^y-1)}-1}{y(e^y-1)}=1$$</p>\n\n<p>Therefore\n$$\\lim_{y \\to 0}\\frac{e^{y(e^y-1)}-1}{1+y-e^y}=\\lim_{y \\to 0}\\frac{e^{y(e^y-1)}-1}{y(e^y-1)}\\frac{y(e^y-1)}{1+y-e^y}=\\lim_{y \\to 0}\\frac{y(e^y-1)}{1+y-e^y}$$</p>\n\n<p>This last limit is much easier and can be calculated easily, either with power series, or by calculating\n$$\n\\lim_{y \\to 0}\\frac{y(e^y-1)}{1+y-e^y}=\\lim_{y \\to 0}\\frac{y(e^y-1)}{y^2}\\frac{y^2}{1+y-e^y}=\\lim_{y \\to 0}\\frac{e^y-1}{y}\\frac{y^2}{1+y-e^y}=\\lim_{y \\to 0}\\frac{y^2}{1+y-e^y}\n$$\nwhich is pretty standard.</p>\n" }, { "answer_id": 2117045, "score": 4, "text": "<p>You can collect a factor $x$ from the numerator and then do $x=t+1$, so the limit becomes\n$$\n\\lim_{t\\to0}(1+t)\\frac{(t+1)^t-1}{\\log(1+t)-t}\n$$\n(surely Euler didn't use “ln”).</p>\n\n<p>The factor $1+t$ contributes $1$, so we can concentrate on the fraction.\nThe numerator is $e^{t\\log(t+1)}-1$ and can be expanded like\n$$\n1+t\\log(1+t)+o(t\\log(1+t))-1=t^2+o(t^2)\n$$\nwhereas the denominator 
is\n$$\n\\log(1+t)-t=t-\\frac{t^2}{2}+o(t^2)-t=-\\frac{t^2}{2}+o(t^2)\n$$\nso the limit is $-2$.</p>\n" }, { "answer_id": 2117100, "score": 0, "text": "<p>This is a footnote about the method used in some of the answers already given.</p>\n\n<p>The functions $f(x)=x^x-x$ and $g(x)=1-x+\\ln x$ are infinitely differentiable for $x&gt;0$. So $f'''$ and $g'''$ are continuous for $x&gt;0.$</p>\n\n<p>So $A=\\sup \\{|f'''(x)|:x\\in [1/2,2]\\}&lt;\\infty$ and $B=\\sup \\{|g'''(x)|:x\\in [1/2,2]\\}&lt;\\infty.$ </p>\n\n<p>Let $M=\\max (A,B).$ </p>\n\n<p>For $1+y\\in [1/2,2]$ we have $$f(1+y)=f(1)+yf'(1)+y^2f''(1)/2!+y^3f'''(c)/3!$$ for some $c$ such that $|c|\\leq |y|.$ So $$|y^3f'''(c)/3!|\\leq |y|^3M/3!.$$ Similarly for $g.$ The key is that $ M&lt;\\infty$ so we have $$f(1+y)=f(1)+yf'(1)+y^2f''(1)/2+O(y^3)$$ as $y\\to 0.$ </p>\n\n<p>The choice of the interval $[1/2,2]$ is arbitrary. We could instead take any $[a,b]$ with $0&lt;a&lt;1&lt;b&lt;\\infty.$</p>\n" }, { "answer_id": 2117014, "score": 9, "text": "<p>You mention allowing series expansions. 
This problem is trivial with them.\n$$\\lim_{x\\to 1}\\dfrac{x^x-x}{1-x+\\ln(x)}$$\nSo that I can use McLaurin series I already know, I am going to enforce the substitution $u = x-1$\n$$=\\lim_{u\\to 0}\\dfrac{(u+1)^{u+1}-u-1}{\\ln(u+1)-u}$$\n$$=\\lim_{u\\to 0}\\dfrac{[1+u+u^2+O(u^3)]-u-1}{[u-\\frac{u^2}{2}+O(u^3)]-u}$$\n$$=\\lim_{u\\to 0}\\dfrac{u^2+O(u^3)}{-\\frac{u^2}{2}+O(u^3)}$$\n$$=\\lim_{u\\to 0}\\dfrac{u^2}{-\\frac{u^2}{2}}$$\n$$=\\color{red}{-2}$$</p>\n" }, { "answer_id": 2117033, "score": 6, "text": "<p>$y=\\ln(x)$ actually works </p>\n\n<p>$$\\lim_{x\\to 1}\\dfrac{x^x-x}{1-x+\\ln(x)}=\\lim_{y \\to 0}\\dfrac{e^{ye^y}-e^y}{1-e^{y}+y}=\\lim_{y \\to 0}e^{y}\\dfrac{e^{y(e^y-1)}-1}{1-e^{y}+y}=\\lim_{y \\to 0}\\dfrac{e^{y(e^y-1)}-1}{1-e^{y}+y}$$</p>\n\n<p>Now, by the definition of the derivative\n$$\\lim_{z \\to 0}\\frac{e^z-1}{z}=1$$</p>\n\n<p>Replacing $z=y(e^y-1)$ you get\n$$\\lim_{y \\to 0}\\frac{e^{y(e^y-1)}-1}{y(e^y-1)}=1$$</p>\n\n<p>Therefore\n$$\\lim_{y \\to 0}\\frac{e^{y(e^y-1)}-1}{1+y-e^y}=\\lim_{y \\to 0}\\frac{e^{y(e^y-1)}-1}{y(e^y-1)}\\frac{y(e^y-1)}{1+y-e^y}=\\lim_{y \\to 0}\\frac{y(e^y-1)}{1+y-e^y}$$</p>\n\n<p>This last limit is much easier and can be calculated easily, either with power series, or by calculating\n$$\n\\lim_{y \\to 0}\\frac{y(e^y-1)}{1+y-e^y}=\\lim_{y \\to 0}\\frac{y(e^y-1)}{y^2}\\frac{y^2}{1+y-e^y}=\\lim_{y \\to 0}\\frac{e^y-1}{y}\\frac{y^2}{1+y-e^y}=\\lim_{y \\to 0}\\frac{y^2}{1+y-e^y}\n$$\nwhich is pretty standard.</p>\n" }, { "answer_id": 2117045, "score": 4, "text": "<p>You can collect a factor $x$ from the numerator and then do $x=t+1$, so the limit becomes\n$$\n\\lim_{t\\to0}(1+t)\\frac{(t+1)^t-1}{\\log(1+t)-t}\n$$\n(surely Euler didn't use “ln”).</p>\n\n<p>The factor $1+t$ contributes $1$, so we can concentrate on the fraction.\nThe numerator is $e^{t\\log(t+1)}-1$ and can be expanded like\n$$\n1+t\\log(1+t)+o(t\\log(1+t))-1=t^2+o(t^2)\n$$\nwhereas the denominator 
is\n$$\n\\log(1+t)-t=t-\\frac{t^2}{2}+o(t^2)-t=-\\frac{t^2}{2}+o(t^2)\n$$\nso the limit is $-2$.</p>\n" }, { "answer_id": 2117100, "score": 0, "text": "<p>This is a footnote about the method used in some of the answers already given.</p>\n\n<p>The functions $f(x)=x^x-x$ and $g(x)=1-x+\\ln x$ are infinitely differentiable for $x&gt;0$. So $f'''$ and $g'''$ are continuous for $x&gt;0.$</p>\n\n<p>So $A=\\sup \\{|f'''(x)|:x\\in [1/2,2]\\}&lt;\\infty$ and $B=\\sup \\{|g'''(x)|:x\\in [1/2,2]\\}&lt;\\infty.$ </p>\n\n<p>Let $M=\\max (A,B).$ </p>\n\n<p>For $1+y\\in [1/2,2]$ we have $$f(1+y)=f(1)+yf'(1)+y^2f''(1)/2!+y^3f'''(c)/3!$$ for some $c$ such that $|c|\\leq |y|.$ So $$|y^3f'''(c)/3!|\\leq |y|^3M/3!.$$ Similarly for $g.$ The key is that $ M&lt;\\infty$ so we have $$f(1+y)=f(1)+yf'(1)+y^2f''(1)/2+O(y^3)$$ as $y\\to 0.$ </p>\n\n<p>The choice of the interval $[1/2,2]$ is arbitrary. We could instead take any $[a,b]$ with $0&lt;a&lt;1&lt;b&lt;\\infty.$</p>\n" }, { "answer_id": 2117014, "score": 9, "text": "<p>You mention allowing series expansions. 
This problem is trivial with them.\n$$\\lim_{x\\to 1}\\dfrac{x^x-x}{1-x+\\ln(x)}$$\nSo that I can use McLaurin series I already know, I am going to enforce the substitution $u = x-1$\n$$=\\lim_{u\\to 0}\\dfrac{(u+1)^{u+1}-u-1}{\\ln(u+1)-u}$$\n$$=\\lim_{u\\to 0}\\dfrac{[1+u+u^2+O(u^3)]-u-1}{[u-\\frac{u^2}{2}+O(u^3)]-u}$$\n$$=\\lim_{u\\to 0}\\dfrac{u^2+O(u^3)}{-\\frac{u^2}{2}+O(u^3)}$$\n$$=\\lim_{u\\to 0}\\dfrac{u^2}{-\\frac{u^2}{2}}$$\n$$=\\color{red}{-2}$$</p>\n" }, { "answer_id": 2117033, "score": 6, "text": "<p>$y=\\ln(x)$ actually works </p>\n\n<p>$$\\lim_{x\\to 1}\\dfrac{x^x-x}{1-x+\\ln(x)}=\\lim_{y \\to 0}\\dfrac{e^{ye^y}-e^y}{1-e^{y}+y}=\\lim_{y \\to 0}e^{y}\\dfrac{e^{y(e^y-1)}-1}{1-e^{y}+y}=\\lim_{y \\to 0}\\dfrac{e^{y(e^y-1)}-1}{1-e^{y}+y}$$</p>\n\n<p>Now, by the definition of the derivative\n$$\\lim_{z \\to 0}\\frac{e^z-1}{z}=1$$</p>\n\n<p>Replacing $z=y(e^y-1)$ you get\n$$\\lim_{y \\to 0}\\frac{e^{y(e^y-1)}-1}{y(e^y-1)}=1$$</p>\n\n<p>Therefore\n$$\\lim_{y \\to 0}\\frac{e^{y(e^y-1)}-1}{1+y-e^y}=\\lim_{y \\to 0}\\frac{e^{y(e^y-1)}-1}{y(e^y-1)}\\frac{y(e^y-1)}{1+y-e^y}=\\lim_{y \\to 0}\\frac{y(e^y-1)}{1+y-e^y}$$</p>\n\n<p>This last limit is much easier and can be calculated easily, either with power series, or by calculating\n$$\n\\lim_{y \\to 0}\\frac{y(e^y-1)}{1+y-e^y}=\\lim_{y \\to 0}\\frac{y(e^y-1)}{y^2}\\frac{y^2}{1+y-e^y}=\\lim_{y \\to 0}\\frac{e^y-1}{y}\\frac{y^2}{1+y-e^y}=\\lim_{y \\to 0}\\frac{y^2}{1+y-e^y}\n$$\nwhich is pretty standard.</p>\n" }, { "answer_id": 2117045, "score": 4, "text": "<p>You can collect a factor $x$ from the numerator and then do $x=t+1$, so the limit becomes\n$$\n\\lim_{t\\to0}(1+t)\\frac{(t+1)^t-1}{\\log(1+t)-t}\n$$\n(surely Euler didn't use “ln”).</p>\n\n<p>The factor $1+t$ contributes $1$, so we can concentrate on the fraction.\nThe numerator is $e^{t\\log(t+1)}-1$ and can be expanded like\n$$\n1+t\\log(1+t)+o(t\\log(1+t))-1=t^2+o(t^2)\n$$\nwhereas the denominator 
is\n$$\n\\log(1+t)-t=t-\\frac{t^2}{2}+o(t^2)-t=-\\frac{t^2}{2}+o(t^2)\n$$\nso the limit is $-2$.</p>\n" }, { "answer_id": 2117100, "score": 0, "text": "<p>This is a footnote about the method used in some of the answers already given.</p>\n\n<p>The functions $f(x)=x^x-x$ and $g(x)=1-x+\\ln x$ are infinitely differentiable for $x&gt;0$. So $f'''$ and $g'''$ are continuous for $x&gt;0.$</p>\n\n<p>So $A=\\sup \\{|f'''(x)|:x\\in [1/2,2]\\}&lt;\\infty$ and $B=\\sup \\{|g'''(x)|:x\\in [1/2,2]\\}&lt;\\infty.$ </p>\n\n<p>Let $M=\\max (A,B).$ </p>\n\n<p>For $1+y\\in [1/2,2]$ we have $$f(1+y)=f(1)+yf'(1)+y^2f''(1)/2!+y^3f'''(c)/3!$$ for some $c$ such that $|c|\\leq |y|.$ So $$|y^3f'''(c)/3!|\\leq |y|^3M/3!.$$ Similarly for $g.$ The key is that $ M&lt;\\infty$ so we have $$f(1+y)=f(1)+yf'(1)+y^2f''(1)/2+O(y^3)$$ as $y\\to 0.$ </p>\n\n<p>The choice of the interval $[1/2,2]$ is arbitrary. We could instead take any $[a,b]$ with $0&lt;a&lt;1&lt;b&lt;\\infty.$</p>\n" }, { "answer_id": 2117014, "score": 9, "text": "<p>You mention allowing series expansions. 
This problem is trivial with them.\n$$\\lim_{x\\to 1}\\dfrac{x^x-x}{1-x+\\ln(x)}$$\nSo that I can use McLaurin series I already know, I am going to enforce the substitution $u = x-1$\n$$=\\lim_{u\\to 0}\\dfrac{(u+1)^{u+1}-u-1}{\\ln(u+1)-u}$$\n$$=\\lim_{u\\to 0}\\dfrac{[1+u+u^2+O(u^3)]-u-1}{[u-\\frac{u^2}{2}+O(u^3)]-u}$$\n$$=\\lim_{u\\to 0}\\dfrac{u^2+O(u^3)}{-\\frac{u^2}{2}+O(u^3)}$$\n$$=\\lim_{u\\to 0}\\dfrac{u^2}{-\\frac{u^2}{2}}$$\n$$=\\color{red}{-2}$$</p>\n" }, { "answer_id": 2117033, "score": 6, "text": "<p>$y=\\ln(x)$ actually works </p>\n\n<p>$$\\lim_{x\\to 1}\\dfrac{x^x-x}{1-x+\\ln(x)}=\\lim_{y \\to 0}\\dfrac{e^{ye^y}-e^y}{1-e^{y}+y}=\\lim_{y \\to 0}e^{y}\\dfrac{e^{y(e^y-1)}-1}{1-e^{y}+y}=\\lim_{y \\to 0}\\dfrac{e^{y(e^y-1)}-1}{1-e^{y}+y}$$</p>\n\n<p>Now, by the definition of the derivative\n$$\\lim_{z \\to 0}\\frac{e^z-1}{z}=1$$</p>\n\n<p>Replacing $z=y(e^y-1)$ you get\n$$\\lim_{y \\to 0}\\frac{e^{y(e^y-1)}-1}{y(e^y-1)}=1$$</p>\n\n<p>Therefore\n$$\\lim_{y \\to 0}\\frac{e^{y(e^y-1)}-1}{1+y-e^y}=\\lim_{y \\to 0}\\frac{e^{y(e^y-1)}-1}{y(e^y-1)}\\frac{y(e^y-1)}{1+y-e^y}=\\lim_{y \\to 0}\\frac{y(e^y-1)}{1+y-e^y}$$</p>\n\n<p>This last limit is much easier and can be calculated easily, either with power series, or by calculating\n$$\n\\lim_{y \\to 0}\\frac{y(e^y-1)}{1+y-e^y}=\\lim_{y \\to 0}\\frac{y(e^y-1)}{y^2}\\frac{y^2}{1+y-e^y}=\\lim_{y \\to 0}\\frac{e^y-1}{y}\\frac{y^2}{1+y-e^y}=\\lim_{y \\to 0}\\frac{y^2}{1+y-e^y}\n$$\nwhich is pretty standard.</p>\n" }, { "answer_id": 2117045, "score": 4, "text": "<p>You can collect a factor $x$ from the numerator and then do $x=t+1$, so the limit becomes\n$$\n\\lim_{t\\to0}(1+t)\\frac{(t+1)^t-1}{\\log(1+t)-t}\n$$\n(surely Euler didn't use “ln”).</p>\n\n<p>The factor $1+t$ contributes $1$, so we can concentrate on the fraction.\nThe numerator is $e^{t\\log(t+1)}-1$ and can be expanded like\n$$\n1+t\\log(1+t)+o(t\\log(1+t))-1=t^2+o(t^2)\n$$\nwhereas the denominator 
is\n$$\n\\log(1+t)-t=t-\\frac{t^2}{2}+o(t^2)-t=-\\frac{t^2}{2}+o(t^2)\n$$\nso the limit is $-2$.</p>\n" }, { "answer_id": 2117100, "score": 0, "text": "<p>This is a footnote about the method used in some of the answers already given.</p>\n\n<p>The functions $f(x)=x^x-x$ and $g(x)=1-x+\\ln x$ are infinitely differentiable for $x&gt;0$. So $f'''$ and $g'''$ are continuous for $x&gt;0.$</p>\n\n<p>So $A=\\sup \\{|f'''(x)|:x\\in [1/2,2]\\}&lt;\\infty$ and $B=\\sup \\{|g'''(x)|:x\\in [1/2,2]\\}&lt;\\infty.$ </p>\n\n<p>Let $M=\\max (A,B).$ </p>\n\n<p>For $1+y\\in [1/2,2]$ we have $$f(1+y)=f(1)+yf'(1)+y^2f''(1)/2!+y^3f'''(c)/3!$$ for some $c$ such that $|c|\\leq |y|.$ So $$|y^3f'''(c)/3!|\\leq |y|^3M/3!.$$ Similarly for $g.$ The key is that $ M&lt;\\infty$ so we have $$f(1+y)=f(1)+yf'(1)+y^2f''(1)/2+O(y^3)$$ as $y\\to 0.$ </p>\n\n<p>The choice of the interval $[1/2,2]$ is arbitrary. We could instead take any $[a,b]$ with $0&lt;a&lt;1&lt;b&lt;\\infty.$</p>\n" } ]
Question: Euler Limit of $\frac{x^x-x}{1-x+\ln(x)}$ Without L&#39;Hopital Answer 1: <p>You mention allowing series expansions. This problem is trivial with them. $$\lim_{x\to 1}\dfrac{x^x-x}{1-x+\ln(x)}$$ So that I can use McLaurin series I already know, I am going to enforce the substitution $u = x-1$ $$=\lim_{u\to 0}\dfrac{(u+1)^{u+1}-u-1}{\ln(u+1)-u}$$ $$=\lim_{u\to 0}\dfrac{[1+u+u^2+O(u^3)]-u-1}{[u-\frac{u^2}{2}+O(u^3)]-u}$$ $$=\lim_{u\to 0}\dfrac{u^2+O(u^3)}{-\frac{u^2}{2}+O(u^3)}$$ $$=\lim_{u\to 0}\dfrac{u^2}{-\frac{u^2}{2}}$$ $$=\color{red}{-2}$$</p> Answer 2: <p>$y=\ln(x)$ actually works </p> <p>$$\lim_{x\to 1}\dfrac{x^x-x}{1-x+\ln(x)}=\lim_{y \to 0}\dfrac{e^{ye^y}-e^y}{1-e^{y}+y}=\lim_{y \to 0}e^{y}\dfrac{e^{y(e^y-1)}-1}{1-e^{y}+y}=\lim_{y \to 0}\dfrac{e^{y(e^y-1)}-1}{1-e^{y}+y}$$</p> <p>Now, by the definition of the derivative $$\lim_{z \to 0}\frac{e^z-1}{z}=1$$</p> <p>Replacing $z=y(e^y-1)$ you get $$\lim_{y \to 0}\frac{e^{y(e^y-1)}-1}{y(e^y-1)}=1$$</p> <p>Therefore $$\lim_{y \to 0}\frac{e^{y(e^y-1)}-1}{1+y-e^y}=\lim_{y \to 0}\frac{e^{y(e^y-1)}-1}{y(e^y-1)}\frac{y(e^y-1)}{1+y-e^y}=\lim_{y \to 0}\frac{y(e^y-1)}{1+y-e^y}$$</p> <p>This last limit is much easier and can be calculated easily, either with power series, or by calculating $$ \lim_{y \to 0}\frac{y(e^y-1)}{1+y-e^y}=\lim_{y \to 0}\frac{y(e^y-1)}{y^2}\frac{y^2}{1+y-e^y}=\lim_{y \to 0}\frac{e^y-1}{y}\frac{y^2}{1+y-e^y}=\lim_{y \to 0}\frac{y^2}{1+y-e^y} $$ which is pretty standard.</p> Answer 3: <p>You can collect a factor $x$ from the numerator and then do $x=t+1$, so the limit becomes $$ \lim_{t\to0}(1+t)\frac{(t+1)^t-1}{\log(1+t)-t} $$ (surely Euler didn't use “ln”).</p> <p>The factor $1+t$ contributes $1$, so we can concentrate on the fraction. 
The numerator is $e^{t\log(t+1)}-1$ and can be expanded like $$ 1+t\log(1+t)+o(t\log(1+t))-1=t^2+o(t^2) $$ whereas the denominator is $$ \log(1+t)-t=t-\frac{t^2}{2}+o(t^2)-t=-\frac{t^2}{2}+o(t^2) $$ so the limit is $-2$.</p> Answer 4: <p>This is a footnote about the method used in some of the answers already given.</p> <p>The functions $f(x)=x^x-x$ and $g(x)=1-x+\ln x$ are infinitely differentiable for $x&gt;0$. So $f'''$ and $g'''$ are continuous for $x&gt;0.$</p> <p>So $A=\sup \{|f'''(x)|:x\in [1/2,2]\}&lt;\infty$ and $B=\sup \{|g'''(x)|:x\in [1/2,2]\}&lt;\infty.$ </p> <p>Let $M=\max (A,B).$ </p> <p>For $1+y\in [1/2,2]$ we have $$f(1+y)=f(1)+yf'(1)+y^2f''(1)/2!+y^3f'''(c)/3!$$ for some $c$ such that $|c|\leq |y|.$ So $$|y^3f'''(c)/3!|\leq |y|^3M/3!.$$ Similarly for $g.$ The key is that $ M&lt;\infty$ so we have $$f(1+y)=f(1)+yf'(1)+y^2f''(1)/2+O(y^3)$$ as $y\to 0.$ </p> <p>The choice of the interval $[1/2,2]$ is arbitrary. We could instead take any $[a,b]$ with $0&lt;a&lt;1&lt;b&lt;\infty.$</p> Answer 5: <p>You mention allowing series expansions. This problem is trivial with them. 
$$\lim_{x\to 1}\dfrac{x^x-x}{1-x+\ln(x)}$$ So that I can use McLaurin series I already know, I am going to enforce the substitution $u = x-1$ $$=\lim_{u\to 0}\dfrac{(u+1)^{u+1}-u-1}{\ln(u+1)-u}$$ $$=\lim_{u\to 0}\dfrac{[1+u+u^2+O(u^3)]-u-1}{[u-\frac{u^2}{2}+O(u^3)]-u}$$ $$=\lim_{u\to 0}\dfrac{u^2+O(u^3)}{-\frac{u^2}{2}+O(u^3)}$$ $$=\lim_{u\to 0}\dfrac{u^2}{-\frac{u^2}{2}}$$ $$=\color{red}{-2}$$</p> Answer 6: <p>$y=\ln(x)$ actually works </p> <p>$$\lim_{x\to 1}\dfrac{x^x-x}{1-x+\ln(x)}=\lim_{y \to 0}\dfrac{e^{ye^y}-e^y}{1-e^{y}+y}=\lim_{y \to 0}e^{y}\dfrac{e^{y(e^y-1)}-1}{1-e^{y}+y}=\lim_{y \to 0}\dfrac{e^{y(e^y-1)}-1}{1-e^{y}+y}$$</p> <p>Now, by the definition of the derivative $$\lim_{z \to 0}\frac{e^z-1}{z}=1$$</p> <p>Replacing $z=y(e^y-1)$ you get $$\lim_{y \to 0}\frac{e^{y(e^y-1)}-1}{y(e^y-1)}=1$$</p> <p>Therefore $$\lim_{y \to 0}\frac{e^{y(e^y-1)}-1}{1+y-e^y}=\lim_{y \to 0}\frac{e^{y(e^y-1)}-1}{y(e^y-1)}\frac{y(e^y-1)}{1+y-e^y}=\lim_{y \to 0}\frac{y(e^y-1)}{1+y-e^y}$$</p> <p>This last limit is much easier and can be calculated easily, either with power series, or by calculating $$ \lim_{y \to 0}\frac{y(e^y-1)}{1+y-e^y}=\lim_{y \to 0}\frac{y(e^y-1)}{y^2}\frac{y^2}{1+y-e^y}=\lim_{y \to 0}\frac{e^y-1}{y}\frac{y^2}{1+y-e^y}=\lim_{y \to 0}\frac{y^2}{1+y-e^y} $$ which is pretty standard.</p> Answer 7: <p>You can collect a factor $x$ from the numerator and then do $x=t+1$, so the limit becomes $$ \lim_{t\to0}(1+t)\frac{(t+1)^t-1}{\log(1+t)-t} $$ (surely Euler didn't use “ln”).</p> <p>The factor $1+t$ contributes $1$, so we can concentrate on the fraction. The numerator is $e^{t\log(t+1)}-1$ and can be expanded like $$ 1+t\log(1+t)+o(t\log(1+t))-1=t^2+o(t^2) $$ whereas the denominator is $$ \log(1+t)-t=t-\frac{t^2}{2}+o(t^2)-t=-\frac{t^2}{2}+o(t^2) $$ so the limit is $-2$.</p> Answer 8: <p>This is a footnote about the method used in some of the answers already given.</p> <p>The functions $f(x)=x^x-x$ and $g(x)=1-x+\ln x$ are infinitely differentiable for $x&gt;0$. 
So $f'''$ and $g'''$ are continuous for $x&gt;0.$</p> <p>So $A=\sup \{|f'''(x)|:x\in [1/2,2]\}&lt;\infty$ and $B=\sup \{|g'''(x)|:x\in [1/2,2]\}&lt;\infty.$ </p> <p>Let $M=\max (A,B).$ </p> <p>For $1+y\in [1/2,2]$ we have $$f(1+y)=f(1)+yf'(1)+y^2f''(1)/2!+y^3f'''(c)/3!$$ for some $c$ such that $|c|\leq |y|.$ So $$|y^3f'''(c)/3!|\leq |y|^3M/3!.$$ Similarly for $g.$ The key is that $ M&lt;\infty$ so we have $$f(1+y)=f(1)+yf'(1)+y^2f''(1)/2+O(y^3)$$ as $y\to 0.$ </p> <p>The choice of the interval $[1/2,2]$ is arbitrary. We could instead take any $[a,b]$ with $0&lt;a&lt;1&lt;b&lt;\infty.$</p> Answer 9: <p>You mention allowing series expansions. This problem is trivial with them. $$\lim_{x\to 1}\dfrac{x^x-x}{1-x+\ln(x)}$$ So that I can use McLaurin series I already know, I am going to enforce the substitution $u = x-1$ $$=\lim_{u\to 0}\dfrac{(u+1)^{u+1}-u-1}{\ln(u+1)-u}$$ $$=\lim_{u\to 0}\dfrac{[1+u+u^2+O(u^3)]-u-1}{[u-\frac{u^2}{2}+O(u^3)]-u}$$ $$=\lim_{u\to 0}\dfrac{u^2+O(u^3)}{-\frac{u^2}{2}+O(u^3)}$$ $$=\lim_{u\to 0}\dfrac{u^2}{-\frac{u^2}{2}}$$ $$=\color{red}{-2}$$</p> Answer 10: <p>$y=\ln(x)$ actually works </p> <p>$$\lim_{x\to 1}\dfrac{x^x-x}{1-x+\ln(x)}=\lim_{y \to 0}\dfrac{e^{ye^y}-e^y}{1-e^{y}+y}=\lim_{y \to 0}e^{y}\dfrac{e^{y(e^y-1)}-1}{1-e^{y}+y}=\lim_{y \to 0}\dfrac{e^{y(e^y-1)}-1}{1-e^{y}+y}$$</p> <p>Now, by the definition of the derivative $$\lim_{z \to 0}\frac{e^z-1}{z}=1$$</p> <p>Replacing $z=y(e^y-1)$ you get $$\lim_{y \to 0}\frac{e^{y(e^y-1)}-1}{y(e^y-1)}=1$$</p> <p>Therefore $$\lim_{y \to 0}\frac{e^{y(e^y-1)}-1}{1+y-e^y}=\lim_{y \to 0}\frac{e^{y(e^y-1)}-1}{y(e^y-1)}\frac{y(e^y-1)}{1+y-e^y}=\lim_{y \to 0}\frac{y(e^y-1)}{1+y-e^y}$$</p> <p>This last limit is much easier and can be calculated easily, either with power series, or by calculating $$ \lim_{y \to 0}\frac{y(e^y-1)}{1+y-e^y}=\lim_{y \to 0}\frac{y(e^y-1)}{y^2}\frac{y^2}{1+y-e^y}=\lim_{y \to 0}\frac{e^y-1}{y}\frac{y^2}{1+y-e^y}=\lim_{y \to 0}\frac{y^2}{1+y-e^y} $$ which is pretty standard.</p> 
Answer 11: <p>You can collect a factor $x$ from the numerator and then do $x=t+1$, so the limit becomes $$ \lim_{t\to0}(1+t)\frac{(t+1)^t-1}{\log(1+t)-t} $$ (surely Euler didn't use “ln”).</p> <p>The factor $1+t$ contributes $1$, so we can concentrate on the fraction. The numerator is $e^{t\log(t+1)}-1$ and can be expanded like $$ 1+t\log(1+t)+o(t\log(1+t))-1=t^2+o(t^2) $$ whereas the denominator is $$ \log(1+t)-t=t-\frac{t^2}{2}+o(t^2)-t=-\frac{t^2}{2}+o(t^2) $$ so the limit is $-2$.</p> Answer 12: <p>This is a footnote about the method used in some of the answers already given.</p> <p>The functions $f(x)=x^x-x$ and $g(x)=1-x+\ln x$ are infinitely differentiable for $x&gt;0$. So $f'''$ and $g'''$ are continuous for $x&gt;0.$</p> <p>So $A=\sup \{|f'''(x)|:x\in [1/2,2]\}&lt;\infty$ and $B=\sup \{|g'''(x)|:x\in [1/2,2]\}&lt;\infty.$ </p> <p>Let $M=\max (A,B).$ </p> <p>For $1+y\in [1/2,2]$ we have $$f(1+y)=f(1)+yf'(1)+y^2f''(1)/2!+y^3f'''(c)/3!$$ for some $c$ such that $|c|\leq |y|.$ So $$|y^3f'''(c)/3!|\leq |y|^3M/3!.$$ Similarly for $g.$ The key is that $ M&lt;\infty$ so we have $$f(1+y)=f(1)+yf'(1)+y^2f''(1)/2+O(y^3)$$ as $y\to 0.$ </p> <p>The choice of the interval $[1/2,2]$ is arbitrary. We could instead take any $[a,b]$ with $0&lt;a&lt;1&lt;b&lt;\infty.$</p> Answer 13: <p>You mention allowing series expansions. This problem is trivial with them. 
$$\lim_{x\to 1}\dfrac{x^x-x}{1-x+\ln(x)}$$ So that I can use McLaurin series I already know, I am going to enforce the substitution $u = x-1$ $$=\lim_{u\to 0}\dfrac{(u+1)^{u+1}-u-1}{\ln(u+1)-u}$$ $$=\lim_{u\to 0}\dfrac{[1+u+u^2+O(u^3)]-u-1}{[u-\frac{u^2}{2}+O(u^3)]-u}$$ $$=\lim_{u\to 0}\dfrac{u^2+O(u^3)}{-\frac{u^2}{2}+O(u^3)}$$ $$=\lim_{u\to 0}\dfrac{u^2}{-\frac{u^2}{2}}$$ $$=\color{red}{-2}$$</p> Answer 14: <p>$y=\ln(x)$ actually works </p> <p>$$\lim_{x\to 1}\dfrac{x^x-x}{1-x+\ln(x)}=\lim_{y \to 0}\dfrac{e^{ye^y}-e^y}{1-e^{y}+y}=\lim_{y \to 0}e^{y}\dfrac{e^{y(e^y-1)}-1}{1-e^{y}+y}=\lim_{y \to 0}\dfrac{e^{y(e^y-1)}-1}{1-e^{y}+y}$$</p> <p>Now, by the definition of the derivative $$\lim_{z \to 0}\frac{e^z-1}{z}=1$$</p> <p>Replacing $z=y(e^y-1)$ you get $$\lim_{y \to 0}\frac{e^{y(e^y-1)}-1}{y(e^y-1)}=1$$</p> <p>Therefore $$\lim_{y \to 0}\frac{e^{y(e^y-1)}-1}{1+y-e^y}=\lim_{y \to 0}\frac{e^{y(e^y-1)}-1}{y(e^y-1)}\frac{y(e^y-1)}{1+y-e^y}=\lim_{y \to 0}\frac{y(e^y-1)}{1+y-e^y}$$</p> <p>This last limit is much easier and can be calculated easily, either with power series, or by calculating $$ \lim_{y \to 0}\frac{y(e^y-1)}{1+y-e^y}=\lim_{y \to 0}\frac{y(e^y-1)}{y^2}\frac{y^2}{1+y-e^y}=\lim_{y \to 0}\frac{e^y-1}{y}\frac{y^2}{1+y-e^y}=\lim_{y \to 0}\frac{y^2}{1+y-e^y} $$ which is pretty standard.</p> Answer 15: <p>You can collect a factor $x$ from the numerator and then do $x=t+1$, so the limit becomes $$ \lim_{t\to0}(1+t)\frac{(t+1)^t-1}{\log(1+t)-t} $$ (surely Euler didn't use “ln”).</p> <p>The factor $1+t$ contributes $1$, so we can concentrate on the fraction. The numerator is $e^{t\log(t+1)}-1$ and can be expanded like $$ 1+t\log(1+t)+o(t\log(1+t))-1=t^2+o(t^2) $$ whereas the denominator is $$ \log(1+t)-t=t-\frac{t^2}{2}+o(t^2)-t=-\frac{t^2}{2}+o(t^2) $$ so the limit is $-2$.</p> Answer 16: <p>This is a footnote about the method used in some of the answers already given.</p> <p>The functions $f(x)=x^x-x$ and $g(x)=1-x+\ln x$ are infinitely differentiable for $x&gt;0$. 
So $f'''$ and $g'''$ are continuous for $x&gt;0.$</p> <p>So $A=\sup \{|f'''(x)|:x\in [1/2,2]\}&lt;\infty$ and $B=\sup \{|g'''(x)|:x\in [1/2,2]\}&lt;\infty.$ </p> <p>Let $M=\max (A,B).$ </p> <p>For $1+y\in [1/2,2]$ we have $$f(1+y)=f(1)+yf'(1)+y^2f''(1)/2!+y^3f'''(c)/3!$$ for some $c$ such that $|c|\leq |y|.$ So $$|y^3f'''(c)/3!|\leq |y|^3M/3!.$$ Similarly for $g.$ The key is that $ M&lt;\infty$ so we have $$f(1+y)=f(1)+yf'(1)+y^2f''(1)/2+O(y^3)$$ as $y\to 0.$ </p> <p>The choice of the interval $[1/2,2]$ is arbitrary. We could instead take any $[a,b]$ with $0&lt;a&lt;1&lt;b&lt;\infty.$</p> Answer 17: <p>You mention allowing series expansions. This problem is trivial with them. $$\lim_{x\to 1}\dfrac{x^x-x}{1-x+\ln(x)}$$ So that I can use McLaurin series I already know, I am going to enforce the substitution $u = x-1$ $$=\lim_{u\to 0}\dfrac{(u+1)^{u+1}-u-1}{\ln(u+1)-u}$$ $$=\lim_{u\to 0}\dfrac{[1+u+u^2+O(u^3)]-u-1}{[u-\frac{u^2}{2}+O(u^3)]-u}$$ $$=\lim_{u\to 0}\dfrac{u^2+O(u^3)}{-\frac{u^2}{2}+O(u^3)}$$ $$=\lim_{u\to 0}\dfrac{u^2}{-\frac{u^2}{2}}$$ $$=\color{red}{-2}$$</p> Answer 18: <p>$y=\ln(x)$ actually works </p> <p>$$\lim_{x\to 1}\dfrac{x^x-x}{1-x+\ln(x)}=\lim_{y \to 0}\dfrac{e^{ye^y}-e^y}{1-e^{y}+y}=\lim_{y \to 0}e^{y}\dfrac{e^{y(e^y-1)}-1}{1-e^{y}+y}=\lim_{y \to 0}\dfrac{e^{y(e^y-1)}-1}{1-e^{y}+y}$$</p> <p>Now, by the definition of the derivative $$\lim_{z \to 0}\frac{e^z-1}{z}=1$$</p> <p>Replacing $z=y(e^y-1)$ you get $$\lim_{y \to 0}\frac{e^{y(e^y-1)}-1}{y(e^y-1)}=1$$</p> <p>Therefore $$\lim_{y \to 0}\frac{e^{y(e^y-1)}-1}{1+y-e^y}=\lim_{y \to 0}\frac{e^{y(e^y-1)}-1}{y(e^y-1)}\frac{y(e^y-1)}{1+y-e^y}=\lim_{y \to 0}\frac{y(e^y-1)}{1+y-e^y}$$</p> <p>This last limit is much easier and can be calculated easily, either with power series, or by calculating $$ \lim_{y \to 0}\frac{y(e^y-1)}{1+y-e^y}=\lim_{y \to 0}\frac{y(e^y-1)}{y^2}\frac{y^2}{1+y-e^y}=\lim_{y \to 0}\frac{e^y-1}{y}\frac{y^2}{1+y-e^y}=\lim_{y \to 0}\frac{y^2}{1+y-e^y} $$ which is pretty standard.</p> 
Answer 19: <p>You can collect a factor $x$ from the numerator and then do $x=t+1$, so the limit becomes $$ \lim_{t\to0}(1+t)\frac{(t+1)^t-1}{\log(1+t)-t} $$ (surely Euler didn't use “ln”).</p> <p>The factor $1+t$ contributes $1$, so we can concentrate on the fraction. The numerator is $e^{t\log(t+1)}-1$ and can be expanded like $$ 1+t\log(1+t)+o(t\log(1+t))-1=t^2+o(t^2) $$ whereas the denominator is $$ \log(1+t)-t=t-\frac{t^2}{2}+o(t^2)-t=-\frac{t^2}{2}+o(t^2) $$ so the limit is $-2$.</p> Answer 20: <p>This is a footnote about the method used in some of the answers already given.</p> <p>The functions $f(x)=x^x-x$ and $g(x)=1-x+\ln x$ are infinitely differentiable for $x&gt;0$. So $f'''$ and $g'''$ are continuous for $x&gt;0.$</p> <p>So $A=\sup \{|f'''(x)|:x\in [1/2,2]\}&lt;\infty$ and $B=\sup \{|g'''(x)|:x\in [1/2,2]\}&lt;\infty.$ </p> <p>Let $M=\max (A,B).$ </p> <p>For $1+y\in [1/2,2]$ we have $$f(1+y)=f(1)+yf'(1)+y^2f''(1)/2!+y^3f'''(c)/3!$$ for some $c$ such that $|c|\leq |y|.$ So $$|y^3f'''(c)/3!|\leq |y|^3M/3!.$$ Similarly for $g.$ The key is that $ M&lt;\infty$ so we have $$f(1+y)=f(1)+yf'(1)+y^2f''(1)/2+O(y^3)$$ as $y\to 0.$ </p> <p>The choice of the interval $[1/2,2]$ is arbitrary. We could instead take any $[a,b]$ with $0&lt;a&lt;1&lt;b&lt;\infty.$</p> Answer 21: <p>You mention allowing series expansions. This problem is trivial with them. 
$$\lim_{x\to 1}\dfrac{x^x-x}{1-x+\ln(x)}$$ So that I can use McLaurin series I already know, I am going to enforce the substitution $u = x-1$ $$=\lim_{u\to 0}\dfrac{(u+1)^{u+1}-u-1}{\ln(u+1)-u}$$ $$=\lim_{u\to 0}\dfrac{[1+u+u^2+O(u^3)]-u-1}{[u-\frac{u^2}{2}+O(u^3)]-u}$$ $$=\lim_{u\to 0}\dfrac{u^2+O(u^3)}{-\frac{u^2}{2}+O(u^3)}$$ $$=\lim_{u\to 0}\dfrac{u^2}{-\frac{u^2}{2}}$$ $$=\color{red}{-2}$$</p> Answer 22: <p>$y=\ln(x)$ actually works </p> <p>$$\lim_{x\to 1}\dfrac{x^x-x}{1-x+\ln(x)}=\lim_{y \to 0}\dfrac{e^{ye^y}-e^y}{1-e^{y}+y}=\lim_{y \to 0}e^{y}\dfrac{e^{y(e^y-1)}-1}{1-e^{y}+y}=\lim_{y \to 0}\dfrac{e^{y(e^y-1)}-1}{1-e^{y}+y}$$</p> <p>Now, by the definition of the derivative $$\lim_{z \to 0}\frac{e^z-1}{z}=1$$</p> <p>Replacing $z=y(e^y-1)$ you get $$\lim_{y \to 0}\frac{e^{y(e^y-1)}-1}{y(e^y-1)}=1$$</p> <p>Therefore $$\lim_{y \to 0}\frac{e^{y(e^y-1)}-1}{1+y-e^y}=\lim_{y \to 0}\frac{e^{y(e^y-1)}-1}{y(e^y-1)}\frac{y(e^y-1)}{1+y-e^y}=\lim_{y \to 0}\frac{y(e^y-1)}{1+y-e^y}$$</p> <p>This last limit is much easier and can be calculated easily, either with power series, or by calculating $$ \lim_{y \to 0}\frac{y(e^y-1)}{1+y-e^y}=\lim_{y \to 0}\frac{y(e^y-1)}{y^2}\frac{y^2}{1+y-e^y}=\lim_{y \to 0}\frac{e^y-1}{y}\frac{y^2}{1+y-e^y}=\lim_{y \to 0}\frac{y^2}{1+y-e^y} $$ which is pretty standard.</p> Answer 23: <p>You can collect a factor $x$ from the numerator and then do $x=t+1$, so the limit becomes $$ \lim_{t\to0}(1+t)\frac{(t+1)^t-1}{\log(1+t)-t} $$ (surely Euler didn't use “ln”).</p> <p>The factor $1+t$ contributes $1$, so we can concentrate on the fraction. The numerator is $e^{t\log(t+1)}-1$ and can be expanded like $$ 1+t\log(1+t)+o(t\log(1+t))-1=t^2+o(t^2) $$ whereas the denominator is $$ \log(1+t)-t=t-\frac{t^2}{2}+o(t^2)-t=-\frac{t^2}{2}+o(t^2) $$ so the limit is $-2$.</p> Answer 24: <p>This is a footnote about the method used in some of the answers already given.</p> <p>The functions $f(x)=x^x-x$ and $g(x)=1-x+\ln x$ are infinitely differentiable for $x&gt;0$. 
So $f'''$ and $g'''$ are continuous for $x&gt;0.$</p> <p>So $A=\sup \{|f'''(x)|:x\in [1/2,2]\}&lt;\infty$ and $B=\sup \{|g'''(x)|:x\in [1/2,2]\}&lt;\infty.$ </p> <p>Let $M=\max (A,B).$ </p> <p>For $1+y\in [1/2,2]$ we have $$f(1+y)=f(1)+yf'(1)+y^2f''(1)/2!+y^3f'''(c)/3!$$ for some $c$ such that $|c|\leq |y|.$ So $$|y^3f'''(c)/3!|\leq |y|^3M/3!.$$ Similarly for $g.$ The key is that $ M&lt;\infty$ so we have $$f(1+y)=f(1)+yf'(1)+y^2f''(1)/2+O(y^3)$$ as $y\to 0.$ </p> <p>The choice of the interval $[1/2,2]$ is arbitrary. We could instead take any $[a,b]$ with $0&lt;a&lt;1&lt;b&lt;\infty.$</p> Answer 25: <p>You mention allowing series expansions. This problem is trivial with them. $$\lim_{x\to 1}\dfrac{x^x-x}{1-x+\ln(x)}$$ So that I can use McLaurin series I already know, I am going to enforce the substitution $u = x-1$ $$=\lim_{u\to 0}\dfrac{(u+1)^{u+1}-u-1}{\ln(u+1)-u}$$ $$=\lim_{u\to 0}\dfrac{[1+u+u^2+O(u^3)]-u-1}{[u-\frac{u^2}{2}+O(u^3)]-u}$$ $$=\lim_{u\to 0}\dfrac{u^2+O(u^3)}{-\frac{u^2}{2}+O(u^3)}$$ $$=\lim_{u\to 0}\dfrac{u^2}{-\frac{u^2}{2}}$$ $$=\color{red}{-2}$$</p> Answer 26: <p>$y=\ln(x)$ actually works </p> <p>$$\lim_{x\to 1}\dfrac{x^x-x}{1-x+\ln(x)}=\lim_{y \to 0}\dfrac{e^{ye^y}-e^y}{1-e^{y}+y}=\lim_{y \to 0}e^{y}\dfrac{e^{y(e^y-1)}-1}{1-e^{y}+y}=\lim_{y \to 0}\dfrac{e^{y(e^y-1)}-1}{1-e^{y}+y}$$</p> <p>Now, by the definition of the derivative $$\lim_{z \to 0}\frac{e^z-1}{z}=1$$</p> <p>Replacing $z=y(e^y-1)$ you get $$\lim_{y \to 0}\frac{e^{y(e^y-1)}-1}{y(e^y-1)}=1$$</p> <p>Therefore $$\lim_{y \to 0}\frac{e^{y(e^y-1)}-1}{1+y-e^y}=\lim_{y \to 0}\frac{e^{y(e^y-1)}-1}{y(e^y-1)}\frac{y(e^y-1)}{1+y-e^y}=\lim_{y \to 0}\frac{y(e^y-1)}{1+y-e^y}$$</p> <p>This last limit is much easier and can be calculated easily, either with power series, or by calculating $$ \lim_{y \to 0}\frac{y(e^y-1)}{1+y-e^y}=\lim_{y \to 0}\frac{y(e^y-1)}{y^2}\frac{y^2}{1+y-e^y}=\lim_{y \to 0}\frac{e^y-1}{y}\frac{y^2}{1+y-e^y}=\lim_{y \to 0}\frac{y^2}{1+y-e^y} $$ which is pretty standard.</p> 
Answer 27: <p>You can collect a factor $x$ from the numerator and then do $x=t+1$, so the limit becomes $$ \lim_{t\to0}(1+t)\frac{(t+1)^t-1}{\log(1+t)-t} $$ (surely Euler didn't use “ln”).</p> <p>The factor $1+t$ contributes $1$, so we can concentrate on the fraction. The numerator is $e^{t\log(t+1)}-1$ and can be expanded like $$ 1+t\log(1+t)+o(t\log(1+t))-1=t^2+o(t^2) $$ whereas the denominator is $$ \log(1+t)-t=t-\frac{t^2}{2}+o(t^2)-t=-\frac{t^2}{2}+o(t^2) $$ so the limit is $-2$.</p> Answer 28: <p>This is a footnote about the method used in some of the answers already given.</p> <p>The functions $f(x)=x^x-x$ and $g(x)=1-x+\ln x$ are infinitely differentiable for $x&gt;0$. So $f'''$ and $g'''$ are continuous for $x&gt;0.$</p> <p>So $A=\sup \{|f'''(x)|:x\in [1/2,2]\}&lt;\infty$ and $B=\sup \{|g'''(x)|:x\in [1/2,2]\}&lt;\infty.$ </p> <p>Let $M=\max (A,B).$ </p> <p>For $1+y\in [1/2,2]$ we have $$f(1+y)=f(1)+yf'(1)+y^2f''(1)/2!+y^3f'''(c)/3!$$ for some $c$ such that $|c|\leq |y|.$ So $$|y^3f'''(c)/3!|\leq |y|^3M/3!.$$ Similarly for $g.$ The key is that $ M&lt;\infty$ so we have $$f(1+y)=f(1)+yf'(1)+y^2f''(1)/2+O(y^3)$$ as $y\to 0.$ </p> <p>The choice of the interval $[1/2,2]$ is arbitrary. We could instead take any $[a,b]$ with $0&lt;a&lt;1&lt;b&lt;\infty.$</p> Answer 29: <p>You mention allowing series expansions. This problem is trivial with them. 
$$\lim_{x\to 1}\dfrac{x^x-x}{1-x+\ln(x)}$$ So that I can use McLaurin series I already know, I am going to enforce the substitution $u = x-1$ $$=\lim_{u\to 0}\dfrac{(u+1)^{u+1}-u-1}{\ln(u+1)-u}$$ $$=\lim_{u\to 0}\dfrac{[1+u+u^2+O(u^3)]-u-1}{[u-\frac{u^2}{2}+O(u^3)]-u}$$ $$=\lim_{u\to 0}\dfrac{u^2+O(u^3)}{-\frac{u^2}{2}+O(u^3)}$$ $$=\lim_{u\to 0}\dfrac{u^2}{-\frac{u^2}{2}}$$ $$=\color{red}{-2}$$</p> Answer 30: <p>$y=\ln(x)$ actually works </p> <p>$$\lim_{x\to 1}\dfrac{x^x-x}{1-x+\ln(x)}=\lim_{y \to 0}\dfrac{e^{ye^y}-e^y}{1-e^{y}+y}=\lim_{y \to 0}e^{y}\dfrac{e^{y(e^y-1)}-1}{1-e^{y}+y}=\lim_{y \to 0}\dfrac{e^{y(e^y-1)}-1}{1-e^{y}+y}$$</p> <p>Now, by the definition of the derivative $$\lim_{z \to 0}\frac{e^z-1}{z}=1$$</p> <p>Replacing $z=y(e^y-1)$ you get $$\lim_{y \to 0}\frac{e^{y(e^y-1)}-1}{y(e^y-1)}=1$$</p> <p>Therefore $$\lim_{y \to 0}\frac{e^{y(e^y-1)}-1}{1+y-e^y}=\lim_{y \to 0}\frac{e^{y(e^y-1)}-1}{y(e^y-1)}\frac{y(e^y-1)}{1+y-e^y}=\lim_{y \to 0}\frac{y(e^y-1)}{1+y-e^y}$$</p> <p>This last limit is much easier and can be calculated easily, either with power series, or by calculating $$ \lim_{y \to 0}\frac{y(e^y-1)}{1+y-e^y}=\lim_{y \to 0}\frac{y(e^y-1)}{y^2}\frac{y^2}{1+y-e^y}=\lim_{y \to 0}\frac{e^y-1}{y}\frac{y^2}{1+y-e^y}=\lim_{y \to 0}\frac{y^2}{1+y-e^y} $$ which is pretty standard.</p> Answer 31: <p>You can collect a factor $x$ from the numerator and then do $x=t+1$, so the limit becomes $$ \lim_{t\to0}(1+t)\frac{(t+1)^t-1}{\log(1+t)-t} $$ (surely Euler didn't use “ln”).</p> <p>The factor $1+t$ contributes $1$, so we can concentrate on the fraction. The numerator is $e^{t\log(t+1)}-1$ and can be expanded like $$ 1+t\log(1+t)+o(t\log(1+t))-1=t^2+o(t^2) $$ whereas the denominator is $$ \log(1+t)-t=t-\frac{t^2}{2}+o(t^2)-t=-\frac{t^2}{2}+o(t^2) $$ so the limit is $-2$.</p> Answer 32: <p>This is a footnote about the method used in some of the answers already given.</p> <p>The functions $f(x)=x^x-x$ and $g(x)=1-x+\ln x$ are infinitely differentiable for $x&gt;0$. 
So $f'''$ and $g'''$ are continuous for $x&gt;0.$</p> <p>So $A=\sup \{|f'''(x)|:x\in [1/2,2]\}&lt;\infty$ and $B=\sup \{|g'''(x)|:x\in [1/2,2]\}&lt;\infty.$ </p> <p>Let $M=\max (A,B).$ </p> <p>For $1+y\in [1/2,2]$ we have $$f(1+y)=f(1)+yf'(1)+y^2f''(1)/2!+y^3f'''(c)/3!$$ for some $c$ such that $|c|\leq |y|.$ So $$|y^3f'''(c)/3!|\leq |y|^3M/3!.$$ Similarly for $g.$ The key is that $ M&lt;\infty$ so we have $$f(1+y)=f(1)+yf'(1)+y^2f''(1)/2+O(y^3)$$ as $y\to 0.$ </p> <p>The choice of the interval $[1/2,2]$ is arbitrary. We could instead take any $[a,b]$ with $0&lt;a&lt;1&lt;b&lt;\infty.$</p> Answer 33: <p>You mention allowing series expansions. This problem is trivial with them. $$\lim_{x\to 1}\dfrac{x^x-x}{1-x+\ln(x)}$$ So that I can use McLaurin series I already know, I am going to enforce the substitution $u = x-1$ $$=\lim_{u\to 0}\dfrac{(u+1)^{u+1}-u-1}{\ln(u+1)-u}$$ $$=\lim_{u\to 0}\dfrac{[1+u+u^2+O(u^3)]-u-1}{[u-\frac{u^2}{2}+O(u^3)]-u}$$ $$=\lim_{u\to 0}\dfrac{u^2+O(u^3)}{-\frac{u^2}{2}+O(u^3)}$$ $$=\lim_{u\to 0}\dfrac{u^2}{-\frac{u^2}{2}}$$ $$=\color{red}{-2}$$</p> Answer 34: <p>$y=\ln(x)$ actually works </p> <p>$$\lim_{x\to 1}\dfrac{x^x-x}{1-x+\ln(x)}=\lim_{y \to 0}\dfrac{e^{ye^y}-e^y}{1-e^{y}+y}=\lim_{y \to 0}e^{y}\dfrac{e^{y(e^y-1)}-1}{1-e^{y}+y}=\lim_{y \to 0}\dfrac{e^{y(e^y-1)}-1}{1-e^{y}+y}$$</p> <p>Now, by the definition of the derivative $$\lim_{z \to 0}\frac{e^z-1}{z}=1$$</p> <p>Replacing $z=y(e^y-1)$ you get $$\lim_{y \to 0}\frac{e^{y(e^y-1)}-1}{y(e^y-1)}=1$$</p> <p>Therefore $$\lim_{y \to 0}\frac{e^{y(e^y-1)}-1}{1+y-e^y}=\lim_{y \to 0}\frac{e^{y(e^y-1)}-1}{y(e^y-1)}\frac{y(e^y-1)}{1+y-e^y}=\lim_{y \to 0}\frac{y(e^y-1)}{1+y-e^y}$$</p> <p>This last limit is much easier and can be calculated easily, either with power series, or by calculating $$ \lim_{y \to 0}\frac{y(e^y-1)}{1+y-e^y}=\lim_{y \to 0}\frac{y(e^y-1)}{y^2}\frac{y^2}{1+y-e^y}=\lim_{y \to 0}\frac{e^y-1}{y}\frac{y^2}{1+y-e^y}=\lim_{y \to 0}\frac{y^2}{1+y-e^y} $$ which is pretty standard.</p> 
Answer 35: <p>You can collect a factor $x$ from the numerator and then do $x=t+1$, so the limit becomes $$ \lim_{t\to0}(1+t)\frac{(t+1)^t-1}{\log(1+t)-t} $$ (surely Euler didn't use “ln”).</p> <p>The factor $1+t$ contributes $1$, so we can concentrate on the fraction. The numerator is $e^{t\log(t+1)}-1$ and can be expanded like $$ 1+t\log(1+t)+o(t\log(1+t))-1=t^2+o(t^2) $$ whereas the denominator is $$ \log(1+t)-t=t-\frac{t^2}{2}+o(t^2)-t=-\frac{t^2}{2}+o(t^2) $$ so the limit is $-2$.</p> Answer 36: <p>This is a footnote about the method used in some of the answers already given.</p> <p>The functions $f(x)=x^x-x$ and $g(x)=1-x+\ln x$ are infinitely differentiable for $x&gt;0$. So $f'''$ and $g'''$ are continuous for $x&gt;0.$</p> <p>So $A=\sup \{|f'''(x)|:x\in [1/2,2]\}&lt;\infty$ and $B=\sup \{|g'''(x)|:x\in [1/2,2]\}&lt;\infty.$ </p> <p>Let $M=\max (A,B).$ </p> <p>For $1+y\in [1/2,2]$ we have $$f(1+y)=f(1)+yf'(1)+y^2f''(1)/2!+y^3f'''(c)/3!$$ for some $c$ such that $|c|\leq |y|.$ So $$|y^3f'''(c)/3!|\leq |y|^3M/3!.$$ Similarly for $g.$ The key is that $ M&lt;\infty$ so we have $$f(1+y)=f(1)+yf'(1)+y^2f''(1)/2+O(y^3)$$ as $y\to 0.$ </p> <p>The choice of the interval $[1/2,2]$ is arbitrary. We could instead take any $[a,b]$ with $0&lt;a&lt;1&lt;b&lt;\infty.$</p> Answer 37: <p>You mention allowing series expansions. This problem is trivial with them. 
$$\lim_{x\to 1}\dfrac{x^x-x}{1-x+\ln(x)}$$ So that I can use McLaurin series I already know, I am going to enforce the substitution $u = x-1$ $$=\lim_{u\to 0}\dfrac{(u+1)^{u+1}-u-1}{\ln(u+1)-u}$$ $$=\lim_{u\to 0}\dfrac{[1+u+u^2+O(u^3)]-u-1}{[u-\frac{u^2}{2}+O(u^3)]-u}$$ $$=\lim_{u\to 0}\dfrac{u^2+O(u^3)}{-\frac{u^2}{2}+O(u^3)}$$ $$=\lim_{u\to 0}\dfrac{u^2}{-\frac{u^2}{2}}$$ $$=\color{red}{-2}$$</p> Answer 38: <p>$y=\ln(x)$ actually works </p> <p>$$\lim_{x\to 1}\dfrac{x^x-x}{1-x+\ln(x)}=\lim_{y \to 0}\dfrac{e^{ye^y}-e^y}{1-e^{y}+y}=\lim_{y \to 0}e^{y}\dfrac{e^{y(e^y-1)}-1}{1-e^{y}+y}=\lim_{y \to 0}\dfrac{e^{y(e^y-1)}-1}{1-e^{y}+y}$$</p> <p>Now, by the definition of the derivative $$\lim_{z \to 0}\frac{e^z-1}{z}=1$$</p> <p>Replacing $z=y(e^y-1)$ you get $$\lim_{y \to 0}\frac{e^{y(e^y-1)}-1}{y(e^y-1)}=1$$</p> <p>Therefore $$\lim_{y \to 0}\frac{e^{y(e^y-1)}-1}{1+y-e^y}=\lim_{y \to 0}\frac{e^{y(e^y-1)}-1}{y(e^y-1)}\frac{y(e^y-1)}{1+y-e^y}=\lim_{y \to 0}\frac{y(e^y-1)}{1+y-e^y}$$</p> <p>This last limit is much easier and can be calculated easily, either with power series, or by calculating $$ \lim_{y \to 0}\frac{y(e^y-1)}{1+y-e^y}=\lim_{y \to 0}\frac{y(e^y-1)}{y^2}\frac{y^2}{1+y-e^y}=\lim_{y \to 0}\frac{e^y-1}{y}\frac{y^2}{1+y-e^y}=\lim_{y \to 0}\frac{y^2}{1+y-e^y} $$ which is pretty standard.</p> Answer 39: <p>You can collect a factor $x$ from the numerator and then do $x=t+1$, so the limit becomes $$ \lim_{t\to0}(1+t)\frac{(t+1)^t-1}{\log(1+t)-t} $$ (surely Euler didn't use “ln”).</p> <p>The factor $1+t$ contributes $1$, so we can concentrate on the fraction. The numerator is $e^{t\log(t+1)}-1$ and can be expanded like $$ 1+t\log(1+t)+o(t\log(1+t))-1=t^2+o(t^2) $$ whereas the denominator is $$ \log(1+t)-t=t-\frac{t^2}{2}+o(t^2)-t=-\frac{t^2}{2}+o(t^2) $$ so the limit is $-2$.</p> Answer 40: <p>This is a footnote about the method used in some of the answers already given.</p> <p>The functions $f(x)=x^x-x$ and $g(x)=1-x+\ln x$ are infinitely differentiable for $x&gt;0$. 
So $f'''$ and $g'''$ are continuous for $x&gt;0.$</p> <p>So $A=\sup \{|f'''(x)|:x\in [1/2,2]\}&lt;\infty$ and $B=\sup \{|g'''(x)|:x\in [1/2,2]\}&lt;\infty.$ </p> <p>Let $M=\max (A,B).$ </p> <p>For $1+y\in [1/2,2]$ we have $$f(1+y)=f(1)+yf'(1)+y^2f''(1)/2!+y^3f'''(c)/3!$$ for some $c$ such that $|c|\leq |y|.$ So $$|y^3f'''(c)/3!|\leq |y|^3M/3!.$$ Similarly for $g.$ The key is that $ M&lt;\infty$ so we have $$f(1+y)=f(1)+yf'(1)+y^2f''(1)/2+O(y^3)$$ as $y\to 0.$ </p> <p>The choice of the interval $[1/2,2]$ is arbitrary. We could instead take any $[a,b]$ with $0&lt;a&lt;1&lt;b&lt;\infty.$</p> Answer 41: <p>You mention allowing series expansions. This problem is trivial with them. $$\lim_{x\to 1}\dfrac{x^x-x}{1-x+\ln(x)}$$ So that I can use McLaurin series I already know, I am going to enforce the substitution $u = x-1$ $$=\lim_{u\to 0}\dfrac{(u+1)^{u+1}-u-1}{\ln(u+1)-u}$$ $$=\lim_{u\to 0}\dfrac{[1+u+u^2+O(u^3)]-u-1}{[u-\frac{u^2}{2}+O(u^3)]-u}$$ $$=\lim_{u\to 0}\dfrac{u^2+O(u^3)}{-\frac{u^2}{2}+O(u^3)}$$ $$=\lim_{u\to 0}\dfrac{u^2}{-\frac{u^2}{2}}$$ $$=\color{red}{-2}$$</p> Answer 42: <p>$y=\ln(x)$ actually works </p> <p>$$\lim_{x\to 1}\dfrac{x^x-x}{1-x+\ln(x)}=\lim_{y \to 0}\dfrac{e^{ye^y}-e^y}{1-e^{y}+y}=\lim_{y \to 0}e^{y}\dfrac{e^{y(e^y-1)}-1}{1-e^{y}+y}=\lim_{y \to 0}\dfrac{e^{y(e^y-1)}-1}{1-e^{y}+y}$$</p> <p>Now, by the definition of the derivative $$\lim_{z \to 0}\frac{e^z-1}{z}=1$$</p> <p>Replacing $z=y(e^y-1)$ you get $$\lim_{y \to 0}\frac{e^{y(e^y-1)}-1}{y(e^y-1)}=1$$</p> <p>Therefore $$\lim_{y \to 0}\frac{e^{y(e^y-1)}-1}{1+y-e^y}=\lim_{y \to 0}\frac{e^{y(e^y-1)}-1}{y(e^y-1)}\frac{y(e^y-1)}{1+y-e^y}=\lim_{y \to 0}\frac{y(e^y-1)}{1+y-e^y}$$</p> <p>This last limit is much easier and can be calculated easily, either with power series, or by calculating $$ \lim_{y \to 0}\frac{y(e^y-1)}{1+y-e^y}=\lim_{y \to 0}\frac{y(e^y-1)}{y^2}\frac{y^2}{1+y-e^y}=\lim_{y \to 0}\frac{e^y-1}{y}\frac{y^2}{1+y-e^y}=\lim_{y \to 0}\frac{y^2}{1+y-e^y} $$ which is pretty standard.</p> 
Answer 43: <p>You can collect a factor $x$ from the numerator and then do $x=t+1$, so the limit becomes $$ \lim_{t\to0}(1+t)\frac{(t+1)^t-1}{\log(1+t)-t} $$ (surely Euler didn't use “ln”).</p> <p>The factor $1+t$ contributes $1$, so we can concentrate on the fraction. The numerator is $e^{t\log(t+1)}-1$ and can be expanded like $$ 1+t\log(1+t)+o(t\log(1+t))-1=t^2+o(t^2) $$ whereas the denominator is $$ \log(1+t)-t=t-\frac{t^2}{2}+o(t^2)-t=-\frac{t^2}{2}+o(t^2) $$ so the limit is $-2$.</p> Answer 44: <p>This is a footnote about the method used in some of the answers already given.</p> <p>The functions $f(x)=x^x-x$ and $g(x)=1-x+\ln x$ are infinitely differentiable for $x&gt;0$. So $f'''$ and $g'''$ are continuous for $x&gt;0.$</p> <p>So $A=\sup \{|f'''(x)|:x\in [1/2,2]\}&lt;\infty$ and $B=\sup \{|g'''(x)|:x\in [1/2,2]\}&lt;\infty.$ </p> <p>Let $M=\max (A,B).$ </p> <p>For $1+y\in [1/2,2]$ we have $$f(1+y)=f(1)+yf'(1)+y^2f''(1)/2!+y^3f'''(c)/3!$$ for some $c$ such that $|c|\leq |y|.$ So $$|y^3f'''(c)/3!|\leq |y|^3M/3!.$$ Similarly for $g.$ The key is that $ M&lt;\infty$ so we have $$f(1+y)=f(1)+yf'(1)+y^2f''(1)/2+O(y^3)$$ as $y\to 0.$ </p> <p>The choice of the interval $[1/2,2]$ is arbitrary. We could instead take any $[a,b]$ with $0&lt;a&lt;1&lt;b&lt;\infty.$</p> Answer 45: <p>You mention allowing series expansions. This problem is trivial with them. 
$$\lim_{x\to 1}\dfrac{x^x-x}{1-x+\ln(x)}$$ So that I can use McLaurin series I already know, I am going to enforce the substitution $u = x-1$ $$=\lim_{u\to 0}\dfrac{(u+1)^{u+1}-u-1}{\ln(u+1)-u}$$ $$=\lim_{u\to 0}\dfrac{[1+u+u^2+O(u^3)]-u-1}{[u-\frac{u^2}{2}+O(u^3)]-u}$$ $$=\lim_{u\to 0}\dfrac{u^2+O(u^3)}{-\frac{u^2}{2}+O(u^3)}$$ $$=\lim_{u\to 0}\dfrac{u^2}{-\frac{u^2}{2}}$$ $$=\color{red}{-2}$$</p> Answer 46: <p>$y=\ln(x)$ actually works </p> <p>$$\lim_{x\to 1}\dfrac{x^x-x}{1-x+\ln(x)}=\lim_{y \to 0}\dfrac{e^{ye^y}-e^y}{1-e^{y}+y}=\lim_{y \to 0}e^{y}\dfrac{e^{y(e^y-1)}-1}{1-e^{y}+y}=\lim_{y \to 0}\dfrac{e^{y(e^y-1)}-1}{1-e^{y}+y}$$</p> <p>Now, by the definition of the derivative $$\lim_{z \to 0}\frac{e^z-1}{z}=1$$</p> <p>Replacing $z=y(e^y-1)$ you get $$\lim_{y \to 0}\frac{e^{y(e^y-1)}-1}{y(e^y-1)}=1$$</p> <p>Therefore $$\lim_{y \to 0}\frac{e^{y(e^y-1)}-1}{1+y-e^y}=\lim_{y \to 0}\frac{e^{y(e^y-1)}-1}{y(e^y-1)}\frac{y(e^y-1)}{1+y-e^y}=\lim_{y \to 0}\frac{y(e^y-1)}{1+y-e^y}$$</p> <p>This last limit is much easier and can be calculated easily, either with power series, or by calculating $$ \lim_{y \to 0}\frac{y(e^y-1)}{1+y-e^y}=\lim_{y \to 0}\frac{y(e^y-1)}{y^2}\frac{y^2}{1+y-e^y}=\lim_{y \to 0}\frac{e^y-1}{y}\frac{y^2}{1+y-e^y}=\lim_{y \to 0}\frac{y^2}{1+y-e^y} $$ which is pretty standard.</p> Answer 47: <p>You can collect a factor $x$ from the numerator and then do $x=t+1$, so the limit becomes $$ \lim_{t\to0}(1+t)\frac{(t+1)^t-1}{\log(1+t)-t} $$ (surely Euler didn't use “ln”).</p> <p>The factor $1+t$ contributes $1$, so we can concentrate on the fraction. The numerator is $e^{t\log(t+1)}-1$ and can be expanded like $$ 1+t\log(1+t)+o(t\log(1+t))-1=t^2+o(t^2) $$ whereas the denominator is $$ \log(1+t)-t=t-\frac{t^2}{2}+o(t^2)-t=-\frac{t^2}{2}+o(t^2) $$ so the limit is $-2$.</p> Answer 48: <p>This is a footnote about the method used in some of the answers already given.</p> <p>The functions $f(x)=x^x-x$ and $g(x)=1-x+\ln x$ are infinitely differentiable for $x&gt;0$. 
So $f'''$ and $g'''$ are continuous for $x&gt;0.$</p> <p>So $A=\sup \{|f'''(x)|:x\in [1/2,2]\}&lt;\infty$ and $B=\sup \{|g'''(x)|:x\in [1/2,2]\}&lt;\infty.$ </p> <p>Let $M=\max (A,B).$ </p> <p>For $1+y\in [1/2,2]$ we have $$f(1+y)=f(1)+yf'(1)+y^2f''(1)/2!+y^3f'''(c)/3!$$ for some $c$ such that $|c|\leq |y|.$ So $$|y^3f'''(c)/3!|\leq |y|^3M/3!.$$ Similarly for $g.$ The key is that $ M&lt;\infty$ so we have $$f(1+y)=f(1)+yf'(1)+y^2f''(1)/2+O(y^3)$$ as $y\to 0.$ </p> <p>The choice of the interval $[1/2,2]$ is arbitrary. We could instead take any $[a,b]$ with $0&lt;a&lt;1&lt;b&lt;\infty.$</p> Answer 49: <p>You mention allowing series expansions. This problem is trivial with them. $$\lim_{x\to 1}\dfrac{x^x-x}{1-x+\ln(x)}$$ So that I can use McLaurin series I already know, I am going to enforce the substitution $u = x-1$ $$=\lim_{u\to 0}\dfrac{(u+1)^{u+1}-u-1}{\ln(u+1)-u}$$ $$=\lim_{u\to 0}\dfrac{[1+u+u^2+O(u^3)]-u-1}{[u-\frac{u^2}{2}+O(u^3)]-u}$$ $$=\lim_{u\to 0}\dfrac{u^2+O(u^3)}{-\frac{u^2}{2}+O(u^3)}$$ $$=\lim_{u\to 0}\dfrac{u^2}{-\frac{u^2}{2}}$$ $$=\color{red}{-2}$$</p> Answer 50: <p>$y=\ln(x)$ actually works </p> <p>$$\lim_{x\to 1}\dfrac{x^x-x}{1-x+\ln(x)}=\lim_{y \to 0}\dfrac{e^{ye^y}-e^y}{1-e^{y}+y}=\lim_{y \to 0}e^{y}\dfrac{e^{y(e^y-1)}-1}{1-e^{y}+y}=\lim_{y \to 0}\dfrac{e^{y(e^y-1)}-1}{1-e^{y}+y}$$</p> <p>Now, by the definition of the derivative $$\lim_{z \to 0}\frac{e^z-1}{z}=1$$</p> <p>Replacing $z=y(e^y-1)$ you get $$\lim_{y \to 0}\frac{e^{y(e^y-1)}-1}{y(e^y-1)}=1$$</p> <p>Therefore $$\lim_{y \to 0}\frac{e^{y(e^y-1)}-1}{1+y-e^y}=\lim_{y \to 0}\frac{e^{y(e^y-1)}-1}{y(e^y-1)}\frac{y(e^y-1)}{1+y-e^y}=\lim_{y \to 0}\frac{y(e^y-1)}{1+y-e^y}$$</p> <p>This last limit is much easier and can be calculated easily, either with power series, or by calculating $$ \lim_{y \to 0}\frac{y(e^y-1)}{1+y-e^y}=\lim_{y \to 0}\frac{y(e^y-1)}{y^2}\frac{y^2}{1+y-e^y}=\lim_{y \to 0}\frac{e^y-1}{y}\frac{y^2}{1+y-e^y}=\lim_{y \to 0}\frac{y^2}{1+y-e^y} $$ which is pretty standard.</p> 
Answer 51: <p>You can collect a factor $x$ from the numerator and then do $x=t+1$, so the limit becomes $$ \lim_{t\to0}(1+t)\frac{(t+1)^t-1}{\log(1+t)-t} $$ (surely Euler didn't use “ln”).</p> <p>The factor $1+t$ contributes $1$, so we can concentrate on the fraction. The numerator is $e^{t\log(t+1)}-1$ and can be expanded like $$ 1+t\log(1+t)+o(t\log(1+t))-1=t^2+o(t^2) $$ whereas the denominator is $$ \log(1+t)-t=t-\frac{t^2}{2}+o(t^2)-t=-\frac{t^2}{2}+o(t^2) $$ so the limit is $-2$.</p> Answer 52: <p>This is a footnote about the method used in some of the answers already given.</p> <p>The functions $f(x)=x^x-x$ and $g(x)=1-x+\ln x$ are infinitely differentiable for $x&gt;0$. So $f'''$ and $g'''$ are continuous for $x&gt;0.$</p> <p>So $A=\sup \{|f'''(x)|:x\in [1/2,2]\}&lt;\infty$ and $B=\sup \{|g'''(x)|:x\in [1/2,2]\}&lt;\infty.$ </p> <p>Let $M=\max (A,B).$ </p> <p>For $1+y\in [1/2,2]$ we have $$f(1+y)=f(1)+yf'(1)+y^2f''(1)/2!+y^3f'''(c)/3!$$ for some $c$ between $1$ and $1+y.$ So $$|y^3f'''(c)/3!|\leq |y|^3M/3!.$$ Similarly for $g.$ The key is that $ M&lt;\infty$ so we have $$f(1+y)=f(1)+yf'(1)+y^2f''(1)/2+O(y^3)$$ as $y\to 0.$ </p> <p>The choice of the interval $[1/2,2]$ is arbitrary. We could instead take any $[a,b]$ with $0&lt;a&lt;1&lt;b&lt;\infty.$</p>
On the integral $\int\sqrt{1+\cos x}\text{d} x$
2,117,178
8
https://math.stackexchange.com/questions/2117178/on-the-integral-int-sqrt1-cos-x-textd-x
<p>I've been wondering about the following integral recently: $$ I = \int\sqrt{1+\cos x}\,\text{d}x $$ The way I integrated it is I used the identity $\sqrt{1+\cos x} = \sqrt2\cos\frac x2$, and so $$ I = 2\sqrt2\sin\frac x2 + C $$</p> <p>the problem is that $\sqrt{1+\cos x}$ is actually integrable over the entire real line, but the derivative of $2\sqrt2\sin\frac x2$ is only equal to $\sqrt{1+\cos x}$ in certain intervals. This is because the <em>actual</em> identity is $\sqrt{1+\cos x} = \sqrt2\left|\cos\frac x2\right|$. Now, I wasn't exactly sure how to integrate the absolute value, so I thought I would "fix" the function after integration.</p> <p>The first fix we can do is making sure that the sign of the result of integration is correct: $$ I = 2\sqrt2\sin\frac x2\text{sgn}\cos\frac x2 + C $$ The problem with just $\sin\frac x2$ was that sometimes it was "flipped", on certain intervals it was actually the antiderivative of the $-\sqrt{1+\cos x}$ (this is because we dropped the absolute value signs).</p> <p>There is one further problem with this, however. The above function is not continuous, meaning it's not continuously differentiable with derivative equal to $\sqrt{1+\cos x}$ everywhere. Namely, it is discontinuous at $x=(4n\pm1)\pi$ where $n\in\mathbb{Z}$.</p> <p>I noticed, however, that the limit of the derivative on either side of $x=(4n\pm1)\pi$ existed and was equal to each other. Hence, I figured that I could somehow "stitch" these continuous sections end to end and get a continuous result whose derivative was $\sqrt{1+\cos x}$ everywhere. The resulting function I got was $$ I = 2\sqrt2\sin\frac x2\text{sgn}\cos\frac x2 + 4\sqrt2\left\lfloor\frac1{2\pi}x+\frac12\right\rfloor + C $$</p> <p>Now, I was wondering, is there any way to arrive at this result just by integrating $\sqrt{1+\cos x}$ using the usual techniques? 
The method I used can be boiled down to "integrating" then "fixing", but I'm just wondering if you can arrive at a result that is continuous and differentiable on the entire real line by doing <em>just</em> the "integrating" part.</p> <p>Any help would be appreciated. Thanks!</p> <p><strong>Edit:</strong> To be clear, I'm not looking for <em>exactly</em> the function above, but rather simply any function that is "nice", continuous and differentiable on $\mathbb{R}$, and has derivative equal to $\sqrt{1+\cos x}$ everywhere, and which is attainable through "simple" integration methods.</p>
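As a sanity check on the "stitched" antiderivative proposed in the question, one can compare a central finite difference of it against $\sqrt{1+\cos x}$ on a grid that spans several of the seam points $x=(4n\pm1)\pi$. This is only a sketch (Python standard library; the names `F`, `sgn`, `integrand` and the grid spacing are my own choices, not part of the question):

```python
import math

SQRT2 = math.sqrt(2)

def sgn(t):
    return (t > 0) - (t < 0)

def F(x):
    # The question's stitched antiderivative:
    # 2*sqrt(2)*sin(x/2)*sgn(cos(x/2)) + 4*sqrt(2)*floor(x/(2*pi) + 1/2)
    return (2 * SQRT2 * math.sin(x / 2) * sgn(math.cos(x / 2))
            + 4 * SQRT2 * math.floor(x / (2 * math.pi) + 0.5))

def integrand(x):
    return math.sqrt(1 + math.cos(x))

h = 1e-6
for k in range(-40, 41):
    x = 0.7 * k  # grid crossing several seam points x = (4n±1)π
    d = (F(x + h) - F(x - h)) / (2 * h)  # central difference
    assert abs(d - integrand(x)) < 1e-4
```

Over one full period the increase of `F` equals $\int_0^{2\pi}\sqrt{1+\cos x}\,dx = 4\sqrt2$, which is another quick consistency check.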
[ { "answer_id": 2117194, "score": 2, "text": "<p>The function to be integrated is\n$$\n\\sqrt{1+\\cos x}=\\sqrt{2}\\left|\\cos\\frac{x}{2}\\right|\n$$\nso we can as well consider the simpler\n$$\n\\int\\lvert\\cos t\\rvert\\,dt\n$$\nor, with $t=u+\\pi/2$, the even simpler\n$$\n\\int|\\sin u|\\,du\n$$\nOne antiderivative is\n$$\n\\int_0^u\\lvert\\sin v\\rvert\\,dv\n$$\nNote that the function has period $\\pi$, so let $u=k\\pi+u_0$, with $0\\le u_0&lt;\\pi$. Then\n$$\n\\int_0^u\\lvert\\sin v\\rvert\\,dv=\n\\int_0^{k\\pi}\\lvert\\sin v\\rvert\\,dv+\n\\int_{k\\pi}^{k\\pi+u_0}\\lvert\\sin v\\rvert\\,dv\n\\\\=2k+\\int_0^{u_0}\\sin v\\,dv=\n2k+1-\\cos(u-k\\pi)\n$$\nNow do the back substitution; write $k=\\lfloor u/\\pi\\rfloor$, if you prefer.</p>\n" }, { "answer_id": 2117193, "score": 1, "text": "<p>Probably not what you intended, but it follows geometrically due to the period of $\\cos$ that</p>\n\n<p>$$\\int_0^x\\sqrt{1+\\cos(t)}\\ dt=\\left\\lfloor\\frac{x}{2\\pi}\\right\\rfloor\\int_0^{2\\pi}\\sqrt{1+\\cos(t)}\\ dt+\\int_0^{(x\\text{ mod }2\\pi)}\\sqrt{1+\\cos(t)}\\ dt$$</p>\n\n<p>That is, we really only need to take the integral for $x\\in[0,2\\pi)$ and we're set. To do this, let $t=\\arccos(u)$ with $x\\in[0,2\\pi)$.</p>\n\n<p>$$\\begin{align}\\int_0^x\\sqrt{1+\\cos(t)}\\ dt&amp;=-\\int_1^{\\cos(x)}\\sqrt{\\frac{1+u}{1-u^2}}\\ du\\\\&amp;=\\int_{\\cos(x)}^1\\frac1{\\sqrt{1-u}}\\ du\\\\&amp;=\\left.-2\\sqrt{1-u}\\vphantom{\\frac11}\\right|_{\\cos(x)}^1\\\\&amp;=2\\sqrt{1-\\cos(x)}\\end{align}$$</p>\n\n<p>which reduces to yours via trig identities. The extension to $x\\ge2\\pi$ via modular operations is also continuous, and the derivative is always $\\sqrt{1+\\cos(x)}$.</p>\n" }, { "answer_id": 4769941, "score": 0, "text": "<p><span class=\"math-container\">\\begin{equation}\n\\int \\sqrt{1+\\cos(x)} dx = \\int \\frac{1+\\cos(x)}{\\sqrt{1+\\cos(x)}} dx\n\\end{equation}</span></p>\n<p>Consider the substitution, <span class=\"math-container\">$u=1+\\cos(x)$</span>.
Then,</p>\n<p><span class=\"math-container\">\\begin{equation}\n-\\frac{du}{\\sqrt{2u-u^2}}=dx\n\\end{equation}</span></p>\n<p>Thus, the new equation in terms of <span class=\"math-container\">$u$</span> will be</p>\n<p><span class=\"math-container\">\\begin{equation}\nI= -\\int \\frac{\\sqrt{u}}{\\sqrt{2u-u^2}} du\n\\end{equation}</span></p>\n<p><span class=\"math-container\">\\begin{equation}\nI= -\\int \\frac{du}{\\sqrt{2-u}}\n\\end{equation}</span></p>\n<p>With the substitution, <span class=\"math-container\">$t=2-u$</span>, we get</p>\n<p><span class=\"math-container\">\\begin{equation}\nI=\\int \\frac{dt}{\\sqrt{t}}\n\\end{equation}</span></p>\n<p>By making substitutions for <span class=\"math-container\">$t(u)$</span> and <span class=\"math-container\">$u(x)$</span>, the final answer becomes</p>\n<p><span class=\"math-container\">\\begin{equation}\nI=2\\sqrt{1-\\cos(x)}+C.\n\\end{equation}</span></p>\n<p>Multiplying and dividing by <span class=\"math-container\">$\\sqrt{1+\\cos(x)}$</span> results in</p>\n<p><span class=\"math-container\">\\begin{equation}\n\\int \\sqrt{1+\\cos(x)} dx = \\frac{2 \\sqrt{{\\sin}^2(x)}}{\\sqrt{1+\\cos(x)}}+C.\n\\end{equation}</span></p>\n<p>However only when the term in numerator is taken to be <span class=\"math-container\">$\\sin(x)$</span> instead of absolute value of <span class=\"math-container\">$\\sin(x)$</span> the answer seems to be correct. The conclusion is drawn based on plotting the derivative of answer and comparing it with integrand.</p>\n" }, { "answer_id": 2117194, "score": 2, "text": "<p>The function to be integrated is\n$$\n\\sqrt{1+\\cos x}=\\sqrt{2}\\left|\\cos\\frac{x}{2}\\right|\n$$\nso we can as well consider the simpler\n$$\n\\int\\lvert\\cos t\\rvert\\,dt\n$$\nor, with $t=u+\\pi/2$, the even simpler\n$$\n\\int|\\sin u|\\,du\n$$\nOne antiderivative is\n$$\n\\int_0^u\\lvert\\sin v\\rvert\\,dv\n$$\nNote that the function has period $\\pi$, so let $u=k\\pi+u_0$, with $0\\le u_0&lt;\\pi$. 
Then\n$$\n\\int_0^u\\lvert\\sin v\\rvert\\,dv=\n\\int_0^{k\\pi}\\lvert\\sin v\\rvert\\,dv+\n\\int_{k\\pi}^{k\\pi+u_0}\\lvert\\sin v\\rvert\\,dv\n\\\\=2k+\\int_0^{u_0}\\sin v\\,dv=\n2k+1-\\cos(u-k\\pi)\n$$\nNow do the back substitution; write $k=u\\bmod\\pi$, if you prefer.</p>\n" }, { "answer_id": 2117193, "score": 1, "text": "<p>Probably not what you intended, but it follows geometrically due to the period of $\\cos$ that</p>\n\n<p>$$\\int_0^x\\sqrt{1+\\cos(t)}\\ dt=(x\\text{ mod }2\\pi)\\int_0^{2\\pi}\\sqrt{1+\\cos(t)}\\ dt+\\int_0^{(x\\text{ mod }2\\pi)}\\sqrt{1+\\cos(t)}\\ dt$$</p>\n\n<p>That is, we really only need to take the integral for $x\\in[0,2\\pi)$ and we're set. To do this, let $t=\\arccos(u)$ with $x\\in[0,2\\pi)$.</p>\n\n<p>$$\\begin{align}\\int_0^x\\sqrt{1+\\cos(t)}\\ dt&amp;=-\\int_1^{\\cos(x)}\\sqrt{\\frac{1+u}{1-u^2}}\\ du\\\\&amp;=\\int_{\\arccos(x)}^1\\frac1{\\sqrt{1-u}}\\ du\\\\&amp;=\\left.-2\\sqrt{1-u}\\vphantom{\\frac11}\\right|_{\\cos(x)}^1\\\\&amp;=2\\sqrt{1-\\cos(x)}\\end{align}$$</p>\n\n<p>which reduces to yours via trig identities. The extension to $x\\ge2\\pi$ via modular operations is also continuous, and the derivative is always $\\sqrt{1+\\cos(t)}$.</p>\n" }, { "answer_id": 4769941, "score": 0, "text": "<p><span class=\"math-container\">\\begin{equation}\n\\int \\sqrt{1+\\cos(x)} dx = \\int \\frac{1+\\cos(x)}{\\sqrt{1+\\cos(x)}} dx\n\\end{equation}</span></p>\n<p>Consider the substitution, <span class=\"math-container\">$u=1+\\cos(x)$</span>. 
Then,</p>\n<p><span class=\"math-container\">\\begin{equation}\n-\\frac{du}{\\sqrt{2u-u^2}}=dx\n\\end{equation}</span></p>\n<p>Thus, the new equation in terms of <span class=\"math-container\">$u$</span> will be</p>\n<p><span class=\"math-container\">\\begin{equation}\nI= -\\int \\frac{\\sqrt{u}}{\\sqrt{2u-u^2}} du\n\\end{equation}</span></p>\n<p><span class=\"math-container\">\\begin{equation}\nI= -\\int \\frac{du}{\\sqrt{2-u}}\n\\end{equation}</span></p>\n<p>With the substitution, <span class=\"math-container\">$t=2-u$</span>, we get</p>\n<p><span class=\"math-container\">\\begin{equation}\nI=\\int \\frac{dt}{\\sqrt{t}}\n\\end{equation}</span></p>\n<p>By making substitutions for <span class=\"math-container\">$t(u)$</span> and <span class=\"math-container\">$u(x)$</span>, the final answer becomes</p>\n<p><span class=\"math-container\">\\begin{equation}\nI=2\\sqrt{1-\\cos(x)}+C.\n\\end{equation}</span></p>\n<p>Multiplying and dividing by <span class=\"math-container\">$\\sqrt{1+\\cos(x)}$</span> results in</p>\n<p><span class=\"math-container\">\\begin{equation}\n\\int \\sqrt{1+\\cos(x)} dx = \\frac{2 \\sqrt{{\\sin}^2(x)}}{\\sqrt{1+\\cos(x)}}+C.\n\\end{equation}</span></p>\n<p>However only when the term in numerator is taken to be <span class=\"math-container\">$\\sin(x)$</span> instead of absolute value of <span class=\"math-container\">$\\sin(x)$</span> the answer seems to be correct. The conclusion is drawn based on plotting the derivative of answer and comparing it with integrand.</p>\n" }, { "answer_id": 2117194, "score": 2, "text": "<p>The function to be integrated is\n$$\n\\sqrt{1+\\cos x}=\\sqrt{2}\\left|\\cos\\frac{x}{2}\\right|\n$$\nso we can as well consider the simpler\n$$\n\\int\\lvert\\cos t\\rvert\\,dt\n$$\nor, with $t=u+\\pi/2$, the even simpler\n$$\n\\int|\\sin u|\\,du\n$$\nOne antiderivative is\n$$\n\\int_0^u\\lvert\\sin v\\rvert\\,dv\n$$\nNote that the function has period $\\pi$, so let $u=k\\pi+u_0$, with $0\\le u_0&lt;\\pi$. 
Then\n$$\n\\int_0^u\\lvert\\sin v\\rvert\\,dv=\n\\int_0^{k\\pi}\\lvert\\sin v\\rvert\\,dv+\n\\int_{k\\pi}^{k\\pi+u_0}\\lvert\\sin v\\rvert\\,dv\n\\\\=2k+\\int_0^{u_0}\\sin v\\,dv=\n2k+1-\\cos(u-k\\pi)\n$$\nNow do the back substitution; write $k=u\\bmod\\pi$, if you prefer.</p>\n" }, { "answer_id": 2117193, "score": 1, "text": "<p>Probably not what you intended, but it follows geometrically due to the period of $\\cos$ that</p>\n\n<p>$$\\int_0^x\\sqrt{1+\\cos(t)}\\ dt=(x\\text{ mod }2\\pi)\\int_0^{2\\pi}\\sqrt{1+\\cos(t)}\\ dt+\\int_0^{(x\\text{ mod }2\\pi)}\\sqrt{1+\\cos(t)}\\ dt$$</p>\n\n<p>That is, we really only need to take the integral for $x\\in[0,2\\pi)$ and we're set. To do this, let $t=\\arccos(u)$ with $x\\in[0,2\\pi)$.</p>\n\n<p>$$\\begin{align}\\int_0^x\\sqrt{1+\\cos(t)}\\ dt&amp;=-\\int_1^{\\cos(x)}\\sqrt{\\frac{1+u}{1-u^2}}\\ du\\\\&amp;=\\int_{\\arccos(x)}^1\\frac1{\\sqrt{1-u}}\\ du\\\\&amp;=\\left.-2\\sqrt{1-u}\\vphantom{\\frac11}\\right|_{\\cos(x)}^1\\\\&amp;=2\\sqrt{1-\\cos(x)}\\end{align}$$</p>\n\n<p>which reduces to yours via trig identities. The extension to $x\\ge2\\pi$ via modular operations is also continuous, and the derivative is always $\\sqrt{1+\\cos(t)}$.</p>\n" }, { "answer_id": 4769941, "score": 0, "text": "<p><span class=\"math-container\">\\begin{equation}\n\\int \\sqrt{1+\\cos(x)} dx = \\int \\frac{1+\\cos(x)}{\\sqrt{1+\\cos(x)}} dx\n\\end{equation}</span></p>\n<p>Consider the substitution, <span class=\"math-container\">$u=1+\\cos(x)$</span>. 
Then,</p>\n<p><span class=\"math-container\">\\begin{equation}\n-\\frac{du}{\\sqrt{2u-u^2}}=dx\n\\end{equation}</span></p>\n<p>Thus, the new equation in terms of <span class=\"math-container\">$u$</span> will be</p>\n<p><span class=\"math-container\">\\begin{equation}\nI= -\\int \\frac{\\sqrt{u}}{\\sqrt{2u-u^2}} du\n\\end{equation}</span></p>\n<p><span class=\"math-container\">\\begin{equation}\nI= -\\int \\frac{du}{\\sqrt{2-u}}\n\\end{equation}</span></p>\n<p>With the substitution, <span class=\"math-container\">$t=2-u$</span>, we get</p>\n<p><span class=\"math-container\">\\begin{equation}\nI=\\int \\frac{dt}{\\sqrt{t}}\n\\end{equation}</span></p>\n<p>By making substitutions for <span class=\"math-container\">$t(u)$</span> and <span class=\"math-container\">$u(x)$</span>, the final answer becomes</p>\n<p><span class=\"math-container\">\\begin{equation}\nI=2\\sqrt{1-\\cos(x)}+C.\n\\end{equation}</span></p>\n<p>Multiplying and dividing by <span class=\"math-container\">$\\sqrt{1+\\cos(x)}$</span> results in</p>\n<p><span class=\"math-container\">\\begin{equation}\n\\int \\sqrt{1+\\cos(x)} dx = \\frac{2 \\sqrt{{\\sin}^2(x)}}{\\sqrt{1+\\cos(x)}}+C.\n\\end{equation}</span></p>\n<p>However only when the term in numerator is taken to be <span class=\"math-container\">$\\sin(x)$</span> instead of absolute value of <span class=\"math-container\">$\\sin(x)$</span> the answer seems to be correct. The conclusion is drawn based on plotting the derivative of answer and comparing it with integrand.</p>\n" }, { "answer_id": 2117194, "score": 2, "text": "<p>The function to be integrated is\n$$\n\\sqrt{1+\\cos x}=\\sqrt{2}\\left|\\cos\\frac{x}{2}\\right|\n$$\nso we can as well consider the simpler\n$$\n\\int\\lvert\\cos t\\rvert\\,dt\n$$\nor, with $t=u+\\pi/2$, the even simpler\n$$\n\\int|\\sin u|\\,du\n$$\nOne antiderivative is\n$$\n\\int_0^u\\lvert\\sin v\\rvert\\,dv\n$$\nNote that the function has period $\\pi$, so let $u=k\\pi+u_0$, with $0\\le u_0&lt;\\pi$. 
Then\n$$\n\\int_0^u\\lvert\\sin v\\rvert\\,dv=\n\\int_0^{k\\pi}\\lvert\\sin v\\rvert\\,dv+\n\\int_{k\\pi}^{k\\pi+u_0}\\lvert\\sin v\\rvert\\,dv\n\\\\=2k+\\int_0^{u_0}\\sin v\\,dv=\n2k+1-\\cos(u-k\\pi)\n$$\nNow do the back substitution; write $k=u\\bmod\\pi$, if you prefer.</p>\n" }, { "answer_id": 2117193, "score": 1, "text": "<p>Probably not what you intended, but it follows geometrically due to the period of $\\cos$ that</p>\n\n<p>$$\\int_0^x\\sqrt{1+\\cos(t)}\\ dt=(x\\text{ mod }2\\pi)\\int_0^{2\\pi}\\sqrt{1+\\cos(t)}\\ dt+\\int_0^{(x\\text{ mod }2\\pi)}\\sqrt{1+\\cos(t)}\\ dt$$</p>\n\n<p>That is, we really only need to take the integral for $x\\in[0,2\\pi)$ and we're set. To do this, let $t=\\arccos(u)$ with $x\\in[0,2\\pi)$.</p>\n\n<p>$$\\begin{align}\\int_0^x\\sqrt{1+\\cos(t)}\\ dt&amp;=-\\int_1^{\\cos(x)}\\sqrt{\\frac{1+u}{1-u^2}}\\ du\\\\&amp;=\\int_{\\arccos(x)}^1\\frac1{\\sqrt{1-u}}\\ du\\\\&amp;=\\left.-2\\sqrt{1-u}\\vphantom{\\frac11}\\right|_{\\cos(x)}^1\\\\&amp;=2\\sqrt{1-\\cos(x)}\\end{align}$$</p>\n\n<p>which reduces to yours via trig identities. The extension to $x\\ge2\\pi$ via modular operations is also continuous, and the derivative is always $\\sqrt{1+\\cos(t)}$.</p>\n" }, { "answer_id": 4769941, "score": 0, "text": "<p><span class=\"math-container\">\\begin{equation}\n\\int \\sqrt{1+\\cos(x)} dx = \\int \\frac{1+\\cos(x)}{\\sqrt{1+\\cos(x)}} dx\n\\end{equation}</span></p>\n<p>Consider the substitution, <span class=\"math-container\">$u=1+\\cos(x)$</span>. 
Then,</p>\n<p><span class=\"math-container\">\\begin{equation}\n-\\frac{du}{\\sqrt{2u-u^2}}=dx\n\\end{equation}</span></p>\n<p>Thus, the new equation in terms of <span class=\"math-container\">$u$</span> will be</p>\n<p><span class=\"math-container\">\\begin{equation}\nI= -\\int \\frac{\\sqrt{u}}{\\sqrt{2u-u^2}} du\n\\end{equation}</span></p>\n<p><span class=\"math-container\">\\begin{equation}\nI= -\\int \\frac{du}{\\sqrt{2-u}}\n\\end{equation}</span></p>\n<p>With the substitution, <span class=\"math-container\">$t=2-u$</span>, we get</p>\n<p><span class=\"math-container\">\\begin{equation}\nI=\\int \\frac{dt}{\\sqrt{t}}\n\\end{equation}</span></p>\n<p>By making substitutions for <span class=\"math-container\">$t(u)$</span> and <span class=\"math-container\">$u(x)$</span>, the final answer becomes</p>\n<p><span class=\"math-container\">\\begin{equation}\nI=2\\sqrt{1-\\cos(x)}+C.\n\\end{equation}</span></p>\n<p>Multiplying and dividing by <span class=\"math-container\">$\\sqrt{1+\\cos(x)}$</span> results in</p>\n<p><span class=\"math-container\">\\begin{equation}\n\\int \\sqrt{1+\\cos(x)} dx = \\frac{2 \\sqrt{{\\sin}^2(x)}}{\\sqrt{1+\\cos(x)}}+C.\n\\end{equation}</span></p>\n<p>However only when the term in numerator is taken to be <span class=\"math-container\">$\\sin(x)$</span> instead of absolute value of <span class=\"math-container\">$\\sin(x)$</span> the answer seems to be correct. The conclusion is drawn based on plotting the derivative of answer and comparing it with integrand.</p>\n" }, { "answer_id": 2117194, "score": 2, "text": "<p>The function to be integrated is\n$$\n\\sqrt{1+\\cos x}=\\sqrt{2}\\left|\\cos\\frac{x}{2}\\right|\n$$\nso we can as well consider the simpler\n$$\n\\int\\lvert\\cos t\\rvert\\,dt\n$$\nor, with $t=u+\\pi/2$, the even simpler\n$$\n\\int|\\sin u|\\,du\n$$\nOne antiderivative is\n$$\n\\int_0^u\\lvert\\sin v\\rvert\\,dv\n$$\nNote that the function has period $\\pi$, so let $u=k\\pi+u_0$, with $0\\le u_0&lt;\\pi$. 
Then\n$$\n\\int_0^u\\lvert\\sin v\\rvert\\,dv=\n\\int_0^{k\\pi}\\lvert\\sin v\\rvert\\,dv+\n\\int_{k\\pi}^{k\\pi+u_0}\\lvert\\sin v\\rvert\\,dv\n\\\\=2k+\\int_0^{u_0}\\sin v\\,dv=\n2k+1-\\cos(u-k\\pi)\n$$\nNow do the back substitution; write $k=u\\bmod\\pi$, if you prefer.</p>\n" }, { "answer_id": 2117193, "score": 1, "text": "<p>Probably not what you intended, but it follows geometrically due to the period of $\\cos$ that</p>\n\n<p>$$\\int_0^x\\sqrt{1+\\cos(t)}\\ dt=(x\\text{ mod }2\\pi)\\int_0^{2\\pi}\\sqrt{1+\\cos(t)}\\ dt+\\int_0^{(x\\text{ mod }2\\pi)}\\sqrt{1+\\cos(t)}\\ dt$$</p>\n\n<p>That is, we really only need to take the integral for $x\\in[0,2\\pi)$ and we're set. To do this, let $t=\\arccos(u)$ with $x\\in[0,2\\pi)$.</p>\n\n<p>$$\\begin{align}\\int_0^x\\sqrt{1+\\cos(t)}\\ dt&amp;=-\\int_1^{\\cos(x)}\\sqrt{\\frac{1+u}{1-u^2}}\\ du\\\\&amp;=\\int_{\\arccos(x)}^1\\frac1{\\sqrt{1-u}}\\ du\\\\&amp;=\\left.-2\\sqrt{1-u}\\vphantom{\\frac11}\\right|_{\\cos(x)}^1\\\\&amp;=2\\sqrt{1-\\cos(x)}\\end{align}$$</p>\n\n<p>which reduces to yours via trig identities. The extension to $x\\ge2\\pi$ via modular operations is also continuous, and the derivative is always $\\sqrt{1+\\cos(t)}$.</p>\n" }, { "answer_id": 4769941, "score": 0, "text": "<p><span class=\"math-container\">\\begin{equation}\n\\int \\sqrt{1+\\cos(x)} dx = \\int \\frac{1+\\cos(x)}{\\sqrt{1+\\cos(x)}} dx\n\\end{equation}</span></p>\n<p>Consider the substitution, <span class=\"math-container\">$u=1+\\cos(x)$</span>. 
Then,</p>\n<p><span class=\"math-container\">\\begin{equation}\n-\\frac{du}{\\sqrt{2u-u^2}}=dx\n\\end{equation}</span></p>\n<p>Thus, the new equation in terms of <span class=\"math-container\">$u$</span> will be</p>\n<p><span class=\"math-container\">\\begin{equation}\nI= -\\int \\frac{\\sqrt{u}}{\\sqrt{2u-u^2}} du\n\\end{equation}</span></p>\n<p><span class=\"math-container\">\\begin{equation}\nI= -\\int \\frac{du}{\\sqrt{2-u}}\n\\end{equation}</span></p>\n<p>With the substitution, <span class=\"math-container\">$t=2-u$</span>, we get</p>\n<p><span class=\"math-container\">\\begin{equation}\nI=\\int \\frac{dt}{\\sqrt{t}}\n\\end{equation}</span></p>\n<p>By making substitutions for <span class=\"math-container\">$t(u)$</span> and <span class=\"math-container\">$u(x)$</span>, the final answer becomes</p>\n<p><span class=\"math-container\">\\begin{equation}\nI=2\\sqrt{1-\\cos(x)}+C.\n\\end{equation}</span></p>\n<p>Multiplying and dividing by <span class=\"math-container\">$\\sqrt{1+\\cos(x)}$</span> results in</p>\n<p><span class=\"math-container\">\\begin{equation}\n\\int \\sqrt{1+\\cos(x)} dx = \\frac{2 \\sqrt{{\\sin}^2(x)}}{\\sqrt{1+\\cos(x)}}+C.\n\\end{equation}</span></p>\n<p>However only when the term in numerator is taken to be <span class=\"math-container\">$\\sin(x)$</span> instead of absolute value of <span class=\"math-container\">$\\sin(x)$</span> the answer seems to be correct. The conclusion is drawn based on plotting the derivative of answer and comparing it with integrand.</p>\n" }, { "answer_id": 2117194, "score": 2, "text": "<p>The function to be integrated is\n$$\n\\sqrt{1+\\cos x}=\\sqrt{2}\\left|\\cos\\frac{x}{2}\\right|\n$$\nso we can as well consider the simpler\n$$\n\\int\\lvert\\cos t\\rvert\\,dt\n$$\nor, with $t=u+\\pi/2$, the even simpler\n$$\n\\int|\\sin u|\\,du\n$$\nOne antiderivative is\n$$\n\\int_0^u\\lvert\\sin v\\rvert\\,dv\n$$\nNote that the function has period $\\pi$, so let $u=k\\pi+u_0$, with $0\\le u_0&lt;\\pi$. 
Then\n$$\n\\int_0^u\\lvert\\sin v\\rvert\\,dv=\n\\int_0^{k\\pi}\\lvert\\sin v\\rvert\\,dv+\n\\int_{k\\pi}^{k\\pi+u_0}\\lvert\\sin v\\rvert\\,dv\n\\\\=2k+\\int_0^{u_0}\\sin v\\,dv=\n2k+1-\\cos(u-k\\pi)\n$$\nNow do the back substitution; write $k=u\\bmod\\pi$, if you prefer.</p>\n" }, { "answer_id": 2117193, "score": 1, "text": "<p>Probably not what you intended, but it follows geometrically due to the period of $\\cos$ that</p>\n\n<p>$$\\int_0^x\\sqrt{1+\\cos(t)}\\ dt=(x\\text{ mod }2\\pi)\\int_0^{2\\pi}\\sqrt{1+\\cos(t)}\\ dt+\\int_0^{(x\\text{ mod }2\\pi)}\\sqrt{1+\\cos(t)}\\ dt$$</p>\n\n<p>That is, we really only need to take the integral for $x\\in[0,2\\pi)$ and we're set. To do this, let $t=\\arccos(u)$ with $x\\in[0,2\\pi)$.</p>\n\n<p>$$\\begin{align}\\int_0^x\\sqrt{1+\\cos(t)}\\ dt&amp;=-\\int_1^{\\cos(x)}\\sqrt{\\frac{1+u}{1-u^2}}\\ du\\\\&amp;=\\int_{\\arccos(x)}^1\\frac1{\\sqrt{1-u}}\\ du\\\\&amp;=\\left.-2\\sqrt{1-u}\\vphantom{\\frac11}\\right|_{\\cos(x)}^1\\\\&amp;=2\\sqrt{1-\\cos(x)}\\end{align}$$</p>\n\n<p>which reduces to yours via trig identities. The extension to $x\\ge2\\pi$ via modular operations is also continuous, and the derivative is always $\\sqrt{1+\\cos(t)}$.</p>\n" }, { "answer_id": 4769941, "score": 0, "text": "<p><span class=\"math-container\">\\begin{equation}\n\\int \\sqrt{1+\\cos(x)} dx = \\int \\frac{1+\\cos(x)}{\\sqrt{1+\\cos(x)}} dx\n\\end{equation}</span></p>\n<p>Consider the substitution, <span class=\"math-container\">$u=1+\\cos(x)$</span>. 
Then,</p>\n<p><span class=\"math-container\">\\begin{equation}\n-\\frac{du}{\\sqrt{2u-u^2}}=dx\n\\end{equation}</span></p>\n<p>Thus, the new equation in terms of <span class=\"math-container\">$u$</span> will be</p>\n<p><span class=\"math-container\">\\begin{equation}\nI= -\\int \\frac{\\sqrt{u}}{\\sqrt{2u-u^2}} du\n\\end{equation}</span></p>\n<p><span class=\"math-container\">\\begin{equation}\nI= -\\int \\frac{du}{\\sqrt{2-u}}\n\\end{equation}</span></p>\n<p>With the substitution, <span class=\"math-container\">$t=2-u$</span>, we get</p>\n<p><span class=\"math-container\">\\begin{equation}\nI=\\int \\frac{dt}{\\sqrt{t}}\n\\end{equation}</span></p>\n<p>By making substitutions for <span class=\"math-container\">$t(u)$</span> and <span class=\"math-container\">$u(x)$</span>, the final answer becomes</p>\n<p><span class=\"math-container\">\\begin{equation}\nI=2\\sqrt{1-\\cos(x)}+C.\n\\end{equation}</span></p>\n<p>Multiplying and dividing by <span class=\"math-container\">$\\sqrt{1+\\cos(x)}$</span> results in</p>\n<p><span class=\"math-container\">\\begin{equation}\n\\int \\sqrt{1+\\cos(x)} dx = \\frac{2 \\sqrt{{\\sin}^2(x)}}{\\sqrt{1+\\cos(x)}}+C.\n\\end{equation}</span></p>\n<p>However only when the term in numerator is taken to be <span class=\"math-container\">$\\sin(x)$</span> instead of absolute value of <span class=\"math-container\">$\\sin(x)$</span> the answer seems to be correct. The conclusion is drawn based on plotting the derivative of answer and comparing it with integrand.</p>\n" }, { "answer_id": 2117194, "score": 2, "text": "<p>The function to be integrated is\n$$\n\\sqrt{1+\\cos x}=\\sqrt{2}\\left|\\cos\\frac{x}{2}\\right|\n$$\nso we can as well consider the simpler\n$$\n\\int\\lvert\\cos t\\rvert\\,dt\n$$\nor, with $t=u+\\pi/2$, the even simpler\n$$\n\\int|\\sin u|\\,du\n$$\nOne antiderivative is\n$$\n\\int_0^u\\lvert\\sin v\\rvert\\,dv\n$$\nNote that the function has period $\\pi$, so let $u=k\\pi+u_0$, with $0\\le u_0&lt;\\pi$. 
Then\n$$\n\\int_0^u\\lvert\\sin v\\rvert\\,dv=\n\\int_0^{k\\pi}\\lvert\\sin v\\rvert\\,dv+\n\\int_{k\\pi}^{k\\pi+u_0}\\lvert\\sin v\\rvert\\,dv\n\\\\=2k+\\int_0^{u_0}\\sin v\\,dv=\n2k+1-\\cos(u-k\\pi)\n$$\nNow do the back substitution; write $k=u\\bmod\\pi$, if you prefer.</p>\n" }, { "answer_id": 2117193, "score": 1, "text": "<p>Probably not what you intended, but it follows geometrically due to the period of $\\cos$ that</p>\n\n<p>$$\\int_0^x\\sqrt{1+\\cos(t)}\\ dt=(x\\text{ mod }2\\pi)\\int_0^{2\\pi}\\sqrt{1+\\cos(t)}\\ dt+\\int_0^{(x\\text{ mod }2\\pi)}\\sqrt{1+\\cos(t)}\\ dt$$</p>\n\n<p>That is, we really only need to take the integral for $x\\in[0,2\\pi)$ and we're set. To do this, let $t=\\arccos(u)$ with $x\\in[0,2\\pi)$.</p>\n\n<p>$$\\begin{align}\\int_0^x\\sqrt{1+\\cos(t)}\\ dt&amp;=-\\int_1^{\\cos(x)}\\sqrt{\\frac{1+u}{1-u^2}}\\ du\\\\&amp;=\\int_{\\arccos(x)}^1\\frac1{\\sqrt{1-u}}\\ du\\\\&amp;=\\left.-2\\sqrt{1-u}\\vphantom{\\frac11}\\right|_{\\cos(x)}^1\\\\&amp;=2\\sqrt{1-\\cos(x)}\\end{align}$$</p>\n\n<p>which reduces to yours via trig identities. The extension to $x\\ge2\\pi$ via modular operations is also continuous, and the derivative is always $\\sqrt{1+\\cos(t)}$.</p>\n" }, { "answer_id": 4769941, "score": 0, "text": "<p><span class=\"math-container\">\\begin{equation}\n\\int \\sqrt{1+\\cos(x)} dx = \\int \\frac{1+\\cos(x)}{\\sqrt{1+\\cos(x)}} dx\n\\end{equation}</span></p>\n<p>Consider the substitution, <span class=\"math-container\">$u=1+\\cos(x)$</span>. 
Then,</p>\n<p><span class=\"math-container\">\\begin{equation}\n-\\frac{du}{\\sqrt{2u-u^2}}=dx\n\\end{equation}</span></p>\n<p>Thus, the new equation in terms of <span class=\"math-container\">$u$</span> will be</p>\n<p><span class=\"math-container\">\\begin{equation}\nI= -\\int \\frac{\\sqrt{u}}{\\sqrt{2u-u^2}} du\n\\end{equation}</span></p>\n<p><span class=\"math-container\">\\begin{equation}\nI= -\\int \\frac{du}{\\sqrt{2-u}}\n\\end{equation}</span></p>\n<p>With the substitution, <span class=\"math-container\">$t=2-u$</span>, we get</p>\n<p><span class=\"math-container\">\\begin{equation}\nI=\\int \\frac{dt}{\\sqrt{t}}\n\\end{equation}</span></p>\n<p>By making substitutions for <span class=\"math-container\">$t(u)$</span> and <span class=\"math-container\">$u(x)$</span>, the final answer becomes</p>\n<p><span class=\"math-container\">\\begin{equation}\nI=2\\sqrt{1-\\cos(x)}+C.\n\\end{equation}</span></p>\n<p>Multiplying and dividing by <span class=\"math-container\">$\\sqrt{1+\\cos(x)}$</span> results in</p>\n<p><span class=\"math-container\">\\begin{equation}\n\\int \\sqrt{1+\\cos(x)} dx = \\frac{2 \\sqrt{{\\sin}^2(x)}}{\\sqrt{1+\\cos(x)}}+C.\n\\end{equation}</span></p>\n<p>However, the answer appears to be correct only when the term in the numerator is taken to be <span class=\"math-container\">$\\sin(x)$</span> rather than the absolute value of <span class=\"math-container\">$\\sin(x)$</span>. This conclusion is drawn from plotting the derivative of the answer and comparing it with the integrand.</p>\n" } ]
Question: On the integral $\int\sqrt{1+\cos x}\text{d} x$ Answer 1: <p>The function to be integrated is $$ \sqrt{1+\cos x}=\sqrt{2}\left|\cos\frac{x}{2}\right| $$ so we can as well consider the simpler $$ \int\lvert\cos t\rvert\,dt $$ or, with $t=u+\pi/2$, the even simpler $$ \int|\sin u|\,du $$ One antiderivative is $$ \int_0^u\lvert\sin v\rvert\,dv $$ Note that the function has period $\pi$, so let $u=k\pi+u_0$, with $0\le u_0&lt;\pi$. Then $$ \int_0^u\lvert\sin v\rvert\,dv= \int_0^{k\pi}\lvert\sin v\rvert\,dv+ \int_{k\pi}^{k\pi+u_0}\lvert\sin v\rvert\,dv \\=2k+\int_0^{u_0}\sin v\,dv= 2k+1-\cos(u-k\pi) $$ Now do the back substitution; write $k=\lfloor u/\pi\rfloor$, if you prefer.</p> Answer 2: <p>Probably not what you intended, but it follows geometrically due to the period of $\cos$ that</p> <p>$$\int_0^x\sqrt{1+\cos(t)}\ dt=\lfloor x/(2\pi)\rfloor\int_0^{2\pi}\sqrt{1+\cos(t)}\ dt+\int_0^{(x\text{ mod }2\pi)}\sqrt{1+\cos(t)}\ dt$$</p> <p>That is, we really only need to take the integral for $x\in[0,2\pi)$ and we're set. To do this, let $t=\arccos(u)$ with $x\in[0,2\pi)$.</p> <p>$$\begin{align}\int_0^x\sqrt{1+\cos(t)}\ dt&amp;=-\int_1^{\cos(x)}\sqrt{\frac{1+u}{1-u^2}}\ du\\&amp;=\int_{\cos(x)}^1\frac1{\sqrt{1-u}}\ du\\&amp;=\left.-2\sqrt{1-u}\vphantom{\frac11}\right|_{\cos(x)}^1\\&amp;=2\sqrt{1-\cos(x)}\end{align}$$</p> <p>which reduces to yours via trig identities. The extension to $x\ge2\pi$ via modular operations is also continuous, and the derivative is always $\sqrt{1+\cos(t)}$.</p> Answer 3: <p><span class="math-container">\begin{equation} \int \sqrt{1+\cos(x)} dx = \int \frac{1+\cos(x)}{\sqrt{1+\cos(x)}} dx \end{equation}</span></p> <p>Consider the substitution, <span class="math-container">$u=1+\cos(x)$</span>. 
Then,</p> <p><span class="math-container">\begin{equation} -\frac{du}{\sqrt{2u-u^2}}=dx \end{equation}</span></p> <p>Thus, the new equation in terms of <span class="math-container">$u$</span> will be</p> <p><span class="math-container">\begin{equation} I= -\int \frac{\sqrt{u}}{\sqrt{2u-u^2}} du \end{equation}</span></p> <p><span class="math-container">\begin{equation} I= -\int \frac{du}{\sqrt{2-u}} \end{equation}</span></p> <p>With the substitution, <span class="math-container">$t=2-u$</span>, we get</p> <p><span class="math-container">\begin{equation} I=\int \frac{dt}{\sqrt{t}} \end{equation}</span></p> <p>By making substitutions for <span class="math-container">$t(u)$</span> and <span class="math-container">$u(x)$</span>, the final answer becomes</p> <p><span class="math-container">\begin{equation} I=2\sqrt{1-\cos(x)}+C. \end{equation}</span></p> <p>Multiplying and dividing by <span class="math-container">$\sqrt{1+\cos(x)}$</span> results in</p> <p><span class="math-container">\begin{equation} \int \sqrt{1+\cos(x)} dx = \frac{2 \sqrt{{\sin}^2(x)}}{\sqrt{1+\cos(x)}}+C. \end{equation}</span></p> <p>However only when the term in numerator is taken to be <span class="math-container">$\sin(x)$</span> instead of absolute value of <span class="math-container">$\sin(x)$</span> the answer seems to be correct. The conclusion is drawn based on plotting the derivative of answer and comparing it with integrand.</p> Answer 4: <p>The function to be integrated is $$ \sqrt{1+\cos x}=\sqrt{2}\left|\cos\frac{x}{2}\right| $$ so we can as well consider the simpler $$ \int\lvert\cos t\rvert\,dt $$ or, with $t=u+\pi/2$, the even simpler $$ \int|\sin u|\,du $$ One antiderivative is $$ \int_0^u\lvert\sin v\rvert\,dv $$ Note that the function has period $\pi$, so let $u=k\pi+u_0$, with $0\le u_0&lt;\pi$. 
Then $$ \int_0^u\lvert\sin v\rvert\,dv= \int_0^{k\pi}\lvert\sin v\rvert\,dv+ \int_{k\pi}^{k\pi+u_0}\lvert\sin v\rvert\,dv \\=2k+\int_0^{u_0}\sin v\,dv= 2k+1-\cos(u-k\pi) $$ Now do the back substitution; write $k=u\bmod\pi$, if you prefer.</p> Answer 5: <p>Probably not what you intended, but it follows geometrically due to the period of $\cos$ that</p> <p>$$\int_0^x\sqrt{1+\cos(t)}\ dt=(x\text{ mod }2\pi)\int_0^{2\pi}\sqrt{1+\cos(t)}\ dt+\int_0^{(x\text{ mod }2\pi)}\sqrt{1+\cos(t)}\ dt$$</p> <p>That is, we really only need to take the integral for $x\in[0,2\pi)$ and we're set. To do this, let $t=\arccos(u)$ with $x\in[0,2\pi)$.</p> <p>$$\begin{align}\int_0^x\sqrt{1+\cos(t)}\ dt&amp;=-\int_1^{\cos(x)}\sqrt{\frac{1+u}{1-u^2}}\ du\\&amp;=\int_{\arccos(x)}^1\frac1{\sqrt{1-u}}\ du\\&amp;=\left.-2\sqrt{1-u}\vphantom{\frac11}\right|_{\cos(x)}^1\\&amp;=2\sqrt{1-\cos(x)}\end{align}$$</p> <p>which reduces to yours via trig identities. The extension to $x\ge2\pi$ via modular operations is also continuous, and the derivative is always $\sqrt{1+\cos(t)}$.</p> Answer 6: <p><span class="math-container">\begin{equation} \int \sqrt{1+\cos(x)} dx = \int \frac{1+\cos(x)}{\sqrt{1+\cos(x)}} dx \end{equation}</span></p> <p>Consider the substitution, <span class="math-container">$u=1+\cos(x)$</span>. 
Then,</p> <p><span class="math-container">\begin{equation} -\frac{du}{\sqrt{2u-u^2}}=dx \end{equation}</span></p> <p>Thus, the new equation in terms of <span class="math-container">$u$</span> will be</p> <p><span class="math-container">\begin{equation} I= -\int \frac{\sqrt{u}}{\sqrt{2u-u^2}} du \end{equation}</span></p> <p><span class="math-container">\begin{equation} I= -\int \frac{du}{\sqrt{2-u}} \end{equation}</span></p> <p>With the substitution, <span class="math-container">$t=2-u$</span>, we get</p> <p><span class="math-container">\begin{equation} I=\int \frac{dt}{\sqrt{t}} \end{equation}</span></p> <p>By making substitutions for <span class="math-container">$t(u)$</span> and <span class="math-container">$u(x)$</span>, the final answer becomes</p> <p><span class="math-container">\begin{equation} I=2\sqrt{1-\cos(x)}+C. \end{equation}</span></p> <p>Multiplying and dividing by <span class="math-container">$\sqrt{1+\cos(x)}$</span> results in</p> <p><span class="math-container">\begin{equation} \int \sqrt{1+\cos(x)} dx = \frac{2 \sqrt{{\sin}^2(x)}}{\sqrt{1+\cos(x)}}+C. \end{equation}</span></p> <p>However only when the term in numerator is taken to be <span class="math-container">$\sin(x)$</span> instead of absolute value of <span class="math-container">$\sin(x)$</span> the answer seems to be correct. The conclusion is drawn based on plotting the derivative of answer and comparing it with integrand.</p> Answer 7: <p>The function to be integrated is $$ \sqrt{1+\cos x}=\sqrt{2}\left|\cos\frac{x}{2}\right| $$ so we can as well consider the simpler $$ \int\lvert\cos t\rvert\,dt $$ or, with $t=u+\pi/2$, the even simpler $$ \int|\sin u|\,du $$ One antiderivative is $$ \int_0^u\lvert\sin v\rvert\,dv $$ Note that the function has period $\pi$, so let $u=k\pi+u_0$, with $0\le u_0&lt;\pi$. 
Then $$ \int_0^u\lvert\sin v\rvert\,dv= \int_0^{k\pi}\lvert\sin v\rvert\,dv+ \int_{k\pi}^{k\pi+u_0}\lvert\sin v\rvert\,dv \\=2k+\int_0^{u_0}\sin v\,dv= 2k+1-\cos(u-k\pi) $$ Now do the back substitution; write $k=u\bmod\pi$, if you prefer.</p> Answer 8: <p>Probably not what you intended, but it follows geometrically due to the period of $\cos$ that</p> <p>$$\int_0^x\sqrt{1+\cos(t)}\ dt=(x\text{ mod }2\pi)\int_0^{2\pi}\sqrt{1+\cos(t)}\ dt+\int_0^{(x\text{ mod }2\pi)}\sqrt{1+\cos(t)}\ dt$$</p> <p>That is, we really only need to take the integral for $x\in[0,2\pi)$ and we're set. To do this, let $t=\arccos(u)$ with $x\in[0,2\pi)$.</p> <p>$$\begin{align}\int_0^x\sqrt{1+\cos(t)}\ dt&amp;=-\int_1^{\cos(x)}\sqrt{\frac{1+u}{1-u^2}}\ du\\&amp;=\int_{\arccos(x)}^1\frac1{\sqrt{1-u}}\ du\\&amp;=\left.-2\sqrt{1-u}\vphantom{\frac11}\right|_{\cos(x)}^1\\&amp;=2\sqrt{1-\cos(x)}\end{align}$$</p> <p>which reduces to yours via trig identities. The extension to $x\ge2\pi$ via modular operations is also continuous, and the derivative is always $\sqrt{1+\cos(t)}$.</p> Answer 9: <p><span class="math-container">\begin{equation} \int \sqrt{1+\cos(x)} dx = \int \frac{1+\cos(x)}{\sqrt{1+\cos(x)}} dx \end{equation}</span></p> <p>Consider the substitution, <span class="math-container">$u=1+\cos(x)$</span>. 
Then,</p> <p><span class="math-container">\begin{equation} -\frac{du}{\sqrt{2u-u^2}}=dx \end{equation}</span></p> <p>Thus, the new equation in terms of <span class="math-container">$u$</span> will be</p> <p><span class="math-container">\begin{equation} I= -\int \frac{\sqrt{u}}{\sqrt{2u-u^2}} du \end{equation}</span></p> <p><span class="math-container">\begin{equation} I= -\int \frac{du}{\sqrt{2-u}} \end{equation}</span></p> <p>With the substitution, <span class="math-container">$t=2-u$</span>, we get</p> <p><span class="math-container">\begin{equation} I=\int \frac{dt}{\sqrt{t}} \end{equation}</span></p> <p>By making substitutions for <span class="math-container">$t(u)$</span> and <span class="math-container">$u(x)$</span>, the final answer becomes</p> <p><span class="math-container">\begin{equation} I=2\sqrt{1-\cos(x)}+C. \end{equation}</span></p> <p>Multiplying and dividing by <span class="math-container">$\sqrt{1+\cos(x)}$</span> results in</p> <p><span class="math-container">\begin{equation} \int \sqrt{1+\cos(x)} dx = \frac{2 \sqrt{{\sin}^2(x)}}{\sqrt{1+\cos(x)}}+C. \end{equation}</span></p> <p>However only when the term in numerator is taken to be <span class="math-container">$\sin(x)$</span> instead of absolute value of <span class="math-container">$\sin(x)$</span> the answer seems to be correct. The conclusion is drawn based on plotting the derivative of answer and comparing it with integrand.</p> Answer 10: <p>The function to be integrated is $$ \sqrt{1+\cos x}=\sqrt{2}\left|\cos\frac{x}{2}\right| $$ so we can as well consider the simpler $$ \int\lvert\cos t\rvert\,dt $$ or, with $t=u+\pi/2$, the even simpler $$ \int|\sin u|\,du $$ One antiderivative is $$ \int_0^u\lvert\sin v\rvert\,dv $$ Note that the function has period $\pi$, so let $u=k\pi+u_0$, with $0\le u_0&lt;\pi$. 
Then $$ \int_0^u\lvert\sin v\rvert\,dv= \int_0^{k\pi}\lvert\sin v\rvert\,dv+ \int_{k\pi}^{k\pi+u_0}\lvert\sin v\rvert\,dv \\=2k+\int_0^{u_0}\sin v\,dv= 2k+1-\cos(u-k\pi) $$ Now do the back substitution; write $k=u\bmod\pi$, if you prefer.</p> Answer 11: <p>Probably not what you intended, but it follows geometrically due to the period of $\cos$ that</p> <p>$$\int_0^x\sqrt{1+\cos(t)}\ dt=(x\text{ mod }2\pi)\int_0^{2\pi}\sqrt{1+\cos(t)}\ dt+\int_0^{(x\text{ mod }2\pi)}\sqrt{1+\cos(t)}\ dt$$</p> <p>That is, we really only need to take the integral for $x\in[0,2\pi)$ and we're set. To do this, let $t=\arccos(u)$ with $x\in[0,2\pi)$.</p> <p>$$\begin{align}\int_0^x\sqrt{1+\cos(t)}\ dt&amp;=-\int_1^{\cos(x)}\sqrt{\frac{1+u}{1-u^2}}\ du\\&amp;=\int_{\arccos(x)}^1\frac1{\sqrt{1-u}}\ du\\&amp;=\left.-2\sqrt{1-u}\vphantom{\frac11}\right|_{\cos(x)}^1\\&amp;=2\sqrt{1-\cos(x)}\end{align}$$</p> <p>which reduces to yours via trig identities. The extension to $x\ge2\pi$ via modular operations is also continuous, and the derivative is always $\sqrt{1+\cos(t)}$.</p> Answer 12: <p><span class="math-container">\begin{equation} \int \sqrt{1+\cos(x)} dx = \int \frac{1+\cos(x)}{\sqrt{1+\cos(x)}} dx \end{equation}</span></p> <p>Consider the substitution, <span class="math-container">$u=1+\cos(x)$</span>. 
Then,</p> <p><span class="math-container">\begin{equation} -\frac{du}{\sqrt{2u-u^2}}=dx \end{equation}</span></p> <p>Thus, the new equation in terms of <span class="math-container">$u$</span> will be</p> <p><span class="math-container">\begin{equation} I= -\int \frac{\sqrt{u}}{\sqrt{2u-u^2}} du \end{equation}</span></p> <p><span class="math-container">\begin{equation} I= -\int \frac{du}{\sqrt{2-u}} \end{equation}</span></p> <p>With the substitution, <span class="math-container">$t=2-u$</span>, we get</p> <p><span class="math-container">\begin{equation} I=\int \frac{dt}{\sqrt{t}} \end{equation}</span></p> <p>By making substitutions for <span class="math-container">$t(u)$</span> and <span class="math-container">$u(x)$</span>, the final answer becomes</p> <p><span class="math-container">\begin{equation} I=2\sqrt{1-\cos(x)}+C. \end{equation}</span></p> <p>Multiplying and dividing by <span class="math-container">$\sqrt{1+\cos(x)}$</span> results in</p> <p><span class="math-container">\begin{equation} \int \sqrt{1+\cos(x)} dx = \frac{2 \sqrt{{\sin}^2(x)}}{\sqrt{1+\cos(x)}}+C. \end{equation}</span></p> <p>However only when the term in numerator is taken to be <span class="math-container">$\sin(x)$</span> instead of absolute value of <span class="math-container">$\sin(x)$</span> the answer seems to be correct. The conclusion is drawn based on plotting the derivative of answer and comparing it with integrand.</p> Answer 13: <p>The function to be integrated is $$ \sqrt{1+\cos x}=\sqrt{2}\left|\cos\frac{x}{2}\right| $$ so we can as well consider the simpler $$ \int\lvert\cos t\rvert\,dt $$ or, with $t=u+\pi/2$, the even simpler $$ \int|\sin u|\,du $$ One antiderivative is $$ \int_0^u\lvert\sin v\rvert\,dv $$ Note that the function has period $\pi$, so let $u=k\pi+u_0$, with $0\le u_0&lt;\pi$. 
Then $$ \int_0^u\lvert\sin v\rvert\,dv= \int_0^{k\pi}\lvert\sin v\rvert\,dv+ \int_{k\pi}^{k\pi+u_0}\lvert\sin v\rvert\,dv \\=2k+\int_0^{u_0}\sin v\,dv= 2k+1-\cos(u-k\pi) $$ Now do the back substitution; write $k=u\bmod\pi$, if you prefer.</p> Answer 14: <p>Probably not what you intended, but it follows geometrically due to the period of $\cos$ that</p> <p>$$\int_0^x\sqrt{1+\cos(t)}\ dt=(x\text{ mod }2\pi)\int_0^{2\pi}\sqrt{1+\cos(t)}\ dt+\int_0^{(x\text{ mod }2\pi)}\sqrt{1+\cos(t)}\ dt$$</p> <p>That is, we really only need to take the integral for $x\in[0,2\pi)$ and we're set. To do this, let $t=\arccos(u)$ with $x\in[0,2\pi)$.</p> <p>$$\begin{align}\int_0^x\sqrt{1+\cos(t)}\ dt&amp;=-\int_1^{\cos(x)}\sqrt{\frac{1+u}{1-u^2}}\ du\\&amp;=\int_{\arccos(x)}^1\frac1{\sqrt{1-u}}\ du\\&amp;=\left.-2\sqrt{1-u}\vphantom{\frac11}\right|_{\cos(x)}^1\\&amp;=2\sqrt{1-\cos(x)}\end{align}$$</p> <p>which reduces to yours via trig identities. The extension to $x\ge2\pi$ via modular operations is also continuous, and the derivative is always $\sqrt{1+\cos(t)}$.</p> Answer 15: <p><span class="math-container">\begin{equation} \int \sqrt{1+\cos(x)} dx = \int \frac{1+\cos(x)}{\sqrt{1+\cos(x)}} dx \end{equation}</span></p> <p>Consider the substitution, <span class="math-container">$u=1+\cos(x)$</span>. 
Then,</p> <p><span class="math-container">\begin{equation} -\frac{du}{\sqrt{2u-u^2}}=dx \end{equation}</span></p> <p>Thus, the new equation in terms of <span class="math-container">$u$</span> will be</p> <p><span class="math-container">\begin{equation} I= -\int \frac{\sqrt{u}}{\sqrt{2u-u^2}} du \end{equation}</span></p> <p><span class="math-container">\begin{equation} I= -\int \frac{du}{\sqrt{2-u}} \end{equation}</span></p> <p>With the substitution, <span class="math-container">$t=2-u$</span>, we get</p> <p><span class="math-container">\begin{equation} I=\int \frac{dt}{\sqrt{t}} \end{equation}</span></p> <p>By making substitutions for <span class="math-container">$t(u)$</span> and <span class="math-container">$u(x)$</span>, the final answer becomes</p> <p><span class="math-container">\begin{equation} I=2\sqrt{1-\cos(x)}+C. \end{equation}</span></p> <p>Multiplying and dividing by <span class="math-container">$\sqrt{1+\cos(x)}$</span> results in</p> <p><span class="math-container">\begin{equation} \int \sqrt{1+\cos(x)} dx = \frac{2 \sqrt{{\sin}^2(x)}}{\sqrt{1+\cos(x)}}+C. \end{equation}</span></p> <p>However only when the term in numerator is taken to be <span class="math-container">$\sin(x)$</span> instead of absolute value of <span class="math-container">$\sin(x)$</span> the answer seems to be correct. The conclusion is drawn based on plotting the derivative of answer and comparing it with integrand.</p> Answer 16: <p>The function to be integrated is $$ \sqrt{1+\cos x}=\sqrt{2}\left|\cos\frac{x}{2}\right| $$ so we can as well consider the simpler $$ \int\lvert\cos t\rvert\,dt $$ or, with $t=u+\pi/2$, the even simpler $$ \int|\sin u|\,du $$ One antiderivative is $$ \int_0^u\lvert\sin v\rvert\,dv $$ Note that the function has period $\pi$, so let $u=k\pi+u_0$, with $0\le u_0&lt;\pi$. 
Then $$ \int_0^u\lvert\sin v\rvert\,dv= \int_0^{k\pi}\lvert\sin v\rvert\,dv+ \int_{k\pi}^{k\pi+u_0}\lvert\sin v\rvert\,dv \\=2k+\int_0^{u_0}\sin v\,dv= 2k+1-\cos(u-k\pi) $$ Now do the back substitution; write $k=u\bmod\pi$, if you prefer.</p> Answer 17: <p>Probably not what you intended, but it follows geometrically due to the period of $\cos$ that</p> <p>$$\int_0^x\sqrt{1+\cos(t)}\ dt=(x\text{ mod }2\pi)\int_0^{2\pi}\sqrt{1+\cos(t)}\ dt+\int_0^{(x\text{ mod }2\pi)}\sqrt{1+\cos(t)}\ dt$$</p> <p>That is, we really only need to take the integral for $x\in[0,2\pi)$ and we're set. To do this, let $t=\arccos(u)$ with $x\in[0,2\pi)$.</p> <p>$$\begin{align}\int_0^x\sqrt{1+\cos(t)}\ dt&amp;=-\int_1^{\cos(x)}\sqrt{\frac{1+u}{1-u^2}}\ du\\&amp;=\int_{\arccos(x)}^1\frac1{\sqrt{1-u}}\ du\\&amp;=\left.-2\sqrt{1-u}\vphantom{\frac11}\right|_{\cos(x)}^1\\&amp;=2\sqrt{1-\cos(x)}\end{align}$$</p> <p>which reduces to yours via trig identities. The extension to $x\ge2\pi$ via modular operations is also continuous, and the derivative is always $\sqrt{1+\cos(t)}$.</p> Answer 18: <p><span class="math-container">\begin{equation} \int \sqrt{1+\cos(x)} dx = \int \frac{1+\cos(x)}{\sqrt{1+\cos(x)}} dx \end{equation}</span></p> <p>Consider the substitution, <span class="math-container">$u=1+\cos(x)$</span>. 
Then,</p> <p><span class="math-container">\begin{equation} -\frac{du}{\sqrt{2u-u^2}}=dx \end{equation}</span></p> <p>Thus, the new equation in terms of <span class="math-container">$u$</span> will be</p> <p><span class="math-container">\begin{equation} I= -\int \frac{\sqrt{u}}{\sqrt{2u-u^2}} du \end{equation}</span></p> <p><span class="math-container">\begin{equation} I= -\int \frac{du}{\sqrt{2-u}} \end{equation}</span></p> <p>With the substitution, <span class="math-container">$t=2-u$</span>, we get</p> <p><span class="math-container">\begin{equation} I=\int \frac{dt}{\sqrt{t}} \end{equation}</span></p> <p>By making substitutions for <span class="math-container">$t(u)$</span> and <span class="math-container">$u(x)$</span>, the final answer becomes</p> <p><span class="math-container">\begin{equation} I=2\sqrt{1-\cos(x)}+C. \end{equation}</span></p> <p>Multiplying and dividing by <span class="math-container">$\sqrt{1+\cos(x)}$</span> results in</p> <p><span class="math-container">\begin{equation} \int \sqrt{1+\cos(x)} dx = \frac{2 \sqrt{{\sin}^2(x)}}{\sqrt{1+\cos(x)}}+C. \end{equation}</span></p> <p>However only when the term in numerator is taken to be <span class="math-container">$\sin(x)$</span> instead of absolute value of <span class="math-container">$\sin(x)$</span> the answer seems to be correct. The conclusion is drawn based on plotting the derivative of answer and comparing it with integrand.</p> Answer 19: <p>The function to be integrated is $$ \sqrt{1+\cos x}=\sqrt{2}\left|\cos\frac{x}{2}\right| $$ so we can as well consider the simpler $$ \int\lvert\cos t\rvert\,dt $$ or, with $t=u+\pi/2$, the even simpler $$ \int|\sin u|\,du $$ One antiderivative is $$ \int_0^u\lvert\sin v\rvert\,dv $$ Note that the function has period $\pi$, so let $u=k\pi+u_0$, with $0\le u_0&lt;\pi$. 
Then $$ \int_0^u\lvert\sin v\rvert\,dv= \int_0^{k\pi}\lvert\sin v\rvert\,dv+ \int_{k\pi}^{k\pi+u_0}\lvert\sin v\rvert\,dv \\=2k+\int_0^{u_0}\sin v\,dv= 2k+1-\cos(u-k\pi) $$ Now do the back substitution; write $k=u\bmod\pi$, if you prefer.</p> Answer 20: <p>Probably not what you intended, but it follows geometrically due to the period of $\cos$ that</p> <p>$$\int_0^x\sqrt{1+\cos(t)}\ dt=(x\text{ mod }2\pi)\int_0^{2\pi}\sqrt{1+\cos(t)}\ dt+\int_0^{(x\text{ mod }2\pi)}\sqrt{1+\cos(t)}\ dt$$</p> <p>That is, we really only need to take the integral for $x\in[0,2\pi)$ and we're set. To do this, let $t=\arccos(u)$ with $x\in[0,2\pi)$.</p> <p>$$\begin{align}\int_0^x\sqrt{1+\cos(t)}\ dt&amp;=-\int_1^{\cos(x)}\sqrt{\frac{1+u}{1-u^2}}\ du\\&amp;=\int_{\arccos(x)}^1\frac1{\sqrt{1-u}}\ du\\&amp;=\left.-2\sqrt{1-u}\vphantom{\frac11}\right|_{\cos(x)}^1\\&amp;=2\sqrt{1-\cos(x)}\end{align}$$</p> <p>which reduces to yours via trig identities. The extension to $x\ge2\pi$ via modular operations is also continuous, and the derivative is always $\sqrt{1+\cos(t)}$.</p> Answer 21: <p><span class="math-container">\begin{equation} \int \sqrt{1+\cos(x)} dx = \int \frac{1+\cos(x)}{\sqrt{1+\cos(x)}} dx \end{equation}</span></p> <p>Consider the substitution, <span class="math-container">$u=1+\cos(x)$</span>. 
Then,</p> <p><span class="math-container">\begin{equation} -\frac{du}{\sqrt{2u-u^2}}=dx \end{equation}</span></p> <p>Thus, the new equation in terms of <span class="math-container">$u$</span> will be</p> <p><span class="math-container">\begin{equation} I= -\int \frac{\sqrt{u}}{\sqrt{2u-u^2}} du \end{equation}</span></p> <p><span class="math-container">\begin{equation} I= -\int \frac{du}{\sqrt{2-u}} \end{equation}</span></p> <p>With the substitution, <span class="math-container">$t=2-u$</span>, we get</p> <p><span class="math-container">\begin{equation} I=\int \frac{dt}{\sqrt{t}} \end{equation}</span></p> <p>By making substitutions for <span class="math-container">$t(u)$</span> and <span class="math-container">$u(x)$</span>, the final answer becomes</p> <p><span class="math-container">\begin{equation} I=2\sqrt{1-\cos(x)}+C. \end{equation}</span></p> <p>Multiplying and dividing by <span class="math-container">$\sqrt{1+\cos(x)}$</span> results in</p> <p><span class="math-container">\begin{equation} \int \sqrt{1+\cos(x)} dx = \frac{2 \sqrt{{\sin}^2(x)}}{\sqrt{1+\cos(x)}}+C. \end{equation}</span></p> <p>However only when the term in numerator is taken to be <span class="math-container">$\sin(x)$</span> instead of absolute value of <span class="math-container">$\sin(x)$</span> the answer seems to be correct. The conclusion is drawn based on plotting the derivative of answer and comparing it with integrand.</p> Answer 22: <p>The function to be integrated is $$ \sqrt{1+\cos x}=\sqrt{2}\left|\cos\frac{x}{2}\right| $$ so we can as well consider the simpler $$ \int\lvert\cos t\rvert\,dt $$ or, with $t=u+\pi/2$, the even simpler $$ \int|\sin u|\,du $$ One antiderivative is $$ \int_0^u\lvert\sin v\rvert\,dv $$ Note that the function has period $\pi$, so let $u=k\pi+u_0$, with $0\le u_0&lt;\pi$. 
Then $$ \int_0^u\lvert\sin v\rvert\,dv= \int_0^{k\pi}\lvert\sin v\rvert\,dv+ \int_{k\pi}^{k\pi+u_0}\lvert\sin v\rvert\,dv \\=2k+\int_0^{u_0}\sin v\,dv= 2k+1-\cos(u-k\pi) $$ Now do the back substitution; write $k=u\bmod\pi$, if you prefer.</p> Answer 23: <p>Probably not what you intended, but it follows geometrically due to the period of $\cos$ that</p> <p>$$\int_0^x\sqrt{1+\cos(t)}\ dt=(x\text{ mod }2\pi)\int_0^{2\pi}\sqrt{1+\cos(t)}\ dt+\int_0^{(x\text{ mod }2\pi)}\sqrt{1+\cos(t)}\ dt$$</p> <p>That is, we really only need to take the integral for $x\in[0,2\pi)$ and we're set. To do this, let $t=\arccos(u)$ with $x\in[0,2\pi)$.</p> <p>$$\begin{align}\int_0^x\sqrt{1+\cos(t)}\ dt&amp;=-\int_1^{\cos(x)}\sqrt{\frac{1+u}{1-u^2}}\ du\\&amp;=\int_{\arccos(x)}^1\frac1{\sqrt{1-u}}\ du\\&amp;=\left.-2\sqrt{1-u}\vphantom{\frac11}\right|_{\cos(x)}^1\\&amp;=2\sqrt{1-\cos(x)}\end{align}$$</p> <p>which reduces to yours via trig identities. The extension to $x\ge2\pi$ via modular operations is also continuous, and the derivative is always $\sqrt{1+\cos(t)}$.</p> Answer 24: <p><span class="math-container">\begin{equation} \int \sqrt{1+\cos(x)} dx = \int \frac{1+\cos(x)}{\sqrt{1+\cos(x)}} dx \end{equation}</span></p> <p>Consider the substitution, <span class="math-container">$u=1+\cos(x)$</span>. 
Then,</p> <p><span class="math-container">\begin{equation} -\frac{du}{\sqrt{2u-u^2}}=dx \end{equation}</span></p> <p>Thus, the new equation in terms of <span class="math-container">$u$</span> will be</p> <p><span class="math-container">\begin{equation} I= -\int \frac{\sqrt{u}}{\sqrt{2u-u^2}} du \end{equation}</span></p> <p><span class="math-container">\begin{equation} I= -\int \frac{du}{\sqrt{2-u}} \end{equation}</span></p> <p>With the substitution, <span class="math-container">$t=2-u$</span>, we get</p> <p><span class="math-container">\begin{equation} I=\int \frac{dt}{\sqrt{t}} \end{equation}</span></p> <p>By making substitutions for <span class="math-container">$t(u)$</span> and <span class="math-container">$u(x)$</span>, the final answer becomes</p> <p><span class="math-container">\begin{equation} I=2\sqrt{1-\cos(x)}+C. \end{equation}</span></p> <p>Multiplying and dividing by <span class="math-container">$\sqrt{1+\cos(x)}$</span> results in</p> <p><span class="math-container">\begin{equation} \int \sqrt{1+\cos(x)} dx = \frac{2 \sqrt{{\sin}^2(x)}}{\sqrt{1+\cos(x)}}+C. \end{equation}</span></p> <p>However only when the term in numerator is taken to be <span class="math-container">$\sin(x)$</span> instead of absolute value of <span class="math-container">$\sin(x)$</span> the answer seems to be correct. The conclusion is drawn based on plotting the derivative of answer and comparing it with integrand.</p> Answer 25: <p>The function to be integrated is $$ \sqrt{1+\cos x}=\sqrt{2}\left|\cos\frac{x}{2}\right| $$ so we can as well consider the simpler $$ \int\lvert\cos t\rvert\,dt $$ or, with $t=u+\pi/2$, the even simpler $$ \int|\sin u|\,du $$ One antiderivative is $$ \int_0^u\lvert\sin v\rvert\,dv $$ Note that the function has period $\pi$, so let $u=k\pi+u_0$, with $0\le u_0&lt;\pi$. 
Then $$ \int_0^u\lvert\sin v\rvert\,dv= \int_0^{k\pi}\lvert\sin v\rvert\,dv+ \int_{k\pi}^{k\pi+u_0}\lvert\sin v\rvert\,dv \\=2k+\int_0^{u_0}\sin v\,dv= 2k+1-\cos(u-k\pi) $$ Now do the back substitution; write $k=u\bmod\pi$, if you prefer.</p> Answer 26: <p>Probably not what you intended, but it follows geometrically due to the period of $\cos$ that</p> <p>$$\int_0^x\sqrt{1+\cos(t)}\ dt=(x\text{ mod }2\pi)\int_0^{2\pi}\sqrt{1+\cos(t)}\ dt+\int_0^{(x\text{ mod }2\pi)}\sqrt{1+\cos(t)}\ dt$$</p> <p>That is, we really only need to take the integral for $x\in[0,2\pi)$ and we're set. To do this, let $t=\arccos(u)$ with $x\in[0,2\pi)$.</p> <p>$$\begin{align}\int_0^x\sqrt{1+\cos(t)}\ dt&amp;=-\int_1^{\cos(x)}\sqrt{\frac{1+u}{1-u^2}}\ du\\&amp;=\int_{\arccos(x)}^1\frac1{\sqrt{1-u}}\ du\\&amp;=\left.-2\sqrt{1-u}\vphantom{\frac11}\right|_{\cos(x)}^1\\&amp;=2\sqrt{1-\cos(x)}\end{align}$$</p> <p>which reduces to yours via trig identities. The extension to $x\ge2\pi$ via modular operations is also continuous, and the derivative is always $\sqrt{1+\cos(t)}$.</p> Answer 27: <p><span class="math-container">\begin{equation} \int \sqrt{1+\cos(x)} dx = \int \frac{1+\cos(x)}{\sqrt{1+\cos(x)}} dx \end{equation}</span></p> <p>Consider the substitution, <span class="math-container">$u=1+\cos(x)$</span>. 
Then,</p> <p><span class="math-container">\begin{equation} -\frac{du}{\sqrt{2u-u^2}}=dx \end{equation}</span></p> <p>Thus, the new equation in terms of <span class="math-container">$u$</span> will be</p> <p><span class="math-container">\begin{equation} I= -\int \frac{\sqrt{u}}{\sqrt{2u-u^2}} du \end{equation}</span></p> <p><span class="math-container">\begin{equation} I= -\int \frac{du}{\sqrt{2-u}} \end{equation}</span></p> <p>With the substitution, <span class="math-container">$t=2-u$</span>, we get</p> <p><span class="math-container">\begin{equation} I=\int \frac{dt}{\sqrt{t}} \end{equation}</span></p> <p>By making substitutions for <span class="math-container">$t(u)$</span> and <span class="math-container">$u(x)$</span>, the final answer becomes</p> <p><span class="math-container">\begin{equation} I=2\sqrt{1-\cos(x)}+C. \end{equation}</span></p> <p>Multiplying and dividing by <span class="math-container">$\sqrt{1+\cos(x)}$</span> results in</p> <p><span class="math-container">\begin{equation} \int \sqrt{1+\cos(x)} dx = \frac{2 \sqrt{{\sin}^2(x)}}{\sqrt{1+\cos(x)}}+C. \end{equation}</span></p> <p>However only when the term in numerator is taken to be <span class="math-container">$\sin(x)$</span> instead of absolute value of <span class="math-container">$\sin(x)$</span> the answer seems to be correct. The conclusion is drawn based on plotting the derivative of answer and comparing it with integrand.</p> Answer 28: <p>The function to be integrated is $$ \sqrt{1+\cos x}=\sqrt{2}\left|\cos\frac{x}{2}\right| $$ so we can as well consider the simpler $$ \int\lvert\cos t\rvert\,dt $$ or, with $t=u+\pi/2$, the even simpler $$ \int|\sin u|\,du $$ One antiderivative is $$ \int_0^u\lvert\sin v\rvert\,dv $$ Note that the function has period $\pi$, so let $u=k\pi+u_0$, with $0\le u_0&lt;\pi$. 
Then $$ \int_0^u\lvert\sin v\rvert\,dv= \int_0^{k\pi}\lvert\sin v\rvert\,dv+ \int_{k\pi}^{k\pi+u_0}\lvert\sin v\rvert\,dv \\=2k+\int_0^{u_0}\sin v\,dv= 2k+1-\cos(u-k\pi) $$ Now do the back substitution; write $k=u\bmod\pi$, if you prefer.</p> Answer 29: <p>Probably not what you intended, but it follows geometrically due to the period of $\cos$ that</p> <p>$$\int_0^x\sqrt{1+\cos(t)}\ dt=(x\text{ mod }2\pi)\int_0^{2\pi}\sqrt{1+\cos(t)}\ dt+\int_0^{(x\text{ mod }2\pi)}\sqrt{1+\cos(t)}\ dt$$</p> <p>That is, we really only need to take the integral for $x\in[0,2\pi)$ and we're set. To do this, let $t=\arccos(u)$ with $x\in[0,2\pi)$.</p> <p>$$\begin{align}\int_0^x\sqrt{1+\cos(t)}\ dt&amp;=-\int_1^{\cos(x)}\sqrt{\frac{1+u}{1-u^2}}\ du\\&amp;=\int_{\arccos(x)}^1\frac1{\sqrt{1-u}}\ du\\&amp;=\left.-2\sqrt{1-u}\vphantom{\frac11}\right|_{\cos(x)}^1\\&amp;=2\sqrt{1-\cos(x)}\end{align}$$</p> <p>which reduces to yours via trig identities. The extension to $x\ge2\pi$ via modular operations is also continuous, and the derivative is always $\sqrt{1+\cos(t)}$.</p> Answer 30: <p><span class="math-container">\begin{equation} \int \sqrt{1+\cos(x)} dx = \int \frac{1+\cos(x)}{\sqrt{1+\cos(x)}} dx \end{equation}</span></p> <p>Consider the substitution, <span class="math-container">$u=1+\cos(x)$</span>. 
Then,</p> <p><span class="math-container">\begin{equation} -\frac{du}{\sqrt{2u-u^2}}=dx \end{equation}</span></p> <p>Thus, the new equation in terms of <span class="math-container">$u$</span> will be</p> <p><span class="math-container">\begin{equation} I= -\int \frac{\sqrt{u}}{\sqrt{2u-u^2}} du \end{equation}</span></p> <p><span class="math-container">\begin{equation} I= -\int \frac{du}{\sqrt{2-u}} \end{equation}</span></p> <p>With the substitution, <span class="math-container">$t=2-u$</span>, we get</p> <p><span class="math-container">\begin{equation} I=\int \frac{dt}{\sqrt{t}} \end{equation}</span></p> <p>By making substitutions for <span class="math-container">$t(u)$</span> and <span class="math-container">$u(x)$</span>, the final answer becomes</p> <p><span class="math-container">\begin{equation} I=2\sqrt{1-\cos(x)}+C. \end{equation}</span></p> <p>Multiplying and dividing by <span class="math-container">$\sqrt{1+\cos(x)}$</span> results in</p> <p><span class="math-container">\begin{equation} \int \sqrt{1+\cos(x)} dx = \frac{2 \sqrt{{\sin}^2(x)}}{\sqrt{1+\cos(x)}}+C. \end{equation}</span></p> <p>However only when the term in numerator is taken to be <span class="math-container">$\sin(x)$</span> instead of absolute value of <span class="math-container">$\sin(x)$</span> the answer seems to be correct. The conclusion is drawn based on plotting the derivative of answer and comparing it with integrand.</p> Answer 31: <p>The function to be integrated is $$ \sqrt{1+\cos x}=\sqrt{2}\left|\cos\frac{x}{2}\right| $$ so we can as well consider the simpler $$ \int\lvert\cos t\rvert\,dt $$ or, with $t=u+\pi/2$, the even simpler $$ \int|\sin u|\,du $$ One antiderivative is $$ \int_0^u\lvert\sin v\rvert\,dv $$ Note that the function has period $\pi$, so let $u=k\pi+u_0$, with $0\le u_0&lt;\pi$. 
Then $$ \int_0^u\lvert\sin v\rvert\,dv= \int_0^{k\pi}\lvert\sin v\rvert\,dv+ \int_{k\pi}^{k\pi+u_0}\lvert\sin v\rvert\,dv \\=2k+\int_0^{u_0}\sin v\,dv= 2k+1-\cos(u-k\pi) $$ Now do the back substitution; write $k=u\bmod\pi$, if you prefer.</p> Answer 32: <p>Probably not what you intended, but it follows geometrically due to the period of $\cos$ that</p> <p>$$\int_0^x\sqrt{1+\cos(t)}\ dt=(x\text{ mod }2\pi)\int_0^{2\pi}\sqrt{1+\cos(t)}\ dt+\int_0^{(x\text{ mod }2\pi)}\sqrt{1+\cos(t)}\ dt$$</p> <p>That is, we really only need to take the integral for $x\in[0,2\pi)$ and we're set. To do this, let $t=\arccos(u)$ with $x\in[0,2\pi)$.</p> <p>$$\begin{align}\int_0^x\sqrt{1+\cos(t)}\ dt&amp;=-\int_1^{\cos(x)}\sqrt{\frac{1+u}{1-u^2}}\ du\\&amp;=\int_{\arccos(x)}^1\frac1{\sqrt{1-u}}\ du\\&amp;=\left.-2\sqrt{1-u}\vphantom{\frac11}\right|_{\cos(x)}^1\\&amp;=2\sqrt{1-\cos(x)}\end{align}$$</p> <p>which reduces to yours via trig identities. The extension to $x\ge2\pi$ via modular operations is also continuous, and the derivative is always $\sqrt{1+\cos(t)}$.</p> Answer 33: <p><span class="math-container">\begin{equation} \int \sqrt{1+\cos(x)} dx = \int \frac{1+\cos(x)}{\sqrt{1+\cos(x)}} dx \end{equation}</span></p> <p>Consider the substitution, <span class="math-container">$u=1+\cos(x)$</span>. 
Then,</p> <p><span class="math-container">\begin{equation} -\frac{du}{\sqrt{2u-u^2}}=dx \end{equation}</span></p> <p>Thus, the new equation in terms of <span class="math-container">$u$</span> will be</p> <p><span class="math-container">\begin{equation} I= -\int \frac{\sqrt{u}}{\sqrt{2u-u^2}} du \end{equation}</span></p> <p><span class="math-container">\begin{equation} I= -\int \frac{du}{\sqrt{2-u}} \end{equation}</span></p> <p>With the substitution, <span class="math-container">$t=2-u$</span>, we get</p> <p><span class="math-container">\begin{equation} I=\int \frac{dt}{\sqrt{t}} \end{equation}</span></p> <p>By making substitutions for <span class="math-container">$t(u)$</span> and <span class="math-container">$u(x)$</span>, the final answer becomes</p> <p><span class="math-container">\begin{equation} I=2\sqrt{1-\cos(x)}+C. \end{equation}</span></p> <p>Multiplying and dividing by <span class="math-container">$\sqrt{1+\cos(x)}$</span> results in</p> <p><span class="math-container">\begin{equation} \int \sqrt{1+\cos(x)} dx = \frac{2 \sqrt{{\sin}^2(x)}}{\sqrt{1+\cos(x)}}+C. \end{equation}</span></p> <p>However, the answer appears to be correct only when the term in the numerator is taken to be <span class="math-container">$\sin(x)$</span> rather than the absolute value of <span class="math-container">$\sin(x)$</span>. This conclusion is drawn from plotting the derivative of the answer and comparing it with the integrand.</p>
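A quick numerical sanity check of the closed forms above (not part of any of the original answers; the helper names `F` and `G` are ad hoc, and only the Python standard library is used):

```python
import math

def F(x):
    # Closed form from the substitution answer: F(x) = 2*sqrt(1 - cos x)
    return 2.0 * math.sqrt(1.0 - math.cos(x))

def integrand(x):
    return math.sqrt(1.0 + math.cos(x))

# 1) F'(x) = sin(x)/sqrt(1 - cos x), while the integrand equals
#    |sin x|/sqrt(1 - cos x), so the two agree only where sin x >= 0.
h = 1e-6
for x in (0.5, 1.5, 3.0):            # points in (0, pi): sin x > 0
    dF = (F(x + h) - F(x - h)) / (2.0 * h)   # central difference
    assert abs(dF - integrand(x)) < 1e-6
for x in (4.0, 5.0, 6.0):            # points in (pi, 2*pi): sin x < 0
    dF = (F(x + h) - F(x - h)) / (2.0 * h)
    assert abs(dF + integrand(x)) < 1e-6     # F' is MINUS the integrand here

# 2) The |sin| answer: for u = k*pi + u0 with 0 <= u0 < pi,
#    int_0^u |sin v| dv = 2k + 1 - cos(u - k*pi).
def G(u):
    k = math.floor(u / math.pi)
    return 2.0 * k + 1.0 - math.cos(u - k * math.pi)

u_end = 3.5 * math.pi                # k = 3, u0 = pi/2, so G(u_end) = 7
n = 200_000
step = u_end / n
# composite trapezoid rule for int_0^{u_end} |sin v| dv
numeric = 0.5 * (abs(math.sin(0.0)) + abs(math.sin(u_end))) * step
numeric += sum(abs(math.sin(i * step)) for i in range(1, n)) * step
assert abs(numeric - G(u_end)) < 1e-6
print("both closed forms check out numerically")
```

The sign flip on $(\pi,2\pi)$ is precisely the $\sin(x)$-versus-$|\sin(x)|$ issue noted in the substitution answer.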
Galois groupoid of covering map: are endomorphisms of universal covering spaces automorphisms?
2,078,998
7
https://math.stackexchange.com/questions/2078998/galois-groupoid-of-covering-map-are-endomorphisms-of-universal-covering-spaces
<p>Suppose we're dealing with nice spaces which are the coproducts of their connected components. Then there's a functor $\Pi_0:\mathsf{Top}\rightarrow \mathsf{Set}$ taking a space to its set of connected components.</p> <p>Let $p:E\to B$ be a covering map. The Galois "groupoid" of $p$ $$\Pi_0(E\times _B E\times _B E)\rightarrow \Pi_0(E\times _BE) \rightrightarrows \Pi_0(E)$$ is defined as the image under $\Pi _0$ of $$E\times _B E\times _B E\rightarrow E\times _BE \rightrightarrows E$$ where the left arrow forgets the middle element of a triple $(x,y,z)$, and the arrows on the right are the pullback projections (the unit arrow is given by the diagonal).</p> <p>Theorem 6.7.4 of Borceux and Janelidze's <em>Galois Theories</em> says that if $p$ is a universal covering map with connected $E$ (and therefore $B$), then the Galois groupoid is a group isomorphic to $\mathsf{Aut}(p)$, the group of automorphisms of $p$ (over $B$).</p> <p>I tried to calculate the Galois "groupoid" of a covering map with connected $E$ (and therefore $B$) without any further assumptions and found it isomorphic to $\mathsf{End}(p)$:</p> <p>The connectedness of $E$ is equivalent to $\Pi_0(E)=\bf 1$ so there's only one object and the "groupoid" <s>is indeed a group</s> has a single object. To calculate $\Pi_0(E\times _BE)$ we use the distributivity of the category of spaces, the fact that covering maps have discrete fibers, and the fact that $\Pi_0$ is left adjoint to discrete spaces. This shows $$\Pi_0(E\times _BE)\cong \Pi_0(\coprod_b(p^{-1}\left\{ b\right\}\times p^{-1}\left\{ b\right\} ))\cong \coprod_b \Pi_0(p^{-1}\left\{ b\right\}\times p^{-1}\left\{ b\right\}).$$</p> <p>Now suppose the fibers are of size $n$. Then $|\Pi_0(p^{-1}\left\{ b\right\}\times p^{-1}\left\{ b\right\})|=n^2$, and it seems like $\Pi_0(E\times _BE)$ consists of endomorphisms of $p$.</p> <p>Is this reasoning correct? Does it imply that endomorphisms of <em>universal</em> covering maps are automorphisms?
What's the intuition for this?</p> <p><strong>Added.</strong> My original formulation was misleading: the Galois "groupoid" is not generally a groupoid unless $p$ is a principal bundle. I think it's just an internal category in general. (This sits well with the fact that $\Pi_0(E\times _BE)$ is not a group for a general covering map.)</p>
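For concreteness, a small worked example that is not in the original post (a sketch only; the identifications are just indicated): the connected $n$-fold cover of the circle, $p : S^1 \to S^1$, $p(z) = z^n$, with $S^1 \subset \mathbb{C}$.

```latex
\begin{align*}
E \times_B E
  &= \{ (z,w) \in S^1 \times S^1 : z^n = w^n \}
   = \coprod_{\zeta^n = 1} \{ (z, \zeta z) : z \in S^1 \},\\
\Pi_0(E \times_B E)
  &\cong \{ \zeta \in \mathbb{C} : \zeta^n = 1 \} \cong \mathbb{Z}/n .
\end{align*}
% The component indexed by \zeta corresponds to the deck transformation
% z \mapsto \zeta z, so here \Pi_0(E \times_B E) \cong \mathrm{End}(p)
% = \mathrm{Aut}(p) \cong \mathbb{Z}/n.
```

Note that each fibre contributes $n^2$ ordered pairs, but globally these glue into only $n$ connected components.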
[ { "answer_id": 2079699, "score": 1, "text": "<p>Your definition of the Galois groupoid is incomplete -- it is supposed to be an internal groupoid not an internal category, cf. Example 4.6.2 and Lemma 5.1.22. </p>\n\n<hr>\n\n<p>Explicitly, in any category $\\mathcal C$, the double and triple pullback of a morphism $E\\xrightarrow{p}B$ naturally equips $E$ with the structure of an <strong>internal groupoid</strong> as follows.</p>\n\n<ol>\n<li>The internal domain and codomain maps are the natural projections $E_{p_1}\\times E_{p_2}\\underset{\\pi_2}{\\overset{\\pi_1}\\rightrightarrows} E$. Here $p_1,p_2$ are both $E\\xrightarrow{p}B$, and the natural projections are the ones universally satisfying $\\pi_1\\circ p_1=\\pi_2\\circ p_2$. The identity map which is required to split both the domain and codomain is the diagonal $E\\xrightarrow{\\Delta}E_{p_1}\\times E_{p_2}$ uniquely satisfying $\\pi_1\\circ\\Delta=\\mathrm{id}=\\pi_2\\circ\\Delta$.</li>\n<li>The partially-defined composition map has domain the pullback of $E_{p_1}\\times E_{p_2}\\xrightarrow{\\pi_2}E$ along $E_{p_3}\\times E_{p_4}\\xrightarrow{\\pi_3}E$ (meaning: combines a pair of morphisms so that the codomain of the first matches the domain of the second). The composition is given by $(E_{p_1}\\times E_{p_2})_{\\pi_2}\\times(E_{p_3}\\times E_{p_4})_{\\pi_3}\\xrightarrow{(\\pi_1\\circ\\epsilon_1,\\pi_4\\circ\\epsilon_2)}E_p\\times E_p$ where $(E_{p_1}\\times E_{p_2})_{\\pi_2}\\times(E_{p_3}\\times E_{p_4})_{\\pi_3}\\underset{\\epsilon_2}{\\overset{\\epsilon_1}\\rightrightarrows} E_p\\times E_p$ are the projection morphisms of the domain of the composition map. In other words, the composition map is the unique one that combines the two morphisms into a morphism by eliminating their matching codomain and domain, and keeping the resulting domain and codomain.</li>\n<li>There is also an inversion map which inverts morphisms. 
It is given by $E_{p_1}\\times E_{p_2}\\xrightarrow{(\\pi_2,\\pi_1)}E_{p_1}\\times E_{p_2}$, i.e. it is the unique morphism that takes a morphism and spits out a morphism with switched domain and codomain.</li>\n</ol>\n\n<p>If you are so inclined, you can check by hand that all the various morphisms satisfy the equations of an internal groupoid, or more systematically you can relate iterated pullbacks to a limit-preserving functor from the categories of finite subsets of $\\{0,1,\\dots\\}$.</p>\n\n<hr>\n\n<p>The <strong>Galois groupoid</strong> is the image of the internal groupoid so described under the functor $\\Pi_0$ for $E\\xrightarrow{p}B$ a morphism of relative Galois descent for the functor $\\Pi_0$. Being a descent morphism means that $E\\xrightarrow{p}B$ makes the pullback functor of a category of morphisms over $B$ monadic over a category of morphisms over $E$ (so it identifies morphisms over $B$ with well-structured morphisms over $E$).</p>\n\n<p>Being relative Galois descent additionally requires that certain morphisms to $B$ are split relative to $\\Pi_0$ by $E\\xrightarrow{p}B$, the morphisms and the notion of splitting arising naturally from the existence of a right adjoint to $\\Pi_0$.</p>\n\n<p>The point then is that $\\Pi_0(E\\times_B E)$ <em>always</em> has the structure of an internal groupoid on objects $\\Pi_0(E)$.
However, it is only in the case when the morphism is of relative Galois descent that this groupoid is well-behaved in the sense that there is an equivalence of categories between internal presheaves for this groupoid and <em>all</em> morphisms to $B$ split relative to $\\Pi_0$ by $E\\xrightarrow{p}B$; in the case where the groupoid is a group the equivalence is between $G$-sets and such morphisms, a special case of $G$-sets being normal subgroups.</p>\n" }, { "answer_id": 3052112, "score": 0, "text": "<p>A <em>weakly initial</em> covering morphism of <span class=\"math-container\">$B$</span> is a weakly initial object in the category of effective descent covering morphisms of <span class=\"math-container\">$B$</span>.</p>\n\n<p>A <em>weakly universal covering morphism</em> of <span class=\"math-container\">$B$</span> is a covering morphism which trivializes all other covering morphisms of <span class=\"math-container\">$B$</span>.</p>\n\n<p>A covering morphism is <em>connected</em> if it is inverted by <span class=\"math-container\">$\\Pi_0$</span>.</p>\n\n<p><strong>Theorem.</strong> If <span class=\"math-container\">$B$</span> admits a weakly universal covering, then any endomorphism of a connected weakly initial covering morphism is an automorphism.</p>\n" }, { "answer_id": 2079699, "score": 1, "text": "<p>Your definition of the Galois groupoid is incomplete -- it is supposed to be an internal groupoid not an internal category, cf. Example 4.6.2 and Lemma 5.1.22. </p>\n\n<hr>\n\n<p>Explicitly, in any category $\\mathcal C$, the double and triple pullback of a morphism $E\\xrightarrow{p}B$ naturally equips $E$ with the structure of an <strong>internal groupoid</strong> as follows.</p>\n\n<ol>\n<li>The internal domain and codomain maps are the natural projections $E_{p_1}\\times E_{p_2}\\underset{\\pi_2}{\\overset{\\pi_1}\\rightrightarrows} E$.
Here $p_1,p_2$ are both $E\\xrightarrow{p}B$, and the natural projections are the ones universally satisfying $\\pi_1\\circ p_1=\\pi_2\\circ p_2$. The identity map which is required to split both the domain and codomain is the diagonal $E\\xrightarrow{\\Delta}E_{p_1}\\times E_{p_2}$ uniquely satisfying $\\pi_1\\circ\\Delta=\\mathrm{id}=\\pi_2\\circ\\Delta$.</li>\n<li>The partially-defined composition map has domain the pullback of $E_{p_1}\\times E_{p_2}\\xrightarrow{\\pi_2}E$ along $E_{p_3}\\times E_{p_4}\\xrightarrow{\\pi_3}E$ (meaning: combines a pair of morphisms so that the codomain of the first matches the domain of the second). The composition is given by $(E_{p_1}\\times E_{p_2})_{\\pi_2}\\times(E_{p_3}\\times E_{p_4})_{\\pi_3}\\xrightarrow{(\\pi_1\\circ\\epsilon_1,\\pi_4\\circ\\epsilon_2)}E_p\\times E_p$ where $(E_{p_1}\\times E_{p_2})_{\\pi_2}\\times(E_{p_3}\\times E_{p_4})_{\\pi_3}\\underset{\\epsilon_2}{\\overset{\\epsilon_1}\\rightrightarrows} E_p\\times E_p$ are the projection morphisms of the domain of the composition map. In other words, the composition map is the unique one that combines the two morphisms into a morphism by eliminating their matching codomain and domain, and keeping the resulting domain and codomain.</li>\n<li>There is also an inversion map which inverts morphisms. It is given by $E_{p_1}\\times E_{p_2}\\xrightarrow{(\\pi_2,\\pi_1)}E_{p_1}\\times E_{p_2}$, i.e. 
it is the unique morphism that takes a morphism and spits out a morphism with switched domain and codomain.</li>\n</ol>\n\n<p>If you are so inclined, you can check by hand that all the various morphisms satisfy the equations of an internal groupoid, or more systematically you can relate iterated pullbacks to a limit-preserving functor from the categories of finite subsets of $\\{0,1,\\dots,\\}$.</p>\n\n<hr>\n\n<p>The <strong>Galois groupoid</strong> is the image of the internal groupoid so described under the functor $\\Pi_0$ for $E\\xrightarrow{p}B$ a morphism of relative Galois descent for the functor $\\Pi_0$. Being descent means that $E\\xrightarrow{p}B$ makes the pullback functor of a category of morphisms over $B$ monadic over a category of morphisms over $E$ (so it identifies morphisms over $B$ with well-structured morphisms over $E$).</p>\n\n<p>Being relative Galois descent additionally requires that certain morphisms to $B$ are split relative to $\\Pi_0$ by $E\\xrightarrow{p}B$, the morphisms and the notion of splitting arising naturally from the existence of a right adjoint to $\\Pi_0$.</p>\n\n<p>The point then is that $\\Pi_0(E\\times_B E)$ <em>always</em> has the structure of an internal groupoid on objects $\\Pi_0(E)$. 
However, it is only in the case when the morphism is of relative Galois descent that this groupoid is a well-behaved in the sense that there is an equivalence of categories between internal presheaves for this groupoid and <em>all</em> morphisms to $B$ split relative to $\\Pi_0$ by $E\\xrightarrow{p}B$; in the case where the groupoid is a group the equivalence is between $G$-sets and such morphisms, a special case of $G$-sets being normal subgroups.</p>\n" }, { "answer_id": 3052112, "score": 0, "text": "<p>A <em>weakly initial</em> covering morphism of <span class=\"math-container\">$B$</span> is a weakly initial object in the category of effective descent covering morphisms of <span class=\"math-container\">$B$</span>.</p>\n\n<p>A <em>weakly universal covering morphism</em> of <span class=\"math-container\">$B$</span> is a covering morphism which trivializes all other covering morphisms of <span class=\"math-container\">$B$</span>.</p>\n\n<p>A covering morphism is <em>connected</em> if it is inverted by <span class=\"math-container\">$\\Pi_0$</span>.</p>\n\n<p><strong>Theorem.</strong> If <span class=\"math-container\">$B$</span> admits a weakly universal covering, then any endomorphism of a connected weakly initial covering morphisms is an automorphism.</p>\n" }, { "answer_id": 2079699, "score": 1, "text": "<p>Your definition of the Galois groupoid is incomplete -- it is supposed to be an internal groupoid not an internal category, cf. Example 4.6.2 and Lemma 5.1.22. </p>\n\n<hr>\n\n<p>Explicitly, in any category $\\mathcal C$, the double and triple pullback of a morphism $E\\xrightarrow{p}B$ naturally equips $E$ with the structure of an <strong>internal groupoid</strong> as follows.</p>\n\n<ol>\n<li>The internal domain and codomain maps are the natural projections $E_{p_1}\\times E_{p_2}\\underset{\\pi_2}{\\overset{\\pi_1}\\rightrightarrows} E$. 
Here $p_1,p_2$ are both $E\\xrightarrow{p}B$, and the natural projections are the ones universally satisfying $\\pi_1\\circ p_1=\\pi_2\\circ p_2$. The identity map which is required to split both the domain and codomain is the diagonal $E\\xrightarrow{\\Delta}E_{p_1}\\times E_{p_2}$ uniquely satisfying $\\pi_1\\circ\\Delta=\\mathrm{id}=\\pi_2\\circ\\Delta$.</li>\n<li>The partially-defined composition map has domain the pullback of $E_{p_1}\\times E_{p_2}\\xrightarrow{\\pi_2}E$ along $E_{p_3}\\times E_{p_4}\\xrightarrow{\\pi_3}E$ (meaning: combines a pair of morphisms so that the codomain of the first matches the domain of the second). The composition is given by $(E_{p_1}\\times E_{p_2})_{\\pi_2}\\times(E_{p_3}\\times E_{p_4})_{\\pi_3}\\xrightarrow{(\\pi_1\\circ\\epsilon_1,\\pi_4\\circ\\epsilon_2)}E_p\\times E_p$ where $(E_{p_1}\\times E_{p_2})_{\\pi_2}\\times(E_{p_3}\\times E_{p_4})_{\\pi_3}\\underset{\\epsilon_2}{\\overset{\\epsilon_1}\\rightrightarrows} E_p\\times E_p$ are the projection morphisms of the domain of the composition map. In other words, the composition map is the unique one that combines the two morphisms into a morphism by eliminating their matching codomain and domain, and keeping the resulting domain and codomain.</li>\n<li>There is also an inversion map which inverts morphisms. It is given by $E_{p_1}\\times E_{p_2}\\xrightarrow{(\\pi_2,\\pi_1)}E_{p_1}\\times E_{p_2}$, i.e. 
it is the unique morphism that takes a morphism and spits out a morphism with switched domain and codomain.</li>\n</ol>\n\n<p>If you are so inclined, you can check by hand that all the various morphisms satisfy the equations of an internal groupoid, or more systematically you can relate iterated pullbacks to a limit-preserving functor from the categories of finite subsets of $\\{0,1,\\dots,\\}$.</p>\n\n<hr>\n\n<p>The <strong>Galois groupoid</strong> is the image of the internal groupoid so described under the functor $\\Pi_0$ for $E\\xrightarrow{p}B$ a morphism of relative Galois descent for the functor $\\Pi_0$. Being descent means that $E\\xrightarrow{p}B$ makes the pullback functor of a category of morphisms over $B$ monadic over a category of morphisms over $E$ (so it identifies morphisms over $B$ with well-structured morphisms over $E$).</p>\n\n<p>Being relative Galois descent additionally requires that certain morphisms to $B$ are split relative to $\\Pi_0$ by $E\\xrightarrow{p}B$, the morphisms and the notion of splitting arising naturally from the existence of a right adjoint to $\\Pi_0$.</p>\n\n<p>The point then is that $\\Pi_0(E\\times_B E)$ <em>always</em> has the structure of an internal groupoid on objects $\\Pi_0(E)$. 
However, it is only in the case when the morphism is of relative Galois descent that this groupoid is a well-behaved in the sense that there is an equivalence of categories between internal presheaves for this groupoid and <em>all</em> morphisms to $B$ split relative to $\\Pi_0$ by $E\\xrightarrow{p}B$; in the case where the groupoid is a group the equivalence is between $G$-sets and such morphisms, a special case of $G$-sets being normal subgroups.</p>\n" }, { "answer_id": 3052112, "score": 0, "text": "<p>A <em>weakly initial</em> covering morphism of <span class=\"math-container\">$B$</span> is a weakly initial object in the category of effective descent covering morphisms of <span class=\"math-container\">$B$</span>.</p>\n\n<p>A <em>weakly universal covering morphism</em> of <span class=\"math-container\">$B$</span> is a covering morphism which trivializes all other covering morphisms of <span class=\"math-container\">$B$</span>.</p>\n\n<p>A covering morphism is <em>connected</em> if it is inverted by <span class=\"math-container\">$\\Pi_0$</span>.</p>\n\n<p><strong>Theorem.</strong> If <span class=\"math-container\">$B$</span> admits a weakly universal covering, then any endomorphism of a connected weakly initial covering morphisms is an automorphism.</p>\n" }, { "answer_id": 2079699, "score": 1, "text": "<p>Your definition of the Galois groupoid is incomplete -- it is supposed to be an internal groupoid not an internal category, cf. Example 4.6.2 and Lemma 5.1.22. </p>\n\n<hr>\n\n<p>Explicitly, in any category $\\mathcal C$, the double and triple pullback of a morphism $E\\xrightarrow{p}B$ naturally equips $E$ with the structure of an <strong>internal groupoid</strong> as follows.</p>\n\n<ol>\n<li>The internal domain and codomain maps are the natural projections $E_{p_1}\\times E_{p_2}\\underset{\\pi_2}{\\overset{\\pi_1}\\rightrightarrows} E$. 
Here $p_1,p_2$ are both $E\\xrightarrow{p}B$, and the natural projections are the ones universally satisfying $\\pi_1\\circ p_1=\\pi_2\\circ p_2$. The identity map which is required to split both the domain and codomain is the diagonal $E\\xrightarrow{\\Delta}E_{p_1}\\times E_{p_2}$ uniquely satisfying $\\pi_1\\circ\\Delta=\\mathrm{id}=\\pi_2\\circ\\Delta$.</li>\n<li>The partially-defined composition map has domain the pullback of $E_{p_1}\\times E_{p_2}\\xrightarrow{\\pi_2}E$ along $E_{p_3}\\times E_{p_4}\\xrightarrow{\\pi_3}E$ (meaning: combines a pair of morphisms so that the codomain of the first matches the domain of the second). The composition is given by $(E_{p_1}\\times E_{p_2})_{\\pi_2}\\times(E_{p_3}\\times E_{p_4})_{\\pi_3}\\xrightarrow{(\\pi_1\\circ\\epsilon_1,\\pi_4\\circ\\epsilon_2)}E_p\\times E_p$ where $(E_{p_1}\\times E_{p_2})_{\\pi_2}\\times(E_{p_3}\\times E_{p_4})_{\\pi_3}\\underset{\\epsilon_2}{\\overset{\\epsilon_1}\\rightrightarrows} E_p\\times E_p$ are the projection morphisms of the domain of the composition map. In other words, the composition map is the unique one that combines the two morphisms into a morphism by eliminating their matching codomain and domain, and keeping the resulting domain and codomain.</li>\n<li>There is also an inversion map which inverts morphisms. It is given by $E_{p_1}\\times E_{p_2}\\xrightarrow{(\\pi_2,\\pi_1)}E_{p_1}\\times E_{p_2}$, i.e. 
it is the unique morphism that takes a morphism and spits out a morphism with switched domain and codomain.</li>\n</ol>\n\n<p>If you are so inclined, you can check by hand that all the various morphisms satisfy the equations of an internal groupoid, or more systematically you can relate iterated pullbacks to a limit-preserving functor from the categories of finite subsets of $\\{0,1,\\dots,\\}$.</p>\n\n<hr>\n\n<p>The <strong>Galois groupoid</strong> is the image of the internal groupoid so described under the functor $\\Pi_0$ for $E\\xrightarrow{p}B$ a morphism of relative Galois descent for the functor $\\Pi_0$. Being descent means that $E\\xrightarrow{p}B$ makes the pullback functor of a category of morphisms over $B$ monadic over a category of morphisms over $E$ (so it identifies morphisms over $B$ with well-structured morphisms over $E$).</p>\n\n<p>Being relative Galois descent additionally requires that certain morphisms to $B$ are split relative to $\\Pi_0$ by $E\\xrightarrow{p}B$, the morphisms and the notion of splitting arising naturally from the existence of a right adjoint to $\\Pi_0$.</p>\n\n<p>The point then is that $\\Pi_0(E\\times_B E)$ <em>always</em> has the structure of an internal groupoid on objects $\\Pi_0(E)$. 
However, it is only in the case when the morphism is of relative Galois descent that this groupoid is a well-behaved in the sense that there is an equivalence of categories between internal presheaves for this groupoid and <em>all</em> morphisms to $B$ split relative to $\\Pi_0$ by $E\\xrightarrow{p}B$; in the case where the groupoid is a group the equivalence is between $G$-sets and such morphisms, a special case of $G$-sets being normal subgroups.</p>\n" }, { "answer_id": 3052112, "score": 0, "text": "<p>A <em>weakly initial</em> covering morphism of <span class=\"math-container\">$B$</span> is a weakly initial object in the category of effective descent covering morphisms of <span class=\"math-container\">$B$</span>.</p>\n\n<p>A <em>weakly universal covering morphism</em> of <span class=\"math-container\">$B$</span> is a covering morphism which trivializes all other covering morphisms of <span class=\"math-container\">$B$</span>.</p>\n\n<p>A covering morphism is <em>connected</em> if it is inverted by <span class=\"math-container\">$\\Pi_0$</span>.</p>\n\n<p><strong>Theorem.</strong> If <span class=\"math-container\">$B$</span> admits a weakly universal covering, then any endomorphism of a connected weakly initial covering morphisms is an automorphism.</p>\n" }, { "answer_id": 2079699, "score": 1, "text": "<p>Your definition of the Galois groupoid is incomplete -- it is supposed to be an internal groupoid not an internal category, cf. Example 4.6.2 and Lemma 5.1.22. </p>\n\n<hr>\n\n<p>Explicitly, in any category $\\mathcal C$, the double and triple pullback of a morphism $E\\xrightarrow{p}B$ naturally equips $E$ with the structure of an <strong>internal groupoid</strong> as follows.</p>\n\n<ol>\n<li>The internal domain and codomain maps are the natural projections $E_{p_1}\\times E_{p_2}\\underset{\\pi_2}{\\overset{\\pi_1}\\rightrightarrows} E$. 
Here $p_1,p_2$ are both $E\\xrightarrow{p}B$, and the natural projections are the ones universally satisfying $\\pi_1\\circ p_1=\\pi_2\\circ p_2$. The identity map which is required to split both the domain and codomain is the diagonal $E\\xrightarrow{\\Delta}E_{p_1}\\times E_{p_2}$ uniquely satisfying $\\pi_1\\circ\\Delta=\\mathrm{id}=\\pi_2\\circ\\Delta$.</li>\n<li>The partially-defined composition map has domain the pullback of $E_{p_1}\\times E_{p_2}\\xrightarrow{\\pi_2}E$ along $E_{p_3}\\times E_{p_4}\\xrightarrow{\\pi_3}E$ (meaning: combines a pair of morphisms so that the codomain of the first matches the domain of the second). The composition is given by $(E_{p_1}\\times E_{p_2})_{\\pi_2}\\times(E_{p_3}\\times E_{p_4})_{\\pi_3}\\xrightarrow{(\\pi_1\\circ\\epsilon_1,\\pi_4\\circ\\epsilon_2)}E_p\\times E_p$ where $(E_{p_1}\\times E_{p_2})_{\\pi_2}\\times(E_{p_3}\\times E_{p_4})_{\\pi_3}\\underset{\\epsilon_2}{\\overset{\\epsilon_1}\\rightrightarrows} E_p\\times E_p$ are the projection morphisms of the domain of the composition map. In other words, the composition map is the unique one that combines the two morphisms into a morphism by eliminating their matching codomain and domain, and keeping the resulting domain and codomain.</li>\n<li>There is also an inversion map which inverts morphisms. It is given by $E_{p_1}\\times E_{p_2}\\xrightarrow{(\\pi_2,\\pi_1)}E_{p_1}\\times E_{p_2}$, i.e. 
it is the unique morphism that takes a morphism and spits out a morphism with switched domain and codomain.</li>\n</ol>\n\n<p>If you are so inclined, you can check by hand that all the various morphisms satisfy the equations of an internal groupoid, or more systematically you can relate iterated pullbacks to a limit-preserving functor from the categories of finite subsets of $\\{0,1,\\dots,\\}$.</p>\n\n<hr>\n\n<p>The <strong>Galois groupoid</strong> is the image of the internal groupoid so described under the functor $\\Pi_0$ for $E\\xrightarrow{p}B$ a morphism of relative Galois descent for the functor $\\Pi_0$. Being descent means that $E\\xrightarrow{p}B$ makes the pullback functor of a category of morphisms over $B$ monadic over a category of morphisms over $E$ (so it identifies morphisms over $B$ with well-structured morphisms over $E$).</p>\n\n<p>Being relative Galois descent additionally requires that certain morphisms to $B$ are split relative to $\\Pi_0$ by $E\\xrightarrow{p}B$, the morphisms and the notion of splitting arising naturally from the existence of a right adjoint to $\\Pi_0$.</p>\n\n<p>The point then is that $\\Pi_0(E\\times_B E)$ <em>always</em> has the structure of an internal groupoid on objects $\\Pi_0(E)$. 
However, it is only in the case when the morphism is of relative Galois descent that this groupoid is a well-behaved in the sense that there is an equivalence of categories between internal presheaves for this groupoid and <em>all</em> morphisms to $B$ split relative to $\\Pi_0$ by $E\\xrightarrow{p}B$; in the case where the groupoid is a group the equivalence is between $G$-sets and such morphisms, a special case of $G$-sets being normal subgroups.</p>\n" }, { "answer_id": 3052112, "score": 0, "text": "<p>A <em>weakly initial</em> covering morphism of <span class=\"math-container\">$B$</span> is a weakly initial object in the category of effective descent covering morphisms of <span class=\"math-container\">$B$</span>.</p>\n\n<p>A <em>weakly universal covering morphism</em> of <span class=\"math-container\">$B$</span> is a covering morphism which trivializes all other covering morphisms of <span class=\"math-container\">$B$</span>.</p>\n\n<p>A covering morphism is <em>connected</em> if it is inverted by <span class=\"math-container\">$\\Pi_0$</span>.</p>\n\n<p><strong>Theorem.</strong> If <span class=\"math-container\">$B$</span> admits a weakly universal covering, then any endomorphism of a connected weakly initial covering morphisms is an automorphism.</p>\n" }, { "answer_id": 2079699, "score": 1, "text": "<p>Your definition of the Galois groupoid is incomplete -- it is supposed to be an internal groupoid not an internal category, cf. Example 4.6.2 and Lemma 5.1.22. </p>\n\n<hr>\n\n<p>Explicitly, in any category $\\mathcal C$, the double and triple pullback of a morphism $E\\xrightarrow{p}B$ naturally equips $E$ with the structure of an <strong>internal groupoid</strong> as follows.</p>\n\n<ol>\n<li>The internal domain and codomain maps are the natural projections $E_{p_1}\\times E_{p_2}\\underset{\\pi_2}{\\overset{\\pi_1}\\rightrightarrows} E$. 
Here $p_1,p_2$ are both $E\\xrightarrow{p}B$, and the natural projections are the ones universally satisfying $\\pi_1\\circ p_1=\\pi_2\\circ p_2$. The identity map which is required to split both the domain and codomain is the diagonal $E\\xrightarrow{\\Delta}E_{p_1}\\times E_{p_2}$ uniquely satisfying $\\pi_1\\circ\\Delta=\\mathrm{id}=\\pi_2\\circ\\Delta$.</li>\n<li>The partially-defined composition map has domain the pullback of $E_{p_1}\\times E_{p_2}\\xrightarrow{\\pi_2}E$ along $E_{p_3}\\times E_{p_4}\\xrightarrow{\\pi_3}E$ (meaning: combines a pair of morphisms so that the codomain of the first matches the domain of the second). The composition is given by $(E_{p_1}\\times E_{p_2})_{\\pi_2}\\times(E_{p_3}\\times E_{p_4})_{\\pi_3}\\xrightarrow{(\\pi_1\\circ\\epsilon_1,\\pi_4\\circ\\epsilon_2)}E_p\\times E_p$ where $(E_{p_1}\\times E_{p_2})_{\\pi_2}\\times(E_{p_3}\\times E_{p_4})_{\\pi_3}\\underset{\\epsilon_2}{\\overset{\\epsilon_1}\\rightrightarrows} E_p\\times E_p$ are the projection morphisms of the domain of the composition map. In other words, the composition map is the unique one that combines the two morphisms into a morphism by eliminating their matching codomain and domain, and keeping the resulting domain and codomain.</li>\n<li>There is also an inversion map which inverts morphisms. It is given by $E_{p_1}\\times E_{p_2}\\xrightarrow{(\\pi_2,\\pi_1)}E_{p_1}\\times E_{p_2}$, i.e. 
it is the unique morphism that takes a morphism and spits out a morphism with switched domain and codomain.</li>\n</ol>\n\n<p>If you are so inclined, you can check by hand that all the various morphisms satisfy the equations of an internal groupoid, or more systematically you can relate iterated pullbacks to a limit-preserving functor from the categories of finite subsets of $\\{0,1,\\dots,\\}$.</p>\n\n<hr>\n\n<p>The <strong>Galois groupoid</strong> is the image of the internal groupoid so described under the functor $\\Pi_0$ for $E\\xrightarrow{p}B$ a morphism of relative Galois descent for the functor $\\Pi_0$. Being descent means that $E\\xrightarrow{p}B$ makes the pullback functor of a category of morphisms over $B$ monadic over a category of morphisms over $E$ (so it identifies morphisms over $B$ with well-structured morphisms over $E$).</p>\n\n<p>Being relative Galois descent additionally requires that certain morphisms to $B$ are split relative to $\\Pi_0$ by $E\\xrightarrow{p}B$, the morphisms and the notion of splitting arising naturally from the existence of a right adjoint to $\\Pi_0$.</p>\n\n<p>The point then is that $\\Pi_0(E\\times_B E)$ <em>always</em> has the structure of an internal groupoid on objects $\\Pi_0(E)$. 
However, it is only in the case when the morphism is of relative Galois descent that this groupoid is a well-behaved in the sense that there is an equivalence of categories between internal presheaves for this groupoid and <em>all</em> morphisms to $B$ split relative to $\\Pi_0$ by $E\\xrightarrow{p}B$; in the case where the groupoid is a group the equivalence is between $G$-sets and such morphisms, a special case of $G$-sets being normal subgroups.</p>\n" }, { "answer_id": 3052112, "score": 0, "text": "<p>A <em>weakly initial</em> covering morphism of <span class=\"math-container\">$B$</span> is a weakly initial object in the category of effective descent covering morphisms of <span class=\"math-container\">$B$</span>.</p>\n\n<p>A <em>weakly universal covering morphism</em> of <span class=\"math-container\">$B$</span> is a covering morphism which trivializes all other covering morphisms of <span class=\"math-container\">$B$</span>.</p>\n\n<p>A covering morphism is <em>connected</em> if it is inverted by <span class=\"math-container\">$\\Pi_0$</span>.</p>\n\n<p><strong>Theorem.</strong> If <span class=\"math-container\">$B$</span> admits a weakly universal covering, then any endomorphism of a connected weakly initial covering morphisms is an automorphism.</p>\n" } ]
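The internal-groupoid structure that answer 2079699 builds on the double pullback of $E\xrightarrow{p}B$ can be made concrete in the category of sets, where the double pullback is the kernel pair $\{(x,y) : p(x)=p(y)\}$. The sketch below is illustrative only (all names are ours, not from the answer); it instantiates the answer's domain/codomain projections, diagonal identity, composition, and inversion maps for an ordinary function `p`:

```python
# Sketch, assuming we work in Set with finite sets. The double pullback
# E x_B E of p: E -> B is the kernel pair {(x, y) : p(x) == p(y)}, and the
# internal-groupoid structure described in the answer becomes the familiar
# equivalence-relation groupoid: dom/cod are the two projections pi_1, pi_2,
# identities are diagonal pairs (x, x), composition glues (x, y) and (y, z)
# into (x, z), and inversion swaps the pair.

def kernel_pair(E, p):
    """Morphisms of the groupoid on objects E: pairs identified by p."""
    return [(x, y) for x in E for y in E if p(x) == p(y)]

def dom(m):  # the projection pi_1
    return m[0]

def cod(m):  # the projection pi_2
    return m[1]

def identity(x):  # the diagonal map Delta
    return (x, x)

def compose(m, n):  # defined only when cod(m) == dom(n), as in the answer
    assert cod(m) == dom(n), "morphisms are not composable"
    return (dom(m), cod(n))

def invert(m):  # the inversion map (pi_2, pi_1)
    return (m[1], m[0])

# Tiny example: p is the parity map {0,1,2,3} -> {0,1}.
E = [0, 1, 2, 3]
p = lambda x: x % 2
G = kernel_pair(E, p)
assert (0, 2) in G and (0, 1) not in G
assert compose((0, 2), (2, 0)) == identity(0)
assert invert((1, 3)) == (3, 1)
```

Checking the groupoid equations by hand (associativity, unit laws, inverses) for this `compose`/`identity`/`invert` is the Set-level version of the verification the answer mentions.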
Question: Galois groupoid of covering map: are endomorphisms of universal covering spaces automorphisms? Answer 1: <p>Your definition of the Galois groupoid is incomplete -- it is supposed to be an internal groupoid not an internal category, cf. Example 4.6.2 and Lemma 5.1.22. </p> <hr> <p>Explicitly, in any category $\mathcal C$, the double and triple pullback of a morphism $E\xrightarrow{p}B$ naturally equips $E$ with the structure of an <strong>internal groupoid</strong> as follows.</p> <ol> <li>The internal domain and codomain maps are the natural projections $E_{p_1}\times E_{p_2}\underset{\pi_2}{\overset{\pi_1}\rightrightarrows} E$. Here $p_1,p_2$ are both $E\xrightarrow{p}B$, and the natural projections are the ones universally satisfying $\pi_1\circ p_1=\pi_2\circ p_2$. The identity map which is required to split both the domain and codomain is the diagonal $E\xrightarrow{\Delta}E_{p_1}\times E_{p_2}$ uniquely satisfying $\pi_1\circ\Delta=\mathrm{id}=\pi_2\circ\Delta$.</li> <li>The partially-defined composition map has domain the pullback of $E_{p_1}\times E_{p_2}\xrightarrow{\pi_2}E$ along $E_{p_3}\times E_{p_4}\xrightarrow{\pi_3}E$ (meaning: combines a pair of morphisms so that the codomain of the first matches the domain of the second). The composition is given by $(E_{p_1}\times E_{p_2})_{\pi_2}\times(E_{p_3}\times E_{p_4})_{\pi_3}\xrightarrow{(\pi_1\circ\epsilon_1,\pi_4\circ\epsilon_2)}E_p\times E_p$ where $(E_{p_1}\times E_{p_2})_{\pi_2}\times(E_{p_3}\times E_{p_4})_{\pi_3}\underset{\epsilon_2}{\overset{\epsilon_1}\rightrightarrows} E_p\times E_p$ are the projection morphisms of the domain of the composition map. In other words, the composition map is the unique one that combines the two morphisms into a morphism by eliminating their matching codomain and domain, and keeping the resulting domain and codomain.</li> <li>There is also an inversion map which inverts morphisms. 
It is given by $E_{p_1}\times E_{p_2}\xrightarrow{(\pi_2,\pi_1)}E_{p_1}\times E_{p_2}$, i.e. it is the unique morphism that takes a morphism and spits out a morphism with switched domain and codomain.</li> </ol> <p>If you are so inclined, you can check by hand that all the various morphisms satisfy the equations of an internal groupoid, or more systematically you can relate iterated pullbacks to a limit-preserving functor from the categories of finite subsets of $\{0,1,\dots\}$.</p> <hr> <p>The <strong>Galois groupoid</strong> is the image of the internal groupoid so described under the functor $\Pi_0$ for $E\xrightarrow{p}B$ a morphism of relative Galois descent for the functor $\Pi_0$. Being descent means that $E\xrightarrow{p}B$ makes the pullback functor of a category of morphisms over $B$ monadic over a category of morphisms over $E$ (so it identifies morphisms over $B$ with well-structured morphisms over $E$).</p> <p>Being relative Galois descent additionally requires that certain morphisms to $B$ are split relative to $\Pi_0$ by $E\xrightarrow{p}B$, the morphisms and the notion of splitting arising naturally from the existence of a right adjoint to $\Pi_0$.</p> <p>The point then is that $\Pi_0(E\times_B E)$ <em>always</em> has the structure of an internal groupoid on objects $\Pi_0(E)$. 
However, it is only in the case when the morphism is of relative Galois descent that this groupoid is well-behaved in the sense that there is an equivalence of categories between internal presheaves for this groupoid and <em>all</em> morphisms to $B$ split relative to $\Pi_0$ by $E\xrightarrow{p}B$; in the case where the groupoid is a group the equivalence is between $G$-sets and such morphisms, a special case of $G$-sets being normal subgroups.</p> Answer 2: <p>A <em>weakly initial</em> covering morphism of <span class="math-container">$B$</span> is a weakly initial object in the category of effective descent covering morphisms of <span class="math-container">$B$</span>.</p> <p>A <em>weakly universal covering morphism</em> of <span class="math-container">$B$</span> is a covering morphism which trivializes all other covering morphisms of <span class="math-container">$B$</span>.</p> <p>A covering morphism is <em>connected</em> if it is inverted by <span class="math-container">$\Pi_0$</span>.</p> <p><strong>Theorem.</strong> If <span class="math-container">$B$</span> admits a weakly universal covering, then any endomorphism of a connected weakly initial covering morphism is an automorphism.</p>
it is the unique morphism that takes a morphism and spits out a morphism with switched domain and codomain.</li> </ol> <p>If you are so inclined, you can check by hand that all the various morphisms satisfy the equations of an internal groupoid, or more systematically you can relate iterated pullbacks to a limit-preserving functor from the categories of finite subsets of $\{0,1,\dots,\}$.</p> <hr> <p>The <strong>Galois groupoid</strong> is the image of the internal groupoid so described under the functor $\Pi_0$ for $E\xrightarrow{p}B$ a morphism of relative Galois descent for the functor $\Pi_0$. Being descent means that $E\xrightarrow{p}B$ makes the pullback functor of a category of morphisms over $B$ monadic over a category of morphisms over $E$ (so it identifies morphisms over $B$ with well-structured morphisms over $E$).</p> <p>Being relative Galois descent additionally requires that certain morphisms to $B$ are split relative to $\Pi_0$ by $E\xrightarrow{p}B$, the morphisms and the notion of splitting arising naturally from the existence of a right adjoint to $\Pi_0$.</p> <p>The point then is that $\Pi_0(E\times_B E)$ <em>always</em> has the structure of an internal groupoid on objects $\Pi_0(E)$. 
However, it is only in the case when the morphism is of relative Galois descent that this groupoid is a well-behaved in the sense that there is an equivalence of categories between internal presheaves for this groupoid and <em>all</em> morphisms to $B$ split relative to $\Pi_0$ by $E\xrightarrow{p}B$; in the case where the groupoid is a group the equivalence is between $G$-sets and such morphisms, a special case of $G$-sets being normal subgroups.</p> Answer 6: <p>A <em>weakly initial</em> covering morphism of <span class="math-container">$B$</span> is a weakly initial object in the category of effective descent covering morphisms of <span class="math-container">$B$</span>.</p> <p>A <em>weakly universal covering morphism</em> of <span class="math-container">$B$</span> is a covering morphism which trivializes all other covering morphisms of <span class="math-container">$B$</span>.</p> <p>A covering morphism is <em>connected</em> if it is inverted by <span class="math-container">$\Pi_0$</span>.</p> <p><strong>Theorem.</strong> If <span class="math-container">$B$</span> admits a weakly universal covering, then any endomorphism of a connected weakly initial covering morphisms is an automorphism.</p> Answer 7: <p>Your definition of the Galois groupoid is incomplete -- it is supposed to be an internal groupoid not an internal category, cf. Example 4.6.2 and Lemma 5.1.22. </p> <hr> <p>Explicitly, in any category $\mathcal C$, the double and triple pullback of a morphism $E\xrightarrow{p}B$ naturally equips $E$ with the structure of an <strong>internal groupoid</strong> as follows.</p> <ol> <li>The internal domain and codomain maps are the natural projections $E_{p_1}\times E_{p_2}\underset{\pi_2}{\overset{\pi_1}\rightrightarrows} E$. Here $p_1,p_2$ are both $E\xrightarrow{p}B$, and the natural projections are the ones universally satisfying $\pi_1\circ p_1=\pi_2\circ p_2$. 
The identity map which is required to split both the domain and codomain is the diagonal $E\xrightarrow{\Delta}E_{p_1}\times E_{p_2}$ uniquely satisfying $\pi_1\circ\Delta=\mathrm{id}=\pi_2\circ\Delta$.</li> <li>The partially-defined composition map has domain the pullback of $E_{p_1}\times E_{p_2}\xrightarrow{\pi_2}E$ along $E_{p_3}\times E_{p_4}\xrightarrow{\pi_3}E$ (meaning: combines a pair of morphisms so that the codomain of the first matches the domain of the second). The composition is given by $(E_{p_1}\times E_{p_2})_{\pi_2}\times(E_{p_3}\times E_{p_4})_{\pi_3}\xrightarrow{(\pi_1\circ\epsilon_1,\pi_4\circ\epsilon_2)}E_p\times E_p$ where $(E_{p_1}\times E_{p_2})_{\pi_2}\times(E_{p_3}\times E_{p_4})_{\pi_3}\underset{\epsilon_2}{\overset{\epsilon_1}\rightrightarrows} E_p\times E_p$ are the projection morphisms of the domain of the composition map. In other words, the composition map is the unique one that combines the two morphisms into a morphism by eliminating their matching codomain and domain, and keeping the resulting domain and codomain.</li> <li>There is also an inversion map which inverts morphisms. It is given by $E_{p_1}\times E_{p_2}\xrightarrow{(\pi_2,\pi_1)}E_{p_1}\times E_{p_2}$, i.e. it is the unique morphism that takes a morphism and spits out a morphism with switched domain and codomain.</li> </ol> <p>If you are so inclined, you can check by hand that all the various morphisms satisfy the equations of an internal groupoid, or more systematically you can relate iterated pullbacks to a limit-preserving functor from the categories of finite subsets of $\{0,1,\dots,\}$.</p> <hr> <p>The <strong>Galois groupoid</strong> is the image of the internal groupoid so described under the functor $\Pi_0$ for $E\xrightarrow{p}B$ a morphism of relative Galois descent for the functor $\Pi_0$. 
Being descent means that $E\xrightarrow{p}B$ makes the pullback functor of a category of morphisms over $B$ monadic over a category of morphisms over $E$ (so it identifies morphisms over $B$ with well-structured morphisms over $E$).</p> <p>Being relative Galois descent additionally requires that certain morphisms to $B$ are split relative to $\Pi_0$ by $E\xrightarrow{p}B$, the morphisms and the notion of splitting arising naturally from the existence of a right adjoint to $\Pi_0$.</p> <p>The point then is that $\Pi_0(E\times_B E)$ <em>always</em> has the structure of an internal groupoid on objects $\Pi_0(E)$. However, it is only in the case when the morphism is of relative Galois descent that this groupoid is a well-behaved in the sense that there is an equivalence of categories between internal presheaves for this groupoid and <em>all</em> morphisms to $B$ split relative to $\Pi_0$ by $E\xrightarrow{p}B$; in the case where the groupoid is a group the equivalence is between $G$-sets and such morphisms, a special case of $G$-sets being normal subgroups.</p> Answer 8: <p>A <em>weakly initial</em> covering morphism of <span class="math-container">$B$</span> is a weakly initial object in the category of effective descent covering morphisms of <span class="math-container">$B$</span>.</p> <p>A <em>weakly universal covering morphism</em> of <span class="math-container">$B$</span> is a covering morphism which trivializes all other covering morphisms of <span class="math-container">$B$</span>.</p> <p>A covering morphism is <em>connected</em> if it is inverted by <span class="math-container">$\Pi_0$</span>.</p> <p><strong>Theorem.</strong> If <span class="math-container">$B$</span> admits a weakly universal covering, then any endomorphism of a connected weakly initial covering morphisms is an automorphism.</p> Answer 9: <p>Your definition of the Galois groupoid is incomplete -- it is supposed to be an internal groupoid not an internal category, cf. 
Example 4.6.2 and Lemma 5.1.22. </p> <hr> <p>Explicitly, in any category $\mathcal C$, the double and triple pullback of a morphism $E\xrightarrow{p}B$ naturally equips $E$ with the structure of an <strong>internal groupoid</strong> as follows.</p> <ol> <li>The internal domain and codomain maps are the natural projections $E_{p_1}\times E_{p_2}\underset{\pi_2}{\overset{\pi_1}\rightrightarrows} E$. Here $p_1,p_2$ are both $E\xrightarrow{p}B$, and the natural projections are the ones universally satisfying $\pi_1\circ p_1=\pi_2\circ p_2$. The identity map which is required to split both the domain and codomain is the diagonal $E\xrightarrow{\Delta}E_{p_1}\times E_{p_2}$ uniquely satisfying $\pi_1\circ\Delta=\mathrm{id}=\pi_2\circ\Delta$.</li> <li>The partially-defined composition map has domain the pullback of $E_{p_1}\times E_{p_2}\xrightarrow{\pi_2}E$ along $E_{p_3}\times E_{p_4}\xrightarrow{\pi_3}E$ (meaning: combines a pair of morphisms so that the codomain of the first matches the domain of the second). The composition is given by $(E_{p_1}\times E_{p_2})_{\pi_2}\times(E_{p_3}\times E_{p_4})_{\pi_3}\xrightarrow{(\pi_1\circ\epsilon_1,\pi_4\circ\epsilon_2)}E_p\times E_p$ where $(E_{p_1}\times E_{p_2})_{\pi_2}\times(E_{p_3}\times E_{p_4})_{\pi_3}\underset{\epsilon_2}{\overset{\epsilon_1}\rightrightarrows} E_p\times E_p$ are the projection morphisms of the domain of the composition map. In other words, the composition map is the unique one that combines the two morphisms into a morphism by eliminating their matching codomain and domain, and keeping the resulting domain and codomain.</li> <li>There is also an inversion map which inverts morphisms. It is given by $E_{p_1}\times E_{p_2}\xrightarrow{(\pi_2,\pi_1)}E_{p_1}\times E_{p_2}$, i.e. 
it is the unique morphism that takes a morphism and spits out a morphism with switched domain and codomain.</li> </ol> <p>If you are so inclined, you can check by hand that all the various morphisms satisfy the equations of an internal groupoid, or more systematically you can relate iterated pullbacks to a limit-preserving functor from the categories of finite subsets of $\{0,1,\dots,\}$.</p> <hr> <p>The <strong>Galois groupoid</strong> is the image of the internal groupoid so described under the functor $\Pi_0$ for $E\xrightarrow{p}B$ a morphism of relative Galois descent for the functor $\Pi_0$. Being descent means that $E\xrightarrow{p}B$ makes the pullback functor of a category of morphisms over $B$ monadic over a category of morphisms over $E$ (so it identifies morphisms over $B$ with well-structured morphisms over $E$).</p> <p>Being relative Galois descent additionally requires that certain morphisms to $B$ are split relative to $\Pi_0$ by $E\xrightarrow{p}B$, the morphisms and the notion of splitting arising naturally from the existence of a right adjoint to $\Pi_0$.</p> <p>The point then is that $\Pi_0(E\times_B E)$ <em>always</em> has the structure of an internal groupoid on objects $\Pi_0(E)$. 
However, it is only in the case when the morphism is of relative Galois descent that this groupoid is a well-behaved in the sense that there is an equivalence of categories between internal presheaves for this groupoid and <em>all</em> morphisms to $B$ split relative to $\Pi_0$ by $E\xrightarrow{p}B$; in the case where the groupoid is a group the equivalence is between $G$-sets and such morphisms, a special case of $G$-sets being normal subgroups.</p> Answer 10: <p>A <em>weakly initial</em> covering morphism of <span class="math-container">$B$</span> is a weakly initial object in the category of effective descent covering morphisms of <span class="math-container">$B$</span>.</p> <p>A <em>weakly universal covering morphism</em> of <span class="math-container">$B$</span> is a covering morphism which trivializes all other covering morphisms of <span class="math-container">$B$</span>.</p> <p>A covering morphism is <em>connected</em> if it is inverted by <span class="math-container">$\Pi_0$</span>.</p> <p><strong>Theorem.</strong> If <span class="math-container">$B$</span> admits a weakly universal covering, then any endomorphism of a connected weakly initial covering morphisms is an automorphism.</p> Answer 11: <p>Your definition of the Galois groupoid is incomplete -- it is supposed to be an internal groupoid not an internal category, cf. Example 4.6.2 and Lemma 5.1.22. </p> <hr> <p>Explicitly, in any category $\mathcal C$, the double and triple pullback of a morphism $E\xrightarrow{p}B$ naturally equips $E$ with the structure of an <strong>internal groupoid</strong> as follows.</p> <ol> <li>The internal domain and codomain maps are the natural projections $E_{p_1}\times E_{p_2}\underset{\pi_2}{\overset{\pi_1}\rightrightarrows} E$. Here $p_1,p_2$ are both $E\xrightarrow{p}B$, and the natural projections are the ones universally satisfying $\pi_1\circ p_1=\pi_2\circ p_2$. 
The identity map which is required to split both the domain and codomain is the diagonal $E\xrightarrow{\Delta}E_{p_1}\times E_{p_2}$ uniquely satisfying $\pi_1\circ\Delta=\mathrm{id}=\pi_2\circ\Delta$.</li> <li>The partially-defined composition map has domain the pullback of $E_{p_1}\times E_{p_2}\xrightarrow{\pi_2}E$ along $E_{p_3}\times E_{p_4}\xrightarrow{\pi_3}E$ (meaning: combines a pair of morphisms so that the codomain of the first matches the domain of the second). The composition is given by $(E_{p_1}\times E_{p_2})_{\pi_2}\times(E_{p_3}\times E_{p_4})_{\pi_3}\xrightarrow{(\pi_1\circ\epsilon_1,\pi_4\circ\epsilon_2)}E_p\times E_p$ where $(E_{p_1}\times E_{p_2})_{\pi_2}\times(E_{p_3}\times E_{p_4})_{\pi_3}\underset{\epsilon_2}{\overset{\epsilon_1}\rightrightarrows} E_p\times E_p$ are the projection morphisms of the domain of the composition map. In other words, the composition map is the unique one that combines the two morphisms into a morphism by eliminating their matching codomain and domain, and keeping the resulting domain and codomain.</li> <li>There is also an inversion map which inverts morphisms. It is given by $E_{p_1}\times E_{p_2}\xrightarrow{(\pi_2,\pi_1)}E_{p_1}\times E_{p_2}$, i.e. it is the unique morphism that takes a morphism and spits out a morphism with switched domain and codomain.</li> </ol> <p>If you are so inclined, you can check by hand that all the various morphisms satisfy the equations of an internal groupoid, or more systematically you can relate iterated pullbacks to a limit-preserving functor from the categories of finite subsets of $\{0,1,\dots,\}$.</p> <hr> <p>The <strong>Galois groupoid</strong> is the image of the internal groupoid so described under the functor $\Pi_0$ for $E\xrightarrow{p}B$ a morphism of relative Galois descent for the functor $\Pi_0$. 
Being descent means that $E\xrightarrow{p}B$ makes the pullback functor of a category of morphisms over $B$ monadic over a category of morphisms over $E$ (so it identifies morphisms over $B$ with well-structured morphisms over $E$).</p> <p>Being relative Galois descent additionally requires that certain morphisms to $B$ are split relative to $\Pi_0$ by $E\xrightarrow{p}B$, the morphisms and the notion of splitting arising naturally from the existence of a right adjoint to $\Pi_0$.</p> <p>The point then is that $\Pi_0(E\times_B E)$ <em>always</em> has the structure of an internal groupoid on objects $\Pi_0(E)$. However, it is only in the case when the morphism is of relative Galois descent that this groupoid is a well-behaved in the sense that there is an equivalence of categories between internal presheaves for this groupoid and <em>all</em> morphisms to $B$ split relative to $\Pi_0$ by $E\xrightarrow{p}B$; in the case where the groupoid is a group the equivalence is between $G$-sets and such morphisms, a special case of $G$-sets being normal subgroups.</p> Answer 12: <p>A <em>weakly initial</em> covering morphism of <span class="math-container">$B$</span> is a weakly initial object in the category of effective descent covering morphisms of <span class="math-container">$B$</span>.</p> <p>A <em>weakly universal covering morphism</em> of <span class="math-container">$B$</span> is a covering morphism which trivializes all other covering morphisms of <span class="math-container">$B$</span>.</p> <p>A covering morphism is <em>connected</em> if it is inverted by <span class="math-container">$\Pi_0$</span>.</p> <p><strong>Theorem.</strong> If <span class="math-container">$B$</span> admits a weakly universal covering, then any endomorphism of a connected weakly initial covering morphisms is an automorphism.</p> Answer 13: <p>Your definition of the Galois groupoid is incomplete -- it is supposed to be an internal groupoid not an internal category, cf. 
Example 4.6.2 and Lemma 5.1.22. </p> <hr> <p>Explicitly, in any category $\mathcal C$, the double and triple pullback of a morphism $E\xrightarrow{p}B$ naturally equips $E$ with the structure of an <strong>internal groupoid</strong> as follows.</p> <ol> <li>The internal domain and codomain maps are the natural projections $E_{p_1}\times E_{p_2}\underset{\pi_2}{\overset{\pi_1}\rightrightarrows} E$. Here $p_1,p_2$ are both $E\xrightarrow{p}B$, and the natural projections are the ones universally satisfying $\pi_1\circ p_1=\pi_2\circ p_2$. The identity map which is required to split both the domain and codomain is the diagonal $E\xrightarrow{\Delta}E_{p_1}\times E_{p_2}$ uniquely satisfying $\pi_1\circ\Delta=\mathrm{id}=\pi_2\circ\Delta$.</li> <li>The partially-defined composition map has domain the pullback of $E_{p_1}\times E_{p_2}\xrightarrow{\pi_2}E$ along $E_{p_3}\times E_{p_4}\xrightarrow{\pi_3}E$ (meaning: combines a pair of morphisms so that the codomain of the first matches the domain of the second). The composition is given by $(E_{p_1}\times E_{p_2})_{\pi_2}\times(E_{p_3}\times E_{p_4})_{\pi_3}\xrightarrow{(\pi_1\circ\epsilon_1,\pi_4\circ\epsilon_2)}E_p\times E_p$ where $(E_{p_1}\times E_{p_2})_{\pi_2}\times(E_{p_3}\times E_{p_4})_{\pi_3}\underset{\epsilon_2}{\overset{\epsilon_1}\rightrightarrows} E_p\times E_p$ are the projection morphisms of the domain of the composition map. In other words, the composition map is the unique one that combines the two morphisms into a morphism by eliminating their matching codomain and domain, and keeping the resulting domain and codomain.</li> <li>There is also an inversion map which inverts morphisms. It is given by $E_{p_1}\times E_{p_2}\xrightarrow{(\pi_2,\pi_1)}E_{p_1}\times E_{p_2}$, i.e. 
it is the unique morphism that takes a morphism and spits out a morphism with switched domain and codomain.</li> </ol> <p>If you are so inclined, you can check by hand that all the various morphisms satisfy the equations of an internal groupoid, or more systematically you can relate iterated pullbacks to a limit-preserving functor from the categories of finite subsets of $\{0,1,\dots,\}$.</p> <hr> <p>The <strong>Galois groupoid</strong> is the image of the internal groupoid so described under the functor $\Pi_0$ for $E\xrightarrow{p}B$ a morphism of relative Galois descent for the functor $\Pi_0$. Being descent means that $E\xrightarrow{p}B$ makes the pullback functor of a category of morphisms over $B$ monadic over a category of morphisms over $E$ (so it identifies morphisms over $B$ with well-structured morphisms over $E$).</p> <p>Being relative Galois descent additionally requires that certain morphisms to $B$ are split relative to $\Pi_0$ by $E\xrightarrow{p}B$, the morphisms and the notion of splitting arising naturally from the existence of a right adjoint to $\Pi_0$.</p> <p>The point then is that $\Pi_0(E\times_B E)$ <em>always</em> has the structure of an internal groupoid on objects $\Pi_0(E)$. 
However, it is only in the case when the morphism is of relative Galois descent that this groupoid is a well-behaved in the sense that there is an equivalence of categories between internal presheaves for this groupoid and <em>all</em> morphisms to $B$ split relative to $\Pi_0$ by $E\xrightarrow{p}B$; in the case where the groupoid is a group the equivalence is between $G$-sets and such morphisms, a special case of $G$-sets being normal subgroups.</p> Answer 14: <p>A <em>weakly initial</em> covering morphism of <span class="math-container">$B$</span> is a weakly initial object in the category of effective descent covering morphisms of <span class="math-container">$B$</span>.</p> <p>A <em>weakly universal covering morphism</em> of <span class="math-container">$B$</span> is a covering morphism which trivializes all other covering morphisms of <span class="math-container">$B$</span>.</p> <p>A covering morphism is <em>connected</em> if it is inverted by <span class="math-container">$\Pi_0$</span>.</p> <p><strong>Theorem.</strong> If <span class="math-container">$B$</span> admits a weakly universal covering, then any endomorphism of a connected weakly initial covering morphisms is an automorphism.</p> Answer 15: <p>Your definition of the Galois groupoid is incomplete -- it is supposed to be an internal groupoid not an internal category, cf. Example 4.6.2 and Lemma 5.1.22. </p> <hr> <p>Explicitly, in any category $\mathcal C$, the double and triple pullback of a morphism $E\xrightarrow{p}B$ naturally equips $E$ with the structure of an <strong>internal groupoid</strong> as follows.</p> <ol> <li>The internal domain and codomain maps are the natural projections $E_{p_1}\times E_{p_2}\underset{\pi_2}{\overset{\pi_1}\rightrightarrows} E$. Here $p_1,p_2$ are both $E\xrightarrow{p}B$, and the natural projections are the ones universally satisfying $\pi_1\circ p_1=\pi_2\circ p_2$. 
The identity map which is required to split both the domain and codomain is the diagonal $E\xrightarrow{\Delta}E_{p_1}\times E_{p_2}$ uniquely satisfying $\pi_1\circ\Delta=\mathrm{id}=\pi_2\circ\Delta$.</li> <li>The partially-defined composition map has domain the pullback of $E_{p_1}\times E_{p_2}\xrightarrow{\pi_2}E$ along $E_{p_3}\times E_{p_4}\xrightarrow{\pi_3}E$ (meaning: combines a pair of morphisms so that the codomain of the first matches the domain of the second). The composition is given by $(E_{p_1}\times E_{p_2})_{\pi_2}\times(E_{p_3}\times E_{p_4})_{\pi_3}\xrightarrow{(\pi_1\circ\epsilon_1,\pi_4\circ\epsilon_2)}E_p\times E_p$ where $(E_{p_1}\times E_{p_2})_{\pi_2}\times(E_{p_3}\times E_{p_4})_{\pi_3}\underset{\epsilon_2}{\overset{\epsilon_1}\rightrightarrows} E_p\times E_p$ are the projection morphisms of the domain of the composition map. In other words, the composition map is the unique one that combines the two morphisms into a morphism by eliminating their matching codomain and domain, and keeping the resulting domain and codomain.</li> <li>There is also an inversion map which inverts morphisms. It is given by $E_{p_1}\times E_{p_2}\xrightarrow{(\pi_2,\pi_1)}E_{p_1}\times E_{p_2}$, i.e. it is the unique morphism that takes a morphism and spits out a morphism with switched domain and codomain.</li> </ol> <p>If you are so inclined, you can check by hand that all the various morphisms satisfy the equations of an internal groupoid, or more systematically you can relate iterated pullbacks to a limit-preserving functor from the categories of finite subsets of $\{0,1,\dots,\}$.</p> <hr> <p>The <strong>Galois groupoid</strong> is the image of the internal groupoid so described under the functor $\Pi_0$ for $E\xrightarrow{p}B$ a morphism of relative Galois descent for the functor $\Pi_0$. 
Being descent means that $E\xrightarrow{p}B$ makes the pullback functor of a category of morphisms over $B$ monadic over a category of morphisms over $E$ (so it identifies morphisms over $B$ with well-structured morphisms over $E$).</p> <p>Being relative Galois descent additionally requires that certain morphisms to $B$ are split relative to $\Pi_0$ by $E\xrightarrow{p}B$, the morphisms and the notion of splitting arising naturally from the existence of a right adjoint to $\Pi_0$.</p> <p>The point then is that $\Pi_0(E\times_B E)$ <em>always</em> has the structure of an internal groupoid on objects $\Pi_0(E)$. However, it is only in the case when the morphism is of relative Galois descent that this groupoid is a well-behaved in the sense that there is an equivalence of categories between internal presheaves for this groupoid and <em>all</em> morphisms to $B$ split relative to $\Pi_0$ by $E\xrightarrow{p}B$; in the case where the groupoid is a group the equivalence is between $G$-sets and such morphisms, a special case of $G$-sets being normal subgroups.</p> Answer 16: <p>A <em>weakly initial</em> covering morphism of <span class="math-container">$B$</span> is a weakly initial object in the category of effective descent covering morphisms of <span class="math-container">$B$</span>.</p> <p>A <em>weakly universal covering morphism</em> of <span class="math-container">$B$</span> is a covering morphism which trivializes all other covering morphisms of <span class="math-container">$B$</span>.</p> <p>A covering morphism is <em>connected</em> if it is inverted by <span class="math-container">$\Pi_0$</span>.</p> <p><strong>Theorem.</strong> If <span class="math-container">$B$</span> admits a weakly universal covering, then any endomorphism of a connected weakly initial covering morphisms is an automorphism.</p> Answer 17: <p>Your definition of the Galois groupoid is incomplete -- it is supposed to be an internal groupoid not an internal category, cf. 
Example 4.6.2 and Lemma 5.1.22. </p> <hr> <p>Explicitly, in any category $\mathcal C$, the double and triple pullback of a morphism $E\xrightarrow{p}B$ naturally equips $E$ with the structure of an <strong>internal groupoid</strong> as follows.</p> <ol> <li>The internal domain and codomain maps are the natural projections $E_{p_1}\times E_{p_2}\underset{\pi_2}{\overset{\pi_1}\rightrightarrows} E$. Here $p_1,p_2$ are both $E\xrightarrow{p}B$, and the natural projections are the ones universally satisfying $\pi_1\circ p_1=\pi_2\circ p_2$. The identity map which is required to split both the domain and codomain is the diagonal $E\xrightarrow{\Delta}E_{p_1}\times E_{p_2}$ uniquely satisfying $\pi_1\circ\Delta=\mathrm{id}=\pi_2\circ\Delta$.</li> <li>The partially-defined composition map has domain the pullback of $E_{p_1}\times E_{p_2}\xrightarrow{\pi_2}E$ along $E_{p_3}\times E_{p_4}\xrightarrow{\pi_3}E$ (meaning: combines a pair of morphisms so that the codomain of the first matches the domain of the second). The composition is given by $(E_{p_1}\times E_{p_2})_{\pi_2}\times(E_{p_3}\times E_{p_4})_{\pi_3}\xrightarrow{(\pi_1\circ\epsilon_1,\pi_4\circ\epsilon_2)}E_p\times E_p$ where $(E_{p_1}\times E_{p_2})_{\pi_2}\times(E_{p_3}\times E_{p_4})_{\pi_3}\underset{\epsilon_2}{\overset{\epsilon_1}\rightrightarrows} E_p\times E_p$ are the projection morphisms of the domain of the composition map. In other words, the composition map is the unique one that combines the two morphisms into a morphism by eliminating their matching codomain and domain, and keeping the resulting domain and codomain.</li> <li>There is also an inversion map which inverts morphisms. It is given by $E_{p_1}\times E_{p_2}\xrightarrow{(\pi_2,\pi_1)}E_{p_1}\times E_{p_2}$, i.e. 
it is the unique morphism that takes a morphism and spits out a morphism with switched domain and codomain.</li> </ol> <p>If you are so inclined, you can check by hand that all the various morphisms satisfy the equations of an internal groupoid, or more systematically you can relate iterated pullbacks to a limit-preserving functor from the categories of finite subsets of $\{0,1,\dots,\}$.</p> <hr> <p>The <strong>Galois groupoid</strong> is the image of the internal groupoid so described under the functor $\Pi_0$ for $E\xrightarrow{p}B$ a morphism of relative Galois descent for the functor $\Pi_0$. Being descent means that $E\xrightarrow{p}B$ makes the pullback functor of a category of morphisms over $B$ monadic over a category of morphisms over $E$ (so it identifies morphisms over $B$ with well-structured morphisms over $E$).</p> <p>Being relative Galois descent additionally requires that certain morphisms to $B$ are split relative to $\Pi_0$ by $E\xrightarrow{p}B$, the morphisms and the notion of splitting arising naturally from the existence of a right adjoint to $\Pi_0$.</p> <p>The point then is that $\Pi_0(E\times_B E)$ <em>always</em> has the structure of an internal groupoid on objects $\Pi_0(E)$. 
However, it is only in the case when the morphism is of relative Galois descent that this groupoid is well-behaved in the sense that there is an equivalence of categories between internal presheaves for this groupoid and <em>all</em> morphisms to $B$ split relative to $\Pi_0$ by $E\xrightarrow{p}B$; in the case where the groupoid is a group the equivalence is between $G$-sets and such morphisms, a special case of $G$-sets being normal subgroups.</p> Answer 18: <p>A <em>weakly initial</em> covering morphism of <span class="math-container">$B$</span> is a weakly initial object in the category of effective descent covering morphisms of <span class="math-container">$B$</span>.</p> <p>A <em>weakly universal covering morphism</em> of <span class="math-container">$B$</span> is a covering morphism which trivializes all other covering morphisms of <span class="math-container">$B$</span>.</p> <p>A covering morphism is <em>connected</em> if it is inverted by <span class="math-container">$\Pi_0$</span>.</p> <p><strong>Theorem.</strong> If <span class="math-container">$B$</span> admits a weakly universal covering, then any endomorphism of a connected weakly initial covering morphism is an automorphism.</p>
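The kernel-pair construction described above becomes very concrete in the category of sets, where $E\times_B E$ is the set of pairs with equal image under $p$ and the internal groupoid is just the equivalence-relation groupoid "lies in the same fibre". A minimal Python sketch (illustrative only; the map `p` below is a made-up example, not anything from the answers) checks the groupoid laws by hand:

```python
# Illustrative only: in Set, the kernel-pair groupoid of a map p : E -> B.
# An arrow x -> y is a pair (x, y) with p(x) == p(y); this is the
# equivalence-relation groupoid "same fibre of p".

E = ["a", "b", "c", "d"]
p = {"a": 0, "b": 0, "c": 1, "d": 1}   # hypothetical map E -> B (two fibres)

# Double pullback E x_B E: all arrows of the groupoid.
arrows = [(x, y) for x in E for y in E if p[x] == p[y]]

def dom(f): return f[0]            # pi_1: domain projection
def cod(f): return f[1]            # pi_2: codomain projection
def ident(x): return (x, x)        # diagonal Delta: identity arrow at x
def inv(f): return (f[1], f[0])    # inversion: switch domain and codomain

def comp(f, g):
    """Compose f : x -> y with g : y -> z, matching cod(f) with dom(g)."""
    assert cod(f) == dom(g)
    return (dom(f), cod(g))

# The groupoid equations hold for every arrow:
for f in arrows:
    assert comp(ident(dom(f)), f) == f == comp(f, ident(cod(f)))   # unit laws
    assert comp(f, inv(f)) == ident(dom(f))                        # right inverse
    assert comp(inv(f), f) == ident(cod(f))                        # left inverse
print(len(arrows))  # two fibres of size 2 give 2 * 2**2 = 8 arrows
```

With this picture, applying $\Pi_0$ (connected components) to $E\times_B E$ and to $E$ is what collapses the fibrewise data into the Galois groupoid in the well-behaved descent case.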
The meaning of Material in Material Implication
2,079,002
7
https://math.stackexchange.com/questions/2079002/the-meaning-of-material-in-material-implication
<p>What does "material" mean when one talks of the "material implication"? Why call it "material" implication?</p>
[ { "answer_id": 2079066, "score": 4, "text": "<p>The term material comes from references to Alfred North Whitehead and Bertrand Russell's work (Russell, B. (1963). Principia Mathematica Volume 1. Cambridge, At the University Press.) He used terms such as molecular, elementary and atomic statements to describe structures in logic. The term which refers to statements that are at the bottom is atomic statements. They are what you would obtain if you kept expanding a statement until you could expand it no more. You have reached statements which have a value of either true or false. An example may help:</p>\n\n<p>Let's say we have a few statements A and B. If A and B are atomic and we choose to rename the statement A AND B to C, we call C a molecular statement. C is labeled such because it is made up of two different atomic statements. </p>\n\n<p>If we have a different statement, say A|A , we may rename this to D. In this case D would be an elementary statement because only 1 type of atomic statement is used to make it up.</p>\n\n<p>A material statement is used to describe anything made up of molecular or elementary statements. </p>\n\n<p>Because atomic statements all have truth values, this means that a material statement has a truth value. They can be put onto truth tables. A material implication is an \"if\" statement that is made out of statements with truth values. This contrasts to a logical implication which is made out of other mathematical structures. These structures may not have truth values of their own.</p>\n" }, { "answer_id": 2079012, "score": 2, "text": "<p>Dictionary.com says that the origin of \"material implication\" is ca. 
1900.</p>\n\n<p>Thus a good answer to the question <em>why call it material implication</em> is quite likely somewhere in the works of</p>\n\n<ul> \n<li> Charles Sanders Peirce\n<li> Bertrand Russell\n<li> Ludwig Wittgenstein\n</ul>\n\n<p>(Particularly we should take a close look at early terminology of Peirce, Russell, Wittgenstein.)</p>\n\n<p>A starting point to get to the answer might be arXiv:1108.2429 \nby Irving H. Anellis (2011)</p>\n\n<blockquote>\n <p>The matrix for negation is Russell's, alongside of which is the matrix for material implication in the hand of Ludwig Wittgenstein. It is shown that an unpublished manuscript identified as composed by Peirce in 1893 includes a\n truth table matrix that is equivalent to the matrix for material implication discovered by John Shosky. An unpublished manuscript\n by Peirce identified as having been composed in 1883-84 in connection with the composition of Peirce’s \"On the Algebra of Logic: A Contribution to the Philosophy of Notation\" that appeared in the American Journal of Mathematics in 1885 includes an example of an indirect truth table for the conditional.</p>\n</blockquote>\n" }, { "answer_id": 3153908, "score": 2, "text": "<p>The adjective \" material\" in \" material implication\" refers to the content (i.e. \"matter\") , that is to the meaning, and more precisely to the denotation (reduced to truth value since Frege) of the propositions involved. It is distinguished from \"logical implication\" which, strictly speaking, is purely formal, that is, holds whatever the truth values of the atomic propositions involved might be. </p>\n\n<p>Remark : in ancient philosophy , \" content\" was identified to \"matter\" and \"matter\" was opposed to \" form\" </p>\n\n<p>Remark : according to Frege the \"meaning\" of a proposition has two aspects, (1) the \" sense\" ( conceptual aspect) and the (2) \" denotation\" ( or reference, which Frege identifies to truth-value). 
\"1²+ sqrt(1)= sqrt(4)\" has the same denotation as \" 1+1=2\" ( that is, the truth value True) , but the two propositions don't have the same sense ( they do not express the same thought). </p>\n\n<p>Remark: the terms \"material\" and \" formal\" were used in medieval logic to distinguish two kinds of \" consequences\". But the two distinctions are not equivalent, since in medieval logic, the relation involved in a \" consequence\" was always a necessary one. In modern logic, a material implication is by itself CONTINGENT (depending on the contingent truth values of the propositions) while logical implication is a NECESSARY relation. </p>\n\n<p>Reference: On this topic see: Lipschutz, Schaum's Outline Of Set theory and Related Topics, chapter 14. - Available at Archive.org. </p>\n\n<p>Example. Suppose someone asks you \" Does A imply (materially) (A&amp;B)\"? You wouldn't know what answer to give, and would say \" that depends on the truth values A and B are given in the case you are considering\". Suppose now someone asks you: \" Does (A&amp;B) logically imply A?\". Here, without hesitation, and without even knowing the truth values of A and of B, you would answer \" Yes!\". </p>\n\n<p>Example : Let's say that A is the proposition \" America is a democracy in 2019\" and that B is the proposition \" D. Trump is the President of the USA in 2019\". </p>\n\n<p>Since both A and B are true in the actual world ( although one of them could logically be false, or both), it is actually the case that A MATERIALLY implies B. ( This can be seen on the truth table of material implication). The only case in which A would not imply ( materially) B is the case in which A would be true while B would be false. But notice that this is contingent: it could logically be the case that the first were true, and the second false. </p>\n\n<p>Now, although A materially implies B, A does not LOGICALLY imply B. Logical implication is (1) material implication (2) WITH necessity. 
Let's use the symbol ==> for logical implication, and the symbol --> for material implication. </p>\n\n<pre><code> [ A ==&gt; B ] if and only if [NECESSARILY ( A--&gt;B)]\n</code></pre>\n\n<p>that is </p>\n\n<pre><code> [ A ==&gt; B] if and only if [ there is no logically possible case, \n in which A is True and B is False] \n</code></pre>\n\n<p>Saying that \" USA is a democracy (in 2019)\" logically implies that \"D. Trump is President of the USA in 2019\" means: \" Necessarily, if America is a democracy, then D. Trump is the President of the USA\", which is absolutely false. There is no logical impossibility in \" America is a democracy and D. Trump is NOT the President of the USA\". </p>\n\n<p>In summary : \nA materially implies B , just if ( contingently) we are not in a case where A is true and B is false. Material implication depends on the truth values propositions have ( contingently) in the actual world. </p>\n\n<p>A logically implies B if and only if there is no logically possible case ( no possible world as logicians/philosophers say , or, no possible \"interpretation\") in which A is true and B is false. A logical implication is an implication that has truth-value ' T ' on every row of its truth table. </p>\n\n<p>Remark : there is a weaker sense of \"logical implication\": necessary implication that, however, is not formal. For example, I can correctly say that \" John is a pianist\" logically implies \" John is a musician\". The implication is necessary since there is no possible case in which a person is a pianist without being a musician. But this is not pure ( formal) logical implication, since the necessity depends on the meaning of the terms \" pianist\" and \" musician\". It is not true that any proposition of the form \" b is a P\" logically implies any proposition of the form \" b is a M\". 
</p>\n" }, { "answer_id": 2079066, "score": 4, "text": "<p>The term material comes from references to Alfred North Whitehead and Bertrand Russell's work (Russell, B. (1963). Principia Mathematica Volume 1. Cambridge, At the University Press.) He used terms such as molecular, elementary and atomic statements to describe structures in logic. The term which refers to statements that are at the bottom is atomic statements. They are what you would obtain if you kept expanding a statement until you could expand it no more. You have reached statements which have a value of either true or false. An example may help:</p>\n\n<p>Let say we have a few statements A and B. If A and B are atomic and we choose to rename the statement A AND B to C, we call C a molecular statement. C is labeled such because it is made up of two different atomic statements. </p>\n\n<p>If we have a different statement, say A|A , we may rename this to D. In this case D would be an elementary statement because only 1 type of atomic statement is used to make it up.</p>\n\n<p>A material statement is used to describe anything made up of molecular or elementary statements. </p>\n\n<p>Because atomic statements all have truth values; this means that a material statement has a truth value. They can be put onto truth tables. A material implication is an \"if\" statement that is made out of statements with truth values. This contrasts to a logical implication which is made out of other mathematical structures. These structures may not have truth values of their own.</p>\n" }, { "answer_id": 2079012, "score": 2, "text": "<p>Dictionary.com says that the origin of \"material implication\" is ca. 
1900.</p>\n\n<p>Thus a good answer to the question <em>why call it material implication</em> is quite likely somewhere in the works of</p>\n\n<ul> \n<li> Charles Sanders Peirce\n<li> Bertrand Russell\n<li> Ludwig Wittgenstein\n</ul>\n\n<p>(Particularly we should take a close look at early terminology of Peirce, Russell, Wittgenstein.)</p>\n\n<p>A starting point to get to the answer might be arXiv:1108.2429 \nby Irving H. Anellis (2011)</p>\n\n<blockquote>\n <p>The matrix for negation is Russell's, alongside of which is the matrix for material implication in the hand of Ludwig Wittgenstein. It is shown that an unpublished manuscript identified as composed by Peirce in 1893 includes a\n truth table matrix that is equivalent to the matrix for material implication discovered by John Shosky. An unpublished manuscript\n by Peirce identified as having been composed in 1883-84 in connection with the composition of Peirce’s \"On the Algebra of Logic: A Contribution to the Philosophy of Notation\" that appeared in the American Journal of Mathematics in 1885 includes an example of an indirect truth table for the conditional.</p>\n</blockquote>\n" }, { "answer_id": 3153908, "score": 2, "text": "<p>The adjective \" material\" in \" material implication\" refers to the content (i.e. \"matter\") , that is to the meaning, and more precisely to the denotation (reduced to truth value since Frege) of the propositions involved. It is distinguished from \"logical implication\" which, strictly speaking, is purely formal, that is, holds whatever the truth values of the atomic propositions involved might be. </p>\n\n<p>Remark : in ancient philosophy , \" content\" was identified to \"matter\" and \"matter\" was opposed to \" form\" </p>\n\n<p>Remark : according to Frege the \"meaning\" of a proposition has two aspects, (1) the \" sense\" ( conceptual aspect) and the (2) \" denotation\" ( or reference, which Frege identifies to truth-value). 
\"1²+ sqrt(1)= sqrt(4)\" has the same denotation as \" 1+1=2\" ( that is, the truth value True) , but the two propositions don't have the same sense ( they do not express the same thought). </p>\n\n<p>Remark: the terms \"material\" and \" formal\" were used in medieval logic to distinguish two kinds of \" consequences\". But the two distinctions are not equivalent, since in medieval logic, the relation involved in a \" consequance\" was always a necessary one. In modern logic, a material implication is by itself CONTINGENT (depending on the contingent truth values of the propositions) while logical implication is a NECESSARY relation. </p>\n\n<p>Reference: On this topic see: Lipschutz, Schaum's Outline Of Set theory and Related Topics, chapter 14. - Available at Archive.org. </p>\n\n<p>Example. Suppose someone asks you \" Does A imply (materially) (A&amp;B)\"? You wouldn't know what annswer to give, and would say \" that depends on the truth values A and B are given in the case you are considering\". Suppose now someone asks you: \" Does (A&amp;B) logically imply A?\". Here, without hesitation, and whithout even knowing the truth values of A and of B, you would answer \" Yes!\". </p>\n\n<p>Example : Lets's say that A is the proposition \" America is a democracy in 2019\" and that B is the proposition \" D. Trump is the President of the USA in 2019\". </p>\n\n<p>Since both A and B are true in the actual world ( although one of them could logically be false, or both), it is actually the case that A MATERIALLY implies B. ( This can be seen on the truth table of material implication). The only case in which A would not imply ( materially) B is the case in which A would be true while B would be false. But notice that this is contingent: it could logically be the case that the first were true, and the second false. </p>\n\n<p>Now, although A materially implies B, A does not LOGICALLY imply B. Logical implication is (1) material implication (2) WITH necessity. 
Lets's use the symbol ==> for logical implication, and the symbol --> for material implication. </p>\n\n<pre><code> [ A ==&gt; B ] if and only if [NECESSARILY ( A--&gt;B)]\n</code></pre>\n\n<p>that is </p>\n\n<pre><code> [ A ==&gt; B] if and only if [ there is no logically possible case, \n in which A is True and B is False] \n</code></pre>\n\n<p>Saying that \" USA is a democracy (in 2019)\" logically implies that \"D. trump is President of the USA in 2019\" means: \" Necessarily, if America is a democracy, then D. Trump is the President of the USA\", which is absolutely false. There is no logical impossibility in \" America is a democracy and D. trump is NOT the President of the USA\". </p>\n\n<p>In summary : \nA materially implies B , just if ( contingently) we are not in a case where A is true and B is false. Material implication depends on the truth values propositions have ( contingently) in the actual world. </p>\n\n<p>A logically implies B if and only if there is no logically possible case ( no possible world as logicians/philosophers say , or, no possible \"interpretation\") in which A is true and B is false. A logical implication is an implication that has truth-value ' T ' on every row of it's truth table. </p>\n\n<p>Remark : there is a weaker sense of \"logical implication\": necessary implication that, however, is not formal. for example I can correctly say that \" John is a pianist\" logically implies \" John is a musician\". The implication is necessary since there is no possible case in which a person is a pianist without being a musician. But this is not pure ( formal) logical implication, since the necessity depends on the meaning of the terms \" pianist\" and \" musician\". It is not true that any proposition of the form \" b is a P\" logically implies any proposition of the form \" b is a M\". 
</p>\n" }, { "answer_id": 2079066, "score": 4, "text": "<p>The term material comes from references to Alfred North Whitehead and Bertrand Russell's work (Russell, B. (1963). Principia Mathematica Volume 1. Cambridge, At the University Press.) He used terms such as molecular, elementary and atomic statements to describe structures in logic. The term which refers to statements that are at the bottom is atomic statements. They are what you would obtain if you kept expanding a statement until you could expand it no more. You have reached statements which have a value of either true or false. An example may help:</p>\n\n<p>Let say we have a few statements A and B. If A and B are atomic and we choose to rename the statement A AND B to C, we call C a molecular statement. C is labeled such because it is made up of two different atomic statements. </p>\n\n<p>If we have a different statement, say A|A , we may rename this to D. In this case D would be an elementary statement because only 1 type of atomic statement is used to make it up.</p>\n\n<p>A material statement is used to describe anything made up of molecular or elementary statements. </p>\n\n<p>Because atomic statements all have truth values; this means that a material statement has a truth value. They can be put onto truth tables. A material implication is an \"if\" statement that is made out of statements with truth values. This contrasts to a logical implication which is made out of other mathematical structures. These structures may not have truth values of their own.</p>\n" }, { "answer_id": 2079012, "score": 2, "text": "<p>Dictionary.com says that the origin of \"material implication\" is ca. 
1900.</p>\n\n<p>Thus a good answer to the question <em>why call it material implication</em> is quite likely somewhere in the works of</p>\n\n<ul> \n<li> Charles Sanders Peirce\n<li> Bertrand Russell\n<li> Ludwig Wittgenstein\n</ul>\n\n<p>(Particularly we should take a close look at early terminology of Peirce, Russell, Wittgenstein.)</p>\n\n<p>A starting point to get to the answer might be arXiv:1108.2429 \nby Irving H. Anellis (2011)</p>\n\n<blockquote>\n <p>The matrix for negation is Russell's, alongside of which is the matrix for material implication in the hand of Ludwig Wittgenstein. It is shown that an unpublished manuscript identified as composed by Peirce in 1893 includes a\n truth table matrix that is equivalent to the matrix for material implication discovered by John Shosky. An unpublished manuscript\n by Peirce identified as having been composed in 1883-84 in connection with the composition of Peirce’s \"On the Algebra of Logic: A Contribution to the Philosophy of Notation\" that appeared in the American Journal of Mathematics in 1885 includes an example of an indirect truth table for the conditional.</p>\n</blockquote>\n" }, { "answer_id": 3153908, "score": 2, "text": "<p>The adjective \" material\" in \" material implication\" refers to the content (i.e. \"matter\") , that is to the meaning, and more precisely to the denotation (reduced to truth value since Frege) of the propositions involved. It is distinguished from \"logical implication\" which, strictly speaking, is purely formal, that is, holds whatever the truth values of the atomic propositions involved might be. </p>\n\n<p>Remark : in ancient philosophy , \" content\" was identified to \"matter\" and \"matter\" was opposed to \" form\" </p>\n\n<p>Remark : according to Frege the \"meaning\" of a proposition has two aspects, (1) the \" sense\" ( conceptual aspect) and the (2) \" denotation\" ( or reference, which Frege identifies to truth-value). 
\"1²+ sqrt(1)= sqrt(4)\" has the same denotation as \" 1+1=2\" ( that is, the truth value True) , but the two propositions don't have the same sense ( they do not express the same thought). </p>\n\n<p>Remark: the terms \"material\" and \" formal\" were used in medieval logic to distinguish two kinds of \" consequences\". But the two distinctions are not equivalent, since in medieval logic, the relation involved in a \" consequance\" was always a necessary one. In modern logic, a material implication is by itself CONTINGENT (depending on the contingent truth values of the propositions) while logical implication is a NECESSARY relation. </p>\n\n<p>Reference: On this topic see: Lipschutz, Schaum's Outline Of Set theory and Related Topics, chapter 14. - Available at Archive.org. </p>\n\n<p>Example. Suppose someone asks you \" Does A imply (materially) (A&amp;B)\"? You wouldn't know what annswer to give, and would say \" that depends on the truth values A and B are given in the case you are considering\". Suppose now someone asks you: \" Does (A&amp;B) logically imply A?\". Here, without hesitation, and whithout even knowing the truth values of A and of B, you would answer \" Yes!\". </p>\n\n<p>Example : Lets's say that A is the proposition \" America is a democracy in 2019\" and that B is the proposition \" D. Trump is the President of the USA in 2019\". </p>\n\n<p>Since both A and B are true in the actual world ( although one of them could logically be false, or both), it is actually the case that A MATERIALLY implies B. ( This can be seen on the truth table of material implication). The only case in which A would not imply ( materially) B is the case in which A would be true while B would be false. But notice that this is contingent: it could logically be the case that the first were true, and the second false. </p>\n\n<p>Now, although A materially implies B, A does not LOGICALLY imply B. Logical implication is (1) material implication (2) WITH necessity. 
Lets's use the symbol ==> for logical implication, and the symbol --> for material implication. </p>\n\n<pre><code> [ A ==&gt; B ] if and only if [NECESSARILY ( A--&gt;B)]\n</code></pre>\n\n<p>that is </p>\n\n<pre><code> [ A ==&gt; B] if and only if [ there is no logically possible case, \n in which A is True and B is False] \n</code></pre>\n\n<p>Saying that \" USA is a democracy (in 2019)\" logically implies that \"D. trump is President of the USA in 2019\" means: \" Necessarily, if America is a democracy, then D. Trump is the President of the USA\", which is absolutely false. There is no logical impossibility in \" America is a democracy and D. trump is NOT the President of the USA\". </p>\n\n<p>In summary : \nA materially implies B , just if ( contingently) we are not in a case where A is true and B is false. Material implication depends on the truth values propositions have ( contingently) in the actual world. </p>\n\n<p>A logically implies B if and only if there is no logically possible case ( no possible world as logicians/philosophers say , or, no possible \"interpretation\") in which A is true and B is false. A logical implication is an implication that has truth-value ' T ' on every row of it's truth table. </p>\n\n<p>Remark : there is a weaker sense of \"logical implication\": necessary implication that, however, is not formal. for example I can correctly say that \" John is a pianist\" logically implies \" John is a musician\". The implication is necessary since there is no possible case in which a person is a pianist without being a musician. But this is not pure ( formal) logical implication, since the necessity depends on the meaning of the terms \" pianist\" and \" musician\". It is not true that any proposition of the form \" b is a P\" logically implies any proposition of the form \" b is a M\". 
</p>\n" }, { "answer_id": 2079066, "score": 4, "text": "<p>The term material comes from references to Alfred North Whitehead and Bertrand Russell's work (Russell, B. (1963). Principia Mathematica Volume 1. Cambridge, At the University Press.) He used terms such as molecular, elementary and atomic statements to describe structures in logic. The term which refers to statements that are at the bottom is atomic statements. They are what you would obtain if you kept expanding a statement until you could expand it no more. You have reached statements which have a value of either true or false. An example may help:</p>\n\n<p>Let say we have a few statements A and B. If A and B are atomic and we choose to rename the statement A AND B to C, we call C a molecular statement. C is labeled such because it is made up of two different atomic statements. </p>\n\n<p>If we have a different statement, say A|A , we may rename this to D. In this case D would be an elementary statement because only 1 type of atomic statement is used to make it up.</p>\n\n<p>A material statement is used to describe anything made up of molecular or elementary statements. </p>\n\n<p>Because atomic statements all have truth values; this means that a material statement has a truth value. They can be put onto truth tables. A material implication is an \"if\" statement that is made out of statements with truth values. This contrasts to a logical implication which is made out of other mathematical structures. These structures may not have truth values of their own.</p>\n" }, { "answer_id": 2079012, "score": 2, "text": "<p>Dictionary.com says that the origin of \"material implication\" is ca. 
1900.</p>\n\n<p>Thus a good answer to the question <em>why call it material implication</em> is quite likely somewhere in the works of</p>\n\n<ul> \n<li> Charles Sanders Peirce\n<li> Bertrand Russell\n<li> Ludwig Wittgenstein\n</ul>\n\n<p>(Particularly we should take a close look at early terminology of Peirce, Russell, Wittgenstein.)</p>\n\n<p>A starting point to get to the answer might be arXiv:1108.2429 \nby Irving H. Anellis (2011)</p>\n\n<blockquote>\n <p>The matrix for negation is Russell's, alongside of which is the matrix for material implication in the hand of Ludwig Wittgenstein. It is shown that an unpublished manuscript identified as composed by Peirce in 1893 includes a\n truth table matrix that is equivalent to the matrix for material implication discovered by John Shosky. An unpublished manuscript\n by Peirce identified as having been composed in 1883-84 in connection with the composition of Peirce’s \"On the Algebra of Logic: A Contribution to the Philosophy of Notation\" that appeared in the American Journal of Mathematics in 1885 includes an example of an indirect truth table for the conditional.</p>\n</blockquote>\n" }, { "answer_id": 3153908, "score": 2, "text": "<p>The adjective \" material\" in \" material implication\" refers to the content (i.e. \"matter\") , that is to the meaning, and more precisely to the denotation (reduced to truth value since Frege) of the propositions involved. It is distinguished from \"logical implication\" which, strictly speaking, is purely formal, that is, holds whatever the truth values of the atomic propositions involved might be. </p>\n\n<p>Remark : in ancient philosophy , \" content\" was identified to \"matter\" and \"matter\" was opposed to \" form\" </p>\n\n<p>Remark : according to Frege the \"meaning\" of a proposition has two aspects, (1) the \" sense\" ( conceptual aspect) and the (2) \" denotation\" ( or reference, which Frege identifies to truth-value). 
\"1²+ sqrt(1)= sqrt(4)\" has the same denotation as \" 1+1=2\" ( that is, the truth value True) , but the two propositions don't have the same sense ( they do not express the same thought). </p>\n\n<p>Remark: the terms \"material\" and \" formal\" were used in medieval logic to distinguish two kinds of \" consequences\". But the two distinctions are not equivalent, since in medieval logic, the relation involved in a \" consequance\" was always a necessary one. In modern logic, a material implication is by itself CONTINGENT (depending on the contingent truth values of the propositions) while logical implication is a NECESSARY relation. </p>\n\n<p>Reference: On this topic see: Lipschutz, Schaum's Outline Of Set theory and Related Topics, chapter 14. - Available at Archive.org. </p>\n\n<p>Example. Suppose someone asks you \" Does A imply (materially) (A&amp;B)\"? You wouldn't know what annswer to give, and would say \" that depends on the truth values A and B are given in the case you are considering\". Suppose now someone asks you: \" Does (A&amp;B) logically imply A?\". Here, without hesitation, and whithout even knowing the truth values of A and of B, you would answer \" Yes!\". </p>\n\n<p>Example : Lets's say that A is the proposition \" America is a democracy in 2019\" and that B is the proposition \" D. Trump is the President of the USA in 2019\". </p>\n\n<p>Since both A and B are true in the actual world ( although one of them could logically be false, or both), it is actually the case that A MATERIALLY implies B. ( This can be seen on the truth table of material implication). The only case in which A would not imply ( materially) B is the case in which A would be true while B would be false. But notice that this is contingent: it could logically be the case that the first were true, and the second false. </p>\n\n<p>Now, although A materially implies B, A does not LOGICALLY imply B. Logical implication is (1) material implication (2) WITH necessity. 
Lets's use the symbol ==> for logical implication, and the symbol --> for material implication. </p>\n\n<pre><code> [ A ==&gt; B ] if and only if [NECESSARILY ( A--&gt;B)]\n</code></pre>\n\n<p>that is </p>\n\n<pre><code> [ A ==&gt; B] if and only if [ there is no logically possible case, \n in which A is True and B is False] \n</code></pre>\n\n<p>Saying that \" USA is a democracy (in 2019)\" logically implies that \"D. trump is President of the USA in 2019\" means: \" Necessarily, if America is a democracy, then D. Trump is the President of the USA\", which is absolutely false. There is no logical impossibility in \" America is a democracy and D. trump is NOT the President of the USA\". </p>\n\n<p>In summary : \nA materially implies B , just if ( contingently) we are not in a case where A is true and B is false. Material implication depends on the truth values propositions have ( contingently) in the actual world. </p>\n\n<p>A logically implies B if and only if there is no logically possible case ( no possible world as logicians/philosophers say , or, no possible \"interpretation\") in which A is true and B is false. A logical implication is an implication that has truth-value ' T ' on every row of it's truth table. </p>\n\n<p>Remark : there is a weaker sense of \"logical implication\": necessary implication that, however, is not formal. for example I can correctly say that \" John is a pianist\" logically implies \" John is a musician\". The implication is necessary since there is no possible case in which a person is a pianist without being a musician. But this is not pure ( formal) logical implication, since the necessity depends on the meaning of the terms \" pianist\" and \" musician\". It is not true that any proposition of the form \" b is a P\" logically implies any proposition of the form \" b is a M\". 
</p>\n" }, { "answer_id": 2079066, "score": 4, "text": "<p>The term material comes from references to Alfred North Whitehead and Bertrand Russell's work (Russell, B. (1963). Principia Mathematica Volume 1. Cambridge, At the University Press.) He used terms such as molecular, elementary and atomic statements to describe structures in logic. The term which refers to statements that are at the bottom is atomic statements. They are what you would obtain if you kept expanding a statement until you could expand it no more. You have reached statements which have a value of either true or false. An example may help:</p>\n\n<p>Let say we have a few statements A and B. If A and B are atomic and we choose to rename the statement A AND B to C, we call C a molecular statement. C is labeled such because it is made up of two different atomic statements. </p>\n\n<p>If we have a different statement, say A|A , we may rename this to D. In this case D would be an elementary statement because only 1 type of atomic statement is used to make it up.</p>\n\n<p>A material statement is used to describe anything made up of molecular or elementary statements. </p>\n\n<p>Because atomic statements all have truth values; this means that a material statement has a truth value. They can be put onto truth tables. A material implication is an \"if\" statement that is made out of statements with truth values. This contrasts to a logical implication which is made out of other mathematical structures. These structures may not have truth values of their own.</p>\n" }, { "answer_id": 2079012, "score": 2, "text": "<p>Dictionary.com says that the origin of \"material implication\" is ca. 
1900.</p>\n\n<p>Thus a good answer to the question <em>why call it material implication</em> is quite likely somewhere in the works of</p>\n\n<ul> \n<li> Charles Sanders Peirce\n<li> Bertrand Russell\n<li> Ludwig Wittgenstein\n</ul>\n\n<p>(Particularly we should take a close look at early terminology of Peirce, Russell, Wittgenstein.)</p>\n\n<p>A starting point to get to the answer might be arXiv:1108.2429 \nby Irving H. Anellis (2011)</p>\n\n<blockquote>\n <p>The matrix for negation is Russell's, alongside of which is the matrix for material implication in the hand of Ludwig Wittgenstein. It is shown that an unpublished manuscript identified as composed by Peirce in 1893 includes a\n truth table matrix that is equivalent to the matrix for material implication discovered by John Shosky. An unpublished manuscript\n by Peirce identified as having been composed in 1883-84 in connection with the composition of Peirce’s \"On the Algebra of Logic: A Contribution to the Philosophy of Notation\" that appeared in the American Journal of Mathematics in 1885 includes an example of an indirect truth table for the conditional.</p>\n</blockquote>\n" }, { "answer_id": 3153908, "score": 2, "text": "<p>The adjective \" material\" in \" material implication\" refers to the content (i.e. \"matter\") , that is to the meaning, and more precisely to the denotation (reduced to truth value since Frege) of the propositions involved. It is distinguished from \"logical implication\" which, strictly speaking, is purely formal, that is, holds whatever the truth values of the atomic propositions involved might be. </p>\n\n<p>Remark : in ancient philosophy , \" content\" was identified to \"matter\" and \"matter\" was opposed to \" form\" </p>\n\n<p>Remark : according to Frege the \"meaning\" of a proposition has two aspects, (1) the \" sense\" ( conceptual aspect) and the (2) \" denotation\" ( or reference, which Frege identifies to truth-value). 
\"1²+ sqrt(1)= sqrt(4)\" has the same denotation as \" 1+1=2\" ( that is, the truth value True) , but the two propositions don't have the same sense ( they do not express the same thought). </p>\n\n<p>Remark: the terms \"material\" and \" formal\" were used in medieval logic to distinguish two kinds of \" consequences\". But the two distinctions are not equivalent, since in medieval logic, the relation involved in a \" consequance\" was always a necessary one. In modern logic, a material implication is by itself CONTINGENT (depending on the contingent truth values of the propositions) while logical implication is a NECESSARY relation. </p>\n\n<p>Reference: On this topic see: Lipschutz, Schaum's Outline Of Set theory and Related Topics, chapter 14. - Available at Archive.org. </p>\n\n<p>Example. Suppose someone asks you \" Does A imply (materially) (A&amp;B)\"? You wouldn't know what annswer to give, and would say \" that depends on the truth values A and B are given in the case you are considering\". Suppose now someone asks you: \" Does (A&amp;B) logically imply A?\". Here, without hesitation, and whithout even knowing the truth values of A and of B, you would answer \" Yes!\". </p>\n\n<p>Example : Lets's say that A is the proposition \" America is a democracy in 2019\" and that B is the proposition \" D. Trump is the President of the USA in 2019\". </p>\n\n<p>Since both A and B are true in the actual world ( although one of them could logically be false, or both), it is actually the case that A MATERIALLY implies B. ( This can be seen on the truth table of material implication). The only case in which A would not imply ( materially) B is the case in which A would be true while B would be false. But notice that this is contingent: it could logically be the case that the first were true, and the second false. </p>\n\n<p>Now, although A materially implies B, A does not LOGICALLY imply B. Logical implication is (1) material implication (2) WITH necessity. 
Lets's use the symbol ==> for logical implication, and the symbol --> for material implication. </p>\n\n<pre><code> [ A ==&gt; B ] if and only if [NECESSARILY ( A--&gt;B)]\n</code></pre>\n\n<p>that is </p>\n\n<pre><code> [ A ==&gt; B] if and only if [ there is no logically possible case, \n in which A is True and B is False] \n</code></pre>\n\n<p>Saying that \" USA is a democracy (in 2019)\" logically implies that \"D. trump is President of the USA in 2019\" means: \" Necessarily, if America is a democracy, then D. Trump is the President of the USA\", which is absolutely false. There is no logical impossibility in \" America is a democracy and D. trump is NOT the President of the USA\". </p>\n\n<p>In summary : \nA materially implies B , just if ( contingently) we are not in a case where A is true and B is false. Material implication depends on the truth values propositions have ( contingently) in the actual world. </p>\n\n<p>A logically implies B if and only if there is no logically possible case ( no possible world as logicians/philosophers say , or, no possible \"interpretation\") in which A is true and B is false. A logical implication is an implication that has truth-value ' T ' on every row of it's truth table. </p>\n\n<p>Remark : there is a weaker sense of \"logical implication\": necessary implication that, however, is not formal. for example I can correctly say that \" John is a pianist\" logically implies \" John is a musician\". The implication is necessary since there is no possible case in which a person is a pianist without being a musician. But this is not pure ( formal) logical implication, since the necessity depends on the meaning of the terms \" pianist\" and \" musician\". It is not true that any proposition of the form \" b is a P\" logically implies any proposition of the form \" b is a M\". 
</p>\n" }, { "answer_id": 2079066, "score": 4, "text": "<p>The term material comes from references to Alfred North Whitehead and Bertrand Russell's work (Russell, B. (1963). Principia Mathematica Volume 1. Cambridge, At the University Press.) He used terms such as molecular, elementary and atomic statements to describe structures in logic. The term which refers to statements that are at the bottom is atomic statements. They are what you would obtain if you kept expanding a statement until you could expand it no more. You have reached statements which have a value of either true or false. An example may help:</p>\n\n<p>Let's say we have a few statements A and B. If A and B are atomic and we choose to rename the statement A AND B to C, we call C a molecular statement. C is labeled such because it is made up of two different atomic statements. </p>\n\n<p>If we have a different statement, say A|A , we may rename this to D. In this case D would be an elementary statement because only 1 type of atomic statement is used to make it up.</p>\n\n<p>A material statement is used to describe anything made up of molecular or elementary statements. </p>\n\n<p>Because atomic statements all have truth values, this means that a material statement has a truth value. They can be put onto truth tables. A material implication is an \"if\" statement that is made out of statements with truth values. This contrasts to a logical implication which is made out of other mathematical structures. These structures may not have truth values of their own.</p>\n" }, { "answer_id": 2079012, "score": 2, "text": "<p>Dictionary.com says that the origin of \"material implication\" is ca. 
1900.</p>\n\n<p>Thus a good answer to the question <em>why call it material implication</em> is quite likely somewhere in the works of</p>\n\n<ul> \n<li> Charles Sanders Peirce\n<li> Bertrand Russell\n<li> Ludwig Wittgenstein\n</ul>\n\n<p>(Particularly we should take a close look at early terminology of Peirce, Russell, Wittgenstein.)</p>\n\n<p>A starting point to get to the answer might be arXiv:1108.2429 \nby Irving H. Anellis (2011)</p>\n\n<blockquote>\n <p>The matrix for negation is Russell's, alongside of which is the matrix for material implication in the hand of Ludwig Wittgenstein. It is shown that an unpublished manuscript identified as composed by Peirce in 1893 includes a\n truth table matrix that is equivalent to the matrix for material implication discovered by John Shosky. An unpublished manuscript\n by Peirce identified as having been composed in 1883-84 in connection with the composition of Peirce’s \"On the Algebra of Logic: A Contribution to the Philosophy of Notation\" that appeared in the American Journal of Mathematics in 1885 includes an example of an indirect truth table for the conditional.</p>\n</blockquote>\n" }, { "answer_id": 3153908, "score": 2, "text": "<p>The adjective \" material\" in \" material implication\" refers to the content (i.e. \"matter\") , that is to the meaning, and more precisely to the denotation (reduced to truth value since Frege) of the propositions involved. It is distinguished from \"logical implication\" which, strictly speaking, is purely formal, that is, holds whatever the truth values of the atomic propositions involved might be. </p>\n\n<p>Remark : in ancient philosophy , \" content\" was identified to \"matter\" and \"matter\" was opposed to \" form\" </p>\n\n<p>Remark : according to Frege the \"meaning\" of a proposition has two aspects, (1) the \" sense\" ( conceptual aspect) and the (2) \" denotation\" ( or reference, which Frege identifies to truth-value). 
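The distinction drawn in the answers above — material implication is row-by-row contingent, while logical implication holds on every row of the truth table — can be checked mechanically. A minimal sketch (my own illustration, not from any of the answers; the helper name `implies` is an assumption):

```python
# Material vs. logical implication over a two-variable truth table.
from itertools import product

def implies(a, b):
    # Material implication: false only when a is true and b is false.
    return (not a) or b

rows = list(product([True, False], repeat=2))  # all (A, B) valuations

# (A & B) --> A holds on every row: a logical implication (tautology).
assert all(implies(a and b, a) for a, b in rows)

# A --> (A & B) is contingent: true on some rows, false on others,
# so whether it "holds" depends on the actual truth values of A and B.
values = [implies(a, a and b) for a, b in rows]
assert any(values) and not all(values)
```

This mirrors the example in the answers: one cannot say whether A materially implies (A&amp;B) without knowing the valuation, but (A&amp;B) logically implies A regardless of it.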
\"1²+ sqrt(1)= sqrt(4)\" has the same denotation as \" 1+1=2\" ( that is, the truth value True) , but the two propositions don't have the same sense ( they do not express the same thought). </p>\n\n<p>Remark: the terms \"material\" and \" formal\" were used in medieval logic to distinguish two kinds of \" consequences\". But the two distinctions are not equivalent, since in medieval logic, the relation involved in a \" consequance\" was always a necessary one. In modern logic, a material implication is by itself CONTINGENT (depending on the contingent truth values of the propositions) while logical implication is a NECESSARY relation. </p>\n\n<p>Reference: On this topic see: Lipschutz, Schaum's Outline Of Set theory and Related Topics, chapter 14. - Available at Archive.org. </p>\n\n<p>Example. Suppose someone asks you \" Does A imply (materially) (A&amp;B)\"? You wouldn't know what annswer to give, and would say \" that depends on the truth values A and B are given in the case you are considering\". Suppose now someone asks you: \" Does (A&amp;B) logically imply A?\". Here, without hesitation, and whithout even knowing the truth values of A and of B, you would answer \" Yes!\". </p>\n\n<p>Example : Lets's say that A is the proposition \" America is a democracy in 2019\" and that B is the proposition \" D. Trump is the President of the USA in 2019\". </p>\n\n<p>Since both A and B are true in the actual world ( although one of them could logically be false, or both), it is actually the case that A MATERIALLY implies B. ( This can be seen on the truth table of material implication). The only case in which A would not imply ( materially) B is the case in which A would be true while B would be false. But notice that this is contingent: it could logically be the case that the first were true, and the second false. </p>\n\n<p>Now, although A materially implies B, A does not LOGICALLY imply B. Logical implication is (1) material implication (2) WITH necessity. 
Lets's use the symbol ==> for logical implication, and the symbol --> for material implication. </p>\n\n<pre><code> [ A ==&gt; B ] if and only if [NECESSARILY ( A--&gt;B)]\n</code></pre>\n\n<p>that is </p>\n\n<pre><code> [ A ==&gt; B] if and only if [ there is no logically possible case, \n in which A is True and B is False] \n</code></pre>\n\n<p>Saying that \" USA is a democracy (in 2019)\" logically implies that \"D. trump is President of the USA in 2019\" means: \" Necessarily, if America is a democracy, then D. Trump is the President of the USA\", which is absolutely false. There is no logical impossibility in \" America is a democracy and D. trump is NOT the President of the USA\". </p>\n\n<p>In summary : \nA materially implies B , just if ( contingently) we are not in a case where A is true and B is false. Material implication depends on the truth values propositions have ( contingently) in the actual world. </p>\n\n<p>A logically implies B if and only if there is no logically possible case ( no possible world as logicians/philosophers say , or, no possible \"interpretation\") in which A is true and B is false. A logical implication is an implication that has truth-value ' T ' on every row of it's truth table. </p>\n\n<p>Remark : there is a weaker sense of \"logical implication\": necessary implication that, however, is not formal. for example I can correctly say that \" John is a pianist\" logically implies \" John is a musician\". The implication is necessary since there is no possible case in which a person is a pianist without being a musician. But this is not pure ( formal) logical implication, since the necessity depends on the meaning of the terms \" pianist\" and \" musician\". It is not true that any proposition of the form \" b is a P\" logically implies any proposition of the form \" b is a M\". 
</p>\n" }, { "answer_id": 2079066, "score": 4, "text": "<p>The term material comes from references to Alfred North Whitehead and Bertrand Russell's work (Russell, B. (1963). Principia Mathematica Volume 1. Cambridge, At the University Press.) He used terms such as molecular, elementary and atomic statements to describe structures in logic. The term which refers to statements that are at the bottom is atomic statements. They are what you would obtain if you kept expanding a statement until you could expand it no more. You have reached statements which have a value of either true or false. An example may help:</p>\n\n<p>Let say we have a few statements A and B. If A and B are atomic and we choose to rename the statement A AND B to C, we call C a molecular statement. C is labeled such because it is made up of two different atomic statements. </p>\n\n<p>If we have a different statement, say A|A , we may rename this to D. In this case D would be an elementary statement because only 1 type of atomic statement is used to make it up.</p>\n\n<p>A material statement is used to describe anything made up of molecular or elementary statements. </p>\n\n<p>Because atomic statements all have truth values; this means that a material statement has a truth value. They can be put onto truth tables. A material implication is an \"if\" statement that is made out of statements with truth values. This contrasts to a logical implication which is made out of other mathematical structures. These structures may not have truth values of their own.</p>\n" }, { "answer_id": 2079012, "score": 2, "text": "<p>Dictionary.com says that the origin of \"material implication\" is ca. 
1900.</p>\n\n<p>Thus a good answer to the question <em>why call it material implication</em> is quite likely somewhere in the works of</p>\n\n<ul> \n<li> Charles Sanders Peirce\n<li> Bertrand Russell\n<li> Ludwig Wittgenstein\n</ul>\n\n<p>(Particularly we should take a close look at early terminology of Peirce, Russell, Wittgenstein.)</p>\n\n<p>A starting point to get to the answer might be arXiv:1108.2429 \nby Irving H. Anellis (2011)</p>\n\n<blockquote>\n <p>The matrix for negation is Russell's, alongside of which is the matrix for material implication in the hand of Ludwig Wittgenstein. It is shown that an unpublished manuscript identified as composed by Peirce in 1893 includes a\n truth table matrix that is equivalent to the matrix for material implication discovered by John Shosky. An unpublished manuscript\n by Peirce identified as having been composed in 1883-84 in connection with the composition of Peirce’s \"On the Algebra of Logic: A Contribution to the Philosophy of Notation\" that appeared in the American Journal of Mathematics in 1885 includes an example of an indirect truth table for the conditional.</p>\n</blockquote>\n" }, { "answer_id": 3153908, "score": 2, "text": "<p>The adjective \" material\" in \" material implication\" refers to the content (i.e. \"matter\") , that is to the meaning, and more precisely to the denotation (reduced to truth value since Frege) of the propositions involved. It is distinguished from \"logical implication\" which, strictly speaking, is purely formal, that is, holds whatever the truth values of the atomic propositions involved might be. </p>\n\n<p>Remark : in ancient philosophy , \" content\" was identified to \"matter\" and \"matter\" was opposed to \" form\" </p>\n\n<p>Remark : according to Frege the \"meaning\" of a proposition has two aspects, (1) the \" sense\" ( conceptual aspect) and the (2) \" denotation\" ( or reference, which Frege identifies to truth-value). 
\"1²+ sqrt(1)= sqrt(4)\" has the same denotation as \" 1+1=2\" ( that is, the truth value True) , but the two propositions don't have the same sense ( they do not express the same thought). </p>\n\n<p>Remark: the terms \"material\" and \" formal\" were used in medieval logic to distinguish two kinds of \" consequences\". But the two distinctions are not equivalent, since in medieval logic, the relation involved in a \" consequance\" was always a necessary one. In modern logic, a material implication is by itself CONTINGENT (depending on the contingent truth values of the propositions) while logical implication is a NECESSARY relation. </p>\n\n<p>Reference: On this topic see: Lipschutz, Schaum's Outline Of Set theory and Related Topics, chapter 14. - Available at Archive.org. </p>\n\n<p>Example. Suppose someone asks you \" Does A imply (materially) (A&amp;B)\"? You wouldn't know what annswer to give, and would say \" that depends on the truth values A and B are given in the case you are considering\". Suppose now someone asks you: \" Does (A&amp;B) logically imply A?\". Here, without hesitation, and whithout even knowing the truth values of A and of B, you would answer \" Yes!\". </p>\n\n<p>Example : Lets's say that A is the proposition \" America is a democracy in 2019\" and that B is the proposition \" D. Trump is the President of the USA in 2019\". </p>\n\n<p>Since both A and B are true in the actual world ( although one of them could logically be false, or both), it is actually the case that A MATERIALLY implies B. ( This can be seen on the truth table of material implication). The only case in which A would not imply ( materially) B is the case in which A would be true while B would be false. But notice that this is contingent: it could logically be the case that the first were true, and the second false. </p>\n\n<p>Now, although A materially implies B, A does not LOGICALLY imply B. Logical implication is (1) material implication (2) WITH necessity. 
Lets's use the symbol ==> for logical implication, and the symbol --> for material implication. </p>\n\n<pre><code> [ A ==&gt; B ] if and only if [NECESSARILY ( A--&gt;B)]\n</code></pre>\n\n<p>that is </p>\n\n<pre><code> [ A ==&gt; B] if and only if [ there is no logically possible case, \n in which A is True and B is False] \n</code></pre>\n\n<p>Saying that \" USA is a democracy (in 2019)\" logically implies that \"D. trump is President of the USA in 2019\" means: \" Necessarily, if America is a democracy, then D. Trump is the President of the USA\", which is absolutely false. There is no logical impossibility in \" America is a democracy and D. trump is NOT the President of the USA\". </p>\n\n<p>In summary : \nA materially implies B , just if ( contingently) we are not in a case where A is true and B is false. Material implication depends on the truth values propositions have ( contingently) in the actual world. </p>\n\n<p>A logically implies B if and only if there is no logically possible case ( no possible world as logicians/philosophers say , or, no possible \"interpretation\") in which A is true and B is false. A logical implication is an implication that has truth-value ' T ' on every row of it's truth table. </p>\n\n<p>Remark : there is a weaker sense of \"logical implication\": necessary implication that, however, is not formal. for example I can correctly say that \" John is a pianist\" logically implies \" John is a musician\". The implication is necessary since there is no possible case in which a person is a pianist without being a musician. But this is not pure ( formal) logical implication, since the necessity depends on the meaning of the terms \" pianist\" and \" musician\". It is not true that any proposition of the form \" b is a P\" logically implies any proposition of the form \" b is a M\". 
</p>\n" }, { "answer_id": 2079066, "score": 4, "text": "<p>The term material comes from references to Alfred North Whitehead and Bertrand Russell's work (Russell, B. (1963). Principia Mathematica Volume 1. Cambridge, At the University Press.) He used terms such as molecular, elementary and atomic statements to describe structures in logic. The term which refers to statements that are at the bottom is atomic statements. They are what you would obtain if you kept expanding a statement until you could expand it no more. You have reached statements which have a value of either true or false. An example may help:</p>\n\n<p>Let say we have a few statements A and B. If A and B are atomic and we choose to rename the statement A AND B to C, we call C a molecular statement. C is labeled such because it is made up of two different atomic statements. </p>\n\n<p>If we have a different statement, say A|A , we may rename this to D. In this case D would be an elementary statement because only 1 type of atomic statement is used to make it up.</p>\n\n<p>A material statement is used to describe anything made up of molecular or elementary statements. </p>\n\n<p>Because atomic statements all have truth values; this means that a material statement has a truth value. They can be put onto truth tables. A material implication is an \"if\" statement that is made out of statements with truth values. This contrasts to a logical implication which is made out of other mathematical structures. These structures may not have truth values of their own.</p>\n" }, { "answer_id": 2079012, "score": 2, "text": "<p>Dictionary.com says that the origin of \"material implication\" is ca. 
1900.</p>\n\n<p>Thus a good answer to the question <em>why call it material implication</em> is quite likely somewhere in the works of</p>\n\n<ul> \n<li> Charles Sanders Peirce\n<li> Bertrand Russell\n<li> Ludwig Wittgenstein\n</ul>\n\n<p>(Particularly we should take a close look at early terminology of Peirce, Russell, Wittgenstein.)</p>\n\n<p>A starting point to get to the answer might be arXiv:1108.2429 \nby Irving H. Anellis (2011)</p>\n\n<blockquote>\n <p>The matrix for negation is Russell's, alongside of which is the matrix for material implication in the hand of Ludwig Wittgenstein. It is shown that an unpublished manuscript identified as composed by Peirce in 1893 includes a\n truth table matrix that is equivalent to the matrix for material implication discovered by John Shosky. An unpublished manuscript\n by Peirce identified as having been composed in 1883-84 in connection with the composition of Peirce’s \"On the Algebra of Logic: A Contribution to the Philosophy of Notation\" that appeared in the American Journal of Mathematics in 1885 includes an example of an indirect truth table for the conditional.</p>\n</blockquote>\n" }, { "answer_id": 3153908, "score": 2, "text": "<p>The adjective \" material\" in \" material implication\" refers to the content (i.e. \"matter\") , that is to the meaning, and more precisely to the denotation (reduced to truth value since Frege) of the propositions involved. It is distinguished from \"logical implication\" which, strictly speaking, is purely formal, that is, holds whatever the truth values of the atomic propositions involved might be. </p>\n\n<p>Remark : in ancient philosophy , \" content\" was identified to \"matter\" and \"matter\" was opposed to \" form\" </p>\n\n<p>Remark : according to Frege the \"meaning\" of a proposition has two aspects, (1) the \" sense\" ( conceptual aspect) and the (2) \" denotation\" ( or reference, which Frege identifies to truth-value). 
\"1²+ sqrt(1)= sqrt(4)\" has the same denotation as \" 1+1=2\" ( that is, the truth value True) , but the two propositions don't have the same sense ( they do not express the same thought). </p>\n\n<p>Remark: the terms \"material\" and \" formal\" were used in medieval logic to distinguish two kinds of \" consequences\". But the two distinctions are not equivalent, since in medieval logic, the relation involved in a \" consequance\" was always a necessary one. In modern logic, a material implication is by itself CONTINGENT (depending on the contingent truth values of the propositions) while logical implication is a NECESSARY relation. </p>\n\n<p>Reference: On this topic see: Lipschutz, Schaum's Outline Of Set theory and Related Topics, chapter 14. - Available at Archive.org. </p>\n\n<p>Example. Suppose someone asks you \" Does A imply (materially) (A&amp;B)\"? You wouldn't know what annswer to give, and would say \" that depends on the truth values A and B are given in the case you are considering\". Suppose now someone asks you: \" Does (A&amp;B) logically imply A?\". Here, without hesitation, and whithout even knowing the truth values of A and of B, you would answer \" Yes!\". </p>\n\n<p>Example : Lets's say that A is the proposition \" America is a democracy in 2019\" and that B is the proposition \" D. Trump is the President of the USA in 2019\". </p>\n\n<p>Since both A and B are true in the actual world ( although one of them could logically be false, or both), it is actually the case that A MATERIALLY implies B. ( This can be seen on the truth table of material implication). The only case in which A would not imply ( materially) B is the case in which A would be true while B would be false. But notice that this is contingent: it could logically be the case that the first were true, and the second false. </p>\n\n<p>Now, although A materially implies B, A does not LOGICALLY imply B. Logical implication is (1) material implication (2) WITH necessity. 
Lets's use the symbol ==> for logical implication, and the symbol --> for material implication. </p>\n\n<pre><code> [ A ==&gt; B ] if and only if [NECESSARILY ( A--&gt;B)]\n</code></pre>\n\n<p>that is </p>\n\n<pre><code> [ A ==&gt; B] if and only if [ there is no logically possible case, \n in which A is True and B is False] \n</code></pre>\n\n<p>Saying that \" USA is a democracy (in 2019)\" logically implies that \"D. trump is President of the USA in 2019\" means: \" Necessarily, if America is a democracy, then D. Trump is the President of the USA\", which is absolutely false. There is no logical impossibility in \" America is a democracy and D. trump is NOT the President of the USA\". </p>\n\n<p>In summary : \nA materially implies B , just if ( contingently) we are not in a case where A is true and B is false. Material implication depends on the truth values propositions have ( contingently) in the actual world. </p>\n\n<p>A logically implies B if and only if there is no logically possible case ( no possible world as logicians/philosophers say , or, no possible \"interpretation\") in which A is true and B is false. A logical implication is an implication that has truth-value ' T ' on every row of it's truth table. </p>\n\n<p>Remark : there is a weaker sense of \"logical implication\": necessary implication that, however, is not formal. for example I can correctly say that \" John is a pianist\" logically implies \" John is a musician\". The implication is necessary since there is no possible case in which a person is a pianist without being a musician. But this is not pure ( formal) logical implication, since the necessity depends on the meaning of the terms \" pianist\" and \" musician\". It is not true that any proposition of the form \" b is a P\" logically implies any proposition of the form \" b is a M\". 
</p>\n" }, { "answer_id": 2079066, "score": 4, "text": "<p>The term material comes from references to Alfred North Whitehead and Bertrand Russell's work (Russell, B. (1963). Principia Mathematica Volume 1. Cambridge, At the University Press.) He used terms such as molecular, elementary and atomic statements to describe structures in logic. The term which refers to statements that are at the bottom is atomic statements. They are what you would obtain if you kept expanding a statement until you could expand it no more. You have reached statements which have a value of either true or false. An example may help:</p>\n\n<p>Let say we have a few statements A and B. If A and B are atomic and we choose to rename the statement A AND B to C, we call C a molecular statement. C is labeled such because it is made up of two different atomic statements. </p>\n\n<p>If we have a different statement, say A|A , we may rename this to D. In this case D would be an elementary statement because only 1 type of atomic statement is used to make it up.</p>\n\n<p>A material statement is used to describe anything made up of molecular or elementary statements. </p>\n\n<p>Because atomic statements all have truth values; this means that a material statement has a truth value. They can be put onto truth tables. A material implication is an \"if\" statement that is made out of statements with truth values. This contrasts to a logical implication which is made out of other mathematical structures. These structures may not have truth values of their own.</p>\n" }, { "answer_id": 2079012, "score": 2, "text": "<p>Dictionary.com says that the origin of \"material implication\" is ca. 
1900.</p>\n\n<p>Thus a good answer to the question <em>why call it material implication</em> is quite likely somewhere in the works of</p>\n\n<ul> \n<li> Charles Sanders Peirce\n<li> Bertrand Russell\n<li> Ludwig Wittgenstein\n</ul>\n\n<p>(Particularly we should take a close look at early terminology of Peirce, Russell, Wittgenstein.)</p>\n\n<p>A starting point to get to the answer might be arXiv:1108.2429 \nby Irving H. Anellis (2011)</p>\n\n<blockquote>\n <p>The matrix for negation is Russell's, alongside of which is the matrix for material implication in the hand of Ludwig Wittgenstein. It is shown that an unpublished manuscript identified as composed by Peirce in 1893 includes a\n truth table matrix that is equivalent to the matrix for material implication discovered by John Shosky. An unpublished manuscript\n by Peirce identified as having been composed in 1883-84 in connection with the composition of Peirce’s \"On the Algebra of Logic: A Contribution to the Philosophy of Notation\" that appeared in the American Journal of Mathematics in 1885 includes an example of an indirect truth table for the conditional.</p>\n</blockquote>\n" }, { "answer_id": 3153908, "score": 2, "text": "<p>The adjective \" material\" in \" material implication\" refers to the content (i.e. \"matter\") , that is to the meaning, and more precisely to the denotation (reduced to truth value since Frege) of the propositions involved. It is distinguished from \"logical implication\" which, strictly speaking, is purely formal, that is, holds whatever the truth values of the atomic propositions involved might be. </p>\n\n<p>Remark : in ancient philosophy , \" content\" was identified to \"matter\" and \"matter\" was opposed to \" form\" </p>\n\n<p>Remark : according to Frege the \"meaning\" of a proposition has two aspects, (1) the \" sense\" ( conceptual aspect) and the (2) \" denotation\" ( or reference, which Frege identifies to truth-value). 
\"1²+ sqrt(1)= sqrt(4)\" has the same denotation as \" 1+1=2\" ( that is, the truth value True) , but the two propositions don't have the same sense ( they do not express the same thought). </p>\n\n<p>Remark: the terms \"material\" and \" formal\" were used in medieval logic to distinguish two kinds of \" consequences\". But the two distinctions are not equivalent, since in medieval logic, the relation involved in a \" consequance\" was always a necessary one. In modern logic, a material implication is by itself CONTINGENT (depending on the contingent truth values of the propositions) while logical implication is a NECESSARY relation. </p>\n\n<p>Reference: On this topic see: Lipschutz, Schaum's Outline Of Set theory and Related Topics, chapter 14. - Available at Archive.org. </p>\n\n<p>Example. Suppose someone asks you \" Does A imply (materially) (A&amp;B)\"? You wouldn't know what annswer to give, and would say \" that depends on the truth values A and B are given in the case you are considering\". Suppose now someone asks you: \" Does (A&amp;B) logically imply A?\". Here, without hesitation, and whithout even knowing the truth values of A and of B, you would answer \" Yes!\". </p>\n\n<p>Example : Lets's say that A is the proposition \" America is a democracy in 2019\" and that B is the proposition \" D. Trump is the President of the USA in 2019\". </p>\n\n<p>Since both A and B are true in the actual world ( although one of them could logically be false, or both), it is actually the case that A MATERIALLY implies B. ( This can be seen on the truth table of material implication). The only case in which A would not imply ( materially) B is the case in which A would be true while B would be false. But notice that this is contingent: it could logically be the case that the first were true, and the second false. </p>\n\n<p>Now, although A materially implies B, A does not LOGICALLY imply B. Logical implication is (1) material implication (2) WITH necessity. 
Lets's use the symbol ==> for logical implication, and the symbol --> for material implication. </p>\n\n<pre><code> [ A ==&gt; B ] if and only if [NECESSARILY ( A--&gt;B)]\n</code></pre>\n\n<p>that is </p>\n\n<pre><code> [ A ==&gt; B] if and only if [ there is no logically possible case, \n in which A is True and B is False] \n</code></pre>\n\n<p>Saying that \" USA is a democracy (in 2019)\" logically implies that \"D. trump is President of the USA in 2019\" means: \" Necessarily, if America is a democracy, then D. Trump is the President of the USA\", which is absolutely false. There is no logical impossibility in \" America is a democracy and D. trump is NOT the President of the USA\". </p>\n\n<p>In summary : \nA materially implies B , just if ( contingently) we are not in a case where A is true and B is false. Material implication depends on the truth values propositions have ( contingently) in the actual world. </p>\n\n<p>A logically implies B if and only if there is no logically possible case ( no possible world as logicians/philosophers say , or, no possible \"interpretation\") in which A is true and B is false. A logical implication is an implication that has truth-value ' T ' on every row of it's truth table. </p>\n\n<p>Remark : there is a weaker sense of \"logical implication\": necessary implication that, however, is not formal. for example I can correctly say that \" John is a pianist\" logically implies \" John is a musician\". The implication is necessary since there is no possible case in which a person is a pianist without being a musician. But this is not pure ( formal) logical implication, since the necessity depends on the meaning of the terms \" pianist\" and \" musician\". It is not true that any proposition of the form \" b is a P\" logically implies any proposition of the form \" b is a M\". 
</p>\n" }, { "answer_id": 2079066, "score": 4, "text": "<p>The term material comes from references to Alfred North Whitehead and Bertrand Russell's work (Russell, B. (1963). Principia Mathematica Volume 1. Cambridge, At the University Press.) He used terms such as molecular, elementary and atomic statements to describe structures in logic. The term which refers to statements that are at the bottom is atomic statements. They are what you would obtain if you kept expanding a statement until you could expand it no more. You have reached statements which have a value of either true or false. An example may help:</p>\n\n<p>Let say we have a few statements A and B. If A and B are atomic and we choose to rename the statement A AND B to C, we call C a molecular statement. C is labeled such because it is made up of two different atomic statements. </p>\n\n<p>If we have a different statement, say A|A , we may rename this to D. In this case D would be an elementary statement because only 1 type of atomic statement is used to make it up.</p>\n\n<p>A material statement is used to describe anything made up of molecular or elementary statements. </p>\n\n<p>Because atomic statements all have truth values; this means that a material statement has a truth value. They can be put onto truth tables. A material implication is an \"if\" statement that is made out of statements with truth values. This contrasts to a logical implication which is made out of other mathematical structures. These structures may not have truth values of their own.</p>\n" }, { "answer_id": 2079012, "score": 2, "text": "<p>Dictionary.com says that the origin of \"material implication\" is ca. 
1900.</p>\n\n<p>Thus a good answer to the question <em>why call it material implication</em> is quite likely somewhere in the works of</p>\n\n<ul> \n<li> Charles Sanders Peirce\n<li> Bertrand Russell\n<li> Ludwig Wittgenstein\n</ul>\n\n<p>(Particularly we should take a close look at early terminology of Peirce, Russell, Wittgenstein.)</p>\n\n<p>A starting point to get to the answer might be arXiv:1108.2429 \nby Irving H. Anellis (2011)</p>\n\n<blockquote>\n <p>The matrix for negation is Russell's, alongside of which is the matrix for material implication in the hand of Ludwig Wittgenstein. It is shown that an unpublished manuscript identified as composed by Peirce in 1893 includes a\n truth table matrix that is equivalent to the matrix for material implication discovered by John Shosky. An unpublished manuscript\n by Peirce identified as having been composed in 1883-84 in connection with the composition of Peirce’s \"On the Algebra of Logic: A Contribution to the Philosophy of Notation\" that appeared in the American Journal of Mathematics in 1885 includes an example of an indirect truth table for the conditional.</p>\n</blockquote>\n" }, { "answer_id": 3153908, "score": 2, "text": "<p>The adjective \" material\" in \" material implication\" refers to the content (i.e. \"matter\") , that is to the meaning, and more precisely to the denotation (reduced to truth value since Frege) of the propositions involved. It is distinguished from \"logical implication\" which, strictly speaking, is purely formal, that is, holds whatever the truth values of the atomic propositions involved might be. </p>\n\n<p>Remark : in ancient philosophy , \" content\" was identified to \"matter\" and \"matter\" was opposed to \" form\" </p>\n\n<p>Remark : according to Frege the \"meaning\" of a proposition has two aspects, (1) the \" sense\" ( conceptual aspect) and the (2) \" denotation\" ( or reference, which Frege identifies to truth-value). 
\"1²+ sqrt(1)= sqrt(4)\" has the same denotation as \" 1+1=2\" ( that is, the truth value True) , but the two propositions don't have the same sense ( they do not express the same thought). </p>\n\n<p>Remark: the terms \"material\" and \" formal\" were used in medieval logic to distinguish two kinds of \" consequences\". But the two distinctions are not equivalent, since in medieval logic, the relation involved in a \" consequance\" was always a necessary one. In modern logic, a material implication is by itself CONTINGENT (depending on the contingent truth values of the propositions) while logical implication is a NECESSARY relation. </p>\n\n<p>Reference: On this topic see: Lipschutz, Schaum's Outline Of Set theory and Related Topics, chapter 14. - Available at Archive.org. </p>\n\n<p>Example. Suppose someone asks you \" Does A imply (materially) (A&amp;B)\"? You wouldn't know what annswer to give, and would say \" that depends on the truth values A and B are given in the case you are considering\". Suppose now someone asks you: \" Does (A&amp;B) logically imply A?\". Here, without hesitation, and whithout even knowing the truth values of A and of B, you would answer \" Yes!\". </p>\n\n<p>Example : Lets's say that A is the proposition \" America is a democracy in 2019\" and that B is the proposition \" D. Trump is the President of the USA in 2019\". </p>\n\n<p>Since both A and B are true in the actual world ( although one of them could logically be false, or both), it is actually the case that A MATERIALLY implies B. ( This can be seen on the truth table of material implication). The only case in which A would not imply ( materially) B is the case in which A would be true while B would be false. But notice that this is contingent: it could logically be the case that the first were true, and the second false. </p>\n\n<p>Now, although A materially implies B, A does not LOGICALLY imply B. Logical implication is (1) material implication (2) WITH necessity. 
Let's use the symbol ==> for logical implication, and the symbol --> for material implication. </p>\n\n<pre><code> [ A ==&gt; B ] if and only if [NECESSARILY ( A--&gt;B)]\n</code></pre>\n\n<p>that is </p>\n\n<pre><code> [ A ==&gt; B] if and only if [ there is no logically possible case, \n in which A is True and B is False] \n</code></pre>\n\n<p>Saying that \"the USA is a democracy (in 2019)\" logically implies \"D. Trump is President of the USA in 2019\" means: \"Necessarily, if America is a democracy, then D. Trump is the President of the USA\", which is absolutely false. There is no logical impossibility in \"America is a democracy and D. Trump is NOT the President of the USA\". </p>\n\n<p>In summary: \nA materially implies B just if (contingently) we are not in a case where A is true and B is false. Material implication depends on the truth values propositions have (contingently) in the actual world. </p>\n\n<p>A logically implies B if and only if there is no logically possible case (no possible world, as logicians/philosophers say, or no possible \"interpretation\") in which A is true and B is false. A logical implication is an implication that has truth-value 'T' on every row of its truth table. </p>\n\n<p>Remark: there is a weaker sense of \"logical implication\": necessary implication that, however, is not formal. For example, I can correctly say that \"John is a pianist\" logically implies \"John is a musician\". The implication is necessary since there is no possible case in which a person is a pianist without being a musician. But this is not pure (formal) logical implication, since the necessity depends on the meaning of the terms \"pianist\" and \"musician\". It is not true that any proposition of the form \"b is a P\" logically implies any proposition of the form \"b is a M\". 
</p>\n" }, { "answer_id": 2079066, "score": 4, "text": "<p>The term material comes from references to Alfred North Whitehead and Bertrand Russell's work (Russell, B. (1963). Principia Mathematica Volume 1. Cambridge, At the University Press.) He used terms such as molecular, elementary and atomic statements to describe structures in logic. The term which refers to statements that are at the bottom is atomic statements. They are what you would obtain if you kept expanding a statement until you could expand it no more. You have reached statements which have a value of either true or false. An example may help:</p>\n\n<p>Let say we have a few statements A and B. If A and B are atomic and we choose to rename the statement A AND B to C, we call C a molecular statement. C is labeled such because it is made up of two different atomic statements. </p>\n\n<p>If we have a different statement, say A|A , we may rename this to D. In this case D would be an elementary statement because only 1 type of atomic statement is used to make it up.</p>\n\n<p>A material statement is used to describe anything made up of molecular or elementary statements. </p>\n\n<p>Because atomic statements all have truth values; this means that a material statement has a truth value. They can be put onto truth tables. A material implication is an \"if\" statement that is made out of statements with truth values. This contrasts to a logical implication which is made out of other mathematical structures. These structures may not have truth values of their own.</p>\n" }, { "answer_id": 2079012, "score": 2, "text": "<p>Dictionary.com says that the origin of \"material implication\" is ca. 
1900.</p>\n\n<p>Thus a good answer to the question <em>why call it material implication</em> is quite likely somewhere in the works of</p>\n\n<ul> \n<li> Charles Sanders Peirce\n<li> Bertrand Russell\n<li> Ludwig Wittgenstein\n</ul>\n\n<p>(Particularly we should take a close look at early terminology of Peirce, Russell, Wittgenstein.)</p>\n\n<p>A starting point to get to the answer might be arXiv:1108.2429 \nby Irving H. Anellis (2011)</p>\n\n<blockquote>\n <p>The matrix for negation is Russell's, alongside of which is the matrix for material implication in the hand of Ludwig Wittgenstein. It is shown that an unpublished manuscript identified as composed by Peirce in 1893 includes a\n truth table matrix that is equivalent to the matrix for material implication discovered by John Shosky. An unpublished manuscript\n by Peirce identified as having been composed in 1883-84 in connection with the composition of Peirce’s \"On the Algebra of Logic: A Contribution to the Philosophy of Notation\" that appeared in the American Journal of Mathematics in 1885 includes an example of an indirect truth table for the conditional.</p>\n</blockquote>\n" }, { "answer_id": 3153908, "score": 2, "text": "<p>The adjective \" material\" in \" material implication\" refers to the content (i.e. \"matter\") , that is to the meaning, and more precisely to the denotation (reduced to truth value since Frege) of the propositions involved. It is distinguished from \"logical implication\" which, strictly speaking, is purely formal, that is, holds whatever the truth values of the atomic propositions involved might be. </p>\n\n<p>Remark : in ancient philosophy , \" content\" was identified to \"matter\" and \"matter\" was opposed to \" form\" </p>\n\n<p>Remark : according to Frege the \"meaning\" of a proposition has two aspects, (1) the \" sense\" ( conceptual aspect) and the (2) \" denotation\" ( or reference, which Frege identifies to truth-value). 
\"1²+ sqrt(1)= sqrt(4)\" has the same denotation as \" 1+1=2\" ( that is, the truth value True) , but the two propositions don't have the same sense ( they do not express the same thought). </p>\n\n<p>Remark: the terms \"material\" and \" formal\" were used in medieval logic to distinguish two kinds of \" consequences\". But the two distinctions are not equivalent, since in medieval logic, the relation involved in a \" consequance\" was always a necessary one. In modern logic, a material implication is by itself CONTINGENT (depending on the contingent truth values of the propositions) while logical implication is a NECESSARY relation. </p>\n\n<p>Reference: On this topic see: Lipschutz, Schaum's Outline Of Set theory and Related Topics, chapter 14. - Available at Archive.org. </p>\n\n<p>Example. Suppose someone asks you \" Does A imply (materially) (A&amp;B)\"? You wouldn't know what annswer to give, and would say \" that depends on the truth values A and B are given in the case you are considering\". Suppose now someone asks you: \" Does (A&amp;B) logically imply A?\". Here, without hesitation, and whithout even knowing the truth values of A and of B, you would answer \" Yes!\". </p>\n\n<p>Example : Lets's say that A is the proposition \" America is a democracy in 2019\" and that B is the proposition \" D. Trump is the President of the USA in 2019\". </p>\n\n<p>Since both A and B are true in the actual world ( although one of them could logically be false, or both), it is actually the case that A MATERIALLY implies B. ( This can be seen on the truth table of material implication). The only case in which A would not imply ( materially) B is the case in which A would be true while B would be false. But notice that this is contingent: it could logically be the case that the first were true, and the second false. </p>\n\n<p>Now, although A materially implies B, A does not LOGICALLY imply B. Logical implication is (1) material implication (2) WITH necessity. 
Lets's use the symbol ==> for logical implication, and the symbol --> for material implication. </p>\n\n<pre><code> [ A ==&gt; B ] if and only if [NECESSARILY ( A--&gt;B)]\n</code></pre>\n\n<p>that is </p>\n\n<pre><code> [ A ==&gt; B] if and only if [ there is no logically possible case, \n in which A is True and B is False] \n</code></pre>\n\n<p>Saying that \" USA is a democracy (in 2019)\" logically implies that \"D. trump is President of the USA in 2019\" means: \" Necessarily, if America is a democracy, then D. Trump is the President of the USA\", which is absolutely false. There is no logical impossibility in \" America is a democracy and D. trump is NOT the President of the USA\". </p>\n\n<p>In summary : \nA materially implies B , just if ( contingently) we are not in a case where A is true and B is false. Material implication depends on the truth values propositions have ( contingently) in the actual world. </p>\n\n<p>A logically implies B if and only if there is no logically possible case ( no possible world as logicians/philosophers say , or, no possible \"interpretation\") in which A is true and B is false. A logical implication is an implication that has truth-value ' T ' on every row of it's truth table. </p>\n\n<p>Remark : there is a weaker sense of \"logical implication\": necessary implication that, however, is not formal. for example I can correctly say that \" John is a pianist\" logically implies \" John is a musician\". The implication is necessary since there is no possible case in which a person is a pianist without being a musician. But this is not pure ( formal) logical implication, since the necessity depends on the meaning of the terms \" pianist\" and \" musician\". It is not true that any proposition of the form \" b is a P\" logically implies any proposition of the form \" b is a M\". 
</p>\n" }, { "answer_id": 2079066, "score": 4, "text": "<p>The term material comes from references to Alfred North Whitehead and Bertrand Russell's work (Russell, B. (1963). Principia Mathematica Volume 1. Cambridge, At the University Press.) He used terms such as molecular, elementary and atomic statements to describe structures in logic. The term which refers to statements that are at the bottom is atomic statements. They are what you would obtain if you kept expanding a statement until you could expand it no more. You have reached statements which have a value of either true or false. An example may help:</p>\n\n<p>Let say we have a few statements A and B. If A and B are atomic and we choose to rename the statement A AND B to C, we call C a molecular statement. C is labeled such because it is made up of two different atomic statements. </p>\n\n<p>If we have a different statement, say A|A , we may rename this to D. In this case D would be an elementary statement because only 1 type of atomic statement is used to make it up.</p>\n\n<p>A material statement is used to describe anything made up of molecular or elementary statements. </p>\n\n<p>Because atomic statements all have truth values; this means that a material statement has a truth value. They can be put onto truth tables. A material implication is an \"if\" statement that is made out of statements with truth values. This contrasts to a logical implication which is made out of other mathematical structures. These structures may not have truth values of their own.</p>\n" }, { "answer_id": 2079012, "score": 2, "text": "<p>Dictionary.com says that the origin of \"material implication\" is ca. 
1900.</p>\n\n<p>Thus a good answer to the question <em>why call it material implication</em> is quite likely somewhere in the works of</p>\n\n<ul> \n<li> Charles Sanders Peirce\n<li> Bertrand Russell\n<li> Ludwig Wittgenstein\n</ul>\n\n<p>(Particularly we should take a close look at early terminology of Peirce, Russell, Wittgenstein.)</p>\n\n<p>A starting point to get to the answer might be arXiv:1108.2429 \nby Irving H. Anellis (2011)</p>\n\n<blockquote>\n <p>The matrix for negation is Russell's, alongside of which is the matrix for material implication in the hand of Ludwig Wittgenstein. It is shown that an unpublished manuscript identified as composed by Peirce in 1893 includes a\n truth table matrix that is equivalent to the matrix for material implication discovered by John Shosky. An unpublished manuscript\n by Peirce identified as having been composed in 1883-84 in connection with the composition of Peirce’s \"On the Algebra of Logic: A Contribution to the Philosophy of Notation\" that appeared in the American Journal of Mathematics in 1885 includes an example of an indirect truth table for the conditional.</p>\n</blockquote>\n" }, { "answer_id": 3153908, "score": 2, "text": "<p>The adjective \" material\" in \" material implication\" refers to the content (i.e. \"matter\") , that is to the meaning, and more precisely to the denotation (reduced to truth value since Frege) of the propositions involved. It is distinguished from \"logical implication\" which, strictly speaking, is purely formal, that is, holds whatever the truth values of the atomic propositions involved might be. </p>\n\n<p>Remark : in ancient philosophy , \" content\" was identified to \"matter\" and \"matter\" was opposed to \" form\" </p>\n\n<p>Remark : according to Frege the \"meaning\" of a proposition has two aspects, (1) the \" sense\" ( conceptual aspect) and the (2) \" denotation\" ( or reference, which Frege identifies to truth-value). 
\"1²+ sqrt(1)= sqrt(4)\" has the same denotation as \" 1+1=2\" ( that is, the truth value True) , but the two propositions don't have the same sense ( they do not express the same thought). </p>\n\n<p>Remark: the terms \"material\" and \" formal\" were used in medieval logic to distinguish two kinds of \" consequences\". But the two distinctions are not equivalent, since in medieval logic, the relation involved in a \" consequance\" was always a necessary one. In modern logic, a material implication is by itself CONTINGENT (depending on the contingent truth values of the propositions) while logical implication is a NECESSARY relation. </p>\n\n<p>Reference: On this topic see: Lipschutz, Schaum's Outline Of Set theory and Related Topics, chapter 14. - Available at Archive.org. </p>\n\n<p>Example. Suppose someone asks you \" Does A imply (materially) (A&amp;B)\"? You wouldn't know what annswer to give, and would say \" that depends on the truth values A and B are given in the case you are considering\". Suppose now someone asks you: \" Does (A&amp;B) logically imply A?\". Here, without hesitation, and whithout even knowing the truth values of A and of B, you would answer \" Yes!\". </p>\n\n<p>Example : Lets's say that A is the proposition \" America is a democracy in 2019\" and that B is the proposition \" D. Trump is the President of the USA in 2019\". </p>\n\n<p>Since both A and B are true in the actual world ( although one of them could logically be false, or both), it is actually the case that A MATERIALLY implies B. ( This can be seen on the truth table of material implication). The only case in which A would not imply ( materially) B is the case in which A would be true while B would be false. But notice that this is contingent: it could logically be the case that the first were true, and the second false. </p>\n\n<p>Now, although A materially implies B, A does not LOGICALLY imply B. Logical implication is (1) material implication (2) WITH necessity. 
Lets's use the symbol ==> for logical implication, and the symbol --> for material implication. </p>\n\n<pre><code> [ A ==&gt; B ] if and only if [NECESSARILY ( A--&gt;B)]\n</code></pre>\n\n<p>that is </p>\n\n<pre><code> [ A ==&gt; B] if and only if [ there is no logically possible case, \n in which A is True and B is False] \n</code></pre>\n\n<p>Saying that \" USA is a democracy (in 2019)\" logically implies that \"D. trump is President of the USA in 2019\" means: \" Necessarily, if America is a democracy, then D. Trump is the President of the USA\", which is absolutely false. There is no logical impossibility in \" America is a democracy and D. trump is NOT the President of the USA\". </p>\n\n<p>In summary : \nA materially implies B , just if ( contingently) we are not in a case where A is true and B is false. Material implication depends on the truth values propositions have ( contingently) in the actual world. </p>\n\n<p>A logically implies B if and only if there is no logically possible case ( no possible world as logicians/philosophers say , or, no possible \"interpretation\") in which A is true and B is false. A logical implication is an implication that has truth-value ' T ' on every row of it's truth table. </p>\n\n<p>Remark : there is a weaker sense of \"logical implication\": necessary implication that, however, is not formal. for example I can correctly say that \" John is a pianist\" logically implies \" John is a musician\". The implication is necessary since there is no possible case in which a person is a pianist without being a musician. But this is not pure ( formal) logical implication, since the necessity depends on the meaning of the terms \" pianist\" and \" musician\". It is not true that any proposition of the form \" b is a P\" logically implies any proposition of the form \" b is a M\". 
</p>\n" }, { "answer_id": 2079066, "score": 4, "text": "<p>The term material comes from references to Alfred North Whitehead and Bertrand Russell's work (Russell, B. (1963). Principia Mathematica Volume 1. Cambridge, At the University Press.) He used terms such as molecular, elementary and atomic statements to describe structures in logic. The term which refers to statements that are at the bottom is atomic statements. They are what you would obtain if you kept expanding a statement until you could expand it no more. You have reached statements which have a value of either true or false. An example may help:</p>\n\n<p>Let say we have a few statements A and B. If A and B are atomic and we choose to rename the statement A AND B to C, we call C a molecular statement. C is labeled such because it is made up of two different atomic statements. </p>\n\n<p>If we have a different statement, say A|A , we may rename this to D. In this case D would be an elementary statement because only 1 type of atomic statement is used to make it up.</p>\n\n<p>A material statement is used to describe anything made up of molecular or elementary statements. </p>\n\n<p>Because atomic statements all have truth values; this means that a material statement has a truth value. They can be put onto truth tables. A material implication is an \"if\" statement that is made out of statements with truth values. This contrasts to a logical implication which is made out of other mathematical structures. These structures may not have truth values of their own.</p>\n" }, { "answer_id": 2079012, "score": 2, "text": "<p>Dictionary.com says that the origin of \"material implication\" is ca. 
1900.</p>\n\n<p>Thus a good answer to the question <em>why call it material implication</em> is quite likely somewhere in the works of</p>\n\n<ul> \n<li> Charles Sanders Peirce\n<li> Bertrand Russell\n<li> Ludwig Wittgenstein\n</ul>\n\n<p>(Particularly we should take a close look at early terminology of Peirce, Russell, Wittgenstein.)</p>\n\n<p>A starting point to get to the answer might be arXiv:1108.2429 \nby Irving H. Anellis (2011)</p>\n\n<blockquote>\n <p>The matrix for negation is Russell's, alongside of which is the matrix for material implication in the hand of Ludwig Wittgenstein. It is shown that an unpublished manuscript identified as composed by Peirce in 1893 includes a\n truth table matrix that is equivalent to the matrix for material implication discovered by John Shosky. An unpublished manuscript\n by Peirce identified as having been composed in 1883-84 in connection with the composition of Peirce’s \"On the Algebra of Logic: A Contribution to the Philosophy of Notation\" that appeared in the American Journal of Mathematics in 1885 includes an example of an indirect truth table for the conditional.</p>\n</blockquote>\n" }, { "answer_id": 3153908, "score": 2, "text": "<p>The adjective \" material\" in \" material implication\" refers to the content (i.e. \"matter\") , that is to the meaning, and more precisely to the denotation (reduced to truth value since Frege) of the propositions involved. It is distinguished from \"logical implication\" which, strictly speaking, is purely formal, that is, holds whatever the truth values of the atomic propositions involved might be. </p>\n\n<p>Remark : in ancient philosophy , \" content\" was identified to \"matter\" and \"matter\" was opposed to \" form\" </p>\n\n<p>Remark : according to Frege the \"meaning\" of a proposition has two aspects, (1) the \" sense\" ( conceptual aspect) and the (2) \" denotation\" ( or reference, which Frege identifies to truth-value). 
\"1²+ sqrt(1)= sqrt(4)\" has the same denotation as \" 1+1=2\" ( that is, the truth value True) , but the two propositions don't have the same sense ( they do not express the same thought). </p>\n\n<p>Remark: the terms \"material\" and \" formal\" were used in medieval logic to distinguish two kinds of \" consequences\". But the two distinctions are not equivalent, since in medieval logic, the relation involved in a \" consequance\" was always a necessary one. In modern logic, a material implication is by itself CONTINGENT (depending on the contingent truth values of the propositions) while logical implication is a NECESSARY relation. </p>\n\n<p>Reference: On this topic see: Lipschutz, Schaum's Outline Of Set theory and Related Topics, chapter 14. - Available at Archive.org. </p>\n\n<p>Example. Suppose someone asks you \" Does A imply (materially) (A&amp;B)\"? You wouldn't know what annswer to give, and would say \" that depends on the truth values A and B are given in the case you are considering\". Suppose now someone asks you: \" Does (A&amp;B) logically imply A?\". Here, without hesitation, and whithout even knowing the truth values of A and of B, you would answer \" Yes!\". </p>\n\n<p>Example : Lets's say that A is the proposition \" America is a democracy in 2019\" and that B is the proposition \" D. Trump is the President of the USA in 2019\". </p>\n\n<p>Since both A and B are true in the actual world ( although one of them could logically be false, or both), it is actually the case that A MATERIALLY implies B. ( This can be seen on the truth table of material implication). The only case in which A would not imply ( materially) B is the case in which A would be true while B would be false. But notice that this is contingent: it could logically be the case that the first were true, and the second false. </p>\n\n<p>Now, although A materially implies B, A does not LOGICALLY imply B. Logical implication is (1) material implication (2) WITH necessity. 
Lets's use the symbol ==> for logical implication, and the symbol --> for material implication. </p>\n\n<pre><code> [ A ==&gt; B ] if and only if [NECESSARILY ( A--&gt;B)]\n</code></pre>\n\n<p>that is </p>\n\n<pre><code> [ A ==&gt; B] if and only if [ there is no logically possible case, \n in which A is True and B is False] \n</code></pre>\n\n<p>Saying that \" USA is a democracy (in 2019)\" logically implies that \"D. trump is President of the USA in 2019\" means: \" Necessarily, if America is a democracy, then D. Trump is the President of the USA\", which is absolutely false. There is no logical impossibility in \" America is a democracy and D. trump is NOT the President of the USA\". </p>\n\n<p>In summary : \nA materially implies B , just if ( contingently) we are not in a case where A is true and B is false. Material implication depends on the truth values propositions have ( contingently) in the actual world. </p>\n\n<p>A logically implies B if and only if there is no logically possible case ( no possible world as logicians/philosophers say , or, no possible \"interpretation\") in which A is true and B is false. A logical implication is an implication that has truth-value ' T ' on every row of it's truth table. </p>\n\n<p>Remark : there is a weaker sense of \"logical implication\": necessary implication that, however, is not formal. for example I can correctly say that \" John is a pianist\" logically implies \" John is a musician\". The implication is necessary since there is no possible case in which a person is a pianist without being a musician. But this is not pure ( formal) logical implication, since the necessity depends on the meaning of the terms \" pianist\" and \" musician\". It is not true that any proposition of the form \" b is a P\" logically implies any proposition of the form \" b is a M\". 
</p>\n" }, { "answer_id": 2079066, "score": 4, "text": "<p>The term material comes from references to Alfred North Whitehead and Bertrand Russell's work (Russell, B. (1963). Principia Mathematica Volume 1. Cambridge, At the University Press.) He used terms such as molecular, elementary and atomic statements to describe structures in logic. The term which refers to statements that are at the bottom is atomic statements. They are what you would obtain if you kept expanding a statement until you could expand it no more. You have reached statements which have a value of either true or false. An example may help:</p>\n\n<p>Let say we have a few statements A and B. If A and B are atomic and we choose to rename the statement A AND B to C, we call C a molecular statement. C is labeled such because it is made up of two different atomic statements. </p>\n\n<p>If we have a different statement, say A|A , we may rename this to D. In this case D would be an elementary statement because only 1 type of atomic statement is used to make it up.</p>\n\n<p>A material statement is used to describe anything made up of molecular or elementary statements. </p>\n\n<p>Because atomic statements all have truth values; this means that a material statement has a truth value. They can be put onto truth tables. A material implication is an \"if\" statement that is made out of statements with truth values. This contrasts to a logical implication which is made out of other mathematical structures. These structures may not have truth values of their own.</p>\n" }, { "answer_id": 2079012, "score": 2, "text": "<p>Dictionary.com says that the origin of \"material implication\" is ca. 
1900.</p>\n\n<p>Thus a good answer to the question <em>why call it material implication</em> is quite likely somewhere in the works of</p>\n\n<ul> \n<li> Charles Sanders Peirce\n<li> Bertrand Russell\n<li> Ludwig Wittgenstein\n</ul>\n\n<p>(Particularly we should take a close look at early terminology of Peirce, Russell, Wittgenstein.)</p>\n\n<p>A starting point to get to the answer might be arXiv:1108.2429 \nby Irving H. Anellis (2011)</p>\n\n<blockquote>\n <p>The matrix for negation is Russell's, alongside of which is the matrix for material implication in the hand of Ludwig Wittgenstein. It is shown that an unpublished manuscript identified as composed by Peirce in 1893 includes a\n truth table matrix that is equivalent to the matrix for material implication discovered by John Shosky. An unpublished manuscript\n by Peirce identified as having been composed in 1883-84 in connection with the composition of Peirce’s \"On the Algebra of Logic: A Contribution to the Philosophy of Notation\" that appeared in the American Journal of Mathematics in 1885 includes an example of an indirect truth table for the conditional.</p>\n</blockquote>\n" }, { "answer_id": 3153908, "score": 2, "text": "<p>The adjective \" material\" in \" material implication\" refers to the content (i.e. \"matter\") , that is to the meaning, and more precisely to the denotation (reduced to truth value since Frege) of the propositions involved. It is distinguished from \"logical implication\" which, strictly speaking, is purely formal, that is, holds whatever the truth values of the atomic propositions involved might be. </p>\n\n<p>Remark : in ancient philosophy , \" content\" was identified to \"matter\" and \"matter\" was opposed to \" form\" </p>\n\n<p>Remark : according to Frege the \"meaning\" of a proposition has two aspects, (1) the \" sense\" ( conceptual aspect) and the (2) \" denotation\" ( or reference, which Frege identifies to truth-value). 
\"1²+ sqrt(1)= sqrt(4)\" has the same denotation as \" 1+1=2\" ( that is, the truth value True) , but the two propositions don't have the same sense ( they do not express the same thought). </p>\n\n<p>Remark: the terms \"material\" and \" formal\" were used in medieval logic to distinguish two kinds of \" consequences\". But the two distinctions are not equivalent, since in medieval logic, the relation involved in a \" consequance\" was always a necessary one. In modern logic, a material implication is by itself CONTINGENT (depending on the contingent truth values of the propositions) while logical implication is a NECESSARY relation. </p>\n\n<p>Reference: On this topic see: Lipschutz, Schaum's Outline Of Set theory and Related Topics, chapter 14. - Available at Archive.org. </p>\n\n<p>Example. Suppose someone asks you \" Does A imply (materially) (A&amp;B)\"? You wouldn't know what annswer to give, and would say \" that depends on the truth values A and B are given in the case you are considering\". Suppose now someone asks you: \" Does (A&amp;B) logically imply A?\". Here, without hesitation, and whithout even knowing the truth values of A and of B, you would answer \" Yes!\". </p>\n\n<p>Example : Lets's say that A is the proposition \" America is a democracy in 2019\" and that B is the proposition \" D. Trump is the President of the USA in 2019\". </p>\n\n<p>Since both A and B are true in the actual world ( although one of them could logically be false, or both), it is actually the case that A MATERIALLY implies B. ( This can be seen on the truth table of material implication). The only case in which A would not imply ( materially) B is the case in which A would be true while B would be false. But notice that this is contingent: it could logically be the case that the first were true, and the second false. </p>\n\n<p>Now, although A materially implies B, A does not LOGICALLY imply B. Logical implication is (1) material implication (2) WITH necessity. 
Lets's use the symbol ==> for logical implication, and the symbol --> for material implication. </p>\n\n<pre><code> [ A ==&gt; B ] if and only if [NECESSARILY ( A--&gt;B)]\n</code></pre>\n\n<p>that is </p>\n\n<pre><code> [ A ==&gt; B] if and only if [ there is no logically possible case, \n in which A is True and B is False] \n</code></pre>\n\n<p>Saying that \" USA is a democracy (in 2019)\" logically implies that \"D. trump is President of the USA in 2019\" means: \" Necessarily, if America is a democracy, then D. Trump is the President of the USA\", which is absolutely false. There is no logical impossibility in \" America is a democracy and D. trump is NOT the President of the USA\". </p>\n\n<p>In summary : \nA materially implies B , just if ( contingently) we are not in a case where A is true and B is false. Material implication depends on the truth values propositions have ( contingently) in the actual world. </p>\n\n<p>A logically implies B if and only if there is no logically possible case ( no possible world as logicians/philosophers say , or, no possible \"interpretation\") in which A is true and B is false. A logical implication is an implication that has truth-value ' T ' on every row of it's truth table. </p>\n\n<p>Remark : there is a weaker sense of \"logical implication\": necessary implication that, however, is not formal. for example I can correctly say that \" John is a pianist\" logically implies \" John is a musician\". The implication is necessary since there is no possible case in which a person is a pianist without being a musician. But this is not pure ( formal) logical implication, since the necessity depends on the meaning of the terms \" pianist\" and \" musician\". It is not true that any proposition of the form \" b is a P\" logically implies any proposition of the form \" b is a M\". 
</p>\n" }, { "answer_id": 2079066, "score": 4, "text": "<p>The term material comes from references to Alfred North Whitehead and Bertrand Russell's work (Russell, B. (1963). Principia Mathematica Volume 1. Cambridge, At the University Press.) Russell used terms such as molecular, elementary and atomic statements to describe structures in logic. The term for the statements at the bottom is atomic statements. They are what you would obtain if you kept expanding a statement until you could expand it no more. You have reached statements which have a value of either true or false. An example may help:</p>\n\n<p>Let's say we have two statements A and B. If A and B are atomic and we choose to rename the statement A AND B to C, we call C a molecular statement. C is labeled such because it is made up of two different atomic statements. </p>\n\n<p>If we have a different statement, say A|A, we may rename this to D. In this case D would be an elementary statement because only one type of atomic statement is used to make it up.</p>\n\n<p>A material statement is used to describe anything made up of molecular or elementary statements. </p>\n\n<p>Because atomic statements all have truth values, a material statement has a truth value. They can be put onto truth tables. A material implication is an \"if\" statement that is made out of statements with truth values. This contrasts with a logical implication which is made out of other mathematical structures. These structures may not have truth values of their own.</p>\n" }, { "answer_id": 2079012, "score": 2, "text": "<p>Dictionary.com says that the origin of \"material implication\" is ca. 
1900.</p>\n\n<p>Thus a good answer to the question <em>why call it material implication</em> is quite likely somewhere in the works of</p>\n\n<ul> \n<li> Charles Sanders Peirce\n<li> Bertrand Russell\n<li> Ludwig Wittgenstein\n</ul>\n\n<p>(Particularly we should take a close look at early terminology of Peirce, Russell, Wittgenstein.)</p>\n\n<p>A starting point to get to the answer might be arXiv:1108.2429 \nby Irving H. Anellis (2011)</p>\n\n<blockquote>\n <p>The matrix for negation is Russell's, alongside of which is the matrix for material implication in the hand of Ludwig Wittgenstein. It is shown that an unpublished manuscript identified as composed by Peirce in 1893 includes a\n truth table matrix that is equivalent to the matrix for material implication discovered by John Shosky. An unpublished manuscript\n by Peirce identified as having been composed in 1883-84 in connection with the composition of Peirce’s \"On the Algebra of Logic: A Contribution to the Philosophy of Notation\" that appeared in the American Journal of Mathematics in 1885 includes an example of an indirect truth table for the conditional.</p>\n</blockquote>\n" }
]
Question: The meaning of Material in Material Implication Answer 1: <p>The term material comes from references to Alfred North Whitehead and Bertrand Russell's work (Russell, B. (1963). Principia Mathematica Volume 1. Cambridge, At the University Press.) Russell used terms such as molecular, elementary and atomic statements to describe structures in logic. The term for the statements at the bottom is atomic statements. They are what you would obtain if you kept expanding a statement until you could expand it no more. You have reached statements which have a value of either true or false. An example may help:</p> <p>Let's say we have two statements A and B. If A and B are atomic and we choose to rename the statement A AND B to C, we call C a molecular statement. C is labeled such because it is made up of two different atomic statements. </p> <p>If we have a different statement, say A|A, we may rename this to D. In this case D would be an elementary statement because only one type of atomic statement is used to make it up.</p> <p>A material statement is used to describe anything made up of molecular or elementary statements. </p> <p>Because atomic statements all have truth values, a material statement has a truth value. They can be put onto truth tables. A material implication is an "if" statement that is made out of statements with truth values. This contrasts with a logical implication which is made out of other mathematical structures. These structures may not have truth values of their own.</p> Answer 2: <p>Dictionary.com says that the origin of "material implication" is ca. 
1900.</p> <p>Thus a good answer to the question <em>why call it material implication</em> is quite likely somewhere in the works of</p> <ul> <li> Charles Sanders Peirce <li> Bertrand Russell <li> Ludwig Wittgenstein </ul> <p>(Particularly we should take a close look at early terminology of Peirce, Russell, Wittgenstein.)</p> <p>A starting point to get to the answer might be arXiv:1108.2429 by Irving H. Anellis (2011)</p> <blockquote> <p>The matrix for negation is Russell's, alongside of which is the matrix for material implication in the hand of Ludwig Wittgenstein. It is shown that an unpublished manuscript identified as composed by Peirce in 1893 includes a truth table matrix that is equivalent to the matrix for material implication discovered by John Shosky. An unpublished manuscript by Peirce identified as having been composed in 1883-84 in connection with the composition of Peirce’s "On the Algebra of Logic: A Contribution to the Philosophy of Notation" that appeared in the American Journal of Mathematics in 1885 includes an example of an indirect truth table for the conditional.</p> </blockquote> Answer 3: <p>The adjective " material" in " material implication" refers to the content (i.e. "matter") , that is to the meaning, and more precisely to the denotation (reduced to truth value since Frege) of the propositions involved. It is distinguished from "logical implication" which, strictly speaking, is purely formal, that is, holds whatever the truth values of the atomic propositions involved might be. </p> <p>Remark : in ancient philosophy , " content" was identified to "matter" and "matter" was opposed to " form" </p> <p>Remark : according to Frege the "meaning" of a proposition has two aspects, (1) the " sense" ( conceptual aspect) and the (2) " denotation" ( or reference, which Frege identifies to truth-value). 
"1²+ sqrt(1)= sqrt(4)" has the same denotation as " 1+1=2" ( that is, the truth value True) , but the two propositions don't have the same sense ( they do not express the same thought). </p> <p>Remark: the terms "material" and " formal" were used in medieval logic to distinguish two kinds of " consequences". But the two distinctions are not equivalent, since in medieval logic, the relation involved in a " consequence" was always a necessary one. In modern logic, a material implication is by itself CONTINGENT (depending on the contingent truth values of the propositions) while logical implication is a NECESSARY relation. </p> <p>Reference: On this topic see: Lipschutz, Schaum's Outline Of Set theory and Related Topics, chapter 14. - Available at Archive.org. </p> <p>Example. Suppose someone asks you " Does A imply (materially) (A&amp;B)"? You wouldn't know what answer to give, and would say " that depends on the truth values A and B are given in the case you are considering". Suppose now someone asks you: " Does (A&amp;B) logically imply A?". Here, without hesitation, and without even knowing the truth values of A and of B, you would answer " Yes!". </p> <p>Example : Let's say that A is the proposition " America is a democracy in 2019" and that B is the proposition " D. Trump is the President of the USA in 2019". </p> <p>Since both A and B are true in the actual world ( although one of them could logically be false, or both), it is actually the case that A MATERIALLY implies B. ( This can be seen on the truth table of material implication). The only case in which A would not imply ( materially) B is the case in which A would be true while B would be false. But notice that this is contingent: it could logically be the case that the first were true, and the second false. </p> <p>Now, although A materially implies B, A does not LOGICALLY imply B. Logical implication is (1) material implication (2) WITH necessity. 
Let's use the symbol ==> for logical implication, and the symbol --> for material implication. </p> <pre><code> [ A ==&gt; B ] if and only if [NECESSARILY ( A--&gt;B)] </code></pre> <p>that is </p> <pre><code> [ A ==&gt; B] if and only if [ there is no logically possible case, in which A is True and B is False] </code></pre> <p>Saying that " USA is a democracy (in 2019)" logically implies that "D. Trump is President of the USA in 2019" means: " Necessarily, if America is a democracy, then D. Trump is the President of the USA", which is absolutely false. There is no logical impossibility in " America is a democracy and D. Trump is NOT the President of the USA". </p> <p>In summary : A materially implies B , just if ( contingently) we are not in a case where A is true and B is false. Material implication depends on the truth values propositions have ( contingently) in the actual world. </p> <p>A logically implies B if and only if there is no logically possible case ( no possible world as logicians/philosophers say , or, no possible "interpretation") in which A is true and B is false. A logical implication is an implication that has truth-value ' T ' on every row of its truth table. </p> <p>Remark : there is a weaker sense of "logical implication": necessary implication that, however, is not formal. For example, I can correctly say that " John is a pianist" logically implies " John is a musician". The implication is necessary since there is no possible case in which a person is a pianist without being a musician. But this is not pure ( formal) logical implication, since the necessity depends on the meaning of the terms " pianist" and " musician". It is not true that any proposition of the form " b is a P" logically implies any proposition of the form " b is a M". </p>
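The contrast between the two implications can be put directly on a small truth table. A minimal Python sketch (the function names `materially_implies` and `logically_implies` are mine, not taken from the answers): material implication is evaluated at one fixed valuation, while logical implication quantifies over every row of the truth table.

```python
from itertools import product

def materially_implies(a: bool, b: bool) -> bool:
    # Material implication A --> B at one fixed valuation:
    # false only in the single case A true, B false.
    return (not a) or b

def logically_implies(f, g, n_vars: int) -> bool:
    # Logical implication F ==> G: the material implication must
    # hold on *every* row of the truth table (every valuation).
    return all(materially_implies(f(*v), g(*v))
               for v in product([True, False], repeat=n_vars))

# (A & B) ==> A holds: no valuation makes A & B true while A is false.
print(logically_implies(lambda a, b: a and b, lambda a, b: a, 2))  # True

# A ==> (A & B) fails: the row A=True, B=False is a counterexample...
print(logically_implies(lambda a, b: a, lambda a, b: a and b, 2))  # False

# ...even though A --> (A & B) can still hold materially at a
# particular valuation, e.g. A=True, B=True:
print(materially_implies(True, True and True))  # True
```

On this encoding, `(A & B) ==> A` comes out necessary while `A ==> (A & B)` does not, matching the example discussed in the answers.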
He used terms such as molecular, elementary and atomic statements to describe structures in logic. The term which refers to statements that are at the bottom is atomic statements. They are what you would obtain if you kept expanding a statement until you could expand it no more. You have reached statements which have a value of either true or false. An example may help:</p> <p>Let say we have a few statements A and B. If A and B are atomic and we choose to rename the statement A AND B to C, we call C a molecular statement. C is labeled such because it is made up of two different atomic statements. </p> <p>If we have a different statement, say A|A , we may rename this to D. In this case D would be an elementary statement because only 1 type of atomic statement is used to make it up.</p> <p>A material statement is used to describe anything made up of molecular or elementary statements. </p> <p>Because atomic statements all have truth values; this means that a material statement has a truth value. They can be put onto truth tables. A material implication is an "if" statement that is made out of statements with truth values. This contrasts to a logical implication which is made out of other mathematical structures. These structures may not have truth values of their own.</p> Answer 5: <p>Dictionary.com says that the origin of "material implication" is ca. 1900.</p> <p>Thus a good answer to the question <em>why call it material implication</em> is quite likely somewhere in the works of</p> <ul> <li> Charles Sanders Peirce <li> Bertrand Russell <li> Ludwig Wittgenstein </ul> <p>(Particularly we should take a close look at early terminology of Peirce, Russell, Wittgenstein.)</p> <p>A starting point to get to the answer might be arXiv:1108.2429 by Irving H. Anellis (2011)</p> <blockquote> <p>The matrix for negation is Russell's, alongside of which is the matrix for material implication in the hand of Ludwig Wittgenstein. 
It is shown that an unpublished manuscript identified as composed by Peirce in 1893 includes a truth table matrix that is equivalent to the matrix for material implication discovered by John Shosky. An unpublished manuscript by Peirce identified as having been composed in 1883-84 in connection with the composition of Peirce’s "On the Algebra of Logic: A Contribution to the Philosophy of Notation" that appeared in the American Journal of Mathematics in 1885 includes an example of an indirect truth table for the conditional.</p> </blockquote> Answer 6: <p>The adjective " material" in " material implication" refers to the content (i.e. "matter") , that is to the meaning, and more precisely to the denotation (reduced to truth value since Frege) of the propositions involved. It is distinguished from "logical implication" which, strictly speaking, is purely formal, that is, holds whatever the truth values of the atomic propositions involved might be. </p> <p>Remark : in ancient philosophy , " content" was identified to "matter" and "matter" was opposed to " form" </p> <p>Remark : according to Frege the "meaning" of a proposition has two aspects, (1) the " sense" ( conceptual aspect) and the (2) " denotation" ( or reference, which Frege identifies to truth-value). "1²+ sqrt(1)= sqrt(4)" has the same denotation as " 1+1=2" ( that is, the truth value True) , but the two propositions don't have the same sense ( they do not express the same thought). </p> <p>Remark: the terms "material" and " formal" were used in medieval logic to distinguish two kinds of " consequences". But the two distinctions are not equivalent, since in medieval logic, the relation involved in a " consequance" was always a necessary one. In modern logic, a material implication is by itself CONTINGENT (depending on the contingent truth values of the propositions) while logical implication is a NECESSARY relation. 
</p> <p>Reference: On this topic see: Lipschutz, Schaum's Outline Of Set theory and Related Topics, chapter 14. - Available at Archive.org. </p> <p>Example. Suppose someone asks you " Does A imply (materially) (A&amp;B)"? You wouldn't know what annswer to give, and would say " that depends on the truth values A and B are given in the case you are considering". Suppose now someone asks you: " Does (A&amp;B) logically imply A?". Here, without hesitation, and whithout even knowing the truth values of A and of B, you would answer " Yes!". </p> <p>Example : Lets's say that A is the proposition " America is a democracy in 2019" and that B is the proposition " D. Trump is the President of the USA in 2019". </p> <p>Since both A and B are true in the actual world ( although one of them could logically be false, or both), it is actually the case that A MATERIALLY implies B. ( This can be seen on the truth table of material implication). The only case in which A would not imply ( materially) B is the case in which A would be true while B would be false. But notice that this is contingent: it could logically be the case that the first were true, and the second false. </p> <p>Now, although A materially implies B, A does not LOGICALLY imply B. Logical implication is (1) material implication (2) WITH necessity. Lets's use the symbol ==> for logical implication, and the symbol --> for material implication. </p> <pre><code> [ A ==&gt; B ] if and only if [NECESSARILY ( A--&gt;B)] </code></pre> <p>that is </p> <pre><code> [ A ==&gt; B] if and only if [ there is no logically possible case, in which A is True and B is False] </code></pre> <p>Saying that " USA is a democracy (in 2019)" logically implies that "D. trump is President of the USA in 2019" means: " Necessarily, if America is a democracy, then D. Trump is the President of the USA", which is absolutely false. There is no logical impossibility in " America is a democracy and D. trump is NOT the President of the USA". 
</p> <p>In summary : A materially implies B , just if ( contingently) we are not in a case where A is true and B is false. Material implication depends on the truth values propositions have ( contingently) in the actual world. </p> <p>A logically implies B if and only if there is no logically possible case ( no possible world as logicians/philosophers say , or, no possible "interpretation") in which A is true and B is false. A logical implication is an implication that has truth-value ' T ' on every row of it's truth table. </p> <p>Remark : there is a weaker sense of "logical implication": necessary implication that, however, is not formal. for example I can correctly say that " John is a pianist" logically implies " John is a musician". The implication is necessary since there is no possible case in which a person is a pianist without being a musician. But this is not pure ( formal) logical implication, since the necessity depends on the meaning of the terms " pianist" and " musician". It is not true that any proposition of the form " b is a P" logically implies any proposition of the form " b is a M". </p> Answer 7: <p>The term material comes from references to Alfred North Whitehead and Bertrand Russell's work (Russell, B. (1963). Principia Mathematica Volume 1. Cambridge, At the University Press.) He used terms such as molecular, elementary and atomic statements to describe structures in logic. The term which refers to statements that are at the bottom is atomic statements. They are what you would obtain if you kept expanding a statement until you could expand it no more. You have reached statements which have a value of either true or false. An example may help:</p> <p>Let say we have a few statements A and B. If A and B are atomic and we choose to rename the statement A AND B to C, we call C a molecular statement. C is labeled such because it is made up of two different atomic statements. 
</p> <p>If we have a different statement, say A|A , we may rename this to D. In this case D would be an elementary statement because only 1 type of atomic statement is used to make it up.</p> <p>A material statement is used to describe anything made up of molecular or elementary statements. </p> <p>Because atomic statements all have truth values; this means that a material statement has a truth value. They can be put onto truth tables. A material implication is an "if" statement that is made out of statements with truth values. This contrasts to a logical implication which is made out of other mathematical structures. These structures may not have truth values of their own.</p> Answer 8: <p>Dictionary.com says that the origin of "material implication" is ca. 1900.</p> <p>Thus a good answer to the question <em>why call it material implication</em> is quite likely somewhere in the works of</p> <ul> <li> Charles Sanders Peirce <li> Bertrand Russell <li> Ludwig Wittgenstein </ul> <p>(Particularly we should take a close look at early terminology of Peirce, Russell, Wittgenstein.)</p> <p>A starting point to get to the answer might be arXiv:1108.2429 by Irving H. Anellis (2011)</p> <blockquote> <p>The matrix for negation is Russell's, alongside of which is the matrix for material implication in the hand of Ludwig Wittgenstein. It is shown that an unpublished manuscript identified as composed by Peirce in 1893 includes a truth table matrix that is equivalent to the matrix for material implication discovered by John Shosky. An unpublished manuscript by Peirce identified as having been composed in 1883-84 in connection with the composition of Peirce’s "On the Algebra of Logic: A Contribution to the Philosophy of Notation" that appeared in the American Journal of Mathematics in 1885 includes an example of an indirect truth table for the conditional.</p> </blockquote> Answer 9: <p>The adjective " material" in " material implication" refers to the content (i.e. 
"matter") , that is to the meaning, and more precisely to the denotation (reduced to truth value since Frege) of the propositions involved. It is distinguished from "logical implication" which, strictly speaking, is purely formal, that is, holds whatever the truth values of the atomic propositions involved might be. </p> <p>Remark : in ancient philosophy , " content" was identified to "matter" and "matter" was opposed to " form" </p> <p>Remark : according to Frege the "meaning" of a proposition has two aspects, (1) the " sense" ( conceptual aspect) and the (2) " denotation" ( or reference, which Frege identifies to truth-value). "1²+ sqrt(1)= sqrt(4)" has the same denotation as " 1+1=2" ( that is, the truth value True) , but the two propositions don't have the same sense ( they do not express the same thought). </p> <p>Remark: the terms "material" and " formal" were used in medieval logic to distinguish two kinds of " consequences". But the two distinctions are not equivalent, since in medieval logic, the relation involved in a " consequance" was always a necessary one. In modern logic, a material implication is by itself CONTINGENT (depending on the contingent truth values of the propositions) while logical implication is a NECESSARY relation. </p> <p>Reference: On this topic see: Lipschutz, Schaum's Outline Of Set theory and Related Topics, chapter 14. - Available at Archive.org. </p> <p>Example. Suppose someone asks you " Does A imply (materially) (A&amp;B)"? You wouldn't know what annswer to give, and would say " that depends on the truth values A and B are given in the case you are considering". Suppose now someone asks you: " Does (A&amp;B) logically imply A?". Here, without hesitation, and whithout even knowing the truth values of A and of B, you would answer " Yes!". </p> <p>Example : Lets's say that A is the proposition " America is a democracy in 2019" and that B is the proposition " D. Trump is the President of the USA in 2019". 
</p> <p>Since both A and B are true in the actual world ( although one of them could logically be false, or both), it is actually the case that A MATERIALLY implies B. ( This can be seen on the truth table of material implication). The only case in which A would not imply ( materially) B is the case in which A would be true while B would be false. But notice that this is contingent: it could logically be the case that the first were true, and the second false. </p> <p>Now, although A materially implies B, A does not LOGICALLY imply B. Logical implication is (1) material implication (2) WITH necessity. Lets's use the symbol ==> for logical implication, and the symbol --> for material implication. </p> <pre><code> [ A ==&gt; B ] if and only if [NECESSARILY ( A--&gt;B)] </code></pre> <p>that is </p> <pre><code> [ A ==&gt; B] if and only if [ there is no logically possible case, in which A is True and B is False] </code></pre> <p>Saying that " USA is a democracy (in 2019)" logically implies that "D. trump is President of the USA in 2019" means: " Necessarily, if America is a democracy, then D. Trump is the President of the USA", which is absolutely false. There is no logical impossibility in " America is a democracy and D. trump is NOT the President of the USA". </p> <p>In summary : A materially implies B , just if ( contingently) we are not in a case where A is true and B is false. Material implication depends on the truth values propositions have ( contingently) in the actual world. </p> <p>A logically implies B if and only if there is no logically possible case ( no possible world as logicians/philosophers say , or, no possible "interpretation") in which A is true and B is false. A logical implication is an implication that has truth-value ' T ' on every row of it's truth table. </p> <p>Remark : there is a weaker sense of "logical implication": necessary implication that, however, is not formal. 
for example I can correctly say that " John is a pianist" logically implies " John is a musician". The implication is necessary since there is no possible case in which a person is a pianist without being a musician. But this is not pure ( formal) logical implication, since the necessity depends on the meaning of the terms " pianist" and " musician". It is not true that any proposition of the form " b is a P" logically implies any proposition of the form " b is a M". </p> Answer 10: <p>The term material comes from references to Alfred North Whitehead and Bertrand Russell's work (Russell, B. (1963). Principia Mathematica Volume 1. Cambridge, At the University Press.) He used terms such as molecular, elementary and atomic statements to describe structures in logic. The term which refers to statements that are at the bottom is atomic statements. They are what you would obtain if you kept expanding a statement until you could expand it no more. You have reached statements which have a value of either true or false. An example may help:</p> <p>Let say we have a few statements A and B. If A and B are atomic and we choose to rename the statement A AND B to C, we call C a molecular statement. C is labeled such because it is made up of two different atomic statements. </p> <p>If we have a different statement, say A|A , we may rename this to D. In this case D would be an elementary statement because only 1 type of atomic statement is used to make it up.</p> <p>A material statement is used to describe anything made up of molecular or elementary statements. </p> <p>Because atomic statements all have truth values; this means that a material statement has a truth value. They can be put onto truth tables. A material implication is an "if" statement that is made out of statements with truth values. This contrasts to a logical implication which is made out of other mathematical structures. 
These structures may not have truth values of their own.</p> Answer 11: <p>Dictionary.com says that the origin of "material implication" is ca. 1900.</p> <p>Thus a good answer to the question <em>why call it material implication</em> is quite likely somewhere in the works of</p> <ul> <li> Charles Sanders Peirce <li> Bertrand Russell <li> Ludwig Wittgenstein </ul> <p>(Particularly we should take a close look at early terminology of Peirce, Russell, Wittgenstein.)</p> <p>A starting point to get to the answer might be arXiv:1108.2429 by Irving H. Anellis (2011)</p> <blockquote> <p>The matrix for negation is Russell's, alongside of which is the matrix for material implication in the hand of Ludwig Wittgenstein. It is shown that an unpublished manuscript identified as composed by Peirce in 1893 includes a truth table matrix that is equivalent to the matrix for material implication discovered by John Shosky. An unpublished manuscript by Peirce identified as having been composed in 1883-84 in connection with the composition of Peirce’s "On the Algebra of Logic: A Contribution to the Philosophy of Notation" that appeared in the American Journal of Mathematics in 1885 includes an example of an indirect truth table for the conditional.</p> </blockquote> Answer 12: <p>The adjective " material" in " material implication" refers to the content (i.e. "matter") , that is to the meaning, and more precisely to the denotation (reduced to truth value since Frege) of the propositions involved. It is distinguished from "logical implication" which, strictly speaking, is purely formal, that is, holds whatever the truth values of the atomic propositions involved might be. </p> <p>Remark : in ancient philosophy , " content" was identified to "matter" and "matter" was opposed to " form" </p> <p>Remark : according to Frege the "meaning" of a proposition has two aspects, (1) the " sense" ( conceptual aspect) and the (2) " denotation" ( or reference, which Frege identifies to truth-value). 
"1²+ sqrt(1)= sqrt(4)" has the same denotation as "1+1=2" (that is, the truth value True), but the two propositions don't have the same sense (they do not express the same thought). </p> <p>Remark: the terms "material" and "formal" were used in medieval logic to distinguish two kinds of "consequences". But the two distinctions are not equivalent, since in medieval logic the relation involved in a "consequence" was always a necessary one. In modern logic, a material implication is by itself CONTINGENT (depending on the contingent truth values of the propositions), while logical implication is a NECESSARY relation. </p> <p>Reference: on this topic see Lipschutz, Schaum's Outline of Set Theory and Related Topics, chapter 14 (available at Archive.org). </p> <p>Example. Suppose someone asks you: "Does A imply (materially) (A&amp;B)?" You wouldn't know what answer to give, and would say "that depends on the truth values A and B are given in the case you are considering". Suppose now someone asks you: "Does (A&amp;B) logically imply A?" Here, without hesitation, and without even knowing the truth values of A and of B, you would answer "Yes!". </p> <p>Example: let's say that A is the proposition "America is a democracy in 2019" and that B is the proposition "D. Trump is the President of the USA in 2019".
</p> <p>Since both A and B are true in the actual world ( although one of them could logically be false, or both), it is actually the case that A MATERIALLY implies B. ( This can be seen on the truth table of material implication). The only case in which A would not imply ( materially) B is the case in which A would be true while B would be false. But notice that this is contingent: it could logically be the case that the first were true, and the second false. </p> <p>Now, although A materially implies B, A does not LOGICALLY imply B. Logical implication is (1) material implication (2) WITH necessity. Lets's use the symbol ==> for logical implication, and the symbol --> for material implication. </p> <pre><code> [ A ==&gt; B ] if and only if [NECESSARILY ( A--&gt;B)] </code></pre> <p>that is </p> <pre><code> [ A ==&gt; B] if and only if [ there is no logically possible case, in which A is True and B is False] </code></pre> <p>Saying that " USA is a democracy (in 2019)" logically implies that "D. trump is President of the USA in 2019" means: " Necessarily, if America is a democracy, then D. Trump is the President of the USA", which is absolutely false. There is no logical impossibility in " America is a democracy and D. trump is NOT the President of the USA". </p> <p>In summary : A materially implies B , just if ( contingently) we are not in a case where A is true and B is false. Material implication depends on the truth values propositions have ( contingently) in the actual world. </p> <p>A logically implies B if and only if there is no logically possible case ( no possible world as logicians/philosophers say , or, no possible "interpretation") in which A is true and B is false. A logical implication is an implication that has truth-value ' T ' on every row of it's truth table. </p> <p>Remark : there is a weaker sense of "logical implication": necessary implication that, however, is not formal. 
for example I can correctly say that " John is a pianist" logically implies " John is a musician". The implication is necessary since there is no possible case in which a person is a pianist without being a musician. But this is not pure ( formal) logical implication, since the necessity depends on the meaning of the terms " pianist" and " musician". It is not true that any proposition of the form " b is a P" logically implies any proposition of the form " b is a M". </p> Answer 28: <p>The term material comes from references to Alfred North Whitehead and Bertrand Russell's work (Russell, B. (1963). Principia Mathematica Volume 1. Cambridge, At the University Press.) He used terms such as molecular, elementary and atomic statements to describe structures in logic. The term which refers to statements that are at the bottom is atomic statements. They are what you would obtain if you kept expanding a statement until you could expand it no more. You have reached statements which have a value of either true or false. An example may help:</p> <p>Let say we have a few statements A and B. If A and B are atomic and we choose to rename the statement A AND B to C, we call C a molecular statement. C is labeled such because it is made up of two different atomic statements. </p> <p>If we have a different statement, say A|A , we may rename this to D. In this case D would be an elementary statement because only 1 type of atomic statement is used to make it up.</p> <p>A material statement is used to describe anything made up of molecular or elementary statements. </p> <p>Because atomic statements all have truth values; this means that a material statement has a truth value. They can be put onto truth tables. A material implication is an "if" statement that is made out of statements with truth values. This contrasts to a logical implication which is made out of other mathematical structures. 
These structures may not have truth values of their own.</p> Answer 29: <p>Dictionary.com says that the origin of "material implication" is ca. 1900.</p> <p>Thus a good answer to the question <em>why call it material implication</em> is quite likely somewhere in the works of</p> <ul> <li> Charles Sanders Peirce <li> Bertrand Russell <li> Ludwig Wittgenstein </ul> <p>(Particularly we should take a close look at early terminology of Peirce, Russell, Wittgenstein.)</p> <p>A starting point to get to the answer might be arXiv:1108.2429 by Irving H. Anellis (2011)</p> <blockquote> <p>The matrix for negation is Russell's, alongside of which is the matrix for material implication in the hand of Ludwig Wittgenstein. It is shown that an unpublished manuscript identified as composed by Peirce in 1893 includes a truth table matrix that is equivalent to the matrix for material implication discovered by John Shosky. An unpublished manuscript by Peirce identified as having been composed in 1883-84 in connection with the composition of Peirce’s "On the Algebra of Logic: A Contribution to the Philosophy of Notation" that appeared in the American Journal of Mathematics in 1885 includes an example of an indirect truth table for the conditional.</p> </blockquote> Answer 30: <p>The adjective " material" in " material implication" refers to the content (i.e. "matter") , that is to the meaning, and more precisely to the denotation (reduced to truth value since Frege) of the propositions involved. It is distinguished from "logical implication" which, strictly speaking, is purely formal, that is, holds whatever the truth values of the atomic propositions involved might be. </p> <p>Remark : in ancient philosophy , " content" was identified to "matter" and "matter" was opposed to " form" </p> <p>Remark : according to Frege the "meaning" of a proposition has two aspects, (1) the " sense" ( conceptual aspect) and the (2) " denotation" ( or reference, which Frege identifies to truth-value). 
"1²+ sqrt(1)= sqrt(4)" has the same denotation as " 1+1=2" ( that is, the truth value True) , but the two propositions don't have the same sense ( they do not express the same thought). </p> <p>Remark: the terms "material" and " formal" were used in medieval logic to distinguish two kinds of " consequences". But the two distinctions are not equivalent, since in medieval logic, the relation involved in a " consequance" was always a necessary one. In modern logic, a material implication is by itself CONTINGENT (depending on the contingent truth values of the propositions) while logical implication is a NECESSARY relation. </p> <p>Reference: On this topic see: Lipschutz, Schaum's Outline Of Set theory and Related Topics, chapter 14. - Available at Archive.org. </p> <p>Example. Suppose someone asks you " Does A imply (materially) (A&amp;B)"? You wouldn't know what annswer to give, and would say " that depends on the truth values A and B are given in the case you are considering". Suppose now someone asks you: " Does (A&amp;B) logically imply A?". Here, without hesitation, and whithout even knowing the truth values of A and of B, you would answer " Yes!". </p> <p>Example : Lets's say that A is the proposition " America is a democracy in 2019" and that B is the proposition " D. Trump is the President of the USA in 2019". </p> <p>Since both A and B are true in the actual world ( although one of them could logically be false, or both), it is actually the case that A MATERIALLY implies B. ( This can be seen on the truth table of material implication). The only case in which A would not imply ( materially) B is the case in which A would be true while B would be false. But notice that this is contingent: it could logically be the case that the first were true, and the second false. </p> <p>Now, although A materially implies B, A does not LOGICALLY imply B. Logical implication is (1) material implication (2) WITH necessity. 
Lets's use the symbol ==> for logical implication, and the symbol --> for material implication. </p> <pre><code> [ A ==&gt; B ] if and only if [NECESSARILY ( A--&gt;B)] </code></pre> <p>that is </p> <pre><code> [ A ==&gt; B] if and only if [ there is no logically possible case, in which A is True and B is False] </code></pre> <p>Saying that " USA is a democracy (in 2019)" logically implies that "D. trump is President of the USA in 2019" means: " Necessarily, if America is a democracy, then D. Trump is the President of the USA", which is absolutely false. There is no logical impossibility in " America is a democracy and D. trump is NOT the President of the USA". </p> <p>In summary : A materially implies B , just if ( contingently) we are not in a case where A is true and B is false. Material implication depends on the truth values propositions have ( contingently) in the actual world. </p> <p>A logically implies B if and only if there is no logically possible case ( no possible world as logicians/philosophers say , or, no possible "interpretation") in which A is true and B is false. A logical implication is an implication that has truth-value ' T ' on every row of it's truth table. </p> <p>Remark : there is a weaker sense of "logical implication": necessary implication that, however, is not formal. for example I can correctly say that " John is a pianist" logically implies " John is a musician". The implication is necessary since there is no possible case in which a person is a pianist without being a musician. But this is not pure ( formal) logical implication, since the necessity depends on the meaning of the terms " pianist" and " musician". It is not true that any proposition of the form " b is a P" logically implies any proposition of the form " b is a M". </p> Answer 31: <p>The term material comes from references to Alfred North Whitehead and Bertrand Russell's work (Russell, B. (1963). Principia Mathematica Volume 1. Cambridge, At the University Press.) 
He used terms such as molecular, elementary and atomic statements to describe structures in logic. The term which refers to statements that are at the bottom is atomic statements. They are what you would obtain if you kept expanding a statement until you could expand it no more. You have reached statements which have a value of either true or false. An example may help:</p> <p>Let say we have a few statements A and B. If A and B are atomic and we choose to rename the statement A AND B to C, we call C a molecular statement. C is labeled such because it is made up of two different atomic statements. </p> <p>If we have a different statement, say A|A , we may rename this to D. In this case D would be an elementary statement because only 1 type of atomic statement is used to make it up.</p> <p>A material statement is used to describe anything made up of molecular or elementary statements. </p> <p>Because atomic statements all have truth values; this means that a material statement has a truth value. They can be put onto truth tables. A material implication is an "if" statement that is made out of statements with truth values. This contrasts to a logical implication which is made out of other mathematical structures. These structures may not have truth values of their own.</p> Answer 32: <p>Dictionary.com says that the origin of "material implication" is ca. 1900.</p> <p>Thus a good answer to the question <em>why call it material implication</em> is quite likely somewhere in the works of</p> <ul> <li> Charles Sanders Peirce <li> Bertrand Russell <li> Ludwig Wittgenstein </ul> <p>(Particularly we should take a close look at early terminology of Peirce, Russell, Wittgenstein.)</p> <p>A starting point to get to the answer might be arXiv:1108.2429 by Irving H. Anellis (2011)</p> <blockquote> <p>The matrix for negation is Russell's, alongside of which is the matrix for material implication in the hand of Ludwig Wittgenstein. 
It is shown that an unpublished manuscript identified as composed by Peirce in 1893 includes a truth table matrix that is equivalent to the matrix for material implication discovered by John Shosky. An unpublished manuscript by Peirce identified as having been composed in 1883-84 in connection with the composition of Peirce’s "On the Algebra of Logic: A Contribution to the Philosophy of Notation" that appeared in the American Journal of Mathematics in 1885 includes an example of an indirect truth table for the conditional.</p> </blockquote> Answer 33: <p>The adjective " material" in " material implication" refers to the content (i.e. "matter") , that is to the meaning, and more precisely to the denotation (reduced to truth value since Frege) of the propositions involved. It is distinguished from "logical implication" which, strictly speaking, is purely formal, that is, holds whatever the truth values of the atomic propositions involved might be. </p> <p>Remark : in ancient philosophy , " content" was identified to "matter" and "matter" was opposed to " form" </p> <p>Remark : according to Frege the "meaning" of a proposition has two aspects, (1) the " sense" ( conceptual aspect) and the (2) " denotation" ( or reference, which Frege identifies to truth-value). "1²+ sqrt(1)= sqrt(4)" has the same denotation as " 1+1=2" ( that is, the truth value True) , but the two propositions don't have the same sense ( they do not express the same thought). </p> <p>Remark: the terms "material" and " formal" were used in medieval logic to distinguish two kinds of " consequences". But the two distinctions are not equivalent, since in medieval logic, the relation involved in a " consequance" was always a necessary one. In modern logic, a material implication is by itself CONTINGENT (depending on the contingent truth values of the propositions) while logical implication is a NECESSARY relation. 
</p> <p>Reference: On this topic see: Lipschutz, Schaum's Outline Of Set theory and Related Topics, chapter 14. - Available at Archive.org. </p> <p>Example. Suppose someone asks you " Does A imply (materially) (A&amp;B)"? You wouldn't know what annswer to give, and would say " that depends on the truth values A and B are given in the case you are considering". Suppose now someone asks you: " Does (A&amp;B) logically imply A?". Here, without hesitation, and whithout even knowing the truth values of A and of B, you would answer " Yes!". </p> <p>Example : Lets's say that A is the proposition " America is a democracy in 2019" and that B is the proposition " D. Trump is the President of the USA in 2019". </p> <p>Since both A and B are true in the actual world ( although one of them could logically be false, or both), it is actually the case that A MATERIALLY implies B. ( This can be seen on the truth table of material implication). The only case in which A would not imply ( materially) B is the case in which A would be true while B would be false. But notice that this is contingent: it could logically be the case that the first were true, and the second false. </p> <p>Now, although A materially implies B, A does not LOGICALLY imply B. Logical implication is (1) material implication (2) WITH necessity. Lets's use the symbol ==> for logical implication, and the symbol --> for material implication. </p> <pre><code> [ A ==&gt; B ] if and only if [NECESSARILY ( A--&gt;B)] </code></pre> <p>that is </p> <pre><code> [ A ==&gt; B] if and only if [ there is no logically possible case, in which A is True and B is False] </code></pre> <p>Saying that " USA is a democracy (in 2019)" logically implies that "D. trump is President of the USA in 2019" means: " Necessarily, if America is a democracy, then D. Trump is the President of the USA", which is absolutely false. There is no logical impossibility in " America is a democracy and D. trump is NOT the President of the USA". 
</p> <p>In summary : A materially implies B , just if ( contingently) we are not in a case where A is true and B is false. Material implication depends on the truth values propositions have ( contingently) in the actual world. </p> <p>A logically implies B if and only if there is no logically possible case ( no possible world as logicians/philosophers say , or, no possible "interpretation") in which A is true and B is false. A logical implication is an implication that has truth-value ' T ' on every row of it's truth table. </p> <p>Remark : there is a weaker sense of "logical implication": necessary implication that, however, is not formal. for example I can correctly say that " John is a pianist" logically implies " John is a musician". The implication is necessary since there is no possible case in which a person is a pianist without being a musician. But this is not pure ( formal) logical implication, since the necessity depends on the meaning of the terms " pianist" and " musician". It is not true that any proposition of the form " b is a P" logically implies any proposition of the form " b is a M". </p> Answer 34: <p>The term material comes from references to Alfred North Whitehead and Bertrand Russell's work (Russell, B. (1963). Principia Mathematica Volume 1. Cambridge, At the University Press.) He used terms such as molecular, elementary and atomic statements to describe structures in logic. The term which refers to statements that are at the bottom is atomic statements. They are what you would obtain if you kept expanding a statement until you could expand it no more. You have reached statements which have a value of either true or false. An example may help:</p> <p>Let say we have a few statements A and B. If A and B are atomic and we choose to rename the statement A AND B to C, we call C a molecular statement. C is labeled such because it is made up of two different atomic statements. 
</p> <p>If we have a different statement, say A|A , we may rename this to D. In this case D would be an elementary statement because only 1 type of atomic statement is used to make it up.</p> <p>A material statement is used to describe anything made up of molecular or elementary statements. </p> <p>Because atomic statements all have truth values; this means that a material statement has a truth value. They can be put onto truth tables. A material implication is an "if" statement that is made out of statements with truth values. This contrasts to a logical implication which is made out of other mathematical structures. These structures may not have truth values of their own.</p> Answer 35: <p>Dictionary.com says that the origin of "material implication" is ca. 1900.</p> <p>Thus a good answer to the question <em>why call it material implication</em> is quite likely somewhere in the works of</p> <ul> <li> Charles Sanders Peirce <li> Bertrand Russell <li> Ludwig Wittgenstein </ul> <p>(Particularly we should take a close look at early terminology of Peirce, Russell, Wittgenstein.)</p> <p>A starting point to get to the answer might be arXiv:1108.2429 by Irving H. Anellis (2011)</p> <blockquote> <p>The matrix for negation is Russell's, alongside of which is the matrix for material implication in the hand of Ludwig Wittgenstein. It is shown that an unpublished manuscript identified as composed by Peirce in 1893 includes a truth table matrix that is equivalent to the matrix for material implication discovered by John Shosky. An unpublished manuscript by Peirce identified as having been composed in 1883-84 in connection with the composition of Peirce’s "On the Algebra of Logic: A Contribution to the Philosophy of Notation" that appeared in the American Journal of Mathematics in 1885 includes an example of an indirect truth table for the conditional.</p> </blockquote> Answer 36: <p>The adjective " material" in " material implication" refers to the content (i.e. 
"matter") , that is to the meaning, and more precisely to the denotation (reduced to truth value since Frege) of the propositions involved. It is distinguished from "logical implication" which, strictly speaking, is purely formal, that is, holds whatever the truth values of the atomic propositions involved might be. </p> <p>Remark : in ancient philosophy , " content" was identified to "matter" and "matter" was opposed to " form" </p> <p>Remark : according to Frege the "meaning" of a proposition has two aspects, (1) the " sense" ( conceptual aspect) and the (2) " denotation" ( or reference, which Frege identifies to truth-value). "1²+ sqrt(1)= sqrt(4)" has the same denotation as " 1+1=2" ( that is, the truth value True) , but the two propositions don't have the same sense ( they do not express the same thought). </p> <p>Remark: the terms "material" and " formal" were used in medieval logic to distinguish two kinds of " consequences". But the two distinctions are not equivalent, since in medieval logic, the relation involved in a " consequance" was always a necessary one. In modern logic, a material implication is by itself CONTINGENT (depending on the contingent truth values of the propositions) while logical implication is a NECESSARY relation. </p> <p>Reference: On this topic see: Lipschutz, Schaum's Outline Of Set theory and Related Topics, chapter 14. - Available at Archive.org. </p> <p>Example. Suppose someone asks you " Does A imply (materially) (A&amp;B)"? You wouldn't know what annswer to give, and would say " that depends on the truth values A and B are given in the case you are considering". Suppose now someone asks you: " Does (A&amp;B) logically imply A?". Here, without hesitation, and whithout even knowing the truth values of A and of B, you would answer " Yes!". </p> <p>Example : Lets's say that A is the proposition " America is a democracy in 2019" and that B is the proposition " D. Trump is the President of the USA in 2019". 
</p> <p>Since both A and B are true in the actual world ( although one of them could logically be false, or both), it is actually the case that A MATERIALLY implies B. ( This can be seen on the truth table of material implication). The only case in which A would not imply ( materially) B is the case in which A would be true while B would be false. But notice that this is contingent: it could logically be the case that the first were true, and the second false. </p> <p>Now, although A materially implies B, A does not LOGICALLY imply B. Logical implication is (1) material implication (2) WITH necessity. Lets's use the symbol ==> for logical implication, and the symbol --> for material implication. </p> <pre><code> [ A ==&gt; B ] if and only if [NECESSARILY ( A--&gt;B)] </code></pre> <p>that is </p> <pre><code> [ A ==&gt; B] if and only if [ there is no logically possible case, in which A is True and B is False] </code></pre> <p>Saying that " USA is a democracy (in 2019)" logically implies that "D. trump is President of the USA in 2019" means: " Necessarily, if America is a democracy, then D. Trump is the President of the USA", which is absolutely false. There is no logical impossibility in " America is a democracy and D. trump is NOT the President of the USA". </p> <p>In summary : A materially implies B , just if ( contingently) we are not in a case where A is true and B is false. Material implication depends on the truth values propositions have ( contingently) in the actual world. </p> <p>A logically implies B if and only if there is no logically possible case ( no possible world as logicians/philosophers say , or, no possible "interpretation") in which A is true and B is false. A logical implication is an implication that has truth-value ' T ' on every row of it's truth table. </p> <p>Remark : there is a weaker sense of "logical implication": necessary implication that, however, is not formal. 
for example I can correctly say that " John is a pianist" logically implies " John is a musician". The implication is necessary since there is no possible case in which a person is a pianist without being a musician. But this is not pure ( formal) logical implication, since the necessity depends on the meaning of the terms " pianist" and " musician". It is not true that any proposition of the form " b is a P" logically implies any proposition of the form " b is a M". </p> Answer 37: <p>The term material comes from references to Alfred North Whitehead and Bertrand Russell's work (Russell, B. (1963). Principia Mathematica Volume 1. Cambridge, At the University Press.) He used terms such as molecular, elementary and atomic statements to describe structures in logic. The term which refers to statements that are at the bottom is atomic statements. They are what you would obtain if you kept expanding a statement until you could expand it no more. You have reached statements which have a value of either true or false. An example may help:</p> <p>Let say we have a few statements A and B. If A and B are atomic and we choose to rename the statement A AND B to C, we call C a molecular statement. C is labeled such because it is made up of two different atomic statements. </p> <p>If we have a different statement, say A|A , we may rename this to D. In this case D would be an elementary statement because only 1 type of atomic statement is used to make it up.</p> <p>A material statement is used to describe anything made up of molecular or elementary statements. </p> <p>Because atomic statements all have truth values; this means that a material statement has a truth value. They can be put onto truth tables. A material implication is an "if" statement that is made out of statements with truth values. This contrasts to a logical implication which is made out of other mathematical structures. 
These structures may not have truth values of their own.</p> Answer 38: <p>Dictionary.com says that the origin of "material implication" is ca. 1900.</p> <p>Thus a good answer to the question <em>why call it material implication</em> is quite likely somewhere in the works of</p> <ul> <li> Charles Sanders Peirce <li> Bertrand Russell <li> Ludwig Wittgenstein </ul> <p>(Particularly we should take a close look at early terminology of Peirce, Russell, Wittgenstein.)</p> <p>A starting point to get to the answer might be arXiv:1108.2429 by Irving H. Anellis (2011)</p> <blockquote> <p>The matrix for negation is Russell's, alongside of which is the matrix for material implication in the hand of Ludwig Wittgenstein. It is shown that an unpublished manuscript identified as composed by Peirce in 1893 includes a truth table matrix that is equivalent to the matrix for material implication discovered by John Shosky. An unpublished manuscript by Peirce identified as having been composed in 1883-84 in connection with the composition of Peirce’s "On the Algebra of Logic: A Contribution to the Philosophy of Notation" that appeared in the American Journal of Mathematics in 1885 includes an example of an indirect truth table for the conditional.</p> </blockquote> Answer 39: <p>The adjective " material" in " material implication" refers to the content (i.e. "matter") , that is to the meaning, and more precisely to the denotation (reduced to truth value since Frege) of the propositions involved. It is distinguished from "logical implication" which, strictly speaking, is purely formal, that is, holds whatever the truth values of the atomic propositions involved might be. </p> <p>Remark : in ancient philosophy , " content" was identified to "matter" and "matter" was opposed to " form" </p> <p>Remark : according to Frege the "meaning" of a proposition has two aspects, (1) the " sense" ( conceptual aspect) and the (2) " denotation" ( or reference, which Frege identifies to truth-value). 
"1²+ sqrt(1)= sqrt(4)" has the same denotation as " 1+1=2" ( that is, the truth value True) , but the two propositions don't have the same sense ( they do not express the same thought). </p> <p>Remark: the terms "material" and " formal" were used in medieval logic to distinguish two kinds of " consequences". But the two distinctions are not equivalent, since in medieval logic, the relation involved in a " consequance" was always a necessary one. In modern logic, a material implication is by itself CONTINGENT (depending on the contingent truth values of the propositions) while logical implication is a NECESSARY relation. </p> <p>Reference: On this topic see: Lipschutz, Schaum's Outline Of Set theory and Related Topics, chapter 14. - Available at Archive.org. </p> <p>Example. Suppose someone asks you " Does A imply (materially) (A&amp;B)"? You wouldn't know what annswer to give, and would say " that depends on the truth values A and B are given in the case you are considering". Suppose now someone asks you: " Does (A&amp;B) logically imply A?". Here, without hesitation, and whithout even knowing the truth values of A and of B, you would answer " Yes!". </p> <p>Example : Lets's say that A is the proposition " America is a democracy in 2019" and that B is the proposition " D. Trump is the President of the USA in 2019". </p> <p>Since both A and B are true in the actual world ( although one of them could logically be false, or both), it is actually the case that A MATERIALLY implies B. ( This can be seen on the truth table of material implication). The only case in which A would not imply ( materially) B is the case in which A would be true while B would be false. But notice that this is contingent: it could logically be the case that the first were true, and the second false. </p> <p>Now, although A materially implies B, A does not LOGICALLY imply B. Logical implication is (1) material implication (2) WITH necessity. 
Let's use the symbol ==> for logical implication, and the symbol --> for material implication. </p> <pre><code> [ A ==&gt; B ] if and only if [NECESSARILY ( A--&gt;B)] </code></pre> <p>that is </p> <pre><code> [ A ==&gt; B] if and only if [ there is no logically possible case, in which A is True and B is False] </code></pre> <p>Saying that " USA is a democracy (in 2019)" logically implies that "D. Trump is President of the USA in 2019" means: " Necessarily, if America is a democracy, then D. Trump is the President of the USA", which is absolutely false. There is no logical impossibility in " America is a democracy and D. Trump is NOT the President of the USA". </p> <p>In summary : A materially implies B , just if ( contingently) we are not in a case where A is true and B is false. Material implication depends on the truth values propositions have ( contingently) in the actual world. </p> <p>A logically implies B if and only if there is no logically possible case ( no possible world as logicians/philosophers say , or, no possible "interpretation") in which A is true and B is false. A logical implication is an implication that has truth-value ' T ' on every row of its truth table. </p> <p>Remark : there is a weaker sense of "logical implication": necessary implication that, however, is not formal. For example I can correctly say that " John is a pianist" logically implies " John is a musician". The implication is necessary since there is no possible case in which a person is a pianist without being a musician. But this is not pure ( formal) logical implication, since the necessity depends on the meaning of the terms " pianist" and " musician". It is not true that any proposition of the form " b is a P" logically implies any proposition of the form " b is a M". </p> Answer 40: <p>The term material comes from references to Alfred North Whitehead and Bertrand Russell's work (Russell, B. (1963). Principia Mathematica Volume 1. Cambridge, At the University Press.)
He used terms such as molecular, elementary and atomic statements to describe structures in logic. The term which refers to statements that are at the bottom is atomic statements. They are what you would obtain if you kept expanding a statement until you could expand it no more. You have reached statements which have a value of either true or false. An example may help:</p> <p>Let say we have a few statements A and B. If A and B are atomic and we choose to rename the statement A AND B to C, we call C a molecular statement. C is labeled such because it is made up of two different atomic statements. </p> <p>If we have a different statement, say A|A , we may rename this to D. In this case D would be an elementary statement because only 1 type of atomic statement is used to make it up.</p> <p>A material statement is used to describe anything made up of molecular or elementary statements. </p> <p>Because atomic statements all have truth values; this means that a material statement has a truth value. They can be put onto truth tables. A material implication is an "if" statement that is made out of statements with truth values. This contrasts to a logical implication which is made out of other mathematical structures. These structures may not have truth values of their own.</p>
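The contrast drawn in these answers — a material implication holds or fails depending on the actual truth values, while a logical implication must hold on every row of the truth table — can be checked mechanically. A minimal sketch in Python (the helper name `implies` is illustrative, not taken from any answer):

```python
from itertools import product

def implies(a, b):
    # Material implication: false only when a is true and b is false.
    return (not a) or b

rows = list(product([False, True], repeat=2))

# (A & B) logically implies A: the conditional (A and B) -> A is true
# on every row of the truth table, so the implication is necessary.
assert all(implies(a and b, a) for a, b in rows)

# A does NOT logically imply (A & B): the conditional fails on the row
# where A is true and B is false, so any truth it has is contingent.
assert not all(implies(a, a and b) for a, b in rows)
```

Running the block silently passes both assertions, mirroring the example in Answer 39: whether A implies (A&B) "depends on the truth values", but (A&B) implies A in every possible case.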
Fancy slash thing?
2,079,062
7
https://math.stackexchange.com/questions/2079062/fancy-slash-thing
<p>I'm an early high school student so I haven't yet encountered the symbol I am looking for in school, but what is the symbol that looks kind of like a big " / " but with curled ends called? Sometimes it has numbers/variables above or below it. I just need to know what it's called so I can research it on my own. I think it's used in calculus... maybe?</p>
[ { "answer_id": 2079076, "score": 11, "text": "<p>The symbol $\\int$ is known as the <strong>integral sign</strong>. It's used in calculus to denote integration:</p>\n\n<p>$$\\int 2x \\, dx = x^2 + C$$</p>\n\n<p>$$\\int_0^2 x \\, dx = 2$$</p>\n\n<p>$$\\int_\\mathbb{R} \\phi(x) \\, dx = 1$$</p>\n\n<p>As you can see, it's always followed by a function, and then a differential like $dx$. This indicates which variable is being used in the integration; for instance, $\\int f(x,y) \\,dx \\neq \\int f(x,y) \\,dy$, but $\\int f(x) \\,dx = \\int f(t) \\,dt$.</p>\n\n<p>Without numbers/variables, it represents the indefinite integral (which is another function); with two limits or with one set as superscripts or subscripts, it denotes the definite integral (which is a number).</p>\n\n<p>You'll see multiple integral signs $\\iint$ for an integral with multiple differentials,</p>\n\n<p>$$\\iint f(x,y) \\,dy \\,dx,$$</p>\n\n<p>or a closed integral sign $\\oint$ (not sure if that's the correct name) for an integral over a closed curve, surface, etc. Don't expect to see either of those in high school calculus.</p>\n" }, { "answer_id": 2079132, "score": 4, "text": "<p>It means the area under a graph (or, rather, between the graph and the $x$-axis).</p>\n\n<p>For example, the area between the graph of $y=1/(1+x^2)$ and the $x$-axis would be written like this:$$\\int_{-\\infty}^\\infty\\frac1{1+x^2}dx$$You can <a href=\"https://desmos.com/calculator\" rel=\"nofollow noreferrer\">graph it here</a>. It can be shown that this is equal to $\\pi$!</p>\n\n<p>On the other hand:\n$$\\int_{-1}^1\\frac1{1+x^2}dx$$\ncan be shown to be equal to only $\\pi/2$. Here, the $-1$ and $1$ mean we only consider a portion of the area, between the vertical lines $x=-1$ and $x=1$.</p>\n\n<p>From a physics standpoint, if $v(t)$ is the velocity of a particle in terms of time (which may change over time, e.g. 
if the particle speeds up or slows down), it can be shown that $\\int_a^bv(t)dt$ is equal to the distance between where the particle is at time $t=a$ and where it is at time $t=b$. This, and results like it, make integrals incredibly useful for physics and engineering.</p>\n\n<p>The most powerful tool for computing integrals is the <em>Fundamental Theorem of Calculus</em>, which relates integrals to something called \"derivatives,\" which you learn more about in calculus.</p>\n" }, { "answer_id": 2079076, "score": 11, "text": "<p>The symbol $\\int$ is known as the <strong>integral sign</strong>. It's used in calculus to denote integration:</p>\n\n<p>$$\\int 2x \\, dx = x^2 + C$$</p>\n\n<p>$$\\int_0^2 x \\, dx = 2$$</p>\n\n<p>$$\\int_\\mathbb{R} \\phi(x) \\, dx = 1$$</p>\n\n<p>As you can see, it's always followed by a function, and then a differential like $dx$. This indicates which variable is being used in the integration; for instance, $\\int f(x,y) \\,dx \\neq \\int f(x,y) \\,dy$, but $\\int f(x) \\,dx = \\int f(t) \\,dt$.</p>\n\n<p>Without numbers/variables, it represents the indefinite integral (which is another function); with two limits or with one set as superscripts or subscripts, it denotes the definite integral (which is a number).</p>\n\n<p>You'll see multiple integral signs $\\iint$ for an integral with multiple differentials,</p>\n\n<p>$$\\iint f(x,y) \\,dy \\,dx,$$</p>\n\n<p>or a closed integral sign $\\oint$ (not sure if that's the correct name) for an integral over a closed curve, surface, etc. Don't expect to see either of those in high school calculus.</p>\n" }, { "answer_id": 2079132, "score": 4, "text": "<p>It means the area under a graph (or, rather, between the graph and the $x$-axis).</p>\n\n<p>For example, the area between the graph of $y=1/(1+x^2)$ and the $x$-axis would be written like this:$$\\int_{-\\infty}^\\infty\\frac1{1+x^2}dx$$You can <a href=\"https://desmos.com/calculator\" rel=\"nofollow noreferrer\">graph it here</a>. 
It can be shown that this is equal to $\\pi$!</p>\n\n<p>On the other hand:\n$$\\int_{-1}^1\\frac1{1+x^2}dx$$\ncan be shown to be equal to only $\\pi/2$. Here, the $-1$ and $1$ mean we only consider a portion of the area, between the vertical lines $x=-1$ and $x=1$.</p>\n\n<p>From a physics standpoint, if $v(t)$ is the velocity of a particle in terms of time (which may change over time, e.g. if the particle speeds up or slows down), it can be shown that $\\int_a^bv(t)dt$ is equal to the distance between where the particle is at time $t=a$ and where it is at time $t=b$. This, and results like it, make integrals incredibly useful for physics and engineering.</p>\n\n<p>The most powerful tool for computing integrals is the <em>Fundamental Theorem of Calculus</em>, which relates integrals to something called \"derivatives,\" which you learn more about in calculus.</p>\n" }, { "answer_id": 2079076, "score": 11, "text": "<p>The symbol $\\int$ is known as the <strong>integral sign</strong>. It's used in calculus to denote integration:</p>\n\n<p>$$\\int 2x \\, dx = x^2 + C$$</p>\n\n<p>$$\\int_0^2 x \\, dx = 2$$</p>\n\n<p>$$\\int_\\mathbb{R} \\phi(x) \\, dx = 1$$</p>\n\n<p>As you can see, it's always followed by a function, and then a differential like $dx$. This indicates which variable is being used in the integration; for instance, $\\int f(x,y) \\,dx \\neq \\int f(x,y) \\,dy$, but $\\int f(x) \\,dx = \\int f(t) \\,dt$.</p>\n\n<p>Without numbers/variables, it represents the indefinite integral (which is another function); with two limits or with one set as superscripts or subscripts, it denotes the definite integral (which is a number).</p>\n\n<p>You'll see multiple integral signs $\\iint$ for an integral with multiple differentials,</p>\n\n<p>$$\\iint f(x,y) \\,dy \\,dx,$$</p>\n\n<p>or a closed integral sign $\\oint$ (not sure if that's the correct name) for an integral over a closed curve, surface, etc. 
Don't expect to see either of those in high school calculus.</p>\n" }, { "answer_id": 2079132, "score": 4, "text": "<p>It means the area under a graph (or, rather, between the graph and the $x$-axis).</p>\n\n<p>For example, the area between the graph of $y=1/(1+x^2)$ and the $x$-axis would be written like this:$$\\int_{-\\infty}^\\infty\\frac1{1+x^2}dx$$You can <a href=\"https://desmos.com/calculator\" rel=\"nofollow noreferrer\">graph it here</a>. It can be shown that this is equal to $\\pi$!</p>\n\n<p>On the other hand:\n$$\\int_{-1}^1\\frac1{1+x^2}dx$$\ncan be shown to be equal to only $\\pi/2$. Here, the $-1$ and $1$ mean we only consider a portion of the area, between the vertical lines $x=-1$ and $x=1$.</p>\n\n<p>From a physics standpoint, if $v(t)$ is the velocity of a particle in terms of time (which may change over time, e.g. if the particle speeds up or slows down), it can be shown that $\\int_a^bv(t)dt$ is equal to the distance between where the particle is at time $t=a$ and where it is at time $t=b$. This, and results like it, make integrals incredibly useful for physics and engineering.</p>\n\n<p>The most powerful tool for computing integrals is the <em>Fundamental Theorem of Calculus</em>, which relates integrals to something called \"derivatives,\" which you learn more about in calculus.</p>\n" }, { "answer_id": 2079076, "score": 11, "text": "<p>The symbol $\\int$ is known as the <strong>integral sign</strong>. It's used in calculus to denote integration:</p>\n\n<p>$$\\int 2x \\, dx = x^2 + C$$</p>\n\n<p>$$\\int_0^2 x \\, dx = 2$$</p>\n\n<p>$$\\int_\\mathbb{R} \\phi(x) \\, dx = 1$$</p>\n\n<p>As you can see, it's always followed by a function, and then a differential like $dx$. 
This indicates which variable is being used in the integration; for instance, $\\int f(x,y) \\,dx \\neq \\int f(x,y) \\,dy$, but $\\int f(x) \\,dx = \\int f(t) \\,dt$.</p>\n\n<p>Without numbers/variables, it represents the indefinite integral (which is another function); with two limits or with one set as superscripts or subscripts, it denotes the definite integral (which is a number).</p>\n\n<p>You'll see multiple integral signs $\\iint$ for an integral with multiple differentials,</p>\n\n<p>$$\\iint f(x,y) \\,dy \\,dx,$$</p>\n\n<p>or a closed integral sign $\\oint$ (not sure if that's the correct name) for an integral over a closed curve, surface, etc. Don't expect to see either of those in high school calculus.</p>\n" }, { "answer_id": 2079132, "score": 4, "text": "<p>It means the area under a graph (or, rather, between the graph and the $x$-axis).</p>\n\n<p>For example, the area between the graph of $y=1/(1+x^2)$ and the $x$-axis would be written like this:$$\\int_{-\\infty}^\\infty\\frac1{1+x^2}dx$$You can <a href=\"https://desmos.com/calculator\" rel=\"nofollow noreferrer\">graph it here</a>. It can be shown that this is equal to $\\pi$!</p>\n\n<p>On the other hand:\n$$\\int_{-1}^1\\frac1{1+x^2}dx$$\ncan be shown to be equal to only $\\pi/2$. Here, the $-1$ and $1$ mean we only consider a portion of the area, between the vertical lines $x=-1$ and $x=1$.</p>\n\n<p>From a physics standpoint, if $v(t)$ is the velocity of a particle in terms of time (which may change over time, e.g. if the particle speeds up or slows down), it can be shown that $\\int_a^bv(t)dt$ is equal to the distance between where the particle is at time $t=a$ and where it is at time $t=b$. 
This, and results like it, make integrals incredibly useful for physics and engineering.</p>\n\n<p>The most powerful tool for computing integrals is the <em>Fundamental Theorem of Calculus</em>, which relates integrals to something called \"derivatives,\" which you learn more about in calculus.</p>\n" }, { "answer_id": 2079076, "score": 11, "text": "<p>The symbol $\\int$ is known as the <strong>integral sign</strong>. It's used in calculus to denote integration:</p>\n\n<p>$$\\int 2x \\, dx = x^2 + C$$</p>\n\n<p>$$\\int_0^2 x \\, dx = 2$$</p>\n\n<p>$$\\int_\\mathbb{R} \\phi(x) \\, dx = 1$$</p>\n\n<p>As you can see, it's always followed by a function, and then a differential like $dx$. This indicates which variable is being used in the integration; for instance, $\\int f(x,y) \\,dx \\neq \\int f(x,y) \\,dy$, but $\\int f(x) \\,dx = \\int f(t) \\,dt$.</p>\n\n<p>Without numbers/variables, it represents the indefinite integral (which is another function); with two limits or with one set as superscripts or subscripts, it denotes the definite integral (which is a number).</p>\n\n<p>You'll see multiple integral signs $\\iint$ for an integral with multiple differentials,</p>\n\n<p>$$\\iint f(x,y) \\,dy \\,dx,$$</p>\n\n<p>or a closed integral sign $\\oint$ (not sure if that's the correct name) for an integral over a closed curve, surface, etc. Don't expect to see either of those in high school calculus.</p>\n" }, { "answer_id": 2079132, "score": 4, "text": "<p>It means the area under a graph (or, rather, between the graph and the $x$-axis).</p>\n\n<p>For example, the area between the graph of $y=1/(1+x^2)$ and the $x$-axis would be written like this:$$\\int_{-\\infty}^\\infty\\frac1{1+x^2}dx$$You can <a href=\"https://desmos.com/calculator\" rel=\"nofollow noreferrer\">graph it here</a>. It can be shown that this is equal to $\\pi$!</p>\n\n<p>On the other hand:\n$$\\int_{-1}^1\\frac1{1+x^2}dx$$\ncan be shown to be equal to only $\\pi/2$. 
Here, the $-1$ and $1$ mean we only consider a portion of the area, between the vertical lines $x=-1$ and $x=1$.</p>\n\n<p>From a physics standpoint, if $v(t)$ is the velocity of a particle in terms of time (which may change over time, e.g. if the particle speeds up or slows down), it can be shown that $\\int_a^bv(t)dt$ is equal to the distance between where the particle is at time $t=a$ and where it is at time $t=b$. This, and results like it, make integrals incredibly useful for physics and engineering.</p>\n\n<p>The most powerful tool for computing integrals is the <em>Fundamental Theorem of Calculus</em>, which relates integrals to something called \"derivatives,\" which you learn more about in calculus.</p>\n" }, { "answer_id": 2079076, "score": 11, "text": "<p>The symbol $\\int$ is known as the <strong>integral sign</strong>. It's used in calculus to denote integration:</p>\n\n<p>$$\\int 2x \\, dx = x^2 + C$$</p>\n\n<p>$$\\int_0^2 x \\, dx = 2$$</p>\n\n<p>$$\\int_\\mathbb{R} \\phi(x) \\, dx = 1$$</p>\n\n<p>As you can see, it's always followed by a function, and then a differential like $dx$. This indicates which variable is being used in the integration; for instance, $\\int f(x,y) \\,dx \\neq \\int f(x,y) \\,dy$, but $\\int f(x) \\,dx = \\int f(t) \\,dt$.</p>\n\n<p>Without numbers/variables, it represents the indefinite integral (which is another function); with two limits or with one set as superscripts or subscripts, it denotes the definite integral (which is a number).</p>\n\n<p>You'll see multiple integral signs $\\iint$ for an integral with multiple differentials,</p>\n\n<p>$$\\iint f(x,y) \\,dy \\,dx,$$</p>\n\n<p>or a closed integral sign $\\oint$ (not sure if that's the correct name) for an integral over a closed curve, surface, etc. 
Don't expect to see either of those in high school calculus.</p>\n" }, { "answer_id": 2079132, "score": 4, "text": "<p>It means the area under a graph (or, rather, between the graph and the $x$-axis).</p>\n\n<p>For example, the area between the graph of $y=1/(1+x^2)$ and the $x$-axis would be written like this:$$\\int_{-\\infty}^\\infty\\frac1{1+x^2}dx$$You can <a href=\"https://desmos.com/calculator\" rel=\"nofollow noreferrer\">graph it here</a>. It can be shown that this is equal to $\\pi$!</p>\n\n<p>On the other hand:\n$$\\int_{-1}^1\\frac1{1+x^2}dx$$\ncan be shown to be equal to only $\\pi/2$. Here, the $-1$ and $1$ mean we only consider a portion of the area, between the vertical lines $x=-1$ and $x=1$.</p>\n\n<p>From a physics standpoint, if $v(t)$ is the velocity of a particle in terms of time (which may change over time, e.g. if the particle speeds up or slows down), it can be shown that $\\int_a^bv(t)dt$ is equal to the distance between where the particle is at time $t=a$ and where it is at time $t=b$. This, and results like it, make integrals incredibly useful for physics and engineering.</p>\n\n<p>The most powerful tool for computing integrals is the <em>Fundamental Theorem of Calculus</em>, which relates integrals to something called \"derivatives,\" which you learn more about in calculus.</p>\n" }, { "answer_id": 2079076, "score": 11, "text": "<p>The symbol $\\int$ is known as the <strong>integral sign</strong>. It's used in calculus to denote integration:</p>\n\n<p>$$\\int 2x \\, dx = x^2 + C$$</p>\n\n<p>$$\\int_0^2 x \\, dx = 2$$</p>\n\n<p>$$\\int_\\mathbb{R} \\phi(x) \\, dx = 1$$</p>\n\n<p>As you can see, it's always followed by a function, and then a differential like $dx$. 
This indicates which variable is being used in the integration; for instance, $\\int f(x,y) \\,dx \\neq \\int f(x,y) \\,dy$, but $\\int f(x) \\,dx = \\int f(t) \\,dt$.</p>\n\n<p>Without numbers/variables, it represents the indefinite integral (which is another function); with two limits or with one set as superscripts or subscripts, it denotes the definite integral (which is a number).</p>\n\n<p>You'll see multiple integral signs $\\iint$ for an integral with multiple differentials,</p>\n\n<p>$$\\iint f(x,y) \\,dy \\,dx,$$</p>\n\n<p>or a closed integral sign $\\oint$ (not sure if that's the correct name) for an integral over a closed curve, surface, etc. Don't expect to see either of those in high school calculus.</p>\n" }
Question: Fancy slash thing? Answer 1: <p>The symbol $\int$ is known as the <strong>integral sign</strong>. It's used in calculus to denote integration:</p> <p>$$\int 2x \, dx = x^2 + C$$</p> <p>$$\int_0^2 x \, dx = 2$$</p> <p>$$\int_\mathbb{R} \phi(x) \, dx = 1$$</p> <p>As you can see, it's always followed by a function, and then a differential like $dx$. This indicates which variable is being used in the integration; for instance, $\int f(x,y) \,dx \neq \int f(x,y) \,dy$, but $\int f(x) \,dx = \int f(t) \,dt$.</p> <p>Without numbers/variables, it represents the indefinite integral (which is another function); with two limits or with one set as superscripts or subscripts, it denotes the definite integral (which is a number).</p> <p>You'll see multiple integral signs $\iint$ for an integral with multiple differentials,</p> <p>$$\iint f(x,y) \,dy \,dx,$$</p> <p>or a closed integral sign $\oint$ (not sure if that's the correct name) for an integral over a closed curve, surface, etc. Don't expect to see either of those in high school calculus.</p> Answer 2: <p>It means the area under a graph (or, rather, between the graph and the $x$-axis).</p> <p>For example, the area between the graph of $y=1/(1+x^2)$ and the $x$-axis would be written like this:$$\int_{-\infty}^\infty\frac1{1+x^2}dx$$You can <a href="https://desmos.com/calculator" rel="nofollow noreferrer">graph it here</a>. It can be shown that this is equal to $\pi$!</p> <p>On the other hand: $$\int_{-1}^1\frac1{1+x^2}dx$$ can be shown to be equal to only $\pi/2$. Here, the $-1$ and $1$ mean we only consider a portion of the area, between the vertical lines $x=-1$ and $x=1$.</p> <p>From a physics standpoint, if $v(t)$ is the velocity of a particle in terms of time (which may change over time, e.g. if the particle speeds up or slows down), it can be shown that $\int_a^bv(t)dt$ is equal to the distance between where the particle is at time $t=a$ and where it is at time $t=b$. 
This, and results like it, make integrals incredibly useful for physics and engineering.</p>

<p>The most powerful tool for computing integrals is the <em>Fundamental Theorem of Calculus</em>, which relates integrals to something called "derivatives," which you learn more about in calculus.</p>
This indicates which variable is being used in the integration; for instance, $\int f(x,y) \,dx \neq \int f(x,y) \,dy$, but $\int f(x) \,dx = \int f(t) \,dt$.</p> <p>Without numbers/variables, it represents the indefinite integral (which is another function); with two limits or with one set as superscripts or subscripts, it denotes the definite integral (which is a number).</p> <p>You'll see multiple integral signs $\iint$ for an integral with multiple differentials,</p> <p>$$\iint f(x,y) \,dy \,dx,$$</p> <p>or a closed integral sign $\oint$ (not sure if that's the correct name) for an integral over a closed curve, surface, etc. Don't expect to see either of those in high school calculus.</p> Answer 16: <p>It means the area under a graph (or, rather, between the graph and the $x$-axis).</p> <p>For example, the area between the graph of $y=1/(1+x^2)$ and the $x$-axis would be written like this:$$\int_{-\infty}^\infty\frac1{1+x^2}dx$$You can <a href="https://desmos.com/calculator" rel="nofollow noreferrer">graph it here</a>. It can be shown that this is equal to $\pi$!</p> <p>On the other hand: $$\int_{-1}^1\frac1{1+x^2}dx$$ can be shown to be equal to only $\pi/2$. Here, the $-1$ and $1$ mean we only consider a portion of the area, between the vertical lines $x=-1$ and $x=1$.</p> <p>From a physics standpoint, if $v(t)$ is the velocity of a particle in terms of time (which may change over time, e.g. if the particle speeds up or slows down), it can be shown that $\int_a^bv(t)dt$ is equal to the distance between where the particle is at time $t=a$ and where it is at time $t=b$. This, and results like it, make integrals incredibly useful for physics and engineering.</p> <p>The most powerful tool for computing integrals is the <em>Fundamental Theorem of Calculus</em>, which relates integrals to something called "derivatives," which you learn more about in calculus.</p> Answer 17: <p>The symbol $\int$ is known as the <strong>integral sign</strong>. 
It's used in calculus to denote integration:</p> <p>$$\int 2x \, dx = x^2 + C$$</p> <p>$$\int_0^2 x \, dx = 2$$</p> <p>$$\int_\mathbb{R} \phi(x) \, dx = 1$$</p> <p>As you can see, it's always followed by a function, and then a differential like $dx$. This indicates which variable is being used in the integration; for instance, $\int f(x,y) \,dx \neq \int f(x,y) \,dy$, but $\int f(x) \,dx = \int f(t) \,dt$.</p> <p>Without numbers/variables, it represents the indefinite integral (which is another function); with two limits or with one set as superscripts or subscripts, it denotes the definite integral (which is a number).</p> <p>You'll see multiple integral signs $\iint$ for an integral with multiple differentials,</p> <p>$$\iint f(x,y) \,dy \,dx,$$</p> <p>or a closed integral sign $\oint$ (not sure if that's the correct name) for an integral over a closed curve, surface, etc. Don't expect to see either of those in high school calculus.</p> Answer 18: <p>It means the area under a graph (or, rather, between the graph and the $x$-axis).</p> <p>For example, the area between the graph of $y=1/(1+x^2)$ and the $x$-axis would be written like this:$$\int_{-\infty}^\infty\frac1{1+x^2}dx$$You can <a href="https://desmos.com/calculator" rel="nofollow noreferrer">graph it here</a>. It can be shown that this is equal to $\pi$!</p> <p>On the other hand: $$\int_{-1}^1\frac1{1+x^2}dx$$ can be shown to be equal to only $\pi/2$. Here, the $-1$ and $1$ mean we only consider a portion of the area, between the vertical lines $x=-1$ and $x=1$.</p> <p>From a physics standpoint, if $v(t)$ is the velocity of a particle in terms of time (which may change over time, e.g. if the particle speeds up or slows down), it can be shown that $\int_a^bv(t)dt$ is equal to the distance between where the particle is at time $t=a$ and where it is at time $t=b$. 
This, and results like it, make integrals incredibly useful for physics and engineering.</p> <p>The most powerful tool for computing integrals is the <em>Fundamental Theorem of Calculus</em>, which relates integrals to something called "derivatives," which you learn more about in calculus.</p> Answer 19: <p>The symbol $\int$ is known as the <strong>integral sign</strong>. It's used in calculus to denote integration:</p> <p>$$\int 2x \, dx = x^2 + C$$</p> <p>$$\int_0^2 x \, dx = 2$$</p> <p>$$\int_\mathbb{R} \phi(x) \, dx = 1$$</p> <p>As you can see, it's always followed by a function, and then a differential like $dx$. This indicates which variable is being used in the integration; for instance, $\int f(x,y) \,dx \neq \int f(x,y) \,dy$, but $\int f(x) \,dx = \int f(t) \,dt$.</p> <p>Without numbers/variables, it represents the indefinite integral (which is another function); with two limits or with one set as superscripts or subscripts, it denotes the definite integral (which is a number).</p> <p>You'll see multiple integral signs $\iint$ for an integral with multiple differentials,</p> <p>$$\iint f(x,y) \,dy \,dx,$$</p> <p>or a closed integral sign $\oint$ (not sure if that's the correct name) for an integral over a closed curve, surface, etc. Don't expect to see either of those in high school calculus.</p> Answer 20: <p>It means the area under a graph (or, rather, between the graph and the $x$-axis).</p> <p>For example, the area between the graph of $y=1/(1+x^2)$ and the $x$-axis would be written like this:$$\int_{-\infty}^\infty\frac1{1+x^2}dx$$You can <a href="https://desmos.com/calculator" rel="nofollow noreferrer">graph it here</a>. It can be shown that this is equal to $\pi$!</p> <p>On the other hand: $$\int_{-1}^1\frac1{1+x^2}dx$$ can be shown to be equal to only $\pi/2$. 
Here, the $-1$ and $1$ mean we only consider a portion of the area, between the vertical lines $x=-1$ and $x=1$.</p> <p>From a physics standpoint, if $v(t)$ is the velocity of a particle in terms of time (which may change over time, e.g. if the particle speeds up or slows down), it can be shown that $\int_a^bv(t)dt$ is equal to the distance between where the particle is at time $t=a$ and where it is at time $t=b$. This, and results like it, make integrals incredibly useful for physics and engineering.</p> <p>The most powerful tool for computing integrals is the <em>Fundamental Theorem of Calculus</em>, which relates integrals to something called "derivatives," which you learn more about in calculus.</p> Answer 21: <p>The symbol $\int$ is known as the <strong>integral sign</strong>. It's used in calculus to denote integration:</p> <p>$$\int 2x \, dx = x^2 + C$$</p> <p>$$\int_0^2 x \, dx = 2$$</p> <p>$$\int_\mathbb{R} \phi(x) \, dx = 1$$</p> <p>As you can see, it's always followed by a function, and then a differential like $dx$. This indicates which variable is being used in the integration; for instance, $\int f(x,y) \,dx \neq \int f(x,y) \,dy$, but $\int f(x) \,dx = \int f(t) \,dt$.</p> <p>Without numbers/variables, it represents the indefinite integral (which is another function); with two limits or with one set as superscripts or subscripts, it denotes the definite integral (which is a number).</p> <p>You'll see multiple integral signs $\iint$ for an integral with multiple differentials,</p> <p>$$\iint f(x,y) \,dy \,dx,$$</p> <p>or a closed integral sign $\oint$ (not sure if that's the correct name) for an integral over a closed curve, surface, etc. 
Don't expect to see either of those in high school calculus.</p> Answer 22: <p>It means the area under a graph (or, rather, between the graph and the $x$-axis).</p> <p>For example, the area between the graph of $y=1/(1+x^2)$ and the $x$-axis would be written like this:$$\int_{-\infty}^\infty\frac1{1+x^2}dx$$You can <a href="https://desmos.com/calculator" rel="nofollow noreferrer">graph it here</a>. It can be shown that this is equal to $\pi$!</p> <p>On the other hand: $$\int_{-1}^1\frac1{1+x^2}dx$$ can be shown to be equal to only $\pi/2$. Here, the $-1$ and $1$ mean we only consider a portion of the area, between the vertical lines $x=-1$ and $x=1$.</p> <p>From a physics standpoint, if $v(t)$ is the velocity of a particle in terms of time (which may change over time, e.g. if the particle speeds up or slows down), it can be shown that $\int_a^bv(t)dt$ is equal to the distance between where the particle is at time $t=a$ and where it is at time $t=b$. This, and results like it, make integrals incredibly useful for physics and engineering.</p> <p>The most powerful tool for computing integrals is the <em>Fundamental Theorem of Calculus</em>, which relates integrals to something called "derivatives," which you learn more about in calculus.</p> Answer 23: <p>The symbol $\int$ is known as the <strong>integral sign</strong>. It's used in calculus to denote integration:</p> <p>$$\int 2x \, dx = x^2 + C$$</p> <p>$$\int_0^2 x \, dx = 2$$</p> <p>$$\int_\mathbb{R} \phi(x) \, dx = 1$$</p> <p>As you can see, it's always followed by a function, and then a differential like $dx$. 
This indicates which variable is being used in the integration; for instance, $\int f(x,y) \,dx \neq \int f(x,y) \,dy$, but $\int f(x) \,dx = \int f(t) \,dt$.</p> <p>Without numbers/variables, it represents the indefinite integral (which is another function); with two limits or with one set as superscripts or subscripts, it denotes the definite integral (which is a number).</p> <p>You'll see multiple integral signs $\iint$ for an integral with multiple differentials,</p> <p>$$\iint f(x,y) \,dy \,dx,$$</p> <p>or a closed integral sign $\oint$ (not sure if that's the correct name) for an integral over a closed curve, surface, etc. Don't expect to see either of those in high school calculus.</p> Answer 24: <p>It means the area under a graph (or, rather, between the graph and the $x$-axis).</p> <p>For example, the area between the graph of $y=1/(1+x^2)$ and the $x$-axis would be written like this:$$\int_{-\infty}^\infty\frac1{1+x^2}dx$$You can <a href="https://desmos.com/calculator" rel="nofollow noreferrer">graph it here</a>. It can be shown that this is equal to $\pi$!</p> <p>On the other hand: $$\int_{-1}^1\frac1{1+x^2}dx$$ can be shown to be equal to only $\pi/2$. Here, the $-1$ and $1$ mean we only consider a portion of the area, between the vertical lines $x=-1$ and $x=1$.</p> <p>From a physics standpoint, if $v(t)$ is the velocity of a particle in terms of time (which may change over time, e.g. if the particle speeds up or slows down), it can be shown that $\int_a^bv(t)dt$ is equal to the distance between where the particle is at time $t=a$ and where it is at time $t=b$. This, and results like it, make integrals incredibly useful for physics and engineering.</p> <p>The most powerful tool for computing integrals is the <em>Fundamental Theorem of Calculus</em>, which relates integrals to something called "derivatives," which you learn more about in calculus.</p> Answer 25: <p>The symbol $\int$ is known as the <strong>integral sign</strong>. 
It's used in calculus to denote integration:</p> <p>$$\int 2x \, dx = x^2 + C$$</p> <p>$$\int_0^2 x \, dx = 2$$</p> <p>$$\int_\mathbb{R} \phi(x) \, dx = 1$$</p> <p>As you can see, it's always followed by a function, and then a differential like $dx$. This indicates which variable is being used in the integration; for instance, $\int f(x,y) \,dx \neq \int f(x,y) \,dy$, but $\int f(x) \,dx = \int f(t) \,dt$.</p> <p>Without numbers/variables, it represents the indefinite integral (which is another function); with two limits or with one set as superscripts or subscripts, it denotes the definite integral (which is a number).</p> <p>You'll see multiple integral signs $\iint$ for an integral with multiple differentials,</p> <p>$$\iint f(x,y) \,dy \,dx,$$</p> <p>or a closed integral sign $\oint$ (not sure if that's the correct name) for an integral over a closed curve, surface, etc. Don't expect to see either of those in high school calculus.</p> Answer 26: <p>It means the area under a graph (or, rather, between the graph and the $x$-axis).</p> <p>For example, the area between the graph of $y=1/(1+x^2)$ and the $x$-axis would be written like this:$$\int_{-\infty}^\infty\frac1{1+x^2}dx$$You can <a href="https://desmos.com/calculator" rel="nofollow noreferrer">graph it here</a>. It can be shown that this is equal to $\pi$!</p> <p>On the other hand: $$\int_{-1}^1\frac1{1+x^2}dx$$ can be shown to be equal to only $\pi/2$. Here, the $-1$ and $1$ mean we only consider a portion of the area, between the vertical lines $x=-1$ and $x=1$.</p> <p>From a physics standpoint, if $v(t)$ is the velocity of a particle in terms of time (which may change over time, e.g. if the particle speeds up or slows down), it can be shown that $\int_a^bv(t)dt$ is equal to the distance between where the particle is at time $t=a$ and where it is at time $t=b$. 
This, and results like it, make integrals incredibly useful for physics and engineering.</p> <p>The most powerful tool for computing integrals is the <em>Fundamental Theorem of Calculus</em>, which relates integrals to something called "derivatives," which you learn more about in calculus.</p> Answer 27: <p>The symbol $\int$ is known as the <strong>integral sign</strong>. It's used in calculus to denote integration:</p> <p>$$\int 2x \, dx = x^2 + C$$</p> <p>$$\int_0^2 x \, dx = 2$$</p> <p>$$\int_\mathbb{R} \phi(x) \, dx = 1$$</p> <p>As you can see, it's always followed by a function, and then a differential like $dx$. This indicates which variable is being used in the integration; for instance, $\int f(x,y) \,dx \neq \int f(x,y) \,dy$, but $\int f(x) \,dx = \int f(t) \,dt$.</p> <p>Without numbers/variables, it represents the indefinite integral (which is another function); with two limits or with one set as superscripts or subscripts, it denotes the definite integral (which is a number).</p> <p>You'll see multiple integral signs $\iint$ for an integral with multiple differentials,</p> <p>$$\iint f(x,y) \,dy \,dx,$$</p> <p>or a closed integral sign $\oint$ (not sure if that's the correct name) for an integral over a closed curve, surface, etc. Don't expect to see either of those in high school calculus.</p> Answer 28: <p>It means the area under a graph (or, rather, between the graph and the $x$-axis).</p> <p>For example, the area between the graph of $y=1/(1+x^2)$ and the $x$-axis would be written like this:$$\int_{-\infty}^\infty\frac1{1+x^2}dx$$You can <a href="https://desmos.com/calculator" rel="nofollow noreferrer">graph it here</a>. It can be shown that this is equal to $\pi$!</p> <p>On the other hand: $$\int_{-1}^1\frac1{1+x^2}dx$$ can be shown to be equal to only $\pi/2$. 
Here, the $-1$ and $1$ mean we only consider a portion of the area, between the vertical lines $x=-1$ and $x=1$.</p> <p>From a physics standpoint, if $v(t)$ is the velocity of a particle in terms of time (which may change over time, e.g. if the particle speeds up or slows down), it can be shown that $\int_a^bv(t)dt$ is equal to the distance between where the particle is at time $t=a$ and where it is at time $t=b$. This, and results like it, make integrals incredibly useful for physics and engineering.</p> <p>The most powerful tool for computing integrals is the <em>Fundamental Theorem of Calculus</em>, which relates integrals to something called "derivatives," which you learn more about in calculus.</p> Answer 29: <p>The symbol $\int$ is known as the <strong>integral sign</strong>. It's used in calculus to denote integration:</p> <p>$$\int 2x \, dx = x^2 + C$$</p> <p>$$\int_0^2 x \, dx = 2$$</p> <p>$$\int_\mathbb{R} \phi(x) \, dx = 1$$</p> <p>As you can see, it's always followed by a function, and then a differential like $dx$. This indicates which variable is being used in the integration; for instance, $\int f(x,y) \,dx \neq \int f(x,y) \,dy$, but $\int f(x) \,dx = \int f(t) \,dt$.</p> <p>Without numbers/variables, it represents the indefinite integral (which is another function); with two limits or with one set as superscripts or subscripts, it denotes the definite integral (which is a number).</p> <p>You'll see multiple integral signs $\iint$ for an integral with multiple differentials,</p> <p>$$\iint f(x,y) \,dy \,dx,$$</p> <p>or a closed integral sign $\oint$ (not sure if that's the correct name) for an integral over a closed curve, surface, etc. 
Don't expect to see either of those in high school calculus.</p> Answer 30: <p>It means the area under a graph (or, rather, between the graph and the $x$-axis).</p> <p>For example, the area between the graph of $y=1/(1+x^2)$ and the $x$-axis would be written like this:$$\int_{-\infty}^\infty\frac1{1+x^2}dx$$You can <a href="https://desmos.com/calculator" rel="nofollow noreferrer">graph it here</a>. It can be shown that this is equal to $\pi$!</p> <p>On the other hand: $$\int_{-1}^1\frac1{1+x^2}dx$$ can be shown to be equal to only $\pi/2$. Here, the $-1$ and $1$ mean we only consider a portion of the area, between the vertical lines $x=-1$ and $x=1$.</p> <p>From a physics standpoint, if $v(t)$ is the velocity of a particle in terms of time (which may change over time, e.g. if the particle speeds up or slows down), it can be shown that $\int_a^bv(t)dt$ is equal to the distance between where the particle is at time $t=a$ and where it is at time $t=b$. This, and results like it, make integrals incredibly useful for physics and engineering.</p> <p>The most powerful tool for computing integrals is the <em>Fundamental Theorem of Calculus</em>, which relates integrals to something called "derivatives," which you learn more about in calculus.</p>
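The definite integral values quoted in these answers ($\int_0^2 x \, dx = 2$ and $\int_{-1}^1 \frac{1}{1+x^2} dx = \pi/2$) can be checked numerically. A minimal sketch using the trapezoid rule; the helper name `trapezoid` is illustrative, not from the answers:

```python
import math

def trapezoid(f, a, b, n=100_000):
    """Approximate the definite integral of f on [a, b] with the trapezoid rule."""
    h = (b - a) / n
    s = 0.5 * (f(a) + f(b)) + sum(f(a + i * h) for i in range(1, n))
    return s * h

# Integral of x from 0 to 2 equals 2.
assert abs(trapezoid(lambda x: x, 0, 2) - 2) < 1e-9

# Integral of 1/(1+x^2) from -1 to 1 equals pi/2 (arctan(1) - arctan(-1)).
assert abs(trapezoid(lambda x: 1 / (1 + x * x), -1, 1) - math.pi / 2) < 1e-6
```

This is just a sanity check on the stated values; the exact results come from antiderivatives, as the answers note via the Fundamental Theorem of Calculus.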
Connection Between the Traditional Equation of a Line ($y = mx + c$) and the Vector Equation of a Line
2,079,446
7
https://math.stackexchange.com/questions/2079446/connection-between-the-traditional-equation-of-a-line-y-mx-c-and-the-vec
<p>My textbook says, "... an equation of the line through $A$ and $B$ is $x = u + tw = u + t(v - u)$, where $t$ is a real number and $x$ denotes an arbitrary point on the line. An illustration of this is shown below:</p> <p><a href="https://i.stack.imgur.com/GePLT.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/GePLT.png" alt="See Image"></a></p> <p>Since $x = u + tw$ is the equation of a line, I presume that it is equivalent to the equation $y = mx + c$, where $m$ is the slope, $x$ is the x-intercept (when $y = 0$), and $c$ is the $y$-intercept (when $x = 0$). Therefore, in the equation $x = u + tw$, $w$ must represent the $x$-intercept (when $x = 0$), $u$ must represent the $y$-intercept (when $w = 0$), and $t$ must represent the slope. </p> <p>It is clear that there is a connection between the traditional equation of a line, $y = mx + c$, and the vector equation of a line, $x = u + tw = u + t(v - u)$. However, I am struggling to clearly understand the connection between the vectors and the traditional equation:</p> <p><strong>How can these vectors represent slope ($m$)?</strong></p> <p><strong>How can these vectors represent x- and y-intercepts?</strong></p> <p>When I look at the diagram and the vector equations of a line, there does not seem to be a clear indication of how they relate to the cartesian illustration of the traditional equation of a line ($y = mx + c$). For instance, the vectors $w$ and $u$ in $x = u + tw$ would represent the $x$ and $y$ intercepts; <strong>But how can a vector represent the $x$ or $y$ intercepts?</strong> This makes no sense to me.</p> <p>To provide further context, take the vector equation of a line $x = (-2,0.1)-t(6,5,2)$. In this case, $(-2,0.1) = u$ and $(6,5,2) = w$; But in what way do the vectors $u$ and $w$ represent $y$ and $x$ intercepts? 
This is not clear to me.</p> <p>I would greatly appreciate it if the knowledgeable members of MSE could please take the time to clarify the connection between these two forms of a line. </p> <p>Thank you.</p>
[ { "answer_id": 2079513, "score": 7, "text": "<p>The vector equation $\\vec{X}(t) = \\vec{U} + t\\vec{W}$ is really a shorthand for two equations:\n\\begin{align}\nx &amp;= u_x + t w_x \\\\\ny &amp;= u_y + t w_y \\\\\n\\end{align}\nwhere $\\vec{U}$ has components $(u_x,u_y)$ and $\\vec{W}$ has components $(w_x,w_y)$. It's a completely different kind of equation from $y = mx + c$, and it's not helpful to look for correspondences between the two.</p>\n\n<p>However, for 2D lines, the equations are related: from the first equation above, you can get\n$$\nt = \\frac{x - u_x}{w_x}\n$$\nThen, if you substitute this in the second equation, you get\n$$\ny = u_y + \\frac{x - u_x}{w_x} w_y \n= \\left( \\frac{w_y}{w_x} \\right)x + \\left( u_y - \\frac{u_xw_y}{w_x} \\right) \n$$ \nand this is in the form $y=mx+c$, as you wanted. So, for example, the slope of the line is $\\frac{w_y}{w_x}$. </p>\n\n<p>If $w_x = 0$, you get a vertical line (with \"infinite\" slope) which you can't represent in the form $y=mx+c$. So, the vector form is more general. </p>\n" }, { "answer_id": 2079473, "score": 3, "text": "<p>This is perhaps more of a comment than a full answer, but here is one way to think about this.</p>\n\n<p>In vector notation we have $\\vec{X}=\\vec{u}+t\\,\\vec{w}$, with $\\vec{X}$ pointing to a point on the line parametrised by $t$. In component form we then have,\n$$\n\\vec{X} \\equiv \\begin{pmatrix} x \\\\ y \\end{pmatrix}=\\vec{u}+t\\,\\vec{w} \\equiv\n\\begin{pmatrix} u_1 \\\\ u_2 \\end{pmatrix}+t\\begin{pmatrix} w_1 \\\\ w_2 \\end{pmatrix}\\,.\n$$\nHence by comparing the two components we see that $y=\\frac{w_2}{w_1}x+(u_2-\\frac{w_2}{w_1}u_1)\\equiv mx+c$. 
(This would look simpler if we chose to align, say, the $y$-axis with $\\vec{u}$.)</p>\n" }, { "answer_id": 2079513, "score": 7, "text": "<p>The vector equation $\\vec{X}(t) = \\vec{U} + t\\vec{W}$ is really a shorthand for two equations:\n\\begin{align}\nx &amp;= u_x + t w_x \\\\\ny &amp;= u_y + t w_y \\\\\n\\end{align}\nwhere $\\vec{U}$ has components $(u_x,u_y)$ and $\\vec{W}$ has components $(w_x,w_y)$. It's a completely different kind of equation from $y = mx + c$, and it's not helpful to look for correspondences between the two.</p>\n\n<p>However, for 2D lines, the equations are related: from the first equation above, you can get\n$$\nt = \\frac{x - u_x}{w_x}\n$$\nThen, if you substitute this in the second equation, you get\n$$\ny = u_y + \\frac{x - u_x}{w_x} w_y \n= \\left( \\frac{w_y}{w_x} \\right)x + \\left( u_y - \\frac{u_xw_y}{w_x} \\right) \n$$ \nand this is in the form $y=mx+c$, as you wanted. So, for example, the slope of the line is $\\frac{w_y}{w_x}$. </p>\n\n<p>If $w_x = 0$, you get a vertical line (with \"infinite\" slope) which you can't represent in the form $y=mx+c$. So, the vector form is more general. </p>\n" }, { "answer_id": 2079473, "score": 3, "text": "<p>This is perhaps more of a comment than a full answer, but here is one way to think about this.</p>\n\n<p>In vector notation we have $\\vec{X}=\\vec{u}+t\\,\\vec{w}$, with $\\vec{X}$ pointing to a point on the line parametrised by $t$. In component form we then have,\n$$\n\\vec{X} \\equiv \\begin{pmatrix} x \\\\ y \\end{pmatrix}=\\vec{u}+t\\,\\vec{w} \\equiv\n\\begin{pmatrix} u_1 \\\\ u_2 \\end{pmatrix}+t\\begin{pmatrix} w_1 \\\\ w_2 \\end{pmatrix}\\,.\n$$\nHence by comparing the two components we see that $y=\\frac{w_2}{w_1}x+(u_2-\\frac{w_2}{w_1}u_1)\\equiv mx+c$. 
(This would look simpler if we chose to align, say, the $y$-axis with $\\vec{u}$.)</p>\n" }, { "answer_id": 2079513, "score": 7, "text": "<p>The vector equation $\\vec{X}(t) = \\vec{U} + t\\vec{W}$ is really a shorthand for two equations:\n\\begin{align}\nx &amp;= u_x + t w_x \\\\\ny &amp;= u_y + t w_y \\\\\n\\end{align}\nwhere $\\vec{U}$ has components $(u_x,u_y)$ and $\\vec{W}$ has components $(w_x,w_y)$. It's a completely different kind of equation from $y = mx + c$, and it's not helpful to look for correspondences between the two.</p>\n\n<p>However, for 2D lines, the equations are related: from the first equation above, you can get\n$$\nt = \\frac{x - u_x}{w_x}\n$$\nThen, if you substitute this in the second equation, you get\n$$\ny = u_y + \\frac{x - u_x}{w_x} w_y \n= \\left( \\frac{w_y}{w_x} \\right)x + \\left( u_y - \\frac{u_xw_y}{w_x} \\right) \n$$ \nand this is in the form $y=mx+c$, as you wanted. So, for example, the slope of the line is $\\frac{w_y}{w_x}$. </p>\n\n<p>If $w_x = 0$, you get a vertical line (with \"infinite\" slope) which you can't represent in the form $y=mx+c$. So, the vector form is more general. </p>\n" }, { "answer_id": 2079473, "score": 3, "text": "<p>This is perhaps more of a comment than a full answer, but here is one way to think about this.</p>\n\n<p>In vector notation we have $\\vec{X}=\\vec{u}+t\\,\\vec{w}$, with $\\vec{X}$ pointing to a point on the line parametrised by $t$. In component form we then have,\n$$\n\\vec{X} \\equiv \\begin{pmatrix} x \\\\ y \\end{pmatrix}=\\vec{u}+t\\,\\vec{w} \\equiv\n\\begin{pmatrix} u_1 \\\\ u_2 \\end{pmatrix}+t\\begin{pmatrix} w_1 \\\\ w_2 \\end{pmatrix}\\,.\n$$\nHence by comparing the two components we see that $y=\\frac{w_2}{w_1}x+(u_2-\\frac{w_2}{w_1}u_1)\\equiv mx+c$. 
(This would look simpler if we chose to align, say, the $y$-axis with $\\vec{u}$.)</p>\n" }, { "answer_id": 2079513, "score": 7, "text": "<p>The vector equation $\\vec{X}(t) = \\vec{U} + t\\vec{W}$ is really a shorthand for two equations:\n\\begin{align}\nx &amp;= u_x + t w_x \\\\\ny &amp;= u_y + t w_y \\\\\n\\end{align}\nwhere $\\vec{U}$ has components $(u_x,u_y)$ and $\\vec{W}$ has components $(w_x,w_y)$. It's a completely different kind of equation from $y = mx + c$, and it's not helpful to look for correspondences between the two.</p>\n\n<p>However, for 2D lines, the equations are related: from the first equation above, you can get\n$$\nt = \\frac{x - u_x}{w_x}\n$$\nThen, if you substitute this in the second equation, you get\n$$\ny = u_y + \\frac{x - u_x}{w_x} w_y \n= \\left( \\frac{w_y}{w_x} \\right)x + \\left( u_y - \\frac{u_xw_y}{w_x} \\right) \n$$ \nand this is in the form $y=mx+c$, as you wanted. So, for example, the slope of the line is $\\frac{w_y}{w_x}$. </p>\n\n<p>If $w_x = 0$, you get a vertical line (with \"infinite\" slope) which you can't represent in the form $y=mx+c$. So, the vector form is more general. </p>\n" }, { "answer_id": 2079473, "score": 3, "text": "<p>This is perhaps more of a comment than a full answer, but here is one way to think about this.</p>\n\n<p>In vector notation we have $\\vec{X}=\\vec{u}+t\\,\\vec{w}$, with $\\vec{X}$ pointing to a point on the line parametrised by $t$. In component form we then have,\n$$\n\\vec{X} \\equiv \\begin{pmatrix} x \\\\ y \\end{pmatrix}=\\vec{u}+t\\,\\vec{w} \\equiv\n\\begin{pmatrix} u_1 \\\\ u_2 \\end{pmatrix}+t\\begin{pmatrix} w_1 \\\\ w_2 \\end{pmatrix}\\,.\n$$\nHence by comparing the two components we see that $y=\\frac{w_2}{w_1}x+(u_2-\\frac{w_2}{w_1}u_1)\\equiv mx+c$. 
(This would look simpler if we chose to align, say, the $y$-axis with $\\vec{u}$.)</p>\n" }, { "answer_id": 2079513, "score": 7, "text": "<p>The vector equation $\\vec{X}(t) = \\vec{U} + t\\vec{W}$ is really a shorthand for two equations:\n\\begin{align}\nx &amp;= u_x + t w_x \\\\\ny &amp;= u_y + t w_y \\\\\n\\end{align}\nwhere $\\vec{U}$ has components $(u_x,u_y)$ and $\\vec{W}$ has components $(w_x,w_y)$. It's a completely different kind of equation from $y = mx + c$, and it's not helpful to look for correspondences between the two.</p>\n\n<p>However, for 2D lines, the equations are related: from the first equation above, you can get\n$$\nt = \\frac{x - u_x}{w_x}\n$$\nThen, if you substitute this in the second equation, you get\n$$\ny = u_y + \\frac{x - u_x}{w_x} w_y \n= \\left( \\frac{w_y}{w_x} \\right)x + \\left( u_y - \\frac{u_xw_y}{w_x} \\right) \n$$ \nand this is in the form $y=mx+c$, as you wanted. So, for example, the slope of the line is $\\frac{w_y}{w_x}$. </p>\n\n<p>If $w_x = 0$, you get a vertical line (with \"infinite\" slope) which you can't represent in the form $y=mx+c$. So, the vector form is more general. </p>\n" }, { "answer_id": 2079473, "score": 3, "text": "<p>This is perhaps more of a comment than a full answer, but here is one way to think about this.</p>\n\n<p>In vector notation we have $\\vec{X}=\\vec{u}+t\\,\\vec{w}$, with $\\vec{X}$ pointing to a point on the line parametrised by $t$. In component form we then have,\n$$\n\\vec{X} \\equiv \\begin{pmatrix} x \\\\ y \\end{pmatrix}=\\vec{u}+t\\,\\vec{w} \\equiv\n\\begin{pmatrix} u_1 \\\\ u_2 \\end{pmatrix}+t\\begin{pmatrix} w_1 \\\\ w_2 \\end{pmatrix}\\,.\n$$\nHence by comparing the two components we see that $y=\\frac{w_2}{w_1}x+(u_2-\\frac{w_2}{w_1}u_1)\\equiv mx+c$. 
Question: Connection Between the Traditional Equation of a Line ($y = mx + c$) and the Vector Equation of a Line Answer 1: <p>The vector equation $\vec{X}(t) = \vec{U} + t\vec{W}$ is really a shorthand for two equations: \begin{align} x &amp;= u_x + t w_x \\ y &amp;= u_y + t w_y \\ \end{align} where $\vec{U}$ has components $(u_x,u_y)$ and $\vec{W}$ has components $(w_x,w_y)$. It's a completely different kind of equation from $y = mx + c$, and it's not helpful to look for correspondences between the two.</p> <p>However, for 2D lines, the equations are related: from the first equation above, you can get $$ t = \frac{x - u_x}{w_x} $$ Then, if you substitute this in the second equation, you get $$ y = u_y + \frac{x - u_x}{w_x} w_y = \left( \frac{w_y}{w_x} \right)x + \left( u_y - \frac{u_xw_y}{w_x} \right) $$ and this is in the form $y=mx+c$, as you wanted. So, for example, the slope of the line is $\frac{w_y}{w_x}$. </p> <p>If $w_x = 0$, you get a vertical line (with "infinite" slope) which you can't represent in the form $y=mx+c$. So, the vector form is more general. </p> Answer 2: <p>This is perhaps more of a comment than a full answer, but here is one way to think about this.</p> <p>In vector notation we have $\vec{X}=\vec{u}+t\,\vec{w}$, with $\vec{X}$ pointing to a point on the line parametrised by $t$. In component form we then have, $$ \vec{X} \equiv \begin{pmatrix} x \\ y \end{pmatrix}=\vec{u}+t\,\vec{w} \equiv \begin{pmatrix} u_1 \\ u_2 \end{pmatrix}+t\begin{pmatrix} w_1 \\ w_2 \end{pmatrix}\,. $$ Hence by comparing the two components we see that $y=\frac{w_2}{w_1}x+(u_2-\frac{w_2}{w_1}u_1)\equiv mx+c$. (This would look simpler if we chose to align, say, the $y$-axis with $\vec{u}$.)</p> Answer 3: <p>The vector equation $\vec{X}(t) = \vec{U} + t\vec{W}$ is really a shorthand for two equations: \begin{align} x &amp;= u_x + t w_x \\ y &amp;= u_y + t w_y \\ \end{align} where $\vec{U}$ has components $(u_x,u_y)$ and $\vec{W}$ has components $(w_x,w_y)$. 
It's a completely different kind of equation from $y = mx + c$, and it's not helpful to look for correspondences between the two.</p> <p>However, for 2D lines, the equations are related: from the first equation above, you can get $$ t = \frac{x - u_x}{w_x} $$ Then, if you substitute this in the second equation, you get $$ y = u_y + \frac{x - u_x}{w_x} w_y = \left( \frac{w_y}{w_x} \right)x + \left( u_y - \frac{u_xw_y}{w_x} \right) $$ and this is in the form $y=mx+c$, as you wanted. So, for example, the slope of the line is $\frac{w_y}{w_x}$. </p> <p>If $w_x = 0$, you get a vertical line (with "infinite" slope) which you can't represent in the form $y=mx+c$. So, the vector form is more general. </p> Answer 4: <p>This is perhaps more of a comment than a full answer, but here is one way to think about this.</p> <p>In vector notation we have $\vec{X}=\vec{u}+t\,\vec{w}$, with $\vec{X}$ pointing to a point on the line parametrised by $t$. In component form we then have, $$ \vec{X} \equiv \begin{pmatrix} x \\ y \end{pmatrix}=\vec{u}+t\,\vec{w} \equiv \begin{pmatrix} u_1 \\ u_2 \end{pmatrix}+t\begin{pmatrix} w_1 \\ w_2 \end{pmatrix}\,. $$ Hence by comparing the two components we see that $y=\frac{w_2}{w_1}x+(u_2-\frac{w_2}{w_1}u_1)\equiv mx+c$. (This would look simpler if we chose to align, say, the $y$-axis with $\vec{u}$.)</p> Answer 5: <p>The vector equation $\vec{X}(t) = \vec{U} + t\vec{W}$ is really a shorthand for two equations: \begin{align} x &amp;= u_x + t w_x \\ y &amp;= u_y + t w_y \\ \end{align} where $\vec{U}$ has components $(u_x,u_y)$ and $\vec{W}$ has components $(w_x,w_y)$. 
It's a completely different kind of equation from $y = mx + c$, and it's not helpful to look for correspondences between the two.</p> <p>However, for 2D lines, the equations are related: from the first equation above, you can get $$ t = \frac{x - u_x}{w_x} $$ Then, if you substitute this in the second equation, you get $$ y = u_y + \frac{x - u_x}{w_x} w_y = \left( \frac{w_y}{w_x} \right)x + \left( u_y - \frac{u_xw_y}{w_x} \right) $$ and this is in the form $y=mx+c$, as you wanted. So, for example, the slope of the line is $\frac{w_y}{w_x}$. </p> <p>If $w_x = 0$, you get a vertical line (with "infinite" slope) which you can't represent in the form $y=mx+c$. So, the vector form is more general. </p> Answer 6: <p>This is perhaps more of a comment than a full answer, but here is one way to think about this.</p> <p>In vector notation we have $\vec{X}=\vec{u}+t\,\vec{w}$, with $\vec{X}$ pointing to a point on the line parametrised by $t$. In component form we then have, $$ \vec{X} \equiv \begin{pmatrix} x \\ y \end{pmatrix}=\vec{u}+t\,\vec{w} \equiv \begin{pmatrix} u_1 \\ u_2 \end{pmatrix}+t\begin{pmatrix} w_1 \\ w_2 \end{pmatrix}\,. $$ Hence by comparing the two components we see that $y=\frac{w_2}{w_1}x+(u_2-\frac{w_2}{w_1}u_1)\equiv mx+c$. (This would look simpler if we chose to align, say, the $y$-axis with $\vec{u}$.)</p> Answer 7: <p>The vector equation $\vec{X}(t) = \vec{U} + t\vec{W}$ is really a shorthand for two equations: \begin{align} x &amp;= u_x + t w_x \\ y &amp;= u_y + t w_y \\ \end{align} where $\vec{U}$ has components $(u_x,u_y)$ and $\vec{W}$ has components $(w_x,w_y)$. 
It's a completely different kind of equation from $y = mx + c$, and it's not helpful to look for correspondences between the two.</p> <p>However, for 2D lines, the equations are related: from the first equation above, you can get $$ t = \frac{x - u_x}{w_x} $$ Then, if you substitute this in the second equation, you get $$ y = u_y + \frac{x - u_x}{w_x} w_y = \left( \frac{w_y}{w_x} \right)x + \left( u_y - \frac{u_xw_y}{w_x} \right) $$ and this is in the form $y=mx+c$, as you wanted. So, for example, the slope of the line is $\frac{w_y}{w_x}$. </p> <p>If $w_x = 0$, you get a vertical line (with "infinite" slope) which you can't represent in the form $y=mx+c$. So, the vector form is more general. </p> Answer 8: <p>This is perhaps more of a comment than a full answer, but here is one way to think about this.</p> <p>In vector notation we have $\vec{X}=\vec{u}+t\,\vec{w}$, with $\vec{X}$ pointing to a point on the line parametrised by $t$. In component form we then have, $$ \vec{X} \equiv \begin{pmatrix} x \\ y \end{pmatrix}=\vec{u}+t\,\vec{w} \equiv \begin{pmatrix} u_1 \\ u_2 \end{pmatrix}+t\begin{pmatrix} w_1 \\ w_2 \end{pmatrix}\,. $$ Hence by comparing the two components we see that $y=\frac{w_2}{w_1}x+(u_2-\frac{w_2}{w_1}u_1)\equiv mx+c$. (This would look simpler if we chose to align, say, the $y$-axis with $\vec{u}$.)</p> Answer 9: <p>The vector equation $\vec{X}(t) = \vec{U} + t\vec{W}$ is really a shorthand for two equations: \begin{align} x &amp;= u_x + t w_x \\ y &amp;= u_y + t w_y \\ \end{align} where $\vec{U}$ has components $(u_x,u_y)$ and $\vec{W}$ has components $(w_x,w_y)$. 
It's a completely different kind of equation from $y = mx + c$, and it's not helpful to look for correspondences between the two.</p> <p>However, for 2D lines, the equations are related: from the first equation above, you can get $$ t = \frac{x - u_x}{w_x} $$ Then, if you substitute this in the second equation, you get $$ y = u_y + \frac{x - u_x}{w_x} w_y = \left( \frac{w_y}{w_x} \right)x + \left( u_y - \frac{u_xw_y}{w_x} \right) $$ and this is in the form $y=mx+c$, as you wanted. So, for example, the slope of the line is $\frac{w_y}{w_x}$. </p> <p>If $w_x = 0$, you get a vertical line (with "infinite" slope) which you can't represent in the form $y=mx+c$. So, the vector form is more general. </p> Answer 10: <p>This is perhaps more of a comment than a full answer, but here is one way to think about this.</p> <p>In vector notation we have $\vec{X}=\vec{u}+t\,\vec{w}$, with $\vec{X}$ pointing to a point on the line parametrised by $t$. In component form we then have, $$ \vec{X} \equiv \begin{pmatrix} x \\ y \end{pmatrix}=\vec{u}+t\,\vec{w} \equiv \begin{pmatrix} u_1 \\ u_2 \end{pmatrix}+t\begin{pmatrix} w_1 \\ w_2 \end{pmatrix}\,. $$ Hence by comparing the two components we see that $y=\frac{w_2}{w_1}x+(u_2-\frac{w_2}{w_1}u_1)\equiv mx+c$. (This would look simpler if we chose to align, say, the $y$-axis with $\vec{u}$.)</p> Answer 11: <p>The vector equation $\vec{X}(t) = \vec{U} + t\vec{W}$ is really a shorthand for two equations: \begin{align} x &amp;= u_x + t w_x \\ y &amp;= u_y + t w_y \\ \end{align} where $\vec{U}$ has components $(u_x,u_y)$ and $\vec{W}$ has components $(w_x,w_y)$. 
It's a completely different kind of equation from $y = mx + c$, and it's not helpful to look for correspondences between the two.</p> <p>However, for 2D lines, the equations are related: from the first equation above, you can get $$ t = \frac{x - u_x}{w_x} $$ Then, if you substitute this in the second equation, you get $$ y = u_y + \frac{x - u_x}{w_x} w_y = \left( \frac{w_y}{w_x} \right)x + \left( u_y - \frac{u_xw_y}{w_x} \right) $$ and this is in the form $y=mx+c$, as you wanted. So, for example, the slope of the line is $\frac{w_y}{w_x}$. </p> <p>If $w_x = 0$, you get a vertical line (with "infinite" slope) which you can't represent in the form $y=mx+c$. So, the vector form is more general. </p> Answer 12: <p>This is perhaps more of a comment than a full answer, but here is one way to think about this.</p> <p>In vector notation we have $\vec{X}=\vec{u}+t\,\vec{w}$, with $\vec{X}$ pointing to a point on the line parametrised by $t$. In component form we then have, $$ \vec{X} \equiv \begin{pmatrix} x \\ y \end{pmatrix}=\vec{u}+t\,\vec{w} \equiv \begin{pmatrix} u_1 \\ u_2 \end{pmatrix}+t\begin{pmatrix} w_1 \\ w_2 \end{pmatrix}\,. $$ Hence by comparing the two components we see that $y=\frac{w_2}{w_1}x+(u_2-\frac{w_2}{w_1}u_1)\equiv mx+c$. (This would look simpler if we chose to align, say, the $y$-axis with $\vec{u}$.)</p> Answer 13: <p>The vector equation $\vec{X}(t) = \vec{U} + t\vec{W}$ is really a shorthand for two equations: \begin{align} x &amp;= u_x + t w_x \\ y &amp;= u_y + t w_y \\ \end{align} where $\vec{U}$ has components $(u_x,u_y)$ and $\vec{W}$ has components $(w_x,w_y)$. 
It's a completely different kind of equation from $y = mx + c$, and it's not helpful to look for correspondences between the two.</p> <p>However, for 2D lines, the equations are related: from the first equation above, you can get $$ t = \frac{x - u_x}{w_x} $$ Then, if you substitute this in the second equation, you get $$ y = u_y + \frac{x - u_x}{w_x} w_y = \left( \frac{w_y}{w_x} \right)x + \left( u_y - \frac{u_xw_y}{w_x} \right) $$ and this is in the form $y=mx+c$, as you wanted. So, for example, the slope of the line is $\frac{w_y}{w_x}$. </p> <p>If $w_x = 0$, you get a vertical line (with "infinite" slope) which you can't represent in the form $y=mx+c$. So, the vector form is more general. </p> Answer 14: <p>This is perhaps more of a comment than a full answer, but here is one way to think about this.</p> <p>In vector notation we have $\vec{X}=\vec{u}+t\,\vec{w}$, with $\vec{X}$ pointing to a point on the line parametrised by $t$. In component form we then have, $$ \vec{X} \equiv \begin{pmatrix} x \\ y \end{pmatrix}=\vec{u}+t\,\vec{w} \equiv \begin{pmatrix} u_1 \\ u_2 \end{pmatrix}+t\begin{pmatrix} w_1 \\ w_2 \end{pmatrix}\,. $$ Hence by comparing the two components we see that $y=\frac{w_2}{w_1}x+(u_2-\frac{w_2}{w_1}u_1)\equiv mx+c$. (This would look simpler if we chose to align, say, the $y$-axis with $\vec{u}$.)</p> Answer 15: <p>The vector equation $\vec{X}(t) = \vec{U} + t\vec{W}$ is really a shorthand for two equations: \begin{align} x &amp;= u_x + t w_x \\ y &amp;= u_y + t w_y \\ \end{align} where $\vec{U}$ has components $(u_x,u_y)$ and $\vec{W}$ has components $(w_x,w_y)$. 
It's a completely different kind of equation from $y = mx + c$, and it's not helpful to look for correspondences between the two.</p> <p>However, for 2D lines, the equations are related: from the first equation above, you can get $$ t = \frac{x - u_x}{w_x} $$ Then, if you substitute this in the second equation, you get $$ y = u_y + \frac{x - u_x}{w_x} w_y = \left( \frac{w_y}{w_x} \right)x + \left( u_y - \frac{u_xw_y}{w_x} \right) $$ and this is in the form $y=mx+c$, as you wanted. So, for example, the slope of the line is $\frac{w_y}{w_x}$. </p> <p>If $w_x = 0$, you get a vertical line (with "infinite" slope) which you can't represent in the form $y=mx+c$. So, the vector form is more general. </p> Answer 16: <p>This is perhaps more of a comment than a full answer, but here is one way to think about this.</p> <p>In vector notation we have $\vec{X}=\vec{u}+t\,\vec{w}$, with $\vec{X}$ pointing to a point on the line parametrised by $t$. In component form we then have, $$ \vec{X} \equiv \begin{pmatrix} x \\ y \end{pmatrix}=\vec{u}+t\,\vec{w} \equiv \begin{pmatrix} u_1 \\ u_2 \end{pmatrix}+t\begin{pmatrix} w_1 \\ w_2 \end{pmatrix}\,. $$ Hence by comparing the two components we see that $y=\frac{w_2}{w_1}x+(u_2-\frac{w_2}{w_1}u_1)\equiv mx+c$. (This would look simpler if we chose to align, say, the $y$-axis with $\vec{u}$.)</p> Answer 17: <p>The vector equation $\vec{X}(t) = \vec{U} + t\vec{W}$ is really a shorthand for two equations: \begin{align} x &amp;= u_x + t w_x \\ y &amp;= u_y + t w_y \\ \end{align} where $\vec{U}$ has components $(u_x,u_y)$ and $\vec{W}$ has components $(w_x,w_y)$. 
It's a completely different kind of equation from $y = mx + c$, and it's not helpful to look for correspondences between the two.</p> <p>However, for 2D lines, the equations are related: from the first equation above, you can get $$ t = \frac{x - u_x}{w_x} $$ Then, if you substitute this in the second equation, you get $$ y = u_y + \frac{x - u_x}{w_x} w_y = \left( \frac{w_y}{w_x} \right)x + \left( u_y - \frac{u_xw_y}{w_x} \right) $$ and this is in the form $y=mx+c$, as you wanted. So, for example, the slope of the line is $\frac{w_y}{w_x}$. </p> <p>If $w_x = 0$, you get a vertical line (with "infinite" slope) which you can't represent in the form $y=mx+c$. So, the vector form is more general. </p> Answer 18: <p>This is perhaps more of a comment than a full answer, but here is one way to think about this.</p> <p>In vector notation we have $\vec{X}=\vec{u}+t\,\vec{w}$, with $\vec{X}$ pointing to a point on the line parametrised by $t$. In component form we then have, $$ \vec{X} \equiv \begin{pmatrix} x \\ y \end{pmatrix}=\vec{u}+t\,\vec{w} \equiv \begin{pmatrix} u_1 \\ u_2 \end{pmatrix}+t\begin{pmatrix} w_1 \\ w_2 \end{pmatrix}\,. $$ Hence by comparing the two components we see that $y=\frac{w_2}{w_1}x+(u_2-\frac{w_2}{w_1}u_1)\equiv mx+c$. (This would look simpler if we chose to align, say, the $y$-axis with $\vec{u}$.)</p> Answer 19: <p>The vector equation $\vec{X}(t) = \vec{U} + t\vec{W}$ is really a shorthand for two equations: \begin{align} x &amp;= u_x + t w_x \\ y &amp;= u_y + t w_y \\ \end{align} where $\vec{U}$ has components $(u_x,u_y)$ and $\vec{W}$ has components $(w_x,w_y)$. 
It's a completely different kind of equation from $y = mx + c$, and it's not helpful to look for correspondences between the two.</p> <p>However, for 2D lines, the equations are related: from the first equation above, you can get $$ t = \frac{x - u_x}{w_x} $$ Then, if you substitute this in the second equation, you get $$ y = u_y + \frac{x - u_x}{w_x} w_y = \left( \frac{w_y}{w_x} \right)x + \left( u_y - \frac{u_xw_y}{w_x} \right) $$ and this is in the form $y=mx+c$, as you wanted. So, for example, the slope of the line is $\frac{w_y}{w_x}$. </p> <p>If $w_x = 0$, you get a vertical line (with "infinite" slope) which you can't represent in the form $y=mx+c$. So, the vector form is more general. </p>
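As a quick sanity check of the conversion described in the answers above, here is a small Python sketch (not part of the original thread; the function name `slope_intercept` is my own) that turns a 2D parametric line $\vec{X}(t)=\vec{u}+t\,\vec{w}$ into slope-intercept form, refusing the vertical case $w_x = 0$ just as the answer notes:

```python
def slope_intercept(u, w):
    """Convert the 2D parametric line X(t) = u + t*w to (m, c) with y = m*x + c.

    u and w are (x, y) pairs. Raises ValueError for vertical lines
    (w_x == 0), which have no slope-intercept form.
    """
    ux, uy = u
    wx, wy = w
    if wx == 0:
        raise ValueError("vertical line: no y = mx + c form")
    m = wy / wx                  # slope = w_y / w_x
    c = uy - ux * wy / wx        # intercept = u_y - u_x * w_y / w_x
    return m, c

# Line through (1, 2) with direction (3, 6): slope 2, intercept 0
print(slope_intercept((1, 2), (3, 6)))  # (2.0, 0.0)
```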
Find the number of three elements sets of positive integers $\{a,b,c\}$ such that $a\times b\times c=2310$.
2,080,757
7
https://math.stackexchange.com/questions/2080757/find-the-number-of-three-elements-sets-of-positive-integers-a-b-c-such-tha
<p>Find the number of three elements sets of positive integers $\{a,b,c\}$ such that $a\times b\times c=2310$.</p> <p>My attempt:</p> <p>Now, $2310=2\times 3\times 5\times 7\times 11=a\times b\times c$</p> <p>$a=2^{x_{1}}3^{y_{1}}5^{z_{1}}7^{w_{1}}11^{t_{1}}$</p> <p>$b=2^{x_{2}}3^{y_{2}}5^{z_{2}}7^{w_{2}}11^{t_{2}}$</p> <p>$c=2^{x_{3}}3^{y_{3}}5^{z_{3}}7^{w_{3}}11^{t_{3}}$</p> <p>where $x_{1}+x_{2}+x_{3}=1$; $x_{1},x_{2},x_{3}$ being non-negative integers.</p> <p>The number of solutions of this equation is clearly $3$, and similarly for the other variables as well.</p> <p>So, the number of solutions $=3^5$.</p> <p>But the answer given is $40$.</p> <p>Am I misinterpreting the question?</p>
[ { "answer_id": 2080785, "score": 10, "text": "<p>$3^5=243$ is the number of <em>ordered triples</em> $\\{a,b,c\\}$ such that $abc=2310$. We are asked for the number of <em>sets</em>. So (i) $a,b,c$ must all be different; and (ii) their order is irrelevant.</p>\n\n<p>Condition (i) rules out triples $\\{1,1,2310\\},\\{1,2310,1\\},$ and $\\{2310,1,1\\},$ which leaves 240.</p>\n\n<p>Condition (ii) means we should divide this by $3!=6$, because each set corresponds to six triples. So we are left with $40$.</p>\n" }, { "answer_id": 2080794, "score": 3, "text": "<p>I got a combinatorial solution.</p>\n\n<p>$2310=2 \\times 3 \\times 5 \\times 7 \\times 11$</p>\n\n<p>$\\dfrac{\\dbinom{5}{1}\\dbinom{4}{1}\\dbinom{3}{3}}{2!}=10\\\\\n\\dfrac{\\dbinom{5}{2}\\dbinom{3}{1}\\dbinom{2}{2}}{2!}=15\\\\\n\\dbinom{5}{3}\\dbinom{2}{2}=10\\\\\n\\dbinom{5}{4}\\dbinom{1}{1}=5$</p>\n\n<p>Total: $40$</p>\n" }, { "answer_id": 2080799, "score": 2, "text": "<p>A different approach: You need to get three factors of $2310$ out of your five numbers. There are two ways to do that. Either include a factor of $1$, or don't.</p>\n\n<p>$A$ (Without having $1$ as a factor): </p>\n\n<p>i) You can have two of the five prime factors as two elements of your set, multiplying the remaining three prime factors together to get the third element. There are $10$ ways to do that.</p>\n\n<p>ii) You can have one of the five prime factors as one element of your set, and then multiply the remaining four prime factors together in pairs, to give the remaining two elements of your set. There are $15$ ways to do that.</p>\n\n<p>$B$ (Having 1 as a factor): </p>\n\n<p>Have a factor of $1$ as one of the elements of your set, and then multiply your five prime factors together to get the two other elements of your set. There are $2$ ways to do that:</p>\n\n<p>i) Choose three of them, multiply those, make that your second element of your set, and then multiply the remaining two for the third element. 
There are $10$ ways to do that. </p>\n\n<p>ii) Choose four of them, multiply those, make that the second element of your set, and then have the remaining one as your third element. There are $5$ ways to do that.</p>\n\n<p>This gives $40$.</p>\n" }, { "answer_id": 2080855, "score": 2, "text": "<p>$5=3+1+1, 1+2+2.$ So making sets without integer $1,$ first possibility, $10$ ways. Second possibility, $15$ ways. If integer $1$ is taken once $, 5=3+2, 4+1.$\nFirst possibility $10$ ways, second possibility $5$ ways. So total $40$ ways.</p>\n" }, { "answer_id": 2080777, "score": 1, "text": "<p>Since we are looking for a set, $\\{2310,1,1\\}$ is not an answer - that is, not all the prime factors are gathered in the same factor.</p>\n\n<p>Then, to avoid double-counting, we can say, for example, that $11$ always divides $a$ and the largest prime factor $q$ that does not divide $a$, divides $b$.</p>\n\n<p>So for $q=7$, we can distribute the smaller primes arbitrarily among the three factors, giving $3^3$ options. Similarly for $q=\\{5,3,2\\}$ there are $\\{3^2, 3^1, 3^0\\}$ options giving a total of $27+9+3+1=\\fbox{40}$ options altogether.</p>\n" }, { "answer_id": 2080785, "score": 10, "text": "<p>$3^5=243$ is the number of <em>ordered triples</em> $\\{a,b,c\\}$ such that $abc=2310$. We are asked for the number of <em>sets</em>. So (i) $a,b,c$ must all be different; and (ii) their order is irrelevant.</p>\n\n<p>Condition (i) rules out triples $\\{1,1,2310\\},\\{1,2310,1\\},$ and $\\{2310,1,1\\},$ which leaves 240.</p>\n\n<p>Condition (ii) means we should divide this by $3!=6$, because each set corresponds to six triples. 
So we are left with $40$.</p>\n" }, { "answer_id": 2080794, "score": 3, "text": "<p>I got a combinatorial solution.</p>\n\n<p>$2310=2 \\times 3 \\times 5 \\times 7 \\times 11$</p>\n\n<p>$\\dfrac{\\dbinom{5}{1}\\dbinom{4}{1}\\dbinom{3}{3}}{2!}=10\\\\\n\\dfrac{\\dbinom{5}{2}\\dbinom{3}{1}\\dbinom{2}{2}}{2!}=15\\\\\n\\dbinom{5}{3}\\dbinom{2}{2}=10\\\\\n\\dbinom{5}{4}\\dbinom{1}{1}=5$</p>\n\n<p>Total: $40$</p>\n" }, { "answer_id": 2080799, "score": 2, "text": "<p>A different approach: You need to get three factors of $2310$ out of your five numbers. There are two ways to do that. Either include a factor of $1$, or don't.</p>\n\n<p>$A$ (Without having $1$ as a factor): </p>\n\n<p>i) You can have two of the five prime factors as two elements of your set, multiplying the remaining three prime factors together to get the third element. There are $10$ ways to do that.</p>\n\n<p>ii) You can have one of the five prime factors as one element of your set, and then multiply the remaining four prime factors together in pairs, to give the remaining two elements of your set. There are $15$ ways to do that.</p>\n\n<p>$B$ (Having 1 as a factor): </p>\n\n<p>Have a factor of $1$ as one of the elements of your set, and then multiply your five prime factors together to get the two other elements of your set. There are $2$ ways to do that:</p>\n\n<p>i) Choose three of them, multiply those, make that your second element of your set, and then multiply the remaining two for the third element. There are $10$ ways to do that. </p>\n\n<p>ii) Choose four of them, multiply those, make that the second element of your set, and then have the remaining one as your third element. There are $5$ ways to do that.</p>\n\n<p>This gives $40$.</p>\n" }, { "answer_id": 2080855, "score": 2, "text": "<p>$5=3+1+1, 1+2+2.$ So making sets without integer $1,$ first possibility, $10$ ways. Second possibility, $15$ ways. 
If integer $1$ is taken once $, 5=3+2, 4+1.$\nFirst possibility $10$ ways, second possibility $5$ ways. So total $40$ ways.</p>\n" }, { "answer_id": 2080777, "score": 1, "text": "<p>Since we are looking for a set, $\\{2310,1,1\\}$ is not an answer - that is, not all the prime factors are gathered in the same factor.</p>\n\n<p>Then, to avoid double-counting, we can say, for example, that $11$ always divides $a$ and the largest prime factor $q$ that does not divide $a$, divides $b$.</p>\n\n<p>So for $q=7$, we can distribute the smaller primes arbitrarily among the three factors, giving $3^3$ options. Similarly for $q=\\{5,3,2\\}$ there are $\\{3^2, 3^1, 3^0\\}$ options giving a total of $27+9+3+1=\\fbox{40}$ options altogether.</p>\n" }, { "answer_id": 2080785, "score": 10, "text": "<p>$3^5=243$ is the number of <em>ordered triples</em> $\\{a,b,c\\}$ such that $abc=2310$. We are asked for the number of <em>sets</em>. So (i) $a,b,c$ must all be different; and (ii) their order is irrelevant.</p>\n\n<p>Condition (i) rules out triples $\\{1,1,2310\\},\\{1,2310,1\\},$ and $\\{2310,1,1\\},$ which leaves 240.</p>\n\n<p>Condition (ii) means we should divide this by $3!=6$, because each set corresponds to six triples. So we are left with $40$.</p>\n" }, { "answer_id": 2080794, "score": 3, "text": "<p>I got a combinatorial solution.</p>\n\n<p>$2310=2 \\times 3 \\times 5 \\times 7 \\times 11$</p>\n\n<p>$\\dfrac{\\dbinom{5}{1}\\dbinom{4}{1}\\dbinom{3}{3}}{2!}=10\\\\\n\\dfrac{\\dbinom{5}{2}\\dbinom{3}{1}\\dbinom{2}{2}}{2!}=15\\\\\n\\dbinom{5}{3}\\dbinom{2}{2}=10\\\\\n\\dbinom{5}{4}\\dbinom{1}{1}=5$</p>\n\n<p>Total: $40$</p>\n" }, { "answer_id": 2080799, "score": 2, "text": "<p>A different approach: You need to get three factors of $2310$ out of your five numbers. There are two ways to do that. 
Either include a factor of $1$, or don't.</p>\n\n<p>$A$ (Without having $1$ as a factor): </p>\n\n<p>i) You can have two of the five prime factors as two elements of your set, multiplying the remaining three prime factors together to get the third element. There are $10$ ways to do that.</p>\n\n<p>ii) You can have one of the five prime factors as one element of your set, and then multiply the remaining four prime factors together in pairs, to give the remaining two elements of your set. There are $15$ ways to do that.</p>\n\n<p>$B$ (Having 1 as a factor): </p>\n\n<p>Have a factor of $1$ as one of the elements of your set, and then multiply your five prime factors together to get the two other elements of your set. There are $2$ ways to do that:</p>\n\n<p>i) Choose three of them, multiply those, make that your second element of your set, and then multiply the remaining two for the third element. There are $10$ ways to do that. </p>\n\n<p>ii) Choose four of them, multiply those, make that the second element of your set, and then have the remaining one as your third element. There are $5$ ways to do that.</p>\n\n<p>This gives $40$.</p>\n" }, { "answer_id": 2080855, "score": 2, "text": "<p>$5=3+1+1, 1+2+2.$ So making sets without integer $1,$ first possibility, $10$ ways. Second possibility, $15$ ways. If integer $1$ is taken once $, 5=3+2, 4+1.$\nFirst possibility $10$ ways, second possibility $5$ ways. So total $40$ ways.</p>\n" }, { "answer_id": 2080777, "score": 1, "text": "<p>Since we are looking for a set, $\\{2310,1,1\\}$ is not an answer - that is, not all the prime factors are gathered in the same factor.</p>\n\n<p>Then, to avoid double-counting, we can say, for example, that $11$ always divides $a$ and the largest prime factor $q$ that does not divide $a$, divides $b$.</p>\n\n<p>So for $q=7$, we can distribute the smaller primes arbitrarily among the three factors, giving $3^3$ options. 
Similarly for $q=\\{5,3,2\\}$ there are $\\{3^2, 3^1, 3^0\\}$ options giving a total of $27+9+3+1=\\fbox{40}$ options altogether.</p>\n" }, { "answer_id": 2080785, "score": 10, "text": "<p>$3^5=243$ is the number of <em>ordered triples</em> $\\{a,b,c\\}$ such that $abc=2310$. We are asked for the number of <em>sets</em>. So (i) $a,b,c$ must all be different; and (ii) their order is irrelevant.</p>\n\n<p>Condition (i) rules out triples $\\{1,1,2310\\},\\{1,2310,1\\},$ and $\\{2310,1,1\\},$ which leaves 240.</p>\n\n<p>Condition (ii) means we should divide this by $3!=6$, because each set corresponds to six triples. So we are left with $40$.</p>\n" }, { "answer_id": 2080794, "score": 3, "text": "<p>I got a combinatorial solution.</p>\n\n<p>$2310=2 \\times 3 \\times 5 \\times 7 \\times 11$</p>\n\n<p>$\\dfrac{\\dbinom{5}{1}\\dbinom{4}{1}\\dbinom{3}{3}}{2!}=10\\\\\n\\dfrac{\\dbinom{5}{2}\\dbinom{3}{1}\\dbinom{2}{2}}{2!}=15\\\\\n\\dbinom{5}{3}\\dbinom{2}{2}=10\\\\\n\\dbinom{5}{4}\\dbinom{1}{1}=5$</p>\n\n<p>Total: $40$</p>\n" }, { "answer_id": 2080799, "score": 2, "text": "<p>A different approach: You need to get three factors of $2310$ out of your five numbers. There are two ways to do that. Either include a factor of $1$, or don't.</p>\n\n<p>$A$ (Without having $1$ as a factor): </p>\n\n<p>i) You can have two of the five prime factors as two elements of your set, multiplying the remaining three prime factors together to get the third element. There are $10$ ways to do that.</p>\n\n<p>ii) You can have one of the five prime factors as one element of your set, and then multiply the remaining four prime factors together in pairs, to give the remaining two elements of your set. There are $15$ ways to do that.</p>\n\n<p>$B$ (Having 1 as a factor): </p>\n\n<p>Have a factor of $1$ as one of the elements of your set, and then multiply your five prime factors together to get the two other elements of your set. 
There are $2$ ways to do that:</p>\n\n<p>i) Choose three of them, multiply those, make that your second element of your set, and then multiply the remaining two for the third element. There are $10$ ways to do that. </p>\n\n<p>ii) Choose four of them, multiply those, make that the second element of your set, and then have the remaining one as your third element. There are $5$ ways to do that.</p>\n\n<p>This gives $40$.</p>\n" }, { "answer_id": 2080855, "score": 2, "text": "<p>$5=3+1+1, 1+2+2.$ So making sets without integer $1,$ first possibility, $10$ ways. Second possibility, $15$ ways. If integer $1$ is taken once $, 5=3+2, 4+1.$\nFirst possibility $10$ ways, second possibility $5$ ways. So total $40$ ways.</p>\n" }, { "answer_id": 2080777, "score": 1, "text": "<p>Since we are looking for a set, $\\{2310,1,1\\}$ is not an answer - that is, not all the prime factors are gathered in the same factor.</p>\n\n<p>Then, to avoid double-counting, we can say, for example, that $11$ always divides $a$ and the largest prime factor $q$ that does not divide $a$, divides $b$.</p>\n\n<p>So for $q=7$, we can distribute the smaller primes arbitrarily among the three factors, giving $3^3$ options. Similarly for $q=\\{5,3,2\\}$ there are $\\{3^2, 3^1, 3^0\\}$ options giving a total of $27+9+3+1=\\fbox{40}$ options altogether.</p>\n" }, { "answer_id": 2080785, "score": 10, "text": "<p>$3^5=243$ is the number of <em>ordered triples</em> $\\{a,b,c\\}$ such that $abc=2310$. We are asked for the number of <em>sets</em>. So (i) $a,b,c$ must all be different; and (ii) their order is irrelevant.</p>\n\n<p>Condition (i) rules out triples $\\{1,1,2310\\},\\{1,2310,1\\},$ and $\\{2310,1,1\\},$ which leaves 240.</p>\n\n<p>Condition (ii) means we should divide this by $3!=6$, because each set corresponds to six triples. 
So we are left with $40$.</p>\n" }, { "answer_id": 2080794, "score": 3, "text": "<p>I got a combinatorial solution.</p>\n\n<p>$2310=2 \\times 3 \\times 5 \\times 7 \\times 11$</p>\n\n<p>$\\dfrac{\\dbinom{5}{1}\\dbinom{4}{1}\\dbinom{3}{3}}{2!}=10\\\\\n\\dfrac{\\dbinom{5}{2}\\dbinom{3}{1}\\dbinom{2}{2}}{2!}=15\\\\\n\\dbinom{5}{3}\\dbinom{2}{2}=10\\\\\n\\dbinom{5}{4}\\dbinom{1}{1}=5$</p>\n\n<p>Total: $40$</p>\n" }, { "answer_id": 2080799, "score": 2, "text": "<p>A different approach: You need to get three factors of $2310$ out of your five numbers. There are two ways to do that. Either include a factor of $1$, or don't.</p>\n\n<p>$A$ (Without having $1$ as a factor): </p>\n\n<p>i) You can have two of the five prime factors as two elements of your set, multiplying the remaining three prime factors together to get the third element. There are $10$ ways to do that.</p>\n\n<p>ii) You can have one of the five prime factors as one element of your set, and then multiply the remaining four prime factors together in pairs, to give the remaining two elements of your set. There are $15$ ways to do that.</p>\n\n<p>$B$ (Having 1 as a factor): </p>\n\n<p>Have a factor of $1$ as one of the elements of your set, and then multiply your five prime factors together to get the two other elements of your set. There are $2$ ways to do that:</p>\n\n<p>i) Choose three of them, multiply those, make that your second element of your set, and then multiply the remaining two for the third element. There are $10$ ways to do that. </p>\n\n<p>ii) Choose four of them, multiply those, make that the second element of your set, and then have the remaining one as your third element. There are $5$ ways to do that.</p>\n\n<p>This gives $40$.</p>\n" }, { "answer_id": 2080855, "score": 2, "text": "<p>$5=3+1+1, 1+2+2.$ So making sets without integer $1,$ first possibility, $10$ ways. Second possibility, $15$ ways. 
If integer $1$ is taken once $, 5=3+2, 4+1.$\nFirst possibility $10$ ways, second possibility $5$ ways. So total $40$ ways.</p>\n" }, { "answer_id": 2080777, "score": 1, "text": "<p>Since we are looking for a set, $\\{2310,1,1\\}$ is not an answer - that is, not all the prime factors are gathered in the same factor.</p>\n\n<p>Then, to avoid double-counting, we can say, for example, that $11$ always divides $a$ and the largest prime factor $q$ that does not divide $a$, divides $b$.</p>\n\n<p>So for $q=7$, we can distribute the smaller primes arbitrarily among the three factors, giving $3^3$ options. Similarly for $q=\\{5,3,2\\}$ there are $\\{3^2, 3^1, 3^0\\}$ options giving a total of $27+9+3+1=\\fbox{40}$ options altogether.</p>\n" }, { "answer_id": 2080785, "score": 10, "text": "<p>$3^5=243$ is the number of <em>ordered triples</em> $\\{a,b,c\\}$ such that $abc=2310$. We are asked for the number of <em>sets</em>. So (i) $a,b,c$ must all be different; and (ii) their order is irrelevant.</p>\n\n<p>Condition (i) rules out triples $\\{1,1,2310\\},\\{1,2310,1\\},$ and $\\{2310,1,1\\},$ which leaves 240.</p>\n\n<p>Condition (ii) means we should divide this by $3!=6$, because each set corresponds to six triples. So we are left with $40$.</p>\n" }, { "answer_id": 2080794, "score": 3, "text": "<p>I got a combinatorial solution.</p>\n\n<p>$2310=2 \\times 3 \\times 5 \\times 7 \\times 11$</p>\n\n<p>$\\dfrac{\\dbinom{5}{1}\\dbinom{4}{1}\\dbinom{3}{3}}{2!}=10\\\\\n\\dfrac{\\dbinom{5}{2}\\dbinom{3}{1}\\dbinom{2}{2}}{2!}=15\\\\\n\\dbinom{5}{3}\\dbinom{2}{2}=10\\\\\n\\dbinom{5}{4}\\dbinom{1}{1}=5$</p>\n\n<p>Total: $40$</p>\n" }, { "answer_id": 2080799, "score": 2, "text": "<p>A different approach: You need to get three factors of $2310$ out of your five numbers. There are two ways to do that. 
Either include a factor of $1$, or don't.</p>\n\n<p>$A$ (Without having $1$ as a factor): </p>\n\n<p>i) You can have two of the five prime factors as two elements of your set, multiplying the remaining three prime factors together to get the third element. There are $10$ ways to do that.</p>\n\n<p>ii) You can have one of the five prime factors as one element of your set, and then multiply the remaining four prime factors together in pairs, to give the remaining two elements of your set. There are $15$ ways to do that.</p>\n\n<p>$B$ (Having 1 as a factor): </p>\n\n<p>Have a factor of $1$ as one of the elements of your set, and then multiply your five prime factors together to get the two other elements of your set. There are $2$ ways to do that:</p>\n\n<p>i) Choose three of them, multiply those, make that your second element of your set, and then multiply the remaining two for the third element. There are $10$ ways to do that. </p>\n\n<p>ii) Choose four of them, multiply those, make that the second element of your set, and then have the remaining one as your third element. There are $5$ ways to do that.</p>\n\n<p>This gives $40$.</p>\n" }, { "answer_id": 2080855, "score": 2, "text": "<p>$5=3+1+1, 1+2+2.$ So making sets without integer $1,$ first possibility, $10$ ways. Second possibility, $15$ ways. If integer $1$ is taken once $, 5=3+2, 4+1.$\nFirst possibility $10$ ways, second possibility $5$ ways. So total $40$ ways.</p>\n" }, { "answer_id": 2080777, "score": 1, "text": "<p>Since we are looking for a set, $\\{2310,1,1\\}$ is not an answer - that is, not all the prime factors are gathered in the same factor.</p>\n\n<p>Then, to avoid double-counting, we can say, for example, that $11$ always divides $a$ and the largest prime factor $q$ that does not divide $a$, divides $b$.</p>\n\n<p>So for $q=7$, we can distribute the smaller primes arbitrarily among the three factors, giving $3^3$ options. 
Similarly for $q=\\{5,3,2\\}$ there are $\\{3^2, 3^1, 3^0\\}$ options giving a total of $27+9+3+1=\\fbox{40}$ options altogether.</p>\n" }, { "answer_id": 2080785, "score": 10, "text": "<p>$3^5=243$ is the number of <em>ordered triples</em> $\\{a,b,c\\}$ such that $abc=2310$. We are asked for the number of <em>sets</em>. So (i) $a,b,c$ must all be different; and (ii) their order is irrelevant.</p>\n\n<p>Condition (i) rules out triples $\\{1,1,2310\\},\\{1,2310,1\\},$ and $\\{2310,1,1\\},$ which leaves 240.</p>\n\n<p>Condition (ii) means we should divide this by $3!=6$, because each set corresponds to six triples. So we are left with $40$.</p>\n" }, { "answer_id": 2080794, "score": 3, "text": "<p>I got a combinatorial solution.</p>\n\n<p>$2310=2 \\times 3 \\times 5 \\times 7 \\times 11$</p>\n\n<p>$\\dfrac{\\dbinom{5}{1}\\dbinom{4}{1}\\dbinom{3}{3}}{2!}=10\\\\\n\\dfrac{\\dbinom{5}{2}\\dbinom{3}{1}\\dbinom{2}{2}}{2!}=15\\\\\n\\dbinom{5}{3}\\dbinom{2}{2}=10\\\\\n\\dbinom{5}{4}\\dbinom{1}{1}=5$</p>\n\n<p>Total: $40$</p>\n" }, { "answer_id": 2080799, "score": 2, "text": "<p>A different approach: You need to get three factors of $2310$ out of your five numbers. There are two ways to do that. Either include a factor of $1$, or don't.</p>\n\n<p>$A$ (Without having $1$ as a factor): </p>\n\n<p>i) You can have two of the five prime factors as two elements of your set, multiplying the remaining three prime factors together to get the third element. There are $10$ ways to do that.</p>\n\n<p>ii) You can have one of the five prime factors as one element of your set, and then multiply the remaining four prime factors together in pairs, to give the remaining two elements of your set. There are $15$ ways to do that.</p>\n\n<p>$B$ (Having 1 as a factor): </p>\n\n<p>Have a factor of $1$ as one of the elements of your set, and then multiply your five prime factors together to get the two other elements of your set. 
There are $2$ ways to do that:</p>\n\n<p>i) Choose three of them, multiply those, make that your second element of your set, and then multiply the remaining two for the third element. There are $10$ ways to do that. </p>\n\n<p>ii) Choose four of them, multiply those, make that the second element of your set, and then have the remaining one as your third element. There are $5$ ways to do that.</p>\n\n<p>This gives $40$.</p>\n" }, { "answer_id": 2080855, "score": 2, "text": "<p>$5=3+1+1, 1+2+2.$ So making sets without integer $1,$ first possibility, $10$ ways. Second possibility, $15$ ways. If integer $1$ is taken once $, 5=3+2, 4+1.$\nFirst possibility $10$ ways, second possibility $5$ ways. So total $40$ ways.</p>\n" }, { "answer_id": 2080777, "score": 1, "text": "<p>Since we are looking for a set, $\\{2310,1,1\\}$ is not an answer - that is, not all the prime factors are gathered in the same factor.</p>\n\n<p>Then, to avoid double-counting, we can say, for example, that $11$ always divides $a$ and the largest prime factor $q$ that does not divide $a$, divides $b$.</p>\n\n<p>So for $q=7$, we can distribute the smaller primes arbitrarily among the three factors, giving $3^3$ options. Similarly for $q=\\{5,3,2\\}$ there are $\\{3^2, 3^1, 3^0\\}$ options giving a total of $27+9+3+1=\\fbox{40}$ options altogether.</p>\n" }, { "answer_id": 2080785, "score": 10, "text": "<p>$3^5=243$ is the number of <em>ordered triples</em> $\\{a,b,c\\}$ such that $abc=2310$. We are asked for the number of <em>sets</em>. So (i) $a,b,c$ must all be different; and (ii) their order is irrelevant.</p>\n\n<p>Condition (i) rules out triples $\\{1,1,2310\\},\\{1,2310,1\\},$ and $\\{2310,1,1\\},$ which leaves 240.</p>\n\n<p>Condition (ii) means we should divide this by $3!=6$, because each set corresponds to six triples. 
So we are left with $40$.</p>\n" }, { "answer_id": 2080794, "score": 3, "text": "<p>I got a combinatorial solution.</p>\n\n<p>$2310=2 \\times 3 \\times 5 \\times 7 \\times 11$</p>\n\n<p>$\\dfrac{\\dbinom{5}{1}\\dbinom{4}{1}\\dbinom{3}{3}}{2!}=10\\\\\n\\dfrac{\\dbinom{5}{2}\\dbinom{3}{1}\\dbinom{2}{2}}{2!}=15\\\\\n\\dbinom{5}{3}\\dbinom{2}{2}=10\\\\\n\\dbinom{5}{4}\\dbinom{1}{1}=5$</p>\n\n<p>Total: $40$</p>\n" } ]
Question: Find the number of three-element sets of positive integers $\{a,b,c\}$ such that $a\times b\times c=2310$. Answer 1: <p>$3^5=243$ is the number of <em>ordered triples</em> $\{a,b,c\}$ such that $abc=2310$. We are asked for the number of <em>sets</em>. So (i) $a,b,c$ must all be different; and (ii) their order is irrelevant.</p> <p>Condition (i) rules out triples $\{1,1,2310\},\{1,2310,1\},$ and $\{2310,1,1\},$ which leaves $240$.</p> <p>Condition (ii) means we should divide this by $3!=6$, because each set corresponds to six triples. So we are left with $40$.</p> Answer 2: <p>I got a combinatorial solution.</p> <p>$2310=2 \times 3 \times 5 \times 7 \times 11$</p> <p>$\dfrac{\dbinom{5}{1}\dbinom{4}{1}\dbinom{3}{3}}{2!}=10\\ \dfrac{\dbinom{5}{2}\dbinom{3}{1}\dbinom{2}{2}}{2!}=15\\ \dbinom{5}{3}\dbinom{2}{2}=10\\ \dbinom{5}{4}\dbinom{1}{1}=5$</p> <p>Total: $40$</p> Answer 3: <p>A different approach: You need to get three factors of $2310$ out of your five numbers. There are two ways to do that. Either include a factor of $1$, or don't.</p> <p>$A$ (Without having $1$ as a factor): </p> <p>i) You can have two of the five prime factors as two elements of your set, multiplying the remaining three prime factors together to get the third element. There are $10$ ways to do that.</p> <p>ii) You can have one of the five prime factors as one element of your set, and then multiply the remaining four prime factors together in pairs, to give the remaining two elements of your set. There are $15$ ways to do that.</p> <p>$B$ (Having 1 as a factor): </p> <p>Have a factor of $1$ as one of the elements of your set, and then multiply your five prime factors together to get the two other elements of your set. There are $2$ ways to do that:</p> <p>i) Choose three of them, multiply those, make that your second element of your set, and then multiply the remaining two for the third element. There are $10$ ways to do that. 
</p> <p>ii) Choose four of them, multiply those, make that the second element of your set, and then have the remaining one as your third element. There are $5$ ways to do that.</p> <p>This gives $40$.</p> Answer 4: <p>$5=3+1+1, 1+2+2.$ So making sets without integer $1,$ first possibility, $10$ ways. Second possibility, $15$ ways. If integer $1$ is taken once, $5=3+2, 4+1.$ First possibility $10$ ways, second possibility $5$ ways. So total $40$ ways.</p> Answer 5: <p>Since we are looking for a set, $\{2310,1,1\}$ is not an answer - that is, not all the prime factors are gathered in the same factor.</p> <p>Then, to avoid double-counting, we can say, for example, that $11$ always divides $a$ and the largest prime factor $q$ that does not divide $a$, divides $b$.</p> <p>So for $q=7$, we can distribute the smaller primes arbitrarily among the three factors, giving $3^3$ options. Similarly for $q=\{5,3,2\}$ there are $\{3^2, 3^1, 3^0\}$ options giving a total of $27+9+3+1=\fbox{40}$ options altogether.</p>
</p> <p>ii) Choose four of them, multiply those, make that the second element of your set, and then have the remaining one as your third element. There are $5$ ways to do that.</p> <p>This gives $40$.</p> Answer 24: <p>$5=3+1+1, 1+2+2.$ So making sets without integer $1,$ first possibility, $10$ ways. Second possibility, $15$ ways. If integer $1$ is taken once $, 5=3+2, 4+1.$ First possibility $10$ ways, second possibility $5$ ways. So total $40$ ways.</p> Answer 25: <p>Since we are looking for a set, $\{2310,1,1\}$ is not an answer - that is, not all the prime factors are gathered in the same factor.</p> <p>Then, to avoid double-counting, we can say, for example, that $11$ always divides $a$ and the largest prime factor $q$ that does not divide $a$, divides $b$.</p> <p>So for $q=7$, we can distribute the smaller primes arbitrarily among the three factors, giving $3^3$ options. Similarly for $q=\{5,3,2\}$ there are $\{3^2, 3^1, 3^0\}$ options giving a total of $27+9+3+1=\fbox{40}$ options altogether.</p> Answer 26: <p>$3^5=243$ is the number of <em>ordered triples</em> $\{a,b,c\}$ such that $abc=2310$. We are asked for the number of <em>sets</em>. So (i) $a,b,c$ must all be different; and (ii) their order is irrelevant.</p> <p>Condition (i) rules out triples $\{1,1,2310\},\{1,2310,1\},$ and $\{2310,1,1\},$ which leaves 240.</p> <p>Condition (ii) means we should divide this by $3!=6$, because each set corresponds to six triples. So we are left with $40$.</p> Answer 27: <p>I got a combinatorial solution.</p> <p>$2310=2 \times 3 \times 5 \times 7 \times 11$</p> <p>$\dfrac{\dbinom{5}{1}\dbinom{4}{1}\dbinom{3}{3}}{2!}=10\\ \dfrac{\dbinom{5}{2}\dbinom{3}{1}\dbinom{2}{2}}{2!}=15\\ \dbinom{5}{3}\dbinom{2}{2}=10\\ \dbinom{5}{4}\dbinom{1}{1}=5$</p> <p>Total: $40$</p> Answer 28: <p>A different approach: You need to get three factors of $2310$ out of your five numbers. There are two ways to do that. 
Either include a factor of $1$, or don't.</p> <p>$A$ (Without having $1$ as a factor): </p> <p>i) You can have two of the five prime factors as two elements of your set, multiplying the remaining three prime factors together to get the third element. There are $10$ ways to do that.</p> <p>ii) You can have one of the five prime factors as one element of your set, and then multiply the remaining four prime factors together in pairs, to give the remaining two elements of your set. There are $15$ ways to do that.</p> <p>$B$ (Having 1 as a factor): </p> <p>Have a factor of $1$ as one of the elements of your set, and then multiply your five prime factors together to get the two other elements of your set. There are $2$ ways to do that:</p> <p>i) Choose three of them, multiply those, make that your second element of your set, and then multiply the remaining two for the third element. There are $10$ ways to do that. </p> <p>ii) Choose four of them, multiply those, make that the second element of your set, and then have the remaining one as your third element. There are $5$ ways to do that.</p> <p>This gives $40$.</p> Answer 29: <p>$5=3+1+1, 1+2+2.$ So making sets without integer $1,$ first possibility, $10$ ways. Second possibility, $15$ ways. If integer $1$ is taken once $, 5=3+2, 4+1.$ First possibility $10$ ways, second possibility $5$ ways. So total $40$ ways.</p> Answer 30: <p>Since we are looking for a set, $\{2310,1,1\}$ is not an answer - that is, not all the prime factors are gathered in the same factor.</p> <p>Then, to avoid double-counting, we can say, for example, that $11$ always divides $a$ and the largest prime factor $q$ that does not divide $a$, divides $b$.</p> <p>So for $q=7$, we can distribute the smaller primes arbitrarily among the three factors, giving $3^3$ options. 
Similarly for $q=\{5,3,2\}$ there are $\{3^2, 3^1, 3^0\}$ options giving a total of $27+9+3+1=\fbox{40}$ options altogether.</p> Answer 31: <p>$3^5=243$ is the number of <em>ordered triples</em> $\{a,b,c\}$ such that $abc=2310$. We are asked for the number of <em>sets</em>. So (i) $a,b,c$ must all be different; and (ii) their order is irrelevant.</p> <p>Condition (i) rules out triples $\{1,1,2310\},\{1,2310,1\},$ and $\{2310,1,1\},$ which leaves 240.</p> <p>Condition (ii) means we should divide this by $3!=6$, because each set corresponds to six triples. So we are left with $40$.</p> Answer 32: <p>I got a combinatorial solution.</p> <p>$2310=2 \times 3 \times 5 \times 7 \times 11$</p> <p>$\dfrac{\dbinom{5}{1}\dbinom{4}{1}\dbinom{3}{3}}{2!}=10\\ \dfrac{\dbinom{5}{2}\dbinom{3}{1}\dbinom{2}{2}}{2!}=15\\ \dbinom{5}{3}\dbinom{2}{2}=10\\ \dbinom{5}{4}\dbinom{1}{1}=5$</p> <p>Total: $40$</p> Answer 33: <p>A different approach: You need to get three factors of $2310$ out of your five numbers. There are two ways to do that. Either include a factor of $1$, or don't.</p> <p>$A$ (Without having $1$ as a factor): </p> <p>i) You can have two of the five prime factors as two elements of your set, multiplying the remaining three prime factors together to get the third element. There are $10$ ways to do that.</p> <p>ii) You can have one of the five prime factors as one element of your set, and then multiply the remaining four prime factors together in pairs, to give the remaining two elements of your set. There are $15$ ways to do that.</p> <p>$B$ (Having 1 as a factor): </p> <p>Have a factor of $1$ as one of the elements of your set, and then multiply your five prime factors together to get the two other elements of your set. There are $2$ ways to do that:</p> <p>i) Choose three of them, multiply those, make that your second element of your set, and then multiply the remaining two for the third element. There are $10$ ways to do that. 
</p> <p>ii) Choose four of them, multiply those, make that the second element of your set, and then have the remaining one as your third element. There are $5$ ways to do that.</p> <p>This gives $40$.</p> Answer 34: <p>$5=3+1+1, 1+2+2.$ So making sets without integer $1,$ first possibility, $10$ ways. Second possibility, $15$ ways. If integer $1$ is taken once $, 5=3+2, 4+1.$ First possibility $10$ ways, second possibility $5$ ways. So total $40$ ways.</p> Answer 35: <p>Since we are looking for a set, $\{2310,1,1\}$ is not an answer - that is, not all the prime factors are gathered in the same factor.</p> <p>Then, to avoid double-counting, we can say, for example, that $11$ always divides $a$ and the largest prime factor $q$ that does not divide $a$, divides $b$.</p> <p>So for $q=7$, we can distribute the smaller primes arbitrarily among the three factors, giving $3^3$ options. Similarly for $q=\{5,3,2\}$ there are $\{3^2, 3^1, 3^0\}$ options giving a total of $27+9+3+1=\fbox{40}$ options altogether.</p> Answer 36: <p>$3^5=243$ is the number of <em>ordered triples</em> $\{a,b,c\}$ such that $abc=2310$. We are asked for the number of <em>sets</em>. So (i) $a,b,c$ must all be different; and (ii) their order is irrelevant.</p> <p>Condition (i) rules out triples $\{1,1,2310\},\{1,2310,1\},$ and $\{2310,1,1\},$ which leaves 240.</p> <p>Condition (ii) means we should divide this by $3!=6$, because each set corresponds to six triples. So we are left with $40$.</p> Answer 37: <p>I got a combinatorial solution.</p> <p>$2310=2 \times 3 \times 5 \times 7 \times 11$</p> <p>$\dfrac{\dbinom{5}{1}\dbinom{4}{1}\dbinom{3}{3}}{2!}=10\\ \dfrac{\dbinom{5}{2}\dbinom{3}{1}\dbinom{2}{2}}{2!}=15\\ \dbinom{5}{3}\dbinom{2}{2}=10\\ \dbinom{5}{4}\dbinom{1}{1}=5$</p> <p>Total: $40$</p> Answer 38: <p>A different approach: You need to get three factors of $2310$ out of your five numbers. There are two ways to do that. 
Either include a factor of $1$, or don't.</p> <p>$A$ (Without having $1$ as a factor): </p> <p>i) You can have two of the five prime factors as two elements of your set, multiplying the remaining three prime factors together to get the third element. There are $10$ ways to do that.</p> <p>ii) You can have one of the five prime factors as one element of your set, and then multiply the remaining four prime factors together in pairs, to give the remaining two elements of your set. There are $15$ ways to do that.</p> <p>$B$ (Having 1 as a factor): </p> <p>Have a factor of $1$ as one of the elements of your set, and then multiply your five prime factors together to get the two other elements of your set. There are $2$ ways to do that:</p> <p>i) Choose three of them, multiply those, make that your second element of your set, and then multiply the remaining two for the third element. There are $10$ ways to do that. </p> <p>ii) Choose four of them, multiply those, make that the second element of your set, and then have the remaining one as your third element. There are $5$ ways to do that.</p> <p>This gives $40$.</p> Answer 39: <p>$5=3+1+1, 1+2+2.$ So making sets without integer $1,$ first possibility, $10$ ways. Second possibility, $15$ ways. If integer $1$ is taken once $, 5=3+2, 4+1.$ First possibility $10$ ways, second possibility $5$ ways. So total $40$ ways.</p> Answer 40: <p>Since we are looking for a set, $\{2310,1,1\}$ is not an answer - that is, not all the prime factors are gathered in the same factor.</p> <p>Then, to avoid double-counting, we can say, for example, that $11$ always divides $a$ and the largest prime factor $q$ that does not divide $a$, divides $b$.</p> <p>So for $q=7$, we can distribute the smaller primes arbitrarily among the three factors, giving $3^3$ options. 
Similarly for $q=\{5,3,2\}$ there are $\{3^2, 3^1, 3^0\}$ options giving a total of $27+9+3+1=\fbox{40}$ options altogether.</p> Answer 41: <p>$3^5=243$ is the number of <em>ordered triples</em> $\{a,b,c\}$ such that $abc=2310$. We are asked for the number of <em>sets</em>. So (i) $a,b,c$ must all be different; and (ii) their order is irrelevant.</p> <p>Condition (i) rules out triples $\{1,1,2310\},\{1,2310,1\},$ and $\{2310,1,1\},$ which leaves 240.</p> <p>Condition (ii) means we should divide this by $3!=6$, because each set corresponds to six triples. So we are left with $40$.</p> Answer 42: <p>I got a combinatorial solution.</p> <p>$2310=2 \times 3 \times 5 \times 7 \times 11$</p> <p>$\dfrac{\dbinom{5}{1}\dbinom{4}{1}\dbinom{3}{3}}{2!}=10\\ \dfrac{\dbinom{5}{2}\dbinom{3}{1}\dbinom{2}{2}}{2!}=15\\ \dbinom{5}{3}\dbinom{2}{2}=10\\ \dbinom{5}{4}\dbinom{1}{1}=5$</p> <p>Total: $40$</p> Answer 43: <p>A different approach: You need to get three factors of $2310$ out of your five numbers. There are two ways to do that. Either include a factor of $1$, or don't.</p> <p>$A$ (Without having $1$ as a factor): </p> <p>i) You can have two of the five prime factors as two elements of your set, multiplying the remaining three prime factors together to get the third element. There are $10$ ways to do that.</p> <p>ii) You can have one of the five prime factors as one element of your set, and then multiply the remaining four prime factors together in pairs, to give the remaining two elements of your set. There are $15$ ways to do that.</p> <p>$B$ (Having 1 as a factor): </p> <p>Have a factor of $1$ as one of the elements of your set, and then multiply your five prime factors together to get the two other elements of your set. There are $2$ ways to do that:</p> <p>i) Choose three of them, multiply those, make that your second element of your set, and then multiply the remaining two for the third element. There are $10$ ways to do that. 
</p> <p>ii) Choose four of them, multiply those, make that the second element of your set, and then have the remaining one as your third element. There are $5$ ways to do that.</p> <p>This gives $40$.</p> Answer 44: <p>$5=3+1+1, 1+2+2.$ So making sets without integer $1,$ first possibility, $10$ ways. Second possibility, $15$ ways. If integer $1$ is taken once $, 5=3+2, 4+1.$ First possibility $10$ ways, second possibility $5$ ways. So total $40$ ways.</p> Answer 45: <p>Since we are looking for a set, $\{2310,1,1\}$ is not an answer - that is, not all the prime factors are gathered in the same factor.</p> <p>Then, to avoid double-counting, we can say, for example, that $11$ always divides $a$ and the largest prime factor $q$ that does not divide $a$, divides $b$.</p> <p>So for $q=7$, we can distribute the smaller primes arbitrarily among the three factors, giving $3^3$ options. Similarly for $q=\{5,3,2\}$ there are $\{3^2, 3^1, 3^0\}$ options giving a total of $27+9+3+1=\fbox{40}$ options altogether.</p> Answer 46: <p>$3^5=243$ is the number of <em>ordered triples</em> $\{a,b,c\}$ such that $abc=2310$. We are asked for the number of <em>sets</em>. So (i) $a,b,c$ must all be different; and (ii) their order is irrelevant.</p> <p>Condition (i) rules out triples $\{1,1,2310\},\{1,2310,1\},$ and $\{2310,1,1\},$ which leaves 240.</p> <p>Condition (ii) means we should divide this by $3!=6$, because each set corresponds to six triples. So we are left with $40$.</p> Answer 47: <p>I got a combinatorial solution.</p> <p>$2310=2 \times 3 \times 5 \times 7 \times 11$</p> <p>$\dfrac{\dbinom{5}{1}\dbinom{4}{1}\dbinom{3}{3}}{2!}=10\\ \dfrac{\dbinom{5}{2}\dbinom{3}{1}\dbinom{2}{2}}{2!}=15\\ \dbinom{5}{3}\dbinom{2}{2}=10\\ \dbinom{5}{4}\dbinom{1}{1}=5$</p> <p>Total: $40$</p> Answer 48: <p>A different approach: You need to get three factors of $2310$ out of your five numbers. There are two ways to do that. 
Either include a factor of $1$, or don't.</p> <p>$A$ (Without having $1$ as a factor): </p> <p>i) You can have two of the five prime factors as two elements of your set, multiplying the remaining three prime factors together to get the third element. There are $10$ ways to do that.</p> <p>ii) You can have one of the five prime factors as one element of your set, and then multiply the remaining four prime factors together in pairs, to give the remaining two elements of your set. There are $15$ ways to do that.</p> <p>$B$ (Having 1 as a factor): </p> <p>Have a factor of $1$ as one of the elements of your set, and then multiply your five prime factors together to get the two other elements of your set. There are $2$ ways to do that:</p> <p>i) Choose three of them, multiply those, make that your second element of your set, and then multiply the remaining two for the third element. There are $10$ ways to do that. </p> <p>ii) Choose four of them, multiply those, make that the second element of your set, and then have the remaining one as your third element. There are $5$ ways to do that.</p> <p>This gives $40$.</p> Answer 49: <p>$5=3+1+1, 1+2+2.$ So making sets without integer $1,$ first possibility, $10$ ways. Second possibility, $15$ ways. If integer $1$ is taken once $, 5=3+2, 4+1.$ First possibility $10$ ways, second possibility $5$ ways. So total $40$ ways.</p> Answer 50: <p>Since we are looking for a set, $\{2310,1,1\}$ is not an answer - that is, not all the prime factors are gathered in the same factor.</p> <p>Then, to avoid double-counting, we can say, for example, that $11$ always divides $a$ and the largest prime factor $q$ that does not divide $a$, divides $b$.</p> <p>So for $q=7$, we can distribute the smaller primes arbitrarily among the three factors, giving $3^3$ options. 
Similarly for $q=\{5,3,2\}$ there are $\{3^2, 3^1, 3^0\}$ options giving a total of $27+9+3+1=\fbox{40}$ options altogether.</p> Answer 51: <p>$3^5=243$ is the number of <em>ordered triples</em> $\{a,b,c\}$ such that $abc=2310$. We are asked for the number of <em>sets</em>. So (i) $a,b,c$ must all be different; and (ii) their order is irrelevant.</p> <p>Condition (i) rules out triples $\{1,1,2310\},\{1,2310,1\},$ and $\{2310,1,1\},$ which leaves 240.</p> <p>Condition (ii) means we should divide this by $3!=6$, because each set corresponds to six triples. So we are left with $40$.</p> Answer 52: <p>I got a combinatorial solution.</p> <p>$2310=2 \times 3 \times 5 \times 7 \times 11$</p> <p>$\dfrac{\dbinom{5}{1}\dbinom{4}{1}\dbinom{3}{3}}{2!}=10\\ \dfrac{\dbinom{5}{2}\dbinom{3}{1}\dbinom{2}{2}}{2!}=15\\ \dbinom{5}{3}\dbinom{2}{2}=10\\ \dbinom{5}{4}\dbinom{1}{1}=5$</p> <p>Total: $40$</p> Answer 53: <p>A different approach: You need to get three factors of $2310$ out of your five numbers. There are two ways to do that. Either include a factor of $1$, or don't.</p> <p>$A$ (Without having $1$ as a factor): </p> <p>i) You can have two of the five prime factors as two elements of your set, multiplying the remaining three prime factors together to get the third element. There are $10$ ways to do that.</p> <p>ii) You can have one of the five prime factors as one element of your set, and then multiply the remaining four prime factors together in pairs, to give the remaining two elements of your set. There are $15$ ways to do that.</p> <p>$B$ (Having 1 as a factor): </p> <p>Have a factor of $1$ as one of the elements of your set, and then multiply your five prime factors together to get the two other elements of your set. There are $2$ ways to do that:</p> <p>i) Choose three of them, multiply those, make that your second element of your set, and then multiply the remaining two for the third element. There are $10$ ways to do that. 
</p> <p>ii) Choose four of them, multiply those, make that the second element of your set, and then have the remaining one as your third element. There are $5$ ways to do that.</p> <p>This gives $40$.</p> Answer 54: <p>$5=3+1+1, 1+2+2.$ So making sets without integer $1,$ first possibility, $10$ ways. Second possibility, $15$ ways. If integer $1$ is taken once $, 5=3+2, 4+1.$ First possibility $10$ ways, second possibility $5$ ways. So total $40$ ways.</p> Answer 55: <p>Since we are looking for a set, $\{2310,1,1\}$ is not an answer - that is, not all the prime factors are gathered in the same factor.</p> <p>Then, to avoid double-counting, we can say, for example, that $11$ always divides $a$ and the largest prime factor $q$ that does not divide $a$, divides $b$.</p> <p>So for $q=7$, we can distribute the smaller primes arbitrarily among the three factors, giving $3^3$ options. Similarly for $q=\{5,3,2\}$ there are $\{3^2, 3^1, 3^0\}$ options giving a total of $27+9+3+1=\fbox{40}$ options altogether.</p> Answer 56: <p>$3^5=243$ is the number of <em>ordered triples</em> $\{a,b,c\}$ such that $abc=2310$. We are asked for the number of <em>sets</em>. So (i) $a,b,c$ must all be different; and (ii) their order is irrelevant.</p> <p>Condition (i) rules out triples $\{1,1,2310\},\{1,2310,1\},$ and $\{2310,1,1\},$ which leaves 240.</p> <p>Condition (ii) means we should divide this by $3!=6$, because each set corresponds to six triples. So we are left with $40$.</p> Answer 57: <p>I got a combinatorial solution.</p> <p>$2310=2 \times 3 \times 5 \times 7 \times 11$</p> <p>$\dfrac{\dbinom{5}{1}\dbinom{4}{1}\dbinom{3}{3}}{2!}=10\\ \dfrac{\dbinom{5}{2}\dbinom{3}{1}\dbinom{2}{2}}{2!}=15\\ \dbinom{5}{3}\dbinom{2}{2}=10\\ \dbinom{5}{4}\dbinom{1}{1}=5$</p> <p>Total: $40$</p> Answer 58: <p>A different approach: You need to get three factors of $2310$ out of your five numbers. There are two ways to do that. 
Either include a factor of $1$, or don't.</p> <p>$A$ (Without having $1$ as a factor): </p> <p>i) You can have two of the five prime factors as two elements of your set, multiplying the remaining three prime factors together to get the third element. There are $10$ ways to do that.</p> <p>ii) You can have one of the five prime factors as one element of your set, and then multiply the remaining four prime factors together in pairs, to give the remaining two elements of your set. There are $15$ ways to do that.</p> <p>$B$ (Having 1 as a factor): </p> <p>Have a factor of $1$ as one of the elements of your set, and then multiply your five prime factors together to get the two other elements of your set. There are $2$ ways to do that:</p> <p>i) Choose three of them, multiply those, make that your second element of your set, and then multiply the remaining two for the third element. There are $10$ ways to do that. </p> <p>ii) Choose four of them, multiply those, make that the second element of your set, and then have the remaining one as your third element. There are $5$ ways to do that.</p> <p>This gives $40$.</p> Answer 59: <p>$5=3+1+1, 1+2+2.$ So making sets without integer $1,$ first possibility, $10$ ways. Second possibility, $15$ ways. If integer $1$ is taken once $, 5=3+2, 4+1.$ First possibility $10$ ways, second possibility $5$ ways. So total $40$ ways.</p> Answer 60: <p>Since we are looking for a set, $\{2310,1,1\}$ is not an answer - that is, not all the prime factors are gathered in the same factor.</p> <p>Then, to avoid double-counting, we can say, for example, that $11$ always divides $a$ and the largest prime factor $q$ that does not divide $a$, divides $b$.</p> <p>So for $q=7$, we can distribute the smaller primes arbitrarily among the three factors, giving $3^3$ options. 
Similarly for $q=\{5,3,2\}$ there are $\{3^2, 3^1, 3^0\}$ options giving a total of $27+9+3+1=\fbox{40}$ options altogether.</p> Answer 61: <p>$3^5=243$ is the number of <em>ordered triples</em> $\{a,b,c\}$ such that $abc=2310$. We are asked for the number of <em>sets</em>. So (i) $a,b,c$ must all be different; and (ii) their order is irrelevant.</p> <p>Condition (i) rules out triples $\{1,1,2310\},\{1,2310,1\},$ and $\{2310,1,1\},$ which leaves 240.</p> <p>Condition (ii) means we should divide this by $3!=6$, because each set corresponds to six triples. So we are left with $40$.</p> Answer 62: <p>I got a combinatorial solution.</p> <p>$2310=2 \times 3 \times 5 \times 7 \times 11$</p> <p>$\dfrac{\dbinom{5}{1}\dbinom{4}{1}\dbinom{3}{3}}{2!}=10\\ \dfrac{\dbinom{5}{2}\dbinom{3}{1}\dbinom{2}{2}}{2!}=15\\ \dbinom{5}{3}\dbinom{2}{2}=10\\ \dbinom{5}{4}\dbinom{1}{1}=5$</p> <p>Total: $40$</p> Answer 63: <p>A different approach: You need to get three factors of $2310$ out of your five numbers. There are two ways to do that. Either include a factor of $1$, or don't.</p> <p>$A$ (Without having $1$ as a factor): </p> <p>i) You can have two of the five prime factors as two elements of your set, multiplying the remaining three prime factors together to get the third element. There are $10$ ways to do that.</p> <p>ii) You can have one of the five prime factors as one element of your set, and then multiply the remaining four prime factors together in pairs, to give the remaining two elements of your set. There are $15$ ways to do that.</p> <p>$B$ (Having 1 as a factor): </p> <p>Have a factor of $1$ as one of the elements of your set, and then multiply your five prime factors together to get the two other elements of your set. There are $2$ ways to do that:</p> <p>i) Choose three of them, multiply those, make that your second element of your set, and then multiply the remaining two for the third element. There are $10$ ways to do that. 
</p> <p>ii) Choose four of them, multiply those, make that the second element of your set, and then have the remaining one as your third element. There are $5$ ways to do that.</p> <p>This gives $40$.</p> Answer 64: <p>$5=3+1+1, 1+2+2.$ So making sets without integer $1,$ first possibility, $10$ ways. Second possibility, $15$ ways. If integer $1$ is taken once $, 5=3+2, 4+1.$ First possibility $10$ ways, second possibility $5$ ways. So total $40$ ways.</p> Answer 65: <p>Since we are looking for a set, $\{2310,1,1\}$ is not an answer - that is, not all the prime factors are gathered in the same factor.</p> <p>Then, to avoid double-counting, we can say, for example, that $11$ always divides $a$ and the largest prime factor $q$ that does not divide $a$, divides $b$.</p> <p>So for $q=7$, we can distribute the smaller primes arbitrarily among the three factors, giving $3^3$ options. Similarly for $q=\{5,3,2\}$ there are $\{3^2, 3^1, 3^0\}$ options giving a total of $27+9+3+1=\fbox{40}$ options altogether.</p> Answer 66: <p>$3^5=243$ is the number of <em>ordered triples</em> $\{a,b,c\}$ such that $abc=2310$. We are asked for the number of <em>sets</em>. So (i) $a,b,c$ must all be different; and (ii) their order is irrelevant.</p> <p>Condition (i) rules out triples $\{1,1,2310\},\{1,2310,1\},$ and $\{2310,1,1\},$ which leaves 240.</p> <p>Condition (ii) means we should divide this by $3!=6$, because each set corresponds to six triples. So we are left with $40$.</p> Answer 67: <p>I got a combinatorial solution.</p> <p>$2310=2 \times 3 \times 5 \times 7 \times 11$</p> <p>$\dfrac{\dbinom{5}{1}\dbinom{4}{1}\dbinom{3}{3}}{2!}=10\\ \dfrac{\dbinom{5}{2}\dbinom{3}{1}\dbinom{2}{2}}{2!}=15\\ \dbinom{5}{3}\dbinom{2}{2}=10\\ \dbinom{5}{4}\dbinom{1}{1}=5$</p> <p>Total: $40$</p> Answer 68: <p>A different approach: You need to get three factors of $2310$ out of your five numbers. There are two ways to do that. 
Either include a factor of $1$, or don't.</p> <p>$A$ (Without having $1$ as a factor): </p> <p>i) You can have two of the five prime factors as two elements of your set, multiplying the remaining three prime factors together to get the third element. There are $10$ ways to do that.</p> <p>ii) You can have one of the five prime factors as one element of your set, and then multiply the remaining four prime factors together in pairs, to give the remaining two elements of your set. There are $15$ ways to do that.</p> <p>$B$ (Having 1 as a factor): </p> <p>Have a factor of $1$ as one of the elements of your set, and then multiply your five prime factors together to get the two other elements of your set. There are $2$ ways to do that:</p> <p>i) Choose three of them, multiply those, make that your second element of your set, and then multiply the remaining two for the third element. There are $10$ ways to do that. </p> <p>ii) Choose four of them, multiply those, make that the second element of your set, and then have the remaining one as your third element. There are $5$ ways to do that.</p> <p>This gives $40$.</p> Answer 69: <p>$5=3+1+1, 1+2+2.$ So making sets without integer $1,$ first possibility, $10$ ways. Second possibility, $15$ ways. If integer $1$ is taken once $, 5=3+2, 4+1.$ First possibility $10$ ways, second possibility $5$ ways. So total $40$ ways.</p> Answer 70: <p>Since we are looking for a set, $\{2310,1,1\}$ is not an answer - that is, not all the prime factors are gathered in the same factor.</p> <p>Then, to avoid double-counting, we can say, for example, that $11$ always divides $a$ and the largest prime factor $q$ that does not divide $a$, divides $b$.</p> <p>So for $q=7$, we can distribute the smaller primes arbitrarily among the three factors, giving $3^3$ options. 
Similarly for $q=\{5,3,2\}$ there are $\{3^2, 3^1, 3^0\}$ options giving a total of $27+9+3+1=\fbox{40}$ options altogether.</p> Answer 71: <p>$3^5=243$ is the number of <em>ordered triples</em> $\{a,b,c\}$ such that $abc=2310$. We are asked for the number of <em>sets</em>. So (i) $a,b,c$ must all be different; and (ii) their order is irrelevant.</p> <p>Condition (i) rules out triples $\{1,1,2310\},\{1,2310,1\},$ and $\{2310,1,1\},$ which leaves 240.</p> <p>Condition (ii) means we should divide this by $3!=6$, because each set corresponds to six triples. So we are left with $40$.</p> Answer 72: <p>I got a combinatorial solution.</p> <p>$2310=2 \times 3 \times 5 \times 7 \times 11$</p> <p>$\dfrac{\dbinom{5}{1}\dbinom{4}{1}\dbinom{3}{3}}{2!}=10\\ \dfrac{\dbinom{5}{2}\dbinom{3}{1}\dbinom{2}{2}}{2!}=15\\ \dbinom{5}{3}\dbinom{2}{2}=10\\ \dbinom{5}{4}\dbinom{1}{1}=5$</p> <p>Total: $40$</p>
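The count of $40$ reached by each of the answers above is easy to confirm by brute force. A minimal sketch (standard library only): enumerate sets of three distinct positive divisors of $2310$ and count those whose product is $2310$.

```python
# Brute-force check of the combinatorial answers above: count the sets
# {a, b, c} of three DISTINCT positive integers with a * b * c = 2310.
from itertools import combinations

N = 2310  # = 2 * 3 * 5 * 7 * 11

divisors = [d for d in range(1, N + 1) if N % d == 0]

# combinations(..., 3) yields a < b < c, so the three factors are
# automatically distinct, and each set is counted exactly once.
triples = [t for t in combinations(divisors, 3)
           if t[0] * t[1] * t[2] == N]

print(len(triples))  # 40
```

This matches the hand count: of the $3^5 = 243$ ordered triples, only the three arrangements of $(1, 1, 2310)$ have a repeat, and $240 / 3! = 40$.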
Is there a &quot;simple&quot; way of proving Stirlings formula?
2,081,194
7
https://math.stackexchange.com/questions/2081194/is-there-a-simple-way-of-proving-stirlings-formula
<p>Is there any way to derive Stirlings formula that only requires some undergraduate knowledge of calculus, real analysis and perhaps some identitets involving the gamma function, maybe Wallis product, and things along those lines? If not, and I know this is a rather vague question, what is the simplest but still sufficiently rigorous way of deriving it? I had a look at <a href="https://math.stackexchange.com/questions/94722/stirlings-formula-proof">Stirling's formula: proof?</a> but the comments seems quite messy. Wikipedia was not particularly helpful either since I have not learned about Laplace's method, Bernoulli numbers or the trapezoidal rule. </p>
[ { "answer_id": 2081217, "score": 6, "text": "<p>As an extremely simple derivation of bounds Stirling's approximation sets, notice that</p>\n\n<p>$$\\ln(n!)=\\sum_{k=1}^n\\ln(n)&gt;\\int_1^n\\ln(x)\\ dx=n\\ln n-n+1$$</p>\n\n<p>This inequality is simply a right Riemann sum of a monotonically increasing function. Similarly,</p>\n\n<p>$$\\frac12\\ln(n)+\\ln((n-1)!)=\\frac12\\left(\\ln(n!)+\\ln((n-1)!)\\right)\\\\=\\frac12\\sum_{k=1}^n\\ln(n)+\\frac12\\sum_{k=1}^{n-1}\\ln(n)&lt;\\int_1^n\\ln(x)\\ dx=n\\ln n-n+1$$</p>\n\n<p>Which is the trapezoidal sum (average of left/right Riemann sums). From each of these, you easily get that</p>\n\n<p>$$e\\left(\\frac ne\\right)^n&lt;n!&lt;\\sqrt{n+1}\\left(\\frac{n+1}e\\right)^n$$</p>\n\n<p>The lower bound may then be improved using</p>\n\n<p>$$n!=n(n-1)!&lt;e\\sqrt n\\left(\\frac ne\\right)^n$$</p>\n\n<hr>\n\n<p>Stronger results that follow this line of thought may be derived by the much better <a href=\"https://en.wikipedia.org/wiki/Euler%E2%80%93Maclaurin_formula\" rel=\"nofollow noreferrer\">Euler-Maclaurin formula</a>.</p>\n" }, { "answer_id": 2081199, "score": 3, "text": "<p>You may find a \"simple\" or better well written proof in the book Real Analysis and Foundations from Steven G. Krantz (p. 277 and 278) which you may find. 
As far as I can see, he uses the Gamma function, l'Hopital's rule and the evaluation in polar coordinates of $$\\int_{-\\infty}^\\infty e^{-s^2} ds$$</p>\n\n<p><a href=\"https://i.stack.imgur.com/nJDuv.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/nJDuv.png\" alt=\"enter image description here\"></a>\n<a href=\"https://i.stack.imgur.com/fyhuz.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/fyhuz.png\" alt=\"enter image description here\"></a></p>\n" } ]
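The elementary bounds derived in the first answer are easy to sanity-check numerically. A small sketch (standard library only; the range of test values of $n$ is arbitrary) verifying both the Riemann/trapezoidal-sum bounds and the improved upper bound:

```python
# Numeric sanity check of the bounds derived from the Riemann-sum and
# trapezoidal-sum comparisons with the integral of ln(x):
#     e * (n/e)^n  <  n!  <  sqrt(n+1) * ((n+1)/e)^n
# together with the improved upper bound  n! < e * sqrt(n) * (n/e)^n.
# (Strict inequalities hold for n >= 2; at n = 1 the lower bound is tight.)
import math

for n in range(2, 60):
    fact = math.factorial(n)
    lower = math.e * (n / math.e) ** n
    upper = math.sqrt(n + 1) * ((n + 1) / math.e) ** n
    improved = math.e * math.sqrt(n) * (n / math.e) ** n
    assert lower < fact < upper
    assert fact < improved
```

The margins are comfortable in floating point: asymptotically $n!$ exceeds the lower bound by a factor of order $\sqrt{n}$, while the upper bounds overshoot by roughly the constant $e/\sqrt{2\pi} \approx 1.08$, consistent with Stirling's formula.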
Question: Is there a &quot;simple&quot; way of proving Stirlings formula? Answer 1: <p>As an extremely simple derivation of bounds Stirling's approximation sets, notice that</p> <p>$$\ln(n!)=\sum_{k=1}^n\ln(n)&gt;\int_1^n\ln(x)\ dx=n\ln n-n+1$$</p> <p>This inequality is simply a right Riemann sum of a monotonically increasing function. Similarly,</p> <p>$$\frac12\ln(n)+\ln((n-1)!)=\frac12\left(\ln(n!)+\ln((n-1)!)\right)\\=\frac12\sum_{k=1}^n\ln(n)+\frac12\sum_{k=1}^{n-1}\ln(n)&lt;\int_1^n\ln(x)\ dx=n\ln n-n+1$$</p> <p>Which is the trapezoidal sum (average of left/right Riemann sums). From each of these, you easily get that</p> <p>$$e\left(\frac ne\right)^n&lt;n!&lt;\sqrt{n+1}\left(\frac{n+1}e\right)^n$$</p> <p>The lower bound may then be improved using</p> <p>$$n!=n(n-1)!&lt;e\sqrt n\left(\frac ne\right)^n$$</p> <hr> <p>Stronger results that follow this line of thought may be derived by the much better <a href="https://en.wikipedia.org/wiki/Euler%E2%80%93Maclaurin_formula" rel="nofollow noreferrer">Euler-Maclaurin formula</a>.</p> Answer 2: <p>You may find a "simple" or better well written proof in the book Real Analysis and Foundations from Steven G. Krantz (p. 277 and 278) which you may find. As far as I can see, he uses the Gamma function, l'Hopital's rule and the evaluation in polar coordinates of $$\int_{-\infty}^\infty e^{-s^2} ds$$</p> <p><a href="https://i.stack.imgur.com/nJDuv.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/nJDuv.png" alt="enter image description here"></a> <a href="https://i.stack.imgur.com/fyhuz.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/fyhuz.png" alt="enter image description here"></a></p> Answer 9: <p>As an extremely simple derivation of bounds Stirling's approximation sets, notice that</p> <p>$$\ln(n!)=\sum_{k=1}^n\ln(n)&gt;\int_1^n\ln(x)\ dx=n\ln n-n+1$$</p> <p>This inequality is simply a right Riemann sum of a monotonically increasing function. Similarly,</p> <p>$$\frac12\ln(n)+\ln((n-1)!)=\frac12\left(\ln(n!)+\ln((n-1)!)\right)\\=\frac12\sum_{k=1}^n\ln(n)+\frac12\sum_{k=1}^{n-1}\ln(n)&lt;\int_1^n\ln(x)\ dx=n\ln n-n+1$$</p> <p>Which is the trapezoidal sum (average of left/right Riemann sums). 
From each of these, you easily get that</p> <p>$$e\left(\frac ne\right)^n&lt;n!&lt;\sqrt{n+1}\left(\frac{n+1}e\right)^n$$</p> <p>The lower bound may then be improved using</p> <p>$$n!=n(n-1)!&lt;e\sqrt n\left(\frac ne\right)^n$$</p> <hr> <p>Stronger results that follow this line of thought may be derived by the much better <a href="https://en.wikipedia.org/wiki/Euler%E2%80%93Maclaurin_formula" rel="nofollow noreferrer">Euler-Maclaurin formula</a>.</p> Answer 10: <p>You may find a "simple" or better well written proof in the book Real Analysis and Foundations from Steven G. Krantz (p. 277 and 278) which you may find. As far as I can see, he uses the Gamma function, l'Hopital's rule and the evaluation in polar coordinates of $$\int_{-\infty}^\infty e^{-s^2} ds$$</p> <p><a href="https://i.stack.imgur.com/nJDuv.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/nJDuv.png" alt="enter image description here"></a> <a href="https://i.stack.imgur.com/fyhuz.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/fyhuz.png" alt="enter image description here"></a></p> Answer 11: <p>As an extremely simple derivation of bounds Stirling's approximation sets, notice that</p> <p>$$\ln(n!)=\sum_{k=1}^n\ln(n)&gt;\int_1^n\ln(x)\ dx=n\ln n-n+1$$</p> <p>This inequality is simply a right Riemann sum of a monotonically increasing function. Similarly,</p> <p>$$\frac12\ln(n)+\ln((n-1)!)=\frac12\left(\ln(n!)+\ln((n-1)!)\right)\\=\frac12\sum_{k=1}^n\ln(n)+\frac12\sum_{k=1}^{n-1}\ln(n)&lt;\int_1^n\ln(x)\ dx=n\ln n-n+1$$</p> <p>Which is the trapezoidal sum (average of left/right Riemann sums). 
From each of these, you easily get that</p> <p>$$e\left(\frac ne\right)^n&lt;n!&lt;\sqrt{n+1}\left(\frac{n+1}e\right)^n$$</p> <p>The lower bound may then be improved using</p> <p>$$n!=n(n-1)!&lt;e\sqrt n\left(\frac ne\right)^n$$</p> <hr> <p>Stronger results that follow this line of thought may be derived by the much better <a href="https://en.wikipedia.org/wiki/Euler%E2%80%93Maclaurin_formula" rel="nofollow noreferrer">Euler-Maclaurin formula</a>.</p> Answer 12: <p>You may find a "simple" or better well written proof in the book Real Analysis and Foundations from Steven G. Krantz (p. 277 and 278) which you may find. As far as I can see, he uses the Gamma function, l'Hopital's rule and the evaluation in polar coordinates of $$\int_{-\infty}^\infty e^{-s^2} ds$$</p> <p><a href="https://i.stack.imgur.com/nJDuv.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/nJDuv.png" alt="enter image description here"></a> <a href="https://i.stack.imgur.com/fyhuz.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/fyhuz.png" alt="enter image description here"></a></p> Answer 13: <p>As an extremely simple derivation of bounds Stirling's approximation sets, notice that</p> <p>$$\ln(n!)=\sum_{k=1}^n\ln(n)&gt;\int_1^n\ln(x)\ dx=n\ln n-n+1$$</p> <p>This inequality is simply a right Riemann sum of a monotonically increasing function. Similarly,</p> <p>$$\frac12\ln(n)+\ln((n-1)!)=\frac12\left(\ln(n!)+\ln((n-1)!)\right)\\=\frac12\sum_{k=1}^n\ln(n)+\frac12\sum_{k=1}^{n-1}\ln(n)&lt;\int_1^n\ln(x)\ dx=n\ln n-n+1$$</p> <p>Which is the trapezoidal sum (average of left/right Riemann sums). 
From each of these, you easily get that</p> <p>$$e\left(\frac ne\right)^n&lt;n!&lt;\sqrt{n+1}\left(\frac{n+1}e\right)^n$$</p> <p>The lower bound may then be improved using</p> <p>$$n!=n(n-1)!&lt;e\sqrt n\left(\frac ne\right)^n$$</p> <hr> <p>Stronger results that follow this line of thought may be derived by the much better <a href="https://en.wikipedia.org/wiki/Euler%E2%80%93Maclaurin_formula" rel="nofollow noreferrer">Euler-Maclaurin formula</a>.</p> Answer 14: <p>You may find a "simple" or better well written proof in the book Real Analysis and Foundations from Steven G. Krantz (p. 277 and 278) which you may find. As far as I can see, he uses the Gamma function, l'Hopital's rule and the evaluation in polar coordinates of $$\int_{-\infty}^\infty e^{-s^2} ds$$</p> <p><a href="https://i.stack.imgur.com/nJDuv.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/nJDuv.png" alt="enter image description here"></a> <a href="https://i.stack.imgur.com/fyhuz.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/fyhuz.png" alt="enter image description here"></a></p> Answer 15: <p>As an extremely simple derivation of bounds Stirling's approximation sets, notice that</p> <p>$$\ln(n!)=\sum_{k=1}^n\ln(n)&gt;\int_1^n\ln(x)\ dx=n\ln n-n+1$$</p> <p>This inequality is simply a right Riemann sum of a monotonically increasing function. Similarly,</p> <p>$$\frac12\ln(n)+\ln((n-1)!)=\frac12\left(\ln(n!)+\ln((n-1)!)\right)\\=\frac12\sum_{k=1}^n\ln(n)+\frac12\sum_{k=1}^{n-1}\ln(n)&lt;\int_1^n\ln(x)\ dx=n\ln n-n+1$$</p> <p>Which is the trapezoidal sum (average of left/right Riemann sums). 
From each of these, you easily get that</p> <p>$$e\left(\frac ne\right)^n&lt;n!&lt;\sqrt{n+1}\left(\frac{n+1}e\right)^n$$</p> <p>The lower bound may then be improved using</p> <p>$$n!=n(n-1)!&lt;e\sqrt n\left(\frac ne\right)^n$$</p> <hr> <p>Stronger results that follow this line of thought may be derived by the much better <a href="https://en.wikipedia.org/wiki/Euler%E2%80%93Maclaurin_formula" rel="nofollow noreferrer">Euler-Maclaurin formula</a>.</p> Answer 16: <p>You may find a "simple" or better well written proof in the book Real Analysis and Foundations from Steven G. Krantz (p. 277 and 278) which you may find. As far as I can see, he uses the Gamma function, l'Hopital's rule and the evaluation in polar coordinates of $$\int_{-\infty}^\infty e^{-s^2} ds$$</p> <p><a href="https://i.stack.imgur.com/nJDuv.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/nJDuv.png" alt="enter image description here"></a> <a href="https://i.stack.imgur.com/fyhuz.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/fyhuz.png" alt="enter image description here"></a></p> Answer 17: <p>As an extremely simple derivation of bounds Stirling's approximation sets, notice that</p> <p>$$\ln(n!)=\sum_{k=1}^n\ln(n)&gt;\int_1^n\ln(x)\ dx=n\ln n-n+1$$</p> <p>This inequality is simply a right Riemann sum of a monotonically increasing function. Similarly,</p> <p>$$\frac12\ln(n)+\ln((n-1)!)=\frac12\left(\ln(n!)+\ln((n-1)!)\right)\\=\frac12\sum_{k=1}^n\ln(n)+\frac12\sum_{k=1}^{n-1}\ln(n)&lt;\int_1^n\ln(x)\ dx=n\ln n-n+1$$</p> <p>Which is the trapezoidal sum (average of left/right Riemann sums). 
From each of these, you easily get that</p> <p>$$e\left(\frac ne\right)^n&lt;n!&lt;\sqrt{n+1}\left(\frac{n+1}e\right)^n$$</p> <p>The lower bound may then be improved using</p> <p>$$n!=n(n-1)!&lt;e\sqrt n\left(\frac ne\right)^n$$</p> <hr> <p>Stronger results that follow this line of thought may be derived by the much better <a href="https://en.wikipedia.org/wiki/Euler%E2%80%93Maclaurin_formula" rel="nofollow noreferrer">Euler-Maclaurin formula</a>.</p> Answer 18: <p>You may find a "simple" or better well written proof in the book Real Analysis and Foundations from Steven G. Krantz (p. 277 and 278) which you may find. As far as I can see, he uses the Gamma function, l'Hopital's rule and the evaluation in polar coordinates of $$\int_{-\infty}^\infty e^{-s^2} ds$$</p> <p><a href="https://i.stack.imgur.com/nJDuv.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/nJDuv.png" alt="enter image description here"></a> <a href="https://i.stack.imgur.com/fyhuz.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/fyhuz.png" alt="enter image description here"></a></p> Answer 19: <p>As an extremely simple derivation of bounds Stirling's approximation sets, notice that</p> <p>$$\ln(n!)=\sum_{k=1}^n\ln(n)&gt;\int_1^n\ln(x)\ dx=n\ln n-n+1$$</p> <p>This inequality is simply a right Riemann sum of a monotonically increasing function. Similarly,</p> <p>$$\frac12\ln(n)+\ln((n-1)!)=\frac12\left(\ln(n!)+\ln((n-1)!)\right)\\=\frac12\sum_{k=1}^n\ln(n)+\frac12\sum_{k=1}^{n-1}\ln(n)&lt;\int_1^n\ln(x)\ dx=n\ln n-n+1$$</p> <p>Which is the trapezoidal sum (average of left/right Riemann sums). 
From each of these, you easily get that</p> <p>$$e\left(\frac ne\right)^n&lt;n!&lt;\sqrt{n+1}\left(\frac{n+1}e\right)^n$$</p> <p>The lower bound may then be improved using</p> <p>$$n!=n(n-1)!&lt;e\sqrt n\left(\frac ne\right)^n$$</p> <hr> <p>Stronger results that follow this line of thought may be derived by the much better <a href="https://en.wikipedia.org/wiki/Euler%E2%80%93Maclaurin_formula" rel="nofollow noreferrer">Euler-Maclaurin formula</a>.</p> Answer 20: <p>You may find a "simple" or better well written proof in the book Real Analysis and Foundations from Steven G. Krantz (p. 277 and 278) which you may find. As far as I can see, he uses the Gamma function, l'Hopital's rule and the evaluation in polar coordinates of $$\int_{-\infty}^\infty e^{-s^2} ds$$</p> <p><a href="https://i.stack.imgur.com/nJDuv.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/nJDuv.png" alt="enter image description here"></a> <a href="https://i.stack.imgur.com/fyhuz.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/fyhuz.png" alt="enter image description here"></a></p> Answer 21: <p>As an extremely simple derivation of bounds Stirling's approximation sets, notice that</p> <p>$$\ln(n!)=\sum_{k=1}^n\ln(n)&gt;\int_1^n\ln(x)\ dx=n\ln n-n+1$$</p> <p>This inequality is simply a right Riemann sum of a monotonically increasing function. Similarly,</p> <p>$$\frac12\ln(n)+\ln((n-1)!)=\frac12\left(\ln(n!)+\ln((n-1)!)\right)\\=\frac12\sum_{k=1}^n\ln(n)+\frac12\sum_{k=1}^{n-1}\ln(n)&lt;\int_1^n\ln(x)\ dx=n\ln n-n+1$$</p> <p>Which is the trapezoidal sum (average of left/right Riemann sums). 
From each of these, you easily get that</p> <p>$$e\left(\frac ne\right)^n&lt;n!&lt;\sqrt{n+1}\left(\frac{n+1}e\right)^n$$</p> <p>The lower bound may then be improved using</p> <p>$$n!=n(n-1)!&lt;e\sqrt n\left(\frac ne\right)^n$$</p> <hr> <p>Stronger results that follow this line of thought may be derived by the much better <a href="https://en.wikipedia.org/wiki/Euler%E2%80%93Maclaurin_formula" rel="nofollow noreferrer">Euler-Maclaurin formula</a>.</p> Answer 22: <p>You may find a "simple" or better well written proof in the book Real Analysis and Foundations from Steven G. Krantz (p. 277 and 278) which you may find. As far as I can see, he uses the Gamma function, l'Hopital's rule and the evaluation in polar coordinates of $$\int_{-\infty}^\infty e^{-s^2} ds$$</p> <p><a href="https://i.stack.imgur.com/nJDuv.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/nJDuv.png" alt="enter image description here"></a> <a href="https://i.stack.imgur.com/fyhuz.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/fyhuz.png" alt="enter image description here"></a></p> Answer 23: <p>As an extremely simple derivation of bounds Stirling's approximation sets, notice that</p> <p>$$\ln(n!)=\sum_{k=1}^n\ln(n)&gt;\int_1^n\ln(x)\ dx=n\ln n-n+1$$</p> <p>This inequality is simply a right Riemann sum of a monotonically increasing function. Similarly,</p> <p>$$\frac12\ln(n)+\ln((n-1)!)=\frac12\left(\ln(n!)+\ln((n-1)!)\right)\\=\frac12\sum_{k=1}^n\ln(n)+\frac12\sum_{k=1}^{n-1}\ln(n)&lt;\int_1^n\ln(x)\ dx=n\ln n-n+1$$</p> <p>Which is the trapezoidal sum (average of left/right Riemann sums). 
From each of these, you easily get that</p> <p>$$e\left(\frac ne\right)^n&lt;n!&lt;\sqrt{n+1}\left(\frac{n+1}e\right)^n$$</p> <p>The lower bound may then be improved using</p> <p>$$n!=n(n-1)!&lt;e\sqrt n\left(\frac ne\right)^n$$</p> <hr> <p>Stronger results that follow this line of thought may be derived by the much better <a href="https://en.wikipedia.org/wiki/Euler%E2%80%93Maclaurin_formula" rel="nofollow noreferrer">Euler-Maclaurin formula</a>.</p> Answer 24: <p>You may find a "simple" or better well written proof in the book Real Analysis and Foundations from Steven G. Krantz (p. 277 and 278) which you may find. As far as I can see, he uses the Gamma function, l'Hopital's rule and the evaluation in polar coordinates of $$\int_{-\infty}^\infty e^{-s^2} ds$$</p> <p><a href="https://i.stack.imgur.com/nJDuv.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/nJDuv.png" alt="enter image description here"></a> <a href="https://i.stack.imgur.com/fyhuz.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/fyhuz.png" alt="enter image description here"></a></p> Answer 25: <p>As an extremely simple derivation of bounds Stirling's approximation sets, notice that</p> <p>$$\ln(n!)=\sum_{k=1}^n\ln(n)&gt;\int_1^n\ln(x)\ dx=n\ln n-n+1$$</p> <p>This inequality is simply a right Riemann sum of a monotonically increasing function. Similarly,</p> <p>$$\frac12\ln(n)+\ln((n-1)!)=\frac12\left(\ln(n!)+\ln((n-1)!)\right)\\=\frac12\sum_{k=1}^n\ln(n)+\frac12\sum_{k=1}^{n-1}\ln(n)&lt;\int_1^n\ln(x)\ dx=n\ln n-n+1$$</p> <p>Which is the trapezoidal sum (average of left/right Riemann sums). 
From each of these, you easily get that</p> <p>$$e\left(\frac ne\right)^n&lt;n!&lt;\sqrt{n+1}\left(\frac{n+1}e\right)^n$$</p> <p>The lower bound may then be improved using</p> <p>$$n!=n(n-1)!&lt;e\sqrt n\left(\frac ne\right)^n$$</p> <hr> <p>Stronger results that follow this line of thought may be derived by the much better <a href="https://en.wikipedia.org/wiki/Euler%E2%80%93Maclaurin_formula" rel="nofollow noreferrer">Euler-Maclaurin formula</a>.</p> Answer 26: <p>You may find a "simple" or better well written proof in the book Real Analysis and Foundations from Steven G. Krantz (p. 277 and 278) which you may find. As far as I can see, he uses the Gamma function, l'Hopital's rule and the evaluation in polar coordinates of $$\int_{-\infty}^\infty e^{-s^2} ds$$</p> <p><a href="https://i.stack.imgur.com/nJDuv.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/nJDuv.png" alt="enter image description here"></a> <a href="https://i.stack.imgur.com/fyhuz.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/fyhuz.png" alt="enter image description here"></a></p> Answer 27: <p>As an extremely simple derivation of bounds Stirling's approximation sets, notice that</p> <p>$$\ln(n!)=\sum_{k=1}^n\ln(n)&gt;\int_1^n\ln(x)\ dx=n\ln n-n+1$$</p> <p>This inequality is simply a right Riemann sum of a monotonically increasing function. Similarly,</p> <p>$$\frac12\ln(n)+\ln((n-1)!)=\frac12\left(\ln(n!)+\ln((n-1)!)\right)\\=\frac12\sum_{k=1}^n\ln(n)+\frac12\sum_{k=1}^{n-1}\ln(n)&lt;\int_1^n\ln(x)\ dx=n\ln n-n+1$$</p> <p>Which is the trapezoidal sum (average of left/right Riemann sums). 
From each of these, you easily get that</p> <p>$$e\left(\frac ne\right)^n&lt;n!&lt;\sqrt{n+1}\left(\frac{n+1}e\right)^n$$</p> <p>The lower bound may then be improved using</p> <p>$$n!=n(n-1)!&lt;e\sqrt n\left(\frac ne\right)^n$$</p> <hr> <p>Stronger results that follow this line of thought may be derived by the much better <a href="https://en.wikipedia.org/wiki/Euler%E2%80%93Maclaurin_formula" rel="nofollow noreferrer">Euler-Maclaurin formula</a>.</p> Answer 28: <p>You may find a "simple" or better well written proof in the book Real Analysis and Foundations from Steven G. Krantz (p. 277 and 278) which you may find. As far as I can see, he uses the Gamma function, l'Hopital's rule and the evaluation in polar coordinates of $$\int_{-\infty}^\infty e^{-s^2} ds$$</p> <p><a href="https://i.stack.imgur.com/nJDuv.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/nJDuv.png" alt="enter image description here"></a> <a href="https://i.stack.imgur.com/fyhuz.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/fyhuz.png" alt="enter image description here"></a></p> Answer 29: <p>As an extremely simple derivation of bounds Stirling's approximation sets, notice that</p> <p>$$\ln(n!)=\sum_{k=1}^n\ln(n)&gt;\int_1^n\ln(x)\ dx=n\ln n-n+1$$</p> <p>This inequality is simply a right Riemann sum of a monotonically increasing function. Similarly,</p> <p>$$\frac12\ln(n)+\ln((n-1)!)=\frac12\left(\ln(n!)+\ln((n-1)!)\right)\\=\frac12\sum_{k=1}^n\ln(n)+\frac12\sum_{k=1}^{n-1}\ln(n)&lt;\int_1^n\ln(x)\ dx=n\ln n-n+1$$</p> <p>Which is the trapezoidal sum (average of left/right Riemann sums). 
From each of these, you easily get that</p> <p>$$e\left(\frac ne\right)^n&lt;n!&lt;\sqrt{n+1}\left(\frac{n+1}e\right)^n$$</p> <p>The lower bound may then be improved using</p> <p>$$n!=n(n-1)!&lt;e\sqrt n\left(\frac ne\right)^n$$</p> <hr> <p>Stronger results that follow this line of thought may be derived by the much better <a href="https://en.wikipedia.org/wiki/Euler%E2%80%93Maclaurin_formula" rel="nofollow noreferrer">Euler-Maclaurin formula</a>.</p> Answer 30: <p>You may find a "simple" or better well written proof in the book Real Analysis and Foundations from Steven G. Krantz (p. 277 and 278) which you may find. As far as I can see, he uses the Gamma function, l'Hopital's rule and the evaluation in polar coordinates of $$\int_{-\infty}^\infty e^{-s^2} ds$$</p> <p><a href="https://i.stack.imgur.com/nJDuv.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/nJDuv.png" alt="enter image description here"></a> <a href="https://i.stack.imgur.com/fyhuz.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/fyhuz.png" alt="enter image description here"></a></p>
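The two-sided bound derived in the first answer, $e(n/e)^n < n! < e\sqrt n\,(n/e)^n$, can be sanity-checked numerically. The sketch below is my addition, not part of either answer, and the helper names are mine:

```python
import math

def stirling_lower(n):
    # e * (n/e)^n -- the right-Riemann-sum lower bound for n!
    return math.e * (n / math.e) ** n

def stirling_upper(n):
    # e * sqrt(n) * (n/e)^n -- the improved trapezoidal upper bound for n!
    return math.e * math.sqrt(n) * (n / math.e) ** n

# both bounds are strict for n >= 2
for n in range(2, 20):
    assert stirling_lower(n) < math.factorial(n) < stirling_upper(n)
```

For $n=10$ the bounds come out to roughly $1.23\times10^6$ and $3.90\times10^6$ against $10! = 3{,}628{,}800$, so the trapezoidal upper bound is already within about 8%.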
Integral $\int \frac{dx}{\sin^2 (x) + \sin(2x)}$
2,081,669
7
https://math.stackexchange.com/questions/2081669/integral-int-fracdx-sin2-x-sin2x
<p>We have to do the following integration $$\int \frac{dx}{\sin^2 (x) + \sin(2x)}$$</p> <p>I tried as</p> <p>In the numerator, $1 = \sin^2(x)+\sin(2x)+\cos^2(x)-\sin(2x)$</p> <p>But now how to proceed?</p>
[ { "answer_id": 2081686, "score": 5, "text": "<p>\\begin{align}\n\\int\\frac{dx}{\\sin^2(x)+\\sin(2x)}&amp;=\\int\\frac{\\sec^2(x)}{\\sin^2(x)\\sec^2(x)+2\\sin(x)\\cos(x)\\sec^2(x)}dx\\\\\n&amp;=\\int\\frac{\\sec^2(x)}{\\tan^2(x)+2\\tan(x)}dx\n\\end{align}</p>\n\n<p>Letting $u:=\\tan(x)$, then $du=\\sec^2(x)dx$ gives</p>\n\n<p>\\begin{align}\n\\int\\frac{dx}{\\sin^2(x)+\\sin(2x)}&amp;=\\int\\frac{du}{u^2+2u}\\\\\n&amp;=\\int\\frac{du}{(u+1)^2-1}\n\\end{align}</p>\n\n<p>Letting $z:=u+1$, then $dz = du$ gives</p>\n\n<p>\\begin{align}\n\\int\\frac{dx}{\\sin^2(x)+\\sin(2x)}&amp;=\\int\\frac{dz}{z^2-1}\\\\\n&amp;=-\\text{artanh }(z)+C\\\\\n&amp;=-\\text{artanh }(u+1)+C\\\\\n\\end{align}</p>\n\n<p>Reversing the final substitution gives</p>\n\n<blockquote>\n <p>$$\\int\\frac{dx}{\\sin^2(x)+\\sin(2x)} = -\\text{artanh }(\\tan(x)+1)+C$$</p>\n</blockquote>\n" }, { "answer_id": 2081677, "score": 2, "text": "<p>HINT:</p>\n\n<p>$$\\dfrac1{\\sin^2x+\\sin2x}=\\dfrac{\\sec^2x}{\\tan^2x+2\\tan x}$$</p>\n" }, { "answer_id": 2081680, "score": 2, "text": "<p>We can transform our integral as: $$I =\\int \\frac {1}{\\sin^2 x +\\sin 2x} dx =\\int -\\left(\\frac {-1}{2\\cot x+1}\\right)\\mathrm {cosec}^2 x \\, dx $$ Using $v=\\cot x $, this can be easily solved. Hope it helps. </p>\n" }, { "answer_id": 2081681, "score": 1, "text": "<p>First use the fact that $\\sin(2x) = 2\\sin(x)\\cos(x)$ to obtain\n$$\\int \\frac{1}{\\sin^2(x) + 2\\cos(x)\\sin(x)}dx$$\nDivide numerator and denominator by $\\sin^2(x)$:\n$$\\int \\frac{\\frac{1}{\\sin^2(x)}}{1 + 2\\frac{\\cos(x)}{\\sin(x)}}dx = \\int \\frac{\\csc^2(x)}{1 + 2\\cot(x)}dx$$\nThen you can use the substitution $u = 2\\cot(x)+1$; $du = -2\\csc^2(x)dx$:\n$$-\\frac{1}{2}\\int\\frac{1}{u}du = -\\frac{\\ln|u|}{2}+c$$\nAnd substitute back $u = 2\\cot(x)+1$.</p>\n" } ]
</p>\n" }, { "answer_id": 2081681, "score": 1, "text": "<p>First use the fact that $\\sin(2x) = 2\\sin(x)\\cos(x)$ to obtain\n$$\\int \\frac{1}{\\sin^2(x) + 2\\cos(x)\\sin(x)}dx$$\nDivide numerator and denominator by $\\sin^2(x)$:\n$$\\int \\frac{\\frac{1}{\\sin^2(x)}}{1 + 2\\frac{\\cos(x)}{\\sin(x)}} = \\int \\frac{\\csc^2(x)}{1 + 2\\cot(x)}dx$$\nThen you can use the substitution $u = 2\\cot(x)+1$; $du = -2\\csc(x)dx$:\n$$-\\frac{1}{2}\\int\\frac{1}{u}du = -\\frac{\\ln(u)}{2}+c$$\nAnd substitute $u$ for $2\\cot(x)+1$ again.</p>\n" }, { "answer_id": 2081686, "score": 5, "text": "<p>\\begin{align}\n\\int\\frac{dx}{\\sin^2(x)+\\sin(2x)}&amp;=\\int\\frac{\\sec^2(x)}{\\sin^2(x)\\sec^2(x)+2\\sin(x)\\cos(x)\\sec^2(x)}dx\\\\\n&amp;=\\int\\frac{\\sec^2(x)}{\\tan^2(x)+2\\tan(x)}dx\n\\end{align}</p>\n\n<p>Letting $u:=\\tan(x)$, then $du=\\sec^2(x)dx$ gives</p>\n\n<p>\\begin{align}\n\\int\\frac{dx}{\\sin^2(x)+\\sin(2x)}&amp;=\\int\\frac{du}{u^2+2u}\\\\\n&amp;=\\int\\frac{du}{(u+1)^2-1}\n\\end{align}</p>\n\n<p>Letting $z:=u+1$, then $dz = du$ gives</p>\n\n<p>\\begin{align}\n\\int\\frac{dx}{\\sin^2(x)+\\sin(2x)}&amp;=\\int\\frac{dz}{z^2-1}\\\\\n&amp;=-\\text{artanh }(z)+C\\\\\n&amp;=-\\text{artanh }(u+1)+C\\\\\n\\end{align}</p>\n\n<p>Reversing the final substitution gives</p>\n\n<blockquote>\n <p>$$\\int\\frac{dx}{\\sin^2(x)+\\sin(2x)} = -\\text{artanh }(\\tan(x)+1)+C$$</p>\n</blockquote>\n" }, { "answer_id": 2081677, "score": 2, "text": "<p>HINT:</p>\n\n<p>$$\\dfrac1{\\sin^2x+\\sin2x}=\\dfrac{\\sec^2x}{\\tan^2x+2\\tan x}$$</p>\n" }, { "answer_id": 2081680, "score": 2, "text": "<p>We can transform our integral as: $$I =\\int \\frac {1}{\\sin^2 x +\\sin 2x} dx =\\int -(\\frac {-1}{2\\cot x+1})\\mathrm {cosec}^2 x dx $$ Using $v=\\cot x $, this can be easily solved. Hope it helps. 
</p>\n" }, { "answer_id": 2081681, "score": 1, "text": "<p>First use the fact that $\\sin(2x) = 2\\sin(x)\\cos(x)$ to obtain\n$$\\int \\frac{1}{\\sin^2(x) + 2\\cos(x)\\sin(x)}dx$$\nDivide numerator and denominator by $\\sin^2(x)$:\n$$\\int \\frac{\\frac{1}{\\sin^2(x)}}{1 + 2\\frac{\\cos(x)}{\\sin(x)}} = \\int \\frac{\\csc^2(x)}{1 + 2\\cot(x)}dx$$\nThen you can use the substitution $u = 2\\cot(x)+1$; $du = -2\\csc(x)dx$:\n$$-\\frac{1}{2}\\int\\frac{1}{u}du = -\\frac{\\ln(u)}{2}+c$$\nAnd substitute $u$ for $2\\cot(x)+1$ again.</p>\n" }, { "answer_id": 2081686, "score": 5, "text": "<p>\\begin{align}\n\\int\\frac{dx}{\\sin^2(x)+\\sin(2x)}&amp;=\\int\\frac{\\sec^2(x)}{\\sin^2(x)\\sec^2(x)+2\\sin(x)\\cos(x)\\sec^2(x)}dx\\\\\n&amp;=\\int\\frac{\\sec^2(x)}{\\tan^2(x)+2\\tan(x)}dx\n\\end{align}</p>\n\n<p>Letting $u:=\\tan(x)$, then $du=\\sec^2(x)dx$ gives</p>\n\n<p>\\begin{align}\n\\int\\frac{dx}{\\sin^2(x)+\\sin(2x)}&amp;=\\int\\frac{du}{u^2+2u}\\\\\n&amp;=\\int\\frac{du}{(u+1)^2-1}\n\\end{align}</p>\n\n<p>Letting $z:=u+1$, then $dz = du$ gives</p>\n\n<p>\\begin{align}\n\\int\\frac{dx}{\\sin^2(x)+\\sin(2x)}&amp;=\\int\\frac{dz}{z^2-1}\\\\\n&amp;=-\\text{artanh }(z)+C\\\\\n&amp;=-\\text{artanh }(u+1)+C\\\\\n\\end{align}</p>\n\n<p>Reversing the final substitution gives</p>\n\n<blockquote>\n <p>$$\\int\\frac{dx}{\\sin^2(x)+\\sin(2x)} = -\\text{artanh }(\\tan(x)+1)+C$$</p>\n</blockquote>\n" }, { "answer_id": 2081677, "score": 2, "text": "<p>HINT:</p>\n\n<p>$$\\dfrac1{\\sin^2x+\\sin2x}=\\dfrac{\\sec^2x}{\\tan^2x+2\\tan x}$$</p>\n" }, { "answer_id": 2081680, "score": 2, "text": "<p>We can transform our integral as: $$I =\\int \\frac {1}{\\sin^2 x +\\sin 2x} dx =\\int -(\\frac {-1}{2\\cot x+1})\\mathrm {cosec}^2 x dx $$ Using $v=\\cot x $, this can be easily solved. Hope it helps. 
</p>\n" }, { "answer_id": 2081681, "score": 1, "text": "<p>First use the fact that $\\sin(2x) = 2\\sin(x)\\cos(x)$ to obtain\n$$\\int \\frac{1}{\\sin^2(x) + 2\\cos(x)\\sin(x)}dx$$\nDivide numerator and denominator by $\\sin^2(x)$:\n$$\\int \\frac{\\frac{1}{\\sin^2(x)}}{1 + 2\\frac{\\cos(x)}{\\sin(x)}} = \\int \\frac{\\csc^2(x)}{1 + 2\\cot(x)}dx$$\nThen you can use the substitution $u = 2\\cot(x)+1$; $du = -2\\csc(x)dx$:\n$$-\\frac{1}{2}\\int\\frac{1}{u}du = -\\frac{\\ln(u)}{2}+c$$\nAnd substitute $u$ for $2\\cot(x)+1$ again.</p>\n" }, { "answer_id": 2081686, "score": 5, "text": "<p>\\begin{align}\n\\int\\frac{dx}{\\sin^2(x)+\\sin(2x)}&amp;=\\int\\frac{\\sec^2(x)}{\\sin^2(x)\\sec^2(x)+2\\sin(x)\\cos(x)\\sec^2(x)}dx\\\\\n&amp;=\\int\\frac{\\sec^2(x)}{\\tan^2(x)+2\\tan(x)}dx\n\\end{align}</p>\n\n<p>Letting $u:=\\tan(x)$, then $du=\\sec^2(x)dx$ gives</p>\n\n<p>\\begin{align}\n\\int\\frac{dx}{\\sin^2(x)+\\sin(2x)}&amp;=\\int\\frac{du}{u^2+2u}\\\\\n&amp;=\\int\\frac{du}{(u+1)^2-1}\n\\end{align}</p>\n\n<p>Letting $z:=u+1$, then $dz = du$ gives</p>\n\n<p>\\begin{align}\n\\int\\frac{dx}{\\sin^2(x)+\\sin(2x)}&amp;=\\int\\frac{dz}{z^2-1}\\\\\n&amp;=-\\text{artanh }(z)+C\\\\\n&amp;=-\\text{artanh }(u+1)+C\\\\\n\\end{align}</p>\n\n<p>Reversing the final substitution gives</p>\n\n<blockquote>\n <p>$$\\int\\frac{dx}{\\sin^2(x)+\\sin(2x)} = -\\text{artanh }(\\tan(x)+1)+C$$</p>\n</blockquote>\n" }, { "answer_id": 2081677, "score": 2, "text": "<p>HINT:</p>\n\n<p>$$\\dfrac1{\\sin^2x+\\sin2x}=\\dfrac{\\sec^2x}{\\tan^2x+2\\tan x}$$</p>\n" }, { "answer_id": 2081680, "score": 2, "text": "<p>We can transform our integral as: $$I =\\int \\frac {1}{\\sin^2 x +\\sin 2x} dx =\\int -(\\frac {-1}{2\\cot x+1})\\mathrm {cosec}^2 x dx $$ Using $v=\\cot x $, this can be easily solved. Hope it helps. 
</p>\n" }, { "answer_id": 2081681, "score": 1, "text": "<p>First use the fact that $\\sin(2x) = 2\\sin(x)\\cos(x)$ to obtain\n$$\\int \\frac{1}{\\sin^2(x) + 2\\cos(x)\\sin(x)}dx$$\nDivide numerator and denominator by $\\sin^2(x)$:\n$$\\int \\frac{\\frac{1}{\\sin^2(x)}}{1 + 2\\frac{\\cos(x)}{\\sin(x)}} = \\int \\frac{\\csc^2(x)}{1 + 2\\cot(x)}dx$$\nThen you can use the substitution $u = 2\\cot(x)+1$; $du = -2\\csc(x)dx$:\n$$-\\frac{1}{2}\\int\\frac{1}{u}du = -\\frac{\\ln(u)}{2}+c$$\nAnd substitute $u$ for $2\\cot(x)+1$ again.</p>\n" }, { "answer_id": 2081686, "score": 5, "text": "<p>\\begin{align}\n\\int\\frac{dx}{\\sin^2(x)+\\sin(2x)}&amp;=\\int\\frac{\\sec^2(x)}{\\sin^2(x)\\sec^2(x)+2\\sin(x)\\cos(x)\\sec^2(x)}dx\\\\\n&amp;=\\int\\frac{\\sec^2(x)}{\\tan^2(x)+2\\tan(x)}dx\n\\end{align}</p>\n\n<p>Letting $u:=\\tan(x)$, then $du=\\sec^2(x)dx$ gives</p>\n\n<p>\\begin{align}\n\\int\\frac{dx}{\\sin^2(x)+\\sin(2x)}&amp;=\\int\\frac{du}{u^2+2u}\\\\\n&amp;=\\int\\frac{du}{(u+1)^2-1}\n\\end{align}</p>\n\n<p>Letting $z:=u+1$, then $dz = du$ gives</p>\n\n<p>\\begin{align}\n\\int\\frac{dx}{\\sin^2(x)+\\sin(2x)}&amp;=\\int\\frac{dz}{z^2-1}\\\\\n&amp;=-\\text{artanh }(z)+C\\\\\n&amp;=-\\text{artanh }(u+1)+C\\\\\n\\end{align}</p>\n\n<p>Reversing the final substitution gives</p>\n\n<blockquote>\n <p>$$\\int\\frac{dx}{\\sin^2(x)+\\sin(2x)} = -\\text{artanh }(\\tan(x)+1)+C$$</p>\n</blockquote>\n" }, { "answer_id": 2081677, "score": 2, "text": "<p>HINT:</p>\n\n<p>$$\\dfrac1{\\sin^2x+\\sin2x}=\\dfrac{\\sec^2x}{\\tan^2x+2\\tan x}$$</p>\n" }, { "answer_id": 2081680, "score": 2, "text": "<p>We can transform our integral as: $$I =\\int \\frac {1}{\\sin^2 x +\\sin 2x} dx =\\int -(\\frac {-1}{2\\cot x+1})\\mathrm {cosec}^2 x dx $$ Using $v=\\cot x $, this can be easily solved. Hope it helps. 
</p>\n" }, { "answer_id": 2081681, "score": 1, "text": "<p>First use the fact that $\\sin(2x) = 2\\sin(x)\\cos(x)$ to obtain\n$$\\int \\frac{1}{\\sin^2(x) + 2\\cos(x)\\sin(x)}dx$$\nDivide numerator and denominator by $\\sin^2(x)$:\n$$\\int \\frac{\\frac{1}{\\sin^2(x)}}{1 + 2\\frac{\\cos(x)}{\\sin(x)}} = \\int \\frac{\\csc^2(x)}{1 + 2\\cot(x)}dx$$\nThen you can use the substitution $u = 2\\cot(x)+1$; $du = -2\\csc(x)dx$:\n$$-\\frac{1}{2}\\int\\frac{1}{u}du = -\\frac{\\ln(u)}{2}+c$$\nAnd substitute $u$ for $2\\cot(x)+1$ again.</p>\n" }, { "answer_id": 2081686, "score": 5, "text": "<p>\\begin{align}\n\\int\\frac{dx}{\\sin^2(x)+\\sin(2x)}&amp;=\\int\\frac{\\sec^2(x)}{\\sin^2(x)\\sec^2(x)+2\\sin(x)\\cos(x)\\sec^2(x)}dx\\\\\n&amp;=\\int\\frac{\\sec^2(x)}{\\tan^2(x)+2\\tan(x)}dx\n\\end{align}</p>\n\n<p>Letting $u:=\\tan(x)$, then $du=\\sec^2(x)dx$ gives</p>\n\n<p>\\begin{align}\n\\int\\frac{dx}{\\sin^2(x)+\\sin(2x)}&amp;=\\int\\frac{du}{u^2+2u}\\\\\n&amp;=\\int\\frac{du}{(u+1)^2-1}\n\\end{align}</p>\n\n<p>Letting $z:=u+1$, then $dz = du$ gives</p>\n\n<p>\\begin{align}\n\\int\\frac{dx}{\\sin^2(x)+\\sin(2x)}&amp;=\\int\\frac{dz}{z^2-1}\\\\\n&amp;=-\\text{artanh }(z)+C\\\\\n&amp;=-\\text{artanh }(u+1)+C\\\\\n\\end{align}</p>\n\n<p>Reversing the final substitution gives</p>\n\n<blockquote>\n <p>$$\\int\\frac{dx}{\\sin^2(x)+\\sin(2x)} = -\\text{artanh }(\\tan(x)+1)+C$$</p>\n</blockquote>\n" }, { "answer_id": 2081677, "score": 2, "text": "<p>HINT:</p>\n\n<p>$$\\dfrac1{\\sin^2x+\\sin2x}=\\dfrac{\\sec^2x}{\\tan^2x+2\\tan x}$$</p>\n" }, { "answer_id": 2081680, "score": 2, "text": "<p>We can transform our integral as: $$I =\\int \\frac {1}{\\sin^2 x +\\sin 2x} dx =\\int -(\\frac {-1}{2\\cot x+1})\\mathrm {cosec}^2 x dx $$ Using $v=\\cot x $, this can be easily solved. Hope it helps. 
</p>\n" }, { "answer_id": 2081681, "score": 1, "text": "<p>First use the fact that $\\sin(2x) = 2\\sin(x)\\cos(x)$ to obtain\n$$\\int \\frac{1}{\\sin^2(x) + 2\\cos(x)\\sin(x)}dx$$\nDivide numerator and denominator by $\\sin^2(x)$:\n$$\\int \\frac{\\frac{1}{\\sin^2(x)}}{1 + 2\\frac{\\cos(x)}{\\sin(x)}} = \\int \\frac{\\csc^2(x)}{1 + 2\\cot(x)}dx$$\nThen you can use the substitution $u = 2\\cot(x)+1$; $du = -2\\csc(x)dx$:\n$$-\\frac{1}{2}\\int\\frac{1}{u}du = -\\frac{\\ln(u)}{2}+c$$\nAnd substitute $u$ for $2\\cot(x)+1$ again.</p>\n" }, { "answer_id": 2081686, "score": 5, "text": "<p>\\begin{align}\n\\int\\frac{dx}{\\sin^2(x)+\\sin(2x)}&amp;=\\int\\frac{\\sec^2(x)}{\\sin^2(x)\\sec^2(x)+2\\sin(x)\\cos(x)\\sec^2(x)}dx\\\\\n&amp;=\\int\\frac{\\sec^2(x)}{\\tan^2(x)+2\\tan(x)}dx\n\\end{align}</p>\n\n<p>Letting $u:=\\tan(x)$, then $du=\\sec^2(x)dx$ gives</p>\n\n<p>\\begin{align}\n\\int\\frac{dx}{\\sin^2(x)+\\sin(2x)}&amp;=\\int\\frac{du}{u^2+2u}\\\\\n&amp;=\\int\\frac{du}{(u+1)^2-1}\n\\end{align}</p>\n\n<p>Letting $z:=u+1$, then $dz = du$ gives</p>\n\n<p>\\begin{align}\n\\int\\frac{dx}{\\sin^2(x)+\\sin(2x)}&amp;=\\int\\frac{dz}{z^2-1}\\\\\n&amp;=-\\text{artanh }(z)+C\\\\\n&amp;=-\\text{artanh }(u+1)+C\\\\\n\\end{align}</p>\n\n<p>Reversing the final substitution gives</p>\n\n<blockquote>\n <p>$$\\int\\frac{dx}{\\sin^2(x)+\\sin(2x)} = -\\text{artanh }(\\tan(x)+1)+C$$</p>\n</blockquote>\n" }, { "answer_id": 2081677, "score": 2, "text": "<p>HINT:</p>\n\n<p>$$\\dfrac1{\\sin^2x+\\sin2x}=\\dfrac{\\sec^2x}{\\tan^2x+2\\tan x}$$</p>\n" }, { "answer_id": 2081680, "score": 2, "text": "<p>We can transform our integral as: $$I =\\int \\frac {1}{\\sin^2 x +\\sin 2x} dx =\\int -(\\frac {-1}{2\\cot x+1})\\mathrm {cosec}^2 x dx $$ Using $v=\\cot x $, this can be easily solved. Hope it helps. 
</p>\n" }, { "answer_id": 2081681, "score": 1, "text": "<p>First use the fact that $\\sin(2x) = 2\\sin(x)\\cos(x)$ to obtain\n$$\\int \\frac{1}{\\sin^2(x) + 2\\cos(x)\\sin(x)}dx$$\nDivide numerator and denominator by $\\sin^2(x)$:\n$$\\int \\frac{\\frac{1}{\\sin^2(x)}}{1 + 2\\frac{\\cos(x)}{\\sin(x)}} = \\int \\frac{\\csc^2(x)}{1 + 2\\cot(x)}dx$$\nThen you can use the substitution $u = 2\\cot(x)+1$; $du = -2\\csc(x)dx$:\n$$-\\frac{1}{2}\\int\\frac{1}{u}du = -\\frac{\\ln(u)}{2}+c$$\nAnd substitute $u$ for $2\\cot(x)+1$ again.</p>\n" } ]
Question: Integral $\int \frac{dx}{\sin^2 (x) + \sin(2x)}$ Answer 1: <p>\begin{align} \int\frac{dx}{\sin^2(x)+\sin(2x)}&amp;=\int\frac{\sec^2(x)}{\sin^2(x)\sec^2(x)+2\sin(x)\cos(x)\sec^2(x)}dx\\ &amp;=\int\frac{\sec^2(x)}{\tan^2(x)+2\tan(x)}dx \end{align}</p> <p>Letting $u:=\tan(x)$, then $du=\sec^2(x)dx$ gives</p> <p>\begin{align} \int\frac{dx}{\sin^2(x)+\sin(2x)}&amp;=\int\frac{du}{u^2+2u}\\ &amp;=\int\frac{du}{(u+1)^2-1} \end{align}</p> <p>Letting $z:=u+1$, then $dz = du$ gives</p> <p>\begin{align} \int\frac{dx}{\sin^2(x)+\sin(2x)}&amp;=\int\frac{dz}{z^2-1}\\ &amp;=-\text{artanh }(z)+C\\ &amp;=-\text{artanh }(u+1)+C\\ \end{align}</p> <p>Reversing the final substitution gives</p> <blockquote> <p>$$\int\frac{dx}{\sin^2(x)+\sin(2x)} = -\text{artanh }(\tan(x)+1)+C$$</p> </blockquote> Answer 2: <p>HINT:</p> <p>$$\dfrac1{\sin^2x+\sin2x}=\dfrac{\sec^2x}{\tan^2x+2\tan x}$$</p> Answer 3: <p>We can transform our integral as: $$I =\int \frac {1}{\sin^2 x +\sin 2x} dx =\int -(\frac {-1}{2\cot x+1})\mathrm {cosec}^2 x dx $$ Using $v=\cot x $, this can be easily solved. Hope it helps. 
</p> Answer 4: <p>First use the fact that $\sin(2x) = 2\sin(x)\cos(x)$ to obtain $$\int \frac{1}{\sin^2(x) + 2\cos(x)\sin(x)}dx$$ Divide numerator and denominator by $\sin^2(x)$: $$\int \frac{\frac{1}{\sin^2(x)}}{1 + 2\frac{\cos(x)}{\sin(x)}} = \int \frac{\csc^2(x)}{1 + 2\cot(x)}dx$$ Then you can use the substitution $u = 2\cot(x)+1$; $du = -2\csc(x)dx$: $$-\frac{1}{2}\int\frac{1}{u}du = -\frac{\ln(u)}{2}+c$$ And substitute $u$ for $2\cot(x)+1$ again.</p> Answer 5: <p>\begin{align} \int\frac{dx}{\sin^2(x)+\sin(2x)}&amp;=\int\frac{\sec^2(x)}{\sin^2(x)\sec^2(x)+2\sin(x)\cos(x)\sec^2(x)}dx\\ &amp;=\int\frac{\sec^2(x)}{\tan^2(x)+2\tan(x)}dx \end{align}</p> <p>Letting $u:=\tan(x)$, then $du=\sec^2(x)dx$ gives</p> <p>\begin{align} \int\frac{dx}{\sin^2(x)+\sin(2x)}&amp;=\int\frac{du}{u^2+2u}\\ &amp;=\int\frac{du}{(u+1)^2-1} \end{align}</p> <p>Letting $z:=u+1$, then $dz = du$ gives</p> <p>\begin{align} \int\frac{dx}{\sin^2(x)+\sin(2x)}&amp;=\int\frac{dz}{z^2-1}\\ &amp;=-\text{artanh }(z)+C\\ &amp;=-\text{artanh }(u+1)+C\\ \end{align}</p> <p>Reversing the final substitution gives</p> <blockquote> <p>$$\int\frac{dx}{\sin^2(x)+\sin(2x)} = -\text{artanh }(\tan(x)+1)+C$$</p> </blockquote> Answer 6: <p>HINT:</p> <p>$$\dfrac1{\sin^2x+\sin2x}=\dfrac{\sec^2x}{\tan^2x+2\tan x}$$</p> Answer 7: <p>We can transform our integral as: $$I =\int \frac {1}{\sin^2 x +\sin 2x} dx =\int -(\frac {-1}{2\cot x+1})\mathrm {cosec}^2 x dx $$ Using $v=\cot x $, this can be easily solved. Hope it helps. 
</p> Answer 8: <p>First use the fact that $\sin(2x) = 2\sin(x)\cos(x)$ to obtain $$\int \frac{1}{\sin^2(x) + 2\cos(x)\sin(x)}dx$$ Divide numerator and denominator by $\sin^2(x)$: $$\int \frac{\frac{1}{\sin^2(x)}}{1 + 2\frac{\cos(x)}{\sin(x)}} = \int \frac{\csc^2(x)}{1 + 2\cot(x)}dx$$ Then you can use the substitution $u = 2\cot(x)+1$; $du = -2\csc(x)dx$: $$-\frac{1}{2}\int\frac{1}{u}du = -\frac{\ln(u)}{2}+c$$ And substitute $u$ for $2\cot(x)+1$ again.</p> Answer 9: <p>\begin{align} \int\frac{dx}{\sin^2(x)+\sin(2x)}&amp;=\int\frac{\sec^2(x)}{\sin^2(x)\sec^2(x)+2\sin(x)\cos(x)\sec^2(x)}dx\\ &amp;=\int\frac{\sec^2(x)}{\tan^2(x)+2\tan(x)}dx \end{align}</p> <p>Letting $u:=\tan(x)$, then $du=\sec^2(x)dx$ gives</p> <p>\begin{align} \int\frac{dx}{\sin^2(x)+\sin(2x)}&amp;=\int\frac{du}{u^2+2u}\\ &amp;=\int\frac{du}{(u+1)^2-1} \end{align}</p> <p>Letting $z:=u+1$, then $dz = du$ gives</p> <p>\begin{align} \int\frac{dx}{\sin^2(x)+\sin(2x)}&amp;=\int\frac{dz}{z^2-1}\\ &amp;=-\text{artanh }(z)+C\\ &amp;=-\text{artanh }(u+1)+C\\ \end{align}</p> <p>Reversing the final substitution gives</p> <blockquote> <p>$$\int\frac{dx}{\sin^2(x)+\sin(2x)} = -\text{artanh }(\tan(x)+1)+C$$</p> </blockquote> Answer 10: <p>HINT:</p> <p>$$\dfrac1{\sin^2x+\sin2x}=\dfrac{\sec^2x}{\tan^2x+2\tan x}$$</p> Answer 11: <p>We can transform our integral as: $$I =\int \frac {1}{\sin^2 x +\sin 2x} dx =\int -(\frac {-1}{2\cot x+1})\mathrm {cosec}^2 x dx $$ Using $v=\cot x $, this can be easily solved. Hope it helps. 
</p> Answer 12: <p>First use the fact that $\sin(2x) = 2\sin(x)\cos(x)$ to obtain $$\int \frac{1}{\sin^2(x) + 2\cos(x)\sin(x)}dx$$ Divide numerator and denominator by $\sin^2(x)$: $$\int \frac{\frac{1}{\sin^2(x)}}{1 + 2\frac{\cos(x)}{\sin(x)}} = \int \frac{\csc^2(x)}{1 + 2\cot(x)}dx$$ Then you can use the substitution $u = 2\cot(x)+1$; $du = -2\csc(x)dx$: $$-\frac{1}{2}\int\frac{1}{u}du = -\frac{\ln(u)}{2}+c$$ And substitute $u$ for $2\cot(x)+1$ again.</p> Answer 13: <p>\begin{align} \int\frac{dx}{\sin^2(x)+\sin(2x)}&amp;=\int\frac{\sec^2(x)}{\sin^2(x)\sec^2(x)+2\sin(x)\cos(x)\sec^2(x)}dx\\ &amp;=\int\frac{\sec^2(x)}{\tan^2(x)+2\tan(x)}dx \end{align}</p> <p>Letting $u:=\tan(x)$, then $du=\sec^2(x)dx$ gives</p> <p>\begin{align} \int\frac{dx}{\sin^2(x)+\sin(2x)}&amp;=\int\frac{du}{u^2+2u}\\ &amp;=\int\frac{du}{(u+1)^2-1} \end{align}</p> <p>Letting $z:=u+1$, then $dz = du$ gives</p> <p>\begin{align} \int\frac{dx}{\sin^2(x)+\sin(2x)}&amp;=\int\frac{dz}{z^2-1}\\ &amp;=-\text{artanh }(z)+C\\ &amp;=-\text{artanh }(u+1)+C\\ \end{align}</p> <p>Reversing the final substitution gives</p> <blockquote> <p>$$\int\frac{dx}{\sin^2(x)+\sin(2x)} = -\text{artanh }(\tan(x)+1)+C$$</p> </blockquote> Answer 14: <p>HINT:</p> <p>$$\dfrac1{\sin^2x+\sin2x}=\dfrac{\sec^2x}{\tan^2x+2\tan x}$$</p> Answer 15: <p>We can transform our integral as: $$I =\int \frac {1}{\sin^2 x +\sin 2x} dx =\int -(\frac {-1}{2\cot x+1})\mathrm {cosec}^2 x dx $$ Using $v=\cot x $, this can be easily solved. Hope it helps. 
</p> Answer 16: <p>First use the fact that $\sin(2x) = 2\sin(x)\cos(x)$ to obtain $$\int \frac{1}{\sin^2(x) + 2\cos(x)\sin(x)}dx$$ Divide numerator and denominator by $\sin^2(x)$: $$\int \frac{\frac{1}{\sin^2(x)}}{1 + 2\frac{\cos(x)}{\sin(x)}} = \int \frac{\csc^2(x)}{1 + 2\cot(x)}dx$$ Then you can use the substitution $u = 2\cot(x)+1$; $du = -2\csc(x)dx$: $$-\frac{1}{2}\int\frac{1}{u}du = -\frac{\ln(u)}{2}+c$$ And substitute $u$ for $2\cot(x)+1$ again.</p> Answer 17: <p>\begin{align} \int\frac{dx}{\sin^2(x)+\sin(2x)}&amp;=\int\frac{\sec^2(x)}{\sin^2(x)\sec^2(x)+2\sin(x)\cos(x)\sec^2(x)}dx\\ &amp;=\int\frac{\sec^2(x)}{\tan^2(x)+2\tan(x)}dx \end{align}</p> <p>Letting $u:=\tan(x)$, then $du=\sec^2(x)dx$ gives</p> <p>\begin{align} \int\frac{dx}{\sin^2(x)+\sin(2x)}&amp;=\int\frac{du}{u^2+2u}\\ &amp;=\int\frac{du}{(u+1)^2-1} \end{align}</p> <p>Letting $z:=u+1$, then $dz = du$ gives</p> <p>\begin{align} \int\frac{dx}{\sin^2(x)+\sin(2x)}&amp;=\int\frac{dz}{z^2-1}\\ &amp;=-\text{artanh }(z)+C\\ &amp;=-\text{artanh }(u+1)+C\\ \end{align}</p> <p>Reversing the final substitution gives</p> <blockquote> <p>$$\int\frac{dx}{\sin^2(x)+\sin(2x)} = -\text{artanh }(\tan(x)+1)+C$$</p> </blockquote> Answer 18: <p>HINT:</p> <p>$$\dfrac1{\sin^2x+\sin2x}=\dfrac{\sec^2x}{\tan^2x+2\tan x}$$</p> Answer 19: <p>We can transform our integral as: $$I =\int \frac {1}{\sin^2 x +\sin 2x} dx =\int -(\frac {-1}{2\cot x+1})\mathrm {cosec}^2 x dx $$ Using $v=\cot x $, this can be easily solved. Hope it helps. 
</p> Answer 20: <p>First use the fact that $\sin(2x) = 2\sin(x)\cos(x)$ to obtain $$\int \frac{1}{\sin^2(x) + 2\cos(x)\sin(x)}dx$$ Divide numerator and denominator by $\sin^2(x)$: $$\int \frac{\frac{1}{\sin^2(x)}}{1 + 2\frac{\cos(x)}{\sin(x)}} = \int \frac{\csc^2(x)}{1 + 2\cot(x)}dx$$ Then you can use the substitution $u = 2\cot(x)+1$; $du = -2\csc(x)dx$: $$-\frac{1}{2}\int\frac{1}{u}du = -\frac{\ln(u)}{2}+c$$ And substitute $u$ for $2\cot(x)+1$ again.</p> Answer 21: <p>\begin{align} \int\frac{dx}{\sin^2(x)+\sin(2x)}&amp;=\int\frac{\sec^2(x)}{\sin^2(x)\sec^2(x)+2\sin(x)\cos(x)\sec^2(x)}dx\\ &amp;=\int\frac{\sec^2(x)}{\tan^2(x)+2\tan(x)}dx \end{align}</p> <p>Letting $u:=\tan(x)$, then $du=\sec^2(x)dx$ gives</p> <p>\begin{align} \int\frac{dx}{\sin^2(x)+\sin(2x)}&amp;=\int\frac{du}{u^2+2u}\\ &amp;=\int\frac{du}{(u+1)^2-1} \end{align}</p> <p>Letting $z:=u+1$, then $dz = du$ gives</p> <p>\begin{align} \int\frac{dx}{\sin^2(x)+\sin(2x)}&amp;=\int\frac{dz}{z^2-1}\\ &amp;=-\text{artanh }(z)+C\\ &amp;=-\text{artanh }(u+1)+C\\ \end{align}</p> <p>Reversing the final substitution gives</p> <blockquote> <p>$$\int\frac{dx}{\sin^2(x)+\sin(2x)} = -\text{artanh }(\tan(x)+1)+C$$</p> </blockquote> Answer 22: <p>HINT:</p> <p>$$\dfrac1{\sin^2x+\sin2x}=\dfrac{\sec^2x}{\tan^2x+2\tan x}$$</p> Answer 23: <p>We can transform our integral as: $$I =\int \frac {1}{\sin^2 x +\sin 2x} dx =\int -(\frac {-1}{2\cot x+1})\mathrm {cosec}^2 x dx $$ Using $v=\cot x $, this can be easily solved. Hope it helps. 
</p> Answer 24: <p>First use the fact that $\sin(2x) = 2\sin(x)\cos(x)$ to obtain $$\int \frac{1}{\sin^2(x) + 2\cos(x)\sin(x)}dx$$ Divide numerator and denominator by $\sin^2(x)$: $$\int \frac{\frac{1}{\sin^2(x)}}{1 + 2\frac{\cos(x)}{\sin(x)}} = \int \frac{\csc^2(x)}{1 + 2\cot(x)}dx$$ Then you can use the substitution $u = 2\cot(x)+1$; $du = -2\csc(x)dx$: $$-\frac{1}{2}\int\frac{1}{u}du = -\frac{\ln(u)}{2}+c$$ And substitute $u$ for $2\cot(x)+1$ again.</p> Answer 25: <p>\begin{align} \int\frac{dx}{\sin^2(x)+\sin(2x)}&amp;=\int\frac{\sec^2(x)}{\sin^2(x)\sec^2(x)+2\sin(x)\cos(x)\sec^2(x)}dx\\ &amp;=\int\frac{\sec^2(x)}{\tan^2(x)+2\tan(x)}dx \end{align}</p> <p>Letting $u:=\tan(x)$, then $du=\sec^2(x)dx$ gives</p> <p>\begin{align} \int\frac{dx}{\sin^2(x)+\sin(2x)}&amp;=\int\frac{du}{u^2+2u}\\ &amp;=\int\frac{du}{(u+1)^2-1} \end{align}</p> <p>Letting $z:=u+1$, then $dz = du$ gives</p> <p>\begin{align} \int\frac{dx}{\sin^2(x)+\sin(2x)}&amp;=\int\frac{dz}{z^2-1}\\ &amp;=-\text{artanh }(z)+C\\ &amp;=-\text{artanh }(u+1)+C\\ \end{align}</p> <p>Reversing the final substitution gives</p> <blockquote> <p>$$\int\frac{dx}{\sin^2(x)+\sin(2x)} = -\text{artanh }(\tan(x)+1)+C$$</p> </blockquote> Answer 26: <p>HINT:</p> <p>$$\dfrac1{\sin^2x+\sin2x}=\dfrac{\sec^2x}{\tan^2x+2\tan x}$$</p> Answer 27: <p>We can transform our integral as: $$I =\int \frac {1}{\sin^2 x +\sin 2x} dx =\int -(\frac {-1}{2\cot x+1})\mathrm {cosec}^2 x dx $$ Using $v=\cot x $, this can be easily solved. Hope it helps. 
</p> Answer 28: <p>First use the fact that $\sin(2x) = 2\sin(x)\cos(x)$ to obtain $$\int \frac{1}{\sin^2(x) + 2\cos(x)\sin(x)}dx$$ Divide numerator and denominator by $\sin^2(x)$: $$\int \frac{\frac{1}{\sin^2(x)}}{1 + 2\frac{\cos(x)}{\sin(x)}} = \int \frac{\csc^2(x)}{1 + 2\cot(x)}dx$$ Then you can use the substitution $u = 2\cot(x)+1$; $du = -2\csc(x)dx$: $$-\frac{1}{2}\int\frac{1}{u}du = -\frac{\ln(u)}{2}+c$$ And substitute $u$ for $2\cot(x)+1$ again.</p> Answer 29: <p>\begin{align} \int\frac{dx}{\sin^2(x)+\sin(2x)}&amp;=\int\frac{\sec^2(x)}{\sin^2(x)\sec^2(x)+2\sin(x)\cos(x)\sec^2(x)}dx\\ &amp;=\int\frac{\sec^2(x)}{\tan^2(x)+2\tan(x)}dx \end{align}</p> <p>Letting $u:=\tan(x)$, then $du=\sec^2(x)dx$ gives</p> <p>\begin{align} \int\frac{dx}{\sin^2(x)+\sin(2x)}&amp;=\int\frac{du}{u^2+2u}\\ &amp;=\int\frac{du}{(u+1)^2-1} \end{align}</p> <p>Letting $z:=u+1$, then $dz = du$ gives</p> <p>\begin{align} \int\frac{dx}{\sin^2(x)+\sin(2x)}&amp;=\int\frac{dz}{z^2-1}\\ &amp;=-\text{artanh }(z)+C\\ &amp;=-\text{artanh }(u+1)+C\\ \end{align}</p> <p>Reversing the final substitution gives</p> <blockquote> <p>$$\int\frac{dx}{\sin^2(x)+\sin(2x)} = -\text{artanh }(\tan(x)+1)+C$$</p> </blockquote> Answer 30: <p>HINT:</p> <p>$$\dfrac1{\sin^2x+\sin2x}=\dfrac{\sec^2x}{\tan^2x+2\tan x}$$</p> Answer 31: <p>We can transform our integral as: $$I =\int \frac {1}{\sin^2 x +\sin 2x} dx =\int -(\frac {-1}{2\cot x+1})\mathrm {cosec}^2 x dx $$ Using $v=\cot x $, this can be easily solved. Hope it helps. 
</p> Answer 32: <p>First use the fact that $\sin(2x) = 2\sin(x)\cos(x)$ to obtain $$\int \frac{1}{\sin^2(x) + 2\cos(x)\sin(x)}dx$$ Divide numerator and denominator by $\sin^2(x)$: $$\int \frac{\frac{1}{\sin^2(x)}}{1 + 2\frac{\cos(x)}{\sin(x)}} = \int \frac{\csc^2(x)}{1 + 2\cot(x)}dx$$ Then you can use the substitution $u = 2\cot(x)+1$; $du = -2\csc(x)dx$: $$-\frac{1}{2}\int\frac{1}{u}du = -\frac{\ln(u)}{2}+c$$ And substitute $u$ for $2\cot(x)+1$ again.</p> Answer 33: <p>\begin{align} \int\frac{dx}{\sin^2(x)+\sin(2x)}&amp;=\int\frac{\sec^2(x)}{\sin^2(x)\sec^2(x)+2\sin(x)\cos(x)\sec^2(x)}dx\\ &amp;=\int\frac{\sec^2(x)}{\tan^2(x)+2\tan(x)}dx \end{align}</p> <p>Letting $u:=\tan(x)$, then $du=\sec^2(x)dx$ gives</p> <p>\begin{align} \int\frac{dx}{\sin^2(x)+\sin(2x)}&amp;=\int\frac{du}{u^2+2u}\\ &amp;=\int\frac{du}{(u+1)^2-1} \end{align}</p> <p>Letting $z:=u+1$, then $dz = du$ gives</p> <p>\begin{align} \int\frac{dx}{\sin^2(x)+\sin(2x)}&amp;=\int\frac{dz}{z^2-1}\\ &amp;=-\text{artanh }(z)+C\\ &amp;=-\text{artanh }(u+1)+C\\ \end{align}</p> <p>Reversing the final substitution gives</p> <blockquote> <p>$$\int\frac{dx}{\sin^2(x)+\sin(2x)} = -\text{artanh }(\tan(x)+1)+C$$</p> </blockquote> Answer 34: <p>HINT:</p> <p>$$\dfrac1{\sin^2x+\sin2x}=\dfrac{\sec^2x}{\tan^2x+2\tan x}$$</p> Answer 35: <p>We can transform our integral as: $$I =\int \frac {1}{\sin^2 x +\sin 2x} dx =\int -(\frac {-1}{2\cot x+1})\mathrm {cosec}^2 x dx $$ Using $v=\cot x $, this can be easily solved. Hope it helps. 
</p> Answer 36: <p>First use the fact that $\sin(2x) = 2\sin(x)\cos(x)$ to obtain $$\int \frac{1}{\sin^2(x) + 2\cos(x)\sin(x)}dx$$ Divide numerator and denominator by $\sin^2(x)$: $$\int \frac{\frac{1}{\sin^2(x)}}{1 + 2\frac{\cos(x)}{\sin(x)}} = \int \frac{\csc^2(x)}{1 + 2\cot(x)}dx$$ Then you can use the substitution $u = 2\cot(x)+1$; $du = -2\csc(x)dx$: $$-\frac{1}{2}\int\frac{1}{u}du = -\frac{\ln(u)}{2}+c$$ And substitute $u$ for $2\cot(x)+1$ again.</p> Answer 37: <p>\begin{align} \int\frac{dx}{\sin^2(x)+\sin(2x)}&amp;=\int\frac{\sec^2(x)}{\sin^2(x)\sec^2(x)+2\sin(x)\cos(x)\sec^2(x)}dx\\ &amp;=\int\frac{\sec^2(x)}{\tan^2(x)+2\tan(x)}dx \end{align}</p> <p>Letting $u:=\tan(x)$, then $du=\sec^2(x)dx$ gives</p> <p>\begin{align} \int\frac{dx}{\sin^2(x)+\sin(2x)}&amp;=\int\frac{du}{u^2+2u}\\ &amp;=\int\frac{du}{(u+1)^2-1} \end{align}</p> <p>Letting $z:=u+1$, then $dz = du$ gives</p> <p>\begin{align} \int\frac{dx}{\sin^2(x)+\sin(2x)}&amp;=\int\frac{dz}{z^2-1}\\ &amp;=-\text{artanh }(z)+C\\ &amp;=-\text{artanh }(u+1)+C\\ \end{align}</p> <p>Reversing the final substitution gives</p> <blockquote> <p>$$\int\frac{dx}{\sin^2(x)+\sin(2x)} = -\text{artanh }(\tan(x)+1)+C$$</p> </blockquote> Answer 38: <p>HINT:</p> <p>$$\dfrac1{\sin^2x+\sin2x}=\dfrac{\sec^2x}{\tan^2x+2\tan x}$$</p> Answer 39: <p>We can transform our integral as: $$I =\int \frac {1}{\sin^2 x +\sin 2x} dx =\int -(\frac {-1}{2\cot x+1})\mathrm {cosec}^2 x dx $$ Using $v=\cot x $, this can be easily solved. Hope it helps. 
</p> Answer 40: <p>First use the fact that $\sin(2x) = 2\sin(x)\cos(x)$ to obtain $$\int \frac{1}{\sin^2(x) + 2\cos(x)\sin(x)}dx$$ Divide numerator and denominator by $\sin^2(x)$: $$\int \frac{\frac{1}{\sin^2(x)}}{1 + 2\frac{\cos(x)}{\sin(x)}} = \int \frac{\csc^2(x)}{1 + 2\cot(x)}dx$$ Then you can use the substitution $u = 2\cot(x)+1$; $du = -2\csc(x)dx$: $$-\frac{1}{2}\int\frac{1}{u}du = -\frac{\ln(u)}{2}+c$$ And substitute $u$ for $2\cot(x)+1$ again.</p> Answer 41: <p>\begin{align} \int\frac{dx}{\sin^2(x)+\sin(2x)}&amp;=\int\frac{\sec^2(x)}{\sin^2(x)\sec^2(x)+2\sin(x)\cos(x)\sec^2(x)}dx\\ &amp;=\int\frac{\sec^2(x)}{\tan^2(x)+2\tan(x)}dx \end{align}</p> <p>Letting $u:=\tan(x)$, then $du=\sec^2(x)dx$ gives</p> <p>\begin{align} \int\frac{dx}{\sin^2(x)+\sin(2x)}&amp;=\int\frac{du}{u^2+2u}\\ &amp;=\int\frac{du}{(u+1)^2-1} \end{align}</p> <p>Letting $z:=u+1$, then $dz = du$ gives</p> <p>\begin{align} \int\frac{dx}{\sin^2(x)+\sin(2x)}&amp;=\int\frac{dz}{z^2-1}\\ &amp;=-\text{artanh }(z)+C\\ &amp;=-\text{artanh }(u+1)+C\\ \end{align}</p> <p>Reversing the final substitution gives</p> <blockquote> <p>$$\int\frac{dx}{\sin^2(x)+\sin(2x)} = -\text{artanh }(\tan(x)+1)+C$$</p> </blockquote> Answer 42: <p>HINT:</p> <p>$$\dfrac1{\sin^2x+\sin2x}=\dfrac{\sec^2x}{\tan^2x+2\tan x}$$</p> Answer 43: <p>We can transform our integral as: $$I =\int \frac {1}{\sin^2 x +\sin 2x} dx =\int -(\frac {-1}{2\cot x+1})\mathrm {cosec}^2 x dx $$ Using $v=\cot x $, this can be easily solved. Hope it helps. 
</p>
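A quick numeric sanity check of the substitution answers above (a sketch added alongside the dataset row, not part of the original posts; the helper names `F` and `integrand` are mine): the candidate antiderivative $-\ln|2\cot x+1|/2$ from the $u = 2\cot(x)+1$ substitution should differentiate back to $1/(\sin^2 x + \sin 2x)$.

```python
import math

# Candidate antiderivative from the u = 2*cot(x) + 1 substitution
# (helper name is mine, not from the original answer).
def F(x):
    return -math.log(abs(2 / math.tan(x) + 1)) / 2

# The integrand 1 / (sin^2 x + sin 2x).
def integrand(x):
    return 1 / (math.sin(x) ** 2 + math.sin(2 * x))

# Central-difference derivative of F at a sample point inside the domain
# should agree with the integrand to high accuracy.
h = 1e-6
x0 = 0.9
deriv = (F(x0 + h) - F(x0 - h)) / (2 * h)
assert abs(deriv - integrand(x0)) < 1e-5
```

Note the $-\operatorname{artanh}(\tan x + 1)$ form in the second answer is only real where $|\tan x + 1| < 1$, so the logarithmic form is the one checked here.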
Derivate by $x^2$
2,081,826
7
https://math.stackexchange.com/questions/2081826/derivate-by-x2
<p>$$\frac{d}{dx^2}x=\frac{1}{2x}$$</p> <p>I can't wrap my head around that. It appeared in a physics context, i.e. it may not be mathematically rigorous.</p> <p>Thanks for some explanation.</p> <p>Wolfram Alpha tells me I cannot differentiate by $x^2$: <a href="http://www.wolframalpha.com/input/?i=derivate&amp;rawformassumption=%7B%22F%22,%20%22Derivative%22,%20%22derivativefunction%22%7D%20-%3E%22x%22&amp;rawformassumption=%7B%22F%22,%20%22Derivative%22,%20%22derivativevariable%22%7D%20-%3E%22x%5E2%22&amp;rawformassumption=%7B%22C%22,%20%22derivate%22%7D%20-%3E%20%7B%22Calculator%22%7D" rel="nofollow noreferrer">Invalid value</a></p>
[ { "answer_id": 2081829, "score": 10, "text": "<p>Call your variable $\\;t:=x^2\\implies\n \\sqrt t=x\\;,\\;\\;t\\ge0\\;$ , so you want</p>\n\n<p>$$\\frac d{dx^2}(x)=\\frac d{dt}(\\sqrt t)=\\frac1{2\\sqrt t}=\\frac1{2x}$$</p>\n" }, { "answer_id": 2081838, "score": 4, "text": "<p>Let $f(x^2)=x$. </p>\n\n<p>Hence, for $x&gt;0$ $$f'(x^2)=(\\sqrt{x^2})'_{x^2}=\\frac{1}{2\\sqrt{x^2}}=\\frac{1}{2x}$$</p>\n\n<p>If $x&lt;0$ so $$f'(x^2)=-(\\sqrt{x^2})'_{x^2}=-\\frac{1}{2\\sqrt{x^2}}=\\frac{1}{2x}$$</p>\n" }, { "answer_id": 2081846, "score": 2, "text": "<p>$$\\large \\frac{df(x)}{dg(x)}=\\large \\frac{\\frac{df(x)}{dx}}{\\frac{dg(x)}{dx}}=\\frac{f'(x)}{g'(x)}$$ for example $$f(x)=x^4+x^2+1 ,g(x)=x^2 $$\n$$\\frac{df}{dg}=\\frac{d(x^4+x^2+1)}{d(x^2)}=\\frac{4x^3+2x}{2x}=2x^2+1$$ if you take $u=x^2$ you need to find $\\frac{df}{du}$ so first rewrite $f(x)$ by $u$ \n$$f(x)=(x^2)^2+x^2+1=u^2+u+1 \\\\ \\to \\frac{df}{du}=2u+1=2x^2+1$$ \nanother example :$$f(x)=sinx ,g(x)=tan x \\\\\\frac{df}{dg}=\\frac{cos x}{tan^2 x+1}=sec^3x$$</p>\n\n<p>another example : $$\\frac{dx}{d(lnx)}=?\\\\=\\frac{1}{\\frac{1}{x}}=x$$</p>\n" }, { "answer_id": 2081837, "score": 0, "text": "<p>$$\\frac{d(x)}{dx} \\cdot \\frac{dx}{dx^2}=1 \\cdot \\frac{1}{dx^2/{dx}}=\\frac{1}{2x}.$$</p>\n" }, { "answer_id": 2081829, "score": 10, "text": "<p>Call your variable $\\;t:=x^2\\implies\n \\sqrt t=x\\;,\\;\\;t\\ge0\\;$ , so you want</p>\n\n<p>$$\\frac d{dx^2}(x)=\\frac d{dt}(\\sqrt t)=\\frac1{2\\sqrt t}=\\frac1{2x}$$</p>\n" }, { "answer_id": 2081838, "score": 4, "text": "<p>Let $f(x^2)=x$. 
</p>\n\n<p>Hence, for $x&gt;0$ $$f'(x^2)=(\\sqrt{x^2})'_{x^2}=\\frac{1}{2\\sqrt{x^2}}=\\frac{1}{2x}$$</p>\n\n<p>If $x&lt;0$ so $$f'(x^2)=-(\\sqrt{x^2})'_{x^2}=-\\frac{1}{2\\sqrt{x^2}}=\\frac{1}{2x}$$</p>\n" }, { "answer_id": 2081846, "score": 2, "text": "<p>$$\\large \\frac{df(x)}{dg(x)}=\\large \\frac{\\frac{df(x)}{dx}}{\\frac{dg(x)}{dx}}=\\frac{f'(x)}{g'(x)}$$ for example $$f(x)=x^4+x^2+1 ,g(x)=x^2 $$\n$$\\frac{df}{dg}=\\frac{d(x^4+x^2+1)}{d(x^2)}=\\frac{4x^3+2x}{2x}=2x^2+1$$ if you take $u=x^2$ you need to find $\\frac{df}{du}$ so first rewrite $f(x)$ by $u$ \n$$f(x)=(x^2)^2+x^2+1=u^2+u+1 \\\\ \\to \\frac{df}{du}=2u+1=2x^2+1$$ \nanother example :$$f(x)=sinx ,g(x)=tan x \\\\\\frac{df}{dg}=\\frac{cos x}{tan^2 x+1}=sec^3x$$</p>\n\n<p>another example : $$\\frac{dx}{d(lnx)}=?\\\\=\\frac{1}{\\frac{1}{x}}=x$$</p>\n" }, { "answer_id": 2081837, "score": 0, "text": "<p>$$\\frac{d(x)}{dx} \\cdot \\frac{dx}{dx^2}=1 \\cdot \\frac{1}{dx^2/{dx}}=\\frac{1}{2x}.$$</p>\n" } ]
Question: Derivate by $x^2$ Answer 1: <p>Call your variable $\;t:=x^2\implies \sqrt t=x\;,\;\;t\ge0\;$ , so you want</p> <p>$$\frac d{dx^2}(x)=\frac d{dt}(\sqrt t)=\frac1{2\sqrt t}=\frac1{2x}$$</p> Answer 2: <p>Let $f(x^2)=x$. </p> <p>Hence, for $x&gt;0$ $$f'(x^2)=(\sqrt{x^2})'_{x^2}=\frac{1}{2\sqrt{x^2}}=\frac{1}{2x}$$</p> <p>If $x&lt;0$ so $$f'(x^2)=-(\sqrt{x^2})'_{x^2}=-\frac{1}{2\sqrt{x^2}}=\frac{1}{2x}$$</p> Answer 3: <p>$$\large \frac{df(x)}{dg(x)}=\large \frac{\frac{df(x)}{dx}}{\frac{dg(x)}{dx}}=\frac{f'(x)}{g'(x)}$$ for example $$f(x)=x^4+x^2+1 ,g(x)=x^2 $$ $$\frac{df}{dg}=\frac{d(x^4+x^2+1)}{d(x^2)}=\frac{4x^3+2x}{2x}=2x^2+1$$ if you take $u=x^2$ you need to find $\frac{df}{du}$ so first rewrite $f(x)$ by $u$ $$f(x)=(x^2)^2+x^2+1=u^2+u+1 \\ \to \frac{df}{du}=2u+1=2x^2+1$$ another example :$$f(x)=sinx ,g(x)=tan x \\\frac{df}{dg}=\frac{cos x}{tan^2 x+1}=sec^3x$$</p> <p>another example : $$\frac{dx}{d(lnx)}=?\\=\frac{1}{\frac{1}{x}}=x$$</p> Answer 4: <p>$$\frac{d(x)}{dx} \cdot \frac{dx}{dx^2}=1 \cdot \frac{1}{dx^2/{dx}}=\frac{1}{2x}.$$</p>
</p> <p>Hence, for $x&gt;0$ $$f'(x^2)=(\sqrt{x^2})'_{x^2}=\frac{1}{2\sqrt{x^2}}=\frac{1}{2x}$$</p> <p>If $x&lt;0$ so $$f'(x^2)=-(\sqrt{x^2})'_{x^2}=-\frac{1}{2\sqrt{x^2}}=\frac{1}{2x}$$</p> Answer 39: <p>$$\large \frac{df(x)}{dg(x)}=\large \frac{\frac{df(x)}{dx}}{\frac{dg(x)}{dx}}=\frac{f'(x)}{g'(x)}$$ for example $$f(x)=x^4+x^2+1 ,g(x)=x^2 $$ $$\frac{df}{dg}=\frac{d(x^4+x^2+1)}{d(x^2)}=\frac{4x^3+2x}{2x}=2x^2+1$$ if you take $u=x^2$ you need to find $\frac{df}{du}$ so first rewrite $f(x)$ by $u$ $$f(x)=(x^2)^2+x^2+1=u^2+u+1 \\ \to \frac{df}{du}=2u+1=2x^2+1$$ another example :$$f(x)=sinx ,g(x)=tan x \\\frac{df}{dg}=\frac{cos x}{tan^2 x+1}=sec^3x$$</p> <p>another example : $$\frac{dx}{d(lnx)}=?\\=\frac{1}{\frac{1}{x}}=x$$</p> Answer 40: <p>$$\frac{d(x)}{dx} \cdot \frac{dx}{dx^2}=1 \cdot \frac{1}{dx^2/{dx}}=\frac{1}{2x}.$$</p> Answer 41: <p>Call your variable $\;t:=x^2\implies \sqrt t=x\;,\;\;t\ge0\;$ , so you want</p> <p>$$\frac d{dx^2}(x)=\frac d{dt}(\sqrt t)=\frac1{2\sqrt t}=\frac1{2x}$$</p> Answer 42: <p>Let $f(x^2)=x$. 
</p> <p>Hence, for $x&gt;0$ $$f'(x^2)=(\sqrt{x^2})'_{x^2}=\frac{1}{2\sqrt{x^2}}=\frac{1}{2x}$$</p> <p>If $x&lt;0$ so $$f'(x^2)=-(\sqrt{x^2})'_{x^2}=-\frac{1}{2\sqrt{x^2}}=\frac{1}{2x}$$</p> Answer 43: <p>$$\large \frac{df(x)}{dg(x)}=\large \frac{\frac{df(x)}{dx}}{\frac{dg(x)}{dx}}=\frac{f'(x)}{g'(x)}$$ for example $$f(x)=x^4+x^2+1 ,g(x)=x^2 $$ $$\frac{df}{dg}=\frac{d(x^4+x^2+1)}{d(x^2)}=\frac{4x^3+2x}{2x}=2x^2+1$$ if you take $u=x^2$ you need to find $\frac{df}{du}$ so first rewrite $f(x)$ by $u$ $$f(x)=(x^2)^2+x^2+1=u^2+u+1 \\ \to \frac{df}{du}=2u+1=2x^2+1$$ another example :$$f(x)=sinx ,g(x)=tan x \\\frac{df}{dg}=\frac{cos x}{tan^2 x+1}=sec^3x$$</p> <p>another example : $$\frac{dx}{d(lnx)}=?\\=\frac{1}{\frac{1}{x}}=x$$</p> Answer 44: <p>$$\frac{d(x)}{dx} \cdot \frac{dx}{dx^2}=1 \cdot \frac{1}{dx^2/{dx}}=\frac{1}{2x}.$$</p> Answer 45: <p>Call your variable $\;t:=x^2\implies \sqrt t=x\;,\;\;t\ge0\;$ , so you want</p> <p>$$\frac d{dx^2}(x)=\frac d{dt}(\sqrt t)=\frac1{2\sqrt t}=\frac1{2x}$$</p> Answer 46: <p>Let $f(x^2)=x$. 
</p> <p>Hence, for $x&gt;0$ $$f'(x^2)=(\sqrt{x^2})'_{x^2}=\frac{1}{2\sqrt{x^2}}=\frac{1}{2x}$$</p> <p>If $x&lt;0$ so $$f'(x^2)=-(\sqrt{x^2})'_{x^2}=-\frac{1}{2\sqrt{x^2}}=\frac{1}{2x}$$</p> Answer 47: <p>$$\large \frac{df(x)}{dg(x)}=\large \frac{\frac{df(x)}{dx}}{\frac{dg(x)}{dx}}=\frac{f'(x)}{g'(x)}$$ for example $$f(x)=x^4+x^2+1 ,g(x)=x^2 $$ $$\frac{df}{dg}=\frac{d(x^4+x^2+1)}{d(x^2)}=\frac{4x^3+2x}{2x}=2x^2+1$$ if you take $u=x^2$ you need to find $\frac{df}{du}$ so first rewrite $f(x)$ by $u$ $$f(x)=(x^2)^2+x^2+1=u^2+u+1 \\ \to \frac{df}{du}=2u+1=2x^2+1$$ another example :$$f(x)=sinx ,g(x)=tan x \\\frac{df}{dg}=\frac{cos x}{tan^2 x+1}=sec^3x$$</p> <p>another example : $$\frac{dx}{d(lnx)}=?\\=\frac{1}{\frac{1}{x}}=x$$</p> Answer 48: <p>$$\frac{d(x)}{dx} \cdot \frac{dx}{dx^2}=1 \cdot \frac{1}{dx^2/{dx}}=\frac{1}{2x}.$$</p> Answer 49: <p>Call your variable $\;t:=x^2\implies \sqrt t=x\;,\;\;t\ge0\;$ , so you want</p> <p>$$\frac d{dx^2}(x)=\frac d{dt}(\sqrt t)=\frac1{2\sqrt t}=\frac1{2x}$$</p> Answer 50: <p>Let $f(x^2)=x$. 
</p> <p>Hence, for $x&gt;0$ $$f'(x^2)=(\sqrt{x^2})'_{x^2}=\frac{1}{2\sqrt{x^2}}=\frac{1}{2x}$$</p> <p>If $x&lt;0$ so $$f'(x^2)=-(\sqrt{x^2})'_{x^2}=-\frac{1}{2\sqrt{x^2}}=\frac{1}{2x}$$</p> Answer 51: <p>$$\large \frac{df(x)}{dg(x)}=\large \frac{\frac{df(x)}{dx}}{\frac{dg(x)}{dx}}=\frac{f'(x)}{g'(x)}$$ for example $$f(x)=x^4+x^2+1 ,g(x)=x^2 $$ $$\frac{df}{dg}=\frac{d(x^4+x^2+1)}{d(x^2)}=\frac{4x^3+2x}{2x}=2x^2+1$$ if you take $u=x^2$ you need to find $\frac{df}{du}$ so first rewrite $f(x)$ by $u$ $$f(x)=(x^2)^2+x^2+1=u^2+u+1 \\ \to \frac{df}{du}=2u+1=2x^2+1$$ another example :$$f(x)=sinx ,g(x)=tan x \\\frac{df}{dg}=\frac{cos x}{tan^2 x+1}=sec^3x$$</p> <p>another example : $$\frac{dx}{d(lnx)}=?\\=\frac{1}{\frac{1}{x}}=x$$</p> Answer 52: <p>$$\frac{d(x)}{dx} \cdot \frac{dx}{dx^2}=1 \cdot \frac{1}{dx^2/{dx}}=\frac{1}{2x}.$$</p> Answer 53: <p>Call your variable $\;t:=x^2\implies \sqrt t=x\;,\;\;t\ge0\;$ , so you want</p> <p>$$\frac d{dx^2}(x)=\frac d{dt}(\sqrt t)=\frac1{2\sqrt t}=\frac1{2x}$$</p> Answer 54: <p>Let $f(x^2)=x$. 
</p> <p>Hence, for $x&gt;0$ $$f'(x^2)=(\sqrt{x^2})'_{x^2}=\frac{1}{2\sqrt{x^2}}=\frac{1}{2x}$$</p> <p>If $x&lt;0$ so $$f'(x^2)=-(\sqrt{x^2})'_{x^2}=-\frac{1}{2\sqrt{x^2}}=\frac{1}{2x}$$</p> Answer 55: <p>$$\large \frac{df(x)}{dg(x)}=\large \frac{\frac{df(x)}{dx}}{\frac{dg(x)}{dx}}=\frac{f'(x)}{g'(x)}$$ for example $$f(x)=x^4+x^2+1 ,g(x)=x^2 $$ $$\frac{df}{dg}=\frac{d(x^4+x^2+1)}{d(x^2)}=\frac{4x^3+2x}{2x}=2x^2+1$$ if you take $u=x^2$ you need to find $\frac{df}{du}$ so first rewrite $f(x)$ by $u$ $$f(x)=(x^2)^2+x^2+1=u^2+u+1 \\ \to \frac{df}{du}=2u+1=2x^2+1$$ another example :$$f(x)=sinx ,g(x)=tan x \\\frac{df}{dg}=\frac{cos x}{tan^2 x+1}=sec^3x$$</p> <p>another example : $$\frac{dx}{d(lnx)}=?\\=\frac{1}{\frac{1}{x}}=x$$</p> Answer 56: <p>$$\frac{d(x)}{dx} \cdot \frac{dx}{dx^2}=1 \cdot \frac{1}{dx^2/{dx}}=\frac{1}{2x}.$$</p> Answer 57: <p>Call your variable $\;t:=x^2\implies \sqrt t=x\;,\;\;t\ge0\;$ , so you want</p> <p>$$\frac d{dx^2}(x)=\frac d{dt}(\sqrt t)=\frac1{2\sqrt t}=\frac1{2x}$$</p> Answer 58: <p>Let $f(x^2)=x$. </p> <p>Hence, for $x&gt;0$ $$f'(x^2)=(\sqrt{x^2})'_{x^2}=\frac{1}{2\sqrt{x^2}}=\frac{1}{2x}$$</p> <p>If $x&lt;0$ so $$f'(x^2)=-(\sqrt{x^2})'_{x^2}=-\frac{1}{2\sqrt{x^2}}=\frac{1}{2x}$$</p> Answer 59: <p>$$\large \frac{df(x)}{dg(x)}=\large \frac{\frac{df(x)}{dx}}{\frac{dg(x)}{dx}}=\frac{f'(x)}{g'(x)}$$ for example $$f(x)=x^4+x^2+1 ,g(x)=x^2 $$ $$\frac{df}{dg}=\frac{d(x^4+x^2+1)}{d(x^2)}=\frac{4x^3+2x}{2x}=2x^2+1$$ if you take $u=x^2$ you need to find $\frac{df}{du}$ so first rewrite $f(x)$ by $u$ $$f(x)=(x^2)^2+x^2+1=u^2+u+1 \\ \to \frac{df}{du}=2u+1=2x^2+1$$ another example :$$f(x)=sinx ,g(x)=tan x \\\frac{df}{dg}=\frac{cos x}{tan^2 x+1}=sec^3x$$</p> <p>another example : $$\frac{dx}{d(lnx)}=?\\=\frac{1}{\frac{1}{x}}=x$$</p> Answer 60: <p>$$\frac{d(x)}{dx} \cdot \frac{dx}{dx^2}=1 \cdot \frac{1}{dx^2/{dx}}=\frac{1}{2x}.$$</p>
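The answers above all land on $\frac{d(x)}{d(x^2)}=\frac{1}{2x}$. As a quick cross-check (a plain-Python sketch of my own — the helper name `dx_du` appears in no answer), one can compare a central finite difference of $x=\sqrt u$ with respect to $u=x^2$ against that closed form:

```python
import math

def dx_du(u, h=1e-6):
    # Central finite difference of x = sqrt(u) with respect to u = x^2.
    return (math.sqrt(u + h) - math.sqrt(u - h)) / (2 * h)

x = 3.0
u = x * x
numeric = dx_du(u)           # numerical d(x)/d(x^2) at x = 3
closed_form = 1 / (2 * x)    # the answers' closed form 1/(2x)
print(abs(numeric - closed_form) < 1e-6)  # True
```

For $x&lt;0$ the same check goes through with $x=-\sqrt u$, matching the sign case handled in the first answer.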
Why are the integral curves of this vector field easy to find?
2,082,385
7
https://math.stackexchange.com/questions/2082385/why-are-the-integral-curves-of-this-vector-field-easy-to-find
<p>I'm reading a book on differential geometry that states that it is <em>easy</em> to find the integral curves of this vector field:</p> <blockquote> <p>$$X(x,y)=(x^2-y^2,2xy)$$</p> </blockquote> <p>Proceeding the way the book says, I first rewrite the vector field as the system</p> <p>$$\frac{dx}{dt}=x^2-y^2$$ $$\frac{dy}{dt}=2xy$$</p> <p>But from there I don't know how to proceed. I know that the integral curves must be circles centered at points $(0,r)$ on the $y$-axis, and the vector field looks like this:</p> <p><a href="https://i.stack.imgur.com/iq2DQ.jpg" rel="noreferrer"><img src="https://i.stack.imgur.com/iq2DQ.jpg" alt="enter image description here"></a></p>
[ { "answer_id": 2082450, "score": 6, "text": "<p>Identifying $(x,y)$ with $z=x + y \\, \\mathrm{i}$ the differential equation reads $$\\frac{\\partial z}{\\partial t}=z^2.$$ Rewriting this we find $$\\frac{-\\partial z^{-1}}{\\partial t} = \\frac{\\partial z}{\\partial t} z^{-2} = 1$$ and so $$-z^{-1}= t+z_0 \\textrm{ or } z = -\\frac1{t+z_0}$$ for some fixed initial value $z_0 \\in \\mathbb{C}$. For $t \\in \\mathbb{R}$ these describe circular trajectories as expected.</p>\n\n<p><strong>Alternative 1</strong>: transform the vector field under the inversion</p>\n\n<p>$$\\sigma(x, y) = \\left(\\frac{x}{x^2+y^2}, \\frac{y}{x^2+y^2}\\right).$$</p>\n\n<p>This map is an involution ($\\sigma^2 = \\mathrm{id}$) and its Jacobian is $$D\\sigma_{(x,y)}=\\frac{1}{(x^2+y^2)^2}\\begin{pmatrix} y^2-x^2 &amp; -2xy \\\\ -2xy &amp; x^2-y^2 \\end{pmatrix}.$$</p>\n\n<p>Therefore it maps the element $$\\begin{pmatrix}x^2 - y^2 \\\\ 2xy \\end{pmatrix}$$ at point $(x, y)$ to the element</p>\n\n<p>$$\\frac{1}{(x^2+y^2)^2}\\begin{pmatrix} y^2-x^2 &amp; -2xy \\\\ -2xy &amp; x^2-y^2 \\end{pmatrix} \\begin{pmatrix}x^2 - y^2 \\\\ 2xy \\end{pmatrix} = \\begin{pmatrix} -1 \\\\ 0 \\end{pmatrix}$$ at point $\\sigma(x,y)$. The trajectories of the transformed vector field are easy to find. Then apply the inversion $\\sigma$ to find the trajectories of the original vector field.</p>\n\n<p><strong>Alternative 2</strong>: Since you know that the trajectories are circular start with the ansatz $$\\begin{cases} x = -R \\sin \\alpha \\\\ y = R + R \\cos \\alpha \\end{cases}$$ for some function $\\alpha$ of $t$. Then the given differential equations are consistent and both lead to $$\\frac{\\partial \\alpha}{\\partial t} = 2R \\, (1 + \\cos\\alpha).$$ One solution to this equation is $\\alpha(t) = 2 \\arctan(2R \\, t)$.
Substituting this in the expressions for $x$ and $y$ gives the trajectory</p>\n\n<p>$$\\left(\\frac{-t}{t^2 + (2R)^{-2}}, \\frac{(2R)^{-1}}{t^2+(2R)^{-2}}\\right).$$</p>\n" }, { "answer_id": 2082560, "score": 5, "text": "<p>Note that the system implies \n$$y'(x)=\\frac{2xy(x)}{x^2-y(x)^2}.$$\n(This is the approach mentioned by @Sameh Shenawy in the comments.) \nThis can be solved by standard methods. \nFor example, setting $y(x)=xf(x)$ we arrive at the separable differential equation \n$$f'(x) = \\frac{1}{x} \\frac{f(x)(1+f(x)^2)}{1-f(x)^2}.$$\nThe relevant integral may be done by a partial fraction expansion. </p>\n\n<p>In my opinion the method put forward by @WimC is much preferable. </p>\n\n<p><strong>Addendum</strong></p>\n\n<p>The gory detail ... \n\\begin{align*}\nf'(x) &amp;= \\frac{1}{x} \\frac{f(x)(1+f(x)^2)}{1-f(x)^2} \\\\\n\\frac{1-f^2}{f(1+f^2)}df &amp;= \\frac{dx}{x} \\\\\n\\frac{(1+f^2)-2f^2}{f(1+f^2)} df \n\t&amp;= \\frac{dx}{x} \\\\\n\\int\\left(\\frac{1}{f} - \\frac{2f}{1+f^2}\\right)df\n\t&amp;= \\int\\frac{dx}{x} \\\\\n\\ln f - \\ln(1+f^2) &amp;= \\ln\\frac{x}{2r} \\\\\n\\ln\\frac{f}{1+f^2} &amp;= \\ln\\frac{x}{2r} \\\\\n\\frac{f}{1+f^2} &amp;= \\frac{x}{2r} \\\\\n\\frac{y/x}{1+y^2/x^2} &amp;= \\frac{x}{2r} \\\\\n\\frac{y}{x^2+y^2} &amp;= \\frac{1}{2r} \\\\\nx^2+y^2-2ry &amp;= 0 \\\\\nx^2 + y^2 - 2ry + r^2 &amp;= r^2 \\\\\nx^2 + (y-r)^2 &amp;= r^2.\n\\end{align*}\n(The integration constant was chosen to have the form $-\\ln 2r$ for convenience.) 
</p>\n" }, { "answer_id": 3527341, "score": 0, "text": "<p><span class=\"math-container\">$$X(x,y)=(x^2-y^2,2xy)$$</span>\n<span class=\"math-container\">$$\n\\begin{cases}\n\\dfrac{dx}{dt}=x^2-y^2 \\\\\n\\dfrac{dy}{dt}=2xy\n\\end{cases}\n$$</span>\n<span class=\"math-container\">$$y'=\\frac{2xy}{x^2-y^2}.$$</span>\nConsider <span class=\"math-container\">$x'$</span> instead of <span class=\"math-container\">$y'$</span>:\n<span class=\"math-container\">$${x^2-y^2}={2x}x'(y)y$$</span>\nSubstitute <span class=\"math-container\">$x^2=z$</span>\n<span class=\"math-container\">$$z-y^2=z'y$$</span>\nIt's a first order linear DE:\n<span class=\"math-container\">$$z'y-z=-y^2$$</span>\n<span class=\"math-container\">$$ \\left ( \\frac z y\\right)'=-1$$</span>\nIntegrate:\n<span class=\"math-container\">$$ \\left ( \\frac z y\\right)=-y+C$$</span>\nFinally:\n<span class=\"math-container\">$$ \\boxed {x^2 +y^2=Cy}$$</span></p>\n" }
For $t \\in \\mathbb{R}$ these describe circular trajectories as expected.</p>\n\n<p><strong>Alternative 1</strong>: transform the vector field under the inversion</p>\n\n<p>$$\\sigma(x, y) = \\left(\\frac{x}{x^2+y^2}, \\frac{y}{x^2+y^2}\\right).$$</p>\n\n<p>This map is an involution ($\\sigma^2 = \\mathrm{id}$) and its Jacobian is $$D\\sigma_{(x,y)}=\\frac{1}{(x^2+y^2)^2}\\begin{pmatrix} y^2-x^2 &amp; -2xy \\\\ -2xy &amp; x^2-y^2 \\end{pmatrix}.$$</p>\n\n<p>Therefore it maps the element $$\\begin{pmatrix}x^2 - y^2 \\\\ 2xy \\end{pmatrix}$$ at point $(x, y)$ to the element</p>\n\n<p>$$\\frac{1}{(x^2+y^2)^2}\\begin{pmatrix} y^2-x^2 &amp; -2xy \\\\ -2xy &amp; x^2-y^2 \\end{pmatrix} \\begin{pmatrix}x^2 - y^2 \\\\ 2xy \\end{pmatrix} = \\begin{pmatrix} -1 \\\\ 0 \\end{pmatrix}$$ at point $\\sigma(x,y)$. The trajectories of the transformed vector field are easy to find. Then apply the inversion $\\sigma$ to find the trajectories of the original vector field.</p>\n\n<p><strong>Alternative 2</strong>: Since you know that the trajectories are circular start with the ansatz $$\\begin{cases} x = -R \\sin \\alpha \\\\ y = R + R \\cos \\alpha \\end{cases}$$ for some function $\\alpha$ of $t$. Then the given differential equations are consistent and both lead to $$\\frac{\\partial \\alpha}{\\partial t} = 2R \\, (1 + \\cos\\alpha).$$ One solution to this equation is $\\alpha(t) = 2 \\arctan(2R \\, t)$. Substituting this in the expressions for $x$ and $y$ gives the trajectory</p>\n\n<p>$$\\left(\\frac{-t}{t^2 + (2R)^{-2}}, \\frac{(2R)^{-1}}{t^2+(2R)^{-2}}\\right).$$</p>\n" }, { "answer_id": 2082560, "score": 5, "text": "<p>Note that the system implies \n$$y'(x)=\\frac{2xy(x)}{x^2-y(x)^2}.$$\n(This is the approach mentioned by @Sameh Shenawy in the comments.) \nThis can be solved by standard methods. 
\nFor example, setting $y(x)=xf(x)$ we arrive at the separable differential equation \n$$f'(x) = \\frac{1}{x} \\frac{f(x)(1+f(x)^2)}{1-f(x)^2}.$$\nThe relevant integral may be done by a partial fraction expansion. </p>\n\n<p>In my opinion the method put forward by @WimC is much preferable. </p>\n\n<p><strong>Addendum</strong></p>\n\n<p>The gory detail ... \n\\begin{align*}\nf'(x) &amp;= \\frac{1}{x} \\frac{f(x)(1+f(x)^2)}{1-f(x)^2} \\\\\n\\frac{1-f^2}{f(1+f^2)}df &amp;= \\frac{dx}{x} \\\\\n\\frac{(1+f^2)-2f^2}{f(1+f^2)} df \n\t&amp;= \\frac{dx}{x} \\\\\n\\int\\left(\\frac{1}{f} - \\frac{2f}{1+f^2}\\right)df\n\t&amp;= \\int\\frac{dx}{x} \\\\\n\\ln f - \\ln(1+f^2) &amp;= \\ln\\frac{x}{2r} \\\\\n\\ln\\frac{f}{1+f^2} &amp;= \\ln\\frac{x}{2r} \\\\\n\\frac{f}{1+f^2} &amp;= \\frac{x}{2r} \\\\\n\\frac{y/x}{1+y^2/x^2} &amp;= \\frac{x}{2r} \\\\\n\\frac{y}{x^2+y^2} &amp;= \\frac{1}{2r} \\\\\nx^2+y^2-2ry &amp;= 0 \\\\\nx^2 + y^2 - 2ry + r^2 &amp;= r^2 \\\\\nx^2 + (y-r)^2 &amp;= r^2.\n\\end{align*}\n(The integration constant was chosen to have the form $-\\ln 2r$ for convenience.) 
</p>\n" }, { "answer_id": 3527341, "score": 0, "text": "<p><span class=\"math-container\">$$X(x,y)=(x^2-y^2,2xy)$$</span>\n<span class=\"math-container\">$$\n\\begin{cases}\n\\dfrac{dx}{dt}=x^2-y^2 \\\\\n\\dfrac{dy}{dt}=2xy\n\\end{cases}\n$$</span>\n<span class=\"math-container\">$$y'=\\frac{2xy}{x^2-y^2}.$$</span>\nConsider <span class=\"math-container\">$x'$</span> instead of <span class=\"math-container\">$y'$</span>:\n<span class=\"math-container\">$${x^2-y^2}={2x}x'(y)y$$</span>\nSubstitute <span class=\"math-container\">$x^2=z$</span>\n<span class=\"math-container\">$$z-y^2=z'y$$</span>\nIt's a first order linear DE:\n<span class=\"math-container\">$$z'y-z=-y^2$$</span>\n<span class=\"math-container\">$$ \\left ( \\frac z y\\right)'=-1$$</span>\nIntegrate:\n<span class=\"math-container\">$$ \\left ( \\frac z y\\right)=-y+C$$</span>\nFinally:\n<span class=\"math-container\">$$ \\boxed {x^2 +y^2=Cy}$$</span></p>\n" }, { "answer_id": 2082450, "score": 6, "text": "<p>Indentifying $(x,y)$ with $z=x + y \\, \\mathrm{i}$ the differential equation reads $$\\frac{\\partial z}{\\partial t}=z^2.$$ Rewriting this we find $$\\frac{-\\partial z^{-1}}{\\partial t} = \\frac{\\partial z}{\\partial t} z^{-2} = 1$$ and so $$-z^{-1}= t+z_0 \\textrm{ or } z = -\\frac1{t+z_0}$$ for some fixed initial value $z_0 \\in \\mathbb{C}$. 
For $t \\in \\mathbb{R}$ these describe circular trajectories as expected.</p>\n\n<p><strong>Alternative 1</strong>: transform the vector field under the inversion</p>\n\n<p>$$\\sigma(x, y) = \\left(\\frac{x}{x^2+y^2}, \\frac{y}{x^2+y^2}\\right).$$</p>\n\n<p>This map is an involution ($\\sigma^2 = \\mathrm{id}$) and its Jacobian is $$D\\sigma_{(x,y)}=\\frac{1}{(x^2+y^2)^2}\\begin{pmatrix} y^2-x^2 &amp; -2xy \\\\ -2xy &amp; x^2-y^2 \\end{pmatrix}.$$</p>\n\n<p>Therefore it maps the element $$\\begin{pmatrix}x^2 - y^2 \\\\ 2xy \\end{pmatrix}$$ at point $(x, y)$ to the element</p>\n\n<p>$$\\frac{1}{(x^2+y^2)^2}\\begin{pmatrix} y^2-x^2 &amp; -2xy \\\\ -2xy &amp; x^2-y^2 \\end{pmatrix} \\begin{pmatrix}x^2 - y^2 \\\\ 2xy \\end{pmatrix} = \\begin{pmatrix} -1 \\\\ 0 \\end{pmatrix}$$ at point $\\sigma(x,y)$. The trajectories of the transformed vector field are easy to find. Then apply the inversion $\\sigma$ to find the trajectories of the original vector field.</p>\n\n<p><strong>Alternative 2</strong>: Since you know that the trajectories are circular start with the ansatz $$\\begin{cases} x = -R \\sin \\alpha \\\\ y = R + R \\cos \\alpha \\end{cases}$$ for some function $\\alpha$ of $t$. Then the given differential equations are consistent and both lead to $$\\frac{\\partial \\alpha}{\\partial t} = 2R \\, (1 + \\cos\\alpha).$$ One solution to this equation is $\\alpha(t) = 2 \\arctan(2R \\, t)$. Substituting this in the expressions for $x$ and $y$ gives the trajectory</p>\n\n<p>$$\\left(\\frac{-t}{t^2 + (2R)^{-2}}, \\frac{(2R)^{-1}}{t^2+(2R)^{-2}}\\right).$$</p>\n" }, { "answer_id": 2082560, "score": 5, "text": "<p>Note that the system implies \n$$y'(x)=\\frac{2xy(x)}{x^2-y(x)^2}.$$\n(This is the approach mentioned by @Sameh Shenawy in the comments.) \nThis can be solved by standard methods. 
\nFor example, setting $y(x)=xf(x)$ we arrive at the separable differential equation \n$$f'(x) = \\frac{1}{x} \\frac{f(x)(1+f(x)^2)}{1-f(x)^2}.$$\nThe relevant integral may be done by a partial fraction expansion. </p>\n\n<p>In my opinion the method put forward by @WimC is much preferable. </p>\n\n<p><strong>Addendum</strong></p>\n\n<p>The gory detail ... \n\\begin{align*}\nf'(x) &amp;= \\frac{1}{x} \\frac{f(x)(1+f(x)^2)}{1-f(x)^2} \\\\\n\\frac{1-f^2}{f(1+f^2)}df &amp;= \\frac{dx}{x} \\\\\n\\frac{(1+f^2)-2f^2}{f(1+f^2)} df \n\t&amp;= \\frac{dx}{x} \\\\\n\\int\\left(\\frac{1}{f} - \\frac{2f}{1+f^2}\\right)df\n\t&amp;= \\int\\frac{dx}{x} \\\\\n\\ln f - \\ln(1+f^2) &amp;= \\ln\\frac{x}{2r} \\\\\n\\ln\\frac{f}{1+f^2} &amp;= \\ln\\frac{x}{2r} \\\\\n\\frac{f}{1+f^2} &amp;= \\frac{x}{2r} \\\\\n\\frac{y/x}{1+y^2/x^2} &amp;= \\frac{x}{2r} \\\\\n\\frac{y}{x^2+y^2} &amp;= \\frac{1}{2r} \\\\\nx^2+y^2-2ry &amp;= 0 \\\\\nx^2 + y^2 - 2ry + r^2 &amp;= r^2 \\\\\nx^2 + (y-r)^2 &amp;= r^2.\n\\end{align*}\n(The integration constant was chosen to have the form $-\\ln 2r$ for convenience.) 
</p>\n" }, { "answer_id": 3527341, "score": 0, "text": "<p><span class=\"math-container\">$$X(x,y)=(x^2-y^2,2xy)$$</span>\n<span class=\"math-container\">$$\n\\begin{cases}\n\\dfrac{dx}{dt}=x^2-y^2 \\\\\n\\dfrac{dy}{dt}=2xy\n\\end{cases}\n$$</span>\n<span class=\"math-container\">$$y'=\\frac{2xy}{x^2-y^2}.$$</span>\nConsider <span class=\"math-container\">$x'$</span> instead of <span class=\"math-container\">$y'$</span>:\n<span class=\"math-container\">$${x^2-y^2}={2x}x'(y)y$$</span>\nSubstitute <span class=\"math-container\">$x^2=z$</span>\n<span class=\"math-container\">$$z-y^2=z'y$$</span>\nIt's a first order linear DE:\n<span class=\"math-container\">$$z'y-z=-y^2$$</span>\n<span class=\"math-container\">$$ \\left ( \\frac z y\\right)'=-1$$</span>\nIntegrate:\n<span class=\"math-container\">$$ \\left ( \\frac z y\\right)=-y+C$$</span>\nFinally:\n<span class=\"math-container\">$$ \\boxed {x^2 +y^2=Cy}$$</span></p>\n" }, { "answer_id": 2082450, "score": 6, "text": "<p>Indentifying $(x,y)$ with $z=x + y \\, \\mathrm{i}$ the differential equation reads $$\\frac{\\partial z}{\\partial t}=z^2.$$ Rewriting this we find $$\\frac{-\\partial z^{-1}}{\\partial t} = \\frac{\\partial z}{\\partial t} z^{-2} = 1$$ and so $$-z^{-1}= t+z_0 \\textrm{ or } z = -\\frac1{t+z_0}$$ for some fixed initial value $z_0 \\in \\mathbb{C}$. 
For $t \\in \\mathbb{R}$ these describe circular trajectories as expected.</p>\n\n<p><strong>Alternative 1</strong>: transform the vector field under the inversion</p>\n\n<p>$$\\sigma(x, y) = \\left(\\frac{x}{x^2+y^2}, \\frac{y}{x^2+y^2}\\right).$$</p>\n\n<p>This map is an involution ($\\sigma^2 = \\mathrm{id}$) and its Jacobian is $$D\\sigma_{(x,y)}=\\frac{1}{(x^2+y^2)^2}\\begin{pmatrix} y^2-x^2 &amp; -2xy \\\\ -2xy &amp; x^2-y^2 \\end{pmatrix}.$$</p>\n\n<p>Therefore it maps the element $$\\begin{pmatrix}x^2 - y^2 \\\\ 2xy \\end{pmatrix}$$ at point $(x, y)$ to the element</p>\n\n<p>$$\\frac{1}{(x^2+y^2)^2}\\begin{pmatrix} y^2-x^2 &amp; -2xy \\\\ -2xy &amp; x^2-y^2 \\end{pmatrix} \\begin{pmatrix}x^2 - y^2 \\\\ 2xy \\end{pmatrix} = \\begin{pmatrix} -1 \\\\ 0 \\end{pmatrix}$$ at point $\\sigma(x,y)$. The trajectories of the transformed vector field are easy to find. Then apply the inversion $\\sigma$ to find the trajectories of the original vector field.</p>\n\n<p><strong>Alternative 2</strong>: Since you know that the trajectories are circular start with the ansatz $$\\begin{cases} x = -R \\sin \\alpha \\\\ y = R + R \\cos \\alpha \\end{cases}$$ for some function $\\alpha$ of $t$. Then the given differential equations are consistent and both lead to $$\\frac{\\partial \\alpha}{\\partial t} = 2R \\, (1 + \\cos\\alpha).$$ One solution to this equation is $\\alpha(t) = 2 \\arctan(2R \\, t)$. Substituting this in the expressions for $x$ and $y$ gives the trajectory</p>\n\n<p>$$\\left(\\frac{-t}{t^2 + (2R)^{-2}}, \\frac{(2R)^{-1}}{t^2+(2R)^{-2}}\\right).$$</p>\n" }, { "answer_id": 2082560, "score": 5, "text": "<p>Note that the system implies \n$$y'(x)=\\frac{2xy(x)}{x^2-y(x)^2}.$$\n(This is the approach mentioned by @Sameh Shenawy in the comments.) \nThis can be solved by standard methods. 
\nFor example, setting $y(x)=xf(x)$ we arrive at the separable differential equation \n$$f'(x) = \\frac{1}{x} \\frac{f(x)(1+f(x)^2)}{1-f(x)^2}.$$\nThe relevant integral may be done by a partial fraction expansion. </p>\n\n<p>In my opinion the method put forward by @WimC is much preferable. </p>\n\n<p><strong>Addendum</strong></p>\n\n<p>The gory detail ... \n\\begin{align*}\nf'(x) &amp;= \\frac{1}{x} \\frac{f(x)(1+f(x)^2)}{1-f(x)^2} \\\\\n\\frac{1-f^2}{f(1+f^2)}df &amp;= \\frac{dx}{x} \\\\\n\\frac{(1+f^2)-2f^2}{f(1+f^2)} df \n\t&amp;= \\frac{dx}{x} \\\\\n\\int\\left(\\frac{1}{f} - \\frac{2f}{1+f^2}\\right)df\n\t&amp;= \\int\\frac{dx}{x} \\\\\n\\ln f - \\ln(1+f^2) &amp;= \\ln\\frac{x}{2r} \\\\\n\\ln\\frac{f}{1+f^2} &amp;= \\ln\\frac{x}{2r} \\\\\n\\frac{f}{1+f^2} &amp;= \\frac{x}{2r} \\\\\n\\frac{y/x}{1+y^2/x^2} &amp;= \\frac{x}{2r} \\\\\n\\frac{y}{x^2+y^2} &amp;= \\frac{1}{2r} \\\\\nx^2+y^2-2ry &amp;= 0 \\\\\nx^2 + y^2 - 2ry + r^2 &amp;= r^2 \\\\\nx^2 + (y-r)^2 &amp;= r^2.\n\\end{align*}\n(The integration constant was chosen to have the form $-\\ln 2r$ for convenience.) 
</p>\n" }, { "answer_id": 3527341, "score": 0, "text": "<p><span class=\"math-container\">$$X(x,y)=(x^2-y^2,2xy)$$</span>\n<span class=\"math-container\">$$\n\\begin{cases}\n\\dfrac{dx}{dt}=x^2-y^2 \\\\\n\\dfrac{dy}{dt}=2xy\n\\end{cases}\n$$</span>\n<span class=\"math-container\">$$y'=\\frac{2xy}{x^2-y^2}.$$</span>\nConsider <span class=\"math-container\">$x'$</span> instead of <span class=\"math-container\">$y'$</span>:\n<span class=\"math-container\">$${x^2-y^2}={2x}x'(y)y$$</span>\nSubstitute <span class=\"math-container\">$x^2=z$</span>\n<span class=\"math-container\">$$z-y^2=z'y$$</span>\nIt's a first order linear DE:\n<span class=\"math-container\">$$z'y-z=-y^2$$</span>\n<span class=\"math-container\">$$ \\left ( \\frac z y\\right)'=-1$$</span>\nIntegrate:\n<span class=\"math-container\">$$ \\left ( \\frac z y\\right)=-y+C$$</span>\nFinally:\n<span class=\"math-container\">$$ \\boxed {x^2 +y^2=Cy}$$</span></p>\n" }, { "answer_id": 2082450, "score": 6, "text": "<p>Indentifying $(x,y)$ with $z=x + y \\, \\mathrm{i}$ the differential equation reads $$\\frac{\\partial z}{\\partial t}=z^2.$$ Rewriting this we find $$\\frac{-\\partial z^{-1}}{\\partial t} = \\frac{\\partial z}{\\partial t} z^{-2} = 1$$ and so $$-z^{-1}= t+z_0 \\textrm{ or } z = -\\frac1{t+z_0}$$ for some fixed initial value $z_0 \\in \\mathbb{C}$. 
For $t \\in \\mathbb{R}$ these describe circular trajectories as expected.</p>\n\n<p><strong>Alternative 1</strong>: transform the vector field under the inversion</p>\n\n<p>$$\\sigma(x, y) = \\left(\\frac{x}{x^2+y^2}, \\frac{y}{x^2+y^2}\\right).$$</p>\n\n<p>This map is an involution ($\\sigma^2 = \\mathrm{id}$) and its Jacobian is $$D\\sigma_{(x,y)}=\\frac{1}{(x^2+y^2)^2}\\begin{pmatrix} y^2-x^2 &amp; -2xy \\\\ -2xy &amp; x^2-y^2 \\end{pmatrix}.$$</p>\n\n<p>Therefore it maps the element $$\\begin{pmatrix}x^2 - y^2 \\\\ 2xy \\end{pmatrix}$$ at point $(x, y)$ to the element</p>\n\n<p>$$\\frac{1}{(x^2+y^2)^2}\\begin{pmatrix} y^2-x^2 &amp; -2xy \\\\ -2xy &amp; x^2-y^2 \\end{pmatrix} \\begin{pmatrix}x^2 - y^2 \\\\ 2xy \\end{pmatrix} = \\begin{pmatrix} -1 \\\\ 0 \\end{pmatrix}$$ at point $\\sigma(x,y)$. The trajectories of the transformed vector field are easy to find. Then apply the inversion $\\sigma$ to find the trajectories of the original vector field.</p>\n\n<p><strong>Alternative 2</strong>: Since you know that the trajectories are circular start with the ansatz $$\\begin{cases} x = -R \\sin \\alpha \\\\ y = R + R \\cos \\alpha \\end{cases}$$ for some function $\\alpha$ of $t$. Then the given differential equations are consistent and both lead to $$\\frac{\\partial \\alpha}{\\partial t} = 2R \\, (1 + \\cos\\alpha).$$ One solution to this equation is $\\alpha(t) = 2 \\arctan(2R \\, t)$. Substituting this in the expressions for $x$ and $y$ gives the trajectory</p>\n\n<p>$$\\left(\\frac{-t}{t^2 + (2R)^{-2}}, \\frac{(2R)^{-1}}{t^2+(2R)^{-2}}\\right).$$</p>\n" }, { "answer_id": 2082560, "score": 5, "text": "<p>Note that the system implies \n$$y'(x)=\\frac{2xy(x)}{x^2-y(x)^2}.$$\n(This is the approach mentioned by @Sameh Shenawy in the comments.) \nThis can be solved by standard methods. 
\nFor example, setting $y(x)=xf(x)$ we arrive at the separable differential equation \n$$f'(x) = \\frac{1}{x} \\frac{f(x)(1+f(x)^2)}{1-f(x)^2}.$$\nThe relevant integral may be done by a partial fraction expansion. </p>\n\n<p>In my opinion the method put forward by @WimC is much preferable. </p>\n\n<p><strong>Addendum</strong></p>\n\n<p>The gory detail ... \n\\begin{align*}\nf'(x) &amp;= \\frac{1}{x} \\frac{f(x)(1+f(x)^2)}{1-f(x)^2} \\\\\n\\frac{1-f^2}{f(1+f^2)}df &amp;= \\frac{dx}{x} \\\\\n\\frac{(1+f^2)-2f^2}{f(1+f^2)} df \n\t&amp;= \\frac{dx}{x} \\\\\n\\int\\left(\\frac{1}{f} - \\frac{2f}{1+f^2}\\right)df\n\t&amp;= \\int\\frac{dx}{x} \\\\\n\\ln f - \\ln(1+f^2) &amp;= \\ln\\frac{x}{2r} \\\\\n\\ln\\frac{f}{1+f^2} &amp;= \\ln\\frac{x}{2r} \\\\\n\\frac{f}{1+f^2} &amp;= \\frac{x}{2r} \\\\\n\\frac{y/x}{1+y^2/x^2} &amp;= \\frac{x}{2r} \\\\\n\\frac{y}{x^2+y^2} &amp;= \\frac{1}{2r} \\\\\nx^2+y^2-2ry &amp;= 0 \\\\\nx^2 + y^2 - 2ry + r^2 &amp;= r^2 \\\\\nx^2 + (y-r)^2 &amp;= r^2.\n\\end{align*}\n(The integration constant was chosen to have the form $-\\ln 2r$ for convenience.) 
</p>\n" }, { "answer_id": 3527341, "score": 0, "text": "<p><span class=\"math-container\">$$X(x,y)=(x^2-y^2,2xy)$$</span>\n<span class=\"math-container\">$$\n\\begin{cases}\n\\dfrac{dx}{dt}=x^2-y^2 \\\\\n\\dfrac{dy}{dt}=2xy\n\\end{cases}\n$$</span>\n<span class=\"math-container\">$$y'=\\frac{2xy}{x^2-y^2}.$$</span>\nConsider <span class=\"math-container\">$x'$</span> instead of <span class=\"math-container\">$y'$</span>:\n<span class=\"math-container\">$${x^2-y^2}={2x}x'(y)y$$</span>\nSubstitute <span class=\"math-container\">$x^2=z$</span>\n<span class=\"math-container\">$$z-y^2=z'y$$</span>\nIt's a first order linear DE:\n<span class=\"math-container\">$$z'y-z=-y^2$$</span>\n<span class=\"math-container\">$$ \\left ( \\frac z y\\right)'=-1$$</span>\nIntegrate:\n<span class=\"math-container\">$$ \\left ( \\frac z y\\right)=-y+C$$</span>\nFinally:\n<span class=\"math-container\">$$ \\boxed {x^2 +y^2=Cy}$$</span></p>\n" }, { "answer_id": 2082450, "score": 6, "text": "<p>Indentifying $(x,y)$ with $z=x + y \\, \\mathrm{i}$ the differential equation reads $$\\frac{\\partial z}{\\partial t}=z^2.$$ Rewriting this we find $$\\frac{-\\partial z^{-1}}{\\partial t} = \\frac{\\partial z}{\\partial t} z^{-2} = 1$$ and so $$-z^{-1}= t+z_0 \\textrm{ or } z = -\\frac1{t+z_0}$$ for some fixed initial value $z_0 \\in \\mathbb{C}$. 
For $t \\in \\mathbb{R}$ these describe circular trajectories as expected.</p>\n\n<p><strong>Alternative 1</strong>: transform the vector field under the inversion</p>\n\n<p>$$\\sigma(x, y) = \\left(\\frac{x}{x^2+y^2}, \\frac{y}{x^2+y^2}\\right).$$</p>\n\n<p>This map is an involution ($\\sigma^2 = \\mathrm{id}$) and its Jacobian is $$D\\sigma_{(x,y)}=\\frac{1}{(x^2+y^2)^2}\\begin{pmatrix} y^2-x^2 &amp; -2xy \\\\ -2xy &amp; x^2-y^2 \\end{pmatrix}.$$</p>\n\n<p>Therefore it maps the element $$\\begin{pmatrix}x^2 - y^2 \\\\ 2xy \\end{pmatrix}$$ at point $(x, y)$ to the element</p>\n\n<p>$$\\frac{1}{(x^2+y^2)^2}\\begin{pmatrix} y^2-x^2 &amp; -2xy \\\\ -2xy &amp; x^2-y^2 \\end{pmatrix} \\begin{pmatrix}x^2 - y^2 \\\\ 2xy \\end{pmatrix} = \\begin{pmatrix} -1 \\\\ 0 \\end{pmatrix}$$ at point $\\sigma(x,y)$. The trajectories of the transformed vector field are easy to find. Then apply the inversion $\\sigma$ to find the trajectories of the original vector field.</p>\n\n<p><strong>Alternative 2</strong>: Since you know that the trajectories are circular start with the ansatz $$\\begin{cases} x = -R \\sin \\alpha \\\\ y = R + R \\cos \\alpha \\end{cases}$$ for some function $\\alpha$ of $t$. Then the given differential equations are consistent and both lead to $$\\frac{\\partial \\alpha}{\\partial t} = 2R \\, (1 + \\cos\\alpha).$$ One solution to this equation is $\\alpha(t) = 2 \\arctan(2R \\, t)$. Substituting this in the expressions for $x$ and $y$ gives the trajectory</p>\n\n<p>$$\\left(\\frac{-t}{t^2 + (2R)^{-2}}, \\frac{(2R)^{-1}}{t^2+(2R)^{-2}}\\right).$$</p>\n" }, { "answer_id": 2082560, "score": 5, "text": "<p>Note that the system implies \n$$y'(x)=\\frac{2xy(x)}{x^2-y(x)^2}.$$\n(This is the approach mentioned by @Sameh Shenawy in the comments.) \nThis can be solved by standard methods. 
\nFor example, setting $y(x)=xf(x)$ we arrive at the separable differential equation \n$$f'(x) = \\frac{1}{x} \\frac{f(x)(1+f(x)^2)}{1-f(x)^2}.$$\nThe relevant integral may be done by a partial fraction expansion. </p>\n\n<p>In my opinion the method put forward by @WimC is much preferable. </p>\n\n<p><strong>Addendum</strong></p>\n\n<p>The gory detail ... \n\\begin{align*}\nf'(x) &amp;= \\frac{1}{x} \\frac{f(x)(1+f(x)^2)}{1-f(x)^2} \\\\\n\\frac{1-f^2}{f(1+f^2)}df &amp;= \\frac{dx}{x} \\\\\n\\frac{(1+f^2)-2f^2}{f(1+f^2)} df \n\t&amp;= \\frac{dx}{x} \\\\\n\\int\\left(\\frac{1}{f} - \\frac{2f}{1+f^2}\\right)df\n\t&amp;= \\int\\frac{dx}{x} \\\\\n\\ln f - \\ln(1+f^2) &amp;= \\ln\\frac{x}{2r} \\\\\n\\ln\\frac{f}{1+f^2} &amp;= \\ln\\frac{x}{2r} \\\\\n\\frac{f}{1+f^2} &amp;= \\frac{x}{2r} \\\\\n\\frac{y/x}{1+y^2/x^2} &amp;= \\frac{x}{2r} \\\\\n\\frac{y}{x^2+y^2} &amp;= \\frac{1}{2r} \\\\\nx^2+y^2-2ry &amp;= 0 \\\\\nx^2 + y^2 - 2ry + r^2 &amp;= r^2 \\\\\nx^2 + (y-r)^2 &amp;= r^2.\n\\end{align*}\n(The integration constant was chosen to have the form $-\\ln 2r$ for convenience.) 
</p>\n" }, { "answer_id": 3527341, "score": 0, "text": "<p><span class=\"math-container\">$$X(x,y)=(x^2-y^2,2xy)$$</span>\n<span class=\"math-container\">$$\n\\begin{cases}\n\\dfrac{dx}{dt}=x^2-y^2 \\\\\n\\dfrac{dy}{dt}=2xy\n\\end{cases}\n$$</span>\n<span class=\"math-container\">$$y'=\\frac{2xy}{x^2-y^2}.$$</span>\nConsider <span class=\"math-container\">$x'$</span> instead of <span class=\"math-container\">$y'$</span>:\n<span class=\"math-container\">$${x^2-y^2}={2x}x'(y)y$$</span>\nSubstitute <span class=\"math-container\">$x^2=z$</span>\n<span class=\"math-container\">$$z-y^2=z'y$$</span>\nIt's a first order linear DE:\n<span class=\"math-container\">$$z'y-z=-y^2$$</span>\n<span class=\"math-container\">$$ \\left ( \\frac z y\\right)'=-1$$</span>\nIntegrate:\n<span class=\"math-container\">$$ \\left ( \\frac z y\\right)=-y+C$$</span>\nFinally:\n<span class=\"math-container\">$$ \\boxed {x^2 +y^2=Cy}$$</span></p>\n" }, { "answer_id": 2082450, "score": 6, "text": "<p>Indentifying $(x,y)$ with $z=x + y \\, \\mathrm{i}$ the differential equation reads $$\\frac{\\partial z}{\\partial t}=z^2.$$ Rewriting this we find $$\\frac{-\\partial z^{-1}}{\\partial t} = \\frac{\\partial z}{\\partial t} z^{-2} = 1$$ and so $$-z^{-1}= t+z_0 \\textrm{ or } z = -\\frac1{t+z_0}$$ for some fixed initial value $z_0 \\in \\mathbb{C}$. 
For $t \\in \\mathbb{R}$ these describe circular trajectories as expected.</p>\n\n<p><strong>Alternative 1</strong>: transform the vector field under the inversion</p>\n\n<p>$$\\sigma(x, y) = \\left(\\frac{x}{x^2+y^2}, \\frac{y}{x^2+y^2}\\right).$$</p>\n\n<p>This map is an involution ($\\sigma^2 = \\mathrm{id}$) and its Jacobian is $$D\\sigma_{(x,y)}=\\frac{1}{(x^2+y^2)^2}\\begin{pmatrix} y^2-x^2 &amp; -2xy \\\\ -2xy &amp; x^2-y^2 \\end{pmatrix}.$$</p>\n\n<p>Therefore it maps the element $$\\begin{pmatrix}x^2 - y^2 \\\\ 2xy \\end{pmatrix}$$ at point $(x, y)$ to the element</p>\n\n<p>$$\\frac{1}{(x^2+y^2)^2}\\begin{pmatrix} y^2-x^2 &amp; -2xy \\\\ -2xy &amp; x^2-y^2 \\end{pmatrix} \\begin{pmatrix}x^2 - y^2 \\\\ 2xy \\end{pmatrix} = \\begin{pmatrix} -1 \\\\ 0 \\end{pmatrix}$$ at point $\\sigma(x,y)$. The trajectories of the transformed vector field are easy to find. Then apply the inversion $\\sigma$ to find the trajectories of the original vector field.</p>\n\n<p><strong>Alternative 2</strong>: Since you know that the trajectories are circular start with the ansatz $$\\begin{cases} x = -R \\sin \\alpha \\\\ y = R + R \\cos \\alpha \\end{cases}$$ for some function $\\alpha$ of $t$. Then the given differential equations are consistent and both lead to $$\\frac{\\partial \\alpha}{\\partial t} = 2R \\, (1 + \\cos\\alpha).$$ One solution to this equation is $\\alpha(t) = 2 \\arctan(2R \\, t)$. Substituting this in the expressions for $x$ and $y$ gives the trajectory</p>\n\n<p>$$\\left(\\frac{-t}{t^2 + (2R)^{-2}}, \\frac{(2R)^{-1}}{t^2+(2R)^{-2}}\\right).$$</p>\n" }, { "answer_id": 2082560, "score": 5, "text": "<p>Note that the system implies \n$$y'(x)=\\frac{2xy(x)}{x^2-y(x)^2}.$$\n(This is the approach mentioned by @Sameh Shenawy in the comments.) \nThis can be solved by standard methods. 
\nFor example, setting $y(x)=xf(x)$ we arrive at the separable differential equation \n$$f'(x) = \\frac{1}{x} \\frac{f(x)(1+f(x)^2)}{1-f(x)^2}.$$\nThe relevant integral may be done by a partial fraction expansion. </p>\n\n<p>In my opinion the method put forward by @WimC is much preferable. </p>\n\n<p><strong>Addendum</strong></p>\n\n<p>The gory detail ... \n\\begin{align*}\nf'(x) &amp;= \\frac{1}{x} \\frac{f(x)(1+f(x)^2)}{1-f(x)^2} \\\\\n\\frac{1-f^2}{f(1+f^2)}df &amp;= \\frac{dx}{x} \\\\\n\\frac{(1+f^2)-2f^2}{f(1+f^2)} df \n\t&amp;= \\frac{dx}{x} \\\\\n\\int\\left(\\frac{1}{f} - \\frac{2f}{1+f^2}\\right)df\n\t&amp;= \\int\\frac{dx}{x} \\\\\n\\ln f - \\ln(1+f^2) &amp;= \\ln\\frac{x}{2r} \\\\\n\\ln\\frac{f}{1+f^2} &amp;= \\ln\\frac{x}{2r} \\\\\n\\frac{f}{1+f^2} &amp;= \\frac{x}{2r} \\\\\n\\frac{y/x}{1+y^2/x^2} &amp;= \\frac{x}{2r} \\\\\n\\frac{y}{x^2+y^2} &amp;= \\frac{1}{2r} \\\\\nx^2+y^2-2ry &amp;= 0 \\\\\nx^2 + y^2 - 2ry + r^2 &amp;= r^2 \\\\\nx^2 + (y-r)^2 &amp;= r^2.\n\\end{align*}\n(The integration constant was chosen to have the form $-\\ln 2r$ for convenience.) 
</p>\n" }, { "answer_id": 3527341, "score": 0, "text": "<p><span class=\"math-container\">$$X(x,y)=(x^2-y^2,2xy)$$</span>\n<span class=\"math-container\">$$\n\\begin{cases}\n\\dfrac{dx}{dt}=x^2-y^2 \\\\\n\\dfrac{dy}{dt}=2xy\n\\end{cases}\n$$</span>\n<span class=\"math-container\">$$y'=\\frac{2xy}{x^2-y^2}.$$</span>\nConsider <span class=\"math-container\">$x'$</span> instead of <span class=\"math-container\">$y'$</span>:\n<span class=\"math-container\">$${x^2-y^2}={2x}x'(y)y$$</span>\nSubstitute <span class=\"math-container\">$x^2=z$</span>\n<span class=\"math-container\">$$z-y^2=z'y$$</span>\nIt's a first order linear DE:\n<span class=\"math-container\">$$z'y-z=-y^2$$</span>\n<span class=\"math-container\">$$ \\left ( \\frac z y\\right)'=-1$$</span>\nIntegrate:\n<span class=\"math-container\">$$ \\left ( \\frac z y\\right)=-y+C$$</span>\nFinally:\n<span class=\"math-container\">$$ \\boxed {x^2 +y^2=Cy}$$</span></p>\n" }, { "answer_id": 2082450, "score": 6, "text": "<p>Indentifying $(x,y)$ with $z=x + y \\, \\mathrm{i}$ the differential equation reads $$\\frac{\\partial z}{\\partial t}=z^2.$$ Rewriting this we find $$\\frac{-\\partial z^{-1}}{\\partial t} = \\frac{\\partial z}{\\partial t} z^{-2} = 1$$ and so $$-z^{-1}= t+z_0 \\textrm{ or } z = -\\frac1{t+z_0}$$ for some fixed initial value $z_0 \\in \\mathbb{C}$. 
For $t \\in \\mathbb{R}$ these describe circular trajectories as expected.</p>\n\n<p><strong>Alternative 1</strong>: transform the vector field under the inversion</p>\n\n<p>$$\\sigma(x, y) = \\left(\\frac{x}{x^2+y^2}, \\frac{y}{x^2+y^2}\\right).$$</p>\n\n<p>This map is an involution ($\\sigma^2 = \\mathrm{id}$) and its Jacobian is $$D\\sigma_{(x,y)}=\\frac{1}{(x^2+y^2)^2}\\begin{pmatrix} y^2-x^2 &amp; -2xy \\\\ -2xy &amp; x^2-y^2 \\end{pmatrix}.$$</p>\n\n<p>Therefore it maps the element $$\\begin{pmatrix}x^2 - y^2 \\\\ 2xy \\end{pmatrix}$$ at point $(x, y)$ to the element</p>\n\n<p>$$\\frac{1}{(x^2+y^2)^2}\\begin{pmatrix} y^2-x^2 &amp; -2xy \\\\ -2xy &amp; x^2-y^2 \\end{pmatrix} \\begin{pmatrix}x^2 - y^2 \\\\ 2xy \\end{pmatrix} = \\begin{pmatrix} -1 \\\\ 0 \\end{pmatrix}$$ at point $\\sigma(x,y)$. The trajectories of the transformed vector field are easy to find. Then apply the inversion $\\sigma$ to find the trajectories of the original vector field.</p>\n\n<p><strong>Alternative 2</strong>: Since you know that the trajectories are circular start with the ansatz $$\\begin{cases} x = -R \\sin \\alpha \\\\ y = R + R \\cos \\alpha \\end{cases}$$ for some function $\\alpha$ of $t$. Then the given differential equations are consistent and both lead to $$\\frac{\\partial \\alpha}{\\partial t} = 2R \\, (1 + \\cos\\alpha).$$ One solution to this equation is $\\alpha(t) = 2 \\arctan(2R \\, t)$. Substituting this in the expressions for $x$ and $y$ gives the trajectory</p>\n\n<p>$$\\left(\\frac{-t}{t^2 + (2R)^{-2}}, \\frac{(2R)^{-1}}{t^2+(2R)^{-2}}\\right).$$</p>\n" }, { "answer_id": 2082560, "score": 5, "text": "<p>Note that the system implies \n$$y'(x)=\\frac{2xy(x)}{x^2-y(x)^2}.$$\n(This is the approach mentioned by @Sameh Shenawy in the comments.) \nThis can be solved by standard methods. 
\nFor example, setting $y(x)=xf(x)$ we arrive at the separable differential equation \n$$f'(x) = \\frac{1}{x} \\frac{f(x)(1+f(x)^2)}{1-f(x)^2}.$$\nThe relevant integral may be done by a partial fraction expansion. </p>\n\n<p>In my opinion the method put forward by @WimC is much preferable. </p>\n\n<p><strong>Addendum</strong></p>\n\n<p>The gory detail ... \n\\begin{align*}\nf'(x) &amp;= \\frac{1}{x} \\frac{f(x)(1+f(x)^2)}{1-f(x)^2} \\\\\n\\frac{1-f^2}{f(1+f^2)}df &amp;= \\frac{dx}{x} \\\\\n\\frac{(1+f^2)-2f^2}{f(1+f^2)} df \n\t&amp;= \\frac{dx}{x} \\\\\n\\int\\left(\\frac{1}{f} - \\frac{2f}{1+f^2}\\right)df\n\t&amp;= \\int\\frac{dx}{x} \\\\\n\\ln f - \\ln(1+f^2) &amp;= \\ln\\frac{x}{2r} \\\\\n\\ln\\frac{f}{1+f^2} &amp;= \\ln\\frac{x}{2r} \\\\\n\\frac{f}{1+f^2} &amp;= \\frac{x}{2r} \\\\\n\\frac{y/x}{1+y^2/x^2} &amp;= \\frac{x}{2r} \\\\\n\\frac{y}{x^2+y^2} &amp;= \\frac{1}{2r} \\\\\nx^2+y^2-2ry &amp;= 0 \\\\\nx^2 + y^2 - 2ry + r^2 &amp;= r^2 \\\\\nx^2 + (y-r)^2 &amp;= r^2.\n\\end{align*}\n(The integration constant was chosen to have the form $-\\ln 2r$ for convenience.) 
</p>\n" }, { "answer_id": 3527341, "score": 0, "text": "<p><span class=\"math-container\">$$X(x,y)=(x^2-y^2,2xy)$$</span>\n<span class=\"math-container\">$$\n\\begin{cases}\n\\dfrac{dx}{dt}=x^2-y^2 \\\\\n\\dfrac{dy}{dt}=2xy\n\\end{cases}\n$$</span>\n<span class=\"math-container\">$$y'=\\frac{2xy}{x^2-y^2}.$$</span>\nConsider <span class=\"math-container\">$x'$</span> instead of <span class=\"math-container\">$y'$</span>:\n<span class=\"math-container\">$${x^2-y^2}={2x}x'(y)y$$</span>\nSubstitute <span class=\"math-container\">$x^2=z$</span>\n<span class=\"math-container\">$$z-y^2=z'y$$</span>\nIt's a first order linear DE:\n<span class=\"math-container\">$$z'y-z=-y^2$$</span>\n<span class=\"math-container\">$$ \\left ( \\frac z y\\right)'=-1$$</span>\nIntegrate:\n<span class=\"math-container\">$$ \\left ( \\frac z y\\right)=-y+C$$</span>\nFinally:\n<span class=\"math-container\">$$ \\boxed {x^2 +y^2=Cy}$$</span></p>\n" }, { "answer_id": 2082450, "score": 6, "text": "<p>Indentifying $(x,y)$ with $z=x + y \\, \\mathrm{i}$ the differential equation reads $$\\frac{\\partial z}{\\partial t}=z^2.$$ Rewriting this we find $$\\frac{-\\partial z^{-1}}{\\partial t} = \\frac{\\partial z}{\\partial t} z^{-2} = 1$$ and so $$-z^{-1}= t+z_0 \\textrm{ or } z = -\\frac1{t+z_0}$$ for some fixed initial value $z_0 \\in \\mathbb{C}$. 
For $t \\in \\mathbb{R}$ these describe circular trajectories as expected.</p>\n\n<p><strong>Alternative 1</strong>: transform the vector field under the inversion</p>\n\n<p>$$\\sigma(x, y) = \\left(\\frac{x}{x^2+y^2}, \\frac{y}{x^2+y^2}\\right).$$</p>\n\n<p>This map is an involution ($\\sigma^2 = \\mathrm{id}$) and its Jacobian is $$D\\sigma_{(x,y)}=\\frac{1}{(x^2+y^2)^2}\\begin{pmatrix} y^2-x^2 &amp; -2xy \\\\ -2xy &amp; x^2-y^2 \\end{pmatrix}.$$</p>\n\n<p>Therefore it maps the element $$\\begin{pmatrix}x^2 - y^2 \\\\ 2xy \\end{pmatrix}$$ at point $(x, y)$ to the element</p>\n\n<p>$$\\frac{1}{(x^2+y^2)^2}\\begin{pmatrix} y^2-x^2 &amp; -2xy \\\\ -2xy &amp; x^2-y^2 \\end{pmatrix} \\begin{pmatrix}x^2 - y^2 \\\\ 2xy \\end{pmatrix} = \\begin{pmatrix} -1 \\\\ 0 \\end{pmatrix}$$ at point $\\sigma(x,y)$. The trajectories of the transformed vector field are easy to find. Then apply the inversion $\\sigma$ to find the trajectories of the original vector field.</p>\n\n<p><strong>Alternative 2</strong>: Since you know that the trajectories are circular start with the ansatz $$\\begin{cases} x = -R \\sin \\alpha \\\\ y = R + R \\cos \\alpha \\end{cases}$$ for some function $\\alpha$ of $t$. Then the given differential equations are consistent and both lead to $$\\frac{\\partial \\alpha}{\\partial t} = 2R \\, (1 + \\cos\\alpha).$$ One solution to this equation is $\\alpha(t) = 2 \\arctan(2R \\, t)$. Substituting this in the expressions for $x$ and $y$ gives the trajectory</p>\n\n<p>$$\\left(\\frac{-t}{t^2 + (2R)^{-2}}, \\frac{(2R)^{-1}}{t^2+(2R)^{-2}}\\right).$$</p>\n" } ]
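The closed-form trajectory $z(t) = -1/(t+z_0)$ derived in answer 2082450, and the circle invariant implied by the other two answers, can be checked numerically. The following is an editorial illustration and not part of the original post; the initial value `z0` and the helper names `z` and `invariant` are chosen here for the sketch.

```python
# Editorial sketch (not from the original answers): identify the plane with the
# complex numbers, so the system x' = x^2 - y^2, y' = 2xy becomes dz/dt = z^2,
# solved by z(t) = -1/(t + z0).
z0 = complex(0.3, -0.7)  # arbitrary initial data; z(0) = -1/z0

def z(t):
    return -1.0 / (t + z0)

# Check dz/dt = z^2 with a central finite difference.
h = 1e-6
t = 0.5
dz = (z(t + h) - z(t - h)) / (2 * h)
assert abs(dz - z(t) ** 2) < 1e-4

# The quantity y/(x^2 + y^2) is constant along the trajectory, which is
# exactly the circle condition x^2 + y^2 = C*y from the other answers.
def invariant(t):
    w = z(t)
    return w.imag / (w.real ** 2 + w.imag ** 2)

vals = [invariant(t) for t in (-2.0, 0.0, 0.8, 5.0)]
assert max(vals) - min(vals) < 1e-9
```

The constant value of `invariant` works out to $\operatorname{Im} z_0$, matching the constant $1/2r$ that appears in the separable-equation answer.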
Question: Why are the integral curves of this vector field easy to find? Answer 1: <p>Identifying $(x,y)$ with $z=x + y \, \mathrm{i}$ the differential equation reads $$\frac{\partial z}{\partial t}=z^2.$$ Rewriting this we find $$\frac{-\partial z^{-1}}{\partial t} = \frac{\partial z}{\partial t} z^{-2} = 1$$ and so $$-z^{-1}= t+z_0 \textrm{ or } z = -\frac1{t+z_0}$$ for some fixed initial value $z_0 \in \mathbb{C}$. For $t \in \mathbb{R}$ these describe circular trajectories as expected.</p> <p><strong>Alternative 1</strong>: transform the vector field under the inversion</p> <p>$$\sigma(x, y) = \left(\frac{x}{x^2+y^2}, \frac{y}{x^2+y^2}\right).$$</p> <p>This map is an involution ($\sigma^2 = \mathrm{id}$) and its Jacobian is $$D\sigma_{(x,y)}=\frac{1}{(x^2+y^2)^2}\begin{pmatrix} y^2-x^2 &amp; -2xy \\ -2xy &amp; x^2-y^2 \end{pmatrix}.$$</p> <p>Therefore it maps the element $$\begin{pmatrix}x^2 - y^2 \\ 2xy \end{pmatrix}$$ at point $(x, y)$ to the element</p> <p>$$\frac{1}{(x^2+y^2)^2}\begin{pmatrix} y^2-x^2 &amp; -2xy \\ -2xy &amp; x^2-y^2 \end{pmatrix} \begin{pmatrix}x^2 - y^2 \\ 2xy \end{pmatrix} = \begin{pmatrix} -1 \\ 0 \end{pmatrix}$$ at point $\sigma(x,y)$. The trajectories of the transformed vector field are easy to find. Then apply the inversion $\sigma$ to find the trajectories of the original vector field.</p> <p><strong>Alternative 2</strong>: Since you know that the trajectories are circular, start with the ansatz $$\begin{cases} x = -R \sin \alpha \\ y = R + R \cos \alpha \end{cases}$$ for some function $\alpha$ of $t$. Then the given differential equations are consistent and both lead to $$\frac{\partial \alpha}{\partial t} = 2R \, (1 + \cos\alpha).$$ One solution to this equation is $\alpha(t) = 2 \arctan(2R \, t)$. 
Substituting this in the expressions for $x$ and $y$ gives the trajectory</p> <p>$$\left(\frac{-t}{t^2 + (2R)^{-2}}, \frac{(2R)^{-1}}{t^2+(2R)^{-2}}\right).$$</p> Answer 2: <p>Note that the system implies $$y'(x)=\frac{2xy(x)}{x^2-y(x)^2}.$$ (This is the approach mentioned by @Sameh Shenawy in the comments.) This can be solved by standard methods. For example, setting $y(x)=xf(x)$ we arrive at the separable differential equation $$f'(x) = \frac{1}{x} \frac{f(x)(1+f(x)^2)}{1-f(x)^2}.$$ The relevant integral may be done by a partial fraction expansion. </p> <p>In my opinion the method put forward by @WimC is much preferable. </p> <p><strong>Addendum</strong></p> <p>The gory detail ... \begin{align*} f'(x) &amp;= \frac{1}{x} \frac{f(x)(1+f(x)^2)}{1-f(x)^2} \\ \frac{1-f^2}{f(1+f^2)}df &amp;= \frac{dx}{x} \\ \frac{(1+f^2)-2f^2}{f(1+f^2)} df &amp;= \frac{dx}{x} \\ \int\left(\frac{1}{f} - \frac{2f}{1+f^2}\right)df &amp;= \int\frac{dx}{x} \\ \ln f - \ln(1+f^2) &amp;= \ln\frac{x}{2r} \\ \ln\frac{f}{1+f^2} &amp;= \ln\frac{x}{2r} \\ \frac{f}{1+f^2} &amp;= \frac{x}{2r} \\ \frac{y/x}{1+y^2/x^2} &amp;= \frac{x}{2r} \\ \frac{y}{x^2+y^2} &amp;= \frac{1}{2r} \\ x^2+y^2-2ry &amp;= 0 \\ x^2 + y^2 - 2ry + r^2 &amp;= r^2 \\ x^2 + (y-r)^2 &amp;= r^2. \end{align*} (The integration constant was chosen to have the form $-\ln 2r$ for convenience.) 
</p> Answer 3: <p><span class="math-container">$$X(x,y)=(x^2-y^2,2xy)$$</span> <span class="math-container">$$ \begin{cases} \dfrac{dx}{dt}=x^2-y^2 \\ \dfrac{dy}{dt}=2xy \end{cases} $$</span> <span class="math-container">$$y'=\frac{2xy}{x^2-y^2}.$$</span> Consider <span class="math-container">$x'$</span> instead of <span class="math-container">$y'$</span>: <span class="math-container">$${x^2-y^2}={2x}x'(y)y$$</span> Substitute <span class="math-container">$x^2=z$</span> <span class="math-container">$$z-y^2=z'y$$</span> It's a first order linear DE: <span class="math-container">$$z'y-z=-y^2$$</span> <span class="math-container">$$ \left ( \frac z y\right)'=-1$$</span> Integrate: <span class="math-container">$$ \left ( \frac z y\right)=-y+C$$</span> Finally: <span class="math-container">$$ \boxed {x^2 +y^2=Cy}$$</span></p>
The trajectories of the transformed vector field are easy to find. Then apply the inversion $\sigma$ to find the trajectories of the original vector field.</p> <p><strong>Alternative 2</strong>: Since you know that the trajectories are circular start with the ansatz $$\begin{cases} x = -R \sin \alpha \\ y = R + R \cos \alpha \end{cases}$$ for some function $\alpha$ of $t$. Then the given differential equations are consistent and both lead to $$\frac{\partial \alpha}{\partial t} = 2R \, (1 + \cos\alpha).$$ One solution to this equation is $\alpha(t) = 2 \arctan(2R \, t)$. Substituting this in the expressions for $x$ and $y$ gives the trajectory</p> <p>$$\left(\frac{-t}{t^2 + (2R)^{-2}}, \frac{(2R)^{-1}}{t^2+(2R)^{-2}}\right).$$</p> Answer 23: <p>Note that the system implies $$y'(x)=\frac{2xy(x)}{x^2-y(x)^2}.$$ (This is the approach mentioned by @Sameh Shenawy in the comments.) This can be solved by standard methods. For example, setting $y(x)=xf(x)$ we arrive at the separable differential equation $$f'(x) = \frac{1}{x} \frac{f(x)(1+f(x)^2)}{1-f(x)^2}.$$ The relevant integral may be done by a partial fraction expansion. </p> <p>In my opinion the method put forward by @WimC is much preferable. </p> <p><strong>Addendum</strong></p> <p>The gory detail ... \begin{align*} f'(x) &amp;= \frac{1}{x} \frac{f(x)(1+f(x)^2)}{1-f(x)^2} \\ \frac{1-f^2}{f(1+f^2)}df &amp;= \frac{dx}{x} \\ \frac{(1+f^2)-2f^2}{f(1+f^2)} df &amp;= \frac{dx}{x} \\ \int\left(\frac{1}{f} - \frac{2f}{1+f^2}\right)df &amp;= \int\frac{dx}{x} \\ \ln f - \ln(1+f^2) &amp;= \ln\frac{x}{2r} \\ \ln\frac{f}{1+f^2} &amp;= \ln\frac{x}{2r} \\ \frac{f}{1+f^2} &amp;= \frac{x}{2r} \\ \frac{y/x}{1+y^2/x^2} &amp;= \frac{x}{2r} \\ \frac{y}{x^2+y^2} &amp;= \frac{1}{2r} \\ x^2+y^2-2ry &amp;= 0 \\ x^2 + y^2 - 2ry + r^2 &amp;= r^2 \\ x^2 + (y-r)^2 &amp;= r^2. \end{align*} (The integration constant was chosen to have the form $-\ln 2r$ for convenience.) 
</p> Answer 24: <p><span class="math-container">$$X(x,y)=(x^2-y^2,2xy)$$</span> <span class="math-container">$$ \begin{cases} \dfrac{dx}{dt}=x^2-y^2 \\ \dfrac{dy}{dt}=2xy \end{cases} $$</span> <span class="math-container">$$y'=\frac{2xy}{x^2-y^2}.$$</span> Consider <span class="math-container">$x'$</span> instead of <span class="math-container">$y'$</span>: <span class="math-container">$${x^2-y^2}={2x}x'(y)y$$</span> Substitute <span class="math-container">$x^2=z$</span> <span class="math-container">$$z-y^2=z'y$$</span> It's a first order linear DE: <span class="math-container">$$z'y-z=-y^2$$</span> <span class="math-container">$$ \left ( \frac z y\right)'=-1$$</span> Integrate: <span class="math-container">$$ \left ( \frac z y\right)=-y+C$$</span> Finally: <span class="math-container">$$ \boxed {x^2 +y^2=Cy}$$</span></p> Answer 25: <p>Indentifying $(x,y)$ with $z=x + y \, \mathrm{i}$ the differential equation reads $$\frac{\partial z}{\partial t}=z^2.$$ Rewriting this we find $$\frac{-\partial z^{-1}}{\partial t} = \frac{\partial z}{\partial t} z^{-2} = 1$$ and so $$-z^{-1}= t+z_0 \textrm{ or } z = -\frac1{t+z_0}$$ for some fixed initial value $z_0 \in \mathbb{C}$. For $t \in \mathbb{R}$ these describe circular trajectories as expected.</p> <p><strong>Alternative 1</strong>: transform the vector field under the inversion</p> <p>$$\sigma(x, y) = \left(\frac{x}{x^2+y^2}, \frac{y}{x^2+y^2}\right).$$</p> <p>This map is an involution ($\sigma^2 = \mathrm{id}$) and its Jacobian is $$D\sigma_{(x,y)}=\frac{1}{(x^2+y^2)^2}\begin{pmatrix} y^2-x^2 &amp; -2xy \\ -2xy &amp; x^2-y^2 \end{pmatrix}.$$</p> <p>Therefore it maps the element $$\begin{pmatrix}x^2 - y^2 \\ 2xy \end{pmatrix}$$ at point $(x, y)$ to the element</p> <p>$$\frac{1}{(x^2+y^2)^2}\begin{pmatrix} y^2-x^2 &amp; -2xy \\ -2xy &amp; x^2-y^2 \end{pmatrix} \begin{pmatrix}x^2 - y^2 \\ 2xy \end{pmatrix} = \begin{pmatrix} -1 \\ 0 \end{pmatrix}$$ at point $\sigma(x,y)$. 
The trajectories of the transformed vector field are easy to find. Then apply the inversion $\sigma$ to find the trajectories of the original vector field.</p> <p><strong>Alternative 2</strong>: Since you know that the trajectories are circular start with the ansatz $$\begin{cases} x = -R \sin \alpha \\ y = R + R \cos \alpha \end{cases}$$ for some function $\alpha$ of $t$. Then the given differential equations are consistent and both lead to $$\frac{\partial \alpha}{\partial t} = 2R \, (1 + \cos\alpha).$$ One solution to this equation is $\alpha(t) = 2 \arctan(2R \, t)$. Substituting this in the expressions for $x$ and $y$ gives the trajectory</p> <p>$$\left(\frac{-t}{t^2 + (2R)^{-2}}, \frac{(2R)^{-1}}{t^2+(2R)^{-2}}\right).$$</p> Answer 26: <p>Note that the system implies $$y'(x)=\frac{2xy(x)}{x^2-y(x)^2}.$$ (This is the approach mentioned by @Sameh Shenawy in the comments.) This can be solved by standard methods. For example, setting $y(x)=xf(x)$ we arrive at the separable differential equation $$f'(x) = \frac{1}{x} \frac{f(x)(1+f(x)^2)}{1-f(x)^2}.$$ The relevant integral may be done by a partial fraction expansion. </p> <p>In my opinion the method put forward by @WimC is much preferable. </p> <p><strong>Addendum</strong></p> <p>The gory detail ... \begin{align*} f'(x) &amp;= \frac{1}{x} \frac{f(x)(1+f(x)^2)}{1-f(x)^2} \\ \frac{1-f^2}{f(1+f^2)}df &amp;= \frac{dx}{x} \\ \frac{(1+f^2)-2f^2}{f(1+f^2)} df &amp;= \frac{dx}{x} \\ \int\left(\frac{1}{f} - \frac{2f}{1+f^2}\right)df &amp;= \int\frac{dx}{x} \\ \ln f - \ln(1+f^2) &amp;= \ln\frac{x}{2r} \\ \ln\frac{f}{1+f^2} &amp;= \ln\frac{x}{2r} \\ \frac{f}{1+f^2} &amp;= \frac{x}{2r} \\ \frac{y/x}{1+y^2/x^2} &amp;= \frac{x}{2r} \\ \frac{y}{x^2+y^2} &amp;= \frac{1}{2r} \\ x^2+y^2-2ry &amp;= 0 \\ x^2 + y^2 - 2ry + r^2 &amp;= r^2 \\ x^2 + (y-r)^2 &amp;= r^2. \end{align*} (The integration constant was chosen to have the form $-\ln 2r$ for convenience.) 
</p> Answer 27: <p><span class="math-container">$$X(x,y)=(x^2-y^2,2xy)$$</span> <span class="math-container">$$ \begin{cases} \dfrac{dx}{dt}=x^2-y^2 \\ \dfrac{dy}{dt}=2xy \end{cases} $$</span> <span class="math-container">$$y'=\frac{2xy}{x^2-y^2}.$$</span> Consider <span class="math-container">$x'$</span> instead of <span class="math-container">$y'$</span>: <span class="math-container">$${x^2-y^2}={2x}x'(y)y$$</span> Substitute <span class="math-container">$x^2=z$</span> <span class="math-container">$$z-y^2=z'y$$</span> It's a first order linear DE: <span class="math-container">$$z'y-z=-y^2$$</span> <span class="math-container">$$ \left ( \frac z y\right)'=-1$$</span> Integrate: <span class="math-container">$$ \left ( \frac z y\right)=-y+C$$</span> Finally: <span class="math-container">$$ \boxed {x^2 +y^2=Cy}$$</span></p> Answer 28: <p>Indentifying $(x,y)$ with $z=x + y \, \mathrm{i}$ the differential equation reads $$\frac{\partial z}{\partial t}=z^2.$$ Rewriting this we find $$\frac{-\partial z^{-1}}{\partial t} = \frac{\partial z}{\partial t} z^{-2} = 1$$ and so $$-z^{-1}= t+z_0 \textrm{ or } z = -\frac1{t+z_0}$$ for some fixed initial value $z_0 \in \mathbb{C}$. For $t \in \mathbb{R}$ these describe circular trajectories as expected.</p> <p><strong>Alternative 1</strong>: transform the vector field under the inversion</p> <p>$$\sigma(x, y) = \left(\frac{x}{x^2+y^2}, \frac{y}{x^2+y^2}\right).$$</p> <p>This map is an involution ($\sigma^2 = \mathrm{id}$) and its Jacobian is $$D\sigma_{(x,y)}=\frac{1}{(x^2+y^2)^2}\begin{pmatrix} y^2-x^2 &amp; -2xy \\ -2xy &amp; x^2-y^2 \end{pmatrix}.$$</p> <p>Therefore it maps the element $$\begin{pmatrix}x^2 - y^2 \\ 2xy \end{pmatrix}$$ at point $(x, y)$ to the element</p> <p>$$\frac{1}{(x^2+y^2)^2}\begin{pmatrix} y^2-x^2 &amp; -2xy \\ -2xy &amp; x^2-y^2 \end{pmatrix} \begin{pmatrix}x^2 - y^2 \\ 2xy \end{pmatrix} = \begin{pmatrix} -1 \\ 0 \end{pmatrix}$$ at point $\sigma(x,y)$. 
The trajectories of the transformed vector field are easy to find. Then apply the inversion $\sigma$ to find the trajectories of the original vector field.</p> <p><strong>Alternative 2</strong>: Since you know that the trajectories are circular start with the ansatz $$\begin{cases} x = -R \sin \alpha \\ y = R + R \cos \alpha \end{cases}$$ for some function $\alpha$ of $t$. Then the given differential equations are consistent and both lead to $$\frac{\partial \alpha}{\partial t} = 2R \, (1 + \cos\alpha).$$ One solution to this equation is $\alpha(t) = 2 \arctan(2R \, t)$. Substituting this in the expressions for $x$ and $y$ gives the trajectory</p> <p>$$\left(\frac{-t}{t^2 + (2R)^{-2}}, \frac{(2R)^{-1}}{t^2+(2R)^{-2}}\right).$$</p> Answer 29: <p>Note that the system implies $$y'(x)=\frac{2xy(x)}{x^2-y(x)^2}.$$ (This is the approach mentioned by @Sameh Shenawy in the comments.) This can be solved by standard methods. For example, setting $y(x)=xf(x)$ we arrive at the separable differential equation $$f'(x) = \frac{1}{x} \frac{f(x)(1+f(x)^2)}{1-f(x)^2}.$$ The relevant integral may be done by a partial fraction expansion. </p> <p>In my opinion the method put forward by @WimC is much preferable. </p> <p><strong>Addendum</strong></p> <p>The gory detail ... \begin{align*} f'(x) &amp;= \frac{1}{x} \frac{f(x)(1+f(x)^2)}{1-f(x)^2} \\ \frac{1-f^2}{f(1+f^2)}df &amp;= \frac{dx}{x} \\ \frac{(1+f^2)-2f^2}{f(1+f^2)} df &amp;= \frac{dx}{x} \\ \int\left(\frac{1}{f} - \frac{2f}{1+f^2}\right)df &amp;= \int\frac{dx}{x} \\ \ln f - \ln(1+f^2) &amp;= \ln\frac{x}{2r} \\ \ln\frac{f}{1+f^2} &amp;= \ln\frac{x}{2r} \\ \frac{f}{1+f^2} &amp;= \frac{x}{2r} \\ \frac{y/x}{1+y^2/x^2} &amp;= \frac{x}{2r} \\ \frac{y}{x^2+y^2} &amp;= \frac{1}{2r} \\ x^2+y^2-2ry &amp;= 0 \\ x^2 + y^2 - 2ry + r^2 &amp;= r^2 \\ x^2 + (y-r)^2 &amp;= r^2. \end{align*} (The integration constant was chosen to have the form $-\ln 2r$ for convenience.) 
</p> Answer 30: <p><span class="math-container">$$X(x,y)=(x^2-y^2,2xy)$$</span> <span class="math-container">$$ \begin{cases} \dfrac{dx}{dt}=x^2-y^2 \\ \dfrac{dy}{dt}=2xy \end{cases} $$</span> <span class="math-container">$$y'=\frac{2xy}{x^2-y^2}.$$</span> Consider <span class="math-container">$x'$</span> instead of <span class="math-container">$y'$</span>: <span class="math-container">$${x^2-y^2}={2x}x'(y)y$$</span> Substitute <span class="math-container">$x^2=z$</span> <span class="math-container">$$z-y^2=z'y$$</span> It's a first order linear DE: <span class="math-container">$$z'y-z=-y^2$$</span> <span class="math-container">$$ \left ( \frac z y\right)'=-1$$</span> Integrate: <span class="math-container">$$ \left ( \frac z y\right)=-y+C$$</span> Finally: <span class="math-container">$$ \boxed {x^2 +y^2=Cy}$$</span></p> Answer 31: <p>Indentifying $(x,y)$ with $z=x + y \, \mathrm{i}$ the differential equation reads $$\frac{\partial z}{\partial t}=z^2.$$ Rewriting this we find $$\frac{-\partial z^{-1}}{\partial t} = \frac{\partial z}{\partial t} z^{-2} = 1$$ and so $$-z^{-1}= t+z_0 \textrm{ or } z = -\frac1{t+z_0}$$ for some fixed initial value $z_0 \in \mathbb{C}$. For $t \in \mathbb{R}$ these describe circular trajectories as expected.</p> <p><strong>Alternative 1</strong>: transform the vector field under the inversion</p> <p>$$\sigma(x, y) = \left(\frac{x}{x^2+y^2}, \frac{y}{x^2+y^2}\right).$$</p> <p>This map is an involution ($\sigma^2 = \mathrm{id}$) and its Jacobian is $$D\sigma_{(x,y)}=\frac{1}{(x^2+y^2)^2}\begin{pmatrix} y^2-x^2 &amp; -2xy \\ -2xy &amp; x^2-y^2 \end{pmatrix}.$$</p> <p>Therefore it maps the element $$\begin{pmatrix}x^2 - y^2 \\ 2xy \end{pmatrix}$$ at point $(x, y)$ to the element</p> <p>$$\frac{1}{(x^2+y^2)^2}\begin{pmatrix} y^2-x^2 &amp; -2xy \\ -2xy &amp; x^2-y^2 \end{pmatrix} \begin{pmatrix}x^2 - y^2 \\ 2xy \end{pmatrix} = \begin{pmatrix} -1 \\ 0 \end{pmatrix}$$ at point $\sigma(x,y)$. 
The trajectories of the transformed vector field are easy to find. Then apply the inversion $\sigma$ to find the trajectories of the original vector field.</p> <p><strong>Alternative 2</strong>: Since you know that the trajectories are circular start with the ansatz $$\begin{cases} x = -R \sin \alpha \\ y = R + R \cos \alpha \end{cases}$$ for some function $\alpha$ of $t$. Then the given differential equations are consistent and both lead to $$\frac{\partial \alpha}{\partial t} = 2R \, (1 + \cos\alpha).$$ One solution to this equation is $\alpha(t) = 2 \arctan(2R \, t)$. Substituting this in the expressions for $x$ and $y$ gives the trajectory</p> <p>$$\left(\frac{-t}{t^2 + (2R)^{-2}}, \frac{(2R)^{-1}}{t^2+(2R)^{-2}}\right).$$</p> Answer 32: <p>Note that the system implies $$y'(x)=\frac{2xy(x)}{x^2-y(x)^2}.$$ (This is the approach mentioned by @Sameh Shenawy in the comments.) This can be solved by standard methods. For example, setting $y(x)=xf(x)$ we arrive at the separable differential equation $$f'(x) = \frac{1}{x} \frac{f(x)(1+f(x)^2)}{1-f(x)^2}.$$ The relevant integral may be done by a partial fraction expansion. </p> <p>In my opinion the method put forward by @WimC is much preferable. </p> <p><strong>Addendum</strong></p> <p>The gory detail ... \begin{align*} f'(x) &amp;= \frac{1}{x} \frac{f(x)(1+f(x)^2)}{1-f(x)^2} \\ \frac{1-f^2}{f(1+f^2)}df &amp;= \frac{dx}{x} \\ \frac{(1+f^2)-2f^2}{f(1+f^2)} df &amp;= \frac{dx}{x} \\ \int\left(\frac{1}{f} - \frac{2f}{1+f^2}\right)df &amp;= \int\frac{dx}{x} \\ \ln f - \ln(1+f^2) &amp;= \ln\frac{x}{2r} \\ \ln\frac{f}{1+f^2} &amp;= \ln\frac{x}{2r} \\ \frac{f}{1+f^2} &amp;= \frac{x}{2r} \\ \frac{y/x}{1+y^2/x^2} &amp;= \frac{x}{2r} \\ \frac{y}{x^2+y^2} &amp;= \frac{1}{2r} \\ x^2+y^2-2ry &amp;= 0 \\ x^2 + y^2 - 2ry + r^2 &amp;= r^2 \\ x^2 + (y-r)^2 &amp;= r^2. \end{align*} (The integration constant was chosen to have the form $-\ln 2r$ for convenience.) 
</p> Answer 33: <p><span class="math-container">$$X(x,y)=(x^2-y^2,2xy)$$</span> <span class="math-container">$$ \begin{cases} \dfrac{dx}{dt}=x^2-y^2 \\ \dfrac{dy}{dt}=2xy \end{cases} $$</span> <span class="math-container">$$y'=\frac{2xy}{x^2-y^2}.$$</span> Consider <span class="math-container">$x'$</span> instead of <span class="math-container">$y'$</span>: <span class="math-container">$${x^2-y^2}={2x}x'(y)y$$</span> Substitute <span class="math-container">$x^2=z$</span> <span class="math-container">$$z-y^2=z'y$$</span> It's a first order linear DE: <span class="math-container">$$z'y-z=-y^2$$</span> <span class="math-container">$$ \left ( \frac z y\right)'=-1$$</span> Integrate: <span class="math-container">$$ \left ( \frac z y\right)=-y+C$$</span> Finally: <span class="math-container">$$ \boxed {x^2 +y^2=Cy}$$</span></p> Answer 34: <p>Indentifying $(x,y)$ with $z=x + y \, \mathrm{i}$ the differential equation reads $$\frac{\partial z}{\partial t}=z^2.$$ Rewriting this we find $$\frac{-\partial z^{-1}}{\partial t} = \frac{\partial z}{\partial t} z^{-2} = 1$$ and so $$-z^{-1}= t+z_0 \textrm{ or } z = -\frac1{t+z_0}$$ for some fixed initial value $z_0 \in \mathbb{C}$. For $t \in \mathbb{R}$ these describe circular trajectories as expected.</p> <p><strong>Alternative 1</strong>: transform the vector field under the inversion</p> <p>$$\sigma(x, y) = \left(\frac{x}{x^2+y^2}, \frac{y}{x^2+y^2}\right).$$</p> <p>This map is an involution ($\sigma^2 = \mathrm{id}$) and its Jacobian is $$D\sigma_{(x,y)}=\frac{1}{(x^2+y^2)^2}\begin{pmatrix} y^2-x^2 &amp; -2xy \\ -2xy &amp; x^2-y^2 \end{pmatrix}.$$</p> <p>Therefore it maps the element $$\begin{pmatrix}x^2 - y^2 \\ 2xy \end{pmatrix}$$ at point $(x, y)$ to the element</p> <p>$$\frac{1}{(x^2+y^2)^2}\begin{pmatrix} y^2-x^2 &amp; -2xy \\ -2xy &amp; x^2-y^2 \end{pmatrix} \begin{pmatrix}x^2 - y^2 \\ 2xy \end{pmatrix} = \begin{pmatrix} -1 \\ 0 \end{pmatrix}$$ at point $\sigma(x,y)$. 
The trajectories of the transformed vector field are easy to find. Then apply the inversion $\sigma$ to find the trajectories of the original vector field.</p> <p><strong>Alternative 2</strong>: Since you know that the trajectories are circular start with the ansatz $$\begin{cases} x = -R \sin \alpha \\ y = R + R \cos \alpha \end{cases}$$ for some function $\alpha$ of $t$. Then the given differential equations are consistent and both lead to $$\frac{\partial \alpha}{\partial t} = 2R \, (1 + \cos\alpha).$$ One solution to this equation is $\alpha(t) = 2 \arctan(2R \, t)$. Substituting this in the expressions for $x$ and $y$ gives the trajectory</p> <p>$$\left(\frac{-t}{t^2 + (2R)^{-2}}, \frac{(2R)^{-1}}{t^2+(2R)^{-2}}\right).$$</p> Answer 35: <p>Note that the system implies $$y'(x)=\frac{2xy(x)}{x^2-y(x)^2}.$$ (This is the approach mentioned by @Sameh Shenawy in the comments.) This can be solved by standard methods. For example, setting $y(x)=xf(x)$ we arrive at the separable differential equation $$f'(x) = \frac{1}{x} \frac{f(x)(1+f(x)^2)}{1-f(x)^2}.$$ The relevant integral may be done by a partial fraction expansion. </p> <p>In my opinion the method put forward by @WimC is much preferable. </p> <p><strong>Addendum</strong></p> <p>The gory detail ... \begin{align*} f'(x) &amp;= \frac{1}{x} \frac{f(x)(1+f(x)^2)}{1-f(x)^2} \\ \frac{1-f^2}{f(1+f^2)}df &amp;= \frac{dx}{x} \\ \frac{(1+f^2)-2f^2}{f(1+f^2)} df &amp;= \frac{dx}{x} \\ \int\left(\frac{1}{f} - \frac{2f}{1+f^2}\right)df &amp;= \int\frac{dx}{x} \\ \ln f - \ln(1+f^2) &amp;= \ln\frac{x}{2r} \\ \ln\frac{f}{1+f^2} &amp;= \ln\frac{x}{2r} \\ \frac{f}{1+f^2} &amp;= \frac{x}{2r} \\ \frac{y/x}{1+y^2/x^2} &amp;= \frac{x}{2r} \\ \frac{y}{x^2+y^2} &amp;= \frac{1}{2r} \\ x^2+y^2-2ry &amp;= 0 \\ x^2 + y^2 - 2ry + r^2 &amp;= r^2 \\ x^2 + (y-r)^2 &amp;= r^2. \end{align*} (The integration constant was chosen to have the form $-\ln 2r$ for convenience.) 
</p> Answer 36: <p><span class="math-container">$$X(x,y)=(x^2-y^2,2xy)$$</span> <span class="math-container">$$ \begin{cases} \dfrac{dx}{dt}=x^2-y^2 \\ \dfrac{dy}{dt}=2xy \end{cases} $$</span> <span class="math-container">$$y'=\frac{2xy}{x^2-y^2}.$$</span> Consider <span class="math-container">$x'$</span> instead of <span class="math-container">$y'$</span>: <span class="math-container">$${x^2-y^2}={2x}x'(y)y$$</span> Substitute <span class="math-container">$x^2=z$</span> <span class="math-container">$$z-y^2=z'y$$</span> It's a first order linear DE: <span class="math-container">$$z'y-z=-y^2$$</span> <span class="math-container">$$ \left ( \frac z y\right)'=-1$$</span> Integrate: <span class="math-container">$$ \left ( \frac z y\right)=-y+C$$</span> Finally: <span class="math-container">$$ \boxed {x^2 +y^2=Cy}$$</span></p> Answer 37: <p>Indentifying $(x,y)$ with $z=x + y \, \mathrm{i}$ the differential equation reads $$\frac{\partial z}{\partial t}=z^2.$$ Rewriting this we find $$\frac{-\partial z^{-1}}{\partial t} = \frac{\partial z}{\partial t} z^{-2} = 1$$ and so $$-z^{-1}= t+z_0 \textrm{ or } z = -\frac1{t+z_0}$$ for some fixed initial value $z_0 \in \mathbb{C}$. For $t \in \mathbb{R}$ these describe circular trajectories as expected.</p> <p><strong>Alternative 1</strong>: transform the vector field under the inversion</p> <p>$$\sigma(x, y) = \left(\frac{x}{x^2+y^2}, \frac{y}{x^2+y^2}\right).$$</p> <p>This map is an involution ($\sigma^2 = \mathrm{id}$) and its Jacobian is $$D\sigma_{(x,y)}=\frac{1}{(x^2+y^2)^2}\begin{pmatrix} y^2-x^2 &amp; -2xy \\ -2xy &amp; x^2-y^2 \end{pmatrix}.$$</p> <p>Therefore it maps the element $$\begin{pmatrix}x^2 - y^2 \\ 2xy \end{pmatrix}$$ at point $(x, y)$ to the element</p> <p>$$\frac{1}{(x^2+y^2)^2}\begin{pmatrix} y^2-x^2 &amp; -2xy \\ -2xy &amp; x^2-y^2 \end{pmatrix} \begin{pmatrix}x^2 - y^2 \\ 2xy \end{pmatrix} = \begin{pmatrix} -1 \\ 0 \end{pmatrix}$$ at point $\sigma(x,y)$. 
The trajectories of the transformed vector field are easy to find. Then apply the inversion $\sigma$ to find the trajectories of the original vector field.</p> <p><strong>Alternative 2</strong>: Since you know that the trajectories are circular start with the ansatz $$\begin{cases} x = -R \sin \alpha \\ y = R + R \cos \alpha \end{cases}$$ for some function $\alpha$ of $t$. Then the given differential equations are consistent and both lead to $$\frac{\partial \alpha}{\partial t} = 2R \, (1 + \cos\alpha).$$ One solution to this equation is $\alpha(t) = 2 \arctan(2R \, t)$. Substituting this in the expressions for $x$ and $y$ gives the trajectory</p> <p>$$\left(\frac{-t}{t^2 + (2R)^{-2}}, \frac{(2R)^{-1}}{t^2+(2R)^{-2}}\right).$$</p> Answer 38: <p>Note that the system implies $$y'(x)=\frac{2xy(x)}{x^2-y(x)^2}.$$ (This is the approach mentioned by @Sameh Shenawy in the comments.) This can be solved by standard methods. For example, setting $y(x)=xf(x)$ we arrive at the separable differential equation $$f'(x) = \frac{1}{x} \frac{f(x)(1+f(x)^2)}{1-f(x)^2}.$$ The relevant integral may be done by a partial fraction expansion. </p> <p>In my opinion the method put forward by @WimC is much preferable. </p> <p><strong>Addendum</strong></p> <p>The gory detail ... \begin{align*} f'(x) &amp;= \frac{1}{x} \frac{f(x)(1+f(x)^2)}{1-f(x)^2} \\ \frac{1-f^2}{f(1+f^2)}df &amp;= \frac{dx}{x} \\ \frac{(1+f^2)-2f^2}{f(1+f^2)} df &amp;= \frac{dx}{x} \\ \int\left(\frac{1}{f} - \frac{2f}{1+f^2}\right)df &amp;= \int\frac{dx}{x} \\ \ln f - \ln(1+f^2) &amp;= \ln\frac{x}{2r} \\ \ln\frac{f}{1+f^2} &amp;= \ln\frac{x}{2r} \\ \frac{f}{1+f^2} &amp;= \frac{x}{2r} \\ \frac{y/x}{1+y^2/x^2} &amp;= \frac{x}{2r} \\ \frac{y}{x^2+y^2} &amp;= \frac{1}{2r} \\ x^2+y^2-2ry &amp;= 0 \\ x^2 + y^2 - 2ry + r^2 &amp;= r^2 \\ x^2 + (y-r)^2 &amp;= r^2. \end{align*} (The integration constant was chosen to have the form $-\ln 2r$ for convenience.) 
</p> Answer 39: <p><span class="math-container">$$X(x,y)=(x^2-y^2,2xy)$$</span> <span class="math-container">$$ \begin{cases} \dfrac{dx}{dt}=x^2-y^2 \\ \dfrac{dy}{dt}=2xy \end{cases} $$</span> <span class="math-container">$$y'=\frac{2xy}{x^2-y^2}.$$</span> Consider <span class="math-container">$x'$</span> instead of <span class="math-container">$y'$</span>: <span class="math-container">$${x^2-y^2}={2x}x'(y)y$$</span> Substitute <span class="math-container">$x^2=z$</span> <span class="math-container">$$z-y^2=z'y$$</span> It's a first order linear DE: <span class="math-container">$$z'y-z=-y^2$$</span> <span class="math-container">$$ \left ( \frac z y\right)'=-1$$</span> Integrate: <span class="math-container">$$ \left ( \frac z y\right)=-y+C$$</span> Finally: <span class="math-container">$$ \boxed {x^2 +y^2=Cy}$$</span></p> Answer 40: <p>Indentifying $(x,y)$ with $z=x + y \, \mathrm{i}$ the differential equation reads $$\frac{\partial z}{\partial t}=z^2.$$ Rewriting this we find $$\frac{-\partial z^{-1}}{\partial t} = \frac{\partial z}{\partial t} z^{-2} = 1$$ and so $$-z^{-1}= t+z_0 \textrm{ or } z = -\frac1{t+z_0}$$ for some fixed initial value $z_0 \in \mathbb{C}$. For $t \in \mathbb{R}$ these describe circular trajectories as expected.</p> <p><strong>Alternative 1</strong>: transform the vector field under the inversion</p> <p>$$\sigma(x, y) = \left(\frac{x}{x^2+y^2}, \frac{y}{x^2+y^2}\right).$$</p> <p>This map is an involution ($\sigma^2 = \mathrm{id}$) and its Jacobian is $$D\sigma_{(x,y)}=\frac{1}{(x^2+y^2)^2}\begin{pmatrix} y^2-x^2 &amp; -2xy \\ -2xy &amp; x^2-y^2 \end{pmatrix}.$$</p> <p>Therefore it maps the element $$\begin{pmatrix}x^2 - y^2 \\ 2xy \end{pmatrix}$$ at point $(x, y)$ to the element</p> <p>$$\frac{1}{(x^2+y^2)^2}\begin{pmatrix} y^2-x^2 &amp; -2xy \\ -2xy &amp; x^2-y^2 \end{pmatrix} \begin{pmatrix}x^2 - y^2 \\ 2xy \end{pmatrix} = \begin{pmatrix} -1 \\ 0 \end{pmatrix}$$ at point $\sigma(x,y)$. 
The trajectories of the transformed vector field are easy to find. Then apply the inversion $\sigma$ to find the trajectories of the original vector field.</p> <p><strong>Alternative 2</strong>: Since you know that the trajectories are circular start with the ansatz $$\begin{cases} x = -R \sin \alpha \\ y = R + R \cos \alpha \end{cases}$$ for some function $\alpha$ of $t$. Then the given differential equations are consistent and both lead to $$\frac{\partial \alpha}{\partial t} = 2R \, (1 + \cos\alpha).$$ One solution to this equation is $\alpha(t) = 2 \arctan(2R \, t)$. Substituting this in the expressions for $x$ and $y$ gives the trajectory</p> <p>$$\left(\frac{-t}{t^2 + (2R)^{-2}}, \frac{(2R)^{-1}}{t^2+(2R)^{-2}}\right).$$</p> Answer 41: <p>Note that the system implies $$y'(x)=\frac{2xy(x)}{x^2-y(x)^2}.$$ (This is the approach mentioned by @Sameh Shenawy in the comments.) This can be solved by standard methods. For example, setting $y(x)=xf(x)$ we arrive at the separable differential equation $$f'(x) = \frac{1}{x} \frac{f(x)(1+f(x)^2)}{1-f(x)^2}.$$ The relevant integral may be done by a partial fraction expansion. </p> <p>In my opinion the method put forward by @WimC is much preferable. </p> <p><strong>Addendum</strong></p> <p>The gory detail ... \begin{align*} f'(x) &amp;= \frac{1}{x} \frac{f(x)(1+f(x)^2)}{1-f(x)^2} \\ \frac{1-f^2}{f(1+f^2)}df &amp;= \frac{dx}{x} \\ \frac{(1+f^2)-2f^2}{f(1+f^2)} df &amp;= \frac{dx}{x} \\ \int\left(\frac{1}{f} - \frac{2f}{1+f^2}\right)df &amp;= \int\frac{dx}{x} \\ \ln f - \ln(1+f^2) &amp;= \ln\frac{x}{2r} \\ \ln\frac{f}{1+f^2} &amp;= \ln\frac{x}{2r} \\ \frac{f}{1+f^2} &amp;= \frac{x}{2r} \\ \frac{y/x}{1+y^2/x^2} &amp;= \frac{x}{2r} \\ \frac{y}{x^2+y^2} &amp;= \frac{1}{2r} \\ x^2+y^2-2ry &amp;= 0 \\ x^2 + y^2 - 2ry + r^2 &amp;= r^2 \\ x^2 + (y-r)^2 &amp;= r^2. \end{align*} (The integration constant was chosen to have the form $-\ln 2r$ for convenience.) 
</p> Answer 42: <p><span class="math-container">$$X(x,y)=(x^2-y^2,2xy)$$</span> <span class="math-container">$$ \begin{cases} \dfrac{dx}{dt}=x^2-y^2 \\ \dfrac{dy}{dt}=2xy \end{cases} $$</span> <span class="math-container">$$y'=\frac{2xy}{x^2-y^2}.$$</span> Consider <span class="math-container">$x'$</span> instead of <span class="math-container">$y'$</span>: <span class="math-container">$${x^2-y^2}={2x}x'(y)y$$</span> Substitute <span class="math-container">$x^2=z$</span> <span class="math-container">$$z-y^2=z'y$$</span> It's a first order linear DE: <span class="math-container">$$z'y-z=-y^2$$</span> <span class="math-container">$$ \left ( \frac z y\right)'=-1$$</span> Integrate: <span class="math-container">$$ \left ( \frac z y\right)=-y+C$$</span> Finally: <span class="math-container">$$ \boxed {x^2 +y^2=Cy}$$</span></p> Answer 43: <p>Indentifying $(x,y)$ with $z=x + y \, \mathrm{i}$ the differential equation reads $$\frac{\partial z}{\partial t}=z^2.$$ Rewriting this we find $$\frac{-\partial z^{-1}}{\partial t} = \frac{\partial z}{\partial t} z^{-2} = 1$$ and so $$-z^{-1}= t+z_0 \textrm{ or } z = -\frac1{t+z_0}$$ for some fixed initial value $z_0 \in \mathbb{C}$. For $t \in \mathbb{R}$ these describe circular trajectories as expected.</p> <p><strong>Alternative 1</strong>: transform the vector field under the inversion</p> <p>$$\sigma(x, y) = \left(\frac{x}{x^2+y^2}, \frac{y}{x^2+y^2}\right).$$</p> <p>This map is an involution ($\sigma^2 = \mathrm{id}$) and its Jacobian is $$D\sigma_{(x,y)}=\frac{1}{(x^2+y^2)^2}\begin{pmatrix} y^2-x^2 &amp; -2xy \\ -2xy &amp; x^2-y^2 \end{pmatrix}.$$</p> <p>Therefore it maps the element $$\begin{pmatrix}x^2 - y^2 \\ 2xy \end{pmatrix}$$ at point $(x, y)$ to the element</p> <p>$$\frac{1}{(x^2+y^2)^2}\begin{pmatrix} y^2-x^2 &amp; -2xy \\ -2xy &amp; x^2-y^2 \end{pmatrix} \begin{pmatrix}x^2 - y^2 \\ 2xy \end{pmatrix} = \begin{pmatrix} -1 \\ 0 \end{pmatrix}$$ at point $\sigma(x,y)$. 
The trajectories of the transformed vector field are easy to find. Then apply the inversion $\sigma$ to find the trajectories of the original vector field.</p> <p><strong>Alternative 2</strong>: Since you know that the trajectories are circular start with the ansatz $$\begin{cases} x = -R \sin \alpha \\ y = R + R \cos \alpha \end{cases}$$ for some function $\alpha$ of $t$. Then the given differential equations are consistent and both lead to $$\frac{\partial \alpha}{\partial t} = 2R \, (1 + \cos\alpha).$$ One solution to this equation is $\alpha(t) = 2 \arctan(2R \, t)$. Substituting this in the expressions for $x$ and $y$ gives the trajectory</p> <p>$$\left(\frac{-t}{t^2 + (2R)^{-2}}, \frac{(2R)^{-1}}{t^2+(2R)^{-2}}\right).$$</p> Answer 44: <p>Note that the system implies $$y'(x)=\frac{2xy(x)}{x^2-y(x)^2}.$$ (This is the approach mentioned by @Sameh Shenawy in the comments.) This can be solved by standard methods. For example, setting $y(x)=xf(x)$ we arrive at the separable differential equation $$f'(x) = \frac{1}{x} \frac{f(x)(1+f(x)^2)}{1-f(x)^2}.$$ The relevant integral may be done by a partial fraction expansion. </p> <p>In my opinion the method put forward by @WimC is much preferable. </p> <p><strong>Addendum</strong></p> <p>The gory detail ... \begin{align*} f'(x) &amp;= \frac{1}{x} \frac{f(x)(1+f(x)^2)}{1-f(x)^2} \\ \frac{1-f^2}{f(1+f^2)}df &amp;= \frac{dx}{x} \\ \frac{(1+f^2)-2f^2}{f(1+f^2)} df &amp;= \frac{dx}{x} \\ \int\left(\frac{1}{f} - \frac{2f}{1+f^2}\right)df &amp;= \int\frac{dx}{x} \\ \ln f - \ln(1+f^2) &amp;= \ln\frac{x}{2r} \\ \ln\frac{f}{1+f^2} &amp;= \ln\frac{x}{2r} \\ \frac{f}{1+f^2} &amp;= \frac{x}{2r} \\ \frac{y/x}{1+y^2/x^2} &amp;= \frac{x}{2r} \\ \frac{y}{x^2+y^2} &amp;= \frac{1}{2r} \\ x^2+y^2-2ry &amp;= 0 \\ x^2 + y^2 - 2ry + r^2 &amp;= r^2 \\ x^2 + (y-r)^2 &amp;= r^2. \end{align*} (The integration constant was chosen to have the form $-\ln 2r$ for convenience.) 
</p>
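The boxed invariant $x^2+y^2=Cy$ derived in the answers above can be sanity-checked numerically. Below is a small Python sketch (not part of the original thread; the initial point and step size are arbitrary choices) that integrates $x'=x^2-y^2$, $y'=2xy$ with classical RK4 and verifies that $(x^2+y^2)/y$ stays constant along the trajectory:

```python
# Integrate x' = x^2 - y^2, y' = 2xy with classical RK4 and check that
# (x^2 + y^2)/y is conserved, i.e. the orbit lies on a circle x^2 + y^2 = C*y.

def field(x, y):
    return x * x - y * y, 2 * x * y

def rk4_step(x, y, h):
    k1x, k1y = field(x, y)
    k2x, k2y = field(x + h * k1x / 2, y + h * k1y / 2)
    k3x, k3y = field(x + h * k2x / 2, y + h * k2y / 2)
    k4x, k4y = field(x + h * k3x, y + h * k3y)
    return (x + h * (k1x + 2 * k2x + 2 * k3x + k4x) / 6,
            y + h * (k1y + 2 * k2y + 2 * k3y + k4y) / 6)

x, y = 0.3, 0.4              # arbitrary initial point with y > 0
C = (x * x + y * y) / y      # invariant value fixed by the initial point
for _ in range(1000):
    x, y = rk4_step(x, y, 1e-3)
drift = abs((x * x + y * y) / y - C)
print(drift < 1e-6)          # prints True: the invariant is preserved
```

The conserved quantity here is exactly the constant $C$ of the boxed solution; RK4's $O(h^4)$ accuracy keeps the drift far below the tolerance over this integration interval.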
How do I find all all the subgroups of a group?
2,082,441
7
https://math.stackexchange.com/questions/2082441/how-do-i-find-all-all-the-subgroups-of-a-group
<p>I'm failing to see the logic behind what makes up a subgroup. I understand the requirements of a subgroup (associativity, identity etc....) but I don't actually know how to find the subgroups. </p> <p>I think it has something to do with "getting back to the identity", but I may be wrong? I know that the identity is a subgroup and the whole group is a subgroup. That's all.</p> <p>Any help would be appreciated!</p>
[ { "answer_id": 2082466, "score": 9, "text": "<p>The most basic way to figure out subgroups is to take a subset of the elements, and then find all products of powers of those elements.</p>\n\n<p>So, say you have two elements $a,b$ in your group, then you need to consider all strings of $a,b$, yielding $$1,a,b,a^2,ab,ba,b^2,a^3,aba,ba^2,a^2b,ab^2,bab,b^3,...$$</p>\n\n<p>If your group is not finite, you also have to consider negative powers. Or you can start with three members.</p>\n\n<p>For abelian groups, it becomes simpler - you can take any:</p>\n\n<p>$$a^nb^m$$</p>\n\n<p>with $n,m$ integers, because $ab=ba$.</p>\n\n<p>The simplest is to start with one element: $\\langle a\\rangle=\\{1,a,a^2,a^3,\\dots\\}$ is a subgroup for any $a$ if $G$ is finite, otherwise, you need to take $\\{1,a,a^{-1},a^2,a^{-2},\\dots\\}$. In the case of cyclic groups, these are all the subgroups (although you get duplicates - there are $a\\neq b$ such that $\\langle a\\rangle = \\langle b\\rangle$.)</p>\n\n<p>Ultimately, you build from these, but in the general case, we are talking about a hard problem.</p>\n\n<p>You eventually learn to do certain types of deductions, but you should really try this brute force technique and use the understandings you have of the individual groups to forge ahead.</p>\n" }, { "answer_id": 2082456, "score": 8, "text": "<p>Edit: I've written this assuming you've taken a course on group theory.</p>\n\n<p>Finding <em>all</em> subgroups of large finite groups is in general a very difficult problem. Usually, I'd start with <a href=\"https://en.wikipedia.org/wiki/Lagrange&#39;s_theorem_(group_theory)\" rel=\"noreferrer\">Lagrange's theorem</a> to find possible orders of subgroups.</p>\n\n<p>Next, you know that every subgroup has to contain the identity element. 
Then you can start to work out orders of elements contained in possible subgroups - again noting that orders of elements need to divide the order of the group.</p>\n\n<p>Since you've added the tag for cyclic groups, I'll give an example that contains cyclic groups.</p>\n\n<p>Consider the dihedral group $D_n = \\langle r,s \\mid r^n = s^2 = e, srs = r^{-1} \\rangle$ where $e$ is the identity. It has order $2n$ and so the order of subgroups must divide $2n$. \nOne such example is the subgroup $\\langle r\\rangle = \\{e, r, r^2, r^3, \\dots, r^{n-1}\\}$ which is clearly isomorphic to $\\mathbb{Z}_n$.</p>\n" }, { "answer_id": 2082462, "score": 4, "text": "<p>The question is very hard in general, so one may want to consider some examples. Almost all popular examples have been discussed here at MSE; a starter is to find all subgroups of the group of units of the ring $\\mathbb{Z}/n$ for various $n$; see for example this question: <a href=\"https://math.stackexchange.com/questions/508770/for-group-mathbbz-18-how-do-i-find-all-subgroups?rq=1\">For group $\\mathbb{Z_{18}^*}$, how do I find all subgroups</a>. For more demanding examples one has to do more. 
A nice example is to find all subgroups of $S_6$, which are\n$$ D_6, D_3, D_2, Z_6, Z_3, Z_2, D_3\\times D_3, Z_3\\times Z_3, (Z_3\\times Z_3)⋊ Z_2,\nD_3 \\times Z_3, Z_2 \\times Z_2 \\times Z_2, D_3 ≀ Z_2, Z_4, Z_5, D_4, Z_2 \\times Z_4, D_4, D_5, A_4, D_4 \\times Z_2, Z_5 ⋊ Z_4,\nA_4 \\times Z_2, S_4, (Z_3 \\times Z_3) ⋊ Z_4, S_4 \\times Z_2, A_5, S_5, A_6, S_6 $$\nThis has been answered <a href=\"https://math.stackexchange.com/questions/446097/subgroups-of-s-6\">here</a>.</p>\n\n<p>Also instructive is to study subgroups of matrices in $GL_n(K)$, for example the subgroup $SL_n(K)$ of invertible $n\\times n$-matrices with determinant $1$, or $O_n(K)$ the subgroup of orthogonal matrices, or $Sp_n(K)$ and so on.</p>\n" }, { "answer_id": 2082526, "score": 2, "text": "<p>Cauchy's Theorem may be more useful than Lagrange's Theorem when dealing with finite groups. Cauchy's Theorem states that for every prime $p$ dividing $|G|$, there exists a subgroup $H \\leq G$ of order $p$. So start with the cyclic subgroups of prime order. Then for any two cyclic groups $H_{1}, H_{2}$ of prime order, you can obtain a new subgroup by taking the join $\\langle H_{1}, H_{2} \\rangle$, which is the subgroup generated by the elements of $H_{1} \\cup H_{2}$. </p>\n\n<p>Now the subgroups of a group $G$ are ordered by the subgroup relation, which is a partial order. That is, the subgroup relation is reflexive (a group is a subgroup of itself), transitive (if $H \\leq K$ and $K \\leq M$, then $H \\leq M$), and anti-symmetric (if $H \\leq K$ and $K \\leq H$, then $H = K$). So if you have two subgroups $H, K \\leq G$ such that $H \\not \\leq K$ and $K \\not \\leq H$, the join $\\langle H, K \\rangle$ is another subgroup of $G$. Chapter 2, Section 5 in Dummit and Foote has a good exposition on the Lattice of Subgroups, which provides a visual graph called a Hasse Diagram to help study the subgroup partial order. I would direct you there for more reading. 
</p>\n" }, { "answer_id": 2082466, "score": 9, "text": "<p>The most basic way to figure out subgroups is to take a subset of the elements, and then find all products of powers of those elements.</p>\n\n<p>So, say you have two elements $a,b$ in your group, then you need to consider all strings of $a,b$, yielding $$1,a,b,a^2,ab,ba,b^2,a^3,aba,ba^2,a^2b,ab^2,bab,b^3,...$$</p>\n\n<p>If your group is not finite, you also have to consider negative powers. Or you can start with three members.</p>\n\n<p>For abelian groups, it becomes simpler - you can take any:</p>\n\n<p>$$a^nb^m$$</p>\n\n<p>with $n,m$ integers, because $ab=ba$.</p>\n\n<p>The simplest is to start with one elements: $\\langle a\\rangle=\\{1,a,a^2,a^3,\\dots\\}$ is a subgroup for any $a$ if $G$ is finite, otherwise, you need to take $\\{1,a,a^{-1},a^2,a^{-2},\\dots\\}$. In the case of cyclic groups, these are all the subgroups (although you get duplicates - there are $a\\neq b$ such that $\\langle a\\rangle = \\langle b\\rangle$.)</p>\n\n<p>Ultimately, you build from these, but in the general case, we are talking a hard problem.</p>\n\n<p>You eventually learn to do certain types of deductions, but you should really try this brute force technique and use the understandings you have of the individual groups to forge ahead.</p>\n" }, { "answer_id": 2082456, "score": 8, "text": "<p>Edit: Written this assuming you've taken a course on group theory.</p>\n\n<p>Finding <em>all</em> subgroups of large finite groups is in general a very difficult problem. Usually, I'd start with <a href=\"https://en.wikipedia.org/wiki/Lagrange&#39;s_theorem_(group_theory)\" rel=\"noreferrer\">Lagrange's theorem</a> to find possible orders of subgroups.</p>\n\n<p>Next, you know that every subgroup has to contain the identity element. 
Then you can start to work out orders of elements contained in possible subgroups - again noting that orders of elements need to divide the order of the group.</p>\n\n<p>Since you've added the tag for cyclic groups I'll give an example that contains cyclic groups.</p>\n\n<p>Consider the dihedral group $D_n = \\langle r,s \\mid r^n = s^2 = e, srs = r^{-1} \\rangle$ where $e$ is the identity. It has order $2n$ and so the order of subgroups must divide $2n$. \nOne such example is the subgroup $\\langle r\\rangle = e, r, r^2, r^3, ..., r^{n-1} $ which is clearly isomorphic to $\\mathbb{Z}_n$.</p>\n" }, { "answer_id": 2082462, "score": 4, "text": "<p>The question is very hard in general, so one may want to consider some examples. Almost all popular examples have been discussed here at MSE; a starter is to find all subgroups of the group of units of the ring $\\mathbb{Z}/n$ for various $n$; see for example this question: <a href=\"https://math.stackexchange.com/questions/508770/for-group-mathbbz-18-how-do-i-find-all-subgroups?rq=1\">For group $\\mathbb{Z_{18}^*}$, how do I find all subgroups</a>. For more demanding examples one has to do more. 
A nice example is to find all subgroups of $S_6$, which are\n$$ D_6, D_3, D_2, Z_6, Z_3, Z_2, D_3\\times D_3, Z_3×Z_3, (Z_3\\times Z_3)⋊ Z_2,\nD_3 \\times Z_3, Z_2 \\times Z_2 \\times Z_2, D_3 ≀ Z_2, Z_4, Z_5, D_4, Z_2 \\times Z_4, D_4, D_5, A_4, D_4 \\times Z_2, Z_5 ⋊ Z_4,\nA_4 \\times Z_2, S_4, (Z_3 \\times Z_3) ⋊ Z_4, S_4 \\times Z_2, A_5, S_5, A_6, S_6 $$\nThis has been answered <a href=\"https://math.stackexchange.com/questions/446097/subgroups-of-s-6\">here</a>.</p>\n\n<p>Also instructive is to study subgroups of matrices in $GL_n(K)$, for example the subgroup $SL_n(K)$ of invertible $n\\times n$-matrices with determinant $1$, or $O_n(K)$ the subgroup of orthogonal matrices, or $Sp_n(K)$ and so on.</p>\n" }, { "answer_id": 2082526, "score": 2, "text": "<p>Cauchy's Theorem may be more useful than Lagrange's Theorem, when dealing with finite groups. Cauchy's Theorem states that for every prime $p$ dividing $|G|$, there exists a subgroup $H \\leq G$ of order $p$. So start with the cyclic subgroups of prime order. Then for any two cyclic groups $H_{1}, H_{2}$ of prime order, you can obtain a new subgroup by taking the join $\\langle H_{1}, H_{2} \\rangle$, which is the subgroup generated by the elements of $H_{1} \\cup H_{2}$. </p>\n\n<p>Now the subgroups of a group $G$ are ordered by the subgroup relation, which is a partial order. That is, the subgroup relation is reflexive (a group is a subgroup of itself), transitive (if $H \\leq K$ and $K \\leq M$, then $H \\leq M$), and anti-symmetric (if $H \\leq K$ and $K \\leq H$, then $H \\cong K$). So if you have two subgroups $H, K \\leq G$ such that $H \\not \\leq K$ and $K \\not \\leq H$, the join $\\langle H, K \\rangle$ is another subgroup of $G$. Chapter 2, Section 5 in Dummit and Foote has a good exposition on the Lattice of Subgroups, which provides a visual graph called a Hasse Diagram to help study the subgroup partial order. I would direct you there for more reading. 
</p>\n" }, { "answer_id": 2082466, "score": 9, "text": "<p>The most basic way to figure out subgroups is to take a subset of the elements, and then find all products of powers of those elements.</p>\n\n<p>So, say you have two elements $a,b$ in your group, then you need to consider all strings of $a,b$, yielding $$1,a,b,a^2,ab,ba,b^2,a^3,aba,ba^2,a^2b,ab^2,bab,b^3,...$$</p>\n\n<p>If your group is not finite, you also have to consider negative powers. Or you can start with three members.</p>\n\n<p>For abelian groups, it becomes simpler - you can take any:</p>\n\n<p>$$a^nb^m$$</p>\n\n<p>with $n,m$ integers, because $ab=ba$.</p>\n\n<p>The simplest is to start with one elements: $\\langle a\\rangle=\\{1,a,a^2,a^3,\\dots\\}$ is a subgroup for any $a$ if $G$ is finite, otherwise, you need to take $\\{1,a,a^{-1},a^2,a^{-2},\\dots\\}$. In the case of cyclic groups, these are all the subgroups (although you get duplicates - there are $a\\neq b$ such that $\\langle a\\rangle = \\langle b\\rangle$.)</p>\n\n<p>Ultimately, you build from these, but in the general case, we are talking a hard problem.</p>\n\n<p>You eventually learn to do certain types of deductions, but you should really try this brute force technique and use the understandings you have of the individual groups to forge ahead.</p>\n" }, { "answer_id": 2082456, "score": 8, "text": "<p>Edit: Written this assuming you've taken a course on group theory.</p>\n\n<p>Finding <em>all</em> subgroups of large finite groups is in general a very difficult problem. Usually, I'd start with <a href=\"https://en.wikipedia.org/wiki/Lagrange&#39;s_theorem_(group_theory)\" rel=\"noreferrer\">Lagrange's theorem</a> to find possible orders of subgroups.</p>\n\n<p>Next, you know that every subgroup has to contain the identity element. 
Then you can start to work out orders of elements contained in possible subgroups - again noting that orders of elements need to divide the order of the group.</p>\n\n<p>Since you've added the tag for cyclic groups I'll give an example that contains cyclic groups.</p>\n\n<p>Consider the dihedral group $D_n = \\langle r,s \\mid r^n = s^2 = e, srs = r^{-1} \\rangle$ where $e$ is the identity. It has order $2n$ and so the order of subgroups must divide $2n$. \nOne such example is the subgroup $\\langle r\\rangle = e, r, r^2, r^3, ..., r^{n-1} $ which is clearly isomorphic to $\\mathbb{Z}_n$.</p>\n" }, { "answer_id": 2082462, "score": 4, "text": "<p>The question is very hard in general, so one may want to consider some examples. Almost all popular examples have been discussed here at MSE; a starter is to find all subgroups of the group of units of the ring $\\mathbb{Z}/n$ for various $n$; see for example this question: <a href=\"https://math.stackexchange.com/questions/508770/for-group-mathbbz-18-how-do-i-find-all-subgroups?rq=1\">For group $\\mathbb{Z_{18}^*}$, how do I find all subgroups</a>. For more demanding examples one has to do more. 
A nice example is to find all subgroups of $S_6$, which are\n$$ D_6, D_3, D_2, Z_6, Z_3, Z_2, D_3\\times D_3, Z_3×Z_3, (Z_3\\times Z_3)⋊ Z_2,\nD_3 \\times Z_3, Z_2 \\times Z_2 \\times Z_2, D_3 ≀ Z_2, Z_4, Z_5, D_4, Z_2 \\times Z_4, D_4, D_5, A_4, D_4 \\times Z_2, Z_5 ⋊ Z_4,\nA_4 \\times Z_2, S_4, (Z_3 \\times Z_3) ⋊ Z_4, S_4 \\times Z_2, A_5, S_5, A_6, S_6 $$\nThis has been answered <a href=\"https://math.stackexchange.com/questions/446097/subgroups-of-s-6\">here</a>.</p>\n\n<p>Also instructive is to study subgroups of matrices in $GL_n(K)$, for example the subgroup $SL_n(K)$ of invertible $n\\times n$-matrices with determinant $1$, or $O_n(K)$ the subgroup of orthogonal matrices, or $Sp_n(K)$ and so on.</p>\n" }, { "answer_id": 2082526, "score": 2, "text": "<p>Cauchy's Theorem may be more useful than Lagrange's Theorem, when dealing with finite groups. Cauchy's Theorem states that for every prime $p$ dividing $|G|$, there exists a subgroup $H \\leq G$ of order $p$. So start with the cyclic subgroups of prime order. Then for any two cyclic groups $H_{1}, H_{2}$ of prime order, you can obtain a new subgroup by taking the join $\\langle H_{1}, H_{2} \\rangle$, which is the subgroup generated by the elements of $H_{1} \\cup H_{2}$. </p>\n\n<p>Now the subgroups of a group $G$ are ordered by the subgroup relation, which is a partial order. That is, the subgroup relation is reflexive (a group is a subgroup of itself), transitive (if $H \\leq K$ and $K \\leq M$, then $H \\leq M$), and anti-symmetric (if $H \\leq K$ and $K \\leq H$, then $H \\cong K$). So if you have two subgroups $H, K \\leq G$ such that $H \\not \\leq K$ and $K \\not \\leq H$, the join $\\langle H, K \\rangle$ is another subgroup of $G$. Chapter 2, Section 5 in Dummit and Foote has a good exposition on the Lattice of Subgroups, which provides a visual graph called a Hasse Diagram to help study the subgroup partial order. I would direct you there for more reading. 
</p>\n" }, { "answer_id": 2082466, "score": 9, "text": "<p>The most basic way to figure out subgroups is to take a subset of the elements, and then find all products of powers of those elements.</p>\n\n<p>So, say you have two elements $a,b$ in your group, then you need to consider all strings of $a,b$, yielding $$1,a,b,a^2,ab,ba,b^2,a^3,aba,ba^2,a^2b,ab^2,bab,b^3,...$$</p>\n\n<p>If your group is not finite, you also have to consider negative powers. Or you can start with three members.</p>\n\n<p>For abelian groups, it becomes simpler - you can take any:</p>\n\n<p>$$a^nb^m$$</p>\n\n<p>with $n,m$ integers, because $ab=ba$.</p>\n\n<p>The simplest is to start with one elements: $\\langle a\\rangle=\\{1,a,a^2,a^3,\\dots\\}$ is a subgroup for any $a$ if $G$ is finite, otherwise, you need to take $\\{1,a,a^{-1},a^2,a^{-2},\\dots\\}$. In the case of cyclic groups, these are all the subgroups (although you get duplicates - there are $a\\neq b$ such that $\\langle a\\rangle = \\langle b\\rangle$.)</p>\n\n<p>Ultimately, you build from these, but in the general case, we are talking a hard problem.</p>\n\n<p>You eventually learn to do certain types of deductions, but you should really try this brute force technique and use the understandings you have of the individual groups to forge ahead.</p>\n" }, { "answer_id": 2082456, "score": 8, "text": "<p>Edit: Written this assuming you've taken a course on group theory.</p>\n\n<p>Finding <em>all</em> subgroups of large finite groups is in general a very difficult problem. Usually, I'd start with <a href=\"https://en.wikipedia.org/wiki/Lagrange&#39;s_theorem_(group_theory)\" rel=\"noreferrer\">Lagrange's theorem</a> to find possible orders of subgroups.</p>\n\n<p>Next, you know that every subgroup has to contain the identity element. 
Then you can start to work out orders of elements contained in possible subgroups - again noting that orders of elements need to divide the order of the group.</p>\n\n<p>Since you've added the tag for cyclic groups I'll give an example that contains cyclic groups.</p>\n\n<p>Consider the dihedral group $D_n = \\langle r,s \\mid r^n = s^2 = e, srs = r^{-1} \\rangle$ where $e$ is the identity. It has order $2n$ and so the order of subgroups must divide $2n$. \nOne such example is the subgroup $\\langle r\\rangle = e, r, r^2, r^3, ..., r^{n-1} $ which is clearly isomorphic to $\\mathbb{Z}_n$.</p>\n" }, { "answer_id": 2082462, "score": 4, "text": "<p>The question is very hard in general, so one may want to consider some examples. Almost all popular examples have been discussed here at MSE; a starter is to find all subgroups of the group of units of the ring $\\mathbb{Z}/n$ for various $n$; see for example this question: <a href=\"https://math.stackexchange.com/questions/508770/for-group-mathbbz-18-how-do-i-find-all-subgroups?rq=1\">For group $\\mathbb{Z_{18}^*}$, how do I find all subgroups</a>. For more demanding examples one has to do more. 
A nice example is to find all subgroups of $S_6$, which are\n$$ D_6, D_3, D_2, Z_6, Z_3, Z_2, D_3\\times D_3, Z_3×Z_3, (Z_3\\times Z_3)⋊ Z_2,\nD_3 \\times Z_3, Z_2 \\times Z_2 \\times Z_2, D_3 ≀ Z_2, Z_4, Z_5, D_4, Z_2 \\times Z_4, D_4, D_5, A_4, D_4 \\times Z_2, Z_5 ⋊ Z_4,\nA_4 \\times Z_2, S_4, (Z_3 \\times Z_3) ⋊ Z_4, S_4 \\times Z_2, A_5, S_5, A_6, S_6 $$\nThis has been answered <a href=\"https://math.stackexchange.com/questions/446097/subgroups-of-s-6\">here</a>.</p>\n\n<p>Also instructive is to study subgroups of matrices in $GL_n(K)$, for example the subgroup $SL_n(K)$ of invertible $n\\times n$-matrices with determinant $1$, or $O_n(K)$ the subgroup of orthogonal matrices, or $Sp_n(K)$ and so on.</p>\n" }, { "answer_id": 2082526, "score": 2, "text": "<p>Cauchy's Theorem may be more useful than Lagrange's Theorem, when dealing with finite groups. Cauchy's Theorem states that for every prime $p$ dividing $|G|$, there exists a subgroup $H \\leq G$ of order $p$. So start with the cyclic subgroups of prime order. Then for any two cyclic groups $H_{1}, H_{2}$ of prime order, you can obtain a new subgroup by taking the join $\\langle H_{1}, H_{2} \\rangle$, which is the subgroup generated by the elements of $H_{1} \\cup H_{2}$. </p>\n\n<p>Now the subgroups of a group $G$ are ordered by the subgroup relation, which is a partial order. That is, the subgroup relation is reflexive (a group is a subgroup of itself), transitive (if $H \\leq K$ and $K \\leq M$, then $H \\leq M$), and anti-symmetric (if $H \\leq K$ and $K \\leq H$, then $H \\cong K$). So if you have two subgroups $H, K \\leq G$ such that $H \\not \\leq K$ and $K \\not \\leq H$, the join $\\langle H, K \\rangle$ is another subgroup of $G$. Chapter 2, Section 5 in Dummit and Foote has a good exposition on the Lattice of Subgroups, which provides a visual graph called a Hasse Diagram to help study the subgroup partial order. I would direct you there for more reading. 
</p>\n" }, { "answer_id": 2082466, "score": 9, "text": "<p>The most basic way to figure out subgroups is to take a subset of the elements, and then find all products of powers of those elements.</p>\n\n<p>So, say you have two elements $a,b$ in your group, then you need to consider all strings of $a,b$, yielding $$1,a,b,a^2,ab,ba,b^2,a^3,aba,ba^2,a^2b,ab^2,bab,b^3,...$$</p>\n\n<p>If your group is not finite, you also have to consider negative powers. Or you can start with three members.</p>\n\n<p>For abelian groups, it becomes simpler - you can take any:</p>\n\n<p>$$a^nb^m$$</p>\n\n<p>with $n,m$ integers, because $ab=ba$.</p>\n\n<p>The simplest is to start with one elements: $\\langle a\\rangle=\\{1,a,a^2,a^3,\\dots\\}$ is a subgroup for any $a$ if $G$ is finite, otherwise, you need to take $\\{1,a,a^{-1},a^2,a^{-2},\\dots\\}$. In the case of cyclic groups, these are all the subgroups (although you get duplicates - there are $a\\neq b$ such that $\\langle a\\rangle = \\langle b\\rangle$.)</p>\n\n<p>Ultimately, you build from these, but in the general case, we are talking a hard problem.</p>\n\n<p>You eventually learn to do certain types of deductions, but you should really try this brute force technique and use the understandings you have of the individual groups to forge ahead.</p>\n" }, { "answer_id": 2082456, "score": 8, "text": "<p>Edit: Written this assuming you've taken a course on group theory.</p>\n\n<p>Finding <em>all</em> subgroups of large finite groups is in general a very difficult problem. Usually, I'd start with <a href=\"https://en.wikipedia.org/wiki/Lagrange&#39;s_theorem_(group_theory)\" rel=\"noreferrer\">Lagrange's theorem</a> to find possible orders of subgroups.</p>\n\n<p>Next, you know that every subgroup has to contain the identity element. 
Then you can start to work out orders of elements contained in possible subgroups - again noting that orders of elements need to divide the order of the group.</p>\n\n<p>Since you've added the tag for cyclic groups I'll give an example that contains cyclic groups.</p>\n\n<p>Consider the dihedral group $D_n = \\langle r,s \\mid r^n = s^2 = e, srs = r^{-1} \\rangle$ where $e$ is the identity. It has order $2n$ and so the order of subgroups must divide $2n$. \nOne such example is the subgroup $\\langle r\\rangle = e, r, r^2, r^3, ..., r^{n-1} $ which is clearly isomorphic to $\\mathbb{Z}_n$.</p>\n" }, { "answer_id": 2082462, "score": 4, "text": "<p>The question is very hard in general, so one may want to consider some examples. Almost all popular examples have been discussed here at MSE; a starter is to find all subgroups of the group of units of the ring $\\mathbb{Z}/n$ for various $n$; see for example this question: <a href=\"https://math.stackexchange.com/questions/508770/for-group-mathbbz-18-how-do-i-find-all-subgroups?rq=1\">For group $\\mathbb{Z_{18}^*}$, how do I find all subgroups</a>. For more demanding examples one has to do more. 
A nice example is to find all subgroups of $S_6$, which are\n$$ D_6, D_3, D_2, Z_6, Z_3, Z_2, D_3\\times D_3, Z_3×Z_3, (Z_3\\times Z_3)⋊ Z_2,\nD_3 \\times Z_3, Z_2 \\times Z_2 \\times Z_2, D_3 ≀ Z_2, Z_4, Z_5, D_4, Z_2 \\times Z_4, D_4, D_5, A_4, D_4 \\times Z_2, Z_5 ⋊ Z_4,\nA_4 \\times Z_2, S_4, (Z_3 \\times Z_3) ⋊ Z_4, S_4 \\times Z_2, A_5, S_5, A_6, S_6 $$\nThis has been answered <a href=\"https://math.stackexchange.com/questions/446097/subgroups-of-s-6\">here</a>.</p>\n\n<p>Also instructive is to study subgroups of matrices in $GL_n(K)$, for example the subgroup $SL_n(K)$ of invertible $n\\times n$-matrices with determinant $1$, or $O_n(K)$ the subgroup of orthogonal matrices, or $Sp_n(K)$ and so on.</p>\n" }, { "answer_id": 2082526, "score": 2, "text": "<p>Cauchy's Theorem may be more useful than Lagrange's Theorem, when dealing with finite groups. Cauchy's Theorem states that for every prime $p$ dividing $|G|$, there exists a subgroup $H \\leq G$ of order $p$. So start with the cyclic subgroups of prime order. Then for any two cyclic groups $H_{1}, H_{2}$ of prime order, you can obtain a new subgroup by taking the join $\\langle H_{1}, H_{2} \\rangle$, which is the subgroup generated by the elements of $H_{1} \\cup H_{2}$. </p>\n\n<p>Now the subgroups of a group $G$ are ordered by the subgroup relation, which is a partial order. That is, the subgroup relation is reflexive (a group is a subgroup of itself), transitive (if $H \\leq K$ and $K \\leq M$, then $H \\leq M$), and anti-symmetric (if $H \\leq K$ and $K \\leq H$, then $H \\cong K$). So if you have two subgroups $H, K \\leq G$ such that $H \\not \\leq K$ and $K \\not \\leq H$, the join $\\langle H, K \\rangle$ is another subgroup of $G$. Chapter 2, Section 5 in Dummit and Foote has a good exposition on the Lattice of Subgroups, which provides a visual graph called a Hasse Diagram to help study the subgroup partial order. I would direct you there for more reading. 
</p>\n" }, { "answer_id": 2082466, "score": 9, "text": "<p>The most basic way to figure out subgroups is to take a subset of the elements, and then find all products of powers of those elements.</p>\n\n<p>So, say you have two elements $a,b$ in your group, then you need to consider all strings of $a,b$, yielding $$1,a,b,a^2,ab,ba,b^2,a^3,aba,ba^2,a^2b,ab^2,bab,b^3,...$$</p>\n\n<p>If your group is not finite, you also have to consider negative powers. Or you can start with three members.</p>\n\n<p>For abelian groups, it becomes simpler - you can take any:</p>\n\n<p>$$a^nb^m$$</p>\n\n<p>with $n,m$ integers, because $ab=ba$.</p>\n\n<p>The simplest is to start with one elements: $\\langle a\\rangle=\\{1,a,a^2,a^3,\\dots\\}$ is a subgroup for any $a$ if $G$ is finite, otherwise, you need to take $\\{1,a,a^{-1},a^2,a^{-2},\\dots\\}$. In the case of cyclic groups, these are all the subgroups (although you get duplicates - there are $a\\neq b$ such that $\\langle a\\rangle = \\langle b\\rangle$.)</p>\n\n<p>Ultimately, you build from these, but in the general case, we are talking a hard problem.</p>\n\n<p>You eventually learn to do certain types of deductions, but you should really try this brute force technique and use the understandings you have of the individual groups to forge ahead.</p>\n" }, { "answer_id": 2082456, "score": 8, "text": "<p>Edit: Written this assuming you've taken a course on group theory.</p>\n\n<p>Finding <em>all</em> subgroups of large finite groups is in general a very difficult problem. Usually, I'd start with <a href=\"https://en.wikipedia.org/wiki/Lagrange&#39;s_theorem_(group_theory)\" rel=\"noreferrer\">Lagrange's theorem</a> to find possible orders of subgroups.</p>\n\n<p>Next, you know that every subgroup has to contain the identity element. 
Then you can start to work out orders of elements contained in possible subgroups - again noting that orders of elements need to divide the order of the group.</p>\n\n<p>Since you've added the tag for cyclic groups I'll give an example that contains cyclic groups.</p>\n\n<p>Consider the dihedral group $D_n = \\langle r,s \\mid r^n = s^2 = e, srs = r^{-1} \\rangle$ where $e$ is the identity. It has order $2n$ and so the order of subgroups must divide $2n$. \nOne such example is the subgroup $\\langle r\\rangle = e, r, r^2, r^3, ..., r^{n-1} $ which is clearly isomorphic to $\\mathbb{Z}_n$.</p>\n" }, { "answer_id": 2082462, "score": 4, "text": "<p>The question is very hard in general, so one may want to consider some examples. Almost all popular examples have been discussed here at MSE; a starter is to find all subgroups of the group of units of the ring $\\mathbb{Z}/n$ for various $n$; see for example this question: <a href=\"https://math.stackexchange.com/questions/508770/for-group-mathbbz-18-how-do-i-find-all-subgroups?rq=1\">For group $\\mathbb{Z_{18}^*}$, how do I find all subgroups</a>. For more demanding examples one has to do more. 
A nice example is to find all subgroups of $S_6$, which are\n$$ D_6, D_3, D_2, Z_6, Z_3, Z_2, D_3\\times D_3, Z_3×Z_3, (Z_3\\times Z_3)⋊ Z_2,\nD_3 \\times Z_3, Z_2 \\times Z_2 \\times Z_2, D_3 ≀ Z_2, Z_4, Z_5, D_4, Z_2 \\times Z_4, D_4, D_5, A_4, D_4 \\times Z_2, Z_5 ⋊ Z_4,\nA_4 \\times Z_2, S_4, (Z_3 \\times Z_3) ⋊ Z_4, S_4 \\times Z_2, A_5, S_5, A_6, S_6 $$\nThis has been answered <a href=\"https://math.stackexchange.com/questions/446097/subgroups-of-s-6\">here</a>.</p>\n\n<p>Also instructive is to study subgroups of matrices in $GL_n(K)$, for example the subgroup $SL_n(K)$ of invertible $n\\times n$-matrices with determinant $1$, or $O_n(K)$ the subgroup of orthogonal matrices, or $Sp_n(K)$ and so on.</p>\n" }, { "answer_id": 2082526, "score": 2, "text": "<p>Cauchy's Theorem may be more useful than Lagrange's Theorem, when dealing with finite groups. Cauchy's Theorem states that for every prime $p$ dividing $|G|$, there exists a subgroup $H \\leq G$ of order $p$. So start with the cyclic subgroups of prime order. Then for any two cyclic groups $H_{1}, H_{2}$ of prime order, you can obtain a new subgroup by taking the join $\\langle H_{1}, H_{2} \\rangle$, which is the subgroup generated by the elements of $H_{1} \\cup H_{2}$. </p>\n\n<p>Now the subgroups of a group $G$ are ordered by the subgroup relation, which is a partial order. That is, the subgroup relation is reflexive (a group is a subgroup of itself), transitive (if $H \\leq K$ and $K \\leq M$, then $H \\leq M$), and anti-symmetric (if $H \\leq K$ and $K \\leq H$, then $H \\cong K$). So if you have two subgroups $H, K \\leq G$ such that $H \\not \\leq K$ and $K \\not \\leq H$, the join $\\langle H, K \\rangle$ is another subgroup of $G$. Chapter 2, Section 5 in Dummit and Foote has a good exposition on the Lattice of Subgroups, which provides a visual graph called a Hasse Diagram to help study the subgroup partial order. I would direct you there for more reading. 
</p>\n" }, { "answer_id": 2082466, "score": 9, "text": "<p>The most basic way to figure out subgroups is to take a subset of the elements, and then find all products of powers of those elements.</p>\n\n<p>So, say you have two elements $a,b$ in your group, then you need to consider all strings of $a,b$, yielding $$1,a,b,a^2,ab,ba,b^2,a^3,aba,ba^2,a^2b,ab^2,bab,b^3,...$$</p>\n\n<p>If your group is not finite, you also have to consider negative powers. Or you can start with three members.</p>\n\n<p>For abelian groups, it becomes simpler - you can take any:</p>\n\n<p>$$a^nb^m$$</p>\n\n<p>with $n,m$ integers, because $ab=ba$.</p>\n\n<p>The simplest is to start with one elements: $\\langle a\\rangle=\\{1,a,a^2,a^3,\\dots\\}$ is a subgroup for any $a$ if $G$ is finite, otherwise, you need to take $\\{1,a,a^{-1},a^2,a^{-2},\\dots\\}$. In the case of cyclic groups, these are all the subgroups (although you get duplicates - there are $a\\neq b$ such that $\\langle a\\rangle = \\langle b\\rangle$.)</p>\n\n<p>Ultimately, you build from these, but in the general case, we are talking a hard problem.</p>\n\n<p>You eventually learn to do certain types of deductions, but you should really try this brute force technique and use the understandings you have of the individual groups to forge ahead.</p>\n" }, { "answer_id": 2082456, "score": 8, "text": "<p>Edit: Written this assuming you've taken a course on group theory.</p>\n\n<p>Finding <em>all</em> subgroups of large finite groups is in general a very difficult problem. Usually, I'd start with <a href=\"https://en.wikipedia.org/wiki/Lagrange&#39;s_theorem_(group_theory)\" rel=\"noreferrer\">Lagrange's theorem</a> to find possible orders of subgroups.</p>\n\n<p>Next, you know that every subgroup has to contain the identity element. 
Then you can start to work out orders of elements contained in possible subgroups - again noting that orders of elements need to divide the order of the group.</p>\n\n<p>Since you've added the tag for cyclic groups, I'll give an example that contains cyclic groups.</p>\n\n<p>Consider the dihedral group $D_n = \\langle r,s \\mid r^n = s^2 = e, srs = r^{-1} \\rangle$ where $e$ is the identity. It has order $2n$ and so the order of subgroups must divide $2n$. \nOne such example is the subgroup $\\langle r\\rangle = \\{e, r, r^2, r^3, \\dots, r^{n-1}\\}$, which is clearly isomorphic to $\\mathbb{Z}_n$.</p>\n" }, { "answer_id": 2082462, "score": 4, "text": "<p>The question is very hard in general, so one may want to consider some examples. Almost all popular examples have been discussed here at MSE; a starter is to find all subgroups of the group of units of the ring $\\mathbb{Z}/n$ for various $n$; see for example this question: <a href=\"https://math.stackexchange.com/questions/508770/for-group-mathbbz-18-how-do-i-find-all-subgroups?rq=1\">For group $\\mathbb{Z_{18}^*}$, how do I find all subgroups</a>. For more demanding examples one has to do more. 
A nice example is to find all subgroups of $S_6$, which are\n$$ D_6, D_3, D_2, Z_6, Z_3, Z_2, D_3\\times D_3, Z_3×Z_3, (Z_3\\times Z_3)⋊ Z_2,\nD_3 \\times Z_3, Z_2 \\times Z_2 \\times Z_2, D_3 ≀ Z_2, Z_4, Z_5, D_4, Z_2 \\times Z_4, D_4, D_5, A_4, D_4 \\times Z_2, Z_5 ⋊ Z_4,\nA_4 \\times Z_2, S_4, (Z_3 \\times Z_3) ⋊ Z_4, S_4 \\times Z_2, A_5, S_5, A_6, S_6 $$\nThis has been answered <a href=\"https://math.stackexchange.com/questions/446097/subgroups-of-s-6\">here</a>.</p>\n\n<p>Also instructive is to study subgroups of matrices in $GL_n(K)$, for example the subgroup $SL_n(K)$ of invertible $n\\times n$-matrices with determinant $1$, or $O_n(K)$ the subgroup of orthogonal matrices, or $Sp_n(K)$ and so on.</p>\n" }, { "answer_id": 2082526, "score": 2, "text": "<p>Cauchy's Theorem may be more useful than Lagrange's Theorem, when dealing with finite groups. Cauchy's Theorem states that for every prime $p$ dividing $|G|$, there exists a subgroup $H \\leq G$ of order $p$. So start with the cyclic subgroups of prime order. Then for any two cyclic groups $H_{1}, H_{2}$ of prime order, you can obtain a new subgroup by taking the join $\\langle H_{1}, H_{2} \\rangle$, which is the subgroup generated by the elements of $H_{1} \\cup H_{2}$. </p>\n\n<p>Now the subgroups of a group $G$ are ordered by the subgroup relation, which is a partial order. That is, the subgroup relation is reflexive (a group is a subgroup of itself), transitive (if $H \\leq K$ and $K \\leq M$, then $H \\leq M$), and anti-symmetric (if $H \\leq K$ and $K \\leq H$, then $H \\cong K$). So if you have two subgroups $H, K \\leq G$ such that $H \\not \\leq K$ and $K \\not \\leq H$, the join $\\langle H, K \\rangle$ is another subgroup of $G$. Chapter 2, Section 5 in Dummit and Foote has a good exposition on the Lattice of Subgroups, which provides a visual graph called a Hasse Diagram to help study the subgroup partial order. I would direct you there for more reading. 
</p>\n" }, { "answer_id": 2082466, "score": 9, "text": "<p>The most basic way to figure out subgroups is to take a subset of the elements, and then find all products of powers of those elements.</p>\n\n<p>So, say you have two elements $a,b$ in your group, then you need to consider all strings of $a,b$, yielding $$1,a,b,a^2,ab,ba,b^2,a^3,aba,ba^2,a^2b,ab^2,bab,b^3,...$$</p>\n\n<p>If your group is not finite, you also have to consider negative powers. Or you can start with three members.</p>\n\n<p>For abelian groups, it becomes simpler - you can take any:</p>\n\n<p>$$a^nb^m$$</p>\n\n<p>with $n,m$ integers, because $ab=ba$.</p>\n\n<p>The simplest is to start with one elements: $\\langle a\\rangle=\\{1,a,a^2,a^3,\\dots\\}$ is a subgroup for any $a$ if $G$ is finite, otherwise, you need to take $\\{1,a,a^{-1},a^2,a^{-2},\\dots\\}$. In the case of cyclic groups, these are all the subgroups (although you get duplicates - there are $a\\neq b$ such that $\\langle a\\rangle = \\langle b\\rangle$.)</p>\n\n<p>Ultimately, you build from these, but in the general case, we are talking a hard problem.</p>\n\n<p>You eventually learn to do certain types of deductions, but you should really try this brute force technique and use the understandings you have of the individual groups to forge ahead.</p>\n" }, { "answer_id": 2082456, "score": 8, "text": "<p>Edit: Written this assuming you've taken a course on group theory.</p>\n\n<p>Finding <em>all</em> subgroups of large finite groups is in general a very difficult problem. Usually, I'd start with <a href=\"https://en.wikipedia.org/wiki/Lagrange&#39;s_theorem_(group_theory)\" rel=\"noreferrer\">Lagrange's theorem</a> to find possible orders of subgroups.</p>\n\n<p>Next, you know that every subgroup has to contain the identity element. 
Then you can start to work out orders of elements contained in possible subgroups - again noting that orders of elements need to divide the order of the group.</p>\n\n<p>Since you've added the tag for cyclic groups I'll give an example that contains cyclic groups.</p>\n\n<p>Consider the dihedral group $D_n = \\langle r,s \\mid r^n = s^2 = e, srs = r^{-1} \\rangle$ where $e$ is the identity. It has order $2n$ and so the order of subgroups must divide $2n$. \nOne such example is the subgroup $\\langle r\\rangle = e, r, r^2, r^3, ..., r^{n-1} $ which is clearly isomorphic to $\\mathbb{Z}_n$.</p>\n" }, { "answer_id": 2082462, "score": 4, "text": "<p>The question is very hard in general, so one may want to consider some examples. Almost all popular examples have been discussed here at MSE; a starter is to find all subgroups of the group of units of the ring $\\mathbb{Z}/n$ for various $n$; see for example this question: <a href=\"https://math.stackexchange.com/questions/508770/for-group-mathbbz-18-how-do-i-find-all-subgroups?rq=1\">For group $\\mathbb{Z_{18}^*}$, how do I find all subgroups</a>. For more demanding examples one has to do more. 
A nice example is to find all subgroups of $S_6$, which are\n$$ D_6, D_3, D_2, Z_6, Z_3, Z_2, D_3\\times D_3, Z_3×Z_3, (Z_3\\times Z_3)⋊ Z_2,\nD_3 \\times Z_3, Z_2 \\times Z_2 \\times Z_2, D_3 ≀ Z_2, Z_4, Z_5, D_4, Z_2 \\times Z_4, D_4, D_5, A_4, D_4 \\times Z_2, Z_5 ⋊ Z_4,\nA_4 \\times Z_2, S_4, (Z_3 \\times Z_3) ⋊ Z_4, S_4 \\times Z_2, A_5, S_5, A_6, S_6 $$\nThis has been answered <a href=\"https://math.stackexchange.com/questions/446097/subgroups-of-s-6\">here</a>.</p>\n\n<p>Also instructive is to study subgroups of matrices in $GL_n(K)$, for example the subgroup $SL_n(K)$ of invertible $n\\times n$-matrices with determinant $1$, or $O_n(K)$ the subgroup of orthogonal matrices, or $Sp_n(K)$ and so on.</p>\n" }, { "answer_id": 2082526, "score": 2, "text": "<p>Cauchy's Theorem may be more useful than Lagrange's Theorem, when dealing with finite groups. Cauchy's Theorem states that for every prime $p$ dividing $|G|$, there exists a subgroup $H \\leq G$ of order $p$. So start with the cyclic subgroups of prime order. Then for any two cyclic groups $H_{1}, H_{2}$ of prime order, you can obtain a new subgroup by taking the join $\\langle H_{1}, H_{2} \\rangle$, which is the subgroup generated by the elements of $H_{1} \\cup H_{2}$. </p>\n\n<p>Now the subgroups of a group $G$ are ordered by the subgroup relation, which is a partial order. That is, the subgroup relation is reflexive (a group is a subgroup of itself), transitive (if $H \\leq K$ and $K \\leq M$, then $H \\leq M$), and anti-symmetric (if $H \\leq K$ and $K \\leq H$, then $H \\cong K$). So if you have two subgroups $H, K \\leq G$ such that $H \\not \\leq K$ and $K \\not \\leq H$, the join $\\langle H, K \\rangle$ is another subgroup of $G$. Chapter 2, Section 5 in Dummit and Foote has a good exposition on the Lattice of Subgroups, which provides a visual graph called a Hasse Diagram to help study the subgroup partial order. I would direct you there for more reading. 
</p>\n" }, { "answer_id": 2082466, "score": 9, "text": "<p>The most basic way to figure out subgroups is to take a subset of the elements, and then find all products of powers of those elements.</p>\n\n<p>So, say you have two elements $a,b$ in your group, then you need to consider all strings of $a,b$, yielding $$1,a,b,a^2,ab,ba,b^2,a^3,aba,ba^2,a^2b,ab^2,bab,b^3,...$$</p>\n\n<p>If your group is not finite, you also have to consider negative powers. Or you can start with three members.</p>\n\n<p>For abelian groups, it becomes simpler - you can take any:</p>\n\n<p>$$a^nb^m$$</p>\n\n<p>with $n,m$ integers, because $ab=ba$.</p>\n\n<p>The simplest is to start with one elements: $\\langle a\\rangle=\\{1,a,a^2,a^3,\\dots\\}$ is a subgroup for any $a$ if $G$ is finite, otherwise, you need to take $\\{1,a,a^{-1},a^2,a^{-2},\\dots\\}$. In the case of cyclic groups, these are all the subgroups (although you get duplicates - there are $a\\neq b$ such that $\\langle a\\rangle = \\langle b\\rangle$.)</p>\n\n<p>Ultimately, you build from these, but in the general case, we are talking a hard problem.</p>\n\n<p>You eventually learn to do certain types of deductions, but you should really try this brute force technique and use the understandings you have of the individual groups to forge ahead.</p>\n" }, { "answer_id": 2082456, "score": 8, "text": "<p>Edit: Written this assuming you've taken a course on group theory.</p>\n\n<p>Finding <em>all</em> subgroups of large finite groups is in general a very difficult problem. Usually, I'd start with <a href=\"https://en.wikipedia.org/wiki/Lagrange&#39;s_theorem_(group_theory)\" rel=\"noreferrer\">Lagrange's theorem</a> to find possible orders of subgroups.</p>\n\n<p>Next, you know that every subgroup has to contain the identity element. 
Then you can start to work out orders of elements contained in possible subgroups - again noting that orders of elements need to divide the order of the group.</p>\n\n<p>Since you've added the tag for cyclic groups I'll give an example that contains cyclic groups.</p>\n\n<p>Consider the dihedral group $D_n = \\langle r,s \\mid r^n = s^2 = e, srs = r^{-1} \\rangle$ where $e$ is the identity. It has order $2n$ and so the order of subgroups must divide $2n$. \nOne such example is the subgroup $\\langle r\\rangle = e, r, r^2, r^3, ..., r^{n-1} $ which is clearly isomorphic to $\\mathbb{Z}_n$.</p>\n" }, { "answer_id": 2082462, "score": 4, "text": "<p>The question is very hard in general, so one may want to consider some examples. Almost all popular examples have been discussed here at MSE; a starter is to find all subgroups of the group of units of the ring $\\mathbb{Z}/n$ for various $n$; see for example this question: <a href=\"https://math.stackexchange.com/questions/508770/for-group-mathbbz-18-how-do-i-find-all-subgroups?rq=1\">For group $\\mathbb{Z_{18}^*}$, how do I find all subgroups</a>. For more demanding examples one has to do more. 
A nice example is to find all subgroups of $S_6$, which are\n$$ D_6, D_3, D_2, Z_6, Z_3, Z_2, D_3\\times D_3, Z_3×Z_3, (Z_3\\times Z_3)⋊ Z_2,\nD_3 \\times Z_3, Z_2 \\times Z_2 \\times Z_2, D_3 ≀ Z_2, Z_4, Z_5, D_4, Z_2 \\times Z_4, D_4, D_5, A_4, D_4 \\times Z_2, Z_5 ⋊ Z_4,\nA_4 \\times Z_2, S_4, (Z_3 \\times Z_3) ⋊ Z_4, S_4 \\times Z_2, A_5, S_5, A_6, S_6 $$\nThis has been answered <a href=\"https://math.stackexchange.com/questions/446097/subgroups-of-s-6\">here</a>.</p>\n\n<p>Also instructive is to study subgroups of matrices in $GL_n(K)$, for example the subgroup $SL_n(K)$ of invertible $n\\times n$-matrices with determinant $1$, or $O_n(K)$ the subgroup of orthogonal matrices, or $Sp_n(K)$ and so on.</p>\n" }, { "answer_id": 2082526, "score": 2, "text": "<p>Cauchy's Theorem may be more useful than Lagrange's Theorem, when dealing with finite groups. Cauchy's Theorem states that for every prime $p$ dividing $|G|$, there exists a subgroup $H \\leq G$ of order $p$. So start with the cyclic subgroups of prime order. Then for any two cyclic groups $H_{1}, H_{2}$ of prime order, you can obtain a new subgroup by taking the join $\\langle H_{1}, H_{2} \\rangle$, which is the subgroup generated by the elements of $H_{1} \\cup H_{2}$. </p>\n\n<p>Now the subgroups of a group $G$ are ordered by the subgroup relation, which is a partial order. That is, the subgroup relation is reflexive (a group is a subgroup of itself), transitive (if $H \\leq K$ and $K \\leq M$, then $H \\leq M$), and anti-symmetric (if $H \\leq K$ and $K \\leq H$, then $H \\cong K$). So if you have two subgroups $H, K \\leq G$ such that $H \\not \\leq K$ and $K \\not \\leq H$, the join $\\langle H, K \\rangle$ is another subgroup of $G$. Chapter 2, Section 5 in Dummit and Foote has a good exposition on the Lattice of Subgroups, which provides a visual graph called a Hasse Diagram to help study the subgroup partial order. I would direct you there for more reading. 
</p>\n" }, { "answer_id": 2082466, "score": 9, "text": "<p>The most basic way to figure out subgroups is to take a subset of the elements, and then find all products of powers of those elements.</p>\n\n<p>So, say you have two elements $a,b$ in your group, then you need to consider all strings of $a,b$, yielding $$1,a,b,a^2,ab,ba,b^2,a^3,aba,ba^2,a^2b,ab^2,bab,b^3,...$$</p>\n\n<p>If your group is not finite, you also have to consider negative powers. Or you can start with three members.</p>\n\n<p>For abelian groups, it becomes simpler - you can take any:</p>\n\n<p>$$a^nb^m$$</p>\n\n<p>with $n,m$ integers, because $ab=ba$.</p>\n\n<p>The simplest is to start with one elements: $\\langle a\\rangle=\\{1,a,a^2,a^3,\\dots\\}$ is a subgroup for any $a$ if $G$ is finite, otherwise, you need to take $\\{1,a,a^{-1},a^2,a^{-2},\\dots\\}$. In the case of cyclic groups, these are all the subgroups (although you get duplicates - there are $a\\neq b$ such that $\\langle a\\rangle = \\langle b\\rangle$.)</p>\n\n<p>Ultimately, you build from these, but in the general case, we are talking a hard problem.</p>\n\n<p>You eventually learn to do certain types of deductions, but you should really try this brute force technique and use the understandings you have of the individual groups to forge ahead.</p>\n" }, { "answer_id": 2082456, "score": 8, "text": "<p>Edit: Written this assuming you've taken a course on group theory.</p>\n\n<p>Finding <em>all</em> subgroups of large finite groups is in general a very difficult problem. Usually, I'd start with <a href=\"https://en.wikipedia.org/wiki/Lagrange&#39;s_theorem_(group_theory)\" rel=\"noreferrer\">Lagrange's theorem</a> to find possible orders of subgroups.</p>\n\n<p>Next, you know that every subgroup has to contain the identity element. 
Then you can start to work out orders of elements contained in possible subgroups - again noting that orders of elements need to divide the order of the group.</p>\n\n<p>Since you've added the tag for cyclic groups I'll give an example that contains cyclic groups.</p>\n\n<p>Consider the dihedral group $D_n = \\langle r,s \\mid r^n = s^2 = e, srs = r^{-1} \\rangle$ where $e$ is the identity. It has order $2n$ and so the order of subgroups must divide $2n$. \nOne such example is the subgroup $\\langle r\\rangle = e, r, r^2, r^3, ..., r^{n-1} $ which is clearly isomorphic to $\\mathbb{Z}_n$.</p>\n" }, { "answer_id": 2082462, "score": 4, "text": "<p>The question is very hard in general, so one may want to consider some examples. Almost all popular examples have been discussed here at MSE; a starter is to find all subgroups of the group of units of the ring $\\mathbb{Z}/n$ for various $n$; see for example this question: <a href=\"https://math.stackexchange.com/questions/508770/for-group-mathbbz-18-how-do-i-find-all-subgroups?rq=1\">For group $\\mathbb{Z_{18}^*}$, how do I find all subgroups</a>. For more demanding examples one has to do more. 
A nice example is to find all subgroups of $S_6$, which are\n$$ D_6, D_3, D_2, Z_6, Z_3, Z_2, D_3\\times D_3, Z_3×Z_3, (Z_3\\times Z_3)⋊ Z_2,\nD_3 \\times Z_3, Z_2 \\times Z_2 \\times Z_2, D_3 ≀ Z_2, Z_4, Z_5, D_4, Z_2 \\times Z_4, D_4, D_5, A_4, D_4 \\times Z_2, Z_5 ⋊ Z_4,\nA_4 \\times Z_2, S_4, (Z_3 \\times Z_3) ⋊ Z_4, S_4 \\times Z_2, A_5, S_5, A_6, S_6 $$\nThis has been answered <a href=\"https://math.stackexchange.com/questions/446097/subgroups-of-s-6\">here</a>.</p>\n\n<p>Also instructive is to study subgroups of matrices in $GL_n(K)$, for example the subgroup $SL_n(K)$ of invertible $n\\times n$-matrices with determinant $1$, or $O_n(K)$ the subgroup of orthogonal matrices, or $Sp_n(K)$ and so on.</p>\n" }, { "answer_id": 2082526, "score": 2, "text": "<p>Cauchy's Theorem may be more useful than Lagrange's Theorem, when dealing with finite groups. Cauchy's Theorem states that for every prime $p$ dividing $|G|$, there exists a subgroup $H \\leq G$ of order $p$. So start with the cyclic subgroups of prime order. Then for any two cyclic groups $H_{1}, H_{2}$ of prime order, you can obtain a new subgroup by taking the join $\\langle H_{1}, H_{2} \\rangle$, which is the subgroup generated by the elements of $H_{1} \\cup H_{2}$. </p>\n\n<p>Now the subgroups of a group $G$ are ordered by the subgroup relation, which is a partial order. That is, the subgroup relation is reflexive (a group is a subgroup of itself), transitive (if $H \\leq K$ and $K \\leq M$, then $H \\leq M$), and anti-symmetric (if $H \\leq K$ and $K \\leq H$, then $H \\cong K$). So if you have two subgroups $H, K \\leq G$ such that $H \\not \\leq K$ and $K \\not \\leq H$, the join $\\langle H, K \\rangle$ is another subgroup of $G$. Chapter 2, Section 5 in Dummit and Foote has a good exposition on the Lattice of Subgroups, which provides a visual graph called a Hasse Diagram to help study the subgroup partial order. I would direct you there for more reading. 
</p>\n" }, { "answer_id": 2082466, "score": 9, "text": "<p>The most basic way to figure out subgroups is to take a subset of the elements, and then find all products of powers of those elements.</p>\n\n<p>So, say you have two elements $a,b$ in your group, then you need to consider all strings of $a,b$, yielding $$1,a,b,a^2,ab,ba,b^2,a^3,aba,ba^2,a^2b,ab^2,bab,b^3,...$$</p>\n\n<p>If your group is not finite, you also have to consider negative powers. Or you can start with three members.</p>\n\n<p>For abelian groups, it becomes simpler - you can take any:</p>\n\n<p>$$a^nb^m$$</p>\n\n<p>with $n,m$ integers, because $ab=ba$.</p>\n\n<p>The simplest is to start with one elements: $\\langle a\\rangle=\\{1,a,a^2,a^3,\\dots\\}$ is a subgroup for any $a$ if $G$ is finite, otherwise, you need to take $\\{1,a,a^{-1},a^2,a^{-2},\\dots\\}$. In the case of cyclic groups, these are all the subgroups (although you get duplicates - there are $a\\neq b$ such that $\\langle a\\rangle = \\langle b\\rangle$.)</p>\n\n<p>Ultimately, you build from these, but in the general case, we are talking a hard problem.</p>\n\n<p>You eventually learn to do certain types of deductions, but you should really try this brute force technique and use the understandings you have of the individual groups to forge ahead.</p>\n" }, { "answer_id": 2082456, "score": 8, "text": "<p>Edit: Written this assuming you've taken a course on group theory.</p>\n\n<p>Finding <em>all</em> subgroups of large finite groups is in general a very difficult problem. Usually, I'd start with <a href=\"https://en.wikipedia.org/wiki/Lagrange&#39;s_theorem_(group_theory)\" rel=\"noreferrer\">Lagrange's theorem</a> to find possible orders of subgroups.</p>\n\n<p>Next, you know that every subgroup has to contain the identity element. 
Then you can start to work out orders of elements contained in possible subgroups - again noting that orders of elements need to divide the order of the group.</p>\n\n<p>Since you've added the tag for cyclic groups I'll give an example that contains cyclic groups.</p>\n\n<p>Consider the dihedral group $D_n = \\langle r,s \\mid r^n = s^2 = e, srs = r^{-1} \\rangle$ where $e$ is the identity. It has order $2n$ and so the order of subgroups must divide $2n$. \nOne such example is the subgroup $\\langle r\\rangle = e, r, r^2, r^3, ..., r^{n-1} $ which is clearly isomorphic to $\\mathbb{Z}_n$.</p>\n" }, { "answer_id": 2082462, "score": 4, "text": "<p>The question is very hard in general, so one may want to consider some examples. Almost all popular examples have been discussed here at MSE; a starter is to find all subgroups of the group of units of the ring $\\mathbb{Z}/n$ for various $n$; see for example this question: <a href=\"https://math.stackexchange.com/questions/508770/for-group-mathbbz-18-how-do-i-find-all-subgroups?rq=1\">For group $\\mathbb{Z_{18}^*}$, how do I find all subgroups</a>. For more demanding examples one has to do more. 
A nice example is to find all subgroups of $S_6$, which are\n$$ D_6, D_3, D_2, Z_6, Z_3, Z_2, D_3\\times D_3, Z_3×Z_3, (Z_3\\times Z_3)⋊ Z_2,\nD_3 \\times Z_3, Z_2 \\times Z_2 \\times Z_2, D_3 ≀ Z_2, Z_4, Z_5, D_4, Z_2 \\times Z_4, D_4, D_5, A_4, D_4 \\times Z_2, Z_5 ⋊ Z_4,\nA_4 \\times Z_2, S_4, (Z_3 \\times Z_3) ⋊ Z_4, S_4 \\times Z_2, A_5, S_5, A_6, S_6 $$\nThis has been answered <a href=\"https://math.stackexchange.com/questions/446097/subgroups-of-s-6\">here</a>.</p>\n\n<p>Also instructive is to study subgroups of matrices in $GL_n(K)$, for example the subgroup $SL_n(K)$ of invertible $n\\times n$-matrices with determinant $1$, or $O_n(K)$ the subgroup of orthogonal matrices, or $Sp_n(K)$ and so on.</p>\n" }, { "answer_id": 2082526, "score": 2, "text": "<p>Cauchy's Theorem may be more useful than Lagrange's Theorem, when dealing with finite groups. Cauchy's Theorem states that for every prime $p$ dividing $|G|$, there exists a subgroup $H \\leq G$ of order $p$. So start with the cyclic subgroups of prime order. Then for any two cyclic groups $H_{1}, H_{2}$ of prime order, you can obtain a new subgroup by taking the join $\\langle H_{1}, H_{2} \\rangle$, which is the subgroup generated by the elements of $H_{1} \\cup H_{2}$. </p>\n\n<p>Now the subgroups of a group $G$ are ordered by the subgroup relation, which is a partial order. That is, the subgroup relation is reflexive (a group is a subgroup of itself), transitive (if $H \\leq K$ and $K \\leq M$, then $H \\leq M$), and anti-symmetric (if $H \\leq K$ and $K \\leq H$, then $H \\cong K$). So if you have two subgroups $H, K \\leq G$ such that $H \\not \\leq K$ and $K \\not \\leq H$, the join $\\langle H, K \\rangle$ is another subgroup of $G$. Chapter 2, Section 5 in Dummit and Foote has a good exposition on the Lattice of Subgroups, which provides a visual graph called a Hasse Diagram to help study the subgroup partial order. I would direct you there for more reading. 
</p>\n" }, { "answer_id": 2082466, "score": 9, "text": "<p>The most basic way to figure out subgroups is to take a subset of the elements, and then find all products of powers of those elements.</p>\n\n<p>So, say you have two elements $a,b$ in your group, then you need to consider all strings of $a,b$, yielding $$1,a,b,a^2,ab,ba,b^2,a^3,aba,ba^2,a^2b,ab^2,bab,b^3,...$$</p>\n\n<p>If your group is not finite, you also have to consider negative powers. Or you can start with three members.</p>\n\n<p>For abelian groups, it becomes simpler - you can take any:</p>\n\n<p>$$a^nb^m$$</p>\n\n<p>with $n,m$ integers, because $ab=ba$.</p>\n\n<p>The simplest is to start with one elements: $\\langle a\\rangle=\\{1,a,a^2,a^3,\\dots\\}$ is a subgroup for any $a$ if $G$ is finite, otherwise, you need to take $\\{1,a,a^{-1},a^2,a^{-2},\\dots\\}$. In the case of cyclic groups, these are all the subgroups (although you get duplicates - there are $a\\neq b$ such that $\\langle a\\rangle = \\langle b\\rangle$.)</p>\n\n<p>Ultimately, you build from these, but in the general case, we are talking a hard problem.</p>\n\n<p>You eventually learn to do certain types of deductions, but you should really try this brute force technique and use the understandings you have of the individual groups to forge ahead.</p>\n" }, { "answer_id": 2082456, "score": 8, "text": "<p>Edit: Written this assuming you've taken a course on group theory.</p>\n\n<p>Finding <em>all</em> subgroups of large finite groups is in general a very difficult problem. Usually, I'd start with <a href=\"https://en.wikipedia.org/wiki/Lagrange&#39;s_theorem_(group_theory)\" rel=\"noreferrer\">Lagrange's theorem</a> to find possible orders of subgroups.</p>\n\n<p>Next, you know that every subgroup has to contain the identity element. 
Then you can start to work out orders of elements contained in possible subgroups - again noting that orders of elements need to divide the order of the group.</p>\n\n<p>Since you've added the tag for cyclic groups I'll give an example that contains cyclic groups.</p>\n\n<p>Consider the dihedral group $D_n = \\langle r,s \\mid r^n = s^2 = e, srs = r^{-1} \\rangle$ where $e$ is the identity. It has order $2n$ and so the order of subgroups must divide $2n$. \nOne such example is the subgroup $\\langle r\\rangle = e, r, r^2, r^3, ..., r^{n-1} $ which is clearly isomorphic to $\\mathbb{Z}_n$.</p>\n" }, { "answer_id": 2082462, "score": 4, "text": "<p>The question is very hard in general, so one may want to consider some examples. Almost all popular examples have been discussed here at MSE; a starter is to find all subgroups of the group of units of the ring $\\mathbb{Z}/n$ for various $n$; see for example this question: <a href=\"https://math.stackexchange.com/questions/508770/for-group-mathbbz-18-how-do-i-find-all-subgroups?rq=1\">For group $\\mathbb{Z_{18}^*}$, how do I find all subgroups</a>. For more demanding examples one has to do more. 
A nice example is to find all subgroups of $S_6$, which are\n$$ D_6, D_3, D_2, Z_6, Z_3, Z_2, D_3\\times D_3, Z_3×Z_3, (Z_3\\times Z_3)⋊ Z_2,\nD_3 \\times Z_3, Z_2 \\times Z_2 \\times Z_2, D_3 ≀ Z_2, Z_4, Z_5, D_4, Z_2 \\times Z_4, D_4, D_5, A_4, D_4 \\times Z_2, Z_5 ⋊ Z_4,\nA_4 \\times Z_2, S_4, (Z_3 \\times Z_3) ⋊ Z_4, S_4 \\times Z_2, A_5, S_5, A_6, S_6 $$\nThis has been answered <a href=\"https://math.stackexchange.com/questions/446097/subgroups-of-s-6\">here</a>.</p>\n\n<p>Also instructive is to study subgroups of matrices in $GL_n(K)$, for example the subgroup $SL_n(K)$ of invertible $n\\times n$-matrices with determinant $1$, or $O_n(K)$ the subgroup of orthogonal matrices, or $Sp_n(K)$ and so on.</p>\n" }, { "answer_id": 2082526, "score": 2, "text": "<p>Cauchy's Theorem may be more useful than Lagrange's Theorem, when dealing with finite groups. Cauchy's Theorem states that for every prime $p$ dividing $|G|$, there exists a subgroup $H \\leq G$ of order $p$. So start with the cyclic subgroups of prime order. Then for any two cyclic groups $H_{1}, H_{2}$ of prime order, you can obtain a new subgroup by taking the join $\\langle H_{1}, H_{2} \\rangle$, which is the subgroup generated by the elements of $H_{1} \\cup H_{2}$. </p>\n\n<p>Now the subgroups of a group $G$ are ordered by the subgroup relation, which is a partial order. That is, the subgroup relation is reflexive (a group is a subgroup of itself), transitive (if $H \\leq K$ and $K \\leq M$, then $H \\leq M$), and anti-symmetric (if $H \\leq K$ and $K \\leq H$, then $H \\cong K$). So if you have two subgroups $H, K \\leq G$ such that $H \\not \\leq K$ and $K \\not \\leq H$, the join $\\langle H, K \\rangle$ is another subgroup of $G$. Chapter 2, Section 5 in Dummit and Foote has a good exposition on the Lattice of Subgroups, which provides a visual graph called a Hasse Diagram to help study the subgroup partial order. I would direct you there for more reading. 
</p>\n" }, { "answer_id": 2082466, "score": 9, "text": "<p>The most basic way to figure out subgroups is to take a subset of the elements, and then find all products of powers of those elements.</p>\n\n<p>So, say you have two elements $a,b$ in your group, then you need to consider all strings of $a,b$, yielding $$1,a,b,a^2,ab,ba,b^2,a^3,aba,ba^2,a^2b,ab^2,bab,b^3,...$$</p>\n\n<p>If your group is not finite, you also have to consider negative powers. Or you can start with three members.</p>\n\n<p>For abelian groups, it becomes simpler - you can take any:</p>\n\n<p>$$a^nb^m$$</p>\n\n<p>with $n,m$ integers, because $ab=ba$.</p>\n\n<p>The simplest is to start with one elements: $\\langle a\\rangle=\\{1,a,a^2,a^3,\\dots\\}$ is a subgroup for any $a$ if $G$ is finite, otherwise, you need to take $\\{1,a,a^{-1},a^2,a^{-2},\\dots\\}$. In the case of cyclic groups, these are all the subgroups (although you get duplicates - there are $a\\neq b$ such that $\\langle a\\rangle = \\langle b\\rangle$.)</p>\n\n<p>Ultimately, you build from these, but in the general case, we are talking a hard problem.</p>\n\n<p>You eventually learn to do certain types of deductions, but you should really try this brute force technique and use the understandings you have of the individual groups to forge ahead.</p>\n" }, { "answer_id": 2082456, "score": 8, "text": "<p>Edit: Written this assuming you've taken a course on group theory.</p>\n\n<p>Finding <em>all</em> subgroups of large finite groups is in general a very difficult problem. Usually, I'd start with <a href=\"https://en.wikipedia.org/wiki/Lagrange&#39;s_theorem_(group_theory)\" rel=\"noreferrer\">Lagrange's theorem</a> to find possible orders of subgroups.</p>\n\n<p>Next, you know that every subgroup has to contain the identity element. 
Then you can start to work out orders of elements contained in possible subgroups - again noting that orders of elements need to divide the order of the group.</p>\n\n<p>Since you've added the tag for cyclic groups I'll give an example that contains cyclic groups.</p>\n\n<p>Consider the dihedral group $D_n = \\langle r,s \\mid r^n = s^2 = e, srs = r^{-1} \\rangle$ where $e$ is the identity. It has order $2n$ and so the order of subgroups must divide $2n$. \nOne such example is the subgroup $\\langle r\\rangle = e, r, r^2, r^3, ..., r^{n-1} $ which is clearly isomorphic to $\\mathbb{Z}_n$.</p>\n" }, { "answer_id": 2082462, "score": 4, "text": "<p>The question is very hard in general, so one may want to consider some examples. Almost all popular examples have been discussed here at MSE; a starter is to find all subgroups of the group of units of the ring $\\mathbb{Z}/n$ for various $n$; see for example this question: <a href=\"https://math.stackexchange.com/questions/508770/for-group-mathbbz-18-how-do-i-find-all-subgroups?rq=1\">For group $\\mathbb{Z_{18}^*}$, how do I find all subgroups</a>. For more demanding examples one has to do more. 
A nice example is to find all subgroups of $S_6$, which are\n$$ D_6, D_3, D_2, Z_6, Z_3, Z_2, D_3\\times D_3, Z_3×Z_3, (Z_3\\times Z_3)⋊ Z_2,\nD_3 \\times Z_3, Z_2 \\times Z_2 \\times Z_2, D_3 ≀ Z_2, Z_4, Z_5, D_4, Z_2 \\times Z_4, D_4, D_5, A_4, D_4 \\times Z_2, Z_5 ⋊ Z_4,\nA_4 \\times Z_2, S_4, (Z_3 \\times Z_3) ⋊ Z_4, S_4 \\times Z_2, A_5, S_5, A_6, S_6 $$\nThis has been answered <a href=\"https://math.stackexchange.com/questions/446097/subgroups-of-s-6\">here</a>.</p>\n\n<p>Also instructive is to study subgroups of matrices in $GL_n(K)$, for example the subgroup $SL_n(K)$ of invertible $n\\times n$-matrices with determinant $1$, or $O_n(K)$ the subgroup of orthogonal matrices, or $Sp_n(K)$ and so on.</p>\n" }, { "answer_id": 2082526, "score": 2, "text": "<p>Cauchy's Theorem may be more useful than Lagrange's Theorem, when dealing with finite groups. Cauchy's Theorem states that for every prime $p$ dividing $|G|$, there exists a subgroup $H \\leq G$ of order $p$. So start with the cyclic subgroups of prime order. Then for any two cyclic groups $H_{1}, H_{2}$ of prime order, you can obtain a new subgroup by taking the join $\\langle H_{1}, H_{2} \\rangle$, which is the subgroup generated by the elements of $H_{1} \\cup H_{2}$. </p>\n\n<p>Now the subgroups of a group $G$ are ordered by the subgroup relation, which is a partial order. That is, the subgroup relation is reflexive (a group is a subgroup of itself), transitive (if $H \\leq K$ and $K \\leq M$, then $H \\leq M$), and anti-symmetric (if $H \\leq K$ and $K \\leq H$, then $H \\cong K$). So if you have two subgroups $H, K \\leq G$ such that $H \\not \\leq K$ and $K \\not \\leq H$, the join $\\langle H, K \\rangle$ is another subgroup of $G$. Chapter 2, Section 5 in Dummit and Foote has a good exposition on the Lattice of Subgroups, which provides a visual graph called a Hasse Diagram to help study the subgroup partial order. I would direct you there for more reading. 
</p>\n" }
Question: How do I find all the subgroups of a group? Answer 1: <p>The most basic way to figure out subgroups is to take a subset of the elements, and then find all products of powers of those elements.</p> <p>So, say you have two elements $a,b$ in your group, then you need to consider all strings of $a,b$, yielding $$1,a,b,a^2,ab,ba,b^2,a^3,aba,ba^2,a^2b,ab^2,bab,b^3,...$$</p> <p>If your group is not finite, you also have to consider negative powers. Or you can start with three members.</p> <p>For abelian groups, it becomes simpler - you can take any:</p> <p>$$a^nb^m$$</p> <p>with $n,m$ integers, because $ab=ba$.</p> <p>The simplest is to start with one element: $\langle a\rangle=\{1,a,a^2,a^3,\dots\}$ is a subgroup for any $a$ if $G$ is finite, otherwise, you need to take $\{1,a,a^{-1},a^2,a^{-2},\dots\}$. In the case of cyclic groups, these are all the subgroups (although you get duplicates - there are $a\neq b$ such that $\langle a\rangle = \langle b\rangle$.)</p> <p>Ultimately, you build from these, but in the general case, we are talking a hard problem.</p> <p>You eventually learn to do certain types of deductions, but you should really try this brute force technique and use the understandings you have of the individual groups to forge ahead.</p> Answer 2: <p>Edit: Written this assuming you've taken a course on group theory.</p> <p>Finding <em>all</em> subgroups of large finite groups is in general a very difficult problem. Usually, I'd start with <a href="https://en.wikipedia.org/wiki/Lagrange&#39;s_theorem_(group_theory)" rel="noreferrer">Lagrange's theorem</a> to find possible orders of subgroups.</p> <p>Next, you know that every subgroup has to contain the identity element. 
Then you can start to work out orders of elements contained in possible subgroups - again noting that orders of elements need to divide the order of the group.</p> <p>Since you've added the tag for cyclic groups I'll give an example that contains cyclic groups.</p> <p>Consider the dihedral group $D_n = \langle r,s \mid r^n = s^2 = e, srs = r^{-1} \rangle$ where $e$ is the identity. It has order $2n$ and so the order of subgroups must divide $2n$. One such example is the subgroup $\langle r\rangle = \{e, r, r^2, r^3, ..., r^{n-1}\}$ which is clearly isomorphic to $\mathbb{Z}_n$.</p> Answer 3: <p>The question is very hard in general, so one may want to consider some examples. Almost all popular examples have been discussed here at MSE; a starter is to find all subgroups of the group of units of the ring $\mathbb{Z}/n$ for various $n$; see for example this question: <a href="https://math.stackexchange.com/questions/508770/for-group-mathbbz-18-how-do-i-find-all-subgroups?rq=1">For group $\mathbb{Z_{18}^*}$, how do I find all subgroups</a>. For more demanding examples one has to do more. A nice example is to find all subgroups of $S_6$, which are $$ D_6, D_3, D_2, Z_6, Z_3, Z_2, D_3 \times D_3, Z_3 \times Z_3, (Z_3 \times Z_3) \rtimes Z_2, D_3 \times Z_3, Z_2 \times Z_2 \times Z_2, D_3 \wr Z_2, Z_4, Z_5, D_4, Z_2 \times Z_4, D_4, D_5, A_4, D_4 \times Z_2, Z_5 \rtimes Z_4, A_4 \times Z_2, S_4, (Z_3 \times Z_3) \rtimes Z_4, S_4 \times Z_2, A_5, S_5, A_6, S_6 $$ This has been answered <a href="https://math.stackexchange.com/questions/446097/subgroups-of-s-6">here</a>.</p> <p>Also instructive is to study subgroups of matrices in $GL_n(K)$, for example the subgroup $SL_n(K)$ of invertible $n\times n$-matrices with determinant $1$, or $O_n(K)$ the subgroup of orthogonal matrices, or $Sp_n(K)$ and so on.</p> Answer 4: <p>Cauchy's Theorem may be more useful than Lagrange's Theorem, when dealing with finite groups. 
Cauchy's Theorem states that for every prime $p$ dividing $|G|$, there exists a subgroup $H \leq G$ of order $p$. So start with the cyclic subgroups of prime order. Then for any two cyclic groups $H_{1}, H_{2}$ of prime order, you can obtain a new subgroup by taking the join $\langle H_{1}, H_{2} \rangle$, which is the subgroup generated by the elements of $H_{1} \cup H_{2}$. </p> <p>Now the subgroups of a group $G$ are ordered by the subgroup relation, which is a partial order. That is, the subgroup relation is reflexive (a group is a subgroup of itself), transitive (if $H \leq K$ and $K \leq M$, then $H \leq M$), and anti-symmetric (if $H \leq K$ and $K \leq H$, then $H = K$). So if you have two subgroups $H, K \leq G$ such that $H \not \leq K$ and $K \not \leq H$, the join $\langle H, K \rangle$ is another subgroup of $G$. Chapter 2, Section 5 in Dummit and Foote has a good exposition on the Lattice of Subgroups, which provides a visual graph called a Hasse Diagram to help study the subgroup partial order. I would direct you there for more reading. </p>
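The brute-force technique in Answer 1 — multiply out products of the chosen elements until nothing new appears — is easy to sketch for a finite group. The Python below is illustrative only: $S_3$ is encoded as permutation tuples, and `compose` / `generated_subgroup` are ad-hoc names, not from any library.

```python
def compose(p, q):
    # (p o q)(i) = p[q[i]]; a permutation is a tuple mapping index -> image
    return tuple(p[q[i]] for i in range(len(p)))

def generated_subgroup(gens, identity):
    # Repeatedly multiply known elements by the generators until the set
    # stops growing.  In a finite group this closure is exactly <gens>:
    # inverses come for free, since every element has finite order.
    elems = {identity}
    frontier = {identity}
    while frontier:
        frontier = {compose(x, g) for x in frontier for g in gens} - elems
        elems |= frontier
    return elems

# S_3: e the identity, r a 3-cycle, s a transposition
e, r, s = (0, 1, 2), (1, 2, 0), (1, 0, 2)

assert len(generated_subgroup({r}, e)) == 3      # <r> is cyclic of order 3
assert len(generated_subgroup({s}, e)) == 2      # <s> is cyclic of order 2
assert len(generated_subgroup({r, s}, e)) == 6   # <r, s> is all of S_3
```

Note that the orders 2, 3 and 6 all divide $|S_3| = 6$, as Lagrange's theorem from Answer 2 requires.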
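For a finite cyclic group, Answer 1's remark that the $\langle a\rangle$ are <em>all</em> the subgroups makes full enumeration trivial: $\mathbb{Z}_n$ has exactly one subgroup $\langle d\rangle$, of order $n/d$, for each divisor $d$ of $n$. A minimal sketch (the function name is invented for illustration):

```python
def subgroups_of_Zn(n):
    # Every subgroup of Z_n is cyclic: <d> = {0, d, 2d, ...} for each
    # divisor d of n, with |<d>| = n // d -- consistent with Lagrange's
    # theorem, which says every subgroup order divides n.
    return {d: sorted({k * d % n for k in range(n)})
            for d in range(1, n + 1) if n % d == 0}

subs = subgroups_of_Zn(12)
assert sorted(len(h) for h in subs.values()) == [1, 2, 3, 4, 6, 12]
assert subs[4] == [0, 4, 8]   # <4> in Z_12 has order 3
```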
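Answer 4's bottom-up strategy — seed with cyclic subgroups, then close under joins — does recover every subgroup of a finite group, since any subgroup is the join of the cyclic subgroups of its own elements. A hedged sketch for $S_3$ (helper names are invented; for serious work a computer algebra system such as GAP computes subgroup lattices directly):

```python
from itertools import permutations

def close(gens, e, mul):
    # subgroup of a finite group generated by gens, by closure under mul
    elems, frontier = {e}, {e}
    while frontier:
        frontier = {mul(x, g) for x in frontier for g in gens} - elems
        elems |= frontier
    return frozenset(elems)

def all_subgroups(group, e, mul):
    # Seed with the cyclic subgroups <g>, then keep adjoining joins
    # <H, K> until the collection stops growing.
    subs = {close({g}, e, mul) for g in group}
    while True:
        joins = {close(h | k, e, mul) for h in subs for k in subs}
        if joins <= subs:
            return subs
        subs |= joins

mul = lambda p, q: tuple(p[q[i]] for i in range(3))
s3 = list(permutations(range(3)))
subs = all_subgroups(s3, (0, 1, 2), mul)

# S_3 has six subgroups: trivial, three of order 2, one of order 3, S_3
assert sorted(len(h) for h in subs) == [1, 2, 2, 2, 3, 6]
```

This scales badly (it joins all pairs repeatedly), but for small groups it makes the lattice structure of Dummit and Foote's Chapter 2, Section 5 concrete.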
</p> Answer 17: <p>The most basic way to figure out subgroups is to take a subset of the elements, and then find all products of powers of those elements.</p> <p>So, say you have two elements $a,b$ in your group, then you need to consider all strings of $a,b$, yielding $$1,a,b,a^2,ab,ba,b^2,a^3,aba,ba^2,a^2b,ab^2,bab,b^3,...$$</p> <p>If your group is not finite, you also have to consider negative powers. Or you can start with three members.</p> <p>For abelian groups, it becomes simpler - you can take any:</p> <p>$$a^nb^m$$</p> <p>with $n,m$ integers, because $ab=ba$.</p> <p>The simplest is to start with one elements: $\langle a\rangle=\{1,a,a^2,a^3,\dots\}$ is a subgroup for any $a$ if $G$ is finite, otherwise, you need to take $\{1,a,a^{-1},a^2,a^{-2},\dots\}$. In the case of cyclic groups, these are all the subgroups (although you get duplicates - there are $a\neq b$ such that $\langle a\rangle = \langle b\rangle$.)</p> <p>Ultimately, you build from these, but in the general case, we are talking a hard problem.</p> <p>You eventually learn to do certain types of deductions, but you should really try this brute force technique and use the understandings you have of the individual groups to forge ahead.</p> Answer 18: <p>Edit: Written this assuming you've taken a course on group theory.</p> <p>Finding <em>all</em> subgroups of large finite groups is in general a very difficult problem. Usually, I'd start with <a href="https://en.wikipedia.org/wiki/Lagrange&#39;s_theorem_(group_theory)" rel="noreferrer">Lagrange's theorem</a> to find possible orders of subgroups.</p> <p>Next, you know that every subgroup has to contain the identity element. 
Then you can start to work out orders of elements contained in possible subgroups - again noting that orders of elements need to divide the order of the group.</p> <p>Since you've added the tag for cyclic groups I'll give an example that contains cyclic groups.</p> <p>Consider the dihedral group $D_n = \langle r,s \mid r^n = s^2 = e, srs = r^{-1} \rangle$ where $e$ is the identity. It has order $2n$ and so the order of subgroups must divide $2n$. One such example is the subgroup $\langle r\rangle = e, r, r^2, r^3, ..., r^{n-1} $ which is clearly isomorphic to $\mathbb{Z}_n$.</p> Answer 19: <p>The question is very hard in general, so one may want to consider some examples. Almost all popular examples have been discussed here at MSE; a starter is to find all subgroups of the group of units of the ring $\mathbb{Z}/n$ for various $n$; see for example this question: <a href="https://math.stackexchange.com/questions/508770/for-group-mathbbz-18-how-do-i-find-all-subgroups?rq=1">For group $\mathbb{Z_{18}^*}$, how do I find all subgroups</a>. For more demanding examples one has to do more. A nice example is to find all subgroups of $S_6$, which are $$ D_6, D_3, D_2, Z_6, Z_3, Z_2, D_3\times D_3, Z_3×Z_3, (Z_3\times Z_3)⋊ Z_2, D_3 \times Z_3, Z_2 \times Z_2 \times Z_2, D_3 ≀ Z_2, Z_4, Z_5, D_4, Z_2 \times Z_4, D_4, D_5, A_4, D_4 \times Z_2, Z_5 ⋊ Z_4, A_4 \times Z_2, S_4, (Z_3 \times Z_3) ⋊ Z_4, S_4 \times Z_2, A_5, S_5, A_6, S_6 $$ This has been answered <a href="https://math.stackexchange.com/questions/446097/subgroups-of-s-6">here</a>.</p> <p>Also instructive is to study subgroups of matrices in $GL_n(K)$, for example the subgroup $SL_n(K)$ of invertible $n\times n$-matrices with determinant $1$, or $O_n(K)$ the subgroup of orthogonal matrices, or $Sp_n(K)$ and so on.</p> Answer 20: <p>Cauchy's Theorem may be more useful than Lagrange's Theorem, when dealing with finite groups. 
Cauchy's Theorem states that for every prime $p$ dividing $|G|$, there exists a subgroup $H \leq G$ of order $p$. So start with the cyclic subgroups of prime order. Then for any two cyclic groups $H_{1}, H_{2}$ of prime order, you can obtain a new subgroup by taking the join $\langle H_{1}, H_{2} \rangle$, which is the subgroup generated by the elements of $H_{1} \cup H_{2}$. </p> <p>Now the subgroups of a group $G$ are ordered by the subgroup relation, which is a partial order. That is, the subgroup relation is reflexive (a group is a subgroup of itself), transitive (if $H \leq K$ and $K \leq M$, then $H \leq M$), and anti-symmetric (if $H \leq K$ and $K \leq H$, then $H \cong K$). So if you have two subgroups $H, K \leq G$ such that $H \not \leq K$ and $K \not \leq H$, the join $\langle H, K \rangle$ is another subgroup of $G$. Chapter 2, Section 5 in Dummit and Foote has a good exposition on the Lattice of Subgroups, which provides a visual graph called a Hasse Diagram to help study the subgroup partial order. I would direct you there for more reading. </p> Answer 21: <p>The most basic way to figure out subgroups is to take a subset of the elements, and then find all products of powers of those elements.</p> <p>So, say you have two elements $a,b$ in your group, then you need to consider all strings of $a,b$, yielding $$1,a,b,a^2,ab,ba,b^2,a^3,aba,ba^2,a^2b,ab^2,bab,b^3,...$$</p> <p>If your group is not finite, you also have to consider negative powers. Or you can start with three members.</p> <p>For abelian groups, it becomes simpler - you can take any:</p> <p>$$a^nb^m$$</p> <p>with $n,m$ integers, because $ab=ba$.</p> <p>The simplest is to start with one elements: $\langle a\rangle=\{1,a,a^2,a^3,\dots\}$ is a subgroup for any $a$ if $G$ is finite, otherwise, you need to take $\{1,a,a^{-1},a^2,a^{-2},\dots\}$. 
In the case of cyclic groups, these are all the subgroups (although you get duplicates - there are $a\neq b$ such that $\langle a\rangle = \langle b\rangle$.)</p> <p>Ultimately, you build from these, but in the general case, we are talking a hard problem.</p> <p>You eventually learn to do certain types of deductions, but you should really try this brute force technique and use the understandings you have of the individual groups to forge ahead.</p> Answer 22: <p>Edit: Written this assuming you've taken a course on group theory.</p> <p>Finding <em>all</em> subgroups of large finite groups is in general a very difficult problem. Usually, I'd start with <a href="https://en.wikipedia.org/wiki/Lagrange&#39;s_theorem_(group_theory)" rel="noreferrer">Lagrange's theorem</a> to find possible orders of subgroups.</p> <p>Next, you know that every subgroup has to contain the identity element. Then you can start to work out orders of elements contained in possible subgroups - again noting that orders of elements need to divide the order of the group.</p> <p>Since you've added the tag for cyclic groups I'll give an example that contains cyclic groups.</p> <p>Consider the dihedral group $D_n = \langle r,s \mid r^n = s^2 = e, srs = r^{-1} \rangle$ where $e$ is the identity. It has order $2n$ and so the order of subgroups must divide $2n$. One such example is the subgroup $\langle r\rangle = e, r, r^2, r^3, ..., r^{n-1} $ which is clearly isomorphic to $\mathbb{Z}_n$.</p> Answer 23: <p>The question is very hard in general, so one may want to consider some examples. Almost all popular examples have been discussed here at MSE; a starter is to find all subgroups of the group of units of the ring $\mathbb{Z}/n$ for various $n$; see for example this question: <a href="https://math.stackexchange.com/questions/508770/for-group-mathbbz-18-how-do-i-find-all-subgroups?rq=1">For group $\mathbb{Z_{18}^*}$, how do I find all subgroups</a>. For more demanding examples one has to do more. 
A nice example is to find all subgroups of $S_6$, which are $$ D_6, D_3, D_2, Z_6, Z_3, Z_2, D_3\times D_3, Z_3×Z_3, (Z_3\times Z_3)⋊ Z_2, D_3 \times Z_3, Z_2 \times Z_2 \times Z_2, D_3 ≀ Z_2, Z_4, Z_5, D_4, Z_2 \times Z_4, D_4, D_5, A_4, D_4 \times Z_2, Z_5 ⋊ Z_4, A_4 \times Z_2, S_4, (Z_3 \times Z_3) ⋊ Z_4, S_4 \times Z_2, A_5, S_5, A_6, S_6 $$ This has been answered <a href="https://math.stackexchange.com/questions/446097/subgroups-of-s-6">here</a>.</p> <p>Also instructive is to study subgroups of matrices in $GL_n(K)$, for example the subgroup $SL_n(K)$ of invertible $n\times n$-matrices with determinant $1$, or $O_n(K)$ the subgroup of orthogonal matrices, or $Sp_n(K)$ and so on.</p> Answer 24: <p>Cauchy's Theorem may be more useful than Lagrange's Theorem, when dealing with finite groups. Cauchy's Theorem states that for every prime $p$ dividing $|G|$, there exists a subgroup $H \leq G$ of order $p$. So start with the cyclic subgroups of prime order. Then for any two cyclic groups $H_{1}, H_{2}$ of prime order, you can obtain a new subgroup by taking the join $\langle H_{1}, H_{2} \rangle$, which is the subgroup generated by the elements of $H_{1} \cup H_{2}$. </p> <p>Now the subgroups of a group $G$ are ordered by the subgroup relation, which is a partial order. That is, the subgroup relation is reflexive (a group is a subgroup of itself), transitive (if $H \leq K$ and $K \leq M$, then $H \leq M$), and anti-symmetric (if $H \leq K$ and $K \leq H$, then $H \cong K$). So if you have two subgroups $H, K \leq G$ such that $H \not \leq K$ and $K \not \leq H$, the join $\langle H, K \rangle$ is another subgroup of $G$. Chapter 2, Section 5 in Dummit and Foote has a good exposition on the Lattice of Subgroups, which provides a visual graph called a Hasse Diagram to help study the subgroup partial order. I would direct you there for more reading. 
</p> Answer 25: <p>The most basic way to figure out subgroups is to take a subset of the elements, and then find all products of powers of those elements.</p> <p>So, say you have two elements $a,b$ in your group, then you need to consider all strings of $a,b$, yielding $$1,a,b,a^2,ab,ba,b^2,a^3,aba,ba^2,a^2b,ab^2,bab,b^3,...$$</p> <p>If your group is not finite, you also have to consider negative powers. Or you can start with three members.</p> <p>For abelian groups, it becomes simpler - you can take any:</p> <p>$$a^nb^m$$</p> <p>with $n,m$ integers, because $ab=ba$.</p> <p>The simplest is to start with one elements: $\langle a\rangle=\{1,a,a^2,a^3,\dots\}$ is a subgroup for any $a$ if $G$ is finite, otherwise, you need to take $\{1,a,a^{-1},a^2,a^{-2},\dots\}$. In the case of cyclic groups, these are all the subgroups (although you get duplicates - there are $a\neq b$ such that $\langle a\rangle = \langle b\rangle$.)</p> <p>Ultimately, you build from these, but in the general case, we are talking a hard problem.</p> <p>You eventually learn to do certain types of deductions, but you should really try this brute force technique and use the understandings you have of the individual groups to forge ahead.</p> Answer 26: <p>Edit: Written this assuming you've taken a course on group theory.</p> <p>Finding <em>all</em> subgroups of large finite groups is in general a very difficult problem. Usually, I'd start with <a href="https://en.wikipedia.org/wiki/Lagrange&#39;s_theorem_(group_theory)" rel="noreferrer">Lagrange's theorem</a> to find possible orders of subgroups.</p> <p>Next, you know that every subgroup has to contain the identity element. 
Then you can start to work out orders of elements contained in possible subgroups - again noting that orders of elements need to divide the order of the group.</p> <p>Since you've added the tag for cyclic groups I'll give an example that contains cyclic groups.</p> <p>Consider the dihedral group $D_n = \langle r,s \mid r^n = s^2 = e, srs = r^{-1} \rangle$ where $e$ is the identity. It has order $2n$ and so the order of subgroups must divide $2n$. One such example is the subgroup $\langle r\rangle = e, r, r^2, r^3, ..., r^{n-1} $ which is clearly isomorphic to $\mathbb{Z}_n$.</p> Answer 27: <p>The question is very hard in general, so one may want to consider some examples. Almost all popular examples have been discussed here at MSE; a starter is to find all subgroups of the group of units of the ring $\mathbb{Z}/n$ for various $n$; see for example this question: <a href="https://math.stackexchange.com/questions/508770/for-group-mathbbz-18-how-do-i-find-all-subgroups?rq=1">For group $\mathbb{Z_{18}^*}$, how do I find all subgroups</a>. For more demanding examples one has to do more. A nice example is to find all subgroups of $S_6$, which are $$ D_6, D_3, D_2, Z_6, Z_3, Z_2, D_3\times D_3, Z_3×Z_3, (Z_3\times Z_3)⋊ Z_2, D_3 \times Z_3, Z_2 \times Z_2 \times Z_2, D_3 ≀ Z_2, Z_4, Z_5, D_4, Z_2 \times Z_4, D_4, D_5, A_4, D_4 \times Z_2, Z_5 ⋊ Z_4, A_4 \times Z_2, S_4, (Z_3 \times Z_3) ⋊ Z_4, S_4 \times Z_2, A_5, S_5, A_6, S_6 $$ This has been answered <a href="https://math.stackexchange.com/questions/446097/subgroups-of-s-6">here</a>.</p> <p>Also instructive is to study subgroups of matrices in $GL_n(K)$, for example the subgroup $SL_n(K)$ of invertible $n\times n$-matrices with determinant $1$, or $O_n(K)$ the subgroup of orthogonal matrices, or $Sp_n(K)$ and so on.</p> Answer 28: <p>Cauchy's Theorem may be more useful than Lagrange's Theorem, when dealing with finite groups. 
Cauchy's Theorem states that for every prime $p$ dividing $|G|$, there exists a subgroup $H \leq G$ of order $p$. So start with the cyclic subgroups of prime order. Then for any two cyclic groups $H_{1}, H_{2}$ of prime order, you can obtain a new subgroup by taking the join $\langle H_{1}, H_{2} \rangle$, which is the subgroup generated by the elements of $H_{1} \cup H_{2}$. </p> <p>Now the subgroups of a group $G$ are ordered by the subgroup relation, which is a partial order. That is, the subgroup relation is reflexive (a group is a subgroup of itself), transitive (if $H \leq K$ and $K \leq M$, then $H \leq M$), and anti-symmetric (if $H \leq K$ and $K \leq H$, then $H \cong K$). So if you have two subgroups $H, K \leq G$ such that $H \not \leq K$ and $K \not \leq H$, the join $\langle H, K \rangle$ is another subgroup of $G$. Chapter 2, Section 5 in Dummit and Foote has a good exposition on the Lattice of Subgroups, which provides a visual graph called a Hasse Diagram to help study the subgroup partial order. I would direct you there for more reading. </p> Answer 29: <p>The most basic way to figure out subgroups is to take a subset of the elements, and then find all products of powers of those elements.</p> <p>So, say you have two elements $a,b$ in your group, then you need to consider all strings of $a,b$, yielding $$1,a,b,a^2,ab,ba,b^2,a^3,aba,ba^2,a^2b,ab^2,bab,b^3,...$$</p> <p>If your group is not finite, you also have to consider negative powers. Or you can start with three members.</p> <p>For abelian groups, it becomes simpler - you can take any:</p> <p>$$a^nb^m$$</p> <p>with $n,m$ integers, because $ab=ba$.</p> <p>The simplest is to start with one elements: $\langle a\rangle=\{1,a,a^2,a^3,\dots\}$ is a subgroup for any $a$ if $G$ is finite, otherwise, you need to take $\{1,a,a^{-1},a^2,a^{-2},\dots\}$. 
In the case of cyclic groups, these are all the subgroups (although you get duplicates - there are $a\neq b$ such that $\langle a\rangle = \langle b\rangle$.)</p> <p>Ultimately, you build from these, but in the general case, we are talking a hard problem.</p> <p>You eventually learn to do certain types of deductions, but you should really try this brute force technique and use the understandings you have of the individual groups to forge ahead.</p> Answer 30: <p>Edit: Written this assuming you've taken a course on group theory.</p> <p>Finding <em>all</em> subgroups of large finite groups is in general a very difficult problem. Usually, I'd start with <a href="https://en.wikipedia.org/wiki/Lagrange&#39;s_theorem_(group_theory)" rel="noreferrer">Lagrange's theorem</a> to find possible orders of subgroups.</p> <p>Next, you know that every subgroup has to contain the identity element. Then you can start to work out orders of elements contained in possible subgroups - again noting that orders of elements need to divide the order of the group.</p> <p>Since you've added the tag for cyclic groups I'll give an example that contains cyclic groups.</p> <p>Consider the dihedral group $D_n = \langle r,s \mid r^n = s^2 = e, srs = r^{-1} \rangle$ where $e$ is the identity. It has order $2n$ and so the order of subgroups must divide $2n$. One such example is the subgroup $\langle r\rangle = e, r, r^2, r^3, ..., r^{n-1} $ which is clearly isomorphic to $\mathbb{Z}_n$.</p> Answer 31: <p>The question is very hard in general, so one may want to consider some examples. Almost all popular examples have been discussed here at MSE; a starter is to find all subgroups of the group of units of the ring $\mathbb{Z}/n$ for various $n$; see for example this question: <a href="https://math.stackexchange.com/questions/508770/for-group-mathbbz-18-how-do-i-find-all-subgroups?rq=1">For group $\mathbb{Z_{18}^*}$, how do I find all subgroups</a>. For more demanding examples one has to do more. 
A nice example is to find all subgroups of $S_6$, which are $$ D_6, D_3, D_2, Z_6, Z_3, Z_2, D_3\times D_3, Z_3×Z_3, (Z_3\times Z_3)⋊ Z_2, D_3 \times Z_3, Z_2 \times Z_2 \times Z_2, D_3 ≀ Z_2, Z_4, Z_5, D_4, Z_2 \times Z_4, D_4, D_5, A_4, D_4 \times Z_2, Z_5 ⋊ Z_4, A_4 \times Z_2, S_4, (Z_3 \times Z_3) ⋊ Z_4, S_4 \times Z_2, A_5, S_5, A_6, S_6 $$ This has been answered <a href="https://math.stackexchange.com/questions/446097/subgroups-of-s-6">here</a>.</p> <p>Also instructive is to study subgroups of matrices in $GL_n(K)$, for example the subgroup $SL_n(K)$ of invertible $n\times n$-matrices with determinant $1$, or $O_n(K)$ the subgroup of orthogonal matrices, or $Sp_n(K)$ and so on.</p> Answer 32: <p>Cauchy's Theorem may be more useful than Lagrange's Theorem, when dealing with finite groups. Cauchy's Theorem states that for every prime $p$ dividing $|G|$, there exists a subgroup $H \leq G$ of order $p$. So start with the cyclic subgroups of prime order. Then for any two cyclic groups $H_{1}, H_{2}$ of prime order, you can obtain a new subgroup by taking the join $\langle H_{1}, H_{2} \rangle$, which is the subgroup generated by the elements of $H_{1} \cup H_{2}$. </p> <p>Now the subgroups of a group $G$ are ordered by the subgroup relation, which is a partial order. That is, the subgroup relation is reflexive (a group is a subgroup of itself), transitive (if $H \leq K$ and $K \leq M$, then $H \leq M$), and anti-symmetric (if $H \leq K$ and $K \leq H$, then $H \cong K$). So if you have two subgroups $H, K \leq G$ such that $H \not \leq K$ and $K \not \leq H$, the join $\langle H, K \rangle$ is another subgroup of $G$. Chapter 2, Section 5 in Dummit and Foote has a good exposition on the Lattice of Subgroups, which provides a visual graph called a Hasse Diagram to help study the subgroup partial order. I would direct you there for more reading. 
</p> Answer 33: <p>The most basic way to figure out subgroups is to take a subset of the elements, and then find all products of powers of those elements.</p> <p>So, say you have two elements $a,b$ in your group, then you need to consider all strings of $a,b$, yielding $$1,a,b,a^2,ab,ba,b^2,a^3,aba,ba^2,a^2b,ab^2,bab,b^3,...$$</p> <p>If your group is not finite, you also have to consider negative powers. Or you can start with three members.</p> <p>For abelian groups, it becomes simpler - you can take any:</p> <p>$$a^nb^m$$</p> <p>with $n,m$ integers, because $ab=ba$.</p> <p>The simplest is to start with one elements: $\langle a\rangle=\{1,a,a^2,a^3,\dots\}$ is a subgroup for any $a$ if $G$ is finite, otherwise, you need to take $\{1,a,a^{-1},a^2,a^{-2},\dots\}$. In the case of cyclic groups, these are all the subgroups (although you get duplicates - there are $a\neq b$ such that $\langle a\rangle = \langle b\rangle$.)</p> <p>Ultimately, you build from these, but in the general case, we are talking a hard problem.</p> <p>You eventually learn to do certain types of deductions, but you should really try this brute force technique and use the understandings you have of the individual groups to forge ahead.</p> Answer 34: <p>Edit: Written this assuming you've taken a course on group theory.</p> <p>Finding <em>all</em> subgroups of large finite groups is in general a very difficult problem. Usually, I'd start with <a href="https://en.wikipedia.org/wiki/Lagrange&#39;s_theorem_(group_theory)" rel="noreferrer">Lagrange's theorem</a> to find possible orders of subgroups.</p> <p>Next, you know that every subgroup has to contain the identity element. 
Then you can start to work out orders of elements contained in possible subgroups - again noting that orders of elements need to divide the order of the group.</p> <p>Since you've added the tag for cyclic groups I'll give an example that contains cyclic groups.</p> <p>Consider the dihedral group $D_n = \langle r,s \mid r^n = s^2 = e, srs = r^{-1} \rangle$ where $e$ is the identity. It has order $2n$ and so the order of subgroups must divide $2n$. One such example is the subgroup $\langle r\rangle = e, r, r^2, r^3, ..., r^{n-1} $ which is clearly isomorphic to $\mathbb{Z}_n$.</p> Answer 35: <p>The question is very hard in general, so one may want to consider some examples. Almost all popular examples have been discussed here at MSE; a starter is to find all subgroups of the group of units of the ring $\mathbb{Z}/n$ for various $n$; see for example this question: <a href="https://math.stackexchange.com/questions/508770/for-group-mathbbz-18-how-do-i-find-all-subgroups?rq=1">For group $\mathbb{Z_{18}^*}$, how do I find all subgroups</a>. For more demanding examples one has to do more. A nice example is to find all subgroups of $S_6$, which are $$ D_6, D_3, D_2, Z_6, Z_3, Z_2, D_3\times D_3, Z_3×Z_3, (Z_3\times Z_3)⋊ Z_2, D_3 \times Z_3, Z_2 \times Z_2 \times Z_2, D_3 ≀ Z_2, Z_4, Z_5, D_4, Z_2 \times Z_4, D_4, D_5, A_4, D_4 \times Z_2, Z_5 ⋊ Z_4, A_4 \times Z_2, S_4, (Z_3 \times Z_3) ⋊ Z_4, S_4 \times Z_2, A_5, S_5, A_6, S_6 $$ This has been answered <a href="https://math.stackexchange.com/questions/446097/subgroups-of-s-6">here</a>.</p> <p>Also instructive is to study subgroups of matrices in $GL_n(K)$, for example the subgroup $SL_n(K)$ of invertible $n\times n$-matrices with determinant $1$, or $O_n(K)$ the subgroup of orthogonal matrices, or $Sp_n(K)$ and so on.</p> Answer 36: <p>Cauchy's Theorem may be more useful than Lagrange's Theorem, when dealing with finite groups. 
Cauchy's Theorem states that for every prime $p$ dividing $|G|$, there exists a subgroup $H \leq G$ of order $p$. So start with the cyclic subgroups of prime order. Then for any two cyclic groups $H_{1}, H_{2}$ of prime order, you can obtain a new subgroup by taking the join $\langle H_{1}, H_{2} \rangle$, which is the subgroup generated by the elements of $H_{1} \cup H_{2}$. </p> <p>Now the subgroups of a group $G$ are ordered by the subgroup relation, which is a partial order. That is, the subgroup relation is reflexive (a group is a subgroup of itself), transitive (if $H \leq K$ and $K \leq M$, then $H \leq M$), and anti-symmetric (if $H \leq K$ and $K \leq H$, then $H \cong K$). So if you have two subgroups $H, K \leq G$ such that $H \not \leq K$ and $K \not \leq H$, the join $\langle H, K \rangle$ is another subgroup of $G$. Chapter 2, Section 5 in Dummit and Foote has a good exposition on the Lattice of Subgroups, which provides a visual graph called a Hasse Diagram to help study the subgroup partial order. I would direct you there for more reading. </p> Answer 37: <p>The most basic way to figure out subgroups is to take a subset of the elements, and then find all products of powers of those elements.</p> <p>So, say you have two elements $a,b$ in your group, then you need to consider all strings of $a,b$, yielding $$1,a,b,a^2,ab,ba,b^2,a^3,aba,ba^2,a^2b,ab^2,bab,b^3,...$$</p> <p>If your group is not finite, you also have to consider negative powers. Or you can start with three members.</p> <p>For abelian groups, it becomes simpler - you can take any:</p> <p>$$a^nb^m$$</p> <p>with $n,m$ integers, because $ab=ba$.</p> <p>The simplest is to start with one elements: $\langle a\rangle=\{1,a,a^2,a^3,\dots\}$ is a subgroup for any $a$ if $G$ is finite, otherwise, you need to take $\{1,a,a^{-1},a^2,a^{-2},\dots\}$. 
In the case of cyclic groups, these are all the subgroups (although you get duplicates - there are $a\neq b$ such that $\langle a\rangle = \langle b\rangle$.)</p> <p>Ultimately, you build from these, but in the general case, we are talking a hard problem.</p> <p>You eventually learn to do certain types of deductions, but you should really try this brute force technique and use the understandings you have of the individual groups to forge ahead.</p> Answer 38: <p>Edit: Written this assuming you've taken a course on group theory.</p> <p>Finding <em>all</em> subgroups of large finite groups is in general a very difficult problem. Usually, I'd start with <a href="https://en.wikipedia.org/wiki/Lagrange&#39;s_theorem_(group_theory)" rel="noreferrer">Lagrange's theorem</a> to find possible orders of subgroups.</p> <p>Next, you know that every subgroup has to contain the identity element. Then you can start to work out orders of elements contained in possible subgroups - again noting that orders of elements need to divide the order of the group.</p> <p>Since you've added the tag for cyclic groups I'll give an example that contains cyclic groups.</p> <p>Consider the dihedral group $D_n = \langle r,s \mid r^n = s^2 = e, srs = r^{-1} \rangle$ where $e$ is the identity. It has order $2n$ and so the order of subgroups must divide $2n$. One such example is the subgroup $\langle r\rangle = e, r, r^2, r^3, ..., r^{n-1} $ which is clearly isomorphic to $\mathbb{Z}_n$.</p> Answer 39: <p>The question is very hard in general, so one may want to consider some examples. Almost all popular examples have been discussed here at MSE; a starter is to find all subgroups of the group of units of the ring $\mathbb{Z}/n$ for various $n$; see for example this question: <a href="https://math.stackexchange.com/questions/508770/for-group-mathbbz-18-how-do-i-find-all-subgroups?rq=1">For group $\mathbb{Z_{18}^*}$, how do I find all subgroups</a>. For more demanding examples one has to do more. 
A nice example is to find all subgroups of $S_6$, which are $$ D_6, D_3, D_2, Z_6, Z_3, Z_2, D_3\times D_3, Z_3×Z_3, (Z_3\times Z_3)⋊ Z_2, D_3 \times Z_3, Z_2 \times Z_2 \times Z_2, D_3 ≀ Z_2, Z_4, Z_5, D_4, Z_2 \times Z_4, D_4, D_5, A_4, D_4 \times Z_2, Z_5 ⋊ Z_4, A_4 \times Z_2, S_4, (Z_3 \times Z_3) ⋊ Z_4, S_4 \times Z_2, A_5, S_5, A_6, S_6 $$ This has been answered <a href="https://math.stackexchange.com/questions/446097/subgroups-of-s-6">here</a>.</p> <p>Also instructive is to study subgroups of matrices in $GL_n(K)$, for example the subgroup $SL_n(K)$ of invertible $n\times n$-matrices with determinant $1$, or $O_n(K)$ the subgroup of orthogonal matrices, or $Sp_n(K)$ and so on.</p> Answer 40: <p>Cauchy's Theorem may be more useful than Lagrange's Theorem, when dealing with finite groups. Cauchy's Theorem states that for every prime $p$ dividing $|G|$, there exists a subgroup $H \leq G$ of order $p$. So start with the cyclic subgroups of prime order. Then for any two cyclic groups $H_{1}, H_{2}$ of prime order, you can obtain a new subgroup by taking the join $\langle H_{1}, H_{2} \rangle$, which is the subgroup generated by the elements of $H_{1} \cup H_{2}$. </p> <p>Now the subgroups of a group $G$ are ordered by the subgroup relation, which is a partial order. That is, the subgroup relation is reflexive (a group is a subgroup of itself), transitive (if $H \leq K$ and $K \leq M$, then $H \leq M$), and anti-symmetric (if $H \leq K$ and $K \leq H$, then $H \cong K$). So if you have two subgroups $H, K \leq G$ such that $H \not \leq K$ and $K \not \leq H$, the join $\langle H, K \rangle$ is another subgroup of $G$. Chapter 2, Section 5 in Dummit and Foote has a good exposition on the Lattice of Subgroups, which provides a visual graph called a Hasse Diagram to help study the subgroup partial order. I would direct you there for more reading. 
</p> Answer 41: <p>The most basic way to figure out subgroups is to take a subset of the elements, and then find all products of powers of those elements.</p> <p>So, say you have two elements $a,b$ in your group, then you need to consider all strings of $a,b$, yielding $$1,a,b,a^2,ab,ba,b^2,a^3,aba,ba^2,a^2b,ab^2,bab,b^3,...$$</p> <p>If your group is not finite, you also have to consider negative powers. Or you can start with three members.</p> <p>For abelian groups, it becomes simpler - you can take any:</p> <p>$$a^nb^m$$</p> <p>with $n,m$ integers, because $ab=ba$.</p> <p>The simplest is to start with one elements: $\langle a\rangle=\{1,a,a^2,a^3,\dots\}$ is a subgroup for any $a$ if $G$ is finite, otherwise, you need to take $\{1,a,a^{-1},a^2,a^{-2},\dots\}$. In the case of cyclic groups, these are all the subgroups (although you get duplicates - there are $a\neq b$ such that $\langle a\rangle = \langle b\rangle$.)</p> <p>Ultimately, you build from these, but in the general case, we are talking a hard problem.</p> <p>You eventually learn to do certain types of deductions, but you should really try this brute force technique and use the understandings you have of the individual groups to forge ahead.</p> Answer 42: <p>Edit: Written this assuming you've taken a course on group theory.</p> <p>Finding <em>all</em> subgroups of large finite groups is in general a very difficult problem. Usually, I'd start with <a href="https://en.wikipedia.org/wiki/Lagrange&#39;s_theorem_(group_theory)" rel="noreferrer">Lagrange's theorem</a> to find possible orders of subgroups.</p> <p>Next, you know that every subgroup has to contain the identity element. 
Then you can start to work out orders of elements contained in possible subgroups - again noting that orders of elements need to divide the order of the group.</p> <p>Since you've added the tag for cyclic groups I'll give an example that contains cyclic groups.</p> <p>Consider the dihedral group $D_n = \langle r,s \mid r^n = s^2 = e, srs = r^{-1} \rangle$ where $e$ is the identity. It has order $2n$ and so the order of subgroups must divide $2n$. One such example is the subgroup $\langle r\rangle = e, r, r^2, r^3, ..., r^{n-1} $ which is clearly isomorphic to $\mathbb{Z}_n$.</p> Answer 43: <p>The question is very hard in general, so one may want to consider some examples. Almost all popular examples have been discussed here at MSE; a starter is to find all subgroups of the group of units of the ring $\mathbb{Z}/n$ for various $n$; see for example this question: <a href="https://math.stackexchange.com/questions/508770/for-group-mathbbz-18-how-do-i-find-all-subgroups?rq=1">For group $\mathbb{Z_{18}^*}$, how do I find all subgroups</a>. For more demanding examples one has to do more. A nice example is to find all subgroups of $S_6$, which are $$ D_6, D_3, D_2, Z_6, Z_3, Z_2, D_3\times D_3, Z_3×Z_3, (Z_3\times Z_3)⋊ Z_2, D_3 \times Z_3, Z_2 \times Z_2 \times Z_2, D_3 ≀ Z_2, Z_4, Z_5, D_4, Z_2 \times Z_4, D_4, D_5, A_4, D_4 \times Z_2, Z_5 ⋊ Z_4, A_4 \times Z_2, S_4, (Z_3 \times Z_3) ⋊ Z_4, S_4 \times Z_2, A_5, S_5, A_6, S_6 $$ This has been answered <a href="https://math.stackexchange.com/questions/446097/subgroups-of-s-6">here</a>.</p> <p>Also instructive is to study subgroups of matrices in $GL_n(K)$, for example the subgroup $SL_n(K)$ of invertible $n\times n$-matrices with determinant $1$, or $O_n(K)$ the subgroup of orthogonal matrices, or $Sp_n(K)$ and so on.</p> Answer 44: <p>Cauchy's Theorem may be more useful than Lagrange's Theorem, when dealing with finite groups. 
Cauchy's Theorem states that for every prime $p$ dividing $|G|$, there exists a subgroup $H \leq G$ of order $p$. So start with the cyclic subgroups of prime order. Then for any two cyclic groups $H_{1}, H_{2}$ of prime order, you can obtain a new subgroup by taking the join $\langle H_{1}, H_{2} \rangle$, which is the subgroup generated by the elements of $H_{1} \cup H_{2}$. </p> <p>Now the subgroups of a group $G$ are ordered by the subgroup relation, which is a partial order. That is, the subgroup relation is reflexive (a group is a subgroup of itself), transitive (if $H \leq K$ and $K \leq M$, then $H \leq M$), and anti-symmetric (if $H \leq K$ and $K \leq H$, then $H \cong K$). So if you have two subgroups $H, K \leq G$ such that $H \not \leq K$ and $K \not \leq H$, the join $\langle H, K \rangle$ is another subgroup of $G$. Chapter 2, Section 5 in Dummit and Foote has a good exposition on the Lattice of Subgroups, which provides a visual graph called a Hasse Diagram to help study the subgroup partial order. I would direct you there for more reading. </p> Answer 45: <p>The most basic way to figure out subgroups is to take a subset of the elements, and then find all products of powers of those elements.</p> <p>So, say you have two elements $a,b$ in your group, then you need to consider all strings of $a,b$, yielding $$1,a,b,a^2,ab,ba,b^2,a^3,aba,ba^2,a^2b,ab^2,bab,b^3,...$$</p> <p>If your group is not finite, you also have to consider negative powers. Or you can start with three members.</p> <p>For abelian groups, it becomes simpler - you can take any:</p> <p>$$a^nb^m$$</p> <p>with $n,m$ integers, because $ab=ba$.</p> <p>The simplest is to start with one elements: $\langle a\rangle=\{1,a,a^2,a^3,\dots\}$ is a subgroup for any $a$ if $G$ is finite, otherwise, you need to take $\{1,a,a^{-1},a^2,a^{-2},\dots\}$. 
In the case of cyclic groups, these are all the subgroups (although you get duplicates - there are $a\neq b$ such that $\langle a\rangle = \langle b\rangle$.)</p> <p>Ultimately, you build from these, but in the general case, we are talking a hard problem.</p> <p>You eventually learn to do certain types of deductions, but you should really try this brute force technique and use the understandings you have of the individual groups to forge ahead.</p> Answer 46: <p>Edit: Written this assuming you've taken a course on group theory.</p> <p>Finding <em>all</em> subgroups of large finite groups is in general a very difficult problem. Usually, I'd start with <a href="https://en.wikipedia.org/wiki/Lagrange&#39;s_theorem_(group_theory)" rel="noreferrer">Lagrange's theorem</a> to find possible orders of subgroups.</p> <p>Next, you know that every subgroup has to contain the identity element. Then you can start to work out orders of elements contained in possible subgroups - again noting that orders of elements need to divide the order of the group.</p> <p>Since you've added the tag for cyclic groups I'll give an example that contains cyclic groups.</p> <p>Consider the dihedral group $D_n = \langle r,s \mid r^n = s^2 = e, srs = r^{-1} \rangle$ where $e$ is the identity. It has order $2n$ and so the order of subgroups must divide $2n$. One such example is the subgroup $\langle r\rangle = e, r, r^2, r^3, ..., r^{n-1} $ which is clearly isomorphic to $\mathbb{Z}_n$.</p> Answer 47: <p>The question is very hard in general, so one may want to consider some examples. Almost all popular examples have been discussed here at MSE; a starter is to find all subgroups of the group of units of the ring $\mathbb{Z}/n$ for various $n$; see for example this question: <a href="https://math.stackexchange.com/questions/508770/for-group-mathbbz-18-how-do-i-find-all-subgroups?rq=1">For group $\mathbb{Z_{18}^*}$, how do I find all subgroups</a>. For more demanding examples one has to do more. 
A nice example is to find all subgroups of $S_6$, which are $$ D_6, D_3, D_2, Z_6, Z_3, Z_2, D_3\times D_3, Z_3×Z_3, (Z_3\times Z_3)⋊ Z_2, D_3 \times Z_3, Z_2 \times Z_2 \times Z_2, D_3 ≀ Z_2, Z_4, Z_5, D_4, Z_2 \times Z_4, D_4, D_5, A_4, D_4 \times Z_2, Z_5 ⋊ Z_4, A_4 \times Z_2, S_4, (Z_3 \times Z_3) ⋊ Z_4, S_4 \times Z_2, A_5, S_5, A_6, S_6 $$ This has been answered <a href="https://math.stackexchange.com/questions/446097/subgroups-of-s-6">here</a>.</p> <p>Also instructive is to study subgroups of matrices in $GL_n(K)$, for example the subgroup $SL_n(K)$ of invertible $n\times n$-matrices with determinant $1$, or $O_n(K)$ the subgroup of orthogonal matrices, or $Sp_n(K)$ and so on.</p> Answer 48: <p>Cauchy's Theorem may be more useful than Lagrange's Theorem, when dealing with finite groups. Cauchy's Theorem states that for every prime $p$ dividing $|G|$, there exists a subgroup $H \leq G$ of order $p$. So start with the cyclic subgroups of prime order. Then for any two cyclic groups $H_{1}, H_{2}$ of prime order, you can obtain a new subgroup by taking the join $\langle H_{1}, H_{2} \rangle$, which is the subgroup generated by the elements of $H_{1} \cup H_{2}$. </p> <p>Now the subgroups of a group $G$ are ordered by the subgroup relation, which is a partial order. That is, the subgroup relation is reflexive (a group is a subgroup of itself), transitive (if $H \leq K$ and $K \leq M$, then $H \leq M$), and anti-symmetric (if $H \leq K$ and $K \leq H$, then $H \cong K$). So if you have two subgroups $H, K \leq G$ such that $H \not \leq K$ and $K \not \leq H$, the join $\langle H, K \rangle$ is another subgroup of $G$. Chapter 2, Section 5 in Dummit and Foote has a good exposition on the Lattice of Subgroups, which provides a visual graph called a Hasse Diagram to help study the subgroup partial order. I would direct you there for more reading. 
</p>
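The brute-force recipe in the answers above — pick some generators, then close under products — is easy to mechanize for small finite groups. Below is a minimal Python sketch (my own illustration, not taken from any answer) that enumerates all subgroups of $S_3$, with elements represented as permutation tuples, and confirms that every subgroup order divides $|S_3| = 6$, as Lagrange's theorem requires:

```python
from itertools import combinations, permutations

def compose(p, q):
    # (p o q)(i) = p[q[i]] for permutation tuples p, q
    return tuple(p[q[i]] for i in range(len(q)))

def generated(gens):
    # Closure of gens under composition; in a finite group this
    # already yields a subgroup (inverses appear as powers).
    H = {tuple(range(3))}          # identity permutation
    frontier = set(gens)
    while frontier:
        H |= frontier
        frontier = {compose(a, b) for a in H for b in H} - H
    return frozenset(H)

G = list(permutations(range(3)))   # the 6 elements of S_3
subgroups = {generated(gens)
             for r in range(len(G) + 1)
             for gens in combinations(G, r)}

orders = sorted(len(H) for H in subgroups)
print(orders)   # [1, 2, 2, 2, 3, 6]: trivial, three order-2 subgroups, A_3, S_3
```

The same `generated` helper also computes the join construction from the Cauchy-based answer: `generated(H1 | H2)` gives $\langle H_1, H_2 \rangle$ for two subgroups stored as sets.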
Limit problem involving bijective function
2,082,602
7
https://math.stackexchange.com/questions/2082602/limit-problem-involving-bijective-function
<p>Let $f: \Bbb N^{\star}\to \Bbb N^{\star}$ be a bijective function such that the limit $$\lim _{n \to \infty} {\frac {f(n)} {n}}$$ exists. Find the value of this limit.</p> <p>I noticed that, if $f(n)=n$, then the limit is $1$. I couldn't make any more progress. Can you help me?</p>
[ { "answer_id": 2082669, "score": 4, "text": "<p>Let <span class=\"math-container\">$f$</span> be permutation of <span class=\"math-container\">$\\mathbb{N^*}$</span>. Suppose the inequality <span class=\"math-container\">$f(n) \\leq n$</span> holds for only a finite set of integers <span class=\"math-container\">$A$</span>. Let <span class=\"math-container\">$m = \\max A$</span>, then <span class=\"math-container\">$f \\left( [m + 1,\\infty) \\cap \\mathbb{N} \\right ) \\subset [m + 2,\\infty) \\cap \\mathbb{N}$</span>, thus <span class=\"math-container\">$f^{-1} \\big ( [\\![ 1,m+1 ]\\!] \\big ) \\subset [\\![ 1,m ]\\!]$</span>, which is not possible.</p>\n<p>Thus <span class=\"math-container\">$f(n) \\over {n}$</span> <span class=\"math-container\">$ \\leq 1$</span> holds infinitely many times.</p>\n<p>Applying the same reasoning to <span class=\"math-container\">$f^{-1}$</span> we get : <span class=\"math-container\">$\\frac{f^{-1}(n)}{n} \\leq 1 $</span> <em>i.e</em> <span class=\"math-container\">$f(n) \\over {n}$</span> <span class=\"math-container\">$ \\geq 1$</span> holds infinitely many times.</p>\n<p>Since the limit exists, <span class=\"math-container\">$\\lim \\frac{f(n)}{n} = 1$</span> .</p>\n" }, { "answer_id": 2082649, "score": 3, "text": "<p>Let $L = \\lim_{n \\to \\infty} \\frac{f(n)}{n}$. Note $L \\geq 0$. Consider the cases:</p>\n\n<p>$L &gt; 1$:</p>\n\n<p>Let $\\epsilon &gt; 0$ be such that $L &gt; 1 + \\epsilon$. Then $\\exists N \\in \\Bbb N$ such that $\\frac{f(n)}{n} &gt; 1 + \\epsilon \\quad \\forall n \\geq N$ $\\iff f(n) &gt; (1+\\epsilon)n$.</p>\n\n<p>Take $M &gt; N$ sufficiently large such that $\\epsilon M &gt; 1 \\iff (1 + \\epsilon)M &gt; M + 1$. 
Then $\\forall n &gt; M, f(n) &gt; M + 1$, so all of $f^{-1}(1), f^{-1}(2), \\dots f^{-1}(M + 1)$ must be given by $n_1, n_2, \\dots, n_{M+1} \\leq M$, but then by the pigeon-hole principle, at least two of the $n_i$ are equal which is a contradiction, as $f$ is a bijection.</p>\n\n<p>$0 \\leq L &lt; 1$:</p>\n\n<p>Take $\\epsilon &gt; 0$ such that $L &lt; 1 - \\epsilon$. Proceed similarly to the $L &gt; 1$ case. $\\exists N \\in \\Bbb N$ such that $\\frac{f(n)}{n} &lt; 1 - \\epsilon \\quad \\forall n \\geq N \\iff f(n) &lt; (1-\\epsilon)n$. Choose $M &gt; N$ sufficiently large such that $\\epsilon M &gt; 1 \\iff (1-\\epsilon)M &lt; M - 1$. Then $\\forall n &gt; M, f(n) &lt; M - 1$, but this is clearly a contradiction since $f$ is a bijection.</p>\n\n<p>Hence $L = 1$.</p>\n" }, { "answer_id": 2082608, "score": 1, "text": "<p>Because this limit is finite, exist $M$ and $n_0$ such that $f(n)/n&lt;M$ for all $n&gt;n_0$ ie is bounded ae.</p>\n\n<p>Now, because $f(n)&lt;Mn $ for all $n&gt;n_0$ then $f=O(n)$ and by definition the limit request is 1</p>\n" } ]
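For intuition about the result, here is a tiny numerical sketch (my own illustration; the particular bijection is an assumption chosen for the demo, not part of the question): the bijection of $\Bbb N^{\star}$ that swaps each odd $n$ with $n+1$ has a convergent ratio $f(n)/n$, and that ratio tends to $1$, the only value the proofs above allow:

```python
def f(n):
    # Pair-swapping bijection of the positive integers:
    # f(2k-1) = 2k, f(2k) = 2k-1.
    return n + 1 if n % 2 == 1 else n - 1

# Sanity check: f permutes every initial block {1, ..., 2m},
# so it is a bijection of the positive integers.
assert sorted(f(n) for n in range(1, 101)) == list(range(1, 101))

for n in (10, 10**3, 10**6):
    print(n, f(n) / n)   # ratios 0.9, 0.999, 0.999999 -> limit 1
```

Any bijection whose ratio merely oscillates (e.g. one shuffling ever-longer blocks) is not a counterexample, since the hypothesis already demands that the limit exist.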
Then $\\exists N \\in \\Bbb N$ such that $\\frac{f(n)}{n} &gt; 1 + \\epsilon \\quad \\forall n \\geq N$ $\\iff f(n) &gt; (1+\\epsilon)n$.</p>\n\n<p>Take $M &gt; N$ sufficiently large such that $\\epsilon M &gt; 1 \\iff (1 + \\epsilon)M &gt; M + 1$. Then $\\forall n &gt; M, f(n) &gt; M + 1$, so all of $f^{-1}(1), f^{-1}(2), \\dots f^{-1}(M + 1)$ must be given by $n_1, n_2, \\dots, n_{M+1} \\leq M$, but then by the pigeon-hole principle, at least two of the $n_i$ are equal which is a contradiction, as $f$ is a bijection.</p>\n\n<p>$0 \\leq L &lt; 1$:</p>\n\n<p>Take $\\epsilon &gt; 0$ such that $L &lt; 1 - \\epsilon$. Proceed similarly to the $L &gt; 1$ case. $\\exists N \\in \\Bbb N$ such that $\\frac{f(n)}{n} &lt; 1 - \\epsilon \\quad \\forall n \\geq N \\iff f(n) &lt; (1-\\epsilon)n$. Choose $M &gt; N$ sufficiently large such that $\\epsilon M &gt; 1 \\iff (1-\\epsilon)M &lt; M - 1$. Then $\\forall n &gt; M, f(n) &lt; M - 1$, but this is clearly a contradiction since $f$ is a bijection.</p>\n\n<p>Hence $L = 1$.</p>\n" }, { "answer_id": 2082608, "score": 1, "text": "<p>Because this limit is finite, exist $M$ and $n_0$ such that $f(n)/n&lt;M$ for all $n&gt;n_0$ ie is bounded ae.</p>\n\n<p>Now, because $f(n)&lt;Mn $ for all $n&gt;n_0$ then $f=O(n)$ and by definition the limit request is 1</p>\n" }, { "answer_id": 2082669, "score": 4, "text": "<p>Let <span class=\"math-container\">$f$</span> be permutation of <span class=\"math-container\">$\\mathbb{N^*}$</span>. Suppose the inequality <span class=\"math-container\">$f(n) \\leq n$</span> holds for only a finite set of integers <span class=\"math-container\">$A$</span>. Let <span class=\"math-container\">$m = \\max A$</span>, then <span class=\"math-container\">$f \\left( [m + 1,\\infty) \\cap \\mathbb{N} \\right ) \\subset [m + 2,\\infty) \\cap \\mathbb{N}$</span>, thus <span class=\"math-container\">$f^{-1} \\big ( [\\![ 1,m+1 ]\\!] 
\\big ) \\subset [\\![ 1,m ]\\!]$</span>, which is not possible.</p>\n<p>Thus <span class=\"math-container\">$f(n) \\over {n}$</span> <span class=\"math-container\">$ \\leq 1$</span> holds infinitely many times.</p>\n<p>Applying the same reasoning to <span class=\"math-container\">$f^{-1}$</span> we get : <span class=\"math-container\">$\\frac{f^{-1}(n)}{n} \\leq 1 $</span> <em>i.e</em> <span class=\"math-container\">$f(n) \\over {n}$</span> <span class=\"math-container\">$ \\geq 1$</span> holds infinitely many times.</p>\n<p>Since the limit exists, <span class=\"math-container\">$\\lim \\frac{f(n)}{n} = 1$</span> .</p>\n" }, { "answer_id": 2082649, "score": 3, "text": "<p>Let $L = \\lim_{n \\to \\infty} \\frac{f(n)}{n}$. Note $L \\geq 0$. Consider the cases:</p>\n\n<p>$L &gt; 1$:</p>\n\n<p>Let $\\epsilon &gt; 0$ be such that $L &gt; 1 + \\epsilon$. Then $\\exists N \\in \\Bbb N$ such that $\\frac{f(n)}{n} &gt; 1 + \\epsilon \\quad \\forall n \\geq N$ $\\iff f(n) &gt; (1+\\epsilon)n$.</p>\n\n<p>Take $M &gt; N$ sufficiently large such that $\\epsilon M &gt; 1 \\iff (1 + \\epsilon)M &gt; M + 1$. Then $\\forall n &gt; M, f(n) &gt; M + 1$, so all of $f^{-1}(1), f^{-1}(2), \\dots f^{-1}(M + 1)$ must be given by $n_1, n_2, \\dots, n_{M+1} \\leq M$, but then by the pigeon-hole principle, at least two of the $n_i$ are equal which is a contradiction, as $f$ is a bijection.</p>\n\n<p>$0 \\leq L &lt; 1$:</p>\n\n<p>Take $\\epsilon &gt; 0$ such that $L &lt; 1 - \\epsilon$. Proceed similarly to the $L &gt; 1$ case. $\\exists N \\in \\Bbb N$ such that $\\frac{f(n)}{n} &lt; 1 - \\epsilon \\quad \\forall n \\geq N \\iff f(n) &lt; (1-\\epsilon)n$. Choose $M &gt; N$ sufficiently large such that $\\epsilon M &gt; 1 \\iff (1-\\epsilon)M &lt; M - 1$. 
Then $\\forall n &gt; M, f(n) &lt; M - 1$, but this is clearly a contradiction since $f$ is a bijection.</p>\n\n<p>Hence $L = 1$.</p>\n" }, { "answer_id": 2082608, "score": 1, "text": "<p>Because this limit is finite, exist $M$ and $n_0$ such that $f(n)/n&lt;M$ for all $n&gt;n_0$ ie is bounded ae.</p>\n\n<p>Now, because $f(n)&lt;Mn $ for all $n&gt;n_0$ then $f=O(n)$ and by definition the limit request is 1</p>\n" }, { "answer_id": 2082669, "score": 4, "text": "<p>Let <span class=\"math-container\">$f$</span> be permutation of <span class=\"math-container\">$\\mathbb{N^*}$</span>. Suppose the inequality <span class=\"math-container\">$f(n) \\leq n$</span> holds for only a finite set of integers <span class=\"math-container\">$A$</span>. Let <span class=\"math-container\">$m = \\max A$</span>, then <span class=\"math-container\">$f \\left( [m + 1,\\infty) \\cap \\mathbb{N} \\right ) \\subset [m + 2,\\infty) \\cap \\mathbb{N}$</span>, thus <span class=\"math-container\">$f^{-1} \\big ( [\\![ 1,m+1 ]\\!] \\big ) \\subset [\\![ 1,m ]\\!]$</span>, which is not possible.</p>\n<p>Thus <span class=\"math-container\">$f(n) \\over {n}$</span> <span class=\"math-container\">$ \\leq 1$</span> holds infinitely many times.</p>\n<p>Applying the same reasoning to <span class=\"math-container\">$f^{-1}$</span> we get : <span class=\"math-container\">$\\frac{f^{-1}(n)}{n} \\leq 1 $</span> <em>i.e</em> <span class=\"math-container\">$f(n) \\over {n}$</span> <span class=\"math-container\">$ \\geq 1$</span> holds infinitely many times.</p>\n<p>Since the limit exists, <span class=\"math-container\">$\\lim \\frac{f(n)}{n} = 1$</span> .</p>\n" }, { "answer_id": 2082649, "score": 3, "text": "<p>Let $L = \\lim_{n \\to \\infty} \\frac{f(n)}{n}$. Note $L \\geq 0$. Consider the cases:</p>\n\n<p>$L &gt; 1$:</p>\n\n<p>Let $\\epsilon &gt; 0$ be such that $L &gt; 1 + \\epsilon$. 
Then $\\exists N \\in \\Bbb N$ such that $\\frac{f(n)}{n} &gt; 1 + \\epsilon \\quad \\forall n \\geq N$ $\\iff f(n) &gt; (1+\\epsilon)n$.</p>\n\n<p>Take $M &gt; N$ sufficiently large such that $\\epsilon M &gt; 1 \\iff (1 + \\epsilon)M &gt; M + 1$. Then $\\forall n &gt; M, f(n) &gt; M + 1$, so all of $f^{-1}(1), f^{-1}(2), \\dots f^{-1}(M + 1)$ must be given by $n_1, n_2, \\dots, n_{M+1} \\leq M$, but then by the pigeon-hole principle, at least two of the $n_i$ are equal which is a contradiction, as $f$ is a bijection.</p>\n\n<p>$0 \\leq L &lt; 1$:</p>\n\n<p>Take $\\epsilon &gt; 0$ such that $L &lt; 1 - \\epsilon$. Proceed similarly to the $L &gt; 1$ case. $\\exists N \\in \\Bbb N$ such that $\\frac{f(n)}{n} &lt; 1 - \\epsilon \\quad \\forall n \\geq N \\iff f(n) &lt; (1-\\epsilon)n$. Choose $M &gt; N$ sufficiently large such that $\\epsilon M &gt; 1 \\iff (1-\\epsilon)M &lt; M - 1$. Then $\\forall n &gt; M, f(n) &lt; M - 1$, but this is clearly a contradiction since $f$ is a bijection.</p>\n\n<p>Hence $L = 1$.</p>\n" }, { "answer_id": 2082608, "score": 1, "text": "<p>Because this limit is finite, exist $M$ and $n_0$ such that $f(n)/n&lt;M$ for all $n&gt;n_0$ ie is bounded ae.</p>\n\n<p>Now, because $f(n)&lt;Mn $ for all $n&gt;n_0$ then $f=O(n)$ and by definition the limit request is 1</p>\n" }, { "answer_id": 2082669, "score": 4, "text": "<p>Let <span class=\"math-container\">$f$</span> be permutation of <span class=\"math-container\">$\\mathbb{N^*}$</span>. Suppose the inequality <span class=\"math-container\">$f(n) \\leq n$</span> holds for only a finite set of integers <span class=\"math-container\">$A$</span>. Let <span class=\"math-container\">$m = \\max A$</span>, then <span class=\"math-container\">$f \\left( [m + 1,\\infty) \\cap \\mathbb{N} \\right ) \\subset [m + 2,\\infty) \\cap \\mathbb{N}$</span>, thus <span class=\"math-container\">$f^{-1} \\big ( [\\![ 1,m+1 ]\\!] 
\\big ) \\subset [\\![ 1,m ]\\!]$</span>, which is not possible.</p>\n<p>Thus <span class=\"math-container\">$f(n) \\over {n}$</span> <span class=\"math-container\">$ \\leq 1$</span> holds infinitely many times.</p>\n<p>Applying the same reasoning to <span class=\"math-container\">$f^{-1}$</span> we get : <span class=\"math-container\">$\\frac{f^{-1}(n)}{n} \\leq 1 $</span> <em>i.e</em> <span class=\"math-container\">$f(n) \\over {n}$</span> <span class=\"math-container\">$ \\geq 1$</span> holds infinitely many times.</p>\n<p>Since the limit exists, <span class=\"math-container\">$\\lim \\frac{f(n)}{n} = 1$</span> .</p>\n" }, { "answer_id": 2082649, "score": 3, "text": "<p>Let $L = \\lim_{n \\to \\infty} \\frac{f(n)}{n}$. Note $L \\geq 0$. Consider the cases:</p>\n\n<p>$L &gt; 1$:</p>\n\n<p>Let $\\epsilon &gt; 0$ be such that $L &gt; 1 + \\epsilon$. Then $\\exists N \\in \\Bbb N$ such that $\\frac{f(n)}{n} &gt; 1 + \\epsilon \\quad \\forall n \\geq N$ $\\iff f(n) &gt; (1+\\epsilon)n$.</p>\n\n<p>Take $M &gt; N$ sufficiently large such that $\\epsilon M &gt; 1 \\iff (1 + \\epsilon)M &gt; M + 1$. Then $\\forall n &gt; M, f(n) &gt; M + 1$, so all of $f^{-1}(1), f^{-1}(2), \\dots f^{-1}(M + 1)$ must be given by $n_1, n_2, \\dots, n_{M+1} \\leq M$, but then by the pigeon-hole principle, at least two of the $n_i$ are equal which is a contradiction, as $f$ is a bijection.</p>\n\n<p>$0 \\leq L &lt; 1$:</p>\n\n<p>Take $\\epsilon &gt; 0$ such that $L &lt; 1 - \\epsilon$. Proceed similarly to the $L &gt; 1$ case. $\\exists N \\in \\Bbb N$ such that $\\frac{f(n)}{n} &lt; 1 - \\epsilon \\quad \\forall n \\geq N \\iff f(n) &lt; (1-\\epsilon)n$. Choose $M &gt; N$ sufficiently large such that $\\epsilon M &gt; 1 \\iff (1-\\epsilon)M &lt; M - 1$. 
Then $\\forall n &gt; M, f(n) &lt; M - 1$, but this is clearly a contradiction since $f$ is a bijection.</p>\n\n<p>Hence $L = 1$.</p>\n" }, { "answer_id": 2082608, "score": 1, "text": "<p>Because this limit is finite, exist $M$ and $n_0$ such that $f(n)/n&lt;M$ for all $n&gt;n_0$ ie is bounded ae.</p>\n\n<p>Now, because $f(n)&lt;Mn $ for all $n&gt;n_0$ then $f=O(n)$ and by definition the limit request is 1</p>\n" }, { "answer_id": 2082669, "score": 4, "text": "<p>Let <span class=\"math-container\">$f$</span> be permutation of <span class=\"math-container\">$\\mathbb{N^*}$</span>. Suppose the inequality <span class=\"math-container\">$f(n) \\leq n$</span> holds for only a finite set of integers <span class=\"math-container\">$A$</span>. Let <span class=\"math-container\">$m = \\max A$</span>, then <span class=\"math-container\">$f \\left( [m + 1,\\infty) \\cap \\mathbb{N} \\right ) \\subset [m + 2,\\infty) \\cap \\mathbb{N}$</span>, thus <span class=\"math-container\">$f^{-1} \\big ( [\\![ 1,m+1 ]\\!] \\big ) \\subset [\\![ 1,m ]\\!]$</span>, which is not possible.</p>\n<p>Thus <span class=\"math-container\">$f(n) \\over {n}$</span> <span class=\"math-container\">$ \\leq 1$</span> holds infinitely many times.</p>\n<p>Applying the same reasoning to <span class=\"math-container\">$f^{-1}$</span> we get : <span class=\"math-container\">$\\frac{f^{-1}(n)}{n} \\leq 1 $</span> <em>i.e</em> <span class=\"math-container\">$f(n) \\over {n}$</span> <span class=\"math-container\">$ \\geq 1$</span> holds infinitely many times.</p>\n<p>Since the limit exists, <span class=\"math-container\">$\\lim \\frac{f(n)}{n} = 1$</span> .</p>\n" }, { "answer_id": 2082649, "score": 3, "text": "<p>Let $L = \\lim_{n \\to \\infty} \\frac{f(n)}{n}$. Note $L \\geq 0$. Consider the cases:</p>\n\n<p>$L &gt; 1$:</p>\n\n<p>Let $\\epsilon &gt; 0$ be such that $L &gt; 1 + \\epsilon$. 
Then $\\exists N \\in \\Bbb N$ such that $\\frac{f(n)}{n} &gt; 1 + \\epsilon \\quad \\forall n \\geq N$ $\\iff f(n) &gt; (1+\\epsilon)n$.</p>\n\n<p>Take $M &gt; N$ sufficiently large such that $\\epsilon M &gt; 1 \\iff (1 + \\epsilon)M &gt; M + 1$. Then $\\forall n &gt; M, f(n) &gt; M + 1$, so all of $f^{-1}(1), f^{-1}(2), \\dots f^{-1}(M + 1)$ must be given by $n_1, n_2, \\dots, n_{M+1} \\leq M$, but then by the pigeon-hole principle, at least two of the $n_i$ are equal which is a contradiction, as $f$ is a bijection.</p>\n\n<p>$0 \\leq L &lt; 1$:</p>\n\n<p>Take $\\epsilon &gt; 0$ such that $L &lt; 1 - \\epsilon$. Proceed similarly to the $L &gt; 1$ case. $\\exists N \\in \\Bbb N$ such that $\\frac{f(n)}{n} &lt; 1 - \\epsilon \\quad \\forall n \\geq N \\iff f(n) &lt; (1-\\epsilon)n$. Choose $M &gt; N$ sufficiently large such that $\\epsilon M &gt; 1 \\iff (1-\\epsilon)M &lt; M - 1$. Then $\\forall n &gt; M, f(n) &lt; M - 1$, but this is clearly a contradiction since $f$ is a bijection.</p>\n\n<p>Hence $L = 1$.</p>\n" }, { "answer_id": 2082608, "score": 1, "text": "<p>Because this limit is finite, exist $M$ and $n_0$ such that $f(n)/n&lt;M$ for all $n&gt;n_0$ ie is bounded ae.</p>\n\n<p>Now, because $f(n)&lt;Mn $ for all $n&gt;n_0$ then $f=O(n)$ and by definition the limit request is 1</p>\n" }, { "answer_id": 2082669, "score": 4, "text": "<p>Let <span class=\"math-container\">$f$</span> be permutation of <span class=\"math-container\">$\\mathbb{N^*}$</span>. Suppose the inequality <span class=\"math-container\">$f(n) \\leq n$</span> holds for only a finite set of integers <span class=\"math-container\">$A$</span>. Let <span class=\"math-container\">$m = \\max A$</span>, then <span class=\"math-container\">$f \\left( [m + 1,\\infty) \\cap \\mathbb{N} \\right ) \\subset [m + 2,\\infty) \\cap \\mathbb{N}$</span>, thus <span class=\"math-container\">$f^{-1} \\big ( [\\![ 1,m+1 ]\\!] 
\\big ) \\subset [\\![ 1,m ]\\!]$</span>, which is not possible.</p>\n<p>Thus <span class=\"math-container\">$f(n) \\over {n}$</span> <span class=\"math-container\">$ \\leq 1$</span> holds infinitely many times.</p>\n<p>Applying the same reasoning to <span class=\"math-container\">$f^{-1}$</span> we get : <span class=\"math-container\">$\\frac{f^{-1}(n)}{n} \\leq 1 $</span> <em>i.e</em> <span class=\"math-container\">$f(n) \\over {n}$</span> <span class=\"math-container\">$ \\geq 1$</span> holds infinitely many times.</p>\n<p>Since the limit exists, <span class=\"math-container\">$\\lim \\frac{f(n)}{n} = 1$</span> .</p>\n" }, { "answer_id": 2082649, "score": 3, "text": "<p>Let $L = \\lim_{n \\to \\infty} \\frac{f(n)}{n}$. Note $L \\geq 0$. Consider the cases:</p>\n\n<p>$L &gt; 1$:</p>\n\n<p>Let $\\epsilon &gt; 0$ be such that $L &gt; 1 + \\epsilon$. Then $\\exists N \\in \\Bbb N$ such that $\\frac{f(n)}{n} &gt; 1 + \\epsilon \\quad \\forall n \\geq N$ $\\iff f(n) &gt; (1+\\epsilon)n$.</p>\n\n<p>Take $M &gt; N$ sufficiently large such that $\\epsilon M &gt; 1 \\iff (1 + \\epsilon)M &gt; M + 1$. Then $\\forall n &gt; M, f(n) &gt; M + 1$, so all of $f^{-1}(1), f^{-1}(2), \\dots f^{-1}(M + 1)$ must be given by $n_1, n_2, \\dots, n_{M+1} \\leq M$, but then by the pigeon-hole principle, at least two of the $n_i$ are equal which is a contradiction, as $f$ is a bijection.</p>\n\n<p>$0 \\leq L &lt; 1$:</p>\n\n<p>Take $\\epsilon &gt; 0$ such that $L &lt; 1 - \\epsilon$. Proceed similarly to the $L &gt; 1$ case. $\\exists N \\in \\Bbb N$ such that $\\frac{f(n)}{n} &lt; 1 - \\epsilon \\quad \\forall n \\geq N \\iff f(n) &lt; (1-\\epsilon)n$. Choose $M &gt; N$ sufficiently large such that $\\epsilon M &gt; 1 \\iff (1-\\epsilon)M &lt; M - 1$. 
Then $\\forall n &gt; M, f(n) &lt; M - 1$, but this is clearly a contradiction since $f$ is a bijection.</p>\n\n<p>Hence $L = 1$.</p>\n" }, { "answer_id": 2082608, "score": 1, "text": "<p>Because this limit is finite, exist $M$ and $n_0$ such that $f(n)/n&lt;M$ for all $n&gt;n_0$ ie is bounded ae.</p>\n\n<p>Now, because $f(n)&lt;Mn $ for all $n&gt;n_0$ then $f=O(n)$ and by definition the limit request is 1</p>\n" }, { "answer_id": 2082669, "score": 4, "text": "<p>Let <span class=\"math-container\">$f$</span> be permutation of <span class=\"math-container\">$\\mathbb{N^*}$</span>. Suppose the inequality <span class=\"math-container\">$f(n) \\leq n$</span> holds for only a finite set of integers <span class=\"math-container\">$A$</span>. Let <span class=\"math-container\">$m = \\max A$</span>, then <span class=\"math-container\">$f \\left( [m + 1,\\infty) \\cap \\mathbb{N} \\right ) \\subset [m + 2,\\infty) \\cap \\mathbb{N}$</span>, thus <span class=\"math-container\">$f^{-1} \\big ( [\\![ 1,m+1 ]\\!] \\big ) \\subset [\\![ 1,m ]\\!]$</span>, which is not possible.</p>\n<p>Thus <span class=\"math-container\">$f(n) \\over {n}$</span> <span class=\"math-container\">$ \\leq 1$</span> holds infinitely many times.</p>\n<p>Applying the same reasoning to <span class=\"math-container\">$f^{-1}$</span> we get : <span class=\"math-container\">$\\frac{f^{-1}(n)}{n} \\leq 1 $</span> <em>i.e</em> <span class=\"math-container\">$f(n) \\over {n}$</span> <span class=\"math-container\">$ \\geq 1$</span> holds infinitely many times.</p>\n<p>Since the limit exists, <span class=\"math-container\">$\\lim \\frac{f(n)}{n} = 1$</span> .</p>\n" }, { "answer_id": 2082649, "score": 3, "text": "<p>Let $L = \\lim_{n \\to \\infty} \\frac{f(n)}{n}$. Note $L \\geq 0$. Consider the cases:</p>\n\n<p>$L &gt; 1$:</p>\n\n<p>Let $\\epsilon &gt; 0$ be such that $L &gt; 1 + \\epsilon$. 
Then $\\exists N \\in \\Bbb N$ such that $\\frac{f(n)}{n} &gt; 1 + \\epsilon \\quad \\forall n \\geq N$ $\\iff f(n) &gt; (1+\\epsilon)n$.</p>\n\n<p>Take $M &gt; N$ sufficiently large such that $\\epsilon M &gt; 1 \\iff (1 + \\epsilon)M &gt; M + 1$. Then $\\forall n &gt; M, f(n) &gt; M + 1$, so all of $f^{-1}(1), f^{-1}(2), \\dots f^{-1}(M + 1)$ must be given by $n_1, n_2, \\dots, n_{M+1} \\leq M$, but then by the pigeon-hole principle, at least two of the $n_i$ are equal which is a contradiction, as $f$ is a bijection.</p>\n\n<p>$0 \\leq L &lt; 1$:</p>\n\n<p>Take $\\epsilon &gt; 0$ such that $L &lt; 1 - \\epsilon$. Proceed similarly to the $L &gt; 1$ case. $\\exists N \\in \\Bbb N$ such that $\\frac{f(n)}{n} &lt; 1 - \\epsilon \\quad \\forall n \\geq N \\iff f(n) &lt; (1-\\epsilon)n$. Choose $M &gt; N$ sufficiently large such that $\\epsilon M &gt; 1 \\iff (1-\\epsilon)M &lt; M - 1$. Then $\\forall n &gt; M, f(n) &lt; M - 1$, but this is clearly a contradiction since $f$ is a bijection.</p>\n\n<p>Hence $L = 1$.</p>\n" }, { "answer_id": 2082608, "score": 1, "text": "<p>Because this limit is finite, exist $M$ and $n_0$ such that $f(n)/n&lt;M$ for all $n&gt;n_0$ ie is bounded ae.</p>\n\n<p>Now, because $f(n)&lt;Mn $ for all $n&gt;n_0$ then $f=O(n)$ and by definition the limit request is 1</p>\n" }, { "answer_id": 2082669, "score": 4, "text": "<p>Let <span class=\"math-container\">$f$</span> be permutation of <span class=\"math-container\">$\\mathbb{N^*}$</span>. Suppose the inequality <span class=\"math-container\">$f(n) \\leq n$</span> holds for only a finite set of integers <span class=\"math-container\">$A$</span>. Let <span class=\"math-container\">$m = \\max A$</span>, then <span class=\"math-container\">$f \\left( [m + 1,\\infty) \\cap \\mathbb{N} \\right ) \\subset [m + 2,\\infty) \\cap \\mathbb{N}$</span>, thus <span class=\"math-container\">$f^{-1} \\big ( [\\![ 1,m+1 ]\\!] 
\\big ) \\subset [\\![ 1,m ]\\!]$</span>, which is not possible.</p>\n<p>Thus <span class=\"math-container\">$f(n) \\over {n}$</span> <span class=\"math-container\">$ \\leq 1$</span> holds infinitely many times.</p>\n<p>Applying the same reasoning to <span class=\"math-container\">$f^{-1}$</span> we get : <span class=\"math-container\">$\\frac{f^{-1}(n)}{n} \\leq 1 $</span> <em>i.e</em> <span class=\"math-container\">$f(n) \\over {n}$</span> <span class=\"math-container\">$ \\geq 1$</span> holds infinitely many times.</p>\n<p>Since the limit exists, <span class=\"math-container\">$\\lim \\frac{f(n)}{n} = 1$</span> .</p>\n" }, { "answer_id": 2082649, "score": 3, "text": "<p>Let $L = \\lim_{n \\to \\infty} \\frac{f(n)}{n}$. Note $L \\geq 0$. Consider the cases:</p>\n\n<p>$L &gt; 1$:</p>\n\n<p>Let $\\epsilon &gt; 0$ be such that $L &gt; 1 + \\epsilon$. Then $\\exists N \\in \\Bbb N$ such that $\\frac{f(n)}{n} &gt; 1 + \\epsilon \\quad \\forall n \\geq N$ $\\iff f(n) &gt; (1+\\epsilon)n$.</p>\n\n<p>Take $M &gt; N$ sufficiently large such that $\\epsilon M &gt; 1 \\iff (1 + \\epsilon)M &gt; M + 1$. Then $\\forall n &gt; M, f(n) &gt; M + 1$, so all of $f^{-1}(1), f^{-1}(2), \\dots f^{-1}(M + 1)$ must be given by $n_1, n_2, \\dots, n_{M+1} \\leq M$, but then by the pigeon-hole principle, at least two of the $n_i$ are equal which is a contradiction, as $f$ is a bijection.</p>\n\n<p>$0 \\leq L &lt; 1$:</p>\n\n<p>Take $\\epsilon &gt; 0$ such that $L &lt; 1 - \\epsilon$. Proceed similarly to the $L &gt; 1$ case. $\\exists N \\in \\Bbb N$ such that $\\frac{f(n)}{n} &lt; 1 - \\epsilon \\quad \\forall n \\geq N \\iff f(n) &lt; (1-\\epsilon)n$. Choose $M &gt; N$ sufficiently large such that $\\epsilon M &gt; 1 \\iff (1-\\epsilon)M &lt; M - 1$. 
Then $\\forall n &gt; M, f(n) &lt; M - 1$, but this is clearly a contradiction since $f$ is a bijection.</p>\n\n<p>Hence $L = 1$.</p>\n" }, { "answer_id": 2082608, "score": 1, "text": "<p>Because this limit is finite, exist $M$ and $n_0$ such that $f(n)/n&lt;M$ for all $n&gt;n_0$ ie is bounded ae.</p>\n\n<p>Now, because $f(n)&lt;Mn $ for all $n&gt;n_0$ then $f=O(n)$ and by definition the limit request is 1</p>\n" }, { "answer_id": 2082669, "score": 4, "text": "<p>Let <span class=\"math-container\">$f$</span> be permutation of <span class=\"math-container\">$\\mathbb{N^*}$</span>. Suppose the inequality <span class=\"math-container\">$f(n) \\leq n$</span> holds for only a finite set of integers <span class=\"math-container\">$A$</span>. Let <span class=\"math-container\">$m = \\max A$</span>, then <span class=\"math-container\">$f \\left( [m + 1,\\infty) \\cap \\mathbb{N} \\right ) \\subset [m + 2,\\infty) \\cap \\mathbb{N}$</span>, thus <span class=\"math-container\">$f^{-1} \\big ( [\\![ 1,m+1 ]\\!] \\big ) \\subset [\\![ 1,m ]\\!]$</span>, which is not possible.</p>\n<p>Thus <span class=\"math-container\">$f(n) \\over {n}$</span> <span class=\"math-container\">$ \\leq 1$</span> holds infinitely many times.</p>\n<p>Applying the same reasoning to <span class=\"math-container\">$f^{-1}$</span> we get : <span class=\"math-container\">$\\frac{f^{-1}(n)}{n} \\leq 1 $</span> <em>i.e</em> <span class=\"math-container\">$f(n) \\over {n}$</span> <span class=\"math-container\">$ \\geq 1$</span> holds infinitely many times.</p>\n<p>Since the limit exists, <span class=\"math-container\">$\\lim \\frac{f(n)}{n} = 1$</span> .</p>\n" }, { "answer_id": 2082649, "score": 3, "text": "<p>Let $L = \\lim_{n \\to \\infty} \\frac{f(n)}{n}$. Note $L \\geq 0$. Consider the cases:</p>\n\n<p>$L &gt; 1$:</p>\n\n<p>Let $\\epsilon &gt; 0$ be such that $L &gt; 1 + \\epsilon$. 
Then $\\exists N \\in \\Bbb N$ such that $\\frac{f(n)}{n} &gt; 1 + \\epsilon \\quad \\forall n \\geq N$ $\\iff f(n) &gt; (1+\\epsilon)n$.</p>\n\n<p>Take $M &gt; N$ sufficiently large such that $\\epsilon M &gt; 1 \\iff (1 + \\epsilon)M &gt; M + 1$. Then $\\forall n &gt; M, f(n) &gt; M + 1$, so all of $f^{-1}(1), f^{-1}(2), \\dots f^{-1}(M + 1)$ must be given by $n_1, n_2, \\dots, n_{M+1} \\leq M$, but then by the pigeon-hole principle, at least two of the $n_i$ are equal which is a contradiction, as $f$ is a bijection.</p>\n\n<p>$0 \\leq L &lt; 1$:</p>\n\n<p>Take $\\epsilon &gt; 0$ such that $L &lt; 1 - \\epsilon$. Proceed similarly to the $L &gt; 1$ case. $\\exists N \\in \\Bbb N$ such that $\\frac{f(n)}{n} &lt; 1 - \\epsilon \\quad \\forall n \\geq N \\iff f(n) &lt; (1-\\epsilon)n$. Choose $M &gt; N$ sufficiently large such that $\\epsilon M &gt; 1 \\iff (1-\\epsilon)M &lt; M - 1$. Then $\\forall n &gt; M, f(n) &lt; M - 1$, but this is clearly a contradiction since $f$ is a bijection.</p>\n\n<p>Hence $L = 1$.</p>\n" }, { "answer_id": 2082608, "score": 1, "text": "<p>Because this limit is finite, exist $M$ and $n_0$ such that $f(n)/n&lt;M$ for all $n&gt;n_0$ ie is bounded ae.</p>\n\n<p>Now, because $f(n)&lt;Mn $ for all $n&gt;n_0$ then $f=O(n)$ and by definition the limit request is 1</p>\n" } ]
Question: Limit problem involving bijective function Answer 1: <p>Let <span class="math-container">$f$</span> be permutation of <span class="math-container">$\mathbb{N^*}$</span>. Suppose the inequality <span class="math-container">$f(n) \leq n$</span> holds for only a finite set of integers <span class="math-container">$A$</span>. Let <span class="math-container">$m = \max A$</span>, then <span class="math-container">$f \left( [m + 1,\infty) \cap \mathbb{N} \right ) \subset [m + 2,\infty) \cap \mathbb{N}$</span>, thus <span class="math-container">$f^{-1} \big ( [\![ 1,m+1 ]\!] \big ) \subset [\![ 1,m ]\!]$</span>, which is not possible.</p> <p>Thus <span class="math-container">$f(n) \over {n}$</span> <span class="math-container">$ \leq 1$</span> holds infinitely many times.</p> <p>Applying the same reasoning to <span class="math-container">$f^{-1}$</span> we get : <span class="math-container">$\frac{f^{-1}(n)}{n} \leq 1 $</span> <em>i.e</em> <span class="math-container">$f(n) \over {n}$</span> <span class="math-container">$ \geq 1$</span> holds infinitely many times.</p> <p>Since the limit exists, <span class="math-container">$\lim \frac{f(n)}{n} = 1$</span> .</p> Answer 2: <p>Let $L = \lim_{n \to \infty} \frac{f(n)}{n}$. Note $L \geq 0$. Consider the cases:</p> <p>$L &gt; 1$:</p> <p>Let $\epsilon &gt; 0$ be such that $L &gt; 1 + \epsilon$. Then $\exists N \in \Bbb N$ such that $\frac{f(n)}{n} &gt; 1 + \epsilon \quad \forall n \geq N$ $\iff f(n) &gt; (1+\epsilon)n$.</p> <p>Take $M &gt; N$ sufficiently large such that $\epsilon M &gt; 1 \iff (1 + \epsilon)M &gt; M + 1$. Then $\forall n &gt; M, f(n) &gt; M + 1$, so all of $f^{-1}(1), f^{-1}(2), \dots f^{-1}(M + 1)$ must be given by $n_1, n_2, \dots, n_{M+1} \leq M$, but then by the pigeon-hole principle, at least two of the $n_i$ are equal which is a contradiction, as $f$ is a bijection.</p> <p>$0 \leq L &lt; 1$:</p> <p>Take $\epsilon &gt; 0$ such that $L &lt; 1 - \epsilon$. Proceed similarly to the $L &gt; 1$ case. 
$\exists N \in \Bbb N$ such that $\frac{f(n)}{n} &lt; 1 - \epsilon \quad \forall n \geq N \iff f(n) &lt; (1-\epsilon)n$. Choose $M &gt; N$ sufficiently large such that $\epsilon M &gt; 1 \iff (1-\epsilon)M &lt; M - 1$. Then $\forall n &gt; M, f(n) &lt; M - 1$, but this is clearly a contradiction since $f$ is a bijection.</p> <p>Hence $L = 1$.</p> Answer 3: <p>Because this limit is finite, exist $M$ and $n_0$ such that $f(n)/n&lt;M$ for all $n&gt;n_0$ ie is bounded ae.</p> <p>Now, because $f(n)&lt;Mn $ for all $n&gt;n_0$ then $f=O(n)$ and by definition the limit request is 1</p> Answer 4: <p>Let <span class="math-container">$f$</span> be permutation of <span class="math-container">$\mathbb{N^*}$</span>. Suppose the inequality <span class="math-container">$f(n) \leq n$</span> holds for only a finite set of integers <span class="math-container">$A$</span>. Let <span class="math-container">$m = \max A$</span>, then <span class="math-container">$f \left( [m + 1,\infty) \cap \mathbb{N} \right ) \subset [m + 2,\infty) \cap \mathbb{N}$</span>, thus <span class="math-container">$f^{-1} \big ( [\![ 1,m+1 ]\!] \big ) \subset [\![ 1,m ]\!]$</span>, which is not possible.</p> <p>Thus <span class="math-container">$f(n) \over {n}$</span> <span class="math-container">$ \leq 1$</span> holds infinitely many times.</p> <p>Applying the same reasoning to <span class="math-container">$f^{-1}$</span> we get : <span class="math-container">$\frac{f^{-1}(n)}{n} \leq 1 $</span> <em>i.e</em> <span class="math-container">$f(n) \over {n}$</span> <span class="math-container">$ \geq 1$</span> holds infinitely many times.</p> <p>Since the limit exists, <span class="math-container">$\lim \frac{f(n)}{n} = 1$</span> .</p> Answer 5: <p>Let $L = \lim_{n \to \infty} \frac{f(n)}{n}$. Note $L \geq 0$. Consider the cases:</p> <p>$L &gt; 1$:</p> <p>Let $\epsilon &gt; 0$ be such that $L &gt; 1 + \epsilon$. 
Then $\exists N \in \Bbb N$ such that $\frac{f(n)}{n} &gt; 1 + \epsilon \quad \forall n \geq N$ $\iff f(n) &gt; (1+\epsilon)n$.</p> <p>Take $M &gt; N$ sufficiently large such that $\epsilon M &gt; 1 \iff (1 + \epsilon)M &gt; M + 1$. Then $\forall n &gt; M, f(n) &gt; M + 1$, so all of $f^{-1}(1), f^{-1}(2), \dots f^{-1}(M + 1)$ must be given by $n_1, n_2, \dots, n_{M+1} \leq M$, but then by the pigeon-hole principle, at least two of the $n_i$ are equal which is a contradiction, as $f$ is a bijection.</p> <p>$0 \leq L &lt; 1$:</p> <p>Take $\epsilon &gt; 0$ such that $L &lt; 1 - \epsilon$. Proceed similarly to the $L &gt; 1$ case. $\exists N \in \Bbb N$ such that $\frac{f(n)}{n} &lt; 1 - \epsilon \quad \forall n \geq N \iff f(n) &lt; (1-\epsilon)n$. Choose $M &gt; N$ sufficiently large such that $\epsilon M &gt; 1 \iff (1-\epsilon)M &lt; M - 1$. Then $\forall n &gt; M, f(n) &lt; M - 1$, but this is clearly a contradiction since $f$ is a bijection.</p> <p>Hence $L = 1$.</p> Answer 6: <p>Because this limit is finite, exist $M$ and $n_0$ such that $f(n)/n&lt;M$ for all $n&gt;n_0$ ie is bounded ae.</p> <p>Now, because $f(n)&lt;Mn $ for all $n&gt;n_0$ then $f=O(n)$ and by definition the limit request is 1</p> Answer 7: <p>Let <span class="math-container">$f$</span> be permutation of <span class="math-container">$\mathbb{N^*}$</span>. Suppose the inequality <span class="math-container">$f(n) \leq n$</span> holds for only a finite set of integers <span class="math-container">$A$</span>. Let <span class="math-container">$m = \max A$</span>, then <span class="math-container">$f \left( [m + 1,\infty) \cap \mathbb{N} \right ) \subset [m + 2,\infty) \cap \mathbb{N}$</span>, thus <span class="math-container">$f^{-1} \big ( [\![ 1,m+1 ]\!] 
\big ) \subset [\![ 1,m ]\!]$</span>, which is not possible.</p> <p>Thus <span class="math-container">$f(n) \over {n}$</span> <span class="math-container">$ \leq 1$</span> holds infinitely many times.</p> <p>Applying the same reasoning to <span class="math-container">$f^{-1}$</span> we get : <span class="math-container">$\frac{f^{-1}(n)}{n} \leq 1 $</span> <em>i.e</em> <span class="math-container">$f(n) \over {n}$</span> <span class="math-container">$ \geq 1$</span> holds infinitely many times.</p> <p>Since the limit exists, <span class="math-container">$\lim \frac{f(n)}{n} = 1$</span> .</p> Answer 8: <p>Let $L = \lim_{n \to \infty} \frac{f(n)}{n}$. Note $L \geq 0$. Consider the cases:</p> <p>$L &gt; 1$:</p> <p>Let $\epsilon &gt; 0$ be such that $L &gt; 1 + \epsilon$. Then $\exists N \in \Bbb N$ such that $\frac{f(n)}{n} &gt; 1 + \epsilon \quad \forall n \geq N$ $\iff f(n) &gt; (1+\epsilon)n$.</p> <p>Take $M &gt; N$ sufficiently large such that $\epsilon M &gt; 1 \iff (1 + \epsilon)M &gt; M + 1$. Then $\forall n &gt; M, f(n) &gt; M + 1$, so all of $f^{-1}(1), f^{-1}(2), \dots f^{-1}(M + 1)$ must be given by $n_1, n_2, \dots, n_{M+1} \leq M$, but then by the pigeon-hole principle, at least two of the $n_i$ are equal which is a contradiction, as $f$ is a bijection.</p> <p>$0 \leq L &lt; 1$:</p> <p>Take $\epsilon &gt; 0$ such that $L &lt; 1 - \epsilon$. Proceed similarly to the $L &gt; 1$ case. $\exists N \in \Bbb N$ such that $\frac{f(n)}{n} &lt; 1 - \epsilon \quad \forall n \geq N \iff f(n) &lt; (1-\epsilon)n$. Choose $M &gt; N$ sufficiently large such that $\epsilon M &gt; 1 \iff (1-\epsilon)M &lt; M - 1$. 
Then $\forall n &gt; M, f(n) &lt; M - 1$, but this is clearly a contradiction since $f$ is a bijection.</p> <p>Hence $L = 1$.</p> Answer 9: <p>Because this limit is finite, exist $M$ and $n_0$ such that $f(n)/n&lt;M$ for all $n&gt;n_0$ ie is bounded ae.</p> <p>Now, because $f(n)&lt;Mn $ for all $n&gt;n_0$ then $f=O(n)$ and by definition the limit request is 1</p>
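As a side note, the crossing claim used in the answers above — that any permutation must satisfy $f(n) \le n$ at some index and $f(n) \ge n$ at another — can be sanity-checked on finite permutations. This sketch is not from the original answers; the function name and the choice of size are illustrative:

```python
import random

def crossing_counts(perm):
    """Given a permutation as a list (perm[n-1] = f(n), 1-based),
    count the indices with f(n) <= n and those with f(n) >= n."""
    below = sum(1 for n, v in enumerate(perm, start=1) if v <= n)
    above = sum(1 for n, v in enumerate(perm, start=1) if v >= n)
    return below, above

random.seed(42)
perm = list(range(1, 1001))
random.shuffle(perm)
below, above = crossing_counts(perm)
# The position of 1 gives f(n) = 1 <= n, and the position of 1000
# gives f(n) = 1000 >= n, so both counts are always at least 1.
assert below >= 1 and above >= 1
```

Every index falls in at least one of the two counts (with equality counted in both), so `below + above` is never smaller than the length of the permutation.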
Fibonacci Sequence explained to a noob
2,082,706
7
https://math.stackexchange.com/questions/2082706/fibonacci-sequence-explained-to-a-noob
<p>I've wanted to code something. I decided to take up simulating the Fibonacci sequence. Except: I can't understand what I thought was a simple process.</p> <p>I want to start at the very beginning which from what I see is $(0,1)$.</p> <p>Well.</p> <p>$0 + 1 = 1$. I get the sum of the first two numbers $(0 , 1)$ and then the answer (sum again?).</p> <p>$1 + 1 = 2$... </p> <p>Isn't the Fibonacci sequence supposed to be $1 , 1 , 2 , 3,$ etc?</p> <p>I can't get those first two results and I notice that a lot of members are using some very scary symbols like $F_{k + 2} = F_k + F_{k +1}$. I don't understand how to move forward and I'm embarrassed to ask this in real life.</p> <p>Can someone explain this without the symbols?</p>
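The recurrence the question is about — each term is the sum of the two before it — can be coded directly. A minimal Python sketch (the function name and the $(0, 1)$ starting pair are illustrative choices, not taken from the question):

```python
def fibonacci(count):
    """Return the first `count` Fibonacci numbers, starting from 0 and 1."""
    sequence = []
    a, b = 0, 1
    for _ in range(count):
        sequence.append(a)
        a, b = b, a + b  # the next term is the sum of the previous two
    return sequence

print(fibonacci(8))  # -> [0, 1, 1, 2, 3, 5, 8, 13]
```

Starting from 1 and 1 instead just drops the leading 0; both conventions are in common use.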
[ { "answer_id": 2082716, "score": 4, "text": "<p>Each member of the Fibonacci sequence is the sum of the previous two members. There are two standard ways of starting the sequence - you might start with $0$ and $1$, or with $1$ and $1$.</p>\n\n<p>Starting with $0$ and $1$, we have $0 + 1 = 1$; so the third member of our sequence is also $1$ and our sequence so far is $0,1,1$. $1+1 = 2$, so we now have $0,1,1,2$. $1+2=3$, so we have $0,1,1,2,3$. $2+3=5$, so we have $0,1,1,2,3,5$; and so on.</p>\n\n<p>If you start with $1$ and $1$, you're just starting one step later, so you get $1,1,2,3,5,\\ldots$, which is the version you've seen. Whether the Fibonacci sequence is \"supposed\" to start with $0$ or with $1$ is really just a matter of taste.</p>\n" }, { "answer_id": 2082712, "score": 1, "text": "<p>Unfortunately there are two common definitions of the Fibonacci sequence: $1,1,2,3,\\ldots$ and $0,1,1,2,3\\ldots,$ which is different only by the initial zero. The second is a bit more common. Note that they both share the property that a term is the sum of the two previous terms.</p>\n" }, { "answer_id": 2082721, "score": 1, "text": "<p>It starts $1, 1$, - or $0$ and $1$, then you add the previous two numbers for the next.\n$$ 1+1 = 2 ... 1,2 \n\\\\ 1+2 = 3, .. 2,3\n\\\\ 2+3 = 5, .. 3,5\n\\\\ 3+5 = 8, $$</p>\n\n<p>you'll start to get the series.</p>\n\n<p>$1,1,2,3,5,8, \\cdots$</p>\n\n<p>To code the series of numbers, you can use an array say <code>fib</code></p>\n\n<pre><code>fib.push(fib[fib.length-1]+fib[fib.length-2])\n</code></pre>\n\n<p>simple example of something in a loop.</p>\n" }, { "answer_id": 2082718, "score": 0, "text": "<p>The rule is this: to get the next number in the sequence, and the \"second to last\" and \"last\" numbers together.\n$$\n0+1=1\\\\\n1+1=2\\\\\n1+2=3\\\\\n2+3=5\n$$\nSo far, our sequence is $0,1,1,2,3,5$. To get the next number, add the second to last number (3) to the last number (5). 
$3+5=8$, so the next number is $8$.</p>\n" } ]
Question: Fibonacci Sequence explained to a noob

Answer 1: <p>Each member of the Fibonacci sequence is the sum of the previous two members. There are two standard ways of starting the sequence - you might start with $0$ and $1$, or with $1$ and $1$.</p>

<p>Starting with $0$ and $1$, we have $0 + 1 = 1$; so the third member of our sequence is also $1$ and our sequence so far is $0,1,1$. $1+1 = 2$, so we now have $0,1,1,2$. $1+2=3$, so we have $0,1,1,2,3$. $2+3=5$, so we have $0,1,1,2,3,5$; and so on.</p>

<p>If you start with $1$ and $1$, you're just starting one step later, so you get $1,1,2,3,5,\ldots$, which is the version you've seen. Whether the Fibonacci sequence is "supposed" to start with $0$ or with $1$ is really just a matter of taste.</p>

Answer 2: <p>Unfortunately there are two common definitions of the Fibonacci sequence: $1,1,2,3,\ldots$ and $0,1,1,2,3,\ldots,$ which differ only by the initial zero. The second is a bit more common. Note that they both share the property that each term is the sum of the two previous terms.</p>

Answer 3: <p>It starts $1, 1$ - or $0$ and $1$ - then you add the previous two numbers to get the next:
$$ 1+1 = 2, \quad 1+2 = 3, \quad 2+3 = 5, \quad 3+5 = 8, $$
and you'll start to get the series</p>

<p>$1,1,2,3,5,8, \cdots$</p>

<p>To code the series of numbers, you can use an array, say <code>fib</code>:</p>

<pre><code>fib.push(fib[fib.length-1]+fib[fib.length-2])
</code></pre>

<p>a simple example of the update step to run in a loop.</p>

Answer 4: <p>The rule is this: to get the next number in the sequence, add the "second to last" and "last" numbers together.
$$
0+1=1\\
1+1=2\\
1+2=3\\
2+3=5
$$
So far, our sequence is $0,1,1,2,3,5$. To get the next number, add the second to last number (3) to the last number (5). $3+5=8$, so the next number is $8$.</p>
There are two standard ways of starting the sequence - you might start with $0$ and $1$, or with $1$ and $1$.</p> <p>Starting with $0$ and $1$, we have $0 + 1 = 1$; so the third member of our sequence is also $1$ and our sequence so far is $0,1,1$. $1+1 = 2$, so we now have $0,1,1,2$. $1+2=3$, so we have $0,1,1,2,3$. $2+3=5$, so we have $0,1,1,2,3,5$; and so on.</p> <p>If you start with $1$ and $1$, you're just starting one step later, so you get $1,1,2,3,5,\ldots$, which is the version you've seen. Whether the Fibonacci sequence is "supposed" to start with $0$ or with $1$ is really just a matter of taste.</p> Answer 26: <p>Unfortunately there are two common definitions of the Fibonacci sequence: $1,1,2,3,\ldots$ and $0,1,1,2,3\ldots,$ which is different only by the initial zero. The second is a bit more common. Note that they both share the property that a term is the sum of the two previous terms.</p> Answer 27: <p>It starts $1, 1$, - or $0$ and $1$, then you add the previous two numbers for the next. $$ 1+1 = 2 ... 1,2 \\ 1+2 = 3, .. 2,3 \\ 2+3 = 5, .. 3,5 \\ 3+5 = 8, $$</p> <p>you'll start to get the series.</p> <p>$1,1,2,3,5,8, \cdots$</p> <p>To code the series of numbers, you can use an array say <code>fib</code></p> <pre><code>fib.push(fib[fib.length-1]+fib[fib.length-2]) </code></pre> <p>simple example of something in a loop.</p> Answer 28: <p>The rule is this: to get the next number in the sequence, and the "second to last" and "last" numbers together. $$ 0+1=1\\ 1+1=2\\ 1+2=3\\ 2+3=5 $$ So far, our sequence is $0,1,1,2,3,5$. To get the next number, add the second to last number (3) to the last number (5). $3+5=8$, so the next number is $8$.</p> Answer 29: <p>Each member of the Fibonacci sequence is the sum of the previous two members. 
There are two standard ways of starting the sequence - you might start with $0$ and $1$, or with $1$ and $1$.</p> <p>Starting with $0$ and $1$, we have $0 + 1 = 1$; so the third member of our sequence is also $1$ and our sequence so far is $0,1,1$. $1+1 = 2$, so we now have $0,1,1,2$. $1+2=3$, so we have $0,1,1,2,3$. $2+3=5$, so we have $0,1,1,2,3,5$; and so on.</p> <p>If you start with $1$ and $1$, you're just starting one step later, so you get $1,1,2,3,5,\ldots$, which is the version you've seen. Whether the Fibonacci sequence is "supposed" to start with $0$ or with $1$ is really just a matter of taste.</p> Answer 30: <p>Unfortunately there are two common definitions of the Fibonacci sequence: $1,1,2,3,\ldots$ and $0,1,1,2,3\ldots,$ which is different only by the initial zero. The second is a bit more common. Note that they both share the property that a term is the sum of the two previous terms.</p> Answer 31: <p>It starts $1, 1$, - or $0$ and $1$, then you add the previous two numbers for the next. $$ 1+1 = 2 ... 1,2 \\ 1+2 = 3, .. 2,3 \\ 2+3 = 5, .. 3,5 \\ 3+5 = 8, $$</p> <p>you'll start to get the series.</p> <p>$1,1,2,3,5,8, \cdots$</p> <p>To code the series of numbers, you can use an array say <code>fib</code></p> <pre><code>fib.push(fib[fib.length-1]+fib[fib.length-2]) </code></pre> <p>simple example of something in a loop.</p> Answer 32: <p>The rule is this: to get the next number in the sequence, and the "second to last" and "last" numbers together. $$ 0+1=1\\ 1+1=2\\ 1+2=3\\ 2+3=5 $$ So far, our sequence is $0,1,1,2,3,5$. To get the next number, add the second to last number (3) to the last number (5). $3+5=8$, so the next number is $8$.</p> Answer 33: <p>Each member of the Fibonacci sequence is the sum of the previous two members. 
There are two standard ways of starting the sequence - you might start with $0$ and $1$, or with $1$ and $1$.</p> <p>Starting with $0$ and $1$, we have $0 + 1 = 1$; so the third member of our sequence is also $1$ and our sequence so far is $0,1,1$. $1+1 = 2$, so we now have $0,1,1,2$. $1+2=3$, so we have $0,1,1,2,3$. $2+3=5$, so we have $0,1,1,2,3,5$; and so on.</p> <p>If you start with $1$ and $1$, you're just starting one step later, so you get $1,1,2,3,5,\ldots$, which is the version you've seen. Whether the Fibonacci sequence is "supposed" to start with $0$ or with $1$ is really just a matter of taste.</p> Answer 34: <p>Unfortunately there are two common definitions of the Fibonacci sequence: $1,1,2,3,\ldots$ and $0,1,1,2,3\ldots,$ which is different only by the initial zero. The second is a bit more common. Note that they both share the property that a term is the sum of the two previous terms.</p> Answer 35: <p>It starts $1, 1$, - or $0$ and $1$, then you add the previous two numbers for the next. $$ 1+1 = 2 ... 1,2 \\ 1+2 = 3, .. 2,3 \\ 2+3 = 5, .. 3,5 \\ 3+5 = 8, $$</p> <p>you'll start to get the series.</p> <p>$1,1,2,3,5,8, \cdots$</p> <p>To code the series of numbers, you can use an array say <code>fib</code></p> <pre><code>fib.push(fib[fib.length-1]+fib[fib.length-2]) </code></pre> <p>simple example of something in a loop.</p> Answer 36: <p>The rule is this: to get the next number in the sequence, and the "second to last" and "last" numbers together. $$ 0+1=1\\ 1+1=2\\ 1+2=3\\ 2+3=5 $$ So far, our sequence is $0,1,1,2,3,5$. To get the next number, add the second to last number (3) to the last number (5). $3+5=8$, so the next number is $8$.</p> Answer 37: <p>Each member of the Fibonacci sequence is the sum of the previous two members. 
There are two standard ways of starting the sequence - you might start with $0$ and $1$, or with $1$ and $1$.</p> <p>Starting with $0$ and $1$, we have $0 + 1 = 1$; so the third member of our sequence is also $1$ and our sequence so far is $0,1,1$. $1+1 = 2$, so we now have $0,1,1,2$. $1+2=3$, so we have $0,1,1,2,3$. $2+3=5$, so we have $0,1,1,2,3,5$; and so on.</p> <p>If you start with $1$ and $1$, you're just starting one step later, so you get $1,1,2,3,5,\ldots$, which is the version you've seen. Whether the Fibonacci sequence is "supposed" to start with $0$ or with $1$ is really just a matter of taste.</p> Answer 38: <p>Unfortunately there are two common definitions of the Fibonacci sequence: $1,1,2,3,\ldots$ and $0,1,1,2,3\ldots,$ which is different only by the initial zero. The second is a bit more common. Note that they both share the property that a term is the sum of the two previous terms.</p> Answer 39: <p>It starts $1, 1$, - or $0$ and $1$, then you add the previous two numbers for the next. $$ 1+1 = 2 ... 1,2 \\ 1+2 = 3, .. 2,3 \\ 2+3 = 5, .. 3,5 \\ 3+5 = 8, $$</p> <p>you'll start to get the series.</p> <p>$1,1,2,3,5,8, \cdots$</p> <p>To code the series of numbers, you can use an array say <code>fib</code></p> <pre><code>fib.push(fib[fib.length-1]+fib[fib.length-2]) </code></pre> <p>simple example of something in a loop.</p> Answer 40: <p>The rule is this: to get the next number in the sequence, and the "second to last" and "last" numbers together. $$ 0+1=1\\ 1+1=2\\ 1+2=3\\ 2+3=5 $$ So far, our sequence is $0,1,1,2,3,5$. To get the next number, add the second to last number (3) to the last number (5). $3+5=8$, so the next number is $8$.</p> Answer 41: <p>Each member of the Fibonacci sequence is the sum of the previous two members. 
There are two standard ways of starting the sequence - you might start with $0$ and $1$, or with $1$ and $1$.</p> <p>Starting with $0$ and $1$, we have $0 + 1 = 1$; so the third member of our sequence is also $1$ and our sequence so far is $0,1,1$. $1+1 = 2$, so we now have $0,1,1,2$. $1+2=3$, so we have $0,1,1,2,3$. $2+3=5$, so we have $0,1,1,2,3,5$; and so on.</p> <p>If you start with $1$ and $1$, you're just starting one step later, so you get $1,1,2,3,5,\ldots$, which is the version you've seen. Whether the Fibonacci sequence is "supposed" to start with $0$ or with $1$ is really just a matter of taste.</p> Answer 42: <p>Unfortunately there are two common definitions of the Fibonacci sequence: $1,1,2,3,\ldots$ and $0,1,1,2,3\ldots,$ which is different only by the initial zero. The second is a bit more common. Note that they both share the property that a term is the sum of the two previous terms.</p> Answer 43: <p>It starts $1, 1$, - or $0$ and $1$, then you add the previous two numbers for the next. $$ 1+1 = 2 ... 1,2 \\ 1+2 = 3, .. 2,3 \\ 2+3 = 5, .. 3,5 \\ 3+5 = 8, $$</p> <p>you'll start to get the series.</p> <p>$1,1,2,3,5,8, \cdots$</p> <p>To code the series of numbers, you can use an array say <code>fib</code></p> <pre><code>fib.push(fib[fib.length-1]+fib[fib.length-2]) </code></pre> <p>simple example of something in a loop.</p> Answer 44: <p>The rule is this: to get the next number in the sequence, and the "second to last" and "last" numbers together. $$ 0+1=1\\ 1+1=2\\ 1+2=3\\ 2+3=5 $$ So far, our sequence is $0,1,1,2,3,5$. To get the next number, add the second to last number (3) to the last number (5). $3+5=8$, so the next number is $8$.</p> Answer 45: <p>Each member of the Fibonacci sequence is the sum of the previous two members. 
There are two standard ways of starting the sequence - you might start with $0$ and $1$, or with $1$ and $1$.</p> <p>Starting with $0$ and $1$, we have $0 + 1 = 1$; so the third member of our sequence is also $1$ and our sequence so far is $0,1,1$. $1+1 = 2$, so we now have $0,1,1,2$. $1+2=3$, so we have $0,1,1,2,3$. $2+3=5$, so we have $0,1,1,2,3,5$; and so on.</p> <p>If you start with $1$ and $1$, you're just starting one step later, so you get $1,1,2,3,5,\ldots$, which is the version you've seen. Whether the Fibonacci sequence is "supposed" to start with $0$ or with $1$ is really just a matter of taste.</p> Answer 46: <p>Unfortunately there are two common definitions of the Fibonacci sequence: $1,1,2,3,\ldots$ and $0,1,1,2,3\ldots,$ which is different only by the initial zero. The second is a bit more common. Note that they both share the property that a term is the sum of the two previous terms.</p> Answer 47: <p>It starts $1, 1$, - or $0$ and $1$, then you add the previous two numbers for the next. $$ 1+1 = 2 ... 1,2 \\ 1+2 = 3, .. 2,3 \\ 2+3 = 5, .. 3,5 \\ 3+5 = 8, $$</p> <p>you'll start to get the series.</p> <p>$1,1,2,3,5,8, \cdots$</p> <p>To code the series of numbers, you can use an array say <code>fib</code></p> <pre><code>fib.push(fib[fib.length-1]+fib[fib.length-2]) </code></pre> <p>simple example of something in a loop.</p> Answer 48: <p>The rule is this: to get the next number in the sequence, and the "second to last" and "last" numbers together. $$ 0+1=1\\ 1+1=2\\ 1+2=3\\ 2+3=5 $$ So far, our sequence is $0,1,1,2,3,5$. To get the next number, add the second to last number (3) to the last number (5). $3+5=8$, so the next number is $8$.</p> Answer 49: <p>Each member of the Fibonacci sequence is the sum of the previous two members. 
There are two standard ways of starting the sequence - you might start with $0$ and $1$, or with $1$ and $1$.</p> <p>Starting with $0$ and $1$, we have $0 + 1 = 1$; so the third member of our sequence is also $1$ and our sequence so far is $0,1,1$. $1+1 = 2$, so we now have $0,1,1,2$. $1+2=3$, so we have $0,1,1,2,3$. $2+3=5$, so we have $0,1,1,2,3,5$; and so on.</p> <p>If you start with $1$ and $1$, you're just starting one step later, so you get $1,1,2,3,5,\ldots$, which is the version you've seen. Whether the Fibonacci sequence is "supposed" to start with $0$ or with $1$ is really just a matter of taste.</p> Answer 50: <p>Unfortunately there are two common definitions of the Fibonacci sequence: $1,1,2,3,\ldots$ and $0,1,1,2,3\ldots,$ which is different only by the initial zero. The second is a bit more common. Note that they both share the property that a term is the sum of the two previous terms.</p> Answer 51: <p>It starts $1, 1$, - or $0$ and $1$, then you add the previous two numbers for the next. $$ 1+1 = 2 ... 1,2 \\ 1+2 = 3, .. 2,3 \\ 2+3 = 5, .. 3,5 \\ 3+5 = 8, $$</p> <p>you'll start to get the series.</p> <p>$1,1,2,3,5,8, \cdots$</p> <p>To code the series of numbers, you can use an array say <code>fib</code></p> <pre><code>fib.push(fib[fib.length-1]+fib[fib.length-2]) </code></pre> <p>simple example of something in a loop.</p> Answer 52: <p>The rule is this: to get the next number in the sequence, and the "second to last" and "last" numbers together. $$ 0+1=1\\ 1+1=2\\ 1+2=3\\ 2+3=5 $$ So far, our sequence is $0,1,1,2,3,5$. To get the next number, add the second to last number (3) to the last number (5). $3+5=8$, so the next number is $8$.</p> Answer 53: <p>Each member of the Fibonacci sequence is the sum of the previous two members. 
There are two standard ways of starting the sequence - you might start with $0$ and $1$, or with $1$ and $1$.</p> <p>Starting with $0$ and $1$, we have $0 + 1 = 1$; so the third member of our sequence is also $1$ and our sequence so far is $0,1,1$. $1+1 = 2$, so we now have $0,1,1,2$. $1+2=3$, so we have $0,1,1,2,3$. $2+3=5$, so we have $0,1,1,2,3,5$; and so on.</p> <p>If you start with $1$ and $1$, you're just starting one step later, so you get $1,1,2,3,5,\ldots$, which is the version you've seen. Whether the Fibonacci sequence is "supposed" to start with $0$ or with $1$ is really just a matter of taste.</p> Answer 54: <p>Unfortunately there are two common definitions of the Fibonacci sequence: $1,1,2,3,\ldots$ and $0,1,1,2,3\ldots,$ which is different only by the initial zero. The second is a bit more common. Note that they both share the property that a term is the sum of the two previous terms.</p> Answer 55: <p>It starts $1, 1$, - or $0$ and $1$, then you add the previous two numbers for the next. $$ 1+1 = 2 ... 1,2 \\ 1+2 = 3, .. 2,3 \\ 2+3 = 5, .. 3,5 \\ 3+5 = 8, $$</p> <p>you'll start to get the series.</p> <p>$1,1,2,3,5,8, \cdots$</p> <p>To code the series of numbers, you can use an array say <code>fib</code></p> <pre><code>fib.push(fib[fib.length-1]+fib[fib.length-2]) </code></pre> <p>simple example of something in a loop.</p> Answer 56: <p>The rule is this: to get the next number in the sequence, and the "second to last" and "last" numbers together. $$ 0+1=1\\ 1+1=2\\ 1+2=3\\ 2+3=5 $$ So far, our sequence is $0,1,1,2,3,5$. To get the next number, add the second to last number (3) to the last number (5). $3+5=8$, so the next number is $8$.</p> Answer 57: <p>Each member of the Fibonacci sequence is the sum of the previous two members. 
There are two standard ways of starting the sequence - you might start with $0$ and $1$, or with $1$ and $1$.</p> <p>Starting with $0$ and $1$, we have $0 + 1 = 1$; so the third member of our sequence is also $1$ and our sequence so far is $0,1,1$. $1+1 = 2$, so we now have $0,1,1,2$. $1+2=3$, so we have $0,1,1,2,3$. $2+3=5$, so we have $0,1,1,2,3,5$; and so on.</p> <p>If you start with $1$ and $1$, you're just starting one step later, so you get $1,1,2,3,5,\ldots$, which is the version you've seen. Whether the Fibonacci sequence is "supposed" to start with $0$ or with $1$ is really just a matter of taste.</p> Answer 58: <p>Unfortunately there are two common definitions of the Fibonacci sequence: $1,1,2,3,\ldots$ and $0,1,1,2,3\ldots,$ which is different only by the initial zero. The second is a bit more common. Note that they both share the property that a term is the sum of the two previous terms.</p> Answer 59: <p>It starts $1, 1$, - or $0$ and $1$, then you add the previous two numbers for the next. $$ 1+1 = 2 ... 1,2 \\ 1+2 = 3, .. 2,3 \\ 2+3 = 5, .. 3,5 \\ 3+5 = 8, $$</p> <p>you'll start to get the series.</p> <p>$1,1,2,3,5,8, \cdots$</p> <p>To code the series of numbers, you can use an array say <code>fib</code></p> <pre><code>fib.push(fib[fib.length-1]+fib[fib.length-2]) </code></pre> <p>simple example of something in a loop.</p> Answer 60: <p>The rule is this: to get the next number in the sequence, and the "second to last" and "last" numbers together. $$ 0+1=1\\ 1+1=2\\ 1+2=3\\ 2+3=5 $$ So far, our sequence is $0,1,1,2,3,5$. To get the next number, add the second to last number (3) to the last number (5). $3+5=8$, so the next number is $8$.</p>
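The rule these answers describe - each term is the sum of the two before it, with either starting pair - can be sketched in a few lines of Python. This snippet is an editorial illustration, not part of the original answers; the function name and the `start` parameter are my own choices:

```python
def fibonacci(n, start=(0, 1)):
    """Return the first n Fibonacci numbers from a chosen starting pair."""
    a, b = start
    seq = []
    for _ in range(n):
        seq.append(a)
        a, b = b, a + b  # each new term is the sum of the previous two
    return seq
```

With the default pair, `fibonacci(6)` gives `[0, 1, 1, 2, 3, 5]`; with `start=(1, 1)` it gives `[1, 1, 2, 3, 5, 8]` - the two conventions discussed above.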
Probability event with $70$% success rate occurs three consecutive times for sample size $n$
2,082,969
7
https://math.stackexchange.com/questions/2082969/probability-event-with-70-success-rate-occurs-three-consecutive-times-for-sam
<p>It has been a long time since I've done probability so I am not sure which to do (if either are correct). Thank you for taking the time to look at my work.</p> <p>Probability an event occurs is $70$%. I'm looking for the probability our event occurs three times in a row for sample size $n$.</p> <p>$(.7)^3=.343$ is the probability to occur three consecutive times</p> <p>$1-.343=.657$ would be the chance to fail.</p> <p>First idea:</p> <p>For $n=3$ our success rate is $.343$</p> <p>$n=4$ we have two opportunities for success, thus $1-(.657)^2=.568351$</p> <p>$n=5$, three opportunities for success, thus $1-(.657)^3=.71640$...</p> <p>Generalization: Probability for success: $$1 - (.657)^{n-2}$$</p> <p>Second idea:</p> <p>Probability when $n=3$ would be $(.7)^3$</p> <p>At $n=4$ we'd have $(.7)^3+(.3)(.7)^3$</p> <p>For $n=5$ we'd have $(.7)^3+(.3)(.7)^3+(.3)^2(.7)^3+(.3)(.7)^4$</p> <p>I'm leaning towards the second idea...but I'm failing to see a generalization for it.</p> <p>Please excuse my LaTeX it has been a long time since I've asked/answered any questions. Thank you.</p>
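To check candidate formulas like the two ideas above against exact values, one can compute the probability directly with a small dynamic program. The sketch below is an editorial illustration (not part of the thread); it assumes independent trials with success probability `p` and tracks the length of the current run of successes:

```python
def prob_three_in_a_row(n, p=0.7):
    """Exact probability of at least 3 consecutive successes in n trials."""
    # state[s]: probability that no run of 3 has occurred yet and the
    # current trailing run of successes has length s (s = 0, 1, 2)
    state = [1.0, 0.0, 0.0]
    done = 0.0  # probability that a run of 3 has already been completed
    for _ in range(n):
        s0, s1, s2 = state
        done += s2 * p                      # a third consecutive success ends the wait
        state = [(s0 + s1 + s2) * (1 - p),  # a failure resets the trailing run
                 s0 * p,                    # run grows from 0 to 1
                 s1 * p]                    # run grows from 1 to 2
    return done
```

For $n=3,4,5$ this returns approximately $0.343$, $0.4459$ and $0.5488$, which matches the term-by-term expansion in the second idea.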
[ { "answer_id": 2087355, "score": 4, "text": "<blockquote>\n <p>Let's denote the event under consideration with $a$\n \\begin{align*}\nP(X=a)=0.7\n\\end{align*}\n and we denote the complementary event with $b$.</p>\n</blockquote>\n\n<p>We are looking for the words of length $n$ and their probability of occurrence which do not contain three or more consecutive $a$'s. The result is $1$ minus this probability.</p>\n\n<blockquote>\n <p>We can describe the set of these invalid words as the words built from the alphabet $V=\\{a,b\\}$ which contain at most one or two consecutive $a$'s.</p>\n \n <p>These are words</p>\n \n <ul>\n <li><p>starting with zero or more $b$'s:$$b^*$$</p></li>\n <li><p>followed by zero or more occurrences of $a$ or $aa$ each followed by one or more $b$'s: $$(ab^+|aab^+)^*$$</p></li>\n <li><p>and terminating with zero, one or two $a$'s\n $$(\\varepsilon|a|aa)$$</p></li>\n </ul>\n \n <p>We obtain</p>\n \n <p>\\begin{align*}\nb^*(ab^+|aab^+)^*(\\varepsilon|a|aa)\\tag{1}\n\\end{align*}</p>\n</blockquote>\n\n<p>The regular expression (1) generates all invalid words in a <em>unique</em> manner. 
In such cases we can use it to derive a generating function $$\\sum_{n=0}^\\infty a_n z^n$$ with\n$a_n$ giving the number of invalid words of length $n$.</p>\n\n<p>In order to do so all we need to know is the <em><a href=\"https://en.wikipedia.org/wiki/Geometric_series#Geometric_power_series\" rel=\"nofollow noreferrer\">geometric series expansion</a></em> since the star operator\n\\begin{align*}\na^*=\\left(\\varepsilon|a|a^2|a^3|\\cdots\\right)\\qquad\\text{ translates to }\\qquad 1+z+z^2+z^3+\\cdots=\\frac{1}{1-z}\n\\end{align*}</p>\n\n<p>Accordingly $a^+=aa^*$ translates to $\\frac{z}{1-z}$ and alternatives like $(\\varepsilon|a|aa)$ can be written as $1+z+z^2$.</p>\n\n<blockquote>\n <p>We translate the regular expression (1) into a generating function (somewhat mixing up the symbolic notation to provide some intermediate steps).</p>\n \n <p>Since we want to calculate the probabilities of occurrence of $P(X=a)$ we keep track of $a$ and $b$ by representing them as corresponding factors in the generating function.</p>\n \n <p>\\begin{align*}\nb^*\\left(ab^+|aab^+\\right)^*(\\varepsilon|a|aa)\n&amp;\\longrightarrow \\quad \\frac{1}{1-bz}\\left(\\left.\\frac{abz^2}{1-bz}\\right|\\frac{a^2bz^3}{1-bz}\\right)^*\\left(1+az+a^2z^2\\right)\\\\\n&amp;\\longrightarrow \\quad \\frac{1}{1-bz}\\left(\\frac{abz^2+a^2bz^3}{1-bz}\\right)^*(1+az+a^2z^2)\\\\\n&amp;\\longrightarrow \\quad \\frac{1}{1-bz}\\cdot\\frac{1}{1-\\frac{abz^2+a^2bz^3}{1-bz}}(1+az+a^2z^2)\\\\\n&amp;\\quad\\quad=\\frac{1+az+a^2z^2}{1-bz-abz^2-a^2bz^3}\\tag{2}\n\\end{align*}</p>\n</blockquote>\n\n<p>$$ $$</p>\n\n<blockquote>\n <p>We conclude: The number of invalid words is given by (2). So, the number of valid words is the number of all words minus the number of invalid words. 
We obtain the generating function $A(z)$\n \\begin{align*}\nA(z)&amp;=\\sum_{n=0}^\\infty(a+b)^nz^n-\\frac{1+az+a^2z^2}{1-bz-abz^2-a^2bz^3}\\\\\n&amp;=\\frac{1}{1-(a+b)z}-\\frac{1+az+a^2z^2}{1-bz-abz^2-a^2bz^3}\\\\\n&amp;=\\frac{a^3z^3}{(1-(a+b)z)(1-bz-abz^2-a^2bz^3)}\\\\\n&amp;=a^3z^3+a^3(a+2b)z^4+a^3(\\color{blue}{1}a^2+\\color{blue}{4}ab+\\color{blue}{3}b^2)z^5\\\\\n&amp;\\qquad +a^3(a+b)^2(a+4b)z^6+a^3(a^4+7a^3b+18a^2b^2+16ab^3+5b^4)z^7+\\cdots\n\\end{align*}</p>\n \n <p>The expansion was done with the help of Wolfram Alpha. We see that e.g. the number of valid words of length $5$ is $\\color{blue}{1}+\\color{blue}{4}+\\color{blue}{3}=8$.</p>\n</blockquote>\n\n<p>$$ $$</p>\n\n<blockquote>\n <p>Out of $2^5=32$ binary words of length $5$ there are $8$ valid words which are marked $\\color{blue}{\\text{blue}}$ in the table below.</p>\n \n <p>We obtain\n \\begin{array}{cccc}\n\\color{blue}{aaa}aa\\qquad&amp;ab\\color{blue}{aaa}\\qquad&amp;b\\color{blue}{aaa}a\\qquad&amp;bb\\color{blue}{aaa}\\\\\n\\color{blue}{aaa}ab\\qquad&amp;abaab\\qquad&amp;b\\color{blue}{aaa}b\\qquad&amp;bbaab\\\\\n\\color{blue}{aaa}ba\\qquad&amp;ababa\\qquad&amp;baaba\\qquad&amp;bbaba\\\\\n\\color{blue}{aaa}bb\\qquad&amp;ababb\\qquad&amp;baabb\\qquad&amp;bbabb\\\\\naabaa\\qquad&amp;abbaa\\qquad&amp;babaa\\qquad&amp;bbbaa\\\\\naabab\\qquad&amp;abbab\\qquad&amp;babab\\qquad&amp;bbbab\\\\\naabba\\qquad&amp;abbba\\qquad&amp;babba\\qquad&amp;bbbba\\\\\naabbb\\qquad&amp;abbbb\\qquad&amp;babbb\\qquad&amp;bbbbb\\\\\n\\end{array}</p>\n</blockquote>\n\n<p>We denote with $[z^n]$ the coefficient of $z^n$ in a series.</p>\n\n<blockquote>\n <p>We conclude</p>\n \n <ul>\n <li>The number of occurrences of wanted words of size $n$ is the coefficient of $z^n$ of $A(z)$ evaluated at $a=b=1$ and given as OEIS sequence <em><a href=\"http://oeis.org/A050231\" rel=\"nofollow noreferrer\">050231</a></em>.</li>\n </ul>\n \n <p>\\begin{align*}\n\\left.[z^n]A(z)\\right|_{a=b=1}\n\\end{align*}</p>\n \n <ul>\n <li><p>The 
wanted probability of valid words of size $n$ is the coefficient of $z^n$ of $A(z)$ evaluated at $a=0.7,b=0.3$.\n \\begin{align*}\n\\left.[z^n]A(z)\\right|_{a=0.7,b=0.3}\n\\end{align*}</p></li>\n <li><p>We obtain for $n=0$ up to $n=7$\n \\begin{array}{c|cl}\nn&amp;\\left.[z^n]A(z)\\right|_{a=b=1}&amp;\\left.[z^n]A(z)\\right|_{a=0.7,b=0.3}\\\\\n\\hline\n0&amp;0&amp;0\\\\\n1&amp;0&amp;0\\\\\n2&amp;0&amp;0\\\\\n3&amp;1&amp;0.343\\\\\n4&amp;3&amp;0.4459\\\\\n5&amp;8&amp;0.5488\\\\\n6&amp;20&amp;0.6517\\\\\n7&amp;47&amp;0.7193\\\\\n\\end{array}</p></li>\n </ul>\n</blockquote>\n" }, { "answer_id": 2094765, "score": 2, "text": "<p>I would propose an alternative - maybe easier - approach, trying to give a formula that provides a generalized answer for the probability of three events in a row, given the probability $p $ that the event occurs, as a function of the sample size $n$. As shown below, this approach may also be useful to address an additional issue of this problem highlighted in one of the comments, i.e. the sample size needed to \"guarantee\" a success with a given confidence level. </p>\n\n<p>Let us call $P (n) $ the probability that our event occurs three times in a row within a sequence of $n $ trials. Also, for each trial, let us denote as Y the case that our event occurs, and as N the case that it does not occur. So, in the specific case described in the OP, we are looking for the probability of getting a triple YYY in a sequence of $n $ elements when $p=0.7$.</p>\n\n<p>As a general rule, we can note that, passing from a sample size $n $ to a sample size $n+1$, the probability that our event occurs three times in a row increases only because there is the possibility that the triple YYY, not obtained in the first $n $ trials, occurs as a final triple in the sequence of $n+1$ trials. So, to calculate $P (n+1 )$ from $ P (n) $, we simply have to add the probability that the triple in a row is not obtained until a final sequence NYYY. 
Note that the N preceding the triple YYY is strictly necessary to satisfy the condition that the triple has not been obtained in the first $n $ trials. </p>\n\n<p>We can then start with $n=3$, for which we trivially have $P (3)=p^3$. For $n=4$, we have to add to $P (3) $ the probability to get a sequence NYYY, i.e. we have to add $p^3(1-p)$. Similarly, for $n=5$, we have to add to $P (4) $ the probability to get a final sequence NYYY (note that this is not affected by what happens in the first of the five trials), which means that we must add $p^3(1-p)$ as in the previous case. Lastly, for $n=6$, we have again to add to $P (5) $ the probability to get a final sequence NYYY (note that, as above, this is not affected by what happens in the first and in the second of the six trials). This leads to a further increase by $p^3(1-p)$. So, for this initial group of four values in our probability progression, we directly obtain the recurrence</p>\n\n<p>$$P (n)=P (n-1)+ p^3 (1-p)\\,\\,\\,\\, \\text {(for n=4 to 6)}$$</p>\n\n<p>Accordingly, for the specific case $p=0.7$, this formula gives the initial values </p>\n\n<p>\\begin{aligned} P(3) &amp; =0.7^3=0.343 \\\\\nP (4) &amp; = 0.7^3+0.3 \\cdot 0.7^3 = 0.4459 \\\\\nP (5) &amp; = 0.7^3+2\\cdot 0.3 \\cdot 0.7^3 = 0.5488 \\\\\nP (6) &amp; = 0.7^3+3\\cdot 0.3 \\cdot 0.7^3 = 0.6517 \\\\\n\\end{aligned} </p>\n\n<p>already obtained in a previous answer via a different method.</p>\n\n<p>Now let us consider the values of $n \\geq 7$. Here something changes, because the fact that the sequence ends with NYYY does not imply that the triple YYY has not occurred before, i.e. in the first $n-4$ trials. So, at each step, the probability of getting a final NYYY must be multiplied by the probability of failure in the first $n-4$ trials, which is $1-P (n-4 ) $. 
Therefore, for $n=7$ we have to add to $P (6) $ the probability given by $p^3(1-p) [1-P(3)]\\,\\,$, for $n=8$ we have to add to $P (7) $ the probability given by $p^3(1-p) [1-P(4)] \\,\\, $, and so on. This leads to the recurrence</p>\n\n<p>$$P (n)=P (n-1)+ p^3(1-p)[1-P(n-4)]\\,\\,\\,\\, \\text{(for n}\\, \\geq \\text {7)}$$</p>\n\n<p>We can now try to get a closed form for this last recurrence. Let us set, for simplicity, $p^3=j$ and $(1-p) =k \\,\\, $. The recurrence becomes </p>\n\n<p>$$P (n)=P (n-1)+ jk-jk \\,P(n-4)\\, \\,$$</p>\n\n<p>Note that the four values of $P (n)$ already determined can be written as $$P (3)=j $$\n$$P (4)=j +jk$$\n$$P (5)=j +2jk$$\n$$P (6)=j +3jk$$</p>\n\n<p>so that they contain only terms of degree $j $ or $jk $. Continuing our probability progression and focusing on the second group of four values of $n$ (from $n=7\\,\\, $ to $n=10 \\,\\, $), new terms of degree $j^2k $ and $j^2k^2$ appear applying the recurrence above (for the sake of readability, here I directly provide the resulting expressions avoiding the rather tedious step-by-step calculations):</p>\n\n<p>\\begin{aligned} P (7) &amp; = P(6)+ jk-jk P (3)=j+4jk-j^2k \\\\\nP (8) &amp; = P(7)+ jk-jk P (4)= j+ 5jk -2j^2k-j^2k^2 \\\\\nP (9) &amp; = P(8) + jk-jk P (5)= j+ 6jk -3j^2k-3j^2k^2 \\\\\nP (10) &amp; = P (9) + jk-jk P (6)= j+ 7jk -4j^2k-6j^2k^2 \\\\\n\\end{aligned}</p>\n\n<p>Some pattern begins to emerge in these expansions. For the specific case $p=0.7$,\n$$P (7)\\approx 0.7193$$\n$$P (8)\\approx 0.7763$$\n$$P (9) \\approx 0.8228$$\n$$P (10) \\approx 0.8586$$</p>\n\n<p>We can repeat the procedure, applying the same recurrence and continuing with the third group of four values of $n $ (i.e., $n=11$ to $14$). 
Again, new terms appear, in this case of degree $j^3k^2$ and $j^3 k^3$ (as above, I directly provide the resulting expressions):</p>\n\n<p>\\begin{align*} P(11) &amp;=j+8jk-5j^2k-10j^2k^2+j^3k^2 \\\\ P(12) &amp;=j+9jk-6j^2k-15j^2k^2+3j^3k^2+j^3k^3 \\\\P(13) &amp; = j+10jk-7j^2k -21j^2k^2+6j^3k^2+4j^3k^3 \\\\\nP (14) &amp; = j+11jk-8j^2k-28j^2k^2+10j^3k^2+10j^3k^3\\\\\n\\end{align*}</p>\n\n<p>The final pattern is therefore as follows. Our expansions of $P (n) $ include:</p>\n\n<ul>\n<li><p>in the odd positions, $\\lfloor (n+1)/4 \\rfloor \\,\\, $ terms of the form $$(-1)^{i-1} \\binom{n-3i}{i-1} j^i k^{i-1} \\,\\, $$ where $$i=1,2... \\lfloor (n+1)/4 \\rfloor \\,\\, $$ For example, in the expansion of $P (14)$ there are $ \\lfloor (14+1)/4 \\rfloor =3 \\, \\,\\, $ terms of this form, which are $j $, $-8j^2k $, and $10 j^3k^2$;</p></li>\n<li><p>in the even positions, $\\lfloor n/4 \\rfloor \\,\\, $ terms of the form $$(-1)^{i-1} \\binom{n-3i}{i} j^i k^i \\,\\, $$ where $$i=1,2... \\lfloor n/4 \\rfloor$$</p>\n\n<p>Accordingly, taking again for example the expansion of $P (14)$, there are $ \\lfloor 14/4 \\rfloor =3 \\,$ terms of this form, which are $11jk$, $-28j^2k^2$, and $10 j^3k^3$.</p></li>\n</ul>\n\n<p>Therefore, we obtain the following general expression for $P (n) $:</p>\n\n<p>$$P (n)=\\sum_{i=1}^{\\lfloor (n+1)/4 \\rfloor } (-1)^{i-1} \\binom{n-3i}{i-1} j^{i}k^{i-1} +\\sum_{i=1}^{\\lfloor n/4 \\rfloor } (-1)^{i-1} \\binom{n-3i}{i} j^i k^i $$</p>\n\n<p>Recalling that $j=p^3$ and $k=(1-p) $, we finally get</p>\n\n<p>$$P (n)=\\sum_{i=1}^{\\lfloor (n+1)/4 \\rfloor } (-1)^{i-1} \\binom{n-3i}{i-1} p^{3i}(1-p)^{i-1} + \\sum_{i=1}^{\\lfloor n/4 \\rfloor } (-1)^{i-1} \\binom{n-3i}{i} p^{3i} (1-p)^i $$</p>\n\n<p>Setting $p=0.7$, we obtain the probability asked for in the OP, which is</p>\n\n<p>$$P (n)=\\sum_{i=1}^{\\lfloor (n+1)/4 \\rfloor } (-1)^{i-1} \\binom{n-3i}{i-1} 0.7^{3i} \\cdot 0.3^{i-1} + \\sum_{i=1}^{\\lfloor n/4 \\rfloor } (-1)^{i-1} \\binom{n-3i}{i} 0.7^{3i}\\cdot 
0.3^i $$</p>\n\n<p>Also note that the two sums have the same number of terms when $n \\not\\equiv -1 \\,\\, \\text {mod}\\,\\, 4$, whereas the first sum has one term more than the second one when $n \\equiv -1 \\,\\,\\text {mod} \\, 4$, because in this case $(n+1)/4$ is an integer. This term is given by</p>\n\n<p>$$\\displaystyle (-1)^{\\frac{(n+1)}{4}-1} \\binom{n-\\frac{3(n+1)}{4}}{ \\frac{(n+1)}{4}-1}\\, 0.7^{\\frac{3(n+1)}{4}} \\, 0.3^{\\frac{(n+1)}{4}-1} $$ \n$$\\displaystyle = (-1)^{\\frac{(n-3)}{4}} \\binom{\\frac{(n-3)}{4}}{\\frac{(n-3)}{4}}\\, 0.7^{\\frac{3(n+1)}{4}} \\, 0.3^{\\frac{(n-3)}{4}} $$\n$$\\displaystyle = (-1)^{\\frac{(n-3)}{4}} \\, 0.7^{\\frac{3(n+1)}{4}} \\, 0.3^{\\frac{(n-3)}{4}} $$</p>\n\n<p>So the probability asked in the OP can be alternatively expressed as</p>\n\n<p>$$P (n)=\\sum_{i=1}^{\\lfloor n/4 \\rfloor } (-1)^{i-1} \\binom{n-3i}{i}\\, 0.7^{3i}\\, 0.3^{i} \\left( \\frac {i}{0.3(n-4 i+1)} +1 \\right) + S$$</p>\n\n<p>where</p>\n\n<p>$$S = \\begin{cases} 0 &amp; \\mbox{if } n \\not\\equiv -1 \\, \\text {mod}\\, 4\\\\\n (-1)^{\\frac{(n-3)}{4}} \\, 0.7^{\\frac{3(n+1)}{4}} \\, 0.3^{\\frac{(n-3)}{4}} &amp; \\mbox{if } n \\equiv -1 \\, \\text {mod}\\, 4 \\end{cases}$$</p>\n\n<p>It could also be observed that $|S|$ is relatively small and rapidly decreases for increasing $n$: its value is $\\approx 0.0353\\, \\,\\, $ for $n=7$, becoming $\\approx 0.00363\\, \\,\\, $ for $n=11$ and $\\approx 0.000374\\,\\,\\, $ for $n=15$, so that its contribution to the overall probability is small. The last expansion without the $S$ term can therefore be considered a valid approximation even for the case $n \\equiv -1 \\, \\text {mod}\\, 4$. </p>\n\n<p>The probabilities of three events in a row given by these formulas for $p=0.7$ and $n=3$ to $10$ have already been provided above. As expected, for increasing $n $, the value of $P (n)$ approaches $1$. 
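As a numerical cross-check (a sketch of mine, not part of the original answer; function names are hypothetical), the recurrence that generated these expansions and the closed-form sums can be compared in Python:

```python
from math import comb

def p_rec(n, p=0.7):
    # Recurrence matching the worked values: P(3) = p^3 and, for m >= 4,
    # P(m) = P(m-1) + p^3 (1-p) (1 - P(m-4)), with P(k) = 0 for k < 3.
    P = {0: 0.0, 1: 0.0, 2: 0.0, 3: p**3}
    for m in range(4, n + 1):
        P[m] = P[m - 1] + p**3 * (1 - p) * (1 - P.get(m - 4, 0.0))
    return P[n]

def p_closed(n, p=0.7):
    # The two alternating sums derived above, with j = p^3 and k = 1 - p.
    j, k = p**3, 1 - p
    s1 = sum((-1)**(i - 1) * comb(n - 3*i, i - 1) * j**i * k**(i - 1)
             for i in range(1, (n + 1)//4 + 1))
    s2 = sum((-1)**(i - 1) * comb(n - 3*i, i) * j**i * k**i
             for i in range(1, n//4 + 1))
    return s1 + s2

for n in range(3, 25):
    assert abs(p_rec(n) - p_closed(n)) < 1e-12
```

Both routes agree to machine precision; for instance they give $P(15) \approx 0.9549$ and $P(18) \approx 0.9772$.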
The first value for which the probability achieves $95\\%$ is $n=15$, since $P (15) \\approx 0.9549\\,\\,\\, $. The probability achieves $97.5\\%$ and $99\\%$ for $n=18$ and $n=22$, respectively, since $P(18) \\approx 0.9772\\,\\,\\, $ and $P(22) \\approx 0.9909\\,\\,\\, $, as confirmed by WA <a href=\"http://www.wolframalpha.com/input/?i=sum%20%20%5B(-1)%5E(i-1)%20(((18-3i)%20choose%20i)%200.7%5E(3i)%200.3%5Ei%20(i%2F(0.3(18-4%20i%2B1))%2B1)%20%20%20),%7Bi,%2018%2F4%7D%5D\" rel=\"nofollow noreferrer\">here</a> and <a href=\"http://www.wolframalpha.com/input/?i=sum%20%20%5B(-1)%5E(i-1)%20(((22-3i)%20choose%20i)%200.7%5E(3i)%200.3%5Ei%20(i%2F(0.3(22-4%20i%2B1))%2B1)%20%20%20),%7Bi,%2022%2F4%7D%5D\" rel=\"nofollow noreferrer\">here</a>. </p>\n" }, { "answer_id": 2083011, "score": 1, "text": "<p>The issue with looking at opportunities for success is that they are not completely independent. For example, in a row of size 4, if the first three aren't all successes, this greatly decreases the chance that the last three aren't all successes (the only way this can happen is if only the first trial was a failure).</p>\n\n<p>You can still get this approach to work (via correcting for events which are dependent via some sort of inclusion-exclusion), but generally a cleaner method for these types of problems is to write down a recurrence (or more generally, model them as a <a href=\"https://en.wikipedia.org/wiki/Markov_chain\" rel=\"nofollow noreferrer\">finite state Markov chain</a>).</p>\n\n<p>Let's let $P(N)$ be the probability a row of length $N$ has no 3 consecutive successes. To help us write down a recurrence, we'll break $P(N)$ down into 3 smaller probabilities. 
Let's let $P(N, 0)$ be the probability a row of length $N$ has no 3 consecutive successes AND that the last attempt was a failure. Similarly, let $P(N,1)$ be the probability that a row of length $N$ has no 3 consecutive successes AND that the last attempt was a success, but the second-last attempt was a failure, and finally, let $P(N,2)$ be the probability that a row of length $N$ has no 3 consecutive successes AND the last two attempts were successes, but the third-to-last attempt was a failure. Intuitively, these different cases measure how \"close\" we are to getting 3 successes in a row. It's not too hard to see that $P(N) = P(N,0) + P(N,1) + P(N,2)$, since if the last three attempts are all successes, you've already won.</p>\n\n<p>Now, let's try to write $P(N+1, 0)$ in terms of smaller cases. The probability you have no 3 consecutive successes in a row of length $N+1$ and that your last attempt was a failure is simply the probability that there are no 3 consecutive successes in the first $N$ tries and that you fail the $N+1$st try. This gives us the equation:</p>\n\n<p>$$P(N+1,0) = 0.3P(N) = 0.3P(N,0) + 0.3P(N,1) + 0.3P(N,2)$$</p>\n\n<p>Via similar logic, we can show that $P(N+1,1)$ and $P(N+1,2)$ satisfy the following equations:</p>\n\n<p>$$P(N+1,1) = 0.7P(N, 0)$$</p>\n\n<p>and</p>\n\n<p>$$P(N+1,2) = 0.7P(N, 1)$$</p>\n\n<p>Substituting these into the original equation for $P(N+1,0)$, we get:</p>\n\n<p>$$P(N+1, 0) = 0.3P(N,0) + 0.21P(N-1, 0) + 0.147P(N-2,0)$$</p>\n\n<p>Now, this is just a standard <a href=\"https://en.wikipedia.org/wiki/Recurrence_relation\" rel=\"nofollow noreferrer\">linear recurrence relation</a>. 
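Before turning to the characteristic roots, the three-state recurrence can also simply be iterated; the following Python sketch (the function name and state layout are my own, not from the answer) tracks $P(N,0)$, $P(N,1)$, $P(N,2)$ directly:

```python
def no_triple(n, p=0.7):
    # s[r]: probability of no run of 3 successes so far, with the current
    # trailing run of successes having length r (r = 0, 1, 2).
    s = [1.0, 0.0, 0.0]
    for _ in range(n):
        s = [(1 - p) * sum(s),  # a failure resets the run from any state
             p * s[0],          # a success right after a failure
             p * s[1]]          # a second consecutive success
    return sum(s)               # P(N) = P(N,0) + P(N,1) + P(N,2)
```

Then `1 - no_triple(n)` is the probability that three consecutive successes do occur; for `n = 3` it returns approximately 0.657, i.e. a success probability of 0.343.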
If $\\lambda_1$, $\\lambda_2$, and $\\lambda_3$ are the roots of $z^3 - 0.3z^2-0.21z-0.147=0$, then there are some constants $a_1, a_2, a_3$ which you can solve for so that</p>\n\n<p>$$P(N, 0) = a_1\\lambda_1^N + a_2\\lambda_2^N + a_3\\lambda_3^N$$</p>\n\n<p>Since $P(N) = P(N+1,0)/0.3$, this gives a similar formula for $P(N)$ (and ultimately for $1-P(N)$, the probability a success does occur 3 consecutive times). </p>\n" }, { "answer_id": 2095652, "score": 1, "text": "<p>Let $p=0.7$ be the probability of success, $\\mathrm{f}(n)$ be the probability of a sequence of length $n$ having 3 consecutive successes and $\\mathrm{g}(n)=1-\\mathrm{f}(n)$, where $\\mathrm{g}(n)$ corresponds to the probability of a sequence of length $n$ having at most 2 consecutive successes.</p>\n\n<p>It holds $\\mathrm{g}(0) = \\mathrm{g}(1) = \\mathrm{g}(2) = 1$.</p>\n\n<p>Then $\\mathrm{g}(n) = (1-p)\\mathrm{g}(n-1) + (1-p)p\\mathrm{g}(n-2) + (1-p)p^2\\mathrm{g}(n-3)$.</p>\n\n<p>This holds because all sequences of length $n$ that have at most 2 consecutive successes can be constructed from (1) all such sequences of length $n-1$ suffixed by a failure plus; (2) all such sequences of length $n-2$ suffixed by a failure and a success, plus; (3) all such sequences of length $n-3$ suffixed by a failure and two successes.</p>\n" }, { "answer_id": 3809188, "score": 0, "text": "<p>I saw that the answers to the question include either regex (Automata Theory) or linear recurrences (Linear Algebra), so for those interested I decided to post <a href=\"https://people.math.carleton.ca/%7Ebsteinbg/classes/automata2009/generatingfunctions.pdf\" rel=\"nofollow noreferrer\">this paper</a> on generating functions that neatly explains the interconnection between them.</p>\n" }, { "answer_id": 2087355, "score": 4, "text": "<blockquote>\n <p>Let's denote the event under consideration with $a$\n \\begin{align*}\nP(X=a)=0.7\n\\end{align*}\n and we denote the complementary event with $b$.</p>\n</blockquote>\n\n<p>We 
are looking for the words of length $n$ and their probability of occurrence which do not contain three or more consecutive $a$'s. The result is $1$ minus this probability.</p>\n\n<blockquote>\n <p>We can describe the set of these invalid words as the words built from the alphabet $V=\\{a,b\\}$ which contain at most one or two consecutive $a$'s.</p>\n \n <p>These are words</p>\n \n <ul>\n <li><p>starting with zero or more $b$'s:$$b^*$$</p></li>\n <li><p>followed by zero or more occurrences of $a$ or $aa$ each followed by one or more $b$'s: $$(ab^+|aab^+)^*$$</p></li>\n <li><p>and terminating with zero, one or two $a$'s\n $$(\\varepsilon|a|aa)$$</p></li>\n </ul>\n \n <p>We obtain</p>\n \n <p>\\begin{align*}\nb^*(ab^+|aab^+)^*(\\varepsilon|a|aa)\\tag{1}\n\\end{align*}</p>\n</blockquote>\n\n<p>The regular expression (1) generates all invalid words in a <em>unique</em> manner. In such cases we can use it to derive a generating function $$\\sum_{n=0}^\\infty a_n z^n$$ with\n$a_n$ giving the number of invalid words of length $n$.</p>\n\n<p>In order to do so all we need to know is the <em><a href=\"https://en.wikipedia.org/wiki/Geometric_series#Geometric_power_series\" rel=\"nofollow noreferrer\">geometric series expansion</a></em> since the $star$ operator\n\\begin{align*}\na^*=\\left(\\varepsilon|a|a^2|a^3|\\cdots\\right)\\qquad\\text{ translates to }\\qquad 1+z+z^2+z^3+\\cdots=\\frac{1}{1-z}\n\\end{align*}</p>\n\n<p>Accordingly $a^+=aa^*$ translates to $\\frac{z}{1-z}$ and alternatives like $(\\varepsilon|a|aa)$ can be written as $1+z+z^2$.</p>\n\n<blockquote>\n <p>We translate the regular expression (1) into a generating function (by mixing up somewhat the symbolic to provide some intermediate steps).</p>\n \n <p>Since we want to calculate the probabilities of occurrence of $P(X=a)$ we keep track of $a$ and $b$ by respecting them as corresponding factors in the generating function.</p>\n \n 
<p>\\begin{align*}\nb^*\\left(ab^+|aab^+\\right)^*(\\varepsilon|a|aa)\n&amp;\\longrightarrow \\quad \\frac{1}{1-bz}\\left(\\left.\\frac{abz^2}{1-bz}\\right|\\frac{a^2bz^3}{1-bz}\\right)^*\\left(1+az+a^2z^2\\right)\\\\\n&amp;\\longrightarrow \\quad \\frac{1}{1-bz}\\left(\\frac{abz^2+a^2bz^3}{1-bz}\\right)^*(1+az+a^2z^2)\\\\\n&amp;\\longrightarrow \\quad \\frac{1}{1-bz}\\cdot\\frac{1}{1-\\frac{abz^2+a^2bz^3}{1-bz}}(1+az+a^2z^2)\\\\\n&amp;\\quad\\quad=\\frac{1+az+a^2z^2}{1-bz-abz^2-a^2bz^3}\\tag{1}\n\\end{align*}</p>\n</blockquote>\n\n<p>$$ $$</p>\n\n<blockquote>\n <p>We conclude: The number of invalid words is given by (1). So, the number of valid words is the number of all words minus the number of invalid words. We obtain the generating function $A(z)$\n \\begin{align*}\nA(z)&amp;=\\sum_{n=0}^\\infty(a+b)^nz^n-\\frac{1+az+a^2z^2}{1-bz-abz^2-a^2bz^3}\\\\\n&amp;=\\frac{1}{1-(a+b)z}-\\frac{1+az+a^2z^2}{1-bz-abz^2-a^2bz^3}\\\\\n&amp;=\\frac{a^3z^3}{(1-(a+b)z)(1-bz-abz^2-a^2bz^3)}\\\\\n&amp;=a^3z^3+a^3(a+2b)z^4+a^3(\\color{blue}{1}a^2+\\color{blue}{4}ab+\\color{blue}{3}b^2)z^5\\\\\n&amp;\\qquad +a^3(a+b)^2(a+4b)z^6+a^3(a^4+7a^3b+18a^2b^2+16ab^3+5b^4)z^7+\\cdots\n\\end{align*}</p>\n \n <p>The expansion was done with the help of Wolfram Alpha. We see that e.g. 
the number of valid words of length $5$ is $\\color{blue}{1}+\\color{blue}{4}+\\color{blue}{3}=8$.</p>\n</blockquote>\n\n<p>$$ $$</p>\n\n<blockquote>\n <p>Out of $2^5=32$ binary words of length $5$ there are $8$ valid words which are marked $\\color{blue}{\\text{blue}}$ in the table below.</p>\n \n <p>We obtain\n \\begin{array}{cccc}\n\\color{blue}{aaa}aa\\qquad&amp;ab\\color{blue}{aaa}\\qquad&amp;b\\color{blue}{aaa}a\\qquad&amp;bb\\color{blue}{aaa}\\\\\n\\color{blue}{aaa}ab\\qquad&amp;abaab\\qquad&amp;b\\color{blue}{aaa}b\\qquad&amp;bbaab\\\\\n\\color{blue}{aaa}ba\\qquad&amp;ababa\\qquad&amp;baaba\\qquad&amp;bbaba\\\\\n\\color{blue}{aaa}bb\\qquad&amp;ababb\\qquad&amp;baabb\\qquad&amp;bbabb\\\\\naabaa\\qquad&amp;abbaa\\qquad&amp;babaa\\qquad&amp;bbbaa\\\\\naabab\\qquad&amp;abbab\\qquad&amp;babab\\qquad&amp;bbbab\\\\\naabba\\qquad&amp;abbba\\qquad&amp;babba\\qquad&amp;bbbba\\\\\naabbb\\qquad&amp;abbbb\\qquad&amp;babbb\\qquad&amp;bbbbb\\\\\n\\end{array}</p>\n</blockquote>\n\n<p>We denote with $[z^n]$ the coefficient of $z^n$ in a series.</p>\n\n<blockquote>\n <p>We conclude</p>\n \n <ul>\n <li>The number of occurrences of wanted words of size $n$ is the coefficient of $z^n$ of $A(z)$ evaluated at $a=b=1$ and given as OEIS sequence <em><a href=\"http://oeis.org/A050231\" rel=\"nofollow noreferrer\">050231</a></em>.</li>\n </ul>\n \n <p>\\begin{align*}\n\\left.[z^n]A(z)\\right|_{a=b=1}\n\\end{align*}</p>\n \n <ul>\n <li><p>The wanted probability of valid words of size $n$ is the coefficient of $z^n$ of $A(z)$ evaluated at $a=0.7,b=0.3$.\n \\begin{align*}\n\\left.[z^n]A(z)\\right|_{a=0.7,b=0.3}\n\\end{align*}</p></li>\n <li><p>We obtain for $n=0$ up to $n=7$\n 
\\begin{array}{c|cl}\nn&amp;\\left.[z^n]A(z)\\right|_{a=b=1}&amp;\\left.[z^n]A(z)\\right|_{a=0.7,b=0.3}\\\\\n\\hline\n0&amp;0&amp;0\\\\\n1&amp;0&amp;0\\\\\n2&amp;0&amp;0\\\\\n3&amp;1&amp;0.343\\\\\n4&amp;3&amp;0.4459\\\\\n5&amp;8&amp;0.5488\\\\\n6&amp;20&amp;0.6517\\\\\n7&amp;47&amp;0.7193\\\\\n\\end{array}</p></li>\n </ul>\n</blockquote>\n" }, { "answer_id": 2094765, "score": 2, "text": "<p>I would propose an alternative - maybe easier - approach, trying to give a formula that provides a generalized answer for the probability of three events in a row, given the probability $p $ that the event occurs, as a function of the sample size $n$. As shown below, this approach may also be useful to address an additional issue of this problem highlighted in one of the comments, i.e. the sample size needed to \"guarantee\" a success with a given confidence level. </p>\n\n<p>Let us call $P (n) $ the probability that our event occurs three times in a row within a sequence of $n $ trials. Also, for each trial, let us denote as Y the case that our event occurs, and as N the case that it does not occur. So, in the specific case described in the OP, we are looking for the probability of getting a triple YYY in a sequence of $n $ elements when $p=0.7$.</p>\n\n<p>As a general rule, we can note that, passing from a sample size $n $ to a sample size $n+1$, the probability that our event occurs three times in a row increases only because there is the possibility that the triple YYY, not obtained in the first $n $ trials, occurs as a final triple in the sequence of $n+1$ trials. So, to calculate $P (n+1 )$ from $ P (n) $, we simply have to add the probability that the triple in a row is first obtained as a final sequence NYYY. Note that the N preceding the triple YYY is strictly necessary to satisfy the condition that the triple has not been obtained in the first $n $ trials. </p>\n\n<p>We can then start with $n=3$, for which we trivially have $P (3)=p^3$. 
For $n=4$, we have to add to $P (3) $ the probability to get a sequence NYYY, i.e. we have to add $p^3(1-p)$. Similarly, for $n=5$, we have to add to $P (4) $ the probability to get a final sequence NYYY (note that this is not affected by what happens in the first of the five trials), which means that we must add $p^3(1-p)$ as in the previous case. Lastly, for $n=6$, we have again to add to $P (5) $ the probability to get a final sequence NYYY (note that, as above, this is not affected by what happens in the first and in the second of the six trials). This leads to a further increase by $p^3(1-p)$. So, for this initial group of four values of $n$ in our probability progression, we directly obtain the recurrence</p>\n\n<p>$$P (n)=P (n-1)+ p^3 (1-p)\\,\\,\\,\\, \\text {(for n=4 to 6)}$$</p>\n\n<p>Accordingly, for the specific case $p=0.7$, this formula gives the initial values </p>\n\n<p>\\begin{aligned} P(3) &amp; =0.7^3=0.343 \\\\\nP (4) &amp; = 0.7^3+0.3 \\cdot 0.7^3 = 0.4459 \\\\\nP (5) &amp; = 0.7^3+2\\cdot 0.3 \\cdot 0.7^3 = 0.5488 \\\\\nP (6) &amp; = 0.7^3+3\\cdot 0.3 \\cdot 0.7^3 = 0.6517 \\\\\n\\end{aligned} </p>\n\n<p>already obtained in a previous answer via a different method.</p>\n\n<p>Now let us consider the values of $n \\geq 7$. Here something changes, because the fact that the sequence ends with NYYY does not imply that the triple YYY has not occurred before, i.e. in the first $n-4$ trials. So, at each step, the probability of getting a final NYYY must be multiplied by the probability of failure in the first $n-4$ trials, which is $1-P (n-4 ) $. Therefore, for $n=7$ we have to add to $P (6) $ the probability given by $p^3(1-p) [1-P(3)]\\,\\,$, for $n=8$ we have to add to $P (7) $ the probability given by $p^3(1-p) [1-P(4)] \\,\\, $, and so on. This leads to the recurrence</p>\n\n<p>$$P (n)=P (n-1)+ p^3(1-p)[1-P(n-4)]\\,\\,\\,\\, \\text{(for n}\\, \\geq \\text {7)}$$</p>\n\n<p>We can now try to get a closed form for this last recurrence. 
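The two regimes of the recurrence can be sketched in a few lines of Python (an illustration of mine, not from the original answer; the function name is hypothetical), reproducing the values just listed:

```python
def p_run3(n, p=0.7):
    # P(3) = p^3; P(m) = P(m-1) + p^3 (1-p) for m = 4..6; and
    # P(m) = P(m-1) + p^3 (1-p) (1 - P(m-4)) for m >= 7.
    P = {3: p**3}
    for m in range(4, n + 1):
        miss = 1 - P[m - 4] if m >= 7 else 1.0  # 1 - P(m-4), with P(k) = 0 for k < 3
        P[m] = P[m - 1] + p**3 * (1 - p) * miss
    return P[n]

print(round(p_run3(4), 4), round(p_run3(7), 4))  # 0.4459 0.7193
```

For increasing `n` the returned value approaches 1, as noted later in the answer.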
Let us set, for simplicity, $p^3=j$ and $(1-p) =k \\,\\, $. The recurrence becomes </p>\n\n<p>$$P (n)=P (n-1)+ jk-jk \\,P(n-4)\\, \\,$$</p>\n\n<p>Note that the four values of $P (n)$ already determined can be written as $$P (3)=j $$\n$$P (4)=j +jk$$\n$$P (5)=j +2jk$$\n$$P (6)=j +3jk$$</p>\n\n<p>so that they contain only terms of degree $j $ or $jk $. Continuing our probability progression and focusing on the second group of four values of $n$ (from $n=7\\,\\, $ to $n=10 \\,\\, $), new terms of degree $j^2k $ and $j^2k^2$ appear applying the recurrence above (for the sake of readability, here I directly provide the resulting expressions avoiding the rather tedious step-by-step calculations):</p>\n\n<p>\\begin{aligned} P (7) &amp; = P(6)+ jk-jk P (3)=j+4jk-j^2k \\\\\nP (8) &amp; = P(7)+ jk-jk P (4)= j+ 5jk -2j^2k-j^2k^2 \\\\\nP (9) &amp; = P(8) + jk-jk P (5)= j+ 6jk -3j^2k-3j^2k^2 \\\\\nP (10) &amp; = P (9) + jk-jk P (6)= j+ 7jk -4j^2k-6j^2k^2 \\\\\n\\end{aligned}</p>\n\n<p>Some pattern begins to emerge in these expansions. For the specific case $p=0.7$,\n$$P (7)\\approx 0.7193$$\n$$P (8)\\approx 0.7763$$\n$$P (9) \\approx 0.8228$$\n$$P (10) \\approx 0.8586$$</p>\n\n<p>We can repeat the procedure, applying the same recurrence and continuing with the third group of four values of $n $ (i.e., $n=11$ to $14$). Again, new terms appear, in this case of degree $j^3k^2$ and $j^3 k^3$ (as above, I directly provide the resulting expressions):</p>\n\n<p>\\begin{align*} P(11) &amp;=j+8jk-5j^2k-10j^2k^2+j^3k^2 \\\\ P(12) &amp;=j+9jk-6j^2k-15j^2k^2+3j^3k^2+j^3k^3 \\\\P(13) &amp; = j+10jk-7j^2k -21j^2k^2+6j^3k^2+4j^3k^3 \\\\\nP (14) &amp; = j+11jk-8j^2k-28j^2k^2+10j^3k^2+10j^3k^3\\\\\n\\end{align*}</p>\n\n<p>The final pattern is therefore as follows. Our expansions of $P (n) $ include:</p>\n\n<ul>\n<li><p>in the odd positions, $\\lfloor (n+1)/4 \\rfloor \\,\\, $ terms of the form $$(-1)^{i-1} \\binom{n-3i}{i-1} j^i k^{i-1} \\,\\, $$ where $$i=1,2... 
\\lfloor (n+1)/4 \\rfloor \\,\\, $$ For example, in the expansion of $P (14)$ there are $ \\lfloor (14+1)/4 \\rfloor =3 \\, \\,\\, $ terms of this form, which are $j $, $-8j^2k $, and $10 j^3k^2$;</p></li>\n<li><p>in the even positions, $\\lfloor n/4 \\rfloor \\,\\, $ terms of the form $$(-1)^{i-1} \\binom{n-3i}{i} j^i k^i \\,\\, $$ where $$i=1,2... \\lfloor n/4 \\rfloor$$</p>\n\n<p>Accordingly, taking again for example the expansion of $P (14)$, there are $ \\lfloor 14/4 \\rfloor =3 \\,$ terms of this form, which are $11jk$, $-28j^2k^2$, and $10 j^3k^3$.</p></li>\n</ul>\n\n<p>Therefore, we obtain the following general expression for $P (n) $:</p>\n\n<p>$$P (n)=\\sum_{i=1}^{\\lfloor (n+1)/4 \\rfloor } (-1)^{i-1} \\binom{n-3i}{i-1} j^{i}k^{i-1} +\\sum_{i=1}^{\\lfloor n/4 \\rfloor } (-1)^{i-1} \\binom{n-3i}{i} j^i k^i $$</p>\n\n<p>Recalling that $j=p^3$ and $k=(1-p) $, we finally get</p>\n\n<p>$$P (n)=\\sum_{i=1}^{\\lfloor (n+1)/4 \\rfloor } (-1)^{i-1} \\binom{n-3i}{i-1} p^{3i}(1-p)^{i-1} + \\sum_{i=1}^{\\lfloor n/4 \\rfloor } (-1)^{i-1} \\binom{n-3i}{i} p^{3i} (1-p)^i $$</p>\n\n<p>Setting $p=0.7$, we obtain the probability asked for in the OP, which is</p>\n\n<p>$$P (n)=\\sum_{i=1}^{\\lfloor (n+1)/4 \\rfloor } (-1)^{i-1} \\binom{n-3i}{i-1} 0.7^{3i} \\cdot 0.3^{i-1} + \\sum_{i=1}^{\\lfloor n/4 \\rfloor } (-1)^{i-1} \\binom{n-3i}{i} 0.7^{3i}\\cdot 0.3^i $$</p>\n\n<p>Also note that the two sums have the same number of terms when $n \\not\\equiv -1 \\,\\, \\text {mod}\\,\\, 4$, whereas the first sum has one term more than the second one when $n \\equiv -1 \\,\\,\\text {mod} \\, 4$, because in this case $(n+1)/4$ is an integer. 
This term is given by</p>\n\n<p>$$\\displaystyle (-1)^{\\frac{(n+1)}{4}-1} \\binom{n-\\frac{3(n+1)}{4}}{ \\frac{(n+1)}{4}-1}\\, 0.7^{\\frac{3(n+1)}{4}} \\, 0.3^{\\frac{(n+1)}{4}-1} $$ \n$$\\displaystyle = (-1)^{\\frac{(n-3)}{4}} \\binom{\\frac{(n-3)}{4}}{\\frac{(n-3)}{4}}\\, 0.7^{\\frac{3(n+1)}{4}} \\, 0.3^{\\frac{(n-3)}{4}} $$\n$$\\displaystyle = (-1)^{\\frac{(n-3)}{4}} \\, 0.7^{\\frac{3(n+1)}{4}} \\, 0.3^{\\frac{(n-3)}{4}} $$</p>\n\n<p>So the probability asked in the OP can be alternatively expressed as</p>\n\n<p>$$P (n)=\\sum_{i=1}^{\\lfloor n/4 \\rfloor } (-1)^{i-1} \\binom{n-3i}{i}\\, 0.7^{3i}\\, 0.3^{i} \\left( \\frac {i}{0.3(n-4 i+1)} +1 \\right) + S$$</p>\n\n<p>where</p>\n\n<p>$$S = \\begin{cases} 0 &amp; \\mbox{if } n \\not\\equiv -1 \\, \\text {mod}\\, 4\\\\\n (-1)^{\\frac{(n-3)}{4}} \\, 0.7^{\\frac{3(n+1)}{4}} \\, 0.3^{\\frac{(n-3)}{4}} &amp; \\mbox{if } n \\equiv -1 \\, \\text {mod}\\, 4 \\end{cases}$$</p>\n\n<p>It could also be observed that $|S|$ is relatively small and rapidly decreases for increasing $n$: its value is $\\approx 0.0353\\, \\,\\, $ for $n=7$, becoming $\\approx 0.00363\\, \\,\\, $ for $n=11$ and $\\approx 0.000374\\,\\,\\, $ for $n=15$, so that its contribution to the overall probability is small. The last expansion without the $S$ term can therefore be considered a valid approximation even for the case $n \\equiv -1 \\, \\text {mod}\\, 4$. </p>\n\n<p>The probabilities of three events in a row given by these formulas for $p=0.7$ and $n=3$ to $10$ have already been provided above. As expected, for increasing $n $, the value of $P (n)$ approaches $1$. The first value for which the probability achieves $95\\%$ is $n=15$, since $P (15) \\approx 0.9549\\,\\,\\, $. 
The probability achieves $97.5\\%$ and $99\\%$ for $n=18$ and $n=22$, respectively, since $P(18) \\approx 0.9772\\,\\,\\, $ and $P(22) \\approx 0.9909\\,\\,\\, $, as confirmed by WA <a href=\"http://www.wolframalpha.com/input/?i=sum%20%20%5B(-1)%5E(i-1)%20(((18-3i)%20choose%20i)%200.7%5E(3i)%200.3%5Ei%20(i%2F(0.3(18-4%20i%2B1))%2B1)%20%20%20),%7Bi,%2018%2F4%7D%5D\" rel=\"nofollow noreferrer\">here</a> and <a href=\"http://www.wolframalpha.com/input/?i=sum%20%20%5B(-1)%5E(i-1)%20(((22-3i)%20choose%20i)%200.7%5E(3i)%200.3%5Ei%20(i%2F(0.3(22-4%20i%2B1))%2B1)%20%20%20),%7Bi,%2022%2F4%7D%5D\" rel=\"nofollow noreferrer\">here</a>. </p>\n" }, { "answer_id": 2083011, "score": 1, "text": "<p>The issue with looking at opportunities for success is that they are not completely independent. For example, in a row of size 4, if the first three aren't all successes, this greatly decreases the chance that the last three aren't all successes (the only way this can happen is if only the first trial was a failure).</p>\n\n<p>You can still get this approach to work (via correcting for events which are dependent via some sort of inclusion-exclusion), but generally a cleaner method for these types of problems is to write down a recurrence (or more generally, model them as a <a href=\"https://en.wikipedia.org/wiki/Markov_chain\" rel=\"nofollow noreferrer\">finite state Markov chain</a>).</p>\n\n<p>Let's let $P(N)$ be the probability a row of length $N$ has no 3 consecutive successes. To help us write down a recurrence, we'll break $P(N)$ down into 3 smaller probabilities. Let's let $P(N, 0)$ be the probability a row of length $N$ has no 3 consecutive successes AND that the last attempt was a failure. 
Similarly, let $P(N,1)$ be the probability that a row of length $N$ has no 3 consecutive successes AND that the last attempt was a success, but the second-last attempt was a failure, and finally, let $P(N,2)$ be the probability that a row of length $N$ has no 3 consecutive successes AND the last two attempts were successes, but the third-to-last attempt was a failure. Intuitively, these different cases measure how \"close\" we are to getting 3 successes in a row. It's not too hard to see that $P(N) = P(N,0) + P(N,1) + P(N,2)$, since if the last three attempts are all successes, you've already won.</p>\n\n<p>Now, let's try to write $P(N+1, 0)$ in terms of smaller cases. The probability you have no 3 consecutive successes in a row of length $N+1$ and that your last attempt was a failure is simply the probability that there are no 3 consecutive successes in the first $N$ tries and that you fail the $N+1$st try. This gives us the equation:</p>\n\n<p>$$P(N+1,0) = 0.3P(N) = 0.3P(N,0) + 0.3P(N,1) + 0.3P(N,2)$$</p>\n\n<p>Via similar logic, we can show that $P(N+1,1)$ and $P(N+1,2)$ satisfy the following equations:</p>\n\n<p>$$P(N+1,1) = 0.7P(N, 0)$$</p>\n\n<p>and</p>\n\n<p>$$P(N+1,2) = 0.7P(N, 1)$$</p>\n\n<p>Substituting these into the original equation for $P(N+1,0)$, we get:</p>\n\n<p>$$P(N+1, 0) = 0.3P(N,0) + 0.21P(N-1, 0) + 0.147P(N-2,0)$$</p>\n\n<p>Now, this is just a standard <a href=\"https://en.wikipedia.org/wiki/Recurrence_relation\" rel=\"nofollow noreferrer\">linear recurrence relation</a>. If $\\lambda_1$, $\\lambda_2$, and $\\lambda_3$ are the roots of $z^3 - 0.3z^2-0.21z-0.147=0$, then there are some constants $a_1, a_2, a_3$ which you can solve for so that</p>\n\n<p>$$P(N, 0) = a_1\\lambda_1^N + a_2\\lambda_2^N + a_3\\lambda_3^N$$</p>\n\n<p>Since $P(N) = P(N+1,0)/0.3$, this gives a similar formula for $P(N)$ (and ultimately for $1-P(N)$, the probability a success does occur 3 consecutive times). 
</p>\n" }, { "answer_id": 2095652, "score": 1, "text": "<p>Let $p=0.7$ be the probability of success, $\\mathrm{f}(n)$ be the probability of a sequence of length $n$ having 3 consecutive successes and $\\mathrm{g}(n)=1-\\mathrm{f}(n)$, where $\\mathrm{g}(n)$ corresponds to the probability of a sequence of length $n$ having at most 2 consecutive successes.</p>\n\n<p>It holds $\\mathrm{g}(0) = \\mathrm{g}(1) = \\mathrm{g}(2) = 1$.</p>\n\n<p>Then $\\mathrm{g}(n) = (1-p)\\mathrm{g}(n-1) + (1-p)p\\mathrm{g}(n-2) + (1-p)p^2\\mathrm{g}(n-3)$.</p>\n\n<p>This holds because all sequences of length $n$ that have at most 2 consecutive successes can be constructed from (1) all such sequences of length $n-1$ suffixed by a failure plus; (2) all such sequences of length $n-2$ suffixed by a failure and a success, plus; (3) all such sequences of length $n-3$ suffixed by a failure and two successes.</p>\n" }, { "answer_id": 3809188, "score": 0, "text": "<p>I saw that the answers to the question include either regex (Automata Theory) or linear recurrences (Linear Algebra), so for those interested I decided to post <a href=\"https://people.math.carleton.ca/%7Ebsteinbg/classes/automata2009/generatingfunctions.pdf\" rel=\"nofollow noreferrer\">this paper</a> on generating functions that neatly explains the interconnection between them.</p>\n" }, { "answer_id": 2087355, "score": 4, "text": "<blockquote>\n <p>Let's denote the event under consideration with $a$\n \\begin{align*}\nP(X=a)=0.7\n\\end{align*}\n and we denote the complementary event with $b$.</p>\n</blockquote>\n\n<p>We are looking for the words of length $n$ and their probability of occurrence which do not contain three or more consecutive $a$'s. 
The result is $1$ minus this probability.</p>\n\n<blockquote>\n <p>We can describe the set of these invalid words as the words built from the alphabet $V=\\{a,b\\}$ which contain at most one or two consecutive $a$'s.</p>\n \n <p>These are words</p>\n \n <ul>\n <li><p>starting with zero or more $b$'s:$$b^*$$</p></li>\n <li><p>followed by zero or more occurrences of $a$ or $aa$ each followed by one or more $b$'s: $$(ab^+|aab^+)^*$$</p></li>\n <li><p>and terminating with zero, one or two $a$'s\n $$(\\varepsilon|a|aa)$$</p></li>\n </ul>\n \n <p>We obtain</p>\n \n <p>\\begin{align*}\nb^*(ab^+|aab^+)^*(\\varepsilon|a|aa)\\tag{1}\n\\end{align*}</p>\n</blockquote>\n\n<p>The regular expression (1) generates all invalid words in a <em>unique</em> manner. In such cases we can use it to derive a generating function $$\\sum_{n=0}^\\infty a_n z^n$$ with\n$a_n$ giving the number of invalid words of length $n$.</p>\n\n<p>In order to do so all we need to know is the <em><a href=\"https://en.wikipedia.org/wiki/Geometric_series#Geometric_power_series\" rel=\"nofollow noreferrer\">geometric series expansion</a></em> since the $star$ operator\n\\begin{align*}\na^*=\\left(\\varepsilon|a|a^2|a^3|\\cdots\\right)\\qquad\\text{ translates to }\\qquad 1+z+z^2+z^3+\\cdots=\\frac{1}{1-z}\n\\end{align*}</p>\n\n<p>Accordingly $a^+=aa^*$ translates to $\\frac{z}{1-z}$ and alternatives like $(\\varepsilon|a|aa)$ can be written as $1+z+z^2$.</p>\n\n<blockquote>\n <p>We translate the regular expression (1) into a generating function (by mixing up somewhat the symbolic to provide some intermediate steps).</p>\n \n <p>Since we want to calculate the probabilities of occurrence of $P(X=a)$ we keep track of $a$ and $b$ by respecting them as corresponding factors in the generating function.</p>\n \n <p>\\begin{align*}\nb^*\\left(ab^+|aab^+\\right)^*(\\varepsilon|a|aa)\n&amp;\\longrightarrow \\quad 
\\frac{1}{1-bz}\\left(\\left.\\frac{abz^2}{1-bz}\\right|\\frac{a^2bz^3}{1-bz}\\right)^*\\left(1+az+a^2z^2\\right)\\\\\n&amp;\\longrightarrow \\quad \\frac{1}{1-bz}\\left(\\frac{abz^2+a^2bz^3}{1-bz}\\right)^*(1+az+a^2z^2)\\\\\n&amp;\\longrightarrow \\quad \\frac{1}{1-bz}\\cdot\\frac{1}{1-\\frac{abz^2+a^2bz^3}{1-bz}}(1+az+a^2z^2)\\\\\n&amp;\\quad\\quad=\\frac{1+az+a^2z^2}{1-bz-abz^2-a^2bz^3}\\tag{1}\n\\end{align*}</p>\n</blockquote>\n\n<p>$$ $$</p>\n\n<blockquote>\n <p>We conclude: The number of invalid words is given by (1). So, the number of valid words is the number of all words minus the number of invalid words. We obtain the generating function $A(z)$\n \\begin{align*}\nA(z)&amp;=\\sum_{n=0}^\\infty(a+b)^nz^n-\\frac{1+az+a^2z^2}{1-bz-abz^2-a^2bz^3}\\\\\n&amp;=\\frac{1}{1-(a+b)z}-\\frac{1+az+a^2z^2}{1-bz-abz^2-a^2bz^3}\\\\\n&amp;=\\frac{a^3z^3}{(1-(a+b)z)(1-bz-abz^2-a^2bz^3)}\\\\\n&amp;=a^3z^3+a^3(a+2b)z^4+a^3(\\color{blue}{1}a^2+\\color{blue}{4}ab+\\color{blue}{3}b^2)z^5\\\\\n&amp;\\qquad +a^3(a+b)^2(a+4b)z^6+a^3(a^4+7a^3b+18a^2b^2+16ab^3+5b^4)z^7+\\cdots\n\\end{align*}</p>\n \n <p>The expansion was done with the help of Wolfram Alpha. We see that e.g. 
the number of valid words of length $5$ is $\\color{blue}{1}+\\color{blue}{4}+\\color{blue}{3}=8$.</p>\n</blockquote>\n\n<p>$$ $$</p>\n\n<blockquote>\n <p>Out of $2^5=32$ binary words of length $5$ there are $8$ valid words which are marked $\\color{blue}{\\text{blue}}$ in the table below.</p>\n \n <p>We obtain\n \\begin{array}{cccc}\n\\color{blue}{aaa}aa\\qquad&amp;ab\\color{blue}{aaa}\\qquad&amp;b\\color{blue}{aaa}a\\qquad&amp;bb\\color{blue}{aaa}\\\\\n\\color{blue}{aaa}ab\\qquad&amp;abaab\\qquad&amp;b\\color{blue}{aaa}b\\qquad&amp;bbaab\\\\\n\\color{blue}{aaa}ba\\qquad&amp;ababa\\qquad&amp;baaba\\qquad&amp;bbaba\\\\\n\\color{blue}{aaa}bb\\qquad&amp;ababb\\qquad&amp;baabb\\qquad&amp;bbabb\\\\\naabaa\\qquad&amp;abbaa\\qquad&amp;babaa\\qquad&amp;bbbaa\\\\\naabab\\qquad&amp;abbab\\qquad&amp;babab\\qquad&amp;bbbab\\\\\naabba\\qquad&amp;abbba\\qquad&amp;babba\\qquad&amp;bbbba\\\\\naabbb\\qquad&amp;abbbb\\qquad&amp;babbb\\qquad&amp;bbbbb\\\\\n\\end{array}</p>\n</blockquote>\n\n<p>We denote with $[z^n]$ the coefficient of $z^n$ in a series.</p>\n\n<blockquote>\n <p>We conclude</p>\n \n <ul>\n <li>The number of occurrences of wanted words of size $n$ is the coefficient of $z^n$ of $A(z)$ evaluated at $a=b=1$ and given as OEIS sequence <em><a href=\"http://oeis.org/A050231\" rel=\"nofollow noreferrer\">050231</a></em>.</li>\n </ul>\n \n <p>\\begin{align*}\n\\left.[z^n]A(z)\\right|_{a=b=1}\n\\end{align*}</p>\n \n <ul>\n <li><p>The wanted probability of valid words of size $n$ is the coefficient of $z^n$ of $A(z)$ evaluated at $a=0.7,b=0.3$.\n \\begin{align*}\n\\left.[z^n]A(z)\\right|_{a=0.7,b=0.3}\n\\end{align*}</p></li>\n <li><p>We obtain for $n=0$ up to $n=7$\n 
\\begin{array}{c|cl}\nn&amp;\\left.[z^n]A(z)\\right|_{a=b=1}&amp;\\left.[z^n]A(z)\\right|_{a=0.7,b=0.3}\\\\\n\\hline\n0&amp;0&amp;0\\\\\n1&amp;0&amp;0\\\\\n2&amp;0&amp;0\\\\\n3&amp;1&amp;0.343\\\\\n4&amp;3&amp;0.4459\\\\\n5&amp;8&amp;0.5488\\\\\n6&amp;20&amp;0.6517\\\\\n7&amp;47&amp;0.7193\\\\\n\\end{array}</p></li>\n </ul>\n</blockquote>\n" }, { "answer_id": 2094765, "score": 2, "text": "<p>I would propose an alternative - maybe easier - approach, trying to give a formula that provides a generalized answer for the probability of three events in a row, given the probability $p $ that the event occurs, as a function of the sample size $n$. As shown below, this approach may also be useful to address an additional issue of this problem highlighted in one of the comments, i.e. the sample size needed to \"guarantee\" a success with a given confidence level. </p>\n\n<p>Let us call $P (n) $ the probability that our event occurs three times in a row within a sequence of $n $ trials. Also, for each trial, let us denote as Y the case that our event occurs, and as N the case that it does not occur. So, in the specific case described in the OP, we are looking for the probability of getting a triple YYY in a sequence of $n $ elements when $p=0.7$.</p>\n\n<p>As a general rule, we can note that, passing from a sample size $n $ to a sample size $n+1$, the probability that our event occurs three times in a row increases only because there is the possibility that the triple YYY, not obtained in the first $n $ trials, occurs as a final triple in the sequence of $n+1$ trials. So, to calculate $P (n+1 )$ from $ P (n) $, we simply have to add the probability that the triple in a row is not obtained until a final sequence NYYY. Note that the N preceding the triple YYY is strictly necessary to satisfy the condition that the triple has not been obtained in the first $n $ trials. </p>\n\n<p>We can then start with $n=3$, for which we trivially have $P (3)=p^3$. 
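As a sanity check, the values tabulated above (counts $1, 3, 8, 20, 47$ and probabilities $0.343, \dots, 0.7193$) can be reproduced by brute-force enumeration of all $2^n$ outcome words; a minimal Python sketch (function names are illustrative, not from the original answers):

```python
from itertools import product

def count_and_prob(n, p=0.7):
    """Enumerate all 2^n words over {'a', 'b'} ('a' = success with probability p).
    Return (#words containing 'aaa', total probability of those words)."""
    count, prob = 0, 0.0
    for letters in product("ab", repeat=n):
        word = "".join(letters)
        if "aaa" in word:
            count += 1
            prob += p ** word.count("a") * (1 - p) ** word.count("b")
    return count, prob
```

This is only feasible for small $n$, but it pins down the exact targets that the recurrences and generating functions below must match.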
For $n=4$, we have to add to $P (3) $ the probability to get a sequence NYYY, i.e. we have to add $p^3(1-p)$. Similarly, for $n=5$, we have to add to $P (4) $ the probability to get a final sequence NYYY (note that this is not affected by what happens in the first of the five trials), which means that we must add $p^3(1-p)$ as in the previous case. Lastly, for $n=6$, we have again to add to $P (5) $ the probability to get a final sequence NYYY (note that, as above, this is not affected by what happens in the first and in the second of the six trials). This leads to a further increase by $p^3(1-p)$. So, for this initial group of four values of $n$ in our probability progression, we directly obtain the recurrence</p>\n\n<p>$$P (n)=P (n-1)+ p^3 (1-p)\\,\\,\\,\\, \\text {(for n=4 to 6)}$$</p>\n\n<p>Accordingly, for the specific case $p=0.7$, this formula gives the initial values </p>\n\n<p>\\begin{aligned} P(3) &amp; =0.7^3=0.343 \\\\\nP (4) &amp; = 0.7^3+0.3 \\cdot 0.7^3 = 0.4459 \\\\\nP (5) &amp; = 0.7^3+2\\cdot 0.3 \\cdot 0.7^3 = 0.5488 \\\\\nP (6) &amp; = 0.7^3+3\\cdot 0.3 \\cdot 0.7^3 = 0.6517 \\\\\n\\end{aligned} </p>\n\n<p>already obtained in a previous answer via a different method.</p>\n\n<p>Now let us consider the values of $n \\geq 7$. Here something changes, because the fact that the sequence ends with NYYY does not imply that the triple YYY has not occurred before, i.e. in the first $n-4$ trials. So, at each step, the probability of getting a final NYYY must be multiplied by the probability of failure in the first $n-4$ trials, which is $1-P (n-4 ) $. Therefore, for $n=7$ we have to add to $P (6) $ the probability given by $p^3(1-p) [1-P(3)]\\,\\,$, for $n=8$ we have to add to $P (7) $ the probability given by $p^3(1-p) [1-P(4)] \\,\\, $, and so on. This leads to the recurrence</p>\n\n<p>$$P (n)=P (n-1)+ p^3(1-p)[1-P(n-4)]\\,\\,\\,\\, \\text{(for n}\\, \\geq \\text {7)}$$</p>\n\n<p>We can now try to get a closed form for this last recurrence. 
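Both stages of the recurrence can be iterated in a single loop by treating $P(k)=0$ for $k<3$, so that the correction factor $1-P(n-4)$ is simply $1$ while $n\le 6$. A minimal Python sketch (illustrative only):

```python
def p_three_in_row(n, p=0.7):
    """P(n) = P(n-1) + p^3*(1-p)*(1 - P(n-4)), seeded with P(0)=P(1)=P(2)=0
    and P(3)=p^3; for n = 4..6 the factor (1 - P(n-4)) is just 1."""
    P = [0.0, 0.0, 0.0, p ** 3]
    for m in range(4, n + 1):
        P.append(P[m - 1] + p ** 3 * (1 - p) * (1 - P[m - 4]))
    return P[n]
```

For $p=0.7$ this reproduces $0.343$, $0.4459$, $0.5488$, $0.6517$, $0.7193$, ... for $n=3,4,5,6,7,\dots$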
Let us set, for simplicity, $p^3=j$ and $(1-p) =k \\,\\, $. The recurrence becomes </p>\n\n<p>$$P (n)=P (n-1)+ jk-jk \\,P(n-4)\\, \\,$$</p>\n\n<p>Note that the four values of $P (n)$ already determined can be written as $$P (3)=j $$\n$$P (4)=j +jk$$\n$$P (5)=j +2jk$$\n$$P (6)=j +3jk$$</p>\n\n<p>so that they contain only terms of degree $j $ or $jk $. Continuing our probability progression and focusing on the second group of four values of $n$ (from $n=7\\,\\, $ to $n=10 \\,\\, $), new terms of degree $j^2k $ and $j^2k^2$ appear applying the recurrence above (for the sake of readability, here I directly provide the resulting expressions avoiding the rather tedious step-by-step calculations):</p>\n\n<p>\\begin{aligned} P (7) &amp; = P(6)+ jk-jk P (3)=j+4jk-j^2k \\\\\nP (8) &amp; = P(7)+ jk-jk P (4)= j+ 5jk -2j^2k-j^2k^2 \\\\\nP (9) &amp; = P(8) + jk-jk P (5)= j+ 6jk -3j^2k-3j^2k^2 \\\\\nP (10) &amp; = P (9) + jk-jk P (6)= j+ 7jk -4j^2k-6j^2k^2 \\\\\n\\end{aligned}</p>\n\n<p>A pattern begins to emerge in these expansions. For the specific case $p=0.7$,\n$$P (7)\\approx 0.7193$$\n$$P (8)\\approx 0.7763$$\n$$P (9) \\approx 0.8228$$\n$$P (10) \\approx 0.8586$$</p>\n\n<p>We can repeat the procedure, applying the same recurrence and continuing with the third group of four values of $n $ (i.e., $n=11$ to $14$). Again, new terms appear, in this case of degree $j^3k^2$ and $j^3 k^3$ (as above, I directly provide the resulting expressions):</p>\n\n<p>\\begin{align*} P(11) &amp;=j+8jk-5j^2k-10j^2k^2+j^3k^2 \\\\ P(12) &amp;=j+9jk-6j^2k-15j^2k^2+3j^3k^2+j^3k^3 \\\\P(13) &amp; = j+10jk-7j^2k -21j^2k^2+6j^3k^2+4j^3k^3 \\\\\nP (14) &amp; = j+11jk-8j^2k-28j^2k^2+10j^3k^2+10j^3k^3\\\\\n\\end{align*}</p>\n\n<p>The final pattern is therefore as follows. Our expansions of $P (n) $ include:</p>\n\n<ul>\n<li><p>in the odd positions, $\\lfloor (n+1)/4 \\rfloor \\,\\, $ terms of the form $$(-1)^{i-1} \\binom{n-3i}{i-1} j^i k^{i-1} \\,\\, $$ where $$i=1,2... 
\\lfloor (n+1)/4 \\rfloor \\,\\, $$ For example, in the expansion of $P (14)$ there are $ \\lfloor (14+1)/4 \\rfloor =3 \\, \\,\\, $ terms of this form, which are $j $, $-8j^2k $, and $10 j^3k^2$;</p></li>\n<li><p>in the even positions, $\\lfloor n/4 \\rfloor \\,\\, $ terms of the form $$(-1)^{i-1} \\binom{n-3i}{i} j^i k^i \\,\\, $$ where $$i=1,2... \\lfloor n/4 \\rfloor$$</p>\n\n<p>Accordingly, taking again for example the expansion of $P (14)$, there are $ \\lfloor 14/4 \\rfloor =3 \\,$ terms of this form, which are $11jk$, $-28j^2k^2$, and $10 j^3k^3$.</p></li>\n</ul>\n\n<p>Therefore, we obtain the following general expression for $P (n) $:</p>\n\n<p>$$P (n)=\\sum_{i=1}^{\\lfloor (n+1)/4 \\rfloor } (-1)^{i-1} \\binom{n-3i}{i-1} j^{i}k^{i-1} +\\sum_{i=1}^{\\lfloor n/4 \\rfloor } (-1)^{i-1} \\binom{n-3i}{i} j^i k^i $$</p>\n\n<p>Recalling that $j=p^3$ and $k=(1-p) $, we finally get</p>\n\n<p>$$P (n)=\\sum_{i=1}^{\\lfloor (n+1)/4 \\rfloor } (-1)^{i-1} \\binom{n-3i}{i-1} p^{3i}(1-p)^{i-1} + \\sum_{i=1}^{\\lfloor n/4 \\rfloor } (-1)^{i-1} \\binom{n-3i}{i} p^{3i} (1-p)^i $$</p>\n\n<p>Setting $p=0.7$, we obtain the probability asked for in the OP, which is</p>\n\n<p>$$P (n)=\\sum_{i=1}^{\\lfloor (n+1)/4 \\rfloor } (-1)^{i-1} \\binom{n-3i}{i-1} 0.7^{3i} \\cdot 0.3^{i-1} + \\sum_{i=1}^{\\lfloor n/4 \\rfloor } (-1)^{i-1} \\binom{n-3i}{i} 0.7^{3i}\\cdot 0.3^i $$</p>\n\n<p>Also note that the two sums have the same number of terms when $n \\not\\equiv -1 \\,\\, \\text {mod}\\,\\, 4$, whereas the first sum has one term more than the second one when $n \\equiv -1 \\,\\,\\text {mod} \\, 4$, because in this case $(n+1)/4$ is integer. 
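The double-sum expression can be checked numerically against the recurrence values quoted above; a short Python sketch (illustrative, with $j=p^3$ and $k=1-p$):

```python
from math import comb

def p_closed_form(n, p=0.7):
    """Evaluate the double-sum expression: floor((n+1)/4) terms in the first
    sum (powers j^i k^(i-1)) and floor(n/4) terms in the second (j^i k^i)."""
    j, k = p ** 3, 1 - p
    s1 = sum((-1) ** (i - 1) * comb(n - 3 * i, i - 1) * j ** i * k ** (i - 1)
             for i in range(1, (n + 1) // 4 + 1))
    s2 = sum((-1) ** (i - 1) * comb(n - 3 * i, i) * j ** i * k ** i
             for i in range(1, n // 4 + 1))
    return s1 + s2
```

For $p=0.7$ it returns $0.343$ at $n=3$, $0.7193053$ at $n=7$, $0.8586$ at $n=10$, and so on, in agreement with the step-by-step expansions.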
This term is given by</p>\n\n<p>$$\\displaystyle (-1)^{\\frac{(n+1)}{4}-1} \\binom{n-\\frac{3(n+1)}{4}}{ \\frac{(n+1)}{4}-1}\\, 0.7^{\\frac{3(n+1)}{4}} \\, 0.3^{\\frac{(n+1)}{4}-1} $$ \n$$\\displaystyle = (-1)^{\\frac{(n-3)}{4}} \\binom{\\frac{(n-3)}{4}}{\\frac{(n-3)}{4}}\\, 0.7^{\\frac{3(n+1)}{4}} \\, 0.3^{\\frac{(n-3)}{4}} $$\n$$\\displaystyle = (-1)^{\\frac{(n-3)}{4}} \\, 0.7^{\\frac{3(n+1)}{4}} \\, 0.3^{\\frac{(n-3)}{4}} $$</p>\n\n<p>So the probability asked in the OP can be alternatively expressed as</p>\n\n<p>$$P (n)=\\sum_{i=1}^{\\lfloor n/4 \\rfloor } (-1)^{i-1} \\binom{n-3i}{i}\\, 0.7^{3i}\\, 0.3^{i} \\left( \\frac {i}{0.3(n-4 i+1)} +1 \\right) + S$$</p>\n\n<p>where</p>\n\n<p>$$S = \\begin{cases} 0 &amp; \\mbox{if } n \\not\\equiv -1 \\, \\text {mod}\\, 4\\\\\n (-1)^{\\frac{(n-3)}{4}} \\, 0.7^{\\frac{3(n+1)}{4}} \\, 0.3^{\\frac{(n-3)}{4}} &amp; \\mbox{if } n \\equiv -1 \\, \\text {mod}\\, 4 \\end{cases}$$</p>\n\n<p>It could also be observed that $|S|$ is relatively small and rapidly decreases for increasing $n$: its magnitude is $\\approx 0.0353\\, \\,\\, $ for $n=7$, becoming $\\approx 0.00363\\, \\,\\, $ for $n=11$ and $\\approx 0.000374\\,\\,\\, $ for $n=15$, so that for larger $n$ its contribution to the overall probability is nearly negligible. The last expansion without the $S$ term can therefore be considered a valid approximation for large $n$ even in the case $n \\equiv -1 \\, \\text {mod}\\, 4$. </p>\n\n<p>The probabilities of three events in a row given by these formulas for $p=0.7$ and $n=3$ to $10$ have already been provided above. As expected, for increasing $n $, the value of $P (n)$ approaches $1$. The first value for which the probability achieves $95\\%$ is $n=15$, since $P (15) \\approx 0.9549\\,\\,\\, $. 
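The alternative single-sum expression can be verified numerically against the recurrence; in the Python sketch below (illustrative names), the leftover term for $n \equiv -1 \pmod 4$ is taken directly as the $i=(n+1)/4$ term of the first sum, i.e. $(-1)^{i-1} p^{3i}(1-p)^{i-1}$:

```python
from math import comb

def p_rec(n, p=0.7):
    """Reference values from P(n) = P(n-1) + p^3*(1-p)*(1 - P(n-4)), P(<3)=0."""
    P = [0.0, 0.0, 0.0, p ** 3]
    for m in range(4, n + 1):
        P.append(P[m - 1] + p ** 3 * (1 - p) * (1 - P[m - 4]))
    return P[n]

def p_alt(n, p=0.7):
    """Single sum plus leftover term (nonzero only when (n+1)/4 is an integer)."""
    k = 1 - p
    total = sum((-1) ** (i - 1) * comb(n - 3 * i, i) * p ** (3 * i) * k ** i
                * (i / (k * (n - 4 * i + 1)) + 1)
                for i in range(1, n // 4 + 1))
    if (n + 1) % 4 == 0:
        i = (n + 1) // 4
        total += (-1) ** (i - 1) * p ** (3 * i) * k ** (i - 1)
    return total
```

The two functions agree to machine precision for all $n$, which confirms the rearrangement term by term.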
The probability achieves $97.5\\%$ and $99\\%$ for $n=18$ and $n=22$, respectively, since $P(18) \\approx 0.9772\\,\\,\\, $ and $P(22) \\approx 0.9909\\,\\,\\, $, as confirmed by WA <a href=\"http://www.wolframalpha.com/input/?i=sum%20%20%5B(-1)%5E(i-1)%20(((18-3i)%20choose%20i)%200.7%5E(3i)%200.3%5Ei%20(i%2F(0.3(18-4%20i%2B1))%2B1)%20%20%20),%7Bi,%2018%2F4%7D%5D\" rel=\"nofollow noreferrer\">here</a> and <a href=\"http://www.wolframalpha.com/input/?i=sum%20%20%5B(-1)%5E(i-1)%20(((22-3i)%20choose%20i)%200.7%5E(3i)%200.3%5Ei%20(i%2F(0.3(22-4%20i%2B1))%2B1)%20%20%20),%7Bi,%2022%2F4%7D%5D\" rel=\"nofollow noreferrer\">here</a>. </p>\n" }, { "answer_id": 2083011, "score": 1, "text": "<p>The issue with looking at opportunities for success is that they are not completely independent. For example, in a row of size 4, if the first three aren't all successes, this greatly decreases the chance that the last three are all successes (the only way this can happen is if only the first trial was a failure).</p>\n\n<p>You can still get this approach to work (via correcting for events which are dependent via some sort of inclusion-exclusion), but generally a cleaner method for these types of problems is to write down a recurrence (or more generally, model them as a <a href=\"https://en.wikipedia.org/wiki/Markov_chain\" rel=\"nofollow noreferrer\">finite state Markov chain</a>).</p>\n\n<p>Let's let $P(N)$ be the probability a row of length $N$ has no 3 consecutive successes. To help us write down a recurrence, we'll break $P(N)$ down into 3 smaller probabilities. Let's let $P(N, 0)$ be the probability a row of length $N$ has no 3 consecutive successes AND that the last attempt was a failure. 
Similarly, let $P(N,1)$ be the probability that a row of length $N$ has no 3 consecutive successes AND that the last attempt was a success, but the second-last attempt was a failure, and finally, let $P(N,2)$ be the probability that a row of length $N$ has no 3 consecutive successes AND the last two attempts were successes, but the third-to-last attempt was a failure. Intuitively, these different cases measure how \"close\" we are to getting 3 successes in a row. It's not too hard to see that $P(N) = P(N,0) + P(N,1) + P(N,2)$, since if the last three attempts are all successes, you've already won.</p>\n\n<p>Now, let's try to write $P(N+1, 0)$ in terms of smaller cases. The probability you have no 3 consecutive successes in a row of length $N+1$ and that your last attempt was a failure is simply the probability that there are no 3 consecutive successes in the first $N$ tries and that you fail the $N+1$st try. This gives us the equation:</p>\n\n<p>$$P(N+1,0) = 0.3P(N) = 0.3P(N,0) + 0.3P(N,1) + 0.3P(N,2)$$</p>\n\n<p>Via similar logic, we can show that $P(N+1,1)$ and $P(N+1,2)$ satisfy the following equations:</p>\n\n<p>$$P(N+1,1) = 0.7P(N, 0)$$</p>\n\n<p>and</p>\n\n<p>$$P(N+1,2) = 0.7P(N, 1)$$</p>\n\n<p>Substituting these into the original equation for $P(N+1,0)$, we get:</p>\n\n<p>$$P(N+1, 0) = 0.3P(N,0) + 0.21P(N-1, 0) + 0.147P(N-2,0)$$</p>\n\n<p>Now, this is just a standard <a href=\"https://en.wikipedia.org/wiki/Recurrence_relation\" rel=\"nofollow noreferrer\">linear recurrence relation</a>. If $\\lambda_1$, $\\lambda_2$, and $\\lambda_3$ are the roots of $z^3 - 0.3z^2-0.21z-0.147=0$, then there are some constants $a_1, a_2, a_3$ which you can solve for so that</p>\n\n<p>$$P(N, 0) = a_1\\lambda_1^N + a_2\\lambda_2^N + a_3\\lambda_3^N$$</p>\n\n<p>Since $P(N) = P(N+1,0)/0.3$, this gives a similar formula for $P(N)$ (and ultimately for $1-P(N)$, the probability a success does occur 3 consecutive times). 
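The three transition equations can be iterated directly as a small finite-state chain; a minimal Python sketch (illustrative, with $P(N,i)$ written as `q0, q1, q2`):

```python
def p_no_triple(N, p=0.7):
    """Iterate the state probabilities (q0, q1, q2) = (P(N,0), P(N,1), P(N,2)):
    no run of 3 successes yet, ending in 0, 1 or 2 trailing successes."""
    q0, q1, q2 = 1.0, 0.0, 0.0            # empty row: 0 trailing successes
    for _ in range(N):
        q0, q1, q2 = (1 - p) * (q0 + q1 + q2), p * q0, p * q1
    return q0 + q1 + q2                   # P(N): no 3 consecutive successes
```

Then `1 - p_no_triple(N)` reproduces $0.343$, $0.4459$, $0.5488$, ... for $N = 3, 4, 5, \dots$, matching the other answers.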
</p>\n" }, { "answer_id": 2095652, "score": 1, "text": "<p>Let $p=0.7$ be the probability of success, $\\mathrm{f}(n)$ be the probability of a sequence of length $n$ having 3 consecutive successes and $\\mathrm{g}(n)=1-\\mathrm{f}(n)$, where $\\mathrm{g}(n)$ corresponds to the probability of a sequence of length $n$ having at most 2 consecutive successes.</p>\n\n<p>It holds $\\mathrm{g}(0) = \\mathrm{g}(1) = \\mathrm{g}(2) = 1$.</p>\n\n<p>Then $\\mathrm{g}(n) = (1-p)\\mathrm{g}(n-1) + (1-p)p\\mathrm{g}(n-2) + (1-p)p^2\\mathrm{g}(n-3)$.</p>\n\n<p>This holds because all sequences of length $n$ that have at most 2 consecutive successes can be constructed from (1) all such sequences of length $n-1$ suffixed by a failure, plus (2) all such sequences of length $n-2$ suffixed by a failure and a success, plus (3) all such sequences of length $n-3$ suffixed by a failure and two successes.</p>\n" }, { "answer_id": 3809188, "score": 0, "text": "<p>I saw that the answers to the question include either regex (Automata Theory) or linear recurrences (Linear Algebra), so for those interested I decided to post <a href=\"https://people.math.carleton.ca/%7Ebsteinbg/classes/automata2009/generatingfunctions.pdf\" rel=\"nofollow noreferrer\">this paper</a> on generating functions that neatly explains the interconnection between them.</p>\n" }
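The $\mathrm{g}(n)$ recurrence above can be verified numerically; a minimal Python sketch (function names illustrative):

```python
def g(n, p=0.7):
    """g(n): probability that n independent trials contain no run of 3 successes,
    via g(n) = (1-p)g(n-1) + (1-p)p g(n-2) + (1-p)p^2 g(n-3), g(0)=g(1)=g(2)=1."""
    g3, g2, g1 = 1.0, 1.0, 1.0            # rolling window: g(n-3), g(n-2), g(n-1)
    for _ in range(max(0, n - 2)):
        g3, g2, g1 = g2, g1, (1 - p) * g1 + (1 - p) * p * g2 + (1 - p) * p * p * g3
    return g1

def f(n, p=0.7):
    """f(n) = 1 - g(n): probability of at least one run of 3 successes."""
    return 1.0 - g(n, p)
```

For $p=0.7$ this gives $f(3)=0.343$, $f(4)=0.4459$, $f(5)=0.5488$, agreeing with the other answers.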
The result is $1$ minus this probability.</p>\n\n<blockquote>\n <p>We can describe the set of these invalid words as the words built from the alphabet $V=\\{a,b\\}$ which contain at most one or two consecutive $a$'s.</p>\n \n <p>These are words</p>\n \n <ul>\n <li><p>starting with zero or more $b$'s:$$b^*$$</p></li>\n <li><p>followed by zero or more occurrences of $a$ or $aa$ each followed by one or more $b$'s: $$(ab^+|aab^+)^*$$</p></li>\n <li><p>and terminating with zero, one or two $a$'s\n $$(\\varepsilon|a|aa)$$</p></li>\n </ul>\n \n <p>We obtain</p>\n \n <p>\\begin{align*}\nb^*(ab^+|aab^+)^*(\\varepsilon|a|aa)\\tag{1}\n\\end{align*}</p>\n</blockquote>\n\n<p>The regular expression (1) generates all invalid words in a <em>unique</em> manner. In such cases we can use it to derive a generating function $$\\sum_{n=0}^\\infty a_n z^n$$ with\n$a_n$ giving the number of invalid words of length $n$.</p>\n\n<p>In order to do so all we need to know is the <em><a href=\"https://en.wikipedia.org/wiki/Geometric_series#Geometric_power_series\" rel=\"nofollow noreferrer\">geometric series expansion</a></em> since the $star$ operator\n\\begin{align*}\na^*=\\left(\\varepsilon|a|a^2|a^3|\\cdots\\right)\\qquad\\text{ translates to }\\qquad 1+z+z^2+z^3+\\cdots=\\frac{1}{1-z}\n\\end{align*}</p>\n\n<p>Accordingly $a^+=aa^*$ translates to $\\frac{z}{1-z}$ and alternatives like $(\\varepsilon|a|aa)$ can be written as $1+z+z^2$.</p>\n\n<blockquote>\n <p>We translate the regular expression (1) into a generating function (by mixing up somewhat the symbolic to provide some intermediate steps).</p>\n \n <p>Since we want to calculate the probabilities of occurrence of $P(X=a)$ we keep track of $a$ and $b$ by respecting them as corresponding factors in the generating function.</p>\n \n <p>\\begin{align*}\nb^*\\left(ab^+|aab^+\\right)^*(\\varepsilon|a|aa)\n&amp;\\longrightarrow \\quad 
\\frac{1}{1-bz}\\left(\\left.\\frac{abz^2}{1-bz}\\right|\\frac{a^2bz^3}{1-bz}\\right)^*\\left(1+az+a^2z^2\\right)\\\\\n&amp;\\longrightarrow \\quad \\frac{1}{1-bz}\\left(\\frac{abz^2+a^2bz^3}{1-bz}\\right)^*(1+az+a^2z^2)\\\\\n&amp;\\longrightarrow \\quad \\frac{1}{1-bz}\\cdot\\frac{1}{1-\\frac{abz^2+a^2bz^3}{1-bz}}(1+az+a^2z^2)\\\\\n&amp;\\quad\\quad=\\frac{1+az+a^2z^2}{1-bz-abz^2-a^2bz^3}\\tag{1}\n\\end{align*}</p>\n</blockquote>\n\n<p>$$ $$</p>\n\n<blockquote>\n <p>We conclude: The number of invalid words is given by (1). So, the number of valid words is the number of all words minus the number of invalid words. We obtain the generating function $A(z)$\n \\begin{align*}\nA(z)&amp;=\\sum_{n=0}^\\infty(a+b)^nz^n-\\frac{1+az+a^2z^2}{1-bz-az^2-a^2z^3}\\\\\n&amp;=\\frac{1}{1-(a+b)z}-\\frac{1+az+a^2z^2}{1-bz-az^2-a^2z^3}\\\\\n&amp;=\\frac{a^3z^3}{(1-(a+b)z)(1-bz-abz^2-a^2bz^3)}\\\\\n&amp;=a^3z^3+a^3(a+2b)z^4+a^3(\\color{blue}{1}a^2+\\color{blue}{4}ab+\\color{blue}{3}b^2)z^5\\\\\n&amp;\\qquad a^3(a+b)^2(a+4b)z^6+a^3(a^4+7a^3b+18a^2b^2+16ab^3+5b^4)z^7+\\cdots\n\\end{align*}</p>\n \n <p>The expansion was done with the help of Wolfram Alpha. We see that e.g. 
the number of valid words of length $5$ is $\\color{blue}{1}+\\color{blue}{4}+\\color{blue}{3}=8$.</p>\n</blockquote>\n\n<p>$$ $$</p>\n\n<blockquote>\n <p>Out of $2^5=32$ binary words of length $5$ there are $8$ valid words which are marked $\\color{blue}{\\text{blue}}$ in the table below.</p>\n \n <p>We obtain\n \\begin{array}{cccc}\n\\color{blue}{aaa}aa\\qquad&amp;ab\\color{blue}{aaa}\\qquad&amp;b\\color{blue}{aaa}a\\qquad&amp;bb\\color{blue}{aaa}\\\\\n\\color{blue}{aaa}ab\\qquad&amp;abaab\\qquad&amp;b\\color{blue}{aaa}b\\qquad&amp;bbaab\\\\\n\\color{blue}{aaa}ba\\qquad&amp;ababa\\qquad&amp;baaba\\qquad&amp;bbaba\\\\\n\\color{blue}{aaa}bb\\qquad&amp;ababb\\qquad&amp;baabb\\qquad&amp;bbabb\\\\\naabaa\\qquad&amp;abbaa\\qquad&amp;babaa\\qquad&amp;bbbaa\\\\\naabab\\qquad&amp;abbab\\qquad&amp;babab\\qquad&amp;bbbab\\\\\naabba\\qquad&amp;abbba\\qquad&amp;babba\\qquad&amp;bbbba\\\\\naabbb\\qquad&amp;abbbb\\qquad&amp;babbb\\qquad&amp;bbbbb\\\\\n\\end{array}</p>\n</blockquote>\n\n<p>We denote with $[z^n]$ the coefficient of $z^n$ in a series.</p>\n\n<blockquote>\n <p>We conclude</p>\n \n <ul>\n <li>The number of occurrences of wanted words of size $n$ is the coefficient of $z^n$ of $A(z)$ evaluated at $a=b=1$ and given as OEIS sequence <em><a href=\"http://oeis.org/A050231\" rel=\"nofollow noreferrer\">050231</a></em>.</li>\n </ul>\n \n <p>\\begin{align*}\n\\left.[z^n]A(z)\\right|_{a=b=1}\n\\end{align*}</p>\n \n <ul>\n <li><p>The wanted probability of valid words of size $n$ is the coefficient of $z^n$ of $A(z)$ evaluated at $a=0.7,b=0.3$.\n \\begin{align*}\n\\left.[z^n]A(z)\\right|_{a=0.7,b=0.3}\n\\end{align*}</p></li>\n <li><p>We obtain for $n=0$ up to $n=7$\n 
\\begin{array}{c|cl}\nn&amp;\\left.[z^n]A(z)\\right|_{a=b=1}&amp;\\left.[z^n]A(z)\\right|_{a=0.7,b=0.3}\\\\\n\\hline\n0&amp;0&amp;0\\\\\n1&amp;0&amp;0\\\\\n2&amp;0&amp;0\\\\\n3&amp;1&amp;0.343\\\\\n4&amp;3&amp;0.4459\\\\\n5&amp;8&amp;0.5488\\\\\n6&amp;20&amp;0.6517\\\\\n7&amp;47&amp;0.7193\\\\\n\\end{array}</p></li>\n </ul>\n</blockquote>\n" }, { "answer_id": 2094765, "score": 2, "text": "<p>I would propose an alternative - maybe easier - approach, trying to give a formula that provides a generalized answer for the probability of three events in a row, given the probability $p $ that the event occurs, as a function of the sample size $n$. As shown below, this approach may also be useful to determine an additional issue of this problem highlihted in one of the comments, i.e. the sample size needed to \"guarantee\" a success with a given confidence level. </p>\n\n<p>Let us call $P (n) $ the probability that our event occurs three times in a row within a sequence of $n $ trials. Also, for each trial, let us denote as Y the case that the our event occurs, and as N the case that it does not occur. So, in the specific case described in the OP, we are looking for the probability of getting a triple YYY in a sequence of $n $ elements when $p=0.7$.</p>\n\n<p>As a general rule, we can note that, passing from a sample size $n $ to a sample size $n+1$, the probability that our event occurs three times in a row increases only because there is the possibility that the triple YYY, not obtained in the first $n $ trials, occurs as a final triple in the sequence of $n+1$ trials. So, to calculate $P (n+1 )$ from $ P (n) $, we simply have to add the probability that the triple in a row is not obtained until a final sequence NYYY. Note that the N preceding the triple YYY is strictly necessary to satisfy the condition that the triple has not been obtained in the first $n $ trials. </p>\n\n<p>We can then start with $n=3$, for which we trivially have $P (3)=p^3$. 
For $n=4$, we have to add to $P (3) $ the probability to get a sequence NYYY, i.e. we have to add $p^3(1-p)$. Similarly, for $n=5$, we have to add to $P (4) $ the probability to get a final sequence NYYY (note that this is not affected by what happens in the first of the five trials), which means that we must add $p^3(1-p)$ as in the previous case. Lastly, for $n=6$, we have again to add to $P (5) $ the probability to get a final sequence NYYY (note that, as above, this is not affected by what happens in the first and in the second of the six trials). This leads to a further increase by $p^3(1-p)$. So, for this initial group of four values of in our probability progression, we directly obtain the recurrence</p>\n\n<p>$$P (n+1)=P (n)+ p^3 (1-p)\\,\\,\\,\\, \\text {(for n=3 to 6)}$$</p>\n\n<p>Accordingly, for the specific case $p=0.7$, this formula gives the initial values </p>\n\n<p>\\begin{aligned} P(3) &amp; =0.7^3=0.343 \\\\\nP (4) &amp; = 0.7^3+0.3 \\cdot 0.7^3 = 0.4459 \\\\\nP (5) &amp; = 0.7^3+2\\cdot 0.3 \\cdot 0.7^3 = 0.5488 \\\\\nP (6) &amp; = 0.7^3+3\\cdot 0.3 \\cdot 0.7^3 = 0.6517 \\\\\n\\end{aligned} </p>\n\n<p>already obtained in a previous answer via a different method.</p>\n\n<p>Now let us consider the values of $n \\geq 7$. Here something changes, because the fact that the sequence ends with NYYY does not imply that the triple YYY has not occurred before, i.e. in the first $n-4$ trials. So, at each step, the probability of getting a final NYYY must be multiplied to the probability of failure in the first $n-4$ trials, which is $1-P (n-4 ) $. Therefore, for $n=7$ we have to add to $P (6) $ the probability given by $p^3(1-p) [1-P(3)]\\,\\,$, for $n=8$ we have to add to $P (7) $ the probability given by $p^3(1-p) [1-P(4)] \\,\\, $, and so on. This leads to the recurrence</p>\n\n<p>$$P (n+1)=P (n)+ p^3(1-p)[1-P(n-4)]\\,\\,\\,\\, \\text{(for n}\\, \\geq \\text {7)}$$</p>\n\n<p>We can now try to get a closed form for this last recurrence. 
Let us set, by simplicity, $p^3=j$ and $(1-p) =k \\,\\, $. The recurrence becomes </p>\n\n<p>$$P (n+1)=P (n)+ jk-jk \\,P(n-4)\\, \\,$$</p>\n\n<p>Note that the four values of $P (n)$ already determined can be written as $$P (3)=j $$\n$$P (4)=j +jk$$\n$$P (5)=j +2jk$$\n$$P (6)=j +3jk$$</p>\n\n<p>so that they contain only terms of degree $j $ or $jk $. Continuing our probability progression and focusing on the second group of four values of $n$ (from $n=7\\,\\, $ to $n=10 \\,\\, $), new terms of degree $j^2k $ and $j^2k^2$ appear applying the recurrence above (for the sake of readability, here I directly provide the resulting expressions avoiding the rather tedious step-by-step calcuations):</p>\n\n<p>\\begin{aligned} P (7) &amp; = P(6)+ jk-jk P (3)=j+4jk-j^2k \\\\\nP (8) &amp; = P(7)+ jk-jk P (4)= j+ 5jk -2j^2k-j^2k^2 \\\\\nP (9) &amp; = P(8) + jk-jk P (5)= j+ 6jk -3j^2k-3j^2k^2 \\\\\nP (10) &amp; = P (9) + jk-jk P (6)= j+ 7jk -4j^2k-6j^2k^2 \\\\\n\\end{aligned}</p>\n\n<p>Some pattern begins to emerge in these expansions. For the specific case $p=0.7$,\n$$P (7)\\approx 0.7193$$\n$$P (8)\\approx 0.7763$$\n$$P (9) \\approx 0.8228$$\n$$P (10) \\approx 0.8586$$</p>\n\n<p>We can repeat the procedure, applying the same recurrence and continuing with the third group of four values of $n $ (i.e., $n=11$ to $14$). Again, new terms appear, in this case of degree $j^3k^2$ and $j^3 k^3$ (as above, I directly provide the resulting expressions):</p>\n\n<p>\\begin{align*} P(11) &amp;=j+8jk-5j^2k-10j^2k^2+j^3k^2 \\\\ P(12) &amp;=j+9jk-6j^2k-15j^2k^2+3j^3k^2+j^3k^3 \\\\P(13) &amp; = j+10jk-7j^2k -21j^2k^2+6j^3k^2+4j^3k^3 \\\\\nP (14) &amp; = j+11jk-8j^2k-28j^2k^2+10j^3k^2+10j^3k^3\\\\\n\\end{align*}</p>\n\n<p>The final pattern is therefore as follows. Our expansions of $P (n) $ include:</p>\n\n<ul>\n<li><p>in the odd positions, $\\lfloor (n+1)/4 \\rfloor \\,\\, $ terms of the form $$(-1)^{i-1} \\binom{n-3i}{i-1} j^i k^{i-1} \\,\\, $$ where $$i=1,2... 
\\lfloor (n+1)/4 \\rfloor \\,\\, $$ For example, in the expansion of $P (14)$ there are $ \\lfloor (14+1)/4 \\rfloor =3 \\, \\,\\, $ terms of this form, which are $j $, $-8j^2k $, and $10 j^3k^2$;</p></li>\n<li><p>in the even positions, $\\lfloor n/4 \\rfloor \\,\\, $ terms of the form $$(-1)^{i-1} \\binom{n-3i}{i} j^i k^i \\,\\, $$ where $$i=1,2... \\lfloor n/4 \\rfloor$$</p>\n\n<p>Accordingly, taking again for example the expansion of $P (14)$, there are $ \\lfloor 14/4 \\rfloor =3 \\,$ terms of this form, which are $11jk$, $-28j^2k^2$, and $10 j^3k^3$.</p></li>\n</ul>\n\n<p>Therefore, we obtain the following general expression for $P (n) $:</p>\n\n<p>$$P (n)=\\sum_{i}^{\\lfloor (n+1)/4 \\rfloor } (-1)^{i-1} \\binom{n-3i}{i-1} j^{i}k^{i-1} +\\sum_{i}^{\\lfloor n/4 \\rfloor } (-1)^{i-1} \\binom{n-3i}{i} j^i k^i $$</p>\n\n<p>Reminding that $j=p^3$ and $k=(1-p) $, we finally get</p>\n\n<p>$$P (n)=\\sum_{i}^{\\lfloor (n+1)/4 \\rfloor } (-1)^{i-1} \\binom{n-3i}{i-1} p^{3i}(1-p)^{i-1} + \\sum_{i}^{\\lfloor n/4 \\rfloor } (-1)^{i-1} \\binom{n-3i}{i} p^{3i} (1-p)^i $$</p>\n\n<p>Setting $p=0.7$, we obtain the probability asked in the OP for, which is</p>\n\n<p>$$P (n)=\\sum_{i}^{\\lfloor (n+1)/4 \\rfloor } (-1)^{i-1} \\binom{n-3i}{i-1} 0.7^{3i} \\cdot 0.3^{i-1} + \\sum_{i}^{\\lfloor n/4 \\rfloor } (-1)^{i-1} \\binom{n-3i}{i} 0.7^{3i}\\cdot 0.3^i $$</p>\n\n<p>Also note that the two sums have the same number of terms when $n \\not\\equiv -1 \\,\\, \\text {mod}\\,\\, 4$, whereas the first sum has one term more than the second one when $n \\equiv -1 \\,\\,\\text {mod} \\, 4$, because in this case $(n+1)/4$ is integer. 
This term is given by</p>\n\n<p>$$\\displaystyle (-1)^{\\frac{(n+1)}{4}-1} \\binom{n-\\frac{3(n+1)}{4}}{ \\frac{(n+1)}{4}-1}\\, 0.7^{\\frac{3(n+1)}{4}} \\, 0.3^{\\frac{(n+1)}{4}} $$ \n$$\\displaystyle = (-1)^{\\frac{(n-3)}{4}} \\binom{\\frac{(n-3)}{4}}{\\frac{(n-3)}{4}}\\, 0.7^{\\frac{3(n+1)}{4}} \\, 0.3^{\\frac{(n+1)}{4}} $$\n$$\\displaystyle = (-1)^{\\frac{(n-3)}{4}} \\, 0.7^{\\frac{3(n+1)}{4}} \\, 0.3^{\\frac{(n+1)}{4}} $$</p>\n\n<p>So the probability asked in the OP can be alternatively expressed as</p>\n\n<p>$$P (n)=\\sum_{i}^{\\lfloor n/4 \\rfloor } (-1)^{i-1} \\binom{n-3i}{i}\\, 0.7^{3i}\\, 0.3^{i} \\left( \\frac {i}{0.3(n-4 i+1)} +1 \\right) + S$$</p>\n\n<p>where</p>\n\n<p>$$S = \\begin{cases} 0 &amp; \\mbox{if } n \\not\\equiv -1 \\, \\text {mod}\\, 4\\\\\n (-1)^{\\frac{(n-3)}{4}} \\, 0.7^{\\frac{3(n+1)}{4}} \\, 0.3^{\\frac{(n+1)}{4}} &amp; \\mbox{if } n \\equiv -1 \\, \\text {mod}\\, 4 \\end{cases}$$</p>\n\n<p>It could also be observed that $|S|$ is relatively small and rapidly decreases for increasing $n$: its value is $\\approx 0.0106\\, \\,\\, $ for $n=7$, becoming $\\approx 0.00109\\, \\,\\, $ for $n=11$ and $\\approx 0.000112\\,\\,\\, $ for $n=15$, so that its contribute to the overall probability is nearly negligible. The last expansion without the $S$ term can therefore considered a valid approximation even for the case $n \\equiv -1 \\, \\text {mod}\\, 4$. </p>\n\n<p>The probability of three events in a row given by these formulas for $p=0.7$ and $n=3$ to $10$ have already been provided above. As expected, for increasing $n $, the value of $P (n)$ approaches $1$. The first value for which the probability achieves $95\\%$ is $n=15$, since $P (15) \\approx 0.9553\\,\\,\\, $, as confirmed by WA <a href=\"http://www.wolframalpha.com/input/?i=0.7%5E(12)%200.3%5E(4)%20%2B%20%20sum%20%20%5B(-1)%5E(i-1)%20(((15-3i)%20choose%20i)%200.7%5E(3i)%200.3%5Ei%20(i%2F(0.3(15-4%20i%2B1))%2B1)%20%20%20),%7Bi,%2015%2F4%7D%5D\" rel=\"nofollow noreferrer\">here</a>. 
The probability achieves $97.5\\%$ and $99\\%$ for $n=18$ and $n=22$, respectively, since $P(18) \\approx 0.9772\\,\\,\\, $ and $P(22) \\approx 0.9909\\,\\,\\, $, as confirmed by WA <a href=\"http://www.wolframalpha.com/input/?i=sum%20%20%5B(-1)%5E(i-1)%20(((18-3i)%20choose%20i)%200.7%5E(3i)%200.3%5Ei%20(i%2F(0.3(18-4%20i%2B1))%2B1)%20%20%20),%7Bi,%2018%2F4%7D%5D\" rel=\"nofollow noreferrer\">here</a> and <a href=\"http://www.wolframalpha.com/input/?i=sum%20%20%5B(-1)%5E(i-1)%20(((22-3i)%20choose%20i)%200.7%5E(3i)%200.3%5Ei%20(i%2F(0.3(22-4%20i%2B1))%2B1)%20%20%20),%7Bi,%2022%2F4%7D%5D\" rel=\"nofollow noreferrer\">here</a>. </p>\n" }, { "answer_id": 2083011, "score": 1, "text": "<p>The issue with looking at opportunities for success is that they are not completely independent. For example, in a row of size 4, if the first three aren't all successes, this greatly decreases the chance that the last three aren't all successes (the only way this can happen is if only the first trial was a failure).</p>\n\n<p>You can still get this approach to work (via correcting for events which are dependent via some sort of inclusion-exclusion), but generally a cleaner method for these types of problems is to write down a recurrence (or more generally, model them as a <a href=\"https://en.wikipedia.org/wiki/Markov_chain\" rel=\"nofollow noreferrer\">finite state Markov chain</a>).</p>\n\n<p>Let's let $P(N)$ be the probability a row of length $N$ has no 3 consecutive successes. To help us write down a recurrence, we'll break $P(N)$ down into 3 smaller probabilities. Let's let $P(N, 0)$ be the probability a row of length $N$ has no 3 consecutive successes AND that the last attempt was a failure. 
Similarly, let $P(N,1)$ be the probability that a row of length $N$ has no 3 consecutive successes AND that the last attempt was a success, but the second-last attempt was a failure, and finally, let $P(N,2)$ be the probability that a row of length $N$ has no 3 consecutive successes AND the last two attempts were successes, but the third-to-last attempt was a failure. Intuitively, these different cases measure how \"close\" we are to getting 3 successes in a row. It's not too hard to see that $P(N) = P(N,0) + P(N,1) + P(N,2)$, since if the last three attempts are all successes, you've already won.</p>\n\n<p>Now, let's try to write $P(N+1, 0)$ in terms of smaller cases. The probability you have no 3 consecutive successes in a row of length $N+1$ and that your last attempt was a failure is simply the probability that there are no 3 consecutive successes in the first $N$ tries and that you fail the $N+1$st try. This gives us the equation:</p>\n\n<p>$$P(N+1,0) = 0.3P(N) = 0.3P(N,0) + 0.3P(N,1) + 0.3P(N,2)$$</p>\n\n<p>Via similar logic, we can show that $P(N+1,1)$ and $P(N+1,2)$ satisfy the following equations:</p>\n\n<p>$$P(N+1,1) = 0.7P(N, 0)$$</p>\n\n<p>and</p>\n\n<p>$$P(N+1,2) = 0.7P(N, 1)$$</p>\n\n<p>Substituting these into the original equation for $P(N+1,0)$, we get:</p>\n\n<p>$$P(N+1, 0) = 0.3P(N,0) + 0.21P(N-1, 0) + 0.147P(N-2,0)$$</p>\n\n<p>Now, this is just a standard <a href=\"https://en.wikipedia.org/wiki/Recurrence_relation\" rel=\"nofollow noreferrer\">linear recurrence relation</a>. If $\\lambda_1$, $\\lambda_2$, and $\\lambda_3$ are the roots of $z^3 - 0.3z^2-0.21z-0.147=0$, then there are some constants $a_1, a_2, a_3$ which you can solve for so that</p>\n\n<p>$$P(N, 0) = a_1\\lambda_1^N + a_2\\lambda_2^N + a_3\\lambda_3^N$$</p>\n\n<p>Since $P(N) = P(N+1,0)/0.3$, this gives a similar formula for $P(N)$ (and ultimately for $1-P(N)$, the probability a success does occur 3 consecutive times). 
</p>\n" }, { "answer_id": 2095652, "score": 1, "text": "<p>Let $p=0.7$ be the probability of success, $\\mathrm{f}(n)$ be the probability of a sequence of length $n$ having 3 consecutive successes and $\\mathrm{g}(n)=1-\\mathrm{f}(n)$, where $\\mathrm{g}(n)$ corresponds to the probability of a sequence of length $n$ having at most 2 consecutive successes.</p>\n\n<p>It holds $\\mathrm{g}(0) = \\mathrm{g}(1) = \\mathrm{g}(2) = 1$.</p>\n\n<p>Then $\\mathrm{g}(n) = (1-p)\\mathrm{g}(n-1) + (1-p)p\\mathrm{g}(n-2) + (1-p)p^2\\mathrm{g}(n-3)$.</p>\n\n<p>This holds because all sequences of length $n$ that have at most 2 consecutive successes can be constructed from (1) all such sequences of length $n-1$ suffixed by a failure, plus (2) all such sequences of length $n-2$ suffixed by a failure and a success, plus (3) all such sequences of length $n-3$ suffixed by a failure and two successes.</p>\n" }, { "answer_id": 3809188, "score": 0, "text": "<p>I saw that the answers to the question include either regex (Automata Theory) or linear recurrences (Linear Algebra), so for those interested I decided to post <a href=\"https://people.math.carleton.ca/%7Ebsteinbg/classes/automata2009/generatingfunctions.pdf\" rel=\"nofollow noreferrer\">this paper</a> on generating functions that neatly explains the interconnection between them.</p>\n" }, { "answer_id": 2087355, "score": 4, "text": "<blockquote>\n  <p>Let's denote the event under consideration with $a$\n  \\begin{align*}\nP(X=a)=0.7\n\\end{align*}\n  and we denote the complementary event with $b$.</p>\n</blockquote>\n\n<p>We are looking for the words of length $n$ which do not contain three or more consecutive $a$'s, together with their probability of occurrence. 
The result is $1$ minus this probability.</p>\n\n<blockquote>\n  <p>We can describe the set of these invalid words as the words built from the alphabet $V=\\{a,b\\}$ which contain at most one or two consecutive $a$'s.</p>\n  \n  <p>These are words</p>\n  \n  <ul>\n  <li><p>starting with zero or more $b$'s:$$b^*$$</p></li>\n  <li><p>followed by zero or more occurrences of $a$ or $aa$ each followed by one or more $b$'s: $$(ab^+|aab^+)^*$$</p></li>\n  <li><p>and terminating with zero, one or two $a$'s\n  $$(\\varepsilon|a|aa)$$</p></li>\n  </ul>\n  \n  <p>We obtain</p>\n  \n  <p>\\begin{align*}\nb^*(ab^+|aab^+)^*(\\varepsilon|a|aa)\\tag{1}\n\\end{align*}</p>\n</blockquote>\n\n<p>The regular expression (1) generates all invalid words in a <em>unique</em> manner. In such cases we can use it to derive a generating function $$\\sum_{n=0}^\\infty a_n z^n$$ with\n$a_n$ giving the number of invalid words of length $n$.</p>\n\n<p>In order to do so all we need to know is the <em><a href=\"https://en.wikipedia.org/wiki/Geometric_series#Geometric_power_series\" rel=\"nofollow noreferrer\">geometric series expansion</a></em> since the $star$ operator\n\\begin{align*}\na^*=\\left(\\varepsilon|a|a^2|a^3|\\cdots\\right)\\qquad\\text{ translates to }\\qquad 1+z+z^2+z^3+\\cdots=\\frac{1}{1-z}\n\\end{align*}</p>\n\n<p>Accordingly $a^+=aa^*$ translates to $\\frac{z}{1-z}$ and alternatives like $(\\varepsilon|a|aa)$ can be written as $1+z+z^2$.</p>\n\n<blockquote>\n  <p>We translate the regular expression (1) into a generating function (abusing the notation somewhat to show some intermediate steps).</p>\n  \n  <p>Since we want to calculate the probabilities of occurrence of $P(X=a)$ we keep track of $a$ and $b$ by treating them as corresponding factors in the generating function.</p>\n  \n  <p>\\begin{align*}\nb^*\\left(ab^+|aab^+\\right)^*(\\varepsilon|a|aa)\n&amp;\\longrightarrow \\quad 
\\frac{1}{1-bz}\\left(\\left.\\frac{abz^2}{1-bz}\\right|\\frac{a^2bz^3}{1-bz}\\right)^*\\left(1+az+a^2z^2\\right)\\\\\n&amp;\\longrightarrow \\quad \\frac{1}{1-bz}\\left(\\frac{abz^2+a^2bz^3}{1-bz}\\right)^*(1+az+a^2z^2)\\\\\n&amp;\\longrightarrow \\quad \\frac{1}{1-bz}\\cdot\\frac{1}{1-\\frac{abz^2+a^2bz^3}{1-bz}}(1+az+a^2z^2)\\\\\n&amp;\\quad\\quad=\\frac{1+az+a^2z^2}{1-bz-abz^2-a^2bz^3}\\tag{1}\n\\end{align*}</p>\n</blockquote>\n\n<p>$$ $$</p>\n\n<blockquote>\n  <p>We conclude: The number of invalid words is given by (1). So, the number of valid words is the number of all words minus the number of invalid words. We obtain the generating function $A(z)$\n  \\begin{align*}\nA(z)&amp;=\\sum_{n=0}^\\infty(a+b)^nz^n-\\frac{1+az+a^2z^2}{1-bz-abz^2-a^2bz^3}\\\\\n&amp;=\\frac{1}{1-(a+b)z}-\\frac{1+az+a^2z^2}{1-bz-abz^2-a^2bz^3}\\\\\n&amp;=\\frac{a^3z^3}{(1-(a+b)z)(1-bz-abz^2-a^2bz^3)}\\\\\n&amp;=a^3z^3+a^3(a+2b)z^4+a^3(\\color{blue}{1}a^2+\\color{blue}{4}ab+\\color{blue}{3}b^2)z^5\\\\\n&amp;\\qquad +a^3(a+b)^2(a+4b)z^6+a^3(a^4+7a^3b+18a^2b^2+16ab^3+5b^4)z^7+\\cdots\n\\end{align*}</p>\n  \n  <p>The expansion was done with the help of Wolfram Alpha. We see that e.g. 
the number of valid words of length $5$ is $\\color{blue}{1}+\\color{blue}{4}+\\color{blue}{3}=8$.</p>\n</blockquote>\n\n<p>$$ $$</p>\n\n<blockquote>\n <p>Out of $2^5=32$ binary words of length $5$ there are $8$ valid words which are marked $\\color{blue}{\\text{blue}}$ in the table below.</p>\n \n <p>We obtain\n \\begin{array}{cccc}\n\\color{blue}{aaa}aa\\qquad&amp;ab\\color{blue}{aaa}\\qquad&amp;b\\color{blue}{aaa}a\\qquad&amp;bb\\color{blue}{aaa}\\\\\n\\color{blue}{aaa}ab\\qquad&amp;abaab\\qquad&amp;b\\color{blue}{aaa}b\\qquad&amp;bbaab\\\\\n\\color{blue}{aaa}ba\\qquad&amp;ababa\\qquad&amp;baaba\\qquad&amp;bbaba\\\\\n\\color{blue}{aaa}bb\\qquad&amp;ababb\\qquad&amp;baabb\\qquad&amp;bbabb\\\\\naabaa\\qquad&amp;abbaa\\qquad&amp;babaa\\qquad&amp;bbbaa\\\\\naabab\\qquad&amp;abbab\\qquad&amp;babab\\qquad&amp;bbbab\\\\\naabba\\qquad&amp;abbba\\qquad&amp;babba\\qquad&amp;bbbba\\\\\naabbb\\qquad&amp;abbbb\\qquad&amp;babbb\\qquad&amp;bbbbb\\\\\n\\end{array}</p>\n</blockquote>\n\n<p>We denote with $[z^n]$ the coefficient of $z^n$ in a series.</p>\n\n<blockquote>\n <p>We conclude</p>\n \n <ul>\n <li>The number of occurrences of wanted words of size $n$ is the coefficient of $z^n$ of $A(z)$ evaluated at $a=b=1$ and given as OEIS sequence <em><a href=\"http://oeis.org/A050231\" rel=\"nofollow noreferrer\">050231</a></em>.</li>\n </ul>\n \n <p>\\begin{align*}\n\\left.[z^n]A(z)\\right|_{a=b=1}\n\\end{align*}</p>\n \n <ul>\n <li><p>The wanted probability of valid words of size $n$ is the coefficient of $z^n$ of $A(z)$ evaluated at $a=0.7,b=0.3$.\n \\begin{align*}\n\\left.[z^n]A(z)\\right|_{a=0.7,b=0.3}\n\\end{align*}</p></li>\n <li><p>We obtain for $n=0$ up to $n=7$\n 
\\begin{array}{c|cl}\nn&amp;\\left.[z^n]A(z)\\right|_{a=b=1}&amp;\\left.[z^n]A(z)\\right|_{a=0.7,b=0.3}\\\\\n\\hline\n0&amp;0&amp;0\\\\\n1&amp;0&amp;0\\\\\n2&amp;0&amp;0\\\\\n3&amp;1&amp;0.343\\\\\n4&amp;3&amp;0.4459\\\\\n5&amp;8&amp;0.5488\\\\\n6&amp;20&amp;0.6517\\\\\n7&amp;47&amp;0.7193\\\\\n\\end{array}</p></li>\n </ul>\n</blockquote>\n" }, { "answer_id": 2094765, "score": 2, "text": "<p>I would propose an alternative - maybe easier - approach, trying to give a formula that provides a generalized answer for the probability of three events in a row, given the probability $p $ that the event occurs, as a function of the sample size $n$. As shown below, this approach may also be useful to address an additional issue of this problem highlighted in one of the comments, i.e. the sample size needed to \"guarantee\" a success with a given confidence level. </p>\n\n<p>Let us call $P (n) $ the probability that our event occurs three times in a row within a sequence of $n $ trials. Also, for each trial, let us denote as Y the case that our event occurs, and as N the case that it does not occur. So, in the specific case described in the OP, we are looking for the probability of getting a triple YYY in a sequence of $n $ elements when $p=0.7$.</p>\n\n<p>As a general rule, we can note that, passing from a sample size $n $ to a sample size $n+1$, the probability that our event occurs three times in a row increases only because there is the possibility that the triple YYY, not obtained in the first $n $ trials, occurs as a final triple in the sequence of $n+1$ trials. So, to calculate $P (n+1 )$ from $ P (n) $, we simply have to add the probability that the triple in a row is not obtained until a final sequence NYYY. Note that the N preceding the triple YYY is strictly necessary to satisfy the condition that the triple has not been obtained in the first $n $ trials. </p>\n\n<p>We can then start with $n=3$, for which we trivially have $P (3)=p^3$. 
For $n=4$, we have to add to $P (3) $ the probability of getting a sequence NYYY, i.e. we have to add $p^3(1-p)$. Similarly, for $n=5$, we have to add to $P (4) $ the probability of getting a final sequence NYYY (note that this is not affected by what happens in the first of the five trials), which means that we must add $p^3(1-p)$ as in the previous case. Lastly, for $n=6$, we have again to add to $P (5) $ the probability of getting a final sequence NYYY (note that, as above, this is not affected by what happens in the first and in the second of the six trials). This leads to a further increase by $p^3(1-p)$. So, for this initial group of four values of $n$ in our probability progression, we directly obtain the recurrence</p>\n\n<p>$$P (n)=P (n-1)+ p^3 (1-p)\\,\\,\\,\\, \\text {(for n=4 to 6)}$$</p>\n\n<p>Accordingly, for the specific case $p=0.7$, this formula gives the initial values </p>\n\n<p>\\begin{aligned} P(3) &amp; =0.7^3=0.343 \\\\\nP (4) &amp; = 0.7^3+0.3 \\cdot 0.7^3 = 0.4459 \\\\\nP (5) &amp; = 0.7^3+2\\cdot 0.3 \\cdot 0.7^3 = 0.5488 \\\\\nP (6) &amp; = 0.7^3+3\\cdot 0.3 \\cdot 0.7^3 = 0.6517 \\\\\n\\end{aligned} </p>\n\n<p>already obtained in a previous answer via a different method.</p>\n\n<p>Now let us consider the values of $n \\geq 7$. Here something changes, because the fact that the sequence ends with NYYY does not imply that the triple YYY has not occurred before, i.e. in the first $n-4$ trials. So, at each step, the probability of getting a final NYYY must be multiplied by the probability of failure in the first $n-4$ trials, which is $1-P (n-4 ) $. Therefore, for $n=7$ we have to add to $P (6) $ the probability given by $p^3(1-p) [1-P(3)]\\,\\,$, for $n=8$ we have to add to $P (7) $ the probability given by $p^3(1-p) [1-P(4)] \\,\\, $, and so on. This leads to the recurrence</p>\n\n<p>$$P (n)=P (n-1)+ p^3(1-p)[1-P(n-4)]\\,\\,\\,\\, \\text{(for n}\\, \\geq \\text {7)}$$</p>\n\n<p>We can now try to get a closed form for this last recurrence. 
Let us set, for simplicity, $p^3=j$ and $(1-p) =k \\,\\, $. The recurrence becomes </p>\n\n<p>$$P (n)=P (n-1)+ jk-jk \\,P(n-4)\\, \\,$$</p>\n\n<p>Note that the four values of $P (n)$ already determined can be written as $$P (3)=j $$\n$$P (4)=j +jk$$\n$$P (5)=j +2jk$$\n$$P (6)=j +3jk$$</p>\n\n<p>so that they contain only terms of degree $j $ or $jk $. Continuing our probability progression and focusing on the second group of four values of $n$ (from $n=7\\,\\, $ to $n=10 \\,\\, $), new terms of degree $j^2k $ and $j^2k^2$ appear applying the recurrence above (for the sake of readability, here I directly provide the resulting expressions avoiding the rather tedious step-by-step calculations):</p>\n\n<p>\\begin{aligned} P (7) &amp; = P(6)+ jk-jk P (3)=j+4jk-j^2k \\\\\nP (8) &amp; = P(7)+ jk-jk P (4)= j+ 5jk -2j^2k-j^2k^2 \\\\\nP (9) &amp; = P(8) + jk-jk P (5)= j+ 6jk -3j^2k-3j^2k^2 \\\\\nP (10) &amp; = P (9) + jk-jk P (6)= j+ 7jk -4j^2k-6j^2k^2 \\\\\n\\end{aligned}</p>\n\n<p>A pattern begins to emerge in these expansions. For the specific case $p=0.7$,\n$$P (7)\\approx 0.7193$$\n$$P (8)\\approx 0.7763$$\n$$P (9) \\approx 0.8228$$\n$$P (10) \\approx 0.8586$$</p>\n\n<p>We can repeat the procedure, applying the same recurrence and continuing with the third group of four values of $n $ (i.e., $n=11$ to $14$). Again, new terms appear, in this case of degree $j^3k^2$ and $j^3 k^3$ (as above, I directly provide the resulting expressions):</p>\n\n<p>\\begin{align*} P(11) &amp;=j+8jk-5j^2k-10j^2k^2+j^3k^2 \\\\ P(12) &amp;=j+9jk-6j^2k-15j^2k^2+3j^3k^2+j^3k^3 \\\\P(13) &amp; = j+10jk-7j^2k -21j^2k^2+6j^3k^2+4j^3k^3 \\\\\nP (14) &amp; = j+11jk-8j^2k-28j^2k^2+10j^3k^2+10j^3k^3\\\\\n\\end{align*}</p>\n\n<p>The final pattern is therefore as follows. Our expansions of $P (n) $ include:</p>\n\n<ul>\n<li><p>in the odd positions, $\\lfloor (n+1)/4 \\rfloor \\,\\, $ terms of the form $$(-1)^{i-1} \\binom{n-3i}{i-1} j^i k^{i-1} \\,\\, $$ where $$i=1,2... 
\\lfloor (n+1)/4 \\rfloor \\,\\, $$ For example, in the expansion of $P (14)$ there are $ \\lfloor (14+1)/4 \\rfloor =3 \\, \\,\\, $ terms of this form, which are $j $, $-8j^2k $, and $10 j^3k^2$;</p></li>\n<li><p>in the even positions, $\\lfloor n/4 \\rfloor \\,\\, $ terms of the form $$(-1)^{i-1} \\binom{n-3i}{i} j^i k^i \\,\\, $$ where $$i=1,2... \\lfloor n/4 \\rfloor$$</p>\n\n<p>Accordingly, taking again for example the expansion of $P (14)$, there are $ \\lfloor 14/4 \\rfloor =3 \\,$ terms of this form, which are $11jk$, $-28j^2k^2$, and $10 j^3k^3$.</p></li>\n</ul>\n\n<p>Therefore, we obtain the following general expression for $P (n) $:</p>\n\n<p>$$P (n)=\\sum_{i=1}^{\\lfloor (n+1)/4 \\rfloor } (-1)^{i-1} \\binom{n-3i}{i-1} j^{i}k^{i-1} +\\sum_{i=1}^{\\lfloor n/4 \\rfloor } (-1)^{i-1} \\binom{n-3i}{i} j^i k^i $$</p>\n\n<p>Recalling that $j=p^3$ and $k=(1-p) $, we finally get</p>\n\n<p>$$P (n)=\\sum_{i=1}^{\\lfloor (n+1)/4 \\rfloor } (-1)^{i-1} \\binom{n-3i}{i-1} p^{3i}(1-p)^{i-1} + \\sum_{i=1}^{\\lfloor n/4 \\rfloor } (-1)^{i-1} \\binom{n-3i}{i} p^{3i} (1-p)^i $$</p>\n\n<p>Setting $p=0.7$, we obtain the probability asked for in the OP, which is</p>\n\n<p>$$P (n)=\\sum_{i=1}^{\\lfloor (n+1)/4 \\rfloor } (-1)^{i-1} \\binom{n-3i}{i-1} 0.7^{3i} \\cdot 0.3^{i-1} + \\sum_{i=1}^{\\lfloor n/4 \\rfloor } (-1)^{i-1} \\binom{n-3i}{i} 0.7^{3i}\\cdot 0.3^i $$</p>\n\n<p>Also note that the two sums have the same number of terms when $n \\not\\equiv -1 \\,\\, \\text {mod}\\,\\, 4$, whereas the first sum has one term more than the second one when $n \\equiv -1 \\,\\,\\text {mod} \\, 4$, because in this case $(n+1)/4$ is an integer. 
This term is given by</p>\n\n<p>$$\\displaystyle (-1)^{\\frac{(n+1)}{4}-1} \\binom{n-\\frac{3(n+1)}{4}}{ \\frac{(n+1)}{4}-1}\\, 0.7^{\\frac{3(n+1)}{4}} \\, 0.3^{\\frac{(n+1)}{4}} $$ \n$$\\displaystyle = (-1)^{\\frac{(n-3)}{4}} \\binom{\\frac{(n-3)}{4}}{\\frac{(n-3)}{4}}\\, 0.7^{\\frac{3(n+1)}{4}} \\, 0.3^{\\frac{(n+1)}{4}} $$\n$$\\displaystyle = (-1)^{\\frac{(n-3)}{4}} \\, 0.7^{\\frac{3(n+1)}{4}} \\, 0.3^{\\frac{(n+1)}{4}} $$</p>\n\n<p>So the probability asked in the OP can be alternatively expressed as</p>\n\n<p>$$P (n)=\\sum_{i=1}^{\\lfloor n/4 \\rfloor } (-1)^{i-1} \\binom{n-3i}{i}\\, 0.7^{3i}\\, 0.3^{i} \\left( \\frac {i}{0.3(n-4 i+1)} +1 \\right) + S$$</p>\n\n<p>where</p>\n\n<p>$$S = \\begin{cases} 0 &amp; \\mbox{if } n \\not\\equiv -1 \\, \\text {mod}\\, 4\\\\\n (-1)^{\\frac{(n-3)}{4}} \\, 0.7^{\\frac{3(n+1)}{4}} \\, 0.3^{\\frac{(n+1)}{4}} &amp; \\mbox{if } n \\equiv -1 \\, \\text {mod}\\, 4 \\end{cases}$$</p>\n\n<p>It could also be observed that $|S|$ is relatively small and rapidly decreases for increasing $n$: its value is $\\approx 0.0106\\, \\,\\, $ for $n=7$, becoming $\\approx 0.00109\\, \\,\\, $ for $n=11$ and $\\approx 0.000112\\,\\,\\, $ for $n=15$, so that its contribution to the overall probability is nearly negligible. The last expansion without the $S$ term can therefore be considered a valid approximation even for the case $n \\equiv -1 \\, \\text {mod}\\, 4$. </p>\n\n<p>The probabilities of three events in a row given by these formulas for $p=0.7$ and $n=3$ to $10$ have already been provided above. As expected, for increasing $n $, the value of $P (n)$ approaches $1$. The first value for which the probability achieves $95\\%$ is $n=15$, since $P (15) \\approx 0.9549\\,\\,\\, $ (cf. the WA evaluation <a href=\"http://www.wolframalpha.com/input/?i=0.7%5E(12)%200.3%5E(4)%20%2B%20%20sum%20%20%5B(-1)%5E(i-1)%20(((15-3i)%20choose%20i)%200.7%5E(3i)%200.3%5Ei%20(i%2F(0.3(15-4%20i%2B1))%2B1)%20%20%20),%7Bi,%2015%2F4%7D%5D\" rel=\"nofollow noreferrer\">here</a>). 
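The recurrences above are easy to check numerically. As a hedged sketch (my own code, not part of the original answer; the function name is invented), one can seed P(3) = p^3 and apply the NYYY correction step at each size, using the convention P(k) = 0 for k < 3 so that a single loop also reproduces the initial group of values:

```python
# Iterate P(m) = P(m-1) + p^3*(1-p)*(1 - P(m-4)), with P(k) = 0 for k < 3;
# for m = 4..6 the correction term vanishes, matching the seeded recurrence.
def p_three_row(n, p=0.7):
    c = p**3 * (1.0 - p)
    P = {0: 0.0, 1: 0.0, 2: 0.0, 3: p**3}
    for m in range(4, n + 1):
        P[m] = P[m - 1] + c * (1.0 - P[m - 4])
    return P[n]
```

This reproduces P(3) = 0.343, P(7) ~ 0.7193 and P(10) ~ 0.8586 from above, and can be used to locate the sample size at which any desired confidence level is first reached. 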
The probability achieves $97.5\\%$ and $99\\%$ for $n=18$ and $n=22$, respectively, since $P(18) \\approx 0.9772\\,\\,\\, $ and $P(22) \\approx 0.9909\\,\\,\\, $, as confirmed by WA <a href=\"http://www.wolframalpha.com/input/?i=sum%20%20%5B(-1)%5E(i-1)%20(((18-3i)%20choose%20i)%200.7%5E(3i)%200.3%5Ei%20(i%2F(0.3(18-4%20i%2B1))%2B1)%20%20%20),%7Bi,%2018%2F4%7D%5D\" rel=\"nofollow noreferrer\">here</a> and <a href=\"http://www.wolframalpha.com/input/?i=sum%20%20%5B(-1)%5E(i-1)%20(((22-3i)%20choose%20i)%200.7%5E(3i)%200.3%5Ei%20(i%2F(0.3(22-4%20i%2B1))%2B1)%20%20%20),%7Bi,%2022%2F4%7D%5D\" rel=\"nofollow noreferrer\">here</a>. </p>\n" }, { "answer_id": 2083011, "score": 1, "text": "<p>The issue with looking at opportunities for success is that they are not completely independent. For example, in a row of size 4, if the first three aren't all successes, this greatly decreases the chance that the last three aren't all successes (the only way this can happen is if only the first trial was a failure).</p>\n\n<p>You can still get this approach to work (via correcting for events which are dependent via some sort of inclusion-exclusion), but generally a cleaner method for these types of problems is to write down a recurrence (or more generally, model them as a <a href=\"https://en.wikipedia.org/wiki/Markov_chain\" rel=\"nofollow noreferrer\">finite state Markov chain</a>).</p>\n\n<p>Let's let $P(N)$ be the probability a row of length $N$ has no 3 consecutive successes. To help us write down a recurrence, we'll break $P(N)$ down into 3 smaller probabilities. Let's let $P(N, 0)$ be the probability a row of length $N$ has no 3 consecutive successes AND that the last attempt was a failure. 
Similarly, let $P(N,1)$ be the probability that a row of length $N$ has no 3 consecutive successes AND that the last attempt was a success, but the second-last attempt was a failure, and finally, let $P(N,2)$ be the probability that a row of length $N$ has no 3 consecutive successes AND the last two attempts were successes, but the third-to-last attempt was a failure. Intuitively, these different cases measure how \"close\" we are to getting 3 successes in a row. It's not too hard to see that $P(N) = P(N,0) + P(N,1) + P(N,2)$, since if the last three attempts are all successes, you've already won.</p>\n\n<p>Now, let's try to write $P(N+1, 0)$ in terms of smaller cases. The probability you have no 3 consecutive successes in a row of length $N+1$ and that your last attempt was a failure is simply the probability that there are no 3 consecutive successes in the first $N$ tries and that you fail the $N+1$st try. This gives us the equation:</p>\n\n<p>$$P(N+1,0) = 0.3P(N) = 0.3P(N,0) + 0.3P(N,1) + 0.3P(N,2)$$</p>\n\n<p>Via similar logic, we can show that $P(N+1,1)$ and $P(N+1,2)$ satisfy the following equations:</p>\n\n<p>$$P(N+1,1) = 0.7P(N, 0)$$</p>\n\n<p>and</p>\n\n<p>$$P(N+1,2) = 0.7P(N, 1)$$</p>\n\n<p>Substituting these into the original equation for $P(N+1,0)$, we get:</p>\n\n<p>$$P(N+1, 0) = 0.3P(N,0) + 0.21P(N-1, 0) + 0.147P(N-2,0)$$</p>\n\n<p>Now, this is just a standard <a href=\"https://en.wikipedia.org/wiki/Recurrence_relation\" rel=\"nofollow noreferrer\">linear recurrence relation</a>. If $\\lambda_1$, $\\lambda_2$, and $\\lambda_3$ are the roots of $z^3 - 0.3z^2-0.21z-0.147=0$, then there are some constants $a_1, a_2, a_3$ which you can solve for so that</p>\n\n<p>$$P(N, 0) = a_1\\lambda_1^N + a_2\\lambda_2^N + a_3\\lambda_3^N$$</p>\n\n<p>Since $P(N) = P(N+1,0)/0.3$, this gives a similar formula for $P(N)$ (and ultimately for $1-P(N)$, the probability a success does occur 3 consecutive times). 
</p>\n" }, { "answer_id": 2095652, "score": 1, "text": "<p>Let $p=0.7$ be the probability of success, $\\mathrm{f}(n)$ be the probability of a sequence of length $n$ having 3 consecutive successes and $\\mathrm{g}(n)=1-\\mathrm{f}(n)$, where $\\mathrm{g}(n)$ corresponds to the probability of a sequence of length $n$ having at most 2 consecutive successes.</p>\n\n<p>It holds $\\mathrm{g}(0) = \\mathrm{g}(1) = \\mathrm{g}(2) = 1$.</p>\n\n<p>Then $\\mathrm{g}(n) = (1-p)\\mathrm{g}(n-1) + (1-p)p\\mathrm{g}(n-2) + (1-p)p^2\\mathrm{g}(n-3)$.</p>\n\n<p>This holds because all sequences of length $n$ that have at most 2 consecutive successes can be constructed from (1) all such sequences of length $n-1$ suffixed by a failure plus; (2) all such sequences of length $n-2$ suffixed by a failure and a success, plus; (3) all such sequences of length $n-3$ suffixed by a failure and two successes.</p>\n" }, { "answer_id": 3809188, "score": 0, "text": "<p>I saw that the answers to the question include either regex (Automata Theory) or linear recurrences (Linear Algebra), so for those interested I decided to post <a href=\"https://people.math.carleton.ca/%7Ebsteinbg/classes/automata2009/generatingfunctions.pdf\" rel=\"nofollow noreferrer\">this paper</a> on generating functions that neatly explains the interconnection between them.</p>\n" }, { "answer_id": 2087355, "score": 4, "text": "<blockquote>\n <p>Let's denote the event under consideration with $a$\n \\begin{align*}\nP(X=a)=0.7\n\\end{align*}\n and we denote the complementary event with $b$.</p>\n</blockquote>\n\n<p>We are looking for the words of length $n$ and their probability of occurrence which do not contain three or more consecutive $a$'s. 
The result is $1$ minus this probability.</p>\n\n<blockquote>\n <p>We can describe the set of these invalid words as the words built from the alphabet $V=\\{a,b\\}$ which contain at most one or two consecutive $a$'s.</p>\n \n <p>These are words</p>\n \n <ul>\n <li><p>starting with zero or more $b$'s:$$b^*$$</p></li>\n <li><p>followed by zero or more occurrences of $a$ or $aa$ each followed by one or more $b$'s: $$(ab^+|aab^+)^*$$</p></li>\n <li><p>and terminating with zero, one or two $a$'s\n $$(\\varepsilon|a|aa)$$</p></li>\n </ul>\n \n <p>We obtain</p>\n \n <p>\\begin{align*}\nb^*(ab^+|aab^+)^*(\\varepsilon|a|aa)\\tag{1}\n\\end{align*}</p>\n</blockquote>\n\n<p>The regular expression (1) generates all invalid words in a <em>unique</em> manner. In such cases we can use it to derive a generating function $$\\sum_{n=0}^\\infty a_n z^n$$ with\n$a_n$ giving the number of invalid words of length $n$.</p>\n\n<p>In order to do so all we need to know is the <em><a href=\"https://en.wikipedia.org/wiki/Geometric_series#Geometric_power_series\" rel=\"nofollow noreferrer\">geometric series expansion</a></em> since the $star$ operator\n\\begin{align*}\na^*=\\left(\\varepsilon|a|a^2|a^3|\\cdots\\right)\\qquad\\text{ translates to }\\qquad 1+z+z^2+z^3+\\cdots=\\frac{1}{1-z}\n\\end{align*}</p>\n\n<p>Accordingly $a^+=aa^*$ translates to $\\frac{z}{1-z}$ and alternatives like $(\\varepsilon|a|aa)$ can be written as $1+z+z^2$.</p>\n\n<blockquote>\n <p>We translate the regular expression (1) into a generating function (by mixing up somewhat the symbolic to provide some intermediate steps).</p>\n \n <p>Since we want to calculate the probabilities of occurrence of $P(X=a)$ we keep track of $a$ and $b$ by respecting them as corresponding factors in the generating function.</p>\n \n <p>\\begin{align*}\nb^*\\left(ab^+|aab^+\\right)^*(\\varepsilon|a|aa)\n&amp;\\longrightarrow \\quad 
\\frac{1}{1-bz}\\left(\\left.\\frac{abz^2}{1-bz}\\right|\\frac{a^2bz^3}{1-bz}\\right)^*\\left(1+az+a^2z^2\\right)\\\\\n&amp;\\longrightarrow \\quad \\frac{1}{1-bz}\\left(\\frac{abz^2+a^2bz^3}{1-bz}\\right)^*(1+az+a^2z^2)\\\\\n&amp;\\longrightarrow \\quad \\frac{1}{1-bz}\\cdot\\frac{1}{1-\\frac{abz^2+a^2bz^3}{1-bz}}(1+az+a^2z^2)\\\\\n&amp;\\quad\\quad=\\frac{1+az+a^2z^2}{1-bz-abz^2-a^2bz^3}\\tag{1}\n\\end{align*}</p>\n</blockquote>\n\n<p>$$ $$</p>\n\n<blockquote>\n <p>We conclude: The number of invalid words is given by (1). So, the number of valid words is the number of all words minus the number of invalid words. We obtain the generating function $A(z)$\n \\begin{align*}\nA(z)&amp;=\\sum_{n=0}^\\infty(a+b)^nz^n-\\frac{1+az+a^2z^2}{1-bz-az^2-a^2z^3}\\\\\n&amp;=\\frac{1}{1-(a+b)z}-\\frac{1+az+a^2z^2}{1-bz-az^2-a^2z^3}\\\\\n&amp;=\\frac{a^3z^3}{(1-(a+b)z)(1-bz-abz^2-a^2bz^3)}\\\\\n&amp;=a^3z^3+a^3(a+2b)z^4+a^3(\\color{blue}{1}a^2+\\color{blue}{4}ab+\\color{blue}{3}b^2)z^5\\\\\n&amp;\\qquad a^3(a+b)^2(a+4b)z^6+a^3(a^4+7a^3b+18a^2b^2+16ab^3+5b^4)z^7+\\cdots\n\\end{align*}</p>\n \n <p>The expansion was done with the help of Wolfram Alpha. We see that e.g. 
the number of valid words of length $5$ is $\\color{blue}{1}+\\color{blue}{4}+\\color{blue}{3}=8$.</p>\n</blockquote>\n\n<p>$$ $$</p>\n\n<blockquote>\n <p>Out of $2^5=32$ binary words of length $5$ there are $8$ valid words which are marked $\\color{blue}{\\text{blue}}$ in the table below.</p>\n \n <p>We obtain\n \\begin{array}{cccc}\n\\color{blue}{aaa}aa\\qquad&amp;ab\\color{blue}{aaa}\\qquad&amp;b\\color{blue}{aaa}a\\qquad&amp;bb\\color{blue}{aaa}\\\\\n\\color{blue}{aaa}ab\\qquad&amp;abaab\\qquad&amp;b\\color{blue}{aaa}b\\qquad&amp;bbaab\\\\\n\\color{blue}{aaa}ba\\qquad&amp;ababa\\qquad&amp;baaba\\qquad&amp;bbaba\\\\\n\\color{blue}{aaa}bb\\qquad&amp;ababb\\qquad&amp;baabb\\qquad&amp;bbabb\\\\\naabaa\\qquad&amp;abbaa\\qquad&amp;babaa\\qquad&amp;bbbaa\\\\\naabab\\qquad&amp;abbab\\qquad&amp;babab\\qquad&amp;bbbab\\\\\naabba\\qquad&amp;abbba\\qquad&amp;babba\\qquad&amp;bbbba\\\\\naabbb\\qquad&amp;abbbb\\qquad&amp;babbb\\qquad&amp;bbbbb\\\\\n\\end{array}</p>\n</blockquote>\n\n<p>We denote with $[z^n]$ the coefficient of $z^n$ in a series.</p>\n\n<blockquote>\n <p>We conclude</p>\n \n <ul>\n <li>The number of occurrences of wanted words of size $n$ is the coefficient of $z^n$ of $A(z)$ evaluated at $a=b=1$ and given as OEIS sequence <em><a href=\"http://oeis.org/A050231\" rel=\"nofollow noreferrer\">050231</a></em>.</li>\n </ul>\n \n <p>\\begin{align*}\n\\left.[z^n]A(z)\\right|_{a=b=1}\n\\end{align*}</p>\n \n <ul>\n <li><p>The wanted probability of valid words of size $n$ is the coefficient of $z^n$ of $A(z)$ evaluated at $a=0.7,b=0.3$.\n \\begin{align*}\n\\left.[z^n]A(z)\\right|_{a=0.7,b=0.3}\n\\end{align*}</p></li>\n <li><p>We obtain for $n=0$ up to $n=7$\n 
\\begin{array}{c|cl}\nn&amp;\\left.[z^n]A(z)\\right|_{a=b=1}&amp;\\left.[z^n]A(z)\\right|_{a=0.7,b=0.3}\\\\\n\\hline\n0&amp;0&amp;0\\\\\n1&amp;0&amp;0\\\\\n2&amp;0&amp;0\\\\\n3&amp;1&amp;0.343\\\\\n4&amp;3&amp;0.4459\\\\\n5&amp;8&amp;0.5488\\\\\n6&amp;20&amp;0.6517\\\\\n7&amp;47&amp;0.7193\\\\\n\\end{array}</p></li>\n </ul>\n</blockquote>\n" }, { "answer_id": 2094765, "score": 2, "text": "<p>I would propose an alternative - maybe easier - approach, trying to give a formula that provides a generalized answer for the probability of three events in a row, given the probability $p $ that the event occurs, as a function of the sample size $n$. As shown below, this approach may also be useful to determine an additional issue of this problem highlihted in one of the comments, i.e. the sample size needed to \"guarantee\" a success with a given confidence level. </p>\n\n<p>Let us call $P (n) $ the probability that our event occurs three times in a row within a sequence of $n $ trials. Also, for each trial, let us denote as Y the case that the our event occurs, and as N the case that it does not occur. So, in the specific case described in the OP, we are looking for the probability of getting a triple YYY in a sequence of $n $ elements when $p=0.7$.</p>\n\n<p>As a general rule, we can note that, passing from a sample size $n $ to a sample size $n+1$, the probability that our event occurs three times in a row increases only because there is the possibility that the triple YYY, not obtained in the first $n $ trials, occurs as a final triple in the sequence of $n+1$ trials. So, to calculate $P (n+1 )$ from $ P (n) $, we simply have to add the probability that the triple in a row is not obtained until a final sequence NYYY. Note that the N preceding the triple YYY is strictly necessary to satisfy the condition that the triple has not been obtained in the first $n $ trials. </p>\n\n<p>We can then start with $n=3$, for which we trivially have $P (3)=p^3$. 
For $n=4$, we have to add to $P (3) $ the probability to get a sequence NYYY, i.e. we have to add $p^3(1-p)$. Similarly, for $n=5$, we have to add to $P (4) $ the probability to get a final sequence NYYY (note that this is not affected by what happens in the first of the five trials), which means that we must add $p^3(1-p)$ as in the previous case. Lastly, for $n=6$, we have again to add to $P (5) $ the probability to get a final sequence NYYY (note that, as above, this is not affected by what happens in the first and in the second of the six trials). This leads to a further increase by $p^3(1-p)$. So, for this initial group of four values of in our probability progression, we directly obtain the recurrence</p>\n\n<p>$$P (n+1)=P (n)+ p^3 (1-p)\\,\\,\\,\\, \\text {(for n=3 to 6)}$$</p>\n\n<p>Accordingly, for the specific case $p=0.7$, this formula gives the initial values </p>\n\n<p>\\begin{aligned} P(3) &amp; =0.7^3=0.343 \\\\\nP (4) &amp; = 0.7^3+0.3 \\cdot 0.7^3 = 0.4459 \\\\\nP (5) &amp; = 0.7^3+2\\cdot 0.3 \\cdot 0.7^3 = 0.5488 \\\\\nP (6) &amp; = 0.7^3+3\\cdot 0.3 \\cdot 0.7^3 = 0.6517 \\\\\n\\end{aligned} </p>\n\n<p>already obtained in a previous answer via a different method.</p>\n\n<p>Now let us consider the values of $n \\geq 7$. Here something changes, because the fact that the sequence ends with NYYY does not imply that the triple YYY has not occurred before, i.e. in the first $n-4$ trials. So, at each step, the probability of getting a final NYYY must be multiplied to the probability of failure in the first $n-4$ trials, which is $1-P (n-4 ) $. Therefore, for $n=7$ we have to add to $P (6) $ the probability given by $p^3(1-p) [1-P(3)]\\,\\,$, for $n=8$ we have to add to $P (7) $ the probability given by $p^3(1-p) [1-P(4)] \\,\\, $, and so on. This leads to the recurrence</p>\n\n<p>$$P (n+1)=P (n)+ p^3(1-p)[1-P(n-4)]\\,\\,\\,\\, \\text{(for n}\\, \\geq \\text {7)}$$</p>\n\n<p>We can now try to get a closed form for this last recurrence. 
Let us set, by simplicity, $p^3=j$ and $(1-p) =k \\,\\, $. The recurrence becomes </p>\n\n<p>$$P (n+1)=P (n)+ jk-jk \\,P(n-4)\\, \\,$$</p>\n\n<p>Note that the four values of $P (n)$ already determined can be written as $$P (3)=j $$\n$$P (4)=j +jk$$\n$$P (5)=j +2jk$$\n$$P (6)=j +3jk$$</p>\n\n<p>so that they contain only terms of degree $j $ or $jk $. Continuing our probability progression and focusing on the second group of four values of $n$ (from $n=7\\,\\, $ to $n=10 \\,\\, $), new terms of degree $j^2k $ and $j^2k^2$ appear applying the recurrence above (for the sake of readability, here I directly provide the resulting expressions avoiding the rather tedious step-by-step calcuations):</p>\n\n<p>\\begin{aligned} P (7) &amp; = P(6)+ jk-jk P (3)=j+4jk-j^2k \\\\\nP (8) &amp; = P(7)+ jk-jk P (4)= j+ 5jk -2j^2k-j^2k^2 \\\\\nP (9) &amp; = P(8) + jk-jk P (5)= j+ 6jk -3j^2k-3j^2k^2 \\\\\nP (10) &amp; = P (9) + jk-jk P (6)= j+ 7jk -4j^2k-6j^2k^2 \\\\\n\\end{aligned}</p>\n\n<p>Some pattern begins to emerge in these expansions. For the specific case $p=0.7$,\n$$P (7)\\approx 0.7193$$\n$$P (8)\\approx 0.7763$$\n$$P (9) \\approx 0.8228$$\n$$P (10) \\approx 0.8586$$</p>\n\n<p>We can repeat the procedure, applying the same recurrence and continuing with the third group of four values of $n $ (i.e., $n=11$ to $14$). Again, new terms appear, in this case of degree $j^3k^2$ and $j^3 k^3$ (as above, I directly provide the resulting expressions):</p>\n\n<p>\\begin{align*} P(11) &amp;=j+8jk-5j^2k-10j^2k^2+j^3k^2 \\\\ P(12) &amp;=j+9jk-6j^2k-15j^2k^2+3j^3k^2+j^3k^3 \\\\P(13) &amp; = j+10jk-7j^2k -21j^2k^2+6j^3k^2+4j^3k^3 \\\\\nP (14) &amp; = j+11jk-8j^2k-28j^2k^2+10j^3k^2+10j^3k^3\\\\\n\\end{align*}</p>\n\n<p>The final pattern is therefore as follows. Our expansions of $P (n) $ include:</p>\n\n<ul>\n<li><p>in the odd positions, $\\lfloor (n+1)/4 \\rfloor \\,\\, $ terms of the form $$(-1)^{i-1} \\binom{n-3i}{i-1} j^i k^{i-1} \\,\\, $$ where $$i=1,2... 
\\lfloor (n+1)/4 \\rfloor \\,\\, $$ For example, in the expansion of $P (14)$ there are $ \\lfloor (14+1)/4 \\rfloor =3 \\, \\,\\, $ terms of this form, which are $j $, $-8j^2k $, and $10 j^3k^2$;</p></li>\n<li><p>in the even positions, $\\lfloor n/4 \\rfloor \\,\\, $ terms of the form $$(-1)^{i-1} \\binom{n-3i}{i} j^i k^i \\,\\, $$ where $$i=1,2... \\lfloor n/4 \\rfloor$$</p>\n\n<p>Accordingly, taking again for example the expansion of $P (14)$, there are $ \\lfloor 14/4 \\rfloor =3 \\,$ terms of this form, which are $11jk$, $-28j^2k^2$, and $10 j^3k^3$.</p></li>\n</ul>\n\n<p>Therefore, we obtain the following general expression for $P (n) $:</p>\n\n<p>$$P (n)=\\sum_{i=1}^{\\lfloor (n+1)/4 \\rfloor } (-1)^{i-1} \\binom{n-3i}{i-1} j^{i}k^{i-1} +\\sum_{i=1}^{\\lfloor n/4 \\rfloor } (-1)^{i-1} \\binom{n-3i}{i} j^i k^i $$</p>\n\n<p>Recalling that $j=p^3$ and $k=(1-p) $, we finally get</p>\n\n<p>$$P (n)=\\sum_{i=1}^{\\lfloor (n+1)/4 \\rfloor } (-1)^{i-1} \\binom{n-3i}{i-1} p^{3i}(1-p)^{i-1} + \\sum_{i=1}^{\\lfloor n/4 \\rfloor } (-1)^{i-1} \\binom{n-3i}{i} p^{3i} (1-p)^i $$</p>\n\n<p>Setting $p=0.7$, we obtain the probability asked for in the OP, which is</p>\n\n<p>$$P (n)=\\sum_{i=1}^{\\lfloor (n+1)/4 \\rfloor } (-1)^{i-1} \\binom{n-3i}{i-1} 0.7^{3i} \\cdot 0.3^{i-1} + \\sum_{i=1}^{\\lfloor n/4 \\rfloor } (-1)^{i-1} \\binom{n-3i}{i} 0.7^{3i}\\cdot 0.3^i $$</p>\n\n<p>Also note that the two sums have the same number of terms when $n \\not\\equiv -1 \\,\\, \\text {mod}\\,\\, 4$, whereas the first sum has one term more than the second one when $n \\equiv -1 \\,\\,\\text {mod} \\, 4$, because in this case $(n+1)/4$ is integer.
This term is given by</p>\n\n<p>$$\\displaystyle (-1)^{\\frac{(n+1)}{4}-1} \\binom{n-\\frac{3(n+1)}{4}}{ \\frac{(n+1)}{4}-1}\\, 0.7^{\\frac{3(n+1)}{4}} \\, 0.3^{\\frac{(n+1)}{4}-1} $$ \n$$\\displaystyle = (-1)^{\\frac{(n-3)}{4}} \\binom{\\frac{(n-3)}{4}}{\\frac{(n-3)}{4}}\\, 0.7^{\\frac{3(n+1)}{4}} \\, 0.3^{\\frac{(n-3)}{4}} $$\n$$\\displaystyle = (-1)^{\\frac{(n-3)}{4}} \\, 0.7^{\\frac{3(n+1)}{4}} \\, 0.3^{\\frac{(n-3)}{4}} $$</p>\n\n<p>So the probability asked for in the OP can be alternatively expressed as</p>\n\n<p>$$P (n)=\\sum_{i=1}^{\\lfloor n/4 \\rfloor } (-1)^{i-1} \\binom{n-3i}{i}\\, 0.7^{3i}\\, 0.3^{i} \\left( \\frac {i}{0.3(n-4 i+1)} +1 \\right) + S$$</p>\n\n<p>where</p>\n\n<p>$$S = \\begin{cases} 0 &amp; \\mbox{if } n \\not\\equiv -1 \\, \\text {mod}\\, 4\\\\\n (-1)^{\\frac{(n-3)}{4}} \\, 0.7^{\\frac{3(n+1)}{4}} \\, 0.3^{\\frac{(n-3)}{4}} &amp; \\mbox{if } n \\equiv -1 \\, \\text {mod}\\, 4 \\end{cases}$$</p>\n\n<p>It could also be observed that $|S|$ is relatively small and rapidly decreases for increasing $n$: its value is $\\approx 0.0353\\, \\,\\, $ for $n=7$, becoming $\\approx 0.00363\\, \\,\\, $ for $n=11$ and $\\approx 0.000374\\,\\,\\, $ for $n=15$, so that its contribution to the overall probability quickly becomes negligible. The last expansion without the $S$ term can therefore be considered a valid approximation even for the case $n \\equiv -1 \\, \\text {mod}\\, 4$. </p>\n\n<p>The probabilities of three events in a row given by these formulas for $p=0.7$ and $n=3$ to $10$ have already been provided above. As expected, for increasing $n $, the value of $P (n)$ approaches $1$. The first value for which the probability achieves $95\\%$ is $n=15$, since $P (15) \\approx 0.9549\\,\\,\\, $ (cf. the corresponding WA computation <a href=\"http://www.wolframalpha.com/input/?i=0.7%5E(12)%200.3%5E(4)%20%2B%20%20sum%20%20%5B(-1)%5E(i-1)%20(((15-3i)%20choose%20i)%200.7%5E(3i)%200.3%5Ei%20(i%2F(0.3(15-4%20i%2B1))%2B1)%20%20%20),%7Bi,%2015%2F4%7D%5D\" rel=\"nofollow noreferrer\">here</a>).
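As a cross-check, the two-sum general expression for $P(n)$ (with $j=p^3$, $k=1-p$ and the lower summation bound $i=1$) can be evaluated directly; here is a small Python sketch (mine, not part of the original answer):

```python
from math import comb

# Sketch (not from the original answer): evaluate the derived closed form
#   P(n) = sum_{i=1}^{floor((n+1)/4)} (-1)^(i-1) C(n-3i, i-1) p^(3i) (1-p)^(i-1)
#        + sum_{i=1}^{floor(n/4)}     (-1)^(i-1) C(n-3i, i)   p^(3i) (1-p)^i
def closed_form(n, p=0.7):
    j, k = p**3, 1 - p
    first = sum((-1)**(i - 1) * comb(n - 3*i, i - 1) * j**i * k**(i - 1)
                for i in range(1, (n + 1)//4 + 1))
    second = sum((-1)**(i - 1) * comb(n - 3*i, i) * j**i * k**i
                 for i in range(1, n//4 + 1))
    return first + second

print(round(closed_form(7), 4))    # 0.7193
print(round(closed_form(10), 4))   # 0.8586
```

For $p=0.7$ this returns the same values as the step-by-step recurrence, e.g. $P(5)\approx 0.5488$, $P(7)\approx 0.7193$, $P(10)\approx 0.8586$.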
The probability achieves $97.5\\%$ and $99\\%$ for $n=18$ and $n=22$, respectively, since $P(18) \\approx 0.9772\\,\\,\\, $ and $P(22) \\approx 0.9909\\,\\,\\, $, as confirmed by WA <a href=\"http://www.wolframalpha.com/input/?i=sum%20%20%5B(-1)%5E(i-1)%20(((18-3i)%20choose%20i)%200.7%5E(3i)%200.3%5Ei%20(i%2F(0.3(18-4%20i%2B1))%2B1)%20%20%20),%7Bi,%2018%2F4%7D%5D\" rel=\"nofollow noreferrer\">here</a> and <a href=\"http://www.wolframalpha.com/input/?i=sum%20%20%5B(-1)%5E(i-1)%20(((22-3i)%20choose%20i)%200.7%5E(3i)%200.3%5Ei%20(i%2F(0.3(22-4%20i%2B1))%2B1)%20%20%20),%7Bi,%2022%2F4%7D%5D\" rel=\"nofollow noreferrer\">here</a>. </p>\n" }, { "answer_id": 2083011, "score": 1, "text": "<p>The issue with looking at opportunities for success is that they are not completely independent. For example, in a row of size 4, if the first three aren't all successes, this greatly decreases the chance that the last three are all successes (the only way this can happen is if only the first trial was a failure).</p>\n\n<p>You can still get this approach to work (via correcting for events which are dependent via some sort of inclusion-exclusion), but generally a cleaner method for these types of problems is to write down a recurrence (or more generally, model them as a <a href=\"https://en.wikipedia.org/wiki/Markov_chain\" rel=\"nofollow noreferrer\">finite state Markov chain</a>).</p>\n\n<p>Let's let $P(N)$ be the probability a row of length $N$ has no 3 consecutive successes. To help us write down a recurrence, we'll break $P(N)$ down into 3 smaller probabilities. Let's let $P(N, 0)$ be the probability a row of length $N$ has no 3 consecutive successes AND that the last attempt was a failure.
Similarly, let $P(N,1)$ be the probability that a row of length $N$ has no 3 consecutive successes AND that the last attempt was a success, but the second-last attempt was a failure, and finally, let $P(N,2)$ be the probability that a row of length $N$ has no 3 consecutive successes AND the last two attempts were successes, but the third-to-last attempt was a failure. Intuitively, these different cases measure how \"close\" we are to getting 3 successes in a row. It's not too hard to see that $P(N) = P(N,0) + P(N,1) + P(N,2)$, since if the last three attempts are all successes, you've already won.</p>\n\n<p>Now, let's try to write $P(N+1, 0)$ in terms of smaller cases. The probability you have no 3 consecutive successes in a row of length $N+1$ and that your last attempt was a failure is simply the probability that there are no 3 consecutive successes in the first $N$ tries and that you fail the $N+1$st try. This gives us the equation:</p>\n\n<p>$$P(N+1,0) = 0.3P(N) = 0.3P(N,0) + 0.3P(N,1) + 0.3P(N,2)$$</p>\n\n<p>Via similar logic, we can show that $P(N+1,1)$ and $P(N+1,2)$ satisfy the following equations:</p>\n\n<p>$$P(N+1,1) = 0.7P(N, 0)$$</p>\n\n<p>and</p>\n\n<p>$$P(N+1,2) = 0.7P(N, 1)$$</p>\n\n<p>Substituting these into the original equation for $P(N+1,0)$, we get:</p>\n\n<p>$$P(N+1, 0) = 0.3P(N,0) + 0.21P(N-1, 0) + 0.147P(N-2,0)$$</p>\n\n<p>Now, this is just a standard <a href=\"https://en.wikipedia.org/wiki/Recurrence_relation\" rel=\"nofollow noreferrer\">linear recurrence relation</a>. If $\\lambda_1$, $\\lambda_2$, and $\\lambda_3$ are the roots of $z^3 - 0.3z^2-0.21z-0.147=0$, then there are some constants $a_1, a_2, a_3$ which you can solve for so that</p>\n\n<p>$$P(N, 0) = a_1\\lambda_1^N + a_2\\lambda_2^N + a_3\\lambda_3^N$$</p>\n\n<p>Since $P(N) = P(N+1,0)/0.3$, this gives a similar formula for $P(N)$ (and ultimately for $1-P(N)$, the probability a success does occur 3 consecutive times). 
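The three-state recurrence just described is easy to run directly; here is a minimal Python sketch (the function name is mine, not from the original answer), where $s_0,s_1,s_2$ play the roles of $P(N,0)$, $P(N,1)$, $P(N,2)$:

```python
# Sketch (not from the original answer): track the three "no run of 3 yet"
# states by the number of trailing successes (0, 1 or 2), exactly as in the
# recurrences P(N+1,0), P(N+1,1), P(N+1,2) above.
def prob_run_of_three(n, p=0.7):
    s0, s1, s2 = 1.0, 0.0, 0.0           # before any trial: 0 trailing successes
    for _ in range(n):
        s0, s1, s2 = (1 - p) * (s0 + s1 + s2), p * s0, p * s1
    return 1.0 - (s0 + s1 + s2)          # complement: a run of 3 did occur

print(round(prob_run_of_three(3), 4))   # 0.343
print(round(prob_run_of_three(5), 4))   # 0.5488
```

The complement $1-P(N)$ returned here is the probability that a run of three successes does occur, matching the values quoted elsewhere in this thread.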
</p>\n" }, { "answer_id": 2095652, "score": 1, "text": "<p>Let $p=0.7$ be the probability of success, $\\mathrm{f}(n)$ be the probability of a sequence of length $n$ having 3 consecutive successes and $\\mathrm{g}(n)=1-\\mathrm{f}(n)$, where $\\mathrm{g}(n)$ corresponds to the probability of a sequence of length $n$ having at most 2 consecutive successes.</p>\n\n<p>It holds $\\mathrm{g}(0) = \\mathrm{g}(1) = \\mathrm{g}(2) = 1$.</p>\n\n<p>Then $\\mathrm{g}(n) = (1-p)\\mathrm{g}(n-1) + (1-p)p\\mathrm{g}(n-2) + (1-p)p^2\\mathrm{g}(n-3)$.</p>\n\n<p>This holds because all sequences of length $n$ that have at most 2 consecutive successes can be constructed from (1) all such sequences of length $n-1$ suffixed by a failure plus; (2) all such sequences of length $n-2$ suffixed by a failure and a success, plus; (3) all such sequences of length $n-3$ suffixed by a failure and two successes.</p>\n" }, { "answer_id": 3809188, "score": 0, "text": "<p>I saw that the answers to the question include either regex (Automata Theory) or linear recurrences (Linear Algebra), so for those interested I decided to post <a href=\"https://people.math.carleton.ca/%7Ebsteinbg/classes/automata2009/generatingfunctions.pdf\" rel=\"nofollow noreferrer\">this paper</a> on generating functions that neatly explains the interconnection between them.</p>\n" }, { "answer_id": 2087355, "score": 4, "text": "<blockquote>\n <p>Let's denote the event under consideration with $a$\n \\begin{align*}\nP(X=a)=0.7\n\\end{align*}\n and we denote the complementary event with $b$.</p>\n</blockquote>\n\n<p>We are looking for the words of length $n$ and their probability of occurrence which do not contain three or more consecutive $a$'s. 
The result is $1$ minus this probability.</p>\n\n<blockquote>\n <p>We can describe the set of these invalid words as the words built from the alphabet $V=\\{a,b\\}$ which contain at most one or two consecutive $a$'s.</p>\n \n <p>These are words</p>\n \n <ul>\n <li><p>starting with zero or more $b$'s:$$b^*$$</p></li>\n <li><p>followed by zero or more occurrences of $a$ or $aa$ each followed by one or more $b$'s: $$(ab^+|aab^+)^*$$</p></li>\n <li><p>and terminating with zero, one or two $a$'s\n $$(\\varepsilon|a|aa)$$</p></li>\n </ul>\n \n <p>We obtain</p>\n \n <p>\\begin{align*}\nb^*(ab^+|aab^+)^*(\\varepsilon|a|aa)\\tag{1}\n\\end{align*}</p>\n</blockquote>\n\n<p>The regular expression (1) generates all invalid words in a <em>unique</em> manner. In such cases we can use it to derive a generating function $$\\sum_{n=0}^\\infty a_n z^n$$ with\n$a_n$ giving the number of invalid words of length $n$.</p>\n\n<p>In order to do so, all we need to know is the <em><a href=\"https://en.wikipedia.org/wiki/Geometric_series#Geometric_power_series\" rel=\"nofollow noreferrer\">geometric series expansion</a></em> since the $star$ operator\n\\begin{align*}\na^*=\\left(\\varepsilon|a|a^2|a^3|\\cdots\\right)\\qquad\\text{ translates to }\\qquad 1+z+z^2+z^3+\\cdots=\\frac{1}{1-z}\n\\end{align*}</p>\n\n<p>Accordingly $a^+=aa^*$ translates to $\\frac{z}{1-z}$ and alternatives like $(\\varepsilon|a|aa)$ can be written as $1+z+z^2$.</p>\n\n<blockquote>\n <p>We translate the regular expression (1) into a generating function (mixing the symbolism somewhat in order to show some intermediate steps).</p>\n \n <p>Since we want to calculate the probabilities of occurrence of $P(X=a)$ we keep track of $a$ and $b$ by treating them as corresponding factors in the generating function.</p>\n \n <p>\\begin{align*}\nb^*\\left(ab^+|aab^+\\right)^*(\\varepsilon|a|aa)\n&amp;\\longrightarrow \\quad
\\frac{1}{1-bz}\\left(\\left.\\frac{abz^2}{1-bz}\\right|\\frac{a^2bz^3}{1-bz}\\right)^*\\left(1+az+a^2z^2\\right)\\\\\n&amp;\\longrightarrow \\quad \\frac{1}{1-bz}\\left(\\frac{abz^2+a^2bz^3}{1-bz}\\right)^*(1+az+a^2z^2)\\\\\n&amp;\\longrightarrow \\quad \\frac{1}{1-bz}\\cdot\\frac{1}{1-\\frac{abz^2+a^2bz^3}{1-bz}}(1+az+a^2z^2)\\\\\n&amp;\\quad\\quad=\\frac{1+az+a^2z^2}{1-bz-abz^2-a^2bz^3}\\tag{1}\n\\end{align*}</p>\n</blockquote>\n\n<p>$$ $$</p>\n\n<blockquote>\n <p>We conclude: The number of invalid words is given by (1). So, the number of valid words is the number of all words minus the number of invalid words. We obtain the generating function $A(z)$\n \\begin{align*}\nA(z)&amp;=\\sum_{n=0}^\\infty(a+b)^nz^n-\\frac{1+az+a^2z^2}{1-bz-abz^2-a^2bz^3}\\\\\n&amp;=\\frac{1}{1-(a+b)z}-\\frac{1+az+a^2z^2}{1-bz-abz^2-a^2bz^3}\\\\\n&amp;=\\frac{a^3z^3}{(1-(a+b)z)(1-bz-abz^2-a^2bz^3)}\\\\\n&amp;=a^3z^3+a^3(a+2b)z^4+a^3(\\color{blue}{1}a^2+\\color{blue}{4}ab+\\color{blue}{3}b^2)z^5\\\\\n&amp;\\qquad +a^3(a+b)^2(a+4b)z^6+a^3(a^4+7a^3b+18a^2b^2+16ab^3+5b^4)z^7+\\cdots\n\\end{align*}</p>\n \n <p>The expansion was done with the help of Wolfram Alpha. We see that e.g.
the number of valid words of length $5$ is $\\color{blue}{1}+\\color{blue}{4}+\\color{blue}{3}=8$.</p>\n</blockquote>\n\n<p>$$ $$</p>\n\n<blockquote>\n <p>Out of $2^5=32$ binary words of length $5$ there are $8$ valid words which are marked $\\color{blue}{\\text{blue}}$ in the table below.</p>\n \n <p>We obtain\n \\begin{array}{cccc}\n\\color{blue}{aaa}aa\\qquad&amp;ab\\color{blue}{aaa}\\qquad&amp;b\\color{blue}{aaa}a\\qquad&amp;bb\\color{blue}{aaa}\\\\\n\\color{blue}{aaa}ab\\qquad&amp;abaab\\qquad&amp;b\\color{blue}{aaa}b\\qquad&amp;bbaab\\\\\n\\color{blue}{aaa}ba\\qquad&amp;ababa\\qquad&amp;baaba\\qquad&amp;bbaba\\\\\n\\color{blue}{aaa}bb\\qquad&amp;ababb\\qquad&amp;baabb\\qquad&amp;bbabb\\\\\naabaa\\qquad&amp;abbaa\\qquad&amp;babaa\\qquad&amp;bbbaa\\\\\naabab\\qquad&amp;abbab\\qquad&amp;babab\\qquad&amp;bbbab\\\\\naabba\\qquad&amp;abbba\\qquad&amp;babba\\qquad&amp;bbbba\\\\\naabbb\\qquad&amp;abbbb\\qquad&amp;babbb\\qquad&amp;bbbbb\\\\\n\\end{array}</p>\n</blockquote>\n\n<p>We denote with $[z^n]$ the coefficient of $z^n$ in a series.</p>\n\n<blockquote>\n <p>We conclude</p>\n \n <ul>\n <li>The number of occurrences of wanted words of size $n$ is the coefficient of $z^n$ of $A(z)$ evaluated at $a=b=1$ and given as OEIS sequence <em><a href=\"http://oeis.org/A050231\" rel=\"nofollow noreferrer\">050231</a></em>.</li>\n </ul>\n \n <p>\\begin{align*}\n\\left.[z^n]A(z)\\right|_{a=b=1}\n\\end{align*}</p>\n \n <ul>\n <li><p>The wanted probability of valid words of size $n$ is the coefficient of $z^n$ of $A(z)$ evaluated at $a=0.7,b=0.3$.\n \\begin{align*}\n\\left.[z^n]A(z)\\right|_{a=0.7,b=0.3}\n\\end{align*}</p></li>\n <li><p>We obtain for $n=0$ up to $n=7$\n 
\\begin{array}{c|cl}\nn&amp;\\left.[z^n]A(z)\\right|_{a=b=1}&amp;\\left.[z^n]A(z)\\right|_{a=0.7,b=0.3}\\\\\n\\hline\n0&amp;0&amp;0\\\\\n1&amp;0&amp;0\\\\\n2&amp;0&amp;0\\\\\n3&amp;1&amp;0.343\\\\\n4&amp;3&amp;0.4459\\\\\n5&amp;8&amp;0.5488\\\\\n6&amp;20&amp;0.6517\\\\\n7&amp;47&amp;0.7193\\\\\n\\end{array}</p></li>\n </ul>\n</blockquote>\n" }, { "answer_id": 2094765, "score": 2, "text": "<p>I would propose an alternative - maybe easier - approach, trying to give a formula that provides a generalized answer for the probability of three events in a row, given the probability $p $ that the event occurs, as a function of the sample size $n$. As shown below, this approach may also be useful to address an additional issue of this problem highlighted in one of the comments, i.e. the sample size needed to \"guarantee\" a success with a given confidence level. </p>\n\n<p>Let us call $P (n) $ the probability that our event occurs three times in a row within a sequence of $n $ trials. Also, for each trial, let us denote as Y the case that our event occurs, and as N the case that it does not occur. So, in the specific case described in the OP, we are looking for the probability of getting a triple YYY in a sequence of $n $ elements when $p=0.7$.</p>\n\n<p>As a general rule, we can note that, passing from a sample size $n $ to a sample size $n+1$, the probability that our event occurs three times in a row increases only because there is the possibility that the triple YYY, not obtained in the first $n $ trials, occurs as a final triple in the sequence of $n+1$ trials. So, to calculate $P (n+1 )$ from $ P (n) $, we simply have to add the probability that the triple in a row is not obtained until a final sequence NYYY. Note that the N preceding the triple YYY is strictly necessary to satisfy the condition that the triple has not been obtained in the first $n $ trials. </p>\n\n<p>We can then start with $n=3$, for which we trivially have $P (3)=p^3$.
For $n=4$, we have to add to $P (3) $ the probability to get a sequence NYYY, i.e. we have to add $p^3(1-p)$. Similarly, for $n=5$, we have to add to $P (4) $ the probability to get a final sequence NYYY (note that this is not affected by what happens in the first of the five trials), which means that we must add $p^3(1-p)$ as in the previous case. Lastly, for $n=6$, we have again to add to $P (5) $ the probability to get a final sequence NYYY (note that, as above, this is not affected by what happens in the first and in the second of the six trials). This leads to a further increase by $p^3(1-p)$. So, for this initial group of four values of $n$ in our probability progression, we directly obtain the recurrence</p>\n\n<p>$$P (n)=P (n-1)+ p^3 (1-p)\\,\\,\\,\\, \\text {(for n=4 to 6)}$$</p>\n\n<p>Accordingly, for the specific case $p=0.7$, this formula gives the initial values </p>\n\n<p>\\begin{aligned} P(3) &amp; =0.7^3=0.343 \\\\\nP (4) &amp; = 0.7^3+0.3 \\cdot 0.7^3 = 0.4459 \\\\\nP (5) &amp; = 0.7^3+2\\cdot 0.3 \\cdot 0.7^3 = 0.5488 \\\\\nP (6) &amp; = 0.7^3+3\\cdot 0.3 \\cdot 0.7^3 = 0.6517 \\\\\n\\end{aligned} </p>\n\n<p>already obtained in a previous answer via a different method.</p>\n\n<p>Now let us consider the values of $n \\geq 7$. Here something changes, because the fact that the sequence ends with NYYY does not imply that the triple YYY has not occurred before, i.e. in the first $n-4$ trials. So, at each step, the probability of getting a final NYYY must be multiplied by the probability of failure in the first $n-4$ trials, which is $1-P (n-4 ) $. Therefore, for $n=7$ we have to add to $P (6) $ the probability given by $p^3(1-p) [1-P(3)]\\,\\,$, for $n=8$ we have to add to $P (7) $ the probability given by $p^3(1-p) [1-P(4)] \\,\\, $, and so on. This leads to the recurrence</p>\n\n<p>$$P (n)=P (n-1)+ p^3(1-p)[1-P(n-4)]\\,\\,\\,\\, \\text{(for n}\\, \\geq \\text {7)}$$</p>\n\n<p>We can now try to get a closed form for this last recurrence.
Let us set, for simplicity, $p^3=j$ and $(1-p) =k \\,\\, $. The recurrence becomes </p>\n\n<p>$$P (n)=P (n-1)+ jk-jk \\,P(n-4)\\, \\,$$</p>\n\n<p>Note that the four values of $P (n)$ already determined can be written as $$P (3)=j $$\n$$P (4)=j +jk$$\n$$P (5)=j +2jk$$\n$$P (6)=j +3jk$$</p>\n\n<p>so that they contain only terms of degree $j $ or $jk $. Continuing our probability progression and focusing on the second group of four values of $n$ (from $n=7\\,\\, $ to $n=10 \\,\\, $), new terms of degree $j^2k $ and $j^2k^2$ appear when applying the recurrence above (for the sake of readability, here I directly provide the resulting expressions, avoiding the rather tedious step-by-step calculations):</p>\n\n<p>\\begin{aligned} P (7) &amp; = P(6)+ jk-jk P (3)=j+4jk-j^2k \\\\\nP (8) &amp; = P(7)+ jk-jk P (4)= j+ 5jk -2j^2k-j^2k^2 \\\\\nP (9) &amp; = P(8) + jk-jk P (5)= j+ 6jk -3j^2k-3j^2k^2 \\\\\nP (10) &amp; = P (9) + jk-jk P (6)= j+ 7jk -4j^2k-6j^2k^2 \\\\\n\\end{aligned}</p>\n\n<p>Some pattern begins to emerge in these expansions. For the specific case $p=0.7$,\n$$P (7)\\approx 0.7193$$\n$$P (8)\\approx 0.7763$$\n$$P (9) \\approx 0.8228$$\n$$P (10) \\approx 0.8586$$</p>\n\n<p>We can repeat the procedure, applying the same recurrence and continuing with the third group of four values of $n $ (i.e., $n=11$ to $14$). Again, new terms appear, in this case of degree $j^3k^2$ and $j^3 k^3$ (as above, I directly provide the resulting expressions):</p>\n\n<p>\\begin{align*} P(11) &amp;=j+8jk-5j^2k-10j^2k^2+j^3k^2 \\\\ P(12) &amp;=j+9jk-6j^2k-15j^2k^2+3j^3k^2+j^3k^3 \\\\P(13) &amp; = j+10jk-7j^2k -21j^2k^2+6j^3k^2+4j^3k^3 \\\\\nP (14) &amp; = j+11jk-8j^2k-28j^2k^2+10j^3k^2+10j^3k^3\\\\\n\\end{align*}</p>\n\n<p>The final pattern is therefore as follows. Our expansions of $P (n) $ include:</p>\n\n<ul>\n<li><p>in the odd positions, $\\lfloor (n+1)/4 \\rfloor \\,\\, $ terms of the form $$(-1)^{i-1} \\binom{n-3i}{i-1} j^i k^{i-1} \\,\\, $$ where $$i=1,2...
\\lfloor (n+1)/4 \\rfloor \\,\\, $$ For example, in the expansion of $P (14)$ there are $ \\lfloor (14+1)/4 \\rfloor =3 \\, \\,\\, $ terms of this form, which are $j $, $-8j^2k $, and $10 j^3k^2$;</p></li>\n<li><p>in the even positions, $\\lfloor n/4 \\rfloor \\,\\, $ terms of the form $$(-1)^{i-1} \\binom{n-3i}{i} j^i k^i \\,\\, $$ where $$i=1,2... \\lfloor n/4 \\rfloor$$</p>\n\n<p>Accordingly, taking again for example the expansion of $P (14)$, there are $ \\lfloor 14/4 \\rfloor =3 \\,$ terms of this form, which are $11jk$, $-28j^2k^2$, and $10 j^3k^3$.</p></li>\n</ul>\n\n<p>Therefore, we obtain the following general expression for $P (n) $:</p>\n\n<p>$$P (n)=\\sum_{i=1}^{\\lfloor (n+1)/4 \\rfloor } (-1)^{i-1} \\binom{n-3i}{i-1} j^{i}k^{i-1} +\\sum_{i=1}^{\\lfloor n/4 \\rfloor } (-1)^{i-1} \\binom{n-3i}{i} j^i k^i $$</p>\n\n<p>Recalling that $j=p^3$ and $k=(1-p) $, we finally get</p>\n\n<p>$$P (n)=\\sum_{i=1}^{\\lfloor (n+1)/4 \\rfloor } (-1)^{i-1} \\binom{n-3i}{i-1} p^{3i}(1-p)^{i-1} + \\sum_{i=1}^{\\lfloor n/4 \\rfloor } (-1)^{i-1} \\binom{n-3i}{i} p^{3i} (1-p)^i $$</p>\n\n<p>Setting $p=0.7$, we obtain the probability asked for in the OP, which is</p>\n\n<p>$$P (n)=\\sum_{i=1}^{\\lfloor (n+1)/4 \\rfloor } (-1)^{i-1} \\binom{n-3i}{i-1} 0.7^{3i} \\cdot 0.3^{i-1} + \\sum_{i=1}^{\\lfloor n/4 \\rfloor } (-1)^{i-1} \\binom{n-3i}{i} 0.7^{3i}\\cdot 0.3^i $$</p>\n\n<p>Also note that the two sums have the same number of terms when $n \\not\\equiv -1 \\,\\, \\text {mod}\\,\\, 4$, whereas the first sum has one term more than the second one when $n \\equiv -1 \\,\\,\\text {mod} \\, 4$, because in this case $(n+1)/4$ is integer.
This term is given by</p>\n\n<p>$$\\displaystyle (-1)^{\\frac{(n+1)}{4}-1} \\binom{n-\\frac{3(n+1)}{4}}{ \\frac{(n+1)}{4}-1}\\, 0.7^{\\frac{3(n+1)}{4}} \\, 0.3^{\\frac{(n+1)}{4}-1} $$ \n$$\\displaystyle = (-1)^{\\frac{(n-3)}{4}} \\binom{\\frac{(n-3)}{4}}{\\frac{(n-3)}{4}}\\, 0.7^{\\frac{3(n+1)}{4}} \\, 0.3^{\\frac{(n-3)}{4}} $$\n$$\\displaystyle = (-1)^{\\frac{(n-3)}{4}} \\, 0.7^{\\frac{3(n+1)}{4}} \\, 0.3^{\\frac{(n-3)}{4}} $$</p>\n\n<p>So the probability asked for in the OP can be alternatively expressed as</p>\n\n<p>$$P (n)=\\sum_{i=1}^{\\lfloor n/4 \\rfloor } (-1)^{i-1} \\binom{n-3i}{i}\\, 0.7^{3i}\\, 0.3^{i} \\left( \\frac {i}{0.3(n-4 i+1)} +1 \\right) + S$$</p>\n\n<p>where</p>\n\n<p>$$S = \\begin{cases} 0 &amp; \\mbox{if } n \\not\\equiv -1 \\, \\text {mod}\\, 4\\\\\n (-1)^{\\frac{(n-3)}{4}} \\, 0.7^{\\frac{3(n+1)}{4}} \\, 0.3^{\\frac{(n-3)}{4}} &amp; \\mbox{if } n \\equiv -1 \\, \\text {mod}\\, 4 \\end{cases}$$</p>\n\n<p>It could also be observed that $|S|$ is relatively small and rapidly decreases for increasing $n$: its value is $\\approx 0.0353\\, \\,\\, $ for $n=7$, becoming $\\approx 0.00363\\, \\,\\, $ for $n=11$ and $\\approx 0.000374\\,\\,\\, $ for $n=15$, so that its contribution to the overall probability quickly becomes negligible. The last expansion without the $S$ term can therefore be considered a valid approximation even for the case $n \\equiv -1 \\, \\text {mod}\\, 4$. </p>\n\n<p>The probabilities of three events in a row given by these formulas for $p=0.7$ and $n=3$ to $10$ have already been provided above. As expected, for increasing $n $, the value of $P (n)$ approaches $1$. The first value for which the probability achieves $95\\%$ is $n=15$, since $P (15) \\approx 0.9549\\,\\,\\, $ (cf. the corresponding WA computation <a href=\"http://www.wolframalpha.com/input/?i=0.7%5E(12)%200.3%5E(4)%20%2B%20%20sum%20%20%5B(-1)%5E(i-1)%20(((15-3i)%20choose%20i)%200.7%5E(3i)%200.3%5Ei%20(i%2F(0.3(15-4%20i%2B1))%2B1)%20%20%20),%7Bi,%2015%2F4%7D%5D\" rel=\"nofollow noreferrer\">here</a>).
The probability achieves $97.5\\%$ and $99\\%$ for $n=18$ and $n=22$, respectively, since $P(18) \\approx 0.9772\\,\\,\\, $ and $P(22) \\approx 0.9909\\,\\,\\, $, as confirmed by WA <a href=\"http://www.wolframalpha.com/input/?i=sum%20%20%5B(-1)%5E(i-1)%20(((18-3i)%20choose%20i)%200.7%5E(3i)%200.3%5Ei%20(i%2F(0.3(18-4%20i%2B1))%2B1)%20%20%20),%7Bi,%2018%2F4%7D%5D\" rel=\"nofollow noreferrer\">here</a> and <a href=\"http://www.wolframalpha.com/input/?i=sum%20%20%5B(-1)%5E(i-1)%20(((22-3i)%20choose%20i)%200.7%5E(3i)%200.3%5Ei%20(i%2F(0.3(22-4%20i%2B1))%2B1)%20%20%20),%7Bi,%2022%2F4%7D%5D\" rel=\"nofollow noreferrer\">here</a>. </p>\n" }, { "answer_id": 2083011, "score": 1, "text": "<p>The issue with looking at opportunities for success is that they are not completely independent. For example, in a row of size 4, if the first three aren't all successes, this greatly decreases the chance that the last three are all successes (the only way this can happen is if only the first trial was a failure).</p>\n\n<p>You can still get this approach to work (via correcting for events which are dependent via some sort of inclusion-exclusion), but generally a cleaner method for these types of problems is to write down a recurrence (or more generally, model them as a <a href=\"https://en.wikipedia.org/wiki/Markov_chain\" rel=\"nofollow noreferrer\">finite state Markov chain</a>).</p>\n\n<p>Let's let $P(N)$ be the probability a row of length $N$ has no 3 consecutive successes. To help us write down a recurrence, we'll break $P(N)$ down into 3 smaller probabilities. Let's let $P(N, 0)$ be the probability a row of length $N$ has no 3 consecutive successes AND that the last attempt was a failure.
Similarly, let $P(N,1)$ be the probability that a row of length $N$ has no 3 consecutive successes AND that the last attempt was a success, but the second-last attempt was a failure, and finally, let $P(N,2)$ be the probability that a row of length $N$ has no 3 consecutive successes AND the last two attempts were successes, but the third-to-last attempt was a failure. Intuitively, these different cases measure how \"close\" we are to getting 3 successes in a row. It's not too hard to see that $P(N) = P(N,0) + P(N,1) + P(N,2)$, since if the last three attempts are all successes, you've already won.</p>\n\n<p>Now, let's try to write $P(N+1, 0)$ in terms of smaller cases. The probability you have no 3 consecutive successes in a row of length $N+1$ and that your last attempt was a failure is simply the probability that there are no 3 consecutive successes in the first $N$ tries and that you fail the $N+1$st try. This gives us the equation:</p>\n\n<p>$$P(N+1,0) = 0.3P(N) = 0.3P(N,0) + 0.3P(N,1) + 0.3P(N,2)$$</p>\n\n<p>Via similar logic, we can show that $P(N+1,1)$ and $P(N+1,2)$ satisfy the following equations:</p>\n\n<p>$$P(N+1,1) = 0.7P(N, 0)$$</p>\n\n<p>and</p>\n\n<p>$$P(N+1,2) = 0.7P(N, 1)$$</p>\n\n<p>Substituting these into the original equation for $P(N+1,0)$, we get:</p>\n\n<p>$$P(N+1, 0) = 0.3P(N,0) + 0.21P(N-1, 0) + 0.147P(N-2,0)$$</p>\n\n<p>Now, this is just a standard <a href=\"https://en.wikipedia.org/wiki/Recurrence_relation\" rel=\"nofollow noreferrer\">linear recurrence relation</a>. If $\\lambda_1$, $\\lambda_2$, and $\\lambda_3$ are the roots of $z^3 - 0.3z^2-0.21z-0.147=0$, then there are some constants $a_1, a_2, a_3$ which you can solve for so that</p>\n\n<p>$$P(N, 0) = a_1\\lambda_1^N + a_2\\lambda_2^N + a_3\\lambda_3^N$$</p>\n\n<p>Since $P(N) = P(N+1,0)/0.3$, this gives a similar formula for $P(N)$ (and ultimately for $1-P(N)$, the probability a success does occur 3 consecutive times). 
</p>\n" }, { "answer_id": 2095652, "score": 1, "text": "<p>Let $p=0.7$ be the probability of success, $\\mathrm{f}(n)$ be the probability of a sequence of length $n$ having 3 consecutive successes and $\\mathrm{g}(n)=1-\\mathrm{f}(n)$, where $\\mathrm{g}(n)$ corresponds to the probability of a sequence of length $n$ having at most 2 consecutive successes.</p>\n\n<p>It holds $\\mathrm{g}(0) = \\mathrm{g}(1) = \\mathrm{g}(2) = 1$.</p>\n\n<p>Then $\\mathrm{g}(n) = (1-p)\\mathrm{g}(n-1) + (1-p)p\\mathrm{g}(n-2) + (1-p)p^2\\mathrm{g}(n-3)$.</p>\n\n<p>This holds because all sequences of length $n$ that have at most 2 consecutive successes can be constructed from (1) all such sequences of length $n-1$ suffixed by a failure plus; (2) all such sequences of length $n-2$ suffixed by a failure and a success, plus; (3) all such sequences of length $n-3$ suffixed by a failure and two successes.</p>\n" }, { "answer_id": 3809188, "score": 0, "text": "<p>I saw that the answers to the question include either regex (Automata Theory) or linear recurrences (Linear Algebra), so for those interested I decided to post <a href=\"https://people.math.carleton.ca/%7Ebsteinbg/classes/automata2009/generatingfunctions.pdf\" rel=\"nofollow noreferrer\">this paper</a> on generating functions that neatly explains the interconnection between them.</p>\n" }, { "answer_id": 2087355, "score": 4, "text": "<blockquote>\n <p>Let's denote the event under consideration with $a$\n \\begin{align*}\nP(X=a)=0.7\n\\end{align*}\n and we denote the complementary event with $b$.</p>\n</blockquote>\n\n<p>We are looking for the words of length $n$ and their probability of occurrence which do not contain three or more consecutive $a$'s. 
The result is $1$ minus this probability.</p>\n\n<blockquote>\n <p>We can describe the set of these invalid words as the words built from the alphabet $V=\\{a,b\\}$ which contain at most one or two consecutive $a$'s.</p>\n \n <p>These are words</p>\n \n <ul>\n <li><p>starting with zero or more $b$'s:$$b^*$$</p></li>\n <li><p>followed by zero or more occurrences of $a$ or $aa$ each followed by one or more $b$'s: $$(ab^+|aab^+)^*$$</p></li>\n <li><p>and terminating with zero, one or two $a$'s\n $$(\\varepsilon|a|aa)$$</p></li>\n </ul>\n \n <p>We obtain</p>\n \n <p>\\begin{align*}\nb^*(ab^+|aab^+)^*(\\varepsilon|a|aa)\\tag{1}\n\\end{align*}</p>\n</blockquote>\n\n<p>The regular expression (1) generates all invalid words in a <em>unique</em> manner. In such cases we can use it to derive a generating function $$\\sum_{n=0}^\\infty a_n z^n$$ with\n$a_n$ giving the number of invalid words of length $n$.</p>\n\n<p>In order to do so, all we need to know is the <em><a href=\"https://en.wikipedia.org/wiki/Geometric_series#Geometric_power_series\" rel=\"nofollow noreferrer\">geometric series expansion</a></em> since the $star$ operator\n\\begin{align*}\na^*=\\left(\\varepsilon|a|a^2|a^3|\\cdots\\right)\\qquad\\text{ translates to }\\qquad 1+z+z^2+z^3+\\cdots=\\frac{1}{1-z}\n\\end{align*}</p>\n\n<p>Accordingly $a^+=aa^*$ translates to $\\frac{z}{1-z}$ and alternatives like $(\\varepsilon|a|aa)$ can be written as $1+z+z^2$.</p>\n\n<blockquote>\n <p>We translate the regular expression (1) into a generating function (mixing the symbolism somewhat in order to show some intermediate steps).</p>\n \n <p>Since we want to calculate the probabilities of occurrence of $P(X=a)$ we keep track of $a$ and $b$ by treating them as corresponding factors in the generating function.</p>\n \n <p>\\begin{align*}\nb^*\\left(ab^+|aab^+\\right)^*(\\varepsilon|a|aa)\n&amp;\\longrightarrow \\quad
\\frac{1}{1-bz}\\left(\\left.\\frac{abz^2}{1-bz}\\right|\\frac{a^2bz^3}{1-bz}\\right)^*\\left(1+az+a^2z^2\\right)\\\\\n&amp;\\longrightarrow \\quad \\frac{1}{1-bz}\\left(\\frac{abz^2+a^2bz^3}{1-bz}\\right)^*(1+az+a^2z^2)\\\\\n&amp;\\longrightarrow \\quad \\frac{1}{1-bz}\\cdot\\frac{1}{1-\\frac{abz^2+a^2bz^3}{1-bz}}(1+az+a^2z^2)\\\\\n&amp;\\quad\\quad=\\frac{1+az+a^2z^2}{1-bz-abz^2-a^2bz^3}\\tag{1}\n\\end{align*}</p>\n</blockquote>\n\n<p>$$ $$</p>\n\n<blockquote>\n <p>We conclude: The number of invalid words is given by (1). So, the number of valid words is the number of all words minus the number of invalid words. We obtain the generating function $A(z)$\n \\begin{align*}\nA(z)&amp;=\\sum_{n=0}^\\infty(a+b)^nz^n-\\frac{1+az+a^2z^2}{1-bz-abz^2-a^2bz^3}\\\\\n&amp;=\\frac{1}{1-(a+b)z}-\\frac{1+az+a^2z^2}{1-bz-abz^2-a^2bz^3}\\\\\n&amp;=\\frac{a^3z^3}{(1-(a+b)z)(1-bz-abz^2-a^2bz^3)}\\\\\n&amp;=a^3z^3+a^3(a+2b)z^4+a^3(\\color{blue}{1}a^2+\\color{blue}{4}ab+\\color{blue}{3}b^2)z^5\\\\\n&amp;\\qquad +a^3(a+b)^2(a+4b)z^6+a^3(a^4+7a^3b+18a^2b^2+16ab^3+5b^4)z^7+\\cdots\n\\end{align*}</p>\n \n <p>The expansion was done with the help of Wolfram Alpha. We see that e.g.
the number of valid words of length $5$ is $\\color{blue}{1}+\\color{blue}{4}+\\color{blue}{3}=8$.</p>\n</blockquote>\n\n<p>$$ $$</p>\n\n<blockquote>\n <p>Out of $2^5=32$ binary words of length $5$ there are $8$ valid words which are marked $\\color{blue}{\\text{blue}}$ in the table below.</p>\n \n <p>We obtain\n \\begin{array}{cccc}\n\\color{blue}{aaa}aa\\qquad&amp;ab\\color{blue}{aaa}\\qquad&amp;b\\color{blue}{aaa}a\\qquad&amp;bb\\color{blue}{aaa}\\\\\n\\color{blue}{aaa}ab\\qquad&amp;abaab\\qquad&amp;b\\color{blue}{aaa}b\\qquad&amp;bbaab\\\\\n\\color{blue}{aaa}ba\\qquad&amp;ababa\\qquad&amp;baaba\\qquad&amp;bbaba\\\\\n\\color{blue}{aaa}bb\\qquad&amp;ababb\\qquad&amp;baabb\\qquad&amp;bbabb\\\\\naabaa\\qquad&amp;abbaa\\qquad&amp;babaa\\qquad&amp;bbbaa\\\\\naabab\\qquad&amp;abbab\\qquad&amp;babab\\qquad&amp;bbbab\\\\\naabba\\qquad&amp;abbba\\qquad&amp;babba\\qquad&amp;bbbba\\\\\naabbb\\qquad&amp;abbbb\\qquad&amp;babbb\\qquad&amp;bbbbb\\\\\n\\end{array}</p>\n</blockquote>\n\n<p>We denote with $[z^n]$ the coefficient of $z^n$ in a series.</p>\n\n<blockquote>\n <p>We conclude</p>\n \n <ul>\n <li>The number of occurrences of wanted words of size $n$ is the coefficient of $z^n$ of $A(z)$ evaluated at $a=b=1$ and given as OEIS sequence <em><a href=\"http://oeis.org/A050231\" rel=\"nofollow noreferrer\">050231</a></em>.</li>\n </ul>\n \n <p>\\begin{align*}\n\\left.[z^n]A(z)\\right|_{a=b=1}\n\\end{align*}</p>\n \n <ul>\n <li><p>The wanted probability of valid words of size $n$ is the coefficient of $z^n$ of $A(z)$ evaluated at $a=0.7,b=0.3$.\n \\begin{align*}\n\\left.[z^n]A(z)\\right|_{a=0.7,b=0.3}\n\\end{align*}</p></li>\n <li><p>We obtain for $n=0$ up to $n=7$\n 
\\begin{array}{c|cl}\nn&amp;\\left.[z^n]A(z)\\right|_{a=b=1}&amp;\\left.[z^n]A(z)\\right|_{a=0.7,b=0.3}\\\\\n\\hline\n0&amp;0&amp;0\\\\\n1&amp;0&amp;0\\\\\n2&amp;0&amp;0\\\\\n3&amp;1&amp;0.343\\\\\n4&amp;3&amp;0.4459\\\\\n5&amp;8&amp;0.5488\\\\\n6&amp;20&amp;0.6517\\\\\n7&amp;47&amp;0.7193\\\\\n\\end{array}</p></li>\n </ul>\n</blockquote>\n" }, { "answer_id": 2094765, "score": 2, "text": "<p>I would propose an alternative - maybe easier - approach, trying to give a formula that provides a generalized answer for the probability of three events in a row, given the probability $p $ that the event occurs, as a function of the sample size $n$. As shown below, this approach may also be useful to address an additional issue of this problem highlighted in one of the comments, i.e. the sample size needed to \"guarantee\" a success with a given confidence level. </p>\n\n<p>Let us call $P (n) $ the probability that our event occurs three times in a row within a sequence of $n $ trials. Also, for each trial, let us denote as Y the case that our event occurs, and as N the case that it does not occur. So, in the specific case described in the OP, we are looking for the probability of getting a triple YYY in a sequence of $n $ elements when $p=0.7$.</p>\n\n<p>As a general rule, we can note that, passing from a sample size $n $ to a sample size $n+1$, the probability that our event occurs three times in a row increases only because there is the possibility that the triple YYY, not obtained in the first $n $ trials, occurs as a final triple in the sequence of $n+1$ trials. So, to calculate $P (n+1 )$ from $ P (n) $, we simply have to add the probability that the triple in a row is not obtained until a final sequence NYYY. Note that the N preceding the triple YYY is strictly necessary to satisfy the condition that the triple has not been obtained in the first $n $ trials. </p>\n\n<p>We can then start with $n=3$, for which we trivially have $P (3)=p^3$. 
For $n=4$, we have to add to $P (3) $ the probability of getting a sequence NYYY, i.e. we have to add $p^3(1-p)$. Similarly, for $n=5$, we have to add to $P (4) $ the probability of getting a final sequence NYYY (note that this is not affected by what happens in the first of the five trials), which means that we must add $p^3(1-p)$ as in the previous case. Lastly, for $n=6$, we again have to add to $P (5) $ the probability of getting a final sequence NYYY (note that, as above, this is not affected by what happens in the first and in the second of the six trials). This leads to a further increase by $p^3(1-p)$. So, for this initial group of four values of $n$ in our probability progression, we directly obtain the recurrence</p>\n\n<p>$$P (n)=P (n-1)+ p^3 (1-p)\\,\\,\\,\\, \\text {(for n=4 to 6)}$$</p>\n\n<p>Accordingly, for the specific case $p=0.7$, this formula gives the initial values </p>\n\n<p>\\begin{aligned} P(3) &amp; =0.7^3=0.343 \\\\\nP (4) &amp; = 0.7^3+0.3 \\cdot 0.7^3 = 0.4459 \\\\\nP (5) &amp; = 0.7^3+2\\cdot 0.3 \\cdot 0.7^3 = 0.5488 \\\\\nP (6) &amp; = 0.7^3+3\\cdot 0.3 \\cdot 0.7^3 = 0.6517 \\\\\n\\end{aligned} </p>\n\n<p>already obtained in a previous answer via a different method.</p>\n\n<p>Now let us consider the values of $n \\geq 7$. Here something changes, because the fact that the sequence ends with NYYY does not imply that the triple YYY has not occurred before, i.e. in the first $n-4$ trials. So, at each step, the probability of getting a final NYYY must be multiplied by the probability of failure in the first $n-4$ trials, which is $1-P (n-4 ) $. Therefore, for $n=7$ we have to add to $P (6) $ the probability given by $p^3(1-p) [1-P(3)]\\,\\,$, for $n=8$ we have to add to $P (7) $ the probability given by $p^3(1-p) [1-P(4)] \\,\\, $, and so on. This leads to the recurrence</p>\n\n<p>$$P (n)=P (n-1)+ p^3(1-p)[1-P(n-4)]\\,\\,\\,\\, \\text{(for n}\\, \\geq \\text {7)}$$</p>\n\n<p>We can now try to get a closed form for this last recurrence. 
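With the base values $P(0)=P(1)=P(2)=0$ and $P(3)=p^3$, the recurrence can be iterated directly as a single update $P(m)=P(m-1)+p^3(1-p)\bigl(1-P(m-4)\bigr)$. A minimal Python sketch (the function name is mine, not from the answer):

```python
def p_three_in_row(n, p=0.7):
    # P[m] = probability of at least three consecutive successes in m trials,
    # using P(m) = P(m-1) + p^3*(1-p)*(1 - P(m-4)), with P(m) = 0 for m < 3.
    P = [0.0, 0.0, 0.0, p**3]          # base cases P(0)..P(3)
    for m in range(4, n + 1):
        P.append(P[m - 1] + p**3 * (1 - p) * (1 - P[m - 4]))
    return P[n]
```

For $p=0.7$ this reproduces the values listed in the answer, e.g. $P(4)=0.4459$, $P(5)=0.5488$, $P(7)\approx 0.7193$ and $P(8)\approx 0.7763$.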
Let us set, for simplicity, $p^3=j$ and $(1-p) =k \\,\\, $. The recurrence becomes </p>\n\n<p>$$P (n)=P (n-1)+ jk-jk \\,P(n-4)\\, \\,$$</p>\n\n<p>Note that the four values of $P (n)$ already determined can be written as $$P (3)=j $$\n$$P (4)=j +jk$$\n$$P (5)=j +2jk$$\n$$P (6)=j +3jk$$</p>\n\n<p>so that they contain only terms of degree $j $ or $jk $. Continuing our probability progression and focusing on the second group of four values of $n$ (from $n=7\\,\\, $ to $n=10 \\,\\, $), new terms of degree $j^2k $ and $j^2k^2$ appear applying the recurrence above (for the sake of readability, here I directly provide the resulting expressions avoiding the rather tedious step-by-step calculations):</p>\n\n<p>\\begin{aligned} P (7) &amp; = P(6)+ jk-jk P (3)=j+4jk-j^2k \\\\\nP (8) &amp; = P(7)+ jk-jk P (4)= j+ 5jk -2j^2k-j^2k^2 \\\\\nP (9) &amp; = P(8) + jk-jk P (5)= j+ 6jk -3j^2k-3j^2k^2 \\\\\nP (10) &amp; = P (9) + jk-jk P (6)= j+ 7jk -4j^2k-6j^2k^2 \\\\\n\\end{aligned}</p>\n\n<p>Some pattern begins to emerge in these expansions. For the specific case $p=0.7$,\n$$P (7)\\approx 0.7193$$\n$$P (8)\\approx 0.7763$$\n$$P (9) \\approx 0.8228$$\n$$P (10) \\approx 0.8586$$</p>\n\n<p>We can repeat the procedure, applying the same recurrence and continuing with the third group of four values of $n $ (i.e., $n=11$ to $14$). Again, new terms appear, in this case of degree $j^3k^2$ and $j^3 k^3$ (as above, I directly provide the resulting expressions):</p>\n\n<p>\\begin{align*} P(11) &amp;=j+8jk-5j^2k-10j^2k^2+j^3k^2 \\\\ P(12) &amp;=j+9jk-6j^2k-15j^2k^2+3j^3k^2+j^3k^3 \\\\P(13) &amp; = j+10jk-7j^2k -21j^2k^2+6j^3k^2+4j^3k^3 \\\\\nP (14) &amp; = j+11jk-8j^2k-28j^2k^2+10j^3k^2+10j^3k^3\\\\\n\\end{align*}</p>\n\n<p>The final pattern is therefore as follows. Our expansions of $P (n) $ include:</p>\n\n<ul>\n<li><p>in the odd positions, $\\lfloor (n+1)/4 \\rfloor \\,\\, $ terms of the form $$(-1)^{i-1} \\binom{n-3i}{i-1} j^i k^{i-1} \\,\\, $$ where $$i=1,2... 
\\lfloor (n+1)/4 \\rfloor \\,\\, $$ For example, in the expansion of $P (14)$ there are $ \\lfloor (14+1)/4 \\rfloor =3 \\, \\,\\, $ terms of this form, which are $j $, $-8j^2k $, and $10 j^3k^2$;</p></li>\n<li><p>in the even positions, $\\lfloor n/4 \\rfloor \\,\\, $ terms of the form $$(-1)^{i-1} \\binom{n-3i}{i} j^i k^i \\,\\, $$ where $$i=1,2... \\lfloor n/4 \\rfloor$$</p>\n\n<p>Accordingly, taking again for example the expansion of $P (14)$, there are $ \\lfloor 14/4 \\rfloor =3 \\,$ terms of this form, which are $11jk$, $-28j^2k^2$, and $10 j^3k^3$.</p></li>\n</ul>\n\n<p>Therefore, we obtain the following general expression for $P (n) $:</p>\n\n<p>$$P (n)=\\sum_{i=1}^{\\lfloor (n+1)/4 \\rfloor } (-1)^{i-1} \\binom{n-3i}{i-1} j^{i}k^{i-1} +\\sum_{i=1}^{\\lfloor n/4 \\rfloor } (-1)^{i-1} \\binom{n-3i}{i} j^i k^i $$</p>\n\n<p>Recalling that $j=p^3$ and $k=(1-p) $, we finally get</p>\n\n<p>$$P (n)=\\sum_{i=1}^{\\lfloor (n+1)/4 \\rfloor } (-1)^{i-1} \\binom{n-3i}{i-1} p^{3i}(1-p)^{i-1} + \\sum_{i=1}^{\\lfloor n/4 \\rfloor } (-1)^{i-1} \\binom{n-3i}{i} p^{3i} (1-p)^i $$</p>\n\n<p>Setting $p=0.7$, we obtain the probability asked for in the OP, which is</p>\n\n<p>$$P (n)=\\sum_{i=1}^{\\lfloor (n+1)/4 \\rfloor } (-1)^{i-1} \\binom{n-3i}{i-1} 0.7^{3i} \\cdot 0.3^{i-1} + \\sum_{i=1}^{\\lfloor n/4 \\rfloor } (-1)^{i-1} \\binom{n-3i}{i} 0.7^{3i}\\cdot 0.3^i $$</p>\n\n<p>Also note that the two sums have the same number of terms when $n \\not\\equiv -1 \\,\\, \\text {mod}\\,\\, 4$, whereas the first sum has one term more than the second one when $n \\equiv -1 \\,\\,\\text {mod} \\, 4$, because in this case $(n+1)/4$ is integer. 
This term is given by</p>\n\n<p>$$\\displaystyle (-1)^{\\frac{(n+1)}{4}-1} \\binom{n-\\frac{3(n+1)}{4}}{ \\frac{(n+1)}{4}-1}\\, 0.7^{\\frac{3(n+1)}{4}} \\, 0.3^{\\frac{(n+1)}{4}-1} $$ \n$$\\displaystyle = (-1)^{\\frac{(n-3)}{4}} \\binom{\\frac{(n-3)}{4}}{\\frac{(n-3)}{4}}\\, 0.7^{\\frac{3(n+1)}{4}} \\, 0.3^{\\frac{(n-3)}{4}} $$\n$$\\displaystyle = (-1)^{\\frac{(n-3)}{4}} \\, 0.7^{\\frac{3(n+1)}{4}} \\, 0.3^{\\frac{(n-3)}{4}} $$</p>\n\n<p>So the probability asked for in the OP can be alternatively expressed as</p>\n\n<p>$$P (n)=\\sum_{i=1}^{\\lfloor n/4 \\rfloor } (-1)^{i-1} \\binom{n-3i}{i}\\, 0.7^{3i}\\, 0.3^{i} \\left( \\frac {i}{0.3(n-4 i+1)} +1 \\right) + S$$</p>\n\n<p>where</p>\n\n<p>$$S = \\begin{cases} 0 &amp; \\mbox{if } n \\not\\equiv -1 \\, \\text {mod}\\, 4\\\\\n (-1)^{\\frac{(n-3)}{4}} \\, 0.7^{\\frac{3(n+1)}{4}} \\, 0.3^{\\frac{(n-3)}{4}} &amp; \\mbox{if } n \\equiv -1 \\, \\text {mod}\\, 4 \\end{cases}$$</p>\n\n<p>It could also be observed that $|S|$ is relatively small and rapidly decreases for increasing $n$: its value is $\\approx 0.0353\\, \\,\\, $ for $n=7$, becoming $\\approx 0.00363\\, \\,\\, $ for $n=11$ and $\\approx 0.000374\\,\\,\\, $ for $n=15$, so that its contribution to the overall probability quickly becomes nearly negligible. The last expansion without the $S$ term can therefore be considered a valid approximation even for the case $n \\equiv -1 \\, \\text {mod}\\, 4$. </p>\n\n<p>The probability of three events in a row given by these formulas for $p=0.7$ and $n=3$ to $10$ has already been provided above. As expected, for increasing $n $, the value of $P (n)$ approaches $1$. The first value for which the probability achieves $95\\%$ is $n=15$, since $P (15) \\approx 0.9549\\,\\,\\, $ (the WA evaluation <a href=\"http://www.wolframalpha.com/input/?i=0.7%5E(12)%200.3%5E(4)%20%2B%20%20sum%20%20%5B(-1)%5E(i-1)%20(((15-3i)%20choose%20i)%200.7%5E(3i)%200.3%5Ei%20(i%2F(0.3(15-4%20i%2B1))%2B1)%20%20%20),%7Bi,%2015%2F4%7D%5D\" rel=\"nofollow noreferrer\">here</a> uses the uncorrected $S$ term and returns the slightly higher $0.9553$). 
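As a numerical cross-check, the general two-sum expression for $P(n)$ can be evaluated directly. A Python sketch (the helper name is mine, not from the answer):

```python
from math import comb

def p_closed_form(n, p=0.7):
    # Two-sum closed form: odd-position terms C(n-3i, i-1) p^(3i) (1-p)^(i-1)
    # for i up to floor((n+1)/4), plus even-position terms C(n-3i, i) p^(3i) (1-p)^i
    # for i up to floor(n/4), both with alternating signs.
    q = 1 - p
    odd = sum((-1)**(i - 1) * comb(n - 3*i, i - 1) * p**(3*i) * q**(i - 1)
              for i in range(1, (n + 1)//4 + 1))
    even = sum((-1)**(i - 1) * comb(n - 3*i, i) * p**(3*i) * q**i
               for i in range(1, n//4 + 1))
    return odd + even
```

For $p=0.7$ this agrees with the step-by-step values above, e.g. $P(7)\approx 0.7193$ and $P(10)\approx 0.8586$.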
The probability achieves $97.5\\%$ and $99\\%$ for $n=18$ and $n=22$, respectively, since $P(18) \\approx 0.9772\\,\\,\\, $ and $P(22) \\approx 0.9909\\,\\,\\, $, as confirmed by WA <a href=\"http://www.wolframalpha.com/input/?i=sum%20%20%5B(-1)%5E(i-1)%20(((18-3i)%20choose%20i)%200.7%5E(3i)%200.3%5Ei%20(i%2F(0.3(18-4%20i%2B1))%2B1)%20%20%20),%7Bi,%2018%2F4%7D%5D\" rel=\"nofollow noreferrer\">here</a> and <a href=\"http://www.wolframalpha.com/input/?i=sum%20%20%5B(-1)%5E(i-1)%20(((22-3i)%20choose%20i)%200.7%5E(3i)%200.3%5Ei%20(i%2F(0.3(22-4%20i%2B1))%2B1)%20%20%20),%7Bi,%2022%2F4%7D%5D\" rel=\"nofollow noreferrer\">here</a>. </p>\n" }, { "answer_id": 2083011, "score": 1, "text": "<p>The issue with looking at opportunities for success is that they are not completely independent. For example, in a row of size 4, if the first three aren't all successes, this greatly decreases the chance that the last three are all successes (the only way this can happen is if only the first trial was a failure).</p>\n\n<p>You can still get this approach to work (via correcting for events which are dependent via some sort of inclusion-exclusion), but generally a cleaner method for these types of problems is to write down a recurrence (or more generally, model them as a <a href=\"https://en.wikipedia.org/wiki/Markov_chain\" rel=\"nofollow noreferrer\">finite state Markov chain</a>).</p>\n\n<p>Let's let $P(N)$ be the probability a row of length $N$ has no 3 consecutive successes. To help us write down a recurrence, we'll break $P(N)$ down into 3 smaller probabilities. Let's let $P(N, 0)$ be the probability a row of length $N$ has no 3 consecutive successes AND that the last attempt was a failure. 
Similarly, let $P(N,1)$ be the probability that a row of length $N$ has no 3 consecutive successes AND that the last attempt was a success, but the second-last attempt was a failure, and finally, let $P(N,2)$ be the probability that a row of length $N$ has no 3 consecutive successes AND the last two attempts were successes, but the third-to-last attempt was a failure. Intuitively, these different cases measure how \"close\" we are to getting 3 successes in a row. It's not too hard to see that $P(N) = P(N,0) + P(N,1) + P(N,2)$, since if the last three attempts are all successes, you've already won.</p>\n\n<p>Now, let's try to write $P(N+1, 0)$ in terms of smaller cases. The probability you have no 3 consecutive successes in a row of length $N+1$ and that your last attempt was a failure is simply the probability that there are no 3 consecutive successes in the first $N$ tries and that you fail the $N+1$st try. This gives us the equation:</p>\n\n<p>$$P(N+1,0) = 0.3P(N) = 0.3P(N,0) + 0.3P(N,1) + 0.3P(N,2)$$</p>\n\n<p>Via similar logic, we can show that $P(N+1,1)$ and $P(N+1,2)$ satisfy the following equations:</p>\n\n<p>$$P(N+1,1) = 0.7P(N, 0)$$</p>\n\n<p>and</p>\n\n<p>$$P(N+1,2) = 0.7P(N, 1)$$</p>\n\n<p>Substituting these into the original equation for $P(N+1,0)$, we get:</p>\n\n<p>$$P(N+1, 0) = 0.3P(N,0) + 0.21P(N-1, 0) + 0.147P(N-2,0)$$</p>\n\n<p>Now, this is just a standard <a href=\"https://en.wikipedia.org/wiki/Recurrence_relation\" rel=\"nofollow noreferrer\">linear recurrence relation</a>. If $\\lambda_1$, $\\lambda_2$, and $\\lambda_3$ are the roots of $z^3 - 0.3z^2-0.21z-0.147=0$, then there are some constants $a_1, a_2, a_3$ which you can solve for so that</p>\n\n<p>$$P(N, 0) = a_1\\lambda_1^N + a_2\\lambda_2^N + a_3\\lambda_3^N$$</p>\n\n<p>Since $P(N) = P(N+1,0)/0.3$, this gives a similar formula for $P(N)$ (and ultimately for $1-P(N)$, the probability a success does occur 3 consecutive times). 
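The three-state bookkeeping can also be iterated numerically instead of solving the characteristic polynomial. A minimal Python sketch (the function name is mine, not from the answer):

```python
def p_run_of_three(n, p=0.7):
    # s[k]: probability of no run of 3 successes so far, with the sequence
    # currently ending in exactly k consecutive successes (k = 0, 1, 2).
    q = 1 - p
    s = [1.0, 0.0, 0.0]                      # empty sequence
    for _ in range(n):
        total = s[0] + s[1] + s[2]
        s = [q * total, p * s[0], p * s[1]]  # next trial fails / succeeds
    return 1 - sum(s)                        # P(at least one run of 3)
```

For $p=0.7$ this matches the other methods, e.g. $0.343$ at $n=3$ and $\approx 0.7193$ at $n=7$.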
</p>\n" }, { "answer_id": 2095652, "score": 1, "text": "<p>Let $p=0.7$ be the probability of success, $\\mathrm{f}(n)$ be the probability of a sequence of length $n$ having 3 consecutive successes and $\\mathrm{g}(n)=1-\\mathrm{f}(n)$, where $\\mathrm{g}(n)$ corresponds to the probability of a sequence of length $n$ having at most 2 consecutive successes.</p>\n\n<p>It holds $\\mathrm{g}(0) = \\mathrm{g}(1) = \\mathrm{g}(2) = 1$.</p>\n\n<p>Then $\\mathrm{g}(n) = (1-p)\\mathrm{g}(n-1) + (1-p)p\\mathrm{g}(n-2) + (1-p)p^2\\mathrm{g}(n-3)$.</p>\n\n<p>This holds because all sequences of length $n$ that have at most 2 consecutive successes can be constructed from (1) all such sequences of length $n-1$ suffixed by a failure plus; (2) all such sequences of length $n-2$ suffixed by a failure and a success, plus; (3) all such sequences of length $n-3$ suffixed by a failure and two successes.</p>\n" }, { "answer_id": 3809188, "score": 0, "text": "<p>I saw that the answers to the question include either regex (Automata Theory) or linear recurrences (Linear Algebra), so for those interested I decided to post <a href=\"https://people.math.carleton.ca/%7Ebsteinbg/classes/automata2009/generatingfunctions.pdf\" rel=\"nofollow noreferrer\">this paper</a> on generating functions that neatly explains the interconnection between them.</p>\n" }, { "answer_id": 2087355, "score": 4, "text": "<blockquote>\n <p>Let's denote the event under consideration with $a$\n \\begin{align*}\nP(X=a)=0.7\n\\end{align*}\n and we denote the complementary event with $b$.</p>\n</blockquote>\n\n<p>We are looking for the words of length $n$ and their probability of occurrence which do not contain three or more consecutive $a$'s. 
The result is $1$ minus this probability.</p>\n\n<blockquote>\n <p>We can describe the set of these invalid words as the words built from the alphabet $V=\\{a,b\\}$ which contain at most one or two consecutive $a$'s.</p>\n \n <p>These are words</p>\n \n <ul>\n <li><p>starting with zero or more $b$'s:$$b^*$$</p></li>\n <li><p>followed by zero or more occurrences of $a$ or $aa$ each followed by one or more $b$'s: $$(ab^+|aab^+)^*$$</p></li>\n <li><p>and terminating with zero, one or two $a$'s\n $$(\\varepsilon|a|aa)$$</p></li>\n </ul>\n \n <p>We obtain</p>\n \n <p>\\begin{align*}\nb^*(ab^+|aab^+)^*(\\varepsilon|a|aa)\\tag{1}\n\\end{align*}</p>\n</blockquote>\n\n<p>The regular expression (1) generates all invalid words in a <em>unique</em> manner. In such cases we can use it to derive a generating function $$\\sum_{n=0}^\\infty a_n z^n$$ with\n$a_n$ giving the number of invalid words of length $n$.</p>\n\n<p>In order to do so all we need to know is the <em><a href=\"https://en.wikipedia.org/wiki/Geometric_series#Geometric_power_series\" rel=\"nofollow noreferrer\">geometric series expansion</a></em> since the $star$ operator\n\\begin{align*}\na^*=\\left(\\varepsilon|a|a^2|a^3|\\cdots\\right)\\qquad\\text{ translates to }\\qquad 1+z+z^2+z^3+\\cdots=\\frac{1}{1-z}\n\\end{align*}</p>\n\n<p>Accordingly $a^+=aa^*$ translates to $\\frac{z}{1-z}$ and alternatives like $(\\varepsilon|a|aa)$ can be written as $1+z+z^2$.</p>\n\n<blockquote>\n <p>We translate the regular expression (1) into a generating function (by mixing up somewhat the symbolic to provide some intermediate steps).</p>\n \n <p>Since we want to calculate the probabilities of occurrence of $P(X=a)$ we keep track of $a$ and $b$ by respecting them as corresponding factors in the generating function.</p>\n \n <p>\\begin{align*}\nb^*\\left(ab^+|aab^+\\right)^*(\\varepsilon|a|aa)\n&amp;\\longrightarrow \\quad 
\\frac{1}{1-bz}\\left(\\left.\\frac{abz^2}{1-bz}\\right|\\frac{a^2bz^3}{1-bz}\\right)^*\\left(1+az+a^2z^2\\right)\\\\\n&amp;\\longrightarrow \\quad \\frac{1}{1-bz}\\left(\\frac{abz^2+a^2bz^3}{1-bz}\\right)^*(1+az+a^2z^2)\\\\\n&amp;\\longrightarrow \\quad \\frac{1}{1-bz}\\cdot\\frac{1}{1-\\frac{abz^2+a^2bz^3}{1-bz}}(1+az+a^2z^2)\\\\\n&amp;\\quad\\quad=\\frac{1+az+a^2z^2}{1-bz-abz^2-a^2bz^3}\\tag{1}\n\\end{align*}</p>\n</blockquote>\n\n<p>$$ $$</p>\n\n<blockquote>\n <p>We conclude: The number of invalid words is given by (1). So, the number of valid words is the number of all words minus the number of invalid words. We obtain the generating function $A(z)$\n \\begin{align*}\nA(z)&amp;=\\sum_{n=0}^\\infty(a+b)^nz^n-\\frac{1+az+a^2z^2}{1-bz-az^2-a^2z^3}\\\\\n&amp;=\\frac{1}{1-(a+b)z}-\\frac{1+az+a^2z^2}{1-bz-az^2-a^2z^3}\\\\\n&amp;=\\frac{a^3z^3}{(1-(a+b)z)(1-bz-abz^2-a^2bz^3)}\\\\\n&amp;=a^3z^3+a^3(a+2b)z^4+a^3(\\color{blue}{1}a^2+\\color{blue}{4}ab+\\color{blue}{3}b^2)z^5\\\\\n&amp;\\qquad a^3(a+b)^2(a+4b)z^6+a^3(a^4+7a^3b+18a^2b^2+16ab^3+5b^4)z^7+\\cdots\n\\end{align*}</p>\n \n <p>The expansion was done with the help of Wolfram Alpha. We see that e.g. 
the number of valid words of length $5$ is $\\color{blue}{1}+\\color{blue}{4}+\\color{blue}{3}=8$.</p>\n</blockquote>\n\n<p>$$ $$</p>\n\n<blockquote>\n <p>Out of $2^5=32$ binary words of length $5$ there are $8$ valid words which are marked $\\color{blue}{\\text{blue}}$ in the table below.</p>\n \n <p>We obtain\n \\begin{array}{cccc}\n\\color{blue}{aaa}aa\\qquad&amp;ab\\color{blue}{aaa}\\qquad&amp;b\\color{blue}{aaa}a\\qquad&amp;bb\\color{blue}{aaa}\\\\\n\\color{blue}{aaa}ab\\qquad&amp;abaab\\qquad&amp;b\\color{blue}{aaa}b\\qquad&amp;bbaab\\\\\n\\color{blue}{aaa}ba\\qquad&amp;ababa\\qquad&amp;baaba\\qquad&amp;bbaba\\\\\n\\color{blue}{aaa}bb\\qquad&amp;ababb\\qquad&amp;baabb\\qquad&amp;bbabb\\\\\naabaa\\qquad&amp;abbaa\\qquad&amp;babaa\\qquad&amp;bbbaa\\\\\naabab\\qquad&amp;abbab\\qquad&amp;babab\\qquad&amp;bbbab\\\\\naabba\\qquad&amp;abbba\\qquad&amp;babba\\qquad&amp;bbbba\\\\\naabbb\\qquad&amp;abbbb\\qquad&amp;babbb\\qquad&amp;bbbbb\\\\\n\\end{array}</p>\n</blockquote>\n\n<p>We denote with $[z^n]$ the coefficient of $z^n$ in a series.</p>\n\n<blockquote>\n <p>We conclude</p>\n \n <ul>\n <li>The number of occurrences of wanted words of size $n$ is the coefficient of $z^n$ of $A(z)$ evaluated at $a=b=1$ and given as OEIS sequence <em><a href=\"http://oeis.org/A050231\" rel=\"nofollow noreferrer\">050231</a></em>.</li>\n </ul>\n \n <p>\\begin{align*}\n\\left.[z^n]A(z)\\right|_{a=b=1}\n\\end{align*}</p>\n \n <ul>\n <li><p>The wanted probability of valid words of size $n$ is the coefficient of $z^n$ of $A(z)$ evaluated at $a=0.7,b=0.3$.\n \\begin{align*}\n\\left.[z^n]A(z)\\right|_{a=0.7,b=0.3}\n\\end{align*}</p></li>\n <li><p>We obtain for $n=0$ up to $n=7$\n 
\\begin{array}{c|cl}\nn&amp;\\left.[z^n]A(z)\\right|_{a=b=1}&amp;\\left.[z^n]A(z)\\right|_{a=0.7,b=0.3}\\\\\n\\hline\n0&amp;0&amp;0\\\\\n1&amp;0&amp;0\\\\\n2&amp;0&amp;0\\\\\n3&amp;1&amp;0.343\\\\\n4&amp;3&amp;0.4459\\\\\n5&amp;8&amp;0.5488\\\\\n6&amp;20&amp;0.6517\\\\\n7&amp;47&amp;0.7193\\\\\n\\end{array}</p></li>\n </ul>\n</blockquote>\n" }, { "answer_id": 2094765, "score": 2, "text": "<p>I would propose an alternative - maybe easier - approach, trying to give a formula that provides a generalized answer for the probability of three events in a row, given the probability $p $ that the event occurs, as a function of the sample size $n$. As shown below, this approach may also be useful to determine an additional issue of this problem highlihted in one of the comments, i.e. the sample size needed to \"guarantee\" a success with a given confidence level. </p>\n\n<p>Let us call $P (n) $ the probability that our event occurs three times in a row within a sequence of $n $ trials. Also, for each trial, let us denote as Y the case that the our event occurs, and as N the case that it does not occur. So, in the specific case described in the OP, we are looking for the probability of getting a triple YYY in a sequence of $n $ elements when $p=0.7$.</p>\n\n<p>As a general rule, we can note that, passing from a sample size $n $ to a sample size $n+1$, the probability that our event occurs three times in a row increases only because there is the possibility that the triple YYY, not obtained in the first $n $ trials, occurs as a final triple in the sequence of $n+1$ trials. So, to calculate $P (n+1 )$ from $ P (n) $, we simply have to add the probability that the triple in a row is not obtained until a final sequence NYYY. Note that the N preceding the triple YYY is strictly necessary to satisfy the condition that the triple has not been obtained in the first $n $ trials. </p>\n\n<p>We can then start with $n=3$, for which we trivially have $P (3)=p^3$. 
For $n=4$, we have to add to $P (3) $ the probability to get a sequence NYYY, i.e. we have to add $p^3(1-p)$. Similarly, for $n=5$, we have to add to $P (4) $ the probability to get a final sequence NYYY (note that this is not affected by what happens in the first of the five trials), which means that we must add $p^3(1-p)$ as in the previous case. Lastly, for $n=6$, we have again to add to $P (5) $ the probability to get a final sequence NYYY (note that, as above, this is not affected by what happens in the first and in the second of the six trials). This leads to a further increase by $p^3(1-p)$. So, for this initial group of four values of in our probability progression, we directly obtain the recurrence</p>\n\n<p>$$P (n+1)=P (n)+ p^3 (1-p)\\,\\,\\,\\, \\text {(for n=3 to 6)}$$</p>\n\n<p>Accordingly, for the specific case $p=0.7$, this formula gives the initial values </p>\n\n<p>\\begin{aligned} P(3) &amp; =0.7^3=0.343 \\\\\nP (4) &amp; = 0.7^3+0.3 \\cdot 0.7^3 = 0.4459 \\\\\nP (5) &amp; = 0.7^3+2\\cdot 0.3 \\cdot 0.7^3 = 0.5488 \\\\\nP (6) &amp; = 0.7^3+3\\cdot 0.3 \\cdot 0.7^3 = 0.6517 \\\\\n\\end{aligned} </p>\n\n<p>already obtained in a previous answer via a different method.</p>\n\n<p>Now let us consider the values of $n \\geq 7$. Here something changes, because the fact that the sequence ends with NYYY does not imply that the triple YYY has not occurred before, i.e. in the first $n-4$ trials. So, at each step, the probability of getting a final NYYY must be multiplied to the probability of failure in the first $n-4$ trials, which is $1-P (n-4 ) $. Therefore, for $n=7$ we have to add to $P (6) $ the probability given by $p^3(1-p) [1-P(3)]\\,\\,$, for $n=8$ we have to add to $P (7) $ the probability given by $p^3(1-p) [1-P(4)] \\,\\, $, and so on. This leads to the recurrence</p>\n\n<p>$$P (n+1)=P (n)+ p^3(1-p)[1-P(n-4)]\\,\\,\\,\\, \\text{(for n}\\, \\geq \\text {7)}$$</p>\n\n<p>We can now try to get a closed form for this last recurrence. 
Let us set, by simplicity, $p^3=j$ and $(1-p) =k \\,\\, $. The recurrence becomes </p>\n\n<p>$$P (n+1)=P (n)+ jk-jk \\,P(n-4)\\, \\,$$</p>\n\n<p>Note that the four values of $P (n)$ already determined can be written as $$P (3)=j $$\n$$P (4)=j +jk$$\n$$P (5)=j +2jk$$\n$$P (6)=j +3jk$$</p>\n\n<p>so that they contain only terms of degree $j $ or $jk $. Continuing our probability progression and focusing on the second group of four values of $n$ (from $n=7\\,\\, $ to $n=10 \\,\\, $), new terms of degree $j^2k $ and $j^2k^2$ appear applying the recurrence above (for the sake of readability, here I directly provide the resulting expressions avoiding the rather tedious step-by-step calcuations):</p>\n\n<p>\\begin{aligned} P (7) &amp; = P(6)+ jk-jk P (3)=j+4jk-j^2k \\\\\nP (8) &amp; = P(7)+ jk-jk P (4)= j+ 5jk -2j^2k-j^2k^2 \\\\\nP (9) &amp; = P(8) + jk-jk P (5)= j+ 6jk -3j^2k-3j^2k^2 \\\\\nP (10) &amp; = P (9) + jk-jk P (6)= j+ 7jk -4j^2k-6j^2k^2 \\\\\n\\end{aligned}</p>\n\n<p>Some pattern begins to emerge in these expansions. For the specific case $p=0.7$,\n$$P (7)\\approx 0.7193$$\n$$P (8)\\approx 0.7763$$\n$$P (9) \\approx 0.8228$$\n$$P (10) \\approx 0.8586$$</p>\n\n<p>We can repeat the procedure, applying the same recurrence and continuing with the third group of four values of $n $ (i.e., $n=11$ to $14$). Again, new terms appear, in this case of degree $j^3k^2$ and $j^3 k^3$ (as above, I directly provide the resulting expressions):</p>\n\n<p>\\begin{align*} P(11) &amp;=j+8jk-5j^2k-10j^2k^2+j^3k^2 \\\\ P(12) &amp;=j+9jk-6j^2k-15j^2k^2+3j^3k^2+j^3k^3 \\\\P(13) &amp; = j+10jk-7j^2k -21j^2k^2+6j^3k^2+4j^3k^3 \\\\\nP (14) &amp; = j+11jk-8j^2k-28j^2k^2+10j^3k^2+10j^3k^3\\\\\n\\end{align*}</p>\n\n<p>The final pattern is therefore as follows. Our expansions of $P (n) $ include:</p>\n\n<ul>\n<li><p>in the odd positions, $\\lfloor (n+1)/4 \\rfloor \\,\\, $ terms of the form $$(-1)^{i-1} \\binom{n-3i}{i-1} j^i k^{i-1} \\,\\, $$ where $$i=1,2... 
\\lfloor (n+1)/4 \\rfloor \\,\\, $$ For example, in the expansion of $P (14)$ there are $ \\lfloor (14+1)/4 \\rfloor =3 \\, \\,\\, $ terms of this form, which are $j $, $-8j^2k $, and $10 j^3k^2$;</p></li>\n<li><p>in the even positions, $\\lfloor n/4 \\rfloor \\,\\, $ terms of the form $$(-1)^{i-1} \\binom{n-3i}{i} j^i k^i \\,\\, $$ where $$i=1,2... \\lfloor n/4 \\rfloor$$</p>\n\n<p>Accordingly, taking again for example the expansion of $P (14)$, there are $ \\lfloor 14/4 \\rfloor =3 \\,$ terms of this form, which are $11jk$, $-28j^2k^2$, and $10 j^3k^3$.</p></li>\n</ul>\n\n<p>Therefore, we obtain the following general expression for $P (n) $:</p>\n\n<p>$$P (n)=\\sum_{i}^{\\lfloor (n+1)/4 \\rfloor } (-1)^{i-1} \\binom{n-3i}{i-1} j^{i}k^{i-1} +\\sum_{i}^{\\lfloor n/4 \\rfloor } (-1)^{i-1} \\binom{n-3i}{i} j^i k^i $$</p>\n\n<p>Reminding that $j=p^3$ and $k=(1-p) $, we finally get</p>\n\n<p>$$P (n)=\\sum_{i}^{\\lfloor (n+1)/4 \\rfloor } (-1)^{i-1} \\binom{n-3i}{i-1} p^{3i}(1-p)^{i-1} + \\sum_{i}^{\\lfloor n/4 \\rfloor } (-1)^{i-1} \\binom{n-3i}{i} p^{3i} (1-p)^i $$</p>\n\n<p>Setting $p=0.7$, we obtain the probability asked in the OP for, which is</p>\n\n<p>$$P (n)=\\sum_{i}^{\\lfloor (n+1)/4 \\rfloor } (-1)^{i-1} \\binom{n-3i}{i-1} 0.7^{3i} \\cdot 0.3^{i-1} + \\sum_{i}^{\\lfloor n/4 \\rfloor } (-1)^{i-1} \\binom{n-3i}{i} 0.7^{3i}\\cdot 0.3^i $$</p>\n\n<p>Also note that the two sums have the same number of terms when $n \\not\\equiv -1 \\,\\, \\text {mod}\\,\\, 4$, whereas the first sum has one term more than the second one when $n \\equiv -1 \\,\\,\\text {mod} \\, 4$, because in this case $(n+1)/4$ is integer. 
This term is given by</p>\n\n<p>$$\\displaystyle (-1)^{\\frac{(n+1)}{4}-1} \\binom{n-\\frac{3(n+1)}{4}}{ \\frac{(n+1)}{4}-1}\\, 0.7^{\\frac{3(n+1)}{4}} \\, 0.3^{\\frac{(n+1)}{4}} $$ \n$$\\displaystyle = (-1)^{\\frac{(n-3)}{4}} \\binom{\\frac{(n-3)}{4}}{\\frac{(n-3)}{4}}\\, 0.7^{\\frac{3(n+1)}{4}} \\, 0.3^{\\frac{(n+1)}{4}} $$\n$$\\displaystyle = (-1)^{\\frac{(n-3)}{4}} \\, 0.7^{\\frac{3(n+1)}{4}} \\, 0.3^{\\frac{(n+1)}{4}} $$</p>\n\n<p>So the probability asked in the OP can be alternatively expressed as</p>\n\n<p>$$P (n)=\\sum_{i}^{\\lfloor n/4 \\rfloor } (-1)^{i-1} \\binom{n-3i}{i}\\, 0.7^{3i}\\, 0.3^{i} \\left( \\frac {i}{0.3(n-4 i+1)} +1 \\right) + S$$</p>\n\n<p>where</p>\n\n<p>$$S = \\begin{cases} 0 &amp; \\mbox{if } n \\not\\equiv -1 \\, \\text {mod}\\, 4\\\\\n (-1)^{\\frac{(n-3)}{4}} \\, 0.7^{\\frac{3(n+1)}{4}} \\, 0.3^{\\frac{(n+1)}{4}} &amp; \\mbox{if } n \\equiv -1 \\, \\text {mod}\\, 4 \\end{cases}$$</p>\n\n<p>It could also be observed that $|S|$ is relatively small and rapidly decreases for increasing $n$: its value is $\\approx 0.0106\\, \\,\\, $ for $n=7$, becoming $\\approx 0.00109\\, \\,\\, $ for $n=11$ and $\\approx 0.000112\\,\\,\\, $ for $n=15$, so that its contribute to the overall probability is nearly negligible. The last expansion without the $S$ term can therefore considered a valid approximation even for the case $n \\equiv -1 \\, \\text {mod}\\, 4$. </p>\n\n<p>The probability of three events in a row given by these formulas for $p=0.7$ and $n=3$ to $10$ have already been provided above. As expected, for increasing $n $, the value of $P (n)$ approaches $1$. The first value for which the probability achieves $95\\%$ is $n=15$, since $P (15) \\approx 0.9553\\,\\,\\, $, as confirmed by WA <a href=\"http://www.wolframalpha.com/input/?i=0.7%5E(12)%200.3%5E(4)%20%2B%20%20sum%20%20%5B(-1)%5E(i-1)%20(((15-3i)%20choose%20i)%200.7%5E(3i)%200.3%5Ei%20(i%2F(0.3(15-4%20i%2B1))%2B1)%20%20%20),%7Bi,%2015%2F4%7D%5D\" rel=\"nofollow noreferrer\">here</a>. 
The probability achieves $97.5\\%$ and $99\\%$ for $n=18$ and $n=22$, respectively, since $P(18) \\approx 0.9772\\,\\,\\, $ and $P(22) \\approx 0.9909\\,\\,\\, $, as confirmed by WA <a href=\"http://www.wolframalpha.com/input/?i=sum%20%20%5B(-1)%5E(i-1)%20(((18-3i)%20choose%20i)%200.7%5E(3i)%200.3%5Ei%20(i%2F(0.3(18-4%20i%2B1))%2B1)%20%20%20),%7Bi,%2018%2F4%7D%5D\" rel=\"nofollow noreferrer\">here</a> and <a href=\"http://www.wolframalpha.com/input/?i=sum%20%20%5B(-1)%5E(i-1)%20(((22-3i)%20choose%20i)%200.7%5E(3i)%200.3%5Ei%20(i%2F(0.3(22-4%20i%2B1))%2B1)%20%20%20),%7Bi,%2022%2F4%7D%5D\" rel=\"nofollow noreferrer\">here</a>. </p>\n" }, { "answer_id": 2083011, "score": 1, "text": "<p>The issue with looking at opportunities for success is that they are not completely independent. For example, in a row of size 4, if the first three aren't all successes, this greatly decreases the chance that the last three aren't all successes (the only way this can happen is if only the first trial was a failure).</p>\n\n<p>You can still get this approach to work (via correcting for events which are dependent via some sort of inclusion-exclusion), but generally a cleaner method for these types of problems is to write down a recurrence (or more generally, model them as a <a href=\"https://en.wikipedia.org/wiki/Markov_chain\" rel=\"nofollow noreferrer\">finite state Markov chain</a>).</p>\n\n<p>Let's let $P(N)$ be the probability a row of length $N$ has no 3 consecutive successes. To help us write down a recurrence, we'll break $P(N)$ down into 3 smaller probabilities. Let's let $P(N, 0)$ be the probability a row of length $N$ has no 3 consecutive successes AND that the last attempt was a failure. 
Similarly, let $P(N,1)$ be the probability that a row of length $N$ has no 3 consecutive successes AND that the last attempt was a success, but the second-last attempt was a failure, and finally, let $P(N,2)$ be the probability that a row of length $N$ has no 3 consecutive successes AND the last two attempts were successes, but the third-to-last attempt was a failure. Intuitively, these different cases measure how \"close\" we are to getting 3 successes in a row. It's not too hard to see that $P(N) = P(N,0) + P(N,1) + P(N,2)$, since if the last three attempts are all successes, you've already won.</p>\n\n<p>Now, let's try to write $P(N+1, 0)$ in terms of smaller cases. The probability you have no 3 consecutive successes in a row of length $N+1$ and that your last attempt was a failure is simply the probability that there are no 3 consecutive successes in the first $N$ tries and that you fail the $N+1$st try. This gives us the equation:</p>\n\n<p>$$P(N+1,0) = 0.3P(N) = 0.3P(N,0) + 0.3P(N,1) + 0.3P(N,2)$$</p>\n\n<p>Via similar logic, we can show that $P(N+1,1)$ and $P(N+1,2)$ satisfy the following equations:</p>\n\n<p>$$P(N+1,1) = 0.7P(N, 0)$$</p>\n\n<p>and</p>\n\n<p>$$P(N+1,2) = 0.7P(N, 1)$$</p>\n\n<p>Substituting these into the original equation for $P(N+1,0)$, we get:</p>\n\n<p>$$P(N+1, 0) = 0.3P(N,0) + 0.21P(N-1, 0) + 0.147P(N-2,0)$$</p>\n\n<p>Now, this is just a standard <a href=\"https://en.wikipedia.org/wiki/Recurrence_relation\" rel=\"nofollow noreferrer\">linear recurrence relation</a>. If $\\lambda_1$, $\\lambda_2$, and $\\lambda_3$ are the roots of $z^3 - 0.3z^2-0.21z-0.147=0$, then there are some constants $a_1, a_2, a_3$ which you can solve for so that</p>\n\n<p>$$P(N, 0) = a_1\\lambda_1^N + a_2\\lambda_2^N + a_3\\lambda_3^N$$</p>\n\n<p>Since $P(N) = P(N+1,0)/0.3$, this gives a similar formula for $P(N)$ (and ultimately for $1-P(N)$, the probability a success does occur 3 consecutive times). 
</p>\n" }, { "answer_id": 2095652, "score": 1, "text": "<p>Let $p=0.7$ be the probability of success, $\\mathrm{f}(n)$ be the probability of a sequence of length $n$ having 3 consecutive successes and $\\mathrm{g}(n)=1-\\mathrm{f}(n)$, where $\\mathrm{g}(n)$ corresponds to the probability of a sequence of length $n$ having at most 2 consecutive successes.</p>\n\n<p>It holds $\\mathrm{g}(0) = \\mathrm{g}(1) = \\mathrm{g}(2) = 1$.</p>\n\n<p>Then, for $n \\geq 3$, $\\mathrm{g}(n) = (1-p)\\mathrm{g}(n-1) + (1-p)p\\mathrm{g}(n-2) + (1-p)p^2\\mathrm{g}(n-3)$.</p>\n\n<p>This holds because all sequences of length $n$ that have at most 2 consecutive successes can be constructed from (1) all such sequences of length $n-1$ suffixed by a failure; plus (2) all such sequences of length $n-2$ suffixed by a failure and a success; plus (3) all such sequences of length $n-3$ suffixed by a failure and two successes.</p>\n" }, { "answer_id": 3809188, "score": 0, "text": "<p>I saw that the answers to the question include either regex (Automata Theory) or linear recurrences (Linear Algebra), so for those interested I decided to post <a href=\"https://people.math.carleton.ca/%7Ebsteinbg/classes/automata2009/generatingfunctions.pdf\" rel=\"nofollow noreferrer\">this paper</a> on generating functions that neatly explains the interconnection between them.</p>\n" }, { "answer_id": 2087355, "score": 4, "text": "<blockquote>\n <p>Let's denote the event under consideration with $a$\n \\begin{align*}\nP(X=a)=0.7\n\\end{align*}\n and we denote the complementary event with $b$.</p>\n</blockquote>\n\n<p>We are looking for the words of length $n$ and their probability of occurrence which do not contain three or more consecutive $a$'s. 
The result is $1$ minus this probability.</p>\n\n<blockquote>\n <p>We can describe the set of these invalid words as the words built from the alphabet $V=\\{a,b\\}$ which contain at most one or two consecutive $a$'s.</p>\n \n <p>These are words</p>\n \n <ul>\n <li><p>starting with zero or more $b$'s: $$b^*$$</p></li>\n <li><p>followed by zero or more occurrences of $a$ or $aa$ each followed by one or more $b$'s: $$(ab^+|aab^+)^*$$</p></li>\n <li><p>and terminating with zero, one or two $a$'s\n $$(\\varepsilon|a|aa)$$</p></li>\n </ul>\n \n <p>We obtain</p>\n \n <p>\\begin{align*}\nb^*(ab^+|aab^+)^*(\\varepsilon|a|aa)\\tag{1}\n\\end{align*}</p>\n</blockquote>\n\n<p>The regular expression (1) generates all invalid words in a <em>unique</em> manner. In such cases we can use it to derive a generating function $$\\sum_{n=0}^\\infty a_n z^n$$ with\n$a_n$ giving the number of invalid words of length $n$.</p>\n\n<p>In order to do so, all we need to know is the <em><a href=\"https://en.wikipedia.org/wiki/Geometric_series#Geometric_power_series\" rel=\"nofollow noreferrer\">geometric series expansion</a></em>, since the $star$ operator\n\\begin{align*}\na^*=\\left(\\varepsilon|a|a^2|a^3|\\cdots\\right)\\qquad\\text{ translates to }\\qquad 1+z+z^2+z^3+\\cdots=\\frac{1}{1-z}\n\\end{align*}</p>\n\n<p>Accordingly $a^+=aa^*$ translates to $\\frac{z}{1-z}$, and alternatives like $(\\varepsilon|a|aa)$ can be written as $1+z+z^2$.</p>\n\n<blockquote>\n <p>We translate the regular expression (1) into a generating function (mixing the symbolic notation somewhat in order to show some intermediate steps).</p>\n \n <p>Since we want to calculate the probabilities of occurrence based on $P(X=a)$, we keep track of $a$ and $b$ by treating them as corresponding factors in the generating function.</p>\n \n <p>\\begin{align*}\nb^*\\left(ab^+|aab^+\\right)^*(\\varepsilon|a|aa)\n&amp;\\longrightarrow \\quad 
\\frac{1}{1-bz}\\left(\\left.\\frac{abz^2}{1-bz}\\right|\\frac{a^2bz^3}{1-bz}\\right)^*\\left(1+az+a^2z^2\\right)\\\\\n&amp;\\longrightarrow \\quad \\frac{1}{1-bz}\\left(\\frac{abz^2+a^2bz^3}{1-bz}\\right)^*(1+az+a^2z^2)\\\\\n&amp;\\longrightarrow \\quad \\frac{1}{1-bz}\\cdot\\frac{1}{1-\\frac{abz^2+a^2bz^3}{1-bz}}(1+az+a^2z^2)\\\\\n&amp;\\quad\\quad=\\frac{1+az+a^2z^2}{1-bz-abz^2-a^2bz^3}\\tag{2}\n\\end{align*}</p>\n</blockquote>\n\n<p>$$ $$</p>\n\n<blockquote>\n <p>We conclude: The number of invalid words is given by (2). So, the number of valid words is the number of all words minus the number of invalid words. We obtain the generating function $A(z)$\n \\begin{align*}\nA(z)&amp;=\\sum_{n=0}^\\infty(a+b)^nz^n-\\frac{1+az+a^2z^2}{1-bz-abz^2-a^2bz^3}\\\\\n&amp;=\\frac{1}{1-(a+b)z}-\\frac{1+az+a^2z^2}{1-bz-abz^2-a^2bz^3}\\\\\n&amp;=\\frac{a^3z^3}{(1-(a+b)z)(1-bz-abz^2-a^2bz^3)}\\\\\n&amp;=a^3z^3+a^3(a+2b)z^4+a^3(\\color{blue}{1}a^2+\\color{blue}{4}ab+\\color{blue}{3}b^2)z^5\\\\\n&amp;\\qquad +a^3(a+b)^2(a+4b)z^6+a^3(a^4+7a^3b+18a^2b^2+16ab^3+5b^4)z^7+\\cdots\n\\end{align*}</p>\n \n <p>The expansion was done with the help of Wolfram Alpha. We see that e.g. 
the number of valid words of length $5$ is $\\color{blue}{1}+\\color{blue}{4}+\\color{blue}{3}=8$.</p>\n</blockquote>\n\n<p>$$ $$</p>\n\n<blockquote>\n <p>Out of $2^5=32$ binary words of length $5$ there are $8$ valid words which are marked $\\color{blue}{\\text{blue}}$ in the table below.</p>\n \n <p>We obtain\n \\begin{array}{cccc}\n\\color{blue}{aaa}aa\\qquad&amp;ab\\color{blue}{aaa}\\qquad&amp;b\\color{blue}{aaa}a\\qquad&amp;bb\\color{blue}{aaa}\\\\\n\\color{blue}{aaa}ab\\qquad&amp;abaab\\qquad&amp;b\\color{blue}{aaa}b\\qquad&amp;bbaab\\\\\n\\color{blue}{aaa}ba\\qquad&amp;ababa\\qquad&amp;baaba\\qquad&amp;bbaba\\\\\n\\color{blue}{aaa}bb\\qquad&amp;ababb\\qquad&amp;baabb\\qquad&amp;bbabb\\\\\naabaa\\qquad&amp;abbaa\\qquad&amp;babaa\\qquad&amp;bbbaa\\\\\naabab\\qquad&amp;abbab\\qquad&amp;babab\\qquad&amp;bbbab\\\\\naabba\\qquad&amp;abbba\\qquad&amp;babba\\qquad&amp;bbbba\\\\\naabbb\\qquad&amp;abbbb\\qquad&amp;babbb\\qquad&amp;bbbbb\\\\\n\\end{array}</p>\n</blockquote>\n\n<p>We denote with $[z^n]$ the coefficient of $z^n$ in a series.</p>\n\n<blockquote>\n <p>We conclude</p>\n \n <ul>\n <li>The number of occurrences of wanted words of size $n$ is the coefficient of $z^n$ of $A(z)$ evaluated at $a=b=1$ and given as OEIS sequence <em><a href=\"http://oeis.org/A050231\" rel=\"nofollow noreferrer\">050231</a></em>.</li>\n </ul>\n \n <p>\\begin{align*}\n\\left.[z^n]A(z)\\right|_{a=b=1}\n\\end{align*}</p>\n \n <ul>\n <li><p>The wanted probability of valid words of size $n$ is the coefficient of $z^n$ of $A(z)$ evaluated at $a=0.7,b=0.3$.\n \\begin{align*}\n\\left.[z^n]A(z)\\right|_{a=0.7,b=0.3}\n\\end{align*}</p></li>\n <li><p>We obtain for $n=0$ up to $n=7$\n 
\\begin{array}{c|cl}\nn&amp;\\left.[z^n]A(z)\\right|_{a=b=1}&amp;\\left.[z^n]A(z)\\right|_{a=0.7,b=0.3}\\\\\n\\hline\n0&amp;0&amp;0\\\\\n1&amp;0&amp;0\\\\\n2&amp;0&amp;0\\\\\n3&amp;1&amp;0.343\\\\\n4&amp;3&amp;0.4459\\\\\n5&amp;8&amp;0.5488\\\\\n6&amp;20&amp;0.6517\\\\\n7&amp;47&amp;0.7193\\\\\n\\end{array}</p></li>\n </ul>\n</blockquote>\n" }, { "answer_id": 2094765, "score": 2, "text": "<p>I would propose an alternative - maybe easier - approach, trying to give a formula that provides a generalized answer for the probability of three events in a row, given the probability $p $ that the event occurs, as a function of the sample size $n$. As shown below, this approach may also be useful to address an additional issue of this problem highlighted in one of the comments, i.e. the sample size needed to \"guarantee\" a success with a given confidence level. </p>\n\n<p>Let us call $P (n) $ the probability that our event occurs three times in a row within a sequence of $n $ trials. Also, for each trial, let us denote as Y the case that our event occurs, and as N the case that it does not occur. So, in the specific case described in the OP, we are looking for the probability of getting a triple YYY in a sequence of $n $ elements when $p=0.7$.</p>\n\n<p>As a general rule, we can note that, passing from a sample size $n $ to a sample size $n+1$, the probability that our event occurs three times in a row increases only because there is the possibility that the triple YYY, not obtained in the first $n $ trials, occurs as a final triple in the sequence of $n+1$ trials. So, to calculate $P (n+1 )$ from $ P (n) $, we simply have to add the probability that the triple first occurs as a final sequence NYYY. Note that the N preceding the triple YYY is strictly necessary to satisfy the condition that the triple has not been obtained in the first $n $ trials. </p>\n\n<p>We can then start with $n=3$, for which we trivially have $P (3)=p^3$. 
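As a side check of the counting column in the table above (OEIS sequence A050231), one can count the length-$n$ binary words containing a run $aaa$ via an unweighted analogue of the $g$-recurrence quoted in another answer, $g(n)=g(n-1)+g(n-2)+g(n-3)$, and subtract from $2^n$; a small Python sketch (the function name is mine, and the unweighted recurrence is my shortcut, not taken verbatim from any answer):

```python
def words_with_triple(n):
    # g(m): binary words of length m with no 3 consecutive a's,
    # the unweighted analogue of the probabilistic g-recurrence
    g = [1, 2, 4]                        # lengths 0, 1, 2
    for m in range(3, n + 1):
        g.append(g[m - 1] + g[m - 2] + g[m - 3])
    return 2 ** n - g[n]

print([words_with_triple(n) for n in range(3, 8)])
```

For $n=3,\dots,7$ this reproduces the counts 1, 3, 8, 20, 47 from the table.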
For $n=4$, we have to add to $P (3) $ the probability of getting a final sequence NYYY, i.e. we have to add $p^3(1-p)$. Similarly, for $n=5$, we have to add to $P (4) $ the probability of getting a final sequence NYYY (note that this is not affected by what happens in the first of the five trials), which means that we must add $p^3(1-p)$ as in the previous case. Lastly, for $n=6$, we have again to add to $P (5) $ the probability of getting a final sequence NYYY (note that, as above, this is not affected by what happens in the first and in the second of the six trials). This leads to a further increase by $p^3(1-p)$. So, for this initial group of four values of $n$ in our probability progression, we directly obtain the recurrence</p>\n\n<p>$$P (n)=P (n-1)+ p^3 (1-p)\\,\\,\\,\\, \\text {(for n=4 to 6)}$$</p>\n\n<p>Accordingly, for the specific case $p=0.7$, this formula gives the initial values </p>\n\n<p>\\begin{aligned} P(3) &amp; =0.7^3=0.343 \\\\\nP (4) &amp; = 0.7^3+0.3 \\cdot 0.7^3 = 0.4459 \\\\\nP (5) &amp; = 0.7^3+2\\cdot 0.3 \\cdot 0.7^3 = 0.5488 \\\\\nP (6) &amp; = 0.7^3+3\\cdot 0.3 \\cdot 0.7^3 = 0.6517 \\\\\n\\end{aligned} </p>\n\n<p>already obtained in a previous answer via a different method.</p>\n\n<p>Now let us consider the values of $n \\geq 7$. Here something changes, because the fact that the sequence ends with NYYY does not imply that the triple YYY has not occurred before, i.e. in the first $n-4$ trials. So, at each step, the probability of getting a final NYYY must be multiplied by the probability of failure in the first $n-4$ trials, which is $1-P (n-4 ) $. Therefore, for $n=7$ we have to add to $P (6) $ the probability given by $p^3(1-p) [1-P(3)]\\,\\,$, for $n=8$ we have to add to $P (7) $ the probability given by $p^3(1-p) [1-P(4)] \\,\\, $, and so on. This leads to the recurrence</p>\n\n<p>$$P (n)=P (n-1)+ p^3(1-p)[1-P(n-4)]\\,\\,\\,\\, \\text{(for n}\\, \\geq \\text {7)}$$</p>\n\n<p>We can now try to get a closed form for this last recurrence. 
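The recurrence can also be iterated numerically as a sanity check; this short Python sketch (the function name is mine) follows the step-by-step additions described in the worked examples, adding p^3(1-p) for m = 4 to 6 and p^3(1-p)[1 - P(m-4)] from m = 7 on:

```python
def p_triple(n, p=0.7):
    """P(m) = P(m-1) + p^3 (1-p)              for m = 4..6,
       P(m) = P(m-1) + p^3 (1-p) (1 - P(m-4)) from m = 7 on."""
    P = [0.0, 0.0, 0.0, p ** 3]          # P(0), P(1), P(2), P(3)
    for m in range(4, n + 1):
        inc = p ** 3 * (1 - p)
        if m >= 7:
            inc *= 1 - P[m - 4]          # no triple in the first m-4 trials
        P.append(P[m - 1] + inc)
    return P[n]

print(round(p_triple(6), 4), round(p_triple(10), 4))
```

For $p=0.7$ this gives $0.6517$ and $0.8586$ for $n=6$ and $n=10$, in line with the values listed in this answer.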
Let us set, for simplicity, $p^3=j$ and $(1-p) =k \\,\\, $. The recurrence becomes </p>\n\n<p>$$P (n)=P (n-1)+ jk-jk \\,P(n-4)\\, \\,$$</p>\n\n<p>Note that the four values of $P (n)$ already determined can be written as $$P (3)=j $$\n$$P (4)=j +jk$$\n$$P (5)=j +2jk$$\n$$P (6)=j +3jk$$</p>\n\n<p>so that they contain only terms of degree $j $ or $jk $. Continuing our probability progression and focusing on the second group of four values of $n$ (from $n=7\\,\\, $ to $n=10 \\,\\, $), new terms of degree $j^2k $ and $j^2k^2$ appear applying the recurrence above (for the sake of readability, here I directly provide the resulting expressions avoiding the rather tedious step-by-step calculations):</p>\n\n<p>\\begin{aligned} P (7) &amp; = P(6)+ jk-jk P (3)=j+4jk-j^2k \\\\\nP (8) &amp; = P(7)+ jk-jk P (4)= j+ 5jk -2j^2k-j^2k^2 \\\\\nP (9) &amp; = P(8) + jk-jk P (5)= j+ 6jk -3j^2k-3j^2k^2 \\\\\nP (10) &amp; = P (9) + jk-jk P (6)= j+ 7jk -4j^2k-6j^2k^2 \\\\\n\\end{aligned}</p>\n\n<p>Some pattern begins to emerge in these expansions. For the specific case $p=0.7$,\n$$P (7)\\approx 0.7193$$\n$$P (8)\\approx 0.7763$$\n$$P (9) \\approx 0.8228$$\n$$P (10) \\approx 0.8586$$</p>\n\n<p>We can repeat the procedure, applying the same recurrence and continuing with the third group of four values of $n $ (i.e., $n=11$ to $14$). Again, new terms appear, in this case of degree $j^3k^2$ and $j^3 k^3$ (as above, I directly provide the resulting expressions):</p>\n\n<p>\\begin{align*} P(11) &amp;=j+8jk-5j^2k-10j^2k^2+j^3k^2 \\\\ P(12) &amp;=j+9jk-6j^2k-15j^2k^2+3j^3k^2+j^3k^3 \\\\P(13) &amp; = j+10jk-7j^2k -21j^2k^2+6j^3k^2+4j^3k^3 \\\\\nP (14) &amp; = j+11jk-8j^2k-28j^2k^2+10j^3k^2+10j^3k^3\\\\\n\\end{align*}</p>\n\n<p>The final pattern is therefore as follows. Our expansions of $P (n) $ include:</p>\n\n<ul>\n<li><p>in the odd positions, $\\lfloor (n+1)/4 \\rfloor \\,\\, $ terms of the form $$(-1)^{i-1} \\binom{n-3i}{i-1} j^i k^{i-1} \\,\\, $$ where $$i=1,2... 
\\lfloor (n+1)/4 \\rfloor \\,\\, $$ For example, in the expansion of $P (14)$ there are $ \\lfloor (14+1)/4 \\rfloor =3 \\, \\,\\, $ terms of this form, which are $j $, $-8j^2k $, and $10 j^3k^2$;</p></li>\n<li><p>in the even positions, $\\lfloor n/4 \\rfloor \\,\\, $ terms of the form $$(-1)^{i-1} \\binom{n-3i}{i} j^i k^i \\,\\, $$ where $$i=1,2... \\lfloor n/4 \\rfloor$$</p>\n\n<p>Accordingly, taking again for example the expansion of $P (14)$, there are $ \\lfloor 14/4 \\rfloor =3 \\,$ terms of this form, which are $11jk$, $-28j^2k^2$, and $10 j^3k^3$.</p></li>\n</ul>\n\n<p>Therefore, we obtain the following general expression for $P (n) $:</p>\n\n<p>$$P (n)=\\sum_{i=1}^{\\lfloor (n+1)/4 \\rfloor } (-1)^{i-1} \\binom{n-3i}{i-1} j^{i}k^{i-1} +\\sum_{i=1}^{\\lfloor n/4 \\rfloor } (-1)^{i-1} \\binom{n-3i}{i} j^i k^i $$</p>\n\n<p>Recalling that $j=p^3$ and $k=(1-p) $, we finally get</p>\n\n<p>$$P (n)=\\sum_{i=1}^{\\lfloor (n+1)/4 \\rfloor } (-1)^{i-1} \\binom{n-3i}{i-1} p^{3i}(1-p)^{i-1} + \\sum_{i=1}^{\\lfloor n/4 \\rfloor } (-1)^{i-1} \\binom{n-3i}{i} p^{3i} (1-p)^i $$</p>\n\n<p>Setting $p=0.7$, we obtain the probability asked for in the OP, which is</p>\n\n<p>$$P (n)=\\sum_{i=1}^{\\lfloor (n+1)/4 \\rfloor } (-1)^{i-1} \\binom{n-3i}{i-1} 0.7^{3i} \\cdot 0.3^{i-1} + \\sum_{i=1}^{\\lfloor n/4 \\rfloor } (-1)^{i-1} \\binom{n-3i}{i} 0.7^{3i}\\cdot 0.3^i $$</p>\n\n<p>Also note that the two sums have the same number of terms when $n \\not\\equiv -1 \\,\\, \\text {mod}\\,\\, 4$, whereas the first sum has one term more than the second one when $n \\equiv -1 \\,\\,\\text {mod} \\, 4$, because in this case $(n+1)/4$ is an integer. 
This term is given by</p>\n\n<p>$$\\displaystyle (-1)^{\\frac{(n+1)}{4}-1} \\binom{n-\\frac{3(n+1)}{4}}{ \\frac{(n+1)}{4}-1}\\, 0.7^{\\frac{3(n+1)}{4}} \\, 0.3^{\\frac{(n+1)}{4}-1} $$ \n$$\\displaystyle = (-1)^{\\frac{(n-3)}{4}} \\binom{\\frac{(n-3)}{4}}{\\frac{(n-3)}{4}}\\, 0.7^{\\frac{3(n+1)}{4}} \\, 0.3^{\\frac{(n-3)}{4}} $$\n$$\\displaystyle = (-1)^{\\frac{(n-3)}{4}} \\, 0.7^{\\frac{3(n+1)}{4}} \\, 0.3^{\\frac{(n-3)}{4}} $$</p>\n\n<p>So the probability asked in the OP can be alternatively expressed as</p>\n\n<p>$$P (n)=\\sum_{i=1}^{\\lfloor n/4 \\rfloor } (-1)^{i-1} \\binom{n-3i}{i}\\, 0.7^{3i}\\, 0.3^{i} \\left( \\frac {i}{0.3(n-4 i+1)} +1 \\right) + S$$</p>\n\n<p>where</p>\n\n<p>$$S = \\begin{cases} 0 &amp; \\mbox{if } n \\not\\equiv -1 \\, \\text {mod}\\, 4\\\\\n (-1)^{\\frac{(n-3)}{4}} \\, 0.7^{\\frac{3(n+1)}{4}} \\, 0.3^{\\frac{(n-3)}{4}} &amp; \\mbox{if } n \\equiv -1 \\, \\text {mod}\\, 4 \\end{cases}$$</p>\n\n<p>It can also be observed that $|S|$ rapidly decreases for increasing $n$: its value is $\\approx 0.0353\\, \\,\\, $ for $n=7$, becoming $\\approx 0.00363\\, \\,\\, $ for $n=11$ and $\\approx 0.000374\\,\\,\\, $ for $n=15$, so that its contribution to the overall probability quickly becomes negligible. The last expansion without the $S$ term can therefore be considered a valid approximation for the case $n \\equiv -1 \\, \\text {mod}\\, 4$ when $n$ is not too small. </p>\n\n<p>The probabilities of three events in a row given by these formulas for $p=0.7$ and $n=3$ to $10$ have already been provided above. As expected, for increasing $n $, the value of $P (n)$ approaches $1$. The first value for which the probability achieves $95\\%$ is $n=15$, since $P (15) \\approx 0.9549\\,\\,\\, $; cf. the WA computation <a href=\"http://www.wolframalpha.com/input/?i=0.7%5E(12)%200.3%5E(4)%20%2B%20%20sum%20%20%5B(-1)%5E(i-1)%20(((15-3i)%20choose%20i)%200.7%5E(3i)%200.3%5Ei%20(i%2F(0.3(15-4%20i%2B1))%2B1)%20%20%20),%7Bi,%2015%2F4%7D%5D\" rel=\"nofollow noreferrer\">here</a>. 
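The two-sum expression above can be cross-checked against the step-by-step recurrence; the following Python sketch (the function names are mine) evaluates both and asserts that they agree for a range of $n$:

```python
from math import comb

def p_closed_form(n, p=0.7):
    # the two-sum expression, with j = p^3 and k = 1 - p
    j, k = p ** 3, 1 - p
    odd = sum((-1) ** (i - 1) * comb(n - 3 * i, i - 1) * j ** i * k ** (i - 1)
              for i in range(1, (n + 1) // 4 + 1))
    even = sum((-1) ** (i - 1) * comb(n - 3 * i, i) * j ** i * k ** i
               for i in range(1, n // 4 + 1))
    return odd + even

def p_step(n, p=0.7):
    # step-by-step recurrence from the beginning of this answer
    P = [0.0, 0.0, 0.0, p ** 3]
    for m in range(4, n + 1):
        inc = p ** 3 * (1 - p)
        if m >= 7:
            inc *= 1 - P[m - 4]
        P.append(P[m - 1] + inc)
    return P[n]

assert all(abs(p_closed_form(n) - p_step(n)) < 1e-12 for n in range(3, 26))
```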
The probability achieves $97.5\\%$ and $99\\%$ for $n=18$ and $n=22$, respectively, since $P(18) \\approx 0.9772\\,\\,\\, $ and $P(22) \\approx 0.9909\\,\\,\\, $, as confirmed by WA <a href=\"http://www.wolframalpha.com/input/?i=sum%20%20%5B(-1)%5E(i-1)%20(((18-3i)%20choose%20i)%200.7%5E(3i)%200.3%5Ei%20(i%2F(0.3(18-4%20i%2B1))%2B1)%20%20%20),%7Bi,%2018%2F4%7D%5D\" rel=\"nofollow noreferrer\">here</a> and <a href=\"http://www.wolframalpha.com/input/?i=sum%20%20%5B(-1)%5E(i-1)%20(((22-3i)%20choose%20i)%200.7%5E(3i)%200.3%5Ei%20(i%2F(0.3(22-4%20i%2B1))%2B1)%20%20%20),%7Bi,%2022%2F4%7D%5D\" rel=\"nofollow noreferrer\">here</a>. </p>\n" }, { "answer_id": 2083011, "score": 1, "text": "<p>The issue with looking at opportunities for success is that they are not completely independent. For example, in a row of size 4, if the first three aren't all successes, this greatly decreases the chance that the last three aren't all successes (the only way this can happen is if only the first trial was a failure).</p>\n\n<p>You can still get this approach to work (via correcting for events which are dependent via some sort of inclusion-exclusion), but generally a cleaner method for these types of problems is to write down a recurrence (or more generally, model them as a <a href=\"https://en.wikipedia.org/wiki/Markov_chain\" rel=\"nofollow noreferrer\">finite state Markov chain</a>).</p>\n\n<p>Let's let $P(N)$ be the probability a row of length $N$ has no 3 consecutive successes. To help us write down a recurrence, we'll break $P(N)$ down into 3 smaller probabilities. Let's let $P(N, 0)$ be the probability a row of length $N$ has no 3 consecutive successes AND that the last attempt was a failure. 
Similarly, let $P(N,1)$ be the probability that a row of length $N$ has no 3 consecutive successes AND that the last attempt was a success, but the second-last attempt was a failure, and finally, let $P(N,2)$ be the probability that a row of length $N$ has no 3 consecutive successes AND the last two attempts were successes, but the third-to-last attempt was a failure. Intuitively, these different cases measure how \"close\" we are to getting 3 successes in a row. It's not too hard to see that $P(N) = P(N,0) + P(N,1) + P(N,2)$, since if the last three attempts are all successes, you've already won.</p>\n\n<p>Now, let's try to write $P(N+1, 0)$ in terms of smaller cases. The probability you have no 3 consecutive successes in a row of length $N+1$ and that your last attempt was a failure is simply the probability that there are no 3 consecutive successes in the first $N$ tries and that you fail the $N+1$st try. This gives us the equation:</p>\n\n<p>$$P(N+1,0) = 0.3P(N) = 0.3P(N,0) + 0.3P(N,1) + 0.3P(N,2)$$</p>\n\n<p>Via similar logic, we can show that $P(N+1,1)$ and $P(N+1,2)$ satisfy the following equations:</p>\n\n<p>$$P(N+1,1) = 0.7P(N, 0)$$</p>\n\n<p>and</p>\n\n<p>$$P(N+1,2) = 0.7P(N, 1)$$</p>\n\n<p>Substituting these into the original equation for $P(N+1,0)$, we get:</p>\n\n<p>$$P(N+1, 0) = 0.3P(N,0) + 0.21P(N-1, 0) + 0.147P(N-2,0)$$</p>\n\n<p>Now, this is just a standard <a href=\"https://en.wikipedia.org/wiki/Recurrence_relation\" rel=\"nofollow noreferrer\">linear recurrence relation</a>. If $\\lambda_1$, $\\lambda_2$, and $\\lambda_3$ are the roots of $z^3 - 0.3z^2-0.21z-0.147=0$, then there are some constants $a_1, a_2, a_3$ which you can solve for so that</p>\n\n<p>$$P(N, 0) = a_1\\lambda_1^N + a_2\\lambda_2^N + a_3\\lambda_3^N$$</p>\n\n<p>Since $P(N) = P(N+1,0)/0.3$, this gives a similar formula for $P(N)$ (and ultimately for $1-P(N)$, the probability a success does occur 3 consecutive times). 
</p>\n" }, { "answer_id": 2095652, "score": 1, "text": "<p>Let $p=0.7$ be the probability of success, $\\mathrm{f}(n)$ be the probability of a sequence of length $n$ having 3 consecutive successes and $\\mathrm{g}(n)=1-\\mathrm{f}(n)$, where $\\mathrm{g}(n)$ corresponds to the probability of a sequence of length $n$ having at most 2 consecutive successes.</p>\n\n<p>It holds $\\mathrm{g}(0) = \\mathrm{g}(1) = \\mathrm{g}(2) = 1$.</p>\n\n<p>Then $\\mathrm{g}(n) = (1-p)\\mathrm{g}(n-1) + (1-p)p\\mathrm{g}(n-2) + (1-p)p^2\\mathrm{g}(n-3)$.</p>\n\n<p>This holds because all sequences of length $n$ that have at most 2 consecutive successes can be constructed from (1) all such sequences of length $n-1$ suffixed by a failure plus; (2) all such sequences of length $n-2$ suffixed by a failure and a success, plus; (3) all such sequences of length $n-3$ suffixed by a failure and two successes.</p>\n" }, { "answer_id": 3809188, "score": 0, "text": "<p>I saw that the answers to the question include either regex (Automata Theory) or linear recurrences (Linear Algebra), so for those interested I decided to post <a href=\"https://people.math.carleton.ca/%7Ebsteinbg/classes/automata2009/generatingfunctions.pdf\" rel=\"nofollow noreferrer\">this paper</a> on generating functions that neatly explains the interconnection between them.</p>\n" }, { "answer_id": 2087355, "score": 4, "text": "<blockquote>\n <p>Let's denote the event under consideration with $a$\n \\begin{align*}\nP(X=a)=0.7\n\\end{align*}\n and we denote the complementary event with $b$.</p>\n</blockquote>\n\n<p>We are looking for the words of length $n$ and their probability of occurrence which do not contain three or more consecutive $a$'s. 
The result is $1$ minus this probability.</p>\n\n<blockquote>\n <p>We can describe the set of these invalid words as the words built from the alphabet $V=\\{a,b\\}$ which contain at most one or two consecutive $a$'s.</p>\n \n <p>These are words</p>\n \n <ul>\n <li><p>starting with zero or more $b$'s:$$b^*$$</p></li>\n <li><p>followed by zero or more occurrences of $a$ or $aa$ each followed by one or more $b$'s: $$(ab^+|aab^+)^*$$</p></li>\n <li><p>and terminating with zero, one or two $a$'s\n $$(\\varepsilon|a|aa)$$</p></li>\n </ul>\n \n <p>We obtain</p>\n \n <p>\\begin{align*}\nb^*(ab^+|aab^+)^*(\\varepsilon|a|aa)\\tag{1}\n\\end{align*}</p>\n</blockquote>\n\n<p>The regular expression (1) generates all invalid words in a <em>unique</em> manner. In such cases we can use it to derive a generating function $$\\sum_{n=0}^\\infty a_n z^n$$ with\n$a_n$ giving the number of invalid words of length $n$.</p>\n\n<p>In order to do so all we need to know is the <em><a href=\"https://en.wikipedia.org/wiki/Geometric_series#Geometric_power_series\" rel=\"nofollow noreferrer\">geometric series expansion</a></em> since the $star$ operator\n\\begin{align*}\na^*=\\left(\\varepsilon|a|a^2|a^3|\\cdots\\right)\\qquad\\text{ translates to }\\qquad 1+z+z^2+z^3+\\cdots=\\frac{1}{1-z}\n\\end{align*}</p>\n\n<p>Accordingly $a^+=aa^*$ translates to $\\frac{z}{1-z}$ and alternatives like $(\\varepsilon|a|aa)$ can be written as $1+z+z^2$.</p>\n\n<blockquote>\n <p>We translate the regular expression (1) into a generating function (by mixing up somewhat the symbolic to provide some intermediate steps).</p>\n \n <p>Since we want to calculate the probabilities of occurrence of $P(X=a)$ we keep track of $a$ and $b$ by respecting them as corresponding factors in the generating function.</p>\n \n <p>\\begin{align*}\nb^*\\left(ab^+|aab^+\\right)^*(\\varepsilon|a|aa)\n&amp;\\longrightarrow \\quad 
\\frac{1}{1-bz}\\left(\\left.\\frac{abz^2}{1-bz}\\right|\\frac{a^2bz^3}{1-bz}\\right)^*\\left(1+az+a^2z^2\\right)\\\\\n&amp;\\longrightarrow \\quad \\frac{1}{1-bz}\\left(\\frac{abz^2+a^2bz^3}{1-bz}\\right)^*(1+az+a^2z^2)\\\\\n&amp;\\longrightarrow \\quad \\frac{1}{1-bz}\\cdot\\frac{1}{1-\\frac{abz^2+a^2bz^3}{1-bz}}(1+az+a^2z^2)\\\\\n&amp;\\quad\\quad=\\frac{1+az+a^2z^2}{1-bz-abz^2-a^2bz^3}\\tag{1}\n\\end{align*}</p>\n</blockquote>\n\n<p>$$ $$</p>\n\n<blockquote>\n <p>We conclude: The number of invalid words is given by (1). So, the number of valid words is the number of all words minus the number of invalid words. We obtain the generating function $A(z)$\n \\begin{align*}\nA(z)&amp;=\\sum_{n=0}^\\infty(a+b)^nz^n-\\frac{1+az+a^2z^2}{1-bz-az^2-a^2z^3}\\\\\n&amp;=\\frac{1}{1-(a+b)z}-\\frac{1+az+a^2z^2}{1-bz-az^2-a^2z^3}\\\\\n&amp;=\\frac{a^3z^3}{(1-(a+b)z)(1-bz-abz^2-a^2bz^3)}\\\\\n&amp;=a^3z^3+a^3(a+2b)z^4+a^3(\\color{blue}{1}a^2+\\color{blue}{4}ab+\\color{blue}{3}b^2)z^5\\\\\n&amp;\\qquad a^3(a+b)^2(a+4b)z^6+a^3(a^4+7a^3b+18a^2b^2+16ab^3+5b^4)z^7+\\cdots\n\\end{align*}</p>\n \n <p>The expansion was done with the help of Wolfram Alpha. We see that e.g. 
the number of valid words of length $5$ is $\\color{blue}{1}+\\color{blue}{4}+\\color{blue}{3}=8$.</p>\n</blockquote>\n\n<p>$$ $$</p>\n\n<blockquote>\n <p>Out of $2^5=32$ binary words of length $5$ there are $8$ valid words which are marked $\\color{blue}{\\text{blue}}$ in the table below.</p>\n \n <p>We obtain\n \\begin{array}{cccc}\n\\color{blue}{aaa}aa\\qquad&amp;ab\\color{blue}{aaa}\\qquad&amp;b\\color{blue}{aaa}a\\qquad&amp;bb\\color{blue}{aaa}\\\\\n\\color{blue}{aaa}ab\\qquad&amp;abaab\\qquad&amp;b\\color{blue}{aaa}b\\qquad&amp;bbaab\\\\\n\\color{blue}{aaa}ba\\qquad&amp;ababa\\qquad&amp;baaba\\qquad&amp;bbaba\\\\\n\\color{blue}{aaa}bb\\qquad&amp;ababb\\qquad&amp;baabb\\qquad&amp;bbabb\\\\\naabaa\\qquad&amp;abbaa\\qquad&amp;babaa\\qquad&amp;bbbaa\\\\\naabab\\qquad&amp;abbab\\qquad&amp;babab\\qquad&amp;bbbab\\\\\naabba\\qquad&amp;abbba\\qquad&amp;babba\\qquad&amp;bbbba\\\\\naabbb\\qquad&amp;abbbb\\qquad&amp;babbb\\qquad&amp;bbbbb\\\\\n\\end{array}</p>\n</blockquote>\n\n<p>We denote with $[z^n]$ the coefficient of $z^n$ in a series.</p>\n\n<blockquote>\n <p>We conclude</p>\n \n <ul>\n <li>The number of occurrences of wanted words of size $n$ is the coefficient of $z^n$ of $A(z)$ evaluated at $a=b=1$ and given as OEIS sequence <em><a href=\"http://oeis.org/A050231\" rel=\"nofollow noreferrer\">050231</a></em>.</li>\n </ul>\n \n <p>\\begin{align*}\n\\left.[z^n]A(z)\\right|_{a=b=1}\n\\end{align*}</p>\n \n <ul>\n <li><p>The wanted probability of valid words of size $n$ is the coefficient of $z^n$ of $A(z)$ evaluated at $a=0.7,b=0.3$.\n \\begin{align*}\n\\left.[z^n]A(z)\\right|_{a=0.7,b=0.3}\n\\end{align*}</p></li>\n <li><p>We obtain for $n=0$ up to $n=7$\n 
\\begin{array}{c|cl}\nn&amp;\\left.[z^n]A(z)\\right|_{a=b=1}&amp;\\left.[z^n]A(z)\\right|_{a=0.7,b=0.3}\\\\\n\\hline\n0&amp;0&amp;0\\\\\n1&amp;0&amp;0\\\\\n2&amp;0&amp;0\\\\\n3&amp;1&amp;0.343\\\\\n4&amp;3&amp;0.4459\\\\\n5&amp;8&amp;0.5488\\\\\n6&amp;20&amp;0.6517\\\\\n7&amp;47&amp;0.7193\\\\\n\\end{array}</p></li>\n </ul>\n</blockquote>\n" }, { "answer_id": 2094765, "score": 2, "text": "<p>I would propose an alternative - maybe easier - approach, trying to give a formula that provides a generalized answer for the probability of three events in a row, given the probability $p $ that the event occurs, as a function of the sample size $n$. As shown below, this approach may also be useful to determine an additional issue of this problem highlihted in one of the comments, i.e. the sample size needed to \"guarantee\" a success with a given confidence level. </p>\n\n<p>Let us call $P (n) $ the probability that our event occurs three times in a row within a sequence of $n $ trials. Also, for each trial, let us denote as Y the case that the our event occurs, and as N the case that it does not occur. So, in the specific case described in the OP, we are looking for the probability of getting a triple YYY in a sequence of $n $ elements when $p=0.7$.</p>\n\n<p>As a general rule, we can note that, passing from a sample size $n $ to a sample size $n+1$, the probability that our event occurs three times in a row increases only because there is the possibility that the triple YYY, not obtained in the first $n $ trials, occurs as a final triple in the sequence of $n+1$ trials. So, to calculate $P (n+1 )$ from $ P (n) $, we simply have to add the probability that the triple in a row is not obtained until a final sequence NYYY. Note that the N preceding the triple YYY is strictly necessary to satisfy the condition that the triple has not been obtained in the first $n $ trials. </p>\n\n<p>We can then start with $n=3$, for which we trivially have $P (3)=p^3$. 
For $n=4$, we have to add to $P (3) $ the probability to get a sequence NYYY, i.e. we have to add $p^3(1-p)$. Similarly, for $n=5$, we have to add to $P (4) $ the probability to get a final sequence NYYY (note that this is not affected by what happens in the first of the five trials), which means that we must add $p^3(1-p)$ as in the previous case. Lastly, for $n=6$, we have again to add to $P (5) $ the probability to get a final sequence NYYY (note that, as above, this is not affected by what happens in the first and in the second of the six trials). This leads to a further increase by $p^3(1-p)$. So, for this initial group of four values of in our probability progression, we directly obtain the recurrence</p>\n\n<p>$$P (n+1)=P (n)+ p^3 (1-p)\\,\\,\\,\\, \\text {(for n=3 to 6)}$$</p>\n\n<p>Accordingly, for the specific case $p=0.7$, this formula gives the initial values </p>\n\n<p>\\begin{aligned} P(3) &amp; =0.7^3=0.343 \\\\\nP (4) &amp; = 0.7^3+0.3 \\cdot 0.7^3 = 0.4459 \\\\\nP (5) &amp; = 0.7^3+2\\cdot 0.3 \\cdot 0.7^3 = 0.5488 \\\\\nP (6) &amp; = 0.7^3+3\\cdot 0.3 \\cdot 0.7^3 = 0.6517 \\\\\n\\end{aligned} </p>\n\n<p>already obtained in a previous answer via a different method.</p>\n\n<p>Now let us consider the values of $n \\geq 7$. Here something changes, because the fact that the sequence ends with NYYY does not imply that the triple YYY has not occurred before, i.e. in the first $n-4$ trials. So, at each step, the probability of getting a final NYYY must be multiplied to the probability of failure in the first $n-4$ trials, which is $1-P (n-4 ) $. Therefore, for $n=7$ we have to add to $P (6) $ the probability given by $p^3(1-p) [1-P(3)]\\,\\,$, for $n=8$ we have to add to $P (7) $ the probability given by $p^3(1-p) [1-P(4)] \\,\\, $, and so on. This leads to the recurrence</p>\n\n<p>$$P (n+1)=P (n)+ p^3(1-p)[1-P(n-4)]\\,\\,\\,\\, \\text{(for n}\\, \\geq \\text {7)}$$</p>\n\n<p>We can now try to get a closed form for this last recurrence. 
Let us set, by simplicity, $p^3=j$ and $(1-p) =k \\,\\, $. The recurrence becomes </p>\n\n<p>$$P (n+1)=P (n)+ jk-jk \\,P(n-4)\\, \\,$$</p>\n\n<p>Note that the four values of $P (n)$ already determined can be written as $$P (3)=j $$\n$$P (4)=j +jk$$\n$$P (5)=j +2jk$$\n$$P (6)=j +3jk$$</p>\n\n<p>so that they contain only terms of degree $j $ or $jk $. Continuing our probability progression and focusing on the second group of four values of $n$ (from $n=7\\,\\, $ to $n=10 \\,\\, $), new terms of degree $j^2k $ and $j^2k^2$ appear applying the recurrence above (for the sake of readability, here I directly provide the resulting expressions avoiding the rather tedious step-by-step calcuations):</p>\n\n<p>\\begin{aligned} P (7) &amp; = P(6)+ jk-jk P (3)=j+4jk-j^2k \\\\\nP (8) &amp; = P(7)+ jk-jk P (4)= j+ 5jk -2j^2k-j^2k^2 \\\\\nP (9) &amp; = P(8) + jk-jk P (5)= j+ 6jk -3j^2k-3j^2k^2 \\\\\nP (10) &amp; = P (9) + jk-jk P (6)= j+ 7jk -4j^2k-6j^2k^2 \\\\\n\\end{aligned}</p>\n\n<p>Some pattern begins to emerge in these expansions. For the specific case $p=0.7$,\n$$P (7)\\approx 0.7193$$\n$$P (8)\\approx 0.7763$$\n$$P (9) \\approx 0.8228$$\n$$P (10) \\approx 0.8586$$</p>\n\n<p>We can repeat the procedure, applying the same recurrence and continuing with the third group of four values of $n $ (i.e., $n=11$ to $14$). Again, new terms appear, in this case of degree $j^3k^2$ and $j^3 k^3$ (as above, I directly provide the resulting expressions):</p>\n\n<p>\\begin{align*} P(11) &amp;=j+8jk-5j^2k-10j^2k^2+j^3k^2 \\\\ P(12) &amp;=j+9jk-6j^2k-15j^2k^2+3j^3k^2+j^3k^3 \\\\P(13) &amp; = j+10jk-7j^2k -21j^2k^2+6j^3k^2+4j^3k^3 \\\\\nP (14) &amp; = j+11jk-8j^2k-28j^2k^2+10j^3k^2+10j^3k^3\\\\\n\\end{align*}</p>\n\n<p>The final pattern is therefore as follows. Our expansions of $P (n) $ include:</p>\n\n<ul>\n<li><p>in the odd positions, $\\lfloor (n+1)/4 \\rfloor \\,\\, $ terms of the form $$(-1)^{i-1} \\binom{n-3i}{i-1} j^i k^{i-1} \\,\\, $$ where $$i=1,2... 
\\lfloor (n+1)/4 \\rfloor \\,\\, $$ For example, in the expansion of $P (14)$ there are $ \\lfloor (14+1)/4 \\rfloor =3 \\, \\,\\, $ terms of this form, which are $j $, $-8j^2k $, and $10 j^3k^2$;</p></li>\n<li><p>in the even positions, $\\lfloor n/4 \\rfloor \\,\\, $ terms of the form $$(-1)^{i-1} \\binom{n-3i}{i} j^i k^i \\,\\, $$ where $$i=1,2... \\lfloor n/4 \\rfloor$$</p>\n\n<p>Accordingly, taking again for example the expansion of $P (14)$, there are $ \\lfloor 14/4 \\rfloor =3 \\,$ terms of this form, which are $11jk$, $-28j^2k^2$, and $10 j^3k^3$.</p></li>\n</ul>\n\n<p>Therefore, we obtain the following general expression for $P (n) $:</p>\n\n<p>$$P (n)=\\sum_{i=1}^{\\lfloor (n+1)/4 \\rfloor } (-1)^{i-1} \\binom{n-3i}{i-1} j^{i}k^{i-1} +\\sum_{i=1}^{\\lfloor n/4 \\rfloor } (-1)^{i-1} \\binom{n-3i}{i} j^i k^i $$</p>\n\n<p>Recalling that $j=p^3$ and $k=(1-p) $, we finally get</p>\n\n<p>$$P (n)=\\sum_{i=1}^{\\lfloor (n+1)/4 \\rfloor } (-1)^{i-1} \\binom{n-3i}{i-1} p^{3i}(1-p)^{i-1} + \\sum_{i=1}^{\\lfloor n/4 \\rfloor } (-1)^{i-1} \\binom{n-3i}{i} p^{3i} (1-p)^i $$</p>\n\n<p>Setting $p=0.7$, we obtain the probability asked for in the OP, which is</p>\n\n<p>$$P (n)=\\sum_{i=1}^{\\lfloor (n+1)/4 \\rfloor } (-1)^{i-1} \\binom{n-3i}{i-1} 0.7^{3i} \\cdot 0.3^{i-1} + \\sum_{i=1}^{\\lfloor n/4 \\rfloor } (-1)^{i-1} \\binom{n-3i}{i} 0.7^{3i}\\cdot 0.3^i $$</p>\n\n<p>Also note that the two sums have the same number of terms when $n \\not\\equiv -1 \\,\\, \\text {mod}\\,\\, 4$, whereas the first sum has one term more than the second one when $n \\equiv -1 \\,\\,\\text {mod} \\, 4$, because in this case $(n+1)/4$ is an integer. 
This term is given by</p>\n\n<p>$$\\displaystyle (-1)^{\\frac{(n+1)}{4}-1} \\binom{n-\\frac{3(n+1)}{4}}{ \\frac{(n+1)}{4}-1}\\, 0.7^{\\frac{3(n+1)}{4}} \\, 0.3^{\\frac{(n+1)}{4}-1} $$ \n$$\\displaystyle = (-1)^{\\frac{(n-3)}{4}} \\binom{\\frac{(n-3)}{4}}{\\frac{(n-3)}{4}}\\, 0.7^{\\frac{3(n+1)}{4}} \\, 0.3^{\\frac{(n-3)}{4}} $$\n$$\\displaystyle = (-1)^{\\frac{(n-3)}{4}} \\, 0.7^{\\frac{3(n+1)}{4}} \\, 0.3^{\\frac{(n-3)}{4}} $$</p>\n\n<p>So the probability asked for in the OP can be alternatively expressed as</p>\n\n<p>$$P (n)=\\sum_{i=1}^{\\lfloor n/4 \\rfloor } (-1)^{i-1} \\binom{n-3i}{i}\\, 0.7^{3i}\\, 0.3^{i} \\left( \\frac {i}{0.3(n-4 i+1)} +1 \\right) + S$$</p>\n\n<p>where</p>\n\n<p>$$S = \\begin{cases} 0 &amp; \\mbox{if } n \\not\\equiv -1 \\, \\text {mod}\\, 4\\\\\n (-1)^{\\frac{(n-3)}{4}} \\, 0.7^{\\frac{3(n+1)}{4}} \\, 0.3^{\\frac{(n-3)}{4}} &amp; \\mbox{if } n \\equiv -1 \\, \\text {mod}\\, 4 \\end{cases}$$</p>\n\n<p>It could also be observed that $|S|$ is relatively small and rapidly decreases for increasing $n$: its value is $\\approx 0.0353\\, \\,\\, $ for $n=7$, becoming $\\approx 0.00363\\, \\,\\, $ for $n=11$ and $\\approx 0.000374\\,\\,\\, $ for $n=15$, so that its contribution to the overall probability quickly becomes negligible. The last expansion without the $S$ term can therefore be considered a reasonable approximation even for the case $n \\equiv -1 \\, \\text {mod}\\, 4$. </p>\n\n<p>The probabilities of three events in a row given by these formulas for $p=0.7$ and $n=3$ to $10$ have already been provided above. As expected, for increasing $n $, the value of $P (n)$ approaches $1$. The first value for which the probability achieves $95\\%$ is $n=15$, since $P (15) \\approx 0.9549\\,\\,\\, $; cf. WA <a href=\"http://www.wolframalpha.com/input/?i=0.7%5E(12)%200.3%5E(4)%20%2B%20%20sum%20%20%5B(-1)%5E(i-1)%20(((15-3i)%20choose%20i)%200.7%5E(3i)%200.3%5Ei%20(i%2F(0.3(15-4%20i%2B1))%2B1)%20%20%20),%7Bi,%2015%2F4%7D%5D\" rel=\"nofollow noreferrer\">here</a>. 
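Both the two-sum closed form and the recurrence it came from can be checked against each other numerically; a short Python sketch (all names are mine, not from the answer):

```python
from math import comb

def closed_form(n, p=0.7):
    """The two-sum closed form for P(n) derived above."""
    q = 1 - p
    odd = sum((-1)**(i - 1) * comb(n - 3*i, i - 1) * p**(3*i) * q**(i - 1)
              for i in range(1, (n + 1) // 4 + 1))
    even = sum((-1)**(i - 1) * comb(n - 3*i, i) * p**(3*i) * q**i
               for i in range(1, n // 4 + 1))
    return odd + even

def by_recurrence(n, p=0.7):
    """The recurrence the closed form was extracted from."""
    add = p**3 * (1 - p)
    P = {3: p**3}
    for m in range(4, n + 1):
        P[m] = P[m - 1] + add * (1.0 if m <= 6 else 1 - P[m - 4])
    return P[n]

assert all(abs(closed_form(n) - by_recurrence(n)) < 1e-9 for n in range(3, 30))
print(round(closed_form(15), 4))      # the first n with P(n) >= 0.95
```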
The probability achieves $97.5\\%$ and $99\\%$ for $n=18$ and $n=22$, respectively, since $P(18) \\approx 0.9772\\,\\,\\, $ and $P(22) \\approx 0.9909\\,\\,\\, $, as confirmed by WA <a href=\"http://www.wolframalpha.com/input/?i=sum%20%20%5B(-1)%5E(i-1)%20(((18-3i)%20choose%20i)%200.7%5E(3i)%200.3%5Ei%20(i%2F(0.3(18-4%20i%2B1))%2B1)%20%20%20),%7Bi,%2018%2F4%7D%5D\" rel=\"nofollow noreferrer\">here</a> and <a href=\"http://www.wolframalpha.com/input/?i=sum%20%20%5B(-1)%5E(i-1)%20(((22-3i)%20choose%20i)%200.7%5E(3i)%200.3%5Ei%20(i%2F(0.3(22-4%20i%2B1))%2B1)%20%20%20),%7Bi,%2022%2F4%7D%5D\" rel=\"nofollow noreferrer\">here</a>. </p>\n" }, { "answer_id": 2083011, "score": 1, "text": "<p>The issue with looking at opportunities for success is that they are not completely independent. For example, in a row of size 4, if the first three aren't all successes, this greatly decreases the chance that the last three are all successes (the only way this can happen is if only the first trial was a failure).</p>\n\n<p>You can still get this approach to work (by correcting for events which are dependent via some sort of inclusion-exclusion), but generally a cleaner method for these types of problems is to write down a recurrence (or more generally, model them as a <a href=\"https://en.wikipedia.org/wiki/Markov_chain\" rel=\"nofollow noreferrer\">finite state Markov chain</a>).</p>\n\n<p>Let's let $P(N)$ be the probability a row of length $N$ has no 3 consecutive successes. To help us write down a recurrence, we'll break $P(N)$ down into 3 smaller probabilities. Let's let $P(N, 0)$ be the probability a row of length $N$ has no 3 consecutive successes AND that the last attempt was a failure. 
Similarly, let $P(N,1)$ be the probability that a row of length $N$ has no 3 consecutive successes AND that the last attempt was a success, but the second-last attempt was a failure, and finally, let $P(N,2)$ be the probability that a row of length $N$ has no 3 consecutive successes AND the last two attempts were successes, but the third-to-last attempt was a failure. Intuitively, these different cases measure how \"close\" we are to getting 3 successes in a row. It's not too hard to see that $P(N) = P(N,0) + P(N,1) + P(N,2)$, since if the last three attempts are all successes, you've already won.</p>\n\n<p>Now, let's try to write $P(N+1, 0)$ in terms of smaller cases. The probability you have no 3 consecutive successes in a row of length $N+1$ and that your last attempt was a failure is simply the probability that there are no 3 consecutive successes in the first $N$ tries and that you fail the $N+1$st try. This gives us the equation:</p>\n\n<p>$$P(N+1,0) = 0.3P(N) = 0.3P(N,0) + 0.3P(N,1) + 0.3P(N,2)$$</p>\n\n<p>Via similar logic, we can show that $P(N+1,1)$ and $P(N+1,2)$ satisfy the following equations:</p>\n\n<p>$$P(N+1,1) = 0.7P(N, 0)$$</p>\n\n<p>and</p>\n\n<p>$$P(N+1,2) = 0.7P(N, 1)$$</p>\n\n<p>Substituting these into the original equation for $P(N+1,0)$, we get:</p>\n\n<p>$$P(N+1, 0) = 0.3P(N,0) + 0.21P(N-1, 0) + 0.147P(N-2,0)$$</p>\n\n<p>Now, this is just a standard <a href=\"https://en.wikipedia.org/wiki/Recurrence_relation\" rel=\"nofollow noreferrer\">linear recurrence relation</a>. If $\\lambda_1$, $\\lambda_2$, and $\\lambda_3$ are the roots of $z^3 - 0.3z^2-0.21z-0.147=0$, then there are some constants $a_1, a_2, a_3$ which you can solve for so that</p>\n\n<p>$$P(N, 0) = a_1\\lambda_1^N + a_2\\lambda_2^N + a_3\\lambda_3^N$$</p>\n\n<p>Since $P(N) = P(N+1,0)/0.3$, this gives a similar formula for $P(N)$ (and ultimately for $1-P(N)$, the probability a success does occur 3 consecutive times). 
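The three-state bookkeeping above can also be run directly, without solving for the characteristic roots; a small Python sketch (the function name is mine):

```python
def p_no_triple(N, p=0.7):
    """P(N) in this answer's notation: probability of NO run of 3 successes
    in N Bernoulli(p) trials, iterating the states
    'last 0 / 1 / 2 trailing successes' described above."""
    q = 1 - p
    s0, s1, s2 = 1.0, 0.0, 0.0        # empty history counts as state 0
    for _ in range(N):
        s0, s1, s2 = q * (s0 + s1 + s2), p * s0, p * s1
    return s0 + s1 + s2

print(round(1 - p_no_triple(5), 4))   # probability of a triple in 5 trials
```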
</p>\n" }, { "answer_id": 2095652, "score": 1, "text": "<p>Let $p=0.7$ be the probability of success, $\\mathrm{f}(n)$ be the probability of a sequence of length $n$ having 3 consecutive successes and $\\mathrm{g}(n)=1-\\mathrm{f}(n)$, where $\\mathrm{g}(n)$ corresponds to the probability of a sequence of length $n$ having at most 2 consecutive successes.</p>\n\n<p>It holds that $\\mathrm{g}(0) = \\mathrm{g}(1) = \\mathrm{g}(2) = 1$.</p>\n\n<p>Then $\\mathrm{g}(n) = (1-p)\\mathrm{g}(n-1) + (1-p)p\\mathrm{g}(n-2) + (1-p)p^2\\mathrm{g}(n-3)$.</p>\n\n<p>This holds because all sequences of length $n$ that have at most 2 consecutive successes can be constructed from (1) all such sequences of length $n-1$ suffixed by a failure; plus (2) all such sequences of length $n-2$ suffixed by a failure and a success; plus (3) all such sequences of length $n-3$ suffixed by a failure and two successes.</p>\n" }, { "answer_id": 3809188, "score": 0, "text": "<p>I saw that the answers to the question include either regex (Automata Theory) or linear recurrences (Linear Algebra), so for those interested I decided to post <a href=\"https://people.math.carleton.ca/%7Ebsteinbg/classes/automata2009/generatingfunctions.pdf\" rel=\"nofollow noreferrer\">this paper</a> on generating functions that neatly explains the interconnection between them.</p>\n" }, { "answer_id": 2087355, "score": 4, "text": "<blockquote>\n <p>Let's denote the event under consideration with $a$\n \\begin{align*}\nP(X=a)=0.7\n\\end{align*}\n and we denote the complementary event with $b$.</p>\n</blockquote>\n\n<p>We are looking for the words of length $n$ which do not contain three or more consecutive $a$'s, and their probability of occurrence. 
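The three-term recurrence for $\mathrm{g}(n)$ quoted a few paragraphs back can be iterated in a few lines; a sketch (the function name is mine):

```python
def g(n, p=0.7):
    """g(n): probability that n Bernoulli(p) trials contain at most
    2 consecutive successes, via the three-term recurrence above,
    with base cases g(0) = g(1) = g(2) = 1."""
    g3, g2, g1 = 1.0, 1.0, 1.0        # g(n-3), g(n-2), g(n-1)
    for _ in range(3, n + 1):
        g3, g2, g1 = g2, g1, (1 - p)*g1 + (1 - p)*p*g2 + (1 - p)*p*p*g3
    return g1

# f(n) = 1 - g(n) is the probability of 3 successes in a row
print([round(1 - g(n), 4) for n in range(3, 8)])
```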
The result is $1$ minus this probability.</p>\n\n<blockquote>\n <p>We can describe the set of these invalid words as the words built from the alphabet $V=\\{a,b\\}$ which contain at most one or two consecutive $a$'s.</p>\n \n <p>These are words</p>\n \n <ul>\n <li><p>starting with zero or more $b$'s:$$b^*$$</p></li>\n <li><p>followed by zero or more occurrences of $a$ or $aa$ each followed by one or more $b$'s: $$(ab^+|aab^+)^*$$</p></li>\n <li><p>and terminating with zero, one or two $a$'s\n $$(\\varepsilon|a|aa)$$</p></li>\n </ul>\n \n <p>We obtain</p>\n \n <p>\\begin{align*}\nb^*(ab^+|aab^+)^*(\\varepsilon|a|aa)\\tag{1}\n\\end{align*}</p>\n</blockquote>\n\n<p>The regular expression (1) generates all invalid words in a <em>unique</em> manner. In such cases we can use it to derive a generating function $$\\sum_{n=0}^\\infty a_n z^n$$ with\n$a_n$ giving the number of invalid words of length $n$.</p>\n\n<p>In order to do so all we need to know is the <em><a href=\"https://en.wikipedia.org/wiki/Geometric_series#Geometric_power_series\" rel=\"nofollow noreferrer\">geometric series expansion</a></em> since the star operator\n\\begin{align*}\na^*=\\left(\\varepsilon|a|a^2|a^3|\\cdots\\right)\\qquad\\text{ translates to }\\qquad 1+z+z^2+z^3+\\cdots=\\frac{1}{1-z}\n\\end{align*}</p>\n\n<p>Accordingly $a^+=aa^*$ translates to $\\frac{z}{1-z}$ and alternatives like $(\\varepsilon|a|aa)$ can be written as $1+z+z^2$.</p>\n\n<blockquote>\n <p>We translate the regular expression (1) into a generating function (mixing the symbolism somewhat in order to provide some intermediate steps).</p>\n \n <p>Since we want to calculate probabilities of occurrence based on $P(X=a)$, we keep track of $a$ and $b$ by treating them as corresponding factors in the generating function.</p>\n \n <p>\\begin{align*}\nb^*\\left(ab^+|aab^+\\right)^*(\\varepsilon|a|aa)\n&amp;\\longrightarrow \\quad 
\\frac{1}{1-bz}\\left(\\left.\\frac{abz^2}{1-bz}\\right|\\frac{a^2bz^3}{1-bz}\\right)^*\\left(1+az+a^2z^2\\right)\\\\\n&amp;\\longrightarrow \\quad \\frac{1}{1-bz}\\left(\\frac{abz^2+a^2bz^3}{1-bz}\\right)^*(1+az+a^2z^2)\\\\\n&amp;\\longrightarrow \\quad \\frac{1}{1-bz}\\cdot\\frac{1}{1-\\frac{abz^2+a^2bz^3}{1-bz}}(1+az+a^2z^2)\\\\\n&amp;\\quad\\quad=\\frac{1+az+a^2z^2}{1-bz-abz^2-a^2bz^3}\\tag{1}\n\\end{align*}</p>\n</blockquote>\n\n<p>$$ $$</p>\n\n<blockquote>\n <p>We conclude: The number of invalid words is given by (1). So, the number of valid words is the number of all words minus the number of invalid words. We obtain the generating function $A(z)$\n \\begin{align*}\nA(z)&amp;=\\sum_{n=0}^\\infty(a+b)^nz^n-\\frac{1+az+a^2z^2}{1-bz-abz^2-a^2bz^3}\\\\\n&amp;=\\frac{1}{1-(a+b)z}-\\frac{1+az+a^2z^2}{1-bz-abz^2-a^2bz^3}\\\\\n&amp;=\\frac{a^3z^3}{(1-(a+b)z)(1-bz-abz^2-a^2bz^3)}\\\\\n&amp;=a^3z^3+a^3(a+2b)z^4+a^3(\\color{blue}{1}a^2+\\color{blue}{4}ab+\\color{blue}{3}b^2)z^5\\\\\n&amp;\\qquad +a^3(a+b)^2(a+4b)z^6+a^3(a^4+7a^3b+18a^2b^2+16ab^3+5b^4)z^7+\\cdots\n\\end{align*}</p>\n \n <p>The expansion was done with the help of Wolfram Alpha. We see that e.g. 
the number of valid words of length $5$ is $\\color{blue}{1}+\\color{blue}{4}+\\color{blue}{3}=8$.</p>\n</blockquote>\n\n<p>$$ $$</p>\n\n<blockquote>\n <p>Out of $2^5=32$ binary words of length $5$ there are $8$ valid words which are marked $\\color{blue}{\\text{blue}}$ in the table below.</p>\n \n <p>We obtain\n \\begin{array}{cccc}\n\\color{blue}{aaa}aa\\qquad&amp;ab\\color{blue}{aaa}\\qquad&amp;b\\color{blue}{aaa}a\\qquad&amp;bb\\color{blue}{aaa}\\\\\n\\color{blue}{aaa}ab\\qquad&amp;abaab\\qquad&amp;b\\color{blue}{aaa}b\\qquad&amp;bbaab\\\\\n\\color{blue}{aaa}ba\\qquad&amp;ababa\\qquad&amp;baaba\\qquad&amp;bbaba\\\\\n\\color{blue}{aaa}bb\\qquad&amp;ababb\\qquad&amp;baabb\\qquad&amp;bbabb\\\\\naabaa\\qquad&amp;abbaa\\qquad&amp;babaa\\qquad&amp;bbbaa\\\\\naabab\\qquad&amp;abbab\\qquad&amp;babab\\qquad&amp;bbbab\\\\\naabba\\qquad&amp;abbba\\qquad&amp;babba\\qquad&amp;bbbba\\\\\naabbb\\qquad&amp;abbbb\\qquad&amp;babbb\\qquad&amp;bbbbb\\\\\n\\end{array}</p>\n</blockquote>\n\n<p>We denote with $[z^n]$ the coefficient of $z^n$ in a series.</p>\n\n<blockquote>\n <p>We conclude</p>\n \n <ul>\n <li>The number of occurrences of wanted words of size $n$ is the coefficient of $z^n$ of $A(z)$ evaluated at $a=b=1$ and given as OEIS sequence <em><a href=\"http://oeis.org/A050231\" rel=\"nofollow noreferrer\">050231</a></em>.</li>\n </ul>\n \n <p>\\begin{align*}\n\\left.[z^n]A(z)\\right|_{a=b=1}\n\\end{align*}</p>\n \n <ul>\n <li><p>The wanted probability of valid words of size $n$ is the coefficient of $z^n$ of $A(z)$ evaluated at $a=0.7,b=0.3$.\n \\begin{align*}\n\\left.[z^n]A(z)\\right|_{a=0.7,b=0.3}\n\\end{align*}</p></li>\n <li><p>We obtain for $n=0$ up to $n=7$\n 
\\begin{array}{c|cl}\nn&amp;\\left.[z^n]A(z)\\right|_{a=b=1}&amp;\\left.[z^n]A(z)\\right|_{a=0.7,b=0.3}\\\\\n\\hline\n0&amp;0&amp;0\\\\\n1&amp;0&amp;0\\\\\n2&amp;0&amp;0\\\\\n3&amp;1&amp;0.343\\\\\n4&amp;3&amp;0.4459\\\\\n5&amp;8&amp;0.5488\\\\\n6&amp;20&amp;0.6517\\\\\n7&amp;47&amp;0.7193\\\\\n\\end{array}</p></li>\n </ul>\n</blockquote>\n" }, { "answer_id": 2094765, "score": 2, "text": "<p>I would propose an alternative - maybe easier - approach, trying to give a formula that provides a generalized answer for the probability of three events in a row, given the probability $p $ that the event occurs, as a function of the sample size $n$. As shown below, this approach may also be useful to address an additional issue of this problem highlighted in one of the comments, i.e. the sample size needed to \\"guarantee\\" a success with a given confidence level. </p>\n\n<p>Let us call $P (n) $ the probability that our event occurs three times in a row within a sequence of $n $ trials. Also, for each trial, let us denote as Y the case that our event occurs, and as N the case that it does not occur. So, in the specific case described in the OP, we are looking for the probability of getting a triple YYY in a sequence of $n $ elements when $p=0.7$.</p>\n\n<p>As a general rule, we can note that, passing from a sample size $n $ to a sample size $n+1$, the probability that our event occurs three times in a row increases only because there is the possibility that the triple YYY, not obtained in the first $n $ trials, occurs as a final triple in the sequence of $n+1$ trials. So, to calculate $P (n+1 )$ from $ P (n) $, we simply have to add the probability that the triple in a row is not obtained until a final sequence NYYY. Note that the N preceding the triple YYY is strictly necessary to satisfy the condition that the triple has not been obtained in the first $n $ trials. </p>\n\n<p>We can then start with $n=3$, for which we trivially have $P (3)=p^3$. 
For $n=4$, we have to add to $P (3) $ the probability to get a sequence NYYY, i.e. we have to add $p^3(1-p)$. Similarly, for $n=5$, we have to add to $P (4) $ the probability to get a final sequence NYYY (note that this is not affected by what happens in the first of the five trials), which means that we must add $p^3(1-p)$ as in the previous case. Lastly, for $n=6$, we have again to add to $P (5) $ the probability to get a final sequence NYYY (note that, as above, this is not affected by what happens in the first and in the second of the six trials). This leads to a further increase by $p^3(1-p)$. So, for this initial group of four values of $n$ in our probability progression, we directly obtain the recurrence</p>\n\n<p>$$P (n)=P (n-1)+ p^3 (1-p)\\,\\,\\,\\, \\text {(for n=4 to 6)}$$</p>\n\n<p>Accordingly, for the specific case $p=0.7$, this formula gives the initial values </p>\n\n<p>\\begin{aligned} P(3) &amp; =0.7^3=0.343 \\\\\nP (4) &amp; = 0.7^3+0.3 \\cdot 0.7^3 = 0.4459 \\\\\nP (5) &amp; = 0.7^3+2\\cdot 0.3 \\cdot 0.7^3 = 0.5488 \\\\\nP (6) &amp; = 0.7^3+3\\cdot 0.3 \\cdot 0.7^3 = 0.6517 \\\\\n\\end{aligned} </p>\n\n<p>already obtained in a previous answer via a different method.</p>\n\n<p>Now let us consider the values of $n \\geq 7$. Here something changes, because the fact that the sequence ends with NYYY does not imply that the triple YYY has not occurred before, i.e. in the first $n-4$ trials. So, at each step, the probability of getting a final NYYY must be multiplied by the probability of failure in the first $n-4$ trials, which is $1-P (n-4 ) $. Therefore, for $n=7$ we have to add to $P (6) $ the probability given by $p^3(1-p) [1-P(3)]\\,\\,$, for $n=8$ we have to add to $P (7) $ the probability given by $p^3(1-p) [1-P(4)] \\,\\, $, and so on. This leads to the recurrence</p>\n\n<p>$$P (n)=P (n-1)+ p^3(1-p)[1-P(n-4)]\\,\\,\\,\\, \\text{(for n}\\, \\geq \\text {7)}$$</p>\n\n<p>We can now try to get a closed form for this last recurrence. 
Let us set, for simplicity, $p^3=j$ and $(1-p) =k \\,\\, $. The recurrence becomes </p>\n\n<p>$$P (n)=P (n-1)+ jk-jk \\,P(n-4)\\, \\,$$</p>\n\n<p>Note that the four values of $P (n)$ already determined can be written as $$P (3)=j $$\n$$P (4)=j +jk$$\n$$P (5)=j +2jk$$\n$$P (6)=j +3jk$$</p>\n\n<p>so that they contain only terms of degree $j $ or $jk $. Continuing our probability progression and focusing on the second group of four values of $n$ (from $n=7\\,\\, $ to $n=10 \\,\\, $), new terms of degree $j^2k $ and $j^2k^2$ appear when applying the recurrence above (for the sake of readability, here I directly provide the resulting expressions, avoiding the rather tedious step-by-step calculations):</p>\n\n<p>\\begin{aligned} P (7) &amp; = P(6)+ jk-jk P (3)=j+4jk-j^2k \\\\\nP (8) &amp; = P(7)+ jk-jk P (4)= j+ 5jk -2j^2k-j^2k^2 \\\\\nP (9) &amp; = P(8) + jk-jk P (5)= j+ 6jk -3j^2k-3j^2k^2 \\\\\nP (10) &amp; = P (9) + jk-jk P (6)= j+ 7jk -4j^2k-6j^2k^2 \\\\\n\\end{aligned}</p>\n\n<p>Some pattern begins to emerge in these expansions. For the specific case $p=0.7$,\n$$P (7)\\approx 0.7193$$\n$$P (8)\\approx 0.7763$$\n$$P (9) \\approx 0.8228$$\n$$P (10) \\approx 0.8586$$</p>\n\n<p>We can repeat the procedure, applying the same recurrence and continuing with the third group of four values of $n $ (i.e., $n=11$ to $14$). Again, new terms appear, in this case of degree $j^3k^2$ and $j^3 k^3$ (as above, I directly provide the resulting expressions):</p>\n\n<p>\\begin{align*} P(11) &amp;=j+8jk-5j^2k-10j^2k^2+j^3k^2 \\\\ P(12) &amp;=j+9jk-6j^2k-15j^2k^2+3j^3k^2+j^3k^3 \\\\P(13) &amp; = j+10jk-7j^2k -21j^2k^2+6j^3k^2+4j^3k^3 \\\\\nP (14) &amp; = j+11jk-8j^2k-28j^2k^2+10j^3k^2+10j^3k^3\\\\\n\\end{align*}</p>\n\n<p>The final pattern is therefore as follows. Our expansions of $P (n) $ include:</p>\n\n<ul>\n<li><p>in the odd positions, $\\lfloor (n+1)/4 \\rfloor \\,\\, $ terms of the form $$(-1)^{i-1} \\binom{n-3i}{i-1} j^i k^{i-1} \\,\\, $$ where $$i=1,2... 
\\lfloor (n+1)/4 \\rfloor \\,\\, $$ For example, in the expansion of $P (14)$ there are $ \\lfloor (14+1)/4 \\rfloor =3 \\, \\,\\, $ terms of this form, which are $j $, $-8j^2k $, and $10 j^3k^2$;</p></li>\n<li><p>in the even positions, $\\lfloor n/4 \\rfloor \\,\\, $ terms of the form $$(-1)^{i-1} \\binom{n-3i}{i} j^i k^i \\,\\, $$ where $$i=1,2... \\lfloor n/4 \\rfloor$$</p>\n\n<p>Accordingly, taking again for example the expansion of $P (14)$, there are $ \\lfloor 14/4 \\rfloor =3 \\,$ terms of this form, which are $11jk$, $-28j^2k^2$, and $10 j^3k^3$.</p></li>\n</ul>\n\n<p>Therefore, we obtain the following general expression for $P (n) $:</p>\n\n<p>$$P (n)=\\sum_{i=1}^{\\lfloor (n+1)/4 \\rfloor } (-1)^{i-1} \\binom{n-3i}{i-1} j^{i}k^{i-1} +\\sum_{i=1}^{\\lfloor n/4 \\rfloor } (-1)^{i-1} \\binom{n-3i}{i} j^i k^i $$</p>\n\n<p>Recalling that $j=p^3$ and $k=(1-p) $, we finally get</p>\n\n<p>$$P (n)=\\sum_{i=1}^{\\lfloor (n+1)/4 \\rfloor } (-1)^{i-1} \\binom{n-3i}{i-1} p^{3i}(1-p)^{i-1} + \\sum_{i=1}^{\\lfloor n/4 \\rfloor } (-1)^{i-1} \\binom{n-3i}{i} p^{3i} (1-p)^i $$</p>\n\n<p>Setting $p=0.7$, we obtain the probability asked for in the OP, which is</p>\n\n<p>$$P (n)=\\sum_{i=1}^{\\lfloor (n+1)/4 \\rfloor } (-1)^{i-1} \\binom{n-3i}{i-1} 0.7^{3i} \\cdot 0.3^{i-1} + \\sum_{i=1}^{\\lfloor n/4 \\rfloor } (-1)^{i-1} \\binom{n-3i}{i} 0.7^{3i}\\cdot 0.3^i $$</p>\n\n<p>Also note that the two sums have the same number of terms when $n \\not\\equiv -1 \\,\\, \\text {mod}\\,\\, 4$, whereas the first sum has one term more than the second one when $n \\equiv -1 \\,\\,\\text {mod} \\, 4$, because in this case $(n+1)/4$ is an integer. 
This term is given by</p>\n\n<p>$$\\displaystyle (-1)^{\\frac{(n+1)}{4}-1} \\binom{n-\\frac{3(n+1)}{4}}{ \\frac{(n+1)}{4}-1}\\, 0.7^{\\frac{3(n+1)}{4}} \\, 0.3^{\\frac{(n+1)}{4}-1} $$ \n$$\\displaystyle = (-1)^{\\frac{(n-3)}{4}} \\binom{\\frac{(n-3)}{4}}{\\frac{(n-3)}{4}}\\, 0.7^{\\frac{3(n+1)}{4}} \\, 0.3^{\\frac{(n-3)}{4}} $$\n$$\\displaystyle = (-1)^{\\frac{(n-3)}{4}} \\, 0.7^{\\frac{3(n+1)}{4}} \\, 0.3^{\\frac{(n-3)}{4}} $$</p>\n\n<p>So the probability asked for in the OP can be alternatively expressed as</p>\n\n<p>$$P (n)=\\sum_{i=1}^{\\lfloor n/4 \\rfloor } (-1)^{i-1} \\binom{n-3i}{i}\\, 0.7^{3i}\\, 0.3^{i} \\left( \\frac {i}{0.3(n-4 i+1)} +1 \\right) + S$$</p>\n\n<p>where</p>\n\n<p>$$S = \\begin{cases} 0 &amp; \\mbox{if } n \\not\\equiv -1 \\, \\text {mod}\\, 4\\\\\n (-1)^{\\frac{(n-3)}{4}} \\, 0.7^{\\frac{3(n+1)}{4}} \\, 0.3^{\\frac{(n-3)}{4}} &amp; \\mbox{if } n \\equiv -1 \\, \\text {mod}\\, 4 \\end{cases}$$</p>\n\n<p>It could also be observed that $|S|$ is relatively small and rapidly decreases for increasing $n$: its value is $\\approx 0.0353\\, \\,\\, $ for $n=7$, becoming $\\approx 0.00363\\, \\,\\, $ for $n=11$ and $\\approx 0.000374\\,\\,\\, $ for $n=15$, so that its contribution to the overall probability quickly becomes negligible. The last expansion without the $S$ term can therefore be considered a reasonable approximation even for the case $n \\equiv -1 \\, \\text {mod}\\, 4$. </p>\n\n<p>The probabilities of three events in a row given by these formulas for $p=0.7$ and $n=3$ to $10$ have already been provided above. As expected, for increasing $n $, the value of $P (n)$ approaches $1$. The first value for which the probability achieves $95\\%$ is $n=15$, since $P (15) \\approx 0.9549\\,\\,\\, $; cf. WA <a href=\"http://www.wolframalpha.com/input/?i=0.7%5E(12)%200.3%5E(4)%20%2B%20%20sum%20%20%5B(-1)%5E(i-1)%20(((15-3i)%20choose%20i)%200.7%5E(3i)%200.3%5Ei%20(i%2F(0.3(15-4%20i%2B1))%2B1)%20%20%20),%7Bi,%2015%2F4%7D%5D\" rel=\"nofollow noreferrer\">here</a>. 
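A rough Monte Carlo simulation (entirely my own sketch, with an arbitrary seed and trial count) agrees with these values:

```python
import random

def simulate(n, p=0.7, trials=200_000, seed=1):
    """Monte Carlo estimate of the probability of 3 successes in a row
    within n Bernoulli(p) trials; only a sanity check, not exact."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        run = 0                        # current streak of successes
        for _ in range(n):
            run = run + 1 if rng.random() < p else 0
            if run == 3:
                hits += 1
                break
    return hits / trials

print(simulate(6))   # should land near the exact value 0.6517
```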
The probability achieves $97.5\\%$ and $99\\%$ for $n=18$ and $n=22$, respectively, since $P(18) \\approx 0.9772\\,\\,\\, $ and $P(22) \\approx 0.9909\\,\\,\\, $, as confirmed by WA <a href=\"http://www.wolframalpha.com/input/?i=sum%20%20%5B(-1)%5E(i-1)%20(((18-3i)%20choose%20i)%200.7%5E(3i)%200.3%5Ei%20(i%2F(0.3(18-4%20i%2B1))%2B1)%20%20%20),%7Bi,%2018%2F4%7D%5D\" rel=\"nofollow noreferrer\">here</a> and <a href=\"http://www.wolframalpha.com/input/?i=sum%20%20%5B(-1)%5E(i-1)%20(((22-3i)%20choose%20i)%200.7%5E(3i)%200.3%5Ei%20(i%2F(0.3(22-4%20i%2B1))%2B1)%20%20%20),%7Bi,%2022%2F4%7D%5D\" rel=\"nofollow noreferrer\">here</a>. </p>\n" }, { "answer_id": 2083011, "score": 1, "text": "<p>The issue with looking at opportunities for success is that they are not completely independent. For example, in a row of size 4, if the first three aren't all successes, this greatly decreases the chance that the last three are all successes (the only way this can happen is if only the first trial was a failure).</p>\n\n<p>You can still get this approach to work (by correcting for events which are dependent via some sort of inclusion-exclusion), but generally a cleaner method for these types of problems is to write down a recurrence (or more generally, model them as a <a href=\"https://en.wikipedia.org/wiki/Markov_chain\" rel=\"nofollow noreferrer\">finite state Markov chain</a>).</p>\n\n<p>Let's let $P(N)$ be the probability a row of length $N$ has no 3 consecutive successes. To help us write down a recurrence, we'll break $P(N)$ down into 3 smaller probabilities. Let's let $P(N, 0)$ be the probability a row of length $N$ has no 3 consecutive successes AND that the last attempt was a failure. 
Similarly, let $P(N,1)$ be the probability that a row of length $N$ has no 3 consecutive successes AND that the last attempt was a success, but the second-last attempt was a failure, and finally, let $P(N,2)$ be the probability that a row of length $N$ has no 3 consecutive successes AND the last two attempts were successes, but the third-to-last attempt was a failure. Intuitively, these different cases measure how \"close\" we are to getting 3 successes in a row. It's not too hard to see that $P(N) = P(N,0) + P(N,1) + P(N,2)$, since if the last three attempts are all successes, you've already won.</p>\n\n<p>Now, let's try to write $P(N+1, 0)$ in terms of smaller cases. The probability you have no 3 consecutive successes in a row of length $N+1$ and that your last attempt was a failure is simply the probability that there are no 3 consecutive successes in the first $N$ tries and that you fail the $N+1$st try. This gives us the equation:</p>\n\n<p>$$P(N+1,0) = 0.3P(N) = 0.3P(N,0) + 0.3P(N,1) + 0.3P(N,2)$$</p>\n\n<p>Via similar logic, we can show that $P(N+1,1)$ and $P(N+1,2)$ satisfy the following equations:</p>\n\n<p>$$P(N+1,1) = 0.7P(N, 0)$$</p>\n\n<p>and</p>\n\n<p>$$P(N+1,2) = 0.7P(N, 1)$$</p>\n\n<p>Substituting these into the original equation for $P(N+1,0)$, we get:</p>\n\n<p>$$P(N+1, 0) = 0.3P(N,0) + 0.21P(N-1, 0) + 0.147P(N-2,0)$$</p>\n\n<p>Now, this is just a standard <a href=\"https://en.wikipedia.org/wiki/Recurrence_relation\" rel=\"nofollow noreferrer\">linear recurrence relation</a>. If $\\lambda_1$, $\\lambda_2$, and $\\lambda_3$ are the roots of $z^3 - 0.3z^2-0.21z-0.147=0$, then there are some constants $a_1, a_2, a_3$ which you can solve for so that</p>\n\n<p>$$P(N, 0) = a_1\\lambda_1^N + a_2\\lambda_2^N + a_3\\lambda_3^N$$</p>\n\n<p>Since $P(N) = P(N+1,0)/0.3$, this gives a similar formula for $P(N)$ (and ultimately for $1-P(N)$, the probability a success does occur 3 consecutive times). 
</p>\n" }, { "answer_id": 2095652, "score": 1, "text": "<p>Let $p=0.7$ be the probability of success, $\\mathrm{f}(n)$ be the probability of a sequence of length $n$ having 3 consecutive successes and $\\mathrm{g}(n)=1-\\mathrm{f}(n)$, where $\\mathrm{g}(n)$ corresponds to the probability of a sequence of length $n$ having at most 2 consecutive successes.</p>\n\n<p>It holds that $\\mathrm{g}(0) = \\mathrm{g}(1) = \\mathrm{g}(2) = 1$.</p>\n\n<p>Then $\\mathrm{g}(n) = (1-p)\\mathrm{g}(n-1) + (1-p)p\\mathrm{g}(n-2) + (1-p)p^2\\mathrm{g}(n-3)$.</p>\n\n<p>This holds because all sequences of length $n$ that have at most 2 consecutive successes can be constructed from (1) all such sequences of length $n-1$ suffixed by a failure; plus (2) all such sequences of length $n-2$ suffixed by a failure and a success; plus (3) all such sequences of length $n-3$ suffixed by a failure and two successes.</p>\n" }, { "answer_id": 3809188, "score": 0, "text": "<p>I saw that the answers to the question include either regex (Automata Theory) or linear recurrences (Linear Algebra), so for those interested I decided to post <a href=\"https://people.math.carleton.ca/%7Ebsteinbg/classes/automata2009/generatingfunctions.pdf\" rel=\"nofollow noreferrer\">this paper</a> on generating functions that neatly explains the interconnection between them.</p>\n" }, { "answer_id": 2087355, "score": 4, "text": "<blockquote>\n <p>Let's denote the event under consideration with $a$\n \\begin{align*}\nP(X=a)=0.7\n\\end{align*}\n and we denote the complementary event with $b$.</p>\n</blockquote>\n\n<p>We are looking for the words of length $n$ which do not contain three or more consecutive $a$'s, and their probability of occurrence. 
The result is $1$ minus this probability.</p>\n\n<blockquote>\n <p>We can describe the set of these invalid words as the words built from the alphabet $V=\\{a,b\\}$ which contain at most one or two consecutive $a$'s.</p>\n \n <p>These are words</p>\n \n <ul>\n <li><p>starting with zero or more $b$'s:$$b^*$$</p></li>\n <li><p>followed by zero or more occurrences of $a$ or $aa$ each followed by one or more $b$'s: $$(ab^+|aab^+)^*$$</p></li>\n <li><p>and terminating with zero, one or two $a$'s\n $$(\\varepsilon|a|aa)$$</p></li>\n </ul>\n \n <p>We obtain</p>\n \n <p>\\begin{align*}\nb^*(ab^+|aab^+)^*(\\varepsilon|a|aa)\\tag{1}\n\\end{align*}</p>\n</blockquote>\n\n<p>The regular expression (1) generates all invalid words in a <em>unique</em> manner. In such cases we can use it to derive a generating function $$\\sum_{n=0}^\\infty a_n z^n$$ with\n$a_n$ giving the number of invalid words of length $n$.</p>\n\n<p>In order to do so all we need to know is the <em><a href=\"https://en.wikipedia.org/wiki/Geometric_series#Geometric_power_series\" rel=\"nofollow noreferrer\">geometric series expansion</a></em> since the star operator\n\\begin{align*}\na^*=\\left(\\varepsilon|a|a^2|a^3|\\cdots\\right)\\qquad\\text{ translates to }\\qquad 1+z+z^2+z^3+\\cdots=\\frac{1}{1-z}\n\\end{align*}</p>\n\n<p>Accordingly $a^+=aa^*$ translates to $\\frac{z}{1-z}$ and alternatives like $(\\varepsilon|a|aa)$ can be written as $1+z+z^2$.</p>\n\n<blockquote>\n <p>We translate the regular expression (1) into a generating function (mixing the symbolism somewhat in order to provide some intermediate steps).</p>\n \n <p>Since we want to calculate probabilities of occurrence based on $P(X=a)$, we keep track of $a$ and $b$ by treating them as corresponding factors in the generating function.</p>\n \n <p>\\begin{align*}\nb^*\\left(ab^+|aab^+\\right)^*(\\varepsilon|a|aa)\n&amp;\\longrightarrow \\quad 
\\frac{1}{1-bz}\\left(\\left.\\frac{abz^2}{1-bz}\\right|\\frac{a^2bz^3}{1-bz}\\right)^*\\left(1+az+a^2z^2\\right)\\\\\n&amp;\\longrightarrow \\quad \\frac{1}{1-bz}\\left(\\frac{abz^2+a^2bz^3}{1-bz}\\right)^*(1+az+a^2z^2)\\\\\n&amp;\\longrightarrow \\quad \\frac{1}{1-bz}\\cdot\\frac{1}{1-\\frac{abz^2+a^2bz^3}{1-bz}}(1+az+a^2z^2)\\\\\n&amp;\\quad\\quad=\\frac{1+az+a^2z^2}{1-bz-abz^2-a^2bz^3}\\tag{1}\n\\end{align*}</p>\n</blockquote>\n\n<p>$$ $$</p>\n\n<blockquote>\n <p>We conclude: The number of invalid words is given by (1). So, the number of valid words is the number of all words minus the number of invalid words. We obtain the generating function $A(z)$\n \\begin{align*}\nA(z)&amp;=\\sum_{n=0}^\\infty(a+b)^nz^n-\\frac{1+az+a^2z^2}{1-bz-abz^2-a^2bz^3}\\\\\n&amp;=\\frac{1}{1-(a+b)z}-\\frac{1+az+a^2z^2}{1-bz-abz^2-a^2bz^3}\\\\\n&amp;=\\frac{a^3z^3}{(1-(a+b)z)(1-bz-abz^2-a^2bz^3)}\\\\\n&amp;=a^3z^3+a^3(a+2b)z^4+a^3(\\color{blue}{1}a^2+\\color{blue}{4}ab+\\color{blue}{3}b^2)z^5\\\\\n&amp;\\qquad +a^3(a+b)^2(a+4b)z^6+a^3(a^4+7a^3b+18a^2b^2+16ab^3+5b^4)z^7+\\cdots\n\\end{align*}</p>\n \n <p>The expansion was done with the help of Wolfram Alpha. We see that e.g. 
the number of valid words of length $5$ is $\\color{blue}{1}+\\color{blue}{4}+\\color{blue}{3}=8$.</p>\n</blockquote>\n\n<p>$$ $$</p>\n\n<blockquote>\n <p>Out of $2^5=32$ binary words of length $5$ there are $8$ valid words which are marked $\\color{blue}{\\text{blue}}$ in the table below.</p>\n \n <p>We obtain\n \\begin{array}{cccc}\n\\color{blue}{aaa}aa\\qquad&amp;ab\\color{blue}{aaa}\\qquad&amp;b\\color{blue}{aaa}a\\qquad&amp;bb\\color{blue}{aaa}\\\\\n\\color{blue}{aaa}ab\\qquad&amp;abaab\\qquad&amp;b\\color{blue}{aaa}b\\qquad&amp;bbaab\\\\\n\\color{blue}{aaa}ba\\qquad&amp;ababa\\qquad&amp;baaba\\qquad&amp;bbaba\\\\\n\\color{blue}{aaa}bb\\qquad&amp;ababb\\qquad&amp;baabb\\qquad&amp;bbabb\\\\\naabaa\\qquad&amp;abbaa\\qquad&amp;babaa\\qquad&amp;bbbaa\\\\\naabab\\qquad&amp;abbab\\qquad&amp;babab\\qquad&amp;bbbab\\\\\naabba\\qquad&amp;abbba\\qquad&amp;babba\\qquad&amp;bbbba\\\\\naabbb\\qquad&amp;abbbb\\qquad&amp;babbb\\qquad&amp;bbbbb\\\\\n\\end{array}</p>\n</blockquote>\n\n<p>We denote with $[z^n]$ the coefficient of $z^n$ in a series.</p>\n\n<blockquote>\n <p>We conclude</p>\n \n <ul>\n <li>The number of occurrences of wanted words of size $n$ is the coefficient of $z^n$ of $A(z)$ evaluated at $a=b=1$ and given as OEIS sequence <em><a href=\"http://oeis.org/A050231\" rel=\"nofollow noreferrer\">050231</a></em>.</li>\n </ul>\n \n <p>\\begin{align*}\n\\left.[z^n]A(z)\\right|_{a=b=1}\n\\end{align*}</p>\n \n <ul>\n <li><p>The wanted probability of valid words of size $n$ is the coefficient of $z^n$ of $A(z)$ evaluated at $a=0.7,b=0.3$.\n \\begin{align*}\n\\left.[z^n]A(z)\\right|_{a=0.7,b=0.3}\n\\end{align*}</p></li>\n <li><p>We obtain for $n=0$ up to $n=7$\n 
\\begin{array}{c|cl}\nn&amp;\\left.[z^n]A(z)\\right|_{a=b=1}&amp;\\left.[z^n]A(z)\\right|_{a=0.7,b=0.3}\\\\\n\\hline\n0&amp;0&amp;0\\\\\n1&amp;0&amp;0\\\\\n2&amp;0&amp;0\\\\\n3&amp;1&amp;0.343\\\\\n4&amp;3&amp;0.4459\\\\\n5&amp;8&amp;0.5488\\\\\n6&amp;20&amp;0.6517\\\\\n7&amp;47&amp;0.7193\\\\\n\\end{array}</p></li>\n </ul>\n</blockquote>\n" }, { "answer_id": 2094765, "score": 2, "text": "<p>I would propose an alternative - maybe easier - approach, trying to give a formula that provides a generalized answer for the probability of three events in a row, given the probability $p $ that the event occurs, as a function of the sample size $n$. As shown below, this approach may also be useful to address an additional issue of this problem highlighted in one of the comments, i.e. the sample size needed to \"guarantee\" a success with a given confidence level. </p>\n\n<p>Let us call $P (n) $ the probability that our event occurs three times in a row within a sequence of $n $ trials. Also, for each trial, let us denote as Y the case that our event occurs, and as N the case that it does not occur. So, in the specific case described in the OP, we are looking for the probability of getting a triple YYY in a sequence of $n $ elements when $p=0.7$.</p>\n\n<p>As a general rule, we can note that, passing from a sample size $n $ to a sample size $n+1$, the probability that our event occurs three times in a row increases only because there is the possibility that the triple YYY, not obtained in the first $n $ trials, occurs as a final triple in the sequence of $n+1$ trials. So, to calculate $P (n+1 )$ from $ P (n) $, we simply have to add the probability that the triple in a row is not obtained until a final sequence NYYY. Note that the N preceding the triple YYY is strictly necessary to satisfy the condition that the triple has not been obtained in the first $n $ trials. </p>\n\n<p>We can then start with $n=3$, for which we trivially have $P (3)=p^3$. 
For $n=4$, we have to add to $P (3) $ the probability to get a sequence NYYY, i.e. we have to add $p^3(1-p)$. Similarly, for $n=5$, we have to add to $P (4) $ the probability to get a final sequence NYYY (note that this is not affected by what happens in the first of the five trials), which means that we must add $p^3(1-p)$ as in the previous case. Lastly, for $n=6$, we have again to add to $P (5) $ the probability to get a final sequence NYYY (note that, as above, this is not affected by what happens in the first and in the second of the six trials). This leads to a further increase by $p^3(1-p)$. So, for this initial group of four values of $n$ in our probability progression, we directly obtain the recurrence</p>\n\n<p>$$P (n+1)=P (n)+ p^3 (1-p)\\,\\,\\,\\, \\text {(for n=3 to 6)}$$</p>\n\n<p>Accordingly, for the specific case $p=0.7$, this formula gives the initial values </p>\n\n<p>\\begin{aligned} P(3) &amp; =0.7^3=0.343 \\\\\nP (4) &amp; = 0.7^3+0.3 \\cdot 0.7^3 = 0.4459 \\\\\nP (5) &amp; = 0.7^3+2\\cdot 0.3 \\cdot 0.7^3 = 0.5488 \\\\\nP (6) &amp; = 0.7^3+3\\cdot 0.3 \\cdot 0.7^3 = 0.6517 \\\\\n\\end{aligned} </p>\n\n<p>already obtained in a previous answer via a different method.</p>\n\n<p>Now let us consider the values of $n \\geq 7$. Here something changes, because the fact that the sequence ends with NYYY does not imply that the triple YYY has not occurred before, i.e. in the first $n-4$ trials. So, at each step, the probability of getting a final NYYY must be multiplied by the probability of failure in the first $n-4$ trials, which is $1-P (n-4 ) $. Therefore, for $n=7$ we have to add to $P (6) $ the probability given by $p^3(1-p) [1-P(3)]\\,\\,$, for $n=8$ we have to add to $P (7) $ the probability given by $p^3(1-p) [1-P(4)] \\,\\, $, and so on. This leads to the recurrence</p>\n\n<p>$$P (n+1)=P (n)+ p^3(1-p)[1-P(n-4)]\\,\\,\\,\\, \\text{(for n}\\, \\geq \\text {7)}$$</p>\n\n<p>We can now try to get a closed form for this last recurrence. 
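Before doing so, the recurrence itself can be checked numerically. A minimal sketch in Python (the function name is mine; taking $P(0)=P(1)=P(2)=0$ lets a single update cover both the $n=3$ to $6$ case and the $n \geq 7$ case):

```python
def p_run3(n, p=0.7):
    """P(n): probability of at least three successes in a row among n trials,
    iterating P(m) = P(m-1) + p^3*(1-p)*(1 - P(m-4))."""
    jk = p**3 * (1 - p)
    P = {0: 0.0, 1: 0.0, 2: 0.0, 3: p**3}
    for m in range(4, n + 1):
        P[m] = P[m - 1] + jk * (1 - P[m - 4])
    return P[n]
```

For $p=0.7$ this reproduces the values above: $P(5)=0.5488$, $P(7)\approx 0.7193$, $P(10)\approx 0.8586$.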
Let us set, for simplicity, $p^3=j$ and $(1-p) =k \\,\\, $. The recurrence becomes </p>\n\n<p>$$P (n+1)=P (n)+ jk-jk \\,P(n-4)\\, \\,$$</p>\n\n<p>Note that the four values of $P (n)$ already determined can be written as $$P (3)=j $$\n$$P (4)=j +jk$$\n$$P (5)=j +2jk$$\n$$P (6)=j +3jk$$</p>\n\n<p>so that they contain only terms of degree $j $ or $jk $. Continuing our probability progression and focusing on the second group of four values of $n$ (from $n=7\\,\\, $ to $n=10 \\,\\, $), new terms of degree $j^2k $ and $j^2k^2$ appear applying the recurrence above (for the sake of readability, here I directly provide the resulting expressions avoiding the rather tedious step-by-step calculations):</p>\n\n<p>\\begin{aligned} P (7) &amp; = P(6)+ jk-jk P (3)=j+4jk-j^2k \\\\\nP (8) &amp; = P(7)+ jk-jk P (4)= j+ 5jk -2j^2k-j^2k^2 \\\\\nP (9) &amp; = P(8) + jk-jk P (5)= j+ 6jk -3j^2k-3j^2k^2 \\\\\nP (10) &amp; = P (9) + jk-jk P (6)= j+ 7jk -4j^2k-6j^2k^2 \\\\\n\\end{aligned}</p>\n\n<p>Some pattern begins to emerge in these expansions. For the specific case $p=0.7$,\n$$P (7)\\approx 0.7193$$\n$$P (8)\\approx 0.7763$$\n$$P (9) \\approx 0.8228$$\n$$P (10) \\approx 0.8586$$</p>\n\n<p>We can repeat the procedure, applying the same recurrence and continuing with the third group of four values of $n $ (i.e., $n=11$ to $14$). Again, new terms appear, in this case of degree $j^3k^2$ and $j^3 k^3$ (as above, I directly provide the resulting expressions):</p>\n\n<p>\\begin{align*} P(11) &amp;=j+8jk-5j^2k-10j^2k^2+j^3k^2 \\\\ P(12) &amp;=j+9jk-6j^2k-15j^2k^2+3j^3k^2+j^3k^3 \\\\P(13) &amp; = j+10jk-7j^2k -21j^2k^2+6j^3k^2+4j^3k^3 \\\\\nP (14) &amp; = j+11jk-8j^2k-28j^2k^2+10j^3k^2+10j^3k^3\\\\\n\\end{align*}</p>\n\n<p>The final pattern is therefore as follows. Our expansions of $P (n) $ include:</p>\n\n<ul>\n<li><p>in the odd positions, $\\lfloor (n+1)/4 \\rfloor \\,\\, $ terms of the form $$(-1)^{i-1} \\binom{n-3i}{i-1} j^i k^{i-1} \\,\\, $$ where $$i=1,2... 
\\lfloor (n+1)/4 \\rfloor \\,\\, $$ For example, in the expansion of $P (14)$ there are $ \\lfloor (14+1)/4 \\rfloor =3 \\, \\,\\, $ terms of this form, which are $j $, $-8j^2k $, and $10 j^3k^2$;</p></li>\n<li><p>in the even positions, $\\lfloor n/4 \\rfloor \\,\\, $ terms of the form $$(-1)^{i-1} \\binom{n-3i}{i} j^i k^i \\,\\, $$ where $$i=1,2... \\lfloor n/4 \\rfloor$$</p>\n\n<p>Accordingly, taking again for example the expansion of $P (14)$, there are $ \\lfloor 14/4 \\rfloor =3 \\,$ terms of this form, which are $11jk$, $-28j^2k^2$, and $10 j^3k^3$.</p></li>\n</ul>\n\n<p>Therefore, we obtain the following general expression for $P (n) $:</p>\n\n<p>$$P (n)=\\sum_{i}^{\\lfloor (n+1)/4 \\rfloor } (-1)^{i-1} \\binom{n-3i}{i-1} j^{i}k^{i-1} +\\sum_{i}^{\\lfloor n/4 \\rfloor } (-1)^{i-1} \\binom{n-3i}{i} j^i k^i $$</p>\n\n<p>Recalling that $j=p^3$ and $k=(1-p) $, we finally get</p>\n\n<p>$$P (n)=\\sum_{i}^{\\lfloor (n+1)/4 \\rfloor } (-1)^{i-1} \\binom{n-3i}{i-1} p^{3i}(1-p)^{i-1} + \\sum_{i}^{\\lfloor n/4 \\rfloor } (-1)^{i-1} \\binom{n-3i}{i} p^{3i} (1-p)^i $$</p>\n\n<p>Setting $p=0.7$, we obtain the probability asked for in the OP, which is</p>\n\n<p>$$P (n)=\\sum_{i}^{\\lfloor (n+1)/4 \\rfloor } (-1)^{i-1} \\binom{n-3i}{i-1} 0.7^{3i} \\cdot 0.3^{i-1} + \\sum_{i}^{\\lfloor n/4 \\rfloor } (-1)^{i-1} \\binom{n-3i}{i} 0.7^{3i}\\cdot 0.3^i $$</p>\n\n<p>Also note that the two sums have the same number of terms when $n \\not\\equiv -1 \\,\\, \\text {mod}\\,\\, 4$, whereas the first sum has one term more than the second one when $n \\equiv -1 \\,\\,\\text {mod} \\, 4$, because in this case $(n+1)/4$ is an integer. 
This term is given by</p>\n\n<p>$$\\displaystyle (-1)^{\\frac{(n+1)}{4}-1} \\binom{n-\\frac{3(n+1)}{4}}{ \\frac{(n+1)}{4}-1}\\, 0.7^{\\frac{3(n+1)}{4}} \\, 0.3^{\\frac{(n+1)}{4}} $$ \n$$\\displaystyle = (-1)^{\\frac{(n-3)}{4}} \\binom{\\frac{(n-3)}{4}}{\\frac{(n-3)}{4}}\\, 0.7^{\\frac{3(n+1)}{4}} \\, 0.3^{\\frac{(n+1)}{4}} $$\n$$\\displaystyle = (-1)^{\\frac{(n-3)}{4}} \\, 0.7^{\\frac{3(n+1)}{4}} \\, 0.3^{\\frac{(n+1)}{4}} $$</p>\n\n<p>So the probability asked in the OP can be alternatively expressed as</p>\n\n<p>$$P (n)=\\sum_{i}^{\\lfloor n/4 \\rfloor } (-1)^{i-1} \\binom{n-3i}{i}\\, 0.7^{3i}\\, 0.3^{i} \\left( \\frac {i}{0.3(n-4 i+1)} +1 \\right) + S$$</p>\n\n<p>where</p>\n\n<p>$$S = \\begin{cases} 0 &amp; \\mbox{if } n \\not\\equiv -1 \\, \\text {mod}\\, 4\\\\\n (-1)^{\\frac{(n-3)}{4}} \\, 0.7^{\\frac{3(n+1)}{4}} \\, 0.3^{\\frac{(n+1)}{4}} &amp; \\mbox{if } n \\equiv -1 \\, \\text {mod}\\, 4 \\end{cases}$$</p>\n\n<p>It could also be observed that $|S|$ is relatively small and rapidly decreases for increasing $n$: its value is $\\approx 0.0106\\, \\,\\, $ for $n=7$, becoming $\\approx 0.00109\\, \\,\\, $ for $n=11$ and $\\approx 0.000112\\,\\,\\, $ for $n=15$, so that its contribution to the overall probability is nearly negligible. The last expansion without the $S$ term can therefore be considered a valid approximation even for the case $n \\equiv -1 \\, \\text {mod}\\, 4$. </p>\n\n<p>The probabilities of three events in a row given by these formulas for $p=0.7$ and $n=3$ to $10$ have already been provided above. As expected, for increasing $n $, the value of $P (n)$ approaches $1$. The first value for which the probability achieves $95\\%$ is $n=15$, since $P (15) \\approx 0.9553\\,\\,\\, $, as confirmed by WA <a href=\"http://www.wolframalpha.com/input/?i=0.7%5E(12)%200.3%5E(4)%20%2B%20%20sum%20%20%5B(-1)%5E(i-1)%20(((15-3i)%20choose%20i)%200.7%5E(3i)%200.3%5Ei%20(i%2F(0.3(15-4%20i%2B1))%2B1)%20%20%20),%7Bi,%2015%2F4%7D%5D\" rel=\"nofollow noreferrer\">here</a>. 
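As a numerical cross-check, the two-sum closed form above is straightforward to implement. A sketch (the helper name is mine; `math.comb` is used for the binomials, and it conveniently returns 0 when the lower index exceeds the upper):

```python
from math import comb

def p_closed(n, p=0.7):
    """Evaluate the two sums of the closed form for P(n); i starts at 1."""
    j, k = p**3, 1 - p
    # odd-position terms: (-1)^(i-1) C(n-3i, i-1) j^i k^(i-1), i up to floor((n+1)/4)
    s1 = sum((-1)**(i - 1) * comb(n - 3*i, i - 1) * j**i * k**(i - 1)
             for i in range(1, (n + 1)//4 + 1))
    # even-position terms: (-1)^(i-1) C(n-3i, i) j^i k^i, i up to floor(n/4)
    s2 = sum((-1)**(i - 1) * comb(n - 3*i, i) * j**i * k**i
             for i in range(1, n//4 + 1))
    return s1 + s2
```

For $p=0.7$ this agrees with the values from the recurrence, e.g. $P(7)\approx 0.7193$ and $P(10)\approx 0.8586$.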
The probability achieves $97.5\\%$ and $99\\%$ for $n=18$ and $n=22$, respectively, since $P(18) \\approx 0.9772\\,\\,\\, $ and $P(22) \\approx 0.9909\\,\\,\\, $, as confirmed by WA <a href=\"http://www.wolframalpha.com/input/?i=sum%20%20%5B(-1)%5E(i-1)%20(((18-3i)%20choose%20i)%200.7%5E(3i)%200.3%5Ei%20(i%2F(0.3(18-4%20i%2B1))%2B1)%20%20%20),%7Bi,%2018%2F4%7D%5D\" rel=\"nofollow noreferrer\">here</a> and <a href=\"http://www.wolframalpha.com/input/?i=sum%20%20%5B(-1)%5E(i-1)%20(((22-3i)%20choose%20i)%200.7%5E(3i)%200.3%5Ei%20(i%2F(0.3(22-4%20i%2B1))%2B1)%20%20%20),%7Bi,%2022%2F4%7D%5D\" rel=\"nofollow noreferrer\">here</a>. </p>\n" }, { "answer_id": 2083011, "score": 1, "text": "<p>The issue with looking at opportunities for success is that they are not completely independent. For example, in a row of size 4, if the first three aren't all successes, this greatly decreases the chance that the last three are all successes (the only way this can happen is if only the first trial was a failure).</p>\n\n<p>You can still get this approach to work (via correcting for events which are dependent via some sort of inclusion-exclusion), but generally a cleaner method for these types of problems is to write down a recurrence (or more generally, model them as a <a href=\"https://en.wikipedia.org/wiki/Markov_chain\" rel=\"nofollow noreferrer\">finite state Markov chain</a>).</p>\n\n<p>Let's let $P(N)$ be the probability a row of length $N$ has no 3 consecutive successes. To help us write down a recurrence, we'll break $P(N)$ down into 3 smaller probabilities. Let's let $P(N, 0)$ be the probability a row of length $N$ has no 3 consecutive successes AND that the last attempt was a failure. 
Similarly, let $P(N,1)$ be the probability that a row of length $N$ has no 3 consecutive successes AND that the last attempt was a success, but the second-last attempt was a failure, and finally, let $P(N,2)$ be the probability that a row of length $N$ has no 3 consecutive successes AND the last two attempts were successes, but the third-to-last attempt was a failure. Intuitively, these different cases measure how \"close\" we are to getting 3 successes in a row. It's not too hard to see that $P(N) = P(N,0) + P(N,1) + P(N,2)$, since if the last three attempts are all successes, you've already won.</p>\n\n<p>Now, let's try to write $P(N+1, 0)$ in terms of smaller cases. The probability you have no 3 consecutive successes in a row of length $N+1$ and that your last attempt was a failure is simply the probability that there are no 3 consecutive successes in the first $N$ tries and that you fail the $N+1$st try. This gives us the equation:</p>\n\n<p>$$P(N+1,0) = 0.3P(N) = 0.3P(N,0) + 0.3P(N,1) + 0.3P(N,2)$$</p>\n\n<p>Via similar logic, we can show that $P(N+1,1)$ and $P(N+1,2)$ satisfy the following equations:</p>\n\n<p>$$P(N+1,1) = 0.7P(N, 0)$$</p>\n\n<p>and</p>\n\n<p>$$P(N+1,2) = 0.7P(N, 1)$$</p>\n\n<p>Substituting these into the original equation for $P(N+1,0)$, we get:</p>\n\n<p>$$P(N+1, 0) = 0.3P(N,0) + 0.21P(N-1, 0) + 0.147P(N-2,0)$$</p>\n\n<p>Now, this is just a standard <a href=\"https://en.wikipedia.org/wiki/Recurrence_relation\" rel=\"nofollow noreferrer\">linear recurrence relation</a>. If $\\lambda_1$, $\\lambda_2$, and $\\lambda_3$ are the roots of $z^3 - 0.3z^2-0.21z-0.147=0$, then there are some constants $a_1, a_2, a_3$ which you can solve for so that</p>\n\n<p>$$P(N, 0) = a_1\\lambda_1^N + a_2\\lambda_2^N + a_3\\lambda_3^N$$</p>\n\n<p>Since $P(N) = P(N+1,0)/0.3$, this gives a similar formula for $P(N)$ (and ultimately for $1-P(N)$, the probability a success does occur 3 consecutive times). 
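Rather than solving for the roots, the state recurrence above can also be iterated directly. A small sketch (the state-vector representation and the function name are mine):

```python
def prob_three_in_a_row(N, p=0.7):
    """1 - P(N): probability of 3 consecutive successes within N trials.
    s[k] = probability of no run of 3 so far AND exactly k trailing successes."""
    q = 1 - p
    s = [1.0, 0.0, 0.0]      # empty row: zero trailing successes
    for _ in range(N):
        s = [q * sum(s),     # P(N+1,0) = q * P(N)
             p * s[0],       # P(N+1,1) = p * P(N,0)
             p * s[1]]       # P(N+1,2) = p * P(N,1)
    return 1 - sum(s)
```

With $p=0.7$ this gives $0.343$, $0.4459$, $0.5488$, $0.6517$ for $N=3,\dots,6$, matching the values computed in the other answers.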
</p>\n" }, { "answer_id": 2095652, "score": 1, "text": "<p>Let $p=0.7$ be the probability of success, $\\mathrm{f}(n)$ be the probability of a sequence of length $n$ having 3 consecutive successes and $\\mathrm{g}(n)=1-\\mathrm{f}(n)$, where $\\mathrm{g}(n)$ corresponds to the probability of a sequence of length $n$ having at most 2 consecutive successes.</p>\n\n<p>It holds $\\mathrm{g}(0) = \\mathrm{g}(1) = \\mathrm{g}(2) = 1$.</p>\n\n<p>Then $\\mathrm{g}(n) = (1-p)\\mathrm{g}(n-1) + (1-p)p\\mathrm{g}(n-2) + (1-p)p^2\\mathrm{g}(n-3)$.</p>\n\n<p>This holds because all sequences of length $n$ that have at most 2 consecutive successes can be constructed from (1) all such sequences of length $n-1$ suffixed by a failure plus; (2) all such sequences of length $n-2$ suffixed by a failure and a success, plus; (3) all such sequences of length $n-3$ suffixed by a failure and two successes.</p>\n" }, { "answer_id": 3809188, "score": 0, "text": "<p>I saw that the answers to the question include either regex (Automata Theory) or linear recurrences (Linear Algebra), so for those interested I decided to post <a href=\"https://people.math.carleton.ca/%7Ebsteinbg/classes/automata2009/generatingfunctions.pdf\" rel=\"nofollow noreferrer\">this paper</a> on generating functions that neatly explains the interconnection between them.</p>\n" }, { "answer_id": 2087355, "score": 4, "text": "<blockquote>\n <p>Let's denote the event under consideration with $a$\n \\begin{align*}\nP(X=a)=0.7\n\\end{align*}\n and we denote the complementary event with $b$.</p>\n</blockquote>\n\n<p>We are looking for the words of length $n$ and their probability of occurrence which do not contain three or more consecutive $a$'s. 
The result is $1$ minus this probability.</p>\n\n<blockquote>\n <p>We can describe the set of these invalid words as the words built from the alphabet $V=\\{a,b\\}$ which contain at most one or two consecutive $a$'s.</p>\n \n <p>These are words</p>\n \n <ul>\n <li><p>starting with zero or more $b$'s:$$b^*$$</p></li>\n <li><p>followed by zero or more occurrences of $a$ or $aa$ each followed by one or more $b$'s: $$(ab^+|aab^+)^*$$</p></li>\n <li><p>and terminating with zero, one or two $a$'s\n $$(\\varepsilon|a|aa)$$</p></li>\n </ul>\n \n <p>We obtain</p>\n \n <p>\\begin{align*}\nb^*(ab^+|aab^+)^*(\\varepsilon|a|aa)\\tag{1}\n\\end{align*}</p>\n</blockquote>\n\n<p>The regular expression (1) generates all invalid words in a <em>unique</em> manner. In such cases we can use it to derive a generating function $$\\sum_{n=0}^\\infty a_n z^n$$ with\n$a_n$ giving the number of invalid words of length $n$.</p>\n\n<p>In order to do so all we need to know is the <em><a href=\"https://en.wikipedia.org/wiki/Geometric_series#Geometric_power_series\" rel=\"nofollow noreferrer\">geometric series expansion</a></em> since the $star$ operator\n\\begin{align*}\na^*=\\left(\\varepsilon|a|a^2|a^3|\\cdots\\right)\\qquad\\text{ translates to }\\qquad 1+z+z^2+z^3+\\cdots=\\frac{1}{1-z}\n\\end{align*}</p>\n\n<p>Accordingly $a^+=aa^*$ translates to $\\frac{z}{1-z}$ and alternatives like $(\\varepsilon|a|aa)$ can be written as $1+z+z^2$.</p>\n\n<blockquote>\n <p>We translate the regular expression (1) into a generating function (by mixing up somewhat the symbolic to provide some intermediate steps).</p>\n \n <p>Since we want to calculate the probabilities of occurrence of $P(X=a)$ we keep track of $a$ and $b$ by respecting them as corresponding factors in the generating function.</p>\n \n <p>\\begin{align*}\nb^*\\left(ab^+|aab^+\\right)^*(\\varepsilon|a|aa)\n&amp;\\longrightarrow \\quad 
\\frac{1}{1-bz}\\left(\\left.\\frac{abz^2}{1-bz}\\right|\\frac{a^2bz^3}{1-bz}\\right)^*\\left(1+az+a^2z^2\\right)\\\\\n&amp;\\longrightarrow \\quad \\frac{1}{1-bz}\\left(\\frac{abz^2+a^2bz^3}{1-bz}\\right)^*(1+az+a^2z^2)\\\\\n&amp;\\longrightarrow \\quad \\frac{1}{1-bz}\\cdot\\frac{1}{1-\\frac{abz^2+a^2bz^3}{1-bz}}(1+az+a^2z^2)\\\\\n&amp;\\quad\\quad=\\frac{1+az+a^2z^2}{1-bz-abz^2-a^2bz^3}\\tag{1}\n\\end{align*}</p>\n</blockquote>\n\n<p>$$ $$</p>\n\n<blockquote>\n <p>We conclude: The number of invalid words is given by (1). So, the number of valid words is the number of all words minus the number of invalid words. We obtain the generating function $A(z)$\n \\begin{align*}\nA(z)&amp;=\\sum_{n=0}^\\infty(a+b)^nz^n-\\frac{1+az+a^2z^2}{1-bz-abz^2-a^2bz^3}\\\\\n&amp;=\\frac{1}{1-(a+b)z}-\\frac{1+az+a^2z^2}{1-bz-abz^2-a^2bz^3}\\\\\n&amp;=\\frac{a^3z^3}{(1-(a+b)z)(1-bz-abz^2-a^2bz^3)}\\\\\n&amp;=a^3z^3+a^3(a+2b)z^4+a^3(\\color{blue}{1}a^2+\\color{blue}{4}ab+\\color{blue}{3}b^2)z^5\\\\\n&amp;\\qquad +a^3(a+b)^2(a+4b)z^6+a^3(a^4+7a^3b+18a^2b^2+16ab^3+5b^4)z^7+\\cdots\n\\end{align*}</p>\n \n <p>The expansion was done with the help of Wolfram Alpha. We see that e.g. 
the number of valid words of length $5$ is $\\color{blue}{1}+\\color{blue}{4}+\\color{blue}{3}=8$.</p>\n</blockquote>\n\n<p>$$ $$</p>\n\n<blockquote>\n <p>Out of $2^5=32$ binary words of length $5$ there are $8$ valid words which are marked $\\color{blue}{\\text{blue}}$ in the table below.</p>\n \n <p>We obtain\n \\begin{array}{cccc}\n\\color{blue}{aaa}aa\\qquad&amp;ab\\color{blue}{aaa}\\qquad&amp;b\\color{blue}{aaa}a\\qquad&amp;bb\\color{blue}{aaa}\\\\\n\\color{blue}{aaa}ab\\qquad&amp;abaab\\qquad&amp;b\\color{blue}{aaa}b\\qquad&amp;bbaab\\\\\n\\color{blue}{aaa}ba\\qquad&amp;ababa\\qquad&amp;baaba\\qquad&amp;bbaba\\\\\n\\color{blue}{aaa}bb\\qquad&amp;ababb\\qquad&amp;baabb\\qquad&amp;bbabb\\\\\naabaa\\qquad&amp;abbaa\\qquad&amp;babaa\\qquad&amp;bbbaa\\\\\naabab\\qquad&amp;abbab\\qquad&amp;babab\\qquad&amp;bbbab\\\\\naabba\\qquad&amp;abbba\\qquad&amp;babba\\qquad&amp;bbbba\\\\\naabbb\\qquad&amp;abbbb\\qquad&amp;babbb\\qquad&amp;bbbbb\\\\\n\\end{array}</p>\n</blockquote>\n\n<p>We denote with $[z^n]$ the coefficient of $z^n$ in a series.</p>\n\n<blockquote>\n <p>We conclude</p>\n \n <ul>\n <li>The number of occurrences of wanted words of size $n$ is the coefficient of $z^n$ of $A(z)$ evaluated at $a=b=1$ and given as OEIS sequence <em><a href=\"http://oeis.org/A050231\" rel=\"nofollow noreferrer\">050231</a></em>.</li>\n </ul>\n \n <p>\\begin{align*}\n\\left.[z^n]A(z)\\right|_{a=b=1}\n\\end{align*}</p>\n \n <ul>\n <li><p>The wanted probability of valid words of size $n$ is the coefficient of $z^n$ of $A(z)$ evaluated at $a=0.7,b=0.3$.\n \\begin{align*}\n\\left.[z^n]A(z)\\right|_{a=0.7,b=0.3}\n\\end{align*}</p></li>\n <li><p>We obtain for $n=0$ up to $n=7$\n 
\\begin{array}{c|cl}\nn&amp;\\left.[z^n]A(z)\\right|_{a=b=1}&amp;\\left.[z^n]A(z)\\right|_{a=0.7,b=0.3}\\\\\n\\hline\n0&amp;0&amp;0\\\\\n1&amp;0&amp;0\\\\\n2&amp;0&amp;0\\\\\n3&amp;1&amp;0.343\\\\\n4&amp;3&amp;0.4459\\\\\n5&amp;8&amp;0.5488\\\\\n6&amp;20&amp;0.6517\\\\\n7&amp;47&amp;0.7193\\\\\n\\end{array}</p></li>\n </ul>\n</blockquote>\n" }, { "answer_id": 2094765, "score": 2, "text": "<p>I would propose an alternative - maybe easier - approach, trying to give a formula that provides a generalized answer for the probability of three events in a row, given the probability $p $ that the event occurs, as a function of the sample size $n$. As shown below, this approach may also be useful to address an additional issue of this problem highlighted in one of the comments, i.e. the sample size needed to \"guarantee\" a success with a given confidence level. </p>\n\n<p>Let us call $P (n) $ the probability that our event occurs three times in a row within a sequence of $n $ trials. Also, for each trial, let us denote as Y the case that our event occurs, and as N the case that it does not occur. So, in the specific case described in the OP, we are looking for the probability of getting a triple YYY in a sequence of $n $ elements when $p=0.7$.</p>\n\n<p>As a general rule, we can note that, passing from a sample size $n $ to a sample size $n+1$, the probability that our event occurs three times in a row increases only because there is the possibility that the triple YYY, not obtained in the first $n $ trials, occurs as a final triple in the sequence of $n+1$ trials. So, to calculate $P (n+1 )$ from $ P (n) $, we simply have to add the probability that the triple in a row is not obtained until a final sequence NYYY. Note that the N preceding the triple YYY is strictly necessary to satisfy the condition that the triple has not been obtained in the first $n $ trials. </p>\n\n<p>We can then start with $n=3$, for which we trivially have $P (3)=p^3$. 
For $n=4$, we have to add to $P (3) $ the probability to get a sequence NYYY, i.e. we have to add $p^3(1-p)$. Similarly, for $n=5$, we have to add to $P (4) $ the probability to get a final sequence NYYY (note that this is not affected by what happens in the first of the five trials), which means that we must add $p^3(1-p)$ as in the previous case. Lastly, for $n=6$, we have again to add to $P (5) $ the probability to get a final sequence NYYY (note that, as above, this is not affected by what happens in the first and in the second of the six trials). This leads to a further increase by $p^3(1-p)$. So, for this initial group of four values of $n$ in our probability progression, we directly obtain the recurrence</p>\n\n<p>$$P (n+1)=P (n)+ p^3 (1-p)\\,\\,\\,\\, \\text {(for n=3 to 6)}$$</p>\n\n<p>Accordingly, for the specific case $p=0.7$, this formula gives the initial values </p>\n\n<p>\\begin{aligned} P(3) &amp; =0.7^3=0.343 \\\\\nP (4) &amp; = 0.7^3+0.3 \\cdot 0.7^3 = 0.4459 \\\\\nP (5) &amp; = 0.7^3+2\\cdot 0.3 \\cdot 0.7^3 = 0.5488 \\\\\nP (6) &amp; = 0.7^3+3\\cdot 0.3 \\cdot 0.7^3 = 0.6517 \\\\\n\\end{aligned} </p>\n\n<p>already obtained in a previous answer via a different method.</p>\n\n<p>Now let us consider the values of $n \\geq 7$. Here something changes, because the fact that the sequence ends with NYYY does not imply that the triple YYY has not occurred before, i.e. in the first $n-4$ trials. So, at each step, the probability of getting a final NYYY must be multiplied by the probability of failure in the first $n-4$ trials, which is $1-P (n-4 ) $. Therefore, for $n=7$ we have to add to $P (6) $ the probability given by $p^3(1-p) [1-P(3)]\\,\\,$, for $n=8$ we have to add to $P (7) $ the probability given by $p^3(1-p) [1-P(4)] \\,\\, $, and so on. This leads to the recurrence</p>\n\n<p>$$P (n+1)=P (n)+ p^3(1-p)[1-P(n-4)]\\,\\,\\,\\, \\text{(for n}\\, \\geq \\text {7)}$$</p>\n\n<p>We can now try to get a closed form for this last recurrence. 
Let us set, for simplicity, $p^3=j$ and $(1-p) =k \\,\\, $. The recurrence becomes </p>\n\n<p>$$P (n+1)=P (n)+ jk-jk \\,P(n-4)\\, \\,$$</p>\n\n<p>Note that the four values of $P (n)$ already determined can be written as $$P (3)=j $$\n$$P (4)=j +jk$$\n$$P (5)=j +2jk$$\n$$P (6)=j +3jk$$</p>\n\n<p>so that they contain only terms of degree $j $ or $jk $. Continuing our probability progression and focusing on the second group of four values of $n$ (from $n=7\\,\\, $ to $n=10 \\,\\, $), new terms of degree $j^2k $ and $j^2k^2$ appear applying the recurrence above (for the sake of readability, here I directly provide the resulting expressions avoiding the rather tedious step-by-step calculations):</p>\n\n<p>\\begin{aligned} P (7) &amp; = P(6)+ jk-jk P (3)=j+4jk-j^2k \\\\\nP (8) &amp; = P(7)+ jk-jk P (4)= j+ 5jk -2j^2k-j^2k^2 \\\\\nP (9) &amp; = P(8) + jk-jk P (5)= j+ 6jk -3j^2k-3j^2k^2 \\\\\nP (10) &amp; = P (9) + jk-jk P (6)= j+ 7jk -4j^2k-6j^2k^2 \\\\\n\\end{aligned}</p>\n\n<p>Some pattern begins to emerge in these expansions. For the specific case $p=0.7$,\n$$P (7)\\approx 0.7193$$\n$$P (8)\\approx 0.7763$$\n$$P (9) \\approx 0.8228$$\n$$P (10) \\approx 0.8586$$</p>\n\n<p>We can repeat the procedure, applying the same recurrence and continuing with the third group of four values of $n $ (i.e., $n=11$ to $14$). Again, new terms appear, in this case of degree $j^3k^2$ and $j^3 k^3$ (as above, I directly provide the resulting expressions):</p>\n\n<p>\\begin{align*} P(11) &amp;=j+8jk-5j^2k-10j^2k^2+j^3k^2 \\\\ P(12) &amp;=j+9jk-6j^2k-15j^2k^2+3j^3k^2+j^3k^3 \\\\P(13) &amp; = j+10jk-7j^2k -21j^2k^2+6j^3k^2+4j^3k^3 \\\\\nP (14) &amp; = j+11jk-8j^2k-28j^2k^2+10j^3k^2+10j^3k^3\\\\\n\\end{align*}</p>\n\n<p>The final pattern is therefore as follows. Our expansions of $P (n) $ include:</p>\n\n<ul>\n<li><p>in the odd positions, $\\lfloor (n+1)/4 \\rfloor \\,\\, $ terms of the form $$(-1)^{i-1} \\binom{n-3i}{i-1} j^i k^{i-1} \\,\\, $$ where $$i=1,2... 
\\lfloor (n+1)/4 \\rfloor \\,\\, $$ For example, in the expansion of $P (14)$ there are $ \\lfloor (14+1)/4 \\rfloor =3 \\, \\,\\, $ terms of this form, which are $j $, $-8j^2k $, and $10 j^3k^2$;</p></li>\n<li><p>in the even positions, $\\lfloor n/4 \\rfloor \\,\\, $ terms of the form $$(-1)^{i-1} \\binom{n-3i}{i} j^i k^i \\,\\, $$ where $$i=1,2... \\lfloor n/4 \\rfloor$$</p>\n\n<p>Accordingly, taking again for example the expansion of $P (14)$, there are $ \\lfloor 14/4 \\rfloor =3 \\,$ terms of this form, which are $11jk$, $-28j^2k^2$, and $10 j^3k^3$.</p></li>\n</ul>\n\n<p>Therefore, we obtain the following general expression for $P (n) $:</p>\n\n<p>$$P (n)=\\sum_{i}^{\\lfloor (n+1)/4 \\rfloor } (-1)^{i-1} \\binom{n-3i}{i-1} j^{i}k^{i-1} +\\sum_{i}^{\\lfloor n/4 \\rfloor } (-1)^{i-1} \\binom{n-3i}{i} j^i k^i $$</p>\n\n<p>Recalling that $j=p^3$ and $k=(1-p) $, we finally get</p>\n\n<p>$$P (n)=\\sum_{i}^{\\lfloor (n+1)/4 \\rfloor } (-1)^{i-1} \\binom{n-3i}{i-1} p^{3i}(1-p)^{i-1} + \\sum_{i}^{\\lfloor n/4 \\rfloor } (-1)^{i-1} \\binom{n-3i}{i} p^{3i} (1-p)^i $$</p>\n\n<p>Setting $p=0.7$, we obtain the probability asked for in the OP, which is</p>\n\n<p>$$P (n)=\\sum_{i}^{\\lfloor (n+1)/4 \\rfloor } (-1)^{i-1} \\binom{n-3i}{i-1} 0.7^{3i} \\cdot 0.3^{i-1} + \\sum_{i}^{\\lfloor n/4 \\rfloor } (-1)^{i-1} \\binom{n-3i}{i} 0.7^{3i}\\cdot 0.3^i $$</p>\n\n<p>Also note that the two sums have the same number of terms when $n \\not\\equiv -1 \\,\\, \\text {mod}\\,\\, 4$, whereas the first sum has one term more than the second one when $n \\equiv -1 \\,\\,\\text {mod} \\, 4$, because in this case $(n+1)/4$ is an integer. 
This term is given by</p>\n\n<p>$$\\displaystyle (-1)^{\\frac{(n+1)}{4}-1} \\binom{n-\\frac{3(n+1)}{4}}{ \\frac{(n+1)}{4}-1}\\, 0.7^{\\frac{3(n+1)}{4}} \\, 0.3^{\\frac{(n+1)}{4}} $$ \n$$\\displaystyle = (-1)^{\\frac{(n-3)}{4}} \\binom{\\frac{(n-3)}{4}}{\\frac{(n-3)}{4}}\\, 0.7^{\\frac{3(n+1)}{4}} \\, 0.3^{\\frac{(n+1)}{4}} $$\n$$\\displaystyle = (-1)^{\\frac{(n-3)}{4}} \\, 0.7^{\\frac{3(n+1)}{4}} \\, 0.3^{\\frac{(n+1)}{4}} $$</p>\n\n<p>So the probability asked in the OP can be alternatively expressed as</p>\n\n<p>$$P (n)=\\sum_{i}^{\\lfloor n/4 \\rfloor } (-1)^{i-1} \\binom{n-3i}{i}\\, 0.7^{3i}\\, 0.3^{i} \\left( \\frac {i}{0.3(n-4 i+1)} +1 \\right) + S$$</p>\n\n<p>where</p>\n\n<p>$$S = \\begin{cases} 0 &amp; \\mbox{if } n \\not\\equiv -1 \\, \\text {mod}\\, 4\\\\\n (-1)^{\\frac{(n-3)}{4}} \\, 0.7^{\\frac{3(n+1)}{4}} \\, 0.3^{\\frac{(n+1)}{4}} &amp; \\mbox{if } n \\equiv -1 \\, \\text {mod}\\, 4 \\end{cases}$$</p>\n\n<p>It could also be observed that $|S|$ is relatively small and rapidly decreases for increasing $n$: its value is $\\approx 0.0106\\, \\,\\, $ for $n=7$, becoming $\\approx 0.00109\\, \\,\\, $ for $n=11$ and $\\approx 0.000112\\,\\,\\, $ for $n=15$, so that its contribution to the overall probability is nearly negligible. The last expansion without the $S$ term can therefore be considered a valid approximation even for the case $n \\equiv -1 \\, \\text {mod}\\, 4$. </p>\n\n<p>The probabilities of three events in a row given by these formulas for $p=0.7$ and $n=3$ to $10$ have already been provided above. As expected, for increasing $n $, the value of $P (n)$ approaches $1$. The first value for which the probability achieves $95\\%$ is $n=15$, since $P (15) \\approx 0.9553\\,\\,\\, $, as confirmed by WA <a href=\"http://www.wolframalpha.com/input/?i=0.7%5E(12)%200.3%5E(4)%20%2B%20%20sum%20%20%5B(-1)%5E(i-1)%20(((15-3i)%20choose%20i)%200.7%5E(3i)%200.3%5Ei%20(i%2F(0.3(15-4%20i%2B1))%2B1)%20%20%20),%7Bi,%2015%2F4%7D%5D\" rel=\"nofollow noreferrer\">here</a>. 
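The sample-size question raised at the start of this answer can be settled by scanning the recurrence derived above until the desired confidence level is reached. A sketch (function names are mine):

```python
def p_run3(n, p=0.7):
    """P(n) via P(m) = P(m-1) + p^3*(1-p)*(1 - P(m-4)), with P(0)=P(1)=P(2)=0."""
    jk = p**3 * (1 - p)
    P = {0: 0.0, 1: 0.0, 2: 0.0, 3: p**3}
    for m in range(4, n + 1):
        P[m] = P[m - 1] + jk * (1 - P[m - 4])
    return P[n]

def min_trials(conf, p=0.7):
    """Smallest n for which P(n) >= conf."""
    n = 3
    while p_run3(n, p) < conf:
        n += 1
    return n
```

This confirms $n=15$ as the first sample size reaching the $95\%$ level.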
The probability achieves $97.5\\%$ and $99\\%$ for $n=18$ and $n=22$, respectively, since $P(18) \\approx 0.9772\\,\\,\\, $ and $P(22) \\approx 0.9909\\,\\,\\, $, as confirmed by WA <a href=\"http://www.wolframalpha.com/input/?i=sum%20%20%5B(-1)%5E(i-1)%20(((18-3i)%20choose%20i)%200.7%5E(3i)%200.3%5Ei%20(i%2F(0.3(18-4%20i%2B1))%2B1)%20%20%20),%7Bi,%2018%2F4%7D%5D\" rel=\"nofollow noreferrer\">here</a> and <a href=\"http://www.wolframalpha.com/input/?i=sum%20%20%5B(-1)%5E(i-1)%20(((22-3i)%20choose%20i)%200.7%5E(3i)%200.3%5Ei%20(i%2F(0.3(22-4%20i%2B1))%2B1)%20%20%20),%7Bi,%2022%2F4%7D%5D\" rel=\"nofollow noreferrer\">here</a>. </p>\n" }, { "answer_id": 2083011, "score": 1, "text": "<p>The issue with looking at opportunities for success is that they are not completely independent. For example, in a row of size 4, if the first three aren't all successes, this greatly decreases the chance that the last three are all successes (the only way this can happen is if only the first trial was a failure).</p>\n\n<p>You can still get this approach to work (via correcting for events which are dependent via some sort of inclusion-exclusion), but generally a cleaner method for these types of problems is to write down a recurrence (or more generally, model them as a <a href=\"https://en.wikipedia.org/wiki/Markov_chain\" rel=\"nofollow noreferrer\">finite state Markov chain</a>).</p>\n\n<p>Let's let $P(N)$ be the probability a row of length $N$ has no 3 consecutive successes. To help us write down a recurrence, we'll break $P(N)$ down into 3 smaller probabilities. Let's let $P(N, 0)$ be the probability a row of length $N$ has no 3 consecutive successes AND that the last attempt was a failure. 
Similarly, let $P(N,1)$ be the probability that a row of length $N$ has no 3 consecutive successes AND that the last attempt was a success, but the second-last attempt was a failure, and finally, let $P(N,2)$ be the probability that a row of length $N$ has no 3 consecutive successes AND the last two attempts were successes, but the third-to-last attempt was a failure. Intuitively, these different cases measure how \"close\" we are to getting 3 successes in a row. It's not too hard to see that $P(N) = P(N,0) + P(N,1) + P(N,2)$, since if the last three attempts are all successes, you've already won.</p>\n\n<p>Now, let's try to write $P(N+1, 0)$ in terms of smaller cases. The probability you have no 3 consecutive successes in a row of length $N+1$ and that your last attempt was a failure is simply the probability that there are no 3 consecutive successes in the first $N$ tries and that you fail the $N+1$st try. This gives us the equation:</p>\n\n<p>$$P(N+1,0) = 0.3P(N) = 0.3P(N,0) + 0.3P(N,1) + 0.3P(N,2)$$</p>\n\n<p>Via similar logic, we can show that $P(N+1,1)$ and $P(N+1,2)$ satisfy the following equations:</p>\n\n<p>$$P(N+1,1) = 0.7P(N, 0)$$</p>\n\n<p>and</p>\n\n<p>$$P(N+1,2) = 0.7P(N, 1)$$</p>\n\n<p>Substituting these into the original equation for $P(N+1,0)$, we get:</p>\n\n<p>$$P(N+1, 0) = 0.3P(N,0) + 0.21P(N-1, 0) + 0.147P(N-2,0)$$</p>\n\n<p>Now, this is just a standard <a href=\"https://en.wikipedia.org/wiki/Recurrence_relation\" rel=\"nofollow noreferrer\">linear recurrence relation</a>. If $\\lambda_1$, $\\lambda_2$, and $\\lambda_3$ are the roots of $z^3 - 0.3z^2-0.21z-0.147=0$, then there are some constants $a_1, a_2, a_3$ which you can solve for so that</p>\n\n<p>$$P(N, 0) = a_1\\lambda_1^N + a_2\\lambda_2^N + a_3\\lambda_3^N$$</p>\n\n<p>Since $P(N) = P(N+1,0)/0.3$, this gives a similar formula for $P(N)$ (and ultimately for $1-P(N)$, the probability a success does occur 3 consecutive times). 
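The scalar recurrence for $P(N,0)$ can also be checked numerically without solving for the roots. A sketch (the base values $P(1,0)=P(2,0)=P(3,0)=0.3$ are my inference from $P(N+1,0)=0.3P(N)$ with $P(0)=P(1)=P(2)=1$):

```python
def prob_three(N):
    """1 - P(N) for p = 0.7, iterating
    P(N+1,0) = 0.3*P(N,0) + 0.21*P(N-1,0) + 0.147*P(N-2,0)
    and using the relation P(N) = P(N+1,0)/0.3."""
    a = [None, 0.3, 0.3, 0.3]        # a[m] = P(m, 0); index 0 unused
    for m in range(4, N + 2):
        a.append(0.3*a[m-1] + 0.21*a[m-2] + 0.147*a[m-3])
    return 1 - a[N + 1] / 0.3
```

This reproduces $0.343$, $0.4459$, $0.5488$ for $N=3,4,5$, in agreement with the other approaches.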
</p>\n" }, { "answer_id": 2095652, "score": 1, "text": "<p>Let $p=0.7$ be the probability of success, $\\mathrm{f}(n)$ be the probability of a sequence of length $n$ having 3 consecutive successes and $\\mathrm{g}(n)=1-\\mathrm{f}(n)$, where $\\mathrm{g}(n)$ corresponds to the probability of a sequence of length $n$ having at most 2 consecutive successes.</p>\n\n<p>It holds $\\mathrm{g}(0) = \\mathrm{g}(1) = \\mathrm{g}(2) = 1$.</p>\n\n<p>Then $\\mathrm{g}(n) = (1-p)\\mathrm{g}(n-1) + (1-p)p\\mathrm{g}(n-2) + (1-p)p^2\\mathrm{g}(n-3)$.</p>\n\n<p>This holds because all sequences of length $n$ that have at most 2 consecutive successes can be constructed from (1) all such sequences of length $n-1$ suffixed by a failure plus; (2) all such sequences of length $n-2$ suffixed by a failure and a success, plus; (3) all such sequences of length $n-3$ suffixed by a failure and two successes.</p>\n" }, { "answer_id": 3809188, "score": 0, "text": "<p>I saw that the answers to the question include either regex (Automata Theory) or linear recurrences (Linear Algebra), so for those interested I decided to post <a href=\"https://people.math.carleton.ca/%7Ebsteinbg/classes/automata2009/generatingfunctions.pdf\" rel=\"nofollow noreferrer\">this paper</a> on generating functions that neatly explains the interconnection between them.</p>\n" }, { "answer_id": 2087355, "score": 4, "text": "<blockquote>\n <p>Let's denote the event under consideration with $a$\n \\begin{align*}\nP(X=a)=0.7\n\\end{align*}\n and we denote the complementary event with $b$.</p>\n</blockquote>\n\n<p>We are looking for the words of length $n$ and their probability of occurrence which do not contain three or more consecutive $a$'s. 
The result is $1$ minus this probability.</p>\n\n<blockquote>\n <p>We can describe the set of these invalid words as the words built from the alphabet $V=\\{a,b\\}$ which contain at most one or two consecutive $a$'s.</p>\n \n <p>These are words</p>\n \n <ul>\n <li><p>starting with zero or more $b$'s:$$b^*$$</p></li>\n <li><p>followed by zero or more occurrences of $a$ or $aa$ each followed by one or more $b$'s: $$(ab^+|aab^+)^*$$</p></li>\n <li><p>and terminating with zero, one or two $a$'s\n $$(\\varepsilon|a|aa)$$</p></li>\n </ul>\n \n <p>We obtain</p>\n \n <p>\\begin{align*}\nb^*(ab^+|aab^+)^*(\\varepsilon|a|aa)\\tag{1}\n\\end{align*}</p>\n</blockquote>\n\n<p>The regular expression (1) generates all invalid words in a <em>unique</em> manner. In such cases we can use it to derive a generating function $$\\sum_{n=0}^\\infty a_n z^n$$ with\n$a_n$ giving the number of invalid words of length $n$.</p>\n\n<p>In order to do so all we need to know is the <em><a href=\"https://en.wikipedia.org/wiki/Geometric_series#Geometric_power_series\" rel=\"nofollow noreferrer\">geometric series expansion</a></em> since the $star$ operator\n\\begin{align*}\na^*=\\left(\\varepsilon|a|a^2|a^3|\\cdots\\right)\\qquad\\text{ translates to }\\qquad 1+z+z^2+z^3+\\cdots=\\frac{1}{1-z}\n\\end{align*}</p>\n\n<p>Accordingly $a^+=aa^*$ translates to $\\frac{z}{1-z}$ and alternatives like $(\\varepsilon|a|aa)$ can be written as $1+z+z^2$.</p>\n\n<blockquote>\n <p>We translate the regular expression (1) into a generating function (by mixing up somewhat the symbolic to provide some intermediate steps).</p>\n \n <p>Since we want to calculate the probabilities of occurrence of $P(X=a)$ we keep track of $a$ and $b$ by respecting them as corresponding factors in the generating function.</p>\n \n <p>\\begin{align*}\nb^*\\left(ab^+|aab^+\\right)^*(\\varepsilon|a|aa)\n&amp;\\longrightarrow \\quad 
\\frac{1}{1-bz}\\left(\\left.\\frac{abz^2}{1-bz}\\right|\\frac{a^2bz^3}{1-bz}\\right)^*\\left(1+az+a^2z^2\\right)\\\\\n&amp;\\longrightarrow \\quad \\frac{1}{1-bz}\\left(\\frac{abz^2+a^2bz^3}{1-bz}\\right)^*(1+az+a^2z^2)\\\\\n&amp;\\longrightarrow \\quad \\frac{1}{1-bz}\\cdot\\frac{1}{1-\\frac{abz^2+a^2bz^3}{1-bz}}(1+az+a^2z^2)\\\\\n&amp;\\quad\\quad=\\frac{1+az+a^2z^2}{1-bz-abz^2-a^2bz^3}\\tag{1}\n\\end{align*}</p>\n</blockquote>\n\n<p>$$ $$</p>\n\n<blockquote>\n <p>We conclude: The number of invalid words is given by (1). So, the number of valid words is the number of all words minus the number of invalid words. We obtain the generating function $A(z)$\n \\begin{align*}\nA(z)&amp;=\\sum_{n=0}^\\infty(a+b)^nz^n-\\frac{1+az+a^2z^2}{1-bz-abz^2-a^2bz^3}\\\\\n&amp;=\\frac{1}{1-(a+b)z}-\\frac{1+az+a^2z^2}{1-bz-abz^2-a^2bz^3}\\\\\n&amp;=\\frac{a^3z^3}{(1-(a+b)z)(1-bz-abz^2-a^2bz^3)}\\\\\n&amp;=a^3z^3+a^3(a+2b)z^4+a^3(\\color{blue}{1}a^2+\\color{blue}{4}ab+\\color{blue}{3}b^2)z^5\\\\\n&amp;\\qquad +a^3(a+b)^2(a+4b)z^6+a^3(a^4+7a^3b+18a^2b^2+16ab^3+5b^4)z^7+\\cdots\n\\end{align*}</p>\n\n<p>The expansion was done with the help of Wolfram Alpha. We see that e.g. 
the number of valid words of length $5$ is $\\color{blue}{1}+\\color{blue}{4}+\\color{blue}{3}=8$.</p>\n</blockquote>\n\n<p>$$ $$</p>\n\n<blockquote>\n <p>Out of $2^5=32$ binary words of length $5$ there are $8$ valid words which are marked $\\color{blue}{\\text{blue}}$ in the table below.</p>\n \n <p>We obtain\n \\begin{array}{cccc}\n\\color{blue}{aaa}aa\\qquad&amp;ab\\color{blue}{aaa}\\qquad&amp;b\\color{blue}{aaa}a\\qquad&amp;bb\\color{blue}{aaa}\\\\\n\\color{blue}{aaa}ab\\qquad&amp;abaab\\qquad&amp;b\\color{blue}{aaa}b\\qquad&amp;bbaab\\\\\n\\color{blue}{aaa}ba\\qquad&amp;ababa\\qquad&amp;baaba\\qquad&amp;bbaba\\\\\n\\color{blue}{aaa}bb\\qquad&amp;ababb\\qquad&amp;baabb\\qquad&amp;bbabb\\\\\naabaa\\qquad&amp;abbaa\\qquad&amp;babaa\\qquad&amp;bbbaa\\\\\naabab\\qquad&amp;abbab\\qquad&amp;babab\\qquad&amp;bbbab\\\\\naabba\\qquad&amp;abbba\\qquad&amp;babba\\qquad&amp;bbbba\\\\\naabbb\\qquad&amp;abbbb\\qquad&amp;babbb\\qquad&amp;bbbbb\\\\\n\\end{array}</p>\n</blockquote>\n\n<p>We denote with $[z^n]$ the coefficient of $z^n$ in a series.</p>\n\n<blockquote>\n <p>We conclude</p>\n \n <ul>\n <li>The number of occurrences of wanted words of size $n$ is the coefficient of $z^n$ of $A(z)$ evaluated at $a=b=1$ and given as OEIS sequence <em><a href=\"http://oeis.org/A050231\" rel=\"nofollow noreferrer\">050231</a></em>.</li>\n </ul>\n \n <p>\\begin{align*}\n\\left.[z^n]A(z)\\right|_{a=b=1}\n\\end{align*}</p>\n \n <ul>\n <li><p>The wanted probability of valid words of size $n$ is the coefficient of $z^n$ of $A(z)$ evaluated at $a=0.7,b=0.3$.\n \\begin{align*}\n\\left.[z^n]A(z)\\right|_{a=0.7,b=0.3}\n\\end{align*}</p></li>\n <li><p>We obtain for $n=0$ up to $n=7$\n 
\\begin{array}{c|cl}\nn&amp;\\left.[z^n]A(z)\\right|_{a=b=1}&amp;\\left.[z^n]A(z)\\right|_{a=0.7,b=0.3}\\\\\n\\hline\n0&amp;0&amp;0\\\\\n1&amp;0&amp;0\\\\\n2&amp;0&amp;0\\\\\n3&amp;1&amp;0.343\\\\\n4&amp;3&amp;0.4459\\\\\n5&amp;8&amp;0.5488\\\\\n6&amp;20&amp;0.6517\\\\\n7&amp;47&amp;0.7193\\\\\n\\end{array}</p></li>\n </ul>\n</blockquote>\n" }, { "answer_id": 2094765, "score": 2, "text": "<p>I would propose an alternative - maybe easier - approach, trying to give a formula that provides a generalized answer for the probability of three events in a row, given the probability $p $ that the event occurs, as a function of the sample size $n$. As shown below, this approach may also be useful to determine an additional issue of this problem highlighted in one of the comments, i.e. the sample size needed to \"guarantee\" a success with a given confidence level. </p>\n\n<p>Let us call $P (n) $ the probability that our event occurs three times in a row within a sequence of $n $ trials. Also, for each trial, let us denote as Y the case that our event occurs, and as N the case that it does not occur. So, in the specific case described in the OP, we are looking for the probability of getting a triple YYY in a sequence of $n $ elements when $p=0.7$.</p>\n\n<p>As a general rule, we can note that, passing from a sample size $n $ to a sample size $n+1$, the probability that our event occurs three times in a row increases only because there is the possibility that the triple YYY, not obtained in the first $n $ trials, occurs as a final triple in the sequence of $n+1$ trials. So, to calculate $P (n+1 )$ from $ P (n) $, we simply have to add the probability that the triple in a row is not obtained until a final sequence NYYY. Note that the N preceding the triple YYY is strictly necessary to satisfy the condition that the triple has not been obtained in the first $n $ trials. </p>\n\n<p>We can then start with $n=3$, for which we trivially have $P (3)=p^3$. 
For $n=4$, we have to add to $P (3) $ the probability to get a sequence NYYY, i.e. we have to add $p^3(1-p)$. Similarly, for $n=5$, we have to add to $P (4) $ the probability to get a final sequence NYYY (note that this is not affected by what happens in the first of the five trials), which means that we must add $p^3(1-p)$ as in the previous case. Lastly, for $n=6$, we have again to add to $P (5) $ the probability to get a final sequence NYYY (note that, as above, this is not affected by what happens in the first and in the second of the six trials). This leads to a further increase by $p^3(1-p)$. So, for this initial group of four values of $n$ in our probability progression, we directly obtain the recurrence</p>\n\n<p>$$P (n)=P (n-1)+ p^3 (1-p)\\,\\,\\,\\, \\text {(for n=4 to 6)}$$</p>\n\n<p>Accordingly, for the specific case $p=0.7$, this formula gives the initial values </p>\n\n<p>\\begin{aligned} P(3) &amp; =0.7^3=0.343 \\\\\nP (4) &amp; = 0.7^3+0.3 \\cdot 0.7^3 = 0.4459 \\\\\nP (5) &amp; = 0.7^3+2\\cdot 0.3 \\cdot 0.7^3 = 0.5488 \\\\\nP (6) &amp; = 0.7^3+3\\cdot 0.3 \\cdot 0.7^3 = 0.6517 \\\\\n\\end{aligned} </p>\n\n<p>already obtained in a previous answer via a different method.</p>\n\n<p>Now let us consider the values of $n \\geq 7$. Here something changes, because the fact that the sequence ends with NYYY does not imply that the triple YYY has not occurred before, i.e. in the first $n-4$ trials. So, at each step, the probability of getting a final NYYY must be multiplied by the probability of failure in the first $n-4$ trials, which is $1-P (n-4 ) $. Therefore, for $n=7$ we have to add to $P (6) $ the probability given by $p^3(1-p) [1-P(3)]\\,\\,$, for $n=8$ we have to add to $P (7) $ the probability given by $p^3(1-p) [1-P(4)] \\,\\, $, and so on. This leads to the recurrence</p>\n\n<p>$$P (n)=P (n-1)+ p^3(1-p)[1-P(n-4)]\\,\\,\\,\\, \\text{(for n}\\, \\geq \\text {7)}$$</p>\n\n<p>We can now try to get a closed form for this last recurrence. 
Let us set, for simplicity, $p^3=j$ and $(1-p) =k \\,\\, $. The recurrence becomes </p>\n\n<p>$$P (n)=P (n-1)+ jk-jk \\,P(n-4)\\, \\,$$</p>\n\n<p>Note that the four values of $P (n)$ already determined can be written as $$P (3)=j $$\n$$P (4)=j +jk$$\n$$P (5)=j +2jk$$\n$$P (6)=j +3jk$$</p>\n\n<p>so that they contain only terms of degree $j $ or $jk $. Continuing our probability progression and focusing on the second group of four values of $n$ (from $n=7\\,\\, $ to $n=10 \\,\\, $), new terms of degree $j^2k $ and $j^2k^2$ appear applying the recurrence above (for the sake of readability, here I directly provide the resulting expressions avoiding the rather tedious step-by-step calculations):</p>\n\n<p>\\begin{aligned} P (7) &amp; = P(6)+ jk-jk P (3)=j+4jk-j^2k \\\\\nP (8) &amp; = P(7)+ jk-jk P (4)= j+ 5jk -2j^2k-j^2k^2 \\\\\nP (9) &amp; = P(8) + jk-jk P (5)= j+ 6jk -3j^2k-3j^2k^2 \\\\\nP (10) &amp; = P (9) + jk-jk P (6)= j+ 7jk -4j^2k-6j^2k^2 \\\\\n\\end{aligned}</p>\n\n<p>Some pattern begins to emerge in these expansions. For the specific case $p=0.7$,\n$$P (7)\\approx 0.7193$$\n$$P (8)\\approx 0.7763$$\n$$P (9) \\approx 0.8228$$\n$$P (10) \\approx 0.8586$$</p>\n\n<p>We can repeat the procedure, applying the same recurrence and continuing with the third group of four values of $n $ (i.e., $n=11$ to $14$). Again, new terms appear, in this case of degree $j^3k^2$ and $j^3 k^3$ (as above, I directly provide the resulting expressions):</p>\n\n<p>\\begin{align*} P(11) &amp;=j+8jk-5j^2k-10j^2k^2+j^3k^2 \\\\ P(12) &amp;=j+9jk-6j^2k-15j^2k^2+3j^3k^2+j^3k^3 \\\\P(13) &amp; = j+10jk-7j^2k -21j^2k^2+6j^3k^2+4j^3k^3 \\\\\nP (14) &amp; = j+11jk-8j^2k-28j^2k^2+10j^3k^2+10j^3k^3\\\\\n\\end{align*}</p>\n\n<p>The final pattern is therefore as follows. Our expansions of $P (n) $ include:</p>\n\n<ul>\n<li><p>in the odd positions, $\\lfloor (n+1)/4 \\rfloor \\,\\, $ terms of the form $$(-1)^{i-1} \\binom{n-3i}{i-1} j^i k^{i-1} \\,\\, $$ where $$i=1,2... 
\\lfloor (n+1)/4 \\rfloor \\,\\, $$ For example, in the expansion of $P (14)$ there are $ \\lfloor (14+1)/4 \\rfloor =3 \\, \\,\\, $ terms of this form, which are $j $, $-8j^2k $, and $10 j^3k^2$;</p></li>\n<li><p>in the even positions, $\\lfloor n/4 \\rfloor \\,\\, $ terms of the form $$(-1)^{i-1} \\binom{n-3i}{i} j^i k^i \\,\\, $$ where $$i=1,2... \\lfloor n/4 \\rfloor$$</p>\n\n<p>Accordingly, taking again for example the expansion of $P (14)$, there are $ \\lfloor 14/4 \\rfloor =3 \\,$ terms of this form, which are $11jk$, $-28j^2k^2$, and $10 j^3k^3$.</p></li>\n</ul>\n\n<p>Therefore, we obtain the following general expression for $P (n) $:</p>\n\n<p>$$P (n)=\\sum_{i=1}^{\\lfloor (n+1)/4 \\rfloor } (-1)^{i-1} \\binom{n-3i}{i-1} j^{i}k^{i-1} +\\sum_{i=1}^{\\lfloor n/4 \\rfloor } (-1)^{i-1} \\binom{n-3i}{i} j^i k^i $$</p>\n\n<p>Recalling that $j=p^3$ and $k=(1-p) $, we finally get</p>\n\n<p>$$P (n)=\\sum_{i=1}^{\\lfloor (n+1)/4 \\rfloor } (-1)^{i-1} \\binom{n-3i}{i-1} p^{3i}(1-p)^{i-1} + \\sum_{i=1}^{\\lfloor n/4 \\rfloor } (-1)^{i-1} \\binom{n-3i}{i} p^{3i} (1-p)^i $$</p>\n\n<p>Setting $p=0.7$, we obtain the probability asked for in the OP, which is</p>\n\n<p>$$P (n)=\\sum_{i=1}^{\\lfloor (n+1)/4 \\rfloor } (-1)^{i-1} \\binom{n-3i}{i-1} 0.7^{3i} \\cdot 0.3^{i-1} + \\sum_{i=1}^{\\lfloor n/4 \\rfloor } (-1)^{i-1} \\binom{n-3i}{i} 0.7^{3i}\\cdot 0.3^i $$</p>\n\n<p>Also note that the two sums have the same number of terms when $n \\not\\equiv -1 \\,\\, \\text {mod}\\,\\, 4$, whereas the first sum has one term more than the second one when $n \\equiv -1 \\,\\,\\text {mod} \\, 4$, because in this case $(n+1)/4$ is integer. 
This term is given by</p>\n\n<p>$$\\displaystyle (-1)^{\\frac{(n+1)}{4}-1} \\binom{n-\\frac{3(n+1)}{4}}{ \\frac{(n+1)}{4}-1}\\, 0.7^{\\frac{3(n+1)}{4}} \\, 0.3^{\\frac{(n+1)}{4}-1} $$ \n$$\\displaystyle = (-1)^{\\frac{(n-3)}{4}} \\binom{\\frac{(n-3)}{4}}{\\frac{(n-3)}{4}}\\, 0.7^{\\frac{3(n+1)}{4}} \\, 0.3^{\\frac{(n-3)}{4}} $$\n$$\\displaystyle = (-1)^{\\frac{(n-3)}{4}} \\, 0.7^{\\frac{3(n+1)}{4}} \\, 0.3^{\\frac{(n-3)}{4}} $$</p>\n\n<p>So the probability asked in the OP can be alternatively expressed as</p>\n\n<p>$$P (n)=\\sum_{i=1}^{\\lfloor n/4 \\rfloor } (-1)^{i-1} \\binom{n-3i}{i}\\, 0.7^{3i}\\, 0.3^{i} \\left( \\frac {i}{0.3(n-4 i+1)} +1 \\right) + S$$</p>\n\n<p>where</p>\n\n<p>$$S = \\begin{cases} 0 &amp; \\mbox{if } n \\not\\equiv -1 \\, \\text {mod}\\, 4\\\\\n (-1)^{\\frac{(n-3)}{4}} \\, 0.7^{\\frac{3(n+1)}{4}} \\, 0.3^{\\frac{(n-3)}{4}} &amp; \\mbox{if } n \\equiv -1 \\, \\text {mod}\\, 4 \\end{cases}$$</p>\n\n<p>It could also be observed that $|S|$ is relatively small and rapidly decreases for increasing $n$: its value is $\\approx 0.035\\, \\,\\, $ for $n=7$, becoming $\\approx 0.0036\\, \\,\\, $ for $n=11$ and $\\approx 0.00037\\,\\,\\, $ for $n=15$, so that its contribution to the overall probability rapidly becomes negligible. The last expansion without the $S$ term can therefore be considered a valid approximation even for the case $n \\equiv -1 \\, \\text {mod}\\, 4$. </p>\n\n<p>The probabilities of three events in a row given by these formulas for $p=0.7$ and $n=3$ to $10$ have already been provided above. As expected, for increasing $n $, the value of $P (n)$ approaches $1$. The first value for which the probability achieves $95\\%$ is $n=15$, since $P (15) \\approx 0.9549\\,\\,\\, $ (the WA computation <a href=\"http://www.wolframalpha.com/input/?i=0.7%5E(12)%200.3%5E(4)%20%2B%20%20sum%20%20%5B(-1)%5E(i-1)%20(((15-3i)%20choose%20i)%200.7%5E(3i)%200.3%5Ei%20(i%2F(0.3(15-4%20i%2B1))%2B1)%20%20%20),%7Bi,%2015%2F4%7D%5D\" rel=\"nofollow noreferrer\">here</a> adds $0.7^{12}\\, 0.3^{4}$ in place of $S=-\\,0.7^{12}\\, 0.3^{3}$ and therefore returns the slightly larger value $0.9553$). 
The probability achieves $97.5\\%$ and $99\\%$ for $n=18$ and $n=22$, respectively, since $P(18) \\approx 0.9772\\,\\,\\, $ and $P(22) \\approx 0.9909\\,\\,\\, $, as confirmed by WA <a href=\"http://www.wolframalpha.com/input/?i=sum%20%20%5B(-1)%5E(i-1)%20(((18-3i)%20choose%20i)%200.7%5E(3i)%200.3%5Ei%20(i%2F(0.3(18-4%20i%2B1))%2B1)%20%20%20),%7Bi,%2018%2F4%7D%5D\" rel=\"nofollow noreferrer\">here</a> and <a href=\"http://www.wolframalpha.com/input/?i=sum%20%20%5B(-1)%5E(i-1)%20(((22-3i)%20choose%20i)%200.7%5E(3i)%200.3%5Ei%20(i%2F(0.3(22-4%20i%2B1))%2B1)%20%20%20),%7Bi,%2022%2F4%7D%5D\" rel=\"nofollow noreferrer\">here</a>. </p>\n" } ]
Question: Probability event with $70$% success rate occurs three consecutive times for sample size $n$ Answer 1: <blockquote> <p>Let's denote the event under consideration with $a$ \begin{align*} P(X=a)=0.7 \end{align*} and we denote the complementary event with $b$.</p> </blockquote> <p>We are looking for the words of length $n$ and their probability of occurrence which do not contain three or more consecutive $a$'s. The result is $1$ minus this probability.</p> <blockquote> <p>We can describe the set of these invalid words as the words built from the alphabet $V=\{a,b\}$ which contain at most one or two consecutive $a$'s.</p> <p>These are words</p> <ul> <li><p>starting with zero or more $b$'s:$$b^*$$</p></li> <li><p>followed by zero or more occurrences of $a$ or $aa$ each followed by one or more $b$'s: $$(ab^+|aab^+)^*$$</p></li> <li><p>and terminating with zero, one or two $a$'s $$(\varepsilon|a|aa)$$</p></li> </ul> <p>We obtain</p> <p>\begin{align*} b^*(ab^+|aab^+)^*(\varepsilon|a|aa)\tag{1} \end{align*}</p> </blockquote> <p>The regular expression (1) generates all invalid words in a <em>unique</em> manner. 
In such cases we can use it to derive a generating function $$\sum_{n=0}^\infty a_n z^n$$ with $a_n$ giving the number of invalid words of length $n$.</p> <p>In order to do so all we need to know is the <em><a href="https://en.wikipedia.org/wiki/Geometric_series#Geometric_power_series" rel="nofollow noreferrer">geometric series expansion</a></em> since the $star$ operator \begin{align*} a^*=\left(\varepsilon|a|a^2|a^3|\cdots\right)\qquad\text{ translates to }\qquad 1+z+z^2+z^3+\cdots=\frac{1}{1-z} \end{align*}</p> <p>Accordingly $a^+=aa^*$ translates to $\frac{z}{1-z}$ and alternatives like $(\varepsilon|a|aa)$ can be written as $1+z+z^2$.</p> <blockquote> <p>We translate the regular expression (1) into a generating function (by mixing up somewhat the symbolic to provide some intermediate steps).</p> <p>Since we want to calculate the probabilities of occurrence of $P(X=a)$ we keep track of $a$ and $b$ by respecting them as corresponding factors in the generating function.</p> <p>\begin{align*} b^*\left(ab^+|aab^+\right)^*(\varepsilon|a|aa) &amp;\longrightarrow \quad \frac{1}{1-bz}\left(\left.\frac{abz^2}{1-bz}\right|\frac{a^2bz^3}{1-bz}\right)^*\left(1+az+a^2z^2\right)\\ &amp;\longrightarrow \quad \frac{1}{1-bz}\left(\frac{abz^2+a^2bz^3}{1-bz}\right)^*(1+az+a^2z^2)\\ &amp;\longrightarrow \quad \frac{1}{1-bz}\cdot\frac{1}{1-\frac{abz^2+a^2bz^3}{1-bz}}(1+az+a^2z^2)\\ &amp;\quad\quad=\frac{1+az+a^2z^2}{1-bz-abz^2-a^2bz^3}\tag{1} \end{align*}</p> </blockquote> <p>$$ $$</p> <blockquote> <p>We conclude: The number of invalid words is given by (1). So, the number of valid words is the number of all words minus the number of invalid words. 
We obtain the generating function $A(z)$ \begin{align*} A(z)&amp;=\sum_{n=0}^\infty(a+b)^nz^n-\frac{1+az+a^2z^2}{1-bz-abz^2-a^2bz^3}\\ &amp;=\frac{1}{1-(a+b)z}-\frac{1+az+a^2z^2}{1-bz-abz^2-a^2bz^3}\\ &amp;=\frac{a^3z^3}{(1-(a+b)z)(1-bz-abz^2-a^2bz^3)}\\ &amp;=a^3z^3+a^3(a+2b)z^4+a^3(\color{blue}{1}a^2+\color{blue}{4}ab+\color{blue}{3}b^2)z^5\\ &amp;\qquad +a^3(a+b)^2(a+4b)z^6+a^3(a^4+7a^3b+18a^2b^2+16ab^3+5b^4)z^7+\cdots \end{align*}</p> <p>The expansion was done with the help of Wolfram Alpha. We see that e.g. the number of valid words of length $5$ is $\color{blue}{1}+\color{blue}{4}+\color{blue}{3}=8$.</p> </blockquote> <p>$$ $$</p> <blockquote> <p>Out of $2^5=32$ binary words of length $5$ there are $8$ valid words which are marked $\color{blue}{\text{blue}}$ in the table below.</p> <p>We obtain \begin{array}{cccc} \color{blue}{aaa}aa\qquad&amp;ab\color{blue}{aaa}\qquad&amp;b\color{blue}{aaa}a\qquad&amp;bb\color{blue}{aaa}\\ \color{blue}{aaa}ab\qquad&amp;abaab\qquad&amp;b\color{blue}{aaa}b\qquad&amp;bbaab\\ \color{blue}{aaa}ba\qquad&amp;ababa\qquad&amp;baaba\qquad&amp;bbaba\\ \color{blue}{aaa}bb\qquad&amp;ababb\qquad&amp;baabb\qquad&amp;bbabb\\ aabaa\qquad&amp;abbaa\qquad&amp;babaa\qquad&amp;bbbaa\\ aabab\qquad&amp;abbab\qquad&amp;babab\qquad&amp;bbbab\\ aabba\qquad&amp;abbba\qquad&amp;babba\qquad&amp;bbbba\\ aabbb\qquad&amp;abbbb\qquad&amp;babbb\qquad&amp;bbbbb\\ \end{array}</p> </blockquote> <p>We denote with $[z^n]$ the coefficient of $z^n$ in a series.</p> <blockquote> <p>We conclude</p> <ul> <li>The number of occurrences of wanted words of size $n$ is the coefficient of $z^n$ of $A(z)$ evaluated at $a=b=1$ and given as OEIS sequence <em><a href="http://oeis.org/A050231" rel="nofollow noreferrer">050231</a></em>.</li> </ul> <p>\begin{align*} \left.[z^n]A(z)\right|_{a=b=1} \end{align*}</p> <ul> <li><p>The wanted probability of valid words of size $n$ is the coefficient of $z^n$ of $A(z)$ evaluated at $a=0.7,b=0.3$. 
\begin{align*} \left.[z^n]A(z)\right|_{a=0.7,b=0.3} \end{align*}</p></li> <li><p>We obtain for $n=0$ up to $n=7$ \begin{array}{c|cl} n&amp;\left.[z^n]A(z)\right|_{a=b=1}&amp;\left.[z^n]A(z)\right|_{a=0.7,b=0.3}\\ \hline 0&amp;0&amp;0\\ 1&amp;0&amp;0\\ 2&amp;0&amp;0\\ 3&amp;1&amp;0.343\\ 4&amp;3&amp;0.4459\\ 5&amp;8&amp;0.5488\\ 6&amp;20&amp;0.6517\\ 7&amp;47&amp;0.7193\\ \end{array}</p></li> </ul> </blockquote> Answer 2: <p>I would propose an alternative - maybe easier - approach, trying to give a formula that provides a generalized answer for the probability of three events in a row, given the probability $p $ that the event occurs, as a function of the sample size $n$. As shown below, this approach may also be useful to determine an additional issue of this problem highlighted in one of the comments, i.e. the sample size needed to "guarantee" a success with a given confidence level. </p> <p>Let us call $P (n) $ the probability that our event occurs three times in a row within a sequence of $n $ trials. Also, for each trial, let us denote as Y the case that our event occurs, and as N the case that it does not occur. So, in the specific case described in the OP, we are looking for the probability of getting a triple YYY in a sequence of $n $ elements when $p=0.7$.</p> <p>As a general rule, we can note that, passing from a sample size $n $ to a sample size $n+1$, the probability that our event occurs three times in a row increases only because there is the possibility that the triple YYY, not obtained in the first $n $ trials, occurs as a final triple in the sequence of $n+1$ trials. So, to calculate $P (n+1 )$ from $ P (n) $, we simply have to add the probability that the triple in a row is not obtained until a final sequence NYYY. Note that the N preceding the triple YYY is strictly necessary to satisfy the condition that the triple has not been obtained in the first $n $ trials. </p> <p>We can then start with $n=3$, for which we trivially have $P (3)=p^3$. 
For $n=4$, we have to add to $P (3) $ the probability to get a sequence NYYY, i.e. we have to add $p^3(1-p)$. Similarly, for $n=5$, we have to add to $P (4) $ the probability to get a final sequence NYYY (note that this is not affected by what happens in the first of the five trials), which means that we must add $p^3(1-p)$ as in the previous case. Lastly, for $n=6$, we have again to add to $P (5) $ the probability to get a final sequence NYYY (note that, as above, this is not affected by what happens in the first and in the second of the six trials). This leads to a further increase by $p^3(1-p)$. So, for this initial group of four values of $n$ in our probability progression, we directly obtain the recurrence</p> <p>$$P (n)=P (n-1)+ p^3 (1-p)\,\,\,\, \text {(for n=4 to 6)}$$</p> <p>Accordingly, for the specific case $p=0.7$, this formula gives the initial values </p> <p>\begin{aligned} P(3) &amp; =0.7^3=0.343 \\ P (4) &amp; = 0.7^3+0.3 \cdot 0.7^3 = 0.4459 \\ P (5) &amp; = 0.7^3+2\cdot 0.3 \cdot 0.7^3 = 0.5488 \\ P (6) &amp; = 0.7^3+3\cdot 0.3 \cdot 0.7^3 = 0.6517 \\ \end{aligned} </p> <p>already obtained in a previous answer via a different method.</p> <p>Now let us consider the values of $n \geq 7$. Here something changes, because the fact that the sequence ends with NYYY does not imply that the triple YYY has not occurred before, i.e. in the first $n-4$ trials. So, at each step, the probability of getting a final NYYY must be multiplied by the probability of failure in the first $n-4$ trials, which is $1-P (n-4 ) $. Therefore, for $n=7$ we have to add to $P (6) $ the probability given by $p^3(1-p) [1-P(3)]\,\,$, for $n=8$ we have to add to $P (7) $ the probability given by $p^3(1-p) [1-P(4)] \,\, $, and so on. This leads to the recurrence</p> <p>$$P (n)=P (n-1)+ p^3(1-p)[1-P(n-4)]\,\,\,\, \text{(for n}\, \geq \text {7)}$$</p> <p>We can now try to get a closed form for this last recurrence. Let us set, for simplicity, $p^3=j$ and $(1-p) =k \,\, $. 
The recurrence becomes </p> <p>$$P (n)=P (n-1)+ jk-jk \,P(n-4)\, \,$$</p> <p>Note that the four values of $P (n)$ already determined can be written as $$P (3)=j $$ $$P (4)=j +jk$$ $$P (5)=j +2jk$$ $$P (6)=j +3jk$$</p> <p>so that they contain only terms of degree $j $ or $jk $. Continuing our probability progression and focusing on the second group of four values of $n$ (from $n=7\,\, $ to $n=10 \,\, $), new terms of degree $j^2k $ and $j^2k^2$ appear applying the recurrence above (for the sake of readability, here I directly provide the resulting expressions avoiding the rather tedious step-by-step calculations):</p> <p>\begin{aligned} P (7) &amp; = P(6)+ jk-jk P (3)=j+4jk-j^2k \\ P (8) &amp; = P(7)+ jk-jk P (4)= j+ 5jk -2j^2k-j^2k^2 \\ P (9) &amp; = P(8) + jk-jk P (5)= j+ 6jk -3j^2k-3j^2k^2 \\ P (10) &amp; = P (9) + jk-jk P (6)= j+ 7jk -4j^2k-6j^2k^2 \\ \end{aligned}</p> <p>Some pattern begins to emerge in these expansions. For the specific case $p=0.7$, $$P (7)\approx 0.7193$$ $$P (8)\approx 0.7763$$ $$P (9) \approx 0.8228$$ $$P (10) \approx 0.8586$$</p> <p>We can repeat the procedure, applying the same recurrence and continuing with the third group of four values of $n $ (i.e., $n=11$ to $14$). Again, new terms appear, in this case of degree $j^3k^2$ and $j^3 k^3$ (as above, I directly provide the resulting expressions):</p> <p>\begin{align*} P(11) &amp;=j+8jk-5j^2k-10j^2k^2+j^3k^2 \\ P(12) &amp;=j+9jk-6j^2k-15j^2k^2+3j^3k^2+j^3k^3 \\P(13) &amp; = j+10jk-7j^2k -21j^2k^2+6j^3k^2+4j^3k^3 \\ P (14) &amp; = j+11jk-8j^2k-28j^2k^2+10j^3k^2+10j^3k^3\\ \end{align*}</p> <p>The final pattern is therefore as follows. Our expansions of $P (n) $ include:</p> <ul> <li><p>in the odd positions, $\lfloor (n+1)/4 \rfloor \,\, $ terms of the form $$(-1)^{i-1} \binom{n-3i}{i-1} j^i k^{i-1} \,\, $$ where $$i=1,2... 
\lfloor (n+1)/4 \rfloor \,\, $$ For example, in the expansion of $P (14)$ there are $ \lfloor (14+1)/4 \rfloor =3 \, \,\, $ terms of this form, which are $j $, $-8j^2k $, and $10 j^3k^2$;</p></li> <li><p>in the even positions, $\lfloor n/4 \rfloor \,\, $ terms of the form $$(-1)^{i-1} \binom{n-3i}{i} j^i k^i \,\, $$ where $$i=1,2... \lfloor n/4 \rfloor$$</p> <p>Accordingly, taking again for example the expansion of $P (14)$, there are $ \lfloor 14/4 \rfloor =3 \,$ terms of this form, which are $11jk$, $-28j^2k^2$, and $10 j^3k^3$.</p></li> </ul> <p>Therefore, we obtain the following general expression for $P (n) $:</p> <p>$$P (n)=\sum_{i=1}^{\lfloor (n+1)/4 \rfloor } (-1)^{i-1} \binom{n-3i}{i-1} j^{i}k^{i-1} +\sum_{i=1}^{\lfloor n/4 \rfloor } (-1)^{i-1} \binom{n-3i}{i} j^i k^i $$</p> <p>Recalling that $j=p^3$ and $k=(1-p) $, we finally get</p> <p>$$P (n)=\sum_{i=1}^{\lfloor (n+1)/4 \rfloor } (-1)^{i-1} \binom{n-3i}{i-1} p^{3i}(1-p)^{i-1} + \sum_{i=1}^{\lfloor n/4 \rfloor } (-1)^{i-1} \binom{n-3i}{i} p^{3i} (1-p)^i $$</p> <p>Setting $p=0.7$, we obtain the probability asked for in the OP, which is</p> <p>$$P (n)=\sum_{i=1}^{\lfloor (n+1)/4 \rfloor } (-1)^{i-1} \binom{n-3i}{i-1} 0.7^{3i} \cdot 0.3^{i-1} + \sum_{i=1}^{\lfloor n/4 \rfloor } (-1)^{i-1} \binom{n-3i}{i} 0.7^{3i}\cdot 0.3^i $$</p> <p>Also note that the two sums have the same number of terms when $n \not\equiv -1 \,\, \text {mod}\,\, 4$, whereas the first sum has one term more than the second one when $n \equiv -1 \,\,\text {mod} \, 4$, because in this case $(n+1)/4$ is integer. 
This term is given by</p> <p>$$\displaystyle (-1)^{\frac{(n+1)}{4}-1} \binom{n-\frac{3(n+1)}{4}}{ \frac{(n+1)}{4}-1}\, 0.7^{\frac{3(n+1)}{4}} \, 0.3^{\frac{(n+1)}{4}-1} $$ $$\displaystyle = (-1)^{\frac{(n-3)}{4}} \binom{\frac{(n-3)}{4}}{\frac{(n-3)}{4}}\, 0.7^{\frac{3(n+1)}{4}} \, 0.3^{\frac{(n-3)}{4}} $$ $$\displaystyle = (-1)^{\frac{(n-3)}{4}} \, 0.7^{\frac{3(n+1)}{4}} \, 0.3^{\frac{(n-3)}{4}} $$</p> <p>(note that the exponent of $0.3$ is $\frac{(n+1)}{4}-1=\frac{(n-3)}{4}$, since this term comes from the first sum, whose general term carries $0.3^{i-1}$). So the probability asked for in the OP can alternatively be expressed as</p> <p>$$P (n)=\sum_{i=1}^{\lfloor n/4 \rfloor } (-1)^{i-1} \binom{n-3i}{i}\, 0.7^{3i}\, 0.3^{i} \left( \frac {i}{0.3(n-4 i+1)} +1 \right) + S$$</p> <p>where</p> <p>$$S = \begin{cases} 0 &amp; \mbox{if } n \not\equiv -1 \, \text {mod}\, 4\\ (-1)^{\frac{(n-3)}{4}} \, 0.7^{\frac{3(n+1)}{4}} \, 0.3^{\frac{(n-3)}{4}} &amp; \mbox{if } n \equiv -1 \, \text {mod}\, 4 \end{cases}$$</p> <p>It can also be observed that $|S|$ decreases rapidly for increasing $n$: its value is $\approx 0.0353\, \,\, $ for $n=7$, becoming $\approx 0.00363\, \,\, $ for $n=11$ and $\approx 0.000374\,\,\, $ for $n=15$, so that its contribution to the overall probability quickly becomes small. For large $n$, the expansion without the $S$ term can therefore be considered a reasonable approximation even in the case $n \equiv -1 \, \text {mod}\, 4$. </p> <p>The probabilities of three events in a row given by these formulas for $p=0.7$ and $n=3$ to $10$ have already been provided above. As expected, for increasing $n $, the value of $P (n)$ approaches $1$. The first value for which the probability achieves $95\%$ is $n=15$, since $P (15) \approx 0.9549\,\,\, $.
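As a numerical cross-check of the two-sum expression derived above, here is a short Python sketch (the function name is mine, not part of the original answer):

```python
from math import comb

def p_run3_closed(n, p=0.7):
    """Closed-form double sum for the probability of >= 3 successes in a row."""
    j, k = p**3, 1 - p
    # first ("odd") sum: i = 1 .. floor((n+1)/4), terms C(n-3i, i-1) j^i k^(i-1)
    odd = sum((-1)**(i - 1) * comb(n - 3*i, i - 1) * j**i * k**(i - 1)
              for i in range(1, (n + 1) // 4 + 1))
    # second ("even") sum: i = 1 .. floor(n/4), terms C(n-3i, i) j^i k^i
    even = sum((-1)**(i - 1) * comb(n - 3*i, i) * j**i * k**i
               for i in range(1, n // 4 + 1))
    return odd + even
```

This reproduces the tabulated values, e.g. $P(3)=0.343$, $P(7)\approx 0.7193$ and $P(10)\approx 0.8586$.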
The probability achieves $97.5\%$ and $99\%$ for $n=18$ and $n=22$, respectively, since $P(18) \approx 0.9772\,\,\, $ and $P(22) \approx 0.9909\,\,\, $, as confirmed by WA <a href="http://www.wolframalpha.com/input/?i=sum%20%20%5B(-1)%5E(i-1)%20(((18-3i)%20choose%20i)%200.7%5E(3i)%200.3%5Ei%20(i%2F(0.3(18-4%20i%2B1))%2B1)%20%20%20),%7Bi,%2018%2F4%7D%5D" rel="nofollow noreferrer">here</a> and <a href="http://www.wolframalpha.com/input/?i=sum%20%20%5B(-1)%5E(i-1)%20(((22-3i)%20choose%20i)%200.7%5E(3i)%200.3%5Ei%20(i%2F(0.3(22-4%20i%2B1))%2B1)%20%20%20),%7Bi,%2022%2F4%7D%5D" rel="nofollow noreferrer">here</a>. </p> Answer 3: <p>The issue with looking at opportunities for success is that they are not completely independent. For example, in a row of size 4, if the first three aren't all successes, this greatly decreases the chance that the last three aren't all successes (the only way this can happen is if only the first trial was a failure).</p> <p>You can still get this approach to work (via correcting for events which are dependent via some sort of inclusion-exclusion), but generally a cleaner method for these types of problems is to write down a recurrence (or more generally, model them as a <a href="https://en.wikipedia.org/wiki/Markov_chain" rel="nofollow noreferrer">finite state Markov chain</a>).</p> <p>Let's let $P(N)$ be the probability a row of length $N$ has no 3 consecutive successes. To help us write down a recurrence, we'll break $P(N)$ down into 3 smaller probabilities. Let's let $P(N, 0)$ be the probability a row of length $N$ has no 3 consecutive successes AND that the last attempt was a failure. 
Similarly, let $P(N,1)$ be the probability that a row of length $N$ has no 3 consecutive successes AND that the last attempt was a success, but the second-last attempt was a failure, and finally, let $P(N,2)$ be the probability that a row of length $N$ has no 3 consecutive successes AND the last two attempts were successes, but the third-to-last attempt was a failure. Intuitively, these different cases measure how "close" we are to getting 3 successes in a row. It's not too hard to see that $P(N) = P(N,0) + P(N,1) + P(N,2)$, since if the last three attempts are all successes, you've already won.</p> <p>Now, let's try to write $P(N+1, 0)$ in terms of smaller cases. The probability you have no 3 consecutive successes in a row of length $N+1$ and that your last attempt was a failure is simply the probability that there are no 3 consecutive successes in the first $N$ tries and that you fail the $N+1$st try. This gives us the equation:</p> <p>$$P(N+1,0) = 0.3P(N) = 0.3P(N,0) + 0.3P(N,1) + 0.3P(N,2)$$</p> <p>Via similar logic, we can show that $P(N+1,1)$ and $P(N+1,2)$ satisfy the following equations:</p> <p>$$P(N+1,1) = 0.7P(N, 0)$$</p> <p>and</p> <p>$$P(N+1,2) = 0.7P(N, 1)$$</p> <p>Substituting these into the original equation for $P(N+1,0)$, we get:</p> <p>$$P(N+1, 0) = 0.3P(N,0) + 0.21P(N-1, 0) + 0.147P(N-2,0)$$</p> <p>Now, this is just a standard <a href="https://en.wikipedia.org/wiki/Recurrence_relation" rel="nofollow noreferrer">linear recurrence relation</a>. If $\lambda_1$, $\lambda_2$, and $\lambda_3$ are the roots of $z^3 - 0.3z^2-0.21z-0.147=0$, then there are some constants $a_1, a_2, a_3$ which you can solve for so that</p> <p>$$P(N, 0) = a_1\lambda_1^N + a_2\lambda_2^N + a_3\lambda_3^N$$</p> <p>Since $P(N) = P(N+1,0)/0.3$, this gives a similar formula for $P(N)$ (and ultimately for $1-P(N)$, the probability a success does occur 3 consecutive times). 
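As a quick numerical check, the three-state recurrence above can be iterated directly; the following Python sketch (the function name is mine) generalizes the constants $0.3$ and $0.7$ to an arbitrary success probability $p$:

```python
def p_run3_markov(n, p=0.7):
    """Probability of at least 3 consecutive successes in n trials,
    iterating the states P(N,0), P(N,1), P(N,2) described above."""
    q = 1 - p
    # state after one trial: last was a failure / one trailing success / two
    s0, s1, s2 = q, p, 0.0
    for _ in range(n - 1):
        s0, s1, s2 = q * (s0 + s1 + s2), p * s0, p * s1
    # s0 + s1 + s2 is the probability of NO run of 3 successes
    return 1 - (s0 + s1 + s2)
```

For $p=0.7$ this reproduces the values $0.343$, $0.4459$, $0.5488$, $0.6517$ for $n=3,\dots,6$ quoted in the other answers.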
</p> Answer 4: <p>Let $p=0.7$ be the probability of success, $\mathrm{f}(n)$ be the probability of a sequence of length $n$ having 3 consecutive successes and $\mathrm{g}(n)=1-\mathrm{f}(n)$, where $\mathrm{g}(n)$ corresponds to the probability of a sequence of length $n$ having at most 2 consecutive successes.</p> <p>It holds that $\mathrm{g}(0) = \mathrm{g}(1) = \mathrm{g}(2) = 1$.</p> <p>Then $\mathrm{g}(n) = (1-p)\mathrm{g}(n-1) + (1-p)p\mathrm{g}(n-2) + (1-p)p^2\mathrm{g}(n-3)$.</p> <p>This holds because all sequences of length $n$ that have at most 2 consecutive successes can be constructed from: (1) all such sequences of length $n-1$ suffixed by a failure; plus (2) all such sequences of length $n-2$ suffixed by a failure and a success; plus (3) all such sequences of length $n-3$ suffixed by a failure and two successes.</p> Answer 5: <p>I saw that the answers to the question include either regex (Automata Theory) or linear recurrences (Linear Algebra), so for those interested I decided to post <a href="https://people.math.carleton.ca/%7Ebsteinbg/classes/automata2009/generatingfunctions.pdf" rel="nofollow noreferrer">this paper</a> on generating functions that neatly explains the interconnection between them.</p> Answer 6: <blockquote> <p>Let's denote the event under consideration with $a$ \begin{align*} P(X=a)=0.7 \end{align*} and we denote the complementary event with $b$.</p> </blockquote> <p>We are looking for the words of length $n$ which do not contain three or more consecutive $a$'s, together with their probability of occurrence.
The result is $1$ minus this probability.</p> <blockquote> <p>We can describe the set of these invalid words as the words built from the alphabet $V=\{a,b\}$ which contain at most two consecutive $a$'s.</p> <p>These are words</p> <ul> <li><p>starting with zero or more $b$'s: $$b^*$$</p></li> <li><p>followed by zero or more occurrences of $a$ or $aa$, each followed by one or more $b$'s: $$(ab^+|aab^+)^*$$</p></li> <li><p>and terminating with zero, one or two $a$'s: $$(\varepsilon|a|aa)$$</p></li> </ul> <p>We obtain</p> <p>\begin{align*} b^*(ab^+|aab^+)^*(\varepsilon|a|aa)\tag{1} \end{align*}</p> </blockquote> <p>The regular expression (1) generates all invalid words in a <em>unique</em> manner. In such cases we can use it to derive a generating function $$\sum_{n=0}^\infty a_n z^n$$ with $a_n$ giving the number of invalid words of length $n$.</p> <p>In order to do so, all we need to know is the <em><a href="https://en.wikipedia.org/wiki/Geometric_series#Geometric_power_series" rel="nofollow noreferrer">geometric series expansion</a></em>, since the star operator \begin{align*} a^*=\left(\varepsilon|a|a^2|a^3|\cdots\right)\qquad\text{ translates to }\qquad 1+z+z^2+z^3+\cdots=\frac{1}{1-z} \end{align*}</p> <p>Accordingly, $a^+=aa^*$ translates to $\frac{z}{1-z}$, and alternatives like $(\varepsilon|a|aa)$ can be written as $1+z+z^2$.</p> <blockquote> <p>We translate the regular expression (1) into a generating function (mixing the notation somewhat in order to show some intermediate steps).</p> <p>Since we want to calculate the probabilities of occurrence of $P(X=a)$, we keep track of $a$ and $b$ by retaining them as corresponding factors in the generating function.</p> <p>\begin{align*} b^*\left(ab^+|aab^+\right)^*(\varepsilon|a|aa) &amp;\longrightarrow \quad \frac{1}{1-bz}\left(\left.\frac{abz^2}{1-bz}\right|\frac{a^2bz^3}{1-bz}\right)^*\left(1+az+a^2z^2\right)\\ &amp;\longrightarrow \quad \frac{1}{1-bz}\left(\frac{abz^2+a^2bz^3}{1-bz}\right)^*(1+az+a^2z^2)\\
&amp;\longrightarrow \quad \frac{1}{1-bz}\cdot\frac{1}{1-\frac{abz^2+a^2bz^3}{1-bz}}(1+az+a^2z^2)\\ &amp;\quad\quad=\frac{1+az+a^2z^2}{1-bz-abz^2-a^2bz^3}\tag{2} \end{align*}</p> </blockquote> <p>$$ $$</p> <blockquote> <p>We conclude: The number of invalid words is given by (2). So, the number of valid words is the number of all words minus the number of invalid words. We obtain the generating function $A(z)$ \begin{align*} A(z)&amp;=\sum_{n=0}^\infty(a+b)^nz^n-\frac{1+az+a^2z^2}{1-bz-abz^2-a^2bz^3}\\ &amp;=\frac{1}{1-(a+b)z}-\frac{1+az+a^2z^2}{1-bz-abz^2-a^2bz^3}\\ &amp;=\frac{a^3z^3}{(1-(a+b)z)(1-bz-abz^2-a^2bz^3)}\\ &amp;=a^3z^3+a^3(a+2b)z^4+a^3(\color{blue}{1}a^2+\color{blue}{4}ab+\color{blue}{3}b^2)z^5\\ &amp;\qquad +a^3(a+b)^2(a+4b)z^6+a^3(a^4+7a^3b+18a^2b^2+16ab^3+5b^4)z^7+\cdots \end{align*}</p> <p>The expansion was done with the help of Wolfram Alpha. We see that e.g. the number of valid words of length $5$ is $\color{blue}{1}+\color{blue}{4}+\color{blue}{3}=8$.</p> </blockquote> <p>$$ $$</p> <blockquote> <p>Out of $2^5=32$ binary words of length $5$ there are $8$ valid words, which are marked $\color{blue}{\text{blue}}$ in the table below.</p> <p>We obtain \begin{array}{cccc} \color{blue}{aaa}aa\qquad&amp;ab\color{blue}{aaa}\qquad&amp;b\color{blue}{aaa}a\qquad&amp;bb\color{blue}{aaa}\\ \color{blue}{aaa}ab\qquad&amp;abaab\qquad&amp;b\color{blue}{aaa}b\qquad&amp;bbaab\\ \color{blue}{aaa}ba\qquad&amp;ababa\qquad&amp;baaba\qquad&amp;bbaba\\ \color{blue}{aaa}bb\qquad&amp;ababb\qquad&amp;baabb\qquad&amp;bbabb\\ aabaa\qquad&amp;abbaa\qquad&amp;babaa\qquad&amp;bbbaa\\ aabab\qquad&amp;abbab\qquad&amp;babab\qquad&amp;bbbab\\ aabba\qquad&amp;abbba\qquad&amp;babba\qquad&amp;bbbba\\ aabbb\qquad&amp;abbbb\qquad&amp;babbb\qquad&amp;bbbbb\\ \end{array}</p> </blockquote> <p>We denote by $[z^n]$ the coefficient of $z^n$ in a series.</p> <blockquote> <p>We conclude</p> <ul> <li>The number of occurrences of wanted words of size $n$ is the coefficient of $z^n$ of $A(z)$ evaluated
at $a=b=1$ and given as OEIS sequence <em><a href="http://oeis.org/A050231" rel="nofollow noreferrer">A050231</a></em>.</li> </ul> <p>\begin{align*} \left.[z^n]A(z)\right|_{a=b=1} \end{align*}</p> <ul> <li><p>The wanted probability of valid words of size $n$ is the coefficient of $z^n$ of $A(z)$ evaluated at $a=0.7,b=0.3$. \begin{align*} \left.[z^n]A(z)\right|_{a=0.7,b=0.3} \end{align*}</p></li> <li><p>We obtain for $n=0$ up to $n=7$ \begin{array}{c|cl} n&amp;\left.[z^n]A(z)\right|_{a=b=1}&amp;\left.[z^n]A(z)\right|_{a=0.7,b=0.3}\\ \hline 0&amp;0&amp;0\\ 1&amp;0&amp;0\\ 2&amp;0&amp;0\\ 3&amp;1&amp;0.343\\ 4&amp;3&amp;0.4459\\ 5&amp;8&amp;0.5488\\ 6&amp;20&amp;0.6517\\ 7&amp;47&amp;0.7193\\ \end{array}</p></li> </ul> </blockquote> Answer 7: <p>I would propose an alternative - maybe easier - approach, trying to give a formula that provides a generalized answer for the probability of three events in a row, given the probability $p $ that the event occurs, as a function of the sample size $n$. As shown below, this approach may also be useful to address an additional issue of this problem highlighted in one of the comments, i.e. the sample size needed to "guarantee" a success with a given confidence level. </p> <p>Let us call $P (n) $ the probability that our event occurs three times in a row within a sequence of $n $ trials. Also, for each trial, let us denote as Y the case that our event occurs, and as N the case that it does not occur. So, in the specific case described in the OP, we are looking for the probability of getting a triple YYY in a sequence of $n $ elements when $p=0.7$.</p> <p>As a general rule, we can note that, passing from a sample size $n $ to a sample size $n+1$, the probability that our event occurs three times in a row increases only because there is the possibility that the triple YYY, not obtained in the first $n $ trials, occurs as a final triple in the sequence of $n+1$ trials.
So, to calculate $P (n+1 )$ from $ P (n) $, we simply have to add the probability that the triple in a row first appears as a final sequence NYYY. Note that the N preceding the triple YYY is strictly necessary to satisfy the condition that the triple has not been obtained in the first $n $ trials. </p> <p>We can then start with $n=3$, for which we trivially have $P (3)=p^3$. For $n=4$, we have to add to $P (3) $ the probability of getting a sequence NYYY, i.e. we have to add $p^3(1-p)$. Similarly, for $n=5$, we have to add to $P (4) $ the probability of getting a final sequence NYYY (note that this is not affected by what happens in the first of the five trials), which means that we must add $p^3(1-p)$ as in the previous case. Lastly, for $n=6$, we again have to add to $P (5) $ the probability of getting a final sequence NYYY (note that, as above, this is not affected by what happens in the first and in the second of the six trials). This leads to a further increase by $p^3(1-p)$. So, for this initial group of four values of $n$ in our probability progression, we directly obtain the recurrence</p> <p>$$P (n)=P (n-1)+ p^3 (1-p)\,\,\,\, \text {(for n=4 to 6)}$$</p> <p>Accordingly, for the specific case $p=0.7$, this formula gives the initial values </p> <p>\begin{aligned} P(3) &amp; =0.7^3=0.343 \\ P (4) &amp; = 0.7^3+0.3 \cdot 0.7^3 = 0.4459 \\ P (5) &amp; = 0.7^3+2\cdot 0.3 \cdot 0.7^3 = 0.5488 \\ P (6) &amp; = 0.7^3+3\cdot 0.3 \cdot 0.7^3 = 0.6517 \\ \end{aligned} </p> <p>already obtained in a previous answer via a different method.</p> <p>Now let us consider the values of $n \geq 7$. Here something changes, because the fact that the sequence ends with NYYY does not imply that the triple YYY has not occurred before, i.e. in the first $n-4$ trials. So, at each step, the probability of getting a final NYYY must be multiplied by the probability of failure in the first $n-4$ trials, which is $1-P (n-4 ) $.
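The bookkeeping just described is easy to iterate numerically. Below is a short Python sketch (the function and variable names are mine, not part of the original derivation) that adds the NYYY increment at each step, treating $P(n-4)$ as $0$ while $n \leq 6$, since a triple cannot end before trial $3$:

```python
def p_run3(n_max, p=0.7):
    """P(n) = probability of >= 3 successes in a row within n trials,
    built up by adding the probability that a first triple ends at trial n."""
    j, k = p**3, 1 - p                       # j = p^3, k = 1 - p
    P = {3: j}
    for n in range(4, n_max + 1):
        # P(n-4) is taken as 0 for n <= 6: no room for an earlier triple
        P[n] = P[n - 1] + j * k * (1 - P.get(n - 4, 0.0))
    return P
```

For $p=0.7$ this reproduces $P(3)=0.343$ through $P(10)\approx 0.8586$ as listed in this answer.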
Therefore, for $n=7$ we have to add to $P (6) $ the probability given by $p^3(1-p) [1-P(3)]\,\,$, for $n=8$ we have to add to $P (7) $ the probability given by $p^3(1-p) [1-P(4)] \,\, $, and so on. This leads to the recurrence</p> <p>$$P (n)=P (n-1)+ p^3(1-p)[1-P(n-4)]\,\,\,\, \text{(for n}\, \geq \text {7)}$$</p> <p>We can now try to get a closed form for this last recurrence. Let us set, for simplicity, $p^3=j$ and $(1-p) =k \,\, $. The recurrence becomes </p> <p>$$P (n)=P (n-1)+ jk-jk \,P(n-4)\, \,$$</p> <p>Note that the four values of $P (n)$ already determined can be written as $$P (3)=j $$ $$P (4)=j +jk$$ $$P (5)=j +2jk$$ $$P (6)=j +3jk$$</p> <p>so that they contain only terms of degree $j $ or $jk $. Continuing our probability progression and focusing on the second group of four values of $n$ (from $n=7\,\, $ to $n=10 \,\, $), new terms of degree $j^2k $ and $j^2k^2$ appear applying the recurrence above (for the sake of readability, here I directly provide the resulting expressions avoiding the rather tedious step-by-step calculations):</p> <p>\begin{aligned} P (7) &amp; = P(6)+ jk-jk P (3)=j+4jk-j^2k \\ P (8) &amp; = P(7)+ jk-jk P (4)= j+ 5jk -2j^2k-j^2k^2 \\ P (9) &amp; = P(8) + jk-jk P (5)= j+ 6jk -3j^2k-3j^2k^2 \\ P (10) &amp; = P (9) + jk-jk P (6)= j+ 7jk -4j^2k-6j^2k^2 \\ \end{aligned}</p> <p>Some pattern begins to emerge in these expansions. For the specific case $p=0.7$, $$P (7)\approx 0.7193$$ $$P (8)\approx 0.7763$$ $$P (9) \approx 0.8228$$ $$P (10) \approx 0.8586$$</p> <p>We can repeat the procedure, applying the same recurrence and continuing with the third group of four values of $n $ (i.e., $n=11$ to $14$). 
Again, new terms appear, in this case of degree $j^3k^2$ and $j^3 k^3$ (as above, I directly provide the resulting expressions):</p> <p>\begin{align*} P(11) &amp;=j+8jk-5j^2k-10j^2k^2+j^3k^2 \\ P(12) &amp;=j+9jk-6j^2k-15j^2k^2+3j^3k^2+j^3k^3 \\P(13) &amp; = j+10jk-7j^2k -21j^2k^2+6j^3k^2+4j^3k^3 \\ P (14) &amp; = j+11jk-8j^2k-28j^2k^2+10j^3k^2+10j^3k^3\\ \end{align*}</p> <p>The final pattern is therefore as follows. Our expansions of $P (n) $ include:</p> <ul> <li><p>in the odd positions, $\lfloor (n+1)/4 \rfloor \,\, $ terms of the form $$(-1)^{i-1} \binom{n-3i}{i-1} j^i k^{i-1} \,\, $$ where $$i=1,2,\ldots, \lfloor (n+1)/4 \rfloor \,\, $$ For example, in the expansion of $P (14)$ there are $ \lfloor (14+1)/4 \rfloor =3 \, \,\, $ terms of this form, which are $j $, $-8j^2k $, and $10 j^3k^2$;</p></li> <li><p>in the even positions, $\lfloor n/4 \rfloor \,\, $ terms of the form $$(-1)^{i-1} \binom{n-3i}{i} j^i k^i \,\, $$ where $$i=1,2,\ldots, \lfloor n/4 \rfloor$$</p> <p>Accordingly, taking again for example the expansion of $P (14)$, there are $ \lfloor 14/4 \rfloor =3 \,$ terms of this form, which are $11jk$, $-28j^2k^2$, and $10 j^3k^3$.</p></li> </ul> <p>Therefore, we obtain the following general expression for $P (n) $:</p> <p>$$P (n)=\sum_{i=1}^{\lfloor (n+1)/4 \rfloor } (-1)^{i-1} \binom{n-3i}{i-1} j^{i}k^{i-1} +\sum_{i=1}^{\lfloor n/4 \rfloor } (-1)^{i-1} \binom{n-3i}{i} j^i k^i $$</p> <p>Recalling that $j=p^3$ and $k=(1-p) $, we finally get</p> <p>$$P (n)=\sum_{i=1}^{\lfloor (n+1)/4 \rfloor } (-1)^{i-1} \binom{n-3i}{i-1} p^{3i}(1-p)^{i-1} + \sum_{i=1}^{\lfloor n/4 \rfloor } (-1)^{i-1} \binom{n-3i}{i} p^{3i} (1-p)^i $$</p> <p>Setting $p=0.7$, we obtain the probability asked for in the OP, which is</p> <p>$$P (n)=\sum_{i=1}^{\lfloor (n+1)/4 \rfloor } (-1)^{i-1} \binom{n-3i}{i-1} 0.7^{3i} \cdot 0.3^{i-1} + \sum_{i=1}^{\lfloor n/4 \rfloor } (-1)^{i-1} \binom{n-3i}{i} 0.7^{3i}\cdot 0.3^i $$</p> <p>Also note that the two sums have the same number of terms when $n \not\equiv -1 \,\, 
\text {mod}\,\, 4$, whereas the first sum has one term more than the second one when $n \equiv -1 \,\,\text {mod} \, 4$, because in this case $(n+1)/4$ is integer. This term is given by</p> <p>$$\displaystyle (-1)^{\frac{(n+1)}{4}-1} \binom{n-\frac{3(n+1)}{4}}{ \frac{(n+1)}{4}-1}\, 0.7^{\frac{3(n+1)}{4}} \, 0.3^{\frac{(n+1)}{4}} $$ $$\displaystyle = (-1)^{\frac{(n-3)}{4}} \binom{\frac{(n-3)}{4}}{\frac{(n-3)}{4}}\, 0.7^{\frac{3(n+1)}{4}} \, 0.3^{\frac{(n+1)}{4}} $$ $$\displaystyle = (-1)^{\frac{(n-3)}{4}} \, 0.7^{\frac{3(n+1)}{4}} \, 0.3^{\frac{(n+1)}{4}} $$</p> <p>So the probability asked in the OP can be alternatively expressed as</p> <p>$$P (n)=\sum_{i=1}^{\lfloor n/4 \rfloor } (-1)^{i-1} \binom{n-3i}{i}\, 0.7^{3i}\, 0.3^{i} \left( \frac {i}{0.3(n-4 i+1)} +1 \right) + S$$</p> <p>where</p> <p>$$S = \begin{cases} 0 &amp; \mbox{if } n \not\equiv -1 \, \text {mod}\, 4\\ (-1)^{\frac{(n-3)}{4}} \, 0.7^{\frac{3(n+1)}{4}} \, 0.3^{\frac{(n+1)}{4}} &amp; \mbox{if } n \equiv -1 \, \text {mod}\, 4 \end{cases}$$</p> <p>It could also be observed that $|S|$ is relatively small and rapidly decreases for increasing $n$: its value is $\approx 0.0106\, \,\, $ for $n=7$, becoming $\approx 0.00109\, \,\, $ for $n=11$ and $\approx 0.000112\,\,\, $ for $n=15$, so that its contribution to the overall probability is nearly negligible. The last expansion without the $S$ term can therefore be considered a valid approximation even for the case $n \equiv -1 \, \text {mod}\, 4$. </p> <p>The probabilities of three events in a row given by these formulas for $p=0.7$ and $n=3$ to $10$ have already been provided above. As expected, for increasing $n $, the value of $P (n)$ approaches $1$. 
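Before reading off threshold values, the recurrence can be checked mechanically. The sketch below (Python; the code and function names are mine, not part of the original answer) iterates the recurrence in the form $P(n)=P(n-1)+p^3(1-p)\left[1-P(n-4)\right]$, the indexing that matches the worked examples $P(7)=P(6)+jk-jk\,P(3)$ and so on; setting $P(m)=0$ for $m&lt;3$ makes the same update also reproduce the initial group $n=4$ to $6$. It then compares the result with brute-force enumeration over all $2^n$ outcome sequences:

```python
from itertools import product

def p_run3(n, p=0.7):
    """P(n) = probability of at least 3 consecutive successes in n trials,
    via P(n) = P(n-1) + p^3*(1-p)*(1 - P(n-4)), with P(3) = p^3 and
    P(m) = 0 for m < 3 (which also covers the initial group n = 4..6)."""
    P = {m: 0.0 for m in range(0, 3)}
    P[3] = p ** 3
    for m in range(4, n + 1):
        P[m] = P[m - 1] + p ** 3 * (1 - p) * (1 - P[m - 4])
    return P[n]

def p_run3_brute(n, p=0.7):
    """Same probability by direct enumeration of all 2^n outcome sequences."""
    total = 0.0
    for seq in product((True, False), repeat=n):
        if any(all(seq[i:i + 3]) for i in range(n - 2)):
            weight = 1.0
            for success in seq:
                weight *= p if success else 1 - p
            total += weight
    return total

# recurrence and enumeration agree to floating-point accuracy
for n in range(3, 13):
    assert abs(p_run3(n) - p_run3_brute(n)) < 1e-12
print(round(p_run3(7), 4))                                 # 0.7193
print(min(n for n in range(3, 40) if p_run3(n) >= 0.95))   # 15
```

The printed threshold $n=15$ matches the $95\%$ sample size quoted in the answer.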
The first value for which the probability achieves $95\%$ is $n=15$, since $P (15) \approx 0.9549\,\,\, $ (cf. the WA computation <a href="http://www.wolframalpha.com/input/?i=0.7%5E(12)%200.3%5E(4)%20%2B%20%20sum%20%20%5B(-1)%5E(i-1)%20(((15-3i)%20choose%20i)%200.7%5E(3i)%200.3%5Ei%20(i%2F(0.3(15-4%20i%2B1))%2B1)%20%20%20),%7Bi,%2015%2F4%7D%5D" rel="nofollow noreferrer">here</a>). The probability achieves $97.5\%$ and $99\%$ for $n=18$ and $n=22$, respectively, since $P(18) \approx 0.9772\,\,\, $ and $P(22) \approx 0.9909\,\,\, $, as confirmed by WA <a href="http://www.wolframalpha.com/input/?i=sum%20%20%5B(-1)%5E(i-1)%20(((18-3i)%20choose%20i)%200.7%5E(3i)%200.3%5Ei%20(i%2F(0.3(18-4%20i%2B1))%2B1)%20%20%20),%7Bi,%2018%2F4%7D%5D" rel="nofollow noreferrer">here</a> and <a href="http://www.wolframalpha.com/input/?i=sum%20%20%5B(-1)%5E(i-1)%20(((22-3i)%20choose%20i)%200.7%5E(3i)%200.3%5Ei%20(i%2F(0.3(22-4%20i%2B1))%2B1)%20%20%20),%7Bi,%2022%2F4%7D%5D" rel="nofollow noreferrer">here</a>. </p> Answer 13: <p>The issue with looking at opportunities for success is that they are not completely independent. For example, in a row of size 4, if the first three aren't all successes, this greatly decreases the chance that the last three are all successes (the only way this can happen is if only the first trial was a failure).</p> <p>You can still get this approach to work (via correcting for events which are dependent via some sort of inclusion-exclusion), but generally a cleaner method for these types of problems is to write down a recurrence (or more generally, model them as a <a href="https://en.wikipedia.org/wiki/Markov_chain" rel="nofollow noreferrer">finite state Markov chain</a>).</p> <p>Let's let $P(N)$ be the probability a row of length $N$ has no 3 consecutive successes. To help us write down a recurrence, we'll break $P(N)$ down into 3 smaller probabilities. 
Let's let $P(N, 0)$ be the probability a row of length $N$ has no 3 consecutive successes AND that the last attempt was a failure. Similarly, let $P(N,1)$ be the probability that a row of length $N$ has no 3 consecutive successes AND that the last attempt was a success, but the second-last attempt was a failure, and finally, let $P(N,2)$ be the probability that a row of length $N$ has no 3 consecutive successes AND the last two attempts were successes, but the third-to-last attempt was a failure. Intuitively, these different cases measure how "close" we are to getting 3 successes in a row. It's not too hard to see that $P(N) = P(N,0) + P(N,1) + P(N,2)$, since if the last three attempts are all successes, you've already won.</p> <p>Now, let's try to write $P(N+1, 0)$ in terms of smaller cases. The probability you have no 3 consecutive successes in a row of length $N+1$ and that your last attempt was a failure is simply the probability that there are no 3 consecutive successes in the first $N$ tries and that you fail the $N+1$st try. This gives us the equation:</p> <p>$$P(N+1,0) = 0.3P(N) = 0.3P(N,0) + 0.3P(N,1) + 0.3P(N,2)$$</p> <p>Via similar logic, we can show that $P(N+1,1)$ and $P(N+1,2)$ satisfy the following equations:</p> <p>$$P(N+1,1) = 0.7P(N, 0)$$</p> <p>and</p> <p>$$P(N+1,2) = 0.7P(N, 1)$$</p> <p>Substituting these into the original equation for $P(N+1,0)$, we get:</p> <p>$$P(N+1, 0) = 0.3P(N,0) + 0.21P(N-1, 0) + 0.147P(N-2,0)$$</p> <p>Now, this is just a standard <a href="https://en.wikipedia.org/wiki/Recurrence_relation" rel="nofollow noreferrer">linear recurrence relation</a>. 
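As a numerical sanity check of this scalar recurrence, here is a minimal sketch (Python; the function name and base values are mine, not part of the original answer). The base values are $P(1,0)=P(2,0)=P(3,0)=0.3$, since in each case the preceding prefix trivially contains no 3 consecutive successes, so only the final failure contributes a factor $0.3$; iterating then recovers $P(N)=P(N+1,0)/0.3$:

```python
def no_triple_prob(N, p=0.7):
    """P(N): probability of NO 3 consecutive successes in N trials, from
    P(N+1,0) = 0.3*P(N,0) + 0.21*P(N-1,0) + 0.147*P(N-2,0),
    followed by P(N) = P(N+1,0) / 0.3."""
    q = 1 - p                   # 0.3, the failure probability
    a, b, c = q, q, q           # P(1,0), P(2,0), P(3,0)
    for _ in range(N - 2):      # after the loop, c holds P(N+1, 0)
        a, b, c = b, c, q * c + q * p * b + q * p * p * a
    return c / q

# the complements 1 - P(N) reproduce the values from the other answers
for N, expected in [(3, 0.343), (4, 0.4459), (5, 0.5488), (6, 0.6517)]:
    assert abs((1 - no_triple_prob(N)) - expected) < 1e-12
print(round(1 - no_triple_prob(7), 4))   # 0.7193
```

With $p=0.7$ the coefficients $q$, $qp$, $qp^2$ are exactly the $0.3$, $0.21$, $0.147$ appearing in the recurrence above.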
If $\lambda_1$, $\lambda_2$, and $\lambda_3$ are the roots of $z^3 - 0.3z^2-0.21z-0.147=0$, then there are some constants $a_1, a_2, a_3$ which you can solve for so that</p> <p>$$P(N, 0) = a_1\lambda_1^N + a_2\lambda_2^N + a_3\lambda_3^N$$</p> <p>Since $P(N) = P(N+1,0)/0.3$, this gives a similar formula for $P(N)$ (and ultimately for $1-P(N)$, the probability a success does occur 3 consecutive times). </p> Answer 14: <p>Let $p=0.7$ be the probability of success, $\mathrm{f}(n)$ be the probability of a sequence of length $n$ having 3 consecutive successes and $\mathrm{g}(n)=1-\mathrm{f}(n)$, where $\mathrm{g}(n)$ corresponds to the probability of a sequence of length $n$ having at most 2 consecutive successes.</p> <p>It holds $\mathrm{g}(0) = \mathrm{g}(1) = \mathrm{g}(2) = 1$.</p> <p>Then $\mathrm{g}(n) = (1-p)\mathrm{g}(n-1) + (1-p)p\mathrm{g}(n-2) + (1-p)p^2\mathrm{g}(n-3)$ for $n \geq 3$.</p> <p>This holds because all sequences of length $n$ that have at most 2 consecutive successes can be constructed from (1) all such sequences of length $n-1$ suffixed by a failure; plus (2) all such sequences of length $n-2$ suffixed by a failure and a success; plus (3) all such sequences of length $n-3$ suffixed by a failure and two successes.</p> Answer 15: <p>I saw that the answers to the question include either regex (Automata Theory) or linear recurrences (Linear Algebra), so for those interested I decided to post <a href="https://people.math.carleton.ca/%7Ebsteinbg/classes/automata2009/generatingfunctions.pdf" rel="nofollow noreferrer">this paper</a> on generating functions that neatly explains the interconnection between them.</p> Answer 16: <blockquote> <p>Let's denote the event under consideration with $a$ \begin{align*} P(X=a)=0.7 \end{align*} and we denote the complementary event with $b$.</p> </blockquote> <p>We are looking for the words of length $n$ which do not contain three or more consecutive $a$'s, and their probability of occurrence. 
The result is $1$ minus this probability.</p> <blockquote> <p>We can describe the set of these invalid words as the words built from the alphabet $V=\{a,b\}$ which contain at most two consecutive $a$'s.</p> <p>These are words</p> <ul> <li><p>starting with zero or more $b$'s: $$b^*$$</p></li> <li><p>followed by zero or more occurrences of $a$ or $aa$ each followed by one or more $b$'s: $$(ab^+|aab^+)^*$$</p></li> <li><p>and terminating with zero, one or two $a$'s $$(\varepsilon|a|aa)$$</p></li> </ul> <p>We obtain</p> <p>\begin{align*} b^*(ab^+|aab^+)^*(\varepsilon|a|aa)\tag{1} \end{align*}</p> </blockquote> <p>The regular expression (1) generates all invalid words in a <em>unique</em> manner. In such cases we can use it to derive a generating function $$\sum_{n=0}^\infty a_n z^n$$ with $a_n$ giving the number of invalid words of length $n$.</p> <p>In order to do so all we need to know is the <em><a href="https://en.wikipedia.org/wiki/Geometric_series#Geometric_power_series" rel="nofollow noreferrer">geometric series expansion</a></em> since the <em>star</em> operator \begin{align*} a^*=\left(\varepsilon|a|a^2|a^3|\cdots\right)\qquad\text{ translates to }\qquad 1+z+z^2+z^3+\cdots=\frac{1}{1-z} \end{align*}</p> <p>Accordingly $a^+=aa^*$ translates to $\frac{z}{1-z}$ and alternatives like $(\varepsilon|a|aa)$ can be written as $1+z+z^2$.</p> <blockquote> <p>We translate the regular expression (1) into a generating function (mixing the symbolic and the analytic notation somewhat to show some intermediate steps).</p> <p>Since we want to calculate the probabilities of occurrence of $P(X=a)$ we keep track of $a$ and $b$ by treating them as corresponding factors in the generating function.</p> <p>\begin{align*} b^*\left(ab^+|aab^+\right)^*(\varepsilon|a|aa) &amp;\longrightarrow \quad \frac{1}{1-bz}\left(\left.\frac{abz^2}{1-bz}\right|\frac{a^2bz^3}{1-bz}\right)^*\left(1+az+a^2z^2\right)\\ &amp;\longrightarrow \quad \frac{1}{1-bz}\left(\frac{abz^2+a^2bz^3}{1-bz}\right)^*(1+az+a^2z^2)\\ 
&amp;\longrightarrow \quad \frac{1}{1-bz}\cdot\frac{1}{1-\frac{abz^2+a^2bz^3}{1-bz}}(1+az+a^2z^2)\\ &amp;\quad\quad=\frac{1+az+a^2z^2}{1-bz-abz^2-a^2bz^3}\tag{2} \end{align*}</p> </blockquote> <p>$$ $$</p> <blockquote> <p>We conclude: The number of invalid words is given by (2). So, the number of valid words is the number of all words minus the number of invalid words. We obtain the generating function $A(z)$ \begin{align*} A(z)&amp;=\sum_{n=0}^\infty(a+b)^nz^n-\frac{1+az+a^2z^2}{1-bz-abz^2-a^2bz^3}\\ &amp;=\frac{1}{1-(a+b)z}-\frac{1+az+a^2z^2}{1-bz-abz^2-a^2bz^3}\\ &amp;=\frac{a^3z^3}{(1-(a+b)z)(1-bz-abz^2-a^2bz^3)}\\ &amp;=a^3z^3+a^3(a+2b)z^4+a^3(\color{blue}{1}a^2+\color{blue}{4}ab+\color{blue}{3}b^2)z^5\\ &amp;\qquad +a^3(a+b)^2(a+4b)z^6+a^3(a^4+7a^3b+18a^2b^2+16ab^3+5b^4)z^7+\cdots \end{align*}</p> <p>The expansion was done with the help of Wolfram Alpha. We see that e.g. the number of valid words of length $5$ is $\color{blue}{1}+\color{blue}{4}+\color{blue}{3}=8$.</p> </blockquote> <p>$$ $$</p> <blockquote> <p>Out of $2^5=32$ binary words of length $5$ there are $8$ valid words which are marked $\color{blue}{\text{blue}}$ in the table below.</p> <p>We obtain \begin{array}{cccc} \color{blue}{aaa}aa\qquad&amp;ab\color{blue}{aaa}\qquad&amp;b\color{blue}{aaa}a\qquad&amp;bb\color{blue}{aaa}\\ \color{blue}{aaa}ab\qquad&amp;abaab\qquad&amp;b\color{blue}{aaa}b\qquad&amp;bbaab\\ \color{blue}{aaa}ba\qquad&amp;ababa\qquad&amp;baaba\qquad&amp;bbaba\\ \color{blue}{aaa}bb\qquad&amp;ababb\qquad&amp;baabb\qquad&amp;bbabb\\ aabaa\qquad&amp;abbaa\qquad&amp;babaa\qquad&amp;bbbaa\\ aabab\qquad&amp;abbab\qquad&amp;babab\qquad&amp;bbbab\\ aabba\qquad&amp;abbba\qquad&amp;babba\qquad&amp;bbbba\\ aabbb\qquad&amp;abbbb\qquad&amp;babbb\qquad&amp;bbbbb\\ \end{array}</p> </blockquote> <p>We denote with $[z^n]$ the coefficient of $z^n$ in a series.</p> <blockquote> <p>We conclude</p> <ul> <li>The number of occurrences of wanted words of size $n$ is the coefficient of $z^n$ of $A(z)$ evaluated 
at $a=b=1$ and given as OEIS sequence <em><a href="http://oeis.org/A050231" rel="nofollow noreferrer">050231</a></em>.</li> </ul> <p>\begin{align*} \left.[z^n]A(z)\right|_{a=b=1} \end{align*}</p> <ul> <li><p>The wanted probability of valid words of size $n$ is the coefficient of $z^n$ of $A(z)$ evaluated at $a=0.7,b=0.3$. \begin{align*} \left.[z^n]A(z)\right|_{a=0.7,b=0.3} \end{align*}</p></li> <li><p>We obtain for $n=0$ up to $n=7$ \begin{array}{c|cl} n&amp;\left.[z^n]A(z)\right|_{a=b=1}&amp;\left.[z^n]A(z)\right|_{a=0.7,b=0.3}\\ \hline 0&amp;0&amp;0\\ 1&amp;0&amp;0\\ 2&amp;0&amp;0\\ 3&amp;1&amp;0.343\\ 4&amp;3&amp;0.4459\\ 5&amp;8&amp;0.5488\\ 6&amp;20&amp;0.6517\\ 7&amp;47&amp;0.7193\\ \end{array}</p></li> </ul> </blockquote> Answer 17: <p>I would propose an alternative - maybe easier - approach, trying to give a formula that provides a generalized answer for the probability of three events in a row, given the probability $p $ that the event occurs, as a function of the sample size $n$. As shown below, this approach may also be useful to determine an additional issue of this problem highlihted in one of the comments, i.e. the sample size needed to "guarantee" a success with a given confidence level. </p> <p>Let us call $P (n) $ the probability that our event occurs three times in a row within a sequence of $n $ trials. Also, for each trial, let us denote as Y the case that the our event occurs, and as N the case that it does not occur. So, in the specific case described in the OP, we are looking for the probability of getting a triple YYY in a sequence of $n $ elements when $p=0.7$.</p> <p>As a general rule, we can note that, passing from a sample size $n $ to a sample size $n+1$, the probability that our event occurs three times in a row increases only because there is the possibility that the triple YYY, not obtained in the first $n $ trials, occurs as a final triple in the sequence of $n+1$ trials. 
So, to calculate $P (n+1 )$ from $ P (n) $, we simply have to add the probability that the triple in a row is not obtained until a final sequence NYYY. Note that the N preceding the triple YYY is strictly necessary to satisfy the condition that the triple has not been obtained in the first $n $ trials. </p> <p>We can then start with $n=3$, for which we trivially have $P (3)=p^3$. For $n=4$, we have to add to $P (3) $ the probability to get a sequence NYYY, i.e. we have to add $p^3(1-p)$. Similarly, for $n=5$, we have to add to $P (4) $ the probability to get a final sequence NYYY (note that this is not affected by what happens in the first of the five trials), which means that we must add $p^3(1-p)$ as in the previous case. Lastly, for $n=6$, we have again to add to $P (5) $ the probability to get a final sequence NYYY (note that, as above, this is not affected by what happens in the first and in the second of the six trials). This leads to a further increase by $p^3(1-p)$. So, for this initial group of four values of in our probability progression, we directly obtain the recurrence</p> <p>$$P (n+1)=P (n)+ p^3 (1-p)\,\,\,\, \text {(for n=3 to 6)}$$</p> <p>Accordingly, for the specific case $p=0.7$, this formula gives the initial values </p> <p>\begin{aligned} P(3) &amp; =0.7^3=0.343 \\ P (4) &amp; = 0.7^3+0.3 \cdot 0.7^3 = 0.4459 \\ P (5) &amp; = 0.7^3+2\cdot 0.3 \cdot 0.7^3 = 0.5488 \\ P (6) &amp; = 0.7^3+3\cdot 0.3 \cdot 0.7^3 = 0.6517 \\ \end{aligned} </p> <p>already obtained in a previous answer via a different method.</p> <p>Now let us consider the values of $n \geq 7$. Here something changes, because the fact that the sequence ends with NYYY does not imply that the triple YYY has not occurred before, i.e. in the first $n-4$ trials. So, at each step, the probability of getting a final NYYY must be multiplied to the probability of failure in the first $n-4$ trials, which is $1-P (n-4 ) $. 
Therefore, for $n=7$ we have to add to $P (6) $ the probability given by $p^3(1-p) [1-P(3)]\,\,$, for $n=8$ we have to add to $P (7) $ the probability given by $p^3(1-p) [1-P(4)] \,\, $, and so on. This leads to the recurrence</p> <p>$$P (n+1)=P (n)+ p^3(1-p)[1-P(n-4)]\,\,\,\, \text{(for n}\, \geq \text {7)}$$</p> <p>We can now try to get a closed form for this last recurrence. Let us set, by simplicity, $p^3=j$ and $(1-p) =k \,\, $. The recurrence becomes </p> <p>$$P (n+1)=P (n)+ jk-jk \,P(n-4)\, \,$$</p> <p>Note that the four values of $P (n)$ already determined can be written as $$P (3)=j $$ $$P (4)=j +jk$$ $$P (5)=j +2jk$$ $$P (6)=j +3jk$$</p> <p>so that they contain only terms of degree $j $ or $jk $. Continuing our probability progression and focusing on the second group of four values of $n$ (from $n=7\,\, $ to $n=10 \,\, $), new terms of degree $j^2k $ and $j^2k^2$ appear applying the recurrence above (for the sake of readability, here I directly provide the resulting expressions avoiding the rather tedious step-by-step calcuations):</p> <p>\begin{aligned} P (7) &amp; = P(6)+ jk-jk P (3)=j+4jk-j^2k \\ P (8) &amp; = P(7)+ jk-jk P (4)= j+ 5jk -2j^2k-j^2k^2 \\ P (9) &amp; = P(8) + jk-jk P (5)= j+ 6jk -3j^2k-3j^2k^2 \\ P (10) &amp; = P (9) + jk-jk P (6)= j+ 7jk -4j^2k-6j^2k^2 \\ \end{aligned}</p> <p>Some pattern begins to emerge in these expansions. For the specific case $p=0.7$, $$P (7)\approx 0.7193$$ $$P (8)\approx 0.7763$$ $$P (9) \approx 0.8228$$ $$P (10) \approx 0.8586$$</p> <p>We can repeat the procedure, applying the same recurrence and continuing with the third group of four values of $n $ (i.e., $n=11$ to $14$). 
Again, new terms appear, in this case of degree $j^3k^2$ and $j^3 k^3$ (as above, I directly provide the resulting expressions):</p> <p>\begin{align*} P(11) &amp;=j+8jk-5j^2k-10j^2k^2+j^3k^2 \\ P(12) &amp;=j+9jk-6j^2k-15j^2k^2+3j^3k^2+j^3k^3 \\P(13) &amp; = j+10jk-7j^2k -21j^2k^2+6j^3k^2+4j^3k^3 \\ P (14) &amp; = j+11jk-8j^2k-28j^2k^2+10j^3k^2+10j^3k^3\\ \end{align*}</p> <p>The final pattern is therefore as follows. Our expansions of $P (n) $ include:</p> <ul> <li><p>in the odd positions, $\lfloor (n+1)/4 \rfloor \,\, $ terms of the form $$(-1)^{i-1} \binom{n-3i}{i-1} j^i k^{i-1} \,\, $$ where $$i=1,2... \lfloor (n+1)/4 \rfloor \,\, $$ For example, in the expansion of $P (14)$ there are $ \lfloor (14+1)/4 \rfloor =3 \, \,\, $ terms of this form, which are $j $, $-8j^2k $, and $10 j^3k^2$;</p></li> <li><p>in the even positions, $\lfloor n/4 \rfloor \,\, $ terms of the form $$(-1)^{i-1} \binom{n-3i}{i} j^i k^i \,\, $$ where $$i=1,2... \lfloor n/4 \rfloor$$</p> <p>Accordingly, taking again for example the expansion of $P (14)$, there are $ \lfloor 14/4 \rfloor =3 \,$ terms of this form, which are $11jk$, $-28j^2k^2$, and $10 j^3k^3$.</p></li> </ul> <p>Therefore, we obtain the following general expression for $P (n) $:</p> <p>$$P (n)=\sum_{i}^{\lfloor (n+1)/4 \rfloor } (-1)^{i-1} \binom{n-3i}{i-1} j^{i}k^{i-1} +\sum_{i}^{\lfloor n/4 \rfloor } (-1)^{i-1} \binom{n-3i}{i} j^i k^i $$</p> <p>Reminding that $j=p^3$ and $k=(1-p) $, we finally get</p> <p>$$P (n)=\sum_{i}^{\lfloor (n+1)/4 \rfloor } (-1)^{i-1} \binom{n-3i}{i-1} p^{3i}(1-p)^{i-1} + \sum_{i}^{\lfloor n/4 \rfloor } (-1)^{i-1} \binom{n-3i}{i} p^{3i} (1-p)^i $$</p> <p>Setting $p=0.7$, we obtain the probability asked in the OP for, which is</p> <p>$$P (n)=\sum_{i}^{\lfloor (n+1)/4 \rfloor } (-1)^{i-1} \binom{n-3i}{i-1} 0.7^{3i} \cdot 0.3^{i-1} + \sum_{i}^{\lfloor n/4 \rfloor } (-1)^{i-1} \binom{n-3i}{i} 0.7^{3i}\cdot 0.3^i $$</p> <p>Also note that the two sums have the same number of terms when $n \not\equiv -1 \,\, 
\text {mod}\,\, 4$, whereas the first sum has one term more than the second one when $n \equiv -1 \,\,\text {mod} \, 4$, because in this case $(n+1)/4$ is integer. This term is given by</p> <p>$$\displaystyle (-1)^{\frac{(n+1)}{4}-1} \binom{n-\frac{3(n+1)}{4}}{ \frac{(n+1)}{4}-1}\, 0.7^{\frac{3(n+1)}{4}} \, 0.3^{\frac{(n+1)}{4}} $$ $$\displaystyle = (-1)^{\frac{(n-3)}{4}} \binom{\frac{(n-3)}{4}}{\frac{(n-3)}{4}}\, 0.7^{\frac{3(n+1)}{4}} \, 0.3^{\frac{(n+1)}{4}} $$ $$\displaystyle = (-1)^{\frac{(n-3)}{4}} \, 0.7^{\frac{3(n+1)}{4}} \, 0.3^{\frac{(n+1)}{4}} $$</p> <p>So the probability asked in the OP can be alternatively expressed as</p> <p>$$P (n)=\sum_{i}^{\lfloor n/4 \rfloor } (-1)^{i-1} \binom{n-3i}{i}\, 0.7^{3i}\, 0.3^{i} \left( \frac {i}{0.3(n-4 i+1)} +1 \right) + S$$</p> <p>where</p> <p>$$S = \begin{cases} 0 &amp; \mbox{if } n \not\equiv -1 \, \text {mod}\, 4\\ (-1)^{\frac{(n-3)}{4}} \, 0.7^{\frac{3(n+1)}{4}} \, 0.3^{\frac{(n+1)}{4}} &amp; \mbox{if } n \equiv -1 \, \text {mod}\, 4 \end{cases}$$</p> <p>It could also be observed that $|S|$ is relatively small and rapidly decreases for increasing $n$: its value is $\approx 0.0106\, \,\, $ for $n=7$, becoming $\approx 0.00109\, \,\, $ for $n=11$ and $\approx 0.000112\,\,\, $ for $n=15$, so that its contribute to the overall probability is nearly negligible. The last expansion without the $S$ term can therefore considered a valid approximation even for the case $n \equiv -1 \, \text {mod}\, 4$. </p> <p>The probability of three events in a row given by these formulas for $p=0.7$ and $n=3$ to $10$ have already been provided above. As expected, for increasing $n $, the value of $P (n)$ approaches $1$. 
The first value for which the probability achieves $95\%$ is $n=15$, since $P (15) \approx 0.9553\,\,\, $, as confirmed by WA <a href="http://www.wolframalpha.com/input/?i=0.7%5E(12)%200.3%5E(4)%20%2B%20%20sum%20%20%5B(-1)%5E(i-1)%20(((15-3i)%20choose%20i)%200.7%5E(3i)%200.3%5Ei%20(i%2F(0.3(15-4%20i%2B1))%2B1)%20%20%20),%7Bi,%2015%2F4%7D%5D" rel="nofollow noreferrer">here</a>. The probability achieves $97.5\%$ and $99\%$ for $n=18$ and $n=22$, respectively, since $P(18) \approx 0.9772\,\,\, $ and $P(22) \approx 0.9909\,\,\, $, as confirmed by WA <a href="http://www.wolframalpha.com/input/?i=sum%20%20%5B(-1)%5E(i-1)%20(((18-3i)%20choose%20i)%200.7%5E(3i)%200.3%5Ei%20(i%2F(0.3(18-4%20i%2B1))%2B1)%20%20%20),%7Bi,%2018%2F4%7D%5D" rel="nofollow noreferrer">here</a> and <a href="http://www.wolframalpha.com/input/?i=sum%20%20%5B(-1)%5E(i-1)%20(((22-3i)%20choose%20i)%200.7%5E(3i)%200.3%5Ei%20(i%2F(0.3(22-4%20i%2B1))%2B1)%20%20%20),%7Bi,%2022%2F4%7D%5D" rel="nofollow noreferrer">here</a>. </p> Answer 18: <p>The issue with looking at opportunities for success is that they are not completely independent. For example, in a row of size 4, if the first three aren't all successes, this greatly decreases the chance that the last three aren't all successes (the only way this can happen is if only the first trial was a failure).</p> <p>You can still get this approach to work (via correcting for events which are dependent via some sort of inclusion-exclusion), but generally a cleaner method for these types of problems is to write down a recurrence (or more generally, model them as a <a href="https://en.wikipedia.org/wiki/Markov_chain" rel="nofollow noreferrer">finite state Markov chain</a>).</p> <p>Let's let $P(N)$ be the probability a row of length $N$ has no 3 consecutive successes. To help us write down a recurrence, we'll break $P(N)$ down into 3 smaller probabilities. 
Let's let $P(N, 0)$ be the probability a row of length $N$ has no 3 consecutive successes AND that the last attempt was a failure. Similarly, let $P(N,1)$ be the probability that a row of length $N$ has no 3 consecutive successes AND that the last attempt was a success, but the second-last attempt was a failure, and finally, let $P(N,2)$ be the probability that a row of length $N$ has no 3 consecutive successes AND the last two attempts were successes, but the third-to-last attempt was a failure. Intuitively, these different cases measure how "close" we are to getting 3 successes in a row. It's not too hard to see that $P(N) = P(N,0) + P(N,1) + P(N,2)$, since if the last three attempts are all successes, you've already won.</p> <p>Now, let's try to write $P(N+1, 0)$ in terms of smaller cases. The probability you have no 3 consecutive successes in a row of length $N+1$ and that your last attempt was a failure is simply the probability that there are no 3 consecutive successes in the first $N$ tries and that you fail the $N+1$st try. This gives us the equation:</p> <p>$$P(N+1,0) = 0.3P(N) = 0.3P(N,0) + 0.3P(N,1) + 0.3P(N,2)$$</p> <p>Via similar logic, we can show that $P(N+1,1)$ and $P(N+1,2)$ satisfy the following equations:</p> <p>$$P(N+1,1) = 0.7P(N, 0)$$</p> <p>and</p> <p>$$P(N+1,2) = 0.7P(N, 1)$$</p> <p>Substituting these into the original equation for $P(N+1,0)$, we get:</p> <p>$$P(N+1, 0) = 0.3P(N,0) + 0.21P(N-1, 0) + 0.147P(N-2,0)$$</p> <p>Now, this is just a standard <a href="https://en.wikipedia.org/wiki/Recurrence_relation" rel="nofollow noreferrer">linear recurrence relation</a>. 
If $\lambda_1$, $\lambda_2$, and $\lambda_3$ are the roots of $z^3 - 0.3z^2-0.21z-0.147=0$, then there are some constants $a_1, a_2, a_3$ which you can solve for so that</p> <p>$$P(N, 0) = a_1\lambda_1^N + a_2\lambda_2^N + a_3\lambda_3^N$$</p> <p>Since $P(N) = P(N+1,0)/0.3$, this gives a similar formula for $P(N)$ (and ultimately for $1-P(N)$, the probability a success does occur 3 consecutive times). </p> Answer 19: <p>Let $p=0.7$ be the probability of success, $\mathrm{f}(n)$ be the probability of a sequence of length $n$ having 3 consecutive successes and $\mathrm{g}(n)=1-\mathrm{f}(n)$, where $\mathrm{g}(n)$ corresponds to the probability of a sequence of length $n$ having at most 2 consecutive successes.</p> <p>It holds $\mathrm{g}(0) = \mathrm{g}(1) = \mathrm{g}(2) = 1$.</p> <p>Then $\mathrm{g}(n) = (1-p)\mathrm{g}(n-1) + (1-p)p\mathrm{g}(n-2) + (1-p)p^2\mathrm{g}(n-3)$.</p> <p>This holds because all sequences of length $n$ that have at most 2 consecutive successes can be constructed from (1) all such sequences of length $n-1$ suffixed by a failure plus; (2) all such sequences of length $n-2$ suffixed by a failure and a success, plus; (3) all such sequences of length $n-3$ suffixed by a failure and two successes.</p> Answer 20: <p>I saw that the answers to the question include either regex (Automata Theory) or linear recurrences (Linear Algebra), so for those interested I decided to post <a href="https://people.math.carleton.ca/%7Ebsteinbg/classes/automata2009/generatingfunctions.pdf" rel="nofollow noreferrer">this paper</a> on generating functions that neatly explains the interconnection between them.</p> Answer 21: <blockquote> <p>Let's denote the event under consideration with $a$ \begin{align*} P(X=a)=0.7 \end{align*} and we denote the complementary event with $b$.</p> </blockquote> <p>We are looking for the words of length $n$ and their probability of occurrence which do not contain three or more consecutive $a$'s. 
The result is $1$ minus this probability.</p> <blockquote> <p>We can describe the set of these invalid words as the words built from the alphabet $V=\{a,b\}$ which contain at most one or two consecutive $a$'s.</p> <p>These are words</p> <ul> <li><p>starting with zero or more $b$'s:$$b^*$$</p></li> <li><p>followed by zero or more occurrences of $a$ or $aa$ each followed by one or more $b$'s: $$(ab^+|aab^+)^*$$</p></li> <li><p>and terminating with zero, one or two $a$'s $$(\varepsilon|a|aa)$$</p></li> </ul> <p>We obtain</p> <p>\begin{align*} b^*(ab^+|aab^+)^*(\varepsilon|a|aa)\tag{1} \end{align*}</p> </blockquote> <p>The regular expression (1) generates all invalid words in a <em>unique</em> manner. In such cases we can use it to derive a generating function $$\sum_{n=0}^\infty a_n z^n$$ with $a_n$ giving the number of invalid words of length $n$.</p> <p>In order to do so all we need to know is the <em><a href="https://en.wikipedia.org/wiki/Geometric_series#Geometric_power_series" rel="nofollow noreferrer">geometric series expansion</a></em> since the $star$ operator \begin{align*} a^*=\left(\varepsilon|a|a^2|a^3|\cdots\right)\qquad\text{ translates to }\qquad 1+z+z^2+z^3+\cdots=\frac{1}{1-z} \end{align*}</p> <p>Accordingly $a^+=aa^*$ translates to $\frac{z}{1-z}$ and alternatives like $(\varepsilon|a|aa)$ can be written as $1+z+z^2$.</p> <blockquote> <p>We translate the regular expression (1) into a generating function (by mixing up somewhat the symbolic to provide some intermediate steps).</p> <p>Since we want to calculate the probabilities of occurrence of $P(X=a)$ we keep track of $a$ and $b$ by respecting them as corresponding factors in the generating function.</p> <p>\begin{align*} b^*\left(ab^+|aab^+\right)^*(\varepsilon|a|aa) &amp;\longrightarrow \quad \frac{1}{1-bz}\left(\left.\frac{abz^2}{1-bz}\right|\frac{a^2bz^3}{1-bz}\right)^*\left(1+az+a^2z^2\right)\\ &amp;\longrightarrow \quad \frac{1}{1-bz}\left(\frac{abz^2+a^2bz^3}{1-bz}\right)^*(1+az+a^2z^2)\\ 
&amp;\longrightarrow \quad \frac{1}{1-bz}\cdot\frac{1}{1-\frac{abz^2+a^2bz^3}{1-bz}}(1+az+a^2z^2)\\ &amp;\quad\quad=\frac{1+az+a^2z^2}{1-bz-abz^2-a^2bz^3}\tag{1} \end{align*}</p> </blockquote> <p>$$ $$</p> <blockquote> <p>We conclude: The number of invalid words is given by (1). So, the number of valid words is the number of all words minus the number of invalid words. We obtain the generating function $A(z)$ \begin{align*} A(z)&amp;=\sum_{n=0}^\infty(a+b)^nz^n-\frac{1+az+a^2z^2}{1-bz-az^2-a^2z^3}\\ &amp;=\frac{1}{1-(a+b)z}-\frac{1+az+a^2z^2}{1-bz-az^2-a^2z^3}\\ &amp;=\frac{a^3z^3}{(1-(a+b)z)(1-bz-abz^2-a^2bz^3)}\\ &amp;=a^3z^3+a^3(a+2b)z^4+a^3(\color{blue}{1}a^2+\color{blue}{4}ab+\color{blue}{3}b^2)z^5\\ &amp;\qquad a^3(a+b)^2(a+4b)z^6+a^3(a^4+7a^3b+18a^2b^2+16ab^3+5b^4)z^7+\cdots \end{align*}</p> <p>The expansion was done with the help of Wolfram Alpha. We see that e.g. the number of valid words of length $5$ is $\color{blue}{1}+\color{blue}{4}+\color{blue}{3}=8$.</p> </blockquote> <p>$$ $$</p> <blockquote> <p>Out of $2^5=32$ binary words of length $5$ there are $8$ valid words which are marked $\color{blue}{\text{blue}}$ in the table below.</p> <p>We obtain \begin{array}{cccc} \color{blue}{aaa}aa\qquad&amp;ab\color{blue}{aaa}\qquad&amp;b\color{blue}{aaa}a\qquad&amp;bb\color{blue}{aaa}\\ \color{blue}{aaa}ab\qquad&amp;abaab\qquad&amp;b\color{blue}{aaa}b\qquad&amp;bbaab\\ \color{blue}{aaa}ba\qquad&amp;ababa\qquad&amp;baaba\qquad&amp;bbaba\\ \color{blue}{aaa}bb\qquad&amp;ababb\qquad&amp;baabb\qquad&amp;bbabb\\ aabaa\qquad&amp;abbaa\qquad&amp;babaa\qquad&amp;bbbaa\\ aabab\qquad&amp;abbab\qquad&amp;babab\qquad&amp;bbbab\\ aabba\qquad&amp;abbba\qquad&amp;babba\qquad&amp;bbbba\\ aabbb\qquad&amp;abbbb\qquad&amp;babbb\qquad&amp;bbbbb\\ \end{array}</p> </blockquote> <p>We denote with $[z^n]$ the coefficient of $z^n$ in a series.</p> <blockquote> <p>We conclude</p> <ul> <li>The number of occurrences of wanted words of size $n$ is the coefficient of $z^n$ of $A(z)$ evaluated 
at $a=b=1$ and given as OEIS sequence <em><a href="http://oeis.org/A050231" rel="nofollow noreferrer">050231</a></em>.</li> </ul> <p>\begin{align*} \left.[z^n]A(z)\right|_{a=b=1} \end{align*}</p> <ul> <li><p>The wanted probability of valid words of size $n$ is the coefficient of $z^n$ of $A(z)$ evaluated at $a=0.7,b=0.3$. \begin{align*} \left.[z^n]A(z)\right|_{a=0.7,b=0.3} \end{align*}</p></li> <li><p>We obtain for $n=0$ up to $n=7$ \begin{array}{c|cl} n&amp;\left.[z^n]A(z)\right|_{a=b=1}&amp;\left.[z^n]A(z)\right|_{a=0.7,b=0.3}\\ \hline 0&amp;0&amp;0\\ 1&amp;0&amp;0\\ 2&amp;0&amp;0\\ 3&amp;1&amp;0.343\\ 4&amp;3&amp;0.4459\\ 5&amp;8&amp;0.5488\\ 6&amp;20&amp;0.6517\\ 7&amp;47&amp;0.7193\\ \end{array}</p></li> </ul> </blockquote> Answer 22: <p>I would propose an alternative - maybe easier - approach, trying to give a formula that provides a generalized answer for the probability of three events in a row, given the probability $p $ that the event occurs, as a function of the sample size $n$. As shown below, this approach may also be useful to address an additional issue of this problem highlighted in one of the comments, i.e. the sample size needed to "guarantee" a success with a given confidence level. </p> <p>Let us call $P (n) $ the probability that our event occurs three times in a row within a sequence of $n $ trials. Also, for each trial, let us denote as Y the case that our event occurs, and as N the case that it does not occur. So, in the specific case described in the OP, we are looking for the probability of getting a triple YYY in a sequence of $n $ elements when $p=0.7$.</p> <p>As a general rule, we can note that, passing from a sample size $n $ to a sample size $n+1$, the probability that our event occurs three times in a row increases only because there is the possibility that the triple YYY, not obtained in the first $n $ trials, occurs as a final triple in the sequence of $n+1$ trials.
So, to calculate $P (n+1 )$ from $ P (n) $, we simply have to add the probability that the triple in a row is obtained for the first time as a final sequence NYYY. Note that the N preceding the triple YYY is strictly necessary to satisfy the condition that the triple has not been obtained in the first $n $ trials. </p> <p>We can then start with $n=3$, for which we trivially have $P (3)=p^3$. For $n=4$, we have to add to $P (3) $ the probability to get a sequence NYYY, i.e. we have to add $p^3(1-p)$. Similarly, for $n=5$, we have to add to $P (4) $ the probability to get a final sequence NYYY (note that this is not affected by what happens in the first of the five trials), which means that we must add $p^3(1-p)$ as in the previous case. Lastly, for $n=6$, we have again to add to $P (5) $ the probability to get a final sequence NYYY (note that, as above, this is not affected by what happens in the first and in the second of the six trials). This leads to a further increase by $p^3(1-p)$. So, for this initial group of four values of $n$ in our probability progression, we directly obtain the recurrence</p> <p>$$P (n)=P (n-1)+ p^3 (1-p)\,\,\,\, \text {(for n=4 to 6)}$$</p> <p>Accordingly, for the specific case $p=0.7$, this formula gives the initial values </p> <p>\begin{aligned} P(3) &amp; =0.7^3=0.343 \\ P (4) &amp; = 0.7^3+0.3 \cdot 0.7^3 = 0.4459 \\ P (5) &amp; = 0.7^3+2\cdot 0.3 \cdot 0.7^3 = 0.5488 \\ P (6) &amp; = 0.7^3+3\cdot 0.3 \cdot 0.7^3 = 0.6517 \\ \end{aligned} </p> <p>already obtained in a previous answer via a different method.</p> <p>Now let us consider the values of $n \geq 7$. Here something changes, because the fact that the sequence ends with NYYY does not imply that the triple YYY has not occurred before, i.e. in the first $n-4$ trials. So, at each step, the probability of getting a final NYYY must be multiplied by the probability of failure in the first $n-4$ trials, which is $1-P (n-4 ) $.
Therefore, for $n=7$ we have to add to $P (6) $ the probability given by $p^3(1-p) [1-P(3)]\,\,$, for $n=8$ we have to add to $P (7) $ the probability given by $p^3(1-p) [1-P(4)] \,\, $, and so on. This leads to the recurrence</p> <p>$$P (n)=P (n-1)+ p^3(1-p)[1-P(n-4)]\,\,\,\, \text{(for n}\, \geq \text {7)}$$</p> <p>We can now try to get a closed form for this last recurrence. Let us set, for simplicity, $p^3=j$ and $(1-p) =k \,\, $. The recurrence becomes </p> <p>$$P (n)=P (n-1)+ jk-jk \,P(n-4)\, \,$$</p> <p>Note that the four values of $P (n)$ already determined can be written as $$P (3)=j $$ $$P (4)=j +jk$$ $$P (5)=j +2jk$$ $$P (6)=j +3jk$$</p> <p>so that they contain only terms of degree $j $ or $jk $. Continuing our probability progression and focusing on the second group of four values of $n$ (from $n=7\,\, $ to $n=10 \,\, $), new terms of degree $j^2k $ and $j^2k^2$ appear when applying the recurrence above (for the sake of readability, here I directly provide the resulting expressions avoiding the rather tedious step-by-step calculations):</p> <p>\begin{aligned} P (7) &amp; = P(6)+ jk-jk P (3)=j+4jk-j^2k \\ P (8) &amp; = P(7)+ jk-jk P (4)= j+ 5jk -2j^2k-j^2k^2 \\ P (9) &amp; = P(8) + jk-jk P (5)= j+ 6jk -3j^2k-3j^2k^2 \\ P (10) &amp; = P (9) + jk-jk P (6)= j+ 7jk -4j^2k-6j^2k^2 \\ \end{aligned}</p> <p>Some pattern begins to emerge in these expansions. For the specific case $p=0.7$, $$P (7)\approx 0.7193$$ $$P (8)\approx 0.7763$$ $$P (9) \approx 0.8228$$ $$P (10) \approx 0.8586$$</p> <p>We can repeat the procedure, applying the same recurrence and continuing with the third group of four values of $n $ (i.e., $n=11$ to $14$).
Again, new terms appear, in this case of degree $j^3k^2$ and $j^3 k^3$ (as above, I directly provide the resulting expressions):</p> <p>\begin{align*} P(11) &amp;=j+8jk-5j^2k-10j^2k^2+j^3k^2 \\ P(12) &amp;=j+9jk-6j^2k-15j^2k^2+3j^3k^2+j^3k^3 \\ P(13) &amp; = j+10jk-7j^2k -21j^2k^2+6j^3k^2+4j^3k^3 \\ P (14) &amp; = j+11jk-8j^2k-28j^2k^2+10j^3k^2+10j^3k^3\\ \end{align*}</p> <p>The final pattern is therefore as follows. Our expansions of $P (n) $ include:</p> <ul> <li><p>in the odd positions, $\lfloor (n+1)/4 \rfloor \,\, $ terms of the form $$(-1)^{i-1} \binom{n-3i}{i-1} j^i k^{i-1} \,\, $$ where $$i=1,2,\ldots, \lfloor (n+1)/4 \rfloor \,\, $$ For example, in the expansion of $P (14)$ there are $ \lfloor (14+1)/4 \rfloor =3 \, \,\, $ terms of this form, which are $j $, $-8j^2k $, and $10 j^3k^2$;</p></li> <li><p>in the even positions, $\lfloor n/4 \rfloor \,\, $ terms of the form $$(-1)^{i-1} \binom{n-3i}{i} j^i k^i \,\, $$ where $$i=1,2,\ldots, \lfloor n/4 \rfloor$$</p> <p>Accordingly, taking again for example the expansion of $P (14)$, there are $ \lfloor 14/4 \rfloor =3 \,$ terms of this form, which are $11jk$, $-28j^2k^2$, and $10 j^3k^3$.</p></li> </ul> <p>Therefore, we obtain the following general expression for $P (n) $:</p> <p>$$P (n)=\sum_{i=1}^{\lfloor (n+1)/4 \rfloor } (-1)^{i-1} \binom{n-3i}{i-1} j^{i}k^{i-1} +\sum_{i=1}^{\lfloor n/4 \rfloor } (-1)^{i-1} \binom{n-3i}{i} j^i k^i $$</p> <p>Recalling that $j=p^3$ and $k=(1-p) $, we finally get</p> <p>$$P (n)=\sum_{i=1}^{\lfloor (n+1)/4 \rfloor } (-1)^{i-1} \binom{n-3i}{i-1} p^{3i}(1-p)^{i-1} + \sum_{i=1}^{\lfloor n/4 \rfloor } (-1)^{i-1} \binom{n-3i}{i} p^{3i} (1-p)^i $$</p> <p>Setting $p=0.7$, we obtain the probability asked for in the OP, which is</p> <p>$$P (n)=\sum_{i=1}^{\lfloor (n+1)/4 \rfloor } (-1)^{i-1} \binom{n-3i}{i-1} 0.7^{3i} \cdot 0.3^{i-1} + \sum_{i=1}^{\lfloor n/4 \rfloor } (-1)^{i-1} \binom{n-3i}{i} 0.7^{3i}\cdot 0.3^i $$</p> <p>Also note that the two sums have the same number of terms when $n \not\equiv -1 \,\, \text {mod}\,\, 4$, whereas the first sum has one term more than the second one when $n \equiv -1 \,\,\text {mod} \, 4$, because in this case $(n+1)/4$ is an integer. This term is given by</p> <p>$$\displaystyle (-1)^{\frac{(n+1)}{4}-1} \binom{n-\frac{3(n+1)}{4}}{ \frac{(n+1)}{4}-1}\, 0.7^{\frac{3(n+1)}{4}} \, 0.3^{\frac{(n+1)}{4}} $$ $$\displaystyle = (-1)^{\frac{(n-3)}{4}} \binom{\frac{(n-3)}{4}}{\frac{(n-3)}{4}}\, 0.7^{\frac{3(n+1)}{4}} \, 0.3^{\frac{(n+1)}{4}} $$ $$\displaystyle = (-1)^{\frac{(n-3)}{4}} \, 0.7^{\frac{3(n+1)}{4}} \, 0.3^{\frac{(n+1)}{4}} $$</p> <p>So the probability asked in the OP can be alternatively expressed as</p> <p>$$P (n)=\sum_{i=1}^{\lfloor n/4 \rfloor } (-1)^{i-1} \binom{n-3i}{i}\, 0.7^{3i}\, 0.3^{i} \left( \frac {i}{0.3(n-4 i+1)} +1 \right) + S$$</p> <p>where</p> <p>$$S = \begin{cases} 0 &amp; \mbox{if } n \not\equiv -1 \, \text {mod}\, 4\\ (-1)^{\frac{(n-3)}{4}} \, 0.7^{\frac{3(n+1)}{4}} \, 0.3^{\frac{(n+1)}{4}} &amp; \mbox{if } n \equiv -1 \, \text {mod}\, 4 \end{cases}$$</p> <p>It could also be observed that $|S|$ is relatively small and rapidly decreases for increasing $n$: its value is $\approx 0.0106\, \,\, $ for $n=7$, becoming $\approx 0.00109\, \,\, $ for $n=11$ and $\approx 0.000112\,\,\, $ for $n=15$, so that its contribution to the overall probability is nearly negligible. The last expansion without the $S$ term can therefore be considered a valid approximation even for the case $n \equiv -1 \, \text {mod}\, 4$. </p> <p>The probabilities of three events in a row given by these formulas for $p=0.7$ and $n=3$ to $10$ have already been provided above. As expected, for increasing $n $, the value of $P (n)$ approaches $1$.
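As a quick numerical cross-check (my own sketch, not part of the original answer), one can iterate the recurrence exactly as in the worked values above ($P(7)=P(6)+jk-jk\,P(3)$, etc.) and compare it against the closed-form double sum; the helper names below are hypothetical:

```python
from math import comb

# Sketch (assumed helper names, p = 0.7): compute P(n) two ways and
# check that the recurrence and the closed-form double sum agree.

def prob_recurrence(n, p=0.7):
    # P(3) = p^3, P(m) = 0 for m < 3,
    # then P(m) = P(m-1) + p^3 (1-p) (1 - P(m-4))
    P = [0.0, 0.0, 0.0, p ** 3]
    for m in range(4, n + 1):
        P.append(P[m - 1] + p ** 3 * (1 - p) * (1 - P[m - 4]))
    return P[n]

def prob_closed_form(n, p=0.7):
    j, k = p ** 3, 1 - p  # j = p^3, k = 1 - p as in the text
    odd = sum((-1) ** (i - 1) * comb(n - 3 * i, i - 1) * j ** i * k ** (i - 1)
              for i in range(1, (n + 1) // 4 + 1))
    even = sum((-1) ** (i - 1) * comb(n - 3 * i, i) * j ** i * k ** i
               for i in range(1, n // 4 + 1))
    return odd + even

for n in range(3, 16):
    assert abs(prob_recurrence(n) - prob_closed_form(n)) < 1e-9
print(round(prob_recurrence(7), 4))  # 0.7193, as in the expansion above
```

Both computations agree to floating-point precision for $n$ up to $15$ and reproduce the values $0.343, 0.4459, \ldots, 0.8586$ listed above.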
The first value for which the probability achieves $95\%$ is $n=15$, since $P (15) \approx 0.9549\,\,\, $ (cf. the WA computation <a href="http://www.wolframalpha.com/input/?i=0.7%5E(12)%200.3%5E(4)%20%2B%20%20sum%20%20%5B(-1)%5E(i-1)%20(((15-3i)%20choose%20i)%200.7%5E(3i)%200.3%5Ei%20(i%2F(0.3(15-4%20i%2B1))%2B1)%20%20%20),%7Bi,%2015%2F4%7D%5D" rel="nofollow noreferrer">here</a>). The probability achieves $97.5\%$ and $99\%$ for $n=18$ and $n=22$, respectively, since $P(18) \approx 0.9772\,\,\, $ and $P(22) \approx 0.9909\,\,\, $, as confirmed by WA <a href="http://www.wolframalpha.com/input/?i=sum%20%20%5B(-1)%5E(i-1)%20(((18-3i)%20choose%20i)%200.7%5E(3i)%200.3%5Ei%20(i%2F(0.3(18-4%20i%2B1))%2B1)%20%20%20),%7Bi,%2018%2F4%7D%5D" rel="nofollow noreferrer">here</a> and <a href="http://www.wolframalpha.com/input/?i=sum%20%20%5B(-1)%5E(i-1)%20(((22-3i)%20choose%20i)%200.7%5E(3i)%200.3%5Ei%20(i%2F(0.3(22-4%20i%2B1))%2B1)%20%20%20),%7Bi,%2022%2F4%7D%5D" rel="nofollow noreferrer">here</a>. </p> Answer 23: <p>The issue with looking at opportunities for success is that they are not completely independent. For example, in a row of size 4, if the first three aren't all successes, this greatly decreases the chance that the last three are all successes (the only way this can happen is if only the first trial was a failure).</p> <p>You can still get this approach to work (via correcting for events which are dependent via some sort of inclusion-exclusion), but generally a cleaner method for these types of problems is to write down a recurrence (or more generally, model them as a <a href="https://en.wikipedia.org/wiki/Markov_chain" rel="nofollow noreferrer">finite state Markov chain</a>).</p> <p>Let's let $P(N)$ be the probability a row of length $N$ has no 3 consecutive successes. To help us write down a recurrence, we'll break $P(N)$ down into 3 smaller probabilities.
Let's let $P(N, 0)$ be the probability a row of length $N$ has no 3 consecutive successes AND that the last attempt was a failure. Similarly, let $P(N,1)$ be the probability that a row of length $N$ has no 3 consecutive successes AND that the last attempt was a success, but the second-last attempt was a failure, and finally, let $P(N,2)$ be the probability that a row of length $N$ has no 3 consecutive successes AND the last two attempts were successes, but the third-to-last attempt was a failure. Intuitively, these different cases measure how "close" we are to getting 3 successes in a row. It's not too hard to see that $P(N) = P(N,0) + P(N,1) + P(N,2)$, since if the last three attempts are all successes, you've already won.</p> <p>Now, let's try to write $P(N+1, 0)$ in terms of smaller cases. The probability you have no 3 consecutive successes in a row of length $N+1$ and that your last attempt was a failure is simply the probability that there are no 3 consecutive successes in the first $N$ tries and that you fail the $N+1$st try. This gives us the equation:</p> <p>$$P(N+1,0) = 0.3P(N) = 0.3P(N,0) + 0.3P(N,1) + 0.3P(N,2)$$</p> <p>Via similar logic, we can show that $P(N+1,1)$ and $P(N+1,2)$ satisfy the following equations:</p> <p>$$P(N+1,1) = 0.7P(N, 0)$$</p> <p>and</p> <p>$$P(N+1,2) = 0.7P(N, 1)$$</p> <p>Substituting these into the original equation for $P(N+1,0)$, we get:</p> <p>$$P(N+1, 0) = 0.3P(N,0) + 0.21P(N-1, 0) + 0.147P(N-2,0)$$</p> <p>Now, this is just a standard <a href="https://en.wikipedia.org/wiki/Recurrence_relation" rel="nofollow noreferrer">linear recurrence relation</a>. 
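As an aside (my own sketch, not part of the answer), the three state equations can be iterated directly to sanity-check them before solving for the roots; `no_triple_prob` is a hypothetical helper name:

```python
# Sketch (assumed helper name): iterate the state equations with p = 0.7.
# s0, s1, s2 = probability of no triple so far with a current run of
# 0, 1, 2 trailing successes; a row of length 1 starts at (0.3, 0.7, 0).
def no_triple_prob(n, p=0.7):
    s0, s1, s2 = 1 - p, p, 0.0
    for _ in range(n - 1):
        s0, s1, s2 = (1 - p) * (s0 + s1 + s2), p * s0, p * s1
    return s0 + s1 + s2  # P(N) in this answer's notation

print(round(1 - no_triple_prob(5), 4))  # 0.5488
```

For $n=5$ this gives $1-0.4512=0.5488$, matching the value obtained by the other approaches in this thread.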
If $\lambda_1$, $\lambda_2$, and $\lambda_3$ are the roots of $z^3 - 0.3z^2-0.21z-0.147=0$, then there are some constants $a_1, a_2, a_3$ which you can solve for so that</p> <p>$$P(N, 0) = a_1\lambda_1^N + a_2\lambda_2^N + a_3\lambda_3^N$$</p> <p>Since $P(N) = P(N+1,0)/0.3$, this gives a similar formula for $P(N)$ (and ultimately for $1-P(N)$, the probability a success does occur 3 consecutive times). </p> Answer 24: <p>Let $p=0.7$ be the probability of success, $\mathrm{f}(n)$ be the probability of a sequence of length $n$ having 3 consecutive successes and $\mathrm{g}(n)=1-\mathrm{f}(n)$, where $\mathrm{g}(n)$ corresponds to the probability of a sequence of length $n$ having at most 2 consecutive successes.</p> <p>We have $\mathrm{g}(0) = \mathrm{g}(1) = \mathrm{g}(2) = 1$.</p> <p>Then $\mathrm{g}(n) = (1-p)\mathrm{g}(n-1) + (1-p)p\mathrm{g}(n-2) + (1-p)p^2\mathrm{g}(n-3)$.</p> <p>This holds because all sequences of length $n$ that have at most 2 consecutive successes can be constructed from (1) all such sequences of length $n-1$ suffixed by a failure, plus (2) all such sequences of length $n-2$ suffixed by a failure and a success, plus (3) all such sequences of length $n-3$ suffixed by a failure and two successes.</p> Answer 25: <p>I saw that the answers to the question include either regex (Automata Theory) or linear recurrences (Linear Algebra), so for those interested I decided to post <a href="https://people.math.carleton.ca/%7Ebsteinbg/classes/automata2009/generatingfunctions.pdf" rel="nofollow noreferrer">this paper</a> on generating functions that neatly explains the interconnection between them.</p> Answer 26: <blockquote> <p>Let's denote the event under consideration with $a$ \begin{align*} P(X=a)=0.7 \end{align*} and we denote the complementary event with $b$.</p> </blockquote> <p>We are looking for the words of length $n$ and their probability of occurrence which do not contain three or more consecutive $a$'s.
The result is $1$ minus this probability.</p> <blockquote> <p>We can describe the set of these invalid words as the words built from the alphabet $V=\{a,b\}$ which contain at most two consecutive $a$'s.</p> <p>These are words</p> <ul> <li><p>starting with zero or more $b$'s: $$b^*$$</p></li> <li><p>followed by zero or more occurrences of $a$ or $aa$ each followed by one or more $b$'s: $$(ab^+|aab^+)^*$$</p></li> <li><p>and terminating with zero, one or two $a$'s $$(\varepsilon|a|aa)$$</p></li> </ul> <p>We obtain</p> <p>\begin{align*} b^*(ab^+|aab^+)^*(\varepsilon|a|aa)\tag{1} \end{align*}</p> </blockquote> <p>The regular expression (1) generates all invalid words in a <em>unique</em> manner. In such cases we can use it to derive a generating function $$\sum_{n=0}^\infty a_n z^n$$ with $a_n$ giving the number of invalid words of length $n$.</p> <p>In order to do so all we need to know is the <em><a href="https://en.wikipedia.org/wiki/Geometric_series#Geometric_power_series" rel="nofollow noreferrer">geometric series expansion</a></em> since the <em>star</em> operator \begin{align*} a^*=\left(\varepsilon|a|a^2|a^3|\cdots\right)\qquad\text{ translates to }\qquad 1+z+z^2+z^3+\cdots=\frac{1}{1-z} \end{align*}</p> <p>Accordingly $a^+=aa^*$ translates to $\frac{z}{1-z}$ and alternatives like $(\varepsilon|a|aa)$ can be written as $1+z+z^2$.</p> <blockquote> <p>We translate the regular expression (1) into a generating function (mixing the symbolic notation somewhat to provide some intermediate steps).</p> <p>Since we want to calculate the probabilities of occurrence, with $P(X=a)=0.7$, we keep track of $a$ and $b$ by carrying them along as corresponding factors in the generating function.</p> <p>\begin{align*} b^*\left(ab^+|aab^+\right)^*(\varepsilon|a|aa) &amp;\longrightarrow \quad \frac{1}{1-bz}\left(\left.\frac{abz^2}{1-bz}\right|\frac{a^2bz^3}{1-bz}\right)^*\left(1+az+a^2z^2\right)\\ &amp;\longrightarrow \quad \frac{1}{1-bz}\left(\frac{abz^2+a^2bz^3}{1-bz}\right)^*(1+az+a^2z^2)\\
&amp;\longrightarrow \quad \frac{1}{1-bz}\cdot\frac{1}{1-\frac{abz^2+a^2bz^3}{1-bz}}(1+az+a^2z^2)\\ &amp;\quad\quad=\frac{1+az+a^2z^2}{1-bz-abz^2-a^2bz^3}\tag{1} \end{align*}</p> </blockquote> <p>$$ $$</p> <blockquote> <p>We conclude: The number of invalid words is given by (1). So, the number of valid words is the number of all words minus the number of invalid words. We obtain the generating function $A(z)$ \begin{align*} A(z)&amp;=\sum_{n=0}^\infty(a+b)^nz^n-\frac{1+az+a^2z^2}{1-bz-abz^2-a^2bz^3}\\ &amp;=\frac{1}{1-(a+b)z}-\frac{1+az+a^2z^2}{1-bz-abz^2-a^2bz^3}\\ &amp;=\frac{a^3z^3}{(1-(a+b)z)(1-bz-abz^2-a^2bz^3)}\\ &amp;=a^3z^3+a^3(a+2b)z^4+a^3(\color{blue}{1}a^2+\color{blue}{4}ab+\color{blue}{3}b^2)z^5\\ &amp;\qquad +a^3(a+b)^2(a+4b)z^6+a^3(a^4+7a^3b+18a^2b^2+16ab^3+5b^4)z^7+\cdots \end{align*}</p> <p>The expansion was done with the help of Wolfram Alpha. We see that e.g. the number of valid words of length $5$ is $\color{blue}{1}+\color{blue}{4}+\color{blue}{3}=8$.</p> </blockquote> <p>$$ $$</p> <blockquote> <p>Out of $2^5=32$ binary words of length $5$ there are $8$ valid words which are marked $\color{blue}{\text{blue}}$ in the table below.</p> <p>We obtain \begin{array}{cccc} \color{blue}{aaa}aa\qquad&amp;ab\color{blue}{aaa}\qquad&amp;b\color{blue}{aaa}a\qquad&amp;bb\color{blue}{aaa}\\ \color{blue}{aaa}ab\qquad&amp;abaab\qquad&amp;b\color{blue}{aaa}b\qquad&amp;bbaab\\ \color{blue}{aaa}ba\qquad&amp;ababa\qquad&amp;baaba\qquad&amp;bbaba\\ \color{blue}{aaa}bb\qquad&amp;ababb\qquad&amp;baabb\qquad&amp;bbabb\\ aabaa\qquad&amp;abbaa\qquad&amp;babaa\qquad&amp;bbbaa\\ aabab\qquad&amp;abbab\qquad&amp;babab\qquad&amp;bbbab\\ aabba\qquad&amp;abbba\qquad&amp;babba\qquad&amp;bbbba\\ aabbb\qquad&amp;abbbb\qquad&amp;babbb\qquad&amp;bbbbb\\ \end{array}</p> </blockquote> <p>We denote with $[z^n]$ the coefficient of $z^n$ in a series.</p> <blockquote> <p>We conclude</p> <ul> <li>The number of occurrences of wanted words of size $n$ is the coefficient of $z^n$ of $A(z)$ evaluated
at $a=b=1$ and given as OEIS sequence <em><a href="http://oeis.org/A050231" rel="nofollow noreferrer">050231</a></em>.</li> </ul> <p>\begin{align*} \left.[z^n]A(z)\right|_{a=b=1} \end{align*}</p> <ul> <li><p>The wanted probability of valid words of size $n$ is the coefficient of $z^n$ of $A(z)$ evaluated at $a=0.7,b=0.3$. \begin{align*} \left.[z^n]A(z)\right|_{a=0.7,b=0.3} \end{align*}</p></li> <li><p>We obtain for $n=0$ up to $n=7$ \begin{array}{c|cl} n&amp;\left.[z^n]A(z)\right|_{a=b=1}&amp;\left.[z^n]A(z)\right|_{a=0.7,b=0.3}\\ \hline 0&amp;0&amp;0\\ 1&amp;0&amp;0\\ 2&amp;0&amp;0\\ 3&amp;1&amp;0.343\\ 4&amp;3&amp;0.4459\\ 5&amp;8&amp;0.5488\\ 6&amp;20&amp;0.6517\\ 7&amp;47&amp;0.7193\\ \end{array}</p></li> </ul> </blockquote> Answer 31: <blockquote> <p>Let's denote the event under consideration with $a$ \begin{align*} P(X=a)=0.7 \end{align*} and we denote the complementary event with $b$.</p> </blockquote> <p>We are looking for the words of length $n$ and their probability of occurrence which do not contain three or more consecutive $a$'s.
The result is $1$ minus this probability.</p> <blockquote> <p>We can describe the set of these invalid words as the words built from the alphabet $V=\{a,b\}$ which contain at most one or two consecutive $a$'s.</p> <p>These are words</p> <ul> <li><p>starting with zero or more $b$'s: $$b^*$$</p></li> <li><p>followed by zero or more occurrences of $a$ or $aa$, each followed by one or more $b$'s: $$(ab^+|aab^+)^*$$</p></li> <li><p>and terminating with zero, one or two $a$'s: $$(\varepsilon|a|aa)$$</p></li> </ul> <p>We obtain</p> <p>\begin{align*} b^*(ab^+|aab^+)^*(\varepsilon|a|aa)\tag{1} \end{align*}</p> </blockquote> <p>The regular expression (1) generates all invalid words in a <em>unique</em> manner. In such cases we can use it to derive a generating function $$\sum_{n=0}^\infty a_n z^n$$ with $a_n$ giving the number of invalid words of length $n$.</p> <p>In order to do so, all we need to know is the <em><a href="https://en.wikipedia.org/wiki/Geometric_series#Geometric_power_series" rel="nofollow noreferrer">geometric series expansion</a></em>, since the <em>star</em> operator \begin{align*} a^*=\left(\varepsilon|a|a^2|a^3|\cdots\right)\qquad\text{ translates to }\qquad 1+z+z^2+z^3+\cdots=\frac{1}{1-z} \end{align*}</p> <p>Accordingly, $a^+=aa^*$ translates to $\frac{z}{1-z}$, and alternatives like $(\varepsilon|a|aa)$ can be written as $1+z+z^2$.</p> <blockquote> <p>We translate the regular expression (1) into a generating function (somewhat loosening the notation to show some intermediate steps).</p> <p>Since we want to calculate the probability of occurrence $P(X=a)$, we keep track of $a$ and $b$ by treating them as corresponding factors in the generating function.</p> <p>\begin{align*} b^*\left(ab^+|aab^+\right)^*(\varepsilon|a|aa) &amp;\longrightarrow \quad \frac{1}{1-bz}\left(\left.\frac{abz^2}{1-bz}\right|\frac{a^2bz^3}{1-bz}\right)^*\left(1+az+a^2z^2\right)\\ &amp;\longrightarrow \quad \frac{1}{1-bz}\left(\frac{abz^2+a^2bz^3}{1-bz}\right)^*(1+az+a^2z^2)\\ 
&amp;\longrightarrow \quad \frac{1}{1-bz}\cdot\frac{1}{1-\frac{abz^2+a^2bz^3}{1-bz}}(1+az+a^2z^2)\\ &amp;\quad\quad=\frac{1+az+a^2z^2}{1-bz-abz^2-a^2bz^3}\tag{2} \end{align*}</p> </blockquote> <p>$$ $$</p> <blockquote> <p>We conclude: The number of invalid words is given by (2). So, the number of valid words is the number of all words minus the number of invalid words. We obtain the generating function $A(z)$ \begin{align*} A(z)&amp;=\sum_{n=0}^\infty(a+b)^nz^n-\frac{1+az+a^2z^2}{1-bz-abz^2-a^2bz^3}\\ &amp;=\frac{1}{1-(a+b)z}-\frac{1+az+a^2z^2}{1-bz-abz^2-a^2bz^3}\\ &amp;=\frac{a^3z^3}{(1-(a+b)z)(1-bz-abz^2-a^2bz^3)}\\ &amp;=a^3z^3+a^3(a+2b)z^4+a^3(\color{blue}{1}a^2+\color{blue}{4}ab+\color{blue}{3}b^2)z^5\\ &amp;\qquad +a^3(a+b)^2(a+4b)z^6+a^3(a^4+7a^3b+18a^2b^2+16ab^3+5b^4)z^7+\cdots \end{align*}</p> <p>The expansion was done with the help of Wolfram Alpha. We see that e.g. the number of valid words of length $5$ is $\color{blue}{1}+\color{blue}{4}+\color{blue}{3}=8$.</p> </blockquote> <p>$$ $$</p> <blockquote> <p>Out of $2^5=32$ binary words of length $5$ there are $8$ valid words which are marked $\color{blue}{\text{blue}}$ in the table below.</p> <p>We obtain \begin{array}{cccc} \color{blue}{aaa}aa\qquad&amp;ab\color{blue}{aaa}\qquad&amp;b\color{blue}{aaa}a\qquad&amp;bb\color{blue}{aaa}\\ \color{blue}{aaa}ab\qquad&amp;abaab\qquad&amp;b\color{blue}{aaa}b\qquad&amp;bbaab\\ \color{blue}{aaa}ba\qquad&amp;ababa\qquad&amp;baaba\qquad&amp;bbaba\\ \color{blue}{aaa}bb\qquad&amp;ababb\qquad&amp;baabb\qquad&amp;bbabb\\ aabaa\qquad&amp;abbaa\qquad&amp;babaa\qquad&amp;bbbaa\\ aabab\qquad&amp;abbab\qquad&amp;babab\qquad&amp;bbbab\\ aabba\qquad&amp;abbba\qquad&amp;babba\qquad&amp;bbbba\\ aabbb\qquad&amp;abbbb\qquad&amp;babbb\qquad&amp;bbbbb\\ \end{array}</p> </blockquote> <p>We denote with $[z^n]$ the coefficient of $z^n$ in a series.</p> <blockquote> <p>We conclude</p> <ul> <li>The number of occurrences of wanted words of size $n$ is the coefficient of $z^n$ of $A(z)$ evaluated 
at $a=b=1$ and given as OEIS sequence <em><a href="http://oeis.org/A050231" rel="nofollow noreferrer">050231</a></em>.</li> </ul> <p>\begin{align*} \left.[z^n]A(z)\right|_{a=b=1} \end{align*}</p> <ul> <li><p>The wanted probability of valid words of size $n$ is the coefficient of $z^n$ of $A(z)$ evaluated at $a=0.7,b=0.3$. \begin{align*} \left.[z^n]A(z)\right|_{a=0.7,b=0.3} \end{align*}</p></li> <li><p>We obtain for $n=0$ up to $n=7$ \begin{array}{c|cl} n&amp;\left.[z^n]A(z)\right|_{a=b=1}&amp;\left.[z^n]A(z)\right|_{a=0.7,b=0.3}\\ \hline 0&amp;0&amp;0\\ 1&amp;0&amp;0\\ 2&amp;0&amp;0\\ 3&amp;1&amp;0.343\\ 4&amp;3&amp;0.4459\\ 5&amp;8&amp;0.5488\\ 6&amp;20&amp;0.6517\\ 7&amp;47&amp;0.7193\\ \end{array}</p></li> </ul> </blockquote> Answer 32: <p>I would propose an alternative - maybe easier - approach, trying to give a formula that provides a generalized answer for the probability of three events in a row, given the probability $p$ that the event occurs, as a function of the sample size $n$. As shown below, this approach may also be useful to address an additional issue of this problem highlighted in one of the comments, i.e. the sample size needed to "guarantee" a success with a given confidence level. </p> <p>Let us call $P(n)$ the probability that our event occurs three times in a row within a sequence of $n$ trials. Also, for each trial, let us denote as Y the case in which our event occurs, and as N the case in which it does not occur. So, in the specific case described in the OP, we are looking for the probability of getting a triple YYY in a sequence of $n$ elements when $p=0.7$.</p> <p>As a general rule, we can note that, passing from a sample size $n$ to a sample size $n+1$, the probability that our event occurs three times in a row increases only because there is the possibility that the triple YYY, not obtained in the first $n$ trials, occurs as a final triple in the sequence of $n+1$ trials. 
So, to calculate $P(n+1)$ from $P(n)$, we simply have to add the probability that the triple in a row first appears as a final sequence NYYY. Note that the N preceding the triple YYY is strictly necessary to satisfy the condition that the triple has not been obtained in the first $n$ trials. </p> <p>We can then start with $n=3$, for which we trivially have $P(3)=p^3$. For $n=4$, we have to add to $P(3)$ the probability of getting a sequence NYYY, i.e. we have to add $p^3(1-p)$. Similarly, for $n=5$, we have to add to $P(4)$ the probability of getting a final sequence NYYY (note that this is not affected by what happens in the first of the five trials), which means that we must add $p^3(1-p)$ as in the previous case. Lastly, for $n=6$, we have again to add to $P(5)$ the probability of getting a final sequence NYYY (note that, as above, this is not affected by what happens in the first and in the second of the six trials). This leads to a further increase by $p^3(1-p)$. So, for this initial group of four values of $n$ in our probability progression, we directly obtain the recurrence</p> <p>$$P(n+1)=P(n)+ p^3 (1-p)\,\,\,\, \text{(for n=3 to 6)}$$</p> <p>Accordingly, for the specific case $p=0.7$, this formula gives the initial values </p> <p>\begin{aligned} P(3) &amp; =0.7^3=0.343 \\ P(4) &amp; = 0.7^3+0.3 \cdot 0.7^3 = 0.4459 \\ P(5) &amp; = 0.7^3+2\cdot 0.3 \cdot 0.7^3 = 0.5488 \\ P(6) &amp; = 0.7^3+3\cdot 0.3 \cdot 0.7^3 = 0.6517 \\ \end{aligned} </p> <p>already obtained in a previous answer via a different method.</p> <p>Now let us consider the values of $n \geq 7$. Here something changes, because the fact that the sequence ends with NYYY does not imply that the triple YYY has not occurred before, i.e. in the first $n-4$ trials. So, at each step, the probability of getting a final NYYY must be multiplied by the probability of failure in the first $n-4$ trials, which is $1-P(n-4)$. 
Therefore, for $n=7$ we have to add to $P(6)$ the probability given by $p^3(1-p)[1-P(3)]$, for $n=8$ we have to add to $P(7)$ the probability given by $p^3(1-p)[1-P(4)]$, and so on. This leads to the recurrence</p> <p>$$P(n+1)=P(n)+ p^3(1-p)[1-P(n-4)]\,\,\,\, \text{(for n}\, \geq \text{7)}$$</p> <p>We can now try to get a closed form for this last recurrence. For simplicity, let us set $p^3=j$ and $(1-p)=k$. The recurrence becomes </p> <p>$$P(n+1)=P(n)+ jk-jk \,P(n-4)$$</p> <p>Note that the four values of $P(n)$ already determined can be written as $$P(3)=j$$ $$P(4)=j+jk$$ $$P(5)=j+2jk$$ $$P(6)=j+3jk$$</p> <p>so that they contain only terms of degree $j$ or $jk$. Continuing our probability progression and focusing on the second group of four values of $n$ (from $n=7$ to $n=10$), new terms of degree $j^2k$ and $j^2k^2$ appear applying the recurrence above (for the sake of readability, here I directly provide the resulting expressions, avoiding the rather tedious step-by-step calculations):</p> <p>\begin{aligned} P(7) &amp; = P(6)+ jk-jk P(3)=j+4jk-j^2k \\ P(8) &amp; = P(7)+ jk-jk P(4)= j+ 5jk -2j^2k-j^2k^2 \\ P(9) &amp; = P(8) + jk-jk P(5)= j+ 6jk -3j^2k-3j^2k^2 \\ P(10) &amp; = P(9) + jk-jk P(6)= j+ 7jk -4j^2k-6j^2k^2 \\ \end{aligned}</p> <p>Some pattern begins to emerge in these expansions. For the specific case $p=0.7$, $$P(7)\approx 0.7193$$ $$P(8)\approx 0.7763$$ $$P(9) \approx 0.8228$$ $$P(10) \approx 0.8586$$</p> <p>We can repeat the procedure, applying the same recurrence and continuing with the third group of four values of $n$ (i.e., $n=11$ to $14$). 
Again, new terms appear, in this case of degree $j^3k^2$ and $j^3 k^3$ (as above, I directly provide the resulting expressions):</p> <p>\begin{align*} P(11) &amp;=j+8jk-5j^2k-10j^2k^2+j^3k^2 \\ P(12) &amp;=j+9jk-6j^2k-15j^2k^2+3j^3k^2+j^3k^3 \\P(13) &amp; = j+10jk-7j^2k -21j^2k^2+6j^3k^2+4j^3k^3 \\ P(14) &amp; = j+11jk-8j^2k-28j^2k^2+10j^3k^2+10j^3k^3\\ \end{align*}</p> <p>The final pattern is therefore as follows. Our expansions of $P(n)$ include:</p> <ul> <li><p>in the odd positions, $\lfloor (n+1)/4 \rfloor$ terms of the form $$(-1)^{i-1} \binom{n-3i}{i-1} j^i k^{i-1}$$ where $$i=1,2,\dots,\lfloor (n+1)/4 \rfloor$$ For example, in the expansion of $P(14)$ there are $\lfloor (14+1)/4 \rfloor =3$ terms of this form, which are $j$, $-8j^2k$, and $10 j^3k^2$;</p></li> <li><p>in the even positions, $\lfloor n/4 \rfloor$ terms of the form $$(-1)^{i-1} \binom{n-3i}{i} j^i k^i$$ where $$i=1,2,\dots,\lfloor n/4 \rfloor$$</p> <p>Accordingly, taking again for example the expansion of $P(14)$, there are $\lfloor 14/4 \rfloor =3$ terms of this form, which are $11jk$, $-28j^2k^2$, and $10 j^3k^3$.</p></li> </ul> <p>Therefore, we obtain the following general expression for $P(n)$:</p> <p>$$P(n)=\sum_{i=1}^{\lfloor (n+1)/4 \rfloor } (-1)^{i-1} \binom{n-3i}{i-1} j^{i}k^{i-1} +\sum_{i=1}^{\lfloor n/4 \rfloor } (-1)^{i-1} \binom{n-3i}{i} j^i k^i $$</p> <p>Recalling that $j=p^3$ and $k=(1-p)$, we finally get</p> <p>$$P(n)=\sum_{i=1}^{\lfloor (n+1)/4 \rfloor } (-1)^{i-1} \binom{n-3i}{i-1} p^{3i}(1-p)^{i-1} + \sum_{i=1}^{\lfloor n/4 \rfloor } (-1)^{i-1} \binom{n-3i}{i} p^{3i} (1-p)^i $$</p> <p>Setting $p=0.7$, we obtain the probability asked for in the OP, which is</p> <p>$$P(n)=\sum_{i=1}^{\lfloor (n+1)/4 \rfloor } (-1)^{i-1} \binom{n-3i}{i-1} 0.7^{3i} \cdot 0.3^{i-1} + \sum_{i=1}^{\lfloor n/4 \rfloor } (-1)^{i-1} \binom{n-3i}{i} 0.7^{3i}\cdot 0.3^i $$</p> <p>Also note that the two sums have the same number of terms when $n \not\equiv -1 \,\, 
\text{mod}\,\, 4$, whereas the first sum has one term more than the second one when $n \equiv -1 \,\,\text{mod} \, 4$, because in this case $(n+1)/4$ is an integer. This term is given by</p> <p>$$\displaystyle (-1)^{\frac{(n+1)}{4}-1} \binom{n-\frac{3(n+1)}{4}}{ \frac{(n+1)}{4}-1}\, 0.7^{\frac{3(n+1)}{4}} \, 0.3^{\frac{(n+1)}{4}} $$ $$\displaystyle = (-1)^{\frac{(n-3)}{4}} \binom{\frac{(n-3)}{4}}{\frac{(n-3)}{4}}\, 0.7^{\frac{3(n+1)}{4}} \, 0.3^{\frac{(n+1)}{4}} $$ $$\displaystyle = (-1)^{\frac{(n-3)}{4}} \, 0.7^{\frac{3(n+1)}{4}} \, 0.3^{\frac{(n+1)}{4}} $$</p> <p>So the probability asked for in the OP can be alternatively expressed as</p> <p>$$P(n)=\sum_{i=1}^{\lfloor n/4 \rfloor } (-1)^{i-1} \binom{n-3i}{i}\, 0.7^{3i}\, 0.3^{i} \left( \frac {i}{0.3(n-4 i+1)} +1 \right) + S$$</p> <p>where</p> <p>$$S = \begin{cases} 0 &amp; \mbox{if } n \not\equiv -1 \, \text{mod}\, 4\\ (-1)^{\frac{(n-3)}{4}} \, 0.7^{\frac{3(n+1)}{4}} \, 0.3^{\frac{(n+1)}{4}} &amp; \mbox{if } n \equiv -1 \, \text{mod}\, 4 \end{cases}$$</p> <p>It could also be observed that $|S|$ is relatively small and rapidly decreases for increasing $n$: its value is $\approx 0.0106$ for $n=7$, becoming $\approx 0.00109$ for $n=11$ and $\approx 0.000112$ for $n=15$, so that its contribution to the overall probability is nearly negligible. The last expansion without the $S$ term can therefore be considered a valid approximation even for the case $n \equiv -1 \, \text{mod}\, 4$. </p> <p>The probabilities of three events in a row given by these formulas for $p=0.7$ and $n=3$ to $10$ have already been provided above. As expected, for increasing $n$, the value of $P(n)$ approaches $1$. 
The first value for which the probability achieves $95\%$ is $n=15$, since $P (15) \approx 0.9553\,\,\, $, as confirmed by WA <a href="http://www.wolframalpha.com/input/?i=0.7%5E(12)%200.3%5E(4)%20%2B%20%20sum%20%20%5B(-1)%5E(i-1)%20(((15-3i)%20choose%20i)%200.7%5E(3i)%200.3%5Ei%20(i%2F(0.3(15-4%20i%2B1))%2B1)%20%20%20),%7Bi,%2015%2F4%7D%5D" rel="nofollow noreferrer">here</a>. The probability achieves $97.5\%$ and $99\%$ for $n=18$ and $n=22$, respectively, since $P(18) \approx 0.9772\,\,\, $ and $P(22) \approx 0.9909\,\,\, $, as confirmed by WA <a href="http://www.wolframalpha.com/input/?i=sum%20%20%5B(-1)%5E(i-1)%20(((18-3i)%20choose%20i)%200.7%5E(3i)%200.3%5Ei%20(i%2F(0.3(18-4%20i%2B1))%2B1)%20%20%20),%7Bi,%2018%2F4%7D%5D" rel="nofollow noreferrer">here</a> and <a href="http://www.wolframalpha.com/input/?i=sum%20%20%5B(-1)%5E(i-1)%20(((22-3i)%20choose%20i)%200.7%5E(3i)%200.3%5Ei%20(i%2F(0.3(22-4%20i%2B1))%2B1)%20%20%20),%7Bi,%2022%2F4%7D%5D" rel="nofollow noreferrer">here</a>. </p> Answer 33: <p>The issue with looking at opportunities for success is that they are not completely independent. For example, in a row of size 4, if the first three aren't all successes, this greatly decreases the chance that the last three aren't all successes (the only way this can happen is if only the first trial was a failure).</p> <p>You can still get this approach to work (via correcting for events which are dependent via some sort of inclusion-exclusion), but generally a cleaner method for these types of problems is to write down a recurrence (or more generally, model them as a <a href="https://en.wikipedia.org/wiki/Markov_chain" rel="nofollow noreferrer">finite state Markov chain</a>).</p> <p>Let's let $P(N)$ be the probability a row of length $N$ has no 3 consecutive successes. To help us write down a recurrence, we'll break $P(N)$ down into 3 smaller probabilities. 
Let's let $P(N, 0)$ be the probability a row of length $N$ has no 3 consecutive successes AND that the last attempt was a failure. Similarly, let $P(N,1)$ be the probability that a row of length $N$ has no 3 consecutive successes AND that the last attempt was a success, but the second-last attempt was a failure, and finally, let $P(N,2)$ be the probability that a row of length $N$ has no 3 consecutive successes AND the last two attempts were successes, but the third-to-last attempt was a failure. Intuitively, these different cases measure how "close" we are to getting 3 successes in a row. It's not too hard to see that $P(N) = P(N,0) + P(N,1) + P(N,2)$, since if the last three attempts are all successes, you've already won.</p> <p>Now, let's try to write $P(N+1, 0)$ in terms of smaller cases. The probability you have no 3 consecutive successes in a row of length $N+1$ and that your last attempt was a failure is simply the probability that there are no 3 consecutive successes in the first $N$ tries and that you fail the $N+1$st try. This gives us the equation:</p> <p>$$P(N+1,0) = 0.3P(N) = 0.3P(N,0) + 0.3P(N,1) + 0.3P(N,2)$$</p> <p>Via similar logic, we can show that $P(N+1,1)$ and $P(N+1,2)$ satisfy the following equations:</p> <p>$$P(N+1,1) = 0.7P(N, 0)$$</p> <p>and</p> <p>$$P(N+1,2) = 0.7P(N, 1)$$</p> <p>Substituting these into the original equation for $P(N+1,0)$, we get:</p> <p>$$P(N+1, 0) = 0.3P(N,0) + 0.21P(N-1, 0) + 0.147P(N-2,0)$$</p> <p>Now, this is just a standard <a href="https://en.wikipedia.org/wiki/Recurrence_relation" rel="nofollow noreferrer">linear recurrence relation</a>. 
If $\lambda_1$, $\lambda_2$, and $\lambda_3$ are the roots of $z^3 - 0.3z^2-0.21z-0.147=0$, then there are some constants $a_1, a_2, a_3$ which you can solve for so that</p> <p>$$P(N, 0) = a_1\lambda_1^N + a_2\lambda_2^N + a_3\lambda_3^N$$</p> <p>Since $P(N) = P(N+1,0)/0.3$, this gives a similar formula for $P(N)$ (and ultimately for $1-P(N)$, the probability a success does occur 3 consecutive times). </p> Answer 34: <p>Let $p=0.7$ be the probability of success, $\mathrm{f}(n)$ be the probability of a sequence of length $n$ having 3 consecutive successes and $\mathrm{g}(n)=1-\mathrm{f}(n)$, where $\mathrm{g}(n)$ corresponds to the probability of a sequence of length $n$ having at most 2 consecutive successes.</p> <p>It holds $\mathrm{g}(0) = \mathrm{g}(1) = \mathrm{g}(2) = 1$.</p> <p>Then $\mathrm{g}(n) = (1-p)\mathrm{g}(n-1) + (1-p)p\mathrm{g}(n-2) + (1-p)p^2\mathrm{g}(n-3)$.</p> <p>This holds because all sequences of length $n$ that have at most 2 consecutive successes can be constructed from (1) all such sequences of length $n-1$ suffixed by a failure plus; (2) all such sequences of length $n-2$ suffixed by a failure and a success, plus; (3) all such sequences of length $n-3$ suffixed by a failure and two successes.</p> Answer 35: <p>I saw that the answers to the question include either regex (Automata Theory) or linear recurrences (Linear Algebra), so for those interested I decided to post <a href="https://people.math.carleton.ca/%7Ebsteinbg/classes/automata2009/generatingfunctions.pdf" rel="nofollow noreferrer">this paper</a> on generating functions that neatly explains the interconnection between them.</p> Answer 36: <blockquote> <p>Let's denote the event under consideration with $a$ \begin{align*} P(X=a)=0.7 \end{align*} and we denote the complementary event with $b$.</p> </blockquote> <p>We are looking for the words of length $n$ and their probability of occurrence which do not contain three or more consecutive $a$'s. 
The result is $1$ minus this probability.</p> <blockquote> <p>We can describe the set of these invalid words as the words built from the alphabet $V=\{a,b\}$ which contain at most one or two consecutive $a$'s.</p> <p>These are words</p> <ul> <li><p>starting with zero or more $b$'s:$$b^*$$</p></li> <li><p>followed by zero or more occurrences of $a$ or $aa$ each followed by one or more $b$'s: $$(ab^+|aab^+)^*$$</p></li> <li><p>and terminating with zero, one or two $a$'s $$(\varepsilon|a|aa)$$</p></li> </ul> <p>We obtain</p> <p>\begin{align*} b^*(ab^+|aab^+)^*(\varepsilon|a|aa)\tag{1} \end{align*}</p> </blockquote> <p>The regular expression (1) generates all invalid words in a <em>unique</em> manner. In such cases we can use it to derive a generating function $$\sum_{n=0}^\infty a_n z^n$$ with $a_n$ giving the number of invalid words of length $n$.</p> <p>In order to do so all we need to know is the <em><a href="https://en.wikipedia.org/wiki/Geometric_series#Geometric_power_series" rel="nofollow noreferrer">geometric series expansion</a></em> since the $star$ operator \begin{align*} a^*=\left(\varepsilon|a|a^2|a^3|\cdots\right)\qquad\text{ translates to }\qquad 1+z+z^2+z^3+\cdots=\frac{1}{1-z} \end{align*}</p> <p>Accordingly $a^+=aa^*$ translates to $\frac{z}{1-z}$ and alternatives like $(\varepsilon|a|aa)$ can be written as $1+z+z^2$.</p> <blockquote> <p>We translate the regular expression (1) into a generating function (by mixing up somewhat the symbolic to provide some intermediate steps).</p> <p>Since we want to calculate the probabilities of occurrence of $P(X=a)$ we keep track of $a$ and $b$ by respecting them as corresponding factors in the generating function.</p> <p>\begin{align*} b^*\left(ab^+|aab^+\right)^*(\varepsilon|a|aa) &amp;\longrightarrow \quad \frac{1}{1-bz}\left(\left.\frac{abz^2}{1-bz}\right|\frac{a^2bz^3}{1-bz}\right)^*\left(1+az+a^2z^2\right)\\ &amp;\longrightarrow \quad \frac{1}{1-bz}\left(\frac{abz^2+a^2bz^3}{1-bz}\right)^*(1+az+a^2z^2)\\ 
&amp;\longrightarrow \quad \frac{1}{1-bz}\cdot\frac{1}{1-\frac{abz^2+a^2bz^3}{1-bz}}(1+az+a^2z^2)\\ &amp;\quad\quad=\frac{1+az+a^2z^2}{1-bz-abz^2-a^2bz^3}\tag{1} \end{align*}</p> </blockquote> <p>$$ $$</p> <blockquote> <p>We conclude: The number of invalid words is given by (1). So, the number of valid words is the number of all words minus the number of invalid words. We obtain the generating function $A(z)$ \begin{align*} A(z)&amp;=\sum_{n=0}^\infty(a+b)^nz^n-\frac{1+az+a^2z^2}{1-bz-az^2-a^2z^3}\\ &amp;=\frac{1}{1-(a+b)z}-\frac{1+az+a^2z^2}{1-bz-az^2-a^2z^3}\\ &amp;=\frac{a^3z^3}{(1-(a+b)z)(1-bz-abz^2-a^2bz^3)}\\ &amp;=a^3z^3+a^3(a+2b)z^4+a^3(\color{blue}{1}a^2+\color{blue}{4}ab+\color{blue}{3}b^2)z^5\\ &amp;\qquad a^3(a+b)^2(a+4b)z^6+a^3(a^4+7a^3b+18a^2b^2+16ab^3+5b^4)z^7+\cdots \end{align*}</p> <p>The expansion was done with the help of Wolfram Alpha. We see that e.g. the number of valid words of length $5$ is $\color{blue}{1}+\color{blue}{4}+\color{blue}{3}=8$.</p> </blockquote> <p>$$ $$</p> <blockquote> <p>Out of $2^5=32$ binary words of length $5$ there are $8$ valid words which are marked $\color{blue}{\text{blue}}$ in the table below.</p> <p>We obtain \begin{array}{cccc} \color{blue}{aaa}aa\qquad&amp;ab\color{blue}{aaa}\qquad&amp;b\color{blue}{aaa}a\qquad&amp;bb\color{blue}{aaa}\\ \color{blue}{aaa}ab\qquad&amp;abaab\qquad&amp;b\color{blue}{aaa}b\qquad&amp;bbaab\\ \color{blue}{aaa}ba\qquad&amp;ababa\qquad&amp;baaba\qquad&amp;bbaba\\ \color{blue}{aaa}bb\qquad&amp;ababb\qquad&amp;baabb\qquad&amp;bbabb\\ aabaa\qquad&amp;abbaa\qquad&amp;babaa\qquad&amp;bbbaa\\ aabab\qquad&amp;abbab\qquad&amp;babab\qquad&amp;bbbab\\ aabba\qquad&amp;abbba\qquad&amp;babba\qquad&amp;bbbba\\ aabbb\qquad&amp;abbbb\qquad&amp;babbb\qquad&amp;bbbbb\\ \end{array}</p> </blockquote> <p>We denote with $[z^n]$ the coefficient of $z^n$ in a series.</p> <blockquote> <p>We conclude</p> <ul> <li>The number of occurrences of wanted words of size $n$ is the coefficient of $z^n$ of $A(z)$ evaluated 
at $a=b=1$ and given as OEIS sequence <em><a href="http://oeis.org/A050231" rel="nofollow noreferrer">050231</a></em>.</li> </ul> <p>\begin{align*} \left.[z^n]A(z)\right|_{a=b=1} \end{align*}</p> <ul> <li><p>The wanted probability of valid words of size $n$ is the coefficient of $z^n$ of $A(z)$ evaluated at $a=0.7,b=0.3$. \begin{align*} \left.[z^n]A(z)\right|_{a=0.7,b=0.3} \end{align*}</p></li> <li><p>We obtain for $n=0$ up to $n=7$ \begin{array}{c|cl} n&amp;\left.[z^n]A(z)\right|_{a=b=1}&amp;\left.[z^n]A(z)\right|_{a=0.7,b=0.3}\\ \hline 0&amp;0&amp;0\\ 1&amp;0&amp;0\\ 2&amp;0&amp;0\\ 3&amp;1&amp;0.343\\ 4&amp;3&amp;0.4459\\ 5&amp;8&amp;0.5488\\ 6&amp;20&amp;0.6517\\ 7&amp;47&amp;0.7193\\ \end{array}</p></li> </ul> </blockquote> Answer 37: <p>I would propose an alternative - maybe easier - approach, trying to give a formula that provides a generalized answer for the probability of three events in a row, given the probability $p $ that the event occurs, as a function of the sample size $n$. As shown below, this approach may also be useful to determine an additional issue of this problem highlihted in one of the comments, i.e. the sample size needed to "guarantee" a success with a given confidence level. </p> <p>Let us call $P (n) $ the probability that our event occurs three times in a row within a sequence of $n $ trials. Also, for each trial, let us denote as Y the case that the our event occurs, and as N the case that it does not occur. So, in the specific case described in the OP, we are looking for the probability of getting a triple YYY in a sequence of $n $ elements when $p=0.7$.</p> <p>As a general rule, we can note that, passing from a sample size $n $ to a sample size $n+1$, the probability that our event occurs three times in a row increases only because there is the possibility that the triple YYY, not obtained in the first $n $ trials, occurs as a final triple in the sequence of $n+1$ trials. 
So, to calculate $P (n+1 )$ from $ P (n) $, we simply have to add the probability that the triple in a row is not obtained until a final sequence NYYY. Note that the N preceding the triple YYY is strictly necessary to satisfy the condition that the triple has not been obtained in the first $n $ trials. </p> <p>We can then start with $n=3$, for which we trivially have $P (3)=p^3$. For $n=4$, we have to add to $P (3) $ the probability to get a sequence NYYY, i.e. we have to add $p^3(1-p)$. Similarly, for $n=5$, we have to add to $P (4) $ the probability to get a final sequence NYYY (note that this is not affected by what happens in the first of the five trials), which means that we must add $p^3(1-p)$ as in the previous case. Lastly, for $n=6$, we have again to add to $P (5) $ the probability to get a final sequence NYYY (note that, as above, this is not affected by what happens in the first and in the second of the six trials). This leads to a further increase by $p^3(1-p)$. So, for this initial group of four values of in our probability progression, we directly obtain the recurrence</p> <p>$$P (n+1)=P (n)+ p^3 (1-p)\,\,\,\, \text {(for n=3 to 6)}$$</p> <p>Accordingly, for the specific case $p=0.7$, this formula gives the initial values </p> <p>\begin{aligned} P(3) &amp; =0.7^3=0.343 \\ P (4) &amp; = 0.7^3+0.3 \cdot 0.7^3 = 0.4459 \\ P (5) &amp; = 0.7^3+2\cdot 0.3 \cdot 0.7^3 = 0.5488 \\ P (6) &amp; = 0.7^3+3\cdot 0.3 \cdot 0.7^3 = 0.6517 \\ \end{aligned} </p> <p>already obtained in a previous answer via a different method.</p> <p>Now let us consider the values of $n \geq 7$. Here something changes, because the fact that the sequence ends with NYYY does not imply that the triple YYY has not occurred before, i.e. in the first $n-4$ trials. So, at each step, the probability of getting a final NYYY must be multiplied to the probability of failure in the first $n-4$ trials, which is $1-P (n-4 ) $. 
Therefore, for $n=7$ we have to add to $P (6) $ the probability given by $p^3(1-p) [1-P(3)]\,\,$, for $n=8$ we have to add to $P (7) $ the probability given by $p^3(1-p) [1-P(4)] \,\, $, and so on. This leads to the recurrence</p> <p>$$P (n+1)=P (n)+ p^3(1-p)[1-P(n-4)]\,\,\,\, \text{(for n}\, \geq \text {7)}$$</p> <p>We can now try to get a closed form for this last recurrence. Let us set, by simplicity, $p^3=j$ and $(1-p) =k \,\, $. The recurrence becomes </p> <p>$$P (n+1)=P (n)+ jk-jk \,P(n-4)\, \,$$</p> <p>Note that the four values of $P (n)$ already determined can be written as $$P (3)=j $$ $$P (4)=j +jk$$ $$P (5)=j +2jk$$ $$P (6)=j +3jk$$</p> <p>so that they contain only terms of degree $j $ or $jk $. Continuing our probability progression and focusing on the second group of four values of $n$ (from $n=7\,\, $ to $n=10 \,\, $), new terms of degree $j^2k $ and $j^2k^2$ appear applying the recurrence above (for the sake of readability, here I directly provide the resulting expressions avoiding the rather tedious step-by-step calcuations):</p> <p>\begin{aligned} P (7) &amp; = P(6)+ jk-jk P (3)=j+4jk-j^2k \\ P (8) &amp; = P(7)+ jk-jk P (4)= j+ 5jk -2j^2k-j^2k^2 \\ P (9) &amp; = P(8) + jk-jk P (5)= j+ 6jk -3j^2k-3j^2k^2 \\ P (10) &amp; = P (9) + jk-jk P (6)= j+ 7jk -4j^2k-6j^2k^2 \\ \end{aligned}</p> <p>Some pattern begins to emerge in these expansions. For the specific case $p=0.7$, $$P (7)\approx 0.7193$$ $$P (8)\approx 0.7763$$ $$P (9) \approx 0.8228$$ $$P (10) \approx 0.8586$$</p> <p>We can repeat the procedure, applying the same recurrence and continuing with the third group of four values of $n $ (i.e., $n=11$ to $14$). 
Again, new terms appear, in this case of degree $j^3k^2$ and $j^3 k^3$ (as above, I directly provide the resulting expressions):</p> <p>\begin{align*} P(11) &amp;=j+8jk-5j^2k-10j^2k^2+j^3k^2 \\ P(12) &amp;=j+9jk-6j^2k-15j^2k^2+3j^3k^2+j^3k^3 \\P(13) &amp; = j+10jk-7j^2k -21j^2k^2+6j^3k^2+4j^3k^3 \\ P (14) &amp; = j+11jk-8j^2k-28j^2k^2+10j^3k^2+10j^3k^3\\ \end{align*}</p> <p>The final pattern is therefore as follows. Our expansions of $P (n) $ include:</p> <ul> <li><p>in the odd positions, $\lfloor (n+1)/4 \rfloor \,\, $ terms of the form $$(-1)^{i-1} \binom{n-3i}{i-1} j^i k^{i-1} \,\, $$ where $$i=1,2... \lfloor (n+1)/4 \rfloor \,\, $$ For example, in the expansion of $P (14)$ there are $ \lfloor (14+1)/4 \rfloor =3 \, \,\, $ terms of this form, which are $j $, $-8j^2k $, and $10 j^3k^2$;</p></li> <li><p>in the even positions, $\lfloor n/4 \rfloor \,\, $ terms of the form $$(-1)^{i-1} \binom{n-3i}{i} j^i k^i \,\, $$ where $$i=1,2... \lfloor n/4 \rfloor$$</p> <p>Accordingly, taking again for example the expansion of $P (14)$, there are $ \lfloor 14/4 \rfloor =3 \,$ terms of this form, which are $11jk$, $-28j^2k^2$, and $10 j^3k^3$.</p></li> </ul> <p>Therefore, we obtain the following general expression for $P (n) $:</p> <p>$$P (n)=\sum_{i}^{\lfloor (n+1)/4 \rfloor } (-1)^{i-1} \binom{n-3i}{i-1} j^{i}k^{i-1} +\sum_{i}^{\lfloor n/4 \rfloor } (-1)^{i-1} \binom{n-3i}{i} j^i k^i $$</p> <p>Reminding that $j=p^3$ and $k=(1-p) $, we finally get</p> <p>$$P (n)=\sum_{i}^{\lfloor (n+1)/4 \rfloor } (-1)^{i-1} \binom{n-3i}{i-1} p^{3i}(1-p)^{i-1} + \sum_{i}^{\lfloor n/4 \rfloor } (-1)^{i-1} \binom{n-3i}{i} p^{3i} (1-p)^i $$</p> <p>Setting $p=0.7$, we obtain the probability asked in the OP for, which is</p> <p>$$P (n)=\sum_{i}^{\lfloor (n+1)/4 \rfloor } (-1)^{i-1} \binom{n-3i}{i-1} 0.7^{3i} \cdot 0.3^{i-1} + \sum_{i}^{\lfloor n/4 \rfloor } (-1)^{i-1} \binom{n-3i}{i} 0.7^{3i}\cdot 0.3^i $$</p> <p>Also note that the two sums have the same number of terms when $n \not\equiv -1 \,\, 
\text {mod}\,\, 4$, whereas the first sum has one term more than the second one when $n \equiv -1 \,\,\text {mod} \, 4$, because in this case $(n+1)/4$ is integer. This term is given by</p> <p>$$\displaystyle (-1)^{\frac{(n+1)}{4}-1} \binom{n-\frac{3(n+1)}{4}}{ \frac{(n+1)}{4}-1}\, 0.7^{\frac{3(n+1)}{4}} \, 0.3^{\frac{(n+1)}{4}} $$ $$\displaystyle = (-1)^{\frac{(n-3)}{4}} \binom{\frac{(n-3)}{4}}{\frac{(n-3)}{4}}\, 0.7^{\frac{3(n+1)}{4}} \, 0.3^{\frac{(n+1)}{4}} $$ $$\displaystyle = (-1)^{\frac{(n-3)}{4}} \, 0.7^{\frac{3(n+1)}{4}} \, 0.3^{\frac{(n+1)}{4}} $$</p> <p>So the probability asked in the OP can be alternatively expressed as</p> <p>$$P (n)=\sum_{i}^{\lfloor n/4 \rfloor } (-1)^{i-1} \binom{n-3i}{i}\, 0.7^{3i}\, 0.3^{i} \left( \frac {i}{0.3(n-4 i+1)} +1 \right) + S$$</p> <p>where</p> <p>$$S = \begin{cases} 0 &amp; \mbox{if } n \not\equiv -1 \, \text {mod}\, 4\\ (-1)^{\frac{(n-3)}{4}} \, 0.7^{\frac{3(n+1)}{4}} \, 0.3^{\frac{(n+1)}{4}} &amp; \mbox{if } n \equiv -1 \, \text {mod}\, 4 \end{cases}$$</p> <p>It could also be observed that $|S|$ is relatively small and rapidly decreases for increasing $n$: its value is $\approx 0.0106\, \,\, $ for $n=7$, becoming $\approx 0.00109\, \,\, $ for $n=11$ and $\approx 0.000112\,\,\, $ for $n=15$, so that its contribute to the overall probability is nearly negligible. The last expansion without the $S$ term can therefore considered a valid approximation even for the case $n \equiv -1 \, \text {mod}\, 4$. </p> <p>The probability of three events in a row given by these formulas for $p=0.7$ and $n=3$ to $10$ have already been provided above. As expected, for increasing $n $, the value of $P (n)$ approaches $1$. 
The first value for which the probability achieves $95\%$ is $n=15$, since $P (15) \approx 0.9553\,\,\, $, as confirmed by WA <a href="http://www.wolframalpha.com/input/?i=0.7%5E(12)%200.3%5E(4)%20%2B%20%20sum%20%20%5B(-1)%5E(i-1)%20(((15-3i)%20choose%20i)%200.7%5E(3i)%200.3%5Ei%20(i%2F(0.3(15-4%20i%2B1))%2B1)%20%20%20),%7Bi,%2015%2F4%7D%5D" rel="nofollow noreferrer">here</a>. The probability achieves $97.5\%$ and $99\%$ for $n=18$ and $n=22$, respectively, since $P(18) \approx 0.9772\,\,\, $ and $P(22) \approx 0.9909\,\,\, $, as confirmed by WA <a href="http://www.wolframalpha.com/input/?i=sum%20%20%5B(-1)%5E(i-1)%20(((18-3i)%20choose%20i)%200.7%5E(3i)%200.3%5Ei%20(i%2F(0.3(18-4%20i%2B1))%2B1)%20%20%20),%7Bi,%2018%2F4%7D%5D" rel="nofollow noreferrer">here</a> and <a href="http://www.wolframalpha.com/input/?i=sum%20%20%5B(-1)%5E(i-1)%20(((22-3i)%20choose%20i)%200.7%5E(3i)%200.3%5Ei%20(i%2F(0.3(22-4%20i%2B1))%2B1)%20%20%20),%7Bi,%2022%2F4%7D%5D" rel="nofollow noreferrer">here</a>. </p> Answer 38: <p>The issue with looking at opportunities for success is that they are not completely independent. For example, in a row of size 4, if the first three aren't all successes, this greatly decreases the chance that the last three aren't all successes (the only way this can happen is if only the first trial was a failure).</p> <p>You can still get this approach to work (via correcting for events which are dependent via some sort of inclusion-exclusion), but generally a cleaner method for these types of problems is to write down a recurrence (or more generally, model them as a <a href="https://en.wikipedia.org/wiki/Markov_chain" rel="nofollow noreferrer">finite state Markov chain</a>).</p> <p>Let's let $P(N)$ be the probability a row of length $N$ has no 3 consecutive successes. To help us write down a recurrence, we'll break $P(N)$ down into 3 smaller probabilities. 
Let's let $P(N, 0)$ be the probability a row of length $N$ has no 3 consecutive successes AND that the last attempt was a failure. Similarly, let $P(N,1)$ be the probability that a row of length $N$ has no 3 consecutive successes AND that the last attempt was a success, but the second-last attempt was a failure, and finally, let $P(N,2)$ be the probability that a row of length $N$ has no 3 consecutive successes AND the last two attempts were successes, but the third-to-last attempt was a failure. Intuitively, these different cases measure how "close" we are to getting 3 successes in a row. It's not too hard to see that $P(N) = P(N,0) + P(N,1) + P(N,2)$, since if the last three attempts are all successes, you've already won.</p> <p>Now, let's try to write $P(N+1, 0)$ in terms of smaller cases. The probability you have no 3 consecutive successes in a row of length $N+1$ and that your last attempt was a failure is simply the probability that there are no 3 consecutive successes in the first $N$ tries and that you fail the $N+1$st try. This gives us the equation:</p> <p>$$P(N+1,0) = 0.3P(N) = 0.3P(N,0) + 0.3P(N,1) + 0.3P(N,2)$$</p> <p>Via similar logic, we can show that $P(N+1,1)$ and $P(N+1,2)$ satisfy the following equations:</p> <p>$$P(N+1,1) = 0.7P(N, 0)$$</p> <p>and</p> <p>$$P(N+1,2) = 0.7P(N, 1)$$</p> <p>Substituting these into the original equation for $P(N+1,0)$, we get:</p> <p>$$P(N+1, 0) = 0.3P(N,0) + 0.21P(N-1, 0) + 0.147P(N-2,0)$$</p> <p>Now, this is just a standard <a href="https://en.wikipedia.org/wiki/Recurrence_relation" rel="nofollow noreferrer">linear recurrence relation</a>. 
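Before turning to the closed form, the three-state recurrence above is easy to iterate directly as a sanity check. The following short Python sketch is an editorial illustration (not part of the original answer; the function name is ours):

```python
# Iterate the three "no 3 consecutive successes yet" states described above:
#   s0 = P(N, 0): last attempt was a failure
#   s1 = P(N, 1): last attempt a success, the one before a failure
#   s2 = P(N, 2): last two attempts successes, the one before a failure
def prob_three_in_a_row(n, p=0.7):
    """Probability of 3 consecutive successes somewhere in n trials."""
    q = 1 - p
    # After one trial: either a failure, or a single trailing success.
    s0, s1, s2 = q, p, 0.0
    for _ in range(n - 1):
        s0, s1, s2 = q * (s0 + s1 + s2), p * s0, p * s1
    # Complement of "no run of 3 successes occurred".
    return 1 - (s0 + s1 + s2)

for n in range(3, 8):
    print(n, round(prob_three_in_a_row(n), 4))
```

This reproduces $0.343$, $0.4459$, $0.5488$, $0.6517$, $0.7193$ for $n=3,\dots,7$, matching the values quoted elsewhere in this thread.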
If $\lambda_1$, $\lambda_2$, and $\lambda_3$ are the roots of $z^3 - 0.3z^2-0.21z-0.147=0$, then there are some constants $a_1, a_2, a_3$ which you can solve for so that</p> <p>$$P(N, 0) = a_1\lambda_1^N + a_2\lambda_2^N + a_3\lambda_3^N$$</p> <p>Since $P(N) = P(N+1,0)/0.3$, this gives a similar formula for $P(N)$ (and ultimately for $1-P(N)$, the probability a success does occur 3 consecutive times). </p> Answer 39: <p>Let $p=0.7$ be the probability of success, $\mathrm{f}(n)$ be the probability of a sequence of length $n$ having 3 consecutive successes and $\mathrm{g}(n)=1-\mathrm{f}(n)$, where $\mathrm{g}(n)$ corresponds to the probability of a sequence of length $n$ having at most 2 consecutive successes.</p> <p>It holds that $\mathrm{g}(0) = \mathrm{g}(1) = \mathrm{g}(2) = 1$.</p> <p>Then, for $n \geq 3$, $\mathrm{g}(n) = (1-p)\mathrm{g}(n-1) + (1-p)p\mathrm{g}(n-2) + (1-p)p^2\mathrm{g}(n-3)$.</p> <p>This holds because all sequences of length $n$ that have at most 2 consecutive successes can be constructed from (1) all such sequences of length $n-1$ suffixed by a failure, plus (2) all such sequences of length $n-2$ suffixed by a failure and a success, plus (3) all such sequences of length $n-3$ suffixed by a failure and two successes.</p> Answer 40: <p>I saw that the answers to the question include either regex (Automata Theory) or linear recurrences (Linear Algebra), so for those interested I decided to post <a href="https://people.math.carleton.ca/%7Ebsteinbg/classes/automata2009/generatingfunctions.pdf" rel="nofollow noreferrer">this paper</a> on generating functions that neatly explains the interconnection between them.</p> Answer 41: <blockquote> <p>Let's denote the event under consideration with $a$ \begin{align*} P(X=a)=0.7 \end{align*} and we denote the complementary event with $b$.</p> </blockquote> <p>We are looking for the words of length $n$ and their probability of occurrence which do not contain three or more consecutive $a$'s. 
The result is $1$ minus this probability.</p> <blockquote> <p>We can describe the set of these invalid words as the words built from the alphabet $V=\{a,b\}$ which contain at most one or two consecutive $a$'s.</p> <p>These are words</p> <ul> <li><p>starting with zero or more $b$'s: $$b^*$$</p></li> <li><p>followed by zero or more occurrences of $a$ or $aa$ each followed by one or more $b$'s: $$(ab^+|aab^+)^*$$</p></li> <li><p>and terminating with zero, one or two $a$'s $$(\varepsilon|a|aa)$$</p></li> </ul> <p>We obtain</p> <p>\begin{align*} b^*(ab^+|aab^+)^*(\varepsilon|a|aa)\tag{1} \end{align*}</p> </blockquote> <p>The regular expression (1) generates all invalid words in a <em>unique</em> manner. In such cases we can use it to derive a generating function $$\sum_{n=0}^\infty a_n z^n$$ with $a_n$ giving the number of invalid words of length $n$.</p> <p>In order to do so all we need to know is the <em><a href="https://en.wikipedia.org/wiki/Geometric_series#Geometric_power_series" rel="nofollow noreferrer">geometric series expansion</a></em> since the $star$ operator \begin{align*} a^*=\left(\varepsilon|a|a^2|a^3|\cdots\right)\qquad\text{ translates to }\qquad 1+z+z^2+z^3+\cdots=\frac{1}{1-z} \end{align*}</p> <p>Accordingly $a^+=aa^*$ translates to $\frac{z}{1-z}$ and alternatives like $(\varepsilon|a|aa)$ can be written as $1+z+z^2$.</p> <blockquote> <p>We translate the regular expression (1) into a generating function (somewhat mixing up the symbolism to provide some intermediate steps).</p> <p>Since we want to calculate the probabilities of occurrence of $P(X=a)$ we keep track of $a$ and $b$ by retaining them as corresponding factors in the generating function.</p> <p>\begin{align*} b^*\left(ab^+|aab^+\right)^*(\varepsilon|a|aa) &amp;\longrightarrow \quad \frac{1}{1-bz}\left(\left.\frac{abz^2}{1-bz}\right|\frac{a^2bz^3}{1-bz}\right)^*\left(1+az+a^2z^2\right)\\ &amp;\longrightarrow \quad \frac{1}{1-bz}\left(\frac{abz^2+a^2bz^3}{1-bz}\right)^*(1+az+a^2z^2)\\ 
&amp;\longrightarrow \quad \frac{1}{1-bz}\cdot\frac{1}{1-\frac{abz^2+a^2bz^3}{1-bz}}(1+az+a^2z^2)\\ &amp;\quad\quad=\frac{1+az+a^2z^2}{1-bz-abz^2-a^2bz^3}\tag{2} \end{align*}</p> </blockquote> <p>$$ $$</p> <blockquote> <p>We conclude: The number of invalid words is given by (2). So, the number of valid words is the number of all words minus the number of invalid words. We obtain the generating function $A(z)$ \begin{align*} A(z)&amp;=\sum_{n=0}^\infty(a+b)^nz^n-\frac{1+az+a^2z^2}{1-bz-abz^2-a^2bz^3}\\ &amp;=\frac{1}{1-(a+b)z}-\frac{1+az+a^2z^2}{1-bz-abz^2-a^2bz^3}\\ &amp;=\frac{a^3z^3}{(1-(a+b)z)(1-bz-abz^2-a^2bz^3)}\\ &amp;=a^3z^3+a^3(a+2b)z^4+a^3(\color{blue}{1}a^2+\color{blue}{4}ab+\color{blue}{3}b^2)z^5\\ &amp;\qquad +a^3(a+b)^2(a+4b)z^6+a^3(a^4+7a^3b+18a^2b^2+16ab^3+5b^4)z^7+\cdots \end{align*}</p> <p>The expansion was done with the help of Wolfram Alpha. We see that e.g. the number of valid words of length $5$ is $\color{blue}{1}+\color{blue}{4}+\color{blue}{3}=8$.</p> </blockquote> <p>$$ $$</p> <blockquote> <p>Out of $2^5=32$ binary words of length $5$ there are $8$ valid words which are marked $\color{blue}{\text{blue}}$ in the table below.</p> <p>We obtain \begin{array}{cccc} \color{blue}{aaa}aa\qquad&amp;ab\color{blue}{aaa}\qquad&amp;b\color{blue}{aaa}a\qquad&amp;bb\color{blue}{aaa}\\ \color{blue}{aaa}ab\qquad&amp;abaab\qquad&amp;b\color{blue}{aaa}b\qquad&amp;bbaab\\ \color{blue}{aaa}ba\qquad&amp;ababa\qquad&amp;baaba\qquad&amp;bbaba\\ \color{blue}{aaa}bb\qquad&amp;ababb\qquad&amp;baabb\qquad&amp;bbabb\\ aabaa\qquad&amp;abbaa\qquad&amp;babaa\qquad&amp;bbbaa\\ aabab\qquad&amp;abbab\qquad&amp;babab\qquad&amp;bbbab\\ aabba\qquad&amp;abbba\qquad&amp;babba\qquad&amp;bbbba\\ aabbb\qquad&amp;abbbb\qquad&amp;babbb\qquad&amp;bbbbb\\ \end{array}</p> </blockquote> <p>We denote with $[z^n]$ the coefficient of $z^n$ in a series.</p> <blockquote> <p>We conclude</p> <ul> <li>The number of occurrences of wanted words of size $n$ is the coefficient of $z^n$ of $A(z)$ evaluated 
at $a=b=1$ and given as OEIS sequence <em><a href="http://oeis.org/A050231" rel="nofollow noreferrer">050231</a></em>.</li> </ul> <p>\begin{align*} \left.[z^n]A(z)\right|_{a=b=1} \end{align*}</p> <ul> <li><p>The wanted probability of valid words of size $n$ is the coefficient of $z^n$ of $A(z)$ evaluated at $a=0.7,b=0.3$. \begin{align*} \left.[z^n]A(z)\right|_{a=0.7,b=0.3} \end{align*}</p></li> <li><p>We obtain for $n=0$ up to $n=7$ \begin{array}{c|cl} n&amp;\left.[z^n]A(z)\right|_{a=b=1}&amp;\left.[z^n]A(z)\right|_{a=0.7,b=0.3}\\ \hline 0&amp;0&amp;0\\ 1&amp;0&amp;0\\ 2&amp;0&amp;0\\ 3&amp;1&amp;0.343\\ 4&amp;3&amp;0.4459\\ 5&amp;8&amp;0.5488\\ 6&amp;20&amp;0.6517\\ 7&amp;47&amp;0.7193\\ \end{array}</p></li> </ul> </blockquote> Answer 42: <p>I would propose an alternative - maybe easier - approach, trying to give a formula that provides a generalized answer for the probability of three events in a row, given the probability $p $ that the event occurs, as a function of the sample size $n$. As shown below, this approach may also be useful to address an additional issue of this problem highlighted in one of the comments, i.e. the sample size needed to "guarantee" a success with a given confidence level. </p> <p>Let us call $P (n) $ the probability that our event occurs three times in a row within a sequence of $n $ trials. Also, for each trial, let us denote as Y the case that our event occurs, and as N the case that it does not occur. So, in the specific case described in the OP, we are looking for the probability of getting a triple YYY in a sequence of $n $ elements when $p=0.7$.</p> <p>As a general rule, we can note that, passing from a sample size $n $ to a sample size $n+1$, the probability that our event occurs three times in a row increases only because there is the possibility that the triple YYY, not obtained in the first $n $ trials, occurs as a final triple in the sequence of $n+1$ trials. 
So, to calculate $P (n+1 )$ from $ P (n) $, we simply have to add the probability that the triple in a row is not obtained until a final sequence NYYY. Note that the N preceding the triple YYY is strictly necessary to satisfy the condition that the triple has not been obtained in the first $n $ trials. </p> <p>We can then start with $n=3$, for which we trivially have $P (3)=p^3$. For $n=4$, we have to add to $P (3) $ the probability to get a sequence NYYY, i.e. we have to add $p^3(1-p)$. Similarly, for $n=5$, we have to add to $P (4) $ the probability to get a final sequence NYYY (note that this is not affected by what happens in the first of the five trials), which means that we must add $p^3(1-p)$ as in the previous case. Lastly, for $n=6$, we have again to add to $P (5) $ the probability to get a final sequence NYYY (note that, as above, this is not affected by what happens in the first and in the second of the six trials). This leads to a further increase by $p^3(1-p)$. So, for this initial group of four values of $n$ in our probability progression, we directly obtain the recurrence</p> <p>$$P (n)=P (n-1)+ p^3 (1-p)\,\,\,\, \text {(for n=4 to 6)}$$</p> <p>Accordingly, for the specific case $p=0.7$, this formula gives the initial values </p> <p>\begin{aligned} P(3) &amp; =0.7^3=0.343 \\ P (4) &amp; = 0.7^3+0.3 \cdot 0.7^3 = 0.4459 \\ P (5) &amp; = 0.7^3+2\cdot 0.3 \cdot 0.7^3 = 0.5488 \\ P (6) &amp; = 0.7^3+3\cdot 0.3 \cdot 0.7^3 = 0.6517 \\ \end{aligned} </p> <p>already obtained in a previous answer via a different method.</p> <p>Now let us consider the values of $n \geq 7$. Here something changes, because the fact that the sequence ends with NYYY does not imply that the triple YYY has not occurred before, i.e. in the first $n-4$ trials. So, at each step, the probability of getting a final NYYY must be multiplied by the probability of failure in the first $n-4$ trials, which is $1-P (n-4 ) $. 
Therefore, for $n=7$ we have to add to $P (6) $ the probability given by $p^3(1-p) [1-P(3)]\,\,$, for $n=8$ we have to add to $P (7) $ the probability given by $p^3(1-p) [1-P(4)] \,\, $, and so on. This leads to the recurrence</p> <p>$$P (n)=P (n-1)+ p^3(1-p)[1-P(n-4)]\,\,\,\, \text{(for n}\, \geq \text {7)}$$</p> <p>We can now try to get a closed form for this last recurrence. Let us set, for simplicity, $p^3=j$ and $(1-p) =k \,\, $. The recurrence becomes </p> <p>$$P (n)=P (n-1)+ jk-jk \,P(n-4)\, \,$$</p> <p>Note that the four values of $P (n)$ already determined can be written as $$P (3)=j $$ $$P (4)=j +jk$$ $$P (5)=j +2jk$$ $$P (6)=j +3jk$$</p> <p>so that they contain only terms of degree $j $ or $jk $. Continuing our probability progression and focusing on the second group of four values of $n$ (from $n=7\,\, $ to $n=10 \,\, $), new terms of degree $j^2k $ and $j^2k^2$ appear when applying the recurrence above (for the sake of readability, here I directly provide the resulting expressions avoiding the rather tedious step-by-step calculations):</p> <p>\begin{aligned} P (7) &amp; = P(6)+ jk-jk P (3)=j+4jk-j^2k \\ P (8) &amp; = P(7)+ jk-jk P (4)= j+ 5jk -2j^2k-j^2k^2 \\ P (9) &amp; = P(8) + jk-jk P (5)= j+ 6jk -3j^2k-3j^2k^2 \\ P (10) &amp; = P (9) + jk-jk P (6)= j+ 7jk -4j^2k-6j^2k^2 \\ \end{aligned}</p> <p>Some pattern begins to emerge in these expansions. For the specific case $p=0.7$, $$P (7)\approx 0.7193$$ $$P (8)\approx 0.7763$$ $$P (9) \approx 0.8228$$ $$P (10) \approx 0.8586$$</p> <p>We can repeat the procedure, applying the same recurrence and continuing with the third group of four values of $n $ (i.e., $n=11$ to $14$). 
Again, new terms appear, in this case of degree $j^3k^2$ and $j^3 k^3$ (as above, I directly provide the resulting expressions):</p> <p>\begin{align*} P(11) &amp;=j+8jk-5j^2k-10j^2k^2+j^3k^2 \\ P(12) &amp;=j+9jk-6j^2k-15j^2k^2+3j^3k^2+j^3k^3 \\ P(13) &amp; = j+10jk-7j^2k -21j^2k^2+6j^3k^2+4j^3k^3 \\ P (14) &amp; = j+11jk-8j^2k-28j^2k^2+10j^3k^2+10j^3k^3\\ \end{align*}</p> <p>The final pattern is therefore as follows. Our expansions of $P (n) $ include:</p> <ul> <li><p>in the odd positions, $\lfloor (n+1)/4 \rfloor \,\, $ terms of the form $$(-1)^{i-1} \binom{n-3i}{i-1} j^i k^{i-1} \,\, $$ where $$i=1,2... \lfloor (n+1)/4 \rfloor \,\, $$ For example, in the expansion of $P (14)$ there are $ \lfloor (14+1)/4 \rfloor =3 \, \,\, $ terms of this form, which are $j $, $-8j^2k $, and $10 j^3k^2$;</p></li> <li><p>in the even positions, $\lfloor n/4 \rfloor \,\, $ terms of the form $$(-1)^{i-1} \binom{n-3i}{i} j^i k^i \,\, $$ where $$i=1,2... \lfloor n/4 \rfloor$$</p> <p>Accordingly, taking again for example the expansion of $P (14)$, there are $ \lfloor 14/4 \rfloor =3 \,$ terms of this form, which are $11jk$, $-28j^2k^2$, and $10 j^3k^3$.</p></li> </ul> <p>Therefore, we obtain the following general expression for $P (n) $:</p> <p>$$P (n)=\sum_{i=1}^{\lfloor (n+1)/4 \rfloor } (-1)^{i-1} \binom{n-3i}{i-1} j^{i}k^{i-1} +\sum_{i=1}^{\lfloor n/4 \rfloor } (-1)^{i-1} \binom{n-3i}{i} j^i k^i $$</p> <p>Recalling that $j=p^3$ and $k=(1-p) $, we finally get</p> <p>$$P (n)=\sum_{i=1}^{\lfloor (n+1)/4 \rfloor } (-1)^{i-1} \binom{n-3i}{i-1} p^{3i}(1-p)^{i-1} + \sum_{i=1}^{\lfloor n/4 \rfloor } (-1)^{i-1} \binom{n-3i}{i} p^{3i} (1-p)^i $$</p> <p>Setting $p=0.7$, we obtain the probability asked for in the OP, which is</p> <p>$$P (n)=\sum_{i=1}^{\lfloor (n+1)/4 \rfloor } (-1)^{i-1} \binom{n-3i}{i-1} 0.7^{3i} \cdot 0.3^{i-1} + \sum_{i=1}^{\lfloor n/4 \rfloor } (-1)^{i-1} \binom{n-3i}{i} 0.7^{3i}\cdot 0.3^i $$</p> <p>Also note that the two sums have the same number of terms when $n \not\equiv -1 \,\, 
\text {mod}\,\, 4$, whereas the first sum has one term more than the second one when $n \equiv -1 \,\,\text {mod} \, 4$, because in this case $(n+1)/4$ is an integer. This term is given by</p> <p>$$\displaystyle (-1)^{\frac{(n+1)}{4}-1} \binom{n-\frac{3(n+1)}{4}}{ \frac{(n+1)}{4}-1}\, 0.7^{\frac{3(n+1)}{4}} \, 0.3^{\frac{(n+1)}{4}} $$ $$\displaystyle = (-1)^{\frac{(n-3)}{4}} \binom{\frac{(n-3)}{4}}{\frac{(n-3)}{4}}\, 0.7^{\frac{3(n+1)}{4}} \, 0.3^{\frac{(n+1)}{4}} $$ $$\displaystyle = (-1)^{\frac{(n-3)}{4}} \, 0.7^{\frac{3(n+1)}{4}} \, 0.3^{\frac{(n+1)}{4}} $$</p> <p>So the probability asked in the OP can alternatively be expressed as</p> <p>$$P (n)=\sum_{i=1}^{\lfloor n/4 \rfloor } (-1)^{i-1} \binom{n-3i}{i}\, 0.7^{3i}\, 0.3^{i} \left( \frac {i}{0.3(n-4 i+1)} +1 \right) + S$$</p> <p>where</p> <p>$$S = \begin{cases} 0 &amp; \mbox{if } n \not\equiv -1 \, \text {mod}\, 4\\ (-1)^{\frac{(n-3)}{4}} \, 0.7^{\frac{3(n+1)}{4}} \, 0.3^{\frac{(n+1)}{4}} &amp; \mbox{if } n \equiv -1 \, \text {mod}\, 4 \end{cases}$$</p> <p>It can also be observed that $|S|$ is relatively small and decreases rapidly for increasing $n$: its value is $\approx 0.0106\, \,\, $ for $n=7$, becoming $\approx 0.00109\, \,\, $ for $n=11$ and $\approx 0.000112\,\,\, $ for $n=15$, so that its contribution to the overall probability is nearly negligible. The last expansion without the $S$ term can therefore be considered a valid approximation even for the case $n \equiv -1 \, \text {mod}\, 4$. </p> <p>The probabilities of three events in a row given by these formulas for $p=0.7$ and $n=3$ to $10$ have already been provided above. As expected, for increasing $n $, the value of $P (n)$ approaches $1$. 
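As a numerical cross-check (this code is an editorial sketch, not part of the original answer), the same probabilities can be generated by iterating the step used above, adding $p^3(1-p)[1-P(n-4)]$ at each stage and treating $P(m)$ as $0$ for $m&lt;3$:

```python
# Iterate P(n) = P(n-1) + p^3*(1-p)*(1 - P(n-4)), with P(3) = p^3 and
# P(m) = 0 for m < 3, so the first few steps add the full p^3*(1-p),
# reproducing P(4)..P(6) above.
def P(n, p=0.7):
    vals = [0.0] * max(n + 1, 4)
    vals[3] = p ** 3
    for m in range(4, n + 1):
        vals[m] = vals[m - 1] + p ** 3 * (1 - p) * (1 - vals[m - 4])
    return vals[n]

print([round(P(n), 4) for n in range(3, 11)])
```

This reproduces the values $0.343$, $0.4459$, $0.5488$, $0.6517$, $0.7193$, $0.7763$, $0.8228$, $0.8586$ listed above, and iterating further confirms that $n=15$ is the first sample size with $P(n) \geq 0.95$.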
The first value for which the probability achieves $95\%$ is $n=15$, since $P (15) \approx 0.9553\,\,\, $, as confirmed by WA <a href="http://www.wolframalpha.com/input/?i=0.7%5E(12)%200.3%5E(4)%20%2B%20%20sum%20%20%5B(-1)%5E(i-1)%20(((15-3i)%20choose%20i)%200.7%5E(3i)%200.3%5Ei%20(i%2F(0.3(15-4%20i%2B1))%2B1)%20%20%20),%7Bi,%2015%2F4%7D%5D" rel="nofollow noreferrer">here</a>. The probability achieves $97.5\%$ and $99\%$ for $n=18$ and $n=22$, respectively, since $P(18) \approx 0.9772\,\,\, $ and $P(22) \approx 0.9909\,\,\, $, as confirmed by WA <a href="http://www.wolframalpha.com/input/?i=sum%20%20%5B(-1)%5E(i-1)%20(((18-3i)%20choose%20i)%200.7%5E(3i)%200.3%5Ei%20(i%2F(0.3(18-4%20i%2B1))%2B1)%20%20%20),%7Bi,%2018%2F4%7D%5D" rel="nofollow noreferrer">here</a> and <a href="http://www.wolframalpha.com/input/?i=sum%20%20%5B(-1)%5E(i-1)%20(((22-3i)%20choose%20i)%200.7%5E(3i)%200.3%5Ei%20(i%2F(0.3(22-4%20i%2B1))%2B1)%20%20%20),%7Bi,%2022%2F4%7D%5D" rel="nofollow noreferrer">here</a>. </p> Answer 43: <p>The issue with looking at opportunities for success is that they are not completely independent. For example, in a row of size 4, if the first three aren't all successes, this greatly decreases the chance that the last three aren't all successes (the only way this can happen is if only the first trial was a failure).</p> <p>You can still get this approach to work (via correcting for events which are dependent via some sort of inclusion-exclusion), but generally a cleaner method for these types of problems is to write down a recurrence (or more generally, model them as a <a href="https://en.wikipedia.org/wiki/Markov_chain" rel="nofollow noreferrer">finite state Markov chain</a>).</p> <p>Let's let $P(N)$ be the probability a row of length $N$ has no 3 consecutive successes. To help us write down a recurrence, we'll break $P(N)$ down into 3 smaller probabilities. 
Let's let $P(N, 0)$ be the probability a row of length $N$ has no 3 consecutive successes AND that the last attempt was a failure. Similarly, let $P(N,1)$ be the probability that a row of length $N$ has no 3 consecutive successes AND that the last attempt was a success, but the second-last attempt was a failure, and finally, let $P(N,2)$ be the probability that a row of length $N$ has no 3 consecutive successes AND the last two attempts were successes, but the third-to-last attempt was a failure. Intuitively, these different cases measure how "close" we are to getting 3 successes in a row. It's not too hard to see that $P(N) = P(N,0) + P(N,1) + P(N,2)$, since if the last three attempts are all successes, you've already won.</p> <p>Now, let's try to write $P(N+1, 0)$ in terms of smaller cases. The probability you have no 3 consecutive successes in a row of length $N+1$ and that your last attempt was a failure is simply the probability that there are no 3 consecutive successes in the first $N$ tries and that you fail the $N+1$st try. This gives us the equation:</p> <p>$$P(N+1,0) = 0.3P(N) = 0.3P(N,0) + 0.3P(N,1) + 0.3P(N,2)$$</p> <p>Via similar logic, we can show that $P(N+1,1)$ and $P(N+1,2)$ satisfy the following equations:</p> <p>$$P(N+1,1) = 0.7P(N, 0)$$</p> <p>and</p> <p>$$P(N+1,2) = 0.7P(N, 1)$$</p> <p>Substituting these into the original equation for $P(N+1,0)$, we get:</p> <p>$$P(N+1, 0) = 0.3P(N,0) + 0.21P(N-1, 0) + 0.147P(N-2,0)$$</p> <p>Now, this is just a standard <a href="https://en.wikipedia.org/wiki/Recurrence_relation" rel="nofollow noreferrer">linear recurrence relation</a>. 
If $\lambda_1$, $\lambda_2$, and $\lambda_3$ are the roots of $z^3 - 0.3z^2-0.21z-0.147=0$, then there are some constants $a_1, a_2, a_3$ which you can solve for so that</p> <p>$$P(N, 0) = a_1\lambda_1^N + a_2\lambda_2^N + a_3\lambda_3^N$$</p> <p>Since $P(N) = P(N+1,0)/0.3$, this gives a similar formula for $P(N)$ (and ultimately for $1-P(N)$, the probability a success does occur 3 consecutive times). </p> Answer 44: <p>Let $p=0.7$ be the probability of success, $\mathrm{f}(n)$ be the probability of a sequence of length $n$ having 3 consecutive successes and $\mathrm{g}(n)=1-\mathrm{f}(n)$, where $\mathrm{g}(n)$ corresponds to the probability of a sequence of length $n$ having at most 2 consecutive successes.</p> <p>It holds $\mathrm{g}(0) = \mathrm{g}(1) = \mathrm{g}(2) = 1$.</p> <p>Then $\mathrm{g}(n) = (1-p)\mathrm{g}(n-1) + (1-p)p\mathrm{g}(n-2) + (1-p)p^2\mathrm{g}(n-3)$.</p> <p>This holds because all sequences of length $n$ that have at most 2 consecutive successes can be constructed from (1) all such sequences of length $n-1$ suffixed by a failure plus; (2) all such sequences of length $n-2$ suffixed by a failure and a success, plus; (3) all such sequences of length $n-3$ suffixed by a failure and two successes.</p> Answer 45: <p>I saw that the answers to the question include either regex (Automata Theory) or linear recurrences (Linear Algebra), so for those interested I decided to post <a href="https://people.math.carleton.ca/%7Ebsteinbg/classes/automata2009/generatingfunctions.pdf" rel="nofollow noreferrer">this paper</a> on generating functions that neatly explains the interconnection between them.</p> Answer 46: <blockquote> <p>Let's denote the event under consideration with $a$ \begin{align*} P(X=a)=0.7 \end{align*} and we denote the complementary event with $b$.</p> </blockquote> <p>We are looking for the words of length $n$ and their probability of occurrence which do not contain three or more consecutive $a$'s. 
The result is $1$ minus this probability.</p> <blockquote> <p>We can describe the set of these invalid words as the words built from the alphabet $V=\{a,b\}$ which contain at most one or two consecutive $a$'s.</p> <p>These are words</p> <ul> <li><p>starting with zero or more $b$'s:$$b^*$$</p></li> <li><p>followed by zero or more occurrences of $a$ or $aa$ each followed by one or more $b$'s: $$(ab^+|aab^+)^*$$</p></li> <li><p>and terminating with zero, one or two $a$'s $$(\varepsilon|a|aa)$$</p></li> </ul> <p>We obtain</p> <p>\begin{align*} b^*(ab^+|aab^+)^*(\varepsilon|a|aa)\tag{1} \end{align*}</p> </blockquote> <p>The regular expression (1) generates all invalid words in a <em>unique</em> manner. In such cases we can use it to derive a generating function $$\sum_{n=0}^\infty a_n z^n$$ with $a_n$ giving the number of invalid words of length $n$.</p> <p>In order to do so all we need to know is the <em><a href="https://en.wikipedia.org/wiki/Geometric_series#Geometric_power_series" rel="nofollow noreferrer">geometric series expansion</a></em> since the $star$ operator \begin{align*} a^*=\left(\varepsilon|a|a^2|a^3|\cdots\right)\qquad\text{ translates to }\qquad 1+z+z^2+z^3+\cdots=\frac{1}{1-z} \end{align*}</p> <p>Accordingly $a^+=aa^*$ translates to $\frac{z}{1-z}$ and alternatives like $(\varepsilon|a|aa)$ can be written as $1+z+z^2$.</p> <blockquote> <p>We translate the regular expression (1) into a generating function (by mixing up somewhat the symbolic to provide some intermediate steps).</p> <p>Since we want to calculate the probabilities of occurrence of $P(X=a)$ we keep track of $a$ and $b$ by respecting them as corresponding factors in the generating function.</p> <p>\begin{align*} b^*\left(ab^+|aab^+\right)^*(\varepsilon|a|aa) &amp;\longrightarrow \quad \frac{1}{1-bz}\left(\left.\frac{abz^2}{1-bz}\right|\frac{a^2bz^3}{1-bz}\right)^*\left(1+az+a^2z^2\right)\\ &amp;\longrightarrow \quad \frac{1}{1-bz}\left(\frac{abz^2+a^2bz^3}{1-bz}\right)^*(1+az+a^2z^2)\\ 
&amp;\longrightarrow \quad \frac{1}{1-bz}\cdot\frac{1}{1-\frac{abz^2+a^2bz^3}{1-bz}}(1+az+a^2z^2)\\ &amp;\quad\quad=\frac{1+az+a^2z^2}{1-bz-abz^2-a^2bz^3}\tag{1} \end{align*}</p> </blockquote> <p>$$ $$</p> <blockquote> <p>We conclude: The number of invalid words is given by (1). So, the number of valid words is the number of all words minus the number of invalid words. We obtain the generating function $A(z)$ \begin{align*} A(z)&amp;=\sum_{n=0}^\infty(a+b)^nz^n-\frac{1+az+a^2z^2}{1-bz-az^2-a^2z^3}\\ &amp;=\frac{1}{1-(a+b)z}-\frac{1+az+a^2z^2}{1-bz-az^2-a^2z^3}\\ &amp;=\frac{a^3z^3}{(1-(a+b)z)(1-bz-abz^2-a^2bz^3)}\\ &amp;=a^3z^3+a^3(a+2b)z^4+a^3(\color{blue}{1}a^2+\color{blue}{4}ab+\color{blue}{3}b^2)z^5\\ &amp;\qquad a^3(a+b)^2(a+4b)z^6+a^3(a^4+7a^3b+18a^2b^2+16ab^3+5b^4)z^7+\cdots \end{align*}</p> <p>The expansion was done with the help of Wolfram Alpha. We see that e.g. the number of valid words of length $5$ is $\color{blue}{1}+\color{blue}{4}+\color{blue}{3}=8$.</p> </blockquote> <p>$$ $$</p> <blockquote> <p>Out of $2^5=32$ binary words of length $5$ there are $8$ valid words which are marked $\color{blue}{\text{blue}}$ in the table below.</p> <p>We obtain \begin{array}{cccc} \color{blue}{aaa}aa\qquad&amp;ab\color{blue}{aaa}\qquad&amp;b\color{blue}{aaa}a\qquad&amp;bb\color{blue}{aaa}\\ \color{blue}{aaa}ab\qquad&amp;abaab\qquad&amp;b\color{blue}{aaa}b\qquad&amp;bbaab\\ \color{blue}{aaa}ba\qquad&amp;ababa\qquad&amp;baaba\qquad&amp;bbaba\\ \color{blue}{aaa}bb\qquad&amp;ababb\qquad&amp;baabb\qquad&amp;bbabb\\ aabaa\qquad&amp;abbaa\qquad&amp;babaa\qquad&amp;bbbaa\\ aabab\qquad&amp;abbab\qquad&amp;babab\qquad&amp;bbbab\\ aabba\qquad&amp;abbba\qquad&amp;babba\qquad&amp;bbbba\\ aabbb\qquad&amp;abbbb\qquad&amp;babbb\qquad&amp;bbbbb\\ \end{array}</p> </blockquote> <p>We denote with $[z^n]$ the coefficient of $z^n$ in a series.</p> <blockquote> <p>We conclude</p> <ul> <li>The number of occurrences of wanted words of size $n$ is the coefficient of $z^n$ of $A(z)$ evaluated 
at $a=b=1$ and given as OEIS sequence <em><a href="http://oeis.org/A050231" rel="nofollow noreferrer">050231</a></em>.</li> </ul> <p>\begin{align*} \left.[z^n]A(z)\right|_{a=b=1} \end{align*}</p> <ul> <li><p>The wanted probability of valid words of size $n$ is the coefficient of $z^n$ of $A(z)$ evaluated at $a=0.7,b=0.3$. \begin{align*} \left.[z^n]A(z)\right|_{a=0.7,b=0.3} \end{align*}</p></li> <li><p>We obtain for $n=0$ up to $n=7$ \begin{array}{c|cl} n&amp;\left.[z^n]A(z)\right|_{a=b=1}&amp;\left.[z^n]A(z)\right|_{a=0.7,b=0.3}\\ \hline 0&amp;0&amp;0\\ 1&amp;0&amp;0\\ 2&amp;0&amp;0\\ 3&amp;1&amp;0.343\\ 4&amp;3&amp;0.4459\\ 5&amp;8&amp;0.5488\\ 6&amp;20&amp;0.6517\\ 7&amp;47&amp;0.7193\\ \end{array}</p></li> </ul> </blockquote> Answer 47: <p>I would propose an alternative - maybe easier - approach, trying to give a formula that provides a generalized answer for the probability of three events in a row, given the probability $p $ that the event occurs, as a function of the sample size $n$. As shown below, this approach may also be useful to determine an additional issue of this problem highlihted in one of the comments, i.e. the sample size needed to "guarantee" a success with a given confidence level. </p> <p>Let us call $P (n) $ the probability that our event occurs three times in a row within a sequence of $n $ trials. Also, for each trial, let us denote as Y the case that the our event occurs, and as N the case that it does not occur. So, in the specific case described in the OP, we are looking for the probability of getting a triple YYY in a sequence of $n $ elements when $p=0.7$.</p> <p>As a general rule, we can note that, passing from a sample size $n $ to a sample size $n+1$, the probability that our event occurs three times in a row increases only because there is the possibility that the triple YYY, not obtained in the first $n $ trials, occurs as a final triple in the sequence of $n+1$ trials. 
So, to calculate $P (n+1 )$ from $ P (n) $, we simply have to add the probability that the triple in a row is not obtained until a final sequence NYYY. Note that the N preceding the triple YYY is strictly necessary to satisfy the condition that the triple has not been obtained in the first $n $ trials. </p> <p>We can then start with $n=3$, for which we trivially have $P (3)=p^3$. For $n=4$, we have to add to $P (3) $ the probability to get a sequence NYYY, i.e. we have to add $p^3(1-p)$. Similarly, for $n=5$, we have to add to $P (4) $ the probability to get a final sequence NYYY (note that this is not affected by what happens in the first of the five trials), which means that we must add $p^3(1-p)$ as in the previous case. Lastly, for $n=6$, we have again to add to $P (5) $ the probability to get a final sequence NYYY (note that, as above, this is not affected by what happens in the first and in the second of the six trials). This leads to a further increase by $p^3(1-p)$. So, for this initial group of four values of in our probability progression, we directly obtain the recurrence</p> <p>$$P (n+1)=P (n)+ p^3 (1-p)\,\,\,\, \text {(for n=3 to 6)}$$</p> <p>Accordingly, for the specific case $p=0.7$, this formula gives the initial values </p> <p>\begin{aligned} P(3) &amp; =0.7^3=0.343 \\ P (4) &amp; = 0.7^3+0.3 \cdot 0.7^3 = 0.4459 \\ P (5) &amp; = 0.7^3+2\cdot 0.3 \cdot 0.7^3 = 0.5488 \\ P (6) &amp; = 0.7^3+3\cdot 0.3 \cdot 0.7^3 = 0.6517 \\ \end{aligned} </p> <p>already obtained in a previous answer via a different method.</p> <p>Now let us consider the values of $n \geq 7$. Here something changes, because the fact that the sequence ends with NYYY does not imply that the triple YYY has not occurred before, i.e. in the first $n-4$ trials. So, at each step, the probability of getting a final NYYY must be multiplied to the probability of failure in the first $n-4$ trials, which is $1-P (n-4 ) $. 
Therefore, for $n=7$ we have to add to $P (6) $ the probability given by $p^3(1-p) [1-P(3)]\,\,$, for $n=8$ we have to add to $P (7) $ the probability given by $p^3(1-p) [1-P(4)] \,\, $, and so on. This leads to the recurrence</p> <p>$$P (n+1)=P (n)+ p^3(1-p)[1-P(n-4)]\,\,\,\, \text{(for n}\, \geq \text {7)}$$</p> <p>We can now try to get a closed form for this last recurrence. Let us set, by simplicity, $p^3=j$ and $(1-p) =k \,\, $. The recurrence becomes </p> <p>$$P (n+1)=P (n)+ jk-jk \,P(n-4)\, \,$$</p> <p>Note that the four values of $P (n)$ already determined can be written as $$P (3)=j $$ $$P (4)=j +jk$$ $$P (5)=j +2jk$$ $$P (6)=j +3jk$$</p> <p>so that they contain only terms of degree $j $ or $jk $. Continuing our probability progression and focusing on the second group of four values of $n$ (from $n=7\,\, $ to $n=10 \,\, $), new terms of degree $j^2k $ and $j^2k^2$ appear applying the recurrence above (for the sake of readability, here I directly provide the resulting expressions avoiding the rather tedious step-by-step calcuations):</p> <p>\begin{aligned} P (7) &amp; = P(6)+ jk-jk P (3)=j+4jk-j^2k \\ P (8) &amp; = P(7)+ jk-jk P (4)= j+ 5jk -2j^2k-j^2k^2 \\ P (9) &amp; = P(8) + jk-jk P (5)= j+ 6jk -3j^2k-3j^2k^2 \\ P (10) &amp; = P (9) + jk-jk P (6)= j+ 7jk -4j^2k-6j^2k^2 \\ \end{aligned}</p> <p>Some pattern begins to emerge in these expansions. For the specific case $p=0.7$, $$P (7)\approx 0.7193$$ $$P (8)\approx 0.7763$$ $$P (9) \approx 0.8228$$ $$P (10) \approx 0.8586$$</p> <p>We can repeat the procedure, applying the same recurrence and continuing with the third group of four values of $n $ (i.e., $n=11$ to $14$). 
Again, new terms appear, in this case of degree $j^3k^2$ and $j^3 k^3$ (as above, I directly provide the resulting expressions):</p> <p>\begin{align*} P(11) &amp;=j+8jk-5j^2k-10j^2k^2+j^3k^2 \\ P(12) &amp;=j+9jk-6j^2k-15j^2k^2+3j^3k^2+j^3k^3 \\P(13) &amp; = j+10jk-7j^2k -21j^2k^2+6j^3k^2+4j^3k^3 \\ P (14) &amp; = j+11jk-8j^2k-28j^2k^2+10j^3k^2+10j^3k^3\\ \end{align*}</p> <p>The final pattern is therefore as follows. Our expansions of $P (n) $ include:</p> <ul> <li><p>in the odd positions, $\lfloor (n+1)/4 \rfloor \,\, $ terms of the form $$(-1)^{i-1} \binom{n-3i}{i-1} j^i k^{i-1} \,\, $$ where $$i=1,2,\ldots, \lfloor (n+1)/4 \rfloor \,\, $$ For example, in the expansion of $P (14)$ there are $ \lfloor (14+1)/4 \rfloor =3 \, \,\, $ terms of this form, which are $j $, $-8j^2k $, and $10 j^3k^2$;</p></li> <li><p>in the even positions, $\lfloor n/4 \rfloor \,\, $ terms of the form $$(-1)^{i-1} \binom{n-3i}{i} j^i k^i \,\, $$ where $$i=1,2,\ldots, \lfloor n/4 \rfloor$$</p> <p>Accordingly, taking again for example the expansion of $P (14)$, there are $ \lfloor 14/4 \rfloor =3 \,$ terms of this form, which are $11jk$, $-28j^2k^2$, and $10 j^3k^3$.</p></li> </ul> <p>Therefore, we obtain the following general expression for $P (n) $:</p> <p>$$P (n)=\sum_{i=1}^{\lfloor (n+1)/4 \rfloor } (-1)^{i-1} \binom{n-3i}{i-1} j^{i}k^{i-1} +\sum_{i=1}^{\lfloor n/4 \rfloor } (-1)^{i-1} \binom{n-3i}{i} j^i k^i $$</p> <p>Recalling that $j=p^3$ and $k=(1-p) $, we finally get</p> <p>$$P (n)=\sum_{i=1}^{\lfloor (n+1)/4 \rfloor } (-1)^{i-1} \binom{n-3i}{i-1} p^{3i}(1-p)^{i-1} + \sum_{i=1}^{\lfloor n/4 \rfloor } (-1)^{i-1} \binom{n-3i}{i} p^{3i} (1-p)^i $$</p> <p>Setting $p=0.7$, we obtain the probability asked for in the OP, which is</p> <p>$$P (n)=\sum_{i=1}^{\lfloor (n+1)/4 \rfloor } (-1)^{i-1} \binom{n-3i}{i-1} 0.7^{3i} \cdot 0.3^{i-1} + \sum_{i=1}^{\lfloor n/4 \rfloor } (-1)^{i-1} \binom{n-3i}{i} 0.7^{3i}\cdot 0.3^i $$</p> <p>Also note that the two sums have the same number of terms when $n \not\equiv -1 \,\, \text {mod}\,\, 4$, whereas the first sum has one term more than the second one when $n \equiv -1 \,\,\text {mod} \, 4$, because in this case $(n+1)/4$ is an integer. This term is given by</p> <p>$$\displaystyle (-1)^{\frac{(n+1)}{4}-1} \binom{n-\frac{3(n+1)}{4}}{ \frac{(n+1)}{4}-1}\, 0.7^{\frac{3(n+1)}{4}} \, 0.3^{\frac{(n+1)}{4}-1} $$ $$\displaystyle = (-1)^{\frac{(n-3)}{4}} \binom{\frac{(n-3)}{4}}{\frac{(n-3)}{4}}\, 0.7^{\frac{3(n+1)}{4}} \, 0.3^{\frac{(n-3)}{4}} $$ $$\displaystyle = (-1)^{\frac{(n-3)}{4}} \, 0.7^{\frac{3(n+1)}{4}} \, 0.3^{\frac{(n-3)}{4}} $$</p> <p>So the probability asked for in the OP can be alternatively expressed as</p> <p>$$P (n)=\sum_{i=1}^{\lfloor n/4 \rfloor } (-1)^{i-1} \binom{n-3i}{i}\, 0.7^{3i}\, 0.3^{i} \left( \frac {i}{0.3(n-4 i+1)} +1 \right) + S$$</p> <p>where</p> <p>$$S = \begin{cases} 0 &amp; \mbox{if } n \not\equiv -1 \, \text {mod}\, 4\\ (-1)^{\frac{(n-3)}{4}} \, 0.7^{\frac{3(n+1)}{4}} \, 0.3^{\frac{(n-3)}{4}} &amp; \mbox{if } n \equiv -1 \, \text {mod}\, 4 \end{cases}$$</p> <p>It could also be observed that $|S|$ is relatively small and rapidly decreases for increasing $n$: its value is $\approx 0.0353\, \,\, $ for $n=7$, becoming $\approx 0.00363\, \,\, $ for $n=11$ and $\approx 0.000374\,\,\, $ for $n=15$, so that its contribution to the overall probability is small. The last expansion without the $S$ term can therefore be considered a reasonable approximation even for the case $n \equiv -1 \, \text {mod}\, 4$. </p> <p>The probabilities of three events in a row given by these formulas for $p=0.7$ and $n=3$ to $10$ have already been provided above. As expected, for increasing $n $, the value of $P (n)$ approaches $1$. 
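These expressions are easy to check numerically. A small Python sketch (the function names are mine, not part of the answer) iterates the recurrence $P(n)=P(n-1)+p^3(1-p)[1-P(n-4)]$, which with $P(k)=0$ for $k&lt;3$ also reproduces the first group of values, and compares it with the double-sum closed form:

```python
from math import comb

def P_rec(n, p=0.7):
    # P(n) = P(n-1) + p^3 (1-p) [1 - P(n-4)], with P(k) = 0 for k < 3.
    # For n = 4..6 this reduces to adding p^3 (1-p), as in the first group.
    P = {k: 0.0 for k in range(3)}
    P[3] = p**3
    for m in range(4, n + 1):
        P[m] = P[m - 1] + p**3 * (1 - p) * (1 - P[m - 4])
    return P[n]

def P_sum(n, p=0.7):
    # The two-sum closed form derived above, with j = p^3 and k = 1 - p.
    j, k = p**3, 1 - p
    odd = sum((-1)**(i - 1) * comb(n - 3*i, i - 1) * j**i * k**(i - 1)
              for i in range(1, (n + 1)//4 + 1))
    even = sum((-1)**(i - 1) * comb(n - 3*i, i) * j**i * k**i
               for i in range(1, n//4 + 1))
    return odd + even

print(round(P_rec(10), 4), round(P_sum(10), 4))  # 0.8586 0.8586
```

For $p=0.7$ the two agree to machine precision and match the values $P(7)\approx 0.7193$ through $P(10)\approx 0.8586$ listed above.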
The first value for which the probability achieves $95\%$ is $n=15$, since $P (15) \approx 0.9549\,\,\, $ (cf. the WA computation <a href="http://www.wolframalpha.com/input/?i=0.7%5E(12)%200.3%5E(4)%20%2B%20%20sum%20%20%5B(-1)%5E(i-1)%20(((15-3i)%20choose%20i)%200.7%5E(3i)%200.3%5Ei%20(i%2F(0.3(15-4%20i%2B1))%2B1)%20%20%20),%7Bi,%2015%2F4%7D%5D" rel="nofollow noreferrer">here</a>). The probability achieves $97.5\%$ and $99\%$ for $n=18$ and $n=22$, respectively, since $P(18) \approx 0.9772\,\,\, $ and $P(22) \approx 0.9909\,\,\, $, as confirmed by WA <a href="http://www.wolframalpha.com/input/?i=sum%20%20%5B(-1)%5E(i-1)%20(((18-3i)%20choose%20i)%200.7%5E(3i)%200.3%5Ei%20(i%2F(0.3(18-4%20i%2B1))%2B1)%20%20%20),%7Bi,%2018%2F4%7D%5D" rel="nofollow noreferrer">here</a> and <a href="http://www.wolframalpha.com/input/?i=sum%20%20%5B(-1)%5E(i-1)%20(((22-3i)%20choose%20i)%200.7%5E(3i)%200.3%5Ei%20(i%2F(0.3(22-4%20i%2B1))%2B1)%20%20%20),%7Bi,%2022%2F4%7D%5D" rel="nofollow noreferrer">here</a>. </p> Answer 48: <p>The issue with looking at opportunities for success is that they are not completely independent. For example, in a row of size 4, if the first three aren't all successes, this greatly decreases the chance that the last three are all successes (the only way that can still happen is if only the first trial was a failure).</p> <p>You can still get this approach to work (via correcting for events which are dependent via some sort of inclusion-exclusion), but generally a cleaner method for these types of problems is to write down a recurrence (or more generally, model them as a <a href="https://en.wikipedia.org/wiki/Markov_chain" rel="nofollow noreferrer">finite state Markov chain</a>).</p> <p>Let's let $P(N)$ be the probability a row of length $N$ has no 3 consecutive successes. To help us write down a recurrence, we'll break $P(N)$ down into 3 smaller probabilities. 
Let's let $P(N, 0)$ be the probability a row of length $N$ has no 3 consecutive successes AND that the last attempt was a failure. Similarly, let $P(N,1)$ be the probability that a row of length $N$ has no 3 consecutive successes AND that the last attempt was a success, but the second-last attempt was a failure, and finally, let $P(N,2)$ be the probability that a row of length $N$ has no 3 consecutive successes AND the last two attempts were successes, but the third-to-last attempt was a failure. Intuitively, these different cases measure how "close" we are to getting 3 successes in a row. It's not too hard to see that $P(N) = P(N,0) + P(N,1) + P(N,2)$, since if the last three attempts are all successes, you've already won.</p> <p>Now, let's try to write $P(N+1, 0)$ in terms of smaller cases. The probability you have no 3 consecutive successes in a row of length $N+1$ and that your last attempt was a failure is simply the probability that there are no 3 consecutive successes in the first $N$ tries and that you fail the $N+1$st try. This gives us the equation:</p> <p>$$P(N+1,0) = 0.3P(N) = 0.3P(N,0) + 0.3P(N,1) + 0.3P(N,2)$$</p> <p>Via similar logic, we can show that $P(N+1,1)$ and $P(N+1,2)$ satisfy the following equations:</p> <p>$$P(N+1,1) = 0.7P(N, 0)$$</p> <p>and</p> <p>$$P(N+1,2) = 0.7P(N, 1)$$</p> <p>Substituting these into the original equation for $P(N+1,0)$, we get:</p> <p>$$P(N+1, 0) = 0.3P(N,0) + 0.21P(N-1, 0) + 0.147P(N-2,0)$$</p> <p>Now, this is just a standard <a href="https://en.wikipedia.org/wiki/Recurrence_relation" rel="nofollow noreferrer">linear recurrence relation</a>. 
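Before solving this in closed form, the three-state recurrence can simply be iterated; a minimal Python sketch (the function name is mine):

```python
def no_three(N, p=0.7):
    # s[k] = probability of no 3 consecutive successes so far,
    # with exactly k trailing successes (k = 0, 1, 2).
    s = [1.0, 0.0, 0.0]
    for _ in range(N):
        s = [(1 - p) * sum(s), p * s[0], p * s[1]]
    return sum(s)  # this is P(N) in the answer's notation

print(round(1 - no_three(7), 4))  # 0.7193
```

Here $1 - \mathrm{no\_three}(N)$ is the probability that three successes in a row do occur; for $N=7$ it gives about $0.7193$, agreeing with the values obtained by other methods in this thread.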
If $\lambda_1$, $\lambda_2$, and $\lambda_3$ are the roots of $z^3 - 0.3z^2-0.21z-0.147=0$, then there are some constants $a_1, a_2, a_3$ which you can solve for so that</p> <p>$$P(N, 0) = a_1\lambda_1^N + a_2\lambda_2^N + a_3\lambda_3^N$$</p> <p>Since $P(N) = P(N+1,0)/0.3$, this gives a similar formula for $P(N)$ (and ultimately for $1-P(N)$, the probability a success does occur 3 consecutive times). </p> Answer 49: <p>Let $p=0.7$ be the probability of success, $\mathrm{f}(n)$ be the probability of a sequence of length $n$ having 3 consecutive successes and $\mathrm{g}(n)=1-\mathrm{f}(n)$, where $\mathrm{g}(n)$ corresponds to the probability of a sequence of length $n$ having at most 2 consecutive successes.</p> <p>We have $\mathrm{g}(0) = \mathrm{g}(1) = \mathrm{g}(2) = 1$.</p> <p>Then $\mathrm{g}(n) = (1-p)\mathrm{g}(n-1) + (1-p)p\mathrm{g}(n-2) + (1-p)p^2\mathrm{g}(n-3)$.</p> <p>This holds because all sequences of length $n$ that have at most 2 consecutive successes can be constructed from (1) all such sequences of length $n-1$ suffixed by a failure; plus (2) all such sequences of length $n-2$ suffixed by a failure and a success; plus (3) all such sequences of length $n-3$ suffixed by a failure and two successes.</p> Answer 50: <p>I saw that the answers to the question include either regex (Automata Theory) or linear recurrences (Linear Algebra), so for those interested I decided to post <a href="https://people.math.carleton.ca/%7Ebsteinbg/classes/automata2009/generatingfunctions.pdf" rel="nofollow noreferrer">this paper</a> on generating functions that neatly explains the interconnection between them.</p> Answer 51: <blockquote> <p>Let's denote the event under consideration with $a$ \begin{align*} P(X=a)=0.7 \end{align*} and the complementary event with $b$.</p> </blockquote> <p>We are looking for the words of length $n$ which do not contain three or more consecutive $a$'s, together with their probability of occurrence. 
The result is $1$ minus this probability.</p> <blockquote> <p>We can describe the set of these invalid words as the words built from the alphabet $V=\{a,b\}$ which contain at most two consecutive $a$'s.</p> <p>These are words</p> <ul> <li><p>starting with zero or more $b$'s: $$b^*$$</p></li> <li><p>followed by zero or more occurrences of $a$ or $aa$ each followed by one or more $b$'s: $$(ab^+|aab^+)^*$$</p></li> <li><p>and terminating with zero, one or two $a$'s $$(\varepsilon|a|aa)$$</p></li> </ul> <p>We obtain</p> <p>\begin{align*} b^*(ab^+|aab^+)^*(\varepsilon|a|aa)\tag{1} \end{align*}</p> </blockquote> <p>The regular expression (1) generates all invalid words in a <em>unique</em> manner. In such cases we can use it to derive a generating function $$\sum_{n=0}^\infty a_n z^n$$ with $a_n$ giving the number of invalid words of length $n$.</p> <p>In order to do so all we need to know is the <em><a href="https://en.wikipedia.org/wiki/Geometric_series#Geometric_power_series" rel="nofollow noreferrer">geometric series expansion</a></em> since the <em>star</em> operator \begin{align*} a^*=\left(\varepsilon|a|a^2|a^3|\cdots\right)\qquad\text{ translates to }\qquad 1+z+z^2+z^3+\cdots=\frac{1}{1-z} \end{align*}</p> <p>Accordingly $a^+=aa^*$ translates to $\frac{z}{1-z}$ and alternatives like $(\varepsilon|a|aa)$ can be written as $1+z+z^2$.</p> <blockquote> <p>We translate the regular expression (1) into a generating function (somewhat abusing the notation in order to show some intermediate steps).</p> <p>Since we want to calculate probabilities of occurrence, we keep track of $a$ and $b$ by treating them as corresponding factors in the generating function.</p> <p>\begin{align*} b^*\left(ab^+|aab^+\right)^*(\varepsilon|a|aa) &amp;\longrightarrow \quad \frac{1}{1-bz}\left(\left.\frac{abz^2}{1-bz}\right|\frac{a^2bz^3}{1-bz}\right)^*\left(1+az+a^2z^2\right)\\ &amp;\longrightarrow \quad \frac{1}{1-bz}\left(\frac{abz^2+a^2bz^3}{1-bz}\right)^*(1+az+a^2z^2)\\ &amp;\longrightarrow \quad \frac{1}{1-bz}\cdot\frac{1}{1-\frac{abz^2+a^2bz^3}{1-bz}}(1+az+a^2z^2)\\ &amp;\quad\quad=\frac{1+az+a^2z^2}{1-bz-abz^2-a^2bz^3}\tag{2} \end{align*}</p> </blockquote> <p>$$ $$</p> <blockquote> <p>We conclude: The number of invalid words is given by (2). So, the number of valid words is the number of all words minus the number of invalid words. We obtain the generating function $A(z)$ \begin{align*} A(z)&amp;=\sum_{n=0}^\infty(a+b)^nz^n-\frac{1+az+a^2z^2}{1-bz-abz^2-a^2bz^3}\\ &amp;=\frac{1}{1-(a+b)z}-\frac{1+az+a^2z^2}{1-bz-abz^2-a^2bz^3}\\ &amp;=\frac{a^3z^3}{(1-(a+b)z)(1-bz-abz^2-a^2bz^3)}\\ &amp;=a^3z^3+a^3(a+2b)z^4+a^3(\color{blue}{1}a^2+\color{blue}{4}ab+\color{blue}{3}b^2)z^5\\ &amp;\qquad +a^3(a+b)^2(a+4b)z^6+a^3(a^4+7a^3b+18a^2b^2+16ab^3+5b^4)z^7+\cdots \end{align*}</p> <p>The expansion was done with the help of Wolfram Alpha. We see that e.g. the number of valid words of length $5$ is $\color{blue}{1}+\color{blue}{4}+\color{blue}{3}=8$.</p> </blockquote> <p>$$ $$</p> <blockquote> <p>Out of $2^5=32$ binary words of length $5$ there are $8$ valid words which are marked $\color{blue}{\text{blue}}$ in the table below.</p> <p>We obtain \begin{array}{cccc} \color{blue}{aaa}aa\qquad&amp;ab\color{blue}{aaa}\qquad&amp;b\color{blue}{aaa}a\qquad&amp;bb\color{blue}{aaa}\\ \color{blue}{aaa}ab\qquad&amp;abaab\qquad&amp;b\color{blue}{aaa}b\qquad&amp;bbaab\\ \color{blue}{aaa}ba\qquad&amp;ababa\qquad&amp;baaba\qquad&amp;bbaba\\ \color{blue}{aaa}bb\qquad&amp;ababb\qquad&amp;baabb\qquad&amp;bbabb\\ aabaa\qquad&amp;abbaa\qquad&amp;babaa\qquad&amp;bbbaa\\ aabab\qquad&amp;abbab\qquad&amp;babab\qquad&amp;bbbab\\ aabba\qquad&amp;abbba\qquad&amp;babba\qquad&amp;bbbba\\ aabbb\qquad&amp;abbbb\qquad&amp;babbb\qquad&amp;bbbbb\\ \end{array}</p> </blockquote> <p>We denote with $[z^n]$ the coefficient of $z^n$ in a series.</p> <blockquote> <p>We conclude</p> <ul> <li>The number of occurrences of wanted words of size $n$ is the coefficient of $z^n$ of $A(z)$ evaluated 
at $a=b=1$ and given as OEIS sequence <em><a href="http://oeis.org/A050231" rel="nofollow noreferrer">050231</a></em>.</li> </ul> <p>\begin{align*} \left.[z^n]A(z)\right|_{a=b=1} \end{align*}</p> <ul> <li><p>The wanted probability of valid words of size $n$ is the coefficient of $z^n$ of $A(z)$ evaluated at $a=0.7,b=0.3$. \begin{align*} \left.[z^n]A(z)\right|_{a=0.7,b=0.3} \end{align*}</p></li> <li><p>We obtain for $n=0$ up to $n=7$ \begin{array}{c|cl} n&amp;\left.[z^n]A(z)\right|_{a=b=1}&amp;\left.[z^n]A(z)\right|_{a=0.7,b=0.3}\\ \hline 0&amp;0&amp;0\\ 1&amp;0&amp;0\\ 2&amp;0&amp;0\\ 3&amp;1&amp;0.343\\ 4&amp;3&amp;0.4459\\ 5&amp;8&amp;0.5488\\ 6&amp;20&amp;0.6517\\ 7&amp;47&amp;0.7193\\ \end{array}</p></li> </ul> </blockquote> Answer 52: <p>I would propose an alternative - maybe easier - approach, trying to give a formula that provides a generalized answer for the probability of three events in a row, given the probability $p $ that the event occurs, as a function of the sample size $n$. As shown below, this approach may also be useful to determine an additional issue of this problem highlihted in one of the comments, i.e. the sample size needed to "guarantee" a success with a given confidence level. </p> <p>Let us call $P (n) $ the probability that our event occurs three times in a row within a sequence of $n $ trials. Also, for each trial, let us denote as Y the case that the our event occurs, and as N the case that it does not occur. So, in the specific case described in the OP, we are looking for the probability of getting a triple YYY in a sequence of $n $ elements when $p=0.7$.</p> <p>As a general rule, we can note that, passing from a sample size $n $ to a sample size $n+1$, the probability that our event occurs three times in a row increases only because there is the possibility that the triple YYY, not obtained in the first $n $ trials, occurs as a final triple in the sequence of $n+1$ trials. 
So, to calculate $P (n+1 )$ from $ P (n) $, we simply have to add the probability that the triple in a row is not obtained until a final sequence NYYY. Note that the N preceding the triple YYY is strictly necessary to satisfy the condition that the triple has not been obtained in the first $n $ trials. </p> <p>We can then start with $n=3$, for which we trivially have $P (3)=p^3$. For $n=4$, we have to add to $P (3) $ the probability to get a sequence NYYY, i.e. we have to add $p^3(1-p)$. Similarly, for $n=5$, we have to add to $P (4) $ the probability to get a final sequence NYYY (note that this is not affected by what happens in the first of the five trials), which means that we must add $p^3(1-p)$ as in the previous case. Lastly, for $n=6$, we have again to add to $P (5) $ the probability to get a final sequence NYYY (note that, as above, this is not affected by what happens in the first and in the second of the six trials). This leads to a further increase by $p^3(1-p)$. So, for this initial group of four values of in our probability progression, we directly obtain the recurrence</p> <p>$$P (n+1)=P (n)+ p^3 (1-p)\,\,\,\, \text {(for n=3 to 6)}$$</p> <p>Accordingly, for the specific case $p=0.7$, this formula gives the initial values </p> <p>\begin{aligned} P(3) &amp; =0.7^3=0.343 \\ P (4) &amp; = 0.7^3+0.3 \cdot 0.7^3 = 0.4459 \\ P (5) &amp; = 0.7^3+2\cdot 0.3 \cdot 0.7^3 = 0.5488 \\ P (6) &amp; = 0.7^3+3\cdot 0.3 \cdot 0.7^3 = 0.6517 \\ \end{aligned} </p> <p>already obtained in a previous answer via a different method.</p> <p>Now let us consider the values of $n \geq 7$. Here something changes, because the fact that the sequence ends with NYYY does not imply that the triple YYY has not occurred before, i.e. in the first $n-4$ trials. So, at each step, the probability of getting a final NYYY must be multiplied to the probability of failure in the first $n-4$ trials, which is $1-P (n-4 ) $. 
Therefore, for $n=7$ we have to add to $P (6) $ the probability given by $p^3(1-p) [1-P(3)]\,\,$, for $n=8$ we have to add to $P (7) $ the probability given by $p^3(1-p) [1-P(4)] \,\, $, and so on. This leads to the recurrence</p> <p>$$P (n+1)=P (n)+ p^3(1-p)[1-P(n-4)]\,\,\,\, \text{(for n}\, \geq \text {7)}$$</p> <p>We can now try to get a closed form for this last recurrence. Let us set, by simplicity, $p^3=j$ and $(1-p) =k \,\, $. The recurrence becomes </p> <p>$$P (n+1)=P (n)+ jk-jk \,P(n-4)\, \,$$</p> <p>Note that the four values of $P (n)$ already determined can be written as $$P (3)=j $$ $$P (4)=j +jk$$ $$P (5)=j +2jk$$ $$P (6)=j +3jk$$</p> <p>so that they contain only terms of degree $j $ or $jk $. Continuing our probability progression and focusing on the second group of four values of $n$ (from $n=7\,\, $ to $n=10 \,\, $), new terms of degree $j^2k $ and $j^2k^2$ appear applying the recurrence above (for the sake of readability, here I directly provide the resulting expressions avoiding the rather tedious step-by-step calcuations):</p> <p>\begin{aligned} P (7) &amp; = P(6)+ jk-jk P (3)=j+4jk-j^2k \\ P (8) &amp; = P(7)+ jk-jk P (4)= j+ 5jk -2j^2k-j^2k^2 \\ P (9) &amp; = P(8) + jk-jk P (5)= j+ 6jk -3j^2k-3j^2k^2 \\ P (10) &amp; = P (9) + jk-jk P (6)= j+ 7jk -4j^2k-6j^2k^2 \\ \end{aligned}</p> <p>Some pattern begins to emerge in these expansions. For the specific case $p=0.7$, $$P (7)\approx 0.7193$$ $$P (8)\approx 0.7763$$ $$P (9) \approx 0.8228$$ $$P (10) \approx 0.8586$$</p> <p>We can repeat the procedure, applying the same recurrence and continuing with the third group of four values of $n $ (i.e., $n=11$ to $14$). 
Again, new terms appear, in this case of degree $j^3k^2$ and $j^3 k^3$ (as above, I directly provide the resulting expressions):</p> <p>\begin{align*} P(11) &amp;=j+8jk-5j^2k-10j^2k^2+j^3k^2 \\ P(12) &amp;=j+9jk-6j^2k-15j^2k^2+3j^3k^2+j^3k^3 \\P(13) &amp; = j+10jk-7j^2k -21j^2k^2+6j^3k^2+4j^3k^3 \\ P (14) &amp; = j+11jk-8j^2k-28j^2k^2+10j^3k^2+10j^3k^3\\ \end{align*}</p> <p>The final pattern is therefore as follows. Our expansions of $P (n) $ include:</p> <ul> <li><p>in the odd positions, $\lfloor (n+1)/4 \rfloor \,\, $ terms of the form $$(-1)^{i-1} \binom{n-3i}{i-1} j^i k^{i-1} \,\, $$ where $$i=1,2... \lfloor (n+1)/4 \rfloor \,\, $$ For example, in the expansion of $P (14)$ there are $ \lfloor (14+1)/4 \rfloor =3 \, \,\, $ terms of this form, which are $j $, $-8j^2k $, and $10 j^3k^2$;</p></li> <li><p>in the even positions, $\lfloor n/4 \rfloor \,\, $ terms of the form $$(-1)^{i-1} \binom{n-3i}{i} j^i k^i \,\, $$ where $$i=1,2... \lfloor n/4 \rfloor$$</p> <p>Accordingly, taking again for example the expansion of $P (14)$, there are $ \lfloor 14/4 \rfloor =3 \,$ terms of this form, which are $11jk$, $-28j^2k^2$, and $10 j^3k^3$.</p></li> </ul> <p>Therefore, we obtain the following general expression for $P (n) $:</p> <p>$$P (n)=\sum_{i}^{\lfloor (n+1)/4 \rfloor } (-1)^{i-1} \binom{n-3i}{i-1} j^{i}k^{i-1} +\sum_{i}^{\lfloor n/4 \rfloor } (-1)^{i-1} \binom{n-3i}{i} j^i k^i $$</p> <p>Reminding that $j=p^3$ and $k=(1-p) $, we finally get</p> <p>$$P (n)=\sum_{i}^{\lfloor (n+1)/4 \rfloor } (-1)^{i-1} \binom{n-3i}{i-1} p^{3i}(1-p)^{i-1} + \sum_{i}^{\lfloor n/4 \rfloor } (-1)^{i-1} \binom{n-3i}{i} p^{3i} (1-p)^i $$</p> <p>Setting $p=0.7$, we obtain the probability asked in the OP for, which is</p> <p>$$P (n)=\sum_{i}^{\lfloor (n+1)/4 \rfloor } (-1)^{i-1} \binom{n-3i}{i-1} 0.7^{3i} \cdot 0.3^{i-1} + \sum_{i}^{\lfloor n/4 \rfloor } (-1)^{i-1} \binom{n-3i}{i} 0.7^{3i}\cdot 0.3^i $$</p> <p>Also note that the two sums have the same number of terms when $n \not\equiv -1 \,\, 
\text {mod}\,\, 4$, whereas the first sum has one term more than the second one when $n \equiv -1 \,\,\text {mod} \, 4$, because in this case $(n+1)/4$ is integer. This term is given by</p> <p>$$\displaystyle (-1)^{\frac{(n+1)}{4}-1} \binom{n-\frac{3(n+1)}{4}}{ \frac{(n+1)}{4}-1}\, 0.7^{\frac{3(n+1)}{4}} \, 0.3^{\frac{(n+1)}{4}} $$ $$\displaystyle = (-1)^{\frac{(n-3)}{4}} \binom{\frac{(n-3)}{4}}{\frac{(n-3)}{4}}\, 0.7^{\frac{3(n+1)}{4}} \, 0.3^{\frac{(n+1)}{4}} $$ $$\displaystyle = (-1)^{\frac{(n-3)}{4}} \, 0.7^{\frac{3(n+1)}{4}} \, 0.3^{\frac{(n+1)}{4}} $$</p> <p>So the probability asked in the OP can be alternatively expressed as</p> <p>$$P (n)=\sum_{i}^{\lfloor n/4 \rfloor } (-1)^{i-1} \binom{n-3i}{i}\, 0.7^{3i}\, 0.3^{i} \left( \frac {i}{0.3(n-4 i+1)} +1 \right) + S$$</p> <p>where</p> <p>$$S = \begin{cases} 0 &amp; \mbox{if } n \not\equiv -1 \, \text {mod}\, 4\\ (-1)^{\frac{(n-3)}{4}} \, 0.7^{\frac{3(n+1)}{4}} \, 0.3^{\frac{(n+1)}{4}} &amp; \mbox{if } n \equiv -1 \, \text {mod}\, 4 \end{cases}$$</p> <p>It could also be observed that $|S|$ is relatively small and rapidly decreases for increasing $n$: its value is $\approx 0.0106\, \,\, $ for $n=7$, becoming $\approx 0.00109\, \,\, $ for $n=11$ and $\approx 0.000112\,\,\, $ for $n=15$, so that its contribute to the overall probability is nearly negligible. The last expansion without the $S$ term can therefore considered a valid approximation even for the case $n \equiv -1 \, \text {mod}\, 4$. </p> <p>The probability of three events in a row given by these formulas for $p=0.7$ and $n=3$ to $10$ have already been provided above. As expected, for increasing $n $, the value of $P (n)$ approaches $1$. 
The first value for which the probability achieves $95\%$ is $n=15$, since $P (15) \approx 0.9553\,\,\, $, as confirmed by WA <a href="http://www.wolframalpha.com/input/?i=0.7%5E(12)%200.3%5E(4)%20%2B%20%20sum%20%20%5B(-1)%5E(i-1)%20(((15-3i)%20choose%20i)%200.7%5E(3i)%200.3%5Ei%20(i%2F(0.3(15-4%20i%2B1))%2B1)%20%20%20),%7Bi,%2015%2F4%7D%5D" rel="nofollow noreferrer">here</a>. The probability achieves $97.5\%$ and $99\%$ for $n=18$ and $n=22$, respectively, since $P(18) \approx 0.9772\,\,\, $ and $P(22) \approx 0.9909\,\,\, $, as confirmed by WA <a href="http://www.wolframalpha.com/input/?i=sum%20%20%5B(-1)%5E(i-1)%20(((18-3i)%20choose%20i)%200.7%5E(3i)%200.3%5Ei%20(i%2F(0.3(18-4%20i%2B1))%2B1)%20%20%20),%7Bi,%2018%2F4%7D%5D" rel="nofollow noreferrer">here</a> and <a href="http://www.wolframalpha.com/input/?i=sum%20%20%5B(-1)%5E(i-1)%20(((22-3i)%20choose%20i)%200.7%5E(3i)%200.3%5Ei%20(i%2F(0.3(22-4%20i%2B1))%2B1)%20%20%20),%7Bi,%2022%2F4%7D%5D" rel="nofollow noreferrer">here</a>. </p> Answer 53: <p>The issue with looking at opportunities for success is that they are not completely independent. For example, in a row of size 4, if the first three aren't all successes, this greatly decreases the chance that the last three aren't all successes (the only way this can happen is if only the first trial was a failure).</p> <p>You can still get this approach to work (via correcting for events which are dependent via some sort of inclusion-exclusion), but generally a cleaner method for these types of problems is to write down a recurrence (or more generally, model them as a <a href="https://en.wikipedia.org/wiki/Markov_chain" rel="nofollow noreferrer">finite state Markov chain</a>).</p> <p>Let's let $P(N)$ be the probability a row of length $N$ has no 3 consecutive successes. To help us write down a recurrence, we'll break $P(N)$ down into 3 smaller probabilities. 
Let's let $P(N, 0)$ be the probability a row of length $N$ has no 3 consecutive successes AND that the last attempt was a failure. Similarly, let $P(N,1)$ be the probability that a row of length $N$ has no 3 consecutive successes AND that the last attempt was a success, but the second-last attempt was a failure, and finally, let $P(N,2)$ be the probability that a row of length $N$ has no 3 consecutive successes AND the last two attempts were successes, but the third-to-last attempt was a failure. Intuitively, these different cases measure how "close" we are to getting 3 successes in a row. It's not too hard to see that $P(N) = P(N,0) + P(N,1) + P(N,2)$, since if the last three attempts are all successes, you've already won.</p> <p>Now, let's try to write $P(N+1, 0)$ in terms of smaller cases. The probability you have no 3 consecutive successes in a row of length $N+1$ and that your last attempt was a failure is simply the probability that there are no 3 consecutive successes in the first $N$ tries and that you fail the $N+1$st try. This gives us the equation:</p> <p>$$P(N+1,0) = 0.3P(N) = 0.3P(N,0) + 0.3P(N,1) + 0.3P(N,2)$$</p> <p>Via similar logic, we can show that $P(N+1,1)$ and $P(N+1,2)$ satisfy the following equations:</p> <p>$$P(N+1,1) = 0.7P(N, 0)$$</p> <p>and</p> <p>$$P(N+1,2) = 0.7P(N, 1)$$</p> <p>Substituting these into the original equation for $P(N+1,0)$, we get:</p> <p>$$P(N+1, 0) = 0.3P(N,0) + 0.21P(N-1, 0) + 0.147P(N-2,0)$$</p> <p>Now, this is just a standard <a href="https://en.wikipedia.org/wiki/Recurrence_relation" rel="nofollow noreferrer">linear recurrence relation</a>. 
If $\lambda_1$, $\lambda_2$, and $\lambda_3$ are the roots of $z^3 - 0.3z^2-0.21z-0.147=0$, then there are some constants $a_1, a_2, a_3$ which you can solve for so that</p> <p>$$P(N, 0) = a_1\lambda_1^N + a_2\lambda_2^N + a_3\lambda_3^N$$</p> <p>Since $P(N) = P(N+1,0)/0.3$, this gives a similar formula for $P(N)$ (and ultimately for $1-P(N)$, the probability a success does occur 3 consecutive times). </p> Answer 54: <p>Let $p=0.7$ be the probability of success, $\mathrm{f}(n)$ be the probability of a sequence of length $n$ having 3 consecutive successes and $\mathrm{g}(n)=1-\mathrm{f}(n)$, where $\mathrm{g}(n)$ corresponds to the probability of a sequence of length $n$ having at most 2 consecutive successes.</p> <p>It holds $\mathrm{g}(0) = \mathrm{g}(1) = \mathrm{g}(2) = 1$.</p> <p>Then $\mathrm{g}(n) = (1-p)\mathrm{g}(n-1) + (1-p)p\mathrm{g}(n-2) + (1-p)p^2\mathrm{g}(n-3)$.</p> <p>This holds because all sequences of length $n$ that have at most 2 consecutive successes can be constructed from (1) all such sequences of length $n-1$ suffixed by a failure plus; (2) all such sequences of length $n-2$ suffixed by a failure and a success, plus; (3) all such sequences of length $n-3$ suffixed by a failure and two successes.</p> Answer 55: <p>I saw that the answers to the question include either regex (Automata Theory) or linear recurrences (Linear Algebra), so for those interested I decided to post <a href="https://people.math.carleton.ca/%7Ebsteinbg/classes/automata2009/generatingfunctions.pdf" rel="nofollow noreferrer">this paper</a> on generating functions that neatly explains the interconnection between them.</p> Answer 56: <blockquote> <p>Let's denote the event under consideration with $a$ \begin{align*} P(X=a)=0.7 \end{align*} and we denote the complementary event with $b$.</p> </blockquote> <p>We are looking for the words of length $n$ and their probability of occurrence which do not contain three or more consecutive $a$'s. 
The result is $1$ minus this probability.</p> <blockquote> <p>We can describe the set of these invalid words as the words built from the alphabet $V=\{a,b\}$ which contain at most one or two consecutive $a$'s.</p> <p>These are words</p> <ul> <li><p>starting with zero or more $b$'s:$$b^*$$</p></li> <li><p>followed by zero or more occurrences of $a$ or $aa$ each followed by one or more $b$'s: $$(ab^+|aab^+)^*$$</p></li> <li><p>and terminating with zero, one or two $a$'s $$(\varepsilon|a|aa)$$</p></li> </ul> <p>We obtain</p> <p>\begin{align*} b^*(ab^+|aab^+)^*(\varepsilon|a|aa)\tag{1} \end{align*}</p> </blockquote> <p>The regular expression (1) generates all invalid words in a <em>unique</em> manner. In such cases we can use it to derive a generating function $$\sum_{n=0}^\infty a_n z^n$$ with $a_n$ giving the number of invalid words of length $n$.</p> <p>In order to do so all we need to know is the <em><a href="https://en.wikipedia.org/wiki/Geometric_series#Geometric_power_series" rel="nofollow noreferrer">geometric series expansion</a></em> since the $star$ operator \begin{align*} a^*=\left(\varepsilon|a|a^2|a^3|\cdots\right)\qquad\text{ translates to }\qquad 1+z+z^2+z^3+\cdots=\frac{1}{1-z} \end{align*}</p> <p>Accordingly $a^+=aa^*$ translates to $\frac{z}{1-z}$ and alternatives like $(\varepsilon|a|aa)$ can be written as $1+z+z^2$.</p> <blockquote> <p>We translate the regular expression (1) into a generating function (by mixing up somewhat the symbolic to provide some intermediate steps).</p> <p>Since we want to calculate the probabilities of occurrence of $P(X=a)$ we keep track of $a$ and $b$ by respecting them as corresponding factors in the generating function.</p> <p>\begin{align*} b^*\left(ab^+|aab^+\right)^*(\varepsilon|a|aa) &amp;\longrightarrow \quad \frac{1}{1-bz}\left(\left.\frac{abz^2}{1-bz}\right|\frac{a^2bz^3}{1-bz}\right)^*\left(1+az+a^2z^2\right)\\ &amp;\longrightarrow \quad \frac{1}{1-bz}\left(\frac{abz^2+a^2bz^3}{1-bz}\right)^*(1+az+a^2z^2)\\ 
&amp;\longrightarrow \quad \frac{1}{1-bz}\cdot\frac{1}{1-\frac{abz^2+a^2bz^3}{1-bz}}(1+az+a^2z^2)\\ &amp;\quad\quad=\frac{1+az+a^2z^2}{1-bz-abz^2-a^2bz^3}\tag{1} \end{align*}</p> </blockquote> <p>$$ $$</p> <blockquote> <p>We conclude: The number of invalid words is given by (1). So, the number of valid words is the number of all words minus the number of invalid words. We obtain the generating function $A(z)$ \begin{align*} A(z)&amp;=\sum_{n=0}^\infty(a+b)^nz^n-\frac{1+az+a^2z^2}{1-bz-az^2-a^2z^3}\\ &amp;=\frac{1}{1-(a+b)z}-\frac{1+az+a^2z^2}{1-bz-az^2-a^2z^3}\\ &amp;=\frac{a^3z^3}{(1-(a+b)z)(1-bz-abz^2-a^2bz^3)}\\ &amp;=a^3z^3+a^3(a+2b)z^4+a^3(\color{blue}{1}a^2+\color{blue}{4}ab+\color{blue}{3}b^2)z^5\\ &amp;\qquad a^3(a+b)^2(a+4b)z^6+a^3(a^4+7a^3b+18a^2b^2+16ab^3+5b^4)z^7+\cdots \end{align*}</p> <p>The expansion was done with the help of Wolfram Alpha. We see that e.g. the number of valid words of length $5$ is $\color{blue}{1}+\color{blue}{4}+\color{blue}{3}=8$.</p> </blockquote> <p>$$ $$</p> <blockquote> <p>Out of $2^5=32$ binary words of length $5$ there are $8$ valid words which are marked $\color{blue}{\text{blue}}$ in the table below.</p> <p>We obtain \begin{array}{cccc} \color{blue}{aaa}aa\qquad&amp;ab\color{blue}{aaa}\qquad&amp;b\color{blue}{aaa}a\qquad&amp;bb\color{blue}{aaa}\\ \color{blue}{aaa}ab\qquad&amp;abaab\qquad&amp;b\color{blue}{aaa}b\qquad&amp;bbaab\\ \color{blue}{aaa}ba\qquad&amp;ababa\qquad&amp;baaba\qquad&amp;bbaba\\ \color{blue}{aaa}bb\qquad&amp;ababb\qquad&amp;baabb\qquad&amp;bbabb\\ aabaa\qquad&amp;abbaa\qquad&amp;babaa\qquad&amp;bbbaa\\ aabab\qquad&amp;abbab\qquad&amp;babab\qquad&amp;bbbab\\ aabba\qquad&amp;abbba\qquad&amp;babba\qquad&amp;bbbba\\ aabbb\qquad&amp;abbbb\qquad&amp;babbb\qquad&amp;bbbbb\\ \end{array}</p> </blockquote> <p>We denote with $[z^n]$ the coefficient of $z^n$ in a series.</p> <blockquote> <p>We conclude</p> <ul> <li>The number of occurrences of wanted words of size $n$ is the coefficient of $z^n$ of $A(z)$ evaluated 
at $a=b=1$ and given as OEIS sequence <em><a href="http://oeis.org/A050231" rel="nofollow noreferrer">050231</a></em>.</li> </ul> <p>\begin{align*} \left.[z^n]A(z)\right|_{a=b=1} \end{align*}</p> <ul> <li><p>The wanted probability of valid words of size $n$ is the coefficient of $z^n$ of $A(z)$ evaluated at $a=0.7,b=0.3$. \begin{align*} \left.[z^n]A(z)\right|_{a=0.7,b=0.3} \end{align*}</p></li> <li><p>We obtain for $n=0$ up to $n=7$ \begin{array}{c|cl} n&amp;\left.[z^n]A(z)\right|_{a=b=1}&amp;\left.[z^n]A(z)\right|_{a=0.7,b=0.3}\\ \hline 0&amp;0&amp;0\\ 1&amp;0&amp;0\\ 2&amp;0&amp;0\\ 3&amp;1&amp;0.343\\ 4&amp;3&amp;0.4459\\ 5&amp;8&amp;0.5488\\ 6&amp;20&amp;0.6517\\ 7&amp;47&amp;0.7193\\ \end{array}</p></li> </ul> </blockquote> Answer 57: <p>I would propose an alternative - maybe easier - approach, trying to give a formula that provides a generalized answer for the probability of three events in a row, given the probability $p $ that the event occurs, as a function of the sample size $n$. As shown below, this approach may also be useful to determine an additional issue of this problem highlihted in one of the comments, i.e. the sample size needed to "guarantee" a success with a given confidence level. </p> <p>Let us call $P (n) $ the probability that our event occurs three times in a row within a sequence of $n $ trials. Also, for each trial, let us denote as Y the case that the our event occurs, and as N the case that it does not occur. So, in the specific case described in the OP, we are looking for the probability of getting a triple YYY in a sequence of $n $ elements when $p=0.7$.</p> <p>As a general rule, we can note that, passing from a sample size $n $ to a sample size $n+1$, the probability that our event occurs three times in a row increases only because there is the possibility that the triple YYY, not obtained in the first $n $ trials, occurs as a final triple in the sequence of $n+1$ trials. 
So, to calculate $P (n+1 )$ from $ P (n) $, we simply have to add the probability that the triple in a row is not obtained until a final sequence NYYY. Note that the N preceding the triple YYY is strictly necessary to satisfy the condition that the triple has not been obtained in the first $n $ trials. </p> <p>We can then start with $n=3$, for which we trivially have $P (3)=p^3$. For $n=4$, we have to add to $P (3) $ the probability to get a sequence NYYY, i.e. we have to add $p^3(1-p)$. Similarly, for $n=5$, we have to add to $P (4) $ the probability to get a final sequence NYYY (note that this is not affected by what happens in the first of the five trials), which means that we must add $p^3(1-p)$ as in the previous case. Lastly, for $n=6$, we have again to add to $P (5) $ the probability to get a final sequence NYYY (note that, as above, this is not affected by what happens in the first and in the second of the six trials). This leads to a further increase by $p^3(1-p)$. So, for this initial group of four values of in our probability progression, we directly obtain the recurrence</p> <p>$$P (n+1)=P (n)+ p^3 (1-p)\,\,\,\, \text {(for n=3 to 6)}$$</p> <p>Accordingly, for the specific case $p=0.7$, this formula gives the initial values </p> <p>\begin{aligned} P(3) &amp; =0.7^3=0.343 \\ P (4) &amp; = 0.7^3+0.3 \cdot 0.7^3 = 0.4459 \\ P (5) &amp; = 0.7^3+2\cdot 0.3 \cdot 0.7^3 = 0.5488 \\ P (6) &amp; = 0.7^3+3\cdot 0.3 \cdot 0.7^3 = 0.6517 \\ \end{aligned} </p> <p>already obtained in a previous answer via a different method.</p> <p>Now let us consider the values of $n \geq 7$. Here something changes, because the fact that the sequence ends with NYYY does not imply that the triple YYY has not occurred before, i.e. in the first $n-4$ trials. So, at each step, the probability of getting a final NYYY must be multiplied to the probability of failure in the first $n-4$ trials, which is $1-P (n-4 ) $. 
Therefore, for $n=7$ we have to add to $P (6) $ the probability given by $p^3(1-p) [1-P(3)]\,\,$, for $n=8$ we have to add to $P (7) $ the probability given by $p^3(1-p) [1-P(4)] \,\, $, and so on. This leads to the recurrence</p> <p>$$P (n)=P (n-1)+ p^3(1-p)[1-P(n-4)]\,\,\,\, \text{(for n}\, \geq \text {7)}$$</p> <p>We can now try to get a closed form for this last recurrence. Let us set, for simplicity, $p^3=j$ and $(1-p) =k \,\, $. The recurrence becomes </p> <p>$$P (n)=P (n-1)+ jk-jk \,P(n-4)\, \,$$</p> <p>Note that the four values of $P (n)$ already determined can be written as $$P (3)=j $$ $$P (4)=j +jk$$ $$P (5)=j +2jk$$ $$P (6)=j +3jk$$</p> <p>so that they contain only terms of degree $j $ or $jk $. Continuing our probability progression and focusing on the second group of four values of $n$ (from $n=7\,\, $ to $n=10 \,\, $), new terms of degree $j^2k $ and $j^2k^2$ appear when applying the recurrence above (for the sake of readability, here I directly provide the resulting expressions, avoiding the rather tedious step-by-step calculations):</p> <p>\begin{aligned} P (7) &amp; = P(6)+ jk-jk P (3)=j+4jk-j^2k \\ P (8) &amp; = P(7)+ jk-jk P (4)= j+ 5jk -2j^2k-j^2k^2 \\ P (9) &amp; = P(8) + jk-jk P (5)= j+ 6jk -3j^2k-3j^2k^2 \\ P (10) &amp; = P (9) + jk-jk P (6)= j+ 7jk -4j^2k-6j^2k^2 \\ \end{aligned}</p> <p>A pattern begins to emerge in these expansions. For the specific case $p=0.7$, $$P (7)\approx 0.7193$$ $$P (8)\approx 0.7763$$ $$P (9) \approx 0.8228$$ $$P (10) \approx 0.8586$$</p> <p>We can repeat the procedure, applying the same recurrence and continuing with the third group of four values of $n $ (i.e., $n=11$ to $14$). 
Again, new terms appear, in this case of degree $j^3k^2$ and $j^3 k^3$ (as above, I directly provide the resulting expressions):</p> <p>\begin{align*} P(11) &amp;=j+8jk-5j^2k-10j^2k^2+j^3k^2 \\ P(12) &amp;=j+9jk-6j^2k-15j^2k^2+3j^3k^2+j^3k^3 \\ P(13) &amp; = j+10jk-7j^2k -21j^2k^2+6j^3k^2+4j^3k^3 \\ P (14) &amp; = j+11jk-8j^2k-28j^2k^2+10j^3k^2+10j^3k^3\\ \end{align*}</p> <p>The final pattern is therefore as follows. Our expansions of $P (n) $ include:</p> <ul> <li><p>in the odd positions, $\lfloor (n+1)/4 \rfloor \,\, $ terms of the form $$(-1)^{i-1} \binom{n-3i}{i-1} j^i k^{i-1} \,\, $$ where $$i=1,2... \lfloor (n+1)/4 \rfloor \,\, $$ For example, in the expansion of $P (14)$ there are $ \lfloor (14+1)/4 \rfloor =3 \, \,\, $ terms of this form, which are $j $, $-8j^2k $, and $10 j^3k^2$;</p></li> <li><p>in the even positions, $\lfloor n/4 \rfloor \,\, $ terms of the form $$(-1)^{i-1} \binom{n-3i}{i} j^i k^i \,\, $$ where $$i=1,2... \lfloor n/4 \rfloor$$</p> <p>Accordingly, taking again for example the expansion of $P (14)$, there are $ \lfloor 14/4 \rfloor =3 \,$ terms of this form, which are $11jk$, $-28j^2k^2$, and $10 j^3k^3$.</p></li> </ul> <p>Therefore, we obtain the following general expression for $P (n) $:</p> <p>$$P (n)=\sum_{i=1}^{\lfloor (n+1)/4 \rfloor } (-1)^{i-1} \binom{n-3i}{i-1} j^{i}k^{i-1} +\sum_{i=1}^{\lfloor n/4 \rfloor } (-1)^{i-1} \binom{n-3i}{i} j^i k^i $$</p> <p>Recalling that $j=p^3$ and $k=(1-p) $, we finally get</p> <p>$$P (n)=\sum_{i=1}^{\lfloor (n+1)/4 \rfloor } (-1)^{i-1} \binom{n-3i}{i-1} p^{3i}(1-p)^{i-1} + \sum_{i=1}^{\lfloor n/4 \rfloor } (-1)^{i-1} \binom{n-3i}{i} p^{3i} (1-p)^i $$</p> <p>Setting $p=0.7$, we obtain the probability asked for in the OP, which is</p> <p>$$P (n)=\sum_{i=1}^{\lfloor (n+1)/4 \rfloor } (-1)^{i-1} \binom{n-3i}{i-1} 0.7^{3i} \cdot 0.3^{i-1} + \sum_{i=1}^{\lfloor n/4 \rfloor } (-1)^{i-1} \binom{n-3i}{i} 0.7^{3i}\cdot 0.3^i $$</p> <p>Also note that the two sums have the same number of terms when $n \not\equiv -1 \,\, \text {mod}\,\, 4$, whereas the first sum has one term more than the second one when $n \equiv -1 \,\,\text {mod} \, 4$, because in this case $(n+1)/4$ is an integer. This term is given by</p> <p>$$\displaystyle (-1)^{\frac{(n+1)}{4}-1} \binom{n-\frac{3(n+1)}{4}}{ \frac{(n+1)}{4}-1}\, 0.7^{\frac{3(n+1)}{4}} \, 0.3^{\frac{(n+1)}{4}-1} $$ $$\displaystyle = (-1)^{\frac{(n-3)}{4}} \binom{\frac{(n-3)}{4}}{\frac{(n-3)}{4}}\, 0.7^{\frac{3(n+1)}{4}} \, 0.3^{\frac{(n-3)}{4}} $$ $$\displaystyle = (-1)^{\frac{(n-3)}{4}} \, 0.7^{\frac{3(n+1)}{4}} \, 0.3^{\frac{(n-3)}{4}} $$</p> <p>So the probability asked in the OP can be alternatively expressed as</p> <p>$$P (n)=\sum_{i=1}^{\lfloor n/4 \rfloor } (-1)^{i-1} \binom{n-3i}{i}\, 0.7^{3i}\, 0.3^{i} \left( \frac {i}{0.3(n-4 i+1)} +1 \right) + S$$</p> <p>where</p> <p>$$S = \begin{cases} 0 &amp; \mbox{if } n \not\equiv -1 \, \text {mod}\, 4\\ (-1)^{\frac{(n-3)}{4}} \, 0.7^{\frac{3(n+1)}{4}} \, 0.3^{\frac{(n-3)}{4}} &amp; \mbox{if } n \equiv -1 \, \text {mod}\, 4 \end{cases}$$</p> <p>It could also be observed that $|S|$ is relatively small and rapidly decreases for increasing $n$: its value is $\approx 0.0353\, \,\, $ for $n=7$, becoming $\approx 0.0036\, \,\, $ for $n=11$ and $\approx 0.00037\,\,\, $ for $n=15$, so that its contribution to the overall probability quickly becomes nearly negligible. For larger $n$, the last expansion without the $S$ term can therefore be considered a valid approximation even for the case $n \equiv -1 \, \text {mod}\, 4$. </p> <p>The probabilities of three events in a row given by these formulas for $p=0.7$ and $n=3$ to $10$ have already been provided above. As expected, for increasing $n $, the value of $P (n)$ approaches $1$. 
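As a quick numerical sanity check of the progression above, the recurrences can be iterated directly. The sketch below is mine (the function name is not from the answer) and assumes $p=0.7$ as in the OP; it applies $P(n)=P(n-1)+p^3(1-p)$ for $n=4$ to $6$ and then includes the factor $[1-P(n-4)]$ from $n=7$ on, exactly as in the worked examples $P(7)=P(6)+p^3(1-p)[1-P(3)]$, etc.:

```python
# Iterating the recurrences derived above, for p = 0.7:
#   P(n) = P(n-1) + p^3 (1-p)                for n = 4..6
#   P(n) = P(n-1) + p^3 (1-p) [1 - P(n-4)]   for n >= 7

def p_triple(n, p=0.7):
    """Probability of at least three successes in a row in n trials."""
    P = [0.0, 0.0, 0.0, p**3]          # P(0), P(1), P(2), P(3)
    for m in range(4, n + 1):
        inc = p**3 * (1 - p)           # probability of a final NYYY
        if m >= 7:
            inc *= 1 - P[m - 4]        # no triple in the first m-4 trials
        P.append(P[m - 1] + inc)
    return P[n]

# Reproduces the values listed above: 0.343, 0.4459, 0.5488, 0.6517, 0.7193, ...
for n in range(3, 11):
    print(n, round(p_triple(n), 4))
```

The same loop also answers the sample-size question raised at the start: one can simply iterate until $P(n)$ exceeds the desired confidence level.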
The first value for which the probability achieves $95\%$ is $n=15$, since $P (15) \approx 0.9549\,\,\, $ (cf. WA <a href="http://www.wolframalpha.com/input/?i=0.7%5E(12)%200.3%5E(4)%20%2B%20%20sum%20%20%5B(-1)%5E(i-1)%20(((15-3i)%20choose%20i)%200.7%5E(3i)%200.3%5Ei%20(i%2F(0.3(15-4%20i%2B1))%2B1)%20%20%20),%7Bi,%2015%2F4%7D%5D" rel="nofollow noreferrer">here</a>). The probability achieves $97.5\%$ and $99\%$ for $n=18$ and $n=22$, respectively, since $P(18) \approx 0.9772\,\,\, $ and $P(22) \approx 0.9909\,\,\, $, as confirmed by WA <a href="http://www.wolframalpha.com/input/?i=sum%20%20%5B(-1)%5E(i-1)%20(((18-3i)%20choose%20i)%200.7%5E(3i)%200.3%5Ei%20(i%2F(0.3(18-4%20i%2B1))%2B1)%20%20%20),%7Bi,%2018%2F4%7D%5D" rel="nofollow noreferrer">here</a> and <a href="http://www.wolframalpha.com/input/?i=sum%20%20%5B(-1)%5E(i-1)%20(((22-3i)%20choose%20i)%200.7%5E(3i)%200.3%5Ei%20(i%2F(0.3(22-4%20i%2B1))%2B1)%20%20%20),%7Bi,%2022%2F4%7D%5D" rel="nofollow noreferrer">here</a>. </p> Answer 58: <p>The issue with looking at opportunities for success is that they are not completely independent. For example, in a row of size 4, if the first three aren't all successes, this greatly decreases the chance that the last three are all successes (the only way that can happen is if only the first trial was a failure).</p> <p>You can still get this approach to work (by correcting for the dependence via some sort of inclusion-exclusion), but generally a cleaner method for these types of problems is to write down a recurrence (or more generally, model them as a <a href="https://en.wikipedia.org/wiki/Markov_chain" rel="nofollow noreferrer">finite state Markov chain</a>).</p> <p>Let's let $P(N)$ be the probability a row of length $N$ has no 3 consecutive successes. To help us write down a recurrence, we'll break $P(N)$ down into 3 smaller probabilities. 
Let's let $P(N, 0)$ be the probability a row of length $N$ has no 3 consecutive successes AND that the last attempt was a failure. Similarly, let $P(N,1)$ be the probability that a row of length $N$ has no 3 consecutive successes AND that the last attempt was a success, but the second-last attempt was a failure, and finally, let $P(N,2)$ be the probability that a row of length $N$ has no 3 consecutive successes AND the last two attempts were successes, but the third-to-last attempt was a failure. Intuitively, these different cases measure how "close" we are to getting 3 successes in a row. It's not too hard to see that $P(N) = P(N,0) + P(N,1) + P(N,2)$, since if the last three attempts are all successes, you've already won.</p> <p>Now, let's try to write $P(N+1, 0)$ in terms of smaller cases. The probability you have no 3 consecutive successes in a row of length $N+1$ and that your last attempt was a failure is simply the probability that there are no 3 consecutive successes in the first $N$ tries and that you fail the $N+1$st try. This gives us the equation:</p> <p>$$P(N+1,0) = 0.3P(N) = 0.3P(N,0) + 0.3P(N,1) + 0.3P(N,2)$$</p> <p>Via similar logic, we can show that $P(N+1,1)$ and $P(N+1,2)$ satisfy the following equations:</p> <p>$$P(N+1,1) = 0.7P(N, 0)$$</p> <p>and</p> <p>$$P(N+1,2) = 0.7P(N, 1)$$</p> <p>Substituting these into the original equation for $P(N+1,0)$, we get:</p> <p>$$P(N+1, 0) = 0.3P(N,0) + 0.21P(N-1, 0) + 0.147P(N-2,0)$$</p> <p>Now, this is just a standard <a href="https://en.wikipedia.org/wiki/Recurrence_relation" rel="nofollow noreferrer">linear recurrence relation</a>. 
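These three state equations can be iterated directly as a numerical check. The sketch below is mine (the function name is not from the answer) and assumes $p=0.7$; the start state treats the empty row as ending in a failure:

```python
# Iterating the three state equations above for p = 0.7.
# States: s0, s1, s2 = probability of "no 3 consecutive successes so far",
# split by whether the row currently ends in 0, 1 or 2 successes.

def no_triple_prob(N, p=0.7):
    """P(N): probability that a row of N trials has no 3 consecutive successes."""
    s0, s1, s2 = 1.0, 0.0, 0.0         # empty row: treated as ending in a failure
    for _ in range(N):
        # P(N+1,0) = (1-p) P(N);  P(N+1,1) = p P(N,0);  P(N+1,2) = p P(N,1)
        s0, s1, s2 = (1 - p) * (s0 + s1 + s2), p * s0, p * s1
    return s0 + s1 + s2

# Complement: probability of at least one run of 3 successes.
print(round(1 - no_triple_prob(3), 3))   # 0.343, i.e. 0.7**3
```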
If $\lambda_1$, $\lambda_2$, and $\lambda_3$ are the roots of $z^3 - 0.3z^2-0.21z-0.147=0$, then there are some constants $a_1, a_2, a_3$ which you can solve for so that</p> <p>$$P(N, 0) = a_1\lambda_1^N + a_2\lambda_2^N + a_3\lambda_3^N$$</p> <p>Since $P(N) = P(N+1,0)/0.3$, this gives a similar formula for $P(N)$ (and ultimately for $1-P(N)$, the probability a success does occur 3 consecutive times). </p> Answer 59: <p>Let $p=0.7$ be the probability of success, $\mathrm{f}(n)$ be the probability of a sequence of length $n$ having 3 consecutive successes and $\mathrm{g}(n)=1-\mathrm{f}(n)$, where $\mathrm{g}(n)$ corresponds to the probability of a sequence of length $n$ having at most 2 consecutive successes.</p> <p>It holds $\mathrm{g}(0) = \mathrm{g}(1) = \mathrm{g}(2) = 1$.</p> <p>Then $\mathrm{g}(n) = (1-p)\mathrm{g}(n-1) + (1-p)p\mathrm{g}(n-2) + (1-p)p^2\mathrm{g}(n-3)$.</p> <p>This holds because all sequences of length $n$ that have at most 2 consecutive successes can be constructed from (1) all such sequences of length $n-1$ suffixed by a failure, plus (2) all such sequences of length $n-2$ suffixed by a failure and a success, plus (3) all such sequences of length $n-3$ suffixed by a failure and two successes.</p> Answer 60: <p>I saw that the answers to the question include either regex (Automata Theory) or linear recurrences (Linear Algebra), so for those interested I decided to post <a href="https://people.math.carleton.ca/%7Ebsteinbg/classes/automata2009/generatingfunctions.pdf" rel="nofollow noreferrer">this paper</a> on generating functions that neatly explains the interconnection between them.</p> Answer 61: <blockquote> <p>Let's denote the event under consideration with $a$ \begin{align*} P(X=a)=0.7 \end{align*} and we denote the complementary event with $b$.</p> </blockquote> <p>We are looking for the words of length $n$ which do not contain three or more consecutive $a$'s, and for their probability of occurrence. 
The result is $1$ minus this probability.</p> <blockquote> <p>We can describe the set of these invalid words as the words built from the alphabet $V=\{a,b\}$ which contain at most one or two consecutive $a$'s.</p> <p>These are words</p> <ul> <li><p>starting with zero or more $b$'s: $$b^*$$</p></li> <li><p>followed by zero or more occurrences of $a$ or $aa$ each followed by one or more $b$'s: $$(ab^+|aab^+)^*$$</p></li> <li><p>and terminating with zero, one or two $a$'s $$(\varepsilon|a|aa)$$</p></li> </ul> <p>We obtain</p> <p>\begin{align*} b^*(ab^+|aab^+)^*(\varepsilon|a|aa)\tag{1} \end{align*}</p> </blockquote> <p>The regular expression (1) generates all invalid words in a <em>unique</em> manner. In such cases we can use it to derive a generating function $$\sum_{n=0}^\infty a_n z^n$$ with $a_n$ giving the number of invalid words of length $n$.</p> <p>In order to do so, all we need to know is the <em><a href="https://en.wikipedia.org/wiki/Geometric_series#Geometric_power_series" rel="nofollow noreferrer">geometric series expansion</a></em> since the star operator \begin{align*} a^*=\left(\varepsilon|a|a^2|a^3|\cdots\right)\qquad\text{ translates to }\qquad 1+z+z^2+z^3+\cdots=\frac{1}{1-z} \end{align*}</p> <p>Accordingly $a^+=aa^*$ translates to $\frac{z}{1-z}$ and alternatives like $(\varepsilon|a|aa)$ can be written as $1+z+z^2$.</p> <blockquote> <p>We translate the regular expression (1) into a generating function (abusing the notation somewhat to show some intermediate steps).</p> <p>Since we want to calculate the probabilities of occurrence of $P(X=a)$ we keep track of $a$ and $b$ by keeping them as corresponding factors in the generating function.</p> <p>\begin{align*} b^*\left(ab^+|aab^+\right)^*(\varepsilon|a|aa) &amp;\longrightarrow \quad \frac{1}{1-bz}\left(\left.\frac{abz^2}{1-bz}\right|\frac{a^2bz^3}{1-bz}\right)^*\left(1+az+a^2z^2\right)\\ &amp;\longrightarrow \quad \frac{1}{1-bz}\left(\frac{abz^2+a^2bz^3}{1-bz}\right)^*(1+az+a^2z^2)\\ &amp;\longrightarrow \quad \frac{1}{1-bz}\cdot\frac{1}{1-\frac{abz^2+a^2bz^3}{1-bz}}(1+az+a^2z^2)\\ &amp;\quad\quad=\frac{1+az+a^2z^2}{1-bz-abz^2-a^2bz^3}\tag{2} \end{align*}</p> </blockquote> <p>$$ $$</p> <blockquote> <p>We conclude: The number of invalid words is given by (2). So, the number of valid words is the number of all words minus the number of invalid words. We obtain the generating function $A(z)$ \begin{align*} A(z)&amp;=\sum_{n=0}^\infty(a+b)^nz^n-\frac{1+az+a^2z^2}{1-bz-abz^2-a^2bz^3}\\ &amp;=\frac{1}{1-(a+b)z}-\frac{1+az+a^2z^2}{1-bz-abz^2-a^2bz^3}\\ &amp;=\frac{a^3z^3}{(1-(a+b)z)(1-bz-abz^2-a^2bz^3)}\\ &amp;=a^3z^3+a^3(a+2b)z^4+a^3(\color{blue}{1}a^2+\color{blue}{4}ab+\color{blue}{3}b^2)z^5\\ &amp;\qquad +a^3(a+b)^2(a+4b)z^6+a^3(a^4+7a^3b+18a^2b^2+16ab^3+5b^4)z^7+\cdots \end{align*}</p> <p>The expansion was done with the help of Wolfram Alpha. We see that e.g. the number of valid words of length $5$ is $\color{blue}{1}+\color{blue}{4}+\color{blue}{3}=8$.</p> </blockquote> <p>$$ $$</p> <blockquote> <p>Out of $2^5=32$ binary words of length $5$ there are $8$ valid words which are marked $\color{blue}{\text{blue}}$ in the table below.</p> <p>We obtain \begin{array}{cccc} \color{blue}{aaa}aa\qquad&amp;ab\color{blue}{aaa}\qquad&amp;b\color{blue}{aaa}a\qquad&amp;bb\color{blue}{aaa}\\ \color{blue}{aaa}ab\qquad&amp;abaab\qquad&amp;b\color{blue}{aaa}b\qquad&amp;bbaab\\ \color{blue}{aaa}ba\qquad&amp;ababa\qquad&amp;baaba\qquad&amp;bbaba\\ \color{blue}{aaa}bb\qquad&amp;ababb\qquad&amp;baabb\qquad&amp;bbabb\\ aabaa\qquad&amp;abbaa\qquad&amp;babaa\qquad&amp;bbbaa\\ aabab\qquad&amp;abbab\qquad&amp;babab\qquad&amp;bbbab\\ aabba\qquad&amp;abbba\qquad&amp;babba\qquad&amp;bbbba\\ aabbb\qquad&amp;abbbb\qquad&amp;babbb\qquad&amp;bbbbb\\ \end{array}</p> </blockquote> <p>We denote with $[z^n]$ the coefficient of $z^n$ in a series.</p> <blockquote> <p>We conclude</p> <ul> <li>The number of occurrences of wanted words of size $n$ is the coefficient of $z^n$ of $A(z)$ evaluated 
at $a=b=1$ and given as OEIS sequence <em><a href="http://oeis.org/A050231" rel="nofollow noreferrer">050231</a></em>.</li> </ul> <p>\begin{align*} \left.[z^n]A(z)\right|_{a=b=1} \end{align*}</p> <ul> <li><p>The wanted probability of valid words of size $n$ is the coefficient of $z^n$ of $A(z)$ evaluated at $a=0.7,b=0.3$. \begin{align*} \left.[z^n]A(z)\right|_{a=0.7,b=0.3} \end{align*}</p></li> <li><p>We obtain for $n=0$ up to $n=7$ \begin{array}{c|cl} n&amp;\left.[z^n]A(z)\right|_{a=b=1}&amp;\left.[z^n]A(z)\right|_{a=0.7,b=0.3}\\ \hline 0&amp;0&amp;0\\ 1&amp;0&amp;0\\ 2&amp;0&amp;0\\ 3&amp;1&amp;0.343\\ 4&amp;3&amp;0.4459\\ 5&amp;8&amp;0.5488\\ 6&amp;20&amp;0.6517\\ 7&amp;47&amp;0.7193\\ \end{array}</p></li> </ul> </blockquote> Answer 62: <p>I would propose an alternative - maybe easier - approach, trying to give a formula that provides a generalized answer for the probability of three events in a row, given the probability $p $ that the event occurs, as a function of the sample size $n$. As shown below, this approach may also be useful to determine an additional issue of this problem highlihted in one of the comments, i.e. the sample size needed to "guarantee" a success with a given confidence level. </p> <p>Let us call $P (n) $ the probability that our event occurs three times in a row within a sequence of $n $ trials. Also, for each trial, let us denote as Y the case that the our event occurs, and as N the case that it does not occur. So, in the specific case described in the OP, we are looking for the probability of getting a triple YYY in a sequence of $n $ elements when $p=0.7$.</p> <p>As a general rule, we can note that, passing from a sample size $n $ to a sample size $n+1$, the probability that our event occurs three times in a row increases only because there is the possibility that the triple YYY, not obtained in the first $n $ trials, occurs as a final triple in the sequence of $n+1$ trials. 
So, to calculate $P (n+1 )$ from $ P (n) $, we simply have to add the probability that the triple in a row is not obtained until a final sequence NYYY. Note that the N preceding the triple YYY is strictly necessary to satisfy the condition that the triple has not been obtained in the first $n $ trials. </p> <p>We can then start with $n=3$, for which we trivially have $P (3)=p^3$. For $n=4$, we have to add to $P (3) $ the probability to get a sequence NYYY, i.e. we have to add $p^3(1-p)$. Similarly, for $n=5$, we have to add to $P (4) $ the probability to get a final sequence NYYY (note that this is not affected by what happens in the first of the five trials), which means that we must add $p^3(1-p)$ as in the previous case. Lastly, for $n=6$, we have again to add to $P (5) $ the probability to get a final sequence NYYY (note that, as above, this is not affected by what happens in the first and in the second of the six trials). This leads to a further increase by $p^3(1-p)$. So, for this initial group of four values of in our probability progression, we directly obtain the recurrence</p> <p>$$P (n+1)=P (n)+ p^3 (1-p)\,\,\,\, \text {(for n=3 to 6)}$$</p> <p>Accordingly, for the specific case $p=0.7$, this formula gives the initial values </p> <p>\begin{aligned} P(3) &amp; =0.7^3=0.343 \\ P (4) &amp; = 0.7^3+0.3 \cdot 0.7^3 = 0.4459 \\ P (5) &amp; = 0.7^3+2\cdot 0.3 \cdot 0.7^3 = 0.5488 \\ P (6) &amp; = 0.7^3+3\cdot 0.3 \cdot 0.7^3 = 0.6517 \\ \end{aligned} </p> <p>already obtained in a previous answer via a different method.</p> <p>Now let us consider the values of $n \geq 7$. Here something changes, because the fact that the sequence ends with NYYY does not imply that the triple YYY has not occurred before, i.e. in the first $n-4$ trials. So, at each step, the probability of getting a final NYYY must be multiplied to the probability of failure in the first $n-4$ trials, which is $1-P (n-4 ) $. 
Therefore, for $n=7$ we have to add to $P (6) $ the probability given by $p^3(1-p) [1-P(3)]\,\,$, for $n=8$ we have to add to $P (7) $ the probability given by $p^3(1-p) [1-P(4)] \,\, $, and so on. This leads to the recurrence</p> <p>$$P (n+1)=P (n)+ p^3(1-p)[1-P(n-4)]\,\,\,\, \text{(for n}\, \geq \text {7)}$$</p> <p>We can now try to get a closed form for this last recurrence. Let us set, by simplicity, $p^3=j$ and $(1-p) =k \,\, $. The recurrence becomes </p> <p>$$P (n+1)=P (n)+ jk-jk \,P(n-4)\, \,$$</p> <p>Note that the four values of $P (n)$ already determined can be written as $$P (3)=j $$ $$P (4)=j +jk$$ $$P (5)=j +2jk$$ $$P (6)=j +3jk$$</p> <p>so that they contain only terms of degree $j $ or $jk $. Continuing our probability progression and focusing on the second group of four values of $n$ (from $n=7\,\, $ to $n=10 \,\, $), new terms of degree $j^2k $ and $j^2k^2$ appear applying the recurrence above (for the sake of readability, here I directly provide the resulting expressions avoiding the rather tedious step-by-step calcuations):</p> <p>\begin{aligned} P (7) &amp; = P(6)+ jk-jk P (3)=j+4jk-j^2k \\ P (8) &amp; = P(7)+ jk-jk P (4)= j+ 5jk -2j^2k-j^2k^2 \\ P (9) &amp; = P(8) + jk-jk P (5)= j+ 6jk -3j^2k-3j^2k^2 \\ P (10) &amp; = P (9) + jk-jk P (6)= j+ 7jk -4j^2k-6j^2k^2 \\ \end{aligned}</p> <p>Some pattern begins to emerge in these expansions. For the specific case $p=0.7$, $$P (7)\approx 0.7193$$ $$P (8)\approx 0.7763$$ $$P (9) \approx 0.8228$$ $$P (10) \approx 0.8586$$</p> <p>We can repeat the procedure, applying the same recurrence and continuing with the third group of four values of $n $ (i.e., $n=11$ to $14$). 
Again, new terms appear, in this case of degree $j^3k^2$ and $j^3 k^3$ (as above, I directly provide the resulting expressions):</p> <p>\begin{align*} P(11) &amp;=j+8jk-5j^2k-10j^2k^2+j^3k^2 \\ P(12) &amp;=j+9jk-6j^2k-15j^2k^2+3j^3k^2+j^3k^3 \\P(13) &amp; = j+10jk-7j^2k -21j^2k^2+6j^3k^2+4j^3k^3 \\ P (14) &amp; = j+11jk-8j^2k-28j^2k^2+10j^3k^2+10j^3k^3\\ \end{align*}</p> <p>The final pattern is therefore as follows. Our expansions of $P (n) $ include:</p> <ul> <li><p>in the odd positions, $\lfloor (n+1)/4 \rfloor \,\, $ terms of the form $$(-1)^{i-1} \binom{n-3i}{i-1} j^i k^{i-1} \,\, $$ where $$i=1,2... \lfloor (n+1)/4 \rfloor \,\, $$ For example, in the expansion of $P (14)$ there are $ \lfloor (14+1)/4 \rfloor =3 \, \,\, $ terms of this form, which are $j $, $-8j^2k $, and $10 j^3k^2$;</p></li> <li><p>in the even positions, $\lfloor n/4 \rfloor \,\, $ terms of the form $$(-1)^{i-1} \binom{n-3i}{i} j^i k^i \,\, $$ where $$i=1,2... \lfloor n/4 \rfloor$$</p> <p>Accordingly, taking again for example the expansion of $P (14)$, there are $ \lfloor 14/4 \rfloor =3 \,$ terms of this form, which are $11jk$, $-28j^2k^2$, and $10 j^3k^3$.</p></li> </ul> <p>Therefore, we obtain the following general expression for $P (n) $:</p> <p>$$P (n)=\sum_{i}^{\lfloor (n+1)/4 \rfloor } (-1)^{i-1} \binom{n-3i}{i-1} j^{i}k^{i-1} +\sum_{i}^{\lfloor n/4 \rfloor } (-1)^{i-1} \binom{n-3i}{i} j^i k^i $$</p> <p>Reminding that $j=p^3$ and $k=(1-p) $, we finally get</p> <p>$$P (n)=\sum_{i}^{\lfloor (n+1)/4 \rfloor } (-1)^{i-1} \binom{n-3i}{i-1} p^{3i}(1-p)^{i-1} + \sum_{i}^{\lfloor n/4 \rfloor } (-1)^{i-1} \binom{n-3i}{i} p^{3i} (1-p)^i $$</p> <p>Setting $p=0.7$, we obtain the probability asked in the OP for, which is</p> <p>$$P (n)=\sum_{i}^{\lfloor (n+1)/4 \rfloor } (-1)^{i-1} \binom{n-3i}{i-1} 0.7^{3i} \cdot 0.3^{i-1} + \sum_{i}^{\lfloor n/4 \rfloor } (-1)^{i-1} \binom{n-3i}{i} 0.7^{3i}\cdot 0.3^i $$</p> <p>Also note that the two sums have the same number of terms when $n \not\equiv -1 \,\, 
\text {mod}\,\, 4$, whereas the first sum has one term more than the second one when $n \equiv -1 \,\,\text {mod} \, 4$, because in this case $(n+1)/4$ is integer. This term is given by</p> <p>$$\displaystyle (-1)^{\frac{(n+1)}{4}-1} \binom{n-\frac{3(n+1)}{4}}{ \frac{(n+1)}{4}-1}\, 0.7^{\frac{3(n+1)}{4}} \, 0.3^{\frac{(n+1)}{4}} $$ $$\displaystyle = (-1)^{\frac{(n-3)}{4}} \binom{\frac{(n-3)}{4}}{\frac{(n-3)}{4}}\, 0.7^{\frac{3(n+1)}{4}} \, 0.3^{\frac{(n+1)}{4}} $$ $$\displaystyle = (-1)^{\frac{(n-3)}{4}} \, 0.7^{\frac{3(n+1)}{4}} \, 0.3^{\frac{(n+1)}{4}} $$</p> <p>So the probability asked in the OP can be alternatively expressed as</p> <p>$$P (n)=\sum_{i}^{\lfloor n/4 \rfloor } (-1)^{i-1} \binom{n-3i}{i}\, 0.7^{3i}\, 0.3^{i} \left( \frac {i}{0.3(n-4 i+1)} +1 \right) + S$$</p> <p>where</p> <p>$$S = \begin{cases} 0 &amp; \mbox{if } n \not\equiv -1 \, \text {mod}\, 4\\ (-1)^{\frac{(n-3)}{4}} \, 0.7^{\frac{3(n+1)}{4}} \, 0.3^{\frac{(n+1)}{4}} &amp; \mbox{if } n \equiv -1 \, \text {mod}\, 4 \end{cases}$$</p> <p>It could also be observed that $|S|$ is relatively small and rapidly decreases for increasing $n$: its value is $\approx 0.0106\, \,\, $ for $n=7$, becoming $\approx 0.00109\, \,\, $ for $n=11$ and $\approx 0.000112\,\,\, $ for $n=15$, so that its contribute to the overall probability is nearly negligible. The last expansion without the $S$ term can therefore considered a valid approximation even for the case $n \equiv -1 \, \text {mod}\, 4$. </p> <p>The probability of three events in a row given by these formulas for $p=0.7$ and $n=3$ to $10$ have already been provided above. As expected, for increasing $n $, the value of $P (n)$ approaches $1$. 
The first value for which the probability achieves $95\%$ is $n=15$, since $P (15) \approx 0.9553\,\,\, $, as confirmed by WA <a href="http://www.wolframalpha.com/input/?i=0.7%5E(12)%200.3%5E(4)%20%2B%20%20sum%20%20%5B(-1)%5E(i-1)%20(((15-3i)%20choose%20i)%200.7%5E(3i)%200.3%5Ei%20(i%2F(0.3(15-4%20i%2B1))%2B1)%20%20%20),%7Bi,%2015%2F4%7D%5D" rel="nofollow noreferrer">here</a>. The probability achieves $97.5\%$ and $99\%$ for $n=18$ and $n=22$, respectively, since $P(18) \approx 0.9772\,\,\, $ and $P(22) \approx 0.9909\,\,\, $, as confirmed by WA <a href="http://www.wolframalpha.com/input/?i=sum%20%20%5B(-1)%5E(i-1)%20(((18-3i)%20choose%20i)%200.7%5E(3i)%200.3%5Ei%20(i%2F(0.3(18-4%20i%2B1))%2B1)%20%20%20),%7Bi,%2018%2F4%7D%5D" rel="nofollow noreferrer">here</a> and <a href="http://www.wolframalpha.com/input/?i=sum%20%20%5B(-1)%5E(i-1)%20(((22-3i)%20choose%20i)%200.7%5E(3i)%200.3%5Ei%20(i%2F(0.3(22-4%20i%2B1))%2B1)%20%20%20),%7Bi,%2022%2F4%7D%5D" rel="nofollow noreferrer">here</a>. </p> Answer 63: <p>The issue with looking at opportunities for success is that they are not completely independent. For example, in a row of size 4, if the first three aren't all successes, this greatly decreases the chance that the last three aren't all successes (the only way this can happen is if only the first trial was a failure).</p> <p>You can still get this approach to work (via correcting for events which are dependent via some sort of inclusion-exclusion), but generally a cleaner method for these types of problems is to write down a recurrence (or more generally, model them as a <a href="https://en.wikipedia.org/wiki/Markov_chain" rel="nofollow noreferrer">finite state Markov chain</a>).</p> <p>Let's let $P(N)$ be the probability a row of length $N$ has no 3 consecutive successes. To help us write down a recurrence, we'll break $P(N)$ down into 3 smaller probabilities. 
Let's let $P(N, 0)$ be the probability a row of length $N$ has no 3 consecutive successes AND that the last attempt was a failure. Similarly, let $P(N,1)$ be the probability that a row of length $N$ has no 3 consecutive successes AND that the last attempt was a success, but the second-last attempt was a failure, and finally, let $P(N,2)$ be the probability that a row of length $N$ has no 3 consecutive successes AND the last two attempts were successes, but the third-to-last attempt was a failure. Intuitively, these different cases measure how "close" we are to getting 3 successes in a row. It's not too hard to see that $P(N) = P(N,0) + P(N,1) + P(N,2)$, since if the last three attempts are all successes, you've already won.</p> <p>Now, let's try to write $P(N+1, 0)$ in terms of smaller cases. The probability you have no 3 consecutive successes in a row of length $N+1$ and that your last attempt was a failure is simply the probability that there are no 3 consecutive successes in the first $N$ tries and that you fail the $N+1$st try. This gives us the equation:</p> <p>$$P(N+1,0) = 0.3P(N) = 0.3P(N,0) + 0.3P(N,1) + 0.3P(N,2)$$</p> <p>Via similar logic, we can show that $P(N+1,1)$ and $P(N+1,2)$ satisfy the following equations:</p> <p>$$P(N+1,1) = 0.7P(N, 0)$$</p> <p>and</p> <p>$$P(N+1,2) = 0.7P(N, 1)$$</p> <p>Substituting these into the original equation for $P(N+1,0)$, we get:</p> <p>$$P(N+1, 0) = 0.3P(N,0) + 0.21P(N-1, 0) + 0.147P(N-2,0)$$</p> <p>Now, this is just a standard <a href="https://en.wikipedia.org/wiki/Recurrence_relation" rel="nofollow noreferrer">linear recurrence relation</a>. 
If $\lambda_1$, $\lambda_2$, and $\lambda_3$ are the roots of $z^3 - 0.3z^2-0.21z-0.147=0$, then there are some constants $a_1, a_2, a_3$ which you can solve for so that</p> <p>$$P(N, 0) = a_1\lambda_1^N + a_2\lambda_2^N + a_3\lambda_3^N$$</p> <p>Since $P(N) = P(N+1,0)/0.3$, this gives a similar formula for $P(N)$ (and ultimately for $1-P(N)$, the probability a success does occur 3 consecutive times). </p> Answer 64: <p>Let $p=0.7$ be the probability of success, $\mathrm{f}(n)$ be the probability of a sequence of length $n$ having 3 consecutive successes and $\mathrm{g}(n)=1-\mathrm{f}(n)$, where $\mathrm{g}(n)$ corresponds to the probability of a sequence of length $n$ having at most 2 consecutive successes.</p> <p>It holds $\mathrm{g}(0) = \mathrm{g}(1) = \mathrm{g}(2) = 1$.</p> <p>Then $\mathrm{g}(n) = (1-p)\mathrm{g}(n-1) + (1-p)p\mathrm{g}(n-2) + (1-p)p^2\mathrm{g}(n-3)$.</p> <p>This holds because all sequences of length $n$ that have at most 2 consecutive successes can be constructed from (1) all such sequences of length $n-1$ suffixed by a failure plus; (2) all such sequences of length $n-2$ suffixed by a failure and a success, plus; (3) all such sequences of length $n-3$ suffixed by a failure and two successes.</p> Answer 65: <p>I saw that the answers to the question include either regex (Automata Theory) or linear recurrences (Linear Algebra), so for those interested I decided to post <a href="https://people.math.carleton.ca/%7Ebsteinbg/classes/automata2009/generatingfunctions.pdf" rel="nofollow noreferrer">this paper</a> on generating functions that neatly explains the interconnection between them.</p> Answer 66: <blockquote> <p>Let's denote the event under consideration with $a$ \begin{align*} P(X=a)=0.7 \end{align*} and we denote the complementary event with $b$.</p> </blockquote> <p>We are looking for the words of length $n$ and their probability of occurrence which do not contain three or more consecutive $a$'s. 
The result is $1$ minus this probability.</p> <blockquote> <p>We can describe the set of these invalid words as the words built from the alphabet $V=\{a,b\}$ which contain at most one or two consecutive $a$'s.</p> <p>These are words</p> <ul> <li><p>starting with zero or more $b$'s:$$b^*$$</p></li> <li><p>followed by zero or more occurrences of $a$ or $aa$ each followed by one or more $b$'s: $$(ab^+|aab^+)^*$$</p></li> <li><p>and terminating with zero, one or two $a$'s $$(\varepsilon|a|aa)$$</p></li> </ul> <p>We obtain</p> <p>\begin{align*} b^*(ab^+|aab^+)^*(\varepsilon|a|aa)\tag{1} \end{align*}</p> </blockquote> <p>The regular expression (1) generates all invalid words in a <em>unique</em> manner. In such cases we can use it to derive a generating function $$\sum_{n=0}^\infty a_n z^n$$ with $a_n$ giving the number of invalid words of length $n$.</p> <p>In order to do so all we need to know is the <em><a href="https://en.wikipedia.org/wiki/Geometric_series#Geometric_power_series" rel="nofollow noreferrer">geometric series expansion</a></em> since the $star$ operator \begin{align*} a^*=\left(\varepsilon|a|a^2|a^3|\cdots\right)\qquad\text{ translates to }\qquad 1+z+z^2+z^3+\cdots=\frac{1}{1-z} \end{align*}</p> <p>Accordingly $a^+=aa^*$ translates to $\frac{z}{1-z}$ and alternatives like $(\varepsilon|a|aa)$ can be written as $1+z+z^2$.</p> <blockquote> <p>We translate the regular expression (1) into a generating function (by mixing up somewhat the symbolic to provide some intermediate steps).</p> <p>Since we want to calculate the probabilities of occurrence of $P(X=a)$ we keep track of $a$ and $b$ by respecting them as corresponding factors in the generating function.</p> <p>\begin{align*} b^*\left(ab^+|aab^+\right)^*(\varepsilon|a|aa) &amp;\longrightarrow \quad \frac{1}{1-bz}\left(\left.\frac{abz^2}{1-bz}\right|\frac{a^2bz^3}{1-bz}\right)^*\left(1+az+a^2z^2\right)\\ &amp;\longrightarrow \quad \frac{1}{1-bz}\left(\frac{abz^2+a^2bz^3}{1-bz}\right)^*(1+az+a^2z^2)\\ 
&amp;\longrightarrow \quad \frac{1}{1-bz}\cdot\frac{1}{1-\frac{abz^2+a^2bz^3}{1-bz}}(1+az+a^2z^2)\\ &amp;\quad\quad=\frac{1+az+a^2z^2}{1-bz-abz^2-a^2bz^3}\tag{1} \end{align*}</p> </blockquote> <p>$$ $$</p> <blockquote> <p>We conclude: The number of invalid words is given by (1). So, the number of valid words is the number of all words minus the number of invalid words. We obtain the generating function $A(z)$ \begin{align*} A(z)&amp;=\sum_{n=0}^\infty(a+b)^nz^n-\frac{1+az+a^2z^2}{1-bz-az^2-a^2z^3}\\ &amp;=\frac{1}{1-(a+b)z}-\frac{1+az+a^2z^2}{1-bz-az^2-a^2z^3}\\ &amp;=\frac{a^3z^3}{(1-(a+b)z)(1-bz-abz^2-a^2bz^3)}\\ &amp;=a^3z^3+a^3(a+2b)z^4+a^3(\color{blue}{1}a^2+\color{blue}{4}ab+\color{blue}{3}b^2)z^5\\ &amp;\qquad a^3(a+b)^2(a+4b)z^6+a^3(a^4+7a^3b+18a^2b^2+16ab^3+5b^4)z^7+\cdots \end{align*}</p> <p>The expansion was done with the help of Wolfram Alpha. We see that e.g. the number of valid words of length $5$ is $\color{blue}{1}+\color{blue}{4}+\color{blue}{3}=8$.</p> </blockquote> <p>$$ $$</p> <blockquote> <p>Out of $2^5=32$ binary words of length $5$ there are $8$ valid words which are marked $\color{blue}{\text{blue}}$ in the table below.</p> <p>We obtain \begin{array}{cccc} \color{blue}{aaa}aa\qquad&amp;ab\color{blue}{aaa}\qquad&amp;b\color{blue}{aaa}a\qquad&amp;bb\color{blue}{aaa}\\ \color{blue}{aaa}ab\qquad&amp;abaab\qquad&amp;b\color{blue}{aaa}b\qquad&amp;bbaab\\ \color{blue}{aaa}ba\qquad&amp;ababa\qquad&amp;baaba\qquad&amp;bbaba\\ \color{blue}{aaa}bb\qquad&amp;ababb\qquad&amp;baabb\qquad&amp;bbabb\\ aabaa\qquad&amp;abbaa\qquad&amp;babaa\qquad&amp;bbbaa\\ aabab\qquad&amp;abbab\qquad&amp;babab\qquad&amp;bbbab\\ aabba\qquad&amp;abbba\qquad&amp;babba\qquad&amp;bbbba\\ aabbb\qquad&amp;abbbb\qquad&amp;babbb\qquad&amp;bbbbb\\ \end{array}</p> </blockquote> <p>We denote with $[z^n]$ the coefficient of $z^n$ in a series.</p> <blockquote> <p>We conclude</p> <ul> <li>The number of occurrences of wanted words of size $n$ is the coefficient of $z^n$ of $A(z)$ evaluated 
at $a=b=1$ and given as OEIS sequence <em><a href="http://oeis.org/A050231" rel="nofollow noreferrer">050231</a></em>.</li> </ul> <p>\begin{align*} \left.[z^n]A(z)\right|_{a=b=1} \end{align*}</p> <ul> <li><p>The wanted probability of valid words of size $n$ is the coefficient of $z^n$ of $A(z)$ evaluated at $a=0.7,b=0.3$. \begin{align*} \left.[z^n]A(z)\right|_{a=0.7,b=0.3} \end{align*}</p></li> <li><p>We obtain for $n=0$ up to $n=7$ \begin{array}{c|cl} n&amp;\left.[z^n]A(z)\right|_{a=b=1}&amp;\left.[z^n]A(z)\right|_{a=0.7,b=0.3}\\ \hline 0&amp;0&amp;0\\ 1&amp;0&amp;0\\ 2&amp;0&amp;0\\ 3&amp;1&amp;0.343\\ 4&amp;3&amp;0.4459\\ 5&amp;8&amp;0.5488\\ 6&amp;20&amp;0.6517\\ 7&amp;47&amp;0.7193\\ \end{array}</p></li> </ul> </blockquote> Answer 67: <p>I would propose an alternative - maybe easier - approach, trying to give a formula that provides a generalized answer for the probability of three events in a row, given the probability $p $ that the event occurs, as a function of the sample size $n$. As shown below, this approach may also be useful to address an additional issue of this problem highlighted in one of the comments, i.e. the sample size needed to "guarantee" a success with a given confidence level. </p> <p>Let us call $P (n) $ the probability that our event occurs three times in a row within a sequence of $n $ trials. Also, for each trial, let us denote as Y the case that our event occurs, and as N the case that it does not occur. So, in the specific case described in the OP, we are looking for the probability of getting a triple YYY in a sequence of $n $ elements when $p=0.7$.</p> <p>As a general rule, we can note that, passing from a sample size $n $ to a sample size $n+1$, the probability that our event occurs three times in a row increases only because there is the possibility that the triple YYY, not obtained in the first $n $ trials, occurs as a final triple in the sequence of $n+1$ trials. 
So, to calculate $P (n+1 )$ from $ P (n) $, we simply have to add the probability that the triple in a row is obtained for the first time as a final sequence NYYY. Note that the N preceding the triple YYY is strictly necessary to satisfy the condition that the triple has not been obtained in the first $n $ trials. </p> <p>We can then start with $n=3$, for which we trivially have $P (3)=p^3$. For $n=4$, we have to add to $P (3) $ the probability to get a sequence NYYY, i.e. we have to add $p^3(1-p)$. Similarly, for $n=5$, we have to add to $P (4) $ the probability to get a final sequence NYYY (note that this is not affected by what happens in the first of the five trials), which means that we must add $p^3(1-p)$ as in the previous case. Lastly, for $n=6$, we have again to add to $P (5) $ the probability to get a final sequence NYYY (note that, as above, this is not affected by what happens in the first and in the second of the six trials). This leads to a further increase by $p^3(1-p)$. So, for this initial group of four values of $n$ in our probability progression, we directly obtain the recurrence</p> <p>$$P (n)=P (n-1)+ p^3 (1-p)\,\,\,\, \text {(for n=4 to 6)}$$</p> <p>Accordingly, for the specific case $p=0.7$, this formula gives the initial values </p> <p>\begin{aligned} P(3) &amp; =0.7^3=0.343 \\ P (4) &amp; = 0.7^3+0.3 \cdot 0.7^3 = 0.4459 \\ P (5) &amp; = 0.7^3+2\cdot 0.3 \cdot 0.7^3 = 0.5488 \\ P (6) &amp; = 0.7^3+3\cdot 0.3 \cdot 0.7^3 = 0.6517 \\ \end{aligned} </p> <p>already obtained in a previous answer via a different method.</p> <p>Now let us consider the values of $n \geq 7$. Here something changes, because the fact that the sequence ends with NYYY does not imply that the triple YYY has not occurred before, i.e. in the first $n-4$ trials. So, at each step, the probability of getting a final NYYY must be multiplied by the probability of failure in the first $n-4$ trials, which is $1-P (n-4 ) $. 
Therefore, for $n=7$ we have to add to $P (6) $ the probability given by $p^3(1-p) [1-P(3)]\,\,$, for $n=8$ we have to add to $P (7) $ the probability given by $p^3(1-p) [1-P(4)] \,\, $, and so on. This leads to the recurrence</p> <p>$$P (n)=P (n-1)+ p^3(1-p)[1-P(n-4)]\,\,\,\, \text{(for n}\, \geq \text {7)}$$</p> <p>We can now try to get a closed form for this last recurrence. Let us set, for simplicity, $p^3=j$ and $(1-p) =k \,\, $. The recurrence becomes </p> <p>$$P (n)=P (n-1)+ jk-jk \,P(n-4)\, \,$$</p> <p>Note that the four values of $P (n)$ already determined can be written as $$P (3)=j $$ $$P (4)=j +jk$$ $$P (5)=j +2jk$$ $$P (6)=j +3jk$$</p> <p>so that they contain only terms of degree $j $ or $jk $. Continuing our probability progression and focusing on the second group of four values of $n$ (from $n=7\,\, $ to $n=10 \,\, $), new terms of degree $j^2k $ and $j^2k^2$ appear applying the recurrence above (for the sake of readability, here I directly provide the resulting expressions avoiding the rather tedious step-by-step calculations):</p> <p>\begin{aligned} P (7) &amp; = P(6)+ jk-jk P (3)=j+4jk-j^2k \\ P (8) &amp; = P(7)+ jk-jk P (4)= j+ 5jk -2j^2k-j^2k^2 \\ P (9) &amp; = P(8) + jk-jk P (5)= j+ 6jk -3j^2k-3j^2k^2 \\ P (10) &amp; = P (9) + jk-jk P (6)= j+ 7jk -4j^2k-6j^2k^2 \\ \end{aligned}</p> <p>Some pattern begins to emerge in these expansions. For the specific case $p=0.7$, $$P (7)\approx 0.7193$$ $$P (8)\approx 0.7763$$ $$P (9) \approx 0.8228$$ $$P (10) \approx 0.8586$$</p> <p>We can repeat the procedure, applying the same recurrence and continuing with the third group of four values of $n $ (i.e., $n=11$ to $14$). 
Again, new terms appear, in this case of degree $j^3k^2$ and $j^3 k^3$ (as above, I directly provide the resulting expressions):</p> <p>\begin{align*} P(11) &amp;=j+8jk-5j^2k-10j^2k^2+j^3k^2 \\ P(12) &amp;=j+9jk-6j^2k-15j^2k^2+3j^3k^2+j^3k^3 \\P(13) &amp; = j+10jk-7j^2k -21j^2k^2+6j^3k^2+4j^3k^3 \\ P (14) &amp; = j+11jk-8j^2k-28j^2k^2+10j^3k^2+10j^3k^3\\ \end{align*}</p> <p>The final pattern is therefore as follows. Our expansions of $P (n) $ include:</p> <ul> <li><p>in the odd positions, $\lfloor (n+1)/4 \rfloor \,\, $ terms of the form $$(-1)^{i-1} \binom{n-3i}{i-1} j^i k^{i-1} \,\, $$ where $$i=1,2... \lfloor (n+1)/4 \rfloor \,\, $$ For example, in the expansion of $P (14)$ there are $ \lfloor (14+1)/4 \rfloor =3 \, \,\, $ terms of this form, which are $j $, $-8j^2k $, and $10 j^3k^2$;</p></li> <li><p>in the even positions, $\lfloor n/4 \rfloor \,\, $ terms of the form $$(-1)^{i-1} \binom{n-3i}{i} j^i k^i \,\, $$ where $$i=1,2... \lfloor n/4 \rfloor$$</p> <p>Accordingly, taking again for example the expansion of $P (14)$, there are $ \lfloor 14/4 \rfloor =3 \,$ terms of this form, which are $11jk$, $-28j^2k^2$, and $10 j^3k^3$.</p></li> </ul> <p>Therefore, we obtain the following general expression for $P (n) $:</p> <p>$$P (n)=\sum_{i=1}^{\lfloor (n+1)/4 \rfloor } (-1)^{i-1} \binom{n-3i}{i-1} j^{i}k^{i-1} +\sum_{i=1}^{\lfloor n/4 \rfloor } (-1)^{i-1} \binom{n-3i}{i} j^i k^i $$</p> <p>Recalling that $j=p^3$ and $k=(1-p) $, we finally get</p> <p>$$P (n)=\sum_{i=1}^{\lfloor (n+1)/4 \rfloor } (-1)^{i-1} \binom{n-3i}{i-1} p^{3i}(1-p)^{i-1} + \sum_{i=1}^{\lfloor n/4 \rfloor } (-1)^{i-1} \binom{n-3i}{i} p^{3i} (1-p)^i $$</p> <p>Setting $p=0.7$, we obtain the probability asked for in the OP, which is</p> <p>$$P (n)=\sum_{i=1}^{\lfloor (n+1)/4 \rfloor } (-1)^{i-1} \binom{n-3i}{i-1} 0.7^{3i} \cdot 0.3^{i-1} + \sum_{i=1}^{\lfloor n/4 \rfloor } (-1)^{i-1} \binom{n-3i}{i} 0.7^{3i}\cdot 0.3^i $$</p> <p>Also note that the two sums have the same number of terms when $n \not\equiv -1 \,\, 
\text {mod}\,\, 4$, whereas the first sum has one term more than the second one when $n \equiv -1 \,\,\text {mod} \, 4$, because in this case $(n+1)/4$ is an integer. This term is given by</p> <p>$$\displaystyle (-1)^{\frac{(n+1)}{4}-1} \binom{n-\frac{3(n+1)}{4}}{ \frac{(n+1)}{4}-1}\, 0.7^{\frac{3(n+1)}{4}} \, 0.3^{\frac{(n+1)}{4}} $$ $$\displaystyle = (-1)^{\frac{(n-3)}{4}} \binom{\frac{(n-3)}{4}}{\frac{(n-3)}{4}}\, 0.7^{\frac{3(n+1)}{4}} \, 0.3^{\frac{(n+1)}{4}} $$ $$\displaystyle = (-1)^{\frac{(n-3)}{4}} \, 0.7^{\frac{3(n+1)}{4}} \, 0.3^{\frac{(n+1)}{4}} $$</p> <p>So the probability asked in the OP can be alternatively expressed as</p> <p>$$P (n)=\sum_{i=1}^{\lfloor n/4 \rfloor } (-1)^{i-1} \binom{n-3i}{i}\, 0.7^{3i}\, 0.3^{i} \left( \frac {i}{0.3(n-4 i+1)} +1 \right) + S$$</p> <p>where</p> <p>$$S = \begin{cases} 0 &amp; \mbox{if } n \not\equiv -1 \, \text {mod}\, 4\\ (-1)^{\frac{(n-3)}{4}} \, 0.7^{\frac{3(n+1)}{4}} \, 0.3^{\frac{(n+1)}{4}} &amp; \mbox{if } n \equiv -1 \, \text {mod}\, 4 \end{cases}$$</p> <p>It could also be observed that $|S|$ is relatively small and rapidly decreases for increasing $n$: its value is $\approx 0.0106\, \,\, $ for $n=7$, becoming $\approx 0.00109\, \,\, $ for $n=11$ and $\approx 0.000112\,\,\, $ for $n=15$, so that its contribution to the overall probability is nearly negligible. The last expansion without the $S$ term can therefore be considered a valid approximation even for the case $n \equiv -1 \, \text {mod}\, 4$. </p> <p>The probabilities of three events in a row given by these formulas for $p=0.7$ and $n=3$ to $10$ have already been provided above. As expected, for increasing $n $, the value of $P (n)$ approaches $1$. 
The first value for which the probability achieves $95\%$ is $n=15$, since $P (15) \approx 0.9549\,\,\, $ (cf. the WA setup <a href="http://www.wolframalpha.com/input/?i=0.7%5E(12)%200.3%5E(4)%20%2B%20%20sum%20%20%5B(-1)%5E(i-1)%20(((15-3i)%20choose%20i)%200.7%5E(3i)%200.3%5Ei%20(i%2F(0.3(15-4%20i%2B1))%2B1)%20%20%20),%7Bi,%2015%2F4%7D%5D" rel="nofollow noreferrer">here</a>; note that for $n=15$ the $S$ term enters with sign $(-1)^{(15-3)/4}=-1$). The probability achieves $97.5\%$ and $99\%$ for $n=18$ and $n=22$, respectively, since $P(18) \approx 0.9772\,\,\, $ and $P(22) \approx 0.9909\,\,\, $, as confirmed by WA <a href="http://www.wolframalpha.com/input/?i=sum%20%20%5B(-1)%5E(i-1)%20(((18-3i)%20choose%20i)%200.7%5E(3i)%200.3%5Ei%20(i%2F(0.3(18-4%20i%2B1))%2B1)%20%20%20),%7Bi,%2018%2F4%7D%5D" rel="nofollow noreferrer">here</a> and <a href="http://www.wolframalpha.com/input/?i=sum%20%20%5B(-1)%5E(i-1)%20(((22-3i)%20choose%20i)%200.7%5E(3i)%200.3%5Ei%20(i%2F(0.3(22-4%20i%2B1))%2B1)%20%20%20),%7Bi,%2022%2F4%7D%5D" rel="nofollow noreferrer">here</a>. </p> Answer 68: <p>The issue with looking at opportunities for success is that they are not completely independent. For example, in a row of size 4, if the first three aren't all successes, this greatly decreases the chance that the last three are all successes (the only way this can happen is if only the first trial was a failure).</p> <p>You can still get this approach to work (via correcting for events which are dependent via some sort of inclusion-exclusion), but generally a cleaner method for these types of problems is to write down a recurrence (or more generally, model them as a <a href="https://en.wikipedia.org/wiki/Markov_chain" rel="nofollow noreferrer">finite state Markov chain</a>).</p> <p>Let's let $P(N)$ be the probability a row of length $N$ has no 3 consecutive successes. To help us write down a recurrence, we'll break $P(N)$ down into 3 smaller probabilities. 
Let's let $P(N, 0)$ be the probability a row of length $N$ has no 3 consecutive successes AND that the last attempt was a failure. Similarly, let $P(N,1)$ be the probability that a row of length $N$ has no 3 consecutive successes AND that the last attempt was a success, but the second-last attempt was a failure, and finally, let $P(N,2)$ be the probability that a row of length $N$ has no 3 consecutive successes AND the last two attempts were successes, but the third-to-last attempt was a failure. Intuitively, these different cases measure how "close" we are to getting 3 successes in a row. It's not too hard to see that $P(N) = P(N,0) + P(N,1) + P(N,2)$, since if the last three attempts are all successes, you've already won.</p> <p>Now, let's try to write $P(N+1, 0)$ in terms of smaller cases. The probability you have no 3 consecutive successes in a row of length $N+1$ and that your last attempt was a failure is simply the probability that there are no 3 consecutive successes in the first $N$ tries and that you fail the $N+1$st try. This gives us the equation:</p> <p>$$P(N+1,0) = 0.3P(N) = 0.3P(N,0) + 0.3P(N,1) + 0.3P(N,2)$$</p> <p>Via similar logic, we can show that $P(N+1,1)$ and $P(N+1,2)$ satisfy the following equations:</p> <p>$$P(N+1,1) = 0.7P(N, 0)$$</p> <p>and</p> <p>$$P(N+1,2) = 0.7P(N, 1)$$</p> <p>Substituting these into the original equation for $P(N+1,0)$, we get:</p> <p>$$P(N+1, 0) = 0.3P(N,0) + 0.21P(N-1, 0) + 0.147P(N-2,0)$$</p> <p>Now, this is just a standard <a href="https://en.wikipedia.org/wiki/Recurrence_relation" rel="nofollow noreferrer">linear recurrence relation</a>. 
If $\lambda_1$, $\lambda_2$, and $\lambda_3$ are the roots of $z^3 - 0.3z^2-0.21z-0.147=0$, then there are some constants $a_1, a_2, a_3$ which you can solve for so that</p> <p>$$P(N, 0) = a_1\lambda_1^N + a_2\lambda_2^N + a_3\lambda_3^N$$</p> <p>Since $P(N) = P(N+1,0)/0.3$, this gives a similar formula for $P(N)$ (and ultimately for $1-P(N)$, the probability a success does occur 3 consecutive times). </p> Answer 69: <p>Let $p=0.7$ be the probability of success, $\mathrm{f}(n)$ be the probability of a sequence of length $n$ having 3 consecutive successes and $\mathrm{g}(n)=1-\mathrm{f}(n)$, where $\mathrm{g}(n)$ corresponds to the probability of a sequence of length $n$ having at most 2 consecutive successes.</p> <p>It holds $\mathrm{g}(0) = \mathrm{g}(1) = \mathrm{g}(2) = 1$.</p> <p>Then $\mathrm{g}(n) = (1-p)\mathrm{g}(n-1) + (1-p)p\mathrm{g}(n-2) + (1-p)p^2\mathrm{g}(n-3)$.</p> <p>This holds because all sequences of length $n$ that have at most 2 consecutive successes can be constructed from (1) all such sequences of length $n-1$ suffixed by a failure plus; (2) all such sequences of length $n-2$ suffixed by a failure and a success, plus; (3) all such sequences of length $n-3$ suffixed by a failure and two successes.</p> Answer 70: <p>I saw that the answers to the question include either regex (Automata Theory) or linear recurrences (Linear Algebra), so for those interested I decided to post <a href="https://people.math.carleton.ca/%7Ebsteinbg/classes/automata2009/generatingfunctions.pdf" rel="nofollow noreferrer">this paper</a> on generating functions that neatly explains the interconnection between them.</p>
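The $\mathrm{g}(n)$ recurrence given in one of the answers above is easy to cross-check numerically. The following Python sketch (not part of any of the original answers; the function names are made up for illustration) computes $P(n)=1-\mathrm{g}(n)$ and compares it against a brute-force sum over all $2^n$ outcome words for small $n$:

```python
# Complement recurrence from one of the answers above:
#   g(n) = (1-p)*g(n-1) + (1-p)*p*g(n-2) + (1-p)*p^2*g(n-3),  g(0)=g(1)=g(2)=1,
# where g(n) is the probability of NO run of 3 successes in n trials.
from itertools import product


def run_of_three_prob(n, p=0.7):
    """P(n) = 1 - g(n): probability of 3 consecutive successes in n trials."""
    g = [1.0, 1.0, 1.0]  # g(0), g(1), g(2): a run of 3 cannot occur yet
    for _ in range(3, n + 1):
        g.append((1 - p) * (g[-1] + p * g[-2] + p * p * g[-3]))
    return 1.0 - g[n]


def brute_force(n, p=0.7):
    """Same probability by summing over all 2^n outcome words (small n only)."""
    total = 0.0
    for word in product((1, 0), repeat=n):
        if any(word[i] + word[i + 1] + word[i + 2] == 3 for i in range(n - 2)):
            k = sum(word)
            total += p**k * (1 - p) ** (n - k)
    return total


for n in range(3, 8):
    print(n, round(run_of_three_prob(n), 4), round(brute_force(n), 4))
```

For $p=0.7$ this reproduces the values tabulated above, $0.343$, $0.4459$, $0.5488$, $0.6517$, $0.7193$ for $n=3,\dots,7$, by both methods.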
lower bound for a function of $\Gamma$
2,083,548
7
https://math.stackexchange.com/questions/2083548/lower-bound-for-a-function-of-gamma
<p>I would like to show that $$\frac{\Gamma(1-2x)\Gamma(1+x)}{\Gamma(1-x)}\geq 1$$ for real $x$ such that $|x|\leq \frac12$, where $\Gamma$ is the usual gamma function.</p> <p>I looked at the derivatives and went down a long road to prove this. I do, however, believe there should be a simpler way. I appreciate any hints or comments. Many thanks!</p>
[ { "answer_id": 2083625, "score": 3, "text": "<p>Note that the cases for $x=0$ and $x=-1/2$ are trivial.</p>\n\n<p>Next, we can write for $0&lt;x&lt;1/2$</p>\n\n<p>$$\\begin{align}\n\\frac{\\Gamma(1-2x)\\Gamma(1+x)}{\\Gamma(1-x)}&amp;=x\\frac{\\Gamma(1-2x)\\Gamma(x)}{\\Gamma(1-x)}\\\\\\\\\n&amp;=xB(1-2x,x)\\\\\\\\\n&amp;=x\\int_0^1 t^{-2x}(1-t)^{x-1}\\,dt\\\\\\\\\n&amp;\\ge x\\int_0^1(1-t)^{x-1}\\\\\\\\\n&amp;=1\n\\end{align}$$</p>\n\n<p>as was to be shown!</p>\n\n<hr>\n\n<p>For $-1/2&lt;x&lt;0$, we let $y=-x$ and write </p>\n\n<p>$$\\begin{align}\n\\frac{\\Gamma(1-2x)\\Gamma(1+x)}{\\Gamma(1-x)}&amp;=\\frac{\\Gamma(1+2y)\\Gamma(1-y)}{\\Gamma(1+y)}\\\\\\\\\n&amp;=2yB(2y,1-y)\\\\\\\\\n&amp;=2y\\int_0^1 t^{2y-1}(1-t)^{-y}\\,dt\\\\\\\\\n&amp;\\ge 2y\\int_0^1 t^{2y-1}\\,dt\\\\\\\\\n&amp;=1\n\\end{align}$$</p>\n\n<p>as was to be shown!</p>\n" }, { "answer_id": 2083633, "score": 2, "text": "<p>Since\n$$ \\Gamma(1+t)=e^{-\\gamma t}\\prod_{n\\geq 1}\\left(1+\\frac{t}{n}\\right)^{-1}e^{t/n} \\tag{1}$$\nwe have:\n$$ h(x)=\\frac{\\Gamma(1-2x)\\,\\Gamma(1+x)}{\\Gamma(1-x)}=\\prod_{n\\geq 1}\\frac{\\left(1-\\frac{x}{n}\\right)}{\\left(1-\\frac{2x}{n}\\right)\\left(1+\\frac{x}{n}\\right)}\\tag{2}$$\nand it is straightforward to check that $h(x)$ is a log-convex function on $I=\\left(-\\frac{1}{2},\\frac{1}{2}\\right)$.<br>\nSince $h'(0)=0$, $h(0)=1$ is a local minimum for $h(x)$ on $I$.</p>\n\n<hr>\n\n<p>We may even compute the Taylor series of $\\log h(x)$. 
Since:\n$$ \\sum_{n\\geq 2}\\frac{\\zeta(n)}{n} t^n = -\\gamma t+\\log\\Gamma(1-t)\\tag{3} $$\nwe have:\n$$ \\log h(x) = \\sum_{n\\geq 2}\\color{red}{\\frac{\\zeta(n)}{n}(2^n+(-1)^n-1)}\\, x^n.\\tag{4}$$</p>\n" }, { "answer_id": 2083625, "score": 3, "text": "<p>Note that the cases for $x=0$ and $x=-1/2$ are trivial.</p>\n\n<p>Next, we can write for $0&lt;x&lt;1/2$</p>\n\n<p>$$\\begin{align}\n\\frac{\\Gamma(1-2x)\\Gamma(1+x)}{\\Gamma(1-x)}&amp;=x\\frac{\\Gamma(1-2x)\\Gamma(x)}{\\Gamma(1-x)}\\\\\\\\\n&amp;=xB(1-2x,x)\\\\\\\\\n&amp;=x\\int_0^1 t^{-2x}(1-t)^{x-1}\\,dt\\\\\\\\\n&amp;\\ge x\\int_0^1(1-t)^{x-1}\\\\\\\\\n&amp;=1\n\\end{align}$$</p>\n\n<p>as was to be shown!</p>\n\n<hr>\n\n<p>For $-1/2&lt;x&lt;0$, we let $y=-x$ and write </p>\n\n<p>$$\\begin{align}\n\\frac{\\Gamma(1-2x)\\Gamma(1+x)}{\\Gamma(1-x)}&amp;=\\frac{\\Gamma(1+2y)\\Gamma(1-y)}{\\Gamma(1+y)}\\\\\\\\\n&amp;=2yB(2y,1-y)\\\\\\\\\n&amp;=2y\\int_0^1 t^{2y-1}(1-t)^{-y}\\,dt\\\\\\\\\n&amp;\\ge 2y\\int_0^1 t^{2y-1}\\,dt\\\\\\\\\n&amp;=1\n\\end{align}$$</p>\n\n<p>as was to be shown!</p>\n" }, { "answer_id": 2083633, "score": 2, "text": "<p>Since\n$$ \\Gamma(1+t)=e^{-\\gamma t}\\prod_{n\\geq 1}\\left(1+\\frac{t}{n}\\right)^{-1}e^{t/n} \\tag{1}$$\nwe have:\n$$ h(x)=\\frac{\\Gamma(1-2x)\\,\\Gamma(1+x)}{\\Gamma(1-x)}=\\prod_{n\\geq 1}\\frac{\\left(1-\\frac{x}{n}\\right)}{\\left(1-\\frac{2x}{n}\\right)\\left(1+\\frac{x}{n}\\right)}\\tag{2}$$\nand it is straightforward to check that $h(x)$ is a log-convex function on $I=\\left(-\\frac{1}{2},\\frac{1}{2}\\right)$.<br>\nSince $h'(0)=0$, $h(0)=1$ is a local minimum for $h(x)$ on $I$.</p>\n\n<hr>\n\n<p>We may even compute the Taylor series of $\\log h(x)$. 
Since:\n$$ \\sum_{n\\geq 2}\\frac{\\zeta(n)}{n} t^n = -\\gamma t+\\log\\Gamma(1-t)\\tag{3} $$\nwe have:\n$$ \\log h(x) = \\sum_{n\\geq 2}\\color{red}{\\frac{\\zeta(n)}{n}(2^n+(-1)^n-1)}\\, x^n.\\tag{4}$$</p>\n" }, { "answer_id": 2083625, "score": 3, "text": "<p>Note that the cases for $x=0$ and $x=-1/2$ are trivial.</p>\n\n<p>Next, we can write for $0&lt;x&lt;1/2$</p>\n\n<p>$$\\begin{align}\n\\frac{\\Gamma(1-2x)\\Gamma(1+x)}{\\Gamma(1-x)}&amp;=x\\frac{\\Gamma(1-2x)\\Gamma(x)}{\\Gamma(1-x)}\\\\\\\\\n&amp;=xB(1-2x,x)\\\\\\\\\n&amp;=x\\int_0^1 t^{-2x}(1-t)^{x-1}\\,dt\\\\\\\\\n&amp;\\ge x\\int_0^1(1-t)^{x-1}\\\\\\\\\n&amp;=1\n\\end{align}$$</p>\n\n<p>as was to be shown!</p>\n\n<hr>\n\n<p>For $-1/2&lt;x&lt;0$, we let $y=-x$ and write </p>\n\n<p>$$\\begin{align}\n\\frac{\\Gamma(1-2x)\\Gamma(1+x)}{\\Gamma(1-x)}&amp;=\\frac{\\Gamma(1+2y)\\Gamma(1-y)}{\\Gamma(1+y)}\\\\\\\\\n&amp;=2yB(2y,1-y)\\\\\\\\\n&amp;=2y\\int_0^1 t^{2y-1}(1-t)^{-y}\\,dt\\\\\\\\\n&amp;\\ge 2y\\int_0^1 t^{2y-1}\\,dt\\\\\\\\\n&amp;=1\n\\end{align}$$</p>\n\n<p>as was to be shown!</p>\n" }, { "answer_id": 2083633, "score": 2, "text": "<p>Since\n$$ \\Gamma(1+t)=e^{-\\gamma t}\\prod_{n\\geq 1}\\left(1+\\frac{t}{n}\\right)^{-1}e^{t/n} \\tag{1}$$\nwe have:\n$$ h(x)=\\frac{\\Gamma(1-2x)\\,\\Gamma(1+x)}{\\Gamma(1-x)}=\\prod_{n\\geq 1}\\frac{\\left(1-\\frac{x}{n}\\right)}{\\left(1-\\frac{2x}{n}\\right)\\left(1+\\frac{x}{n}\\right)}\\tag{2}$$\nand it is straightforward to check that $h(x)$ is a log-convex function on $I=\\left(-\\frac{1}{2},\\frac{1}{2}\\right)$.<br>\nSince $h'(0)=0$, $h(0)=1$ is a local minimum for $h(x)$ on $I$.</p>\n\n<hr>\n\n<p>We may even compute the Taylor series of $\\log h(x)$. 
Since:\n$$ \\sum_{n\\geq 2}\\frac{\\zeta(n)}{n} t^n = -\\gamma t+\\log\\Gamma(1-t)\\tag{3} $$\nwe have:\n$$ \\log h(x) = \\sum_{n\\geq 2}\\color{red}{\\frac{\\zeta(n)}{n}(2^n+(-1)^n-1)}\\, x^n.\\tag{4}$$</p>\n" }, { "answer_id": 2083625, "score": 3, "text": "<p>Note that the cases for $x=0$ and $x=-1/2$ are trivial.</p>\n\n<p>Next, we can write for $0&lt;x&lt;1/2$</p>\n\n<p>$$\\begin{align}\n\\frac{\\Gamma(1-2x)\\Gamma(1+x)}{\\Gamma(1-x)}&amp;=x\\frac{\\Gamma(1-2x)\\Gamma(x)}{\\Gamma(1-x)}\\\\\\\\\n&amp;=xB(1-2x,x)\\\\\\\\\n&amp;=x\\int_0^1 t^{-2x}(1-t)^{x-1}\\,dt\\\\\\\\\n&amp;\\ge x\\int_0^1(1-t)^{x-1}\\\\\\\\\n&amp;=1\n\\end{align}$$</p>\n\n<p>as was to be shown!</p>\n\n<hr>\n\n<p>For $-1/2&lt;x&lt;0$, we let $y=-x$ and write </p>\n\n<p>$$\\begin{align}\n\\frac{\\Gamma(1-2x)\\Gamma(1+x)}{\\Gamma(1-x)}&amp;=\\frac{\\Gamma(1+2y)\\Gamma(1-y)}{\\Gamma(1+y)}\\\\\\\\\n&amp;=2yB(2y,1-y)\\\\\\\\\n&amp;=2y\\int_0^1 t^{2y-1}(1-t)^{-y}\\,dt\\\\\\\\\n&amp;\\ge 2y\\int_0^1 t^{2y-1}\\,dt\\\\\\\\\n&amp;=1\n\\end{align}$$</p>\n\n<p>as was to be shown!</p>\n" }, { "answer_id": 2083633, "score": 2, "text": "<p>Since\n$$ \\Gamma(1+t)=e^{-\\gamma t}\\prod_{n\\geq 1}\\left(1+\\frac{t}{n}\\right)^{-1}e^{t/n} \\tag{1}$$\nwe have:\n$$ h(x)=\\frac{\\Gamma(1-2x)\\,\\Gamma(1+x)}{\\Gamma(1-x)}=\\prod_{n\\geq 1}\\frac{\\left(1-\\frac{x}{n}\\right)}{\\left(1-\\frac{2x}{n}\\right)\\left(1+\\frac{x}{n}\\right)}\\tag{2}$$\nand it is straightforward to check that $h(x)$ is a log-convex function on $I=\\left(-\\frac{1}{2},\\frac{1}{2}\\right)$.<br>\nSince $h'(0)=0$, $h(0)=1$ is a local minimum for $h(x)$ on $I$.</p>\n\n<hr>\n\n<p>We may even compute the Taylor series of $\\log h(x)$. 
Since:\n$$ \\sum_{n\\geq 2}\\frac{\\zeta(n)}{n} t^n = -\\gamma t+\\log\\Gamma(1-t)\\tag{3} $$\nwe have:\n$$ \\log h(x) = \\sum_{n\\geq 2}\\color{red}{\\frac{\\zeta(n)}{n}(2^n+(-1)^n-1)}\\, x^n.\\tag{4}$$</p>\n" }, { "answer_id": 2083625, "score": 3, "text": "<p>Note that the cases for $x=0$ and $x=-1/2$ are trivial.</p>\n\n<p>Next, we can write for $0&lt;x&lt;1/2$</p>\n\n<p>$$\\begin{align}\n\\frac{\\Gamma(1-2x)\\Gamma(1+x)}{\\Gamma(1-x)}&amp;=x\\frac{\\Gamma(1-2x)\\Gamma(x)}{\\Gamma(1-x)}\\\\\\\\\n&amp;=xB(1-2x,x)\\\\\\\\\n&amp;=x\\int_0^1 t^{-2x}(1-t)^{x-1}\\,dt\\\\\\\\\n&amp;\\ge x\\int_0^1(1-t)^{x-1}\\\\\\\\\n&amp;=1\n\\end{align}$$</p>\n\n<p>as was to be shown!</p>\n\n<hr>\n\n<p>For $-1/2&lt;x&lt;0$, we let $y=-x$ and write </p>\n\n<p>$$\\begin{align}\n\\frac{\\Gamma(1-2x)\\Gamma(1+x)}{\\Gamma(1-x)}&amp;=\\frac{\\Gamma(1+2y)\\Gamma(1-y)}{\\Gamma(1+y)}\\\\\\\\\n&amp;=2yB(2y,1-y)\\\\\\\\\n&amp;=2y\\int_0^1 t^{2y-1}(1-t)^{-y}\\,dt\\\\\\\\\n&amp;\\ge 2y\\int_0^1 t^{2y-1}\\,dt\\\\\\\\\n&amp;=1\n\\end{align}$$</p>\n\n<p>as was to be shown!</p>\n" }, { "answer_id": 2083633, "score": 2, "text": "<p>Since\n$$ \\Gamma(1+t)=e^{-\\gamma t}\\prod_{n\\geq 1}\\left(1+\\frac{t}{n}\\right)^{-1}e^{t/n} \\tag{1}$$\nwe have:\n$$ h(x)=\\frac{\\Gamma(1-2x)\\,\\Gamma(1+x)}{\\Gamma(1-x)}=\\prod_{n\\geq 1}\\frac{\\left(1-\\frac{x}{n}\\right)}{\\left(1-\\frac{2x}{n}\\right)\\left(1+\\frac{x}{n}\\right)}\\tag{2}$$\nand it is straightforward to check that $h(x)$ is a log-convex function on $I=\\left(-\\frac{1}{2},\\frac{1}{2}\\right)$.<br>\nSince $h'(0)=0$, $h(0)=1$ is a local minimum for $h(x)$ on $I$.</p>\n\n<hr>\n\n<p>We may even compute the Taylor series of $\\log h(x)$. 
Since:\n$$ \\sum_{n\\geq 2}\\frac{\\zeta(n)}{n} t^n = -\\gamma t+\\log\\Gamma(1-t)\\tag{3} $$\nwe have:\n$$ \\log h(x) = \\sum_{n\\geq 2}\\color{red}{\\frac{\\zeta(n)}{n}(2^n+(-1)^n-1)}\\, x^n.\\tag{4}$$</p>\n" }, { "answer_id": 2083625, "score": 3, "text": "<p>Note that the cases for $x=0$ and $x=-1/2$ are trivial.</p>\n\n<p>Next, we can write for $0&lt;x&lt;1/2$</p>\n\n<p>$$\\begin{align}\n\\frac{\\Gamma(1-2x)\\Gamma(1+x)}{\\Gamma(1-x)}&amp;=x\\frac{\\Gamma(1-2x)\\Gamma(x)}{\\Gamma(1-x)}\\\\\\\\\n&amp;=xB(1-2x,x)\\\\\\\\\n&amp;=x\\int_0^1 t^{-2x}(1-t)^{x-1}\\,dt\\\\\\\\\n&amp;\\ge x\\int_0^1(1-t)^{x-1}\\\\\\\\\n&amp;=1\n\\end{align}$$</p>\n\n<p>as was to be shown!</p>\n\n<hr>\n\n<p>For $-1/2&lt;x&lt;0$, we let $y=-x$ and write </p>\n\n<p>$$\\begin{align}\n\\frac{\\Gamma(1-2x)\\Gamma(1+x)}{\\Gamma(1-x)}&amp;=\\frac{\\Gamma(1+2y)\\Gamma(1-y)}{\\Gamma(1+y)}\\\\\\\\\n&amp;=2yB(2y,1-y)\\\\\\\\\n&amp;=2y\\int_0^1 t^{2y-1}(1-t)^{-y}\\,dt\\\\\\\\\n&amp;\\ge 2y\\int_0^1 t^{2y-1}\\,dt\\\\\\\\\n&amp;=1\n\\end{align}$$</p>\n\n<p>as was to be shown!</p>\n" }, { "answer_id": 2083633, "score": 2, "text": "<p>Since\n$$ \\Gamma(1+t)=e^{-\\gamma t}\\prod_{n\\geq 1}\\left(1+\\frac{t}{n}\\right)^{-1}e^{t/n} \\tag{1}$$\nwe have:\n$$ h(x)=\\frac{\\Gamma(1-2x)\\,\\Gamma(1+x)}{\\Gamma(1-x)}=\\prod_{n\\geq 1}\\frac{\\left(1-\\frac{x}{n}\\right)}{\\left(1-\\frac{2x}{n}\\right)\\left(1+\\frac{x}{n}\\right)}\\tag{2}$$\nand it is straightforward to check that $h(x)$ is a log-convex function on $I=\\left(-\\frac{1}{2},\\frac{1}{2}\\right)$.<br>\nSince $h'(0)=0$, $h(0)=1$ is a local minimum for $h(x)$ on $I$.</p>\n\n<hr>\n\n<p>We may even compute the Taylor series of $\\log h(x)$. 
Since:\n$$ \\sum_{n\\geq 2}\\frac{\\zeta(n)}{n} t^n = -\\gamma t+\\log\\Gamma(1-t)\\tag{3} $$\nwe have:\n$$ \\log h(x) = \\sum_{n\\geq 2}\\color{red}{\\frac{\\zeta(n)}{n}(2^n+(-1)^n-1)}\\, x^n.\\tag{4}$$</p>\n" }, { "answer_id": 2083625, "score": 3, "text": "<p>Note that the cases for $x=0$ and $x=-1/2$ are trivial.</p>\n\n<p>Next, we can write for $0&lt;x&lt;1/2$</p>\n\n<p>$$\\begin{align}\n\\frac{\\Gamma(1-2x)\\Gamma(1+x)}{\\Gamma(1-x)}&amp;=x\\frac{\\Gamma(1-2x)\\Gamma(x)}{\\Gamma(1-x)}\\\\\\\\\n&amp;=xB(1-2x,x)\\\\\\\\\n&amp;=x\\int_0^1 t^{-2x}(1-t)^{x-1}\\,dt\\\\\\\\\n&amp;\\ge x\\int_0^1(1-t)^{x-1}\\\\\\\\\n&amp;=1\n\\end{align}$$</p>\n\n<p>as was to be shown!</p>\n\n<hr>\n\n<p>For $-1/2&lt;x&lt;0$, we let $y=-x$ and write </p>\n\n<p>$$\\begin{align}\n\\frac{\\Gamma(1-2x)\\Gamma(1+x)}{\\Gamma(1-x)}&amp;=\\frac{\\Gamma(1+2y)\\Gamma(1-y)}{\\Gamma(1+y)}\\\\\\\\\n&amp;=2yB(2y,1-y)\\\\\\\\\n&amp;=2y\\int_0^1 t^{2y-1}(1-t)^{-y}\\,dt\\\\\\\\\n&amp;\\ge 2y\\int_0^1 t^{2y-1}\\,dt\\\\\\\\\n&amp;=1\n\\end{align}$$</p>\n\n<p>as was to be shown!</p>\n" }, { "answer_id": 2083633, "score": 2, "text": "<p>Since\n$$ \\Gamma(1+t)=e^{-\\gamma t}\\prod_{n\\geq 1}\\left(1+\\frac{t}{n}\\right)^{-1}e^{t/n} \\tag{1}$$\nwe have:\n$$ h(x)=\\frac{\\Gamma(1-2x)\\,\\Gamma(1+x)}{\\Gamma(1-x)}=\\prod_{n\\geq 1}\\frac{\\left(1-\\frac{x}{n}\\right)}{\\left(1-\\frac{2x}{n}\\right)\\left(1+\\frac{x}{n}\\right)}\\tag{2}$$\nand it is straightforward to check that $h(x)$ is a log-convex function on $I=\\left(-\\frac{1}{2},\\frac{1}{2}\\right)$.<br>\nSince $h'(0)=0$, $h(0)=1$ is a local minimum for $h(x)$ on $I$.</p>\n\n<hr>\n\n<p>We may even compute the Taylor series of $\\log h(x)$. 
Since:\n$$ \\sum_{n\\geq 2}\\frac{\\zeta(n)}{n} t^n = -\\gamma t+\\log\\Gamma(1-t)\\tag{3} $$\nwe have:\n$$ \\log h(x) = \\sum_{n\\geq 2}\\color{red}{\\frac{\\zeta(n)}{n}(2^n+(-1)^n-1)}\\, x^n.\\tag{4}$$</p>\n" }, { "answer_id": 2083625, "score": 3, "text": "<p>Note that the cases for $x=0$ and $x=-1/2$ are trivial.</p>\n\n<p>Next, we can write for $0&lt;x&lt;1/2$</p>\n\n<p>$$\\begin{align}\n\\frac{\\Gamma(1-2x)\\Gamma(1+x)}{\\Gamma(1-x)}&amp;=x\\frac{\\Gamma(1-2x)\\Gamma(x)}{\\Gamma(1-x)}\\\\\\\\\n&amp;=xB(1-2x,x)\\\\\\\\\n&amp;=x\\int_0^1 t^{-2x}(1-t)^{x-1}\\,dt\\\\\\\\\n&amp;\\ge x\\int_0^1(1-t)^{x-1}\\\\\\\\\n&amp;=1\n\\end{align}$$</p>\n\n<p>as was to be shown!</p>\n\n<hr>\n\n<p>For $-1/2&lt;x&lt;0$, we let $y=-x$ and write </p>\n\n<p>$$\\begin{align}\n\\frac{\\Gamma(1-2x)\\Gamma(1+x)}{\\Gamma(1-x)}&amp;=\\frac{\\Gamma(1+2y)\\Gamma(1-y)}{\\Gamma(1+y)}\\\\\\\\\n&amp;=2yB(2y,1-y)\\\\\\\\\n&amp;=2y\\int_0^1 t^{2y-1}(1-t)^{-y}\\,dt\\\\\\\\\n&amp;\\ge 2y\\int_0^1 t^{2y-1}\\,dt\\\\\\\\\n&amp;=1\n\\end{align}$$</p>\n\n<p>as was to be shown!</p>\n" }, { "answer_id": 2083633, "score": 2, "text": "<p>Since\n$$ \\Gamma(1+t)=e^{-\\gamma t}\\prod_{n\\geq 1}\\left(1+\\frac{t}{n}\\right)^{-1}e^{t/n} \\tag{1}$$\nwe have:\n$$ h(x)=\\frac{\\Gamma(1-2x)\\,\\Gamma(1+x)}{\\Gamma(1-x)}=\\prod_{n\\geq 1}\\frac{\\left(1-\\frac{x}{n}\\right)}{\\left(1-\\frac{2x}{n}\\right)\\left(1+\\frac{x}{n}\\right)}\\tag{2}$$\nand it is straightforward to check that $h(x)$ is a log-convex function on $I=\\left(-\\frac{1}{2},\\frac{1}{2}\\right)$.<br>\nSince $h'(0)=0$, $h(0)=1$ is a local minimum for $h(x)$ on $I$.</p>\n\n<hr>\n\n<p>We may even compute the Taylor series of $\\log h(x)$. 
Since:\n$$ \\sum_{n\\geq 2}\\frac{\\zeta(n)}{n} t^n = -\\gamma t+\\log\\Gamma(1-t)\\tag{3} $$\nwe have:\n$$ \\log h(x) = \\sum_{n\\geq 2}\\color{red}{\\frac{\\zeta(n)}{n}(2^n+(-1)^n-1)}\\, x^n.\\tag{4}$$</p>\n" }, { "answer_id": 2083625, "score": 3, "text": "<p>Note that the cases for $x=0$ and $x=-1/2$ are trivial.</p>\n\n<p>Next, we can write for $0&lt;x&lt;1/2$</p>\n\n<p>$$\\begin{align}\n\\frac{\\Gamma(1-2x)\\Gamma(1+x)}{\\Gamma(1-x)}&amp;=x\\frac{\\Gamma(1-2x)\\Gamma(x)}{\\Gamma(1-x)}\\\\\\\\\n&amp;=xB(1-2x,x)\\\\\\\\\n&amp;=x\\int_0^1 t^{-2x}(1-t)^{x-1}\\,dt\\\\\\\\\n&amp;\\ge x\\int_0^1(1-t)^{x-1}\\\\\\\\\n&amp;=1\n\\end{align}$$</p>\n\n<p>as was to be shown!</p>\n\n<hr>\n\n<p>For $-1/2&lt;x&lt;0$, we let $y=-x$ and write </p>\n\n<p>$$\\begin{align}\n\\frac{\\Gamma(1-2x)\\Gamma(1+x)}{\\Gamma(1-x)}&amp;=\\frac{\\Gamma(1+2y)\\Gamma(1-y)}{\\Gamma(1+y)}\\\\\\\\\n&amp;=2yB(2y,1-y)\\\\\\\\\n&amp;=2y\\int_0^1 t^{2y-1}(1-t)^{-y}\\,dt\\\\\\\\\n&amp;\\ge 2y\\int_0^1 t^{2y-1}\\,dt\\\\\\\\\n&amp;=1\n\\end{align}$$</p>\n\n<p>as was to be shown!</p>\n" }, { "answer_id": 2083633, "score": 2, "text": "<p>Since\n$$ \\Gamma(1+t)=e^{-\\gamma t}\\prod_{n\\geq 1}\\left(1+\\frac{t}{n}\\right)^{-1}e^{t/n} \\tag{1}$$\nwe have:\n$$ h(x)=\\frac{\\Gamma(1-2x)\\,\\Gamma(1+x)}{\\Gamma(1-x)}=\\prod_{n\\geq 1}\\frac{\\left(1-\\frac{x}{n}\\right)}{\\left(1-\\frac{2x}{n}\\right)\\left(1+\\frac{x}{n}\\right)}\\tag{2}$$\nand it is straightforward to check that $h(x)$ is a log-convex function on $I=\\left(-\\frac{1}{2},\\frac{1}{2}\\right)$.<br>\nSince $h'(0)=0$, $h(0)=1$ is a local minimum for $h(x)$ on $I$.</p>\n\n<hr>\n\n<p>We may even compute the Taylor series of $\\log h(x)$. 
Since:\n$$ \\sum_{n\\geq 2}\\frac{\\zeta(n)}{n} t^n = -\\gamma t+\\log\\Gamma(1-t)\\tag{3} $$\nwe have:\n$$ \\log h(x) = \\sum_{n\\geq 2}\\color{red}{\\frac{\\zeta(n)}{n}(2^n+(-1)^n-1)}\\, x^n.\\tag{4}$$</p>\n" }, { "answer_id": 2083625, "score": 3, "text": "<p>Note that the cases for $x=0$ and $x=-1/2$ are trivial.</p>\n\n<p>Next, we can write for $0&lt;x&lt;1/2$</p>\n\n<p>$$\\begin{align}\n\\frac{\\Gamma(1-2x)\\Gamma(1+x)}{\\Gamma(1-x)}&amp;=x\\frac{\\Gamma(1-2x)\\Gamma(x)}{\\Gamma(1-x)}\\\\\\\\\n&amp;=xB(1-2x,x)\\\\\\\\\n&amp;=x\\int_0^1 t^{-2x}(1-t)^{x-1}\\,dt\\\\\\\\\n&amp;\\ge x\\int_0^1(1-t)^{x-1}\\\\\\\\\n&amp;=1\n\\end{align}$$</p>\n\n<p>as was to be shown!</p>\n\n<hr>\n\n<p>For $-1/2&lt;x&lt;0$, we let $y=-x$ and write </p>\n\n<p>$$\\begin{align}\n\\frac{\\Gamma(1-2x)\\Gamma(1+x)}{\\Gamma(1-x)}&amp;=\\frac{\\Gamma(1+2y)\\Gamma(1-y)}{\\Gamma(1+y)}\\\\\\\\\n&amp;=2yB(2y,1-y)\\\\\\\\\n&amp;=2y\\int_0^1 t^{2y-1}(1-t)^{-y}\\,dt\\\\\\\\\n&amp;\\ge 2y\\int_0^1 t^{2y-1}\\,dt\\\\\\\\\n&amp;=1\n\\end{align}$$</p>\n\n<p>as was to be shown!</p>\n" }, { "answer_id": 2083633, "score": 2, "text": "<p>Since\n$$ \\Gamma(1+t)=e^{-\\gamma t}\\prod_{n\\geq 1}\\left(1+\\frac{t}{n}\\right)^{-1}e^{t/n} \\tag{1}$$\nwe have:\n$$ h(x)=\\frac{\\Gamma(1-2x)\\,\\Gamma(1+x)}{\\Gamma(1-x)}=\\prod_{n\\geq 1}\\frac{\\left(1-\\frac{x}{n}\\right)}{\\left(1-\\frac{2x}{n}\\right)\\left(1+\\frac{x}{n}\\right)}\\tag{2}$$\nand it is straightforward to check that $h(x)$ is a log-convex function on $I=\\left(-\\frac{1}{2},\\frac{1}{2}\\right)$.<br>\nSince $h'(0)=0$, $h(0)=1$ is a local minimum for $h(x)$ on $I$.</p>\n\n<hr>\n\n<p>We may even compute the Taylor series of $\\log h(x)$. 
Since:\n$$ \\sum_{n\\geq 2}\\frac{\\zeta(n)}{n} t^n = -\\gamma t+\\log\\Gamma(1-t)\\tag{3} $$\nwe have:\n$$ \\log h(x) = \\sum_{n\\geq 2}\\color{red}{\\frac{\\zeta(n)}{n}(2^n+(-1)^n-1)}\\, x^n.\\tag{4}$$</p>\n" }, { "answer_id": 2083625, "score": 3, "text": "<p>Note that the cases for $x=0$ and $x=-1/2$ are trivial.</p>\n\n<p>Next, we can write for $0&lt;x&lt;1/2$</p>\n\n<p>$$\\begin{align}\n\\frac{\\Gamma(1-2x)\\Gamma(1+x)}{\\Gamma(1-x)}&amp;=x\\frac{\\Gamma(1-2x)\\Gamma(x)}{\\Gamma(1-x)}\\\\\\\\\n&amp;=xB(1-2x,x)\\\\\\\\\n&amp;=x\\int_0^1 t^{-2x}(1-t)^{x-1}\\,dt\\\\\\\\\n&amp;\\ge x\\int_0^1(1-t)^{x-1}\\\\\\\\\n&amp;=1\n\\end{align}$$</p>\n\n<p>as was to be shown!</p>\n\n<hr>\n\n<p>For $-1/2&lt;x&lt;0$, we let $y=-x$ and write </p>\n\n<p>$$\\begin{align}\n\\frac{\\Gamma(1-2x)\\Gamma(1+x)}{\\Gamma(1-x)}&amp;=\\frac{\\Gamma(1+2y)\\Gamma(1-y)}{\\Gamma(1+y)}\\\\\\\\\n&amp;=2yB(2y,1-y)\\\\\\\\\n&amp;=2y\\int_0^1 t^{2y-1}(1-t)^{-y}\\,dt\\\\\\\\\n&amp;\\ge 2y\\int_0^1 t^{2y-1}\\,dt\\\\\\\\\n&amp;=1\n\\end{align}$$</p>\n\n<p>as was to be shown!</p>\n" }, { "answer_id": 2083633, "score": 2, "text": "<p>Since\n$$ \\Gamma(1+t)=e^{-\\gamma t}\\prod_{n\\geq 1}\\left(1+\\frac{t}{n}\\right)^{-1}e^{t/n} \\tag{1}$$\nwe have:\n$$ h(x)=\\frac{\\Gamma(1-2x)\\,\\Gamma(1+x)}{\\Gamma(1-x)}=\\prod_{n\\geq 1}\\frac{\\left(1-\\frac{x}{n}\\right)}{\\left(1-\\frac{2x}{n}\\right)\\left(1+\\frac{x}{n}\\right)}\\tag{2}$$\nand it is straightforward to check that $h(x)$ is a log-convex function on $I=\\left(-\\frac{1}{2},\\frac{1}{2}\\right)$.<br>\nSince $h'(0)=0$, $h(0)=1$ is a local minimum for $h(x)$ on $I$.</p>\n\n<hr>\n\n<p>We may even compute the Taylor series of $\\log h(x)$. 
Since:\n$$ \\sum_{n\\geq 2}\\frac{\\zeta(n)}{n} t^n = -\\gamma t+\\log\\Gamma(1-t)\\tag{3} $$\nwe have:\n$$ \\log h(x) = \\sum_{n\\geq 2}\\color{red}{\\frac{\\zeta(n)}{n}(2^n+(-1)^n-1)}\\, x^n.\\tag{4}$$</p>\n" }, { "answer_id": 2083625, "score": 3, "text": "<p>Note that the cases for $x=0$ and $x=-1/2$ are trivial.</p>\n\n<p>Next, we can write for $0&lt;x&lt;1/2$</p>\n\n<p>$$\\begin{align}\n\\frac{\\Gamma(1-2x)\\Gamma(1+x)}{\\Gamma(1-x)}&amp;=x\\frac{\\Gamma(1-2x)\\Gamma(x)}{\\Gamma(1-x)}\\\\\\\\\n&amp;=xB(1-2x,x)\\\\\\\\\n&amp;=x\\int_0^1 t^{-2x}(1-t)^{x-1}\\,dt\\\\\\\\\n&amp;\\ge x\\int_0^1(1-t)^{x-1}\\\\\\\\\n&amp;=1\n\\end{align}$$</p>\n\n<p>as was to be shown!</p>\n\n<hr>\n\n<p>For $-1/2&lt;x&lt;0$, we let $y=-x$ and write </p>\n\n<p>$$\\begin{align}\n\\frac{\\Gamma(1-2x)\\Gamma(1+x)}{\\Gamma(1-x)}&amp;=\\frac{\\Gamma(1+2y)\\Gamma(1-y)}{\\Gamma(1+y)}\\\\\\\\\n&amp;=2yB(2y,1-y)\\\\\\\\\n&amp;=2y\\int_0^1 t^{2y-1}(1-t)^{-y}\\,dt\\\\\\\\\n&amp;\\ge 2y\\int_0^1 t^{2y-1}\\,dt\\\\\\\\\n&amp;=1\n\\end{align}$$</p>\n\n<p>as was to be shown!</p>\n" }, { "answer_id": 2083633, "score": 2, "text": "<p>Since\n$$ \\Gamma(1+t)=e^{-\\gamma t}\\prod_{n\\geq 1}\\left(1+\\frac{t}{n}\\right)^{-1}e^{t/n} \\tag{1}$$\nwe have:\n$$ h(x)=\\frac{\\Gamma(1-2x)\\,\\Gamma(1+x)}{\\Gamma(1-x)}=\\prod_{n\\geq 1}\\frac{\\left(1-\\frac{x}{n}\\right)}{\\left(1-\\frac{2x}{n}\\right)\\left(1+\\frac{x}{n}\\right)}\\tag{2}$$\nand it is straightforward to check that $h(x)$ is a log-convex function on $I=\\left(-\\frac{1}{2},\\frac{1}{2}\\right)$.<br>\nSince $h'(0)=0$, $h(0)=1$ is a local minimum for $h(x)$ on $I$.</p>\n\n<hr>\n\n<p>We may even compute the Taylor series of $\\log h(x)$. 
Since:\n$$ \\sum_{n\\geq 2}\\frac{\\zeta(n)}{n} t^n = -\\gamma t+\\log\\Gamma(1-t)\\tag{3} $$\nwe have:\n$$ \\log h(x) = \\sum_{n\\geq 2}\\color{red}{\\frac{\\zeta(n)}{n}(2^n+(-1)^n-1)}\\, x^n.\\tag{4}$$</p>\n" }, { "answer_id": 2083625, "score": 3, "text": "<p>Note that the cases for $x=0$ and $x=-1/2$ are trivial.</p>\n\n<p>Next, we can write for $0&lt;x&lt;1/2$</p>\n\n<p>$$\\begin{align}\n\\frac{\\Gamma(1-2x)\\Gamma(1+x)}{\\Gamma(1-x)}&amp;=x\\frac{\\Gamma(1-2x)\\Gamma(x)}{\\Gamma(1-x)}\\\\\\\\\n&amp;=xB(1-2x,x)\\\\\\\\\n&amp;=x\\int_0^1 t^{-2x}(1-t)^{x-1}\\,dt\\\\\\\\\n&amp;\\ge x\\int_0^1(1-t)^{x-1}\\\\\\\\\n&amp;=1\n\\end{align}$$</p>\n\n<p>as was to be shown!</p>\n\n<hr>\n\n<p>For $-1/2&lt;x&lt;0$, we let $y=-x$ and write </p>\n\n<p>$$\\begin{align}\n\\frac{\\Gamma(1-2x)\\Gamma(1+x)}{\\Gamma(1-x)}&amp;=\\frac{\\Gamma(1+2y)\\Gamma(1-y)}{\\Gamma(1+y)}\\\\\\\\\n&amp;=2yB(2y,1-y)\\\\\\\\\n&amp;=2y\\int_0^1 t^{2y-1}(1-t)^{-y}\\,dt\\\\\\\\\n&amp;\\ge 2y\\int_0^1 t^{2y-1}\\,dt\\\\\\\\\n&amp;=1\n\\end{align}$$</p>\n\n<p>as was to be shown!</p>\n" }, { "answer_id": 2083633, "score": 2, "text": "<p>Since\n$$ \\Gamma(1+t)=e^{-\\gamma t}\\prod_{n\\geq 1}\\left(1+\\frac{t}{n}\\right)^{-1}e^{t/n} \\tag{1}$$\nwe have:\n$$ h(x)=\\frac{\\Gamma(1-2x)\\,\\Gamma(1+x)}{\\Gamma(1-x)}=\\prod_{n\\geq 1}\\frac{\\left(1-\\frac{x}{n}\\right)}{\\left(1-\\frac{2x}{n}\\right)\\left(1+\\frac{x}{n}\\right)}\\tag{2}$$\nand it is straightforward to check that $h(x)$ is a log-convex function on $I=\\left(-\\frac{1}{2},\\frac{1}{2}\\right)$.<br>\nSince $h'(0)=0$, $h(0)=1$ is a local minimum for $h(x)$ on $I$.</p>\n\n<hr>\n\n<p>We may even compute the Taylor series of $\\log h(x)$. 
Since:\n$$ \\sum_{n\\geq 2}\\frac{\\zeta(n)}{n} t^n = -\\gamma t+\\log\\Gamma(1-t)\\tag{3} $$\nwe have:\n$$ \\log h(x) = \\sum_{n\\geq 2}\\color{red}{\\frac{\\zeta(n)}{n}(2^n+(-1)^n-1)}\\, x^n.\\tag{4}$$</p>\n" }, { "answer_id": 2083625, "score": 3, "text": "<p>Note that the cases for $x=0$ and $x=-1/2$ are trivial.</p>\n\n<p>Next, we can write for $0&lt;x&lt;1/2$</p>\n\n<p>$$\\begin{align}\n\\frac{\\Gamma(1-2x)\\Gamma(1+x)}{\\Gamma(1-x)}&amp;=x\\frac{\\Gamma(1-2x)\\Gamma(x)}{\\Gamma(1-x)}\\\\\\\\\n&amp;=xB(1-2x,x)\\\\\\\\\n&amp;=x\\int_0^1 t^{-2x}(1-t)^{x-1}\\,dt\\\\\\\\\n&amp;\\ge x\\int_0^1(1-t)^{x-1}\\\\\\\\\n&amp;=1\n\\end{align}$$</p>\n\n<p>as was to be shown!</p>\n\n<hr>\n\n<p>For $-1/2&lt;x&lt;0$, we let $y=-x$ and write </p>\n\n<p>$$\\begin{align}\n\\frac{\\Gamma(1-2x)\\Gamma(1+x)}{\\Gamma(1-x)}&amp;=\\frac{\\Gamma(1+2y)\\Gamma(1-y)}{\\Gamma(1+y)}\\\\\\\\\n&amp;=2yB(2y,1-y)\\\\\\\\\n&amp;=2y\\int_0^1 t^{2y-1}(1-t)^{-y}\\,dt\\\\\\\\\n&amp;\\ge 2y\\int_0^1 t^{2y-1}\\,dt\\\\\\\\\n&amp;=1\n\\end{align}$$</p>\n\n<p>as was to be shown!</p>\n" }, { "answer_id": 2083633, "score": 2, "text": "<p>Since\n$$ \\Gamma(1+t)=e^{-\\gamma t}\\prod_{n\\geq 1}\\left(1+\\frac{t}{n}\\right)^{-1}e^{t/n} \\tag{1}$$\nwe have:\n$$ h(x)=\\frac{\\Gamma(1-2x)\\,\\Gamma(1+x)}{\\Gamma(1-x)}=\\prod_{n\\geq 1}\\frac{\\left(1-\\frac{x}{n}\\right)}{\\left(1-\\frac{2x}{n}\\right)\\left(1+\\frac{x}{n}\\right)}\\tag{2}$$\nand it is straightforward to check that $h(x)$ is a log-convex function on $I=\\left(-\\frac{1}{2},\\frac{1}{2}\\right)$.<br>\nSince $h'(0)=0$, $h(0)=1$ is a local minimum for $h(x)$ on $I$.</p>\n\n<hr>\n\n<p>We may even compute the Taylor series of $\\log h(x)$. 
Since:\n$$ \\sum_{n\\geq 2}\\frac{\\zeta(n)}{n} t^n = -\\gamma t+\\log\\Gamma(1-t)\\tag{3} $$\nwe have:\n$$ \\log h(x) = \\sum_{n\\geq 2}\\color{red}{\\frac{\\zeta(n)}{n}(2^n+(-1)^n-1)}\\, x^n.\\tag{4}$$</p>\n" }, { "answer_id": 2083625, "score": 3, "text": "<p>Note that the cases for $x=0$ and $x=-1/2$ are trivial.</p>\n\n<p>Next, we can write for $0&lt;x&lt;1/2$</p>\n\n<p>$$\\begin{align}\n\\frac{\\Gamma(1-2x)\\Gamma(1+x)}{\\Gamma(1-x)}&amp;=x\\frac{\\Gamma(1-2x)\\Gamma(x)}{\\Gamma(1-x)}\\\\\\\\\n&amp;=xB(1-2x,x)\\\\\\\\\n&amp;=x\\int_0^1 t^{-2x}(1-t)^{x-1}\\,dt\\\\\\\\\n&amp;\\ge x\\int_0^1(1-t)^{x-1}\\\\\\\\\n&amp;=1\n\\end{align}$$</p>\n\n<p>as was to be shown!</p>\n\n<hr>\n\n<p>For $-1/2&lt;x&lt;0$, we let $y=-x$ and write </p>\n\n<p>$$\\begin{align}\n\\frac{\\Gamma(1-2x)\\Gamma(1+x)}{\\Gamma(1-x)}&amp;=\\frac{\\Gamma(1+2y)\\Gamma(1-y)}{\\Gamma(1+y)}\\\\\\\\\n&amp;=2yB(2y,1-y)\\\\\\\\\n&amp;=2y\\int_0^1 t^{2y-1}(1-t)^{-y}\\,dt\\\\\\\\\n&amp;\\ge 2y\\int_0^1 t^{2y-1}\\,dt\\\\\\\\\n&amp;=1\n\\end{align}$$</p>\n\n<p>as was to be shown!</p>\n" }, { "answer_id": 2083633, "score": 2, "text": "<p>Since\n$$ \\Gamma(1+t)=e^{-\\gamma t}\\prod_{n\\geq 1}\\left(1+\\frac{t}{n}\\right)^{-1}e^{t/n} \\tag{1}$$\nwe have:\n$$ h(x)=\\frac{\\Gamma(1-2x)\\,\\Gamma(1+x)}{\\Gamma(1-x)}=\\prod_{n\\geq 1}\\frac{\\left(1-\\frac{x}{n}\\right)}{\\left(1-\\frac{2x}{n}\\right)\\left(1+\\frac{x}{n}\\right)}\\tag{2}$$\nand it is straightforward to check that $h(x)$ is a log-convex function on $I=\\left(-\\frac{1}{2},\\frac{1}{2}\\right)$.<br>\nSince $h'(0)=0$, $h(0)=1$ is a local minimum for $h(x)$ on $I$.</p>\n\n<hr>\n\n<p>We may even compute the Taylor series of $\\log h(x)$. Since:\n$$ \\sum_{n\\geq 2}\\frac{\\zeta(n)}{n} t^n = -\\gamma t+\\log\\Gamma(1-t)\\tag{3} $$\nwe have:\n$$ \\log h(x) = \\sum_{n\\geq 2}\\color{red}{\\frac{\\zeta(n)}{n}(2^n+(-1)^n-1)}\\, x^n.\\tag{4}$$</p>\n" } ]
Question: lower bound for a function of $\Gamma$ Answer 1: <p>Note that the cases for $x=0$ and $x=-1/2$ are trivial.</p> <p>Next, we can write for $0&lt;x&lt;1/2$</p> <p>$$\begin{align} \frac{\Gamma(1-2x)\Gamma(1+x)}{\Gamma(1-x)}&amp;=x\frac{\Gamma(1-2x)\Gamma(x)}{\Gamma(1-x)}\\\\ &amp;=xB(1-2x,x)\\\\ &amp;=x\int_0^1 t^{-2x}(1-t)^{x-1}\,dt\\\\ &amp;\ge x\int_0^1(1-t)^{x-1}\\\\ &amp;=1 \end{align}$$</p> <p>as was to be shown!</p> <hr> <p>For $-1/2&lt;x&lt;0$, we let $y=-x$ and write </p> <p>$$\begin{align} \frac{\Gamma(1-2x)\Gamma(1+x)}{\Gamma(1-x)}&amp;=\frac{\Gamma(1+2y)\Gamma(1-y)}{\Gamma(1+y)}\\\\ &amp;=2yB(2y,1-y)\\\\ &amp;=2y\int_0^1 t^{2y-1}(1-t)^{-y}\,dt\\\\ &amp;\ge 2y\int_0^1 t^{2y-1}\,dt\\\\ &amp;=1 \end{align}$$</p> <p>as was to be shown!</p> Answer 2: <p>Since $$ \Gamma(1+t)=e^{-\gamma t}\prod_{n\geq 1}\left(1+\frac{t}{n}\right)^{-1}e^{t/n} \tag{1}$$ we have: $$ h(x)=\frac{\Gamma(1-2x)\,\Gamma(1+x)}{\Gamma(1-x)}=\prod_{n\geq 1}\frac{\left(1-\frac{x}{n}\right)}{\left(1-\frac{2x}{n}\right)\left(1+\frac{x}{n}\right)}\tag{2}$$ and it is straightforward to check that $h(x)$ is a log-convex function on $I=\left(-\frac{1}{2},\frac{1}{2}\right)$.<br> Since $h'(0)=0$, $h(0)=1$ is a local minimum for $h(x)$ on $I$.</p> <hr> <p>We may even compute the Taylor series of $\log h(x)$. 
Since: $$ \sum_{n\geq 2}\frac{\zeta(n)}{n} t^n = -\gamma t+\log\Gamma(1-t)\tag{3} $$ we have: $$ \log h(x) = \sum_{n\geq 2}\color{red}{\frac{\zeta(n)}{n}(2^n+(-1)^n-1)}\, x^n.\tag{4}$$</p>
Since: $$ \sum_{n\geq 2}\frac{\zeta(n)}{n} t^n = -\gamma t+\log\Gamma(1-t)\tag{3} $$ we have: $$ \log h(x) = \sum_{n\geq 2}\color{red}{\frac{\zeta(n)}{n}(2^n+(-1)^n-1)}\, x^n.\tag{4}$$</p> Answer 15: <p>Note that the cases for $x=0$ and $x=-1/2$ are trivial.</p> <p>Next, we can write for $0&lt;x&lt;1/2$</p> <p>$$\begin{align} \frac{\Gamma(1-2x)\Gamma(1+x)}{\Gamma(1-x)}&amp;=x\frac{\Gamma(1-2x)\Gamma(x)}{\Gamma(1-x)}\\\\ &amp;=xB(1-2x,x)\\\\ &amp;=x\int_0^1 t^{-2x}(1-t)^{x-1}\,dt\\\\ &amp;\ge x\int_0^1(1-t)^{x-1}\\\\ &amp;=1 \end{align}$$</p> <p>as was to be shown!</p> <hr> <p>For $-1/2&lt;x&lt;0$, we let $y=-x$ and write </p> <p>$$\begin{align} \frac{\Gamma(1-2x)\Gamma(1+x)}{\Gamma(1-x)}&amp;=\frac{\Gamma(1+2y)\Gamma(1-y)}{\Gamma(1+y)}\\\\ &amp;=2yB(2y,1-y)\\\\ &amp;=2y\int_0^1 t^{2y-1}(1-t)^{-y}\,dt\\\\ &amp;\ge 2y\int_0^1 t^{2y-1}\,dt\\\\ &amp;=1 \end{align}$$</p> <p>as was to be shown!</p> Answer 16: <p>Since $$ \Gamma(1+t)=e^{-\gamma t}\prod_{n\geq 1}\left(1+\frac{t}{n}\right)^{-1}e^{t/n} \tag{1}$$ we have: $$ h(x)=\frac{\Gamma(1-2x)\,\Gamma(1+x)}{\Gamma(1-x)}=\prod_{n\geq 1}\frac{\left(1-\frac{x}{n}\right)}{\left(1-\frac{2x}{n}\right)\left(1+\frac{x}{n}\right)}\tag{2}$$ and it is straightforward to check that $h(x)$ is a log-convex function on $I=\left(-\frac{1}{2},\frac{1}{2}\right)$.<br> Since $h'(0)=0$, $h(0)=1$ is a local minimum for $h(x)$ on $I$.</p> <hr> <p>We may even compute the Taylor series of $\log h(x)$. 
Since: $$ \sum_{n\geq 2}\frac{\zeta(n)}{n} t^n = -\gamma t+\log\Gamma(1-t)\tag{3} $$ we have: $$ \log h(x) = \sum_{n\geq 2}\color{red}{\frac{\zeta(n)}{n}(2^n+(-1)^n-1)}\, x^n.\tag{4}$$</p> Answer 17: <p>Note that the cases for $x=0$ and $x=-1/2$ are trivial.</p> <p>Next, we can write for $0&lt;x&lt;1/2$</p> <p>$$\begin{align} \frac{\Gamma(1-2x)\Gamma(1+x)}{\Gamma(1-x)}&amp;=x\frac{\Gamma(1-2x)\Gamma(x)}{\Gamma(1-x)}\\\\ &amp;=xB(1-2x,x)\\\\ &amp;=x\int_0^1 t^{-2x}(1-t)^{x-1}\,dt\\\\ &amp;\ge x\int_0^1(1-t)^{x-1}\\\\ &amp;=1 \end{align}$$</p> <p>as was to be shown!</p> <hr> <p>For $-1/2&lt;x&lt;0$, we let $y=-x$ and write </p> <p>$$\begin{align} \frac{\Gamma(1-2x)\Gamma(1+x)}{\Gamma(1-x)}&amp;=\frac{\Gamma(1+2y)\Gamma(1-y)}{\Gamma(1+y)}\\\\ &amp;=2yB(2y,1-y)\\\\ &amp;=2y\int_0^1 t^{2y-1}(1-t)^{-y}\,dt\\\\ &amp;\ge 2y\int_0^1 t^{2y-1}\,dt\\\\ &amp;=1 \end{align}$$</p> <p>as was to be shown!</p> Answer 18: <p>Since $$ \Gamma(1+t)=e^{-\gamma t}\prod_{n\geq 1}\left(1+\frac{t}{n}\right)^{-1}e^{t/n} \tag{1}$$ we have: $$ h(x)=\frac{\Gamma(1-2x)\,\Gamma(1+x)}{\Gamma(1-x)}=\prod_{n\geq 1}\frac{\left(1-\frac{x}{n}\right)}{\left(1-\frac{2x}{n}\right)\left(1+\frac{x}{n}\right)}\tag{2}$$ and it is straightforward to check that $h(x)$ is a log-convex function on $I=\left(-\frac{1}{2},\frac{1}{2}\right)$.<br> Since $h'(0)=0$, $h(0)=1$ is a local minimum for $h(x)$ on $I$.</p> <hr> <p>We may even compute the Taylor series of $\log h(x)$. 
Since: $$ \sum_{n\geq 2}\frac{\zeta(n)}{n} t^n = -\gamma t+\log\Gamma(1-t)\tag{3} $$ we have: $$ \log h(x) = \sum_{n\geq 2}\color{red}{\frac{\zeta(n)}{n}(2^n+(-1)^n-1)}\, x^n.\tag{4}$$</p> Answer 19: <p>Note that the cases for $x=0$ and $x=-1/2$ are trivial.</p> <p>Next, we can write for $0&lt;x&lt;1/2$</p> <p>$$\begin{align} \frac{\Gamma(1-2x)\Gamma(1+x)}{\Gamma(1-x)}&amp;=x\frac{\Gamma(1-2x)\Gamma(x)}{\Gamma(1-x)}\\\\ &amp;=xB(1-2x,x)\\\\ &amp;=x\int_0^1 t^{-2x}(1-t)^{x-1}\,dt\\\\ &amp;\ge x\int_0^1(1-t)^{x-1}\\\\ &amp;=1 \end{align}$$</p> <p>as was to be shown!</p> <hr> <p>For $-1/2&lt;x&lt;0$, we let $y=-x$ and write </p> <p>$$\begin{align} \frac{\Gamma(1-2x)\Gamma(1+x)}{\Gamma(1-x)}&amp;=\frac{\Gamma(1+2y)\Gamma(1-y)}{\Gamma(1+y)}\\\\ &amp;=2yB(2y,1-y)\\\\ &amp;=2y\int_0^1 t^{2y-1}(1-t)^{-y}\,dt\\\\ &amp;\ge 2y\int_0^1 t^{2y-1}\,dt\\\\ &amp;=1 \end{align}$$</p> <p>as was to be shown!</p> Answer 20: <p>Since $$ \Gamma(1+t)=e^{-\gamma t}\prod_{n\geq 1}\left(1+\frac{t}{n}\right)^{-1}e^{t/n} \tag{1}$$ we have: $$ h(x)=\frac{\Gamma(1-2x)\,\Gamma(1+x)}{\Gamma(1-x)}=\prod_{n\geq 1}\frac{\left(1-\frac{x}{n}\right)}{\left(1-\frac{2x}{n}\right)\left(1+\frac{x}{n}\right)}\tag{2}$$ and it is straightforward to check that $h(x)$ is a log-convex function on $I=\left(-\frac{1}{2},\frac{1}{2}\right)$.<br> Since $h'(0)=0$, $h(0)=1$ is a local minimum for $h(x)$ on $I$.</p> <hr> <p>We may even compute the Taylor series of $\log h(x)$. 
Since: $$ \sum_{n\geq 2}\frac{\zeta(n)}{n} t^n = -\gamma t+\log\Gamma(1-t)\tag{3} $$ we have: $$ \log h(x) = \sum_{n\geq 2}\color{red}{\frac{\zeta(n)}{n}(2^n+(-1)^n-1)}\, x^n.\tag{4}$$</p> Answer 21: <p>Note that the cases for $x=0$ and $x=-1/2$ are trivial.</p> <p>Next, we can write for $0&lt;x&lt;1/2$</p> <p>$$\begin{align} \frac{\Gamma(1-2x)\Gamma(1+x)}{\Gamma(1-x)}&amp;=x\frac{\Gamma(1-2x)\Gamma(x)}{\Gamma(1-x)}\\\\ &amp;=xB(1-2x,x)\\\\ &amp;=x\int_0^1 t^{-2x}(1-t)^{x-1}\,dt\\\\ &amp;\ge x\int_0^1(1-t)^{x-1}\\\\ &amp;=1 \end{align}$$</p> <p>as was to be shown!</p> <hr> <p>For $-1/2&lt;x&lt;0$, we let $y=-x$ and write </p> <p>$$\begin{align} \frac{\Gamma(1-2x)\Gamma(1+x)}{\Gamma(1-x)}&amp;=\frac{\Gamma(1+2y)\Gamma(1-y)}{\Gamma(1+y)}\\\\ &amp;=2yB(2y,1-y)\\\\ &amp;=2y\int_0^1 t^{2y-1}(1-t)^{-y}\,dt\\\\ &amp;\ge 2y\int_0^1 t^{2y-1}\,dt\\\\ &amp;=1 \end{align}$$</p> <p>as was to be shown!</p> Answer 22: <p>Since $$ \Gamma(1+t)=e^{-\gamma t}\prod_{n\geq 1}\left(1+\frac{t}{n}\right)^{-1}e^{t/n} \tag{1}$$ we have: $$ h(x)=\frac{\Gamma(1-2x)\,\Gamma(1+x)}{\Gamma(1-x)}=\prod_{n\geq 1}\frac{\left(1-\frac{x}{n}\right)}{\left(1-\frac{2x}{n}\right)\left(1+\frac{x}{n}\right)}\tag{2}$$ and it is straightforward to check that $h(x)$ is a log-convex function on $I=\left(-\frac{1}{2},\frac{1}{2}\right)$.<br> Since $h'(0)=0$, $h(0)=1$ is a local minimum for $h(x)$ on $I$.</p> <hr> <p>We may even compute the Taylor series of $\log h(x)$. 
Since: $$ \sum_{n\geq 2}\frac{\zeta(n)}{n} t^n = -\gamma t+\log\Gamma(1-t)\tag{3} $$ we have: $$ \log h(x) = \sum_{n\geq 2}\color{red}{\frac{\zeta(n)}{n}(2^n+(-1)^n-1)}\, x^n.\tag{4}$$</p> Answer 23: <p>Note that the cases for $x=0$ and $x=-1/2$ are trivial.</p> <p>Next, we can write for $0&lt;x&lt;1/2$</p> <p>$$\begin{align} \frac{\Gamma(1-2x)\Gamma(1+x)}{\Gamma(1-x)}&amp;=x\frac{\Gamma(1-2x)\Gamma(x)}{\Gamma(1-x)}\\\\ &amp;=xB(1-2x,x)\\\\ &amp;=x\int_0^1 t^{-2x}(1-t)^{x-1}\,dt\\\\ &amp;\ge x\int_0^1(1-t)^{x-1}\\\\ &amp;=1 \end{align}$$</p> <p>as was to be shown!</p> <hr> <p>For $-1/2&lt;x&lt;0$, we let $y=-x$ and write </p> <p>$$\begin{align} \frac{\Gamma(1-2x)\Gamma(1+x)}{\Gamma(1-x)}&amp;=\frac{\Gamma(1+2y)\Gamma(1-y)}{\Gamma(1+y)}\\\\ &amp;=2yB(2y,1-y)\\\\ &amp;=2y\int_0^1 t^{2y-1}(1-t)^{-y}\,dt\\\\ &amp;\ge 2y\int_0^1 t^{2y-1}\,dt\\\\ &amp;=1 \end{align}$$</p> <p>as was to be shown!</p> Answer 24: <p>Since $$ \Gamma(1+t)=e^{-\gamma t}\prod_{n\geq 1}\left(1+\frac{t}{n}\right)^{-1}e^{t/n} \tag{1}$$ we have: $$ h(x)=\frac{\Gamma(1-2x)\,\Gamma(1+x)}{\Gamma(1-x)}=\prod_{n\geq 1}\frac{\left(1-\frac{x}{n}\right)}{\left(1-\frac{2x}{n}\right)\left(1+\frac{x}{n}\right)}\tag{2}$$ and it is straightforward to check that $h(x)$ is a log-convex function on $I=\left(-\frac{1}{2},\frac{1}{2}\right)$.<br> Since $h'(0)=0$, $h(0)=1$ is a local minimum for $h(x)$ on $I$.</p> <hr> <p>We may even compute the Taylor series of $\log h(x)$. 
Since: $$ \sum_{n\geq 2}\frac{\zeta(n)}{n} t^n = -\gamma t+\log\Gamma(1-t)\tag{3} $$ we have: $$ \log h(x) = \sum_{n\geq 2}\color{red}{\frac{\zeta(n)}{n}(2^n+(-1)^n-1)}\, x^n.\tag{4}$$</p> Answer 25: <p>Note that the cases for $x=0$ and $x=-1/2$ are trivial.</p> <p>Next, we can write for $0&lt;x&lt;1/2$</p> <p>$$\begin{align} \frac{\Gamma(1-2x)\Gamma(1+x)}{\Gamma(1-x)}&amp;=x\frac{\Gamma(1-2x)\Gamma(x)}{\Gamma(1-x)}\\\\ &amp;=xB(1-2x,x)\\\\ &amp;=x\int_0^1 t^{-2x}(1-t)^{x-1}\,dt\\\\ &amp;\ge x\int_0^1(1-t)^{x-1}\\\\ &amp;=1 \end{align}$$</p> <p>as was to be shown!</p> <hr> <p>For $-1/2&lt;x&lt;0$, we let $y=-x$ and write </p> <p>$$\begin{align} \frac{\Gamma(1-2x)\Gamma(1+x)}{\Gamma(1-x)}&amp;=\frac{\Gamma(1+2y)\Gamma(1-y)}{\Gamma(1+y)}\\\\ &amp;=2yB(2y,1-y)\\\\ &amp;=2y\int_0^1 t^{2y-1}(1-t)^{-y}\,dt\\\\ &amp;\ge 2y\int_0^1 t^{2y-1}\,dt\\\\ &amp;=1 \end{align}$$</p> <p>as was to be shown!</p> Answer 26: <p>Since $$ \Gamma(1+t)=e^{-\gamma t}\prod_{n\geq 1}\left(1+\frac{t}{n}\right)^{-1}e^{t/n} \tag{1}$$ we have: $$ h(x)=\frac{\Gamma(1-2x)\,\Gamma(1+x)}{\Gamma(1-x)}=\prod_{n\geq 1}\frac{\left(1-\frac{x}{n}\right)}{\left(1-\frac{2x}{n}\right)\left(1+\frac{x}{n}\right)}\tag{2}$$ and it is straightforward to check that $h(x)$ is a log-convex function on $I=\left(-\frac{1}{2},\frac{1}{2}\right)$.<br> Since $h'(0)=0$, $h(0)=1$ is a local minimum for $h(x)$ on $I$.</p> <hr> <p>We may even compute the Taylor series of $\log h(x)$. 
Since: $$ \sum_{n\geq 2}\frac{\zeta(n)}{n} t^n = -\gamma t+\log\Gamma(1-t)\tag{3} $$ we have: $$ \log h(x) = \sum_{n\geq 2}\color{red}{\frac{\zeta(n)}{n}(2^n+(-1)^n-1)}\, x^n.\tag{4}$$</p> Answer 27: <p>Note that the cases for $x=0$ and $x=-1/2$ are trivial.</p> <p>Next, we can write for $0&lt;x&lt;1/2$</p> <p>$$\begin{align} \frac{\Gamma(1-2x)\Gamma(1+x)}{\Gamma(1-x)}&amp;=x\frac{\Gamma(1-2x)\Gamma(x)}{\Gamma(1-x)}\\\\ &amp;=xB(1-2x,x)\\\\ &amp;=x\int_0^1 t^{-2x}(1-t)^{x-1}\,dt\\\\ &amp;\ge x\int_0^1(1-t)^{x-1}\\\\ &amp;=1 \end{align}$$</p> <p>as was to be shown!</p> <hr> <p>For $-1/2&lt;x&lt;0$, we let $y=-x$ and write </p> <p>$$\begin{align} \frac{\Gamma(1-2x)\Gamma(1+x)}{\Gamma(1-x)}&amp;=\frac{\Gamma(1+2y)\Gamma(1-y)}{\Gamma(1+y)}\\\\ &amp;=2yB(2y,1-y)\\\\ &amp;=2y\int_0^1 t^{2y-1}(1-t)^{-y}\,dt\\\\ &amp;\ge 2y\int_0^1 t^{2y-1}\,dt\\\\ &amp;=1 \end{align}$$</p> <p>as was to be shown!</p> Answer 28: <p>Since $$ \Gamma(1+t)=e^{-\gamma t}\prod_{n\geq 1}\left(1+\frac{t}{n}\right)^{-1}e^{t/n} \tag{1}$$ we have: $$ h(x)=\frac{\Gamma(1-2x)\,\Gamma(1+x)}{\Gamma(1-x)}=\prod_{n\geq 1}\frac{\left(1-\frac{x}{n}\right)}{\left(1-\frac{2x}{n}\right)\left(1+\frac{x}{n}\right)}\tag{2}$$ and it is straightforward to check that $h(x)$ is a log-convex function on $I=\left(-\frac{1}{2},\frac{1}{2}\right)$.<br> Since $h'(0)=0$, $h(0)=1$ is a local minimum for $h(x)$ on $I$.</p> <hr> <p>We may even compute the Taylor series of $\log h(x)$. 
Since: $$ \sum_{n\geq 2}\frac{\zeta(n)}{n} t^n = -\gamma t+\log\Gamma(1-t)\tag{3} $$ we have: $$ \log h(x) = \sum_{n\geq 2}\color{red}{\frac{\zeta(n)}{n}(2^n+(-1)^n-1)}\, x^n.\tag{4}$$</p> Answer 29: <p>Note that the cases for $x=0$ and $x=-1/2$ are trivial.</p> <p>Next, we can write for $0&lt;x&lt;1/2$</p> <p>$$\begin{align} \frac{\Gamma(1-2x)\Gamma(1+x)}{\Gamma(1-x)}&amp;=x\frac{\Gamma(1-2x)\Gamma(x)}{\Gamma(1-x)}\\\\ &amp;=xB(1-2x,x)\\\\ &amp;=x\int_0^1 t^{-2x}(1-t)^{x-1}\,dt\\\\ &amp;\ge x\int_0^1(1-t)^{x-1}\\\\ &amp;=1 \end{align}$$</p> <p>as was to be shown!</p> <hr> <p>For $-1/2&lt;x&lt;0$, we let $y=-x$ and write </p> <p>$$\begin{align} \frac{\Gamma(1-2x)\Gamma(1+x)}{\Gamma(1-x)}&amp;=\frac{\Gamma(1+2y)\Gamma(1-y)}{\Gamma(1+y)}\\\\ &amp;=2yB(2y,1-y)\\\\ &amp;=2y\int_0^1 t^{2y-1}(1-t)^{-y}\,dt\\\\ &amp;\ge 2y\int_0^1 t^{2y-1}\,dt\\\\ &amp;=1 \end{align}$$</p> <p>as was to be shown!</p> Answer 30: <p>Since $$ \Gamma(1+t)=e^{-\gamma t}\prod_{n\geq 1}\left(1+\frac{t}{n}\right)^{-1}e^{t/n} \tag{1}$$ we have: $$ h(x)=\frac{\Gamma(1-2x)\,\Gamma(1+x)}{\Gamma(1-x)}=\prod_{n\geq 1}\frac{\left(1-\frac{x}{n}\right)}{\left(1-\frac{2x}{n}\right)\left(1+\frac{x}{n}\right)}\tag{2}$$ and it is straightforward to check that $h(x)$ is a log-convex function on $I=\left(-\frac{1}{2},\frac{1}{2}\right)$.<br> Since $h'(0)=0$, $h(0)=1$ is a local minimum for $h(x)$ on $I$.</p> <hr> <p>We may even compute the Taylor series of $\log h(x)$. Since: $$ \sum_{n\geq 2}\frac{\zeta(n)}{n} t^n = -\gamma t+\log\Gamma(1-t)\tag{3} $$ we have: $$ \log h(x) = \sum_{n\geq 2}\color{red}{\frac{\zeta(n)}{n}(2^n+(-1)^n-1)}\, x^n.\tag{4}$$</p>
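For readers who want a numerical sanity check of the inequality proved above, here is a small Python sketch (not part of the original answers) confirming that $h(x)=\Gamma(1-2x)\,\Gamma(1+x)/\Gamma(1-x)\ge 1$ on $(-1/2,1/2)$, with the minimum $h(0)=1$:

```python
from math import gamma

def h(x):
    # h(x) = Gamma(1-2x) * Gamma(1+x) / Gamma(1-x), defined for -1/2 < x < 1/2
    return gamma(1 - 2 * x) * gamma(1 + x) / gamma(1 - x)

# h attains its minimum value 1 at x = 0 and is >= 1 everywhere on the interval
assert abs(h(0.0) - 1.0) < 1e-12
for k in range(-49, 50):
    x = k / 100.0  # sample points of the open interval (-1/2, 1/2)
    assert h(x) >= 1.0 - 1e-12
```

All three Gamma arguments stay in $(0,2)$ on this interval, so `math.gamma` is evaluated well away from its poles.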
Maximum Number of Regions in the Plane Using Zig Lines?
2,084,177
7
https://math.stackexchange.com/questions/2084177/maximum-number-of-regions-in-the-plane-using-zig-lines
<p>I'm having trouble understanding how the book came up with the formula for calculating $Z_n$, the maximum number of regions created using $n$ zig lines (a zig line is a line containing just one sharp bend). Earlier in the <a href="https://notendur.hi.is/pgg/(ebook-pdf)%20-%20Mathematics%20-%20Concrete%20Mathematics.pdf" rel="nofollow noreferrer">book</a> (Page 5), we derived $L_n$, the maximum number of regions created using $n$ straight lines. What follows is the derivation of $Z_n$:</p> <blockquote> <p><a href="https://i.stack.imgur.com/1zFU0.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/1zFU0.png" alt=""></a></p> <p><a href="https://i.stack.imgur.com/7E7LT.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/7E7LT.png" alt=""></a></p> </blockquote> <p>First, I'm not exactly sure what the underlined sentence means. From the picture of $Z_2$, my only guess at its meaning is that none of the zigs can contain the zig point of another zig.</p> <p>My other question is about the assumptions that the book seems to be making in its argument (written in color):</p> <p>1) Imagine $n$ zigs, and assume they split the plane into $Z_n$ regions. Extend all the lines as in the example. This creates a plane split by $2n$ lines $\color{red}{\text{into } L_{2n} \text{ regions}}.$</p> <p>2) Now, erase the extensions. For each extension we erase, $\color{blue}{\text{we lose 2 regions}}.$</p> <p>3) Therefore $n$ zigs split the plane into $L_{2n}-2n$ regions.</p> <p>The blue assumption surely has something to do with my underlined sentence, but it's not obvious to me that in a very complicated figure with many intersections, we lose exactly two regions per deletion of one extension.</p> <p>Could someone help make the justification for these assumptions more clear?</p> <p>Thanks.</p>
[ { "answer_id": 2084205, "score": 5, "text": "<p>Suppose that you already have $n$ zig lines in place, for a total of $2n$ rays. You draw a line $\\ell_1$ that intersects all $2n$ rays. Then you draw a line $\\ell_2$ that intersects that line <em>and</em> all $2n$ rays. Let $p$ be the point of intersection of $\\ell_1$ and $\\ell_2$, and for $i=1,2$ let $\\{p_{i,k}:k=1,\\ldots,2n\\}$ be the set of points of intersection of $\\ell_i$ with the $2n$ original rays. The underlined statement means that the points $p_{1,k}$ must all lie on the same side of $p$ on the line $\\ell_1$, and the points $p_{2,k}$ must all lie on the same side of $p$ on the line $\\ell_2$. When we erase half of $\\ell_1$ and half of $\\ell_2$ to convert them to a zig line with its bend at $p$, we’ll keep the half of $\\ell_1$ that contains the points $p_{1,k}$ and the half of $\\ell_2$ that contains the points $p_{2,k}$.</p>\n\n<p>I don’t think that it’s immediately clear that this ensures that $Z_n=L_{2n}-2n$ for each $n\\in\\Bbb N$, but we can show this by induction on $n$.</p>\n\n<p>The line $\\ell_1$ cut each of the original $2n$ rays, so it split $2n+1$ regions in two and therefore added $2n+1$ regions. Then $\\ell_2$ cut each of the $2n$ original rays <em>and</em> $\\ell_1$, so it split $2n+2$ regions in two and so added another $2n+2$ regions. However, when we erase the halves of $\\ell_1$ and $\\ell_2$ that don’t contain the points of intersection with the original rays, we collapse three regions into one: the region between the two rays that we’re erasing merges with the two regions adjacent to it across those erased rays, and of course that means that they merge into each other as well. 
(Note that the original $2n$ rays never enter the region between the two rays that we’re erasing, and exactly what they do outside of it is irrelevant to this point.)</p>\n\n<p>Assuming as an induction hypothesis that $Z_n=L_{2n}-2n$, this shows that </p>\n\n<p>$$\\begin{align*}\nZ_{n+1}&amp;=Z_n+(2n+1)+(2n+2)-2\\\\\n&amp;=L_{2n}-2n+(2n+1)+(2n+2)-2\\\\\n&amp;=L_{2n}+2n+1\\\\\n&amp;=L_{2n+1}\\\\\n&amp;=L_{2n+2}-(2n+2)\\;,\n\\end{align*}$$</p>\n\n<p>and since $Z_0=1=L_{2\\cdot0}-2\\cdot0$, we have a proof by induction that $Z_n=L_{2n}-2n$ for each $n\\in\\Bbb N$.</p>\n" }, { "answer_id": 3129308, "score": 0, "text": "<p>Alternate Solution:</p>\n\n<p>Suppose we already have the number of regions formed by the first <span class=\"math-container\">$n-1$</span> bent lines, i.e. <span class=\"math-container\">$Z_{n-1}$</span>.</p>\n\n<p>Now we add the <span class=\"math-container\">$n^{th}$</span> bent line; its bend point (let it be <span class=\"math-container\">$P$</span>) will lie in some region, and now the 2 half lines can be made to intersect all currently existing lines, which are L = <span class=\"math-container\">$2*(n-1)$</span> in number. Each intersection results in a new region being formed, just as in the case of <span class=\"math-container\">$n$</span> straight lines. Since there are 2 new half lines, this will create <span class=\"math-container\">$2*L = 2*2*(n-1)$</span> new regions, and one extra region is formed by the intersection at the bend (point <span class=\"math-container\">$P$</span>). 
So the recurrence can be written as:</p>\n\n<p><span class=\"math-container\">$$Z_n = Z_{n-1} + 2*2*(n-1) + 1$$</span>\n<span class=\"math-container\">$$Z_n = Z_{n-1} + 4n - 3$$</span>\nwith base case <span class=\"math-container\">$Z_0 = 1$</span>\n<span class=\"math-container\">$$Z_n = 4\\sum_{i=1}^{n}i - 3\\sum_{i=1}^{n}1 + Z_0$$</span>\n<span class=\"math-container\">$$Z_n = 4\\cdot\\frac{n(n+1)}{2} -3n + 1$$</span>\n<span class=\"math-container\">$$Z_n = 2n^2 - n+1$$</span></p>\n" } ]
However, when we erase the halves of $\\ell_1$ and $\\ell_2$ that don’t contain the points of intersection with the original rays, we collapse three regions into one: the region between the two rays that we’re erasing merges with the two regions adjacent two it across those erased rays, and of course that means that they merge into each other as well. (Note that the original $2n$ rays never enter the region between the two rays that we’re erasing, and exactly what they do outside of it is irrelevant to this point.)</p>\n\n<p>Assuming as an induction hypothesis that $Z_n=L_{2n}-2n$, this shows that </p>\n\n<p>$$\\begin{align*}\nZ_{n+1}&amp;=Z_n+(2n+1)+(2n+2)-2\\\\\n&amp;=L_{2n}-2n+(2n+1)+(2n+2)-2\\\\\n&amp;=L_{2n}+2n+1\\\\\n&amp;=L_{2n+1}\\\\\n&amp;=L_{2n+2}-(2n+2)\\;,\n\\end{align*}$$</p>\n\n<p>and since $Z_0=1=L_0-2\\cdot0\\cdot1$, we have a proof by induction that $Z_n=L_{2n}-2n$ for each $n\\in\\Bbb N$.</p>\n" }, { "answer_id": 3129308, "score": 0, "text": "<p>Alternate Solution :</p>\n\n<p>Suppose we already have the number of regions formed by first <span class=\"math-container\">$n-1$</span> bent lines i.e. <span class=\"math-container\">$Z_{n-1}$</span>.</p>\n\n<p>Now we add the <span class=\"math-container\">$n^{th}$</span> bent line, the point of intersection(let it be <span class=\"math-container\">$P$</span>) will lie in certain arbitrary region and now the 2 half lines can be made to intersect all currently existing line which are L = <span class=\"math-container\">$2*(n-1)$</span> in number. Since each intersection will result in a new region being formed just as the case with <span class=\"math-container\">$n$</span> straight lines. Since there are 2 new lines this will create <span class=\"math-container\">$2*L = 2*2*(n-1)$</span> new region and one extra region formed by intersection at the bent (point <span class=\"math-container\">$P$</span>). 
So recurrence can be written as :</p>\n\n<p><span class=\"math-container\">$$Z_n = Z_{n-1} + 2*2*(n-1) + 1$$</span>\n<span class=\"math-container\">$$Z_n = Z_{n-1} + 4n - 3$$</span>\nwith base case <span class=\"math-container\">$Z_0 = 1$</span>\n<span class=\"math-container\">$$Z_n = 4\\sum_{i=1}^{n}n - 3\\sum_{i=1}^{n}1 + Z_0$$</span>\n<span class=\"math-container\">$$Z_n = 4.\\frac{n(n+1)}{2} -3n + 1$$</span>\n<span class=\"math-container\">$$Z_n = 2n^2 - n+1$$</span></p>\n" }, { "answer_id": 2084205, "score": 5, "text": "<p>Suppose that you already have $n$ zig lines in place, for a total of $2n$ rays. You draw a line $\\ell_1$ that intersects all $2n$ rays. Then you draw a line $\\ell_2$ that intersects that line <em>and</em> all $2n$ rays. Let $p$ be the point of intersection of $\\ell_1$ and $\\ell_2$, and for $i=1,2$ let $\\{p_{i,k}:k=1,\\ldots,2n\\}$ be the set of points of intersection of $\\ell_i$ with the $2n$ original rays. The underlined statement means that the points $p_{1,k}$ must all lie on the same side of $p$ on the line $\\ell_1$, and the points $p_{2,k}$ must all lie on the same side of $p$ on the line $\\ell_2$. When we erase half of $\\ell_1$ and half of $\\ell_2$ to convert them to a zig line with its bend at $p$, we’ll keep the half of $\\ell_1$ that contains the points $p_{1,k}$ and the half of $\\ell_2$ that contains the points $p_{2,k}$.</p>\n\n<p>I don’t think that it’s immediately clear that this ensures that $Z_n=L_{2n}-2n$ for each $n\\in\\Bbb N$, but we can show this by induction on $n$.</p>\n\n<p>The line $\\ell_1$ cut each of the original $2n$ rays, so it split $2n+1$ regions in two and therefore added $2n+1$ regions. Then $\\ell_2$ cut each of the $2n$ original rays <em>and</em> $\\ell_1$, so it split $2n+2$ regions in two and so added another $2n+2$ regions. 
However, when we erase the halves of $\\ell_1$ and $\\ell_2$ that don’t contain the points of intersection with the original rays, we collapse three regions into one: the region between the two rays that we’re erasing merges with the two regions adjacent two it across those erased rays, and of course that means that they merge into each other as well. (Note that the original $2n$ rays never enter the region between the two rays that we’re erasing, and exactly what they do outside of it is irrelevant to this point.)</p>\n\n<p>Assuming as an induction hypothesis that $Z_n=L_{2n}-2n$, this shows that </p>\n\n<p>$$\\begin{align*}\nZ_{n+1}&amp;=Z_n+(2n+1)+(2n+2)-2\\\\\n&amp;=L_{2n}-2n+(2n+1)+(2n+2)-2\\\\\n&amp;=L_{2n}+2n+1\\\\\n&amp;=L_{2n+1}\\\\\n&amp;=L_{2n+2}-(2n+2)\\;,\n\\end{align*}$$</p>\n\n<p>and since $Z_0=1=L_0-2\\cdot0\\cdot1$, we have a proof by induction that $Z_n=L_{2n}-2n$ for each $n\\in\\Bbb N$.</p>\n" }, { "answer_id": 3129308, "score": 0, "text": "<p>Alternate Solution :</p>\n\n<p>Suppose we already have the number of regions formed by first <span class=\"math-container\">$n-1$</span> bent lines i.e. <span class=\"math-container\">$Z_{n-1}$</span>.</p>\n\n<p>Now we add the <span class=\"math-container\">$n^{th}$</span> bent line, the point of intersection(let it be <span class=\"math-container\">$P$</span>) will lie in certain arbitrary region and now the 2 half lines can be made to intersect all currently existing line which are L = <span class=\"math-container\">$2*(n-1)$</span> in number. Since each intersection will result in a new region being formed just as the case with <span class=\"math-container\">$n$</span> straight lines. Since there are 2 new lines this will create <span class=\"math-container\">$2*L = 2*2*(n-1)$</span> new region and one extra region formed by intersection at the bent (point <span class=\"math-container\">$P$</span>). 
So recurrence can be written as :</p>\n\n<p><span class=\"math-container\">$$Z_n = Z_{n-1} + 2*2*(n-1) + 1$$</span>\n<span class=\"math-container\">$$Z_n = Z_{n-1} + 4n - 3$$</span>\nwith base case <span class=\"math-container\">$Z_0 = 1$</span>\n<span class=\"math-container\">$$Z_n = 4\\sum_{i=1}^{n}n - 3\\sum_{i=1}^{n}1 + Z_0$$</span>\n<span class=\"math-container\">$$Z_n = 4.\\frac{n(n+1)}{2} -3n + 1$$</span>\n<span class=\"math-container\">$$Z_n = 2n^2 - n+1$$</span></p>\n" }, { "answer_id": 2084205, "score": 5, "text": "<p>Suppose that you already have $n$ zig lines in place, for a total of $2n$ rays. You draw a line $\\ell_1$ that intersects all $2n$ rays. Then you draw a line $\\ell_2$ that intersects that line <em>and</em> all $2n$ rays. Let $p$ be the point of intersection of $\\ell_1$ and $\\ell_2$, and for $i=1,2$ let $\\{p_{i,k}:k=1,\\ldots,2n\\}$ be the set of points of intersection of $\\ell_i$ with the $2n$ original rays. The underlined statement means that the points $p_{1,k}$ must all lie on the same side of $p$ on the line $\\ell_1$, and the points $p_{2,k}$ must all lie on the same side of $p$ on the line $\\ell_2$. When we erase half of $\\ell_1$ and half of $\\ell_2$ to convert them to a zig line with its bend at $p$, we’ll keep the half of $\\ell_1$ that contains the points $p_{1,k}$ and the half of $\\ell_2$ that contains the points $p_{2,k}$.</p>\n\n<p>I don’t think that it’s immediately clear that this ensures that $Z_n=L_{2n}-2n$ for each $n\\in\\Bbb N$, but we can show this by induction on $n$.</p>\n\n<p>The line $\\ell_1$ cut each of the original $2n$ rays, so it split $2n+1$ regions in two and therefore added $2n+1$ regions. Then $\\ell_2$ cut each of the $2n$ original rays <em>and</em> $\\ell_1$, so it split $2n+2$ regions in two and so added another $2n+2$ regions. 
However, when we erase the halves of $\\ell_1$ and $\\ell_2$ that don’t contain the points of intersection with the original rays, we collapse three regions into one: the region between the two rays that we’re erasing merges with the two regions adjacent two it across those erased rays, and of course that means that they merge into each other as well. (Note that the original $2n$ rays never enter the region between the two rays that we’re erasing, and exactly what they do outside of it is irrelevant to this point.)</p>\n\n<p>Assuming as an induction hypothesis that $Z_n=L_{2n}-2n$, this shows that </p>\n\n<p>$$\\begin{align*}\nZ_{n+1}&amp;=Z_n+(2n+1)+(2n+2)-2\\\\\n&amp;=L_{2n}-2n+(2n+1)+(2n+2)-2\\\\\n&amp;=L_{2n}+2n+1\\\\\n&amp;=L_{2n+1}\\\\\n&amp;=L_{2n+2}-(2n+2)\\;,\n\\end{align*}$$</p>\n\n<p>and since $Z_0=1=L_0-2\\cdot0\\cdot1$, we have a proof by induction that $Z_n=L_{2n}-2n$ for each $n\\in\\Bbb N$.</p>\n" }, { "answer_id": 3129308, "score": 0, "text": "<p>Alternate Solution :</p>\n\n<p>Suppose we already have the number of regions formed by first <span class=\"math-container\">$n-1$</span> bent lines i.e. <span class=\"math-container\">$Z_{n-1}$</span>.</p>\n\n<p>Now we add the <span class=\"math-container\">$n^{th}$</span> bent line, the point of intersection(let it be <span class=\"math-container\">$P$</span>) will lie in certain arbitrary region and now the 2 half lines can be made to intersect all currently existing line which are L = <span class=\"math-container\">$2*(n-1)$</span> in number. Since each intersection will result in a new region being formed just as the case with <span class=\"math-container\">$n$</span> straight lines. Since there are 2 new lines this will create <span class=\"math-container\">$2*L = 2*2*(n-1)$</span> new region and one extra region formed by intersection at the bent (point <span class=\"math-container\">$P$</span>). 
So recurrence can be written as :</p>\n\n<p><span class=\"math-container\">$$Z_n = Z_{n-1} + 2*2*(n-1) + 1$$</span>\n<span class=\"math-container\">$$Z_n = Z_{n-1} + 4n - 3$$</span>\nwith base case <span class=\"math-container\">$Z_0 = 1$</span>\n<span class=\"math-container\">$$Z_n = 4\\sum_{i=1}^{n}n - 3\\sum_{i=1}^{n}1 + Z_0$$</span>\n<span class=\"math-container\">$$Z_n = 4.\\frac{n(n+1)}{2} -3n + 1$$</span>\n<span class=\"math-container\">$$Z_n = 2n^2 - n+1$$</span></p>\n" }, { "answer_id": 2084205, "score": 5, "text": "<p>Suppose that you already have $n$ zig lines in place, for a total of $2n$ rays. You draw a line $\\ell_1$ that intersects all $2n$ rays. Then you draw a line $\\ell_2$ that intersects that line <em>and</em> all $2n$ rays. Let $p$ be the point of intersection of $\\ell_1$ and $\\ell_2$, and for $i=1,2$ let $\\{p_{i,k}:k=1,\\ldots,2n\\}$ be the set of points of intersection of $\\ell_i$ with the $2n$ original rays. The underlined statement means that the points $p_{1,k}$ must all lie on the same side of $p$ on the line $\\ell_1$, and the points $p_{2,k}$ must all lie on the same side of $p$ on the line $\\ell_2$. When we erase half of $\\ell_1$ and half of $\\ell_2$ to convert them to a zig line with its bend at $p$, we’ll keep the half of $\\ell_1$ that contains the points $p_{1,k}$ and the half of $\\ell_2$ that contains the points $p_{2,k}$.</p>\n\n<p>I don’t think that it’s immediately clear that this ensures that $Z_n=L_{2n}-2n$ for each $n\\in\\Bbb N$, but we can show this by induction on $n$.</p>\n\n<p>The line $\\ell_1$ cut each of the original $2n$ rays, so it split $2n+1$ regions in two and therefore added $2n+1$ regions. Then $\\ell_2$ cut each of the $2n$ original rays <em>and</em> $\\ell_1$, so it split $2n+2$ regions in two and so added another $2n+2$ regions. 
However, when we erase the halves of $\\ell_1$ and $\\ell_2$ that don’t contain the points of intersection with the original rays, we collapse three regions into one: the region between the two rays that we’re erasing merges with the two regions adjacent two it across those erased rays, and of course that means that they merge into each other as well. (Note that the original $2n$ rays never enter the region between the two rays that we’re erasing, and exactly what they do outside of it is irrelevant to this point.)</p>\n\n<p>Assuming as an induction hypothesis that $Z_n=L_{2n}-2n$, this shows that </p>\n\n<p>$$\\begin{align*}\nZ_{n+1}&amp;=Z_n+(2n+1)+(2n+2)-2\\\\\n&amp;=L_{2n}-2n+(2n+1)+(2n+2)-2\\\\\n&amp;=L_{2n}+2n+1\\\\\n&amp;=L_{2n+1}\\\\\n&amp;=L_{2n+2}-(2n+2)\\;,\n\\end{align*}$$</p>\n\n<p>and since $Z_0=1=L_0-2\\cdot0\\cdot1$, we have a proof by induction that $Z_n=L_{2n}-2n$ for each $n\\in\\Bbb N$.</p>\n" }, { "answer_id": 3129308, "score": 0, "text": "<p>Alternate Solution :</p>\n\n<p>Suppose we already have the number of regions formed by first <span class=\"math-container\">$n-1$</span> bent lines i.e. <span class=\"math-container\">$Z_{n-1}$</span>.</p>\n\n<p>Now we add the <span class=\"math-container\">$n^{th}$</span> bent line, the point of intersection(let it be <span class=\"math-container\">$P$</span>) will lie in certain arbitrary region and now the 2 half lines can be made to intersect all currently existing line which are L = <span class=\"math-container\">$2*(n-1)$</span> in number. Since each intersection will result in a new region being formed just as the case with <span class=\"math-container\">$n$</span> straight lines. Since there are 2 new lines this will create <span class=\"math-container\">$2*L = 2*2*(n-1)$</span> new region and one extra region formed by intersection at the bent (point <span class=\"math-container\">$P$</span>). 
So recurrence can be written as :</p>\n\n<p><span class=\"math-container\">$$Z_n = Z_{n-1} + 2*2*(n-1) + 1$$</span>\n<span class=\"math-container\">$$Z_n = Z_{n-1} + 4n - 3$$</span>\nwith base case <span class=\"math-container\">$Z_0 = 1$</span>\n<span class=\"math-container\">$$Z_n = 4\\sum_{i=1}^{n}n - 3\\sum_{i=1}^{n}1 + Z_0$$</span>\n<span class=\"math-container\">$$Z_n = 4.\\frac{n(n+1)}{2} -3n + 1$$</span>\n<span class=\"math-container\">$$Z_n = 2n^2 - n+1$$</span></p>\n" }, { "answer_id": 2084205, "score": 5, "text": "<p>Suppose that you already have $n$ zig lines in place, for a total of $2n$ rays. You draw a line $\\ell_1$ that intersects all $2n$ rays. Then you draw a line $\\ell_2$ that intersects that line <em>and</em> all $2n$ rays. Let $p$ be the point of intersection of $\\ell_1$ and $\\ell_2$, and for $i=1,2$ let $\\{p_{i,k}:k=1,\\ldots,2n\\}$ be the set of points of intersection of $\\ell_i$ with the $2n$ original rays. The underlined statement means that the points $p_{1,k}$ must all lie on the same side of $p$ on the line $\\ell_1$, and the points $p_{2,k}$ must all lie on the same side of $p$ on the line $\\ell_2$. When we erase half of $\\ell_1$ and half of $\\ell_2$ to convert them to a zig line with its bend at $p$, we’ll keep the half of $\\ell_1$ that contains the points $p_{1,k}$ and the half of $\\ell_2$ that contains the points $p_{2,k}$.</p>\n\n<p>I don’t think that it’s immediately clear that this ensures that $Z_n=L_{2n}-2n$ for each $n\\in\\Bbb N$, but we can show this by induction on $n$.</p>\n\n<p>The line $\\ell_1$ cut each of the original $2n$ rays, so it split $2n+1$ regions in two and therefore added $2n+1$ regions. Then $\\ell_2$ cut each of the $2n$ original rays <em>and</em> $\\ell_1$, so it split $2n+2$ regions in two and so added another $2n+2$ regions. 
However, when we erase the halves of $\\ell_1$ and $\\ell_2$ that don’t contain the points of intersection with the original rays, we collapse three regions into one: the region between the two rays that we’re erasing merges with the two regions adjacent two it across those erased rays, and of course that means that they merge into each other as well. (Note that the original $2n$ rays never enter the region between the two rays that we’re erasing, and exactly what they do outside of it is irrelevant to this point.)</p>\n\n<p>Assuming as an induction hypothesis that $Z_n=L_{2n}-2n$, this shows that </p>\n\n<p>$$\\begin{align*}\nZ_{n+1}&amp;=Z_n+(2n+1)+(2n+2)-2\\\\\n&amp;=L_{2n}-2n+(2n+1)+(2n+2)-2\\\\\n&amp;=L_{2n}+2n+1\\\\\n&amp;=L_{2n+1}\\\\\n&amp;=L_{2n+2}-(2n+2)\\;,\n\\end{align*}$$</p>\n\n<p>and since $Z_0=1=L_0-2\\cdot0\\cdot1$, we have a proof by induction that $Z_n=L_{2n}-2n$ for each $n\\in\\Bbb N$.</p>\n" }, { "answer_id": 3129308, "score": 0, "text": "<p>Alternate Solution :</p>\n\n<p>Suppose we already have the number of regions formed by first <span class=\"math-container\">$n-1$</span> bent lines i.e. <span class=\"math-container\">$Z_{n-1}$</span>.</p>\n\n<p>Now we add the <span class=\"math-container\">$n^{th}$</span> bent line, the point of intersection(let it be <span class=\"math-container\">$P$</span>) will lie in certain arbitrary region and now the 2 half lines can be made to intersect all currently existing line which are L = <span class=\"math-container\">$2*(n-1)$</span> in number. Since each intersection will result in a new region being formed just as the case with <span class=\"math-container\">$n$</span> straight lines. Since there are 2 new lines this will create <span class=\"math-container\">$2*L = 2*2*(n-1)$</span> new region and one extra region formed by intersection at the bent (point <span class=\"math-container\">$P$</span>). 
So recurrence can be written as :</p>\n\n<p><span class=\"math-container\">$$Z_n = Z_{n-1} + 2*2*(n-1) + 1$$</span>\n<span class=\"math-container\">$$Z_n = Z_{n-1} + 4n - 3$$</span>\nwith base case <span class=\"math-container\">$Z_0 = 1$</span>\n<span class=\"math-container\">$$Z_n = 4\\sum_{i=1}^{n}n - 3\\sum_{i=1}^{n}1 + Z_0$$</span>\n<span class=\"math-container\">$$Z_n = 4.\\frac{n(n+1)}{2} -3n + 1$$</span>\n<span class=\"math-container\">$$Z_n = 2n^2 - n+1$$</span></p>\n" }, { "answer_id": 2084205, "score": 5, "text": "<p>Suppose that you already have $n$ zig lines in place, for a total of $2n$ rays. You draw a line $\\ell_1$ that intersects all $2n$ rays. Then you draw a line $\\ell_2$ that intersects that line <em>and</em> all $2n$ rays. Let $p$ be the point of intersection of $\\ell_1$ and $\\ell_2$, and for $i=1,2$ let $\\{p_{i,k}:k=1,\\ldots,2n\\}$ be the set of points of intersection of $\\ell_i$ with the $2n$ original rays. The underlined statement means that the points $p_{1,k}$ must all lie on the same side of $p$ on the line $\\ell_1$, and the points $p_{2,k}$ must all lie on the same side of $p$ on the line $\\ell_2$. When we erase half of $\\ell_1$ and half of $\\ell_2$ to convert them to a zig line with its bend at $p$, we’ll keep the half of $\\ell_1$ that contains the points $p_{1,k}$ and the half of $\\ell_2$ that contains the points $p_{2,k}$.</p>\n\n<p>I don’t think that it’s immediately clear that this ensures that $Z_n=L_{2n}-2n$ for each $n\\in\\Bbb N$, but we can show this by induction on $n$.</p>\n\n<p>The line $\\ell_1$ cut each of the original $2n$ rays, so it split $2n+1$ regions in two and therefore added $2n+1$ regions. Then $\\ell_2$ cut each of the $2n$ original rays <em>and</em> $\\ell_1$, so it split $2n+2$ regions in two and so added another $2n+2$ regions. 
However, when we erase the halves of $\\ell_1$ and $\\ell_2$ that don’t contain the points of intersection with the original rays, we collapse three regions into one: the region between the two rays that we’re erasing merges with the two regions adjacent two it across those erased rays, and of course that means that they merge into each other as well. (Note that the original $2n$ rays never enter the region between the two rays that we’re erasing, and exactly what they do outside of it is irrelevant to this point.)</p>\n\n<p>Assuming as an induction hypothesis that $Z_n=L_{2n}-2n$, this shows that </p>\n\n<p>$$\\begin{align*}\nZ_{n+1}&amp;=Z_n+(2n+1)+(2n+2)-2\\\\\n&amp;=L_{2n}-2n+(2n+1)+(2n+2)-2\\\\\n&amp;=L_{2n}+2n+1\\\\\n&amp;=L_{2n+1}\\\\\n&amp;=L_{2n+2}-(2n+2)\\;,\n\\end{align*}$$</p>\n\n<p>and since $Z_0=1=L_0-2\\cdot0\\cdot1$, we have a proof by induction that $Z_n=L_{2n}-2n$ for each $n\\in\\Bbb N$.</p>\n" }, { "answer_id": 3129308, "score": 0, "text": "<p>Alternate Solution :</p>\n\n<p>Suppose we already have the number of regions formed by first <span class=\"math-container\">$n-1$</span> bent lines i.e. <span class=\"math-container\">$Z_{n-1}$</span>.</p>\n\n<p>Now we add the <span class=\"math-container\">$n^{th}$</span> bent line, the point of intersection(let it be <span class=\"math-container\">$P$</span>) will lie in certain arbitrary region and now the 2 half lines can be made to intersect all currently existing line which are L = <span class=\"math-container\">$2*(n-1)$</span> in number. Since each intersection will result in a new region being formed just as the case with <span class=\"math-container\">$n$</span> straight lines. Since there are 2 new lines this will create <span class=\"math-container\">$2*L = 2*2*(n-1)$</span> new region and one extra region formed by intersection at the bent (point <span class=\"math-container\">$P$</span>). 
So recurrence can be written as :</p>\n\n<p><span class=\"math-container\">$$Z_n = Z_{n-1} + 2*2*(n-1) + 1$$</span>\n<span class=\"math-container\">$$Z_n = Z_{n-1} + 4n - 3$$</span>\nwith base case <span class=\"math-container\">$Z_0 = 1$</span>\n<span class=\"math-container\">$$Z_n = 4\\sum_{i=1}^{n}n - 3\\sum_{i=1}^{n}1 + Z_0$$</span>\n<span class=\"math-container\">$$Z_n = 4.\\frac{n(n+1)}{2} -3n + 1$$</span>\n<span class=\"math-container\">$$Z_n = 2n^2 - n+1$$</span></p>\n" }, { "answer_id": 2084205, "score": 5, "text": "<p>Suppose that you already have $n$ zig lines in place, for a total of $2n$ rays. You draw a line $\\ell_1$ that intersects all $2n$ rays. Then you draw a line $\\ell_2$ that intersects that line <em>and</em> all $2n$ rays. Let $p$ be the point of intersection of $\\ell_1$ and $\\ell_2$, and for $i=1,2$ let $\\{p_{i,k}:k=1,\\ldots,2n\\}$ be the set of points of intersection of $\\ell_i$ with the $2n$ original rays. The underlined statement means that the points $p_{1,k}$ must all lie on the same side of $p$ on the line $\\ell_1$, and the points $p_{2,k}$ must all lie on the same side of $p$ on the line $\\ell_2$. When we erase half of $\\ell_1$ and half of $\\ell_2$ to convert them to a zig line with its bend at $p$, we’ll keep the half of $\\ell_1$ that contains the points $p_{1,k}$ and the half of $\\ell_2$ that contains the points $p_{2,k}$.</p>\n\n<p>I don’t think that it’s immediately clear that this ensures that $Z_n=L_{2n}-2n$ for each $n\\in\\Bbb N$, but we can show this by induction on $n$.</p>\n\n<p>The line $\\ell_1$ cut each of the original $2n$ rays, so it split $2n+1$ regions in two and therefore added $2n+1$ regions. Then $\\ell_2$ cut each of the $2n$ original rays <em>and</em> $\\ell_1$, so it split $2n+2$ regions in two and so added another $2n+2$ regions. 
However, when we erase the halves of $\\ell_1$ and $\\ell_2$ that don’t contain the points of intersection with the original rays, we collapse three regions into one: the region between the two rays that we’re erasing merges with the two regions adjacent two it across those erased rays, and of course that means that they merge into each other as well. (Note that the original $2n$ rays never enter the region between the two rays that we’re erasing, and exactly what they do outside of it is irrelevant to this point.)</p>\n\n<p>Assuming as an induction hypothesis that $Z_n=L_{2n}-2n$, this shows that </p>\n\n<p>$$\\begin{align*}\nZ_{n+1}&amp;=Z_n+(2n+1)+(2n+2)-2\\\\\n&amp;=L_{2n}-2n+(2n+1)+(2n+2)-2\\\\\n&amp;=L_{2n}+2n+1\\\\\n&amp;=L_{2n+1}\\\\\n&amp;=L_{2n+2}-(2n+2)\\;,\n\\end{align*}$$</p>\n\n<p>and since $Z_0=1=L_0-2\\cdot0\\cdot1$, we have a proof by induction that $Z_n=L_{2n}-2n$ for each $n\\in\\Bbb N$.</p>\n" }, { "answer_id": 3129308, "score": 0, "text": "<p>Alternate Solution :</p>\n\n<p>Suppose we already have the number of regions formed by first <span class=\"math-container\">$n-1$</span> bent lines i.e. <span class=\"math-container\">$Z_{n-1}$</span>.</p>\n\n<p>Now we add the <span class=\"math-container\">$n^{th}$</span> bent line, the point of intersection(let it be <span class=\"math-container\">$P$</span>) will lie in certain arbitrary region and now the 2 half lines can be made to intersect all currently existing line which are L = <span class=\"math-container\">$2*(n-1)$</span> in number. Since each intersection will result in a new region being formed just as the case with <span class=\"math-container\">$n$</span> straight lines. Since there are 2 new lines this will create <span class=\"math-container\">$2*L = 2*2*(n-1)$</span> new region and one extra region formed by intersection at the bent (point <span class=\"math-container\">$P$</span>). 
So recurrence can be written as :</p>\n\n<p><span class=\"math-container\">$$Z_n = Z_{n-1} + 2*2*(n-1) + 1$$</span>\n<span class=\"math-container\">$$Z_n = Z_{n-1} + 4n - 3$$</span>\nwith base case <span class=\"math-container\">$Z_0 = 1$</span>\n<span class=\"math-container\">$$Z_n = 4\\sum_{i=1}^{n}n - 3\\sum_{i=1}^{n}1 + Z_0$$</span>\n<span class=\"math-container\">$$Z_n = 4.\\frac{n(n+1)}{2} -3n + 1$$</span>\n<span class=\"math-container\">$$Z_n = 2n^2 - n+1$$</span></p>\n" }, { "answer_id": 2084205, "score": 5, "text": "<p>Suppose that you already have $n$ zig lines in place, for a total of $2n$ rays. You draw a line $\\ell_1$ that intersects all $2n$ rays. Then you draw a line $\\ell_2$ that intersects that line <em>and</em> all $2n$ rays. Let $p$ be the point of intersection of $\\ell_1$ and $\\ell_2$, and for $i=1,2$ let $\\{p_{i,k}:k=1,\\ldots,2n\\}$ be the set of points of intersection of $\\ell_i$ with the $2n$ original rays. The underlined statement means that the points $p_{1,k}$ must all lie on the same side of $p$ on the line $\\ell_1$, and the points $p_{2,k}$ must all lie on the same side of $p$ on the line $\\ell_2$. When we erase half of $\\ell_1$ and half of $\\ell_2$ to convert them to a zig line with its bend at $p$, we’ll keep the half of $\\ell_1$ that contains the points $p_{1,k}$ and the half of $\\ell_2$ that contains the points $p_{2,k}$.</p>\n\n<p>I don’t think that it’s immediately clear that this ensures that $Z_n=L_{2n}-2n$ for each $n\\in\\Bbb N$, but we can show this by induction on $n$.</p>\n\n<p>The line $\\ell_1$ cut each of the original $2n$ rays, so it split $2n+1$ regions in two and therefore added $2n+1$ regions. Then $\\ell_2$ cut each of the $2n$ original rays <em>and</em> $\\ell_1$, so it split $2n+2$ regions in two and so added another $2n+2$ regions. 
However, when we erase the halves of $\\ell_1$ and $\\ell_2$ that don’t contain the points of intersection with the original rays, we collapse three regions into one: the region between the two rays that we’re erasing merges with the two regions adjacent two it across those erased rays, and of course that means that they merge into each other as well. (Note that the original $2n$ rays never enter the region between the two rays that we’re erasing, and exactly what they do outside of it is irrelevant to this point.)</p>\n\n<p>Assuming as an induction hypothesis that $Z_n=L_{2n}-2n$, this shows that </p>\n\n<p>$$\\begin{align*}\nZ_{n+1}&amp;=Z_n+(2n+1)+(2n+2)-2\\\\\n&amp;=L_{2n}-2n+(2n+1)+(2n+2)-2\\\\\n&amp;=L_{2n}+2n+1\\\\\n&amp;=L_{2n+1}\\\\\n&amp;=L_{2n+2}-(2n+2)\\;,\n\\end{align*}$$</p>\n\n<p>and since $Z_0=1=L_0-2\\cdot0\\cdot1$, we have a proof by induction that $Z_n=L_{2n}-2n$ for each $n\\in\\Bbb N$.</p>\n" }, { "answer_id": 3129308, "score": 0, "text": "<p>Alternate Solution :</p>\n\n<p>Suppose we already have the number of regions formed by first <span class=\"math-container\">$n-1$</span> bent lines i.e. <span class=\"math-container\">$Z_{n-1}$</span>.</p>\n\n<p>Now we add the <span class=\"math-container\">$n^{th}$</span> bent line, the point of intersection(let it be <span class=\"math-container\">$P$</span>) will lie in certain arbitrary region and now the 2 half lines can be made to intersect all currently existing line which are L = <span class=\"math-container\">$2*(n-1)$</span> in number. Since each intersection will result in a new region being formed just as the case with <span class=\"math-container\">$n$</span> straight lines. Since there are 2 new lines this will create <span class=\"math-container\">$2*L = 2*2*(n-1)$</span> new region and one extra region formed by intersection at the bent (point <span class=\"math-container\">$P$</span>). 
So the recurrence can be written as:</p>\n\n<p><span class=\"math-container\">$$Z_n = Z_{n-1} + 4(n-1) + 1$$</span>\n<span class=\"math-container\">$$Z_n = Z_{n-1} + 4n - 3$$</span>\nwith base case <span class=\"math-container\">$Z_0 = 1$</span>. Unrolling the recurrence:\n<span class=\"math-container\">$$Z_n = 4\\sum_{i=1}^{n}i - 3\\sum_{i=1}^{n}1 + Z_0$$</span>\n<span class=\"math-container\">$$Z_n = 4\\cdot\\frac{n(n+1)}{2} - 3n + 1$$</span>\n<span class=\"math-container\">$$Z_n = 2n^2 - n + 1$$</span></p>\n" }
]
Question: Maximum Number of Regions in the Plane Using Zig Lines? Answer 1: <p>Suppose that you already have $n$ zig lines in place, for a total of $2n$ rays. You draw a line $\ell_1$ that intersects all $2n$ rays. Then you draw a line $\ell_2$ that intersects that line <em>and</em> all $2n$ rays. Let $p$ be the point of intersection of $\ell_1$ and $\ell_2$, and for $i=1,2$ let $\{p_{i,k}:k=1,\ldots,2n\}$ be the set of points of intersection of $\ell_i$ with the $2n$ original rays. The underlined statement means that the points $p_{1,k}$ must all lie on the same side of $p$ on the line $\ell_1$, and the points $p_{2,k}$ must all lie on the same side of $p$ on the line $\ell_2$. When we erase half of $\ell_1$ and half of $\ell_2$ to convert them to a zig line with its bend at $p$, we’ll keep the half of $\ell_1$ that contains the points $p_{1,k}$ and the half of $\ell_2$ that contains the points $p_{2,k}$.</p> <p>I don’t think that it’s immediately clear that this ensures that $Z_n=L_{2n}-2n$ for each $n\in\Bbb N$, but we can show this by induction on $n$.</p> <p>The line $\ell_1$ cut each of the original $2n$ rays, so it split $2n+1$ regions in two and therefore added $2n+1$ regions. Then $\ell_2$ cut each of the $2n$ original rays <em>and</em> $\ell_1$, so it split $2n+2$ regions in two and so added another $2n+2$ regions. However, when we erase the halves of $\ell_1$ and $\ell_2$ that don’t contain the points of intersection with the original rays, we collapse three regions into one: the region between the two rays that we’re erasing merges with the two regions adjacent to it across those erased rays, and of course that means that they merge into each other as well. 
(Note that the original $2n$ rays never enter the region between the two rays that we’re erasing, and exactly what they do outside of it is irrelevant to this point.)</p> <p>Assuming as an induction hypothesis that $Z_n=L_{2n}-2n$, this shows that </p> <p>$$\begin{align*} Z_{n+1}&amp;=Z_n+(2n+1)+(2n+2)-2\\ &amp;=L_{2n}-2n+(2n+1)+(2n+2)-2\\ &amp;=L_{2n}+2n+1\\ &amp;=L_{2n+1}\\ &amp;=L_{2n+2}-(2n+2)\;, \end{align*}$$</p> <p>and since $Z_0=1=L_{2\cdot0}-2\cdot0$, we have a proof by induction that $Z_n=L_{2n}-2n$ for each $n\in\Bbb N$.</p> Answer 2: <p>Alternate Solution:</p> <p>Suppose we already have the number of regions formed by the first <span class="math-container">$n-1$</span> bent lines, i.e. <span class="math-container">$Z_{n-1}$</span>.</p> <p>Now we add the <span class="math-container">$n^{th}$</span> bent line. Its point of intersection (call it <span class="math-container">$P$</span>) will lie in some region, and the 2 half-lines can be made to intersect all currently existing lines, which are <span class="math-container">$L = 2(n-1)$</span> in number. Each intersection results in a new region being formed, just as in the case of <span class="math-container">$n$</span> straight lines. Since there are 2 new half-lines, this creates <span class="math-container">$2L = 4(n-1)$</span> new regions, plus one extra region formed by the bend at point <span class="math-container">$P$</span>. So the recurrence can be written as:</p> <p><span class="math-container">$$Z_n = Z_{n-1} + 4(n-1) + 1$$</span> <span class="math-container">$$Z_n = Z_{n-1} + 4n - 3$$</span> with base case <span class="math-container">$Z_0 = 1$</span>. Unrolling the recurrence: <span class="math-container">$$Z_n = 4\sum_{i=1}^{n}i - 3\sum_{i=1}^{n}1 + Z_0$$</span> <span class="math-container">$$Z_n = 4\cdot\frac{n(n+1)}{2} - 3n + 1$$</span> <span class="math-container">$$Z_n = 2n^2 - n + 1$$</span></p>
You draw a line $\ell_1$ that intersects all $2n$ rays. Then you draw a line $\ell_2$ that intersects that line <em>and</em> all $2n$ rays. Let $p$ be the point of intersection of $\ell_1$ and $\ell_2$, and for $i=1,2$ let $\{p_{i,k}:k=1,\ldots,2n\}$ be the set of points of intersection of $\ell_i$ with the $2n$ original rays. The underlined statement means that the points $p_{1,k}$ must all lie on the same side of $p$ on the line $\ell_1$, and the points $p_{2,k}$ must all lie on the same side of $p$ on the line $\ell_2$. When we erase half of $\ell_1$ and half of $\ell_2$ to convert them to a zig line with its bend at $p$, we’ll keep the half of $\ell_1$ that contains the points $p_{1,k}$ and the half of $\ell_2$ that contains the points $p_{2,k}$.</p> <p>I don’t think that it’s immediately clear that this ensures that $Z_n=L_{2n}-2n$ for each $n\in\Bbb N$, but we can show this by induction on $n$.</p> <p>The line $\ell_1$ cut each of the original $2n$ rays, so it split $2n+1$ regions in two and therefore added $2n+1$ regions. Then $\ell_2$ cut each of the $2n$ original rays <em>and</em> $\ell_1$, so it split $2n+2$ regions in two and so added another $2n+2$ regions. However, when we erase the halves of $\ell_1$ and $\ell_2$ that don’t contain the points of intersection with the original rays, we collapse three regions into one: the region between the two rays that we’re erasing merges with the two regions adjacent two it across those erased rays, and of course that means that they merge into each other as well. 
(Note that the original $2n$ rays never enter the region between the two rays that we’re erasing, and exactly what they do outside of it is irrelevant to this point.)</p> <p>Assuming as an induction hypothesis that $Z_n=L_{2n}-2n$, this shows that </p> <p>$$\begin{align*} Z_{n+1}&amp;=Z_n+(2n+1)+(2n+2)-2\\ &amp;=L_{2n}-2n+(2n+1)+(2n+2)-2\\ &amp;=L_{2n}+2n+1\\ &amp;=L_{2n+1}\\ &amp;=L_{2n+2}-(2n+2)\;, \end{align*}$$</p> <p>and since $Z_0=1=L_0-2\cdot0\cdot1$, we have a proof by induction that $Z_n=L_{2n}-2n$ for each $n\in\Bbb N$.</p> Answer 18: <p>Alternate Solution :</p> <p>Suppose we already have the number of regions formed by first <span class="math-container">$n-1$</span> bent lines i.e. <span class="math-container">$Z_{n-1}$</span>.</p> <p>Now we add the <span class="math-container">$n^{th}$</span> bent line, the point of intersection(let it be <span class="math-container">$P$</span>) will lie in certain arbitrary region and now the 2 half lines can be made to intersect all currently existing line which are L = <span class="math-container">$2*(n-1)$</span> in number. Since each intersection will result in a new region being formed just as the case with <span class="math-container">$n$</span> straight lines. Since there are 2 new lines this will create <span class="math-container">$2*L = 2*2*(n-1)$</span> new region and one extra region formed by intersection at the bent (point <span class="math-container">$P$</span>). So recurrence can be written as :</p> <p><span class="math-container">$$Z_n = Z_{n-1} + 2*2*(n-1) + 1$$</span> <span class="math-container">$$Z_n = Z_{n-1} + 4n - 3$$</span> with base case <span class="math-container">$Z_0 = 1$</span> <span class="math-container">$$Z_n = 4\sum_{i=1}^{n}n - 3\sum_{i=1}^{n}1 + Z_0$$</span> <span class="math-container">$$Z_n = 4.\frac{n(n+1)}{2} -3n + 1$$</span> <span class="math-container">$$Z_n = 2n^2 - n+1$$</span></p> Answer 19: <p>Suppose that you already have $n$ zig lines in place, for a total of $2n$ rays. 
You draw a line $\ell_1$ that intersects all $2n$ rays. Then you draw a line $\ell_2$ that intersects that line <em>and</em> all $2n$ rays. Let $p$ be the point of intersection of $\ell_1$ and $\ell_2$, and for $i=1,2$ let $\{p_{i,k}:k=1,\ldots,2n\}$ be the set of points of intersection of $\ell_i$ with the $2n$ original rays. The underlined statement means that the points $p_{1,k}$ must all lie on the same side of $p$ on the line $\ell_1$, and the points $p_{2,k}$ must all lie on the same side of $p$ on the line $\ell_2$. When we erase half of $\ell_1$ and half of $\ell_2$ to convert them to a zig line with its bend at $p$, we’ll keep the half of $\ell_1$ that contains the points $p_{1,k}$ and the half of $\ell_2$ that contains the points $p_{2,k}$.</p> <p>I don’t think that it’s immediately clear that this ensures that $Z_n=L_{2n}-2n$ for each $n\in\Bbb N$, but we can show this by induction on $n$.</p> <p>The line $\ell_1$ cut each of the original $2n$ rays, so it split $2n+1$ regions in two and therefore added $2n+1$ regions. Then $\ell_2$ cut each of the $2n$ original rays <em>and</em> $\ell_1$, so it split $2n+2$ regions in two and so added another $2n+2$ regions. However, when we erase the halves of $\ell_1$ and $\ell_2$ that don’t contain the points of intersection with the original rays, we collapse three regions into one: the region between the two rays that we’re erasing merges with the two regions adjacent two it across those erased rays, and of course that means that they merge into each other as well. 
(Note that the original $2n$ rays never enter the region between the two rays that we’re erasing, and exactly what they do outside of it is irrelevant to this point.)</p> <p>Assuming as an induction hypothesis that $Z_n=L_{2n}-2n$, this shows that </p> <p>$$\begin{align*} Z_{n+1}&amp;=Z_n+(2n+1)+(2n+2)-2\\ &amp;=L_{2n}-2n+(2n+1)+(2n+2)-2\\ &amp;=L_{2n}+2n+1\\ &amp;=L_{2n+1}\\ &amp;=L_{2n+2}-(2n+2)\;, \end{align*}$$</p> <p>and since $Z_0=1=L_0-2\cdot0\cdot1$, we have a proof by induction that $Z_n=L_{2n}-2n$ for each $n\in\Bbb N$.</p> Answer 20: <p>Alternate Solution :</p> <p>Suppose we already have the number of regions formed by first <span class="math-container">$n-1$</span> bent lines i.e. <span class="math-container">$Z_{n-1}$</span>.</p> <p>Now we add the <span class="math-container">$n^{th}$</span> bent line, the point of intersection(let it be <span class="math-container">$P$</span>) will lie in certain arbitrary region and now the 2 half lines can be made to intersect all currently existing line which are L = <span class="math-container">$2*(n-1)$</span> in number. Since each intersection will result in a new region being formed just as the case with <span class="math-container">$n$</span> straight lines. Since there are 2 new lines this will create <span class="math-container">$2*L = 2*2*(n-1)$</span> new region and one extra region formed by intersection at the bent (point <span class="math-container">$P$</span>). So recurrence can be written as :</p> <p><span class="math-container">$$Z_n = Z_{n-1} + 2*2*(n-1) + 1$$</span> <span class="math-container">$$Z_n = Z_{n-1} + 4n - 3$$</span> with base case <span class="math-container">$Z_0 = 1$</span> <span class="math-container">$$Z_n = 4\sum_{i=1}^{n}n - 3\sum_{i=1}^{n}1 + Z_0$$</span> <span class="math-container">$$Z_n = 4.\frac{n(n+1)}{2} -3n + 1$$</span> <span class="math-container">$$Z_n = 2n^2 - n+1$$</span></p> Answer 21: <p>Suppose that you already have $n$ zig lines in place, for a total of $2n$ rays. 
You draw a line $\ell_1$ that intersects all $2n$ rays. Then you draw a line $\ell_2$ that intersects that line <em>and</em> all $2n$ rays. Let $p$ be the point of intersection of $\ell_1$ and $\ell_2$, and for $i=1,2$ let $\{p_{i,k}:k=1,\ldots,2n\}$ be the set of points of intersection of $\ell_i$ with the $2n$ original rays. The underlined statement means that the points $p_{1,k}$ must all lie on the same side of $p$ on the line $\ell_1$, and the points $p_{2,k}$ must all lie on the same side of $p$ on the line $\ell_2$. When we erase half of $\ell_1$ and half of $\ell_2$ to convert them to a zig line with its bend at $p$, we’ll keep the half of $\ell_1$ that contains the points $p_{1,k}$ and the half of $\ell_2$ that contains the points $p_{2,k}$.</p> <p>I don’t think that it’s immediately clear that this ensures that $Z_n=L_{2n}-2n$ for each $n\in\Bbb N$, but we can show this by induction on $n$.</p> <p>The line $\ell_1$ cut each of the original $2n$ rays, so it split $2n+1$ regions in two and therefore added $2n+1$ regions. Then $\ell_2$ cut each of the $2n$ original rays <em>and</em> $\ell_1$, so it split $2n+2$ regions in two and so added another $2n+2$ regions. However, when we erase the halves of $\ell_1$ and $\ell_2$ that don’t contain the points of intersection with the original rays, we collapse three regions into one: the region between the two rays that we’re erasing merges with the two regions adjacent two it across those erased rays, and of course that means that they merge into each other as well. 
(Note that the original $2n$ rays never enter the region between the two rays that we’re erasing, and exactly what they do outside of it is irrelevant to this point.)</p> <p>Assuming as an induction hypothesis that $Z_n=L_{2n}-2n$, this shows that </p> <p>$$\begin{align*} Z_{n+1}&amp;=Z_n+(2n+1)+(2n+2)-2\\ &amp;=L_{2n}-2n+(2n+1)+(2n+2)-2\\ &amp;=L_{2n}+2n+1\\ &amp;=L_{2n+1}\\ &amp;=L_{2n+2}-(2n+2)\;, \end{align*}$$</p> <p>and since $Z_0=1=L_0-2\cdot0\cdot1$, we have a proof by induction that $Z_n=L_{2n}-2n$ for each $n\in\Bbb N$.</p> Answer 22: <p>Alternate Solution :</p> <p>Suppose we already have the number of regions formed by first <span class="math-container">$n-1$</span> bent lines i.e. <span class="math-container">$Z_{n-1}$</span>.</p> <p>Now we add the <span class="math-container">$n^{th}$</span> bent line, the point of intersection(let it be <span class="math-container">$P$</span>) will lie in certain arbitrary region and now the 2 half lines can be made to intersect all currently existing line which are L = <span class="math-container">$2*(n-1)$</span> in number. Since each intersection will result in a new region being formed just as the case with <span class="math-container">$n$</span> straight lines. Since there are 2 new lines this will create <span class="math-container">$2*L = 2*2*(n-1)$</span> new region and one extra region formed by intersection at the bent (point <span class="math-container">$P$</span>). So recurrence can be written as :</p> <p><span class="math-container">$$Z_n = Z_{n-1} + 2*2*(n-1) + 1$$</span> <span class="math-container">$$Z_n = Z_{n-1} + 4n - 3$$</span> with base case <span class="math-container">$Z_0 = 1$</span> <span class="math-container">$$Z_n = 4\sum_{i=1}^{n}n - 3\sum_{i=1}^{n}1 + Z_0$$</span> <span class="math-container">$$Z_n = 4.\frac{n(n+1)}{2} -3n + 1$$</span> <span class="math-container">$$Z_n = 2n^2 - n+1$$</span></p> Answer 23: <p>Suppose that you already have $n$ zig lines in place, for a total of $2n$ rays. 
You draw a line $\ell_1$ that intersects all $2n$ rays. Then you draw a line $\ell_2$ that intersects that line <em>and</em> all $2n$ rays. Let $p$ be the point of intersection of $\ell_1$ and $\ell_2$, and for $i=1,2$ let $\{p_{i,k}:k=1,\ldots,2n\}$ be the set of points of intersection of $\ell_i$ with the $2n$ original rays. The underlined statement means that the points $p_{1,k}$ must all lie on the same side of $p$ on the line $\ell_1$, and the points $p_{2,k}$ must all lie on the same side of $p$ on the line $\ell_2$. When we erase half of $\ell_1$ and half of $\ell_2$ to convert them to a zig line with its bend at $p$, we’ll keep the half of $\ell_1$ that contains the points $p_{1,k}$ and the half of $\ell_2$ that contains the points $p_{2,k}$.</p> <p>I don’t think that it’s immediately clear that this ensures that $Z_n=L_{2n}-2n$ for each $n\in\Bbb N$, but we can show this by induction on $n$.</p> <p>The line $\ell_1$ cut each of the original $2n$ rays, so it split $2n+1$ regions in two and therefore added $2n+1$ regions. Then $\ell_2$ cut each of the $2n$ original rays <em>and</em> $\ell_1$, so it split $2n+2$ regions in two and so added another $2n+2$ regions. However, when we erase the halves of $\ell_1$ and $\ell_2$ that don’t contain the points of intersection with the original rays, we collapse three regions into one: the region between the two rays that we’re erasing merges with the two regions adjacent two it across those erased rays, and of course that means that they merge into each other as well. 
(Note that the original $2n$ rays never enter the region between the two rays that we’re erasing, and exactly what they do outside of it is irrelevant to this point.)</p> <p>Assuming as an induction hypothesis that $Z_n=L_{2n}-2n$, this shows that </p> <p>$$\begin{align*} Z_{n+1}&amp;=Z_n+(2n+1)+(2n+2)-2\\ &amp;=L_{2n}-2n+(2n+1)+(2n+2)-2\\ &amp;=L_{2n}+2n+1\\ &amp;=L_{2n+1}\\ &amp;=L_{2n+2}-(2n+2)\;, \end{align*}$$</p> <p>and since $Z_0=1=L_0-2\cdot0\cdot1$, we have a proof by induction that $Z_n=L_{2n}-2n$ for each $n\in\Bbb N$.</p> Answer 24: <p>Alternate Solution :</p> <p>Suppose we already have the number of regions formed by first <span class="math-container">$n-1$</span> bent lines i.e. <span class="math-container">$Z_{n-1}$</span>.</p> <p>Now we add the <span class="math-container">$n^{th}$</span> bent line, the point of intersection(let it be <span class="math-container">$P$</span>) will lie in certain arbitrary region and now the 2 half lines can be made to intersect all currently existing line which are L = <span class="math-container">$2*(n-1)$</span> in number. Since each intersection will result in a new region being formed just as the case with <span class="math-container">$n$</span> straight lines. Since there are 2 new lines this will create <span class="math-container">$2*L = 2*2*(n-1)$</span> new region and one extra region formed by intersection at the bent (point <span class="math-container">$P$</span>). So recurrence can be written as :</p> <p><span class="math-container">$$Z_n = Z_{n-1} + 2*2*(n-1) + 1$$</span> <span class="math-container">$$Z_n = Z_{n-1} + 4n - 3$$</span> with base case <span class="math-container">$Z_0 = 1$</span> <span class="math-container">$$Z_n = 4\sum_{i=1}^{n}n - 3\sum_{i=1}^{n}1 + Z_0$$</span> <span class="math-container">$$Z_n = 4.\frac{n(n+1)}{2} -3n + 1$$</span> <span class="math-container">$$Z_n = 2n^2 - n+1$$</span></p> Answer 25: <p>Suppose that you already have $n$ zig lines in place, for a total of $2n$ rays. 
You draw a line $\ell_1$ that intersects all $2n$ rays. Then you draw a line $\ell_2$ that intersects that line <em>and</em> all $2n$ rays. Let $p$ be the point of intersection of $\ell_1$ and $\ell_2$, and for $i=1,2$ let $\{p_{i,k}:k=1,\ldots,2n\}$ be the set of points of intersection of $\ell_i$ with the $2n$ original rays. The underlined statement means that the points $p_{1,k}$ must all lie on the same side of $p$ on the line $\ell_1$, and the points $p_{2,k}$ must all lie on the same side of $p$ on the line $\ell_2$. When we erase half of $\ell_1$ and half of $\ell_2$ to convert them to a zig line with its bend at $p$, we’ll keep the half of $\ell_1$ that contains the points $p_{1,k}$ and the half of $\ell_2$ that contains the points $p_{2,k}$.</p> <p>I don’t think that it’s immediately clear that this ensures that $Z_n=L_{2n}-2n$ for each $n\in\Bbb N$, but we can show this by induction on $n$.</p> <p>The line $\ell_1$ cut each of the original $2n$ rays, so it split $2n+1$ regions in two and therefore added $2n+1$ regions. Then $\ell_2$ cut each of the $2n$ original rays <em>and</em> $\ell_1$, so it split $2n+2$ regions in two and so added another $2n+2$ regions. However, when we erase the halves of $\ell_1$ and $\ell_2$ that don’t contain the points of intersection with the original rays, we collapse three regions into one: the region between the two rays that we’re erasing merges with the two regions adjacent two it across those erased rays, and of course that means that they merge into each other as well. 
(Note that the original $2n$ rays never enter the region between the two rays that we’re erasing, and exactly what they do outside of it is irrelevant to this point.)</p> <p>Assuming as an induction hypothesis that $Z_n=L_{2n}-2n$, this shows that </p> <p>$$\begin{align*} Z_{n+1}&amp;=Z_n+(2n+1)+(2n+2)-2\\ &amp;=L_{2n}-2n+(2n+1)+(2n+2)-2\\ &amp;=L_{2n}+2n+1\\ &amp;=L_{2n+1}\\ &amp;=L_{2n+2}-(2n+2)\;, \end{align*}$$</p> <p>and since $Z_0=1=L_0-2\cdot0\cdot1$, we have a proof by induction that $Z_n=L_{2n}-2n$ for each $n\in\Bbb N$.</p> Answer 26: <p>Alternate Solution :</p> <p>Suppose we already have the number of regions formed by first <span class="math-container">$n-1$</span> bent lines i.e. <span class="math-container">$Z_{n-1}$</span>.</p> <p>Now we add the <span class="math-container">$n^{th}$</span> bent line, the point of intersection(let it be <span class="math-container">$P$</span>) will lie in certain arbitrary region and now the 2 half lines can be made to intersect all currently existing line which are L = <span class="math-container">$2*(n-1)$</span> in number. Since each intersection will result in a new region being formed just as the case with <span class="math-container">$n$</span> straight lines. Since there are 2 new lines this will create <span class="math-container">$2*L = 2*2*(n-1)$</span> new region and one extra region formed by intersection at the bent (point <span class="math-container">$P$</span>). So recurrence can be written as :</p> <p><span class="math-container">$$Z_n = Z_{n-1} + 2*2*(n-1) + 1$$</span> <span class="math-container">$$Z_n = Z_{n-1} + 4n - 3$$</span> with base case <span class="math-container">$Z_0 = 1$</span> <span class="math-container">$$Z_n = 4\sum_{i=1}^{n}n - 3\sum_{i=1}^{n}1 + Z_0$$</span> <span class="math-container">$$Z_n = 4.\frac{n(n+1)}{2} -3n + 1$$</span> <span class="math-container">$$Z_n = 2n^2 - n+1$$</span></p> Answer 27: <p>Suppose that you already have $n$ zig lines in place, for a total of $2n$ rays. 
You draw a line $\ell_1$ that intersects all $2n$ rays. Then you draw a line $\ell_2$ that intersects that line <em>and</em> all $2n$ rays. Let $p$ be the point of intersection of $\ell_1$ and $\ell_2$, and for $i=1,2$ let $\{p_{i,k}:k=1,\ldots,2n\}$ be the set of points of intersection of $\ell_i$ with the $2n$ original rays. The underlined statement means that the points $p_{1,k}$ must all lie on the same side of $p$ on the line $\ell_1$, and the points $p_{2,k}$ must all lie on the same side of $p$ on the line $\ell_2$. When we erase half of $\ell_1$ and half of $\ell_2$ to convert them to a zig line with its bend at $p$, we’ll keep the half of $\ell_1$ that contains the points $p_{1,k}$ and the half of $\ell_2$ that contains the points $p_{2,k}$.</p> <p>I don’t think that it’s immediately clear that this ensures that $Z_n=L_{2n}-2n$ for each $n\in\Bbb N$, but we can show this by induction on $n$.</p> <p>The line $\ell_1$ cut each of the original $2n$ rays, so it split $2n+1$ regions in two and therefore added $2n+1$ regions. Then $\ell_2$ cut each of the $2n$ original rays <em>and</em> $\ell_1$, so it split $2n+2$ regions in two and so added another $2n+2$ regions. However, when we erase the halves of $\ell_1$ and $\ell_2$ that don’t contain the points of intersection with the original rays, we collapse three regions into one: the region between the two rays that we’re erasing merges with the two regions adjacent two it across those erased rays, and of course that means that they merge into each other as well. 
(Note that the original $2n$ rays never enter the region between the two rays that we’re erasing, and exactly what they do outside of it is irrelevant to this point.)</p> <p>Assuming as an induction hypothesis that $Z_n=L_{2n}-2n$, this shows that </p> <p>$$\begin{align*} Z_{n+1}&amp;=Z_n+(2n+1)+(2n+2)-2\\ &amp;=L_{2n}-2n+(2n+1)+(2n+2)-2\\ &amp;=L_{2n}+2n+1\\ &amp;=L_{2n+1}\\ &amp;=L_{2n+2}-(2n+2)\;, \end{align*}$$</p> <p>and since $Z_0=1=L_0-2\cdot0\cdot1$, we have a proof by induction that $Z_n=L_{2n}-2n$ for each $n\in\Bbb N$.</p> Answer 28: <p>Alternate Solution :</p> <p>Suppose we already have the number of regions formed by first <span class="math-container">$n-1$</span> bent lines i.e. <span class="math-container">$Z_{n-1}$</span>.</p> <p>Now we add the <span class="math-container">$n^{th}$</span> bent line, the point of intersection(let it be <span class="math-container">$P$</span>) will lie in certain arbitrary region and now the 2 half lines can be made to intersect all currently existing line which are L = <span class="math-container">$2*(n-1)$</span> in number. Since each intersection will result in a new region being formed just as the case with <span class="math-container">$n$</span> straight lines. Since there are 2 new lines this will create <span class="math-container">$2*L = 2*2*(n-1)$</span> new region and one extra region formed by intersection at the bent (point <span class="math-container">$P$</span>). So recurrence can be written as :</p> <p><span class="math-container">$$Z_n = Z_{n-1} + 2*2*(n-1) + 1$$</span> <span class="math-container">$$Z_n = Z_{n-1} + 4n - 3$$</span> with base case <span class="math-container">$Z_0 = 1$</span> <span class="math-container">$$Z_n = 4\sum_{i=1}^{n}n - 3\sum_{i=1}^{n}1 + Z_0$$</span> <span class="math-container">$$Z_n = 4.\frac{n(n+1)}{2} -3n + 1$$</span> <span class="math-container">$$Z_n = 2n^2 - n+1$$</span></p> Answer 29: <p>Suppose that you already have $n$ zig lines in place, for a total of $2n$ rays. 
You draw a line $\ell_1$ that intersects all $2n$ rays. Then you draw a line $\ell_2$ that intersects that line <em>and</em> all $2n$ rays. Let $p$ be the point of intersection of $\ell_1$ and $\ell_2$, and for $i=1,2$ let $\{p_{i,k}:k=1,\ldots,2n\}$ be the set of points of intersection of $\ell_i$ with the $2n$ original rays. The underlined statement means that the points $p_{1,k}$ must all lie on the same side of $p$ on the line $\ell_1$, and the points $p_{2,k}$ must all lie on the same side of $p$ on the line $\ell_2$. When we erase half of $\ell_1$ and half of $\ell_2$ to convert them to a zig line with its bend at $p$, we’ll keep the half of $\ell_1$ that contains the points $p_{1,k}$ and the half of $\ell_2$ that contains the points $p_{2,k}$.</p> <p>I don’t think that it’s immediately clear that this ensures that $Z_n=L_{2n}-2n$ for each $n\in\Bbb N$, but we can show this by induction on $n$.</p> <p>The line $\ell_1$ cut each of the original $2n$ rays, so it split $2n+1$ regions in two and therefore added $2n+1$ regions. Then $\ell_2$ cut each of the $2n$ original rays <em>and</em> $\ell_1$, so it split $2n+2$ regions in two and so added another $2n+2$ regions. However, when we erase the halves of $\ell_1$ and $\ell_2$ that don’t contain the points of intersection with the original rays, we collapse three regions into one: the region between the two rays that we’re erasing merges with the two regions adjacent two it across those erased rays, and of course that means that they merge into each other as well. 
(Note that the original $2n$ rays never enter the region between the two rays that we’re erasing, and exactly what they do outside of it is irrelevant to this point.)</p> <p>Assuming as an induction hypothesis that $Z_n=L_{2n}-2n$, this shows that </p> <p>$$\begin{align*} Z_{n+1}&amp;=Z_n+(2n+1)+(2n+2)-2\\ &amp;=L_{2n}-2n+(2n+1)+(2n+2)-2\\ &amp;=L_{2n}+2n+1\\ &amp;=L_{2n+1}\\ &amp;=L_{2n+2}-(2n+2)\;, \end{align*}$$</p> <p>and since $Z_0=1=L_0-2\cdot0\cdot1$, we have a proof by induction that $Z_n=L_{2n}-2n$ for each $n\in\Bbb N$.</p> Answer 30: <p>Alternate Solution :</p> <p>Suppose we already have the number of regions formed by first <span class="math-container">$n-1$</span> bent lines i.e. <span class="math-container">$Z_{n-1}$</span>.</p> <p>Now we add the <span class="math-container">$n^{th}$</span> bent line, the point of intersection(let it be <span class="math-container">$P$</span>) will lie in certain arbitrary region and now the 2 half lines can be made to intersect all currently existing line which are L = <span class="math-container">$2*(n-1)$</span> in number. Since each intersection will result in a new region being formed just as the case with <span class="math-container">$n$</span> straight lines. Since there are 2 new lines this will create <span class="math-container">$2*L = 2*2*(n-1)$</span> new region and one extra region formed by intersection at the bent (point <span class="math-container">$P$</span>). So recurrence can be written as :</p> <p><span class="math-container">$$Z_n = Z_{n-1} + 2*2*(n-1) + 1$$</span> <span class="math-container">$$Z_n = Z_{n-1} + 4n - 3$$</span> with base case <span class="math-container">$Z_0 = 1$</span> <span class="math-container">$$Z_n = 4\sum_{i=1}^{n}n - 3\sum_{i=1}^{n}1 + Z_0$$</span> <span class="math-container">$$Z_n = 4.\frac{n(n+1)}{2} -3n + 1$$</span> <span class="math-container">$$Z_n = 2n^2 - n+1$$</span></p>
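The two derivations above can be cross-checked numerically. The sketch below assumes the standard formula $L_m = m(m+1)/2 + 1$ for the number of regions determined by $m$ straight lines in general position, and verifies that the recurrence $Z_n = Z_{n-1} + 4n - 3$, the closed form $2n^2 - n + 1$, and the relation $Z_n = L_{2n} - 2n$ all agree:

```python
# Cross-check the formulas for Z_n, the number of regions cut out
# by n "zig" (bent) lines in general position.

def L(m):
    """Regions determined by m straight lines in general position."""
    return m * (m + 1) // 2 + 1

def Z_rec(n):
    """Z_n from the recurrence Z_n = Z_{n-1} + 4n - 3, with Z_0 = 1."""
    z = 1
    for k in range(1, n + 1):
        z += 4 * k - 3
    return z

for n in range(50):
    # closed form, and the relation Z_n = L_{2n} - 2n from the induction proof
    assert Z_rec(n) == 2 * n * n - n + 1 == L(2 * n) - 2 * n

print(Z_rec(5))  # 46
```

For example, $Z_5 = 46$, matching $L_{10} - 10 = 56 - 10$.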
About a type of integral domains
2,084,756
7
https://math.stackexchange.com/questions/2084756/about-a-type-of-integral-domains
<p>I was reading the notes about <a href="http://alpha.math.uga.edu/%7Epete/factorization2010.pdf" rel="nofollow noreferrer">factorization</a> written by Pete L. Clark and I found that he uses the name &quot;<em>EL-domain</em>&quot; to refer to those integral domains in which every irreducible element is prime (page 16).</p> <blockquote> <p>So my question is: is the name &quot;EL-domain&quot; an accepted terminology? I made some google search and I haven't found anything. Perhaps this terminology has been used in some papers, but I don't know. Any help is appreciated.</p> </blockquote>
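As a concrete illustration of the property the question concerns, the domain $\mathbb{Z}[\sqrt{-5}]$ is the textbook example of a domain that is <em>not</em> of this type: $2$ is irreducible there but not prime, since $2$ divides $(1+\sqrt{-5})(1-\sqrt{-5}) = 6$ while dividing neither factor. A short sketch (the helper names are my own; the computation uses the multiplicative norm $N(a+b\sqrt{-5}) = a^2 + 5b^2$) checking this:

```python
# Z[sqrt(-5)]: a pair (a, b) stands for a + b*sqrt(-5).
# The norm N(a + b*sqrt(-5)) = a^2 + 5b^2 is multiplicative, so a
# nontrivial factorization 2 = x*y would force N(x) = 2; since
# a^2 + 5b^2 = 2 has no integer solution, 2 is irreducible (an atom).

def norm(x):
    a, b = x
    return a * a + 5 * b * b

def mul(x, y):
    a, b = x
    c, d = y
    return (a * c - 5 * b * d, a * d + b * c)

def divides(x, y):
    """Does x divide y in Z[sqrt(-5)]?  The quotient y*conj(x)/N(x)
    must have integer coordinates."""
    a, b = x
    c, d = y
    n = norm(x)
    return (c * a + 5 * d * b) % n == 0 and (d * a - c * b) % n == 0

two = (2, 0)
# no element of small coordinates has norm 2 (5b^2 <= 2 forces b = 0,
# and a^2 = 2 is impossible), so 2 has no nontrivial factorization:
assert all(norm((a, b)) != 2 for a in range(-2, 3) for b in range(-1, 2))
# yet 2 divides the product (1+sqrt(-5))(1-sqrt(-5)) = 6 ...
assert mul((1, 1), (1, -1)) == (6, 0) and divides(two, (6, 0))
# ... while dividing neither factor, so 2 is not prime:
assert not divides(two, (1, 1)) and not divides(two, (1, -1))
```

So the class of domains named in the question excludes rings like $\mathbb{Z}[\sqrt{-5}]$, while every UFD belongs to it.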
[ { "answer_id": 2084797, "score": 4, "text": "<p>It stands for &quot;Euclid's Lemma&quot; as Pete <a href=\"http://at.yorku.ca/b/ask-an-algebraist/2008/3296.htm#3296.0002.0001\" rel=\"nofollow noreferrer\">mentions here.</a>. As I mention there, they are more commonly called AP-domains (Atoms are Prime) in the literature on factorization theory.</p>\n<p><strong>Remark</strong> <span class=\"math-container\">$\\ $</span> See <a href=\"https://math.stackexchange.com/a/164589/242\">this answer</a> for common names of many closely related properties.</p>\n" }, { "answer_id": 2094218, "score": 4, "text": "<p>Indeed this is not standard terminology: I made it up when writing the notes. As Bill Dubuque says, it stands for \"Euclid's Lemma.\" I would not have used the term \"AP domains\" back then, because back then I used the term \"irreducible\" instead of \"atom,\" even though I knew that the literature on factorization theory went the other way around. (Since then I have <a href=\"http://alpha.math.uga.edu/~pete/CGP15_December_14.pdf\" rel=\"nofollow noreferrer\">changed my mind</a>.)</p>\n\n<p>To be honest, both terms \"EL-domain\" and \"AP-domain\" seem pretty bad to me. Let me say a little bit about why things worked out this way. The property that irreducibles are prime is an important one, but in the majority of mathematical usage it is drowned out by the more important property of being a <strong>UFD</strong> or a <strong>factorial domain</strong>. If you look at Theorem 15.8 of my notes, you'll see that a domain is a UFD iff all irreducibles are primes and every nonzero nonunit factors as a product of irreducibles (\"atomic domain\"; I used to write \"factorization domain,\" but I switched that one over eventually). 
The condition of a domain being atomic is much weaker than it being Noetherian, and the way commutative algebra has developed, Noetherian rings have gotten the lion's share of people's attention.</p>\n\n<p>A good way to think about \"EL-domain\" is as a meaningful way of keeping the uniqueness of factorizations but not their existence. In this it joins many other classes of rings which are roughly (there is not necessarily a canonical or unique way to do it) \"non-Noetherian analogues of more familiar properties that imply Noetherianness (or something like it).\" Defining and studying such non-Noetherian analogues is in fact an important theme in contemporary commutative algebra, and it comes up in my notes as well. But the terminology for this is all over the place:</p>\n\n<ul>\n<li>PID $\\mapsto$ Bezout domain</li>\n<li>Dedekind domain $\\mapsto$ Prufer domain</li>\n<li>Integrally closed Noetherian domain $\\mapsto$ Krull domain </li>\n</ul>\n\n<p>As Serge Lang once said, the terminology should be functorial with respect to the ideas. That is unfortunately not really the case here.</p>\n" }, { "answer_id": 2084797, "score": 4, "text": "<p>It stands for &quot;Euclid's Lemma&quot; as Pete <a href=\"http://at.yorku.ca/b/ask-an-algebraist/2008/3296.htm#3296.0002.0001\" rel=\"nofollow noreferrer\">mentions here.</a>. 
As I mention there, they are more commonly called AP-domains (Atoms are Prime) in the literature on factorization theory.</p>\n<p><strong>Remark</strong> <span class=\"math-container\">$\\ $</span> See <a href=\"https://math.stackexchange.com/a/164589/242\">this answer</a> for common names of many closely related properties.</p>\n" }, { "answer_id": 2094218, "score": 4, "text": "<p>Indeed this is not standard terminology: I made it up when writing the notes. As Bill Dubuque says, it stands for \"Euclid's Lemma.\" I would not have used the term \"AP domains\" back then, because back then I used the term \"irreducible\" instead of \"atom,\" even though I knew that the literature on factorization theory went the other way around. (Since then I have <a href=\"http://alpha.math.uga.edu/~pete/CGP15_December_14.pdf\" rel=\"nofollow noreferrer\">changed my mind</a>.)</p>\n\n<p>To be honest, both terms \"EL-domain\" and \"AP-domain\" seem pretty bad to me. Let me say a little bit about why things worked out this way. The property that irreducibles are prime is an important one, but in the majority of mathematical usage it is drowned out by the more important property of being a <strong>UFD</strong> or a <strong>factorial domain</strong>. If you look at Theorem 15.8 of my notes, you'll see that a domain is a UFD iff all irreducibles are primes and every nonzero nonunit factors as a product of irreducibles (\"atomic domain\"; I used to write \"factorization domain,\" but I switched that one over eventually). The condition of a domain being atomic is much weaker than it being Noetherian, and the way commutative algebra has developed, Noetherian rings have gotten the lion's share of people's attention.</p>\n\n<p>A good way to think about \"EL-domain\" is as a meaningful way of keeping the uniqueness of factorizations but not their existence. 
In this it joins many other classes of rings which are roughly (there is not necessarily a canonical or unique way to do it) \"non-Noetherian analogues of more familiar properties that imply Noetherianness (or something like it).\" Defining and studying such non-Noetherian analogues is in fact an important theme in contemporary commutative algebra, and it comes up in my notes as well. But the terminology for this is all over the place:</p>\n\n<ul>\n<li>PID $\\mapsto$ Bezout domain</li>\n<li>Dedekind domain $\\mapsto$ Prufer domain</li>\n<li>Integrally closed Noetherian domain $\\mapsto$ Krull domain </li>\n</ul>\n\n<p>As Serge Lang once said, the terminology should be functorial with respect to the ideas. That is unfortunately not really the case here.</p>\n" }, { "answer_id": 2084797, "score": 4, "text": "<p>It stands for &quot;Euclid's Lemma&quot; as Pete <a href=\"http://at.yorku.ca/b/ask-an-algebraist/2008/3296.htm#3296.0002.0001\" rel=\"nofollow noreferrer\">mentions here.</a>. As I mention there, they are more commonly called AP-domains (Atoms are Prime) in the literature on factorization theory.</p>\n<p><strong>Remark</strong> <span class=\"math-container\">$\\ $</span> See <a href=\"https://math.stackexchange.com/a/164589/242\">this answer</a> for common names of many closely related properties.</p>\n" }, { "answer_id": 2094218, "score": 4, "text": "<p>Indeed this is not standard terminology: I made it up when writing the notes. As Bill Dubuque says, it stands for \"Euclid's Lemma.\" I would not have used the term \"AP domains\" back then, because back then I used the term \"irreducible\" instead of \"atom,\" even though I knew that the literature on factorization theory went the other way around. (Since then I have <a href=\"http://alpha.math.uga.edu/~pete/CGP15_December_14.pdf\" rel=\"nofollow noreferrer\">changed my mind</a>.)</p>\n\n<p>To be honest, both terms \"EL-domain\" and \"AP-domain\" seem pretty bad to me. 
Let me say a little bit about why things worked out this way. The property that irreducibles are prime is an important one, but in the majority of mathematical usage it is drowned out by the more important property of being a <strong>UFD</strong> or a <strong>factorial domain</strong>. If you look at Theorem 15.8 of my notes, you'll see that a domain is a UFD iff all irreducibles are primes and every nonzero nonunit factors as a product of irreducibles (\"atomic domain\"; I used to write \"factorization domain,\" but I switched that one over eventually). The condition of a domain being atomic is much weaker than it being Noetherian, and the way commutative algebra has developed, Noetherian rings have gotten the lion's share of people's attention.</p>\n\n<p>A good way to think about \"EL-domain\" is as a meaningful way of keeping the uniqueness of factorizations but not their existence. In this it joins many other classes of rings which are roughly (there is not necessarily a canonical or unique way to do it) \"non-Noetherian analogues of more familiar properties that imply Noetherianness (or something like it).\" Defining and studying such non-Noetherian analogues is in fact an important theme in contemporary commutative algebra, and it comes up in my notes as well. But the terminology for this is all over the place:</p>\n\n<ul>\n<li>PID $\\mapsto$ Bezout domain</li>\n<li>Dedekind domain $\\mapsto$ Prufer domain</li>\n<li>Integrally closed Noetherian domain $\\mapsto$ Krull domain </li>\n</ul>\n\n<p>As Serge Lang once said, the terminology should be functorial with respect to the ideas. That is unfortunately not really the case here.</p>\n" }, { "answer_id": 2084797, "score": 4, "text": "<p>It stands for &quot;Euclid's Lemma&quot; as Pete <a href=\"http://at.yorku.ca/b/ask-an-algebraist/2008/3296.htm#3296.0002.0001\" rel=\"nofollow noreferrer\">mentions here.</a>. 
As I mention there, they are more commonly called AP-domains (Atoms are Prime) in the literature on factorization theory.</p>\n<p><strong>Remark</strong> <span class=\"math-container\">$\\ $</span> See <a href=\"https://math.stackexchange.com/a/164589/242\">this answer</a> for common names of many closely related properties.</p>\n" }, { "answer_id": 2094218, "score": 4, "text": "<p>Indeed this is not standard terminology: I made it up when writing the notes. As Bill Dubuque says, it stands for \"Euclid's Lemma.\" I would not have used the term \"AP domains\" back then, because back then I used the term \"irreducible\" instead of \"atom,\" even though I knew that the literature on factorization theory went the other way around. (Since then I have <a href=\"http://alpha.math.uga.edu/~pete/CGP15_December_14.pdf\" rel=\"nofollow noreferrer\">changed my mind</a>.)</p>\n\n<p>To be honest, both terms \"EL-domain\" and \"AP-domain\" seem pretty bad to me. Let me say a little bit about why things worked out this way. The property that irreducibles are prime is an important one, but in the majority of mathematical usage it is drowned out by the more important property of being a <strong>UFD</strong> or a <strong>factorial domain</strong>. If you look at Theorem 15.8 of my notes, you'll see that a domain is a UFD iff all irreducibles are primes and every nonzero nonunit factors as a product of irreducibles (\"atomic domain\"; I used to write \"factorization domain,\" but I switched that one over eventually). The condition of a domain being atomic is much weaker than it being Noetherian, and the way commutative algebra has developed, Noetherian rings have gotten the lion's share of people's attention.</p>\n\n<p>A good way to think about \"EL-domain\" is as a meaningful way of keeping the uniqueness of factorizations but not their existence. 
In this it joins many other classes of rings which are roughly (there is not necessarily a canonical or unique way to do it) \"non-Noetherian analogues of more familiar properties that imply Noetherianness (or something like it).\" Defining and studying such non-Noetherian analogues is in fact an important theme in contemporary commutative algebra, and it comes up in my notes as well. But the terminology for this is all over the place:</p>\n\n<ul>\n<li>PID $\\mapsto$ Bezout domain</li>\n<li>Dedekind domain $\\mapsto$ Prufer domain</li>\n<li>Integrally closed Noetherian domain $\\mapsto$ Krull domain </li>\n</ul>\n\n<p>As Serge Lang once said, the terminology should be functorial with respect to the ideas. That is unfortunately not really the case here.</p>\n" }, { "answer_id": 2084797, "score": 4, "text": "<p>It stands for &quot;Euclid's Lemma&quot; as Pete <a href=\"http://at.yorku.ca/b/ask-an-algebraist/2008/3296.htm#3296.0002.0001\" rel=\"nofollow noreferrer\">mentions here.</a>. As I mention there, they are more commonly called AP-domains (Atoms are Prime) in the literature on factorization theory.</p>\n<p><strong>Remark</strong> <span class=\"math-container\">$\\ $</span> See <a href=\"https://math.stackexchange.com/a/164589/242\">this answer</a> for common names of many closely related properties.</p>\n" }, { "answer_id": 2094218, "score": 4, "text": "<p>Indeed this is not standard terminology: I made it up when writing the notes. As Bill Dubuque says, it stands for \"Euclid's Lemma.\" I would not have used the term \"AP domains\" back then, because back then I used the term \"irreducible\" instead of \"atom,\" even though I knew that the literature on factorization theory went the other way around. (Since then I have <a href=\"http://alpha.math.uga.edu/~pete/CGP15_December_14.pdf\" rel=\"nofollow noreferrer\">changed my mind</a>.)</p>\n\n<p>To be honest, both terms \"EL-domain\" and \"AP-domain\" seem pretty bad to me. 
Let me say a little bit about why things worked out this way. The property that irreducibles are prime is an important one, but in the majority of mathematical usage it is drowned out by the more important property of being a <strong>UFD</strong> or a <strong>factorial domain</strong>. If you look at Theorem 15.8 of my notes, you'll see that a domain is a UFD iff all irreducibles are primes and every nonzero nonunit factors as a product of irreducibles (\"atomic domain\"; I used to write \"factorization domain,\" but I switched that one over eventually). The condition of a domain being atomic is much weaker than it being Noetherian, and the way commutative algebra has developed, Noetherian rings have gotten the lion's share of people's attention.</p>\n\n<p>A good way to think about \"EL-domain\" is as a meaningful way of keeping the uniqueness of factorizations but not their existence. In this it joins many other classes of rings which are roughly (there is not necessarily a canonical or unique way to do it) \"non-Noetherian analogues of more familiar properties that imply Noetherianness (or something like it).\" Defining and studying such non-Noetherian analogues is in fact an important theme in contemporary commutative algebra, and it comes up in my notes as well. But the terminology for this is all over the place:</p>\n\n<ul>\n<li>PID $\\mapsto$ Bezout domain</li>\n<li>Dedekind domain $\\mapsto$ Prufer domain</li>\n<li>Integrally closed Noetherian domain $\\mapsto$ Krull domain </li>\n</ul>\n\n<p>As Serge Lang once said, the terminology should be functorial with respect to the ideas. That is unfortunately not really the case here.</p>\n" }, { "answer_id": 2084797, "score": 4, "text": "<p>It stands for &quot;Euclid's Lemma&quot; as Pete <a href=\"http://at.yorku.ca/b/ask-an-algebraist/2008/3296.htm#3296.0002.0001\" rel=\"nofollow noreferrer\">mentions here.</a>. 
As I mention there, they are more commonly called AP-domains (Atoms are Prime) in the literature on factorization theory.</p>\n<p><strong>Remark</strong> <span class=\"math-container\">$\\ $</span> See <a href=\"https://math.stackexchange.com/a/164589/242\">this answer</a> for common names of many closely related properties.</p>\n" }, { "answer_id": 2094218, "score": 4, "text": "<p>Indeed this is not standard terminology: I made it up when writing the notes. As Bill Dubuque says, it stands for \"Euclid's Lemma.\" I would not have used the term \"AP domains\" back then, because back then I used the term \"irreducible\" instead of \"atom,\" even though I knew that the literature on factorization theory went the other way around. (Since then I have <a href=\"http://alpha.math.uga.edu/~pete/CGP15_December_14.pdf\" rel=\"nofollow noreferrer\">changed my mind</a>.)</p>\n\n<p>To be honest, both terms \"EL-domain\" and \"AP-domain\" seem pretty bad to me. Let me say a little bit about why things worked out this way. The property that irreducibles are prime is an important one, but in the majority of mathematical usage it is drowned out by the more important property of being a <strong>UFD</strong> or a <strong>factorial domain</strong>. If you look at Theorem 15.8 of my notes, you'll see that a domain is a UFD iff all irreducibles are primes and every nonzero nonunit factors as a product of irreducibles (\"atomic domain\"; I used to write \"factorization domain,\" but I switched that one over eventually). The condition of a domain being atomic is much weaker than it being Noetherian, and the way commutative algebra has developed, Noetherian rings have gotten the lion's share of people's attention.</p>\n\n<p>A good way to think about \"EL-domain\" is as a meaningful way of keeping the uniqueness of factorizations but not their existence. 
In this it joins many other classes of rings which are roughly (there is not necessarily a canonical or unique way to do it) \"non-Noetherian analogues of more familiar properties that imply Noetherianness (or something like it).\" Defining and studying such non-Noetherian analogues is in fact an important theme in contemporary commutative algebra, and it comes up in my notes as well. But the terminology for this is all over the place:</p>\n\n<ul>\n<li>PID $\\mapsto$ Bezout domain</li>\n<li>Dedekind domain $\\mapsto$ Prufer domain</li>\n<li>Integrally closed Noetherian domain $\\mapsto$ Krull domain </li>\n</ul>\n\n<p>As Serge Lang once said, the terminology should be functorial with respect to the ideas. That is unfortunately not really the case here.</p>\n" }, { "answer_id": 2084797, "score": 4, "text": "<p>It stands for &quot;Euclid's Lemma&quot; as Pete <a href=\"http://at.yorku.ca/b/ask-an-algebraist/2008/3296.htm#3296.0002.0001\" rel=\"nofollow noreferrer\">mentions here.</a>. As I mention there, they are more commonly called AP-domains (Atoms are Prime) in the literature on factorization theory.</p>\n<p><strong>Remark</strong> <span class=\"math-container\">$\\ $</span> See <a href=\"https://math.stackexchange.com/a/164589/242\">this answer</a> for common names of many closely related properties.</p>\n" }, { "answer_id": 2094218, "score": 4, "text": "<p>Indeed this is not standard terminology: I made it up when writing the notes. As Bill Dubuque says, it stands for \"Euclid's Lemma.\" I would not have used the term \"AP domains\" back then, because back then I used the term \"irreducible\" instead of \"atom,\" even though I knew that the literature on factorization theory went the other way around. (Since then I have <a href=\"http://alpha.math.uga.edu/~pete/CGP15_December_14.pdf\" rel=\"nofollow noreferrer\">changed my mind</a>.)</p>\n\n<p>To be honest, both terms \"EL-domain\" and \"AP-domain\" seem pretty bad to me. 
Let me say a little bit about why things worked out this way. The property that irreducibles are prime is an important one, but in the majority of mathematical usage it is drowned out by the more important property of being a <strong>UFD</strong> or a <strong>factorial domain</strong>. If you look at Theorem 15.8 of my notes, you'll see that a domain is a UFD iff all irreducibles are primes and every nonzero nonunit factors as a product of irreducibles (\"atomic domain\"; I used to write \"factorization domain,\" but I switched that one over eventually). The condition of a domain being atomic is much weaker than it being Noetherian, and the way commutative algebra has developed, Noetherian rings have gotten the lion's share of people's attention.</p>\n\n<p>A good way to think about \"EL-domain\" is as a meaningful way of keeping the uniqueness of factorizations but not their existence. In this it joins many other classes of rings which are roughly (there is not necessarily a canonical or unique way to do it) \"non-Noetherian analogues of more familiar properties that imply Noetherianness (or something like it).\" Defining and studying such non-Noetherian analogues is in fact an important theme in contemporary commutative algebra, and it comes up in my notes as well. But the terminology for this is all over the place:</p>\n\n<ul>\n<li>PID $\\mapsto$ Bezout domain</li>\n<li>Dedekind domain $\\mapsto$ Prufer domain</li>\n<li>Integrally closed Noetherian domain $\\mapsto$ Krull domain </li>\n</ul>\n\n<p>As Serge Lang once said, the terminology should be functorial with respect to the ideas. That is unfortunately not really the case here.</p>\n" }, { "answer_id": 2084797, "score": 4, "text": "<p>It stands for &quot;Euclid's Lemma&quot; as Pete <a href=\"http://at.yorku.ca/b/ask-an-algebraist/2008/3296.htm#3296.0002.0001\" rel=\"nofollow noreferrer\">mentions here.</a>. 
As I mention there, they are more commonly called AP-domains (Atoms are Prime) in the literature on factorization theory.</p>\n<p><strong>Remark</strong> <span class=\"math-container\">$\\ $</span> See <a href=\"https://math.stackexchange.com/a/164589/242\">this answer</a> for common names of many closely related properties.</p>\n" }, { "answer_id": 2094218, "score": 4, "text": "<p>Indeed this is not standard terminology: I made it up when writing the notes. As Bill Dubuque says, it stands for \"Euclid's Lemma.\" I would not have used the term \"AP domains\" back then, because back then I used the term \"irreducible\" instead of \"atom,\" even though I knew that the literature on factorization theory went the other way around. (Since then I have <a href=\"http://alpha.math.uga.edu/~pete/CGP15_December_14.pdf\" rel=\"nofollow noreferrer\">changed my mind</a>.)</p>\n\n<p>To be honest, both terms \"EL-domain\" and \"AP-domain\" seem pretty bad to me. Let me say a little bit about why things worked out this way. The property that irreducibles are prime is an important one, but in the majority of mathematical usage it is drowned out by the more important property of being a <strong>UFD</strong> or a <strong>factorial domain</strong>. If you look at Theorem 15.8 of my notes, you'll see that a domain is a UFD iff all irreducibles are primes and every nonzero nonunit factors as a product of irreducibles (\"atomic domain\"; I used to write \"factorization domain,\" but I switched that one over eventually). The condition of a domain being atomic is much weaker than it being Noetherian, and the way commutative algebra has developed, Noetherian rings have gotten the lion's share of people's attention.</p>\n\n<p>A good way to think about \"EL-domain\" is as a meaningful way of keeping the uniqueness of factorizations but not their existence. 
In this it joins many other classes of rings which are roughly (there is not necessarily a canonical or unique way to do it) \"non-Noetherian analogues of more familiar properties that imply Noetherianness (or something like it).\" Defining and studying such non-Noetherian analogues is in fact an important theme in contemporary commutative algebra, and it comes up in my notes as well. But the terminology for this is all over the place:</p>\n\n<ul>\n<li>PID $\\mapsto$ Bezout domain</li>\n<li>Dedekind domain $\\mapsto$ Prufer domain</li>\n<li>Integrally closed Noetherian domain $\\mapsto$ Krull domain </li>\n</ul>\n\n<p>As Serge Lang once said, the terminology should be functorial with respect to the ideas. That is unfortunately not really the case here.</p>\n" }, { "answer_id": 2084797, "score": 4, "text": "<p>It stands for &quot;Euclid's Lemma&quot; as Pete <a href=\"http://at.yorku.ca/b/ask-an-algebraist/2008/3296.htm#3296.0002.0001\" rel=\"nofollow noreferrer\">mentions here.</a>. As I mention there, they are more commonly called AP-domains (Atoms are Prime) in the literature on factorization theory.</p>\n<p><strong>Remark</strong> <span class=\"math-container\">$\\ $</span> See <a href=\"https://math.stackexchange.com/a/164589/242\">this answer</a> for common names of many closely related properties.</p>\n" }, { "answer_id": 2094218, "score": 4, "text": "<p>Indeed this is not standard terminology: I made it up when writing the notes. As Bill Dubuque says, it stands for \"Euclid's Lemma.\" I would not have used the term \"AP domains\" back then, because back then I used the term \"irreducible\" instead of \"atom,\" even though I knew that the literature on factorization theory went the other way around. (Since then I have <a href=\"http://alpha.math.uga.edu/~pete/CGP15_December_14.pdf\" rel=\"nofollow noreferrer\">changed my mind</a>.)</p>\n\n<p>To be honest, both terms \"EL-domain\" and \"AP-domain\" seem pretty bad to me. 
Let me say a little bit about why things worked out this way. The property that irreducibles are prime is an important one, but in the majority of mathematical usage it is drowned out by the more important property of being a <strong>UFD</strong> or a <strong>factorial domain</strong>. If you look at Theorem 15.8 of my notes, you'll see that a domain is a UFD iff all irreducibles are primes and every nonzero nonunit factors as a product of irreducibles (\"atomic domain\"; I used to write \"factorization domain,\" but I switched that one over eventually). The condition of a domain being atomic is much weaker than it being Noetherian, and the way commutative algebra has developed, Noetherian rings have gotten the lion's share of people's attention.</p>\n\n<p>A good way to think about \"EL-domain\" is as a meaningful way of keeping the uniqueness of factorizations but not their existence. In this it joins many other classes of rings which are roughly (there is not necessarily a canonical or unique way to do it) \"non-Noetherian analogues of more familiar properties that imply Noetherianness (or something like it).\" Defining and studying such non-Noetherian analogues is in fact an important theme in contemporary commutative algebra, and it comes up in my notes as well. But the terminology for this is all over the place:</p>\n\n<ul>\n<li>PID $\\mapsto$ Bezout domain</li>\n<li>Dedekind domain $\\mapsto$ Prufer domain</li>\n<li>Integrally closed Noetherian domain $\\mapsto$ Krull domain </li>\n</ul>\n\n<p>As Serge Lang once said, the terminology should be functorial with respect to the ideas. That is unfortunately not really the case here.</p>\n" }, { "answer_id": 2084797, "score": 4, "text": "<p>It stands for &quot;Euclid's Lemma&quot; as Pete <a href=\"http://at.yorku.ca/b/ask-an-algebraist/2008/3296.htm#3296.0002.0001\" rel=\"nofollow noreferrer\">mentions here.</a>. 
As I mention there, they are more commonly called AP-domains (Atoms are Prime) in the literature on factorization theory.</p>\n<p><strong>Remark</strong> <span class=\"math-container\">$\\ $</span> See <a href=\"https://math.stackexchange.com/a/164589/242\">this answer</a> for common names of many closely related properties.</p>\n" }, { "answer_id": 2094218, "score": 4, "text": "<p>Indeed this is not standard terminology: I made it up when writing the notes. As Bill Dubuque says, it stands for \"Euclid's Lemma.\" I would not have used the term \"AP domains\" back then, because back then I used the term \"irreducible\" instead of \"atom,\" even though I knew that the literature on factorization theory went the other way around. (Since then I have <a href=\"http://alpha.math.uga.edu/~pete/CGP15_December_14.pdf\" rel=\"nofollow noreferrer\">changed my mind</a>.)</p>\n\n<p>To be honest, both terms \"EL-domain\" and \"AP-domain\" seem pretty bad to me. Let me say a little bit about why things worked out this way. The property that irreducibles are prime is an important one, but in the majority of mathematical usage it is drowned out by the more important property of being a <strong>UFD</strong> or a <strong>factorial domain</strong>. If you look at Theorem 15.8 of my notes, you'll see that a domain is a UFD iff all irreducibles are primes and every nonzero nonunit factors as a product of irreducibles (\"atomic domain\"; I used to write \"factorization domain,\" but I switched that one over eventually). The condition of a domain being atomic is much weaker than it being Noetherian, and the way commutative algebra has developed, Noetherian rings have gotten the lion's share of people's attention.</p>\n\n<p>A good way to think about \"EL-domain\" is as a meaningful way of keeping the uniqueness of factorizations but not their existence. 
In this it joins many other classes of rings which are roughly (there is not necessarily a canonical or unique way to do it) \"non-Noetherian analogues of more familiar properties that imply Noetherianness (or something like it).\" Defining and studying such non-Noetherian analogues is in fact an important theme in contemporary commutative algebra, and it comes up in my notes as well. But the terminology for this is all over the place:</p>\n\n<ul>\n<li>PID $\mapsto$ Bezout domain</li>\n<li>Dedekind domain $\mapsto$ Prufer domain</li>\n<li>Integrally closed Noetherian domain $\mapsto$ Krull domain </li>\n</ul>\n\n<p>As Serge Lang once said, the terminology should be functorial with respect to the ideas. That is unfortunately not really the case here.</p>\n" } ]
Question: About a type of integral domains Answer 1: <p>It stands for &quot;Euclid's Lemma&quot; as Pete <a href="http://at.yorku.ca/b/ask-an-algebraist/2008/3296.htm#3296.0002.0001" rel="nofollow noreferrer">mentions here.</a>. As I mention there, they are more commonly called AP-domains (Atoms are Prime) in the literature on factorization theory.</p> <p><strong>Remark</strong> <span class="math-container">$\ $</span> See <a href="https://math.stackexchange.com/a/164589/242">this answer</a> for common names of many closely related properties.</p> Answer 2: <p>Indeed this is not standard terminology: I made it up when writing the notes. As Bill Dubuque says, it stands for "Euclid's Lemma." I would not have used the term "AP domains" back then, because back then I used the term "irreducible" instead of "atom," even though I knew that the literature on factorization theory went the other way around. (Since then I have <a href="http://alpha.math.uga.edu/~pete/CGP15_December_14.pdf" rel="nofollow noreferrer">changed my mind</a>.)</p> <p>To be honest, both terms "EL-domain" and "AP-domain" seem pretty bad to me. Let me say a little bit about why things worked out this way. The property that irreducibles are prime is an important one, but in the majority of mathematical usage it is drowned out by the more important property of being a <strong>UFD</strong> or a <strong>factorial domain</strong>. If you look at Theorem 15.8 of my notes, you'll see that a domain is a UFD iff all irreducibles are primes and every nonzero nonunit factors as a product of irreducibles ("atomic domain"; I used to write "factorization domain," but I switched that one over eventually). 
The condition of a domain being atomic is much weaker than it being Noetherian, and the way commutative algebra has developed, Noetherian rings have gotten the lion's share of people's attention.</p> <p>A good way to think about "EL-domain" is as a meaningful way of keeping the uniqueness of factorizations but not their existence. In this it joins many other classes of rings which are roughly (there is not necessarily a canonical or unique way to do it) "non-Noetherian analogues of more familiar properties that imply Noetherianness (or something like it)." Defining and studying such non-Noetherian analogues is in fact an important theme in contemporary commutative algebra, and it comes up in my notes as well. But the terminology for this is all over the place:</p> <ul> <li>PID $\mapsto$ Bezout domain</li> <li>Dedekind domain $\mapsto$ Prufer domain</li> <li>Integrally closed Noetherian domain $\mapsto$ Krull domain </li> </ul> <p>As Serge Lang once said, the terminology should be functorial with respect to the ideas. That is unfortunately not really the case here.</p> Answer 3: <p>It stands for &quot;Euclid's Lemma&quot; as Pete <a href="http://at.yorku.ca/b/ask-an-algebraist/2008/3296.htm#3296.0002.0001" rel="nofollow noreferrer">mentions here.</a>. As I mention there, they are more commonly called AP-domains (Atoms are Prime) in the literature on factorization theory.</p> <p><strong>Remark</strong> <span class="math-container">$\ $</span> See <a href="https://math.stackexchange.com/a/164589/242">this answer</a> for common names of many closely related properties.</p> Answer 4: <p>Indeed this is not standard terminology: I made it up when writing the notes. As Bill Dubuque says, it stands for "Euclid's Lemma." I would not have used the term "AP domains" back then, because back then I used the term "irreducible" instead of "atom," even though I knew that the literature on factorization theory went the other way around. 
(Since then I have <a href="http://alpha.math.uga.edu/~pete/CGP15_December_14.pdf" rel="nofollow noreferrer">changed my mind</a>.)</p> <p>To be honest, both terms "EL-domain" and "AP-domain" seem pretty bad to me. Let me say a little bit about why things worked out this way. The property that irreducibles are prime is an important one, but in the majority of mathematical usage it is drowned out by the more important property of being a <strong>UFD</strong> or a <strong>factorial domain</strong>. If you look at Theorem 15.8 of my notes, you'll see that a domain is a UFD iff all irreducibles are primes and every nonzero nonunit factors as a product of irreducibles ("atomic domain"; I used to write "factorization domain," but I switched that one over eventually). The condition of a domain being atomic is much weaker than it being Noetherian, and the way commutative algebra has developed, Noetherian rings have gotten the lion's share of people's attention.</p> <p>A good way to think about "EL-domain" is as a meaningful way of keeping the uniqueness of factorizations but not their existence. In this it joins many other classes of rings which are roughly (there is not necessarily a canonical or unique way to do it) "non-Noetherian analogues of more familiar properties that imply Noetherianness (or something like it)." Defining and studying such non-Noetherian analogues is in fact an important theme in contemporary commutative algebra, and it comes up in my notes as well. But the terminology for this is all over the place:</p> <ul> <li>PID $\mapsto$ Bezout domain</li> <li>Dedekind domain $\mapsto$ Prufer domain</li> <li>Integrally closed Noetherian domain $\mapsto$ Krull domain </li> </ul> <p>As Serge Lang once said, the terminology should be functorial with respect to the ideas. 
That is unfortunately not really the case here.</p> Answer 5: <p>It stands for &quot;Euclid's Lemma&quot; as Pete <a href="http://at.yorku.ca/b/ask-an-algebraist/2008/3296.htm#3296.0002.0001" rel="nofollow noreferrer">mentions here.</a>. As I mention there, they are more commonly called AP-domains (Atoms are Prime) in the literature on factorization theory.</p> <p><strong>Remark</strong> <span class="math-container">$\ $</span> See <a href="https://math.stackexchange.com/a/164589/242">this answer</a> for common names of many closely related properties.</p> Answer 6: <p>Indeed this is not standard terminology: I made it up when writing the notes. As Bill Dubuque says, it stands for "Euclid's Lemma." I would not have used the term "AP domains" back then, because back then I used the term "irreducible" instead of "atom," even though I knew that the literature on factorization theory went the other way around. (Since then I have <a href="http://alpha.math.uga.edu/~pete/CGP15_December_14.pdf" rel="nofollow noreferrer">changed my mind</a>.)</p> <p>To be honest, both terms "EL-domain" and "AP-domain" seem pretty bad to me. Let me say a little bit about why things worked out this way. The property that irreducibles are prime is an important one, but in the majority of mathematical usage it is drowned out by the more important property of being a <strong>UFD</strong> or a <strong>factorial domain</strong>. If you look at Theorem 15.8 of my notes, you'll see that a domain is a UFD iff all irreducibles are primes and every nonzero nonunit factors as a product of irreducibles ("atomic domain"; I used to write "factorization domain," but I switched that one over eventually). 
The condition of a domain being atomic is much weaker than it being Noetherian, and the way commutative algebra has developed, Noetherian rings have gotten the lion's share of people's attention.</p> <p>A good way to think about "EL-domain" is as a meaningful way of keeping the uniqueness of factorizations but not their existence. In this it joins many other classes of rings which are roughly (there is not necessarily a canonical or unique way to do it) "non-Noetherian analogues of more familiar properties that imply Noetherianness (or something like it)." Defining and studying such non-Noetherian analogues is in fact an important theme in contemporary commutative algebra, and it comes up in my notes as well. But the terminology for this is all over the place:</p> <ul> <li>PID $\mapsto$ Bezout domain</li> <li>Dedekind domain $\mapsto$ Prufer domain</li> <li>Integrally closed Noetherian domain $\mapsto$ Krull domain </li> </ul> <p>As Serge Lang once said, the terminology should be functorial with respect to the ideas. That is unfortunately not really the case here.</p> Answer 7: <p>It stands for &quot;Euclid's Lemma&quot; as Pete <a href="http://at.yorku.ca/b/ask-an-algebraist/2008/3296.htm#3296.0002.0001" rel="nofollow noreferrer">mentions here.</a>. As I mention there, they are more commonly called AP-domains (Atoms are Prime) in the literature on factorization theory.</p> <p><strong>Remark</strong> <span class="math-container">$\ $</span> See <a href="https://math.stackexchange.com/a/164589/242">this answer</a> for common names of many closely related properties.</p> Answer 8: <p>Indeed this is not standard terminology: I made it up when writing the notes. As Bill Dubuque says, it stands for "Euclid's Lemma." I would not have used the term "AP domains" back then, because back then I used the term "irreducible" instead of "atom," even though I knew that the literature on factorization theory went the other way around. 
(Since then I have <a href="http://alpha.math.uga.edu/~pete/CGP15_December_14.pdf" rel="nofollow noreferrer">changed my mind</a>.)</p> <p>To be honest, both terms "EL-domain" and "AP-domain" seem pretty bad to me. Let me say a little bit about why things worked out this way. The property that irreducibles are prime is an important one, but in the majority of mathematical usage it is drowned out by the more important property of being a <strong>UFD</strong> or a <strong>factorial domain</strong>. If you look at Theorem 15.8 of my notes, you'll see that a domain is a UFD iff all irreducibles are primes and every nonzero nonunit factors as a product of irreducibles ("atomic domain"; I used to write "factorization domain," but I switched that one over eventually). The condition of a domain being atomic is much weaker than it being Noetherian, and the way commutative algebra has developed, Noetherian rings have gotten the lion's share of people's attention.</p> <p>A good way to think about "EL-domain" is as a meaningful way of keeping the uniqueness of factorizations but not their existence. In this it joins many other classes of rings which are roughly (there is not necessarily a canonical or unique way to do it) "non-Noetherian analogues of more familiar properties that imply Noetherianness (or something like it)." Defining and studying such non-Noetherian analogues is in fact an important theme in contemporary commutative algebra, and it comes up in my notes as well. But the terminology for this is all over the place:</p> <ul> <li>PID $\mapsto$ Bezout domain</li> <li>Dedekind domain $\mapsto$ Prufer domain</li> <li>Integrally closed Noetherian domain $\mapsto$ Krull domain </li> </ul> <p>As Serge Lang once said, the terminology should be functorial with respect to the ideas. 
That is unfortunately not really the case here.</p> Answer 9: <p>It stands for &quot;Euclid's Lemma&quot; as Pete <a href="http://at.yorku.ca/b/ask-an-algebraist/2008/3296.htm#3296.0002.0001" rel="nofollow noreferrer">mentions here.</a>. As I mention there, they are more commonly called AP-domains (Atoms are Prime) in the literature on factorization theory.</p> <p><strong>Remark</strong> <span class="math-container">$\ $</span> See <a href="https://math.stackexchange.com/a/164589/242">this answer</a> for common names of many closely related properties.</p> Answer 10: <p>Indeed this is not standard terminology: I made it up when writing the notes. As Bill Dubuque says, it stands for "Euclid's Lemma." I would not have used the term "AP domains" back then, because back then I used the term "irreducible" instead of "atom," even though I knew that the literature on factorization theory went the other way around. (Since then I have <a href="http://alpha.math.uga.edu/~pete/CGP15_December_14.pdf" rel="nofollow noreferrer">changed my mind</a>.)</p> <p>To be honest, both terms "EL-domain" and "AP-domain" seem pretty bad to me. Let me say a little bit about why things worked out this way. The property that irreducibles are prime is an important one, but in the majority of mathematical usage it is drowned out by the more important property of being a <strong>UFD</strong> or a <strong>factorial domain</strong>. If you look at Theorem 15.8 of my notes, you'll see that a domain is a UFD iff all irreducibles are primes and every nonzero nonunit factors as a product of irreducibles ("atomic domain"; I used to write "factorization domain," but I switched that one over eventually). 
The condition of a domain being atomic is much weaker than it being Noetherian, and the way commutative algebra has developed, Noetherian rings have gotten the lion's share of people's attention.</p> <p>A good way to think about "EL-domain" is as a meaningful way of keeping the uniqueness of factorizations but not their existence. In this it joins many other classes of rings which are roughly (there is not necessarily a canonical or unique way to do it) "non-Noetherian analogues of more familiar properties that imply Noetherianness (or something like it)." Defining and studying such non-Noetherian analogues is in fact an important theme in contemporary commutative algebra, and it comes up in my notes as well. But the terminology for this is all over the place:</p> <ul> <li>PID $\mapsto$ Bezout domain</li> <li>Dedekind domain $\mapsto$ Prufer domain</li> <li>Integrally closed Noetherian domain $\mapsto$ Krull domain </li> </ul> <p>As Serge Lang once said, the terminology should be functorial with respect to the ideas. That is unfortunately not really the case here.</p> Answer 11: <p>It stands for &quot;Euclid's Lemma&quot; as Pete <a href="http://at.yorku.ca/b/ask-an-algebraist/2008/3296.htm#3296.0002.0001" rel="nofollow noreferrer">mentions here.</a>. As I mention there, they are more commonly called AP-domains (Atoms are Prime) in the literature on factorization theory.</p> <p><strong>Remark</strong> <span class="math-container">$\ $</span> See <a href="https://math.stackexchange.com/a/164589/242">this answer</a> for common names of many closely related properties.</p> Answer 12: <p>Indeed this is not standard terminology: I made it up when writing the notes. As Bill Dubuque says, it stands for "Euclid's Lemma." I would not have used the term "AP domains" back then, because back then I used the term "irreducible" instead of "atom," even though I knew that the literature on factorization theory went the other way around. 
(Since then I have <a href="http://alpha.math.uga.edu/~pete/CGP15_December_14.pdf" rel="nofollow noreferrer">changed my mind</a>.)</p> <p>To be honest, both terms "EL-domain" and "AP-domain" seem pretty bad to me. Let me say a little bit about why things worked out this way. The property that irreducibles are prime is an important one, but in the majority of mathematical usage it is drowned out by the more important property of being a <strong>UFD</strong> or a <strong>factorial domain</strong>. If you look at Theorem 15.8 of my notes, you'll see that a domain is a UFD iff all irreducibles are primes and every nonzero nonunit factors as a product of irreducibles ("atomic domain"; I used to write "factorization domain," but I switched that one over eventually). The condition of a domain being atomic is much weaker than it being Noetherian, and the way commutative algebra has developed, Noetherian rings have gotten the lion's share of people's attention.</p> <p>A good way to think about "EL-domain" is as a meaningful way of keeping the uniqueness of factorizations but not their existence. In this it joins many other classes of rings which are roughly (there is not necessarily a canonical or unique way to do it) "non-Noetherian analogues of more familiar properties that imply Noetherianness (or something like it)." Defining and studying such non-Noetherian analogues is in fact an important theme in contemporary commutative algebra, and it comes up in my notes as well. But the terminology for this is all over the place:</p> <ul> <li>PID $\mapsto$ Bezout domain</li> <li>Dedekind domain $\mapsto$ Prufer domain</li> <li>Integrally closed Noetherian domain $\mapsto$ Krull domain </li> </ul> <p>As Serge Lang once said, the terminology should be functorial with respect to the ideas. 
That is unfortunately not really the case here.</p> Answer 13: <p>It stands for &quot;Euclid's Lemma&quot; as Pete <a href="http://at.yorku.ca/b/ask-an-algebraist/2008/3296.htm#3296.0002.0001" rel="nofollow noreferrer">mentions here.</a>. As I mention there, they are more commonly called AP-domains (Atoms are Prime) in the literature on factorization theory.</p> <p><strong>Remark</strong> <span class="math-container">$\ $</span> See <a href="https://math.stackexchange.com/a/164589/242">this answer</a> for common names of many closely related properties.</p> Answer 14: <p>Indeed this is not standard terminology: I made it up when writing the notes. As Bill Dubuque says, it stands for "Euclid's Lemma." I would not have used the term "AP domains" back then, because back then I used the term "irreducible" instead of "atom," even though I knew that the literature on factorization theory went the other way around. (Since then I have <a href="http://alpha.math.uga.edu/~pete/CGP15_December_14.pdf" rel="nofollow noreferrer">changed my mind</a>.)</p> <p>To be honest, both terms "EL-domain" and "AP-domain" seem pretty bad to me. Let me say a little bit about why things worked out this way. The property that irreducibles are prime is an important one, but in the majority of mathematical usage it is drowned out by the more important property of being a <strong>UFD</strong> or a <strong>factorial domain</strong>. If you look at Theorem 15.8 of my notes, you'll see that a domain is a UFD iff all irreducibles are primes and every nonzero nonunit factors as a product of irreducibles ("atomic domain"; I used to write "factorization domain," but I switched that one over eventually). 
The condition of a domain being atomic is much weaker than it being Noetherian, and the way commutative algebra has developed, Noetherian rings have gotten the lion's share of people's attention.</p> <p>A good way to think about "EL-domain" is as a meaningful way of keeping the uniqueness of factorizations but not their existence. In this it joins many other classes of rings which are roughly (there is not necessarily a canonical or unique way to do it) "non-Noetherian analogues of more familiar properties that imply Noetherianness (or something like it)." Defining and studying such non-Noetherian analogues is in fact an important theme in contemporary commutative algebra, and it comes up in my notes as well. But the terminology for this is all over the place:</p> <ul> <li>PID $\mapsto$ Bezout domain</li> <li>Dedekind domain $\mapsto$ Prufer domain</li> <li>Integrally closed Noetherian domain $\mapsto$ Krull domain </li> </ul> <p>As Serge Lang once said, the terminology should be functorial with respect to the ideas. That is unfortunately not really the case here.</p> Answer 15: <p>It stands for &quot;Euclid's Lemma&quot; as Pete <a href="http://at.yorku.ca/b/ask-an-algebraist/2008/3296.htm#3296.0002.0001" rel="nofollow noreferrer">mentions here.</a>. As I mention there, they are more commonly called AP-domains (Atoms are Prime) in the literature on factorization theory.</p> <p><strong>Remark</strong> <span class="math-container">$\ $</span> See <a href="https://math.stackexchange.com/a/164589/242">this answer</a> for common names of many closely related properties.</p> Answer 16: <p>Indeed this is not standard terminology: I made it up when writing the notes. As Bill Dubuque says, it stands for "Euclid's Lemma." I would not have used the term "AP domains" back then, because back then I used the term "irreducible" instead of "atom," even though I knew that the literature on factorization theory went the other way around. 
(Since then I have <a href="http://alpha.math.uga.edu/~pete/CGP15_December_14.pdf" rel="nofollow noreferrer">changed my mind</a>.)</p> <p>To be honest, both terms "EL-domain" and "AP-domain" seem pretty bad to me. Let me say a little bit about why things worked out this way. The property that irreducibles are prime is an important one, but in the majority of mathematical usage it is drowned out by the more important property of being a <strong>UFD</strong> or a <strong>factorial domain</strong>. If you look at Theorem 15.8 of my notes, you'll see that a domain is a UFD iff all irreducibles are primes and every nonzero nonunit factors as a product of irreducibles ("atomic domain"; I used to write "factorization domain," but I switched that one over eventually). The condition of a domain being atomic is much weaker than it being Noetherian, and the way commutative algebra has developed, Noetherian rings have gotten the lion's share of people's attention.</p> <p>A good way to think about "EL-domain" is as a meaningful way of keeping the uniqueness of factorizations but not their existence. In this it joins many other classes of rings which are roughly (there is not necessarily a canonical or unique way to do it) "non-Noetherian analogues of more familiar properties that imply Noetherianness (or something like it)." Defining and studying such non-Noetherian analogues is in fact an important theme in contemporary commutative algebra, and it comes up in my notes as well. But the terminology for this is all over the place:</p> <ul> <li>PID $\mapsto$ Bezout domain</li> <li>Dedekind domain $\mapsto$ Prufer domain</li> <li>Integrally closed Noetherian domain $\mapsto$ Krull domain </li> </ul> <p>As Serge Lang once said, the terminology should be functorial with respect to the ideas. 
That is unfortunately not really the case here.</p> Answer 17: <p>It stands for &quot;Euclid's Lemma&quot; as Pete <a href="http://at.yorku.ca/b/ask-an-algebraist/2008/3296.htm#3296.0002.0001" rel="nofollow noreferrer">mentions here.</a>. As I mention there, they are more commonly called AP-domains (Atoms are Prime) in the literature on factorization theory.</p> <p><strong>Remark</strong> <span class="math-container">$\ $</span> See <a href="https://math.stackexchange.com/a/164589/242">this answer</a> for common names of many closely related properties.</p> Answer 18: <p>Indeed this is not standard terminology: I made it up when writing the notes. As Bill Dubuque says, it stands for "Euclid's Lemma." I would not have used the term "AP domains" back then, because back then I used the term "irreducible" instead of "atom," even though I knew that the literature on factorization theory went the other way around. (Since then I have <a href="http://alpha.math.uga.edu/~pete/CGP15_December_14.pdf" rel="nofollow noreferrer">changed my mind</a>.)</p> <p>To be honest, both terms "EL-domain" and "AP-domain" seem pretty bad to me. Let me say a little bit about why things worked out this way. The property that irreducibles are prime is an important one, but in the majority of mathematical usage it is drowned out by the more important property of being a <strong>UFD</strong> or a <strong>factorial domain</strong>. If you look at Theorem 15.8 of my notes, you'll see that a domain is a UFD iff all irreducibles are primes and every nonzero nonunit factors as a product of irreducibles ("atomic domain"; I used to write "factorization domain," but I switched that one over eventually). 
The condition of a domain being atomic is much weaker than it being Noetherian, and the way commutative algebra has developed, Noetherian rings have gotten the lion's share of people's attention.</p> <p>A good way to think about "EL-domain" is as a meaningful way of keeping the uniqueness of factorizations but not their existence. In this it joins many other classes of rings which are roughly (there is not necessarily a canonical or unique way to do it) "non-Noetherian analogues of more familiar properties that imply Noetherianness (or something like it)." Defining and studying such non-Noetherian analogues is in fact an important theme in contemporary commutative algebra, and it comes up in my notes as well. But the terminology for this is all over the place:</p> <ul> <li>PID $\mapsto$ Bezout domain</li> <li>Dedekind domain $\mapsto$ Prufer domain</li> <li>Integrally closed Noetherian domain $\mapsto$ Krull domain </li> </ul> <p>As Serge Lang once said, the terminology should be functorial with respect to the ideas. That is unfortunately not really the case here.</p> Answer 19: <p>It stands for &quot;Euclid's Lemma&quot; as Pete <a href="http://at.yorku.ca/b/ask-an-algebraist/2008/3296.htm#3296.0002.0001" rel="nofollow noreferrer">mentions here.</a>. As I mention there, they are more commonly called AP-domains (Atoms are Prime) in the literature on factorization theory.</p> <p><strong>Remark</strong> <span class="math-container">$\ $</span> See <a href="https://math.stackexchange.com/a/164589/242">this answer</a> for common names of many closely related properties.</p> Answer 20: <p>Indeed this is not standard terminology: I made it up when writing the notes. As Bill Dubuque says, it stands for "Euclid's Lemma." I would not have used the term "AP domains" back then, because back then I used the term "irreducible" instead of "atom," even though I knew that the literature on factorization theory went the other way around. 
(Since then I have <a href="http://alpha.math.uga.edu/~pete/CGP15_December_14.pdf" rel="nofollow noreferrer">changed my mind</a>.)</p> <p>To be honest, both terms "EL-domain" and "AP-domain" seem pretty bad to me. Let me say a little bit about why things worked out this way. The property that irreducibles are prime is an important one, but in the majority of mathematical usage it is drowned out by the more important property of being a <strong>UFD</strong> or a <strong>factorial domain</strong>. If you look at Theorem 15.8 of my notes, you'll see that a domain is a UFD iff all irreducibles are primes and every nonzero nonunit factors as a product of irreducibles ("atomic domain"; I used to write "factorization domain," but I switched that one over eventually). The condition of a domain being atomic is much weaker than it being Noetherian, and the way commutative algebra has developed, Noetherian rings have gotten the lion's share of people's attention.</p> <p>A good way to think about "EL-domain" is as a meaningful way of keeping the uniqueness of factorizations but not their existence. In this it joins many other classes of rings which are roughly (there is not necessarily a canonical or unique way to do it) "non-Noetherian analogues of more familiar properties that imply Noetherianness (or something like it)." Defining and studying such non-Noetherian analogues is in fact an important theme in contemporary commutative algebra, and it comes up in my notes as well. But the terminology for this is all over the place:</p> <ul> <li>PID $\mapsto$ Bezout domain</li> <li>Dedekind domain $\mapsto$ Prufer domain</li> <li>Integrally closed Noetherian domain $\mapsto$ Krull domain </li> </ul> <p>As Serge Lang once said, the terminology should be functorial with respect to the ideas. 
That is unfortunately not really the case here.</p>
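The distinction drawn in the answers above (irreducibles need not be prime) can be checked concretely. The following sketch — not part of the original notes or answers — works in $\mathbb{Z}[\sqrt{-5}]$, representing $a+b\sqrt{-5}$ as an integer pair and using the multiplicative norm $N(a,b)=a^2+5b^2$; there, $2$ is irreducible but not prime, so the ring is atomic without being an EL/AP-domain:

```python
# Illustration in Z[sqrt(-5)]: the element 2 is irreducible but not prime.
# Elements a + b*sqrt(-5) are stored as pairs (a, b).

def norm(a, b):
    # Multiplicative norm N(a + b*sqrt(-5)) = a^2 + 5*b^2.
    return a * a + 5 * b * b

def divides(d, x):
    # Does d divide x in Z[sqrt(-5)]?  Compute x * conj(d) and check that
    # both coordinates are divisible by norm(d).
    (da, db), (xa, xb) = d, x
    n = norm(da, db)
    qa = xa * da + 5 * xb * db   # real part of x * conj(d)
    qb = xb * da - xa * db       # sqrt(-5) coefficient of x * conj(d)
    return n != 0 and qa % n == 0 and qb % n == 0

# 2 is irreducible: a proper factor would need norm 2, but a^2 + 5b^2 = 2
# has no integer solutions (the small search range suffices since a^2 <= 2).
assert not any(norm(a, b) == 2 for a in range(-2, 3) for b in range(-2, 3))

# Yet 2 divides 6 = (1 + sqrt(-5)) * (1 - sqrt(-5)) without dividing a factor,
# so 2 is not prime:
assert divides((2, 0), (6, 0))
assert not divides((2, 0), (1, 1)) and not divides((2, 0), (1, -1))
```

So $\mathbb{Z}[\sqrt{-5}]$ keeps the existence of factorizations but loses their uniqueness — the mirror image of the "EL-domain" situation described above.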
How do I find the slope of an angle bisector, given the equations of the two lines that form the angle?
2,084,774
7
https://math.stackexchange.com/questions/2084774/how-do-i-find-the-slope-of-an-angle-bisector-given-the-equations-of-the-two-lin
<p>The equation for the first line is $y = \frac{1}{2}x - 2$, and the equation for the second line is $y = 2x + 1$. They intersect at $(-2, -3)$.</p> <p>Someone told me I can just average the slopes of the two lines to find the slope of the bisector, but I'm not sure if it's right.</p>
[ { "answer_id": 2084851, "score": 4, "text": "<p>If you have two lines </p>\n\n<p>$$\nL_1 : a_1x+b_1y+c_1 = 0\n$$</p>\n\n<p>and </p>\n\n<p>$$\nL_2 : a_2x + b_2y + c_2 = 0,\n$$</p>\n\n<p>then the equation of their bisectors is given by</p>\n\n<p>$$\n\\frac{a_1x+b_1y+c_1}{\\sqrt{a_1^2+b_1^2}} = \\pm \\frac{a_2x+b_2y+c_2}{\\sqrt{a_2^2+b_2^2}}.\n$$</p>\n\n<p>where the $+$ and $-$ make the difference between the two bisectors. You just simplify each equation and read off the slope.</p>\n" }, { "answer_id": 2084779, "score": 3, "text": "<p>The suggestion you got is completely wrong. In this particular case, you can just notice that lines whose slopes are reciprocals, are symmetric with respect to lines with slope $+1$ and $-1$.</p>\n" }, { "answer_id": 2085104, "score": 3, "text": "<p>To be on the right track, let $Ax + By + C =0, ax + by + c =0 $ be the equations of the given straight lines. The angular bisector is a straight-line locus such that the lengths of perpendiculars dropped from it onto the given lines are equal. The polar form of a straight line is useful here.</p>\n\n<p>$$\n\\frac{Ax+By+C}{\\sqrt{A^2+B^2}} = \\pm \\frac{ax+by+c}{\\sqrt{a^2+b^2}}.\n$$</p>\n\n<p>The $+$ sign is taken when the arms contain the origin for the internal bisector, the $-$ sign for the external bisector perpendicular to it. You can now find its slope.</p>\n" }, { "answer_id": 2084877, "score": 2, "text": "<p>Take the unitary vectors parallel to the lines: their sum will be parallel to one of the bisecting lines, their difference will be parallel to the other one.<br>\nTo determine which is the one bisecting the acute/obtuse angle, just take the dot product of the two vectors:</p>\n\n<ul>\n<li>if that is positive, then their sum will be parallel to the acute\nbisectrix and their difference to the obtuse one</li>\n<li>vice versa if the dot product is negative.</li>\n</ul>\n\n<p><em>note:</em> this method is also valid in 3D. 
\nIn 2D you can apply the same method to the normal (instead of parallel) unitary vectors.</p>\n\n<p><strong>in your example</strong> (using parallel vectors) </p>\n\n<p><a href=\"https://i.stack.imgur.com/FO99t.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/FO99t.png\" alt=\"enter image description here\"></a></p>\n\n<p>Rewrite the line equations in the <em>proportional</em> form\n$$\n\\left\\{ \\begin{gathered}\n y = \\frac{1}\n{2}x - 2\\quad \\Rightarrow \\quad \\frac{{x - 0}}\n{2} = \\frac{{y - \\left( { - 2} \\right)}}\n{1} \\hfill \\\\\n y = 2x + 1\\quad \\Rightarrow \\quad \\frac{{x - 0}}\n{{1/2}} = \\frac{{y - 1}}\n{1}\\quad \\Rightarrow \\quad \\frac{{x - 0}}\n{1} = \\frac{{y - 1}}\n{2} \\hfill \\\\ \n\\end{gathered} \\right.\n$$</p>\n\n<p>then the parallel unitary vectors are\n$$\n\\mathbf{u} = \\frac{1}\n{{\\sqrt 5 }}\\left( {2,1} \\right)\\quad \\mathbf{v} = \\frac{1}\n{{\\sqrt 5 }}\\left( {1,2} \\right)\\quad 0 &lt; \\mathbf{u} \\cdot \\mathbf{v} = \\frac{4}\n{5}\n$$\nand the angle between them is acute. Their sum and difference are\n$$\n\\mathbf{u} + \\mathbf{v} = \\frac{1}\n{{\\sqrt 5 }}\\left( {3,3} \\right)\\quad \\mathbf{u} - \\mathbf{v} = \\frac{1}\n{{\\sqrt 5 }}\\left( {1, - 1} \\right)\n$$\nand the equations of the bisecting lines will be:\n$$\n\\left\\{ \\begin{gathered}\n \\frac{{x - x_{\\,c} }}\n{3} = \\frac{{y - y_{\\,c} }}\n{3}\\quad \\Rightarrow \\quad x + 2 = y + 3\\quad \\Rightarrow \\quad y = x - 1\\;\\;acute \\hfill \\\\\n \\frac{{x - x_{\\,c} }}\n{1} = \\frac{{y - y_{\\,c} }}\n{{ - 1}}\\quad \\Rightarrow \\quad x + 2 = - y - 3\\quad \\quad \\Rightarrow \\quad y = - x - 5\\;\\;obtuse \\hfill \\\\ \n\\end{gathered} \\right.\n$$</p>\n" }, { "answer_id": 2085079, "score": 2, "text": "<p>Line 1 is $y = \\frac 12 x - 2$. Line 2 is $y = 2x +1$. They intersect at $(-2,-3)$.</p>\n\n<p>If on line 1 you go over on $x$ $2$ units you will go up on $y$ by $1$ unit. This will put you at $(0, -2)$. 
The distance traveled is $\\sqrt {1^2 + 2^2} = \\sqrt{5}$.</p>\n\n<p>If on line 2 you go over on $x$ $1$ unit you will go up on $y$ by $2$ units. This will put you at $(-1, -1)$. The distance traveled is $\\sqrt {2^2 + 1^2} = \\sqrt{5}$.</p>\n\n<p>The angle bisector will go through the midpoint of $(0,-2)$ and $(-1,-1)$$*$. So the angle bisector will go through $(-\\frac 12, -1\\frac 12)$. So the angle bisector goes through the points $(-2,-3)$ and $(-\\frac 12, -1\\frac 12)$ so the slope is $\\frac{-1\\frac 12 - (-3)}{-1\\frac 12 -(-2)} = \\frac {1\\frac 12}{1\\frac 12} = 1$. </p>\n\n<p>$*$ because... $A = (-2,-3); B= (0,-2); C=(-1,-1);$ and $AB$ = $AC= \\sqrt{5}$ so $\\triangle BAC$ is isosceles, and the angle bisector of $\\angle BAC$ passes through the midpoint of $BC$.</p>\n\n<p>==== details in general ====</p>\n\n<p>Your friend is not quite right. You can average the <em>angles</em> but slopes are not angles and there is not a linear conversion between them. (There is a <em>trigonometric</em> conversion between them. 
You will have moved $\\delta $ in the $x$ direction and $m*\\delta $ in the $y$ direction so your total distance is $\\sqrt{\\delta^2 + m^2\\delta^2} = 1$.</p>\n\n<p>So $\\delta\\sqrt{1 + m^2} = 1$ so $\\delta = \\frac 1{\\sqrt{1 + m^2}}$.</p>\n\n<p>So the point on line $1$ that is one unit away from $(u, v)$ is the point $(x_1, y_1) = (u + \\frac 1{\\sqrt{1 + m^2}}, v + \\frac m{\\sqrt{1 + m^2}})$.</p>\n\n<p>Likewise the point on line $2$ that is one unit away from $(u,v)$ will be the point $(x_2, y_2) = (u + \\frac 1{\\sqrt{1 + n^2}}, v + \\frac n{\\sqrt{1 + n^2}})$</p>\n\n<p>The angle bisector will contain the midpoint of $(x_1, y_1)$ and $(x_2, y_2)$.</p>\n\n<p>The midpoint is $(x_m, y_m) = (u + \\frac{[\\frac 1{\\sqrt{1 + m^2}}]+[\\frac 1{\\sqrt{1 + n^2}}]}2, v + \\frac{[\\frac m{\\sqrt{1 + m^2}}]+[\\frac n{\\sqrt{1 + n^2}}]}2)$.</p>\n\n<p>So.... the slope of the angle bisector will be:</p>\n\n<p>$\\frac {y_m - v}{x_m - u}= \\frac{\\frac{[\\frac m{\\sqrt{1 + m^2}}]+[\\frac n{\\sqrt{1 + n^2}}]}2}{\\frac{[\\frac 1{\\sqrt{1 + m^2}}]+[\\frac 1{\\sqrt{1 + n^2}}]}2}=$</p>\n\n<p>$\\frac{[\\frac m{\\sqrt{1 + m^2}}]+[\\frac n{\\sqrt{1 + n^2}}]}{[\\frac 1{\\sqrt{1 + m^2}}]+[\\frac 1{\\sqrt{1 + n^2}}]}=\\frac{m\\sqrt{1 + n^2}+n\\sqrt{1 + m^2}}{\\sqrt{1 + m^2}+\\sqrt{1 + n^2}}$</p>\n\n<p>$[\\frac{m\\sqrt{1 + n^2}+n\\sqrt{1 + m^2}}{\\sqrt{1 + m^2}+\\sqrt{1 + n^2}}=\\frac{\\frac 12\\sqrt{1 + 2^2}+2\\sqrt{1 + \\frac 12^2}}{\\sqrt{1 + \\frac 12^2}+\\sqrt{1 + 2^2}}=\\frac{\\sqrt{5}/2+ \\sqrt{5}}{\\sqrt{5}+ \\sqrt{5}/2}=1]$</p>\n\n<p>Which... I must confess is a formula I never learned and would never expect anyone to memorize. I'd expect if one needs to find the slope of an angle bisector one would calculate it for the specific lines.</p>\n\n<p>You might be able to simplify that equation further. </p>\n\n<p>=====</p>\n\n<p>That equation is sort of an \"average\"; just not a standard arithmetic average.</p>\n\n<p>You can calculate the average of the <em>angles</em> of the lines. 
But slopes are not angles and do not have a linear conversion.</p>\n\n<p>If you know trigonemetry:</p>\n\n<p>Slope = $\\frac{rise}{run} = \\frac{\\sin \\theta}{\\cos \\theta} = \\tan \\theta$ so the angle of a line is $\\theta = \\arctan m$.</p>\n\n<p>So the angle of the angle bisector is $\\psi = \\frac {\\theta + \\omega}2 = \\frac {\\arctan(m) + \\arctan(n)}2$</p>\n\n<p>And the slope of the angle bisector is $k = \\tan(\\psi) = \\tan(\\frac {\\arctan(m) + \\arctan(n)}2)=\\frac{m\\sqrt{1 + n^2}+n\\sqrt{1 + m^2}}{\\sqrt{1 + m^2}+\\sqrt{1 + n^2}}$</p>\n" }, { "answer_id": 2084851, "score": 4, "text": "<p>If you have two lines </p>\n\n<p>$$\nL_1 : a_1x+b_1y+c_2 = 0\n$$</p>\n\n<p>and </p>\n\n<p>$$\nL_2 : a_2x + b_2y + c_2 = 0,\n$$</p>\n\n<p>then the equation of their bisectors is given by</p>\n\n<p>$$\n\\frac{a_1x+b_1y+c_1}{\\sqrt{a_1^2+b_1^2}} = \\pm \\frac{a_2x+b_2y+c_2}{\\sqrt{a_2^2+b_2^2}}.\n$$</p>\n\n<p>where the $+$ and $-$ make the difference between the two bisectors. You just simplify each equation and read off the slope.</p>\n" }, { "answer_id": 2084779, "score": 3, "text": "<p>The suggestion you got is completely wrong. In this particular case, you can just notice that lines whose slopes are reciprocals, are symmetric with respect to lines with slope $+1$ and $-1$.</p>\n" }, { "answer_id": 2085104, "score": 3, "text": "<p>To be on right track, let $Ax + By + C =0, ax + by + c =0 $ be the equations of the given straight lines.The angular bisector is a straight line locus so that length of perpendiculars dropped from it onto the given lines are equal. The polar form of straight line is useful here.</p>\n\n<p>$$\n\\frac{Ax+By+C}{\\sqrt{A^2+B^2}} = \\pm \\frac{ax+by+c}{\\sqrt{a^2+b^2}}.\n$$</p>\n\n<p>The $+$ sign is taken when the arms contain the origin for internal bisector, $-$ sign for the external bisector perpendicular to it. 
You can now find its slope.</p>\n" }, { "answer_id": 2084877, "score": 2, "text": "<p>Take the unitary vectors parallel to the lines: their sum will be parallel to one of the bisecting line, their difference will be parallel to the other one.<br>\nTo determine which is the one bisecting the acute /obtuse angle, just take the dot product of the two vectors:</p>\n\n<ul>\n<li>if that is positive, then their sum will be parallel to the acute\nbisectrix and their difference to the obtuse one</li>\n<li>viceversa if the dot product is negative.</li>\n</ul>\n\n<p><em>note:</em> this method is valid also in 3D. \nIn 2D you can apply the same method to the normal (instead than parallel) unitary vectors.</p>\n\n<p><strong>in your example</strong> (using parallel vectors) </p>\n\n<p><a href=\"https://i.stack.imgur.com/FO99t.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/FO99t.png\" alt=\"enter image description here\"></a></p>\n\n<p>Rewrite the line equations in the <em>proportional</em> form\n$$\n\\left\\{ \\begin{gathered}\n y = \\frac{1}\n{2}x - 2\\quad \\Rightarrow \\quad \\frac{{x - 0}}\n{2} = \\frac{{y - \\left( { - 2} \\right)}}\n{1} \\hfill \\\\\n y = 2x + 1\\quad \\Rightarrow \\quad \\frac{{x - 0}}\n{{1/2}} = \\frac{{y - 1}}\n{1}\\quad \\Rightarrow \\quad \\frac{{x - 0}}\n{1} = \\frac{{y - 1}}\n{2} \\hfill \\\\ \n\\end{gathered} \\right.\n$$</p>\n\n<p>then the parallel unitary vectors are\n$$\n\\mathbf{u} = \\frac{1}\n{{\\sqrt 5 }}\\left( {2,1} \\right)\\quad \\mathbf{v} = \\frac{1}\n{{\\sqrt 5 }}\\left( {1,2} \\right)\\quad 0 &lt; \\mathbf{u} \\cdot \\mathbf{v} = \\frac{4}\n{5}\n$$\nand the angle between them is acute. 
Their sum and difference is\n$$\n\\mathbf{u} + \\mathbf{v} = \\frac{1}\n{{\\sqrt 5 }}\\left( {3,3} \\right)\\quad \\mathbf{u} - \\mathbf{v} = \\frac{1}\n{{\\sqrt 5 }}\\left( {1, - 1} \\right)\n$$\nand the equations of the bisecting lines will be:\n$$\n\\left\\{ \\begin{gathered}\n \\frac{{x - x_{\\,c} }}\n{3} = \\frac{{y - y_{\\,c} }}\n{3}\\quad \\Rightarrow \\quad x + 2 = y + 3\\quad \\Rightarrow \\quad y = x - 1\\;\\;acute \\hfill \\\\\n \\frac{{x - x_{\\,c} }}\n{1} = \\frac{{y - y_{\\,c} }}\n{{ - 1}}\\quad \\Rightarrow \\quad x + 2 = - y - 3\\quad \\quad \\Rightarrow \\quad y = - x - 5\\;\\;obtuse \\hfill \\\\ \n\\end{gathered} \\right.\n$$</p>\n" }, { "answer_id": 2085079, "score": 2, "text": "<p>Line 1 is $y = \\frac 12 x - 2$. Line 2 is $y = 2x +1$. They intersect at $(-2-3)$.</p>\n\n<p>If on line 1 you go over on $x$ $2$ units you will go up on $y$ by $1$ unit. This will put you at $(0, -2)$. The distance traveled is $\\sqrt {1^2 + 2^2} = \\sqrt{5}$.</p>\n\n<p>If on line 2 you go over on $x$ $1$ units you will go up on $y$ by $2$ unit. This will put you at $(-1, -1)$. The distance traveled is $\\sqrt {2^2 + 1^2} = \\sqrt{5}$.</p>\n\n<p>The angle bisector will go through the midpoint of $(0,-2)$ and $(-1,-1)$$*$. So the angle bisector will go through $(-\\frac 12, -1\\frac 12)$. So the angle bisector goes through the point $(-2,-3)$ and $(-\\frac 12, -1\\frac 12)$ so the slope is $\\frac{-1\\frac 12 - (-3)}{-1\\frac 12 -(-2)} = \\frac {1\\frac 12}{1\\frac 12} = 1$. </p>\n\n<p>$*$ because... $A = (-2,-3); B= (0,-2); C=(-1,-1);$ and $AB$ = $AC= \\sqrt{5}$ so $\\triangle BAC$ is isoceles, and the angle bisector of $\\angle BAC$ passes through the midpoint of $BC$.</p>\n\n<p>==== details in general ====</p>\n\n<p>You friend is not quite right. You can average the <em>angles</em> but slopes are not angles and there is not a linear conversion between them. (There is a <em>trigonometric</em> conversion between them. 
But not a linear conversion.)</p>\n\n<p>Bear with me.</p>\n\n<p>Suppose the two lines intersect at $(u,v)$ and line $1$ has slope $m$ and line $2$ has slope $n$. </p>\n\n<p>Move along the line $1$ from $(u,v)$ a distance of $1$ unit. You will have move $\\delta $ in the $x$ direction and $m*\\delta $ in the $y$ direction so your total distance is $\\sqrt{\\delta^2 + m^2\\delta^2} = 1$.</p>\n\n<p>So $\\delta\\sqrt{1 + m^2} = 1$ so $\\delta = \\frac 1{\\sqrt{1 + m^2}}$.</p>\n\n<p>So the point on line $1$ that is one unit away form $(u, v)$ is the point $(x_1, y_1) = (u + \\frac 1{\\sqrt{1 + m^2}}, v + \\frac m{\\sqrt{1 + m^2}})$.</p>\n\n<p>Likewise the point on line $2$ that is one unit away from $(u,v)$ will be the point $(x_2, y_2) = (u + \\frac 1{\\sqrt{1 + n^2}}, v + \\frac n{\\sqrt{1 + n^2}})$</p>\n\n<p>The angle bisector will contain the midpoint of $(x_1, x_2)$ and $(x_2, y_2)$.</p>\n\n<p>The midpoint is $(x_m, y_m) = (u + \\frac{[\\frac 1{\\sqrt{1 + m^2}}]+[\\frac 1{\\sqrt{1 + n^2}}]}2, w + \\frac{[\\frac m{\\sqrt{1 + m^2}}]+[\\frac n{\\sqrt{1 + n^2}}]}2)$.</p>\n\n<p>So.... the slope of the angle bisector will be:</p>\n\n<p>$\\frac {y_m - v}{x_m - u}= \\frac{\\frac{[\\frac m{\\sqrt{1 + m^2}}]+[\\frac n{\\sqrt{1 + n^2}}]}2}{\\frac{[\\frac 1{\\sqrt{1 + m^2}}]+[\\frac 1{\\sqrt{1 + n^2}}]}2}=$</p>\n\n<p>$\\frac{[\\frac m{\\sqrt{1 + m^2}}]+[\\frac n{\\sqrt{1 + n^2}}]}{[\\frac 1{\\sqrt{1 + m^2}}]+[\\frac 1{\\sqrt{1 + n^2}}]}=\\frac{m\\sqrt{1 + n^2}+n\\sqrt{1 + m^2}}{\\sqrt{1 + m^2}+\\sqrt{1 + n^2}}$</p>\n\n<p>$[\\frac{m\\sqrt{1 + n^2}+n\\sqrt{1 + m^2}}{\\sqrt{1 + m^2}+\\sqrt{1 + n^2}}=\\frac{\\frac 12\\sqrt{1 + 2^2}+2\\sqrt{1 + \\frac 12^2}}{\\sqrt{1 + \\frac 12^2}+\\sqrt{1 + 2^2}}=\\frac{\\sqrt{5}/2+ \\sqrt{5}}{\\sqrt{5}+ \\sqrt{5}/2}=1]$</p>\n\n<p>Which... I must confess is a formula I never learned and would never expect anyone to memorize. 
I'd expect if one needs to find the slope of an angle bisector one would calculate it for the specific lines.</p>\n\n<p>You might be able to simplify that equation further. </p>\n\n<p>=====</p>\n\n<p>That equation is sort of an \"average\"; just not a standard arithmetic average.</p>\n\n<p>You can calculate the average of the <em>angles</em> of the lines. But slopes are not angles and do not have a linear conversion.</p>\n\n<p>If you know trigonemetry:</p>\n\n<p>Slope = $\\frac{rise}{run} = \\frac{\\sin \\theta}{\\cos \\theta} = \\tan \\theta$ so the angle of a line is $\\theta = \\arctan m$.</p>\n\n<p>So the angle of the angle bisector is $\\psi = \\frac {\\theta + \\omega}2 = \\frac {\\arctan(m) + \\arctan(n)}2$</p>\n\n<p>And the slope of the angle bisector is $k = \\tan(\\psi) = \\tan(\\frac {\\arctan(m) + \\arctan(n)}2)=\\frac{m\\sqrt{1 + n^2}+n\\sqrt{1 + m^2}}{\\sqrt{1 + m^2}+\\sqrt{1 + n^2}}$</p>\n" }, { "answer_id": 2084851, "score": 4, "text": "<p>If you have two lines </p>\n\n<p>$$\nL_1 : a_1x+b_1y+c_2 = 0\n$$</p>\n\n<p>and </p>\n\n<p>$$\nL_2 : a_2x + b_2y + c_2 = 0,\n$$</p>\n\n<p>then the equation of their bisectors is given by</p>\n\n<p>$$\n\\frac{a_1x+b_1y+c_1}{\\sqrt{a_1^2+b_1^2}} = \\pm \\frac{a_2x+b_2y+c_2}{\\sqrt{a_2^2+b_2^2}}.\n$$</p>\n\n<p>where the $+$ and $-$ make the difference between the two bisectors. You just simplify each equation and read off the slope.</p>\n" }, { "answer_id": 2084779, "score": 3, "text": "<p>The suggestion you got is completely wrong. In this particular case, you can just notice that lines whose slopes are reciprocals, are symmetric with respect to lines with slope $+1$ and $-1$.</p>\n" }, { "answer_id": 2085104, "score": 3, "text": "<p>To be on right track, let $Ax + By + C =0, ax + by + c =0 $ be the equations of the given straight lines.The angular bisector is a straight line locus so that length of perpendiculars dropped from it onto the given lines are equal. 
The polar form of straight line is useful here.</p>\n\n<p>$$\n\\frac{Ax+By+C}{\\sqrt{A^2+B^2}} = \\pm \\frac{ax+by+c}{\\sqrt{a^2+b^2}}.\n$$</p>\n\n<p>The $+$ sign is taken when the arms contain the origin for internal bisector, $-$ sign for the external bisector perpendicular to it. You can now find its slope.</p>\n" }, { "answer_id": 2084877, "score": 2, "text": "<p>Take the unitary vectors parallel to the lines: their sum will be parallel to one of the bisecting line, their difference will be parallel to the other one.<br>\nTo determine which is the one bisecting the acute /obtuse angle, just take the dot product of the two vectors:</p>\n\n<ul>\n<li>if that is positive, then their sum will be parallel to the acute\nbisectrix and their difference to the obtuse one</li>\n<li>viceversa if the dot product is negative.</li>\n</ul>\n\n<p><em>note:</em> this method is valid also in 3D. \nIn 2D you can apply the same method to the normal (instead than parallel) unitary vectors.</p>\n\n<p><strong>in your example</strong> (using parallel vectors) </p>\n\n<p><a href=\"https://i.stack.imgur.com/FO99t.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/FO99t.png\" alt=\"enter image description here\"></a></p>\n\n<p>Rewrite the line equations in the <em>proportional</em> form\n$$\n\\left\\{ \\begin{gathered}\n y = \\frac{1}\n{2}x - 2\\quad \\Rightarrow \\quad \\frac{{x - 0}}\n{2} = \\frac{{y - \\left( { - 2} \\right)}}\n{1} \\hfill \\\\\n y = 2x + 1\\quad \\Rightarrow \\quad \\frac{{x - 0}}\n{{1/2}} = \\frac{{y - 1}}\n{1}\\quad \\Rightarrow \\quad \\frac{{x - 0}}\n{1} = \\frac{{y - 1}}\n{2} \\hfill \\\\ \n\\end{gathered} \\right.\n$$</p>\n\n<p>then the parallel unitary vectors are\n$$\n\\mathbf{u} = \\frac{1}\n{{\\sqrt 5 }}\\left( {2,1} \\right)\\quad \\mathbf{v} = \\frac{1}\n{{\\sqrt 5 }}\\left( {1,2} \\right)\\quad 0 &lt; \\mathbf{u} \\cdot \\mathbf{v} = \\frac{4}\n{5}\n$$\nand the angle between them is acute. 
Their sum and difference are\n$$\n\\mathbf{u} + \\mathbf{v} = \\frac{1}\n{{\\sqrt 5 }}\\left( {3,3} \\right)\\quad \\mathbf{u} - \\mathbf{v} = \\frac{1}\n{{\\sqrt 5 }}\\left( {1, - 1} \\right)\n$$\nand the equations of the bisecting lines will be:\n$$\n\\left\\{ \\begin{gathered}\n \\frac{{x - x_{\\,c} }}\n{3} = \\frac{{y - y_{\\,c} }}\n{3}\\quad \\Rightarrow \\quad x + 2 = y + 3\\quad \\Rightarrow \\quad y = x - 1\\;\\;acute \\hfill \\\\\n \\frac{{x - x_{\\,c} }}\n{1} = \\frac{{y - y_{\\,c} }}\n{{ - 1}}\\quad \\Rightarrow \\quad x + 2 = - y - 3\\quad \\quad \\Rightarrow \\quad y = - x - 5\\;\\;obtuse \\hfill \\\\ \n\\end{gathered} \\right.\n$$</p>\n" }, { "answer_id": 2085079, "score": 2, "text": "<p>Line 1 is $y = \\frac 12 x - 2$. Line 2 is $y = 2x +1$. They intersect at $(-2,-3)$.</p>\n\n<p>If on line 1 you go over $2$ units in $x$ you will go up $1$ unit in $y$. This will put you at $(0, -2)$. The distance traveled is $\\sqrt {1^2 + 2^2} = \\sqrt{5}$.</p>\n\n<p>If on line 2 you go over $1$ unit in $x$ you will go up $2$ units in $y$. This will put you at $(-1, -1)$. The distance traveled is $\\sqrt {2^2 + 1^2} = \\sqrt{5}$.</p>\n\n<p>The angle bisector will go through the midpoint of $(0,-2)$ and $(-1,-1)$$*$. So the angle bisector will go through $(-\\frac 12, -1\\frac 12)$. So the angle bisector goes through the points $(-2,-3)$ and $(-\\frac 12, -1\\frac 12)$ so the slope is $\\frac{-1\\frac 12 - (-3)}{-1\\frac 12 -(-2)} = \\frac {1\\frac 12}{1\\frac 12} = 1$. </p>\n\n<p>$*$ because... $A = (-2,-3); B= (0,-2); C=(-1,-1);$ and $AB = AC= \\sqrt{5}$ so $\\triangle BAC$ is isosceles, and the angle bisector of $\\angle BAC$ passes through the midpoint of $BC$.</p>\n\n<p>==== details in general ====</p>\n\n<p>Your friend is not quite right. You can average the <em>angles</em> but slopes are not angles and there is not a linear conversion between them. (There is a <em>trigonometric</em> conversion between them. 
But not a linear conversion.)</p>\n\n<p>Bear with me.</p>\n\n<p>Suppose the two lines intersect at $(u,v)$ and line $1$ has slope $m$ and line $2$ has slope $n$. </p>\n\n<p>Move along line $1$ from $(u,v)$ a distance of $1$ unit. You will have moved $\\delta $ in the $x$ direction and $m\\delta $ in the $y$ direction so your total distance is $\\sqrt{\\delta^2 + m^2\\delta^2} = 1$.</p>\n\n<p>So $\\delta\\sqrt{1 + m^2} = 1$ so $\\delta = \\frac 1{\\sqrt{1 + m^2}}$.</p>\n\n<p>So the point on line $1$ that is one unit away from $(u, v)$ is the point $(x_1, y_1) = (u + \\frac 1{\\sqrt{1 + m^2}}, v + \\frac m{\\sqrt{1 + m^2}})$.</p>\n\n<p>Likewise the point on line $2$ that is one unit away from $(u,v)$ will be the point $(x_2, y_2) = (u + \\frac 1{\\sqrt{1 + n^2}}, v + \\frac n{\\sqrt{1 + n^2}})$</p>\n\n<p>The angle bisector will contain the midpoint of $(x_1, y_1)$ and $(x_2, y_2)$.</p>\n\n<p>The midpoint is $(x_m, y_m) = (u + \\frac{[\\frac 1{\\sqrt{1 + m^2}}]+[\\frac 1{\\sqrt{1 + n^2}}]}2, v + \\frac{[\\frac m{\\sqrt{1 + m^2}}]+[\\frac n{\\sqrt{1 + n^2}}]}2)$.</p>\n\n<p>So.... the slope of the angle bisector will be:</p>\n\n<p>$\\frac {y_m - v}{x_m - u}= \\frac{\\frac{[\\frac m{\\sqrt{1 + m^2}}]+[\\frac n{\\sqrt{1 + n^2}}]}2}{\\frac{[\\frac 1{\\sqrt{1 + m^2}}]+[\\frac 1{\\sqrt{1 + n^2}}]}2}=$</p>\n\n<p>$\\frac{[\\frac m{\\sqrt{1 + m^2}}]+[\\frac n{\\sqrt{1 + n^2}}]}{[\\frac 1{\\sqrt{1 + m^2}}]+[\\frac 1{\\sqrt{1 + n^2}}]}=\\frac{m\\sqrt{1 + n^2}+n\\sqrt{1 + m^2}}{\\sqrt{1 + m^2}+\\sqrt{1 + n^2}}$</p>\n\n<p>$[\\frac{m\\sqrt{1 + n^2}+n\\sqrt{1 + m^2}}{\\sqrt{1 + m^2}+\\sqrt{1 + n^2}}=\\frac{\\frac 12\\sqrt{1 + 2^2}+2\\sqrt{1 + \\frac 12^2}}{\\sqrt{1 + \\frac 12^2}+\\sqrt{1 + 2^2}}=\\frac{\\sqrt{5}/2+ \\sqrt{5}}{\\sqrt{5}+ \\sqrt{5}/2}=1]$</p>\n\n<p>Which... I must confess is a formula I never learned and would never expect anyone to memorize. 
I'd expect if one needs to find the slope of an angle bisector one would calculate it for the specific lines.</p>\n\n<p>You might be able to simplify that equation further. </p>\n\n<p>=====</p>\n\n<p>That equation is sort of an \"average\"; just not a standard arithmetic average.</p>\n\n<p>You can calculate the average of the <em>angles</em> of the lines. But slopes are not angles and do not have a linear conversion.</p>\n\n<p>If you know trigonemetry:</p>\n\n<p>Slope = $\\frac{rise}{run} = \\frac{\\sin \\theta}{\\cos \\theta} = \\tan \\theta$ so the angle of a line is $\\theta = \\arctan m$.</p>\n\n<p>So the angle of the angle bisector is $\\psi = \\frac {\\theta + \\omega}2 = \\frac {\\arctan(m) + \\arctan(n)}2$</p>\n\n<p>And the slope of the angle bisector is $k = \\tan(\\psi) = \\tan(\\frac {\\arctan(m) + \\arctan(n)}2)=\\frac{m\\sqrt{1 + n^2}+n\\sqrt{1 + m^2}}{\\sqrt{1 + m^2}+\\sqrt{1 + n^2}}$</p>\n" }, { "answer_id": 2084851, "score": 4, "text": "<p>If you have two lines </p>\n\n<p>$$\nL_1 : a_1x+b_1y+c_2 = 0\n$$</p>\n\n<p>and </p>\n\n<p>$$\nL_2 : a_2x + b_2y + c_2 = 0,\n$$</p>\n\n<p>then the equation of their bisectors is given by</p>\n\n<p>$$\n\\frac{a_1x+b_1y+c_1}{\\sqrt{a_1^2+b_1^2}} = \\pm \\frac{a_2x+b_2y+c_2}{\\sqrt{a_2^2+b_2^2}}.\n$$</p>\n\n<p>where the $+$ and $-$ make the difference between the two bisectors. You just simplify each equation and read off the slope.</p>\n" }, { "answer_id": 2084779, "score": 3, "text": "<p>The suggestion you got is completely wrong. In this particular case, you can just notice that lines whose slopes are reciprocals, are symmetric with respect to lines with slope $+1$ and $-1$.</p>\n" }, { "answer_id": 2085104, "score": 3, "text": "<p>To be on right track, let $Ax + By + C =0, ax + by + c =0 $ be the equations of the given straight lines.The angular bisector is a straight line locus so that length of perpendiculars dropped from it onto the given lines are equal. 
The polar form of straight line is useful here.</p>\n\n<p>$$\n\\frac{Ax+By+C}{\\sqrt{A^2+B^2}} = \\pm \\frac{ax+by+c}{\\sqrt{a^2+b^2}}.\n$$</p>\n\n<p>The $+$ sign is taken when the arms contain the origin for internal bisector, $-$ sign for the external bisector perpendicular to it. You can now find its slope.</p>\n" }, { "answer_id": 2084877, "score": 2, "text": "<p>Take the unitary vectors parallel to the lines: their sum will be parallel to one of the bisecting line, their difference will be parallel to the other one.<br>\nTo determine which is the one bisecting the acute /obtuse angle, just take the dot product of the two vectors:</p>\n\n<ul>\n<li>if that is positive, then their sum will be parallel to the acute\nbisectrix and their difference to the obtuse one</li>\n<li>viceversa if the dot product is negative.</li>\n</ul>\n\n<p><em>note:</em> this method is valid also in 3D. \nIn 2D you can apply the same method to the normal (instead than parallel) unitary vectors.</p>\n\n<p><strong>in your example</strong> (using parallel vectors) </p>\n\n<p><a href=\"https://i.stack.imgur.com/FO99t.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/FO99t.png\" alt=\"enter image description here\"></a></p>\n\n<p>Rewrite the line equations in the <em>proportional</em> form\n$$\n\\left\\{ \\begin{gathered}\n y = \\frac{1}\n{2}x - 2\\quad \\Rightarrow \\quad \\frac{{x - 0}}\n{2} = \\frac{{y - \\left( { - 2} \\right)}}\n{1} \\hfill \\\\\n y = 2x + 1\\quad \\Rightarrow \\quad \\frac{{x - 0}}\n{{1/2}} = \\frac{{y - 1}}\n{1}\\quad \\Rightarrow \\quad \\frac{{x - 0}}\n{1} = \\frac{{y - 1}}\n{2} \\hfill \\\\ \n\\end{gathered} \\right.\n$$</p>\n\n<p>then the parallel unitary vectors are\n$$\n\\mathbf{u} = \\frac{1}\n{{\\sqrt 5 }}\\left( {2,1} \\right)\\quad \\mathbf{v} = \\frac{1}\n{{\\sqrt 5 }}\\left( {1,2} \\right)\\quad 0 &lt; \\mathbf{u} \\cdot \\mathbf{v} = \\frac{4}\n{5}\n$$\nand the angle between them is acute. 
Their sum and difference is\n$$\n\\mathbf{u} + \\mathbf{v} = \\frac{1}\n{{\\sqrt 5 }}\\left( {3,3} \\right)\\quad \\mathbf{u} - \\mathbf{v} = \\frac{1}\n{{\\sqrt 5 }}\\left( {1, - 1} \\right)\n$$\nand the equations of the bisecting lines will be:\n$$\n\\left\\{ \\begin{gathered}\n \\frac{{x - x_{\\,c} }}\n{3} = \\frac{{y - y_{\\,c} }}\n{3}\\quad \\Rightarrow \\quad x + 2 = y + 3\\quad \\Rightarrow \\quad y = x - 1\\;\\;acute \\hfill \\\\\n \\frac{{x - x_{\\,c} }}\n{1} = \\frac{{y - y_{\\,c} }}\n{{ - 1}}\\quad \\Rightarrow \\quad x + 2 = - y - 3\\quad \\quad \\Rightarrow \\quad y = - x - 5\\;\\;obtuse \\hfill \\\\ \n\\end{gathered} \\right.\n$$</p>\n" }, { "answer_id": 2085079, "score": 2, "text": "<p>Line 1 is $y = \\frac 12 x - 2$. Line 2 is $y = 2x +1$. They intersect at $(-2-3)$.</p>\n\n<p>If on line 1 you go over on $x$ $2$ units you will go up on $y$ by $1$ unit. This will put you at $(0, -2)$. The distance traveled is $\\sqrt {1^2 + 2^2} = \\sqrt{5}$.</p>\n\n<p>If on line 2 you go over on $x$ $1$ units you will go up on $y$ by $2$ unit. This will put you at $(-1, -1)$. The distance traveled is $\\sqrt {2^2 + 1^2} = \\sqrt{5}$.</p>\n\n<p>The angle bisector will go through the midpoint of $(0,-2)$ and $(-1,-1)$$*$. So the angle bisector will go through $(-\\frac 12, -1\\frac 12)$. So the angle bisector goes through the point $(-2,-3)$ and $(-\\frac 12, -1\\frac 12)$ so the slope is $\\frac{-1\\frac 12 - (-3)}{-1\\frac 12 -(-2)} = \\frac {1\\frac 12}{1\\frac 12} = 1$. </p>\n\n<p>$*$ because... $A = (-2,-3); B= (0,-2); C=(-1,-1);$ and $AB$ = $AC= \\sqrt{5}$ so $\\triangle BAC$ is isoceles, and the angle bisector of $\\angle BAC$ passes through the midpoint of $BC$.</p>\n\n<p>==== details in general ====</p>\n\n<p>You friend is not quite right. You can average the <em>angles</em> but slopes are not angles and there is not a linear conversion between them. (There is a <em>trigonometric</em> conversion between them. 
But not a linear conversion.)</p>\n\n<p>Bear with me.</p>\n\n<p>Suppose the two lines intersect at $(u,v)$ and line $1$ has slope $m$ and line $2$ has slope $n$. </p>\n\n<p>Move along the line $1$ from $(u,v)$ a distance of $1$ unit. You will have move $\\delta $ in the $x$ direction and $m*\\delta $ in the $y$ direction so your total distance is $\\sqrt{\\delta^2 + m^2\\delta^2} = 1$.</p>\n\n<p>So $\\delta\\sqrt{1 + m^2} = 1$ so $\\delta = \\frac 1{\\sqrt{1 + m^2}}$.</p>\n\n<p>So the point on line $1$ that is one unit away form $(u, v)$ is the point $(x_1, y_1) = (u + \\frac 1{\\sqrt{1 + m^2}}, v + \\frac m{\\sqrt{1 + m^2}})$.</p>\n\n<p>Likewise the point on line $2$ that is one unit away from $(u,v)$ will be the point $(x_2, y_2) = (u + \\frac 1{\\sqrt{1 + n^2}}, v + \\frac n{\\sqrt{1 + n^2}})$</p>\n\n<p>The angle bisector will contain the midpoint of $(x_1, x_2)$ and $(x_2, y_2)$.</p>\n\n<p>The midpoint is $(x_m, y_m) = (u + \\frac{[\\frac 1{\\sqrt{1 + m^2}}]+[\\frac 1{\\sqrt{1 + n^2}}]}2, w + \\frac{[\\frac m{\\sqrt{1 + m^2}}]+[\\frac n{\\sqrt{1 + n^2}}]}2)$.</p>\n\n<p>So.... the slope of the angle bisector will be:</p>\n\n<p>$\\frac {y_m - v}{x_m - u}= \\frac{\\frac{[\\frac m{\\sqrt{1 + m^2}}]+[\\frac n{\\sqrt{1 + n^2}}]}2}{\\frac{[\\frac 1{\\sqrt{1 + m^2}}]+[\\frac 1{\\sqrt{1 + n^2}}]}2}=$</p>\n\n<p>$\\frac{[\\frac m{\\sqrt{1 + m^2}}]+[\\frac n{\\sqrt{1 + n^2}}]}{[\\frac 1{\\sqrt{1 + m^2}}]+[\\frac 1{\\sqrt{1 + n^2}}]}=\\frac{m\\sqrt{1 + n^2}+n\\sqrt{1 + m^2}}{\\sqrt{1 + m^2}+\\sqrt{1 + n^2}}$</p>\n\n<p>$[\\frac{m\\sqrt{1 + n^2}+n\\sqrt{1 + m^2}}{\\sqrt{1 + m^2}+\\sqrt{1 + n^2}}=\\frac{\\frac 12\\sqrt{1 + 2^2}+2\\sqrt{1 + \\frac 12^2}}{\\sqrt{1 + \\frac 12^2}+\\sqrt{1 + 2^2}}=\\frac{\\sqrt{5}/2+ \\sqrt{5}}{\\sqrt{5}+ \\sqrt{5}/2}=1]$</p>\n\n<p>Which... I must confess is a formula I never learned and would never expect anyone to memorize. 
I'd expect if one needs to find the slope of an angle bisector one would calculate it for the specific lines.</p>\n\n<p>You might be able to simplify that equation further. </p>\n\n<p>=====</p>\n\n<p>That equation is sort of an \"average\"; just not a standard arithmetic average.</p>\n\n<p>You can calculate the average of the <em>angles</em> of the lines. But slopes are not angles and do not have a linear conversion.</p>\n\n<p>If you know trigonemetry:</p>\n\n<p>Slope = $\\frac{rise}{run} = \\frac{\\sin \\theta}{\\cos \\theta} = \\tan \\theta$ so the angle of a line is $\\theta = \\arctan m$.</p>\n\n<p>So the angle of the angle bisector is $\\psi = \\frac {\\theta + \\omega}2 = \\frac {\\arctan(m) + \\arctan(n)}2$</p>\n\n<p>And the slope of the angle bisector is $k = \\tan(\\psi) = \\tan(\\frac {\\arctan(m) + \\arctan(n)}2)=\\frac{m\\sqrt{1 + n^2}+n\\sqrt{1 + m^2}}{\\sqrt{1 + m^2}+\\sqrt{1 + n^2}}$</p>\n" }, { "answer_id": 2084851, "score": 4, "text": "<p>If you have two lines </p>\n\n<p>$$\nL_1 : a_1x+b_1y+c_2 = 0\n$$</p>\n\n<p>and </p>\n\n<p>$$\nL_2 : a_2x + b_2y + c_2 = 0,\n$$</p>\n\n<p>then the equation of their bisectors is given by</p>\n\n<p>$$\n\\frac{a_1x+b_1y+c_1}{\\sqrt{a_1^2+b_1^2}} = \\pm \\frac{a_2x+b_2y+c_2}{\\sqrt{a_2^2+b_2^2}}.\n$$</p>\n\n<p>where the $+$ and $-$ make the difference between the two bisectors. You just simplify each equation and read off the slope.</p>\n" }, { "answer_id": 2084779, "score": 3, "text": "<p>The suggestion you got is completely wrong. In this particular case, you can just notice that lines whose slopes are reciprocals, are symmetric with respect to lines with slope $+1$ and $-1$.</p>\n" }, { "answer_id": 2085104, "score": 3, "text": "<p>To be on right track, let $Ax + By + C =0, ax + by + c =0 $ be the equations of the given straight lines.The angular bisector is a straight line locus so that length of perpendiculars dropped from it onto the given lines are equal. 
The polar form of straight line is useful here.</p>\n\n<p>$$\n\\frac{Ax+By+C}{\\sqrt{A^2+B^2}} = \\pm \\frac{ax+by+c}{\\sqrt{a^2+b^2}}.\n$$</p>\n\n<p>The $+$ sign is taken when the arms contain the origin for internal bisector, $-$ sign for the external bisector perpendicular to it. You can now find its slope.</p>\n" }, { "answer_id": 2084877, "score": 2, "text": "<p>Take the unitary vectors parallel to the lines: their sum will be parallel to one of the bisecting line, their difference will be parallel to the other one.<br>\nTo determine which is the one bisecting the acute /obtuse angle, just take the dot product of the two vectors:</p>\n\n<ul>\n<li>if that is positive, then their sum will be parallel to the acute\nbisectrix and their difference to the obtuse one</li>\n<li>viceversa if the dot product is negative.</li>\n</ul>\n\n<p><em>note:</em> this method is valid also in 3D. \nIn 2D you can apply the same method to the normal (instead than parallel) unitary vectors.</p>\n\n<p><strong>in your example</strong> (using parallel vectors) </p>\n\n<p><a href=\"https://i.stack.imgur.com/FO99t.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/FO99t.png\" alt=\"enter image description here\"></a></p>\n\n<p>Rewrite the line equations in the <em>proportional</em> form\n$$\n\\left\\{ \\begin{gathered}\n y = \\frac{1}\n{2}x - 2\\quad \\Rightarrow \\quad \\frac{{x - 0}}\n{2} = \\frac{{y - \\left( { - 2} \\right)}}\n{1} \\hfill \\\\\n y = 2x + 1\\quad \\Rightarrow \\quad \\frac{{x - 0}}\n{{1/2}} = \\frac{{y - 1}}\n{1}\\quad \\Rightarrow \\quad \\frac{{x - 0}}\n{1} = \\frac{{y - 1}}\n{2} \\hfill \\\\ \n\\end{gathered} \\right.\n$$</p>\n\n<p>then the parallel unitary vectors are\n$$\n\\mathbf{u} = \\frac{1}\n{{\\sqrt 5 }}\\left( {2,1} \\right)\\quad \\mathbf{v} = \\frac{1}\n{{\\sqrt 5 }}\\left( {1,2} \\right)\\quad 0 &lt; \\mathbf{u} \\cdot \\mathbf{v} = \\frac{4}\n{5}\n$$\nand the angle between them is acute. 
Their sum and difference is\n$$\n\\mathbf{u} + \\mathbf{v} = \\frac{1}\n{{\\sqrt 5 }}\\left( {3,3} \\right)\\quad \\mathbf{u} - \\mathbf{v} = \\frac{1}\n{{\\sqrt 5 }}\\left( {1, - 1} \\right)\n$$\nand the equations of the bisecting lines will be:\n$$\n\\left\\{ \\begin{gathered}\n \\frac{{x - x_{\\,c} }}\n{3} = \\frac{{y - y_{\\,c} }}\n{3}\\quad \\Rightarrow \\quad x + 2 = y + 3\\quad \\Rightarrow \\quad y = x - 1\\;\\;acute \\hfill \\\\\n \\frac{{x - x_{\\,c} }}\n{1} = \\frac{{y - y_{\\,c} }}\n{{ - 1}}\\quad \\Rightarrow \\quad x + 2 = - y - 3\\quad \\quad \\Rightarrow \\quad y = - x - 5\\;\\;obtuse \\hfill \\\\ \n\\end{gathered} \\right.\n$$</p>\n" }, { "answer_id": 2085079, "score": 2, "text": "<p>Line 1 is $y = \\frac 12 x - 2$. Line 2 is $y = 2x +1$. They intersect at $(-2-3)$.</p>\n\n<p>If on line 1 you go over on $x$ $2$ units you will go up on $y$ by $1$ unit. This will put you at $(0, -2)$. The distance traveled is $\\sqrt {1^2 + 2^2} = \\sqrt{5}$.</p>\n\n<p>If on line 2 you go over on $x$ $1$ units you will go up on $y$ by $2$ unit. This will put you at $(-1, -1)$. The distance traveled is $\\sqrt {2^2 + 1^2} = \\sqrt{5}$.</p>\n\n<p>The angle bisector will go through the midpoint of $(0,-2)$ and $(-1,-1)$$*$. So the angle bisector will go through $(-\\frac 12, -1\\frac 12)$. So the angle bisector goes through the point $(-2,-3)$ and $(-\\frac 12, -1\\frac 12)$ so the slope is $\\frac{-1\\frac 12 - (-3)}{-1\\frac 12 -(-2)} = \\frac {1\\frac 12}{1\\frac 12} = 1$. </p>\n\n<p>$*$ because... $A = (-2,-3); B= (0,-2); C=(-1,-1);$ and $AB$ = $AC= \\sqrt{5}$ so $\\triangle BAC$ is isoceles, and the angle bisector of $\\angle BAC$ passes through the midpoint of $BC$.</p>\n\n<p>==== details in general ====</p>\n\n<p>You friend is not quite right. You can average the <em>angles</em> but slopes are not angles and there is not a linear conversion between them. (There is a <em>trigonometric</em> conversion between them. 
But not a linear conversion.)</p>\n\n<p>Bear with me.</p>\n\n<p>Suppose the two lines intersect at $(u,v)$ and line $1$ has slope $m$ and line $2$ has slope $n$. </p>\n\n<p>Move along the line $1$ from $(u,v)$ a distance of $1$ unit. You will have move $\\delta $ in the $x$ direction and $m*\\delta $ in the $y$ direction so your total distance is $\\sqrt{\\delta^2 + m^2\\delta^2} = 1$.</p>\n\n<p>So $\\delta\\sqrt{1 + m^2} = 1$ so $\\delta = \\frac 1{\\sqrt{1 + m^2}}$.</p>\n\n<p>So the point on line $1$ that is one unit away form $(u, v)$ is the point $(x_1, y_1) = (u + \\frac 1{\\sqrt{1 + m^2}}, v + \\frac m{\\sqrt{1 + m^2}})$.</p>\n\n<p>Likewise the point on line $2$ that is one unit away from $(u,v)$ will be the point $(x_2, y_2) = (u + \\frac 1{\\sqrt{1 + n^2}}, v + \\frac n{\\sqrt{1 + n^2}})$</p>\n\n<p>The angle bisector will contain the midpoint of $(x_1, x_2)$ and $(x_2, y_2)$.</p>\n\n<p>The midpoint is $(x_m, y_m) = (u + \\frac{[\\frac 1{\\sqrt{1 + m^2}}]+[\\frac 1{\\sqrt{1 + n^2}}]}2, w + \\frac{[\\frac m{\\sqrt{1 + m^2}}]+[\\frac n{\\sqrt{1 + n^2}}]}2)$.</p>\n\n<p>So.... the slope of the angle bisector will be:</p>\n\n<p>$\\frac {y_m - v}{x_m - u}= \\frac{\\frac{[\\frac m{\\sqrt{1 + m^2}}]+[\\frac n{\\sqrt{1 + n^2}}]}2}{\\frac{[\\frac 1{\\sqrt{1 + m^2}}]+[\\frac 1{\\sqrt{1 + n^2}}]}2}=$</p>\n\n<p>$\\frac{[\\frac m{\\sqrt{1 + m^2}}]+[\\frac n{\\sqrt{1 + n^2}}]}{[\\frac 1{\\sqrt{1 + m^2}}]+[\\frac 1{\\sqrt{1 + n^2}}]}=\\frac{m\\sqrt{1 + n^2}+n\\sqrt{1 + m^2}}{\\sqrt{1 + m^2}+\\sqrt{1 + n^2}}$</p>\n\n<p>$[\\frac{m\\sqrt{1 + n^2}+n\\sqrt{1 + m^2}}{\\sqrt{1 + m^2}+\\sqrt{1 + n^2}}=\\frac{\\frac 12\\sqrt{1 + 2^2}+2\\sqrt{1 + \\frac 12^2}}{\\sqrt{1 + \\frac 12^2}+\\sqrt{1 + 2^2}}=\\frac{\\sqrt{5}/2+ \\sqrt{5}}{\\sqrt{5}+ \\sqrt{5}/2}=1]$</p>\n\n<p>Which... I must confess is a formula I never learned and would never expect anyone to memorize. 
I'd expect if one needs to find the slope of an angle bisector one would calculate it for the specific lines.</p>\n\n<p>You might be able to simplify that equation further. </p>\n\n<p>=====</p>\n\n<p>That equation is sort of an \"average\"; just not a standard arithmetic average.</p>\n\n<p>You can calculate the average of the <em>angles</em> of the lines. But slopes are not angles and do not have a linear conversion.</p>\n\n<p>If you know trigonemetry:</p>\n\n<p>Slope = $\\frac{rise}{run} = \\frac{\\sin \\theta}{\\cos \\theta} = \\tan \\theta$ so the angle of a line is $\\theta = \\arctan m$.</p>\n\n<p>So the angle of the angle bisector is $\\psi = \\frac {\\theta + \\omega}2 = \\frac {\\arctan(m) + \\arctan(n)}2$</p>\n\n<p>And the slope of the angle bisector is $k = \\tan(\\psi) = \\tan(\\frac {\\arctan(m) + \\arctan(n)}2)=\\frac{m\\sqrt{1 + n^2}+n\\sqrt{1 + m^2}}{\\sqrt{1 + m^2}+\\sqrt{1 + n^2}}$</p>\n" }, { "answer_id": 2084851, "score": 4, "text": "<p>If you have two lines </p>\n\n<p>$$\nL_1 : a_1x+b_1y+c_2 = 0\n$$</p>\n\n<p>and </p>\n\n<p>$$\nL_2 : a_2x + b_2y + c_2 = 0,\n$$</p>\n\n<p>then the equation of their bisectors is given by</p>\n\n<p>$$\n\\frac{a_1x+b_1y+c_1}{\\sqrt{a_1^2+b_1^2}} = \\pm \\frac{a_2x+b_2y+c_2}{\\sqrt{a_2^2+b_2^2}}.\n$$</p>\n\n<p>where the $+$ and $-$ make the difference between the two bisectors. You just simplify each equation and read off the slope.</p>\n" }, { "answer_id": 2084779, "score": 3, "text": "<p>The suggestion you got is completely wrong. In this particular case, you can just notice that lines whose slopes are reciprocals, are symmetric with respect to lines with slope $+1$ and $-1$.</p>\n" }, { "answer_id": 2085104, "score": 3, "text": "<p>To be on right track, let $Ax + By + C =0, ax + by + c =0 $ be the equations of the given straight lines.The angular bisector is a straight line locus so that length of perpendiculars dropped from it onto the given lines are equal. 
The polar form of straight line is useful here.</p>\n\n<p>$$\n\\frac{Ax+By+C}{\\sqrt{A^2+B^2}} = \\pm \\frac{ax+by+c}{\\sqrt{a^2+b^2}}.\n$$</p>\n\n<p>The $+$ sign is taken when the arms contain the origin for internal bisector, $-$ sign for the external bisector perpendicular to it. You can now find its slope.</p>\n" }, { "answer_id": 2084877, "score": 2, "text": "<p>Take the unitary vectors parallel to the lines: their sum will be parallel to one of the bisecting line, their difference will be parallel to the other one.<br>\nTo determine which is the one bisecting the acute /obtuse angle, just take the dot product of the two vectors:</p>\n\n<ul>\n<li>if that is positive, then their sum will be parallel to the acute\nbisectrix and their difference to the obtuse one</li>\n<li>viceversa if the dot product is negative.</li>\n</ul>\n\n<p><em>note:</em> this method is valid also in 3D. \nIn 2D you can apply the same method to the normal (instead than parallel) unitary vectors.</p>\n\n<p><strong>in your example</strong> (using parallel vectors) </p>\n\n<p><a href=\"https://i.stack.imgur.com/FO99t.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/FO99t.png\" alt=\"enter image description here\"></a></p>\n\n<p>Rewrite the line equations in the <em>proportional</em> form\n$$\n\\left\\{ \\begin{gathered}\n y = \\frac{1}\n{2}x - 2\\quad \\Rightarrow \\quad \\frac{{x - 0}}\n{2} = \\frac{{y - \\left( { - 2} \\right)}}\n{1} \\hfill \\\\\n y = 2x + 1\\quad \\Rightarrow \\quad \\frac{{x - 0}}\n{{1/2}} = \\frac{{y - 1}}\n{1}\\quad \\Rightarrow \\quad \\frac{{x - 0}}\n{1} = \\frac{{y - 1}}\n{2} \\hfill \\\\ \n\\end{gathered} \\right.\n$$</p>\n\n<p>then the parallel unitary vectors are\n$$\n\\mathbf{u} = \\frac{1}\n{{\\sqrt 5 }}\\left( {2,1} \\right)\\quad \\mathbf{v} = \\frac{1}\n{{\\sqrt 5 }}\\left( {1,2} \\right)\\quad 0 &lt; \\mathbf{u} \\cdot \\mathbf{v} = \\frac{4}\n{5}\n$$\nand the angle between them is acute. 
Their sum and difference is\n$$\n\\mathbf{u} + \\mathbf{v} = \\frac{1}\n{{\\sqrt 5 }}\\left( {3,3} \\right)\\quad \\mathbf{u} - \\mathbf{v} = \\frac{1}\n{{\\sqrt 5 }}\\left( {1, - 1} \\right)\n$$\nand the equations of the bisecting lines will be:\n$$\n\\left\\{ \\begin{gathered}\n \\frac{{x - x_{\\,c} }}\n{3} = \\frac{{y - y_{\\,c} }}\n{3}\\quad \\Rightarrow \\quad x + 2 = y + 3\\quad \\Rightarrow \\quad y = x - 1\\;\\;acute \\hfill \\\\\n \\frac{{x - x_{\\,c} }}\n{1} = \\frac{{y - y_{\\,c} }}\n{{ - 1}}\\quad \\Rightarrow \\quad x + 2 = - y - 3\\quad \\quad \\Rightarrow \\quad y = - x - 5\\;\\;obtuse \\hfill \\\\ \n\\end{gathered} \\right.\n$$</p>\n" }, { "answer_id": 2085079, "score": 2, "text": "<p>Line 1 is $y = \\frac 12 x - 2$. Line 2 is $y = 2x +1$. They intersect at $(-2-3)$.</p>\n\n<p>If on line 1 you go over on $x$ $2$ units you will go up on $y$ by $1$ unit. This will put you at $(0, -2)$. The distance traveled is $\\sqrt {1^2 + 2^2} = \\sqrt{5}$.</p>\n\n<p>If on line 2 you go over on $x$ $1$ units you will go up on $y$ by $2$ unit. This will put you at $(-1, -1)$. The distance traveled is $\\sqrt {2^2 + 1^2} = \\sqrt{5}$.</p>\n\n<p>The angle bisector will go through the midpoint of $(0,-2)$ and $(-1,-1)$$*$. So the angle bisector will go through $(-\\frac 12, -1\\frac 12)$. So the angle bisector goes through the point $(-2,-3)$ and $(-\\frac 12, -1\\frac 12)$ so the slope is $\\frac{-1\\frac 12 - (-3)}{-1\\frac 12 -(-2)} = \\frac {1\\frac 12}{1\\frac 12} = 1$. </p>\n\n<p>$*$ because... $A = (-2,-3); B= (0,-2); C=(-1,-1);$ and $AB$ = $AC= \\sqrt{5}$ so $\\triangle BAC$ is isoceles, and the angle bisector of $\\angle BAC$ passes through the midpoint of $BC$.</p>\n\n<p>==== details in general ====</p>\n\n<p>You friend is not quite right. You can average the <em>angles</em> but slopes are not angles and there is not a linear conversion between them. (There is a <em>trigonometric</em> conversion between them. 
But not a linear conversion.)</p>\n\n<p>Bear with me.</p>\n\n<p>Suppose the two lines intersect at $(u,v)$ and line $1$ has slope $m$ and line $2$ has slope $n$. </p>\n\n<p>Move along the line $1$ from $(u,v)$ a distance of $1$ unit. You will have move $\\delta $ in the $x$ direction and $m*\\delta $ in the $y$ direction so your total distance is $\\sqrt{\\delta^2 + m^2\\delta^2} = 1$.</p>\n\n<p>So $\\delta\\sqrt{1 + m^2} = 1$ so $\\delta = \\frac 1{\\sqrt{1 + m^2}}$.</p>\n\n<p>So the point on line $1$ that is one unit away form $(u, v)$ is the point $(x_1, y_1) = (u + \\frac 1{\\sqrt{1 + m^2}}, v + \\frac m{\\sqrt{1 + m^2}})$.</p>\n\n<p>Likewise the point on line $2$ that is one unit away from $(u,v)$ will be the point $(x_2, y_2) = (u + \\frac 1{\\sqrt{1 + n^2}}, v + \\frac n{\\sqrt{1 + n^2}})$</p>\n\n<p>The angle bisector will contain the midpoint of $(x_1, x_2)$ and $(x_2, y_2)$.</p>\n\n<p>The midpoint is $(x_m, y_m) = (u + \\frac{[\\frac 1{\\sqrt{1 + m^2}}]+[\\frac 1{\\sqrt{1 + n^2}}]}2, w + \\frac{[\\frac m{\\sqrt{1 + m^2}}]+[\\frac n{\\sqrt{1 + n^2}}]}2)$.</p>\n\n<p>So.... the slope of the angle bisector will be:</p>\n\n<p>$\\frac {y_m - v}{x_m - u}= \\frac{\\frac{[\\frac m{\\sqrt{1 + m^2}}]+[\\frac n{\\sqrt{1 + n^2}}]}2}{\\frac{[\\frac 1{\\sqrt{1 + m^2}}]+[\\frac 1{\\sqrt{1 + n^2}}]}2}=$</p>\n\n<p>$\\frac{[\\frac m{\\sqrt{1 + m^2}}]+[\\frac n{\\sqrt{1 + n^2}}]}{[\\frac 1{\\sqrt{1 + m^2}}]+[\\frac 1{\\sqrt{1 + n^2}}]}=\\frac{m\\sqrt{1 + n^2}+n\\sqrt{1 + m^2}}{\\sqrt{1 + m^2}+\\sqrt{1 + n^2}}$</p>\n\n<p>$[\\frac{m\\sqrt{1 + n^2}+n\\sqrt{1 + m^2}}{\\sqrt{1 + m^2}+\\sqrt{1 + n^2}}=\\frac{\\frac 12\\sqrt{1 + 2^2}+2\\sqrt{1 + \\frac 12^2}}{\\sqrt{1 + \\frac 12^2}+\\sqrt{1 + 2^2}}=\\frac{\\sqrt{5}/2+ \\sqrt{5}}{\\sqrt{5}+ \\sqrt{5}/2}=1]$</p>\n\n<p>Which... I must confess is a formula I never learned and would never expect anyone to memorize. 
I'd expect if one needs to find the slope of an angle bisector one would calculate it for the specific lines.</p>\n\n<p>You might be able to simplify that equation further. </p>\n\n<p>=====</p>\n\n<p>That equation is sort of an \"average\"; just not a standard arithmetic average.</p>\n\n<p>You can calculate the average of the <em>angles</em> of the lines. But slopes are not angles and do not have a linear conversion.</p>\n\n<p>If you know trigonemetry:</p>\n\n<p>Slope = $\\frac{rise}{run} = \\frac{\\sin \\theta}{\\cos \\theta} = \\tan \\theta$ so the angle of a line is $\\theta = \\arctan m$.</p>\n\n<p>So the angle of the angle bisector is $\\psi = \\frac {\\theta + \\omega}2 = \\frac {\\arctan(m) + \\arctan(n)}2$</p>\n\n<p>And the slope of the angle bisector is $k = \\tan(\\psi) = \\tan(\\frac {\\arctan(m) + \\arctan(n)}2)=\\frac{m\\sqrt{1 + n^2}+n\\sqrt{1 + m^2}}{\\sqrt{1 + m^2}+\\sqrt{1 + n^2}}$</p>\n" }, { "answer_id": 2084851, "score": 4, "text": "<p>If you have two lines </p>\n\n<p>$$\nL_1 : a_1x+b_1y+c_2 = 0\n$$</p>\n\n<p>and </p>\n\n<p>$$\nL_2 : a_2x + b_2y + c_2 = 0,\n$$</p>\n\n<p>then the equation of their bisectors is given by</p>\n\n<p>$$\n\\frac{a_1x+b_1y+c_1}{\\sqrt{a_1^2+b_1^2}} = \\pm \\frac{a_2x+b_2y+c_2}{\\sqrt{a_2^2+b_2^2}}.\n$$</p>\n\n<p>where the $+$ and $-$ make the difference between the two bisectors. You just simplify each equation and read off the slope.</p>\n" }, { "answer_id": 2084779, "score": 3, "text": "<p>The suggestion you got is completely wrong. In this particular case, you can just notice that lines whose slopes are reciprocals, are symmetric with respect to lines with slope $+1$ and $-1$.</p>\n" }, { "answer_id": 2085104, "score": 3, "text": "<p>To be on right track, let $Ax + By + C =0, ax + by + c =0 $ be the equations of the given straight lines.The angular bisector is a straight line locus so that length of perpendiculars dropped from it onto the given lines are equal. 
The polar form of straight line is useful here.</p>\n\n<p>$$\n\\frac{Ax+By+C}{\\sqrt{A^2+B^2}} = \\pm \\frac{ax+by+c}{\\sqrt{a^2+b^2}}.\n$$</p>\n\n<p>The $+$ sign is taken when the arms contain the origin for internal bisector, $-$ sign for the external bisector perpendicular to it. You can now find its slope.</p>\n" }, { "answer_id": 2084877, "score": 2, "text": "<p>Take the unitary vectors parallel to the lines: their sum will be parallel to one of the bisecting line, their difference will be parallel to the other one.<br>\nTo determine which is the one bisecting the acute /obtuse angle, just take the dot product of the two vectors:</p>\n\n<ul>\n<li>if that is positive, then their sum will be parallel to the acute\nbisectrix and their difference to the obtuse one</li>\n<li>viceversa if the dot product is negative.</li>\n</ul>\n\n<p><em>note:</em> this method is valid also in 3D. \nIn 2D you can apply the same method to the normal (instead than parallel) unitary vectors.</p>\n\n<p><strong>in your example</strong> (using parallel vectors) </p>\n\n<p><a href=\"https://i.stack.imgur.com/FO99t.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/FO99t.png\" alt=\"enter image description here\"></a></p>\n\n<p>Rewrite the line equations in the <em>proportional</em> form\n$$\n\\left\\{ \\begin{gathered}\n y = \\frac{1}\n{2}x - 2\\quad \\Rightarrow \\quad \\frac{{x - 0}}\n{2} = \\frac{{y - \\left( { - 2} \\right)}}\n{1} \\hfill \\\\\n y = 2x + 1\\quad \\Rightarrow \\quad \\frac{{x - 0}}\n{{1/2}} = \\frac{{y - 1}}\n{1}\\quad \\Rightarrow \\quad \\frac{{x - 0}}\n{1} = \\frac{{y - 1}}\n{2} \\hfill \\\\ \n\\end{gathered} \\right.\n$$</p>\n\n<p>then the parallel unitary vectors are\n$$\n\\mathbf{u} = \\frac{1}\n{{\\sqrt 5 }}\\left( {2,1} \\right)\\quad \\mathbf{v} = \\frac{1}\n{{\\sqrt 5 }}\\left( {1,2} \\right)\\quad 0 &lt; \\mathbf{u} \\cdot \\mathbf{v} = \\frac{4}\n{5}\n$$\nand the angle between them is acute. 
Their sum and difference is\n$$\n\\mathbf{u} + \\mathbf{v} = \\frac{1}\n{{\\sqrt 5 }}\\left( {3,3} \\right)\\quad \\mathbf{u} - \\mathbf{v} = \\frac{1}\n{{\\sqrt 5 }}\\left( {1, - 1} \\right)\n$$\nand the equations of the bisecting lines will be:\n$$\n\\left\\{ \\begin{gathered}\n \\frac{{x - x_{\\,c} }}\n{3} = \\frac{{y - y_{\\,c} }}\n{3}\\quad \\Rightarrow \\quad x + 2 = y + 3\\quad \\Rightarrow \\quad y = x - 1\\;\\;acute \\hfill \\\\\n \\frac{{x - x_{\\,c} }}\n{1} = \\frac{{y - y_{\\,c} }}\n{{ - 1}}\\quad \\Rightarrow \\quad x + 2 = - y - 3\\quad \\quad \\Rightarrow \\quad y = - x - 5\\;\\;obtuse \\hfill \\\\ \n\\end{gathered} \\right.\n$$</p>\n" }, { "answer_id": 2085079, "score": 2, "text": "<p>Line 1 is $y = \\frac 12 x - 2$. Line 2 is $y = 2x +1$. They intersect at $(-2-3)$.</p>\n\n<p>If on line 1 you go over on $x$ $2$ units you will go up on $y$ by $1$ unit. This will put you at $(0, -2)$. The distance traveled is $\\sqrt {1^2 + 2^2} = \\sqrt{5}$.</p>\n\n<p>If on line 2 you go over on $x$ $1$ units you will go up on $y$ by $2$ unit. This will put you at $(-1, -1)$. The distance traveled is $\\sqrt {2^2 + 1^2} = \\sqrt{5}$.</p>\n\n<p>The angle bisector will go through the midpoint of $(0,-2)$ and $(-1,-1)$$*$. So the angle bisector will go through $(-\\frac 12, -1\\frac 12)$. So the angle bisector goes through the point $(-2,-3)$ and $(-\\frac 12, -1\\frac 12)$ so the slope is $\\frac{-1\\frac 12 - (-3)}{-1\\frac 12 -(-2)} = \\frac {1\\frac 12}{1\\frac 12} = 1$. </p>\n\n<p>$*$ because... $A = (-2,-3); B= (0,-2); C=(-1,-1);$ and $AB$ = $AC= \\sqrt{5}$ so $\\triangle BAC$ is isoceles, and the angle bisector of $\\angle BAC$ passes through the midpoint of $BC$.</p>\n\n<p>==== details in general ====</p>\n\n<p>You friend is not quite right. You can average the <em>angles</em> but slopes are not angles and there is not a linear conversion between them. (There is a <em>trigonometric</em> conversion between them. 
But not a linear conversion.)</p>\n\n<p>Bear with me.</p>\n\n<p>Suppose the two lines intersect at $(u,v)$ and line $1$ has slope $m$ and line $2$ has slope $n$. </p>\n\n<p>Move along line $1$ from $(u,v)$ a distance of $1$ unit. You will have moved $\\delta $ in the $x$ direction and $m*\\delta $ in the $y$ direction so your total distance is $\\sqrt{\\delta^2 + m^2\\delta^2} = 1$.</p>\n\n<p>So $\\delta\\sqrt{1 + m^2} = 1$ so $\\delta = \\frac 1{\\sqrt{1 + m^2}}$.</p>\n\n<p>So the point on line $1$ that is one unit away from $(u, v)$ is the point $(x_1, y_1) = (u + \\frac 1{\\sqrt{1 + m^2}}, v + \\frac m{\\sqrt{1 + m^2}})$.</p>\n\n<p>Likewise the point on line $2$ that is one unit away from $(u,v)$ will be the point $(x_2, y_2) = (u + \\frac 1{\\sqrt{1 + n^2}}, v + \\frac n{\\sqrt{1 + n^2}})$</p>\n\n<p>The angle bisector will contain the midpoint of $(x_1, y_1)$ and $(x_2, y_2)$.</p>\n\n<p>The midpoint is $(x_m, y_m) = (u + \\frac{[\\frac 1{\\sqrt{1 + m^2}}]+[\\frac 1{\\sqrt{1 + n^2}}]}2, v + \\frac{[\\frac m{\\sqrt{1 + m^2}}]+[\\frac n{\\sqrt{1 + n^2}}]}2)$.</p>\n\n<p>So.... the slope of the angle bisector will be:</p>\n\n<p>$\\frac {y_m - v}{x_m - u}= \\frac{\\frac{[\\frac m{\\sqrt{1 + m^2}}]+[\\frac n{\\sqrt{1 + n^2}}]}2}{\\frac{[\\frac 1{\\sqrt{1 + m^2}}]+[\\frac 1{\\sqrt{1 + n^2}}]}2}=$</p>\n\n<p>$\\frac{[\\frac m{\\sqrt{1 + m^2}}]+[\\frac n{\\sqrt{1 + n^2}}]}{[\\frac 1{\\sqrt{1 + m^2}}]+[\\frac 1{\\sqrt{1 + n^2}}]}=\\frac{m\\sqrt{1 + n^2}+n\\sqrt{1 + m^2}}{\\sqrt{1 + m^2}+\\sqrt{1 + n^2}}$</p>\n\n<p>$[\\frac{m\\sqrt{1 + n^2}+n\\sqrt{1 + m^2}}{\\sqrt{1 + m^2}+\\sqrt{1 + n^2}}=\\frac{\\frac 12\\sqrt{1 + 2^2}+2\\sqrt{1 + \\frac 12^2}}{\\sqrt{1 + \\frac 12^2}+\\sqrt{1 + 2^2}}=\\frac{\\sqrt{5}/2+ \\sqrt{5}}{\\sqrt{5}+ \\sqrt{5}/2}=1]$</p>\n\n<p>Which... I must confess is a formula I never learned and would never expect anyone to memorize. 
I'd expect if one needs to find the slope of an angle bisector one would calculate it for the specific lines.</p>\n\n<p>You might be able to simplify that equation further. </p>\n\n<p>=====</p>\n\n<p>That equation is sort of an \"average\"; just not a standard arithmetic average.</p>\n\n<p>You can calculate the average of the <em>angles</em> of the lines. But slopes are not angles and do not have a linear conversion.</p>\n\n<p>If you know trigonometry:</p>\n\n<p>Slope = $\\frac{rise}{run} = \\frac{\\sin \\theta}{\\cos \\theta} = \\tan \\theta$ so the angle of a line is $\\theta = \\arctan m$.</p>\n\n<p>So the angle of the angle bisector is $\\psi = \\frac {\\theta + \\omega}2 = \\frac {\\arctan(m) + \\arctan(n)}2$</p>\n\n<p>And the slope of the angle bisector is $k = \\tan(\\psi) = \\tan(\\frac {\\arctan(m) + \\arctan(n)}2)=\\frac{m\\sqrt{1 + n^2}+n\\sqrt{1 + m^2}}{\\sqrt{1 + m^2}+\\sqrt{1 + n^2}}$</p>\n" }, { "answer_id": 2084851, "score": 4, "text": "<p>If you have two lines </p>\n\n<p>$$\nL_1 : a_1x+b_1y+c_1 = 0\n$$</p>\n\n<p>and </p>\n\n<p>$$\nL_2 : a_2x + b_2y + c_2 = 0,\n$$</p>\n\n<p>then the equation of their bisectors is given by</p>\n\n<p>$$\n\\frac{a_1x+b_1y+c_1}{\\sqrt{a_1^2+b_1^2}} = \\pm \\frac{a_2x+b_2y+c_2}{\\sqrt{a_2^2+b_2^2}}.\n$$</p>\n\n<p>where the $+$ and $-$ make the difference between the two bisectors. You just simplify each equation and read off the slope.</p>\n" }, { "answer_id": 2084779, "score": 3, "text": "<p>The suggestion you got is completely wrong. In this particular case, you can just notice that lines whose slopes are reciprocals are symmetric with respect to lines with slope $+1$ and $-1$.</p>\n" }, { "answer_id": 2085104, "score": 3, "text": "<p>To be on the right track, let $Ax + By + C =0, ax + by + c =0 $ be the equations of the given straight lines. The angular bisector is a straight-line locus such that the lengths of the perpendiculars dropped from it onto the given lines are equal. 
The polar form of straight line is useful here.</p>\n\n<p>$$\n\\frac{Ax+By+C}{\\sqrt{A^2+B^2}} = \\pm \\frac{ax+by+c}{\\sqrt{a^2+b^2}}.\n$$</p>\n\n<p>The $+$ sign is taken when the arms contain the origin for internal bisector, $-$ sign for the external bisector perpendicular to it. You can now find its slope.</p>\n" }, { "answer_id": 2084877, "score": 2, "text": "<p>Take the unitary vectors parallel to the lines: their sum will be parallel to one of the bisecting line, their difference will be parallel to the other one.<br>\nTo determine which is the one bisecting the acute /obtuse angle, just take the dot product of the two vectors:</p>\n\n<ul>\n<li>if that is positive, then their sum will be parallel to the acute\nbisectrix and their difference to the obtuse one</li>\n<li>viceversa if the dot product is negative.</li>\n</ul>\n\n<p><em>note:</em> this method is valid also in 3D. \nIn 2D you can apply the same method to the normal (instead than parallel) unitary vectors.</p>\n\n<p><strong>in your example</strong> (using parallel vectors) </p>\n\n<p><a href=\"https://i.stack.imgur.com/FO99t.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/FO99t.png\" alt=\"enter image description here\"></a></p>\n\n<p>Rewrite the line equations in the <em>proportional</em> form\n$$\n\\left\\{ \\begin{gathered}\n y = \\frac{1}\n{2}x - 2\\quad \\Rightarrow \\quad \\frac{{x - 0}}\n{2} = \\frac{{y - \\left( { - 2} \\right)}}\n{1} \\hfill \\\\\n y = 2x + 1\\quad \\Rightarrow \\quad \\frac{{x - 0}}\n{{1/2}} = \\frac{{y - 1}}\n{1}\\quad \\Rightarrow \\quad \\frac{{x - 0}}\n{1} = \\frac{{y - 1}}\n{2} \\hfill \\\\ \n\\end{gathered} \\right.\n$$</p>\n\n<p>then the parallel unitary vectors are\n$$\n\\mathbf{u} = \\frac{1}\n{{\\sqrt 5 }}\\left( {2,1} \\right)\\quad \\mathbf{v} = \\frac{1}\n{{\\sqrt 5 }}\\left( {1,2} \\right)\\quad 0 &lt; \\mathbf{u} \\cdot \\mathbf{v} = \\frac{4}\n{5}\n$$\nand the angle between them is acute. 
Their sum and difference is\n$$\n\\mathbf{u} + \\mathbf{v} = \\frac{1}\n{{\\sqrt 5 }}\\left( {3,3} \\right)\\quad \\mathbf{u} - \\mathbf{v} = \\frac{1}\n{{\\sqrt 5 }}\\left( {1, - 1} \\right)\n$$\nand the equations of the bisecting lines will be:\n$$\n\\left\\{ \\begin{gathered}\n \\frac{{x - x_{\\,c} }}\n{3} = \\frac{{y - y_{\\,c} }}\n{3}\\quad \\Rightarrow \\quad x + 2 = y + 3\\quad \\Rightarrow \\quad y = x - 1\\;\\;acute \\hfill \\\\\n \\frac{{x - x_{\\,c} }}\n{1} = \\frac{{y - y_{\\,c} }}\n{{ - 1}}\\quad \\Rightarrow \\quad x + 2 = - y - 3\\quad \\quad \\Rightarrow \\quad y = - x - 5\\;\\;obtuse \\hfill \\\\ \n\\end{gathered} \\right.\n$$</p>\n" }, { "answer_id": 2085079, "score": 2, "text": "<p>Line 1 is $y = \\frac 12 x - 2$. Line 2 is $y = 2x +1$. They intersect at $(-2-3)$.</p>\n\n<p>If on line 1 you go over on $x$ $2$ units you will go up on $y$ by $1$ unit. This will put you at $(0, -2)$. The distance traveled is $\\sqrt {1^2 + 2^2} = \\sqrt{5}$.</p>\n\n<p>If on line 2 you go over on $x$ $1$ units you will go up on $y$ by $2$ unit. This will put you at $(-1, -1)$. The distance traveled is $\\sqrt {2^2 + 1^2} = \\sqrt{5}$.</p>\n\n<p>The angle bisector will go through the midpoint of $(0,-2)$ and $(-1,-1)$$*$. So the angle bisector will go through $(-\\frac 12, -1\\frac 12)$. So the angle bisector goes through the point $(-2,-3)$ and $(-\\frac 12, -1\\frac 12)$ so the slope is $\\frac{-1\\frac 12 - (-3)}{-1\\frac 12 -(-2)} = \\frac {1\\frac 12}{1\\frac 12} = 1$. </p>\n\n<p>$*$ because... $A = (-2,-3); B= (0,-2); C=(-1,-1);$ and $AB$ = $AC= \\sqrt{5}$ so $\\triangle BAC$ is isoceles, and the angle bisector of $\\angle BAC$ passes through the midpoint of $BC$.</p>\n\n<p>==== details in general ====</p>\n\n<p>You friend is not quite right. You can average the <em>angles</em> but slopes are not angles and there is not a linear conversion between them. (There is a <em>trigonometric</em> conversion between them. 
But not a linear conversion.)</p>\n\n<p>Bear with me.</p>\n\n<p>Suppose the two lines intersect at $(u,v)$ and line $1$ has slope $m$ and line $2$ has slope $n$. </p>\n\n<p>Move along the line $1$ from $(u,v)$ a distance of $1$ unit. You will have move $\\delta $ in the $x$ direction and $m*\\delta $ in the $y$ direction so your total distance is $\\sqrt{\\delta^2 + m^2\\delta^2} = 1$.</p>\n\n<p>So $\\delta\\sqrt{1 + m^2} = 1$ so $\\delta = \\frac 1{\\sqrt{1 + m^2}}$.</p>\n\n<p>So the point on line $1$ that is one unit away form $(u, v)$ is the point $(x_1, y_1) = (u + \\frac 1{\\sqrt{1 + m^2}}, v + \\frac m{\\sqrt{1 + m^2}})$.</p>\n\n<p>Likewise the point on line $2$ that is one unit away from $(u,v)$ will be the point $(x_2, y_2) = (u + \\frac 1{\\sqrt{1 + n^2}}, v + \\frac n{\\sqrt{1 + n^2}})$</p>\n\n<p>The angle bisector will contain the midpoint of $(x_1, x_2)$ and $(x_2, y_2)$.</p>\n\n<p>The midpoint is $(x_m, y_m) = (u + \\frac{[\\frac 1{\\sqrt{1 + m^2}}]+[\\frac 1{\\sqrt{1 + n^2}}]}2, w + \\frac{[\\frac m{\\sqrt{1 + m^2}}]+[\\frac n{\\sqrt{1 + n^2}}]}2)$.</p>\n\n<p>So.... the slope of the angle bisector will be:</p>\n\n<p>$\\frac {y_m - v}{x_m - u}= \\frac{\\frac{[\\frac m{\\sqrt{1 + m^2}}]+[\\frac n{\\sqrt{1 + n^2}}]}2}{\\frac{[\\frac 1{\\sqrt{1 + m^2}}]+[\\frac 1{\\sqrt{1 + n^2}}]}2}=$</p>\n\n<p>$\\frac{[\\frac m{\\sqrt{1 + m^2}}]+[\\frac n{\\sqrt{1 + n^2}}]}{[\\frac 1{\\sqrt{1 + m^2}}]+[\\frac 1{\\sqrt{1 + n^2}}]}=\\frac{m\\sqrt{1 + n^2}+n\\sqrt{1 + m^2}}{\\sqrt{1 + m^2}+\\sqrt{1 + n^2}}$</p>\n\n<p>$[\\frac{m\\sqrt{1 + n^2}+n\\sqrt{1 + m^2}}{\\sqrt{1 + m^2}+\\sqrt{1 + n^2}}=\\frac{\\frac 12\\sqrt{1 + 2^2}+2\\sqrt{1 + \\frac 12^2}}{\\sqrt{1 + \\frac 12^2}+\\sqrt{1 + 2^2}}=\\frac{\\sqrt{5}/2+ \\sqrt{5}}{\\sqrt{5}+ \\sqrt{5}/2}=1]$</p>\n\n<p>Which... I must confess is a formula I never learned and would never expect anyone to memorize. 
I'd expect if one needs to find the slope of an angle bisector one would calculate it for the specific lines.</p>\n\n<p>You might be able to simplify that equation further. </p>\n\n<p>=====</p>\n\n<p>That equation is sort of an \"average\"; just not a standard arithmetic average.</p>\n\n<p>You can calculate the average of the <em>angles</em> of the lines. But slopes are not angles and do not have a linear conversion.</p>\n\n<p>If you know trigonemetry:</p>\n\n<p>Slope = $\\frac{rise}{run} = \\frac{\\sin \\theta}{\\cos \\theta} = \\tan \\theta$ so the angle of a line is $\\theta = \\arctan m$.</p>\n\n<p>So the angle of the angle bisector is $\\psi = \\frac {\\theta + \\omega}2 = \\frac {\\arctan(m) + \\arctan(n)}2$</p>\n\n<p>And the slope of the angle bisector is $k = \\tan(\\psi) = \\tan(\\frac {\\arctan(m) + \\arctan(n)}2)=\\frac{m\\sqrt{1 + n^2}+n\\sqrt{1 + m^2}}{\\sqrt{1 + m^2}+\\sqrt{1 + n^2}}$</p>\n" }, { "answer_id": 2084851, "score": 4, "text": "<p>If you have two lines </p>\n\n<p>$$\nL_1 : a_1x+b_1y+c_2 = 0\n$$</p>\n\n<p>and </p>\n\n<p>$$\nL_2 : a_2x + b_2y + c_2 = 0,\n$$</p>\n\n<p>then the equation of their bisectors is given by</p>\n\n<p>$$\n\\frac{a_1x+b_1y+c_1}{\\sqrt{a_1^2+b_1^2}} = \\pm \\frac{a_2x+b_2y+c_2}{\\sqrt{a_2^2+b_2^2}}.\n$$</p>\n\n<p>where the $+$ and $-$ make the difference between the two bisectors. You just simplify each equation and read off the slope.</p>\n" }, { "answer_id": 2084779, "score": 3, "text": "<p>The suggestion you got is completely wrong. In this particular case, you can just notice that lines whose slopes are reciprocals, are symmetric with respect to lines with slope $+1$ and $-1$.</p>\n" }, { "answer_id": 2085104, "score": 3, "text": "<p>To be on right track, let $Ax + By + C =0, ax + by + c =0 $ be the equations of the given straight lines.The angular bisector is a straight line locus so that length of perpendiculars dropped from it onto the given lines are equal. 
The polar form of straight line is useful here.</p>\n\n<p>$$\n\\frac{Ax+By+C}{\\sqrt{A^2+B^2}} = \\pm \\frac{ax+by+c}{\\sqrt{a^2+b^2}}.\n$$</p>\n\n<p>The $+$ sign is taken when the arms contain the origin for internal bisector, $-$ sign for the external bisector perpendicular to it. You can now find its slope.</p>\n" }, { "answer_id": 2084877, "score": 2, "text": "<p>Take the unitary vectors parallel to the lines: their sum will be parallel to one of the bisecting line, their difference will be parallel to the other one.<br>\nTo determine which is the one bisecting the acute /obtuse angle, just take the dot product of the two vectors:</p>\n\n<ul>\n<li>if that is positive, then their sum will be parallel to the acute\nbisectrix and their difference to the obtuse one</li>\n<li>viceversa if the dot product is negative.</li>\n</ul>\n\n<p><em>note:</em> this method is valid also in 3D. \nIn 2D you can apply the same method to the normal (instead than parallel) unitary vectors.</p>\n\n<p><strong>in your example</strong> (using parallel vectors) </p>\n\n<p><a href=\"https://i.stack.imgur.com/FO99t.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/FO99t.png\" alt=\"enter image description here\"></a></p>\n\n<p>Rewrite the line equations in the <em>proportional</em> form\n$$\n\\left\\{ \\begin{gathered}\n y = \\frac{1}\n{2}x - 2\\quad \\Rightarrow \\quad \\frac{{x - 0}}\n{2} = \\frac{{y - \\left( { - 2} \\right)}}\n{1} \\hfill \\\\\n y = 2x + 1\\quad \\Rightarrow \\quad \\frac{{x - 0}}\n{{1/2}} = \\frac{{y - 1}}\n{1}\\quad \\Rightarrow \\quad \\frac{{x - 0}}\n{1} = \\frac{{y - 1}}\n{2} \\hfill \\\\ \n\\end{gathered} \\right.\n$$</p>\n\n<p>then the parallel unitary vectors are\n$$\n\\mathbf{u} = \\frac{1}\n{{\\sqrt 5 }}\\left( {2,1} \\right)\\quad \\mathbf{v} = \\frac{1}\n{{\\sqrt 5 }}\\left( {1,2} \\right)\\quad 0 &lt; \\mathbf{u} \\cdot \\mathbf{v} = \\frac{4}\n{5}\n$$\nand the angle between them is acute. 
Their sum and difference is\n$$\n\\mathbf{u} + \\mathbf{v} = \\frac{1}\n{{\\sqrt 5 }}\\left( {3,3} \\right)\\quad \\mathbf{u} - \\mathbf{v} = \\frac{1}\n{{\\sqrt 5 }}\\left( {1, - 1} \\right)\n$$\nand the equations of the bisecting lines will be:\n$$\n\\left\\{ \\begin{gathered}\n \\frac{{x - x_{\\,c} }}\n{3} = \\frac{{y - y_{\\,c} }}\n{3}\\quad \\Rightarrow \\quad x + 2 = y + 3\\quad \\Rightarrow \\quad y = x - 1\\;\\;acute \\hfill \\\\\n \\frac{{x - x_{\\,c} }}\n{1} = \\frac{{y - y_{\\,c} }}\n{{ - 1}}\\quad \\Rightarrow \\quad x + 2 = - y - 3\\quad \\quad \\Rightarrow \\quad y = - x - 5\\;\\;obtuse \\hfill \\\\ \n\\end{gathered} \\right.\n$$</p>\n" }, { "answer_id": 2085079, "score": 2, "text": "<p>Line 1 is $y = \\frac 12 x - 2$. Line 2 is $y = 2x +1$. They intersect at $(-2-3)$.</p>\n\n<p>If on line 1 you go over on $x$ $2$ units you will go up on $y$ by $1$ unit. This will put you at $(0, -2)$. The distance traveled is $\\sqrt {1^2 + 2^2} = \\sqrt{5}$.</p>\n\n<p>If on line 2 you go over on $x$ $1$ units you will go up on $y$ by $2$ unit. This will put you at $(-1, -1)$. The distance traveled is $\\sqrt {2^2 + 1^2} = \\sqrt{5}$.</p>\n\n<p>The angle bisector will go through the midpoint of $(0,-2)$ and $(-1,-1)$$*$. So the angle bisector will go through $(-\\frac 12, -1\\frac 12)$. So the angle bisector goes through the point $(-2,-3)$ and $(-\\frac 12, -1\\frac 12)$ so the slope is $\\frac{-1\\frac 12 - (-3)}{-1\\frac 12 -(-2)} = \\frac {1\\frac 12}{1\\frac 12} = 1$. </p>\n\n<p>$*$ because... $A = (-2,-3); B= (0,-2); C=(-1,-1);$ and $AB$ = $AC= \\sqrt{5}$ so $\\triangle BAC$ is isoceles, and the angle bisector of $\\angle BAC$ passes through the midpoint of $BC$.</p>\n\n<p>==== details in general ====</p>\n\n<p>You friend is not quite right. You can average the <em>angles</em> but slopes are not angles and there is not a linear conversion between them. (There is a <em>trigonometric</em> conversion between them. 
But not a linear conversion.)</p>\n\n<p>Bear with me.</p>\n\n<p>Suppose the two lines intersect at $(u,v)$ and line $1$ has slope $m$ and line $2$ has slope $n$. </p>\n\n<p>Move along the line $1$ from $(u,v)$ a distance of $1$ unit. You will have move $\\delta $ in the $x$ direction and $m*\\delta $ in the $y$ direction so your total distance is $\\sqrt{\\delta^2 + m^2\\delta^2} = 1$.</p>\n\n<p>So $\\delta\\sqrt{1 + m^2} = 1$ so $\\delta = \\frac 1{\\sqrt{1 + m^2}}$.</p>\n\n<p>So the point on line $1$ that is one unit away form $(u, v)$ is the point $(x_1, y_1) = (u + \\frac 1{\\sqrt{1 + m^2}}, v + \\frac m{\\sqrt{1 + m^2}})$.</p>\n\n<p>Likewise the point on line $2$ that is one unit away from $(u,v)$ will be the point $(x_2, y_2) = (u + \\frac 1{\\sqrt{1 + n^2}}, v + \\frac n{\\sqrt{1 + n^2}})$</p>\n\n<p>The angle bisector will contain the midpoint of $(x_1, x_2)$ and $(x_2, y_2)$.</p>\n\n<p>The midpoint is $(x_m, y_m) = (u + \\frac{[\\frac 1{\\sqrt{1 + m^2}}]+[\\frac 1{\\sqrt{1 + n^2}}]}2, w + \\frac{[\\frac m{\\sqrt{1 + m^2}}]+[\\frac n{\\sqrt{1 + n^2}}]}2)$.</p>\n\n<p>So.... the slope of the angle bisector will be:</p>\n\n<p>$\\frac {y_m - v}{x_m - u}= \\frac{\\frac{[\\frac m{\\sqrt{1 + m^2}}]+[\\frac n{\\sqrt{1 + n^2}}]}2}{\\frac{[\\frac 1{\\sqrt{1 + m^2}}]+[\\frac 1{\\sqrt{1 + n^2}}]}2}=$</p>\n\n<p>$\\frac{[\\frac m{\\sqrt{1 + m^2}}]+[\\frac n{\\sqrt{1 + n^2}}]}{[\\frac 1{\\sqrt{1 + m^2}}]+[\\frac 1{\\sqrt{1 + n^2}}]}=\\frac{m\\sqrt{1 + n^2}+n\\sqrt{1 + m^2}}{\\sqrt{1 + m^2}+\\sqrt{1 + n^2}}$</p>\n\n<p>$[\\frac{m\\sqrt{1 + n^2}+n\\sqrt{1 + m^2}}{\\sqrt{1 + m^2}+\\sqrt{1 + n^2}}=\\frac{\\frac 12\\sqrt{1 + 2^2}+2\\sqrt{1 + \\frac 12^2}}{\\sqrt{1 + \\frac 12^2}+\\sqrt{1 + 2^2}}=\\frac{\\sqrt{5}/2+ \\sqrt{5}}{\\sqrt{5}+ \\sqrt{5}/2}=1]$</p>\n\n<p>Which... I must confess is a formula I never learned and would never expect anyone to memorize. 
I'd expect if one needs to find the slope of an angle bisector one would calculate it for the specific lines.</p>\n\n<p>You might be able to simplify that equation further. </p>\n\n<p>=====</p>\n\n<p>That equation is sort of an \"average\"; just not a standard arithmetic average.</p>\n\n<p>You can calculate the average of the <em>angles</em> of the lines. But slopes are not angles and do not have a linear conversion.</p>\n\n<p>If you know trigonemetry:</p>\n\n<p>Slope = $\\frac{rise}{run} = \\frac{\\sin \\theta}{\\cos \\theta} = \\tan \\theta$ so the angle of a line is $\\theta = \\arctan m$.</p>\n\n<p>So the angle of the angle bisector is $\\psi = \\frac {\\theta + \\omega}2 = \\frac {\\arctan(m) + \\arctan(n)}2$</p>\n\n<p>And the slope of the angle bisector is $k = \\tan(\\psi) = \\tan(\\frac {\\arctan(m) + \\arctan(n)}2)=\\frac{m\\sqrt{1 + n^2}+n\\sqrt{1 + m^2}}{\\sqrt{1 + m^2}+\\sqrt{1 + n^2}}$</p>\n" }, { "answer_id": 2084851, "score": 4, "text": "<p>If you have two lines </p>\n\n<p>$$\nL_1 : a_1x+b_1y+c_2 = 0\n$$</p>\n\n<p>and </p>\n\n<p>$$\nL_2 : a_2x + b_2y + c_2 = 0,\n$$</p>\n\n<p>then the equation of their bisectors is given by</p>\n\n<p>$$\n\\frac{a_1x+b_1y+c_1}{\\sqrt{a_1^2+b_1^2}} = \\pm \\frac{a_2x+b_2y+c_2}{\\sqrt{a_2^2+b_2^2}}.\n$$</p>\n\n<p>where the $+$ and $-$ make the difference between the two bisectors. You just simplify each equation and read off the slope.</p>\n" }, { "answer_id": 2084779, "score": 3, "text": "<p>The suggestion you got is completely wrong. In this particular case, you can just notice that lines whose slopes are reciprocals, are symmetric with respect to lines with slope $+1$ and $-1$.</p>\n" }, { "answer_id": 2085104, "score": 3, "text": "<p>To be on right track, let $Ax + By + C =0, ax + by + c =0 $ be the equations of the given straight lines.The angular bisector is a straight line locus so that length of perpendiculars dropped from it onto the given lines are equal. 
The polar form of straight line is useful here.</p>\n\n<p>$$\n\\frac{Ax+By+C}{\\sqrt{A^2+B^2}} = \\pm \\frac{ax+by+c}{\\sqrt{a^2+b^2}}.\n$$</p>\n\n<p>The $+$ sign is taken when the arms contain the origin for internal bisector, $-$ sign for the external bisector perpendicular to it. You can now find its slope.</p>\n" }, { "answer_id": 2084877, "score": 2, "text": "<p>Take the unitary vectors parallel to the lines: their sum will be parallel to one of the bisecting line, their difference will be parallel to the other one.<br>\nTo determine which is the one bisecting the acute /obtuse angle, just take the dot product of the two vectors:</p>\n\n<ul>\n<li>if that is positive, then their sum will be parallel to the acute\nbisectrix and their difference to the obtuse one</li>\n<li>viceversa if the dot product is negative.</li>\n</ul>\n\n<p><em>note:</em> this method is valid also in 3D. \nIn 2D you can apply the same method to the normal (instead than parallel) unitary vectors.</p>\n\n<p><strong>in your example</strong> (using parallel vectors) </p>\n\n<p><a href=\"https://i.stack.imgur.com/FO99t.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/FO99t.png\" alt=\"enter image description here\"></a></p>\n\n<p>Rewrite the line equations in the <em>proportional</em> form\n$$\n\\left\\{ \\begin{gathered}\n y = \\frac{1}\n{2}x - 2\\quad \\Rightarrow \\quad \\frac{{x - 0}}\n{2} = \\frac{{y - \\left( { - 2} \\right)}}\n{1} \\hfill \\\\\n y = 2x + 1\\quad \\Rightarrow \\quad \\frac{{x - 0}}\n{{1/2}} = \\frac{{y - 1}}\n{1}\\quad \\Rightarrow \\quad \\frac{{x - 0}}\n{1} = \\frac{{y - 1}}\n{2} \\hfill \\\\ \n\\end{gathered} \\right.\n$$</p>\n\n<p>then the parallel unitary vectors are\n$$\n\\mathbf{u} = \\frac{1}\n{{\\sqrt 5 }}\\left( {2,1} \\right)\\quad \\mathbf{v} = \\frac{1}\n{{\\sqrt 5 }}\\left( {1,2} \\right)\\quad 0 &lt; \\mathbf{u} \\cdot \\mathbf{v} = \\frac{4}\n{5}\n$$\nand the angle between them is acute. 
Their sum and difference is\n$$\n\\mathbf{u} + \\mathbf{v} = \\frac{1}\n{{\\sqrt 5 }}\\left( {3,3} \\right)\\quad \\mathbf{u} - \\mathbf{v} = \\frac{1}\n{{\\sqrt 5 }}\\left( {1, - 1} \\right)\n$$\nand the equations of the bisecting lines will be:\n$$\n\\left\\{ \\begin{gathered}\n \\frac{{x - x_{\\,c} }}\n{3} = \\frac{{y - y_{\\,c} }}\n{3}\\quad \\Rightarrow \\quad x + 2 = y + 3\\quad \\Rightarrow \\quad y = x - 1\\;\\;acute \\hfill \\\\\n \\frac{{x - x_{\\,c} }}\n{1} = \\frac{{y - y_{\\,c} }}\n{{ - 1}}\\quad \\Rightarrow \\quad x + 2 = - y - 3\\quad \\quad \\Rightarrow \\quad y = - x - 5\\;\\;obtuse \\hfill \\\\ \n\\end{gathered} \\right.\n$$</p>\n" }, { "answer_id": 2085079, "score": 2, "text": "<p>Line 1 is $y = \\frac 12 x - 2$. Line 2 is $y = 2x +1$. They intersect at $(-2-3)$.</p>\n\n<p>If on line 1 you go over on $x$ $2$ units you will go up on $y$ by $1$ unit. This will put you at $(0, -2)$. The distance traveled is $\\sqrt {1^2 + 2^2} = \\sqrt{5}$.</p>\n\n<p>If on line 2 you go over on $x$ $1$ units you will go up on $y$ by $2$ unit. This will put you at $(-1, -1)$. The distance traveled is $\\sqrt {2^2 + 1^2} = \\sqrt{5}$.</p>\n\n<p>The angle bisector will go through the midpoint of $(0,-2)$ and $(-1,-1)$$*$. So the angle bisector will go through $(-\\frac 12, -1\\frac 12)$. So the angle bisector goes through the point $(-2,-3)$ and $(-\\frac 12, -1\\frac 12)$ so the slope is $\\frac{-1\\frac 12 - (-3)}{-1\\frac 12 -(-2)} = \\frac {1\\frac 12}{1\\frac 12} = 1$. </p>\n\n<p>$*$ because... $A = (-2,-3); B= (0,-2); C=(-1,-1);$ and $AB$ = $AC= \\sqrt{5}$ so $\\triangle BAC$ is isoceles, and the angle bisector of $\\angle BAC$ passes through the midpoint of $BC$.</p>\n\n<p>==== details in general ====</p>\n\n<p>You friend is not quite right. You can average the <em>angles</em> but slopes are not angles and there is not a linear conversion between them. (There is a <em>trigonometric</em> conversion between them. 
But not a linear conversion.)</p>\n\n<p>Bear with me.</p>\n\n<p>Suppose the two lines intersect at $(u,v)$ and line $1$ has slope $m$ and line $2$ has slope $n$. </p>\n\n<p>Move along the line $1$ from $(u,v)$ a distance of $1$ unit. You will have move $\\delta $ in the $x$ direction and $m*\\delta $ in the $y$ direction so your total distance is $\\sqrt{\\delta^2 + m^2\\delta^2} = 1$.</p>\n\n<p>So $\\delta\\sqrt{1 + m^2} = 1$ so $\\delta = \\frac 1{\\sqrt{1 + m^2}}$.</p>\n\n<p>So the point on line $1$ that is one unit away form $(u, v)$ is the point $(x_1, y_1) = (u + \\frac 1{\\sqrt{1 + m^2}}, v + \\frac m{\\sqrt{1 + m^2}})$.</p>\n\n<p>Likewise the point on line $2$ that is one unit away from $(u,v)$ will be the point $(x_2, y_2) = (u + \\frac 1{\\sqrt{1 + n^2}}, v + \\frac n{\\sqrt{1 + n^2}})$</p>\n\n<p>The angle bisector will contain the midpoint of $(x_1, x_2)$ and $(x_2, y_2)$.</p>\n\n<p>The midpoint is $(x_m, y_m) = (u + \\frac{[\\frac 1{\\sqrt{1 + m^2}}]+[\\frac 1{\\sqrt{1 + n^2}}]}2, w + \\frac{[\\frac m{\\sqrt{1 + m^2}}]+[\\frac n{\\sqrt{1 + n^2}}]}2)$.</p>\n\n<p>So.... the slope of the angle bisector will be:</p>\n\n<p>$\\frac {y_m - v}{x_m - u}= \\frac{\\frac{[\\frac m{\\sqrt{1 + m^2}}]+[\\frac n{\\sqrt{1 + n^2}}]}2}{\\frac{[\\frac 1{\\sqrt{1 + m^2}}]+[\\frac 1{\\sqrt{1 + n^2}}]}2}=$</p>\n\n<p>$\\frac{[\\frac m{\\sqrt{1 + m^2}}]+[\\frac n{\\sqrt{1 + n^2}}]}{[\\frac 1{\\sqrt{1 + m^2}}]+[\\frac 1{\\sqrt{1 + n^2}}]}=\\frac{m\\sqrt{1 + n^2}+n\\sqrt{1 + m^2}}{\\sqrt{1 + m^2}+\\sqrt{1 + n^2}}$</p>\n\n<p>$[\\frac{m\\sqrt{1 + n^2}+n\\sqrt{1 + m^2}}{\\sqrt{1 + m^2}+\\sqrt{1 + n^2}}=\\frac{\\frac 12\\sqrt{1 + 2^2}+2\\sqrt{1 + \\frac 12^2}}{\\sqrt{1 + \\frac 12^2}+\\sqrt{1 + 2^2}}=\\frac{\\sqrt{5}/2+ \\sqrt{5}}{\\sqrt{5}+ \\sqrt{5}/2}=1]$</p>\n\n<p>Which... I must confess is a formula I never learned and would never expect anyone to memorize. 
I'd expect if one needs to find the slope of an angle bisector one would calculate it for the specific lines.</p>\n\n<p>You might be able to simplify that equation further. </p>\n\n<p>=====</p>\n\n<p>That equation is sort of an \"average\"; just not a standard arithmetic average.</p>\n\n<p>You can calculate the average of the <em>angles</em> of the lines. But slopes are not angles and do not have a linear conversion.</p>\n\n<p>If you know trigonemetry:</p>\n\n<p>Slope = $\\frac{rise}{run} = \\frac{\\sin \\theta}{\\cos \\theta} = \\tan \\theta$ so the angle of a line is $\\theta = \\arctan m$.</p>\n\n<p>So the angle of the angle bisector is $\\psi = \\frac {\\theta + \\omega}2 = \\frac {\\arctan(m) + \\arctan(n)}2$</p>\n\n<p>And the slope of the angle bisector is $k = \\tan(\\psi) = \\tan(\\frac {\\arctan(m) + \\arctan(n)}2)=\\frac{m\\sqrt{1 + n^2}+n\\sqrt{1 + m^2}}{\\sqrt{1 + m^2}+\\sqrt{1 + n^2}}$</p>\n" }, { "answer_id": 2084851, "score": 4, "text": "<p>If you have two lines </p>\n\n<p>$$\nL_1 : a_1x+b_1y+c_2 = 0\n$$</p>\n\n<p>and </p>\n\n<p>$$\nL_2 : a_2x + b_2y + c_2 = 0,\n$$</p>\n\n<p>then the equation of their bisectors is given by</p>\n\n<p>$$\n\\frac{a_1x+b_1y+c_1}{\\sqrt{a_1^2+b_1^2}} = \\pm \\frac{a_2x+b_2y+c_2}{\\sqrt{a_2^2+b_2^2}}.\n$$</p>\n\n<p>where the $+$ and $-$ make the difference between the two bisectors. You just simplify each equation and read off the slope.</p>\n" }, { "answer_id": 2084779, "score": 3, "text": "<p>The suggestion you got is completely wrong. In this particular case, you can just notice that lines whose slopes are reciprocals, are symmetric with respect to lines with slope $+1$ and $-1$.</p>\n" }, { "answer_id": 2085104, "score": 3, "text": "<p>To be on right track, let $Ax + By + C =0, ax + by + c =0 $ be the equations of the given straight lines.The angular bisector is a straight line locus so that length of perpendiculars dropped from it onto the given lines are equal. 
The polar form of straight line is useful here.</p>\n\n<p>$$\n\\frac{Ax+By+C}{\\sqrt{A^2+B^2}} = \\pm \\frac{ax+by+c}{\\sqrt{a^2+b^2}}.\n$$</p>\n\n<p>The $+$ sign is taken when the arms contain the origin for internal bisector, $-$ sign for the external bisector perpendicular to it. You can now find its slope.</p>\n" }, { "answer_id": 2084877, "score": 2, "text": "<p>Take the unitary vectors parallel to the lines: their sum will be parallel to one of the bisecting line, their difference will be parallel to the other one.<br>\nTo determine which is the one bisecting the acute /obtuse angle, just take the dot product of the two vectors:</p>\n\n<ul>\n<li>if that is positive, then their sum will be parallel to the acute\nbisectrix and their difference to the obtuse one</li>\n<li>viceversa if the dot product is negative.</li>\n</ul>\n\n<p><em>note:</em> this method is valid also in 3D. \nIn 2D you can apply the same method to the normal (instead than parallel) unitary vectors.</p>\n\n<p><strong>in your example</strong> (using parallel vectors) </p>\n\n<p><a href=\"https://i.stack.imgur.com/FO99t.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/FO99t.png\" alt=\"enter image description here\"></a></p>\n\n<p>Rewrite the line equations in the <em>proportional</em> form\n$$\n\\left\\{ \\begin{gathered}\n y = \\frac{1}\n{2}x - 2\\quad \\Rightarrow \\quad \\frac{{x - 0}}\n{2} = \\frac{{y - \\left( { - 2} \\right)}}\n{1} \\hfill \\\\\n y = 2x + 1\\quad \\Rightarrow \\quad \\frac{{x - 0}}\n{{1/2}} = \\frac{{y - 1}}\n{1}\\quad \\Rightarrow \\quad \\frac{{x - 0}}\n{1} = \\frac{{y - 1}}\n{2} \\hfill \\\\ \n\\end{gathered} \\right.\n$$</p>\n\n<p>then the parallel unitary vectors are\n$$\n\\mathbf{u} = \\frac{1}\n{{\\sqrt 5 }}\\left( {2,1} \\right)\\quad \\mathbf{v} = \\frac{1}\n{{\\sqrt 5 }}\\left( {1,2} \\right)\\quad 0 &lt; \\mathbf{u} \\cdot \\mathbf{v} = \\frac{4}\n{5}\n$$\nand the angle between them is acute. 
Their sum and difference is\n$$\n\\mathbf{u} + \\mathbf{v} = \\frac{1}\n{{\\sqrt 5 }}\\left( {3,3} \\right)\\quad \\mathbf{u} - \\mathbf{v} = \\frac{1}\n{{\\sqrt 5 }}\\left( {1, - 1} \\right)\n$$\nand the equations of the bisecting lines will be:\n$$\n\\left\\{ \\begin{gathered}\n \\frac{{x - x_{\\,c} }}\n{3} = \\frac{{y - y_{\\,c} }}\n{3}\\quad \\Rightarrow \\quad x + 2 = y + 3\\quad \\Rightarrow \\quad y = x - 1\\;\\;acute \\hfill \\\\\n \\frac{{x - x_{\\,c} }}\n{1} = \\frac{{y - y_{\\,c} }}\n{{ - 1}}\\quad \\Rightarrow \\quad x + 2 = - y - 3\\quad \\quad \\Rightarrow \\quad y = - x - 5\\;\\;obtuse \\hfill \\\\ \n\\end{gathered} \\right.\n$$</p>\n" }, { "answer_id": 2085079, "score": 2, "text": "<p>Line 1 is $y = \\frac 12 x - 2$. Line 2 is $y = 2x +1$. They intersect at $(-2-3)$.</p>\n\n<p>If on line 1 you go over on $x$ $2$ units you will go up on $y$ by $1$ unit. This will put you at $(0, -2)$. The distance traveled is $\\sqrt {1^2 + 2^2} = \\sqrt{5}$.</p>\n\n<p>If on line 2 you go over on $x$ $1$ units you will go up on $y$ by $2$ unit. This will put you at $(-1, -1)$. The distance traveled is $\\sqrt {2^2 + 1^2} = \\sqrt{5}$.</p>\n\n<p>The angle bisector will go through the midpoint of $(0,-2)$ and $(-1,-1)$$*$. So the angle bisector will go through $(-\\frac 12, -1\\frac 12)$. So the angle bisector goes through the point $(-2,-3)$ and $(-\\frac 12, -1\\frac 12)$ so the slope is $\\frac{-1\\frac 12 - (-3)}{-1\\frac 12 -(-2)} = \\frac {1\\frac 12}{1\\frac 12} = 1$. </p>\n\n<p>$*$ because... $A = (-2,-3); B= (0,-2); C=(-1,-1);$ and $AB$ = $AC= \\sqrt{5}$ so $\\triangle BAC$ is isoceles, and the angle bisector of $\\angle BAC$ passes through the midpoint of $BC$.</p>\n\n<p>==== details in general ====</p>\n\n<p>You friend is not quite right. You can average the <em>angles</em> but slopes are not angles and there is not a linear conversion between them. (There is a <em>trigonometric</em> conversion between them. 
But not a linear conversion.)</p>\n\n<p>Bear with me.</p>\n\n<p>Suppose the two lines intersect at $(u,v)$ and line $1$ has slope $m$ and line $2$ has slope $n$. </p>\n\n<p>Move along the line $1$ from $(u,v)$ a distance of $1$ unit. You will have move $\\delta $ in the $x$ direction and $m*\\delta $ in the $y$ direction so your total distance is $\\sqrt{\\delta^2 + m^2\\delta^2} = 1$.</p>\n\n<p>So $\\delta\\sqrt{1 + m^2} = 1$ so $\\delta = \\frac 1{\\sqrt{1 + m^2}}$.</p>\n\n<p>So the point on line $1$ that is one unit away form $(u, v)$ is the point $(x_1, y_1) = (u + \\frac 1{\\sqrt{1 + m^2}}, v + \\frac m{\\sqrt{1 + m^2}})$.</p>\n\n<p>Likewise the point on line $2$ that is one unit away from $(u,v)$ will be the point $(x_2, y_2) = (u + \\frac 1{\\sqrt{1 + n^2}}, v + \\frac n{\\sqrt{1 + n^2}})$</p>\n\n<p>The angle bisector will contain the midpoint of $(x_1, x_2)$ and $(x_2, y_2)$.</p>\n\n<p>The midpoint is $(x_m, y_m) = (u + \\frac{[\\frac 1{\\sqrt{1 + m^2}}]+[\\frac 1{\\sqrt{1 + n^2}}]}2, w + \\frac{[\\frac m{\\sqrt{1 + m^2}}]+[\\frac n{\\sqrt{1 + n^2}}]}2)$.</p>\n\n<p>So.... the slope of the angle bisector will be:</p>\n\n<p>$\\frac {y_m - v}{x_m - u}= \\frac{\\frac{[\\frac m{\\sqrt{1 + m^2}}]+[\\frac n{\\sqrt{1 + n^2}}]}2}{\\frac{[\\frac 1{\\sqrt{1 + m^2}}]+[\\frac 1{\\sqrt{1 + n^2}}]}2}=$</p>\n\n<p>$\\frac{[\\frac m{\\sqrt{1 + m^2}}]+[\\frac n{\\sqrt{1 + n^2}}]}{[\\frac 1{\\sqrt{1 + m^2}}]+[\\frac 1{\\sqrt{1 + n^2}}]}=\\frac{m\\sqrt{1 + n^2}+n\\sqrt{1 + m^2}}{\\sqrt{1 + m^2}+\\sqrt{1 + n^2}}$</p>\n\n<p>$[\\frac{m\\sqrt{1 + n^2}+n\\sqrt{1 + m^2}}{\\sqrt{1 + m^2}+\\sqrt{1 + n^2}}=\\frac{\\frac 12\\sqrt{1 + 2^2}+2\\sqrt{1 + \\frac 12^2}}{\\sqrt{1 + \\frac 12^2}+\\sqrt{1 + 2^2}}=\\frac{\\sqrt{5}/2+ \\sqrt{5}}{\\sqrt{5}+ \\sqrt{5}/2}=1]$</p>\n\n<p>Which... I must confess is a formula I never learned and would never expect anyone to memorize. 
I'd expect if one needs to find the slope of an angle bisector one would calculate it for the specific lines.</p>\n\n<p>You might be able to simplify that equation further. </p>\n\n<p>=====</p>\n\n<p>That equation is sort of an \"average\"; just not a standard arithmetic average.</p>\n\n<p>You can calculate the average of the <em>angles</em> of the lines. But slopes are not angles and do not have a linear conversion.</p>\n\n<p>If you know trigonometry:</p>\n\n<p>Slope = $\\frac{rise}{run} = \\frac{\\sin \\theta}{\\cos \\theta} = \\tan \\theta$ so the angle of a line is $\\theta = \\arctan m$.</p>\n\n<p>So the angle of the angle bisector is $\\psi = \\frac {\\theta + \\omega}2 = \\frac {\\arctan(m) + \\arctan(n)}2$</p>\n\n<p>And the slope of the angle bisector is $k = \\tan(\\psi) = \\tan(\\frac {\\arctan(m) + \\arctan(n)}2)=\\frac{m\\sqrt{1 + n^2}+n\\sqrt{1 + m^2}}{\\sqrt{1 + m^2}+\\sqrt{1 + n^2}}$</p>\n" }, { "answer_id": 2084851, "score": 4, "text": "<p>If you have two lines </p>\n\n<p>$$\nL_1 : a_1x+b_1y+c_1 = 0\n$$</p>\n\n<p>and </p>\n\n<p>$$\nL_2 : a_2x + b_2y + c_2 = 0,\n$$</p>\n\n<p>then the equation of their bisectors is given by</p>\n\n<p>$$\n\\frac{a_1x+b_1y+c_1}{\\sqrt{a_1^2+b_1^2}} = \\pm \\frac{a_2x+b_2y+c_2}{\\sqrt{a_2^2+b_2^2}}.\n$$</p>\n\n<p>where the $+$ and $-$ make the difference between the two bisectors. You just simplify each equation and read off the slope.</p>\n" }, { "answer_id": 2084779, "score": 3, "text": "<p>The suggestion you got is completely wrong. In this particular case, you can just notice that lines whose slopes are reciprocals, are symmetric with respect to lines with slope $+1$ and $-1$.</p>\n" }, { "answer_id": 2085104, "score": 3, "text": "<p>To be on the right track, let $Ax + By + C =0, ax + by + c =0 $ be the equations of the given straight lines. The angular bisector is a straight line locus so that length of perpendiculars dropped from it onto the given lines are equal. 
The polar form of straight line is useful here.</p>\n\n<p>$$\n\\frac{Ax+By+C}{\\sqrt{A^2+B^2}} = \\pm \\frac{ax+by+c}{\\sqrt{a^2+b^2}}.\n$$</p>\n\n<p>The $+$ sign is taken when the arms contain the origin for internal bisector, $-$ sign for the external bisector perpendicular to it. You can now find its slope.</p>\n" }
]
Question: How do I find the slope of an angle bisector, given the equations of the two lines that form the angle? Answer 1: <p>If you have two lines </p>

<p>$$
L_1 : a_1x+b_1y+c_1 = 0
$$</p>

<p>and </p>

<p>$$
L_2 : a_2x + b_2y + c_2 = 0,
$$</p>

<p>then the equation of their bisectors is given by</p>

<p>$$
\frac{a_1x+b_1y+c_1}{\sqrt{a_1^2+b_1^2}} = \pm \frac{a_2x+b_2y+c_2}{\sqrt{a_2^2+b_2^2}}.
$$</p>

<p>where the $+$ and $-$ make the difference between the two bisectors. You just simplify each equation and read off the slope.</p> Answer 2: <p>The suggestion you got is completely wrong. In this particular case, you can just notice that lines whose slopes are reciprocals, are symmetric with respect to lines with slope $+1$ and $-1$.</p> Answer 3: <p>To be on the right track, let $Ax + By + C =0, ax + by + c =0 $ be the equations of the given straight lines. The angular bisector is a straight line locus so that length of perpendiculars dropped from it onto the given lines are equal. The polar form of straight line is useful here.</p>

<p>$$
\frac{Ax+By+C}{\sqrt{A^2+B^2}} = \pm \frac{ax+by+c}{\sqrt{a^2+b^2}}.
$$</p>

<p>The $+$ sign is taken when the arms contain the origin for internal bisector, $-$ sign for the external bisector perpendicular to it. You can now find its slope.</p> Answer 4: <p>Take the unitary vectors parallel to the lines: their sum will be parallel to one of the bisecting lines, their difference will be parallel to the other one.<br>
To determine which is the one bisecting the acute/obtuse angle, just take the dot product of the two vectors:</p>

<ul>
<li>if that is positive, then their sum will be parallel to the acute
bisectrix and their difference to the obtuse one</li>
<li>vice versa if the dot product is negative.</li>
</ul>

<p><em>note:</em> this method is valid also in 3D. 
In 2D you can apply the same method to the normal (instead of parallel) unit vectors.</p> <p><strong>In your example</strong> (using parallel vectors)</p> <p><a href="https://i.stack.imgur.com/FO99t.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/FO99t.png" alt="enter image description here"></a></p> <p>Rewrite the line equations in the <em>proportional</em> form $$ \left\{ \begin{gathered} y = \frac{1} {2}x - 2\quad \Rightarrow \quad \frac{{x - 0}} {2} = \frac{{y - \left( { - 2} \right)}} {1} \hfill \\ y = 2x + 1\quad \Rightarrow \quad \frac{{x - 0}} {{1/2}} = \frac{{y - 1}} {1}\quad \Rightarrow \quad \frac{{x - 0}} {1} = \frac{{y - 1}} {2} \hfill \\ \end{gathered} \right. $$</p> <p>then the parallel unit vectors are $$ \mathbf{u} = \frac{1} {{\sqrt 5 }}\left( {2,1} \right)\quad \mathbf{v} = \frac{1} {{\sqrt 5 }}\left( {1,2} \right)\quad 0 &lt; \mathbf{u} \cdot \mathbf{v} = \frac{4} {5} $$ and the angle between them is acute. Their sum and difference are $$ \mathbf{u} + \mathbf{v} = \frac{1} {{\sqrt 5 }}\left( {3,3} \right)\quad \mathbf{u} - \mathbf{v} = \frac{1} {{\sqrt 5 }}\left( {1, - 1} \right) $$ and the equations of the bisecting lines will be: $$ \left\{ \begin{gathered} \frac{{x - x_{\,c} }} {3} = \frac{{y - y_{\,c} }} {3}\quad \Rightarrow \quad x + 2 = y + 3\quad \Rightarrow \quad y = x - 1\;\;acute \hfill \\ \frac{{x - x_{\,c} }} {1} = \frac{{y - y_{\,c} }} {{ - 1}}\quad \Rightarrow \quad x + 2 = - y - 3\quad \Rightarrow \quad y = - x - 5\;\;obtuse \hfill \\ \end{gathered} \right. $$</p>

Answer 5: <p>Line 1 is $y = \frac 12 x - 2$. Line 2 is $y = 2x + 1$. They intersect at $(-2,-3)$.</p> <p>If on line 1 you go over $2$ units in $x$, you will go up $1$ unit in $y$. This will put you at $(0, -2)$. The distance traveled is $\sqrt {1^2 + 2^2} = \sqrt{5}$.</p> <p>If on line 2 you go over $1$ unit in $x$, you will go up $2$ units in $y$. This will put you at $(-1, -1)$.
The distance traveled is $\sqrt {2^2 + 1^2} = \sqrt{5}$.</p> <p>The angle bisector will go through the midpoint of $(0,-2)$ and $(-1,-1)$$^*$. So the angle bisector will go through $(-\frac 12, -\frac 32)$. So the angle bisector goes through the points $(-2,-3)$ and $(-\frac 12, -\frac 32)$, so the slope is $\frac{-\frac 32 - (-3)}{-\frac 12 -(-2)} = \frac {3/2}{3/2} = 1$.</p> <p>$^*$ because... $A = (-2,-3); B= (0,-2); C=(-1,-1);$ and $AB = AC = \sqrt{5}$, so $\triangle BAC$ is isosceles, and the angle bisector of $\angle BAC$ passes through the midpoint of $BC$.</p> <p>==== details in general ====</p> <p>Your friend is not quite right. You can average the <em>angles</em>, but slopes are not angles and there is no linear conversion between them. (There is a <em>trigonometric</em> conversion between them. But not a linear one.)</p> <p>Bear with me.</p> <p>Suppose the two lines intersect at $(u,v)$, line $1$ has slope $m$, and line $2$ has slope $n$.</p> <p>Move along line $1$ from $(u,v)$ a distance of $1$ unit. You will have moved $\delta$ in the $x$ direction and $m\delta$ in the $y$ direction, so your total distance is $\sqrt{\delta^2 + m^2\delta^2} = 1$.</p> <p>So $\delta\sqrt{1 + m^2} = 1$, so $\delta = \frac 1{\sqrt{1 + m^2}}$.</p> <p>So the point on line $1$ that is one unit away from $(u, v)$ is the point $(x_1, y_1) = (u + \frac 1{\sqrt{1 + m^2}}, v + \frac m{\sqrt{1 + m^2}})$.</p> <p>Likewise the point on line $2$ that is one unit away from $(u,v)$ will be the point $(x_2, y_2) = (u + \frac 1{\sqrt{1 + n^2}}, v + \frac n{\sqrt{1 + n^2}})$.</p> <p>The angle bisector will contain the midpoint of $(x_1, y_1)$ and $(x_2, y_2)$.</p> <p>The midpoint is $(x_m, y_m) = (u + \frac{[\frac 1{\sqrt{1 + m^2}}]+[\frac 1{\sqrt{1 + n^2}}]}2, v + \frac{[\frac m{\sqrt{1 + m^2}}]+[\frac n{\sqrt{1 + n^2}}]}2)$.</p> <p>So....
the slope of the angle bisector will be:</p> <p>$\frac {y_m - v}{x_m - u}= \frac{\frac{[\frac m{\sqrt{1 + m^2}}]+[\frac n{\sqrt{1 + n^2}}]}2}{\frac{[\frac 1{\sqrt{1 + m^2}}]+[\frac 1{\sqrt{1 + n^2}}]}2}=$</p> <p>$\frac{[\frac m{\sqrt{1 + m^2}}]+[\frac n{\sqrt{1 + n^2}}]}{[\frac 1{\sqrt{1 + m^2}}]+[\frac 1{\sqrt{1 + n^2}}]}=\frac{m\sqrt{1 + n^2}+n\sqrt{1 + m^2}}{\sqrt{1 + m^2}+\sqrt{1 + n^2}}$</p> <p>$[\frac{m\sqrt{1 + n^2}+n\sqrt{1 + m^2}}{\sqrt{1 + m^2}+\sqrt{1 + n^2}}=\frac{\frac 12\sqrt{1 + 2^2}+2\sqrt{1 + (\frac 12)^2}}{\sqrt{1 + (\frac 12)^2}+\sqrt{1 + 2^2}}=\frac{\sqrt{5}/2+ \sqrt{5}}{\sqrt{5}/2+ \sqrt{5}}=1]$</p> <p>Which... I must confess is a formula I never learned and would never expect anyone to memorize. I'd expect that if one needs the slope of an angle bisector, one would calculate it for the specific lines.</p> <p>You might be able to simplify that equation further.</p> <p>=====</p> <p>That equation is sort of an "average"; just not a standard arithmetic average.</p> <p>You can calculate the average of the <em>angles</em> of the lines. But slopes are not angles and do not have a linear conversion.</p> <p>If you know trigonometry:</p> <p>Slope = $\frac{rise}{run} = \frac{\sin \theta}{\cos \theta} = \tan \theta$, so the angle of a line is $\theta = \arctan m$.</p> <p>So the angle of the angle bisector is $\psi = \frac {\theta + \omega}2 = \frac {\arctan(m) + \arctan(n)}2$, where $\theta = \arctan m$ and $\omega = \arctan n$ are the angles of the two lines.</p> <p>And the slope of the angle bisector is $k = \tan(\psi) = \tan(\frac {\arctan(m) + \arctan(n)}2)=\frac{m\sqrt{1 + n^2}+n\sqrt{1 + m^2}}{\sqrt{1 + m^2}+\sqrt{1 + n^2}}$</p>
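As a numeric sanity check (my own sketch, not part of the original answer; the function names are mine), the closed-form slope above can be compared with the averaged-angle computation, using the example slopes $m = \frac12$ and $n = 2$:

```python
import math

def bisector_slope(m, n):
    """Slope of an angle bisector of two lines with slopes m and n,
    via the closed-form expression derived above."""
    return (m * math.sqrt(1 + n**2) + n * math.sqrt(1 + m**2)) / \
           (math.sqrt(1 + m**2) + math.sqrt(1 + n**2))

def bisector_slope_trig(m, n):
    """Same slope, computed by averaging the angles of the two lines."""
    return math.tan((math.atan(m) + math.atan(n)) / 2)

m, n = 0.5, 2.0
print(bisector_slope(m, n))        # 1.0
print(bisector_slope_trig(m, n))   # 1.0
```

Both routes give slope $1$ for the example, matching the bisector $y = x - 1$; the other bisector is perpendicular to this one, so its slope is $-1/k$.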
You just simplify each equation and read off the slope.</p> Answer 7: <p>The suggestion you got is completely wrong. In this particular case, you can just notice that lines whose slopes are reciprocals, are symmetric with respect to lines with slope $+1$ and $-1$.</p> Answer 8: <p>To be on right track, let $Ax + By + C =0, ax + by + c =0 $ be the equations of the given straight lines.The angular bisector is a straight line locus so that length of perpendiculars dropped from it onto the given lines are equal. The polar form of straight line is useful here.</p> <p>$$ \frac{Ax+By+C}{\sqrt{A^2+B^2}} = \pm \frac{ax+by+c}{\sqrt{a^2+b^2}}. $$</p> <p>The $+$ sign is taken when the arms contain the origin for internal bisector, $-$ sign for the external bisector perpendicular to it. You can now find its slope.</p> Answer 9: <p>Take the unitary vectors parallel to the lines: their sum will be parallel to one of the bisecting line, their difference will be parallel to the other one.<br> To determine which is the one bisecting the acute /obtuse angle, just take the dot product of the two vectors:</p> <ul> <li>if that is positive, then their sum will be parallel to the acute bisectrix and their difference to the obtuse one</li> <li>viceversa if the dot product is negative.</li> </ul> <p><em>note:</em> this method is valid also in 3D. 
In 2D you can apply the same method to the normal (instead than parallel) unitary vectors.</p> <p><strong>in your example</strong> (using parallel vectors) </p> <p><a href="https://i.stack.imgur.com/FO99t.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/FO99t.png" alt="enter image description here"></a></p> <p>Rewrite the line equations in the <em>proportional</em> form $$ \left\{ \begin{gathered} y = \frac{1} {2}x - 2\quad \Rightarrow \quad \frac{{x - 0}} {2} = \frac{{y - \left( { - 2} \right)}} {1} \hfill \\ y = 2x + 1\quad \Rightarrow \quad \frac{{x - 0}} {{1/2}} = \frac{{y - 1}} {1}\quad \Rightarrow \quad \frac{{x - 0}} {1} = \frac{{y - 1}} {2} \hfill \\ \end{gathered} \right. $$</p> <p>then the parallel unitary vectors are $$ \mathbf{u} = \frac{1} {{\sqrt 5 }}\left( {2,1} \right)\quad \mathbf{v} = \frac{1} {{\sqrt 5 }}\left( {1,2} \right)\quad 0 &lt; \mathbf{u} \cdot \mathbf{v} = \frac{4} {5} $$ and the angle between them is acute. Their sum and difference is $$ \mathbf{u} + \mathbf{v} = \frac{1} {{\sqrt 5 }}\left( {3,3} \right)\quad \mathbf{u} - \mathbf{v} = \frac{1} {{\sqrt 5 }}\left( {1, - 1} \right) $$ and the equations of the bisecting lines will be: $$ \left\{ \begin{gathered} \frac{{x - x_{\,c} }} {3} = \frac{{y - y_{\,c} }} {3}\quad \Rightarrow \quad x + 2 = y + 3\quad \Rightarrow \quad y = x - 1\;\;acute \hfill \\ \frac{{x - x_{\,c} }} {1} = \frac{{y - y_{\,c} }} {{ - 1}}\quad \Rightarrow \quad x + 2 = - y - 3\quad \quad \Rightarrow \quad y = - x - 5\;\;obtuse \hfill \\ \end{gathered} \right. $$</p> Answer 10: <p>Line 1 is $y = \frac 12 x - 2$. Line 2 is $y = 2x +1$. They intersect at $(-2-3)$.</p> <p>If on line 1 you go over on $x$ $2$ units you will go up on $y$ by $1$ unit. This will put you at $(0, -2)$. The distance traveled is $\sqrt {1^2 + 2^2} = \sqrt{5}$.</p> <p>If on line 2 you go over on $x$ $1$ units you will go up on $y$ by $2$ unit. This will put you at $(-1, -1)$. 
The distance traveled is $\sqrt {2^2 + 1^2} = \sqrt{5}$.</p> <p>The angle bisector will go through the midpoint of $(0,-2)$ and $(-1,-1)$$*$. So the angle bisector will go through $(-\frac 12, -1\frac 12)$. So the angle bisector goes through the point $(-2,-3)$ and $(-\frac 12, -1\frac 12)$ so the slope is $\frac{-1\frac 12 - (-3)}{-1\frac 12 -(-2)} = \frac {1\frac 12}{1\frac 12} = 1$. </p> <p>$*$ because... $A = (-2,-3); B= (0,-2); C=(-1,-1);$ and $AB$ = $AC= \sqrt{5}$ so $\triangle BAC$ is isoceles, and the angle bisector of $\angle BAC$ passes through the midpoint of $BC$.</p> <p>==== details in general ====</p> <p>You friend is not quite right. You can average the <em>angles</em> but slopes are not angles and there is not a linear conversion between them. (There is a <em>trigonometric</em> conversion between them. But not a linear conversion.)</p> <p>Bear with me.</p> <p>Suppose the two lines intersect at $(u,v)$ and line $1$ has slope $m$ and line $2$ has slope $n$. </p> <p>Move along the line $1$ from $(u,v)$ a distance of $1$ unit. You will have move $\delta $ in the $x$ direction and $m*\delta $ in the $y$ direction so your total distance is $\sqrt{\delta^2 + m^2\delta^2} = 1$.</p> <p>So $\delta\sqrt{1 + m^2} = 1$ so $\delta = \frac 1{\sqrt{1 + m^2}}$.</p> <p>So the point on line $1$ that is one unit away form $(u, v)$ is the point $(x_1, y_1) = (u + \frac 1{\sqrt{1 + m^2}}, v + \frac m{\sqrt{1 + m^2}})$.</p> <p>Likewise the point on line $2$ that is one unit away from $(u,v)$ will be the point $(x_2, y_2) = (u + \frac 1{\sqrt{1 + n^2}}, v + \frac n{\sqrt{1 + n^2}})$</p> <p>The angle bisector will contain the midpoint of $(x_1, x_2)$ and $(x_2, y_2)$.</p> <p>The midpoint is $(x_m, y_m) = (u + \frac{[\frac 1{\sqrt{1 + m^2}}]+[\frac 1{\sqrt{1 + n^2}}]}2, w + \frac{[\frac m{\sqrt{1 + m^2}}]+[\frac n{\sqrt{1 + n^2}}]}2)$.</p> <p>So.... 
the slope of the angle bisector will be:</p> <p>$\frac {y_m - v}{x_m - u}= \frac{\frac{[\frac m{\sqrt{1 + m^2}}]+[\frac n{\sqrt{1 + n^2}}]}2}{\frac{[\frac 1{\sqrt{1 + m^2}}]+[\frac 1{\sqrt{1 + n^2}}]}2}=$</p> <p>$\frac{[\frac m{\sqrt{1 + m^2}}]+[\frac n{\sqrt{1 + n^2}}]}{[\frac 1{\sqrt{1 + m^2}}]+[\frac 1{\sqrt{1 + n^2}}]}=\frac{m\sqrt{1 + n^2}+n\sqrt{1 + m^2}}{\sqrt{1 + m^2}+\sqrt{1 + n^2}}$</p> <p>$[\frac{m\sqrt{1 + n^2}+n\sqrt{1 + m^2}}{\sqrt{1 + m^2}+\sqrt{1 + n^2}}=\frac{\frac 12\sqrt{1 + 2^2}+2\sqrt{1 + \frac 12^2}}{\sqrt{1 + \frac 12^2}+\sqrt{1 + 2^2}}=\frac{\sqrt{5}/2+ \sqrt{5}}{\sqrt{5}+ \sqrt{5}/2}=1]$</p> <p>Which... I must confess is a formula I never learned and would never expect anyone to memorize. I'd expect if one needs to find the slope of an angle bisector one would calculate it for the specific lines.</p> <p>You might be able to simplify that equation further. </p> <p>=====</p> <p>That equation is sort of an "average"; just not a standard arithmetic average.</p> <p>You can calculate the average of the <em>angles</em> of the lines. But slopes are not angles and do not have a linear conversion.</p> <p>If you know trigonemetry:</p> <p>Slope = $\frac{rise}{run} = \frac{\sin \theta}{\cos \theta} = \tan \theta$ so the angle of a line is $\theta = \arctan m$.</p> <p>So the angle of the angle bisector is $\psi = \frac {\theta + \omega}2 = \frac {\arctan(m) + \arctan(n)}2$</p> <p>And the slope of the angle bisector is $k = \tan(\psi) = \tan(\frac {\arctan(m) + \arctan(n)}2)=\frac{m\sqrt{1 + n^2}+n\sqrt{1 + m^2}}{\sqrt{1 + m^2}+\sqrt{1 + n^2}}$</p> Answer 11: <p>If you have two lines </p> <p>$$ L_1 : a_1x+b_1y+c_2 = 0 $$</p> <p>and </p> <p>$$ L_2 : a_2x + b_2y + c_2 = 0, $$</p> <p>then the equation of their bisectors is given by</p> <p>$$ \frac{a_1x+b_1y+c_1}{\sqrt{a_1^2+b_1^2}} = \pm \frac{a_2x+b_2y+c_2}{\sqrt{a_2^2+b_2^2}}. $$</p> <p>where the $+$ and $-$ make the difference between the two bisectors. 
You just simplify each equation and read off the slope.</p> Answer 12: <p>The suggestion you got is completely wrong. In this particular case, you can just notice that lines whose slopes are reciprocals, are symmetric with respect to lines with slope $+1$ and $-1$.</p> Answer 13: <p>To be on right track, let $Ax + By + C =0, ax + by + c =0 $ be the equations of the given straight lines.The angular bisector is a straight line locus so that length of perpendiculars dropped from it onto the given lines are equal. The polar form of straight line is useful here.</p> <p>$$ \frac{Ax+By+C}{\sqrt{A^2+B^2}} = \pm \frac{ax+by+c}{\sqrt{a^2+b^2}}. $$</p> <p>The $+$ sign is taken when the arms contain the origin for internal bisector, $-$ sign for the external bisector perpendicular to it. You can now find its slope.</p> Answer 14: <p>Take the unitary vectors parallel to the lines: their sum will be parallel to one of the bisecting line, their difference will be parallel to the other one.<br> To determine which is the one bisecting the acute /obtuse angle, just take the dot product of the two vectors:</p> <ul> <li>if that is positive, then their sum will be parallel to the acute bisectrix and their difference to the obtuse one</li> <li>viceversa if the dot product is negative.</li> </ul> <p><em>note:</em> this method is valid also in 3D. 
In 2D you can apply the same method to the normal (instead than parallel) unitary vectors.</p> <p><strong>in your example</strong> (using parallel vectors) </p> <p><a href="https://i.stack.imgur.com/FO99t.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/FO99t.png" alt="enter image description here"></a></p> <p>Rewrite the line equations in the <em>proportional</em> form $$ \left\{ \begin{gathered} y = \frac{1} {2}x - 2\quad \Rightarrow \quad \frac{{x - 0}} {2} = \frac{{y - \left( { - 2} \right)}} {1} \hfill \\ y = 2x + 1\quad \Rightarrow \quad \frac{{x - 0}} {{1/2}} = \frac{{y - 1}} {1}\quad \Rightarrow \quad \frac{{x - 0}} {1} = \frac{{y - 1}} {2} \hfill \\ \end{gathered} \right. $$</p> <p>then the parallel unitary vectors are $$ \mathbf{u} = \frac{1} {{\sqrt 5 }}\left( {2,1} \right)\quad \mathbf{v} = \frac{1} {{\sqrt 5 }}\left( {1,2} \right)\quad 0 &lt; \mathbf{u} \cdot \mathbf{v} = \frac{4} {5} $$ and the angle between them is acute. Their sum and difference is $$ \mathbf{u} + \mathbf{v} = \frac{1} {{\sqrt 5 }}\left( {3,3} \right)\quad \mathbf{u} - \mathbf{v} = \frac{1} {{\sqrt 5 }}\left( {1, - 1} \right) $$ and the equations of the bisecting lines will be: $$ \left\{ \begin{gathered} \frac{{x - x_{\,c} }} {3} = \frac{{y - y_{\,c} }} {3}\quad \Rightarrow \quad x + 2 = y + 3\quad \Rightarrow \quad y = x - 1\;\;acute \hfill \\ \frac{{x - x_{\,c} }} {1} = \frac{{y - y_{\,c} }} {{ - 1}}\quad \Rightarrow \quad x + 2 = - y - 3\quad \quad \Rightarrow \quad y = - x - 5\;\;obtuse \hfill \\ \end{gathered} \right. $$</p> Answer 15: <p>Line 1 is $y = \frac 12 x - 2$. Line 2 is $y = 2x +1$. They intersect at $(-2-3)$.</p> <p>If on line 1 you go over on $x$ $2$ units you will go up on $y$ by $1$ unit. This will put you at $(0, -2)$. The distance traveled is $\sqrt {1^2 + 2^2} = \sqrt{5}$.</p> <p>If on line 2 you go over on $x$ $1$ units you will go up on $y$ by $2$ unit. This will put you at $(-1, -1)$. 
The distance traveled is $\sqrt {2^2 + 1^2} = \sqrt{5}$.</p> <p>The angle bisector will go through the midpoint of $(0,-2)$ and $(-1,-1)$$*$. So the angle bisector will go through $(-\frac 12, -1\frac 12)$. So the angle bisector goes through the point $(-2,-3)$ and $(-\frac 12, -1\frac 12)$ so the slope is $\frac{-1\frac 12 - (-3)}{-1\frac 12 -(-2)} = \frac {1\frac 12}{1\frac 12} = 1$. </p> <p>$*$ because... $A = (-2,-3); B= (0,-2); C=(-1,-1);$ and $AB$ = $AC= \sqrt{5}$ so $\triangle BAC$ is isoceles, and the angle bisector of $\angle BAC$ passes through the midpoint of $BC$.</p> <p>==== details in general ====</p> <p>You friend is not quite right. You can average the <em>angles</em> but slopes are not angles and there is not a linear conversion between them. (There is a <em>trigonometric</em> conversion between them. But not a linear conversion.)</p> <p>Bear with me.</p> <p>Suppose the two lines intersect at $(u,v)$ and line $1$ has slope $m$ and line $2$ has slope $n$. </p> <p>Move along the line $1$ from $(u,v)$ a distance of $1$ unit. You will have move $\delta $ in the $x$ direction and $m*\delta $ in the $y$ direction so your total distance is $\sqrt{\delta^2 + m^2\delta^2} = 1$.</p> <p>So $\delta\sqrt{1 + m^2} = 1$ so $\delta = \frac 1{\sqrt{1 + m^2}}$.</p> <p>So the point on line $1$ that is one unit away form $(u, v)$ is the point $(x_1, y_1) = (u + \frac 1{\sqrt{1 + m^2}}, v + \frac m{\sqrt{1 + m^2}})$.</p> <p>Likewise the point on line $2$ that is one unit away from $(u,v)$ will be the point $(x_2, y_2) = (u + \frac 1{\sqrt{1 + n^2}}, v + \frac n{\sqrt{1 + n^2}})$</p> <p>The angle bisector will contain the midpoint of $(x_1, x_2)$ and $(x_2, y_2)$.</p> <p>The midpoint is $(x_m, y_m) = (u + \frac{[\frac 1{\sqrt{1 + m^2}}]+[\frac 1{\sqrt{1 + n^2}}]}2, w + \frac{[\frac m{\sqrt{1 + m^2}}]+[\frac n{\sqrt{1 + n^2}}]}2)$.</p> <p>So.... 
the slope of the angle bisector will be:</p> <p>$\frac {y_m - v}{x_m - u}= \frac{\frac{[\frac m{\sqrt{1 + m^2}}]+[\frac n{\sqrt{1 + n^2}}]}2}{\frac{[\frac 1{\sqrt{1 + m^2}}]+[\frac 1{\sqrt{1 + n^2}}]}2}=$</p> <p>$\frac{[\frac m{\sqrt{1 + m^2}}]+[\frac n{\sqrt{1 + n^2}}]}{[\frac 1{\sqrt{1 + m^2}}]+[\frac 1{\sqrt{1 + n^2}}]}=\frac{m\sqrt{1 + n^2}+n\sqrt{1 + m^2}}{\sqrt{1 + m^2}+\sqrt{1 + n^2}}$</p> <p>$[\frac{m\sqrt{1 + n^2}+n\sqrt{1 + m^2}}{\sqrt{1 + m^2}+\sqrt{1 + n^2}}=\frac{\frac 12\sqrt{1 + 2^2}+2\sqrt{1 + \frac 12^2}}{\sqrt{1 + \frac 12^2}+\sqrt{1 + 2^2}}=\frac{\sqrt{5}/2+ \sqrt{5}}{\sqrt{5}+ \sqrt{5}/2}=1]$</p> <p>Which... I must confess is a formula I never learned and would never expect anyone to memorize. I'd expect if one needs to find the slope of an angle bisector one would calculate it for the specific lines.</p> <p>You might be able to simplify that equation further. </p> <p>=====</p> <p>That equation is sort of an "average"; just not a standard arithmetic average.</p> <p>You can calculate the average of the <em>angles</em> of the lines. But slopes are not angles and do not have a linear conversion.</p> <p>If you know trigonemetry:</p> <p>Slope = $\frac{rise}{run} = \frac{\sin \theta}{\cos \theta} = \tan \theta$ so the angle of a line is $\theta = \arctan m$.</p> <p>So the angle of the angle bisector is $\psi = \frac {\theta + \omega}2 = \frac {\arctan(m) + \arctan(n)}2$</p> <p>And the slope of the angle bisector is $k = \tan(\psi) = \tan(\frac {\arctan(m) + \arctan(n)}2)=\frac{m\sqrt{1 + n^2}+n\sqrt{1 + m^2}}{\sqrt{1 + m^2}+\sqrt{1 + n^2}}$</p> Answer 16: <p>If you have two lines </p> <p>$$ L_1 : a_1x+b_1y+c_2 = 0 $$</p> <p>and </p> <p>$$ L_2 : a_2x + b_2y + c_2 = 0, $$</p> <p>then the equation of their bisectors is given by</p> <p>$$ \frac{a_1x+b_1y+c_1}{\sqrt{a_1^2+b_1^2}} = \pm \frac{a_2x+b_2y+c_2}{\sqrt{a_2^2+b_2^2}}. $$</p> <p>where the $+$ and $-$ make the difference between the two bisectors. 
You just simplify each equation and read off the slope.</p> Answer 17: <p>The suggestion you got is completely wrong. In this particular case, you can just notice that lines whose slopes are reciprocals, are symmetric with respect to lines with slope $+1$ and $-1$.</p> Answer 18: <p>To be on right track, let $Ax + By + C =0, ax + by + c =0 $ be the equations of the given straight lines.The angular bisector is a straight line locus so that length of perpendiculars dropped from it onto the given lines are equal. The polar form of straight line is useful here.</p> <p>$$ \frac{Ax+By+C}{\sqrt{A^2+B^2}} = \pm \frac{ax+by+c}{\sqrt{a^2+b^2}}. $$</p> <p>The $+$ sign is taken when the arms contain the origin for internal bisector, $-$ sign for the external bisector perpendicular to it. You can now find its slope.</p> Answer 19: <p>Take the unitary vectors parallel to the lines: their sum will be parallel to one of the bisecting line, their difference will be parallel to the other one.<br> To determine which is the one bisecting the acute /obtuse angle, just take the dot product of the two vectors:</p> <ul> <li>if that is positive, then their sum will be parallel to the acute bisectrix and their difference to the obtuse one</li> <li>viceversa if the dot product is negative.</li> </ul> <p><em>note:</em> this method is valid also in 3D. 
In 2D you can apply the same method to the normal (instead than parallel) unitary vectors.</p> <p><strong>in your example</strong> (using parallel vectors) </p> <p><a href="https://i.stack.imgur.com/FO99t.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/FO99t.png" alt="enter image description here"></a></p> <p>Rewrite the line equations in the <em>proportional</em> form $$ \left\{ \begin{gathered} y = \frac{1} {2}x - 2\quad \Rightarrow \quad \frac{{x - 0}} {2} = \frac{{y - \left( { - 2} \right)}} {1} \hfill \\ y = 2x + 1\quad \Rightarrow \quad \frac{{x - 0}} {{1/2}} = \frac{{y - 1}} {1}\quad \Rightarrow \quad \frac{{x - 0}} {1} = \frac{{y - 1}} {2} \hfill \\ \end{gathered} \right. $$</p> <p>then the parallel unitary vectors are $$ \mathbf{u} = \frac{1} {{\sqrt 5 }}\left( {2,1} \right)\quad \mathbf{v} = \frac{1} {{\sqrt 5 }}\left( {1,2} \right)\quad 0 &lt; \mathbf{u} \cdot \mathbf{v} = \frac{4} {5} $$ and the angle between them is acute. Their sum and difference is $$ \mathbf{u} + \mathbf{v} = \frac{1} {{\sqrt 5 }}\left( {3,3} \right)\quad \mathbf{u} - \mathbf{v} = \frac{1} {{\sqrt 5 }}\left( {1, - 1} \right) $$ and the equations of the bisecting lines will be: $$ \left\{ \begin{gathered} \frac{{x - x_{\,c} }} {3} = \frac{{y - y_{\,c} }} {3}\quad \Rightarrow \quad x + 2 = y + 3\quad \Rightarrow \quad y = x - 1\;\;acute \hfill \\ \frac{{x - x_{\,c} }} {1} = \frac{{y - y_{\,c} }} {{ - 1}}\quad \Rightarrow \quad x + 2 = - y - 3\quad \quad \Rightarrow \quad y = - x - 5\;\;obtuse \hfill \\ \end{gathered} \right. $$</p> Answer 20: <p>Line 1 is $y = \frac 12 x - 2$. Line 2 is $y = 2x +1$. They intersect at $(-2-3)$.</p> <p>If on line 1 you go over on $x$ $2$ units you will go up on $y$ by $1$ unit. This will put you at $(0, -2)$. The distance traveled is $\sqrt {1^2 + 2^2} = \sqrt{5}$.</p> <p>If on line 2 you go over on $x$ $1$ units you will go up on $y$ by $2$ unit. This will put you at $(-1, -1)$. 
The distance traveled is $\sqrt {2^2 + 1^2} = \sqrt{5}$.</p> <p>The angle bisector will go through the midpoint of $(0,-2)$ and $(-1,-1)$$*$. So the angle bisector will go through $(-\frac 12, -1\frac 12)$. So the angle bisector goes through the point $(-2,-3)$ and $(-\frac 12, -1\frac 12)$ so the slope is $\frac{-1\frac 12 - (-3)}{-1\frac 12 -(-2)} = \frac {1\frac 12}{1\frac 12} = 1$. </p> <p>$*$ because... $A = (-2,-3); B= (0,-2); C=(-1,-1);$ and $AB$ = $AC= \sqrt{5}$ so $\triangle BAC$ is isoceles, and the angle bisector of $\angle BAC$ passes through the midpoint of $BC$.</p> <p>==== details in general ====</p> <p>You friend is not quite right. You can average the <em>angles</em> but slopes are not angles and there is not a linear conversion between them. (There is a <em>trigonometric</em> conversion between them. But not a linear conversion.)</p> <p>Bear with me.</p> <p>Suppose the two lines intersect at $(u,v)$ and line $1$ has slope $m$ and line $2$ has slope $n$. </p> <p>Move along the line $1$ from $(u,v)$ a distance of $1$ unit. You will have move $\delta $ in the $x$ direction and $m*\delta $ in the $y$ direction so your total distance is $\sqrt{\delta^2 + m^2\delta^2} = 1$.</p> <p>So $\delta\sqrt{1 + m^2} = 1$ so $\delta = \frac 1{\sqrt{1 + m^2}}$.</p> <p>So the point on line $1$ that is one unit away form $(u, v)$ is the point $(x_1, y_1) = (u + \frac 1{\sqrt{1 + m^2}}, v + \frac m{\sqrt{1 + m^2}})$.</p> <p>Likewise the point on line $2$ that is one unit away from $(u,v)$ will be the point $(x_2, y_2) = (u + \frac 1{\sqrt{1 + n^2}}, v + \frac n{\sqrt{1 + n^2}})$</p> <p>The angle bisector will contain the midpoint of $(x_1, x_2)$ and $(x_2, y_2)$.</p> <p>The midpoint is $(x_m, y_m) = (u + \frac{[\frac 1{\sqrt{1 + m^2}}]+[\frac 1{\sqrt{1 + n^2}}]}2, w + \frac{[\frac m{\sqrt{1 + m^2}}]+[\frac n{\sqrt{1 + n^2}}]}2)$.</p> <p>So.... 
the slope of the angle bisector will be:</p> <p>$\frac {y_m - v}{x_m - u}= \frac{\frac{[\frac m{\sqrt{1 + m^2}}]+[\frac n{\sqrt{1 + n^2}}]}2}{\frac{[\frac 1{\sqrt{1 + m^2}}]+[\frac 1{\sqrt{1 + n^2}}]}2}=$</p> <p>$\frac{[\frac m{\sqrt{1 + m^2}}]+[\frac n{\sqrt{1 + n^2}}]}{[\frac 1{\sqrt{1 + m^2}}]+[\frac 1{\sqrt{1 + n^2}}]}=\frac{m\sqrt{1 + n^2}+n\sqrt{1 + m^2}}{\sqrt{1 + m^2}+\sqrt{1 + n^2}}$</p> <p>$[\frac{m\sqrt{1 + n^2}+n\sqrt{1 + m^2}}{\sqrt{1 + m^2}+\sqrt{1 + n^2}}=\frac{\frac 12\sqrt{1 + 2^2}+2\sqrt{1 + \frac 12^2}}{\sqrt{1 + \frac 12^2}+\sqrt{1 + 2^2}}=\frac{\sqrt{5}/2+ \sqrt{5}}{\sqrt{5}+ \sqrt{5}/2}=1]$</p> <p>Which... I must confess is a formula I never learned and would never expect anyone to memorize. I'd expect if one needs to find the slope of an angle bisector one would calculate it for the specific lines.</p> <p>You might be able to simplify that equation further. </p> <p>=====</p> <p>That equation is sort of an "average"; just not a standard arithmetic average.</p> <p>You can calculate the average of the <em>angles</em> of the lines. But slopes are not angles and do not have a linear conversion.</p> <p>If you know trigonemetry:</p> <p>Slope = $\frac{rise}{run} = \frac{\sin \theta}{\cos \theta} = \tan \theta$ so the angle of a line is $\theta = \arctan m$.</p> <p>So the angle of the angle bisector is $\psi = \frac {\theta + \omega}2 = \frac {\arctan(m) + \arctan(n)}2$</p> <p>And the slope of the angle bisector is $k = \tan(\psi) = \tan(\frac {\arctan(m) + \arctan(n)}2)=\frac{m\sqrt{1 + n^2}+n\sqrt{1 + m^2}}{\sqrt{1 + m^2}+\sqrt{1 + n^2}}$</p> Answer 21: <p>If you have two lines </p> <p>$$ L_1 : a_1x+b_1y+c_2 = 0 $$</p> <p>and </p> <p>$$ L_2 : a_2x + b_2y + c_2 = 0, $$</p> <p>then the equation of their bisectors is given by</p> <p>$$ \frac{a_1x+b_1y+c_1}{\sqrt{a_1^2+b_1^2}} = \pm \frac{a_2x+b_2y+c_2}{\sqrt{a_2^2+b_2^2}}. $$</p> <p>where the $+$ and $-$ make the difference between the two bisectors. 
You just simplify each equation and read off the slope.</p> Answer 22: <p>The suggestion you got is completely wrong. In this particular case, you can just notice that lines whose slopes are reciprocals, are symmetric with respect to lines with slope $+1$ and $-1$.</p> Answer 23: <p>To be on right track, let $Ax + By + C =0, ax + by + c =0 $ be the equations of the given straight lines.The angular bisector is a straight line locus so that length of perpendiculars dropped from it onto the given lines are equal. The polar form of straight line is useful here.</p> <p>$$ \frac{Ax+By+C}{\sqrt{A^2+B^2}} = \pm \frac{ax+by+c}{\sqrt{a^2+b^2}}. $$</p> <p>The $+$ sign is taken when the arms contain the origin for internal bisector, $-$ sign for the external bisector perpendicular to it. You can now find its slope.</p> Answer 24: <p>Take the unitary vectors parallel to the lines: their sum will be parallel to one of the bisecting line, their difference will be parallel to the other one.<br> To determine which is the one bisecting the acute /obtuse angle, just take the dot product of the two vectors:</p> <ul> <li>if that is positive, then their sum will be parallel to the acute bisectrix and their difference to the obtuse one</li> <li>viceversa if the dot product is negative.</li> </ul> <p><em>note:</em> this method is valid also in 3D. 
In 2D you can apply the same method to the normal (instead than parallel) unitary vectors.</p> <p><strong>in your example</strong> (using parallel vectors) </p> <p><a href="https://i.stack.imgur.com/FO99t.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/FO99t.png" alt="enter image description here"></a></p> <p>Rewrite the line equations in the <em>proportional</em> form $$ \left\{ \begin{gathered} y = \frac{1} {2}x - 2\quad \Rightarrow \quad \frac{{x - 0}} {2} = \frac{{y - \left( { - 2} \right)}} {1} \hfill \\ y = 2x + 1\quad \Rightarrow \quad \frac{{x - 0}} {{1/2}} = \frac{{y - 1}} {1}\quad \Rightarrow \quad \frac{{x - 0}} {1} = \frac{{y - 1}} {2} \hfill \\ \end{gathered} \right. $$</p> <p>then the parallel unitary vectors are $$ \mathbf{u} = \frac{1} {{\sqrt 5 }}\left( {2,1} \right)\quad \mathbf{v} = \frac{1} {{\sqrt 5 }}\left( {1,2} \right)\quad 0 &lt; \mathbf{u} \cdot \mathbf{v} = \frac{4} {5} $$ and the angle between them is acute. Their sum and difference is $$ \mathbf{u} + \mathbf{v} = \frac{1} {{\sqrt 5 }}\left( {3,3} \right)\quad \mathbf{u} - \mathbf{v} = \frac{1} {{\sqrt 5 }}\left( {1, - 1} \right) $$ and the equations of the bisecting lines will be: $$ \left\{ \begin{gathered} \frac{{x - x_{\,c} }} {3} = \frac{{y - y_{\,c} }} {3}\quad \Rightarrow \quad x + 2 = y + 3\quad \Rightarrow \quad y = x - 1\;\;acute \hfill \\ \frac{{x - x_{\,c} }} {1} = \frac{{y - y_{\,c} }} {{ - 1}}\quad \Rightarrow \quad x + 2 = - y - 3\quad \quad \Rightarrow \quad y = - x - 5\;\;obtuse \hfill \\ \end{gathered} \right. $$</p> Answer 25: <p>Line 1 is $y = \frac 12 x - 2$. Line 2 is $y = 2x +1$. They intersect at $(-2-3)$.</p> <p>If on line 1 you go over on $x$ $2$ units you will go up on $y$ by $1$ unit. This will put you at $(0, -2)$. The distance traveled is $\sqrt {1^2 + 2^2} = \sqrt{5}$.</p> <p>If on line 2 you go over on $x$ $1$ units you will go up on $y$ by $2$ unit. This will put you at $(-1, -1)$. 
The distance traveled is $\sqrt {2^2 + 1^2} = \sqrt{5}$.</p> <p>The angle bisector will go through the midpoint of $(0,-2)$ and $(-1,-1)$$*$. So the angle bisector will go through $(-\frac 12, -1\frac 12)$. So the angle bisector goes through the point $(-2,-3)$ and $(-\frac 12, -1\frac 12)$ so the slope is $\frac{-1\frac 12 - (-3)}{-1\frac 12 -(-2)} = \frac {1\frac 12}{1\frac 12} = 1$. </p> <p>$*$ because... $A = (-2,-3); B= (0,-2); C=(-1,-1);$ and $AB$ = $AC= \sqrt{5}$ so $\triangle BAC$ is isoceles, and the angle bisector of $\angle BAC$ passes through the midpoint of $BC$.</p> <p>==== details in general ====</p> <p>You friend is not quite right. You can average the <em>angles</em> but slopes are not angles and there is not a linear conversion between them. (There is a <em>trigonometric</em> conversion between them. But not a linear conversion.)</p> <p>Bear with me.</p> <p>Suppose the two lines intersect at $(u,v)$ and line $1$ has slope $m$ and line $2$ has slope $n$. </p> <p>Move along the line $1$ from $(u,v)$ a distance of $1$ unit. You will have move $\delta $ in the $x$ direction and $m*\delta $ in the $y$ direction so your total distance is $\sqrt{\delta^2 + m^2\delta^2} = 1$.</p> <p>So $\delta\sqrt{1 + m^2} = 1$ so $\delta = \frac 1{\sqrt{1 + m^2}}$.</p> <p>So the point on line $1$ that is one unit away form $(u, v)$ is the point $(x_1, y_1) = (u + \frac 1{\sqrt{1 + m^2}}, v + \frac m{\sqrt{1 + m^2}})$.</p> <p>Likewise the point on line $2$ that is one unit away from $(u,v)$ will be the point $(x_2, y_2) = (u + \frac 1{\sqrt{1 + n^2}}, v + \frac n{\sqrt{1 + n^2}})$</p> <p>The angle bisector will contain the midpoint of $(x_1, x_2)$ and $(x_2, y_2)$.</p> <p>The midpoint is $(x_m, y_m) = (u + \frac{[\frac 1{\sqrt{1 + m^2}}]+[\frac 1{\sqrt{1 + n^2}}]}2, w + \frac{[\frac m{\sqrt{1 + m^2}}]+[\frac n{\sqrt{1 + n^2}}]}2)$.</p> <p>So.... 
the slope of the angle bisector will be:</p> <p>$\frac {y_m - v}{x_m - u}= \frac{\frac{[\frac m{\sqrt{1 + m^2}}]+[\frac n{\sqrt{1 + n^2}}]}2}{\frac{[\frac 1{\sqrt{1 + m^2}}]+[\frac 1{\sqrt{1 + n^2}}]}2}=$</p> <p>$\frac{[\frac m{\sqrt{1 + m^2}}]+[\frac n{\sqrt{1 + n^2}}]}{[\frac 1{\sqrt{1 + m^2}}]+[\frac 1{\sqrt{1 + n^2}}]}=\frac{m\sqrt{1 + n^2}+n\sqrt{1 + m^2}}{\sqrt{1 + m^2}+\sqrt{1 + n^2}}$</p> <p>$[\frac{m\sqrt{1 + n^2}+n\sqrt{1 + m^2}}{\sqrt{1 + m^2}+\sqrt{1 + n^2}}=\frac{\frac 12\sqrt{1 + 2^2}+2\sqrt{1 + \frac 12^2}}{\sqrt{1 + \frac 12^2}+\sqrt{1 + 2^2}}=\frac{\sqrt{5}/2+ \sqrt{5}}{\sqrt{5}+ \sqrt{5}/2}=1]$</p> <p>Which... I must confess is a formula I never learned and would never expect anyone to memorize. I'd expect if one needs to find the slope of an angle bisector one would calculate it for the specific lines.</p> <p>You might be able to simplify that equation further. </p> <p>=====</p> <p>That equation is sort of an "average"; just not a standard arithmetic average.</p> <p>You can calculate the average of the <em>angles</em> of the lines. But slopes are not angles and do not have a linear conversion.</p> <p>If you know trigonometry:</p> <p>Slope = $\frac{rise}{run} = \frac{\sin \theta}{\cos \theta} = \tan \theta$, so the angle of a line is $\theta = \arctan m$.</p> <p>So the angle of the angle bisector is $\psi = \frac {\theta + \omega}2 = \frac {\arctan(m) + \arctan(n)}2$</p> <p>And the slope of the angle bisector is $k = \tan(\psi) = \tan(\frac {\arctan(m) + \arctan(n)}2)=\frac{m\sqrt{1 + n^2}+n\sqrt{1 + m^2}}{\sqrt{1 + m^2}+\sqrt{1 + n^2}}$</p> Answer 26: <p>If you have two lines </p> <p>$$ L_1 : a_1x+b_1y+c_1 = 0 $$</p> <p>and </p> <p>$$ L_2 : a_2x + b_2y + c_2 = 0, $$</p> <p>then the equation of their bisectors is given by</p> <p>$$ \frac{a_1x+b_1y+c_1}{\sqrt{a_1^2+b_1^2}} = \pm \frac{a_2x+b_2y+c_2}{\sqrt{a_2^2+b_2^2}}. $$</p> <p>where the $+$ and $-$ make the difference between the two bisectors.
You just simplify each equation and read off the slope.</p> Answer 27: <p>The suggestion you got is completely wrong. In this particular case, you can just notice that lines whose slopes are reciprocals, are symmetric with respect to lines with slope $+1$ and $-1$.</p> Answer 28: <p>To be on right track, let $Ax + By + C =0, ax + by + c =0 $ be the equations of the given straight lines.The angular bisector is a straight line locus so that length of perpendiculars dropped from it onto the given lines are equal. The polar form of straight line is useful here.</p> <p>$$ \frac{Ax+By+C}{\sqrt{A^2+B^2}} = \pm \frac{ax+by+c}{\sqrt{a^2+b^2}}. $$</p> <p>The $+$ sign is taken when the arms contain the origin for internal bisector, $-$ sign for the external bisector perpendicular to it. You can now find its slope.</p> Answer 29: <p>Take the unitary vectors parallel to the lines: their sum will be parallel to one of the bisecting line, their difference will be parallel to the other one.<br> To determine which is the one bisecting the acute /obtuse angle, just take the dot product of the two vectors:</p> <ul> <li>if that is positive, then their sum will be parallel to the acute bisectrix and their difference to the obtuse one</li> <li>viceversa if the dot product is negative.</li> </ul> <p><em>note:</em> this method is valid also in 3D. 
In 2D you can apply the same method to the normal (instead than parallel) unitary vectors.</p> <p><strong>in your example</strong> (using parallel vectors) </p> <p><a href="https://i.stack.imgur.com/FO99t.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/FO99t.png" alt="enter image description here"></a></p> <p>Rewrite the line equations in the <em>proportional</em> form $$ \left\{ \begin{gathered} y = \frac{1} {2}x - 2\quad \Rightarrow \quad \frac{{x - 0}} {2} = \frac{{y - \left( { - 2} \right)}} {1} \hfill \\ y = 2x + 1\quad \Rightarrow \quad \frac{{x - 0}} {{1/2}} = \frac{{y - 1}} {1}\quad \Rightarrow \quad \frac{{x - 0}} {1} = \frac{{y - 1}} {2} \hfill \\ \end{gathered} \right. $$</p> <p>then the parallel unitary vectors are $$ \mathbf{u} = \frac{1} {{\sqrt 5 }}\left( {2,1} \right)\quad \mathbf{v} = \frac{1} {{\sqrt 5 }}\left( {1,2} \right)\quad 0 &lt; \mathbf{u} \cdot \mathbf{v} = \frac{4} {5} $$ and the angle between them is acute. Their sum and difference is $$ \mathbf{u} + \mathbf{v} = \frac{1} {{\sqrt 5 }}\left( {3,3} \right)\quad \mathbf{u} - \mathbf{v} = \frac{1} {{\sqrt 5 }}\left( {1, - 1} \right) $$ and the equations of the bisecting lines will be: $$ \left\{ \begin{gathered} \frac{{x - x_{\,c} }} {3} = \frac{{y - y_{\,c} }} {3}\quad \Rightarrow \quad x + 2 = y + 3\quad \Rightarrow \quad y = x - 1\;\;acute \hfill \\ \frac{{x - x_{\,c} }} {1} = \frac{{y - y_{\,c} }} {{ - 1}}\quad \Rightarrow \quad x + 2 = - y - 3\quad \quad \Rightarrow \quad y = - x - 5\;\;obtuse \hfill \\ \end{gathered} \right. $$</p> Answer 30: <p>Line 1 is $y = \frac 12 x - 2$. Line 2 is $y = 2x +1$. They intersect at $(-2-3)$.</p> <p>If on line 1 you go over on $x$ $2$ units you will go up on $y$ by $1$ unit. This will put you at $(0, -2)$. The distance traveled is $\sqrt {1^2 + 2^2} = \sqrt{5}$.</p> <p>If on line 2 you go over on $x$ $1$ units you will go up on $y$ by $2$ unit. This will put you at $(-1, -1)$. 
The distance traveled is $\sqrt {2^2 + 1^2} = \sqrt{5}$.</p> <p>The angle bisector will go through the midpoint of $(0,-2)$ and $(-1,-1)$$*$. So the angle bisector will go through $(-\frac 12, -1\frac 12)$. So the angle bisector goes through the point $(-2,-3)$ and $(-\frac 12, -1\frac 12)$ so the slope is $\frac{-1\frac 12 - (-3)}{-1\frac 12 -(-2)} = \frac {1\frac 12}{1\frac 12} = 1$. </p> <p>$*$ because... $A = (-2,-3); B= (0,-2); C=(-1,-1);$ and $AB$ = $AC= \sqrt{5}$ so $\triangle BAC$ is isoceles, and the angle bisector of $\angle BAC$ passes through the midpoint of $BC$.</p> <p>==== details in general ====</p> <p>You friend is not quite right. You can average the <em>angles</em> but slopes are not angles and there is not a linear conversion between them. (There is a <em>trigonometric</em> conversion between them. But not a linear conversion.)</p> <p>Bear with me.</p> <p>Suppose the two lines intersect at $(u,v)$ and line $1$ has slope $m$ and line $2$ has slope $n$. </p> <p>Move along the line $1$ from $(u,v)$ a distance of $1$ unit. You will have move $\delta $ in the $x$ direction and $m*\delta $ in the $y$ direction so your total distance is $\sqrt{\delta^2 + m^2\delta^2} = 1$.</p> <p>So $\delta\sqrt{1 + m^2} = 1$ so $\delta = \frac 1{\sqrt{1 + m^2}}$.</p> <p>So the point on line $1$ that is one unit away form $(u, v)$ is the point $(x_1, y_1) = (u + \frac 1{\sqrt{1 + m^2}}, v + \frac m{\sqrt{1 + m^2}})$.</p> <p>Likewise the point on line $2$ that is one unit away from $(u,v)$ will be the point $(x_2, y_2) = (u + \frac 1{\sqrt{1 + n^2}}, v + \frac n{\sqrt{1 + n^2}})$</p> <p>The angle bisector will contain the midpoint of $(x_1, x_2)$ and $(x_2, y_2)$.</p> <p>The midpoint is $(x_m, y_m) = (u + \frac{[\frac 1{\sqrt{1 + m^2}}]+[\frac 1{\sqrt{1 + n^2}}]}2, w + \frac{[\frac m{\sqrt{1 + m^2}}]+[\frac n{\sqrt{1 + n^2}}]}2)$.</p> <p>So.... 
the slope of the angle bisector will be:</p> <p>$\frac {y_m - v}{x_m - u}= \frac{\frac{[\frac m{\sqrt{1 + m^2}}]+[\frac n{\sqrt{1 + n^2}}]}2}{\frac{[\frac 1{\sqrt{1 + m^2}}]+[\frac 1{\sqrt{1 + n^2}}]}2}=$</p> <p>$\frac{[\frac m{\sqrt{1 + m^2}}]+[\frac n{\sqrt{1 + n^2}}]}{[\frac 1{\sqrt{1 + m^2}}]+[\frac 1{\sqrt{1 + n^2}}]}=\frac{m\sqrt{1 + n^2}+n\sqrt{1 + m^2}}{\sqrt{1 + m^2}+\sqrt{1 + n^2}}$</p> <p>$[\frac{m\sqrt{1 + n^2}+n\sqrt{1 + m^2}}{\sqrt{1 + m^2}+\sqrt{1 + n^2}}=\frac{\frac 12\sqrt{1 + 2^2}+2\sqrt{1 + \frac 12^2}}{\sqrt{1 + \frac 12^2}+\sqrt{1 + 2^2}}=\frac{\sqrt{5}/2+ \sqrt{5}}{\sqrt{5}+ \sqrt{5}/2}=1]$</p> <p>Which... I must confess is a formula I never learned and would never expect anyone to memorize. I'd expect if one needs to find the slope of an angle bisector one would calculate it for the specific lines.</p> <p>You might be able to simplify that equation further. </p> <p>=====</p> <p>That equation is sort of an "average"; just not a standard arithmetic average.</p> <p>You can calculate the average of the <em>angles</em> of the lines. But slopes are not angles and do not have a linear conversion.</p> <p>If you know trigonemetry:</p> <p>Slope = $\frac{rise}{run} = \frac{\sin \theta}{\cos \theta} = \tan \theta$ so the angle of a line is $\theta = \arctan m$.</p> <p>So the angle of the angle bisector is $\psi = \frac {\theta + \omega}2 = \frac {\arctan(m) + \arctan(n)}2$</p> <p>And the slope of the angle bisector is $k = \tan(\psi) = \tan(\frac {\arctan(m) + \arctan(n)}2)=\frac{m\sqrt{1 + n^2}+n\sqrt{1 + m^2}}{\sqrt{1 + m^2}+\sqrt{1 + n^2}}$</p> Answer 31: <p>If you have two lines </p> <p>$$ L_1 : a_1x+b_1y+c_2 = 0 $$</p> <p>and </p> <p>$$ L_2 : a_2x + b_2y + c_2 = 0, $$</p> <p>then the equation of their bisectors is given by</p> <p>$$ \frac{a_1x+b_1y+c_1}{\sqrt{a_1^2+b_1^2}} = \pm \frac{a_2x+b_2y+c_2}{\sqrt{a_2^2+b_2^2}}. $$</p> <p>where the $+$ and $-$ make the difference between the two bisectors. 
You just simplify each equation and read off the slope.</p> Answer 32: <p>The suggestion you got is completely wrong. In this particular case, you can just notice that lines whose slopes are reciprocals, are symmetric with respect to lines with slope $+1$ and $-1$.</p> Answer 33: <p>To be on right track, let $Ax + By + C =0, ax + by + c =0 $ be the equations of the given straight lines.The angular bisector is a straight line locus so that length of perpendiculars dropped from it onto the given lines are equal. The polar form of straight line is useful here.</p> <p>$$ \frac{Ax+By+C}{\sqrt{A^2+B^2}} = \pm \frac{ax+by+c}{\sqrt{a^2+b^2}}. $$</p> <p>The $+$ sign is taken when the arms contain the origin for internal bisector, $-$ sign for the external bisector perpendicular to it. You can now find its slope.</p> Answer 34: <p>Take the unitary vectors parallel to the lines: their sum will be parallel to one of the bisecting line, their difference will be parallel to the other one.<br> To determine which is the one bisecting the acute /obtuse angle, just take the dot product of the two vectors:</p> <ul> <li>if that is positive, then their sum will be parallel to the acute bisectrix and their difference to the obtuse one</li> <li>viceversa if the dot product is negative.</li> </ul> <p><em>note:</em> this method is valid also in 3D. 
In 2D you can apply the same method to the normal (instead than parallel) unitary vectors.</p> <p><strong>in your example</strong> (using parallel vectors) </p> <p><a href="https://i.stack.imgur.com/FO99t.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/FO99t.png" alt="enter image description here"></a></p> <p>Rewrite the line equations in the <em>proportional</em> form $$ \left\{ \begin{gathered} y = \frac{1} {2}x - 2\quad \Rightarrow \quad \frac{{x - 0}} {2} = \frac{{y - \left( { - 2} \right)}} {1} \hfill \\ y = 2x + 1\quad \Rightarrow \quad \frac{{x - 0}} {{1/2}} = \frac{{y - 1}} {1}\quad \Rightarrow \quad \frac{{x - 0}} {1} = \frac{{y - 1}} {2} \hfill \\ \end{gathered} \right. $$</p> <p>then the parallel unitary vectors are $$ \mathbf{u} = \frac{1} {{\sqrt 5 }}\left( {2,1} \right)\quad \mathbf{v} = \frac{1} {{\sqrt 5 }}\left( {1,2} \right)\quad 0 &lt; \mathbf{u} \cdot \mathbf{v} = \frac{4} {5} $$ and the angle between them is acute. Their sum and difference is $$ \mathbf{u} + \mathbf{v} = \frac{1} {{\sqrt 5 }}\left( {3,3} \right)\quad \mathbf{u} - \mathbf{v} = \frac{1} {{\sqrt 5 }}\left( {1, - 1} \right) $$ and the equations of the bisecting lines will be: $$ \left\{ \begin{gathered} \frac{{x - x_{\,c} }} {3} = \frac{{y - y_{\,c} }} {3}\quad \Rightarrow \quad x + 2 = y + 3\quad \Rightarrow \quad y = x - 1\;\;acute \hfill \\ \frac{{x - x_{\,c} }} {1} = \frac{{y - y_{\,c} }} {{ - 1}}\quad \Rightarrow \quad x + 2 = - y - 3\quad \quad \Rightarrow \quad y = - x - 5\;\;obtuse \hfill \\ \end{gathered} \right. $$</p> Answer 35: <p>Line 1 is $y = \frac 12 x - 2$. Line 2 is $y = 2x +1$. They intersect at $(-2-3)$.</p> <p>If on line 1 you go over on $x$ $2$ units you will go up on $y$ by $1$ unit. This will put you at $(0, -2)$. The distance traveled is $\sqrt {1^2 + 2^2} = \sqrt{5}$.</p> <p>If on line 2 you go over on $x$ $1$ units you will go up on $y$ by $2$ unit. This will put you at $(-1, -1)$. 
The distance traveled is $\sqrt {2^2 + 1^2} = \sqrt{5}$.</p> <p>The angle bisector will go through the midpoint of $(0,-2)$ and $(-1,-1)$$*$. So the angle bisector will go through $(-\frac 12, -1\frac 12)$. So the angle bisector goes through the point $(-2,-3)$ and $(-\frac 12, -1\frac 12)$ so the slope is $\frac{-1\frac 12 - (-3)}{-1\frac 12 -(-2)} = \frac {1\frac 12}{1\frac 12} = 1$. </p> <p>$*$ because... $A = (-2,-3); B= (0,-2); C=(-1,-1);$ and $AB$ = $AC= \sqrt{5}$ so $\triangle BAC$ is isoceles, and the angle bisector of $\angle BAC$ passes through the midpoint of $BC$.</p> <p>==== details in general ====</p> <p>You friend is not quite right. You can average the <em>angles</em> but slopes are not angles and there is not a linear conversion between them. (There is a <em>trigonometric</em> conversion between them. But not a linear conversion.)</p> <p>Bear with me.</p> <p>Suppose the two lines intersect at $(u,v)$ and line $1$ has slope $m$ and line $2$ has slope $n$. </p> <p>Move along the line $1$ from $(u,v)$ a distance of $1$ unit. You will have move $\delta $ in the $x$ direction and $m*\delta $ in the $y$ direction so your total distance is $\sqrt{\delta^2 + m^2\delta^2} = 1$.</p> <p>So $\delta\sqrt{1 + m^2} = 1$ so $\delta = \frac 1{\sqrt{1 + m^2}}$.</p> <p>So the point on line $1$ that is one unit away form $(u, v)$ is the point $(x_1, y_1) = (u + \frac 1{\sqrt{1 + m^2}}, v + \frac m{\sqrt{1 + m^2}})$.</p> <p>Likewise the point on line $2$ that is one unit away from $(u,v)$ will be the point $(x_2, y_2) = (u + \frac 1{\sqrt{1 + n^2}}, v + \frac n{\sqrt{1 + n^2}})$</p> <p>The angle bisector will contain the midpoint of $(x_1, x_2)$ and $(x_2, y_2)$.</p> <p>The midpoint is $(x_m, y_m) = (u + \frac{[\frac 1{\sqrt{1 + m^2}}]+[\frac 1{\sqrt{1 + n^2}}]}2, w + \frac{[\frac m{\sqrt{1 + m^2}}]+[\frac n{\sqrt{1 + n^2}}]}2)$.</p> <p>So.... 
the slope of the angle bisector will be:</p> <p>$\frac {y_m - v}{x_m - u}= \frac{\frac{[\frac m{\sqrt{1 + m^2}}]+[\frac n{\sqrt{1 + n^2}}]}2}{\frac{[\frac 1{\sqrt{1 + m^2}}]+[\frac 1{\sqrt{1 + n^2}}]}2}=$</p> <p>$\frac{[\frac m{\sqrt{1 + m^2}}]+[\frac n{\sqrt{1 + n^2}}]}{[\frac 1{\sqrt{1 + m^2}}]+[\frac 1{\sqrt{1 + n^2}}]}=\frac{m\sqrt{1 + n^2}+n\sqrt{1 + m^2}}{\sqrt{1 + m^2}+\sqrt{1 + n^2}}$</p> <p>$[\frac{m\sqrt{1 + n^2}+n\sqrt{1 + m^2}}{\sqrt{1 + m^2}+\sqrt{1 + n^2}}=\frac{\frac 12\sqrt{1 + 2^2}+2\sqrt{1 + \frac 12^2}}{\sqrt{1 + \frac 12^2}+\sqrt{1 + 2^2}}=\frac{\sqrt{5}/2+ \sqrt{5}}{\sqrt{5}+ \sqrt{5}/2}=1]$</p> <p>Which... I must confess is a formula I never learned and would never expect anyone to memorize. I'd expect if one needs to find the slope of an angle bisector one would calculate it for the specific lines.</p> <p>You might be able to simplify that equation further. </p> <p>=====</p> <p>That equation is sort of an "average"; just not a standard arithmetic average.</p> <p>You can calculate the average of the <em>angles</em> of the lines. But slopes are not angles and do not have a linear conversion.</p> <p>If you know trigonemetry:</p> <p>Slope = $\frac{rise}{run} = \frac{\sin \theta}{\cos \theta} = \tan \theta$ so the angle of a line is $\theta = \arctan m$.</p> <p>So the angle of the angle bisector is $\psi = \frac {\theta + \omega}2 = \frac {\arctan(m) + \arctan(n)}2$</p> <p>And the slope of the angle bisector is $k = \tan(\psi) = \tan(\frac {\arctan(m) + \arctan(n)}2)=\frac{m\sqrt{1 + n^2}+n\sqrt{1 + m^2}}{\sqrt{1 + m^2}+\sqrt{1 + n^2}}$</p> Answer 36: <p>If you have two lines </p> <p>$$ L_1 : a_1x+b_1y+c_2 = 0 $$</p> <p>and </p> <p>$$ L_2 : a_2x + b_2y + c_2 = 0, $$</p> <p>then the equation of their bisectors is given by</p> <p>$$ \frac{a_1x+b_1y+c_1}{\sqrt{a_1^2+b_1^2}} = \pm \frac{a_2x+b_2y+c_2}{\sqrt{a_2^2+b_2^2}}. $$</p> <p>where the $+$ and $-$ make the difference between the two bisectors. 
You just simplify each equation and read off the slope.</p> Answer 37: <p>The suggestion you got is completely wrong. In this particular case, you can just notice that lines whose slopes are reciprocals, are symmetric with respect to lines with slope $+1$ and $-1$.</p> Answer 38: <p>To be on right track, let $Ax + By + C =0, ax + by + c =0 $ be the equations of the given straight lines.The angular bisector is a straight line locus so that length of perpendiculars dropped from it onto the given lines are equal. The polar form of straight line is useful here.</p> <p>$$ \frac{Ax+By+C}{\sqrt{A^2+B^2}} = \pm \frac{ax+by+c}{\sqrt{a^2+b^2}}. $$</p> <p>The $+$ sign is taken when the arms contain the origin for internal bisector, $-$ sign for the external bisector perpendicular to it. You can now find its slope.</p> Answer 39: <p>Take the unitary vectors parallel to the lines: their sum will be parallel to one of the bisecting line, their difference will be parallel to the other one.<br> To determine which is the one bisecting the acute /obtuse angle, just take the dot product of the two vectors:</p> <ul> <li>if that is positive, then their sum will be parallel to the acute bisectrix and their difference to the obtuse one</li> <li>viceversa if the dot product is negative.</li> </ul> <p><em>note:</em> this method is valid also in 3D. 
In 2D you can apply the same method to the normal (instead than parallel) unitary vectors.</p> <p><strong>in your example</strong> (using parallel vectors) </p> <p><a href="https://i.stack.imgur.com/FO99t.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/FO99t.png" alt="enter image description here"></a></p> <p>Rewrite the line equations in the <em>proportional</em> form $$ \left\{ \begin{gathered} y = \frac{1} {2}x - 2\quad \Rightarrow \quad \frac{{x - 0}} {2} = \frac{{y - \left( { - 2} \right)}} {1} \hfill \\ y = 2x + 1\quad \Rightarrow \quad \frac{{x - 0}} {{1/2}} = \frac{{y - 1}} {1}\quad \Rightarrow \quad \frac{{x - 0}} {1} = \frac{{y - 1}} {2} \hfill \\ \end{gathered} \right. $$</p> <p>then the parallel unitary vectors are $$ \mathbf{u} = \frac{1} {{\sqrt 5 }}\left( {2,1} \right)\quad \mathbf{v} = \frac{1} {{\sqrt 5 }}\left( {1,2} \right)\quad 0 &lt; \mathbf{u} \cdot \mathbf{v} = \frac{4} {5} $$ and the angle between them is acute. Their sum and difference is $$ \mathbf{u} + \mathbf{v} = \frac{1} {{\sqrt 5 }}\left( {3,3} \right)\quad \mathbf{u} - \mathbf{v} = \frac{1} {{\sqrt 5 }}\left( {1, - 1} \right) $$ and the equations of the bisecting lines will be: $$ \left\{ \begin{gathered} \frac{{x - x_{\,c} }} {3} = \frac{{y - y_{\,c} }} {3}\quad \Rightarrow \quad x + 2 = y + 3\quad \Rightarrow \quad y = x - 1\;\;acute \hfill \\ \frac{{x - x_{\,c} }} {1} = \frac{{y - y_{\,c} }} {{ - 1}}\quad \Rightarrow \quad x + 2 = - y - 3\quad \quad \Rightarrow \quad y = - x - 5\;\;obtuse \hfill \\ \end{gathered} \right. $$</p> Answer 40: <p>Line 1 is $y = \frac 12 x - 2$. Line 2 is $y = 2x +1$. They intersect at $(-2-3)$.</p> <p>If on line 1 you go over on $x$ $2$ units you will go up on $y$ by $1$ unit. This will put you at $(0, -2)$. The distance traveled is $\sqrt {1^2 + 2^2} = \sqrt{5}$.</p> <p>If on line 2 you go over on $x$ $1$ units you will go up on $y$ by $2$ unit. This will put you at $(-1, -1)$. 
The distance traveled is $\sqrt {2^2 + 1^2} = \sqrt{5}$.</p> <p>The angle bisector will go through the midpoint of $(0,-2)$ and $(-1,-1)$$*$. So the angle bisector will go through $(-\frac 12, -1\frac 12)$. So the angle bisector goes through the point $(-2,-3)$ and $(-\frac 12, -1\frac 12)$ so the slope is $\frac{-1\frac 12 - (-3)}{-1\frac 12 -(-2)} = \frac {1\frac 12}{1\frac 12} = 1$. </p> <p>$*$ because... $A = (-2,-3); B= (0,-2); C=(-1,-1);$ and $AB$ = $AC= \sqrt{5}$ so $\triangle BAC$ is isoceles, and the angle bisector of $\angle BAC$ passes through the midpoint of $BC$.</p> <p>==== details in general ====</p> <p>You friend is not quite right. You can average the <em>angles</em> but slopes are not angles and there is not a linear conversion between them. (There is a <em>trigonometric</em> conversion between them. But not a linear conversion.)</p> <p>Bear with me.</p> <p>Suppose the two lines intersect at $(u,v)$ and line $1$ has slope $m$ and line $2$ has slope $n$. </p> <p>Move along the line $1$ from $(u,v)$ a distance of $1$ unit. You will have move $\delta $ in the $x$ direction and $m*\delta $ in the $y$ direction so your total distance is $\sqrt{\delta^2 + m^2\delta^2} = 1$.</p> <p>So $\delta\sqrt{1 + m^2} = 1$ so $\delta = \frac 1{\sqrt{1 + m^2}}$.</p> <p>So the point on line $1$ that is one unit away form $(u, v)$ is the point $(x_1, y_1) = (u + \frac 1{\sqrt{1 + m^2}}, v + \frac m{\sqrt{1 + m^2}})$.</p> <p>Likewise the point on line $2$ that is one unit away from $(u,v)$ will be the point $(x_2, y_2) = (u + \frac 1{\sqrt{1 + n^2}}, v + \frac n{\sqrt{1 + n^2}})$</p> <p>The angle bisector will contain the midpoint of $(x_1, x_2)$ and $(x_2, y_2)$.</p> <p>The midpoint is $(x_m, y_m) = (u + \frac{[\frac 1{\sqrt{1 + m^2}}]+[\frac 1{\sqrt{1 + n^2}}]}2, w + \frac{[\frac m{\sqrt{1 + m^2}}]+[\frac n{\sqrt{1 + n^2}}]}2)$.</p> <p>So.... 
the slope of the angle bisector will be:</p> <p>$\frac {y_m - v}{x_m - u}= \frac{\frac{[\frac m{\sqrt{1 + m^2}}]+[\frac n{\sqrt{1 + n^2}}]}2}{\frac{[\frac 1{\sqrt{1 + m^2}}]+[\frac 1{\sqrt{1 + n^2}}]}2}=$</p> <p>$\frac{[\frac m{\sqrt{1 + m^2}}]+[\frac n{\sqrt{1 + n^2}}]}{[\frac 1{\sqrt{1 + m^2}}]+[\frac 1{\sqrt{1 + n^2}}]}=\frac{m\sqrt{1 + n^2}+n\sqrt{1 + m^2}}{\sqrt{1 + m^2}+\sqrt{1 + n^2}}$</p> <p>$[\frac{m\sqrt{1 + n^2}+n\sqrt{1 + m^2}}{\sqrt{1 + m^2}+\sqrt{1 + n^2}}=\frac{\frac 12\sqrt{1 + 2^2}+2\sqrt{1 + \frac 12^2}}{\sqrt{1 + \frac 12^2}+\sqrt{1 + 2^2}}=\frac{\sqrt{5}/2+ \sqrt{5}}{\sqrt{5}+ \sqrt{5}/2}=1]$</p> <p>Which... I must confess is a formula I never learned and would never expect anyone to memorize. I'd expect if one needs to find the slope of an angle bisector one would calculate it for the specific lines.</p> <p>You might be able to simplify that equation further. </p> <p>=====</p> <p>That equation is sort of an "average"; just not a standard arithmetic average.</p> <p>You can calculate the average of the <em>angles</em> of the lines. But slopes are not angles and do not have a linear conversion.</p> <p>If you know trigonemetry:</p> <p>Slope = $\frac{rise}{run} = \frac{\sin \theta}{\cos \theta} = \tan \theta$ so the angle of a line is $\theta = \arctan m$.</p> <p>So the angle of the angle bisector is $\psi = \frac {\theta + \omega}2 = \frac {\arctan(m) + \arctan(n)}2$</p> <p>And the slope of the angle bisector is $k = \tan(\psi) = \tan(\frac {\arctan(m) + \arctan(n)}2)=\frac{m\sqrt{1 + n^2}+n\sqrt{1 + m^2}}{\sqrt{1 + m^2}+\sqrt{1 + n^2}}$</p> Answer 41: <p>If you have two lines </p> <p>$$ L_1 : a_1x+b_1y+c_2 = 0 $$</p> <p>and </p> <p>$$ L_2 : a_2x + b_2y + c_2 = 0, $$</p> <p>then the equation of their bisectors is given by</p> <p>$$ \frac{a_1x+b_1y+c_1}{\sqrt{a_1^2+b_1^2}} = \pm \frac{a_2x+b_2y+c_2}{\sqrt{a_2^2+b_2^2}}. $$</p> <p>where the $+$ and $-$ make the difference between the two bisectors. 
You just simplify each equation and read off the slope.</p> Answer 42: <p>The suggestion you got is completely wrong. In this particular case, you can just notice that lines whose slopes are reciprocals, are symmetric with respect to lines with slope $+1$ and $-1$.</p> Answer 43: <p>To be on right track, let $Ax + By + C =0, ax + by + c =0 $ be the equations of the given straight lines.The angular bisector is a straight line locus so that length of perpendiculars dropped from it onto the given lines are equal. The polar form of straight line is useful here.</p> <p>$$ \frac{Ax+By+C}{\sqrt{A^2+B^2}} = \pm \frac{ax+by+c}{\sqrt{a^2+b^2}}. $$</p> <p>The $+$ sign is taken when the arms contain the origin for internal bisector, $-$ sign for the external bisector perpendicular to it. You can now find its slope.</p> Answer 44: <p>Take the unitary vectors parallel to the lines: their sum will be parallel to one of the bisecting line, their difference will be parallel to the other one.<br> To determine which is the one bisecting the acute /obtuse angle, just take the dot product of the two vectors:</p> <ul> <li>if that is positive, then their sum will be parallel to the acute bisectrix and their difference to the obtuse one</li> <li>viceversa if the dot product is negative.</li> </ul> <p><em>note:</em> this method is valid also in 3D. 
In 2D you can apply the same method to the normal (instead than parallel) unitary vectors.</p> <p><strong>in your example</strong> (using parallel vectors) </p> <p><a href="https://i.stack.imgur.com/FO99t.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/FO99t.png" alt="enter image description here"></a></p> <p>Rewrite the line equations in the <em>proportional</em> form $$ \left\{ \begin{gathered} y = \frac{1} {2}x - 2\quad \Rightarrow \quad \frac{{x - 0}} {2} = \frac{{y - \left( { - 2} \right)}} {1} \hfill \\ y = 2x + 1\quad \Rightarrow \quad \frac{{x - 0}} {{1/2}} = \frac{{y - 1}} {1}\quad \Rightarrow \quad \frac{{x - 0}} {1} = \frac{{y - 1}} {2} \hfill \\ \end{gathered} \right. $$</p> <p>then the parallel unitary vectors are $$ \mathbf{u} = \frac{1} {{\sqrt 5 }}\left( {2,1} \right)\quad \mathbf{v} = \frac{1} {{\sqrt 5 }}\left( {1,2} \right)\quad 0 &lt; \mathbf{u} \cdot \mathbf{v} = \frac{4} {5} $$ and the angle between them is acute. Their sum and difference is $$ \mathbf{u} + \mathbf{v} = \frac{1} {{\sqrt 5 }}\left( {3,3} \right)\quad \mathbf{u} - \mathbf{v} = \frac{1} {{\sqrt 5 }}\left( {1, - 1} \right) $$ and the equations of the bisecting lines will be: $$ \left\{ \begin{gathered} \frac{{x - x_{\,c} }} {3} = \frac{{y - y_{\,c} }} {3}\quad \Rightarrow \quad x + 2 = y + 3\quad \Rightarrow \quad y = x - 1\;\;acute \hfill \\ \frac{{x - x_{\,c} }} {1} = \frac{{y - y_{\,c} }} {{ - 1}}\quad \Rightarrow \quad x + 2 = - y - 3\quad \quad \Rightarrow \quad y = - x - 5\;\;obtuse \hfill \\ \end{gathered} \right. $$</p> Answer 45: <p>Line 1 is $y = \frac 12 x - 2$. Line 2 is $y = 2x +1$. They intersect at $(-2-3)$.</p> <p>If on line 1 you go over on $x$ $2$ units you will go up on $y$ by $1$ unit. This will put you at $(0, -2)$. The distance traveled is $\sqrt {1^2 + 2^2} = \sqrt{5}$.</p> <p>If on line 2 you go over on $x$ $1$ units you will go up on $y$ by $2$ unit. This will put you at $(-1, -1)$. 
The distance traveled is $\sqrt {2^2 + 1^2} = \sqrt{5}$.</p> <p>The angle bisector will go through the midpoint of $(0,-2)$ and $(-1,-1)$$*$. So the angle bisector will go through $(-\frac 12, -1\frac 12)$. So the angle bisector goes through the point $(-2,-3)$ and $(-\frac 12, -1\frac 12)$ so the slope is $\frac{-1\frac 12 - (-3)}{-1\frac 12 -(-2)} = \frac {1\frac 12}{1\frac 12} = 1$. </p> <p>$*$ because... $A = (-2,-3); B= (0,-2); C=(-1,-1);$ and $AB$ = $AC= \sqrt{5}$ so $\triangle BAC$ is isoceles, and the angle bisector of $\angle BAC$ passes through the midpoint of $BC$.</p> <p>==== details in general ====</p> <p>You friend is not quite right. You can average the <em>angles</em> but slopes are not angles and there is not a linear conversion between them. (There is a <em>trigonometric</em> conversion between them. But not a linear conversion.)</p> <p>Bear with me.</p> <p>Suppose the two lines intersect at $(u,v)$ and line $1$ has slope $m$ and line $2$ has slope $n$. </p> <p>Move along the line $1$ from $(u,v)$ a distance of $1$ unit. You will have move $\delta $ in the $x$ direction and $m*\delta $ in the $y$ direction so your total distance is $\sqrt{\delta^2 + m^2\delta^2} = 1$.</p> <p>So $\delta\sqrt{1 + m^2} = 1$ so $\delta = \frac 1{\sqrt{1 + m^2}}$.</p> <p>So the point on line $1$ that is one unit away form $(u, v)$ is the point $(x_1, y_1) = (u + \frac 1{\sqrt{1 + m^2}}, v + \frac m{\sqrt{1 + m^2}})$.</p> <p>Likewise the point on line $2$ that is one unit away from $(u,v)$ will be the point $(x_2, y_2) = (u + \frac 1{\sqrt{1 + n^2}}, v + \frac n{\sqrt{1 + n^2}})$</p> <p>The angle bisector will contain the midpoint of $(x_1, x_2)$ and $(x_2, y_2)$.</p> <p>The midpoint is $(x_m, y_m) = (u + \frac{[\frac 1{\sqrt{1 + m^2}}]+[\frac 1{\sqrt{1 + n^2}}]}2, w + \frac{[\frac m{\sqrt{1 + m^2}}]+[\frac n{\sqrt{1 + n^2}}]}2)$.</p> <p>So.... 
the slope of the angle bisector will be:</p> <p>$\frac {y_m - v}{x_m - u}= \frac{\frac{[\frac m{\sqrt{1 + m^2}}]+[\frac n{\sqrt{1 + n^2}}]}2}{\frac{[\frac 1{\sqrt{1 + m^2}}]+[\frac 1{\sqrt{1 + n^2}}]}2}=$</p> <p>$\frac{[\frac m{\sqrt{1 + m^2}}]+[\frac n{\sqrt{1 + n^2}}]}{[\frac 1{\sqrt{1 + m^2}}]+[\frac 1{\sqrt{1 + n^2}}]}=\frac{m\sqrt{1 + n^2}+n\sqrt{1 + m^2}}{\sqrt{1 + m^2}+\sqrt{1 + n^2}}$</p> <p>$[\frac{m\sqrt{1 + n^2}+n\sqrt{1 + m^2}}{\sqrt{1 + m^2}+\sqrt{1 + n^2}}=\frac{\frac 12\sqrt{1 + 2^2}+2\sqrt{1 + \frac 12^2}}{\sqrt{1 + \frac 12^2}+\sqrt{1 + 2^2}}=\frac{\sqrt{5}/2+ \sqrt{5}}{\sqrt{5}+ \sqrt{5}/2}=1]$</p> <p>Which... I must confess is a formula I never learned and would never expect anyone to memorize. I'd expect if one needs to find the slope of an angle bisector one would calculate it for the specific lines.</p> <p>You might be able to simplify that equation further. </p> <p>=====</p> <p>That equation is sort of an "average"; just not a standard arithmetic average.</p> <p>You can calculate the average of the <em>angles</em> of the lines. But slopes are not angles and do not have a linear conversion.</p> <p>If you know trigonemetry:</p> <p>Slope = $\frac{rise}{run} = \frac{\sin \theta}{\cos \theta} = \tan \theta$ so the angle of a line is $\theta = \arctan m$.</p> <p>So the angle of the angle bisector is $\psi = \frac {\theta + \omega}2 = \frac {\arctan(m) + \arctan(n)}2$</p> <p>And the slope of the angle bisector is $k = \tan(\psi) = \tan(\frac {\arctan(m) + \arctan(n)}2)=\frac{m\sqrt{1 + n^2}+n\sqrt{1 + m^2}}{\sqrt{1 + m^2}+\sqrt{1 + n^2}}$</p> Answer 46: <p>If you have two lines </p> <p>$$ L_1 : a_1x+b_1y+c_2 = 0 $$</p> <p>and </p> <p>$$ L_2 : a_2x + b_2y + c_2 = 0, $$</p> <p>then the equation of their bisectors is given by</p> <p>$$ \frac{a_1x+b_1y+c_1}{\sqrt{a_1^2+b_1^2}} = \pm \frac{a_2x+b_2y+c_2}{\sqrt{a_2^2+b_2^2}}. $$</p> <p>where the $+$ and $-$ make the difference between the two bisectors. 
You just simplify each equation and read off the slope.</p> Answer 47: <p>The suggestion you got is completely wrong. In this particular case, you can just notice that lines whose slopes are reciprocals, are symmetric with respect to lines with slope $+1$ and $-1$.</p> Answer 48: <p>To be on right track, let $Ax + By + C =0, ax + by + c =0 $ be the equations of the given straight lines.The angular bisector is a straight line locus so that length of perpendiculars dropped from it onto the given lines are equal. The polar form of straight line is useful here.</p> <p>$$ \frac{Ax+By+C}{\sqrt{A^2+B^2}} = \pm \frac{ax+by+c}{\sqrt{a^2+b^2}}. $$</p> <p>The $+$ sign is taken when the arms contain the origin for internal bisector, $-$ sign for the external bisector perpendicular to it. You can now find its slope.</p> Answer 49: <p>Take the unitary vectors parallel to the lines: their sum will be parallel to one of the bisecting line, their difference will be parallel to the other one.<br> To determine which is the one bisecting the acute /obtuse angle, just take the dot product of the two vectors:</p> <ul> <li>if that is positive, then their sum will be parallel to the acute bisectrix and their difference to the obtuse one</li> <li>viceversa if the dot product is negative.</li> </ul> <p><em>note:</em> this method is valid also in 3D. 
In 2D you can apply the same method to the normal (instead than parallel) unitary vectors.</p> <p><strong>in your example</strong> (using parallel vectors) </p> <p><a href="https://i.stack.imgur.com/FO99t.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/FO99t.png" alt="enter image description here"></a></p> <p>Rewrite the line equations in the <em>proportional</em> form $$ \left\{ \begin{gathered} y = \frac{1} {2}x - 2\quad \Rightarrow \quad \frac{{x - 0}} {2} = \frac{{y - \left( { - 2} \right)}} {1} \hfill \\ y = 2x + 1\quad \Rightarrow \quad \frac{{x - 0}} {{1/2}} = \frac{{y - 1}} {1}\quad \Rightarrow \quad \frac{{x - 0}} {1} = \frac{{y - 1}} {2} \hfill \\ \end{gathered} \right. $$</p> <p>then the parallel unitary vectors are $$ \mathbf{u} = \frac{1} {{\sqrt 5 }}\left( {2,1} \right)\quad \mathbf{v} = \frac{1} {{\sqrt 5 }}\left( {1,2} \right)\quad 0 &lt; \mathbf{u} \cdot \mathbf{v} = \frac{4} {5} $$ and the angle between them is acute. Their sum and difference is $$ \mathbf{u} + \mathbf{v} = \frac{1} {{\sqrt 5 }}\left( {3,3} \right)\quad \mathbf{u} - \mathbf{v} = \frac{1} {{\sqrt 5 }}\left( {1, - 1} \right) $$ and the equations of the bisecting lines will be: $$ \left\{ \begin{gathered} \frac{{x - x_{\,c} }} {3} = \frac{{y - y_{\,c} }} {3}\quad \Rightarrow \quad x + 2 = y + 3\quad \Rightarrow \quad y = x - 1\;\;acute \hfill \\ \frac{{x - x_{\,c} }} {1} = \frac{{y - y_{\,c} }} {{ - 1}}\quad \Rightarrow \quad x + 2 = - y - 3\quad \quad \Rightarrow \quad y = - x - 5\;\;obtuse \hfill \\ \end{gathered} \right. $$</p> Answer 50: <p>Line 1 is $y = \frac 12 x - 2$. Line 2 is $y = 2x +1$. They intersect at $(-2-3)$.</p> <p>If on line 1 you go over on $x$ $2$ units you will go up on $y$ by $1$ unit. This will put you at $(0, -2)$. The distance traveled is $\sqrt {1^2 + 2^2} = \sqrt{5}$.</p> <p>If on line 2 you go over on $x$ $1$ units you will go up on $y$ by $2$ unit. This will put you at $(-1, -1)$. 
The distance traveled is $\sqrt {2^2 + 1^2} = \sqrt{5}$.</p> <p>The angle bisector will go through the midpoint of $(0,-2)$ and $(-1,-1)$$*$. So the angle bisector will go through $(-\frac 12, -1\frac 12)$. So the angle bisector goes through the points $(-2,-3)$ and $(-\frac 12, -1\frac 12)$, so the slope is $\frac{-1\frac 12 - (-3)}{-1\frac 12 -(-2)} = \frac {1\frac 12}{1\frac 12} = 1$. </p> <p>$*$ because... $A = (-2,-3); B= (0,-2); C=(-1,-1);$ and $AB = AC = \sqrt{5}$, so $\triangle BAC$ is isosceles, and the angle bisector of $\angle BAC$ passes through the midpoint of $BC$.</p> <p>==== details in general ====</p> <p>Your friend is not quite right. You can average the <em>angles</em>, but slopes are not angles and there is not a linear conversion between them. (There is a <em>trigonometric</em> conversion between them. But not a linear conversion.)</p> <p>Bear with me.</p> <p>Suppose the two lines intersect at $(u,v)$, line $1$ has slope $m$, and line $2$ has slope $n$. </p> <p>Move along line $1$ from $(u,v)$ a distance of $1$ unit. You will have moved $\delta $ in the $x$ direction and $m\delta $ in the $y$ direction, so your total distance is $\sqrt{\delta^2 + m^2\delta^2} = 1$.</p> <p>So $\delta\sqrt{1 + m^2} = 1$, so $\delta = \frac 1{\sqrt{1 + m^2}}$.</p> <p>So the point on line $1$ that is one unit away from $(u, v)$ is the point $(x_1, y_1) = (u + \frac 1{\sqrt{1 + m^2}}, v + \frac m{\sqrt{1 + m^2}})$.</p> <p>Likewise the point on line $2$ that is one unit away from $(u,v)$ will be the point $(x_2, y_2) = (u + \frac 1{\sqrt{1 + n^2}}, v + \frac n{\sqrt{1 + n^2}})$.</p> <p>The angle bisector will contain the midpoint of $(x_1, y_1)$ and $(x_2, y_2)$.</p> <p>The midpoint is $(x_m, y_m) = (u + \frac{[\frac 1{\sqrt{1 + m^2}}]+[\frac 1{\sqrt{1 + n^2}}]}2, v + \frac{[\frac m{\sqrt{1 + m^2}}]+[\frac n{\sqrt{1 + n^2}}]}2)$.</p> <p>So....
the slope of the angle bisector will be:</p> <p>$\frac {y_m - v}{x_m - u}= \frac{\frac{[\frac m{\sqrt{1 + m^2}}]+[\frac n{\sqrt{1 + n^2}}]}2}{\frac{[\frac 1{\sqrt{1 + m^2}}]+[\frac 1{\sqrt{1 + n^2}}]}2}=$</p> <p>$\frac{[\frac m{\sqrt{1 + m^2}}]+[\frac n{\sqrt{1 + n^2}}]}{[\frac 1{\sqrt{1 + m^2}}]+[\frac 1{\sqrt{1 + n^2}}]}=\frac{m\sqrt{1 + n^2}+n\sqrt{1 + m^2}}{\sqrt{1 + m^2}+\sqrt{1 + n^2}}$</p> <p>$[\frac{m\sqrt{1 + n^2}+n\sqrt{1 + m^2}}{\sqrt{1 + m^2}+\sqrt{1 + n^2}}=\frac{\frac 12\sqrt{1 + 2^2}+2\sqrt{1 + (\frac 12)^2}}{\sqrt{1 + (\frac 12)^2}+\sqrt{1 + 2^2}}=\frac{\sqrt{5}/2+ \sqrt{5}}{\sqrt{5}+ \sqrt{5}/2}=1]$</p> <p>Which... I must confess is a formula I never learned and would never expect anyone to memorize. I'd expect that if one needs to find the slope of an angle bisector, one would calculate it for the specific lines.</p> <p>You might be able to simplify that equation further. </p> <p>=====</p> <p>That equation is sort of an "average"; just not a standard arithmetic average.</p> <p>You can calculate the average of the <em>angles</em> of the lines. But slopes are not angles and do not have a linear conversion.</p> <p>If you know trigonometry:</p> <p>Slope = $\frac{rise}{run} = \frac{\sin \theta}{\cos \theta} = \tan \theta$, so the angle of a line is $\theta = \arctan m$.</p> <p>So the angle of the angle bisector is $\psi = \frac {\theta + \omega}2 = \frac {\arctan(m) + \arctan(n)}2$, where $\theta$ and $\omega$ are the angles of the two lines.</p> <p>And the slope of the angle bisector is $k = \tan(\psi) = \tan(\frac {\arctan(m) + \arctan(n)}2)=\frac{m\sqrt{1 + n^2}+n\sqrt{1 + m^2}}{\sqrt{1 + m^2}+\sqrt{1 + n^2}}$</p> Answer 51: <p>If you have two lines </p> <p>$$ L_1 : a_1x+b_1y+c_1 = 0 $$</p> <p>and </p> <p>$$ L_2 : a_2x + b_2y + c_2 = 0, $$</p> <p>then the equation of their bisectors is given by</p> <p>$$ \frac{a_1x+b_1y+c_1}{\sqrt{a_1^2+b_1^2}} = \pm \frac{a_2x+b_2y+c_2}{\sqrt{a_2^2+b_2^2}}, $$</p> <p>where the $+$ and $-$ make the difference between the two bisectors.
You just simplify each equation and read off the slope.</p> Answer 52: <p>The suggestion you got is completely wrong. In this particular case, you can just notice that lines whose slopes are reciprocals, are symmetric with respect to lines with slope $+1$ and $-1$.</p> Answer 53: <p>To be on right track, let $Ax + By + C =0, ax + by + c =0 $ be the equations of the given straight lines.The angular bisector is a straight line locus so that length of perpendiculars dropped from it onto the given lines are equal. The polar form of straight line is useful here.</p> <p>$$ \frac{Ax+By+C}{\sqrt{A^2+B^2}} = \pm \frac{ax+by+c}{\sqrt{a^2+b^2}}. $$</p> <p>The $+$ sign is taken when the arms contain the origin for internal bisector, $-$ sign for the external bisector perpendicular to it. You can now find its slope.</p> Answer 54: <p>Take the unitary vectors parallel to the lines: their sum will be parallel to one of the bisecting line, their difference will be parallel to the other one.<br> To determine which is the one bisecting the acute /obtuse angle, just take the dot product of the two vectors:</p> <ul> <li>if that is positive, then their sum will be parallel to the acute bisectrix and their difference to the obtuse one</li> <li>viceversa if the dot product is negative.</li> </ul> <p><em>note:</em> this method is valid also in 3D. 
In 2D you can apply the same method to the normal (instead than parallel) unitary vectors.</p> <p><strong>in your example</strong> (using parallel vectors) </p> <p><a href="https://i.stack.imgur.com/FO99t.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/FO99t.png" alt="enter image description here"></a></p> <p>Rewrite the line equations in the <em>proportional</em> form $$ \left\{ \begin{gathered} y = \frac{1} {2}x - 2\quad \Rightarrow \quad \frac{{x - 0}} {2} = \frac{{y - \left( { - 2} \right)}} {1} \hfill \\ y = 2x + 1\quad \Rightarrow \quad \frac{{x - 0}} {{1/2}} = \frac{{y - 1}} {1}\quad \Rightarrow \quad \frac{{x - 0}} {1} = \frac{{y - 1}} {2} \hfill \\ \end{gathered} \right. $$</p> <p>then the parallel unitary vectors are $$ \mathbf{u} = \frac{1} {{\sqrt 5 }}\left( {2,1} \right)\quad \mathbf{v} = \frac{1} {{\sqrt 5 }}\left( {1,2} \right)\quad 0 &lt; \mathbf{u} \cdot \mathbf{v} = \frac{4} {5} $$ and the angle between them is acute. Their sum and difference is $$ \mathbf{u} + \mathbf{v} = \frac{1} {{\sqrt 5 }}\left( {3,3} \right)\quad \mathbf{u} - \mathbf{v} = \frac{1} {{\sqrt 5 }}\left( {1, - 1} \right) $$ and the equations of the bisecting lines will be: $$ \left\{ \begin{gathered} \frac{{x - x_{\,c} }} {3} = \frac{{y - y_{\,c} }} {3}\quad \Rightarrow \quad x + 2 = y + 3\quad \Rightarrow \quad y = x - 1\;\;acute \hfill \\ \frac{{x - x_{\,c} }} {1} = \frac{{y - y_{\,c} }} {{ - 1}}\quad \Rightarrow \quad x + 2 = - y - 3\quad \quad \Rightarrow \quad y = - x - 5\;\;obtuse \hfill \\ \end{gathered} \right. $$</p> Answer 55: <p>Line 1 is $y = \frac 12 x - 2$. Line 2 is $y = 2x +1$. They intersect at $(-2-3)$.</p> <p>If on line 1 you go over on $x$ $2$ units you will go up on $y$ by $1$ unit. This will put you at $(0, -2)$. The distance traveled is $\sqrt {1^2 + 2^2} = \sqrt{5}$.</p> <p>If on line 2 you go over on $x$ $1$ units you will go up on $y$ by $2$ unit. This will put you at $(-1, -1)$. 
The distance traveled is $\sqrt {2^2 + 1^2} = \sqrt{5}$.</p> <p>The angle bisector will go through the midpoint of $(0,-2)$ and $(-1,-1)$$*$. So the angle bisector will go through $(-\frac 12, -1\frac 12)$. So the angle bisector goes through the point $(-2,-3)$ and $(-\frac 12, -1\frac 12)$ so the slope is $\frac{-1\frac 12 - (-3)}{-1\frac 12 -(-2)} = \frac {1\frac 12}{1\frac 12} = 1$. </p> <p>$*$ because... $A = (-2,-3); B= (0,-2); C=(-1,-1);$ and $AB$ = $AC= \sqrt{5}$ so $\triangle BAC$ is isoceles, and the angle bisector of $\angle BAC$ passes through the midpoint of $BC$.</p> <p>==== details in general ====</p> <p>You friend is not quite right. You can average the <em>angles</em> but slopes are not angles and there is not a linear conversion between them. (There is a <em>trigonometric</em> conversion between them. But not a linear conversion.)</p> <p>Bear with me.</p> <p>Suppose the two lines intersect at $(u,v)$ and line $1$ has slope $m$ and line $2$ has slope $n$. </p> <p>Move along the line $1$ from $(u,v)$ a distance of $1$ unit. You will have move $\delta $ in the $x$ direction and $m*\delta $ in the $y$ direction so your total distance is $\sqrt{\delta^2 + m^2\delta^2} = 1$.</p> <p>So $\delta\sqrt{1 + m^2} = 1$ so $\delta = \frac 1{\sqrt{1 + m^2}}$.</p> <p>So the point on line $1$ that is one unit away form $(u, v)$ is the point $(x_1, y_1) = (u + \frac 1{\sqrt{1 + m^2}}, v + \frac m{\sqrt{1 + m^2}})$.</p> <p>Likewise the point on line $2$ that is one unit away from $(u,v)$ will be the point $(x_2, y_2) = (u + \frac 1{\sqrt{1 + n^2}}, v + \frac n{\sqrt{1 + n^2}})$</p> <p>The angle bisector will contain the midpoint of $(x_1, x_2)$ and $(x_2, y_2)$.</p> <p>The midpoint is $(x_m, y_m) = (u + \frac{[\frac 1{\sqrt{1 + m^2}}]+[\frac 1{\sqrt{1 + n^2}}]}2, w + \frac{[\frac m{\sqrt{1 + m^2}}]+[\frac n{\sqrt{1 + n^2}}]}2)$.</p> <p>So.... 
the slope of the angle bisector will be:</p> <p>$\frac {y_m - v}{x_m - u}= \frac{\frac{[\frac m{\sqrt{1 + m^2}}]+[\frac n{\sqrt{1 + n^2}}]}2}{\frac{[\frac 1{\sqrt{1 + m^2}}]+[\frac 1{\sqrt{1 + n^2}}]}2}=$</p> <p>$\frac{[\frac m{\sqrt{1 + m^2}}]+[\frac n{\sqrt{1 + n^2}}]}{[\frac 1{\sqrt{1 + m^2}}]+[\frac 1{\sqrt{1 + n^2}}]}=\frac{m\sqrt{1 + n^2}+n\sqrt{1 + m^2}}{\sqrt{1 + m^2}+\sqrt{1 + n^2}}$</p> <p>$[\frac{m\sqrt{1 + n^2}+n\sqrt{1 + m^2}}{\sqrt{1 + m^2}+\sqrt{1 + n^2}}=\frac{\frac 12\sqrt{1 + 2^2}+2\sqrt{1 + \frac 12^2}}{\sqrt{1 + \frac 12^2}+\sqrt{1 + 2^2}}=\frac{\sqrt{5}/2+ \sqrt{5}}{\sqrt{5}+ \sqrt{5}/2}=1]$</p> <p>Which... I must confess is a formula I never learned and would never expect anyone to memorize. I'd expect if one needs to find the slope of an angle bisector one would calculate it for the specific lines.</p> <p>You might be able to simplify that equation further. </p> <p>=====</p> <p>That equation is sort of an "average"; just not a standard arithmetic average.</p> <p>You can calculate the average of the <em>angles</em> of the lines. But slopes are not angles and do not have a linear conversion.</p> <p>If you know trigonemetry:</p> <p>Slope = $\frac{rise}{run} = \frac{\sin \theta}{\cos \theta} = \tan \theta$ so the angle of a line is $\theta = \arctan m$.</p> <p>So the angle of the angle bisector is $\psi = \frac {\theta + \omega}2 = \frac {\arctan(m) + \arctan(n)}2$</p> <p>And the slope of the angle bisector is $k = \tan(\psi) = \tan(\frac {\arctan(m) + \arctan(n)}2)=\frac{m\sqrt{1 + n^2}+n\sqrt{1 + m^2}}{\sqrt{1 + m^2}+\sqrt{1 + n^2}}$</p> Answer 56: <p>If you have two lines </p> <p>$$ L_1 : a_1x+b_1y+c_2 = 0 $$</p> <p>and </p> <p>$$ L_2 : a_2x + b_2y + c_2 = 0, $$</p> <p>then the equation of their bisectors is given by</p> <p>$$ \frac{a_1x+b_1y+c_1}{\sqrt{a_1^2+b_1^2}} = \pm \frac{a_2x+b_2y+c_2}{\sqrt{a_2^2+b_2^2}}. $$</p> <p>where the $+$ and $-$ make the difference between the two bisectors. 
You just simplify each equation and read off the slope.</p> Answer 57: <p>The suggestion you got is completely wrong. In this particular case, you can just notice that lines whose slopes are reciprocals, are symmetric with respect to lines with slope $+1$ and $-1$.</p> Answer 58: <p>To be on right track, let $Ax + By + C =0, ax + by + c =0 $ be the equations of the given straight lines.The angular bisector is a straight line locus so that length of perpendiculars dropped from it onto the given lines are equal. The polar form of straight line is useful here.</p> <p>$$ \frac{Ax+By+C}{\sqrt{A^2+B^2}} = \pm \frac{ax+by+c}{\sqrt{a^2+b^2}}. $$</p> <p>The $+$ sign is taken when the arms contain the origin for internal bisector, $-$ sign for the external bisector perpendicular to it. You can now find its slope.</p> Answer 59: <p>Take the unitary vectors parallel to the lines: their sum will be parallel to one of the bisecting line, their difference will be parallel to the other one.<br> To determine which is the one bisecting the acute /obtuse angle, just take the dot product of the two vectors:</p> <ul> <li>if that is positive, then their sum will be parallel to the acute bisectrix and their difference to the obtuse one</li> <li>viceversa if the dot product is negative.</li> </ul> <p><em>note:</em> this method is valid also in 3D. 
In 2D you can apply the same method to the normal (instead than parallel) unitary vectors.</p> <p><strong>in your example</strong> (using parallel vectors) </p> <p><a href="https://i.stack.imgur.com/FO99t.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/FO99t.png" alt="enter image description here"></a></p> <p>Rewrite the line equations in the <em>proportional</em> form $$ \left\{ \begin{gathered} y = \frac{1} {2}x - 2\quad \Rightarrow \quad \frac{{x - 0}} {2} = \frac{{y - \left( { - 2} \right)}} {1} \hfill \\ y = 2x + 1\quad \Rightarrow \quad \frac{{x - 0}} {{1/2}} = \frac{{y - 1}} {1}\quad \Rightarrow \quad \frac{{x - 0}} {1} = \frac{{y - 1}} {2} \hfill \\ \end{gathered} \right. $$</p> <p>then the parallel unitary vectors are $$ \mathbf{u} = \frac{1} {{\sqrt 5 }}\left( {2,1} \right)\quad \mathbf{v} = \frac{1} {{\sqrt 5 }}\left( {1,2} \right)\quad 0 &lt; \mathbf{u} \cdot \mathbf{v} = \frac{4} {5} $$ and the angle between them is acute. Their sum and difference is $$ \mathbf{u} + \mathbf{v} = \frac{1} {{\sqrt 5 }}\left( {3,3} \right)\quad \mathbf{u} - \mathbf{v} = \frac{1} {{\sqrt 5 }}\left( {1, - 1} \right) $$ and the equations of the bisecting lines will be: $$ \left\{ \begin{gathered} \frac{{x - x_{\,c} }} {3} = \frac{{y - y_{\,c} }} {3}\quad \Rightarrow \quad x + 2 = y + 3\quad \Rightarrow \quad y = x - 1\;\;acute \hfill \\ \frac{{x - x_{\,c} }} {1} = \frac{{y - y_{\,c} }} {{ - 1}}\quad \Rightarrow \quad x + 2 = - y - 3\quad \quad \Rightarrow \quad y = - x - 5\;\;obtuse \hfill \\ \end{gathered} \right. $$</p> Answer 60: <p>Line 1 is $y = \frac 12 x - 2$. Line 2 is $y = 2x +1$. They intersect at $(-2-3)$.</p> <p>If on line 1 you go over on $x$ $2$ units you will go up on $y$ by $1$ unit. This will put you at $(0, -2)$. The distance traveled is $\sqrt {1^2 + 2^2} = \sqrt{5}$.</p> <p>If on line 2 you go over on $x$ $1$ units you will go up on $y$ by $2$ unit. This will put you at $(-1, -1)$. 
The distance traveled is $\sqrt {2^2 + 1^2} = \sqrt{5}$.</p> <p>The angle bisector will go through the midpoint of $(0,-2)$ and $(-1,-1)$$*$. So the angle bisector will go through $(-\frac 12, -1\frac 12)$. So the angle bisector goes through the point $(-2,-3)$ and $(-\frac 12, -1\frac 12)$ so the slope is $\frac{-1\frac 12 - (-3)}{-1\frac 12 -(-2)} = \frac {1\frac 12}{1\frac 12} = 1$. </p> <p>$*$ because... $A = (-2,-3); B= (0,-2); C=(-1,-1);$ and $AB$ = $AC= \sqrt{5}$ so $\triangle BAC$ is isoceles, and the angle bisector of $\angle BAC$ passes through the midpoint of $BC$.</p> <p>==== details in general ====</p> <p>You friend is not quite right. You can average the <em>angles</em> but slopes are not angles and there is not a linear conversion between them. (There is a <em>trigonometric</em> conversion between them. But not a linear conversion.)</p> <p>Bear with me.</p> <p>Suppose the two lines intersect at $(u,v)$ and line $1$ has slope $m$ and line $2$ has slope $n$. </p> <p>Move along the line $1$ from $(u,v)$ a distance of $1$ unit. You will have move $\delta $ in the $x$ direction and $m*\delta $ in the $y$ direction so your total distance is $\sqrt{\delta^2 + m^2\delta^2} = 1$.</p> <p>So $\delta\sqrt{1 + m^2} = 1$ so $\delta = \frac 1{\sqrt{1 + m^2}}$.</p> <p>So the point on line $1$ that is one unit away form $(u, v)$ is the point $(x_1, y_1) = (u + \frac 1{\sqrt{1 + m^2}}, v + \frac m{\sqrt{1 + m^2}})$.</p> <p>Likewise the point on line $2$ that is one unit away from $(u,v)$ will be the point $(x_2, y_2) = (u + \frac 1{\sqrt{1 + n^2}}, v + \frac n{\sqrt{1 + n^2}})$</p> <p>The angle bisector will contain the midpoint of $(x_1, x_2)$ and $(x_2, y_2)$.</p> <p>The midpoint is $(x_m, y_m) = (u + \frac{[\frac 1{\sqrt{1 + m^2}}]+[\frac 1{\sqrt{1 + n^2}}]}2, w + \frac{[\frac m{\sqrt{1 + m^2}}]+[\frac n{\sqrt{1 + n^2}}]}2)$.</p> <p>So.... 
the slope of the angle bisector will be:</p> <p>$\frac {y_m - v}{x_m - u}= \frac{\frac{[\frac m{\sqrt{1 + m^2}}]+[\frac n{\sqrt{1 + n^2}}]}2}{\frac{[\frac 1{\sqrt{1 + m^2}}]+[\frac 1{\sqrt{1 + n^2}}]}2}=$</p> <p>$\frac{[\frac m{\sqrt{1 + m^2}}]+[\frac n{\sqrt{1 + n^2}}]}{[\frac 1{\sqrt{1 + m^2}}]+[\frac 1{\sqrt{1 + n^2}}]}=\frac{m\sqrt{1 + n^2}+n\sqrt{1 + m^2}}{\sqrt{1 + m^2}+\sqrt{1 + n^2}}$</p> <p>$[\frac{m\sqrt{1 + n^2}+n\sqrt{1 + m^2}}{\sqrt{1 + m^2}+\sqrt{1 + n^2}}=\frac{\frac 12\sqrt{1 + 2^2}+2\sqrt{1 + \frac 12^2}}{\sqrt{1 + \frac 12^2}+\sqrt{1 + 2^2}}=\frac{\sqrt{5}/2+ \sqrt{5}}{\sqrt{5}+ \sqrt{5}/2}=1]$</p> <p>Which... I must confess is a formula I never learned and would never expect anyone to memorize. I'd expect if one needs to find the slope of an angle bisector one would calculate it for the specific lines.</p> <p>You might be able to simplify that equation further. </p> <p>=====</p> <p>That equation is sort of an "average"; just not a standard arithmetic average.</p> <p>You can calculate the average of the <em>angles</em> of the lines. But slopes are not angles and do not have a linear conversion.</p> <p>If you know trigonemetry:</p> <p>Slope = $\frac{rise}{run} = \frac{\sin \theta}{\cos \theta} = \tan \theta$ so the angle of a line is $\theta = \arctan m$.</p> <p>So the angle of the angle bisector is $\psi = \frac {\theta + \omega}2 = \frac {\arctan(m) + \arctan(n)}2$</p> <p>And the slope of the angle bisector is $k = \tan(\psi) = \tan(\frac {\arctan(m) + \arctan(n)}2)=\frac{m\sqrt{1 + n^2}+n\sqrt{1 + m^2}}{\sqrt{1 + m^2}+\sqrt{1 + n^2}}$</p> Answer 61: <p>If you have two lines </p> <p>$$ L_1 : a_1x+b_1y+c_2 = 0 $$</p> <p>and </p> <p>$$ L_2 : a_2x + b_2y + c_2 = 0, $$</p> <p>then the equation of their bisectors is given by</p> <p>$$ \frac{a_1x+b_1y+c_1}{\sqrt{a_1^2+b_1^2}} = \pm \frac{a_2x+b_2y+c_2}{\sqrt{a_2^2+b_2^2}}. $$</p> <p>where the $+$ and $-$ make the difference between the two bisectors. 
You just simplify each equation and read off the slope.</p> Answer 62: <p>The suggestion you got is completely wrong. In this particular case, you can just notice that lines whose slopes are reciprocals, are symmetric with respect to lines with slope $+1$ and $-1$.</p> Answer 63: <p>To be on right track, let $Ax + By + C =0, ax + by + c =0 $ be the equations of the given straight lines.The angular bisector is a straight line locus so that length of perpendiculars dropped from it onto the given lines are equal. The polar form of straight line is useful here.</p> <p>$$ \frac{Ax+By+C}{\sqrt{A^2+B^2}} = \pm \frac{ax+by+c}{\sqrt{a^2+b^2}}. $$</p> <p>The $+$ sign is taken when the arms contain the origin for internal bisector, $-$ sign for the external bisector perpendicular to it. You can now find its slope.</p> Answer 64: <p>Take the unitary vectors parallel to the lines: their sum will be parallel to one of the bisecting line, their difference will be parallel to the other one.<br> To determine which is the one bisecting the acute /obtuse angle, just take the dot product of the two vectors:</p> <ul> <li>if that is positive, then their sum will be parallel to the acute bisectrix and their difference to the obtuse one</li> <li>viceversa if the dot product is negative.</li> </ul> <p><em>note:</em> this method is valid also in 3D. 
In 2D you can apply the same method to the normal (instead than parallel) unitary vectors.</p> <p><strong>in your example</strong> (using parallel vectors) </p> <p><a href="https://i.stack.imgur.com/FO99t.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/FO99t.png" alt="enter image description here"></a></p> <p>Rewrite the line equations in the <em>proportional</em> form $$ \left\{ \begin{gathered} y = \frac{1} {2}x - 2\quad \Rightarrow \quad \frac{{x - 0}} {2} = \frac{{y - \left( { - 2} \right)}} {1} \hfill \\ y = 2x + 1\quad \Rightarrow \quad \frac{{x - 0}} {{1/2}} = \frac{{y - 1}} {1}\quad \Rightarrow \quad \frac{{x - 0}} {1} = \frac{{y - 1}} {2} \hfill \\ \end{gathered} \right. $$</p> <p>then the parallel unitary vectors are $$ \mathbf{u} = \frac{1} {{\sqrt 5 }}\left( {2,1} \right)\quad \mathbf{v} = \frac{1} {{\sqrt 5 }}\left( {1,2} \right)\quad 0 &lt; \mathbf{u} \cdot \mathbf{v} = \frac{4} {5} $$ and the angle between them is acute. Their sum and difference is $$ \mathbf{u} + \mathbf{v} = \frac{1} {{\sqrt 5 }}\left( {3,3} \right)\quad \mathbf{u} - \mathbf{v} = \frac{1} {{\sqrt 5 }}\left( {1, - 1} \right) $$ and the equations of the bisecting lines will be: $$ \left\{ \begin{gathered} \frac{{x - x_{\,c} }} {3} = \frac{{y - y_{\,c} }} {3}\quad \Rightarrow \quad x + 2 = y + 3\quad \Rightarrow \quad y = x - 1\;\;acute \hfill \\ \frac{{x - x_{\,c} }} {1} = \frac{{y - y_{\,c} }} {{ - 1}}\quad \Rightarrow \quad x + 2 = - y - 3\quad \quad \Rightarrow \quad y = - x - 5\;\;obtuse \hfill \\ \end{gathered} \right. $$</p> Answer 65: <p>Line 1 is $y = \frac 12 x - 2$. Line 2 is $y = 2x +1$. They intersect at $(-2-3)$.</p> <p>If on line 1 you go over on $x$ $2$ units you will go up on $y$ by $1$ unit. This will put you at $(0, -2)$. The distance traveled is $\sqrt {1^2 + 2^2} = \sqrt{5}$.</p> <p>If on line 2 you go over on $x$ $1$ units you will go up on $y$ by $2$ unit. This will put you at $(-1, -1)$. 
The distance traveled is $\sqrt {2^2 + 1^2} = \sqrt{5}$.</p> <p>The angle bisector will go through the midpoint of $(0,-2)$ and $(-1,-1)$$*$. So the angle bisector will go through $(-\frac 12, -1\frac 12)$. So the angle bisector goes through the point $(-2,-3)$ and $(-\frac 12, -1\frac 12)$ so the slope is $\frac{-1\frac 12 - (-3)}{-1\frac 12 -(-2)} = \frac {1\frac 12}{1\frac 12} = 1$. </p> <p>$*$ because... $A = (-2,-3); B= (0,-2); C=(-1,-1);$ and $AB$ = $AC= \sqrt{5}$ so $\triangle BAC$ is isoceles, and the angle bisector of $\angle BAC$ passes through the midpoint of $BC$.</p> <p>==== details in general ====</p> <p>You friend is not quite right. You can average the <em>angles</em> but slopes are not angles and there is not a linear conversion between them. (There is a <em>trigonometric</em> conversion between them. But not a linear conversion.)</p> <p>Bear with me.</p> <p>Suppose the two lines intersect at $(u,v)$ and line $1$ has slope $m$ and line $2$ has slope $n$. </p> <p>Move along the line $1$ from $(u,v)$ a distance of $1$ unit. You will have move $\delta $ in the $x$ direction and $m*\delta $ in the $y$ direction so your total distance is $\sqrt{\delta^2 + m^2\delta^2} = 1$.</p> <p>So $\delta\sqrt{1 + m^2} = 1$ so $\delta = \frac 1{\sqrt{1 + m^2}}$.</p> <p>So the point on line $1$ that is one unit away form $(u, v)$ is the point $(x_1, y_1) = (u + \frac 1{\sqrt{1 + m^2}}, v + \frac m{\sqrt{1 + m^2}})$.</p> <p>Likewise the point on line $2$ that is one unit away from $(u,v)$ will be the point $(x_2, y_2) = (u + \frac 1{\sqrt{1 + n^2}}, v + \frac n{\sqrt{1 + n^2}})$</p> <p>The angle bisector will contain the midpoint of $(x_1, x_2)$ and $(x_2, y_2)$.</p> <p>The midpoint is $(x_m, y_m) = (u + \frac{[\frac 1{\sqrt{1 + m^2}}]+[\frac 1{\sqrt{1 + n^2}}]}2, w + \frac{[\frac m{\sqrt{1 + m^2}}]+[\frac n{\sqrt{1 + n^2}}]}2)$.</p> <p>So.... 
the slope of the angle bisector will be:</p> <p>$\frac {y_m - v}{x_m - u}= \frac{\frac{[\frac m{\sqrt{1 + m^2}}]+[\frac n{\sqrt{1 + n^2}}]}2}{\frac{[\frac 1{\sqrt{1 + m^2}}]+[\frac 1{\sqrt{1 + n^2}}]}2}=$</p> <p>$\frac{[\frac m{\sqrt{1 + m^2}}]+[\frac n{\sqrt{1 + n^2}}]}{[\frac 1{\sqrt{1 + m^2}}]+[\frac 1{\sqrt{1 + n^2}}]}=\frac{m\sqrt{1 + n^2}+n\sqrt{1 + m^2}}{\sqrt{1 + m^2}+\sqrt{1 + n^2}}$</p> <p>$[\frac{m\sqrt{1 + n^2}+n\sqrt{1 + m^2}}{\sqrt{1 + m^2}+\sqrt{1 + n^2}}=\frac{\frac 12\sqrt{1 + 2^2}+2\sqrt{1 + \frac 12^2}}{\sqrt{1 + \frac 12^2}+\sqrt{1 + 2^2}}=\frac{\sqrt{5}/2+ \sqrt{5}}{\sqrt{5}+ \sqrt{5}/2}=1]$</p> <p>Which... I must confess is a formula I never learned and would never expect anyone to memorize. I'd expect if one needs to find the slope of an angle bisector one would calculate it for the specific lines.</p> <p>You might be able to simplify that equation further. </p> <p>=====</p> <p>That equation is sort of an "average"; just not a standard arithmetic average.</p> <p>You can calculate the average of the <em>angles</em> of the lines. But slopes are not angles and do not have a linear conversion.</p> <p>If you know trigonemetry:</p> <p>Slope = $\frac{rise}{run} = \frac{\sin \theta}{\cos \theta} = \tan \theta$ so the angle of a line is $\theta = \arctan m$.</p> <p>So the angle of the angle bisector is $\psi = \frac {\theta + \omega}2 = \frac {\arctan(m) + \arctan(n)}2$</p> <p>And the slope of the angle bisector is $k = \tan(\psi) = \tan(\frac {\arctan(m) + \arctan(n)}2)=\frac{m\sqrt{1 + n^2}+n\sqrt{1 + m^2}}{\sqrt{1 + m^2}+\sqrt{1 + n^2}}$</p> Answer 66: <p>If you have two lines </p> <p>$$ L_1 : a_1x+b_1y+c_2 = 0 $$</p> <p>and </p> <p>$$ L_2 : a_2x + b_2y + c_2 = 0, $$</p> <p>then the equation of their bisectors is given by</p> <p>$$ \frac{a_1x+b_1y+c_1}{\sqrt{a_1^2+b_1^2}} = \pm \frac{a_2x+b_2y+c_2}{\sqrt{a_2^2+b_2^2}}. $$</p> <p>where the $+$ and $-$ make the difference between the two bisectors. 
You just simplify each equation and read off the slope.</p> Answer 67: <p>The suggestion you got is completely wrong. In this particular case, you can just notice that lines whose slopes are reciprocals, are symmetric with respect to lines with slope $+1$ and $-1$.</p> Answer 68: <p>To be on right track, let $Ax + By + C =0, ax + by + c =0 $ be the equations of the given straight lines.The angular bisector is a straight line locus so that length of perpendiculars dropped from it onto the given lines are equal. The polar form of straight line is useful here.</p> <p>$$ \frac{Ax+By+C}{\sqrt{A^2+B^2}} = \pm \frac{ax+by+c}{\sqrt{a^2+b^2}}. $$</p> <p>The $+$ sign is taken when the arms contain the origin for internal bisector, $-$ sign for the external bisector perpendicular to it. You can now find its slope.</p> Answer 69: <p>Take the unitary vectors parallel to the lines: their sum will be parallel to one of the bisecting line, their difference will be parallel to the other one.<br> To determine which is the one bisecting the acute /obtuse angle, just take the dot product of the two vectors:</p> <ul> <li>if that is positive, then their sum will be parallel to the acute bisectrix and their difference to the obtuse one</li> <li>viceversa if the dot product is negative.</li> </ul> <p><em>note:</em> this method is valid also in 3D. 
In 2D you can apply the same method to the normal (instead than parallel) unitary vectors.</p> <p><strong>in your example</strong> (using parallel vectors) </p> <p><a href="https://i.stack.imgur.com/FO99t.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/FO99t.png" alt="enter image description here"></a></p> <p>Rewrite the line equations in the <em>proportional</em> form $$ \left\{ \begin{gathered} y = \frac{1} {2}x - 2\quad \Rightarrow \quad \frac{{x - 0}} {2} = \frac{{y - \left( { - 2} \right)}} {1} \hfill \\ y = 2x + 1\quad \Rightarrow \quad \frac{{x - 0}} {{1/2}} = \frac{{y - 1}} {1}\quad \Rightarrow \quad \frac{{x - 0}} {1} = \frac{{y - 1}} {2} \hfill \\ \end{gathered} \right. $$</p> <p>then the parallel unitary vectors are $$ \mathbf{u} = \frac{1} {{\sqrt 5 }}\left( {2,1} \right)\quad \mathbf{v} = \frac{1} {{\sqrt 5 }}\left( {1,2} \right)\quad 0 &lt; \mathbf{u} \cdot \mathbf{v} = \frac{4} {5} $$ and the angle between them is acute. Their sum and difference is $$ \mathbf{u} + \mathbf{v} = \frac{1} {{\sqrt 5 }}\left( {3,3} \right)\quad \mathbf{u} - \mathbf{v} = \frac{1} {{\sqrt 5 }}\left( {1, - 1} \right) $$ and the equations of the bisecting lines will be: $$ \left\{ \begin{gathered} \frac{{x - x_{\,c} }} {3} = \frac{{y - y_{\,c} }} {3}\quad \Rightarrow \quad x + 2 = y + 3\quad \Rightarrow \quad y = x - 1\;\;acute \hfill \\ \frac{{x - x_{\,c} }} {1} = \frac{{y - y_{\,c} }} {{ - 1}}\quad \Rightarrow \quad x + 2 = - y - 3\quad \quad \Rightarrow \quad y = - x - 5\;\;obtuse \hfill \\ \end{gathered} \right. $$</p> Answer 70: <p>Line 1 is $y = \frac 12 x - 2$. Line 2 is $y = 2x +1$. They intersect at $(-2-3)$.</p> <p>If on line 1 you go over on $x$ $2$ units you will go up on $y$ by $1$ unit. This will put you at $(0, -2)$. The distance traveled is $\sqrt {1^2 + 2^2} = \sqrt{5}$.</p> <p>If on line 2 you go over on $x$ $1$ units you will go up on $y$ by $2$ unit. This will put you at $(-1, -1)$. 
The distance traveled is $\sqrt {2^2 + 1^2} = \sqrt{5}$.</p> <p>The angle bisector will go through the midpoint of $(0,-2)$ and $(-1,-1)$$*$. So the angle bisector will go through $(-\frac 12, -1\frac 12)$. So the angle bisector goes through the point $(-2,-3)$ and $(-\frac 12, -1\frac 12)$ so the slope is $\frac{-1\frac 12 - (-3)}{-1\frac 12 -(-2)} = \frac {1\frac 12}{1\frac 12} = 1$. </p> <p>$*$ because... $A = (-2,-3); B= (0,-2); C=(-1,-1);$ and $AB$ = $AC= \sqrt{5}$ so $\triangle BAC$ is isoceles, and the angle bisector of $\angle BAC$ passes through the midpoint of $BC$.</p> <p>==== details in general ====</p> <p>You friend is not quite right. You can average the <em>angles</em> but slopes are not angles and there is not a linear conversion between them. (There is a <em>trigonometric</em> conversion between them. But not a linear conversion.)</p> <p>Bear with me.</p> <p>Suppose the two lines intersect at $(u,v)$ and line $1$ has slope $m$ and line $2$ has slope $n$. </p> <p>Move along the line $1$ from $(u,v)$ a distance of $1$ unit. You will have move $\delta $ in the $x$ direction and $m*\delta $ in the $y$ direction so your total distance is $\sqrt{\delta^2 + m^2\delta^2} = 1$.</p> <p>So $\delta\sqrt{1 + m^2} = 1$ so $\delta = \frac 1{\sqrt{1 + m^2}}$.</p> <p>So the point on line $1$ that is one unit away form $(u, v)$ is the point $(x_1, y_1) = (u + \frac 1{\sqrt{1 + m^2}}, v + \frac m{\sqrt{1 + m^2}})$.</p> <p>Likewise the point on line $2$ that is one unit away from $(u,v)$ will be the point $(x_2, y_2) = (u + \frac 1{\sqrt{1 + n^2}}, v + \frac n{\sqrt{1 + n^2}})$</p> <p>The angle bisector will contain the midpoint of $(x_1, x_2)$ and $(x_2, y_2)$.</p> <p>The midpoint is $(x_m, y_m) = (u + \frac{[\frac 1{\sqrt{1 + m^2}}]+[\frac 1{\sqrt{1 + n^2}}]}2, w + \frac{[\frac m{\sqrt{1 + m^2}}]+[\frac n{\sqrt{1 + n^2}}]}2)$.</p> <p>So.... 
the slope of the angle bisector will be:</p> <p>$\frac {y_m - v}{x_m - u}= \frac{\frac{[\frac m{\sqrt{1 + m^2}}]+[\frac n{\sqrt{1 + n^2}}]}2}{\frac{[\frac 1{\sqrt{1 + m^2}}]+[\frac 1{\sqrt{1 + n^2}}]}2}=$</p> <p>$\frac{[\frac m{\sqrt{1 + m^2}}]+[\frac n{\sqrt{1 + n^2}}]}{[\frac 1{\sqrt{1 + m^2}}]+[\frac 1{\sqrt{1 + n^2}}]}=\frac{m\sqrt{1 + n^2}+n\sqrt{1 + m^2}}{\sqrt{1 + m^2}+\sqrt{1 + n^2}}$</p> <p>$[\frac{m\sqrt{1 + n^2}+n\sqrt{1 + m^2}}{\sqrt{1 + m^2}+\sqrt{1 + n^2}}=\frac{\frac 12\sqrt{1 + 2^2}+2\sqrt{1 + \frac 12^2}}{\sqrt{1 + \frac 12^2}+\sqrt{1 + 2^2}}=\frac{\sqrt{5}/2+ \sqrt{5}}{\sqrt{5}+ \sqrt{5}/2}=1]$</p> <p>Which... I must confess is a formula I never learned and would never expect anyone to memorize. I'd expect if one needs to find the slope of an angle bisector one would calculate it for the specific lines.</p> <p>You might be able to simplify that equation further. </p> <p>=====</p> <p>That equation is sort of an "average"; just not a standard arithmetic average.</p> <p>You can calculate the average of the <em>angles</em> of the lines. But slopes are not angles and do not have a linear conversion.</p> <p>If you know trigonemetry:</p> <p>Slope = $\frac{rise}{run} = \frac{\sin \theta}{\cos \theta} = \tan \theta$ so the angle of a line is $\theta = \arctan m$.</p> <p>So the angle of the angle bisector is $\psi = \frac {\theta + \omega}2 = \frac {\arctan(m) + \arctan(n)}2$</p> <p>And the slope of the angle bisector is $k = \tan(\psi) = \tan(\frac {\arctan(m) + \arctan(n)}2)=\frac{m\sqrt{1 + n^2}+n\sqrt{1 + m^2}}{\sqrt{1 + m^2}+\sqrt{1 + n^2}}$</p> Answer 71: <p>If you have two lines </p> <p>$$ L_1 : a_1x+b_1y+c_2 = 0 $$</p> <p>and </p> <p>$$ L_2 : a_2x + b_2y + c_2 = 0, $$</p> <p>then the equation of their bisectors is given by</p> <p>$$ \frac{a_1x+b_1y+c_1}{\sqrt{a_1^2+b_1^2}} = \pm \frac{a_2x+b_2y+c_2}{\sqrt{a_2^2+b_2^2}}. $$</p> <p>where the $+$ and $-$ make the difference between the two bisectors. 
You just simplify each equation and read off the slope.</p> Answer 72: <p>The suggestion you got is completely wrong. In this particular case, you can just notice that lines whose slopes are reciprocals are symmetric with respect to the lines with slope $+1$ and $-1$.</p> Answer 73: <p>To be on the right track, let $Ax + By + C =0, ax + by + c =0 $ be the equations of the given straight lines. The angle bisector is a straight-line locus such that the lengths of the perpendiculars dropped from any of its points onto the given lines are equal. The normal (perpendicular-distance) form of the straight line is useful here.</p> <p>$$ \frac{Ax+By+C}{\sqrt{A^2+B^2}} = \pm \frac{ax+by+c}{\sqrt{a^2+b^2}}. $$</p> <p>The $+$ sign is taken when the arms contain the origin, giving the internal bisector; the $-$ sign gives the external bisector perpendicular to it. You can now find its slope.</p>
In 2D you can apply the same method to the normal (instead of the parallel) unitary vectors.</p> <p><strong>in your example</strong> (using parallel vectors) </p> <p><a href="https://i.stack.imgur.com/FO99t.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/FO99t.png" alt="enter image description here"></a></p> <p>Rewrite the line equations in the <em>proportional</em> form $$ \left\{ \begin{gathered} y = \frac{1} {2}x - 2\quad \Rightarrow \quad \frac{{x - 0}} {2} = \frac{{y - \left( { - 2} \right)}} {1} \hfill \\ y = 2x + 1\quad \Rightarrow \quad \frac{{x - 0}} {{1/2}} = \frac{{y - 1}} {1}\quad \Rightarrow \quad \frac{{x - 0}} {1} = \frac{{y - 1}} {2} \hfill \\ \end{gathered} \right. $$</p> <p>then the parallel unitary vectors are $$ \mathbf{u} = \frac{1} {{\sqrt 5 }}\left( {2,1} \right)\quad \mathbf{v} = \frac{1} {{\sqrt 5 }}\left( {1,2} \right)\quad 0 &lt; \mathbf{u} \cdot \mathbf{v} = \frac{4} {5} $$ and the angle between them is acute. Their sum and difference are $$ \mathbf{u} + \mathbf{v} = \frac{1} {{\sqrt 5 }}\left( {3,3} \right)\quad \mathbf{u} - \mathbf{v} = \frac{1} {{\sqrt 5 }}\left( {1, - 1} \right) $$ and the equations of the bisecting lines will be: $$ \left\{ \begin{gathered} \frac{{x - x_{\,c} }} {3} = \frac{{y - y_{\,c} }} {3}\quad \Rightarrow \quad x + 2 = y + 3\quad \Rightarrow \quad y = x - 1\;\;acute \hfill \\ \frac{{x - x_{\,c} }} {1} = \frac{{y - y_{\,c} }} {{ - 1}}\quad \Rightarrow \quad x + 2 = - y - 3\quad \quad \Rightarrow \quad y = - x - 5\;\;obtuse \hfill \\ \end{gathered} \right. $$</p>
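The two routes above — the unit-point midpoint construction and the average of the angles — can be cross-checked numerically on the worked example with slopes $\frac 12$ and $2$. A minimal sketch (function names are mine, not from the answers):

```python
import math

def bisector_slope_closed_form(m, n):
    # Slope of the angle bisector from the unit-point midpoint construction:
    # (m*sqrt(1+n^2) + n*sqrt(1+m^2)) / (sqrt(1+m^2) + sqrt(1+n^2))
    return (m * math.sqrt(1 + n**2) + n * math.sqrt(1 + m**2)) / (
        math.sqrt(1 + m**2) + math.sqrt(1 + n**2)
    )

def bisector_slope_trig(m, n):
    # Same slope via averaging the line angles: tan((arctan m + arctan n) / 2).
    return math.tan((math.atan(m) + math.atan(n)) / 2)

# Worked example: y = x/2 - 2 and y = 2x + 1, slopes 1/2 and 2.
print(bisector_slope_closed_form(0.5, 2.0))  # 1.0
print(bisector_slope_trig(0.5, 2.0))
```

Both functions agree (up to floating-point error), matching the slope of $1$ found by the coordinate argument.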
A certain kind of set of integers
2,085,108
7
https://math.stackexchange.com/questions/2085108/a-certain-kind-of-set-of-integers
<p>Let us call a set $S$ of integers <em>sparse</em> if for any four integers $a,b,c,d \in S$ such that $a&lt;b&lt;c&lt;d$, we have that $a+d \ne b+c$. Define $P(n)$ to be maximal size of a sparse subset of $\{1,2,\dotsc,n\}$. Is it true that $\lim\limits_{n\to \infty} P(n)/n=0$? </p>
[ { "answer_id": 2085158, "score": 6, "text": "<p>Yes, it's true. It follows from (the finitary version of) <a href=\"https://en.wikipedia.org/wiki/Szemer%C3%A9di%27s_theorem\" rel=\"nofollow noreferrer\">Szemerédi's theorem</a>: if a set $A\\subseteq\\mathbb N$ has positive upper density, then for every positive integer $k$ it contains an arithmetic progression of length $k;$ see <a href=\"http://matwbn.icm.edu.pl/ksiazki/aa/aa27/aa27132.pdf\" rel=\"nofollow noreferrer\">Endre Szemerédi, On sets of integers containing no $k$ elements in arithmetic progression, <em>Acta Arithmetica</em> <strong>27</strong> (1975), 199–245</a>. The case $k=4$ was proved earlier: Endre Szemerédi, On sets of integers containing no four elements in arithmetic progression, <em>Acta Math. Acad. Sci. Hungar.</em> <strong>20</strong> (1969), 89–104. (Of course, a \"sparse\" set contains no arithmetic progression of length $4.$)</p>\n" }, { "answer_id": 2085235, "score": 4, "text": "<p>Here's an explicit, tighter bound for the size of a sparse set. If $A\\subseteq \\{1,2,\\ldots,n\\}$ is sparse, then every pair of distinct elements of $A$ has a distinct sum. There are $\\Theta(|A|^2)$ pairs of distinct elements, but only $O(n)$ different sums that they can have. By the pigeonhole principle we conclude that $|A|$ is $O(\\sqrt{n})$, and hence $|A|/n$ is $O(1/\\sqrt{n})$.</p>\n" } ]
Question: A certain kind of set of integers Answer 1: <p>Yes, it's true. It follows from (the finitary version of) <a href="https://en.wikipedia.org/wiki/Szemer%C3%A9di%27s_theorem" rel="nofollow noreferrer">Szemerédi's theorem</a>: if a set $A\subseteq\mathbb N$ has positive upper density, then for every positive integer $k$ it contains an arithmetic progression of length $k;$ see <a href="http://matwbn.icm.edu.pl/ksiazki/aa/aa27/aa27132.pdf" rel="nofollow noreferrer">Endre Szemerédi, On sets of integers containing no $k$ elements in arithmetic progression, <em>Acta Arithmetica</em> <strong>27</strong> (1975), 199–245</a>. The case $k=4$ was proved earlier: Endre Szemerédi, On sets of integers containing no four elements in arithmetic progression, <em>Acta Math. Acad. Sci. Hungar.</em> <strong>20</strong> (1969), 89–104. (Of course, a "sparse" set contains no arithmetic progression of length $4.$)</p> Answer 2: <p>Here's an explicit, tighter bound for the size of a sparse set. If $A\subseteq \{1,2,\ldots,n\}$ is sparse, then every pair of distinct elements of $A$ has a distinct sum. There are $\Theta(|A|^2)$ pairs of distinct elements, but only $O(n)$ different sums that they can have. By the pigeonhole principle we conclude that $|A|$ is $O(\sqrt{n})$, and hence $|A|/n$ is $O(1/\sqrt{n})$.</p>
Hungar.</em> <strong>20</strong> (1969), 89–104. (Of course, a "sparse" set contains no arithmetic progression of length $4.$)</p> Answer 16: <p>Here's an explicit, tighter bound for the size of a sparse set. If $A\subseteq \{1,2,\ldots,n\}$ is sparse, then every pair of distinct elements of $A$ has a distinct sum. There are $\Theta(|A|^2)$ pairs of distinct elements, but only $O(n)$ different sums that they can have. By the pigeonhole principle we conclude that $|A|$ is $O(\sqrt{n})$, and hence $|A|/n$ is $O(1/\sqrt{n})$.</p> Answer 17: <p>Yes, it's true. It follows from (the finitary version of) <a href="https://en.wikipedia.org/wiki/Szemer%C3%A9di%27s_theorem" rel="nofollow noreferrer">Szemerédi's theorem</a>: if a set $A\subseteq\mathbb N$ has positive upper density, then for every positive integer $k$ it contains an arithmetic progression of length $k;$ see <a href="http://matwbn.icm.edu.pl/ksiazki/aa/aa27/aa27132.pdf" rel="nofollow noreferrer">Endre Szemerédi, On sets of integers containing no $k$ elements in arithmetic progression, <em>Acta Arithmetica</em> <strong>27</strong> (1975), 199–245</a>. The case $k=4$ was proved earlier: Endre Szemerédi, On sets of integers containing no four elements in arithmetic progression, <em>Acta Math. Acad. Sci. Hungar.</em> <strong>20</strong> (1969), 89–104. (Of course, a "sparse" set contains no arithmetic progression of length $4.$)</p> Answer 18: <p>Here's an explicit, tighter bound for the size of a sparse set. If $A\subseteq \{1,2,\ldots,n\}$ is sparse, then every pair of distinct elements of $A$ has a distinct sum. There are $\Theta(|A|^2)$ pairs of distinct elements, but only $O(n)$ different sums that they can have. By the pigeonhole principle we conclude that $|A|$ is $O(\sqrt{n})$, and hence $|A|/n$ is $O(1/\sqrt{n})$.</p> Answer 19: <p>Yes, it's true. 
It follows from (the finitary version of) <a href="https://en.wikipedia.org/wiki/Szemer%C3%A9di%27s_theorem" rel="nofollow noreferrer">Szemerédi's theorem</a>: if a set $A\subseteq\mathbb N$ has positive upper density, then for every positive integer $k$ it contains an arithmetic progression of length $k;$ see <a href="http://matwbn.icm.edu.pl/ksiazki/aa/aa27/aa27132.pdf" rel="nofollow noreferrer">Endre Szemerédi, On sets of integers containing no $k$ elements in arithmetic progression, <em>Acta Arithmetica</em> <strong>27</strong> (1975), 199–245</a>. The case $k=4$ was proved earlier: Endre Szemerédi, On sets of integers containing no four elements in arithmetic progression, <em>Acta Math. Acad. Sci. Hungar.</em> <strong>20</strong> (1969), 89–104. (Of course, a "sparse" set contains no arithmetic progression of length $4.$)</p> Answer 20: <p>Here's an explicit, tighter bound for the size of a sparse set. If $A\subseteq \{1,2,\ldots,n\}$ is sparse, then every pair of distinct elements of $A$ has a distinct sum. There are $\Theta(|A|^2)$ pairs of distinct elements, but only $O(n)$ different sums that they can have. By the pigeonhole principle we conclude that $|A|$ is $O(\sqrt{n})$, and hence $|A|/n$ is $O(1/\sqrt{n})$.</p> Answer 21: <p>Yes, it's true. It follows from (the finitary version of) <a href="https://en.wikipedia.org/wiki/Szemer%C3%A9di%27s_theorem" rel="nofollow noreferrer">Szemerédi's theorem</a>: if a set $A\subseteq\mathbb N$ has positive upper density, then for every positive integer $k$ it contains an arithmetic progression of length $k;$ see <a href="http://matwbn.icm.edu.pl/ksiazki/aa/aa27/aa27132.pdf" rel="nofollow noreferrer">Endre Szemerédi, On sets of integers containing no $k$ elements in arithmetic progression, <em>Acta Arithmetica</em> <strong>27</strong> (1975), 199–245</a>. The case $k=4$ was proved earlier: Endre Szemerédi, On sets of integers containing no four elements in arithmetic progression, <em>Acta Math. Acad. Sci. 
Hungar.</em> <strong>20</strong> (1969), 89–104. (Of course, a "sparse" set contains no arithmetic progression of length $4.$)</p> Answer 22: <p>Here's an explicit, tighter bound for the size of a sparse set. If $A\subseteq \{1,2,\ldots,n\}$ is sparse, then every pair of distinct elements of $A$ has a distinct sum. There are $\Theta(|A|^2)$ pairs of distinct elements, but only $O(n)$ different sums that they can have. By the pigeonhole principle we conclude that $|A|$ is $O(\sqrt{n})$, and hence $|A|/n$ is $O(1/\sqrt{n})$.</p> Answer 23: <p>Yes, it's true. It follows from (the finitary version of) <a href="https://en.wikipedia.org/wiki/Szemer%C3%A9di%27s_theorem" rel="nofollow noreferrer">Szemerédi's theorem</a>: if a set $A\subseteq\mathbb N$ has positive upper density, then for every positive integer $k$ it contains an arithmetic progression of length $k;$ see <a href="http://matwbn.icm.edu.pl/ksiazki/aa/aa27/aa27132.pdf" rel="nofollow noreferrer">Endre Szemerédi, On sets of integers containing no $k$ elements in arithmetic progression, <em>Acta Arithmetica</em> <strong>27</strong> (1975), 199–245</a>. The case $k=4$ was proved earlier: Endre Szemerédi, On sets of integers containing no four elements in arithmetic progression, <em>Acta Math. Acad. Sci. Hungar.</em> <strong>20</strong> (1969), 89–104. (Of course, a "sparse" set contains no arithmetic progression of length $4.$)</p> Answer 24: <p>Here's an explicit, tighter bound for the size of a sparse set. If $A\subseteq \{1,2,\ldots,n\}$ is sparse, then every pair of distinct elements of $A$ has a distinct sum. There are $\Theta(|A|^2)$ pairs of distinct elements, but only $O(n)$ different sums that they can have. By the pigeonhole principle we conclude that $|A|$ is $O(\sqrt{n})$, and hence $|A|/n$ is $O(1/\sqrt{n})$.</p> Answer 25: <p>Yes, it's true. 
It follows from (the finitary version of) <a href="https://en.wikipedia.org/wiki/Szemer%C3%A9di%27s_theorem" rel="nofollow noreferrer">Szemerédi's theorem</a>: if a set $A\subseteq\mathbb N$ has positive upper density, then for every positive integer $k$ it contains an arithmetic progression of length $k;$ see <a href="http://matwbn.icm.edu.pl/ksiazki/aa/aa27/aa27132.pdf" rel="nofollow noreferrer">Endre Szemerédi, On sets of integers containing no $k$ elements in arithmetic progression, <em>Acta Arithmetica</em> <strong>27</strong> (1975), 199–245</a>. The case $k=4$ was proved earlier: Endre Szemerédi, On sets of integers containing no four elements in arithmetic progression, <em>Acta Math. Acad. Sci. Hungar.</em> <strong>20</strong> (1969), 89–104. (Of course, a "sparse" set contains no arithmetic progression of length $4.$)</p> Answer 26: <p>Here's an explicit, tighter bound for the size of a sparse set. If $A\subseteq \{1,2,\ldots,n\}$ is sparse, then every pair of distinct elements of $A$ has a distinct sum. There are $\Theta(|A|^2)$ pairs of distinct elements, but only $O(n)$ different sums that they can have. By the pigeonhole principle we conclude that $|A|$ is $O(\sqrt{n})$, and hence $|A|/n$ is $O(1/\sqrt{n})$.</p> Answer 27: <p>Yes, it's true. It follows from (the finitary version of) <a href="https://en.wikipedia.org/wiki/Szemer%C3%A9di%27s_theorem" rel="nofollow noreferrer">Szemerédi's theorem</a>: if a set $A\subseteq\mathbb N$ has positive upper density, then for every positive integer $k$ it contains an arithmetic progression of length $k;$ see <a href="http://matwbn.icm.edu.pl/ksiazki/aa/aa27/aa27132.pdf" rel="nofollow noreferrer">Endre Szemerédi, On sets of integers containing no $k$ elements in arithmetic progression, <em>Acta Arithmetica</em> <strong>27</strong> (1975), 199–245</a>. The case $k=4$ was proved earlier: Endre Szemerédi, On sets of integers containing no four elements in arithmetic progression, <em>Acta Math. Acad. Sci. 
Hungar.</em> <strong>20</strong> (1969), 89–104. (Of course, a "sparse" set contains no arithmetic progression of length $4.$)</p> Answer 28: <p>Here's an explicit, tighter bound for the size of a sparse set. If $A\subseteq \{1,2,\ldots,n\}$ is sparse, then every pair of distinct elements of $A$ has a distinct sum. There are $\Theta(|A|^2)$ pairs of distinct elements, but only $O(n)$ different sums that they can have. By the pigeonhole principle we conclude that $|A|$ is $O(\sqrt{n})$, and hence $|A|/n$ is $O(1/\sqrt{n})$.</p> Answer 29: <p>Yes, it's true. It follows from (the finitary version of) <a href="https://en.wikipedia.org/wiki/Szemer%C3%A9di%27s_theorem" rel="nofollow noreferrer">Szemerédi's theorem</a>: if a set $A\subseteq\mathbb N$ has positive upper density, then for every positive integer $k$ it contains an arithmetic progression of length $k;$ see <a href="http://matwbn.icm.edu.pl/ksiazki/aa/aa27/aa27132.pdf" rel="nofollow noreferrer">Endre Szemerédi, On sets of integers containing no $k$ elements in arithmetic progression, <em>Acta Arithmetica</em> <strong>27</strong> (1975), 199–245</a>. The case $k=4$ was proved earlier: Endre Szemerédi, On sets of integers containing no four elements in arithmetic progression, <em>Acta Math. Acad. Sci. Hungar.</em> <strong>20</strong> (1969), 89–104. (Of course, a "sparse" set contains no arithmetic progression of length $4.$)</p> Answer 30: <p>Here's an explicit, tighter bound for the size of a sparse set. If $A\subseteq \{1,2,\ldots,n\}$ is sparse, then every pair of distinct elements of $A$ has a distinct sum. There are $\Theta(|A|^2)$ pairs of distinct elements, but only $O(n)$ different sums that they can have. By the pigeonhole principle we conclude that $|A|$ is $O(\sqrt{n})$, and hence $|A|/n$ is $O(1/\sqrt{n})$.</p>
If $a_n = \frac{1}{2} (a_{n-1}+a_{n-2})$ prove $a_n=2+4 \left(\frac{-1}{2}\right)^n$ for $n \geq 2$
2,085,401
7
https://math.stackexchange.com/questions/2085401/if-a-n-frac12-a-n-1a-n-2-prove-a-n-24-left-frac-12-right
<blockquote> <blockquote> <p>Question: Let $a_1=0, a_2=3$, and, for all $n \geq 3$ let $a_n = \frac{1}{2} (a_{n-1}+a_{n-2})$ </p> <p>The sequence $(a_n)$ is said to be defined recursively.</p> <p>By induction on $n$, show that, for all $n \geq 2$,</p> <p>$$a_n=2+4 \left(\frac{-1}{2}\right)^n$$</p> <p>And deduce that $(a_n) \rightarrow 2$</p> </blockquote> </blockquote> <p>My attempt:</p> <p>$ a_1 = 0 , a_2 = 3 , \ldots $</p> <p>$\forall n \geq 3$ let $a_n = \frac{1}{2}(a_{n-1} + a_{n-2})$</p> <p>Show inductively that $\forall n \geq 2$</p> <p>$ a_n = 2 + 4(- \frac{1}{2})^n $</p> <p>And deduce that $(a_n) \rightarrow 2$</p> <p>So $\ldots$</p> <p>let $n$ = $3$ then $a_3 = \frac{1}{2}(a_2 + a_{1}) = \frac{1}{2}(3 + 0) = \frac{3}{2}$</p> <p>This is equal to $2 + 4(- \frac{1}{2})^3$, so this holds.</p> <p>Assume that this holds up to $n = k$, then for $n = k + 1$;</p> <p>$a_{k + 1} = \frac{1}{2}(a_k + a_{k-1})$</p> <p>$a_k$ is known as $a_k = 2 + 4(- \frac{1}{2})^k$</p> <p>And $a_{k - 1}$ is also known as $a_{k - 1}= 2 + 4(- \frac{1}{2})^{k - 1}$</p> <p>so I then have</p> <p>$2a_{k+1} = (a_k + a_{k-1})$, and subbing in the values above for $a_k$ and $a_{k-1}$ gives</p> <p>$2a_{k+1} = ((2 + 4(- \frac{1}{2})^k) + (2 + 4(- \frac{1}{2})^{k - 1}))$</p> <p>Which can be simplified to</p> <p>$a_{k+1} = (1 + 2(- \frac{1}{2})^k + 1 + 2(- \frac{1}{2})^{k - 1})$</p> <p>and</p> <p>$a_{k+1} = 2 + 2(- \frac{1}{2})^k + 2(- \frac{1}{2})^{k - 1}$</p> <p>Using exponent laws for $ 2(- \frac{1}{2})^{k - 1}$ gives</p> <p>\begin{equation*} \begin{aligned} 2(- \frac{1}{2})^{k - 1} &amp;= 2(-\frac{1}{2})^{k} \times (- \frac{1}{2})^{-1} \\ &amp;= 2(-\frac{1}{2})^{k} \times (-2) \\ &amp;= -4(-\frac{1}{2})^{k} \\ \end{aligned} \end{equation*}</p> <p>Subbing this back into $a_{k+1} = 2 + 2(- \frac{1}{2})^k + 2(- \frac{1}{2})^{k - 1}$ as</p> <p>\begin{equation*} \begin{aligned} a_{k+1} &amp;= 2 + 2(- \frac{1}{2})^k -4(-\frac{1}{2})^{k} \\ &amp;= 2 - 2(- \frac{1}{2})^k \end{aligned} \end{equation*}</p> <p>If the
induction holds then</p> <p>$ a_{k + 1} = 2 - 2(- \frac{1}{2})^k = 2 + 4(- \frac{1}{2})^{k + 1}$</p> <p>Using power laws on $ 4(- \frac{1}{2})^{k + 1}$ gives</p> <p>\begin{equation*} \begin{aligned} 4(- \frac{1}{2})^{k + 1} &amp;= 4(- \frac{1}{2})^{k } \times (- \frac{1}{2})^{1} \\ &amp;= 4(- \frac{1}{2})^{k } \times (- \frac{1}{2}) \\ &amp;= -2(- \frac{1}{2})^{k } \\ \end{aligned} \end{equation*}</p> <p>Using this result gives</p> <p>\begin{equation*} \begin{aligned} a_{k + 1} &amp;= 2 - 2(- \frac{1}{2})^k \\ &amp;= 2 + 4(- \frac{1}{2})^{k + 1} \\ &amp;= 2 -2(- \frac{1}{2})^{k } \\ \end{aligned} \end{equation*}</p> <p>As required.</p> <p>Therefore by the inductive hypothesis $\forall n \geq 2, a_n = 2 + 4(- \frac{1}{2})^n $</p> <p>Deduce that $(a_n) \rightarrow 2$</p> <p>If $(a_n) \rightarrow 2$ then as $n \rightarrow \infty, a_n \rightarrow 2$. So the limit of the sequence is $2$.</p> <p>Using the definition of convergence:</p> <p>$\forall \epsilon &gt; 0$ there exist natural numbers $N,n$ where $n&gt;N$ such that $|2 + 4(- \frac{1}{2})^n - 2| &lt; \epsilon$</p> <p>Which simplifies to $|4(- \frac{1}{2})^n| &lt; \epsilon$ and $4(\frac{1}{2})^n &lt; \epsilon$</p> <p>Then, given $\epsilon &gt; 0$ choose $N$ such that the minimum value of $N$ is $\log_2(\frac{4}{\epsilon})$ such that for all $n &gt; N, a_n &lt; \epsilon$</p> <p>This can be seen as</p> <p>\begin{equation*} \begin{aligned} N &amp;&gt; \log_2(\frac{4}{\epsilon}) \\ 2^N &amp;&gt; \frac{4}{\epsilon} \\ \frac{1}{2^N} &amp;&lt; \frac{\epsilon}{4} \\ \frac{4}{2^N} &amp;&lt; \epsilon \\ \end{aligned} \end{equation*}</p> <p>As required.</p> <p>This proves that $a_n \rightarrow 2$</p> <p>Not sure if this is correct but could someone please check and if there is an easier way of proving this let me know!</p>
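[Editorial note, not part of the original post.] Both the closed form and the convergence claim above can be sanity-checked numerically. The sketch below (with hypothetical helper names `a_recursive` and `a_closed`) iterates the recurrence from $a_1 = 0$, $a_2 = 3$ and compares each term with $2 + 4(-\tfrac{1}{2})^n$:

```python
# Numerical check of the closed form a_n = 2 + 4*(-1/2)**n against the
# recurrence a_n = (a_{n-1} + a_{n-2}) / 2 with a_1 = 0, a_2 = 3.

def a_recursive(n):
    """Compute a_n directly from the recurrence (n >= 1)."""
    a_prev2, a_prev1 = 0.0, 3.0  # a_1, a_2
    if n == 1:
        return a_prev2
    for _ in range(n - 2):
        a_prev2, a_prev1 = a_prev1, (a_prev1 + a_prev2) / 2
    return a_prev1

def a_closed(n):
    """Closed form claimed in the question."""
    return 2 + 4 * (-0.5) ** n

# The two agree for every n checked (the closed form even matches a_1 = 0).
for n in range(1, 20):
    assert abs(a_recursive(n) - a_closed(n)) < 1e-12

# Convergence to 2: the deviation |a_n - 2| = 4*(1/2)**n shrinks geometrically.
assert abs(a_closed(30) - 2) < 1e-8
```

The assertion on $n = 30$ mirrors the $\epsilon$–$N$ argument in the post: for $\epsilon = 10^{-8}$, any $N > \log_2(4/\epsilon)$ works, since $|a_n - 2| = 4/2^n$.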
[ { "answer_id": 2085407, "score": 9, "text": "<p>Hint: &nbsp;rewrite the recursion as $2(a_n - a_{n-1})=-(a_{n-1}-a_{n-2})$. With $b_n = a_n - a_{n-1}\\,$: $$b_n = -\\,\\frac{1}{2} \\,b_{n-1}$$</p>\n\n<p>Therefore $b_n$ is a geometric progression, and $a_n=b_n+a_{n-1}=b_n+b_{n-1}+a_{n-2}=\\cdots$</p>\n" }, { "answer_id": 2085419, "score": 5, "text": "<p>By a well known theorem the general solution of $a_n = \\dfrac{1}{2}(a_{n-1} + a_{n-2})$ can be found by means of the characteristic equation:\n$$\\lambda^2=\\dfrac{1}{2}(\\lambda+1)\\Leftrightarrow \\lambda^2-\\dfrac{1}{2}\\lambda-\\dfrac{1}{2}=0\\Leftrightarrow \\lambda=1\\vee\\lambda=-1/2,\\text{ so }a_n=C_1(1)^n+C_2\\left(-\\dfrac{1}{2}\\right)^n.$$ $$n=1\\Rightarrow 0=C_1-\\dfrac{1}{2}C_2,\\quad n=2\\Rightarrow 3=C_1+\\dfrac{1}{4}C_2.$$ Solving the system we get $C_1=2,\\;C_2=4$, so $$\\boxed{\\;a_n=2+4\\left(-\\dfrac{1}{2}\\right)^n\\;}$$</p>\n" } ]
Question: If $a_n = \frac{1}{2} (a_{n-1}+a_{n-2})$ prove $a_n=2+4 \left(\frac{-1}{2}\right)^n$ for $n \geq 2$ Answer 1: <p>Hint: &nbsp;rewrite the recursion as $2(a_n - a_{n-1})=-(a_{n-1}-a_{n-2})$. With $b_n = a_n - a_{n-1}\,$: $$b_n = -\,\frac{1}{2} \,b_{n-1}$$</p> <p>Therefore $b_n$ is a geometric progression, and $a_n=b_n+a_{n-1}=b_n+b_{n-1}+a_{n-2}=\cdots$</p> Answer 2: <p>By a well-known theorem the general solution of $a_n = \dfrac{1}{2}(a_{n-1} + a_{n-2})$ can be found by means of the characteristic equation: $$\lambda^2-\dfrac{1}{2}(\lambda+1)=0\Leftrightarrow \ldots \Leftrightarrow \lambda=1\vee\lambda=-1/2,\text{ so }a_n=C_1(1)^n+C_2\left(-\dfrac{1}{2}\right)^n.$$ $$n=1\Rightarrow 0=C_1-\dfrac{1}{2}C_2,\quad n=2\Rightarrow 3=C_1+\dfrac{1}{4}C_2.$$ Solving the system we get $C_1=2,\;C_2=4$, so $$\boxed{\;a_n=2+4\left(-\dfrac{1}{2}\right)^n\;}$$</p>
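As a quick sanity check (not part of the original thread), a few lines of Python confirm that the boxed closed form satisfies the recurrence exactly, using rational arithmetic so no floating-point tolerance is needed. The function name `a` is ad hoc.

```python
# Sketch: exact verification that a_n = 2 + 4*(-1/2)**n solves
# a_n = (a_{n-1} + a_{n-2}) / 2 with the initial values a_1 = 0, a_2 = 3
# used in Answer 2.
from fractions import Fraction

def a(n):
    # Closed form from the boxed answer.
    return 2 + 4 * Fraction(-1, 2) ** n

assert a(1) == 0 and a(2) == 3
for n in range(3, 50):
    assert a(n) == (a(n - 1) + a(n - 2)) / 2
print("closed form verified")
```

Because every value is a `Fraction`, the equality tests are exact rather than approximate.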
Is it possible to write $\tan^{-1}(x)$ as a power series of $\tanh(x)$?
2,085,742
7
https://math.stackexchange.com/questions/2085742/is-it-possible-to-write-tan-1x-as-a-power-series-of-tanhx
<p>$\tan^{-1}(x)$ looks very similar to $\tanh(x)$ if $x$ is small enough.</p> <p>Look.</p> <p><a href="https://i.stack.imgur.com/ZMy49.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ZMy49.jpg" alt="plot 1"></a></p> <p>But they diverge from each other as $x$ grows.</p> <p><a href="https://i.stack.imgur.com/eOyQh.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/eOyQh.jpg" alt="plot2"></a></p> <p>And for very big $x$'s, They almost represent the constant functions $1$ and $\frac \pi 2$ (for $\tanh(x)$ and $\tan^{-1}(x)$, respectively).</p> <p><a href="https://i.stack.imgur.com/I8rOh.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/I8rOh.jpg" alt="plot3"></a></p> <p>Is it possible to write $\tan^{-1}(x)$ as a power expansion of $\tanh(x)$?</p> <p>I mean can we say this?</p> <p>$$\tan^{-1}(x)=\sum^{\infty}_{i=0} \alpha_i \tanh^i(x)$$</p> <p><hr> The power series is the thing I want. Not the resemblance between them.</p>
[ { "answer_id": 2085777, "score": 2, "text": "<p>Let $u=\\tanh x \\iff \\tanh^{-1}u=x$. Then it is enough to expand $\\tan^{-1}\\tanh^{-1}u$ around $u=0$.</p>\n\n<p>You will find that $\\tan^{-1}\\tanh^{-1}u=u+\\frac{u^5}{15}+\\frac{u^7}{45}+\\frac{64u^9}{2835}+O(u^{11})$, and thus</p>\n\n<p>$$\\tan^{-1}x=\\tanh x+\\frac{(\\tanh x)^5}{15}+\\frac{(\\tanh x)^7}{45}+...$$</p>\n" }, { "answer_id": 2085790, "score": 0, "text": "<p>A lazy way would be linear regression $$\\tan^{-1}(x)=\\sum^{n}_{i=0} \\alpha_i \\tanh^{2i+1}(x)$$ and, for the range $-1 \\leq x \\leq 1$, you could get, using $y=\\tanh(x)$,$$\\tan^{-1}(x)=0.990525 y+0.062262 y^3$$ $$\\tan^{-1}(x)=1.00139 y-0.0178282 y^3+0.117721 y^5$$ $$\\tan^{-1}(x)=0.999706 y+0.00644068 y^3+0.0302925 y^5+0.0897757 y^7$$</p>\n\n<p><strong>Edit</strong></p>\n\n<p>In Will Sherwood's good solution, the cubic term is missing (and this is correct based on the used approach). However performing the last given linear regression, the corresponding term is highly significant\n$$\\begin{array}{clclclclc}\n \\text{} &amp; \\text{Estimate} &amp; \\text{Standard Error} &amp; \\text{Confidence Interval} \\\\\n a &amp; 0.999706 &amp; 0.000026 &amp; \\{0.999654,0.999758\\} \\\\\n b &amp; 0.006441 &amp; 0.000306 &amp; \\{0.005836,0.007045\\} \\\\\n c &amp; 0.030293 &amp; 0.001044 &amp; \\{0.028233,0.032352\\} \\\\\n d &amp; 0.089776 &amp; 0.001061 &amp; \\{0.087684,0.091867\\} \\\\\n\\end{array}$$ and, over the considered range, the curve fit is better than the theoretical septic expansion as soon as $x &gt;0.5$.</p>\n" } ]
Question: Is it possible to write $\tan^{-1}(x)$ as a power series of $\tanh(x)$?
Answer 1: <p>Let $u=\tanh x \iff \tanh^{-1}u=x$. Then it is enough to expand $\tan^{-1}\tanh^{-1}u$ around $u=0$.</p>

<p>You will find that $\tan^{-1}\tanh^{-1}u=u+\frac{u^5}{15}+\frac{u^7}{45}+\frac{64u^9}{2835}+O(u^{11})$, and thus</p>

<p>$$\tan^{-1}x=\tanh x+\frac{(\tanh x)^5}{15}+\frac{(\tanh x)^7}{45}+...$$</p>
Answer 2: <p>A lazy way would be linear regression $$\tan^{-1}(x)=\sum^{n}_{i=0} \alpha_i \tanh^{2i+1}(x)$$ and, for the range $-1 \leq x \leq 1$, you could get, using $y=\tanh(x)$,$$\tan^{-1}(x)=0.990525 y+0.062262 y^3$$ $$\tan^{-1}(x)=1.00139 y-0.0178282 y^3+0.117721 y^5$$ $$\tan^{-1}(x)=0.999706 y+0.00644068 y^3+0.0302925 y^5+0.0897757 y^7$$</p>

<p><strong>Edit</strong></p>

<p>In Will Sherwood's good solution, the cubic term is missing (and this is correct based on the used approach). However performing the last given linear regression, the corresponding term is highly significant
$$\begin{array}{clclclclc}
 \text{} &amp; \text{Estimate} &amp; \text{Standard Error} &amp; \text{Confidence Interval} \\
 a &amp; 0.999706 &amp; 0.000026 &amp; \{0.999654,0.999758\} \\
 b &amp; 0.006441 &amp; 0.000306 &amp; \{0.005836,0.007045\} \\
 c &amp; 0.030293 &amp; 0.001044 &amp; \{0.028233,0.032352\} \\
 d &amp; 0.089776 &amp; 0.001061 &amp; \{0.087684,0.091867\} \\
\end{array}$$ and, over the considered range, the curve fit is better than the theoretical septic expansion as soon as $x &gt;0.5$.</p>
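Answer 1's substitution $u=\tanh x$ can be checked numerically. The sketch below is not part of the original answers; it is a minimal standard-library Python check that $\tan^{-1}x$ agrees with the truncated series $\tanh x+\tanh^5 x/15+\tanh^7 x/45$, whose first omitted term is $64\tanh^9 x/2835$, so the error should shrink rapidly as $x\to 0$.

```python
import math

def atan_via_tanh(x):
    """Truncated series from Answer 1: atan(x) ~ u + u^5/15 + u^7/45, u = tanh(x)."""
    u = math.tanh(x)
    return u + u**5 / 15 + u**7 / 45

# Truncation error is roughly 64*u^9/2835, so it drops fast for small x.
err_small = abs(atan_via_tanh(0.05) - math.atan(0.05))
err_mid = abs(atan_via_tanh(0.2) - math.atan(0.2))
```

The two sample points $x=0.05$ and $x=0.2$ are arbitrary choices; any small arguments illustrate the same decay of the truncation error.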
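Answer 2's "lazy" least-squares fit can be reproduced in a few lines. The sketch below is an illustration rather than the answer's exact computation: the 201-point grid on $[-1,1]$ is my own choice, and NumPy is used for the linear algebra. It fits $\tan^{-1}x$ against the odd powers $y, y^3, y^5, y^7$ of $y=\tanh x$, matching the quartet of coefficients $(a,b,c,d)$ reported in the answer.

```python
import numpy as np

x = np.linspace(-1.0, 1.0, 201)
y = np.tanh(x)

# Design matrix with the odd powers y, y^3, y^5, y^7 used in Answer 2.
A = np.stack([y, y**3, y**5, y**7], axis=1)
coef, _, _, _ = np.linalg.lstsq(A, np.arctan(x), rcond=None)

# Worst-case residual of the fitted septic over the sample grid.
max_resid = float(np.max(np.abs(A @ coef - np.arctan(x))))
```

With a comparable grid, `coef[0]` should land very close to the answer's $a\approx 0.9997$, and the leading coefficient `coef[3]` (the $y^7$ term, $d\approx 0.09$) comes out positive as in the answer's table.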
Continuous with compact support implies uniform continuity
2,086,124
7
https://math.stackexchange.com/questions/2086124/continuous-with-compact-support-implies-uniform-continuity
<p>This might be a duplicate but I tried googling the MSE site and could not find a satisfactory answer.</p> <p>Let $(X, d)$ be a metric space and $f$ be a real valued continuous function on $X$. Suppose $f$ has a compact support. Does this imply the uniform continuity of $f$? </p> <p>I tried proving this statement, but only for locally connected spaces have I succeeded in doing so. Is this true for general metric spaces? I just couldn't provide a proof (or a counterexample) by myself. Please enlighten me.</p>
[ { "answer_id": 2086199, "score": 7, "text": "<p>Let $f:X\\to\\mathbf{R}$ satisfying your hypotheses. There exists $K$ compact s.t. $f_{|X \\setminus K}=0$.</p>\n\n<p>Let's show $f$ is uniformly continuous. Let $\\epsilon&gt;0$. By Heine theorem applied to $f_{|K}$, $f$ is uniformly continuous on $K$:</p>\n\n<p>$$\\exists \\delta_1&gt;0 \\; \\forall (x, y) \\in K^2 \\; d(x,y) &lt; \\delta_1 \\implies |f(x)-f(y)| &lt; \\epsilon$$</p>\n\n<p>If $(x, y) \\in (X\\setminus K)^2$, $\\delta_1$ works.</p>\n\n<p>Now suppose by contradiction that there exists $\\epsilon&gt;0$ s.t. for all $n \\in \\mathbf{N}$, there exists $x_n \\in K$, $y_n \\in X\\setminus K$ s.t. $d(x_n, y_n) &lt; 2^{-n}$ and $|f(x_n) - f(y_n)| &gt; \\epsilon$. $(x_n)$ lies in a compact space, so there exists $\\sigma$ strictly increasing and $x \\in K$ s.t. $x_{\\sigma(n)} \\to x$. Then $y_{\\sigma(n)} \\to x$, but $|f(x_{\\sigma(n)}) -f(y_{\\sigma(n)})| &gt; \\epsilon$, thus $f$ is not continuous in $x$, contradiction.</p>\n\n<p>Hence:</p>\n\n<p>$$\\exists \\delta_2&gt;0 \\; \\forall (x, y) \\in K \\times (X \\setminus K) \\; d(x,y) &lt; \\delta_2 \\implies |f(x)-f(y)| &lt; \\epsilon$$</p>\n\n<p>Finally $\\delta = \\min(\\delta_1, \\delta_2)$ works.</p>\n" }, { "answer_id": 2086325, "score": 0, "text": "<p>Let $X_1=X$ \\ $f^{-1}\\{0\\}.$</p>\n\n<p>(1). For $r&gt;0$ there exists $s&gt;0$ such that $\\forall x\\in \\overline {X_1}\\;(|f(x)|\\geq r\\implies B_d(x,s)\\subset \\overline {X_1}).$ </p>\n\n<p>If not, then for $n\\in \\mathbb N,$ take $x_n\\in \\overline {X_1}$ and $y_n\\in X$ \\ $\\overline {X_1}$ with $|f(x_n)|\\geq r$ and $d(x_n,y_n)&lt;1/n.$ By compactness of $\\overline {X_1}$, take a convergent subsequence $(x_{n_i})_i$ converging to $p\\in \\overline {X_1}.$ Then $(f(x_{n_i}))_i$ converges to $f(p),$ so $|f(p)|\\geq r.$ But $(y_{n_i})_i$ also converges to $p,$ and $f(y_{n_i})=0,$ contradicting the continuity of $f.$</p>\n\n<p>(2). 
For $r&gt;0,$ take $s&gt;0$ such that $$\\cup \\{B_d(x,s): |f(x)|\\geq r \\} \\subset \\overline {X_1}.$$ Now $f|_{\\overline {X_1}}$ is uniformly continuous because $\\overline {X_1}$ is compact. So Take $s'\\in (0,s)$ such that $$\\forall x,y\\in \\overline {X_1}\\;(d(x,y)&lt;s'\\implies |f(x)-f(y)|&lt;r).$$ Now if $x\\in \\overline {X_1}$ and $y\\in X$ \\ $\\overline {X_1}$ with $d(x,y)&lt;s'&lt;s$ then $|f(x)|&lt;r$, so $|f(x)-f(y)|=|f(y)-0|&lt;r.$ And of course if $x,y \\in X$ \\ $X_1$ then $|f(x)-f(y)|=0&lt;r.$ </p>\n" } ]
Then $y_{\\sigma(n)} \\to x$, but $|f(x_{\\sigma(n)}) -f(y_{\\sigma(n)})| &gt; \\epsilon$, thus $f$ is not continuous in $x$, contradiction.</p>\n\n<p>Hence:</p>\n\n<p>$$\\exists \\delta_2&gt;0 \\; \\forall (x, y) \\in K \\times (X \\setminus K) \\; d(x,y) &lt; \\delta_2 \\implies |f(x)-f(y)| &lt; \\epsilon$$</p>\n\n<p>Finally $\\delta = \\min(\\delta_1, \\delta_2)$ works.</p>\n" }, { "answer_id": 2086325, "score": 0, "text": "<p>Let $X_1=X$ \\ $f^{-1}\\{0\\}.$</p>\n\n<p>(1). For $r&gt;0$ there exists $s&gt;0$ such that $\\forall x\\in \\overline {X_1}\\;(|f(x)|\\geq r\\implies B_d(x,s)\\subset \\overline {X_1}).$ </p>\n\n<p>If not, then for $n\\in \\mathbb N,$ take $x_n\\in \\overline {X_1}$ and $y_n\\in X$ \\ $\\overline {X_1}$ with $|f(x_n)|\\geq r$ and $d(x_n,y_n)&lt;1/n.$ By compactness of $\\overline {X_1}$, take a convergent subsequence $(x_{n_i})_i$ converging to $p\\in \\overline {X_1}.$ Then $(f(x_{n_i}))_i$ converges to $f(p),$ so $|f(p)|\\geq r.$ But $(y_{n_i})_i$ also converges to $p,$ and $f(y_{n_i})=0,$ contradicting the continuity of $f.$</p>\n\n<p>(2). For $r&gt;0,$ take $s&gt;0$ such that $$\\cup \\{B_d(x,s): |f(x)|\\geq r \\} \\subset \\overline {X_1}.$$ Now $f|_{\\overline {X_1}}$ is uniformly continuous because $\\overline {X_1}$ is compact. So Take $s'\\in (0,s)$ such that $$\\forall x,y\\in \\overline {X_1}\\;(d(x,y)&lt;s'\\implies |f(x)-f(y)|&lt;r).$$ Now if $x\\in \\overline {X_1}$ and $y\\in X$ \\ $\\overline {X_1}$ with $d(x,y)&lt;s'&lt;s$ then $|f(x)|&lt;r$, so $|f(x)-f(y)|=|f(y)-0|&lt;r.$ And of course if $x,y \\in X$ \\ $X_1$ then $|f(x)-f(y)|=0&lt;r.$ </p>\n" }, { "answer_id": 2086199, "score": 7, "text": "<p>Let $f:X\\to\\mathbf{R}$ satisfying your hypotheses. There exists $K$ compact s.t. $f_{|X \\setminus K}=0$.</p>\n\n<p>Let's show $f$ is uniformly continuous. Let $\\epsilon&gt;0$. 
By Heine theorem applied to $f_{|K}$, $f$ is uniformly continuous on $K$:</p>\n\n<p>$$\\exists \\delta_1&gt;0 \\; \\forall (x, y) \\in K^2 \\; d(x,y) &lt; \\delta_1 \\implies |f(x)-f(y)| &lt; \\epsilon$$</p>\n\n<p>If $(x, y) \\in (X\\setminus K)^2$, $\\delta_1$ works.</p>\n\n<p>Now suppose by contradiction that there exists $\\epsilon&gt;0$ s.t. for all $n \\in \\mathbf{N}$, there exists $x_n \\in K$, $y_n \\in X\\setminus K$ s.t. $d(x_n, y_n) &lt; 2^{-n}$ and $|f(x_n) - f(y_n)| &gt; \\epsilon$. $(x_n)$ lies in a compact space, so there exists $\\sigma$ strictly increasing and $x \\in K$ s.t. $x_{\\sigma(n)} \\to x$. Then $y_{\\sigma(n)} \\to x$, but $|f(x_{\\sigma(n)}) -f(y_{\\sigma(n)})| &gt; \\epsilon$, thus $f$ is not continuous in $x$, contradiction.</p>\n\n<p>Hence:</p>\n\n<p>$$\\exists \\delta_2&gt;0 \\; \\forall (x, y) \\in K \\times (X \\setminus K) \\; d(x,y) &lt; \\delta_2 \\implies |f(x)-f(y)| &lt; \\epsilon$$</p>\n\n<p>Finally $\\delta = \\min(\\delta_1, \\delta_2)$ works.</p>\n" }, { "answer_id": 2086325, "score": 0, "text": "<p>Let $X_1=X$ \\ $f^{-1}\\{0\\}.$</p>\n\n<p>(1). For $r&gt;0$ there exists $s&gt;0$ such that $\\forall x\\in \\overline {X_1}\\;(|f(x)|\\geq r\\implies B_d(x,s)\\subset \\overline {X_1}).$ </p>\n\n<p>If not, then for $n\\in \\mathbb N,$ take $x_n\\in \\overline {X_1}$ and $y_n\\in X$ \\ $\\overline {X_1}$ with $|f(x_n)|\\geq r$ and $d(x_n,y_n)&lt;1/n.$ By compactness of $\\overline {X_1}$, take a convergent subsequence $(x_{n_i})_i$ converging to $p\\in \\overline {X_1}.$ Then $(f(x_{n_i}))_i$ converges to $f(p),$ so $|f(p)|\\geq r.$ But $(y_{n_i})_i$ also converges to $p,$ and $f(y_{n_i})=0,$ contradicting the continuity of $f.$</p>\n\n<p>(2). For $r&gt;0,$ take $s&gt;0$ such that $$\\cup \\{B_d(x,s): |f(x)|\\geq r \\} \\subset \\overline {X_1}.$$ Now $f|_{\\overline {X_1}}$ is uniformly continuous because $\\overline {X_1}$ is compact. 
So Take $s'\\in (0,s)$ such that $$\\forall x,y\\in \\overline {X_1}\\;(d(x,y)&lt;s'\\implies |f(x)-f(y)|&lt;r).$$ Now if $x\\in \\overline {X_1}$ and $y\\in X$ \\ $\\overline {X_1}$ with $d(x,y)&lt;s'&lt;s$ then $|f(x)|&lt;r$, so $|f(x)-f(y)|=|f(y)-0|&lt;r.$ And of course if $x,y \\in X$ \\ $X_1$ then $|f(x)-f(y)|=0&lt;r.$ </p>\n" }, { "answer_id": 2086199, "score": 7, "text": "<p>Let $f:X\\to\\mathbf{R}$ satisfying your hypotheses. There exists $K$ compact s.t. $f_{|X \\setminus K}=0$.</p>\n\n<p>Let's show $f$ is uniformly continuous. Let $\\epsilon&gt;0$. By Heine theorem applied to $f_{|K}$, $f$ is uniformly continuous on $K$:</p>\n\n<p>$$\\exists \\delta_1&gt;0 \\; \\forall (x, y) \\in K^2 \\; d(x,y) &lt; \\delta_1 \\implies |f(x)-f(y)| &lt; \\epsilon$$</p>\n\n<p>If $(x, y) \\in (X\\setminus K)^2$, $\\delta_1$ works.</p>\n\n<p>Now suppose by contradiction that there exists $\\epsilon&gt;0$ s.t. for all $n \\in \\mathbf{N}$, there exists $x_n \\in K$, $y_n \\in X\\setminus K$ s.t. $d(x_n, y_n) &lt; 2^{-n}$ and $|f(x_n) - f(y_n)| &gt; \\epsilon$. $(x_n)$ lies in a compact space, so there exists $\\sigma$ strictly increasing and $x \\in K$ s.t. $x_{\\sigma(n)} \\to x$. Then $y_{\\sigma(n)} \\to x$, but $|f(x_{\\sigma(n)}) -f(y_{\\sigma(n)})| &gt; \\epsilon$, thus $f$ is not continuous in $x$, contradiction.</p>\n\n<p>Hence:</p>\n\n<p>$$\\exists \\delta_2&gt;0 \\; \\forall (x, y) \\in K \\times (X \\setminus K) \\; d(x,y) &lt; \\delta_2 \\implies |f(x)-f(y)| &lt; \\epsilon$$</p>\n\n<p>Finally $\\delta = \\min(\\delta_1, \\delta_2)$ works.</p>\n" }, { "answer_id": 2086325, "score": 0, "text": "<p>Let $X_1=X$ \\ $f^{-1}\\{0\\}.$</p>\n\n<p>(1). 
For $r&gt;0$ there exists $s&gt;0$ such that $\\forall x\\in \\overline {X_1}\\;(|f(x)|\\geq r\\implies B_d(x,s)\\subset \\overline {X_1}).$ </p>\n\n<p>If not, then for $n\\in \\mathbb N,$ take $x_n\\in \\overline {X_1}$ and $y_n\\in X$ \\ $\\overline {X_1}$ with $|f(x_n)|\\geq r$ and $d(x_n,y_n)&lt;1/n.$ By compactness of $\\overline {X_1}$, take a convergent subsequence $(x_{n_i})_i$ converging to $p\\in \\overline {X_1}.$ Then $(f(x_{n_i}))_i$ converges to $f(p),$ so $|f(p)|\\geq r.$ But $(y_{n_i})_i$ also converges to $p,$ and $f(y_{n_i})=0,$ contradicting the continuity of $f.$</p>\n\n<p>(2). For $r&gt;0,$ take $s&gt;0$ such that $$\\cup \\{B_d(x,s): |f(x)|\\geq r \\} \\subset \\overline {X_1}.$$ Now $f|_{\\overline {X_1}}$ is uniformly continuous because $\\overline {X_1}$ is compact. So Take $s'\\in (0,s)$ such that $$\\forall x,y\\in \\overline {X_1}\\;(d(x,y)&lt;s'\\implies |f(x)-f(y)|&lt;r).$$ Now if $x\\in \\overline {X_1}$ and $y\\in X$ \\ $\\overline {X_1}$ with $d(x,y)&lt;s'&lt;s$ then $|f(x)|&lt;r$, so $|f(x)-f(y)|=|f(y)-0|&lt;r.$ And of course if $x,y \\in X$ \\ $X_1$ then $|f(x)-f(y)|=0&lt;r.$ </p>\n" }, { "answer_id": 2086199, "score": 7, "text": "<p>Let $f:X\\to\\mathbf{R}$ satisfying your hypotheses. There exists $K$ compact s.t. $f_{|X \\setminus K}=0$.</p>\n\n<p>Let's show $f$ is uniformly continuous. Let $\\epsilon&gt;0$. By Heine theorem applied to $f_{|K}$, $f$ is uniformly continuous on $K$:</p>\n\n<p>$$\\exists \\delta_1&gt;0 \\; \\forall (x, y) \\in K^2 \\; d(x,y) &lt; \\delta_1 \\implies |f(x)-f(y)| &lt; \\epsilon$$</p>\n\n<p>If $(x, y) \\in (X\\setminus K)^2$, $\\delta_1$ works.</p>\n\n<p>Now suppose by contradiction that there exists $\\epsilon&gt;0$ s.t. for all $n \\in \\mathbf{N}$, there exists $x_n \\in K$, $y_n \\in X\\setminus K$ s.t. $d(x_n, y_n) &lt; 2^{-n}$ and $|f(x_n) - f(y_n)| &gt; \\epsilon$. $(x_n)$ lies in a compact space, so there exists $\\sigma$ strictly increasing and $x \\in K$ s.t. $x_{\\sigma(n)} \\to x$. 
Then $y_{\\sigma(n)} \\to x$, but $|f(x_{\\sigma(n)}) -f(y_{\\sigma(n)})| &gt; \\epsilon$, thus $f$ is not continuous in $x$, contradiction.</p>\n\n<p>Hence:</p>\n\n<p>$$\\exists \\delta_2&gt;0 \\; \\forall (x, y) \\in K \\times (X \\setminus K) \\; d(x,y) &lt; \\delta_2 \\implies |f(x)-f(y)| &lt; \\epsilon$$</p>\n\n<p>Finally $\\delta = \\min(\\delta_1, \\delta_2)$ works.</p>\n" }, { "answer_id": 2086325, "score": 0, "text": "<p>Let $X_1=X$ \\ $f^{-1}\\{0\\}.$</p>\n\n<p>(1). For $r&gt;0$ there exists $s&gt;0$ such that $\\forall x\\in \\overline {X_1}\\;(|f(x)|\\geq r\\implies B_d(x,s)\\subset \\overline {X_1}).$ </p>\n\n<p>If not, then for $n\\in \\mathbb N,$ take $x_n\\in \\overline {X_1}$ and $y_n\\in X$ \\ $\\overline {X_1}$ with $|f(x_n)|\\geq r$ and $d(x_n,y_n)&lt;1/n.$ By compactness of $\\overline {X_1}$, take a convergent subsequence $(x_{n_i})_i$ converging to $p\\in \\overline {X_1}.$ Then $(f(x_{n_i}))_i$ converges to $f(p),$ so $|f(p)|\\geq r.$ But $(y_{n_i})_i$ also converges to $p,$ and $f(y_{n_i})=0,$ contradicting the continuity of $f.$</p>\n\n<p>(2). For $r&gt;0,$ take $s&gt;0$ such that $$\\cup \\{B_d(x,s): |f(x)|\\geq r \\} \\subset \\overline {X_1}.$$ Now $f|_{\\overline {X_1}}$ is uniformly continuous because $\\overline {X_1}$ is compact. So Take $s'\\in (0,s)$ such that $$\\forall x,y\\in \\overline {X_1}\\;(d(x,y)&lt;s'\\implies |f(x)-f(y)|&lt;r).$$ Now if $x\\in \\overline {X_1}$ and $y\\in X$ \\ $\\overline {X_1}$ with $d(x,y)&lt;s'&lt;s$ then $|f(x)|&lt;r$, so $|f(x)-f(y)|=|f(y)-0|&lt;r.$ And of course if $x,y \\in X$ \\ $X_1$ then $|f(x)-f(y)|=0&lt;r.$ </p>\n" }, { "answer_id": 2086199, "score": 7, "text": "<p>Let $f:X\\to\\mathbf{R}$ satisfying your hypotheses. There exists $K$ compact s.t. $f_{|X \\setminus K}=0$.</p>\n\n<p>Let's show $f$ is uniformly continuous. Let $\\epsilon&gt;0$. 
By Heine theorem applied to $f_{|K}$, $f$ is uniformly continuous on $K$:</p>\n\n<p>$$\\exists \\delta_1&gt;0 \\; \\forall (x, y) \\in K^2 \\; d(x,y) &lt; \\delta_1 \\implies |f(x)-f(y)| &lt; \\epsilon$$</p>\n\n<p>If $(x, y) \\in (X\\setminus K)^2$, $\\delta_1$ works.</p>\n\n<p>Now suppose by contradiction that there exists $\\epsilon&gt;0$ s.t. for all $n \\in \\mathbf{N}$, there exists $x_n \\in K$, $y_n \\in X\\setminus K$ s.t. $d(x_n, y_n) &lt; 2^{-n}$ and $|f(x_n) - f(y_n)| &gt; \\epsilon$. $(x_n)$ lies in a compact space, so there exists $\\sigma$ strictly increasing and $x \\in K$ s.t. $x_{\\sigma(n)} \\to x$. Then $y_{\\sigma(n)} \\to x$, but $|f(x_{\\sigma(n)}) -f(y_{\\sigma(n)})| &gt; \\epsilon$, thus $f$ is not continuous in $x$, contradiction.</p>\n\n<p>Hence:</p>\n\n<p>$$\\exists \\delta_2&gt;0 \\; \\forall (x, y) \\in K \\times (X \\setminus K) \\; d(x,y) &lt; \\delta_2 \\implies |f(x)-f(y)| &lt; \\epsilon$$</p>\n\n<p>Finally $\\delta = \\min(\\delta_1, \\delta_2)$ works.</p>\n" }, { "answer_id": 2086325, "score": 0, "text": "<p>Let $X_1=X$ \\ $f^{-1}\\{0\\}.$</p>\n\n<p>(1). For $r&gt;0$ there exists $s&gt;0$ such that $\\forall x\\in \\overline {X_1}\\;(|f(x)|\\geq r\\implies B_d(x,s)\\subset \\overline {X_1}).$ </p>\n\n<p>If not, then for $n\\in \\mathbb N,$ take $x_n\\in \\overline {X_1}$ and $y_n\\in X$ \\ $\\overline {X_1}$ with $|f(x_n)|\\geq r$ and $d(x_n,y_n)&lt;1/n.$ By compactness of $\\overline {X_1}$, take a convergent subsequence $(x_{n_i})_i$ converging to $p\\in \\overline {X_1}.$ Then $(f(x_{n_i}))_i$ converges to $f(p),$ so $|f(p)|\\geq r.$ But $(y_{n_i})_i$ also converges to $p,$ and $f(y_{n_i})=0,$ contradicting the continuity of $f.$</p>\n\n<p>(2). For $r&gt;0,$ take $s&gt;0$ such that $$\\cup \\{B_d(x,s): |f(x)|\\geq r \\} \\subset \\overline {X_1}.$$ Now $f|_{\\overline {X_1}}$ is uniformly continuous because $\\overline {X_1}$ is compact. 
So Take $s'\\in (0,s)$ such that $$\\forall x,y\\in \\overline {X_1}\\;(d(x,y)&lt;s'\\implies |f(x)-f(y)|&lt;r).$$ Now if $x\\in \\overline {X_1}$ and $y\\in X$ \\ $\\overline {X_1}$ with $d(x,y)&lt;s'&lt;s$ then $|f(x)|&lt;r$, so $|f(x)-f(y)|=|f(y)-0|&lt;r.$ And of course if $x,y \\in X$ \\ $X_1$ then $|f(x)-f(y)|=0&lt;r.$ </p>\n" }, { "answer_id": 2086199, "score": 7, "text": "<p>Let $f:X\\to\\mathbf{R}$ satisfying your hypotheses. There exists $K$ compact s.t. $f_{|X \\setminus K}=0$.</p>\n\n<p>Let's show $f$ is uniformly continuous. Let $\\epsilon&gt;0$. By Heine theorem applied to $f_{|K}$, $f$ is uniformly continuous on $K$:</p>\n\n<p>$$\\exists \\delta_1&gt;0 \\; \\forall (x, y) \\in K^2 \\; d(x,y) &lt; \\delta_1 \\implies |f(x)-f(y)| &lt; \\epsilon$$</p>\n\n<p>If $(x, y) \\in (X\\setminus K)^2$, $\\delta_1$ works.</p>\n\n<p>Now suppose by contradiction that there exists $\\epsilon&gt;0$ s.t. for all $n \\in \\mathbf{N}$, there exists $x_n \\in K$, $y_n \\in X\\setminus K$ s.t. $d(x_n, y_n) &lt; 2^{-n}$ and $|f(x_n) - f(y_n)| &gt; \\epsilon$. $(x_n)$ lies in a compact space, so there exists $\\sigma$ strictly increasing and $x \\in K$ s.t. $x_{\\sigma(n)} \\to x$. Then $y_{\\sigma(n)} \\to x$, but $|f(x_{\\sigma(n)}) -f(y_{\\sigma(n)})| &gt; \\epsilon$, thus $f$ is not continuous in $x$, contradiction.</p>\n\n<p>Hence:</p>\n\n<p>$$\\exists \\delta_2&gt;0 \\; \\forall (x, y) \\in K \\times (X \\setminus K) \\; d(x,y) &lt; \\delta_2 \\implies |f(x)-f(y)| &lt; \\epsilon$$</p>\n\n<p>Finally $\\delta = \\min(\\delta_1, \\delta_2)$ works.</p>\n" }, { "answer_id": 2086325, "score": 0, "text": "<p>Let $X_1=X$ \\ $f^{-1}\\{0\\}.$</p>\n\n<p>(1). 
For $r&gt;0$ there exists $s&gt;0$ such that $\\forall x\\in \\overline {X_1}\\;(|f(x)|\\geq r\\implies B_d(x,s)\\subset \\overline {X_1}).$ </p>\n\n<p>If not, then for $n\\in \\mathbb N,$ take $x_n\\in \\overline {X_1}$ and $y_n\\in X$ \\ $\\overline {X_1}$ with $|f(x_n)|\\geq r$ and $d(x_n,y_n)&lt;1/n.$ By compactness of $\\overline {X_1}$, take a convergent subsequence $(x_{n_i})_i$ converging to $p\\in \\overline {X_1}.$ Then $(f(x_{n_i}))_i$ converges to $f(p),$ so $|f(p)|\\geq r.$ But $(y_{n_i})_i$ also converges to $p,$ and $f(y_{n_i})=0,$ contradicting the continuity of $f.$</p>\n\n<p>(2). For $r&gt;0,$ take $s&gt;0$ such that $$\\cup \\{B_d(x,s): |f(x)|\\geq r \\} \\subset \\overline {X_1}.$$ Now $f|_{\\overline {X_1}}$ is uniformly continuous because $\\overline {X_1}$ is compact. So Take $s'\\in (0,s)$ such that $$\\forall x,y\\in \\overline {X_1}\\;(d(x,y)&lt;s'\\implies |f(x)-f(y)|&lt;r).$$ Now if $x\\in \\overline {X_1}$ and $y\\in X$ \\ $\\overline {X_1}$ with $d(x,y)&lt;s'&lt;s$ then $|f(x)|&lt;r$, so $|f(x)-f(y)|=|f(y)-0|&lt;r.$ And of course if $x,y \\in X$ \\ $X_1$ then $|f(x)-f(y)|=0&lt;r.$ </p>\n" }, { "answer_id": 2086199, "score": 7, "text": "<p>Let $f:X\\to\\mathbf{R}$ satisfying your hypotheses. There exists $K$ compact s.t. $f_{|X \\setminus K}=0$.</p>\n\n<p>Let's show $f$ is uniformly continuous. Let $\\epsilon&gt;0$. By Heine theorem applied to $f_{|K}$, $f$ is uniformly continuous on $K$:</p>\n\n<p>$$\\exists \\delta_1&gt;0 \\; \\forall (x, y) \\in K^2 \\; d(x,y) &lt; \\delta_1 \\implies |f(x)-f(y)| &lt; \\epsilon$$</p>\n\n<p>If $(x, y) \\in (X\\setminus K)^2$, $\\delta_1$ works.</p>\n\n<p>Now suppose by contradiction that there exists $\\epsilon&gt;0$ s.t. for all $n \\in \\mathbf{N}$, there exists $x_n \\in K$, $y_n \\in X\\setminus K$ s.t. $d(x_n, y_n) &lt; 2^{-n}$ and $|f(x_n) - f(y_n)| &gt; \\epsilon$. $(x_n)$ lies in a compact space, so there exists $\\sigma$ strictly increasing and $x \\in K$ s.t. $x_{\\sigma(n)} \\to x$. 
Then $y_{\\sigma(n)} \\to x$, but $|f(x_{\\sigma(n)}) -f(y_{\\sigma(n)})| &gt; \\epsilon$, thus $f$ is not continuous in $x$, contradiction.</p>\n\n<p>Hence:</p>\n\n<p>$$\\exists \\delta_2&gt;0 \\; \\forall (x, y) \\in K \\times (X \\setminus K) \\; d(x,y) &lt; \\delta_2 \\implies |f(x)-f(y)| &lt; \\epsilon$$</p>\n\n<p>Finally $\\delta = \\min(\\delta_1, \\delta_2)$ works.</p>\n" }, { "answer_id": 2086325, "score": 0, "text": "<p>Let $X_1=X$ \\ $f^{-1}\\{0\\}.$</p>\n\n<p>(1). For $r&gt;0$ there exists $s&gt;0$ such that $\\forall x\\in \\overline {X_1}\\;(|f(x)|\\geq r\\implies B_d(x,s)\\subset \\overline {X_1}).$ </p>\n\n<p>If not, then for $n\\in \\mathbb N,$ take $x_n\\in \\overline {X_1}$ and $y_n\\in X$ \\ $\\overline {X_1}$ with $|f(x_n)|\\geq r$ and $d(x_n,y_n)&lt;1/n.$ By compactness of $\\overline {X_1}$, take a convergent subsequence $(x_{n_i})_i$ converging to $p\\in \\overline {X_1}.$ Then $(f(x_{n_i}))_i$ converges to $f(p),$ so $|f(p)|\\geq r.$ But $(y_{n_i})_i$ also converges to $p,$ and $f(y_{n_i})=0,$ contradicting the continuity of $f.$</p>\n\n<p>(2). For $r&gt;0,$ take $s&gt;0$ such that $$\\cup \\{B_d(x,s): |f(x)|\\geq r \\} \\subset \\overline {X_1}.$$ Now $f|_{\\overline {X_1}}$ is uniformly continuous because $\\overline {X_1}$ is compact. So Take $s'\\in (0,s)$ such that $$\\forall x,y\\in \\overline {X_1}\\;(d(x,y)&lt;s'\\implies |f(x)-f(y)|&lt;r).$$ Now if $x\\in \\overline {X_1}$ and $y\\in X$ \\ $\\overline {X_1}$ with $d(x,y)&lt;s'&lt;s$ then $|f(x)|&lt;r$, so $|f(x)-f(y)|=|f(y)-0|&lt;r.$ And of course if $x,y \\in X$ \\ $X_1$ then $|f(x)-f(y)|=0&lt;r.$ </p>\n" }, { "answer_id": 2086199, "score": 7, "text": "<p>Let $f:X\\to\\mathbf{R}$ satisfying your hypotheses. There exists $K$ compact s.t. $f_{|X \\setminus K}=0$.</p>\n\n<p>Let's show $f$ is uniformly continuous. Let $\\epsilon&gt;0$. 
By Heine theorem applied to $f_{|K}$, $f$ is uniformly continuous on $K$:</p>\n\n<p>$$\\exists \\delta_1&gt;0 \\; \\forall (x, y) \\in K^2 \\; d(x,y) &lt; \\delta_1 \\implies |f(x)-f(y)| &lt; \\epsilon$$</p>\n\n<p>If $(x, y) \\in (X\\setminus K)^2$, $\\delta_1$ works.</p>\n\n<p>Now suppose by contradiction that there exists $\\epsilon&gt;0$ s.t. for all $n \\in \\mathbf{N}$, there exists $x_n \\in K$, $y_n \\in X\\setminus K$ s.t. $d(x_n, y_n) &lt; 2^{-n}$ and $|f(x_n) - f(y_n)| &gt; \\epsilon$. $(x_n)$ lies in a compact space, so there exists $\\sigma$ strictly increasing and $x \\in K$ s.t. $x_{\\sigma(n)} \\to x$. Then $y_{\\sigma(n)} \\to x$, but $|f(x_{\\sigma(n)}) -f(y_{\\sigma(n)})| &gt; \\epsilon$, thus $f$ is not continuous in $x$, contradiction.</p>\n\n<p>Hence:</p>\n\n<p>$$\\exists \\delta_2&gt;0 \\; \\forall (x, y) \\in K \\times (X \\setminus K) \\; d(x,y) &lt; \\delta_2 \\implies |f(x)-f(y)| &lt; \\epsilon$$</p>\n\n<p>Finally $\\delta = \\min(\\delta_1, \\delta_2)$ works.</p>\n" }, { "answer_id": 2086325, "score": 0, "text": "<p>Let $X_1=X$ \\ $f^{-1}\\{0\\}.$</p>\n\n<p>(1). For $r&gt;0$ there exists $s&gt;0$ such that $\\forall x\\in \\overline {X_1}\\;(|f(x)|\\geq r\\implies B_d(x,s)\\subset \\overline {X_1}).$ </p>\n\n<p>If not, then for $n\\in \\mathbb N,$ take $x_n\\in \\overline {X_1}$ and $y_n\\in X$ \\ $\\overline {X_1}$ with $|f(x_n)|\\geq r$ and $d(x_n,y_n)&lt;1/n.$ By compactness of $\\overline {X_1}$, take a convergent subsequence $(x_{n_i})_i$ converging to $p\\in \\overline {X_1}.$ Then $(f(x_{n_i}))_i$ converges to $f(p),$ so $|f(p)|\\geq r.$ But $(y_{n_i})_i$ also converges to $p,$ and $f(y_{n_i})=0,$ contradicting the continuity of $f.$</p>\n\n<p>(2). For $r&gt;0,$ take $s&gt;0$ such that $$\\cup \\{B_d(x,s): |f(x)|\\geq r \\} \\subset \\overline {X_1}.$$ Now $f|_{\\overline {X_1}}$ is uniformly continuous because $\\overline {X_1}$ is compact. 
So Take $s'\\in (0,s)$ such that $$\\forall x,y\\in \\overline {X_1}\\;(d(x,y)&lt;s'\\implies |f(x)-f(y)|&lt;r).$$ Now if $x\\in \\overline {X_1}$ and $y\\in X$ \\ $\\overline {X_1}$ with $d(x,y)&lt;s'&lt;s$ then $|f(x)|&lt;r$, so $|f(x)-f(y)|=|f(y)-0|&lt;r.$ And of course if $x,y \\in X$ \\ $X_1$ then $|f(x)-f(y)|=0&lt;r.$ </p>\n" }, { "answer_id": 2086199, "score": 7, "text": "<p>Let $f:X\\to\\mathbf{R}$ satisfying your hypotheses. There exists $K$ compact s.t. $f_{|X \\setminus K}=0$.</p>\n\n<p>Let's show $f$ is uniformly continuous. Let $\\epsilon&gt;0$. By Heine theorem applied to $f_{|K}$, $f$ is uniformly continuous on $K$:</p>\n\n<p>$$\\exists \\delta_1&gt;0 \\; \\forall (x, y) \\in K^2 \\; d(x,y) &lt; \\delta_1 \\implies |f(x)-f(y)| &lt; \\epsilon$$</p>\n\n<p>If $(x, y) \\in (X\\setminus K)^2$, $\\delta_1$ works.</p>\n\n<p>Now suppose by contradiction that there exists $\\epsilon&gt;0$ s.t. for all $n \\in \\mathbf{N}$, there exists $x_n \\in K$, $y_n \\in X\\setminus K$ s.t. $d(x_n, y_n) &lt; 2^{-n}$ and $|f(x_n) - f(y_n)| &gt; \\epsilon$. $(x_n)$ lies in a compact space, so there exists $\\sigma$ strictly increasing and $x \\in K$ s.t. $x_{\\sigma(n)} \\to x$. Then $y_{\\sigma(n)} \\to x$, but $|f(x_{\\sigma(n)}) -f(y_{\\sigma(n)})| &gt; \\epsilon$, thus $f$ is not continuous in $x$, contradiction.</p>\n\n<p>Hence:</p>\n\n<p>$$\\exists \\delta_2&gt;0 \\; \\forall (x, y) \\in K \\times (X \\setminus K) \\; d(x,y) &lt; \\delta_2 \\implies |f(x)-f(y)| &lt; \\epsilon$$</p>\n\n<p>Finally $\\delta = \\min(\\delta_1, \\delta_2)$ works.</p>\n" }, { "answer_id": 2086325, "score": 0, "text": "<p>Let $X_1=X$ \\ $f^{-1}\\{0\\}.$</p>\n\n<p>(1). 
For $r&gt;0$ there exists $s&gt;0$ such that $\\forall x\\in \\overline {X_1}\\;(|f(x)|\\geq r\\implies B_d(x,s)\\subset \\overline {X_1}).$ </p>\n\n<p>If not, then for $n\\in \\mathbb N,$ take $x_n\\in \\overline {X_1}$ and $y_n\\in X$ \\ $\\overline {X_1}$ with $|f(x_n)|\\geq r$ and $d(x_n,y_n)&lt;1/n.$ By compactness of $\\overline {X_1}$, take a convergent subsequence $(x_{n_i})_i$ converging to $p\\in \\overline {X_1}.$ Then $(f(x_{n_i}))_i$ converges to $f(p),$ so $|f(p)|\\geq r.$ But $(y_{n_i})_i$ also converges to $p,$ and $f(y_{n_i})=0,$ contradicting the continuity of $f.$</p>\n\n<p>(2). For $r&gt;0,$ take $s&gt;0$ such that $$\\cup \\{B_d(x,s): |f(x)|\\geq r \\} \\subset \\overline {X_1}.$$ Now $f|_{\\overline {X_1}}$ is uniformly continuous because $\\overline {X_1}$ is compact. So Take $s'\\in (0,s)$ such that $$\\forall x,y\\in \\overline {X_1}\\;(d(x,y)&lt;s'\\implies |f(x)-f(y)|&lt;r).$$ Now if $x\\in \\overline {X_1}$ and $y\\in X$ \\ $\\overline {X_1}$ with $d(x,y)&lt;s'&lt;s$ then $|f(x)|&lt;r$, so $|f(x)-f(y)|=|f(y)-0|&lt;r.$ And of course if $x,y \\in X$ \\ $X_1$ then $|f(x)-f(y)|=0&lt;r.$ </p>\n" }, { "answer_id": 2086199, "score": 7, "text": "<p>Let $f:X\\to\\mathbf{R}$ satisfying your hypotheses. There exists $K$ compact s.t. $f_{|X \\setminus K}=0$.</p>\n\n<p>Let's show $f$ is uniformly continuous. Let $\\epsilon&gt;0$. By Heine theorem applied to $f_{|K}$, $f$ is uniformly continuous on $K$:</p>\n\n<p>$$\\exists \\delta_1&gt;0 \\; \\forall (x, y) \\in K^2 \\; d(x,y) &lt; \\delta_1 \\implies |f(x)-f(y)| &lt; \\epsilon$$</p>\n\n<p>If $(x, y) \\in (X\\setminus K)^2$, $\\delta_1$ works.</p>\n\n<p>Now suppose by contradiction that there exists $\\epsilon&gt;0$ s.t. for all $n \\in \\mathbf{N}$, there exists $x_n \\in K$, $y_n \\in X\\setminus K$ s.t. $d(x_n, y_n) &lt; 2^{-n}$ and $|f(x_n) - f(y_n)| &gt; \\epsilon$. $(x_n)$ lies in a compact space, so there exists $\\sigma$ strictly increasing and $x \\in K$ s.t. $x_{\\sigma(n)} \\to x$. 
Then $y_{\\sigma(n)} \\to x$, but $|f(x_{\\sigma(n)}) -f(y_{\\sigma(n)})| &gt; \\epsilon$, thus $f$ is not continuous in $x$, contradiction.</p>\n\n<p>Hence:</p>\n\n<p>$$\\exists \\delta_2&gt;0 \\; \\forall (x, y) \\in K \\times (X \\setminus K) \\; d(x,y) &lt; \\delta_2 \\implies |f(x)-f(y)| &lt; \\epsilon$$</p>\n\n<p>Finally $\\delta = \\min(\\delta_1, \\delta_2)$ works.</p>\n" }, { "answer_id": 2086325, "score": 0, "text": "<p>Let $X_1=X$ \\ $f^{-1}\\{0\\}.$</p>\n\n<p>(1). For $r&gt;0$ there exists $s&gt;0$ such that $\\forall x\\in \\overline {X_1}\\;(|f(x)|\\geq r\\implies B_d(x,s)\\subset \\overline {X_1}).$ </p>\n\n<p>If not, then for $n\\in \\mathbb N,$ take $x_n\\in \\overline {X_1}$ and $y_n\\in X$ \\ $\\overline {X_1}$ with $|f(x_n)|\\geq r$ and $d(x_n,y_n)&lt;1/n.$ By compactness of $\\overline {X_1}$, take a convergent subsequence $(x_{n_i})_i$ converging to $p\\in \\overline {X_1}.$ Then $(f(x_{n_i}))_i$ converges to $f(p),$ so $|f(p)|\\geq r.$ But $(y_{n_i})_i$ also converges to $p,$ and $f(y_{n_i})=0,$ contradicting the continuity of $f.$</p>\n\n<p>(2). For $r&gt;0,$ take $s&gt;0$ such that $$\\cup \\{B_d(x,s): |f(x)|\\geq r \\} \\subset \\overline {X_1}.$$ Now $f|_{\\overline {X_1}}$ is uniformly continuous because $\\overline {X_1}$ is compact. So Take $s'\\in (0,s)$ such that $$\\forall x,y\\in \\overline {X_1}\\;(d(x,y)&lt;s'\\implies |f(x)-f(y)|&lt;r).$$ Now if $x\\in \\overline {X_1}$ and $y\\in X$ \\ $\\overline {X_1}$ with $d(x,y)&lt;s'&lt;s$ then $|f(x)|&lt;r$, so $|f(x)-f(y)|=|f(y)-0|&lt;r.$ And of course if $x,y \\in X$ \\ $X_1$ then $|f(x)-f(y)|=0&lt;r.$ </p>\n" } ]
Question: Continuous with compact support implies uniform continuity Answer 1: <p>Let $f:X\to\mathbf{R}$ satisfy your hypotheses. There exists $K$ compact s.t. $f_{|X \setminus K}=0$.</p> <p>Let's show $f$ is uniformly continuous. Let $\epsilon&gt;0$. By Heine's theorem applied to $f_{|K}$, $f$ is uniformly continuous on $K$:</p> <p>$$\exists \delta_1&gt;0 \; \forall (x, y) \in K^2 \; d(x,y) &lt; \delta_1 \implies |f(x)-f(y)| &lt; \epsilon$$</p> <p>If $(x, y) \in (X\setminus K)^2$, $\delta_1$ works.</p> <p>Now suppose by contradiction that there exists $\epsilon&gt;0$ s.t. for all $n \in \mathbf{N}$, there exists $x_n \in K$, $y_n \in X\setminus K$ s.t. $d(x_n, y_n) &lt; 2^{-n}$ and $|f(x_n) - f(y_n)| &gt; \epsilon$. $(x_n)$ lies in a compact space, so there exists $\sigma$ strictly increasing and $x \in K$ s.t. $x_{\sigma(n)} \to x$. Then $y_{\sigma(n)} \to x$, but $|f(x_{\sigma(n)}) -f(y_{\sigma(n)})| &gt; \epsilon$, thus $f$ is not continuous at $x$, contradiction.</p> <p>Hence:</p> <p>$$\exists \delta_2&gt;0 \; \forall (x, y) \in K \times (X \setminus K) \; d(x,y) &lt; \delta_2 \implies |f(x)-f(y)| &lt; \epsilon$$</p> <p>Finally $\delta = \min(\delta_1, \delta_2)$ works.</p> Answer 2: <p>Let $X_1=X$ \ $f^{-1}\{0\}.$</p> <p>(1). For $r&gt;0$ there exists $s&gt;0$ such that $\forall x\in \overline {X_1}\;(|f(x)|\geq r\implies B_d(x,s)\subset \overline {X_1}).$ </p> <p>If not, then for $n\in \mathbb N,$ take $x_n\in \overline {X_1}$ and $y_n\in X$ \ $\overline {X_1}$ with $|f(x_n)|\geq r$ and $d(x_n,y_n)&lt;1/n.$ By compactness of $\overline {X_1}$, take a convergent subsequence $(x_{n_i})_i$ converging to $p\in \overline {X_1}.$ Then $(f(x_{n_i}))_i$ converges to $f(p),$ so $|f(p)|\geq r.$ But $(y_{n_i})_i$ also converges to $p,$ and $f(y_{n_i})=0,$ contradicting the continuity of $f.$</p> <p>(2). 
For $r&gt;0,$ take $s&gt;0$ such that $$\cup \{B_d(x,s): |f(x)|\geq r \} \subset \overline {X_1}.$$ Now $f|_{\overline {X_1}}$ is uniformly continuous because $\overline {X_1}$ is compact. So Take $s'\in (0,s)$ such that $$\forall x,y\in \overline {X_1}\;(d(x,y)&lt;s'\implies |f(x)-f(y)|&lt;r).$$ Now if $x\in \overline {X_1}$ and $y\in X$ \ $\overline {X_1}$ with $d(x,y)&lt;s'&lt;s$ then $|f(x)|&lt;r$, so $|f(x)-f(y)|=|f(y)-0|&lt;r.$ And of course if $x,y \in X$ \ $X_1$ then $|f(x)-f(y)|=0&lt;r.$ </p> Answer 3: <p>Let $f:X\to\mathbf{R}$ satisfying your hypotheses. There exists $K$ compact s.t. $f_{|X \setminus K}=0$.</p> <p>Let's show $f$ is uniformly continuous. Let $\epsilon&gt;0$. By Heine theorem applied to $f_{|K}$, $f$ is uniformly continuous on $K$:</p> <p>$$\exists \delta_1&gt;0 \; \forall (x, y) \in K^2 \; d(x,y) &lt; \delta_1 \implies |f(x)-f(y)| &lt; \epsilon$$</p> <p>If $(x, y) \in (X\setminus K)^2$, $\delta_1$ works.</p> <p>Now suppose by contradiction that there exists $\epsilon&gt;0$ s.t. for all $n \in \mathbf{N}$, there exists $x_n \in K$, $y_n \in X\setminus K$ s.t. $d(x_n, y_n) &lt; 2^{-n}$ and $|f(x_n) - f(y_n)| &gt; \epsilon$. $(x_n)$ lies in a compact space, so there exists $\sigma$ strictly increasing and $x \in K$ s.t. $x_{\sigma(n)} \to x$. Then $y_{\sigma(n)} \to x$, but $|f(x_{\sigma(n)}) -f(y_{\sigma(n)})| &gt; \epsilon$, thus $f$ is not continuous in $x$, contradiction.</p> <p>Hence:</p> <p>$$\exists \delta_2&gt;0 \; \forall (x, y) \in K \times (X \setminus K) \; d(x,y) &lt; \delta_2 \implies |f(x)-f(y)| &lt; \epsilon$$</p> <p>Finally $\delta = \min(\delta_1, \delta_2)$ works.</p> Answer 4: <p>Let $X_1=X$ \ $f^{-1}\{0\}.$</p> <p>(1). 
For $r&gt;0$ there exists $s&gt;0$ such that $\forall x\in \overline {X_1}\;(|f(x)|\geq r\implies B_d(x,s)\subset \overline {X_1}).$ </p> <p>If not, then for $n\in \mathbb N,$ take $x_n\in \overline {X_1}$ and $y_n\in X\setminus \overline {X_1}$ with $|f(x_n)|\geq r$ and $d(x_n,y_n)&lt;1/n.$ By compactness of $\overline {X_1}$, take a convergent subsequence $(x_{n_i})_i$ converging to $p\in \overline {X_1}.$ Then $(f(x_{n_i}))_i$ converges to $f(p),$ so $|f(p)|\geq r.$ But $(y_{n_i})_i$ also converges to $p,$ and $f(y_{n_i})=0,$ contradicting the continuity of $f.$</p> <p>(2). For $r&gt;0,$ take $s&gt;0$ such that $$\cup \{B_d(x,s): |f(x)|\geq r \} \subset \overline {X_1}.$$ Now $f|_{\overline {X_1}}$ is uniformly continuous because $\overline {X_1}$ is compact. So take $s'\in (0,s)$ such that $$\forall x,y\in \overline {X_1}\;(d(x,y)&lt;s'\implies |f(x)-f(y)|&lt;r).$$ Now if $x\in \overline {X_1}$ and $y\in X\setminus \overline {X_1}$ with $d(x,y)&lt;s'&lt;s$ then $|f(x)|&lt;r$, so $|f(x)-f(y)|=|f(x)-0|&lt;r.$ And of course if $x,y \in X\setminus X_1$ then $|f(x)-f(y)|=0&lt;r.$ </p>
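The answers above argue that a continuous real function vanishing outside a compact set is uniformly continuous. As a quick numerical illustration (not part of the original answers — the tent function and the grid below are hypothetical choices), one can estimate the modulus of continuity $w(\delta)=\sup\{|f(x)-f(y)| : |x-y|\le\delta\}$ and observe that it shrinks with $\delta$:

```python
# Hypothetical illustration (not from the original answers): f below is
# continuous on R and vanishes outside the compact set [-1, 1], so by the
# argument above it is uniformly continuous.  We estimate its modulus of
# continuity  w(delta) = sup{|f(x)-f(y)| : |x-y| <= delta}  on a grid.

STEP = 0.001

def f(x):
    # tent function: continuous, zero outside the compact set [-1, 1]
    return max(0.0, 1.0 - abs(x))

vals = [f(i * STEP) for i in range(-3000, 3001)]  # grid over [-3, 3]

def modulus(delta):
    # grid estimate of w(delta); f is 1-Lipschitz, so w(delta) = delta
    k = int(round(delta / STEP))
    return max(abs(vals[i + s] - vals[i])
               for s in range(1, k + 1)
               for i in range(len(vals) - s))

for delta in (0.1, 0.01):
    print(delta, modulus(delta))
```

Since this particular $f$ is 1-Lipschitz, the estimate comes out as $w(\delta)\approx\delta$, consistent with uniform continuity.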
Understanding the proof of $\displaystyle\lim_{n\to\infty}(\sqrt n)^\frac{1}{n}=1$
2,086,600
7
https://math.stackexchange.com/questions/2086600/understanding-the-proof-of-displaystyle-lim-n-to-infty-sqrt-n-frac1n
<p>I need help understanding this:</p> <p>Prove that $$\lim_{n\to\infty}(\sqrt n)^\frac{1}{n}=1$$</p> <blockquote> <p>$$(\sqrt n)^\frac{1}{n}\geq (\sqrt1)^\frac{1}{n}=1, \forall n\in\Bbb N$$ By binomial theorem we get for $n\geq 2$</p> <p>$$n=((\sqrt n)^\frac{1}{n})^n=[1+((\sqrt n)^\frac{1}{n}-1)]^n=\sum_{k=0}^{n}{{n}\choose{k}}1^{n-k}(\sqrt n^\frac{1}{n}-1)^k$$ $$\geq 1+{{n}\choose {2}}(\sqrt n^\frac{1}{n}-1)^2=1+\frac{n(n-1)}{2}(\sqrt n^\frac{1}{n}-1)^2$$ $$\Rightarrow (\sqrt n^\frac{1}{n}-1)^2\leq \frac{2}{n}$$ $$\Rightarrow \sqrt n^\frac{1}{n}\leq 1+\frac{\sqrt 2}{\sqrt n}$$ $1+\frac{\sqrt 2}{\sqrt n}$ approaches $1$ for $n \rightarrow \infty$ so by the sandwich theorem we get $\lim_{n\to\infty}(\sqrt n)^\frac{1}{n}=1$</p> </blockquote> <p>I don't understand this part: $$ \sum_{k=0}^{n}{{n}\choose{k}}1^{n-k}(\sqrt n^\frac{1}{n}-1)^k\geq 1+{{n}\choose {2}}(\sqrt n^\frac{1}{n}-1)^2.$$ Could someone explain how they got that on the RHS?</p>
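The sandwich bound derived in the quoted proof, $1 \le (\sqrt n)^{1/n} \le 1+\sqrt{2}/\sqrt{n}$ for $n\ge 2$, can be checked numerically. The following sketch (not part of the original post) verifies the bound for a few values of $n$ and shows the sequence approaching $1$:

```python
import math

# Numerical sanity check (not part of the original post) of the bound
# derived in the question: for n >= 2,
#     1 <= sqrt(n)**(1/n) <= 1 + sqrt(2)/sqrt(n),
# which squeezes sqrt(n)**(1/n) to 1 as n -> infinity.

def seq(n):
    return math.sqrt(n) ** (1.0 / n)

for n in (2, 10, 100, 10_000, 1_000_000):
    a = seq(n)
    upper = 1.0 + math.sqrt(2.0) / math.sqrt(n)
    assert 1.0 <= a <= upper, (n, a, upper)
    print(n, a, upper)
```

For $n = 10^6$ the value is already within about $7\cdot10^{-6}$ of $1$, while the upper bound $1+\sqrt{2}/\sqrt{n}$ is still of order $10^{-3}$ away — the bound is loose but sufficient for the squeeze.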
[ { "answer_id": 2086619, "score": 3, "text": "<p>Since $n^{1/n}\\ge 1$, then all of the terms in the binomial expansion</p>\n\n<p>$$\\left(1+(\\sqrt[n]{n}-1)\\right)^n=\\sum_{k=0}^n\\binom{n}{k}(\\sqrt[n]{n}-1)^k$$</p>\n\n<p>are non-negative. Therefore, we have for any $m\\le n$</p>\n\n<p>$$\\sum_{k=0}^n\\binom{n}{k}(\\sqrt[n]{n}-1)^k\\ge \\sum_{k=0}^m\\binom{n}{k}(\\sqrt[n]{n}-1)^k$$</p>\n\n<p>Taking $m=2$ and $n\\ge 2$ yields</p>\n\n<p>$$\\begin{align}\n\\sum_{k=0}^n\\binom{n}{k}(\\sqrt[n]{n}-1)^k&amp;\\ge \\sum_{k=0}^2\\binom{n}{k}(\\sqrt[n]{n}-1)^k\\\\\\\\\n&amp;=1+n(\\sqrt[n]{n}-1)+\\frac{n(n-1)}{2}(\\sqrt[n]{n}-1)^2\\\\\\\\\n&amp;\\ge 1+\\frac{n(n-1)}{2}(\\sqrt[n]{n}-1)^2\n\\end{align}$$</p>\n\n<p>where the last inequality is true since $n(\\sqrt[n]{n}-1)\\ge 0$.</p>\n" }, { "answer_id": 2086800, "score": 3, "text": "<p>They are truncating the binomial expansion, discarding positive terms: if\n$$\nr=r_0+r_1+r_2+\\dots+r_n\n$$\nand $r_i\\ge0$ for $i=0,1,\\dots,n$, then $r\\ge r_0+r_2$.</p>\n\n<hr>\n\n<p>It can be done more easily. Set $(\\sqrt{n})^{1/n}=1+a_n$, so\n$$\n\\sqrt{n}=(1+a_n)^{n}\\ge1+na_n\n$$\nby Bernoulli’s inequality. Hence\n$$\na_n\\le\\frac{\\sqrt{n}-1}{n}=\\frac{1}{\\sqrt{n}}-\\frac{1}{n}\n$$\nso\n$$\n(\\sqrt{n})^{1/n}=1+a_n\\le 1+\\frac{1}{\\sqrt{n}}-\\frac{1}{n}\n$$\nand the squeeze theorem allows to conclude.</p>\n" }, { "answer_id": 2086676, "score": 2, "text": "<p>Set $l =\\lim_{n\\to\\infty}n^\\frac{1}{n}$</p>\n\n<p>$$\\lim_{n\\to\\infty}(\\sqrt n)^\\frac{1}{n}=\\sqrt{\\lim_{n\\to\\infty}n^\\frac{1}{n}}=\\sqrt{l}$$</p>\n\n<p>For the limit of the sequence $n^\\frac{1}{n}$ get the sub-sequence $(2n)^\\frac{1}{2n}$</p>\n\n<p>So: </p>\n\n<p>$l^2=(\\lim (2n)^\\frac{1}{2n})^2=\\lim (2n)^\\frac{1}{n}=\\lim 2^\\frac{1}{n}. 
n^\\frac{1}{n}=\\lim 2^\\frac{1}{n}.\\lim n^\\frac{1}{n}=l$ </p>\n\n<p>$l^2=l$ and as $n^\\frac{1}{n}\\gt1$ we have $l=1$</p>\n\n<p>And $\\sqrt{\\lim_{n\\to\\infty}n^\\frac{1}{n}}=\\sqrt{1}=1$</p>\n" }, { "answer_id": 2086722, "score": 2, "text": "<p>That proof seems a little complicated; I'll do it a different way. Let's write $(\\sqrt n)^{1/n} = 1+a_n.$ We know each $a_n&gt;0.$ For $n\\ge 2,$ the binomial theorem shows</p>\n\n<p>$$\\sqrt n= (1+a_n)^{n} = 1^n\\cdot a_n^0 + n\\cdot1^{n-1}\\cdot a_n^1 + [n(n-1)/2]\\cdot 1^{n-2}\\cdot a_n^2 + \\cdots &gt; [n(n-1)/2]a_n^2.$$</p>\n\n<p>Therefore </p>\n\n<p>$$\\frac{\\sqrt n}{n(n-1)/2} &gt; a_n^2.$$</p>\n\n<p>Since the expression on the left $\\to 0,a_n^2 \\to 0,$ which implies $a_n \\to 0.$ It follows that $(\\sqrt n)^{1/n} \\to 1.$</p>\n" }, { "answer_id": 2086792, "score": 2, "text": "<p><strong>A Different Approach</strong></p>\n\n<p>Because <a href=\"https://math.stackexchange.com/a/306245\">$\\left(1+\\frac1n\\right)^{n+1}$ is decreasing</a>, for $n\\ge3$,\n$$\n\\begin{align}\n\\frac{(n+1)^n}{n^{n+1}}\n&amp;=\\frac1{n+1}\\left(1+\\frac1n\\right)^{n+1}\\\\\n&amp;\\le\\frac14\\cdot\\frac{256}{81}\\\\[6pt]\n&amp;\\lt1\n\\end{align}\n$$\nThus, for $n\\ge3$,\n$$\n(n+1)^{\\frac1{n+1}}\\lt n^{\\frac1n}\n$$\nThat is, $n^{\\frac1n}$ is decreasing and bounded below by $1$. 
So $\\lim\\limits_{n\\to\\infty}n^{\\frac1n}$ exists.\n$$\n\\begin{align}\n\\lim_{n\\to\\infty}n^{\\frac1n}\n&amp;=\\lim_{n\\to\\infty}(2n)^{\\frac1{2n}}\\\\\n&amp;=\\lim_{n\\to\\infty}2^{\\frac1{2n}}\\lim_{n\\to\\infty}n^{\\frac1{2n}}\\\\\n\\end{align}\n$$\nTherefore,\n$$\n\\begin{align}\n\\lim_{n\\to\\infty}n^{\\frac1{2n}}\n&amp;=\\lim_{n\\to\\infty}2^{\\frac1{2n}}\\\\[6pt]\n&amp;=1\n\\end{align}\n$$</p>\n\n<hr>\n\n<p><strong>Bernoulli's Inequality</strong></p>\n\n<p><a href=\"https://en.wikipedia.org/wiki/Bernoulli&#39;s_inequality\" rel=\"nofollow noreferrer\">Bernoulli's Inequality</a> says\n$$\n\\begin{align}\n\\lim_{n\\to\\infty}\\sqrt{n}^{1/n}\n&amp;=\\lim_{n\\to\\infty}\\left(1+\\left(\\sqrt{n}-1\\right)\\right)^{1/n}\\\\[3pt]\n&amp;\\le\\lim_{n\\to\\infty}\\left(1+\\tfrac1n\\left(\\sqrt{n}-1\\right)\\right)\\\\[6pt]\n&amp;=1\n\\end{align}\n$$\nSince $\\sqrt{n}^{1/n}\\ge1$, the <a href=\"https://en.wikipedia.org/wiki/Squeeze_theorem\" rel=\"nofollow noreferrer\">Squeeze Theorem</a> says\n$$\n\\lim_{n\\to\\infty}\\sqrt{n}^{1/n}=1\n$$</p>\n" }, { "answer_id": 2086607, "score": 1, "text": "<p>$$\\sum_{k=0}^{n}{{n}\\choose{k}}1^{n-k}(\\sqrt n^\\frac{1}{n}-1)^k= 1+{{n}\\choose {2}}(\\sqrt n^\\frac{1}{n}-1)^2+\\left[{{n}\\choose {1}}(\\sqrt n^\\frac{1}{n}-1)^1+\\sum_{k=3}^{n}{{n}\\choose{k}}1^{n-k}(\\sqrt n^\\frac{1}{n}-1)^k\\right]$$</p>\n\n<p>and see that:</p>\n\n<p>$${{n}\\choose {1}}(\\sqrt n^\\frac{1}{n}-1)^1+\\sum_{k=3}^{n}{{n}\\choose{k}}1^{n-k}(\\sqrt n^\\frac{1}{n}-1)^k \\ge 0$$</p>\n\n<p>because </p>\n\n<p>$$\\sqrt{n}^\\frac{1}{n}-1 \\ge0$$</p>\n" }, { "answer_id": 2086740, "score": 1, "text": "<p>I'd like to offer another method, using $x^n-1=(x-1)(x^{n-1}+x^{n-2}+...+x^{2}+x+1)$</p>\n\n<p>$$n-1=(n^{\\frac{1}{2n}})^{2n}-1=\\\\(n^{\\frac{1}{2n}}-1)(n^{\\frac{1}{2n}(2n-1)}+n^{\\frac{1}{2n}(2n-2)}+...+n^{\\frac{1}{2n}2}+n^{\\frac{1}{2n}}+1)\\geq$$\nusing AM-GM\n$$\\geq(n^{\\frac{1}{2n}}-1)\\left((2n-1)\\sqrt[2n-1]{n^{\\frac{1}{2n}(2n-1)}\\cdot 
n^{\\frac{1}{2n}(2n-2)}\\cdot ...\\cdot n^{\\frac{1}{2n}2}\\cdot n^{\\frac{1}{2n}}}+1\\right)=\\\\\n(n^{\\frac{1}{2n}}-1)\\left((2n-1)\\sqrt[2n-1]{n^{\\frac{1}{2n}(2n-1+2n-2+...+1)}}+1\\right)=\n\\\\(n^{\\frac{1}{2n}}-1)\\left((2n-1)\\sqrt[2n-1]{n^{\\frac{1}{2n}\\frac{(2n-1)2n}{2}}}+1\\right)=\\\\\n(n^{\\frac{1}{2n}}-1)\\left((2n-1)\\sqrt{n}+1\\right)$$\n<a href=\"https://www.jstor.org/stable/3615890?seq=1#page_scan_tab_contents\" rel=\"nofollow noreferrer\">Also $\\ln{x}\\leq x-1,x&gt;0$</a> which means $$\\ln{n^{\\frac{1}{2n}}}=\\frac{\\ln{n}}{2n}\\leq n^{\\frac{1}{2n}}-1$$</p>\n\n<p>Altogether \n$$\\frac{\\ln{n}}{2n}\\leq n^{\\frac{1}{2n}}-1\\leq \\frac{n-1}{(2n-1)\\sqrt{n}+1}&lt;\\frac{1}{2\\sqrt{n}}$$</p>\n" }, { "answer_id": 4542761, "score": 0, "text": "<p>Let <span class=\"math-container\">$\\lim_{n\\to\\infty} n^{1/2n} = L$</span></p>\n<p>Taking logarithms, we get ;</p>\n<p><span class=\"math-container\">$log(L) =$$\\lim_{n\\to\\infty}$$\\frac{1}{2n}$$log(n)$</span></p>\n<p>Applying <span class=\"math-container\">$LH$</span> Rule :</p>\n<p><span class=\"math-container\">$log(L) =$$\\lim_{n\\to\\infty}$$\\frac{1}{2n}=0$</span></p>\n<p>Hence, <span class=\"math-container\">$L=e^0=1$</span></p>\n" } ]
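As a sanity check on the limit discussed in the answers above, here is a short numerical sketch (Python, illustrative only) confirming that $a_n = n^{1/(2n)} - 1$ stays positive and below the upper bound $\frac{1}{2\sqrt{n}}$ derived via AM-GM in one of the answers, so the squeeze toward $1$ is visible in practice:

```python
import math

# Evaluate a_n = n^(1/(2n)) - 1 for growing n and compare it with the
# upper bound 1/(2*sqrt(n)) from the AM-GM answer above.
for n in [10, 100, 10_000, 1_000_000]:
    a_n = n ** (1 / (2 * n)) - 1
    bound = 1 / (2 * math.sqrt(n))
    # a_n is positive (n > 1) and squeezed toward 0 by the bound.
    assert 0 < a_n < bound
    print(f"n={n:>9}: n^(1/2n)-1 = {a_n:.6e} < {bound:.6e}")
```

The printed values shrink toward $0$ as $n$ grows, matching the conclusion $\lim_{n\to\infty} n^{1/(2n)} = 1$ reached analytically in the answers.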
So $\\lim\\limits_{n\\to\\infty}n^{\\frac1n}$ exists.\n$$\n\\begin{align}\n\\lim_{n\\to\\infty}n^{\\frac1n}\n&amp;=\\lim_{n\\to\\infty}(2n)^{\\frac1{2n}}\\\\\n&amp;=\\lim_{n\\to\\infty}2^{\\frac1{2n}}\\lim_{n\\to\\infty}n^{\\frac1{2n}}\\\\\n\\end{align}\n$$\nTherefore,\n$$\n\\begin{align}\n\\lim_{n\\to\\infty}n^{\\frac1{2n}}\n&amp;=\\lim_{n\\to\\infty}2^{\\frac1{2n}}\\\\[6pt]\n&amp;=1\n\\end{align}\n$$</p>\n\n<hr>\n\n<p><strong>Bernoulli's Inequality</strong></p>\n\n<p><a href=\"https://en.wikipedia.org/wiki/Bernoulli&#39;s_inequality\" rel=\"nofollow noreferrer\">Bernoulli's Inequality</a> says\n$$\n\\begin{align}\n\\lim_{n\\to\\infty}\\sqrt{n}^{1/n}\n&amp;=\\lim_{n\\to\\infty}\\left(1+\\left(\\sqrt{n}-1\\right)\\right)^{1/n}\\\\[3pt]\n&amp;\\le\\lim_{n\\to\\infty}\\left(1+\\tfrac1n\\left(\\sqrt{n}-1\\right)\\right)\\\\[6pt]\n&amp;=1\n\\end{align}\n$$\nSince $\\sqrt{n}^{1/n}\\ge1$, the <a href=\"https://en.wikipedia.org/wiki/Squeeze_theorem\" rel=\"nofollow noreferrer\">Squeeze Theorem</a> says\n$$\n\\lim_{n\\to\\infty}\\sqrt{n}^{1/n}=1\n$$</p>\n" }, { "answer_id": 2086607, "score": 1, "text": "<p>$$\\sum_{k=0}^{n}{{n}\\choose{k}}1^{n-k}(\\sqrt n^\\frac{1}{n}-1)^k= 1+{{n}\\choose {2}}(\\sqrt n^\\frac{1}{n}-1)^2+\\left[{{n}\\choose {1}}(\\sqrt n^\\frac{1}{n}-1)^1+\\sum_{k=3}^{n}{{n}\\choose{k}}1^{n-k}(\\sqrt n^\\frac{1}{n}-1)^k\\right]$$</p>\n\n<p>and see that:</p>\n\n<p>$${{n}\\choose {1}}(\\sqrt n^\\frac{1}{n}-1)^1+\\sum_{k=3}^{n}{{n}\\choose{k}}1^{n-k}(\\sqrt n^\\frac{1}{n}-1)^k \\ge 0$$</p>\n\n<p>because </p>\n\n<p>$$\\sqrt{n}^\\frac{1}{n}-1 \\ge0$$</p>\n" }, { "answer_id": 2086740, "score": 1, "text": "<p>I'd like to offer another method, using $x^n-1=(x-1)(x^{n-1}+x^{n-2}+...+x^{2}+x+1)$</p>\n\n<p>$$n-1=(n^{\\frac{1}{2n}})^{2n}-1=\\\\(n^{\\frac{1}{2n}}-1)(n^{\\frac{1}{2n}(2n-1)}+n^{\\frac{1}{2n}(2n-2)}+...+n^{\\frac{1}{2n}2}+n^{\\frac{1}{2n}}+1)\\geq$$\nusing AM-GM\n$$\\geq(n^{\\frac{1}{2n}}-1)\\left((2n-1)\\sqrt[2n-1]{n^{\\frac{1}{2n}(2n-1)}\\cdot 
n^{\\frac{1}{2n}(2n-2)}\\cdot ...\\cdot n^{\\frac{1}{2n}2}\\cdot n^{\\frac{1}{2n}}}+1\\right)=\\\\\n(n^{\\frac{1}{2n}}-1)\\left((2n-1)\\sqrt[2n-1]{n^{\\frac{1}{2n}(2n-1+2n-2+...+1)}}+1\\right)=\n\\\\(n^{\\frac{1}{2n}}-1)\\left((2n-1)\\sqrt[2n-1]{n^{\\frac{1}{2n}\\frac{(2n-1)2n}{2}}}+1\\right)=\\\\\n(n^{\\frac{1}{2n}}-1)\\left((2n-1)\\sqrt{n}+1\\right)$$\n<a href=\"https://www.jstor.org/stable/3615890?seq=1#page_scan_tab_contents\" rel=\"nofollow noreferrer\">Also $\\ln{x}\\leq x-1,x&gt;0$</a> which means $$\\ln{n^{\\frac{1}{2n}}}=\\frac{\\ln{n}}{2n}\\leq n^{\\frac{1}{2n}}-1$$</p>\n\n<p>Altogether \n$$\\frac{\\ln{n}}{2n}\\leq n^{\\frac{1}{2n}}-1\\leq \\frac{n-1}{(2n-1)\\sqrt{n}+1}&lt;\\frac{1}{2\\sqrt{n}}$$</p>\n" }, { "answer_id": 4542761, "score": 0, "text": "<p>Let <span class=\"math-container\">$\\lim_{n\\to\\infty} n^{1/2n} = L$</span></p>\n<p>Taking logarithms, we get ;</p>\n<p><span class=\"math-container\">$log(L) =$$\\lim_{n\\to\\infty}$$\\frac{1}{2n}$$log(n)$</span></p>\n<p>Applying <span class=\"math-container\">$LH$</span> Rule :</p>\n<p><span class=\"math-container\">$log(L) =$$\\lim_{n\\to\\infty}$$\\frac{1}{2n}=0$</span></p>\n<p>Hence, <span class=\"math-container\">$L=e^0=1$</span></p>\n" }, { "answer_id": 2086619, "score": 3, "text": "<p>Since $n^{1/n}\\ge 1$, then all of the terms in the binomial expansion</p>\n\n<p>$$\\left(1+(\\sqrt[n]{n}-1)\\right)^n=\\sum_{k=0}^n\\binom{n}{k}(\\sqrt[n]{n}-1)^k$$</p>\n\n<p>are non-negative. 
Therefore, we have for any $m\\le n$</p>\n\n<p>$$\\sum_{k=0}^n\\binom{n}{k}(\\sqrt[n]{n}-1)^k\\ge \\sum_{k=0}^m\\binom{n}{k}(\\sqrt[n]{n}-1)^k$$</p>\n\n<p>Taking $m=2$ and $n\\ge 2$ yields</p>\n\n<p>$$\\begin{align}\n\\sum_{k=0}^n\\binom{n}{k}(\\sqrt[n]{n}-1)^k&amp;\\ge \\sum_{k=0}^2\\binom{n}{k}(\\sqrt[n]{n}-1)^k\\\\\\\\\n&amp;=1+n(\\sqrt[n]{n}-1)+\\frac{n(n-1)}{2}(\\sqrt[n]{n}-1)^2\\\\\\\\\n&amp;\\ge 1+\\frac{n(n-1)}{2}(\\sqrt[n]{n}-1)^2\n\\end{align}$$</p>\n\n<p>where the last inequality is true since $n(\\sqrt[n]{n}-1)\\ge 0$.</p>\n" }, { "answer_id": 2086800, "score": 3, "text": "<p>They are truncating the binomial expansion, discarding positive terms: if\n$$\nr=r_0+r_1+r_2+\\dots+r_n\n$$\nand $r_i\\ge0$ for $i=0,1,\\dots,n$, then $r\\ge r_0+r_2$.</p>\n\n<hr>\n\n<p>It can be done more easily. Set $(\\sqrt{n})^{1/n}=1+a_n$, so\n$$\n\\sqrt{n}=(1+a_n)^{n}\\ge1+na_n\n$$\nby Bernoulli’s inequality. Hence\n$$\na_n\\le\\frac{\\sqrt{n}-1}{n}=\\frac{1}{\\sqrt{n}}-\\frac{1}{n}\n$$\nso\n$$\n(\\sqrt{n})^{1/n}=1+a_n\\le 1+\\frac{1}{\\sqrt{n}}-\\frac{1}{n}\n$$\nand the squeeze theorem allows to conclude.</p>\n" }, { "answer_id": 2086676, "score": 2, "text": "<p>Set $l =\\lim_{n\\to\\infty}n^\\frac{1}{n}$</p>\n\n<p>$$\\lim_{n\\to\\infty}(\\sqrt n)^\\frac{1}{n}=\\sqrt{\\lim_{n\\to\\infty}n^\\frac{1}{n}}=\\sqrt{l}$$</p>\n\n<p>For the limit of the sequence $n^\\frac{1}{n}$ get the sub-sequence $(2n)^\\frac{1}{2n}$</p>\n\n<p>So: </p>\n\n<p>$l^2=(\\lim (2n)^\\frac{1}{2n})^2=\\lim (2n)^\\frac{1}{n}=\\lim 2^\\frac{1}{n}. n^\\frac{1}{n}=\\lim 2^\\frac{1}{n}.\\lim n^\\frac{1}{n}=l$ </p>\n\n<p>$l^2=l$ and as $n^\\frac{1}{n}\\gt1$ we have $l=1$</p>\n\n<p>And $\\sqrt{\\lim_{n\\to\\infty}n^\\frac{1}{n}}=\\sqrt{1}=1$</p>\n" }, { "answer_id": 2086722, "score": 2, "text": "<p>That proof seems a little complicated; I'll do it a different way. 
Let's write $(\\sqrt n)^{1/n} = 1+a_n.$ We know each $a_n&gt;0.$ For $n\\ge 2,$ the binomial theorem shows</p>\n\n<p>$$\\sqrt n= (1+a_n)^{n} = 1^n\\cdot a_n^0 + n\\cdot1^{n-1}\\cdot a_n^1 + [n(n-1)/2]\\cdot 1^{n-2}\\cdot a_n^2 + \\cdots &gt; [n(n-1)/2]a_n^2.$$</p>\n\n<p>Therefore </p>\n\n<p>$$\\frac{\\sqrt n}{n(n-1)/2} &gt; a_n^2.$$</p>\n\n<p>Since the expression on the left $\\to 0,a_n^2 \\to 0,$ which implies $a_n \\to 0.$ It follows that $(\\sqrt n)^{1/n} \\to 1.$</p>\n" }, { "answer_id": 2086792, "score": 2, "text": "<p><strong>A Different Approach</strong></p>\n\n<p>Because <a href=\"https://math.stackexchange.com/a/306245\">$\\left(1+\\frac1n\\right)^{n+1}$ is decreasing</a>, for $n\\ge3$,\n$$\n\\begin{align}\n\\frac{(n+1)^n}{n^{n+1}}\n&amp;=\\frac1{n+1}\\left(1+\\frac1n\\right)^{n+1}\\\\\n&amp;\\le\\frac14\\cdot\\frac{256}{81}\\\\[6pt]\n&amp;\\lt1\n\\end{align}\n$$\nThus, for $n\\ge3$,\n$$\n(n+1)^{\\frac1{n+1}}\\lt n^{\\frac1n}\n$$\nThat is, $n^{\\frac1n}$ is decreasing and bounded below by $1$. 
So $\\lim\\limits_{n\\to\\infty}n^{\\frac1n}$ exists.\n$$\n\\begin{align}\n\\lim_{n\\to\\infty}n^{\\frac1n}\n&amp;=\\lim_{n\\to\\infty}(2n)^{\\frac1{2n}}\\\\\n&amp;=\\lim_{n\\to\\infty}2^{\\frac1{2n}}\\lim_{n\\to\\infty}n^{\\frac1{2n}}\\\\\n\\end{align}\n$$\nTherefore,\n$$\n\\begin{align}\n\\lim_{n\\to\\infty}n^{\\frac1{2n}}\n&amp;=\\lim_{n\\to\\infty}2^{\\frac1{2n}}\\\\[6pt]\n&amp;=1\n\\end{align}\n$$</p>\n\n<hr>\n\n<p><strong>Bernoulli's Inequality</strong></p>\n\n<p><a href=\"https://en.wikipedia.org/wiki/Bernoulli&#39;s_inequality\" rel=\"nofollow noreferrer\">Bernoulli's Inequality</a> says\n$$\n\\begin{align}\n\\lim_{n\\to\\infty}\\sqrt{n}^{1/n}\n&amp;=\\lim_{n\\to\\infty}\\left(1+\\left(\\sqrt{n}-1\\right)\\right)^{1/n}\\\\[3pt]\n&amp;\\le\\lim_{n\\to\\infty}\\left(1+\\tfrac1n\\left(\\sqrt{n}-1\\right)\\right)\\\\[6pt]\n&amp;=1\n\\end{align}\n$$\nSince $\\sqrt{n}^{1/n}\\ge1$, the <a href=\"https://en.wikipedia.org/wiki/Squeeze_theorem\" rel=\"nofollow noreferrer\">Squeeze Theorem</a> says\n$$\n\\lim_{n\\to\\infty}\\sqrt{n}^{1/n}=1\n$$</p>\n" } ]
Question: Understanding the proof of $\displaystyle\lim_{n\to\infty}(\sqrt n)^\frac{1}{n}=1$ Answer 1: <p>Since $n^{1/n}\ge 1$, then all of the terms in the binomial expansion</p> <p>$$\left(1+(\sqrt[n]{n}-1)\right)^n=\sum_{k=0}^n\binom{n}{k}(\sqrt[n]{n}-1)^k$$</p> <p>are non-negative. Therefore, we have for any $m\le n$</p> <p>$$\sum_{k=0}^n\binom{n}{k}(\sqrt[n]{n}-1)^k\ge \sum_{k=0}^m\binom{n}{k}(\sqrt[n]{n}-1)^k$$</p> <p>Taking $m=2$ and $n\ge 2$ yields</p> <p>$$\begin{align} \sum_{k=0}^n\binom{n}{k}(\sqrt[n]{n}-1)^k&amp;\ge \sum_{k=0}^2\binom{n}{k}(\sqrt[n]{n}-1)^k\\\\ &amp;=1+n(\sqrt[n]{n}-1)+\frac{n(n-1)}{2}(\sqrt[n]{n}-1)^2\\\\ &amp;\ge 1+\frac{n(n-1)}{2}(\sqrt[n]{n}-1)^2 \end{align}$$</p> <p>where the last inequality is true since $n(\sqrt[n]{n}-1)\ge 0$.</p> Answer 2: <p>They are truncating the binomial expansion, discarding positive terms: if $$ r=r_0+r_1+r_2+\dots+r_n $$ and $r_i\ge0$ for $i=0,1,\dots,n$, then $r\ge r_0+r_2$.</p> <hr> <p>It can be done more easily. Set $(\sqrt{n})^{1/n}=1+a_n$, so $$ \sqrt{n}=(1+a_n)^{n}\ge1+na_n $$ by Bernoulli’s inequality. Hence $$ a_n\le\frac{\sqrt{n}-1}{n}=\frac{1}{\sqrt{n}}-\frac{1}{n} $$ so $$ (\sqrt{n})^{1/n}=1+a_n\le 1+\frac{1}{\sqrt{n}}-\frac{1}{n} $$ and the squeeze theorem allows to conclude.</p> Answer 3: <p>Set $l =\lim_{n\to\infty}n^\frac{1}{n}$</p> <p>$$\lim_{n\to\infty}(\sqrt n)^\frac{1}{n}=\sqrt{\lim_{n\to\infty}n^\frac{1}{n}}=\sqrt{l}$$</p> <p>For the limit of the sequence $n^\frac{1}{n}$ get the sub-sequence $(2n)^\frac{1}{2n}$</p> <p>So: </p> <p>$l^2=(\lim (2n)^\frac{1}{2n})^2=\lim (2n)^\frac{1}{n}=\lim 2^\frac{1}{n}. n^\frac{1}{n}=\lim 2^\frac{1}{n}.\lim n^\frac{1}{n}=l$ </p> <p>$l^2=l$ and as $n^\frac{1}{n}\gt1$ we have $l=1$</p> <p>And $\sqrt{\lim_{n\to\infty}n^\frac{1}{n}}=\sqrt{1}=1$</p> Answer 4: <p>That proof seems a little complicated; I'll do it a different way. 
Let's write $(\sqrt n)^{1/n} = 1+a_n.$ We know each $a_n&gt;0.$ For $n\ge 2,$ the binomial theorem shows</p> <p>$$\sqrt n= (1+a_n)^{n} = 1^n\cdot a_n^0 + n\cdot1^{n-1}\cdot a_n^1 + [n(n-1)/2]\cdot 1^{n-2}\cdot a_n^2 + \cdots &gt; [n(n-1)/2]a_n^2.$$</p> <p>Therefore </p> <p>$$\frac{\sqrt n}{n(n-1)/2} &gt; a_n^2.$$</p> <p>Since the expression on the left $\to 0,a_n^2 \to 0,$ which implies $a_n \to 0.$ It follows that $(\sqrt n)^{1/n} \to 1.$</p> Answer 5: <p><strong>A Different Approach</strong></p> <p>Because <a href="https://math.stackexchange.com/a/306245">$\left(1+\frac1n\right)^{n+1}$ is decreasing</a>, for $n\ge3$, $$ \begin{align} \frac{(n+1)^n}{n^{n+1}} &amp;=\frac1{n+1}\left(1+\frac1n\right)^{n+1}\\ &amp;\le\frac14\cdot\frac{256}{81}\\[6pt] &amp;\lt1 \end{align} $$ Thus, for $n\ge3$, $$ (n+1)^{\frac1{n+1}}\lt n^{\frac1n} $$ That is, $n^{\frac1n}$ is decreasing and bounded below by $1$. So $\lim\limits_{n\to\infty}n^{\frac1n}$ exists. $$ \begin{align} \lim_{n\to\infty}n^{\frac1n} &amp;=\lim_{n\to\infty}(2n)^{\frac1{2n}}\\ &amp;=\lim_{n\to\infty}2^{\frac1{2n}}\lim_{n\to\infty}n^{\frac1{2n}}\\ \end{align} $$ Therefore, $$ \begin{align} \lim_{n\to\infty}n^{\frac1{2n}} &amp;=\lim_{n\to\infty}2^{\frac1{2n}}\\[6pt] &amp;=1 \end{align} $$</p> <hr> <p><strong>Bernoulli's Inequality</strong></p> <p><a href="https://en.wikipedia.org/wiki/Bernoulli&#39;s_inequality" rel="nofollow noreferrer">Bernoulli's Inequality</a> says $$ \begin{align} \lim_{n\to\infty}\sqrt{n}^{1/n} &amp;=\lim_{n\to\infty}\left(1+\left(\sqrt{n}-1\right)\right)^{1/n}\\[3pt] &amp;\le\lim_{n\to\infty}\left(1+\tfrac1n\left(\sqrt{n}-1\right)\right)\\[6pt] &amp;=1 \end{align} $$ Since $\sqrt{n}^{1/n}\ge1$, the <a href="https://en.wikipedia.org/wiki/Squeeze_theorem" rel="nofollow noreferrer">Squeeze Theorem</a> says $$ \lim_{n\to\infty}\sqrt{n}^{1/n}=1 $$</p> Answer 6: <p>$$\sum_{k=0}^{n}{{n}\choose{k}}1^{n-k}(\sqrt n^\frac{1}{n}-1)^k= 1+{{n}\choose {2}}(\sqrt n^\frac{1}{n}-1)^2+\left[{{n}\choose 
{1}}(\sqrt n^\frac{1}{n}-1)^1+\sum_{k=3}^{n}{{n}\choose{k}}1^{n-k}(\sqrt n^\frac{1}{n}-1)^k\right]$$</p> <p>and see that:</p> <p>$${{n}\choose {1}}(\sqrt n^\frac{1}{n}-1)^1+\sum_{k=3}^{n}{{n}\choose{k}}1^{n-k}(\sqrt n^\frac{1}{n}-1)^k \ge 0$$</p> <p>because </p> <p>$$\sqrt{n}^\frac{1}{n}-1 \ge0$$</p> Answer 7: <p>I'd like to offer another method, using $x^n-1=(x-1)(x^{n-1}+x^{n-2}+...+x^{2}+x+1)$</p> <p>$$n-1=(n^{\frac{1}{2n}})^{2n}-1=\\(n^{\frac{1}{2n}}-1)(n^{\frac{1}{2n}(2n-1)}+n^{\frac{1}{2n}(2n-2)}+...+n^{\frac{1}{2n}2}+n^{\frac{1}{2n}}+1)\geq$$ using AM-GM $$\geq(n^{\frac{1}{2n}}-1)\left((2n-1)\sqrt[2n-1]{n^{\frac{1}{2n}(2n-1)}\cdot n^{\frac{1}{2n}(2n-2)}\cdot ...\cdot n^{\frac{1}{2n}2}\cdot n^{\frac{1}{2n}}}+1\right)=\\ (n^{\frac{1}{2n}}-1)\left((2n-1)\sqrt[2n-1]{n^{\frac{1}{2n}(2n-1+2n-2+...+1)}}+1\right)= \\(n^{\frac{1}{2n}}-1)\left((2n-1)\sqrt[2n-1]{n^{\frac{1}{2n}\frac{(2n-1)2n}{2}}}+1\right)=\\ (n^{\frac{1}{2n}}-1)\left((2n-1)\sqrt{n}+1\right)$$ <a href="https://www.jstor.org/stable/3615890?seq=1#page_scan_tab_contents" rel="nofollow noreferrer">Also $\ln{x}\leq x-1,x&gt;0$</a> which means $$\ln{n^{\frac{1}{2n}}}=\frac{\ln{n}}{2n}\leq n^{\frac{1}{2n}}-1$$</p> <p>Altogether $$\frac{\ln{n}}{2n}\leq n^{\frac{1}{2n}}-1\leq \frac{n-1}{(2n-1)\sqrt{n}+1}&lt;\frac{1}{2\sqrt{n}}$$</p> Answer 8: <p>Let <span class="math-container">$\lim_{n\to\infty} n^{1/2n} = L$</span></p> <p>Taking logarithms, we get ;</p> <p><span class="math-container">$log(L) =$$\lim_{n\to\infty}$$\frac{1}{2n}$$log(n)$</span></p> <p>Applying <span class="math-container">$LH$</span> Rule :</p> <p><span class="math-container">$log(L) =$$\lim_{n\to\infty}$$\frac{1}{2n}=0$</span></p> <p>Hence, <span class="math-container">$L=e^0=1$</span></p>
Therefore, we have for any $m\le n$</p> <p>$$\sum_{k=0}^n\binom{n}{k}(\sqrt[n]{n}-1)^k\ge \sum_{k=0}^m\binom{n}{k}(\sqrt[n]{n}-1)^k$$</p> <p>Taking $m=2$ and $n\ge 2$ yields</p> <p>$$\begin{align} \sum_{k=0}^n\binom{n}{k}(\sqrt[n]{n}-1)^k&amp;\ge \sum_{k=0}^2\binom{n}{k}(\sqrt[n]{n}-1)^k\\\\ &amp;=1+n(\sqrt[n]{n}-1)+\frac{n(n-1)}{2}(\sqrt[n]{n}-1)^2\\\\ &amp;\ge 1+\frac{n(n-1)}{2}(\sqrt[n]{n}-1)^2 \end{align}$$</p> <p>where the last inequality is true since $n(\sqrt[n]{n}-1)\ge 0$.</p> Answer 18: <p>They are truncating the binomial expansion, discarding positive terms: if $$ r=r_0+r_1+r_2+\dots+r_n $$ and $r_i\ge0$ for $i=0,1,\dots,n$, then $r\ge r_0+r_2$.</p> <hr> <p>It can be done more easily. Set $(\sqrt{n})^{1/n}=1+a_n$, so $$ \sqrt{n}=(1+a_n)^{n}\ge1+na_n $$ by Bernoulli’s inequality. Hence $$ a_n\le\frac{\sqrt{n}-1}{n}=\frac{1}{\sqrt{n}}-\frac{1}{n} $$ so $$ (\sqrt{n})^{1/n}=1+a_n\le 1+\frac{1}{\sqrt{n}}-\frac{1}{n} $$ and the squeeze theorem allows to conclude.</p> Answer 19: <p>Set $l =\lim_{n\to\infty}n^\frac{1}{n}$</p> <p>$$\lim_{n\to\infty}(\sqrt n)^\frac{1}{n}=\sqrt{\lim_{n\to\infty}n^\frac{1}{n}}=\sqrt{l}$$</p> <p>For the limit of the sequence $n^\frac{1}{n}$ get the sub-sequence $(2n)^\frac{1}{2n}$</p> <p>So: </p> <p>$l^2=(\lim (2n)^\frac{1}{2n})^2=\lim (2n)^\frac{1}{n}=\lim 2^\frac{1}{n}. n^\frac{1}{n}=\lim 2^\frac{1}{n}.\lim n^\frac{1}{n}=l$ </p> <p>$l^2=l$ and as $n^\frac{1}{n}\gt1$ we have $l=1$</p> <p>And $\sqrt{\lim_{n\to\infty}n^\frac{1}{n}}=\sqrt{1}=1$</p> Answer 20: <p>That proof seems a little complicated; I'll do it a different way. 
Let's write $(\sqrt n)^{1/n} = 1+a_n.$ We know each $a_n&gt;0.$ For $n\ge 2,$ the binomial theorem shows</p> <p>$$\sqrt n= (1+a_n)^{n} = 1^n\cdot a_n^0 + n\cdot1^{n-1}\cdot a_n^1 + [n(n-1)/2]\cdot 1^{n-2}\cdot a_n^2 + \cdots &gt; [n(n-1)/2]a_n^2.$$</p> <p>Therefore </p> <p>$$\frac{\sqrt n}{n(n-1)/2} &gt; a_n^2.$$</p> <p>Since the expression on the left $\to 0,a_n^2 \to 0,$ which implies $a_n \to 0.$ It follows that $(\sqrt n)^{1/n} \to 1.$</p> Answer 21: <p><strong>A Different Approach</strong></p> <p>Because <a href="https://math.stackexchange.com/a/306245">$\left(1+\frac1n\right)^{n+1}$ is decreasing</a>, for $n\ge3$, $$ \begin{align} \frac{(n+1)^n}{n^{n+1}} &amp;=\frac1{n+1}\left(1+\frac1n\right)^{n+1}\\ &amp;\le\frac14\cdot\frac{256}{81}\\[6pt] &amp;\lt1 \end{align} $$ Thus, for $n\ge3$, $$ (n+1)^{\frac1{n+1}}\lt n^{\frac1n} $$ That is, $n^{\frac1n}$ is decreasing and bounded below by $1$. So $\lim\limits_{n\to\infty}n^{\frac1n}$ exists. $$ \begin{align} \lim_{n\to\infty}n^{\frac1n} &amp;=\lim_{n\to\infty}(2n)^{\frac1{2n}}\\ &amp;=\lim_{n\to\infty}2^{\frac1{2n}}\lim_{n\to\infty}n^{\frac1{2n}}\\ \end{align} $$ Therefore, $$ \begin{align} \lim_{n\to\infty}n^{\frac1{2n}} &amp;=\lim_{n\to\infty}2^{\frac1{2n}}\\[6pt] &amp;=1 \end{align} $$</p> <hr> <p><strong>Bernoulli's Inequality</strong></p> <p><a href="https://en.wikipedia.org/wiki/Bernoulli&#39;s_inequality" rel="nofollow noreferrer">Bernoulli's Inequality</a> says $$ \begin{align} \lim_{n\to\infty}\sqrt{n}^{1/n} &amp;=\lim_{n\to\infty}\left(1+\left(\sqrt{n}-1\right)\right)^{1/n}\\[3pt] &amp;\le\lim_{n\to\infty}\left(1+\tfrac1n\left(\sqrt{n}-1\right)\right)\\[6pt] &amp;=1 \end{align} $$ Since $\sqrt{n}^{1/n}\ge1$, the <a href="https://en.wikipedia.org/wiki/Squeeze_theorem" rel="nofollow noreferrer">Squeeze Theorem</a> says $$ \lim_{n\to\infty}\sqrt{n}^{1/n}=1 $$</p> Answer 22: <p>$$\sum_{k=0}^{n}{{n}\choose{k}}1^{n-k}(\sqrt n^\frac{1}{n}-1)^k= 1+{{n}\choose {2}}(\sqrt n^\frac{1}{n}-1)^2+\left[{{n}\choose 
{1}}(\sqrt n^\frac{1}{n}-1)^1+\sum_{k=3}^{n}{{n}\choose{k}}1^{n-k}(\sqrt n^\frac{1}{n}-1)^k\right]$$</p> <p>and see that:</p> <p>$${{n}\choose {1}}(\sqrt n^\frac{1}{n}-1)^1+\sum_{k=3}^{n}{{n}\choose{k}}1^{n-k}(\sqrt n^\frac{1}{n}-1)^k \ge 0$$</p> <p>because </p> <p>$$\sqrt{n}^\frac{1}{n}-1 \ge0$$</p> Answer 23: <p>I'd like to offer another method, using $x^n-1=(x-1)(x^{n-1}+x^{n-2}+...+x^{2}+x+1)$</p> <p>$$n-1=(n^{\frac{1}{2n}})^{2n}-1=\\(n^{\frac{1}{2n}}-1)(n^{\frac{1}{2n}(2n-1)}+n^{\frac{1}{2n}(2n-2)}+...+n^{\frac{1}{2n}2}+n^{\frac{1}{2n}}+1)\geq$$ using AM-GM $$\geq(n^{\frac{1}{2n}}-1)\left((2n-1)\sqrt[2n-1]{n^{\frac{1}{2n}(2n-1)}\cdot n^{\frac{1}{2n}(2n-2)}\cdot ...\cdot n^{\frac{1}{2n}2}\cdot n^{\frac{1}{2n}}}+1\right)=\\ (n^{\frac{1}{2n}}-1)\left((2n-1)\sqrt[2n-1]{n^{\frac{1}{2n}(2n-1+2n-2+...+1)}}+1\right)= \\(n^{\frac{1}{2n}}-1)\left((2n-1)\sqrt[2n-1]{n^{\frac{1}{2n}\frac{(2n-1)2n}{2}}}+1\right)=\\ (n^{\frac{1}{2n}}-1)\left((2n-1)\sqrt{n}+1\right)$$ <a href="https://www.jstor.org/stable/3615890?seq=1#page_scan_tab_contents" rel="nofollow noreferrer">Also $\ln{x}\leq x-1,x&gt;0$</a> which means $$\ln{n^{\frac{1}{2n}}}=\frac{\ln{n}}{2n}\leq n^{\frac{1}{2n}}-1$$</p> <p>Altogether $$\frac{\ln{n}}{2n}\leq n^{\frac{1}{2n}}-1\leq \frac{n-1}{(2n-1)\sqrt{n}+1}&lt;\frac{1}{2\sqrt{n}}$$</p> Answer 24: <p>Let <span class="math-container">$\lim_{n\to\infty} n^{1/2n} = L$</span></p> <p>Taking logarithms, we get ;</p> <p><span class="math-container">$log(L) =$$\lim_{n\to\infty}$$\frac{1}{2n}$$log(n)$</span></p> <p>Applying <span class="math-container">$LH$</span> Rule :</p> <p><span class="math-container">$log(L) =$$\lim_{n\to\infty}$$\frac{1}{2n}=0$</span></p> <p>Hence, <span class="math-container">$L=e^0=1$</span></p> Answer 25: <p>Since $n^{1/n}\ge 1$, then all of the terms in the binomial expansion</p> <p>$$\left(1+(\sqrt[n]{n}-1)\right)^n=\sum_{k=0}^n\binom{n}{k}(\sqrt[n]{n}-1)^k$$</p> <p>are non-negative. 
Therefore, we have for any $m\le n$</p> <p>$$\sum_{k=0}^n\binom{n}{k}(\sqrt[n]{n}-1)^k\ge \sum_{k=0}^m\binom{n}{k}(\sqrt[n]{n}-1)^k$$</p> <p>Taking $m=2$ and $n\ge 2$ yields</p> <p>$$\begin{align} \sum_{k=0}^n\binom{n}{k}(\sqrt[n]{n}-1)^k&amp;\ge \sum_{k=0}^2\binom{n}{k}(\sqrt[n]{n}-1)^k\\\\ &amp;=1+n(\sqrt[n]{n}-1)+\frac{n(n-1)}{2}(\sqrt[n]{n}-1)^2\\\\ &amp;\ge 1+\frac{n(n-1)}{2}(\sqrt[n]{n}-1)^2 \end{align}$$</p> <p>where the last inequality is true since $n(\sqrt[n]{n}-1)\ge 0$.</p> Answer 26: <p>They are truncating the binomial expansion, discarding positive terms: if $$ r=r_0+r_1+r_2+\dots+r_n $$ and $r_i\ge0$ for $i=0,1,\dots,n$, then $r\ge r_0+r_2$.</p> <hr> <p>It can be done more easily. Set $(\sqrt{n})^{1/n}=1+a_n$, so $$ \sqrt{n}=(1+a_n)^{n}\ge1+na_n $$ by Bernoulli’s inequality. Hence $$ a_n\le\frac{\sqrt{n}-1}{n}=\frac{1}{\sqrt{n}}-\frac{1}{n} $$ so $$ (\sqrt{n})^{1/n}=1+a_n\le 1+\frac{1}{\sqrt{n}}-\frac{1}{n} $$ and the squeeze theorem allows to conclude.</p> Answer 27: <p>Set $l =\lim_{n\to\infty}n^\frac{1}{n}$</p> <p>$$\lim_{n\to\infty}(\sqrt n)^\frac{1}{n}=\sqrt{\lim_{n\to\infty}n^\frac{1}{n}}=\sqrt{l}$$</p> <p>For the limit of the sequence $n^\frac{1}{n}$ get the sub-sequence $(2n)^\frac{1}{2n}$</p> <p>So: </p> <p>$l^2=(\lim (2n)^\frac{1}{2n})^2=\lim (2n)^\frac{1}{n}=\lim 2^\frac{1}{n}. n^\frac{1}{n}=\lim 2^\frac{1}{n}.\lim n^\frac{1}{n}=l$ </p> <p>$l^2=l$ and as $n^\frac{1}{n}\gt1$ we have $l=1$</p> <p>And $\sqrt{\lim_{n\to\infty}n^\frac{1}{n}}=\sqrt{1}=1$</p> Answer 28: <p>That proof seems a little complicated; I'll do it a different way. 
Let's write $(\sqrt n)^{1/n} = 1+a_n.$ We know each $a_n&gt;0.$ For $n\ge 2,$ the binomial theorem shows</p> <p>$$\sqrt n= (1+a_n)^{n} = 1^n\cdot a_n^0 + n\cdot1^{n-1}\cdot a_n^1 + [n(n-1)/2]\cdot 1^{n-2}\cdot a_n^2 + \cdots &gt; [n(n-1)/2]a_n^2.$$</p> <p>Therefore </p> <p>$$\frac{\sqrt n}{n(n-1)/2} &gt; a_n^2.$$</p> <p>Since the expression on the left $\to 0,a_n^2 \to 0,$ which implies $a_n \to 0.$ It follows that $(\sqrt n)^{1/n} \to 1.$</p> Answer 29: <p><strong>A Different Approach</strong></p> <p>Because <a href="https://math.stackexchange.com/a/306245">$\left(1+\frac1n\right)^{n+1}$ is decreasing</a>, for $n\ge3$, $$ \begin{align} \frac{(n+1)^n}{n^{n+1}} &amp;=\frac1{n+1}\left(1+\frac1n\right)^{n+1}\\ &amp;\le\frac14\cdot\frac{256}{81}\\[6pt] &amp;\lt1 \end{align} $$ Thus, for $n\ge3$, $$ (n+1)^{\frac1{n+1}}\lt n^{\frac1n} $$ That is, $n^{\frac1n}$ is decreasing and bounded below by $1$. So $\lim\limits_{n\to\infty}n^{\frac1n}$ exists. $$ \begin{align} \lim_{n\to\infty}n^{\frac1n} &amp;=\lim_{n\to\infty}(2n)^{\frac1{2n}}\\ &amp;=\lim_{n\to\infty}2^{\frac1{2n}}\lim_{n\to\infty}n^{\frac1{2n}}\\ \end{align} $$ Therefore, $$ \begin{align} \lim_{n\to\infty}n^{\frac1{2n}} &amp;=\lim_{n\to\infty}2^{\frac1{2n}}\\[6pt] &amp;=1 \end{align} $$</p> <hr> <p><strong>Bernoulli's Inequality</strong></p> <p><a href="https://en.wikipedia.org/wiki/Bernoulli&#39;s_inequality" rel="nofollow noreferrer">Bernoulli's Inequality</a> says $$ \begin{align} \lim_{n\to\infty}\sqrt{n}^{1/n} &amp;=\lim_{n\to\infty}\left(1+\left(\sqrt{n}-1\right)\right)^{1/n}\\[3pt] &amp;\le\lim_{n\to\infty}\left(1+\tfrac1n\left(\sqrt{n}-1\right)\right)\\[6pt] &amp;=1 \end{align} $$ Since $\sqrt{n}^{1/n}\ge1$, the <a href="https://en.wikipedia.org/wiki/Squeeze_theorem" rel="nofollow noreferrer">Squeeze Theorem</a> says $$ \lim_{n\to\infty}\sqrt{n}^{1/n}=1 $$</p> Answer 30: <p>$$\sum_{k=0}^{n}{{n}\choose{k}}1^{n-k}(\sqrt n^\frac{1}{n}-1)^k= 1+{{n}\choose {2}}(\sqrt n^\frac{1}{n}-1)^2+\left[{{n}\choose 
{1}}(\sqrt n^\frac{1}{n}-1)^1+\sum_{k=3}^{n}{{n}\choose{k}}1^{n-k}(\sqrt n^\frac{1}{n}-1)^k\right]$$</p> <p>and see that:</p> <p>$${{n}\choose {1}}(\sqrt n^\frac{1}{n}-1)^1+\sum_{k=3}^{n}{{n}\choose{k}}1^{n-k}(\sqrt n^\frac{1}{n}-1)^k \ge 0$$</p> <p>because </p> <p>$$\sqrt{n}^\frac{1}{n}-1 \ge0$$</p> Answer 31: <p>I'd like to offer another method, using $x^n-1=(x-1)(x^{n-1}+x^{n-2}+...+x^{2}+x+1)$</p> <p>$$n-1=(n^{\frac{1}{2n}})^{2n}-1=\\(n^{\frac{1}{2n}}-1)(n^{\frac{1}{2n}(2n-1)}+n^{\frac{1}{2n}(2n-2)}+...+n^{\frac{1}{2n}2}+n^{\frac{1}{2n}}+1)\geq$$ using AM-GM $$\geq(n^{\frac{1}{2n}}-1)\left((2n-1)\sqrt[2n-1]{n^{\frac{1}{2n}(2n-1)}\cdot n^{\frac{1}{2n}(2n-2)}\cdot ...\cdot n^{\frac{1}{2n}2}\cdot n^{\frac{1}{2n}}}+1\right)=\\ (n^{\frac{1}{2n}}-1)\left((2n-1)\sqrt[2n-1]{n^{\frac{1}{2n}(2n-1+2n-2+...+1)}}+1\right)= \\(n^{\frac{1}{2n}}-1)\left((2n-1)\sqrt[2n-1]{n^{\frac{1}{2n}\frac{(2n-1)2n}{2}}}+1\right)=\\ (n^{\frac{1}{2n}}-1)\left((2n-1)\sqrt{n}+1\right)$$ <a href="https://www.jstor.org/stable/3615890?seq=1#page_scan_tab_contents" rel="nofollow noreferrer">Also $\ln{x}\leq x-1,x&gt;0$</a> which means $$\ln{n^{\frac{1}{2n}}}=\frac{\ln{n}}{2n}\leq n^{\frac{1}{2n}}-1$$</p> <p>Altogether $$\frac{\ln{n}}{2n}\leq n^{\frac{1}{2n}}-1\leq \frac{n-1}{(2n-1)\sqrt{n}+1}&lt;\frac{1}{2\sqrt{n}}$$</p> Answer 32: <p>Let <span class="math-container">$\lim_{n\to\infty} n^{1/2n} = L$</span></p> <p>Taking logarithms, we get ;</p> <p><span class="math-container">$log(L) =$$\lim_{n\to\infty}$$\frac{1}{2n}$$log(n)$</span></p> <p>Applying <span class="math-container">$LH$</span> Rule :</p> <p><span class="math-container">$log(L) =$$\lim_{n\to\infty}$$\frac{1}{2n}=0$</span></p> <p>Hence, <span class="math-container">$L=e^0=1$</span></p> Answer 33: <p>Since $n^{1/n}\ge 1$, then all of the terms in the binomial expansion</p> <p>$$\left(1+(\sqrt[n]{n}-1)\right)^n=\sum_{k=0}^n\binom{n}{k}(\sqrt[n]{n}-1)^k$$</p> <p>are non-negative. 
Therefore, we have for any $m\le n$</p> <p>$$\sum_{k=0}^n\binom{n}{k}(\sqrt[n]{n}-1)^k\ge \sum_{k=0}^m\binom{n}{k}(\sqrt[n]{n}-1)^k$$</p> <p>Taking $m=2$ and $n\ge 2$ yields</p> <p>$$\begin{align} \sum_{k=0}^n\binom{n}{k}(\sqrt[n]{n}-1)^k&amp;\ge \sum_{k=0}^2\binom{n}{k}(\sqrt[n]{n}-1)^k\\\\ &amp;=1+n(\sqrt[n]{n}-1)+\frac{n(n-1)}{2}(\sqrt[n]{n}-1)^2\\\\ &amp;\ge 1+\frac{n(n-1)}{2}(\sqrt[n]{n}-1)^2 \end{align}$$</p> <p>where the last inequality is true since $n(\sqrt[n]{n}-1)\ge 0$.</p> Answer 34: <p>They are truncating the binomial expansion, discarding positive terms: if $$ r=r_0+r_1+r_2+\dots+r_n $$ and $r_i\ge0$ for $i=0,1,\dots,n$, then $r\ge r_0+r_2$.</p> <hr> <p>It can be done more easily. Set $(\sqrt{n})^{1/n}=1+a_n$, so $$ \sqrt{n}=(1+a_n)^{n}\ge1+na_n $$ by Bernoulli’s inequality. Hence $$ a_n\le\frac{\sqrt{n}-1}{n}=\frac{1}{\sqrt{n}}-\frac{1}{n} $$ so $$ (\sqrt{n})^{1/n}=1+a_n\le 1+\frac{1}{\sqrt{n}}-\frac{1}{n} $$ and the squeeze theorem allows to conclude.</p> Answer 35: <p>Set $l =\lim_{n\to\infty}n^\frac{1}{n}$</p> <p>$$\lim_{n\to\infty}(\sqrt n)^\frac{1}{n}=\sqrt{\lim_{n\to\infty}n^\frac{1}{n}}=\sqrt{l}$$</p> <p>For the limit of the sequence $n^\frac{1}{n}$ get the sub-sequence $(2n)^\frac{1}{2n}$</p> <p>So: </p> <p>$l^2=(\lim (2n)^\frac{1}{2n})^2=\lim (2n)^\frac{1}{n}=\lim 2^\frac{1}{n}. n^\frac{1}{n}=\lim 2^\frac{1}{n}.\lim n^\frac{1}{n}=l$ </p> <p>$l^2=l$ and as $n^\frac{1}{n}\gt1$ we have $l=1$</p> <p>And $\sqrt{\lim_{n\to\infty}n^\frac{1}{n}}=\sqrt{1}=1$</p> Answer 36: <p>That proof seems a little complicated; I'll do it a different way. 
Let's write $(\sqrt n)^{1/n} = 1+a_n.$ We know each $a_n&gt;0.$ For $n\ge 2,$ the binomial theorem shows</p> <p>$$\sqrt n= (1+a_n)^{n} = 1^n\cdot a_n^0 + n\cdot1^{n-1}\cdot a_n^1 + [n(n-1)/2]\cdot 1^{n-2}\cdot a_n^2 + \cdots &gt; [n(n-1)/2]a_n^2.$$</p> <p>Therefore </p> <p>$$\frac{\sqrt n}{n(n-1)/2} &gt; a_n^2.$$</p> <p>Since the expression on the left $\to 0,a_n^2 \to 0,$ which implies $a_n \to 0.$ It follows that $(\sqrt n)^{1/n} \to 1.$</p> Answer 37: <p><strong>A Different Approach</strong></p> <p>Because <a href="https://math.stackexchange.com/a/306245">$\left(1+\frac1n\right)^{n+1}$ is decreasing</a>, for $n\ge3$, $$ \begin{align} \frac{(n+1)^n}{n^{n+1}} &amp;=\frac1{n+1}\left(1+\frac1n\right)^{n+1}\\ &amp;\le\frac14\cdot\frac{256}{81}\\[6pt] &amp;\lt1 \end{align} $$ Thus, for $n\ge3$, $$ (n+1)^{\frac1{n+1}}\lt n^{\frac1n} $$ That is, $n^{\frac1n}$ is decreasing and bounded below by $1$. So $\lim\limits_{n\to\infty}n^{\frac1n}$ exists. $$ \begin{align} \lim_{n\to\infty}n^{\frac1n} &amp;=\lim_{n\to\infty}(2n)^{\frac1{2n}}\\ &amp;=\lim_{n\to\infty}2^{\frac1{2n}}\lim_{n\to\infty}n^{\frac1{2n}}\\ \end{align} $$ Therefore, $$ \begin{align} \lim_{n\to\infty}n^{\frac1{2n}} &amp;=\lim_{n\to\infty}2^{\frac1{2n}}\\[6pt] &amp;=1 \end{align} $$</p> <hr> <p><strong>Bernoulli's Inequality</strong></p> <p><a href="https://en.wikipedia.org/wiki/Bernoulli&#39;s_inequality" rel="nofollow noreferrer">Bernoulli's Inequality</a> says $$ \begin{align} \lim_{n\to\infty}\sqrt{n}^{1/n} &amp;=\lim_{n\to\infty}\left(1+\left(\sqrt{n}-1\right)\right)^{1/n}\\[3pt] &amp;\le\lim_{n\to\infty}\left(1+\tfrac1n\left(\sqrt{n}-1\right)\right)\\[6pt] &amp;=1 \end{align} $$ Since $\sqrt{n}^{1/n}\ge1$, the <a href="https://en.wikipedia.org/wiki/Squeeze_theorem" rel="nofollow noreferrer">Squeeze Theorem</a> says $$ \lim_{n\to\infty}\sqrt{n}^{1/n}=1 $$</p> Answer 38: <p>$$\sum_{k=0}^{n}{{n}\choose{k}}1^{n-k}(\sqrt n^\frac{1}{n}-1)^k= 1+{{n}\choose {2}}(\sqrt n^\frac{1}{n}-1)^2+\left[{{n}\choose 
{1}}(\sqrt n^\frac{1}{n}-1)^1+\sum_{k=3}^{n}{{n}\choose{k}}1^{n-k}(\sqrt n^\frac{1}{n}-1)^k\right]$$</p> <p>and see that:</p> <p>$${{n}\choose {1}}(\sqrt n^\frac{1}{n}-1)^1+\sum_{k=3}^{n}{{n}\choose{k}}1^{n-k}(\sqrt n^\frac{1}{n}-1)^k \ge 0$$</p> <p>because </p> <p>$$\sqrt{n}^\frac{1}{n}-1 \ge0$$</p> Answer 39: <p>I'd like to offer another method, using $x^n-1=(x-1)(x^{n-1}+x^{n-2}+...+x^{2}+x+1)$</p> <p>$$n-1=(n^{\frac{1}{2n}})^{2n}-1=\\(n^{\frac{1}{2n}}-1)(n^{\frac{1}{2n}(2n-1)}+n^{\frac{1}{2n}(2n-2)}+...+n^{\frac{1}{2n}2}+n^{\frac{1}{2n}}+1)\geq$$ using AM-GM $$\geq(n^{\frac{1}{2n}}-1)\left((2n-1)\sqrt[2n-1]{n^{\frac{1}{2n}(2n-1)}\cdot n^{\frac{1}{2n}(2n-2)}\cdot ...\cdot n^{\frac{1}{2n}2}\cdot n^{\frac{1}{2n}}}+1\right)=\\ (n^{\frac{1}{2n}}-1)\left((2n-1)\sqrt[2n-1]{n^{\frac{1}{2n}(2n-1+2n-2+...+1)}}+1\right)= \\(n^{\frac{1}{2n}}-1)\left((2n-1)\sqrt[2n-1]{n^{\frac{1}{2n}\frac{(2n-1)2n}{2}}}+1\right)=\\ (n^{\frac{1}{2n}}-1)\left((2n-1)\sqrt{n}+1\right)$$ <a href="https://www.jstor.org/stable/3615890?seq=1#page_scan_tab_contents" rel="nofollow noreferrer">Also $\ln{x}\leq x-1,x&gt;0$</a> which means $$\ln{n^{\frac{1}{2n}}}=\frac{\ln{n}}{2n}\leq n^{\frac{1}{2n}}-1$$</p> <p>Altogether $$\frac{\ln{n}}{2n}\leq n^{\frac{1}{2n}}-1\leq \frac{n-1}{(2n-1)\sqrt{n}+1}&lt;\frac{1}{2\sqrt{n}}$$</p> Answer 40: <p>Let <span class="math-container">$\lim_{n\to\infty} n^{1/2n} = L$</span></p> <p>Taking logarithms, we get ;</p> <p><span class="math-container">$log(L) =$$\lim_{n\to\infty}$$\frac{1}{2n}$$log(n)$</span></p> <p>Applying <span class="math-container">$LH$</span> Rule :</p> <p><span class="math-container">$log(L) =$$\lim_{n\to\infty}$$\frac{1}{2n}=0$</span></p> <p>Hence, <span class="math-container">$L=e^0=1$</span></p> Answer 41: <p>Since $n^{1/n}\ge 1$, then all of the terms in the binomial expansion</p> <p>$$\left(1+(\sqrt[n]{n}-1)\right)^n=\sum_{k=0}^n\binom{n}{k}(\sqrt[n]{n}-1)^k$$</p> <p>are non-negative. 
Therefore, we have for any $m\le n$</p> <p>$$\sum_{k=0}^n\binom{n}{k}(\sqrt[n]{n}-1)^k\ge \sum_{k=0}^m\binom{n}{k}(\sqrt[n]{n}-1)^k$$</p> <p>Taking $m=2$ and $n\ge 2$ yields</p> <p>$$\begin{align} \sum_{k=0}^n\binom{n}{k}(\sqrt[n]{n}-1)^k&amp;\ge \sum_{k=0}^2\binom{n}{k}(\sqrt[n]{n}-1)^k\\\\ &amp;=1+n(\sqrt[n]{n}-1)+\frac{n(n-1)}{2}(\sqrt[n]{n}-1)^2\\\\ &amp;\ge 1+\frac{n(n-1)}{2}(\sqrt[n]{n}-1)^2 \end{align}$$</p> <p>where the last inequality is true since $n(\sqrt[n]{n}-1)\ge 0$.</p> Answer 42: <p>They are truncating the binomial expansion, discarding positive terms: if $$ r=r_0+r_1+r_2+\dots+r_n $$ and $r_i\ge0$ for $i=0,1,\dots,n$, then $r\ge r_0+r_2$.</p> <hr> <p>It can be done more easily. Set $(\sqrt{n})^{1/n}=1+a_n$, so $$ \sqrt{n}=(1+a_n)^{n}\ge1+na_n $$ by Bernoulli’s inequality. Hence $$ a_n\le\frac{\sqrt{n}-1}{n}=\frac{1}{\sqrt{n}}-\frac{1}{n} $$ so $$ (\sqrt{n})^{1/n}=1+a_n\le 1+\frac{1}{\sqrt{n}}-\frac{1}{n} $$ and the squeeze theorem allows to conclude.</p> Answer 43: <p>Set $l =\lim_{n\to\infty}n^\frac{1}{n}$</p> <p>$$\lim_{n\to\infty}(\sqrt n)^\frac{1}{n}=\sqrt{\lim_{n\to\infty}n^\frac{1}{n}}=\sqrt{l}$$</p> <p>For the limit of the sequence $n^\frac{1}{n}$ get the sub-sequence $(2n)^\frac{1}{2n}$</p> <p>So: </p> <p>$l^2=(\lim (2n)^\frac{1}{2n})^2=\lim (2n)^\frac{1}{n}=\lim 2^\frac{1}{n}. n^\frac{1}{n}=\lim 2^\frac{1}{n}.\lim n^\frac{1}{n}=l$ </p> <p>$l^2=l$ and as $n^\frac{1}{n}\gt1$ we have $l=1$</p> <p>And $\sqrt{\lim_{n\to\infty}n^\frac{1}{n}}=\sqrt{1}=1$</p> Answer 44: <p>That proof seems a little complicated; I'll do it a different way. 
Let's write $(\sqrt n)^{1/n} = 1+a_n.$ We know each $a_n&gt;0.$ For $n\ge 2,$ the binomial theorem shows</p> <p>$$\sqrt n= (1+a_n)^{n} = 1^n\cdot a_n^0 + n\cdot1^{n-1}\cdot a_n^1 + [n(n-1)/2]\cdot 1^{n-2}\cdot a_n^2 + \cdots &gt; [n(n-1)/2]a_n^2.$$</p> <p>Therefore </p> <p>$$\frac{\sqrt n}{n(n-1)/2} &gt; a_n^2.$$</p> <p>Since the expression on the left $\to 0,a_n^2 \to 0,$ which implies $a_n \to 0.$ It follows that $(\sqrt n)^{1/n} \to 1.$</p> Answer 45: <p><strong>A Different Approach</strong></p> <p>Because <a href="https://math.stackexchange.com/a/306245">$\left(1+\frac1n\right)^{n+1}$ is decreasing</a>, for $n\ge3$, $$ \begin{align} \frac{(n+1)^n}{n^{n+1}} &amp;=\frac1{n+1}\left(1+\frac1n\right)^{n+1}\\ &amp;\le\frac14\cdot\frac{256}{81}\\[6pt] &amp;\lt1 \end{align} $$ Thus, for $n\ge3$, $$ (n+1)^{\frac1{n+1}}\lt n^{\frac1n} $$ That is, $n^{\frac1n}$ is decreasing and bounded below by $1$. So $\lim\limits_{n\to\infty}n^{\frac1n}$ exists. $$ \begin{align} \lim_{n\to\infty}n^{\frac1n} &amp;=\lim_{n\to\infty}(2n)^{\frac1{2n}}\\ &amp;=\lim_{n\to\infty}2^{\frac1{2n}}\lim_{n\to\infty}n^{\frac1{2n}}\\ \end{align} $$ Therefore, $$ \begin{align} \lim_{n\to\infty}n^{\frac1{2n}} &amp;=\lim_{n\to\infty}2^{\frac1{2n}}\\[6pt] &amp;=1 \end{align} $$</p> <hr> <p><strong>Bernoulli's Inequality</strong></p> <p><a href="https://en.wikipedia.org/wiki/Bernoulli&#39;s_inequality" rel="nofollow noreferrer">Bernoulli's Inequality</a> says $$ \begin{align} \lim_{n\to\infty}\sqrt{n}^{1/n} &amp;=\lim_{n\to\infty}\left(1+\left(\sqrt{n}-1\right)\right)^{1/n}\\[3pt] &amp;\le\lim_{n\to\infty}\left(1+\tfrac1n\left(\sqrt{n}-1\right)\right)\\[6pt] &amp;=1 \end{align} $$ Since $\sqrt{n}^{1/n}\ge1$, the <a href="https://en.wikipedia.org/wiki/Squeeze_theorem" rel="nofollow noreferrer">Squeeze Theorem</a> says $$ \lim_{n\to\infty}\sqrt{n}^{1/n}=1 $$</p> Answer 46: <p>$$\sum_{k=0}^{n}{{n}\choose{k}}1^{n-k}(\sqrt n^\frac{1}{n}-1)^k= 1+{{n}\choose {2}}(\sqrt n^\frac{1}{n}-1)^2+\left[{{n}\choose 
{1}}(\sqrt n^\frac{1}{n}-1)^1+\sum_{k=3}^{n}{{n}\choose{k}}1^{n-k}(\sqrt n^\frac{1}{n}-1)^k\right]$$</p> <p>and see that:</p> <p>$${{n}\choose {1}}(\sqrt n^\frac{1}{n}-1)^1+\sum_{k=3}^{n}{{n}\choose{k}}1^{n-k}(\sqrt n^\frac{1}{n}-1)^k \ge 0$$</p> <p>because </p> <p>$$\sqrt{n}^\frac{1}{n}-1 \ge0$$</p> Answer 47: <p>I'd like to offer another method, using $x^n-1=(x-1)(x^{n-1}+x^{n-2}+...+x^{2}+x+1)$</p> <p>$$n-1=(n^{\frac{1}{2n}})^{2n}-1=\\(n^{\frac{1}{2n}}-1)(n^{\frac{1}{2n}(2n-1)}+n^{\frac{1}{2n}(2n-2)}+...+n^{\frac{1}{2n}2}+n^{\frac{1}{2n}}+1)\geq$$ using AM-GM $$\geq(n^{\frac{1}{2n}}-1)\left((2n-1)\sqrt[2n-1]{n^{\frac{1}{2n}(2n-1)}\cdot n^{\frac{1}{2n}(2n-2)}\cdot ...\cdot n^{\frac{1}{2n}2}\cdot n^{\frac{1}{2n}}}+1\right)=\\ (n^{\frac{1}{2n}}-1)\left((2n-1)\sqrt[2n-1]{n^{\frac{1}{2n}(2n-1+2n-2+...+1)}}+1\right)= \\(n^{\frac{1}{2n}}-1)\left((2n-1)\sqrt[2n-1]{n^{\frac{1}{2n}\frac{(2n-1)2n}{2}}}+1\right)=\\ (n^{\frac{1}{2n}}-1)\left((2n-1)\sqrt{n}+1\right)$$ <a href="https://www.jstor.org/stable/3615890?seq=1#page_scan_tab_contents" rel="nofollow noreferrer">Also $\ln{x}\leq x-1,x&gt;0$</a> which means $$\ln{n^{\frac{1}{2n}}}=\frac{\ln{n}}{2n}\leq n^{\frac{1}{2n}}-1$$</p> <p>Altogether $$\frac{\ln{n}}{2n}\leq n^{\frac{1}{2n}}-1\leq \frac{n-1}{(2n-1)\sqrt{n}+1}&lt;\frac{1}{2\sqrt{n}}$$</p> Answer 48: <p>Let <span class="math-container">$\lim_{n\to\infty} n^{1/2n} = L$</span></p> <p>Taking logarithms, we get ;</p> <p><span class="math-container">$log(L) =$$\lim_{n\to\infty}$$\frac{1}{2n}$$log(n)$</span></p> <p>Applying <span class="math-container">$LH$</span> Rule :</p> <p><span class="math-container">$log(L) =$$\lim_{n\to\infty}$$\frac{1}{2n}=0$</span></p> <p>Hence, <span class="math-container">$L=e^0=1$</span></p> Answer 49: <p>Since $n^{1/n}\ge 1$, then all of the terms in the binomial expansion</p> <p>$$\left(1+(\sqrt[n]{n}-1)\right)^n=\sum_{k=0}^n\binom{n}{k}(\sqrt[n]{n}-1)^k$$</p> <p>are non-negative. 
Therefore, we have for any $m\le n$</p> <p>$$\sum_{k=0}^n\binom{n}{k}(\sqrt[n]{n}-1)^k\ge \sum_{k=0}^m\binom{n}{k}(\sqrt[n]{n}-1)^k$$</p> <p>Taking $m=2$ and $n\ge 2$ yields</p> <p>$$\begin{align} \sum_{k=0}^n\binom{n}{k}(\sqrt[n]{n}-1)^k&amp;\ge \sum_{k=0}^2\binom{n}{k}(\sqrt[n]{n}-1)^k\\\\ &amp;=1+n(\sqrt[n]{n}-1)+\frac{n(n-1)}{2}(\sqrt[n]{n}-1)^2\\\\ &amp;\ge 1+\frac{n(n-1)}{2}(\sqrt[n]{n}-1)^2 \end{align}$$</p> <p>where the last inequality is true since $n(\sqrt[n]{n}-1)\ge 0$.</p> Answer 50: <p>They are truncating the binomial expansion, discarding positive terms: if $$ r=r_0+r_1+r_2+\dots+r_n $$ and $r_i\ge0$ for $i=0,1,\dots,n$, then $r\ge r_0+r_2$.</p> <hr> <p>It can be done more easily. Set $(\sqrt{n})^{1/n}=1+a_n$, so $$ \sqrt{n}=(1+a_n)^{n}\ge1+na_n $$ by Bernoulli’s inequality. Hence $$ a_n\le\frac{\sqrt{n}-1}{n}=\frac{1}{\sqrt{n}}-\frac{1}{n} $$ so $$ (\sqrt{n})^{1/n}=1+a_n\le 1+\frac{1}{\sqrt{n}}-\frac{1}{n} $$ and the squeeze theorem allows to conclude.</p> Answer 51: <p>Set $l =\lim_{n\to\infty}n^\frac{1}{n}$</p> <p>$$\lim_{n\to\infty}(\sqrt n)^\frac{1}{n}=\sqrt{\lim_{n\to\infty}n^\frac{1}{n}}=\sqrt{l}$$</p> <p>For the limit of the sequence $n^\frac{1}{n}$ get the sub-sequence $(2n)^\frac{1}{2n}$</p> <p>So: </p> <p>$l^2=(\lim (2n)^\frac{1}{2n})^2=\lim (2n)^\frac{1}{n}=\lim 2^\frac{1}{n}. n^\frac{1}{n}=\lim 2^\frac{1}{n}.\lim n^\frac{1}{n}=l$ </p> <p>$l^2=l$ and as $n^\frac{1}{n}\gt1$ we have $l=1$</p> <p>And $\sqrt{\lim_{n\to\infty}n^\frac{1}{n}}=\sqrt{1}=1$</p> Answer 52: <p>That proof seems a little complicated; I'll do it a different way. 
Let's write $(\sqrt n)^{1/n} = 1+a_n.$ We know each $a_n&gt;0.$ For $n\ge 2,$ the binomial theorem shows</p> <p>$$\sqrt n= (1+a_n)^{n} = 1^n\cdot a_n^0 + n\cdot1^{n-1}\cdot a_n^1 + [n(n-1)/2]\cdot 1^{n-2}\cdot a_n^2 + \cdots &gt; [n(n-1)/2]a_n^2.$$</p> <p>Therefore </p> <p>$$\frac{\sqrt n}{n(n-1)/2} &gt; a_n^2.$$</p> <p>Since the expression on the left $\to 0,a_n^2 \to 0,$ which implies $a_n \to 0.$ It follows that $(\sqrt n)^{1/n} \to 1.$</p> Answer 53: <p><strong>A Different Approach</strong></p> <p>Because <a href="https://math.stackexchange.com/a/306245">$\left(1+\frac1n\right)^{n+1}$ is decreasing</a>, for $n\ge3$, $$ \begin{align} \frac{(n+1)^n}{n^{n+1}} &amp;=\frac1{n+1}\left(1+\frac1n\right)^{n+1}\\ &amp;\le\frac14\cdot\frac{256}{81}\\[6pt] &amp;\lt1 \end{align} $$ Thus, for $n\ge3$, $$ (n+1)^{\frac1{n+1}}\lt n^{\frac1n} $$ That is, $n^{\frac1n}$ is decreasing and bounded below by $1$. So $\lim\limits_{n\to\infty}n^{\frac1n}$ exists. $$ \begin{align} \lim_{n\to\infty}n^{\frac1n} &amp;=\lim_{n\to\infty}(2n)^{\frac1{2n}}\\ &amp;=\lim_{n\to\infty}2^{\frac1{2n}}\lim_{n\to\infty}n^{\frac1{2n}}\\ \end{align} $$ Therefore, $$ \begin{align} \lim_{n\to\infty}n^{\frac1{2n}} &amp;=\lim_{n\to\infty}2^{\frac1{2n}}\\[6pt] &amp;=1 \end{align} $$</p> <hr> <p><strong>Bernoulli's Inequality</strong></p> <p><a href="https://en.wikipedia.org/wiki/Bernoulli&#39;s_inequality" rel="nofollow noreferrer">Bernoulli's Inequality</a> says $$ \begin{align} \lim_{n\to\infty}\sqrt{n}^{1/n} &amp;=\lim_{n\to\infty}\left(1+\left(\sqrt{n}-1\right)\right)^{1/n}\\[3pt] &amp;\le\lim_{n\to\infty}\left(1+\tfrac1n\left(\sqrt{n}-1\right)\right)\\[6pt] &amp;=1 \end{align} $$ Since $\sqrt{n}^{1/n}\ge1$, the <a href="https://en.wikipedia.org/wiki/Squeeze_theorem" rel="nofollow noreferrer">Squeeze Theorem</a> says $$ \lim_{n\to\infty}\sqrt{n}^{1/n}=1 $$</p> Answer 54: <p>$$\sum_{k=0}^{n}{{n}\choose{k}}1^{n-k}(\sqrt n^\frac{1}{n}-1)^k= 1+{{n}\choose {2}}(\sqrt n^\frac{1}{n}-1)^2+\left[{{n}\choose 
{1}}(\sqrt n^\frac{1}{n}-1)^1+\sum_{k=3}^{n}{{n}\choose{k}}1^{n-k}(\sqrt n^\frac{1}{n}-1)^k\right]$$</p> <p>and see that:</p> <p>$${{n}\choose {1}}(\sqrt n^\frac{1}{n}-1)^1+\sum_{k=3}^{n}{{n}\choose{k}}1^{n-k}(\sqrt n^\frac{1}{n}-1)^k \ge 0$$</p> <p>because </p> <p>$$\sqrt{n}^\frac{1}{n}-1 \ge0$$</p> Answer 55: <p>I'd like to offer another method, using $x^n-1=(x-1)(x^{n-1}+x^{n-2}+...+x^{2}+x+1)$</p> <p>$$n-1=(n^{\frac{1}{2n}})^{2n}-1=\\(n^{\frac{1}{2n}}-1)(n^{\frac{1}{2n}(2n-1)}+n^{\frac{1}{2n}(2n-2)}+...+n^{\frac{1}{2n}2}+n^{\frac{1}{2n}}+1)\geq$$ using AM-GM $$\geq(n^{\frac{1}{2n}}-1)\left((2n-1)\sqrt[2n-1]{n^{\frac{1}{2n}(2n-1)}\cdot n^{\frac{1}{2n}(2n-2)}\cdot ...\cdot n^{\frac{1}{2n}2}\cdot n^{\frac{1}{2n}}}+1\right)=\\ (n^{\frac{1}{2n}}-1)\left((2n-1)\sqrt[2n-1]{n^{\frac{1}{2n}(2n-1+2n-2+...+1)}}+1\right)= \\(n^{\frac{1}{2n}}-1)\left((2n-1)\sqrt[2n-1]{n^{\frac{1}{2n}\frac{(2n-1)2n}{2}}}+1\right)=\\ (n^{\frac{1}{2n}}-1)\left((2n-1)\sqrt{n}+1\right)$$ <a href="https://www.jstor.org/stable/3615890?seq=1#page_scan_tab_contents" rel="nofollow noreferrer">Also $\ln{x}\leq x-1,x&gt;0$</a> which means $$\ln{n^{\frac{1}{2n}}}=\frac{\ln{n}}{2n}\leq n^{\frac{1}{2n}}-1$$</p> <p>Altogether $$\frac{\ln{n}}{2n}\leq n^{\frac{1}{2n}}-1\leq \frac{n-1}{(2n-1)\sqrt{n}+1}&lt;\frac{1}{2\sqrt{n}}$$</p> Answer 56: <p>Let <span class="math-container">$\lim_{n\to\infty} n^{1/2n} = L$</span></p> <p>Taking logarithms, we get ;</p> <p><span class="math-container">$log(L) =$$\lim_{n\to\infty}$$\frac{1}{2n}$$log(n)$</span></p> <p>Applying <span class="math-container">$LH$</span> Rule :</p> <p><span class="math-container">$log(L) =$$\lim_{n\to\infty}$$\frac{1}{2n}=0$</span></p> <p>Hence, <span class="math-container">$L=e^0=1$</span></p> Answer 57: <p>Since $n^{1/n}\ge 1$, then all of the terms in the binomial expansion</p> <p>$$\left(1+(\sqrt[n]{n}-1)\right)^n=\sum_{k=0}^n\binom{n}{k}(\sqrt[n]{n}-1)^k$$</p> <p>are non-negative. 
Therefore, we have for any $m\le n$</p> <p>$$\sum_{k=0}^n\binom{n}{k}(\sqrt[n]{n}-1)^k\ge \sum_{k=0}^m\binom{n}{k}(\sqrt[n]{n}-1)^k$$</p> <p>Taking $m=2$ and $n\ge 2$ yields</p> <p>$$\begin{align} \sum_{k=0}^n\binom{n}{k}(\sqrt[n]{n}-1)^k&amp;\ge \sum_{k=0}^2\binom{n}{k}(\sqrt[n]{n}-1)^k\\\\ &amp;=1+n(\sqrt[n]{n}-1)+\frac{n(n-1)}{2}(\sqrt[n]{n}-1)^2\\\\ &amp;\ge 1+\frac{n(n-1)}{2}(\sqrt[n]{n}-1)^2 \end{align}$$</p> <p>where the last inequality is true since $n(\sqrt[n]{n}-1)\ge 0$.</p> Answer 58: <p>They are truncating the binomial expansion, discarding positive terms: if $$ r=r_0+r_1+r_2+\dots+r_n $$ and $r_i\ge0$ for $i=0,1,\dots,n$, then $r\ge r_0+r_2$.</p> <hr> <p>It can be done more easily. Set $(\sqrt{n})^{1/n}=1+a_n$, so $$ \sqrt{n}=(1+a_n)^{n}\ge1+na_n $$ by Bernoulli’s inequality. Hence $$ a_n\le\frac{\sqrt{n}-1}{n}=\frac{1}{\sqrt{n}}-\frac{1}{n} $$ so $$ (\sqrt{n})^{1/n}=1+a_n\le 1+\frac{1}{\sqrt{n}}-\frac{1}{n} $$ and the squeeze theorem allows to conclude.</p> Answer 59: <p>Set $l =\lim_{n\to\infty}n^\frac{1}{n}$</p> <p>$$\lim_{n\to\infty}(\sqrt n)^\frac{1}{n}=\sqrt{\lim_{n\to\infty}n^\frac{1}{n}}=\sqrt{l}$$</p> <p>For the limit of the sequence $n^\frac{1}{n}$ get the sub-sequence $(2n)^\frac{1}{2n}$</p> <p>So: </p> <p>$l^2=(\lim (2n)^\frac{1}{2n})^2=\lim (2n)^\frac{1}{n}=\lim 2^\frac{1}{n}. n^\frac{1}{n}=\lim 2^\frac{1}{n}.\lim n^\frac{1}{n}=l$ </p> <p>$l^2=l$ and as $n^\frac{1}{n}\gt1$ we have $l=1$</p> <p>And $\sqrt{\lim_{n\to\infty}n^\frac{1}{n}}=\sqrt{1}=1$</p> Answer 60: <p>That proof seems a little complicated; I'll do it a different way. 
Let's write $(\sqrt n)^{1/n} = 1+a_n.$ We know each $a_n&gt;0.$ For $n\ge 2,$ the binomial theorem shows</p> <p>$$\sqrt n= (1+a_n)^{n} = 1^n\cdot a_n^0 + n\cdot1^{n-1}\cdot a_n^1 + [n(n-1)/2]\cdot 1^{n-2}\cdot a_n^2 + \cdots &gt; [n(n-1)/2]a_n^2.$$</p> <p>Therefore </p> <p>$$\frac{\sqrt n}{n(n-1)/2} &gt; a_n^2.$$</p> <p>Since the expression on the left $\to 0,a_n^2 \to 0,$ which implies $a_n \to 0.$ It follows that $(\sqrt n)^{1/n} \to 1.$</p> Answer 61: <p><strong>A Different Approach</strong></p> <p>Because <a href="https://math.stackexchange.com/a/306245">$\left(1+\frac1n\right)^{n+1}$ is decreasing</a>, for $n\ge3$, $$ \begin{align} \frac{(n+1)^n}{n^{n+1}} &amp;=\frac1{n+1}\left(1+\frac1n\right)^{n+1}\\ &amp;\le\frac14\cdot\frac{256}{81}\\[6pt] &amp;\lt1 \end{align} $$ Thus, for $n\ge3$, $$ (n+1)^{\frac1{n+1}}\lt n^{\frac1n} $$ That is, $n^{\frac1n}$ is decreasing and bounded below by $1$. So $\lim\limits_{n\to\infty}n^{\frac1n}$ exists. $$ \begin{align} \lim_{n\to\infty}n^{\frac1n} &amp;=\lim_{n\to\infty}(2n)^{\frac1{2n}}\\ &amp;=\lim_{n\to\infty}2^{\frac1{2n}}\lim_{n\to\infty}n^{\frac1{2n}}\\ \end{align} $$ Therefore, $$ \begin{align} \lim_{n\to\infty}n^{\frac1{2n}} &amp;=\lim_{n\to\infty}2^{\frac1{2n}}\\[6pt] &amp;=1 \end{align} $$</p> <hr> <p><strong>Bernoulli's Inequality</strong></p> <p><a href="https://en.wikipedia.org/wiki/Bernoulli&#39;s_inequality" rel="nofollow noreferrer">Bernoulli's Inequality</a> says $$ \begin{align} \lim_{n\to\infty}\sqrt{n}^{1/n} &amp;=\lim_{n\to\infty}\left(1+\left(\sqrt{n}-1\right)\right)^{1/n}\\[3pt] &amp;\le\lim_{n\to\infty}\left(1+\tfrac1n\left(\sqrt{n}-1\right)\right)\\[6pt] &amp;=1 \end{align} $$ Since $\sqrt{n}^{1/n}\ge1$, the <a href="https://en.wikipedia.org/wiki/Squeeze_theorem" rel="nofollow noreferrer">Squeeze Theorem</a> says $$ \lim_{n\to\infty}\sqrt{n}^{1/n}=1 $$</p> Answer 62: <p>$$\sum_{k=0}^{n}{{n}\choose{k}}1^{n-k}(\sqrt n^\frac{1}{n}-1)^k= 1+{{n}\choose {2}}(\sqrt n^\frac{1}{n}-1)^2+\left[{{n}\choose 
{1}}(\sqrt n^\frac{1}{n}-1)^1+\sum_{k=3}^{n}{{n}\choose{k}}1^{n-k}(\sqrt n^\frac{1}{n}-1)^k\right]$$</p> <p>and see that:</p> <p>$${{n}\choose {1}}(\sqrt n^\frac{1}{n}-1)^1+\sum_{k=3}^{n}{{n}\choose{k}}1^{n-k}(\sqrt n^\frac{1}{n}-1)^k \ge 0$$</p> <p>because </p> <p>$$\sqrt{n}^\frac{1}{n}-1 \ge0$$</p> Answer 63: <p>I'd like to offer another method, using $x^n-1=(x-1)(x^{n-1}+x^{n-2}+...+x^{2}+x+1)$</p> <p>$$n-1=(n^{\frac{1}{2n}})^{2n}-1=\\(n^{\frac{1}{2n}}-1)(n^{\frac{1}{2n}(2n-1)}+n^{\frac{1}{2n}(2n-2)}+...+n^{\frac{1}{2n}2}+n^{\frac{1}{2n}}+1)\geq$$ using AM-GM $$\geq(n^{\frac{1}{2n}}-1)\left((2n-1)\sqrt[2n-1]{n^{\frac{1}{2n}(2n-1)}\cdot n^{\frac{1}{2n}(2n-2)}\cdot ...\cdot n^{\frac{1}{2n}2}\cdot n^{\frac{1}{2n}}}+1\right)=\\ (n^{\frac{1}{2n}}-1)\left((2n-1)\sqrt[2n-1]{n^{\frac{1}{2n}(2n-1+2n-2+...+1)}}+1\right)= \\(n^{\frac{1}{2n}}-1)\left((2n-1)\sqrt[2n-1]{n^{\frac{1}{2n}\frac{(2n-1)2n}{2}}}+1\right)=\\ (n^{\frac{1}{2n}}-1)\left((2n-1)\sqrt{n}+1\right)$$ <a href="https://www.jstor.org/stable/3615890?seq=1#page_scan_tab_contents" rel="nofollow noreferrer">Also $\ln{x}\leq x-1,x&gt;0$</a> which means $$\ln{n^{\frac{1}{2n}}}=\frac{\ln{n}}{2n}\leq n^{\frac{1}{2n}}-1$$</p> <p>Altogether $$\frac{\ln{n}}{2n}\leq n^{\frac{1}{2n}}-1\leq \frac{n-1}{(2n-1)\sqrt{n}+1}&lt;\frac{1}{2\sqrt{n}}$$</p> Answer 64: <p>Let <span class="math-container">$\lim_{n\to\infty} n^{1/2n} = L$</span></p> <p>Taking logarithms, we get ;</p> <p><span class="math-container">$log(L) =$$\lim_{n\to\infty}$$\frac{1}{2n}$$log(n)$</span></p> <p>Applying <span class="math-container">$LH$</span> Rule :</p> <p><span class="math-container">$log(L) =$$\lim_{n\to\infty}$$\frac{1}{2n}=0$</span></p> <p>Hence, <span class="math-container">$L=e^0=1$</span></p> Answer 65: <p>Since $n^{1/n}\ge 1$, then all of the terms in the binomial expansion</p> <p>$$\left(1+(\sqrt[n]{n}-1)\right)^n=\sum_{k=0}^n\binom{n}{k}(\sqrt[n]{n}-1)^k$$</p> <p>are non-negative. 
Therefore, we have for any $m\le n$</p> <p>$$\sum_{k=0}^n\binom{n}{k}(\sqrt[n]{n}-1)^k\ge \sum_{k=0}^m\binom{n}{k}(\sqrt[n]{n}-1)^k$$</p> <p>Taking $m=2$ and $n\ge 2$ yields</p> <p>$$\begin{align} \sum_{k=0}^n\binom{n}{k}(\sqrt[n]{n}-1)^k&amp;\ge \sum_{k=0}^2\binom{n}{k}(\sqrt[n]{n}-1)^k\\\\ &amp;=1+n(\sqrt[n]{n}-1)+\frac{n(n-1)}{2}(\sqrt[n]{n}-1)^2\\\\ &amp;\ge 1+\frac{n(n-1)}{2}(\sqrt[n]{n}-1)^2 \end{align}$$</p> <p>where the last inequality is true since $n(\sqrt[n]{n}-1)\ge 0$.</p> Answer 66: <p>They are truncating the binomial expansion, discarding positive terms: if $$ r=r_0+r_1+r_2+\dots+r_n $$ and $r_i\ge0$ for $i=0,1,\dots,n$, then $r\ge r_0+r_2$.</p> <hr> <p>It can be done more easily. Set $(\sqrt{n})^{1/n}=1+a_n$, so $$ \sqrt{n}=(1+a_n)^{n}\ge1+na_n $$ by Bernoulli’s inequality. Hence $$ a_n\le\frac{\sqrt{n}-1}{n}=\frac{1}{\sqrt{n}}-\frac{1}{n} $$ so $$ (\sqrt{n})^{1/n}=1+a_n\le 1+\frac{1}{\sqrt{n}}-\frac{1}{n} $$ and the squeeze theorem allows to conclude.</p> Answer 67: <p>Set $l =\lim_{n\to\infty}n^\frac{1}{n}$</p> <p>$$\lim_{n\to\infty}(\sqrt n)^\frac{1}{n}=\sqrt{\lim_{n\to\infty}n^\frac{1}{n}}=\sqrt{l}$$</p> <p>For the limit of the sequence $n^\frac{1}{n}$ get the sub-sequence $(2n)^\frac{1}{2n}$</p> <p>So: </p> <p>$l^2=(\lim (2n)^\frac{1}{2n})^2=\lim (2n)^\frac{1}{n}=\lim 2^\frac{1}{n}. n^\frac{1}{n}=\lim 2^\frac{1}{n}.\lim n^\frac{1}{n}=l$ </p> <p>$l^2=l$ and as $n^\frac{1}{n}\gt1$ we have $l=1$</p> <p>And $\sqrt{\lim_{n\to\infty}n^\frac{1}{n}}=\sqrt{1}=1$</p> Answer 68: <p>That proof seems a little complicated; I'll do it a different way. 
Let's write $(\sqrt n)^{1/n} = 1+a_n.$ We know each $a_n&gt;0.$ For $n\ge 2,$ the binomial theorem shows</p> <p>$$\sqrt n= (1+a_n)^{n} = 1^n\cdot a_n^0 + n\cdot1^{n-1}\cdot a_n^1 + [n(n-1)/2]\cdot 1^{n-2}\cdot a_n^2 + \cdots &gt; [n(n-1)/2]a_n^2.$$</p> <p>Therefore </p> <p>$$\frac{\sqrt n}{n(n-1)/2} &gt; a_n^2.$$</p> <p>Since the expression on the left $\to 0,a_n^2 \to 0,$ which implies $a_n \to 0.$ It follows that $(\sqrt n)^{1/n} \to 1.$</p> Answer 69: <p><strong>A Different Approach</strong></p> <p>Because <a href="https://math.stackexchange.com/a/306245">$\left(1+\frac1n\right)^{n+1}$ is decreasing</a>, for $n\ge3$, $$ \begin{align} \frac{(n+1)^n}{n^{n+1}} &amp;=\frac1{n+1}\left(1+\frac1n\right)^{n+1}\\ &amp;\le\frac14\cdot\frac{256}{81}\\[6pt] &amp;\lt1 \end{align} $$ Thus, for $n\ge3$, $$ (n+1)^{\frac1{n+1}}\lt n^{\frac1n} $$ That is, $n^{\frac1n}$ is decreasing and bounded below by $1$. So $\lim\limits_{n\to\infty}n^{\frac1n}$ exists. $$ \begin{align} \lim_{n\to\infty}n^{\frac1n} &amp;=\lim_{n\to\infty}(2n)^{\frac1{2n}}\\ &amp;=\lim_{n\to\infty}2^{\frac1{2n}}\lim_{n\to\infty}n^{\frac1{2n}}\\ \end{align} $$ Therefore, $$ \begin{align} \lim_{n\to\infty}n^{\frac1{2n}} &amp;=\lim_{n\to\infty}2^{\frac1{2n}}\\[6pt] &amp;=1 \end{align} $$</p> <hr> <p><strong>Bernoulli's Inequality</strong></p> <p><a href="https://en.wikipedia.org/wiki/Bernoulli&#39;s_inequality" rel="nofollow noreferrer">Bernoulli's Inequality</a> says $$ \begin{align} \lim_{n\to\infty}\sqrt{n}^{1/n} &amp;=\lim_{n\to\infty}\left(1+\left(\sqrt{n}-1\right)\right)^{1/n}\\[3pt] &amp;\le\lim_{n\to\infty}\left(1+\tfrac1n\left(\sqrt{n}-1\right)\right)\\[6pt] &amp;=1 \end{align} $$ Since $\sqrt{n}^{1/n}\ge1$, the <a href="https://en.wikipedia.org/wiki/Squeeze_theorem" rel="nofollow noreferrer">Squeeze Theorem</a> says $$ \lim_{n\to\infty}\sqrt{n}^{1/n}=1 $$</p> Answer 70: <p>$$\sum_{k=0}^{n}{{n}\choose{k}}1^{n-k}(\sqrt n^\frac{1}{n}-1)^k= 1+{{n}\choose {2}}(\sqrt n^\frac{1}{n}-1)^2+\left[{{n}\choose 
{1}}(\sqrt n^\frac{1}{n}-1)^1+\sum_{k=3}^{n}{{n}\choose{k}}1^{n-k}(\sqrt n^\frac{1}{n}-1)^k\right]$$</p> <p>and see that:</p> <p>$${{n}\choose {1}}(\sqrt n^\frac{1}{n}-1)^1+\sum_{k=3}^{n}{{n}\choose{k}}1^{n-k}(\sqrt n^\frac{1}{n}-1)^k \ge 0$$</p> <p>because </p> <p>$$\sqrt{n}^\frac{1}{n}-1 \ge0$$</p> Answer 71: <p>I'd like to offer another method, using $x^n-1=(x-1)(x^{n-1}+x^{n-2}+...+x^{2}+x+1)$</p> <p>$$n-1=(n^{\frac{1}{2n}})^{2n}-1=\\(n^{\frac{1}{2n}}-1)(n^{\frac{1}{2n}(2n-1)}+n^{\frac{1}{2n}(2n-2)}+...+n^{\frac{1}{2n}2}+n^{\frac{1}{2n}}+1)\geq$$ using AM-GM $$\geq(n^{\frac{1}{2n}}-1)\left((2n-1)\sqrt[2n-1]{n^{\frac{1}{2n}(2n-1)}\cdot n^{\frac{1}{2n}(2n-2)}\cdot ...\cdot n^{\frac{1}{2n}2}\cdot n^{\frac{1}{2n}}}+1\right)=\\ (n^{\frac{1}{2n}}-1)\left((2n-1)\sqrt[2n-1]{n^{\frac{1}{2n}(2n-1+2n-2+...+1)}}+1\right)= \\(n^{\frac{1}{2n}}-1)\left((2n-1)\sqrt[2n-1]{n^{\frac{1}{2n}\frac{(2n-1)2n}{2}}}+1\right)=\\ (n^{\frac{1}{2n}}-1)\left((2n-1)\sqrt{n}+1\right)$$ <a href="https://www.jstor.org/stable/3615890?seq=1#page_scan_tab_contents" rel="nofollow noreferrer">Also $\ln{x}\leq x-1,x&gt;0$</a> which means $$\ln{n^{\frac{1}{2n}}}=\frac{\ln{n}}{2n}\leq n^{\frac{1}{2n}}-1$$</p> <p>Altogether $$\frac{\ln{n}}{2n}\leq n^{\frac{1}{2n}}-1\leq \frac{n-1}{(2n-1)\sqrt{n}+1}&lt;\frac{1}{2\sqrt{n}}$$</p> Answer 72: <p>Let <span class="math-container">$\lim_{n\to\infty} n^{1/2n} = L$</span></p> <p>Taking logarithms, we get ;</p> <p><span class="math-container">$log(L) =$$\lim_{n\to\infty}$$\frac{1}{2n}$$log(n)$</span></p> <p>Applying <span class="math-container">$LH$</span> Rule :</p> <p><span class="math-container">$log(L) =$$\lim_{n\to\infty}$$\frac{1}{2n}=0$</span></p> <p>Hence, <span class="math-container">$L=e^0=1$</span></p> Answer 73: <p>Since $n^{1/n}\ge 1$, then all of the terms in the binomial expansion</p> <p>$$\left(1+(\sqrt[n]{n}-1)\right)^n=\sum_{k=0}^n\binom{n}{k}(\sqrt[n]{n}-1)^k$$</p> <p>are non-negative. 
Therefore, we have for any $m\le n$</p> <p>$$\sum_{k=0}^n\binom{n}{k}(\sqrt[n]{n}-1)^k\ge \sum_{k=0}^m\binom{n}{k}(\sqrt[n]{n}-1)^k$$</p> <p>Taking $m=2$ and $n\ge 2$ yields</p> <p>$$\begin{align} \sum_{k=0}^n\binom{n}{k}(\sqrt[n]{n}-1)^k&amp;\ge \sum_{k=0}^2\binom{n}{k}(\sqrt[n]{n}-1)^k\\\\ &amp;=1+n(\sqrt[n]{n}-1)+\frac{n(n-1)}{2}(\sqrt[n]{n}-1)^2\\\\ &amp;\ge 1+\frac{n(n-1)}{2}(\sqrt[n]{n}-1)^2 \end{align}$$</p> <p>where the last inequality is true since $n(\sqrt[n]{n}-1)\ge 0$.</p> Answer 74: <p>They are truncating the binomial expansion, discarding positive terms: if $$ r=r_0+r_1+r_2+\dots+r_n $$ and $r_i\ge0$ for $i=0,1,\dots,n$, then $r\ge r_0+r_2$.</p> <hr> <p>It can be done more easily. Set $(\sqrt{n})^{1/n}=1+a_n$, so $$ \sqrt{n}=(1+a_n)^{n}\ge1+na_n $$ by Bernoulli’s inequality. Hence $$ a_n\le\frac{\sqrt{n}-1}{n}=\frac{1}{\sqrt{n}}-\frac{1}{n} $$ so $$ (\sqrt{n})^{1/n}=1+a_n\le 1+\frac{1}{\sqrt{n}}-\frac{1}{n} $$ and the squeeze theorem allows to conclude.</p> Answer 75: <p>Set $l =\lim_{n\to\infty}n^\frac{1}{n}$</p> <p>$$\lim_{n\to\infty}(\sqrt n)^\frac{1}{n}=\sqrt{\lim_{n\to\infty}n^\frac{1}{n}}=\sqrt{l}$$</p> <p>For the limit of the sequence $n^\frac{1}{n}$ get the sub-sequence $(2n)^\frac{1}{2n}$</p> <p>So: </p> <p>$l^2=(\lim (2n)^\frac{1}{2n})^2=\lim (2n)^\frac{1}{n}=\lim 2^\frac{1}{n}. n^\frac{1}{n}=\lim 2^\frac{1}{n}.\lim n^\frac{1}{n}=l$ </p> <p>$l^2=l$ and as $n^\frac{1}{n}\gt1$ we have $l=1$</p> <p>And $\sqrt{\lim_{n\to\infty}n^\frac{1}{n}}=\sqrt{1}=1$</p> Answer 76: <p>That proof seems a little complicated; I'll do it a different way. 
Let's write $(\sqrt n)^{1/n} = 1+a_n.$ We know each $a_n&gt;0.$ For $n\ge 2,$ the binomial theorem shows</p> <p>$$\sqrt n= (1+a_n)^{n} = 1^n\cdot a_n^0 + n\cdot1^{n-1}\cdot a_n^1 + [n(n-1)/2]\cdot 1^{n-2}\cdot a_n^2 + \cdots &gt; [n(n-1)/2]a_n^2.$$</p> <p>Therefore </p> <p>$$\frac{\sqrt n}{n(n-1)/2} &gt; a_n^2.$$</p> <p>Since the expression on the left $\to 0,a_n^2 \to 0,$ which implies $a_n \to 0.$ It follows that $(\sqrt n)^{1/n} \to 1.$</p> Answer 77: <p><strong>A Different Approach</strong></p> <p>Because <a href="https://math.stackexchange.com/a/306245">$\left(1+\frac1n\right)^{n+1}$ is decreasing</a>, for $n\ge3$, $$ \begin{align} \frac{(n+1)^n}{n^{n+1}} &amp;=\frac1{n+1}\left(1+\frac1n\right)^{n+1}\\ &amp;\le\frac14\cdot\frac{256}{81}\\[6pt] &amp;\lt1 \end{align} $$ Thus, for $n\ge3$, $$ (n+1)^{\frac1{n+1}}\lt n^{\frac1n} $$ That is, $n^{\frac1n}$ is decreasing and bounded below by $1$. So $\lim\limits_{n\to\infty}n^{\frac1n}$ exists. $$ \begin{align} \lim_{n\to\infty}n^{\frac1n} &amp;=\lim_{n\to\infty}(2n)^{\frac1{2n}}\\ &amp;=\lim_{n\to\infty}2^{\frac1{2n}}\lim_{n\to\infty}n^{\frac1{2n}}\\ \end{align} $$ Therefore, $$ \begin{align} \lim_{n\to\infty}n^{\frac1{2n}} &amp;=\lim_{n\to\infty}2^{\frac1{2n}}\\[6pt] &amp;=1 \end{align} $$</p> <hr> <p><strong>Bernoulli's Inequality</strong></p> <p><a href="https://en.wikipedia.org/wiki/Bernoulli&#39;s_inequality" rel="nofollow noreferrer">Bernoulli's Inequality</a> says $$ \begin{align} \lim_{n\to\infty}\sqrt{n}^{1/n} &amp;=\lim_{n\to\infty}\left(1+\left(\sqrt{n}-1\right)\right)^{1/n}\\[3pt] &amp;\le\lim_{n\to\infty}\left(1+\tfrac1n\left(\sqrt{n}-1\right)\right)\\[6pt] &amp;=1 \end{align} $$ Since $\sqrt{n}^{1/n}\ge1$, the <a href="https://en.wikipedia.org/wiki/Squeeze_theorem" rel="nofollow noreferrer">Squeeze Theorem</a> says $$ \lim_{n\to\infty}\sqrt{n}^{1/n}=1 $$</p> Answer 78: <p>$$\sum_{k=0}^{n}{{n}\choose{k}}1^{n-k}(\sqrt n^\frac{1}{n}-1)^k= 1+{{n}\choose {2}}(\sqrt n^\frac{1}{n}-1)^2+\left[{{n}\choose 
{1}}(\sqrt n^\frac{1}{n}-1)^1+\sum_{k=3}^{n}{{n}\choose{k}}1^{n-k}(\sqrt n^\frac{1}{n}-1)^k\right]$$</p> <p>and see that:</p> <p>$${{n}\choose {1}}(\sqrt n^\frac{1}{n}-1)^1+\sum_{k=3}^{n}{{n}\choose{k}}1^{n-k}(\sqrt n^\frac{1}{n}-1)^k \ge 0$$</p> <p>because </p> <p>$$\sqrt{n}^\frac{1}{n}-1 \ge0$$</p> Answer 79: <p>I'd like to offer another method, using $x^n-1=(x-1)(x^{n-1}+x^{n-2}+...+x^{2}+x+1)$</p> <p>$$n-1=(n^{\frac{1}{2n}})^{2n}-1=\\(n^{\frac{1}{2n}}-1)(n^{\frac{1}{2n}(2n-1)}+n^{\frac{1}{2n}(2n-2)}+...+n^{\frac{1}{2n}2}+n^{\frac{1}{2n}}+1)\geq$$ using AM-GM $$\geq(n^{\frac{1}{2n}}-1)\left((2n-1)\sqrt[2n-1]{n^{\frac{1}{2n}(2n-1)}\cdot n^{\frac{1}{2n}(2n-2)}\cdot ...\cdot n^{\frac{1}{2n}2}\cdot n^{\frac{1}{2n}}}+1\right)=\\ (n^{\frac{1}{2n}}-1)\left((2n-1)\sqrt[2n-1]{n^{\frac{1}{2n}(2n-1+2n-2+...+1)}}+1\right)= \\(n^{\frac{1}{2n}}-1)\left((2n-1)\sqrt[2n-1]{n^{\frac{1}{2n}\frac{(2n-1)2n}{2}}}+1\right)=\\ (n^{\frac{1}{2n}}-1)\left((2n-1)\sqrt{n}+1\right)$$ <a href="https://www.jstor.org/stable/3615890?seq=1#page_scan_tab_contents" rel="nofollow noreferrer">Also $\ln{x}\leq x-1,x&gt;0$</a> which means $$\ln{n^{\frac{1}{2n}}}=\frac{\ln{n}}{2n}\leq n^{\frac{1}{2n}}-1$$</p> <p>Altogether $$\frac{\ln{n}}{2n}\leq n^{\frac{1}{2n}}-1\leq \frac{n-1}{(2n-1)\sqrt{n}+1}&lt;\frac{1}{2\sqrt{n}}$$</p> Answer 80: <p>Let <span class="math-container">$\lim_{n\to\infty} n^{1/2n} = L$</span></p> <p>Taking logarithms, we get ;</p> <p><span class="math-container">$log(L) =$$\lim_{n\to\infty}$$\frac{1}{2n}$$log(n)$</span></p> <p>Applying <span class="math-container">$LH$</span> Rule :</p> <p><span class="math-container">$log(L) =$$\lim_{n\to\infty}$$\frac{1}{2n}=0$</span></p> <p>Hence, <span class="math-container">$L=e^0=1$</span></p> Answer 81: <p>Since $n^{1/n}\ge 1$, then all of the terms in the binomial expansion</p> <p>$$\left(1+(\sqrt[n]{n}-1)\right)^n=\sum_{k=0}^n\binom{n}{k}(\sqrt[n]{n}-1)^k$$</p> <p>are non-negative. 
Therefore, we have for any $m\le n$</p> <p>$$\sum_{k=0}^n\binom{n}{k}(\sqrt[n]{n}-1)^k\ge \sum_{k=0}^m\binom{n}{k}(\sqrt[n]{n}-1)^k$$</p> <p>Taking $m=2$ and $n\ge 2$ yields</p> <p>$$\begin{align} \sum_{k=0}^n\binom{n}{k}(\sqrt[n]{n}-1)^k&amp;\ge \sum_{k=0}^2\binom{n}{k}(\sqrt[n]{n}-1)^k\\\\ &amp;=1+n(\sqrt[n]{n}-1)+\frac{n(n-1)}{2}(\sqrt[n]{n}-1)^2\\\\ &amp;\ge 1+\frac{n(n-1)}{2}(\sqrt[n]{n}-1)^2 \end{align}$$</p> <p>where the last inequality is true since $n(\sqrt[n]{n}-1)\ge 0$.</p> Answer 82: <p>They are truncating the binomial expansion, discarding positive terms: if $$ r=r_0+r_1+r_2+\dots+r_n $$ and $r_i\ge0$ for $i=0,1,\dots,n$, then $r\ge r_0+r_2$.</p> <hr> <p>It can be done more easily. Set $(\sqrt{n})^{1/n}=1+a_n$, so $$ \sqrt{n}=(1+a_n)^{n}\ge1+na_n $$ by Bernoulli’s inequality. Hence $$ a_n\le\frac{\sqrt{n}-1}{n}=\frac{1}{\sqrt{n}}-\frac{1}{n} $$ so $$ (\sqrt{n})^{1/n}=1+a_n\le 1+\frac{1}{\sqrt{n}}-\frac{1}{n} $$ and the squeeze theorem allows to conclude.</p> Answer 83: <p>Set $l =\lim_{n\to\infty}n^\frac{1}{n}$</p> <p>$$\lim_{n\to\infty}(\sqrt n)^\frac{1}{n}=\sqrt{\lim_{n\to\infty}n^\frac{1}{n}}=\sqrt{l}$$</p> <p>For the limit of the sequence $n^\frac{1}{n}$ get the sub-sequence $(2n)^\frac{1}{2n}$</p> <p>So: </p> <p>$l^2=(\lim (2n)^\frac{1}{2n})^2=\lim (2n)^\frac{1}{n}=\lim 2^\frac{1}{n}. n^\frac{1}{n}=\lim 2^\frac{1}{n}.\lim n^\frac{1}{n}=l$ </p> <p>$l^2=l$ and as $n^\frac{1}{n}\gt1$ we have $l=1$</p> <p>And $\sqrt{\lim_{n\to\infty}n^\frac{1}{n}}=\sqrt{1}=1$</p> Answer 84: <p>That proof seems a little complicated; I'll do it a different way. 
Let's write $(\sqrt n)^{1/n} = 1+a_n.$ We know each $a_n&gt;0.$ For $n\ge 2,$ the binomial theorem shows</p> <p>$$\sqrt n= (1+a_n)^{n} = 1^n\cdot a_n^0 + n\cdot1^{n-1}\cdot a_n^1 + [n(n-1)/2]\cdot 1^{n-2}\cdot a_n^2 + \cdots &gt; [n(n-1)/2]a_n^2.$$</p> <p>Therefore </p> <p>$$\frac{\sqrt n}{n(n-1)/2} &gt; a_n^2.$$</p> <p>Since the expression on the left $\to 0,a_n^2 \to 0,$ which implies $a_n \to 0.$ It follows that $(\sqrt n)^{1/n} \to 1.$</p> Answer 85: <p><strong>A Different Approach</strong></p> <p>Because <a href="https://math.stackexchange.com/a/306245">$\left(1+\frac1n\right)^{n+1}$ is decreasing</a>, for $n\ge3$, $$ \begin{align} \frac{(n+1)^n}{n^{n+1}} &amp;=\frac1{n+1}\left(1+\frac1n\right)^{n+1}\\ &amp;\le\frac14\cdot\frac{256}{81}\\[6pt] &amp;\lt1 \end{align} $$ Thus, for $n\ge3$, $$ (n+1)^{\frac1{n+1}}\lt n^{\frac1n} $$ That is, $n^{\frac1n}$ is decreasing and bounded below by $1$. So $\lim\limits_{n\to\infty}n^{\frac1n}$ exists. $$ \begin{align} \lim_{n\to\infty}n^{\frac1n} &amp;=\lim_{n\to\infty}(2n)^{\frac1{2n}}\\ &amp;=\lim_{n\to\infty}2^{\frac1{2n}}\lim_{n\to\infty}n^{\frac1{2n}}\\ \end{align} $$ Therefore, $$ \begin{align} \lim_{n\to\infty}n^{\frac1{2n}} &amp;=\lim_{n\to\infty}2^{\frac1{2n}}\\[6pt] &amp;=1 \end{align} $$</p> <hr> <p><strong>Bernoulli's Inequality</strong></p> <p><a href="https://en.wikipedia.org/wiki/Bernoulli&#39;s_inequality" rel="nofollow noreferrer">Bernoulli's Inequality</a> says $$ \begin{align} \lim_{n\to\infty}\sqrt{n}^{1/n} &amp;=\lim_{n\to\infty}\left(1+\left(\sqrt{n}-1\right)\right)^{1/n}\\[3pt] &amp;\le\lim_{n\to\infty}\left(1+\tfrac1n\left(\sqrt{n}-1\right)\right)\\[6pt] &amp;=1 \end{align} $$ Since $\sqrt{n}^{1/n}\ge1$, the <a href="https://en.wikipedia.org/wiki/Squeeze_theorem" rel="nofollow noreferrer">Squeeze Theorem</a> says $$ \lim_{n\to\infty}\sqrt{n}^{1/n}=1 $$</p> Answer 86: <p>$$\sum_{k=0}^{n}{{n}\choose{k}}1^{n-k}(\sqrt n^\frac{1}{n}-1)^k= 1+{{n}\choose {2}}(\sqrt n^\frac{1}{n}-1)^2+\left[{{n}\choose 
{1}}(\sqrt n^\frac{1}{n}-1)^1+\sum_{k=3}^{n}{{n}\choose{k}}1^{n-k}(\sqrt n^\frac{1}{n}-1)^k\right]$$</p> <p>and see that:</p> <p>$${{n}\choose {1}}(\sqrt n^\frac{1}{n}-1)^1+\sum_{k=3}^{n}{{n}\choose{k}}1^{n-k}(\sqrt n^\frac{1}{n}-1)^k \ge 0$$</p> <p>because </p> <p>$$\sqrt{n}^\frac{1}{n}-1 \ge0$$</p> Answer 87: <p>I'd like to offer another method, using $x^n-1=(x-1)(x^{n-1}+x^{n-2}+...+x^{2}+x+1)$</p> <p>$$n-1=(n^{\frac{1}{2n}})^{2n}-1=\\(n^{\frac{1}{2n}}-1)(n^{\frac{1}{2n}(2n-1)}+n^{\frac{1}{2n}(2n-2)}+...+n^{\frac{1}{2n}2}+n^{\frac{1}{2n}}+1)\geq$$ using AM-GM $$\geq(n^{\frac{1}{2n}}-1)\left((2n-1)\sqrt[2n-1]{n^{\frac{1}{2n}(2n-1)}\cdot n^{\frac{1}{2n}(2n-2)}\cdot ...\cdot n^{\frac{1}{2n}2}\cdot n^{\frac{1}{2n}}}+1\right)=\\ (n^{\frac{1}{2n}}-1)\left((2n-1)\sqrt[2n-1]{n^{\frac{1}{2n}(2n-1+2n-2+...+1)}}+1\right)= \\(n^{\frac{1}{2n}}-1)\left((2n-1)\sqrt[2n-1]{n^{\frac{1}{2n}\frac{(2n-1)2n}{2}}}+1\right)=\\ (n^{\frac{1}{2n}}-1)\left((2n-1)\sqrt{n}+1\right)$$ <a href="https://www.jstor.org/stable/3615890?seq=1#page_scan_tab_contents" rel="nofollow noreferrer">Also $\ln{x}\leq x-1,x&gt;0$</a> which means $$\ln{n^{\frac{1}{2n}}}=\frac{\ln{n}}{2n}\leq n^{\frac{1}{2n}}-1$$</p> <p>Altogether $$\frac{\ln{n}}{2n}\leq n^{\frac{1}{2n}}-1\leq \frac{n-1}{(2n-1)\sqrt{n}+1}&lt;\frac{1}{2\sqrt{n}}$$</p> Answer 88: <p>Let <span class="math-container">$\lim_{n\to\infty} n^{1/2n} = L$</span></p> <p>Taking logarithms, we get ;</p> <p><span class="math-container">$log(L) =$$\lim_{n\to\infty}$$\frac{1}{2n}$$log(n)$</span></p> <p>Applying <span class="math-container">$LH$</span> Rule :</p> <p><span class="math-container">$log(L) =$$\lim_{n\to\infty}$$\frac{1}{2n}=0$</span></p> <p>Hence, <span class="math-container">$L=e^0=1$</span></p> Answer 89: <p>Since $n^{1/n}\ge 1$, then all of the terms in the binomial expansion</p> <p>$$\left(1+(\sqrt[n]{n}-1)\right)^n=\sum_{k=0}^n\binom{n}{k}(\sqrt[n]{n}-1)^k$$</p> <p>are non-negative. 
Therefore, we have for any $m\le n$</p> <p>$$\sum_{k=0}^n\binom{n}{k}(\sqrt[n]{n}-1)^k\ge \sum_{k=0}^m\binom{n}{k}(\sqrt[n]{n}-1)^k$$</p> <p>Taking $m=2$ and $n\ge 2$ yields</p> <p>$$\begin{align} \sum_{k=0}^n\binom{n}{k}(\sqrt[n]{n}-1)^k&amp;\ge \sum_{k=0}^2\binom{n}{k}(\sqrt[n]{n}-1)^k\\\\ &amp;=1+n(\sqrt[n]{n}-1)+\frac{n(n-1)}{2}(\sqrt[n]{n}-1)^2\\\\ &amp;\ge 1+\frac{n(n-1)}{2}(\sqrt[n]{n}-1)^2 \end{align}$$</p> <p>where the last inequality is true since $n(\sqrt[n]{n}-1)\ge 0$.</p> Answer 90: <p>They are truncating the binomial expansion, discarding positive terms: if $$ r=r_0+r_1+r_2+\dots+r_n $$ and $r_i\ge0$ for $i=0,1,\dots,n$, then $r\ge r_0+r_2$.</p> <hr> <p>It can be done more easily. Set $(\sqrt{n})^{1/n}=1+a_n$, so $$ \sqrt{n}=(1+a_n)^{n}\ge1+na_n $$ by Bernoulli’s inequality. Hence $$ a_n\le\frac{\sqrt{n}-1}{n}=\frac{1}{\sqrt{n}}-\frac{1}{n} $$ so $$ (\sqrt{n})^{1/n}=1+a_n\le 1+\frac{1}{\sqrt{n}}-\frac{1}{n} $$ and the squeeze theorem allows to conclude.</p> Answer 91: <p>Set $l =\lim_{n\to\infty}n^\frac{1}{n}$</p> <p>$$\lim_{n\to\infty}(\sqrt n)^\frac{1}{n}=\sqrt{\lim_{n\to\infty}n^\frac{1}{n}}=\sqrt{l}$$</p> <p>For the limit of the sequence $n^\frac{1}{n}$ get the sub-sequence $(2n)^\frac{1}{2n}$</p> <p>So: </p> <p>$l^2=(\lim (2n)^\frac{1}{2n})^2=\lim (2n)^\frac{1}{n}=\lim 2^\frac{1}{n}. n^\frac{1}{n}=\lim 2^\frac{1}{n}.\lim n^\frac{1}{n}=l$ </p> <p>$l^2=l$ and as $n^\frac{1}{n}\gt1$ we have $l=1$</p> <p>And $\sqrt{\lim_{n\to\infty}n^\frac{1}{n}}=\sqrt{1}=1$</p> Answer 92: <p>That proof seems a little complicated; I'll do it a different way. 
Let's write $(\sqrt n)^{1/n} = 1+a_n.$ We know each $a_n&gt;0.$ For $n\ge 2,$ the binomial theorem shows</p> <p>$$\sqrt n= (1+a_n)^{n} = 1^n\cdot a_n^0 + n\cdot1^{n-1}\cdot a_n^1 + [n(n-1)/2]\cdot 1^{n-2}\cdot a_n^2 + \cdots &gt; [n(n-1)/2]a_n^2.$$</p> <p>Therefore </p> <p>$$\frac{\sqrt n}{n(n-1)/2} &gt; a_n^2.$$</p> <p>Since the expression on the left $\to 0,a_n^2 \to 0,$ which implies $a_n \to 0.$ It follows that $(\sqrt n)^{1/n} \to 1.$</p> Answer 93: <p><strong>A Different Approach</strong></p> <p>Because <a href="https://math.stackexchange.com/a/306245">$\left(1+\frac1n\right)^{n+1}$ is decreasing</a>, for $n\ge3$, $$ \begin{align} \frac{(n+1)^n}{n^{n+1}} &amp;=\frac1{n+1}\left(1+\frac1n\right)^{n+1}\\ &amp;\le\frac14\cdot\frac{256}{81}\\[6pt] &amp;\lt1 \end{align} $$ Thus, for $n\ge3$, $$ (n+1)^{\frac1{n+1}}\lt n^{\frac1n} $$ That is, $n^{\frac1n}$ is decreasing and bounded below by $1$. So $\lim\limits_{n\to\infty}n^{\frac1n}$ exists. $$ \begin{align} \lim_{n\to\infty}n^{\frac1n} &amp;=\lim_{n\to\infty}(2n)^{\frac1{2n}}\\ &amp;=\lim_{n\to\infty}2^{\frac1{2n}}\lim_{n\to\infty}n^{\frac1{2n}}\\ \end{align} $$ Therefore, $$ \begin{align} \lim_{n\to\infty}n^{\frac1{2n}} &amp;=\lim_{n\to\infty}2^{\frac1{2n}}\\[6pt] &amp;=1 \end{align} $$</p> <hr> <p><strong>Bernoulli's Inequality</strong></p> <p><a href="https://en.wikipedia.org/wiki/Bernoulli&#39;s_inequality" rel="nofollow noreferrer">Bernoulli's Inequality</a> says $$ \begin{align} \lim_{n\to\infty}\sqrt{n}^{1/n} &amp;=\lim_{n\to\infty}\left(1+\left(\sqrt{n}-1\right)\right)^{1/n}\\[3pt] &amp;\le\lim_{n\to\infty}\left(1+\tfrac1n\left(\sqrt{n}-1\right)\right)\\[6pt] &amp;=1 \end{align} $$ Since $\sqrt{n}^{1/n}\ge1$, the <a href="https://en.wikipedia.org/wiki/Squeeze_theorem" rel="nofollow noreferrer">Squeeze Theorem</a> says $$ \lim_{n\to\infty}\sqrt{n}^{1/n}=1 $$</p> Answer 94: <p>$$\sum_{k=0}^{n}{{n}\choose{k}}1^{n-k}(\sqrt n^\frac{1}{n}-1)^k= 1+{{n}\choose {2}}(\sqrt n^\frac{1}{n}-1)^2+\left[{{n}\choose 
{1}}(\sqrt n^\frac{1}{n}-1)^1+\sum_{k=3}^{n}{{n}\choose{k}}1^{n-k}(\sqrt n^\frac{1}{n}-1)^k\right]$$</p> <p>and see that:</p> <p>$${{n}\choose {1}}(\sqrt n^\frac{1}{n}-1)^1+\sum_{k=3}^{n}{{n}\choose{k}}1^{n-k}(\sqrt n^\frac{1}{n}-1)^k \ge 0$$</p> <p>because </p> <p>$$\sqrt{n}^\frac{1}{n}-1 \ge0$$</p> Answer 95: <p>I'd like to offer another method, using $x^n-1=(x-1)(x^{n-1}+x^{n-2}+...+x^{2}+x+1)$</p> <p>$$n-1=(n^{\frac{1}{2n}})^{2n}-1=\\(n^{\frac{1}{2n}}-1)(n^{\frac{1}{2n}(2n-1)}+n^{\frac{1}{2n}(2n-2)}+...+n^{\frac{1}{2n}2}+n^{\frac{1}{2n}}+1)\geq$$ using AM-GM $$\geq(n^{\frac{1}{2n}}-1)\left((2n-1)\sqrt[2n-1]{n^{\frac{1}{2n}(2n-1)}\cdot n^{\frac{1}{2n}(2n-2)}\cdot ...\cdot n^{\frac{1}{2n}2}\cdot n^{\frac{1}{2n}}}+1\right)=\\ (n^{\frac{1}{2n}}-1)\left((2n-1)\sqrt[2n-1]{n^{\frac{1}{2n}(2n-1+2n-2+...+1)}}+1\right)= \\(n^{\frac{1}{2n}}-1)\left((2n-1)\sqrt[2n-1]{n^{\frac{1}{2n}\frac{(2n-1)2n}{2}}}+1\right)=\\ (n^{\frac{1}{2n}}-1)\left((2n-1)\sqrt{n}+1\right)$$ <a href="https://www.jstor.org/stable/3615890?seq=1#page_scan_tab_contents" rel="nofollow noreferrer">Also $\ln{x}\leq x-1,x&gt;0$</a> which means $$\ln{n^{\frac{1}{2n}}}=\frac{\ln{n}}{2n}\leq n^{\frac{1}{2n}}-1$$</p> <p>Altogether $$\frac{\ln{n}}{2n}\leq n^{\frac{1}{2n}}-1\leq \frac{n-1}{(2n-1)\sqrt{n}+1}&lt;\frac{1}{2\sqrt{n}}$$</p> Answer 96: <p>Let <span class="math-container">$\lim_{n\to\infty} n^{1/2n} = L$</span></p> <p>Taking logarithms, we get ;</p> <p><span class="math-container">$log(L) =$$\lim_{n\to\infty}$$\frac{1}{2n}$$log(n)$</span></p> <p>Applying <span class="math-container">$LH$</span> Rule :</p> <p><span class="math-container">$log(L) =$$\lim_{n\to\infty}$$\frac{1}{2n}=0$</span></p> <p>Hence, <span class="math-container">$L=e^0=1$</span></p> Answer 97: <p>Since $n^{1/n}\ge 1$, then all of the terms in the binomial expansion</p> <p>$$\left(1+(\sqrt[n]{n}-1)\right)^n=\sum_{k=0}^n\binom{n}{k}(\sqrt[n]{n}-1)^k$$</p> <p>are non-negative. 
Therefore, we have for any $m\le n$</p> <p>$$\sum_{k=0}^n\binom{n}{k}(\sqrt[n]{n}-1)^k\ge \sum_{k=0}^m\binom{n}{k}(\sqrt[n]{n}-1)^k$$</p> <p>Taking $m=2$ and $n\ge 2$ yields</p> <p>$$\begin{align} \sum_{k=0}^n\binom{n}{k}(\sqrt[n]{n}-1)^k&amp;\ge \sum_{k=0}^2\binom{n}{k}(\sqrt[n]{n}-1)^k\\\\ &amp;=1+n(\sqrt[n]{n}-1)+\frac{n(n-1)}{2}(\sqrt[n]{n}-1)^2\\\\ &amp;\ge 1+\frac{n(n-1)}{2}(\sqrt[n]{n}-1)^2 \end{align}$$</p> <p>where the last inequality is true since $n(\sqrt[n]{n}-1)\ge 0$.</p> Answer 98: <p>They are truncating the binomial expansion, discarding positive terms: if $$ r=r_0+r_1+r_2+\dots+r_n $$ and $r_i\ge0$ for $i=0,1,\dots,n$, then $r\ge r_0+r_2$.</p> <hr> <p>It can be done more easily. Set $(\sqrt{n})^{1/n}=1+a_n$, so $$ \sqrt{n}=(1+a_n)^{n}\ge1+na_n $$ by Bernoulli’s inequality. Hence $$ a_n\le\frac{\sqrt{n}-1}{n}=\frac{1}{\sqrt{n}}-\frac{1}{n} $$ so $$ (\sqrt{n})^{1/n}=1+a_n\le 1+\frac{1}{\sqrt{n}}-\frac{1}{n} $$ and the squeeze theorem allows to conclude.</p> Answer 99: <p>Set $l =\lim_{n\to\infty}n^\frac{1}{n}$</p> <p>$$\lim_{n\to\infty}(\sqrt n)^\frac{1}{n}=\sqrt{\lim_{n\to\infty}n^\frac{1}{n}}=\sqrt{l}$$</p> <p>For the limit of the sequence $n^\frac{1}{n}$ get the sub-sequence $(2n)^\frac{1}{2n}$</p> <p>So: </p> <p>$l^2=(\lim (2n)^\frac{1}{2n})^2=\lim (2n)^\frac{1}{n}=\lim 2^\frac{1}{n}. n^\frac{1}{n}=\lim 2^\frac{1}{n}.\lim n^\frac{1}{n}=l$ </p> <p>$l^2=l$ and as $n^\frac{1}{n}\gt1$ we have $l=1$</p> <p>And $\sqrt{\lim_{n\to\infty}n^\frac{1}{n}}=\sqrt{1}=1$</p> Answer 100: <p>That proof seems a little complicated; I'll do it a different way. 
Let's write $(\sqrt n)^{1/n} = 1+a_n.$ We know each $a_n&gt;0.$ For $n\ge 2,$ the binomial theorem shows</p> <p>$$\sqrt n= (1+a_n)^{n} = 1^n\cdot a_n^0 + n\cdot1^{n-1}\cdot a_n^1 + [n(n-1)/2]\cdot 1^{n-2}\cdot a_n^2 + \cdots &gt; [n(n-1)/2]a_n^2.$$</p> <p>Therefore </p> <p>$$\frac{\sqrt n}{n(n-1)/2} &gt; a_n^2.$$</p> <p>Since the expression on the left $\to 0,a_n^2 \to 0,$ which implies $a_n \to 0.$ It follows that $(\sqrt n)^{1/n} \to 1.$</p> Answer 101: <p><strong>A Different Approach</strong></p> <p>Because <a href="https://math.stackexchange.com/a/306245">$\left(1+\frac1n\right)^{n+1}$ is decreasing</a>, for $n\ge3$, $$ \begin{align} \frac{(n+1)^n}{n^{n+1}} &amp;=\frac1{n+1}\left(1+\frac1n\right)^{n+1}\\ &amp;\le\frac14\cdot\frac{256}{81}\\[6pt] &amp;\lt1 \end{align} $$ Thus, for $n\ge3$, $$ (n+1)^{\frac1{n+1}}\lt n^{\frac1n} $$ That is, $n^{\frac1n}$ is decreasing and bounded below by $1$. So $\lim\limits_{n\to\infty}n^{\frac1n}$ exists. $$ \begin{align} \lim_{n\to\infty}n^{\frac1n} &amp;=\lim_{n\to\infty}(2n)^{\frac1{2n}}\\ &amp;=\lim_{n\to\infty}2^{\frac1{2n}}\lim_{n\to\infty}n^{\frac1{2n}}\\ \end{align} $$ Therefore, $$ \begin{align} \lim_{n\to\infty}n^{\frac1{2n}} &amp;=\lim_{n\to\infty}2^{\frac1{2n}}\\[6pt] &amp;=1 \end{align} $$</p> <hr> <p><strong>Bernoulli's Inequality</strong></p> <p><a href="https://en.wikipedia.org/wiki/Bernoulli&#39;s_inequality" rel="nofollow noreferrer">Bernoulli's Inequality</a> says $$ \begin{align} \lim_{n\to\infty}\sqrt{n}^{1/n} &amp;=\lim_{n\to\infty}\left(1+\left(\sqrt{n}-1\right)\right)^{1/n}\\[3pt] &amp;\le\lim_{n\to\infty}\left(1+\tfrac1n\left(\sqrt{n}-1\right)\right)\\[6pt] &amp;=1 \end{align} $$ Since $\sqrt{n}^{1/n}\ge1$, the <a href="https://en.wikipedia.org/wiki/Squeeze_theorem" rel="nofollow noreferrer">Squeeze Theorem</a> says $$ \lim_{n\to\infty}\sqrt{n}^{1/n}=1 $$</p> Answer 102: <p>$$\sum_{k=0}^{n}{{n}\choose{k}}1^{n-k}(\sqrt n^\frac{1}{n}-1)^k= 1+{{n}\choose {2}}(\sqrt n^\frac{1}{n}-1)^2+\left[{{n}\choose 
{1}}(\sqrt n^\frac{1}{n}-1)^1+\sum_{k=3}^{n}{{n}\choose{k}}1^{n-k}(\sqrt n^\frac{1}{n}-1)^k\right]$$</p> <p>and see that:</p> <p>$${{n}\choose {1}}(\sqrt n^\frac{1}{n}-1)^1+\sum_{k=3}^{n}{{n}\choose{k}}1^{n-k}(\sqrt n^\frac{1}{n}-1)^k \ge 0$$</p> <p>because </p> <p>$$\sqrt{n}^\frac{1}{n}-1 \ge0$$</p> Answer 103: <p>I'd like to offer another method, using $x^n-1=(x-1)(x^{n-1}+x^{n-2}+...+x^{2}+x+1)$</p> <p>$$n-1=(n^{\frac{1}{2n}})^{2n}-1=\\(n^{\frac{1}{2n}}-1)(n^{\frac{1}{2n}(2n-1)}+n^{\frac{1}{2n}(2n-2)}+...+n^{\frac{1}{2n}2}+n^{\frac{1}{2n}}+1)\geq$$ using AM-GM $$\geq(n^{\frac{1}{2n}}-1)\left((2n-1)\sqrt[2n-1]{n^{\frac{1}{2n}(2n-1)}\cdot n^{\frac{1}{2n}(2n-2)}\cdot ...\cdot n^{\frac{1}{2n}2}\cdot n^{\frac{1}{2n}}}+1\right)=\\ (n^{\frac{1}{2n}}-1)\left((2n-1)\sqrt[2n-1]{n^{\frac{1}{2n}(2n-1+2n-2+...+1)}}+1\right)= \\(n^{\frac{1}{2n}}-1)\left((2n-1)\sqrt[2n-1]{n^{\frac{1}{2n}\frac{(2n-1)2n}{2}}}+1\right)=\\ (n^{\frac{1}{2n}}-1)\left((2n-1)\sqrt{n}+1\right)$$ <a href="https://www.jstor.org/stable/3615890?seq=1#page_scan_tab_contents" rel="nofollow noreferrer">Also $\ln{x}\leq x-1,x&gt;0$</a> which means $$\ln{n^{\frac{1}{2n}}}=\frac{\ln{n}}{2n}\leq n^{\frac{1}{2n}}-1$$</p> <p>Altogether $$\frac{\ln{n}}{2n}\leq n^{\frac{1}{2n}}-1\leq \frac{n-1}{(2n-1)\sqrt{n}+1}&lt;\frac{1}{2\sqrt{n}}$$</p> Answer 104: <p>Let <span class="math-container">$\lim_{n\to\infty} n^{1/2n} = L$</span></p> <p>Taking logarithms, we get ;</p> <p><span class="math-container">$log(L) =$$\lim_{n\to\infty}$$\frac{1}{2n}$$log(n)$</span></p> <p>Applying <span class="math-container">$LH$</span> Rule :</p> <p><span class="math-container">$log(L) =$$\lim_{n\to\infty}$$\frac{1}{2n}=0$</span></p> <p>Hence, <span class="math-container">$L=e^0=1$</span></p> Answer 105: <p>Since $n^{1/n}\ge 1$, then all of the terms in the binomial expansion</p> <p>$$\left(1+(\sqrt[n]{n}-1)\right)^n=\sum_{k=0}^n\binom{n}{k}(\sqrt[n]{n}-1)^k$$</p> <p>are non-negative. 
Therefore, we have for any $m\le n$</p> <p>$$\sum_{k=0}^n\binom{n}{k}(\sqrt[n]{n}-1)^k\ge \sum_{k=0}^m\binom{n}{k}(\sqrt[n]{n}-1)^k$$</p> <p>Taking $m=2$ and $n\ge 2$ yields</p> <p>$$\begin{align} \sum_{k=0}^n\binom{n}{k}(\sqrt[n]{n}-1)^k&amp;\ge \sum_{k=0}^2\binom{n}{k}(\sqrt[n]{n}-1)^k\\\\ &amp;=1+n(\sqrt[n]{n}-1)+\frac{n(n-1)}{2}(\sqrt[n]{n}-1)^2\\\\ &amp;\ge 1+\frac{n(n-1)}{2}(\sqrt[n]{n}-1)^2 \end{align}$$</p> <p>where the last inequality is true since $n(\sqrt[n]{n}-1)\ge 0$.</p> Answer 106: <p>They are truncating the binomial expansion, discarding positive terms: if $$ r=r_0+r_1+r_2+\dots+r_n $$ and $r_i\ge0$ for $i=0,1,\dots,n$, then $r\ge r_0+r_2$.</p> <hr> <p>It can be done more easily. Set $(\sqrt{n})^{1/n}=1+a_n$, so $$ \sqrt{n}=(1+a_n)^{n}\ge1+na_n $$ by Bernoulli’s inequality. Hence $$ a_n\le\frac{\sqrt{n}-1}{n}=\frac{1}{\sqrt{n}}-\frac{1}{n} $$ so $$ (\sqrt{n})^{1/n}=1+a_n\le 1+\frac{1}{\sqrt{n}}-\frac{1}{n} $$ and the squeeze theorem allows to conclude.</p> Answer 107: <p>Set $l =\lim_{n\to\infty}n^\frac{1}{n}$</p> <p>$$\lim_{n\to\infty}(\sqrt n)^\frac{1}{n}=\sqrt{\lim_{n\to\infty}n^\frac{1}{n}}=\sqrt{l}$$</p> <p>For the limit of the sequence $n^\frac{1}{n}$ get the sub-sequence $(2n)^\frac{1}{2n}$</p> <p>So: </p> <p>$l^2=(\lim (2n)^\frac{1}{2n})^2=\lim (2n)^\frac{1}{n}=\lim 2^\frac{1}{n}. n^\frac{1}{n}=\lim 2^\frac{1}{n}.\lim n^\frac{1}{n}=l$ </p> <p>$l^2=l$ and as $n^\frac{1}{n}\gt1$ we have $l=1$</p> <p>And $\sqrt{\lim_{n\to\infty}n^\frac{1}{n}}=\sqrt{1}=1$</p> Answer 108: <p>That proof seems a little complicated; I'll do it a different way. 
Let's write $(\sqrt n)^{1/n} = 1+a_n.$ We know each $a_n&gt;0.$ For $n\ge 2,$ the binomial theorem shows</p> <p>$$\sqrt n= (1+a_n)^{n} = 1^n\cdot a_n^0 + n\cdot1^{n-1}\cdot a_n^1 + [n(n-1)/2]\cdot 1^{n-2}\cdot a_n^2 + \cdots &gt; [n(n-1)/2]a_n^2.$$</p> <p>Therefore </p> <p>$$\frac{\sqrt n}{n(n-1)/2} &gt; a_n^2.$$</p> <p>Since the expression on the left $\to 0,a_n^2 \to 0,$ which implies $a_n \to 0.$ It follows that $(\sqrt n)^{1/n} \to 1.$</p> Answer 109: <p><strong>A Different Approach</strong></p> <p>Because <a href="https://math.stackexchange.com/a/306245">$\left(1+\frac1n\right)^{n+1}$ is decreasing</a>, for $n\ge3$, $$ \begin{align} \frac{(n+1)^n}{n^{n+1}} &amp;=\frac1{n+1}\left(1+\frac1n\right)^{n+1}\\ &amp;\le\frac14\cdot\frac{256}{81}\\[6pt] &amp;\lt1 \end{align} $$ Thus, for $n\ge3$, $$ (n+1)^{\frac1{n+1}}\lt n^{\frac1n} $$ That is, $n^{\frac1n}$ is decreasing and bounded below by $1$. So $\lim\limits_{n\to\infty}n^{\frac1n}$ exists. $$ \begin{align} \lim_{n\to\infty}n^{\frac1n} &amp;=\lim_{n\to\infty}(2n)^{\frac1{2n}}\\ &amp;=\lim_{n\to\infty}2^{\frac1{2n}}\lim_{n\to\infty}n^{\frac1{2n}}\\ \end{align} $$ Therefore, $$ \begin{align} \lim_{n\to\infty}n^{\frac1{2n}} &amp;=\lim_{n\to\infty}2^{\frac1{2n}}\\[6pt] &amp;=1 \end{align} $$</p> <hr> <p><strong>Bernoulli's Inequality</strong></p> <p><a href="https://en.wikipedia.org/wiki/Bernoulli&#39;s_inequality" rel="nofollow noreferrer">Bernoulli's Inequality</a> says $$ \begin{align} \lim_{n\to\infty}\sqrt{n}^{1/n} &amp;=\lim_{n\to\infty}\left(1+\left(\sqrt{n}-1\right)\right)^{1/n}\\[3pt] &amp;\le\lim_{n\to\infty}\left(1+\tfrac1n\left(\sqrt{n}-1\right)\right)\\[6pt] &amp;=1 \end{align} $$ Since $\sqrt{n}^{1/n}\ge1$, the <a href="https://en.wikipedia.org/wiki/Squeeze_theorem" rel="nofollow noreferrer">Squeeze Theorem</a> says $$ \lim_{n\to\infty}\sqrt{n}^{1/n}=1 $$</p> Answer 110: <p>$$\sum_{k=0}^{n}{{n}\choose{k}}1^{n-k}(\sqrt n^\frac{1}{n}-1)^k= 1+{{n}\choose {2}}(\sqrt n^\frac{1}{n}-1)^2+\left[{{n}\choose 
{1}}(\sqrt n^\frac{1}{n}-1)^1+\sum_{k=3}^{n}{{n}\choose{k}}1^{n-k}(\sqrt n^\frac{1}{n}-1)^k\right]$$</p> <p>and see that:</p> <p>$${{n}\choose {1}}(\sqrt n^\frac{1}{n}-1)^1+\sum_{k=3}^{n}{{n}\choose{k}}1^{n-k}(\sqrt n^\frac{1}{n}-1)^k \ge 0$$</p> <p>because </p> <p>$$\sqrt{n}^\frac{1}{n}-1 \ge0$$</p>
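The squeeze bounds appearing in the answers above (in particular the Bernoulli-inequality estimate $1 \le n^{1/(2n)} \le 1+\frac{1}{\sqrt n}-\frac{1}{n}$) can be sanity-checked numerically. A minimal sketch, assuming Python with only the standard `math` module:

```python
import math

# Check numerically that sqrt(n)^(1/n) = n^(1/(2n)) sits between 1 and
# the Bernoulli-inequality bound 1 + 1/sqrt(n) - 1/n, and tends to 1.
for n in [2, 10, 100, 10_000, 1_000_000]:
    value = n ** (1 / (2 * n))
    upper = 1 + 1 / math.sqrt(n) - 1 / n
    assert 1 <= value <= upper
    print(f"n={n:>9}: n^(1/(2n)) = {value:.8f}, upper bound = {upper:.8f}")
```

Both columns visibly approach $1$ as $n$ grows, in line with the squeeze-theorem argument.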
Intuitive explanation as to why the complex numbers on a complex plane split when taking the root?
2,086,963
7
https://math.stackexchange.com/questions/2086963/intuitive-explanation-as-to-why-the-complex-numbers-on-a-complex-plane-split-whe
<p>When taking the root of a complex number, it splits into several points on the complex plane. I can see how this comes about mathematically, but this is hard for me to understand intuitively. </p> <p>Is this just a convenient coincidence? <a href="https://i.stack.imgur.com/jDgQu.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/jDgQu.png" alt="enter image description here"></a></p>
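The splitting asked about can be made concrete by enumerating the $n$ points $r^{1/n}e^{i(\theta+2\pi k)/n}$, $k=0,\dots,n-1$, and checking that each one, raised to the $n$-th power, reproduces the original number. A minimal sketch, assuming Python's standard `cmath` module:

```python
import cmath

def nth_roots(z, n):
    """All n complex n-th roots of z: r**(1/n) * exp(i*(theta + 2*pi*k)/n)."""
    r, theta = cmath.polar(z)
    return [cmath.rect(r ** (1 / n), (theta + 2 * cmath.pi * k) / n)
            for k in range(n)]

z = 1 + 1j
for w in nth_roots(z, 3):
    # Each of the 3 cube roots, cubed, recovers z (up to float error).
    assert abs(w ** 3 - z) < 1e-9
    print(w)
```

The printed points are equally spaced on a circle of radius $|z|^{1/3}$, which is exactly the regular-polygon picture shown in the plot.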
[ { "answer_id": 2086973, "score": 7, "text": "<p>The rule for multiplication of complex numbers is to add the arguments and multiply the length. That is, multiplying by a complex number corresponds to a rotation combined with a scaling operation. Thus, if you want to multiply a complex number $n$ times by itself and produce a given modulus $r$ and a given angle $\\theta$, the modulus has to be $\\sqrt[n] r$, but there are actually $n$ different angles such that rotating by them $n$ times produces a rotation of $\\theta$ radians.</p>\n\n<p>But really, I'm not sure there's even anything to explain. If you have a bunch of things that you can multiply together, then an \"$n$-th root\" just means \"something that when multiplied by itself $n$ times produces a given result\". Why, intuitively, would there only be one of those?</p>\n" }, { "answer_id": 2086971, "score": 3, "text": "<p>It may be easier to first rationalize that the complex $n^{th}$ <a href=\"https://en.wikipedia.org/wiki/Root_of_unity\" rel=\"nofollow noreferrer\">roots of unity</a> are the vertices of a regular $n$-gon inscribed in the unit circle.</p>\n\n<p>Then, let $\\omega$ be a primitive $n^{th}$ root of unity so that $\\omega^n=1$. If $z_0$ is any of the $n^{th}$ roots of $z$, it follows that $w^k z_0$ is also an $n^{th}$ root of $z$, since $(\\omega^kz_0)^n=(\\omega^{n})^kz_0^n = 1^k \\cdot z = z$. Therefore the complete set of the $n$ $n^{th}$ roots of $z$ is $\\{z_0, \\omega z_0, \\omega^2 z_0, \\dots, \\omega^{n-1}z_0\\}$ which are the vertices of a regular $n$-gon centered on the origin and \"anchored\" at $z_0\\,$. 
That $n$-gon is just the regular $n$-gon of the $n^{th}$ roots of unity $\\{1,\\omega,\\omega^2,\\dots,\\omega^{n-1}\\}$ scaled by $|z_0|=\\sqrt[n]{|z|}$ and rotated by $\\arg(z_0)\\,$.</p>\n" }, { "answer_id": 2086977, "score": 2, "text": "<p>I am not sure what you mean by understanding it intuitively..<br>\nConsider $z = r(\\cos\\theta + i\\sin\\theta)$ then the roots are \n$$ z_k = r^{\\frac{1}{n}}(\\cos\\frac{\\theta + 2\\pi k}{n} + i\\sin\\frac{\\theta + 2\\pi k}{n}),\\ \\ k = 0, \\ 1,\\ldots,\\ n-1 $$</p>\n\n<p>This always gives $n$ different roots. Maybe this is what you meant that you understood it mathematically..</p>\n\n<p>Geometrically, when you multiply $2$ complex numbers, $z_1=r_1e^{i\\theta_1},\\ z_2=r_2e^{i\\theta_2}$, you get $z=r_1r_2e^{i(\\theta_1+\\theta_2)}$<br>\nThis effectively takes $z_1$ and multiplies its radius by that of $z_2$ and rotates it by the angle of $z_2$. </p>\n\n<p>Now going back to the roots $z_k$ of $z$, their radius is $r^{\\frac{1}{n}}$, so multiplying them by themselves gives radius $r$, and their angle is $\\frac{\\theta + 2\\pi k}{n}$ so rotating it $n$ times by the same angle gives $\\theta + 2\\pi k$</p>\n\n<p>Does that make sense?</p>\n" }, { "answer_id": 2086964, "score": 1, "text": "<p>For complex numbers, when you say to take the \"root\", you are referring to all solutions to the following problem:</p>\n\n<p>$$x^n=z$$</p>\n\n<p>where $x$ is the $n$th root of $z$. As a simple case, consider $n=2$ and $z=1$.
Then,</p>\n\n<p>$$1^2=(-1)^2=1$$</p>\n\n<p>$$x=\\begin{cases}+1\\\\-1\\end{cases}$$</p>\n\n<p>Similar such happen for higher $n$, and quite a symetric splitting manner.</p>\n" }, { "answer_id": 2086973, "score": 7, "text": "<p>The rule for multiplication of complex numbers is to add the arguments and multiply the length. That is, multiplying by a complex number corresponds to a rotation combined with a scaling operation. Thus, if you want to multiply a complex number $n$ times by itself and produce a given modulus $r$ and a given angle $\\theta$, the modulus has to be $\\sqrt[n] r$, but there are actually $n$ different angles such that rotating by them $n$ times produces a rotation of $\\theta$ radians.</p>\n\n<p>But really, I'm not sure there's even anything to explain. If you have a bunch of things that you can multiply together, then an \"$n$-th root\" just means \"something that when multiplied by itself $n$ times produces a given result\". Why, intuitively, would there only be one of those?</p>\n" }, { "answer_id": 2086971, "score": 3, "text": "<p>It may be easier to first rationalize that the complex $n^{th}$ <a href=\"https://en.wikipedia.org/wiki/Root_of_unity\" rel=\"nofollow noreferrer\">roots of unity</a> are the vertices of a regular $n$-gon inscribed in the unit circle.</p>\n\n<p>Then, let $\\omega$ be a primitive $n^{th}$ root of unity so that $\\omega^n=1$. If $z_0$ is any of the $n^{th}$ roots of $z$, it follows that $w^k z_0$ is also an $n^{th}$ root of $z$, since $(\\omega^kz_0)^n=(\\omega^{n})^kz_0^n = 1^k \\cdot z = z$. Therefore the complete set of the $n$ $n^{th}$ roots of $z$ is $\\{z_0, \\omega z_0, \\omega^2 z_0, \\dots, \\omega^{n-1}z_0\\}$ which are the vertices of a regular $n$-gon centered on the origin and \"anchored\" at $z_0\\,$. 
That $n$-gon is just the regular $n$-gon of the $n^{th}$ roots of unity $\\{1,\\omega,\\omega^2,\\dots,\\omega^{n-1}\\}$ scaled by $|z_0|=\\sqrt[n]{|z|}$ and rotated by $\\arg(z_0)\\,$.</p>\n" }, { "answer_id": 2086977, "score": 2, "text": "<p>I am not sure what you mean by understanding it intuitively..<br>\nConsider $z = r(\\cos\\theta + i\\sin\\theta)$ then the roots are \n$$ z_k = r^{\\frac{1}{n}}(\\cos\\frac{\\theta + 2\\pi k}{n} + i\\sin\\frac{\\theta + 2\\pi k}{n}),\\ \\ k = 0, \\ 1,\\ldots,\\ n-1 $$</p>\n\n<p>This always gives $n$ different roots. Maybe this is what you meant that you understood it mathematically..</p>\n\n<p>Geometrically, when you multiply $2$ complex numbers, $z_1=r_1e^{i\\theta_1},\\ z_2=r_2e^{i\\theta_2}$, you get $z=r_1r_2e^{i(\\theta_1+\\theta_2)}$<br>\nThis effectively takes $z_1$ and multiplies its radius by that of $z_2$ and rotates it by the angle of $z_2$. </p>\n\n<p>Now going back to the roots $z_k$ of $z$, their radius is $r^{\\frac{1}{n}}$, so multiplying them by themselves gives radius $r$, and their angle is $\\frac{\\theta + 2\\pi k}{n}$ so rotating it $n$ times by the same angle gives $\\theta + 2\\pi k$</p>\n\n<p>Does that make sense?</p>\n" }, { "answer_id": 2086964, "score": 1, "text": "<p>For complex numbers, when you say to take the \"root\", you are referring to all solutions to the following problem:</p>\n\n<p>$$x^n=z$$</p>\n\n<p>where $x$ is the $n$th root of $z$. As a simple case, consider $n=2$ and $z=1$. Then,</p>\n\n<p>$$1^2=(-1)^2=1$$</p>\n\n<p>$$x=\\begin{cases}+1\\\\-1\\end{cases}$$</p>\n\n<p>Similar such happen for higher $n$, and quite a symetric splitting manner.</p>\n" }, { "answer_id": 2086973, "score": 7, "text": "<p>The rule for multiplication of complex numbers is to add the arguments and multiply the length. That is, multiplying by a complex number corresponds to a rotation combined with a scaling operation. 
Thus, if you want to multiply a complex number $n$ times by itself and produce a given modulus $r$ and a given angle $\\theta$, the modulus has to be $\\sqrt[n] r$, but there are actually $n$ different angles such that rotating by them $n$ times produces a rotation of $\\theta$ radians.</p>\n\n<p>But really, I'm not sure there's even anything to explain. If you have a bunch of things that you can multiply together, then an \"$n$-th root\" just means \"something that when multiplied by itself $n$ times produces a given result\". Why, intuitively, would there only be one of those?</p>\n" }, { "answer_id": 2086971, "score": 3, "text": "<p>It may be easier to first rationalize that the complex $n^{th}$ <a href=\"https://en.wikipedia.org/wiki/Root_of_unity\" rel=\"nofollow noreferrer\">roots of unity</a> are the vertices of a regular $n$-gon inscribed in the unit circle.</p>\n\n<p>Then, let $\\omega$ be a primitive $n^{th}$ root of unity so that $\\omega^n=1$. If $z_0$ is any of the $n^{th}$ roots of $z$, it follows that $w^k z_0$ is also an $n^{th}$ root of $z$, since $(\\omega^kz_0)^n=(\\omega^{n})^kz_0^n = 1^k \\cdot z = z$. Therefore the complete set of the $n$ $n^{th}$ roots of $z$ is $\\{z_0, \\omega z_0, \\omega^2 z_0, \\dots, \\omega^{n-1}z_0\\}$ which are the vertices of a regular $n$-gon centered on the origin and \"anchored\" at $z_0\\,$. That $n$-gon is just the regular $n$-gon of the $n^{th}$ roots of unity $\\{1,\\omega,\\omega^2,\\dots,\\omega^{n-1}\\}$ scaled by $|z_0|=\\sqrt[n]{|z|}$ and rotated by $\\arg(z_0)\\,$.</p>\n" }, { "answer_id": 2086977, "score": 2, "text": "<p>I am not sure what you mean by understanding it intuitively..<br>\nConsider $z = r(\\cos\\theta + i\\sin\\theta)$ then the roots are \n$$ z_k = r^{\\frac{1}{n}}(\\cos\\frac{\\theta + 2\\pi k}{n} + i\\sin\\frac{\\theta + 2\\pi k}{n}),\\ \\ k = 0, \\ 1,\\ldots,\\ n-1 $$</p>\n\n<p>This always gives $n$ different roots. 
Maybe this is what you meant that you understood it mathematically..</p>\n\n<p>Geometrically, when you multiply $2$ complex numbers, $z_1=r_1e^{i\\theta_1},\\ z_2=r_2e^{i\\theta_2}$, you get $z=r_1r_2e^{i(\\theta_1+\\theta_2)}$<br>\nThis effectively takes $z_1$ and multiplies its radius by that of $z_2$ and rotates it by the angle of $z_2$. </p>\n\n<p>Now going back to the roots $z_k$ of $z$, their radius is $r^{\\frac{1}{n}}$, so multiplying them by themselves gives radius $r$, and their angle is $\\frac{\\theta + 2\\pi k}{n}$ so rotating it $n$ times by the same angle gives $\\theta + 2\\pi k$</p>\n\n<p>Does that make sense?</p>\n" }, { "answer_id": 2086964, "score": 1, "text": "<p>For complex numbers, when you say to take the \"root\", you are referring to all solutions to the following problem:</p>\n\n<p>$$x^n=z$$</p>\n\n<p>where $x$ is the $n$th root of $z$. As a simple case, consider $n=2$ and $z=1$. Then,</p>\n\n<p>$$1^2=(-1)^2=1$$</p>\n\n<p>$$x=\\begin{cases}+1\\\\-1\\end{cases}$$</p>\n\n<p>Similar such happen for higher $n$, and quite a symetric splitting manner.</p>\n" }, { "answer_id": 2086973, "score": 7, "text": "<p>The rule for multiplication of complex numbers is to add the arguments and multiply the length. That is, multiplying by a complex number corresponds to a rotation combined with a scaling operation. Thus, if you want to multiply a complex number $n$ times by itself and produce a given modulus $r$ and a given angle $\\theta$, the modulus has to be $\\sqrt[n] r$, but there are actually $n$ different angles such that rotating by them $n$ times produces a rotation of $\\theta$ radians.</p>\n\n<p>But really, I'm not sure there's even anything to explain. If you have a bunch of things that you can multiply together, then an \"$n$-th root\" just means \"something that when multiplied by itself $n$ times produces a given result\". 
Why, intuitively, would there only be one of those?</p>\n" }, { "answer_id": 2086971, "score": 3, "text": "<p>It may be easier to first rationalize that the complex $n^{th}$ <a href=\"https://en.wikipedia.org/wiki/Root_of_unity\" rel=\"nofollow noreferrer\">roots of unity</a> are the vertices of a regular $n$-gon inscribed in the unit circle.</p>\n\n<p>Then, let $\\omega$ be a primitive $n^{th}$ root of unity so that $\\omega^n=1$. If $z_0$ is any of the $n^{th}$ roots of $z$, it follows that $w^k z_0$ is also an $n^{th}$ root of $z$, since $(\\omega^kz_0)^n=(\\omega^{n})^kz_0^n = 1^k \\cdot z = z$. Therefore the complete set of the $n$ $n^{th}$ roots of $z$ is $\\{z_0, \\omega z_0, \\omega^2 z_0, \\dots, \\omega^{n-1}z_0\\}$ which are the vertices of a regular $n$-gon centered on the origin and \"anchored\" at $z_0\\,$. That $n$-gon is just the regular $n$-gon of the $n^{th}$ roots of unity $\\{1,\\omega,\\omega^2,\\dots,\\omega^{n-1}\\}$ scaled by $|z_0|=\\sqrt[n]{|z|}$ and rotated by $\\arg(z_0)\\,$.</p>\n" }, { "answer_id": 2086977, "score": 2, "text": "<p>I am not sure what you mean by understanding it intuitively..<br>\nConsider $z = r(\\cos\\theta + i\\sin\\theta)$ then the roots are \n$$ z_k = r^{\\frac{1}{n}}(\\cos\\frac{\\theta + 2\\pi k}{n} + i\\sin\\frac{\\theta + 2\\pi k}{n}),\\ \\ k = 0, \\ 1,\\ldots,\\ n-1 $$</p>\n\n<p>This always gives $n$ different roots. Maybe this is what you meant that you understood it mathematically..</p>\n\n<p>Geometrically, when you multiply $2$ complex numbers, $z_1=r_1e^{i\\theta_1},\\ z_2=r_2e^{i\\theta_2}$, you get $z=r_1r_2e^{i(\\theta_1+\\theta_2)}$<br>\nThis effectively takes $z_1$ and multiplies its radius by that of $z_2$ and rotates it by the angle of $z_2$. 
</p>\n\n<p>Now going back to the roots $z_k$ of $z$, their radius is $r^{\\frac{1}{n}}$, so multiplying them by themselves gives radius $r$, and their angle is $\\frac{\\theta + 2\\pi k}{n}$ so rotating it $n$ times by the same angle gives $\\theta + 2\\pi k$</p>\n\n<p>Does that make sense?</p>\n" }, { "answer_id": 2086964, "score": 1, "text": "<p>For complex numbers, when you say to take the \"root\", you are referring to all solutions to the following problem:</p>\n\n<p>$$x^n=z$$</p>\n\n<p>where $x$ is the $n$th root of $z$. As a simple case, consider $n=2$ and $z=1$. Then,</p>\n\n<p>$$1^2=(-1)^2=1$$</p>\n\n<p>$$x=\\begin{cases}+1\\\\-1\\end{cases}$$</p>\n\n<p>Similar such happen for higher $n$, and quite a symetric splitting manner.</p>\n" }, { "answer_id": 2086973, "score": 7, "text": "<p>The rule for multiplication of complex numbers is to add the arguments and multiply the length. That is, multiplying by a complex number corresponds to a rotation combined with a scaling operation. Thus, if you want to multiply a complex number $n$ times by itself and produce a given modulus $r$ and a given angle $\\theta$, the modulus has to be $\\sqrt[n] r$, but there are actually $n$ different angles such that rotating by them $n$ times produces a rotation of $\\theta$ radians.</p>\n\n<p>But really, I'm not sure there's even anything to explain. If you have a bunch of things that you can multiply together, then an \"$n$-th root\" just means \"something that when multiplied by itself $n$ times produces a given result\". Why, intuitively, would there only be one of those?</p>\n" }, { "answer_id": 2086971, "score": 3, "text": "<p>It may be easier to first rationalize that the complex $n^{th}$ <a href=\"https://en.wikipedia.org/wiki/Root_of_unity\" rel=\"nofollow noreferrer\">roots of unity</a> are the vertices of a regular $n$-gon inscribed in the unit circle.</p>\n\n<p>Then, let $\\omega$ be a primitive $n^{th}$ root of unity so that $\\omega^n=1$. 
If $z_0$ is any of the $n^{th}$ roots of $z$, it follows that $w^k z_0$ is also an $n^{th}$ root of $z$, since $(\\omega^kz_0)^n=(\\omega^{n})^kz_0^n = 1^k \\cdot z = z$. Therefore the complete set of the $n$ $n^{th}$ roots of $z$ is $\\{z_0, \\omega z_0, \\omega^2 z_0, \\dots, \\omega^{n-1}z_0\\}$ which are the vertices of a regular $n$-gon centered on the origin and \"anchored\" at $z_0\\,$. That $n$-gon is just the regular $n$-gon of the $n^{th}$ roots of unity $\\{1,\\omega,\\omega^2,\\dots,\\omega^{n-1}\\}$ scaled by $|z_0|=\\sqrt[n]{|z|}$ and rotated by $\\arg(z_0)\\,$.</p>\n" }, { "answer_id": 2086977, "score": 2, "text": "<p>I am not sure what you mean by understanding it intuitively..<br>\nConsider $z = r(\\cos\\theta + i\\sin\\theta)$ then the roots are \n$$ z_k = r^{\\frac{1}{n}}(\\cos\\frac{\\theta + 2\\pi k}{n} + i\\sin\\frac{\\theta + 2\\pi k}{n}),\\ \\ k = 0, \\ 1,\\ldots,\\ n-1 $$</p>\n\n<p>This always gives $n$ different roots. Maybe this is what you meant that you understood it mathematically..</p>\n\n<p>Geometrically, when you multiply $2$ complex numbers, $z_1=r_1e^{i\\theta_1},\\ z_2=r_2e^{i\\theta_2}$, you get $z=r_1r_2e^{i(\\theta_1+\\theta_2)}$<br>\nThis effectively takes $z_1$ and multiplies its radius by that of $z_2$ and rotates it by the angle of $z_2$. </p>\n\n<p>Now going back to the roots $z_k$ of $z$, their radius is $r^{\\frac{1}{n}}$, so multiplying them by themselves gives radius $r$, and their angle is $\\frac{\\theta + 2\\pi k}{n}$ so rotating it $n$ times by the same angle gives $\\theta + 2\\pi k$</p>\n\n<p>Does that make sense?</p>\n" }, { "answer_id": 2086964, "score": 1, "text": "<p>For complex numbers, when you say to take the \"root\", you are referring to all solutions to the following problem:</p>\n\n<p>$$x^n=z$$</p>\n\n<p>where $x$ is the $n$th root of $z$. As a simple case, consider $n=2$ and $z=1$. 
Then,</p>\n\n<p>$$1^2=(-1)^2=1$$</p>\n\n<p>$$x=\\begin{cases}+1\\\\-1\\end{cases}$$</p>\n\n<p>Similar such happen for higher $n$, and quite a symetric splitting manner.</p>\n" }, { "answer_id": 2086973, "score": 7, "text": "<p>The rule for multiplication of complex numbers is to add the arguments and multiply the length. That is, multiplying by a complex number corresponds to a rotation combined with a scaling operation. Thus, if you want to multiply a complex number $n$ times by itself and produce a given modulus $r$ and a given angle $\\theta$, the modulus has to be $\\sqrt[n] r$, but there are actually $n$ different angles such that rotating by them $n$ times produces a rotation of $\\theta$ radians.</p>\n\n<p>But really, I'm not sure there's even anything to explain. If you have a bunch of things that you can multiply together, then an \"$n$-th root\" just means \"something that when multiplied by itself $n$ times produces a given result\". Why, intuitively, would there only be one of those?</p>\n" }, { "answer_id": 2086971, "score": 3, "text": "<p>It may be easier to first rationalize that the complex $n^{th}$ <a href=\"https://en.wikipedia.org/wiki/Root_of_unity\" rel=\"nofollow noreferrer\">roots of unity</a> are the vertices of a regular $n$-gon inscribed in the unit circle.</p>\n\n<p>Then, let $\\omega$ be a primitive $n^{th}$ root of unity so that $\\omega^n=1$. If $z_0$ is any of the $n^{th}$ roots of $z$, it follows that $w^k z_0$ is also an $n^{th}$ root of $z$, since $(\\omega^kz_0)^n=(\\omega^{n})^kz_0^n = 1^k \\cdot z = z$. Therefore the complete set of the $n$ $n^{th}$ roots of $z$ is $\\{z_0, \\omega z_0, \\omega^2 z_0, \\dots, \\omega^{n-1}z_0\\}$ which are the vertices of a regular $n$-gon centered on the origin and \"anchored\" at $z_0\\,$. 
That $n$-gon is just the regular $n$-gon of the $n^{th}$ roots of unity $\\{1,\\omega,\\omega^2,\\dots,\\omega^{n-1}\\}$ scaled by $|z_0|=\\sqrt[n]{|z|}$ and rotated by $\\arg(z_0)\\,$.</p>\n" }, { "answer_id": 2086977, "score": 2, "text": "<p>I am not sure what you mean by understanding it intuitively..<br>\nConsider $z = r(\\cos\\theta + i\\sin\\theta)$ then the roots are \n$$ z_k = r^{\\frac{1}{n}}(\\cos\\frac{\\theta + 2\\pi k}{n} + i\\sin\\frac{\\theta + 2\\pi k}{n}),\\ \\ k = 0, \\ 1,\\ldots,\\ n-1 $$</p>\n\n<p>This always gives $n$ different roots. Maybe this is what you meant that you understood it mathematically..</p>\n\n<p>Geometrically, when you multiply $2$ complex numbers, $z_1=r_1e^{i\\theta_1},\\ z_2=r_2e^{i\\theta_2}$, you get $z=r_1r_2e^{i(\\theta_1+\\theta_2)}$<br>\nThis effectively takes $z_1$ and multiplies its radius by that of $z_2$ and rotates it by the angle of $z_2$. </p>\n\n<p>Now going back to the roots $z_k$ of $z$, their radius is $r^{\\frac{1}{n}}$, so multiplying them by themselves gives radius $r$, and their angle is $\\frac{\\theta + 2\\pi k}{n}$ so rotating it $n$ times by the same angle gives $\\theta + 2\\pi k$</p>\n\n<p>Does that make sense?</p>\n" }, { "answer_id": 2086964, "score": 1, "text": "<p>For complex numbers, when you say to take the \"root\", you are referring to all solutions to the following problem:</p>\n\n<p>$$x^n=z$$</p>\n\n<p>where $x$ is the $n$th root of $z$. As a simple case, consider $n=2$ and $z=1$. Then,</p>\n\n<p>$$1^2=(-1)^2=1$$</p>\n\n<p>$$x=\\begin{cases}+1\\\\-1\\end{cases}$$</p>\n\n<p>Similar such happen for higher $n$, and quite a symetric splitting manner.</p>\n" }, { "answer_id": 2086973, "score": 7, "text": "<p>The rule for multiplication of complex numbers is to add the arguments and multiply the length. That is, multiplying by a complex number corresponds to a rotation combined with a scaling operation. 
Thus, if you want to multiply a complex number $n$ times by itself and produce a given modulus $r$ and a given angle $\\theta$, the modulus has to be $\\sqrt[n] r$, but there are actually $n$ different angles such that rotating by them $n$ times produces a rotation of $\\theta$ radians.</p>\n\n<p>But really, I'm not sure there's even anything to explain. If you have a bunch of things that you can multiply together, then an \"$n$-th root\" just means \"something that when multiplied by itself $n$ times produces a given result\". Why, intuitively, would there only be one of those?</p>\n" }, { "answer_id": 2086971, "score": 3, "text": "<p>It may be easier to first rationalize that the complex $n^{th}$ <a href=\"https://en.wikipedia.org/wiki/Root_of_unity\" rel=\"nofollow noreferrer\">roots of unity</a> are the vertices of a regular $n$-gon inscribed in the unit circle.</p>\n\n<p>Then, let $\\omega$ be a primitive $n^{th}$ root of unity so that $\\omega^n=1$. If $z_0$ is any of the $n^{th}$ roots of $z$, it follows that $w^k z_0$ is also an $n^{th}$ root of $z$, since $(\\omega^kz_0)^n=(\\omega^{n})^kz_0^n = 1^k \\cdot z = z$. Therefore the complete set of the $n$ $n^{th}$ roots of $z$ is $\\{z_0, \\omega z_0, \\omega^2 z_0, \\dots, \\omega^{n-1}z_0\\}$ which are the vertices of a regular $n$-gon centered on the origin and \"anchored\" at $z_0\\,$. That $n$-gon is just the regular $n$-gon of the $n^{th}$ roots of unity $\\{1,\\omega,\\omega^2,\\dots,\\omega^{n-1}\\}$ scaled by $|z_0|=\\sqrt[n]{|z|}$ and rotated by $\\arg(z_0)\\,$.</p>\n" }, { "answer_id": 2086977, "score": 2, "text": "<p>I am not sure what you mean by understanding it intuitively..<br>\nConsider $z = r(\\cos\\theta + i\\sin\\theta)$ then the roots are \n$$ z_k = r^{\\frac{1}{n}}(\\cos\\frac{\\theta + 2\\pi k}{n} + i\\sin\\frac{\\theta + 2\\pi k}{n}),\\ \\ k = 0, \\ 1,\\ldots,\\ n-1 $$</p>\n\n<p>This always gives $n$ different roots. 
Maybe this is what you meant that you understood it mathematically..</p>\n\n<p>Geometrically, when you multiply $2$ complex numbers, $z_1=r_1e^{i\\theta_1},\\ z_2=r_2e^{i\\theta_2}$, you get $z=r_1r_2e^{i(\\theta_1+\\theta_2)}$<br>\nThis effectively takes $z_1$ and multiplies its radius by that of $z_2$ and rotates it by the angle of $z_2$. </p>\n\n<p>Now going back to the roots $z_k$ of $z$, their radius is $r^{\\frac{1}{n}}$, so multiplying them by themselves gives radius $r$, and their angle is $\\frac{\\theta + 2\\pi k}{n}$ so rotating it $n$ times by the same angle gives $\\theta + 2\\pi k$</p>\n\n<p>Does that make sense?</p>\n" }, { "answer_id": 2086964, "score": 1, "text": "<p>For complex numbers, when you say to take the \"root\", you are referring to all solutions to the following problem:</p>\n\n<p>$$x^n=z$$</p>\n\n<p>where $x$ is the $n$th root of $z$. As a simple case, consider $n=2$ and $z=1$. Then,</p>\n\n<p>$$1^2=(-1)^2=1$$</p>\n\n<p>$$x=\\begin{cases}+1\\\\-1\\end{cases}$$</p>\n\n<p>Similar such happen for higher $n$, and quite a symetric splitting manner.</p>\n" }, { "answer_id": 2086973, "score": 7, "text": "<p>The rule for multiplication of complex numbers is to add the arguments and multiply the length. That is, multiplying by a complex number corresponds to a rotation combined with a scaling operation. Thus, if you want to multiply a complex number $n$ times by itself and produce a given modulus $r$ and a given angle $\\theta$, the modulus has to be $\\sqrt[n] r$, but there are actually $n$ different angles such that rotating by them $n$ times produces a rotation of $\\theta$ radians.</p>\n\n<p>But really, I'm not sure there's even anything to explain. If you have a bunch of things that you can multiply together, then an \"$n$-th root\" just means \"something that when multiplied by itself $n$ times produces a given result\". 
Why, intuitively, would there only be one of those?</p>\n" }, { "answer_id": 2086971, "score": 3, "text": "<p>It may be easier to first rationalize that the complex $n^{th}$ <a href=\"https://en.wikipedia.org/wiki/Root_of_unity\" rel=\"nofollow noreferrer\">roots of unity</a> are the vertices of a regular $n$-gon inscribed in the unit circle.</p>\n\n<p>Then, let $\\omega$ be a primitive $n^{th}$ root of unity so that $\\omega^n=1$. If $z_0$ is any of the $n^{th}$ roots of $z$, it follows that $w^k z_0$ is also an $n^{th}$ root of $z$, since $(\\omega^kz_0)^n=(\\omega^{n})^kz_0^n = 1^k \\cdot z = z$. Therefore the complete set of the $n$ $n^{th}$ roots of $z$ is $\\{z_0, \\omega z_0, \\omega^2 z_0, \\dots, \\omega^{n-1}z_0\\}$ which are the vertices of a regular $n$-gon centered on the origin and \"anchored\" at $z_0\\,$. That $n$-gon is just the regular $n$-gon of the $n^{th}$ roots of unity $\\{1,\\omega,\\omega^2,\\dots,\\omega^{n-1}\\}$ scaled by $|z_0|=\\sqrt[n]{|z|}$ and rotated by $\\arg(z_0)\\,$.</p>\n" }, { "answer_id": 2086977, "score": 2, "text": "<p>I am not sure what you mean by understanding it intuitively..<br>\nConsider $z = r(\\cos\\theta + i\\sin\\theta)$ then the roots are \n$$ z_k = r^{\\frac{1}{n}}(\\cos\\frac{\\theta + 2\\pi k}{n} + i\\sin\\frac{\\theta + 2\\pi k}{n}),\\ \\ k = 0, \\ 1,\\ldots,\\ n-1 $$</p>\n\n<p>This always gives $n$ different roots. Maybe this is what you meant that you understood it mathematically..</p>\n\n<p>Geometrically, when you multiply $2$ complex numbers, $z_1=r_1e^{i\\theta_1},\\ z_2=r_2e^{i\\theta_2}$, you get $z=r_1r_2e^{i(\\theta_1+\\theta_2)}$<br>\nThis effectively takes $z_1$ and multiplies its radius by that of $z_2$ and rotates it by the angle of $z_2$. 
</p>\n\n<p>Now going back to the roots $z_k$ of $z$, their radius is $r^{\\frac{1}{n}}$, so multiplying them by themselves gives radius $r$, and their angle is $\\frac{\\theta + 2\\pi k}{n}$ so rotating it $n$ times by the same angle gives $\\theta + 2\\pi k$</p>\n\n<p>Does that make sense?</p>\n" }, { "answer_id": 2086964, "score": 1, "text": "<p>For complex numbers, when you say to take the \"root\", you are referring to all solutions to the following problem:</p>\n\n<p>$$x^n=z$$</p>\n\n<p>where $x$ is the $n$th root of $z$. As a simple case, consider $n=2$ and $z=1$. Then,</p>\n\n<p>$$1^2=(-1)^2=1$$</p>\n\n<p>$$x=\\begin{cases}+1\\\\-1\\end{cases}$$</p>\n\n<p>Similar such happen for higher $n$, and quite a symetric splitting manner.</p>\n" } ]
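The formula quoted in the answers, $z_k = r^{\frac{1}{n}}(\cos\frac{\theta + 2\pi k}{n} + i\sin\frac{\theta + 2\pi k}{n})$ for $k = 0,\ldots,n-1$, is easy to check numerically. The sketch below is illustrative and not from the original thread; the helper name `nth_roots` is my own, and it uses Python's standard `cmath` module:

```python
import cmath

def nth_roots(z, n):
    """Return the n distinct n-th roots of a nonzero complex z.

    Each root has modulus |z|**(1/n) and argument (arg(z) + 2*pi*k)/n,
    so the roots form the vertices of a regular n-gon centered at 0.
    """
    r, theta = cmath.polar(z)  # z = r * e^{i*theta}
    return [cmath.rect(r ** (1.0 / n), (theta + 2 * cmath.pi * k) / n)
            for k in range(n)]

# Example: the four 4th roots of 16 are 2, 2i, -2, -2i.
roots = nth_roots(16, 4)
# Raising any root to the 4th power recovers the original number
# (up to floating-point error).
assert all(abs(w ** 4 - 16) < 1e-9 for w in roots)
```

Raising each returned root to the $n$-th power recovers $z$, which is exactly the defining property the answers discuss.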
Question: Intuitive explanation as to why the complex numbers on a complex plane split when taking the root? Answer 1: <p>The rule for multiplication of complex numbers is to add the arguments and multiply the length. That is, multiplying by a complex number corresponds to a rotation combined with a scaling operation. Thus, if you want to multiply a complex number $n$ times by itself and produce a given modulus $r$ and a given angle $\theta$, the modulus has to be $\sqrt[n] r$, but there are actually $n$ different angles such that rotating by them $n$ times produces a rotation of $\theta$ radians.</p>

<p>But really, I'm not sure there's even anything to explain. If you have a bunch of things that you can multiply together, then an "$n$-th root" just means "something that when multiplied by itself $n$ times produces a given result". Why, intuitively, would there only be one of those?</p> Answer 2: <p>It may be easier to first rationalize that the complex $n^{th}$ <a href="https://en.wikipedia.org/wiki/Root_of_unity" rel="nofollow noreferrer">roots of unity</a> are the vertices of a regular $n$-gon inscribed in the unit circle.</p>

<p>Then, let $\omega$ be a primitive $n^{th}$ root of unity so that $\omega^n=1$. If $z_0$ is any of the $n^{th}$ roots of $z$, it follows that $w^k z_0$ is also an $n^{th}$ root of $z$, since $(\omega^kz_0)^n=(\omega^{n})^kz_0^n = 1^k \cdot z = z$. Therefore the complete set of the $n$ $n^{th}$ roots of $z$ is $\{z_0, \omega z_0, \omega^2 z_0, \dots, \omega^{n-1}z_0\}$ which are the vertices of a regular $n$-gon centered on the origin and "anchored" at $z_0\,$. That $n$-gon is just the regular $n$-gon of the $n^{th}$ roots of unity $\{1,\omega,\omega^2,\dots,\omega^{n-1}\}$ scaled by $|z_0|=\sqrt[n]{|z|}$ and rotated by $\arg(z_0)\,$.</p> Answer 3: <p>I am not sure what you mean by understanding it intuitively..<br>
Consider $z = r(\cos\theta + i\sin\theta)$ then the roots are
$$ z_k = r^{\frac{1}{n}}(\cos\frac{\theta + 2\pi k}{n} + i\sin\frac{\theta + 2\pi k}{n}),\ \ k = 0, \ 1,\ldots,\ n-1 $$</p>

<p>This always gives $n$ different roots. Maybe this is what you meant that you understood it mathematically..</p>

<p>Geometrically, when you multiply $2$ complex numbers, $z_1=r_1e^{i\theta_1},\ z_2=r_2e^{i\theta_2}$, you get $z=r_1r_2e^{i(\theta_1+\theta_2)}$<br>
This effectively takes $z_1$ and multiplies its radius by that of $z_2$ and rotates it by the angle of $z_2$. </p>

<p>Now going back to the roots $z_k$ of $z$, their radius is $r^{\frac{1}{n}}$, so multiplying them by themselves gives radius $r$, and their angle is $\frac{\theta + 2\pi k}{n}$ so rotating it $n$ times by the same angle gives $\theta + 2\pi k$</p>

<p>Does that make sense?</p> Answer 4: <p>For complex numbers, when you say to take the "root", you are referring to all solutions to the following problem:</p>

<p>$$x^n=z$$</p>

<p>where $x$ is the $n$th root of $z$. As a simple case, consider $n=2$ and $z=1$. Then,</p>

<p>$$1^2=(-1)^2=1$$</p>

<p>$$x=\begin{cases}+1\\-1\end{cases}$$</p>

<p>Similar such happen for higher $n$, and quite a symetric splitting manner.</p> Answer 5: <p>I am not sure what you mean by understanding it intuitively..<br>
Consider $z = r(\cos\theta + i\sin\theta)$ then the roots are
$$ z_k = r^{\frac{1}{n}}(\cos\frac{\theta + 2\pi k}{n} + i\sin\frac{\theta + 2\pi k}{n}),\ \ k = 0, \ 1,\ldots,\ n-1 $$</p>

<p>This always gives $n$ different roots. Maybe this is what you meant that you understood it mathematically..</p>

<p>Geometrically, when you multiply $2$ complex numbers, $z_1=r_1e^{i\theta_1},\ z_2=r_2e^{i\theta_2}$, you get $z=r_1r_2e^{i(\theta_1+\theta_2)}$<br>
This effectively takes $z_1$ and multiplies its radius by that of $z_2$ and rotates it by the angle of $z_2$. 
</p> <p>Now going back to the roots $z_k$ of $z$, their radius is $r^{\frac{1}{n}}$, so multiplying them by themselves gives radius $r$, and their angle is $\frac{\theta + 2\pi k}{n}$ so rotating it $n$ times by the same angle gives $\theta + 2\pi k$</p> <p>Does that make sense?</p> Answer 12: <p>For complex numbers, when you say to take the "root", you are referring to all solutions to the following problem:</p> <p>$$x^n=z$$</p> <p>where $x$ is the $n$th root of $z$. As a simple case, consider $n=2$ and $z=1$. Then,</p> <p>$$1^2=(-1)^2=1$$</p> <p>$$x=\begin{cases}+1\\-1\end{cases}$$</p> <p>Similar such happen for higher $n$, and quite a symetric splitting manner.</p> Answer 13: <p>The rule for multiplication of complex numbers is to add the arguments and multiply the length. That is, multiplying by a complex number corresponds to a rotation combined with a scaling operation. Thus, if you want to multiply a complex number $n$ times by itself and produce a given modulus $r$ and a given angle $\theta$, the modulus has to be $\sqrt[n] r$, but there are actually $n$ different angles such that rotating by them $n$ times produces a rotation of $\theta$ radians.</p> <p>But really, I'm not sure there's even anything to explain. If you have a bunch of things that you can multiply together, then an "$n$-th root" just means "something that when multiplied by itself $n$ times produces a given result". Why, intuitively, would there only be one of those?</p> Answer 14: <p>It may be easier to first rationalize that the complex $n^{th}$ <a href="https://en.wikipedia.org/wiki/Root_of_unity" rel="nofollow noreferrer">roots of unity</a> are the vertices of a regular $n$-gon inscribed in the unit circle.</p> <p>Then, let $\omega$ be a primitive $n^{th}$ root of unity so that $\omega^n=1$. If $z_0$ is any of the $n^{th}$ roots of $z$, it follows that $w^k z_0$ is also an $n^{th}$ root of $z$, since $(\omega^kz_0)^n=(\omega^{n})^kz_0^n = 1^k \cdot z = z$. 
Therefore the complete set of the $n$ $n^{th}$ roots of $z$ is $\{z_0, \omega z_0, \omega^2 z_0, \dots, \omega^{n-1}z_0\}$ which are the vertices of a regular $n$-gon centered on the origin and "anchored" at $z_0\,$. That $n$-gon is just the regular $n$-gon of the $n^{th}$ roots of unity $\{1,\omega,\omega^2,\dots,\omega^{n-1}\}$ scaled by $|z_0|=\sqrt[n]{|z|}$ and rotated by $\arg(z_0)\,$.</p> Answer 15: <p>I am not sure what you mean by understanding it intuitively..<br> Consider $z = r(\cos\theta + i\sin\theta)$ then the roots are $$ z_k = r^{\frac{1}{n}}(\cos\frac{\theta + 2\pi k}{n} + i\sin\frac{\theta + 2\pi k}{n}),\ \ k = 0, \ 1,\ldots,\ n-1 $$</p> <p>This always gives $n$ different roots. Maybe this is what you meant that you understood it mathematically..</p> <p>Geometrically, when you multiply $2$ complex numbers, $z_1=r_1e^{i\theta_1},\ z_2=r_2e^{i\theta_2}$, you get $z=r_1r_2e^{i(\theta_1+\theta_2)}$<br> This effectively takes $z_1$ and multiplies its radius by that of $z_2$ and rotates it by the angle of $z_2$. </p> <p>Now going back to the roots $z_k$ of $z$, their radius is $r^{\frac{1}{n}}$, so multiplying them by themselves gives radius $r$, and their angle is $\frac{\theta + 2\pi k}{n}$ so rotating it $n$ times by the same angle gives $\theta + 2\pi k$</p> <p>Does that make sense?</p> Answer 16: <p>For complex numbers, when you say to take the "root", you are referring to all solutions to the following problem:</p> <p>$$x^n=z$$</p> <p>where $x$ is the $n$th root of $z$. As a simple case, consider $n=2$ and $z=1$. Then,</p> <p>$$1^2=(-1)^2=1$$</p> <p>$$x=\begin{cases}+1\\-1\end{cases}$$</p> <p>Similar such happen for higher $n$, and quite a symetric splitting manner.</p> Answer 17: <p>The rule for multiplication of complex numbers is to add the arguments and multiply the length. That is, multiplying by a complex number corresponds to a rotation combined with a scaling operation. 
Thus, if you want to multiply a complex number $n$ times by itself and produce a given modulus $r$ and a given angle $\theta$, the modulus has to be $\sqrt[n] r$, but there are actually $n$ different angles such that rotating by them $n$ times produces a rotation of $\theta$ radians.</p> <p>But really, I'm not sure there's even anything to explain. If you have a bunch of things that you can multiply together, then an "$n$-th root" just means "something that when multiplied by itself $n$ times produces a given result". Why, intuitively, would there only be one of those?</p> Answer 18: <p>It may be easier to first rationalize that the complex $n^{th}$ <a href="https://en.wikipedia.org/wiki/Root_of_unity" rel="nofollow noreferrer">roots of unity</a> are the vertices of a regular $n$-gon inscribed in the unit circle.</p> <p>Then, let $\omega$ be a primitive $n^{th}$ root of unity so that $\omega^n=1$. If $z_0$ is any of the $n^{th}$ roots of $z$, it follows that $w^k z_0$ is also an $n^{th}$ root of $z$, since $(\omega^kz_0)^n=(\omega^{n})^kz_0^n = 1^k \cdot z = z$. Therefore the complete set of the $n$ $n^{th}$ roots of $z$ is $\{z_0, \omega z_0, \omega^2 z_0, \dots, \omega^{n-1}z_0\}$ which are the vertices of a regular $n$-gon centered on the origin and "anchored" at $z_0\,$. That $n$-gon is just the regular $n$-gon of the $n^{th}$ roots of unity $\{1,\omega,\omega^2,\dots,\omega^{n-1}\}$ scaled by $|z_0|=\sqrt[n]{|z|}$ and rotated by $\arg(z_0)\,$.</p> Answer 19: <p>I am not sure what you mean by understanding it intuitively..<br> Consider $z = r(\cos\theta + i\sin\theta)$ then the roots are $$ z_k = r^{\frac{1}{n}}(\cos\frac{\theta + 2\pi k}{n} + i\sin\frac{\theta + 2\pi k}{n}),\ \ k = 0, \ 1,\ldots,\ n-1 $$</p> <p>This always gives $n$ different roots. 
Maybe this is what you meant that you understood it mathematically..</p> <p>Geometrically, when you multiply $2$ complex numbers, $z_1=r_1e^{i\theta_1},\ z_2=r_2e^{i\theta_2}$, you get $z=r_1r_2e^{i(\theta_1+\theta_2)}$<br> This effectively takes $z_1$ and multiplies its radius by that of $z_2$ and rotates it by the angle of $z_2$. </p> <p>Now going back to the roots $z_k$ of $z$, their radius is $r^{\frac{1}{n}}$, so multiplying them by themselves gives radius $r$, and their angle is $\frac{\theta + 2\pi k}{n}$ so rotating it $n$ times by the same angle gives $\theta + 2\pi k$</p> <p>Does that make sense?</p> Answer 20: <p>For complex numbers, when you say to take the "root", you are referring to all solutions to the following problem:</p> <p>$$x^n=z$$</p> <p>where $x$ is the $n$th root of $z$. As a simple case, consider $n=2$ and $z=1$. Then,</p> <p>$$1^2=(-1)^2=1$$</p> <p>$$x=\begin{cases}+1\\-1\end{cases}$$</p> <p>Similar such happen for higher $n$, and quite a symetric splitting manner.</p> Answer 21: <p>The rule for multiplication of complex numbers is to add the arguments and multiply the length. That is, multiplying by a complex number corresponds to a rotation combined with a scaling operation. Thus, if you want to multiply a complex number $n$ times by itself and produce a given modulus $r$ and a given angle $\theta$, the modulus has to be $\sqrt[n] r$, but there are actually $n$ different angles such that rotating by them $n$ times produces a rotation of $\theta$ radians.</p> <p>But really, I'm not sure there's even anything to explain. If you have a bunch of things that you can multiply together, then an "$n$-th root" just means "something that when multiplied by itself $n$ times produces a given result". 
Why, intuitively, would there only be one of those?</p> Answer 22: <p>It may be easier to first rationalize that the complex $n^{th}$ <a href="https://en.wikipedia.org/wiki/Root_of_unity" rel="nofollow noreferrer">roots of unity</a> are the vertices of a regular $n$-gon inscribed in the unit circle.</p> <p>Then, let $\omega$ be a primitive $n^{th}$ root of unity so that $\omega^n=1$. If $z_0$ is any of the $n^{th}$ roots of $z$, it follows that $w^k z_0$ is also an $n^{th}$ root of $z$, since $(\omega^kz_0)^n=(\omega^{n})^kz_0^n = 1^k \cdot z = z$. Therefore the complete set of the $n$ $n^{th}$ roots of $z$ is $\{z_0, \omega z_0, \omega^2 z_0, \dots, \omega^{n-1}z_0\}$ which are the vertices of a regular $n$-gon centered on the origin and "anchored" at $z_0\,$. That $n$-gon is just the regular $n$-gon of the $n^{th}$ roots of unity $\{1,\omega,\omega^2,\dots,\omega^{n-1}\}$ scaled by $|z_0|=\sqrt[n]{|z|}$ and rotated by $\arg(z_0)\,$.</p> Answer 23: <p>I am not sure what you mean by understanding it intuitively..<br> Consider $z = r(\cos\theta + i\sin\theta)$ then the roots are $$ z_k = r^{\frac{1}{n}}(\cos\frac{\theta + 2\pi k}{n} + i\sin\frac{\theta + 2\pi k}{n}),\ \ k = 0, \ 1,\ldots,\ n-1 $$</p> <p>This always gives $n$ different roots. Maybe this is what you meant that you understood it mathematically..</p> <p>Geometrically, when you multiply $2$ complex numbers, $z_1=r_1e^{i\theta_1},\ z_2=r_2e^{i\theta_2}$, you get $z=r_1r_2e^{i(\theta_1+\theta_2)}$<br> This effectively takes $z_1$ and multiplies its radius by that of $z_2$ and rotates it by the angle of $z_2$. 
</p> <p>Now going back to the roots $z_k$ of $z$, their radius is $r^{\frac{1}{n}}$, so multiplying them by themselves gives radius $r$, and their angle is $\frac{\theta + 2\pi k}{n}$ so rotating it $n$ times by the same angle gives $\theta + 2\pi k$</p> <p>Does that make sense?</p> Answer 24: <p>For complex numbers, when you say to take the "root", you are referring to all solutions to the following problem:</p> <p>$$x^n=z$$</p> <p>where $x$ is the $n$th root of $z$. As a simple case, consider $n=2$ and $z=1$. Then,</p> <p>$$1^2=(-1)^2=1$$</p> <p>$$x=\begin{cases}+1\\-1\end{cases}$$</p> <p>Similar such happen for higher $n$, and quite a symetric splitting manner.</p> Answer 25: <p>The rule for multiplication of complex numbers is to add the arguments and multiply the length. That is, multiplying by a complex number corresponds to a rotation combined with a scaling operation. Thus, if you want to multiply a complex number $n$ times by itself and produce a given modulus $r$ and a given angle $\theta$, the modulus has to be $\sqrt[n] r$, but there are actually $n$ different angles such that rotating by them $n$ times produces a rotation of $\theta$ radians.</p> <p>But really, I'm not sure there's even anything to explain. If you have a bunch of things that you can multiply together, then an "$n$-th root" just means "something that when multiplied by itself $n$ times produces a given result". Why, intuitively, would there only be one of those?</p> Answer 26: <p>It may be easier to first rationalize that the complex $n^{th}$ <a href="https://en.wikipedia.org/wiki/Root_of_unity" rel="nofollow noreferrer">roots of unity</a> are the vertices of a regular $n$-gon inscribed in the unit circle.</p> <p>Then, let $\omega$ be a primitive $n^{th}$ root of unity so that $\omega^n=1$. If $z_0$ is any of the $n^{th}$ roots of $z$, it follows that $w^k z_0$ is also an $n^{th}$ root of $z$, since $(\omega^kz_0)^n=(\omega^{n})^kz_0^n = 1^k \cdot z = z$. 
Therefore the complete set of the $n$ $n^{th}$ roots of $z$ is $\{z_0, \omega z_0, \omega^2 z_0, \dots, \omega^{n-1}z_0\}$ which are the vertices of a regular $n$-gon centered on the origin and "anchored" at $z_0\,$. That $n$-gon is just the regular $n$-gon of the $n^{th}$ roots of unity $\{1,\omega,\omega^2,\dots,\omega^{n-1}\}$ scaled by $|z_0|=\sqrt[n]{|z|}$ and rotated by $\arg(z_0)\,$.</p> Answer 27: <p>I am not sure what you mean by understanding it intuitively..<br> Consider $z = r(\cos\theta + i\sin\theta)$ then the roots are $$ z_k = r^{\frac{1}{n}}(\cos\frac{\theta + 2\pi k}{n} + i\sin\frac{\theta + 2\pi k}{n}),\ \ k = 0, \ 1,\ldots,\ n-1 $$</p> <p>This always gives $n$ different roots. Maybe this is what you meant that you understood it mathematically..</p> <p>Geometrically, when you multiply $2$ complex numbers, $z_1=r_1e^{i\theta_1},\ z_2=r_2e^{i\theta_2}$, you get $z=r_1r_2e^{i(\theta_1+\theta_2)}$<br> This effectively takes $z_1$ and multiplies its radius by that of $z_2$ and rotates it by the angle of $z_2$. </p> <p>Now going back to the roots $z_k$ of $z$, their radius is $r^{\frac{1}{n}}$, so multiplying them by themselves gives radius $r$, and their angle is $\frac{\theta + 2\pi k}{n}$ so rotating it $n$ times by the same angle gives $\theta + 2\pi k$</p> <p>Does that make sense?</p> Answer 28: <p>For complex numbers, when you say to take the "root", you are referring to all solutions to the following problem:</p> <p>$$x^n=z$$</p> <p>where $x$ is the $n$th root of $z$. As a simple case, consider $n=2$ and $z=1$. Then,</p> <p>$$1^2=(-1)^2=1$$</p> <p>$$x=\begin{cases}+1\\-1\end{cases}$$</p> <p>Similar such happen for higher $n$, and quite a symetric splitting manner.</p> Answer 29: <p>The rule for multiplication of complex numbers is to add the arguments and multiply the length. That is, multiplying by a complex number corresponds to a rotation combined with a scaling operation. 
Thus, if you want to multiply a complex number $n$ times by itself and produce a given modulus $r$ and a given angle $\theta$, the modulus has to be $\sqrt[n] r$, but there are actually $n$ different angles such that rotating by them $n$ times produces a rotation of $\theta$ radians.</p> <p>But really, I'm not sure there's even anything to explain. If you have a bunch of things that you can multiply together, then an "$n$-th root" just means "something that when multiplied by itself $n$ times produces a given result". Why, intuitively, would there only be one of those?</p> Answer 30: <p>It may be easier to first rationalize that the complex $n^{th}$ <a href="https://en.wikipedia.org/wiki/Root_of_unity" rel="nofollow noreferrer">roots of unity</a> are the vertices of a regular $n$-gon inscribed in the unit circle.</p> <p>Then, let $\omega$ be a primitive $n^{th}$ root of unity so that $\omega^n=1$. If $z_0$ is any of the $n^{th}$ roots of $z$, it follows that $w^k z_0$ is also an $n^{th}$ root of $z$, since $(\omega^kz_0)^n=(\omega^{n})^kz_0^n = 1^k \cdot z = z$. Therefore the complete set of the $n$ $n^{th}$ roots of $z$ is $\{z_0, \omega z_0, \omega^2 z_0, \dots, \omega^{n-1}z_0\}$ which are the vertices of a regular $n$-gon centered on the origin and "anchored" at $z_0\,$. That $n$-gon is just the regular $n$-gon of the $n^{th}$ roots of unity $\{1,\omega,\omega^2,\dots,\omega^{n-1}\}$ scaled by $|z_0|=\sqrt[n]{|z|}$ and rotated by $\arg(z_0)\,$.</p> Answer 31: <p>I am not sure what you mean by understanding it intuitively..<br> Consider $z = r(\cos\theta + i\sin\theta)$ then the roots are $$ z_k = r^{\frac{1}{n}}(\cos\frac{\theta + 2\pi k}{n} + i\sin\frac{\theta + 2\pi k}{n}),\ \ k = 0, \ 1,\ldots,\ n-1 $$</p> <p>This always gives $n$ different roots. 
Maybe this is what you meant that you understood it mathematically..</p> <p>Geometrically, when you multiply $2$ complex numbers, $z_1=r_1e^{i\theta_1},\ z_2=r_2e^{i\theta_2}$, you get $z=r_1r_2e^{i(\theta_1+\theta_2)}$<br> This effectively takes $z_1$ and multiplies its radius by that of $z_2$ and rotates it by the angle of $z_2$. </p> <p>Now going back to the roots $z_k$ of $z$, their radius is $r^{\frac{1}{n}}$, so multiplying them by themselves gives radius $r$, and their angle is $\frac{\theta + 2\pi k}{n}$ so rotating it $n$ times by the same angle gives $\theta + 2\pi k$</p> <p>Does that make sense?</p> Answer 32: <p>For complex numbers, when you say to take the "root", you are referring to all solutions to the following problem:</p> <p>$$x^n=z$$</p> <p>where $x$ is the $n$th root of $z$. As a simple case, consider $n=2$ and $z=1$. Then,</p> <p>$$1^2=(-1)^2=1$$</p> <p>$$x=\begin{cases}+1\\-1\end{cases}$$</p> <p>Similar such happen for higher $n$, and quite a symetric splitting manner.</p> Answer 33: <p>The rule for multiplication of complex numbers is to add the arguments and multiply the length. That is, multiplying by a complex number corresponds to a rotation combined with a scaling operation. Thus, if you want to multiply a complex number $n$ times by itself and produce a given modulus $r$ and a given angle $\theta$, the modulus has to be $\sqrt[n] r$, but there are actually $n$ different angles such that rotating by them $n$ times produces a rotation of $\theta$ radians.</p> <p>But really, I'm not sure there's even anything to explain. If you have a bunch of things that you can multiply together, then an "$n$-th root" just means "something that when multiplied by itself $n$ times produces a given result". 
Why, intuitively, would there only be one of those?</p> Answer 34: <p>It may be easier to first rationalize that the complex $n^{th}$ <a href="https://en.wikipedia.org/wiki/Root_of_unity" rel="nofollow noreferrer">roots of unity</a> are the vertices of a regular $n$-gon inscribed in the unit circle.</p> <p>Then, let $\omega$ be a primitive $n^{th}$ root of unity so that $\omega^n=1$. If $z_0$ is any of the $n^{th}$ roots of $z$, it follows that $w^k z_0$ is also an $n^{th}$ root of $z$, since $(\omega^kz_0)^n=(\omega^{n})^kz_0^n = 1^k \cdot z = z$. Therefore the complete set of the $n$ $n^{th}$ roots of $z$ is $\{z_0, \omega z_0, \omega^2 z_0, \dots, \omega^{n-1}z_0\}$ which are the vertices of a regular $n$-gon centered on the origin and "anchored" at $z_0\,$. That $n$-gon is just the regular $n$-gon of the $n^{th}$ roots of unity $\{1,\omega,\omega^2,\dots,\omega^{n-1}\}$ scaled by $|z_0|=\sqrt[n]{|z|}$ and rotated by $\arg(z_0)\,$.</p> Answer 35: <p>I am not sure what you mean by understanding it intuitively..<br> Consider $z = r(\cos\theta + i\sin\theta)$ then the roots are $$ z_k = r^{\frac{1}{n}}(\cos\frac{\theta + 2\pi k}{n} + i\sin\frac{\theta + 2\pi k}{n}),\ \ k = 0, \ 1,\ldots,\ n-1 $$</p> <p>This always gives $n$ different roots. Maybe this is what you meant that you understood it mathematically..</p> <p>Geometrically, when you multiply $2$ complex numbers, $z_1=r_1e^{i\theta_1},\ z_2=r_2e^{i\theta_2}$, you get $z=r_1r_2e^{i(\theta_1+\theta_2)}$<br> This effectively takes $z_1$ and multiplies its radius by that of $z_2$ and rotates it by the angle of $z_2$. 
</p> <p>Now going back to the roots $z_k$ of $z$, their radius is $r^{\frac{1}{n}}$, so multiplying them by themselves gives radius $r$, and their angle is $\frac{\theta + 2\pi k}{n}$ so rotating it $n$ times by the same angle gives $\theta + 2\pi k$</p> <p>Does that make sense?</p> Answer 36: <p>For complex numbers, when you say to take the "root", you are referring to all solutions to the following problem:</p> <p>$$x^n=z$$</p> <p>where $x$ is the $n$th root of $z$. As a simple case, consider $n=2$ and $z=1$. Then,</p> <p>$$1^2=(-1)^2=1$$</p> <p>$$x=\begin{cases}+1\\-1\end{cases}$$</p> <p>Similar such happen for higher $n$, and quite a symetric splitting manner.</p> Answer 37: <p>The rule for multiplication of complex numbers is to add the arguments and multiply the length. That is, multiplying by a complex number corresponds to a rotation combined with a scaling operation. Thus, if you want to multiply a complex number $n$ times by itself and produce a given modulus $r$ and a given angle $\theta$, the modulus has to be $\sqrt[n] r$, but there are actually $n$ different angles such that rotating by them $n$ times produces a rotation of $\theta$ radians.</p> <p>But really, I'm not sure there's even anything to explain. If you have a bunch of things that you can multiply together, then an "$n$-th root" just means "something that when multiplied by itself $n$ times produces a given result". Why, intuitively, would there only be one of those?</p> Answer 38: <p>It may be easier to first rationalize that the complex $n^{th}$ <a href="https://en.wikipedia.org/wiki/Root_of_unity" rel="nofollow noreferrer">roots of unity</a> are the vertices of a regular $n$-gon inscribed in the unit circle.</p> <p>Then, let $\omega$ be a primitive $n^{th}$ root of unity so that $\omega^n=1$. If $z_0$ is any of the $n^{th}$ roots of $z$, it follows that $w^k z_0$ is also an $n^{th}$ root of $z$, since $(\omega^kz_0)^n=(\omega^{n})^kz_0^n = 1^k \cdot z = z$. 
Therefore the complete set of the $n$ $n^{th}$ roots of $z$ is $\{z_0, \omega z_0, \omega^2 z_0, \dots, \omega^{n-1}z_0\}$ which are the vertices of a regular $n$-gon centered on the origin and "anchored" at $z_0\,$. That $n$-gon is just the regular $n$-gon of the $n^{th}$ roots of unity $\{1,\omega,\omega^2,\dots,\omega^{n-1}\}$ scaled by $|z_0|=\sqrt[n]{|z|}$ and rotated by $\arg(z_0)\,$.</p> Answer 39: <p>I am not sure what you mean by understanding it intuitively..<br> Consider $z = r(\cos\theta + i\sin\theta)$ then the roots are $$ z_k = r^{\frac{1}{n}}(\cos\frac{\theta + 2\pi k}{n} + i\sin\frac{\theta + 2\pi k}{n}),\ \ k = 0, \ 1,\ldots,\ n-1 $$</p> <p>This always gives $n$ different roots. Maybe this is what you meant that you understood it mathematically..</p> <p>Geometrically, when you multiply $2$ complex numbers, $z_1=r_1e^{i\theta_1},\ z_2=r_2e^{i\theta_2}$, you get $z=r_1r_2e^{i(\theta_1+\theta_2)}$<br> This effectively takes $z_1$ and multiplies its radius by that of $z_2$ and rotates it by the angle of $z_2$. </p> <p>Now going back to the roots $z_k$ of $z$, their radius is $r^{\frac{1}{n}}$, so multiplying them by themselves gives radius $r$, and their angle is $\frac{\theta + 2\pi k}{n}$ so rotating it $n$ times by the same angle gives $\theta + 2\pi k$</p> <p>Does that make sense?</p> Answer 40: <p>For complex numbers, when you say to take the "root", you are referring to all solutions to the following problem:</p> <p>$$x^n=z$$</p> <p>where $x$ is the $n$th root of $z$. As a simple case, consider $n=2$ and $z=1$. Then,</p> <p>$$1^2=(-1)^2=1$$</p> <p>$$x=\begin{cases}+1\\-1\end{cases}$$</p> <p>Similar such happen for higher $n$, and quite a symetric splitting manner.</p> Answer 41: <p>The rule for multiplication of complex numbers is to add the arguments and multiply the length. That is, multiplying by a complex number corresponds to a rotation combined with a scaling operation. 
Thus, if you want to multiply a complex number $n$ times by itself and produce a given modulus $r$ and a given angle $\theta$, the modulus has to be $\sqrt[n] r$, but there are actually $n$ different angles such that rotating by them $n$ times produces a rotation of $\theta$ radians.</p> <p>But really, I'm not sure there's even anything to explain. If you have a bunch of things that you can multiply together, then an "$n$-th root" just means "something that when multiplied by itself $n$ times produces a given result". Why, intuitively, would there only be one of those?</p> Answer 42: <p>It may be easier to first rationalize that the complex $n^{th}$ <a href="https://en.wikipedia.org/wiki/Root_of_unity" rel="nofollow noreferrer">roots of unity</a> are the vertices of a regular $n$-gon inscribed in the unit circle.</p> <p>Then, let $\omega$ be a primitive $n^{th}$ root of unity so that $\omega^n=1$. If $z_0$ is any of the $n^{th}$ roots of $z$, it follows that $w^k z_0$ is also an $n^{th}$ root of $z$, since $(\omega^kz_0)^n=(\omega^{n})^kz_0^n = 1^k \cdot z = z$. Therefore the complete set of the $n$ $n^{th}$ roots of $z$ is $\{z_0, \omega z_0, \omega^2 z_0, \dots, \omega^{n-1}z_0\}$ which are the vertices of a regular $n$-gon centered on the origin and "anchored" at $z_0\,$. That $n$-gon is just the regular $n$-gon of the $n^{th}$ roots of unity $\{1,\omega,\omega^2,\dots,\omega^{n-1}\}$ scaled by $|z_0|=\sqrt[n]{|z|}$ and rotated by $\arg(z_0)\,$.</p> Answer 43: <p>I am not sure what you mean by understanding it intuitively..<br> Consider $z = r(\cos\theta + i\sin\theta)$ then the roots are $$ z_k = r^{\frac{1}{n}}(\cos\frac{\theta + 2\pi k}{n} + i\sin\frac{\theta + 2\pi k}{n}),\ \ k = 0, \ 1,\ldots,\ n-1 $$</p> <p>This always gives $n$ different roots. 
Maybe this is what you meant that you understood it mathematically..</p> <p>Geometrically, when you multiply $2$ complex numbers, $z_1=r_1e^{i\theta_1},\ z_2=r_2e^{i\theta_2}$, you get $z=r_1r_2e^{i(\theta_1+\theta_2)}$<br> This effectively takes $z_1$ and multiplies its radius by that of $z_2$ and rotates it by the angle of $z_2$. </p> <p>Now going back to the roots $z_k$ of $z$, their radius is $r^{\frac{1}{n}}$, so multiplying them by themselves gives radius $r$, and their angle is $\frac{\theta + 2\pi k}{n}$ so rotating it $n$ times by the same angle gives $\theta + 2\pi k$</p> <p>Does that make sense?</p> Answer 44: <p>For complex numbers, when you say to take the "root", you are referring to all solutions to the following problem:</p> <p>$$x^n=z$$</p> <p>where $x$ is the $n$th root of $z$. As a simple case, consider $n=2$ and $z=1$. Then,</p> <p>$$1^2=(-1)^2=1$$</p> <p>$$x=\begin{cases}+1\\-1\end{cases}$$</p> <p>Similar such happen for higher $n$, and quite a symetric splitting manner.</p> Answer 45: <p>The rule for multiplication of complex numbers is to add the arguments and multiply the length. That is, multiplying by a complex number corresponds to a rotation combined with a scaling operation. Thus, if you want to multiply a complex number $n$ times by itself and produce a given modulus $r$ and a given angle $\theta$, the modulus has to be $\sqrt[n] r$, but there are actually $n$ different angles such that rotating by them $n$ times produces a rotation of $\theta$ radians.</p> <p>But really, I'm not sure there's even anything to explain. If you have a bunch of things that you can multiply together, then an "$n$-th root" just means "something that when multiplied by itself $n$ times produces a given result". 
Why, intuitively, would there only be one of those?</p> Answer 46: <p>It may be easier to first rationalize that the complex $n^{th}$ <a href="https://en.wikipedia.org/wiki/Root_of_unity" rel="nofollow noreferrer">roots of unity</a> are the vertices of a regular $n$-gon inscribed in the unit circle.</p> <p>Then, let $\omega$ be a primitive $n^{th}$ root of unity so that $\omega^n=1$. If $z_0$ is any of the $n^{th}$ roots of $z$, it follows that $w^k z_0$ is also an $n^{th}$ root of $z$, since $(\omega^kz_0)^n=(\omega^{n})^kz_0^n = 1^k \cdot z = z$. Therefore the complete set of the $n$ $n^{th}$ roots of $z$ is $\{z_0, \omega z_0, \omega^2 z_0, \dots, \omega^{n-1}z_0\}$ which are the vertices of a regular $n$-gon centered on the origin and "anchored" at $z_0\,$. That $n$-gon is just the regular $n$-gon of the $n^{th}$ roots of unity $\{1,\omega,\omega^2,\dots,\omega^{n-1}\}$ scaled by $|z_0|=\sqrt[n]{|z|}$ and rotated by $\arg(z_0)\,$.</p> Answer 47: <p>I am not sure what you mean by understanding it intuitively..<br> Consider $z = r(\cos\theta + i\sin\theta)$ then the roots are $$ z_k = r^{\frac{1}{n}}(\cos\frac{\theta + 2\pi k}{n} + i\sin\frac{\theta + 2\pi k}{n}),\ \ k = 0, \ 1,\ldots,\ n-1 $$</p> <p>This always gives $n$ different roots. Maybe this is what you meant that you understood it mathematically..</p> <p>Geometrically, when you multiply $2$ complex numbers, $z_1=r_1e^{i\theta_1},\ z_2=r_2e^{i\theta_2}$, you get $z=r_1r_2e^{i(\theta_1+\theta_2)}$<br> This effectively takes $z_1$ and multiplies its radius by that of $z_2$ and rotates it by the angle of $z_2$. 
</p> <p>Now going back to the roots $z_k$ of $z$, their radius is $r^{\frac{1}{n}}$, so multiplying them by themselves gives radius $r$, and their angle is $\frac{\theta + 2\pi k}{n}$ so rotating it $n$ times by the same angle gives $\theta + 2\pi k$</p> <p>Does that make sense?</p> Answer 48: <p>For complex numbers, when you say to take the "root", you are referring to all solutions to the following problem:</p> <p>$$x^n=z$$</p> <p>where $x$ is the $n$th root of $z$. As a simple case, consider $n=2$ and $z=1$. Then,</p> <p>$$1^2=(-1)^2=1$$</p> <p>$$x=\begin{cases}+1\\-1\end{cases}$$</p> <p>Similar splittings happen for higher $n$, in quite a symmetric manner.</p> Answer 49: <p>The rule for multiplication of complex numbers is to add the arguments and multiply the length. That is, multiplying by a complex number corresponds to a rotation combined with a scaling operation. Thus, if you want to multiply a complex number $n$ times by itself and produce a given modulus $r$ and a given angle $\theta$, the modulus has to be $\sqrt[n] r$, but there are actually $n$ different angles such that rotating by them $n$ times produces a rotation of $\theta$ radians.</p> <p>But really, I'm not sure there's even anything to explain. If you have a bunch of things that you can multiply together, then an "$n$-th root" just means "something that when multiplied by itself $n$ times produces a given result". Why, intuitively, would there only be one of those?</p>
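The polar-form root formula in the answers above is easy to check numerically. A minimal Python sketch (the helper name `nth_roots` is mine, not from the answers):

```python
import cmath

def nth_roots(z, n):
    """The n distinct n-th roots of a nonzero complex z, from the polar
    formula z_k = r**(1/n) * (cos((theta + 2*pi*k)/n) + i*sin((theta + 2*pi*k)/n))."""
    r, theta = cmath.polar(z)
    return [cmath.rect(r ** (1.0 / n), (theta + 2 * cmath.pi * k) / n)
            for k in range(n)]

roots = nth_roots(8, 3)
# Each root cubes back to 8 ...
assert all(abs(w ** 3 - 8) < 1e-9 for w in roots)
# ... and consecutive roots differ by the primitive root of unity omega,
# so the roots form a regular n-gon of radius |8|**(1/3) = 2.
omega = cmath.exp(2j * cmath.pi / 3)
assert all(abs(roots[k] - roots[0] * omega ** k) < 1e-9 for k in range(3))
```

Nothing here is specific to $z=8$: for any nonzero $z$ the same two checks pass, which is exactly the algebraic content of the "rotated and scaled $n$-gon" picture.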
What do parallel lines look like on the pseudosphere?
2,087,009
7
https://math.stackexchange.com/questions/2087009/what-do-parallel-lines-look-like-on-the-pseudosphere
<p>I know that because the pseudosphere is a model for hyperbolic geometry, there are infinitely many lines that can be drawn parallel to another given one through an external point.</p> <p>However I'm struggling to figure out a pictorial representation for this.</p>
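The answers below analyze the pseudosphere's geodesics via Clairaut's relation $r\sin\psi = \text{const}$. As a hedged numerical sketch (assuming the common tractrix parametrization of the pseudosphere, which is not stated in the question; all function names are mine), one can integrate the geodesic equations and watch Clairaut's constant stay fixed while the geodesic turns around tangentially at a parallel circle:

```python
import math

# Sketch only: the pseudosphere taken, as is common, as the surface of
# revolution of the tractrix,
#   (u, v) -> (sech(u)*cos(v), sech(u)*sin(v), u - tanh(u)),  u > 0,
# whose first fundamental form is E du^2 + G dv^2 with
#   E(u) = tanh(u)^2  and  G(u) = sech(u)^2  (G = squared radius of the parallel).

def sech(u): return 1.0 / math.cosh(u)
def E(u):    return math.tanh(u) ** 2
def G(u):    return sech(u) ** 2
def dE(u):   return 2.0 * math.tanh(u) * sech(u) ** 2
def dG(u):   return -2.0 * sech(u) ** 2 * math.tanh(u)

def geodesic_rhs(state):
    # Geodesic equations of the metric E(u) du^2 + G(u) dv^2, first-order form.
    u, v, du, dv = state
    ddu = -dE(u) / (2 * E(u)) * du * du + dG(u) / (2 * E(u)) * dv * dv
    ddv = -dG(u) / G(u) * du * dv
    return (du, dv, ddu, ddv)

def rk4_step(state, h):
    def nudge(s, k, f): return tuple(a + f * b for a, b in zip(s, k))
    k1 = geodesic_rhs(state)
    k2 = geodesic_rhs(nudge(state, k1, h / 2))
    k3 = geodesic_rhs(nudge(state, k2, h / 2))
    k4 = geodesic_rhs(nudge(state, k3, h))
    return tuple(s + h / 6 * (a + 2 * b + 2 * c + d)
                 for s, a, b, c, d in zip(state, k1, k2, k3, k4))

# Launch a unit-speed geodesic at u = 1, at 45 degrees to the meridian.
u0, psi = 1.0, math.pi / 4
state = (u0, 0.0, math.cos(psi) / math.sqrt(E(u0)), math.sin(psi) / math.sqrt(G(u0)))
c0 = G(u0) * state[3]        # Clairaut's constant  r^2 dv/ds = r sin(psi)
u_max = u0
for _ in range(2000):        # arc length 2 in steps of 1e-3
    state = rk4_step(state, 1e-3)
    u_max = max(u_max, state[0])

assert abs(G(state[0]) * state[3] - c0) < 1e-6   # r sin(psi) stays constant
assert abs(sech(u_max) - c0) < 1e-3              # turns around tangentially at
                                                 # the parallel of radius c0
```

The second assertion is the behavior described in the top answer: away from the meridians, a geodesic climbs toward the thin end only until the parallel circle whose radius equals Clairaut's constant, which it meets tangentially before turning back.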
[ { "answer_id": 2087031, "score": 3, "text": "<p>As an application of <a href=\"https://en.wikipedia.org/wiki/Clairaut&#39;s_relation\" rel=\"nofollow noreferrer\">Clairaut's Relation</a>, you can work out quite explicitly what the geodesics on the pseudosphere are. The most useful form can be found in formula (6) on p. 258 of doCarmo's <em>Differential Geometry of Curves and Surfaces</em> or in exercise 23 on p. 78 of my <a href=\"http://faculty.franklin.uga.edu/shifrin/sites/faculty.franklin.uga.edu.shifrin/files/ShifrinDiffGeo.pdf\" rel=\"nofollow noreferrer\">differential geometry text</a>. In particular, other than the meridians (the copies of the tractrix), we have curves that turn around the pseudosphere from the fat end to a parallel circle, hitting it tangentially. Choosing constants carefully, you can see that infinitely many of these will pass through a given point and miss a fixed meridian.</p>\n" }, { "answer_id": 2087029, "score": 2, "text": "<p>Any of the \"vertical\" lines in <a href=\"https://www.google.com/imgres?imgurl=http://mathworld.wolfram.com/images/eps-gif/Pseudosphere_700.gif&amp;imgrefurl=http://mathworld.wolfram.com/Pseudosphere.html&amp;h=258&amp;w=131&amp;tbnid=KWv2jGWuzFR3HM:&amp;vet=1&amp;tbnh=160&amp;tbnw=80&amp;docid=EvpETpSEg0FAcM&amp;client=safari&amp;usg=__-SdjCcoQ_yfTn5XSatRdbC27IOw=&amp;sa=X&amp;ved=0ahUKEwim9qvDlK_RAhWH2YMKHSS4DwgQ9QEIIDAA\" rel=\"nofollow noreferrer\">this image of the pseudosphere</a> are parallel to each other.</p>\n" }, { "answer_id": 3851713, "score": 1, "text": "<p>For pictorial representation: Two red asymptotic lines through P parallel to a blue line R from among infinitely many parallels are shown here again from <a href=\"https://en.wikipedia.org/wiki/Hyperbolic_geometry\" rel=\"nofollow noreferrer\">Wiki Hyp. Geometry</a></p>\n<p>The situation in the plane is given here from the Poincaré disk model. 
From out of many parallel lines a pair among them passing through P is marked red and either of them is parallel to the blue line R. In the Poincaré model such red line pairs parallel to a common parallel (not drawn) can also be viewed.</p>\n<p>In 3 space also I have shown a pair of red asymptotic lines through P parallel to the axis of symmetry R of a pseudosphere R.</p>\n<p>Beltrami had shown and validated asymptotic lines of constant Gauss curvature <span class=\"math-container\">$K$</span> pseudosphere as geodesic parallel lines (infinitely many parallel) pairs in three dimensional hyperbolic geometry. The parallel lines constitute a Chebyshev Net defined by Sine-Gordon equation with angle <span class=\"math-container\">$2 \\psi$</span> between pair of shown asymptotic lines:</p>\n<p><span class=\"math-container\">$$ \\dfrac{d^{2}\\psi}{ds^2} = K \\sin \\psi \\cos \\psi $$</span></p>\n<p><a href=\"https://i.stack.imgur.com/Iruei.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/Iruei.png\" alt=\"Hyp Geodesics in R3\" /></a></p>\n<p>These two <em>unique hyperbolic parallels</em> can be drawn on the central Beltrami and cone type Pseudospheres, constant negative K Kuen surfaces, Breather surfaces also.</p>\n<p>Imho geodesic lines defined by Clairaut's Constant <span class=\"math-container\">$(r \\sin \\psi)$</span> on surfaces of revolution <em>do not</em> constitute a unique parallel line as in the case of hyperbolic geometry.</p>\n<p>They are a <em>set of many</em> geodesics in <em>elliptic</em> geometry classified by Bernhard Riemann.</p>\n" }, { "answer_id": 3851707, "score": 0, "text": "<p>Sketching hyperbolic geometry geodesic parallel lines as I understand them.</p>\n<p>Two red asymptotic lines through P parallel to a blue line R from among infinitely many parallels are shown here again from <a href=\"https://en.wikipedia.org/wiki/Hyperbolic_geometry\" rel=\"nofollow noreferrer\">Wiki Hyp. 
Geometry</a></p>\n<p>The situation in the plane can be seen here in the Poincaré disk model. From out of many parallel lines a pair among them passing through P is marked red and either of them is parallel to to the blue line R. In the plane also Poincaré model also such red line pairs parallel to a common parallel (not drawn) can be viewed.</p>\n<p>In 3 space also I have shown a pair of red asymptotic lines through P parallel to the axis of symmetry R of a pseudosphere R.</p>\n<p>Beltrami had shown and validated asymptotic lines of constant Gauss curvature <span class=\"math-container\">$K$</span> pseudosphere as geodesic parallel lines (infinitely many parallel) pairs in three dimensional hyperbolic geometry. The parallel lines constitute a Chebyshev Net defined by Sine-Gordon equation with angle <span class=\"math-container\">$2 \\psi$</span> between pair of lines\n<span class=\"math-container\">$$ \\psi^{''} = K \\sin \\psi \\cos \\psi $$</span></p>\n<p><a href=\"https://i.stack.imgur.com/Iruei.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/Iruei.png\" alt=\"Hyp Geodesics in R^3\" /></a></p>\n<p>Imho geodesic lines defined by Clairaut's Constant <span class=\"math-container\">$( r \\sin \\psi$</span>) on surfaces of revolution <em>do not</em> constitute parallel line sets in hyperbolic geometry. They are geodesics in the other <em>elliptic</em> geometry categorized by Riemann.</p>\n" }, { "answer_id": 2087031, "score": 3, "text": "<p>As an application of <a href=\"https://en.wikipedia.org/wiki/Clairaut&#39;s_relation\" rel=\"nofollow noreferrer\">Clairaut's Relation</a>, you can work out quite explicitly what the geodesics on the pseudosphere are. The most useful form can be found in formula (6) on p. 258 of doCarmo's <em>Differential Geometry of Curves and Surfaces</em> or in exercise 23 on p. 
78 of my <a href=\"http://faculty.franklin.uga.edu/shifrin/sites/faculty.franklin.uga.edu.shifrin/files/ShifrinDiffGeo.pdf\" rel=\"nofollow noreferrer\">differential geometry text</a>. In particular, other than the meridians (the copies of the tractrix), we have curves that turn around the pseudosphere from the fat end to a parallel circle, hitting it tangentially. Choosing constants carefully, you can see that infinitely many of these will pass through a given point and miss a fixed meridian.</p>\n" }, { "answer_id": 2087029, "score": 2, "text": "<p>Any of the \"vertical\" lines in <a href=\"https://www.google.com/imgres?imgurl=http://mathworld.wolfram.com/images/eps-gif/Pseudosphere_700.gif&amp;imgrefurl=http://mathworld.wolfram.com/Pseudosphere.html&amp;h=258&amp;w=131&amp;tbnid=KWv2jGWuzFR3HM:&amp;vet=1&amp;tbnh=160&amp;tbnw=80&amp;docid=EvpETpSEg0FAcM&amp;client=safari&amp;usg=__-SdjCcoQ_yfTn5XSatRdbC27IOw=&amp;sa=X&amp;ved=0ahUKEwim9qvDlK_RAhWH2YMKHSS4DwgQ9QEIIDAA\" rel=\"nofollow noreferrer\">this image of the pseudosphere</a> are parallel to each other.</p>\n" }, { "answer_id": 3851713, "score": 1, "text": "<p>For pictorial representation: Two red asymptotic lines through P parallel to a blue line R from among infinitely many parallels are shown here again from <a href=\"https://en.wikipedia.org/wiki/Hyperbolic_geometry\" rel=\"nofollow noreferrer\">Wiki Hyp. Geometry</a></p>\n<p>The situation in the plane is given here from the Poincaré disk model. From out of many parallel lines a pair among them passing through P is marked red and either of them is parallel to to the blue line R. 
In the plane also Poincaré model also such red line pairs parallel to a common parallel (not drawn) can be viewed.</p>\n<p>In 3 space also I have shown a pair of red asymptotic lines through P parallel to the axis of symmetry R of a pseudosphere R.</p>\n<p>Beltrami had shown and validated asymptotic lines of constant Gauss curvature <span class=\"math-container\">$K$</span> pseudosphere as geodesic parallel lines (infinitely many parallel) pairs in three dimensional hyperbolic geometry. The parallel lines constitute a Chebyshev Net defined by Sine-Gordon equation with angle <span class=\"math-container\">$2 \\psi$</span> between pair of shown asymptotic lines:</p>\n<p><span class=\"math-container\">$$ \\dfrac{d^{2}\\psi}{ds^2} = K \\sin \\psi \\cos \\psi $$</span></p>\n<p><a href=\"https://i.stack.imgur.com/Iruei.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/Iruei.png\" alt=\"Hyp Geodesics in R3\" /></a></p>\n<p>These two <em>unique hyperbolic parallels</em> can be drawn on the central Beltrami and cone type Pseudospheres, constant negative K Kuen surfaces, Breather surfaces also.</p>\n<p>Imho geodesic lines defined by Clairaut's Constant <span class=\"math-container\">$( r \\sin \\psi$</span>) on surfaces of revolution <em>do not</em> constitute unique parallel line as in case of hyperbolic geometry.</p>\n<p>They are a <em>set of many</em> geodesics in <em>elliptic</em> geometry classified by Bernhard Riemann.</p>\n" }, { "answer_id": 3851707, "score": 0, "text": "<p>Sketching hyperbolic geometry geodesic parallel lines as I understand them.</p>\n<p>Two red asymptotic lines through P parallel to a blue line R from among infinitely many parallels are shown here again from <a href=\"https://en.wikipedia.org/wiki/Hyperbolic_geometry\" rel=\"nofollow noreferrer\">Wiki Hyp. Geometry</a></p>\n<p>The situation in the plane can be seen here in the Poincaré disk model. 
From out of many parallel lines a pair among them passing through P is marked red and either of them is parallel to to the blue line R. In the plane also Poincaré model also such red line pairs parallel to a common parallel (not drawn) can be viewed.</p>\n<p>In 3 space also I have shown a pair of red asymptotic lines through P parallel to the axis of symmetry R of a pseudosphere R.</p>\n<p>Beltrami had shown and validated asymptotic lines of constant Gauss curvature <span class=\"math-container\">$K$</span> pseudosphere as geodesic parallel lines (infinitely many parallel) pairs in three dimensional hyperbolic geometry. The parallel lines constitute a Chebyshev Net defined by Sine-Gordon equation with angle <span class=\"math-container\">$2 \\psi$</span> between pair of lines\n<span class=\"math-container\">$$ \\psi^{''} = K \\sin \\psi \\cos \\psi $$</span></p>\n<p><a href=\"https://i.stack.imgur.com/Iruei.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/Iruei.png\" alt=\"Hyp Geodesics in R^3\" /></a></p>\n<p>Imho geodesic lines defined by Clairaut's Constant <span class=\"math-container\">$( r \\sin \\psi$</span>) on surfaces of revolution <em>do not</em> constitute parallel line sets in hyperbolic geometry. They are geodesics in the other <em>elliptic</em> geometry categorized by Riemann.</p>\n" }, { "answer_id": 2087031, "score": 3, "text": "<p>As an application of <a href=\"https://en.wikipedia.org/wiki/Clairaut&#39;s_relation\" rel=\"nofollow noreferrer\">Clairaut's Relation</a>, you can work out quite explicitly what the geodesics on the pseudosphere are. The most useful form can be found in formula (6) on p. 258 of doCarmo's <em>Differential Geometry of Curves and Surfaces</em> or in exercise 23 on p. 78 of my <a href=\"http://faculty.franklin.uga.edu/shifrin/sites/faculty.franklin.uga.edu.shifrin/files/ShifrinDiffGeo.pdf\" rel=\"nofollow noreferrer\">differential geometry text</a>. 
In particular, other than the meridians (the copies of the tractrix), we have curves that turn around the pseudosphere from the fat end to a parallel circle, hitting it tangentially. Choosing constants carefully, you can see that infinitely many of these will pass through a given point and miss a fixed meridian.</p>\n" }, { "answer_id": 2087029, "score": 2, "text": "<p>Any of the \"vertical\" lines in <a href=\"https://www.google.com/imgres?imgurl=http://mathworld.wolfram.com/images/eps-gif/Pseudosphere_700.gif&amp;imgrefurl=http://mathworld.wolfram.com/Pseudosphere.html&amp;h=258&amp;w=131&amp;tbnid=KWv2jGWuzFR3HM:&amp;vet=1&amp;tbnh=160&amp;tbnw=80&amp;docid=EvpETpSEg0FAcM&amp;client=safari&amp;usg=__-SdjCcoQ_yfTn5XSatRdbC27IOw=&amp;sa=X&amp;ved=0ahUKEwim9qvDlK_RAhWH2YMKHSS4DwgQ9QEIIDAA\" rel=\"nofollow noreferrer\">this image of the pseudosphere</a> are parallel to each other.</p>\n" }, { "answer_id": 3851713, "score": 1, "text": "<p>For pictorial representation: Two red asymptotic lines through P parallel to a blue line R from among infinitely many parallels are shown here again from <a href=\"https://en.wikipedia.org/wiki/Hyperbolic_geometry\" rel=\"nofollow noreferrer\">Wiki Hyp. Geometry</a></p>\n<p>The situation in the plane is given here from the Poincaré disk model. From out of many parallel lines a pair among them passing through P is marked red and either of them is parallel to to the blue line R. In the plane also Poincaré model also such red line pairs parallel to a common parallel (not drawn) can be viewed.</p>\n<p>In 3 space also I have shown a pair of red asymptotic lines through P parallel to the axis of symmetry R of a pseudosphere R.</p>\n<p>Beltrami had shown and validated asymptotic lines of constant Gauss curvature <span class=\"math-container\">$K$</span> pseudosphere as geodesic parallel lines (infinitely many parallel) pairs in three dimensional hyperbolic geometry. 
The parallel lines constitute a Chebyshev Net defined by Sine-Gordon equation with angle <span class=\"math-container\">$2 \\psi$</span> between pair of shown asymptotic lines:</p>\n<p><span class=\"math-container\">$$ \\dfrac{d^{2}\\psi}{ds^2} = K \\sin \\psi \\cos \\psi $$</span></p>\n<p><a href=\"https://i.stack.imgur.com/Iruei.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/Iruei.png\" alt=\"Hyp Geodesics in R3\" /></a></p>\n<p>These two <em>unique hyperbolic parallels</em> can be drawn on the central Beltrami and cone type Pseudospheres, constant negative K Kuen surfaces, Breather surfaces also.</p>\n<p>Imho geodesic lines defined by Clairaut's Constant <span class=\"math-container\">$( r \\sin \\psi$</span>) on surfaces of revolution <em>do not</em> constitute unique parallel line as in case of hyperbolic geometry.</p>\n<p>They are a <em>set of many</em> geodesics in <em>elliptic</em> geometry classified by Bernhard Riemann.</p>\n" }, { "answer_id": 3851707, "score": 0, "text": "<p>Sketching hyperbolic geometry geodesic parallel lines as I understand them.</p>\n<p>Two red asymptotic lines through P parallel to a blue line R from among infinitely many parallels are shown here again from <a href=\"https://en.wikipedia.org/wiki/Hyperbolic_geometry\" rel=\"nofollow noreferrer\">Wiki Hyp. Geometry</a></p>\n<p>The situation in the plane can be seen here in the Poincaré disk model. From out of many parallel lines a pair among them passing through P is marked red and either of them is parallel to to the blue line R. 
In the plane also Poincaré model also such red line pairs parallel to a common parallel (not drawn) can be viewed.</p>\n<p>In 3 space also I have shown a pair of red asymptotic lines through P parallel to the axis of symmetry R of a pseudosphere R.</p>\n<p>Beltrami had shown and validated asymptotic lines of constant Gauss curvature <span class=\"math-container\">$K$</span> pseudosphere as geodesic parallel lines (infinitely many parallel) pairs in three dimensional hyperbolic geometry. The parallel lines constitute a Chebyshev Net defined by Sine-Gordon equation with angle <span class=\"math-container\">$2 \\psi$</span> between pair of lines\n<span class=\"math-container\">$$ \\psi^{''} = K \\sin \\psi \\cos \\psi $$</span></p>\n<p><a href=\"https://i.stack.imgur.com/Iruei.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/Iruei.png\" alt=\"Hyp Geodesics in R^3\" /></a></p>\n<p>Imho geodesic lines defined by Clairaut's Constant <span class=\"math-container\">$( r \\sin \\psi$</span>) on surfaces of revolution <em>do not</em> constitute parallel line sets in hyperbolic geometry. They are geodesics in the other <em>elliptic</em> geometry categorized by Riemann.</p>\n" }, { "answer_id": 2087031, "score": 3, "text": "<p>As an application of <a href=\"https://en.wikipedia.org/wiki/Clairaut&#39;s_relation\" rel=\"nofollow noreferrer\">Clairaut's Relation</a>, you can work out quite explicitly what the geodesics on the pseudosphere are. The most useful form can be found in formula (6) on p. 258 of doCarmo's <em>Differential Geometry of Curves and Surfaces</em> or in exercise 23 on p. 78 of my <a href=\"http://faculty.franklin.uga.edu/shifrin/sites/faculty.franklin.uga.edu.shifrin/files/ShifrinDiffGeo.pdf\" rel=\"nofollow noreferrer\">differential geometry text</a>. In particular, other than the meridians (the copies of the tractrix), we have curves that turn around the pseudosphere from the fat end to a parallel circle, hitting it tangentially. 
Choosing constants carefully, you can see that infinitely many of these will pass through a given point and miss a fixed meridian.</p>\n" }, { "answer_id": 2087029, "score": 2, "text": "<p>Any of the \"vertical\" lines in <a href=\"https://www.google.com/imgres?imgurl=http://mathworld.wolfram.com/images/eps-gif/Pseudosphere_700.gif&amp;imgrefurl=http://mathworld.wolfram.com/Pseudosphere.html&amp;h=258&amp;w=131&amp;tbnid=KWv2jGWuzFR3HM:&amp;vet=1&amp;tbnh=160&amp;tbnw=80&amp;docid=EvpETpSEg0FAcM&amp;client=safari&amp;usg=__-SdjCcoQ_yfTn5XSatRdbC27IOw=&amp;sa=X&amp;ved=0ahUKEwim9qvDlK_RAhWH2YMKHSS4DwgQ9QEIIDAA\" rel=\"nofollow noreferrer\">this image of the pseudosphere</a> are parallel to each other.</p>\n" }, { "answer_id": 3851713, "score": 1, "text": "<p>For pictorial representation: Two red asymptotic lines through P parallel to a blue line R from among infinitely many parallels are shown here again from <a href=\"https://en.wikipedia.org/wiki/Hyperbolic_geometry\" rel=\"nofollow noreferrer\">Wiki Hyp. Geometry</a></p>\n<p>The situation in the plane is given here from the Poincaré disk model. From out of many parallel lines a pair among them passing through P is marked red and either of them is parallel to to the blue line R. In the plane also Poincaré model also such red line pairs parallel to a common parallel (not drawn) can be viewed.</p>\n<p>In 3 space also I have shown a pair of red asymptotic lines through P parallel to the axis of symmetry R of a pseudosphere R.</p>\n<p>Beltrami had shown and validated asymptotic lines of constant Gauss curvature <span class=\"math-container\">$K$</span> pseudosphere as geodesic parallel lines (infinitely many parallel) pairs in three dimensional hyperbolic geometry. 
The parallel lines constitute a Chebyshev Net defined by Sine-Gordon equation with angle <span class=\"math-container\">$2 \\psi$</span> between pair of shown asymptotic lines:</p>\n<p><span class=\"math-container\">$$ \\dfrac{d^{2}\\psi}{ds^2} = K \\sin \\psi \\cos \\psi $$</span></p>\n<p><a href=\"https://i.stack.imgur.com/Iruei.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/Iruei.png\" alt=\"Hyp Geodesics in R3\" /></a></p>\n<p>These two <em>unique hyperbolic parallels</em> can be drawn on the central Beltrami and cone type Pseudospheres, constant negative K Kuen surfaces, Breather surfaces also.</p>\n<p>Imho geodesic lines defined by Clairaut's Constant <span class=\"math-container\">$( r \\sin \\psi$</span>) on surfaces of revolution <em>do not</em> constitute unique parallel line as in case of hyperbolic geometry.</p>\n<p>They are a <em>set of many</em> geodesics in <em>elliptic</em> geometry classified by Bernhard Riemann.</p>\n" }, { "answer_id": 3851707, "score": 0, "text": "<p>Sketching hyperbolic geometry geodesic parallel lines as I understand them.</p>\n<p>Two red asymptotic lines through P parallel to a blue line R from among infinitely many parallels are shown here again from <a href=\"https://en.wikipedia.org/wiki/Hyperbolic_geometry\" rel=\"nofollow noreferrer\">Wiki Hyp. Geometry</a></p>\n<p>The situation in the plane can be seen here in the Poincaré disk model. From out of many parallel lines a pair among them passing through P is marked red and either of them is parallel to to the blue line R. 
In the plane also Poincaré model also such red line pairs parallel to a common parallel (not drawn) can be viewed.</p>\n<p>In 3 space also I have shown a pair of red asymptotic lines through P parallel to the axis of symmetry R of a pseudosphere R.</p>\n<p>Beltrami had shown and validated asymptotic lines of constant Gauss curvature <span class=\"math-container\">$K$</span> pseudosphere as geodesic parallel lines (infinitely many parallel) pairs in three dimensional hyperbolic geometry. The parallel lines constitute a Chebyshev Net defined by Sine-Gordon equation with angle <span class=\"math-container\">$2 \\psi$</span> between pair of lines\n<span class=\"math-container\">$$ \\psi^{''} = K \\sin \\psi \\cos \\psi $$</span></p>\n<p><a href=\"https://i.stack.imgur.com/Iruei.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/Iruei.png\" alt=\"Hyp Geodesics in R^3\" /></a></p>\n<p>Imho geodesic lines defined by Clairaut's Constant <span class=\"math-container\">$( r \\sin \\psi$</span>) on surfaces of revolution <em>do not</em> constitute parallel line sets in hyperbolic geometry. They are geodesics in the other <em>elliptic</em> geometry categorized by Riemann.</p>\n" }, { "answer_id": 2087031, "score": 3, "text": "<p>As an application of <a href=\"https://en.wikipedia.org/wiki/Clairaut&#39;s_relation\" rel=\"nofollow noreferrer\">Clairaut's Relation</a>, you can work out quite explicitly what the geodesics on the pseudosphere are. The most useful form can be found in formula (6) on p. 258 of doCarmo's <em>Differential Geometry of Curves and Surfaces</em> or in exercise 23 on p. 78 of my <a href=\"http://faculty.franklin.uga.edu/shifrin/sites/faculty.franklin.uga.edu.shifrin/files/ShifrinDiffGeo.pdf\" rel=\"nofollow noreferrer\">differential geometry text</a>. In particular, other than the meridians (the copies of the tractrix), we have curves that turn around the pseudosphere from the fat end to a parallel circle, hitting it tangentially. 
Choosing constants carefully, you can see that infinitely many of these will pass through a given point and miss a fixed meridian.</p>\n" }, { "answer_id": 2087029, "score": 2, "text": "<p>Any of the \"vertical\" lines in <a href=\"https://www.google.com/imgres?imgurl=http://mathworld.wolfram.com/images/eps-gif/Pseudosphere_700.gif&amp;imgrefurl=http://mathworld.wolfram.com/Pseudosphere.html&amp;h=258&amp;w=131&amp;tbnid=KWv2jGWuzFR3HM:&amp;vet=1&amp;tbnh=160&amp;tbnw=80&amp;docid=EvpETpSEg0FAcM&amp;client=safari&amp;usg=__-SdjCcoQ_yfTn5XSatRdbC27IOw=&amp;sa=X&amp;ved=0ahUKEwim9qvDlK_RAhWH2YMKHSS4DwgQ9QEIIDAA\" rel=\"nofollow noreferrer\">this image of the pseudosphere</a> are parallel to each other.</p>\n" }, { "answer_id": 3851713, "score": 1, "text": "<p>For pictorial representation: Two red asymptotic lines through P parallel to a blue line R from among infinitely many parallels are shown here again from <a href=\"https://en.wikipedia.org/wiki/Hyperbolic_geometry\" rel=\"nofollow noreferrer\">Wiki Hyp. Geometry</a></p>\n<p>The situation in the plane is given here from the Poincaré disk model. From out of many parallel lines a pair among them passing through P is marked red and either of them is parallel to to the blue line R. In the plane also Poincaré model also such red line pairs parallel to a common parallel (not drawn) can be viewed.</p>\n<p>In 3 space also I have shown a pair of red asymptotic lines through P parallel to the axis of symmetry R of a pseudosphere R.</p>\n<p>Beltrami had shown and validated asymptotic lines of constant Gauss curvature <span class=\"math-container\">$K$</span> pseudosphere as geodesic parallel lines (infinitely many parallel) pairs in three dimensional hyperbolic geometry. 
The parallel lines constitute a Chebyshev Net defined by Sine-Gordon equation with angle <span class=\"math-container\">$2 \\psi$</span> between pair of shown asymptotic lines:</p>\n<p><span class=\"math-container\">$$ \\dfrac{d^{2}\\psi}{ds^2} = K \\sin \\psi \\cos \\psi $$</span></p>\n<p><a href=\"https://i.stack.imgur.com/Iruei.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/Iruei.png\" alt=\"Hyp Geodesics in R3\" /></a></p>\n<p>These two <em>unique hyperbolic parallels</em> can be drawn on the central Beltrami and cone type Pseudospheres, constant negative K Kuen surfaces, Breather surfaces also.</p>\n<p>Imho geodesic lines defined by Clairaut's Constant <span class=\"math-container\">$( r \\sin \\psi$</span>) on surfaces of revolution <em>do not</em> constitute unique parallel line as in case of hyperbolic geometry.</p>\n<p>They are a <em>set of many</em> geodesics in <em>elliptic</em> geometry classified by Bernhard Riemann.</p>\n" }, { "answer_id": 3851707, "score": 0, "text": "<p>Sketching hyperbolic geometry geodesic parallel lines as I understand them.</p>\n<p>Two red asymptotic lines through P parallel to a blue line R from among infinitely many parallels are shown here again from <a href=\"https://en.wikipedia.org/wiki/Hyperbolic_geometry\" rel=\"nofollow noreferrer\">Wiki Hyp. Geometry</a></p>\n<p>The situation in the plane can be seen here in the Poincaré disk model. From out of many parallel lines a pair among them passing through P is marked red and either of them is parallel to to the blue line R. 
In the Poincaré disk model such red line pairs parallel to a common parallel (not drawn) can also be seen.</p>\n<p>In 3-space I have likewise shown a pair of red asymptotic lines through P parallel to the axis of symmetry R of a pseudosphere.</p>\n<p>Beltrami showed that the asymptotic lines of the constant Gauss curvature <span class=\"math-container\">$K$</span> pseudosphere realize pairs of geodesic parallel lines (with infinitely many parallels) of three-dimensional hyperbolic geometry. The parallel lines constitute a Chebyshev net defined by the Sine-Gordon equation, with angle <span class=\"math-container\">$2 \psi$</span> between the pair of lines:\n<span class=\"math-container\">$$ \dfrac{d^{2}\psi}{ds^{2}} = K \sin \psi \cos \psi $$</span></p>\n<p><a href=\"https://i.stack.imgur.com/Iruei.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/Iruei.png\" alt=\"Hyp Geodesics in R^3\" /></a></p>\n<p>Imho geodesic lines defined by Clairaut's constant <span class=\"math-container\">$(r \sin \psi)$</span> on surfaces of revolution <em>do not</em> constitute parallel line sets as in hyperbolic geometry. They are geodesics of the other, <em>elliptic</em>, geometry categorized by Riemann.</p>\n" }, { "answer_id": 2087031, "score": 3, "text": "<p>As an application of <a href=\"https://en.wikipedia.org/wiki/Clairaut&#39;s_relation\" rel=\"nofollow noreferrer\">Clairaut's Relation</a>, you can work out quite explicitly what the geodesics on the pseudosphere are. The most useful form can be found in formula (6) on p. 258 of do Carmo's <em>Differential Geometry of Curves and Surfaces</em> or in exercise 23 on p. 78 of my <a href=\"http://faculty.franklin.uga.edu/shifrin/sites/faculty.franklin.uga.edu.shifrin/files/ShifrinDiffGeo.pdf\" rel=\"nofollow noreferrer\">differential geometry text</a>. In particular, other than the meridians (the copies of the tractrix), we have curves that turn around the pseudosphere from the fat end to a parallel circle, hitting it tangentially. 
Choosing constants carefully, you can see that infinitely many of these will pass through a given point and miss a fixed meridian.</p>\n" }, { "answer_id": 2087029, "score": 2, "text": "<p>Any of the \"vertical\" lines in <a href=\"https://www.google.com/imgres?imgurl=http://mathworld.wolfram.com/images/eps-gif/Pseudosphere_700.gif&amp;imgrefurl=http://mathworld.wolfram.com/Pseudosphere.html&amp;h=258&amp;w=131&amp;tbnid=KWv2jGWuzFR3HM:&amp;vet=1&amp;tbnh=160&amp;tbnw=80&amp;docid=EvpETpSEg0FAcM&amp;client=safari&amp;usg=__-SdjCcoQ_yfTn5XSatRdbC27IOw=&amp;sa=X&amp;ved=0ahUKEwim9qvDlK_RAhWH2YMKHSS4DwgQ9QEIIDAA\" rel=\"nofollow noreferrer\">this image of the pseudosphere</a> are parallel to each other.</p>\n" } ]
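The Clairaut's-relation answer can be checked numerically. The sketch below is an illustration added here, not part of any answer: it parametrizes the pseudosphere as the surface of revolution of the tractrix, so the parallel radius is r(u) = sech u and (by a standard computation for this parametrization) the first fundamental form is E = tanh²u, G = sech²u. It integrates the full second-order geodesic ODEs with RK4 and verifies that the Clairaut quantity r·sin ψ = G·v′ (for a unit-speed geodesic) stays constant along the curve. All function and variable names, step sizes, and initial data are choices made for this sketch.

```python
import math

# Pseudosphere (tractroid) as a surface of revolution, u > 0:
#   radius r(u) = sech(u), metric ds^2 = tanh(u)^2 du^2 + sech(u)^2 dv^2.
# Clairaut's relation: r * sin(psi) is constant along any geodesic,
# where psi is the angle the geodesic makes with the meridian.

def sech(u):
    return 1.0 / math.cosh(u)

def rhs(state):
    """Geodesic ODEs for E = tanh^2 u, G = sech^2 u.
    Christoffel symbols: Gamma^u_uu = Gamma^u_vv = sech^2 u / tanh u,
    Gamma^v_uv = -tanh u."""
    u, v, du, dv = state
    t = math.tanh(u)
    s2 = sech(u) ** 2
    ddu = -(s2 / t) * (du * du + dv * dv)
    ddv = 2.0 * t * du * dv
    return (du, dv, ddu, ddv)

def rk4_step(state, h):
    k1 = rhs(state)
    k2 = rhs(tuple(x + 0.5 * h * k for x, k in zip(state, k1)))
    k3 = rhs(tuple(x + 0.5 * h * k for x, k in zip(state, k2)))
    k4 = rhs(tuple(x + h * k for x, k in zip(state, k3)))
    return tuple(x + (h / 6.0) * (a + 2 * b + 2 * c + d)
                 for x, a, b, c, d in zip(state, k1, k2, k3, k4))

def clairaut(state):
    u, _, _, dv = state
    return sech(u) ** 2 * dv      # = r sin(psi) for a unit-speed geodesic

# Unit-speed initial data at u0, making angle psi0 with the meridian:
u0, psi0 = 1.0, 0.7
du0 = math.cos(psi0) / math.tanh(u0)   # cos psi = sqrt(E) u'
dv0 = math.sin(psi0) * math.cosh(u0)   # sin psi = sqrt(G) v'
state = (u0, 0.0, du0, dv0)
c0 = clairaut(state)

h, n = 1e-3, 2000
drift = 0.0
for _ in range(n):
    state = rk4_step(state, h)
    drift = max(drift, abs(clairaut(state) - c0))

print("Clairaut constant:", c0, "max drift:", drift)
```

With this initial angle the geodesic climbs the horn, touches the parallel circle r = c0 tangentially, and turns back toward the rim — the behavior described in the Clairaut answer; the printed drift is pure integration error.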
Question: What do parallel lines look like on the pseudosphere? Answer 1: <p>As an application of <a href="https://en.wikipedia.org/wiki/Clairaut&#39;s_relation" rel="nofollow noreferrer">Clairaut's Relation</a>, you can work out quite explicitly what the geodesics on the pseudosphere are. The most useful form can be found in formula (6) on p. 258 of doCarmo's <em>Differential Geometry of Curves and Surfaces</em> or in exercise 23 on p. 78 of my <a href="http://faculty.franklin.uga.edu/shifrin/sites/faculty.franklin.uga.edu.shifrin/files/ShifrinDiffGeo.pdf" rel="nofollow noreferrer">differential geometry text</a>. In particular, other than the meridians (the copies of the tractrix), we have curves that turn around the pseudosphere from the fat end to a parallel circle, hitting it tangentially. Choosing constants carefully, you can see that infinitely many of these will pass through a given point and miss a fixed meridian.</p> Answer 2: <p>Any of the "vertical" lines in <a href="https://www.google.com/imgres?imgurl=http://mathworld.wolfram.com/images/eps-gif/Pseudosphere_700.gif&amp;imgrefurl=http://mathworld.wolfram.com/Pseudosphere.html&amp;h=258&amp;w=131&amp;tbnid=KWv2jGWuzFR3HM:&amp;vet=1&amp;tbnh=160&amp;tbnw=80&amp;docid=EvpETpSEg0FAcM&amp;client=safari&amp;usg=__-SdjCcoQ_yfTn5XSatRdbC27IOw=&amp;sa=X&amp;ved=0ahUKEwim9qvDlK_RAhWH2YMKHSS4DwgQ9QEIIDAA" rel="nofollow noreferrer">this image of the pseudosphere</a> are parallel to each other.</p> Answer 3: <p>For a pictorial representation: two red asymptotic lines through P, parallel to a blue line R from among infinitely many parallels, are shown here again from <a href="https://en.wikipedia.org/wiki/Hyperbolic_geometry" rel="nofollow noreferrer">Wiki Hyp. Geometry</a>.</p>

<p>The situation in the plane is shown in the Poincaré disk model. Out of the many parallel lines, a pair passing through P is marked red, and either of them is parallel to the blue line R. 
In the Poincaré model of the plane, such red line pairs parallel to a common parallel (not drawn) can also be seen.</p>

<p>In 3-space I have likewise shown a pair of red asymptotic lines through P parallel to the axis of symmetry R of a pseudosphere.</p>

<p>Beltrami showed that the asymptotic lines of the constant-Gauss-curvature <span class="math-container">$K$</span> pseudosphere realize the geodesic parallel lines (with infinitely many parallels) of hyperbolic geometry in three-dimensional space. The parallel lines constitute a Chebyshev net defined by the sine-Gordon equation, with angle <span class="math-container">$2 \psi$</span> between the pair of asymptotic lines shown:</p>

<p><span class="math-container">$$ \dfrac{d^{2}\psi}{ds^2} = K \sin \psi \cos \psi $$</span></p>

<p><a href="https://i.stack.imgur.com/Iruei.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Iruei.png" alt="Hyp Geodesics in R3" /></a></p>

<p>These two <em>unique hyperbolic parallels</em> can also be drawn on the central (Beltrami) and cone-type pseudospheres, on constant-negative-K Kuen surfaces, and on Breather surfaces.</p>

<p>Imho geodesic lines defined by Clairaut's constant (<span class="math-container">$r \sin \psi$</span>) on surfaces of revolution <em>do not</em> constitute unique parallels as in hyperbolic geometry.</p>

<p>They are a <em>set of many</em> geodesics in the <em>elliptic</em> geometry classified by Bernhard Riemann.</p> Answer 4: <p>Sketching hyperbolic geometry geodesic parallel lines as I understand them.</p>

<p>Two red asymptotic lines through P, parallel to a blue line R from among infinitely many parallels, are shown here again from <a href="https://en.wikipedia.org/wiki/Hyperbolic_geometry" rel="nofollow noreferrer">Wiki Hyp. Geometry</a>.</p>

<p>The situation in the plane can be seen in the Poincaré disk model. Out of the many parallel lines, a pair passing through P is marked red, and either of them is parallel to the blue line R. 
In the plane also Poincaré model also such red line pairs parallel to a common parallel (not drawn) can be viewed.</p> <p>In 3 space also I have shown a pair of red asymptotic lines through P parallel to the axis of symmetry R of a pseudosphere R.</p> <p>Beltrami had shown and validated asymptotic lines of constant Gauss curvature <span class="math-container">$K$</span> pseudosphere as geodesic parallel lines (infinitely many parallel) pairs in three dimensional hyperbolic geometry. The parallel lines constitute a Chebyshev Net defined by Sine-Gordon equation with angle <span class="math-container">$2 \psi$</span> between pair of lines <span class="math-container">$$ \psi^{''} = K \sin \psi \cos \psi $$</span></p> <p><a href="https://i.stack.imgur.com/Iruei.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Iruei.png" alt="Hyp Geodesics in R^3" /></a></p> <p>Imho geodesic lines defined by Clairaut's Constant <span class="math-container">$( r \sin \psi$</span>) on surfaces of revolution <em>do not</em> constitute parallel line sets in hyperbolic geometry. They are geodesics in the other <em>elliptic</em> geometry categorized by Riemann.</p> Answer 5: <p>As an application of <a href="https://en.wikipedia.org/wiki/Clairaut&#39;s_relation" rel="nofollow noreferrer">Clairaut's Relation</a>, you can work out quite explicitly what the geodesics on the pseudosphere are. The most useful form can be found in formula (6) on p. 258 of doCarmo's <em>Differential Geometry of Curves and Surfaces</em> or in exercise 23 on p. 78 of my <a href="http://faculty.franklin.uga.edu/shifrin/sites/faculty.franklin.uga.edu.shifrin/files/ShifrinDiffGeo.pdf" rel="nofollow noreferrer">differential geometry text</a>. In particular, other than the meridians (the copies of the tractrix), we have curves that turn around the pseudosphere from the fat end to a parallel circle, hitting it tangentially. 
Choosing constants carefully, you can see that infinitely many of these will pass through a given point and miss a fixed meridian.</p> Answer 6: <p>Any of the "vertical" lines in <a href="https://www.google.com/imgres?imgurl=http://mathworld.wolfram.com/images/eps-gif/Pseudosphere_700.gif&amp;imgrefurl=http://mathworld.wolfram.com/Pseudosphere.html&amp;h=258&amp;w=131&amp;tbnid=KWv2jGWuzFR3HM:&amp;vet=1&amp;tbnh=160&amp;tbnw=80&amp;docid=EvpETpSEg0FAcM&amp;client=safari&amp;usg=__-SdjCcoQ_yfTn5XSatRdbC27IOw=&amp;sa=X&amp;ved=0ahUKEwim9qvDlK_RAhWH2YMKHSS4DwgQ9QEIIDAA" rel="nofollow noreferrer">this image of the pseudosphere</a> are parallel to each other.</p> Answer 7: <p>For pictorial representation: Two red asymptotic lines through P parallel to a blue line R from among infinitely many parallels are shown here again from <a href="https://en.wikipedia.org/wiki/Hyperbolic_geometry" rel="nofollow noreferrer">Wiki Hyp. Geometry</a></p> <p>The situation in the plane is given here from the Poincaré disk model. From out of many parallel lines a pair among them passing through P is marked red and either of them is parallel to to the blue line R. In the plane also Poincaré model also such red line pairs parallel to a common parallel (not drawn) can be viewed.</p> <p>In 3 space also I have shown a pair of red asymptotic lines through P parallel to the axis of symmetry R of a pseudosphere R.</p> <p>Beltrami had shown and validated asymptotic lines of constant Gauss curvature <span class="math-container">$K$</span> pseudosphere as geodesic parallel lines (infinitely many parallel) pairs in three dimensional hyperbolic geometry. 
The parallel lines constitute a Chebyshev Net defined by Sine-Gordon equation with angle <span class="math-container">$2 \psi$</span> between pair of shown asymptotic lines:</p> <p><span class="math-container">$$ \dfrac{d^{2}\psi}{ds^2} = K \sin \psi \cos \psi $$</span></p> <p><a href="https://i.stack.imgur.com/Iruei.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Iruei.png" alt="Hyp Geodesics in R3" /></a></p> <p>These two <em>unique hyperbolic parallels</em> can be drawn on the central Beltrami and cone type Pseudospheres, constant negative K Kuen surfaces, Breather surfaces also.</p> <p>Imho geodesic lines defined by Clairaut's Constant <span class="math-container">$( r \sin \psi$</span>) on surfaces of revolution <em>do not</em> constitute unique parallel line as in case of hyperbolic geometry.</p> <p>They are a <em>set of many</em> geodesics in <em>elliptic</em> geometry classified by Bernhard Riemann.</p> Answer 8: <p>Sketching hyperbolic geometry geodesic parallel lines as I understand them.</p> <p>Two red asymptotic lines through P parallel to a blue line R from among infinitely many parallels are shown here again from <a href="https://en.wikipedia.org/wiki/Hyperbolic_geometry" rel="nofollow noreferrer">Wiki Hyp. Geometry</a></p> <p>The situation in the plane can be seen here in the Poincaré disk model. From out of many parallel lines a pair among them passing through P is marked red and either of them is parallel to to the blue line R. In the plane also Poincaré model also such red line pairs parallel to a common parallel (not drawn) can be viewed.</p> <p>In 3 space also I have shown a pair of red asymptotic lines through P parallel to the axis of symmetry R of a pseudosphere R.</p> <p>Beltrami had shown and validated asymptotic lines of constant Gauss curvature <span class="math-container">$K$</span> pseudosphere as geodesic parallel lines (infinitely many parallel) pairs in three dimensional hyperbolic geometry. 
The parallel lines constitute a Chebyshev Net defined by Sine-Gordon equation with angle <span class="math-container">$2 \psi$</span> between pair of lines <span class="math-container">$$ \psi^{''} = K \sin \psi \cos \psi $$</span></p> <p><a href="https://i.stack.imgur.com/Iruei.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Iruei.png" alt="Hyp Geodesics in R^3" /></a></p> <p>Imho geodesic lines defined by Clairaut's Constant <span class="math-container">$( r \sin \psi$</span>) on surfaces of revolution <em>do not</em> constitute parallel line sets in hyperbolic geometry. They are geodesics in the other <em>elliptic</em> geometry categorized by Riemann.</p> Answer 9: <p>As an application of <a href="https://en.wikipedia.org/wiki/Clairaut&#39;s_relation" rel="nofollow noreferrer">Clairaut's Relation</a>, you can work out quite explicitly what the geodesics on the pseudosphere are. The most useful form can be found in formula (6) on p. 258 of doCarmo's <em>Differential Geometry of Curves and Surfaces</em> or in exercise 23 on p. 78 of my <a href="http://faculty.franklin.uga.edu/shifrin/sites/faculty.franklin.uga.edu.shifrin/files/ShifrinDiffGeo.pdf" rel="nofollow noreferrer">differential geometry text</a>. In particular, other than the meridians (the copies of the tractrix), we have curves that turn around the pseudosphere from the fat end to a parallel circle, hitting it tangentially. 
Choosing constants carefully, you can see that infinitely many of these will pass through a given point and miss a fixed meridian.</p> Answer 10: <p>Any of the "vertical" lines in <a href="https://www.google.com/imgres?imgurl=http://mathworld.wolfram.com/images/eps-gif/Pseudosphere_700.gif&amp;imgrefurl=http://mathworld.wolfram.com/Pseudosphere.html&amp;h=258&amp;w=131&amp;tbnid=KWv2jGWuzFR3HM:&amp;vet=1&amp;tbnh=160&amp;tbnw=80&amp;docid=EvpETpSEg0FAcM&amp;client=safari&amp;usg=__-SdjCcoQ_yfTn5XSatRdbC27IOw=&amp;sa=X&amp;ved=0ahUKEwim9qvDlK_RAhWH2YMKHSS4DwgQ9QEIIDAA" rel="nofollow noreferrer">this image of the pseudosphere</a> are parallel to each other.</p> Answer 11: <p>For pictorial representation: Two red asymptotic lines through P parallel to a blue line R from among infinitely many parallels are shown here again from <a href="https://en.wikipedia.org/wiki/Hyperbolic_geometry" rel="nofollow noreferrer">Wiki Hyp. Geometry</a></p> <p>The situation in the plane is given here from the Poincaré disk model. From out of many parallel lines a pair among them passing through P is marked red and either of them is parallel to to the blue line R. In the plane also Poincaré model also such red line pairs parallel to a common parallel (not drawn) can be viewed.</p> <p>In 3 space also I have shown a pair of red asymptotic lines through P parallel to the axis of symmetry R of a pseudosphere R.</p> <p>Beltrami had shown and validated asymptotic lines of constant Gauss curvature <span class="math-container">$K$</span> pseudosphere as geodesic parallel lines (infinitely many parallel) pairs in three dimensional hyperbolic geometry. 
The parallel lines constitute a Chebyshev Net defined by the Sine-Gordon equation, with angle <span class="math-container">$2 \psi$</span> between the pair of asymptotic lines shown:</p> <p><span class="math-container">$$ \dfrac{d^{2}\psi}{ds^2} = K \sin \psi \cos \psi $$</span></p> <p><a href="https://i.stack.imgur.com/Iruei.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Iruei.png" alt="Hyp Geodesics in R3" /></a></p> <p>These two <em>unique hyperbolic parallels</em> can be drawn on the central (Beltrami) and cone-type pseudospheres, and also on Kuen surfaces and Breather surfaces of constant negative <span class="math-container">$K$</span>.</p> <p>Imho geodesic lines defined by Clairaut's Constant <span class="math-container">$(r \sin \psi)$</span> on surfaces of revolution <em>do not</em> constitute unique parallels as in hyperbolic geometry.</p> <p>They are a <em>set of many</em> geodesics in <em>elliptic</em> geometry, in Bernhard Riemann's classification.</p> Answer 12: <p>Sketching hyperbolic geometry geodesic parallel lines as I understand them.</p> <p>Two red asymptotic lines through P, parallel to a blue line R from among infinitely many parallels, are shown here, taken from <a href="https://en.wikipedia.org/wiki/Hyperbolic_geometry" rel="nofollow noreferrer">Wiki Hyp. Geometry</a>.</p> <p>The situation in the plane can be seen in the Poincaré disk model: out of the many lines parallel to the blue line R, a pair passing through P is marked red, and either of them is parallel to R. In the Poincaré disk model such red line pairs parallel to a common parallel (not drawn) can also be seen.</p> <p>In 3-space I have likewise shown a pair of red asymptotic lines through P parallel to the axis of symmetry R of a pseudosphere.</p> <p>Beltrami showed that the asymptotic lines of the pseudosphere of constant Gauss curvature <span class="math-container">$K$</span> realize geodesic parallel lines (infinitely many parallels through a point) in pairs, modeling three-dimensional hyperbolic geometry. 
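</p> <p><em>Added note on the Chebyshev-net equation, not part of the original answers (sign conventions vary with the choice of orientation):</em> on a surface of constant curvature <span class="math-container">$K \lt 0$</span>, the angle <span class="math-container">$\omega$</span> between the asymptotic directions, written in asymptotic coordinates <span class="math-container">$(u,v)$</span>, satisfies the Sine-Gordon equation <span class="math-container">$$ \omega_{uv} = -K \sin \omega . $$</span> Writing <span class="math-container">$\omega = 2\psi$</span> and looking for solutions depending on the single variable <span class="math-container">$s = u + v$</span> gives <span class="math-container">$2\psi'' = -K \sin 2\psi = -2K \sin\psi\cos\psi$</span>, i.e. an ODE of the form quoted in these answers, <span class="math-container">$$ \dfrac{d^{2}\psi}{ds^{2}} = -K \sin \psi \cos \psi , $$</span> up to the sign convention chosen for <span class="math-container">$K$</span>.</p> <p>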
The parallel lines constitute a Chebyshev Net defined by the Sine-Gordon equation, with angle <span class="math-container">$2 \psi$</span> between each pair of lines: <span class="math-container">$$ \psi'' = K \sin \psi \cos \psi $$</span></p> <p><a href="https://i.stack.imgur.com/Iruei.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Iruei.png" alt="Hyp Geodesics in R^3" /></a></p> <p>Imho geodesic lines defined by Clairaut's Constant <span class="math-container">$(r \sin \psi)$</span> on surfaces of revolution <em>do not</em> constitute parallel line sets in hyperbolic geometry. They are geodesics of the other, <em>elliptic</em>, geometry in Riemann's classification.</p> Answer 39: <p>Beltrami had shown and validated asymptotic lines of constant Gauss curvature <span class="math-container">$K$</span> pseudosphere as geodesic parallel lines (infinitely many parallel) pairs in three dimensional hyperbolic geometry. 
The parallel lines constitute a Chebyshev Net defined by Sine-Gordon equation with angle <span class="math-container">$2 \psi$</span> between pair of shown asymptotic lines:</p> <p><span class="math-container">$$ \dfrac{d^{2}\psi}{ds^2} = K \sin \psi \cos \psi $$</span></p> <p><a href="https://i.stack.imgur.com/Iruei.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Iruei.png" alt="Hyp Geodesics in R3" /></a></p> <p>These two <em>unique hyperbolic parallels</em> can be drawn on the central Beltrami and cone type Pseudospheres, constant negative K Kuen surfaces, Breather surfaces also.</p> <p>Imho geodesic lines defined by Clairaut's Constant <span class="math-container">$( r \sin \psi$</span>) on surfaces of revolution <em>do not</em> constitute unique parallel line as in case of hyperbolic geometry.</p> <p>They are a <em>set of many</em> geodesics in <em>elliptic</em> geometry classified by Bernhard Riemann.</p> Answer 40: <p>Sketching hyperbolic geometry geodesic parallel lines as I understand them.</p> <p>Two red asymptotic lines through P parallel to a blue line R from among infinitely many parallels are shown here again from <a href="https://en.wikipedia.org/wiki/Hyperbolic_geometry" rel="nofollow noreferrer">Wiki Hyp. Geometry</a></p> <p>The situation in the plane can be seen here in the Poincaré disk model. From out of many parallel lines a pair among them passing through P is marked red and either of them is parallel to to the blue line R. In the plane also Poincaré model also such red line pairs parallel to a common parallel (not drawn) can be viewed.</p> <p>In 3 space also I have shown a pair of red asymptotic lines through P parallel to the axis of symmetry R of a pseudosphere R.</p> <p>Beltrami had shown and validated asymptotic lines of constant Gauss curvature <span class="math-container">$K$</span> pseudosphere as geodesic parallel lines (infinitely many parallel) pairs in three dimensional hyperbolic geometry. 
The parallel lines constitute a Chebyshev Net defined by Sine-Gordon equation with angle <span class="math-container">$2 \psi$</span> between pair of lines <span class="math-container">$$ \psi^{''} = K \sin \psi \cos \psi $$</span></p> <p><a href="https://i.stack.imgur.com/Iruei.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Iruei.png" alt="Hyp Geodesics in R^3" /></a></p> <p>Imho geodesic lines defined by Clairaut's Constant <span class="math-container">$( r \sin \psi$</span>) on surfaces of revolution <em>do not</em> constitute parallel line sets in hyperbolic geometry. They are geodesics in the other <em>elliptic</em> geometry categorized by Riemann.</p> Answer 41: <p>As an application of <a href="https://en.wikipedia.org/wiki/Clairaut&#39;s_relation" rel="nofollow noreferrer">Clairaut's Relation</a>, you can work out quite explicitly what the geodesics on the pseudosphere are. The most useful form can be found in formula (6) on p. 258 of doCarmo's <em>Differential Geometry of Curves and Surfaces</em> or in exercise 23 on p. 78 of my <a href="http://faculty.franklin.uga.edu/shifrin/sites/faculty.franklin.uga.edu.shifrin/files/ShifrinDiffGeo.pdf" rel="nofollow noreferrer">differential geometry text</a>. In particular, other than the meridians (the copies of the tractrix), we have curves that turn around the pseudosphere from the fat end to a parallel circle, hitting it tangentially. 
Choosing constants carefully, you can see that infinitely many of these will pass through a given point and miss a fixed meridian.</p> Answer 42: <p>Any of the "vertical" lines in <a href="https://www.google.com/imgres?imgurl=http://mathworld.wolfram.com/images/eps-gif/Pseudosphere_700.gif&amp;imgrefurl=http://mathworld.wolfram.com/Pseudosphere.html&amp;h=258&amp;w=131&amp;tbnid=KWv2jGWuzFR3HM:&amp;vet=1&amp;tbnh=160&amp;tbnw=80&amp;docid=EvpETpSEg0FAcM&amp;client=safari&amp;usg=__-SdjCcoQ_yfTn5XSatRdbC27IOw=&amp;sa=X&amp;ved=0ahUKEwim9qvDlK_RAhWH2YMKHSS4DwgQ9QEIIDAA" rel="nofollow noreferrer">this image of the pseudosphere</a> are parallel to each other.</p> Answer 43: <p>For pictorial representation: Two red asymptotic lines through P parallel to a blue line R from among infinitely many parallels are shown here again from <a href="https://en.wikipedia.org/wiki/Hyperbolic_geometry" rel="nofollow noreferrer">Wiki Hyp. Geometry</a></p> <p>The situation in the plane is given here from the Poincaré disk model. From out of many parallel lines a pair among them passing through P is marked red and either of them is parallel to to the blue line R. In the plane also Poincaré model also such red line pairs parallel to a common parallel (not drawn) can be viewed.</p> <p>In 3 space also I have shown a pair of red asymptotic lines through P parallel to the axis of symmetry R of a pseudosphere R.</p> <p>Beltrami had shown and validated asymptotic lines of constant Gauss curvature <span class="math-container">$K$</span> pseudosphere as geodesic parallel lines (infinitely many parallel) pairs in three dimensional hyperbolic geometry. 
The parallel lines constitute a Chebyshev Net defined by Sine-Gordon equation with angle <span class="math-container">$2 \psi$</span> between pair of shown asymptotic lines:</p> <p><span class="math-container">$$ \dfrac{d^{2}\psi}{ds^2} = K \sin \psi \cos \psi $$</span></p> <p><a href="https://i.stack.imgur.com/Iruei.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Iruei.png" alt="Hyp Geodesics in R3" /></a></p> <p>These two <em>unique hyperbolic parallels</em> can be drawn on the central Beltrami and cone type Pseudospheres, constant negative K Kuen surfaces, Breather surfaces also.</p> <p>Imho geodesic lines defined by Clairaut's Constant <span class="math-container">$( r \sin \psi$</span>) on surfaces of revolution <em>do not</em> constitute unique parallel line as in case of hyperbolic geometry.</p> <p>They are a <em>set of many</em> geodesics in <em>elliptic</em> geometry classified by Bernhard Riemann.</p> Answer 44: <p>Sketching hyperbolic geometry geodesic parallel lines as I understand them.</p> <p>Two red asymptotic lines through P parallel to a blue line R from among infinitely many parallels are shown here again from <a href="https://en.wikipedia.org/wiki/Hyperbolic_geometry" rel="nofollow noreferrer">Wiki Hyp. Geometry</a></p> <p>The situation in the plane can be seen here in the Poincaré disk model. From out of many parallel lines a pair among them passing through P is marked red and either of them is parallel to to the blue line R. In the plane also Poincaré model also such red line pairs parallel to a common parallel (not drawn) can be viewed.</p> <p>In 3 space also I have shown a pair of red asymptotic lines through P parallel to the axis of symmetry R of a pseudosphere R.</p> <p>Beltrami had shown and validated asymptotic lines of constant Gauss curvature <span class="math-container">$K$</span> pseudosphere as geodesic parallel lines (infinitely many parallel) pairs in three dimensional hyperbolic geometry. 
The parallel lines constitute a Chebyshev Net defined by Sine-Gordon equation with angle <span class="math-container">$2 \psi$</span> between pair of lines <span class="math-container">$$ \psi^{''} = K \sin \psi \cos \psi $$</span></p> <p><a href="https://i.stack.imgur.com/Iruei.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Iruei.png" alt="Hyp Geodesics in R^3" /></a></p> <p>Imho geodesic lines defined by Clairaut's Constant <span class="math-container">$( r \sin \psi$</span>) on surfaces of revolution <em>do not</em> constitute parallel line sets in hyperbolic geometry. They are geodesics in the other <em>elliptic</em> geometry categorized by Riemann.</p> Answer 45: <p>As an application of <a href="https://en.wikipedia.org/wiki/Clairaut&#39;s_relation" rel="nofollow noreferrer">Clairaut's Relation</a>, you can work out quite explicitly what the geodesics on the pseudosphere are. The most useful form can be found in formula (6) on p. 258 of doCarmo's <em>Differential Geometry of Curves and Surfaces</em> or in exercise 23 on p. 78 of my <a href="http://faculty.franklin.uga.edu/shifrin/sites/faculty.franklin.uga.edu.shifrin/files/ShifrinDiffGeo.pdf" rel="nofollow noreferrer">differential geometry text</a>. In particular, other than the meridians (the copies of the tractrix), we have curves that turn around the pseudosphere from the fat end to a parallel circle, hitting it tangentially. 
Choosing constants carefully, you can see that infinitely many of these will pass through a given point and miss a fixed meridian.</p> Answer 46: <p>Any of the "vertical" lines in <a href="https://www.google.com/imgres?imgurl=http://mathworld.wolfram.com/images/eps-gif/Pseudosphere_700.gif&amp;imgrefurl=http://mathworld.wolfram.com/Pseudosphere.html&amp;h=258&amp;w=131&amp;tbnid=KWv2jGWuzFR3HM:&amp;vet=1&amp;tbnh=160&amp;tbnw=80&amp;docid=EvpETpSEg0FAcM&amp;client=safari&amp;usg=__-SdjCcoQ_yfTn5XSatRdbC27IOw=&amp;sa=X&amp;ved=0ahUKEwim9qvDlK_RAhWH2YMKHSS4DwgQ9QEIIDAA" rel="nofollow noreferrer">this image of the pseudosphere</a> are parallel to each other.</p> Answer 47: <p>For pictorial representation: Two red asymptotic lines through P parallel to a blue line R from among infinitely many parallels are shown here again from <a href="https://en.wikipedia.org/wiki/Hyperbolic_geometry" rel="nofollow noreferrer">Wiki Hyp. Geometry</a></p> <p>The situation in the plane is given here from the Poincaré disk model. From out of many parallel lines a pair among them passing through P is marked red and either of them is parallel to to the blue line R. In the plane also Poincaré model also such red line pairs parallel to a common parallel (not drawn) can be viewed.</p> <p>In 3 space also I have shown a pair of red asymptotic lines through P parallel to the axis of symmetry R of a pseudosphere R.</p> <p>Beltrami had shown and validated asymptotic lines of constant Gauss curvature <span class="math-container">$K$</span> pseudosphere as geodesic parallel lines (infinitely many parallel) pairs in three dimensional hyperbolic geometry. 
The parallel lines constitute a Chebyshev Net defined by Sine-Gordon equation with angle <span class="math-container">$2 \psi$</span> between pair of shown asymptotic lines:</p> <p><span class="math-container">$$ \dfrac{d^{2}\psi}{ds^2} = K \sin \psi \cos \psi $$</span></p> <p><a href="https://i.stack.imgur.com/Iruei.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Iruei.png" alt="Hyp Geodesics in R3" /></a></p> <p>These two <em>unique hyperbolic parallels</em> can be drawn on the central Beltrami and cone type Pseudospheres, constant negative K Kuen surfaces, Breather surfaces also.</p> <p>Imho geodesic lines defined by Clairaut's Constant <span class="math-container">$( r \sin \psi$</span>) on surfaces of revolution <em>do not</em> constitute unique parallel line as in case of hyperbolic geometry.</p> <p>They are a <em>set of many</em> geodesics in <em>elliptic</em> geometry classified by Bernhard Riemann.</p> Answer 48: <p>Sketching hyperbolic geometry geodesic parallel lines as I understand them.</p> <p>Two red asymptotic lines through P parallel to a blue line R from among infinitely many parallels are shown here again from <a href="https://en.wikipedia.org/wiki/Hyperbolic_geometry" rel="nofollow noreferrer">Wiki Hyp. Geometry</a></p> <p>The situation in the plane can be seen here in the Poincaré disk model. From out of many parallel lines a pair among them passing through P is marked red and either of them is parallel to to the blue line R. In the plane also Poincaré model also such red line pairs parallel to a common parallel (not drawn) can be viewed.</p> <p>In 3 space also I have shown a pair of red asymptotic lines through P parallel to the axis of symmetry R of a pseudosphere R.</p> <p>Beltrami had shown and validated asymptotic lines of constant Gauss curvature <span class="math-container">$K$</span> pseudosphere as geodesic parallel lines (infinitely many parallel) pairs in three dimensional hyperbolic geometry. 
The parallel lines constitute a Chebyshev Net defined by Sine-Gordon equation with angle <span class="math-container">$2 \psi$</span> between pair of lines <span class="math-container">$$ \psi^{''} = K \sin \psi \cos \psi $$</span></p> <p><a href="https://i.stack.imgur.com/Iruei.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Iruei.png" alt="Hyp Geodesics in R^3" /></a></p> <p>Imho geodesic lines defined by Clairaut's Constant <span class="math-container">$( r \sin \psi$</span>) on surfaces of revolution <em>do not</em> constitute parallel line sets in hyperbolic geometry. They are geodesics in the other <em>elliptic</em> geometry categorized by Riemann.</p> Answer 49: <p>As an application of <a href="https://en.wikipedia.org/wiki/Clairaut&#39;s_relation" rel="nofollow noreferrer">Clairaut's Relation</a>, you can work out quite explicitly what the geodesics on the pseudosphere are. The most useful form can be found in formula (6) on p. 258 of doCarmo's <em>Differential Geometry of Curves and Surfaces</em> or in exercise 23 on p. 78 of my <a href="http://faculty.franklin.uga.edu/shifrin/sites/faculty.franklin.uga.edu.shifrin/files/ShifrinDiffGeo.pdf" rel="nofollow noreferrer">differential geometry text</a>. In particular, other than the meridians (the copies of the tractrix), we have curves that turn around the pseudosphere from the fat end to a parallel circle, hitting it tangentially. 
Choosing constants carefully, you can see that infinitely many of these will pass through a given point and miss a fixed meridian.</p> Answer 50: <p>Any of the "vertical" lines in <a href="https://www.google.com/imgres?imgurl=http://mathworld.wolfram.com/images/eps-gif/Pseudosphere_700.gif&amp;imgrefurl=http://mathworld.wolfram.com/Pseudosphere.html&amp;h=258&amp;w=131&amp;tbnid=KWv2jGWuzFR3HM:&amp;vet=1&amp;tbnh=160&amp;tbnw=80&amp;docid=EvpETpSEg0FAcM&amp;client=safari&amp;usg=__-SdjCcoQ_yfTn5XSatRdbC27IOw=&amp;sa=X&amp;ved=0ahUKEwim9qvDlK_RAhWH2YMKHSS4DwgQ9QEIIDAA" rel="nofollow noreferrer">this image of the pseudosphere</a> are parallel to each other.</p> Answer 51: <p>For pictorial representation: Two red asymptotic lines through P parallel to a blue line R from among infinitely many parallels are shown here again from <a href="https://en.wikipedia.org/wiki/Hyperbolic_geometry" rel="nofollow noreferrer">Wiki Hyp. Geometry</a></p> <p>The situation in the plane is given here from the Poincaré disk model. From out of many parallel lines a pair among them passing through P is marked red and either of them is parallel to to the blue line R. In the plane also Poincaré model also such red line pairs parallel to a common parallel (not drawn) can be viewed.</p> <p>In 3 space also I have shown a pair of red asymptotic lines through P parallel to the axis of symmetry R of a pseudosphere R.</p> <p>Beltrami had shown and validated asymptotic lines of constant Gauss curvature <span class="math-container">$K$</span> pseudosphere as geodesic parallel lines (infinitely many parallel) pairs in three dimensional hyperbolic geometry. 
The parallel lines constitute a Chebyshev Net defined by Sine-Gordon equation with angle <span class="math-container">$2 \psi$</span> between pair of shown asymptotic lines:</p> <p><span class="math-container">$$ \dfrac{d^{2}\psi}{ds^2} = K \sin \psi \cos \psi $$</span></p> <p><a href="https://i.stack.imgur.com/Iruei.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Iruei.png" alt="Hyp Geodesics in R3" /></a></p> <p>These two <em>unique hyperbolic parallels</em> can be drawn on the central Beltrami and cone type Pseudospheres, constant negative K Kuen surfaces, Breather surfaces also.</p> <p>Imho geodesic lines defined by Clairaut's Constant <span class="math-container">$( r \sin \psi$</span>) on surfaces of revolution <em>do not</em> constitute unique parallel line as in case of hyperbolic geometry.</p> <p>They are a <em>set of many</em> geodesics in <em>elliptic</em> geometry classified by Bernhard Riemann.</p> Answer 52: <p>Sketching hyperbolic geometry geodesic parallel lines as I understand them.</p> <p>Two red asymptotic lines through P parallel to a blue line R from among infinitely many parallels are shown here again from <a href="https://en.wikipedia.org/wiki/Hyperbolic_geometry" rel="nofollow noreferrer">Wiki Hyp. Geometry</a></p> <p>The situation in the plane can be seen here in the Poincaré disk model. From out of many parallel lines a pair among them passing through P is marked red and either of them is parallel to to the blue line R. In the plane also Poincaré model also such red line pairs parallel to a common parallel (not drawn) can be viewed.</p> <p>In 3 space also I have shown a pair of red asymptotic lines through P parallel to the axis of symmetry R of a pseudosphere R.</p> <p>Beltrami had shown and validated asymptotic lines of constant Gauss curvature <span class="math-container">$K$</span> pseudosphere as geodesic parallel lines (infinitely many parallel) pairs in three dimensional hyperbolic geometry. 
The parallel lines constitute a Chebyshev Net defined by Sine-Gordon equation with angle <span class="math-container">$2 \psi$</span> between pair of lines <span class="math-container">$$ \psi^{''} = K \sin \psi \cos \psi $$</span></p> <p><a href="https://i.stack.imgur.com/Iruei.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Iruei.png" alt="Hyp Geodesics in R^3" /></a></p> <p>Imho geodesic lines defined by Clairaut's Constant <span class="math-container">$( r \sin \psi$</span>) on surfaces of revolution <em>do not</em> constitute parallel line sets in hyperbolic geometry. They are geodesics in the other <em>elliptic</em> geometry categorized by Riemann.</p> Answer 53: <p>As an application of <a href="https://en.wikipedia.org/wiki/Clairaut&#39;s_relation" rel="nofollow noreferrer">Clairaut's Relation</a>, you can work out quite explicitly what the geodesics on the pseudosphere are. The most useful form can be found in formula (6) on p. 258 of doCarmo's <em>Differential Geometry of Curves and Surfaces</em> or in exercise 23 on p. 78 of my <a href="http://faculty.franklin.uga.edu/shifrin/sites/faculty.franklin.uga.edu.shifrin/files/ShifrinDiffGeo.pdf" rel="nofollow noreferrer">differential geometry text</a>. In particular, other than the meridians (the copies of the tractrix), we have curves that turn around the pseudosphere from the fat end to a parallel circle, hitting it tangentially. 
Choosing constants carefully, you can see that infinitely many of these will pass through a given point and miss a fixed meridian.</p> Answer 54: <p>Any of the "vertical" lines in <a href="https://www.google.com/imgres?imgurl=http://mathworld.wolfram.com/images/eps-gif/Pseudosphere_700.gif&amp;imgrefurl=http://mathworld.wolfram.com/Pseudosphere.html&amp;h=258&amp;w=131&amp;tbnid=KWv2jGWuzFR3HM:&amp;vet=1&amp;tbnh=160&amp;tbnw=80&amp;docid=EvpETpSEg0FAcM&amp;client=safari&amp;usg=__-SdjCcoQ_yfTn5XSatRdbC27IOw=&amp;sa=X&amp;ved=0ahUKEwim9qvDlK_RAhWH2YMKHSS4DwgQ9QEIIDAA" rel="nofollow noreferrer">this image of the pseudosphere</a> are parallel to each other.</p> Answer 55: <p>For pictorial representation: Two red asymptotic lines through P parallel to a blue line R from among infinitely many parallels are shown here again from <a href="https://en.wikipedia.org/wiki/Hyperbolic_geometry" rel="nofollow noreferrer">Wiki Hyp. Geometry</a></p> <p>The situation in the plane is given here from the Poincaré disk model. From out of many parallel lines a pair among them passing through P is marked red and either of them is parallel to to the blue line R. In the plane also Poincaré model also such red line pairs parallel to a common parallel (not drawn) can be viewed.</p> <p>In 3 space also I have shown a pair of red asymptotic lines through P parallel to the axis of symmetry R of a pseudosphere R.</p> <p>Beltrami had shown and validated asymptotic lines of constant Gauss curvature <span class="math-container">$K$</span> pseudosphere as geodesic parallel lines (infinitely many parallel) pairs in three dimensional hyperbolic geometry. 
The parallel lines constitute a Chebyshev Net defined by Sine-Gordon equation with angle <span class="math-container">$2 \psi$</span> between pair of shown asymptotic lines:</p> <p><span class="math-container">$$ \dfrac{d^{2}\psi}{ds^2} = K \sin \psi \cos \psi $$</span></p> <p><a href="https://i.stack.imgur.com/Iruei.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Iruei.png" alt="Hyp Geodesics in R3" /></a></p> <p>These two <em>unique hyperbolic parallels</em> can be drawn on the central Beltrami and cone type Pseudospheres, constant negative K Kuen surfaces, Breather surfaces also.</p> <p>Imho geodesic lines defined by Clairaut's Constant <span class="math-container">$( r \sin \psi$</span>) on surfaces of revolution <em>do not</em> constitute unique parallel line as in case of hyperbolic geometry.</p> <p>They are a <em>set of many</em> geodesics in <em>elliptic</em> geometry classified by Bernhard Riemann.</p> Answer 56: <p>Sketching hyperbolic geometry geodesic parallel lines as I understand them.</p> <p>Two red asymptotic lines through P parallel to a blue line R from among infinitely many parallels are shown here again from <a href="https://en.wikipedia.org/wiki/Hyperbolic_geometry" rel="nofollow noreferrer">Wiki Hyp. Geometry</a></p> <p>The situation in the plane can be seen here in the Poincaré disk model. From out of many parallel lines a pair among them passing through P is marked red and either of them is parallel to to the blue line R. In the plane also Poincaré model also such red line pairs parallel to a common parallel (not drawn) can be viewed.</p> <p>In 3 space also I have shown a pair of red asymptotic lines through P parallel to the axis of symmetry R of a pseudosphere R.</p> <p>Beltrami had shown and validated asymptotic lines of constant Gauss curvature <span class="math-container">$K$</span> pseudosphere as geodesic parallel lines (infinitely many parallel) pairs in three dimensional hyperbolic geometry. 
The parallel lines constitute a Chebyshev Net defined by Sine-Gordon equation with angle <span class="math-container">$2 \psi$</span> between pair of lines <span class="math-container">$$ \psi^{''} = K \sin \psi \cos \psi $$</span></p> <p><a href="https://i.stack.imgur.com/Iruei.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Iruei.png" alt="Hyp Geodesics in R^3" /></a></p> <p>Imho geodesic lines defined by Clairaut's Constant <span class="math-container">$( r \sin \psi$</span>) on surfaces of revolution <em>do not</em> constitute parallel line sets in hyperbolic geometry. They are geodesics in the other <em>elliptic</em> geometry categorized by Riemann.</p> Answer 57: <p>As an application of <a href="https://en.wikipedia.org/wiki/Clairaut&#39;s_relation" rel="nofollow noreferrer">Clairaut's Relation</a>, you can work out quite explicitly what the geodesics on the pseudosphere are. The most useful form can be found in formula (6) on p. 258 of doCarmo's <em>Differential Geometry of Curves and Surfaces</em> or in exercise 23 on p. 78 of my <a href="http://faculty.franklin.uga.edu/shifrin/sites/faculty.franklin.uga.edu.shifrin/files/ShifrinDiffGeo.pdf" rel="nofollow noreferrer">differential geometry text</a>. In particular, other than the meridians (the copies of the tractrix), we have curves that turn around the pseudosphere from the fat end to a parallel circle, hitting it tangentially. 
Choosing constants carefully, you can see that infinitely many of these will pass through a given point and miss a fixed meridian.</p> Answer 58: <p>Any of the "vertical" lines in <a href="https://www.google.com/imgres?imgurl=http://mathworld.wolfram.com/images/eps-gif/Pseudosphere_700.gif&amp;imgrefurl=http://mathworld.wolfram.com/Pseudosphere.html&amp;h=258&amp;w=131&amp;tbnid=KWv2jGWuzFR3HM:&amp;vet=1&amp;tbnh=160&amp;tbnw=80&amp;docid=EvpETpSEg0FAcM&amp;client=safari&amp;usg=__-SdjCcoQ_yfTn5XSatRdbC27IOw=&amp;sa=X&amp;ved=0ahUKEwim9qvDlK_RAhWH2YMKHSS4DwgQ9QEIIDAA" rel="nofollow noreferrer">this image of the pseudosphere</a> are parallel to each other.</p> Answer 59: <p>For pictorial representation: Two red asymptotic lines through P parallel to a blue line R from among infinitely many parallels are shown here again from <a href="https://en.wikipedia.org/wiki/Hyperbolic_geometry" rel="nofollow noreferrer">Wiki Hyp. Geometry</a></p> <p>The situation in the plane is given here from the Poincaré disk model. From out of many parallel lines a pair among them passing through P is marked red and either of them is parallel to to the blue line R. In the plane also Poincaré model also such red line pairs parallel to a common parallel (not drawn) can be viewed.</p> <p>In 3 space also I have shown a pair of red asymptotic lines through P parallel to the axis of symmetry R of a pseudosphere R.</p> <p>Beltrami had shown and validated asymptotic lines of constant Gauss curvature <span class="math-container">$K$</span> pseudosphere as geodesic parallel lines (infinitely many parallel) pairs in three dimensional hyperbolic geometry. 
The parallel lines constitute a Chebyshev Net defined by Sine-Gordon equation with angle <span class="math-container">$2 \psi$</span> between pair of shown asymptotic lines:</p> <p><span class="math-container">$$ \dfrac{d^{2}\psi}{ds^2} = K \sin \psi \cos \psi $$</span></p> <p><a href="https://i.stack.imgur.com/Iruei.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Iruei.png" alt="Hyp Geodesics in R3" /></a></p> <p>These two <em>unique hyperbolic parallels</em> can be drawn on the central Beltrami and cone type Pseudospheres, constant negative K Kuen surfaces, Breather surfaces also.</p> <p>Imho geodesic lines defined by Clairaut's Constant <span class="math-container">$( r \sin \psi$</span>) on surfaces of revolution <em>do not</em> constitute unique parallel line as in case of hyperbolic geometry.</p> <p>They are a <em>set of many</em> geodesics in <em>elliptic</em> geometry classified by Bernhard Riemann.</p> Answer 60: <p>Sketching hyperbolic geometry geodesic parallel lines as I understand them.</p> <p>Two red asymptotic lines through P parallel to a blue line R from among infinitely many parallels are shown here again from <a href="https://en.wikipedia.org/wiki/Hyperbolic_geometry" rel="nofollow noreferrer">Wiki Hyp. Geometry</a></p> <p>The situation in the plane can be seen here in the Poincaré disk model. From out of many parallel lines a pair among them passing through P is marked red and either of them is parallel to to the blue line R. In the plane also Poincaré model also such red line pairs parallel to a common parallel (not drawn) can be viewed.</p> <p>In 3 space also I have shown a pair of red asymptotic lines through P parallel to the axis of symmetry R of a pseudosphere R.</p> <p>Beltrami had shown and validated asymptotic lines of constant Gauss curvature <span class="math-container">$K$</span> pseudosphere as geodesic parallel lines (infinitely many parallel) pairs in three dimensional hyperbolic geometry. 
The parallel lines constitute a Chebyshev Net defined by Sine-Gordon equation with angle <span class="math-container">$2 \psi$</span> between pair of lines <span class="math-container">$$ \psi^{''} = K \sin \psi \cos \psi $$</span></p> <p><a href="https://i.stack.imgur.com/Iruei.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Iruei.png" alt="Hyp Geodesics in R^3" /></a></p> <p>Imho geodesic lines defined by Clairaut's Constant <span class="math-container">$( r \sin \psi$</span>) on surfaces of revolution <em>do not</em> constitute parallel line sets in hyperbolic geometry. They are geodesics in the other <em>elliptic</em> geometry categorized by Riemann.</p>
How can I calculate $\lim\limits_{x\to\infty}x\left(\int_0^x te^{-2t}\,dt-\frac14\right)$?
2,087,111
7
https://math.stackexchange.com/questions/2087111/how-can-i-calculate-lim-limits-x-to-inftyx-left-int-0x-te-2t-dt-frac1
<p>I tried</p> <p>$$\displaystyle \int_0^x te^{-2t}\,dt$$</p> <p>Let $u=t \implies du=dt$ And $dv=e^{-2t}\,dt \implies v=-\dfrac{1}{2}e^{-2t}$</p> <p>$$\displaystyle \int_0^x te^{-2t}\,dt=-\dfrac{1}{2}te^{-2t}\bigg|_0^x+\int_0^x \dfrac{1}{2}e^{-2t}\,dt$$ $$=-\dfrac{1}{2}xe^{-2x}-\dfrac{1}{4}e^{-2x}+\dfrac{1}{4}$$ $$\displaystyle \lim_{x\to\infty}x\left(\int_0^x te^{-2t}\,dt-\dfrac{1}{4}\right)$$ $$=\displaystyle \lim_{x\to\infty}x\left(-\dfrac{1}{2}xe^{-2x}-\dfrac{1}{4}e^{-2x}+\dfrac{1}{4}-\dfrac{1}{4}\right)$$ $$=\displaystyle \lim_{x\to\infty}-\dfrac{1}{4}xe^{-2x}(2x+1)$$ $$=\displaystyle \lim_{x\to\infty}-\dfrac{x(2x+1)}{4e^{2x}}$$ $$=\displaystyle \lim_{x\to\infty}-\dfrac{2x^2+x}{4e^{2x}}$$</p>
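The integration by parts above reduces the bracket to $-\frac{1}{2}xe^{-2x}-\frac{1}{4}e^{-2x}$, so the whole expression equals $-\frac{x(2x+1)}{4e^{2x}}$. As a quick numerical sanity check of that closed form (the helper name `f` below is ours, not part of the question), the exponential in the denominator visibly dominates the quadratic numerator:

```python
import math

# Closed form from the question's integration by parts:
# x * (integral - 1/4) = -x*(2x + 1) / (4*e^(2x))
def f(x):
    integral = 0.25 - 0.5 * x * math.exp(-2 * x) - 0.25 * math.exp(-2 * x)
    return x * (integral - 0.25)

# The factor e^{2x} crushes the polynomial growth of x(2x+1):
for x in (5.0, 10.0, 20.0):
    print(x, f(x))
```

The printed values shrink rapidly toward $0$, consistent with the limit being $0$.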
[ { "answer_id": 2087117, "score": 5, "text": "<p>Applying L’ Hospital’s rule to avoid the indeterminate form due to direct substitution, we have</p>\n\n<p>$=\\displaystyle \\lim_{x\\to\\infty}-\\dfrac{4x+1}{8e^{2x}}$</p>\n\n<p>Indeterminate form again, applying L’ Hospital’s Rule yields</p>\n\n<p>$=\\displaystyle \\lim_{x\\to\\infty}-\\dfrac{4}{16e^{2x}}$</p>\n\n<p>$=\\displaystyle \\lim_{x\\to\\infty}-\\dfrac{1}{4e^{2x}}$</p>\n\n<p>$=0$</p>\n\n<p>Hence\n$\\displaystyle \\lim_{x\\to\\infty}x\\left(\\int_0^x te^{-2t}\\,dt-\\dfrac{1}{4}\\right)=0$</p>\n" }, { "answer_id": 2087506, "score": 5, "text": "<p>You can avoid computing the integral by writing\n$$\\lim_{x\\to \\infty}x\\left (\\int_0^xte^{-2t}dt-\\frac{1}{4}\\right)=\\lim_{x\\to \\infty}\\frac{\\int_0^xte^{-2t}dt-\\frac{1}{4}}{\\frac 1x}$$\nApplying L'Hospital's rule, if the limit exists the second limit is equal to\n$$\\lim_{x\\to \\infty}\\frac{xe^{-2x}}{-\\frac{1}{x^2}}=\\lim_{x\\to \\infty}-\\frac{x^3}{e^{2x}}$$\nwhich is $0$ by using L'Hospital three times.</p>\n" }, { "answer_id": 2088049, "score": 4, "text": "<p>$$\\int_{0}^{x}t e^{-2t}\\,dt -\\frac{1}{4} = -\\int_{x}^{+\\infty}t e^{-2t}\\,dt=-e^{-2x}\\int_{0}^{+\\infty}(t+x)e^{-2t}\\,dt $$\nand\n$$ 0\\leq \\int_{0}^{+\\infty}(t+x)e^{-2t}\\,dt = x^2\\int_{0}^{+\\infty}(t+1)e^{-2t x}\\,dt\\leq Cx^2 $$\nhence the limit is trivially zero by squeezing.</p>\n" }, { "answer_id": 2087117, "score": 5, "text": "<p>Applying L’ Hospital’s rule to avoid the indeterminate form due to direct substitution, we have</p>\n\n<p>$=\\displaystyle \\lim_{x\\to\\infty}-\\dfrac{4x+1}{8e^{2x}}$</p>\n\n<p>Indeterminate form again, applying L’ Hospital’s Rule yields</p>\n\n<p>$=\\displaystyle \\lim_{x\\to\\infty}-\\dfrac{4}{16e^{2x}}$</p>\n\n<p>$=\\displaystyle \\lim_{x\\to\\infty}-\\dfrac{1}{4e^{2x}}$</p>\n\n<p>$=0$</p>\n\n<p>Hence\n$\\displaystyle \\lim_{x\\to\\infty}x\\left(\\int_0^x te^{-2t}\\,dt-\\dfrac{1}{4}\\right)=0$</p>\n" }, { "answer_id": 2087506, "score": 5, "text": 
"<p>You can avoid computing the integral by writing\n$$\\lim_{x\\to \\infty}x\\left (\\int_0^xte^{-2t}dt-\\frac{1}{4}\\right)=\\lim_{x\\to \\infty}\\frac{\\int_0^xte^{-2t}dt-\\frac{1}{4}}{\\frac 1x}$$\nApplying L'Hospital's rule, if the limit exists the second limit is equal to\n$$\\lim_{x\\to \\infty}\\frac{xe^{-2x}}{-\\frac{1}{x^2}}=\\lim_{x\\to \\infty}-\\frac{x^3}{e^{2x}}$$\nwhich is $0$ by using L'Hospital three times.</p>\n" }, { "answer_id": 2088049, "score": 4, "text": "<p>$$\\int_{0}^{x}t e^{-2t}\\,dt -\\frac{1}{4} = -\\int_{x}^{+\\infty}t e^{-2t}\\,dt=-e^{-2x}\\int_{0}^{+\\infty}(t+x)e^{-2t}\\,dt $$\nand\n$$ 0\\leq \\int_{0}^{+\\infty}(t+x)e^{-2t}\\,dt = x^2\\int_{0}^{+\\infty}(t+1)e^{-2t x}\\,dt\\leq Cx^2 $$\nhence the limit is trivially zero by squeezing.</p>\n" }, { "answer_id": 2087117, "score": 5, "text": "<p>Applying L’ Hospital’s rule to avoid the indeterminate form due to direct substitution, we have</p>\n\n<p>$=\\displaystyle \\lim_{x\\to\\infty}-\\dfrac{4x+1}{8e^{2x}}$</p>\n\n<p>Indeterminate form again, applying L’ Hospital’s Rule yields</p>\n\n<p>$=\\displaystyle \\lim_{x\\to\\infty}-\\dfrac{4}{16e^{2x}}$</p>\n\n<p>$=\\displaystyle \\lim_{x\\to\\infty}-\\dfrac{1}{4e^{2x}}$</p>\n\n<p>$=0$</p>\n\n<p>Hence\n$\\displaystyle \\lim_{x\\to\\infty}x\\left(\\int_0^x te^{-2t}\\,dt-\\dfrac{1}{4}\\right)=0$</p>\n" }, { "answer_id": 2087506, "score": 5, "text": "<p>You can avoid computing the integral by writing\n$$\\lim_{x\\to \\infty}x\\left (\\int_0^xte^{-2t}dt-\\frac{1}{4}\\right)=\\lim_{x\\to \\infty}\\frac{\\int_0^xte^{-2t}dt-\\frac{1}{4}}{\\frac 1x}$$\nApplying L'Hospital's rule, if the limit exists the second limit is equal to\n$$\\lim_{x\\to \\infty}\\frac{xe^{-2x}}{-\\frac{1}{x^2}}=\\lim_{x\\to \\infty}-\\frac{x^3}{e^{2x}}$$\nwhich is $0$ by using L'Hospital three times.</p>\n" }, { "answer_id": 2088049, "score": 4, "text": "<p>$$\\int_{0}^{x}t e^{-2t}\\,dt -\\frac{1}{4} = -\\int_{x}^{+\\infty}t 
e^{-2t}\\,dt=-e^{-2x}\\int_{0}^{+\\infty}(t+x)e^{-2t}\\,dt $$\nand\n$$ 0\\leq \\int_{0}^{+\\infty}(t+x)e^{-2t}\\,dt = x^2\\int_{0}^{+\\infty}(t+1)e^{-2t x}\\,dt\\leq Cx^2 $$\nhence the limit is trivially zero by squeezing.</p>\n" }, { "answer_id": 2087117, "score": 5, "text": "<p>Applying L’ Hospital’s rule to avoid the indeterminate form due to direct substitution, we have</p>\n\n<p>$=\\displaystyle \\lim_{x\\to\\infty}-\\dfrac{4x+1}{8e^{2x}}$</p>\n\n<p>Indeterminate form again, applying L’ Hospital’s Rule yields</p>\n\n<p>$=\\displaystyle \\lim_{x\\to\\infty}-\\dfrac{4}{16e^{2x}}$</p>\n\n<p>$=\\displaystyle \\lim_{x\\to\\infty}-\\dfrac{1}{4e^{2x}}$</p>\n\n<p>$=0$</p>\n\n<p>Hence\n$\\displaystyle \\lim_{x\\to\\infty}x\\left(\\int_0^x te^{-2t}\\,dt-\\dfrac{1}{4}\\right)=0$</p>\n" }, { "answer_id": 2087506, "score": 5, "text": "<p>You can avoid computing the integral by writing\n$$\\lim_{x\\to \\infty}x\\left (\\int_0^xte^{-2t}dt-\\frac{1}{4}\\right)=\\lim_{x\\to \\infty}\\frac{\\int_0^xte^{-2t}dt-\\frac{1}{4}}{\\frac 1x}$$\nApplying L'Hospital's rule, if the limit exists the second limit is equal to\n$$\\lim_{x\\to \\infty}\\frac{xe^{-2x}}{-\\frac{1}{x^2}}=\\lim_{x\\to \\infty}-\\frac{x^3}{e^{2x}}$$\nwhich is $0$ by using L'Hospital three times.</p>\n" }, { "answer_id": 2088049, "score": 4, "text": "<p>$$\\int_{0}^{x}t e^{-2t}\\,dt -\\frac{1}{4} = -\\int_{x}^{+\\infty}t e^{-2t}\\,dt=-e^{-2x}\\int_{0}^{+\\infty}(t+x)e^{-2t}\\,dt $$\nand\n$$ 0\\leq \\int_{0}^{+\\infty}(t+x)e^{-2t}\\,dt = x^2\\int_{0}^{+\\infty}(t+1)e^{-2t x}\\,dt\\leq Cx^2 $$\nhence the limit is trivially zero by squeezing.</p>\n" }, { "answer_id": 2087117, "score": 5, "text": "<p>Applying L’ Hospital’s rule to avoid the indeterminate form due to direct substitution, we have</p>\n\n<p>$=\\displaystyle \\lim_{x\\to\\infty}-\\dfrac{4x+1}{8e^{2x}}$</p>\n\n<p>Indeterminate form again, applying L’ Hospital’s Rule yields</p>\n\n<p>$=\\displaystyle 
\\lim_{x\\to\\infty}-\\dfrac{4}{16e^{2x}}$</p>\n\n<p>$=\\displaystyle \\lim_{x\\to\\infty}-\\dfrac{1}{4e^{2x}}$</p>\n\n<p>$=0$</p>\n\n<p>Hence\n$\\displaystyle \\lim_{x\\to\\infty}x\\left(\\int_0^x te^{-2t}\\,dt-\\dfrac{1}{4}\\right)=0$</p>\n" }, { "answer_id": 2087506, "score": 5, "text": "<p>You can avoid computing the integral by writing\n$$\\lim_{x\\to \\infty}x\\left (\\int_0^xte^{-2t}dt-\\frac{1}{4}\\right)=\\lim_{x\\to \\infty}\\frac{\\int_0^xte^{-2t}dt-\\frac{1}{4}}{\\frac 1x}$$\nApplying L'Hospital's rule, if the limit exists the second limit is equal to\n$$\\lim_{x\\to \\infty}\\frac{xe^{-2x}}{-\\frac{1}{x^2}}=\\lim_{x\\to \\infty}-\\frac{x^3}{e^{2x}}$$\nwhich is $0$ by using L'Hospital three times.</p>\n" }, { "answer_id": 2088049, "score": 4, "text": "<p>$$\\int_{0}^{x}t e^{-2t}\\,dt -\\frac{1}{4} = -\\int_{x}^{+\\infty}t e^{-2t}\\,dt=-e^{-2x}\\int_{0}^{+\\infty}(t+x)e^{-2t}\\,dt $$\nand\n$$ 0\\leq \\int_{0}^{+\\infty}(t+x)e^{-2t}\\,dt = x^2\\int_{0}^{+\\infty}(t+1)e^{-2t x}\\,dt\\leq Cx^2 $$\nhence the limit is trivially zero by squeezing.</p>\n" }, { "answer_id": 2087117, "score": 5, "text": "<p>Applying L’ Hospital’s rule to avoid the indeterminate form due to direct substitution, we have</p>\n\n<p>$=\\displaystyle \\lim_{x\\to\\infty}-\\dfrac{4x+1}{8e^{2x}}$</p>\n\n<p>Indeterminate form again, applying L’ Hospital’s Rule yields</p>\n\n<p>$=\\displaystyle \\lim_{x\\to\\infty}-\\dfrac{4}{16e^{2x}}$</p>\n\n<p>$=\\displaystyle \\lim_{x\\to\\infty}-\\dfrac{1}{4e^{2x}}$</p>\n\n<p>$=0$</p>\n\n<p>Hence\n$\\displaystyle \\lim_{x\\to\\infty}x\\left(\\int_0^x te^{-2t}\\,dt-\\dfrac{1}{4}\\right)=0$</p>\n" }, { "answer_id": 2087506, "score": 5, "text": "<p>You can avoid computing the integral by writing\n$$\\lim_{x\\to \\infty}x\\left (\\int_0^xte^{-2t}dt-\\frac{1}{4}\\right)=\\lim_{x\\to \\infty}\\frac{\\int_0^xte^{-2t}dt-\\frac{1}{4}}{\\frac 1x}$$\nApplying L'Hospital's rule, if the limit exists the second limit is equal to\n$$\\lim_{x\\to 
\\infty}\\frac{xe^{-2x}}{-\\frac{1}{x^2}}=\\lim_{x\\to \\infty}-\\frac{x^3}{e^{2x}}$$\nwhich is $0$ by using L'Hospital three times.</p>\n" }, { "answer_id": 2088049, "score": 4, "text": "<p>$$\\int_{0}^{x}t e^{-2t}\\,dt -\\frac{1}{4} = -\\int_{x}^{+\\infty}t e^{-2t}\\,dt=-e^{-2x}\\int_{0}^{+\\infty}(t+x)e^{-2t}\\,dt $$\nand\n$$ 0\\leq \\int_{0}^{+\\infty}(t+x)e^{-2t}\\,dt = x^2\\int_{0}^{+\\infty}(t+1)e^{-2t x}\\,dt\\leq Cx^2 $$\nhence the limit is trivially zero by squeezing.</p>\n" }, { "answer_id": 2087117, "score": 5, "text": "<p>Applying L’ Hospital’s rule to avoid the indeterminate form due to direct substitution, we have</p>\n\n<p>$=\\displaystyle \\lim_{x\\to\\infty}-\\dfrac{4x+1}{8e^{2x}}$</p>\n\n<p>Indeterminate form again, applying L’ Hospital’s Rule yields</p>\n\n<p>$=\\displaystyle \\lim_{x\\to\\infty}-\\dfrac{4}{16e^{2x}}$</p>\n\n<p>$=\\displaystyle \\lim_{x\\to\\infty}-\\dfrac{1}{4e^{2x}}$</p>\n\n<p>$=0$</p>\n\n<p>Hence\n$\\displaystyle \\lim_{x\\to\\infty}x\\left(\\int_0^x te^{-2t}\\,dt-\\dfrac{1}{4}\\right)=0$</p>\n" }, { "answer_id": 2087506, "score": 5, "text": "<p>You can avoid computing the integral by writing\n$$\\lim_{x\\to \\infty}x\\left (\\int_0^xte^{-2t}dt-\\frac{1}{4}\\right)=\\lim_{x\\to \\infty}\\frac{\\int_0^xte^{-2t}dt-\\frac{1}{4}}{\\frac 1x}$$\nApplying L'Hospital's rule, if the limit exists the second limit is equal to\n$$\\lim_{x\\to \\infty}\\frac{xe^{-2x}}{-\\frac{1}{x^2}}=\\lim_{x\\to \\infty}-\\frac{x^3}{e^{2x}}$$\nwhich is $0$ by using L'Hospital three times.</p>\n" }, { "answer_id": 2088049, "score": 4, "text": "<p>$$\\int_{0}^{x}t e^{-2t}\\,dt -\\frac{1}{4} = -\\int_{x}^{+\\infty}t e^{-2t}\\,dt=-e^{-2x}\\int_{0}^{+\\infty}(t+x)e^{-2t}\\,dt $$\nand\n$$ 0\\leq \\int_{0}^{+\\infty}(t+x)e^{-2t}\\,dt = x^2\\int_{0}^{+\\infty}(t+1)e^{-2t x}\\,dt\\leq Cx^2 $$\nhence the limit is trivially zero by squeezing.</p>\n" }, { "answer_id": 2087117, "score": 5, "text": "<p>Applying L’ Hospital’s rule to avoid the indeterminate form due 
to direct substitution, we have</p>\n\n<p>$=\\displaystyle \\lim_{x\\to\\infty}-\\dfrac{4x+1}{8e^{2x}}$</p>\n\n<p>Indeterminate form again, applying L’ Hospital’s Rule yields</p>\n\n<p>$=\\displaystyle \\lim_{x\\to\\infty}-\\dfrac{4}{16e^{2x}}$</p>\n\n<p>$=\\displaystyle \\lim_{x\\to\\infty}-\\dfrac{1}{4e^{2x}}$</p>\n\n<p>$=0$</p>\n\n<p>Hence\n$\\displaystyle \\lim_{x\\to\\infty}x\\left(\\int_0^x te^{-2t}\\,dt-\\dfrac{1}{4}\\right)=0$</p>\n" }, { "answer_id": 2087506, "score": 5, "text": "<p>You can avoid computing the integral by writing\n$$\\lim_{x\\to \\infty}x\\left (\\int_0^xte^{-2t}dt-\\frac{1}{4}\\right)=\\lim_{x\\to \\infty}\\frac{\\int_0^xte^{-2t}dt-\\frac{1}{4}}{\\frac 1x}$$\nApplying L'Hospital's rule, if the limit exists the second limit is equal to\n$$\\lim_{x\\to \\infty}\\frac{xe^{-2x}}{-\\frac{1}{x^2}}=\\lim_{x\\to \\infty}-\\frac{x^3}{e^{2x}}$$\nwhich is $0$ by using L'Hospital three times.</p>\n" }, { "answer_id": 2088049, "score": 4, "text": "<p>$$\\int_{0}^{x}t e^{-2t}\\,dt -\\frac{1}{4} = -\\int_{x}^{+\\infty}t e^{-2t}\\,dt=-e^{-2x}\\int_{0}^{+\\infty}(t+x)e^{-2t}\\,dt $$\nand\n$$ 0\\leq \\int_{0}^{+\\infty}(t+x)e^{-2t}\\,dt = x^2\\int_{0}^{+\\infty}(t+1)e^{-2t x}\\,dt\\leq Cx^2 $$\nhence the limit is trivially zero by squeezing.</p>\n" }, { "answer_id": 2087117, "score": 5, "text": "<p>Applying L’ Hospital’s rule to avoid the indeterminate form due to direct substitution, we have</p>\n\n<p>$=\\displaystyle \\lim_{x\\to\\infty}-\\dfrac{4x+1}{8e^{2x}}$</p>\n\n<p>Indeterminate form again, applying L’ Hospital’s Rule yields</p>\n\n<p>$=\\displaystyle \\lim_{x\\to\\infty}-\\dfrac{4}{16e^{2x}}$</p>\n\n<p>$=\\displaystyle \\lim_{x\\to\\infty}-\\dfrac{1}{4e^{2x}}$</p>\n\n<p>$=0$</p>\n\n<p>Hence\n$\\displaystyle \\lim_{x\\to\\infty}x\\left(\\int_0^x te^{-2t}\\,dt-\\dfrac{1}{4}\\right)=0$</p>\n" }, { "answer_id": 2087506, "score": 5, "text": "<p>You can avoid computing the integral by writing\n$$\\lim_{x\\to \\infty}x\\left 
(\\int_0^xte^{-2t}dt-\\frac{1}{4}\\right)=\\lim_{x\\to \\infty}\\frac{\\int_0^xte^{-2t}dt-\\frac{1}{4}}{\\frac 1x}$$\nApplying L'Hospital's rule, if the limit exists the second limit is equal to\n$$\\lim_{x\\to \\infty}\\frac{xe^{-2x}}{-\\frac{1}{x^2}}=\\lim_{x\\to \\infty}-\\frac{x^3}{e^{2x}}$$\nwhich is $0$ by using L'Hospital three times.</p>\n" }, { "answer_id": 2088049, "score": 4, "text": "<p>$$\\int_{0}^{x}t e^{-2t}\\,dt -\\frac{1}{4} = -\\int_{x}^{+\\infty}t e^{-2t}\\,dt=-e^{-2x}\\int_{0}^{+\\infty}(t+x)e^{-2t}\\,dt $$\nand\n$$ 0\\leq \\int_{0}^{+\\infty}(t+x)e^{-2t}\\,dt = x^2\\int_{0}^{+\\infty}(t+1)e^{-2t x}\\,dt\\leq Cx^2 $$\nhence the limit is trivially zero by squeezing.</p>\n" }, { "answer_id": 2087117, "score": 5, "text": "<p>Applying L’ Hospital’s rule to avoid the indeterminate form due to direct substitution, we have</p>\n\n<p>$=\\displaystyle \\lim_{x\\to\\infty}-\\dfrac{4x+1}{8e^{2x}}$</p>\n\n<p>Indeterminate form again, applying L’ Hospital’s Rule yields</p>\n\n<p>$=\\displaystyle \\lim_{x\\to\\infty}-\\dfrac{4}{16e^{2x}}$</p>\n\n<p>$=\\displaystyle \\lim_{x\\to\\infty}-\\dfrac{1}{4e^{2x}}$</p>\n\n<p>$=0$</p>\n\n<p>Hence\n$\\displaystyle \\lim_{x\\to\\infty}x\\left(\\int_0^x te^{-2t}\\,dt-\\dfrac{1}{4}\\right)=0$</p>\n" }, { "answer_id": 2087506, "score": 5, "text": "<p>You can avoid computing the integral by writing\n$$\\lim_{x\\to \\infty}x\\left (\\int_0^xte^{-2t}dt-\\frac{1}{4}\\right)=\\lim_{x\\to \\infty}\\frac{\\int_0^xte^{-2t}dt-\\frac{1}{4}}{\\frac 1x}$$\nApplying L'Hospital's rule, if the limit exists the second limit is equal to\n$$\\lim_{x\\to \\infty}\\frac{xe^{-2x}}{-\\frac{1}{x^2}}=\\lim_{x\\to \\infty}-\\frac{x^3}{e^{2x}}$$\nwhich is $0$ by using L'Hospital three times.</p>\n" }, { "answer_id": 2088049, "score": 4, "text": "<p>$$\\int_{0}^{x}t e^{-2t}\\,dt -\\frac{1}{4} = -\\int_{x}^{+\\infty}t e^{-2t}\\,dt=-e^{-2x}\\int_{0}^{+\\infty}(t+x)e^{-2t}\\,dt $$\nand\n$$ 0\\leq \\int_{0}^{+\\infty}(t+x)e^{-2t}\\,dt = 
x^2\\int_{0}^{+\\infty}(t+1)e^{-2t x}\\,dt\\leq Cx^2 $$\nhence the limit is trivially zero by squeezing.</p>\n" }, { "answer_id": 2087117, "score": 5, "text": "<p>Applying L’ Hospital’s rule to avoid the indeterminate form due to direct substitution, we have</p>\n\n<p>$=\\displaystyle \\lim_{x\\to\\infty}-\\dfrac{4x+1}{8e^{2x}}$</p>\n\n<p>Indeterminate form again, applying L’ Hospital’s Rule yields</p>\n\n<p>$=\\displaystyle \\lim_{x\\to\\infty}-\\dfrac{4}{16e^{2x}}$</p>\n\n<p>$=\\displaystyle \\lim_{x\\to\\infty}-\\dfrac{1}{4e^{2x}}$</p>\n\n<p>$=0$</p>\n\n<p>Hence\n$\\displaystyle \\lim_{x\\to\\infty}x\\left(\\int_0^x te^{-2t}\\,dt-\\dfrac{1}{4}\\right)=0$</p>\n" }, { "answer_id": 2087506, "score": 5, "text": "<p>You can avoid computing the integral by writing\n$$\\lim_{x\\to \\infty}x\\left (\\int_0^xte^{-2t}dt-\\frac{1}{4}\\right)=\\lim_{x\\to \\infty}\\frac{\\int_0^xte^{-2t}dt-\\frac{1}{4}}{\\frac 1x}$$\nApplying L'Hospital's rule, if the limit exists the second limit is equal to\n$$\\lim_{x\\to \\infty}\\frac{xe^{-2x}}{-\\frac{1}{x^2}}=\\lim_{x\\to \\infty}-\\frac{x^3}{e^{2x}}$$\nwhich is $0$ by using L'Hospital three times.</p>\n" }, { "answer_id": 2088049, "score": 4, "text": "<p>$$\\int_{0}^{x}t e^{-2t}\\,dt -\\frac{1}{4} = -\\int_{x}^{+\\infty}t e^{-2t}\\,dt=-e^{-2x}\\int_{0}^{+\\infty}(t+x)e^{-2t}\\,dt $$\nand\n$$ 0\\leq \\int_{0}^{+\\infty}(t+x)e^{-2t}\\,dt = x^2\\int_{0}^{+\\infty}(t+1)e^{-2t x}\\,dt\\leq Cx^2 $$\nhence the limit is trivially zero by squeezing.</p>\n" }, { "answer_id": 2087117, "score": 5, "text": "<p>Applying L’ Hospital’s rule to avoid the indeterminate form due to direct substitution, we have</p>\n\n<p>$=\\displaystyle \\lim_{x\\to\\infty}-\\dfrac{4x+1}{8e^{2x}}$</p>\n\n<p>Indeterminate form again, applying L’ Hospital’s Rule yields</p>\n\n<p>$=\\displaystyle \\lim_{x\\to\\infty}-\\dfrac{4}{16e^{2x}}$</p>\n\n<p>$=\\displaystyle \\lim_{x\\to\\infty}-\\dfrac{1}{4e^{2x}}$</p>\n\n<p>$=0$</p>\n\n<p>Hence\n$\\displaystyle 
\\lim_{x\\to\\infty}x\\left(\\int_0^x te^{-2t}\\,dt-\\dfrac{1}{4}\\right)=0$</p>\n" }, { "answer_id": 2087506, "score": 5, "text": "<p>You can avoid computing the integral by writing\n$$\\lim_{x\\to \\infty}x\\left (\\int_0^xte^{-2t}dt-\\frac{1}{4}\\right)=\\lim_{x\\to \\infty}\\frac{\\int_0^xte^{-2t}dt-\\frac{1}{4}}{\\frac 1x}$$\nApplying L'Hospital's rule, if the limit exists the second limit is equal to\n$$\\lim_{x\\to \\infty}\\frac{xe^{-2x}}{-\\frac{1}{x^2}}=\\lim_{x\\to \\infty}-\\frac{x^3}{e^{2x}}$$\nwhich is $0$ by using L'Hospital three times.</p>\n" }, { "answer_id": 2088049, "score": 4, "text": "<p>$$\\int_{0}^{x}t e^{-2t}\\,dt -\\frac{1}{4} = -\\int_{x}^{+\\infty}t e^{-2t}\\,dt=-e^{-2x}\\int_{0}^{+\\infty}(t+x)e^{-2t}\\,dt $$\nand\n$$ 0\\leq \\int_{0}^{+\\infty}(t+x)e^{-2t}\\,dt = x^2\\int_{0}^{+\\infty}(t+1)e^{-2t x}\\,dt\\leq Cx^2 $$\nhence the limit is trivially zero by squeezing.</p>\n" }, { "answer_id": 2087117, "score": 5, "text": "<p>Applying L’ Hospital’s rule to avoid the indeterminate form due to direct substitution, we have</p>\n\n<p>$=\\displaystyle \\lim_{x\\to\\infty}-\\dfrac{4x+1}{8e^{2x}}$</p>\n\n<p>Indeterminate form again, applying L’ Hospital’s Rule yields</p>\n\n<p>$=\\displaystyle \\lim_{x\\to\\infty}-\\dfrac{4}{16e^{2x}}$</p>\n\n<p>$=\\displaystyle \\lim_{x\\to\\infty}-\\dfrac{1}{4e^{2x}}$</p>\n\n<p>$=0$</p>\n\n<p>Hence\n$\\displaystyle \\lim_{x\\to\\infty}x\\left(\\int_0^x te^{-2t}\\,dt-\\dfrac{1}{4}\\right)=0$</p>\n" }, { "answer_id": 2087506, "score": 5, "text": "<p>You can avoid computing the integral by writing\n$$\\lim_{x\\to \\infty}x\\left (\\int_0^xte^{-2t}dt-\\frac{1}{4}\\right)=\\lim_{x\\to \\infty}\\frac{\\int_0^xte^{-2t}dt-\\frac{1}{4}}{\\frac 1x}$$\nApplying L'Hospital's rule, if the limit exists the second limit is equal to\n$$\\lim_{x\\to \\infty}\\frac{xe^{-2x}}{-\\frac{1}{x^2}}=\\lim_{x\\to \\infty}-\\frac{x^3}{e^{2x}}$$\nwhich is $0$ by using L'Hospital three times.</p>\n" }, { "answer_id": 2088049, "score": 4, 
"text": "<p>$$\\int_{0}^{x}t e^{-2t}\\,dt -\\frac{1}{4} = -\\int_{x}^{+\\infty}t e^{-2t}\\,dt=-e^{-2x}\\int_{0}^{+\\infty}(t+x)e^{-2t}\\,dt $$\nand\n$$ 0\\leq \\int_{0}^{+\\infty}(t+x)e^{-2t}\\,dt = x^2\\int_{0}^{+\\infty}(t+1)e^{-2t x}\\,dt\\leq Cx^2 $$\nhence the limit is trivially zero by squeezing.</p>\n" }, { "answer_id": 2087117, "score": 5, "text": "<p>Applying L’ Hospital’s rule to avoid the indeterminate form due to direct substitution, we have</p>\n\n<p>$=\\displaystyle \\lim_{x\\to\\infty}-\\dfrac{4x+1}{8e^{2x}}$</p>\n\n<p>Indeterminate form again, applying L’ Hospital’s Rule yields</p>\n\n<p>$=\\displaystyle \\lim_{x\\to\\infty}-\\dfrac{4}{16e^{2x}}$</p>\n\n<p>$=\\displaystyle \\lim_{x\\to\\infty}-\\dfrac{1}{4e^{2x}}$</p>\n\n<p>$=0$</p>\n\n<p>Hence\n$\\displaystyle \\lim_{x\\to\\infty}x\\left(\\int_0^x te^{-2t}\\,dt-\\dfrac{1}{4}\\right)=0$</p>\n" }, { "answer_id": 2087506, "score": 5, "text": "<p>You can avoid computing the integral by writing\n$$\\lim_{x\\to \\infty}x\\left (\\int_0^xte^{-2t}dt-\\frac{1}{4}\\right)=\\lim_{x\\to \\infty}\\frac{\\int_0^xte^{-2t}dt-\\frac{1}{4}}{\\frac 1x}$$\nApplying L'Hospital's rule, if the limit exists the second limit is equal to\n$$\\lim_{x\\to \\infty}\\frac{xe^{-2x}}{-\\frac{1}{x^2}}=\\lim_{x\\to \\infty}-\\frac{x^3}{e^{2x}}$$\nwhich is $0$ by using L'Hospital three times.</p>\n" }, { "answer_id": 2088049, "score": 4, "text": "<p>$$\\int_{0}^{x}t e^{-2t}\\,dt -\\frac{1}{4} = -\\int_{x}^{+\\infty}t e^{-2t}\\,dt=-e^{-2x}\\int_{0}^{+\\infty}(t+x)e^{-2t}\\,dt $$\nand\n$$ 0\\leq \\int_{0}^{+\\infty}(t+x)e^{-2t}\\,dt = x^2\\int_{0}^{+\\infty}(t+1)e^{-2t x}\\,dt\\leq Cx^2 $$\nhence the limit is trivially zero by squeezing.</p>\n" }, { "answer_id": 2087117, "score": 5, "text": "<p>Applying L’ Hospital’s rule to avoid the indeterminate form due to direct substitution, we have</p>\n\n<p>$=\\displaystyle \\lim_{x\\to\\infty}-\\dfrac{4x+1}{8e^{2x}}$</p>\n\n<p>Indeterminate form again, applying L’ Hospital’s Rule 
yields</p>\n\n<p>$=\\displaystyle \\lim_{x\\to\\infty}-\\dfrac{4}{16e^{2x}}$</p>\n\n<p>$=\\displaystyle \\lim_{x\\to\\infty}-\\dfrac{1}{4e^{2x}}$</p>\n\n<p>$=0$</p>\n\n<p>Hence\n$\\displaystyle \\lim_{x\\to\\infty}x\\left(\\int_0^x te^{-2t}\\,dt-\\dfrac{1}{4}\\right)=0$</p>\n" }, { "answer_id": 2087506, "score": 5, "text": "<p>You can avoid computing the integral by writing\n$$\\lim_{x\\to \\infty}x\\left (\\int_0^xte^{-2t}dt-\\frac{1}{4}\\right)=\\lim_{x\\to \\infty}\\frac{\\int_0^xte^{-2t}dt-\\frac{1}{4}}{\\frac 1x}$$\nApplying L'Hospital's rule, if the limit exists the second limit is equal to\n$$\\lim_{x\\to \\infty}\\frac{xe^{-2x}}{-\\frac{1}{x^2}}=\\lim_{x\\to \\infty}-\\frac{x^3}{e^{2x}}$$\nwhich is $0$ by using L'Hospital three times.</p>\n" }, { "answer_id": 2088049, "score": 4, "text": "<p>$$\\int_{0}^{x}t e^{-2t}\\,dt -\\frac{1}{4} = -\\int_{x}^{+\\infty}t e^{-2t}\\,dt=-e^{-2x}\\int_{0}^{+\\infty}(t+x)e^{-2t}\\,dt $$\nand\n$$ 0\\leq \\int_{0}^{+\\infty}(t+x)e^{-2t}\\,dt = x^2\\int_{0}^{+\\infty}(t+1)e^{-2t x}\\,dt\\leq Cx^2 $$\nhence the limit is trivially zero by squeezing.</p>\n" } ]
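Answer 2087117 above begins from the already-evaluated integral. As a sanity check (a sketch added here, not part of the thread), the standard integration-by-parts closed form $\int_0^x te^{-2t}\,dt = \frac14 - \frac{(2x+1)e^{-2x}}{4}$ can be compared against a midpoint Riemann sum:

```python
import math

def closed_form(x):
    # Integration by parts: d/dt[-(2t+1)e^(-2t)/4] = t*e^(-2t),
    # so the definite integral from 0 to x is 1/4 - (2x+1)e^(-2x)/4.
    return 0.25 - (2 * x + 1) * math.exp(-2 * x) / 4

def midpoint_sum(x, n=100000):
    # Midpoint Riemann sum for the same integral, for comparison.
    h = x / n
    return h * sum((i + 0.5) * h * math.exp(-2 * (i + 0.5) * h)
                   for i in range(n))

for x in (0.5, 1.0, 3.0):
    assert abs(closed_form(x) - midpoint_sum(x)) < 1e-8
```

This is only a numerical confirmation of the antiderivative the first answer implicitly uses; the analytic derivations are in the answers themselves.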
Question: How can I calculate $\lim\limits_{x\to\infty}x\left(\int_0^x te^{-2t}\,dt-\frac14\right)$? Answer 1: <p>Applying L’ Hospital’s rule to avoid the indeterminate form due to direct substitution, we have</p> <p>$=\displaystyle \lim_{x\to\infty}-\dfrac{4x+1}{8e^{2x}}$</p> <p>Indeterminate form again, applying L’ Hospital’s Rule yields</p> <p>$=\displaystyle \lim_{x\to\infty}-\dfrac{4}{16e^{2x}}$</p> <p>$=\displaystyle \lim_{x\to\infty}-\dfrac{1}{4e^{2x}}$</p> <p>$=0$</p> <p>Hence $\displaystyle \lim_{x\to\infty}x\left(\int_0^x te^{-2t}\,dt-\dfrac{1}{4}\right)=0$</p> Answer 2: <p>You can avoid computing the integral by writing $$\lim_{x\to \infty}x\left (\int_0^xte^{-2t}dt-\frac{1}{4}\right)=\lim_{x\to \infty}\frac{\int_0^xte^{-2t}dt-\frac{1}{4}}{\frac 1x}$$ Applying L'Hospital's rule, if the limit exists the second limit is equal to $$\lim_{x\to \infty}\frac{xe^{-2x}}{-\frac{1}{x^2}}=\lim_{x\to \infty}-\frac{x^3}{e^{2x}}$$ which is $0$ by using L'Hospital three times.</p> Answer 3: <p>$$\int_{0}^{x}t e^{-2t}\,dt -\frac{1}{4} = -\int_{x}^{+\infty}t e^{-2t}\,dt=-e^{-2x}\int_{0}^{+\infty}(t+x)e^{-2t}\,dt $$ and $$ 0\leq \int_{0}^{+\infty}(t+x)e^{-2t}\,dt = x^2\int_{0}^{+\infty}(t+1)e^{-2t x}\,dt\leq Cx^2 $$ hence the limit is trivially zero by squeezing.</p> Answer 4: <p>Applying L’ Hospital’s rule to avoid the indeterminate form due to direct substitution, we have</p> <p>$=\displaystyle \lim_{x\to\infty}-\dfrac{4x+1}{8e^{2x}}$</p> <p>Indeterminate form again, applying L’ Hospital’s Rule yields</p> <p>$=\displaystyle \lim_{x\to\infty}-\dfrac{4}{16e^{2x}}$</p> <p>$=\displaystyle \lim_{x\to\infty}-\dfrac{1}{4e^{2x}}$</p> <p>$=0$</p> <p>Hence $\displaystyle \lim_{x\to\infty}x\left(\int_0^x te^{-2t}\,dt-\dfrac{1}{4}\right)=0$</p> Answer 5: <p>You can avoid computing the integral by writing $$\lim_{x\to \infty}x\left (\int_0^xte^{-2t}dt-\frac{1}{4}\right)=\lim_{x\to \infty}\frac{\int_0^xte^{-2t}dt-\frac{1}{4}}{\frac 1x}$$ Applying L'Hospital's rule, if the limit 
exists the second limit is equal to $$\lim_{x\to \infty}\frac{xe^{-2x}}{-\frac{1}{x^2}}=\lim_{x\to \infty}-\frac{x^3}{e^{2x}}$$ which is $0$ by using L'Hospital three times.</p> Answer 6: <p>$$\int_{0}^{x}t e^{-2t}\,dt -\frac{1}{4} = -\int_{x}^{+\infty}t e^{-2t}\,dt=-e^{-2x}\int_{0}^{+\infty}(t+x)e^{-2t}\,dt $$ and $$ 0\leq \int_{0}^{+\infty}(t+x)e^{-2t}\,dt = x^2\int_{0}^{+\infty}(t+1)e^{-2t x}\,dt\leq Cx^2 $$ hence the limit is trivially zero by squeezing.</p> Answer 7: <p>Applying L’ Hospital’s rule to avoid the indeterminate form due to direct substitution, we have</p> <p>$=\displaystyle \lim_{x\to\infty}-\dfrac{4x+1}{8e^{2x}}$</p> <p>Indeterminate form again, applying L’ Hospital’s Rule yields</p> <p>$=\displaystyle \lim_{x\to\infty}-\dfrac{4}{16e^{2x}}$</p> <p>$=\displaystyle \lim_{x\to\infty}-\dfrac{1}{4e^{2x}}$</p> <p>$=0$</p> <p>Hence $\displaystyle \lim_{x\to\infty}x\left(\int_0^x te^{-2t}\,dt-\dfrac{1}{4}\right)=0$</p> Answer 8: <p>You can avoid computing the integral by writing $$\lim_{x\to \infty}x\left (\int_0^xte^{-2t}dt-\frac{1}{4}\right)=\lim_{x\to \infty}\frac{\int_0^xte^{-2t}dt-\frac{1}{4}}{\frac 1x}$$ Applying L'Hospital's rule, if the limit exists the second limit is equal to $$\lim_{x\to \infty}\frac{xe^{-2x}}{-\frac{1}{x^2}}=\lim_{x\to \infty}-\frac{x^3}{e^{2x}}$$ which is $0$ by using L'Hospital three times.</p> Answer 9: <p>$$\int_{0}^{x}t e^{-2t}\,dt -\frac{1}{4} = -\int_{x}^{+\infty}t e^{-2t}\,dt=-e^{-2x}\int_{0}^{+\infty}(t+x)e^{-2t}\,dt $$ and $$ 0\leq \int_{0}^{+\infty}(t+x)e^{-2t}\,dt = x^2\int_{0}^{+\infty}(t+1)e^{-2t x}\,dt\leq Cx^2 $$ hence the limit is trivially zero by squeezing.</p> Answer 10: <p>Applying L’ Hospital’s rule to avoid the indeterminate form due to direct substitution, we have</p> <p>$=\displaystyle \lim_{x\to\infty}-\dfrac{4x+1}{8e^{2x}}$</p> <p>Indeterminate form again, applying L’ Hospital’s Rule yields</p> <p>$=\displaystyle \lim_{x\to\infty}-\dfrac{4}{16e^{2x}}$</p> <p>$=\displaystyle 
\lim_{x\to\infty}-\dfrac{1}{4e^{2x}}$</p> <p>$=0$</p> <p>Hence $\displaystyle \lim_{x\to\infty}x\left(\int_0^x te^{-2t}\,dt-\dfrac{1}{4}\right)=0$</p> Answer 11: <p>You can avoid computing the integral by writing $$\lim_{x\to \infty}x\left (\int_0^xte^{-2t}dt-\frac{1}{4}\right)=\lim_{x\to \infty}\frac{\int_0^xte^{-2t}dt-\frac{1}{4}}{\frac 1x}$$ Applying L'Hospital's rule, if the limit exists the second limit is equal to $$\lim_{x\to \infty}\frac{xe^{-2x}}{-\frac{1}{x^2}}=\lim_{x\to \infty}-\frac{x^3}{e^{2x}}$$ which is $0$ by using L'Hospital three times.</p> Answer 12: <p>$$\int_{0}^{x}t e^{-2t}\,dt -\frac{1}{4} = -\int_{x}^{+\infty}t e^{-2t}\,dt=-e^{-2x}\int_{0}^{+\infty}(t+x)e^{-2t}\,dt $$ and $$ 0\leq \int_{0}^{+\infty}(t+x)e^{-2t}\,dt = x^2\int_{0}^{+\infty}(t+1)e^{-2t x}\,dt\leq Cx^2 $$ hence the limit is trivially zero by squeezing.</p> Answer 13: <p>Applying L’ Hospital’s rule to avoid the indeterminate form due to direct substitution, we have</p> <p>$=\displaystyle \lim_{x\to\infty}-\dfrac{4x+1}{8e^{2x}}$</p> <p>Indeterminate form again, applying L’ Hospital’s Rule yields</p> <p>$=\displaystyle \lim_{x\to\infty}-\dfrac{4}{16e^{2x}}$</p> <p>$=\displaystyle \lim_{x\to\infty}-\dfrac{1}{4e^{2x}}$</p> <p>$=0$</p> <p>Hence $\displaystyle \lim_{x\to\infty}x\left(\int_0^x te^{-2t}\,dt-\dfrac{1}{4}\right)=0$</p> Answer 14: <p>You can avoid computing the integral by writing $$\lim_{x\to \infty}x\left (\int_0^xte^{-2t}dt-\frac{1}{4}\right)=\lim_{x\to \infty}\frac{\int_0^xte^{-2t}dt-\frac{1}{4}}{\frac 1x}$$ Applying L'Hospital's rule, if the limit exists the second limit is equal to $$\lim_{x\to \infty}\frac{xe^{-2x}}{-\frac{1}{x^2}}=\lim_{x\to \infty}-\frac{x^3}{e^{2x}}$$ which is $0$ by using L'Hospital three times.</p> Answer 15: <p>$$\int_{0}^{x}t e^{-2t}\,dt -\frac{1}{4} = -\int_{x}^{+\infty}t e^{-2t}\,dt=-e^{-2x}\int_{0}^{+\infty}(t+x)e^{-2t}\,dt $$ and $$ 0\leq \int_{0}^{+\infty}(t+x)e^{-2t}\,dt = x^2\int_{0}^{+\infty}(t+1)e^{-2t x}\,dt\leq Cx^2 $$ hence the 
limit is trivially zero by squeezing.</p> Answer 16: <p>Applying L’ Hospital’s rule to avoid the indeterminate form due to direct substitution, we have</p> <p>$=\displaystyle \lim_{x\to\infty}-\dfrac{4x+1}{8e^{2x}}$</p> <p>Indeterminate form again, applying L’ Hospital’s Rule yields</p> <p>$=\displaystyle \lim_{x\to\infty}-\dfrac{4}{16e^{2x}}$</p> <p>$=\displaystyle \lim_{x\to\infty}-\dfrac{1}{4e^{2x}}$</p> <p>$=0$</p> <p>Hence $\displaystyle \lim_{x\to\infty}x\left(\int_0^x te^{-2t}\,dt-\dfrac{1}{4}\right)=0$</p> Answer 17: <p>You can avoid computing the integral by writing $$\lim_{x\to \infty}x\left (\int_0^xte^{-2t}dt-\frac{1}{4}\right)=\lim_{x\to \infty}\frac{\int_0^xte^{-2t}dt-\frac{1}{4}}{\frac 1x}$$ Applying L'Hospital's rule, if the limit exists the second limit is equal to $$\lim_{x\to \infty}\frac{xe^{-2x}}{-\frac{1}{x^2}}=\lim_{x\to \infty}-\frac{x^3}{e^{2x}}$$ which is $0$ by using L'Hospital three times.</p> Answer 18: <p>$$\int_{0}^{x}t e^{-2t}\,dt -\frac{1}{4} = -\int_{x}^{+\infty}t e^{-2t}\,dt=-e^{-2x}\int_{0}^{+\infty}(t+x)e^{-2t}\,dt $$ and $$ 0\leq \int_{0}^{+\infty}(t+x)e^{-2t}\,dt = x^2\int_{0}^{+\infty}(t+1)e^{-2t x}\,dt\leq Cx^2 $$ hence the limit is trivially zero by squeezing.</p> Answer 19: <p>Applying L’ Hospital’s rule to avoid the indeterminate form due to direct substitution, we have</p> <p>$=\displaystyle \lim_{x\to\infty}-\dfrac{4x+1}{8e^{2x}}$</p> <p>Indeterminate form again, applying L’ Hospital’s Rule yields</p> <p>$=\displaystyle \lim_{x\to\infty}-\dfrac{4}{16e^{2x}}$</p> <p>$=\displaystyle \lim_{x\to\infty}-\dfrac{1}{4e^{2x}}$</p> <p>$=0$</p> <p>Hence $\displaystyle \lim_{x\to\infty}x\left(\int_0^x te^{-2t}\,dt-\dfrac{1}{4}\right)=0$</p> Answer 20: <p>You can avoid computing the integral by writing $$\lim_{x\to \infty}x\left (\int_0^xte^{-2t}dt-\frac{1}{4}\right)=\lim_{x\to \infty}\frac{\int_0^xte^{-2t}dt-\frac{1}{4}}{\frac 1x}$$ Applying L'Hospital's rule, if the limit exists the second limit is equal to $$\lim_{x\to 
\infty}\frac{xe^{-2x}}{-\frac{1}{x^2}}=\lim_{x\to \infty}-\frac{x^3}{e^{2x}}$$ which is $0$ by using L'Hospital three times.</p> Answer 21: <p>$$\int_{0}^{x}t e^{-2t}\,dt -\frac{1}{4} = -\int_{x}^{+\infty}t e^{-2t}\,dt=-e^{-2x}\int_{0}^{+\infty}(t+x)e^{-2t}\,dt $$ and $$ 0\leq \int_{0}^{+\infty}(t+x)e^{-2t}\,dt = x^2\int_{0}^{+\infty}(t+1)e^{-2t x}\,dt\leq Cx^2 $$ hence the limit is trivially zero by squeezing.</p> Answer 22: <p>Applying L’ Hospital’s rule to avoid the indeterminate form due to direct substitution, we have</p> <p>$=\displaystyle \lim_{x\to\infty}-\dfrac{4x+1}{8e^{2x}}$</p> <p>Indeterminate form again, applying L’ Hospital’s Rule yields</p> <p>$=\displaystyle \lim_{x\to\infty}-\dfrac{4}{16e^{2x}}$</p> <p>$=\displaystyle \lim_{x\to\infty}-\dfrac{1}{4e^{2x}}$</p> <p>$=0$</p> <p>Hence $\displaystyle \lim_{x\to\infty}x\left(\int_0^x te^{-2t}\,dt-\dfrac{1}{4}\right)=0$</p> Answer 23: <p>You can avoid computing the integral by writing $$\lim_{x\to \infty}x\left (\int_0^xte^{-2t}dt-\frac{1}{4}\right)=\lim_{x\to \infty}\frac{\int_0^xte^{-2t}dt-\frac{1}{4}}{\frac 1x}$$ Applying L'Hospital's rule, if the limit exists the second limit is equal to $$\lim_{x\to \infty}\frac{xe^{-2x}}{-\frac{1}{x^2}}=\lim_{x\to \infty}-\frac{x^3}{e^{2x}}$$ which is $0$ by using L'Hospital three times.</p> Answer 24: <p>$$\int_{0}^{x}t e^{-2t}\,dt -\frac{1}{4} = -\int_{x}^{+\infty}t e^{-2t}\,dt=-e^{-2x}\int_{0}^{+\infty}(t+x)e^{-2t}\,dt $$ and $$ 0\leq \int_{0}^{+\infty}(t+x)e^{-2t}\,dt = x^2\int_{0}^{+\infty}(t+1)e^{-2t x}\,dt\leq Cx^2 $$ hence the limit is trivially zero by squeezing.</p> Answer 25: <p>Applying L’ Hospital’s rule to avoid the indeterminate form due to direct substitution, we have</p> <p>$=\displaystyle \lim_{x\to\infty}-\dfrac{4x+1}{8e^{2x}}$</p> <p>Indeterminate form again, applying L’ Hospital’s Rule yields</p> <p>$=\displaystyle \lim_{x\to\infty}-\dfrac{4}{16e^{2x}}$</p> <p>$=\displaystyle \lim_{x\to\infty}-\dfrac{1}{4e^{2x}}$</p> <p>$=0$</p> <p>Hence 
$\displaystyle \lim_{x\to\infty}x\left(\int_0^x te^{-2t}\,dt-\dfrac{1}{4}\right)=0$</p> Answer 26: <p>You can avoid computing the integral by writing $$\lim_{x\to \infty}x\left (\int_0^xte^{-2t}dt-\frac{1}{4}\right)=\lim_{x\to \infty}\frac{\int_0^xte^{-2t}dt-\frac{1}{4}}{\frac 1x}$$ Applying L'Hospital's rule, if the limit exists the second limit is equal to $$\lim_{x\to \infty}\frac{xe^{-2x}}{-\frac{1}{x^2}}=\lim_{x\to \infty}-\frac{x^3}{e^{2x}}$$ which is $0$ by using L'Hospital three times.</p> Answer 27: <p>$$\int_{0}^{x}t e^{-2t}\,dt -\frac{1}{4} = -\int_{x}^{+\infty}t e^{-2t}\,dt=-e^{-2x}\int_{0}^{+\infty}(t+x)e^{-2t}\,dt $$ and $$ 0\leq \int_{0}^{+\infty}(t+x)e^{-2t}\,dt = x^2\int_{0}^{+\infty}(t+1)e^{-2t x}\,dt\leq Cx^2 $$ hence the limit is trivially zero by squeezing.</p> Answer 28: <p>Applying L’ Hospital’s rule to avoid the indeterminate form due to direct substitution, we have</p> <p>$=\displaystyle \lim_{x\to\infty}-\dfrac{4x+1}{8e^{2x}}$</p> <p>Indeterminate form again, applying L’ Hospital’s Rule yields</p> <p>$=\displaystyle \lim_{x\to\infty}-\dfrac{4}{16e^{2x}}$</p> <p>$=\displaystyle \lim_{x\to\infty}-\dfrac{1}{4e^{2x}}$</p> <p>$=0$</p> <p>Hence $\displaystyle \lim_{x\to\infty}x\left(\int_0^x te^{-2t}\,dt-\dfrac{1}{4}\right)=0$</p>
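The limit arguments above can be sanity-checked numerically. The sketch below (plain Python) uses the closed-form antiderivative $\int_0^x te^{-2t}\,dt = \frac{1}{4} - e^{-2x}\left(\frac{x}{2}+\frac{1}{4}\right)$, verifies it against the integrand by a finite difference, and then evaluates $x\left(\int_0^x te^{-2t}\,dt-\frac{1}{4}\right)$ at growing $x$ to watch it decay toward $0$; the function name `F` is an arbitrary choice, not from the original answers.

```python
import math

def F(x):
    # Closed-form value of int_0^x t*e^(-2t) dt (by parts):
    # F(x) = 1/4 - e^(-2x) * (x/2 + 1/4)
    return 0.25 - math.exp(-2 * x) * (x / 2 + 0.25)

# Sanity check the antiderivative: F'(x) should equal the integrand x*e^(-2x).
h = 1e-6
assert abs((F(1 + h) - F(1 - h)) / (2 * h) - 1 * math.exp(-2)) < 1e-8

# x * (F(x) - 1/4) = -x * e^(-2x) * (x/2 + 1/4): the exponential factor
# dominates the polynomial one, so the values shrink rapidly toward 0.
for x in (1, 5, 10, 20):
    print(x, x * (F(x) - 0.25))
```

The printed magnitudes drop from about $10^{-1}$ at $x=1$ to below $10^{-15}$ at $x=20$, consistent with the L'Hospital and squeezing arguments that the limit is $0$.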
Expanding an isomorphism between subgroups of $G$ to an automorphism on $G$
2,088,012
7
https://math.stackexchange.com/questions/2088012/expanding-an-isomorphism-between-subgroups-of-g-to-an-automorphism-on-g
<h2>Question</h2> <p>Let $G$ be some finite group, $H_1, H_2 \leq G$ and $\phi:H_1 \mapsto H_2$ be a group isomorphism. Is it always possible to expand $\phi$ to an automorphism on $G$?</p> <p>Explicitly, is there an automorphism $\sigma \in Aut_G$ s.t $\sigma \vert _{H_1}=\phi$?</p> <h2>Efforts so far</h2> <p>At first I considered defining $\sigma$ by letting $$\sigma\left(g\right)=\begin{cases} \phi\left(g\right) &amp; g\in H_{1}\\ \phi^{-1}\left(g\right) &amp; g\in H_{2}\\ g &amp; else \end{cases}$$ but obviously such a $\sigma$ is not well-defined for elements in $H_1 \cap H_2$ (other than $e$). I then tried $$\sigma\left(g\right)=\begin{cases} \phi\left(g\right) &amp; g\in H_{1}\setminus H_{2}\\ \phi^{-1}\left(g\right) &amp; g\in H_{2}\setminus H_{1}\\ g &amp; else \end{cases}$$ but then I'm unable to prove injectivity, for example when $x\in H_{1}\setminus H_{2},y\in H_{1}\cap H_{2}$ we get $\sigma\left(x\right)=\phi\left(x\right),\sigma\left(y\right)=y$ so it is unclear why $\sigma\left(x\right)\neq\sigma\left(y\right)$.</p> <p>Right now I'm questioning the truthfulness of the claim. But perhaps it's just my constructive proof that's wrong?</p>
[ { "answer_id": 2088028, "score": 8, "text": "<p>No: e.g., if $H_1$ is a subgroup of the centre of $G$ and $H_2$ is not, then there can be no automorphism of $G$ mapping $H_1$ to $H_2$.</p>\n\n<p>Thanks to Gerry Myerson and SteamyRoot for supplying specific examples.</p>\n\n<p>The conjecture also fails in abelian groups. E.g., take $G = \\Bbb{Z}_2 \\times \\Bbb{Z}_4$ and take $H_1$ and $H_2$ to be the subgroups generated by $x = (0, 2)$ and $y = (1, 0)$ respectively. Then the isomorphism $\\phi : H_1 \\to H_2$ that maps $x$ to $y$ cannot be extended to $G$ because $x$ is divisible by $2$ but $y$ is not.</p>\n" }, { "answer_id": 2088062, "score": 3, "text": "<p>It's actually kind of hard to get even an automorphism of a subgroup that extends to an automorphism of the entire group (and when that is possible, there are usually many ways to do the extension part). There are some pretty strong conditions that are necessary for what you're asking about (isomorphism of subgroups extending to automorphism of whole group) to even be possible. First, there has to be an automorphism of $G$ which maps the subgroup $H_1$ to $H_2$, which requires that the subgroup lattice 'looks the same' (up to isomorphism types) from the perspective of either subgroup (giving a technical description of what 'looks the same' actually is, is a bit difficult and tedious, but if you look at enough reasonably small subgroup lattices, you should be able to intuit what all has to happen). Moreover, the 'looks the same' condition carries over to the subgroups of $H_1$ and their corresponding subgroups for $H_2$ (which limits which isomorphisms between them will work when it even is possible to do this).</p>\n" } ]
Question: Expanding an isomorphism between subgroups of $G$ to an automorphism on $G$ Answer 1: <p>No: e.g., if $H_1$ is a subgroup of the centre of $G$ and $H_2$ is not, then there can be no automorphism of $G$ mapping $H_1$ to $H_2$.</p> <p>Thanks to Gerry Myerson and SteamyRoot for supplying specific examples.</p> <p>The conjecture also fails in abelian groups. E.g., take $G = \Bbb{Z}_2 \times \Bbb{Z}_4$ and take $H_1$ and $H_2$ to be the subgroups generated by $x = (0, 2)$ and $y = (1, 0)$ respectively. Then the isomorphism $\phi : H_1 \to H_2$ that maps $x$ to $y$ cannot be extended to $G$ because $x$ is divisible by $2$ but $y$ is not.</p> Answer 2: <p>It's actually kind of hard to get even an automorphism of a subgroup that extends to an automorphism of the entire group (and when that is possible, there are usually many ways to do the extension part). There are some pretty strong conditions that are necessary for what you're asking about (isomorphism of subgroups extending to automorphism of whole group) to even be possible. First, there has to be an automorphism of $G$ which maps the subgroup $H_1$ to $H_2$, which requires that the subgroup lattice 'looks the same' (up to isomorphism types) from the perspective of either subgroup (giving a technical description of what 'looks the same' actually is, is a bit difficult and tedious, but if you look at enough reasonably small subgroup lattices, you should be able to intuit what all has to happen). Moreover, the 'looks the same' condition carries over to the subgroups of $H_1$ and their corresponding subgroups for $H_2$ (which limits which isomorphisms between them will work when it even is possible to do this).</p>
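The abelian counterexample in the answers above ($G = \Bbb{Z}_2 \times \Bbb{Z}_4$, $x = (0, 2)$, $y = (1, 0)$) is small enough to check by brute force. A minimal sketch in Python (the helper names are my own, not from the answers): it confirms that $x$ and $y$ both have order $2$, so they generate isomorphic subgroups, but that $x$ is divisible by $2$ in $G$ while $y$ is not.

```python
from itertools import product

# The group G = Z2 x Z4, written additively.
G = list(product(range(2), range(4)))

def add(g, h):
    # Componentwise addition mod 2 and mod 4.
    return ((g[0] + h[0]) % 2, (g[1] + h[1]) % 4)

# 2G = {g + g : g in G}, the set of elements divisible by 2.
doubles = {add(g, g) for g in G}

x, y = (0, 2), (1, 0)

# Both x and y have order 2, so <x> and <y> are isomorphic...
assert add(x, x) == (0, 0) and add(y, y) == (0, 0)

# ...but x is divisible by 2 in G while y is not.
assert x in doubles
assert y not in doubles
```

Since any automorphism of $G$ maps $2G$ onto $2G$, these assertions show that no automorphism of $G$ can extend the isomorphism sending $x \mapsto y$.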
Solving differential equation with two variables
2,088,292
7
https://math.stackexchange.com/questions/2088292/solving-differential-equation-with-two-variables
<p>How to solve a differential equation of the following form:</p> <p>$\dfrac{ax}{\dfrac{\partial{f(x,y)}}{\partial x}} + \dfrac{by}{\dfrac{\partial{f(x,y)}}{\partial y}}=-1 $</p> <p>Where, $f(x_0,y_0)=c$. </p>
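A quick way to probe the equation in the body is to test explicit trial solutions. As a sketch (the ansatz and the sample values below are my own choices, not part of the question), exact rational arithmetic confirms that every quadratic $f(x,y)=\frac{ax^2}{2\lambda}-\frac{by^2}{2(\lambda+1)}$ with constant $\lambda\neq 0,-1$ satisfies the PDE:

```python
from fractions import Fraction as F

def residual(a, b, lam, x, y):
    # Trial solution f(x, y) = a*x**2/(2*lam) - b*y**2/(2*(lam + 1)),
    # with hand-computed partials f_x = a*x/lam and f_y = -b*y/(lam + 1).
    fx = a * x / lam
    fy = -b * y / (lam + 1)
    # Left-hand side of the PDE: a*x/f_x + b*y/f_y.
    return a * x / fx + b * y / fy

# a*x/f_x = lam and b*y/f_y = -(lam + 1), so the sum is exactly -1
# for any constant lam other than 0 and -1 (at any point with x, y != 0).
for lam in (F(1), F(2), F(-3), F(1, 2)):
    assert residual(F(3), F(5), lam, F(7), F(-11)) == -1
```

Because a free constant $\lambda$ survives, the single condition $f(x_0,y_0)=c$ leaves a one-parameter family of solutions rather than a unique one.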
[ { "answer_id": 2088431, "score": 2, "text": "<p>Using the definition of <a href=\"https://en.wikipedia.org/wiki/Implicit_function#Implicit_differentiation\" rel=\"nofollow noreferrer\">implicit differentiation</a> for $f(x,y)$ one can write\n$$\\frac{dy}{dx}=-\\frac{f_x}{f_y}$$\nAnd from your equation:\n$$\\frac{a x}{f_x}=-\\frac{b y+f_y}{f_y}$$\nTherefore:\n$$\\begin{align}\n\\frac{dy}{dx}=\\frac{a x}{b y+f_y}&amp;\\implies\\int a x\\,dx=\\int(b y+f_y)dy\\\\\n&amp;\\implies\\dfrac 12 ax^2+C=\\dfrac 12 by^2+f(x,y)\\\\\n&amp;\\implies f(x,y)=\\dfrac 12 ax^2-\\dfrac 12 by^2+C\n\\end{align}$$\nUpdate:<br>\nThe boundary condition is $f(x_0,y_0)=c$ which means\n$$\\dfrac 12 (ax_0^2-by_0^2)+C=c\\implies C=c-\\dfrac 12 (ax_0^2-by_0^2)$$\nand finally\n$$f(x,y)=\\dfrac 12 a(x^2-x_0^2)-\\dfrac 12 b(y^2-y_0^2)+c$$\nBut it seems the answer must be $\\frac a2(x^2-x_0^2)-\\frac b{\\color{red}{4}}(y^2-y_0^2)+c$. Don't know why I ended up with that.</p>\n" }, { "answer_id": 2088512, "score": 2, "text": "<p>$$\\dfrac{ax}{\\dfrac{\\partial{f(x,y)}}{\\partial x}} + \\dfrac{by}{\\dfrac{\\partial{f(x,y)}}{\\partial y}}=-1 \\quad \\text{ with condition} \\quad f(x_0,y_0)=c$$\nThis is a non-linear PDE : The given condition isn't sufficient to determine a unique solution.</p>\n\n<p>For example, the method of separation of variables leads to :\n$$f(x,y)=\\frac{a(x^2-x_0^2)}{2\\lambda}-\\frac{b(y^2-y_0^2)}{2(\\lambda+1)}+c$$\nwhich is a family of solutions of the PDE fitting with the given condition.</p>\n\n<p>This is true any constant $\\lambda$, except $0$ and $-1$. 
So, they are an infinity of solutions.</p>\n\n<p>Note : The solution $\\frac a2(x^2-x_0^2)-\\frac b{\\color{red}{4}}(y^2-y_0^2)+c$ cited by \"polfosol\" is the particular solution in case of $\\lambda=1$.</p>\n" }, { "answer_id": 2088431, "score": 2, "text": "<p>Using the definition of <a href=\"https://en.wikipedia.org/wiki/Implicit_function#Implicit_differentiation\" rel=\"nofollow noreferrer\">implicit differentiation</a> for $f(x,y)$ one can write\n$$\\frac{dy}{dx}=-\\frac{f_x}{f_y}$$\nAnd from your equation:\n$$\\frac{a x}{f_x}=-\\frac{b y+f_y}{f_y}$$\nTherefore:\n$$\\begin{align}\n\\frac{dy}{dx}=\\frac{a x}{b y+f_y}&amp;\\implies\\int a x\\,dx=\\int(b y+f_y)dy\\\\\n&amp;\\implies\\dfrac 12 ax^2+C=\\dfrac 12 by^2+f(x,y)\\\\\n&amp;\\implies f(x,y)=\\dfrac 12 ax^2-\\dfrac 12 by^2+C\n\\end{align}$$\nUpdate:<br>\nThe boundary condition is $f(x_0,y_0)=c$ which means\n$$\\dfrac 12 (ax_0^2-by_0^2)+C=c\\implies C=c-\\dfrac 12 (ax_0^2-by_0^2)$$\nand finally\n$$f(x,y)=\\dfrac 12 a(x^2-x_0^2)-\\dfrac 12 b(y^2-y_0^2)+c$$\nBut it seems the answer must be $\\frac a2(x^2-x_0^2)-\\frac b{\\color{red}{4}}(y^2-y_0^2)+c$. Don't know why I ended up with that.</p>\n" }, { "answer_id": 2088512, "score": 2, "text": "<p>$$\\dfrac{ax}{\\dfrac{\\partial{f(x,y)}}{\\partial x}} + \\dfrac{by}{\\dfrac{\\partial{f(x,y)}}{\\partial y}}=-1 \\quad \\text{ with condition} \\quad f(x_0,y_0)=c$$\nThis is a non-linear PDE : The given condition isn't sufficient to determine a unique solution.</p>\n\n<p>For example, the method of separation of variables leads to :\n$$f(x,y)=\\frac{a(x^2-x_0^2)}{2\\lambda}-\\frac{b(y^2-y_0^2)}{2(\\lambda+1)}+c$$\nwhich is a family of solutions of the PDE fitting with the given condition.</p>\n\n<p>This is true any constant $\\lambda$, except $0$ and $-1$. 
So, they are an infinity of solutions.</p>\n\n<p>Note : The solution $\\frac a2(x^2-x_0^2)-\\frac b{\\color{red}{4}}(y^2-y_0^2)+c$ cited by \"polfosol\" is the particular solution in case of $\\lambda=1$.</p>\n" }, { "answer_id": 2088431, "score": 2, "text": "<p>Using the definition of <a href=\"https://en.wikipedia.org/wiki/Implicit_function#Implicit_differentiation\" rel=\"nofollow noreferrer\">implicit differentiation</a> for $f(x,y)$ one can write\n$$\\frac{dy}{dx}=-\\frac{f_x}{f_y}$$\nAnd from your equation:\n$$\\frac{a x}{f_x}=-\\frac{b y+f_y}{f_y}$$\nTherefore:\n$$\\begin{align}\n\\frac{dy}{dx}=\\frac{a x}{b y+f_y}&amp;\\implies\\int a x\\,dx=\\int(b y+f_y)dy\\\\\n&amp;\\implies\\dfrac 12 ax^2+C=\\dfrac 12 by^2+f(x,y)\\\\\n&amp;\\implies f(x,y)=\\dfrac 12 ax^2-\\dfrac 12 by^2+C\n\\end{align}$$\nUpdate:<br>\nThe boundary condition is $f(x_0,y_0)=c$ which means\n$$\\dfrac 12 (ax_0^2-by_0^2)+C=c\\implies C=c-\\dfrac 12 (ax_0^2-by_0^2)$$\nand finally\n$$f(x,y)=\\dfrac 12 a(x^2-x_0^2)-\\dfrac 12 b(y^2-y_0^2)+c$$\nBut it seems the answer must be $\\frac a2(x^2-x_0^2)-\\frac b{\\color{red}{4}}(y^2-y_0^2)+c$. Don't know why I ended up with that.</p>\n" }, { "answer_id": 2088512, "score": 2, "text": "<p>$$\\dfrac{ax}{\\dfrac{\\partial{f(x,y)}}{\\partial x}} + \\dfrac{by}{\\dfrac{\\partial{f(x,y)}}{\\partial y}}=-1 \\quad \\text{ with condition} \\quad f(x_0,y_0)=c$$\nThis is a non-linear PDE : The given condition isn't sufficient to determine a unique solution.</p>\n\n<p>For example, the method of separation of variables leads to :\n$$f(x,y)=\\frac{a(x^2-x_0^2)}{2\\lambda}-\\frac{b(y^2-y_0^2)}{2(\\lambda+1)}+c$$\nwhich is a family of solutions of the PDE fitting with the given condition.</p>\n\n<p>This is true any constant $\\lambda$, except $0$ and $-1$. 
So, they are an infinity of solutions.</p>\n\n<p>Note : The solution $\\frac a2(x^2-x_0^2)-\\frac b{\\color{red}{4}}(y^2-y_0^2)+c$ cited by \"polfosol\" is the particular solution in case of $\\lambda=1$.</p>\n" }, { "answer_id": 2088431, "score": 2, "text": "<p>Using the definition of <a href=\"https://en.wikipedia.org/wiki/Implicit_function#Implicit_differentiation\" rel=\"nofollow noreferrer\">implicit differentiation</a> for $f(x,y)$ one can write\n$$\\frac{dy}{dx}=-\\frac{f_x}{f_y}$$\nAnd from your equation:\n$$\\frac{a x}{f_x}=-\\frac{b y+f_y}{f_y}$$\nTherefore:\n$$\\begin{align}\n\\frac{dy}{dx}=\\frac{a x}{b y+f_y}&amp;\\implies\\int a x\\,dx=\\int(b y+f_y)dy\\\\\n&amp;\\implies\\dfrac 12 ax^2+C=\\dfrac 12 by^2+f(x,y)\\\\\n&amp;\\implies f(x,y)=\\dfrac 12 ax^2-\\dfrac 12 by^2+C\n\\end{align}$$\nUpdate:<br>\nThe boundary condition is $f(x_0,y_0)=c$ which means\n$$\\dfrac 12 (ax_0^2-by_0^2)+C=c\\implies C=c-\\dfrac 12 (ax_0^2-by_0^2)$$\nand finally\n$$f(x,y)=\\dfrac 12 a(x^2-x_0^2)-\\dfrac 12 b(y^2-y_0^2)+c$$\nBut it seems the answer must be $\\frac a2(x^2-x_0^2)-\\frac b{\\color{red}{4}}(y^2-y_0^2)+c$. Don't know why I ended up with that.</p>\n" }, { "answer_id": 2088512, "score": 2, "text": "<p>$$\\dfrac{ax}{\\dfrac{\\partial{f(x,y)}}{\\partial x}} + \\dfrac{by}{\\dfrac{\\partial{f(x,y)}}{\\partial y}}=-1 \\quad \\text{ with condition} \\quad f(x_0,y_0)=c$$\nThis is a non-linear PDE : The given condition isn't sufficient to determine a unique solution.</p>\n\n<p>For example, the method of separation of variables leads to :\n$$f(x,y)=\\frac{a(x^2-x_0^2)}{2\\lambda}-\\frac{b(y^2-y_0^2)}{2(\\lambda+1)}+c$$\nwhich is a family of solutions of the PDE fitting with the given condition.</p>\n\n<p>This is true any constant $\\lambda$, except $0$ and $-1$. 
So, they are an infinity of solutions.</p>\n\n<p>Note : The solution $\\frac a2(x^2-x_0^2)-\\frac b{\\color{red}{4}}(y^2-y_0^2)+c$ cited by \"polfosol\" is the particular solution in case of $\\lambda=1$.</p>\n" }, { "answer_id": 2088431, "score": 2, "text": "<p>Using the definition of <a href=\"https://en.wikipedia.org/wiki/Implicit_function#Implicit_differentiation\" rel=\"nofollow noreferrer\">implicit differentiation</a> for $f(x,y)$ one can write\n$$\\frac{dy}{dx}=-\\frac{f_x}{f_y}$$\nAnd from your equation:\n$$\\frac{a x}{f_x}=-\\frac{b y+f_y}{f_y}$$\nTherefore:\n$$\\begin{align}\n\\frac{dy}{dx}=\\frac{a x}{b y+f_y}&amp;\\implies\\int a x\\,dx=\\int(b y+f_y)dy\\\\\n&amp;\\implies\\dfrac 12 ax^2+C=\\dfrac 12 by^2+f(x,y)\\\\\n&amp;\\implies f(x,y)=\\dfrac 12 ax^2-\\dfrac 12 by^2+C\n\\end{align}$$\nUpdate:<br>\nThe boundary condition is $f(x_0,y_0)=c$ which means\n$$\\dfrac 12 (ax_0^2-by_0^2)+C=c\\implies C=c-\\dfrac 12 (ax_0^2-by_0^2)$$\nand finally\n$$f(x,y)=\\dfrac 12 a(x^2-x_0^2)-\\dfrac 12 b(y^2-y_0^2)+c$$\nBut it seems the answer must be $\\frac a2(x^2-x_0^2)-\\frac b{\\color{red}{4}}(y^2-y_0^2)+c$. Don't know why I ended up with that.</p>\n" }, { "answer_id": 2088512, "score": 2, "text": "<p>$$\\dfrac{ax}{\\dfrac{\\partial{f(x,y)}}{\\partial x}} + \\dfrac{by}{\\dfrac{\\partial{f(x,y)}}{\\partial y}}=-1 \\quad \\text{ with condition} \\quad f(x_0,y_0)=c$$\nThis is a non-linear PDE : The given condition isn't sufficient to determine a unique solution.</p>\n\n<p>For example, the method of separation of variables leads to :\n$$f(x,y)=\\frac{a(x^2-x_0^2)}{2\\lambda}-\\frac{b(y^2-y_0^2)}{2(\\lambda+1)}+c$$\nwhich is a family of solutions of the PDE fitting with the given condition.</p>\n\n<p>This is true any constant $\\lambda$, except $0$ and $-1$. 
So, there are infinitely many solutions.</p>\n\n<p>Note : The solution $\\frac a2(x^2-x_0^2)-\\frac b{\\color{red}{4}}(y^2-y_0^2)+c$ cited by \"polfosol\" is the particular solution in the case $\\lambda=1$.</p>\n" } ]
Question: Solving differential equation with two variables
Answer 1: <p>Using the definition of <a href="https://en.wikipedia.org/wiki/Implicit_function#Implicit_differentiation" rel="nofollow noreferrer">implicit differentiation</a> for $f(x,y)$ one can write
$$\frac{dy}{dx}=-\frac{f_x}{f_y}$$
And from your equation:
$$\frac{a x}{f_x}=-\frac{b y+f_y}{f_y}$$
Therefore:
$$\begin{align}
\frac{dy}{dx}=\frac{a x}{b y+f_y}&amp;\implies\int a x\,dx=\int(b y+f_y)dy\\
&amp;\implies\dfrac 12 ax^2+C=\dfrac 12 by^2+f(x,y)\\
&amp;\implies f(x,y)=\dfrac 12 ax^2-\dfrac 12 by^2+C
\end{align}$$
Update:<br>
The boundary condition is $f(x_0,y_0)=c$ which means
$$\dfrac 12 (ax_0^2-by_0^2)+C=c\implies C=c-\dfrac 12 (ax_0^2-by_0^2)$$
and finally
$$f(x,y)=\dfrac 12 a(x^2-x_0^2)-\dfrac 12 b(y^2-y_0^2)+c$$
But it seems the answer must be $\frac a2(x^2-x_0^2)-\frac b{\color{red}{4}}(y^2-y_0^2)+c$. Don't know why I ended up with that.</p>
Answer 2: <p>$$\dfrac{ax}{\dfrac{\partial{f(x,y)}}{\partial x}} + \dfrac{by}{\dfrac{\partial{f(x,y)}}{\partial y}}=-1 \quad \text{ with condition} \quad f(x_0,y_0)=c$$
This is a non-linear PDE : The given condition isn't sufficient to determine a unique solution.</p>
<p>For example, the method of separation of variables leads to :
$$f(x,y)=\frac{a(x^2-x_0^2)}{2\lambda}-\frac{b(y^2-y_0^2)}{2(\lambda+1)}+c$$
which is a family of solutions of the PDE fitting the given condition.</p>
<p>This is true for any constant $\lambda$, except $0$ and $-1$. 
So, there are infinitely many solutions.</p>
<p>Note : The solution $\frac a2(x^2-x_0^2)-\frac b{\color{red}{4}}(y^2-y_0^2)+c$ cited by "polfosol" is the particular solution in the case $\lambda=1$.</p>
So, they are an infinity of solutions.</p> <p>Note : The solution $\frac a2(x^2-x_0^2)-\frac b{\color{red}{4}}(y^2-y_0^2)+c$ cited by "polfosol" is the particular solution in case of $\lambda=1$.</p> Answer 25: <p>Using the definition of <a href="https://en.wikipedia.org/wiki/Implicit_function#Implicit_differentiation" rel="nofollow noreferrer">implicit differentiation</a> for $f(x,y)$ one can write $$\frac{dy}{dx}=-\frac{f_x}{f_y}$$ And from your equation: $$\frac{a x}{f_x}=-\frac{b y+f_y}{f_y}$$ Therefore: $$\begin{align} \frac{dy}{dx}=\frac{a x}{b y+f_y}&amp;\implies\int a x\,dx=\int(b y+f_y)dy\\ &amp;\implies\dfrac 12 ax^2+C=\dfrac 12 by^2+f(x,y)\\ &amp;\implies f(x,y)=\dfrac 12 ax^2-\dfrac 12 by^2+C \end{align}$$ Update:<br> The boundary condition is $f(x_0,y_0)=c$ which means $$\dfrac 12 (ax_0^2-by_0^2)+C=c\implies C=c-\dfrac 12 (ax_0^2-by_0^2)$$ and finally $$f(x,y)=\dfrac 12 a(x^2-x_0^2)-\dfrac 12 b(y^2-y_0^2)+c$$ But it seems the answer must be $\frac a2(x^2-x_0^2)-\frac b{\color{red}{4}}(y^2-y_0^2)+c$. Don't know why I ended up with that.</p> Answer 26: <p>$$\dfrac{ax}{\dfrac{\partial{f(x,y)}}{\partial x}} + \dfrac{by}{\dfrac{\partial{f(x,y)}}{\partial y}}=-1 \quad \text{ with condition} \quad f(x_0,y_0)=c$$ This is a non-linear PDE : The given condition isn't sufficient to determine a unique solution.</p> <p>For example, the method of separation of variables leads to : $$f(x,y)=\frac{a(x^2-x_0^2)}{2\lambda}-\frac{b(y^2-y_0^2)}{2(\lambda+1)}+c$$ which is a family of solutions of the PDE fitting with the given condition.</p> <p>This is true any constant $\lambda$, except $0$ and $-1$. 
So, they are an infinity of solutions.</p> <p>Note : The solution $\frac a2(x^2-x_0^2)-\frac b{\color{red}{4}}(y^2-y_0^2)+c$ cited by "polfosol" is the particular solution in case of $\lambda=1$.</p> Answer 27: <p>Using the definition of <a href="https://en.wikipedia.org/wiki/Implicit_function#Implicit_differentiation" rel="nofollow noreferrer">implicit differentiation</a> for $f(x,y)$ one can write $$\frac{dy}{dx}=-\frac{f_x}{f_y}$$ And from your equation: $$\frac{a x}{f_x}=-\frac{b y+f_y}{f_y}$$ Therefore: $$\begin{align} \frac{dy}{dx}=\frac{a x}{b y+f_y}&amp;\implies\int a x\,dx=\int(b y+f_y)dy\\ &amp;\implies\dfrac 12 ax^2+C=\dfrac 12 by^2+f(x,y)\\ &amp;\implies f(x,y)=\dfrac 12 ax^2-\dfrac 12 by^2+C \end{align}$$ Update:<br> The boundary condition is $f(x_0,y_0)=c$ which means $$\dfrac 12 (ax_0^2-by_0^2)+C=c\implies C=c-\dfrac 12 (ax_0^2-by_0^2)$$ and finally $$f(x,y)=\dfrac 12 a(x^2-x_0^2)-\dfrac 12 b(y^2-y_0^2)+c$$ But it seems the answer must be $\frac a2(x^2-x_0^2)-\frac b{\color{red}{4}}(y^2-y_0^2)+c$. Don't know why I ended up with that.</p> Answer 28: <p>$$\dfrac{ax}{\dfrac{\partial{f(x,y)}}{\partial x}} + \dfrac{by}{\dfrac{\partial{f(x,y)}}{\partial y}}=-1 \quad \text{ with condition} \quad f(x_0,y_0)=c$$ This is a non-linear PDE : The given condition isn't sufficient to determine a unique solution.</p> <p>For example, the method of separation of variables leads to : $$f(x,y)=\frac{a(x^2-x_0^2)}{2\lambda}-\frac{b(y^2-y_0^2)}{2(\lambda+1)}+c$$ which is a family of solutions of the PDE fitting with the given condition.</p> <p>This is true any constant $\lambda$, except $0$ and $-1$. 
So, they are an infinity of solutions.</p> <p>Note : The solution $\frac a2(x^2-x_0^2)-\frac b{\color{red}{4}}(y^2-y_0^2)+c$ cited by "polfosol" is the particular solution in case of $\lambda=1$.</p> Answer 29: <p>Using the definition of <a href="https://en.wikipedia.org/wiki/Implicit_function#Implicit_differentiation" rel="nofollow noreferrer">implicit differentiation</a> for $f(x,y)$ one can write $$\frac{dy}{dx}=-\frac{f_x}{f_y}$$ And from your equation: $$\frac{a x}{f_x}=-\frac{b y+f_y}{f_y}$$ Therefore: $$\begin{align} \frac{dy}{dx}=\frac{a x}{b y+f_y}&amp;\implies\int a x\,dx=\int(b y+f_y)dy\\ &amp;\implies\dfrac 12 ax^2+C=\dfrac 12 by^2+f(x,y)\\ &amp;\implies f(x,y)=\dfrac 12 ax^2-\dfrac 12 by^2+C \end{align}$$ Update:<br> The boundary condition is $f(x_0,y_0)=c$ which means $$\dfrac 12 (ax_0^2-by_0^2)+C=c\implies C=c-\dfrac 12 (ax_0^2-by_0^2)$$ and finally $$f(x,y)=\dfrac 12 a(x^2-x_0^2)-\dfrac 12 b(y^2-y_0^2)+c$$ But it seems the answer must be $\frac a2(x^2-x_0^2)-\frac b{\color{red}{4}}(y^2-y_0^2)+c$. Don't know why I ended up with that.</p> Answer 30: <p>$$\dfrac{ax}{\dfrac{\partial{f(x,y)}}{\partial x}} + \dfrac{by}{\dfrac{\partial{f(x,y)}}{\partial y}}=-1 \quad \text{ with condition} \quad f(x_0,y_0)=c$$ This is a non-linear PDE : The given condition isn't sufficient to determine a unique solution.</p> <p>For example, the method of separation of variables leads to : $$f(x,y)=\frac{a(x^2-x_0^2)}{2\lambda}-\frac{b(y^2-y_0^2)}{2(\lambda+1)}+c$$ which is a family of solutions of the PDE fitting with the given condition.</p> <p>This is true any constant $\lambda$, except $0$ and $-1$. So, they are an infinity of solutions.</p> <p>Note : The solution $\frac a2(x^2-x_0^2)-\frac b{\color{red}{4}}(y^2-y_0^2)+c$ cited by "polfosol" is the particular solution in case of $\lambda=1$.</p>
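As a quick numerical sanity check (an editorial sketch, not part of the original answers), one can verify by hand-computed partial derivatives that the one-parameter family $f(x,y)=\frac{a(x^2-x_0^2)}{2\lambda}-\frac{b(y^2-y_0^2)}{2(\lambda+1)}+c$ really does satisfy $\frac{ax}{f_x}+\frac{by}{f_y}=-1$ and $f(x_0,y_0)=c$ for several values of $\lambda$:

```python
# Sanity check (not from the original answers): verify that the family
#   f(x, y) = a (x^2 - x0^2) / (2 lam) - b (y^2 - y0^2) / (2 (lam + 1)) + c
# satisfies  a x / f_x + b y / f_y = -1  and  f(x0, y0) = c.
def check_family(a, b, c, x0, y0, x, y, lam):
    # Partial derivatives of f, computed by hand from the formula above.
    fx = a * x / lam
    fy = -b * y / (lam + 1)
    residual = a * x / fx + b * y / fy   # equals lam - (lam + 1) = -1
    f_at_anchor = (a * (x0**2 - x0**2) / (2 * lam)
                   - b * (y0**2 - y0**2) / (2 * (lam + 1)) + c)
    return residual, f_at_anchor

# Any lam other than 0 and -1 works, as the answer claims.
for lam in (1.0, 2.0, -0.5, 3.7):
    residual, f0 = check_family(a=2.0, b=3.0, c=5.0, x0=1.0, y0=1.0,
                                x=0.7, y=-1.3, lam=lam)
    assert abs(residual - (-1.0)) < 1e-12
    assert abs(f0 - 5.0) < 1e-12
```

The check makes the non-uniqueness concrete: the residual is exactly $\lambda-(\lambda+1)=-1$ independently of $\lambda$, so every member of the family is a solution.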
Sum of $n$ terms of the given series.
2,088,797
7
https://math.stackexchange.com/questions/2088797/sum-of-n-terms-of-the-given-series
<p>Find the sum of $n$ terms of the series:</p> <p>$$\frac{1}{x+1}+\frac{2x}{(x+1)(x+2)}+\frac{3x^2}{(x+1)(x+2)(x+3)}+\frac{4x^3}{(x+1)(x+2)(x+3)(x+4)}+\cdots$$</p> <p>Could someone give me a slight hint on how to proceed with this question?</p>
[ { "answer_id": 2088813, "score": 7, "text": "<p><strong>Hint</strong>. One may observe that\n$$\n\\frac{nx^{n-1}}{\\prod _{k=1}^n (x+k)}=\\left(1-\\frac{x^n \\Gamma(x+1)}{\\Gamma(x+n+1)}\\right)-\\left(1-\\frac{x^{n-1} \\Gamma(x+1)}{\\Gamma(x+n)}\\right)\n$$ giving\n$$\n\\sum_{n=1}^N\\frac{nx^{n-1}}{\\prod _{k=1}^n (x+k)}=1-\\frac{x^N \\Gamma(x+1)}{\\Gamma(x+N+1)}\n$$</p>\n" }, { "answer_id": 2088846, "score": 3, "text": "<p>Recalling that <a href=\"https://en.wikipedia.org/wiki/Incomplete_gamma_function#Holomorphic_extension\" rel=\"nofollow noreferrer\">the lower incomplete gamma function</a> has the power series $$\\gamma\\left(s,x\\right)=x^{s-1}\\Gamma\\left(s\\right)e^{-x}\\sum_{k\\geq1}\\frac{x^{k}}{\\Gamma\\left(s+k\\right)}\\tag{1}$$ we have $$\\frac{d}{dx}\\left(\\frac{\\gamma\\left(s,x\\right)}{x^{s-1}\\Gamma\\left(s\\right)e^{-x}}\\right)=\\sum_{k\\geq1}\\frac{kx^{k-1}}{\\Gamma\\left(s+k\\right)}\n $$ hence $$\\sum_{k\\geq1}\\frac{kx^{k-1}}{\\Gamma\\left(s+k\\right)}=\\frac{1+x^{-s}e^{x}\\left(x-1-s\\right)\\gamma\\left(s,x\\right)}{\\Gamma\\left(s\\right)}$$ so taking $s=x+1$ we get $$\\sum_{k\\geq1}\\frac{kx^{k-1}}{\\Gamma\\left(x+1+k\\right)}=\\frac{1}{\\Gamma\\left(x+1\\right)}\\tag{2}\n $$ and now recalling that the <a href=\"http://mathworld.wolfram.com/PochhammerSymbol.html\" rel=\"nofollow noreferrer\">Pochhammer symbol</a> can be written as $$\\left(x+1\\right)\\cdots\\left(x+n\\right)=\\left(x+1\\right)_{n}=\\frac{\\Gamma\\left(x+1+n\\right)}{\\Gamma\\left(x+1\\right)}\\tag{3}$$ we get $$\\sum_{k\\geq1}\\frac{kx^{k-1}}{\\left(x+1\\right)\\cdots\\left(x+k\\right)}=1.\n $$ Then $$\\sum_{k=1}^{N}\\frac{kx^{k-1}}{\\left(x+1\\right)\\cdots\\left(x+k\\right)}=\\sum_{k\\geq1}\\frac{kx^{k-1}}{\\left(x+1\\right)\\cdots\\left(x+k\\right)}-\\sum_{k=N+1}^{\\infty}\\frac{kx^{k-1}}{\\left(x+1\\right)\\cdots\\left(x+k\\right)}\n $$ $$=1-x^{N}\\sum_{k\\geq1}\\frac{\\left(k+N\\right)x^{k-1}}{\\left(x+1\\right)\\cdots\\left(x+k+N\\right)}=1-\\frac{x^{N}\\Gamma\\left(x+1\\right)}{\\Gamma\\left(x+N+1\\right)}$$ using $(1)$, $(2)$ and $(3)$.</p>\n" } ]
One may observe that\n$$\n\\frac{nx^{n-1}}{\\prod _{k=1}^n (x+k)}=\\left(1-\\frac{x^n \\Gamma(x+1)}{\\Gamma(x+n+1)}\\right)-\\left(1-\\frac{x^{n-1} \\Gamma(x+1)}{\\Gamma(x+n)}\\right)\n$$ giving\n$$\n\\sum_{n=1}^N\\frac{nx^{n-1}}{\\prod _{k=1}^n (x+k)}=1-\\frac{x^N \\Gamma(x+1)}{\\Gamma(x+N+1)}\n$$</p>\n" }, { "answer_id": 2088846, "score": 3, "text": "<p>Recalling that <a href=\"https://en.wikipedia.org/wiki/Incomplete_gamma_function#Holomorphic_extension\" rel=\"nofollow noreferrer\">the lower incomplete gamma function</a> has the power series $$\\gamma\\left(s,x\\right)=x^{s-1}\\Gamma\\left(s\\right)e^{-x}\\sum_{k\\geq1}\\frac{x^{k}}{\\Gamma\\left(s+k\\right)}\\tag{1}$$ we have $$\\frac{d}{dx}\\left(\\frac{\\gamma\\left(s,x\\right)}{x^{s-1}\\Gamma\\left(s\\right)e^{-x}}\\right)=\\sum_{k\\geq1}\\frac{kx^{k-1}}{\\Gamma\\left(s+k\\right)}\n $$ hence $$\\sum_{k\\geq1}\\frac{kx^{k-1}}{\\Gamma\\left(s+k\\right)}=\\frac{1+x^{-s}e^{x}\\left(x-1-s\\right)\\gamma\\left(s,x\\right)}{\\Gamma\\left(s\\right)}$$ so taking $s=x+1$ we get $$\\sum_{k\\geq1}\\frac{kx^{k-1}}{\\Gamma\\left(x+1+k\\right)}=\\frac{1}{\\Gamma\\left(x+1\\right)}\\tag{2}\n $$ and now recalling that the <a href=\"http://mathworld.wolfram.com/PochhammerSymbol.html\" rel=\"nofollow noreferrer\">Pochhammer symbol</a> can be written as $$\\left(x+1\\right)\\cdots\\left(x+n\\right)=\\left(x+1\\right)_{n}=\\frac{\\Gamma\\left(x+1+n\\right)}{\\Gamma\\left(x+1\\right)}\\tag{3}$$ we get $$\\sum_{k\\geq1}\\frac{kx^{k-1}}{\\left(x+1\\right)\\cdots\\left(x+k\\right)}=1.\n $$ Then $$\\sum_{k=1}^{N}\\frac{kx^{k-1}}{\\left(x+1\\right)\\cdots\\left(x+k\\right)}=\\sum_{k\\geq1}\\frac{kx^{k-1}}{\\left(x+1\\right)\\cdots\\left(x+k\\right)}-\\sum_{k=N+1}^{\\infty}\\frac{kx^{k-1}}{\\left(x+1\\right)\\cdots\\left(x+k\\right)}\n $$ $$=1-x^{N}\\sum_{k\\geq1}\\frac{\\left(k+N\\right)x^{k-1}}{\\left(x+1\\right)\\cdots\\left(x+k+N\\right)}=1-\\frac{x^{N}\\Gamma\\left(x+1\\right)}{\\Gamma\\left(x+N+1\\right)}$$ using $(1)$, $(2)$ 
and $(3)$.</p>\n" }, { "answer_id": 2088813, "score": 7, "text": "<p><strong>Hint</strong>. One may observe that\n$$\n\\frac{nx^{n-1}}{\\prod _{k=1}^n (x+k)}=\\left(1-\\frac{x^n \\Gamma(x+1)}{\\Gamma(x+n+1)}\\right)-\\left(1-\\frac{x^{n-1} \\Gamma(x+1)}{\\Gamma(x+n)}\\right)\n$$ giving\n$$\n\\sum_{n=1}^N\\frac{nx^{n-1}}{\\prod _{k=1}^n (x+k)}=1-\\frac{x^N \\Gamma(x+1)}{\\Gamma(x+N+1)}\n$$</p>\n" }, { "answer_id": 2088846, "score": 3, "text": "<p>Recalling that <a href=\"https://en.wikipedia.org/wiki/Incomplete_gamma_function#Holomorphic_extension\" rel=\"nofollow noreferrer\">the lower incomplete gamma function</a> has the power series $$\\gamma\\left(s,x\\right)=x^{s-1}\\Gamma\\left(s\\right)e^{-x}\\sum_{k\\geq1}\\frac{x^{k}}{\\Gamma\\left(s+k\\right)}\\tag{1}$$ we have $$\\frac{d}{dx}\\left(\\frac{\\gamma\\left(s,x\\right)}{x^{s-1}\\Gamma\\left(s\\right)e^{-x}}\\right)=\\sum_{k\\geq1}\\frac{kx^{k-1}}{\\Gamma\\left(s+k\\right)}\n $$ hence $$\\sum_{k\\geq1}\\frac{kx^{k-1}}{\\Gamma\\left(s+k\\right)}=\\frac{1+x^{-s}e^{x}\\left(x-1-s\\right)\\gamma\\left(s,x\\right)}{\\Gamma\\left(s\\right)}$$ so taking $s=x+1$ we get $$\\sum_{k\\geq1}\\frac{kx^{k-1}}{\\Gamma\\left(x+1+k\\right)}=\\frac{1}{\\Gamma\\left(x+1\\right)}\\tag{2}\n $$ and now recalling that the <a href=\"http://mathworld.wolfram.com/PochhammerSymbol.html\" rel=\"nofollow noreferrer\">Pochhammer symbol</a> can be written as $$\\left(x+1\\right)\\cdots\\left(x+n\\right)=\\left(x+1\\right)_{n}=\\frac{\\Gamma\\left(x+1+n\\right)}{\\Gamma\\left(x+1\\right)}\\tag{3}$$ we get $$\\sum_{k\\geq1}\\frac{kx^{k-1}}{\\left(x+1\\right)\\cdots\\left(x+k\\right)}=1.\n $$ Then $$\\sum_{k=1}^{N}\\frac{kx^{k-1}}{\\left(x+1\\right)\\cdots\\left(x+k\\right)}=\\sum_{k\\geq1}\\frac{kx^{k-1}}{\\left(x+1\\right)\\cdots\\left(x+k\\right)}-\\sum_{k=N+1}^{\\infty}\\frac{kx^{k-1}}{\\left(x+1\\right)\\cdots\\left(x+k\\right)}\n $$ 
$$=1-x^{N}\\sum_{k\\geq1}\\frac{\\left(k+N\\right)x^{k-1}}{\\left(x+1\\right)\\cdots\\left(x+k+N\\right)}=1-\\frac{x^{N}\\Gamma\\left(x+1\\right)}{\\Gamma\\left(x+N+1\\right)}$$ using $(1)$, $(2)$ and $(3)$.</p>\n" } ]
Question: Sum of $n$ terms of the given series. Answer 1: <p><strong>Hint</strong>. One may observe that $$ \frac{nx^{n-1}}{\prod _{k=1}^n (x+k)}=\left(1-\frac{x^n \Gamma(x+1)}{\Gamma(x+n+1)}\right)-\left(1-\frac{x^{n-1} \Gamma(x+1)}{\Gamma(x+n)}\right) $$ giving $$ \sum_{n=1}^N\frac{nx^{n-1}}{\prod _{k=1}^n (x+k)}=1-\frac{x^N \Gamma(x+1)}{\Gamma(x+N+1)} $$</p> Answer 2: <p>Recalling that <a href="https://en.wikipedia.org/wiki/Incomplete_gamma_function#Holomorphic_extension" rel="nofollow noreferrer">the lower incomplete gamma function</a> has the power series $$\gamma\left(s,x\right)=x^{s-1}\Gamma\left(s\right)e^{-x}\sum_{k\geq1}\frac{x^{k}}{\Gamma\left(s+k\right)}\tag{1}$$ we have $$\frac{d}{dx}\left(\frac{\gamma\left(s,x\right)}{x^{s-1}\Gamma\left(s\right)e^{-x}}\right)=\sum_{k\geq1}\frac{kx^{k-1}}{\Gamma\left(s+k\right)} $$ hence $$\sum_{k\geq1}\frac{kx^{k-1}}{\Gamma\left(s+k\right)}=\frac{1+x^{-s}e^{x}\left(x-1-s\right)\gamma\left(s,x\right)}{\Gamma\left(s\right)}$$ so taking $s=x+1$ we get $$\sum_{k\geq1}\frac{kx^{k-1}}{\Gamma\left(x+1+k\right)}=\frac{1}{\Gamma\left(x+1\right)}\tag{2} $$ and now recalling that the <a href="http://mathworld.wolfram.com/PochhammerSymbol.html" rel="nofollow noreferrer">Pochhammer symbol</a> can be written as $$\left(x+1\right)\cdots\left(x+n\right)=\left(x+1\right)_{n}=\frac{\Gamma\left(x+1+n\right)}{\Gamma\left(x+1\right)}\tag{3}$$ we get $$\sum_{k\geq1}\frac{kx^{k-1}}{\left(x+1\right)\cdots\left(x+k\right)}=1. $$ Then $$\sum_{k=1}^{N}\frac{kx^{k-1}}{\left(x+1\right)\cdots\left(x+k\right)}=\sum_{k\geq1}\frac{kx^{k-1}}{\left(x+1\right)\cdots\left(x+k\right)}-\sum_{k=N+1}^{\infty}\frac{kx^{k-1}}{\left(x+1\right)\cdots\left(x+k\right)} $$ $$=1-x^{N}\sum_{k\geq1}\frac{\left(k+N\right)x^{k-1}}{\left(x+1\right)\cdots\left(x+k+N\right)}=1-\frac{x^{N}\Gamma\left(x+1\right)}{\Gamma\left(x+N+1\right)}$$ using $(1)$, $(2)$ and $(3)$.</p> Answer 3: <p><strong>Hint</strong>. 
One may observe that $$ \frac{nx^{n-1}}{\prod _{k=1}^n (x+k)}=\left(1-\frac{x^n \Gamma(x+1)}{\Gamma(x+n+1)}\right)-\left(1-\frac{x^{n-1} \Gamma(x+1)}{\Gamma(x+n)}\right) $$ giving $$ \sum_{n=1}^N\frac{nx^{n-1}}{\prod _{k=1}^n (x+k)}=1-\frac{x^N \Gamma(x+1)}{\Gamma(x+N+1)} $$</p> Answer 4: <p>Recalling that <a href="https://en.wikipedia.org/wiki/Incomplete_gamma_function#Holomorphic_extension" rel="nofollow noreferrer">the lower incomplete gamma function</a> has the power series $$\gamma\left(s,x\right)=x^{s-1}\Gamma\left(s\right)e^{-x}\sum_{k\geq1}\frac{x^{k}}{\Gamma\left(s+k\right)}\tag{1}$$ we have $$\frac{d}{dx}\left(\frac{\gamma\left(s,x\right)}{x^{s-1}\Gamma\left(s\right)e^{-x}}\right)=\sum_{k\geq1}\frac{kx^{k-1}}{\Gamma\left(s+k\right)} $$ hence $$\sum_{k\geq1}\frac{kx^{k-1}}{\Gamma\left(s+k\right)}=\frac{1+x^{-s}e^{x}\left(x-1-s\right)\gamma\left(s,x\right)}{\Gamma\left(s\right)}$$ so taking $s=x+1$ we get $$\sum_{k\geq1}\frac{kx^{k-1}}{\Gamma\left(x+1+k\right)}=\frac{1}{\Gamma\left(x+1\right)}\tag{2} $$ and now recalling that the <a href="http://mathworld.wolfram.com/PochhammerSymbol.html" rel="nofollow noreferrer">Pochhammer symbol</a> can be written as $$\left(x+1\right)\cdots\left(x+n\right)=\left(x+1\right)_{n}=\frac{\Gamma\left(x+1+n\right)}{\Gamma\left(x+1\right)}\tag{3}$$ we get $$\sum_{k\geq1}\frac{kx^{k-1}}{\left(x+1\right)\cdots\left(x+k\right)}=1. $$ Then $$\sum_{k=1}^{N}\frac{kx^{k-1}}{\left(x+1\right)\cdots\left(x+k\right)}=\sum_{k\geq1}\frac{kx^{k-1}}{\left(x+1\right)\cdots\left(x+k\right)}-\sum_{k=N+1}^{\infty}\frac{kx^{k-1}}{\left(x+1\right)\cdots\left(x+k\right)} $$ $$=1-x^{N}\sum_{k\geq1}\frac{\left(k+N\right)x^{k-1}}{\left(x+1\right)\cdots\left(x+k+N\right)}=1-\frac{x^{N}\Gamma\left(x+1\right)}{\Gamma\left(x+N+1\right)}$$ using $(1)$, $(2)$ and $(3)$.</p> Answer 5: <p><strong>Hint</strong>. 
One may observe that $$ \frac{nx^{n-1}}{\prod _{k=1}^n (x+k)}=\left(1-\frac{x^n \Gamma(x+1)}{\Gamma(x+n+1)}\right)-\left(1-\frac{x^{n-1} \Gamma(x+1)}{\Gamma(x+n)}\right) $$ giving $$ \sum_{n=1}^N\frac{nx^{n-1}}{\prod _{k=1}^n (x+k)}=1-\frac{x^N \Gamma(x+1)}{\Gamma(x+N+1)} $$</p> Answer 6: <p>Recalling that <a href="https://en.wikipedia.org/wiki/Incomplete_gamma_function#Holomorphic_extension" rel="nofollow noreferrer">the lower incomplete gamma function</a> has the power series $$\gamma\left(s,x\right)=x^{s-1}\Gamma\left(s\right)e^{-x}\sum_{k\geq1}\frac{x^{k}}{\Gamma\left(s+k\right)}\tag{1}$$ we have $$\frac{d}{dx}\left(\frac{\gamma\left(s,x\right)}{x^{s-1}\Gamma\left(s\right)e^{-x}}\right)=\sum_{k\geq1}\frac{kx^{k-1}}{\Gamma\left(s+k\right)} $$ hence $$\sum_{k\geq1}\frac{kx^{k-1}}{\Gamma\left(s+k\right)}=\frac{1+x^{-s}e^{x}\left(x-1-s\right)\gamma\left(s,x\right)}{\Gamma\left(s\right)}$$ so taking $s=x+1$ we get $$\sum_{k\geq1}\frac{kx^{k-1}}{\Gamma\left(x+1+k\right)}=\frac{1}{\Gamma\left(x+1\right)}\tag{2} $$ and now recalling that the <a href="http://mathworld.wolfram.com/PochhammerSymbol.html" rel="nofollow noreferrer">Pochhammer symbol</a> can be written as $$\left(x+1\right)\cdots\left(x+n\right)=\left(x+1\right)_{n}=\frac{\Gamma\left(x+1+n\right)}{\Gamma\left(x+1\right)}\tag{3}$$ we get $$\sum_{k\geq1}\frac{kx^{k-1}}{\left(x+1\right)\cdots\left(x+k\right)}=1. $$ Then $$\sum_{k=1}^{N}\frac{kx^{k-1}}{\left(x+1\right)\cdots\left(x+k\right)}=\sum_{k\geq1}\frac{kx^{k-1}}{\left(x+1\right)\cdots\left(x+k\right)}-\sum_{k=N+1}^{\infty}\frac{kx^{k-1}}{\left(x+1\right)\cdots\left(x+k\right)} $$ $$=1-x^{N}\sum_{k\geq1}\frac{\left(k+N\right)x^{k-1}}{\left(x+1\right)\cdots\left(x+k+N\right)}=1-\frac{x^{N}\Gamma\left(x+1\right)}{\Gamma\left(x+N+1\right)}$$ using $(1)$, $(2)$ and $(3)$.</p> Answer 7: <p><strong>Hint</strong>. 
One may observe that $$ \frac{nx^{n-1}}{\prod _{k=1}^n (x+k)}=\left(1-\frac{x^n \Gamma(x+1)}{\Gamma(x+n+1)}\right)-\left(1-\frac{x^{n-1} \Gamma(x+1)}{\Gamma(x+n)}\right) $$ giving $$ \sum_{n=1}^N\frac{nx^{n-1}}{\prod _{k=1}^n (x+k)}=1-\frac{x^N \Gamma(x+1)}{\Gamma(x+N+1)} $$</p> Answer 8: <p>Recalling that <a href="https://en.wikipedia.org/wiki/Incomplete_gamma_function#Holomorphic_extension" rel="nofollow noreferrer">the lower incomplete gamma function</a> has the power series $$\gamma\left(s,x\right)=x^{s-1}\Gamma\left(s\right)e^{-x}\sum_{k\geq1}\frac{x^{k}}{\Gamma\left(s+k\right)}\tag{1}$$ we have $$\frac{d}{dx}\left(\frac{\gamma\left(s,x\right)}{x^{s-1}\Gamma\left(s\right)e^{-x}}\right)=\sum_{k\geq1}\frac{kx^{k-1}}{\Gamma\left(s+k\right)} $$ hence $$\sum_{k\geq1}\frac{kx^{k-1}}{\Gamma\left(s+k\right)}=\frac{1+x^{-s}e^{x}\left(x-1-s\right)\gamma\left(s,x\right)}{\Gamma\left(s\right)}$$ so taking $s=x+1$ we get $$\sum_{k\geq1}\frac{kx^{k-1}}{\Gamma\left(x+1+k\right)}=\frac{1}{\Gamma\left(x+1\right)}\tag{2} $$ and now recalling that the <a href="http://mathworld.wolfram.com/PochhammerSymbol.html" rel="nofollow noreferrer">Pochhammer symbol</a> can be written as $$\left(x+1\right)\cdots\left(x+n\right)=\left(x+1\right)_{n}=\frac{\Gamma\left(x+1+n\right)}{\Gamma\left(x+1\right)}\tag{3}$$ we get $$\sum_{k\geq1}\frac{kx^{k-1}}{\left(x+1\right)\cdots\left(x+k\right)}=1. $$ Then $$\sum_{k=1}^{N}\frac{kx^{k-1}}{\left(x+1\right)\cdots\left(x+k\right)}=\sum_{k\geq1}\frac{kx^{k-1}}{\left(x+1\right)\cdots\left(x+k\right)}-\sum_{k=N+1}^{\infty}\frac{kx^{k-1}}{\left(x+1\right)\cdots\left(x+k\right)} $$ $$=1-x^{N}\sum_{k\geq1}\frac{\left(k+N\right)x^{k-1}}{\left(x+1\right)\cdots\left(x+k+N\right)}=1-\frac{x^{N}\Gamma\left(x+1\right)}{\Gamma\left(x+N+1\right)}$$ using $(1)$, $(2)$ and $(3)$.</p> Answer 9: <p><strong>Hint</strong>. 
One may observe that $$ \frac{nx^{n-1}}{\prod _{k=1}^n (x+k)}=\left(1-\frac{x^n \Gamma(x+1)}{\Gamma(x+n+1)}\right)-\left(1-\frac{x^{n-1} \Gamma(x+1)}{\Gamma(x+n)}\right) $$ giving $$ \sum_{n=1}^N\frac{nx^{n-1}}{\prod _{k=1}^n (x+k)}=1-\frac{x^N \Gamma(x+1)}{\Gamma(x+N+1)} $$</p> Answer 10: <p>Recalling that <a href="https://en.wikipedia.org/wiki/Incomplete_gamma_function#Holomorphic_extension" rel="nofollow noreferrer">the lower incomplete gamma function</a> has the power series $$\gamma\left(s,x\right)=x^{s-1}\Gamma\left(s\right)e^{-x}\sum_{k\geq1}\frac{x^{k}}{\Gamma\left(s+k\right)}\tag{1}$$ we have $$\frac{d}{dx}\left(\frac{\gamma\left(s,x\right)}{x^{s-1}\Gamma\left(s\right)e^{-x}}\right)=\sum_{k\geq1}\frac{kx^{k-1}}{\Gamma\left(s+k\right)} $$ hence $$\sum_{k\geq1}\frac{kx^{k-1}}{\Gamma\left(s+k\right)}=\frac{1+x^{-s}e^{x}\left(x-1-s\right)\gamma\left(s,x\right)}{\Gamma\left(s\right)}$$ so taking $s=x+1$ we get $$\sum_{k\geq1}\frac{kx^{k-1}}{\Gamma\left(x+1+k\right)}=\frac{1}{\Gamma\left(x+1\right)}\tag{2} $$ and now recalling that the <a href="http://mathworld.wolfram.com/PochhammerSymbol.html" rel="nofollow noreferrer">Pochhammer symbol</a> can be written as $$\left(x+1\right)\cdots\left(x+n\right)=\left(x+1\right)_{n}=\frac{\Gamma\left(x+1+n\right)}{\Gamma\left(x+1\right)}\tag{3}$$ we get $$\sum_{k\geq1}\frac{kx^{k-1}}{\left(x+1\right)\cdots\left(x+k\right)}=1. $$ Then $$\sum_{k=1}^{N}\frac{kx^{k-1}}{\left(x+1\right)\cdots\left(x+k\right)}=\sum_{k\geq1}\frac{kx^{k-1}}{\left(x+1\right)\cdots\left(x+k\right)}-\sum_{k=N+1}^{\infty}\frac{kx^{k-1}}{\left(x+1\right)\cdots\left(x+k\right)} $$ $$=1-x^{N}\sum_{k\geq1}\frac{\left(k+N\right)x^{k-1}}{\left(x+1\right)\cdots\left(x+k+N\right)}=1-\frac{x^{N}\Gamma\left(x+1\right)}{\Gamma\left(x+N+1\right)}$$ using $(1)$, $(2)$ and $(3)$.</p> Answer 11: <p><strong>Hint</strong>. 
One may observe that $$ \frac{nx^{n-1}}{\prod _{k=1}^n (x+k)}=\left(1-\frac{x^n \Gamma(x+1)}{\Gamma(x+n+1)}\right)-\left(1-\frac{x^{n-1} \Gamma(x+1)}{\Gamma(x+n)}\right) $$ giving $$ \sum_{n=1}^N\frac{nx^{n-1}}{\prod _{k=1}^n (x+k)}=1-\frac{x^N \Gamma(x+1)}{\Gamma(x+N+1)} $$</p> Answer 12: <p>Recalling that <a href="https://en.wikipedia.org/wiki/Incomplete_gamma_function#Holomorphic_extension" rel="nofollow noreferrer">the lower incomplete gamma function</a> has the power series $$\gamma\left(s,x\right)=x^{s-1}\Gamma\left(s\right)e^{-x}\sum_{k\geq1}\frac{x^{k}}{\Gamma\left(s+k\right)}\tag{1}$$ we have $$\frac{d}{dx}\left(\frac{\gamma\left(s,x\right)}{x^{s-1}\Gamma\left(s\right)e^{-x}}\right)=\sum_{k\geq1}\frac{kx^{k-1}}{\Gamma\left(s+k\right)} $$ hence $$\sum_{k\geq1}\frac{kx^{k-1}}{\Gamma\left(s+k\right)}=\frac{1+x^{-s}e^{x}\left(x-1-s\right)\gamma\left(s,x\right)}{\Gamma\left(s\right)}$$ so taking $s=x+1$ we get $$\sum_{k\geq1}\frac{kx^{k-1}}{\Gamma\left(x+1+k\right)}=\frac{1}{\Gamma\left(x+1\right)}\tag{2} $$ and now recalling that the <a href="http://mathworld.wolfram.com/PochhammerSymbol.html" rel="nofollow noreferrer">Pochhammer symbol</a> can be written as $$\left(x+1\right)\cdots\left(x+n\right)=\left(x+1\right)_{n}=\frac{\Gamma\left(x+1+n\right)}{\Gamma\left(x+1\right)}\tag{3}$$ we get $$\sum_{k\geq1}\frac{kx^{k-1}}{\left(x+1\right)\cdots\left(x+k\right)}=1. $$ Then $$\sum_{k=1}^{N}\frac{kx^{k-1}}{\left(x+1\right)\cdots\left(x+k\right)}=\sum_{k\geq1}\frac{kx^{k-1}}{\left(x+1\right)\cdots\left(x+k\right)}-\sum_{k=N+1}^{\infty}\frac{kx^{k-1}}{\left(x+1\right)\cdots\left(x+k\right)} $$ $$=1-x^{N}\sum_{k\geq1}\frac{\left(k+N\right)x^{k-1}}{\left(x+1\right)\cdots\left(x+k+N\right)}=1-\frac{x^{N}\Gamma\left(x+1\right)}{\Gamma\left(x+N+1\right)}$$ using $(1)$, $(2)$ and $(3)$.</p> Answer 13: <p><strong>Hint</strong>. 
One may observe that $$ \frac{nx^{n-1}}{\prod _{k=1}^n (x+k)}=\left(1-\frac{x^n \Gamma(x+1)}{\Gamma(x+n+1)}\right)-\left(1-\frac{x^{n-1} \Gamma(x+1)}{\Gamma(x+n)}\right) $$ giving $$ \sum_{n=1}^N\frac{nx^{n-1}}{\prod _{k=1}^n (x+k)}=1-\frac{x^N \Gamma(x+1)}{\Gamma(x+N+1)} $$</p> Answer 14: <p>Recalling that <a href="https://en.wikipedia.org/wiki/Incomplete_gamma_function#Holomorphic_extension" rel="nofollow noreferrer">the lower incomplete gamma function</a> has the power series $$\gamma\left(s,x\right)=x^{s-1}\Gamma\left(s\right)e^{-x}\sum_{k\geq1}\frac{x^{k}}{\Gamma\left(s+k\right)}\tag{1}$$ we have $$\frac{d}{dx}\left(\frac{\gamma\left(s,x\right)}{x^{s-1}\Gamma\left(s\right)e^{-x}}\right)=\sum_{k\geq1}\frac{kx^{k-1}}{\Gamma\left(s+k\right)} $$ hence $$\sum_{k\geq1}\frac{kx^{k-1}}{\Gamma\left(s+k\right)}=\frac{1+x^{-s}e^{x}\left(x-1-s\right)\gamma\left(s,x\right)}{\Gamma\left(s\right)}$$ so taking $s=x+1$ we get $$\sum_{k\geq1}\frac{kx^{k-1}}{\Gamma\left(x+1+k\right)}=\frac{1}{\Gamma\left(x+1\right)}\tag{2} $$ and now recalling that the <a href="http://mathworld.wolfram.com/PochhammerSymbol.html" rel="nofollow noreferrer">Pochhammer symbol</a> can be written as $$\left(x+1\right)\cdots\left(x+n\right)=\left(x+1\right)_{n}=\frac{\Gamma\left(x+1+n\right)}{\Gamma\left(x+1\right)}\tag{3}$$ we get $$\sum_{k\geq1}\frac{kx^{k-1}}{\left(x+1\right)\cdots\left(x+k\right)}=1. $$ Then $$\sum_{k=1}^{N}\frac{kx^{k-1}}{\left(x+1\right)\cdots\left(x+k\right)}=\sum_{k\geq1}\frac{kx^{k-1}}{\left(x+1\right)\cdots\left(x+k\right)}-\sum_{k=N+1}^{\infty}\frac{kx^{k-1}}{\left(x+1\right)\cdots\left(x+k\right)} $$ $$=1-x^{N}\sum_{k\geq1}\frac{\left(k+N\right)x^{k-1}}{\left(x+1\right)\cdots\left(x+k+N\right)}=1-\frac{x^{N}\Gamma\left(x+1\right)}{\Gamma\left(x+N+1\right)}$$ using $(1)$, $(2)$ and $(3)$.</p> Answer 15: <p><strong>Hint</strong>. 
One may observe that $$ \frac{nx^{n-1}}{\prod _{k=1}^n (x+k)}=\left(1-\frac{x^n \Gamma(x+1)}{\Gamma(x+n+1)}\right)-\left(1-\frac{x^{n-1} \Gamma(x+1)}{\Gamma(x+n)}\right) $$ giving $$ \sum_{n=1}^N\frac{nx^{n-1}}{\prod _{k=1}^n (x+k)}=1-\frac{x^N \Gamma(x+1)}{\Gamma(x+N+1)} $$</p> Answer 16: <p>Recalling that <a href="https://en.wikipedia.org/wiki/Incomplete_gamma_function#Holomorphic_extension" rel="nofollow noreferrer">the lower incomplete gamma function</a> has the power series $$\gamma\left(s,x\right)=x^{s-1}\Gamma\left(s\right)e^{-x}\sum_{k\geq1}\frac{x^{k}}{\Gamma\left(s+k\right)}\tag{1}$$ we have $$\frac{d}{dx}\left(\frac{\gamma\left(s,x\right)}{x^{s-1}\Gamma\left(s\right)e^{-x}}\right)=\sum_{k\geq1}\frac{kx^{k-1}}{\Gamma\left(s+k\right)} $$ hence $$\sum_{k\geq1}\frac{kx^{k-1}}{\Gamma\left(s+k\right)}=\frac{1+x^{-s}e^{x}\left(x-1-s\right)\gamma\left(s,x\right)}{\Gamma\left(s\right)}$$ so taking $s=x+1$ we get $$\sum_{k\geq1}\frac{kx^{k-1}}{\Gamma\left(x+1+k\right)}=\frac{1}{\Gamma\left(x+1\right)}\tag{2} $$ and now recalling that the <a href="http://mathworld.wolfram.com/PochhammerSymbol.html" rel="nofollow noreferrer">Pochhammer symbol</a> can be written as $$\left(x+1\right)\cdots\left(x+n\right)=\left(x+1\right)_{n}=\frac{\Gamma\left(x+1+n\right)}{\Gamma\left(x+1\right)}\tag{3}$$ we get $$\sum_{k\geq1}\frac{kx^{k-1}}{\left(x+1\right)\cdots\left(x+k\right)}=1. $$ Then $$\sum_{k=1}^{N}\frac{kx^{k-1}}{\left(x+1\right)\cdots\left(x+k\right)}=\sum_{k\geq1}\frac{kx^{k-1}}{\left(x+1\right)\cdots\left(x+k\right)}-\sum_{k=N+1}^{\infty}\frac{kx^{k-1}}{\left(x+1\right)\cdots\left(x+k\right)} $$ $$=1-x^{N}\sum_{k\geq1}\frac{\left(k+N\right)x^{k-1}}{\left(x+1\right)\cdots\left(x+k+N\right)}=1-\frac{x^{N}\Gamma\left(x+1\right)}{\Gamma\left(x+N+1\right)}$$ using $(1)$, $(2)$ and $(3)$.</p> Answer 17: <p><strong>Hint</strong>. 
One may observe that $$ \frac{nx^{n-1}}{\prod _{k=1}^n (x+k)}=\left(1-\frac{x^n \Gamma(x+1)}{\Gamma(x+n+1)}\right)-\left(1-\frac{x^{n-1} \Gamma(x+1)}{\Gamma(x+n)}\right) $$ giving $$ \sum_{n=1}^N\frac{nx^{n-1}}{\prod _{k=1}^n (x+k)}=1-\frac{x^N \Gamma(x+1)}{\Gamma(x+N+1)} $$</p> Answer 18: <p>Recalling that <a href="https://en.wikipedia.org/wiki/Incomplete_gamma_function#Holomorphic_extension" rel="nofollow noreferrer">the lower incomplete gamma function</a> has the power series $$\gamma\left(s,x\right)=x^{s-1}\Gamma\left(s\right)e^{-x}\sum_{k\geq1}\frac{x^{k}}{\Gamma\left(s+k\right)}\tag{1}$$ we have $$\frac{d}{dx}\left(\frac{\gamma\left(s,x\right)}{x^{s-1}\Gamma\left(s\right)e^{-x}}\right)=\sum_{k\geq1}\frac{kx^{k-1}}{\Gamma\left(s+k\right)} $$ hence $$\sum_{k\geq1}\frac{kx^{k-1}}{\Gamma\left(s+k\right)}=\frac{1+x^{-s}e^{x}\left(x-1-s\right)\gamma\left(s,x\right)}{\Gamma\left(s\right)}$$ so taking $s=x+1$ we get $$\sum_{k\geq1}\frac{kx^{k-1}}{\Gamma\left(x+1+k\right)}=\frac{1}{\Gamma\left(x+1\right)}\tag{2} $$ and now recalling that the <a href="http://mathworld.wolfram.com/PochhammerSymbol.html" rel="nofollow noreferrer">Pochhammer symbol</a> can be written as $$\left(x+1\right)\cdots\left(x+n\right)=\left(x+1\right)_{n}=\frac{\Gamma\left(x+1+n\right)}{\Gamma\left(x+1\right)}\tag{3}$$ we get $$\sum_{k\geq1}\frac{kx^{k-1}}{\left(x+1\right)\cdots\left(x+k\right)}=1. $$ Then $$\sum_{k=1}^{N}\frac{kx^{k-1}}{\left(x+1\right)\cdots\left(x+k\right)}=\sum_{k\geq1}\frac{kx^{k-1}}{\left(x+1\right)\cdots\left(x+k\right)}-\sum_{k=N+1}^{\infty}\frac{kx^{k-1}}{\left(x+1\right)\cdots\left(x+k\right)} $$ $$=1-x^{N}\sum_{k\geq1}\frac{\left(k+N\right)x^{k-1}}{\left(x+1\right)\cdots\left(x+k+N\right)}=1-\frac{x^{N}\Gamma\left(x+1\right)}{\Gamma\left(x+N+1\right)}$$ using $(1)$, $(2)$ and $(3)$.</p> Answer 19: <p><strong>Hint</strong>. 
One may observe that $$ \frac{nx^{n-1}}{\prod _{k=1}^n (x+k)}=\left(1-\frac{x^n \Gamma(x+1)}{\Gamma(x+n+1)}\right)-\left(1-\frac{x^{n-1} \Gamma(x+1)}{\Gamma(x+n)}\right) $$ giving $$ \sum_{n=1}^N\frac{nx^{n-1}}{\prod _{k=1}^n (x+k)}=1-\frac{x^N \Gamma(x+1)}{\Gamma(x+N+1)} $$</p> Answer 20: <p>Recalling that <a href="https://en.wikipedia.org/wiki/Incomplete_gamma_function#Holomorphic_extension" rel="nofollow noreferrer">the lower incomplete gamma function</a> has the power series $$\gamma\left(s,x\right)=x^{s-1}\Gamma\left(s\right)e^{-x}\sum_{k\geq1}\frac{x^{k}}{\Gamma\left(s+k\right)}\tag{1}$$ we have $$\frac{d}{dx}\left(\frac{\gamma\left(s,x\right)}{x^{s-1}\Gamma\left(s\right)e^{-x}}\right)=\sum_{k\geq1}\frac{kx^{k-1}}{\Gamma\left(s+k\right)} $$ hence $$\sum_{k\geq1}\frac{kx^{k-1}}{\Gamma\left(s+k\right)}=\frac{1+x^{-s}e^{x}\left(x-1-s\right)\gamma\left(s,x\right)}{\Gamma\left(s\right)}$$ so taking $s=x+1$ we get $$\sum_{k\geq1}\frac{kx^{k-1}}{\Gamma\left(x+1+k\right)}=\frac{1}{\Gamma\left(x+1\right)}\tag{2} $$ and now recalling that the <a href="http://mathworld.wolfram.com/PochhammerSymbol.html" rel="nofollow noreferrer">Pochhammer symbol</a> can be written as $$\left(x+1\right)\cdots\left(x+n\right)=\left(x+1\right)_{n}=\frac{\Gamma\left(x+1+n\right)}{\Gamma\left(x+1\right)}\tag{3}$$ we get $$\sum_{k\geq1}\frac{kx^{k-1}}{\left(x+1\right)\cdots\left(x+k\right)}=1. $$ Then $$\sum_{k=1}^{N}\frac{kx^{k-1}}{\left(x+1\right)\cdots\left(x+k\right)}=\sum_{k\geq1}\frac{kx^{k-1}}{\left(x+1\right)\cdots\left(x+k\right)}-\sum_{k=N+1}^{\infty}\frac{kx^{k-1}}{\left(x+1\right)\cdots\left(x+k\right)} $$ $$=1-x^{N}\sum_{k\geq1}\frac{\left(k+N\right)x^{k-1}}{\left(x+1\right)\cdots\left(x+k+N\right)}=1-\frac{x^{N}\Gamma\left(x+1\right)}{\Gamma\left(x+N+1\right)}$$ using $(1)$, $(2)$ and $(3)$.</p> Answer 21: <p><strong>Hint</strong>. 
One may observe that $$ \frac{nx^{n-1}}{\prod _{k=1}^n (x+k)}=\left(1-\frac{x^n \Gamma(x+1)}{\Gamma(x+n+1)}\right)-\left(1-\frac{x^{n-1} \Gamma(x+1)}{\Gamma(x+n)}\right) $$ giving $$ \sum_{n=1}^N\frac{nx^{n-1}}{\prod _{k=1}^n (x+k)}=1-\frac{x^N \Gamma(x+1)}{\Gamma(x+N+1)} $$</p> Answer 22: <p>Recalling that <a href="https://en.wikipedia.org/wiki/Incomplete_gamma_function#Holomorphic_extension" rel="nofollow noreferrer">the lower incomplete gamma function</a> has the power series $$\gamma\left(s,x\right)=x^{s-1}\Gamma\left(s\right)e^{-x}\sum_{k\geq1}\frac{x^{k}}{\Gamma\left(s+k\right)}\tag{1}$$ we have $$\frac{d}{dx}\left(\frac{\gamma\left(s,x\right)}{x^{s-1}\Gamma\left(s\right)e^{-x}}\right)=\sum_{k\geq1}\frac{kx^{k-1}}{\Gamma\left(s+k\right)} $$ hence $$\sum_{k\geq1}\frac{kx^{k-1}}{\Gamma\left(s+k\right)}=\frac{1+x^{-s}e^{x}\left(x-1-s\right)\gamma\left(s,x\right)}{\Gamma\left(s\right)}$$ so taking $s=x+1$ we get $$\sum_{k\geq1}\frac{kx^{k-1}}{\Gamma\left(x+1+k\right)}=\frac{1}{\Gamma\left(x+1\right)}\tag{2} $$ and now recalling that the <a href="http://mathworld.wolfram.com/PochhammerSymbol.html" rel="nofollow noreferrer">Pochhammer symbol</a> can be written as $$\left(x+1\right)\cdots\left(x+n\right)=\left(x+1\right)_{n}=\frac{\Gamma\left(x+1+n\right)}{\Gamma\left(x+1\right)}\tag{3}$$ we get $$\sum_{k\geq1}\frac{kx^{k-1}}{\left(x+1\right)\cdots\left(x+k\right)}=1. $$ Then $$\sum_{k=1}^{N}\frac{kx^{k-1}}{\left(x+1\right)\cdots\left(x+k\right)}=\sum_{k\geq1}\frac{kx^{k-1}}{\left(x+1\right)\cdots\left(x+k\right)}-\sum_{k=N+1}^{\infty}\frac{kx^{k-1}}{\left(x+1\right)\cdots\left(x+k\right)} $$ $$=1-x^{N}\sum_{k\geq1}\frac{\left(k+N\right)x^{k-1}}{\left(x+1\right)\cdots\left(x+k+N\right)}=1-\frac{x^{N}\Gamma\left(x+1\right)}{\Gamma\left(x+N+1\right)}$$ using $(1)$, $(2)$ and $(3)$.</p> Answer 23: <p><strong>Hint</strong>. 
One may observe that $$ \frac{nx^{n-1}}{\prod _{k=1}^n (x+k)}=\left(1-\frac{x^n \Gamma(x+1)}{\Gamma(x+n+1)}\right)-\left(1-\frac{x^{n-1} \Gamma(x+1)}{\Gamma(x+n)}\right) $$ giving $$ \sum_{n=1}^N\frac{nx^{n-1}}{\prod _{k=1}^n (x+k)}=1-\frac{x^N \Gamma(x+1)}{\Gamma(x+N+1)} $$</p> Answer 24: <p>Recalling that <a href="https://en.wikipedia.org/wiki/Incomplete_gamma_function#Holomorphic_extension" rel="nofollow noreferrer">the lower incomplete gamma function</a> has the power series $$\gamma\left(s,x\right)=x^{s-1}\Gamma\left(s\right)e^{-x}\sum_{k\geq1}\frac{x^{k}}{\Gamma\left(s+k\right)}\tag{1}$$ we have $$\frac{d}{dx}\left(\frac{\gamma\left(s,x\right)}{x^{s-1}\Gamma\left(s\right)e^{-x}}\right)=\sum_{k\geq1}\frac{kx^{k-1}}{\Gamma\left(s+k\right)} $$ hence $$\sum_{k\geq1}\frac{kx^{k-1}}{\Gamma\left(s+k\right)}=\frac{1+x^{-s}e^{x}\left(x-1-s\right)\gamma\left(s,x\right)}{\Gamma\left(s\right)}$$ so taking $s=x+1$ we get $$\sum_{k\geq1}\frac{kx^{k-1}}{\Gamma\left(x+1+k\right)}=\frac{1}{\Gamma\left(x+1\right)}\tag{2} $$ and now recalling that the <a href="http://mathworld.wolfram.com/PochhammerSymbol.html" rel="nofollow noreferrer">Pochhammer symbol</a> can be written as $$\left(x+1\right)\cdots\left(x+n\right)=\left(x+1\right)_{n}=\frac{\Gamma\left(x+1+n\right)}{\Gamma\left(x+1\right)}\tag{3}$$ we get $$\sum_{k\geq1}\frac{kx^{k-1}}{\left(x+1\right)\cdots\left(x+k\right)}=1. $$ Then $$\sum_{k=1}^{N}\frac{kx^{k-1}}{\left(x+1\right)\cdots\left(x+k\right)}=\sum_{k\geq1}\frac{kx^{k-1}}{\left(x+1\right)\cdots\left(x+k\right)}-\sum_{k=N+1}^{\infty}\frac{kx^{k-1}}{\left(x+1\right)\cdots\left(x+k\right)} $$ $$=1-x^{N}\sum_{k\geq1}\frac{\left(k+N\right)x^{k-1}}{\left(x+1\right)\cdots\left(x+k+N\right)}=1-\frac{x^{N}\Gamma\left(x+1\right)}{\Gamma\left(x+N+1\right)}$$ using $(1)$, $(2)$ and $(3)$.</p> Answer 25: <p><strong>Hint</strong>. 
One may observe that $$ \frac{nx^{n-1}}{\prod _{k=1}^n (x+k)}=\left(1-\frac{x^n \Gamma(x+1)}{\Gamma(x+n+1)}\right)-\left(1-\frac{x^{n-1} \Gamma(x+1)}{\Gamma(x+n)}\right) $$ giving $$ \sum_{n=1}^N\frac{nx^{n-1}}{\prod _{k=1}^n (x+k)}=1-\frac{x^N \Gamma(x+1)}{\Gamma(x+N+1)} $$</p> Answer 26: <p>Recalling that <a href="https://en.wikipedia.org/wiki/Incomplete_gamma_function#Holomorphic_extension" rel="nofollow noreferrer">the lower incomplete gamma function</a> has the power series $$\gamma\left(s,x\right)=x^{s-1}\Gamma\left(s\right)e^{-x}\sum_{k\geq1}\frac{x^{k}}{\Gamma\left(s+k\right)}\tag{1}$$ we have $$\frac{d}{dx}\left(\frac{\gamma\left(s,x\right)}{x^{s-1}\Gamma\left(s\right)e^{-x}}\right)=\sum_{k\geq1}\frac{kx^{k-1}}{\Gamma\left(s+k\right)} $$ hence $$\sum_{k\geq1}\frac{kx^{k-1}}{\Gamma\left(s+k\right)}=\frac{1+x^{-s}e^{x}\left(x-1-s\right)\gamma\left(s,x\right)}{\Gamma\left(s\right)}$$ so taking $s=x+1$ we get $$\sum_{k\geq1}\frac{kx^{k-1}}{\Gamma\left(x+1+k\right)}=\frac{1}{\Gamma\left(x+1\right)}\tag{2} $$ and now recalling that the <a href="http://mathworld.wolfram.com/PochhammerSymbol.html" rel="nofollow noreferrer">Pochhammer symbol</a> can be written as $$\left(x+1\right)\cdots\left(x+n\right)=\left(x+1\right)_{n}=\frac{\Gamma\left(x+1+n\right)}{\Gamma\left(x+1\right)}\tag{3}$$ we get $$\sum_{k\geq1}\frac{kx^{k-1}}{\left(x+1\right)\cdots\left(x+k\right)}=1. $$ Then $$\sum_{k=1}^{N}\frac{kx^{k-1}}{\left(x+1\right)\cdots\left(x+k\right)}=\sum_{k\geq1}\frac{kx^{k-1}}{\left(x+1\right)\cdots\left(x+k\right)}-\sum_{k=N+1}^{\infty}\frac{kx^{k-1}}{\left(x+1\right)\cdots\left(x+k\right)} $$ $$=1-x^{N}\sum_{k\geq1}\frac{\left(k+N\right)x^{k-1}}{\left(x+1\right)\cdots\left(x+k+N\right)}=1-\frac{x^{N}\Gamma\left(x+1\right)}{\Gamma\left(x+N+1\right)}$$ using $(1)$, $(2)$ and $(3)$.</p>
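The telescoping identity above can be checked with exact rational arithmetic, since $\Gamma(x+1)/\Gamma(x+N+1)=1/\prod_{k=1}^N(x+k)$ lets both sides be evaluated without floating point. A quick sketch (the function names are mine, not from the answers):

```python
from fractions import Fraction

def partial_sum(x, N):
    # sum_{n=1}^N n*x^(n-1) / prod_{k=1}^n (x+k), kept exact with Fractions
    total, prod = Fraction(0), Fraction(1)
    for n in range(1, N + 1):
        prod *= x + n          # running product (x+1)(x+2)...(x+n)
        total += n * x ** (n - 1) / prod
    return total

def closed_form(x, N):
    # 1 - x^N * Gamma(x+1)/Gamma(x+N+1) = 1 - x^N / prod_{k=1}^N (x+k)
    prod = Fraction(1)
    for k in range(1, N + 1):
        prod *= x + k
    return 1 - x ** N / prod

x = Fraction(3, 7)
assert all(partial_sum(x, N) == closed_form(x, N) for N in (1, 2, 5, 10))
```

Because the arithmetic is exact, equality here is a genuine identity check at the chosen rational point, not an approximation.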
Solutions for $a^2+b^2+c^2=d^2$
2,088,960
7
https://math.stackexchange.com/questions/2088960/solutions-for-a2b2c2-d2
<p>I have to find an infinite family of $a,b,c,d$ such that $a^2+b^2+c^2=d^2$ and $a,b,c,d\in\mathbb N$.</p> <p>I have found one possible family, namely $x,2x,2x,3x$ with $x\in\mathbb N$.</p> <p>But I want other infinite families, and a way to reach them, because I found this solution by trial and error.</p>
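As a sanity check before looking at parametric families, the small primitive solutions of $a^2+b^2+c^2=d^2$ can be enumerated by brute force. A sketch in Python (the function name and bound are mine):

```python
from math import gcd, isqrt

def primitive_quadruples(limit):
    """All a <= b <= c < d <= limit with a^2+b^2+c^2 = d^2 and gcd(a,b,c,d) = 1."""
    found = []
    for d in range(2, limit + 1):
        for a in range(1, d):
            for b in range(a, d):
                rest = d * d - a * a - b * b
                if rest < b * b:       # c would drop below b; larger b only shrinks rest
                    break
                c = isqrt(rest)
                if c * c == rest and gcd(gcd(a, b), gcd(c, d)) == 1:
                    found.append((a, b, c, d))
    return found

print(primitive_quadruples(9))   # includes (1, 2, 2, 3), (2, 3, 6, 7), (1, 4, 8, 9)
```

Every quadruple in the question's family $x,2x,2x,3x$ is a multiple of $(1,2,2,3)$, so only that primitive representative appears in the list.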
[ { "answer_id": 2089009, "score": 10, "text": "<p>There is a parametrization of all primitive solutions, $\\gcd(a,b,c,d)=1.$ Take any four integers $m,n,p,q$ with $\\gcd(m,n,p,q)=1$ and $m+n+p+q$ odd. Those are the only rules. Then\n$$ a = m^2 + n^2 - p^2 - q^2, $$\n$$ b = 2 (mq+np), $$\n$$ c = 2(nq -mp), $$\n$$ d = m^2 + n^2 + p^2 + q^2 $$\nsatisfy\n$$ a^2 + b^2 + c^2 = d^2. $$\nThis is simply quaternion multiplication. It seems likely that the formula was known to Euler, who wrote out the multiplication formula for sums of four squares, which amounts to quaternion multiplication. However, people mostly mention this item as due to the number theorist V. A. Lebesgue in the mid 1800's. Apparently the first correct proof that this gives all primitive solutions was Dickson in 1920, so I am not sure what Lebesgue did.</p>\n\n<p>gp-pari</p>\n\n<pre><code>? a = m^2 + n^2 - p^2 - q^2\n%6 = m^2 + (n^2 + (-p^2 - q^2))\n? b = 2 *( m * q + n * p)\n%7 = 2*q*m + 2*p*n\n? c = 2 * ( n * q - m * p)\n%8 = -2*p*m + 2*q*n\n? d = m^2 + n^2 + p^2 + q^2\n%9 = m^2 + (n^2 + (p^2 + q^2))\n? a^2 + b^2 + c^2 - d^2\n%10 = 0\n? \n</code></pre>\n" }, { "answer_id": 2089023, "score": 10, "text": "<p>I'll give one obvious example that I found, and that is</p>\n\n<hr>\n\n<p><em>If</em>\n$$p^2+q^2=2rs\\tag1$$\n<em>Then</em>$$(p+q)^2+(p-q)^2+(r-s)^2=(r+s)^2\\tag2$$</p>\n\n<hr>\n\n<p>For example, if $(p,q,r,s)=(4,2,5,2)$, then $$6^2+2^2+3^2=7^2\\tag3$$</p>\n\n<hr>\n\n<p>Interestingly enough, Euler did the same work for these sums of squares and he gave an explicit formula.\n$$(m^2+n^2-p^2-q^2)^2+4(mq+np)^2+4(nq-mp)^2=(m^2+n^2+p^2+q^2)^2\\tag4$$\nFor arbitrary $m,n,p,q\\in\\mathbb{Z}$.</p>\n" }, { "answer_id": 2088979, "score": 6, "text": "<p>As you noticed, given a solution you can multiply it by any natural greater than $1$ to get another solution, so we can concentrate on solutions that do not have a common divisor. You can just keep finding solutions by hand. 
A more systematic approach is to write $a^2+b^2=d^2-c^2=(d+c)(d-c)$. The two terms on the right are either both odd or both even. You can just choose $a,b$ with one odd and one even and $a \\gt b$, then factor $a^2+b^2$. So $a=4,b=1$ gives $d+c=17, d-c=1$ and you can solve the equations to get $d=9,c=8$ and a new solution is $4,1,8,9$. You can always use the factor with $1$ giving $a,b,\\frac 12(a^2+b^2-1),\\frac12(a^2+b^2+1)$ as a fundamental solution whenever $a,b$ are of opposite parity. If $a^2+b^2$ is not prime or the square of a prime there will be other solutions. So for $a=8, b=1$ we have $a^2+b^2=65=5 \\cdot 13$, and we find $8,1,4,9$ as a solution. Unfortunately this is just a permutation of the solution we already have. If we choose $a=9,b=2$ we get $a^2+b^2=85=5 \\cdot 17$ and find the solution $9,2,6,11$ as well as $9,2,42,43$.</p>\n" }, { "answer_id": 2089029, "score": 6, "text": "<p>It is well known that the identity $(2n)^2+(n^2-1)^2=(n^2+1)^2$ produces a class of Pythagorean Triples. We can apply the same algorithm twice by replacing $n\\to n^2+1$, as $$\\color{Green}{(4n)^2+(2(n^2-1))^2+((n^2+1)^2-1)^2=((n^2+1)^2+1)^2}\\,\\,\\,\\,\\forall n\\in\\Bbb{N}.$$ In fact this solution has an interesting property which does not hold for the others :)<br>\nAlso there is another similar identity which looks rather simple, $$\\color{red}{n^2+(n+1)^2+(n^2+n)^2=(n^2+n+1)^2}\\,\\,\\,\\,\\forall n\\in\\Bbb{N}.$$</p>\n" }, { "answer_id": 2089030, "score": 6, "text": "<p><strong>Neater solution. 
(not obvious where I got formula from)</strong></p>\n\n<p>Note that: $(b^2-a^2)^2+(2ab)^2+(2cd)^2-(c^2+d^2)^2=(a^2+b^2+c^2-d^2)(a^2+b^2-c^2+d^2)$.</p>\n\n<p>So if you have a solution $(a,b,c,d)$ to $a^2+b^2+c^2-d^2=0$ then another solution is:</p>\n\n<p>$$\\bigg(b^2-a^2,2ab,2cd,c^2+d^2\\bigg)$$</p>\n\n<p>Applying this to your solution $(1,2,2,3)$ gives $(3,4,12,13)$.</p>\n\n<p>Note the order of terms could be juggled around for further applications of this. The only requirement is that $b&gt;a$.</p>\n\n<p>$(3,4,12,13)$ becomes $(24,12,313)$.</p>\n\n<p>$(3,12,4,13)$ becomes $(35,72,104,185)$.</p>\n\n<p>$(4,12,3,13)$ becomes $(128,96,78,178)$.</p>\n\n<p><strong>Ugly Solution. (more obvious where formula comes from)</strong></p>\n\n<p>Pythagoras's theorem is a related problem only involving three terms: $a^2+b^2=c^2$. It has a fundamental solution of $x^2-y^2, 2xy, x^2+y^2$. We can built upon this.</p>\n\n<p>If we tackle your problem $a^2+b^2+c^2=d^2$ then we can start with $a^2+b^2=(x^2-y^2)^2, c=2xy, d=x^2+y^2$. Note that solving $a^2+b^2=(x^2-y^2)^2$ is another case of Pythagoras: $a=k^2-l^2, b=2kl, x^2-y^2=k^2+l^2$. Note that $x^2-y^2=k^2+l^2$ is the same as your original equation to which you have a solution so we can build another from there: $x=3,k=2,l=1,y=2$ (Note we need $k&gt;l$).</p>\n\n<p>So with $k=2,l=1,y=2,x=3$ then we get $a=3,b=4,c=12,d=13$ giving $3^2+4^2+12^2=13^2$.</p>\n\n<p>So in general if you have a solution $(a,b,c,d)$ (with $a&lt;b$) you can form a new solution: $(b^2-a^2,2ab,2cd,c^2+d^2)$.</p>\n" }, { "answer_id": 2089009, "score": 10, "text": "<p>There is a parametrization of all primitive soutions, $\\gcd(a,b,c,d)=1.$ Take any four integers $m,n,p,q$ with $\\gcd(m,n,p,q)=1$ and $m+n+p+q$ odd. Those are the only rules. Then\n$$ a = m^2 + n^2 - p^2 - q^2, $$\n$$ b = 2 (mq+np), $$\n$$ c = 2(nq -mp), $$\n$$ d = m^2 + n^2 + p^2 + q^2 $$\nsatisfy\n$$ a^2 + b^2 + c^2 = d^2. $$\nThis is simply quaternion multiplication. 
It seems likely that the formula was known to Euler, who wrote out the multiplication formula for sums of four squares, which amounts to quaternion multiplication. However, people mostly mention this item as due to the number theorist V. A. Lebesgue in the mid 1800's. Apparently the first correct proof that this gives all primitive solutions was Dickson in 1920, so I am not sure what Lebesgue did.</p>\n\n<p>gp-pari</p>\n\n<pre><code>? a = m^2 + n^2 - p^2 - q^2\n%6 = m^2 + (n^2 + (-p^2 - q^2))\n? b = 2 *( m * q + n * p)\n%7 = 2*q*m + 2*p*n\n? c = 2 * ( n * q - m * p)\n%8 = -2*p*m + 2*q*n\n? d = m^2 + n^2 + p^2 + q^2\n%9 = m^2 + (n^2 + (p^2 + q^2))\n? a^2 + b^2 + c^2 - d^2\n%10 = 0\n? \n</code></pre>\n" }, { "answer_id": 2089023, "score": 10, "text": "<p>I'll give one obvious example that I found, and that is</p>\n\n<hr>\n\n<p><em>If</em>\n$$p^2+q^2=2rs\\tag1$$\n<em>Then</em>$$(p+q)^2+(p-q)^2+(r-s)^2=(r+s)^2\\tag2$$</p>\n\n<hr>\n\n<p>For example, if $(p,q,r,s)=(4,2,5,2)$, then$$6^2+2^2+3^2=7^2\\tag3$$</p>\n\n<hr>\n\n<p>Interestingly enough, Euler did the same work for these sums of squares and he gave an explicit formula.\n$$(m^2+n^2-p^2-q^2)^2+4(mq+np)^2+4(nq-mp)^2=(m^2+n^2+p^2+q^2)^2\\tag4$$\nFor arbitrary $m,n,p,q\\in\\mathbb{Z}$.</p>\n" }, { "answer_id": 2088979, "score": 6, "text": "<p>As you noticed, given a solution you can multiply it by any natural greater than $1$ to get another solution, so we can concentrate on solutions that do not have a common divisor. You can just keep finding solutions by hand. A more systematic approach is to write $a^2+b^2=d^2-c^2=(d+c)(d-c)$ The two terms on the right are either both odd or both even. You can just choose $a,b$ with one odd and one even and $a \\gt b$, then factor $a^2+b^2$. So $a=4,b=1$ gives $d+c=17, d-c=1$ and you can solve the equations to get $d=9,c=8$ and a new solution is $4,1,8,9$. 
You can always use the factor with $1$ giving $a,b,\\frac 12(a^2+b^2-1),\\frac12(a^2+b^2+1)$ as a fundamental solution whenever $a,b$ are of opposite parity. If $a^2+b^2$ is not prime or the square of a prime there will be other solutions. So for $a=8, b=1$ we have $a^2+b^2=65=5 \\cdot 13$, and we find $8,1,4,9$ as a solution.So $a=4,b=1$ gives $d+c=17, d-c=1$ and you can solve the equations to get $d=9,c=8$. Unfortunately here it give a permutation of the solution we already have. If we choose $a=9,b=2$ we get $a^2+b^2=85=5 \\cdot 17$ and find the solution $9,2,6,11$ as well as $9,2,42,43$</p>\n" }, { "answer_id": 2089029, "score": 6, "text": "<p>It is well known that the identity $(2n)^2+(n^2-1)^2=(n^2+1)^2$ produces a class of Pythagorean Triples. We can apply the same algorithm twice by replacing $n\\to n^2+1$, as $$\\color{Green}{(4n)^2+(2(n^2-1))^2+((n^2+1)^2-1)^2=((n^2+1)^2+1)^2}\\,\\,\\,\\,\\forall n\\in\\Bbb{N}.$$ In fact this solution has an interesting property which does not holds for others :)<br>\nAlso there is another similar identity which looks rather simple, $$\\color{red}{n^2+(n+1)^2+(n^2+n)^2=(n^2+n+1)^2}\\,\\,\\,\\,\\forall n\\in\\Bbb{N}.$$</p>\n" }, { "answer_id": 2089030, "score": 6, "text": "<p><strong>Neater solution. (not obvious where I got formula from)</strong></p>\n\n<p>Note that: $(b^2-a^2)^2+(2ab)^2+(2cd)^2-(c^2+d^2)^2=(a^2+b^2+c^2-d^2)(a^2+b^2-c^2+d^2)$.</p>\n\n<p>So if you have a solution $(a,b,c,d)$ to $a^2+b^2+c^2-d^2=0$ then another solution is:</p>\n\n<p>$$\\bigg(b^2-a^2,2ab,2cd,c^2+d^2\\bigg)$$</p>\n\n<p>Applying this to your solution $(1,2,2,3)$ gives $(3,4,12,13)$.</p>\n\n<p>Note the order of terms could be juggled around for further applications of this. The only requirement is that $b&gt;a$.</p>\n\n<p>$(3,4,12,13)$ becomes $(24,12,313)$.</p>\n\n<p>$(3,12,4,13)$ becomes $(35,72,104,185)$.</p>\n\n<p>$(4,12,3,13)$ becomes $(128,96,78,178)$.</p>\n\n<p><strong>Ugly Solution. 
(more obvious where formula comes from)</strong></p>\n\n<p>Pythagoras's theorem is a related problem only involving three terms: $a^2+b^2=c^2$. It has a fundamental solution of $x^2-y^2, 2xy, x^2+y^2$. We can built upon this.</p>\n\n<p>If we tackle your problem $a^2+b^2+c^2=d^2$ then we can start with $a^2+b^2=(x^2-y^2)^2, c=2xy, d=x^2+y^2$. Note that solving $a^2+b^2=(x^2-y^2)^2$ is another case of Pythagoras: $a=k^2-l^2, b=2kl, x^2-y^2=k^2+l^2$. Note that $x^2-y^2=k^2+l^2$ is the same as your original equation to which you have a solution so we can build another from there: $x=3,k=2,l=1,y=2$ (Note we need $k&gt;l$).</p>\n\n<p>So with $k=2,l=1,y=2,x=3$ then we get $a=3,b=4,c=12,d=13$ giving $3^2+4^2+12^2=13^2$.</p>\n\n<p>So in general if you have a solution $(a,b,c,d)$ (with $a&lt;b$) you can form a new solution: $(b^2-a^2,2ab,2cd,c^2+d^2)$.</p>\n" }, { "answer_id": 2089009, "score": 10, "text": "<p>There is a parametrization of all primitive soutions, $\\gcd(a,b,c,d)=1.$ Take any four integers $m,n,p,q$ with $\\gcd(m,n,p,q)=1$ and $m+n+p+q$ odd. Those are the only rules. Then\n$$ a = m^2 + n^2 - p^2 - q^2, $$\n$$ b = 2 (mq+np), $$\n$$ c = 2(nq -mp), $$\n$$ d = m^2 + n^2 + p^2 + q^2 $$\nsatisfy\n$$ a^2 + b^2 + c^2 = d^2. $$\nThis is simply quaternion multiplication. It seems likely that the formula was known to Euler, who wrote out the multiplication formula for sums of four squares, which amounts to quaternion multiplication. However, people mostly mention this item as due to the number theorist V. A. Lebesgue in the mid 1800's. Apparently the first correct proof that this gives all primitive solutions was Dickson in 1920, so I am not sure what Lebesgue did.</p>\n\n<p>gp-pari</p>\n\n<pre><code>? a = m^2 + n^2 - p^2 - q^2\n%6 = m^2 + (n^2 + (-p^2 - q^2))\n? b = 2 *( m * q + n * p)\n%7 = 2*q*m + 2*p*n\n? c = 2 * ( n * q - m * p)\n%8 = -2*p*m + 2*q*n\n? d = m^2 + n^2 + p^2 + q^2\n%9 = m^2 + (n^2 + (p^2 + q^2))\n? a^2 + b^2 + c^2 - d^2\n%10 = 0\n? 
\n</code></pre>\n" }, { "answer_id": 2089023, "score": 10, "text": "<p>I'll give one obvious example that I found, and that is</p>\n\n<hr>\n\n<p><em>If</em>\n$$p^2+q^2=2rs\\tag1$$\n<em>Then</em>$$(p+q)^2+(p-q)^2+(r-s)^2=(r+s)^2\\tag2$$</p>\n\n<hr>\n\n<p>For example, if $(p,q,r,s)=(4,2,5,2)$, then$$6^2+2^2+3^2=7^2\\tag3$$</p>\n\n<hr>\n\n<p>Interestingly enough, Euler did the same work for these sums of squares and he gave an explicit formula.\n$$(m^2+n^2-p^2-q^2)^2+4(mq+np)^2+4(nq-mp)^2=(m^2+n^2+p^2+q^2)^2\\tag4$$\nFor arbitrary $m,n,p,q\\in\\mathbb{Z}$.</p>\n" }, { "answer_id": 2088979, "score": 6, "text": "<p>As you noticed, given a solution you can multiply it by any natural greater than $1$ to get another solution, so we can concentrate on solutions that do not have a common divisor. You can just keep finding solutions by hand. A more systematic approach is to write $a^2+b^2=d^2-c^2=(d+c)(d-c)$ The two terms on the right are either both odd or both even. You can just choose $a,b$ with one odd and one even and $a \\gt b$, then factor $a^2+b^2$. So $a=4,b=1$ gives $d+c=17, d-c=1$ and you can solve the equations to get $d=9,c=8$ and a new solution is $4,1,8,9$. You can always use the factor with $1$ giving $a,b,\\frac 12(a^2+b^2-1),\\frac12(a^2+b^2+1)$ as a fundamental solution whenever $a,b$ are of opposite parity. If $a^2+b^2$ is not prime or the square of a prime there will be other solutions. So for $a=8, b=1$ we have $a^2+b^2=65=5 \\cdot 13$, and we find $8,1,4,9$ as a solution.So $a=4,b=1$ gives $d+c=17, d-c=1$ and you can solve the equations to get $d=9,c=8$. Unfortunately here it give a permutation of the solution we already have. If we choose $a=9,b=2$ we get $a^2+b^2=85=5 \\cdot 17$ and find the solution $9,2,6,11$ as well as $9,2,42,43$</p>\n" }, { "answer_id": 2089029, "score": 6, "text": "<p>It is well known that the identity $(2n)^2+(n^2-1)^2=(n^2+1)^2$ produces a class of Pythagorean Triples. 
We can apply the same algorithm twice by replacing $n\\to n^2+1$, as $$\\color{Green}{(4n)^2+(2(n^2-1))^2+((n^2+1)^2-1)^2=((n^2+1)^2+1)^2}\\,\\,\\,\\,\\forall n\\in\\Bbb{N}.$$ In fact this solution has an interesting property which does not holds for others :)<br>\nAlso there is another similar identity which looks rather simple, $$\\color{red}{n^2+(n+1)^2+(n^2+n)^2=(n^2+n+1)^2}\\,\\,\\,\\,\\forall n\\in\\Bbb{N}.$$</p>\n" }, { "answer_id": 2089030, "score": 6, "text": "<p><strong>Neater solution. (not obvious where I got formula from)</strong></p>\n\n<p>Note that: $(b^2-a^2)^2+(2ab)^2+(2cd)^2-(c^2+d^2)^2=(a^2+b^2+c^2-d^2)(a^2+b^2-c^2+d^2)$.</p>\n\n<p>So if you have a solution $(a,b,c,d)$ to $a^2+b^2+c^2-d^2=0$ then another solution is:</p>\n\n<p>$$\\bigg(b^2-a^2,2ab,2cd,c^2+d^2\\bigg)$$</p>\n\n<p>Applying this to your solution $(1,2,2,3)$ gives $(3,4,12,13)$.</p>\n\n<p>Note the order of terms could be juggled around for further applications of this. The only requirement is that $b&gt;a$.</p>\n\n<p>$(3,4,12,13)$ becomes $(24,12,313)$.</p>\n\n<p>$(3,12,4,13)$ becomes $(35,72,104,185)$.</p>\n\n<p>$(4,12,3,13)$ becomes $(128,96,78,178)$.</p>\n\n<p><strong>Ugly Solution. (more obvious where formula comes from)</strong></p>\n\n<p>Pythagoras's theorem is a related problem only involving three terms: $a^2+b^2=c^2$. It has a fundamental solution of $x^2-y^2, 2xy, x^2+y^2$. We can built upon this.</p>\n\n<p>If we tackle your problem $a^2+b^2+c^2=d^2$ then we can start with $a^2+b^2=(x^2-y^2)^2, c=2xy, d=x^2+y^2$. Note that solving $a^2+b^2=(x^2-y^2)^2$ is another case of Pythagoras: $a=k^2-l^2, b=2kl, x^2-y^2=k^2+l^2$. 
Note that $x^2-y^2=k^2+l^2$ is the same as your original equation to which you have a solution so we can build another from there: $x=3,k=2,l=1,y=2$ (Note we need $k&gt;l$).</p>\n\n<p>So with $k=2,l=1,y=2,x=3$ then we get $a=3,b=4,c=12,d=13$ giving $3^2+4^2+12^2=13^2$.</p>\n\n<p>So in general if you have a solution $(a,b,c,d)$ (with $a&lt;b$) you can form a new solution: $(b^2-a^2,2ab,2cd,c^2+d^2)$.</p>\n" }, { "answer_id": 2089009, "score": 10, "text": "<p>There is a parametrization of all primitive soutions, $\\gcd(a,b,c,d)=1.$ Take any four integers $m,n,p,q$ with $\\gcd(m,n,p,q)=1$ and $m+n+p+q$ odd. Those are the only rules. Then\n$$ a = m^2 + n^2 - p^2 - q^2, $$\n$$ b = 2 (mq+np), $$\n$$ c = 2(nq -mp), $$\n$$ d = m^2 + n^2 + p^2 + q^2 $$\nsatisfy\n$$ a^2 + b^2 + c^2 = d^2. $$\nThis is simply quaternion multiplication. It seems likely that the formula was known to Euler, who wrote out the multiplication formula for sums of four squares, which amounts to quaternion multiplication. However, people mostly mention this item as due to the number theorist V. A. Lebesgue in the mid 1800's. Apparently the first correct proof that this gives all primitive solutions was Dickson in 1920, so I am not sure what Lebesgue did.</p>\n\n<p>gp-pari</p>\n\n<pre><code>? a = m^2 + n^2 - p^2 - q^2\n%6 = m^2 + (n^2 + (-p^2 - q^2))\n? b = 2 *( m * q + n * p)\n%7 = 2*q*m + 2*p*n\n? c = 2 * ( n * q - m * p)\n%8 = -2*p*m + 2*q*n\n? d = m^2 + n^2 + p^2 + q^2\n%9 = m^2 + (n^2 + (p^2 + q^2))\n? a^2 + b^2 + c^2 - d^2\n%10 = 0\n? 
\n</code></pre>\n" }, { "answer_id": 2089023, "score": 10, "text": "<p>I'll give one obvious example that I found, and that is</p>\n\n<hr>\n\n<p><em>If</em>\n$$p^2+q^2=2rs\\tag1$$\n<em>Then</em>$$(p+q)^2+(p-q)^2+(r-s)^2=(r+s)^2\\tag2$$</p>\n\n<hr>\n\n<p>For example, if $(p,q,r,s)=(4,2,5,2)$, then$$6^2+2^2+3^2=7^2\\tag3$$</p>\n\n<hr>\n\n<p>Interestingly enough, Euler did the same work for these sums of squares and he gave an explicit formula.\n$$(m^2+n^2-p^2-q^2)^2+4(mq+np)^2+4(nq-mp)^2=(m^2+n^2+p^2+q^2)^2\\tag4$$\nFor arbitrary $m,n,p,q\\in\\mathbb{Z}$.</p>\n" }, { "answer_id": 2088979, "score": 6, "text": "<p>As you noticed, given a solution you can multiply it by any natural greater than $1$ to get another solution, so we can concentrate on solutions that do not have a common divisor. You can just keep finding solutions by hand. A more systematic approach is to write $a^2+b^2=d^2-c^2=(d+c)(d-c)$ The two terms on the right are either both odd or both even. You can just choose $a,b$ with one odd and one even and $a \\gt b$, then factor $a^2+b^2$. So $a=4,b=1$ gives $d+c=17, d-c=1$ and you can solve the equations to get $d=9,c=8$ and a new solution is $4,1,8,9$. You can always use the factor with $1$ giving $a,b,\\frac 12(a^2+b^2-1),\\frac12(a^2+b^2+1)$ as a fundamental solution whenever $a,b$ are of opposite parity. If $a^2+b^2$ is not prime or the square of a prime there will be other solutions. So for $a=8, b=1$ we have $a^2+b^2=65=5 \\cdot 13$, and we find $8,1,4,9$ as a solution.So $a=4,b=1$ gives $d+c=17, d-c=1$ and you can solve the equations to get $d=9,c=8$. Unfortunately here it give a permutation of the solution we already have. If we choose $a=9,b=2$ we get $a^2+b^2=85=5 \\cdot 17$ and find the solution $9,2,6,11$ as well as $9,2,42,43$</p>\n" }, { "answer_id": 2089029, "score": 6, "text": "<p>It is well known that the identity $(2n)^2+(n^2-1)^2=(n^2+1)^2$ produces a class of Pythagorean Triples. 
We can apply the same algorithm twice by replacing $n\\to n^2+1$, as $$\\color{Green}{(4n)^2+(2(n^2-1))^2+((n^2+1)^2-1)^2=((n^2+1)^2+1)^2}\\,\\,\\,\\,\\forall n\\in\\Bbb{N}.$$ In fact this solution has an interesting property which does not holds for others :)<br>\nAlso there is another similar identity which looks rather simple, $$\\color{red}{n^2+(n+1)^2+(n^2+n)^2=(n^2+n+1)^2}\\,\\,\\,\\,\\forall n\\in\\Bbb{N}.$$</p>\n" }, { "answer_id": 2089030, "score": 6, "text": "<p><strong>Neater solution. (not obvious where I got formula from)</strong></p>\n\n<p>Note that: $(b^2-a^2)^2+(2ab)^2+(2cd)^2-(c^2+d^2)^2=(a^2+b^2+c^2-d^2)(a^2+b^2-c^2+d^2)$.</p>\n\n<p>So if you have a solution $(a,b,c,d)$ to $a^2+b^2+c^2-d^2=0$ then another solution is:</p>\n\n<p>$$\\bigg(b^2-a^2,2ab,2cd,c^2+d^2\\bigg)$$</p>\n\n<p>Applying this to your solution $(1,2,2,3)$ gives $(3,4,12,13)$.</p>\n\n<p>Note the order of terms could be juggled around for further applications of this. The only requirement is that $b&gt;a$.</p>\n\n<p>$(3,4,12,13)$ becomes $(24,12,313)$.</p>\n\n<p>$(3,12,4,13)$ becomes $(35,72,104,185)$.</p>\n\n<p>$(4,12,3,13)$ becomes $(128,96,78,178)$.</p>\n\n<p><strong>Ugly Solution. (more obvious where formula comes from)</strong></p>\n\n<p>Pythagoras's theorem is a related problem only involving three terms: $a^2+b^2=c^2$. It has a fundamental solution of $x^2-y^2, 2xy, x^2+y^2$. We can built upon this.</p>\n\n<p>If we tackle your problem $a^2+b^2+c^2=d^2$ then we can start with $a^2+b^2=(x^2-y^2)^2, c=2xy, d=x^2+y^2$. Note that solving $a^2+b^2=(x^2-y^2)^2$ is another case of Pythagoras: $a=k^2-l^2, b=2kl, x^2-y^2=k^2+l^2$. 
Note that $x^2-y^2=k^2+l^2$ is the same as your original equation to which you have a solution so we can build another from there: $x=3,k=2,l=1,y=2$ (Note we need $k&gt;l$).</p>\n\n<p>So with $k=2,l=1,y=2,x=3$ then we get $a=3,b=4,c=12,d=13$ giving $3^2+4^2+12^2=13^2$.</p>\n\n<p>So in general if you have a solution $(a,b,c,d)$ (with $a&lt;b$) you can form a new solution: $(b^2-a^2,2ab,2cd,c^2+d^2)$.</p>\n" }, { "answer_id": 2089009, "score": 10, "text": "<p>There is a parametrization of all primitive soutions, $\\gcd(a,b,c,d)=1.$ Take any four integers $m,n,p,q$ with $\\gcd(m,n,p,q)=1$ and $m+n+p+q$ odd. Those are the only rules. Then\n$$ a = m^2 + n^2 - p^2 - q^2, $$\n$$ b = 2 (mq+np), $$\n$$ c = 2(nq -mp), $$\n$$ d = m^2 + n^2 + p^2 + q^2 $$\nsatisfy\n$$ a^2 + b^2 + c^2 = d^2. $$\nThis is simply quaternion multiplication. It seems likely that the formula was known to Euler, who wrote out the multiplication formula for sums of four squares, which amounts to quaternion multiplication. However, people mostly mention this item as due to the number theorist V. A. Lebesgue in the mid 1800's. Apparently the first correct proof that this gives all primitive solutions was Dickson in 1920, so I am not sure what Lebesgue did.</p>\n\n<p>gp-pari</p>\n\n<pre><code>? a = m^2 + n^2 - p^2 - q^2\n%6 = m^2 + (n^2 + (-p^2 - q^2))\n? b = 2 *( m * q + n * p)\n%7 = 2*q*m + 2*p*n\n? c = 2 * ( n * q - m * p)\n%8 = -2*p*m + 2*q*n\n? d = m^2 + n^2 + p^2 + q^2\n%9 = m^2 + (n^2 + (p^2 + q^2))\n? a^2 + b^2 + c^2 - d^2\n%10 = 0\n? 
\n</code></pre>\n" }, { "answer_id": 2089023, "score": 10, "text": "<p>I'll give one obvious example that I found, and that is</p>\n\n<hr>\n\n<p><em>If</em>\n$$p^2+q^2=2rs\\tag1$$\n<em>Then</em>$$(p+q)^2+(p-q)^2+(r-s)^2=(r+s)^2\\tag2$$</p>\n\n<hr>\n\n<p>For example, if $(p,q,r,s)=(4,2,5,2)$, then$$6^2+2^2+3^2=7^2\\tag3$$</p>\n\n<hr>\n\n<p>Interestingly enough, Euler did the same work for these sums of squares and he gave an explicit formula.\n$$(m^2+n^2-p^2-q^2)^2+4(mq+np)^2+4(nq-mp)^2=(m^2+n^2+p^2+q^2)^2\\tag4$$\nFor arbitrary $m,n,p,q\\in\\mathbb{Z}$.</p>\n" }, { "answer_id": 2088979, "score": 6, "text": "<p>As you noticed, given a solution you can multiply it by any natural greater than $1$ to get another solution, so we can concentrate on solutions that do not have a common divisor. You can just keep finding solutions by hand. A more systematic approach is to write $a^2+b^2=d^2-c^2=(d+c)(d-c)$ The two terms on the right are either both odd or both even. You can just choose $a,b$ with one odd and one even and $a \\gt b$, then factor $a^2+b^2$. So $a=4,b=1$ gives $d+c=17, d-c=1$ and you can solve the equations to get $d=9,c=8$ and a new solution is $4,1,8,9$. You can always use the factor with $1$ giving $a,b,\\frac 12(a^2+b^2-1),\\frac12(a^2+b^2+1)$ as a fundamental solution whenever $a,b$ are of opposite parity. If $a^2+b^2$ is not prime or the square of a prime there will be other solutions. So for $a=8, b=1$ we have $a^2+b^2=65=5 \\cdot 13$, and we find $8,1,4,9$ as a solution.So $a=4,b=1$ gives $d+c=17, d-c=1$ and you can solve the equations to get $d=9,c=8$. Unfortunately here it give a permutation of the solution we already have. If we choose $a=9,b=2$ we get $a^2+b^2=85=5 \\cdot 17$ and find the solution $9,2,6,11$ as well as $9,2,42,43$</p>\n" }, { "answer_id": 2089029, "score": 6, "text": "<p>It is well known that the identity $(2n)^2+(n^2-1)^2=(n^2+1)^2$ produces a class of Pythagorean Triples. 
We can apply the same algorithm twice by replacing $n\\to n^2+1$, as $$\\color{Green}{(4n)^2+(2(n^2-1))^2+((n^2+1)^2-1)^2=((n^2+1)^2+1)^2}\\,\\,\\,\\,\\forall n\\in\\Bbb{N}.$$ In fact this solution has an interesting property which does not holds for others :)<br>\nAlso there is another similar identity which looks rather simple, $$\\color{red}{n^2+(n+1)^2+(n^2+n)^2=(n^2+n+1)^2}\\,\\,\\,\\,\\forall n\\in\\Bbb{N}.$$</p>\n" }, { "answer_id": 2089030, "score": 6, "text": "<p><strong>Neater solution. (not obvious where I got formula from)</strong></p>\n\n<p>Note that: $(b^2-a^2)^2+(2ab)^2+(2cd)^2-(c^2+d^2)^2=(a^2+b^2+c^2-d^2)(a^2+b^2-c^2+d^2)$.</p>\n\n<p>So if you have a solution $(a,b,c,d)$ to $a^2+b^2+c^2-d^2=0$ then another solution is:</p>\n\n<p>$$\\bigg(b^2-a^2,2ab,2cd,c^2+d^2\\bigg)$$</p>\n\n<p>Applying this to your solution $(1,2,2,3)$ gives $(3,4,12,13)$.</p>\n\n<p>Note the order of terms could be juggled around for further applications of this. The only requirement is that $b&gt;a$.</p>\n\n<p>$(3,4,12,13)$ becomes $(24,12,313)$.</p>\n\n<p>$(3,12,4,13)$ becomes $(35,72,104,185)$.</p>\n\n<p>$(4,12,3,13)$ becomes $(128,96,78,178)$.</p>\n\n<p><strong>Ugly Solution. (more obvious where formula comes from)</strong></p>\n\n<p>Pythagoras's theorem is a related problem only involving three terms: $a^2+b^2=c^2$. It has a fundamental solution of $x^2-y^2, 2xy, x^2+y^2$. We can built upon this.</p>\n\n<p>If we tackle your problem $a^2+b^2+c^2=d^2$ then we can start with $a^2+b^2=(x^2-y^2)^2, c=2xy, d=x^2+y^2$. Note that solving $a^2+b^2=(x^2-y^2)^2$ is another case of Pythagoras: $a=k^2-l^2, b=2kl, x^2-y^2=k^2+l^2$. 
Note that $x^2-y^2=k^2+l^2$ is the same as your original equation to which you have a solution so we can build another from there: $x=3,k=2,l=1,y=2$ (Note we need $k&gt;l$).</p>\n\n<p>So with $k=2,l=1,y=2,x=3$ then we get $a=3,b=4,c=12,d=13$ giving $3^2+4^2+12^2=13^2$.</p>\n\n<p>So in general if you have a solution $(a,b,c,d)$ (with $a&lt;b$) you can form a new solution: $(b^2-a^2,2ab,2cd,c^2+d^2)$.</p>\n" }, { "answer_id": 2089009, "score": 10, "text": "<p>There is a parametrization of all primitive soutions, $\\gcd(a,b,c,d)=1.$ Take any four integers $m,n,p,q$ with $\\gcd(m,n,p,q)=1$ and $m+n+p+q$ odd. Those are the only rules. Then\n$$ a = m^2 + n^2 - p^2 - q^2, $$\n$$ b = 2 (mq+np), $$\n$$ c = 2(nq -mp), $$\n$$ d = m^2 + n^2 + p^2 + q^2 $$\nsatisfy\n$$ a^2 + b^2 + c^2 = d^2. $$\nThis is simply quaternion multiplication. It seems likely that the formula was known to Euler, who wrote out the multiplication formula for sums of four squares, which amounts to quaternion multiplication. However, people mostly mention this item as due to the number theorist V. A. Lebesgue in the mid 1800's. Apparently the first correct proof that this gives all primitive solutions was Dickson in 1920, so I am not sure what Lebesgue did.</p>\n\n<p>gp-pari</p>\n\n<pre><code>? a = m^2 + n^2 - p^2 - q^2\n%6 = m^2 + (n^2 + (-p^2 - q^2))\n? b = 2 *( m * q + n * p)\n%7 = 2*q*m + 2*p*n\n? c = 2 * ( n * q - m * p)\n%8 = -2*p*m + 2*q*n\n? d = m^2 + n^2 + p^2 + q^2\n%9 = m^2 + (n^2 + (p^2 + q^2))\n? a^2 + b^2 + c^2 - d^2\n%10 = 0\n? 
\n</code></pre>\n" }, { "answer_id": 2089023, "score": 10, "text": "<p>I'll give one obvious example that I found, and that is</p>\n\n<hr>\n\n<p><em>If</em>\n$$p^2+q^2=2rs\\tag1$$\n<em>Then</em>$$(p+q)^2+(p-q)^2+(r-s)^2=(r+s)^2\\tag2$$</p>\n\n<hr>\n\n<p>For example, if $(p,q,r,s)=(4,2,5,2)$, then$$6^2+2^2+3^2=7^2\\tag3$$</p>\n\n<hr>\n\n<p>Interestingly enough, Euler did the same work for these sums of squares and he gave an explicit formula.\n$$(m^2+n^2-p^2-q^2)^2+4(mq+np)^2+4(nq-mp)^2=(m^2+n^2+p^2+q^2)^2\\tag4$$\nFor arbitrary $m,n,p,q\\in\\mathbb{Z}$.</p>\n" }, { "answer_id": 2088979, "score": 6, "text": "<p>As you noticed, given a solution you can multiply it by any natural greater than $1$ to get another solution, so we can concentrate on solutions that do not have a common divisor. You can just keep finding solutions by hand. A more systematic approach is to write $a^2+b^2=d^2-c^2=(d+c)(d-c)$ The two terms on the right are either both odd or both even. You can just choose $a,b$ with one odd and one even and $a \\gt b$, then factor $a^2+b^2$. So $a=4,b=1$ gives $d+c=17, d-c=1$ and you can solve the equations to get $d=9,c=8$ and a new solution is $4,1,8,9$. You can always use the factor with $1$ giving $a,b,\\frac 12(a^2+b^2-1),\\frac12(a^2+b^2+1)$ as a fundamental solution whenever $a,b$ are of opposite parity. If $a^2+b^2$ is not prime or the square of a prime there will be other solutions. So for $a=8, b=1$ we have $a^2+b^2=65=5 \\cdot 13$, and we find $8,1,4,9$ as a solution.So $a=4,b=1$ gives $d+c=17, d-c=1$ and you can solve the equations to get $d=9,c=8$. Unfortunately here it give a permutation of the solution we already have. If we choose $a=9,b=2$ we get $a^2+b^2=85=5 \\cdot 17$ and find the solution $9,2,6,11$ as well as $9,2,42,43$</p>\n" }, { "answer_id": 2089029, "score": 6, "text": "<p>It is well known that the identity $(2n)^2+(n^2-1)^2=(n^2+1)^2$ produces a class of Pythagorean Triples. 
We can apply the same algorithm twice by replacing $n\\to n^2+1$, as $$\\color{Green}{(4n)^2+(2(n^2-1))^2+((n^2+1)^2-1)^2=((n^2+1)^2+1)^2}\\,\\,\\,\\,\\forall n\\in\\Bbb{N}.$$ In fact this solution has an interesting property which does not holds for others :)<br>\nAlso there is another similar identity which looks rather simple, $$\\color{red}{n^2+(n+1)^2+(n^2+n)^2=(n^2+n+1)^2}\\,\\,\\,\\,\\forall n\\in\\Bbb{N}.$$</p>\n" }, { "answer_id": 2089030, "score": 6, "text": "<p><strong>Neater solution. (not obvious where I got formula from)</strong></p>\n\n<p>Note that: $(b^2-a^2)^2+(2ab)^2+(2cd)^2-(c^2+d^2)^2=(a^2+b^2+c^2-d^2)(a^2+b^2-c^2+d^2)$.</p>\n\n<p>So if you have a solution $(a,b,c,d)$ to $a^2+b^2+c^2-d^2=0$ then another solution is:</p>\n\n<p>$$\\bigg(b^2-a^2,2ab,2cd,c^2+d^2\\bigg)$$</p>\n\n<p>Applying this to your solution $(1,2,2,3)$ gives $(3,4,12,13)$.</p>\n\n<p>Note the order of terms could be juggled around for further applications of this. The only requirement is that $b&gt;a$.</p>\n\n<p>$(3,4,12,13)$ becomes $(24,12,313)$.</p>\n\n<p>$(3,12,4,13)$ becomes $(35,72,104,185)$.</p>\n\n<p>$(4,12,3,13)$ becomes $(128,96,78,178)$.</p>\n\n<p><strong>Ugly Solution. (more obvious where formula comes from)</strong></p>\n\n<p>Pythagoras's theorem is a related problem only involving three terms: $a^2+b^2=c^2$. It has a fundamental solution of $x^2-y^2, 2xy, x^2+y^2$. We can built upon this.</p>\n\n<p>If we tackle your problem $a^2+b^2+c^2=d^2$ then we can start with $a^2+b^2=(x^2-y^2)^2, c=2xy, d=x^2+y^2$. Note that solving $a^2+b^2=(x^2-y^2)^2$ is another case of Pythagoras: $a=k^2-l^2, b=2kl, x^2-y^2=k^2+l^2$. 
Note that $x^2-y^2=k^2+l^2$ is the same as your original equation to which you have a solution so we can build another from there: $x=3,k=2,l=1,y=2$ (Note we need $k&gt;l$).</p>\n\n<p>So with $k=2,l=1,y=2,x=3$ then we get $a=3,b=4,c=12,d=13$ giving $3^2+4^2+12^2=13^2$.</p>\n\n<p>So in general if you have a solution $(a,b,c,d)$ (with $a&lt;b$) you can form a new solution: $(b^2-a^2,2ab,2cd,c^2+d^2)$.</p>\n" }, { "answer_id": 2089009, "score": 10, "text": "<p>There is a parametrization of all primitive solutions, $\gcd(a,b,c,d)=1.$ Take any four integers $m,n,p,q$ with $\gcd(m,n,p,q)=1$ and $m+n+p+q$ odd. Those are the only rules. Then\n$$ a = m^2 + n^2 - p^2 - q^2, $$\n$$ b = 2 (mq+np), $$\n$$ c = 2(nq -mp), $$\n$$ d = m^2 + n^2 + p^2 + q^2 $$\nsatisfy\n$$ a^2 + b^2 + c^2 = d^2. $$\nThis is simply quaternion multiplication. It seems likely that the formula was known to Euler, who wrote out the multiplication formula for sums of four squares, which amounts to quaternion multiplication. However, people mostly mention this item as due to the number theorist V. A. Lebesgue in the mid 1800's. Apparently the first correct proof that this gives all primitive solutions was Dickson in 1920, so I am not sure what Lebesgue did.</p>\n\n<p>gp-pari</p>\n\n<pre><code>? a = m^2 + n^2 - p^2 - q^2\n%6 = m^2 + (n^2 + (-p^2 - q^2))\n? b = 2 *( m * q + n * p)\n%7 = 2*q*m + 2*p*n\n? c = 2 * ( n * q - m * p)\n%8 = -2*p*m + 2*q*n\n? d = m^2 + n^2 + p^2 + q^2\n%9 = m^2 + (n^2 + (p^2 + q^2))\n? a^2 + b^2 + c^2 - d^2\n%10 = 0\n? 
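\\ (added check, not part of the original transcript) instantiating the symbolic\n\\ a,b,c,d above at the primitive choice m=2, n=1, p=1, q=1 (m+n+p+q = 5 is odd):\n? substvec([a, b, c, d], [m, n, p, q], [2, 1, 1, 1])\n%11 = [3, 6, -2, 7]\n\\ indeed 3^2 + 6^2 + (-2)^2 = 49 = 7^2\n? 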
\n</code></pre>\n" }, { "answer_id": 2089023, "score": 10, "text": "<p>I'll give one obvious example that I found, and that is</p>\n\n<hr>\n\n<p><em>If</em>\n$$p^2+q^2=2rs\\tag1$$\n<em>Then</em>$$(p+q)^2+(p-q)^2+(r-s)^2=(r+s)^2\\tag2$$</p>\n\n<hr>\n\n<p>For example, if $(p,q,r,s)=(4,2,5,2)$, then$$6^2+2^2+3^2=7^2\\tag3$$</p>\n\n<hr>\n\n<p>Interestingly enough, Euler did the same work for these sums of squares and he gave an explicit formula.\n$$(m^2+n^2-p^2-q^2)^2+4(mq+np)^2+4(nq-mp)^2=(m^2+n^2+p^2+q^2)^2\\tag4$$\nFor arbitrary $m,n,p,q\\in\\mathbb{Z}$.</p>\n" }, { "answer_id": 2088979, "score": 6, "text": "<p>As you noticed, given a solution you can multiply it by any natural greater than $1$ to get another solution, so we can concentrate on solutions that do not have a common divisor. You can just keep finding solutions by hand. A more systematic approach is to write $a^2+b^2=d^2-c^2=(d+c)(d-c)$ The two terms on the right are either both odd or both even. You can just choose $a,b$ with one odd and one even and $a \\gt b$, then factor $a^2+b^2$. So $a=4,b=1$ gives $d+c=17, d-c=1$ and you can solve the equations to get $d=9,c=8$ and a new solution is $4,1,8,9$. You can always use the factor with $1$ giving $a,b,\\frac 12(a^2+b^2-1),\\frac12(a^2+b^2+1)$ as a fundamental solution whenever $a,b$ are of opposite parity. If $a^2+b^2$ is not prime or the square of a prime there will be other solutions. So for $a=8, b=1$ we have $a^2+b^2=65=5 \\cdot 13$, and we find $8,1,4,9$ as a solution.So $a=4,b=1$ gives $d+c=17, d-c=1$ and you can solve the equations to get $d=9,c=8$. Unfortunately here it give a permutation of the solution we already have. If we choose $a=9,b=2$ we get $a^2+b^2=85=5 \\cdot 17$ and find the solution $9,2,6,11$ as well as $9,2,42,43$</p>\n" }, { "answer_id": 2089029, "score": 6, "text": "<p>It is well known that the identity $(2n)^2+(n^2-1)^2=(n^2+1)^2$ produces a class of Pythagorean Triples. 
We can apply the same algorithm twice by replacing $n\\to n^2+1$, as $$\\color{Green}{(4n)^2+(2(n^2-1))^2+((n^2+1)^2-1)^2=((n^2+1)^2+1)^2}\\,\\,\\,\\,\\forall n\\in\\Bbb{N}.$$ In fact this solution has an interesting property which does not holds for others :)<br>\nAlso there is another similar identity which looks rather simple, $$\\color{red}{n^2+(n+1)^2+(n^2+n)^2=(n^2+n+1)^2}\\,\\,\\,\\,\\forall n\\in\\Bbb{N}.$$</p>\n" }, { "answer_id": 2089030, "score": 6, "text": "<p><strong>Neater solution. (not obvious where I got formula from)</strong></p>\n\n<p>Note that: $(b^2-a^2)^2+(2ab)^2+(2cd)^2-(c^2+d^2)^2=(a^2+b^2+c^2-d^2)(a^2+b^2-c^2+d^2)$.</p>\n\n<p>So if you have a solution $(a,b,c,d)$ to $a^2+b^2+c^2-d^2=0$ then another solution is:</p>\n\n<p>$$\\bigg(b^2-a^2,2ab,2cd,c^2+d^2\\bigg)$$</p>\n\n<p>Applying this to your solution $(1,2,2,3)$ gives $(3,4,12,13)$.</p>\n\n<p>Note the order of terms could be juggled around for further applications of this. The only requirement is that $b&gt;a$.</p>\n\n<p>$(3,4,12,13)$ becomes $(24,12,313)$.</p>\n\n<p>$(3,12,4,13)$ becomes $(35,72,104,185)$.</p>\n\n<p>$(4,12,3,13)$ becomes $(128,96,78,178)$.</p>\n\n<p><strong>Ugly Solution. (more obvious where formula comes from)</strong></p>\n\n<p>Pythagoras's theorem is a related problem only involving three terms: $a^2+b^2=c^2$. It has a fundamental solution of $x^2-y^2, 2xy, x^2+y^2$. We can built upon this.</p>\n\n<p>If we tackle your problem $a^2+b^2+c^2=d^2$ then we can start with $a^2+b^2=(x^2-y^2)^2, c=2xy, d=x^2+y^2$. Note that solving $a^2+b^2=(x^2-y^2)^2$ is another case of Pythagoras: $a=k^2-l^2, b=2kl, x^2-y^2=k^2+l^2$. 
Note that $x^2-y^2=k^2+l^2$ is the same as your original equation to which you have a solution so we can build another from there: $x=3,k=2,l=1,y=2$ (Note we need $k&gt;l$).</p>\n\n<p>So with $k=2,l=1,y=2,x=3$ then we get $a=3,b=4,c=12,d=13$ giving $3^2+4^2+12^2=13^2$.</p>\n\n<p>So in general if you have a solution $(a,b,c,d)$ (with $a&lt;b$) you can form a new solution: $(b^2-a^2,2ab,2cd,c^2+d^2)$.</p>\n" } ]
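The parametrization quoted in the answers, $a=m^2+n^2-p^2-q^2$, $b=2(mq+np)$, $c=2(nq-mp)$, $d=m^2+n^2+p^2+q^2$, is an algebraic identity, so it is easy to sanity-check numerically. A minimal Python sketch (the function name `lebesgue_quad` is just a label chosen here, not from the answers):

```python
# Sketch: verify a^2 + b^2 + c^2 = d^2 for the quoted quaternion/Lebesgue
# parametrization. The identity holds for all integers m, n, p, q; the
# conditions gcd(m,n,p,q) = 1 and m+n+p+q odd are only needed so that the
# resulting solution is primitive.
def lebesgue_quad(m, n, p, q):
    a = m * m + n * n - p * p - q * q
    b = 2 * (m * q + n * p)
    c = 2 * (n * q - m * p)
    d = m * m + n * n + p * p + q * q
    return a, b, c, d

# Brute-force check over a small box of parameter values.
for m in range(-3, 4):
    for n in range(-3, 4):
        for p in range(-3, 4):
            for q in range(-3, 4):
                a, b, c, d = lebesgue_quad(m, n, p, q)
                assert a * a + b * b + c * c == d * d
```

For example, $(m,n,p,q)=(2,1,2,0)$ gives $(a,b,c,d)=(1,4,-8,9)$, a signed version of the $1,4,8,9$ solution that the factoring approach also finds.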
Question: Solutions for $a^2+b^2+c^2=d^2$ Answer 1: <p>There is a parametrization of all primitive solutions, $\gcd(a,b,c,d)=1.$ Take any four integers $m,n,p,q$ with $\gcd(m,n,p,q)=1$ and $m+n+p+q$ odd. Those are the only rules. Then $$ a = m^2 + n^2 - p^2 - q^2, $$ $$ b = 2 (mq+np), $$ $$ c = 2(nq -mp), $$ $$ d = m^2 + n^2 + p^2 + q^2 $$ satisfy $$ a^2 + b^2 + c^2 = d^2. $$ This is simply quaternion multiplication. It seems likely that the formula was known to Euler, who wrote out the multiplication formula for sums of four squares, which amounts to quaternion multiplication. However, people mostly mention this item as due to the number theorist V. A. Lebesgue in the mid 1800's. Apparently the first correct proof that this gives all primitive solutions was Dickson in 1920, so I am not sure what Lebesgue did.</p> <p>gp-pari</p> <pre><code>? a = m^2 + n^2 - p^2 - q^2 %6 = m^2 + (n^2 + (-p^2 - q^2)) ? b = 2 *( m * q + n * p) %7 = 2*q*m + 2*p*n ? c = 2 * ( n * q - m * p) %8 = -2*p*m + 2*q*n ? d = m^2 + n^2 + p^2 + q^2 %9 = m^2 + (n^2 + (p^2 + q^2)) ? a^2 + b^2 + c^2 - d^2 %10 = 0 ? </code></pre> Answer 2: <p>I'll give one obvious example that I found, and that is</p> <hr> <p><em>If</em> $$p^2+q^2=2rs\tag1$$ <em>Then</em>$$(p+q)^2+(p-q)^2+(r-s)^2=(r+s)^2\tag2$$</p> <hr> <p>For example, if $(p,q,r,s)=(4,2,5,2)$, then$$6^2+2^2+3^2=7^2\tag3$$</p> <hr> <p>Interestingly enough, Euler did the same work for these sums of squares and he gave an explicit formula. $$(m^2+n^2-p^2-q^2)^2+4(mq+np)^2+4(nq-mp)^2=(m^2+n^2+p^2+q^2)^2\tag4$$ For arbitrary $m,n,p,q\in\mathbb{Z}$.</p> Answer 3: <p>As you noticed, given a solution you can multiply it by any natural greater than $1$ to get another solution, so we can concentrate on solutions that do not have a common divisor. You can just keep finding solutions by hand. A more systematic approach is to write $a^2+b^2=d^2-c^2=(d+c)(d-c)$. The two terms on the right are either both odd or both even. 
You can just choose $a,b$ with one odd and one even and $a \gt b$, then factor $a^2+b^2$. So $a=4,b=1$ gives $d+c=17, d-c=1$ and you can solve the equations to get $d=9,c=8$ and a new solution is $4,1,8,9$. You can always use the factor with $1$ giving $a,b,\frac 12(a^2+b^2-1),\frac12(a^2+b^2+1)$ as a fundamental solution whenever $a,b$ are of opposite parity. If $a^2+b^2$ is not prime or the square of a prime there will be other solutions. So for $a=8, b=1$ we have $a^2+b^2=65=5 \cdot 13$; taking $d+c=13, d-c=5$ gives $d=9, c=4$, and we find $8,1,4,9$ as a solution. Unfortunately here it gives a permutation of the solution we already have. If we choose $a=9,b=2$ we get $a^2+b^2=85=5 \cdot 17$ and find the solution $9,2,6,11$ as well as $9,2,42,43$.</p> Answer 4: <p>It is well known that the identity $(2n)^2+(n^2-1)^2=(n^2+1)^2$ produces a class of Pythagorean Triples. We can apply the same algorithm twice by replacing $n\to n^2+1$, as $$\color{Green}{(4n)^2+(2(n^2-1))^2+((n^2+1)^2-1)^2=((n^2+1)^2+1)^2}\,\,\,\,\forall n\in\Bbb{N}.$$ In fact this solution has an interesting property which does not hold for others :)<br> Also there is another similar identity which looks rather simple, $$\color{red}{n^2+(n+1)^2+(n^2+n)^2=(n^2+n+1)^2}\,\,\,\,\forall n\in\Bbb{N}.$$</p> Answer 5: <p><strong>Neater solution. (not obvious where I got formula from)</strong></p> <p>Note that: $(b^2-a^2)^2+(2ab)^2+(2cd)^2-(c^2+d^2)^2=(a^2+b^2+c^2-d^2)(a^2+b^2-c^2+d^2)$.</p> <p>So if you have a solution $(a,b,c,d)$ to $a^2+b^2+c^2-d^2=0$ then another solution is:</p> <p>$$\bigg(b^2-a^2,2ab,2cd,c^2+d^2\bigg)$$</p> <p>Applying this to your solution $(1,2,2,3)$ gives $(3,4,12,13)$.</p> <p>Note the order of terms could be juggled around for further applications of this. 
The only requirement is that $b&gt;a$.</p> <p>$(3,4,12,13)$ becomes $(7,24,312,313)$.</p> <p>$(3,12,4,13)$ becomes $(135,72,104,185)$.</p> <p>$(4,12,3,13)$ becomes $(128,96,78,178)$.</p> <p><strong>Ugly Solution. (more obvious where formula comes from)</strong></p> <p>Pythagoras's theorem is a related problem only involving three terms: $a^2+b^2=c^2$. It has a fundamental solution of $x^2-y^2, 2xy, x^2+y^2$. We can build upon this.</p> <p>If we tackle your problem $a^2+b^2+c^2=d^2$ then we can start with $a^2+b^2=(x^2-y^2)^2, c=2xy, d=x^2+y^2$. Note that solving $a^2+b^2=(x^2-y^2)^2$ is another case of Pythagoras: $a=k^2-l^2, b=2kl, x^2-y^2=k^2+l^2$. Note that $x^2-y^2=k^2+l^2$ is the same as your original equation to which you have a solution so we can build another from there: $x=3,k=2,l=1,y=2$ (Note we need $k&gt;l$).</p> <p>So with $k=2,l=1,y=2,x=3$ then we get $a=3,b=4,c=12,d=13$ giving $3^2+4^2+12^2=13^2$.</p> <p>So in general if you have a solution $(a,b,c,d)$ (with $a&lt;b$) you can form a new solution: $(b^2-a^2,2ab,2cd,c^2+d^2)$.</p> Answer 6: <p>There is a parametrization of all primitive solutions, $\gcd(a,b,c,d)=1.$ Take any four integers $m,n,p,q$ with $\gcd(m,n,p,q)=1$ and $m+n+p+q$ odd. Those are the only rules. Then $$ a = m^2 + n^2 - p^2 - q^2, $$ $$ b = 2 (mq+np), $$ $$ c = 2(nq -mp), $$ $$ d = m^2 + n^2 + p^2 + q^2 $$ satisfy $$ a^2 + b^2 + c^2 = d^2. $$ This is simply quaternion multiplication. It seems likely that the formula was known to Euler, who wrote out the multiplication formula for sums of four squares, which amounts to quaternion multiplication. However, people mostly mention this item as due to the number theorist V. A. Lebesgue in the mid 1800's. Apparently the first correct proof that this gives all primitive solutions was Dickson in 1920, so I am not sure what Lebesgue did.</p> <p>gp-pari</p> <pre><code>? a = m^2 + n^2 - p^2 - q^2 %6 = m^2 + (n^2 + (-p^2 - q^2)) ? b = 2 *( m * q + n * p) %7 = 2*q*m + 2*p*n ? 
c = 2 * ( n * q - m * p) %8 = -2*p*m + 2*q*n ? d = m^2 + n^2 + p^2 + q^2 %9 = m^2 + (n^2 + (p^2 + q^2)) ? a^2 + b^2 + c^2 - d^2 %10 = 0 ? </code></pre> Answer 7: <p>I'll give one obvious example that I found, and that is</p> <hr> <p><em>If</em> $$p^2+q^2=2rs\tag1$$ <em>Then</em>$$(p+q)^2+(p-q)^2+(r-s)^2=(r+s)^2\tag2$$</p> <hr> <p>For example, if $(p,q,r,s)=(4,2,5,2)$, then$$6^2+2^2+3^2=7^2\tag3$$</p> <hr> <p>Interestingly enough, Euler did the same work for these sums of squares and he gave an explicit formula. $$(m^2+n^2-p^2-q^2)^2+4(mq+np)^2+4(nq-mp)^2=(m^2+n^2+p^2+q^2)^2\tag4$$ For arbitrary $m,n,p,q\in\mathbb{Z}$.</p> Answer 8: <p>As you noticed, given a solution you can multiply it by any natural greater than $1$ to get another solution, so we can concentrate on solutions that do not have a common divisor. You can just keep finding solutions by hand. A more systematic approach is to write $a^2+b^2=d^2-c^2=(d+c)(d-c)$. The two terms on the right are either both odd or both even. You can just choose $a,b$ with one odd and one even and $a \gt b$, then factor $a^2+b^2$. So $a=4,b=1$ gives $d+c=17, d-c=1$ and you can solve the equations to get $d=9,c=8$ and a new solution is $4,1,8,9$. You can always use the factor with $1$ giving $a,b,\frac 12(a^2+b^2-1),\frac12(a^2+b^2+1)$ as a fundamental solution whenever $a,b$ are of opposite parity. If $a^2+b^2$ is not prime or the square of a prime there will be other solutions. So for $a=8, b=1$ we have $a^2+b^2=65=5 \cdot 13$; taking $d+c=13, d-c=5$ gives $d=9, c=4$, and we find $8,1,4,9$ as a solution. Unfortunately here it gives a permutation of the solution we already have. If we choose $a=9,b=2$ we get $a^2+b^2=85=5 \cdot 17$ and find the solution $9,2,6,11$ as well as $9,2,42,43$.</p> Answer 9: <p>It is well known that the identity $(2n)^2+(n^2-1)^2=(n^2+1)^2$ produces a class of Pythagorean Triples. 
We can apply the same algorithm twice by replacing $n\to n^2+1$, as $$\color{Green}{(4n)^2+(2(n^2-1))^2+((n^2+1)^2-1)^2=((n^2+1)^2+1)^2}\,\,\,\,\forall n\in\Bbb{N}.$$ In fact this solution has an interesting property which does not hold for others :)<br> Also there is another similar identity which looks rather simple, $$\color{red}{n^2+(n+1)^2+(n^2+n)^2=(n^2+n+1)^2}\,\,\,\,\forall n\in\Bbb{N}.$$</p> Answer 10: <p><strong>Neater solution. (not obvious where I got formula from)</strong></p> <p>Note that: $(b^2-a^2)^2+(2ab)^2+(2cd)^2-(c^2+d^2)^2=(a^2+b^2+c^2-d^2)(a^2+b^2-c^2+d^2)$.</p> <p>So if you have a solution $(a,b,c,d)$ to $a^2+b^2+c^2-d^2=0$ then another solution is:</p> <p>$$\bigg(b^2-a^2,2ab,2cd,c^2+d^2\bigg)$$</p> <p>Applying this to your solution $(1,2,2,3)$ gives $(3,4,12,13)$.</p> <p>Note the order of terms could be juggled around for further applications of this. The only requirement is that $b&gt;a$.</p> <p>$(3,4,12,13)$ becomes $(7,24,312,313)$.</p> <p>$(3,12,4,13)$ becomes $(135,72,104,185)$.</p> <p>$(4,12,3,13)$ becomes $(128,96,78,178)$.</p> <p><strong>Ugly Solution. (more obvious where formula comes from)</strong></p> <p>Pythagoras's theorem is a related problem only involving three terms: $a^2+b^2=c^2$. It has a fundamental solution of $x^2-y^2, 2xy, x^2+y^2$. We can build upon this.</p> <p>If we tackle your problem $a^2+b^2+c^2=d^2$ then we can start with $a^2+b^2=(x^2-y^2)^2, c=2xy, d=x^2+y^2$. Note that solving $a^2+b^2=(x^2-y^2)^2$ is another case of Pythagoras: $a=k^2-l^2, b=2kl, x^2-y^2=k^2+l^2$. 
Note that $x^2-y^2=k^2+l^2$ is the same as your original equation to which you have a solution so we can build another from there: $x=3,k=2,l=1,y=2$ (Note we need $k&gt;l$).</p> <p>So with $k=2,l=1,y=2,x=3$ then we get $a=3,b=4,c=12,d=13$ giving $3^2+4^2+12^2=13^2$.</p> <p>So in general if you have a solution $(a,b,c,d)$ (with $a&lt;b$) you can form a new solution: $(b^2-a^2,2ab,2cd,c^2+d^2)$.</p> Answer 11: <p>There is a parametrization of all primitive solutions, $\gcd(a,b,c,d)=1.$ Take any four integers $m,n,p,q$ with $\gcd(m,n,p,q)=1$ and $m+n+p+q$ odd. Those are the only rules. Then $$ a = m^2 + n^2 - p^2 - q^2, $$ $$ b = 2 (mq+np), $$ $$ c = 2(nq -mp), $$ $$ d = m^2 + n^2 + p^2 + q^2 $$ satisfy $$ a^2 + b^2 + c^2 = d^2. $$ This is simply quaternion multiplication. It seems likely that the formula was known to Euler, who wrote out the multiplication formula for sums of four squares, which amounts to quaternion multiplication. However, people mostly mention this item as due to the number theorist V. A. Lebesgue in the mid 1800's. Apparently the first correct proof that this gives all primitive solutions was Dickson in 1920, so I am not sure what Lebesgue did.</p> <p>gp-pari</p> <pre><code>? a = m^2 + n^2 - p^2 - q^2 %6 = m^2 + (n^2 + (-p^2 - q^2)) ? b = 2 *( m * q + n * p) %7 = 2*q*m + 2*p*n ? c = 2 * ( n * q - m * p) %8 = -2*p*m + 2*q*n ? d = m^2 + n^2 + p^2 + q^2 %9 = m^2 + (n^2 + (p^2 + q^2)) ? a^2 + b^2 + c^2 - d^2 %10 = 0 ? </code></pre> Answer 12: <p>I'll give one obvious example that I found, and that is</p> <hr> <p><em>If</em> $$p^2+q^2=2rs\tag1$$ <em>Then</em>$$(p+q)^2+(p-q)^2+(r-s)^2=(r+s)^2\tag2$$</p> <hr> <p>For example, if $(p,q,r,s)=(4,2,5,2)$, then$$6^2+2^2+3^2=7^2\tag3$$</p> <hr> <p>Interestingly enough, Euler did the same work for these sums of squares and he gave an explicit formula. 
$$(m^2+n^2-p^2-q^2)^2+4(mq+np)^2+4(nq-mp)^2=(m^2+n^2+p^2+q^2)^2\tag4$$ For arbitrary $m,n,p,q\in\mathbb{Z}$.</p> Answer 13: <p>As you noticed, given a solution you can multiply it by any natural greater than $1$ to get another solution, so we can concentrate on solutions that do not have a common divisor. You can just keep finding solutions by hand. A more systematic approach is to write $a^2+b^2=d^2-c^2=(d+c)(d-c)$. The two terms on the right are either both odd or both even. You can just choose $a,b$ with one odd and one even and $a \gt b$, then factor $a^2+b^2$. So $a=4,b=1$ gives $d+c=17, d-c=1$ and you can solve the equations to get $d=9,c=8$ and a new solution is $4,1,8,9$. You can always use the factor with $1$ giving $a,b,\frac 12(a^2+b^2-1),\frac12(a^2+b^2+1)$ as a fundamental solution whenever $a,b$ are of opposite parity. If $a^2+b^2$ is not prime or the square of a prime there will be other solutions. So for $a=8, b=1$ we have $a^2+b^2=65=5 \cdot 13$; taking $d+c=13, d-c=5$ gives $d=9, c=4$, and we find $8,1,4,9$ as a solution. Unfortunately here it gives a permutation of the solution we already have. If we choose $a=9,b=2$ we get $a^2+b^2=85=5 \cdot 17$ and find the solution $9,2,6,11$ as well as $9,2,42,43$.</p> Answer 14: <p>It is well known that the identity $(2n)^2+(n^2-1)^2=(n^2+1)^2$ produces a class of Pythagorean Triples. We can apply the same algorithm twice by replacing $n\to n^2+1$, as $$\color{Green}{(4n)^2+(2(n^2-1))^2+((n^2+1)^2-1)^2=((n^2+1)^2+1)^2}\,\,\,\,\forall n\in\Bbb{N}.$$ In fact this solution has an interesting property which does not hold for others :)<br> Also there is another similar identity which looks rather simple, $$\color{red}{n^2+(n+1)^2+(n^2+n)^2=(n^2+n+1)^2}\,\,\,\,\forall n\in\Bbb{N}.$$</p> Answer 15: <p><strong>Neater solution. 
(not obvious where I got formula from)</strong></p> <p>Note that: $(b^2-a^2)^2+(2ab)^2+(2cd)^2-(c^2+d^2)^2=(a^2+b^2+c^2-d^2)(a^2+b^2-c^2+d^2)$.</p> <p>So if you have a solution $(a,b,c,d)$ to $a^2+b^2+c^2-d^2=0$ then another solution is:</p> <p>$$\bigg(b^2-a^2,2ab,2cd,c^2+d^2\bigg)$$</p> <p>Applying this to your solution $(1,2,2,3)$ gives $(3,4,12,13)$.</p> <p>Note the order of terms could be juggled around for further applications of this. The only requirement is that $b&gt;a$.</p> <p>$(3,4,12,13)$ becomes $(7,24,312,313)$.</p> <p>$(3,12,4,13)$ becomes $(135,72,104,185)$.</p> <p>$(4,12,3,13)$ becomes $(128,96,78,178)$.</p> <p><strong>Ugly Solution. (more obvious where formula comes from)</strong></p> <p>Pythagoras's theorem is a related problem only involving three terms: $a^2+b^2=c^2$. It has a fundamental solution of $x^2-y^2, 2xy, x^2+y^2$. We can build upon this.</p> <p>If we tackle your problem $a^2+b^2+c^2=d^2$ then we can start with $a^2+b^2=(x^2-y^2)^2, c=2xy, d=x^2+y^2$. Note that solving $a^2+b^2=(x^2-y^2)^2$ is another case of Pythagoras: $a=k^2-l^2, b=2kl, x^2-y^2=k^2+l^2$. Note that $x^2-y^2=k^2+l^2$ is the same as your original equation to which you have a solution so we can build another from there: $x=3,k=2,l=1,y=2$ (Note we need $k&gt;l$).</p> <p>So with $k=2,l=1,y=2,x=3$ then we get $a=3,b=4,c=12,d=13$ giving $3^2+4^2+12^2=13^2$.</p> <p>So in general if you have a solution $(a,b,c,d)$ (with $a&lt;b$) you can form a new solution: $(b^2-a^2,2ab,2cd,c^2+d^2)$.</p> Answer 16: <p>There is a parametrization of all primitive solutions, $\gcd(a,b,c,d)=1.$ Take any four integers $m,n,p,q$ with $\gcd(m,n,p,q)=1$ and $m+n+p+q$ odd. Those are the only rules. Then $$ a = m^2 + n^2 - p^2 - q^2, $$ $$ b = 2 (mq+np), $$ $$ c = 2(nq -mp), $$ $$ d = m^2 + n^2 + p^2 + q^2 $$ satisfy $$ a^2 + b^2 + c^2 = d^2. $$ This is simply quaternion multiplication. 
It seems likely that the formula was known to Euler, who wrote out the multiplication formula for sums of four squares, which amounts to quaternion multiplication. However, people mostly mention this item as due to the number theorist V. A. Lebesgue in the mid 1800's. Apparently the first correct proof that this gives all primitive solutions was Dickson in 1920, so I am not sure what Lebesgue did.</p> <p>gp-pari</p> <pre><code>? a = m^2 + n^2 - p^2 - q^2 %6 = m^2 + (n^2 + (-p^2 - q^2)) ? b = 2 *( m * q + n * p) %7 = 2*q*m + 2*p*n ? c = 2 * ( n * q - m * p) %8 = -2*p*m + 2*q*n ? d = m^2 + n^2 + p^2 + q^2 %9 = m^2 + (n^2 + (p^2 + q^2)) ? a^2 + b^2 + c^2 - d^2 %10 = 0 ? </code></pre> Answer 17: <p>I'll give one obvious example that I found, and that is</p> <hr> <p><em>If</em> $$p^2+q^2=2rs\tag1$$ <em>Then</em>$$(p+q)^2+(p-q)^2+(r-s)^2=(r+s)^2\tag2$$</p> <hr> <p>For example, if $(p,q,r,s)=(4,2,5,2)$, then$$6^2+2^2+3^2=7^2\tag3$$</p> <hr> <p>Interestingly enough, Euler did the same work for these sums of squares and he gave an explicit formula. $$(m^2+n^2-p^2-q^2)^2+4(mq+np)^2+4(nq-mp)^2=(m^2+n^2+p^2+q^2)^2\tag4$$ For arbitrary $m,n,p,q\in\mathbb{Z}$.</p> Answer 18: <p>As you noticed, given a solution you can multiply it by any natural greater than $1$ to get another solution, so we can concentrate on solutions that do not have a common divisor. You can just keep finding solutions by hand. A more systematic approach is to write $a^2+b^2=d^2-c^2=(d+c)(d-c)$. The two terms on the right are either both odd or both even. You can just choose $a,b$ with one odd and one even and $a \gt b$, then factor $a^2+b^2$. So $a=4,b=1$ gives $d+c=17, d-c=1$ and you can solve the equations to get $d=9,c=8$ and a new solution is $4,1,8,9$. You can always use the factor with $1$ giving $a,b,\frac 12(a^2+b^2-1),\frac12(a^2+b^2+1)$ as a fundamental solution whenever $a,b$ are of opposite parity. If $a^2+b^2$ is not prime or the square of a prime there will be other solutions. 
So for $a=8, b=1$ we have $a^2+b^2=65=5 \cdot 13$; taking $d+c=13, d-c=5$ gives $d=9, c=4$, and we find $8,1,4,9$ as a solution. Unfortunately here it gives a permutation of the solution we already have. If we choose $a=9,b=2$ we get $a^2+b^2=85=5 \cdot 17$ and find the solution $9,2,6,11$ as well as $9,2,42,43$.</p> Answer 19: <p>It is well known that the identity $(2n)^2+(n^2-1)^2=(n^2+1)^2$ produces a class of Pythagorean Triples. We can apply the same algorithm twice by replacing $n\to n^2+1$, as $$\color{Green}{(4n)^2+(2(n^2-1))^2+((n^2+1)^2-1)^2=((n^2+1)^2+1)^2}\,\,\,\,\forall n\in\Bbb{N}.$$ In fact this solution has an interesting property which does not hold for others :)<br> Also there is another similar identity which looks rather simple, $$\color{red}{n^2+(n+1)^2+(n^2+n)^2=(n^2+n+1)^2}\,\,\,\,\forall n\in\Bbb{N}.$$</p> Answer 20: <p><strong>Neater solution. (not obvious where I got formula from)</strong></p> <p>Note that: $(b^2-a^2)^2+(2ab)^2+(2cd)^2-(c^2+d^2)^2=(a^2+b^2+c^2-d^2)(a^2+b^2-c^2+d^2)$.</p> <p>So if you have a solution $(a,b,c,d)$ to $a^2+b^2+c^2-d^2=0$ then another solution is:</p> <p>$$\bigg(b^2-a^2,2ab,2cd,c^2+d^2\bigg)$$</p> <p>Applying this to your solution $(1,2,2,3)$ gives $(3,4,12,13)$.</p> <p>Note the order of terms could be juggled around for further applications of this. The only requirement is that $b&gt;a$.</p> <p>$(3,4,12,13)$ becomes $(7,24,312,313)$.</p> <p>$(3,12,4,13)$ becomes $(135,72,104,185)$.</p> <p>$(4,12,3,13)$ becomes $(128,96,78,178)$.</p> <p><strong>Ugly Solution. (more obvious where formula comes from)</strong></p> <p>Pythagoras's theorem is a related problem only involving three terms: $a^2+b^2=c^2$. It has a fundamental solution of $x^2-y^2, 2xy, x^2+y^2$. We can build upon this.</p> <p>If we tackle your problem $a^2+b^2+c^2=d^2$ then we can start with $a^2+b^2=(x^2-y^2)^2, c=2xy, d=x^2+y^2$. 
Note that solving $a^2+b^2=(x^2-y^2)^2$ is another case of Pythagoras: $a=k^2-l^2, b=2kl, x^2-y^2=k^2+l^2$. Note that $x^2-y^2=k^2+l^2$ is the same as your original equation to which you have a solution so we can build another from there: $x=3,k=2,l=1,y=2$ (Note we need $k&gt;l$).</p> <p>So with $k=2,l=1,y=2,x=3$ then we get $a=3,b=4,c=12,d=13$ giving $3^2+4^2+12^2=13^2$.</p> <p>So in general if you have a solution $(a,b,c,d)$ (with $a&lt;b$) you can form a new solution: $(b^2-a^2,2ab,2cd,c^2+d^2)$.</p> Answer 21: <p>There is a parametrization of all primitive solutions, $\gcd(a,b,c,d)=1.$ Take any four integers $m,n,p,q$ with $\gcd(m,n,p,q)=1$ and $m+n+p+q$ odd. Those are the only rules. Then $$ a = m^2 + n^2 - p^2 - q^2, $$ $$ b = 2 (mq+np), $$ $$ c = 2(nq -mp), $$ $$ d = m^2 + n^2 + p^2 + q^2 $$ satisfy $$ a^2 + b^2 + c^2 = d^2. $$ This is simply quaternion multiplication. It seems likely that the formula was known to Euler, who wrote out the multiplication formula for sums of four squares, which amounts to quaternion multiplication. However, people mostly mention this item as due to the number theorist V. A. Lebesgue in the mid 1800's. Apparently the first correct proof that this gives all primitive solutions was Dickson in 1920, so I am not sure what Lebesgue did.</p> <p>gp-pari</p> <pre><code>? a = m^2 + n^2 - p^2 - q^2 %6 = m^2 + (n^2 + (-p^2 - q^2)) ? b = 2 *( m * q + n * p) %7 = 2*q*m + 2*p*n ? c = 2 * ( n * q - m * p) %8 = -2*p*m + 2*q*n ? d = m^2 + n^2 + p^2 + q^2 %9 = m^2 + (n^2 + (p^2 + q^2)) ? a^2 + b^2 + c^2 - d^2 %10 = 0 ? </code></pre> Answer 22: <p>I'll give one obvious example that I found, and that is</p> <hr> <p><em>If</em> $$p^2+q^2=2rs\tag1$$ <em>Then</em>$$(p+q)^2+(p-q)^2+(r-s)^2=(r+s)^2\tag2$$</p> <hr> <p>For example, if $(p,q,r,s)=(4,2,5,2)$, then$$6^2+2^2+3^2=7^2\tag3$$</p> <hr> <p>Interestingly enough, Euler did the same work for these sums of squares and he gave an explicit formula. 
$$(m^2+n^2-p^2-q^2)^2+4(mq+np)^2+4(nq-mp)^2=(m^2+n^2+p^2+q^2)^2\tag4$$ For arbitrary $m,n,p,q\in\mathbb{Z}$.</p> Answer 23: <p>As you noticed, given a solution you can multiply it by any natural greater than $1$ to get another solution, so we can concentrate on solutions that do not have a common divisor. You can just keep finding solutions by hand. A more systematic approach is to write $a^2+b^2=d^2-c^2=(d+c)(d-c)$. The two terms on the right are either both odd or both even. You can just choose $a,b$ with one odd and one even and $a \gt b$, then factor $a^2+b^2$. So $a=4,b=1$ gives $d+c=17, d-c=1$ and you can solve the equations to get $d=9,c=8$ and a new solution is $4,1,8,9$. You can always use the factor with $1$ giving $a,b,\frac 12(a^2+b^2-1),\frac12(a^2+b^2+1)$ as a fundamental solution whenever $a,b$ are of opposite parity. If $a^2+b^2$ is not prime or the square of a prime there will be other solutions. So for $a=8, b=1$ we have $a^2+b^2=65=5 \cdot 13$; taking $d+c=13, d-c=5$ gives $d=9, c=4$, and we find $8,1,4,9$ as a solution. Unfortunately here it gives a permutation of the solution we already have. If we choose $a=9,b=2$ we get $a^2+b^2=85=5 \cdot 17$ and find the solution $9,2,6,11$ as well as $9,2,42,43$.</p> Answer 24: <p>It is well known that the identity $(2n)^2+(n^2-1)^2=(n^2+1)^2$ produces a class of Pythagorean Triples. We can apply the same algorithm twice by replacing $n\to n^2+1$, as $$\color{Green}{(4n)^2+(2(n^2-1))^2+((n^2+1)^2-1)^2=((n^2+1)^2+1)^2}\,\,\,\,\forall n\in\Bbb{N}.$$ In fact this solution has an interesting property which does not hold for others :)<br> Also there is another similar identity which looks rather simple, $$\color{red}{n^2+(n+1)^2+(n^2+n)^2=(n^2+n+1)^2}\,\,\,\,\forall n\in\Bbb{N}.$$</p> Answer 25: <p><strong>Neater solution. 
(not obvious where I got formula from)</strong></p> <p>Note that: $(b^2-a^2)^2+(2ab)^2+(2cd)^2-(c^2+d^2)^2=(a^2+b^2+c^2-d^2)(a^2+b^2-c^2+d^2)$.</p> <p>So if you have a solution $(a,b,c,d)$ to $a^2+b^2+c^2-d^2=0$ then another solution is:</p> <p>$$\bigg(b^2-a^2,2ab,2cd,c^2+d^2\bigg)$$</p> <p>Applying this to your solution $(1,2,2,3)$ gives $(3,4,12,13)$.</p> <p>Note the order of terms could be juggled around for further applications of this. The only requirement is that $b&gt;a$.</p> <p>$(3,4,12,13)$ becomes $(24,12,313)$.</p> <p>$(3,12,4,13)$ becomes $(35,72,104,185)$.</p> <p>$(4,12,3,13)$ becomes $(128,96,78,178)$.</p> <p><strong>Ugly Solution. (more obvious where formula comes from)</strong></p> <p>Pythagoras's theorem is a related problem only involving three terms: $a^2+b^2=c^2$. It has a fundamental solution of $x^2-y^2, 2xy, x^2+y^2$. We can built upon this.</p> <p>If we tackle your problem $a^2+b^2+c^2=d^2$ then we can start with $a^2+b^2=(x^2-y^2)^2, c=2xy, d=x^2+y^2$. Note that solving $a^2+b^2=(x^2-y^2)^2$ is another case of Pythagoras: $a=k^2-l^2, b=2kl, x^2-y^2=k^2+l^2$. Note that $x^2-y^2=k^2+l^2$ is the same as your original equation to which you have a solution so we can build another from there: $x=3,k=2,l=1,y=2$ (Note we need $k&gt;l$).</p> <p>So with $k=2,l=1,y=2,x=3$ then we get $a=3,b=4,c=12,d=13$ giving $3^2+4^2+12^2=13^2$.</p> <p>So in general if you have a solution $(a,b,c,d)$ (with $a&lt;b$) you can form a new solution: $(b^2-a^2,2ab,2cd,c^2+d^2)$.</p> Answer 26: <p>There is a parametrization of all primitive soutions, $\gcd(a,b,c,d)=1.$ Take any four integers $m,n,p,q$ with $\gcd(m,n,p,q)=1$ and $m+n+p+q$ odd. Those are the only rules. Then $$ a = m^2 + n^2 - p^2 - q^2, $$ $$ b = 2 (mq+np), $$ $$ c = 2(nq -mp), $$ $$ d = m^2 + n^2 + p^2 + q^2 $$ satisfy $$ a^2 + b^2 + c^2 = d^2. $$ This is simply quaternion multiplication. 
It seems likely that the formula was known to Euler, who wrote out the multiplication formula for sums of four squares, which amounts to quaternion multiplication. However, people mostly mention this item as due to the number theorist V. A. Lebesgue in the mid 1800's. Apparently the first correct proof that this gives all primitive solutions was Dickson in 1920, so I am not sure what Lebesgue did.</p> <p>gp-pari</p> <pre><code>? a = m^2 + n^2 - p^2 - q^2 %6 = m^2 + (n^2 + (-p^2 - q^2)) ? b = 2 *( m * q + n * p) %7 = 2*q*m + 2*p*n ? c = 2 * ( n * q - m * p) %8 = -2*p*m + 2*q*n ? d = m^2 + n^2 + p^2 + q^2 %9 = m^2 + (n^2 + (p^2 + q^2)) ? a^2 + b^2 + c^2 - d^2 %10 = 0 ? </code></pre> Answer 27: <p>I'll give one obvious example that I found, and that is</p> <hr> <p><em>If</em> $$p^2+q^2=2rs\tag1$$ <em>Then</em>$$(p+q)^2+(p-q)^2+(r-s)^2=(r+s)^2\tag2$$</p> <hr> <p>For example, if $(p,q,r,s)=(4,2,5,2)$, then$$6^2+2^2+3^2=7^2\tag3$$</p> <hr> <p>Interestingly enough, Euler did the same work for these sums of squares and he gave an explicit formula. $$(m^2+n^2-p^2-q^2)^2+4(mq+np)^2+4(nq-mp)^2=(m^2+n^2+p^2+q^2)^2\tag4$$ For arbitrary $m,n,p,q\in\mathbb{Z}$.</p> Answer 28: <p>As you noticed, given a solution you can multiply it by any natural greater than $1$ to get another solution, so we can concentrate on solutions that do not have a common divisor. You can just keep finding solutions by hand. A more systematic approach is to write $a^2+b^2=d^2-c^2=(d+c)(d-c)$ The two terms on the right are either both odd or both even. You can just choose $a,b$ with one odd and one even and $a \gt b$, then factor $a^2+b^2$. So $a=4,b=1$ gives $d+c=17, d-c=1$ and you can solve the equations to get $d=9,c=8$ and a new solution is $4,1,8,9$. You can always use the factor with $1$ giving $a,b,\frac 12(a^2+b^2-1),\frac12(a^2+b^2+1)$ as a fundamental solution whenever $a,b$ are of opposite parity. If $a^2+b^2$ is not prime or the square of a prime there will be other solutions. 
So for $a=8, b=1$ we have $a^2+b^2=65=5 \cdot 13$, and we find $8,1,4,9$ as a solution.So $a=4,b=1$ gives $d+c=17, d-c=1$ and you can solve the equations to get $d=9,c=8$. Unfortunately here it give a permutation of the solution we already have. If we choose $a=9,b=2$ we get $a^2+b^2=85=5 \cdot 17$ and find the solution $9,2,6,11$ as well as $9,2,42,43$</p> Answer 29: <p>It is well known that the identity $(2n)^2+(n^2-1)^2=(n^2+1)^2$ produces a class of Pythagorean Triples. We can apply the same algorithm twice by replacing $n\to n^2+1$, as $$\color{Green}{(4n)^2+(2(n^2-1))^2+((n^2+1)^2-1)^2=((n^2+1)^2+1)^2}\,\,\,\,\forall n\in\Bbb{N}.$$ In fact this solution has an interesting property which does not holds for others :)<br> Also there is another similar identity which looks rather simple, $$\color{red}{n^2+(n+1)^2+(n^2+n)^2=(n^2+n+1)^2}\,\,\,\,\forall n\in\Bbb{N}.$$</p> Answer 30: <p><strong>Neater solution. (not obvious where I got formula from)</strong></p> <p>Note that: $(b^2-a^2)^2+(2ab)^2+(2cd)^2-(c^2+d^2)^2=(a^2+b^2+c^2-d^2)(a^2+b^2-c^2+d^2)$.</p> <p>So if you have a solution $(a,b,c,d)$ to $a^2+b^2+c^2-d^2=0$ then another solution is:</p> <p>$$\bigg(b^2-a^2,2ab,2cd,c^2+d^2\bigg)$$</p> <p>Applying this to your solution $(1,2,2,3)$ gives $(3,4,12,13)$.</p> <p>Note the order of terms could be juggled around for further applications of this. The only requirement is that $b&gt;a$.</p> <p>$(3,4,12,13)$ becomes $(24,12,313)$.</p> <p>$(3,12,4,13)$ becomes $(35,72,104,185)$.</p> <p>$(4,12,3,13)$ becomes $(128,96,78,178)$.</p> <p><strong>Ugly Solution. (more obvious where formula comes from)</strong></p> <p>Pythagoras's theorem is a related problem only involving three terms: $a^2+b^2=c^2$. It has a fundamental solution of $x^2-y^2, 2xy, x^2+y^2$. We can built upon this.</p> <p>If we tackle your problem $a^2+b^2+c^2=d^2$ then we can start with $a^2+b^2=(x^2-y^2)^2, c=2xy, d=x^2+y^2$. 
Note that solving $a^2+b^2=(x^2-y^2)^2$ is another case of Pythagoras: $a=k^2-l^2, b=2kl, x^2-y^2=k^2+l^2$. Note that $x^2-y^2=k^2+l^2$ is the same as your original equation to which you have a solution so we can build another from there: $x=3,k=2,l=1,y=2$ (Note we need $k&gt;l$).</p> <p>So with $k=2,l=1,y=2,x=3$ then we get $a=3,b=4,c=12,d=13$ giving $3^2+4^2+12^2=13^2$.</p> <p>So in general if you have a solution $(a,b,c,d)$ (with $a&lt;b$) you can form a new solution: $(b^2-a^2,2ab,2cd,c^2+d^2)$.</p> Answer 31: <p>There is a parametrization of all primitive soutions, $\gcd(a,b,c,d)=1.$ Take any four integers $m,n,p,q$ with $\gcd(m,n,p,q)=1$ and $m+n+p+q$ odd. Those are the only rules. Then $$ a = m^2 + n^2 - p^2 - q^2, $$ $$ b = 2 (mq+np), $$ $$ c = 2(nq -mp), $$ $$ d = m^2 + n^2 + p^2 + q^2 $$ satisfy $$ a^2 + b^2 + c^2 = d^2. $$ This is simply quaternion multiplication. It seems likely that the formula was known to Euler, who wrote out the multiplication formula for sums of four squares, which amounts to quaternion multiplication. However, people mostly mention this item as due to the number theorist V. A. Lebesgue in the mid 1800's. Apparently the first correct proof that this gives all primitive solutions was Dickson in 1920, so I am not sure what Lebesgue did.</p> <p>gp-pari</p> <pre><code>? a = m^2 + n^2 - p^2 - q^2 %6 = m^2 + (n^2 + (-p^2 - q^2)) ? b = 2 *( m * q + n * p) %7 = 2*q*m + 2*p*n ? c = 2 * ( n * q - m * p) %8 = -2*p*m + 2*q*n ? d = m^2 + n^2 + p^2 + q^2 %9 = m^2 + (n^2 + (p^2 + q^2)) ? a^2 + b^2 + c^2 - d^2 %10 = 0 ? </code></pre> Answer 32: <p>I'll give one obvious example that I found, and that is</p> <hr> <p><em>If</em> $$p^2+q^2=2rs\tag1$$ <em>Then</em>$$(p+q)^2+(p-q)^2+(r-s)^2=(r+s)^2\tag2$$</p> <hr> <p>For example, if $(p,q,r,s)=(4,2,5,2)$, then$$6^2+2^2+3^2=7^2\tag3$$</p> <hr> <p>Interestingly enough, Euler did the same work for these sums of squares and he gave an explicit formula. 
$$(m^2+n^2-p^2-q^2)^2+4(mq+np)^2+4(nq-mp)^2=(m^2+n^2+p^2+q^2)^2\tag4$$ For arbitrary $m,n,p,q\in\mathbb{Z}$.</p> Answer 33: <p>As you noticed, given a solution you can multiply it by any natural greater than $1$ to get another solution, so we can concentrate on solutions that do not have a common divisor. You can just keep finding solutions by hand. A more systematic approach is to write $a^2+b^2=d^2-c^2=(d+c)(d-c)$ The two terms on the right are either both odd or both even. You can just choose $a,b$ with one odd and one even and $a \gt b$, then factor $a^2+b^2$. So $a=4,b=1$ gives $d+c=17, d-c=1$ and you can solve the equations to get $d=9,c=8$ and a new solution is $4,1,8,9$. You can always use the factor with $1$ giving $a,b,\frac 12(a^2+b^2-1),\frac12(a^2+b^2+1)$ as a fundamental solution whenever $a,b$ are of opposite parity. If $a^2+b^2$ is not prime or the square of a prime there will be other solutions. So for $a=8, b=1$ we have $a^2+b^2=65=5 \cdot 13$, and we find $8,1,4,9$ as a solution.So $a=4,b=1$ gives $d+c=17, d-c=1$ and you can solve the equations to get $d=9,c=8$. Unfortunately here it give a permutation of the solution we already have. If we choose $a=9,b=2$ we get $a^2+b^2=85=5 \cdot 17$ and find the solution $9,2,6,11$ as well as $9,2,42,43$</p> Answer 34: <p>It is well known that the identity $(2n)^2+(n^2-1)^2=(n^2+1)^2$ produces a class of Pythagorean Triples. We can apply the same algorithm twice by replacing $n\to n^2+1$, as $$\color{Green}{(4n)^2+(2(n^2-1))^2+((n^2+1)^2-1)^2=((n^2+1)^2+1)^2}\,\,\,\,\forall n\in\Bbb{N}.$$ In fact this solution has an interesting property which does not holds for others :)<br> Also there is another similar identity which looks rather simple, $$\color{red}{n^2+(n+1)^2+(n^2+n)^2=(n^2+n+1)^2}\,\,\,\,\forall n\in\Bbb{N}.$$</p> Answer 35: <p><strong>Neater solution. 
(not obvious where I got formula from)</strong></p> <p>Note that: $(b^2-a^2)^2+(2ab)^2+(2cd)^2-(c^2+d^2)^2=(a^2+b^2+c^2-d^2)(a^2+b^2-c^2+d^2)$.</p> <p>So if you have a solution $(a,b,c,d)$ to $a^2+b^2+c^2-d^2=0$ then another solution is:</p> <p>$$\bigg(b^2-a^2,2ab,2cd,c^2+d^2\bigg)$$</p> <p>Applying this to your solution $(1,2,2,3)$ gives $(3,4,12,13)$.</p> <p>Note the order of terms could be juggled around for further applications of this. The only requirement is that $b&gt;a$.</p> <p>$(3,4,12,13)$ becomes $(24,12,313)$.</p> <p>$(3,12,4,13)$ becomes $(35,72,104,185)$.</p> <p>$(4,12,3,13)$ becomes $(128,96,78,178)$.</p> <p><strong>Ugly Solution. (more obvious where formula comes from)</strong></p> <p>Pythagoras's theorem is a related problem only involving three terms: $a^2+b^2=c^2$. It has a fundamental solution of $x^2-y^2, 2xy, x^2+y^2$. We can built upon this.</p> <p>If we tackle your problem $a^2+b^2+c^2=d^2$ then we can start with $a^2+b^2=(x^2-y^2)^2, c=2xy, d=x^2+y^2$. Note that solving $a^2+b^2=(x^2-y^2)^2$ is another case of Pythagoras: $a=k^2-l^2, b=2kl, x^2-y^2=k^2+l^2$. Note that $x^2-y^2=k^2+l^2$ is the same as your original equation to which you have a solution so we can build another from there: $x=3,k=2,l=1,y=2$ (Note we need $k&gt;l$).</p> <p>So with $k=2,l=1,y=2,x=3$ then we get $a=3,b=4,c=12,d=13$ giving $3^2+4^2+12^2=13^2$.</p> <p>So in general if you have a solution $(a,b,c,d)$ (with $a&lt;b$) you can form a new solution: $(b^2-a^2,2ab,2cd,c^2+d^2)$.</p> Answer 36: <p>There is a parametrization of all primitive soutions, $\gcd(a,b,c,d)=1.$ Take any four integers $m,n,p,q$ with $\gcd(m,n,p,q)=1$ and $m+n+p+q$ odd. Those are the only rules. Then $$ a = m^2 + n^2 - p^2 - q^2, $$ $$ b = 2 (mq+np), $$ $$ c = 2(nq -mp), $$ $$ d = m^2 + n^2 + p^2 + q^2 $$ satisfy $$ a^2 + b^2 + c^2 = d^2. $$ This is simply quaternion multiplication. 
It seems likely that the formula was known to Euler, who wrote out the multiplication formula for sums of four squares, which amounts to quaternion multiplication. However, people mostly mention this item as due to the number theorist V. A. Lebesgue in the mid 1800's. Apparently the first correct proof that this gives all primitive solutions was Dickson in 1920, so I am not sure what Lebesgue did.</p> <p>gp-pari</p> <pre><code>? a = m^2 + n^2 - p^2 - q^2 %6 = m^2 + (n^2 + (-p^2 - q^2)) ? b = 2 *( m * q + n * p) %7 = 2*q*m + 2*p*n ? c = 2 * ( n * q - m * p) %8 = -2*p*m + 2*q*n ? d = m^2 + n^2 + p^2 + q^2 %9 = m^2 + (n^2 + (p^2 + q^2)) ? a^2 + b^2 + c^2 - d^2 %10 = 0 ? </code></pre> Answer 37: <p>I'll give one obvious example that I found, and that is</p> <hr> <p><em>If</em> $$p^2+q^2=2rs\tag1$$ <em>Then</em>$$(p+q)^2+(p-q)^2+(r-s)^2=(r+s)^2\tag2$$</p> <hr> <p>For example, if $(p,q,r,s)=(4,2,5,2)$, then$$6^2+2^2+3^2=7^2\tag3$$</p> <hr> <p>Interestingly enough, Euler did the same work for these sums of squares and he gave an explicit formula. $$(m^2+n^2-p^2-q^2)^2+4(mq+np)^2+4(nq-mp)^2=(m^2+n^2+p^2+q^2)^2\tag4$$ For arbitrary $m,n,p,q\in\mathbb{Z}$.</p> Answer 38: <p>As you noticed, given a solution you can multiply it by any natural greater than $1$ to get another solution, so we can concentrate on solutions that do not have a common divisor. You can just keep finding solutions by hand. A more systematic approach is to write $a^2+b^2=d^2-c^2=(d+c)(d-c)$ The two terms on the right are either both odd or both even. You can just choose $a,b$ with one odd and one even and $a \gt b$, then factor $a^2+b^2$. So $a=4,b=1$ gives $d+c=17, d-c=1$ and you can solve the equations to get $d=9,c=8$ and a new solution is $4,1,8,9$. You can always use the factor with $1$ giving $a,b,\frac 12(a^2+b^2-1),\frac12(a^2+b^2+1)$ as a fundamental solution whenever $a,b$ are of opposite parity. If $a^2+b^2$ is not prime or the square of a prime there will be other solutions. 
So for $a=8, b=1$ we have $a^2+b^2=65=5 \cdot 13$, and we find $8,1,4,9$ as a solution.So $a=4,b=1$ gives $d+c=17, d-c=1$ and you can solve the equations to get $d=9,c=8$. Unfortunately here it give a permutation of the solution we already have. If we choose $a=9,b=2$ we get $a^2+b^2=85=5 \cdot 17$ and find the solution $9,2,6,11$ as well as $9,2,42,43$</p> Answer 39: <p>It is well known that the identity $(2n)^2+(n^2-1)^2=(n^2+1)^2$ produces a class of Pythagorean Triples. We can apply the same algorithm twice by replacing $n\to n^2+1$, as $$\color{Green}{(4n)^2+(2(n^2-1))^2+((n^2+1)^2-1)^2=((n^2+1)^2+1)^2}\,\,\,\,\forall n\in\Bbb{N}.$$ In fact this solution has an interesting property which does not holds for others :)<br> Also there is another similar identity which looks rather simple, $$\color{red}{n^2+(n+1)^2+(n^2+n)^2=(n^2+n+1)^2}\,\,\,\,\forall n\in\Bbb{N}.$$</p> Answer 40: <p><strong>Neater solution. (not obvious where I got formula from)</strong></p> <p>Note that: $(b^2-a^2)^2+(2ab)^2+(2cd)^2-(c^2+d^2)^2=(a^2+b^2+c^2-d^2)(a^2+b^2-c^2+d^2)$.</p> <p>So if you have a solution $(a,b,c,d)$ to $a^2+b^2+c^2-d^2=0$ then another solution is:</p> <p>$$\bigg(b^2-a^2,2ab,2cd,c^2+d^2\bigg)$$</p> <p>Applying this to your solution $(1,2,2,3)$ gives $(3,4,12,13)$.</p> <p>Note the order of terms could be juggled around for further applications of this. The only requirement is that $b&gt;a$.</p> <p>$(3,4,12,13)$ becomes $(24,12,313)$.</p> <p>$(3,12,4,13)$ becomes $(35,72,104,185)$.</p> <p>$(4,12,3,13)$ becomes $(128,96,78,178)$.</p> <p><strong>Ugly Solution. (more obvious where formula comes from)</strong></p> <p>Pythagoras's theorem is a related problem only involving three terms: $a^2+b^2=c^2$. It has a fundamental solution of $x^2-y^2, 2xy, x^2+y^2$. We can built upon this.</p> <p>If we tackle your problem $a^2+b^2+c^2=d^2$ then we can start with $a^2+b^2=(x^2-y^2)^2, c=2xy, d=x^2+y^2$. 
Note that solving $a^2+b^2=(x^2-y^2)^2$ is another case of Pythagoras: $a=k^2-l^2, b=2kl, x^2-y^2=k^2+l^2$. Note that $x^2-y^2=k^2+l^2$ is the same as your original equation to which you have a solution so we can build another from there: $x=3,k=2,l=1,y=2$ (Note we need $k&gt;l$).</p> <p>So with $k=2,l=1,y=2,x=3$ then we get $a=3,b=4,c=12,d=13$ giving $3^2+4^2+12^2=13^2$.</p> <p>So in general if you have a solution $(a,b,c,d)$ (with $a&lt;b$) you can form a new solution: $(b^2-a^2,2ab,2cd,c^2+d^2)$.</p> Answer 41: <p>There is a parametrization of all primitive soutions, $\gcd(a,b,c,d)=1.$ Take any four integers $m,n,p,q$ with $\gcd(m,n,p,q)=1$ and $m+n+p+q$ odd. Those are the only rules. Then $$ a = m^2 + n^2 - p^2 - q^2, $$ $$ b = 2 (mq+np), $$ $$ c = 2(nq -mp), $$ $$ d = m^2 + n^2 + p^2 + q^2 $$ satisfy $$ a^2 + b^2 + c^2 = d^2. $$ This is simply quaternion multiplication. It seems likely that the formula was known to Euler, who wrote out the multiplication formula for sums of four squares, which amounts to quaternion multiplication. However, people mostly mention this item as due to the number theorist V. A. Lebesgue in the mid 1800's. Apparently the first correct proof that this gives all primitive solutions was Dickson in 1920, so I am not sure what Lebesgue did.</p> <p>gp-pari</p> <pre><code>? a = m^2 + n^2 - p^2 - q^2 %6 = m^2 + (n^2 + (-p^2 - q^2)) ? b = 2 *( m * q + n * p) %7 = 2*q*m + 2*p*n ? c = 2 * ( n * q - m * p) %8 = -2*p*m + 2*q*n ? d = m^2 + n^2 + p^2 + q^2 %9 = m^2 + (n^2 + (p^2 + q^2)) ? a^2 + b^2 + c^2 - d^2 %10 = 0 ? </code></pre> Answer 42: <p>I'll give one obvious example that I found, and that is</p> <hr> <p><em>If</em> $$p^2+q^2=2rs\tag1$$ <em>Then</em>$$(p+q)^2+(p-q)^2+(r-s)^2=(r+s)^2\tag2$$</p> <hr> <p>For example, if $(p,q,r,s)=(4,2,5,2)$, then$$6^2+2^2+3^2=7^2\tag3$$</p> <hr> <p>Interestingly enough, Euler did the same work for these sums of squares and he gave an explicit formula. 
$$(m^2+n^2-p^2-q^2)^2+4(mq+np)^2+4(nq-mp)^2=(m^2+n^2+p^2+q^2)^2\tag4$$ For arbitrary $m,n,p,q\in\mathbb{Z}$.</p> Answer 43: <p>As you noticed, given a solution you can multiply it by any natural greater than $1$ to get another solution, so we can concentrate on solutions that do not have a common divisor. You can just keep finding solutions by hand. A more systematic approach is to write $a^2+b^2=d^2-c^2=(d+c)(d-c)$ The two terms on the right are either both odd or both even. You can just choose $a,b$ with one odd and one even and $a \gt b$, then factor $a^2+b^2$. So $a=4,b=1$ gives $d+c=17, d-c=1$ and you can solve the equations to get $d=9,c=8$ and a new solution is $4,1,8,9$. You can always use the factor with $1$ giving $a,b,\frac 12(a^2+b^2-1),\frac12(a^2+b^2+1)$ as a fundamental solution whenever $a,b$ are of opposite parity. If $a^2+b^2$ is not prime or the square of a prime there will be other solutions. So for $a=8, b=1$ we have $a^2+b^2=65=5 \cdot 13$, and we find $8,1,4,9$ as a solution.So $a=4,b=1$ gives $d+c=17, d-c=1$ and you can solve the equations to get $d=9,c=8$. Unfortunately here it give a permutation of the solution we already have. If we choose $a=9,b=2$ we get $a^2+b^2=85=5 \cdot 17$ and find the solution $9,2,6,11$ as well as $9,2,42,43$</p> Answer 44: <p>It is well known that the identity $(2n)^2+(n^2-1)^2=(n^2+1)^2$ produces a class of Pythagorean Triples. We can apply the same algorithm twice by replacing $n\to n^2+1$, as $$\color{Green}{(4n)^2+(2(n^2-1))^2+((n^2+1)^2-1)^2=((n^2+1)^2+1)^2}\,\,\,\,\forall n\in\Bbb{N}.$$ In fact this solution has an interesting property which does not holds for others :)<br> Also there is another similar identity which looks rather simple, $$\color{red}{n^2+(n+1)^2+(n^2+n)^2=(n^2+n+1)^2}\,\,\,\,\forall n\in\Bbb{N}.$$</p> Answer 45: <p><strong>Neater solution. 
(not obvious where I got formula from)</strong></p> <p>Note that: $(b^2-a^2)^2+(2ab)^2+(2cd)^2-(c^2+d^2)^2=(a^2+b^2+c^2-d^2)(a^2+b^2-c^2+d^2)$.</p> <p>So if you have a solution $(a,b,c,d)$ to $a^2+b^2+c^2-d^2=0$ then another solution is:</p> <p>$$\bigg(b^2-a^2,2ab,2cd,c^2+d^2\bigg)$$</p> <p>Applying this to your solution $(1,2,2,3)$ gives $(3,4,12,13)$.</p> <p>Note the order of terms could be juggled around for further applications of this. The only requirement is that $b&gt;a$.</p> <p>$(3,4,12,13)$ becomes $(24,12,313)$.</p> <p>$(3,12,4,13)$ becomes $(35,72,104,185)$.</p> <p>$(4,12,3,13)$ becomes $(128,96,78,178)$.</p> <p><strong>Ugly Solution. (more obvious where formula comes from)</strong></p> <p>Pythagoras's theorem is a related problem only involving three terms: $a^2+b^2=c^2$. It has a fundamental solution of $x^2-y^2, 2xy, x^2+y^2$. We can built upon this.</p> <p>If we tackle your problem $a^2+b^2+c^2=d^2$ then we can start with $a^2+b^2=(x^2-y^2)^2, c=2xy, d=x^2+y^2$. Note that solving $a^2+b^2=(x^2-y^2)^2$ is another case of Pythagoras: $a=k^2-l^2, b=2kl, x^2-y^2=k^2+l^2$. Note that $x^2-y^2=k^2+l^2$ is the same as your original equation to which you have a solution so we can build another from there: $x=3,k=2,l=1,y=2$ (Note we need $k&gt;l$).</p> <p>So with $k=2,l=1,y=2,x=3$ then we get $a=3,b=4,c=12,d=13$ giving $3^2+4^2+12^2=13^2$.</p> <p>So in general if you have a solution $(a,b,c,d)$ (with $a&lt;b$) you can form a new solution: $(b^2-a^2,2ab,2cd,c^2+d^2)$.</p> Answer 46: <p>There is a parametrization of all primitive soutions, $\gcd(a,b,c,d)=1.$ Take any four integers $m,n,p,q$ with $\gcd(m,n,p,q)=1$ and $m+n+p+q$ odd. Those are the only rules. Then $$ a = m^2 + n^2 - p^2 - q^2, $$ $$ b = 2 (mq+np), $$ $$ c = 2(nq -mp), $$ $$ d = m^2 + n^2 + p^2 + q^2 $$ satisfy $$ a^2 + b^2 + c^2 = d^2. $$ This is simply quaternion multiplication. 
It seems likely that the formula was known to Euler, who wrote out the multiplication formula for sums of four squares, which amounts to quaternion multiplication. However, people mostly mention this item as due to the number theorist V. A. Lebesgue in the mid 1800's. Apparently the first correct proof that this gives all primitive solutions was Dickson in 1920, so I am not sure what Lebesgue did.</p> <p>gp-pari</p> <pre><code>? a = m^2 + n^2 - p^2 - q^2 %6 = m^2 + (n^2 + (-p^2 - q^2)) ? b = 2 *( m * q + n * p) %7 = 2*q*m + 2*p*n ? c = 2 * ( n * q - m * p) %8 = -2*p*m + 2*q*n ? d = m^2 + n^2 + p^2 + q^2 %9 = m^2 + (n^2 + (p^2 + q^2)) ? a^2 + b^2 + c^2 - d^2 %10 = 0 ? </code></pre> Answer 47: <p>I'll give one obvious example that I found, and that is</p> <hr> <p><em>If</em> $$p^2+q^2=2rs\tag1$$ <em>Then</em>$$(p+q)^2+(p-q)^2+(r-s)^2=(r+s)^2\tag2$$</p> <hr> <p>For example, if $(p,q,r,s)=(4,2,5,2)$, then$$6^2+2^2+3^2=7^2\tag3$$</p> <hr> <p>Interestingly enough, Euler did the same work for these sums of squares and he gave an explicit formula. $$(m^2+n^2-p^2-q^2)^2+4(mq+np)^2+4(nq-mp)^2=(m^2+n^2+p^2+q^2)^2\tag4$$ For arbitrary $m,n,p,q\in\mathbb{Z}$.</p> Answer 48: <p>As you noticed, given a solution you can multiply it by any natural greater than $1$ to get another solution, so we can concentrate on solutions that do not have a common divisor. You can just keep finding solutions by hand. A more systematic approach is to write $a^2+b^2=d^2-c^2=(d+c)(d-c)$ The two terms on the right are either both odd or both even. You can just choose $a,b$ with one odd and one even and $a \gt b$, then factor $a^2+b^2$. So $a=4,b=1$ gives $d+c=17, d-c=1$ and you can solve the equations to get $d=9,c=8$ and a new solution is $4,1,8,9$. You can always use the factor with $1$ giving $a,b,\frac 12(a^2+b^2-1),\frac12(a^2+b^2+1)$ as a fundamental solution whenever $a,b$ are of opposite parity. If $a^2+b^2$ is not prime or the square of a prime there will be other solutions. 
So for $a=8, b=1$ we have $a^2+b^2=65=5 \cdot 13$, and we find $8,1,4,9$ as a solution.So $a=4,b=1$ gives $d+c=17, d-c=1$ and you can solve the equations to get $d=9,c=8$. Unfortunately here it give a permutation of the solution we already have. If we choose $a=9,b=2$ we get $a^2+b^2=85=5 \cdot 17$ and find the solution $9,2,6,11$ as well as $9,2,42,43$</p> Answer 49: <p>It is well known that the identity $(2n)^2+(n^2-1)^2=(n^2+1)^2$ produces a class of Pythagorean Triples. We can apply the same algorithm twice by replacing $n\to n^2+1$, as $$\color{Green}{(4n)^2+(2(n^2-1))^2+((n^2+1)^2-1)^2=((n^2+1)^2+1)^2}\,\,\,\,\forall n\in\Bbb{N}.$$ In fact this solution has an interesting property which does not holds for others :)<br> Also there is another similar identity which looks rather simple, $$\color{red}{n^2+(n+1)^2+(n^2+n)^2=(n^2+n+1)^2}\,\,\,\,\forall n\in\Bbb{N}.$$</p> Answer 50: <p><strong>Neater solution. (not obvious where I got formula from)</strong></p> <p>Note that: $(b^2-a^2)^2+(2ab)^2+(2cd)^2-(c^2+d^2)^2=(a^2+b^2+c^2-d^2)(a^2+b^2-c^2+d^2)$.</p> <p>So if you have a solution $(a,b,c,d)$ to $a^2+b^2+c^2-d^2=0$ then another solution is:</p> <p>$$\bigg(b^2-a^2,2ab,2cd,c^2+d^2\bigg)$$</p> <p>Applying this to your solution $(1,2,2,3)$ gives $(3,4,12,13)$.</p> <p>Note the order of terms could be juggled around for further applications of this. The only requirement is that $b&gt;a$.</p> <p>$(3,4,12,13)$ becomes $(24,12,313)$.</p> <p>$(3,12,4,13)$ becomes $(35,72,104,185)$.</p> <p>$(4,12,3,13)$ becomes $(128,96,78,178)$.</p> <p><strong>Ugly Solution. (more obvious where formula comes from)</strong></p> <p>Pythagoras's theorem is a related problem only involving three terms: $a^2+b^2=c^2$. It has a fundamental solution of $x^2-y^2, 2xy, x^2+y^2$. We can built upon this.</p> <p>If we tackle your problem $a^2+b^2+c^2=d^2$ then we can start with $a^2+b^2=(x^2-y^2)^2, c=2xy, d=x^2+y^2$. 
Note that solving $a^2+b^2=(x^2-y^2)^2$ is another case of Pythagoras: $a=k^2-l^2, b=2kl, x^2-y^2=k^2+l^2$. Note that $x^2-y^2=k^2+l^2$ is the same as your original equation to which you have a solution so we can build another from there: $x=3,k=2,l=1,y=2$ (Note we need $k&gt;l$).</p> <p>So with $k=2,l=1,y=2,x=3$ then we get $a=3,b=4,c=12,d=13$ giving $3^2+4^2+12^2=13^2$.</p> <p>So in general if you have a solution $(a,b,c,d)$ (with $a&lt;b$) you can form a new solution: $(b^2-a^2,2ab,2cd,c^2+d^2)$.</p> Answer 51: <p>There is a parametrization of all primitive soutions, $\gcd(a,b,c,d)=1.$ Take any four integers $m,n,p,q$ with $\gcd(m,n,p,q)=1$ and $m+n+p+q$ odd. Those are the only rules. Then $$ a = m^2 + n^2 - p^2 - q^2, $$ $$ b = 2 (mq+np), $$ $$ c = 2(nq -mp), $$ $$ d = m^2 + n^2 + p^2 + q^2 $$ satisfy $$ a^2 + b^2 + c^2 = d^2. $$ This is simply quaternion multiplication. It seems likely that the formula was known to Euler, who wrote out the multiplication formula for sums of four squares, which amounts to quaternion multiplication. However, people mostly mention this item as due to the number theorist V. A. Lebesgue in the mid 1800's. Apparently the first correct proof that this gives all primitive solutions was Dickson in 1920, so I am not sure what Lebesgue did.</p> <p>gp-pari</p> <pre><code>? a = m^2 + n^2 - p^2 - q^2 %6 = m^2 + (n^2 + (-p^2 - q^2)) ? b = 2 *( m * q + n * p) %7 = 2*q*m + 2*p*n ? c = 2 * ( n * q - m * p) %8 = -2*p*m + 2*q*n ? d = m^2 + n^2 + p^2 + q^2 %9 = m^2 + (n^2 + (p^2 + q^2)) ? a^2 + b^2 + c^2 - d^2 %10 = 0 ? </code></pre> Answer 52: <p>I'll give one obvious example that I found, and that is</p> <hr> <p><em>If</em> $$p^2+q^2=2rs\tag1$$ <em>Then</em>$$(p+q)^2+(p-q)^2+(r-s)^2=(r+s)^2\tag2$$</p> <hr> <p>For example, if $(p,q,r,s)=(4,2,5,2)$, then$$6^2+2^2+3^2=7^2\tag3$$</p> <hr> <p>Interestingly enough, Euler did the same work for these sums of squares and he gave an explicit formula. 
$$(m^2+n^2-p^2-q^2)^2+4(mq+np)^2+4(nq-mp)^2=(m^2+n^2+p^2+q^2)^2\tag4$$ For arbitrary $m,n,p,q\in\mathbb{Z}$.</p> Answer 53: <p>As you noticed, given a solution you can multiply it by any natural greater than $1$ to get another solution, so we can concentrate on solutions that do not have a common divisor. You can just keep finding solutions by hand. A more systematic approach is to write $a^2+b^2=d^2-c^2=(d+c)(d-c)$ The two terms on the right are either both odd or both even. You can just choose $a,b$ with one odd and one even and $a \gt b$, then factor $a^2+b^2$. So $a=4,b=1$ gives $d+c=17, d-c=1$ and you can solve the equations to get $d=9,c=8$ and a new solution is $4,1,8,9$. You can always use the factor with $1$ giving $a,b,\frac 12(a^2+b^2-1),\frac12(a^2+b^2+1)$ as a fundamental solution whenever $a,b$ are of opposite parity. If $a^2+b^2$ is not prime or the square of a prime there will be other solutions. So for $a=8, b=1$ we have $a^2+b^2=65=5 \cdot 13$, and we find $8,1,4,9$ as a solution.So $a=4,b=1$ gives $d+c=17, d-c=1$ and you can solve the equations to get $d=9,c=8$. Unfortunately here it give a permutation of the solution we already have. If we choose $a=9,b=2$ we get $a^2+b^2=85=5 \cdot 17$ and find the solution $9,2,6,11$ as well as $9,2,42,43$</p> Answer 54: <p>It is well known that the identity $(2n)^2+(n^2-1)^2=(n^2+1)^2$ produces a class of Pythagorean Triples. We can apply the same algorithm twice by replacing $n\to n^2+1$, as $$\color{Green}{(4n)^2+(2(n^2-1))^2+((n^2+1)^2-1)^2=((n^2+1)^2+1)^2}\,\,\,\,\forall n\in\Bbb{N}.$$ In fact this solution has an interesting property which does not holds for others :)<br> Also there is another similar identity which looks rather simple, $$\color{red}{n^2+(n+1)^2+(n^2+n)^2=(n^2+n+1)^2}\,\,\,\,\forall n\in\Bbb{N}.$$</p> Answer 55: <p><strong>Neater solution. 
(not obvious where I got formula from)</strong></p> <p>Note that: $(b^2-a^2)^2+(2ab)^2+(2cd)^2-(c^2+d^2)^2=(a^2+b^2+c^2-d^2)(a^2+b^2-c^2+d^2)$.</p> <p>So if you have a solution $(a,b,c,d)$ to $a^2+b^2+c^2-d^2=0$ then another solution is:</p> <p>$$\bigg(b^2-a^2,2ab,2cd,c^2+d^2\bigg)$$</p> <p>Applying this to your solution $(1,2,2,3)$ gives $(3,4,12,13)$.</p> <p>Note the order of terms could be juggled around for further applications of this. The only requirement is that $b&gt;a$.</p> <p>$(3,4,12,13)$ becomes $(7,24,312,313)$.</p> <p>$(3,12,4,13)$ becomes $(135,72,104,185)$.</p> <p>$(4,12,3,13)$ becomes $(128,96,78,178)$.</p> <p><strong>Ugly Solution. (more obvious where formula comes from)</strong></p> <p>Pythagoras's theorem is a related problem only involving three terms: $a^2+b^2=c^2$. It has a fundamental solution of $x^2-y^2, 2xy, x^2+y^2$. We can build upon this.</p> <p>If we tackle your problem $a^2+b^2+c^2=d^2$ then we can start with $a^2+b^2=(x^2-y^2)^2, c=2xy, d=x^2+y^2$. Note that solving $a^2+b^2=(x^2-y^2)^2$ is another case of Pythagoras: $a=k^2-l^2, b=2kl, x^2-y^2=k^2+l^2$. Note that $x^2-y^2=k^2+l^2$ is the same as your original equation to which you have a solution so we can build another from there: $x=3,k=2,l=1,y=2$ (Note we need $k&gt;l$).</p> <p>So with $k=2,l=1,y=2,x=3$ then we get $a=3,b=4,c=12,d=13$ giving $3^2+4^2+12^2=13^2$.</p> <p>So in general if you have a solution $(a,b,c,d)$ (with $a&lt;b$) you can form a new solution: $(b^2-a^2,2ab,2cd,c^2+d^2)$.</p>
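The two constructions in the answers above — the step $(a,b,c,d)\mapsto(b^2-a^2,\,2ab,\,2cd,\,c^2+d^2)$ and the four-parameter formula attributed to V. A. Lebesgue — are easy to check numerically. A small Python sketch (function names are mine, not from the answers):

```python
def next_quadruple(a, b, c, d):
    # Given a^2 + b^2 + c^2 = d^2 with b > a, this yields
    # another solution of the same equation.
    return (b * b - a * a, 2 * a * b, 2 * c * d, c * c + d * d)

def lebesgue(m, n, p, q):
    # Lebesgue-style parametrization: any integers m, n, p, q
    # give (a, b, c, d) with a^2 + b^2 + c^2 = d^2.
    return (m * m + n * n - p * p - q * q,
            2 * (m * q + n * p),
            2 * (n * q - m * p),
            m * m + n * n + p * p + q * q)

def is_solution(t):
    a, b, c, d = t
    return a * a + b * b + c * c == d * d

print(next_quadruple(1, 2, 2, 3))                 # (3, 4, 12, 13)
print(next_quadruple(3, 4, 12, 13))               # (7, 24, 312, 313)
print(is_solution(lebesgue(2, 3, 1, 1)))          # True
```

Iterating `next_quadruple` (reordering the first three entries as needed so that $b>a$) reproduces the chains of solutions described above.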
How do you define sequences that converge to infinity?
2,089,169
7
https://math.stackexchange.com/questions/2089169/how-do-you-define-sequences-that-converge-to-infinity
<p>For instance consider the sequence $\{1,0,2,0,3,0,4,0,..\}$ Intuitively we know that the sequence converges to $\infty$ but how do we check that rigorously. If I imitate the formal definition of convergence then I believe that we can at best come up with something like this: </p> <p>$(x_n)\to\infty$ if for any $\epsilon&gt;0$ there exists $N\in\mathbb{N}$ such that for $n\geq N$ we have $x_n&gt;\epsilon.$</p> <p>Now this definiton does help us in proving the convergence of some sequences such as $x_n=\sqrt{n}$ because in this case we can let $N\geq \epsilon^2.$ However this definition fails to show that the aforementioned sequence $\{1,0,2,0,3,0,4,0,..\}$ converges to $\infty$. I am thus guessing that "there exists" a better definition out there. So please suggest me some references or maybe provide me with a definition that is able to take care of convergence to $\infty$ in general. </p>
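The tentative definition in the question can be exercised numerically. A rough Python sketch (the helper name and bounds are my own; a finite truncation can only suggest, never prove, that a sequence tends to $\infty$):

```python
import math

def exceeds_after(seq, bound):
    """Least N such that seq[n] > bound for all n >= N within this
    truncation, or None if the truncation ends with a failing term."""
    last_bad = -1  # last index where the bound is not exceeded
    for n, x in enumerate(seq):
        if x <= bound:
            last_bad = n
    return last_bad + 1 if last_bad + 1 < len(seq) else None

# x_n = sqrt(n): for bound 10 a finite cutoff N works (N >= 101 here,
# matching the N >= epsilon^2 argument in the question).
sqrt_seq = [math.sqrt(n) for n in range(1, 200)]
print(exceeds_after(sqrt_seq, 10.0))

# The interleaved sequence 1,0,2,0,3,0,...: zeros recur forever,
# so no cutoff works for any positive bound.
interleaved = [x for n in range(1, 100) for x in (n, 0)]
print(exceeds_after(interleaved, 0.5))  # None
```

This illustrates why the proposed definition already behaves correctly: $\sqrt{n}$ satisfies it, while $1,0,2,0,3,0,\ldots$ fails it for every positive bound, i.e. the sequence does not tend to $\infty$.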
[ { "answer_id": 2089184, "score": 10, "text": "<p>The general idea behind limits converging to $\\infty$ is:</p>\n\n<ul>\n<li>There is a topological notion of limit in terms of open sets or open neighborhoods rather than in terms of distance</li>\n<li>$+\\infty$ and $-\\infty$ are best understood as points in the extended real numbers </li>\n</ul>\n\n<p>The definition you cited is, in fact, equivalent to what it means for a real-valued sequence to converge to $+\\infty$ in the extended real numbers.</p>\n\n<hr>\n\n<p>The problem is that the sequence you consider <em>doesn't converge</em>. In the extended real numbers, it is a <em>divergent</em> sequence with two limit points: $0$ and $+\\infty$. In that regard, it's comparable to the sequence $0,1,0,1,0,1,\\ldots$.</p>\n\n<hr>\n\n<p>Without more examples or attempts at elaboration, I'm not sure what idea you have that you have incorrectly given the name \"converging to $\\infty$\". One possibility is that you simply have in mind the idea of a sequence being <em>unbounded above</em>. This is a sequence having $+\\infty$ as a limit point, or alternatively, something satisfying the property</p>\n\n<blockquote>\n <p>For all real $M$, there exists some $n$ such that $x_n &gt; M$</p>\n</blockquote>\n" }, { "answer_id": 2089176, "score": 3, "text": "<p>The usual definition is that for every real number $x$ there is a positive integer $N$ such that $a_m&gt;x$ for all $m\\geq N$.</p>\n\n<p>It is very similar to the usual definition of convergence, except that we ask that all numbers be sufficiently \"big\" after a point (instead of asking all numbers be sufficiently \"close\" to the limit after a point).</p>\n" }, { "answer_id": 2089180, "score": 2, "text": "<p>We say that the sequence $(x_n)$ <em>diverges</em> to infinity. 
This is therefore different from a sequence like $\\left((-1)^n\\right)$, which diverges but not to infinity.</p>\n\n<p>The formal definition you give for divergence to infinity is essentially correct.</p>\n" }, { "answer_id": 2089675, "score": 0, "text": "<p>Your definition is fine (even if \"converge to infinity\" is a sub-optimal choice of words since convergence to a finite limit implies many useful properties that tending to infinity does not), it is the sequence that does not converge. If a sequence converges (or tends to infinity), then every subsequence should converge to the same limit (or tend to infinity). This does not hold for the subsequence $\\{0,0,0,...\\}$ (which does not tend to infinity).</p>\n\n<p>The limit superior $\\left(\\limsup_{n \\rightarrow \\infty} x_n = \\lim_{n \\rightarrow \\infty} \\sup_{m \\ge n} x_n\\right)$ is infinite, but that is not sufficient for a claim of \"tends to infinity\". As Will R pointed out in a comment, the sequence $\\{ 1,1,1,{1\\over 2},1,{1\\over 3},1,{1\\over 4},...\\}$ does not converge to zero despite its limit inferior being zero. 
Another (necessary but insufficient for a claim of \"tends to infinity\") property of the sequence is that it is not bounded from above $\\left( \\forall M \\ \\exists n \\mid x_n \\gt M \\right)$ (as mentioned by Hurkyl)</p>\n\n<p>A possibly useful (although not perfect) way of thinking about the definition of tending to infinity $\\left( \\lim_{n \\rightarrow \\infty} x_n = \\infty \\Leftrightarrow \\forall k \\ \\exists N \\mid n \\gt N \\Rightarrow x_n \\gt k\\right)$ is \"every subsequence exceeds all bounds\".</p>\n\n<p>Sources: currently studying Computer Science</p>\n" }
This is a sequence having $+\\infty$ as a limit point, or alternatively, something satisfying the property</p>\n\n<blockquote>\n <p>For all real $M$, there exists some $n$ such that $x_n &gt; M$</p>\n</blockquote>\n" }, { "answer_id": 2089176, "score": 3, "text": "<p>The usual definition is that for every real number $x$ there is a positive integer $N$ such that $a_m&gt;x$ for all $m\\geq N$.</p>\n\n<p>It is very similar to the usual definition of convergence, instead that we ask that all numbers be sufficiently \"big\" after a point. (instead of asking all numbers be sufficiently \"close\" to the limit after a point).</p>\n" }, { "answer_id": 2089180, "score": 2, "text": "<p>We say that the sequence $(x_n)$ <em>diverges</em> to infinity. This is therefore different from a sequence like $\\left((-1)^n\\right)$, which diverges but not to infinity.</p>\n\n<p>The formal definition you give for divergence to infinity is essentially correct.</p>\n" }, { "answer_id": 2089675, "score": 0, "text": "<p>Your definition is fine (even if \"converge to infinity\" is a sub-optimal choice of words since convergence to a finite limit implies many useful properties that tending to infinity does not), it is the sequence that does not converge. If a sequence converges (or tends to infinity), then every subsequence should converge to the same limit (or tend to infinity). This does not hold for the subsequence $\\{0,0,0,...\\}$ (which does not tend to infinity).</p>\n\n<p>The limit superior $\\left(\\limsup_{n \\rightarrow \\infty} x_n = \\lim_{n \\rightarrow \\infty} \\sup_{m \\ge n} x_n\\right)$ is infinite, but that is not sufficient for a claim of \"tends to infinity\". As Will R pointed out in a comment, the sequence $\\{ 1,1,1,{1\\over 2},1,{1\\over 3},1,{1\\over 4},...\\}$ does not converge to zero despite its limit inferior being zero. 
Another (necessary but insufficient for a claim of \"tends to infinity\") property of the sequence is that it is not bounded from above $\\left( \\forall M \\ \\exists n \\mid x_n \\gt M \\right)$ (as mentioned by Hurkyl)</p>\n\n<p>A possibly useful (although not perfect) way of thinking about the definition of tending to infinity $\\left( \\lim_{n \\rightarrow \\infty} x_n = \\infty \\Leftrightarrow \\forall k \\ \\exists N \\mid n \\gt N \\Rightarrow x_n \\gt k\\right)$ is \"every subsequence exceeds all bounds\".</p>\n\n<p>Sources: currently studying Computer Science</p>\n" }, { "answer_id": 2089184, "score": 10, "text": "<p>The general idea behind limits converging to $\\infty$ are:</p>\n\n<ul>\n<li>There is a topological notion of limit in terms of open sets or open neighborhoods rather than in terms of distance</li>\n<li>$+\\infty$ and $-\\infty$ are best understood as points in the extended real numbers </li>\n</ul>\n\n<p>The definition you cited is, in fact, equivalent to what it means for a real-valued sequence to converge to $+\\infty$ in the extended real numbers.</p>\n\n<hr>\n\n<p>The problem is that the sequence you consider <em>doesn't converge</em>. In the extended real numbers, it is a <em>divergent</em> sequence with two limit points: $0$ and $+\\infty$. In that regard, it's comparable to the sequence $0,1,0,1,0,1,\\ldots$.</p>\n\n<hr>\n\n<p>Without more examples or attempts at elaboration, I'm not sure what idea you have that you have incorrectly given the name \"converging to $\\infty$\". One possibility is that you simply have in mind the idea of a sequence being <em>unbounded above</em>. 
This is a sequence having $+\\infty$ as a limit point, or alternatively, something satisfying the property</p>\n\n<blockquote>\n <p>For all real $M$, there exists some $n$ such that $x_n &gt; M$</p>\n</blockquote>\n" }, { "answer_id": 2089176, "score": 3, "text": "<p>The usual definition is that for every real number $x$ there is a positive integer $N$ such that $a_m&gt;x$ for all $m\\geq N$.</p>\n\n<p>It is very similar to the usual definition of convergence, instead that we ask that all numbers be sufficiently \"big\" after a point. (instead of asking all numbers be sufficiently \"close\" to the limit after a point).</p>\n" }, { "answer_id": 2089180, "score": 2, "text": "<p>We say that the sequence $(x_n)$ <em>diverges</em> to infinity. This is therefore different from a sequence like $\\left((-1)^n\\right)$, which diverges but not to infinity.</p>\n\n<p>The formal definition you give for divergence to infinity is essentially correct.</p>\n" }, { "answer_id": 2089675, "score": 0, "text": "<p>Your definition is fine (even if \"converge to infinity\" is a sub-optimal choice of words since convergence to a finite limit implies many useful properties that tending to infinity does not), it is the sequence that does not converge. If a sequence converges (or tends to infinity), then every subsequence should converge to the same limit (or tend to infinity). This does not hold for the subsequence $\\{0,0,0,...\\}$ (which does not tend to infinity).</p>\n\n<p>The limit superior $\\left(\\limsup_{n \\rightarrow \\infty} x_n = \\lim_{n \\rightarrow \\infty} \\sup_{m \\ge n} x_n\\right)$ is infinite, but that is not sufficient for a claim of \"tends to infinity\". As Will R pointed out in a comment, the sequence $\\{ 1,1,1,{1\\over 2},1,{1\\over 3},1,{1\\over 4},...\\}$ does not converge to zero despite its limit inferior being zero. 
Another (necessary but insufficient for a claim of \"tends to infinity\") property of the sequence is that it is not bounded from above $\\left( \\forall M \\ \\exists n \\mid x_n \\gt M \\right)$ (as mentioned by Hurkyl)</p>\n\n<p>A possibly useful (although not perfect) way of thinking about the definition of tending to infinity $\\left( \\lim_{n \\rightarrow \\infty} x_n = \\infty \\Leftrightarrow \\forall k \\ \\exists N \\mid n \\gt N \\Rightarrow x_n \\gt k\\right)$ is \"every subsequence exceeds all bounds\".</p>\n\n<p>Sources: currently studying Computer Science</p>\n" }, { "answer_id": 2089184, "score": 10, "text": "<p>The general idea behind limits converging to $\\infty$ are:</p>\n\n<ul>\n<li>There is a topological notion of limit in terms of open sets or open neighborhoods rather than in terms of distance</li>\n<li>$+\\infty$ and $-\\infty$ are best understood as points in the extended real numbers </li>\n</ul>\n\n<p>The definition you cited is, in fact, equivalent to what it means for a real-valued sequence to converge to $+\\infty$ in the extended real numbers.</p>\n\n<hr>\n\n<p>The problem is that the sequence you consider <em>doesn't converge</em>. In the extended real numbers, it is a <em>divergent</em> sequence with two limit points: $0$ and $+\\infty$. In that regard, it's comparable to the sequence $0,1,0,1,0,1,\\ldots$.</p>\n\n<hr>\n\n<p>Without more examples or attempts at elaboration, I'm not sure what idea you have that you have incorrectly given the name \"converging to $\\infty$\". One possibility is that you simply have in mind the idea of a sequence being <em>unbounded above</em>. 
This is a sequence having $+\\infty$ as a limit point, or alternatively, something satisfying the property</p>\n\n<blockquote>\n <p>For all real $M$, there exists some $n$ such that $x_n &gt; M$</p>\n</blockquote>\n" }, { "answer_id": 2089176, "score": 3, "text": "<p>The usual definition is that for every real number $x$ there is a positive integer $N$ such that $a_m&gt;x$ for all $m\\geq N$.</p>\n\n<p>It is very similar to the usual definition of convergence, instead that we ask that all numbers be sufficiently \"big\" after a point. (instead of asking all numbers be sufficiently \"close\" to the limit after a point).</p>\n" }, { "answer_id": 2089180, "score": 2, "text": "<p>We say that the sequence $(x_n)$ <em>diverges</em> to infinity. This is therefore different from a sequence like $\\left((-1)^n\\right)$, which diverges but not to infinity.</p>\n\n<p>The formal definition you give for divergence to infinity is essentially correct.</p>\n" }, { "answer_id": 2089675, "score": 0, "text": "<p>Your definition is fine (even if \"converge to infinity\" is a sub-optimal choice of words since convergence to a finite limit implies many useful properties that tending to infinity does not), it is the sequence that does not converge. If a sequence converges (or tends to infinity), then every subsequence should converge to the same limit (or tend to infinity). This does not hold for the subsequence $\\{0,0,0,...\\}$ (which does not tend to infinity).</p>\n\n<p>The limit superior $\\left(\\limsup_{n \\rightarrow \\infty} x_n = \\lim_{n \\rightarrow \\infty} \\sup_{m \\ge n} x_n\\right)$ is infinite, but that is not sufficient for a claim of \"tends to infinity\". As Will R pointed out in a comment, the sequence $\\{ 1,1,1,{1\\over 2},1,{1\\over 3},1,{1\\over 4},...\\}$ does not converge to zero despite its limit inferior being zero. 
Another (necessary but insufficient for a claim of \"tends to infinity\") property of the sequence is that it is not bounded from above $\\left( \\forall M \\ \\exists n \\mid x_n \\gt M \\right)$ (as mentioned by Hurkyl)</p>\n\n<p>A possibly useful (although not perfect) way of thinking about the definition of tending to infinity $\\left( \\lim_{n \\rightarrow \\infty} x_n = \\infty \\Leftrightarrow \\forall k \\ \\exists N \\mid n \\gt N \\Rightarrow x_n \\gt k\\right)$ is \"every subsequence exceeds all bounds\".</p>\n\n<p>Sources: currently studying Computer Science</p>\n" }, { "answer_id": 2089184, "score": 10, "text": "<p>The general idea behind limits converging to $\\infty$ are:</p>\n\n<ul>\n<li>There is a topological notion of limit in terms of open sets or open neighborhoods rather than in terms of distance</li>\n<li>$+\\infty$ and $-\\infty$ are best understood as points in the extended real numbers </li>\n</ul>\n\n<p>The definition you cited is, in fact, equivalent to what it means for a real-valued sequence to converge to $+\\infty$ in the extended real numbers.</p>\n\n<hr>\n\n<p>The problem is that the sequence you consider <em>doesn't converge</em>. In the extended real numbers, it is a <em>divergent</em> sequence with two limit points: $0$ and $+\\infty$. In that regard, it's comparable to the sequence $0,1,0,1,0,1,\\ldots$.</p>\n\n<hr>\n\n<p>Without more examples or attempts at elaboration, I'm not sure what idea you have that you have incorrectly given the name \"converging to $\\infty$\". One possibility is that you simply have in mind the idea of a sequence being <em>unbounded above</em>. 
This is a sequence having $+\\infty$ as a limit point, or alternatively, something satisfying the property</p>\n\n<blockquote>\n <p>For all real $M$, there exists some $n$ such that $x_n &gt; M$</p>\n</blockquote>\n" }, { "answer_id": 2089176, "score": 3, "text": "<p>The usual definition is that for every real number $x$ there is a positive integer $N$ such that $a_m&gt;x$ for all $m\\geq N$.</p>\n\n<p>It is very similar to the usual definition of convergence, instead that we ask that all numbers be sufficiently \"big\" after a point. (instead of asking all numbers be sufficiently \"close\" to the limit after a point).</p>\n" }, { "answer_id": 2089180, "score": 2, "text": "<p>We say that the sequence $(x_n)$ <em>diverges</em> to infinity. This is therefore different from a sequence like $\\left((-1)^n\\right)$, which diverges but not to infinity.</p>\n\n<p>The formal definition you give for divergence to infinity is essentially correct.</p>\n" }, { "answer_id": 2089675, "score": 0, "text": "<p>Your definition is fine (even if \"converge to infinity\" is a sub-optimal choice of words since convergence to a finite limit implies many useful properties that tending to infinity does not), it is the sequence that does not converge. If a sequence converges (or tends to infinity), then every subsequence should converge to the same limit (or tend to infinity). This does not hold for the subsequence $\\{0,0,0,...\\}$ (which does not tend to infinity).</p>\n\n<p>The limit superior $\\left(\\limsup_{n \\rightarrow \\infty} x_n = \\lim_{n \\rightarrow \\infty} \\sup_{m \\ge n} x_n\\right)$ is infinite, but that is not sufficient for a claim of \"tends to infinity\". As Will R pointed out in a comment, the sequence $\\{ 1,1,1,{1\\over 2},1,{1\\over 3},1,{1\\over 4},...\\}$ does not converge to zero despite its limit inferior being zero. 
Another (necessary but insufficient for a claim of \"tends to infinity\") property of the sequence is that it is not bounded from above $\\left( \\forall M \\ \\exists n \\mid x_n \\gt M \\right)$ (as mentioned by Hurkyl)</p>\n\n<p>A possibly useful (although not perfect) way of thinking about the definition of tending to infinity $\\left( \\lim_{n \\rightarrow \\infty} x_n = \\infty \\Leftrightarrow \\forall k \\ \\exists N \\mid n \\gt N \\Rightarrow x_n \\gt k\\right)$ is \"every subsequence exceeds all bounds\".</p>\n\n<p>Sources: currently studying Computer Science</p>\n" }, { "answer_id": 2089184, "score": 10, "text": "<p>The general idea behind limits converging to $\\infty$ are:</p>\n\n<ul>\n<li>There is a topological notion of limit in terms of open sets or open neighborhoods rather than in terms of distance</li>\n<li>$+\\infty$ and $-\\infty$ are best understood as points in the extended real numbers </li>\n</ul>\n\n<p>The definition you cited is, in fact, equivalent to what it means for a real-valued sequence to converge to $+\\infty$ in the extended real numbers.</p>\n\n<hr>\n\n<p>The problem is that the sequence you consider <em>doesn't converge</em>. In the extended real numbers, it is a <em>divergent</em> sequence with two limit points: $0$ and $+\\infty$. In that regard, it's comparable to the sequence $0,1,0,1,0,1,\\ldots$.</p>\n\n<hr>\n\n<p>Without more examples or attempts at elaboration, I'm not sure what idea you have that you have incorrectly given the name \"converging to $\\infty$\". One possibility is that you simply have in mind the idea of a sequence being <em>unbounded above</em>. 
This is a sequence having $+\\infty$ as a limit point, or alternatively, something satisfying the property</p>\n\n<blockquote>\n <p>For all real $M$, there exists some $n$ such that $x_n &gt; M$</p>\n</blockquote>\n" }, { "answer_id": 2089176, "score": 3, "text": "<p>The usual definition is that for every real number $x$ there is a positive integer $N$ such that $a_m&gt;x$ for all $m\\geq N$.</p>\n\n<p>It is very similar to the usual definition of convergence, instead that we ask that all numbers be sufficiently \"big\" after a point. (instead of asking all numbers be sufficiently \"close\" to the limit after a point).</p>\n" }, { "answer_id": 2089180, "score": 2, "text": "<p>We say that the sequence $(x_n)$ <em>diverges</em> to infinity. This is therefore different from a sequence like $\\left((-1)^n\\right)$, which diverges but not to infinity.</p>\n\n<p>The formal definition you give for divergence to infinity is essentially correct.</p>\n" }, { "answer_id": 2089675, "score": 0, "text": "<p>Your definition is fine (even if \"converge to infinity\" is a sub-optimal choice of words since convergence to a finite limit implies many useful properties that tending to infinity does not), it is the sequence that does not converge. If a sequence converges (or tends to infinity), then every subsequence should converge to the same limit (or tend to infinity). This does not hold for the subsequence $\\{0,0,0,...\\}$ (which does not tend to infinity).</p>\n\n<p>The limit superior $\\left(\\limsup_{n \\rightarrow \\infty} x_n = \\lim_{n \\rightarrow \\infty} \\sup_{m \\ge n} x_n\\right)$ is infinite, but that is not sufficient for a claim of \"tends to infinity\". As Will R pointed out in a comment, the sequence $\\{ 1,1,1,{1\\over 2},1,{1\\over 3},1,{1\\over 4},...\\}$ does not converge to zero despite its limit inferior being zero. 
Another (necessary but insufficient for a claim of \"tends to infinity\") property of the sequence is that it is not bounded from above $\\left( \\forall M \\ \\exists n \\mid x_n \\gt M \\right)$ (as mentioned by Hurkyl)</p>\n\n<p>A possibly useful (although not perfect) way of thinking about the definition of tending to infinity $\\left( \\lim_{n \\rightarrow \\infty} x_n = \\infty \\Leftrightarrow \\forall k \\ \\exists N \\mid n \\gt N \\Rightarrow x_n \\gt k\\right)$ is \"every subsequence exceeds all bounds\".</p>\n\n<p>Sources: currently studying Computer Science</p>\n" }, { "answer_id": 2089184, "score": 10, "text": "<p>The general idea behind limits converging to $\\infty$ are:</p>\n\n<ul>\n<li>There is a topological notion of limit in terms of open sets or open neighborhoods rather than in terms of distance</li>\n<li>$+\\infty$ and $-\\infty$ are best understood as points in the extended real numbers </li>\n</ul>\n\n<p>The definition you cited is, in fact, equivalent to what it means for a real-valued sequence to converge to $+\\infty$ in the extended real numbers.</p>\n\n<hr>\n\n<p>The problem is that the sequence you consider <em>doesn't converge</em>. In the extended real numbers, it is a <em>divergent</em> sequence with two limit points: $0$ and $+\\infty$. In that regard, it's comparable to the sequence $0,1,0,1,0,1,\\ldots$.</p>\n\n<hr>\n\n<p>Without more examples or attempts at elaboration, I'm not sure what idea you have that you have incorrectly given the name \"converging to $\\infty$\". One possibility is that you simply have in mind the idea of a sequence being <em>unbounded above</em>. 
This is a sequence having $+\\infty$ as a limit point, or alternatively, something satisfying the property</p>\n\n<blockquote>\n <p>For all real $M$, there exists some $n$ such that $x_n &gt; M$</p>\n</blockquote>\n" }, { "answer_id": 2089176, "score": 3, "text": "<p>The usual definition is that for every real number $x$ there is a positive integer $N$ such that $a_m&gt;x$ for all $m\\geq N$.</p>\n\n<p>It is very similar to the usual definition of convergence, instead that we ask that all numbers be sufficiently \"big\" after a point. (instead of asking all numbers be sufficiently \"close\" to the limit after a point).</p>\n" }, { "answer_id": 2089180, "score": 2, "text": "<p>We say that the sequence $(x_n)$ <em>diverges</em> to infinity. This is therefore different from a sequence like $\\left((-1)^n\\right)$, which diverges but not to infinity.</p>\n\n<p>The formal definition you give for divergence to infinity is essentially correct.</p>\n" }, { "answer_id": 2089675, "score": 0, "text": "<p>Your definition is fine (even if \"converge to infinity\" is a sub-optimal choice of words since convergence to a finite limit implies many useful properties that tending to infinity does not), it is the sequence that does not converge. If a sequence converges (or tends to infinity), then every subsequence should converge to the same limit (or tend to infinity). This does not hold for the subsequence $\\{0,0,0,...\\}$ (which does not tend to infinity).</p>\n\n<p>The limit superior $\\left(\\limsup_{n \\rightarrow \\infty} x_n = \\lim_{n \\rightarrow \\infty} \\sup_{m \\ge n} x_n\\right)$ is infinite, but that is not sufficient for a claim of \"tends to infinity\". As Will R pointed out in a comment, the sequence $\\{ 1,1,1,{1\\over 2},1,{1\\over 3},1,{1\\over 4},...\\}$ does not converge to zero despite its limit inferior being zero. 
Another (necessary but insufficient for a claim of \"tends to infinity\") property of the sequence is that it is not bounded from above $\\left( \\forall M \\ \\exists n \\mid x_n \\gt M \\right)$ (as mentioned by Hurkyl)</p>\n\n<p>A possibly useful (although not perfect) way of thinking about the definition of tending to infinity $\\left( \\lim_{n \\rightarrow \\infty} x_n = \\infty \\Leftrightarrow \\forall k \\ \\exists N \\mid n \\gt N \\Rightarrow x_n \\gt k\\right)$ is \"every subsequence exceeds all bounds\".</p>\n\n<p>Sources: currently studying Computer Science</p>\n" }, { "answer_id": 2089184, "score": 10, "text": "<p>The general idea behind limits converging to $\\infty$ are:</p>\n\n<ul>\n<li>There is a topological notion of limit in terms of open sets or open neighborhoods rather than in terms of distance</li>\n<li>$+\\infty$ and $-\\infty$ are best understood as points in the extended real numbers </li>\n</ul>\n\n<p>The definition you cited is, in fact, equivalent to what it means for a real-valued sequence to converge to $+\\infty$ in the extended real numbers.</p>\n\n<hr>\n\n<p>The problem is that the sequence you consider <em>doesn't converge</em>. In the extended real numbers, it is a <em>divergent</em> sequence with two limit points: $0$ and $+\\infty$. In that regard, it's comparable to the sequence $0,1,0,1,0,1,\\ldots$.</p>\n\n<hr>\n\n<p>Without more examples or attempts at elaboration, I'm not sure what idea you have that you have incorrectly given the name \"converging to $\\infty$\". One possibility is that you simply have in mind the idea of a sequence being <em>unbounded above</em>. 
This is a sequence having $+\\infty$ as a limit point, or alternatively, something satisfying the property</p>\n\n<blockquote>\n <p>For all real $M$, there exists some $n$ such that $x_n &gt; M$</p>\n</blockquote>\n" }, { "answer_id": 2089176, "score": 3, "text": "<p>The usual definition is that for every real number $x$ there is a positive integer $N$ such that $a_m&gt;x$ for all $m\\geq N$.</p>\n\n<p>It is very similar to the usual definition of convergence, instead that we ask that all numbers be sufficiently \"big\" after a point. (instead of asking all numbers be sufficiently \"close\" to the limit after a point).</p>\n" }, { "answer_id": 2089180, "score": 2, "text": "<p>We say that the sequence $(x_n)$ <em>diverges</em> to infinity. This is therefore different from a sequence like $\\left((-1)^n\\right)$, which diverges but not to infinity.</p>\n\n<p>The formal definition you give for divergence to infinity is essentially correct.</p>\n" }, { "answer_id": 2089675, "score": 0, "text": "<p>Your definition is fine (even if \"converge to infinity\" is a sub-optimal choice of words since convergence to a finite limit implies many useful properties that tending to infinity does not), it is the sequence that does not converge. If a sequence converges (or tends to infinity), then every subsequence should converge to the same limit (or tend to infinity). This does not hold for the subsequence $\\{0,0,0,...\\}$ (which does not tend to infinity).</p>\n\n<p>The limit superior $\\left(\\limsup_{n \\rightarrow \\infty} x_n = \\lim_{n \\rightarrow \\infty} \\sup_{m \\ge n} x_n\\right)$ is infinite, but that is not sufficient for a claim of \"tends to infinity\". As Will R pointed out in a comment, the sequence $\\{ 1,1,1,{1\\over 2},1,{1\\over 3},1,{1\\over 4},...\\}$ does not converge to zero despite its limit inferior being zero. 
Another (necessary but insufficient for a claim of \"tends to infinity\") property of the sequence is that it is not bounded from above $\\left( \\forall M \\ \\exists n \\mid x_n \\gt M \\right)$ (as mentioned by Hurkyl)</p>\n\n<p>A possibly useful (although not perfect) way of thinking about the definition of tending to infinity $\\left( \\lim_{n \\rightarrow \\infty} x_n = \\infty \\Leftrightarrow \\forall k \\ \\exists N \\mid n \\gt N \\Rightarrow x_n \\gt k\\right)$ is \"every subsequence exceeds all bounds\".</p>\n\n<p>Sources: currently studying Computer Science</p>\n" }, { "answer_id": 2089184, "score": 10, "text": "<p>The general idea behind limits converging to $\\infty$ are:</p>\n\n<ul>\n<li>There is a topological notion of limit in terms of open sets or open neighborhoods rather than in terms of distance</li>\n<li>$+\\infty$ and $-\\infty$ are best understood as points in the extended real numbers </li>\n</ul>\n\n<p>The definition you cited is, in fact, equivalent to what it means for a real-valued sequence to converge to $+\\infty$ in the extended real numbers.</p>\n\n<hr>\n\n<p>The problem is that the sequence you consider <em>doesn't converge</em>. In the extended real numbers, it is a <em>divergent</em> sequence with two limit points: $0$ and $+\\infty$. In that regard, it's comparable to the sequence $0,1,0,1,0,1,\\ldots$.</p>\n\n<hr>\n\n<p>Without more examples or attempts at elaboration, I'm not sure what idea you have that you have incorrectly given the name \"converging to $\\infty$\". One possibility is that you simply have in mind the idea of a sequence being <em>unbounded above</em>. 
This is a sequence having $+\\infty$ as a limit point, or alternatively, something satisfying the property</p>\n\n<blockquote>\n <p>For all real $M$, there exists some $n$ such that $x_n &gt; M$</p>\n</blockquote>\n" }, { "answer_id": 2089176, "score": 3, "text": "<p>The usual definition is that for every real number $x$ there is a positive integer $N$ such that $a_m&gt;x$ for all $m\\geq N$.</p>\n\n<p>It is very similar to the usual definition of convergence, instead that we ask that all numbers be sufficiently \"big\" after a point. (instead of asking all numbers be sufficiently \"close\" to the limit after a point).</p>\n" }, { "answer_id": 2089180, "score": 2, "text": "<p>We say that the sequence $(x_n)$ <em>diverges</em> to infinity. This is therefore different from a sequence like $\\left((-1)^n\\right)$, which diverges but not to infinity.</p>\n\n<p>The formal definition you give for divergence to infinity is essentially correct.</p>\n" }, { "answer_id": 2089675, "score": 0, "text": "<p>Your definition is fine (even if \"converge to infinity\" is a sub-optimal choice of words since convergence to a finite limit implies many useful properties that tending to infinity does not), it is the sequence that does not converge. If a sequence converges (or tends to infinity), then every subsequence should converge to the same limit (or tend to infinity). This does not hold for the subsequence $\\{0,0,0,...\\}$ (which does not tend to infinity).</p>\n\n<p>The limit superior $\\left(\\limsup_{n \\rightarrow \\infty} x_n = \\lim_{n \\rightarrow \\infty} \\sup_{m \\ge n} x_n\\right)$ is infinite, but that is not sufficient for a claim of \"tends to infinity\". As Will R pointed out in a comment, the sequence $\\{ 1,1,1,{1\\over 2},1,{1\\over 3},1,{1\\over 4},...\\}$ does not converge to zero despite its limit inferior being zero. 
Another (necessary but insufficient for a claim of \"tends to infinity\") property of the sequence is that it is not bounded from above $\\left( \\forall M \\ \\exists n \\mid x_n \\gt M \\right)$ (as mentioned by Hurkyl)</p>\n\n<p>A possibly useful (although not perfect) way of thinking about the definition of tending to infinity $\\left( \\lim_{n \\rightarrow \\infty} x_n = \\infty \\Leftrightarrow \\forall k \\ \\exists N \\mid n \\gt N \\Rightarrow x_n \\gt k\\right)$ is \"every subsequence exceeds all bounds\".</p>\n\n<p>Sources: currently studying Computer Science</p>\n" }, { "answer_id": 2089184, "score": 10, "text": "<p>The general idea behind limits converging to $\\infty$ are:</p>\n\n<ul>\n<li>There is a topological notion of limit in terms of open sets or open neighborhoods rather than in terms of distance</li>\n<li>$+\\infty$ and $-\\infty$ are best understood as points in the extended real numbers </li>\n</ul>\n\n<p>The definition you cited is, in fact, equivalent to what it means for a real-valued sequence to converge to $+\\infty$ in the extended real numbers.</p>\n\n<hr>\n\n<p>The problem is that the sequence you consider <em>doesn't converge</em>. In the extended real numbers, it is a <em>divergent</em> sequence with two limit points: $0$ and $+\\infty$. In that regard, it's comparable to the sequence $0,1,0,1,0,1,\\ldots$.</p>\n\n<hr>\n\n<p>Without more examples or attempts at elaboration, I'm not sure what idea you have that you have incorrectly given the name \"converging to $\\infty$\". One possibility is that you simply have in mind the idea of a sequence being <em>unbounded above</em>. 
This is a sequence having $+\\infty$ as a limit point, or alternatively, something satisfying the property</p>\n\n<blockquote>\n <p>For all real $M$, there exists some $n$ such that $x_n &gt; M$</p>\n</blockquote>\n" }, { "answer_id": 2089176, "score": 3, "text": "<p>The usual definition is that for every real number $x$ there is a positive integer $N$ such that $a_m&gt;x$ for all $m\\geq N$.</p>\n\n<p>It is very similar to the usual definition of convergence, instead that we ask that all numbers be sufficiently \"big\" after a point. (instead of asking all numbers be sufficiently \"close\" to the limit after a point).</p>\n" }, { "answer_id": 2089180, "score": 2, "text": "<p>We say that the sequence $(x_n)$ <em>diverges</em> to infinity. This is therefore different from a sequence like $\\left((-1)^n\\right)$, which diverges but not to infinity.</p>\n\n<p>The formal definition you give for divergence to infinity is essentially correct.</p>\n" }, { "answer_id": 2089675, "score": 0, "text": "<p>Your definition is fine (even if \"converge to infinity\" is a sub-optimal choice of words since convergence to a finite limit implies many useful properties that tending to infinity does not), it is the sequence that does not converge. If a sequence converges (or tends to infinity), then every subsequence should converge to the same limit (or tend to infinity). This does not hold for the subsequence $\\{0,0,0,...\\}$ (which does not tend to infinity).</p>\n\n<p>The limit superior $\\left(\\limsup_{n \\rightarrow \\infty} x_n = \\lim_{n \\rightarrow \\infty} \\sup_{m \\ge n} x_n\\right)$ is infinite, but that is not sufficient for a claim of \"tends to infinity\". As Will R pointed out in a comment, the sequence $\\{ 1,1,1,{1\\over 2},1,{1\\over 3},1,{1\\over 4},...\\}$ does not converge to zero despite its limit inferior being zero. 
Another (necessary but insufficient for a claim of \"tends to infinity\") property of the sequence is that it is not bounded from above $\\left( \\forall M \\ \\exists n \\mid x_n \\gt M \\right)$ (as mentioned by Hurkyl)</p>\n\n<p>A possibly useful (although not perfect) way of thinking about the definition of tending to infinity $\\left( \\lim_{n \\rightarrow \\infty} x_n = \\infty \\Leftrightarrow \\forall k \\ \\exists N \\mid n \\gt N \\Rightarrow x_n \\gt k\\right)$ is \"every subsequence exceeds all bounds\".</p>\n\n<p>Sources: currently studying Computer Science</p>\n" }, { "answer_id": 2089184, "score": 10, "text": "<p>The general idea behind limits converging to $\\infty$ are:</p>\n\n<ul>\n<li>There is a topological notion of limit in terms of open sets or open neighborhoods rather than in terms of distance</li>\n<li>$+\\infty$ and $-\\infty$ are best understood as points in the extended real numbers </li>\n</ul>\n\n<p>The definition you cited is, in fact, equivalent to what it means for a real-valued sequence to converge to $+\\infty$ in the extended real numbers.</p>\n\n<hr>\n\n<p>The problem is that the sequence you consider <em>doesn't converge</em>. In the extended real numbers, it is a <em>divergent</em> sequence with two limit points: $0$ and $+\\infty$. In that regard, it's comparable to the sequence $0,1,0,1,0,1,\\ldots$.</p>\n\n<hr>\n\n<p>Without more examples or attempts at elaboration, I'm not sure what idea you have that you have incorrectly given the name \"converging to $\\infty$\". One possibility is that you simply have in mind the idea of a sequence being <em>unbounded above</em>. 
This is a sequence having $+\\infty$ as a limit point, or alternatively, something satisfying the property</p>\n\n<blockquote>\n <p>For all real $M$, there exists some $n$ such that $x_n &gt; M$</p>\n</blockquote>\n" }, { "answer_id": 2089176, "score": 3, "text": "<p>The usual definition is that for every real number $x$ there is a positive integer $N$ such that $a_m&gt;x$ for all $m\\geq N$.</p>\n\n<p>It is very similar to the usual definition of convergence, instead that we ask that all numbers be sufficiently \"big\" after a point. (instead of asking all numbers be sufficiently \"close\" to the limit after a point).</p>\n" }, { "answer_id": 2089180, "score": 2, "text": "<p>We say that the sequence $(x_n)$ <em>diverges</em> to infinity. This is therefore different from a sequence like $\\left((-1)^n\\right)$, which diverges but not to infinity.</p>\n\n<p>The formal definition you give for divergence to infinity is essentially correct.</p>\n" }, { "answer_id": 2089675, "score": 0, "text": "<p>Your definition is fine (even if \"converge to infinity\" is a sub-optimal choice of words since convergence to a finite limit implies many useful properties that tending to infinity does not), it is the sequence that does not converge. If a sequence converges (or tends to infinity), then every subsequence should converge to the same limit (or tend to infinity). This does not hold for the subsequence $\\{0,0,0,...\\}$ (which does not tend to infinity).</p>\n\n<p>The limit superior $\\left(\\limsup_{n \\rightarrow \\infty} x_n = \\lim_{n \\rightarrow \\infty} \\sup_{m \\ge n} x_n\\right)$ is infinite, but that is not sufficient for a claim of \"tends to infinity\". As Will R pointed out in a comment, the sequence $\\{ 1,1,1,{1\\over 2},1,{1\\over 3},1,{1\\over 4},...\\}$ does not converge to zero despite its limit inferior being zero. 
Another (necessary but insufficient for a claim of \"tends to infinity\") property of the sequence is that it is not bounded from above $\\left( \\forall M \\ \\exists n \\mid x_n \\gt M \\right)$ (as mentioned by Hurkyl)</p>\n\n<p>A possibly useful (although not perfect) way of thinking about the definition of tending to infinity $\\left( \\lim_{n \\rightarrow \\infty} x_n = \\infty \\Leftrightarrow \\forall k \\ \\exists N \\mid n \\gt N \\Rightarrow x_n \\gt k\\right)$ is \"every subsequence exceeds all bounds\".</p>\n\n<p>Sources: currently studying Computer Science</p>\n" }, { "answer_id": 2089184, "score": 10, "text": "<p>The general idea behind limits converging to $\\infty$ are:</p>\n\n<ul>\n<li>There is a topological notion of limit in terms of open sets or open neighborhoods rather than in terms of distance</li>\n<li>$+\\infty$ and $-\\infty$ are best understood as points in the extended real numbers </li>\n</ul>\n\n<p>The definition you cited is, in fact, equivalent to what it means for a real-valued sequence to converge to $+\\infty$ in the extended real numbers.</p>\n\n<hr>\n\n<p>The problem is that the sequence you consider <em>doesn't converge</em>. In the extended real numbers, it is a <em>divergent</em> sequence with two limit points: $0$ and $+\\infty$. In that regard, it's comparable to the sequence $0,1,0,1,0,1,\\ldots$.</p>\n\n<hr>\n\n<p>Without more examples or attempts at elaboration, I'm not sure what idea you have that you have incorrectly given the name \"converging to $\\infty$\". One possibility is that you simply have in mind the idea of a sequence being <em>unbounded above</em>. 
This is a sequence having $+\\infty$ as a limit point, or alternatively, something satisfying the property</p>\n\n<blockquote>\n <p>For all real $M$, there exists some $n$ such that $x_n &gt; M$</p>\n</blockquote>\n" }, { "answer_id": 2089176, "score": 3, "text": "<p>The usual definition is that for every real number $x$ there is a positive integer $N$ such that $a_m&gt;x$ for all $m\\geq N$.</p>\n\n<p>It is very similar to the usual definition of convergence, instead that we ask that all numbers be sufficiently \"big\" after a point. (instead of asking all numbers be sufficiently \"close\" to the limit after a point).</p>\n" }, { "answer_id": 2089180, "score": 2, "text": "<p>We say that the sequence $(x_n)$ <em>diverges</em> to infinity. This is therefore different from a sequence like $\\left((-1)^n\\right)$, which diverges but not to infinity.</p>\n\n<p>The formal definition you give for divergence to infinity is essentially correct.</p>\n" }, { "answer_id": 2089675, "score": 0, "text": "<p>Your definition is fine (even if \"converge to infinity\" is a sub-optimal choice of words since convergence to a finite limit implies many useful properties that tending to infinity does not), it is the sequence that does not converge. If a sequence converges (or tends to infinity), then every subsequence should converge to the same limit (or tend to infinity). This does not hold for the subsequence $\\{0,0,0,...\\}$ (which does not tend to infinity).</p>\n\n<p>The limit superior $\\left(\\limsup_{n \\rightarrow \\infty} x_n = \\lim_{n \\rightarrow \\infty} \\sup_{m \\ge n} x_n\\right)$ is infinite, but that is not sufficient for a claim of \"tends to infinity\". As Will R pointed out in a comment, the sequence $\\{ 1,1,1,{1\\over 2},1,{1\\over 3},1,{1\\over 4},...\\}$ does not converge to zero despite its limit inferior being zero. 
Another (necessary but insufficient for a claim of \"tends to infinity\") property of the sequence is that it is not bounded from above $\\left( \\forall M \\ \\exists n \\mid x_n \\gt M \\right)$ (as mentioned by Hurkyl)</p>\n\n<p>A possibly useful (although not perfect) way of thinking about the definition of tending to infinity $\\left( \\lim_{n \\rightarrow \\infty} x_n = \\infty \\Leftrightarrow \\forall k \\ \\exists N \\mid n \\gt N \\Rightarrow x_n \\gt k\\right)$ is \"every subsequence exceeds all bounds\".</p>\n\n<p>Sources: currently studying Computer Science</p>\n" } ]
Question: How do you define sequences that converge to infinity? Answer 1: <p>The general ideas behind limits converging to $\infty$ are:</p> <ul> <li>There is a topological notion of limit in terms of open sets or open neighborhoods rather than in terms of distance</li> <li>$+\infty$ and $-\infty$ are best understood as points in the extended real numbers </li> </ul> <p>The definition you cited is, in fact, equivalent to what it means for a real-valued sequence to converge to $+\infty$ in the extended real numbers.</p> <hr> <p>The problem is that the sequence you consider <em>doesn't converge</em>. In the extended real numbers, it is a <em>divergent</em> sequence with two limit points: $0$ and $+\infty$. In that regard, it's comparable to the sequence $0,1,0,1,0,1,\ldots$.</p> <hr> <p>Without more examples or attempts at elaboration, I'm not sure what idea you have that you have incorrectly given the name "converging to $\infty$". One possibility is that you simply have in mind the idea of a sequence being <em>unbounded above</em>. This is a sequence having $+\infty$ as a limit point, or alternatively, something satisfying the property</p> <blockquote> <p>For all real $M$, there exists some $n$ such that $x_n &gt; M$</p> </blockquote> Answer 2: <p>The usual definition is that for every real number $x$ there is a positive integer $N$ such that $a_m&gt;x$ for all $m\geq N$.</p> <p>It is very similar to the usual definition of convergence, except that we ask that all numbers be sufficiently "big" after a point (instead of asking that all numbers be sufficiently "close" to the limit after a point).</p> Answer 3: <p>We say that the sequence $(x_n)$ <em>diverges</em> to infinity. 
This is therefore different from a sequence like $\left((-1)^n\right)$, which diverges but not to infinity.</p> <p>The formal definition you give for divergence to infinity is essentially correct.</p> Answer 4: <p>Your definition is fine (even if "converge to infinity" is a sub-optimal choice of words, since convergence to a finite limit implies many useful properties that tending to infinity does not); it is the sequence that does not converge. If a sequence converges (or tends to infinity), then every subsequence should converge to the same limit (or tend to infinity). This does not hold for the subsequence $\{0,0,0,...\}$ (which does not tend to infinity).</p> <p>The limit superior $\left(\limsup_{n \rightarrow \infty} x_n = \lim_{n \rightarrow \infty} \sup_{m \ge n} x_n\right)$ is infinite, but that is not sufficient for a claim of "tends to infinity". As Will R pointed out in a comment, the sequence $\{ 1,1,1,{1\over 2},1,{1\over 3},1,{1\over 4},...\}$ does not converge to zero despite its limit inferior being zero. 
Another (necessary but insufficient for a claim of "tends to infinity") property of the sequence is that it is not bounded from above $\left( \forall M \ \exists n \mid x_n \gt M \right)$ (as mentioned by Hurkyl)</p> <p>A possibly useful (although not perfect) way of thinking about the definition of tending to infinity $\left( \lim_{n \rightarrow \infty} x_n = \infty \Leftrightarrow \forall k \ \exists N \mid n \gt N \Rightarrow x_n \gt k\right)$ is "every subsequence exceeds all bounds".</p> <p>Sources: currently studying Computer Science</p> Answer 5: <p>The general idea behind limits converging to $\infty$ are:</p> <ul> <li>There is a topological notion of limit in terms of open sets or open neighborhoods rather than in terms of distance</li> <li>$+\infty$ and $-\infty$ are best understood as points in the extended real numbers </li> </ul> <p>The definition you cited is, in fact, equivalent to what it means for a real-valued sequence to converge to $+\infty$ in the extended real numbers.</p> <hr> <p>The problem is that the sequence you consider <em>doesn't converge</em>. In the extended real numbers, it is a <em>divergent</em> sequence with two limit points: $0$ and $+\infty$. In that regard, it's comparable to the sequence $0,1,0,1,0,1,\ldots$.</p> <hr> <p>Without more examples or attempts at elaboration, I'm not sure what idea you have that you have incorrectly given the name "converging to $\infty$". One possibility is that you simply have in mind the idea of a sequence being <em>unbounded above</em>. 
This is a sequence having $+\infty$ as a limit point, or alternatively, something satisfying the property</p> <blockquote> <p>For all real $M$, there exists some $n$ such that $x_n &gt; M$</p> </blockquote> Answer 6: <p>The usual definition is that for every real number $x$ there is a positive integer $N$ such that $a_m&gt;x$ for all $m\geq N$.</p> <p>It is very similar to the usual definition of convergence, instead that we ask that all numbers be sufficiently "big" after a point. (instead of asking all numbers be sufficiently "close" to the limit after a point).</p> Answer 7: <p>We say that the sequence $(x_n)$ <em>diverges</em> to infinity. This is therefore different from a sequence like $\left((-1)^n\right)$, which diverges but not to infinity.</p> <p>The formal definition you give for divergence to infinity is essentially correct.</p> Answer 8: <p>Your definition is fine (even if "converge to infinity" is a sub-optimal choice of words since convergence to a finite limit implies many useful properties that tending to infinity does not), it is the sequence that does not converge. If a sequence converges (or tends to infinity), then every subsequence should converge to the same limit (or tend to infinity). This does not hold for the subsequence $\{0,0,0,...\}$ (which does not tend to infinity).</p> <p>The limit superior $\left(\limsup_{n \rightarrow \infty} x_n = \lim_{n \rightarrow \infty} \sup_{m \ge n} x_n\right)$ is infinite, but that is not sufficient for a claim of "tends to infinity". As Will R pointed out in a comment, the sequence $\{ 1,1,1,{1\over 2},1,{1\over 3},1,{1\over 4},...\}$ does not converge to zero despite its limit inferior being zero. 
Another (necessary but insufficient for a claim of "tends to infinity") property of the sequence is that it is not bounded from above $\left( \forall M \ \exists n \mid x_n \gt M \right)$ (as mentioned by Hurkyl)</p> <p>A possibly useful (although not perfect) way of thinking about the definition of tending to infinity $\left( \lim_{n \rightarrow \infty} x_n = \infty \Leftrightarrow \forall k \ \exists N \mid n \gt N \Rightarrow x_n \gt k\right)$ is "every subsequence exceeds all bounds".</p> <p>Sources: currently studying Computer Science</p> Answer 9: <p>The general idea behind limits converging to $\infty$ are:</p> <ul> <li>There is a topological notion of limit in terms of open sets or open neighborhoods rather than in terms of distance</li> <li>$+\infty$ and $-\infty$ are best understood as points in the extended real numbers </li> </ul> <p>The definition you cited is, in fact, equivalent to what it means for a real-valued sequence to converge to $+\infty$ in the extended real numbers.</p> <hr> <p>The problem is that the sequence you consider <em>doesn't converge</em>. In the extended real numbers, it is a <em>divergent</em> sequence with two limit points: $0$ and $+\infty$. In that regard, it's comparable to the sequence $0,1,0,1,0,1,\ldots$.</p> <hr> <p>Without more examples or attempts at elaboration, I'm not sure what idea you have that you have incorrectly given the name "converging to $\infty$". One possibility is that you simply have in mind the idea of a sequence being <em>unbounded above</em>. 
This is a sequence having $+\infty$ as a limit point, or alternatively, something satisfying the property</p> <blockquote> <p>For all real $M$, there exists some $n$ such that $x_n &gt; M$</p> </blockquote> Answer 10: <p>The usual definition is that for every real number $x$ there is a positive integer $N$ such that $a_m&gt;x$ for all $m\geq N$.</p> <p>It is very similar to the usual definition of convergence, instead that we ask that all numbers be sufficiently "big" after a point. (instead of asking all numbers be sufficiently "close" to the limit after a point).</p> Answer 11: <p>We say that the sequence $(x_n)$ <em>diverges</em> to infinity. This is therefore different from a sequence like $\left((-1)^n\right)$, which diverges but not to infinity.</p> <p>The formal definition you give for divergence to infinity is essentially correct.</p> Answer 12: <p>Your definition is fine (even if "converge to infinity" is a sub-optimal choice of words since convergence to a finite limit implies many useful properties that tending to infinity does not), it is the sequence that does not converge. If a sequence converges (or tends to infinity), then every subsequence should converge to the same limit (or tend to infinity). This does not hold for the subsequence $\{0,0,0,...\}$ (which does not tend to infinity).</p> <p>The limit superior $\left(\limsup_{n \rightarrow \infty} x_n = \lim_{n \rightarrow \infty} \sup_{m \ge n} x_n\right)$ is infinite, but that is not sufficient for a claim of "tends to infinity". As Will R pointed out in a comment, the sequence $\{ 1,1,1,{1\over 2},1,{1\over 3},1,{1\over 4},...\}$ does not converge to zero despite its limit inferior being zero. 
Another (necessary but insufficient for a claim of "tends to infinity") property of the sequence is that it is not bounded from above $\left( \forall M \ \exists n \mid x_n \gt M \right)$ (as mentioned by Hurkyl)</p> <p>A possibly useful (although not perfect) way of thinking about the definition of tending to infinity $\left( \lim_{n \rightarrow \infty} x_n = \infty \Leftrightarrow \forall k \ \exists N \mid n \gt N \Rightarrow x_n \gt k\right)$ is "every subsequence exceeds all bounds".</p> <p>Sources: currently studying Computer Science</p> Answer 13: <p>The general idea behind limits converging to $\infty$ are:</p> <ul> <li>There is a topological notion of limit in terms of open sets or open neighborhoods rather than in terms of distance</li> <li>$+\infty$ and $-\infty$ are best understood as points in the extended real numbers </li> </ul> <p>The definition you cited is, in fact, equivalent to what it means for a real-valued sequence to converge to $+\infty$ in the extended real numbers.</p> <hr> <p>The problem is that the sequence you consider <em>doesn't converge</em>. In the extended real numbers, it is a <em>divergent</em> sequence with two limit points: $0$ and $+\infty$. In that regard, it's comparable to the sequence $0,1,0,1,0,1,\ldots$.</p> <hr> <p>Without more examples or attempts at elaboration, I'm not sure what idea you have that you have incorrectly given the name "converging to $\infty$". One possibility is that you simply have in mind the idea of a sequence being <em>unbounded above</em>. 
This is a sequence having $+\infty$ as a limit point, or alternatively, something satisfying the property</p> <blockquote> <p>For all real $M$, there exists some $n$ such that $x_n &gt; M$</p> </blockquote> Answer 14: <p>The usual definition is that for every real number $x$ there is a positive integer $N$ such that $a_m&gt;x$ for all $m\geq N$.</p> <p>It is very similar to the usual definition of convergence, instead that we ask that all numbers be sufficiently "big" after a point. (instead of asking all numbers be sufficiently "close" to the limit after a point).</p> Answer 15: <p>We say that the sequence $(x_n)$ <em>diverges</em> to infinity. This is therefore different from a sequence like $\left((-1)^n\right)$, which diverges but not to infinity.</p> <p>The formal definition you give for divergence to infinity is essentially correct.</p> Answer 16: <p>Your definition is fine (even if "converge to infinity" is a sub-optimal choice of words since convergence to a finite limit implies many useful properties that tending to infinity does not), it is the sequence that does not converge. If a sequence converges (or tends to infinity), then every subsequence should converge to the same limit (or tend to infinity). This does not hold for the subsequence $\{0,0,0,...\}$ (which does not tend to infinity).</p> <p>The limit superior $\left(\limsup_{n \rightarrow \infty} x_n = \lim_{n \rightarrow \infty} \sup_{m \ge n} x_n\right)$ is infinite, but that is not sufficient for a claim of "tends to infinity". As Will R pointed out in a comment, the sequence $\{ 1,1,1,{1\over 2},1,{1\over 3},1,{1\over 4},...\}$ does not converge to zero despite its limit inferior being zero. 
Another (necessary but insufficient for a claim of "tends to infinity") property of the sequence is that it is not bounded from above $\left( \forall M \ \exists n \mid x_n \gt M \right)$ (as mentioned by Hurkyl)</p> <p>A possibly useful (although not perfect) way of thinking about the definition of tending to infinity $\left( \lim_{n \rightarrow \infty} x_n = \infty \Leftrightarrow \forall k \ \exists N \mid n \gt N \Rightarrow x_n \gt k\right)$ is "every subsequence exceeds all bounds".</p> <p>Sources: currently studying Computer Science</p> Answer 17: <p>The general idea behind limits converging to $\infty$ are:</p> <ul> <li>There is a topological notion of limit in terms of open sets or open neighborhoods rather than in terms of distance</li> <li>$+\infty$ and $-\infty$ are best understood as points in the extended real numbers </li> </ul> <p>The definition you cited is, in fact, equivalent to what it means for a real-valued sequence to converge to $+\infty$ in the extended real numbers.</p> <hr> <p>The problem is that the sequence you consider <em>doesn't converge</em>. In the extended real numbers, it is a <em>divergent</em> sequence with two limit points: $0$ and $+\infty$. In that regard, it's comparable to the sequence $0,1,0,1,0,1,\ldots$.</p> <hr> <p>Without more examples or attempts at elaboration, I'm not sure what idea you have that you have incorrectly given the name "converging to $\infty$". One possibility is that you simply have in mind the idea of a sequence being <em>unbounded above</em>. 
This is a sequence having $+\infty$ as a limit point, or alternatively, something satisfying the property</p> <blockquote> <p>For all real $M$, there exists some $n$ such that $x_n &gt; M$</p> </blockquote> Answer 18: <p>The usual definition is that for every real number $x$ there is a positive integer $N$ such that $a_m&gt;x$ for all $m\geq N$.</p> <p>It is very similar to the usual definition of convergence, instead that we ask that all numbers be sufficiently "big" after a point. (instead of asking all numbers be sufficiently "close" to the limit after a point).</p> Answer 19: <p>We say that the sequence $(x_n)$ <em>diverges</em> to infinity. This is therefore different from a sequence like $\left((-1)^n\right)$, which diverges but not to infinity.</p> <p>The formal definition you give for divergence to infinity is essentially correct.</p> Answer 20: <p>Your definition is fine (even if "converge to infinity" is a sub-optimal choice of words since convergence to a finite limit implies many useful properties that tending to infinity does not), it is the sequence that does not converge. If a sequence converges (or tends to infinity), then every subsequence should converge to the same limit (or tend to infinity). This does not hold for the subsequence $\{0,0,0,...\}$ (which does not tend to infinity).</p> <p>The limit superior $\left(\limsup_{n \rightarrow \infty} x_n = \lim_{n \rightarrow \infty} \sup_{m \ge n} x_n\right)$ is infinite, but that is not sufficient for a claim of "tends to infinity". As Will R pointed out in a comment, the sequence $\{ 1,1,1,{1\over 2},1,{1\over 3},1,{1\over 4},...\}$ does not converge to zero despite its limit inferior being zero. 
Another (necessary but insufficient for a claim of "tends to infinity") property of the sequence is that it is not bounded from above $\left( \forall M \ \exists n \mid x_n \gt M \right)$ (as mentioned by Hurkyl)</p> <p>A possibly useful (although not perfect) way of thinking about the definition of tending to infinity $\left( \lim_{n \rightarrow \infty} x_n = \infty \Leftrightarrow \forall k \ \exists N \mid n \gt N \Rightarrow x_n \gt k\right)$ is "every subsequence exceeds all bounds".</p> <p>Sources: currently studying Computer Science</p> Answer 21: <p>The general idea behind limits converging to $\infty$ are:</p> <ul> <li>There is a topological notion of limit in terms of open sets or open neighborhoods rather than in terms of distance</li> <li>$+\infty$ and $-\infty$ are best understood as points in the extended real numbers </li> </ul> <p>The definition you cited is, in fact, equivalent to what it means for a real-valued sequence to converge to $+\infty$ in the extended real numbers.</p> <hr> <p>The problem is that the sequence you consider <em>doesn't converge</em>. In the extended real numbers, it is a <em>divergent</em> sequence with two limit points: $0$ and $+\infty$. In that regard, it's comparable to the sequence $0,1,0,1,0,1,\ldots$.</p> <hr> <p>Without more examples or attempts at elaboration, I'm not sure what idea you have that you have incorrectly given the name "converging to $\infty$". One possibility is that you simply have in mind the idea of a sequence being <em>unbounded above</em>. 
This is a sequence having $+\infty$ as a limit point, or alternatively, something satisfying the property</p> <blockquote> <p>For all real $M$, there exists some $n$ such that $x_n &gt; M$</p> </blockquote> Answer 22: <p>The usual definition is that for every real number $x$ there is a positive integer $N$ such that $a_m&gt;x$ for all $m\geq N$.</p> <p>It is very similar to the usual definition of convergence, instead that we ask that all numbers be sufficiently "big" after a point. (instead of asking all numbers be sufficiently "close" to the limit after a point).</p> Answer 23: <p>We say that the sequence $(x_n)$ <em>diverges</em> to infinity. This is therefore different from a sequence like $\left((-1)^n\right)$, which diverges but not to infinity.</p> <p>The formal definition you give for divergence to infinity is essentially correct.</p> Answer 24: <p>Your definition is fine (even if "converge to infinity" is a sub-optimal choice of words since convergence to a finite limit implies many useful properties that tending to infinity does not), it is the sequence that does not converge. If a sequence converges (or tends to infinity), then every subsequence should converge to the same limit (or tend to infinity). This does not hold for the subsequence $\{0,0,0,...\}$ (which does not tend to infinity).</p> <p>The limit superior $\left(\limsup_{n \rightarrow \infty} x_n = \lim_{n \rightarrow \infty} \sup_{m \ge n} x_n\right)$ is infinite, but that is not sufficient for a claim of "tends to infinity". As Will R pointed out in a comment, the sequence $\{ 1,1,1,{1\over 2},1,{1\over 3},1,{1\over 4},...\}$ does not converge to zero despite its limit inferior being zero. 
Another (necessary but insufficient for a claim of "tends to infinity") property of the sequence is that it is not bounded from above $\left( \forall M \ \exists n \mid x_n \gt M \right)$ (as mentioned by Hurkyl)</p> <p>A possibly useful (although not perfect) way of thinking about the definition of tending to infinity $\left( \lim_{n \rightarrow \infty} x_n = \infty \Leftrightarrow \forall k \ \exists N \mid n \gt N \Rightarrow x_n \gt k\right)$ is "every subsequence exceeds all bounds".</p> <p>Sources: currently studying Computer Science</p> Answer 25: <p>The general idea behind limits converging to $\infty$ are:</p> <ul> <li>There is a topological notion of limit in terms of open sets or open neighborhoods rather than in terms of distance</li> <li>$+\infty$ and $-\infty$ are best understood as points in the extended real numbers </li> </ul> <p>The definition you cited is, in fact, equivalent to what it means for a real-valued sequence to converge to $+\infty$ in the extended real numbers.</p> <hr> <p>The problem is that the sequence you consider <em>doesn't converge</em>. In the extended real numbers, it is a <em>divergent</em> sequence with two limit points: $0$ and $+\infty$. In that regard, it's comparable to the sequence $0,1,0,1,0,1,\ldots$.</p> <hr> <p>Without more examples or attempts at elaboration, I'm not sure what idea you have that you have incorrectly given the name "converging to $\infty$". One possibility is that you simply have in mind the idea of a sequence being <em>unbounded above</em>. 
This is a sequence having $+\infty$ as a limit point, or alternatively, something satisfying the property</p> <blockquote> <p>For all real $M$, there exists some $n$ such that $x_n &gt; M$</p> </blockquote> Answer 26: <p>The usual definition is that for every real number $x$ there is a positive integer $N$ such that $a_m&gt;x$ for all $m\geq N$.</p> <p>It is very similar to the usual definition of convergence, instead that we ask that all numbers be sufficiently "big" after a point. (instead of asking all numbers be sufficiently "close" to the limit after a point).</p> Answer 27: <p>We say that the sequence $(x_n)$ <em>diverges</em> to infinity. This is therefore different from a sequence like $\left((-1)^n\right)$, which diverges but not to infinity.</p> <p>The formal definition you give for divergence to infinity is essentially correct.</p> Answer 28: <p>Your definition is fine (even if "converge to infinity" is a sub-optimal choice of words since convergence to a finite limit implies many useful properties that tending to infinity does not), it is the sequence that does not converge. If a sequence converges (or tends to infinity), then every subsequence should converge to the same limit (or tend to infinity). This does not hold for the subsequence $\{0,0,0,...\}$ (which does not tend to infinity).</p> <p>The limit superior $\left(\limsup_{n \rightarrow \infty} x_n = \lim_{n \rightarrow \infty} \sup_{m \ge n} x_n\right)$ is infinite, but that is not sufficient for a claim of "tends to infinity". As Will R pointed out in a comment, the sequence $\{ 1,1,1,{1\over 2},1,{1\over 3},1,{1\over 4},...\}$ does not converge to zero despite its limit inferior being zero. 
Another (necessary but insufficient for a claim of "tends to infinity") property of the sequence is that it is not bounded from above $\left( \forall M \ \exists n \mid x_n \gt M \right)$ (as mentioned by Hurkyl)</p> <p>A possibly useful (although not perfect) way of thinking about the definition of tending to infinity $\left( \lim_{n \rightarrow \infty} x_n = \infty \Leftrightarrow \forall k \ \exists N \mid n \gt N \Rightarrow x_n \gt k\right)$ is "every subsequence exceeds all bounds".</p> <p>Sources: currently studying Computer Science</p> Answer 29: <p>The general idea behind limits converging to $\infty$ are:</p> <ul> <li>There is a topological notion of limit in terms of open sets or open neighborhoods rather than in terms of distance</li> <li>$+\infty$ and $-\infty$ are best understood as points in the extended real numbers </li> </ul> <p>The definition you cited is, in fact, equivalent to what it means for a real-valued sequence to converge to $+\infty$ in the extended real numbers.</p> <hr> <p>The problem is that the sequence you consider <em>doesn't converge</em>. In the extended real numbers, it is a <em>divergent</em> sequence with two limit points: $0$ and $+\infty$. In that regard, it's comparable to the sequence $0,1,0,1,0,1,\ldots$.</p> <hr> <p>Without more examples or attempts at elaboration, I'm not sure what idea you have that you have incorrectly given the name "converging to $\infty$". One possibility is that you simply have in mind the idea of a sequence being <em>unbounded above</em>. 
This is a sequence having $+\infty$ as a limit point, or alternatively, something satisfying the property</p> <blockquote> <p>For all real $M$, there exists some $n$ such that $x_n &gt; M$</p> </blockquote> Answer 30: <p>The usual definition is that for every real number $x$ there is a positive integer $N$ such that $a_m&gt;x$ for all $m\geq N$.</p> <p>It is very similar to the usual definition of convergence, instead that we ask that all numbers be sufficiently "big" after a point. (instead of asking all numbers be sufficiently "close" to the limit after a point).</p> Answer 31: <p>We say that the sequence $(x_n)$ <em>diverges</em> to infinity. This is therefore different from a sequence like $\left((-1)^n\right)$, which diverges but not to infinity.</p> <p>The formal definition you give for divergence to infinity is essentially correct.</p> Answer 32: <p>Your definition is fine (even if "converge to infinity" is a sub-optimal choice of words since convergence to a finite limit implies many useful properties that tending to infinity does not), it is the sequence that does not converge. If a sequence converges (or tends to infinity), then every subsequence should converge to the same limit (or tend to infinity). This does not hold for the subsequence $\{0,0,0,...\}$ (which does not tend to infinity).</p> <p>The limit superior $\left(\limsup_{n \rightarrow \infty} x_n = \lim_{n \rightarrow \infty} \sup_{m \ge n} x_n\right)$ is infinite, but that is not sufficient for a claim of "tends to infinity". As Will R pointed out in a comment, the sequence $\{ 1,1,1,{1\over 2},1,{1\over 3},1,{1\over 4},...\}$ does not converge to zero despite its limit inferior being zero. 
Another (necessary but insufficient for a claim of "tends to infinity") property of the sequence is that it is not bounded from above $\left( \forall M \ \exists n \mid x_n \gt M \right)$ (as mentioned by Hurkyl)</p> <p>A possibly useful (although not perfect) way of thinking about the definition of tending to infinity $\left( \lim_{n \rightarrow \infty} x_n = \infty \Leftrightarrow \forall k \ \exists N \mid n \gt N \Rightarrow x_n \gt k\right)$ is "every subsequence exceeds all bounds".</p> <p>Sources: currently studying Computer Science</p>
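The distinction these answers draw — unbounded above versus tending to infinity — can be checked numerically. A minimal Python sketch; the sequence, the tested bounds, and the finite `horizon` are chosen purely for illustration:

```python
# The sequence 0, 1, 0, 2, 0, 3, ... is unbounded above, yet it does
# not tend to infinity: the subsequence of zeros never exceeds any
# positive bound, so no tail of the sequence stays above even k = 0.

def x(n):
    """n-th term (n >= 0): zeros interleaved with 1, 2, 3, ..."""
    return 0 if n % 2 == 0 else (n + 1) // 2

def exceeds_bound_somewhere(M, horizon=10_000):
    """Unboundedness: some term within the horizon exceeds M."""
    return any(x(n) > M for n in range(horizon))

def eventually_exceeds(k, N, horizon=10_000):
    """Tending-to-infinity condition at N: x_n > k for all N < n < horizon."""
    return all(x(n) > k for n in range(N + 1, horizon))

# Unbounded above: every tested bound M is exceeded by some term.
assert all(exceeds_bound_somewhere(M) for M in (10, 100, 1000))

# But no tested N witnesses the definition, even for k = 0: zeros recur.
assert not any(eventually_exceeds(0, N) for N in range(100))
```

The finite `horizon` makes the quantified conditions checkable by machine; it is a numerical stand-in for "for all $n$", not a proof.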
Compactness of the open and closed unit intervals
2,089,317
7
https://math.stackexchange.com/questions/2089317/compactness-of-the-open-and-closed-unit-intervals
<p>In the <a href="http://www.math.ucla.edu/~tao/preprints/compactness.pdf" rel="nofollow noreferrer">article</a> by Tao it's explained that compactness can be formulated in the most general way as:</p> <p>(All open covers have finite subcovers) If $\{V_\alpha:\alpha\in\mathcal{A}\}$ is any collection of open sets which covers $X$, then there must exist a finite sub-collection $V_{\alpha_1},V_{\alpha_2},\ldots,V_{\alpha_k}$ of these sets which still covers $X$.</p> <p><strong>Question:</strong> How is it possible to show (both intuitively and rigorously) that every open cover admits such a finite subcover for $X=[0,1]$, but not for $X=(0,1)$?</p> <p><strong>Edit:</strong> a <a href="https://math.stackexchange.com/questions/1548137/compactness-of-closed-interval?rq=1">related question</a> and <a href="https://math.stackexchange.com/questions/609217/why-is-an-open-interval-not-a-compact-set?rq=1">another one</a> on open covers.</p>
[ { "answer_id": 2089349, "score": 7, "text": "<p>Take the closed interval $[0, 1]$, and let $\\mathcal{V} = \\{ V_1 , V_2 , \\ldots \\}$ be an open cover. We want to say that $\\mathcal{V}$ has a finite subcover, so suppose to the contrary it doesn't. Then we can express $[0, 1]$ as $[0, 1/2] \\cup [1/2, 1]$. If both $[0, 1/2]$ and $[1/2, 1]$ admit finite subcovers from $\\mathcal{V}$, then so does $[0, 1]$, so we know (at least) one of these intervals doesn't. We'll then perform the same process on that interval (if it was $[0, 1/2]$, then we'll look at $[0, 1/4]$ and $[1/4, 1/2]$). This yields a sequence $(I_N)$ of nested closed intervals whose intersection is a singleton $\\{ x \\}$. We know there exists $n$ such that $x \\in V_n$, and since $V_n$ is open, we know there's some $\\epsilon &gt; 0$ such that the $\\epsilon$-ball about $x$ is contained in $V_n$. But if we choose $N$ large enough that $2^{- N} &lt; \\epsilon$, we know that the $N$th closed interval $I_N$ is contained in $V_n$. So in fact $I_N$ not only admitted a finite subcover from $\\mathcal{V}$, but a singleton cover, namely $\\{ V_n \\} \\subset \\mathcal {V}$, a contradiction.</p>\n\n<p>EDIT: A similar method can be used to show the cube $[0, 1]^n \\subset \\mathbb{R}^n$. 
Another modification yet will yield the most general form of the Heine-Borel theorem: That a subset of a complete metric space is compact if and only if it's closed and totally bounded.</p>\n" }, { "answer_id": 2089323, "score": 6, "text": "<p>showing it is not possible for $(0,1)$ is easy, just take the open cover $(0,1-\\frac{1}{n})$.</p>\n\n<p>Showing it is possible for $[0,1]$ is not so easy, you may want to look <a href=\"https://math.stackexchange.com/questions/368108/how-to-prove-every-closed-interval-in-r-is-compact\">here</a> for proofs that do not go through the whole Heine-Borel theorem.</p>\n" }, { "answer_id": 2089608, "score": 1, "text": "<p>I think the most rigorous way of showing this more generally is using the Heine-Borel theorem, which says that a subset of $\\mathbb{R}^n$ is compact if and only if it is closed and bounded. Maybe this is less intuitive but if you understand the proof of the theorem it gives you a large insight. </p>\n" } ]
So in fact $I_N$ not only admitted a finite subcover from $\\mathcal{V}$, but a singleton cover, namely $\\{ V_n \\} \\subset \\mathcal {V}$, a contradiction.</p>\n\n<p>EDIT: A similar method can be used to show the cube $[0, 1]^n \\subset \\mathbb{R}^n$. Another modification yet will yield the most general form of the Heine-Borel theorem: That a subset of a complete metric space is compact if and only if it's closed and totally bounded.</p>\n" }, { "answer_id": 2089323, "score": 6, "text": "<p>showing it is not possible for $(0,1)$ is easy, just take the open cover $(0,1-\\frac{1}{n})$.</p>\n\n<p>Showing it is possible for $[0,1]$ is not so easy, you may want to look <a href=\"https://math.stackexchange.com/questions/368108/how-to-prove-every-closed-interval-in-r-is-compact\">here</a> for proofs that do not go through the whole Heine-Borel theorem.</p>\n" }, { "answer_id": 2089608, "score": 1, "text": "<p>I think the most rigorous way of showing this more generally is using the Heine-Borel theorem, which says that a subset of $\\mathbb{R}^n$ is compact if and only if it is closed and bounded. Maybe this is less intuitive but if you understand the proof of the theorem it gives you a large insight. </p>\n" }, { "answer_id": 2089349, "score": 7, "text": "<p>Take the closed interval $[0, 1]$, and let $\\mathcal{V} = \\{ V_1 , V_2 , \\ldots \\}$ be an open cover. We want to say that $\\mathcal{V}$ has a finite subcover, so suppose to the contrary it doesn't. Then we can express $[0, 1]$ as $[0, 1/2] \\cup [1/2, 1]$. If both $[0, 1/2]$ and $[1/2, 1]$ admit finite subcovers from $\\mathcal{V}$, then so does $[0, 1]$, so we know (at least) one of these intervals doesn't. We'll then perform the same process on that interval (if it was $[0, 1/2]$, then we'll look at $[0, 1/4]$ and $[1/4, 1/2]$). This yields a sequence $(I_N)$ of nested closed intervals whose intersection is a singleton $\\{ x \\}$. 
We know there exists $n$ such that $x \\in V_n$, and since $V_n$ is open, we know there's some $\\epsilon &gt; 0$ such that the $\\epsilon$-ball about $x$ is contained in $V_n$. But if we choose $N$ large enough that $2^{- N} &lt; \\epsilon$, we know that the $N$th closed interval $I_N$ is contained in $V_n$. So in fact $I_N$ not only admitted a finite subcover from $\\mathcal{V}$, but a singleton cover, namely $\\{ V_n \\} \\subset \\mathcal {V}$, a contradiction.</p>\n\n<p>EDIT: A similar method can be used to show the cube $[0, 1]^n \\subset \\mathbb{R}^n$. Another modification yet will yield the most general form of the Heine-Borel theorem: That a subset of a complete metric space is compact if and only if it's closed and totally bounded.</p>\n" }, { "answer_id": 2089323, "score": 6, "text": "<p>showing it is not possible for $(0,1)$ is easy, just take the open cover $(0,1-\\frac{1}{n})$.</p>\n\n<p>Showing it is possible for $[0,1]$ is not so easy, you may want to look <a href=\"https://math.stackexchange.com/questions/368108/how-to-prove-every-closed-interval-in-r-is-compact\">here</a> for proofs that do not go through the whole Heine-Borel theorem.</p>\n" }, { "answer_id": 2089608, "score": 1, "text": "<p>I think the most rigorous way of showing this more generally is using the Heine-Borel theorem, which says that a subset of $\\mathbb{R}^n$ is compact if and only if it is closed and bounded. Maybe this is less intuitive but if you understand the proof of the theorem it gives you a large insight. </p>\n" }, { "answer_id": 2089349, "score": 7, "text": "<p>Take the closed interval $[0, 1]$, and let $\\mathcal{V} = \\{ V_1 , V_2 , \\ldots \\}$ be an open cover. We want to say that $\\mathcal{V}$ has a finite subcover, so suppose to the contrary it doesn't. Then we can express $[0, 1]$ as $[0, 1/2] \\cup [1/2, 1]$. 
If both $[0, 1/2]$ and $[1/2, 1]$ admit finite subcovers from $\\mathcal{V}$, then so does $[0, 1]$, so we know (at least) one of these intervals doesn't. We'll then perform the same process on that interval (if it was $[0, 1/2]$, then we'll look at $[0, 1/4]$ and $[1/4, 1/2]$). This yields a sequence $(I_N)$ of nested closed intervals whose intersection is a singleton $\\{ x \\}$. We know there exists $n$ such that $x \\in V_n$, and since $V_n$ is open, we know there's some $\\epsilon &gt; 0$ such that the $\\epsilon$-ball about $x$ is contained in $V_n$. But if we choose $N$ large enough that $2^{- N} &lt; \\epsilon$, we know that the $N$th closed interval $I_N$ is contained in $V_n$. So in fact $I_N$ not only admitted a finite subcover from $\\mathcal{V}$, but a singleton cover, namely $\\{ V_n \\} \\subset \\mathcal {V}$, a contradiction.</p>\n\n<p>EDIT: A similar method can be used to show the cube $[0, 1]^n \\subset \\mathbb{R}^n$. Another modification yet will yield the most general form of the Heine-Borel theorem: That a subset of a complete metric space is compact if and only if it's closed and totally bounded.</p>\n" }, { "answer_id": 2089323, "score": 6, "text": "<p>showing it is not possible for $(0,1)$ is easy, just take the open cover $(0,1-\\frac{1}{n})$.</p>\n\n<p>Showing it is possible for $[0,1]$ is not so easy, you may want to look <a href=\"https://math.stackexchange.com/questions/368108/how-to-prove-every-closed-interval-in-r-is-compact\">here</a> for proofs that do not go through the whole Heine-Borel theorem.</p>\n" }, { "answer_id": 2089608, "score": 1, "text": "<p>I think the most rigorous way of showing this more generally is using the Heine-Borel theorem, which says that a subset of $\\mathbb{R}^n$ is compact if and only if it is closed and bounded. Maybe this is less intuitive but if you understand the proof of the theorem it gives you a large insight. 
</p>\n" }, { "answer_id": 2089349, "score": 7, "text": "<p>Take the closed interval $[0, 1]$, and let $\\mathcal{V} = \\{ V_1 , V_2 , \\ldots \\}$ be an open cover. We want to say that $\\mathcal{V}$ has a finite subcover, so suppose to the contrary it doesn't. Then we can express $[0, 1]$ as $[0, 1/2] \\cup [1/2, 1]$. If both $[0, 1/2]$ and $[1/2, 1]$ admit finite subcovers from $\\mathcal{V}$, then so does $[0, 1]$, so we know (at least) one of these intervals doesn't. We'll then perform the same process on that interval (if it was $[0, 1/2]$, then we'll look at $[0, 1/4]$ and $[1/4, 1/2]$). This yields a sequence $(I_N)$ of nested closed intervals whose intersection is a singleton $\\{ x \\}$. We know there exists $n$ such that $x \\in V_n$, and since $V_n$ is open, we know there's some $\\epsilon &gt; 0$ such that the $\\epsilon$-ball about $x$ is contained in $V_n$. But if we choose $N$ large enough that $2^{- N} &lt; \\epsilon$, we know that the $N$th closed interval $I_N$ is contained in $V_n$. So in fact $I_N$ not only admitted a finite subcover from $\\mathcal{V}$, but a singleton cover, namely $\\{ V_n \\} \\subset \\mathcal {V}$, a contradiction.</p>\n\n<p>EDIT: A similar method can be used to show the cube $[0, 1]^n \\subset \\mathbb{R}^n$. 
Another modification yet will yield the most general form of the Heine-Borel theorem: That a subset of a complete metric space is compact if and only if it's closed and totally bounded.</p>\n" }, { "answer_id": 2089323, "score": 6, "text": "<p>showing it is not possible for $(0,1)$ is easy, just take the open cover $(0,1-\\frac{1}{n})$.</p>\n\n<p>Showing it is possible for $[0,1]$ is not so easy, you may want to look <a href=\"https://math.stackexchange.com/questions/368108/how-to-prove-every-closed-interval-in-r-is-compact\">here</a> for proofs that do not go through the whole Heine-Borel theorem.</p>\n" }, { "answer_id": 2089608, "score": 1, "text": "<p>I think the most rigorous way of showing this more generally is using the Heine-Borel theorem, which says that a subset of $\\mathbb{R}^n$ is compact if and only if it is closed and bounded. Maybe this is less intuitive but if you understand the proof of the theorem it gives you a large insight. </p>\n" }, { "answer_id": 2089349, "score": 7, "text": "<p>Take the closed interval $[0, 1]$, and let $\\mathcal{V} = \\{ V_1 , V_2 , \\ldots \\}$ be an open cover. We want to say that $\\mathcal{V}$ has a finite subcover, so suppose to the contrary it doesn't. Then we can express $[0, 1]$ as $[0, 1/2] \\cup [1/2, 1]$. If both $[0, 1/2]$ and $[1/2, 1]$ admit finite subcovers from $\\mathcal{V}$, then so does $[0, 1]$, so we know (at least) one of these intervals doesn't. We'll then perform the same process on that interval (if it was $[0, 1/2]$, then we'll look at $[0, 1/4]$ and $[1/4, 1/2]$). This yields a sequence $(I_N)$ of nested closed intervals whose intersection is a singleton $\\{ x \\}$. We know there exists $n$ such that $x \\in V_n$, and since $V_n$ is open, we know there's some $\\epsilon &gt; 0$ such that the $\\epsilon$-ball about $x$ is contained in $V_n$. But if we choose $N$ large enough that $2^{- N} &lt; \\epsilon$, we know that the $N$th closed interval $I_N$ is contained in $V_n$. 
So in fact $I_N$ not only admitted a finite subcover from $\\mathcal{V}$, but a singleton cover, namely $\\{ V_n \\} \\subset \\mathcal {V}$, a contradiction.</p>\n\n<p>EDIT: A similar method can be used to show the cube $[0, 1]^n \\subset \\mathbb{R}^n$. Another modification yet will yield the most general form of the Heine-Borel theorem: That a subset of a complete metric space is compact if and only if it's closed and totally bounded.</p>\n" }, { "answer_id": 2089323, "score": 6, "text": "<p>showing it is not possible for $(0,1)$ is easy, just take the open cover $(0,1-\\frac{1}{n})$.</p>\n\n<p>Showing it is possible for $[0,1]$ is not so easy, you may want to look <a href=\"https://math.stackexchange.com/questions/368108/how-to-prove-every-closed-interval-in-r-is-compact\">here</a> for proofs that do not go through the whole Heine-Borel theorem.</p>\n" }, { "answer_id": 2089608, "score": 1, "text": "<p>I think the most rigorous way of showing this more generally is using the Heine-Borel theorem, which says that a subset of $\\mathbb{R}^n$ is compact if and only if it is closed and bounded. Maybe this is less intuitive but if you understand the proof of the theorem it gives you a large insight. </p>\n" }, { "answer_id": 2089349, "score": 7, "text": "<p>Take the closed interval $[0, 1]$, and let $\\mathcal{V} = \\{ V_1 , V_2 , \\ldots \\}$ be an open cover. We want to say that $\\mathcal{V}$ has a finite subcover, so suppose to the contrary it doesn't. Then we can express $[0, 1]$ as $[0, 1/2] \\cup [1/2, 1]$. If both $[0, 1/2]$ and $[1/2, 1]$ admit finite subcovers from $\\mathcal{V}$, then so does $[0, 1]$, so we know (at least) one of these intervals doesn't. We'll then perform the same process on that interval (if it was $[0, 1/2]$, then we'll look at $[0, 1/4]$ and $[1/4, 1/2]$). This yields a sequence $(I_N)$ of nested closed intervals whose intersection is a singleton $\\{ x \\}$. 
We know there exists $n$ such that $x \\in V_n$, and since $V_n$ is open, we know there's some $\\epsilon &gt; 0$ such that the $\\epsilon$-ball about $x$ is contained in $V_n$. But if we choose $N$ large enough that $2^{- N} &lt; \\epsilon$, we know that the $N$th closed interval $I_N$ is contained in $V_n$. So in fact $I_N$ not only admitted a finite subcover from $\\mathcal{V}$, but a singleton cover, namely $\\{ V_n \\} \\subset \\mathcal {V}$, a contradiction.</p>\n\n<p>EDIT: A similar method can be used to show the cube $[0, 1]^n \\subset \\mathbb{R}^n$. Another modification yet will yield the most general form of the Heine-Borel theorem: That a subset of a complete metric space is compact if and only if it's closed and totally bounded.</p>\n" }, { "answer_id": 2089323, "score": 6, "text": "<p>showing it is not possible for $(0,1)$ is easy, just take the open cover $(0,1-\\frac{1}{n})$.</p>\n\n<p>Showing it is possible for $[0,1]$ is not so easy, you may want to look <a href=\"https://math.stackexchange.com/questions/368108/how-to-prove-every-closed-interval-in-r-is-compact\">here</a> for proofs that do not go through the whole Heine-Borel theorem.</p>\n" }, { "answer_id": 2089608, "score": 1, "text": "<p>I think the most rigorous way of showing this more generally is using the Heine-Borel theorem, which says that a subset of $\\mathbb{R}^n$ is compact if and only if it is closed and bounded. Maybe this is less intuitive but if you understand the proof of the theorem it gives you a large insight. </p>\n" }, { "answer_id": 2089349, "score": 7, "text": "<p>Take the closed interval $[0, 1]$, and let $\\mathcal{V} = \\{ V_1 , V_2 , \\ldots \\}$ be an open cover. We want to say that $\\mathcal{V}$ has a finite subcover, so suppose to the contrary it doesn't. Then we can express $[0, 1]$ as $[0, 1/2] \\cup [1/2, 1]$. 
If both $[0, 1/2]$ and $[1/2, 1]$ admit finite subcovers from $\\mathcal{V}$, then so does $[0, 1]$, so we know (at least) one of these intervals doesn't. We'll then perform the same process on that interval (if it was $[0, 1/2]$, then we'll look at $[0, 1/4]$ and $[1/4, 1/2]$). This yields a sequence $(I_N)$ of nested closed intervals whose intersection is a singleton $\\{ x \\}$. We know there exists $n$ such that $x \\in V_n$, and since $V_n$ is open, we know there's some $\\epsilon &gt; 0$ such that the $\\epsilon$-ball about $x$ is contained in $V_n$. But if we choose $N$ large enough that $2^{- N} &lt; \\epsilon$, we know that the $N$th closed interval $I_N$ is contained in $V_n$. So in fact $I_N$ not only admitted a finite subcover from $\\mathcal{V}$, but a singleton cover, namely $\\{ V_n \\} \\subset \\mathcal {V}$, a contradiction.</p>\n\n<p>EDIT: A similar method can be used to show the cube $[0, 1]^n \\subset \\mathbb{R}^n$. Another modification yet will yield the most general form of the Heine-Borel theorem: That a subset of a complete metric space is compact if and only if it's closed and totally bounded.</p>\n" }, { "answer_id": 2089323, "score": 6, "text": "<p>showing it is not possible for $(0,1)$ is easy, just take the open cover $(0,1-\\frac{1}{n})$.</p>\n\n<p>Showing it is possible for $[0,1]$ is not so easy, you may want to look <a href=\"https://math.stackexchange.com/questions/368108/how-to-prove-every-closed-interval-in-r-is-compact\">here</a> for proofs that do not go through the whole Heine-Borel theorem.</p>\n" }, { "answer_id": 2089608, "score": 1, "text": "<p>I think the most rigorous way of showing this more generally is using the Heine-Borel theorem, which says that a subset of $\\mathbb{R}^n$ is compact if and only if it is closed and bounded. Maybe this is less intuitive but if you understand the proof of the theorem it gives you a large insight. 
</p>\n" }, { "answer_id": 2089349, "score": 7, "text": "<p>Take the closed interval $[0, 1]$, and let $\\mathcal{V} = \\{ V_1 , V_2 , \\ldots \\}$ be an open cover. We want to say that $\\mathcal{V}$ has a finite subcover, so suppose to the contrary it doesn't. Then we can express $[0, 1]$ as $[0, 1/2] \\cup [1/2, 1]$. If both $[0, 1/2]$ and $[1/2, 1]$ admit finite subcovers from $\\mathcal{V}$, then so does $[0, 1]$, so we know (at least) one of these intervals doesn't. We'll then perform the same process on that interval (if it was $[0, 1/2]$, then we'll look at $[0, 1/4]$ and $[1/4, 1/2]$). This yields a sequence $(I_N)$ of nested closed intervals whose intersection is a singleton $\\{ x \\}$. We know there exists $n$ such that $x \\in V_n$, and since $V_n$ is open, we know there's some $\\epsilon &gt; 0$ such that the $\\epsilon$-ball about $x$ is contained in $V_n$. But if we choose $N$ large enough that $2^{- N} &lt; \\epsilon$, we know that the $N$th closed interval $I_N$ is contained in $V_n$. So in fact $I_N$ not only admitted a finite subcover from $\\mathcal{V}$, but a singleton cover, namely $\\{ V_n \\} \\subset \\mathcal {V}$, a contradiction.</p>\n\n<p>EDIT: A similar method can be used to show the cube $[0, 1]^n \\subset \\mathbb{R}^n$. 
Another modification yet will yield the most general form of the Heine-Borel theorem: That a subset of a complete metric space is compact if and only if it's closed and totally bounded.</p>\n" }, { "answer_id": 2089323, "score": 6, "text": "<p>showing it is not possible for $(0,1)$ is easy, just take the open cover $(0,1-\\frac{1}{n})$.</p>\n\n<p>Showing it is possible for $[0,1]$ is not so easy, you may want to look <a href=\"https://math.stackexchange.com/questions/368108/how-to-prove-every-closed-interval-in-r-is-compact\">here</a> for proofs that do not go through the whole Heine-Borel theorem.</p>\n" }, { "answer_id": 2089608, "score": 1, "text": "<p>I think the most rigorous way of showing this more generally is using the Heine-Borel theorem, which says that a subset of $\\mathbb{R}^n$ is compact if and only if it is closed and bounded. Maybe this is less intuitive but if you understand the proof of the theorem it gives you a large insight. </p>\n" }, { "answer_id": 2089349, "score": 7, "text": "<p>Take the closed interval $[0, 1]$, and let $\\mathcal{V} = \\{ V_1 , V_2 , \\ldots \\}$ be an open cover. We want to say that $\\mathcal{V}$ has a finite subcover, so suppose to the contrary it doesn't. Then we can express $[0, 1]$ as $[0, 1/2] \\cup [1/2, 1]$. If both $[0, 1/2]$ and $[1/2, 1]$ admit finite subcovers from $\\mathcal{V}$, then so does $[0, 1]$, so we know (at least) one of these intervals doesn't. We'll then perform the same process on that interval (if it was $[0, 1/2]$, then we'll look at $[0, 1/4]$ and $[1/4, 1/2]$). This yields a sequence $(I_N)$ of nested closed intervals whose intersection is a singleton $\\{ x \\}$. We know there exists $n$ such that $x \\in V_n$, and since $V_n$ is open, we know there's some $\\epsilon &gt; 0$ such that the $\\epsilon$-ball about $x$ is contained in $V_n$. But if we choose $N$ large enough that $2^{- N} &lt; \\epsilon$, we know that the $N$th closed interval $I_N$ is contained in $V_n$. 
So in fact $I_N$ not only admitted a finite subcover from $\\mathcal{V}$, but a singleton cover, namely $\\{ V_n \\} \\subset \\mathcal {V}$, a contradiction.</p>\n\n<p>EDIT: A similar method can be used to show the cube $[0, 1]^n \\subset \\mathbb{R}^n$. Another modification yet will yield the most general form of the Heine-Borel theorem: That a subset of a complete metric space is compact if and only if it's closed and totally bounded.</p>\n" }, { "answer_id": 2089323, "score": 6, "text": "<p>showing it is not possible for $(0,1)$ is easy, just take the open cover $(0,1-\\frac{1}{n})$.</p>\n\n<p>Showing it is possible for $[0,1]$ is not so easy, you may want to look <a href=\"https://math.stackexchange.com/questions/368108/how-to-prove-every-closed-interval-in-r-is-compact\">here</a> for proofs that do not go through the whole Heine-Borel theorem.</p>\n" }, { "answer_id": 2089608, "score": 1, "text": "<p>I think the most rigorous way of showing this more generally is using the Heine-Borel theorem, which says that a subset of $\\mathbb{R}^n$ is compact if and only if it is closed and bounded. Maybe this is less intuitive but if you understand the proof of the theorem it gives you a large insight. </p>\n" }, { "answer_id": 2089349, "score": 7, "text": "<p>Take the closed interval $[0, 1]$, and let $\\mathcal{V} = \\{ V_1 , V_2 , \\ldots \\}$ be an open cover. We want to say that $\\mathcal{V}$ has a finite subcover, so suppose to the contrary it doesn't. Then we can express $[0, 1]$ as $[0, 1/2] \\cup [1/2, 1]$. If both $[0, 1/2]$ and $[1/2, 1]$ admit finite subcovers from $\\mathcal{V}$, then so does $[0, 1]$, so we know (at least) one of these intervals doesn't. We'll then perform the same process on that interval (if it was $[0, 1/2]$, then we'll look at $[0, 1/4]$ and $[1/4, 1/2]$). This yields a sequence $(I_N)$ of nested closed intervals whose intersection is a singleton $\\{ x \\}$. 
We know there exists $n$ such that $x \\in V_n$, and since $V_n$ is open, we know there's some $\\epsilon &gt; 0$ such that the $\\epsilon$-ball about $x$ is contained in $V_n$. But if we choose $N$ large enough that $2^{- N} &lt; \\epsilon$, we know that the $N$th closed interval $I_N$ is contained in $V_n$. So in fact $I_N$ not only admitted a finite subcover from $\\mathcal{V}$, but a singleton cover, namely $\\{ V_n \\} \\subset \\mathcal {V}$, a contradiction.</p>\n\n<p>EDIT: A similar method can be used to show the cube $[0, 1]^n \\subset \\mathbb{R}^n$. Another modification yet will yield the most general form of the Heine-Borel theorem: That a subset of a complete metric space is compact if and only if it's closed and totally bounded.</p>\n" }, { "answer_id": 2089323, "score": 6, "text": "<p>showing it is not possible for $(0,1)$ is easy, just take the open cover $(0,1-\\frac{1}{n})$.</p>\n\n<p>Showing it is possible for $[0,1]$ is not so easy, you may want to look <a href=\"https://math.stackexchange.com/questions/368108/how-to-prove-every-closed-interval-in-r-is-compact\">here</a> for proofs that do not go through the whole Heine-Borel theorem.</p>\n" }, { "answer_id": 2089608, "score": 1, "text": "<p>I think the most rigorous way of showing this more generally is using the Heine-Borel theorem, which says that a subset of $\\mathbb{R}^n$ is compact if and only if it is closed and bounded. Maybe this is less intuitive but if you understand the proof of the theorem it gives you a large insight. </p>\n" } ]
Question: Compactness of the open and closed unit intervals Answer 1: <p>Take the closed interval $[0, 1]$, and let $\mathcal{V} = \{ V_1 , V_2 , \ldots \}$ be an open cover. We want to say that $\mathcal{V}$ has a finite subcover, so suppose to the contrary it doesn't. Then we can express $[0, 1]$ as $[0, 1/2] \cup [1/2, 1]$. If both $[0, 1/2]$ and $[1/2, 1]$ admit finite subcovers from $\mathcal{V}$, then so does $[0, 1]$, so we know (at least) one of these intervals doesn't. We'll then perform the same process on that interval (if it was $[0, 1/2]$, then we'll look at $[0, 1/4]$ and $[1/4, 1/2]$). This yields a sequence $(I_N)$ of nested closed intervals whose intersection is a singleton $\{ x \}$. We know there exists $n$ such that $x \in V_n$, and since $V_n$ is open, we know there's some $\epsilon &gt; 0$ such that the $\epsilon$-ball about $x$ is contained in $V_n$. But if we choose $N$ large enough that $2^{- N} &lt; \epsilon$, we know that the $N$th closed interval $I_N$ is contained in $V_n$. So in fact $I_N$ not only admitted a finite subcover from $\mathcal{V}$, but a singleton cover, namely $\{ V_n \} \subset \mathcal {V}$, a contradiction.</p> <p>EDIT: A similar method can be used to show the cube $[0, 1]^n \subset \mathbb{R}^n$. 
Another modification yet will yield the most general form of the Heine-Borel theorem: That a subset of a complete metric space is compact if and only if it's closed and totally bounded.</p> Answer 2: <p>showing it is not possible for $(0,1)$ is easy, just take the open cover $(0,1-\frac{1}{n})$.</p> <p>Showing it is possible for $[0,1]$ is not so easy, you may want to look <a href="https://math.stackexchange.com/questions/368108/how-to-prove-every-closed-interval-in-r-is-compact">here</a> for proofs that do not go through the whole Heine-Borel theorem.</p> Answer 3: <p>I think the most rigorous way of showing this more generally is using the Heine-Borel theorem, which says that a subset of $\mathbb{R}^n$ is compact if and only if it is closed and bounded. Maybe this is less intuitive but if you understand the proof of the theorem it gives you a large insight. </p> Answer 4: <p>Take the closed interval $[0, 1]$, and let $\mathcal{V} = \{ V_1 , V_2 , \ldots \}$ be an open cover. We want to say that $\mathcal{V}$ has a finite subcover, so suppose to the contrary it doesn't. Then we can express $[0, 1]$ as $[0, 1/2] \cup [1/2, 1]$. 
If both $[0, 1/2]$ and $[1/2, 1]$ admit finite subcovers from $\mathcal{V}$, then so does $[0, 1]$, so we know (at least) one of these intervals doesn't. We'll then perform the same process on that interval (if it was $[0, 1/2]$, then we'll look at $[0, 1/4]$ and $[1/4, 1/2]$). This yields a sequence $(I_N)$ of nested closed intervals whose intersection is a singleton $\{ x \}$. We know there exists $n$ such that $x \in V_n$, and since $V_n$ is open, we know there's some $\epsilon &gt; 0$ such that the $\epsilon$-ball about $x$ is contained in $V_n$. But if we choose $N$ large enough that $2^{- N} &lt; \epsilon$, we know that the $N$th closed interval $I_N$ is contained in $V_n$. So in fact $I_N$ not only admitted a finite subcover from $\mathcal{V}$, but a singleton cover, namely $\{ V_n \} \subset \mathcal {V}$, a contradiction.</p> <p>EDIT: A similar method can be used to show the cube $[0, 1]^n \subset \mathbb{R}^n$. Another modification yet will yield the most general form of the Heine-Borel theorem: That a subset of a complete metric space is compact if and only if it's closed and totally bounded.</p> Answer 20: <p>showing it is not possible for $(0,1)$ is easy, just take the open cover $(0,1-\frac{1}{n})$.</p> <p>Showing it is possible for $[0,1]$ is not so easy, you may want to look <a href="https://math.stackexchange.com/questions/368108/how-to-prove-every-closed-interval-in-r-is-compact">here</a> for proofs that do not go through the whole Heine-Borel theorem.</p> Answer 21: <p>I think the most rigorous way of showing this more generally is using the Heine-Borel theorem, which says that a subset of $\mathbb{R}^n$ is compact if and only if it is closed and bounded. Maybe this is less intuitive but if you understand the proof of the theorem it gives you a large insight. </p> Answer 22: <p>Take the closed interval $[0, 1]$, and let $\mathcal{V} = \{ V_1 , V_2 , \ldots \}$ be an open cover. 
We want to say that $\mathcal{V}$ has a finite subcover, so suppose to the contrary it doesn't. Then we can express $[0, 1]$ as $[0, 1/2] \cup [1/2, 1]$. If both $[0, 1/2]$ and $[1/2, 1]$ admit finite subcovers from $\mathcal{V}$, then so does $[0, 1]$, so we know (at least) one of these intervals doesn't. We'll then perform the same process on that interval (if it was $[0, 1/2]$, then we'll look at $[0, 1/4]$ and $[1/4, 1/2]$). This yields a sequence $(I_N)$ of nested closed intervals whose intersection is a singleton $\{ x \}$. We know there exists $n$ such that $x \in V_n$, and since $V_n$ is open, we know there's some $\epsilon &gt; 0$ such that the $\epsilon$-ball about $x$ is contained in $V_n$. But if we choose $N$ large enough that $2^{- N} &lt; \epsilon$, we know that the $N$th closed interval $I_N$ is contained in $V_n$. So in fact $I_N$ not only admitted a finite subcover from $\mathcal{V}$, but a singleton cover, namely $\{ V_n \} \subset \mathcal {V}$, a contradiction.</p> <p>EDIT: A similar method can be used to show the cube $[0, 1]^n \subset \mathbb{R}^n$. Another modification yet will yield the most general form of the Heine-Borel theorem: That a subset of a complete metric space is compact if and only if it's closed and totally bounded.</p> Answer 23: <p>showing it is not possible for $(0,1)$ is easy, just take the open cover $(0,1-\frac{1}{n})$.</p> <p>Showing it is possible for $[0,1]$ is not so easy, you may want to look <a href="https://math.stackexchange.com/questions/368108/how-to-prove-every-closed-interval-in-r-is-compact">here</a> for proofs that do not go through the whole Heine-Borel theorem.</p> Answer 24: <p>I think the most rigorous way of showing this more generally is using the Heine-Borel theorem, which says that a subset of $\mathbb{R}^n$ is compact if and only if it is closed and bounded. Maybe this is less intuitive but if you understand the proof of the theorem it gives you a large insight. 
</p> Answer 25: <p>Take the closed interval $[0, 1]$, and let $\mathcal{V} = \{ V_1 , V_2 , \ldots \}$ be an open cover. We want to say that $\mathcal{V}$ has a finite subcover, so suppose to the contrary it doesn't. Then we can express $[0, 1]$ as $[0, 1/2] \cup [1/2, 1]$. If both $[0, 1/2]$ and $[1/2, 1]$ admit finite subcovers from $\mathcal{V}$, then so does $[0, 1]$, so we know (at least) one of these intervals doesn't. We'll then perform the same process on that interval (if it was $[0, 1/2]$, then we'll look at $[0, 1/4]$ and $[1/4, 1/2]$). This yields a sequence $(I_N)$ of nested closed intervals whose intersection is a singleton $\{ x \}$. We know there exists $n$ such that $x \in V_n$, and since $V_n$ is open, we know there's some $\epsilon &gt; 0$ such that the $\epsilon$-ball about $x$ is contained in $V_n$. But if we choose $N$ large enough that $2^{- N} &lt; \epsilon$, we know that the $N$th closed interval $I_N$ is contained in $V_n$. So in fact $I_N$ not only admitted a finite subcover from $\mathcal{V}$, but a singleton cover, namely $\{ V_n \} \subset \mathcal {V}$, a contradiction.</p> <p>EDIT: A similar method can be used to show the cube $[0, 1]^n \subset \mathbb{R}^n$. Another modification yet will yield the most general form of the Heine-Borel theorem: That a subset of a complete metric space is compact if and only if it's closed and totally bounded.</p> Answer 26: <p>showing it is not possible for $(0,1)$ is easy, just take the open cover $(0,1-\frac{1}{n})$.</p> <p>Showing it is possible for $[0,1]$ is not so easy, you may want to look <a href="https://math.stackexchange.com/questions/368108/how-to-prove-every-closed-interval-in-r-is-compact">here</a> for proofs that do not go through the whole Heine-Borel theorem.</p> Answer 27: <p>I think the most rigorous way of showing this more generally is using the Heine-Borel theorem, which says that a subset of $\mathbb{R}^n$ is compact if and only if it is closed and bounded. 
Maybe this is less intuitive but if you understand the proof of the theorem it gives you a large insight. </p> Answer 28: <p>Take the closed interval $[0, 1]$, and let $\mathcal{V} = \{ V_1 , V_2 , \ldots \}$ be an open cover. We want to say that $\mathcal{V}$ has a finite subcover, so suppose to the contrary it doesn't. Then we can express $[0, 1]$ as $[0, 1/2] \cup [1/2, 1]$. If both $[0, 1/2]$ and $[1/2, 1]$ admit finite subcovers from $\mathcal{V}$, then so does $[0, 1]$, so we know (at least) one of these intervals doesn't. We'll then perform the same process on that interval (if it was $[0, 1/2]$, then we'll look at $[0, 1/4]$ and $[1/4, 1/2]$). This yields a sequence $(I_N)$ of nested closed intervals whose intersection is a singleton $\{ x \}$. We know there exists $n$ such that $x \in V_n$, and since $V_n$ is open, we know there's some $\epsilon &gt; 0$ such that the $\epsilon$-ball about $x$ is contained in $V_n$. But if we choose $N$ large enough that $2^{- N} &lt; \epsilon$, we know that the $N$th closed interval $I_N$ is contained in $V_n$. So in fact $I_N$ not only admitted a finite subcover from $\mathcal{V}$, but a singleton cover, namely $\{ V_n \} \subset \mathcal {V}$, a contradiction.</p> <p>EDIT: A similar method can be used to show the cube $[0, 1]^n \subset \mathbb{R}^n$. 
Another modification yet will yield the most general form of the Heine-Borel theorem: That a subset of a complete metric space is compact if and only if it's closed and totally bounded.</p> Answer 29: <p>showing it is not possible for $(0,1)$ is easy, just take the open cover $(0,1-\frac{1}{n})$.</p> <p>Showing it is possible for $[0,1]$ is not so easy, you may want to look <a href="https://math.stackexchange.com/questions/368108/how-to-prove-every-closed-interval-in-r-is-compact">here</a> for proofs that do not go through the whole Heine-Borel theorem.</p> Answer 30: <p>I think the most rigorous way of showing this more generally is using the Heine-Borel theorem, which says that a subset of $\mathbb{R}^n$ is compact if and only if it is closed and bounded. Maybe this is less intuitive but if you understand the proof of the theorem it gives you a large insight. </p> Answer 31: <p>Take the closed interval $[0, 1]$, and let $\mathcal{V} = \{ V_1 , V_2 , \ldots \}$ be an open cover. We want to say that $\mathcal{V}$ has a finite subcover, so suppose to the contrary it doesn't. Then we can express $[0, 1]$ as $[0, 1/2] \cup [1/2, 1]$. If both $[0, 1/2]$ and $[1/2, 1]$ admit finite subcovers from $\mathcal{V}$, then so does $[0, 1]$, so we know (at least) one of these intervals doesn't. We'll then perform the same process on that interval (if it was $[0, 1/2]$, then we'll look at $[0, 1/4]$ and $[1/4, 1/2]$). This yields a sequence $(I_N)$ of nested closed intervals whose intersection is a singleton $\{ x \}$. We know there exists $n$ such that $x \in V_n$, and since $V_n$ is open, we know there's some $\epsilon &gt; 0$ such that the $\epsilon$-ball about $x$ is contained in $V_n$. But if we choose $N$ large enough that $2^{- N} &lt; \epsilon$, we know that the $N$th closed interval $I_N$ is contained in $V_n$. 
So in fact $I_N$ not only admitted a finite subcover from $\mathcal{V}$, but a singleton cover, namely $\{ V_n \} \subset \mathcal {V}$, a contradiction.</p> <p>EDIT: A similar method can be used to show the cube $[0, 1]^n \subset \mathbb{R}^n$. Another modification yet will yield the most general form of the Heine-Borel theorem: That a subset of a complete metric space is compact if and only if it's closed and totally bounded.</p> Answer 32: <p>showing it is not possible for $(0,1)$ is easy, just take the open cover $(0,1-\frac{1}{n})$.</p> <p>Showing it is possible for $[0,1]$ is not so easy, you may want to look <a href="https://math.stackexchange.com/questions/368108/how-to-prove-every-closed-interval-in-r-is-compact">here</a> for proofs that do not go through the whole Heine-Borel theorem.</p> Answer 33: <p>I think the most rigorous way of showing this more generally is using the Heine-Borel theorem, which says that a subset of $\mathbb{R}^n$ is compact if and only if it is closed and bounded. Maybe this is less intuitive but if you understand the proof of the theorem it gives you a large insight. </p> Answer 34: <p>Take the closed interval $[0, 1]$, and let $\mathcal{V} = \{ V_1 , V_2 , \ldots \}$ be an open cover. We want to say that $\mathcal{V}$ has a finite subcover, so suppose to the contrary it doesn't. Then we can express $[0, 1]$ as $[0, 1/2] \cup [1/2, 1]$. If both $[0, 1/2]$ and $[1/2, 1]$ admit finite subcovers from $\mathcal{V}$, then so does $[0, 1]$, so we know (at least) one of these intervals doesn't. We'll then perform the same process on that interval (if it was $[0, 1/2]$, then we'll look at $[0, 1/4]$ and $[1/4, 1/2]$). This yields a sequence $(I_N)$ of nested closed intervals whose intersection is a singleton $\{ x \}$. We know there exists $n$ such that $x \in V_n$, and since $V_n$ is open, we know there's some $\epsilon &gt; 0$ such that the $\epsilon$-ball about $x$ is contained in $V_n$. 
But if we choose $N$ large enough that $2^{- N} &lt; \epsilon$, we know that the $N$th closed interval $I_N$ is contained in $V_n$. So in fact $I_N$ not only admitted a finite subcover from $\mathcal{V}$, but a singleton cover, namely $\{ V_n \} \subset \mathcal {V}$, a contradiction.</p> <p>EDIT: A similar method can be used to show the cube $[0, 1]^n \subset \mathbb{R}^n$. Another modification yet will yield the most general form of the Heine-Borel theorem: That a subset of a complete metric space is compact if and only if it's closed and totally bounded.</p> Answer 35: <p>showing it is not possible for $(0,1)$ is easy, just take the open cover $(0,1-\frac{1}{n})$.</p> <p>Showing it is possible for $[0,1]$ is not so easy, you may want to look <a href="https://math.stackexchange.com/questions/368108/how-to-prove-every-closed-interval-in-r-is-compact">here</a> for proofs that do not go through the whole Heine-Borel theorem.</p> Answer 36: <p>I think the most rigorous way of showing this more generally is using the Heine-Borel theorem, which says that a subset of $\mathbb{R}^n$ is compact if and only if it is closed and bounded. Maybe this is less intuitive but if you understand the proof of the theorem it gives you a large insight. </p> Answer 37: <p>Take the closed interval $[0, 1]$, and let $\mathcal{V} = \{ V_1 , V_2 , \ldots \}$ be an open cover. We want to say that $\mathcal{V}$ has a finite subcover, so suppose to the contrary it doesn't. Then we can express $[0, 1]$ as $[0, 1/2] \cup [1/2, 1]$. If both $[0, 1/2]$ and $[1/2, 1]$ admit finite subcovers from $\mathcal{V}$, then so does $[0, 1]$, so we know (at least) one of these intervals doesn't. We'll then perform the same process on that interval (if it was $[0, 1/2]$, then we'll look at $[0, 1/4]$ and $[1/4, 1/2]$). This yields a sequence $(I_N)$ of nested closed intervals whose intersection is a singleton $\{ x \}$. 
We know there exists $n$ such that $x \in V_n$, and since $V_n$ is open, we know there's some $\epsilon &gt; 0$ such that the $\epsilon$-ball about $x$ is contained in $V_n$. But if we choose $N$ large enough that $2^{- N} &lt; \epsilon$, we know that the $N$th closed interval $I_N$ is contained in $V_n$. So in fact $I_N$ not only admitted a finite subcover from $\mathcal{V}$, but a singleton cover, namely $\{ V_n \} \subset \mathcal {V}$, a contradiction.</p> <p>EDIT: A similar method can be used to show the cube $[0, 1]^n \subset \mathbb{R}^n$. Another modification yet will yield the most general form of the Heine-Borel theorem: That a subset of a complete metric space is compact if and only if it's closed and totally bounded.</p> Answer 38: <p>showing it is not possible for $(0,1)$ is easy, just take the open cover $(0,1-\frac{1}{n})$.</p> <p>Showing it is possible for $[0,1]$ is not so easy, you may want to look <a href="https://math.stackexchange.com/questions/368108/how-to-prove-every-closed-interval-in-r-is-compact">here</a> for proofs that do not go through the whole Heine-Borel theorem.</p> Answer 39: <p>I think the most rigorous way of showing this more generally is using the Heine-Borel theorem, which says that a subset of $\mathbb{R}^n$ is compact if and only if it is closed and bounded. Maybe this is less intuitive but if you understand the proof of the theorem it gives you a large insight. </p> Answer 40: <p>Take the closed interval $[0, 1]$, and let $\mathcal{V} = \{ V_1 , V_2 , \ldots \}$ be an open cover. We want to say that $\mathcal{V}$ has a finite subcover, so suppose to the contrary it doesn't. Then we can express $[0, 1]$ as $[0, 1/2] \cup [1/2, 1]$. If both $[0, 1/2]$ and $[1/2, 1]$ admit finite subcovers from $\mathcal{V}$, then so does $[0, 1]$, so we know (at least) one of these intervals doesn't. We'll then perform the same process on that interval (if it was $[0, 1/2]$, then we'll look at $[0, 1/4]$ and $[1/4, 1/2]$). 
This yields a sequence $(I_N)$ of nested closed intervals whose intersection is a singleton $\{ x \}$. We know there exists $n$ such that $x \in V_n$, and since $V_n$ is open, we know there's some $\epsilon &gt; 0$ such that the $\epsilon$-ball about $x$ is contained in $V_n$. But if we choose $N$ large enough that $2^{- N} &lt; \epsilon$, we know that the $N$th closed interval $I_N$ is contained in $V_n$. So in fact $I_N$ not only admitted a finite subcover from $\mathcal{V}$, but a singleton cover, namely $\{ V_n \} \subset \mathcal {V}$, a contradiction.</p> <p>EDIT: A similar method can be used to show the cube $[0, 1]^n \subset \mathbb{R}^n$. Another modification yet will yield the most general form of the Heine-Borel theorem: That a subset of a complete metric space is compact if and only if it's closed and totally bounded.</p> Answer 41: <p>showing it is not possible for $(0,1)$ is easy, just take the open cover $(0,1-\frac{1}{n})$.</p> <p>Showing it is possible for $[0,1]$ is not so easy, you may want to look <a href="https://math.stackexchange.com/questions/368108/how-to-prove-every-closed-interval-in-r-is-compact">here</a> for proofs that do not go through the whole Heine-Borel theorem.</p> Answer 42: <p>I think the most rigorous way of showing this more generally is using the Heine-Borel theorem, which says that a subset of $\mathbb{R}^n$ is compact if and only if it is closed and bounded. Maybe this is less intuitive but if you understand the proof of the theorem it gives you a large insight. </p> Answer 43: <p>Take the closed interval $[0, 1]$, and let $\mathcal{V} = \{ V_1 , V_2 , \ldots \}$ be an open cover. We want to say that $\mathcal{V}$ has a finite subcover, so suppose to the contrary it doesn't. Then we can express $[0, 1]$ as $[0, 1/2] \cup [1/2, 1]$. If both $[0, 1/2]$ and $[1/2, 1]$ admit finite subcovers from $\mathcal{V}$, then so does $[0, 1]$, so we know (at least) one of these intervals doesn't. 
We'll then perform the same process on that interval (if it was $[0, 1/2]$, then we'll look at $[0, 1/4]$ and $[1/4, 1/2]$). This yields a sequence $(I_N)$ of nested closed intervals whose intersection is a singleton $\{ x \}$. We know there exists $n$ such that $x \in V_n$, and since $V_n$ is open, we know there's some $\epsilon &gt; 0$ such that the $\epsilon$-ball about $x$ is contained in $V_n$. But if we choose $N$ large enough that $2^{- N} &lt; \epsilon$, we know that the $N$th closed interval $I_N$ is contained in $V_n$. So in fact $I_N$ not only admitted a finite subcover from $\mathcal{V}$, but a singleton cover, namely $\{ V_n \} \subset \mathcal {V}$, a contradiction.</p> <p>EDIT: A similar method can be used to show the cube $[0, 1]^n \subset \mathbb{R}^n$. Another modification yet will yield the most general form of the Heine-Borel theorem: That a subset of a complete metric space is compact if and only if it's closed and totally bounded.</p> Answer 44: <p>showing it is not possible for $(0,1)$ is easy, just take the open cover $(0,1-\frac{1}{n})$.</p> <p>Showing it is possible for $[0,1]$ is not so easy, you may want to look <a href="https://math.stackexchange.com/questions/368108/how-to-prove-every-closed-interval-in-r-is-compact">here</a> for proofs that do not go through the whole Heine-Borel theorem.</p> Answer 45: <p>I think the most rigorous way of showing this more generally is using the Heine-Borel theorem, which says that a subset of $\mathbb{R}^n$ is compact if and only if it is closed and bounded. Maybe this is less intuitive but if you understand the proof of the theorem it gives you a large insight. </p>
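The no-finite-subcover claim for the cover $(0, 1-\frac{1}{n})$ of $(0,1)$ can be probed numerically. The sketch below is illustrative only (a finite check, not a proof); the helper `covered` is ours, not from any answer. Any finite subfamily $U_2, \ldots, U_N$ covers only $(0, 1-\frac{1}{N})$, so the point $x = 1 - \frac{1}{2N}$ is always left out:

```python
# Illustrative check: the open cover U_n = (0, 1 - 1/n), n >= 2, of (0, 1)
# has no finite subcover.  Every finite subfamily U_2, ..., U_N covers only
# (0, 1 - 1/N), so x = 1 - 1/(2N) lies in (0, 1) but outside every U_n, n <= N.

def covered(x, N):
    """True if x lies in some U_n = (0, 1 - 1/n) with 2 <= n <= N."""
    return any(0 < x < 1 - 1/n for n in range(2, N + 1))

for N in (2, 10, 100):
    x = 1 - 1/(2*N)           # a witness point for this finite subfamily
    assert 0 < x < 1          # x really is in the set being covered...
    assert not covered(x, N)  # ...but no member of the finite subfamily contains it
```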
Dealing cards, derangements, and probability: Is the Riddler Express solution incorrect?
2,089,548
7
https://math.stackexchange.com/questions/2089548/dealing-cards-derangements-and-probability-is-the-riddler-express-solution-in
<p><strong>Edit.</strong> The error was noted in a subsequent <a href="https://fivethirtyeight.com/features/how-long-will-it-take-to-blow-out-the-birthday-candles/" rel="nofollow noreferrer"><strong>Riddler entry</strong></a> with a reference to an earlier MSE post:</p> <p><a href="https://i.stack.imgur.com/5EEBI.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/5EEBI.png" alt="enter image description here" /></a></p> <hr /> <p>The most recent <a href="http://fivethirtyeight.com/features/dont-throw-out-that-calendar/" rel="nofollow noreferrer"><strong>Riddler Express</strong></a> says:</p> <p><a href="https://i.stack.imgur.com/afhnB.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/afhnB.png" alt="enter image description here" /></a></p> <p>But suppose you had this same question with only one suit (say, hearts): What is the probability that you get through the 13 hearts <em>without</em> what you say ever matching what you deal?</p> <p>It seems to me that the method above would suggest computing <span class="math-container">$(12/13)^{13} = 0.3532\ldots$</span></p> <p>This cannot be right, though; the modified question is essentially asking about <a href="https://en.wikipedia.org/wiki/Derangement" rel="nofollow noreferrer"><strong>derangements</strong></a> for a set of <span class="math-container">$13$</span> elements. Checking the derangement sequence at <a href="https://oeis.org/A000166" rel="nofollow noreferrer"><strong>OEIS</strong></a> yields as its thirteenth term <span class="math-container">$2290792932$</span>; among the <span class="math-container">$13!$</span> combinations, this would yield a probability of winning the modified game equal to <span class="math-container">$2290792932/13! 
= 0.3678\ldots$</span></p> <p>The numbers are not <em>very</em> different, but they are certainly different.</p> <p><strong>Back to the main question:</strong></p> <blockquote> <p>Is the solution for the &quot;Riddler Express&quot; above incorrect? [I strongly believe it is...]</p> <p><strong>If so:</strong> What is the correct solution, and how does one show this?</p> </blockquote> <p>I had thought it would require a modification of the derangement formula using the Inclusion-Exclusion Principle or some such thing (cf. <a href="https://mathoverflow.net/a/21019"><strong>this MO answer</strong></a> from Richard Stanley). In fact, I computed it as <span class="math-container">$0.01623\ldots$</span> Again, this is &quot;about 1.6 percent&quot; (as in the solution write-up) but the methods are very different and the actual numerical results are <em>not</em> the same.</p> <p>Am I overthinking? Is the Riddler underthinking? What gives?</p>
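Both numbers in the question are easy to reproduce. The sketch below (the `derangements` helper is ours, using the standard recurrence $D_n = (n-1)(D_{n-1} + D_{n-2})$) confirms the thirteenth derangement number from OEIS A000166 and shows the gap between the exact probability and the naive $(12/13)^{13}$ estimate:

```python
from math import factorial

def derangements(n):
    # Standard recurrence D(n) = (n - 1) * (D(n-1) + D(n-2)), D(0) = 1, D(1) = 0.
    d = [1, 0]
    for k in range(2, n + 1):
        d.append((k - 1) * (d[-1] + d[-2]))
    return d[n]

exact = derangements(13) / factorial(13)  # P(no match) for one 13-card suit
naive = (12 / 13) ** 13                   # the (incorrect) independence estimate

print(derangements(13))   # 2290792932, matching OEIS A000166
print(round(exact, 4))    # 0.3679 (close to 1/e)
print(round(naive, 4))    # 0.3533
```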
[ { "answer_id": 2089604, "score": 6, "text": "<p>The value of $P(win) = \\frac{\\int_0^{\\infty}L_4^{13}(x)e^{-x}dx}{52!/(4!)^{13}} \\approx 1.6233\\%$\n is given <a href=\"https://math.stackexchange.com/questions/1340355/solitaire-probability\">here</a> with citations.</p>\n" }, { "answer_id": 2089554, "score": 4, "text": "<p>Yes, the \"Riddler Express\" solution is not correct, but it is a reasonably good approximation.</p>\n" }, { "answer_id": 2089572, "score": 3, "text": "<p>You can see that it is not correct by looking at the simpler case of an eight-card deck with A2 in each suit. To get zero matches, you need precisely 2A2A2A2A; the chance of this is exactly</p>\n\n<p>$$\\frac12 \\times \\frac47 \\times \\frac12 \\times \\frac35 \\times \\frac12 \\times \\frac23 \\times \\frac12 =\\frac{1}{70}$$</p>\n\n<p>But according to the Riddler's flaky logic, it would be\n$$\\left(\\frac12\\right)^8=\\frac{1}{256}$$</p>\n" } ]
To get zero matches, you need precisely 2A2A2A2A; the chance of this is exactly</p>\n\n<p>$$\\frac12 \\times \\frac47 \\times \\frac12 \\times \\frac35 \\times \\frac12 \\times \\frac23 \\times \\frac12 =\\frac{1}{70}$$</p>\n\n<p>But according to the Riddler's flaky logic, it would be\n$$\\left(\\frac12\\right)^8=\\frac{1}{256}$$</p>\n" }, { "answer_id": 2089604, "score": 6, "text": "<p>The value of $P(win) = \\frac{\\int_0^{\\infty}L_4^{13}(x)e^{-x}dx}{52!/(4!)^{13}} \\approx 1.6233\\%$\n is given <a href=\"https://math.stackexchange.com/questions/1340355/solitaire-probability\">here</a> with citations.</p>\n" }, { "answer_id": 2089554, "score": 4, "text": "<p>Yes, the \"Riddler Express\" solution is not correct, but it is a reasonably good approximation.</p>\n" }, { "answer_id": 2089572, "score": 3, "text": "<p>You can see that it is not correct by looking at the simpler case of an eight-card deck with A2 in each suit. To get zero matches, you need precisely 2A2A2A2A; the chance of this is exactly</p>\n\n<p>$$\\frac12 \\times \\frac47 \\times \\frac12 \\times \\frac35 \\times \\frac12 \\times \\frac23 \\times \\frac12 =\\frac{1}{70}$$</p>\n\n<p>But according to the Riddler's flaky logic, it would be\n$$\\left(\\frac12\\right)^8=\\frac{1}{256}$$</p>\n" }, { "answer_id": 2089604, "score": 6, "text": "<p>The value of $P(win) = \\frac{\\int_0^{\\infty}L_4^{13}(x)e^{-x}dx}{52!/(4!)^{13}} \\approx 1.6233\\%$\n is given <a href=\"https://math.stackexchange.com/questions/1340355/solitaire-probability\">here</a> with citations.</p>\n" }, { "answer_id": 2089554, "score": 4, "text": "<p>Yes, the \"Riddler Express\" solution is not correct, but it is a reasonably good approximation.</p>\n" }, { "answer_id": 2089572, "score": 3, "text": "<p>You can see that it is not correct by looking at the simpler case of an eight-card deck with A2 in each suit. 
To get zero matches, you need precisely 2A2A2A2A; the chance of this is exactly</p>\n\n<p>$$\\frac12 \\times \\frac47 \\times \\frac12 \\times \\frac35 \\times \\frac12 \\times \\frac23 \\times \\frac12 =\\frac{1}{70}$$</p>\n\n<p>But according to the Riddler's flaky logic, it would be\n$$\\left(\\frac12\\right)^8=\\frac{1}{256}$$</p>\n" }, { "answer_id": 2089604, "score": 6, "text": "<p>The value of $P(win) = \\frac{\\int_0^{\\infty}L_4^{13}(x)e^{-x}dx}{52!/(4!)^{13}} \\approx 1.6233\\%$\n is given <a href=\"https://math.stackexchange.com/questions/1340355/solitaire-probability\">here</a> with citations.</p>\n" }, { "answer_id": 2089554, "score": 4, "text": "<p>Yes, the \"Riddler Express\" solution is not correct, but it is a reasonably good approximation.</p>\n" }, { "answer_id": 2089572, "score": 3, "text": "<p>You can see that it is not correct by looking at the simpler case of an eight-card deck with A2 in each suit. To get zero matches, you need precisely 2A2A2A2A; the chance of this is exactly</p>\n\n<p>$$\\frac12 \\times \\frac47 \\times \\frac12 \\times \\frac35 \\times \\frac12 \\times \\frac23 \\times \\frac12 =\\frac{1}{70}$$</p>\n\n<p>But according to the Riddler's flaky logic, it would be\n$$\\left(\\frac12\\right)^8=\\frac{1}{256}$$</p>\n" } ]
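(Not part of the original thread — a brute-force sanity check of the eight-card figure above, sketched in Python under the assumption that the dealer calls A, 2, A, 2, … while dealing; the deck encoding and variable names are mine.)

```python
from itertools import permutations
from fractions import Fraction

# Eight-card deck: ranks 'A' and '2', four suits each.
deck = [(rank, suit) for rank in "A2" for suit in "SHDC"]
calls = "A2A2A2A2"  # the dealer calls ace, two, ace, two, ... while dealing

wins = total = 0
for order in permutations(deck):
    total += 1
    # "Zero matches" means no dealt card ever has the called rank.
    if all(card[0] != call for card, call in zip(order, calls)):
        wins += 1

p = Fraction(wins, total)
print(p)  # 1/70 — confirming the exact computation, not the naive 1/256
```

Only the rank pattern 2A2A2A2A avoids every call, and its suits can be arranged in $4!\cdot 4! = 576$ ways out of $8! = 40320$, giving $576/40320 = 1/70$.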
Question: Dealing cards, derangements, and probability: Is the Riddler Express solution incorrect? Answer 1: <p>The value of $P(win) = \frac{\int_0^{\infty}L_4^{13}(x)e^{-x}dx}{52!/(4!)^{13}} \approx 1.6233\%$ is given <a href="https://math.stackexchange.com/questions/1340355/solitaire-probability">here</a> with citations.</p> Answer 2: <p>Yes, the "Riddler Express" solution is not correct, but it is a reasonably good approximation.</p> Answer 3: <p>You can see that it is not correct by looking at the simpler case of an eight-card deck with A2 in each suit. To get zero matches, you need precisely 2A2A2A2A; the chance of this is exactly</p> <p>$$\frac12 \times \frac47 \times \frac12 \times \frac35 \times \frac12 \times \frac23 \times \frac12 =\frac{1}{70}$$</p> <p>But according to the Riddler's flaky logic, it would be $$\left(\frac12\right)^8=\frac{1}{256}$$</p>
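(Also not part of the original thread — a Monte Carlo estimate of the quoted $\approx 1.6233\%$ win probability for the full 52-card game, a sketch that assumes the rules implied above: the dealer calls ace through king repeatedly while dealing, and the player wins only if no called rank ever matches the dealt card.)

```python
import random

random.seed(0)  # fixed seed for reproducibility
deck = [rank for rank in range(13) for _ in range(4)]  # four copies of each rank

trials = 100_000
wins = 0
for _ in range(trials):
    random.shuffle(deck)
    # Call sequence: rank i % 13 is called when the i-th card is dealt.
    if all(card != i % 13 for i, card in enumerate(deck)):
        wins += 1

p_hat = wins / trials
print(p_hat)  # should land near the exact value 0.016233 quoted in Answer 1
```

With 100,000 trials the standard error is about $0.0004$, so the estimate distinguishes the exact $\approx 0.0162$ from the naive independent-draws approximation.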
How do you convince someone that parallel lines can touch/meet?
2,089,649
7
https://math.stackexchange.com/questions/2089649/how-do-you-convince-someone-that-parallel-lines-can-touch-meet
<p>When talking to someone who knows basic mathematics but not really in-depth, how would you explain that parallel lines can touch?</p> <p>I am referring to Non-Euclidian/ Projective Geometry.</p> <p>Edit: Why do parallel lines in the Euclidean plane (correspond with lines that) cross in the projective plane?</p>
[ { "answer_id": 2089676, "score": 24, "text": "<p>Lines are parallel if they lie in the same plane and they don't intersect. In other geometries, there may be no parallel lines, lines may not have a common point but they may have a common limit point at infinity, or they may just not intersect.</p>\n\n<p>You might be thinking about Projective Geometry where a \"point at infinity\" is added to every family of parallel lines and the set of all points at infinity is called the \"line at infinity\".</p>\n\n<p>Addendum</p>\n\n<hr>\n\n<p>Typically, in formal geometry, points, lines, and planes are not defined. But postulates define their \"baseline\" behavior. The postulate that every geometry seems to agree on is the one that states </p>\n\n<p>L1: Given two distinct points in a plane, there is exactly one line in that plane that contains them.</p>\n\n<p>The \"dual\" of that postulate is</p>\n\n<p>DL1: Given two distinct lines in a plane, there is exactly one point in that plane that belongs to both lines.</p>\n\n<p>Since Euclidean geometry contains parallel lines, DL1 is false. But Projective geometry accepts DL1 as a postulate. The big question is, \"Does there exist a geometry that satisfies the postulates of Projective geometry?\" Yes there does.</p>\n\n<p>The creation of such a geometry is really quite clever. You start with a Euclidean plane and you add points to it as follows. Pick any line in the plane. To that line and all lines parallel to it, you add one extra point, a point at infinity. This is a set thing. We are treating a Euclidean line, $l$, as a set of points and we are adding a non-Euclidean point $p$ to that set, $l' = l \cup \{p\}$.</p>\n\n<p>Adding this point to those lines means that those lines are no longer parallel.\nDefine the set of all points at infinity to be the line at infinity. The Projective plane is the Euclidean plane with all points at infinity and the line at infinity added to it. 
This particular Projective plane can be proved to satisfy DL1.</p>\n\n<p>In the other direction, pick any line, $l$, in the projective plane, $\mathbb P^2$, and remove it. What you end up with is the Euclidean plane, $\mathbb E^2 \cong \mathbb P^2 - \{l\}$. Some lines will still intersect. Those that intersected at a point on the line that was removed will now be parallel.</p>\n" }, { "answer_id": 2089655, "score": 13, "text": "<p>I would say if we stand next to each other and both walk due north, we eventually bump into each other near the north pole.</p>\n\n<p>Note, as several have commented, your premise is slightly off. What you can convince your friend of is that any two 'straight lines', i.e. geodesics, cross on a sphere (not parallel lines), whereas on a plane, for any line there is a special family of lines that never touch it.</p>\n" }, { "answer_id": 2089925, "score": 10, "text": "<p>As I said in comments, you can't convince someone with something which is not true.</p>\n<p>IMO if someone knows basic geometry then he must be aware of the fact that:</p>\n<h1>Parallel lines do not meet at a point.</h1>\n<p><a href=\"https://en.wikipedia.org/wiki/Parallel_(geometry)\" rel=\"nofollow noreferrer\">This</a> section of Wikipedia is worth a lot here:</p>\n<blockquote>\n<p>In geometry, parallel lines are lines in a plane which do not meet; that is, two lines in a plane that do not intersect or touch each other at any point are said to be parallel. By extension, a line and a plane, or two planes, in three-dimensional Euclidean space that do not share a point are said to be parallel.</p>\n</blockquote>\n<p>Though if you are convincing/arguing with someone naive in the field of geometry that parallel lines do not meet, ask him why people call him by his/her name and not by <a href=\"https://math.stackexchange.com/users/401699/justin-bieber\">Justin Bieber</a>/<a href=\"https://math.stackexchange.com/users/143816/selena-gomez\">Selena Gomez</a>? 
The expected answer would be <em>Because it is my name</em>, and that's it. You got him/her. Actually, parallel lines cannot meet at a point or intersect because they are defined that way: if two lines intersect, then they are no longer parallel lines.</p>\n" }, { "answer_id": 2089827, "score": 7, "text": "<p>When I was 16 or so I was bored trying to solve my math homework, so I played with a magnifying glass, and I noticed something interesting:</p>\n\n<p>If you look at the grid paper with a magnifying glass the lines remain parallel (this is evident when they are more or less on the \"top of the glass\"), but all the lines meet at the edge of the glass.</p>\n\n<p>(I went to my older brother, who was an engineering freshman at the time, and I told him that parallel lines <em>can</em> meet; but he replied that they cannot because it's an axiom. Some years later I learned that there is a thing called non-Euclidean geometry.)</p>\n\n<p>&nbsp;</p>\n\n<p>Note, by the way, this might require you to think about the glass as infinite.</p>\n" }, { "answer_id": 2089683, "score": 6, "text": "<p>In his comment @JackyChong has identified the preliminary problem. The definition of \"parallel\" is clear: lines that don't meet, so there are no parallel lines that meet. The real question is the definition of \"line\".</p>\n\n<p>A geodesic is the most natural notion of \"line\" for geometry on the sphere. Then in this geometry there are no parallel lines.</p>\n\n<p>For projective geometry one definition is to add a \"point at infinity\" on each line, and then make the added points into a \"line at infinity\". With these extra points and lines there are no parallel lines. Two lines that are parallel in the Euclidean plane share their point at infinity. The railroad track analogy helps here.</p>\n\n<p>In hyperbolic geometry there are multiple lines parallel to a given line through a point not on that line. 
If you get that far with the \"someone who knows no mathematics\" you can show him or her the <a href=\"https://en.wikipedia.org/wiki/Poincar%C3%A9_disk_model\" rel=\"nofollow noreferrer\">Poincare model</a>.</p>\n" }, { "answer_id": 2089656, "score": 5, "text": "<p>I would use two meridians on Earth that touch at the poles.</p>\n" }, { "answer_id": 2089933, "score": 5, "text": "<p>This is a matter of point of view :)</p>\n\n<p><a href=\"https://i.stack.imgur.com/lGePi.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/lGePi.png\" alt=\"enter image description here\"></a></p>\n" }, { "answer_id": 2089660, "score": 4, "text": "<p>You can actually speak of trains. Railways are pretty suggestive.</p>\n" }, { "answer_id": 2089976, "score": 4, "text": "<p>You cannot convince any mathematician, let alone your friend, as it is false. They can however grapple with the idea that for purely theoretical calculations we assume that 2 parallel lines can meet at a point at infinity. Should you wish to travel to infinity to prove this point then please send a postcard when you get there.</p>\n" }, { "answer_id": 2103114, "score": 3, "text": "<p>Someone who knows basic mathematics can understand it easily from the definition of parallel lines<br> <strong>Parallel lines:</strong> Lines whose distance apart is constant are called <a href=\"https://www.google.co.in/#q=parallel+lines+definition\" rel=\"nofollow noreferrer\"><strong>parallel lines</strong>[see the topmost definition]</a></p>\n\n<p>Now, by this definition coinciding lines are also parallel, since the distance between them is constant, namely $0$. Hence, since coinciding lines touch each other, parallel lines can also touch. </p>\n\n<p><a href=\"https://www.youtube.com/watch?v=BKorP55Aqvg\" rel=\"nofollow noreferrer\"><strong>See the video here</strong> also</a></p>\n" }, { "answer_id": 2089712, "score": 2, "text": "<p>I would draw two parallel lines on a piece of paper. 
Then I would bring all the edges together into one point and I would show that the parallel lines would touch at that point.</p>\n" }, { "answer_id": 2089986, "score": 2, "text": "<p>A sentence said to me by my mother when I was a kid and that I didn't understand at that time was:</p>\n\n<blockquote>\n <p>Parallel lines are lines that intersect at infinity.</p>\n</blockquote>\n\n<p>This statement and its explanation through art (as suggested by Pedro Tamaroff in the comments) and by showing how many statements and proofs are simplified (if the person knows more math) is a good introduction to the idea that thinking of parallel lines as lines meeting at infinity makes sense and can be useful.</p>\n\n<h2>In art</h2>\n\n<p>This, as you can also see in Yves Daoust's answer, can be seen whenever you look at a straight road going to the horizon. For example, see <a href=\"https://commons.wikimedia.org/wiki/File:Straight_Road,_B17_-_geograph.org.uk_-_232427.jpg\" rel=\"nofollow noreferrer\">Wikimedia Commons</a> for the source,\n<a href=\"https://i.stack.imgur.com/cnNkW.jpg\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/cnNkW.jpg\" alt=\"Source: Wikimedia Commons\"></a><br>\nThe above image shows why this conception originated in art, as it was really useful for representations of how we see things. A striking application of this can be seen in the <a href=\"https://en.wikipedia.org/wiki/Santa_Maria_presso_San_Satiro\" rel=\"nofollow noreferrer\">Santa Maria presso San Satiro</a>, where projective geometry was used to emulate an absent space in a church. However, there are many other examples.</p>\n\n<h2>In geometry</h2>\n\n<p>In usual geometry (i.e. affine and Euclidean geometry), the above definition or statement does not make sense as parallel lines don't intersect. 
This can be the definition in the plane, but it is generally (in higher dimensions) a result derived from the definition of parallel lines as lines with the same direction.</p>\n\n<p>However, the reason why we can still make sense of the above statement about intersection at infinity is because affine geometry can be put inside projective geometry. When doing this, the points outside the affine space are called \"points at infinity\", parallel lines intersect at them and become the same as intersecting lines, simplifying many statements and proofs enormously by permitting one not to distinguish cases. An example can be <a href=\"https://en.wikipedia.org/wiki/Pappus%27s_hexagon_theorem\" rel=\"nofollow noreferrer\">Pappus's hexagon theorem</a>.</p>\n\n<h2>In conclusion</h2>\n\n<p>In conclusion, don't try to convince or show that parallel lines touch. Just try to explain the usefulness of thinking of parallel lines as lines that intersect at infinity. In the Renaissance, you have many examples of why this is a useful statement from the point of view of representing reality and perspectives well; in mathematics, there are many examples of how this is really useful for simplifying statements and proofs in geometry.</p>\n" }, { "answer_id": 2089964, "score": 1, "text": "<p>In Elliptic geometry the supposition is false.</p>\n\n<p>In Euclidean geometry parallel lines \"meet\" and touch at infinity as their slopes are the same.</p>\n\n<p>In flat Hyperbolic geometry parallel lines can also touch, but only at infinity.</p>\n\n<p>In the flat Poincare disk model, circle-segment geodesic parallels meet tangentially only at infinitely distant points on the boundary of the \"horizon\" or boundary circle. 
The lines can be seen to touch and move on the boundary seen in the Wiki link.</p>\n\n<p><a href=\"http://mathworld.wolfram.com/PoincareHyperbolicDisk.html\" rel=\"nofollow noreferrer\"> Poincare Disk_MathWorld </a></p>\n\n<p>So also in the half-plane model, semi-circular geodesics meet at a point at $\infty$ on the x-axis. An animation can also be found elsewhere: semi-circles touch on the x-axis while changing semi-circle size during movement.</p>\n\n<p><a href=\"http://math.etsu.edu/multicalc/prealpha/Chap3/Chap3-8/part4.htm\" rel=\"nofollow noreferrer\">PoincareHalfPlane</a></p>\n\n<p>Both examples are touching parallel lines, touching tangentially/asymptotically at the point at infinity.</p>\n\n<p><a href=\"http://math.etsu.edu/multicalc/prealpha/Chap3/Chap3-8/part4.htm\" rel=\"nofollow noreferrer\">http://math.etsu.edu/multicalc/prealpha/Chap3/Chap3-8/part4.htm</a></p>\n" }, { "answer_id": 2103316, "score": 1, "text": "<p>One way to explain this is to think of the Euclidean plane as a picture of (part of) an ambient 3-dimensional world. This corresponds to how our vision actually works: light from the three-dimensional world is projected through the lens of an eye onto the surface at the back of the eye.</p>\n\n<p>In more detail: Place a plane anywhere in three-dimensional space (as long as it doesn't pass through the origin). Call this the \"picture plane\". Now any <em>line through the origin</em> in $\mathbb{R}^3$ pierces the picture plane in exactly one point (as long as the line is not parallel to the picture plane). Any <em>plane through the origin</em> pierces the picture plane in exactly one line (again, as long as the plane is not parallel to the picture plane). 
See below (in which the picture plane is shown in blue).</p>\n\n<p><a href=\"https://i.stack.imgur.com/Dod2D.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/Dod2D.png\" alt=\"enter image description here\"></a>\n<a href=\"https://i.stack.imgur.com/KRnSK.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/KRnSK.png\" alt=\"enter image description here\"></a></p>\n\n<p>Now we have a dictionary:\n$$\\textrm{point in picture plane} \\longleftrightarrow \\textrm{line through origin in 3-space}$$\n$$\\textrm{line in picture plane} \\longleftrightarrow \\textrm{plane through origin in 3-space}$$</p>\n\n<p>This dictionary is not quite 1-to-1, because there are lines (and one plane) through the origin in $\\mathbb{R}^3$ that are parallel to the picture plane, and therefore do not correspond to any points or to a line in the picture plane. These points and this line can be considered \"at infinity\", but really all this means is that they do not have images in the picture plane.</p>\n\n<p>You can experience this visually by holding your finger at arm's length in front of you and a few inches above your eye. As you draw your arm closer (while maintaining its height) the location of your fingertip seems to get \"higher\" or \"farther away\" in your field of view, as your eyes have to strain more and more to see it. Eventually when the fingertip is directly above the eye, the finger \"disappears\". It's not really gone, of course, but the \"line of sight\" from the fingertip to the lens of your eye is parallel to the retina, so that the fingertip cannot be seen. Perceptually, the fingertip has receded \"infinitely far away\" in the picture plane -- but it is actually just a couple of inches away in 3-space.</p>\n\n<p><strong>However</strong>, we all know what to do in that circumstance: turn your head! This corresponds to choosing a new picture plane with a different orientation. 
Now the point that was \"at infinity\" snaps into view and is revealed as an ordinary point in the new picture plane (and at the same time points that <em>were previously</em> in view have now disappeared).</p>\n\n<p>Note that a statement like</p>\n\n<blockquote>\n <p>given any two distinct points in the picture plane, there exists a unique line in the picture plane through the given points</p>\n</blockquote>\n\n<p>which describes a true fact about the geometry of the picture plane corresponds, via this dictionary, to</p>\n\n<blockquote>\n <p>given any two distinct lines through the origin in 3-space, there exists a unique plane through the origin containing both lines</p>\n</blockquote>\n\n<p>which is a true statement about 3-space.</p>\n\n<p>Now that we have this dictionary set up, let's think about what parallel lines in the picture plane are. As the image below shows, parallel lines in the picture plane correspond to two planes in $\\mathbb{R}^3$. (More precisely, lines $f$ and $g$ are \"parallel\" in the blue picture plane, and correspond to the orange and yellow planes through the origin.)</p>\n\n<p><a href=\"https://i.stack.imgur.com/J3BxN.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/J3BxN.png\" alt=\"enter image description here\"></a></p>\n\n<p>Those planes are not parallel; they intersect, but the intersection is a line <em>that does not pierce the picture plane</em>. (In this illustration, the line of intersection happens to be the $y$-axis.) That line in $\\mathbb{R}^3$ corresponds to a point \"at infinity\".</p>\n\n<p>Of course if you choose a different picture plane, then the lines will no longer appear parallel, and their intersection will be plainly visible as a point in the new picture plane. The final image below shows this. 
In that image, the orange and yellow planes are the same ones as before, but the blue picture plane has been moved and reoriented; the images of the two planes are now <em>non-parallel</em> lines.</p>\n\n<p><a href=\"https://i.stack.imgur.com/iRBlv.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/iRBlv.png\" alt=\"enter image description here\"></a></p>\n\n<p>In summary:</p>\n\n<ul>\n<li>If the 2-dimensional plane is understood as an image of an ambient 3-dimensional space, then a \"point\" is just the image of a <em>line through the origin</em> and a \"line\" is just the image of a <em>plane through the origin</em></li>\n<li>Some \"points\" (and the \"line\" containing that point) will have no image in the picture plane, which we describe by saying that they are \"at infinity\"</li>\n<li>However being \"at infinity\" is an artifact of which picture plane we use; changing picture planes changes which points are at infinity</li>\n<li>Furthermore whether two \"lines\" are parallel is an artifact of the picture plane we use to view them; lines that seem parallel from one perspective are non-parallel from a different perspective</li>\n<li>The connection between the last two bullets is that \"lines\" that appear parallel in one picture plane <em>do</em> intersect, but the point of intersection is a point \"at infinity\" <em>relative to the picture plane</em>; change perspectives and the point is no longer at infinity, and the lines are no longer parallel.</li>\n<li>All of the above corresponds more-or-less exactly with how the human eye works, which is why perspective art can do such a good job at fooling the eye and creating the illusion of depth on a flat surface.</li>\n</ul>\n" }, { "answer_id": 2089676, "score": 24, "text": "<p>Lines are parallel if they lie in the same plane and they don't intersect. 
In other geometries, there may be no parallel lines, lines may not have a common point but they may have a common limit point at infinity, or they may just not intersect.</p>\n\n<p>You might be thinking about Projective Geometry where a \"point at infinity\" is added to every family of parallel lines and the set of all points at infinity is called the \"line at infinity\".</p>\n\n<p>Addendum</p>\n\n<hr>\n\n<p>Typically, in formal geometry, points, lines, and planes are not defined. But postulates define their \"baseline\" behavior. The postulate that every geometry seems to agree on is the one that states </p>\n\n<p>L1: Given two distinct points in a plane, there is exactly one line in that plane that contains them.</p>\n\n<p>The \"dual\" of that postulate is</p>\n\n<p>DL1: Given two distinct lines in a plane, there is exactly one point in that plane that belongs to both lines.</p>\n\n<p>Since Euclidean geometry contains parallel lines, DL1 is false there. But Projective geometry accepts DL1 as a postulate. The big question is, \"Does there exist a geometry that satisfies the postulates of Projective geometry?\" Yes there does.</p>\n\n<p>The creation of such a geometry is really quite clever. You start with a Euclidean plane and you add points to it as follows. Pick any line in the plane. To that line and all lines parallel to it, you add one extra point, a point at infinity. This is a set thing. We are treating a Euclidean line, $l$, as a set of points and we are adding a non-Euclidean point $p$ to that set, $l' = l \\cup \\{p\\}$.</p>\n\n<p>Adding this point to those lines means that those lines are no longer parallel.\nDefine the set of all points at infinity to be the line at infinity. The Projective plane is the Euclidean plane with all points at infinity and the line at infinity added to it. This particular Projective plane can be proved to satisfy DL1.</p>\n\n<p>In the other direction, pick any line, $l$, in the projective plane, $\\mathbb P^2$, and remove it. 
What you end up with is the Euclidean plane, $\\mathbb E^2 \\cong \\mathbb P^2 - \\{l\\}$. Some lines will still intersect. Those that intersected at a point on the line that was removed will now be parallel.</p>\n" }, { "answer_id": 2089655, "score": 13, "text": "<p>I would say if we stand next to each other and both walk due north, we eventually bump into each other near the north pole</p>\n\n<p>Note, as several have commented, your premise is slightly off. What you can convince your friend of is that any two 'straight lines' i.e. geodesics cross on a sphere (not parallel lines), whereas on a plane, for any line there is a special family of lines that never touch it.</p>\n" }, { "answer_id": 2089925, "score": 10, "text": "<p>As I said in comments, you can't convince someone with something which is not true.</p>\n<p>IMO if someone knows basic geometry then he must be aware of the fact that :</p>\n<h1>Parallel lines do not meet at a point.</h1>\n<p><a href=\"https://en.wikipedia.org/wiki/Parallel_(geometry)\" rel=\"nofollow noreferrer\">This</a> section of Wikipedia is worth a lot here:</p>\n<blockquote>\n<p>In geometry, parallel lines are lines in a plane which do not meet; that is, two lines in a plane that do not intersect or touch each other at any point are said to be parallel. By extension, a line and a plane, or two planes, in three-dimensional Euclidean space that do not share a point are said to be parallel.</p>\n</blockquote>\n<p>Though if you are convincing/arguing with someone naive in the field of geometry that parallel lines do not meet, ask him why people call him by his/her name and not by <a href=\"https://math.stackexchange.com/users/401699/justin-bieber\">Justin Bieber</a>/<a href=\"https://math.stackexchange.com/users/143816/selena-gomez\">Selena Gomez</a>? The expected answer would be that <em>Because it is my name</em> and that's it. You got him/her. 
Actually parallel lines cannot meet at a point or intersect because they are defined that way; if two lines intersect, then they are not parallel lines.</p>\n" }, { "answer_id": 2089827, "score": 7, "text": "<p>When I was 16 or so I was bored trying to solve my math homework, so I played with a magnifying glass, and I noticed something interesting:</p>\n\n<p>If you look at the grid paper with a magnifying glass the lines remain parallel (this is evident when they are more or less on the \"top of the glass\"), but all the lines meet at the edge of the glass.</p>\n\n<p>(I went to my older brother, who was an engineering freshman at the time, and I told him that parallel lines <em>can</em> meet; but he replied that they cannot because it's an axiom. Some years later I learned that there is a thing called non-Euclidean geometry.)</p>\n\n<p>&nbsp;</p>\n\n<p>Note, by the way, this might require you to think about the glass as infinite.</p>\n" }, { "answer_id": 2089683, "score": 6, "text": "<p>In his comment @JackyChong has identified the preliminary problem. The definition of \"parallel\" is clear: lines that don't meet, so there are no parallel lines that meet. The real question is the definition of \"line\".</p>\n\n<p>The geodesic is the most natural notion of \"line\" for geometry on the sphere. Then in this geometry there are no parallel lines.</p>\n\n<p>For projective geometry one definition is to add a \"point at infinity\" on each line, and then make the added points into a \"line at infinity\". With these extra points and lines there are no parallel lines. Two that are parallel in the Euclidean plane share their point at infinity. The railroad track analogy helps here.</p>\n\n<p>In hyperbolic geometry there are multiple lines parallel to a given line through a point not on that line. 
If you get that far with the \"someone who knows no mathematics\" you can show him or her the <a href=\"https://en.wikipedia.org/wiki/Poincar%C3%A9_disk_model\" rel=\"nofollow noreferrer\">Poincare model</a>.</p>\n" }, { "answer_id": 2089656, "score": 5, "text": "<p>I would use two meridians on Earth that touch at the poles.</p>\n" }, { "answer_id": 2089933, "score": 5, "text": "<p>This is a matter of point of view :)</p>\n\n<p><a href=\"https://i.stack.imgur.com/lGePi.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/lGePi.png\" alt=\"enter image description here\"></a></p>\n" }, { "answer_id": 2089660, "score": 4, "text": "<p>You can actually speak of trains. Railways are pretty suggestive.</p>\n" }, { "answer_id": 2089976, "score": 4, "text": "<p>You cannot convince any mathematician, let alone your friend, as it is false. They can however grapple with the idea that for purely theoretical calculations we assume that 2 parallel lines can meet at a point at infinity. Should you wish to travel to infinity to prove this point then please send a postcard when you get there.</p>\n" }, { "answer_id": 2103114, "score": 3, "text": "<p>Someone who knows basic mathematics can understand it easily since according to the definition of parallel lines<br> <strong>Parallel lines:</strong> The lines whose distance is constant are called <a href=\"https://www.google.co.in/#q=parallel+lines+definition\" rel=\"nofollow noreferrer\"><strong>parallel lines</strong>[see the topmost definition]</a></p>\n\n<p>Now, as we know the definition, the coinciding lines are also parallel since the distance between them is constant, namely $0$. Hence, since coinciding lines touch each other, parallel lines can also touch. </p>\n\n<p><a href=\"https://www.youtube.com/watch?v=BKorP55Aqvg\" rel=\"nofollow noreferrer\"><strong>See the video here</strong> also</a></p>\n" }, { "answer_id": 2089712, "score": 2, "text": "<p>I would draw two parallel lines on a piece of paper. 
Then I would bring all the edges together into one point and I would show that the parallel lines would touch at that point.</p>\n" }, { "answer_id": 2089986, "score": 2, "text": "<p>A sentence said to me by my mother when I was a kid and that I didn't understand at that time was:</p>\n\n<blockquote>\n <p>Parallel lines are lines that intersect at infinity.</p>\n</blockquote>\n\n<p>This statement and its explanation through art (as suggested by Pedro Tamaroff in the comments) and by showing how many statements and proofs are simplified (if the person knows more math) is a good introduction to the idea that thinking of parallel lines as lines meeting at infinity makes sense and can be useful.</p>\n\n<h2>In art</h2>\n\n<p>This, as you can also see in Yves Daoust's answer, can be seen whenever you look at a straight road going to the horizon. For example, see <a href=\"https://commons.wikimedia.org/wiki/File:Straight_Road,_B17_-_geograph.org.uk_-_232427.jpg\" rel=\"nofollow noreferrer\">Wikimedia Commons</a> for the source,\n<a href=\"https://i.stack.imgur.com/cnNkW.jpg\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/cnNkW.jpg\" alt=\"Source: Wikimedia Commons\"></a><br>\nThe above image shows the reason why this conception originated in art, as it was really useful for representations of how we see things. A striking application of this can be seen in the <a href=\"https://en.wikipedia.org/wiki/Santa_Maria_presso_San_Satiro\" rel=\"nofollow noreferrer\">Santa Maria presso San Satiro</a>, where projective geometry was used to emulate an absent space in a church. However, there are many other examples.</p>\n\n<h2>In geometry</h2>\n\n<p>In usual geometry (i.e. affine and Euclidean geometry), the above definition or statement does not make sense as parallel lines don't intersect. 
This can be the definition in the plane, but it is generally (in higher dimensions) a result derived from the definition of parallel lines as lines with the same direction.</p>\n\n<p>However, the reason why we can still make sense of the above statement about intersection at infinity is because affine geometry can be put inside projective geometry. When doing this, the points outside the affine space are called \"points at infinity\", parallel lines intersect at them and become the same as intersecting lines simplifying enormously many statements and proofs by permitting one to not distinguish cases. An example can be <a href=\"https://en.wikipedia.org/wiki/Pappus%27s_hexagon_theorem\" rel=\"nofollow noreferrer\">Pappus's hexagon theorem</a>.</p>\n\n<h2>In conclusion</h2>\n\n<p>In conclusion, don't try to convince or show that parallel lines touch. Just try to explain the usefulness of thinking of parallel lines as lines that intersect at infinity. In the Renaissance, you have many examples of why this a useful statement from the point of view of representing reality and perspectives well; in mathematics, there are many examples of how this is really useful for simplifying statements and proofs in geometry.</p>\n" }, { "answer_id": 2089964, "score": 1, "text": "<p>In Elliptic geometry the supposition is false.</p>\n\n<p>In Euclidean geometry parallel lines \"meet\" and touch at infinity as their slope is same.</p>\n\n<p>In flat Hyperbolic geometry parallel lines can also touch but only at at infinity.</p>\n\n<p>In the flat Poincare disk model circle segment geodesic parallels meet tangentially only at infinitely distant points on the boundary of the \"horizon\" or boundary circle. 
The lines can be seen to touch and move on the boundary seen in the Wiki link.</p>\n\n<p><a href=\"http://mathworld.wolfram.com/PoincareHyperbolicDisk.html\" rel=\"nofollow noreferrer\"> Poincare Disk_MathWorld </a></p>\n\n<p>So also in the half-plane model semi-circular geodesics meet at point at $\\infty$, on the x-axis. An animation can be also found elsewhere semi-circles touch on x-axis while changing semi-circle size during movement.</p>\n\n<p><a href=\"http://math.etsu.edu/multicalc/prealpha/Chap3/Chap3-8/part4.htm\" rel=\"nofollow noreferrer\">PoincareHalfPlane</a></p>\n\n<p>Both examples are touching parallel lines, touching tangentially /asymptotically at point infinity.</p>\n\n<p><a href=\"http://math.etsu.edu/multicalc/prealpha/Chap3/Chap3-8/part4.htm\" rel=\"nofollow noreferrer\">http://math.etsu.edu/multicalc/prealpha/Chap3/Chap3-8/part4.htm</a></p>\n" }, { "answer_id": 2103316, "score": 1, "text": "<p>One way to explain this is to think of the Euclidean plane as a picture of (part of) an ambient 3-dimensional world. This corresponds to how our vision actually works: light from the three-dimensional world is projected through the lens of an eyes onto the surface at the back of our eye.</p>\n\n<p>In more detail: Place a plane anywhere in three-dimensional space (as long as it doesn't pass through the origin). Call this the \"picture plane\". Now any <em>line through the origin</em> in $\\mathbb{R}^3$ pierces the picture plane in exactly one point (as long as the line it is not parallel to the picture plane). Any <em>plane through the origin</em> pierces the picture plane in exactly one line (again, as long as the plane is not parallel to the picture plane). 
See below (in which the picture plane is shown in blue).</p>\n\n<p><a href=\"https://i.stack.imgur.com/Dod2D.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/Dod2D.png\" alt=\"enter image description here\"></a>\n<a href=\"https://i.stack.imgur.com/KRnSK.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/KRnSK.png\" alt=\"enter image description here\"></a></p>\n\n<p>Now we have a dictionary:\n$$\\textrm{point in picture plane} \\longleftrightarrow \\textrm{line through origin in 3-space}$$\n$$\\textrm{line in picture plane} \\longleftrightarrow \\textrm{plane through origin in 3-space}$$</p>\n\n<p>This dictionary is not quite 1-to-1, because there are lines (and one plane) through the origin in $\\mathbb{R}^3$ that are parallel to the picture plane, and therefore do not correspond to any points or to a line in the picture plane. These are points and this line can be considered \"at infinity\", but really all this means is that they do not have images in the picture plane.</p>\n\n<p>You can experience this visually by holding your finger at arm's length in front of and a few inches above. As you draw your arm closer (while maintaining its height) the location of your fingertip seems to get \"higher\" or \"farther away\" in your field of view, as your eyes have to strain more and more to see it. Eventually when the fingertip is directly above the eye, the finger \"disappears\". It's not really gone, of course, but the \"line of sight\" from the fingertip to the lens of your eye is parallel to the retina, so that the fingertip cannot be seen. Perceptually, the fingertip has receded \"infinitely far away\" in the picture plane -- but it is actually just a couple of inches away in 3-space.</p>\n\n<p><strong>However</strong>, we all know what to do in that circumstance: turn your head! This corresponds to choosing a new picture plane with a different orientation. 
Now the point that was \"at infinity\" snaps into view and is revealed as an ordinary point in the new picture plane (and at the same time points that <em>were previously</em> in view have no disappeared).</p>\n\n<p>Note that a statement like</p>\n\n<blockquote>\n <p>given any two distinct points in the picture plane, there exists a unique line in the picture plane through the given points</p>\n</blockquote>\n\n<p>which describes a true fact about the geometry of the picture plane corresponds, via this dictionary, to</p>\n\n<blockquote>\n <p>given any two distinct lines through the origin in 3-space, there exists a unique plane through the origin containing both lines</p>\n</blockquote>\n\n<p>which is a true statement about 3-space.</p>\n\n<p>Now that we have this dictionary set up, let's think about what parallel lines in the picture plane are. As the image below shows, parallel lines in the picture plane correspond to two planes in $\\mathbb{R}^3$. (More precisely, lines $f$ and $g$ are \"parallel\" in the blue picture plane, and correspond to the orange and yellow planes through the origin.)</p>\n\n<p><a href=\"https://i.stack.imgur.com/J3BxN.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/J3BxN.png\" alt=\"enter image description here\"></a></p>\n\n<p>Those planes are not parallel; they intersect, but the intersection is a line <em>that does not pierce the picture plane</em>. (In this illustration, the line of intersection happens to be the $y$-axis.) That line in $\\mathbb{R}^3$ corresponds to a point \"at infinity\".</p>\n\n<p>Of course if you choose a different picture plane, then the lines will no longer appear parallel, and their intersection will be plainly visible as a point in the new picture plane. The final image below shows this. 
In that image, the orange and yellow planes are the same ones as before, but the blue picture plane has been moved and reoriented; the images of the two planes are now <em>non-parallel</em> lines.</p>\n\n<p><a href=\"https://i.stack.imgur.com/iRBlv.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/iRBlv.png\" alt=\"enter image description here\"></a></p>\n\n<p>In summary:</p>\n\n<ul>\n<li>If the 2-dimensional plane is understood as an image of an ambient 3-dimensional space, then a \"point\" is just the image of a <em>line through the origin</em> and a \"line\" is just the image of a <em>plane through the origin</em></li>\n<li>Some \"points\" (and the \"line\" containing that point) will have no image in the picture plane, which we describe by saying that they are \"at infinity\"</li>\n<li>However being \"at infinity\" is an artifact of which picture plane we use; changing picture planes changes which points are at infinity</li>\n<li>Furthermore whether two \"lines\" are parallel is an artifact of the picture plane we use to view them; lines that seem parallel from one perspective are non-parallel from a different perspective</li>\n<li>The connection between the last two bullets is that \"lines\" that appear parallel in one picture plane <em>do</em> intersect, but the point of intersection is a point \"at infinity\" <em>relative to the picture plane</em>; change perspectives and the point is no longer at infinity, and the lines are no longer parallel.</li>\n<li>All of the above corresponds more-or-less exactly with how the human eye works, which is why perspective art can do such a good job at fooling the eye and creating the illusion of depth on a flat surface.</li>\n</ul>\n" }, { "answer_id": 2089676, "score": 24, "text": "<p>Lines are parallel if they lie in the same plane and they don't intersect. 
In other geometries, there may be no parallel lines, lines may not have a common point but they may have a common limit point at infinity, or they may just not intersect.</p>\n\n<p>You might be thinking about Projective Geometry where a \"point at infinity\" is added to every family of parallel lines and the set of all points at infinity is called the \"line at infinity\".</p>\n\n<p>Addendum</p>\n\n<hr>\n\n<p>Typically, in formal geometry, points, lines, and planes are not defined. But postulates define their \"baseline\" behavior. The postulate that every geometry seems to agree on is the one that states </p>\n\n<p>L1: Given two distinct points in a plane, there is exactly one line in that plane that contains them.</p>\n\n<p>The \"dual\" of that postulate is</p>\n\n<p>DL1: Given two distinct lines in a plane, there is exactly one point in that plane that belongs to both lines.</p>\n\n<p>Since Euclidean geometry contains parallel lines, DL2 is false. But Projective geometry accepts DL2 as a postulate. The big question is, \"Does there exists a geometry that satisfies the postulates of Projective geometry?\" Yes there does.</p>\n\n<p>The creation of such a geometry is really quite clever. You start with a Euclidean plane and you add points to it as follows. Pick any line in the plane. To that line and all lines parallel to it, you add one extra point, a point at infinity. This is a set thing. We are treating a Euclidean line, $l$, as a set of points and we are adding a non Euclidean point $p$ to that set, $l' = l \\cup \\{p\\}$.</p>\n\n<p>Adding this point to those lines means that those lines are no longer parallel.\nDefine the set of all points at infinity to be the line at infinity. The Projective plane is the Euclidian plane with all points at infinity and the line at infinity added to it. This particular Projective plane can be proved to satisfy DL2.</p>\n\n<p>In the other direction, pick any line, $l$, in the projective plane, $\\mathbb P^2$, and remove it. 
What you end up with is the Euclidean plane, $\\mathbb E^2 \\cong \\mathbb P^2 - \\{l\\}$. Some lines will still intersect. Those that intersected at a point on the line that was removed will now be parallel.</p>\n" }, { "answer_id": 2089655, "score": 13, "text": "<p>I would say if we stand next to each other and both walk due north, we eventually bump into each other near the north pole</p>\n\n<p>Note, as several have commented, your premise is slightly off. What you can convince your friend of is that any two 'straight lines' i.e. geodesics cross on a sphere (not parallel lines), whereas on a plane, for any line there is a special family of lines that never touch it.</p>\n" }, { "answer_id": 2089925, "score": 10, "text": "<p>As I said in comments, you can't convince someone with something which is not true.</p>\n<p>IMO if someone knows basic geometry then he he must be aware of the fact that :</p>\n<h1>Parallel lines do not meet at a point.</h1>\n<p><a href=\"https://en.wikipedia.org/wiki/Parallel_(geometry)\" rel=\"nofollow noreferrer\">This</a> section of Wikipedia worth a lot here:</p>\n<blockquote>\n<p>In geometry, parallel lines are lines in a plane which do not meet; that is, two lines in a plane that do not intersect or touch each other at any point are said to be parallel. By extension, a line and a plane, or two planes, in three-dimensional Euclidean space that do not share a point are said to be parallel.</p>\n</blockquote>\n<p>Though if you are convincing/arguing with someone naive in field of geometry that Parallel line do not meet, ask him why people call him by his/her name and not by <a href=\"https://math.stackexchange.com/users/401699/justin-bieber\">Justin Bieber</a>/<a href=\"https://math.stackexchange.com/users/143816/selena-gomez\">Selena Gomez</a>? The expected answer would be that <em>Because it is my name</em> and that's it. You got him/her. 
Actually parallel lines cannot meet at a point or intersect because they are defined that way, if two lines will intersect then they will not remain parallel lines.</p>\n" }, { "answer_id": 2089827, "score": 7, "text": "<p>When I was 16 or so I was bored trying to solve my math homework, so I played with a magnifying glass, and I noticed something interesting:</p>\n\n<p>If you look at the grid paper with a magnifying glass the lines remain parallel (this is evident when they are more or less on the \"top of the glass\"), but all the lines meet at the edge of the glass.</p>\n\n<p>(I went to my older brother, who was an engineering freshman at the time, and I told him that parallel lines <em>can</em> meet; but he replied that they cannot because it's an axiom. Some years later I learned that there is a thing called non-Euclidean geometry.)</p>\n\n<p>&nbsp;</p>\n\n<p>Note, by the way, this might require you to think about the glass as infinite.</p>\n" }, { "answer_id": 2089683, "score": 6, "text": "<p>In his comment @JackyChong has identified the preliminary problem. The definition of \"parallel\" is clear: lines that don't meet, so there are no parallel lines that meet. The real question is the definition of \"line\".</p>\n\n<p>Geodesic is the most natural for geometry on the sphere. Then in this geometry there are no parallel lines.</p>\n\n<p>For projective geometry one definition is to add a \"point at infinity\" on each line, and then make the added points into a \"line at infinity\". With these extra points and lines there are no parallel lines. Two that are parallel in the Euclidean plane share their point at infinity. The railroad track analogy helps here.</p>\n\n<p>In hyperbolic geometry there are multiple lines parallel to a given line through a point not on that line. 
If you get that far with the \"someone who knows no mathematics\" you can show him or her the <a href=\"https://en.wikipedia.org/wiki/Poincar%C3%A9_disk_model\" rel=\"nofollow noreferrer\">Poincare model</a>.</p>\n" }, { "answer_id": 2089656, "score": 5, "text": "<p>I would use two meridians on Earth that touch at the poles.</p>\n" }, { "answer_id": 2089933, "score": 5, "text": "<p>This is a matter of point of view :)</p>\n\n<p><a href=\"https://i.stack.imgur.com/lGePi.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/lGePi.png\" alt=\"enter image description here\"></a></p>\n" }, { "answer_id": 2089660, "score": 4, "text": "<p>You can actually speak of trains. Rail ways are pretty suggestive.</p>\n" }, { "answer_id": 2089976, "score": 4, "text": "<p>You cannot convince any mathematician let alone your friend as it is false. They can however grapple with the idea that for purely theoretical calculations we assume that 2 parallel lines can meet at a point in infinity. Should you wish to travel to infinity to prove this point then please send a postcard when you get there.</p>\n" }, { "answer_id": 2103114, "score": 3, "text": "<p>Someone who knows basic mathematics can understand it easily since according to the definition of parallel lines<br> <strong>Parallel lines:</strong> The lines those distance is constant are called <a href=\"https://www.google.co.in/#q=parallel+lines+definition\" rel=\"nofollow noreferrer\"><strong>parallel lines</strong>[see the topmost definition]</a></p>\n\n<p>Now, as we know the definition the coinciding lines are also parallel since the distance between them is constant or $0$. Hence, as coinciding lines touch each other implies parallel lines also touch. </p>\n\n<p><a href=\"https://www.youtube.com/watch?v=BKorP55Aqvg\" rel=\"nofollow noreferrer\"><strong>See the video here</strong> also</a></p>\n" }, { "answer_id": 2089712, "score": 2, "text": "<p>I would draw two parallel lines on a piece of paper. 
Then I would bring all the edges together into one point and I would show that the parallel lines would touch into that point.</p>\n" }, { "answer_id": 2089986, "score": 2, "text": "<p>A sentence said to me by my mother when I was a kid and that I didn't understand at that time was:</p>\n\n<blockquote>\n <p>Parallel lines are lines that intersect at infinity.</p>\n</blockquote>\n\n<p>This statement and its explanation though art (as suggested by Pedro Tamaroff in the comments) and by showing how many statements and proofs are simplified (if the person knows more math) is a good introduction to the idea that thinking on parallel lines as lines meeting at infinity makes sense and can be useful.</p>\n\n<h2>In art</h2>\n\n<p>This, as you can also see in Yves Daoust's answer, can be seen whenever you look a straight road going to the horizon. For example, see <a href=\"https://commons.wikimedia.org/wiki/File:Straight_Road,_B17_-_geograph.org.uk_-_232427.jpg\" rel=\"nofollow noreferrer\">Wikimedia Commons</a> for the source,\n<a href=\"https://i.stack.imgur.com/cnNkW.jpg\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/cnNkW.jpg\" alt=\"Source: Wikimedia Commons\"></a><br>\nThe above image shows the reason why this conception was in the beginning a concept originated in art as it was really useful for representations of how we see things. An striking application of this can be seen in the <a href=\"https://en.wikipedia.org/wiki/Santa_Maria_presso_San_Satiro\" rel=\"nofollow noreferrer\">Santa Maria presso San Satiro</a>, where projective geometry was used emulating an absent space in a church. However, there are many other examples.</p>\n\n<h2>In geometry</h2>\n\n<p>In usual geometry (i.e. affine and Euclidean geometry), the above definition or statement does not make sense as parallel lines don't intersect. 
This can be the definition in the plane, but it is generally (in higher dimensions) a result derived from the definition of parallel lines as lines with the same direction.</p>\n\n<p>However, the reason why we can still make sense of the above statement about intersection at infinity is that affine geometry can be put inside projective geometry. When doing this, the points outside the affine space are called \"points at infinity\"; parallel lines intersect at them and become the same as intersecting lines, enormously simplifying many statements and proofs by permitting one not to distinguish cases. An example can be <a href=\"https://en.wikipedia.org/wiki/Pappus%27s_hexagon_theorem\" rel=\"nofollow noreferrer\">Pappus's hexagon theorem</a>.</p>\n\n<h2>In conclusion</h2>\n\n<p>In conclusion, don't try to convince or show that parallel lines touch. Just try to explain the usefulness of thinking of parallel lines as lines that intersect at infinity. In the Renaissance, you have many examples of why this is a useful statement from the point of view of representing reality and perspectives well; in mathematics, there are many examples of how this is really useful for simplifying statements and proofs in geometry.</p>\n" }, { "answer_id": 2089964, "score": 1, "text": "<p>In Elliptic geometry the supposition is false.</p>\n\n<p>In Euclidean geometry parallel lines \"meet\" and touch at infinity as their slopes are the same.</p>\n\n<p>In flat Hyperbolic geometry parallel lines can also touch, but only at infinity.</p>\n\n<p>In the flat Poincare disk model circle segment geodesic parallels meet tangentially only at infinitely distant points on the boundary of the \"horizon\" or boundary circle. 
The lines can be seen to touch and move on the boundary as seen in the Wiki link.</p>\n\n<p><a href=\"http://mathworld.wolfram.com/PoincareHyperbolicDisk.html\" rel=\"nofollow noreferrer\"> Poincare Disk_MathWorld </a></p>\n\n<p>So also in the half-plane model semi-circular geodesics meet at a point at $\\infty$ on the x-axis. An animation can also be found elsewhere in which semi-circles touch on the x-axis while changing size during movement.</p>\n\n<p><a href=\"http://math.etsu.edu/multicalc/prealpha/Chap3/Chap3-8/part4.htm\" rel=\"nofollow noreferrer\">PoincareHalfPlane</a></p>\n\n<p>Both examples are touching parallel lines, touching tangentially/asymptotically at a point at infinity.</p>\n\n<p><a href=\"http://math.etsu.edu/multicalc/prealpha/Chap3/Chap3-8/part4.htm\" rel=\"nofollow noreferrer\">http://math.etsu.edu/multicalc/prealpha/Chap3/Chap3-8/part4.htm</a></p>\n" }, { "answer_id": 2103316, "score": 1, "text": "<p>One way to explain this is to think of the Euclidean plane as a picture of (part of) an ambient 3-dimensional world. This corresponds to how our vision actually works: light from the three-dimensional world is projected through the lens of an eye onto the surface at the back of our eye.</p>\n\n<p>In more detail: Place a plane anywhere in three-dimensional space (as long as it doesn't pass through the origin). Call this the \"picture plane\". Now any <em>line through the origin</em> in $\\mathbb{R}^3$ pierces the picture plane in exactly one point (as long as the line is not parallel to the picture plane). Any <em>plane through the origin</em> pierces the picture plane in exactly one line (again, as long as the plane is not parallel to the picture plane). 
See below (in which the picture plane is shown in blue).</p>\n\n<p><a href=\"https://i.stack.imgur.com/Dod2D.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/Dod2D.png\" alt=\"enter image description here\"></a>\n<a href=\"https://i.stack.imgur.com/KRnSK.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/KRnSK.png\" alt=\"enter image description here\"></a></p>\n\n<p>Now we have a dictionary:\n$$\\textrm{point in picture plane} \\longleftrightarrow \\textrm{line through origin in 3-space}$$\n$$\\textrm{line in picture plane} \\longleftrightarrow \\textrm{plane through origin in 3-space}$$</p>\n\n<p>This dictionary is not quite 1-to-1, because there are lines (and one plane) through the origin in $\\mathbb{R}^3$ that are parallel to the picture plane, and therefore do not correspond to any points or to a line in the picture plane. These points and this line can be considered \"at infinity\", but really all this means is that they do not have images in the picture plane.</p>\n\n<p>You can experience this visually by holding your finger at arm's length in front of and a few inches above your eye. As you draw your arm closer (while maintaining its height) the location of your fingertip seems to get \"higher\" or \"farther away\" in your field of view, as your eyes have to strain more and more to see it. Eventually when the fingertip is directly above the eye, the finger \"disappears\". It's not really gone, of course, but the \"line of sight\" from the fingertip to the lens of your eye is parallel to the retina, so that the fingertip cannot be seen. Perceptually, the fingertip has receded \"infinitely far away\" in the picture plane -- but it is actually just a couple of inches away in 3-space.</p>\n\n<p><strong>However</strong>, we all know what to do in that circumstance: turn your head! This corresponds to choosing a new picture plane with a different orientation. 
Now the point that was \"at infinity\" snaps into view and is revealed as an ordinary point in the new picture plane (and at the same time points that <em>were previously</em> in view have now disappeared).</p>\n\n<p>Note that a statement like</p>\n\n<blockquote>\n <p>given any two distinct points in the picture plane, there exists a unique line in the picture plane through the given points</p>\n</blockquote>\n\n<p>which describes a true fact about the geometry of the picture plane corresponds, via this dictionary, to</p>\n\n<blockquote>\n <p>given any two distinct lines through the origin in 3-space, there exists a unique plane through the origin containing both lines</p>\n</blockquote>\n\n<p>which is a true statement about 3-space.</p>\n\n<p>Now that we have this dictionary set up, let's think about what parallel lines in the picture plane are. As the image below shows, parallel lines in the picture plane correspond to two planes in $\\mathbb{R}^3$. (More precisely, lines $f$ and $g$ are \"parallel\" in the blue picture plane, and correspond to the orange and yellow planes through the origin.)</p>\n\n<p><a href=\"https://i.stack.imgur.com/J3BxN.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/J3BxN.png\" alt=\"enter image description here\"></a></p>\n\n<p>Those planes are not parallel; they intersect, but the intersection is a line <em>that does not pierce the picture plane</em>. (In this illustration, the line of intersection happens to be the $y$-axis.) That line in $\\mathbb{R}^3$ corresponds to a point \"at infinity\".</p>\n\n<p>Of course if you choose a different picture plane, then the lines will no longer appear parallel, and their intersection will be plainly visible as a point in the new picture plane. The final image below shows this. 
In that image, the orange and yellow planes are the same ones as before, but the blue picture plane has been moved and reoriented; the images of the two planes are now <em>non-parallel</em> lines.</p>\n\n<p><a href=\"https://i.stack.imgur.com/iRBlv.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/iRBlv.png\" alt=\"enter image description here\"></a></p>\n\n<p>In summary:</p>\n\n<ul>\n<li>If the 2-dimensional plane is understood as an image of an ambient 3-dimensional space, then a \"point\" is just the image of a <em>line through the origin</em> and a \"line\" is just the image of a <em>plane through the origin</em></li>\n<li>Some \"points\" (and the \"line\" containing that point) will have no image in the picture plane, which we describe by saying that they are \"at infinity\"</li>\n<li>However being \"at infinity\" is an artifact of which picture plane we use; changing picture planes changes which points are at infinity</li>\n<li>Furthermore whether two \"lines\" are parallel is an artifact of the picture plane we use to view them; lines that seem parallel from one perspective are non-parallel from a different perspective</li>\n<li>The connection between the last two bullets is that \"lines\" that appear parallel in one picture plane <em>do</em> intersect, but the point of intersection is a point \"at infinity\" <em>relative to the picture plane</em>; change perspectives and the point is no longer at infinity, and the lines are no longer parallel.</li>\n<li>All of the above corresponds more-or-less exactly with how the human eye works, which is why perspective art can do such a good job at fooling the eye and creating the illusion of depth on a flat surface.</li>\n</ul>\n" }, { "answer_id": 2089676, "score": 24, "text": "<p>Lines are parallel if they lie in the same plane and they don't intersect. 
In other geometries, there may be no parallel lines, lines may not have a common point but they may have a common limit point at infinity, or they may just not intersect.</p>\n\n<p>You might be thinking about Projective Geometry where a \"point at infinity\" is added to every family of parallel lines and the set of all points at infinity is called the \"line at infinity\".</p>\n\n<p>Addendum</p>\n\n<hr>\n\n<p>Typically, in formal geometry, points, lines, and planes are not defined. But postulates define their \"baseline\" behavior. The postulate that every geometry seems to agree on is the one that states </p>\n\n<p>L1: Given two distinct points in a plane, there is exactly one line in that plane that contains them.</p>\n\n<p>The \"dual\" of that postulate is</p>\n\n<p>DL1: Given two distinct lines in a plane, there is exactly one point in that plane that belongs to both lines.</p>\n\n<p>Since Euclidean geometry contains parallel lines, DL1 is false. But Projective geometry accepts DL1 as a postulate. The big question is, \"Does there exist a geometry that satisfies the postulates of Projective geometry?\" Yes there does.</p>\n\n<p>The creation of such a geometry is really quite clever. You start with a Euclidean plane and you add points to it as follows. Pick any line in the plane. To that line and all lines parallel to it, you add one extra point, a point at infinity. This is a set thing. We are treating a Euclidean line, $l$, as a set of points and we are adding a non-Euclidean point $p$ to that set, $l' = l \\cup \\{p\\}$.</p>\n\n<p>Adding this point to those lines means that those lines are no longer parallel.\nDefine the set of all points at infinity to be the line at infinity. The Projective plane is the Euclidean plane with all points at infinity and the line at infinity added to it. This particular Projective plane can be proved to satisfy DL1.</p>\n\n<p>In the other direction, pick any line, $l$, in the projective plane, $\\mathbb P^2$, and remove it. 
What you end up with is the Euclidean plane, $\\mathbb E^2 \\cong \\mathbb P^2 - \\{l\\}$. Some lines will still intersect. Those that intersected at a point on the line that was removed will now be parallel.</p>\n" }, { "answer_id": 2089655, "score": 13, "text": "<p>I would say if we stand next to each other and both walk due north, we eventually bump into each other near the north pole</p>\n\n<p>Note, as several have commented, your premise is slightly off. What you can convince your friend of is that any two 'straight lines' i.e. geodesics cross on a sphere (not parallel lines), whereas on a plane, for any line there is a special family of lines that never touch it.</p>\n" }, { "answer_id": 2089925, "score": 10, "text": "<p>As I said in comments, you can't convince someone with something which is not true.</p>\n<p>IMO if someone knows basic geometry then he must be aware of the fact that:</p>\n<h1>Parallel lines do not meet at a point.</h1>\n<p><a href=\"https://en.wikipedia.org/wiki/Parallel_(geometry)\" rel=\"nofollow noreferrer\">This</a> section of Wikipedia is worth a lot here:</p>\n<blockquote>\n<p>In geometry, parallel lines are lines in a plane which do not meet; that is, two lines in a plane that do not intersect or touch each other at any point are said to be parallel. By extension, a line and a plane, or two planes, in three-dimensional Euclidean space that do not share a point are said to be parallel.</p>\n</blockquote>\n<p>Though if you are convincing/arguing with someone naive in the field of geometry that parallel lines do not meet, ask him why people call him by his/her name and not by <a href=\"https://math.stackexchange.com/users/401699/justin-bieber\">Justin Bieber</a>/<a href=\"https://math.stackexchange.com/users/143816/selena-gomez\">Selena Gomez</a>? The expected answer would be that <em>Because it is my name</em> and that's it. You got him/her. 
Actually parallel lines cannot meet at a point or intersect because they are defined that way; if two lines intersect, then they are not parallel lines.</p>\n" }, { "answer_id": 2089827, "score": 7, "text": "<p>When I was 16 or so I was bored trying to solve my math homework, so I played with a magnifying glass, and I noticed something interesting:</p>\n\n<p>If you look at the grid paper with a magnifying glass the lines remain parallel (this is evident when they are more or less on the \"top of the glass\"), but all the lines meet at the edge of the glass.</p>\n\n<p>(I went to my older brother, who was an engineering freshman at the time, and I told him that parallel lines <em>can</em> meet; but he replied that they cannot because it's an axiom. Some years later I learned that there is a thing called non-Euclidean geometry.)</p>\n\n<p>&nbsp;</p>\n\n<p>Note, by the way, this might require you to think about the glass as infinite.</p>\n" }, { "answer_id": 2089683, "score": 6, "text": "<p>In his comment @JackyChong has identified the preliminary problem. The definition of \"parallel\" is clear: lines that don't meet, so there are no parallel lines that meet. The real question is the definition of \"line\".</p>\n\n<p>Geodesic is the most natural for geometry on the sphere. Then in this geometry there are no parallel lines.</p>\n\n<p>For projective geometry one definition is to add a \"point at infinity\" on each line, and then make the added points into a \"line at infinity\". With these extra points and lines there are no parallel lines. Two that are parallel in the Euclidean plane share their point at infinity. The railroad track analogy helps here.</p>\n\n<p>In hyperbolic geometry there are multiple lines parallel to a given line through a point not on that line. 
If you get that far with the \"someone who knows no mathematics\" you can show him or her the <a href=\"https://en.wikipedia.org/wiki/Poincar%C3%A9_disk_model\" rel=\"nofollow noreferrer\">Poincare model</a>.</p>\n" }, { "answer_id": 2089656, "score": 5, "text": "<p>I would use two meridians on Earth that touch at the poles.</p>\n" }, { "answer_id": 2089933, "score": 5, "text": "<p>This is a matter of point of view :)</p>\n\n<p><a href=\"https://i.stack.imgur.com/lGePi.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/lGePi.png\" alt=\"enter image description here\"></a></p>\n" }, { "answer_id": 2089660, "score": 4, "text": "<p>You can actually speak of trains. Railways are pretty suggestive.</p>\n" }, { "answer_id": 2089976, "score": 4, "text": "<p>You cannot convince any mathematician, let alone your friend, as it is false. They can however grapple with the idea that for purely theoretical calculations we assume that 2 parallel lines can meet at a point at infinity. Should you wish to travel to infinity to prove this point then please send a postcard when you get there.</p>\n" }, { "answer_id": 2103114, "score": 3, "text": "<p>Someone who knows basic mathematics can understand it easily from the definition of parallel lines.<br> <strong>Parallel lines:</strong> Lines whose distance apart is constant are called <a href=\"https://www.google.co.in/#q=parallel+lines+definition\" rel=\"nofollow noreferrer\"><strong>parallel lines</strong>[see the topmost definition]</a></p>\n\n<p>Now, by this definition, coinciding lines are also parallel, since the distance between them is constant, namely $0$. Hence, since coinciding lines touch each other, parallel lines can also touch. </p>\n\n<p><a href=\"https://www.youtube.com/watch?v=BKorP55Aqvg\" rel=\"nofollow noreferrer\"><strong>See the video here</strong> also</a></p>\n" }, { "answer_id": 2089712, "score": 2, "text": "<p>I would draw two parallel lines on a piece of paper. 
Then I would bring all the edges together into one point and I would show that the parallel lines would touch at that point.</p>\n" }, { "answer_id": 2089986, "score": 2, "text": "<p>A sentence said to me by my mother when I was a kid and that I didn't understand at that time was:</p>\n\n<blockquote>\n <p>Parallel lines are lines that intersect at infinity.</p>\n</blockquote>\n\n<p>This statement and its explanation through art (as suggested by Pedro Tamaroff in the comments) and by showing how many statements and proofs are simplified (if the person knows more math) is a good introduction to the idea that thinking of parallel lines as lines meeting at infinity makes sense and can be useful.</p>\n\n<h2>In art</h2>\n\n<p>This, as you can also see in Yves Daoust's answer, can be seen whenever you look at a straight road going to the horizon. For example, see <a href=\"https://commons.wikimedia.org/wiki/File:Straight_Road,_B17_-_geograph.org.uk_-_232427.jpg\" rel=\"nofollow noreferrer\">Wikimedia Commons</a> for the source.\n<a href=\"https://i.stack.imgur.com/cnNkW.jpg\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/cnNkW.jpg\" alt=\"Source: Wikimedia Commons\"></a><br>\nThe above image shows why this conception originated in art: it was really useful for representing how we see things. A striking application of this can be seen in the <a href=\"https://en.wikipedia.org/wiki/Santa_Maria_presso_San_Satiro\" rel=\"nofollow noreferrer\">Santa Maria presso San Satiro</a>, where projective geometry was used to emulate an absent space in a church. However, there are many other examples.</p>\n\n<h2>In geometry</h2>\n\n<p>In usual geometry (i.e. affine and Euclidean geometry), the above definition or statement does not make sense, as parallel lines don't intersect. 
This can be the definition in the plane, but it is generally (in higher dimensions) a result derived from the definition of parallel lines as lines with the same direction.</p>\n\n<p>However, the reason why we can still make sense of the above statement about intersection at infinity is that affine geometry can be put inside projective geometry. When doing this, the points outside the affine space are called \"points at infinity\"; parallel lines intersect at them and become the same as intersecting lines, enormously simplifying many statements and proofs by permitting one not to distinguish cases. An example is <a href=\"https://en.wikipedia.org/wiki/Pappus%27s_hexagon_theorem\" rel=\"nofollow noreferrer\">Pappus's hexagon theorem</a>.</p>\n\n<h2>In conclusion</h2>\n\n<p>In conclusion, don't try to convince or show that parallel lines touch. Just try to explain the usefulness of thinking of parallel lines as lines that intersect at infinity. In the Renaissance, you have many examples of why this is a useful statement from the point of view of representing reality and perspectives well; in mathematics, there are many examples of how this is really useful for simplifying statements and proofs in geometry.</p>\n" }, { "answer_id": 2089964, "score": 1, "text": "<p>In Elliptic geometry the supposition is false.</p>\n\n<p>In Euclidean geometry parallel lines \"meet\" and touch at infinity as their slopes are the same.</p>\n\n<p>In flat Hyperbolic geometry parallel lines can also touch, but only at infinity.</p>\n\n<p>In the flat Poincare disk model, circle-segment geodesic parallels meet tangentially only at infinitely distant points on the boundary of the \"horizon\" or boundary circle. 
The lines can be seen to touch and move on the boundary seen in the Wiki link.</p>\n\n<p><a href=\"http://mathworld.wolfram.com/PoincareHyperbolicDisk.html\" rel=\"nofollow noreferrer\"> Poincare Disk_MathWorld </a></p>\n\n<p>So also in the half-plane model, semi-circular geodesics meet at a point at $\\infty$ on the x-axis. An animation can also be found elsewhere in which semi-circles touch on the x-axis while changing size during movement.</p>\n\n<p><a href=\"http://math.etsu.edu/multicalc/prealpha/Chap3/Chap3-8/part4.htm\" rel=\"nofollow noreferrer\">PoincareHalfPlane</a></p>\n\n<p>Both examples are touching parallel lines, touching tangentially/asymptotically at a point at infinity.</p>\n\n<p><a href=\"http://math.etsu.edu/multicalc/prealpha/Chap3/Chap3-8/part4.htm\" rel=\"nofollow noreferrer\">http://math.etsu.edu/multicalc/prealpha/Chap3/Chap3-8/part4.htm</a></p>\n" }, { "answer_id": 2103316, "score": 1, "text": "<p>One way to explain this is to think of the Euclidean plane as a picture of (part of) an ambient 3-dimensional world. This corresponds to how our vision actually works: light from the three-dimensional world is projected through the lens of an eye onto the surface at the back of the eye.</p>\n\n<p>In more detail: Place a plane anywhere in three-dimensional space (as long as it doesn't pass through the origin). Call this the \"picture plane\". Now any <em>line through the origin</em> in $\\mathbb{R}^3$ pierces the picture plane in exactly one point (as long as the line is not parallel to the picture plane). Any <em>plane through the origin</em> pierces the picture plane in exactly one line (again, as long as the plane is not parallel to the picture plane). 
See below (in which the picture plane is shown in blue).</p>\n\n<p><a href=\"https://i.stack.imgur.com/Dod2D.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/Dod2D.png\" alt=\"enter image description here\"></a>\n<a href=\"https://i.stack.imgur.com/KRnSK.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/KRnSK.png\" alt=\"enter image description here\"></a></p>\n\n<p>Now we have a dictionary:\n$$\\textrm{point in picture plane} \\longleftrightarrow \\textrm{line through origin in 3-space}$$\n$$\\textrm{line in picture plane} \\longleftrightarrow \\textrm{plane through origin in 3-space}$$</p>\n\n<p>This dictionary is not quite 1-to-1, because there are lines (and one plane) through the origin in $\\mathbb{R}^3$ that are parallel to the picture plane, and therefore do not correspond to any points or to a line in the picture plane. These are points and this line can be considered \"at infinity\", but really all this means is that they do not have images in the picture plane.</p>\n\n<p>You can experience this visually by holding your finger at arm's length in front of your eye and a few inches above it. As you draw your arm closer (while maintaining its height) the location of your fingertip seems to get \"higher\" or \"farther away\" in your field of view, as your eyes have to strain more and more to see it. Eventually when the fingertip is directly above the eye, the finger \"disappears\". It's not really gone, of course, but the \"line of sight\" from the fingertip to the lens of your eye is parallel to the retina, so that the fingertip cannot be seen. Perceptually, the fingertip has receded \"infinitely far away\" in the picture plane -- but it is actually just a couple of inches away in 3-space.</p>\n\n<p><strong>However</strong>, we all know what to do in that circumstance: turn your head! This corresponds to choosing a new picture plane with a different orientation. 
Now the point that was \"at infinity\" snaps into view and is revealed as an ordinary point in the new picture plane (and at the same time points that <em>were previously</em> in view have now disappeared).</p>\n\n<p>Note that a statement like</p>\n\n<blockquote>\n <p>given any two distinct points in the picture plane, there exists a unique line in the picture plane through the given points</p>\n</blockquote>\n\n<p>which describes a true fact about the geometry of the picture plane corresponds, via this dictionary, to</p>\n\n<blockquote>\n <p>given any two distinct lines through the origin in 3-space, there exists a unique plane through the origin containing both lines</p>\n</blockquote>\n\n<p>which is a true statement about 3-space.</p>\n\n<p>Now that we have this dictionary set up, let's think about what parallel lines in the picture plane are. As the image below shows, parallel lines in the picture plane correspond to two planes in $\\mathbb{R}^3$. (More precisely, lines $f$ and $g$ are \"parallel\" in the blue picture plane, and correspond to the orange and yellow planes through the origin.)</p>\n\n<p><a href=\"https://i.stack.imgur.com/J3BxN.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/J3BxN.png\" alt=\"enter image description here\"></a></p>\n\n<p>Those planes are not parallel; they intersect, but the intersection is a line <em>that does not pierce the picture plane</em>. (In this illustration, the line of intersection happens to be the $y$-axis.) That line in $\\mathbb{R}^3$ corresponds to a point \"at infinity\".</p>\n\n<p>Of course if you choose a different picture plane, then the lines will no longer appear parallel, and their intersection will be plainly visible as a point in the new picture plane. The final image below shows this. 
In that image, the orange and yellow planes are the same ones as before, but the blue picture plane has been moved and reoriented; the images of the two planes are now <em>non-parallel</em> lines.</p>\n\n<p><a href=\"https://i.stack.imgur.com/iRBlv.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/iRBlv.png\" alt=\"enter image description here\"></a></p>\n\n<p>In summary:</p>\n\n<ul>\n<li>If the 2-dimensional plane is understood as an image of an ambient 3-dimensional space, then a \"point\" is just the image of a <em>line through the origin</em> and a \"line\" is just the image of a <em>plane through the origin</em></li>\n<li>Some \"points\" (and the \"line\" containing that point) will have no image in the picture plane, which we describe by saying that they are \"at infinity\"</li>\n<li>However being \"at infinity\" is an artifact of which picture plane we use; changing picture planes changes which points are at infinity</li>\n<li>Furthermore whether two \"lines\" are parallel is an artifact of the picture plane we use to view them; lines that seem parallel from one perspective are non-parallel from a different perspective</li>\n<li>The connection between the last two bullets is that \"lines\" that appear parallel in one picture plane <em>do</em> intersect, but the point of intersection is a point \"at infinity\" <em>relative to the picture plane</em>; change perspectives and the point is no longer at infinity, and the lines are no longer parallel.</li>\n<li>All of the above corresponds more-or-less exactly with how the human eye works, which is why perspective art can do such a good job at fooling the eye and creating the illusion of depth on a flat surface.</li>\n</ul>\n" }, { "answer_id": 2089676, "score": 24, "text": "<p>Lines are parallel if they lie in the same plane and they don't intersect. 
In other geometries, there may be no parallel lines, lines may not have a common point but they may have a common limit point at infinity, or they may just not intersect.</p>\n\n<p>You might be thinking about Projective Geometry where a \"point at infinity\" is added to every family of parallel lines and the set of all points at infinity is called the \"line at infinity\".</p>\n\n<p>Addendum</p>\n\n<hr>\n\n<p>Typically, in formal geometry, points, lines, and planes are not defined. But postulates define their \"baseline\" behavior. The postulate that every geometry seems to agree on is the one that states </p>\n\n<p>L1: Given two distinct points in a plane, there is exactly one line in that plane that contains them.</p>\n\n<p>The \"dual\" of that postulate is</p>\n\n<p>DL1: Given two distinct lines in a plane, there is exactly one point in that plane that belongs to both lines.</p>\n\n<p>Since Euclidean geometry contains parallel lines, DL1 is false there. But Projective geometry accepts DL1 as a postulate. The big question is, \"Does there exist a geometry that satisfies the postulates of Projective geometry?\" Yes there does.</p>\n\n<p>The creation of such a geometry is really quite clever. You start with a Euclidean plane and you add points to it as follows. Pick any line in the plane. To that line and all lines parallel to it, you add one extra point, a point at infinity. This is a set operation. We are treating a Euclidean line, $l$, as a set of points and we are adding a non-Euclidean point $p$ to that set, $l' = l \\cup \\{p\\}$.</p>\n\n<p>Adding this point to those lines means that those lines are no longer parallel.\nDefine the set of all points at infinity to be the line at infinity. The Projective plane is the Euclidean plane with all points at infinity and the line at infinity added to it. This particular Projective plane can be proved to satisfy DL1.</p>\n\n<p>In the other direction, pick any line, $l$, in the projective plane, $\\mathbb P^2$, and remove it. 
What you end up with is the Euclidean plane, $\\mathbb E^2 \\cong \\mathbb P^2 - \\{l\\}$. Some lines will still intersect. Those that intersected at a point on the line that was removed will now be parallel.</p>\n" }, { "answer_id": 2089655, "score": 13, "text": "<p>I would say if we stand next to each other and both walk due north, we eventually bump into each other near the north pole</p>\n\n<p>Note, as several have commented, your premise is slightly off. What you can convince your friend of is that any two 'straight lines' i.e. geodesics cross on a sphere (not parallel lines), whereas on a plane, for any line there is a special family of lines that never touch it.</p>\n" }, { "answer_id": 2089925, "score": 10, "text": "<p>As I said in comments, you can't convince someone with something which is not true.</p>\n<p>IMO if someone knows basic geometry then he he must be aware of the fact that :</p>\n<h1>Parallel lines do not meet at a point.</h1>\n<p><a href=\"https://en.wikipedia.org/wiki/Parallel_(geometry)\" rel=\"nofollow noreferrer\">This</a> section of Wikipedia worth a lot here:</p>\n<blockquote>\n<p>In geometry, parallel lines are lines in a plane which do not meet; that is, two lines in a plane that do not intersect or touch each other at any point are said to be parallel. By extension, a line and a plane, or two planes, in three-dimensional Euclidean space that do not share a point are said to be parallel.</p>\n</blockquote>\n<p>Though if you are convincing/arguing with someone naive in field of geometry that Parallel line do not meet, ask him why people call him by his/her name and not by <a href=\"https://math.stackexchange.com/users/401699/justin-bieber\">Justin Bieber</a>/<a href=\"https://math.stackexchange.com/users/143816/selena-gomez\">Selena Gomez</a>? The expected answer would be that <em>Because it is my name</em> and that's it. You got him/her. 
Actually parallel lines cannot meet at a point or intersect because they are defined that way, if two lines will intersect then they will not remain parallel lines.</p>\n" }, { "answer_id": 2089827, "score": 7, "text": "<p>When I was 16 or so I was bored trying to solve my math homework, so I played with a magnifying glass, and I noticed something interesting:</p>\n\n<p>If you look at the grid paper with a magnifying glass the lines remain parallel (this is evident when they are more or less on the \"top of the glass\"), but all the lines meet at the edge of the glass.</p>\n\n<p>(I went to my older brother, who was an engineering freshman at the time, and I told him that parallel lines <em>can</em> meet; but he replied that they cannot because it's an axiom. Some years later I learned that there is a thing called non-Euclidean geometry.)</p>\n\n<p>&nbsp;</p>\n\n<p>Note, by the way, this might require you to think about the glass as infinite.</p>\n" }, { "answer_id": 2089683, "score": 6, "text": "<p>In his comment @JackyChong has identified the preliminary problem. The definition of \"parallel\" is clear: lines that don't meet, so there are no parallel lines that meet. The real question is the definition of \"line\".</p>\n\n<p>Geodesic is the most natural for geometry on the sphere. Then in this geometry there are no parallel lines.</p>\n\n<p>For projective geometry one definition is to add a \"point at infinity\" on each line, and then make the added points into a \"line at infinity\". With these extra points and lines there are no parallel lines. Two that are parallel in the Euclidean plane share their point at infinity. The railroad track analogy helps here.</p>\n\n<p>In hyperbolic geometry there are multiple lines parallel to a given line through a point not on that line. 
If you get that far with the \"someone who knows no mathematics\" you can show him or her the <a href=\"https://en.wikipedia.org/wiki/Poincar%C3%A9_disk_model\" rel=\"nofollow noreferrer\">Poincare model</a>.</p>\n" }, { "answer_id": 2089656, "score": 5, "text": "<p>I would use two meridians on Earth that touch at the poles.</p>\n" }, { "answer_id": 2089933, "score": 5, "text": "<p>This is a matter of point of view :)</p>\n\n<p><a href=\"https://i.stack.imgur.com/lGePi.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/lGePi.png\" alt=\"enter image description here\"></a></p>\n" }, { "answer_id": 2089660, "score": 4, "text": "<p>You can actually speak of trains. Rail ways are pretty suggestive.</p>\n" }, { "answer_id": 2089976, "score": 4, "text": "<p>You cannot convince any mathematician let alone your friend as it is false. They can however grapple with the idea that for purely theoretical calculations we assume that 2 parallel lines can meet at a point in infinity. Should you wish to travel to infinity to prove this point then please send a postcard when you get there.</p>\n" }, { "answer_id": 2103114, "score": 3, "text": "<p>Someone who knows basic mathematics can understand it easily since according to the definition of parallel lines<br> <strong>Parallel lines:</strong> The lines those distance is constant are called <a href=\"https://www.google.co.in/#q=parallel+lines+definition\" rel=\"nofollow noreferrer\"><strong>parallel lines</strong>[see the topmost definition]</a></p>\n\n<p>Now, as we know the definition the coinciding lines are also parallel since the distance between them is constant or $0$. Hence, as coinciding lines touch each other implies parallel lines also touch. </p>\n\n<p><a href=\"https://www.youtube.com/watch?v=BKorP55Aqvg\" rel=\"nofollow noreferrer\"><strong>See the video here</strong> also</a></p>\n" }, { "answer_id": 2089712, "score": 2, "text": "<p>I would draw two parallel lines on a piece of paper. 
Then I would bring all the edges together into one point and I would show that the parallel lines would touch into that point.</p>\n" }, { "answer_id": 2089986, "score": 2, "text": "<p>A sentence said to me by my mother when I was a kid and that I didn't understand at that time was:</p>\n\n<blockquote>\n <p>Parallel lines are lines that intersect at infinity.</p>\n</blockquote>\n\n<p>This statement and its explanation though art (as suggested by Pedro Tamaroff in the comments) and by showing how many statements and proofs are simplified (if the person knows more math) is a good introduction to the idea that thinking on parallel lines as lines meeting at infinity makes sense and can be useful.</p>\n\n<h2>In art</h2>\n\n<p>This, as you can also see in Yves Daoust's answer, can be seen whenever you look a straight road going to the horizon. For example, see <a href=\"https://commons.wikimedia.org/wiki/File:Straight_Road,_B17_-_geograph.org.uk_-_232427.jpg\" rel=\"nofollow noreferrer\">Wikimedia Commons</a> for the source,\n<a href=\"https://i.stack.imgur.com/cnNkW.jpg\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/cnNkW.jpg\" alt=\"Source: Wikimedia Commons\"></a><br>\nThe above image shows the reason why this conception was in the beginning a concept originated in art as it was really useful for representations of how we see things. An striking application of this can be seen in the <a href=\"https://en.wikipedia.org/wiki/Santa_Maria_presso_San_Satiro\" rel=\"nofollow noreferrer\">Santa Maria presso San Satiro</a>, where projective geometry was used emulating an absent space in a church. However, there are many other examples.</p>\n\n<h2>In geometry</h2>\n\n<p>In usual geometry (i.e. affine and Euclidean geometry), the above definition or statement does not make sense as parallel lines don't intersect. 
This can be the definition in the plane, but it is generally (in higher dimensions) a result derived from the definition of parallel lines as lines with the same direction.</p>\n\n<p>However, the reason why we can still make sense of the above statement about intersection at infinity is because affine geometry can be put inside projective geometry. When doing this, the points outside the affine space are called \"points at infinity\", parallel lines intersect at them and become the same as intersecting lines simplifying enormously many statements and proofs by permitting one to not distinguish cases. An example can be <a href=\"https://en.wikipedia.org/wiki/Pappus%27s_hexagon_theorem\" rel=\"nofollow noreferrer\">Pappus's hexagon theorem</a>.</p>\n\n<h2>In conclusion</h2>\n\n<p>In conclusion, don't try to convince or show that parallel lines touch. Just try to explain the usefulness of thinking of parallel lines as lines that intersect at infinity. In the Renaissance, you have many examples of why this a useful statement from the point of view of representing reality and perspectives well; in mathematics, there are many examples of how this is really useful for simplifying statements and proofs in geometry.</p>\n" }, { "answer_id": 2089964, "score": 1, "text": "<p>In Elliptic geometry the supposition is false.</p>\n\n<p>In Euclidean geometry parallel lines \"meet\" and touch at infinity as their slope is same.</p>\n\n<p>In flat Hyperbolic geometry parallel lines can also touch but only at at infinity.</p>\n\n<p>In the flat Poincare disk model circle segment geodesic parallels meet tangentially only at infinitely distant points on the boundary of the \"horizon\" or boundary circle. 
The lines can be seen to touch and move on the boundary seen in the Wiki link.</p>\n\n<p><a href=\"http://mathworld.wolfram.com/PoincareHyperbolicDisk.html\" rel=\"nofollow noreferrer\"> Poincare Disk_MathWorld </a></p>\n\n<p>So also in the half-plane model semi-circular geodesics meet at point at $\\infty$, on the x-axis. An animation can be also found elsewhere semi-circles touch on x-axis while changing semi-circle size during movement.</p>\n\n<p><a href=\"http://math.etsu.edu/multicalc/prealpha/Chap3/Chap3-8/part4.htm\" rel=\"nofollow noreferrer\">PoincareHalfPlane</a></p>\n\n<p>Both examples are touching parallel lines, touching tangentially /asymptotically at point infinity.</p>\n\n<p><a href=\"http://math.etsu.edu/multicalc/prealpha/Chap3/Chap3-8/part4.htm\" rel=\"nofollow noreferrer\">http://math.etsu.edu/multicalc/prealpha/Chap3/Chap3-8/part4.htm</a></p>\n" }, { "answer_id": 2103316, "score": 1, "text": "<p>One way to explain this is to think of the Euclidean plane as a picture of (part of) an ambient 3-dimensional world. This corresponds to how our vision actually works: light from the three-dimensional world is projected through the lens of an eyes onto the surface at the back of our eye.</p>\n\n<p>In more detail: Place a plane anywhere in three-dimensional space (as long as it doesn't pass through the origin). Call this the \"picture plane\". Now any <em>line through the origin</em> in $\\mathbb{R}^3$ pierces the picture plane in exactly one point (as long as the line it is not parallel to the picture plane). Any <em>plane through the origin</em> pierces the picture plane in exactly one line (again, as long as the plane is not parallel to the picture plane). 
See below (in which the picture plane is shown in blue).</p>\n\n<p><a href=\"https://i.stack.imgur.com/Dod2D.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/Dod2D.png\" alt=\"enter image description here\"></a>\n<a href=\"https://i.stack.imgur.com/KRnSK.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/KRnSK.png\" alt=\"enter image description here\"></a></p>\n\n<p>Now we have a dictionary:\n$$\\textrm{point in picture plane} \\longleftrightarrow \\textrm{line through origin in 3-space}$$\n$$\\textrm{line in picture plane} \\longleftrightarrow \\textrm{plane through origin in 3-space}$$</p>\n\n<p>This dictionary is not quite 1-to-1, because there are lines (and one plane) through the origin in $\\mathbb{R}^3$ that are parallel to the picture plane, and therefore do not correspond to any points or to a line in the picture plane. These are points and this line can be considered \"at infinity\", but really all this means is that they do not have images in the picture plane.</p>\n\n<p>You can experience this visually by holding your finger at arm's length in front of and a few inches above. As you draw your arm closer (while maintaining its height) the location of your fingertip seems to get \"higher\" or \"farther away\" in your field of view, as your eyes have to strain more and more to see it. Eventually when the fingertip is directly above the eye, the finger \"disappears\". It's not really gone, of course, but the \"line of sight\" from the fingertip to the lens of your eye is parallel to the retina, so that the fingertip cannot be seen. Perceptually, the fingertip has receded \"infinitely far away\" in the picture plane -- but it is actually just a couple of inches away in 3-space.</p>\n\n<p><strong>However</strong>, we all know what to do in that circumstance: turn your head! This corresponds to choosing a new picture plane with a different orientation. 
Now the point that was \"at infinity\" snaps into view and is revealed as an ordinary point in the new picture plane (and at the same time points that <em>were previously</em> in view have no disappeared).</p>\n\n<p>Note that a statement like</p>\n\n<blockquote>\n <p>given any two distinct points in the picture plane, there exists a unique line in the picture plane through the given points</p>\n</blockquote>\n\n<p>which describes a true fact about the geometry of the picture plane corresponds, via this dictionary, to</p>\n\n<blockquote>\n <p>given any two distinct lines through the origin in 3-space, there exists a unique plane through the origin containing both lines</p>\n</blockquote>\n\n<p>which is a true statement about 3-space.</p>\n\n<p>Now that we have this dictionary set up, let's think about what parallel lines in the picture plane are. As the image below shows, parallel lines in the picture plane correspond to two planes in $\\mathbb{R}^3$. (More precisely, lines $f$ and $g$ are \"parallel\" in the blue picture plane, and correspond to the orange and yellow planes through the origin.)</p>\n\n<p><a href=\"https://i.stack.imgur.com/J3BxN.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/J3BxN.png\" alt=\"enter image description here\"></a></p>\n\n<p>Those planes are not parallel; they intersect, but the intersection is a line <em>that does not pierce the picture plane</em>. (In this illustration, the line of intersection happens to be the $y$-axis.) That line in $\\mathbb{R}^3$ corresponds to a point \"at infinity\".</p>\n\n<p>Of course if you choose a different picture plane, then the lines will no longer appear parallel, and their intersection will be plainly visible as a point in the new picture plane. The final image below shows this. 
In that image, the orange and yellow planes are the same ones as before, but the blue picture plane has been moved and reoriented; the images of the two planes are now <em>non-parallel</em> lines.</p>\n\n<p><a href=\"https://i.stack.imgur.com/iRBlv.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/iRBlv.png\" alt=\"enter image description here\"></a></p>\n\n<p>In summary:</p>\n\n<ul>\n<li>If the 2-dimensional plane is understood as an image of an ambient 3-dimensional space, then a \"point\" is just the image of a <em>line through the origin</em> and a \"line\" is just the image of a <em>plane through the origin</em></li>\n<li>Some \"points\" (and the \"line\" containing that point) will have no image in the picture plane, which we describe by saying that they are \"at infinity\"</li>\n<li>However being \"at infinity\" is an artifact of which picture plane we use; changing picture planes changes which points are at infinity</li>\n<li>Furthermore whether two \"lines\" are parallel is an artifact of the picture plane we use to view them; lines that seem parallel from one perspective are non-parallel from a different perspective</li>\n<li>The connection between the last two bullets is that \"lines\" that appear parallel in one picture plane <em>do</em> intersect, but the point of intersection is a point \"at infinity\" <em>relative to the picture plane</em>; change perspectives and the point is no longer at infinity, and the lines are no longer parallel.</li>\n<li>All of the above corresponds more-or-less exactly with how the human eye works, which is why perspective art can do such a good job at fooling the eye and creating the illusion of depth on a flat surface.</li>\n</ul>\n" }, { "answer_id": 2089676, "score": 24, "text": "<p>Lines are parallel if they lie in the same plane and they don't intersect. 
In other geometries, there may be no parallel lines, lines may not have a common point but they may have a common limit point at infinity, or they may just not intersect.</p>\n\n<p>You might be thinking about Projective Geometry where a \"point at infinity\" is added to every family of parallel lines and the set of all points at infinity is called the \"line at infinity\".</p>\n\n<p>Addendum</p>\n\n<hr>\n\n<p>Typically, in formal geometry, points, lines, and planes are not defined. But postulates define their \"baseline\" behavior. The postulate that every geometry seems to agree on is the one that states </p>\n\n<p>L1: Given two distinct points in a plane, there is exactly one line in that plane that contains them.</p>\n\n<p>The \"dual\" of that postulate is</p>\n\n<p>DL1: Given two distinct lines in a plane, there is exactly one point in that plane that belongs to both lines.</p>\n\n<p>Since Euclidean geometry contains parallel lines, DL2 is false. But Projective geometry accepts DL2 as a postulate. The big question is, \"Does there exists a geometry that satisfies the postulates of Projective geometry?\" Yes there does.</p>\n\n<p>The creation of such a geometry is really quite clever. You start with a Euclidean plane and you add points to it as follows. Pick any line in the plane. To that line and all lines parallel to it, you add one extra point, a point at infinity. This is a set thing. We are treating a Euclidean line, $l$, as a set of points and we are adding a non Euclidean point $p$ to that set, $l' = l \\cup \\{p\\}$.</p>\n\n<p>Adding this point to those lines means that those lines are no longer parallel.\nDefine the set of all points at infinity to be the line at infinity. The Projective plane is the Euclidian plane with all points at infinity and the line at infinity added to it. This particular Projective plane can be proved to satisfy DL2.</p>\n\n<p>In the other direction, pick any line, $l$, in the projective plane, $\\mathbb P^2$, and remove it. 
What you end up with is the Euclidean plane, $\\mathbb E^2 \\cong \\mathbb P^2 - \\{l\\}$. Some lines will still intersect. Those that intersected at a point on the line that was removed will now be parallel.</p>\n" }, { "answer_id": 2089655, "score": 13, "text": "<p>I would say if we stand next to each other and both walk due north, we eventually bump into each other near the north pole.</p>\n\n<p>Note, as several have commented, your premise is slightly off. What you can convince your friend of is that any two 'straight lines', i.e. geodesics, cross on a sphere (not parallel lines), whereas on a plane, for any line there is a special family of lines that never touch it.</p>\n" }, { "answer_id": 2089925, "score": 10, "text": "<p>As I said in comments, you can't convince someone with something which is not true.</p>\n<p>IMO if someone knows basic geometry then he must be aware of the fact that:</p>\n<h1>Parallel lines do not meet at a point.</h1>\n<p><a href=\"https://en.wikipedia.org/wiki/Parallel_(geometry)\" rel=\"nofollow noreferrer\">This</a> section of Wikipedia is worth a lot here:</p>\n<blockquote>\n<p>In geometry, parallel lines are lines in a plane which do not meet; that is, two lines in a plane that do not intersect or touch each other at any point are said to be parallel. By extension, a line and a plane, or two planes, in three-dimensional Euclidean space that do not share a point are said to be parallel.</p>\n</blockquote>\n<p>Though if you are convincing/arguing with someone naive in the field of geometry that parallel lines do not meet, ask him why people call him by his/her name and not by <a href=\"https://math.stackexchange.com/users/401699/justin-bieber\">Justin Bieber</a>/<a href=\"https://math.stackexchange.com/users/143816/selena-gomez\">Selena Gomez</a>? The expected answer would be that <em>Because it is my name</em> and that's it. You got him/her. 
Actually, parallel lines cannot meet at a point or intersect because they are defined that way: if two lines intersect, then they are not parallel lines.</p>\n" }, { "answer_id": 2089827, "score": 7, "text": "<p>When I was 16 or so I was bored trying to solve my math homework, so I played with a magnifying glass, and I noticed something interesting:</p>\n\n<p>If you look at the grid paper with a magnifying glass the lines remain parallel (this is evident when they are more or less on the \"top of the glass\"), but all the lines meet at the edge of the glass.</p>\n\n<p>(I went to my older brother, who was an engineering freshman at the time, and I told him that parallel lines <em>can</em> meet; but he replied that they cannot because it's an axiom. Some years later I learned that there is a thing called non-Euclidean geometry.)</p>\n\n<p>&nbsp;</p>\n\n<p>Note, by the way, this might require you to think about the glass as infinite.</p>\n" }, { "answer_id": 2089683, "score": 6, "text": "<p>In his comment @JackyChong has identified the preliminary problem. The definition of \"parallel\" is clear: lines that don't meet, so there are no parallel lines that meet. The real question is the definition of \"line\".</p>\n\n<p>A geodesic is the most natural notion of \"line\" for geometry on the sphere. Then in this geometry there are no parallel lines.</p>\n\n<p>For projective geometry one definition is to add a \"point at infinity\" on each line, and then make the added points into a \"line at infinity\". With these extra points and lines there are no parallel lines. Two that are parallel in the Euclidean plane share their point at infinity. The railroad track analogy helps here.</p>\n\n<p>In hyperbolic geometry there are multiple lines parallel to a given line through a point not on that line. 
If you get that far with the \"someone who knows no mathematics\" you can show him or her the <a href=\"https://en.wikipedia.org/wiki/Poincar%C3%A9_disk_model\" rel=\"nofollow noreferrer\">Poincare model</a>.</p>\n" }, { "answer_id": 2089656, "score": 5, "text": "<p>I would use two meridians on Earth that touch at the poles.</p>\n" }, { "answer_id": 2089933, "score": 5, "text": "<p>This is a matter of point of view :)</p>\n\n<p><a href=\"https://i.stack.imgur.com/lGePi.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/lGePi.png\" alt=\"enter image description here\"></a></p>\n" }, { "answer_id": 2089660, "score": 4, "text": "<p>You can actually speak of trains. Railways are pretty suggestive.</p>\n" }, { "answer_id": 2089976, "score": 4, "text": "<p>You cannot convince any mathematician, let alone your friend, as it is false. They can however grapple with the idea that for purely theoretical calculations we assume that 2 parallel lines can meet at a point at infinity. Should you wish to travel to infinity to prove this point then please send a postcard when you get there.</p>\n" }, { "answer_id": 2103114, "score": 3, "text": "<p>Someone who knows basic mathematics can understand it easily from the definition of parallel lines.<br> <strong>Parallel lines:</strong> Lines whose distance apart is constant are called <a href=\"https://www.google.co.in/#q=parallel+lines+definition\" rel=\"nofollow noreferrer\"><strong>parallel lines</strong> [see the topmost definition]</a></p>\n\n<p>Now, by this definition, coinciding lines are also parallel, since the distance between them is constant, namely $0$. Hence, since coinciding lines touch each other, parallel lines can also touch. </p>\n\n<p><a href=\"https://www.youtube.com/watch?v=BKorP55Aqvg\" rel=\"nofollow noreferrer\"><strong>See the video here</strong> also</a></p>\n" }, { "answer_id": 2089712, "score": 2, "text": "<p>I would draw two parallel lines on a piece of paper. 
Then I would bring all the edges together into one point and I would show that the parallel lines would touch at that point.</p>\n" }, { "answer_id": 2089986, "score": 2, "text": "<p>A sentence said to me by my mother when I was a kid and that I didn't understand at that time was:</p>\n\n<blockquote>\n <p>Parallel lines are lines that intersect at infinity.</p>\n</blockquote>\n\n<p>This statement and its explanation through art (as suggested by Pedro Tamaroff in the comments) and by showing how many statements and proofs are simplified (if the person knows more math) is a good introduction to the idea that thinking of parallel lines as lines meeting at infinity makes sense and can be useful.</p>\n\n<h2>In art</h2>\n\n<p>This, as you can also see in Yves Daoust's answer, can be seen whenever you look at a straight road going to the horizon. For example, see <a href=\"https://commons.wikimedia.org/wiki/File:Straight_Road,_B17_-_geograph.org.uk_-_232427.jpg\" rel=\"nofollow noreferrer\">Wikimedia Commons</a> for the source,\n<a href=\"https://i.stack.imgur.com/cnNkW.jpg\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/cnNkW.jpg\" alt=\"Source: Wikimedia Commons\"></a><br>\nThe above image shows why this conception originated in art: it was really useful for representations of how we see things. A striking application of this can be seen in the <a href=\"https://en.wikipedia.org/wiki/Santa_Maria_presso_San_Satiro\" rel=\"nofollow noreferrer\">Santa Maria presso San Satiro</a>, where projective geometry was used to emulate an absent space in a church. However, there are many other examples.</p>\n\n<h2>In geometry</h2>\n\n<p>In usual geometry (i.e. affine and Euclidean geometry), the above definition or statement does not make sense, as parallel lines don't intersect. 
This can be the definition in the plane, but it is generally (in higher dimensions) a result derived from the definition of parallel lines as lines with the same direction.</p>\n\n<p>However, the reason why we can still make sense of the above statement about intersection at infinity is because affine geometry can be put inside projective geometry. When doing this, the points outside the affine space are called \"points at infinity\"; parallel lines intersect at them and become the same as intersecting lines, enormously simplifying many statements and proofs by permitting one to not distinguish cases. An example can be <a href=\"https://en.wikipedia.org/wiki/Pappus%27s_hexagon_theorem\" rel=\"nofollow noreferrer\">Pappus's hexagon theorem</a>.</p>\n\n<h2>In conclusion</h2>\n\n<p>In conclusion, don't try to convince or show that parallel lines touch. Just try to explain the usefulness of thinking of parallel lines as lines that intersect at infinity. In the Renaissance, you have many examples of why this is a useful statement from the point of view of representing reality and perspectives well; in mathematics, there are many examples of how this is really useful for simplifying statements and proofs in geometry.</p>\n" }, { "answer_id": 2089964, "score": 1, "text": "<p>In Elliptic geometry the supposition is false.</p>\n\n<p>In Euclidean geometry parallel lines \"meet\" and touch at infinity as their slopes are the same.</p>\n\n<p>In flat Hyperbolic geometry parallel lines can also touch, but only at infinity.</p>\n\n<p>In the flat Poincare disk model, circle-segment geodesic parallels meet tangentially only at infinitely distant points on the boundary of the \"horizon\" or boundary circle. 
The lines can be seen to touch and move on the boundary seen in the Wiki link.</p>\n\n<p><a href=\"http://mathworld.wolfram.com/PoincareHyperbolicDisk.html\" rel=\"nofollow noreferrer\"> Poincare Disk_MathWorld </a></p>\n\n<p>So also in the half-plane model, semi-circular geodesics meet at a point at $\\infty$ on the x-axis. An animation can also be found elsewhere, in which semi-circles touch on the x-axis while changing size as they move.</p>\n\n<p><a href=\"http://math.etsu.edu/multicalc/prealpha/Chap3/Chap3-8/part4.htm\" rel=\"nofollow noreferrer\">PoincareHalfPlane</a></p>\n\n<p>Both examples are touching parallel lines, touching tangentially/asymptotically at a point at infinity.</p>\n\n<p><a href=\"http://math.etsu.edu/multicalc/prealpha/Chap3/Chap3-8/part4.htm\" rel=\"nofollow noreferrer\">http://math.etsu.edu/multicalc/prealpha/Chap3/Chap3-8/part4.htm</a></p>\n" }, { "answer_id": 2103316, "score": 1, "text": "<p>One way to explain this is to think of the Euclidean plane as a picture of (part of) an ambient 3-dimensional world. This corresponds to how our vision actually works: light from the three-dimensional world is projected through the lens of an eye onto the surface at the back of our eye.</p>\n\n<p>In more detail: Place a plane anywhere in three-dimensional space (as long as it doesn't pass through the origin). Call this the \"picture plane\". Now any <em>line through the origin</em> in $\\mathbb{R}^3$ pierces the picture plane in exactly one point (as long as the line is not parallel to the picture plane). Any <em>plane through the origin</em> pierces the picture plane in exactly one line (again, as long as the plane is not parallel to the picture plane). 
See below (in which the picture plane is shown in blue).</p>\n\n<p><a href=\"https://i.stack.imgur.com/Dod2D.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/Dod2D.png\" alt=\"enter image description here\"></a>\n<a href=\"https://i.stack.imgur.com/KRnSK.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/KRnSK.png\" alt=\"enter image description here\"></a></p>\n\n<p>Now we have a dictionary:\n$$\\textrm{point in picture plane} \\longleftrightarrow \\textrm{line through origin in 3-space}$$\n$$\\textrm{line in picture plane} \\longleftrightarrow \\textrm{plane through origin in 3-space}$$</p>\n\n<p>This dictionary is not quite 1-to-1, because there are lines (and one plane) through the origin in $\\mathbb{R}^3$ that are parallel to the picture plane, and therefore do not correspond to any points or to a line in the picture plane. These points and this line can be considered \"at infinity\", but really all this means is that they do not have images in the picture plane.</p>\n\n<p>You can experience this visually by holding your finger at arm's length in front of and a few inches above your eye. As you draw your arm closer (while maintaining its height) the location of your fingertip seems to get \"higher\" or \"farther away\" in your field of view, as your eyes have to strain more and more to see it. Eventually when the fingertip is directly above the eye, the finger \"disappears\". It's not really gone, of course, but the \"line of sight\" from the fingertip to the lens of your eye is parallel to the retina, so that the fingertip cannot be seen. Perceptually, the fingertip has receded \"infinitely far away\" in the picture plane -- but it is actually just a couple of inches away in 3-space.</p>\n\n<p><strong>However</strong>, we all know what to do in that circumstance: turn your head! This corresponds to choosing a new picture plane with a different orientation. 
Now the point that was \"at infinity\" snaps into view and is revealed as an ordinary point in the new picture plane (and at the same time points that <em>were previously</em> in view have now disappeared).</p>\n\n<p>Note that a statement like</p>\n\n<blockquote>\n <p>given any two distinct points in the picture plane, there exists a unique line in the picture plane through the given points</p>\n</blockquote>\n\n<p>which describes a true fact about the geometry of the picture plane corresponds, via this dictionary, to</p>\n\n<blockquote>\n <p>given any two distinct lines through the origin in 3-space, there exists a unique plane through the origin containing both lines</p>\n</blockquote>\n\n<p>which is a true statement about 3-space.</p>\n\n<p>Now that we have this dictionary set up, let's think about what parallel lines in the picture plane are. As the image below shows, parallel lines in the picture plane correspond to two planes in $\\mathbb{R}^3$. (More precisely, lines $f$ and $g$ are \"parallel\" in the blue picture plane, and correspond to the orange and yellow planes through the origin.)</p>\n\n<p><a href=\"https://i.stack.imgur.com/J3BxN.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/J3BxN.png\" alt=\"enter image description here\"></a></p>\n\n<p>Those planes are not parallel; they intersect, but the intersection is a line <em>that does not pierce the picture plane</em>. (In this illustration, the line of intersection happens to be the $y$-axis.) That line in $\\mathbb{R}^3$ corresponds to a point \"at infinity\".</p>\n\n<p>Of course if you choose a different picture plane, then the lines will no longer appear parallel, and their intersection will be plainly visible as a point in the new picture plane. The final image below shows this. 
In that image, the orange and yellow planes are the same ones as before, but the blue picture plane has been moved and reoriented; the images of the two planes are now <em>non-parallel</em> lines.</p>\n\n<p><a href=\"https://i.stack.imgur.com/iRBlv.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/iRBlv.png\" alt=\"enter image description here\"></a></p>\n\n<p>In summary:</p>\n\n<ul>\n<li>If the 2-dimensional plane is understood as an image of an ambient 3-dimensional space, then a \"point\" is just the image of a <em>line through the origin</em> and a \"line\" is just the image of a <em>plane through the origin</em></li>\n<li>Some \"points\" (and the \"line\" containing that point) will have no image in the picture plane, which we describe by saying that they are \"at infinity\"</li>\n<li>However being \"at infinity\" is an artifact of which picture plane we use; changing picture planes changes which points are at infinity</li>\n<li>Furthermore whether two \"lines\" are parallel is an artifact of the picture plane we use to view them; lines that seem parallel from one perspective are non-parallel from a different perspective</li>\n<li>The connection between the last two bullets is that \"lines\" that appear parallel in one picture plane <em>do</em> intersect, but the point of intersection is a point \"at infinity\" <em>relative to the picture plane</em>; change perspectives and the point is no longer at infinity, and the lines are no longer parallel.</li>\n<li>All of the above corresponds more-or-less exactly with how the human eye works, which is why perspective art can do such a good job at fooling the eye and creating the illusion of depth on a flat surface.</li>\n</ul>\n" }, { "answer_id": 2089676, "score": 24, "text": "<p>Lines are parallel if they lie in the same plane and they don't intersect. 
In other geometries, there may be no parallel lines, lines may not have a common point but they may have a common limit point at infinity, or they may just not intersect.</p>\n\n<p>You might be thinking about Projective Geometry where a \"point at infinity\" is added to every family of parallel lines and the set of all points at infinity is called the \"line at infinity\".</p>\n\n<p>Addendum</p>\n\n<hr>\n\n<p>Typically, in formal geometry, points, lines, and planes are not defined. But postulates define their \"baseline\" behavior. The postulate that every geometry seems to agree on is the one that states </p>\n\n<p>L1: Given two distinct points in a plane, there is exactly one line in that plane that contains them.</p>\n\n<p>The \"dual\" of that postulate is</p>\n\n<p>DL1: Given two distinct lines in a plane, there is exactly one point in that plane that belongs to both lines.</p>\n\n<p>Since Euclidean geometry contains parallel lines, DL2 is false. But Projective geometry accepts DL2 as a postulate. The big question is, \"Does there exists a geometry that satisfies the postulates of Projective geometry?\" Yes there does.</p>\n\n<p>The creation of such a geometry is really quite clever. You start with a Euclidean plane and you add points to it as follows. Pick any line in the plane. To that line and all lines parallel to it, you add one extra point, a point at infinity. This is a set thing. We are treating a Euclidean line, $l$, as a set of points and we are adding a non Euclidean point $p$ to that set, $l' = l \\cup \\{p\\}$.</p>\n\n<p>Adding this point to those lines means that those lines are no longer parallel.\nDefine the set of all points at infinity to be the line at infinity. The Projective plane is the Euclidian plane with all points at infinity and the line at infinity added to it. This particular Projective plane can be proved to satisfy DL2.</p>\n\n<p>In the other direction, pick any line, $l$, in the projective plane, $\\mathbb P^2$, and remove it. 
What you end up with is the Euclidean plane, $\\mathbb E^2 \\cong \\mathbb P^2 - \\{l\\}$. Some lines will still intersect. Those that intersected at a point on the line that was removed will now be parallel.</p>\n" }, { "answer_id": 2089655, "score": 13, "text": "<p>I would say if we stand next to each other and both walk due north, we eventually bump into each other near the north pole</p>\n\n<p>Note, as several have commented, your premise is slightly off. What you can convince your friend of is that any two 'straight lines' i.e. geodesics cross on a sphere (not parallel lines), whereas on a plane, for any line there is a special family of lines that never touch it.</p>\n" }, { "answer_id": 2089925, "score": 10, "text": "<p>As I said in comments, you can't convince someone with something which is not true.</p>\n<p>IMO if someone knows basic geometry then he he must be aware of the fact that :</p>\n<h1>Parallel lines do not meet at a point.</h1>\n<p><a href=\"https://en.wikipedia.org/wiki/Parallel_(geometry)\" rel=\"nofollow noreferrer\">This</a> section of Wikipedia worth a lot here:</p>\n<blockquote>\n<p>In geometry, parallel lines are lines in a plane which do not meet; that is, two lines in a plane that do not intersect or touch each other at any point are said to be parallel. By extension, a line and a plane, or two planes, in three-dimensional Euclidean space that do not share a point are said to be parallel.</p>\n</blockquote>\n<p>Though if you are convincing/arguing with someone naive in field of geometry that Parallel line do not meet, ask him why people call him by his/her name and not by <a href=\"https://math.stackexchange.com/users/401699/justin-bieber\">Justin Bieber</a>/<a href=\"https://math.stackexchange.com/users/143816/selena-gomez\">Selena Gomez</a>? The expected answer would be that <em>Because it is my name</em> and that's it. You got him/her. 
Actually parallel lines cannot meet at a point or intersect because they are defined that way, if two lines will intersect then they will not remain parallel lines.</p>\n" }, { "answer_id": 2089827, "score": 7, "text": "<p>When I was 16 or so I was bored trying to solve my math homework, so I played with a magnifying glass, and I noticed something interesting:</p>\n\n<p>If you look at the grid paper with a magnifying glass the lines remain parallel (this is evident when they are more or less on the \"top of the glass\"), but all the lines meet at the edge of the glass.</p>\n\n<p>(I went to my older brother, who was an engineering freshman at the time, and I told him that parallel lines <em>can</em> meet; but he replied that they cannot because it's an axiom. Some years later I learned that there is a thing called non-Euclidean geometry.)</p>\n\n<p>&nbsp;</p>\n\n<p>Note, by the way, this might require you to think about the glass as infinite.</p>\n" }, { "answer_id": 2089683, "score": 6, "text": "<p>In his comment @JackyChong has identified the preliminary problem. The definition of \"parallel\" is clear: lines that don't meet, so there are no parallel lines that meet. The real question is the definition of \"line\".</p>\n\n<p>Geodesic is the most natural for geometry on the sphere. Then in this geometry there are no parallel lines.</p>\n\n<p>For projective geometry one definition is to add a \"point at infinity\" on each line, and then make the added points into a \"line at infinity\". With these extra points and lines there are no parallel lines. Two that are parallel in the Euclidean plane share their point at infinity. The railroad track analogy helps here.</p>\n\n<p>In hyperbolic geometry there are multiple lines parallel to a given line through a point not on that line. 
If you get that far with the \"someone who knows no mathematics\" you can show him or her the <a href=\"https://en.wikipedia.org/wiki/Poincar%C3%A9_disk_model\" rel=\"nofollow noreferrer\">Poincare model</a>.</p>\n" }, { "answer_id": 2089656, "score": 5, "text": "<p>I would use two meridians on Earth that touch at the poles.</p>\n" }, { "answer_id": 2089933, "score": 5, "text": "<p>This is a matter of point of view :)</p>\n\n<p><a href=\"https://i.stack.imgur.com/lGePi.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/lGePi.png\" alt=\"enter image description here\"></a></p>\n" }, { "answer_id": 2089660, "score": 4, "text": "<p>You can actually speak of trains. Rail ways are pretty suggestive.</p>\n" }, { "answer_id": 2089976, "score": 4, "text": "<p>You cannot convince any mathematician let alone your friend as it is false. They can however grapple with the idea that for purely theoretical calculations we assume that 2 parallel lines can meet at a point in infinity. Should you wish to travel to infinity to prove this point then please send a postcard when you get there.</p>\n" }, { "answer_id": 2103114, "score": 3, "text": "<p>Someone who knows basic mathematics can understand it easily since according to the definition of parallel lines<br> <strong>Parallel lines:</strong> The lines those distance is constant are called <a href=\"https://www.google.co.in/#q=parallel+lines+definition\" rel=\"nofollow noreferrer\"><strong>parallel lines</strong>[see the topmost definition]</a></p>\n\n<p>Now, as we know the definition the coinciding lines are also parallel since the distance between them is constant or $0$. Hence, as coinciding lines touch each other implies parallel lines also touch. </p>\n\n<p><a href=\"https://www.youtube.com/watch?v=BKorP55Aqvg\" rel=\"nofollow noreferrer\"><strong>See the video here</strong> also</a></p>\n" }, { "answer_id": 2089712, "score": 2, "text": "<p>I would draw two parallel lines on a piece of paper. 
Then I would bring all the edges together into one point and I would show that the parallel lines would touch into that point.</p>\n" }, { "answer_id": 2089986, "score": 2, "text": "<p>A sentence said to me by my mother when I was a kid and that I didn't understand at that time was:</p>\n\n<blockquote>\n <p>Parallel lines are lines that intersect at infinity.</p>\n</blockquote>\n\n<p>This statement and its explanation though art (as suggested by Pedro Tamaroff in the comments) and by showing how many statements and proofs are simplified (if the person knows more math) is a good introduction to the idea that thinking on parallel lines as lines meeting at infinity makes sense and can be useful.</p>\n\n<h2>In art</h2>\n\n<p>This, as you can also see in Yves Daoust's answer, can be seen whenever you look a straight road going to the horizon. For example, see <a href=\"https://commons.wikimedia.org/wiki/File:Straight_Road,_B17_-_geograph.org.uk_-_232427.jpg\" rel=\"nofollow noreferrer\">Wikimedia Commons</a> for the source,\n<a href=\"https://i.stack.imgur.com/cnNkW.jpg\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/cnNkW.jpg\" alt=\"Source: Wikimedia Commons\"></a><br>\nThe above image shows the reason why this conception was in the beginning a concept originated in art as it was really useful for representations of how we see things. An striking application of this can be seen in the <a href=\"https://en.wikipedia.org/wiki/Santa_Maria_presso_San_Satiro\" rel=\"nofollow noreferrer\">Santa Maria presso San Satiro</a>, where projective geometry was used emulating an absent space in a church. However, there are many other examples.</p>\n\n<h2>In geometry</h2>\n\n<p>In usual geometry (i.e. affine and Euclidean geometry), the above definition or statement does not make sense as parallel lines don't intersect. 
This can be the definition in the plane, but it is generally (in higher dimensions) a result derived from the definition of parallel lines as lines with the same direction.</p>\n\n<p>However, the reason why we can still make sense of the above statement about intersection at infinity is because affine geometry can be put inside projective geometry. When doing this, the points outside the affine space are called \"points at infinity\", parallel lines intersect at them and become the same as intersecting lines simplifying enormously many statements and proofs by permitting one to not distinguish cases. An example can be <a href=\"https://en.wikipedia.org/wiki/Pappus%27s_hexagon_theorem\" rel=\"nofollow noreferrer\">Pappus's hexagon theorem</a>.</p>\n\n<h2>In conclusion</h2>\n\n<p>In conclusion, don't try to convince or show that parallel lines touch. Just try to explain the usefulness of thinking of parallel lines as lines that intersect at infinity. In the Renaissance, you have many examples of why this a useful statement from the point of view of representing reality and perspectives well; in mathematics, there are many examples of how this is really useful for simplifying statements and proofs in geometry.</p>\n" }, { "answer_id": 2089964, "score": 1, "text": "<p>In Elliptic geometry the supposition is false.</p>\n\n<p>In Euclidean geometry parallel lines \"meet\" and touch at infinity as their slope is same.</p>\n\n<p>In flat Hyperbolic geometry parallel lines can also touch but only at at infinity.</p>\n\n<p>In the flat Poincare disk model circle segment geodesic parallels meet tangentially only at infinitely distant points on the boundary of the \"horizon\" or boundary circle. 
The lines can be seen to touch and move on the boundary seen in the Wiki link.</p>\n\n<p><a href=\"http://mathworld.wolfram.com/PoincareHyperbolicDisk.html\" rel=\"nofollow noreferrer\"> Poincare Disk_MathWorld </a></p>\n\n<p>So also in the half-plane model semi-circular geodesics meet at point at $\\infty$, on the x-axis. An animation can be also found elsewhere semi-circles touch on x-axis while changing semi-circle size during movement.</p>\n\n<p><a href=\"http://math.etsu.edu/multicalc/prealpha/Chap3/Chap3-8/part4.htm\" rel=\"nofollow noreferrer\">PoincareHalfPlane</a></p>\n\n<p>Both examples are touching parallel lines, touching tangentially /asymptotically at point infinity.</p>\n\n<p><a href=\"http://math.etsu.edu/multicalc/prealpha/Chap3/Chap3-8/part4.htm\" rel=\"nofollow noreferrer\">http://math.etsu.edu/multicalc/prealpha/Chap3/Chap3-8/part4.htm</a></p>\n" }, { "answer_id": 2103316, "score": 1, "text": "<p>One way to explain this is to think of the Euclidean plane as a picture of (part of) an ambient 3-dimensional world. This corresponds to how our vision actually works: light from the three-dimensional world is projected through the lens of an eyes onto the surface at the back of our eye.</p>\n\n<p>In more detail: Place a plane anywhere in three-dimensional space (as long as it doesn't pass through the origin). Call this the \"picture plane\". Now any <em>line through the origin</em> in $\\mathbb{R}^3$ pierces the picture plane in exactly one point (as long as the line it is not parallel to the picture plane). Any <em>plane through the origin</em> pierces the picture plane in exactly one line (again, as long as the plane is not parallel to the picture plane). 
See below (in which the picture plane is shown in blue).</p>\n\n<p><a href=\"https://i.stack.imgur.com/Dod2D.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/Dod2D.png\" alt=\"enter image description here\"></a>\n<a href=\"https://i.stack.imgur.com/KRnSK.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/KRnSK.png\" alt=\"enter image description here\"></a></p>\n\n<p>Now we have a dictionary:\n$$\\textrm{point in picture plane} \\longleftrightarrow \\textrm{line through origin in 3-space}$$\n$$\\textrm{line in picture plane} \\longleftrightarrow \\textrm{plane through origin in 3-space}$$</p>\n\n<p>This dictionary is not quite 1-to-1, because there are lines (and one plane) through the origin in $\\mathbb{R}^3$ that are parallel to the picture plane, and therefore do not correspond to any points or to a line in the picture plane. These are points and this line can be considered \"at infinity\", but really all this means is that they do not have images in the picture plane.</p>\n\n<p>You can experience this visually by holding your finger at arm's length in front of and a few inches above. As you draw your arm closer (while maintaining its height) the location of your fingertip seems to get \"higher\" or \"farther away\" in your field of view, as your eyes have to strain more and more to see it. Eventually when the fingertip is directly above the eye, the finger \"disappears\". It's not really gone, of course, but the \"line of sight\" from the fingertip to the lens of your eye is parallel to the retina, so that the fingertip cannot be seen. Perceptually, the fingertip has receded \"infinitely far away\" in the picture plane -- but it is actually just a couple of inches away in 3-space.</p>\n\n<p><strong>However</strong>, we all know what to do in that circumstance: turn your head! This corresponds to choosing a new picture plane with a different orientation. 
Now the point that was \"at infinity\" snaps into view and is revealed as an ordinary point in the new picture plane (and at the same time points that <em>were previously</em> in view have no disappeared).</p>\n\n<p>Note that a statement like</p>\n\n<blockquote>\n <p>given any two distinct points in the picture plane, there exists a unique line in the picture plane through the given points</p>\n</blockquote>\n\n<p>which describes a true fact about the geometry of the picture plane corresponds, via this dictionary, to</p>\n\n<blockquote>\n <p>given any two distinct lines through the origin in 3-space, there exists a unique plane through the origin containing both lines</p>\n</blockquote>\n\n<p>which is a true statement about 3-space.</p>\n\n<p>Now that we have this dictionary set up, let's think about what parallel lines in the picture plane are. As the image below shows, parallel lines in the picture plane correspond to two planes in $\\mathbb{R}^3$. (More precisely, lines $f$ and $g$ are \"parallel\" in the blue picture plane, and correspond to the orange and yellow planes through the origin.)</p>\n\n<p><a href=\"https://i.stack.imgur.com/J3BxN.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/J3BxN.png\" alt=\"enter image description here\"></a></p>\n\n<p>Those planes are not parallel; they intersect, but the intersection is a line <em>that does not pierce the picture plane</em>. (In this illustration, the line of intersection happens to be the $y$-axis.) That line in $\\mathbb{R}^3$ corresponds to a point \"at infinity\".</p>\n\n<p>Of course if you choose a different picture plane, then the lines will no longer appear parallel, and their intersection will be plainly visible as a point in the new picture plane. The final image below shows this. 
In that image, the orange and yellow planes are the same ones as before, but the blue picture plane has been moved and reoriented; the images of the two planes are now <em>non-parallel</em> lines.</p>\n\n<p><a href=\"https://i.stack.imgur.com/iRBlv.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/iRBlv.png\" alt=\"enter image description here\"></a></p>\n\n<p>In summary:</p>\n\n<ul>\n<li>If the 2-dimensional plane is understood as an image of an ambient 3-dimensional space, then a \"point\" is just the image of a <em>line through the origin</em> and a \"line\" is just the image of a <em>plane through the origin</em></li>\n<li>Some \"points\" (and the \"line\" containing that point) will have no image in the picture plane, which we describe by saying that they are \"at infinity\"</li>\n<li>However being \"at infinity\" is an artifact of which picture plane we use; changing picture planes changes which points are at infinity</li>\n<li>Furthermore whether two \"lines\" are parallel is an artifact of the picture plane we use to view them; lines that seem parallel from one perspective are non-parallel from a different perspective</li>\n<li>The connection between the last two bullets is that \"lines\" that appear parallel in one picture plane <em>do</em> intersect, but the point of intersection is a point \"at infinity\" <em>relative to the picture plane</em>; change perspectives and the point is no longer at infinity, and the lines are no longer parallel.</li>\n<li>All of the above corresponds more-or-less exactly with how the human eye works, which is why perspective art can do such a good job at fooling the eye and creating the illusion of depth on a flat surface.</li>\n</ul>\n" }, { "answer_id": 2089676, "score": 24, "text": "<p>Lines are parallel if they lie in the same plane and they don't intersect. 
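The picture-plane dictionary summarized in the bullet list above can be checked with a small numerical sketch. The following is my own illustration, not part of the original answer; the picture plane is taken to be $z = 1$, and each line in that plane is recorded by the normal vector of the plane through the origin that produces it, so the meeting of two lines is computed with a cross product:

```python
# Sketch (assumptions: picture plane z = 1; a line in the picture plane is
# the trace of a plane through the origin, recorded by its normal (a, b, c),
# i.e. the set of points with a*x + b*y + c*z = 0).

def cross(u, v):
    """Cross product: direction of the intersection line of two planes
    through the origin with normals u and v."""
    return (u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0])

# Parallel picture-plane lines y = 1 and y = 2 come from the planes
# y - z = 0 and y - 2z = 0:
d = cross((0, 1, -1), (0, 1, -2))
print(d)  # (-1, 0, 0): the intersection direction has z = 0, so the common
          # line never pierces z = 1 -- the "point at infinity" of the answer

# Non-parallel lines y = 1 and x = 1 come from y - z = 0 and x - z = 0:
e = cross((0, 1, -1), (1, 0, -1))
print((e[0] / e[2], e[1] / e[2]))  # (1.0, 1.0): an ordinary meeting point
```

Changing the picture plane amounts to projecting the same intersection direction onto a different plane, which is exactly why the "point at infinity" label is relative to the chosen plane.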
In other geometries, there may be no parallel lines, lines may not have a common point but they may have a common limit point at infinity, or they may just not intersect.</p>\n\n<p>You might be thinking about Projective Geometry where a \"point at infinity\" is added to every family of parallel lines and the set of all points at infinity is called the \"line at infinity\".</p>\n\n<p>Addendum</p>\n\n<hr>\n\n<p>Typically, in formal geometry, points, lines, and planes are not defined. But postulates define their \"baseline\" behavior. The postulate that every geometry seems to agree on is the one that states </p>\n\n<p>L1: Given two distinct points in a plane, there is exactly one line in that plane that contains them.</p>\n\n<p>The \"dual\" of that postulate is</p>\n\n<p>DL1: Given two distinct lines in a plane, there is exactly one point in that plane that belongs to both lines.</p>\n\n<p>Since Euclidean geometry contains parallel lines, DL1 is false. But Projective geometry accepts DL1 as a postulate. The big question is, \"Does there exist a geometry that satisfies the postulates of Projective geometry?\" Yes there does.</p>\n\n<p>The creation of such a geometry is really quite clever. You start with a Euclidean plane and you add points to it as follows. Pick any line in the plane. To that line and all lines parallel to it, you add one extra point, a point at infinity. This is a set thing. We are treating a Euclidean line, $l$, as a set of points and we are adding a non-Euclidean point $p$ to that set, $l' = l \\cup \\{p\\}$.</p>\n\n<p>Adding this point to those lines means that those lines are no longer parallel.\nDefine the set of all points at infinity to be the line at infinity. The Projective plane is the Euclidean plane with all points at infinity and the line at infinity added to it. This particular Projective plane can be proved to satisfy DL1.</p>\n\n<p>In the other direction, pick any line, $l$, in the projective plane, $\\mathbb P^2$, and remove it. 
What you end up with is the Euclidean plane, $\\mathbb E^2 \\cong \\mathbb P^2 - \\{l\\}$. Some lines will still intersect. Those that intersected at a point on the line that was removed will now be parallel.</p>\n" }, { "answer_id": 2089655, "score": 13, "text": "<p>I would say if we stand next to each other and both walk due north, we eventually bump into each other near the north pole.</p>\n\n<p>Note, as several have commented, your premise is slightly off. What you can convince your friend of is that any two 'straight lines' i.e. geodesics cross on a sphere (not parallel lines), whereas on a plane, for any line there is a special family of lines that never touch it.</p>\n" }, { "answer_id": 2089925, "score": 10, "text": "<p>As I said in comments, you can't convince someone with something which is not true.</p>\n<p>IMO if someone knows basic geometry then he must be aware of the fact that:</p>\n<h1>Parallel lines do not meet at a point.</h1>\n<p><a href=\"https://en.wikipedia.org/wiki/Parallel_(geometry)\" rel=\"nofollow noreferrer\">This</a> section of Wikipedia is worth a lot here:</p>\n<blockquote>\n<p>In geometry, parallel lines are lines in a plane which do not meet; that is, two lines in a plane that do not intersect or touch each other at any point are said to be parallel. By extension, a line and a plane, or two planes, in three-dimensional Euclidean space that do not share a point are said to be parallel.</p>\n</blockquote>\n<p>Though if you are convincing/arguing with someone naive in the field of geometry that parallel lines do not meet, ask him why people call him by his/her name and not by <a href=\"https://math.stackexchange.com/users/401699/justin-bieber\">Justin Bieber</a>/<a href=\"https://math.stackexchange.com/users/143816/selena-gomez\">Selena Gomez</a>? The expected answer would be that <em>Because it is my name</em> and that's it. You got him/her. 
Actually, parallel lines cannot meet at a point or intersect because they are defined that way: if two lines intersect, then they do not remain parallel lines.</p>\n" }, { "answer_id": 2089827, "score": 7, "text": "<p>When I was 16 or so I was bored trying to solve my math homework, so I played with a magnifying glass, and I noticed something interesting:</p>\n\n<p>If you look at the grid paper with a magnifying glass the lines remain parallel (this is evident when they are more or less on the \"top of the glass\"), but all the lines meet at the edge of the glass.</p>\n\n<p>(I went to my older brother, who was an engineering freshman at the time, and I told him that parallel lines <em>can</em> meet; but he replied that they cannot because it's an axiom. Some years later I learned that there is a thing called non-Euclidean geometry.)</p>\n\n<p>&nbsp;</p>\n\n<p>Note, by the way, this might require you to think about the glass as infinite.</p>\n" }, { "answer_id": 2089683, "score": 6, "text": "<p>In his comment @JackyChong has identified the preliminary problem. The definition of \"parallel\" is clear: lines that don't meet, so there are no parallel lines that meet. The real question is the definition of \"line\".</p>\n\n<p>Geodesic is the most natural for geometry on the sphere. Then in this geometry there are no parallel lines.</p>\n\n<p>For projective geometry one definition is to add a \"point at infinity\" on each line, and then make the added points into a \"line at infinity\". With these extra points and lines there are no parallel lines. Two that are parallel in the Euclidean plane share their point at infinity. The railroad track analogy helps here.</p>\n\n<p>In hyperbolic geometry there are multiple lines parallel to a given line through a point not on that line. 
If you get that far with the \"someone who knows no mathematics\" you can show him or her the <a href=\"https://en.wikipedia.org/wiki/Poincar%C3%A9_disk_model\" rel=\"nofollow noreferrer\">Poincare model</a>.</p>\n" }, { "answer_id": 2089656, "score": 5, "text": "<p>I would use two meridians on Earth that touch at the poles.</p>\n" }, { "answer_id": 2089933, "score": 5, "text": "<p>This is a matter of point of view :)</p>\n\n<p><a href=\"https://i.stack.imgur.com/lGePi.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/lGePi.png\" alt=\"enter image description here\"></a></p>\n" }, { "answer_id": 2089660, "score": 4, "text": "<p>You can actually speak of trains. Railways are pretty suggestive.</p>\n" }, { "answer_id": 2089976, "score": 4, "text": "<p>You cannot convince any mathematician, let alone your friend, as it is false. They can however grapple with the idea that for purely theoretical calculations we assume that 2 parallel lines can meet at a point at infinity. Should you wish to travel to infinity to prove this point then please send a postcard when you get there.</p>\n" }, { "answer_id": 2103114, "score": 3, "text": "<p>Someone who knows basic mathematics can understand it easily since according to the definition of parallel lines<br> <strong>Parallel lines:</strong> Lines whose distance apart is constant are called <a href=\"https://www.google.co.in/#q=parallel+lines+definition\" rel=\"nofollow noreferrer\"><strong>parallel lines</strong>[see the topmost definition]</a></p>\n\n<p>Now, as we know from the definition, coinciding lines are also parallel since the distance between them is constant, namely $0$. Hence, since coinciding lines touch each other, parallel lines can also touch. </p>\n\n<p><a href=\"https://www.youtube.com/watch?v=BKorP55Aqvg\" rel=\"nofollow noreferrer\"><strong>See the video here</strong> also</a></p>\n" }, { "answer_id": 2089712, "score": 2, "text": "<p>I would draw two parallel lines on a piece of paper. 
Then I would bring all the edges together into one point and I would show that the parallel lines would touch at that point.</p>\n" }, { "answer_id": 2089986, "score": 2, "text": "<p>A sentence said to me by my mother when I was a kid and that I didn't understand at that time was:</p>\n\n<blockquote>\n <p>Parallel lines are lines that intersect at infinity.</p>\n</blockquote>\n\n<p>This statement and its explanation through art (as suggested by Pedro Tamaroff in the comments) and by showing how many statements and proofs are simplified (if the person knows more math) is a good introduction to the idea that thinking of parallel lines as lines meeting at infinity makes sense and can be useful.</p>\n\n<h2>In art</h2>\n\n<p>This, as you can also see in Yves Daoust's answer, can be seen whenever you look at a straight road going to the horizon. For example, see <a href=\"https://commons.wikimedia.org/wiki/File:Straight_Road,_B17_-_geograph.org.uk_-_232427.jpg\" rel=\"nofollow noreferrer\">Wikimedia Commons</a> for the source,\n<a href=\"https://i.stack.imgur.com/cnNkW.jpg\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/cnNkW.jpg\" alt=\"Source: Wikimedia Commons\"></a><br>\nThe above image shows why this conception originated in art: it was really useful for representing how we see things. A striking application of this can be seen in the <a href=\"https://en.wikipedia.org/wiki/Santa_Maria_presso_San_Satiro\" rel=\"nofollow noreferrer\">Santa Maria presso San Satiro</a>, where projective geometry was used to emulate an absent space in a church. However, there are many other examples.</p>\n\n<h2>In geometry</h2>\n\n<p>In usual geometry (i.e. affine and Euclidean geometry), the above definition or statement does not make sense as parallel lines don't intersect. 
This can be the definition in the plane, but it is generally (in higher dimensions) a result derived from the definition of parallel lines as lines with the same direction.</p>\n\n<p>However, the reason why we can still make sense of the above statement about intersection at infinity is because affine geometry can be put inside projective geometry. When doing this, the points outside the affine space are called \"points at infinity\"; parallel lines intersect at them and become the same as intersecting lines, enormously simplifying many statements and proofs by permitting one to not distinguish cases. An example can be <a href=\"https://en.wikipedia.org/wiki/Pappus%27s_hexagon_theorem\" rel=\"nofollow noreferrer\">Pappus's hexagon theorem</a>.</p>\n\n<h2>In conclusion</h2>\n\n<p>In conclusion, don't try to convince or show that parallel lines touch. Just try to explain the usefulness of thinking of parallel lines as lines that intersect at infinity. In the Renaissance, you have many examples of why this is a useful statement from the point of view of representing reality and perspectives well; in mathematics, there are many examples of how this is really useful for simplifying statements and proofs in geometry.</p>\n" }, { "answer_id": 2089964, "score": 1, "text": "<p>In Elliptic geometry the supposition is false.</p>\n\n<p>In Euclidean geometry parallel lines \"meet\" and touch at infinity as their slopes are the same.</p>\n\n<p>In flat Hyperbolic geometry parallel lines can also touch but only at infinity.</p>\n\n<p>In the flat Poincare disk model circle segment geodesic parallels meet tangentially only at infinitely distant points on the boundary of the \"horizon\" or boundary circle. 
The lines can be seen to touch and move on the boundary seen in the Wiki link.</p>\n\n<p><a href=\"http://mathworld.wolfram.com/PoincareHyperbolicDisk.html\" rel=\"nofollow noreferrer\"> Poincare Disk_MathWorld </a></p>\n\n<p>So also in the half-plane model semi-circular geodesics meet at a point at $\\infty$, on the x-axis. An animation can also be found elsewhere, in which semi-circles touch the x-axis while the semi-circle size changes during the movement.</p>\n\n<p><a href=\"http://math.etsu.edu/multicalc/prealpha/Chap3/Chap3-8/part4.htm\" rel=\"nofollow noreferrer\">PoincareHalfPlane</a></p>\n\n<p>Both examples are touching parallel lines, touching tangentially/asymptotically at the point at infinity.</p>\n\n<p><a href=\"http://math.etsu.edu/multicalc/prealpha/Chap3/Chap3-8/part4.htm\" rel=\"nofollow noreferrer\">http://math.etsu.edu/multicalc/prealpha/Chap3/Chap3-8/part4.htm</a></p>\n" }, { "answer_id": 2103316, "score": 1, "text": "<p>One way to explain this is to think of the Euclidean plane as a picture of (part of) an ambient 3-dimensional world. This corresponds to how our vision actually works: light from the three-dimensional world is projected through the lens of the eye onto the surface at the back of the eye.</p>\n\n<p>In more detail: Place a plane anywhere in three-dimensional space (as long as it doesn't pass through the origin). Call this the \"picture plane\". Now any <em>line through the origin</em> in $\\mathbb{R}^3$ pierces the picture plane in exactly one point (as long as the line is not parallel to the picture plane). Any <em>plane through the origin</em> pierces the picture plane in exactly one line (again, as long as the plane is not parallel to the picture plane). 
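The piercing operation just described is central projection, and it is easy to make computational. The following sketch is my own illustration, not part of the original answer; the picture plane is taken to be $z = 1$, so a point $(x, y, z)$ with $z \neq 0$ projects to $(x/z, y/z)$, and every nonzero point on one line through the origin lands on the same image point:

```python
def image_in_picture_plane(p):
    """Project (x, y, z) to the picture plane z = 1 along the line through
    the origin; undefined when z = 0 (line parallel to the plane)."""
    x, y, z = p
    if z == 0:
        return None  # a "point at infinity" for this choice of picture plane
    return (x / z, y / z)

# All points on one line through the origin have the same image ...
assert image_in_picture_plane((2, 4, 2)) == image_in_picture_plane((3, 6, 3)) == (1.0, 2.0)
# ... and a line parallel to the picture plane has no image at all.
assert image_in_picture_plane((5, 7, 0)) is None
```

Choosing a different picture plane changes which lines return `None`, matching the point later in the answer that "at infinity" is relative to the chosen plane.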
See below (in which the picture plane is shown in blue).</p>\n\n<p><a href=\"https://i.stack.imgur.com/Dod2D.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/Dod2D.png\" alt=\"enter image description here\"></a>\n<a href=\"https://i.stack.imgur.com/KRnSK.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/KRnSK.png\" alt=\"enter image description here\"></a></p>\n\n<p>Now we have a dictionary:\n$$\\textrm{point in picture plane} \\longleftrightarrow \\textrm{line through origin in 3-space}$$\n$$\\textrm{line in picture plane} \\longleftrightarrow \\textrm{plane through origin in 3-space}$$</p>\n\n<p>This dictionary is not quite 1-to-1, because there are lines (and one plane) through the origin in $\\mathbb{R}^3$ that are parallel to the picture plane, and therefore do not correspond to any points or to a line in the picture plane. These points and this line can be considered \"at infinity\", but really all this means is that they do not have images in the picture plane.</p>\n\n<p>You can experience this visually by holding your finger at arm's length in front of you and a few inches above eye level. As you draw your arm closer (while maintaining its height) the location of your fingertip seems to get \"higher\" or \"farther away\" in your field of view, as your eyes have to strain more and more to see it. Eventually when the fingertip is directly above the eye, the finger \"disappears\". It's not really gone, of course, but the \"line of sight\" from the fingertip to the lens of your eye is parallel to the retina, so that the fingertip cannot be seen. Perceptually, the fingertip has receded \"infinitely far away\" in the picture plane -- but it is actually just a couple of inches away in 3-space.</p>\n\n<p><strong>However</strong>, we all know what to do in that circumstance: turn your head! This corresponds to choosing a new picture plane with a different orientation. 
Now the point that was \"at infinity\" snaps into view and is revealed as an ordinary point in the new picture plane (and at the same time points that <em>were previously</em> in view have now disappeared).</p>\n\n<p>Note that a statement like</p>\n\n<blockquote>\n <p>given any two distinct points in the picture plane, there exists a unique line in the picture plane through the given points</p>\n</blockquote>\n\n<p>which describes a true fact about the geometry of the picture plane corresponds, via this dictionary, to</p>\n\n<blockquote>\n <p>given any two distinct lines through the origin in 3-space, there exists a unique plane through the origin containing both lines</p>\n</blockquote>\n\n<p>which is a true statement about 3-space.</p>\n\n<p>Now that we have this dictionary set up, let's think about what parallel lines in the picture plane are. As the image below shows, parallel lines in the picture plane correspond to two planes in $\\mathbb{R}^3$. (More precisely, lines $f$ and $g$ are \"parallel\" in the blue picture plane, and correspond to the orange and yellow planes through the origin.)</p>\n\n<p><a href=\"https://i.stack.imgur.com/J3BxN.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/J3BxN.png\" alt=\"enter image description here\"></a></p>\n\n<p>Those planes are not parallel; they intersect, but the intersection is a line <em>that does not pierce the picture plane</em>. (In this illustration, the line of intersection happens to be the $y$-axis.) That line in $\\mathbb{R}^3$ corresponds to a point \"at infinity\".</p>\n\n<p>Of course if you choose a different picture plane, then the lines will no longer appear parallel, and their intersection will be plainly visible as a point in the new picture plane. The final image below shows this. 
In that image, the orange and yellow planes are the same ones as before, but the blue picture plane has been moved and reoriented; the images of the two planes are now <em>non-parallel</em> lines.</p>\n\n<p><a href=\"https://i.stack.imgur.com/iRBlv.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/iRBlv.png\" alt=\"enter image description here\"></a></p>\n\n<p>In summary:</p>\n\n<ul>\n<li>If the 2-dimensional plane is understood as an image of an ambient 3-dimensional space, then a \"point\" is just the image of a <em>line through the origin</em> and a \"line\" is just the image of a <em>plane through the origin</em></li>\n<li>Some \"points\" (and the \"line\" containing that point) will have no image in the picture plane, which we describe by saying that they are \"at infinity\"</li>\n<li>However being \"at infinity\" is an artifact of which picture plane we use; changing picture planes changes which points are at infinity</li>\n<li>Furthermore whether two \"lines\" are parallel is an artifact of the picture plane we use to view them; lines that seem parallel from one perspective are non-parallel from a different perspective</li>\n<li>The connection between the last two bullets is that \"lines\" that appear parallel in one picture plane <em>do</em> intersect, but the point of intersection is a point \"at infinity\" <em>relative to the picture plane</em>; change perspectives and the point is no longer at infinity, and the lines are no longer parallel.</li>\n<li>All of the above corresponds more-or-less exactly with how the human eye works, which is why perspective art can do such a good job at fooling the eye and creating the illusion of depth on a flat surface.</li>\n</ul>\n" }, { "answer_id": 2089676, "score": 24, "text": "<p>Lines are parallel if they lie in the same plane and they don't intersect. 
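(For instance, in the Euclidean plane two distinct lines $y = mx + b_1$ and $y = mx + b_2$ with the same slope $m$ can never meet: a common point would force $mx + b_1 = mx + b_2$, hence $b_1 = b_2$, contradicting that the lines are distinct.) 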
In other geometries, there may be no parallel lines, lines may not have a common point but they may have a common limit point at infinity, or they may just not intersect.</p>\n\n<p>You might be thinking about Projective Geometry where a \"point at infinity\" is added to every family of parallel lines and the set of all points at infinity is called the \"line at infinity\".</p>\n\n<p>Addendum</p>\n\n<hr>\n\n<p>Typically, in formal geometry, points, lines, and planes are not defined. But postulates define their \"baseline\" behavior. The postulate that every geometry seems to agree on is the one that states </p>\n\n<p>L1: Given two distinct points in a plane, there is exactly one line in that plane that contains them.</p>\n\n<p>The \"dual\" of that postulate is</p>\n\n<p>DL1: Given two distinct lines in a plane, there is exactly one point in that plane that belongs to both lines.</p>\n\n<p>Since Euclidean geometry contains parallel lines, DL1 is false there. But Projective geometry accepts DL1 as a postulate. The big question is, \"Does there exist a geometry that satisfies the postulates of Projective geometry?\" Yes there does.</p>\n\n<p>The creation of such a geometry is really quite clever. You start with a Euclidean plane and you add points to it as follows. Pick any line in the plane. To that line and all lines parallel to it, you add one extra point, a point at infinity. This is a set thing. We are treating a Euclidean line, $l$, as a set of points and we are adding a non-Euclidean point $p$ to that set, $l' = l \\cup \\{p\\}$.</p>\n\n<p>Adding this point to those lines means that those lines are no longer parallel.\nDefine the set of all points at infinity to be the line at infinity. The Projective plane is the Euclidean plane with all points at infinity and the line at infinity added to it. This particular Projective plane can be proved to satisfy DL1.</p>\n\n<p>In the other direction, pick any line, $l$, in the projective plane, $\\mathbb P^2$, and remove it. 
What you end up with is the Euclidean plane, $\\mathbb E^2 \\cong \\mathbb P^2 - \\{l\\}$. Some lines will still intersect. Those that intersected at a point on the line that was removed will now be parallel.</p>\n" }, { "answer_id": 2089655, "score": 13, "text": "<p>I would say if we stand next to each other and both walk due north, we eventually bump into each other near the north pole.</p>\n\n<p>Note, as several have commented, your premise is slightly off. What you can convince your friend of is that any two 'straight lines', i.e. geodesics, cross on a sphere (not parallel lines), whereas on a plane, for any line there is a special family of lines that never touch it.</p>\n" }, { "answer_id": 2089925, "score": 10, "text": "<p>As I said in comments, you can't convince someone with something which is not true.</p>\n<p>IMO if someone knows basic geometry then he must be aware of the fact that:</p>\n<h1>Parallel lines do not meet at a point.</h1>\n<p><a href=\"https://en.wikipedia.org/wiki/Parallel_(geometry)\" rel=\"nofollow noreferrer\">This</a> section of Wikipedia is worth a lot here:</p>\n<blockquote>\n<p>In geometry, parallel lines are lines in a plane which do not meet; that is, two lines in a plane that do not intersect or touch each other at any point are said to be parallel. By extension, a line and a plane, or two planes, in three-dimensional Euclidean space that do not share a point are said to be parallel.</p>\n</blockquote>\n<p>Though if you are convincing/arguing with someone naive in the field of geometry that parallel lines do not meet, ask him why people call him by his/her name and not by <a href=\"https://math.stackexchange.com/users/401699/justin-bieber\">Justin Bieber</a>/<a href=\"https://math.stackexchange.com/users/143816/selena-gomez\">Selena Gomez</a>? The expected answer would be that <em>Because it is my name</em> and that's it. You got him/her. 
Actually parallel lines cannot meet at a point or intersect because they are defined that way; if two lines intersect, then they are no longer parallel lines.</p>\n" }, { "answer_id": 2089827, "score": 7, "text": "<p>When I was 16 or so I was bored trying to solve my math homework, so I played with a magnifying glass, and I noticed something interesting:</p>\n\n<p>If you look at the grid paper with a magnifying glass the lines remain parallel (this is evident when they are more or less on the \"top of the glass\"), but all the lines meet at the edge of the glass.</p>\n\n<p>(I went to my older brother, who was an engineering freshman at the time, and I told him that parallel lines <em>can</em> meet; but he replied that they cannot because it's an axiom. Some years later I learned that there is a thing called non-Euclidean geometry.)</p>\n\n<p>&nbsp;</p>\n\n<p>Note, by the way, this might require you to think about the glass as infinite.</p>\n" }, { "answer_id": 2089683, "score": 6, "text": "<p>In his comment @JackyChong has identified the preliminary problem. The definition of \"parallel\" is clear: lines that don't meet, so there are no parallel lines that meet. The real question is the definition of \"line\".</p>\n\n<p>The geodesic is the most natural notion of \"line\" for geometry on the sphere. Then in this geometry there are no parallel lines.</p>\n\n<p>For projective geometry one definition is to add a \"point at infinity\" on each line, and then make the added points into a \"line at infinity\". With these extra points and lines there are no parallel lines. Two lines that are parallel in the Euclidean plane share their point at infinity. The railroad track analogy helps here.</p>\n\n<p>In hyperbolic geometry there are multiple lines parallel to a given line through a point not on that line. 
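(For instance, in the upper half-plane model, where the \"lines\" are vertical rays and semicircles centered on the $x$-axis, any semicircle through the point $(1, 1)$ with center $c > 1$ on the $x$-axis has radius $r = \sqrt{(c - 1)^2 + 1} < c$, so it stays in the region $x > 0$ and never meets the vertical line $x = 0$: infinitely many parallels through a single point.) 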
If you get that far with the \"someone who knows no mathematics\" you can show him or her the <a href=\"https://en.wikipedia.org/wiki/Poincar%C3%A9_disk_model\" rel=\"nofollow noreferrer\">Poincare model</a>.</p>\n" }, { "answer_id": 2089656, "score": 5, "text": "<p>I would use two meridians on Earth that touch at the poles.</p>\n" }, { "answer_id": 2089933, "score": 5, "text": "<p>This is a matter of point of view :)</p>\n\n<p><a href=\"https://i.stack.imgur.com/lGePi.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/lGePi.png\" alt=\"enter image description here\"></a></p>\n" }, { "answer_id": 2089660, "score": 4, "text": "<p>You can actually speak of trains. Railways are pretty suggestive.</p>\n" }, { "answer_id": 2089976, "score": 4, "text": "<p>You cannot convince any mathematician, let alone your friend, as it is false. They can however grapple with the idea that for purely theoretical calculations we assume that 2 parallel lines can meet at a point at infinity. Should you wish to travel to infinity to prove this point then please send a postcard when you get there.</p>\n" }, { "answer_id": 2103114, "score": 3, "text": "<p>Someone who knows basic mathematics can understand it easily since according to the definition of parallel lines<br> <strong>Parallel lines:</strong> Lines whose distance apart is constant are called <a href=\"https://www.google.co.in/#q=parallel+lines+definition\" rel=\"nofollow noreferrer\"><strong>parallel lines</strong>[see the topmost definition]</a></p>\n\n<p>Now, as we know the definition, the coinciding lines are also parallel since the distance between them is constant, namely $0$. Hence, since coinciding lines touch each other, parallel lines can also touch. </p>\n\n<p><a href=\"https://www.youtube.com/watch?v=BKorP55Aqvg\" rel=\"nofollow noreferrer\"><strong>See the video here</strong> also</a></p>\n" }, { "answer_id": 2089712, "score": 2, "text": "<p>I would draw two parallel lines on a piece of paper. 
Then I would bring all the edges together into one point and I would show that the parallel lines would touch at that point.</p>\n" }, { "answer_id": 2089986, "score": 2, "text": "<p>A sentence said to me by my mother when I was a kid and that I didn't understand at that time was:</p>\n\n<blockquote>\n <p>Parallel lines are lines that intersect at infinity.</p>\n</blockquote>\n\n<p>This statement and its explanation through art (as suggested by Pedro Tamaroff in the comments) and by showing how many statements and proofs are simplified (if the person knows more math) is a good introduction to the idea that thinking of parallel lines as lines meeting at infinity makes sense and can be useful.</p>\n\n<h2>In art</h2>\n\n<p>This, as you can also see in Yves Daoust's answer, can be seen whenever you look at a straight road going to the horizon. For example, see <a href=\"https://commons.wikimedia.org/wiki/File:Straight_Road,_B17_-_geograph.org.uk_-_232427.jpg\" rel=\"nofollow noreferrer\">Wikimedia Commons</a> for the source,\n<a href=\"https://i.stack.imgur.com/cnNkW.jpg\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/cnNkW.jpg\" alt=\"Source: Wikimedia Commons\"></a><br>\nThe above image shows the reason why this conception was in the beginning a concept originated in art, as it was really useful for representations of how we see things. A striking application of this can be seen in the <a href=\"https://en.wikipedia.org/wiki/Santa_Maria_presso_San_Satiro\" rel=\"nofollow noreferrer\">Santa Maria presso San Satiro</a>, where projective geometry was used to emulate an absent space in a church. However, there are many other examples.</p>\n\n<h2>In geometry</h2>\n\n<p>In usual geometry (i.e. affine and Euclidean geometry), the above definition or statement does not make sense, as parallel lines don't intersect. 
This can be the definition in the plane, but it is generally (in higher dimensions) a result derived from the definition of parallel lines as lines with the same direction.</p>\n\n<p>However, the reason we can still make sense of the above statement about intersection at infinity is that affine geometry can be put inside projective geometry. When doing this, the points outside the affine space are called \"points at infinity\"; parallel lines intersect at them and become just like intersecting lines, which enormously simplifies many statements and proofs by removing the need to distinguish cases. An example is <a href=\"https://en.wikipedia.org/wiki/Pappus%27s_hexagon_theorem\" rel=\"nofollow noreferrer\">Pappus's hexagon theorem</a>.</p>\n\n<h2>In conclusion</h2>\n\n<p>In conclusion, don't try to convince or show that parallel lines touch. Just try to explain the usefulness of thinking of parallel lines as lines that intersect at infinity. In the Renaissance, there are many examples of why this is a useful statement from the point of view of representing reality and perspective well; in mathematics, there are many examples of how this is really useful for simplifying statements and proofs in geometry.</p>\n" }, { "answer_id": 2089964, "score": 1, "text": "<p>In Elliptic geometry the supposition is false.</p>\n\n<p>In Euclidean geometry parallel lines \"meet\" and touch at infinity, as their slopes are the same.</p>\n\n<p>In flat Hyperbolic geometry parallel lines can also touch, but only at infinity.</p>\n\n<p>In the flat Poincare disk model, circle-segment geodesic parallels meet tangentially only at infinitely distant points on the boundary of the \"horizon\" or boundary circle. 

The lines can be seen to touch and move on the boundary in the Wiki link.</p>\n\n<p><a href=\"http://mathworld.wolfram.com/PoincareHyperbolicDisk.html\" rel=\"nofollow noreferrer\">Poincare Disk_MathWorld</a></p>\n\n<p>So also in the half-plane model, semi-circular geodesics meet at a point at $\\infty$ on the x-axis. An animation can also be found elsewhere, in which semi-circles touch on the x-axis while changing size as they move.</p>\n\n<p><a href=\"http://math.etsu.edu/multicalc/prealpha/Chap3/Chap3-8/part4.htm\" rel=\"nofollow noreferrer\">PoincareHalfPlane</a></p>\n\n<p>Both examples are touching parallel lines, touching tangentially/asymptotically at a point at infinity.</p>\n\n<p><a href=\"http://math.etsu.edu/multicalc/prealpha/Chap3/Chap3-8/part4.htm\" rel=\"nofollow noreferrer\">http://math.etsu.edu/multicalc/prealpha/Chap3/Chap3-8/part4.htm</a></p>\n" }, { "answer_id": 2103316, "score": 1, "text": "<p>One way to explain this is to think of the Euclidean plane as a picture of (part of) an ambient 3-dimensional world. This corresponds to how our vision actually works: light from the three-dimensional world is projected through the lens of an eye onto the surface at the back of the eye.</p>\n\n<p>In more detail: Place a plane anywhere in three-dimensional space (as long as it doesn't pass through the origin). Call this the \"picture plane\". Now any <em>line through the origin</em> in $\\mathbb{R}^3$ pierces the picture plane in exactly one point (as long as the line is not parallel to the picture plane). Any <em>plane through the origin</em> pierces the picture plane in exactly one line (again, as long as the plane is not parallel to the picture plane). 

See below (in which the picture plane is shown in blue).</p>\n\n<p><a href=\"https://i.stack.imgur.com/Dod2D.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/Dod2D.png\" alt=\"enter image description here\"></a>\n<a href=\"https://i.stack.imgur.com/KRnSK.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/KRnSK.png\" alt=\"enter image description here\"></a></p>\n\n<p>Now we have a dictionary:\n$$\\textrm{point in picture plane} \\longleftrightarrow \\textrm{line through origin in 3-space}$$\n$$\\textrm{line in picture plane} \\longleftrightarrow \\textrm{plane through origin in 3-space}$$</p>\n\n<p>This dictionary is not quite 1-to-1, because there are lines (and one plane) through the origin in $\\mathbb{R}^3$ that are parallel to the picture plane, and therefore do not correspond to any points or to a line in the picture plane. These points and this line can be considered \"at infinity\", but really all this means is that they do not have images in the picture plane.</p>\n\n<p>You can experience this visually by holding your finger at arm's length in front of and a few inches above your eye. As you draw your arm closer (while maintaining its height) the location of your fingertip seems to get \"higher\" or \"farther away\" in your field of view, as your eyes have to strain more and more to see it. Eventually, when the fingertip is directly above the eye, the finger \"disappears\". It's not really gone, of course, but the \"line of sight\" from the fingertip to the lens of your eye is parallel to the retina, so that the fingertip cannot be seen. Perceptually, the fingertip has receded \"infinitely far away\" in the picture plane -- but it is actually just a couple of inches away in 3-space.</p>\n\n<p><strong>However</strong>, we all know what to do in that circumstance: turn your head! This corresponds to choosing a new picture plane with a different orientation. 

Now the point that was \"at infinity\" snaps into view and is revealed as an ordinary point in the new picture plane (and at the same time points that <em>were previously</em> in view have now disappeared).</p>\n\n<p>Note that a statement like</p>\n\n<blockquote>\n <p>given any two distinct points in the picture plane, there exists a unique line in the picture plane through the given points</p>\n</blockquote>\n\n<p>which describes a true fact about the geometry of the picture plane corresponds, via this dictionary, to</p>\n\n<blockquote>\n <p>given any two distinct lines through the origin in 3-space, there exists a unique plane through the origin containing both lines</p>\n</blockquote>\n\n<p>which is a true statement about 3-space.</p>\n\n<p>Now that we have this dictionary set up, let's think about what parallel lines in the picture plane are. As the image below shows, parallel lines in the picture plane correspond to two planes in $\\mathbb{R}^3$. (More precisely, lines $f$ and $g$ are \"parallel\" in the blue picture plane, and correspond to the orange and yellow planes through the origin.)</p>\n\n<p><a href=\"https://i.stack.imgur.com/J3BxN.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/J3BxN.png\" alt=\"enter image description here\"></a></p>\n\n<p>Those planes are not parallel; they intersect, but the intersection is a line <em>that does not pierce the picture plane</em>. (In this illustration, the line of intersection happens to be the $y$-axis.) That line in $\\mathbb{R}^3$ corresponds to a point \"at infinity\".</p>\n\n<p>Of course if you choose a different picture plane, then the lines will no longer appear parallel, and their intersection will be plainly visible as a point in the new picture plane. The final image below shows this. 

In that image, the orange and yellow planes are the same ones as before, but the blue picture plane has been moved and reoriented; the images of the two planes are now <em>non-parallel</em> lines.</p>\n\n<p><a href=\"https://i.stack.imgur.com/iRBlv.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/iRBlv.png\" alt=\"enter image description here\"></a></p>\n\n<p>In summary:</p>\n\n<ul>\n<li>If the 2-dimensional plane is understood as an image of an ambient 3-dimensional space, then a \"point\" is just the image of a <em>line through the origin</em> and a \"line\" is just the image of a <em>plane through the origin</em></li>\n<li>Some \"points\" (and the \"line\" containing that point) will have no image in the picture plane, which we describe by saying that they are \"at infinity\"</li>\n<li>However being \"at infinity\" is an artifact of which picture plane we use; changing picture planes changes which points are at infinity</li>\n<li>Furthermore whether two \"lines\" are parallel is an artifact of the picture plane we use to view them; lines that seem parallel from one perspective are non-parallel from a different perspective</li>\n<li>The connection between the last two bullets is that \"lines\" that appear parallel in one picture plane <em>do</em> intersect, but the point of intersection is a point \"at infinity\" <em>relative to the picture plane</em>; change perspectives and the point is no longer at infinity, and the lines are no longer parallel.</li>\n<li>All of the above corresponds more-or-less exactly with how the human eye works, which is why perspective art can do such a good job at fooling the eye and creating the illusion of depth on a flat surface.</li>\n</ul>\n" }, { "answer_id": 2089676, "score": 24, "text": "<p>Lines are parallel if they lie in the same plane and they don't intersect. 
In other geometries, there may be no parallel lines, lines may not have a common point but they may have a common limit point at infinity, or they may just not intersect.</p>\n\n<p>You might be thinking about Projective Geometry where a \"point at infinity\" is added to every family of parallel lines and the set of all points at infinity is called the \"line at infinity\".</p>\n\n<p>Addendum</p>\n\n<hr>\n\n<p>Typically, in formal geometry, points, lines, and planes are not defined. But postulates define their \"baseline\" behavior. The postulate that every geometry seems to agree on is the one that states </p>\n\n<p>L1: Given two distinct points in a plane, there is exactly one line in that plane that contains them.</p>\n\n<p>The \"dual\" of that postulate is</p>\n\n<p>DL1: Given two distinct lines in a plane, there is exactly one point in that plane that belongs to both lines.</p>\n\n<p>Since Euclidean geometry contains parallel lines, DL1 is false there. But Projective geometry accepts DL1 as a postulate. The big question is, \"Does there exist a geometry that satisfies the postulates of Projective geometry?\" Yes there does.</p>\n\n<p>The creation of such a geometry is really quite clever. You start with a Euclidean plane and you add points to it as follows. Pick any line in the plane. To that line and all lines parallel to it, you add one extra point, a point at infinity. This is a set thing. We are treating a Euclidean line, $l$, as a set of points and we are adding a non-Euclidean point $p$ to that set, $l' = l \\cup \\{p\\}$.</p>\n\n<p>Adding this point to those lines means that those lines are no longer parallel.\nDefine the set of all points at infinity to be the line at infinity. The Projective plane is the Euclidean plane with all points at infinity and the line at infinity added to it. This particular Projective plane can be proved to satisfy DL1.</p>\n\n<p>In the other direction, pick any line, $l$, in the projective plane, $\\mathbb P^2$, and remove it. 

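As an illustrative aside (this sketch and its helper name are ours, not part of the original answer), the point-at-infinity construction can be made concrete with homogeneous coordinates: represent the line $ax+by+c=0$ by the triple $(a,b,c)$; two lines then meet at the cross product of their triples, and for parallel lines the last coordinate of that product is $0$, i.e. a point at infinity.

```python
# Sketch (our illustration): homogeneous coordinates for the projective plane.
# A line ax + by + c = 0 is the triple (a, b, c); the intersection of two
# lines is the cross product of their triples.

def meet(l1, l2):
    a1, b1, c1 = l1
    a2, b2, c2 = l2
    return (b1 * c2 - c1 * b2,
            c1 * a2 - a1 * c2,
            a1 * b2 - b1 * a2)

# Parallel lines y = x and y = x + 1, i.e. x - y = 0 and x - y + 1 = 0:
p = meet((1, -1, 0), (1, -1, 1))
print(p)  # (-1, -1, 0): last coordinate 0, a point "at infinity"

# Non-parallel lines x = 0 and y = 0 meet at an ordinary point:
q = meet((1, 0, 0), (0, 1, 0))
print(q)  # (0, 0, 1): the origin (0, 0) after dividing by the last coordinate
```

With this bookkeeping every pair of distinct lines has a meeting point, which is exactly the behavior postulate DL1 asks for; the triples whose last coordinate is $0$ make up the line at infinity.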
What you end up with is the Euclidean plane, $\\mathbb E^2 \\cong \\mathbb P^2 - \\{l\\}$. Some lines will still intersect. Those that intersected at a point on the line that was removed will now be parallel.</p>\n" }, { "answer_id": 2089655, "score": 13, "text": "<p>I would say if we stand next to each other and both walk due north, we eventually bump into each other near the north pole.</p>\n\n<p>Note, as several have commented, your premise is slightly off. What you can convince your friend of is that any two 'straight lines', i.e. geodesics, cross on a sphere (not parallel lines), whereas on a plane, for any line there is a special family of lines that never touch it.</p>\n" }, { "answer_id": 2089925, "score": 10, "text": "<p>As I said in comments, you can't convince someone of something which is not true.</p>\n<p>IMO if someone knows basic geometry then he must be aware of the fact that:</p>\n<h1>Parallel lines do not meet at a point.</h1>\n<p><a href=\"https://en.wikipedia.org/wiki/Parallel_(geometry)\" rel=\"nofollow noreferrer\">This</a> section of Wikipedia is worth a lot here:</p>\n<blockquote>\n<p>In geometry, parallel lines are lines in a plane which do not meet; that is, two lines in a plane that do not intersect or touch each other at any point are said to be parallel. By extension, a line and a plane, or two planes, in three-dimensional Euclidean space that do not share a point are said to be parallel.</p>\n</blockquote>\n<p>Though if you are convincing/arguing with someone naive in the field of geometry that parallel lines do not meet, ask him why people call him by his/her name and not by <a href=\"https://math.stackexchange.com/users/401699/justin-bieber\">Justin Bieber</a>/<a href=\"https://math.stackexchange.com/users/143816/selena-gomez\">Selena Gomez</a>? The expected answer would be <em>Because it is my name</em> and that's it. You got him/her. 

Actually, parallel lines cannot meet at a point or intersect because they are defined that way; if two lines intersect, then they are no longer parallel.</p>\n" }, { "answer_id": 2089827, "score": 7, "text": "<p>When I was 16 or so I was bored trying to solve my math homework, so I played with a magnifying glass, and I noticed something interesting:</p>\n\n<p>If you look at grid paper with a magnifying glass the lines remain parallel (this is evident when they are more or less on the \"top of the glass\"), but all the lines meet at the edge of the glass.</p>\n\n<p>(I went to my older brother, who was an engineering freshman at the time, and I told him that parallel lines <em>can</em> meet; but he replied that they cannot because it's an axiom. Some years later I learned that there is a thing called non-Euclidean geometry.)</p>\n\n<p>&nbsp;</p>\n\n<p>Note, by the way, this might require you to think of the glass as infinite.</p>\n" }, { "answer_id": 2089683, "score": 6, "text": "<p>In his comment @JackyChong has identified the preliminary problem. The definition of \"parallel\" is clear: lines that don't meet, so there are no parallel lines that meet. The real question is the definition of \"line\".</p>\n\n<p>Geodesic is the most natural for geometry on the sphere. Then in this geometry there are no parallel lines.</p>\n\n<p>For projective geometry one definition is to add a \"point at infinity\" on each line, and then make the added points into a \"line at infinity\". With these extra points and lines there are no parallel lines. Two that are parallel in the Euclidean plane share their point at infinity. The railroad track analogy helps here.</p>\n\n<p>In hyperbolic geometry there are multiple lines parallel to a given line through a point not on that line. 

If you get that far with the \"someone who knows no mathematics\" you can show him or her the <a href=\"https://en.wikipedia.org/wiki/Poincar%C3%A9_disk_model\" rel=\"nofollow noreferrer\">Poincare model</a>.</p>\n" }, { "answer_id": 2089656, "score": 5, "text": "<p>I would use two meridians on Earth that touch at the poles.</p>\n" }, { "answer_id": 2089933, "score": 5, "text": "<p>This is a matter of point of view :)</p>\n\n<p><a href=\"https://i.stack.imgur.com/lGePi.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/lGePi.png\" alt=\"enter image description here\"></a></p>\n" }, { "answer_id": 2089660, "score": 4, "text": "<p>You can actually speak of trains. Rail ways are pretty suggestive.</p>\n" }, { "answer_id": 2089976, "score": 4, "text": "<p>You cannot convince any mathematician let alone your friend as it is false. They can however grapple with the idea that for purely theoretical calculations we assume that 2 parallel lines can meet at a point in infinity. Should you wish to travel to infinity to prove this point then please send a postcard when you get there.</p>\n" }, { "answer_id": 2103114, "score": 3, "text": "<p>Someone who knows basic mathematics can understand it easily since according to the definition of parallel lines<br> <strong>Parallel lines:</strong> The lines those distance is constant are called <a href=\"https://www.google.co.in/#q=parallel+lines+definition\" rel=\"nofollow noreferrer\"><strong>parallel lines</strong>[see the topmost definition]</a></p>\n\n<p>Now, as we know the definition the coinciding lines are also parallel since the distance between them is constant or $0$. Hence, as coinciding lines touch each other implies parallel lines also touch. </p>\n\n<p><a href=\"https://www.youtube.com/watch?v=BKorP55Aqvg\" rel=\"nofollow noreferrer\"><strong>See the video here</strong> also</a></p>\n" }, { "answer_id": 2089712, "score": 2, "text": "<p>I would draw two parallel lines on a piece of paper. 
Then I would bring all the edges together into one point and I would show that the parallel lines would touch into that point.</p>\n" }, { "answer_id": 2089986, "score": 2, "text": "<p>A sentence said to me by my mother when I was a kid and that I didn't understand at that time was:</p>\n\n<blockquote>\n <p>Parallel lines are lines that intersect at infinity.</p>\n</blockquote>\n\n<p>This statement and its explanation though art (as suggested by Pedro Tamaroff in the comments) and by showing how many statements and proofs are simplified (if the person knows more math) is a good introduction to the idea that thinking on parallel lines as lines meeting at infinity makes sense and can be useful.</p>\n\n<h2>In art</h2>\n\n<p>This, as you can also see in Yves Daoust's answer, can be seen whenever you look a straight road going to the horizon. For example, see <a href=\"https://commons.wikimedia.org/wiki/File:Straight_Road,_B17_-_geograph.org.uk_-_232427.jpg\" rel=\"nofollow noreferrer\">Wikimedia Commons</a> for the source,\n<a href=\"https://i.stack.imgur.com/cnNkW.jpg\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/cnNkW.jpg\" alt=\"Source: Wikimedia Commons\"></a><br>\nThe above image shows the reason why this conception was in the beginning a concept originated in art as it was really useful for representations of how we see things. An striking application of this can be seen in the <a href=\"https://en.wikipedia.org/wiki/Santa_Maria_presso_San_Satiro\" rel=\"nofollow noreferrer\">Santa Maria presso San Satiro</a>, where projective geometry was used emulating an absent space in a church. However, there are many other examples.</p>\n\n<h2>In geometry</h2>\n\n<p>In usual geometry (i.e. affine and Euclidean geometry), the above definition or statement does not make sense as parallel lines don't intersect. 
This can be the definition in the plane, but it is generally (in higher dimensions) a result derived from the definition of parallel lines as lines with the same direction.</p>\n\n<p>However, the reason why we can still make sense of the above statement about intersection at infinity is because affine geometry can be put inside projective geometry. When doing this, the points outside the affine space are called \"points at infinity\", parallel lines intersect at them and become the same as intersecting lines simplifying enormously many statements and proofs by permitting one to not distinguish cases. An example can be <a href=\"https://en.wikipedia.org/wiki/Pappus%27s_hexagon_theorem\" rel=\"nofollow noreferrer\">Pappus's hexagon theorem</a>.</p>\n\n<h2>In conclusion</h2>\n\n<p>In conclusion, don't try to convince or show that parallel lines touch. Just try to explain the usefulness of thinking of parallel lines as lines that intersect at infinity. In the Renaissance, you have many examples of why this a useful statement from the point of view of representing reality and perspectives well; in mathematics, there are many examples of how this is really useful for simplifying statements and proofs in geometry.</p>\n" }, { "answer_id": 2089964, "score": 1, "text": "<p>In Elliptic geometry the supposition is false.</p>\n\n<p>In Euclidean geometry parallel lines \"meet\" and touch at infinity as their slope is same.</p>\n\n<p>In flat Hyperbolic geometry parallel lines can also touch but only at at infinity.</p>\n\n<p>In the flat Poincare disk model circle segment geodesic parallels meet tangentially only at infinitely distant points on the boundary of the \"horizon\" or boundary circle. 
The lines can be seen to touch and move on the boundary seen in the Wiki link.</p>\n\n<p><a href=\"http://mathworld.wolfram.com/PoincareHyperbolicDisk.html\" rel=\"nofollow noreferrer\"> Poincare Disk_MathWorld </a></p>\n\n<p>So also in the half-plane model semi-circular geodesics meet at point at $\\infty$, on the x-axis. An animation can be also found elsewhere semi-circles touch on x-axis while changing semi-circle size during movement.</p>\n\n<p><a href=\"http://math.etsu.edu/multicalc/prealpha/Chap3/Chap3-8/part4.htm\" rel=\"nofollow noreferrer\">PoincareHalfPlane</a></p>\n\n<p>Both examples are touching parallel lines, touching tangentially /asymptotically at point infinity.</p>\n\n<p><a href=\"http://math.etsu.edu/multicalc/prealpha/Chap3/Chap3-8/part4.htm\" rel=\"nofollow noreferrer\">http://math.etsu.edu/multicalc/prealpha/Chap3/Chap3-8/part4.htm</a></p>\n" }, { "answer_id": 2103316, "score": 1, "text": "<p>One way to explain this is to think of the Euclidean plane as a picture of (part of) an ambient 3-dimensional world. This corresponds to how our vision actually works: light from the three-dimensional world is projected through the lens of an eyes onto the surface at the back of our eye.</p>\n\n<p>In more detail: Place a plane anywhere in three-dimensional space (as long as it doesn't pass through the origin). Call this the \"picture plane\". Now any <em>line through the origin</em> in $\\mathbb{R}^3$ pierces the picture plane in exactly one point (as long as the line it is not parallel to the picture plane). Any <em>plane through the origin</em> pierces the picture plane in exactly one line (again, as long as the plane is not parallel to the picture plane). 
See below (in which the picture plane is shown in blue).</p>\n\n<p><a href=\"https://i.stack.imgur.com/Dod2D.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/Dod2D.png\" alt=\"enter image description here\"></a>\n<a href=\"https://i.stack.imgur.com/KRnSK.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/KRnSK.png\" alt=\"enter image description here\"></a></p>\n\n<p>Now we have a dictionary:\n$$\\textrm{point in picture plane} \\longleftrightarrow \\textrm{line through origin in 3-space}$$\n$$\\textrm{line in picture plane} \\longleftrightarrow \\textrm{plane through origin in 3-space}$$</p>\n\n<p>This dictionary is not quite 1-to-1, because there are lines (and one plane) through the origin in $\\mathbb{R}^3$ that are parallel to the picture plane, and therefore do not correspond to any points or to a line in the picture plane. These are points and this line can be considered \"at infinity\", but really all this means is that they do not have images in the picture plane.</p>\n\n<p>You can experience this visually by holding your finger at arm's length in front of and a few inches above. As you draw your arm closer (while maintaining its height) the location of your fingertip seems to get \"higher\" or \"farther away\" in your field of view, as your eyes have to strain more and more to see it. Eventually when the fingertip is directly above the eye, the finger \"disappears\". It's not really gone, of course, but the \"line of sight\" from the fingertip to the lens of your eye is parallel to the retina, so that the fingertip cannot be seen. Perceptually, the fingertip has receded \"infinitely far away\" in the picture plane -- but it is actually just a couple of inches away in 3-space.</p>\n\n<p><strong>However</strong>, we all know what to do in that circumstance: turn your head! This corresponds to choosing a new picture plane with a different orientation. 
Now the point that was \"at infinity\" snaps into view and is revealed as an ordinary point in the new picture plane (and at the same time points that <em>were previously</em> in view have no disappeared).</p>\n\n<p>Note that a statement like</p>\n\n<blockquote>\n <p>given any two distinct points in the picture plane, there exists a unique line in the picture plane through the given points</p>\n</blockquote>\n\n<p>which describes a true fact about the geometry of the picture plane corresponds, via this dictionary, to</p>\n\n<blockquote>\n <p>given any two distinct lines through the origin in 3-space, there exists a unique plane through the origin containing both lines</p>\n</blockquote>\n\n<p>which is a true statement about 3-space.</p>\n\n<p>Now that we have this dictionary set up, let's think about what parallel lines in the picture plane are. As the image below shows, parallel lines in the picture plane correspond to two planes in $\\mathbb{R}^3$. (More precisely, lines $f$ and $g$ are \"parallel\" in the blue picture plane, and correspond to the orange and yellow planes through the origin.)</p>\n\n<p><a href=\"https://i.stack.imgur.com/J3BxN.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/J3BxN.png\" alt=\"enter image description here\"></a></p>\n\n<p>Those planes are not parallel; they intersect, but the intersection is a line <em>that does not pierce the picture plane</em>. (In this illustration, the line of intersection happens to be the $y$-axis.) That line in $\\mathbb{R}^3$ corresponds to a point \"at infinity\".</p>\n\n<p>Of course if you choose a different picture plane, then the lines will no longer appear parallel, and their intersection will be plainly visible as a point in the new picture plane. The final image below shows this. 
In that image, the orange and yellow planes are the same ones as before, but the blue picture plane has been moved and reoriented; the images of the two planes are now <em>non-parallel</em> lines.</p>\n\n<p><a href=\"https://i.stack.imgur.com/iRBlv.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/iRBlv.png\" alt=\"enter image description here\"></a></p>\n\n<p>In summary:</p>\n\n<ul>\n<li>If the 2-dimensional plane is understood as an image of an ambient 3-dimensional space, then a \"point\" is just the image of a <em>line through the origin</em> and a \"line\" is just the image of a <em>plane through the origin</em></li>\n<li>Some \"points\" (and the \"line\" containing that point) will have no image in the picture plane, which we describe by saying that they are \"at infinity\"</li>\n<li>However being \"at infinity\" is an artifact of which picture plane we use; changing picture planes changes which points are at infinity</li>\n<li>Furthermore whether two \"lines\" are parallel is an artifact of the picture plane we use to view them; lines that seem parallel from one perspective are non-parallel from a different perspective</li>\n<li>The connection between the last two bullets is that \"lines\" that appear parallel in one picture plane <em>do</em> intersect, but the point of intersection is a point \"at infinity\" <em>relative to the picture plane</em>; change perspectives and the point is no longer at infinity, and the lines are no longer parallel.</li>\n<li>All of the above corresponds more-or-less exactly with how the human eye works, which is why perspective art can do such a good job at fooling the eye and creating the illusion of depth on a flat surface.</li>\n</ul>\n" }, { "answer_id": 2089676, "score": 24, "text": "<p>Lines are parallel if they lie in the same plane and they don't intersect. 
In other geometries, there may be no parallel lines, lines may not have a common point but they may have a common limit point at infinity, or they may just not intersect.</p>\n\n<p>You might be thinking about Projective Geometry where a \"point at infinity\" is added to every family of parallel lines and the set of all points at infinity is called the \"line at infinity\".</p>\n\n<p>Addendum</p>\n\n<hr>\n\n<p>Typically, in formal geometry, points, lines, and planes are not defined. But postulates define their \"baseline\" behavior. The postulate that every geometry seems to agree on is the one that states </p>\n\n<p>L1: Given two distinct points in a plane, there is exactly one line in that plane that contains them.</p>\n\n<p>The \"dual\" of that postulate is</p>\n\n<p>DL1: Given two distinct lines in a plane, there is exactly one point in that plane that belongs to both lines.</p>\n\n<p>Since Euclidean geometry contains parallel lines, DL2 is false. But Projective geometry accepts DL2 as a postulate. The big question is, \"Does there exists a geometry that satisfies the postulates of Projective geometry?\" Yes there does.</p>\n\n<p>The creation of such a geometry is really quite clever. You start with a Euclidean plane and you add points to it as follows. Pick any line in the plane. To that line and all lines parallel to it, you add one extra point, a point at infinity. This is a set thing. We are treating a Euclidean line, $l$, as a set of points and we are adding a non Euclidean point $p$ to that set, $l' = l \\cup \\{p\\}$.</p>\n\n<p>Adding this point to those lines means that those lines are no longer parallel.\nDefine the set of all points at infinity to be the line at infinity. The Projective plane is the Euclidian plane with all points at infinity and the line at infinity added to it. This particular Projective plane can be proved to satisfy DL2.</p>\n\n<p>In the other direction, pick any line, $l$, in the projective plane, $\\mathbb P^2$, and remove it. 
What you end up with is the Euclidean plane, $\\mathbb E^2 \\cong \\mathbb P^2 - \\{l\\}$. Some lines will still intersect. Those that intersected at a point on the line that was removed will now be parallel.</p>\n" }, { "answer_id": 2089655, "score": 13, "text": "<p>I would say if we stand next to each other and both walk due north, we eventually bump into each other near the north pole</p>\n\n<p>Note, as several have commented, your premise is slightly off. What you can convince your friend of is that any two 'straight lines' i.e. geodesics cross on a sphere (not parallel lines), whereas on a plane, for any line there is a special family of lines that never touch it.</p>\n" }, { "answer_id": 2089925, "score": 10, "text": "<p>As I said in comments, you can't convince someone with something which is not true.</p>\n<p>IMO if someone knows basic geometry then he he must be aware of the fact that :</p>\n<h1>Parallel lines do not meet at a point.</h1>\n<p><a href=\"https://en.wikipedia.org/wiki/Parallel_(geometry)\" rel=\"nofollow noreferrer\">This</a> section of Wikipedia worth a lot here:</p>\n<blockquote>\n<p>In geometry, parallel lines are lines in a plane which do not meet; that is, two lines in a plane that do not intersect or touch each other at any point are said to be parallel. By extension, a line and a plane, or two planes, in three-dimensional Euclidean space that do not share a point are said to be parallel.</p>\n</blockquote>\n<p>Though if you are convincing/arguing with someone naive in field of geometry that Parallel line do not meet, ask him why people call him by his/her name and not by <a href=\"https://math.stackexchange.com/users/401699/justin-bieber\">Justin Bieber</a>/<a href=\"https://math.stackexchange.com/users/143816/selena-gomez\">Selena Gomez</a>? The expected answer would be that <em>Because it is my name</em> and that's it. You got him/her. 
Actually parallel lines cannot meet at a point or intersect because they are defined that way, if two lines will intersect then they will not remain parallel lines.</p>\n" }, { "answer_id": 2089827, "score": 7, "text": "<p>When I was 16 or so I was bored trying to solve my math homework, so I played with a magnifying glass, and I noticed something interesting:</p>\n\n<p>If you look at the grid paper with a magnifying glass the lines remain parallel (this is evident when they are more or less on the \"top of the glass\"), but all the lines meet at the edge of the glass.</p>\n\n<p>(I went to my older brother, who was an engineering freshman at the time, and I told him that parallel lines <em>can</em> meet; but he replied that they cannot because it's an axiom. Some years later I learned that there is a thing called non-Euclidean geometry.)</p>\n\n<p>&nbsp;</p>\n\n<p>Note, by the way, this might require you to think about the glass as infinite.</p>\n" }, { "answer_id": 2089683, "score": 6, "text": "<p>In his comment @JackyChong has identified the preliminary problem. The definition of \"parallel\" is clear: lines that don't meet, so there are no parallel lines that meet. The real question is the definition of \"line\".</p>\n\n<p>Geodesic is the most natural for geometry on the sphere. Then in this geometry there are no parallel lines.</p>\n\n<p>For projective geometry one definition is to add a \"point at infinity\" on each line, and then make the added points into a \"line at infinity\". With these extra points and lines there are no parallel lines. Two that are parallel in the Euclidean plane share their point at infinity. The railroad track analogy helps here.</p>\n\n<p>In hyperbolic geometry there are multiple lines parallel to a given line through a point not on that line. 
If you get that far with the \"someone who knows no mathematics\" you can show him or her the <a href=\"https://en.wikipedia.org/wiki/Poincar%C3%A9_disk_model\" rel=\"nofollow noreferrer\">Poincare model</a>.</p>\n" }, { "answer_id": 2089656, "score": 5, "text": "<p>I would use two meridians on Earth that touch at the poles.</p>\n" }, { "answer_id": 2089933, "score": 5, "text": "<p>This is a matter of point of view :)</p>\n\n<p><a href=\"https://i.stack.imgur.com/lGePi.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/lGePi.png\" alt=\"enter image description here\"></a></p>\n" }, { "answer_id": 2089660, "score": 4, "text": "<p>You can actually speak of trains. Railways are pretty suggestive.</p>\n" }, { "answer_id": 2089976, "score": 4, "text": "<p>You cannot convince any mathematician, let alone your friend, as it is false. They can, however, grapple with the idea that for purely theoretical calculations we assume that 2 parallel lines can meet at a point at infinity. Should you wish to travel to infinity to prove this point then please send a postcard when you get there.</p>\n" }, { "answer_id": 2103114, "score": 3, "text": "<p>Someone who knows basic mathematics can understand it easily, since according to the definition of parallel lines:<br> <strong>Parallel lines:</strong> Lines whose distance from each other is constant are called <a href=\"https://www.google.co.in/#q=parallel+lines+definition\" rel=\"nofollow noreferrer\"><strong>parallel lines</strong> [see the topmost definition]</a></p>\n\n<p>Now, as we know from the definition, coinciding lines are also parallel, since the distance between them is constant, namely $0$. Hence, since coinciding lines touch each other, parallel lines can also touch.</p>\n\n<p><a href=\"https://www.youtube.com/watch?v=BKorP55Aqvg\" rel=\"nofollow noreferrer\"><strong>See the video here</strong> also</a></p>\n" }, { "answer_id": 2089712, "score": 2, "text": "<p>I would draw two parallel lines on a piece of paper.
Then I would bring all the edges together into one point and I would show that the parallel lines would touch at that point.</p>\n" }, { "answer_id": 2089986, "score": 2, "text": "<p>A sentence said to me by my mother when I was a kid and that I didn't understand at that time was:</p>\n\n<blockquote>\n <p>Parallel lines are lines that intersect at infinity.</p>\n</blockquote>\n\n<p>This statement and its explanation through art (as suggested by Pedro Tamaroff in the comments) and by showing how many statements and proofs are simplified (if the person knows more math) is a good introduction to the idea that thinking of parallel lines as lines meeting at infinity makes sense and can be useful.</p>\n\n<h2>In art</h2>\n\n<p>This, as you can also see in Yves Daoust's answer, can be seen whenever you look at a straight road going to the horizon. For example, see <a href=\"https://commons.wikimedia.org/wiki/File:Straight_Road,_B17_-_geograph.org.uk_-_232427.jpg\" rel=\"nofollow noreferrer\">Wikimedia Commons</a> for the source,\n<a href=\"https://i.stack.imgur.com/cnNkW.jpg\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/cnNkW.jpg\" alt=\"Source: Wikimedia Commons\"></a><br>\nThe above image shows why this conception originated in art, as it was really useful for representing how we see things. A striking application of this can be seen in the <a href=\"https://en.wikipedia.org/wiki/Santa_Maria_presso_San_Satiro\" rel=\"nofollow noreferrer\">Santa Maria presso San Satiro</a>, where projective geometry was used to emulate an absent space in a church. However, there are many other examples.</p>\n\n<h2>In geometry</h2>\n\n<p>In usual geometry (i.e. affine and Euclidean geometry), the above definition or statement does not make sense, as parallel lines don't intersect.
This can be the definition in the plane, but it is generally (in higher dimensions) a result derived from the definition of parallel lines as lines with the same direction.</p>\n\n<p>However, the reason why we can still make sense of the above statement about intersection at infinity is that affine geometry can be put inside projective geometry. When doing this, the points outside the affine space are called \"points at infinity\"; parallel lines intersect at them and become the same as intersecting lines, which enormously simplifies many statements and proofs by permitting one not to distinguish cases. An example can be <a href=\"https://en.wikipedia.org/wiki/Pappus%27s_hexagon_theorem\" rel=\"nofollow noreferrer\">Pappus's hexagon theorem</a>.</p>\n\n<h2>In conclusion</h2>\n\n<p>In conclusion, don't try to convince or show that parallel lines touch. Just try to explain the usefulness of thinking of parallel lines as lines that intersect at infinity. In the Renaissance, you have many examples of why this is a useful statement from the point of view of representing reality and perspectives well; in mathematics, there are many examples of how this is really useful for simplifying statements and proofs in geometry.</p>\n" }, { "answer_id": 2089964, "score": 1, "text": "<p>In Elliptic geometry the supposition is false.</p>\n\n<p>In Euclidean geometry parallel lines \"meet\" and touch at infinity, as their slopes are the same.</p>\n\n<p>In flat Hyperbolic geometry parallel lines can also touch, but only at infinity.</p>\n\n<p>In the flat Poincare disk model, circle-segment geodesic parallels meet tangentially only at infinitely distant points on the boundary of the \"horizon\" or boundary circle.
The lines can be seen to touch and move on the boundary in the Wiki link.</p>\n\n<p><a href=\"http://mathworld.wolfram.com/PoincareHyperbolicDisk.html\" rel=\"nofollow noreferrer\"> Poincare Disk_MathWorld </a></p>\n\n<p>So also in the half-plane model, semi-circular geodesics meet at a point at $\\infty$ on the x-axis. An animation can also be found elsewhere, in which semi-circles touch the x-axis while the semi-circle size changes during movement.</p>\n\n<p><a href=\"http://math.etsu.edu/multicalc/prealpha/Chap3/Chap3-8/part4.htm\" rel=\"nofollow noreferrer\">PoincareHalfPlane</a></p>\n\n<p>Both examples show parallel lines touching tangentially/asymptotically at a point at infinity.</p>\n\n<p><a href=\"http://math.etsu.edu/multicalc/prealpha/Chap3/Chap3-8/part4.htm\" rel=\"nofollow noreferrer\">http://math.etsu.edu/multicalc/prealpha/Chap3/Chap3-8/part4.htm</a></p>\n" }, { "answer_id": 2103316, "score": 1, "text": "<p>One way to explain this is to think of the Euclidean plane as a picture of (part of) an ambient 3-dimensional world. This corresponds to how our vision actually works: light from the three-dimensional world is projected through the lens of the eye onto the surface at the back of the eye.</p>\n\n<p>In more detail: Place a plane anywhere in three-dimensional space (as long as it doesn't pass through the origin). Call this the \"picture plane\". Now any <em>line through the origin</em> in $\\mathbb{R}^3$ pierces the picture plane in exactly one point (as long as the line is not parallel to the picture plane). Any <em>plane through the origin</em> pierces the picture plane in exactly one line (again, as long as the plane is not parallel to the picture plane).
See below (in which the picture plane is shown in blue).</p>\n\n<p><a href=\"https://i.stack.imgur.com/Dod2D.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/Dod2D.png\" alt=\"enter image description here\"></a>\n<a href=\"https://i.stack.imgur.com/KRnSK.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/KRnSK.png\" alt=\"enter image description here\"></a></p>\n\n<p>Now we have a dictionary:\n$$\\textrm{point in picture plane} \\longleftrightarrow \\textrm{line through origin in 3-space}$$\n$$\\textrm{line in picture plane} \\longleftrightarrow \\textrm{plane through origin in 3-space}$$</p>\n\n<p>This dictionary is not quite 1-to-1, because there are lines (and one plane) through the origin in $\\mathbb{R}^3$ that are parallel to the picture plane, and therefore do not correspond to any points or to a line in the picture plane. These points and this line can be considered \"at infinity\", but really all this means is that they do not have images in the picture plane.</p>\n\n<p>You can experience this visually by holding your finger at arm's length in front of you and a few inches above your eye. As you draw your arm closer (while maintaining its height) the location of your fingertip seems to get \"higher\" or \"farther away\" in your field of view, as your eyes have to strain more and more to see it. Eventually, when the fingertip is directly above the eye, the finger \"disappears\". It's not really gone, of course, but the \"line of sight\" from the fingertip to the lens of your eye is parallel to the retina, so that the fingertip cannot be seen. Perceptually, the fingertip has receded \"infinitely far away\" in the picture plane -- but it is actually just a couple of inches away in 3-space.</p>\n\n<p><strong>However</strong>, we all know what to do in that circumstance: turn your head! This corresponds to choosing a new picture plane with a different orientation.
Now the point that was \"at infinity\" snaps into view and is revealed as an ordinary point in the new picture plane (and at the same time points that <em>were previously</em> in view have now disappeared).</p>\n\n<p>Note that a statement like</p>\n\n<blockquote>\n <p>given any two distinct points in the picture plane, there exists a unique line in the picture plane through the given points</p>\n</blockquote>\n\n<p>which describes a true fact about the geometry of the picture plane corresponds, via this dictionary, to</p>\n\n<blockquote>\n <p>given any two distinct lines through the origin in 3-space, there exists a unique plane through the origin containing both lines</p>\n</blockquote>\n\n<p>which is a true statement about 3-space.</p>\n\n<p>Now that we have this dictionary set up, let's think about what parallel lines in the picture plane are. As the image below shows, parallel lines in the picture plane correspond to two planes in $\\mathbb{R}^3$. (More precisely, lines $f$ and $g$ are \"parallel\" in the blue picture plane, and correspond to the orange and yellow planes through the origin.)</p>\n\n<p><a href=\"https://i.stack.imgur.com/J3BxN.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/J3BxN.png\" alt=\"enter image description here\"></a></p>\n\n<p>Those planes are not parallel; they intersect, but the intersection is a line <em>that does not pierce the picture plane</em>. (In this illustration, the line of intersection happens to be the $y$-axis.) That line in $\\mathbb{R}^3$ corresponds to a point \"at infinity\".</p>\n\n<p>Of course, if you choose a different picture plane, then the lines will no longer appear parallel, and their intersection will be plainly visible as a point in the new picture plane. The final image below shows this.
In that image, the orange and yellow planes are the same ones as before, but the blue picture plane has been moved and reoriented; the images of the two planes are now <em>non-parallel</em> lines.</p>\n\n<p><a href=\"https://i.stack.imgur.com/iRBlv.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/iRBlv.png\" alt=\"enter image description here\"></a></p>\n\n<p>In summary:</p>\n\n<ul>\n<li>If the 2-dimensional plane is understood as an image of an ambient 3-dimensional space, then a \"point\" is just the image of a <em>line through the origin</em> and a \"line\" is just the image of a <em>plane through the origin</em></li>\n<li>Some \"points\" (and the \"line\" containing that point) will have no image in the picture plane, which we describe by saying that they are \"at infinity\"</li>\n<li>However being \"at infinity\" is an artifact of which picture plane we use; changing picture planes changes which points are at infinity</li>\n<li>Furthermore whether two \"lines\" are parallel is an artifact of the picture plane we use to view them; lines that seem parallel from one perspective are non-parallel from a different perspective</li>\n<li>The connection between the last two bullets is that \"lines\" that appear parallel in one picture plane <em>do</em> intersect, but the point of intersection is a point \"at infinity\" <em>relative to the picture plane</em>; change perspectives and the point is no longer at infinity, and the lines are no longer parallel.</li>\n<li>All of the above corresponds more-or-less exactly with how the human eye works, which is why perspective art can do such a good job at fooling the eye and creating the illusion of depth on a flat surface.</li>\n</ul>\n" } ]
Question: How do you convince someone that parallel lines can touch/meet? Answer 1: <p>Lines are parallel if they lie in the same plane and they don't intersect. In other geometries, there may be no parallel lines, lines may not have a common point but they may have a common limit point at infinity, or they may just not intersect.</p> <p>You might be thinking about Projective Geometry where a "point at infinity" is added to every family of parallel lines and the set of all points at infinity is called the "line at infinity".</p> <p>Addendum</p> <hr> <p>Typically, in formal geometry, points, lines, and planes are not defined. But postulates define their "baseline" behavior. The postulate that every geometry seems to agree on is the one that states </p> <p>L1: Given two distinct points in a plane, there is exactly one line in that plane that contains them.</p> <p>The "dual" of that postulate is</p> <p>DL1: Given two distinct lines in a plane, there is exactly one point in that plane that belongs to both lines.</p> <p>Since Euclidean geometry contains parallel lines, DL1 is false there. But Projective geometry accepts DL1 as a postulate. The big question is, "Does there exist a geometry that satisfies the postulates of Projective geometry?" Yes, there does.</p> <p>The creation of such a geometry is really quite clever. You start with a Euclidean plane and you add points to it as follows. Pick any line in the plane. To that line and all lines parallel to it, you add one extra point, a point at infinity. This is a set thing. We are treating a Euclidean line, $l$, as a set of points and we are adding a non-Euclidean point $p$ to that set, $l' = l \cup \{p\}$.</p> <p>Adding this point to those lines means that those lines are no longer parallel. Define the set of all points at infinity to be the line at infinity. The Projective plane is the Euclidean plane with all points at infinity and the line at infinity added to it.
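The construction just described can be sketched numerically with homogeneous coordinates, a standard coordinate model of the projective plane. A minimal sketch (the `cross` helper and the sample lines below are illustrative choices, not part of the answer above):

```python
# A line ax + by + c = 0 is represented by the triple (a, b, c); two distinct
# lines meet at the cross product of their triples. A result whose last
# coordinate is 0 is one of the added "points at infinity".

def cross(u, v):
    """Cross product of two 3-vectors, as plain tuples."""
    return (u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0])

l1 = (1, 0, 0)   # the line x = 0
l2 = (1, 0, -1)  # the parallel line x = 1
p = cross(l1, l2)
print(p)  # (0, 1, 0): last coordinate 0, so the parallel lines meet "at infinity"

q = cross((1, 0, 0), (0, 1, 0))  # the non-parallel lines x = 0 and y = 0
print(q)  # (0, 0, 1): an ordinary point, the origin (0/1, 0/1)
```

In this model every pair of distinct lines gets exactly one common point, which is the dual postulate expressed in coordinates.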
This particular Projective plane can be proved to satisfy DL1.</p> <p>In the other direction, pick any line, $l$, in the projective plane, $\mathbb P^2$, and remove it. What you end up with is the Euclidean plane, $\mathbb E^2 \cong \mathbb P^2 - \{l\}$. Some lines will still intersect. Those that intersected at a point on the line that was removed will now be parallel.</p> Answer 2: <p>I would say if we stand next to each other and both walk due north, we eventually bump into each other near the north pole.</p> <p>Note, as several have commented, your premise is slightly off. What you can convince your friend of is that any two 'straight lines', i.e. geodesics, cross on a sphere (not parallel lines), whereas on a plane, for any line there is a special family of lines that never touch it.</p> Answer 3: <p>As I said in the comments, you can't convince someone with something which is not true.</p> <p>IMO if someone knows basic geometry then he must be aware of the fact that:</p> <h1>Parallel lines do not meet at a point.</h1> <p><a href="https://en.wikipedia.org/wiki/Parallel_(geometry)" rel="nofollow noreferrer">This</a> section of Wikipedia is worth a lot here:</p> <blockquote> <p>In geometry, parallel lines are lines in a plane which do not meet; that is, two lines in a plane that do not intersect or touch each other at any point are said to be parallel. By extension, a line and a plane, or two planes, in three-dimensional Euclidean space that do not share a point are said to be parallel.</p> </blockquote> <p>Though if you are convincing/arguing with someone naive in the field of geometry that parallel lines do not meet, ask him why people call him by his/her name and not by <a href="https://math.stackexchange.com/users/401699/justin-bieber">Justin Bieber</a>/<a href="https://math.stackexchange.com/users/143816/selena-gomez">Selena Gomez</a>? The expected answer would be that <em>Because it is my name</em> and that's it. You got him/her.
Actually, parallel lines cannot meet at a point or intersect, because they are defined that way: if two lines intersect, then they no longer remain parallel.</p> Answer 4: <p>When I was 16 or so I was bored trying to solve my math homework, so I played with a magnifying glass, and I noticed something interesting:</p> <p>If you look at grid paper with a magnifying glass, the lines remain parallel (this is evident when they are more or less on the "top of the glass"), but all the lines meet at the edge of the glass.</p> <p>(I went to my older brother, who was an engineering freshman at the time, and I told him that parallel lines <em>can</em> meet; but he replied that they cannot because it's an axiom. Some years later I learned that there is a thing called non-Euclidean geometry.)</p> <p>&nbsp;</p> <p>Note, by the way, this might require you to think about the glass as infinite.</p> Answer 5: <p>In his comment @JackyChong has identified the preliminary problem. The definition of "parallel" is clear: lines that don't meet, so there are no parallel lines that meet. The real question is the definition of "line".</p> <p>Geodesic is the most natural for geometry on the sphere. Then in this geometry there are no parallel lines.</p> <p>For projective geometry one definition is to add a "point at infinity" on each line, and then make the added points into a "line at infinity". With these extra points and lines there are no parallel lines. Two that are parallel in the Euclidean plane share their point at infinity. The railroad track analogy helps here.</p> <p>In hyperbolic geometry there are multiple lines parallel to a given line through a point not on that line.
If you get that far with the "someone who knows no mathematics" you can show him or her the <a href="https://en.wikipedia.org/wiki/Poincar%C3%A9_disk_model" rel="nofollow noreferrer">Poincare model</a>.</p> Answer 6: <p>I would use two meridians on Earth that touch at the poles.</p> Answer 7: <p>This is a matter of point of view :)</p> <p><a href="https://i.stack.imgur.com/lGePi.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/lGePi.png" alt="enter image description here"></a></p> Answer 8: <p>You can actually speak of trains. Railways are pretty suggestive.</p> Answer 9: <p>You cannot convince any mathematician, let alone your friend, as it is false. They can, however, grapple with the idea that for purely theoretical calculations we assume that 2 parallel lines can meet at a point at infinity. Should you wish to travel to infinity to prove this point then please send a postcard when you get there.</p> Answer 10: <p>Someone who knows basic mathematics can understand it easily, since according to the definition of parallel lines:<br> <strong>Parallel lines:</strong> Lines whose distance from each other is constant are called <a href="https://www.google.co.in/#q=parallel+lines+definition" rel="nofollow noreferrer"><strong>parallel lines</strong> [see the topmost definition]</a></p> <p>Now, as we know from the definition, coinciding lines are also parallel, since the distance between them is constant, namely $0$. Hence, since coinciding lines touch each other, parallel lines can also touch.</p> <p><a href="https://www.youtube.com/watch?v=BKorP55Aqvg" rel="nofollow noreferrer"><strong>See the video here</strong> also</a></p> Answer 11: <p>I would draw two parallel lines on a piece of paper.
Then I would bring all the edges together into one point and I would show that the parallel lines would touch at that point.</p> Answer 12: <p>A sentence said to me by my mother when I was a kid and that I didn't understand at that time was:</p> <blockquote> <p>Parallel lines are lines that intersect at infinity.</p> </blockquote> <p>This statement and its explanation through art (as suggested by Pedro Tamaroff in the comments) and by showing how many statements and proofs are simplified (if the person knows more math) is a good introduction to the idea that thinking of parallel lines as lines meeting at infinity makes sense and can be useful.</p> <h2>In art</h2> <p>This, as you can also see in Yves Daoust's answer, can be seen whenever you look at a straight road going to the horizon. For example, see <a href="https://commons.wikimedia.org/wiki/File:Straight_Road,_B17_-_geograph.org.uk_-_232427.jpg" rel="nofollow noreferrer">Wikimedia Commons</a> for the source, <a href="https://i.stack.imgur.com/cnNkW.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/cnNkW.jpg" alt="Source: Wikimedia Commons"></a><br> The above image shows why this conception originated in art, as it was really useful for representing how we see things. A striking application of this can be seen in the <a href="https://en.wikipedia.org/wiki/Santa_Maria_presso_San_Satiro" rel="nofollow noreferrer">Santa Maria presso San Satiro</a>, where projective geometry was used to emulate an absent space in a church. However, there are many other examples.</p> <h2>In geometry</h2> <p>In usual geometry (i.e. affine and Euclidean geometry), the above definition or statement does not make sense, as parallel lines don't intersect.
This can be the definition in the plane, but it is generally (in higher dimensions) a result derived from the definition of parallel lines as lines with the same direction.</p> <p>However, the reason why we can still make sense of the above statement about intersection at infinity is that affine geometry can be put inside projective geometry. When doing this, the points outside the affine space are called "points at infinity"; parallel lines intersect at them and become the same as intersecting lines, which enormously simplifies many statements and proofs by permitting one not to distinguish cases. An example can be <a href="https://en.wikipedia.org/wiki/Pappus%27s_hexagon_theorem" rel="nofollow noreferrer">Pappus's hexagon theorem</a>.</p> <h2>In conclusion</h2> <p>In conclusion, don't try to convince or show that parallel lines touch. Just try to explain the usefulness of thinking of parallel lines as lines that intersect at infinity. In the Renaissance, you have many examples of why this is a useful statement from the point of view of representing reality and perspectives well; in mathematics, there are many examples of how this is really useful for simplifying statements and proofs in geometry.</p> Answer 13: <p>In Elliptic geometry the supposition is false.</p> <p>In Euclidean geometry parallel lines "meet" and touch at infinity, as their slopes are the same.</p> <p>In flat Hyperbolic geometry parallel lines can also touch, but only at infinity.</p> <p>In the flat Poincare disk model, circle-segment geodesic parallels meet tangentially only at infinitely distant points on the boundary of the "horizon" or boundary circle. The lines can be seen to touch and move on the boundary in the Wiki link.</p> <p><a href="http://mathworld.wolfram.com/PoincareHyperbolicDisk.html" rel="nofollow noreferrer"> Poincare Disk_MathWorld </a></p> <p>So also in the half-plane model, semi-circular geodesics meet at a point at $\infty$ on the x-axis.
An animation can also be found elsewhere, in which semi-circles touch the x-axis while the semi-circle size changes during movement.</p> <p><a href="http://math.etsu.edu/multicalc/prealpha/Chap3/Chap3-8/part4.htm" rel="nofollow noreferrer">PoincareHalfPlane</a></p> <p>Both examples show parallel lines touching tangentially/asymptotically at a point at infinity.</p> <p><a href="http://math.etsu.edu/multicalc/prealpha/Chap3/Chap3-8/part4.htm" rel="nofollow noreferrer">http://math.etsu.edu/multicalc/prealpha/Chap3/Chap3-8/part4.htm</a></p> Answer 14: <p>One way to explain this is to think of the Euclidean plane as a picture of (part of) an ambient 3-dimensional world. This corresponds to how our vision actually works: light from the three-dimensional world is projected through the lens of the eye onto the surface at the back of the eye.</p> <p>In more detail: Place a plane anywhere in three-dimensional space (as long as it doesn't pass through the origin). Call this the "picture plane". Now any <em>line through the origin</em> in $\mathbb{R}^3$ pierces the picture plane in exactly one point (as long as the line is not parallel to the picture plane). Any <em>plane through the origin</em> pierces the picture plane in exactly one line (again, as long as the plane is not parallel to the picture plane).
See below (in which the picture plane is shown in blue).</p> <p><a href="https://i.stack.imgur.com/Dod2D.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Dod2D.png" alt="enter image description here"></a> <a href="https://i.stack.imgur.com/KRnSK.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/KRnSK.png" alt="enter image description here"></a></p> <p>Now we have a dictionary: $$\textrm{point in picture plane} \longleftrightarrow \textrm{line through origin in 3-space}$$ $$\textrm{line in picture plane} \longleftrightarrow \textrm{plane through origin in 3-space}$$</p> <p>This dictionary is not quite 1-to-1, because there are lines (and one plane) through the origin in $\mathbb{R}^3$ that are parallel to the picture plane, and therefore do not correspond to any points or to a line in the picture plane. These points and this line can be considered "at infinity", but really all this means is that they do not have images in the picture plane.</p> <p>You can experience this visually by holding your finger at arm's length in front of you and a few inches above your eye. As you draw your arm closer (while maintaining its height) the location of your fingertip seems to get "higher" or "farther away" in your field of view, as your eyes have to strain more and more to see it. Eventually, when the fingertip is directly above the eye, the finger "disappears". It's not really gone, of course, but the "line of sight" from the fingertip to the lens of your eye is parallel to the retina, so that the fingertip cannot be seen. Perceptually, the fingertip has receded "infinitely far away" in the picture plane -- but it is actually just a couple of inches away in 3-space.</p> <p><strong>However</strong>, we all know what to do in that circumstance: turn your head! This corresponds to choosing a new picture plane with a different orientation.
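The dictionary above can also be checked numerically. A minimal sketch, assuming for concreteness the picture plane z = 1 (any plane avoiding the origin works; the plane normals chosen below are illustrative):

```python
# A plane through the origin with normal (a, b, c) meets the picture plane
# z = 1 in the image line ax + by + c = 0. Two such planes intersect in a line
# through the origin whose direction is the cross product of the normals; that
# direction pierces z = 1 only if its z-component is nonzero. Otherwise the
# two image lines are parallel and their "intersection" is at infinity.

def cross(u, v):
    """Cross product of two 3-vectors, as plain tuples."""
    return (u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0])

n1 = (1, 0, 0)   # plane x = 0 -> image line x = 0 in the picture plane
n2 = (1, 0, -1)  # plane x = z -> image line x = 1 in the picture plane
d = cross(n1, n2)
print(d)  # (0, 1, 0): z-component is 0, so the planes' common line misses
          # z = 1 and the image lines x = 0 and x = 1 are parallel

# Reorienting the picture plane to y = 1 instead, the same common line
# pierces it at (0, 1, 0): the "point at infinity" snaps into view.
```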
Now the point that was "at infinity" snaps into view and is revealed as an ordinary point in the new picture plane (and at the same time points that <em>were previously</em> in view have now disappeared).</p> <p>Note that a statement like</p> <blockquote> <p>given any two distinct points in the picture plane, there exists a unique line in the picture plane through the given points</p> </blockquote> <p>which describes a true fact about the geometry of the picture plane corresponds, via this dictionary, to</p> <blockquote> <p>given any two distinct lines through the origin in 3-space, there exists a unique plane through the origin containing both lines</p> </blockquote> <p>which is a true statement about 3-space.</p> <p>Now that we have this dictionary set up, let's think about what parallel lines in the picture plane are. As the image below shows, parallel lines in the picture plane correspond to two planes in $\mathbb{R}^3$. (More precisely, lines $f$ and $g$ are "parallel" in the blue picture plane, and correspond to the orange and yellow planes through the origin.)</p> <p><a href="https://i.stack.imgur.com/J3BxN.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/J3BxN.png" alt="enter image description here"></a></p> <p>Those planes are not parallel; they intersect, but the intersection is a line <em>that does not pierce the picture plane</em>. (In this illustration, the line of intersection happens to be the $y$-axis.) That line in $\mathbb{R}^3$ corresponds to a point "at infinity".</p> <p>Of course, if you choose a different picture plane, then the lines will no longer appear parallel, and their intersection will be plainly visible as a point in the new picture plane. The final image below shows this.
In that image, the orange and yellow planes are the same ones as before, but the blue picture plane has been moved and reoriented; the images of the two planes are now <em>non-parallel</em> lines.</p> <p><a href="https://i.stack.imgur.com/iRBlv.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/iRBlv.png" alt="enter image description here"></a></p> <p>In summary:</p> <ul> <li>If the 2-dimensional plane is understood as an image of an ambient 3-dimensional space, then a "point" is just the image of a <em>line through the origin</em> and a "line" is just the image of a <em>plane through the origin</em></li> <li>Some "points" (and the "line" containing that point) will have no image in the picture plane, which we describe by saying that they are "at infinity"</li> <li>However being "at infinity" is an artifact of which picture plane we use; changing picture planes changes which points are at infinity</li> <li>Furthermore whether two "lines" are parallel is an artifact of the picture plane we use to view them; lines that seem parallel from one perspective are non-parallel from a different perspective</li> <li>The connection between the last two bullets is that "lines" that appear parallel in one picture plane <em>do</em> intersect, but the point of intersection is a point "at infinity" <em>relative to the picture plane</em>; change perspectives and the point is no longer at infinity, and the lines are no longer parallel.</li> <li>All of the above corresponds more-or-less exactly with how the human eye works, which is why perspective art can do such a good job at fooling the eye and creating the illusion of depth on a flat surface.</li> </ul> Answer 15: <p>Lines are parallel if they lie in the same plane and they don't intersect. 
In other geometries, there may be no parallel lines, lines may not have a common point but they may have a common limit point at infinity, or they may just not intersect.</p> <p>You might be thinking about Projective Geometry where a "point at infinity" is added to every family of parallel lines and the set of all points at infinity is called the "line at infinity".</p> <p>Addendum</p> <hr> <p>Typically, in formal geometry, points, lines, and planes are not defined. But postulates define their "baseline" behavior. The postulate that every geometry seems to agree on is the one that states </p> <p>L1: Given two distinct points in a plane, there is exactly one line in that plane that contains them.</p> <p>The "dual" of that postulate is</p> <p>DL1: Given two distinct lines in a plane, there is exactly one point in that plane that belongs to both lines.</p> <p>Since Euclidean geometry contains parallel lines, DL2 is false. But Projective geometry accepts DL2 as a postulate. The big question is, "Does there exists a geometry that satisfies the postulates of Projective geometry?" Yes there does.</p> <p>The creation of such a geometry is really quite clever. You start with a Euclidean plane and you add points to it as follows. Pick any line in the plane. To that line and all lines parallel to it, you add one extra point, a point at infinity. This is a set thing. We are treating a Euclidean line, $l$, as a set of points and we are adding a non Euclidean point $p$ to that set, $l' = l \cup \{p\}$.</p> <p>Adding this point to those lines means that those lines are no longer parallel. Define the set of all points at infinity to be the line at infinity. The Projective plane is the Euclidian plane with all points at infinity and the line at infinity added to it. This particular Projective plane can be proved to satisfy DL2.</p> <p>In the other direction, pick any line, $l$, in the projective plane, $\mathbb P^2$, and remove it. 
What you end up with is the Euclidean plane, $\mathbb E^2 \cong \mathbb P^2 - \{l\}$. Some lines will still intersect. Those that intersected at a point on the line that was removed will now be parallel.</p> Answer 16: <p>I would say if we stand next to each other and both walk due north, we eventually bump into each other near the north pole.</p> <p>Note, as several have commented, your premise is slightly off. What you can convince your friend of is that any two 'straight lines', i.e. geodesics, cross on a sphere (not parallel lines), whereas on a plane, for any line there is a special family of lines that never touch it.</p> Answer 17: <p>As I said in comments, you can't convince someone with something which is not true.</p> <p>IMO if someone knows basic geometry then he must be aware of the fact that:</p> <h1>Parallel lines do not meet at a point.</h1> <p><a href="https://en.wikipedia.org/wiki/Parallel_(geometry)" rel="nofollow noreferrer">This</a> section of Wikipedia is worth a lot here:</p> <blockquote> <p>In geometry, parallel lines are lines in a plane which do not meet; that is, two lines in a plane that do not intersect or touch each other at any point are said to be parallel. By extension, a line and a plane, or two planes, in three-dimensional Euclidean space that do not share a point are said to be parallel.</p> </blockquote> <p>Though if you are convincing/arguing with someone naive in the field of geometry that parallel lines do not meet, ask him why people call him by his/her name and not by <a href="https://math.stackexchange.com/users/401699/justin-bieber">Justin Bieber</a>/<a href="https://math.stackexchange.com/users/143816/selena-gomez">Selena Gomez</a>? The expected answer would be <em>because it is my name</em>, and that's it. You got him/her. 
Actually, parallel lines cannot meet at a point or intersect, because they are defined that way: if two lines intersect, then they are not parallel lines.</p> Answer 18: <p>When I was 16 or so I was bored trying to solve my math homework, so I played with a magnifying glass, and I noticed something interesting:</p> <p>If you look at grid paper with a magnifying glass, the lines remain parallel (this is evident when they are more or less on the "top of the glass"), but all the lines meet at the edge of the glass.</p> <p>(I went to my older brother, who was an engineering freshman at the time, and I told him that parallel lines <em>can</em> meet; but he replied that they cannot because it's an axiom. Some years later I learned that there is a thing called non-Euclidean geometry.)</p> <p>&nbsp;</p> <p>Note, by the way, this might require you to think of the glass as infinite.</p> Answer 19: <p>In his comment @JackyChong has identified the preliminary problem. The definition of "parallel" is clear: lines that don't meet; so there are no parallel lines that meet. The real question is the definition of "line".</p> <p>The geodesic is the most natural notion of "line" for geometry on the sphere. In this geometry there are no parallel lines.</p> <p>For projective geometry one definition is to add a "point at infinity" on each line, and then make the added points into a "line at infinity". With these extra points and lines there are no parallel lines. Two lines that are parallel in the Euclidean plane share their point at infinity. The railroad track analogy helps here.</p> <p>In hyperbolic geometry there are multiple lines parallel to a given line through a point not on that line. 
If you get that far with the "someone who knows no mathematics" you can show him or her the <a href="https://en.wikipedia.org/wiki/Poincar%C3%A9_disk_model" rel="nofollow noreferrer">Poincare model</a>.</p> Answer 20: <p>I would use two meridians on Earth that touch at the poles.</p> Answer 21: <p>This is a matter of point of view :)</p> <p><a href="https://i.stack.imgur.com/lGePi.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/lGePi.png" alt="enter image description here"></a></p> Answer 22: <p>You can actually speak of trains. Railways are pretty suggestive.</p> Answer 23: <p>You cannot convince any mathematician, let alone your friend, as it is false. They can, however, grapple with the idea that for purely theoretical calculations we assume that 2 parallel lines can meet at a point at infinity. Should you wish to travel to infinity to prove this point, then please send a postcard when you get there.</p> Answer 24: <p>Someone who knows basic mathematics can understand it easily from the definition of parallel lines:<br> <strong>Parallel lines:</strong> Lines whose distance apart is constant are called <a href="https://www.google.co.in/#q=parallel+lines+definition" rel="nofollow noreferrer"><strong>parallel lines</strong>[see the topmost definition]</a></p> <p>Now, by this definition, coinciding lines are also parallel, since the distance between them is constant, namely $0$. Hence, since coinciding lines touch each other, it follows that parallel lines can also touch. </p> <p><a href="https://www.youtube.com/watch?v=BKorP55Aqvg" rel="nofollow noreferrer"><strong>See the video here</strong> also</a></p> Answer 25: <p>I would draw two parallel lines on a piece of paper. 
Then I would bring all the edges together into one point and I would show that the parallel lines would touch at that point.</p> Answer 26: <p>A sentence said to me by my mother when I was a kid and that I didn't understand at that time was:</p> <blockquote> <p>Parallel lines are lines that intersect at infinity.</p> </blockquote> <p>This statement and its explanation through art (as suggested by Pedro Tamaroff in the comments) and by showing how many statements and proofs are simplified (if the person knows more math) is a good introduction to the idea that thinking of parallel lines as lines meeting at infinity makes sense and can be useful.</p> <h2>In art</h2> <p>This, as you can also see in Yves Daoust's answer, can be seen whenever you look at a straight road going to the horizon. For example, see <a href="https://commons.wikimedia.org/wiki/File:Straight_Road,_B17_-_geograph.org.uk_-_232427.jpg" rel="nofollow noreferrer">Wikimedia Commons</a> for the source, <a href="https://i.stack.imgur.com/cnNkW.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/cnNkW.jpg" alt="Source: Wikimedia Commons"></a><br> The above image shows why this conception originated in art: it was really useful for representing how we see things. A striking application of this can be seen in the <a href="https://en.wikipedia.org/wiki/Santa_Maria_presso_San_Satiro" rel="nofollow noreferrer">Santa Maria presso San Satiro</a>, where projective geometry was used to emulate an absent space in a church. However, there are many other examples.</p> <h2>In geometry</h2> <p>In usual geometry (i.e. affine and Euclidean geometry), the above definition or statement does not make sense, as parallel lines don't intersect. 
This can be the definition in the plane, but it is generally (in higher dimensions) a result derived from the definition of parallel lines as lines with the same direction.</p> <p>However, the reason we can still make sense of the above statement about intersection at infinity is that affine geometry can be put inside projective geometry. When doing this, the points outside the affine space are called "points at infinity"; parallel lines intersect at them and become the same as intersecting lines, which enormously simplifies many statements and proofs by removing the need to distinguish cases. An example can be <a href="https://en.wikipedia.org/wiki/Pappus%27s_hexagon_theorem" rel="nofollow noreferrer">Pappus's hexagon theorem</a>.</p> <h2>In conclusion</h2> <p>In conclusion, don't try to convince or show that parallel lines touch. Just try to explain the usefulness of thinking of parallel lines as lines that intersect at infinity. In the Renaissance, you have many examples of why this is a useful statement from the point of view of representing reality and perspectives well; in mathematics, there are many examples of how this is really useful for simplifying statements and proofs in geometry.</p> Answer 27: <p>In Elliptic geometry the supposition is false.</p> <p>In Euclidean geometry parallel lines "meet" and touch at infinity, as their slopes are the same.</p> <p>In flat Hyperbolic geometry parallel lines can also touch, but only at infinity.</p> <p>In the flat Poincare disk model, circle-segment geodesic parallels meet tangentially only at infinitely distant points on the boundary of the "horizon" or boundary circle. The lines can be seen to touch and move on the boundary in the link below.</p> <p><a href="http://mathworld.wolfram.com/PoincareHyperbolicDisk.html" rel="nofollow noreferrer"> Poincare Disk_MathWorld </a></p> <p>Similarly, in the half-plane model, semi-circular geodesics meet at a point at $\infty$ on the x-axis. 
An animation can be also found elsewhere semi-circles touch on x-axis while changing semi-circle size during movement.</p> <p><a href="http://math.etsu.edu/multicalc/prealpha/Chap3/Chap3-8/part4.htm" rel="nofollow noreferrer">PoincareHalfPlane</a></p> <p>Both examples are touching parallel lines, touching tangentially /asymptotically at point infinity.</p> <p><a href="http://math.etsu.edu/multicalc/prealpha/Chap3/Chap3-8/part4.htm" rel="nofollow noreferrer">http://math.etsu.edu/multicalc/prealpha/Chap3/Chap3-8/part4.htm</a></p> Answer 28: <p>One way to explain this is to think of the Euclidean plane as a picture of (part of) an ambient 3-dimensional world. This corresponds to how our vision actually works: light from the three-dimensional world is projected through the lens of an eyes onto the surface at the back of our eye.</p> <p>In more detail: Place a plane anywhere in three-dimensional space (as long as it doesn't pass through the origin). Call this the "picture plane". Now any <em>line through the origin</em> in $\mathbb{R}^3$ pierces the picture plane in exactly one point (as long as the line it is not parallel to the picture plane). Any <em>plane through the origin</em> pierces the picture plane in exactly one line (again, as long as the plane is not parallel to the picture plane). 
See below (in which the picture plane is shown in blue).</p> <p><a href="https://i.stack.imgur.com/Dod2D.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Dod2D.png" alt="enter image description here"></a> <a href="https://i.stack.imgur.com/KRnSK.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/KRnSK.png" alt="enter image description here"></a></p> <p>Now we have a dictionary: $$\textrm{point in picture plane} \longleftrightarrow \textrm{line through origin in 3-space}$$ $$\textrm{line in picture plane} \longleftrightarrow \textrm{plane through origin in 3-space}$$</p> <p>This dictionary is not quite 1-to-1, because there are lines (and one plane) through the origin in $\mathbb{R}^3$ that are parallel to the picture plane, and therefore do not correspond to any points or to a line in the picture plane. These are points and this line can be considered "at infinity", but really all this means is that they do not have images in the picture plane.</p> <p>You can experience this visually by holding your finger at arm's length in front of and a few inches above. As you draw your arm closer (while maintaining its height) the location of your fingertip seems to get "higher" or "farther away" in your field of view, as your eyes have to strain more and more to see it. Eventually when the fingertip is directly above the eye, the finger "disappears". It's not really gone, of course, but the "line of sight" from the fingertip to the lens of your eye is parallel to the retina, so that the fingertip cannot be seen. Perceptually, the fingertip has receded "infinitely far away" in the picture plane -- but it is actually just a couple of inches away in 3-space.</p> <p><strong>However</strong>, we all know what to do in that circumstance: turn your head! This corresponds to choosing a new picture plane with a different orientation. 
Now the point that was "at infinity" snaps into view and is revealed as an ordinary point in the new picture plane (and at the same time points that <em>were previously</em> in view have no disappeared).</p> <p>Note that a statement like</p> <blockquote> <p>given any two distinct points in the picture plane, there exists a unique line in the picture plane through the given points</p> </blockquote> <p>which describes a true fact about the geometry of the picture plane corresponds, via this dictionary, to</p> <blockquote> <p>given any two distinct lines through the origin in 3-space, there exists a unique plane through the origin containing both lines</p> </blockquote> <p>which is a true statement about 3-space.</p> <p>Now that we have this dictionary set up, let's think about what parallel lines in the picture plane are. As the image below shows, parallel lines in the picture plane correspond to two planes in $\mathbb{R}^3$. (More precisely, lines $f$ and $g$ are "parallel" in the blue picture plane, and correspond to the orange and yellow planes through the origin.)</p> <p><a href="https://i.stack.imgur.com/J3BxN.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/J3BxN.png" alt="enter image description here"></a></p> <p>Those planes are not parallel; they intersect, but the intersection is a line <em>that does not pierce the picture plane</em>. (In this illustration, the line of intersection happens to be the $y$-axis.) That line in $\mathbb{R}^3$ corresponds to a point "at infinity".</p> <p>Of course if you choose a different picture plane, then the lines will no longer appear parallel, and their intersection will be plainly visible as a point in the new picture plane. The final image below shows this. 
In that image, the orange and yellow planes are the same ones as before, but the blue picture plane has been moved and reoriented; the images of the two planes are now <em>non-parallel</em> lines.</p> <p><a href="https://i.stack.imgur.com/iRBlv.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/iRBlv.png" alt="enter image description here"></a></p> <p>In summary:</p> <ul> <li>If the 2-dimensional plane is understood as an image of an ambient 3-dimensional space, then a "point" is just the image of a <em>line through the origin</em> and a "line" is just the image of a <em>plane through the origin</em></li> <li>Some "points" (and the "line" containing that point) will have no image in the picture plane, which we describe by saying that they are "at infinity"</li> <li>However being "at infinity" is an artifact of which picture plane we use; changing picture planes changes which points are at infinity</li> <li>Furthermore whether two "lines" are parallel is an artifact of the picture plane we use to view them; lines that seem parallel from one perspective are non-parallel from a different perspective</li> <li>The connection between the last two bullets is that "lines" that appear parallel in one picture plane <em>do</em> intersect, but the point of intersection is a point "at infinity" <em>relative to the picture plane</em>; change perspectives and the point is no longer at infinity, and the lines are no longer parallel.</li> <li>All of the above corresponds more-or-less exactly with how the human eye works, which is why perspective art can do such a good job at fooling the eye and creating the illusion of depth on a flat surface.</li> </ul> Answer 29: <p>Lines are parallel if they lie in the same plane and they don't intersect. 
In other geometries, there may be no parallel lines, lines may not have a common point but they may have a common limit point at infinity, or they may just not intersect.</p> <p>You might be thinking about Projective Geometry where a "point at infinity" is added to every family of parallel lines and the set of all points at infinity is called the "line at infinity".</p> <p>Addendum</p> <hr> <p>Typically, in formal geometry, points, lines, and planes are not defined. But postulates define their "baseline" behavior. The postulate that every geometry seems to agree on is the one that states </p> <p>L1: Given two distinct points in a plane, there is exactly one line in that plane that contains them.</p> <p>The "dual" of that postulate is</p> <p>DL1: Given two distinct lines in a plane, there is exactly one point in that plane that belongs to both lines.</p> <p>Since Euclidean geometry contains parallel lines, DL2 is false. But Projective geometry accepts DL2 as a postulate. The big question is, "Does there exists a geometry that satisfies the postulates of Projective geometry?" Yes there does.</p> <p>The creation of such a geometry is really quite clever. You start with a Euclidean plane and you add points to it as follows. Pick any line in the plane. To that line and all lines parallel to it, you add one extra point, a point at infinity. This is a set thing. We are treating a Euclidean line, $l$, as a set of points and we are adding a non Euclidean point $p$ to that set, $l' = l \cup \{p\}$.</p> <p>Adding this point to those lines means that those lines are no longer parallel. Define the set of all points at infinity to be the line at infinity. The Projective plane is the Euclidian plane with all points at infinity and the line at infinity added to it. This particular Projective plane can be proved to satisfy DL2.</p> <p>In the other direction, pick any line, $l$, in the projective plane, $\mathbb P^2$, and remove it. 
What you end up with is the Euclidean plane, $\mathbb E^2 \cong \mathbb P^2 - \{l\}$. Some lines will still intersect. Those that intersected at a point on the line that was removed will now be parallel.</p> Answer 30: <p>I would say if we stand next to each other and both walk due north, we eventually bump into each other near the north pole</p> <p>Note, as several have commented, your premise is slightly off. What you can convince your friend of is that any two 'straight lines' i.e. geodesics cross on a sphere (not parallel lines), whereas on a plane, for any line there is a special family of lines that never touch it.</p> Answer 31: <p>As I said in comments, you can't convince someone with something which is not true.</p> <p>IMO if someone knows basic geometry then he he must be aware of the fact that :</p> <h1>Parallel lines do not meet at a point.</h1> <p><a href="https://en.wikipedia.org/wiki/Parallel_(geometry)" rel="nofollow noreferrer">This</a> section of Wikipedia worth a lot here:</p> <blockquote> <p>In geometry, parallel lines are lines in a plane which do not meet; that is, two lines in a plane that do not intersect or touch each other at any point are said to be parallel. By extension, a line and a plane, or two planes, in three-dimensional Euclidean space that do not share a point are said to be parallel.</p> </blockquote> <p>Though if you are convincing/arguing with someone naive in field of geometry that Parallel line do not meet, ask him why people call him by his/her name and not by <a href="https://math.stackexchange.com/users/401699/justin-bieber">Justin Bieber</a>/<a href="https://math.stackexchange.com/users/143816/selena-gomez">Selena Gomez</a>? The expected answer would be that <em>Because it is my name</em> and that's it. You got him/her. 
Actually parallel lines cannot meet at a point or intersect because they are defined that way, if two lines will intersect then they will not remain parallel lines.</p> Answer 32: <p>When I was 16 or so I was bored trying to solve my math homework, so I played with a magnifying glass, and I noticed something interesting:</p> <p>If you look at the grid paper with a magnifying glass the lines remain parallel (this is evident when they are more or less on the "top of the glass"), but all the lines meet at the edge of the glass.</p> <p>(I went to my older brother, who was an engineering freshman at the time, and I told him that parallel lines <em>can</em> meet; but he replied that they cannot because it's an axiom. Some years later I learned that there is a thing called non-Euclidean geometry.)</p> <p>&nbsp;</p> <p>Note, by the way, this might require you to think about the glass as infinite.</p> Answer 33: <p>In his comment @JackyChong has identified the preliminary problem. The definition of "parallel" is clear: lines that don't meet, so there are no parallel lines that meet. The real question is the definition of "line".</p> <p>Geodesic is the most natural for geometry on the sphere. Then in this geometry there are no parallel lines.</p> <p>For projective geometry one definition is to add a "point at infinity" on each line, and then make the added points into a "line at infinity". With these extra points and lines there are no parallel lines. Two that are parallel in the Euclidean plane share their point at infinity. The railroad track analogy helps here.</p> <p>In hyperbolic geometry there are multiple lines parallel to a given line through a point not on that line. 
If you get that far with the "someone who knows no mathematics" you can show him or her the <a href="https://en.wikipedia.org/wiki/Poincar%C3%A9_disk_model" rel="nofollow noreferrer">Poincare model</a>.</p> Answer 34: <p>I would use two meridians on Earth that touch at the poles.</p> Answer 35: <p>This is a matter of point of view :)</p> <p><a href="https://i.stack.imgur.com/lGePi.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/lGePi.png" alt="enter image description here"></a></p> Answer 36: <p>You can actually speak of trains. Rail ways are pretty suggestive.</p> Answer 37: <p>You cannot convince any mathematician let alone your friend as it is false. They can however grapple with the idea that for purely theoretical calculations we assume that 2 parallel lines can meet at a point in infinity. Should you wish to travel to infinity to prove this point then please send a postcard when you get there.</p> Answer 38: <p>Someone who knows basic mathematics can understand it easily since according to the definition of parallel lines<br> <strong>Parallel lines:</strong> The lines those distance is constant are called <a href="https://www.google.co.in/#q=parallel+lines+definition" rel="nofollow noreferrer"><strong>parallel lines</strong>[see the topmost definition]</a></p> <p>Now, as we know the definition the coinciding lines are also parallel since the distance between them is constant or $0$. Hence, as coinciding lines touch each other implies parallel lines also touch. </p> <p><a href="https://www.youtube.com/watch?v=BKorP55Aqvg" rel="nofollow noreferrer"><strong>See the video here</strong> also</a></p> Answer 39: <p>I would draw two parallel lines on a piece of paper. 
Then I would bring all the edges together into one point and I would show that the parallel lines would touch into that point.</p> Answer 40: <p>A sentence said to me by my mother when I was a kid and that I didn't understand at that time was:</p> <blockquote> <p>Parallel lines are lines that intersect at infinity.</p> </blockquote> <p>This statement and its explanation though art (as suggested by Pedro Tamaroff in the comments) and by showing how many statements and proofs are simplified (if the person knows more math) is a good introduction to the idea that thinking on parallel lines as lines meeting at infinity makes sense and can be useful.</p> <h2>In art</h2> <p>This, as you can also see in Yves Daoust's answer, can be seen whenever you look a straight road going to the horizon. For example, see <a href="https://commons.wikimedia.org/wiki/File:Straight_Road,_B17_-_geograph.org.uk_-_232427.jpg" rel="nofollow noreferrer">Wikimedia Commons</a> for the source, <a href="https://i.stack.imgur.com/cnNkW.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/cnNkW.jpg" alt="Source: Wikimedia Commons"></a><br> The above image shows the reason why this conception was in the beginning a concept originated in art as it was really useful for representations of how we see things. An striking application of this can be seen in the <a href="https://en.wikipedia.org/wiki/Santa_Maria_presso_San_Satiro" rel="nofollow noreferrer">Santa Maria presso San Satiro</a>, where projective geometry was used emulating an absent space in a church. However, there are many other examples.</p> <h2>In geometry</h2> <p>In usual geometry (i.e. affine and Euclidean geometry), the above definition or statement does not make sense as parallel lines don't intersect. 
This can be the definition in the plane, but it is generally (in higher dimensions) a result derived from the definition of parallel lines as lines with the same direction.</p> <p>However, the reason why we can still make sense of the above statement about intersection at infinity is because affine geometry can be put inside projective geometry. When doing this, the points outside the affine space are called "points at infinity", parallel lines intersect at them and become the same as intersecting lines simplifying enormously many statements and proofs by permitting one to not distinguish cases. An example can be <a href="https://en.wikipedia.org/wiki/Pappus%27s_hexagon_theorem" rel="nofollow noreferrer">Pappus's hexagon theorem</a>.</p> <h2>In conclusion</h2> <p>In conclusion, don't try to convince or show that parallel lines touch. Just try to explain the usefulness of thinking of parallel lines as lines that intersect at infinity. In the Renaissance, you have many examples of why this a useful statement from the point of view of representing reality and perspectives well; in mathematics, there are many examples of how this is really useful for simplifying statements and proofs in geometry.</p> Answer 41: <p>In Elliptic geometry the supposition is false.</p> <p>In Euclidean geometry parallel lines "meet" and touch at infinity as their slope is same.</p> <p>In flat Hyperbolic geometry parallel lines can also touch but only at at infinity.</p> <p>In the flat Poincare disk model circle segment geodesic parallels meet tangentially only at infinitely distant points on the boundary of the "horizon" or boundary circle. The lines can be seen to touch and move on the boundary seen in the Wiki link.</p> <p><a href="http://mathworld.wolfram.com/PoincareHyperbolicDisk.html" rel="nofollow noreferrer"> Poincare Disk_MathWorld </a></p> <p>So also in the half-plane model semi-circular geodesics meet at point at $\infty$, on the x-axis. 
An animation can be also found elsewhere semi-circles touch on x-axis while changing semi-circle size during movement.</p> <p><a href="http://math.etsu.edu/multicalc/prealpha/Chap3/Chap3-8/part4.htm" rel="nofollow noreferrer">PoincareHalfPlane</a></p> <p>Both examples are touching parallel lines, touching tangentially /asymptotically at point infinity.</p> <p><a href="http://math.etsu.edu/multicalc/prealpha/Chap3/Chap3-8/part4.htm" rel="nofollow noreferrer">http://math.etsu.edu/multicalc/prealpha/Chap3/Chap3-8/part4.htm</a></p> Answer 42: <p>One way to explain this is to think of the Euclidean plane as a picture of (part of) an ambient 3-dimensional world. This corresponds to how our vision actually works: light from the three-dimensional world is projected through the lens of an eyes onto the surface at the back of our eye.</p> <p>In more detail: Place a plane anywhere in three-dimensional space (as long as it doesn't pass through the origin). Call this the "picture plane". Now any <em>line through the origin</em> in $\mathbb{R}^3$ pierces the picture plane in exactly one point (as long as the line it is not parallel to the picture plane). Any <em>plane through the origin</em> pierces the picture plane in exactly one line (again, as long as the plane is not parallel to the picture plane). 
See below (in which the picture plane is shown in blue).</p> <p><a href="https://i.stack.imgur.com/Dod2D.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Dod2D.png" alt="enter image description here"></a> <a href="https://i.stack.imgur.com/KRnSK.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/KRnSK.png" alt="enter image description here"></a></p> <p>Now we have a dictionary: $$\textrm{point in picture plane} \longleftrightarrow \textrm{line through origin in 3-space}$$ $$\textrm{line in picture plane} \longleftrightarrow \textrm{plane through origin in 3-space}$$</p> <p>This dictionary is not quite 1-to-1, because there are lines (and one plane) through the origin in $\mathbb{R}^3$ that are parallel to the picture plane, and therefore do not correspond to any points or to a line in the picture plane. These are points and this line can be considered "at infinity", but really all this means is that they do not have images in the picture plane.</p> <p>You can experience this visually by holding your finger at arm's length in front of and a few inches above. As you draw your arm closer (while maintaining its height) the location of your fingertip seems to get "higher" or "farther away" in your field of view, as your eyes have to strain more and more to see it. Eventually when the fingertip is directly above the eye, the finger "disappears". It's not really gone, of course, but the "line of sight" from the fingertip to the lens of your eye is parallel to the retina, so that the fingertip cannot be seen. Perceptually, the fingertip has receded "infinitely far away" in the picture plane -- but it is actually just a couple of inches away in 3-space.</p> <p><strong>However</strong>, we all know what to do in that circumstance: turn your head! This corresponds to choosing a new picture plane with a different orientation. 
Now the point that was "at infinity" snaps into view and is revealed as an ordinary point in the new picture plane (and at the same time points that <em>were previously</em> in view have no disappeared).</p> <p>Note that a statement like</p> <blockquote> <p>given any two distinct points in the picture plane, there exists a unique line in the picture plane through the given points</p> </blockquote> <p>which describes a true fact about the geometry of the picture plane corresponds, via this dictionary, to</p> <blockquote> <p>given any two distinct lines through the origin in 3-space, there exists a unique plane through the origin containing both lines</p> </blockquote> <p>which is a true statement about 3-space.</p> <p>Now that we have this dictionary set up, let's think about what parallel lines in the picture plane are. As the image below shows, parallel lines in the picture plane correspond to two planes in $\mathbb{R}^3$. (More precisely, lines $f$ and $g$ are "parallel" in the blue picture plane, and correspond to the orange and yellow planes through the origin.)</p> <p><a href="https://i.stack.imgur.com/J3BxN.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/J3BxN.png" alt="enter image description here"></a></p> <p>Those planes are not parallel; they intersect, but the intersection is a line <em>that does not pierce the picture plane</em>. (In this illustration, the line of intersection happens to be the $y$-axis.) That line in $\mathbb{R}^3$ corresponds to a point "at infinity".</p> <p>Of course if you choose a different picture plane, then the lines will no longer appear parallel, and their intersection will be plainly visible as a point in the new picture plane. The final image below shows this. 
In that image, the orange and yellow planes are the same ones as before, but the blue picture plane has been moved and reoriented; the images of the two planes are now <em>non-parallel</em> lines.</p> <p><a href="https://i.stack.imgur.com/iRBlv.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/iRBlv.png" alt="enter image description here"></a></p> <p>In summary:</p> <ul> <li>If the 2-dimensional plane is understood as an image of an ambient 3-dimensional space, then a "point" is just the image of a <em>line through the origin</em> and a "line" is just the image of a <em>plane through the origin</em></li> <li>Some "points" (and the "line" containing that point) will have no image in the picture plane, which we describe by saying that they are "at infinity"</li> <li>However being "at infinity" is an artifact of which picture plane we use; changing picture planes changes which points are at infinity</li> <li>Furthermore whether two "lines" are parallel is an artifact of the picture plane we use to view them; lines that seem parallel from one perspective are non-parallel from a different perspective</li> <li>The connection between the last two bullets is that "lines" that appear parallel in one picture plane <em>do</em> intersect, but the point of intersection is a point "at infinity" <em>relative to the picture plane</em>; change perspectives and the point is no longer at infinity, and the lines are no longer parallel.</li> <li>All of the above corresponds more-or-less exactly with how the human eye works, which is why perspective art can do such a good job at fooling the eye and creating the illusion of depth on a flat surface.</li> </ul> Answer 43: <p>Lines are parallel if they lie in the same plane and they don't intersect. 
In other geometries, there may be no parallel lines, lines may not have a common point but they may have a common limit point at infinity, or they may just not intersect.</p> <p>You might be thinking about Projective Geometry, where a "point at infinity" is added to every family of parallel lines and the set of all points at infinity is called the "line at infinity".</p> <p>Addendum</p> <hr> <p>Typically, in formal geometry, points, lines, and planes are not defined. But postulates define their "baseline" behavior. The postulate that every geometry seems to agree on is the one that states </p> <p>L1: Given two distinct points in a plane, there is exactly one line in that plane that contains them.</p> <p>The "dual" of that postulate is</p> <p>DL1: Given two distinct lines in a plane, there is exactly one point in that plane that belongs to both lines.</p> <p>Since Euclidean geometry contains parallel lines, DL1 is false there. But Projective geometry accepts DL1 as a postulate. The big question is, "Does there exist a geometry that satisfies the postulates of Projective geometry?" Yes there does.</p> <p>The creation of such a geometry is really quite clever. You start with a Euclidean plane and you add points to it as follows. Pick any line in the plane. To that line and all lines parallel to it, you add one extra point, a point at infinity. This is a set-theoretic move: we are treating a Euclidean line, $l$, as a set of points and we are adding a non-Euclidean point $p$ to that set, $l' = l \cup \{p\}$.</p> <p>Adding this point to those lines means that those lines are no longer parallel. Define the set of all points at infinity to be the line at infinity. The Projective plane is the Euclidean plane with all points at infinity and the line at infinity added to it. This particular Projective plane can be proved to satisfy DL1.</p> <p>In the other direction, pick any line, $l$, in the projective plane, $\mathbb P^2$, and remove it.
What you end up with is the Euclidean plane, $\mathbb E^2 \cong \mathbb P^2 - \{l\}$. Some lines will still intersect. Those that intersected at a point on the line that was removed will now be parallel.</p> Answer 44: <p>I would say that if we stand next to each other and both walk due north, we eventually bump into each other near the north pole.</p> <p>Note, as several have commented, your premise is slightly off. What you can convince your friend of is that any two 'straight lines', i.e. geodesics, cross on a sphere (not parallel lines), whereas on a plane, for any line there is a special family of lines that never touch it.</p> Answer 45: <p>As I said in comments, you can't convince someone of something which is not true.</p> <p>IMO if someone knows basic geometry then he must be aware of the fact that:</p> <h1>Parallel lines do not meet at a point.</h1> <p><a href="https://en.wikipedia.org/wiki/Parallel_(geometry)" rel="nofollow noreferrer">This</a> section of Wikipedia is worth a lot here:</p> <blockquote> <p>In geometry, parallel lines are lines in a plane which do not meet; that is, two lines in a plane that do not intersect or touch each other at any point are said to be parallel. By extension, a line and a plane, or two planes, in three-dimensional Euclidean space that do not share a point are said to be parallel.</p> </blockquote> <p>Though if you are convincing/arguing with someone naive in the field of geometry that parallel lines do not meet, ask him why people call him by his/her name and not by <a href="https://math.stackexchange.com/users/401699/justin-bieber">Justin Bieber</a>/<a href="https://math.stackexchange.com/users/143816/selena-gomez">Selena Gomez</a>? The expected answer would be that <em>Because it is my name</em> and that's it. You got him/her.
Actually, parallel lines cannot meet at a point or intersect because they are defined that way; if two lines intersect, then they do not remain parallel lines.</p> Answer 46: <p>When I was 16 or so I was bored trying to solve my math homework, so I played with a magnifying glass, and I noticed something interesting:</p> <p>If you look at grid paper with a magnifying glass, the lines remain parallel (this is evident when they are more or less on the "top of the glass"), but all the lines meet at the edge of the glass.</p> <p>(I went to my older brother, who was an engineering freshman at the time, and I told him that parallel lines <em>can</em> meet; but he replied that they cannot because it's an axiom. Some years later I learned that there is a thing called non-Euclidean geometry.)</p> <p>&nbsp;</p> <p>Note, by the way, this might require you to think about the glass as infinite.</p> Answer 47: <p>In his comment @JackyChong has identified the preliminary problem. The definition of "parallel" is clear: lines that don't meet; so there are no parallel lines that meet. The real question is the definition of "line".</p> <p>The geodesic is the most natural notion of "line" for geometry on the sphere. Then in this geometry there are no parallel lines.</p> <p>For projective geometry one definition is to add a "point at infinity" on each line, and then make the added points into a "line at infinity". With these extra points and lines there are no parallel lines. Two lines that are parallel in the Euclidean plane share their point at infinity. The railroad track analogy helps here.</p> <p>In hyperbolic geometry there are multiple lines parallel to a given line through a point not on that line.
If you get that far with the "someone who knows no mathematics" you can show him or her the <a href="https://en.wikipedia.org/wiki/Poincar%C3%A9_disk_model" rel="nofollow noreferrer">Poincare model</a>.</p> Answer 48: <p>I would use two meridians on Earth that touch at the poles.</p> Answer 49: <p>This is a matter of point of view :)</p> <p><a href="https://i.stack.imgur.com/lGePi.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/lGePi.png" alt="enter image description here"></a></p> Answer 50: <p>You can actually speak of trains. Railways are pretty suggestive.</p> Answer 51: <p>You cannot convince any mathematician, let alone your friend, as it is false. They can, however, grapple with the idea that for purely theoretical calculations we assume that 2 parallel lines can meet at a point at infinity. Should you wish to travel to infinity to prove this point then please send a postcard when you get there.</p> Answer 52: <p>Someone who knows basic mathematics can understand it easily since, according to the definition of parallel lines:<br> <strong>Parallel lines:</strong> The lines whose distance is constant are called <a href="https://www.google.co.in/#q=parallel+lines+definition" rel="nofollow noreferrer"><strong>parallel lines</strong> [see the topmost definition]</a></p> <p>Now, as we know the definition, coinciding lines are also parallel, since the distance between them is constant, namely $0$. Hence, since coinciding lines touch each other, parallel lines can also touch. </p> <p><a href="https://www.youtube.com/watch?v=BKorP55Aqvg" rel="nofollow noreferrer"><strong>See the video here</strong> also</a></p> Answer 53: <p>I would draw two parallel lines on a piece of paper.
Then I would bring all the edges together into one point, and I would show that the parallel lines would touch at that point.</p> Answer 54: <p>A sentence said to me by my mother when I was a kid, and that I didn't understand at that time, was:</p> <blockquote> <p>Parallel lines are lines that intersect at infinity.</p> </blockquote> <p>This statement and its explanation through art (as suggested by Pedro Tamaroff in the comments), and by showing how many statements and proofs are simplified (if the person knows more math), is a good introduction to the idea that thinking of parallel lines as lines meeting at infinity makes sense and can be useful.</p> <h2>In art</h2> <p>This, as you can also see in Yves Daoust's answer, can be seen whenever you look at a straight road going to the horizon. For example, see <a href="https://commons.wikimedia.org/wiki/File:Straight_Road,_B17_-_geograph.org.uk_-_232427.jpg" rel="nofollow noreferrer">Wikimedia Commons</a> for the source, <a href="https://i.stack.imgur.com/cnNkW.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/cnNkW.jpg" alt="Source: Wikimedia Commons"></a><br> The above image shows why this conception originated in art: it was really useful for representing how we see things. A striking application of this can be seen in the <a href="https://en.wikipedia.org/wiki/Santa_Maria_presso_San_Satiro" rel="nofollow noreferrer">Santa Maria presso San Satiro</a>, where projective geometry was used to emulate an absent space in a church. However, there are many other examples.</p> <h2>In geometry</h2> <p>In usual geometry (i.e. affine and Euclidean geometry), the above definition or statement does not make sense, as parallel lines don't intersect.
This can be taken as the definition in the plane, but it is generally (in higher dimensions) a result derived from the definition of parallel lines as lines with the same direction.</p> <p>However, the reason why we can still make sense of the above statement about intersection at infinity is that affine geometry can be put inside projective geometry. When doing this, the points outside the affine space are called "points at infinity"; parallel lines intersect at them and become the same as intersecting lines, enormously simplifying many statements and proofs by permitting one not to distinguish cases. An example can be <a href="https://en.wikipedia.org/wiki/Pappus%27s_hexagon_theorem" rel="nofollow noreferrer">Pappus's hexagon theorem</a>.</p> <h2>In conclusion</h2> <p>In conclusion, don't try to convince or show that parallel lines touch. Just try to explain the usefulness of thinking of parallel lines as lines that intersect at infinity. In the Renaissance, you have many examples of why this is a useful statement from the point of view of representing reality and perspectives well; in mathematics, there are many examples of how this is really useful for simplifying statements and proofs in geometry.</p> Answer 55: <p>In Elliptic geometry the supposition is false.</p> <p>In Euclidean geometry parallel lines "meet" and touch at infinity, as their slope is the same.</p> <p>In flat Hyperbolic geometry parallel lines can also touch, but only at infinity.</p> <p>In the flat Poincare disk model, circle-segment geodesic parallels meet tangentially only at infinitely distant points on the boundary of the "horizon" or boundary circle. The lines can be seen to touch and move on the boundary in the link below.</p> <p><a href="http://mathworld.wolfram.com/PoincareHyperbolicDisk.html" rel="nofollow noreferrer"> Poincare Disk_MathWorld </a></p> <p>So also in the half-plane model, semi-circular geodesics meet at a point at $\infty$ on the x-axis.
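That semicircle picture can be verified with elementary arithmetic. The sketch below is my own illustration (the sample points are invented): the geodesic through two points of the upper half-plane is the semicircle centered on the x-axis equidistant from both, and two such geodesics can share a boundary endpoint while never meeting inside the half-plane itself.

```python
def geodesic(p, q):
    """Center c (on the x-axis) and radius r of the half-plane geodesic
    through p and q; assumes p and q have different x-coordinates."""
    (px, py), (qx, qy) = p, q
    # Solve |p - (c, 0)|^2 == |q - (c, 0)|^2 for c.
    c = (qx * qx + qy * qy - px * px - py * py) / (2 * (qx - px))
    r = ((px - c) ** 2 + py ** 2) ** 0.5
    return c, r

c1, r1 = geodesic((1.0, 1.0), (0.5, 0.75 ** 0.5))    # semicircle: center 1, radius 1
c2, r2 = geodesic((-1.0, 1.0), (-0.5, 0.75 ** 0.5))  # semicircle: center -1, radius 1

# Boundary endpoints are c +/- r.  Both semicircles end at x = 0, and
# |c1 - c2| = r1 + r2, so they are tangent there: they share only the
# ideal point on the x-axis, never a point of the half-plane itself.
print(c1 - r1, c2 + r2)   # both approximately 0.0
```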
An animation can be also found elsewhere semi-circles touch on x-axis while changing semi-circle size during movement.</p> <p><a href="http://math.etsu.edu/multicalc/prealpha/Chap3/Chap3-8/part4.htm" rel="nofollow noreferrer">PoincareHalfPlane</a></p> <p>Both examples are touching parallel lines, touching tangentially /asymptotically at point infinity.</p> <p><a href="http://math.etsu.edu/multicalc/prealpha/Chap3/Chap3-8/part4.htm" rel="nofollow noreferrer">http://math.etsu.edu/multicalc/prealpha/Chap3/Chap3-8/part4.htm</a></p> Answer 56: <p>One way to explain this is to think of the Euclidean plane as a picture of (part of) an ambient 3-dimensional world. This corresponds to how our vision actually works: light from the three-dimensional world is projected through the lens of an eyes onto the surface at the back of our eye.</p> <p>In more detail: Place a plane anywhere in three-dimensional space (as long as it doesn't pass through the origin). Call this the "picture plane". Now any <em>line through the origin</em> in $\mathbb{R}^3$ pierces the picture plane in exactly one point (as long as the line it is not parallel to the picture plane). Any <em>plane through the origin</em> pierces the picture plane in exactly one line (again, as long as the plane is not parallel to the picture plane). 
See below (in which the picture plane is shown in blue).</p> <p><a href="https://i.stack.imgur.com/Dod2D.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Dod2D.png" alt="enter image description here"></a> <a href="https://i.stack.imgur.com/KRnSK.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/KRnSK.png" alt="enter image description here"></a></p> <p>Now we have a dictionary: $$\textrm{point in picture plane} \longleftrightarrow \textrm{line through origin in 3-space}$$ $$\textrm{line in picture plane} \longleftrightarrow \textrm{plane through origin in 3-space}$$</p> <p>This dictionary is not quite 1-to-1, because there are lines (and one plane) through the origin in $\mathbb{R}^3$ that are parallel to the picture plane, and therefore do not correspond to any points or to a line in the picture plane. These are points and this line can be considered "at infinity", but really all this means is that they do not have images in the picture plane.</p> <p>You can experience this visually by holding your finger at arm's length in front of and a few inches above. As you draw your arm closer (while maintaining its height) the location of your fingertip seems to get "higher" or "farther away" in your field of view, as your eyes have to strain more and more to see it. Eventually when the fingertip is directly above the eye, the finger "disappears". It's not really gone, of course, but the "line of sight" from the fingertip to the lens of your eye is parallel to the retina, so that the fingertip cannot be seen. Perceptually, the fingertip has receded "infinitely far away" in the picture plane -- but it is actually just a couple of inches away in 3-space.</p> <p><strong>However</strong>, we all know what to do in that circumstance: turn your head! This corresponds to choosing a new picture plane with a different orientation. 
Now the point that was "at infinity" snaps into view and is revealed as an ordinary point in the new picture plane (and at the same time points that <em>were previously</em> in view have no disappeared).</p> <p>Note that a statement like</p> <blockquote> <p>given any two distinct points in the picture plane, there exists a unique line in the picture plane through the given points</p> </blockquote> <p>which describes a true fact about the geometry of the picture plane corresponds, via this dictionary, to</p> <blockquote> <p>given any two distinct lines through the origin in 3-space, there exists a unique plane through the origin containing both lines</p> </blockquote> <p>which is a true statement about 3-space.</p> <p>Now that we have this dictionary set up, let's think about what parallel lines in the picture plane are. As the image below shows, parallel lines in the picture plane correspond to two planes in $\mathbb{R}^3$. (More precisely, lines $f$ and $g$ are "parallel" in the blue picture plane, and correspond to the orange and yellow planes through the origin.)</p> <p><a href="https://i.stack.imgur.com/J3BxN.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/J3BxN.png" alt="enter image description here"></a></p> <p>Those planes are not parallel; they intersect, but the intersection is a line <em>that does not pierce the picture plane</em>. (In this illustration, the line of intersection happens to be the $y$-axis.) That line in $\mathbb{R}^3$ corresponds to a point "at infinity".</p> <p>Of course if you choose a different picture plane, then the lines will no longer appear parallel, and their intersection will be plainly visible as a point in the new picture plane. The final image below shows this. 
In that image, the orange and yellow planes are the same ones as before, but the blue picture plane has been moved and reoriented; the images of the two planes are now <em>non-parallel</em> lines.</p> <p><a href="https://i.stack.imgur.com/iRBlv.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/iRBlv.png" alt="enter image description here"></a></p> <p>In summary:</p> <ul> <li>If the 2-dimensional plane is understood as an image of an ambient 3-dimensional space, then a "point" is just the image of a <em>line through the origin</em> and a "line" is just the image of a <em>plane through the origin</em></li> <li>Some "points" (and the "line" containing that point) will have no image in the picture plane, which we describe by saying that they are "at infinity"</li> <li>However being "at infinity" is an artifact of which picture plane we use; changing picture planes changes which points are at infinity</li> <li>Furthermore whether two "lines" are parallel is an artifact of the picture plane we use to view them; lines that seem parallel from one perspective are non-parallel from a different perspective</li> <li>The connection between the last two bullets is that "lines" that appear parallel in one picture plane <em>do</em> intersect, but the point of intersection is a point "at infinity" <em>relative to the picture plane</em>; change perspectives and the point is no longer at infinity, and the lines are no longer parallel.</li> <li>All of the above corresponds more-or-less exactly with how the human eye works, which is why perspective art can do such a good job at fooling the eye and creating the illusion of depth on a flat surface.</li> </ul> Answer 57: <p>Lines are parallel if they lie in the same plane and they don't intersect. 
In other geometries, there may be no parallel lines, lines may not have a common point but they may have a common limit point at infinity, or they may just not intersect.</p> <p>You might be thinking about Projective Geometry where a "point at infinity" is added to every family of parallel lines and the set of all points at infinity is called the "line at infinity".</p> <p>Addendum</p> <hr> <p>Typically, in formal geometry, points, lines, and planes are not defined. But postulates define their "baseline" behavior. The postulate that every geometry seems to agree on is the one that states </p> <p>L1: Given two distinct points in a plane, there is exactly one line in that plane that contains them.</p> <p>The "dual" of that postulate is</p> <p>DL1: Given two distinct lines in a plane, there is exactly one point in that plane that belongs to both lines.</p> <p>Since Euclidean geometry contains parallel lines, DL2 is false. But Projective geometry accepts DL2 as a postulate. The big question is, "Does there exists a geometry that satisfies the postulates of Projective geometry?" Yes there does.</p> <p>The creation of such a geometry is really quite clever. You start with a Euclidean plane and you add points to it as follows. Pick any line in the plane. To that line and all lines parallel to it, you add one extra point, a point at infinity. This is a set thing. We are treating a Euclidean line, $l$, as a set of points and we are adding a non Euclidean point $p$ to that set, $l' = l \cup \{p\}$.</p> <p>Adding this point to those lines means that those lines are no longer parallel. Define the set of all points at infinity to be the line at infinity. The Projective plane is the Euclidian plane with all points at infinity and the line at infinity added to it. This particular Projective plane can be proved to satisfy DL2.</p> <p>In the other direction, pick any line, $l$, in the projective plane, $\mathbb P^2$, and remove it. 
What you end up with is the Euclidean plane, $\mathbb E^2 \cong \mathbb P^2 - \{l\}$. Some lines will still intersect. Those that intersected at a point on the line that was removed will now be parallel.</p> Answer 58: <p>I would say if we stand next to each other and both walk due north, we eventually bump into each other near the north pole</p> <p>Note, as several have commented, your premise is slightly off. What you can convince your friend of is that any two 'straight lines' i.e. geodesics cross on a sphere (not parallel lines), whereas on a plane, for any line there is a special family of lines that never touch it.</p> Answer 59: <p>As I said in comments, you can't convince someone with something which is not true.</p> <p>IMO if someone knows basic geometry then he he must be aware of the fact that :</p> <h1>Parallel lines do not meet at a point.</h1> <p><a href="https://en.wikipedia.org/wiki/Parallel_(geometry)" rel="nofollow noreferrer">This</a> section of Wikipedia worth a lot here:</p> <blockquote> <p>In geometry, parallel lines are lines in a plane which do not meet; that is, two lines in a plane that do not intersect or touch each other at any point are said to be parallel. By extension, a line and a plane, or two planes, in three-dimensional Euclidean space that do not share a point are said to be parallel.</p> </blockquote> <p>Though if you are convincing/arguing with someone naive in field of geometry that Parallel line do not meet, ask him why people call him by his/her name and not by <a href="https://math.stackexchange.com/users/401699/justin-bieber">Justin Bieber</a>/<a href="https://math.stackexchange.com/users/143816/selena-gomez">Selena Gomez</a>? The expected answer would be that <em>Because it is my name</em> and that's it. You got him/her. 
Actually parallel lines cannot meet at a point or intersect because they are defined that way, if two lines will intersect then they will not remain parallel lines.</p> Answer 60: <p>When I was 16 or so I was bored trying to solve my math homework, so I played with a magnifying glass, and I noticed something interesting:</p> <p>If you look at the grid paper with a magnifying glass the lines remain parallel (this is evident when they are more or less on the "top of the glass"), but all the lines meet at the edge of the glass.</p> <p>(I went to my older brother, who was an engineering freshman at the time, and I told him that parallel lines <em>can</em> meet; but he replied that they cannot because it's an axiom. Some years later I learned that there is a thing called non-Euclidean geometry.)</p> <p>&nbsp;</p> <p>Note, by the way, this might require you to think about the glass as infinite.</p> Answer 61: <p>In his comment @JackyChong has identified the preliminary problem. The definition of "parallel" is clear: lines that don't meet, so there are no parallel lines that meet. The real question is the definition of "line".</p> <p>Geodesic is the most natural for geometry on the sphere. Then in this geometry there are no parallel lines.</p> <p>For projective geometry one definition is to add a "point at infinity" on each line, and then make the added points into a "line at infinity". With these extra points and lines there are no parallel lines. Two that are parallel in the Euclidean plane share their point at infinity. The railroad track analogy helps here.</p> <p>In hyperbolic geometry there are multiple lines parallel to a given line through a point not on that line. 
If you get that far with the "someone who knows no mathematics" you can show him or her the <a href="https://en.wikipedia.org/wiki/Poincar%C3%A9_disk_model" rel="nofollow noreferrer">Poincare model</a>.</p> Answer 62: <p>I would use two meridians on Earth that touch at the poles.</p> Answer 63: <p>This is a matter of point of view :)</p> <p><a href="https://i.stack.imgur.com/lGePi.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/lGePi.png" alt="enter image description here"></a></p> Answer 64: <p>You can actually speak of trains. Rail ways are pretty suggestive.</p> Answer 65: <p>You cannot convince any mathematician let alone your friend as it is false. They can however grapple with the idea that for purely theoretical calculations we assume that 2 parallel lines can meet at a point in infinity. Should you wish to travel to infinity to prove this point then please send a postcard when you get there.</p> Answer 66: <p>Someone who knows basic mathematics can understand it easily since according to the definition of parallel lines<br> <strong>Parallel lines:</strong> The lines those distance is constant are called <a href="https://www.google.co.in/#q=parallel+lines+definition" rel="nofollow noreferrer"><strong>parallel lines</strong>[see the topmost definition]</a></p> <p>Now, as we know the definition the coinciding lines are also parallel since the distance between them is constant or $0$. Hence, as coinciding lines touch each other implies parallel lines also touch. </p> <p><a href="https://www.youtube.com/watch?v=BKorP55Aqvg" rel="nofollow noreferrer"><strong>See the video here</strong> also</a></p> Answer 67: <p>I would draw two parallel lines on a piece of paper. 
Then I would bring all the edges together into one point and I would show that the parallel lines would touch into that point.</p> Answer 68: <p>A sentence said to me by my mother when I was a kid and that I didn't understand at that time was:</p> <blockquote> <p>Parallel lines are lines that intersect at infinity.</p> </blockquote> <p>This statement and its explanation though art (as suggested by Pedro Tamaroff in the comments) and by showing how many statements and proofs are simplified (if the person knows more math) is a good introduction to the idea that thinking on parallel lines as lines meeting at infinity makes sense and can be useful.</p> <h2>In art</h2> <p>This, as you can also see in Yves Daoust's answer, can be seen whenever you look a straight road going to the horizon. For example, see <a href="https://commons.wikimedia.org/wiki/File:Straight_Road,_B17_-_geograph.org.uk_-_232427.jpg" rel="nofollow noreferrer">Wikimedia Commons</a> for the source, <a href="https://i.stack.imgur.com/cnNkW.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/cnNkW.jpg" alt="Source: Wikimedia Commons"></a><br> The above image shows the reason why this conception was in the beginning a concept originated in art as it was really useful for representations of how we see things. An striking application of this can be seen in the <a href="https://en.wikipedia.org/wiki/Santa_Maria_presso_San_Satiro" rel="nofollow noreferrer">Santa Maria presso San Satiro</a>, where projective geometry was used emulating an absent space in a church. However, there are many other examples.</p> <h2>In geometry</h2> <p>In usual geometry (i.e. affine and Euclidean geometry), the above definition or statement does not make sense as parallel lines don't intersect. 
This can be the definition in the plane, but it is generally (in higher dimensions) a result derived from the definition of parallel lines as lines with the same direction.</p> <p>However, the reason why we can still make sense of the above statement about intersection at infinity is because affine geometry can be put inside projective geometry. When doing this, the points outside the affine space are called "points at infinity", parallel lines intersect at them and become the same as intersecting lines simplifying enormously many statements and proofs by permitting one to not distinguish cases. An example can be <a href="https://en.wikipedia.org/wiki/Pappus%27s_hexagon_theorem" rel="nofollow noreferrer">Pappus's hexagon theorem</a>.</p> <h2>In conclusion</h2> <p>In conclusion, don't try to convince or show that parallel lines touch. Just try to explain the usefulness of thinking of parallel lines as lines that intersect at infinity. In the Renaissance, you have many examples of why this a useful statement from the point of view of representing reality and perspectives well; in mathematics, there are many examples of how this is really useful for simplifying statements and proofs in geometry.</p> Answer 69: <p>In Elliptic geometry the supposition is false.</p> <p>In Euclidean geometry parallel lines "meet" and touch at infinity as their slope is same.</p> <p>In flat Hyperbolic geometry parallel lines can also touch but only at at infinity.</p> <p>In the flat Poincare disk model circle segment geodesic parallels meet tangentially only at infinitely distant points on the boundary of the "horizon" or boundary circle. The lines can be seen to touch and move on the boundary seen in the Wiki link.</p> <p><a href="http://mathworld.wolfram.com/PoincareHyperbolicDisk.html" rel="nofollow noreferrer"> Poincare Disk_MathWorld </a></p> <p>So also in the half-plane model semi-circular geodesics meet at point at $\infty$, on the x-axis. 
An animation can be also found elsewhere semi-circles touch on x-axis while changing semi-circle size during movement.</p> <p><a href="http://math.etsu.edu/multicalc/prealpha/Chap3/Chap3-8/part4.htm" rel="nofollow noreferrer">PoincareHalfPlane</a></p> <p>Both examples are touching parallel lines, touching tangentially /asymptotically at point infinity.</p> <p><a href="http://math.etsu.edu/multicalc/prealpha/Chap3/Chap3-8/part4.htm" rel="nofollow noreferrer">http://math.etsu.edu/multicalc/prealpha/Chap3/Chap3-8/part4.htm</a></p> Answer 70: <p>One way to explain this is to think of the Euclidean plane as a picture of (part of) an ambient 3-dimensional world. This corresponds to how our vision actually works: light from the three-dimensional world is projected through the lens of an eyes onto the surface at the back of our eye.</p> <p>In more detail: Place a plane anywhere in three-dimensional space (as long as it doesn't pass through the origin). Call this the "picture plane". Now any <em>line through the origin</em> in $\mathbb{R}^3$ pierces the picture plane in exactly one point (as long as the line it is not parallel to the picture plane). Any <em>plane through the origin</em> pierces the picture plane in exactly one line (again, as long as the plane is not parallel to the picture plane). 
See below (in which the picture plane is shown in blue).</p> <p><a href="https://i.stack.imgur.com/Dod2D.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Dod2D.png" alt="enter image description here"></a> <a href="https://i.stack.imgur.com/KRnSK.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/KRnSK.png" alt="enter image description here"></a></p> <p>Now we have a dictionary: $$\textrm{point in picture plane} \longleftrightarrow \textrm{line through origin in 3-space}$$ $$\textrm{line in picture plane} \longleftrightarrow \textrm{plane through origin in 3-space}$$</p> <p>This dictionary is not quite 1-to-1, because there are lines (and one plane) through the origin in $\mathbb{R}^3$ that are parallel to the picture plane, and therefore do not correspond to any points or to a line in the picture plane. These are points and this line can be considered "at infinity", but really all this means is that they do not have images in the picture plane.</p> <p>You can experience this visually by holding your finger at arm's length in front of and a few inches above. As you draw your arm closer (while maintaining its height) the location of your fingertip seems to get "higher" or "farther away" in your field of view, as your eyes have to strain more and more to see it. Eventually when the fingertip is directly above the eye, the finger "disappears". It's not really gone, of course, but the "line of sight" from the fingertip to the lens of your eye is parallel to the retina, so that the fingertip cannot be seen. Perceptually, the fingertip has receded "infinitely far away" in the picture plane -- but it is actually just a couple of inches away in 3-space.</p> <p><strong>However</strong>, we all know what to do in that circumstance: turn your head! This corresponds to choosing a new picture plane with a different orientation. 
Now the point that was "at infinity" snaps into view and is revealed as an ordinary point in the new picture plane (and at the same time points that <em>were previously</em> in view have no disappeared).</p> <p>Note that a statement like</p> <blockquote> <p>given any two distinct points in the picture plane, there exists a unique line in the picture plane through the given points</p> </blockquote> <p>which describes a true fact about the geometry of the picture plane corresponds, via this dictionary, to</p> <blockquote> <p>given any two distinct lines through the origin in 3-space, there exists a unique plane through the origin containing both lines</p> </blockquote> <p>which is a true statement about 3-space.</p> <p>Now that we have this dictionary set up, let's think about what parallel lines in the picture plane are. As the image below shows, parallel lines in the picture plane correspond to two planes in $\mathbb{R}^3$. (More precisely, lines $f$ and $g$ are "parallel" in the blue picture plane, and correspond to the orange and yellow planes through the origin.)</p> <p><a href="https://i.stack.imgur.com/J3BxN.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/J3BxN.png" alt="enter image description here"></a></p> <p>Those planes are not parallel; they intersect, but the intersection is a line <em>that does not pierce the picture plane</em>. (In this illustration, the line of intersection happens to be the $y$-axis.) That line in $\mathbb{R}^3$ corresponds to a point "at infinity".</p> <p>Of course if you choose a different picture plane, then the lines will no longer appear parallel, and their intersection will be plainly visible as a point in the new picture plane. The final image below shows this. 
In that image, the orange and yellow planes are the same ones as before, but the blue picture plane has been moved and reoriented; the images of the two planes are now <em>non-parallel</em> lines.</p> <p><a href="https://i.stack.imgur.com/iRBlv.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/iRBlv.png" alt="enter image description here"></a></p> <p>In summary:</p> <ul> <li>If the 2-dimensional plane is understood as an image of an ambient 3-dimensional space, then a "point" is just the image of a <em>line through the origin</em> and a "line" is just the image of a <em>plane through the origin</em></li> <li>Some "points" (and the "line" containing that point) will have no image in the picture plane, which we describe by saying that they are "at infinity"</li> <li>However being "at infinity" is an artifact of which picture plane we use; changing picture planes changes which points are at infinity</li> <li>Furthermore whether two "lines" are parallel is an artifact of the picture plane we use to view them; lines that seem parallel from one perspective are non-parallel from a different perspective</li> <li>The connection between the last two bullets is that "lines" that appear parallel in one picture plane <em>do</em> intersect, but the point of intersection is a point "at infinity" <em>relative to the picture plane</em>; change perspectives and the point is no longer at infinity, and the lines are no longer parallel.</li> <li>All of the above corresponds more-or-less exactly with how the human eye works, which is why perspective art can do such a good job at fooling the eye and creating the illusion of depth on a flat surface.</li> </ul> Answer 71: <p>Lines are parallel if they lie in the same plane and they don't intersect. 
In other geometries, there may be no parallel lines, lines may not have a common point but they may have a common limit point at infinity, or they may just not intersect.</p> <p>You might be thinking about Projective Geometry, where a "point at infinity" is added to every family of parallel lines and the set of all points at infinity is called the "line at infinity".</p> <p>Addendum</p> <hr> <p>Typically, in formal geometry, points, lines, and planes are not defined. But postulates define their "baseline" behavior. The postulate that every geometry seems to agree on is the one that states</p> <p>L1: Given two distinct points in a plane, there is exactly one line in that plane that contains them.</p> <p>The "dual" of that postulate is</p> <p>DL1: Given two distinct lines in a plane, there is exactly one point in that plane that belongs to both lines.</p> <p>Since Euclidean geometry contains parallel lines, DL1 is false there. But Projective geometry accepts DL1 as a postulate. The big question is, "Does there exist a geometry that satisfies the postulates of Projective geometry?" Yes, there does.</p> <p>The creation of such a geometry is really quite clever. You start with a Euclidean plane and you add points to it as follows. Pick any line in the plane. To that line and all lines parallel to it, you add one extra point, a point at infinity. This is a set-theoretic construction. We are treating a Euclidean line, $l$, as a set of points and we are adding a non-Euclidean point $p$ to that set, $l' = l \cup \{p\}$.</p> <p>Adding this point to those lines means that those lines are no longer parallel. Define the set of all points at infinity to be the line at infinity. The Projective plane is the Euclidean plane with all points at infinity and the line at infinity added to it. This particular Projective plane can be proved to satisfy DL1.</p> <p>In the other direction, pick any line, $l$, in the projective plane, $\mathbb P^2$, and remove it.
What you end up with is the Euclidean plane, $\mathbb E^2 \cong \mathbb P^2 - \{l\}$. Some lines will still intersect. Those that intersected at a point on the line that was removed will now be parallel.</p> Answer 72: <p>I would say if we stand next to each other and both walk due north, we eventually bump into each other near the north pole.</p> <p>Note, as several have commented, your premise is slightly off. What you can convince your friend of is that any two 'straight lines', i.e. geodesics, cross on a sphere (not parallel lines), whereas on a plane, for any line there is a special family of lines that never touch it.</p> Answer 73: <p>As I said in comments, you can't convince someone with something which is not true.</p> <p>IMO if someone knows basic geometry then he must be aware of the fact that:</p> <h1>Parallel lines do not meet at a point.</h1> <p><a href="https://en.wikipedia.org/wiki/Parallel_(geometry)" rel="nofollow noreferrer">This</a> section of Wikipedia is worth a lot here:</p> <blockquote> <p>In geometry, parallel lines are lines in a plane which do not meet; that is, two lines in a plane that do not intersect or touch each other at any point are said to be parallel. By extension, a line and a plane, or two planes, in three-dimensional Euclidean space that do not share a point are said to be parallel.</p> </blockquote> <p>Though if you are convincing/arguing with someone naive in the field of geometry that parallel lines do not meet, ask him why people call him by his/her name and not by <a href="https://math.stackexchange.com/users/401699/justin-bieber">Justin Bieber</a>/<a href="https://math.stackexchange.com/users/143816/selena-gomez">Selena Gomez</a>? The expected answer would be <em>because it is my name</em>, and that's it. You got him/her.
Actually, parallel lines cannot meet at a point or intersect because they are defined that way; if two lines intersect, they do not remain parallel lines.</p> Answer 74: <p>When I was 16 or so I was bored trying to solve my math homework, so I played with a magnifying glass, and I noticed something interesting:</p> <p>If you look at grid paper with a magnifying glass, the lines remain parallel (this is evident when they are more or less on the "top of the glass"), but all the lines meet at the edge of the glass.</p> <p>(I went to my older brother, who was an engineering freshman at the time, and I told him that parallel lines <em>can</em> meet; but he replied that they cannot because it's an axiom. Some years later I learned that there is a thing called non-Euclidean geometry.)</p> <p>&nbsp;</p> <p>Note, by the way, this might require you to think about the glass as infinite.</p> Answer 75: <p>In his comment @JackyChong has identified the preliminary problem. The definition of "parallel" is clear: lines that don't meet, so there are no parallel lines that meet. The real question is the definition of "line".</p> <p>A geodesic is the most natural notion of "line" for geometry on the sphere. In this geometry there are no parallel lines.</p> <p>For projective geometry one definition is to add a "point at infinity" on each line, and then make the added points into a "line at infinity". With these extra points and lines there are no parallel lines. Two that are parallel in the Euclidean plane share their point at infinity. The railroad track analogy helps here.</p> <p>In hyperbolic geometry there are multiple lines parallel to a given line through a point not on that line.
If you get that far with the "someone who knows no mathematics" you can show him or her the <a href="https://en.wikipedia.org/wiki/Poincar%C3%A9_disk_model" rel="nofollow noreferrer">Poincare model</a>.</p> Answer 76: <p>I would use two meridians on Earth that touch at the poles.</p> Answer 77: <p>This is a matter of point of view :)</p> <p><a href="https://i.stack.imgur.com/lGePi.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/lGePi.png" alt="enter image description here"></a></p> Answer 78: <p>You can actually speak of trains. Railways are pretty suggestive.</p> Answer 79: <p>You cannot convince any mathematician, let alone your friend, as it is false. They can, however, grapple with the idea that for purely theoretical calculations we assume that two parallel lines can meet at a point at infinity. Should you wish to travel to infinity to prove this point then please send a postcard when you get there.</p> Answer 80: <p>Someone who knows basic mathematics can understand it easily from the definition of parallel lines<br> <strong>Parallel lines:</strong> Lines whose distance apart is constant are called <a href="https://www.google.co.in/#q=parallel+lines+definition" rel="nofollow noreferrer"><strong>parallel lines</strong> [see the topmost definition]</a></p> <p>Now, by this definition, coinciding lines are also parallel, since the distance between them is constant, namely $0$. Hence, since coinciding lines touch each other, parallel lines can also touch.</p> <p><a href="https://www.youtube.com/watch?v=BKorP55Aqvg" rel="nofollow noreferrer"><strong>See the video here</strong> also</a></p> Answer 81: <p>I would draw two parallel lines on a piece of paper.
Then I would bring all the edges together into one point and I would show that the parallel lines would touch at that point.</p> Answer 82: <p>A sentence said to me by my mother when I was a kid and that I didn't understand at that time was:</p> <blockquote> <p>Parallel lines are lines that intersect at infinity.</p> </blockquote> <p>This statement and its explanation through art (as suggested by Pedro Tamaroff in the comments) and by showing how many statements and proofs are simplified (if the person knows more math) is a good introduction to the idea that thinking of parallel lines as lines meeting at infinity makes sense and can be useful.</p> <h2>In art</h2> <p>This, as you can also see in Yves Daoust's answer, can be seen whenever you look at a straight road going to the horizon. For example, see <a href="https://commons.wikimedia.org/wiki/File:Straight_Road,_B17_-_geograph.org.uk_-_232427.jpg" rel="nofollow noreferrer">Wikimedia Commons</a> for the source, <a href="https://i.stack.imgur.com/cnNkW.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/cnNkW.jpg" alt="Source: Wikimedia Commons"></a><br> The above image shows why this conception originated in art: it was really useful for representing how we see things. A striking application of this can be seen in the <a href="https://en.wikipedia.org/wiki/Santa_Maria_presso_San_Satiro" rel="nofollow noreferrer">Santa Maria presso San Satiro</a>, where projective geometry was used to emulate an absent space in a church. However, there are many other examples.</p> <h2>In geometry</h2> <p>In usual geometry (i.e. affine and Euclidean geometry), the above definition or statement does not make sense as parallel lines don't intersect.
This can be the definition in the plane, but it is generally (in higher dimensions) a result derived from the definition of parallel lines as lines with the same direction.</p> <p>However, we can still make sense of the above statement about intersection at infinity because affine geometry can be embedded in projective geometry. When doing this, the points outside the affine space are called "points at infinity"; parallel lines intersect at them and become the same as intersecting lines, which enormously simplifies many statements and proofs by permitting one not to distinguish cases. An example is <a href="https://en.wikipedia.org/wiki/Pappus%27s_hexagon_theorem" rel="nofollow noreferrer">Pappus's hexagon theorem</a>.</p> <h2>In conclusion</h2> <p>In conclusion, don't try to convince or show that parallel lines touch. Just try to explain the usefulness of thinking of parallel lines as lines that intersect at infinity. In the Renaissance, you have many examples of why this is a useful statement from the point of view of representing reality and perspectives well; in mathematics, there are many examples of how this is really useful for simplifying statements and proofs in geometry.</p> Answer 83: <p>In Elliptic geometry the supposition is false.</p> <p>In Euclidean geometry parallel lines "meet" and touch at infinity as their slopes are the same.</p> <p>In flat Hyperbolic geometry parallel lines can also touch, but only at infinity.</p> <p>In the flat Poincare disk model, circle-segment geodesic parallels meet tangentially only at infinitely distant points on the boundary of the "horizon" or boundary circle. The lines can be seen to touch and move on the boundary seen in the Wiki link.</p> <p><a href="http://mathworld.wolfram.com/PoincareHyperbolicDisk.html" rel="nofollow noreferrer"> Poincare Disk_MathWorld </a></p> <p>So also in the half-plane model, semi-circular geodesics meet at a point at $\infty$, on the x-axis.
An animation can be also found elsewhere semi-circles touch on x-axis while changing semi-circle size during movement.</p> <p><a href="http://math.etsu.edu/multicalc/prealpha/Chap3/Chap3-8/part4.htm" rel="nofollow noreferrer">PoincareHalfPlane</a></p> <p>Both examples are touching parallel lines, touching tangentially /asymptotically at point infinity.</p> <p><a href="http://math.etsu.edu/multicalc/prealpha/Chap3/Chap3-8/part4.htm" rel="nofollow noreferrer">http://math.etsu.edu/multicalc/prealpha/Chap3/Chap3-8/part4.htm</a></p> Answer 84: <p>One way to explain this is to think of the Euclidean plane as a picture of (part of) an ambient 3-dimensional world. This corresponds to how our vision actually works: light from the three-dimensional world is projected through the lens of an eyes onto the surface at the back of our eye.</p> <p>In more detail: Place a plane anywhere in three-dimensional space (as long as it doesn't pass through the origin). Call this the "picture plane". Now any <em>line through the origin</em> in $\mathbb{R}^3$ pierces the picture plane in exactly one point (as long as the line it is not parallel to the picture plane). Any <em>plane through the origin</em> pierces the picture plane in exactly one line (again, as long as the plane is not parallel to the picture plane). 
See below (in which the picture plane is shown in blue).</p> <p><a href="https://i.stack.imgur.com/Dod2D.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Dod2D.png" alt="enter image description here"></a> <a href="https://i.stack.imgur.com/KRnSK.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/KRnSK.png" alt="enter image description here"></a></p> <p>Now we have a dictionary: $$\textrm{point in picture plane} \longleftrightarrow \textrm{line through origin in 3-space}$$ $$\textrm{line in picture plane} \longleftrightarrow \textrm{plane through origin in 3-space}$$</p> <p>This dictionary is not quite 1-to-1, because there are lines (and one plane) through the origin in $\mathbb{R}^3$ that are parallel to the picture plane, and therefore do not correspond to any points or to a line in the picture plane. These are points and this line can be considered "at infinity", but really all this means is that they do not have images in the picture plane.</p> <p>You can experience this visually by holding your finger at arm's length in front of and a few inches above. As you draw your arm closer (while maintaining its height) the location of your fingertip seems to get "higher" or "farther away" in your field of view, as your eyes have to strain more and more to see it. Eventually when the fingertip is directly above the eye, the finger "disappears". It's not really gone, of course, but the "line of sight" from the fingertip to the lens of your eye is parallel to the retina, so that the fingertip cannot be seen. Perceptually, the fingertip has receded "infinitely far away" in the picture plane -- but it is actually just a couple of inches away in 3-space.</p> <p><strong>However</strong>, we all know what to do in that circumstance: turn your head! This corresponds to choosing a new picture plane with a different orientation. 
Now the point that was "at infinity" snaps into view and is revealed as an ordinary point in the new picture plane (and at the same time points that <em>were previously</em> in view have no disappeared).</p> <p>Note that a statement like</p> <blockquote> <p>given any two distinct points in the picture plane, there exists a unique line in the picture plane through the given points</p> </blockquote> <p>which describes a true fact about the geometry of the picture plane corresponds, via this dictionary, to</p> <blockquote> <p>given any two distinct lines through the origin in 3-space, there exists a unique plane through the origin containing both lines</p> </blockquote> <p>which is a true statement about 3-space.</p> <p>Now that we have this dictionary set up, let's think about what parallel lines in the picture plane are. As the image below shows, parallel lines in the picture plane correspond to two planes in $\mathbb{R}^3$. (More precisely, lines $f$ and $g$ are "parallel" in the blue picture plane, and correspond to the orange and yellow planes through the origin.)</p> <p><a href="https://i.stack.imgur.com/J3BxN.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/J3BxN.png" alt="enter image description here"></a></p> <p>Those planes are not parallel; they intersect, but the intersection is a line <em>that does not pierce the picture plane</em>. (In this illustration, the line of intersection happens to be the $y$-axis.) That line in $\mathbb{R}^3$ corresponds to a point "at infinity".</p> <p>Of course if you choose a different picture plane, then the lines will no longer appear parallel, and their intersection will be plainly visible as a point in the new picture plane. The final image below shows this. 
In that image, the orange and yellow planes are the same ones as before, but the blue picture plane has been moved and reoriented; the images of the two planes are now <em>non-parallel</em> lines.</p> <p><a href="https://i.stack.imgur.com/iRBlv.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/iRBlv.png" alt="enter image description here"></a></p> <p>In summary:</p> <ul> <li>If the 2-dimensional plane is understood as an image of an ambient 3-dimensional space, then a "point" is just the image of a <em>line through the origin</em> and a "line" is just the image of a <em>plane through the origin</em></li> <li>Some "points" (and the "line" containing that point) will have no image in the picture plane, which we describe by saying that they are "at infinity"</li> <li>However being "at infinity" is an artifact of which picture plane we use; changing picture planes changes which points are at infinity</li> <li>Furthermore whether two "lines" are parallel is an artifact of the picture plane we use to view them; lines that seem parallel from one perspective are non-parallel from a different perspective</li> <li>The connection between the last two bullets is that "lines" that appear parallel in one picture plane <em>do</em> intersect, but the point of intersection is a point "at infinity" <em>relative to the picture plane</em>; change perspectives and the point is no longer at infinity, and the lines are no longer parallel.</li> <li>All of the above corresponds more-or-less exactly with how the human eye works, which is why perspective art can do such a good job at fooling the eye and creating the illusion of depth on a flat surface.</li> </ul> Answer 85: <p>Lines are parallel if they lie in the same plane and they don't intersect. 
In other geometries, there may be no parallel lines, lines may not have a common point but they may have a common limit point at infinity, or they may just not intersect.</p> <p>You might be thinking about Projective Geometry where a "point at infinity" is added to every family of parallel lines and the set of all points at infinity is called the "line at infinity".</p> <p>Addendum</p> <hr> <p>Typically, in formal geometry, points, lines, and planes are not defined. But postulates define their "baseline" behavior. The postulate that every geometry seems to agree on is the one that states </p> <p>L1: Given two distinct points in a plane, there is exactly one line in that plane that contains them.</p> <p>The "dual" of that postulate is</p> <p>DL1: Given two distinct lines in a plane, there is exactly one point in that plane that belongs to both lines.</p> <p>Since Euclidean geometry contains parallel lines, DL2 is false. But Projective geometry accepts DL2 as a postulate. The big question is, "Does there exists a geometry that satisfies the postulates of Projective geometry?" Yes there does.</p> <p>The creation of such a geometry is really quite clever. You start with a Euclidean plane and you add points to it as follows. Pick any line in the plane. To that line and all lines parallel to it, you add one extra point, a point at infinity. This is a set thing. We are treating a Euclidean line, $l$, as a set of points and we are adding a non Euclidean point $p$ to that set, $l' = l \cup \{p\}$.</p> <p>Adding this point to those lines means that those lines are no longer parallel. Define the set of all points at infinity to be the line at infinity. The Projective plane is the Euclidian plane with all points at infinity and the line at infinity added to it. This particular Projective plane can be proved to satisfy DL2.</p> <p>In the other direction, pick any line, $l$, in the projective plane, $\mathbb P^2$, and remove it. 
What you end up with is the Euclidean plane, $\mathbb E^2 \cong \mathbb P^2 - \{l\}$. Some lines will still intersect. Those that intersected at a point on the line that was removed will now be parallel.</p> Answer 86: <p>I would say if we stand next to each other and both walk due north, we eventually bump into each other near the north pole</p> <p>Note, as several have commented, your premise is slightly off. What you can convince your friend of is that any two 'straight lines' i.e. geodesics cross on a sphere (not parallel lines), whereas on a plane, for any line there is a special family of lines that never touch it.</p> Answer 87: <p>As I said in comments, you can't convince someone with something which is not true.</p> <p>IMO if someone knows basic geometry then he he must be aware of the fact that :</p> <h1>Parallel lines do not meet at a point.</h1> <p><a href="https://en.wikipedia.org/wiki/Parallel_(geometry)" rel="nofollow noreferrer">This</a> section of Wikipedia worth a lot here:</p> <blockquote> <p>In geometry, parallel lines are lines in a plane which do not meet; that is, two lines in a plane that do not intersect or touch each other at any point are said to be parallel. By extension, a line and a plane, or two planes, in three-dimensional Euclidean space that do not share a point are said to be parallel.</p> </blockquote> <p>Though if you are convincing/arguing with someone naive in field of geometry that Parallel line do not meet, ask him why people call him by his/her name and not by <a href="https://math.stackexchange.com/users/401699/justin-bieber">Justin Bieber</a>/<a href="https://math.stackexchange.com/users/143816/selena-gomez">Selena Gomez</a>? The expected answer would be that <em>Because it is my name</em> and that's it. You got him/her. 
Actually parallel lines cannot meet at a point or intersect because they are defined that way, if two lines will intersect then they will not remain parallel lines.</p> Answer 88: <p>When I was 16 or so I was bored trying to solve my math homework, so I played with a magnifying glass, and I noticed something interesting:</p> <p>If you look at the grid paper with a magnifying glass the lines remain parallel (this is evident when they are more or less on the "top of the glass"), but all the lines meet at the edge of the glass.</p> <p>(I went to my older brother, who was an engineering freshman at the time, and I told him that parallel lines <em>can</em> meet; but he replied that they cannot because it's an axiom. Some years later I learned that there is a thing called non-Euclidean geometry.)</p> <p>&nbsp;</p> <p>Note, by the way, this might require you to think about the glass as infinite.</p> Answer 89: <p>In his comment @JackyChong has identified the preliminary problem. The definition of "parallel" is clear: lines that don't meet, so there are no parallel lines that meet. The real question is the definition of "line".</p> <p>Geodesic is the most natural for geometry on the sphere. Then in this geometry there are no parallel lines.</p> <p>For projective geometry one definition is to add a "point at infinity" on each line, and then make the added points into a "line at infinity". With these extra points and lines there are no parallel lines. Two that are parallel in the Euclidean plane share their point at infinity. The railroad track analogy helps here.</p> <p>In hyperbolic geometry there are multiple lines parallel to a given line through a point not on that line. 
If you get that far with the "someone who knows no mathematics" you can show him or her the <a href="https://en.wikipedia.org/wiki/Poincar%C3%A9_disk_model" rel="nofollow noreferrer">Poincare model</a>.</p> Answer 90: <p>I would use two meridians on Earth that touch at the poles.</p> Answer 91: <p>This is a matter of point of view :)</p> <p><a href="https://i.stack.imgur.com/lGePi.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/lGePi.png" alt="enter image description here"></a></p> Answer 92: <p>You can actually speak of trains. Rail ways are pretty suggestive.</p> Answer 93: <p>You cannot convince any mathematician let alone your friend as it is false. They can however grapple with the idea that for purely theoretical calculations we assume that 2 parallel lines can meet at a point in infinity. Should you wish to travel to infinity to prove this point then please send a postcard when you get there.</p> Answer 94: <p>Someone who knows basic mathematics can understand it easily since according to the definition of parallel lines<br> <strong>Parallel lines:</strong> The lines those distance is constant are called <a href="https://www.google.co.in/#q=parallel+lines+definition" rel="nofollow noreferrer"><strong>parallel lines</strong>[see the topmost definition]</a></p> <p>Now, as we know the definition the coinciding lines are also parallel since the distance between them is constant or $0$. Hence, as coinciding lines touch each other implies parallel lines also touch. </p> <p><a href="https://www.youtube.com/watch?v=BKorP55Aqvg" rel="nofollow noreferrer"><strong>See the video here</strong> also</a></p> Answer 95: <p>I would draw two parallel lines on a piece of paper. 
Then I would bring all the edges together into one point and I would show that the parallel lines would touch into that point.</p> Answer 96: <p>A sentence said to me by my mother when I was a kid and that I didn't understand at that time was:</p> <blockquote> <p>Parallel lines are lines that intersect at infinity.</p> </blockquote> <p>This statement and its explanation though art (as suggested by Pedro Tamaroff in the comments) and by showing how many statements and proofs are simplified (if the person knows more math) is a good introduction to the idea that thinking on parallel lines as lines meeting at infinity makes sense and can be useful.</p> <h2>In art</h2> <p>This, as you can also see in Yves Daoust's answer, can be seen whenever you look a straight road going to the horizon. For example, see <a href="https://commons.wikimedia.org/wiki/File:Straight_Road,_B17_-_geograph.org.uk_-_232427.jpg" rel="nofollow noreferrer">Wikimedia Commons</a> for the source, <a href="https://i.stack.imgur.com/cnNkW.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/cnNkW.jpg" alt="Source: Wikimedia Commons"></a><br> The above image shows the reason why this conception was in the beginning a concept originated in art as it was really useful for representations of how we see things. An striking application of this can be seen in the <a href="https://en.wikipedia.org/wiki/Santa_Maria_presso_San_Satiro" rel="nofollow noreferrer">Santa Maria presso San Satiro</a>, where projective geometry was used emulating an absent space in a church. However, there are many other examples.</p> <h2>In geometry</h2> <p>In usual geometry (i.e. affine and Euclidean geometry), the above definition or statement does not make sense as parallel lines don't intersect. 
This can be the definition in the plane, but it is generally (in higher dimensions) a result derived from the definition of parallel lines as lines with the same direction.</p> <p>However, the reason why we can still make sense of the above statement about intersection at infinity is because affine geometry can be put inside projective geometry. When doing this, the points outside the affine space are called "points at infinity", parallel lines intersect at them and become the same as intersecting lines simplifying enormously many statements and proofs by permitting one to not distinguish cases. An example can be <a href="https://en.wikipedia.org/wiki/Pappus%27s_hexagon_theorem" rel="nofollow noreferrer">Pappus's hexagon theorem</a>.</p> <h2>In conclusion</h2> <p>In conclusion, don't try to convince or show that parallel lines touch. Just try to explain the usefulness of thinking of parallel lines as lines that intersect at infinity. In the Renaissance, you have many examples of why this a useful statement from the point of view of representing reality and perspectives well; in mathematics, there are many examples of how this is really useful for simplifying statements and proofs in geometry.</p> Answer 97: <p>In Elliptic geometry the supposition is false.</p> <p>In Euclidean geometry parallel lines "meet" and touch at infinity as their slope is same.</p> <p>In flat Hyperbolic geometry parallel lines can also touch but only at at infinity.</p> <p>In the flat Poincare disk model circle segment geodesic parallels meet tangentially only at infinitely distant points on the boundary of the "horizon" or boundary circle. The lines can be seen to touch and move on the boundary seen in the Wiki link.</p> <p><a href="http://mathworld.wolfram.com/PoincareHyperbolicDisk.html" rel="nofollow noreferrer"> Poincare Disk_MathWorld </a></p> <p>So also in the half-plane model semi-circular geodesics meet at point at $\infty$, on the x-axis. 
An animation can be also found elsewhere semi-circles touch on x-axis while changing semi-circle size during movement.</p> <p><a href="http://math.etsu.edu/multicalc/prealpha/Chap3/Chap3-8/part4.htm" rel="nofollow noreferrer">PoincareHalfPlane</a></p> <p>Both examples are touching parallel lines, touching tangentially /asymptotically at point infinity.</p> <p><a href="http://math.etsu.edu/multicalc/prealpha/Chap3/Chap3-8/part4.htm" rel="nofollow noreferrer">http://math.etsu.edu/multicalc/prealpha/Chap3/Chap3-8/part4.htm</a></p> Answer 98: <p>One way to explain this is to think of the Euclidean plane as a picture of (part of) an ambient 3-dimensional world. This corresponds to how our vision actually works: light from the three-dimensional world is projected through the lens of an eyes onto the surface at the back of our eye.</p> <p>In more detail: Place a plane anywhere in three-dimensional space (as long as it doesn't pass through the origin). Call this the "picture plane". Now any <em>line through the origin</em> in $\mathbb{R}^3$ pierces the picture plane in exactly one point (as long as the line it is not parallel to the picture plane). Any <em>plane through the origin</em> pierces the picture plane in exactly one line (again, as long as the plane is not parallel to the picture plane). 
See below (in which the picture plane is shown in blue).</p> <p><a href="https://i.stack.imgur.com/Dod2D.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Dod2D.png" alt="enter image description here"></a> <a href="https://i.stack.imgur.com/KRnSK.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/KRnSK.png" alt="enter image description here"></a></p> <p>Now we have a dictionary: $$\textrm{point in picture plane} \longleftrightarrow \textrm{line through origin in 3-space}$$ $$\textrm{line in picture plane} \longleftrightarrow \textrm{plane through origin in 3-space}$$</p> <p>This dictionary is not quite 1-to-1, because there are lines (and one plane) through the origin in $\mathbb{R}^3$ that are parallel to the picture plane, and therefore do not correspond to any points or to a line in the picture plane. These are points and this line can be considered "at infinity", but really all this means is that they do not have images in the picture plane.</p> <p>You can experience this visually by holding your finger at arm's length in front of and a few inches above. As you draw your arm closer (while maintaining its height) the location of your fingertip seems to get "higher" or "farther away" in your field of view, as your eyes have to strain more and more to see it. Eventually when the fingertip is directly above the eye, the finger "disappears". It's not really gone, of course, but the "line of sight" from the fingertip to the lens of your eye is parallel to the retina, so that the fingertip cannot be seen. Perceptually, the fingertip has receded "infinitely far away" in the picture plane -- but it is actually just a couple of inches away in 3-space.</p> <p><strong>However</strong>, we all know what to do in that circumstance: turn your head! This corresponds to choosing a new picture plane with a different orientation. 
Now the point that was "at infinity" snaps into view and is revealed as an ordinary point in the new picture plane (and at the same time points that <em>were previously</em> in view have now disappeared).</p> <p>Note that a statement like</p> <blockquote> <p>given any two distinct points in the picture plane, there exists a unique line in the picture plane through the given points</p> </blockquote> <p>which describes a true fact about the geometry of the picture plane corresponds, via this dictionary, to</p> <blockquote> <p>given any two distinct lines through the origin in 3-space, there exists a unique plane through the origin containing both lines</p> </blockquote> <p>which is a true statement about 3-space.</p> <p>Now that we have this dictionary set up, let's think about what parallel lines in the picture plane are. As the image below shows, parallel lines in the picture plane correspond to two planes in $\mathbb{R}^3$. (More precisely, lines $f$ and $g$ are "parallel" in the blue picture plane, and correspond to the orange and yellow planes through the origin.)</p> <p><a href="https://i.stack.imgur.com/J3BxN.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/J3BxN.png" alt="enter image description here"></a></p> <p>Those planes are not parallel; they intersect, but the intersection is a line <em>that does not pierce the picture plane</em>. (In this illustration, the line of intersection happens to be the $y$-axis.) That line in $\mathbb{R}^3$ corresponds to a point "at infinity".</p> <p>Of course, if you choose a different picture plane, then the lines will no longer appear parallel, and their intersection will be plainly visible as a point in the new picture plane. The final image below shows this. 
In that image, the orange and yellow planes are the same ones as before, but the blue picture plane has been moved and reoriented; the images of the two planes are now <em>non-parallel</em> lines.</p> <p><a href="https://i.stack.imgur.com/iRBlv.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/iRBlv.png" alt="enter image description here"></a></p> <p>In summary:</p> <ul> <li>If the 2-dimensional plane is understood as an image of an ambient 3-dimensional space, then a "point" is just the image of a <em>line through the origin</em> and a "line" is just the image of a <em>plane through the origin</em></li> <li>Some "points" (and the "line" containing that point) will have no image in the picture plane, which we describe by saying that they are "at infinity"</li> <li>However being "at infinity" is an artifact of which picture plane we use; changing picture planes changes which points are at infinity</li> <li>Furthermore whether two "lines" are parallel is an artifact of the picture plane we use to view them; lines that seem parallel from one perspective are non-parallel from a different perspective</li> <li>The connection between the last two bullets is that "lines" that appear parallel in one picture plane <em>do</em> intersect, but the point of intersection is a point "at infinity" <em>relative to the picture plane</em>; change perspectives and the point is no longer at infinity, and the lines are no longer parallel.</li> <li>All of the above corresponds more-or-less exactly with how the human eye works, which is why perspective art can do such a good job at fooling the eye and creating the illusion of depth on a flat surface.</li> </ul> Answer 99: <p>Lines are parallel if they lie in the same plane and they don't intersect. 
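The picture-plane dictionary above can be checked with a few lines of arithmetic using homogeneous coordinates: a line $ax+by+c=0$ in the picture plane $z=1$ is the trace of the plane $ax+by+cz=0$ through the origin, and two such planes meet in the line spanned by the cross product of their normal vectors. A minimal sketch (the picture plane $z=1$ and the sample lines are illustrative choices, not taken from the answers above):

```python
def cross(u, v):
    # Cross product of two 3-vectors, written out by hand.
    return (u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0])

# The line a*x + b*y + c = 0 in the picture plane z = 1 is the trace of
# the plane a*x + b*y + c*z = 0 through the origin, stored here as the
# normal vector (a, b, c).
y_equals_0 = (0, 1, 0)    # the line y = 0
y_equals_1 = (0, 1, -1)   # the line y = 1, parallel to it in the picture plane

# The two planes meet in the line spanned by the cross product of the normals.
common = cross(y_equals_0, y_equals_1)
print(common)  # (-1, 0, 0): last coordinate 0, so this line is parallel to
               # the picture plane z = 1 -- the "point at infinity" shared
               # by all horizontal lines.

# Two non-parallel picture-plane lines give an ordinary intersection point:
corner = cross((1, 0, 0), y_equals_0)   # the lines x = 0 and y = 0
print(corner)  # (0, 0, 1): pierces z = 1 at the ordinary point (0, 0).
```

The same cross-product formula handles both cases uniformly, which is exactly the bookkeeping advantage of treating points at infinity as ordinary points.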
In other geometries, there may be no parallel lines, lines may not have a common point but they may have a common limit point at infinity, or they may just not intersect.</p> <p>You might be thinking about Projective Geometry where a "point at infinity" is added to every family of parallel lines and the set of all points at infinity is called the "line at infinity".</p> <p>Addendum</p> <hr> <p>Typically, in formal geometry, points, lines, and planes are not defined. But postulates define their "baseline" behavior. The postulate that every geometry seems to agree on is the one that states </p> <p>L1: Given two distinct points in a plane, there is exactly one line in that plane that contains them.</p> <p>The "dual" of that postulate is</p> <p>DL1: Given two distinct lines in a plane, there is exactly one point in that plane that belongs to both lines.</p> <p>Since Euclidean geometry contains parallel lines, DL1 is false there. But Projective geometry accepts DL1 as a postulate. The big question is, "Does there exist a geometry that satisfies the postulates of Projective geometry?" Yes, there does.</p> <p>The creation of such a geometry is really quite clever. You start with a Euclidean plane and you add points to it as follows. Pick any line in the plane. To that line and all lines parallel to it, you add one extra point, a point at infinity. This is a set-theoretic construction. We are treating a Euclidean line, $l$, as a set of points and we are adding a non-Euclidean point $p$ to that set, $l' = l \cup \{p\}$.</p> <p>Adding this point to those lines means that those lines are no longer parallel. Define the set of all points at infinity to be the line at infinity. The Projective plane is the Euclidean plane with all points at infinity and the line at infinity added to it. This particular Projective plane can be proved to satisfy DL1.</p> <p>In the other direction, pick any line, $l$, in the projective plane, $\mathbb P^2$, and remove it. 
What you end up with is the Euclidean plane, $\mathbb E^2 \cong \mathbb P^2 - \{l\}$. Some lines will still intersect. Those that intersected at a point on the line that was removed will now be parallel.</p> Answer 100: <p>I would say if we stand next to each other and both walk due north, we eventually bump into each other near the north pole.</p> <p>Note, as several have commented, your premise is slightly off. What you can convince your friend of is that any two 'straight lines', i.e. geodesics, cross on a sphere (not parallel lines), whereas on a plane, for any line there is a special family of lines that never touch it.</p> Answer 101: <p>As I said in comments, you can't convince someone of something which is not true.</p> <p>IMO if someone knows basic geometry then he must be aware of the fact that:</p> <h1>Parallel lines do not meet at a point.</h1> <p><a href="https://en.wikipedia.org/wiki/Parallel_(geometry)" rel="nofollow noreferrer">This</a> section of Wikipedia is worth a lot here:</p> <blockquote> <p>In geometry, parallel lines are lines in a plane which do not meet; that is, two lines in a plane that do not intersect or touch each other at any point are said to be parallel. By extension, a line and a plane, or two planes, in three-dimensional Euclidean space that do not share a point are said to be parallel.</p> </blockquote> <p>Though if you are convincing/arguing with someone naive in the field of geometry that parallel lines do not meet, ask him why people call him by his/her name and not by <a href="https://math.stackexchange.com/users/401699/justin-bieber">Justin Bieber</a>/<a href="https://math.stackexchange.com/users/143816/selena-gomez">Selena Gomez</a>? The expected answer would be that <em>Because it is my name</em> and that's it. You got him/her. 
Actually, parallel lines cannot meet at a point or intersect because they are defined that way: if two lines intersect, then they do not remain parallel lines.</p> Answer 102: <p>When I was 16 or so I was bored trying to solve my math homework, so I played with a magnifying glass, and I noticed something interesting:</p> <p>If you look at the grid paper with a magnifying glass the lines remain parallel (this is evident when they are more or less on the "top of the glass"), but all the lines meet at the edge of the glass.</p> <p>(I went to my older brother, who was an engineering freshman at the time, and I told him that parallel lines <em>can</em> meet; but he replied that they cannot because it's an axiom. Some years later I learned that there is a thing called non-Euclidean geometry.)</p> <p>&nbsp;</p> <p>Note, by the way, this might require you to think about the glass as infinite.</p> Answer 103: <p>In his comment @JackyChong has identified the preliminary problem. The definition of "parallel" is clear: lines that don't meet, so there are no parallel lines that meet. The real question is the definition of "line".</p> <p>Geodesic is the most natural notion for geometry on the sphere. Then in this geometry there are no parallel lines.</p> <p>For projective geometry one definition is to add a "point at infinity" on each line, and then make the added points into a "line at infinity". With these extra points and lines there are no parallel lines. Two that are parallel in the Euclidean plane share their point at infinity. The railroad track analogy helps here.</p> <p>In hyperbolic geometry there are multiple lines parallel to a given line through a point not on that line. 
If you get that far with the "someone who knows no mathematics" you can show him or her the <a href="https://en.wikipedia.org/wiki/Poincar%C3%A9_disk_model" rel="nofollow noreferrer">Poincare model</a>.</p> Answer 104: <p>I would use two meridians on Earth that touch at the poles.</p> Answer 105: <p>This is a matter of point of view :)</p> <p><a href="https://i.stack.imgur.com/lGePi.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/lGePi.png" alt="enter image description here"></a></p> Answer 106: <p>You can actually speak of trains. Railways are pretty suggestive.</p> Answer 107: <p>You cannot convince any mathematician let alone your friend as it is false. They can however grapple with the idea that for purely theoretical calculations we assume that 2 parallel lines can meet at a point at infinity. Should you wish to travel to infinity to prove this point then please send a postcard when you get there.</p> Answer 108: <p>Someone who knows basic mathematics can understand it easily since according to the definition of parallel lines<br> <strong>Parallel lines:</strong> Lines whose distance apart is constant are called <a href="https://www.google.co.in/#q=parallel+lines+definition" rel="nofollow noreferrer"><strong>parallel lines</strong> [see the topmost definition]</a></p> <p>Now, by this definition, coinciding lines are also parallel, since the distance between them is constant, namely $0$. Hence, since coinciding lines touch each other, parallel lines can also touch.</p> <p><a href="https://www.youtube.com/watch?v=BKorP55Aqvg" rel="nofollow noreferrer"><strong>See the video here</strong> also</a></p> Answer 109: <p>I would draw two parallel lines on a piece of paper. 
Then I would bring all the edges together into one point and I would show that the parallel lines would touch at that point.</p> Answer 110: <p>A sentence said to me by my mother when I was a kid and that I didn't understand at that time was:</p> <blockquote> <p>Parallel lines are lines that intersect at infinity.</p> </blockquote> <p>This statement and its explanation through art (as suggested by Pedro Tamaroff in the comments) and by showing how many statements and proofs are simplified (if the person knows more math) is a good introduction to the idea that thinking of parallel lines as lines meeting at infinity makes sense and can be useful.</p> <h2>In art</h2> <p>This, as you can also see in Yves Daoust's answer, can be seen whenever you look at a straight road going to the horizon. For example, see <a href="https://commons.wikimedia.org/wiki/File:Straight_Road,_B17_-_geograph.org.uk_-_232427.jpg" rel="nofollow noreferrer">Wikimedia Commons</a> for the source, <a href="https://i.stack.imgur.com/cnNkW.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/cnNkW.jpg" alt="Source: Wikimedia Commons"></a><br> The above image shows why this conception originated in art, as it was really useful for representing how we see things. A striking application of this can be seen in the <a href="https://en.wikipedia.org/wiki/Santa_Maria_presso_San_Satiro" rel="nofollow noreferrer">Santa Maria presso San Satiro</a>, where projective geometry was used to emulate an absent space in a church. However, there are many other examples.</p> <h2>In geometry</h2> <p>In usual geometry (i.e. affine and Euclidean geometry), the above definition or statement does not make sense as parallel lines don't intersect. 
This can be the definition in the plane, but it is generally (in higher dimensions) a result derived from the definition of parallel lines as lines with the same direction.</p> <p>However, the reason why we can still make sense of the above statement about intersection at infinity is that affine geometry can be put inside projective geometry. When doing this, the points outside the affine space are called "points at infinity"; parallel lines intersect at them and become the same as intersecting lines, enormously simplifying many statements and proofs by permitting one not to distinguish cases. An example is <a href="https://en.wikipedia.org/wiki/Pappus%27s_hexagon_theorem" rel="nofollow noreferrer">Pappus's hexagon theorem</a>.</p> <h2>In conclusion</h2> <p>In conclusion, don't try to convince or show that parallel lines touch. Just try to explain the usefulness of thinking of parallel lines as lines that intersect at infinity. In the Renaissance, you have many examples of why this is a useful statement from the point of view of representing reality and perspectives well; in mathematics, there are many examples of how this is really useful for simplifying statements and proofs in geometry.</p> Answer 111: <p>In Elliptic geometry the supposition is false.</p> <p>In Euclidean geometry parallel lines "meet" and touch at infinity as their slopes are the same.</p> <p>In flat Hyperbolic geometry parallel lines can also touch, but only at infinity.</p> <p>In the flat Poincare disk model, circle-segment geodesic parallels meet tangentially only at infinitely distant points on the boundary of the "horizon" or boundary circle. The lines can be seen to touch and move on the boundary in the link below.</p> <p><a href="http://mathworld.wolfram.com/PoincareHyperbolicDisk.html" rel="nofollow noreferrer"> Poincare Disk_MathWorld </a></p> <p>So also in the half-plane model semi-circular geodesics meet at a point at $\infty$, on the x-axis. 
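The half-plane picture in Answer 111 can be sketched numerically. Below, two geodesics are modeled as Euclidean semicircles resting on the x-axis that share the boundary point $0$; the radii $1$ and $2$ are arbitrary illustrative choices, not taken from the linked pages. Subtracting the two circle equations $x^2 - 2rx + y^2 = 0$ shows their only common point is $(0,0)$, which lies <em>on</em> the x-axis, i.e. "at infinity" for the model, so inside the half-plane the geodesics never meet; yet the Euclidean gap between the arcs shrinks to nothing as they approach that shared boundary point:

```python
import math

def arc_height(radius, x):
    # Height at abscissa x of the semicircle with endpoints 0 and 2*radius
    # on the x-axis, from (x - radius)**2 + y**2 = radius**2.
    return math.sqrt(2 * radius * x - x * x)

# Subtracting the circle equations x**2 - 2*r*x + y**2 = 0 for r = 1 and
# r = 2 leaves 2*x = 0, so the only common point is (0, 0) -- on the
# boundary, not inside the half-plane.  The geodesics are "parallel" in
# the model, yet they come together at the shared boundary point:
for x in (1e-1, 1e-3, 1e-5):
    gap = arc_height(2.0, x) - arc_height(1.0, x)
    print(f"x = {x:g}  gap between the arcs = {gap:.6f}")
```

The printed gaps decrease toward $0$, which is the sense in which these parallel geodesics "touch tangentially at infinity".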
An animation can be also found elsewhere semi-circles touch on x-axis while changing semi-circle size during movement.</p> <p><a href="http://math.etsu.edu/multicalc/prealpha/Chap3/Chap3-8/part4.htm" rel="nofollow noreferrer">PoincareHalfPlane</a></p> <p>Both examples are touching parallel lines, touching tangentially /asymptotically at point infinity.</p> <p><a href="http://math.etsu.edu/multicalc/prealpha/Chap3/Chap3-8/part4.htm" rel="nofollow noreferrer">http://math.etsu.edu/multicalc/prealpha/Chap3/Chap3-8/part4.htm</a></p> Answer 112: <p>One way to explain this is to think of the Euclidean plane as a picture of (part of) an ambient 3-dimensional world. This corresponds to how our vision actually works: light from the three-dimensional world is projected through the lens of an eyes onto the surface at the back of our eye.</p> <p>In more detail: Place a plane anywhere in three-dimensional space (as long as it doesn't pass through the origin). Call this the "picture plane". Now any <em>line through the origin</em> in $\mathbb{R}^3$ pierces the picture plane in exactly one point (as long as the line it is not parallel to the picture plane). Any <em>plane through the origin</em> pierces the picture plane in exactly one line (again, as long as the plane is not parallel to the picture plane). 
See below (in which the picture plane is shown in blue).</p> <p><a href="https://i.stack.imgur.com/Dod2D.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Dod2D.png" alt="enter image description here"></a> <a href="https://i.stack.imgur.com/KRnSK.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/KRnSK.png" alt="enter image description here"></a></p> <p>Now we have a dictionary: $$\textrm{point in picture plane} \longleftrightarrow \textrm{line through origin in 3-space}$$ $$\textrm{line in picture plane} \longleftrightarrow \textrm{plane through origin in 3-space}$$</p> <p>This dictionary is not quite 1-to-1, because there are lines (and one plane) through the origin in $\mathbb{R}^3$ that are parallel to the picture plane, and therefore do not correspond to any points or to a line in the picture plane. These are points and this line can be considered "at infinity", but really all this means is that they do not have images in the picture plane.</p> <p>You can experience this visually by holding your finger at arm's length in front of and a few inches above. As you draw your arm closer (while maintaining its height) the location of your fingertip seems to get "higher" or "farther away" in your field of view, as your eyes have to strain more and more to see it. Eventually when the fingertip is directly above the eye, the finger "disappears". It's not really gone, of course, but the "line of sight" from the fingertip to the lens of your eye is parallel to the retina, so that the fingertip cannot be seen. Perceptually, the fingertip has receded "infinitely far away" in the picture plane -- but it is actually just a couple of inches away in 3-space.</p> <p><strong>However</strong>, we all know what to do in that circumstance: turn your head! This corresponds to choosing a new picture plane with a different orientation. 
Now the point that was "at infinity" snaps into view and is revealed as an ordinary point in the new picture plane (and at the same time points that <em>were previously</em> in view have no disappeared).</p> <p>Note that a statement like</p> <blockquote> <p>given any two distinct points in the picture plane, there exists a unique line in the picture plane through the given points</p> </blockquote> <p>which describes a true fact about the geometry of the picture plane corresponds, via this dictionary, to</p> <blockquote> <p>given any two distinct lines through the origin in 3-space, there exists a unique plane through the origin containing both lines</p> </blockquote> <p>which is a true statement about 3-space.</p> <p>Now that we have this dictionary set up, let's think about what parallel lines in the picture plane are. As the image below shows, parallel lines in the picture plane correspond to two planes in $\mathbb{R}^3$. (More precisely, lines $f$ and $g$ are "parallel" in the blue picture plane, and correspond to the orange and yellow planes through the origin.)</p> <p><a href="https://i.stack.imgur.com/J3BxN.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/J3BxN.png" alt="enter image description here"></a></p> <p>Those planes are not parallel; they intersect, but the intersection is a line <em>that does not pierce the picture plane</em>. (In this illustration, the line of intersection happens to be the $y$-axis.) That line in $\mathbb{R}^3$ corresponds to a point "at infinity".</p> <p>Of course if you choose a different picture plane, then the lines will no longer appear parallel, and their intersection will be plainly visible as a point in the new picture plane. The final image below shows this. 
In that image, the orange and yellow planes are the same ones as before, but the blue picture plane has been moved and reoriented; the images of the two planes are now <em>non-parallel</em> lines.</p> <p><a href="https://i.stack.imgur.com/iRBlv.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/iRBlv.png" alt="enter image description here"></a></p> <p>In summary:</p> <ul> <li>If the 2-dimensional plane is understood as an image of an ambient 3-dimensional space, then a "point" is just the image of a <em>line through the origin</em> and a "line" is just the image of a <em>plane through the origin</em></li> <li>Some "points" (and the "line" containing that point) will have no image in the picture plane, which we describe by saying that they are "at infinity"</li> <li>However being "at infinity" is an artifact of which picture plane we use; changing picture planes changes which points are at infinity</li> <li>Furthermore whether two "lines" are parallel is an artifact of the picture plane we use to view them; lines that seem parallel from one perspective are non-parallel from a different perspective</li> <li>The connection between the last two bullets is that "lines" that appear parallel in one picture plane <em>do</em> intersect, but the point of intersection is a point "at infinity" <em>relative to the picture plane</em>; change perspectives and the point is no longer at infinity, and the lines are no longer parallel.</li> <li>All of the above corresponds more-or-less exactly with how the human eye works, which is why perspective art can do such a good job at fooling the eye and creating the illusion of depth on a flat surface.</li> </ul> Answer 113: <p>Lines are parallel if they lie in the same plane and they don't intersect. 
In other geometries, there may be no parallel lines, lines may not have a common point but they may have a common limit point at infinity, or they may just not intersect.</p> <p>You might be thinking about Projective Geometry where a "point at infinity" is added to every family of parallel lines and the set of all points at infinity is called the "line at infinity".</p> <p>Addendum</p> <hr> <p>Typically, in formal geometry, points, lines, and planes are not defined. But postulates define their "baseline" behavior. The postulate that every geometry seems to agree on is the one that states </p> <p>L1: Given two distinct points in a plane, there is exactly one line in that plane that contains them.</p> <p>The "dual" of that postulate is</p> <p>DL1: Given two distinct lines in a plane, there is exactly one point in that plane that belongs to both lines.</p> <p>Since Euclidean geometry contains parallel lines, DL2 is false. But Projective geometry accepts DL2 as a postulate. The big question is, "Does there exists a geometry that satisfies the postulates of Projective geometry?" Yes there does.</p> <p>The creation of such a geometry is really quite clever. You start with a Euclidean plane and you add points to it as follows. Pick any line in the plane. To that line and all lines parallel to it, you add one extra point, a point at infinity. This is a set thing. We are treating a Euclidean line, $l$, as a set of points and we are adding a non Euclidean point $p$ to that set, $l' = l \cup \{p\}$.</p> <p>Adding this point to those lines means that those lines are no longer parallel. Define the set of all points at infinity to be the line at infinity. The Projective plane is the Euclidian plane with all points at infinity and the line at infinity added to it. This particular Projective plane can be proved to satisfy DL2.</p> <p>In the other direction, pick any line, $l$, in the projective plane, $\mathbb P^2$, and remove it. 
What you end up with is the Euclidean plane, $\mathbb E^2 \cong \mathbb P^2 - \{l\}$. Some lines will still intersect. Those that intersected at a point on the line that was removed will now be parallel.</p> Answer 114: <p>I would say if we stand next to each other and both walk due north, we eventually bump into each other near the north pole</p> <p>Note, as several have commented, your premise is slightly off. What you can convince your friend of is that any two 'straight lines' i.e. geodesics cross on a sphere (not parallel lines), whereas on a plane, for any line there is a special family of lines that never touch it.</p> Answer 115: <p>As I said in comments, you can't convince someone with something which is not true.</p> <p>IMO if someone knows basic geometry then he he must be aware of the fact that :</p> <h1>Parallel lines do not meet at a point.</h1> <p><a href="https://en.wikipedia.org/wiki/Parallel_(geometry)" rel="nofollow noreferrer">This</a> section of Wikipedia worth a lot here:</p> <blockquote> <p>In geometry, parallel lines are lines in a plane which do not meet; that is, two lines in a plane that do not intersect or touch each other at any point are said to be parallel. By extension, a line and a plane, or two planes, in three-dimensional Euclidean space that do not share a point are said to be parallel.</p> </blockquote> <p>Though if you are convincing/arguing with someone naive in field of geometry that Parallel line do not meet, ask him why people call him by his/her name and not by <a href="https://math.stackexchange.com/users/401699/justin-bieber">Justin Bieber</a>/<a href="https://math.stackexchange.com/users/143816/selena-gomez">Selena Gomez</a>? The expected answer would be that <em>Because it is my name</em> and that's it. You got him/her. 
Actually parallel lines cannot meet at a point or intersect because they are defined that way, if two lines will intersect then they will not remain parallel lines.</p> Answer 116: <p>When I was 16 or so I was bored trying to solve my math homework, so I played with a magnifying glass, and I noticed something interesting:</p> <p>If you look at the grid paper with a magnifying glass the lines remain parallel (this is evident when they are more or less on the "top of the glass"), but all the lines meet at the edge of the glass.</p> <p>(I went to my older brother, who was an engineering freshman at the time, and I told him that parallel lines <em>can</em> meet; but he replied that they cannot because it's an axiom. Some years later I learned that there is a thing called non-Euclidean geometry.)</p> <p>&nbsp;</p> <p>Note, by the way, this might require you to think about the glass as infinite.</p> Answer 117: <p>In his comment @JackyChong has identified the preliminary problem. The definition of "parallel" is clear: lines that don't meet, so there are no parallel lines that meet. The real question is the definition of "line".</p> <p>Geodesic is the most natural for geometry on the sphere. Then in this geometry there are no parallel lines.</p> <p>For projective geometry one definition is to add a "point at infinity" on each line, and then make the added points into a "line at infinity". With these extra points and lines there are no parallel lines. Two that are parallel in the Euclidean plane share their point at infinity. The railroad track analogy helps here.</p> <p>In hyperbolic geometry there are multiple lines parallel to a given line through a point not on that line. 
If you get that far with the "someone who knows no mathematics" you can show him or her the <a href="https://en.wikipedia.org/wiki/Poincar%C3%A9_disk_model" rel="nofollow noreferrer">Poincare model</a>.</p> Answer 118: <p>I would use two meridians on Earth that touch at the poles.</p> Answer 119: <p>This is a matter of point of view :)</p> <p><a href="https://i.stack.imgur.com/lGePi.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/lGePi.png" alt="enter image description here"></a></p> Answer 120: <p>You can actually speak of trains. Rail ways are pretty suggestive.</p> Answer 121: <p>You cannot convince any mathematician let alone your friend as it is false. They can however grapple with the idea that for purely theoretical calculations we assume that 2 parallel lines can meet at a point in infinity. Should you wish to travel to infinity to prove this point then please send a postcard when you get there.</p> Answer 122: <p>Someone who knows basic mathematics can understand it easily since according to the definition of parallel lines<br> <strong>Parallel lines:</strong> The lines those distance is constant are called <a href="https://www.google.co.in/#q=parallel+lines+definition" rel="nofollow noreferrer"><strong>parallel lines</strong>[see the topmost definition]</a></p> <p>Now, as we know the definition the coinciding lines are also parallel since the distance between them is constant or $0$. Hence, as coinciding lines touch each other implies parallel lines also touch. </p> <p><a href="https://www.youtube.com/watch?v=BKorP55Aqvg" rel="nofollow noreferrer"><strong>See the video here</strong> also</a></p> Answer 123: <p>I would draw two parallel lines on a piece of paper. 
Then I would bring all the edges together into one point and I would show that the parallel lines would touch into that point.</p> Answer 124: <p>A sentence said to me by my mother when I was a kid and that I didn't understand at that time was:</p> <blockquote> <p>Parallel lines are lines that intersect at infinity.</p> </blockquote> <p>This statement and its explanation though art (as suggested by Pedro Tamaroff in the comments) and by showing how many statements and proofs are simplified (if the person knows more math) is a good introduction to the idea that thinking on parallel lines as lines meeting at infinity makes sense and can be useful.</p> <h2>In art</h2> <p>This, as you can also see in Yves Daoust's answer, can be seen whenever you look a straight road going to the horizon. For example, see <a href="https://commons.wikimedia.org/wiki/File:Straight_Road,_B17_-_geograph.org.uk_-_232427.jpg" rel="nofollow noreferrer">Wikimedia Commons</a> for the source, <a href="https://i.stack.imgur.com/cnNkW.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/cnNkW.jpg" alt="Source: Wikimedia Commons"></a><br> The above image shows the reason why this conception was in the beginning a concept originated in art as it was really useful for representations of how we see things. An striking application of this can be seen in the <a href="https://en.wikipedia.org/wiki/Santa_Maria_presso_San_Satiro" rel="nofollow noreferrer">Santa Maria presso San Satiro</a>, where projective geometry was used emulating an absent space in a church. However, there are many other examples.</p> <h2>In geometry</h2> <p>In usual geometry (i.e. affine and Euclidean geometry), the above definition or statement does not make sense as parallel lines don't intersect. 
This can be the definition in the plane, but it is generally (in higher dimensions) a result derived from the definition of parallel lines as lines with the same direction.</p> <p>However, the reason why we can still make sense of the above statement about intersection at infinity is because affine geometry can be put inside projective geometry. When doing this, the points outside the affine space are called "points at infinity", parallel lines intersect at them and become the same as intersecting lines simplifying enormously many statements and proofs by permitting one to not distinguish cases. An example can be <a href="https://en.wikipedia.org/wiki/Pappus%27s_hexagon_theorem" rel="nofollow noreferrer">Pappus's hexagon theorem</a>.</p> <h2>In conclusion</h2> <p>In conclusion, don't try to convince or show that parallel lines touch. Just try to explain the usefulness of thinking of parallel lines as lines that intersect at infinity. In the Renaissance, you have many examples of why this a useful statement from the point of view of representing reality and perspectives well; in mathematics, there are many examples of how this is really useful for simplifying statements and proofs in geometry.</p> Answer 125: <p>In Elliptic geometry the supposition is false.</p> <p>In Euclidean geometry parallel lines "meet" and touch at infinity as their slope is same.</p> <p>In flat Hyperbolic geometry parallel lines can also touch but only at at infinity.</p> <p>In the flat Poincare disk model circle segment geodesic parallels meet tangentially only at infinitely distant points on the boundary of the "horizon" or boundary circle. The lines can be seen to touch and move on the boundary seen in the Wiki link.</p> <p><a href="http://mathworld.wolfram.com/PoincareHyperbolicDisk.html" rel="nofollow noreferrer"> Poincare Disk_MathWorld </a></p> <p>So also in the half-plane model semi-circular geodesics meet at point at $\infty$, on the x-axis. 
An animation can also be found elsewhere, in which semi-circles touch on the x-axis while changing size during movement.</p> <p><a href="http://math.etsu.edu/multicalc/prealpha/Chap3/Chap3-8/part4.htm" rel="nofollow noreferrer">PoincareHalfPlane</a></p> <p>Both examples show touching parallel lines, touching tangentially/asymptotically at a point at infinity.</p> <p><a href="http://math.etsu.edu/multicalc/prealpha/Chap3/Chap3-8/part4.htm" rel="nofollow noreferrer">http://math.etsu.edu/multicalc/prealpha/Chap3/Chap3-8/part4.htm</a></p> Answer 126: <p>One way to explain this is to think of the Euclidean plane as a picture of (part of) an ambient 3-dimensional world. This corresponds to how our vision actually works: light from the three-dimensional world is projected through the lens of an eye onto the surface at the back of the eye.</p> <p>In more detail: Place a plane anywhere in three-dimensional space (as long as it doesn't pass through the origin). Call this the "picture plane". Now any <em>line through the origin</em> in $\mathbb{R}^3$ pierces the picture plane in exactly one point (as long as the line is not parallel to the picture plane). Any <em>plane through the origin</em> pierces the picture plane in exactly one line (again, as long as the plane is not parallel to the picture plane).
See below (in which the picture plane is shown in blue).</p> <p><a href="https://i.stack.imgur.com/Dod2D.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Dod2D.png" alt="enter image description here"></a> <a href="https://i.stack.imgur.com/KRnSK.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/KRnSK.png" alt="enter image description here"></a></p> <p>Now we have a dictionary: $$\textrm{point in picture plane} \longleftrightarrow \textrm{line through origin in 3-space}$$ $$\textrm{line in picture plane} \longleftrightarrow \textrm{plane through origin in 3-space}$$</p> <p>This dictionary is not quite 1-to-1, because there are lines (and one plane) through the origin in $\mathbb{R}^3$ that are parallel to the picture plane, and therefore do not correspond to any points or to a line in the picture plane. These points and this line can be considered "at infinity", but really all this means is that they do not have images in the picture plane.</p> <p>You can experience this visually by holding your finger at arm's length in front of, and a few inches above, your eyes. As you draw your arm closer (while maintaining its height) the location of your fingertip seems to get "higher" or "farther away" in your field of view, as your eyes have to strain more and more to see it. Eventually, when the fingertip is directly above the eye, the finger "disappears". It's not really gone, of course, but the "line of sight" from the fingertip to the lens of your eye is parallel to the retina, so that the fingertip cannot be seen. Perceptually, the fingertip has receded "infinitely far away" in the picture plane -- but it is actually just a couple of inches away in 3-space.</p> <p><strong>However</strong>, we all know what to do in that circumstance: turn your head! This corresponds to choosing a new picture plane with a different orientation.
Now the point that was "at infinity" snaps into view and is revealed as an ordinary point in the new picture plane (and at the same time points that <em>were previously</em> in view have now disappeared).</p> <p>Note that a statement like</p> <blockquote> <p>given any two distinct points in the picture plane, there exists a unique line in the picture plane through the given points</p> </blockquote> <p>which describes a true fact about the geometry of the picture plane corresponds, via this dictionary, to</p> <blockquote> <p>given any two distinct lines through the origin in 3-space, there exists a unique plane through the origin containing both lines</p> </blockquote> <p>which is a true statement about 3-space.</p> <p>Now that we have this dictionary set up, let's think about what parallel lines in the picture plane are. As the image below shows, parallel lines in the picture plane correspond to two planes in $\mathbb{R}^3$. (More precisely, lines $f$ and $g$ are "parallel" in the blue picture plane, and correspond to the orange and yellow planes through the origin.)</p> <p><a href="https://i.stack.imgur.com/J3BxN.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/J3BxN.png" alt="enter image description here"></a></p> <p>Those planes are not parallel; they intersect, but the intersection is a line <em>that does not pierce the picture plane</em>. (In this illustration, the line of intersection happens to be the $y$-axis.) That line in $\mathbb{R}^3$ corresponds to a point "at infinity".</p> <p>Of course if you choose a different picture plane, then the lines will no longer appear parallel, and their intersection will be plainly visible as a point in the new picture plane. The final image below shows this.
In that image, the orange and yellow planes are the same ones as before, but the blue picture plane has been moved and reoriented; the images of the two planes are now <em>non-parallel</em> lines.</p> <p><a href="https://i.stack.imgur.com/iRBlv.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/iRBlv.png" alt="enter image description here"></a></p> <p>In summary:</p> <ul> <li>If the 2-dimensional plane is understood as an image of an ambient 3-dimensional space, then a "point" is just the image of a <em>line through the origin</em> and a "line" is just the image of a <em>plane through the origin</em></li> <li>Some "points" (and the "line" containing that point) will have no image in the picture plane, which we describe by saying that they are "at infinity"</li> <li>However being "at infinity" is an artifact of which picture plane we use; changing picture planes changes which points are at infinity</li> <li>Furthermore whether two "lines" are parallel is an artifact of the picture plane we use to view them; lines that seem parallel from one perspective are non-parallel from a different perspective</li> <li>The connection between the last two bullets is that "lines" that appear parallel in one picture plane <em>do</em> intersect, but the point of intersection is a point "at infinity" <em>relative to the picture plane</em>; change perspectives and the point is no longer at infinity, and the lines are no longer parallel.</li> <li>All of the above corresponds more-or-less exactly with how the human eye works, which is why perspective art can do such a good job at fooling the eye and creating the illusion of depth on a flat surface.</li> </ul> Answer 127: <p>Lines are parallel if they lie in the same plane and they don't intersect. 
In other geometries, there may be no parallel lines, lines may not have a common point but they may have a common limit point at infinity, or they may just not intersect.</p> <p>You might be thinking about Projective Geometry, where a "point at infinity" is added to every family of parallel lines and the set of all points at infinity is called the "line at infinity".</p> <p>Addendum</p> <hr> <p>Typically, in formal geometry, points, lines, and planes are not defined. But postulates define their "baseline" behavior. The postulate that every geometry seems to agree on is the one that states </p> <p>L1: Given two distinct points in a plane, there is exactly one line in that plane that contains them.</p> <p>The "dual" of that postulate is</p> <p>DL1: Given two distinct lines in a plane, there is exactly one point in that plane that belongs to both lines.</p> <p>Since Euclidean geometry contains parallel lines, DL1 is false there. But Projective geometry accepts DL1 as a postulate. The big question is, "Does there exist a geometry that satisfies the postulates of Projective geometry?" Yes there does.</p> <p>The creation of such a geometry is really quite clever. You start with a Euclidean plane and you add points to it as follows. Pick any line in the plane. To that line and all lines parallel to it, you add one extra point, a point at infinity. This is a set thing. We are treating a Euclidean line, $l$, as a set of points and we are adding a non-Euclidean point $p$ to that set, $l' = l \cup \{p\}$.</p> <p>Adding this point to those lines means that those lines are no longer parallel. Define the set of all points at infinity to be the line at infinity. The Projective plane is the Euclidean plane with all points at infinity and the line at infinity added to it. This particular Projective plane can be proved to satisfy DL1.</p> <p>In the other direction, pick any line, $l$, in the projective plane, $\mathbb P^2$, and remove it.
What you end up with is the Euclidean plane, $\mathbb E^2 \cong \mathbb P^2 - \{l\}$. Some lines will still intersect. Those that intersected at a point on the line that was removed will now be parallel.</p> Answer 128: <p>I would say if we stand next to each other and both walk due north, we eventually bump into each other near the north pole.</p> <p>Note, as several have commented, your premise is slightly off. What you can convince your friend of is that any two 'straight lines' i.e. geodesics cross on a sphere (not parallel lines), whereas on a plane, for any line there is a special family of lines that never touch it.</p> Answer 129: <p>As I said in comments, you can't convince someone with something which is not true.</p> <p>IMO if someone knows basic geometry then he must be aware of the fact that :</p> <h1>Parallel lines do not meet at a point.</h1> <p><a href="https://en.wikipedia.org/wiki/Parallel_(geometry)" rel="nofollow noreferrer">This</a> section of Wikipedia is worth a lot here:</p> <blockquote> <p>In geometry, parallel lines are lines in a plane which do not meet; that is, two lines in a plane that do not intersect or touch each other at any point are said to be parallel. By extension, a line and a plane, or two planes, in three-dimensional Euclidean space that do not share a point are said to be parallel.</p> </blockquote> <p>Though if you are convincing/arguing with someone naive in the field of geometry that parallel lines do not meet, ask him why people call him by his/her name and not by <a href="https://math.stackexchange.com/users/401699/justin-bieber">Justin Bieber</a>/<a href="https://math.stackexchange.com/users/143816/selena-gomez">Selena Gomez</a>? The expected answer would be that <em>Because it is my name</em> and that's it. You got him/her.
Actually, parallel lines cannot meet at a point or intersect because they are defined that way; if two lines intersect, then they are not parallel.</p> Answer 130: <p>When I was 16 or so I was bored trying to solve my math homework, so I played with a magnifying glass, and I noticed something interesting:</p> <p>If you look at the grid paper with a magnifying glass the lines remain parallel (this is evident when they are more or less on the "top of the glass"), but all the lines meet at the edge of the glass.</p> <p>(I went to my older brother, who was an engineering freshman at the time, and I told him that parallel lines <em>can</em> meet; but he replied that they cannot because it's an axiom. Some years later I learned that there is a thing called non-Euclidean geometry.)</p> <p>&nbsp;</p> <p>Note, by the way, this might require you to think about the glass as infinite.</p> Answer 131: <p>In his comment @JackyChong has identified the preliminary problem. The definition of "parallel" is clear: lines that don't meet, so there are no parallel lines that meet. The real question is the definition of "line".</p> <p>The geodesic is the most natural "line" for geometry on the sphere. Then in this geometry there are no parallel lines.</p> <p>For projective geometry one definition is to add a "point at infinity" on each line, and then make the added points into a "line at infinity". With these extra points and lines there are no parallel lines. Two that are parallel in the Euclidean plane share their point at infinity. The railroad track analogy helps here.</p> <p>In hyperbolic geometry there are multiple lines parallel to a given line through a point not on that line.
If you get that far with the "someone who knows no mathematics" you can show him or her the <a href="https://en.wikipedia.org/wiki/Poincar%C3%A9_disk_model" rel="nofollow noreferrer">Poincare model</a>.</p> Answer 132: <p>I would use two meridians on Earth that touch at the poles.</p> Answer 133: <p>This is a matter of point of view :)</p> <p><a href="https://i.stack.imgur.com/lGePi.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/lGePi.png" alt="enter image description here"></a></p> Answer 134: <p>You can actually speak of trains. Railways are pretty suggestive.</p> Answer 135: <p>You cannot convince any mathematician let alone your friend as it is false. They can however grapple with the idea that for purely theoretical calculations we assume that 2 parallel lines can meet at a point at infinity. Should you wish to travel to infinity to prove this point then please send a postcard when you get there.</p> Answer 136: <p>Someone who knows basic mathematics can understand it easily from the definition of parallel lines:<br> <strong>Parallel lines:</strong> Lines whose distance from each other is constant are called <a href="https://www.google.co.in/#q=parallel+lines+definition" rel="nofollow noreferrer"><strong>parallel lines</strong>[see the topmost definition]</a></p> <p>Now, by this definition, coinciding lines are also parallel, since the distance between them is constant ($0$). Hence, since coinciding lines touch each other, parallel lines can also touch.</p> <p><a href="https://www.youtube.com/watch?v=BKorP55Aqvg" rel="nofollow noreferrer"><strong>See the video here</strong> also</a></p> Answer 137: <p>I would draw two parallel lines on a piece of paper.
Then I would bring all the edges together into one point and I would show that the parallel lines would touch into that point.</p> Answer 138: <p>A sentence said to me by my mother when I was a kid and that I didn't understand at that time was:</p> <blockquote> <p>Parallel lines are lines that intersect at infinity.</p> </blockquote> <p>This statement and its explanation though art (as suggested by Pedro Tamaroff in the comments) and by showing how many statements and proofs are simplified (if the person knows more math) is a good introduction to the idea that thinking on parallel lines as lines meeting at infinity makes sense and can be useful.</p> <h2>In art</h2> <p>This, as you can also see in Yves Daoust's answer, can be seen whenever you look a straight road going to the horizon. For example, see <a href="https://commons.wikimedia.org/wiki/File:Straight_Road,_B17_-_geograph.org.uk_-_232427.jpg" rel="nofollow noreferrer">Wikimedia Commons</a> for the source, <a href="https://i.stack.imgur.com/cnNkW.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/cnNkW.jpg" alt="Source: Wikimedia Commons"></a><br> The above image shows the reason why this conception was in the beginning a concept originated in art as it was really useful for representations of how we see things. An striking application of this can be seen in the <a href="https://en.wikipedia.org/wiki/Santa_Maria_presso_San_Satiro" rel="nofollow noreferrer">Santa Maria presso San Satiro</a>, where projective geometry was used emulating an absent space in a church. However, there are many other examples.</p> <h2>In geometry</h2> <p>In usual geometry (i.e. affine and Euclidean geometry), the above definition or statement does not make sense as parallel lines don't intersect. 
This can be the definition in the plane, but it is generally (in higher dimensions) a result derived from the definition of parallel lines as lines with the same direction.</p> <p>However, the reason why we can still make sense of the above statement about intersection at infinity is because affine geometry can be put inside projective geometry. When doing this, the points outside the affine space are called "points at infinity", parallel lines intersect at them and become the same as intersecting lines simplifying enormously many statements and proofs by permitting one to not distinguish cases. An example can be <a href="https://en.wikipedia.org/wiki/Pappus%27s_hexagon_theorem" rel="nofollow noreferrer">Pappus's hexagon theorem</a>.</p> <h2>In conclusion</h2> <p>In conclusion, don't try to convince or show that parallel lines touch. Just try to explain the usefulness of thinking of parallel lines as lines that intersect at infinity. In the Renaissance, you have many examples of why this a useful statement from the point of view of representing reality and perspectives well; in mathematics, there are many examples of how this is really useful for simplifying statements and proofs in geometry.</p> Answer 139: <p>In Elliptic geometry the supposition is false.</p> <p>In Euclidean geometry parallel lines "meet" and touch at infinity as their slope is same.</p> <p>In flat Hyperbolic geometry parallel lines can also touch but only at at infinity.</p> <p>In the flat Poincare disk model circle segment geodesic parallels meet tangentially only at infinitely distant points on the boundary of the "horizon" or boundary circle. The lines can be seen to touch and move on the boundary seen in the Wiki link.</p> <p><a href="http://mathworld.wolfram.com/PoincareHyperbolicDisk.html" rel="nofollow noreferrer"> Poincare Disk_MathWorld </a></p> <p>So also in the half-plane model semi-circular geodesics meet at point at $\infty$, on the x-axis. 
An animation can be also found elsewhere semi-circles touch on x-axis while changing semi-circle size during movement.</p> <p><a href="http://math.etsu.edu/multicalc/prealpha/Chap3/Chap3-8/part4.htm" rel="nofollow noreferrer">PoincareHalfPlane</a></p> <p>Both examples are touching parallel lines, touching tangentially /asymptotically at point infinity.</p> <p><a href="http://math.etsu.edu/multicalc/prealpha/Chap3/Chap3-8/part4.htm" rel="nofollow noreferrer">http://math.etsu.edu/multicalc/prealpha/Chap3/Chap3-8/part4.htm</a></p> Answer 140: <p>One way to explain this is to think of the Euclidean plane as a picture of (part of) an ambient 3-dimensional world. This corresponds to how our vision actually works: light from the three-dimensional world is projected through the lens of an eyes onto the surface at the back of our eye.</p> <p>In more detail: Place a plane anywhere in three-dimensional space (as long as it doesn't pass through the origin). Call this the "picture plane". Now any <em>line through the origin</em> in $\mathbb{R}^3$ pierces the picture plane in exactly one point (as long as the line it is not parallel to the picture plane). Any <em>plane through the origin</em> pierces the picture plane in exactly one line (again, as long as the plane is not parallel to the picture plane). 
See below (in which the picture plane is shown in blue).</p> <p><a href="https://i.stack.imgur.com/Dod2D.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Dod2D.png" alt="enter image description here"></a> <a href="https://i.stack.imgur.com/KRnSK.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/KRnSK.png" alt="enter image description here"></a></p> <p>Now we have a dictionary: $$\textrm{point in picture plane} \longleftrightarrow \textrm{line through origin in 3-space}$$ $$\textrm{line in picture plane} \longleftrightarrow \textrm{plane through origin in 3-space}$$</p> <p>This dictionary is not quite 1-to-1, because there are lines (and one plane) through the origin in $\mathbb{R}^3$ that are parallel to the picture plane, and therefore do not correspond to any points or to a line in the picture plane. These are points and this line can be considered "at infinity", but really all this means is that they do not have images in the picture plane.</p> <p>You can experience this visually by holding your finger at arm's length in front of and a few inches above. As you draw your arm closer (while maintaining its height) the location of your fingertip seems to get "higher" or "farther away" in your field of view, as your eyes have to strain more and more to see it. Eventually when the fingertip is directly above the eye, the finger "disappears". It's not really gone, of course, but the "line of sight" from the fingertip to the lens of your eye is parallel to the retina, so that the fingertip cannot be seen. Perceptually, the fingertip has receded "infinitely far away" in the picture plane -- but it is actually just a couple of inches away in 3-space.</p> <p><strong>However</strong>, we all know what to do in that circumstance: turn your head! This corresponds to choosing a new picture plane with a different orientation. 
Now the point that was "at infinity" snaps into view and is revealed as an ordinary point in the new picture plane (and at the same time points that <em>were previously</em> in view have no disappeared).</p> <p>Note that a statement like</p> <blockquote> <p>given any two distinct points in the picture plane, there exists a unique line in the picture plane through the given points</p> </blockquote> <p>which describes a true fact about the geometry of the picture plane corresponds, via this dictionary, to</p> <blockquote> <p>given any two distinct lines through the origin in 3-space, there exists a unique plane through the origin containing both lines</p> </blockquote> <p>which is a true statement about 3-space.</p> <p>Now that we have this dictionary set up, let's think about what parallel lines in the picture plane are. As the image below shows, parallel lines in the picture plane correspond to two planes in $\mathbb{R}^3$. (More precisely, lines $f$ and $g$ are "parallel" in the blue picture plane, and correspond to the orange and yellow planes through the origin.)</p> <p><a href="https://i.stack.imgur.com/J3BxN.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/J3BxN.png" alt="enter image description here"></a></p> <p>Those planes are not parallel; they intersect, but the intersection is a line <em>that does not pierce the picture plane</em>. (In this illustration, the line of intersection happens to be the $y$-axis.) That line in $\mathbb{R}^3$ corresponds to a point "at infinity".</p> <p>Of course if you choose a different picture plane, then the lines will no longer appear parallel, and their intersection will be plainly visible as a point in the new picture plane. The final image below shows this. 
In that image, the orange and yellow planes are the same ones as before, but the blue picture plane has been moved and reoriented; the images of the two planes are now <em>non-parallel</em> lines.</p> <p><a href="https://i.stack.imgur.com/iRBlv.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/iRBlv.png" alt="enter image description here"></a></p> <p>In summary:</p> <ul> <li>If the 2-dimensional plane is understood as an image of an ambient 3-dimensional space, then a "point" is just the image of a <em>line through the origin</em> and a "line" is just the image of a <em>plane through the origin</em></li> <li>Some "points" (and the "line" containing that point) will have no image in the picture plane, which we describe by saying that they are "at infinity"</li> <li>However being "at infinity" is an artifact of which picture plane we use; changing picture planes changes which points are at infinity</li> <li>Furthermore whether two "lines" are parallel is an artifact of the picture plane we use to view them; lines that seem parallel from one perspective are non-parallel from a different perspective</li> <li>The connection between the last two bullets is that "lines" that appear parallel in one picture plane <em>do</em> intersect, but the point of intersection is a point "at infinity" <em>relative to the picture plane</em>; change perspectives and the point is no longer at infinity, and the lines are no longer parallel.</li> <li>All of the above corresponds more-or-less exactly with how the human eye works, which is why perspective art can do such a good job at fooling the eye and creating the illusion of depth on a flat surface.</li> </ul> Answer 141: <p>Lines are parallel if they lie in the same plane and they don't intersect. 
In other geometries, there may be no parallel lines, lines may not have a common point but they may have a common limit point at infinity, or they may just not intersect.</p> <p>You might be thinking about Projective Geometry where a "point at infinity" is added to every family of parallel lines and the set of all points at infinity is called the "line at infinity".</p> <p>Addendum</p> <hr> <p>Typically, in formal geometry, points, lines, and planes are not defined. But postulates define their "baseline" behavior. The postulate that every geometry seems to agree on is the one that states </p> <p>L1: Given two distinct points in a plane, there is exactly one line in that plane that contains them.</p> <p>The "dual" of that postulate is</p> <p>DL1: Given two distinct lines in a plane, there is exactly one point in that plane that belongs to both lines.</p> <p>Since Euclidean geometry contains parallel lines, DL2 is false. But Projective geometry accepts DL2 as a postulate. The big question is, "Does there exists a geometry that satisfies the postulates of Projective geometry?" Yes there does.</p> <p>The creation of such a geometry is really quite clever. You start with a Euclidean plane and you add points to it as follows. Pick any line in the plane. To that line and all lines parallel to it, you add one extra point, a point at infinity. This is a set thing. We are treating a Euclidean line, $l$, as a set of points and we are adding a non Euclidean point $p$ to that set, $l' = l \cup \{p\}$.</p> <p>Adding this point to those lines means that those lines are no longer parallel. Define the set of all points at infinity to be the line at infinity. The Projective plane is the Euclidian plane with all points at infinity and the line at infinity added to it. This particular Projective plane can be proved to satisfy DL2.</p> <p>In the other direction, pick any line, $l$, in the projective plane, $\mathbb P^2$, and remove it. 
What you end up with is the Euclidean plane, $\mathbb E^2 \cong \mathbb P^2 - \{l\}$. Some lines will still intersect. Those that intersected at a point on the line that was removed will now be parallel.</p> Answer 142: <p>I would say if we stand next to each other and both walk due north, we eventually bump into each other near the north pole</p> <p>Note, as several have commented, your premise is slightly off. What you can convince your friend of is that any two 'straight lines' i.e. geodesics cross on a sphere (not parallel lines), whereas on a plane, for any line there is a special family of lines that never touch it.</p> Answer 143: <p>As I said in comments, you can't convince someone with something which is not true.</p> <p>IMO if someone knows basic geometry then he he must be aware of the fact that :</p> <h1>Parallel lines do not meet at a point.</h1> <p><a href="https://en.wikipedia.org/wiki/Parallel_(geometry)" rel="nofollow noreferrer">This</a> section of Wikipedia worth a lot here:</p> <blockquote> <p>In geometry, parallel lines are lines in a plane which do not meet; that is, two lines in a plane that do not intersect or touch each other at any point are said to be parallel. By extension, a line and a plane, or two planes, in three-dimensional Euclidean space that do not share a point are said to be parallel.</p> </blockquote> <p>Though if you are convincing/arguing with someone naive in field of geometry that Parallel line do not meet, ask him why people call him by his/her name and not by <a href="https://math.stackexchange.com/users/401699/justin-bieber">Justin Bieber</a>/<a href="https://math.stackexchange.com/users/143816/selena-gomez">Selena Gomez</a>? The expected answer would be that <em>Because it is my name</em> and that's it. You got him/her. 
Actually parallel lines cannot meet at a point or intersect because they are defined that way, if two lines will intersect then they will not remain parallel lines.</p> Answer 144: <p>When I was 16 or so I was bored trying to solve my math homework, so I played with a magnifying glass, and I noticed something interesting:</p> <p>If you look at the grid paper with a magnifying glass the lines remain parallel (this is evident when they are more or less on the "top of the glass"), but all the lines meet at the edge of the glass.</p> <p>(I went to my older brother, who was an engineering freshman at the time, and I told him that parallel lines <em>can</em> meet; but he replied that they cannot because it's an axiom. Some years later I learned that there is a thing called non-Euclidean geometry.)</p> <p>&nbsp;</p> <p>Note, by the way, this might require you to think about the glass as infinite.</p> Answer 145: <p>In his comment @JackyChong has identified the preliminary problem. The definition of "parallel" is clear: lines that don't meet, so there are no parallel lines that meet. The real question is the definition of "line".</p> <p>Geodesic is the most natural for geometry on the sphere. Then in this geometry there are no parallel lines.</p> <p>For projective geometry one definition is to add a "point at infinity" on each line, and then make the added points into a "line at infinity". With these extra points and lines there are no parallel lines. Two that are parallel in the Euclidean plane share their point at infinity. The railroad track analogy helps here.</p> <p>In hyperbolic geometry there are multiple lines parallel to a given line through a point not on that line. 
If you get that far with the "someone who knows no mathematics" you can show him or her the <a href="https://en.wikipedia.org/wiki/Poincar%C3%A9_disk_model" rel="nofollow noreferrer">Poincare model</a>.</p> Answer 146: <p>I would use two meridians on Earth that touch at the poles.</p> Answer 147: <p>This is a matter of point of view :)</p> <p><a href="https://i.stack.imgur.com/lGePi.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/lGePi.png" alt="enter image description here"></a></p> Answer 148: <p>You can actually speak of trains. Rail ways are pretty suggestive.</p> Answer 149: <p>You cannot convince any mathematician let alone your friend as it is false. They can however grapple with the idea that for purely theoretical calculations we assume that 2 parallel lines can meet at a point in infinity. Should you wish to travel to infinity to prove this point then please send a postcard when you get there.</p> Answer 150: <p>Someone who knows basic mathematics can understand it easily since according to the definition of parallel lines<br> <strong>Parallel lines:</strong> The lines those distance is constant are called <a href="https://www.google.co.in/#q=parallel+lines+definition" rel="nofollow noreferrer"><strong>parallel lines</strong>[see the topmost definition]</a></p> <p>Now, as we know the definition the coinciding lines are also parallel since the distance between them is constant or $0$. Hence, as coinciding lines touch each other implies parallel lines also touch. </p> <p><a href="https://www.youtube.com/watch?v=BKorP55Aqvg" rel="nofollow noreferrer"><strong>See the video here</strong> also</a></p> Answer 151: <p>I would draw two parallel lines on a piece of paper. 
Then I would bring all the edges together into one point and I would show that the parallel lines would touch into that point.</p> Answer 152: <p>A sentence said to me by my mother when I was a kid and that I didn't understand at that time was:</p> <blockquote> <p>Parallel lines are lines that intersect at infinity.</p> </blockquote> <p>This statement and its explanation though art (as suggested by Pedro Tamaroff in the comments) and by showing how many statements and proofs are simplified (if the person knows more math) is a good introduction to the idea that thinking on parallel lines as lines meeting at infinity makes sense and can be useful.</p> <h2>In art</h2> <p>This, as you can also see in Yves Daoust's answer, can be seen whenever you look a straight road going to the horizon. For example, see <a href="https://commons.wikimedia.org/wiki/File:Straight_Road,_B17_-_geograph.org.uk_-_232427.jpg" rel="nofollow noreferrer">Wikimedia Commons</a> for the source, <a href="https://i.stack.imgur.com/cnNkW.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/cnNkW.jpg" alt="Source: Wikimedia Commons"></a><br> The above image shows the reason why this conception was in the beginning a concept originated in art as it was really useful for representations of how we see things. An striking application of this can be seen in the <a href="https://en.wikipedia.org/wiki/Santa_Maria_presso_San_Satiro" rel="nofollow noreferrer">Santa Maria presso San Satiro</a>, where projective geometry was used emulating an absent space in a church. However, there are many other examples.</p> <h2>In geometry</h2> <p>In usual geometry (i.e. affine and Euclidean geometry), the above definition or statement does not make sense as parallel lines don't intersect. 
This can be the definition in the plane, but it is generally (in higher dimensions) a result derived from the definition of parallel lines as lines with the same direction.</p> <p>However, the reason we can still make sense of the above statement about intersection at infinity is that affine geometry can be put inside projective geometry. When doing this, the points outside the affine space are called "points at infinity"; parallel lines intersect at them and become the same as intersecting lines, which enormously simplifies many statements and proofs by removing the need to distinguish cases. An example is <a href="https://en.wikipedia.org/wiki/Pappus%27s_hexagon_theorem" rel="nofollow noreferrer">Pappus's hexagon theorem</a>.</p> <h2>In conclusion</h2> <p>In conclusion, don't try to convince or show that parallel lines touch. Just try to explain the usefulness of thinking of parallel lines as lines that intersect at infinity. In the Renaissance, there are many examples of why this is a useful statement from the point of view of representing reality and perspectives well; in mathematics, there are many examples of how this is really useful for simplifying statements and proofs in geometry.</p> Answer 153: <p>In Elliptic geometry the supposition is false.</p> <p>In Euclidean geometry parallel lines "meet" and touch at infinity, as their slopes are the same.</p> <p>In flat Hyperbolic geometry parallel lines can also touch, but only at infinity.</p> <p>In the flat Poincare disk model, circle-segment geodesic parallels meet tangentially only at infinitely distant points on the boundary of the "horizon" or boundary circle. The lines can be seen to touch and move on the boundary in the Wiki link.</p> <p><a href="http://mathworld.wolfram.com/PoincareHyperbolicDisk.html" rel="nofollow noreferrer"> Poincare Disk_MathWorld </a></p> <p>So also in the half-plane model, semi-circular geodesics meet at a point at $\infty$ on the x-axis.
An animation can also be found elsewhere, in which semi-circles touch on the x-axis while the semi-circle size changes during the movement.</p> <p><a href="http://math.etsu.edu/multicalc/prealpha/Chap3/Chap3-8/part4.htm" rel="nofollow noreferrer">PoincareHalfPlane</a></p> <p>Both examples are touching parallel lines, touching tangentially/asymptotically at a point at infinity.</p> <p><a href="http://math.etsu.edu/multicalc/prealpha/Chap3/Chap3-8/part4.htm" rel="nofollow noreferrer">http://math.etsu.edu/multicalc/prealpha/Chap3/Chap3-8/part4.htm</a></p> Answer 154: <p>One way to explain this is to think of the Euclidean plane as a picture of (part of) an ambient 3-dimensional world. This corresponds to how our vision actually works: light from the three-dimensional world is projected through the lens of an eye onto the surface at the back of the eye.</p> <p>In more detail: Place a plane anywhere in three-dimensional space (as long as it doesn't pass through the origin). Call this the "picture plane". Now any <em>line through the origin</em> in $\mathbb{R}^3$ pierces the picture plane in exactly one point (as long as the line is not parallel to the picture plane). Any <em>plane through the origin</em> pierces the picture plane in exactly one line (again, as long as the plane is not parallel to the picture plane).
See below (in which the picture plane is shown in blue).</p> <p><a href="https://i.stack.imgur.com/Dod2D.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Dod2D.png" alt="enter image description here"></a> <a href="https://i.stack.imgur.com/KRnSK.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/KRnSK.png" alt="enter image description here"></a></p> <p>Now we have a dictionary: $$\textrm{point in picture plane} \longleftrightarrow \textrm{line through origin in 3-space}$$ $$\textrm{line in picture plane} \longleftrightarrow \textrm{plane through origin in 3-space}$$</p> <p>This dictionary is not quite 1-to-1, because there are lines (and one plane) through the origin in $\mathbb{R}^3$ that are parallel to the picture plane, and therefore do not correspond to any points or to a line in the picture plane. These points and this line can be considered "at infinity", but really all this means is that they do not have images in the picture plane.</p> <p>You can experience this visually by holding your finger at arm's length in front of your face and a few inches above eye level. As you draw your arm closer (while maintaining its height) the location of your fingertip seems to get "higher" or "farther away" in your field of view, as your eyes have to strain more and more to see it. Eventually, when the fingertip is directly above the eye, the finger "disappears". It's not really gone, of course, but the "line of sight" from the fingertip to the lens of your eye is parallel to the retina, so that the fingertip cannot be seen. Perceptually, the fingertip has receded "infinitely far away" in the picture plane -- but it is actually just a couple of inches away in 3-space.</p> <p><strong>However</strong>, we all know what to do in that circumstance: turn your head! This corresponds to choosing a new picture plane with a different orientation.
Now the point that was "at infinity" snaps into view and is revealed as an ordinary point in the new picture plane (and at the same time points that <em>were previously</em> in view have now disappeared).</p> <p>Note that a statement like</p> <blockquote> <p>given any two distinct points in the picture plane, there exists a unique line in the picture plane through the given points</p> </blockquote> <p>which describes a true fact about the geometry of the picture plane corresponds, via this dictionary, to</p> <blockquote> <p>given any two distinct lines through the origin in 3-space, there exists a unique plane through the origin containing both lines</p> </blockquote> <p>which is a true statement about 3-space.</p> <p>Now that we have this dictionary set up, let's think about what parallel lines in the picture plane are. As the image below shows, parallel lines in the picture plane correspond to two planes in $\mathbb{R}^3$. (More precisely, lines $f$ and $g$ are "parallel" in the blue picture plane, and correspond to the orange and yellow planes through the origin.)</p> <p><a href="https://i.stack.imgur.com/J3BxN.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/J3BxN.png" alt="enter image description here"></a></p> <p>Those planes are not parallel; they intersect, but the intersection is a line <em>that does not pierce the picture plane</em>. (In this illustration, the line of intersection happens to be the $y$-axis.) That line in $\mathbb{R}^3$ corresponds to a point "at infinity".</p> <p>Of course, if you choose a different picture plane, then the lines will no longer appear parallel, and their intersection will be plainly visible as a point in the new picture plane. The final image below shows this.
In that image, the orange and yellow planes are the same ones as before, but the blue picture plane has been moved and reoriented; the images of the two planes are now <em>non-parallel</em> lines.</p> <p><a href="https://i.stack.imgur.com/iRBlv.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/iRBlv.png" alt="enter image description here"></a></p> <p>In summary:</p> <ul> <li>If the 2-dimensional plane is understood as an image of an ambient 3-dimensional space, then a "point" is just the image of a <em>line through the origin</em> and a "line" is just the image of a <em>plane through the origin</em></li> <li>Some "points" (and the "line" containing that point) will have no image in the picture plane, which we describe by saying that they are "at infinity"</li> <li>However being "at infinity" is an artifact of which picture plane we use; changing picture planes changes which points are at infinity</li> <li>Furthermore whether two "lines" are parallel is an artifact of the picture plane we use to view them; lines that seem parallel from one perspective are non-parallel from a different perspective</li> <li>The connection between the last two bullets is that "lines" that appear parallel in one picture plane <em>do</em> intersect, but the point of intersection is a point "at infinity" <em>relative to the picture plane</em>; change perspectives and the point is no longer at infinity, and the lines are no longer parallel.</li> <li>All of the above corresponds more-or-less exactly with how the human eye works, which is why perspective art can do such a good job at fooling the eye and creating the illusion of depth on a flat surface.</li> </ul> Answer 155: <p>Lines are parallel if they lie in the same plane and they don't intersect. 
In other geometries, there may be no parallel lines, lines may not have a common point but may have a common limit point at infinity, or they may just not intersect.</p> <p>You might be thinking about Projective Geometry, where a "point at infinity" is added to every family of parallel lines and the set of all points at infinity is called the "line at infinity".</p> <p>Addendum</p> <hr> <p>Typically, in formal geometry, points, lines, and planes are not defined. But postulates define their "baseline" behavior. The postulate that every geometry seems to agree on is the one that states </p> <p>L1: Given two distinct points in a plane, there is exactly one line in that plane that contains them.</p> <p>The "dual" of that postulate is</p> <p>DL1: Given two distinct lines in a plane, there is exactly one point in that plane that belongs to both lines.</p> <p>Since Euclidean geometry contains parallel lines, DL1 is false there. But Projective geometry accepts DL1 as a postulate. The big question is, "Does there exist a geometry that satisfies the postulates of Projective geometry?" Yes, there does.</p> <p>The creation of such a geometry is really quite clever. You start with a Euclidean plane and you add points to it as follows. Pick any line in the plane. To that line and all lines parallel to it, you add one extra point, a point at infinity. This is a set-theoretic construction: we are treating a Euclidean line, $l$, as a set of points, and we are adding a non-Euclidean point $p$ to that set, $l' = l \cup \{p\}$.</p> <p>Adding this point to those lines means that those lines are no longer parallel. Define the set of all points at infinity to be the line at infinity. The Projective plane is the Euclidean plane with all points at infinity and the line at infinity added to it. This particular Projective plane can be proved to satisfy DL1.</p> <p>In the other direction, pick any line, $l$, in the projective plane, $\mathbb P^2$, and remove it.
What you end up with is the Euclidean plane, $\mathbb E^2 \cong \mathbb P^2 - \{l\}$. Some lines will still intersect. Those that intersected at a point on the line that was removed will now be parallel.</p> Answer 156: <p>I would say that if we stand next to each other and both walk due north, we eventually bump into each other near the north pole.</p> <p>Note, as several have commented, your premise is slightly off. What you can convince your friend of is that any two 'straight lines', i.e. geodesics, cross on a sphere (not parallel lines), whereas on a plane, for any line there is a special family of lines that never touch it.</p> Answer 157: <p>As I said in comments, you can't convince someone with something which is not true.</p> <p>IMO if someone knows basic geometry then he must be aware of the fact that:</p> <h1>Parallel lines do not meet at a point.</h1> <p><a href="https://en.wikipedia.org/wiki/Parallel_(geometry)" rel="nofollow noreferrer">This</a> section of Wikipedia is worth a lot here:</p> <blockquote> <p>In geometry, parallel lines are lines in a plane which do not meet; that is, two lines in a plane that do not intersect or touch each other at any point are said to be parallel. By extension, a line and a plane, or two planes, in three-dimensional Euclidean space that do not share a point are said to be parallel.</p> </blockquote> <p>Though if you are convincing/arguing with someone naive in the field of geometry that parallel lines do not meet, ask him why people call him by his/her name and not by <a href="https://math.stackexchange.com/users/401699/justin-bieber">Justin Bieber</a>/<a href="https://math.stackexchange.com/users/143816/selena-gomez">Selena Gomez</a>? The expected answer would be <em>Because it is my name</em>, and that's it. You got him/her.
Actually, parallel lines cannot meet at a point or intersect because they are defined that way; if two lines intersect, then they are not parallel lines.</p> Answer 158: <p>When I was 16 or so I was bored trying to solve my math homework, so I played with a magnifying glass, and I noticed something interesting:</p> <p>If you look at grid paper with a magnifying glass, the lines remain parallel (this is evident when they are more or less on the "top of the glass"), but all the lines meet at the edge of the glass.</p> <p>(I went to my older brother, who was an engineering freshman at the time, and I told him that parallel lines <em>can</em> meet; but he replied that they cannot because it's an axiom. Some years later I learned that there is a thing called non-Euclidean geometry.)</p> <p>&nbsp;</p> <p>Note, by the way, this might require you to think of the glass as infinite.</p> Answer 159: <p>In his comment @JackyChong has identified the preliminary problem. The definition of "parallel" is clear: lines that don't meet, so there are no parallel lines that meet. The real question is the definition of "line".</p> <p>The geodesic is the most natural notion of "line" for geometry on the sphere, and in this geometry there are no parallel lines.</p> <p>For projective geometry one definition is to add a "point at infinity" on each line, and then make the added points into a "line at infinity". With these extra points and lines there are no parallel lines. Two lines that are parallel in the Euclidean plane share their point at infinity. The railroad track analogy helps here.</p> <p>In hyperbolic geometry there are multiple lines parallel to a given line through a point not on that line.
If you get that far with the "someone who knows no mathematics" you can show him or her the <a href="https://en.wikipedia.org/wiki/Poincar%C3%A9_disk_model" rel="nofollow noreferrer">Poincare model</a>.</p> Answer 160: <p>I would use two meridians on Earth that touch at the poles.</p> Answer 161: <p>This is a matter of point of view :)</p> <p><a href="https://i.stack.imgur.com/lGePi.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/lGePi.png" alt="enter image description here"></a></p> Answer 162: <p>You can actually speak of trains. Rail ways are pretty suggestive.</p> Answer 163: <p>You cannot convince any mathematician let alone your friend as it is false. They can however grapple with the idea that for purely theoretical calculations we assume that 2 parallel lines can meet at a point in infinity. Should you wish to travel to infinity to prove this point then please send a postcard when you get there.</p> Answer 164: <p>Someone who knows basic mathematics can understand it easily since according to the definition of parallel lines<br> <strong>Parallel lines:</strong> The lines those distance is constant are called <a href="https://www.google.co.in/#q=parallel+lines+definition" rel="nofollow noreferrer"><strong>parallel lines</strong>[see the topmost definition]</a></p> <p>Now, as we know the definition the coinciding lines are also parallel since the distance between them is constant or $0$. Hence, as coinciding lines touch each other implies parallel lines also touch. </p> <p><a href="https://www.youtube.com/watch?v=BKorP55Aqvg" rel="nofollow noreferrer"><strong>See the video here</strong> also</a></p> Answer 165: <p>I would draw two parallel lines on a piece of paper. 
Then I would bring all the edges together into one point and I would show that the parallel lines would touch into that point.</p> Answer 166: <p>A sentence said to me by my mother when I was a kid and that I didn't understand at that time was:</p> <blockquote> <p>Parallel lines are lines that intersect at infinity.</p> </blockquote> <p>This statement and its explanation though art (as suggested by Pedro Tamaroff in the comments) and by showing how many statements and proofs are simplified (if the person knows more math) is a good introduction to the idea that thinking on parallel lines as lines meeting at infinity makes sense and can be useful.</p> <h2>In art</h2> <p>This, as you can also see in Yves Daoust's answer, can be seen whenever you look a straight road going to the horizon. For example, see <a href="https://commons.wikimedia.org/wiki/File:Straight_Road,_B17_-_geograph.org.uk_-_232427.jpg" rel="nofollow noreferrer">Wikimedia Commons</a> for the source, <a href="https://i.stack.imgur.com/cnNkW.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/cnNkW.jpg" alt="Source: Wikimedia Commons"></a><br> The above image shows the reason why this conception was in the beginning a concept originated in art as it was really useful for representations of how we see things. An striking application of this can be seen in the <a href="https://en.wikipedia.org/wiki/Santa_Maria_presso_San_Satiro" rel="nofollow noreferrer">Santa Maria presso San Satiro</a>, where projective geometry was used emulating an absent space in a church. However, there are many other examples.</p> <h2>In geometry</h2> <p>In usual geometry (i.e. affine and Euclidean geometry), the above definition or statement does not make sense as parallel lines don't intersect. 
This can be the definition in the plane, but it is generally (in higher dimensions) a result derived from the definition of parallel lines as lines with the same direction.</p> <p>However, the reason why we can still make sense of the above statement about intersection at infinity is because affine geometry can be put inside projective geometry. When doing this, the points outside the affine space are called "points at infinity", parallel lines intersect at them and become the same as intersecting lines simplifying enormously many statements and proofs by permitting one to not distinguish cases. An example can be <a href="https://en.wikipedia.org/wiki/Pappus%27s_hexagon_theorem" rel="nofollow noreferrer">Pappus's hexagon theorem</a>.</p> <h2>In conclusion</h2> <p>In conclusion, don't try to convince or show that parallel lines touch. Just try to explain the usefulness of thinking of parallel lines as lines that intersect at infinity. In the Renaissance, you have many examples of why this a useful statement from the point of view of representing reality and perspectives well; in mathematics, there are many examples of how this is really useful for simplifying statements and proofs in geometry.</p> Answer 167: <p>In Elliptic geometry the supposition is false.</p> <p>In Euclidean geometry parallel lines "meet" and touch at infinity as their slope is same.</p> <p>In flat Hyperbolic geometry parallel lines can also touch but only at at infinity.</p> <p>In the flat Poincare disk model circle segment geodesic parallels meet tangentially only at infinitely distant points on the boundary of the "horizon" or boundary circle. The lines can be seen to touch and move on the boundary seen in the Wiki link.</p> <p><a href="http://mathworld.wolfram.com/PoincareHyperbolicDisk.html" rel="nofollow noreferrer"> Poincare Disk_MathWorld </a></p> <p>So also in the half-plane model semi-circular geodesics meet at point at $\infty$, on the x-axis. 
An animation can be also found elsewhere semi-circles touch on x-axis while changing semi-circle size during movement.</p> <p><a href="http://math.etsu.edu/multicalc/prealpha/Chap3/Chap3-8/part4.htm" rel="nofollow noreferrer">PoincareHalfPlane</a></p> <p>Both examples are touching parallel lines, touching tangentially /asymptotically at point infinity.</p> <p><a href="http://math.etsu.edu/multicalc/prealpha/Chap3/Chap3-8/part4.htm" rel="nofollow noreferrer">http://math.etsu.edu/multicalc/prealpha/Chap3/Chap3-8/part4.htm</a></p> Answer 168: <p>One way to explain this is to think of the Euclidean plane as a picture of (part of) an ambient 3-dimensional world. This corresponds to how our vision actually works: light from the three-dimensional world is projected through the lens of an eyes onto the surface at the back of our eye.</p> <p>In more detail: Place a plane anywhere in three-dimensional space (as long as it doesn't pass through the origin). Call this the "picture plane". Now any <em>line through the origin</em> in $\mathbb{R}^3$ pierces the picture plane in exactly one point (as long as the line it is not parallel to the picture plane). Any <em>plane through the origin</em> pierces the picture plane in exactly one line (again, as long as the plane is not parallel to the picture plane). 
See below (in which the picture plane is shown in blue).</p> <p><a href="https://i.stack.imgur.com/Dod2D.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Dod2D.png" alt="enter image description here"></a> <a href="https://i.stack.imgur.com/KRnSK.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/KRnSK.png" alt="enter image description here"></a></p> <p>Now we have a dictionary: $$\textrm{point in picture plane} \longleftrightarrow \textrm{line through origin in 3-space}$$ $$\textrm{line in picture plane} \longleftrightarrow \textrm{plane through origin in 3-space}$$</p> <p>This dictionary is not quite 1-to-1, because there are lines (and one plane) through the origin in $\mathbb{R}^3$ that are parallel to the picture plane, and therefore do not correspond to any points or to a line in the picture plane. These are points and this line can be considered "at infinity", but really all this means is that they do not have images in the picture plane.</p> <p>You can experience this visually by holding your finger at arm's length in front of and a few inches above. As you draw your arm closer (while maintaining its height) the location of your fingertip seems to get "higher" or "farther away" in your field of view, as your eyes have to strain more and more to see it. Eventually when the fingertip is directly above the eye, the finger "disappears". It's not really gone, of course, but the "line of sight" from the fingertip to the lens of your eye is parallel to the retina, so that the fingertip cannot be seen. Perceptually, the fingertip has receded "infinitely far away" in the picture plane -- but it is actually just a couple of inches away in 3-space.</p> <p><strong>However</strong>, we all know what to do in that circumstance: turn your head! This corresponds to choosing a new picture plane with a different orientation. 
Now the point that was "at infinity" snaps into view and is revealed as an ordinary point in the new picture plane (and at the same time points that <em>were previously</em> in view have no disappeared).</p> <p>Note that a statement like</p> <blockquote> <p>given any two distinct points in the picture plane, there exists a unique line in the picture plane through the given points</p> </blockquote> <p>which describes a true fact about the geometry of the picture plane corresponds, via this dictionary, to</p> <blockquote> <p>given any two distinct lines through the origin in 3-space, there exists a unique plane through the origin containing both lines</p> </blockquote> <p>which is a true statement about 3-space.</p> <p>Now that we have this dictionary set up, let's think about what parallel lines in the picture plane are. As the image below shows, parallel lines in the picture plane correspond to two planes in $\mathbb{R}^3$. (More precisely, lines $f$ and $g$ are "parallel" in the blue picture plane, and correspond to the orange and yellow planes through the origin.)</p> <p><a href="https://i.stack.imgur.com/J3BxN.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/J3BxN.png" alt="enter image description here"></a></p> <p>Those planes are not parallel; they intersect, but the intersection is a line <em>that does not pierce the picture plane</em>. (In this illustration, the line of intersection happens to be the $y$-axis.) That line in $\mathbb{R}^3$ corresponds to a point "at infinity".</p> <p>Of course if you choose a different picture plane, then the lines will no longer appear parallel, and their intersection will be plainly visible as a point in the new picture plane. The final image below shows this. 
In that image, the orange and yellow planes are the same ones as before, but the blue picture plane has been moved and reoriented; the images of the two planes are now <em>non-parallel</em> lines.</p> <p><a href="https://i.stack.imgur.com/iRBlv.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/iRBlv.png" alt="enter image description here"></a></p> <p>In summary:</p> <ul> <li>If the 2-dimensional plane is understood as an image of an ambient 3-dimensional space, then a "point" is just the image of a <em>line through the origin</em> and a "line" is just the image of a <em>plane through the origin</em></li> <li>Some "points" (and the "line" containing that point) will have no image in the picture plane, which we describe by saying that they are "at infinity"</li> <li>However being "at infinity" is an artifact of which picture plane we use; changing picture planes changes which points are at infinity</li> <li>Furthermore whether two "lines" are parallel is an artifact of the picture plane we use to view them; lines that seem parallel from one perspective are non-parallel from a different perspective</li> <li>The connection between the last two bullets is that "lines" that appear parallel in one picture plane <em>do</em> intersect, but the point of intersection is a point "at infinity" <em>relative to the picture plane</em>; change perspectives and the point is no longer at infinity, and the lines are no longer parallel.</li> <li>All of the above corresponds more-or-less exactly with how the human eye works, which is why perspective art can do such a good job at fooling the eye and creating the illusion of depth on a flat surface.</li> </ul> Answer 169: <p>Lines are parallel if they lie in the same plane and they don't intersect. 
In other geometries, there may be no parallel lines, lines may not have a common point but they may have a common limit point at infinity, or they may just not intersect.</p> <p>You might be thinking about Projective Geometry where a "point at infinity" is added to every family of parallel lines and the set of all points at infinity is called the "line at infinity".</p> <p>Addendum</p> <hr> <p>Typically, in formal geometry, points, lines, and planes are not defined. But postulates define their "baseline" behavior. The postulate that every geometry seems to agree on is the one that states </p> <p>L1: Given two distinct points in a plane, there is exactly one line in that plane that contains them.</p> <p>The "dual" of that postulate is</p> <p>DL1: Given two distinct lines in a plane, there is exactly one point in that plane that belongs to both lines.</p> <p>Since Euclidean geometry contains parallel lines, DL2 is false. But Projective geometry accepts DL2 as a postulate. The big question is, "Does there exists a geometry that satisfies the postulates of Projective geometry?" Yes there does.</p> <p>The creation of such a geometry is really quite clever. You start with a Euclidean plane and you add points to it as follows. Pick any line in the plane. To that line and all lines parallel to it, you add one extra point, a point at infinity. This is a set thing. We are treating a Euclidean line, $l$, as a set of points and we are adding a non Euclidean point $p$ to that set, $l' = l \cup \{p\}$.</p> <p>Adding this point to those lines means that those lines are no longer parallel. Define the set of all points at infinity to be the line at infinity. The Projective plane is the Euclidian plane with all points at infinity and the line at infinity added to it. This particular Projective plane can be proved to satisfy DL2.</p> <p>In the other direction, pick any line, $l$, in the projective plane, $\mathbb P^2$, and remove it. 
What you end up with is the Euclidean plane, $\mathbb E^2 \cong \mathbb P^2 - \{l\}$. Some lines will still intersect. Those that intersected at a point on the line that was removed will now be parallel.</p> Answer 170: <p>I would say if we stand next to each other and both walk due north, we eventually bump into each other near the north pole</p> <p>Note, as several have commented, your premise is slightly off. What you can convince your friend of is that any two 'straight lines' i.e. geodesics cross on a sphere (not parallel lines), whereas on a plane, for any line there is a special family of lines that never touch it.</p> Answer 171: <p>As I said in comments, you can't convince someone with something which is not true.</p> <p>IMO if someone knows basic geometry then he he must be aware of the fact that :</p> <h1>Parallel lines do not meet at a point.</h1> <p><a href="https://en.wikipedia.org/wiki/Parallel_(geometry)" rel="nofollow noreferrer">This</a> section of Wikipedia worth a lot here:</p> <blockquote> <p>In geometry, parallel lines are lines in a plane which do not meet; that is, two lines in a plane that do not intersect or touch each other at any point are said to be parallel. By extension, a line and a plane, or two planes, in three-dimensional Euclidean space that do not share a point are said to be parallel.</p> </blockquote> <p>Though if you are convincing/arguing with someone naive in field of geometry that Parallel line do not meet, ask him why people call him by his/her name and not by <a href="https://math.stackexchange.com/users/401699/justin-bieber">Justin Bieber</a>/<a href="https://math.stackexchange.com/users/143816/selena-gomez">Selena Gomez</a>? The expected answer would be that <em>Because it is my name</em> and that's it. You got him/her. 
Actually parallel lines cannot meet at a point or intersect because they are defined that way, if two lines will intersect then they will not remain parallel lines.</p> Answer 172: <p>When I was 16 or so I was bored trying to solve my math homework, so I played with a magnifying glass, and I noticed something interesting:</p> <p>If you look at the grid paper with a magnifying glass the lines remain parallel (this is evident when they are more or less on the "top of the glass"), but all the lines meet at the edge of the glass.</p> <p>(I went to my older brother, who was an engineering freshman at the time, and I told him that parallel lines <em>can</em> meet; but he replied that they cannot because it's an axiom. Some years later I learned that there is a thing called non-Euclidean geometry.)</p> <p>&nbsp;</p> <p>Note, by the way, this might require you to think about the glass as infinite.</p> Answer 173: <p>In his comment @JackyChong has identified the preliminary problem. The definition of "parallel" is clear: lines that don't meet, so there are no parallel lines that meet. The real question is the definition of "line".</p> <p>Geodesic is the most natural for geometry on the sphere. Then in this geometry there are no parallel lines.</p> <p>For projective geometry one definition is to add a "point at infinity" on each line, and then make the added points into a "line at infinity". With these extra points and lines there are no parallel lines. Two that are parallel in the Euclidean plane share their point at infinity. The railroad track analogy helps here.</p> <p>In hyperbolic geometry there are multiple lines parallel to a given line through a point not on that line. 
If you get that far with the "someone who knows no mathematics" you can show him or her the <a href="https://en.wikipedia.org/wiki/Poincar%C3%A9_disk_model" rel="nofollow noreferrer">Poincare model</a>.</p> Answer 174: <p>I would use two meridians on Earth that touch at the poles.</p> Answer 175: <p>This is a matter of point of view :)</p> <p><a href="https://i.stack.imgur.com/lGePi.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/lGePi.png" alt="enter image description here"></a></p> Answer 176: <p>You can actually speak of trains. Rail ways are pretty suggestive.</p> Answer 177: <p>You cannot convince any mathematician let alone your friend as it is false. They can however grapple with the idea that for purely theoretical calculations we assume that 2 parallel lines can meet at a point in infinity. Should you wish to travel to infinity to prove this point then please send a postcard when you get there.</p> Answer 178: <p>Someone who knows basic mathematics can understand it easily since according to the definition of parallel lines<br> <strong>Parallel lines:</strong> The lines those distance is constant are called <a href="https://www.google.co.in/#q=parallel+lines+definition" rel="nofollow noreferrer"><strong>parallel lines</strong>[see the topmost definition]</a></p> <p>Now, as we know the definition the coinciding lines are also parallel since the distance between them is constant or $0$. Hence, as coinciding lines touch each other implies parallel lines also touch. </p> <p><a href="https://www.youtube.com/watch?v=BKorP55Aqvg" rel="nofollow noreferrer"><strong>See the video here</strong> also</a></p> Answer 179: <p>I would draw two parallel lines on a piece of paper. 
Then I would bring all the edges together into one point and I would show that the parallel lines would touch at that point.</p> Answer 180: <p>A sentence said to me by my mother when I was a kid and that I didn't understand at that time was:</p> <blockquote> <p>Parallel lines are lines that intersect at infinity.</p> </blockquote> <p>This statement and its explanation through art (as suggested by Pedro Tamaroff in the comments) and by showing how many statements and proofs are simplified (if the person knows more math) is a good introduction to the idea that thinking of parallel lines as lines meeting at infinity makes sense and can be useful.</p> <h2>In art</h2> <p>This, as you can also see in Yves Daoust's answer, can be seen whenever you look at a straight road going to the horizon. For example, see <a href="https://commons.wikimedia.org/wiki/File:Straight_Road,_B17_-_geograph.org.uk_-_232427.jpg" rel="nofollow noreferrer">Wikimedia Commons</a> for the source, <a href="https://i.stack.imgur.com/cnNkW.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/cnNkW.jpg" alt="Source: Wikimedia Commons"></a><br> The above image shows why this conception originated in art: it was really useful for representing how we see things. A striking application of this can be seen in the <a href="https://en.wikipedia.org/wiki/Santa_Maria_presso_San_Satiro" rel="nofollow noreferrer">Santa Maria presso San Satiro</a>, where projective geometry was used to emulate an absent space in a church. However, there are many other examples.</p> <h2>In geometry</h2> <p>In usual geometry (i.e. affine and Euclidean geometry), the above definition or statement does not make sense, as parallel lines don't intersect.
This can be the definition in the plane, but it is generally (in higher dimensions) a result derived from the definition of parallel lines as lines with the same direction.</p> <p>However, the reason why we can still make sense of the above statement about intersection at infinity is that affine geometry can be put inside projective geometry. When doing this, the points outside the affine space are called "points at infinity"; parallel lines intersect at them and become the same as intersecting lines, enormously simplifying many statements and proofs by permitting one not to distinguish cases. An example is <a href="https://en.wikipedia.org/wiki/Pappus%27s_hexagon_theorem" rel="nofollow noreferrer">Pappus's hexagon theorem</a>.</p> <h2>In conclusion</h2> <p>In conclusion, don't try to convince or show that parallel lines touch. Just try to explain the usefulness of thinking of parallel lines as lines that intersect at infinity. In the Renaissance, you have many examples of why this is a useful statement from the point of view of representing reality and perspectives well; in mathematics, there are many examples of how this is really useful for simplifying statements and proofs in geometry.</p> Answer 181: <p>In Elliptic geometry the supposition is false.</p> <p>In Euclidean geometry parallel lines "meet" and touch at infinity as their slopes are the same.</p> <p>In flat Hyperbolic geometry parallel lines can also touch, but only at infinity.</p> <p>In the flat Poincare disk model, circle-segment geodesic parallels meet tangentially only at infinitely distant points on the boundary of the "horizon" or boundary circle. The lines can be seen to touch and move on the boundary seen in the Wiki link.</p> <p><a href="http://mathworld.wolfram.com/PoincareHyperbolicDisk.html" rel="nofollow noreferrer"> Poincare Disk_MathWorld </a></p> <p>So also in the half-plane model, semi-circular geodesics meet at a point at $\infty$, on the x-axis.
An animation can also be found elsewhere: the semi-circles touch the x-axis while the semi-circle size changes during the movement.</p> <p><a href="http://math.etsu.edu/multicalc/prealpha/Chap3/Chap3-8/part4.htm" rel="nofollow noreferrer">PoincareHalfPlane</a></p> <p>Both examples are touching parallel lines, touching tangentially/asymptotically at the point at infinity.</p> <p><a href="http://math.etsu.edu/multicalc/prealpha/Chap3/Chap3-8/part4.htm" rel="nofollow noreferrer">http://math.etsu.edu/multicalc/prealpha/Chap3/Chap3-8/part4.htm</a></p> Answer 182: <p>One way to explain this is to think of the Euclidean plane as a picture of (part of) an ambient 3-dimensional world. This corresponds to how our vision actually works: light from the three-dimensional world is projected through the lens of an eye onto the surface at the back of our eye.</p> <p>In more detail: Place a plane anywhere in three-dimensional space (as long as it doesn't pass through the origin). Call this the "picture plane". Now any <em>line through the origin</em> in $\mathbb{R}^3$ pierces the picture plane in exactly one point (as long as the line is not parallel to the picture plane). Any <em>plane through the origin</em> pierces the picture plane in exactly one line (again, as long as the plane is not parallel to the picture plane).
See below (in which the picture plane is shown in blue).</p> <p><a href="https://i.stack.imgur.com/Dod2D.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Dod2D.png" alt="enter image description here"></a> <a href="https://i.stack.imgur.com/KRnSK.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/KRnSK.png" alt="enter image description here"></a></p> <p>Now we have a dictionary: $$\textrm{point in picture plane} \longleftrightarrow \textrm{line through origin in 3-space}$$ $$\textrm{line in picture plane} \longleftrightarrow \textrm{plane through origin in 3-space}$$</p> <p>This dictionary is not quite 1-to-1, because there are lines (and one plane) through the origin in $\mathbb{R}^3$ that are parallel to the picture plane, and therefore do not correspond to any points or to a line in the picture plane. These lines and this plane can be considered points and a line "at infinity", but really all this means is that they do not have images in the picture plane.</p> <p>You can experience this visually by holding your finger at arm's length in front of you and a few inches above your eye. As you draw your arm closer (while maintaining its height) the location of your fingertip seems to get "higher" or "farther away" in your field of view, as your eyes have to strain more and more to see it. Eventually, when the fingertip is directly above the eye, the finger "disappears". It's not really gone, of course, but the "line of sight" from the fingertip to the lens of your eye is parallel to the retina, so that the fingertip cannot be seen. Perceptually, the fingertip has receded "infinitely far away" in the picture plane -- but it is actually just a couple of inches away in 3-space.</p> <p><strong>However</strong>, we all know what to do in that circumstance: turn your head! This corresponds to choosing a new picture plane with a different orientation.
Now the point that was "at infinity" snaps into view and is revealed as an ordinary point in the new picture plane (and at the same time points that <em>were previously</em> in view have now disappeared).</p> <p>Note that a statement like</p> <blockquote> <p>given any two distinct points in the picture plane, there exists a unique line in the picture plane through the given points</p> </blockquote> <p>which describes a true fact about the geometry of the picture plane corresponds, via this dictionary, to</p> <blockquote> <p>given any two distinct lines through the origin in 3-space, there exists a unique plane through the origin containing both lines</p> </blockquote> <p>which is a true statement about 3-space.</p> <p>Now that we have this dictionary set up, let's think about what parallel lines in the picture plane are. As the image below shows, parallel lines in the picture plane correspond to two planes in $\mathbb{R}^3$. (More precisely, lines $f$ and $g$ are "parallel" in the blue picture plane, and correspond to the orange and yellow planes through the origin.)</p> <p><a href="https://i.stack.imgur.com/J3BxN.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/J3BxN.png" alt="enter image description here"></a></p> <p>Those planes are not parallel; they intersect, but the intersection is a line <em>that does not pierce the picture plane</em>. (In this illustration, the line of intersection happens to be the $y$-axis.) That line in $\mathbb{R}^3$ corresponds to a point "at infinity".</p> <p>Of course if you choose a different picture plane, then the lines will no longer appear parallel, and their intersection will be plainly visible as a point in the new picture plane. The final image below shows this.
In that image, the orange and yellow planes are the same ones as before, but the blue picture plane has been moved and reoriented; the images of the two planes are now <em>non-parallel</em> lines.</p> <p><a href="https://i.stack.imgur.com/iRBlv.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/iRBlv.png" alt="enter image description here"></a></p> <p>In summary:</p> <ul> <li>If the 2-dimensional plane is understood as an image of an ambient 3-dimensional space, then a "point" is just the image of a <em>line through the origin</em> and a "line" is just the image of a <em>plane through the origin</em></li> <li>Some "points" (and the "line" containing that point) will have no image in the picture plane, which we describe by saying that they are "at infinity"</li> <li>However being "at infinity" is an artifact of which picture plane we use; changing picture planes changes which points are at infinity</li> <li>Furthermore whether two "lines" are parallel is an artifact of the picture plane we use to view them; lines that seem parallel from one perspective are non-parallel from a different perspective</li> <li>The connection between the last two bullets is that "lines" that appear parallel in one picture plane <em>do</em> intersect, but the point of intersection is a point "at infinity" <em>relative to the picture plane</em>; change perspectives and the point is no longer at infinity, and the lines are no longer parallel.</li> <li>All of the above corresponds more-or-less exactly with how the human eye works, which is why perspective art can do such a good job at fooling the eye and creating the illusion of depth on a flat surface.</li> </ul> Answer 183: <p>Lines are parallel if they lie in the same plane and they don't intersect. 
In other geometries, there may be no parallel lines, lines may not have a common point but they may have a common limit point at infinity, or they may just not intersect.</p> <p>You might be thinking about Projective Geometry, where a "point at infinity" is added to every family of parallel lines and the set of all points at infinity is called the "line at infinity".</p> <p>Addendum</p> <hr> <p>Typically, in formal geometry, points, lines, and planes are not defined. But postulates define their "baseline" behavior. The postulate that every geometry seems to agree on is the one that states </p> <p>L1: Given two distinct points in a plane, there is exactly one line in that plane that contains them.</p> <p>The "dual" of that postulate is</p> <p>DL1: Given two distinct lines in a plane, there is exactly one point in that plane that belongs to both lines.</p> <p>Since Euclidean geometry contains parallel lines, DL1 is false there. But Projective geometry accepts DL1 as a postulate. The big question is, "Does there exist a geometry that satisfies the postulates of Projective geometry?" Yes there does.</p> <p>The creation of such a geometry is really quite clever. You start with a Euclidean plane and you add points to it as follows. Pick any line in the plane. To that line and all lines parallel to it, you add one extra point, a point at infinity. This is a set operation: we are treating a Euclidean line, $l$, as a set of points and we are adding a non-Euclidean point $p$ to that set, $l' = l \cup \{p\}$.</p> <p>Adding this point to those lines means that those lines are no longer parallel. Define the set of all points at infinity to be the line at infinity. The Projective plane is the Euclidean plane with all points at infinity and the line at infinity added to it. This particular Projective plane can be proved to satisfy DL1.</p> <p>In the other direction, pick any line, $l$, in the projective plane, $\mathbb P^2$, and remove it.
What you end up with is the Euclidean plane, $\mathbb E^2 \cong \mathbb P^2 - \{l\}$. Some lines will still intersect. Those that intersected at a point on the line that was removed will now be parallel.</p> Answer 184: <p>I would say that if we stand next to each other and both walk due north, we eventually bump into each other near the north pole.</p> <p>Note, as several have commented, your premise is slightly off. What you can convince your friend of is that any two 'straight lines', i.e. geodesics, cross on a sphere (not parallel lines), whereas on a plane, for any line there is a special family of lines that never touch it.</p> Answer 185: <p>As I said in comments, you can't convince someone with something which is not true.</p> <p>IMO if someone knows basic geometry then he must be aware of the fact that:</p> <h1>Parallel lines do not meet at a point.</h1> <p><a href="https://en.wikipedia.org/wiki/Parallel_(geometry)" rel="nofollow noreferrer">This</a> section of Wikipedia is worth a lot here:</p> <blockquote> <p>In geometry, parallel lines are lines in a plane which do not meet; that is, two lines in a plane that do not intersect or touch each other at any point are said to be parallel. By extension, a line and a plane, or two planes, in three-dimensional Euclidean space that do not share a point are said to be parallel.</p> </blockquote> <p>Though if you are convincing/arguing with someone naive in the field of geometry that parallel lines do not meet, ask him why people call him by his/her name and not by <a href="https://math.stackexchange.com/users/401699/justin-bieber">Justin Bieber</a>/<a href="https://math.stackexchange.com/users/143816/selena-gomez">Selena Gomez</a>. The expected answer would be <em>Because it is my name</em>, and that's it. You got him/her.
Actually, parallel lines cannot meet at a point or intersect because they are defined that way; if two lines intersect, they do not remain parallel.</p>
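The projective-plane constructions described in the answers above (points at infinity, a line-through-the-origin dictionary) can be tried out numerically with homogeneous coordinates. The following is a small illustrative sketch, not taken from any of the answers: it uses the standard convention that a line $ax+by+c=0$ is the triple $(a,b,c)$, a point $(x,y)$ is $(x,y,1)$, and two lines meet at the cross product of their triples; the function name is my own.

```python
# A line a*x + b*y + c = 0 is represented by the homogeneous triple (a, b, c);
# a point (x, y) by (x, y, 1).  Two lines meet at the cross product of their
# triples; a last coordinate of 0 marks a "point at infinity".

def cross(u, v):
    # Standard 3D cross product of homogeneous triples.
    return (u[1]*v[2] - u[2]*v[1],
            u[2]*v[0] - u[0]*v[2],
            u[0]*v[1] - u[1]*v[0])

# The parallel horizontal lines y = 0 and y = 1:
l1 = (0, 1, 0)    # y = 0
l2 = (0, 1, -1)   # y = 1
p = cross(l1, l2)
print(p)          # (-1, 0, 0): last coordinate 0, a point at infinity in the x-direction

# Two non-parallel lines, x - y = 0 and y = 0, meet at an ordinary point:
m = cross((1, -1, 0), l1)
print(m)          # (0, 0, 1) ~ the origin (0, 0)
```

So in this model the parallel lines really do "meet", but their common point has no image in the ordinary plane, exactly as the picture-plane answer describes.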
Binomial expansion up to $x^{2}$
2,089,691
7
https://math.stackexchange.com/questions/2089691/binomial-expansion-up-to-x2
<p>The question is to expand $(1+x+2ax^{2})^{b}$ in ascending powers of $x$ up to the term in $x^{2}$.</p> <p>I did the following:</p> <p>$\bigg(1+\big(x+2ax^{2}\big)\bigg)^{b}=\binom{b}{0}(1)^{b}+\binom{b}{1}(1)^{b-1}(x+2ax^{2})+\binom{b}{2}(1)^{b-2}(x+2ax^{2})^{2}+\dots$</p> <p>$\hspace{37.5 mm}=(1)(1)+(b)(1)(x+2ax^{2})+\binom{b}{2}(1)(x^{2}+4ax^{3}+4a^{2}x^{4})+\dots$</p> <p>$\hspace{37.5 mm}=1+bx+2abx^{2}+\binom{b}{2}x^{2}+\dots$</p> <p>$\hspace{37.5 mm}=1+bx+\big(2ab+\binom{b}{2}\big)x^{2}+\dots$</p> <p>But the answer in my book has $2a$ not $2ab$.</p> <p>Where did I make a mistake?</p>
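The disputed coefficient can also be checked symbolically. A quick sketch (this assumes SymPy is available; it is not part of the original question) computes the Taylor coefficients of $(1+x+2ax^2)^b$ about $x=0$ directly from derivatives:

```python
import sympy as sp

x, a, b = sp.symbols('x a b')
f = (1 + x + 2*a*x**2)**b

# k-th Taylor coefficient about x = 0 is f^(k)(0) / k!
coeffs = [sp.simplify(sp.diff(f, x, k).subs(x, 0) / sp.factorial(k))
          for k in range(3)]

# Hand expansion in the question: 1, b, 2ab + C(b,2) = 2ab + b(b-1)/2
expected = [sp.Integer(1), b, 2*a*b + b*(b - 1)/2]
for c, e in zip(coeffs, expected):
    assert sp.simplify(c - e) == 0

print(coeffs[2])  # the x^2 coefficient; note it contains the factor b
```

The $x^2$ coefficient agrees with the question's $2ab+\binom{b}{2}$, supporting the conclusion that the $b$ belongs there and the book's $2a$ is a misprint.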
[ { "answer_id": 2089700, "score": 4, "text": "<p>I think that\nyou are right\nand the book is wrong.</p>\n\n<p>For one thing,\nthe coefficients are\n<em>always</em> going to have\n$b$ as part of them.</p>\n\n<p>Discuss this with your instructor.\nThe other students may\nalso be worried.</p>\n" } ]
Question: Binomial expansion up to $x^{2}$ Answer 1: <p>I think that you are right and the book is wrong.</p> <p>For one thing, the coefficients are <em>always</em> going to have $b$ as part of them.</p> <p>Discuss this with your instructor. The other students may also be worried.</p>
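The answer's claim that $b$ appears in every coefficient can be illustrated with a short expansion. This is a sketch only: the question's actual expression is not visible in this excerpt, so the form $(a + bx)^n$ is an assumption.

```latex
% Sketch: binomial expansion of (a + bx)^n up to the x^2 term.
% The form (a + bx)^n is assumed; the original question's exact
% expression is not shown in this excerpt.
(a + bx)^n = a^n + \binom{n}{1} a^{n-1} b\,x + \binom{n}{2} a^{n-2} b^2 x^2 + O(x^3)
```

Every coefficient past the constant term carries a power of $b$, which is the point the answer is making.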